Waves, Sound and Light: Light Waves

Light Waves: Problem Set

Unless told otherwise, use 2.998 x 10^8 m/s as the value of the speed of light.

In 1957, the U.S. Naval Research Laboratory conducted the first-ever radar measurements of the distance from the Earth to the moon. By reflecting light from an Earth-based source off the moon and measuring the back-and-forth time of transit, scientists determined that the moon is approximately 3.84 x 10^8 m from the Earth. Determine the time it takes light to travel from Earth to the moon and back.

The distance from the Earth to the sun is 1.496 x 10^11 m. A solar flare occurs on the sun's surface on Wednesday morning at 10:24 AM. At what time and on what day will electromagnetic radiation from the flare reach the Earth?

In the 1600s, Ole Roemer became one of the first scientists to make a measurement of the speed of light. Roemer observed the orbits of Jupiter's nearest moon and recognized that its orbital period appeared to be approximately 22 minutes longer when measured from Earth at its furthest point from Jupiter compared to its closest point. Roemer reasoned that the difference was due to the fact that it took light from Jupiter longer to travel the extra distance when Earth's position was on the opposite side of the Sun from Jupiter. The distance d2 is 2.98 x 10^11 m greater than the distance d1. Determine Roemer's estimate of the speed of light in the 1600s.

The German-born American physicist Albert Michelson devoted much of his life to the accurate measurement of the speed of light. In 1923, he positioned mirrors and detectors on two different California mountains nearly 35 km (nearly 22 miles) apart. Using a sophisticated timing method involving rotating octagonal mirrors, Michelson determined the speed of light to be 299,774 km/s. At this speed, estimate the time it takes light to travel the 35 km between mountains.

Mr. H catches the news on 780 AM, which broadcasts at 780 kHz.
Determine the wavelength of these radio waves.

Determine the frequency of ... (GIVEN: 1 m = 10^9 nm)
a. ... red visible light (λ = 650 nm)
b. ... violet visible light (λ = 420 nm)
a. 4.6 x 10^14 Hz b. 7.1 x 10^14 Hz

Determine the wavelength of the microwave radiation emitted ...
a. ... by a microwave oven (f = 2.45 x 10^9 Hz).
b. ... by a cordless phone (f = 5.8 x 10^9 Hz).
a. 0.122 m or 12.2 cm b. 0.052 m or 5.2 cm

Determine the wavelength of the infrared carrier wave transmitted by a television remote control at 38 kHz.

a. Determine the frequency of electromagnetic radiation which would have a wavelength of 1.0 mile (1.6 km). b. What part of the electromagnetic spectrum does this fall within?
a. 1.9 x 10^5 Hz b. Radio wave spectrum

Like light waves, water waves emerging from two sources interfere in the space surrounding the sources to produce a pattern of nodes and antinodes lying along lines. The diagram at the right represents the interference pattern created by two water waves. The waves were created by two objects bobbing up and down in phase at the same frequency. Point P on the pattern is a distance of 34.0 cm from S1 and 23.8 cm from S2. Determine the wavelength (in cm) of the water waves.

The diagram at the right represents the interference pattern created by two water waves. The waves were created by two objects bobbing up and down in phase at the same frequency. Point P on the pattern is a distance of 36.9 cm from S1 and 61.5 cm from S2. Determine the wavelength (in cm) of the water waves.

Water waves with a wavelength of 7.8 cm are created in a ripple tank by two in-phase sources bobbing up and down at the same frequency. The waves form an interference pattern in the space surrounding the sources. A point on the fourth nodal line is a distance of 58.2 cm from the nearest source. Determine the distance from this same point to the furthest source.

Mr.
H takes his class to the gymnasium to investigate two-point-source interference patterns produced by sound waves from two sound sources. The pure-tone output from a frequency generator is split and fed to two audio speakers positioned about 1 meter apart. The sound from the two speakers travels through the gymnasium and interferes constructively and destructively to create a pattern of nodes and antinodes. Mr. H directs the class to stand with one ear facing the speakers and the other ear covered, and to walk slowly across the gymnasium, observing positions of relatively soft and loud sounds in alternating fashion. Once initial observations are made, Mr. H asks all the male students to stand at nodal positions and all the female students to stand at antinodal positions. Once done, Mr. H takes a picture of their positions and then makes several measurements of the distances between students and from some selected students to the speakers. Some sample data are shown below. Complete the table, determining the wavelength of the sound waves based on each student's measurements. [Table of distances to Speaker 1 (m) and Speaker 2 (m) not reproduced.]

Two audio speakers have been arranged in a large room so as to produce a sound interference pattern. Miguel starts at a position on the central antinodal line and begins to slowly walk parallel to the imaginary line connecting the speakers. Miguel stops at the first position of minimum loudness. At this position, he is a distance of 17.9 m from the nearest speaker. Sound waves travel through the room at 345 m/s and the speakers are sounding out a frequency of 244 Hz.
a. Determine the wavelength of the sound waves.
b. Determine the distance from Miguel to the furthest speaker.
a. 1.41 m b. 18.6 m

Mr. H's period 7 physics class is attempting to duplicate Thomas Young's experiment, in which they use a two-point-source light interference pattern to measure the wavelength of light.
They shine red laser light through a slide containing a double slit; the slit spacing is 0.125 mm. The interference pattern created by the light which passes through the slits is projected on a screen a distance of 10.72 m away. Justin and Shirley measure the distance between the 3rd antinodal bright spots on opposite sides of the pattern to be 33.9 cm. Based on these measurements, what is the wavelength of the red laser light?

Two narrow slits in a slide are separated by a distance of 45.0 micrometers. Light from a green laser (λ = 532 nm) is passed through the slits and the interference pattern is projected onto a screen 9.85 m away. Determine the distance between the central bright spot and the fourth bright spot. (GIVEN: 1 m = 10^6 µm)

Jackson and Melanie are doing the Young's Experiment Lab using a red laser pen and a slide with two slits spaced 25 micrometers apart. They project the interference pattern onto a whiteboard located 2.35 m from the slits. They measure the 3rd bright bands on opposite sides of the pattern to be separated by 37 cm. Based on these measurements, what is the wavelength of the red laser light (in nanometers)? (GIVEN: 1 m = 10^6 µm, 1 m = 10^9 nm)

Maria and Jason are doing the same lab as Jackson and Melanie (from the previous problem). Maria and Jason determine the distance between the central bright spot and the 4th bright spot to be 29 cm. The distance from their slide to the whiteboard (where the interference pattern is projected) is 2.76 m. The slits in their slide are also spaced 25 micrometers apart. Based on Maria and Jason's measurements, what is the wavelength of the red laser light (in nanometers)? (GIVEN: 1 m = 10^6 µm, 1 m = 10^9 nm)

Jill is helping her younger brother Nathan set up an exhibit for a Science Fair. Nathan's exhibit pertains to the wave-particle nature of light waves.
He wishes to demonstrate the wave nature of light by displaying the two-point interference pattern of red laser light (λ = 648 nm). Nathan has purchased a double-slit slide from a science warehouse which has slits separated by a distance of 0.125 mm. Nathan has asked Jill to determine the slide-to-screen distance which will result in a 2.0 cm separation between adjacent bright spots. What distance will result in this antinodal spacing?

The Bluebird Library has been celebrating the lives of famous scientists. Each month, a new scientist is selected and displays are created to feature the discoveries and contributions of the scientist. April's scientist of the month is Thomas Young; the library wishes to develop a Young's experiment display. The library has purchased a blue laser which emits light with a wavelength of 473 nm. They have also purchased a slide with a double slit; the slit spacing is 44 µm. The library's current plans are to project the interference pattern onto a white board which is 3 feet wide and located 28 feet from the slits. What is the maximum number of bright spots which will appear on the board at these distances, and what is the spacing distance between each bright spot? Assume that each bright spot is bright enough to see. (GIVEN: 1 m = 3.28 ft, 1 m = 10^6 µm, 1 m = 10^9 nm)

Monochromatic yellow light (λ = 594 nm) passes through two slits with a slit spacing of 0.125 mm and forms an interference pattern on a screen that is positioned 14.5 m away. Determine the distance between the fifth bright spots on opposite sides of the central bright spot.

In a museum exhibit, monochromatic red light (λ = 648 nm) passes through a double slit and is projected onto a screen located 16.8 m from the slits. A metric ruler located above the interference pattern clearly shows the fifth dark fringe to be located 42.0 cm from the central bright spot. Determine the slit separation distance.
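The double-slit problems above all turn on the standard small-angle relation y_m = m·λ·L/d for the distance from the central bright spot to the m-th bright spot. A minimal sketch (the function names are mine, and the relation is the usual textbook approximation rather than anything specific to these problems):

```python
# Young's double-slit, small-angle approximation:
#   y_m = m * wavelength * L / d
# y_m: distance (m) from the central bright spot to the m-th bright spot,
# L: slit-to-screen distance (m), d: slit spacing (m).

def bright_spot_distance(m, wavelength, L, d):
    """Distance from the central antinode to the m-th bright spot."""
    return m * wavelength * L / d

def wavelength_from_pattern(m, y, L, d):
    """Invert the relation to recover wavelength from a measured spot position."""
    return y * d / (m * L)

# Green-laser problem above: λ = 532 nm, d = 45.0 µm, L = 9.85 m, 4th bright spot.
y4 = bright_spot_distance(4, 532e-9, 9.85, 45.0e-6)
print(round(y4, 3), "m")
```

The inverse form is the one Justin and Shirley (and Jackson and Melanie) need: measure y for a known m, L and d, and solve for λ.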
A radio station has two antennae which are used to broadcast its 582 kHz radio wave signal. The Robinson family, who live in the Cedar Ridge subdivision, have very poor reception when tuned to this signal due to the destructive interference of radio waves from the two antennae. The Robinson home is located a distance of 13.78 km from the nearest antenna. What is the likely minimum distance from the Robinsons' home to the furthest antenna?

Always thinking ahead, Mr. H is investigating possible retirement communities in Flagstaff, Arizona. His favorite radio station in the Flagstaff area is KFIZ, broadcasting at 1420 kHz. One of the communities Mr. H is investigating is nestled in the cliffs, directly facing the KFIZ broadcasting station located several miles away. While driving through the neighborhood, Mr. H observes the KFIZ signal fading in and out. Mr. H reasons that the cause of the poor reception is that radio waves coming directly from the station are destructively interfering with waves which reflect off the cliffs from behind the retirement community. Knowing he must consider all factors in the purchase of a home, Mr. H decides to calculate all the possible distances from the cliffs for which destructive interference occurs. By doing so, he will be able to rule out the purchase of several lots in the neighborhood. Determine the six nearest distances from the cliffs that result in destructive interference of the 1420 kHz signal. (Assume that the reflected waves do not undergo a phase change upon reflection off the cliffs.)

Noah Formula lives near the airport and frequently notices poor AM radio reception occurring as planes use the approach path which passes over his home. Having just finished the unit on light wave behavior, Noah now understands that the reception problem occurs because of radio wave signals reflecting off the planes and destructively interfering with waves which approach his antenna directly from the station.
Noah's favorite station – WFIZ – broadcasts at 1240 kHz and is located several miles from his home. Determine the five lowest heights above his home for which reflection off of planes will lead to destructive interference of this 1240 kHz signal. (Assume that the reflected waves do not undergo a phase change upon reflection off a plane.)
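For the last two problems, the reflected wave travels an extra distance 2d (d being the distance from listener to reflector), and with no phase change on reflection, destructive interference requires 2d = (m + 1/2)λ. A sketch of Mr. H's cliff calculation under that assumption (the function name is mine):

```python
# Destructive interference between a direct radio wave and its reflection
# from a surface behind the listener: extra path length = 2*d.
# With no phase change on reflection, cancellation occurs when
#   2*d = (m + 1/2) * wavelength,  i.e.  d = (2*m + 1) * wavelength / 4.

C = 2.998e8  # speed of light, m/s

def destructive_distances(freq_hz, n):
    """First n distances (m) from the reflector giving destructive interference."""
    wavelength = C / freq_hz
    return [(2 * m + 1) * wavelength / 4 for m in range(n)]

# Mr. H's 1420 kHz problem: six nearest distances from the cliffs.
for d in destructive_distances(1420e3, 6):
    print(round(d, 1), "m")
```

The same function with freq_hz = 1240e3 and n = 5 gives the candidate plane heights in Noah's problem.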
Plasmonics: When silver is better than gold

Published online 20 June 2012

Silver nanostructures exhibit a resonance feature that is useful for a multitude of sensing applications.

[Figure: Scheme (top) of the dual-disk ring structure and scanning electron microscopy images of fabricated devices (bottom). Reproduced with permission, © 2011 Optical Society of America]

Certain metallic nanostructures are known to exhibit a distinctly asymmetric spectral feature. This characteristic feature, known as a Fano resonance, has attracted considerable attention due to its potential in sensing applications. A Fano resonance is caused by the interference of two eigenmodes (modes of electron excitation), so its shape and wavelength are sensitive to slight variations in the environment. A small change in the refractive index, for example, could lead to a big change in the Fano resonance.

So far, most of the metallic structures used to generate Fano resonances have been made of gold. The wavelength of such Fano resonances is typically in the infrared region, which is not ideal for practical sensing applications. Jing Bo Zhang and co-workers at the A*STAR Data Storage Institute have now proposed a silver dual-disk ring nanostructure for generating a Fano resonance in the visible range [1]. The nanostructure comprises two silver disks, measuring tens of nanometers wide, placed inside a silver ring.

The researchers calculated the optical modes of the structures using the finite-difference time-domain (FDTD) method. They found that the coupling between one of the dual-disk eigenmodes and one of the ring eigenmodes produces a Fano resonance just below 700 nanometers in wavelength, well within the visible spectrum. The shape and wavelength of the Fano resonance can be finely tuned by varying the geometric parameters that define the dual-disk ring structure.

The key capability of a biomolecule sensor is its reaction to a change in its surroundings.
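The asymmetric line shape described here has a standard textbook form, σ(ε) = (q + ε)² / (1 + ε²) with ε = 2(E − E₀)/Γ, where q sets the asymmetry. A small sketch of why a shift in the resonance position is easy to detect; the formula is the generic Fano profile and all numbers are illustrative, not values from the paper:

```python
# Generic Fano lineshape (textbook form, not taken from the paper):
#   sigma(eps) = (q + eps)**2 / (1 + eps**2),  eps = 2*(E - E0)/Gamma
# The profile dips to exactly zero at eps = -q and peaks at eps = 1/q,
# so even a small shift of E0 (e.g. from a refractive-index change caused
# by adsorbed biomolecules) moves a very sharp spectral feature.

def fano(E, E0, gamma, q):
    eps = 2.0 * (E - E0) / gamma
    return (q + eps) ** 2 / (1 + eps ** 2)

# Illustrative resonance near 700 nm, width 30 nm, asymmetry q = 1.5:
profile = [fano(E, 700.0, 30.0, 1.5) for E in range(600, 801, 5)]
```

The steep edge between the zero and the peak is what makes Fano resonances attractive for sensing: a given red-shift produces a much larger intensity change there than on a symmetric (Lorentzian) line.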
The calculations showed that increasing the refractive index of the environment strongly red-shifts the Fano resonance. This simulates a case in which a thin coat of a dielectric material, such as a layer of specific biomolecules, covers the nanostructure.

The calculations were promising but had to be verified experimentally. The researchers used electron beam lithography and corresponding nanoprocessing techniques to fabricate silver dual-disk rings on quartz, and indeed observed a Fano resonance in the visible light range. Observation of the Fano resonance and its sensitivity to environmental changes in the visible range is an important result for sensing applications.

The researchers aim to improve the design of the nanostructure further. "We have already determined and fabricated the optimum geometry of dual-disk ring structures for biosensing," says Zhang. "Next we are going to functionalize the surface of the structure chemically to examine and improve the sensing power experimentally."

The A*STAR-affiliated researchers contributing to this research are from the Data Storage Institute.

1. Niu, L., Zhang, J. B., Fu, Y. H., Kulkarni, S. & Luk'yanchuk, B. Fano resonance in dual-disk ring plasmonic nanostructures. Optics Express 19, 22974–22981 (2011).
The universe doesn't do medium. In terms of what turns heads, you're either very, very big (think galactic clusters) or very, very small (think neutrinos and bosons). That's long been assumed to be the rule for black holes too. For 30 years, astronomers have been looking for evidence of a theorized class of black hole that would be sort of a cosmic middle child, falling somewhere between the well-established smaller ones, which are "only" 30 times the mass of our sun, and the supermassive types that are the equivalent of millions of solar masses.

There's more than just a taste for cosmic tidiness behind the hunt. These so-called intermediate-mass black holes could provide an important link in the life cycle of all black holes, suggesting that the mini and jumbo models are not two completely different species but rather members of a single species at different stages of maturation. Now there is evidence that medium-size black holes might indeed exist, courtesy of an X-ray blast from a mysterious body 290 million light-years from Earth.

It was three years ago that astrophysicists discovered the object they straightforwardly dubbed Hyper-Luminous X-ray source-1 (HLX-1), so named because it emits 260 million times the X-ray brightness of our sun. But the nature of the object was a head-scratcher, mostly because it was invisible. It couldn't be a foreground star or a background galaxy. That meant it was probably, though not definitely, a black hole, since the gases and other material being pulled into the body would produce X-rays as a byproduct. The intensity of the emissions put the black hole's size at 500 times the sun's mass, which would place it in the long-sought intermediate range. But while potent X-rays are characteristic of black holes, they're only half of a two-part signature.
In the vicinity of other black holes, astronomers witness a violent reaction to so much incoming gas: the region belches plasma jets, visible from Earth in the form of radio-wave emissions that erupt a few hours or days afterward. Now, in a paper published in the July 6 issue of Science, Natalie Webb of the Université de Toulouse in France has announced that she and her research team have detected those radio emissions at HLX-1 too, strongly suggesting that the object is indeed a not-too-big, not-too-small black hole. "When my group initially found HLX-1 in 2009 using a simple approach, I was extremely skeptical," Webb says. "However, we have observed it in all different wavelengths, and so far it is the first intermediate-mass black-hole candidate that has stood up to so many tests."

The most elegant part of the new study is how the researchers used the mere presence of the plasma jets to calculate the black hole's size far more precisely than with the X-rays alone. X-ray intensity correlates to the amount of matter falling into the hole, while radio emissions correlate to the strength of the exiting jets, and both appear to have a constant, scaled relationship to the mass of a black hole. If you know that constant (and it's a pretty standard equation in the astrophysicist's toolbox), you can calculate the mass of the body. While the X-ray readings put the black hole at the very low end of the medium range, the plasma jets boosted it higher, from just 500 solar masses to somewhere between 9,000 and 90,000. That's a big range, but for a first discovery, it's not too shabby. Indeed, says Webb, her calculations represent "the most refined estimate of the mass of HLX-1 and indeed any intermediate-mass black hole proposed to date."

The existence of a body in this mass range opens the door to new theories about just what it is that determines a black hole's size. Though all black holes grow by feeding off nearby matter, the source of that matter varies.
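One widely used form of the constant, scaled X-ray/radio/mass relationship described above is the "fundamental plane of black hole activity," approximately log L_R = 0.60·log L_X + 0.78·log M + 7.33 (luminosities in erg/s, mass in solar masses). A sketch of how such a relation is inverted for mass; the coefficients are the approximate literature values of that generic relation, not the ones used in the HLX-1 study, and the example luminosities are hypothetical:

```python
# "Fundamental plane" style relation (approximate, illustrative coefficients):
#   log10(L_radio) = 0.60 * log10(L_xray) + 0.78 * log10(M) + 7.33
# Measuring both luminosities lets you solve for the black hole mass M.

def mass_from_fundamental_plane(log_LR, log_LX):
    """Return log10(black hole mass / solar mass) from radio and X-ray luminosities."""
    return (log_LR - 0.60 * log_LX - 7.33) / 0.78

# Hypothetical example values (not HLX-1 measurements):
log_M = mass_from_fundamental_plane(log_LR=35.6, log_LX=42.0)
print(f"~10^{log_M:.1f} solar masses")
```

This is the shape of the argument in the article: X-rays alone constrain the infalling matter, but adding the radio jets pins down a second axis of the plane and therefore the mass.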
Small, stellar black holes form as the result of the supernova collapse of a single star. Intermediate black holes may form inside older, glistening star swarms known as globular star clusters. The supermassive varieties are located in the high-density, high-velocity center of galaxies. No matter the size, there is a mathematical constant at work here too: studies suggest that black holes usually represent about 0.5% of the mass of the galaxy or stellar environment they inhabit, regardless of size. It's possible that a small black hole swells to middleweight size as it both collides with and consumes other objects within the confines of a globular cluster. Supermassive black holes could later form through mergers of several intermediate black holes. "Confirming that HLX-1 is an intermediate-mass black hole helps to substantiate the argument that supermassive black holes are formed from intermediate-mass black holes," Webb says. "Without observational proof of their existence, the theory was only speculative." Though HLX-1 is getting all the attention at the moment, researchers are investigating other potential medium-size black holes, including objects within globular clusters in the constellation Pegasus and the Andromeda galaxy. Astrophysicists are also focusing on low-mass "dwarf galaxies" that have had very little interaction with other galaxies, hoping to find intermediate-mass black holes hidden somewhere inside like treats in a cereal box. In the meantime, studies of HLX-1 will continue. "I am very excited that we have finally found the observational evidence to substantiate these theoretically proposed objects," Webb says. It seems HLX-1 is one middle child that will not be neglected.
Scandium, Sc, is a hard, silvery transition metallic element, found in Group IIIa of the periodic table.
- Atomic Number: 21
- Relative Atomic Mass: 44.9559
- Melting Point: 1540 degC
- Boiling Point: 2850 degC
- Relative Density: 3.0
Scandium was first discovered by Nilson in 1879 AD. Scandium is a more abundant element in the sun than on Earth. Scandium occurs in euxenite and gadolinite. Scandium is a silvery metal which develops a yellowish or pink cast on exposure to air. Scandium is used in lightweight alloys in aircraft.
Hypertext Copyright (c) 2000 Donal O'Leary. All Rights Reserved.
In 1971, Stephen Cook and Leonid Levin -- a pair of pioneering computer scientists on opposite sides of the Iron Curtain -- independently posed a question that has flummoxed generations of researchers since. It's called P vs NP, and it essentially asks whether any problem whose answer can be quickly verified by a computer can also be quickly solved by a computer. Now a researcher claims to have proved what many suspect -- that P does not equal NP.

To get some idea of what P vs NP is, imagine you're choosing 100 co-workers to occupy a new building from a list of 400. All well and good, except that your boss gives you a list of pairs of co-workers that can't be in the same building. It's a nightmare to generate a list from scratch that satisfies your boss's requirements, but it's very easy to check whether a list offered to you by someone else satisfies the conditions -- you just run down the pairs. P vs NP asks whether it's possible to build a computer capable of generating such a list in a reasonable amount of time, rather than taking a "brute force" approach of checking every combination of workers.

In symbols, P represents the set of problems that are solvable in a practical amount of time, whereas NP represents the set of problems whose answers can be easily checked to see if they're correct. Stanford University's Complexity Zoo refers to NP as "the class of dashed hopes and idle dreams". Working out whether those two classes are the same might not sound particularly troublesome, but it's considered one of the most important problems in the field, with extreme consequences both practical and philosophical if a proof can be found that P does equal NP.
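The "easy to check" half of the co-worker example is worth making concrete: verifying a proposed list against the forbidden pairs takes time proportional to the number of pairs, which is exactly what puts the problem in NP. A minimal sketch (the function and names are mine, purely illustrative):

```python
# Verifying a candidate co-worker list is cheap: one pass over the
# forbidden pairs. Generating a valid list from scratch is the hard part
# (it is essentially the NP-hard independent-set problem).

def verify_assignment(chosen, forbidden_pairs):
    """Return True if no forbidden pair is fully contained in `chosen`."""
    chosen = set(chosen)
    return not any(a in chosen and b in chosen for a, b in forbidden_pairs)

# Tiny illustrative instance (hypothetical names, not from the article):
forbidden = [("Ann", "Bob"), ("Bob", "Cat"), ("Dan", "Eve")]
print(verify_assignment(["Ann", "Cat", "Dan"], forbidden))  # True: no clash
print(verify_assignment(["Ann", "Bob", "Eve"], forbidden))  # False: Ann+Bob clash
```

P vs NP asks whether every problem with a cheap verifier like this one also admits a comparably cheap solver.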
For example, most forms of cryptography rely on NP problems, so the internet's monetary transaction system would need to be replaced; a series of protein structure breakthroughs would spur considerable advances in biology; and most branches of logistics would no longer pose an issue, because the traveling salesman problem is also an NP problem.

The question is one of the Millennium Prize problems, defined by the Clay Mathematics Institute in 2000, each of which carries a cash bounty of $1,000,000. Only one of the seven problems has been solved at the time of writing, but that count might be about to rise to two, because a chap named Vinay Deolalikar, who works as a research scientist at HP Labs and has previously published several well-received papers on the topic, claims to have proof that P is not equal to NP.

Deolalikar says: "The proof required the piecing together of principles from multiple areas within mathematics. The major effort in constructing this proof was uncovering a chain of conceptual links between various fields and viewing them through a common lens. Second to this were the technical hurdles faced at each stage in the proof.

"This work builds upon fundamental contributions many esteemed researchers have made to their fields. In the presentation of this paper, it was my intention to provide the reader with an understanding of the global framework for this proof."

That paper, which is 100 pages long and filled with terrifying sentences such as "We embed the space of covariates into a larger product space which allows us to disentangle the flow of information during an LFP computation", hasn't yet been peer-reviewed, and has been updated a couple of times already since being published on the web, so it will likely be some time until the proof can be verified and accepted.
But the mathematical community has mostly suspected for some time that P does not equal NP; a 2002 survey of 100 researchers indicated that 61 believed the answer to be no, nine believed it to be yes and 22 were unsure. One researcher who answered "no", Scott Aaronson, justified his belief by saying: "If P equals NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in 'creative leaps', no fundamental gap between solving a problem and recognising the solution once it's found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss."

Interestingly, eight believed the problem to be impossible to either prove or disprove, an issue acknowledged by Deolalikar, who says in the paper: "Resolving the P vs NP question might be outside the domain of mathematical techniques. More precisely, the question might be independent of standard axioms of set theory."

So far, the proof has met with a cautiously warm reception from other researchers. Stephen Cook, one of the original posers of the question, is reported to have said: "This appears to be a relatively serious claim to have solved P vs NP." Richard Lipton added: "The author has advanced serious and refreshingly new ideas of definite value, and deserves time and space to develop them more fully."

Beyond making Deolalikar a considerably richer man, the main practical impact of an accepted proof that P does not equal NP would be that researchers could focus their attention elsewhere. But it would also represent a significant advance in complexity theory and could perhaps raise more interesting questions for the future.
With Comet 168P/Hergenrother still bright and perfectly placed high in the southeast at nightfall, I wanted to share an updated map for amateur astronomers with 6-inch and larger telescopes who'd like to track the comet. At around magnitude 9.5, it's still the brightest fuzzball in the fall sky. For the next couple of weeks, 168P will track from northern Pegasus into Andromeda as it slowly fades. Put it on your list of autumn night sky targets and you won't be disappointed. If you don't see the comet this time around, you'll have to wait 7 years for its return.

Hergenrother is a short-period or periodic comet – one that orbits the sun in fewer than 200 years. That's what the "P" stands for in its name. Since its discovery in 1998 by American astronomer Carl Hergenrother, this feathery visitor is making its third observed trip. About 265 numbered periodic comets have been discovered to date; unnumbered periodic comets number nearly 250.

Next in our comet lineup is Comet C/2011 L4 PANSTARRS. The "C" indicates a long-period comet, or one that orbits the sun in more than 200 years. Two hundred? That's nothing. L4 PANSTARRS's period is estimated at 110,000 years. Seeing it is a once-in-a-lifetime experience for sure. Right now, the comet looks like a small, dense cotton wad of light in southern Libra, visible only from the southern hemisphere low in the west during early evening hours. PANSTARRS has plateaued at a dim magnitude 11.5 for the past few weeks, but is expected to slowly brighten through fall and winter. Northern hemisphere observers will have to be patient. We won't spy it till next March, because the comet will either be too near the sun or too low in the sky. On March 9, 2013, L4 PANSTARRS passes just 28 million miles from the sun. In the days before and after, solar heating will furiously vaporize ice and dust from its outer crust, causing the comet to quickly brighten and develop a substantial tail.
A few days later it pops into the evening sky and could shine as bright as -1 magnitude or nearly the equal of Sirius, the brightest star. That’s what the predictions say anyway. More information and a sky chart HERE. At least we can see L4 Pan-STARRS with an 8-inch or larger telescope. Comet C/2012 S1 ISON at 17th magnitude dips way below the limit, though amateur astronomers using larger instruments and digital cameras have taken pictures of it. ISON was scooped up by Russian amateur astronomers Vitaly Nevski and Artyom Novichonok in the course of the work for the International Scientific Optical Network (ISON) Survey from near Kislovodsk, Russia on Sept. 21. I hope the two are eventually recognized for their discovery by having their names penned to the comet instead of a survey acronym. Other comets discovered during surveys have received the discoverer’s name. Why not this one? S1 ISON creeps very slowly across the constellation Cancer in the morning sky this month and next and won’t become visible in typical telescopes until next September. On November 28, 2013, the comet passes just 800,000 miles from the sun. If it survives the encounter, it could become brighter than Venus and be visible in broad daylight. A few days after its near-death experience, ISON swiftly moves northward, becoming visible in both evening and morning skies. Ernesto Guido and the team of amateur astronomers at the Remanzacco Observatory in Italy have observed a large amount of activity in the comet’s nucleus this fall despite it being 558 million miles from the sun or farther than Jupiter. Mike Mattei, another amateur astronomer, reports that Earth will pass under the incoming leg of ISON’s orbit. If the comet is large and active, he predicts we could see an increase in meteor activity around January 14-15, 2014 spawned by dust cooked off the comet nucleus. Isonids anyone? 
Though I’ve heard it’s possible ISON could rival the full moon’s brightness and become one of history’s “Great Comets” when it appears in both morning and evening skies in early December, I’m going to play the conservative card. I’ve been burned by a few comets that haven’t lived up to expectations, and besides, these creatures are unpredictable anyway. That’s their charm. It could easily be fainter or brighter, though the latter is preferable by far. More on the S1 ISON including sky charts HERE.
It is happening frequently lately. A major weather event occurs---perhaps a hurricane, heat wave, tornado outbreak, drought or snowstorm---and a chorus of activist groups or media folks either imply or explicitly suggest that the event is the result of human-caused (anthropogenic) global warming. Perhaps the worst offender is the organization www.350.org and their spokesman Bill McKibben. Close behind is Climate Central, which even has an extreme weather/climate blog. The media has noted many times that the U.S. in 2011 experienced a record 14 billion-dollar weather disasters--and many of the articles imply or suggest a connection with human-forced global warming. Even the NY Times has jumped into the fray recently, giving front-page coverage to an unscientific survey that found that a large majority of Americans believe recent extreme weather events are the result of anthropogenic global warming. One does not have to wonder very hard about where Americans are getting their opinions--and it is not from the scientific community.

But what is so disturbing about all this is that there is very little evidence that these claims are true...that the extreme events of late are the result of greenhouse gas increases caused by humans. Take the recent amazing heat wave in the eastern and central U.S.: canary in the coal mine for global warming? There is no evidence of this. In fact, an in-depth analysis by Dr. Martin Hoerling of the NOAA Earth Systems Research Lab (ESRL), found here, suggests that the heat wave was the result of natural variability and an unusual, but not unprecedented, change in the upper-level flow pattern that pushed tropical air northward over the eastern U.S. A recent discussion of the March warming by UW Professor Michael Wallace, one of the nation's leading climate scientists and a member of the U.S. National Academy of Sciences, found here, reaches a similar conclusion.

Well, what about the extensive tornado outbreaks of 2011 over the southeast U.S.
and the early tornadoes of 2012? Unusual extreme weather connected with global warming? There is no reason to believe this is true. Backing for this statement comes from a comprehensive report of the Intergovernmental Panel on Climate Change (IPCC) on extreme weather events, found here. To quote the IPCC report: "There is low confidence in observed trends in small spatial-scale phenomena such as tornadoes and hail." What about hurricanes? Have we seen an upward trend in those? The IPCC conclusions: "There is low confidence in any observed long-term (i.e., 40 years or more) increases in tropical cyclone activity (i.e., intensity, frequency, duration), after accounting for past changes in observing capabilities." What about extreme temperatures, heavy precipitation, and drought? To aid us in evaluating the trends in these extreme climate events, the National Weather Service has developed a Climate Extremes Index that you can access and plot online. I tried it out and here is the information for these parameters for 1910-2010. An important issue that is rarely discussed is that changes in extremes...either natural or from human-forced global warming...will not be spatially uniform. So even a strong global warming signal will result in some places getting more extreme weather, while others will get less extreme weather. An obvious example is temperature....if temperatures warm there will be a tendency for more extreme highs and more heat waves. But that also implies that the cold waves will be weaker and less extreme. Ever wonder what is the biggest weather killer in the Northwest U.S.? Not hurricanes or tornadoes, not heat waves or droughts, not windstorms and floods. I am convinced from the statistics I have collected that roadway icing kills and injures more people around here than anything else. And warming should help reduce those deaths and injuries. And consider that most of the climate models suggest the jet stream will move north under global warming.
Big storms and floods are associated with the jet stream. So some folks (on the north side of the current jet stream location) may experience more extreme storminess, but those on the south side could well experience less. There will certainly be losers due to changed extremes under global warming, but there will be a lot of winners as well. Never seem to hear about that. It is somewhat embarrassing for me to admit this, but part of the problem is that a small minority of my colleagues--people who should know better--are feeding the extreme-weather/climate hype in the mistaken belief that by doing so they can encourage people to do the right thing--lessen their carbon footprint. Here is an example. Three final points: (1) Even if there are changes in the frequency of extremes, that does not necessarily mean human influences are behind them. For example, the earth has been warming for roughly 100-150 years as the planet exited the "Little Ice Age". Much of this warming has undoubtedly been natural, with human-forced warming only really significant during the past 30 years or so. Glaciers have been melting back over the past century, and thus some of this loss is undoubtedly due to natural causes. (2) If we haven't seen trends in extremes, that does not mean that we won't see them in the future when the impact of anthropogenic greenhouse warming increases substantially. The earth is only starting to warm up due to mankind's influence on greenhouse gases. The big action...including changes in extremes...is AHEAD of us. Activist types have made a huge mistake in thinking they need to point to observed changes in extremes to make their case for dealing with GW. They are particularly making a mistake when they make claims that have no scientific basis.
Global warming skeptics and deniers have made the huge mistake of assuming that a lack of clear changes in the atmosphere during the past decades says something about what will happen in the future, since most of the GW impacts have not yet occurred. Ironically, the activist types are providing the deniers with a potent weapon, since it is pretty easy to disprove many of the activist claims of human-induced global warming enhancing past and current extreme weather. (3) The media has to do more homework on the claims of GW/extreme weather connections. All too often they simply quote and replay the baseless claims of advocacy groups, or juxtapose stories on extreme weather events and the potential for extremes under global warming...leaving their readers to reach their own, and often incorrect, conclusions. And as a side issue, when is the media going to provide information about some of the nonsense that denier groups are pushing (that global warming is ridiculous because the concentrations of CO2 are so small, that we can't forecast climate if we can't predict weather next week, etc.)? I believe the science is fairly clear...the impacts of global warming due to human-enhanced greenhouse gases will be very significant, and the effects will increase gradually at first, but then accelerate later in the century. There will be substantial impacts on extremes, but the magnitudes and spatial distributions will be complex, and we don't necessarily have a good handle on it at present.
What Might Have Been
In the new analysis, Newman and colleagues set out to predict ozone losses as if nothing had been done to stop them. The team started with the Goddard Earth Observing System Chemistry-Climate Model, an earth system model of atmospheric circulation that accounts for variations in solar energy, atmospheric chemical reactions, temperature changes and winds, and interactions between the stratosphere, where ozone is found, and the troposphere, the layer of atmosphere closest to Earth. Their "world avoided" simulation took months of computer time to process. The researchers let global emissions of CFCs and similar compounds in the model world increase by 3 percent per year, the rate at which they were growing before regulation in the late 1980s. Then they let the simulated world turn from 1975 to 2065. By the simulated year 2020, 17 percent of global ozone is destroyed, and an ozone hole forms each year over the Arctic as well as the Antarctic. By 2040, the ozone "hole"—concentrations below 220 Dobson Units—is global. The UV index in mid-latitude cities reaches 15 around noon on a clear summer day (10 is considered extreme today).
Surprising Collapse of Tropical Ozone Layer
In the 2050s, something strange happens: ozone levels in the stratosphere over the tropics collapse to near zero in a span of six years. According to Goddard scientist and study co-author Richard Stolarski, who was among the pioneers of atmospheric ozone chemistry in the 1970s, the rapid, near-total ozone destruction is similar to what happens over Antarctica today. Seeing a similar process occur over the tropics was surprising, says Stolarski, "because we hadn't expected the tropical stratosphere would get cold enough to form stratospheric clouds." The dramatic cooling appears to be the result of two processes. "Ozone absorbs UV energy, which causes the surrounding atmosphere to warm," explains Stolarski.
"So, by itself, loss of ozone leads to cooling, which is something we expected to see." More surprising, says Stolarski, is that the temperature change intensified the stratosphere's large-scale, slow-moving circulation pattern. In that circulation, air from the lower stratosphere rises into the upper stratosphere at tropical latitudes, spreads toward the poles, and sinks. As air rises, it cools. As the circulation strengthened, the amount of cooling increased, allowing stratospheric clouds—today confined to polar latitudes—to form over the tropics. Runaway ozone destruction followed. By the end of the model run in 2065, global ozone drops to less than 110 DU, a 67 percent drop from the 1970s. Year-round Arctic polar values hover between 50 and 100 (down from 500 in 1960). The intensity of UV radiation at Earth's surface doubles; at certain shorter wavelengths, intensity rises by as much as 10,000 times. Skin cancer rates would soar. "Our world avoided calculation goes a little beyond what I thought would happen," said Stolarski. "The quantities may not be absolutely correct, but the basic results clearly indicate what could have happened to the atmosphere. And models sometimes show you something you weren't expecting, like the precipitous drop in the tropics." "We simulated a world avoided," said Newman, "and it's a world we should be glad we avoided."
The Real World
The real world has been somewhat kinder. Production of ozone-depleting substances was finally halted in 1992, though their abundance is only beginning to decline because the chemicals can reside in the atmosphere for 50 to 100 years. The peak abundance of CFCs in the atmosphere occurred around 2000, and has decreased by roughly 4 percent to date. Stratospheric ozone has been depleted by 5 to 6 percent at middle latitudes, but has somewhat rebounded in recent years. The largest Antarctic ozone hole on record occurred in 2006, with holes of slightly smaller size since then.
Newman, Stolarski, and other colleagues have used their model to simulate how the real-world ozone layer will recover as well. Because of climate change from greenhouse gases, they say, the ozone layer will probably not look exactly like it did in the 1970s. "I didn't think that the Montreal Protocol would work as well as it has, but I was pretty naive about the politics," Stolarski added. "The Montreal Protocol is a remarkable international agreement that should be studied by those involved with global warming and the attempts to reach international agreement on that topic." - Newman, P. A., Oman, L. D., Douglass, A. R., Fleming, E. L., Frith, S. M., Hurwitz, M. M., Kawa, S. R., Jackman, C. H., Krotkov, N. A., Nash, E. R., Nielsen, J. E., Pawson, S., Stolarski, R. S., and Velders, G. J. M. (2009). What would have happened to the ozone layer if chlorofluorocarbons (CFCs) had not been regulated? Atmospheric Chemistry and Physics, 9(6), 2113-2128.
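The "world avoided" scenario's 3-percent-per-year emissions growth compounds to a large factor over the 90 simulated years. A rough back-of-the-envelope sketch (plain annual compounding from a 1975 baseline of 1.0; the model's actual emissions scenario is of course more detailed than this scaling factor):

```python
# Relative CFC emission rate under 3%/yr growth (the pre-Montreal trend),
# compounded from a 1975 baseline of 1.0. This is only the growth scaling,
# not the model's full emissions scenario.
def growth_factor(year, rate=0.03, base_year=1975):
    return (1 + rate) ** (year - base_year)

for year in (2020, 2040, 2065):
    print(f"{year}: {growth_factor(year):.1f}x the 1975 emission rate")
```

By the end of the run the assumed emission rate is roughly fourteen times the 1975 rate, which helps explain why the simulated ozone losses accelerate so sharply late in the century.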
Like the much-maligned and infamous black flies, mayflies require cold, clean, oxygen-rich water, like that in Dublin Lake shown here. Photo by Jerry and Marcy Monkman, EcoPhotography
By Dave Anderson
By May, twilight arrives late. From the high and lonesome expanse of Interstate-89, evening alpenglow illuminates a flank of Mount Kearsarge, changing every second from yellow to gold and now pink, like an ember fallen from a fire. Purple shadows climb the lower slopes as the sun sinks beyond Lake Sunapee into the hills of Vermont. My serpentine commute from Concord to Sutton winds from the Contoocook River's wide red maple floodplain to parallel the lively Warner River along a state two-lane highway. The route eventually peels off on a small back road along the tiny Lane River, which tumbles from the rounded hills and narrow wet meadow valleys of its womb-like headwaters. Suddenly, unseen droplets pelt the windshield. Is it raining? I squint through a spattered windshield into a gauzy curtain ahead; a cloud of newly-hatched mayflies swarms above the river road. A colossal "hatch" of millions of half-inch long, forked-tailed insects is underway. I roll down the windows, stretch out my left arm, and plunge headlong into the swarm. Mayflies speckle the windshield, bounce off my open hand, and collect on my shirt sleeve. From where did the sudden mayfly hatch emerge? From the river. There are only a few dozen species of mayflies in New Hampshire. More species of aquatic insects inhabit the southern states, where mountain streams are less acidic, and calcium-rich bedrock yields a higher pH that supports a greater variety of aquatic insects. Mayflies belong to the order Ephemeroptera. Translated from Greek, that means literally "short-lived wing." Indeed, mayflies have neither mouthparts nor time to eat. Their sole purpose is to mate, lay eggs on the water, and die. Aquatic insect eggs hatch underwater in lakes and streams by mid-summer.
Several larval stages, called "instars," of the various aquatic insect "nymphs" live through summer and autumn, and then overwinter, crawling amid pebbles on the bottom of clear, cold streams and spring-fed ponds. Like the much-maligned and infamous black flies, mayflies require cold, clean, oxygen-rich water. They're excellent indicators of high water quality in freshwater ecosystems. We're fortunate to live in a region that still supports them. When water temperatures rise during spring, stream conditions trigger a massive synchronized hatch. The so-called "emergers" float or actively swim to the surface to shed their final larval exoskeleton on rocks or logs along the river banks and pond shores. Delicate dun mayfly nymphs swim to the water's surface and crawl out to molt. The exoskeletons split open, allowing adult "spinners" with opaque, brand-new wings that are not yet fully developed to flutter skyward for their final hours of life. The collective ascension of clear-winged spinners culminates in a brief, conspicuous aerial mating swarm: a mayfly hatch. At last, the females lay eggs on the water, and the spent, adult mayflies die less than two days after they first crawled from their liquid birthplace. Aquatic insect life cycles are best known to anglers, who try to "match the hatch" to fool hungry trout with imitation flies. Bits of bead, feather, and colored wire are wound around hooks to represent various aquatic insect species and specific larval instars. "Wet fly" variations represent the swimming larval nymphs. "Dry flies" represent egg-laying adults and the floating spent spinners. There is a reverence for insects in every angler's flybox. But it's not just anglers who follow the emergence of hatching mayflies. The annual drama attracts appreciative audiences above and below the water's surface.
As they rise, in a final curtain call, from the dark, watery depths up into the wide blue yonder, this last burst of glory is the tolling of the dinner bell for wildlife. Insect protein in all its incarnations forms the broad base of the food pyramid. All wildlife, from fish, frogs, snakes, and turtles to songbirds, bats, and the largest, fur-bearing mammals, are ultimately dependent in some way on rich and diverse statewide insect populations. Insects are the primary reason why millions of colorful, neo-tropical songbirds hazard the risks of long-distance migration to northern latitudes to breed and raise their young. They’d stay year-round if the insects remained as plentiful as they are in May and June. In May, the tree swallows – our local troupe of aerial acrobats – wheel, dip, and tumble while hawking insects above an open marsh. Their beaks dimple the glassy water. Their acrobatic synchronized evening feeding ballet is a wonder of movement and light. They rise and fall, alternately showing iridescent green backs and snow-white breasts, like two-toned cottonwood leaves blown out over the Lane River from the surrounding woods. Unseen beneath the river’s surface, pink-flanked rainbow trout and bejeweled speckled brook trout rise up in a mirror image of tree swallow choreography. Trout sip swimming mayflies just beneath the water’s surface. Concentric rings spread across quiet pools. Occasionally, an audible tail splash breaks the mirrored water when fish snatch nymphs and roll back to the depths. Where tannic water runs clear, you may see a white flash of fins and bellies as trout troll through nymphs rising like champagne bubbles. New Hampshire’s rivers run high in spring. From the bank, I watch concentric rings spread across the water. Mayflies emerge, struggle, fly, breed, and die. A fluttering horde surrounds me as I hike back to my bug-spattered pickup truck in the last light of day. 
The biting black flies are thick and hungry; my wrists, neck and ears sting from bites. It’s a small price to pay for the performance I’ve just witnessed. Naturalist Dave Anderson is director of education and volunteer services for the Society for the Protection of New Hampshire Forests. He can be reached via e-mail at or through the Forest Society web-site: www.forestsociety.org.
It's been some time since we've heard cries for help regarding the degradation of the Earth's ozone layer and subsequent doom of life as we know it. But apparently, the keening has begun again. In the HVACR industry, there probably isn't a single engineer, manufacturer, distributor, or contractor who doesn't know something of the ozone hole that hovers precariously over Antarctica, New Zealand, Australia, and parts of South America and Africa. After all, during the mid-1980s, after British scientists sounded the alarm, the world rushed to condemn the chemicals used to refrigerate our foods and medicines, keep us comfortable and healthy, and allow us to inhabit most of the uninhabitable places across our beloved planet. From that alarm arose the Montreal Protocol, which ushered in a new, green age in 1989. Since then, rumor has it that, because of mankind's brilliant efforts at reducing the consumption and subsequent release of these chemicals—chlorofluorocarbons being chief among them—our planet is getting healthier and the hole is shrinking. Great news, right? Well, maybe. You see, there's a new study in town. It was conducted by scientists from New Zealand's National Institute of Water and Atmospheric Research. Their findings show that the recovery, in concert with climate change, may do harm as well as good. You heard that right. The study, detailed in the May edition of Geophysical Research Letters and highlighted in an article posted on Yahoo! News (tinyurl.com/34d76ud), revealed that variations in atmospheric circulation attributed to climate change will cause a 43-percent increase in gas exchange between the stratosphere (an upper layer of the atmosphere that contains the ozone layer) and the troposphere (the lowest portion of Earth's atmosphere, which contains the air we breathe). The ozone layer resides just above the troposphere. As ozone is replenished in the stratosphere, it will have more opportunities to seep into the air we breathe.
This is not a good thing, because ozone smells bad and can damage our respiratory systems (not to mention harm the environment). Will this lead to even more legislation that could impact our industry? According to the Yahoo! article, if carbon-dioxide levels in the atmosphere increase as expected from unabated emissions, the ozone layer will cool off, blurring the temperature boundary that separates it from the troposphere. Within the next century, more ozone than ever before will surge into our air. This idea is based on a computer modeling program designed by those New Zealand scientists. The Yahoo! article says the study's author, Guang Zeng, hopes "that future studies of the impacts of climate change will account for the atmospheric composition of both the stratosphere and troposphere, as well as the movement of ozone between the two, to paint a better, more accurate picture of the Earth's environmental future." These "findings" and subsequent statements are the kind of things legislators drool over. All I can say is, "Holy cow, can we do nothing right?" In a mere 21 years, we've gone from nearly destroying the earth by stripping the protective layer and threatening all living things with massive overdoses of ultraviolet radiation, to sort of fixing that, but in the process opening the door to annihilation through asphyxiation because the ozone is now falling from the sky. Where is Chicken Little when you need him, Batman?
Review for 2nd Lecture Test
Optical Mineralogy (Chapter 7)
1) What is an optic axis? Compare and contrast uniaxial and biaxial minerals by describing differences in crystal structure and symmetry. List the systems that belong in each group.
2) Define and describe birefringence and the resulting interference colors. What are the factors that control birefringence? How is birefringence measured in minerals? What is the difference between pleochroism and birefringence?
3) What is extinction? When and why does it occur in anisotropic minerals when viewed under cross-polarized light? Name and describe the three types of extinction. What are extinction angles and how are they measured?
4) Describe how the orientation of a mineral controls its birefringence. Be able to describe which crystallographic positions produce the highest and lowest birefringence. Describe two other important factors that control the birefringence of a mineral.
5) What are interference figures and how do they form? Which components of the petrographic microscope are used to obtain these figures? Define the following components of interference figures: isochromes, melatope, and isogyres.
6) List and describe the important optical properties used in identifying uniaxial minerals. Why is refractometry more useful in identifying isotropic minerals? Why do uniaxial minerals commonly show parallel extinction when they display elongation? Which types of cleavage and crystal forms are not commonly seen in uniaxial minerals?
7) Compare and contrast the following uniaxial interference figures: centered optic axis, off-center optic axis, and optic normal or flash figure. Be able to describe the visible components of each figure and the crystal orientation that produces it.
8) Compare and contrast the following biaxial interference figures: acute bisectrix, centered optic axis, obtuse bisectrix, and optic normal or flash figure. Be able to describe the visible components of each figure and the crystal orientation that produces it.
9) Describe how the optic sign, birefringence, and indices of refraction are measured for uniaxial minerals. Describe how the optic angle (2V), optic sign, birefringence, and indices of refraction are measured for biaxial minerals.
Atomic Structure and Crystal Chemistry (Chapter 3)
1) What are the major components of the atom? Which components account for most of the volume of the atom? Which components account for most of the mass of the atom? Which force holds the protons and neutrons in the nucleus? Which force holds the electrons in orbit around the nucleus?
2) What are isotopes? Why do isotopes display similar behavior? How are isotopes of the same element distinguished?
3) What is the basis for the theory of Quantum Mechanics? List and describe the four basic quantum numbers. Explain why no two electrons will have the same set of quantum numbers. Why are some outer subshells often filled before inner ones? Give examples of both kinetic and potential electron energy.
4) What are valence electrons? Define electronegativity and describe how it relates to the formation of ions and the ionization potential.
5) Define and give examples of alkali metals, alkaline earth metals, transition metals, nonmetals, and noble gases. What is the difference between metals and nonmetals?
6) What are the three most important controls on the size of an atom? What is effective size?
7) List and describe the four major types of bonds that hold minerals together. Describe the physical properties, occurrence, and atomic radii that characterize each bond type. What are the Coordination Principle and Coordination Number? For which bond type are these important?
8) Compare and contrast isodesmic and anisodesmic bonding. How is each bond type commonly reflected in mineral properties?
9) What are the 8 elements that are most abundant in the crust? Describe the oxygen-based structures that form using these elements.
10) Define the following: Pauli's Exclusion Principle, coinage metals, conductance bands, and hydrogen bonds.
Crystal Structure (Chapter 4)
1) What is the most important control on crystal structure? Explain.
2) Describe the typical crystal structures and symmetries formed by ionic bonding. Explain how electrostatic valency determines the nature and three-dimensional strength of the bond. Compare and contrast isodesmic, anisodesmic, and mesodesmic bonding.
3) Compare and contrast the crystal structures and properties of minerals formed by covalent bonding with those formed by metallic bonding.
4) What are isostructural minerals? Give examples. How do they differ from polymorphs?
5) Compare and contrast the following types of polymorphs: reconstructive, displacive, and order-disorder. Give an example of each.
6) Define solid solution. What is the difference between simple and coupled solid solution? Compare and contrast the following types of solid solutions: substitutional, omission, and interstitial. Give mineral examples of each.
7) What is exsolution and why does it occur? Give an example of a mineral that commonly displays exsolution.
8) Describe the different ways in which mineral compositional data are shown on graphs. What types of minerals are commonly shown on compositional plots? Be able to determine the composition of a mineral plotted on either a linear or triangular plot.
9) Describe the procedure for writing mineral formulas relative to the order of cations and anions.
For number 2, I've successfully graphed it, but how do I go about finding the area of the shaded region? That looks to me like a trapezoid. Do you know the formula for the area of a trapezoid? If not, notice that if you draw a line at y = 3, you divide this figure into a rectangle, with length 6 and height 3, and a right triangle with base 6 and height 2. Do you know how to find the area of a rectangle and a triangle?
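The figure itself isn't reproduced here, but using the dimensions given in the reply above, a quick check (Python) shows that the split method and the trapezoid formula agree:

```python
# Split method from the hint: a 6-by-3 rectangle plus a right triangle
# with base 6 and height 2.
rectangle = 6 * 3          # 18
triangle = 0.5 * 6 * 2     # 6
total = rectangle + triangle

# Trapezoid formula A = (b1 + b2) / 2 * h: the same figure has parallel
# sides of length 3 and 5, a distance of 6 apart.
trapezoid = (3 + 5) / 2 * 6

print(total, trapezoid)  # both give 24 square units
```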
how do you graph xy<12? i can't find any examples in my book. I suppose you could do this: Transform xy<12 to y<12/x for x>0; for x<0, dividing by x flips the inequality, giving y>12/x (and when x=0, every y works, since 0<12). Sketch the graph y = 12/x, a hyperbola if I am correct. Draw the curves as dotted boundaries, since points on them are not included. Select some test points to find the regions where xy<12, and shade those areas. I think you will have a two-dimensional graph at the end.
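To double-check the shading, you can plug a few sample points straight into the inequality (a quick sketch; the points are arbitrary):

```python
# The region xy < 12: test any point by evaluating the inequality directly.
def in_region(x, y):
    return x * y < 12

assert in_region(2, 5)       # 10 < 12: inside, shade this region
assert not in_region(3, 5)   # 15 < 12 is false: outside the region
assert in_region(0, 100)     # x = 0 gives 0 < 12, so the whole y-axis is shaded
assert in_region(-2, 10)     # -20 < 12: note y > 12/x here, since x < 0
print("all test points behave as expected")
```

The last point illustrates why the inequality flips for negative x: at x = -2, y = 10 is far above the hyperbola branch y = 12/x = -6, yet the point still satisfies xy < 12.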
The Higgs Boson, Part III: How to Discover a Particle
About this video: 600 million collisions, every second, for two years. Guest-featuring John Green of Crash Course, this Minute Physics episode looks at exactly how one goes about 'discovering' a particle such as the Higgs Boson – or should we really say scientific 'fact checking'? Even after we've built our large particle colliders and performed huge numbers of collisions, how sure do we need to be before we can claim to have confirmed its existence? To what extent must modern physicists go to confirm their predictions and successfully 'fact check' their science?
Created by Henry Reich for Minute Physics: cool physics and other sweet science in 60 seconds. Licence: Standard YouTube License.
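For context on "how sure do we need to be": the particle physics community conventionally requires a five-sigma excess before claiming a discovery, corresponding to roughly a one-in-3.5-million chance that background fluctuations alone produced the signal. (That threshold is a community convention, not something stated in this video description.) A sketch of the one-sided p-value for an n-sigma excess, assuming Gaussian statistics:

```python
import math

# One-sided tail probability of a Gaussian: the chance that pure background
# fluctuates upward by at least n_sigma standard deviations.
def p_value(n_sigma):
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

print(f"3 sigma ('evidence'):  p = {p_value(3):.2e}")
print(f"5 sigma ('discovery'): p = {p_value(5):.2e}")
```

Five sigma gives p of roughly 3e-7, which is why experiments keep colliding until the statistics clear that bar.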
Reports and Articles on Climate Change (6-18-02)
This page contains summaries of or links to a variety of reports and articles relevant to climate change and its relationship to policy:
National Research Council (NRC): Abrupt Climate Change: Inevitable Surprises, December 2001
Recognizing that there has been no plan for increasing knowledge about abrupt climate change, the U.S. Global Change Research Program asked the National Research Council (NRC) to create the Committee on Abrupt Climate Change. The purpose of the new group was to document the known information on abrupt climate change and to offer recommendations to improve the understanding of this subject, with the hope that this information will enable humans to more effectively prevent and mitigate the impacts of abrupt climate change. The results of the committee's efforts are found in the report entitled Abrupt Climate Change: Inevitable Surprises, released by the NRC in December 2001. The causes and mechanisms behind abrupt climate change are unclear, but it is well-established that abrupt climate change has occurred and can occur again. It is likely that faster changes to the Earth system, either natural or anthropogenic, increase the probability that the threshold for a climate shift will be reached. The report likens this relationship to a person turning a light switch on and off. The speed and amount of pressure one applies to the switch determine how quickly a room becomes light or dark. The NRC committee "believes that increased knowledge is the best way to improve the effectiveness of [abrupt climate change] response, and thus that research into the causes, patterns, and likelihood of abrupt climate change can help reduce vulnerabilities and increase our adaptive capabilities." As a result of the committee's investigation, they offer five main recommendations for improving our knowledge of and preparation for abrupt climate change. 1) "To improve the fundamental knowledge base related to abrupt climate change."
Specific attention should be paid to the mechanics of thresholds. Knowledge in this realm may be gained through study of ocean-atmosphere behavior, oceanic deepwater processes, hydrology, and ice. Other helpful information includes the creation of a comprehensive land-use census, covering a range of issues from wildlife disease to fragmentation of habitats to the distribution of forest fires, and the development of integrated economic and ecological data sets to aid in the creation of strategies to adapt to climate change. Data collection should be focused in two types of areas: where the impact of abrupt climate change is expected to be greatest, and where information pertaining to ongoing changes will be particularly useful in understanding impacts and developing meaningful responses. 2) "To improve modeling focused on abrupt climate change." Abrupt climate change of the past is still not fully explained, as models tend to underestimate the size, speed, and extent of those changes. New, flexible models that include geophysics, ecology, and social-science considerations are needed. There is much room for increased accuracy in these models. 3) "To improve paleoclimatic data related to abrupt climate change." The past hydrologic cycle is of particular interest; therefore, the addition of new proxies that focus on changes in water (e.g., drought or flood) is needed. The selection of multi-parameter projects to come to a deeper understanding of past climate in specific locations and an increase in the geographical coverage of paleoclimate observations are needed as well. 4) "To improve statistical approaches." The majority of climate statistics are treated as if they describe cyclic events, occurring at relatively predictable intervals. The committee feels that this current use of statistics needs to be re-examined, as one cannot treat abrupt climate change in the same manner that one would treat the occurrence of a 100-year flood.
5) "To investigate 'no-regrets' strategies to reduce vulnerability." Because abrupt climate change has potentially large impacts and current human abilities to predict the onset of this type of event are poor, special attention should be paid to "increasing the adaptability and resiliency of societies and ecosystems." Policies aimed at doing this will be "no-regrets" as they often involve low- or no-cost changes that will be beneficial for quality of life regardless of the level of environmental change that occurs. Adjustments include improving air, water, and land quality; decreasing biodiversity loss; slowing climate change; improving climate forecasting; and assisting poor countries by offering research expertise. Although the recommendations found within the NRC report focus on US institutions, they apply to the entire globe. The report stresses that the US is presented with a unique opportunity to provide both scientific and financial leadership, and to work collaboratively with scientists around the world in order to gain understanding of an issue likely to have widespread impacts. It will be easier for wealthy nations, such as the US, to adjust to and/or recover from abrupt climate change. However, these countries cannot ignore the rest of the world. Increased interconnectedness is the direct result of increased globalization. Because of this interdependence, adverse impacts of abrupt climate change are likely to cross over boundaries. A few examples of cross-border impacts offered by the report are human and biotic migration, economic shocks, and political aftershocks. The US Global Change Research Program (USGCRP) has a directory of online reports on their website entitled The Potential Consequences of Climate Variability and Change. All published in 2000 and 2001, the National Assessment Overview and Foundation reports summarize key findings on what the impacts of climate change will be on the US, including assessments for individual regions.
They also detail the program's methods and assessment process. The USGCRP was created in 1989 as a Presidential Initiative and formalized by the Global Change Research Act of 1990. Earlier this year, in preparation for a summit in Europe to discuss the Kyoto Protocol, the Bush administration's Cabinet-level working group to review the status of U.S. climate change efforts sought additional input from the National Academy of Sciences. The result was a report that came out in early June entitled "Climate Change Science: An Analysis of Some Key Questions," intended to send President Bush to Europe fully informed on the status of climate change research in the U.S. and the world, as well as how the research should influence policy. Although the report did state that uncertainties remain regarding natural climate variation and current climate models, its primary emphasis was that "greenhouse gases are accumulating in the Earth's atmosphere as a result of human activities." However, while in Europe discussing the Kyoto Protocol, Bush seemed to place more emphasis on the uncertainties the report confirmed. Although Bush has stated he is open to policy that will deal with climate change, he has not come out in support of the Protocol. This report is a transition document for the new administration to use in analyzing what can be done at the federal level to address global change issues. It is a synthesis of several other NRC reports on global change. The extensive bibliography of the report is a great resource that includes not only primary sources but also other NRC report titles discussing a range of natural science topics -- some specifically studying the role of the geosciences in environmental questions. The brief report puts forth ideas to strengthen the link between legislators and scientists for the purpose of solving local and global problems occurring as the human population continues to grow and natural resources dwindle. 
It also identifies actions that should be taken at the national leadership level and at the agency level to promote research on regional and global change. The emphasis of the report's recommendations is the integration of all the natural sciences as well as aspects of social science across federal agency boundaries. It stresses the interconnection between the health of the environment and the health of society and the economy. The report charges the federal government with providing long-term reliable and consistent scientific information on regional and global change. It recommends that the federal government establish a body -- incorporating many federal agencies -- to coordinate global and regional environmental research independent of appropriation whims. Three administrative options for the group were identified -- creating a new National Environmental Council, strengthening the existing interagency structure through the National Science and Technology Council, or giving the Council on Environmental Quality oversight of relevant research. Whatever its structure, the interagency group would fulfill the federal government's responsibilities in global and regional change research. These responsibilities include: ensuring that resources are directed into under-funded research areas that are not contained in one agency, ensuring that an integrated and comprehensive monitoring program is established, developing and maintaining multidisciplinary modeling and information systems, and ensuring that scientific research on local, regional, and global scales provides pertinent information for use in decision making. The second part of the report introduces actions that should be taken at the agency level to ensure that research continues on important environmental change topics. The report affirms that there already is expertise and capacity for generating information on global change, but the difficulty lies in "putting knowledge into action." 
The report makes recommendations for creating a focused fundamental research agenda in global change, sustainability, and environmental and ecosystem research. The unifying themes of the recommendations are "better understanding the interactions of earth systems with the social system, how those interactions contribute to changes in the environment, and how to develop new strategies for mitigating and adapting to the changes." The report indicates that in order to follow the recommendations there will have to be better links between the natural sciences, social sciences, and engineering. Implementing an effective research agenda will require federal agencies, individually or in collaboration, to follow eight recommended action items. The final section is an extensive bibliography of materials used in the synthesis of the report. This report addresses the fact that global-mean temperature at the earth's surface is estimated to have risen by 0.25 to 0.4 °C during the past 20 years. On the other hand, satellite measurements of radiances indicate that the temperature of the lower to mid-troposphere (the atmospheric layer extending from the earth's surface up to about 8 km) has exhibited a smaller rise of approximately 0.0 to 0.2 °C during this period. The report attempts to answer the question of whether these apparently conflicting surface and upper air temperature trends lie within the range of uncertainty inherent in the measurements and, if they are judged to lie outside that range, to identify the most probable reason(s) for the differences. The Briefing Book includes objective and nonpartisan information and reports on a wide range of climate change issues, including: the greenhouse effect and global climate change, greenhouse gas sources and trends, energy issues, economic issues, legal issues, technology, chronology, the U.N. Framework Convention, and the 1997 U.N. Kyoto Protocol. 
Please send any comments or requests for information to the AGI Government Affairs Program at email@example.com. Contributed by Spring 2001 AGI/AAPG Intern Mary Patterson, Summer 2001 AGI/AAPG Intern Caetie Ofiesh, and Summer 2002 AGI/AIPG Intern Sarah Riggen. Last Updated June 18, 2002. © 2001 American Geological Institute. All rights reserved.
<urn:uuid:805d027c-ee42-46fb-83f5-f5939e14717c>
2.953125
2,284
Content Listing
Science & Tech.
22.824235
BEE DIVERSITY AND THE DEVELOPMENT OF HEALTHY, SUSTAINABLE BEE POLLINATION SYSTEMS Location: Pollinating Insects-- Biology, Management and Systematics Research Title: The montane bee fauna of north central Washington, USA, with floral associations Submitted to: Western North American Naturalist Publication Type: Peer Reviewed Journal Publication Acceptance Date: January 26, 2010 Publication Date: July 12, 2010 Citation: Wilson, J., Wilson, L.E., Loftis, L.D., Griswold, T.L. 2010. The montane bee fauna of north central Washington, USA, with floral associations. Western North American Naturalist. 70(2):198-207. Interpretive Summary: Many species of montane plants are dependent on pollinators, mainly native bees. Little is known about the bee pollinators of the mountains of northern Washington and the specific plants they pollinate. Given the concerns about declines in pollinators and the attendant impact on plant reproductive success, a baseline assessment of bees and their plant relationships is of considerable value. A preliminary survey of the bees of the Tonasket Ranger District of the Okanogan-Wenatchee National Forest discovered a rich array of native bees: 140 species in 24 genera. These pollinators were visiting 57 plant species, with a number of bee species specializing on a single plant genus. Mason bees (Osmia), bumble bees, and ground nesting bees of the genus Andrena were particularly diverse. This study provides the foundation for further exploration into montane pollinators of northern Washington. The mountains of north central Washington contain a variety of habitat types, from shrub-steppe to high alpine meadows. While native bee surveys of some surrounding areas of the Columbia Basin are fairly complete, little work has been done in this region to document the diversity of bees found therein. A survey of native bees in the Tonasket Ranger District of the Okanogan-Wenatchee National Forest was conducted during the summer of 2004. 
Collections yielded a diverse bee fauna (140 species in 24 genera) visiting diverse floral elements (57 plant species in 18 families). These preliminary data suggest a rich bee fauna exists in the Okanogan Basin and surrounding mountains.
<urn:uuid:41104655-7824-4b09-bea5-c65279a08fff>
3.109375
480
Academic Writing
Science & Tech.
32.123245
Where’s the warming?: Global greenhouse gas emissions have risen even faster during the past decade than predicted by the United Nations Intergovernmental Panel on Climate Change (IPCC) and other international agencies. According to alarmist groups, this proves global warming is much worse than previously feared. The increase in emissions “should shock even the most jaded negotiators” at international climate talks currently taking place in Bonn, Germany, the UK Guardian reports. But there’s only one problem with this storyline: global temperatures have not increased at all during the past decade. The evidence is powerful, straightforward, and damning. NASA satellite instruments precisely measuring global temperatures show absolutely no warming during the past 10 years. This is the case for the Northern Hemisphere mid-latitudes, including the United States. This is the case for the Arctic, where the signs of human-caused global warming are supposed to be first and most powerfully felt. This is the case for global sea surface temperatures, which alarmists claim should be sucking up much of the predicted human-induced warming. This is the case for the planet as a whole. If atmospheric carbon dioxide emissions are the sole or primary driver of global temperatures, then where is all the global warming? We’re talking 10 years of higher-than-expected increases in greenhouse gases, yet 10 years of absolutely no warming. That’s 10 years of nada, nunca, nein, zero, and zilch. Check out the links, which show charts over varying time sets, but which all show basically the same thing: no real change over longer periods of time. Not in the Arctic, which Taylor notes was supposed to be the canary in the coal mine, nor in the northern hemisphere, or the globe overall. That’s even true for just the last decade, but it’s especially true over the period of several decades. Periods of high amplitudes in warming are matched with low amplitudes. 
It seems more and more physicists are becoming brave enough to express considerable skepticism of the AGW hysteria, including one who worked in Australia’s climate-change ministry. This is the core idea of every official climate model: For each bit of warming due to carbon dioxide, they claim it ends up causing three bits of warming due to the extra moist air. The climate models amplify the carbon dioxide warming by a factor of three — so two-thirds of their projected warming is due to extra moist air (and other factors); only one-third is due to extra carbon dioxide. That’s the core of the issue. All the disagreements and misunderstandings spring from this. The alarmist case is based on this guess about moisture in the atmosphere, and there is simply no evidence for the amplification that is at the core of their alarmism. What did they find when they tried to prove this theory? Weather balloons had been measuring the atmosphere since the 1960s, many thousands of them every year. The climate models all predict that as the planet warms, a hot spot of moist air will develop over the tropics about 10 kilometres up, as the layer of moist air expands upwards into the cool dry air above. During the warming of the late 1970s, ’80s and ’90s, the weather balloons found no hot spot. None at all. Not even a small one. This evidence proves that the climate models are fundamentally flawed, that they greatly overestimate the temperature increases due to carbon dioxide. This evidence first became clear around the mid-1990s. It’s becoming even more clear now. If carbon increases and the predicted warming didn’t follow, then the obvious conclusion is that the hypothesis regarding cause and effect is incorrect — and the missing hot spots are even further evidence of this. The AGW mania is coming to an end, thankfully.
<urn:uuid:5f3cefc2-5af5-45d6-aef5-809a51f9044e>
3
799
Comment Section
Science & Tech.
45.706218
...in the control of insect pests. Apanteles glomeratus, for example, parasitizes the larvae of the cabbage butterfly (Pieris rapae) and the cabbage looper (Trichoplusia ni). Apanteles congregatus parasitizes the tobacco hornworm (Manduca sexta) and the tomato hornworm (Manduca quinquemaculata). Some braconids attack wood-boring pests such as beetles of... ...attack tomato, tobacco, and potato crops. These leaf-feeding pests are green and can be 10 cm (4 inches) long. Control includes the use of a natural enemy, the braconid wasp (Apanteles congregatus), which parasitizes the larvae. Pupation occurs in an earthen cell or loose cocoon at the soil surface.
<urn:uuid:9507f2ab-1bdd-462e-b2d7-00c5f604c5b5>
3.171875
235
Knowledge Article
Science & Tech.
40.2804
Contrary To Belief June 2, 2008 :: "Put it before them briefly so they will read it, clearly so they will appreciate it, picturesquely so they will remember it and, above all, accurately so they will be guided by its light." Joseph Pulitzer. One of the primary goals of communicating science to the public is to capture the excitement of scientific discoveries while trying to follow the advice of Joseph Pulitzer. Writers and speakers often fall short of this goal and say too much, use too much jargon, or use picturesque language carelessly, as in saying that a neutron star is "incredibly dense," when in fact the data have shown that the density is credible. More problematic is the use of the word "believe," as in "Astronomers believe that most galaxies harbor massive black holes at their centers," or "Scientists believe that elements such as oxygen, silicon and sulfur are dispersed into the galaxy primarily by the explosion of massive stars." In most cases, what is meant by these statements is something like "Based on the evidence at hand, this is what most scientists think is going on, and there is no good evidence to indicate otherwise." It is much briefer to say "scientists believe," but not nearly as accurate. For many readers, the word "believe" could indicate a statement of faith, as in "He believes in God." 
Or, it could be a statement that involves an educated guess, as in "I believe the Red Sox will win the World Series again this year." The latter is closer to what is meant, but does not reflect the state of scientific knowledge, which is well beyond an educated guess. In fact, the evidence is strong that supernovas play a critical role in the dispersal of heavy elements, and that supermassive black holes exist in the centers of most galaxies. It is healthy to maintain a little skepticism, but not to the point of inaccurately describing the state of understanding. To say that "The evidence indicates that most galaxies harbor massive black holes... " would seem to be a good compromise that satisfies both the brevity and accuracy criteria. The search for knowledge using the scientific method proceeds in a random walk, with steps that can be forward, backward, or sideways. It moves along a broad path from having an idea to thinking something might be true, to being so sure as to say that we know it is true. For example, we know that gravity acts throughout the universe, and that we can use our knowledge of the law of gravity to launch satellites and send them to planets in the outer reaches of the solar system. Progress along the path from ideas to knowledge is driven by applying the steps of the scientific method - observation, hypothesis, and testing with more observations or experiments. Several hypotheses are almost always proposed to explain initial observations, and further observations are undertaken to distinguish between hypotheses. For scientists who formulate hypotheses, the sobering fact is that most of their hypotheses will turn out to be wrong. But that’s not necessarily a bad thing. As Nobel laureate Frank Wilczek said, "If you don't make mistakes, you are not working on hard enough problems." 
The process of developing an idea into a working hypothesis and building models to test the hypothesis can take years or even decades, before it finally becomes a theory. More on theories next time. In the meantime, the evidence indicates that we should be more careful and use "The evidence indicates..." or words to that effect, instead of "Scientists believe..." when describing the state of scientific understanding. Part II: It Is Not Just a Theory... It Is a Theory!
<urn:uuid:16ba9467-6cb6-4e69-bdfd-1a3d7fa6c069>
3.015625
865
Content Listing
Science & Tech.
44.552821
Fewer Cold Nights and Cold Days; More Warm Nights and Hot Days. Global warming is increasing the frequency and intensity of some types of extreme weather. For example, warming is causing more rain to fall in heavy downpours. There are also longer dry periods between rainfalls. This, coupled with more evaporation due to higher temperatures, intensifies drought. Wet places have generally become wetter, while dry places have become drier. Heat waves have become more frequent and intense, while very cold days have decreased. [Figures: Summer 2003 Heatwave in Europe; Drought Has Increased Globally from 1900 to 2002]
<urn:uuid:9ab8e0d2-ccf5-446c-b3b9-c95f6ce83533>
3.703125
123
Knowledge Article
Science & Tech.
39.02
Date: 10/24/00 2:39:27 PM Pacific Daylight Time Just looked at the "tubes" you have on your site. Of course it is impossible to say exactly what they are unless someone can remote view them, but I would like to offer a few observations. Looking at http://ida.wr.usgs.gov/fullres/divided/m04002/m0400291a.jpg we see what appears to be 3 reinforced plastic tubes entering a cave. If we assume that they are tubes with concentric reinforcing bands of thicker plastic or other material evenly spaced (similar to a dryer hose, but complete circles instead of a helix), then they do appear to fit this description quite well. If you take a plastic tube with evenly spaced marks on it and stretch it, the tube will become thinner and the marks will move further apart. This is exactly what we see on the top tube, where it is being stretched by gravity where it drops into the cave. The tube becomes thinner, and the reinforcements are further apart. Also, if you overpressure a tube, the tube will become wider and the bands will move further apart, just as when you blow up a balloon. The tube toward the bottom shows what appears to have been an overpressuring, with the tube bulging out and the bands further apart. The appearance that this is a bulge is enhanced by the reflection of the sun off the curved cross section of the bulge, and by the likelihood that a matte surface becomes smooth upon stretching (stretch most balloons and you will see them go from matte to shiny). Now a structure of this size, with no more reinforcement than that, would be unlikely to be built to handle any liquids. The weight of the liquid would most likely collapse the tube even with the lower gravity of Mars. Instead I believe that what was being piped was a gas. Also, it appears that the channels that the tubes are in may have eroded due to the tubes being buried there, and a deep hole where water could flow to (when Mars had water). 
Or they may have been laid into valleys, and one of them later got covered up by blowing dust. OK, the next question is: was the gas flowing into the hole or out of the hole? I suspect that something was being extracted from the hole for use elsewhere. Two things come to mind: a gas coming from deep in the area, such as natural gas, carbon dioxide, or sulfur dioxide; or water vapor, which could be condensed at the destination to yield pure water. I think this should be explored further by anyone that can. Very
<urn:uuid:a817adca-6e0e-4904-8986-130ffb2069bd>
2.765625
587
Comment Section
Science & Tech.
59.989209
Gravity is an invisible force that pulls everything towards the earth. No matter how hard an object is thrown, it eventually comes back down to earth. This is an example of gravity at work. Gravity accelerates everything downward at the same rate. It doesn't matter how heavy or how light the object is; ignoring air resistance, all falling objects speed up equally. Gravity also pulls matter together. The bigger an object is, or the greater its mass, the more gravitational force it exerts. Gravity helps keep the planets orbiting around the sun. Pulled by gravity, a ski jumper races down a steep chute into the air. The skier's high speed carries him forward as he drops, making a long jump. Gravity also holds the hot gases in the sun. This is important because we get our heat, light, and energy from the sun. Soapbox cars use gravity as their source of power. by Chad, Melissa, and Matthew
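The equal-rate idea can be checked with a short Python sketch (assuming standard surface gravity, g ≈ 9.81 m/s², and no air resistance): the time to fall a given height depends only on the height, never on the mass.

```python
import math

G = 9.81  # m/s^2, standard gravity near Earth's surface (assumed value)

def fall_time(height_m: float) -> float:
    """Seconds for an object dropped from rest to fall height_m metres."""
    # t = sqrt(2h / g) -- note that mass appears nowhere in the formula
    return math.sqrt(2 * height_m / G)

# A 0.1 kg apple and a 100 kg anvil dropped from 20 m land together:
print(f"{fall_time(20.0):.2f} s")  # ~2.02 s regardless of mass
```

Mass cancels out because the gravitational force on an object is proportional to its mass, while its resistance to acceleration (inertia) is proportional to the same mass.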
<urn:uuid:99b8cfaa-9093-4c73-a971-d77d1ed3a2d5>
3.75
197
Knowledge Article
Science & Tech.
72.03625
December 21, 2009 (New 2011 reference added.) Celebrated Moon Rocks--- Overview and status of the Apollo lunar collection: A unique, but limited, resource of extraterrestrial material. Written by Linda M. V. Martel Hawaii Institute of Geophysics and Planetology The Need for Lunar Samples and Simulants: Where Engineering and Science Meet sums up one of the sessions attracting attention at the annual meeting of the Lunar Exploration Analysis Group (LEAG), held November 16-19, 2009 in Houston, Texas. Speakers addressed the question of how the Apollo lunar samples can be used to facilitate NASA's return to the Moon while preserving the collection for scientific investigation. Here is a summary of the LEAG presentations of Dr. Gary Lofgren, Lunar Curator at the NASA Johnson Space Center in Houston, Texas, and Dr. Meenakshi (Mini) Wadhwa, Professor at Arizona State University and Chair of NASA's advisory committee called CAPTEM (Curation and Analysis Planning Team for Extraterrestrial Materials). Lofgren gave a status report of the collection of rocks and regolith returned to Earth by the Apollo astronauts from six different landing sites on the Moon in 1969-1972. Wadhwa explained the role of CAPTEM in lunar sample allocation. The six Apollo missions that landed astronauts on the Moon returned a collection of rock and soil samples weighing approximately 382 kilograms (842 pounds) and consisting of 2,196 separate samples. Today there are more than 110,000 individually numbered subsamples (split, chipped or sawed pieces) available to investigators for detailed studies. The collection also includes 16.5 meters (54 feet) of core samples pulled from the top of the lunar regolith. (The fine-grained, fragmental, loose material blanketing the Moon is most commonly referred to as soil but it has none of the organic sediment component as on Earth. The more precise term is regolith.) The number of samples increased as the missions progressed, as shown in the table below. 
Click on the emblems for more information about the missions from NASA. These missions, the astronauts, the thousands of people who worked to make the missions possible, and the lunar samples brought back to Earth were celebrated worldwide. Successful Returns: [LEFT] Seated on the back of the car from left to right, the Apollo 11 astronauts, Edwin Aldrin, Michael Collins, and Neil Armstrong, are celebrated at a ticker tape parade in New York City. [RIGHT] The first Apollo 11 sample return container, holding the collected lunar surface material, is shown just after it arrived on July 25, 1969 at the Manned Spacecraft Center, now known as the Johnson Space Center in Houston, Texas. The rock box had arrived only minutes earlier at Ellington Air Force Base by air from the Pacific recovery area. Today NASA continues to take charge of the curation and allocation of the Apollo lunar samples. The specially-built Lunar Sample Laboratory Facility, 30 years old this year, is a class 10K clean room (no more than 10,000 particles of 0.5-micron size per cubic foot of air inside the laboratory). It is housed in a special building at the Johnson Space Center in Houston, Texas. Workers wear clean coveralls, hats, gloves, and shoe covers to minimize contamination. NASA is in charge of organizing and storing the Apollo lunar materials. [LEFT] Samples are transferred for processing or examination into these sealed, stainless-steel glove cabinets in the Lunar Sample Laboratory at the NASA Johnson Space Center in Houston, Texas. The cabinets are continuously purged by nitrogen, a relatively non-reactive gas, to create an environment with minimum chemical reactions with the samples. Gloves are fitted to holes in the sides of the cabinets, capped when not in use, so that workers can reach and handle the samples. The special airlock in the wall, on the right side of the cabinet, opens to the pristine corridor where the samples are transferred to and from the storage vault. 
[RIGHT] Here I am inside the laboratory examining an Apollo 15 lunar breccia with an optical microscope at one of the cabinet viewing stations; photo by Jeff Taylor. Meticulous facilities and strict handling procedures ensure the continued scientific integrity of the Apollo lunar samples for the needs of the research and engineering communities today and into the future. About 70% of the total weight of Apollo lunar samples is located in the Lunar Sample Laboratory's pristine sample vault. "Pristine" lunar samples (those continuously in NASA custody since return from the Moon) are stored in multiple layers of packaging in cabinets organized by mission. They are handled in stainless-steel glove cabinets purged by high-purity nitrogen gas, which is relatively non-reactive, in an environment monitored continuously for oxygen and moisture contents to minimize degradation of the samples or chemical reaction with air. Read Jeff Taylor's description of what it's like to work in the lunar laboratory at Johnson Space Center in this excerpt from his chapter in the book The Planets published by Bantam Books [link to excerpt]. Approximately 8% of the total weight of the collection is stored in the returned sample vault. These are samples lent to authorized researchers and returned to NASA. They are re-inventoried as "returned" because these samples were exposed to air when they were located in the investigators' laboratories. The samples are individually bagged, tagged, and made available again for other research projects when contamination is less of a concern. Another 13% of the total weight is stored in the Brooks Air Force Base remote storage facility, which was completed in 2002. This representative sampling of the collection is stored at the second location to ensure the entire collection would not be lost in the event of a major hurricane or other catastrophe at Johnson Space Center. 
The other 9% of the total weight of lunar samples is currently outside the custody of the Johnson Space Center. Some are on loan to scientists and educators for research and teaching projects; other samples are on loan to museums, planetariums, and public scientific expositions [see the list of international Lunar Sample Display Locations]; a small percentage has been destroyed during approved experimentation; and some pieces of Apollo 11 and Apollo 17 samples were given as official gift plaques to all the states of the United States, to Puerto Rico, and to 135 foreign nations. U.S. regulations prohibit private ownership of Apollo lunar samples. The last container of lunar samples from the last Apollo mission was logged into the lunar laboratory on January 30, 1973. From their first arrival, the samples from the Apollo missions have been under continuous investigation. They are, as you can imagine, highly sought after for scientific research in cosmochemistry, and for testing hypotheses of the origin of the Earth/Moon system, planetary formation, and solar system evolution. The renewed interest in robotic and human exploration of the Moon has spawned substantial interest in studying lunar materials among the engineering/resource utilization community. Their studies sometimes require lunar samples to validate development of tools and processes using simulants (soils made from Earth materials to mimic lunar properties). Because of the obviously limited supply of Apollo lunar samples, NASA has a robust allocation system that has been in place since the beginning of the collection. It distributes nearly 400 samples each year. Apollo 17 scientist-astronaut Harrison H. Schmitt collects lunar rake samples at the Taurus-Littrow landing site. This rake was used to collect samples ranging in size from 1.3 centimeters to 2.5 centimeters. Lunar Curator, Dr. Gary Lofgren, works with Dr. 
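As a quick sanity check, the storage shares quoted above (70% pristine vault, 8% returned vault, 13% remote storage, 9% outside JSC custody) do account for the whole collection; the sketch below (Python, masses approximate) simply turns those percentages into kilograms of the ~382 kg total.

```python
# Storage breakdown of the Apollo collection, using the article's figures.
TOTAL_KG = 382  # approximate total mass returned by the six missions

shares = {
    "pristine sample vault (JSC)": 70,
    "returned sample vault (JSC)": 8,
    "Brooks AFB remote storage": 13,
    "outside JSC custody": 9,
}

assert sum(shares.values()) == 100  # the four shares cover everything

for location, pct in shares.items():
    print(f"{location:28s} {pct:3d}%  ~{TOTAL_KG * pct / 100:5.1f} kg")
```

So roughly 267 kg of pristine material remains vaulted at JSC, consistent with the allocation rule later in the article that pristine samples are never drawn down below 50% by weight.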
Meenakshi (Mini) Wadhwa and CAPTEM (Curation and Analysis Planning Team for Extraterrestrial Materials), a NASA advisory committee, to meet the needs of scientists and engineers who wish to obtain the most appropriate materials from the collection for their studies. Requests are considered for both basic research in planetary science and for applied studies including lunar materials beneficiation, resource utilization, toxicity, or hazards assessment. NASA provides access to the Apollo rocks, soils, and regolith core samples for destructive and non-destructive analyses. The checklist for requestors of Apollo lunar samples looks something like this: For planetary science studies, the request is submitted to the Lunar Sample Curator, Dr. Gary Lofgren, at NASA Johnson Space Center. For engineering/resource utilization studies, the request is submitted to the Lunar Simulant Curator, Dr. Carlton Allen, also at NASA Johnson Space Center, who verifies that all necessary tests with lunar simulants have been completed satisfactorily, and determines whether the request warrants use of lunar samples, in which case it is forwarded to the Lunar Sample Curator. The Lunar Sample Curator evaluates the submitted request and supporting materials, and makes a curatorial allocation if the request is from an investigator who has been approved previously for sample allocation by CAPTEM, and the request is for thin sections, "returned" lunar samples, or less than one gram of other lunar samples with no pristinity issues. The Curator otherwise forwards the request to CAPTEM for evaluation if the request is from a new investigator, and/or the request involves larger than one gram of material, or any samples with pristinity issues. Furthermore, with very few exceptions, no lunar sample will be allocated that reduces the remaining pristine sample below 50% by weight. The lunar sample requests forwarded to CAPTEM are evaluated by this standing committee. 
A positive recommendation by CAPTEM, followed by approval by NASA Headquarters, constitutes formal approval of the request. The Lunar Sample Curator prepares a Lunar Sample Loan Agreement (including a security plan) to be signed by the investigator. Finally, samples less than 10 grams are shipped within the U.S. by U.S. registered mail, outside the U.S. by U.S. diplomatic pouch mail to the American embassy nearest the investigator's location. Samples larger than 10 grams must be hand carried by the investigator or his/her representative. The Apollo lunar samples are a unique, but limited, resource of extraterrestrial rocks and regolith. Rest assured these treasured samples are in good hands. The planetary science community has a long heritage of developing sample-handling protocols and instrumentation for maximizing science while minimizing the amount of sample consumed. This approach is a good one and a necessary one for assuring that these lunar materials will be available for the ongoing testing of hypotheses, old and new, and development of new instruments, tools, and technologies as we plan and realize humanity's return to the Moon.
<urn:uuid:2b020090-4695-4e40-ad26-4708047cf3e5>
3.40625
2,210
Knowledge Article
Science & Tech.
32.589078
The solid state of water is known as ice; the gaseous state is known as water vapor (or steam). The units of temperature (formerly the degree Celsius and now the Kelvin) are defined in terms of the triple point of water, 273.16 K (0.01 °C) and 611.2 Pa, the temperature and pressure at which solid, liquid, and gaseous water coexist in equilibrium. Water exhibits some very strange behaviors, including the formation of states such as vitreous ice, a noncrystalline (glassy), solid state of water. At temperatures greater than 647 K and pressures greater than 22.064 MPa, a collection of water molecules assumes a supercritical condition, in which liquid-like clusters float within a vapor-like phase. An important feature of water is its polar nature. The water molecule forms an angle, with hydrogen atoms at the tips and oxygen at the vertex. Since oxygen has a higher electronegativity than hydrogen, the side of the molecule with the oxygen atom has a partial negative charge. A molecule with such a charge difference is called a dipole. The charge differences cause water molecules to be attracted to each other (the relatively positive areas being attracted to the relatively negative areas) and to other polar molecules. This attraction is known as hydrogen bonding. This relatively weak (relative to the covalent bonds within the water molecule itself) attraction results in physical properties such as a relatively high boiling point, because a lot of heat energy is necessary to break the hydrogen bonds between molecules. For example, sulfur is the element below oxygen in the periodic table, and its equivalent compound, hydrogen sulfide (H2S) does not have hydrogen bonds, and though it has twice the molecular weight of water, it is a gas at room temperature. The extra bonding between water molecules also gives liquid water a large specific heat capacity. Hydrogen bonding also gives water an unusual behaviour when freezing. 
Just like most other materials, the liquid becomes denser with lowering temperature. However, unlike most other materials, when cooled to near freezing point, the presence of hydrogen bonds means that the molecules, as they rearrange to minimise their energy, form a structure that is actually of lower density: hence the solid form, ice, will float in water. In other words, water expands as it freezes (most other materials shrink on solidification). Liquid water reaches its highest density at a temperature of 4 °C. This has an interesting consequence for water life in winter. Water chilled at the surface becomes denser and sinks, forming convection currents that cool the whole water body, but when the temperature of the lake water reaches 4°C, water on the surface, as it chills further, becomes less dense, and stays as a surface layer which eventually forms ice. Since downward convection of colder water is blocked by the density change, any large body of water frozen in winter will have the bulk of its water still liquid at 4°C beneath the icy surface, allowing fish to survive. This is one of the principal examples of finely-tuned physical properties that support life on Earth that is used as an argument for the anthropic principle. Another consequence is that ice will melt if sufficient pressure is applied. When an ionic or polar compound enters water, it is surrounded by water molecules. The relatively small size of water molecules typically allows many water molecules to surround one molecule of solute. The partially negative dipoles of the water are attracted to positively charged components of the solute, and vice versa for the positive dipoles. In general, ionic and polar substances such as acids, alcohols, and salts are easily soluble in water, and nonpolar substances such as fats and oils are not. 
Nonpolar molecules stay together in water because it is energetically more favorable for the water molecules to hydrogen bond to each other than to engage in van der Waals interactions with nonpolar molecules. An example of an ionic solute is table salt; the sodium chloride, NaCl, separates into Na+ cations and Cl- anions, each being surrounded by water molecules. The ions are then easily transported away from their crystalline lattice into solution. An example of a nonionic solute is table sugar. The water dipoles hydrogen bond to the dipolar regions of the sugar molecule and allow it to be carried away into solution. Pure water is actually a good insulator (poor conductor), meaning that it does not conduct electricity well. Because water is such a good solvent, however, it often has some solute dissolved in it, most frequently salt. If water has such impurities, then it can conduct electricity much better, because impurities such as salt provide free ions in aqueous solution through which an electric current can flow. Chemically, water is amphoteric: able to act as an acid or base. Occasionally the term hydroxic acid is used when water acts as an acid in a chemical reaction. At a pH of 7 (neutral), the concentration of hydroxide ions (OH-) is equal to that of the hydronium (H3O+), or hydrogen (H+), ions. If the equilibrium is disturbed, the solution becomes acidic (higher concentration of hydronium ions) or basic (higher concentration of hydroxide ions). Water can act as either an acid or an alkali in reactions. According to the Brønsted-Lowry system, an acid is defined as a species which donates a proton (an H+ ion) in a reaction, and an alkali as something which receives a proton. When reacting with a stronger acid, water acts as an alkali, and it acts as an acid when reacting with a weaker acid (that is, with a base). For instance, it will receive an H+ from HCl in the equilibrium: HCl + H2O ---> H3O+ + Cl-. Here water is acting as an alkali, by receiving an H+ ion.
An acid donates an H+ ion, and water can also do this, as in its reaction with ammonia (NH3): NH3 + H2O ---> NH4+ + OH-

Filtering: Water is passed through a sieve that catches small particles. The tighter the mesh of the sieve, the smaller the particles must be to pass through. Filtering is not sufficient to completely purify water, but it is often a necessary first step, since such particles can interfere with the more thorough purification methods.

Boiling: Water is heated to its boiling point long enough to inactivate or kill microorganisms that normally live in water at room temperature. In areas where the water is "hard" (containing dissolved calcium salts), boiling decomposes the bicarbonate ion, resulting in some (but not all) of the dissolved calcium being precipitated in the form of calcium carbonate. This is the so-called "fur" that builds up on kettle elements etc. in hard water areas. With the exception of calcium, boiling does not remove solutes of higher boiling point than water, and in fact increases their concentration (due to some water being lost as vapour).

Carbon filtering: Charcoal, a form of carbon with a high surface area due to its mode of preparation, adsorbs many compounds, including some toxic compounds. Water is passed through activated charcoal to remove such contaminants. This method is most commonly used in household water filters and fish tanks. Household filters for drinking water sometimes also contain silver, trace amounts of silver ions having a bactericidal effect.

Distilling: Distillation involves boiling the water to produce water vapour. The water vapour then rises to a cooled surface where it can condense back into a liquid and be collected. Because the solutes are not normally vaporized, they remain in the boiling solution. Even distillation does not completely purify water, because of contaminants with similar boiling points and droplets of unvaporized liquid carried with the steam.
However, 99.9% pure water can be obtained by distillation.

Reverse osmosis: Mechanical pressure is applied to an impure solution to force pure water through a semi-permeable membrane. The process is called reverse osmosis because normal osmosis would result in pure water moving in the other direction to dilute the impurities. Reverse osmosis is theoretically the most thorough method of large-scale water purification available, although perfect semi-permeable membranes are difficult to create.

Ion exchange chromatography: In this case, water is passed through a charged resin column that has side chains that trap calcium, magnesium, and other heavy metal ions. In many laboratories, this method of purification has replaced distillation, as it provides a high volume of very pure water more quickly and with less energy use than other processes. Water purified in this way is called deionized water.

The systematic acid name of water is hydroxic acid or hydroxylic acid, although these terms are rarely used. Likewise, the systematic alkali name of water is hydrogen hydroxide – both acid and alkali names exist for water because it is able to react both as an acid or an alkali, depending on the strength of the acid or alkali it is reacted with (it is amphoteric).
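The pH bookkeeping used earlier (pH 7 means equal hydronium and hydroxide concentrations) follows from the definition pH = -log10[H3O+]. A minimal sketch in Python; the function names are my own, not from any source:

```python
def hydronium_concentration(pH):
    """Molar hydronium concentration [H3O+] from the definition pH = -log10[H3O+]."""
    return 10.0 ** (-pH)

def is_acidic(pH):
    # Acidic solutions have more hydronium than the neutral 1e-7 mol/L.
    return hydronium_concentration(pH) > 1e-7

# At neutrality (pH 7), [H3O+] = [OH-] = 1e-7 mol/L.
print(hydronium_concentration(7.0))
print(is_acidic(3.0), is_acidic(9.0))
```

Each unit of pH is a tenfold change in hydronium concentration, which is why a small pH disturbance corresponds to a large shift in the acid-base equilibrium.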
<urn:uuid:312c735c-3573-47f1-82b5-a5a9dbaa7a96>
4.125
1,914
Knowledge Article
Science & Tech.
32.833679
The study of processes that make life possible is hardly a leisurely pursuit, but that doesn't preclude researchers from taking advantage of the most advanced video gaming technology available to aid in their work. A team of University of Illinois at Urbana–Champaign (U.I.U.C.) physicists has assembled a supercomputer consisting of several hundred superfast graphics processing units (GPUs) -- typically used for rendering highly sophisticated video game graphics -- that they think will help them build a simulation depicting how chromatophore proteins turn light energy into chemical energy, a process called photosynthesis. "Ninety-five percent of the energy that life on Earth requires are fueled by photosynthetic processes," says Klaus Schulten, a U.I.U.C. physics professor leading the simulation-building effort and director of the school's Theoretical and Computational Biophysics Group. To better understand how these processes work, Schulten's team is assembling a computer-based, virtual photosynthetic chromatophore.
<urn:uuid:769238e1-d96e-48b6-a43f-acb5111218a1>
3.53125
337
Content Listing
Science & Tech.
39.173269
Wild Thing is an occasional series where JHU Press authors write about the flora and fauna of the natural world—from the rarest flower to the most magnificent beast.

Guest post by Uldis Roze

Having grown up in large cities where porcupines are absent, I was in my 30s before I saw my first porcupine in the wild. We met at night, in the light cone of my flashlight, as the porcupine was chewing our freshly-built cabin at a woods edge in the Catskills. The animal looked surreal and wild, but I had no doubt about its identification. It had quills, therefore it was a porcupine. But the quills that give porcupines their easy identification and shape their natural histories are themselves the source of endless mystery and mystification. Do porcupines throw their quills? All scientific accounts assure readers to the contrary, but it wasn't always so. Writing in the April 16, 1956 issue of Sports Illustrated, Dr. William J. Lang describes a porcupine he had surprised in a woodshed: "With an upward flick of his tail, one quill grazed my cheek, another stuck in my hat brim . . . three more clung by their barbed tips to the cedar splits." Dr. Lang notwithstanding, porcupines can no more throw their quills than dogs can throw their hair, and if they somehow evolved the capacity to do so, it would do the throwers no good. This is for reasons of fundamental physics: the punch carried by a moving body is set by its momentum, the product of its mass and velocity. Because a porcupine quill has negligible mass, it would carry negligible momentum, and serve very poorly in the animal's defense.

A porcupine misunderstood. The royal crest of Louis XII of France featured a crested porcupine, shown throwing a shower of quills at distant enemies, while keeping other quills in reserve for an impregnable defense. Perhaps because Louis XII lost most of his military engagements, his successors abandoned the porcupine symbolism. Photo by Philippa Moore.
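That momentum comparison is easy to sketch numerically. The numbers below are purely illustrative assumptions of mine (a fraction-of-a-gram quill, a half-kilogram effective tail mass, an arbitrary flick speed), not measurements:

```python
# Momentum p = m * v. All masses and speeds here are assumed, illustrative values.
quill_mass = 0.0003  # kg (~0.3 g): a lone quill, assumed
tail_mass = 0.5      # kg: assumed effective mass of the swinging tail
speed = 5.0          # m/s: assumed speed of the tail flick

p_thrown_quill = quill_mass * speed                 # a quill "thrown" by itself
p_quill_on_tail = (quill_mass + tail_mass) * speed  # a quill riding the tail

print(f"thrown quill:  {p_thrown_quill} kg*m/s")
print(f"quill on tail: {p_quill_on_tail} kg*m/s, "
      f"about {p_quill_on_tail / p_thrown_quill:.0f} times more")
```

The exact figures don't matter; for any plausible values the tail-borne quill wins by orders of magnitude, which is exactly the author's point about why thrown quills could never work.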
Perhaps the flying quill hypothesis is so persistent because when quills arrive in human skin, they materialize in a microsecond, faster than the eye can follow. But quills do not arrive in flight–they arrive on the surface of the tail. And because the mass of the incoming projectile is not the mass of the quill alone but the mass of the quill plus tail, the momentum is high and the quill can penetrate deeply. Another source of quill confusion is the one-way barbs. True or false: all porcupine quills have barbed tips. False! No Old-World porcupine (11 spp.) carries barbed quills. With a single exception, all New-World porcupines (15 spp.) carry barbed quills. The presence or absence of barbs is possibly the most fundamental difference between quills of the two porcupine families. Old-World porcupines are large animals, with some species reaching weights of 50 lbs in the wild. They are defended by large quills with sharp, knife-like tips that can kill lions and leopards. Large quills require large bodies for delivery. But large bodies are not an option for New-World porcupines, who live in trees. Their small bodies carry small quills. With the evolutionary invention of barbs, these small quills can travel deep inside a predator's body, pulled by the predator's own muscles until they either strike an organ or exit the body, far from the point of entry. That said, there are limits to the defense offered by small quills. Unlike their Old-World cousins, who can stand up to the large cats of Africa and Asia, New-World porcupines have no effective defense against their North American predator, the mountain lion. Rick Sweitzer, who studied a porcupine population in the Great Basin desert of Nevada, reports what happened when a single mountain lion started preying on his porcupines. In a 3-year period, the population plummeted from 82 animals to just 5. Instead of avoiding the quills, mountain lions eat their porcupines whole, and accept the consequences.
Mountain lions autopsied in Oregon routinely showed quill tips embedded in the gums, where they had come to rest against the jawbone. How many quills does a North American porcupine carry? An answer given by one respondent is “roughly 658, but I lost count after they kept stabbing me.” A more common answer is “around 30,000.” The number, enshrined in the biological literature, seems to make sense because hundreds of quills may be lost with each predator attack, and lost quills require months to replace. Therefore carrying a hundred-fold excess represents an effective safety (pin)cushion. But the source of the 30,000 quill figure cannot be found. The earliest mention of the number is by Donald Spencer in 1950, in a National Geographic article. Spencer gives no indication that he counted the quills himself, nor identifies the source who did. Much else about porcupine quills remains unknown or misunderstood. Quills of North American porcupines carry surface antibiotics, and help disseminate a warning odor. Do other porcupine species show the same capabilities? We don’t know and can’t predict, because North American porcupines follow a unique life style, even within its New-World family. Shouldn’t we approach porcupines with the same openness we extend to our wives, husbands, lovers: work to know them as they are, not as we perceived them on first meeting? Uldis Roze is professor emeritus at Queens College in New York City. He is a contributor to Natural History magazine and is the author of Porcupines: The Animal Answer Guide, published by JHU Press.
<urn:uuid:fe8cb036-7141-4523-893c-7fd1093292a7>
3.40625
1,254
Personal Blog
Science & Tech.
51.272735
There are many examples from computing but I guess you're after areas that have mathematical interest in their own right. I think a good one is the Monte Carlo method. The idea has been around since Buffon's needle experiment for estimating $\pi$. But it became important leading up to, and during, the Manhattan Project, when Fermi and Ulam both came up with the method as a way to make impossible-seeming integrals over high dimensional spaces tractable. For example, those involved in neutron transport. Today it's in use everywhere from finance to 3D graphics (with Pixar holding the patent on its application to ray-tracing). There have been all kinds of interesting variations such as Markov chain Monte Carlo and various 'stratification' strategies to increase reliability through 'even' sampling, and Las Vegas algorithms that guarantee correct results. There have been some interesting recent 'pure' applications to mathematics, e.g. allowing 'exact' samplings of spaces such as the domino tilings of diamonds. And randomised algorithms are routine in areas like number theory - e.g. primality testing. With hindsight it seems like an obvious idea but I think that at the time the idea of using random numbers to compute a non-random quantity was a pretty big shift of mindset. The Manhattan Project probably "had a significant impact in society".
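To make the "random numbers computing a non-random quantity" idea concrete, here is a minimal Monte Carlo estimate of $\pi$ in Python (sampling the unit square and counting hits inside the quarter circle; the function is my own sketch, not from any particular source):

```python
import random

def estimate_pi(n_samples, seed=42):
    """Estimate pi from the fraction of random points in the unit square
    that fall inside the quarter circle x^2 + y^2 <= 1 (whose area is pi/4)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # near 3.14, with error shrinking like 1/sqrt(n)
```

The $1/\sqrt{n}$ convergence is exactly why the stratification and Markov chain variants mentioned above matter: they squeeze more accuracy out of the same number of samples.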
<urn:uuid:463a875b-754b-4085-8171-2b35f7c4436a>
2.875
272
Comment Section
Science & Tech.
38.306017
Genetic Studies of Bats Impacted by Wind Farms There is increasing evidence that turbines used for power generation at wind farms can cause significant mortality of bats. Several migratory species, including Red Bats (Lasiurus borealis and Lasiurus blossevillii), Hoary Bats (Lasiurus cinereus) and Silver-haired Bats (Lasionycteris noctivagans) seem to be affected most often. Agencies responsible for addressing the effects of wind turbine mortalities face the difficult task of managing planning, construction, monitoring, and mitigation without basic information about the effects mortalities may be having on the bat populations affected. An interdisciplinary team of researchers is addressing this problem by using genetic methods to gather baseline data on levels of genetic variation, geographic patterns of distribution of this variation, and population sizes of bat species impacted by wind farms. These methods provide not only a means of evaluating the current status of populations, but can allow tracking of population changes over time. Comparisons with data from historical collections (e.g., DNA isolated from tissue samples taken from dry study skins in museum collections) will allow identification of trends that may predate the advent of wind farms. In order to provide a clear picture of population structure and reduce the error bars on analytical results, large numbers of samples from across the geographic range of a species are needed. Bats killed at wind farms will provide many of the samples needed for this project, although samples collected at other sites and for other purposes will also be included. Tissue samples will be analyzed using standard methods to isolate and sequence DNA. Both mitochondrial genes (e.g., cytochrome b, D-loop) and nuclear markers (e.g., microsatellites) will be included.
Products of this research will include identification of distinct evolutionary lineages and the relationships among them, assessments of current levels of gene flow and genetic structuring among populations, and estimates of past and current population sizes. We anticipate that bats collected at wind farms in future years will provide samples that can be used to effectively assess the effects of wind farm mortalities on population size and structure at local, regional, and even continental levels.
<urn:uuid:3c314fa2-0392-455f-af88-c886d5dc9279>
3.453125
451
Academic Writing
Science & Tech.
21.084963
Science Fair Project Encyclopedia

Asian Tiger Mosquito

Aedes albopictus (Family Culicidae, suborder Culicomorpha), the Asian Tiger Mosquito or Forest Day Mosquito, is characterized by its black and white striped legs and small, black and white body. Other North American mosquitoes such as Ochlerotatus canadensis have a similar leg pattern. Asian Tiger Mosquitoes were first found in North America in a shipment of used tires at the port of Houston in 1985. Since then they have spread across the southern USA, and as far up the East Coast as southern New Jersey. This species is an introduced species in Hawaii as well, but has been there since before 1896. This mosquito has become a significant pest in many communities because it closely associates with humans (rather than living in wetlands), and typically flies and feeds in the daytime rather than at night or at dusk and dawn. It is a container and puddle breeder, needing only a few ounces of water to breed. It has a short flight range (< 200 m), so breeding sites are likely to be close by where you find this mosquito (Nishida & Tenorio, 1993).

Controlling Asian Tiger Mosquitoes

A lot of futile and risky spraying has been done in the last few years because of the West Nile virus scare. This mosquito is active in the daytime, so its presence is revealed when people are being bitten during the day. Most mosquito spraying is done at night and will have little effect on Asian Tiger Mosquitoes. (Daytime spraying is usually a violation of label directions because of foraging bees on blossoms in the application area.) It is, however, simple to find and deal with the breeding spots, which are never far from where people are being bitten, since this is a weak flyer. Locate puddles that last more than 3 days, sagging or plugged roof gutters, old tires holding water, litter, bird baths, kiddie pools, and any other possible containers or pools of standing water.
Flowing water will not be a breeding spot, and water that contains minnows is not usually a problem, because the fish eat the mosquito larvae. Dragonflies are also an excellent means of control. Dragonfly larvae eat mosquito larvae in the water, and adults will snatch adult mosquitoes as they fly. Insecticide application that also kills dragonflies may actually cause only a brief suppression of mosquitoes, followed by a long-term increase in populations. Whenever possible, all sources of standing water, even if only a quarter cup, should be dumped every three days. Litter, especially containers in ditches, can hold water after the ditch dries up, and all litter should be cleaned up. Bird baths, wading pools, and any other container that can hold rainwater should be emptied. Rain barrels used for garden irrigation, and many other containers that cannot be dumped, can be treated with a spoonful of vegetable oil, which will suffocate mosquito larvae as they try to breathe at the surface. Any standing water in pools, catchment basins, etc., that cannot be drained, dumped, or treated with a small quantity of vegetable oil can be periodically treated with Bacillus thuringiensis israelensis. This is a disease organism that affects only the pest insects. It is readily available at farm, garden and pool suppliers.

- Nishida, G.M. and J.M. Tenorio. 1993. What Bit Me? Identifying Hawai'i's Stinging and Biting Insects and Their Kin. University of Hawaii Press, Honolulu. 72 pp.
- Tiger mosquitoes in Spain: news tracking the recent spread of the tiger mosquito to Spain.

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:c1eb2f67-fa9b-464c-a993-cebf1cf18023>
3.546875
794
Knowledge Article
Science & Tech.
48.648848
Promise meeting potential

In their model, the team identified three crucial variables to controlling an MFC: the amount of waste material (fuel), the accumulation of biomass on the anode, and the electrical potential in the biofilm anode. The third factor is a totally novel concept in MFC research. "Modeling the potential in the biofilm anode, we now have a handle on how the MFC is working and why. We can predict how much voltage we get and how to maximize the power output by tweaking the various factors," said Marcus. For example, the team has shown that the biofilm produces more current when the biofilm thickness is at a happy medium, not too thick or thin. "If the biofilm is too thick," said Marcus, "the electrons have to travel too far to get to the anode. On the other hand, if the biofilm is too thin, it has too few bacteria to extract the electrons rapidly from the fuel." To harvest the benefits of MFCs, the research team is using its innovative model to optimize performance and power output. The project, which has been funded by NASA and industrial partners OpenCEL and NZLegacy, lays out the framework for MFC research and development to pursue commercialization of the technology.

Contact: Joe Caspermeyer, Arizona State University
<urn:uuid:1452cacb-6472-43b7-bb4a-49b044e0fd51>
3.03125
276
Knowledge Article
Science & Tech.
43.545415
The Semaphore class works similarly to the Monitor and Mutex classes, but lets you set a limit on how many threads have access to a critical section. It's often described as a nightclub (the semaphore) where the visitors (threads) stand in a queue outside the nightclub, waiting for someone to leave in order to gain entrance. A critical section is a piece of code that accesses a shared resource (a data structure or device) under the condition that only a limited number of threads may execute it at a time. The Semaphore class provides all the methods and properties required to implement this. To use a semaphore in C#, you first need to instantiate an instance of a Semaphore object. The constructor, at a minimum, takes two parameters. The first is the number of resource slots initially available when the object is instantiated. The second parameter is the maximum number of slots available. If you want to reserve some slots for the calling thread, you can do so by making the first parameter smaller than the second. To reserve all slots for new threads, you should make both parameters equal. After you have instantiated your Semaphore object, you simply need to call the WaitOne() method when entering an area of code that you want restricted to a certain number of threads. When processing finishes, call the Release() method to release the slot back to the pool. The count on a semaphore is decremented each time a thread enters the semaphore, and incremented when a thread releases the semaphore. When the count is zero, subsequent requests block until other threads release the semaphore. When all threads have released the semaphore, the count is at the maximum value specified when the semaphore was created. Creating a new semaphore is accomplished through one of the existing constructors:

Using the code

- Semaphore(Int32, Int32): Initializes a new instance of the Semaphore class, specifying the maximum number of concurrent entries and optionally reserving some entries.
- Semaphore(Int32, Int32, String): Initializes a new instance of the Semaphore class, specifying the maximum number of concurrent entries, optionally reserving some entries for the calling thread, and optionally specifying the name of a system semaphore object.
- Semaphore(Int32, Int32, String, Boolean): Initializes a new instance of the Semaphore class, specifying the maximum number of concurrent entries, optionally reserving some entries for the calling thread, optionally specifying the name of a system semaphore object, and specifying a variable that receives a value indicating whether a new system semaphore was created.
- Semaphore(Int32, Int32, String, Boolean, SemaphoreSecurity): Initializes a new instance of the Semaphore class, specifying the maximum number of concurrent entries, optionally reserving some entries for the calling thread, optionally specifying the name of a system semaphore object, specifying a variable that receives a value indicating whether a new system semaphore was created, and specifying security access control for the system semaphore.

    Thread[] threads = new Thread[10];
    Semaphore sem = new Semaphore(3, 3);

    void C_sharpcorner()
    {
        Console.WriteLine("{0} is waiting in line...", Thread.CurrentThread.Name);
        sem.WaitOne();
        Console.WriteLine("{0} enters the C_sharpcorner.com!", Thread.CurrentThread.Name);
        Console.WriteLine("{0} is leaving the C_sharpcorner.com", Thread.CurrentThread.Name);
        sem.Release();
    }

    for (int i = 0; i < 10; i++)
    {
        threads[i] = new Thread(C_sharpcorner);
        threads[i].Name = "thread_" + i;
        threads[i].Start();
    }

Important facts about Semaphore

- There is no guaranteed order, such as FIFO or LIFO, in which blocked threads enter the semaphore.
- The Semaphore class does not enforce thread identity on calls to WaitOne() or Release(). It is the programmer's responsibility to ensure that threads do not release the semaphore too many times.
- For example, suppose a semaphore has a maximum count of two, and that thread A and thread B both enter the semaphore. If a programming error in thread B causes it to call Release twice, both calls succeed.
The count on the semaphore is full, and when thread A eventually calls Release, a SemaphoreFullException is thrown.
- Semaphores are of two types: local semaphores and named system semaphores.
- If you create a Semaphore object using a constructor that accepts a name, it is associated with an operating-system semaphore of that name.
- Named system semaphores are visible throughout the operating system, and can be used to synchronize the activities of processes.
- You can create multiple Semaphore objects that represent the same named system semaphore, and you can use the OpenExisting() method to open an existing named system semaphore.
- A local semaphore exists only within your process. It can be used by any thread in your process that has a reference to the local Semaphore object.
- Each Semaphore object is a separate local semaphore.
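Although the article's examples are in C#, the counting-semaphore pattern is the same in most runtimes. As a cross-language sketch using only Python's standard threading module (Python's BoundedSemaphore plays the role of the .NET semaphore here; it raises ValueError on over-release, much as .NET throws SemaphoreFullException):

```python
import threading

# Limit a critical section to 3 concurrent threads, mirroring new Semaphore(3, 3).
sem = threading.BoundedSemaphore(3)
lock = threading.Lock()
active = 0   # threads currently inside the critical section
peak = 0     # highest concurrency observed

def worker():
    global active, peak
    with sem:  # acquire on entry (like WaitOne), release on exit (like Release)
        with lock:
            active += 1
            peak = max(peak, active)
        # ... critical section: at most 3 threads execute here at once ...
        with lock:
            active -= 1

threads = [threading.Thread(target=worker, name=f"thread_{i}") for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("peak concurrency:", peak)  # never exceeds 3
```

Using the semaphore as a context manager guarantees the release happens even if the critical section raises, which sidesteps the "released too many times" class of bugs described above.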
<urn:uuid:ec519b6c-3250-4e20-afea-c9a9d9b758ac>
3.046875
1,108
Documentation
Software Dev.
31.613025
The United States is pushing for an Arctic agenda that promotes resource cooperation among Arctic and non-Arctic countries as part of a broader effort to foster diplomatic engagement in the High North. During a recent visit to Tromso, Norway in the Arctic Circle, Secretary of State Hillary Rodham Clinton emphasized that the United States is “committed to responsible management of those [Arctic] resources,” including oil, natural gas and other mineral resources. But while much attention is being focused on these lucrative mineral resources, there are significant opportunities for the United States and the other Arctic countries to enhance broader international cooperation, beginning with fisheries conservation. The Arctic is emerging as one of the most important maritime domains in the world. Environmental change is giving rise to new sea lanes that will cut the transit time between the Pacific and Atlantic, and opening up new areas for commercial development, including for oil, natural gas and minerals extraction, as well as fishing. There is no doubt that the opening of the Arctic is leading today to increased military, commercial and scientific activities. As these activities increase, it will become ever more important for Arctic countries and non-Arctic countries to cooperate around a range of emerging trends, including offshore energy development that could generate environmental challenges, commercial activity that could contribute to greater demand for search and rescue and other law enforcement capabilities, and increased military presence from Arctic (and potentially non-Arctic) countries that could foment uncertainty and lead to misperceptions about other countries’ intentions in the region. As U.S. policymakers look for opportunities to enhance cooperation in the Arctic Circle, it may be useful to begin with fisheries conservation. 
This rather low-politics area of engagement could get partners comfortably engaged in a discussion on Arctic issues that could then snowball into a broader conversation about cooperation around other security and foreign policy interests in the region. Here are a couple of ways that cooperation around protecting fisheries may serve broader foreign policy purposes in the Arctic Circle: Yesterday at the Rayburn House Office Building, the Wildlife Conservation Society (WCS) hosted a discussion, “Biodiversity Conservation in Afghanistan Advances U.S. Security Interests,” focusing on improving livelihoods and governance through natural resource management in Afghanistan – a cornerstone to long-term stability and achieving U.S. security interests in the state. As I learned yesterday, currently the most significant threats to Afghanistan’s natural resources include illegal hunting and trading, as well as an increase in deforestation and desertification. “Almost 80% of Afghanistan’s people depend directly upon the natural resource base for their survival and livelihoods, and three decades of near-continuous conflict has badly degraded this base,” said Afghanistan program director of the Wildlife Conservation Society, Dr. David Lawson. Most of WCS’s work in Afghanistan is community-based conservation, focusing on the local level, mobilizing local communities to institute new policies, laws and regulations and training community members “in natural resource management so they can work together to help build a sustainable future,” Lawson said. Another part of WCS’s work involves central governmental capacity building, which works to “improve the capacity of the government to take responsibility and manage the country’s critical resources,” according to Lawson. 
With help from the Ministry of Agriculture, Irrigation and Livestock and the National Environmental Protection Agency, WCS has helped the Afghan government to write environmental laws and regulations, as well as build nationally protected area networks, train officials and build government structures. Afghan individuals and communities participating in natural resource management benefit by generating income (some of them for the first time) and, as Lawson noted, “being able to benefit directly from conservation activities, and that actions taken to protect and preserve the environment can directly contribute to poverty reduction and improved community livelihoods.” I spent this weekend catching up on last week’s news – a busy week, with obviously nothing bigger than the pending British royal wedding (although we regular Us Weekly readers have known for weeks that this was coming). Oh yes, and the DPRK has built a new nuclear plant, and you should all have been keeping up on the steady reports from Lisbon. On Monday, we mentioned the successful conference on the UN Convention on Biological Diversity, held in Japan through last week. While I haven’t yet had time to read the details of the new Nagoya Protocol, it is worth highlighting an important foreign policy aspect of the conference: It was a big win for Japan, a long-standing and critical U.S. ally. In an editorial, The Asahi Shimbun sounded off with well-earned pride: “Agreement has been reached on the second major environmental treaty bearing the name of a Japanese city.” (See this article for a tight summary of what the conference accomplished, as well.)
Also last week, in what’s sure to be an important step in sculpting a renewed partnership with Japan, our fellow CNASers Patrick, Abe, and Dan released a report appropriately titled “Renewal: Revitalizing the U.S.-Japan Alliance.” This report follows on collaboration between CNAS and the Tokyo Foundation, which culminated last week in the release of a joint statement on the future of the alliance. We had a hand in the natural security section, which outlines areas ripe for cooperation, which I thought I’d post here in light of Japan’s environmental negotiations success: With two of the world’s leading science establishments, the United States and Japan acting in concert have a unique capacity to create a “green alliance” that addresses environmental and natural resource challenges. Together, Washington and Tokyo should address their dependence on scarce or insecure natural resources. This means above all reducing reliance on oil. The two allies can cooperate on advanced biofuels, energy storage technologies and infrastructure, including smart grid adoption. U.S. and Japanese companies have merged or established relationships that extend to wind, solar, nuclear and other non-petroleum energy sources. Both governments should supplement the private sector’s ongoing efforts by emphasizing cooperation to design demonstration projects for critical emerging technologies that are ready for testing and evaluation. Just beyond the tranquil picturesque landscape of the demilitarized zone on the Korean Peninsula lies modern day North Korea, a bizarre and mysterious world unto itself. The country is shrouded in uncertainty and most of what the outside world knows comes through accounts from defectors, rumors printed by the South Korean press and North Korean state-run media announcements. Case in point: at a recent U.S. 
Senate Hearing examining the current security situation on the Korean Peninsula, Senator John McCain asked Kurt Campbell, Assistant Secretary of State for East Asian and Pacific Affairs (and CNAS co-founder), if Kim Jong-un was the “likely successor” to his father Kim Jong-il, who has ruled since 1994. Secretary Campbell succinctly replied, “Your guess is as good as ours, sir.” The regime of Kim Jong-il consistently draws the attention of the international community due to its ominous chemical, biological, and nuclear weapons capabilities and often erratic behavior. Furthermore, the humanitarian situation is extremely dire, with 8.7 million people in need of food assistance, 1 in 3 children under the age of 5 malnourished, and twenty-seven percent of the population at or below the absolute poverty level, living on less than 1 dollar a day. However, while North Korea’s humanitarian and military challenges gain prominent attention from Western media and governments, the state of North Korea’s ecosystem is rarely covered despite the vast implications this issue will have for the Korean peninsula in the years ahead. In the case of the DPRK, the past is prologue: famine and drought in the mid-1990s precipitated rampant deforestation, land erosion, pillaging of forests, pollution, and the contamination of water supplies, which all still negatively affect the country today. From 1994 to 1997, when the famine was at its worst, North Koreans had little or no electricity, resorting mostly to firewood to heat their homes. Undoubtedly, the use of firewood during the energy crisis led to a sharp decline in forest resources. Fires, landslides, insect damage, and drought have further contributed to the degradation of forests since the 1990s.
Journalistic accounts, such as Barbara Demick’s book, Nothing to Envy: Ordinary Lives in North Korea, articulate the desperation of North Koreans during the famine, explaining that children would kill and eat rats, mice, frogs, tadpoles, and grasshoppers just to have something to fill their stomachs. Throughout the famine, North Koreans often resorted to a variety of other wild foods, such as grass, mushrooms, and tree bark, to alleviate their hunger, leaving many forests barren of vegetation or animal life. Indeed, the dietary dependency North Koreans have on the natural environment has significantly impacted the diversity (or lack thereof) of plant and animal life today. I almost forgot to flag this for you all. You need to check out the current (soon to be last month's) edition of Scientific American for this: "How Much Is Left? The Limits of Earth's Resources: A graphical accounting of the limits to what one planet can provide." Does it have energy? Check. Does it mention minerals? You betcha. Does it - dare I say it - raise the troubles of biodiversity loss? You can count on it. It's not a long piece, but a good overview of relevant info, cool graphs, and an interesting piece to SciAm's annual single-concept edition, this year on "The End." If you work within the natural resources and national security/foreign policy world, you may want to join us on the Hill next Wednesday at noon for a lunch discussion on that exact topic. This will be a great conversation - everything from Yemen's water woes, to Pakistan's water woes, to China's water woes... A little pop analysis. Here are the number of mentions of our major natural security topics in the just-released National Security Strategy:
Climate Change: 28
Agriculture: 3 (including specifically regarding India and Afghanistan)
Conservation (forests): 1
By comparison - and this is very interesting - Nuclear (energy and other): 74
That's right, folks.
The new NSS mentions "energy" more than "engagement" or "military." And "climate change" appears more than "intelligence." And for full context, here is a word cloud of the document (note: removed the words "United States").
Thread safety means that an object accessed by multiple threads is always in a valid and consistent state. More precisely, it breaks down to safety in terms of 3 different concepts: deadlock, starvation and race conditions. A deadlock most commonly occurs when one process requests a resource held by another process that is itself waiting for a resource held by the first process. For example, if Hamas says "We will continue the terror until you leave the territory", and Israel says "We will not leave the territory until you stop the terror", that's a classic deadlock. Starvation is when a process waits indefinitely for a resource to become free. For example, you could design a traffic light so that it switches every 5 hours, and it would do the job in the sense of preventing collisions, but it's not really a good solution, is it? Race conditions occur when two or more threads do not coordinate well and the result is nondeterministic. For example, if two people talk at the same time and a third person listens, he may hear a sentence that doesn't make any sense. Moreover, depending on how fast the two people speak, the outcome will be different.

Joined: Sep 12, 2003
Thanks for the reply. But I would love it if you could give me an example using threads so that I can get the correct picture. Thanks once again.

Hi Chandra, I tried to write a simple example but was unsuccessful; you can refer to this thread, where you can see what hazards ignoring thread safety can cause. Regards, Maulin

One example we bump into is servlets. The server might have just one object instance of a servlet class. Every time a request comes in, the server starts a new thread and calls doGet() on the servlet.
Thus, the servlet can be running in multiple threads for multiple users all at once. We have to be sure we write the servlet to be thread safe. One thing we must not do is use member variables. If we did something as simple as reading a member variable mCounter on one line and writing it back incremented on the next, we could be surprised to find another thread changes mCounter in between those two lines, which is the race condition described above. So most thread-safe classes don't use member variables unless they want to share the value among all threads, and then they have to synchronize very carefully.

A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. - John Ciardi
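To make the mCounter hazard concrete, here is a minimal sketch (the class and field names are made up for illustration): two threads increment a plain member variable in two steps and usually lose updates, while a counter guarded by synchronized always ends at the expected total.

```java
public class HitCounter {
    int mCounter = 0;       // plain member variable: not thread safe
    int mSafeCounter = 0;   // only touched inside synchronized: safe

    void unsafeIncrement() {
        int tmp = mCounter;   // another thread can run between...
        mCounter = tmp + 1;   // ...these two lines, and its update is lost
    }

    synchronized void safeIncrement() {
        mSafeCounter++;       // one thread at a time
    }

    public static void main(String[] args) throws InterruptedException {
        HitCounter c = new HitCounter();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                c.unsafeIncrement();
                c.safeIncrement();
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // safe is always 200000; unsafe is usually smaller because
        // of the race described above.
        System.out.println("safe=" + c.mSafeCounter + " unsafe=" + c.mCounter);
    }
}
```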
Extraction of Metals
Extraction of Copper - Purification by Electrolysis.
The anode is a block of impure copper. The cathode is a thin piece of pure copper. When electricity is passed through the cell, copper is dissolved at the anode by oxidation and Cu2+ ions go into solution.
Cu(s) - 2e- → Cu2+(aq)
At the cathode, copper is deposited by reduction.
Cu2+(aq) + 2e- → Cu(s)
Cu2+ ions move from the anode to the cathode, so the anode gets smaller as the cathode gets bigger. This is a redox reaction (continued).
Copyright © 2012 Dr. Colin France. All Rights Reserved.
Internally and externally induced climate variability Climate variations, both in the mean state and in other statistics such as, for example, the occurrence of extreme events, may result from radiative forcing, but also from internal interactions between components of the climate system. A distinction can therefore be made between externally and internally induced natural climate variability and change. When variations in the external forcing occur, the response time of the various components of the climate system is very different. With regard to the atmosphere, the response time of the troposphere is relatively short, from days to weeks, whereas the stratosphere comes into equilibrium on a time-scale of typically a few months. Due to their large heat capacity, the oceans have a much longer response time, typically decades but up to centuries or millennia. The response time of the strongly coupled surface-troposphere system is therefore slow compared with that of the stratosphere, and is mainly determined by the oceans. The biosphere may respond fast, e.g. to droughts, but also very slowly to imposed changes. Therefore the system may respond to variations in external forcing on a wide range of space- and time-scales. The impact of solar variations on the climate provides an example of such externally induced climate variations. But even without changes in external forcing, the climate may vary naturally, because, in a system of components with very different response times and non-linear interactions, the components are never in equilibrium and are constantly varying. An example of such internal climate variation is the El Niño-Southern Oscillation (ENSO), resulting from the interaction between atmosphere and ocean in the tropical Pacific. Feedbacks and non-linearities The response of the climate to the internal variability of the climate system and to external forcings is further complicated by feedbacks and non-linear responses of the components. 
A process is called a feedback when the result of the process affects its origin thereby intensifying (positive feedback) or reducing (negative feedback) the original effect. An important example of a positive feedback is the water vapour feedback in which the amount of water vapour in the atmosphere increases as the Earth warms. This increase in turn may amplify the warming because water vapour is a strong greenhouse gas. A strong and very basic negative feedback is radiative damping: an increase in temperature strongly increases the amount of emitted infrared radiation. This limits and controls the original temperature increase. A distinction is made between physical feedbacks involving physical climate processes, and biogeochemical feedbacks often involving coupled biological, geological and chemical processes. An example of a physical feedback is the complicated interaction between clouds and the radiative balance. Chapter 7 provides an overview and assessment of the present knowledge of such feedbacks. An important example of a biogeochemical feedback is the interaction between the atmospheric CO2 concentration and the carbon uptake by the land surface and the oceans. Understanding this feedback is essential for an understanding of the carbon cycle. This is discussed and assessed in detail in Chapter 3. Many processes and interactions in the climate system are non-linear. That means that there is no simple proportional relation between cause and effect. A complex, non-linear system may display what is technically called chaotic behaviour. This means that the behaviour of the system is critically dependent on very small changes of the initial conditions. This does not imply, however, that the behaviour of non-linear chaotic systems is entirely unpredictable, contrary to what is meant by "chaotic" in colloquial language. It has, however, consequences for the nature of its variability and the predictability of its variations. The daily weather is a good example. 
The evolution of weather systems responsible for the daily weather is governed by such non-linear chaotic dynamics. This does not preclude successful weather prediction, but its predictability is limited to a period of at most two weeks. Similarly, although the climate system is highly non-linear, the quasi-linear response of many models to present and predicted levels of external radiative forcing suggests that the large-scale aspects of human-induced climate change may be predictable, although as discussed in Section 1.3.2 below, unpredictable behaviour of non-linear systems can never be ruled out. The predictability of the climate system is discussed in Chapter 7. Global and hemispheric variability Climate varies naturally on all time-scales. During the last million years or so, glacial periods and interglacials have alternated as a result of variations in the Earth's orbital parameters. Based on Antarctic ice cores, more detailed information is available now about the four full glacial cycles during the last 500,000 years. In recent years it was discovered that during the last glacial period large and very rapid temperature variations took place over large parts of the globe, in particular in the higher latitudes of the Northern Hemisphere. These abrupt events saw temperature changes of many degrees within a human lifetime. In contrast, the last 10,000 years appear to have been relatively more stable, though locally quite large changes have occurred. Recent analyses suggest that the Northern Hemisphere climate of the past 1,000 years was characterised by an irregular but steady cooling, followed by a strong warming during the 20th century. Temperatures were relatively warm during the 11th to 13th centuries and relatively cool during the 16th to 19th centuries. 
These periods coincide with what are traditionally known as the medieval Climate Optimum and the Little Ice Age, although these anomalies appear to have been most distinct only in and around the North Atlantic region. Based on these analyses, the warmth of the late 20th century appears to have been unprecedented during the millennium. A comprehensive review and assessment of observed global and hemispheric variability may be found in Chapter 2. The scarce data from the Southern Hemisphere suggest temperature changes in past centuries markedly different from those in the Northern Hemisphere, the only obvious similarity being the strong warming during the 20th century. Regional patterns of climate variability Regional or local climate is generally much more variable than climate on a hemispheric or global scale because regional or local variations in one region are compensated for by opposite variations elsewhere. Indeed a closer inspection of the spatial structure of climate variability, in particular on seasonal and longer time-scales, shows that it occurs predominantly in preferred large-scale and geographically anchored spatial patterns. Such patterns result from interactions between the atmospheric circulation and the land and ocean surfaces. Though geographically anchored, their amplitude can change in time as, for example, the heat exchange with the underlying ocean changes. A well-known example is the quasi-periodically varying ENSO phenomenon, caused by atmosphere-ocean interaction in the tropical Pacific. The resulting El Niño and La Niña events have a worldwide impact on weather and climate. Another example is the North Atlantic Oscillation (NAO), which has a strong influence on the climate of Europe and part of Asia. This pattern consists of opposing variations of barometric pressure near Iceland and near the Azores. 
On average, a westerly current, between the Icelandic low pressure area and the Azores high-pressure area, carries cyclones with their associated frontal systems towards Europe. However the pressure difference between Iceland and the Azores fluctuates on time-scales of days to decades, and can be reversed at times. The variability of NAO has considerable influence on the regional climate variability in Europe, in particular in wintertime. Chapter 7 discusses the internal processes involved in NAO variability. Similarly, although data are scarcer, leading modes of variability have been identified over the Southern Hemisphere. Examples are a North-South dipole structure over the Southern Pacific, whose variability is strongly related to ENSO variability, and the Antarctic Oscillation, a zonal pressure fluctuation between middle and high latitudes of the Southern Hemisphere. A detailed account of regional climate variability may be found in Chapter 2. Other reports in this collection
Quantum Physics & Consciousness In the most simple terms, consciousness refers to the unified sense of awareness that we all have of who we are and what we are. It is the unique subjective inner world that we all experience during our daily lives. Understanding the nature of consciousness and the self has become one of the most intriguing mysteries. Although at present no one yet knows how our sense of self and consciousness arises, there have been many theories put forward (for more information please review the mind body problem). The current lack of plausible biological mechanisms to account for how consciousness and our sense of self arises has led some to present other possible theories. Stuart Hameroff, an anesthetist at the University of Arizona, and Roger Penrose, a mathematician from the University of Oxford, have argued that consciousness may be a property of processes occurring within small protein structures within brain cells. According to these scientists, however, these processes are not taking place where brain cells connect with each other, but at a level far smaller than that. They argue that there is a process going on at the subatomic or quantum level - a level where things are even much smaller than atoms. The quantum processes theory put forward by Hameroff and Penrose is based upon the principle that there are two levels of explanation in physics: the familiar classical level used to describe large-scale objects and the quantum level used to describe very small events at the subatomic level. At the quantum level superimposed states are possible, that is, two possibilities may exist for any event at the same time, but at the classical level either one or the other must exist. Hameroff and Penrose propose that consciousness arises from tiny tube-like structures made of proteins that exist in all the cells in the body, including brain cells, and act as a skeleton that allows cells to keep their shape.
They propose that these small structures are the site of quantum processes in the brain, due to their structure and shape. They argue that consciousness is thus not a product of direct brain cell to brain cell activity, but rather the action of subatomic processes occurring in the brain. At present there is little experimental scientific evidence to support this view, but it is an intriguing hypothesis.
Mars Hoax Day is once again here, so if you get an email telling you to look for a giant red planet in the night sky, just ignore it. For nearly a decade, Mars Hoax Day has turned up each August 27, spreading by email and claiming that Mars will appear in the sky just as large as the full moon. Wikipedia breaks it down like this: Mars Hoax Day originated in an e-mail in 2003 that claimed on August 27, Mars would be so close to earth that it would appear in the night sky as large as the full moon. After taking a year off, the hoax returned every year between 2005 and this year, fooling people into scanning the night sky for Mars. The original Mars Hoax Day may not have been intentional. An email message communicated that Mars and Earth in 2003 were the closest they had been in close to 60,000 years. This was misinterpreted to claim that Mars would appear supersized and, as exaggerations often do, spread like wildfire by way of email forwards. The text of the Mars Hoax Day email has remained largely the same since its introduction in 2003. Using pseudo-scientific language, it appears convincing enough to the untrained eye: “The Red Planet is about to be spectacular! This month and next, Earth is catching up with Mars in an encounter that will culminate in the closest approach between the two planets in recorded history. The next time Mars may come this close is in 2287. Due to the way Jupiter’s gravity tugs on Mars and perturbs its orbit, astronomers can only be certain that Mars has not come this close to Earth in the Last 5,000 years, but it may be as long as 60,000 years before it happens again.” “The encounter will culminate on August 27th when Mars comes to within 34,649,589 miles (55,763,108 km) of Earth and will be (next to the moon) the brightest object in the night sky. It will attain a magnitude of -2.9 and will appear 25.11 arc seconds wide. At a modest 75-power magnification Mars will look as large as the full moon to the naked eye. 
Mars will be easy to spot. At the beginning of August it will rise in the east at 10 p.m. and reach its azimuth at about 3 a.m.” But even NASA spoke up against Mars Hoax Day. In 2005, NASA published an article debunking the email hoax, or at least most of it. NASA did note that Mars was set to appear close to Earth, but only as a pinprick of light. But it’s a good thing Mars Hoax Day isn’t real, NASA added. If Mars did come close enough to Earth to rival the Moon in size, its gravity would actually alter the Earth’s orbit and raise destructive tides.
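The hoax's own numbers give the trick away: multiplying the quoted 25.11 arc seconds by the quoted 75-power magnification yields roughly the full Moon's unmagnified apparent diameter of about 1,800 to 1,900 arc seconds (the 1,865 figure below is an assumed approximate mean, not from the email). In other words, Mars only rivals the Moon through a telescope. A quick check:

```java
public class MarsHoaxCheck {
    public static void main(String[] args) {
        double marsArcsec = 25.11;   // apparent size quoted in the email
        double power = 75.0;         // magnification quoted in the email
        double moonArcsec = 1865.0;  // assumed mean apparent size of the full Moon

        double magnified = marsArcsec * power;
        System.out.println(magnified);               // 1883.25 arc seconds
        System.out.println(magnified / moonArcsec);  // ~1.01: about Moon-sized, magnified
        System.out.println(marsArcsec / moonArcsec); // ~0.013: a pinprick to the naked eye
    }
}
```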
All integer types supported by the JVM -- bytes, shorts, ints, and longs -- are signed two's-complement numbers. The two's-complement scheme allows both positive and negative integers to be represented. The most significant bit of a two's-complement number is its sign bit. The sign bit is one for negative numbers and zero for positive numbers and for the number zero. The number of unique values that can be represented by the two's-complement scheme is two raised to the power of the total number of bits. For example, the short type in Java is a 16-bit signed two's-complement integer. The number of unique integers that can be represented by this scheme is 2^16, or 65,536. Half of the short type's range of values is used to represent zero and positive numbers; the other half of the short type's range is used to represent negative numbers. The range of negative values for a 16-bit two's-complement number is -32,768 (0x8000) to -1 (0xffff). Zero is 0x0000. The range of positive values is one (0x0001) to 32,767 (0x7fff). Positive numbers are intuitive in that they are merely the base two representation of the number. Negative numbers can be calculated by adding the negative number to two raised to the power of the total number of bits. For example, the total number of bits in a short is 16, so the two's-complement representation of a negative number in the valid range for a short (-32,768 to -1) can be calculated by adding the negative number to 2^16, or 65,536. The two's-complement representation for -1 is 65,536 + (-1), or 65,535 (0xffff). The two's-complement representation for -2 is 65,536 + (-2), or 65,534 (0xfffe). Addition is performed on two's-complement signed numbers in the same way it would be performed on unsigned binary numbers.
The two numbers are added, overflow is ignored, and the result is interpreted as a signed two's-complement number. This will work as long as the result is actually within the range of valid values for the type. For example, to add 4 + (-2), just add 0x0004 and 0xfffe. The result is actually 0x10002, but because there are only 16 bits in a short, the overflow is ignored and the result becomes 0x0002.
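The arithmetic above can be checked directly in Java. This sketch reproduces both calculations from the text: the representation of -1 as 2^16 + (-1) = 0xffff, and the addition 4 + (-2) with the overflow bit discarded.

```java
public class TwosComplementDemo {
    public static void main(String[] args) {
        // Representation of a negative short value n: 2^16 + n.
        int minusOneBits = (1 << 16) + (-1);            // 65536 - 1 = 65535
        System.out.println(Integer.toHexString(minusOneBits)); // ffff
        System.out.println((short) 0xffff);                    // -1

        int minusTwoBits = (1 << 16) + (-2);            // 65534
        System.out.println(Integer.toHexString(minusTwoBits)); // fffe

        // 4 + (-2): add the bit patterns, then discard the overflow
        // beyond 16 bits, as the text describes.
        int sum = (0x0004 + 0xfffe) & 0xffff;   // 0x10002 -> 0x0002
        System.out.println((short) sum);        // 2
    }
}
```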
|An Alabama Copperhead| 2. Those Who Do Not Learn From the Past... Biologists, and even conservation biologists, historically had some curious views regarding their study organisms, views that seem very odd today. The classic example, to me at least, is that of Alfred Russel Wallace (1823-1913), a pioneering biologist. When studying orangutans in Borneo, he describes following these magnificent animals through the treetops, shooting them over and over until the large beasts finally succumbed to multiple gunshots. He killed 29 in one stay. We now know that orangutans reproduce very slowly (and are endangered); their populations decrease in numbers over time if many animals are killed. In other words, not only are the animals that are killed removed from the population, but the surviving animals don't produce enough offspring to make up for these deaths. However, Wallace never claimed to be conserving these animals, only studying them (for an excellent book on Wallace's pioneering studies, check out Where Worlds Collide: The Wallace Line (Comstock Books)). We are already familiar with the plight of bison in the United States (although commonly referred to as buffalo, buffalo are actually a group of animals found in the Old World, including Africa and Asia). We learn early on in school about how vast herds of these animals in the midwest plains were eventually hunted to oblivion (it is hard to grasp how common these animals once were, but the mountain of skulls on the right gives you an idea; imagine the experience of witnessing a herd of this many animals). But, bison actually ranged far outside of the midwest plains. They were once found in New York and all along the Appalachian Mountains south to Florida. They were found as far north as northern Canada but also in central Mexico.
So, although we are all told stories to make us appreciate how there were once vast herds of bison in the United States, even these stories don't come close to communicating that bison were a major part of the natural world across the entire continent. Because much of the United States is now missing bison, it's almost as if we live in a sanitized version of what nature should really look like. In any case, bison were already extremely rare by 1886, when William Hornaday mounted an expedition to find the animal he fought to save. Perhaps to his dismay, he realized the animals were virtually absent. But, he did not give up in his search. Eventually, he found a small group of impressive beasts clustered together, perhaps we can think of it as a tiny herd. One might imagine his elation, after struggling to find any evidence that the animals even existed anymore, to have found these animals. It is harder to understand his motivation behind what he did next. He shot them all. Robert Krulwich writes an excellent essay on the subject. But, he is at a loss regarding how to explain what was going on in William Hornaday's mind that day. How do you explain the inconsistency between what William Hornaday fought for and what he did? Perhaps there was a disconnect between saving the bison as a species (which is an idea) and killing individual animals (which is an action). It's crucial to realize that what we do and how we act is what makes conservation happen, not just abstract ideas. It's easy to assume that we wouldn't make the same mistake again. How could we ever allow a huge beast, a giant mega-herbivore, to go extinct on our watch? Well, we do it all the time. Just in the last few weeks came news that there are no more Javan Rhinoceroses in Vietnam and no more Western Black Rhinos anywhere in the wild. I hope you got a chance to see them. 3. A Lizard a Day Keeps the Doctor Away?
Earlier this year I spent some time in the Florida Keys while I helped out on a python research project. The nights at our peaceful beachside cabin were occasionally interrupted by an odd hiccuping-chirp from outside. Tokay Geckos prowled the walls and the trees, calling to attract mates (you can hear one disturbing a family here). They are not native to North America; the animals we heard were probably descendants of escaped pets. This species is normally found in Asia, and it's hard to imagine this Florida pest being in trouble in its native range, but that's exactly the case. Traditional medicine suggests Tokay Geckos have useful disease-fighting properties. These claims haven't been backed up by scientific studies, but that doesn't stop people from harvesting the lizards. Fortunately, the trend was noticed early, before Tokays became critically endangered or extinct. Maybe it's not too late to change our ways. 4. Let's End on a High Note: There is a lot to be pessimistic about. But there are still lots of fascinating animals patrolling their habitats in the wilderness and interacting with other species just as they always have. The Deep Sea News Blog provides some awesome footage of a giant ray gliding through the water and a school of yellow angelfish that rush to clean the beast. Check it out. Relevant Scientific Articles: Rostlund, E. (1960). The geographic range of the historic bison in the Southeast. Annals of the Association of American Geographers, 50(4), 395-407. DOI: 10.1111/j.1467-8306.1960.tb00357.x. Meijaard, E., Welsh, A., Ancrenaz, M., Wich, S., Nijman, V., & Marshall, A. J. (2010). Declining orangutan encounter rates from Wallace to the present suggest the species was once more abundant. PLoS ONE, 5(8).
<urn:uuid:f5aee1c3-b44a-4b6f-8bcb-fb9f0f661917>
2.828125
1,199
Personal Blog
Science & Tech.
52.149143
Greater Bamboo Lemur, Prolemur simus The greater bamboo lemur (Prolemur simus) is native only to the island of Madagascar. Its other common names include the broad-nosed gentle lemur and the broad-nosed bamboo lemur. It has a small range in southeastern Madagascar, although it is thought that its range was much larger in the past. Andringitra National Park and Ranomafana National Park are included in its range. The greater bamboo lemur is the largest bamboo lemur, weighing an average of 5.5 pounds. It has a body length that can reach up to 1.5 feet. It has an overall grey coat with white ear tufts. The diet of this bamboo lemur consists almost exclusively of one bamboo species within its habitat, Cathariostachys madagascariensis. It prefers to eat the shoots, but has been known to eat the leaves and the pith, or the middle, of the bamboo plant. Occasionally, the greater bamboo lemur will consume fruits, fungi, and flowers. It is unknown how the toxic cyanide within the bamboo shoots affects this lemur, as the amount it typically consumes each day would kill a human. The greater bamboo lemur can live in groups of up to twenty-eight highly sociable individuals. It is thought that this species of lemur is the only one to have male-dominated groups. These lemurs are able to emit more than seven different vocalizations. Individuals in captivity can live to be up to seventeen years old. The greater bamboo lemur is one of the most endangered lemurs, and has been placed on The World's 25 Most Endangered Primates list. It was thought that this lemur was extinct until an isolated population was discovered in 1986. After this discovery, it was found that there were fewer than 75 individuals within southern and central eastern Madagascar. Other studies have found only 60 individuals, while others suggest that the population may be as high as 160 individuals. Only four percent of its former range is currently populated.
This is due to the highly specialized diet of the greater bamboo lemur and the habitat destruction it has endured. It is also in danger from hunting and mining. The IUCN has listed the greater bamboo lemur as “Critically Endangered”. Image Caption: Greater Bamboo Lemur (Prolemur simus) eating giant bamboo in its natural habitat in Madagascar. Credit: Cédric Girard-Buttoz/Wikipedia (CC BY-SA 3.0)
<urn:uuid:86d3c655-a467-4849-b3a2-9d33710ed39e>
3.625
525
Knowledge Article
Science & Tech.
50.41531
In about five billion years, Earth is going into the broiler. The sun will swell up into a red giant, engulfing us and the other inner planets. And that'll be that. Or will it? Astronomers recently found two small planets orbiting very close to a star about 4,000 light-years away. The star has already passed through its red giant phase. So how did the planets survive? One idea was that the red giant had swallowed two formerly large planets and spit out the smaller cores. Imagine a star engulfing a Jupiter and leaving behind a scorched Earth—literally. Now a new study proposes an even more violent explanation. Astrophysicists in Israel calculated that the two newfound planets might be the remnant of a single giant planet. That world would have been stripped to its core and then torn apart by the swelling star. The worlds now orbiting the former red giant would be the surviving, Earth-size chunks. ["A tidally destructed massive planet as the progenitor of the two light planets around the sdB star KIC 05807616," by Ealeal Bear and Noam Soker in The Astrophysical Journal Letters.] So a planet much bigger than Earth can escape total destruction when its star goes red giant. Although being broiled, pulled apart like taffy and tossed away is no planetary picnic. [The above text is a transcript of this podcast.]
<urn:uuid:cdd72f82-29e8-402e-ab5e-df2ddd08a2c0>
3.453125
290
Truncated
Science & Tech.
63.029954
Chemical study resolves art controversy: Scientists (NEWSER) – Picasso's great works had humble origins: They were painted using house paint, scientists say. Art historians have long debated whether Picasso was one of the first painters to use the standard enamel-based stuff rather than oil paints, but past paint-chip studies couldn't suss out the individual elements with enough precision, reports the New Scientist. In order to finally answer the question, scientists employed an X-ray nanoprobe—a government-developed machine that offers an incredibly detailed look at materials' chemical components. Read more: Newser
<urn:uuid:cccf3753-f435-42e7-8b57-a0e65b87fbe9>
2.984375
126
Truncated
Science & Tech.
23.30386
Light Clock Relations Physics Front Related Resources A link to the full collection of Galileo and Einstein Physics Flashlets, by the same authors. A set of multimedia resources targeting high school and lower-level undergraduate students. It presents the physics of special relativity plus simulations to visualize how things look at relativistic speeds. An animated tutorial created for novice learners that explores topics raised as consequences of the constant speed of light. Create a new relation
<urn:uuid:cc3ca6a9-88f5-4247-b082-eaf2c3690862>
2.921875
91
Content Listing
Science & Tech.
28.368421
Thu Feb 5 17:29:21 CET 2009 For a simple static C program with a bit of dynamic data, this is a simple list structure based only on concatenation, with "next" pointers embedded in the objects. This works for pure trees (flat trees). Conceptually, only these operations are necessary: * a PUSH operation of a singleton to a stack. * a REVERSE operation to finalize construction preserving order * a FOR operation for traversal If objects have built-in list pointers, there is conceptually no "element". All things are lists, and the primitive operations are "overlay_1" and "split_1". overlay_1 (abcd... , 1234...) = a1234... split_1 (abcd...) = bcd... I'm not sure if this way of looking at it is so useful, but there _is_ a difference between objects that contain a next pointer, and objects that are contained in a separate container structure (i.e. cons cells). Using an embedded next pointer is commonplace, and is easier to use with manual memory management, so maybe it's better to stick to it. Another way of looking at it: if data structure construction is not time-critical, using quadratic list insertion is not really a problem. That way append and iteration might be enough. The problem with embedded "next" pointers is that if you want to put the objects in an alternative container, you get two different containment mechanisms (one built-in, one explicit).
<urn:uuid:73024876-2f2c-48fa-8842-983ba54e3972>
2.6875
343
Comment Section
Software Dev.
57.517721
Kepler finds red giants with rapidly spinning cores. "An international team of astronomers led by PhD student Paul Beck from Leuven University in Belgium have managed to look deep inside some old stars and discovered that their cores spin at least ten times as fast as their surfaces. The result appeared today in the journal Nature. It has been known for a long time that the surfaces of these stars spin slowly, taking about a whole year to complete one rotation. The team has now discovered that the cores at the heart of the stars spin much faster, with about one rotation per month. The discovery was made possible because of the ultra high precision of the data from NASA's Kepler space telescope." I just found this site that launched 4 days ago. The address is http://exoplanetarchive.ipac.caltech.edu/. What is the NASA Exoplanet Archive? The NASA Exoplanet Archive collects and serves public data to support the search for and characterization of extra-solar planets (exoplanets) and their host stars. The data include published light curves, images, spectra and parameters, and time-series data from surveys that aim to discover transiting exoplanets. Tools are provided to work with the data, particularly the display and analysis of transit data sets from Kepler and CoRoT. All data are validated by the Exoplanet Archive science staff and traced to their sources. The Exoplanet Archive is the U.S. data portal for the CoRoT mission. It used to be the NASA Star and Exoplanet Database (NStED); it's just been overhauled to provide an improved facility. Has it been announced at what time we will get the next results from the Kepler search? AFAIK the current search only includes the findings until summer of 2011, since the rest needs more processing time. Neither of these rang a bell with me, but they illustrate the point. Really? Do you have a link for this? I've always found it interesting, since I believe that some time in the future we will find more moons with life than planets.
TrES-2b: Pushing Exomoon Limits That's a good one, and here's a couple other good ones. Great site, thanks! Really didn't think anyone would release data on candidates, since no one ever talked about their characteristics other than the number of candidates, like the newly released 1,000+ candidates. Kepler Planet Candidate Data Explorer The Extrasolar Planets Encyclopaedia Catalog of Exoplanets from the Planetary Society Thanks for the links. Can't see where to find the exoplanet candidates, though, if that was what I was supposed to find in the provided links. And confirmed planets just jumped from 28 to 33. All 5 orbit the same star, Kepler 20, G-type, with a system of 5 planets. They include Kepler 20 E and F, Earth-sized but probably uninhabitable. One is only 3% larger than Earth, the other smaller than both Earth and Venus. They don't have the means to measure the masses of these planets via radial velocity - yet. They are hopeful for the near future. Last edited by Scriitor; 2011-Dec-20 at 06:26 PM. Planets proceeding outwards from Kepler 20 are "Neptune-like, rocky, Neptune, rocky, Neptune", all very close. Within a Mercury orbit, I think was said, but I might have misheard. They're proposing this challenges all planet-formation theories. They're challenging the astronomical community to come up with a way how this alternating planetary sequence might have come about. Last edited by Scriitor; 2011-Dec-20 at 06:43 PM. Kepler 20F, closest to Earth-sized, is too hot to have water now, but apparently might have held it for "several billion years" previously. They are hopeful of finding a true Earth-twin within a year or two, but it might take an extended mission of 4-5 years, as sensitivity improves. And yes, they are hopeful of detecting moons, but via gravity "wobble", not the visual transit method, except possibly for the smaller stars. Last edited by Scriitor; 2011-Dec-20 at 06:57 PM.
At night the stars put on a show for free (Carole King) All moderation in purple - The rules I do not understand why no one remembered pulsar planets? I looked everywhere - forums, space portals, news - and nowhere is this issue talked about. So let's say this: "first ever planet smaller than Earth discovered in history" and other statements like this are pure **. Two papers on Transit Timing Variations from the Kepler team: arXiv:1201.1892 - Transit Timing Observations from Kepler: VI. Transit Timing Variation Candidates in the First Seventeen Months from Polynomial Models arXiv:1201.1873 - Transit Timing Observations from Kepler: VII. Potentially interesting candidate systems from Fourier-based statistical tests NASA's Kepler Mission Finds Three Smallest Exoplanets Astronomers using data from NASA's Kepler mission have discovered the three smallest planets yet detected orbiting a star beyond our sun. The planets orbit a single star, called KOI-961, and are 0.78, 0.73 and 0.57 times the radius of Earth. The smallest is about the size of Mars. All three planets are thought to be rocky like Earth but orbit close to their star, making them too hot to be in the habitable zone, which is the region where liquid water could exist. Of the more than 700 planets confirmed to orbit other stars, called exoplanets, only a handful are known to be rocky. "Astronomers are just beginning to confirm the thousands of planet candidates uncovered by Kepler so far," said Doug Hudgins, Kepler program scientist at NASA Headquarters in Washington. "Finding one as small as Mars is amazing, and hints that there may be a bounty of rocky planets all around us." Everything I need to know I learned through Googling. I suppose, given the duration so far of the Kepler mission, that the Earth-sized (or smaller) planets detected are almost bound to be close to their stars?
AFAIK Kepler was supposed to be able to detect an Earth-sized planet in the habitable zone of a Sun-like star within the primary mission time, so not that close to the star. But there is one problem. Stars oscillate too much, more than expected, making measurements harder. So a successfully confirmed detection of this kind of planet requires a mission extension.
<urn:uuid:ec2197d6-6080-4464-b715-3fdb10f54eb4>
3.046875
1,379
Comment Section
Science & Tech.
53.976668
I = ∫∫R (x + y)dA where R is the region bounded by the curve y = x² and the line y = 1. (a) Calculate I by integrating first in y and then in x. (b) Calculate I by integrating first in x and then in y. This is what I got for (a): Limits for y are x² <= y <= 1, making this a parabola from the origin to y = 1. Solving x² = 1 (the upper limit of y) gives the limits for x: -1 <= x <= 1, which I hope is correct! I'm stuck on (b). What should I make the limits of x and y? Are the limits for x: -√y <= x <= √y? This is from solving x² = y, making x = ±√y. But what do I make the limits for y? If I use these I get a different answer from (a). I get 13/20. Where am I going wrong?
<urn:uuid:949719e6-13d9-461f-8153-27bef5b8514f>
2.75
211
Q&A Forum
Science & Tech.
97.715807
Dec. 31, 2009 NASA's Mars rover Spirit is about to mark six years of Red Planet exploration. However, the upcoming Martian winter could end the roving career of the beloved, scrappy robot. Dec. 29, 2009 Party planners take note. For the first time in almost twenty years, there's going to be a Blue Moon on New Year's Eve. Dec. 23, 2009 The solar system is passing through an interstellar cloud that physics says should not exist. In the Dec. 24th issue of Nature, a team of scientists reveal how NASA's Voyager spacecraft have solved the mystery. Dec. 18, 2009 NASA's Cassini Spacecraft has captured the first flash of sunlight reflected off a lake on Saturn's moon Titan, confirming the presence of liquid on the part of the moon dotted with many large, lake-shaped basins. Dec. 17, 2009 A continent-wide network of all-sky cameras has photographed a never-before-seen phenomenon: colliding auroras that produce explosions of light. The must-see images have solved a long-standing mystery of the Northern Lights. Dec. 11, 2009 This week, researchers attending the United Nations Climate Change Conference in Copenhagen unveiled a unique web site that gathers and organizes climate data for decision makers, professional scientists and lay people. Dec. 8, 2009 The Geminid meteor shower has been intensifying in recent years, and researchers say 2009 could be the best year yet. This year's display peaks on Dec. 13th and 14th. Dec. 2, 2009 While stuck in a sandtrap, Mars rover Spirit has made a discovery one researcher calls "supremely interesting." Nov. 24, 2009 Data from NASA's STEREO spacecraft have confirmed the stunning reality of monster waves on the sun known as "solar tsunamis." Nov. 19, 2009 Imagine cutting retractable doors in the side of a 747 airliner, installing a 17-ton telescope, and flying to the stratosphere to solve one of astronomy's greatest puzzles. That's what NASA and the German Aerospace Center plan to do with a cutting-edge airborne observatory named SOFIA.
<urn:uuid:b266aab9-4efd-4c85-8aba-1fae4bcafee8>
2.859375
453
Content Listing
Science & Tech.
62.135882
Glen Whitney wants to change that. Whitney is the founder and executive director of New York City’s Museum of Mathematics, or MoMath. His goal? Create a museum that presents math as more than just numbers and arithmetic. “The way math is taught today is like teaching music education by forcing students to learn musical notation, without letting them listen to anything,” Whitney says. “There are beautiful mysteries in mathematics that are waiting to be explored.” You won’t find a copy of Fermat’s Last Theorem enshrined behind glass at MoMath. It’s a children’s museum, where “look, don’t touch” is tossed out the window. Younger kids can play with the Marble Multiplier (see below), a device that looks like a gigantic game of Perfection to visualize what multiplication means. Older kids (and adults) can learn why circles, ellipses and hyperbolas are referred to as conic sections by using a laser to slice cones at different angles. Many politicians, including President Obama, have pledged to make STEM education a priority. Books like Steven Strogatz’s Joy of X show that math can be both accessible and interesting. Lawrence believes that both education and entertainment should work together to improve STEM education in the U.S. The museum, for its part, is crafting lesson plans for teachers to use in tandem with the exhibits. “Kids start out interested, but that interest wanes in middle school,” Lawrence says. “We hope this museum keeps that interest alive.” The Museum of Mathematics is located at 11 East 26th Street in Manhattan. It opens to the public on December 15th, 2012.
<urn:uuid:13e58da0-2cc6-442d-b9b5-f4dde6108888>
3.421875
362
Personal Blog
Science & Tech.
53.14206
A new paper in Geophysical Research Letters presents “a 6800–year, decadally-resolved biomarker and multidecadally-resolved hydrogen isotope record of hydroclimate from a coastal Maine peatland.” The researchers say that “Regional moisture balance responds strongly and consistently to solar forcing at centennial to millennial time scales, with solar minima concurrent with wet conditions. We propose that the Arctic/North Atlantic Oscillation (AO/NAO) can amplify small solar fluctuations, producing the reconstructed hydrological variations.” Note that this method of solar amplification is supported by several studies in Europe. The cycles and amplifications are also supported by independent lake sediment and speleothem (cave formation) data. The researchers go on to say, “The Sun may be entering a weak phase, analogous to the Maunder minimum, which could lead to more frequent flooding in the northeastern US at this multidecadal timescale.” Nichols, J. E., and Y. Huang, 2012, Hydroclimate of the northeastern United States is highly sensitive to solar forcing, Geophys. Res. Lett., 39, L04707, doi:10.1029/2011GL050720. [Link to full paper]
<urn:uuid:d04ec01e-c978-4b77-966d-f843940e8d07>
2.796875
268
Truncated
Science & Tech.
33.272353
Science Fair Project Encyclopedia This subfamily is widely distributed and its members are adapted to a wide variety of environments. Faboideae may be trees, shrubs or herbs. Flowers are classically pea-shaped and root nodulation is very common. Note: The type genus, Faba, is a synonym for Vicia and is listed here as Vicia. The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details
<urn:uuid:81dc8e81-4882-4dcf-9dac-6f3239a7c7a1>
3.015625
107
Knowledge Article
Science & Tech.
37.6675
Earth Flips Magnetic Poles All the Time Reversals are the rule, not the exception. Earth has settled in the last 20 million years into a pattern of a pole reversal about every 200,000 to 300,000 years, although it has been more than twice that long since the last reversal. A reversal happens over hundreds or thousands of years, and it is not exactly a clean back flip. Magnetic fields morph and push and pull at one another, with multiple poles emerging at odd latitudes throughout the process. Scientists estimate reversals have happened at least hundreds of times over the past three billion years. And while reversals have happened more frequently in "recent" years, when dinosaurs walked Earth a reversal was more likely to happen only about every one million years. Sediment cores taken from deep ocean floors can tell scientists about magnetic polarity shifts, providing a direct link between magnetic field activity and the fossil record. The Earth’s magnetic field determines the magnetization of lava as it is laid down on the ocean floor on either side of the Mid-Atlantic Rift where the North American and European continental plates are spreading apart. As the lava solidifies, it creates a record of the orientation of past magnetic fields much like a tape recorder records sound. The last time that Earth's poles flipped in a major reversal was about 780,000 years ago, in what scientists call the Brunhes-Matuyama reversal. The fossil record shows no drastic changes in plant or animal life. Deep ocean sediment cores from this period also indicate no changes in glacial activity, based on the amount of oxygen isotopes in the cores. This is also proof that a polarity reversal would not affect the rotation axis of Earth, as the planet's rotation axis tilt has a significant effect on climate and glaciation and any change would be evident in the glacial record. Another doomsday hypothesis about a geomagnetic flip plays up fears about incoming solar activity. 
This suggestion mistakenly assumes that a pole reversal would momentarily leave Earth without the magnetic field that protects us from solar flares and coronal mass ejections from the Sun. But, while Earth's magnetic field can indeed weaken and strengthen over time, there is no indication that it has ever disappeared completely. A weaker field would certainly lead to a small increase in solar radiation on Earth – as well as a beautiful display of aurora at lower latitudes -- but nothing deadly. Moreover, even with a weakened magnetic field, Earth's thick atmosphere also offers protection against the Sun's incoming particles. The science shows that magnetic pole reversal is – in terms of geologic time scales – a common occurrence that happens gradually over millennia. While the conditions that cause polarity reversals are not entirely predictable – the north pole's movement could subtly change direction, for instance – there is nothing in the millions of years of geologic record to suggest that any of the 2012 doomsday scenarios connected to a pole reversal should be taken seriously. A reversal might, however, be good business for magnetic compass manufacturers.
<urn:uuid:48473196-dd1a-4894-ab9c-36d5498fd57d>
4.15625
608
Knowledge Article
Science & Tech.
34.302436
Lurking behind dust and stars near the plane of our Milky Way Galaxy, IC 10 is a mere 2.3 million light-years distant. Even though its light is dimmed by intervening dust, the irregular dwarf galaxy still shows off vigorous star-forming regions that shine with a telltale reddish glow. In fact, also a member of the Local Group of galaxies, IC 10 is the closest known starburst galaxy. Compared to other galaxies, IC 10 has a large population of newly formed stars that are massive and intrinsically very bright, including a luminous star system thought to contain a black hole. Located within the boundaries of the northern constellation Cassiopeia, IC 10 is about 5,000 light-years across.
<urn:uuid:1a5ac452-d6c6-4444-a81f-6834a7288055>
2.765625
155
Knowledge Article
Science & Tech.
38.500402
We have seen how Riemann sums can be used to approximate areas and volumes. The definite integral, as the limit of a Riemann sum as the slice width goes to zero and the number of slices goes to infinity, provides a way to find the actual area or volume. We can use the same technique to find the length of the graph of a function. Obviously, if the function's graph is a straight line, we can just use the distance formula to find the length of a piece of the line. However, we can approximate a curve by using straight line segments and can use the distance formula to find the length of each segment. Then, as the segment size shrinks to zero, we can use a definite integral to find the length of the arc of the curve. Try the following: - The applet initially shows an arc that is part of the graph of a parabola. Initially, we approximate the length of this arc by a straight segment connecting the end points. This is clearly not a very good approximation, but we can do better by increasing the number of segments. Move the intervals slider to increase the number, and see how the black set of segments more closely approximates the magenta curve. Note that the approximate length gets closer and closer to the actual length. If we let Δx be the change in x between the endpoints of a straight line segment and Δy be the change in y, then from the distance formula the length of the segment is √((Δx)² + (Δy)²). Unfortunately, this doesn't look like the element of a Riemann sum, which should be a function of some variable times a little bit of that variable. We can massage this into that form with a little algebra by factoring a Δx out of the radical to get √(1 + (Δy/Δx)²) Δx. Summing this up and taking the limit as Δx goes to zero gives us the definite integral ∫[a,b] √(1 + (dy/dx)²) dx, where a < b. Since dy/dx = f '(x) if f is the function whose arc length is being measured, this integral is more commonly written as ∫[a,b] √(1 + (f '(x))²) dx. In our example this integral becomes ∫[a,b] √(1 + (2x)²) dx, where 2x is the derivative of x².
It is common that arc length integrals generate integrands for which it is not simple to find the antiderivative, hence it is usually best to evaluate arc length integrals numerically. - Select the second example from the drop down menu. This is essentially the same example, except that now x is a function of y. The general formula is just ∫[a,b] √(1 + (dx/dy)²) dy and the specific example in this case is ∫[a,b] √(1 + (2y)²) dy. The length is the same, since this is really the same as the first example, just mirrored through the line y = x. - Select the third example from the drop down menu, showing a parametric curve. The curve is a semicircle (if it looks squashed a bit, click the Equalize Axes button). You can move the intervals slider to make the approximation closer to the actual answer, which we know from geometry is π. Since our independent variable in parametric equations is t, we can go back to the original distance formula and factor a Δt out of the radical to get √((Δx/Δt)² + (Δy/Δt)²) Δt. Summing this up and taking the limit as Δt goes to zero gives us the integral ∫[a,b] √((dx/dt)² + (dy/dt)²) dt. Note that here a and b are t limits, not x limits. To evaluate for our example we just plug in the derivatives of the two parametric equations to get ∫[0,π] √(sin²t + cos²t) dt. Note that this integral can be evaluated exactly, using the Pythagorean Identity to simplify the integrand. - Select the fourth example, showing a polar curve. In this example, th is used instead of θ to make it easier to type from the keyboard. Move the intervals slider to see how the approximation gets better as the number of segments increases. To find the arc length, first we convert the polar equation r = f (θ) into a pair of parametric equations x = f (θ)cosθ and y = f (θ)sinθ. We then use the parametric arc length formula ∫[a,b] √((dx/dθ)² + (dy/dθ)²) dθ, where the two derivatives are of the parametric equations.
Our example becomes this parametric integral with the polar curve's derivatives plugged in, which is best evaluated numerically (you can greatly simplify the radicand by finding the derivatives, expanding the terms, and simplifying with the Pythagorean Identity to get r² + (dr/dθ)², but the result is still not a simple enough integrand). - You can explore your own arc lengths by selecting the example that matches the type of curve (normal, inverse, parametric, polar) in which you are interested, then setting a and b and zooming/panning as usual. Note that if you set a > b the resulting length comes out negative. Since length is usually defined to be positive, you should keep a < b. This work by Thomas S. Downey is licensed under a Creative Commons Attribution 3.0 License.
<urn:uuid:0354d4ee-586c-4b3a-818e-bf03baf6d81d>
3.6875
982
Tutorial
Science & Tech.
54.256667
Elon Musk, the founder of PayPal and chairman of electric car company Tesla, recently said that he believed most of the world's power would come from solar by 2040. That seems remarkably optimistic to me. At the Future in Review Conference, Musk said that in 30 years, solar thermal and solar photovoltaic power will, combined, produce more electricity than any other source. That title is currently held (and held firmly) by coal. Displacing the coal industry with renewables would require massive capital investments and innovations, particularly in power storage. I have to say, my mind doesn't have to stretch too far to see how it would be possible. But a few things need to happen first. Solar needs to get cheaper, and photovoltaics have to stop relying on raw materials (indium / monosilicon) that are difficult to acquire. And then we need to figure out how to store the power so we can use it at night. This could be through a combination of utility-scale power storage and distributed power storage through home fuel-cell and hydrogen creation systems. Musk, as the chairman of Solar City, a company that installs panels on houses, sometimes with no down payment at all, obviously believes in the distributed power model. The goal of Solar City is to have people pay, not for the $30,000 panels on their roof, but for the 30 years of electricity those panels will generate. Already Solar City is projecting $80 M in revenue for this year. The final piece of the puzzle in getting to solar supremacy came out in Musk's speech as well. Very simply, "There should be a carbon tax." Without one, it's unclear whether solar will ever be able to do any more than nip at the heels of big daddy coal. written by David W. Keith, May 28, 2008 written by Kobayashi, May 28, 2008 written by Dennis Allard, May 29, 2008 written by CTYankee, May 31, 2008
<urn:uuid:c5314f67-47c2-4a5b-82aa-fbc0f78b7576>
2.828125
424
Personal Blog
Science & Tech.
56.035539
Weather prediction improvements come from a balanced program of improved observations, improved understanding of phenomena, and improved numerical modeling. In addition to NOAA’s primary efforts in tropical storm prediction, there are efforts at Earth System Research Laboratory to make improvements in all three areas. In observing, the Unmanned Aircraft Systems program has been funded, and is making plans to test a variety of platforms for tropical storm observations. These include the use of small UAS like the Aerosonde which can sample the low levels of tropical storms, high altitude aircraft such as the Global Hawk and Zephyr which could stay above a storm for long periods, and the WISDOM Program which could optimally sample a large area around the storm. Improved understanding of the air-sea interface is an example of ESRL research. Finally, new modeling efforts include the FIM hydrostatic global model, plans between ESRL, GFDL and AOML to develop global nonhydrostatic models, and efforts to improve regional scale hurricane models.
<urn:uuid:7f2df0c1-8b1f-4f34-a17a-fd630f9f8a5e>
2.921875
203
Academic Writing
Science & Tech.
21.998257
The Blue Brain Project (BBP) According to the EPFL website: "The first phase of BBP will be to replicate in software a column of the neocortex (NCC) with 10,000 morphologically complex neurons with unprecedented detail for high-speed simulations. The second and subsequent phases will be to simplify the NCC and expand the simulation to brain regions and eventually the whole brain. An accurate replica of the neocortical column is the essential first step to simulating the whole brain and will also provide the link between genetic and molecular levels of brain function and high-level cognitive functions. The neocortical column is considered to be the elementary network of neurons that can act as a unit exhibiting most of the complex functions of the brain." Today's IBM press release reads in part: "Yorktown Heights, NY and Lausanne, Switzerland, June 6, 2005 – IBM and The Ecole Polytechnique Fédérale de Lausanne (EPFL) are today announcing a major joint research initiative – nicknamed the Blue Brain Project – to take brain research to a new level. [ ... Read the full press release ... ] Over the next two years scientists from both organizations will work together using the huge computational capacity of IBM's eServer Blue Gene supercomputer to create a detailed model of the circuitry in the neocortex – the largest and most complex part of the human brain. By expanding the project to model other areas of the brain, scientists hope to eventually build an accurate, computer-based model of the entire brain. Relatively little is actually known about how the brain works. Using the digital model scientists will run computer-based simulations of the brain at the molecular level, shedding light on internal processes such as thought, perception and memory. Scientists also hope to understand more about how and why certain microcircuits in the brain malfunction – thought to be the cause of psychiatric disorders such as autism, schizophrenia and depression."
“Modeling the brain at the cellular level is a massive undertaking because of the hundreds of thousands of parameters that need to be taken into account,” said Henry Markram, the EPFL professor heading up the project. “IBM has unparalleled experience in biological simulations and the most advanced supercomputing technology in the world. With our combined resources and expertise we are embarking on one of the most ambitious research initiatives ever undertaken in the field of neuroscience.” The IBM webpage that outlines the BBP is linked here. Anthony H. Risser | neuroscience | neuropsychology | brain
<urn:uuid:1026af49-ddfd-4e4a-b726-8bddeee50652>
2.75
526
Personal Blog
Science & Tech.
36.369412
Named Vector Members We can assign names to vector members. For example, the following variable v is a character string vector with two members. We now name the first member as First, and the second as Last. Then we can retrieve the first member by its name. Furthermore, we can reverse the order with a character string index vector.
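The code examples this passage refers to did not survive extraction; below is a minimal R sketch of the same steps, with illustrative member values (only the variable name v comes from the text):

```r
# A character string vector with two members
v <- c("Mary", "Sue")

# Name the first member "First" and the second "Last"
names(v) <- c("First", "Last")

# Retrieve the first member by its name
v["First"]
#   First
#  "Mary"

# Reverse the order with a character string index vector
v[c("Last", "First")]
#   Last  First
#  "Sue" "Mary"
```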
<urn:uuid:a2e6a1fb-18d6-46cc-8ea9-efe074bd7a58>
3.21875
73
Tutorial
Software Dev.
62.939356
I am guessing you are referring to the experiment on this page: http://www.sciencebuddies.org/science-f ... p011.shtml I will try to answer as many of these as I can, though I believe most of them are already answered on the webpage. 2. What if the reflected ray isn't equal to θi? Shall the reflected ray be ordered m=1 automatically? -- No, please see below: "Rays farther from the normal than the reflected beam have order +1, +2, +3, etc. Rays closer to the normal have order −1, −2, −3, etc. In certain cases, for example very small d, some or all of the negative m orders may actually be diffracted through such a large angle that they are on the same side of the normal as the incident light. When the diffracted beam is on the same side of the normal as the incident light, the angle for the diffracted beam is negative." In other words, if the diffracted beam is on the right side of your laser pointer (using the example on the website), the m order is negative, whereas if the diffracted beam is on the left side of the normal, it is positive. 3. θi is placed to the right of the normal and there is a diffracted ray to the right of θi. What should this diffracted ray's order be? And the angle... is it positive or negative? - Please see the answer to 2. 4. What if the computed d is negative? Is this reasonable or absurd? Or is there something wrong with the substitution of values, especially the signs? - d is the spacing of the structure (in this case, the data tracks), so a negative spacing is probably not correct. 5. What if the averaged d-values for some order-of-diffraction column is negative? Is this an error? What does this mean? - See the answer to 4: a negative averaged spacing likewise suggests a sign error in the substitution. 6. Since it is mentioned that the d computed using the formula d=mλ/(sinθm-sinθi) is in nm when λ is in nm, how is the computed value used in determining data track spacing? Is it that when d is large, the CD has low storage capacity, or the other way around?
How shall the computed d-values determine the storage capacity of a CD? - That, I am afraid, is not the entire story; you will also need to consider the total amount of "space" on the disc (a disc may also have multiple layers). This may be helpful: http://www.osta.org/technology/cdqa15.htm 7. In my thesis' Review of Related Literature, there's a part that says, "On a CD, the space between tracks is about 1.6 microns versus spacing on a DVD-R which is about 0.74-0.8 microns." If so, how can nm be converted to microns? Is it possible? - Yes, nm can be converted into microns. A nm is 1e-9 meter, while a micron is 1e-6 meter, i.e. 1 micron = 1000 nm. 8. Can nm be converted to MB to see if the storage capacity label of a CD matches the computed one? If so, how? - Please see the answer to #6. 9. Lastly, how is the d=mλ/(sinθm-sinθi) formula derived? What is the relationship of each variable to the others? What are the principles supporting this formula? - Please reference the introduction of the experiment for a more detailed explanation. Hope this helps!
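For reference, the spacing formula quoted in the questions is easy to evaluate numerically. A small Python sketch (the 650 nm wavelength and the angles are made-up illustrative numbers, not measurements from the experiment):

```python
import math

def track_spacing_nm(m, wavelength_nm, theta_m_deg, theta_i_deg):
    """Grating equation solved for the spacing: d = m * lambda / (sin(theta_m) - sin(theta_i)).

    d comes out in the same units as the wavelength. Angles on the opposite
    side of the normal from the incident beam are entered as negative,
    matching the sign convention described in the answer above.
    """
    return m * wavelength_nm / (
        math.sin(math.radians(theta_m_deg)) - math.sin(math.radians(theta_i_deg))
    )

# Hypothetical numbers: a 650 nm laser at normal incidence (theta_i = 0)
# diffracted to about 23.97 degrees in first order gives d close to the
# nominal 1.6 micron (1600 nm) CD track pitch quoted in question 7.
d = track_spacing_nm(m=1, wavelength_nm=650.0, theta_m_deg=23.97, theta_i_deg=0.0)
print(round(d / 1000, 2), "microns")  # ≈ 1.6 microns
```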
<urn:uuid:caf2c9de-0186-4872-a789-8b58b630f4d5>
3.0625
766
Q&A Forum
Science & Tech.
79.595132
Lots of Water, but Not Always Where It Is Needed One hundred and ten thousand cubic kilometers of precipitation, nearly 10 times the volume of Lake Superior, falls from the sky onto the earth’s land surface every year. This huge quantity would be enough to easily fulfill the requirements of everyone on the planet if the water arrived where and when people needed it. But much of it cannot be captured (top), and the rest is distributed unevenly (bottom). Where does the rain go? More than half of the precipitation that falls on land is never available for capture or storage because it evaporates from the ground or transpires from plants; this fraction is called green water. The remainder channels into so-called blue-water sources—rivers, lakes, wetlands and aquifers—that people can tap directly. Farm irrigation from these free-flowing bodies is the biggest single human use of freshwater. Cities and industries consume only tiny amounts of total freshwater resources, but the intense local demand they create often drains the surroundings of ready supplies. Water supplies today Much of the Americas and northern Eurasia enjoy abundant water supplies. But several regions are beset by greater or lesser degrees of “physical” scarcity—whereby demand exceeds local availability. Other areas, among them Central Africa, parts of the Indian subcontinent and Southeast Asia, contend with “economic” water scarcity, where lack of technical training, bad governments or weak finances limit access even though sufficient supplies are available.
<urn:uuid:a5e51d45-66e0-4475-868d-8aa67f8a56ac>
3.921875
306
Knowledge Article
Science & Tech.
29.723714
King crabs migrate to the Antarctic due to climate change. A new study published in the Proceedings of the Royal Society B by researchers at the University of Hawaii at Manoa in the USA has found that king crabs have been moving constantly in response to underwater temperature increases, driving them toward the previously uninhabitable Antarctic. Unfortunately, the arrival of these marine animals has disrupted their new ecosystems, creating imbalance among the other organisms there. With the new crab population estimated at 1.6 million in the Antarctic's Palmer Deep Basin alone, researcher and oceanography professor Craig Smith of the University of Hawaii at Manoa stated, “It looks like a pretty negative consequence of climate warming in the Antarctic.” Professor Smith and colleagues, we appreciate your work in revealing the dire effects of climate change on even the most remote marine ecosystems. May we act in unison to reverse this damaging process and restore lives of balance for ourselves and all fellow beings. During a May 2009 videoconference in Togo, as Supreme Master Ching Hai described how to explain global warming to young people, she mentioned the loss of precious animals and their habitats due to climate change as she suggested a way for everyone to be able to halt it. Supreme Master Ching Hai: You can show how the migrating birds have to fly farther and farther to find a place to nest, and the polar bears swim longer and longer now because there is no more ice until sometimes they drown of exhaustion, or why the neighboring country has so many floods in recent years, so many disasters, etc. http://www.huffingtonpost.com/2011/09/12/giant-red-crab-invasion-climate-change_n_956090.html http://www.msnbc.msn.com/id/44432195/ns/technology_and_science-science/?ocid=ansmsnbc11 http://www.collegenews.com/index.php?/article/king_crab_antarctica_13786/ Tell them how climate change is affecting real lives, real animals, real people, and their own lives as well.
But it’s also important to show the young people that there is still hope; we can still save the planet. It’s a chance to be true heroes, by being vegan and spreading the news of this solution. In a study published in the Aug 31, 2011 issue of the Journal of Water Resources Planning and Management, US and Swedish scientists report that the spring-run Chinook salmon is in danger of total extinction in California, USA, due to warming of the freshwater summer streams in which they are unable to survive. In the 13th edition of The Times Comprehensive Atlas of the World, cartographers have had to remove 15% of Greenland's ice due to global warming, while adding the new Warming Island, which has appeared as the ice retreated under accelerated temperature rise. http://news.sky.com/home/uk-news/article/16069473 http://uk.ibtimes.com/articles/214234/20110915/new-atlas-map-shows-extent-of-global-warming.htm http://www.guardian.co.uk/environment/2011/sep/15/new-atlas-climate-change
<urn:uuid:86576b11-9926-4db3-a908-f7f2e2f9bbd4>
3.359375
694
Content Listing
Science & Tech.
53.127251
Living Planet Index The Living Planet Index and the Living Planet Report are produced by WWF (the World Wide Fund for Nature), whose mission is "to stop the degradation of our planet's natural environment, and build a future in which humans live in harmony with nature." The Living Planet Index is one of the longest-running measures of the trends in the state of global biodiversity and reflects changes in the health of the planet’s ecosystems by tracking trends in populations of mammals, birds, fish, reptiles and amphibians. Executive Summary - Living Planet Report 2010 2010 is, according to the 2010 Living Planet Report: - The year in which new species continue to be found, but more tigers live in captivity than in the wild. - The year in which 34 percent of Asia-Pacific CEOs and 53 percent of Latin American CEOs expressed concern about the impacts of biodiversity loss on their business growth prospects, compared to just 18 percent of Western European CEOs (PwC, 2010). - The year in which there are 1.8 billion people using the internet, but 1 billion people still without access to an adequate supply of freshwater. - There has been a 30 percent decline in the global vertebrate species population from 1970 to 2007. Living Planet Index trends Tropical vs. Temperate The global Living Planet Index is the aggregate of two indices — the temperate LPI (which includes polar species) and the tropical LPI — each of which is given equal weight. The tropical index consists of terrestrial and freshwater species’ populations found in the Afrotropical, Indo-Pacific and Neotropical realms, as well as marine species’ populations from the zone between the Tropics of Cancer and Capricorn. The temperate index includes terrestrial and freshwater species’ populations from the Palearctic and Nearctic realms, as well as marine species’ populations found north or south of the tropics.
In each of these two indices, overall trends between terrestrial, freshwater and marine species’ populations are given equal weight. Tropical and temperate species’ populations show starkly different trends: the tropical LPI has declined by around 60 percent in less than 40 years, while the temperate LPI has increased by 29 percent over the same period. This difference is apparent for mammals, birds, amphibians and fish; for terrestrial, marine and freshwater species; and across all tropical and temperate biogeographic realms. However, this does not necessarily imply that temperate ecosystems are in a better state than tropical ecosystems. If the temperate index were to extend back centuries rather than decades it would very probably show a long-term decline at least as great as that shown by tropical ecosystems in recent times, whereas a long-term tropical index would be likely to show a much slower rate of change prior to 1970. There is insufficient pre-1970 data to calculate historic changes accurately, so all LPIs are arbitrarily set to equal one in 1970. View the full report here: Living Planet Index 2010 Report Launch - ↑ http://wwf.panda.org/what_we_do/. Retrieved 17 Dec 2010. - ↑ Living Planet Report 2010. WWF International, Gland, Switzerland. ISBN 978-2-940443-08-6.
<urn:uuid:40d5bf34-6287-4015-808d-a3bde307c835>
3.4375
668
Knowledge Article
Science & Tech.
41.225386
Water and pH The Water Molecule. All substances are made up of millions of tiny atoms. These atoms form small groups called molecules. In water, for example, each molecule is made up of two hydrogen atoms and one oxygen atom. The formula for a molecule of water is H2O. "H" means hydrogen, "2" means 2 hydrogen atoms, and the "O" means oxygen. Acids and bases in water. When an acid is poured into water, it gives up hydrogen ions (H+) to the water. When a base is poured into water, it gives up hydroxide ions (OH-) to the water. Find out more about atoms, ions, acids, and bases. A pH graph helps demonstrate this idea. A pH table lists the pH of
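As a supplement (the page's own pH table did not survive extraction), the idea that acids add hydrogen ions can be made quantitative: pH is the negative base-10 logarithm of the hydrogen-ion concentration in moles per litre. A minimal Python sketch with illustrative concentrations:

```python
import math

def ph(h_ion_concentration):
    """pH = -log10 of the hydrogen ion (H+) concentration in mol/L."""
    return -math.log10(h_ion_concentration)

print(round(ph(1e-7), 1))   # pure water, ≈ 7.0 (neutral)
print(round(ph(1e-3), 1))   # acid added H+ ions, ≈ 3.0 (acidic)
print(round(ph(1e-11), 1))  # base removed H+ ions, ≈ 11.0 (basic)
```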
<urn:uuid:21e8b524-a4a8-4bb5-b000-1b2c8e31eacc>
3.859375
172
Knowledge Article
Science & Tech.
63.927273
Operators manipulate individual data items (operands) and return a result. Mimer SQL uses the following operators:

UNION or UNION ALL
Derives a final result set by combining two other result sets. If you specify UNION ALL, the result consists of all rows in both result sets. If you only specify UNION, the final result set is the set of all rows in both of the result sets, with duplicate rows removed.

Arithmetical operators are used in forming expressions, see Expressions. String operators are used in forming expressions, see Expressions.

Comparison and Relational Operators
Comparison operators are used to compare operands in basic and quantified predicates. Relational operators are used to compare operands in all other predicates. See Predicates. Both comparison and relational operators perform essentially similar functions. However, comparison operators are common to most programming languages, while the relational operators are more or less specific to SQL.

= equal to
<> not equal to
< less than
<= less than or equal to
> greater than
>= greater than or equal to

The operators AND and OR are used to combine several predicates to form search conditions, see Search Conditions. The operator NOT may be used to reverse the truth value of a predicate in forming search conditions. This operator is also available in predicate constructions to reverse the function of a relational operator, see Search Conditions.

This section summarizes standard compliance concerning operators.

Mimer Information Technology AB Voice: +46 18 780 92 00 Fax: +46 18 780 92 40
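The UNION vs. UNION ALL behaviour described above is standard SQL, so it can be demonstrated in any engine. A quick sketch using Python's built-in sqlite3 module (not Mimer SQL, but the semantics of the two operators are the same):

```python
import sqlite3

# Two tiny result sets that share the value 2, built in an in-memory database.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE a (x INTEGER)")
cur.execute("CREATE TABLE b (x INTEGER)")
cur.executemany("INSERT INTO a VALUES (?)", [(1,), (2,)])
cur.executemany("INSERT INTO b VALUES (?)", [(2,), (3,)])

# UNION removes duplicate rows from the combined result...
union = cur.execute("SELECT x FROM a UNION SELECT x FROM b ORDER BY x").fetchall()
# ...while UNION ALL keeps every row from both result sets.
union_all = cur.execute(
    "SELECT x FROM a UNION ALL SELECT x FROM b ORDER BY x"
).fetchall()

print(union)      # [(1,), (2,), (3,)]
print(union_all)  # [(1,), (2,), (2,), (3,)]
```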
<urn:uuid:73187822-a304-4877-b2dd-50d178c2c01a>
3.578125
330
Documentation
Software Dev.
26.058985
Scope and Qualifiers Scope determines the accessibility of an object's fields, properties, and methods. All members declared in a class are available to that class and, as is discussed later, often to its descendants. Although a method's implementation code appears outside of the class declaration, the method is still within the scope of the class because it is declared in the class declaration. When you write code to implement a method that refers to properties, methods, or fields of the class where the method is declared, you don't need to preface those identifiers with the name of the class. For example, if you put a button on a new form, you could write this event handler for the button's OnClick event: procedure TForm1.Button1Click(Sender: TObject); begin Color := clFuchsia; Button1.Color := clLime; end; The first statement is equivalent to Form1.Color := clFuchsia You don't need to qualify Color with Form1 because the Button1Click method is part of TForm1; identifiers in the method body therefore fall within the scope of the TForm1 instance where the method is called. The second statement, in contrast, refers to the color of the button object (not of the form where the event handler is declared), so it requires qualification. The IDE creates a separate unit (source code) file for each form. If you want to access one form's components from another form's unit file, you need to qualify the component names, like this: Form2.Edit1.Color := clLime; In the same way, you can access a component's methods from another form. To access Form2's components from Form1's unit file, you must also add Form2's unit to the uses clause of Form1's unit. The scope of a class extends to its descendants. You can, however, redeclare a field, property, or method in a descendant class. Such redeclarations either hide or override the inherited member.
<urn:uuid:9c4fc27a-9715-495c-9a10-d191e40e5ff1>
3.296875
439
Documentation
Software Dev.
49.363193
(PHP 3 >= 3.0.6, PHP 4, PHP 5) str_replace -- Replace all occurrences of the search string with the replacement string. This function returns a string or an array with all occurrences of search replaced with the given replace value. If you don't need fancy replacing rules (like regular expressions), you should always use this function instead of ereg_replace() or preg_replace(). As of PHP 4.0.5, every parameter in str_replace() can be an array. In PHP versions prior to 4.3.3 a bug existed when using arrays as both the search and replace parameters. If subject is an array, then the search and replace is performed with every entry of subject, and the return value is an array as well. If search and replace are arrays, then str_replace() takes a value from each array and uses them to do search and replace on subject. If replace has fewer values than search, then an empty string is used for the rest of the replacement values. If search is an array and replace is a string, then this replacement string is used for every value of search. The converse would not make sense. If search and replace are arrays, their elements are processed first to last. Example 1. str_replace() examples Note: This function is binary-safe. Note: This function is case-sensitive. Use str_ireplace() for case-insensitive replace. Note: As of PHP 5.0.0 the number of matched and replaced needles (search) will be returned in count, which is passed by reference. Prior to PHP 5.0.0 this parameter is not available.
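The code under "Example 1" was lost in extraction; the following is a reconstruction in the style of the manual's well-known examples (the specific strings are illustrative), showing array search/replace and the by-reference count parameter:

```php
<?php
// Array search with array replace: each value of $healthy is replaced
// by the corresponding value of $yummy.
$phrase  = "You should eat fruits, vegetables, and fiber every day.";
$healthy = array("fruits", "vegetables", "fiber");
$yummy   = array("pizza", "beer", "ice cream");
$newphrase = str_replace($healthy, $yummy, $phrase);
echo $newphrase . "\n"; // You should eat pizza, beer, and ice cream every day.

// String replace with the optional count argument (PHP 5+), passed by reference.
$bodytag = str_replace("%body%", "black", "<body text='%body%'>", $count);
echo $count . "\n"; // 1
?>
```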
<urn:uuid:663c918c-ba21-4de1-8bde-ba099e808915>
3.03125
338
Documentation
Software Dev.
65.791226
Image courtesy of Google Images. The Culebra Island giant anole is another fascinating animal. Its scientific name is Anolis roosevelti; another name for the giant anole is Xiphosurus roosevelti. They were listed as endangered on July 21, 1977. The anole is extremely rare; some people think it may even be extinct, as the last sighting was in 1932. They can get to be 6.3 inches from head to tail. They are a brownish-gray and the underside is a whitish color. These types of anoles can be found in Ficus or gumbo-limbo trees. Anoles usually eat fruit, insects, or even smaller lizards, depending on the type of lizard.
<urn:uuid:e3f92d38-2b5a-474a-9c8c-a06fb501def2>
2.96875
151
Knowledge Article
Science & Tech.
55.006105
e is one of those special numbers in mathematics, like pi, that keeps showing up in all kinds of important places. For example, in calculus, the function f(x) = ce^x for any constant c is the one function (aside from the zero function) that is its own derivative. It is the base of the natural logarithm, ln, and it is equal to the limit of (1 + 1/n)^n as n goes to infinity. In the proof below, we use the fact that e is the sum of the series of inverted factorials. Like pi, e is an irrational number. It is interesting that these two constants that have been so vital to the development of mathematics cannot be expressed easily in our number system. For, if we define an irrational number as a number that cannot be represented in the form p/q, where p and q are relatively prime integers, we can prove fairly easily that e is irrational. The following is a well known proof, due to Joseph Fourier, that e is irrational.
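The two characterizations of e mentioned above, the sum of inverted factorials and the limit of (1 + 1/n)^n, can be checked numerically. A quick Python sketch:

```python
import math

# Partial sums of e = 1/0! + 1/1! + 1/2! + ... converge very quickly;
# twenty terms already agree with math.e to machine precision.
approx, factorial = 0.0, 1
for n in range(20):
    if n > 0:
        factorial *= n
    approx += 1.0 / factorial

# The limit definition (1 + 1/n)^n converges far more slowly.
limit_form = (1 + 1 / 1_000_000) ** 1_000_000

print(approx)      # ≈ 2.718281828459045
print(limit_form)  # still off from e by roughly one part in a million
```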
<urn:uuid:0d4cd3d8-d468-4973-b6a8-5d4d864579f6>
3.4375
240
Knowledge Article
Science & Tech.
53.8325
(sorry about the bad drawing - my GIMP skills are hopeless) But yeah, that hopefully explains depression and elevation. Looking up from the direct line - that is, the horizontal, which is at 0 degrees - gives you the elevation angle (so it's out of proportion in my picture). Now height X is equal to 50.2 metres (which is given in the question). From there, using the angle of depression, we can find the distance between the two buildings (using tangent = opposite/adjacent): distance between buildings = tan(18.3) * 50.2 = 16.6021. With that found you can now find Y. Again, it's found like this: tan(56.5) * 16.6021 = y, so y = 25.083. Then you add the values of X and Y to get the height of the building. Perhaps round it down - but yeah. Check my numbers, but hopefully you understand the concept. Be sure to add the measurement units to your answer. Often a diagram will help with the problem.
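To check the arithmetic above, here is a short Python sketch reproducing the poster's steps (whether the distance step should multiply or divide by the tangent depends on the original diagram, so this simply follows the working as given):

```python
import math

x = 50.2                                 # given height, in metres
dist = math.tan(math.radians(18.3)) * x  # distance between the buildings, per the working
y = math.tan(math.radians(56.5)) * dist  # extra height from the elevation angle
total = x + y                            # height of the building

print(round(dist, 3))   # ≈ 16.602
print(round(y, 2))      # ≈ 25.08
print(round(total, 1))  # ≈ 75.3 metres
```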
<urn:uuid:8b3c1e96-9790-4a7f-833c-5d0f4f9d86fd>
3.4375
235
Q&A Forum
Science & Tech.
86.959367
Phys. Rev. Focus 28, 12 (2011) – Published September 23, 2011 Experimentalists have mapped the quantum states (band structure) of cold atoms mimicking electrons in a crystal. The technique should allow researchers to study new aspects of electrons in crystals using the atoms as a model. Phys. Rev. Focus 26, 25 (2010) – Published December 17, 2010 According to simulations, ultrashort electron pulses could serve as probes of the rapid motion of atomic or molecular electrons, such as the “breathing” of an excited atom or the shuttling of an electron between atoms in a molecule. Phys. Rev. Focus 25, 24 (2010) – Published June 25, 2010 Researchers beat the quantum-mechanical fluctuations in an atomic clock by linking many atoms into an entangled quantum state and pushing the fluctuations into a realm that doesn’t influence the time measurement. Phys. Rev. Focus 21, 11 (2008) – Published April 2, 2008 In the 1970s and 80s, researchers developed techniques for cooling atoms to very low temperatures using laser light. The work led to improvements in atomic clocks and the observation of a new ultracold state of matter.
<urn:uuid:8cd230ed-a192-4764-b92e-2e06e10ed1c4>
3.703125
250
Content Listing
Science & Tech.
53.246854
The classic Poiseuille flow approach is a fine approximate solution for situations that satisfy its assumptions. The effect of gravity can be accounted for well by including it in the pressure-drop term. It should work fine for water being sucked through a soda-straw. Surface tension forces won't be large for water or milkshakes sucked through an ordinary soda-straw (~5 mm dia.). Surface tension (and the two non-dimensional numbers you mention) would become useful in problems of fluid flow through porous media where liquid is flowing through capillaries. I think milkshakes behave differently than water because they are non-Newtonian. When I pull the straw out of the milkshake, a thick layer (2-3 mm) clings to the outside of the straw ... I can set a cherry on the top of my milkshake and it does not sink into the glass. That's not because it's floating (cherries are more dense than milkshakes); it's because the forces in the milkshake beneath the cherry do not exceed the critical shear stress required to cause flow. Below this critical shear stress, milkshakes behave as a solid. Note that the critical shear stress (a material property of milkshakes) multiplied by the characteristic length of the zone of yielding flow beneath the cherry just happens to have the same dimension as surface tension. But that doesn't mean you're justified in assuming the Bond or Capillary numbers have physical meaning in this case. Dimensions may agree but the physics are different. The Poiseuille flow approach assumes the fluid behaves as a Newtonian fluid. That assumption is likely violated for a milkshake. So Poiseuille flow solutions might be a poor approximation for analyzing milkshakes.
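For reference, the Poiseuille (Hagen-Poiseuille) relation the answer leans on is easy to evaluate. A Python sketch with made-up straw dimensions (not numbers from the thread), valid only under the laminar, Newtonian assumptions discussed above:

```python
import math

def poiseuille_flow(radius, length, delta_p, viscosity):
    """Volumetric flow rate Q = pi * r^4 * dP / (8 * mu * L) for steady,
    laminar, Newtonian flow in a circular pipe (Hagen-Poiseuille law).
    SI units: metres, pascals, Pa*s; returns m^3/s."""
    return math.pi * radius**4 * delta_p / (8 * viscosity * length)

# Illustrative numbers: water (mu ~ 1.0e-3 Pa*s) sucked through a 20 cm
# straw of 2.5 mm radius with a 2 kPa pressure drop.
q = poiseuille_flow(radius=2.5e-3, length=0.20, delta_p=2000.0, viscosity=1.0e-3)
print(round(q * 1e6, 1), "mL/s")  # ≈ 153 mL/s (fast enough that laminar flow would break down)

# The r^4 dependence: doubling the radius multiplies the flow by 16.
q2 = poiseuille_flow(radius=5.0e-3, length=0.20, delta_p=2000.0, viscosity=1.0e-3)
```

Swapping in an effective viscosity a thousand times higher, roughly milkshake-like, drops the predicted flow by the same factor, which is consistent with the thread's point that the fluid's rheology, not surface tension, dominates here.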
<urn:uuid:84e52bbe-cea0-403f-9cbb-cab4d11185b5>
2.796875
375
Q&A Forum
Science & Tech.
50.813513
Our office has recently discovered the brilliance of StumbleUpon. We realized that we can not only use it to discover some great news and information on the very things PNS Energy is interested in, but we can also use it to waste some time when the brain can handle no more. Among the first things we “stumbled” upon were some very interesting stories about the future of the solar industry. One thing that kept popping up was the idea that solar can (and hopefully will) power the entire earth. In the rest of this post I will lay out some of the more exciting ideas floating around the web. Click for larger image. This awesome map shows the surface area required to power the entire Earth (that’s right, the whole world) in 2030. We pulled this map from Treehugger but they ultimately attributed this graphic to the Land Art Generator Initiative. When I first think of the number of solar panels needed to power the entire Earth I think of an unattainable amount of panels, but when it is displayed visually on a map of the Earth, the actual surface area needed is relatively small. Ultimately, we would need 496,805 square kilometers covered in solar panels, which is roughly equal to the area of Spain. This is quite large, but this puts it into a more manageable perspective; “the Saharan Desert is 9,064,958 square kilometers, or 18 times the total required area to fuel the world.” (Land Art Generator Initiative) One of the more alarming quotes from the map/article: “According to the United Nations 170,000 square kilometers of forest is destroyed each year. If we constructed solar farms at the same rate, we would be finished in 3 years.” Going along with installing solar panels in the desert, The Guardian is reporting that an initiative called “Desertec” is breaking ground on a massive plan to install a network of solar farms, wind farms, and concentrated solar plants that will produce 15% of Europe’s electricity by 2050.
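A quick sanity check of the figures quoted above (all values copied straight from the post and its sources):

```python
# Figures quoted in the post
panel_area_km2 = 496_805            # area needed to power the world in 2030
sahara_km2 = 9_064_958              # area of the Saharan Desert
deforested_per_year_km2 = 170_000   # forest destroyed each year (UN figure)

ratio = sahara_km2 / panel_area_km2
years = panel_area_km2 / deforested_per_year_km2

print(round(ratio, 1))  # ≈ 18.2 (the "18 times" claim)
print(round(years, 1))  # ≈ 2.9 (the "finished in 3 years" claim)
```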
Concentrated solar plants are different from solar farms in that they use large mirrors to reflect the heat from the sun to heat up a large greenhouse, which pushes hot air through turbines that produce electricity; here is an example of one being built in Arizona. This initiative is the result of a German physicist who wanted to estimate how much electricity was needed to meet the Earth’s demands. He quickly found out that “in just six hours, the world’s deserts receive more energy from the sun than humans consume in a year. If even a tiny fraction of this energy could be harnessed – an area of Saharan desert the size of Wales could, in theory, power the whole of Europe (The Guardian).” - Map showing how the Desertec plan would work. Solar tower similar to the ones being considered for the Desertec program. The last ‘out of this world’ example of solar energy production is just that. One day, scientists hope that solar arrays will be deployed into space. They will then collect unfiltered sunlight and beam the energy produced down to Earth. A solar satellite in space can be exposed to sunlight 24 hours a day and can also transmit that energy to whichever area on Earth has the highest demand at that time. This would ensure that whoever and whatever needed energy would have access to it immediately. Think of the impact this would have on groups like our military. Instant energy could be beamed to remote locations all over the world, allowing troops to set up small, mobile forward operating bases and still have access to all the energy they need. This idea appears to be in the very early stage of development, but down the line, anything is possible. All in all I hope to someday reap the benefits of these solar dreams, and if scientific news is any predictor of the future, there is a good chance that I will see some sort of these plans put into actual use. Picture the future of solar
<urn:uuid:2c36c7d6-4fb4-42dd-a51f-561f3c99fe7b>
2.875
829
Personal Blog
Science & Tech.
45.812464
Fun Solar System Facts The solar system has provided mysteries for scientists, philosophers, and religious leaders to ponder for millennia. In ancient times, the visibility of planets in the sky signified omens, while in the twentieth century, man landed ships on the moon and explored Mars and beyond with space probes. Solar System Facts - The solar system is composed of eight planets: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune. The commonly accepted ninth planet, Pluto, was downgraded to a dwarf planet in 2006. - Venus is the brightest planet in the Earth's sky. Its yellow clouds are composed of sulfuric acid and reflect the sun's light. - Three-quarters of Earth's surface is covered by water. - Olympus Mons on Mars is the largest volcano in the solar system. - Discovered in 1801, the dwarf planet Ceres lies between Mars and Jupiter. - Jupiter is the largest planet in the solar system. The red spot on the gas giant is a storm center that is larger than Earth. - Saturn is the second largest planet in the solar system. Its rings are thought to be composed of a moon's remnants. - Neptune is on the outer rim of the solar system and was discovered in 1846. - Pluto's eccentric orbit periodically brings it closer to the sun than Neptune. - In 2005, astronomers discovered a planetary body beyond Pluto, which they named Eris. It is actually larger than Pluto. - Haumea is another dwarf planet identified beyond Neptune, but still within the solar system. - Beyond Pluto and Eris is a planetoid named Sedna. It takes more than 10,000 Earth years to orbit the sun. Planets, Satellites, Comets and More - The downgrade of Pluto to a dwarf planet confused generations of school children who grew up memorizing the solar system's nine planets. Since that downgrade, scientists have identified more than 13 planetary-like bodies within the solar system.
- A planet is expected to orbit in roughly the same plane as the others; Pluto's tilted, shifting orbit helped downgrade it from planet to dwarf planet. - The Hubble Telescope is a multi-billion dollar device set into orbit that helps astronomers and scientists discover more information about the solar system and beyond. - The inner planets of Mercury, Venus, Earth, and Mars are terrestrial and composed of metal and rock. - The outer planets of Jupiter, Saturn, Uranus and Neptune are primarily composed of hydrogen, helium, and other gases. - An asteroid belt lies between Mars and Jupiter's orbits. - Famous astronomers who helped define human understanding of the solar system include Johannes Kepler, Isaac Newton, Nicolaus Copernicus, and Galileo Galilei. - The Earth has one satellite: the moon. - Mercury and Venus have no satellites. - Venus rotates in the opposite direction from all other planets in the solar system. - Mars has two moons: Phobos and Deimos. They are believed to be captured asteroids and are named for the children of Ares in Greek mythology. - Many of the planets are named for gods in Roman and Greek mythology, including Mars (the Roman name for Ares), Venus (the Roman name for Aphrodite), Mercury (the Roman name for Hermes), and Jupiter (the Roman name for Zeus). - Jupiter has a whopping 63 moons. The four largest moons are Ganymede, Callisto, Europa, and Io. - Ganymede would qualify for planetary status if it were not trapped in Jupiter's orbit. - Many of the planets have multiple moons, including Saturn with 60, Neptune with 13, and Uranus with 27. - Neptune's moons are named for beings associated with the God of the Seas, including Naiad, Thalassa, Despina, Galatea, Larissa, Proteus, Triton, Nereid, Halimede, Sao, Laomedeia, Psamathe, and Neso. - The moons of Uranus are named for fairies such as Titania, Oberon, and Ariel. - Pluto, the dwarf planet, has three known moons -- Charon, Hydra, and Styx.
Technology helps us to increase our knowledge of the solar system with each passing year, from the identification of planets and satellites to the movement of celestial bodies such as comets.
Today the Sun stands still at 05:30 UT. Halting its steady march toward southern declinations and beginning its annual journey north, the event is known as a solstice. In the northern hemisphere, December's solstice marks the astronomical start of winter. And if you're in the Great Basin Desert outside of Lucin, Utah, USA, near solstice dates you can watch the Sun rise and set through the Sun Tunnels. A monumental earthwork by artist Nancy Holt, the Sun Tunnels are constructed of four 9-foot-diameter cast concrete pipes, each 18 feet long. The tunnels are arranged in a wide X to achieve the solstitial sunset and sunrise alignments. In this dramatic snapshot through a Sun Tunnel, the Sun is just on the horizon. The cold, cloudy sunset was near the 2010 winter solstice. During daylight hours, holes in the sides of the pipes project spots of sunlight on their interior walls, forming a map of the principal stars in the constellations Draco, Perseus, Columba, and Capricorn. Fans of planet Earth should note that the Sun Tunnels are about 150 miles by car from the earthwork Spiral Jetty by Robert Smithson (Holt's late husband).
calculus, branch of mathematics concerned with the calculation of instantaneous rates of change (differential calculus) and the summation of infinitely many small factors to determine some whole (integral calculus). Two mathematicians, Isaac Newton of England and Gottfried Wilhelm Leibniz of Germany, share credit for having independently developed the calculus in the 17th century. Calculus is now the basic entry point for anyone wishing to study physics, chemistry, biology, economics, finance, or actuarial science. Calculus makes it possible to solve problems as diverse as tracking the position of a space shuttle or predicting the pressure building up behind a dam as the water rises. Computers have become a valuable tool for solving calculus problems that were once considered impossibly difficult.

Calculating curves and areas under curves

The roots of calculus lie in some of the oldest geometry problems on record. The Egyptian Rhind papyrus (c. 1650 BC) gives rules for finding the area of a circle and the volume of a truncated pyramid. Ancient Greek geometers investigated finding tangents to curves, the centre of gravity of plane and solid figures, and the volumes of objects formed by revolving various curves about a fixed axis. By 1635 the Italian mathematician Bonaventura Cavalieri had supplemented the rigorous tools of Greek geometry with heuristic methods that used the idea of infinitely small segments of lines, areas, and volumes. In 1637 the French mathematician-philosopher René Descartes published his invention of analytic geometry for giving algebraic descriptions of geometric figures. Descartes's method, in combination with an ancient idea of curves being generated by a moving point, allowed mathematicians such as Newton to describe motion algebraically. Suddenly geometers could go beyond the single cases and ad hoc methods of previous times.
They could see patterns of results, and so conjecture new results, that the older geometric language had obscured. For example, the Greek geometer Archimedes (c. 285–212/211 BC) discovered as an isolated result that the area of a segment of a parabola is equal to a certain triangle. But with algebraic notation, in which a parabola is written as y = x², Cavalieri and other geometers soon noted that the area between this curve and the x-axis from 0 to a is a³/3 and that a similar rule holds for the curve y = x³, namely, that the corresponding area is a⁴/4. From here it was not difficult for them to guess that the general formula for the area under a curve y = xⁿ is aⁿ⁺¹/(n + 1).

Calculating velocities and slopes

The problem of finding tangents to curves was closely related to an important problem that arose from the Italian scientist Galileo Galilei's investigations of motion, that of finding the velocity at any instant of a particle moving according to some law. Galileo established that in t seconds a freely falling body falls a distance gt²/2, where g is a constant (later interpreted by Newton as the gravitational constant). With the definition of average velocity as the distance per time, the body's average velocity over an interval from t to t + h is given by the expression [g(t + h)²/2 − gt²/2]/h. This simplifies to gt + gh/2 and is called the difference quotient of the function gt²/2. As h approaches 0, this formula approaches gt, which is interpreted as the instantaneous velocity of a falling body at time t. This expression for motion is identical to that obtained for the slope of the tangent to the parabola f(t) = y = gt²/2 at the point t. In this geometric context, the expression gt + gh/2 (or its equivalent [f(t + h) − f(t)]/h) denotes the slope of a secant line connecting the point (t, f(t)) to the nearby point (t + h, f(t + h)) (see figure).
In the limit, with smaller and smaller intervals h, the secant line approaches the tangent line and its slope at the point t. Thus, the difference quotient can be interpreted as instantaneous velocity or as the slope of a tangent to a curve. It was the calculus that established this deep connection between geometry and physics—in the process transforming physics and giving a new impetus to the study of geometry.
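Both classical results above, the area aⁿ⁺¹/(n + 1) under y = xⁿ and the difference quotient of gt²/2 approaching gt, can be checked numerically. A minimal Python sketch (the values a = 2, g = 9.8, t = 3 are illustrative, not from the article):

```python
# Numerical check of two classical calculus results.

def riemann_area(f, a, n=100_000):
    """Approximate the area under f from 0 to a with n midpoint rectangles."""
    dx = a / n
    return sum(f((i + 0.5) * dx) for i in range(n)) * dx

# Area under y = x^2 from 0 to a should approach a^3 / 3.
a = 2.0
assert abs(riemann_area(lambda x: x * x, a) - a**3 / 3) < 1e-6

# Difference quotient of s(t) = g t^2 / 2 equals g*t + g*h/2,
# so it approaches the instantaneous velocity g*t as h shrinks.
g, t = 9.8, 3.0
s = lambda tt: g * tt**2 / 2
for h in (1.0, 0.1, 0.001):
    print(h, (s(t + h) - s(t)) / h)
assert abs((s(t + 1e-6) - s(t)) / 1e-6 - g * t) < 1e-3
```

The printed difference quotients visibly close in on g·t = 29.4 as h shrinks, exactly the limit the article describes.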
Microsoft introduced table variables in SQL Server. Table variables can be used in place of temporary tables: like temp tables, they hold intermediate result sets. The following statement declares a table variable; the syntax is very similar to a CREATE TABLE statement in SQL.

Declare @customersvar Table (
Id int identity(1,1),
customerID nchar(5) NOT NULL,
Name varchar(50),
Address varchar(max),
PhoneNo varchar(50)
)

We can write the following INSERT statement to add a row to the table variable.

Insert into @customersvar (customerID, Name, Address, PhoneNo)
Values ('00158', 'Sam', '12 Main St', '555-0100')

We can write the following SELECT statement to read back the table variable.

SELECT * FROM @customersvar

And to retrieve the table variable's first fifty rows, you can write the following SELECT statement.

SELECT TOP 50 * FROM @customersvar

When we create a temporary table (#TABLE), the table is physically created in tempdb, which adds overhead; a table variable is lighter-weight and is often faster for small data sets. Table variables can be used in batches, stored procedures, and user-defined functions (UDFs). You can also UPDATE records in your table variable as well as DELETE records.

UPDATE @customersvar SET Name = 'Reema' WHERE customerID = '00158'
DELETE FROM @customersvar WHERE customerID = '01020'
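Table variables are specific to T-SQL, but the underlying pattern, a session-scoped scratch table for intermediate rows, exists in other engines too. As a rough analogue (not the article's SQL Server feature), here is a sketch using Python's standard-library sqlite3 module and a TEMP table; the table name and sample rows are illustrative:

```python
import sqlite3

# SQLite TEMP tables are visible only to this connection, loosely
# analogous to the local scope of a T-SQL table variable.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TEMP TABLE customersvar (
        Id INTEGER PRIMARY KEY AUTOINCREMENT,  -- like identity(1,1)
        customerID TEXT NOT NULL,
        Name TEXT,
        Address TEXT,
        PhoneNo TEXT
    )
""")
conn.executemany(
    "INSERT INTO customersvar (customerID, Name, Address, PhoneNo) VALUES (?, ?, ?, ?)",
    [("00158", "Sam", "12 Main St", "555-0100"),
     ("01020", "Lee", "34 Oak Ave", "555-0101")],
)
# Update and delete work just as in the T-SQL examples.
conn.execute("UPDATE customersvar SET Name = 'Reema' WHERE customerID = '00158'")
conn.execute("DELETE FROM customersvar WHERE customerID = '01020'")
rows = conn.execute("SELECT customerID, Name FROM customersvar").fetchall()
print(rows)  # [('00158', 'Reema')]
```

The scratch table disappears when the connection closes, mirroring how a table variable goes out of scope at the end of its batch.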
Question: true or false, the value returned by >>> will never be negative? The answer is false, and the book gives the following explanation: for an RHS of 32, no shifting will be done, and the sign stays the same (if negative it stays negative, if positive it stays positive). Can somebody explain what is meant by an RHS of 32 by giving an example? Is this related to int width? Thanks in advance, Ash

Joined: May 21, 2001

Hi Ash, the width of an int is 4 bytes, i.e. 32 bits, so it can be shifted logically by at most 31 positions. The shift distance is taken modulo 32: if the shift is by 33, the actual shift occurs 33 % 32 times, i.e. once only. Now 32 % 32 = 0, so there is no shift when you shift an int by 32! Likewise, for a long the shift distance x is taken modulo 64.

rptl [This message has been edited by Rajesh Patil (edited May 24, 2001).]
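Java masks the shift distance to its low five bits for an int (low six bits for a long), which is why x >>> 32 leaves an int unchanged. Python has no >>> operator, but the behavior can be emulated with masking; the helper name below is my own, not from the thread:

```python
def unsigned_right_shift_32(x: int, n: int) -> int:
    """Emulate Java's int >>> : the shift distance is masked to its low
    5 bits (n & 31), and the value is treated as unsigned 32-bit."""
    return (x & 0xFFFFFFFF) >> (n & 31)

# A shift by 32 masks to 0, so the (unsigned) value is unchanged:
print(unsigned_right_shift_32(-1, 32) == 0xFFFFFFFF)  # Java: -1 >>> 32 == -1
# A shift by 33 masks to 1:
print(unsigned_right_shift_32(-1, 33) == 0x7FFFFFFF)  # Java: -1 >>> 33
```

This makes the book's claim concrete: for a negative int and an RHS of 32, no bits move, so the sign bit survives and the result is still negative in Java.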
| VOL. 23, NO. 18||MARCH 27, 1998 | Prof. Brian Greene: A Universe of at Least 10 Dimensions String Theory Finally Reconciles Theories of Relativity and Gravity BY VIRGIL RENZULLI |String theory requires at least six extra spatial dimensions tightly curled-up to microscopic size. Here we see two such dimensions, curled-up into tiny spheres.| Physicists have spent much of the 20th century answering three major questions and redefining space and time in ways that contradict human intuition. The three questions, all of which deal with the nature of the universe, are: Why can't you run away from a light beam and diminish its approach speed? If the sun were to explode, would you feel the gravitational impact on the Earth's orbit before you saw the explosion eight minutes later? Why are the two major theories in physics (one dealing with stars and galaxies, the other with atoms and subatomic particles, both proved time and time again) mutually incompatible? The answers to these questions have not been easy for physicists to find or for lay people to comprehend. Albert Einstein demonstrated that time slows at great speeds and that space is warped. The current master theory of particle physics holds that all matter is composed of tiny vibrating strings, which is easier to accept than the theory's requirement that there need to be at least six more spatial dimensions in addition to time and the three spatial dimensions that we can perceive. The question of how there can be at least 10 dimensions and probably 11 dimensions when there only appear to be four was one of the issues explored by Professor of Mathematics and Physics Brian Greene in a Graduate School of Arts and Sciences Dean's Distinguished Lecture, Space and Time Since Einstein, delivered Mar. 12 at the University Club. Greene, who is also writing a book on the subject, The Elegant Universe, to be published in January 1999 by W.W. Norton, described the three central conflicts that have driven physics in the 20th century.
The first conflict, which concerns motion and the speed of light, arose in the early 1900s. When an ordinary object such as a baseball or snowball is thrown at us, we can run away from it, causing the speed with which it approaches us to decrease. But if you try to run away from a beam of light, you cannot make it approach you any slower. Light will always approach you at 186,000 miles per second whether you run away from it, run toward it or stand still, said Greene. Einstein resolved the paradox by showing that our intuition regarding space and time was wrong, that our conception of motion (the distance something travels divided by the time it takes to get there) was incorrect. Einstein's Special Theory of Relativity explained that the speed of light is a constant and that at great speeds, time slows down (relatively speaking) and space becomes distorted. But in solving the paradox, Einstein came into conflict with another towering figure of physics, Isaac Newton, and his Theory of Gravity, which holds that the gravitational force is transmitted instantaneously, or faster than the speed of light. If the sun were to explode, said Greene, we would not know about it visually for eight minutes because it would take eight minutes for light from the explosion to reach us from the sun. According to Newton, however, the gravitational disturbance would immediately cause our orbit to abruptly change. So, the influence of gravity, in Newton's Theory, is transmitted much faster than light. Einstein knew that nothing could exceed the speed of light, and for the next decade he struggled to resolve this conflict. His answer is the General Theory of Relativity, by which he showed us how gravity is transmitted through the warping of space, and if you look closely at how the space warps travel, much like ripples in a pond, you find they travel at light speed. And so, gravity is transmitted at exactly the same speed as light.
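Greene's eight-minute figure is simple arithmetic: divide the mean Earth-sun distance by the speed of light. A quick Python check (the numerical values are the standard textbook ones, not from the article):

```python
# Light travel time from the sun to Earth.
distance_m = 1.496e11   # mean Earth-sun distance in meters
c = 2.998e8             # speed of light in m/s

seconds = distance_m / c
print(round(seconds / 60, 1))  # ≈ 8.3 minutes
```

About 499 seconds, or a little over eight minutes, matching the lecture's figure.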
|WHAT MATTER IS MADE OF: As explained by Brian Greene, above, all matter consists of atoms which are themselves composed of electrons swarming around a central nucleus. String theory adds a new ultramicroscopic layer by declaring that subatomic particles actually consist of tiny loops of vibrating energy, strings.| In actuality, then, if the sun were to explode, we would not know about it immediately by an abrupt change in our orbital motion. Instead, exactly when we saw the explosion, we would feel it. Einstein's General Theory of Relativity, which is applicable to things very big (gravity, stars, galaxies), became one of the two pillars upon which 20th century physics is based. The second pillar is Quantum Mechanics, which describes the microscopic structure of the world: atoms and subatomic particles. Each of these pillars has been tested for accuracy, said Greene. Each comes through with flying colors, and yet, the two theories are mutually incompatible. And that has been the driving conflict in physics for the last half century. The heart of Quantum Mechanics is summarized by (Werner) Heisenberg's Uncertainty Principle, and that tells us that there are certain features of the microscopic world that we cannot know with total precision. It's not a limit of technology; there are just some complementary things we can't know simultaneously. For example, Heisenberg showed us that when you look at smaller and smaller regions of space, the amount of energy embodied in that space is known with less and less precision. There is a tremendous amount of roiling, hot, kinetic energy bound up in every little morsel of space, and the smaller the morsel the more the energy. If you've got a lot of energy in tiny distances, it means that space is incredibly frothy and wildly undulating, and these undulations are so violent that they completely destroy Einstein's Geometrical Model of Space, the central principle of General Relativity.
On large scales, such as that of galaxies and beyond, these microscopic kinetic undulations average out to zero; we don't see them. Only when we focus on microscopic distances do we become aware of the tumult that is going on and realize that it is so severe that Einstein's theory falls apart. The conflict continued for half a century until the development of Super String Theory, which reconciles Quantum Mechanics with the General Theory of Relativity. If you examine microscopic particles the way people did in the early part of the century, you come to the conclusion that the elementary constituents of nature are little dots that have no further internal structures, explained Greene. String Theory tells us that if you were to probe inside these dots with a precision not possible with our present technology, you would find each has a little vibrating loop, a vibrating filament of energy, inside of it. And the difference between one particle of matter and another, according to Super String Theory, is the pattern of vibration that the string is undergoing. Different particles can be compared to different notes that an ordinary vibrating violin string can play: electrons, photons, quarks. String Theory also holds that there is a smallest possible distance in the world, the size of the string. And this distance is just large enough that the pernicious small scale quantum undulations predicted by Heisenberg's Uncertainty Principle are avoided. Some people feel cheated with this explanation. What it means is that the problem we thought was there was not there at all. String Theory may also lead to a Unified Theory in which all the principles and theories of physics can be distilled into a single overarching statement. String Theory holds that absolutely everything is a manifestation of a single object: a string. When it vibrates one way, it looks like an electron. When it vibrates another way, it looks like a photon.
All the particles and all the forces are part of a single unified concept. |STRINGS IN ACTION: Two string loops interact by joining together into a third string.| Super String Theory has its own remaking of space-time, said Greene. It requires that it have more than three space dimensions. If strings can only vibrate north and south, east and west, up and down, there are not enough variations to account for all the particles and forces. The equations of String Theory require at least six more spatial dimensions. Greene used an example of a garden hose to explain why we don't see these additional dimensions. From a distance, the hose looks like a straight line, and if an ant lived on the hose, it could move up and down its length. But if you move closer to the hose, you realize it has another dimension, its girth, and the ant could walk around the hose as well. Dimensions, therefore, would come in two types: those that are long and visible and those that are tiny and curled up, existing only on the microscopic level of strings. String Theory has the capacity to describe not only how the universe is, but how it got to be the way it is, said Greene. It may give us an explanation of why there is space and why there is time. In the same way that cloth is made of thread woven together in a pattern, some theorists have suggested that strings themselves are the threads of space and time. Space and time themselves may be the result of an enormous number of little vibrating strings all coalescing together and vibrating in a particular coherent pattern. If so, you can imagine a state of the universe when the strings have not coalesced in that manner, and space and time have not yet been formed. And it is possible that the universe could return to that state. Could strings also coalesce into another kind of universe? In principle, said Greene, it is possible.
Ice age to warming - and back? The Little Ice Age and "the 8,200-year event" are not exactly household terms. Once only a handful of climate scientists puzzled over these episodes of abrupt climate change. Now, the topic is getting close scrutiny from the Pentagon, the halls of Congress, and even Hollywood - where a disaster movie set for release in May depicts a sudden deep freeze. One reason for all the interest? While policymakers have worried long and hard about global warming, which might raise Earth's temperature 1.4 to 5.8 degrees C by century's end, a growing body of evidence suggests natural forces could just as easily plunge Earth's average temperatures downward. In the past, the planet's climate has changed 10 degrees in as little as 10 years. That may not sound like much. But the last time the planet was 10 degrees colder, it was still in an ice age. "There's the very real potential of the climate system changing dramatically and rapidly" in ways that lie outside modern human experience, says Mark Eakin, who heads the paleoclimatology program of the National Oceanic and Atmospheric Administration (NOAA). The possibility of a sudden freeze doesn't mean mankind can relax efforts to curb global warming, many scientists warn. Indeed, given the complexity of Earth's climate, human activities that spew greenhouse gases into the atmosphere may increase the potential for an abrupt cooling. For example: Regional and global climates have undergone quick and dramatic changes even after what would appear to be only gentle prodding by natural influences, Dr. Eakin says. In many cases, that prodding has been far less severe than the changes humans have wrought via industrial emissions of carbon dioxide.
"In the absence of better knowledge, we have to assume that humans are making abrupt climate change more likely - not because humans are worse than nature, it's just because we're changing the system," says Richard Alley, a Penn State University paleoclimatologist. Dr. Alley led a 2002 National Research Council panel that examined abrupt climate change and laid out recommendations for research priorities and possible adaptation strategies. Policymakers are beginning to pay attention. Last week, the Senate Commerce, Science, and Transportation Committee sent to the full Senate a bill that would give NOAA $60 million for research into the causes of abrupt change. The work could help provide more accurate modeling of past and future climate change, perhaps yielding clues that could serve as an early warning to abrupt change. Meanwhile, a report prepared for the Defense Department bids Pentagon planners to elevate the study of abrupt climate change "beyond a scientific debate to a US national security concern." The study was prepared by the Global Business Network, a corporate strategic planning and consulting firm in Emeryville, Calif. These actions are fueled by a growing body of evidence over the past five years that Earth's climate has a history of rapid variations - and that if the paleoclimate record is any indication, this history repeats itself. Some periods, like the Little Ice Age, would cause hardships today, but industrial countries probably could adapt, researchers say. The Little Ice Age lasted roughly from 1300 to around 1870 and dropped temperatures in parts of the northern hemisphere by about 1 degree C. To the casual observer, that drop may seem small. But from a climate and a social standpoint "it was huge," says Lloyd Keigwin Jr., a senior scientist at the Woods Hole Oceanographic Institution (WHOI) in Woods Hole, Mass. The Little Ice Age - actually three distinct cooling periods - chilled northern Europe and parts of the United States. 
It sent the Vikings back to Europe from their outposts in Greenland. Farms in Norway were covered with glaciers and crop failures around Europe caused famines and spikes in grain prices. In 1816, New England experienced its "year without summer," when many crops failed. One researcher argues that the storm that wiped out a large part of the Spanish Armada - and made Sir Francis Drake's job easier - was part of the Little Ice Age pattern.
Physics jokes, humor, and cartoons from Jupiter Scientific; links to science jokes, astronomy jokes, biology jokes, and chemistry jokes. If you didn't get the joke, you probably didn't understand the science behind it. If this is the case, it's a chance for you to learn a little physics. Physics Joke 1: When a third grader was asked to cite Newton's first law, she said, "Bodies in motion remain in motion, and bodies at rest stay in bed unless their mothers call them to get up." Physics Joke 2: Q: What is the name of the first electricity detective? A: Sherlock Ohms Physics Joke 3: Q: Why are quantum physicists so poor at sex? A: Because when they find the position, they can't find the momentum, and when they have the momentum, they can't find the position. Physics Joke 4: A neutron walked into a bar and asked, "How much for a drink?" The bartender replied, "For you, no charge." Physics Joke 5: Q: What did one quantum physicist say when he wanted to fight another quantum physicist? A: Let me atom. Physics Joke 6: Two atoms were walking across a road when one of them said, "I think I lost an electron!" "Really!" the other replied, "Are you sure?" "Yes, I'm absolutely positive." Physics Joke 7: When a third-grade student was asked to define the term "vacuum" in class, she answered, "A vacuum is an empty region of space where the Pope lives." Physics Joke 8: Q: Which right-hand rule do students use on bad physics professors? A: Step 1: Extend your right arm forward from the elbow. Step 2: Keeping your palm facing to the left, stick out your middle finger. Step 3: Rotate your hand 90 degrees clockwise. Physics Joke 9: Here is a teaching tip for physics professors: When a student tries to paraphrase something you have just taught, feed her or him the following line: "I know you believe you understand what you think I said, but I am not sure you realize that what you heard is not what I meant."
This will guarantee that the student will not interrupt your class again until the next semester. Physics Joke 10: Murphy's Ten Laws for String Theorists: (1) If you fix a mistake in a mathematical superstring calculation, another one will show up somewhere else. (2) If your results are based on the work of others, then one such work will turn out to be wrong. (3) The longer your article, the more likely your computer hard disk drive will fail while you are typing the references. (4) The better your research result, the more likely it will be rejected by the referee of a journal; on the other hand, if your work is wrong but not obviously so, it will be accepted for publication right away. (5) If a result seems too good to be true, it is, unless you are one of the top ten string theorists in the world. (By the way, these theorists refer to their results as "string miracles".) (6) Your most startling string-theoretic theorem will turn out to be valid in only two spatial dimensions or less. (7) When giving a string seminar, nobody will follow anything you say after the first minute, but, if miraculously someone does, then that person will point out a flaw in your reasoning half-way through your talk, and what will be worse is that your grant review officer will happen to be in the audience. (8) For years, nobody will ever notice the fudge factors in your calculations, but when you come up for tenure they will surface like fish being tossed fresh breadcrumbs. (9) If you are a graduate student working on string theory, then the field will be dead by the time you get your Ph.D.; even worse, if you start over with a new thesis topic, the new field will also be dead by the time you get your Ph.D. (10) If you discover an interesting string model, then it will predict at least one low-energy, observable particle not seen in Nature.
In summary, anything in string theory that theoretically can go wrong will go wrong, but if nothing does go theoretically wrong, then experimentally it is ruled out. Physics Joke 11: Have you heard that entropy isn't what it used to be? Physics Joke 12: Q: Where does bad light end up? A: In a prism. Physics Joke 13: Title: A Sexual Encounter between a Capacitor and an Inductor. One evening, with his charge at full capacity, Micro Farad decided to get a cute coil to discharge him. He went to the Magnet Bar to pick up a chip called Millie Amp. He caught her out back trying self induction; fortunately, she had not damaged her solenoid. The two took off on his megacycle and rode across the Wheatstone Bridge into a magnetic field, next to a flowing current, to watch the sine waves. Micro Farad was very much stimulated by Millie's characteristic curve. Being attractive himself, he soon had her field fully excited. He set her on the ground potential, raised his frequency, lowered her resistance, and pulled out his high voltage probe. When he inserted it in parallel, he short-circuited her shunt. Fully excited, Millie cried out, "ohm, ohm, give me mho". As he increased his tube to maximum output, her coil vibrated from the current flow. It did not take long for her shunt to reach maximum heat. Now with the excessive current shortening her shunt, Micro's capacity rapidly discharged; every electron was drained off. But that was not the end of it. Indeed, they fluxed all night, tried various connections and hookings until his bar magnet weakened, and he could no longer generate enough voltage to sustain his collapsing field. With his battery fully discharged, Micro was unable to excite his tickler, so they went home. A few weeks later, they were merged forever and oscillated happily ever after. Physics Joke 14: A physics professor, who was teaching a graduate course on superstring theory, decided to add an essay question to this year's final exam.
The instructions read, "Describe the universe in 400 words or less and give three examples." Physics Joke 15: This is apparently a true story. It took place just outside of Munich, Germany. Heisenberg went for a drive and got stopped by a traffic cop. The cop asked, "Do you know how fast you were going?" Heisenberg replied, "No, but I know where I am." Physics Joke 16: The following is a little known, true story about Albert Einstein (attributed to Paul Harvey). Albert Einstein had just about finished his work on the theory of special relativity when he decided to take a break and go on vacation to Mexico. So he hopped on a plane and headed to Acapulco. Each day, late in the afternoon, sporting dark sunglasses, he walked in the white Mexican sand and breathed in the fresh Pacific sea air. On the last day, he paused during his stroll to sit down on a bench and watch the Sun set. When the large orange ball was just disappearing, a last beam of light seemed to radiate toward him. The event brought him back to thinking about his physics work. "What symbol should I use for the speed of light?" he asked himself. The problem was that nearly every Greek letter had been taken for some other purpose. Just then, a beautiful Mexican woman passed by. Albert Einstein just had to say something to her. Almost out of desperation, he asked as he lowered his dark sunglasses, "Do you not zink zat zee speed of light is zery fast?" The woman smiled at Einstein (which, by the way, made his heart sink) and replied, "Si." And now you know the rest of the story. Physics Joke 17: Q: How many theoretical physicists specializing in general relativity does it take to change a light bulb? A: Two. One to hold the bulb and one to rotate the universe. Physics Joke 18: It has been rumored that Edmund Scientific is trying to keep up with the times. The following amusing incident confirms this belief. The Chairman of a Physics Department ordered some lab equipment from the company.
When the package arrived, a secretary opened it and found the following warning label: "Despite its superficial appearance, this product at a microscopic level might be made of strings. Manufacturer will prosecute to the maximum extent of the copyright law any attempt to make a supersymmetric version." Physics Joke 19: Q: What is the simplest way to observe the optical Doppler effect? A: Go out at night and look at cars. The lights of the ones approaching you are white, while the lights of the ones moving away from you are red. Physics Joke 20: The Official Unabashed Scientific Dictionary defines an elementary particle as the dreams that stuff is made of. Physics Joke 21: The Official Unabashed Scientific Dictionary defines a transistor as a nun who's had a sex change. Physics Joke 22: The Official Unabashed Scientific Dictionary defines hyperspace as the place where you park your limousine at a superstore. Physics Joke 23: Does a radioactive cat have 18 half-lives? Physics Joke 24: The Heineken Uncertainty Principle says "You can never be sure how many beers you had last night." Physics Joke 25: When a travel agent was asked if faster-than-light flights were available, she said, "Yes, but tickets must be purchased at least three weeks in advance and a Saturday night stay is required." Physics Joke 26: Q: What did Donald Duck say in his graduate physics class? A: Quark, quark, quark! Physics Joke 27: Q: What did one uranium-238 nucleus say to the other? A: "Gotta split!" Physics Joke 28: Physics quote of the day: Anything that doesn't matter has no mass. Physics Joke 29: According to Einstein's Theory of Relatives, the probability of in-laws visiting you is directly proportional to how much you feel like being left alone. Physics Joke 30: There has been too much action in reaction to political scandals. Please write to your congressman to repeal Newton's third law.
Physics Joke 31: Einstein's favorite limerick was: There was an old lady called Wright who could travel much faster than light. She departed one day in a relative way and returned on the previous night. Physics Joke 32: A Cartoon about CERN. Physics Joke 33: A student riding in a train looks up and sees Einstein sitting next to him. Excited, he asks, "Excuse me, professor. Does Boston stop at this train?" Physics Joke 34: Three months before his 1905 seminal relativity paper, Einstein performed the following thought experiment, which, by the way, is known as a gedanken experiment in theoretical physics: Einstein imagined, "If I vere to put my hand on a hot stove for a minute, it vould seem like an hour. But if I vere to sit with a pretty girl for an hour, it vould seem like a minute. By Jove, I think time is relative." Physics Joke 35: A little boy refused to run anymore. When his mother asked him why, he replied, "I heard that the faster you go, the shorter you become." Physics Joke 36: A six-year-old boy spotted Albert Einstein walking down the street and decided to try out his favorite joke on him: "Mr. Einstein! Why did the chicken cross the road?" To which the famous physicist replied, "My young burgeoning mind, zee question does not have a definite anzer. Vether zee chicken crossed zee road or zee road crossed zee chicken depends on your frame of reference." Physics Joke 37: There is a sign in Munich that says, "Heisenberg might have slept here." Physics Joke 38: Jupiter Scientific is pleased to report that physicists have embarked on their own product safety campaign, recommending that manufacturers provide consumers with all of the following labels: WARNING: Due to its heavy mass, this product warps the space surrounding it. No health hazards are yet known to be associated with this effect. NOTE: This product may actually be nine-dimensional but, if this is the case, functionality is not affected by the extra six dimensions.
HEALTH WARNING: This product (and every product of the Manufacturer) emits low-level nuclear radiation.

NOTE: A subatomic "glue" holds the fundamental constituents of this product together. Since the exact nature of this glue is not yet fully understood, its adhesive power cannot be guaranteed. To date, no known malfunction of the product has resulted from glue failure.

DISCLAIMER: Manufacturer is not responsible for loss should this product disappear into a wormhole.

LIMITED WARRANTY: Despite the efforts of the Manufacturer, the chaos in this package has increased since being shipped. If such chaos has rendered the product defective, Buyer shall not hold Manufacturer responsible. Claims in this regard should be aimed directly at the Shipper.

NOTE: Despite its appearance, this product is more than 99.99% empty space.

READ THIS BEFORE OPENING: According to quantum theory, this product may collapse into another state if directly observed.

HANDLE WITH CARE: This product contains countless, minute, electrically charged particles moving at extremely high speeds.

EXTREME CAUTION: This product has an energy-equivalent that, if exploded, could destroy a small town. Under no circumstance shall a User perform a mass-energy transformation on any of the contents in this package. In case of misuse, liability shall rest entirely with the User.

GUARANTEED RETURN CLAUSE: Because of the uncertainty principle, we have shipped this product with a limited speed notice. However, if shippers have disregarded our notice, we cannot guarantee that all the contents are in the box. If you discover missing components, please call the 1-800 number on the instruction sheet.

IMPORTANT: This product is composed of 100% matter: It is the responsibility of the User to make sure that it does not come in contact with antimatter. Under no circumstances will the Manufacturer be liable for User mishandling in this regard.
QUALITY STANDARD: The electrons, protons and neutrons are guaranteed to be of the same quality as those used in other products of the Manufacturer.

DISAPPEARANCE EXCLUSION: Due to quantum tunneling, there is an extremely tiny chance that this product may suddenly disappear at any time (and reappear elsewhere). The Manufacturer will not be responsible for such mysterious disappearances.

AS REQUIRED BY LAW, we must inform you that any use of this product increases the amount of disorder in the universe. As of the date shipped, Congress has not passed any bills assigning a tax on disorder pollution.

USE LIMITATION: This product cannot be guaranteed to function normally near a black hole.

Physics Joke 39: Ten little known facts about relativity: (1) Nothing in the known universe travels faster than a bad check. (2) Energy equals milk chocolate square (attributed to Albert E. Hersey). (3) Delivery of Christmas gifts by Santa to the children of the world is now accomplished by riding Rudolf the red-shift reindeer. (4) The general relativity theory of gravitation is responsible for people falling in love. (5) The speed of an IRS tax refund is constant. (6) Anger is neither created nor conserved but only changed from one form to another. (7) The speed of time is one second per second, which is also called the fundamental unity. (8) Death and taxes are the same for all constantly moving observers. (9) Moving midgets are shortened. (10) Divorce and alimony are equivalent but the latter is multiplied by an enormous factor.

Physics Joke 40: Q: How were three graduate physics students able to demonstrate that a human could travel faster than light? A: The three students went to a store and bought a stop watch and a candle. Then, they proceeded to a high school track field. The first student lit the candle and began to walk around the track. The second student waited a while and then ran after the first student.
The third student worked the stop watch because physics experiments require precise measurements. When the second student rounded the track and came in first, the three students concluded that humans could travel faster than light.

Physics Joke 41: What is the difference between an ohm and a coulomb? Find out in this cartoon (size=13 K)

Physics Joke 42: Q: What did the male magnet say to the female magnet? A: From your backside, I thought you were repulsive. However, after seeing you from the front, I find you rather attractive.

The jokes on this page were compiled by the staff of an organization devoted to the promotion of science through books, the internet and other means of communication. This web page may NOT be copied onto other web sites, but other sites may link to this page. Copyright ©2002 by Jupiter Scientific
- Inferring extinction of mammals from sighting records, threats, and biological traitsDiana O Fisher The University of Queensland, School of Biological Sciences, St Lucia 4072, QLD, Australia Conserv Biol 26:57-67. 2012..Our new method can be used to test whether species with few records or recent last-sighting dates are likely to be extinct... - Correlates of rediscovery and the detectability of extinction in mammalsDiana O Fisher School of Biological Sciences, Goddard Building 8, The University of Queensland, St Lucia, Queensland 4072, Australia Proc Biol Sci 278:1090-7. 2011.... - Costs of reproduction and terminal investment by females in a semelparous marsupialDiana O Fisher School of Biological Sciences, The University of Queensland, St Lucia, Australia PLoS ONE 6:e15226. 2011..Iteroparity did not increase lifetime reproductive success, indicating that terminal investment in the first breeding season at the expense of maternal survival (i.e. semelparity) is likely to be advantageous for females...
Physical Science: Session 7 A Closer Look: Why do We Need Heated Towel Racks? In this video, weather forecaster Bill Babcock tells us that, when we step out of the shower, it takes energy from our skin to turn liquid water into water vapor via evaporation. To understand this process better, let's take a closer look at what happens at the microscopic level. Recall that earlier in the session we looked at what happens when energy is transferred as heat from a hot mug of tea to your hand: Essentially, the process we are about to describe is the reverse of that process: heat flows from your skin to the water. In both your skin and the water on your skin, the particles are moving with a distribution of speeds (i.e., some move faster than others) but the average energy of their motion is related to the temperature of your skin and the temperature of the water. However, the particles in the water that are moving faster than average may be moving fast enough to overcome the pull they feel from their neighbors and break away from the surface of the liquid water into the air. By doing this, the remaining water is at a lower temperature because the average energy of motion of all the particles has gone down. See the side illustration for clarification. Since there is now a temperature difference between your skin and the water on it, heat flows to the water. On a microscopic level, your skin particles collide with the water molecules, transferring some energy of motion. As a result, the water molecules move faster and your skin particles move slower. This leaves the temperature of your skin lower and you feel colder, while the temperature of the water goes up and the faster-moving molecules escape. The process keeps repeating until all the water is gone. This is why it doesn't feel as cold if you towel off quickly. This same process happens at the surface of any liquid that is evaporating. 
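The microscopic picture above lends itself to a quick numerical sketch. The following Python snippet is illustrative only (the particle count and speed distribution are invented for the demo): it removes the fastest few percent of particles from a toy "liquid" and shows that the average energy of motion of what remains, our stand-in for temperature, goes down.

```python
import random

random.seed(42)  # deterministic demo

def mean_ke(speeds, m=1.0):
    """Mean kinetic energy (1/2 m v^2) -- our stand-in for temperature."""
    return sum(0.5 * m * v * v for v in speeds) / len(speeds)

# A toy liquid: particle speeds drawn from a spread of values,
# so some particles move much faster than the average.
speeds = [random.gauss(1.0, 0.3) for _ in range(10_000)]

# Evaporation: the fastest 5% of particles escape from the surface.
speeds.sort()
remaining = speeds[: int(len(speeds) * 0.95)]

print(mean_ke(speeds), mean_ke(remaining))
# The remaining liquid's average energy of motion -- its temperature -- is lower.
```

Because the escaping particles carry more than the average energy, the mean for the remainder must drop, which is exactly the cooling the passage describes.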
Try observing what happens in the microscopic world when heat is transferred in the Session 3 Virtual Particle Lab.
catch tag form* => result*

Arguments and Values:
tag---a catch tag; evaluated.
forms---an implicit progn.
results---if the forms exit normally, the values returned by the forms; if a throw occurs to the tag, the values that are thrown.

Description:
catch is used as the destination of a non-local control transfer by throw. Tags are used to find the catch to which a throw is transferring control. (catch 'foo form) catches a (throw 'foo form) but not a (throw 'bar form).

catch is executed as follows: if, during the execution of one of the forms, a throw is executed whose tag is eq to the catch tag, then the values specified by the throw are returned as the result of the dynamically most recently established catch form with that tag.

The mechanism for catch and throw works even if throw is not within the lexical scope of catch. throw must occur within the dynamic extent of the evaluation of the body of a catch with a corresponding tag.

Examples:
(catch 'dummy-tag 1 2 (throw 'dummy-tag 3) 4) => 3
(catch 'dummy-tag 1 2 3 4) => 4
(defun throw-back (tag) (throw tag t)) => THROW-BACK
(catch 'dummy-tag (throw-back 'dummy-tag) 2) => T

;; Contrast behavior of this example with corresponding example of BLOCK.
(catch 'c
  (flet ((c1 () (throw 'c 1)))
    (catch 'c (c1) (print 'unreachable))
    2)) => 2

Affected By: None.

Exceptional Situations:
An error of type control-error is signaled if throw is done when there is no suitable catch tag.

See Also:
throw, Section 3.1 (Evaluation)

Notes:
It is customary for symbols to be used as tags, but any object is permitted. However, numbers should not be used because the comparison is done using eq. catch differs from block in that catch tags have dynamic scope while block names have lexical scope.
Science Behind the Energy Network

Dr. Robert Ulanowicz, creator of the Energy Network for the mid-Chesapeake Bay.

All the essential components of life — DNA, proteins, enzymes, fats, and most notably, carbohydrates (sugars) — are built with carbon molecules. Carbon supports every living thing on our planet, beginning with plants, which perform photosynthesis — capturing energy from sunlight to convert water and carbon dioxide into oxygen and glucose (a carbon-based sugar). As plants use glucose, and then as animals eat plants, and as other animals eat those animals, carbon gets passed throughout the food web. Each organism breaks down carbon molecules, releasing energy stored in the chemical bonds. Although most of this energy is lost as heat, the remaining energy provides fuel for activities from swimming to breathing to building new carbon-based molecules, whether proteins, enzymes, DNA, fats, or sugars.

In the late 1970s, when Chesapeake Biological Laboratory scientist Dr. Robert Ulanowicz set out to understand the food web interactions of Bay creatures, he knew where to start — follow the carbon. After years of research Ulanowicz completed an energy network for the mid-Chesapeake Bay — a “who eats whom” diagram showing the cycle of carbon throughout the 36 major players of the middle Bay. Read a Chesapeake Quarterly article on Ulanowicz and the development of the energy network.

Looking exclusively at the middle (or mesohaline) Bay where estuarine waters are generally well mixed — not as salty as in the southern Bay and not as fresh as in the northern Bay — the diagram provides a framework for understanding how the Chesapeake ecosystem functions. It shows how tiny phytoplankton ultimately sustain large carnivorous fish, how dead plant material (detritus) feeds bottom dwellers like worms and crabs, how top predators feast on small fish like menhaden and shad.
This picture of the interactions beneath the surface highlights that the many creatures of the Bay depend on each other — and they all depend on carbon. Interactive Food Web Visit Food Web Levels to learn about 36 major species of the mid-Chesapeake Bay. See how they comprise the lower, middle, and top levels of the food web and mouse over their names to learn about their role in the Bay. Then click on the Energy Network to explore the flow of energy through the mid-Chesapeake’s food web. Roll your mouse over the orange polygons (striped bass, blue crabs, or oysters) to highlight who eats whom. Trace the red lines and arrows connecting other species to see who they eat and who eats them. Ecosystem Network Analysis Network Analysis of the Trophic Dynamics of South Florida Ecosystems
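The "who eats whom" structure described above is naturally a directed graph. Here is a minimal Python sketch — the species and feeding links are illustrative stand-ins, not Ulanowicz's actual 36-node network — showing how such a network can be queried in either direction:

```python
# A toy slice of a Chesapeake-style food web, as a directed graph:
# each key maps a predator to the things it eats.
eats = {
    "striped bass": ["menhaden", "blue crab"],
    "blue crab":    ["worms", "detritus"],
    "menhaden":     ["phytoplankton"],
    "worms":        ["detritus"],
    "oyster":       ["phytoplankton"],
}

def eaten_by(web, prey):
    """Invert the 'who eats whom' map: everything that feeds on `prey`."""
    return sorted(pred for pred, diet in web.items() if prey in diet)

print(eaten_by(eats, "phytoplankton"))  # ['menhaden', 'oyster']
print(eaten_by(eats, "detritus"))       # ['blue crab', 'worms']
```

Tracing the graph downward from any predator to phytoplankton or detritus mirrors following the red lines in the interactive diagram.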
Crystal System: Hexagonal, Trigonal
Status of Occurrence: Confirmed Occurrence
Distribution: Locally Abundant
Chemical Composition: Magnesium silicate hydroxide
Chemical Formula: Mg3Si2O5(OH)4
Method(s) of Verification: Anglesey – XRD (Maltman, 1977); Rhiw – XRD (Natural History Museum, no. 5920F).
- Hydrothermal: serpentinization

Dark green lizardite from Rhoscolyn, Holy Island, Anglesey. Specimen 8 cm across. National Museum of Wales Collection (NMW 83.15G.M.1). Photo T.F. Cotterell, © National Museum of Wales.

Introduction: lizardite belongs to the kaolinite-serpentine group of minerals and is one of three minerals (antigorite, lizardite and chrysotile) commonly referred to as ‘serpentine’. Antigorite and lizardite are soft green platy minerals, whereas chrysotile is fibrous. These minerals commonly result from the hydrothermal or retrograde metamorphism of mafic minerals such as olivine, pyroxene or amphibole, in ultrabasic rocks. Lizardite is the most common of the three serpentine minerals and is typically found with brucite and magnetite.

Occurrence in Wales: lizardite has been recorded from two occurrences of ultrabasic rocks in Wales.
- Holy Island, Anglesey: lizardite occurs associated with antigorite in serpentinized peridotite bodies within the New Harbour Group (Monian Supergroup) metasediments. Textures show that the lizardite replaces primary olivine and orthopyroxene (Maltman, 1977).
- Rhiw, Llŷn, Gwynedd: the presence of lizardite, in ultrabasic rocks of the Rhiw Intrusion, has been confirmed by X-ray diffraction analysis.

- Greenly, E., 1919. The Geology of Anglesey. Memoirs of the Geological Survey of Great Britain, 980pp (2 volumes).
- Maltman, A.J., 1977. Serpentinites and related rocks of Anglesey. Geological Journal, 12, 113-128.
The wooden hearts of two cedar trees hold a 1200-year-old cosmic mystery – evidence of an unexplained event that rocked our planet in the 8th century. Cosmic rays are subatomic particles that tear through space. When they reach Earth they react with the oxygen and nitrogen in the atmosphere, producing new particles. One of these – carbon-14 – is taken up by trees during photosynthesis and is "fixed" in the tree's annual growth ring. Fusa Miyake at Nagoya University, Japan, and his colleagues examined the carbon-14 content of two Japanese cedar trees and were surprised to find that there was a 1.2 per cent increase in the amount of the isotope between AD 774 and 775. The typical annual variation is just 0.05 per cent. Miyake also found an increase in the carbon-14 record of North American and European trees around that time, as well as an increase in the isotope beryllium-10 in Antarctic ice cores – another isotope produced by cosmic rays. What cosmic event led to the ray boost? A supernova would do it, but Miyake points out that such an event would have left a visible trace in today's sky. It could have been a solar flare – but only if the flare was more energetic than any discovered so far. "I cannot imagine a single flare which would be so bright," says Igor Moskalenko, an astrophysicist at Stanford University, California, who was not involved in the work. "Rather, it may be a series of weaker flares over the period of one to three years." This is not the first time that tree records have suggested a cosmic event occurred in the mid 770s. Researchers from Queen's University Belfast, UK, also recently found an increase in carbon-14 in tree rings at that time but their work has yet to be published. Mike Baillie, a tree ring researcher at Queen's, has found evidence in the historical record that suggests something unusual did indeed happen at that time. 
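The scale of the AD 774–775 anomaly is easy to check from the figures quoted above: a 1.2 per cent jump in carbon-14 against a typical annual variation of 0.05 per cent. A quick Python check (the two numbers come straight from the article):

```python
# Figures quoted in the article.
anomaly_percent = 1.2    # carbon-14 rise between AD 774 and 775
typical_percent = 0.05   # normal year-to-year variation

# How many times larger than a normal annual wiggle was the spike?
ratio = anomaly_percent / typical_percent
print(round(ratio))   # 24 -- roughly twenty-four times the usual variation
```

A jump of that size is why the researchers look for an exceptional cosmic-ray event rather than ordinary solar variability.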
The 13th-century English chronicler Roger of Wendover is quoted as saying: "In the Year of our Lord 776, fiery and fearful signs were seen in the heavens after sunset; and serpents appeared in Sussex, as if they were sprung out of the ground, to the astonishment of all."

Journal reference: Nature, DOI: 10.1038/nature11123

Reader comments:

The Long Arm Of Science (Mon Jun 04 06:10:19 BST 2012, by Edwin Holloway): Impressive for its incredible detail of observations across space and time.

I'd Say A Supernova (Thu Jun 07 16:12:13 BST 2012, by Eric Kvaalen): We just had an article three months ago explaining that there must be a lot of supernova remnants in the Milky Way which we haven't detected yet: (long URL - click here)

(Sat Jun 16 20:41:05 BST 2012, by Tony Sheridan): I'm serious - Sirius. I seem to remember that there is some reason to think that Sirius appeared as red in Classical times. This could perhaps mean that Sirius B was in a red giant phase 2000 years ago. In which case the collapse to a white dwarf might have happened c770 CE, or rather the effects of the collapse could have reached us at that time. The actual collapse would have been a few years earlier.
The Earth and the dark star that is the second focus of the 12th Planet's orbit do not rotate around each other any more than the planets in your Solar System rotate around each other. The reason for the latter is that the Sun dominates the planets, and their influence on each other becomes the lesser voice. In like manner your Sun and this dark star, of a comparable size, are caught in a larger net and are essentially motionless within your Galaxy.

This net exists for all the stars in your Galaxy, as elsewhere, and is the reason the stars in the sky do not lose their position and float toward each other. It is not that they are so far apart that they do not influence each other. Influence, however slight, is always there. It is rather that influences have been balanced to where an equilibrium is reached. To you, who see that distance is maintained, it looks like the lack of influence. It is balanced influence.

Were you to have seen your galaxy born, clumping into masses with these masses first attracted and then to some degree repulsed by each other, motion initiated as a result of these opposing forces, you would intuitively understand that large bodies that cease motion do so not because there is no influence upon them and not because they were not at one time in motion, but because they came to a situation where they essentially are in a dither. The influences upon them are balanced.

This second focus of the 12th Planet has not been located by your astronomers because it is dark, not lit, and does not happen to block any view your astronomers are particularly interested in. They think it empty space. Unlike the Sun, this dark twin never lit. Although comparable in size and mass, its composition was subtly different, and it has no potential for becoming a lit sun under the present conditions in your part of the Universe. It has no planets of any size to mention, though is orbited by a lot of trash.
Should one wish to search for it, it stands at an angle of 11 degrees off the Earth's orbital plane around the Sun, in the same direction we have given for the approach of the 12th Planet. Not being a luminous body, and not giving off any radiation detectable by human devices, you will be unable to locate it, but this does not mean that it is not there. Do you, like a child with his hands over his eyes, think that if you cannot see something that it does not exist?
[Click to envioletenate.]

Pretty cool. First, of course, the purple color is not real. It’s just the color Andre chose for this picture when he processed it. Second, he used an Hα filter, which lets through a very narrow slice of light (actually in the red part of the spectrum). This color is emitted by warm hydrogen, and is preferentially under the influence of the Sun’s magnetism. You can see arching prominences – huge towers of gas – off the edge of the Sun. The long stringy bits on the face of the Sun are called filaments, and are actually the exact same thing as prominences! Prominences are filaments we see from the side, instead of looking down on them. The terminology is a holdover from when astronomers first started observing the Sun, and we’re kinda stuck with it.

Also, Andre inverted the picture, so what looks black is actually very bright, and what looks bright is very dark. Those bright white blotches? Sunspots. For some reason, our brains can pick out detail better that way, and it also gives an eerie 3D sense to the image.

He made a close-up mosaic of his pictures, too, which is actually a bit creepy. It’ll keep the Halloween spirit going for another day, at least!

Image credit: Andre van der Hoeven, used by permission.

Related posts:
- Jaw-dropping Moon mosaic (yes, you want to click that)
- Zoom in – and in and IN – on an Austrian glacier
- Incredible panorama of the summer sky
- A spiral that can beat you with two arms tied behind its back
Electromagnetic Radiation & Electromagnetic Spectrum

The word light usually makes one think of the colors of the rainbow or light from the Sun or a lamp. This light, however, is only one type of electromagnetic radiation. Electromagnetic radiation comes in a range of energies, known as the electromagnetic spectrum. The spectrum consists of radiation such as gamma rays, x-rays, ultraviolet, visible, infrared and radio.

Electromagnetic radiation travels in waves, just like waves in an ocean. The energy of the radiation depends on the distance between the crests (the highest points) of the waves, or the wavelength. In general, the shorter the wavelength, the higher the energy of the radiation.

Gamma rays have wavelengths less than ten trillionths of a meter, which is about the size of the nucleus of an atom. This means that gamma rays have very high energy. Radio waves, on the other hand, have wavelengths that range from less than one centimeter to greater than 100 meters (bigger than the size of a football field)! The energy of radio waves is much lower than the energy of other types of electromagnetic radiation.

The only type of light detectable by the human eye is visible light. It has wavelengths about the size of a bacterial cell, and its energies fall between those of radio waves and gamma rays.
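The wavelength-energy relationship described above is quantified by E = hc/λ: halve the wavelength and you double the photon energy. A short Python sketch (the example wavelengths are typical illustrative values for each band, not figures quoted on this page):

```python
# Photon energy from wavelength, E = h*c / lambda:
# shorter wavelength means higher energy, as the passage describes.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

def photon_energy_ev(wavelength_m):
    """Photon energy in electron-volts for a given wavelength in meters."""
    return h * c / wavelength_m / 1.602e-19   # joules -> eV

# Illustrative wavelengths: 1 m radio wave, 550 nm green light, 1 pm gamma ray.
for name, lam in [("radio", 1.0), ("visible", 550e-9), ("gamma", 1e-12)]:
    print(f"{name:8s} {photon_energy_ev(lam):.3g} eV")
```

Running this shows the enormous span of the spectrum: roughly a micro-electron-volt for the radio wave, a couple of eV for visible light, and over a million eV for the gamma ray.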
1 Using HTML and CSS

Formatting and semantic markup

You can set the font to whatever typeface you wish, but be aware that obscure fonts are unlikely to exist on most people's computers. Also note that fonts whose names have spaces in them must be surrounded by single quotes.

Result: Times New Roman

You can (and usually should) specify a list of fonts, separated by commas. Subsequent entries in the list are used if the first font is unavailable on a user's system. Preferably, the list should end with what is known as a generic family name. The generic family names are serif, sans-serif, cursive, fantasy and monospace. These indicate what type of font should be substituted if the user has none of the specified fonts on their machine.

Result: Times, TNR,

Result: Lucida or any cursive

There are two ways to specify font colours - by name or by value. Names are things like "Blue", "DarkOrchid", or "LightGoldenRodYellow" and a list of the valid colour names can be found here. Values are specified with 8-bit hexadecimal values for each of the red, green and blue (RGB) channels. They are preceded with a hash (#) sign.

Combining font settings

You can combine as many CSS properties into one style="" attribute as you like - each property : value pair should be separated by a semicolon (which should also appear at the end of every style attribute).

Result: Red 12 point Times

Result: Bold 11 point Courier

Shorthand notation for font settings

Instead of setting the size, style and typeface individually, you can take advantage of the CSS font: shorthand notation. Basically, you simply put whatever font settings you want, separated by spaces, in the font: property. You can put any valid value for font-style, font-variant, font-weight, font-size, and font-family. For this shorthand notation to be recognized, you must include both a font-family setting, and a font-size setting.
Result: 12pt Times bold

Result: Italic, small caps, 14pt Arial

Interestingly, this is the only way we can access the line-height: property. Setting the line-height on its own won't work because it gets removed. But the font: shorthand will accept a value for font-size that includes a line-height parameter (placed immediately after the font-size setting and separated by a forward slash).

Result: Set font-size to 12pt and line-height to 28pt

There are six different heading levels in HTML, with <h1> being the most prominent.

Heading size five

HTML character entities

Character entities are used to insert special characters, including the < and > brackets that normally denote an HTML tag. You can find one list of character entities here but there are many lists on the net.

Result: & < > " ¢ £ ¥ © ® ™ °
An approximation is a representation of something that is not exact, but still close enough to be useful. Although approximation is most often applied to numbers, it is also frequently applied to such things as mathematical functions, shapes, and physical laws. Approximations may be used because incomplete information prevents use of exact representations. Many problems in physics are either too complex to solve analytically, or impossible to solve using the available analytical tools. Thus, even when the exact representation is known, an approximation may yield a sufficiently accurate solution while reducing the complexity of the problem significantly. For instance, physicists often approximate the shape of the Earth as a sphere even though more accurate representations are possible, because many physical behaviours (e.g. gravity) are much easier to calculate for a sphere than for other shapes. It is difficult to exactly analyze the motion of several planets orbiting a star, for example, due to the complex interactions of the planets' gravitational effects on each other, so an approximate solution is effected by performing iterations. In the first iteration, the planets' gravitational interactions are ignored, and the star is assumed to be fixed. If a more precise solution is desired, another iteration is then performed, using the positions and motions of the planets as identified in the first iteration, but adding a first-order gravity interaction from each planet on the others. This process may be repeated until a satisfactorily precise solution is obtained. The use of perturbations to correct for the errors can yield more accurate solutions. Simulations of the motions of the planets and the star also yield more accurate solutions.
As another example, in order to accelerate the convergence rate of evolutionary algorithms, fitness approximation—that leads to build model of the fitness function to choose smart search steps—is a good solution. The type of approximation used depends on the available information, the degree of accuracy required, the sensitivity of the problem to this data, and the savings (usually in time and effort) that can be achieved by approximation. The approximation also refers to using a simpler process. This model is used to make predictions easier. The most common versions of philosophy of science accept that empirical measurements are always approximations—they do not perfectly represent what is being measured. The history of science indicates that the scientific laws commonly felt to be true at any time in history are only approximations to some deeper set of laws. For example, attempting to resolve a model using outdated physical laws alone incorporates an inherent source of error, which should be corrected by approximating the quantum effects not present in these laws. Each time a newer set of laws is proposed, it is required that in the limiting situations in which the older set of laws were tested against experiments, the newer laws are nearly identical to the older laws, to within the measurement uncertainties of the older measurements. This is the correspondence principle. Approximation usually occurs when an exact form or an exact numerical number is unknown or difficult to obtain. However some known form may exist and may be able to represent the real form so that no significant deviation can be found. It also is used when a number is not rational, such as the number π, which often is shortened to 3.14159, or √2 to 1.414. Numerical approximations sometimes result from using a small number of significant digits. Approximation theory is a branch of mathematics, a quantitative part of functional analysis. 
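The iterate-until-good-enough strategy described for the planetary problem is the same idea behind standard numerical routines. A minimal Python sketch, using Newton's method to approximate the √2 ≈ 1.414 mentioned above: each pass feeds the previous answer back in, and the loop stops once the result is close enough for the purpose at hand.

```python
# Iterative refinement: start with a crude guess, improve it repeatedly,
# and stop when the approximation is "close enough to be useful".
def newton_sqrt(a, x0=1.0, tol=1e-12):
    x = x0
    while abs(x * x - a) > tol:
        x = 0.5 * (x + a / x)   # Newton's update for f(x) = x^2 - a
    return x

print(newton_sqrt(2))   # ~1.4142135623..., vs. the rough 1.414 in the text
```

Each iteration roughly doubles the number of correct digits, so only a handful of passes are needed, much as a few perturbation iterations suffice in the planetary example.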
Diophantine approximation deals with approximations of real numbers by rational numbers.

Related to approximation of functions is the asymptotic value of a function, i.e. the value as one or more of a function's parameters becomes arbitrarily large. For example, the sum (k/2)+(k/4)+(k/8)+...+(k/2^n) is asymptotically equal to k. Unfortunately no consistent notation is used throughout mathematics and some texts will use ≈ to mean approximately equal and ~ to mean asymptotically equal whereas other texts use the symbols the other way around.

Symbols used to denote items that are approximately equal are wavy or dotted equals signs.
- ≈ (Unicode 2248)
- ≃ (Unicode 2243), a combination of ≈ and =, also used to indicate asymptotically equal to
- ≅ (Unicode 2245), another combination of ≈ and =, which is used to indicate isomorphism or sometimes congruence
- ≊ (Unicode 224A), also a combination of ≈ and =, used to indicate equivalence or approximate equivalence
- ∼ (Unicode 223C), which is also sometimes used to indicate proportionality
- ∽ (Unicode 223D), which is also sometimes used to indicate proportionality
- ≐ (Unicode 2250), which can also be used to represent the approach of a variable to a limit
- ≒ (Unicode 2252), commonly used in Japanese and Korean
- ≓ (Unicode 2253), a reversed variation of ≒
Black Holes seem to have bad press that is largely undeserved. This lecture with professor Ian Morison explains what Black Holes are, and how we can discover them even though they can't be seen. This program was recorded in collaboration with Gresham College, on October 27, 2010. Gresham Professor of Astronomy Ian Morison made his first telescope at the age of 12 with lenses given to him by his optician. Having studied Physics, Maths and Astronomy at Oxford, he became a radio astronomer at the Jodrell Bank Observatory and teaches Astronomy and Cosmology at the University of Manchester. Over 25 years he has also taught Observational Astronomy to many hundreds of adult students in the North West of England. An active amateur optical astronomer, he is a council member and past president of the Society for Popular Astronomy in the United Kingdom. At Jodrell Bank he was a designer of the 217 km MERLIN array and has coordinated the Project Phoenix SETI observations using the Lovell Radio Telescope. He contributes astronomy articles and reviews for New Scientist and Astronomy Now, and produces a monthly sky guide on the Observatory's website.
Tag: "rainforest" at biology news Endangered frogs coexist with fungus once thought fatal ...that remaining populations of T. eungellensis , a rainforest frog listed as endangered, can persist in the wild...ocused on six species living in the high-elevation rainforest streams of Eungella National Park in Queensland, Australia, where frog losses were "particularly cat... Fungus knocks a frog down but not out, raising questions about amphibian declines ... decline. The species largely disappeared from rainforest streams in the mid-1980's, but surviving remnant populations sampled in the mid-1990's show the continued presence of the fungus in 15% to 18% of the sampled frogs. Later investigation showed that infected frogs had similar survival to uninfected fr... Large-scale forces shape local ocean life, global study shows ...ists know that tropical coral reefs and the Amazon rainforest act as source areas, Witman said more areas must be identified. "This is particularly true in the marine environment," he said. "We don't know much about source pools. We need a lot more research in this area." The project focus... Chimpanzee 'workshop' discovered in Congo ...6, 2004) Scientists have discovered that a remote rainforest in Central Africa, saved from logging by a collaboration among the New York-based Wildlife Conservation Society, a timber company and the Republic of Congo, is home to a population of innovative, tool-making chimpanzees that "fish" for termite dinner... Tracking orangutans from the sky ...ced with the hip-deep muck and steep slopes of the rainforest floor. Instead, Ancrenaz and colleagues have developed a survey method by which estimates of orangutan numbers can be made from helicopters. By comparing ground survey data, collected over two years, with aerial counts garnered in only 72 hours the a... 
Stanford biologist working to restore native forests to Hawaii ...not only were coffee plants located near an intact rainforest more productive but the beans were also of higher quality. They traced this finding to the pollination services offered by rainforests, where native bees nest. "We've been pretty narrow minded in how we extract benefits from ecosystems," she said. "I... Arid Australian interior linked to landscape burning by ancient humans ...reserved in lake sediments at the boundary between rainforest and interior desert beginning about 50,000 years ago, Miller said. In addition, a number of rainforest gymnosperms -- plants whose seeds are not encased and protected and are therefore more vulnerable to... Population of rare gorillas may be increasing in war-torn Congo ...ho have heroically defended the gorillas and their rainforest home in eastern Democratic Republic of Congo, have played a key role in safeguarding these endangered primates. The census, led by WCS project director Innocent Liengola, counted 168 gorillas living in the mountain highlands of Kahuzi-Biega National ... Experimental radar provides 3-D forest view ...orneo. The sites varied in character from pristine rainforest to coastal mangroves and oil palm and rubber plantations. They were also measured in detail on the ground to provide 'ground truth' for the radar results, around 200 Gigabytes of raw data having been gathered during three weeks of flights. "We had ... Falling ants glide back to trunk to avoid dangers of forest floor ...Yanoviak of UTMB. While perched 100 feet up in the rainforest canopy waiting for mosquitoes to alight and feed on his blood, Yanoviak casually brushed off a few dozen ants that were attacking him and noticed their uncanny ability to land on the tree's trunk and climb back to the very spot from which they'd fall... 
World's largest rainforest drying experiment completes first phase ...g, El Niño episodes, and even the drying effects of rainforest clearing and burning itself." First, the biggest s...n rainfall will likely push this tall, green, lush rainforest towards a shorter, more stunted forest. As the forest becomes shorter and its leaf canopy more ope... Scientists map the world for nature conservation ...cies-rich areas on earth. Indeed, Borneo's lowland rainforest is the most diverse of all, with around 10,000 plant species. By comparison, the whole of the Federal Republic of Germany contains some 2,700 different native plants. "However, we have found out for the first time where, within each of the different ... Promiscuous catalytic activity possessed by novel enzyme structure ...a new way of bringing "bio-prospecting" out of the rainforest and into the lab. Their findings are published in the June 16th edition of the journal Nature. Stéphane Richard, Joseph Noel and Tomohisa Kuzuyama isolated and examined a totally new enzyme that can mix and match biological chemicals to create a wide... Two new lemur species discovered ...nd within a protected reserve, but the surrounding rainforest is also heavily threatened by slash-and-burn activities. "It is simply remarkable that M. lehilahytsara was obtained at Andasibe, a protected area of forest that is considered one of the biologically best known sites on the island and is the most he... Birds and bats sow tropical seeds ...at once characterized expansive tracts of tropical rainforest gets a helping hand from native birds and bats. Ju...or adjacent to forest remnants or UNAM's protected rainforest to observe what trees grow, and why. Some tracts will be planted with 12 species of animal-dispersed...
Ants, not evil spirits, create devil's gardens in the Amazon rainforest, study finds ...gardens are large stands of trees in the Amazonian rainforest that consist almost entirely of a single species, ...he Madre Selva Biological Station in the Amazonian rainforest of Loreto, Peru. The research team located 10 devil's gardens for the study, ranging in size from on... Study in Royal Society journal on first observation of giant squid in the wild ...ve been due to periods of drought during which the rainforest became fragmented, giving rise to geographic isola...the time-course of evolution. Our study shows that rainforest refuges, if they ever existed, had different effects on each group of butterflies, or that there was... Selective logging causes widespread destruction of Brazil's Amazon rainforest, study finds ...y and other hardwoods destroys an area of pristine rainforest big enough to cover the state of Connecticut. The ... The Amazon Basin contains the largest contiguous rainforest on Earth--a vast region nearly as big as the continental United States that includes portions of Brazil... Picky female frogs drive evolution of new species in less than 8,000 years Berkeley -- Picky female frogs in a tiny rainforest outpost of Australia have driven the evolution of ...ween 1 and 2 million years ago with the retreat of rainforest to higher elevations, two separate frog lineages developed in the northern and southern parts of the... Rainforest conservation worth the cost, University of Alberta shows The economic benefits of protecting a rainforest reserve outweigh the costs of preserving it, says ...fuelwood and agricultural development compete with rainforest conservation. Since 1996, an ecotourism centre has been established at the forest and a growing numb...
Go behind the scenes and into SERC’s photobiology lab. This is where photobiologist Pat Neale spends a great deal of time examining the impact of UV radiation on photosynthesis. In this video you’ll get a look at one experiment that seeks to determine what would happen to the ocean’s phytoplankton if the ozone layer was suddenly destroyed by cosmic radiation. Video credits: Anne Goetz, Editor; Lia Kvatum, Producer/Writer/Camera; Tony Franken, Music. Learn more about this experiment in an earlier Shorelines post.
What is the basic difference between the Factory and Abstract Factory patterns? With the Factory pattern, you produce implementations of a particular interface (e.g., IProduct). With the Abstract Factory pattern, you produce implementations of a particular Factory interface (e.g., IFactory), each of which in turn produces products. Factory pattern: the factory produces IProduct implementations. Abstract Factory pattern: a factory-factory produces IFactories, which in turn produce IProducts :) Abstract Factory vs. Factory Method The methods of an Abstract Factory are implemented as Factory Methods. Both the Abstract Factory pattern and the Factory Method pattern decouple the client system from the actual implementation classes through abstract types and factories. The Factory Method creates objects through inheritance, whereas the Abstract Factory creates objects through composition. The Abstract Factory pattern consists of an AbstractFactory, ConcreteFactory, AbstractProduct, ConcreteProduct and Client. How to implement The Abstract Factory pattern can be implemented using the Factory Method pattern, the Prototype pattern or the Singleton pattern. The ConcreteFactory object can be implemented as a Singleton, as only one instance of the ConcreteFactory object is needed. The Factory Method pattern is a simplified version of the Abstract Factory pattern: Factory Method is responsible for creating products that belong to one family, while Abstract Factory deals with multiple families of products. Factory Method uses interfaces and abstract classes to decouple the client from the generator class and the resulting products. Abstract Factory has a generator that is a container for several factory methods, along with interfaces decoupling the client from the generator and the products. When to Use the Factory Method Pattern Use the Factory Method pattern when there is a need to decouple a client from a particular product that it uses. Use the Factory Method to relieve a client of responsibility for creating and configuring instances of a product.
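The "creation through inheritance" point can be sketched in a few lines of Python. The Document/Application names below are hypothetical, chosen only for illustration; the creator class declares the factory method and subclasses decide which concrete product to build:

```python
from abc import ABC, abstractmethod

# Hypothetical product hierarchy (plays the role of IProduct).
class Document(ABC):
    @abstractmethod
    def render(self) -> str: ...

class PdfDocument(Document):
    def render(self) -> str:
        return "pdf"

class HtmlDocument(Document):
    def render(self) -> str:
        return "html"

# Creator: declares the Factory Method; subclasses override it.
class Application(ABC):
    @abstractmethod
    def create_document(self) -> Document: ...  # the Factory Method

    def open_document(self) -> str:
        # Client logic works only against the Document interface.
        return self.create_document().render()

# ConcreteCreator: chooses the concrete product via inheritance.
class PdfApplication(Application):
    def create_document(self) -> Document:
        return PdfDocument()

assert PdfApplication().open_document() == "pdf"
```

Swapping in an HtmlApplication subclass changes the product without touching the client code in `open_document`.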
When to Use the Abstract Factory Pattern Use the Abstract Factory pattern when clients must be decoupled from product classes. It is especially useful for program configuration and modification. The Abstract Factory pattern can also enforce constraints about which classes must be used with others. It may be a lot of work to make new concrete factories. Abstract Factory Example 1: The specification for the disks used to prepare different types of pasta in a pasta maker is the Abstract Factory, and each specific disk is a Factory. All Factories (pasta maker disks) inherit their properties from the abstract Factory. Each individual disk contains the information of how to create the pasta, and the pasta maker does not. Abstract Factory Example 2: The stamping equipment corresponds to the Abstract Factory, as it is an interface for operations that create abstract product objects. The dies correspond to the Concrete Factory, as they create concrete products. Each part category (Hood, Door, etc.) corresponds to an abstract product. Specific parts (e.g., the driver-side door for a '99 Camry) correspond to the concrete products. Factory Method Example: The toy company corresponds to the Creator, since it may use the factory to create product objects. The division of the toy company that manufactures a specific type of toy (horse or car) corresponds to the ConcreteCreator. The Abstract Factory Pattern Reference: Factory vs Abstract Factory Factory method: You have a factory that creates objects that derive from a particular base class. Abstract factory: You have a factory that creates other factories, and these factories in turn create objects derived from base classes. You do this because you often don't just want to create a single object (as with Factory Method); rather, you want to create a collection of related objects. Abstract Factory produces a family of objects. In .NET the perfect example of this is DbProviderFactory. David Hayden has blogged about this.
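An Abstract Factory, as described above, is a container for several factory methods, so one factory object produces a whole consistent family. A minimal Python sketch; the widget family and class names are hypothetical, not taken from the answers:

```python
from abc import ABC, abstractmethod

# Two abstract products that must be used together as a family.
class Button(ABC):
    @abstractmethod
    def paint(self) -> str: ...

class Checkbox(ABC):
    @abstractmethod
    def paint(self) -> str: ...

class MacButton(Button):
    def paint(self) -> str: return "mac button"

class MacCheckbox(Checkbox):
    def paint(self) -> str: return "mac checkbox"

class LinuxButton(Button):
    def paint(self) -> str: return "linux button"

class LinuxCheckbox(Checkbox):
    def paint(self) -> str: return "linux checkbox"

# Abstract Factory: groups the factory methods for one product family.
class WidgetFactory(ABC):
    @abstractmethod
    def create_button(self) -> Button: ...
    @abstractmethod
    def create_checkbox(self) -> Checkbox: ...

class MacWidgetFactory(WidgetFactory):
    def create_button(self) -> Button: return MacButton()
    def create_checkbox(self) -> Checkbox: return MacCheckbox()

class LinuxWidgetFactory(WidgetFactory):
    def create_button(self) -> Button: return LinuxButton()
    def create_checkbox(self) -> Checkbox: return LinuxCheckbox()

def build_ui(factory: WidgetFactory) -> list[str]:
    # The client composes with a factory; it never names concrete widgets,
    # and the factory guarantees the products match each other.
    return [factory.create_button().paint(), factory.create_checkbox().paint()]

assert build_ui(MacWidgetFactory()) == ["mac button", "mac checkbox"]
```

Passing a different concrete factory reconfigures the entire product family at once, which is the constraint-enforcing property the text mentions.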
Example/Scenario for Abstract Factory I live in a place where it rains in the rainy season, snows in winter, and is hot and sunny in summer. I need different kinds of clothes to protect myself from the elements. To do so I go to the store near my house and ask for clothing to protect myself. The storekeeper gives me the appropriate item for the environment and the depth of my pocket. The items he gives me are of the same level of quality and the same price range; since he is aware of my standards, it is easy for him to do so. But when a rich guy from across the street comes in with the same requirements, he gets an expensive, branded item. One noticeable thing is that all the items he gives me complement each other in terms of quality, standard and cost; one can say they go with each other. The same is the case with the items the rich guy gets. So, looking at the above scenario, I now appreciate the efficiency of the shopkeeper. I can replace this shopkeeper with an abstract shop, the items we get with abstract products, and me and the rich guy with prospective clients. All we need is a product/item suiting our needs. Now I can easily see myself considering an online store which provides a set of services to its numerous clients. Each client belongs to one of three groups. When a premium-group user opens the site he gets a great UI, a highly customised advertisement pane, more options in the menus, etc. The same set of features is presented to a gold user, but the functionality in the menus is reduced, the advertisements are mostly relevant, and the UI is slightly less ergonomic. Lastly there is my kind of user, a 'free group' user, who is served just enough not to get offended: the UI is bare minimum, the advertisements are so far off track that I do not know what is in them, and the menu has only log out. If I got a chance to build something like this website I would definitely consider the Abstract Factory pattern. Abstract Products: Advertisement Pane, Menu, UI painter.
Abstract Factory: Web Store User Experience. Concrete Factories: Premium User Experience, Gold User Experience, General User Experience. A Factory Method is a nonstatic method that returns a base class or interface type and that is implemented in a hierarchy to enable polymorphic creation. A Factory Method must be defined/implemented by a class and one or more subclasses of the class. The class and subclasses each act as factories. However, we don't say that a Factory Method is a Factory. An Abstract Factory is an interface for creating families of related or dependent objects without specifying their concrete classes. Abstract Factories are designed to be substitutable at runtime, so a system may be configured to use a specific, concrete implementor of an Abstract Factory. Every Abstract Factory is a Factory, though not every Factory is an Abstract Factory. Classes that are Factories, not Abstract Factories, sometimes evolve into Abstract Factories when a need arises to support the creation of several families of related or dependent objects. Check here: http://www.allapplabs.com/java_design_patterns/abstract_factory_pattern.htm It seems that the Factory Method uses a particular (non-abstract) class as a base class, while the Abstract Factory uses an abstract class for this. Also, if an interface is used instead of an abstract class, the result is a different implementation of the Abstract Factory pattern.
Science subject and location tags Articles, documents and multimedia from ABC Science Tuesday, 30 April 2013 Proto-dinos Ten million years after the world's largest mass extinction, a lineage of animals thought to have led to dinosaurs took hold in what is now Tanzania and Zambia, according to new research. Friday, 19 April 2013 Antarctica's abrupt deep freeze around 34 million years ago caused a plankton explosion that transformed Southern Ocean ecosystems, new research has found. Monday, 1 April 2013 In the coming decades, global warming will cause grass, shrubs and trees to thrive in Arctic soil stripped of ice and permafrost, while sea ice around Antarctica will grow, according to new research published today. Thursday, 14 March 2013 Distant starburst galaxies were more numerous and forming stars far earlier than previously thought, according to early observations from a revolutionary new telescope. Friday, 30 November 2012 An international team of scientists has delivered the most accurate assessment yet of both poles. Tuesday, 27 November 2012 The discovery of bacteria living in cold and salty water deep in an Antarctic lake suggests that microbes could exist in extreme conditions on other worlds, say US researchers. Friday, 2 November 2012 An international conference has failed to agree on new marine sanctuaries to protect thousands of polar species across Antarctica. Thursday, 25 October 2012 The seasonal hole in the ozone layer above the Antarctic this year was the second smallest in two decades. Monday, 22 October 2012 News analysis Australia and France hope to win support this week from 23 other nations for a unique system of new marine protected areas in eastern Antarctica. Wednesday, 26 September 2012 Antarctica's Weddell seals go into survival mode and put births on hold when conditions aren't favourable, a new study has found. 
Monday, 30 July 2012 New light has been shed on the Southern Ocean's ability to store carbon through an international study that pinpoints where carbon capture is most efficient. Monday, 16 July 2012 Policy-makers need to be more responsive to increasing pressures on Antarctica's environment, say an eminent team of scientific experts. Friday, 15 June 2012 Antarctica is home to an abundance of biodiversity that needs protecting, say Australian researchers. Thursday, 19 April 2012 Researchers searching for the source of cosmic rays are going back to the drawing board after ruling out gamma ray bursts as the most likely source. Tuesday, 6 March 2012 The pristine environment of Antarctica is being threatened by invasive plants inadvertently brought in by tourists and researchers.
A demonstration in chemistry class goes awry when open flame sets off the sprinkler system. A simple chemical reaction, known around the world for decades, has the ability to send chills down your spine if you've never seen it before. Low-fat chocolate is an abomination. Chocolate is more than a flavor: that luscious melt-in-your-mouthiness comes from giant awesome globs of cocoa butter. You can't get rid of the fat without destroying the essence of chocolate itself, but chemists have instead figured out how to magically replace it, using fruit juice. The reason trees are our friends is that they take CO2 out of the atmosphere and turn it into things that are less bad for the environment and much more useful to us, namely wood and fruit and oxygen and stuff. A novel new chemical reaction promises to do the same sort of thing, transforming CO2 straight into a semiconductor, fertilizer and a big pile o' energy. Need a reason to not fall asleep in chemistry class? How about learning how to shoot barrel drums into the air just by touching them with fire? If only chemistry class was this much fun, we'd have paid more attention. Challenging a man's claim that he found a mouse in his Mountain Dew, it's hard to say whether Pepsi Co. won or lost the battle with its defense in a small court. That defense? Not possible — Mountain Dew would have reduced that mouse to jelly by the time it was opened 15 months after bottling, according to the company's experts. Studying ancient Grecian pottery isn't just about learning about the past anymore — although that in itself is pretty cool. The National Science Foundation (NSF) has recently awarded grants to three research groups to explore the chemistry of the ancient vases to find out why they've been able to survive for so long. You'd have to be a pretty small ninja to wield one of these stars, since they're made of molecules.
Or rather, each star is just one single molecule all tied in knots, and they're the most complex molecules (outside of DNA) that we've ever synthesized. Pigeons are one of mankind's greatest scourges: they're dirty, they're ugly, and they crap on you from perches so far above that you can't even see which one did the deed. A design firm has created a new system of feeding pigeons that will essentially convert pigeon waste into detergent.
Click on the headline (link) for the full text. Many more articles are available through the Energy Bulletin homepage. How to survive the coming century Gaia Vince, New Scientist ... Fearing that the best efforts to curb greenhouse gas emissions may fail, or that planetary climate feedback mechanisms will accelerate warming, some scientists and economists are considering not only what this world of the future might be like, but how it could sustain a growing human population. They argue that surviving in the kinds of numbers that exist today, or even more, will be possible, but only if we use our uniquely human ingenuity to cooperate as a species to radically reorganise our world. The good news is that the survival of humankind itself is not at stake: the species could continue if only a couple of hundred individuals remained. But maintaining the current global population of nearly 7 billion, or more, is going to require serious planning. Four degrees may not sound like much - after all, it is less than a typical temperature change between night and day. It might sound quite pleasant, like moving to Florida from Boston, say, or retiring from the UK to southern Spain. An average warming of the entire globe by 4 °C is a very different matter, however, and would render the planet unrecognisable from anything humans have ever experienced. (25 February 2009) Polar research reveals new evidence of global environmental change Press release, International Council for Science Multidisciplinary research from the International Polar Year (IPY) 2007-2008 provides new evidence of the widespread effects of global warming in the polar regions. Snow and ice are declining in both polar regions, affecting human livelihoods as well as local plant and animal life in the Arctic, as well as global ocean and atmospheric circulation and sea level.
These are but a few findings reported in “State of Polar Research”, released today by the World Meteorological Organization (WMO) and the International Council for Science (ICSU). In addition to lending insight into climate change, IPY has aided our understanding of pollutant transport, species’ evolution, and storm formation, among many other areas. The wide-ranging IPY findings result from more than 160 endorsed science projects assembled from researchers in more than 60 countries. Launched in March 2007, the IPY covers a two-year period to March 2009 to allow for observations during the alternate seasons in both polar regions. A joint project of WMO and ICSU, IPY spearheaded efforts to better monitor and understand the Arctic and Antarctic regions, with international funding support of about US$ 1.2 billion over the two-year period. “The International Polar Year 2007 – 2008 came at a crossroads for the planet’s future” said Michel Jarraud, Secretary-General of WMO. “The new evidence resulting from polar research will strengthen the scientific basis on which we build future actions.” (25 February 2009) Adapting to Water Woes Stephanie Tavares, Las Vegas Sun The southwestern United States is moving headlong toward an environmental catastrophe of apocalyptic proportions. "A lot of people say that in global warming there will be winners and losers. In the Southwest, we'll be in the losers' category," University of Arizona climatologist Jonathan Overpeck said at a symposium on global warming's effect on the Southwest. Overpeck discussed the latest scientific consensus on climate change at the Feb. 19 symposium, hosted by the Urban Land Institute at the Palms. ... The problem of climate change in the Southwest is fairly complex, but can be summed up in one word: water. The Southwest is the most persistent hot spot on the globe and has a history of severe drought. (27 February 2009) Also at Common Dreams. 
Las Vegas Running Out of Water Means Dimming Los Angeles Lights John Lippert and Jim Efstathiou Jr., Bloomberg ... Water upheavals are intensifying because the population is growing fastest in places where fresh water is either scarce or polluted. Dry areas are becoming drier and wet areas wetter as the oceans and atmosphere warm. Economic roadblocks, such as the global credit crunch and its effects on Mulroy's attempts to sell bonds, multiply during a recession. Yet local governments that control water face unyielding pressure from constituents to keep the price low, regardless of cost. Agricultural interests, commercial developers and the housing industry clash over dwindling supplies. Companies, burdened by slowing profits, will be forced to move from dry areas such as the American Southwest, Udall says. "Water is going to be more important than oil in the next 20 years," says Dipak Jain, dean of the Kellogg School of Management at Northwestern University in Evanston, Illinois, who studies why corporations locate where they do. (26 February 2009)
The International Space Station - The Cost Of Waiting In mid-2003 the U.S. Congress asked the General Accounting Office (GAO) to assess the effects of shuttle launch delays on the ISS. The GAO is an investigatory agency of the Congress. In September 2003 the GAO released a report titled Space Station: Impact of the Grounding of the Shuttle Fleet. The GAO report noted that the delay caused a number of operational problems. Before the Columbia disaster a multi-purpose logistics module (MPLM) named Raffaello had been packed and readied for a March 2003 launch at Kennedy Space Center. This module had to be unpacked to provide preventative maintenance to some of the equipment inside. Repacking the MPLM for a future shuttle flight will take at least two months. The module will also have to be retested for flight prior to launch. A giant solar array wing attached to a truss segment was to have launched in May 2003. The long launch delay pushed the wing past its storage time limit. NASA had to remove the wing and send it to a contractor to be retested and recertified. Batteries on truss sections waiting to be launched in 2003 had to be recharged. Prolonged storage had shortened their available life span. All of these problems resulted in unexpected costs in NASA's ISS program. Grounding the shuttle also has negative effects on ISS research projects. NASA had planned to launch three major research facilities to the station during 2003. On-board experiments must be conducted using existing facilities. However some of this equipment needs to be replaced or repaired, particularly refrigeration and freezer units in the science section. These units have suffered some failures. NASA had planned to replace them during 2003 with the launch of a new and larger cold-temperature facility. One of the largest drawbacks to ISS science is the presence of only two crewmembers.
The number of new and continuing experiments conducted during Expeditions 7 and 8 had to be reduced so the crews could devote more time to station maintenance and operation. The GAO also found that shuttle delays affect the safety of the ISS. NASA had planned to transport a new on-orbit gyro to the station in March 2003 to replace a broken unit. The station includes four gyros that maintain the structure's orbital stability and permit navigational control. NASA scientists fear that problems could arise in the station's three remaining working gyros during a prolonged delay in shuttle flights. NASA had also planned to finish installing shielding on the Russian module Zvezda during 2003. Zvezda houses the ISS expedition crews. The module is supposed to be covered with twenty-three shielded panels to protect it from impacts by space debris. Only six panels have been installed so far. Every day that goes by without the additional shielding increases the risk that the module could be struck and damaged by debris. Shuttle delays also affect America's ISS partners. 
The original cost-sharing plan was worked out in the 1998 Agreement among the Government of Canada, Governments of Member States of the European Space Agency, the Government of Japan, the Government of the Russian Federation, and the Government of the United States of America Concerning Cooperation on the Civil International Space Station. The ISS flights launched through January 2004 are listed below:

|Flight no.|Launch date|Mission name|Spacecraft flying to ISS|Primary cargo|Purpose|
|1|11/20/98|1 A/R|Proton K|Control Module FGB (Zarya)|Assembly|
|2|12/04/98|2A|Shuttle/STS-88|Node 1 (Unity), PMAs 1, 2|Assembly|
|5|07/12/00|1R|Proton K|Service Module (Zvezda)|Assembly|
|6|08/06/00|1P|Progress M1-3|Consumables, spares, props|Logistics|
|8|10/11/00|3A|Shuttle/STS-92|Z1 truss, 4 CMGs, PMA 3|Assembly|
|9|10/31/00|2R/1S|Soyuz TM-31|Expedition 1 crew|1st Crew|
|10|11/15/00|2P|Progress M1-4|Consumables, spares, props|Logistics|
|11|11/30/00|4A|Shuttle/STS-97|P6 module, PV array|Assembly|
|12|02/07/01|5A|Shuttle/STS-98|U.S. Destiny Lab module, racks|Assembly|
|13|02/26/01|3P|Progress M-44|Consumables, spares, props|Logistics|
|14|03/08/01|5A.1|Shuttle/STS-102|Expedition 2 crew, MPLM Leonardo|2nd Crew|
|15|04/19/01|6A|Shuttle/STS-100|SSRMS, MPLM Raffaello|Outfitting|
|16|04/28/01|2S|Soyuz TM-32|1st Taxi (plus Tito)|NewCRV|
|17|05/20/01|4P|Progress M1-6|Consumables, spares, props|Logistics|
|18|07/12/01|7A|Shuttle/STS-104|U.S. Airlock, HP O2/N2 gas|Assembly|
|19|08/10/01|7A.1|Shuttle/STS-105|Expedition 3 crew, MPLM Leonardo|3rd Crew|
|20|08/21/01|5P|Progress M-245|Consumables, spares, props|Logistics|
|21|09/14/01|4R|"Progress 301"|Docking Compartment 1|Assembly|
|22|10/21/01|3S|Soyuz TM-33|2nd Taxi|NewCRV|
|23|11/26/01|6P|Progress M-256|Consumables, spares, props|Logistics|
|24|12/05/01|UF-1|Shuttle/STS-108|Expedition 4 crew, MPLM Raffaello|4th Crew|
|25|03/21/02|7P|Progress M1-8 (257)|Consumables|Logistics|
|26|04/08/02|8A|Shuttle/STS-110|S0 truss segment|Assembly|
|27|04/25/02|4S|Soyuz TM-34|3rd Taxi (plus Shuttleworth)|NewCRV|
|28|06/05/02|UF-2|Shuttle/STS-111|Expedition 5 crew, MBS, MPLM Leonardo|5th Crew|
|29|06/26/02|8P|Progress M-24 (246)|Consumables, spares, props|Logistics|
|30|09/25/02|9P|Progress M1-9 (258)|Consumables, spares, props|Logistics|
|31|10/07/02|9A|Shuttle/STS-112|S1 truss segment|Assembly|
|32|10/30/02|5S|Soyuz TMA-1 (211)|4th Taxi (plus Frank DeWinne)|NewCRV|
|33|11/23/02|11A|Shuttle/STS-113|Expedition 6 crew, P1 truss segment|6th Crew|
|34|02/02/03|10P|Progress M-47 (247)|Consumables, spares, props|Logistics|
|35|04/26/03|6S|Soyuz TMA-2 (212)|Expedition 7 crew|7th Crew|
|36|06/08/03|11P|Progress M1-10 (259)|Consumables, spares, props|Logistics|
|37|08/28/03|12P|Progress M-48 (248)|Consumables, spares, props|Logistics|
|38|10/18/03|7S|Soyuz TMA-3 (213)|Expedition 8 crew (plus Duque)|8th Crew|
|39|01/29/04|13P|Progress M1-11 (260)|Consumables, spares, props|Logistics|

Abbreviations: CMG - Control Moment Gyro; CRV - Crew Return Vehicle; DM - Double Cargo Module; HP - High Pressure; MBS - Mobile remote services base system; MPLM - Multi-Purpose Logistics Module; PMA - Pressurized mating adapter; Props - Propellants; PV - Photovoltaic; SSRMS - Space Station Remote Manipulator System.

SOURCE: Adapted from "International Space Station ISS Assembly Progress," in International Space Station, National Aeronautics and Space Administration, Washington, DC, October 2003 [Online] http://www.hq.nasa.gov/osf/station/assembly/ISSProgress.html [accessed January 12, 2004]

This cost-sharing plan calls for NASA to pay the entire cost for ground operations and common supplies for the station. NASA is then reimbursed by the partner countries for their shares depending on their level of participation. Partner countries also fund operations and maintenance for any elements they contribute to the ISS, any research activities they conduct, and a share of common operating expenses. These costs will have to be adjusted as the shuttle fleet remains grounded and planned activities are cancelled. In addition there is a political problem with relying on Russia to transport crews to the station and to launch extra Progress supply ships while the shuttle fleet is out of commission. The cash-strapped Russian Space Agency is expected to ask for money from NASA to make such flights. Under the Iran Nonproliferation Act of 2000 NASA is prohibited from making large payments to Russia unless it can be shown that Russia is not sharing sensitive technical information with Iran. The GAO notes that it is unclear how this issue would be resolved. The United States might have to ask its European partners to pay for such flights. The GAO estimates that the United States spent $32 billion on the ISS between 1985 and 2002. At the time of GAO's report in 2003, NASA had received another $1.85 billion for fiscal year 2003 and was requesting $1.70 billion for fiscal year 2004. In January 2004 U.S. President George Bush announced a new plan for America's space program. This plan calls for retirement of the space shuttle fleet by 2010.
Bush also wants to end ISS assembly as soon as the core-complete configuration is achieved and to eliminate all ISS research projects that do not support the new plans for space travel. In February 2004, NASA released its 2005 budget request, which included $1.9 billion for the ISS program. NASA had originally hoped to finish the core-complete station during 2004. NASA told the GAO that each month's delay in the shuttle program equals one month's delay in ISS assembly. As of July 2004, the shuttle's return to flight is not expected until fall or winter 2004, eighteen to twenty-two months after the Columbia disaster. Assuming that NASA's time estimate is correct, the delay would push ISS core completion well into 2005 and possibly 2006.
Last October, I attended the Geological Society of America’s annual meeting held here in Minneapolis. (Photos courtesy Mark Ryan.) The convention presented plenty of opportunities to hear the latest ideas in geology, paleontology, and planetary science, but the highlight for me was joining a GSA field trip on Lake Superior aboard the research vessel, the Blue Heron. The 86-foot vessel is owned by the University of Minnesota-Duluth (UMD) and operated by the Large Lakes Observatory (LLO), an organization created in 1994 to investigate the geochemical and geophysical properties of large lakes and their global impact. To accomplish this research, the LLO required a worthy vessel for limnological research, and the Blue Heron was purchased just three years later. The vessel docks at the Corps of Engineers Vessel Yard on Park Point (aka Minnesota Point), a natural sand bar separating Duluth’s harbor basin from Lake Superior. The ten-mile spit was created by the lake’s wave action on material deposited by the St. Louis River, and is supposedly the largest freshwater sand bar in the world. Field trip leaders Doug Ricketts, the marine superintendent at LLO, and Charlie Matsch, professor emeritus of geology at UMD, greeted arriving participants and divided us into two groups. While one group spent the morning on Lake Superior, the other visited geological highlights in the Duluth area with professor Matsch. In the afternoon the groups switched places. I joined the morning shift on the lake with a dozen geologists made up of GSA attendees from Minnesota, Wisconsin, and the City University of New York. Besides Doug Ricketts and the ship’s five crew members, regents professor Tom Johnson and the director of the LLO, professor Steve Colman, were also on hand to help demonstrate and explain the Blue Heron’s research capabilities.
We shoved off right on schedule, heading across the harbor toward the Superior entrance on the Wisconsin end of the sand bar. The crew spent this time going over the ship’s safety rules - how to descend ladders, which alarms meant what, how to communicate with the bridge - that sort of thing. We then made a quick tour of the facilities. The Blue Heron is equipped with a wet lab on the open deck and two dry labs inside, and all sorts of data-gathering equipment for geophysical, geochemical, and biological sampling. These include multibeam sonar for profiling the lake bottom and sub-bottom, several coring instruments for collecting sediment samples, and water samplers able to collect at various depth levels in the water column while also measuring such things as temperature, depth, pH levels, and conductivity. There’s gear for tracking lake currents, and plankton nets and a trawl for gathering biological data. Inside, both above and below deck, computers record, display, and analyze the gathered data. Many of the off-ship instruments can be monitored and controlled on board from computer consoles. The R/V Blue Heron is outfitted to carry five crew members and six researchers and can stay on the lake, around the clock, for 21 days between port calls. It’s used mainly on Lake Superior, the largest and least studied of the Great Lakes. Shipboard amenities are sparse (there’s no television or DVD) but include eleven bunks, a full galley for food preparation, a dining table, a shower, and of course, the "head", or as you landlubbers like to call it, the toilet. Internet service is sometimes available when the vessel is near shore. Upon entering Lake Superior, the crew set to work demonstrating some of the vessel’s science gear, which is pretty much the same kind of instrumentation used in oceanographic research.
Just beyond the Superior entrance, the EchoTech CHIRP/sidescan sonar tow fish was lowered from the Blue Heron’s stern. This bright yellow instrument is towed underwater behind the vessel as it makes several passes over the lake bed, and is able to gather hydrographic and bathymetric data. One function is to send out an intermittent, low-frequency “chirp” pulse that can penetrate the sub-bottom and record changes in its geophysical properties. The sonar data is processed using on-deck computers. The first demonstration was a scan of the underwater channel of the Nemadji River, a Wisconsin tributary to the lake. The mouth of the Nemadji has been drowned by a process called post-glacial rebound or, more scientifically, differential isostatic rebound. During the last ice age, a mile-thick sheet of ice covered the region and placed enormous pressure on the earth’s crust, depressing it downward. As the glaciers retreated, that enormous weight was gradually removed, and the lake basin began to rebound (a process still going on today). But the northern and eastern ends of the Lake Superior basin are rebounding at a faster rate, tilting the water southward and to the west and subsequently flooding those areas of the shoreline. As the submerged tow fish was doing its stuff, we all gathered at a couple of workstations in the lower-deck dry lab to watch as images appeared on the computer screens. In one, you could plainly see the distinct profile of the Nemadji’s drowned riverbanks. The other monitor displayed bathymetric information being picked up by the dual-frequency sidescan sonar. Printouts of the lakebed topography, created from a mosaic of stitched-together scans, were laid out on a worktable with several charts and maps. For the next demonstrations, the Blue Heron moved out several miles onto the big lake. We’d all been warned of the lake’s fickle weather, and told to bring proper attire, just in case.
Having been raised in Duluth, I was well acquainted with Superior’s moodiness, especially in autumn, so I brought along rain gear, a jacket, and an extra sweatshirt, expecting the worst. But I was most comfortable in jeans and a t-shirt. Cloud cover was sporadic, and while the water temperature was only around 49 degrees, the air temperature hovered in the mid to upper 70s during the entire excursion. We couldn’t have hoped for a nicer day; a perfect Duluth day, as we used to call them. While some of the group watched the crew prepare for the next presentation, others enjoyed lunch (sandwich, chips, fruit, and a cookie) at the galley dining table. During my lunch break, Tom Johnson told me the story of how the university came to own the research vessel. In her previous life, the Blue Heron was known as the Fairtry, a commercial fishing trawler that fished the Grand Banks in the northwest Atlantic (like the Andrea Gail in The Perfect Storm). UMD purchased it in 1997, and Tom sailed it from Portland, Maine, through the St. Lawrence Seaway and across the Great Lakes to Duluth. Despite some minor engine problems at the start, he said it was a fantastic two-and-a-half-week trip. Over the next winter, the Fairtry was converted into a limnological research vessel and re-christened the Blue Heron. Meanwhile, out on the back deck, the crew was ready to launch the next instrument, a carousel of canisters called Niskin bottles used for sampling the water column. This device is lowered into the lake and controlled remotely from the deck, and can collect samples at various depths into any one of its dozen canisters. It can also measure temperature, conductivity, pH balance, transparency, dissolved oxygen levels, and more. After deployment, marine technician Jason Agnich sat at a computer workstation just inside the hatch and easily controlled the carousel with a joystick while monitoring its progress on a couple of electronic displays.
We moved a little farther down the lake, where two coring instruments, a spider-framed multi-corer and an arrow-like gravity corer, were put into action. The first can collect several shallow core samples when lowered by winch to the lakebed, while the latter is dropped like a giant dart deep into the sub-bottom sediment for one large core. After each was raised back to the surface, the collected core samples were removed from their tubing and laid out on the wet lab table for study. We all huddled around the workbench as each core was cut open with a knife so participants could take a closer look. The sediment cores were composed of a densely packed, fine-grained mucky silt as brown as milk chocolate, and appeared more appropriate for a scatological study than a geological one, to me anyway. But that didn’t stop some of us from taking home a small plastic bag of it as a souvenir. As we made our way back toward the harbor, I stood at the starboard rail and took in the beautiful autumn colors lighting up the lake’s distant North Shore. We were three, maybe four miles offshore, but I was able to pick out my old stomping grounds in Duluth’s east end. The old neighborhood - like much of the city - was built up on terraces formed by past shoreline configurations of prehistoric Lake Superior. Duluth’s Skyline Parkway, a boulevard that skirts the hilltop across the length of the city, was built on an old gravel beach line of Glacial Lake Duluth, when the water surface was nearly five hundred feet above its present level. The bridge over the mouth of the Lester River was just barely discernible from where I stood, but it was easy to spot the large swath of dark pine forest that encompassed Lester Park and Amity Creek (the western branch of the Lester River), where my friends and I used to hang out. It’s also where Charlie Matsch would guide our group later in the afternoon.
He brought us there to examine the Deeps, my favorite old swimming hole, carved out of the massive basalt flows that extruded from what’s now the center of Lake Superior during the Midcontinent rifting event that took place nearly a billion years ago. We returned to port through the Duluth entrance, and as we entered the canal, captain Mike King announced our arrival with a blast of the Blue Heron’s air horn. Duluth’s landmark Aerial Lift Bridge, already raised for our return entry, responded in kind with a shrill, loud blast of its own. Tourists lining the pier called out and waved as we passed the old lighthouse and rolled toward the harbor. We all waved back, and I have to say it was kind of a thrill, for me anyway, after having participated in the same ritual oh, probably a hundred times in the past, but always from the pier, not from a vessel. The Blue Heron swung in through the harbor, and soon we were back at port where we started, at the Corps of Engineers Vessel Yard. Charlie Matsch was there to greet us and take us on the second leg of the field trip. Charlie took us first up the hillside to the rocky knob near the landmark memorial Enger Tower, where he showed us some interesting exposures of gabbro, an intrusive rock common to the geological formation known as the Duluth Complex. Much of the bluffs west of downtown Duluth are composed of this dark, coarse-grained mafic rock. Now, I admit I enjoy a geological outcrop as much as the next guy (especially when a real geologist is explaining it), but it was the sweeping view from the hilltop that drew my attention. The lake and harbor and much of the St. Louis River bay stretched out below us in an array of vivid blues contrasting with the bright reds and golds of autumn. On one side of the harbor, bridges, railroads, and structures of industry jutted out on Rice’s Point toward Wisconsin, paralleled on the other side by the slender ribbon of Park Point.
As I took in this grand vista, a small, barely discernible bluish blur of movement caught my eye. There, cutting through the harbor, the Blue Heron headed southward toward the Superior entrance for another run on the great lake.
Putting Nature’s Power to Work (Aug, 1932)

Methods of Harnessing Natural Energy Described by DICK COLE

Upward of 40,000 inventions a year are granted patents by Uncle Sam, but not one of these offers a practical solution of the problem which scientists agree is the most pressing of them all— that is, how to harness natural sources of energy for power. Mr. Cole does not profess to have solved the problem, but the methods he describes here point out the trend of probable development. WHAT is the most needed invention? Not television—not new kinds of airplanes—not speedier automobiles. Men of science are agreed that what the world needs most is a motor which converts the sun’s rays and other forms of natural energy into usable power. Orville Wright, Lee De Forest, Elihu Thomson, and other leading scientists are among those who proclaim the need for a new motor. There are two sources of inexhaustible energy which at once occur to the inventor —wave or tidal power, and solar rays. A Los Angeles inventor has developed a wave motor—an “inertia” motor, he calls it— which gives promise of being developed into a practical commercial project. Scores of so-called “wave motors” have been built in the past, but none has proved a conspicuous success commercially. Usually the initial erection cost and the maintenance cost have been out of proportion to the results obtained. And, too, the force of a storm has been underestimated, and the first severe gale completely wrecks the machine. The new “inertia motor” is absolutely storm-proof—in fact it could outride a tidal wave. And, too, it is the acme of simplicity —it requires no foundation, and has no connection with the ocean bottom except by its anchor chains. A study of the accompanying diagrams makes it clear how the inertia motor operates. If you are mechanically minded, you will be impressed. The application of the name “inertia” will be obvious.
When a wave starts to lift the hollow sphere, the massive weight inside, because of its inertia, resists the movement and exerts terrific pressure in the lower cylinder. Finally the inertia of the weight is overcome. Then it possesses momentum. When the sphere reaches the crest of a wave, the combined effort of the momentum and the recoil of the huge, semi-elliptic springs exerts an equal pressure in the upper cylinder. The tremendous pressure is applied to oil, which, in turn, operates a special turbine which runs a generator. The current is conducted to the shore by submarine cable. The idea seems wholly practical. It is readily conceivable that a battery of “inertia motors” could be built into an elongated float set parallel with the wave movement, and power in unlimited quantities would be available. The cost of building these motors would not be excessive—the maintenance cost is almost nil—the storm hazard is eliminated. Collision is the greatest hazard, but this is hardly worth considering, as the float would have “running lights” at night. A conservative estimate of the cost of the complete installation, except batteries, indicates that the value of the current generated, at 5c per kilowatt-hour, would equal the installation cost in 18 months. The power available from ocean waves is unbelievably huge. Suppose a wave comes along and lifts a 25,000-ton ocean liner 10 feet in 5 seconds. How much power is expended? The 25,000 tons is equivalent to 50,000,000 pounds; which, raised 10 feet, represents 500,000,000 foot-pounds. Since this work is performed in 5 seconds, the amount done in one minute would be six billion foot-pounds. One horsepower is the rate required to perform 33,000 foot-pounds per minute, so simple division gives the wave’s horsepower as 181,818!
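The article’s arithmetic holds up; a few lines of Python reproduce it (using short tons of 2,000 pounds, as the article’s 50,000,000-pound figure implies):

```python
# Reproduces the article's back-of-envelope wave-power estimate:
# a 25,000-ton ocean liner lifted 10 feet in 5 seconds.
tons = 25_000
pounds = tons * 2_000               # 50,000,000 lb (short tons)
foot_pounds = pounds * 10           # 500,000,000 ft-lb of work done
per_minute = foot_pounds * (60 / 5) # 6,000,000,000 ft-lb per minute
horsepower = per_minute / 33_000    # 1 hp = 33,000 ft-lb per minute
print(round(horsepower))            # → 181818
```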
Every day in the year the sun is dissipating incalculable, immeasurable energy upon this earth of ours, energy which can be brought directly under control for immediate use, instead of waiting for a new geological era to make it available. The trapping and utilization of solar energy is not new—”solar engines” have been built before—but direct, commercial application of solar energy for power conversion has not been achieved. An idea for a sun-power plant, illustrated in these pages, seems to be in a fair way to achieving success. A drawing shows the theoretical working of the solar power plant. One side of the unit is a tank containing water, heat-insulated on all sides. On top of the tank is a shallow basin of greater area than the cross-section of the tank itself. This basin is covered with a layer of sheet copper providing a space of about one inch within the basin. Water from the tank enters the basin at the center point and spreads out to the rim, where it is returned to the tank. The top of the copper covering is painted flat black. Obviously, when this tank is exposed to the glaring sun, the black surface absorbs the heat, which is communicated to the water in the shallow basin. The entire mass of water is sealed from the atmosphere, and evaporation, which would tend to cool the water, is prevented. The thin layer of water becomes very hot and is carried into the main tank. The circulation goes on and on, constantly building up the temperature in the main tank. The heat insulation prevents radiation losses. In a working model, the temperature was built up to 180 degrees. We have now established a mass of hot water. Now for the cold element. The tank at the right is the same volume and is insulated the same as the “hot” tank. In this case the water is circulated to an evaporative cooling system. This consists of a shallow upper tank from which are suspended many sheets of a special flax fabric, such as desert water bags are made of.
The water trickles down these. This cooling system will maintain a temperature of about 60 degrees in the cooling tank.

Apparatus Runs Automatically

We have now established a heat differential of nearly 100 degrees. The boiler and the condenser are an innovation. Note that each is set in the water of its respective tank. The arrangement is comparable to placing a conventional boiler in a mass of molten lava. The boiler-condenser system is partly filled with pure, distilled water. The initial vacuum is established by injecting live, superheated steam into the system. Then the cocks are closed, and when the steam condenses a permanent vacuum is established. Thereafter the boiler vaporizes a vast amount of water; the condenser liquefies the vapor; an injector pump returns the water to the boiler; an eternal cycle going on continuously, while between the two, a low-pressure steam turbine converts the energy into terms of kilowatts. Let us now transpose our theory into commercial terms. Southern California offers some ideal locations for a solar power plant. Almost any point along its coast would meet the physical requirements, but Salton Sea, at the north end of Imperial Valley, is most ideal. Here the mid-day sun gives rise to temperatures of 110 to 120 degrees. And, too, the air is very dry, which adds to the efficiency of the cooling system.

Construction Cost Moderate

The initial cost of erecting a power plant like this would not be prohibitive. Even if it cost twice as much as a conventional steam plant of the same power output, this would be more than offset in several years by lower upkeep cost. The solar plant could be made so automatic in operation that it would require practically no attention. Practically the only upkeep cost of this solar distillation plant is fuel for a small gasoline motor which operates two pumps: one for pumping the distilled water to the storage tank, and the other for maintaining the salt water in the evaporation basin.
A surplus of water must be provided to the evaporation basin to prevent crystallization, and, too, the basin must be thoroughly flushed out occasionally. Still a third apparatus, deriving its power from running water rather than wave motion, has been built by an Arizona rancher. A main irrigation canal bordered his land, with a flow of 8 m.p.h. He constructed a raft with four 18-inch spiral rotors underneath, linked the driveshaft of each to a common shaft which drove a 32-volt generator. The motor functions with perfect satisfaction and supplies all the current consumed on the ranch. One of the accompanying drawings makes its construction clear. A similar machine can easily be constructed by anyone. Electricity From Flowing Water A momentum motor of this type has infinite possibilities. Imagine one with a battery of 20 spiral rotors 30 feet in diameter set in “the narrows” of the Bay of Fundy with its 60-ft. tides! Or in the St. Lawrence River or at Sault Ste. Marie! Or at numerous points in the Ohio, Missouri, Mississippi or Colorado Rivers! A momentum motor set in the Golden Gate at San Francisco without in any way impeding navigation, would develop enough power from the inflowing and outflowing tides to supply electricity to all the bay cities. As long as the tides ebb and flow, our supply of electricity will be assured. On all sides we see examples of wasted energy—power! Think of what is wasted every day by the millions of automobiles and other moving vehicles. In an automobile, heat, the very essence of power, is wilfully wasted in enormous quantities—in the radiator and out through the exhaust pipes. Waste Power of Automobiles Suppose a coil of copper tubing were placed inside the exhaust manifold, and a highly volatile liquid, as ether or alcohol, were injected into this coil, wouldn’t the expanded vapor run a fairly sizeable “steam” engine? And couldn’t the vapor be condensed in a special radiator and be used over and over? 
What about the energy wasted when the brakes are applied—dissipated in friction? Cruising along at 30 m.p.h.—”Stop” sign— on with the brakes. Twenty to fifty horsepower gone beyond recall. Suppose instead of friction brakes, electrical brakes were used, and the current generated were stored in batteries, how much wasted energy could be saved? Of course, with fuel at a reasonable price, it is not practical to make the elaborate installations necessary for conserving the wasted energy. We have one outstanding example of its being done. On the electrical section of the Chicago, Milwaukee & St. Paul railroad, when a train goes down the long grade over the Cascade mountains, the motors are converted into generators and serve as electric brakes, and at the same time push considerable current back into the line, which is utilized by another train coming up the grade. Trains are scheduled to take advantage of the electrical counterbalance. Heat Represents Power Heat is usually associated with all motors and engines. Somewhere in the scheme of things heat has played a part. But the generally accepted idea is that abnormal heat is required, as in the steam engine and internal combustion motor. The fact of the matter is that potential power exists wherever two masses of different temperature are available. Dr. Georges Claude demonstrated this with his remarkable vacuum, or low pressure steam turbine, down on the coast of Cuba, described in a past issue of Modern Mechanics and Inventions. Much has been written about atomic power, the tremendous energies compacted within the atom being depicted as capable of furnishing all the power the world will ever need for billions of years. This is undoubtedly true, but scientists are by no means sanguine that a method of applying the atom’s power will ever be devised.
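The article never quantifies how much work two masses of different temperature can actually yield. A modern check (the Carnot limit is not mentioned in the original, and a real 1932 low-pressure turbine would fall well short of it) bounds the efficiency of the solar plant's 180-degree hot tank and 60-degree cooling tank at under 19 percent:

```python
# Carnot upper bound on efficiency for a heat engine running between
# the solar plant's 180 °F hot tank and 60 °F cooling tank.
def fahrenheit_to_rankine(t_f):
    """Convert Fahrenheit to the absolute Rankine scale."""
    return t_f + 459.67

t_hot = fahrenheit_to_rankine(180.0)   # 639.67 °R
t_cold = fahrenheit_to_rankine(60.0)   # 519.67 °R
carnot_efficiency = 1.0 - t_cold / t_hot
print(f"{carnot_efficiency:.1%}")      # → 18.8%
```

The bound explains why such plants needed very large, cheap collectors: even a perfect engine across this differential converts less than a fifth of the collected heat into work.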