Two special cases arise frequently in which the laws of signs may be used to advantage. The first such usage is in simplifying subtraction; the second is in changing the signs of the numerator and denominator when division is indicated in the form of a fraction. The rules for subtraction may be simplified by use of the laws of signs, if each expression to be subtracted is considered as being multiplied by a negative sign. For example, 4 - (-5) is the same as 4 + 5, since minus times minus is plus. This result also establishes a basis for the rule governing removal of parentheses. The parentheses rule, as usually stated, is: Parentheses preceded by a minus sign may be removed, if the signs of all terms within the parentheses are changed. This is illustrated as follows: 12 - (3 - 2 + 4) = 12 - 3 + 2 - 4. The reason for the changes of sign is clear when the negative sign preceding the parentheses is considered to be a multiplier for the whole parenthetical expression. Division in Fractional Form. Division is often indicated by writing the dividend as the numerator, and the divisor as the denominator, of a fraction. In algebra, every fraction is considered to have three signs: the numerator has a sign, the denominator has a sign, and the fraction itself, taken as a whole, has a sign. In many cases, one or more of these signs will be positive, and thus will not be shown. For example, in the following fraction the sign of the numerator and the sign of the denominator are both positive (understood) and the sign of the fraction itself is negative: Fractions with more than one negative sign are always reducible to a simpler form with at most one negative sign. For example, the sign of the numerator and the sign of the denominator may both be negative. We note that minus divided by minus gives the same result as plus divided by plus. 
Therefore, we may change to the less complicated form having plus signs (understood) for both numerator and denominator, as follows: Since -15 divided by -5 is 3, and 15 divided by 5 is also 3, we conclude that the change of sign does not alter the final answer. The same reasoning may be applied in the following example, in which the sign of the fraction itself is negative: When the fraction itself has a negative sign, as in this example, the fraction may be enclosed in parentheses temporarily, for the purpose of working with the numerator and denominator only. Then the sign of the fraction is applied separately to the result, as follows: All of this can be done mentally. If a fraction has a negative sign in one of the three sign positions, this sign may be moved to another position. Such an adjustment is an advantage in some types of complicated expressions involving fractions. Examples of this type of sign change follow: In the first expression of the foregoing example, the sign of the numerator is positive (understood) and the sign of the fraction is negative. Changing both of these signs, we obtain the second expression. To obtain the third expression from the second we change the sign of the numerator and the sign of the denominator. Observe that the sign changes in each case involve a pair of signs. This leads to the law of signs for fractions: Any two of the three signs of a fraction may be changed without altering the value of the fraction. AXIOMS AND LAWS. An axiom is a self-evident truth. It is a truth that is so universally accepted that it does not require proof. For example, the statement that "a straight line is the shortest distance between two points" is an axiom from plane geometry. One tends to accept the truth of an axiom without proof, because anything which is axiomatic is, by its very nature, obviously true. 
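The law of signs for fractions can be checked directly; here is a minimal sketch in Python using the standard-library `fractions` module, with the -15/-5 numbers from the example above:

```python
from fractions import Fraction

# Minus over minus reduces to plus over plus: -15/-5 equals 15/5.
a = Fraction(-15, -5)
b = Fraction(15, 5)
assert a == b == 3

# Moving the fraction's own sign instead: -(15/-5) also equals +(15/5).
c = -Fraction(15, -5)
assert c == 3
```

Any pair of the three signs can be flipped this way without changing the value of the fraction.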
On the other hand, a law (in the mathematical sense) is the result of defining certain quantities and relationships and then developing logical conclusions from the definitions. The four axioms of equality with which we are concerned in arithmetic and algebra are stated as follows: 1. If the same quantity is added to each of two equal quantities, the resulting quantities are equal. This is sometimes stated as follows: If equals are added to equals, the results are equal. For example, by adding the same quantity (3) to both sides of the following equation, we obtain two sums which are equal: 2. If the same quantity is subtracted from each of two equal quantities, the resulting quantities are equal. This is sometimes stated as follows: If equals are subtracted from equals, the results are equal. For example, by subtracting 2 from both sides of the following equation we obtain results which are equal: 3. If two equal quantities are multiplied by the same quantity, the resulting products are equal. This is sometimes stated as follows: If equals are multiplied by equals, the products are equal. For example, both sides of the following equation are multiplied by -3 and equal results are obtained: 4. If two equal quantities are divided by the same quantity, the resulting quotients are equal. This is sometimes stated as follows: If equals are divided by equals, the results are equal. For example, both sides of the following equation are divided by 3, and the resulting quotients are equal: These axioms are especially useful when letters are used to represent numbers. If we know that 5x = -30, for instance, then dividing both 5x and -30 by 5 leads to the conclusion that x = -6. LAWS FOR COMBINING NUMBERS. Numbers are combined in accordance with the following basic laws: 1. The associative laws of addition and multiplication. 2. The commutative laws of addition and multiplication. 3. The distributive law. Associative Law of Addition. The word "associative" suggests association or grouping. 
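The closing example (solving 5x = -30 with the division axiom) can be restated numerically in Python; the checks below are simply the four axioms applied to concrete numbers:

```python
# Axiom 4: dividing both sides of 5x = -30 by 5 gives x = -6.
x = -30 / 5
assert x == -6

# Axioms 1-3, applied to the equality x = -6 on both sides:
left, right = x, -6.0
assert left + 3 == right + 3    # adding equals to equals
assert left - 2 == right - 2    # subtracting equals from equals
assert left * -3 == right * -3  # multiplying equals by equals
```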
This law states that the sum of three or more addends is the same regardless of the manner in which they are grouped. For example, 6 + 3 + 1 is the same as 6 + (3 + 1) or (6 + 3) + 1. This law can be applied to subtraction by changing signs in such a way that all negative signs are treated as number signs rather than operational signs. That is, some of the addends can be negative numbers. For example, 6 - 4 - 2 can be rewritten as 6 + (-4) + (-2). By the associative law, this is the same as 6 + [(-4) + (-2)] or [6 + (-4)] + (-2). However, 6 - 4 - 2 is not the same as 6 - (4 - 2); the terms must be expressed as addends before applying the associative law of addition. Associative Law of Multiplication. This law states that the product of three or more factors is the same regardless of the manner in which they are grouped. For example, 6 x 3 x 2 is the same as (6 x 3) x 2 or 6 x (3 x 2). Negative signs require no special treatment in the application of this law. For example, 6 x (-4) x (-2) is the same as [6 x (-4)] x (-2) or 6 x [(-4) x (-2)]. Commutative Law of Addition. The word "commute" means to change, substitute, or move from place to place. The commutative law of addition states that the sum of two or more addends is the same regardless of the order in which they are arranged. For example, 4 + 3 + 2 is the same as 4 + 2 + 3 or 2 + 4 + 3. This law can be applied to subtraction by changing signs so that all negative signs become number signs and all signs of operation are positive. For example, 5 - 3 - 2 is changed to 5 + (-3) + (-2), which is the same as 5 + (-2) + (-3) or (-3) + 5 + (-2). Commutative Law of Multiplication. This law states that the product of two or more factors is the same regardless of the order in which the factors are arranged. For example, 3 x 4 x 5 is the same as 5 x 3 x 4 or 4 x 3 x 5. Negative signs require no special treatment in the application of this law. For example, 2 x (-4) x (-3) is the same as (-4) x (-3) x 2 or (-3) x 2 x (-4). 
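A short Python sketch of the grouping and ordering rules above, including the caution that 6 - (4 - 2) is a different expression:

```python
# Associative law of addition, with negatives rewritten as addends:
assert 6 + ((-4) + (-2)) == (6 + (-4)) + (-2) == 0
# Regrouping WITHOUT first rewriting the signs changes the value:
assert 6 - (4 - 2) == 4

# Commutative law of addition:
assert 5 + (-3) + (-2) == (-3) + 5 + (-2) == 0

# Associative and commutative laws of multiplication:
assert (6 * 3) * 2 == 6 * (3 * 2) == 36
assert 2 * (-4) * (-3) == (-4) * (-3) * 2 == 24
```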
Distributive Law. This law combines the operations of addition and multiplication. The word "distributive" refers to the distribution of a common multiplier among the terms of an additive expression. For example, 2(3 + 4 + 5) = 2(3) + 2(4) + 2(5) = 6 + 8 + 10. To verify the distributive law, we note that 2(3 + 4 + 5) is the same as 2(12), or 24. Also, 6 + 8 + 10 is 24. For application of the distributive law where negative signs appear, the following procedure is recommended:
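The distributive check in the paragraph above can be written out in Python, with `sum` playing the role of the additive expression:

```python
# 2(3 + 4 + 5) = 2*3 + 2*4 + 2*5
multiplier, terms = 2, [3, 4, 5]
lhs = multiplier * sum(terms)              # 2 * 12 = 24
rhs = sum(multiplier * t for t in terms)   # 6 + 8 + 10 = 24
assert lhs == rhs == 24
```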
What will happen to the Solar System? Scientists believe that stars do not always remain the same. We believe our Sun and our solar system were formed by the collapse of a cloud of gas and dust in space 4.6 billion years ago. The Sun is now a middle-aged star, with 9 planets and many other bodies near it in the solar system. In another 5 billion years, scientists think that the Sun will become much larger as energy from within makes the outer layers of the Sun expand, eventually becoming a red giant. As this happens, most of the inner planets will be destroyed (including Earth). Eventually, after another 100 million years, the Sun will no longer be able to make energy, and will become a white dwarf, the size of a small planet.
Compound expressions, conditional expressions and casts are allowed as lvalues provided their operands are lvalues. This means that you can take their addresses or store values into them. Standard C++ allows compound expressions and conditional expressions as lvalues, and permits casts to reference type, so use of this extension is deprecated for C++ code. For example, a compound expression can be assigned, provided the last expression in the sequence is an lvalue. These two expressions are equivalent: (a, b) += 5 a, (b += 5) Similarly, the address of the compound expression can be taken. These two expressions are equivalent: &(a, b) a, &b A conditional expression is a valid lvalue if its type is not void and the true and false branches are both valid lvalues. For example, these two expressions are equivalent: (a ? b : c) = 5 (a ? b = 5 : (c = 5)) A cast is a valid lvalue if its operand is an lvalue. A simple assignment whose left-hand side is a cast works by converting the right-hand side first to the specified type, then to the type of the inner left-hand side expression. After this is stored, the value is converted back to the specified type to become the value of the assignment. Thus, if a has type char *, the following two expressions are equivalent: (int)a = 5 (int)(a = (char *)(int)5) An assignment-with-arithmetic operation such as "+=" applied to a cast performs the arithmetic using the type resulting from the cast, and then continues as in the previous case. Therefore, these two expressions are equivalent: (int)a += 5 (int)(a = (char *)(int) ((int)a + 5)) You cannot take the address of an lvalue cast, because the use of its address would not work out coherently. Suppose that &(int)f were permitted, where f has type float. Then the following statement would try to store an integer bit-pattern where a floating-point number belongs: *&(int)f = 1; This is quite different from what (int)f = 1 would do—that would convert 1 to floating point and store it. 
Rather than cause this inconsistency, we think it is better to prohibit use of "&" on a cast. If you really do want an int * pointer with the address of f, you can simply write (int *)&f.
The understanding that light interacts with matter like a particle also led to modern information technology, such as computers, TVs, and lasers. The communicator is a direct example of the photoelectric effect, the subject of one of Einstein's groundbreaking 1905 papers. This is the achievement for which he won the Nobel Prize in Physics in 1921. The photoelectric effect centers on the ability of light to free electrons inside metal atoms. This induces an electric current throughout an illuminated piece of metal. The idea that light can physically disturb electrons points to a particle nature of light. The energy of the ejected electrons is proportional to the frequency of the light, not the intensity of the light. This fact draws on the wave-like properties of light within the same interaction. Laser Communicator – allows several students to communicate across the room. The laser emits high-intensity light, which is absorbed by the photoreceptor on the receiver.
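The frequency dependence can be made concrete with Planck's relation E = hf. The sketch below is illustrative only: the 650 nm wavelength is an assumed value for a typical red laser pointer, and the sodium work function is a rounded textbook figure; neither is taken from the activity described above.

```python
# Energy of a single photon: E = h*f = h*c / wavelength.
h = 6.626e-34        # Planck's constant, J*s
c = 2.998e8          # speed of light, m/s
wavelength = 650e-9  # assumed red laser-pointer wavelength, m

E_joules = h * c / wavelength
E_eV = E_joules / 1.602e-19   # about 1.9 eV per photon

# An electron is freed only if the photon energy exceeds the metal's
# work function (roughly 2.3 eV for sodium). A brighter beam of the
# same color delivers more photons, not more energy per photon.
sodium_work_function = 2.3    # eV, rounded textbook value
frees_electron = E_eV > sodium_work_function
```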
The sky is big. Searching it for potentially hazardous objects like asteroids and comets is hard. The best way to do it? A big ‘scope, equipped with a BIG camera, and a wide, wide field of view. That’s just what the Panoramic Survey Telescope & Rapid Response System — Pan-STARRS — brings to the table. It’s just a prototype, but it has a 1.8 meter ‘scope on — wait for it, wait for it — Mount Haleakala, and it sports a 1.4 gigapixel camera. You read that right: 1.4 billion pixels. It scans the skies looking for threatening objects, and astronomers just announced they have found their first one: 2010 ST3, an asteroid 50 meters (150 feet or so) across. It was found September 16, when it was still 30+ million kilometers (20 million miles) from Earth. Here’s the object in question: How big a threat is this object? Well, not very: there’s "a very slight chance" it will hit Earth in 2098, so I’m not terribly concerned. When astronomers map an orbit of an object, there’s some uncertainty in the measurements. It’s hard to get the exact position of the object, and its motion over a day or two isn’t enough to get a good idea of its trajectory. The farther you try to project where it’ll be in the future, the fuzzier the prediction gets. For something like 2010 ST3, there’s a huge volume of potential space it might occupy come 2098, and it so happens that the Earth is in that same volume of space at that time. But the Earth is near the edge of the projected position, and as time goes on, and the orbit is better determined, the volume of space the asteroid might be in will shrink. Eventually, what almost always happens is that the Earth winds up outside that volume as our data get better. That’s why the odds of it hitting us are so low. Now, if it did hit us, it would be bad. 
An object a bit smaller than that carved out Meteor Crater in Arizona, a hole over 1.5 kilometers across (that’s me posing in front of it; click it to get an idea of how big this scar is, and bear in mind the far rim is almost a mile away). An impact by something like that is about the same as exploding a 20 megaton bomb. So yeah, bad. The good news here… let me correct myself: the great news here is that Pan-STARRS found this thing at all! From that distance, an object this small is really hard to see, and no other asteroid survey could’ve found it. That means that as time goes on, Pan-STARRS will find lots and lots of threatening objects. And they’re out there whether we look for them or not! So it’s best to find the beasties before they find us. And when we do find them, we need to keep a really good eye on them. We need accurate orbits, and good statistics, so that we can figure out just how big a threat these guys are. 2010 ST3 is almost certainly benign, at least for the next century. But there are thousands more like it roaming the sky. We don’t get hit very often, but we do get hit. For those of you who fret about such things, I like to say that this is something to be concerned about, but not something to worry about. Worry accomplishes nothing, but concern means we’re turning our brains to the problem. And that’s the very best way to solve problems. Image credit: PS1SC
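For a back-of-the-envelope sense of "bad", the kinetic energy of a 50-meter rock can be estimated with ½mv². The density and impact speed below are assumed typical values for a stony asteroid, not measured properties of 2010 ST3:

```python
import math

diameter = 50.0    # m
density = 3000.0   # kg/m^3, assumed stony composition
speed = 17000.0    # m/s, assumed typical Earth-impact speed

radius = diameter / 2
mass = density * (4.0 / 3.0) * math.pi * radius**3   # sphere
energy_joules = 0.5 * mass * speed**2
energy_megatons = energy_joules / 4.184e15           # 1 Mt TNT = 4.184e15 J
```

Published estimates for impacts of this class vary with the assumed speed and density, but they all land in the megaton range, the same general class of energy as the Meteor Crater comparison above.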
September 17, 2009 In the United States, the lowly ficus sits quietly in the corners of our homes and offices, providing some much needed greenery and oxygen to our indoor spaces. But in the northeastern Indian state of Meghalaya, where Ficus elastica are large, native outdoor trees that live near water, the local people have been using the ficus’s roots as bridges for generations. These aren’t trees that have fallen naturally over streams, though, which are commonly used as bridges in other places. Instead, the people train the trees’ roots to grow over the streams, guiding them over a period of 20 or so years into the shapes of paths and handrails until they have a bridge strong enough to carry many people at once. And as the tree grows, so does the bridge, gaining in strength over time, as the magazine Geographical noted earlier this year: Once the roots have been trained across the stream bed, they anchor in the soil of the opposite bank, providing the foundations for a living bridge. Usually, several roots are threaded together for strength, while others provide handrails and supports for longer spans. Flat stones from the stream bed are used to fill gaps in the bridge floor and, in time, these are engulfed by woody growth and become part of the fabric of the bridge itself. A root bridge takes around 20 years to become fully functional. Once complete, however, it will probably last for several hundred years and, unlike its non-living counterparts, will actually increase in strength with age. Known in the Khasi language as jingkieng deingjri (‘bridge of the rubber tree’), the bridges may be anywhere from ten to 30 metres in span. Unlike most artificial structures, they are able to withstand the high level of soil erosion brought about by monsoon rains and, being living material rather than dead wood, are resistant to the ravages of termites. There is even a double-decker bridge supposedly capable of handling the weight of 50 people at a time. 
Even with global emissions of greenhouse gases drastically reduced in the coming years, the global annual average temperature is expected to be 2°C above pre-industrial levels by 2050. A 2°C warmer world will experience more intense rainfall and more frequent and more intense droughts, floods, heat waves, and other extreme weather events. Households, communities, and planners need to put in place initiatives that “reduce the vulnerability of natural and human systems against actual and expected climate change effects” (IPCC 2007). Without such adaptation, development progress will be threatened—perhaps even reversed. To shed light on adaptation costs the Economics of Adaptation to Climate Change (EACC) study was initiated by the World Bank in early 2008, funded by the governments of the Netherlands, Switzerland, and the United Kingdom. Its objectives are to develop an estimate of adaptation costs for developing countries and to help decisionmakers in developing countries understand and assess the risks posed by climate change and design better strategies to adapt to climate change. This initial study report, which focuses on the first objective, finds that the cost between 2010 and 2050 of adapting to an approximately 2°C warmer world by 2050 is in the range of $70 billion to $100 billion a year. This range is of the same order of magnitude as the foreign aid that developed countries now give developing countries each year, but it is still a very low percentage of the wealth of countries as measured by their GDP.
- Preyed upon by harpy eagles, anacondas, jaguars, ocelots and, of course, humans; excellent camouflage and slow movement help them elude predators.
- Several species of pyralid moths occasionally inhabit the fur (far more common on Bradypus).
- By defecating at the base of its host cecropia tree, the sloth provides the tree with fertilizer.
Monday, March 14, 2011 - 13:00 in Physics & Chemistry A partial meltdown has occurred at three nuclear reactors in the Fukushima power plant in Japan, due to a failure of the cooling system.
Photo: RunnerJenny (flickr) Scientists have reason to believe birds have music on the brain when they’re sleeping. Researchers think that birds dream about their songs. The key is a structure in birds’ brains that controls the nerves that make singing possible. Not only does this structure control the bird’s singing, but it also responds to sounds. What’s interesting is that this structure has been shown to be significantly more responsive to sound, especially to recordings of the bird’s own voice, when the bird is asleep. Even when scientists aren’t playing recordings of the bird’s own songs to its sleeping brain, this structure shows bursts of activity throughout the duration of the creature’s sleep cycle. It seems the bird is hearing music in its head. The theory is that dreaming about its songs helps the bird learn new tunes and possibly improve them. As with most activities in life, birds learn to sing by studying and practicing. To be good singers they need to listen to songs and reproduce them. Scientists believe that when the bird sleeps it might be re-playing songs from the day, possibly memorizing the songs and trying out variations on the tunes.
What's happened to the Sun? Sometimes it looks like the Sun is being viewed through a large lens. In the above case, however, there are actually millions of lenses: ice crystals. As water freezes in the upper atmosphere, small, flat, six-sided ice crystals may form. As these crystals flutter to the ground, much time is spent with their faces flat, parallel to the ground. An observer may pass through the same plane as many of the falling ice crystals near sunrise or sunset. During this alignment, each crystal can act like a miniature lens, refracting sunlight into our view and creating phenomena like parhelia, the technical term for sundogs. This image was taken two years ago in Stockholm, Sweden. Visible in the image center is the Sun, while two bright sundogs glow prominently to both the left and the right. Also visible is the bright 22 degree halo -- as well as the rarer and much fainter 46 degree halo -- also created by sunlight reflecting off of atmospheric ice crystals. Image Credit & Copyright: Peter Rosén Explanation of the image from: http://apod.nasa.gov/apod/ap110110.html
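The 22-degree figure falls out of the standard minimum-deviation formula for a prism, applied to the 60-degree prism formed by alternate faces of a hexagonal ice crystal. A sketch in Python, where n = 1.31 is an approximate refractive index for ice:

```python
import math

# Minimum deviation through a prism: D_min = 2*asin(n*sin(A/2)) - A
n = 1.31                 # approximate refractive index of ice
A = math.radians(60.0)   # 60-degree angle between alternate crystal faces

D_min = 2 * math.asin(n * math.sin(A / 2)) - A
print(math.degrees(D_min))   # about 21.8 degrees: the familiar 22-degree halo
```

The same formula with a 90-degree prism angle (a side face and an end face of the crystal) gives roughly 46 degrees, matching the fainter halo mentioned above.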
The MOC 10 trawl is lowered from the stern of the research ship. The collecting nets will remain closed until they reach the desired depth. Then they will be opened either electronically or mechanically by controls on the ship. The Global Explorer, a remotely operated vehicle (ROV), is lowered from its mother ship into the sea (seen here on a previous expedition to the Arctic). A high definition video camera in an underwater housing (a large black cylinder, lower center), still digital camera (in yellow housing), and other video cameras that the pilots use to guide the ROV are mounted on the frame in front. Six lights in white reflectors will illuminate the scene when the ROV descends into the darkness of midwater. The large funnel and plastic tube on the starboard side of the ROV (lower left) compose the front end of the suction sampler. The two clear canisters on the lower frame are used to collect larger animals. This requires much skill on the part of the ROV pilot, who must use the video cameras to guide the ROV into position with an open canister below the animal, then slowly rise until the animal is inside the canister and hydraulically close the top and bottom lids before it can escape. National Geographic Society Exploring the Inner Space of the Celebes Sea 2007 will provide the most complete vertical inventory of marine life ever accomplished in a short space of time by combining four technologies on a single ship. In surface waters, trawls will catch specimens and bluewater scuba divers will collect, photograph, and observe live animals. However, bluewater diving, which is described elsewhere, is limited to the top 30-to-40 meters (m) of the sea because of human physiology constraints. Other technologies must be used to explore the ocean realm below. No expedition to explore the biology of the ocean can be complete without physical specimens. 
The use of horizontally towed fishing trawls to collect samples of robust animals is the oldest technology, but it has been updated. The MOC 10 trawl is a multiple opening/closing net, carrying up to six separate nets that can be opened and closed at chosen depths by controls on the ship at the surface. The mesh size of the nets is 3.0 millimeters, so the nets will catch macrozooplankton as well as micronekton. The system includes a CTD — an instrument that records conductivity (a measure of salinity), temperature, and depth. Other sensors, such as an oxygen meter and a transmissometer that measures water clarity, can be added. Remotely operated vehicles (ROVs) are rapidly becoming the preferred tools for many kinds of in situ work in the deep sea. They provide the capability for direct observation and measurements at depth, as well as selective sampling of individual specimens. Because ROVs are tethered to, and powered from, a vessel at the surface, they can work underwater for long periods of time. The Global Explorer ROV that we will use on the Celebes Sea cruise can dive to 3,000 m. It is equipped with a high definition (HD) color video camera, a digital still camera, a 12-chambered suction sampler, and static canister samplers. The ROV has thrusters in three spatial planes, allowing it to hold position in midwater for extensive observations and to follow and collect individual animals. On our cruise it will also run quantitative video transects to match with the trawling data. The Global Explorer was designed and built by Deep Sea Systems International, an Oceaneering International Company. A RopeCam is ready for launch over the side of the ship. The two yellow balls (on the top of the frame) will provide buoyancy and ensure that the camera remains right-side up while it hangs suspended at depth. 
National Geographic has been developing RopeCam — a low-cost, simple set of underwater-housed digital video cameras — over the past 12 years. Deployed on 5/16-inch lines and retrieved using a “pot hauler,” the camera packages are baited with tuna and can be placed on the sea floor or hung anywhere in the water column down to 4,500 m. Timers turn the lights and cameras on and off during a set time period. The resulting digital video is as good as any recorded from much more expensive manned submersibles or ROVs. The camera packages are usually deployed in groups of three, increasing the time and area of coverage beyond what can be accomplished with a single submersible. Animals that are not easily caught or trawled to the surface, such as giant deep-sea sharks, can be observed feeding and swimming.
I will try to get more systematic about my posts from now on: for every two non-technical posts I will keep two technical posts. This post is also the first in a series in which I intend to write about visual illusions only. Before getting into the subject of this post, it would be helpful to have a quick recap of the background. The Blind Spot: Consider a horizontal cross section of the human eye as shown below. As seen above, the innermost membrane is the retina, and it lines the walls of the posterior portion of the eye. When the eye is focused, light from the focused object is imaged onto the retina. It thus acts as a screen. Pattern vision is caused by the distribution of discrete light receptors, called rods and cones, over the retinal surface. Each eye has about 6-7 million cones, located primarily in the central portion of the retina, and they are highly sensitive to color. Humans can resolve fine details with cones because each cone is connected to its own nerve end. The vision due to cones is called photopic or bright-light vision. The number of rods is about 75-150 million, and they are distributed throughout the retina. The amount of detail that can be resolved by rods is smaller because several of them are connected to the same nerve, unlike the cones. Vision due to rods simply gives an overall picture of the field of view. Objects seen in bright daylight appear as colorless forms in moonlight because only the rods are stimulated. This type of vision is called scotopic or dim-light vision. As seen in the figure, there is a portion of the retina which has no receptors (rods or cones) and thus causes no sensation. This is called the blind spot. Because of the blind spot, a certain field of vision is not perceived. We do not notice it, however, because the brain fills it in with details from the surroundings or with information from the other eye. 
The blind spots in the two eyes are arranged symmetrically, so that the loss in the field of vision of one eye is compensated by the other. This is shown by the figure below. If the brain did not fill the lost field of vision with surrounding details and information from the other eye, then the blind spot would appear something like the black dot on the image below. This means that if you close one eye, you can indeed detect the presence of the blind spot, since the brain no longer has sufficient information about the lost field of vision (though it is still good enough that we do not notice it normally). The presence of the blind spot can be demonstrated by the simple figure below. Click on the above image to enlarge. Now enlarge the image, close your right eye, and focus your left eye on the X only. Don't try to look at the O on the left; you'll just notice it at the periphery. The object of interest should be the X only. Now move towards the screen; at a certain point you will no longer see the O in the periphery. If you move closer or farther than this point, you will see the O again. This specific point (a range, actually) where you cannot see the O indicates the presence of the blind spot. The Vanishing Head Illusion: This leads to some interesting illusions, one of the most interesting being the so-called vanishing head illusion. As in the above figure, if the O is replaced by a head, the person appears headless when the head falls on the blind spot. Check the video below in full screen for best results. We notice that Richard Wiseman on the left indeed appears headless, and that part of the field of view is filled up by the orange background when his head falls on the blind spot. Then he does something even more interesting: he uses a black bar and moves it up and down in front of his face. Instead of seeing the bar as discontinuous, the brain manages to show the bar as a continuous entity!
Direct Current (DC) Electricity by Ron Kurtus (revised 11 January 2004)

Direct current or DC electricity is the continuous movement of electrons from an area of negative (−) charges to an area of positive (+) charges through a conducting material such as a metal wire. Whereas static electricity sparks consist of the sudden movement of electrons from a negative to a positive surface, DC electricity is the continuous movement of the electrons through a wire. A DC circuit is necessary to allow the current or stream of electrons to flow. Such a circuit consists of a source of electrical energy (such as a battery) and a conducting wire running from the positive terminal of the source to the negative terminal. Electrical devices may be included in the circuit. DC electricity in a circuit consists of voltage, current and resistance. The flow of DC electricity is similar to the flow of water through a hose. Questions you may have include: - What is DC electricity? - What are voltage, current and resistance? - How do we create DC electricity? This lesson will answer those questions.

Continuous movement of electrons

DC electricity is the continuous movement of electrons through a conducting material such as a metal wire. The electrons move toward a positive (+) potential in the wire. In reality, there are millions of electrons weaving their way among the atoms in the wire; the illustration simply shows the overall movement.
An electrical circuit consisting of a source of DC power and a wire making a complete circuit is required for DC electricity to flow. (See DC circuits for more information.) A flashlight is a good example of a DC circuit.

Current shown opposite

Although the negatively charged electrons move through the wire toward the positive (+) terminal of the source of electricity, the current is indicated as going from positive to negative. This is an unfortunate and confusing convention. Ben Franklin originally named charges positive (+) and negative (−) when he was studying static electricity. Later, when scientists were experimenting with electrical currents, they said that electricity travels from (+) to (−), and that became the convention. This was before electrons were discovered. In reality, the negatively charged electrons move toward the positive terminal, which is the opposite of the direction in which current is conventionally shown. It is confusing, but once a convention is made, it is difficult to correct.

Voltage, current and resistance

The electricity moving through a wire or other conductor is characterized by its voltage (V), current (I) and resistance (R). Voltage is potential energy, current is the rate at which electrons flow through the wire, and resistance is the friction force on the electron flow. A good way to picture DC electricity, and to understand the relationship between voltage, current and resistance, is to think of the flow of water through a hose, as explained below. A potential or pressure builds up at one end of the wire, due to an excess of negatively charged electrons. It is like water pressure building up in a hose. The pressure causes the electrons to move through the wire to the area of positive charge. This potential energy is called voltage; its unit of measurement is the Volt. The rate at which electrons flow is called the current, and its unit of measurement is the Ampere or Amp. Electrical current is like the rate at which water flows through a hose.
An Ohm is the unit of measurement of electrical resistance. A conductor like a piece of metal has its atoms arranged so that electrons can readily pass around the atoms with little friction or resistance. In a nonconductor or poor conductor, the atoms are arranged so as to greatly resist or impede the travel of the electrons. This resistance is similar to the friction of the hose against the water moving through it.

Comparison with hose

The following chart compares water running in a hose and DC electricity flowing in a wire:

|Water in a Hose||DC in a Wire||Unit|
|pressure||voltage (V)||Volts|
|rate of flow||current (I)||Amps|
|friction||resistance (R)||Ohms|

Analogy between a Hose and Electricity in a Wire

Creating DC electricity

Although static electricity can be discharged through a metal wire, it is not a continuous source of DC electricity. Instead, batteries and DC generators are used to create DC. Batteries rely on chemical reactions to create DC electricity. The automobile battery consists of lead plates in a sulfuric acid solution. When the plates are given a charge from the car's generator or alternator, they change chemically and hold the charge. That source of DC electricity can then be used to power the car's lights and such. The biggest problem with this type of battery is that sulfuric acid is very caustic and dangerous. Another battery that you can make yourself is a lemon battery. This one needs no charging but depends on the acidic reaction of different metals. Copper and zinc work best: you can use a copper penny or a copper piece of wire, and a zinc-coated or galvanized nail as the other terminal. A standard iron nail will work, but not as well. Push the copper wire and galvanized nail into an ordinary lemon and measure the voltage across the metals with a voltmeter. Some people have been able to dimly light a flashlight bulb with this battery. Another reliable source of DC electricity is the DC generator, which consists of coils of wire spinning between North and South magnets.
(See Generating Electrical Current for more information.)

Direct current or DC electricity is the continuous movement of electrons from negative to positive through a conducting material such as a metal wire. A DC circuit is necessary to allow the current or stream of electrons to flow. In a circuit, the direction of the current is opposite to the flow of electrons. DC electricity in a circuit consists of voltage, current and resistance. The flow of DC electricity is similar to the flow of water through a hose. Batteries and DC generators are the sources used to create DC electricity.
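The lesson describes voltage, current and resistance qualitatively; quantitatively they are tied together by Ohm's law, V = I × R (a standard relation not spelled out in the text). A minimal sketch with made-up circuit values:

```python
def current(voltage_v, resistance_ohm):
    """Ohm's law rearranged: I = V / R, in Amps."""
    return voltage_v / resistance_ohm

# Hypothetical flashlight circuit: two 1.5 V cells in series
# driving a bulb with 10 ohms of resistance.
i = current(3.0, 10.0)
print(i)  # 0.3 (Amps)
```

Doubling the voltage doubles the current, just as doubling the water pressure increases the flow through the hose.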
There are many different types of particles, all with different sizes and properties. Three particles which are all around us are the proton, the neutron, and the electron. These are the particles which make up atoms. The proton has a positive charge (a + charge). The neutron has a neutral charge. The electron has a negative charge (a − charge), and it is the smallest of these three particles. In atoms, there is a small nucleus in the center, which is where the protons and neutrons are, and electrons orbit the nucleus. Protons and neutrons are not elementary particles because they are made up of quarks. There are six different types of quarks. These are the up quark, the down quark, the strange quark, the charm quark, the bottom quark, and the top quark. A neutron is made of two down quarks and one up quark. The proton is made up of two up quarks and one down quark.
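The quark compositions above can be checked against the particle charges. Up quarks carry charge +2/3 and down quarks −1/3 (in units of the elementary charge; these values are standard facts not stated in the text). A quick sketch using exact fractions:

```python
from fractions import Fraction

UP = Fraction(2, 3)     # charge of an up quark, in units of e
DOWN = Fraction(-1, 3)  # charge of a down quark

proton = 2 * UP + DOWN    # two up quarks and one down quark
neutron = UP + 2 * DOWN   # one up quark and two down quarks

print(proton, neutron)  # 1 0
```

The sums reproduce the proton's positive charge and the neutron's neutral charge exactly.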
Phase in waves is the fraction of a wave cycle which has elapsed relative to an arbitrary point. The phase of an oscillation or sine wave refers to a sinusoidal function such as

x(t) = A·sin(2πft + φ)

where A, f and φ are constant parameters. These functions are periodic with period T = 1/f, and they are identical except for a displacement along the t axis. The term phase can refer to several different things:

A phase change is any change that occurs in the phase of one quantity, or in the phase difference between two or more quantities.

φ is sometimes referred to as a phase-shift, because it represents a "shift" from zero phase. But a change in φ is also referred to as a phase shift. For infinitely long sinusoids, a change in φ is the same as a shift in time, such as a time-delay. If x(t) is delayed (time-shifted) by 1/4 of its cycle, it becomes

x(t − T/4) = A·sin(2πf(t − T/4) + φ) = A·sin(2πft + φ − π/2),

whose "phase" is now φ − π/2. It has been shifted by π/2.

Phase difference is the difference, expressed in electrical degrees or time, between two waves having the same frequency and referenced to the same point in time. Two oscillators that have the same frequency and different phases have a phase difference, and the oscillators are said to be out of phase with each other. The amount by which such oscillators are out of step with each other can be expressed in degrees from 0° to 360°, or in radians from 0 to 2π.
If the phase difference is 180 degrees (π radians), then the two oscillators are said to be in antiphase. If two interacting waves meet at a point where they are in antiphase, then destructive interference will occur. It is common for waves of electromagnetic (light, RF), acoustic (sound) or other energy to become superposed in their transmission medium. When that happens, the phase difference determines whether they reinforce or weaken each other. Complete cancellation is possible for waves with equal amplitudes. Time is sometimes used (instead of angle) to express position within the cycle of an oscillation.

- A phase difference is analogous to two athletes running around a race track at the same speed and direction but starting at different positions on the track. They pass a point at different instants in time, but the time difference (phase difference) between them is a constant - the same for every pass, since they are at the same speed and in the same direction. If they were at different speeds (different frequencies), the phase difference would only reflect their different starting positions; technically, a phase difference between two entities at different frequencies is undefined and does not exist.
- Time zones are also analogous to phase differences.
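A time-delayed sinusoid of a given frequency differs from the original only by a phase shift, and a quarter-cycle delay corresponds to −90°. The sketch below (function and variable names are my own) recovers each sampled sinusoid's phase by correlating against sine and cosine over a whole number of cycles, then checks the delayed copy against the expected shift:

```python
import math

def phase_at(samples, f, dt):
    """Recover phi for a sampled A*sin(2*pi*f*t + phi) signal.

    Over an integer number of cycles, correlating against sin gives
    s ~ (N/2)*A*cos(phi) and against cos gives c ~ (N/2)*A*sin(phi),
    so phi = atan2(c, s).
    """
    s = sum(x * math.sin(2 * math.pi * f * k * dt) for k, x in enumerate(samples))
    c = sum(x * math.cos(2 * math.pi * f * k * dt) for k, x in enumerate(samples))
    return math.atan2(c, s)

f, dt, n = 5.0, 0.001, 1000          # a 5 Hz tone sampled over exactly 1 s
delay = 1 / (4 * f)                  # a quarter-cycle time delay
x1 = [math.sin(2 * math.pi * f * (k * dt)) for k in range(n)]
x2 = [math.sin(2 * math.pi * f * (k * dt - delay)) for k in range(n)]

dphi = phase_at(x2, f, dt) - phase_at(x1, f, dt)
print(dphi)  # ~ -pi/2: the quarter-cycle delay appears as a -90 degree shift
```

The correlation trick only works cleanly when the record covers an integer number of cycles, which is why the sample count is chosen to span exactly one second.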
In-phase and quadrature (I&Q) components

The term in-phase is also found in the context of communication signals. Here A·cos(2πft) represents a carrier wave at frequency f, and A(t) and φ(t) represent possible modulation of a pure carrier wave, e.g.:

x(t) = A(t)·cos(2πft + φ(t)) = A(t)·cos(φ(t))·cos(2πft) − A(t)·sin(φ(t))·sin(2πft)

The modulation alters the original cos(2πft) component of the carrier, and creates a (new) sin(2πft) component, as shown above. The component that is in phase with the original carrier, I(t) = A(t)·cos(φ(t)), is referred to as the in-phase component. The other component, Q(t) = A(t)·sin(φ(t)), which is always 90° (π/2 radians) "out of phase", is referred to as the quadrature component.

Coherence is the quality of a wave to display a well defined phase relationship in different regions of its domain of definition. In physics, quantum mechanics ascribes waves to physical objects. The wave function is complex, and since its square modulus is associated with the probability of observing the object, the complex character of the wave function is associated with the phase.
Since the complex algebra is responsible for the striking interference effects of quantum mechanics, the phase of particles is therefore ultimately related to their quantum behavior.

Phase compensation is the correction of phase error (i.e., the difference between the actually needed phase and the obtained phase). A phase compensation is required to obtain stability in an op-amp; a capacitor/RC network is usually used in the phase compensation to keep a phase margin. A phase compensator subtracts out an amount of phase shift from a signal equal to the amount of phase shift added by switching one or more additional amplifier stages into the amplification signal path.

- Instantaneous phase
- Lissajous curve
- Phase angle
- Phase cancellation
- Phase velocity
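The in-phase/quadrature decomposition discussed above can be verified with a short sketch (names my own): splitting A·cos(2πft + φ) into I·cos(2πft) − Q·sin(2πft), with I = A·cos φ and Q = A·sin φ, reproduces the modulated carrier exactly at every sample time:

```python
import math

def iq_components(amplitude, phase):
    """In-phase and quadrature parts of A*cos(2*pi*f*t + phase):
    I = A*cos(phase) multiplies cos(2*pi*f*t),
    Q = A*sin(phase) multiplies sin(2*pi*f*t) with a minus sign."""
    return amplitude * math.cos(phase), amplitude * math.sin(phase)

A, phi, f = 2.0, math.pi / 6, 10.0   # arbitrary illustrative values
I, Q = iq_components(A, phi)

# Reconstruct the carrier from I and Q and compare with the direct form.
for t in (0.0, 0.013, 0.27):
    direct = A * math.cos(2 * math.pi * f * t + phi)
    from_iq = I * math.cos(2 * math.pi * f * t) - Q * math.sin(2 * math.pi * f * t)
    assert abs(direct - from_iq) < 1e-12

print(I, Q)
```

This is just the angle-addition identity cos(a + b) = cos a·cos b − sin a·sin b applied to the carrier phase.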
Methylated genes are acquired at fertilization. As the zygote develops into a fully grown organism, its somatic cells bear the same methylated genes. When the organism produces its own gametes, the imprinted genes are un-methylated; nonetheless, genomic imprints are re-instituted in its genome. The location of gene methylation depends on the sex of that organism. The new gene imprints consequently affect the phenotype of its progeny, and the effect depends on the sex of the parent transmitting the chromosome containing the methylated gene (i.e. whether it is the female or the male parent).
Title: SETI: Science Fact, Not Fiction Speaker: Jill Tarter Abstract: The privately funded Phoenix Project of the SETI Institute has been systematically searching the microwave portion of the radio spectrum for signals generated by another technology. This talk will discuss searches from both northern and southern hemispheres, putting the negative results to date into a cosmic perspective. Several more years of searching will be required to complete our target list of 1000 nearby stars. We are beginning to make plans for expanding the search to reach fainter signals from more distant stars, in case the current Phoenix search of our galactic backyard fails to find evidence for SETI. Reference for students: Jean has copies of the articles.
A magnetic field has a magnitude of 1.2×10⁻³ T, and an electric field has a magnitude of 4.6×10³ N/C. Both fields point in the same direction. A positive 1.8-µC charge moves at a speed of 3.1×10⁶ m/s in a direction that is perpendicular to both fields. Determine the magnitude of the net force that acts on the charge.
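One way to work the problem: the electric force qE lies along the field direction, while the magnetic force qvB (with v perpendicular to B) is perpendicular to both v and the fields, so the two forces are perpendicular and combine in quadrature. A sketch of the arithmetic:

```python
import math

q = 1.8e-6   # charge, C
E = 4.6e3    # electric field, N/C
B = 1.2e-3   # magnetic field, T
v = 3.1e6    # speed, m/s, perpendicular to both fields

F_electric = q * E        # along the shared field direction
F_magnetic = q * v * B    # perpendicular to v and to the fields
F_net = math.hypot(F_electric, F_magnetic)  # perpendicular forces add in quadrature
print(F_net)  # ~1.06e-2 N
```

The electric contribution is 8.28×10⁻³ N and the magnetic contribution 6.70×10⁻³ N, giving a net force of about 1.06×10⁻² N.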
Shrinking land ice is wreaking havoc across the globe. See how global warming is shrinking Greenland's glaciers and raising sea level—and find other hot spots experiencing shrinking land ice on the Climate Hot Map. - Sea-level rise. Water from shrinking glaciers and ice sheets is now the major contributor to global sea-level rise. Long locked away in polar regions and mountains, this extra runoff is adding new freshwater to the world's oceans. - Long-term decline in water resources. Nearly one-sixth of the world's population lives near rivers that derive their water from glaciers and snow cover. Most of these communities can expect to see their water resources peak and then ultimately decline during this century. - Short-term increase in flash floods. Many rivers that derive their water from melting glaciers or snow are likely to have earlier peak runoff in spring and an overall increase in runoff, at least in the short term—potentially increasing the risk of flash floods and rockslides. - Accelerated warming from albedo. Land ice in polar regions reflects some of the sun's energy back into space (known as albedo), helping keep the planet cool. As this ice shrinks and darker land is exposed, it absorbs more solar energy—creating a feedback loop that accelerates the planet's warming. Land ice includes any form of ice that lasts longer than a year on land, such as mountain glaciers, ice sheets, ice caps and ice fields (both similar to but smaller than an ice sheet), and frozen ground or permafrost. Nearly a quarter of the land area in the Northern Hemisphere is permafrost, with layers up to tens of meters thick. Above-freezing temperatures occur at the base of the permafrost layer.
Bending moments are produced by transverse loads applied to beams. The simplest case is the cantilever beam, widely encountered in balconies, aircraft wings, diving boards etc. The bending moment acting on a section of the beam, due to an applied transverse force, is given by the product of the applied force and its distance from that section. It thus has units of N m. It is balanced by the internal moment arising from the stresses generated. This is given by a summation of all of the internal moments acting on individual elements within the section. These are given by the force acting on the element (stress times area of element) multiplied by its distance from the neutral axis, y. Therefore, the bending moment, M, in a loaded beam can be written in the form

M = ∫ σ y dA

The concept of the curvature of a beam, κ, is central to the understanding of beam bending. The figure below, which refers now to a solid beam rather than the hollow pole shown in the previous section, shows that the axial strain, ε, is given by the ratio y / R. Equivalently, 1/R (the "curvature", κ) is equal to the through-thickness gradient of axial strain. It follows that the axial stress at a distance y from the neutral axis of the beam is given by

σ = E κ y

The bending moment can thus be expressed as

M = ∫ E κ y² dA = κ E ∫ y² dA

This can be presented more compactly by defining I (the second moment of area, or "moment of inertia") as

I = ∫ y² dA

The units of I are m⁴. The value of I depends solely on the beam sectional shape. For example, for a rectangular section of width w and height h, I = w h³ / 12 about the neutral axis, while for a solid circular section of radius r, I = π r⁴ / 4. The moment can now be written as

M = κ E I

These equations allow the curvature distribution along the length of a beam (i.e. its shape), and the stress distribution within it, to be calculated for any given set of applied forces. The following simulation implements these equations for a user-controlled beam shape and set of forces.
The 3-point bending and 4-point bending loading configurations in this simulation are symmetrical, with the upward forces, denoted by arrows, outside of the downward force(s), denoted by hooks.

A fruitful approach to designing beams which are both light and stiff is to make them hollow. Calculation of the second moment of area for hollow beams is very straightforward, since it is obtained by simply subtracting the I of the missing section from that of the overall section. For example, that for a cylindrical tube of outer radius ro and inner radius ri is given by

I = π (ro⁴ − ri⁴) / 4
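A short sketch (function names my own) of these second-moment calculations: I for solid and hollow circular sections, plus the maximum bending stress σ_max = M·y_max / I that follows from combining σ = Eκy with M = κEI. The tube dimensions and applied moment are made-up illustrative values:

```python
import math

def I_circle(r):
    """Second moment of area of a solid circular section: I = pi * r^4 / 4."""
    return math.pi * r**4 / 4

def I_tube(r_outer, r_inner):
    """Hollow section: subtract the I of the missing core from the full section."""
    return I_circle(r_outer) - I_circle(r_inner)

def max_bending_stress(M, I, y_max):
    """sigma = E*kappa*y and M = kappa*E*I give sigma_max = M * y_max / I."""
    return M * y_max / I

# Hypothetical tube: 40 mm outer / 30 mm inner radius, carrying a 2 kN m moment.
I = I_tube(0.040, 0.030)
sigma = max_bending_stress(2000.0, I, 0.040)
print(I, sigma)  # second moment of area (m^4) and peak stress (Pa)
```

For these numbers the peak stress comes out near 58 MPa, illustrating why hollow sections are efficient: most of the removed material was near the neutral axis, where it contributed little to I.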
Improved observational techniques reveal features which challenge the very definition of elliptical galaxies. Many ellipticals contain disks or rings of stars, dust, or gas; others show signs of recent accretions in the form of shells of luminosity. High-resolution surface photometry shows some elliptical galaxies have breaks (once known as `cores') where their luminosity profile slopes decrease markedly, while other ellipticals show a constant or only gradually changing slope to the innermost point measured. Direct imaging (Kormendy et al. 1994) and nonparametric deprojection of luminosity profiles (Gebhardt et al. 1996) both dramatize the difference between resolved breaks and power-law profiles. But parameter correlations offer some support to the possibility that many power-law profiles actually break at projected radii smaller than 0.1 arc-sec. Perhaps most basic is the roughly linear correlation between absolute spheroid luminosity and physical break radius (Kormendy et al. 1994, Faber et al. 1997). Most of the power-law profiles belong to galaxies with M_V > -21; these galaxies could well have breaks at radii R_b < 10 pc and may follow the correlation found for the well-resolved galaxies. Luminosity also correlates with central velocity dispersion sigma_0 and with surface brightness I_b at radius R_b. These parameters obey a `fundamental plane' relationship (Faber et al. 1997), analogous to the one already found for the global parameters R_e, sigma, and I_e. The existence of such a plane implies that the virial theorem rules the parameters at the break radius; however, the central plane seems thicker than the global plane, perhaps implying that central M/L ratios vary from galaxy to galaxy. High signal-to-noise CCD images of elliptical galaxies show that some have non-elliptical isophotes (Carter 1979, 1987, Lauer 1985a, Jedrzejewski 1987, Bender, Dobereiner, & Mollenhoff 1988, KD89).
Departures from a perfectly elliptical form typically have amplitudes of a few percent, and most galaxies are either boxy or pointed. Two equivalent methods are used to measure isophote shapes:

Isophote fitting (Carter 1979). As a function of the azimuthal angle theta measured with respect to the major axis, a chosen isophote is fit to

(1)    R(theta) = a_0 + Sum_i [ a_i cos(i theta) + b_i sin(i theta) ]

Here a_0 gives the isophote's mean radius, the i = 1 coefficients contain information about its center, and the i = 2 coefficients contain information about its ellipticity and position angle.

Surface-brightness fitting (Lauer 1985b). To detect non-elliptical distortions, the surface brightness along a trial ellipse is fit to

(2)    I(theta) = A_0 + Sum_i [ A_i cos(i theta) + B_i sin(i theta) ]

Here A_0 is the mean surface brightness along the ellipse, which is adjusted until the i = 1 and i = 2 coefficients vanish. Results obtained using these two methods are related by

(3)    a_i / a_0 = A_i / (A_0 gamma) ,   b_i / a_0 = B_i / (A_0 gamma)

where gamma is the logarithmic slope of the brightness profile. In most elliptical galaxies the most significant non-elliptical term is the a_4 coefficient, which is typically in the range -0.02 < a_4/a_0 < 0.04. Galaxies with a_4 < 0 are termed `boxy' since their isophotes are somewhat rectangular, while those with a_4 > 0 are accurately described as `pointed', though the term `disky' is widely used since such galaxies appear to contain disks. Kormendy & Bender (1996) have proposed that boxiness or diskiness be adopted as the primary classification criterion for elliptical galaxies. Isophotal shape correlates with several other important parameters, including optical luminosity, X-ray luminosity, and degree of rotational support (Bender et al. 1989, KD89, Fig. 3). But some galaxies appear both disky and boxy, depending on the surface brightness examined, somewhat blurring this classification scheme.
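Equation (1) can be illustrated numerically. The sketch below (all names my own) builds a synthetic isophote with a small negative cos(4 theta) distortion and recovers a_4 with a discrete Fourier sum, reproducing the boxy case a_4/a_0 < 0:

```python
import math

def fourier_coeffs(radii, order):
    """Discrete estimate of the cos/sin coefficients of Eq. (1) from
    isophote radii sampled uniformly in azimuth."""
    n = len(radii)
    a = sum(r * math.cos(order * 2 * math.pi * k / n)
            for k, r in enumerate(radii)) * 2 / n
    b = sum(r * math.sin(order * 2 * math.pi * k / n)
            for k, r in enumerate(radii)) * 2 / n
    return a, b

# Synthetic boxy isophote: mean radius 1.0 with a -2% cos(4*theta) distortion.
n = 360
radii = [1.0 - 0.02 * math.cos(4 * (2 * math.pi * k / n)) for k in range(n)]

a4, b4 = fourier_coeffs(radii, 4)
print(a4, b4)  # a4 ~ -0.02, b4 ~ 0: a_4/a_0 < 0, i.e. a boxy isophote
```

Real isophote fitting is done relative to a best-fit ellipse rather than a circle, so this toy calculation only captures the role of the cos(4 theta) term, not the full procedure of Carter (1979).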
In a sample of 42 early-type galaxies, 10% had a_4/a_0 > 0.02 and appear to be likely candidates for systems with embedded disks (Lauer 1985a). However, the actual percentage of ellipticals with embedded disks could be much larger. Photometric models combining a spheroidal r^1/4 bulge and an exponential disk indicate that embedded disks must be either very substantial or nearly edge-on to be readily detected. Detection statistics are consistent with the hypothesis that all ellipticals with a_4 > 0 contain disks contributing 20% of the total light (Rix & White 1990). Photometric modeling is unreliable in recovering disk parameters by decomposing the observed images. The practice of finding the best-fit ellipse and attributing only the residuals to the disk systematically and drastically underestimates the luminosity contributed by the latter (Rix & White 1990). Surface-brightness data cannot unambiguously diagnose the presence of a disk since some ellipticals may have intrinsically pointy isophotes for unrelated reasons. Line-of-sight velocity profiles provide a more definitive diagnosis (Rix & White 1990). To measure velocity profiles, galactic absorption-line spectra are compared with the spectra of template stars. Such comparison may be performed in several different ways (Rix & White 1992): Fourier transform methods invoke the convolution theorem; the transform of the observed spectrum is (approximately) equal to the transform of the velocity profile times the transform of the template spectrum (Sargent et al. 1977, Franx, Illingworth, & Heckman 1989). Cross-correlation methods measure the correlation between the observed and template spectra as a function of the relative shift in wavelength (Tonry & Davis 1979, Bender 1990). Direct methods synthesize the observed spectrum by adding up shifted versions of the template spectrum. 
This is somewhat expensive computationally but avoids the `belling' needed to produce well-behaved transforms of finite spectral intervals, allows masking of discrepant features to reduce mismatch between template and galactic spectra, and permits a straightforward error analysis (Rix & White 1992). Non-Gaussian line profiles may be characterized using the Gauss-Hermite series (van der Marel & Franx 1993):

(4)    LP(v) = Gamma [ alpha(w) / sigma ] Sum_j h_j H_j(w) ,   w = (v - V) / sigma

where Gamma is the line strength, sigma is the Gaussian velocity dispersion, V is the system velocity, and

(5)    alpha(x) = (2 pi)^(-1/2) exp(-x^2 / 2)

is a normalized Gaussian with unit dispersion. The set of functions H_j(x) are Hermite polynomials of degree j; these are orthogonal with respect to the weight function alpha(x)^2:

(6)    (2 pi)^(1/2) Integral dx H_j(x) H_k(x) alpha(x)^2 = delta_jk

In applying Eq. 4, the parameters Gamma, sigma, and V are chosen by requiring that h_0 = 1 and h_1 = h_2 = 0; non-Gaussian line profiles yield nonzero h_j for j > 2. In particular, h_3 parametrizes the skewness of the line profile, while h_4 measures whether the profile is more or less peaked than a Gaussian. Application of these techniques has revealed line profiles characteristic of embedded stellar disks in many ellipticals (Franx & Illingworth 1988, Bender 1990, Rix & White 1992, van der Marel & Franx 1993, van der Marel et al. 1994). Embedded disks yield h_3 values with opposite signs on opposite sides of the galaxy's center, implying the presence of a rapidly rotating component. The characteristics of these embedded disks show considerable variation: NGC 5322 has a relatively `warm' disk, with v/sigma = 1.4, while the disk in NGC 3610 is quite `cold', with v/sigma = 4.5 (Rix & White 1992). Isophote shape is known to be correlated with rotation velocity: elliptical galaxies with a_4 > 0 tend to be rapid rotators (KD89).
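The orthogonality relation (6) can be sanity-checked numerically. Note an assumption in the sketch below: the rescaling applied to the physicists' Hermite polynomials is derived here so that relation (6) holds for the stated weight alpha(x)^2, and is not quoted from van der Marel & Franx:

```python
import math

def hermite_phys(j, x):
    """Physicists' Hermite polynomial via H_{n+1} = 2x H_n - 2n H_{n-1}."""
    h0, h1 = 1.0, 2.0 * x
    if j == 0:
        return h0
    for n in range(1, j):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * n * h0
    return h1

def H(j, x):
    """Hermite polynomials rescaled so that Eq. (6) holds
    (normalization derived from int H_j^2 exp(-x^2) dx = sqrt(pi) 2^j j!)."""
    return math.sqrt(math.sqrt(2.0) / (2.0**j * math.factorial(j))) * hermite_phys(j, x)

def alpha(x):
    """Normalized unit-dispersion Gaussian of Eq. (5)."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def lhs_eq6(j, k, lim=8.0, n=4000):
    """Left-hand side of Eq. (6) by midpoint quadrature on [-lim, lim]."""
    dx = 2 * lim / n
    total = sum(H(j, x) * H(k, x) * alpha(x)**2
                for x in (-lim + (i + 0.5) * dx for i in range(n))) * dx
    return math.sqrt(2.0 * math.pi) * total

print(lhs_eq6(3, 3), lhs_eq6(3, 4))  # ~1 and ~0, as delta_jk requires
```

With this normalization the quadrature reproduces delta_jk to the accuracy of the grid, confirming that alpha(x)^2 is the right weight for the series in Eq. (4).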
Kinematically cold disks may be partly responsible for this correlation, since at some radii such disks may make very substantial contributions to the light and will tend to dominate line-of-sight velocity measurements parametrized by simple Gaussians (Rix & White 1990). For example, purely Gaussian fits to composite line profiles may overestimate rotation curve amplitudes by > 30% (van der Marel & Franx 1993). Many elliptical galaxies have dust lanes produced by cold interstellar material distributed in disks or rings (e.g. van Gorkom 1992). Velocity measurements show that in many cases this cold gas counter-rotates or is otherwise kinematically distinct from the underlying stellar component; a likely interpretation is that this material has been accreted since the galaxy formed. Unlike stars, the gas can only settle down on closed (and stable) orbits. In theory this might allow determination of the principal planes of a galaxy's potential, but in practice settling times are long enough that this is questionable (KD89). One galaxy in which a gas ring does define the principal plane is IC 2006 (Schweizer 1987, Franx, van Gorkom, & de Zeeuw 1994). This galaxy has an external ring containing some young stars. Neutral hydrogen observations show that the velocity varies along the ring in a perfectly sinusoidal manner, indicating that the ring - and the potential it moves in - is exactly circular. The inclination angle of the ring is 37 deg. X-ray observations indicate that many ellipticals contain 10^9 to 10^10 solar masses of gas at temperatures of 10^7 K (Forman et al. 1979, Trinchieri & Fabbiano 1985, Schweizer 1987). This is comparable to the mass of cold and warm gas present in typical spiral galaxies; thus it seems fair to say that ellipticals are not, in fact, gas poor compared to disk galaxies. The hot gas typically forms a pressure-supported `atmosphere' around the galaxy. 
The surface brightness of E galaxies doesn't always decline smoothly and monotonically with radius. When a smooth luminosity profile is subtracted from the actual surface brightness, `shells' or `ripples', centered on the galaxy, are seen (Malin & Carter 1980, Prieur 1990). At least 17% of field E galaxies have shell-like features, and the true fraction may be more than 44% (KD89). Shells have spectral energy distributions characteristic of starlight. In many cases the shells are somewhat more blue than the galaxies they occupy (KD89). Shell systems have a variety of morphologies; some galaxies have shells transverse to the major axis and interleaved on opposite sides of the center of the galaxy, while other galaxies have shells distributed at all position angles (Prieur 1990). Both the colors and the varied morphologies of shells can be explained by accretion events in which a large elliptical galaxy captures and tidally disrupts a smaller companion. Profile subtraction sometimes reveals other kinds of structures in E galaxies, including plumes, linear features or `jets' (not the jets seen in AGNs!), `X-structures', etc. (Schweizer & Seitzer 1992).

Due date: 1/28/97

4. How could you use the correlation between effective surface brightness I_e and effective radius R_e (see Eq. 3 of Lecture 2) to measure distances? What kind of accuracy could you achieve, assuming all the scatter shown in Fig. 2 of KD89 is intrinsic to the galaxies?

Last modified: January 22, 1997
Earth & Space Science: Session 5 A Closer Look: Metamorphic Rocks What are metamorphic rocks? Metamorphic rock is rock that has physically and chemically changed, or "morphed," into new rock. The word "metamorphic" has its origins in classical Greek and means "to change form." Rock of any type (sedimentary, igneous, or metamorphic) that is subjected to high pressure, high temperatures, and/or reactions with chemical solutions can be converted to metamorphic rock. This transformation can involve changes in a rock’s texture (grain size and shape), fabric (how the grains are oriented relative to one another), chemical composition, and mineral content. The rock remains solid as these changes occur. Through the process of metamorphism, the original rock, or protolith, changes into a new metamorphic rock. How are these high pressures and temperatures generated? The answer lies in the processes of plate tectonics. Plates that move against each other produce huge forces that create high pressures and temperatures that can deform rock by bending or breaking it. Rock can also be buried and metamorphosed when plates collide. Temperatures and pressure within the Earth increase with depth, so that rock deep in the crust will experience extreme heat and pressure. Rock can also be subjected to high temperatures in regions of volcanism as well as in places beneath the Earth where magma intrudes into the rock above it. What types of metamorphism exist? Regional Metamorphism: Many metamorphic rocks form by regional metamorphism, named for the large areas of the crust that are affected. Regional metamorphism usually results from mountain building processes, which are caused by the collision of tectonic plates. These collisions compress and thicken the crust and cause considerable rock deformation. High Pressure Metamorphism: Some metamorphic rock forms at high pressures but at temperatures that are relatively low. This type of metamorphism occurs at subduction zones. 
Here, high pressures result when one plate is submerged under the mantle. Temperatures remain relatively low because the crust that forms the upper part of the subducting plate is cool, having been close to the Earth's surface. As the plate subducts, it actually cools the mantle. The subducting plate reaches high pressures faster than it heats to high temperatures, and this pressure is enough to cause metamorphism. High Temperature Metamorphism: Some metamorphic rock forms at high temperatures but without high pressures. This occurs near hot intrusions of magma from the mantle into the crust. Rock that is in contact with these intrusions undergoes contact, or thermal, metamorphism. This heat causes minerals to react with each other, which produces new minerals. Hydrothermal Metamorphism: This process is associated with contact metamorphism. When very large masses of magma — called plutons — intrude from the mantle into the crust, a great amount of heat is generated. This huge body of hot magma creates a heat source that can cause fluids in the crust to circulate. Chemical reactions occur as a result of this circulation. This type of metamorphism is common near mid-oceanic ridges and around large plutonic intrusions in the crust.
"Our vision for the marine environment is clean, healthy, safe, productive and biologically diverse oceans and seas. Within one generation, we want to have made a real difference." UK Government, 2002 Oceans 2025 is NERC's co-ordinated marine science programme. Our seven marine research centres will work together to increase people's knowledge of the marine environment so that they are better able to protect it for future generations. Running for five years, this programme will improve our understanding of how our seas behave and how they are changing, and what that might mean not just for our oceans, but for society. More about Oceans 2025 16 Jul 2010 Combined cruise bids information presented for NOC, PML, and SAMS. Oceans 2025 Cruise Bids 2007-12 update 10 Jun 2010 Hungry sharks, swordfish and other ocean predators have adopted different hunting behaviours depending on how much food is readily available. Giant steps help hungry fish to find food
An alloy has been produced which not only has magnetic properties better than the best of current materials, but is also unrestricted by the patents that cover many widely used permanent magnets. Michael Coey, a professor at Trinity College Dublin, says that the new material, an alloy of iron, samarium and nitrogen, is easily bonded with plastic, rubber or polymers to make mass-produced magnets of any shape. It can also work at a much higher temperature than current magnets. The Japanese currently dominate the world market for permanent magnets, estimated to be worth more than 500 million Pounds a year. The magnets with the best properties, which are used in everything from industrial electric motors to stereo loudspeakers, are made of a complex iron-boron-neodymium alloy developed in Japan in 1983. A European initiative to develop an alternative was launched five years ago. The project, known as Concerted European Action on ...
EINSTEIN'S cosmic speed limit may not be absolute after all. When the overall energy level in space is negative, it should be possible to travel faster than light, a physicist claims. The special theory of relativity, which holds that nothing can travel faster than light, has confounded hopes of interstellar travel. But now Ken Olum of Tufts University in Medford, Massachusetts, has found a possible way. Olum compared the transit times of signals following various paths. If light passes through space containing mass, gravity will delay it slightly. "If you only have positive mass, you can only make delays," says Olum. To make the signal travel faster than light, the mass needs to be less than zero. "You've got to have negative mass, which is equivalent to negative energy," he says. No one knows whether negative energy can exist on a large scale, but it can exist in small regions. ...
“The genesis of life is as inevitable as the formation of atoms,” is how Andrei Finkelstein, the director of the Russian Academy of Sciences’s Applied Astronomy Institute, explained his ambitious timeline for finding alien life to an audience of astrobiologists and reporters in June. “There is life on other planets, and we will find it in 20 years." But Tullis Onstott, a geologist at Princeton University who specializes in astrobiology, makes an even more ambitious prediction. “In the next 15 years,” he says, “we will likely discover life on an exoplanet near us.” Scientists have long predicted the discovery of extraterrestrial life, but Finkelstein and Onstott have good reason to be optimistic. Researchers are devoting more resources to the search for alien life than ever before, and they are getting some enticing results.
In regular heptagon ABCDEFG, show that 1/AB = 1/AC + 1/AE. A regular heptagon is cyclic (as is as any regular polygon), and therefore any quadrilateral defined by four of its vertices is also cyclic. Ptolemy's Theorem states that in a cyclic quadrilateral the sum of the products of the two pairs of opposite sides equals the product of its two diagonals. Let AB = CD = DE = a, AC = CE = b, AD = AE = c. Applying Ptolemy's Theorem to cyclic quadrilateral ACDE, we obtain ab + ac = bc. Then a(b + c) = bc, and so 1/a = (b + c)/bc = 1/b + 1/c. Therefore, in regular heptagon ABCDEFG, 1/AB = 1/AC + 1/AE. The nonagon diagonals puzzle may be solved by applying Ptolemy's Theorem to cyclic quadrilateral ABDG. Source: Mathematics: Problems of the Month (problem since taken down)
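The identity is easy to confirm numerically. A quick sketch (not part of the original problem source) placing the heptagon's vertices at 7th roots of unity on the unit circle:

```python
import cmath

# Vertices of a regular heptagon ABCDEFG on the unit circle.
A, B, C, D, E, F, G = (cmath.exp(2j * cmath.pi * k / 7) for k in range(7))

AB = abs(B - A)  # side a
AC = abs(C - A)  # short diagonal b
AE = abs(E - A)  # long diagonal c (equal to AD by symmetry)

assert abs(AE - abs(D - A)) < 1e-12          # AD = AE, as used in the proof
assert abs(1 / AB - (1 / AC + 1 / AE)) < 1e-12  # 1/AB = 1/AC + 1/AE
```

The chords come out as 2 sin(kπ/7) for k = 1, 2, 3 steps around the circle, so the assertion is checking the trigonometric form of the same identity.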
The flier (Centrarchus macropterus) is a sunfish from the family Centrarchidae which is native to muddy-bottomed swamps, ponds, weedy lakes, and riverine backwaters across the American South, from southern Illinois east to the Potomac River basin and south to Texas. The flier, which can live up to five years, grows to a maximum recorded length of about 12 in. (30 cm). The maximum recorded weight of the species is just over one-half kilogram (about 19 oz). Fliers are occasionally kept in aquaria by North American native-fish fanciers. C. macropterus is currently the only species of genus Centrarchus Cuvier, 1829, but Lacépède had originally assigned it to Labrus (now confined to some marine wrasses). The generic name, Centrarchus, derives from the Greek κέντρον (centre, in this sense “sting”) and άρχος (ruler, in this sense “anus”), in reference, presumably, to the sharp spines on the anal fin. Centrarchus being the type genus of family Centrarchidae, it gives its name to the whole sunfish family. The specific name, macropterus, derives from μακρόν πτερόν (long fin). The Flier is also the primary publication of the Native Fish Conservancy. The Native Fish Conservancy is a non-profit organization dedicated to protecting North America’s native fishes and their habitats. A Gato-class submarine, USS Flier, built in 1943, was named for this species.
View Full Version : Lunar Spacecraft - Solar Cells 2003-Feb-24, 07:23 AM I have wondered about this since I saw the movie Apollo 13 and a thread in the LC forum reminded me of it. I have always thought that solar cells were used on some of the first satellites to provide power for their systems. Why couldn't they have wrapped the LCM and LEM in small solar cells? This would have solved or helped solve the power problem with Apollo 13 and would have allowed the astronauts to stay on the Moon for longer (just guessing here). Any comments on this? If solar cells were invented before Apollo went to the moon why didn't they put them on? -Colt 2003-Feb-24, 07:47 AM It was simply the solution of an optimization problem. The longer a flight is, the more solar cells would be good to use. But for the short Apollo flights, fuel cells and batteries best fitted the weight constraints. For the CSM, fuel cells also have the advantage of producing potable water - otherwise dehydrating the food wouldn't have been that smart if you would have had to carry all the water for rehydrating with you as water from the beginning. The first LM designs also made use of fuel cells, but later it was switched to batteries ( http://www.astronautix.com/details/all16943.htm http://www.hq.nasa.gov/office/pao/History/SP-4205/ch6-6.html ) due to problems with fuel cell development. Solar cells don't produce very much power per weight unit, but can produce much energy per weight unit if you leave them enough time. Chariots for Apollo: A History of Manned Lunar Spacecraft (http://www.hq.nasa.gov/office/pao/History/SP-4205/contents.html) is a pretty nice account of the LM development. 2003-Feb-24, 08:20 AM It's interesting that the Soviets tried both batteries and solar panels on their Soyuz spacecraft. 
2003-Feb-24, 08:55 AM I don't know the rationale for this, but I guess it was because Soyuz was planned for long-term missions and the Russians couldn't get fuel cells to work. 2003-Feb-25, 06:45 AM How do fuel cells work in creating water exactly? Is this a certain type of fuel cell? -Colt 2003-Feb-25, 07:24 AM I recall a nuclear source on board the LEM. I remember it because they were worried about the Apollo13 LEM coming back to Earth. Was this a radioactive decay source or a real controlled fission reactor thing (pretty small). 2003-Feb-25, 08:45 AM Radioactive decay source, an RTG, using thermoelements to turn heat into electricity. Was used to power the ALSEP, the scientific experiment package deployed on the Moon. 2003-Feb-25, 08:47 AM Yes! I remember now. Thanks for the kick. 2003-Feb-25, 12:32 PM The Russians never developed fuel cell technology so relied on batteries for short duration missions and solar panels for longer term missions. Soyuz was capable of flying in either configuration, depending on the length of the flight. 2003-Feb-25, 04:53 PM On 2003-02-25 01:45, Colt wrote: How do fuel cells work in creating water exactly? Is this a certain type of fuel cell? This explains it pretty well: (from How Stuff Works) (http://science.howstuffworks.com/fuel-cell2.htm)
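The water production asked about in the thread follows from the fuel cell's overall reaction, 2 H2 + O2 -> 2 H2O: every mole of hydrogen consumed yields a mole of water. A rough back-of-the-envelope sketch (molar masses approximate, figures illustrative only):

```python
# Overall fuel-cell reaction: 2 H2 + O2 -> 2 H2O (1 mole of water per mole of H2).
M_H2 = 2.016    # molar mass of H2 in g/mol, approximate
M_H2O = 18.015  # molar mass of H2O in g/mol, approximate

def water_per_kg_hydrogen():
    """Kilograms of potable water produced per kilogram of H2 consumed."""
    moles_h2 = 1000.0 / M_H2
    return moles_h2 * M_H2O / 1000.0

print(round(water_per_kg_hydrogen(), 2))  # 8.94
```

So roughly 9 kg of water per kg of hydrogen, which is why the CSM's fuel cells doubled as the crew's water supply.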
Paleozoic, Mesozoic, and Cenozoic. These are the major eras in the history of life on Earth, and the transition from one period to another has been marked by a major turnover in fossils — one assemblage of organisms going extinct and being replaced by another. Today paleontologists agree that the biggest extinction in the fossil record occurred at the transition between the Paleozoic and Mesozoic, about 250 million years ago. During this Permo-Triassic extinction, perhaps as much as 70 percent of the plant, reptilian, amphibian, and insect species died on land. In the ocean, the consequences were even more devastating; up to 96 percent of Earth’s marine species went extinct. The cause of such a catastrophic loss of life has been the subject of ongoing study. One proposed explanation is an asteroid strike like the one blamed for dinosaur extinction 65 million years ago. Another explanation involves the oxygen level in the ocean. Marine organisms need oxygen just as terrestrial organisms do, and some scientists have speculated that oxygen-poor water welled up from the ocean depths and suffocated marine life. Another hypothesis is large-scale volcanism. Studies published in November 2011 and May 2012 argue that volcanism does the best job of explaining all the evidence in the geologic record. And it not only explains the ancient mass extinction, but also hints at future threats to ocean life. The volcanic hypothesis centers around the Siberian Traps, flat-topped volcanic mountains in Russia. The massive eruption that produced these mountains occurred 250 million years ago, about the same time as the Permo-Triassic extinction. The eruption was one of the biggest volcanic events in the last 500 million years, and it matches up with not only the timing of the extinction, but with the kinds of animals that were hit hardest. Volcanoes release carbon dioxide, and the Siberian Traps eruptions would have emitted huge quantities of it, while also producing it indirectly. 
The basalts released by the eruptions flowed over sedimentary rock rich in organic material. Geologic studies of the Siberian Traps have revealed gas explosion structures along the margins of the flood basalts, which geologists have interpreted as evidence of sudden, violent carbon releases from sedimentary rocks under pressure by lava. Besides raising atmospheric temperatures with heat-trapping gas, the newly released carbon dioxide would also have affected the ocean. Carbon dioxide dissolves in seawater to create carbonic acid, increasing ocean acidity. The carbonic acid reacts with carbonate ions, leaving less carbonate for marine life to use for shells or skeletons. Animals with such shells or skeletons suffer, but they don’t all suffer equally. Mollusks and marine arthropods have what biologists refer to as “buffered physiology,” which means they have closed circulation systems and/or gas-exchanging features (such as gills) to buffer their internal tissues from changes in ocean chemistry. Other animals such as sponges, corals, sea urchins, and sea lilies do not; their tissues are directly exposed to seawater. What the Permo-Triassic extinction studies found was that the poorly buffered organisms experienced greater rates of extinction and took longer to rebound. Carbon dioxide alone did not cause the catastrophic extinction 250 million years ago. Other factors, including higher temperatures and lower oxygen levels in the water, also pressured marine life. But carbon dioxide likely played an outsized role. No one can predict when volcanic activity as widespread and destructive and the Siberian Traps eruptions might occur again. But we do know that rising carbon dioxide levels in the atmosphere pose a threat to marine life today. While volcanoes currently release 130 to 380 million metric tons of carbon dioxide each year, human burning of fossil fuels releases about 30 billion tons of it. 
That’s anywhere from roughly 80 to 230 times as much greenhouse gas that can increase ocean acidity. Today’s ocean contains a sizable reservoir of fine-grained calcium carbonate sediment that acts as a counterweight to rising ocean acidity. Geologists surmise that such a reservoir probably didn’t exist in the Permo-Triassic ocean. Moreover, today’s marine organisms descended from the survivors of high acidity episodes over the last 250 million years, so they may be better able to withstand ocean chemistry changes. Nevertheless, rising ocean acidity could spell trouble for marine organisms such as corals. A 2011 study of volcanic carbon dioxide seeps in Papua New Guinea found that ocean acidification and temperature stress reduced coral diversity and abundance. As before, poorly buffered marine life could suffer. Clapham, M.E., Payne, J.L. (2011) Acidification, anoxia, and extinction: A multiple logistic regression analysis of extinction selectivity during the Middle and Late Permian. Geology. 39(11), 1059-1062. Fabricius, K.E., Langdon, C., Uthicke, S., Humphrey, C., Noonan, S., De’ath, G., Okazaki, R., Muehllehner, N., Glas, M.S., Lough, J.M. (2011) Losers and winners in coral reefs acclimatized to elevated carbon dioxide concentrations. Nature Climate Change. 1, 165-169. Kerr, R.A. (1997) Life’s winners keep their poise in tough times. Science. 278(5342), 1403. Mitchell, A. (2012, April 30) Life in the sea found its fate in a paroxysm of extinction. The New York Times. Payne, J.L., Clapham, M.E. (2012) End-Permian mass extinction in the oceans: an ancient analog for the twenty-first century? Earth and Planetary Sciences. 40, 89-111. PBS Evolution. (2001) Permian-Triassic extinction.
Did you know that there are more than 12,000 species of ants all over the world? Here are some more fascinating facts about ants. - An ant can lift 20 times its own body weight. If a second grader was as strong as an ant, she would be able to pick up a car! - Some queen ants can live for many years and have millions of babies! - Ants don’t have ears. Ants "hear" by feeling vibrations in the ground through their feet. - When ants fight, it is usually to the death! - When foraging, ants leave a pheromone trail so that they know where they’ve been. - Queen ants have wings, which they shed when they start a new nest. - Ants don’t have lungs. Oxygen enters through tiny holes all over the body and carbon dioxide leaves through the same holes. - When the queen of the colony dies, the colony can only survive a few months. Queens are rarely replaced and the workers are not able to reproduce. General Manager - Staff Entomologist
A team of Filipino and American scientists have rediscovered a highly distinctive mammal -- a greater dwarf cloud rat -- that was last seen 112 years ago. Furthermore, it has never before been discovered in its natural habitat and was thought by some to be extinct. - Dwarf cloud rat rediscovered after 112 yearsThu, 1 May 2008, 15:35:12 EDT - Dwarf crocodiles split into three speciesFri, 12 Dec 2008, 13:29:32 EST - Biologists rediscover endangered frog populationTue, 28 Jul 2009, 9:25:04 EDT - 'Extinct' monkey rediscovered in Borneo by new expeditionSat, 21 Jan 2012, 0:34:24 EST - Nearby dwarf galaxy and possible protogalaxy discoveredFri, 11 Jan 2013, 15:34:09 EST
Evolution, as we learn in school, is so gradual that changes take place over hundreds or thousands of years. And of course, most of us never get a concrete look at what the process of evolution looks like. But a certain type of Australian lizard is teaching the whole world a lesson about evolution by changing right before our eyes. The yellow-bellied three-toed skink looks like a small snake with tiny legs. In the cold mountainous regions of Australia the skink gives birth to live young, but in the warmer coastal regions the same species lays eggs. Scientists say that we are essentially seeing the lizard in the middle of evolving from egg-laying to live-bearing births. The process to go from egg laying to live birthing isn’t all that complicated. The females simply start keeping their young inside their bodies for longer and longer, usually because of harsh weather or other environmental factors. Over the generations, the incubation time inside mothers’ bellies gets longer and longer, and egg shells – which once protected the young from the outside world and provided calcium – get thinner and thinner. Because the process is relatively simple (in evolutionary terms, anyway) it’s happened plenty of times before. In fact, almost a hundred types of lizards have made the switch from egg-laying to live births. Currently, only two types of lizards other than the skinks use both types of reproduction methods. Seeing the skinks in this stage of their evolution is helping scientists figure out exactly how the change is made.
Advantages of the Domain Model Confining object/relational mapping logic to a clearly demarcated data access layer (DAL) within your application facilitates the creation of a single, coherent, object-oriented Domain Model to represent your application’s business objects. Adherence to this practice leads to more robust, scalable, and maintainable software. Separation of concerns - Conceptual model. The Domain Model provides a concise representation of the domain of interest. This can aid in communication with experts in the problem domain who are not necessarily developers. - Division of labor and organization of code. Need to retrieve or persist data? Then call into the DataAccess singleton. Need to execute some business logic when a condition’s selected field changes? Then go to the Domain Model and put the logic into the Condition.SelectedField setter. Need to tell the View how to display itself differently? Then go to the View Model. - Code Reuse. Both the old and new apps contain object-relational mapping logic. The difference is that the new app’s Data Access layer, by way of generalized functions, eliminates the needless repetition found within the old app’s numerous Load and Save methods. - Consistent interface to in-memory data. Hiding the relational model specifics, as is accomplished by the new app’s Data Access layer, provides a consistent representation of business objects. The View layer in the old app, on the other hand, contains bindings that reference both .NET properties as well as database fields. - Maintainability. Because of the separation of responsibility, the new app can be modified more easily when business needs change. Changing a database field’s name, for example, requires a limited maintenance effort in the new app. Since data access is coded concisely in the Data Access layer, it involves altering just one string. 
Coding directly to ADO.NET objects, as the old app does, means that you cannot easily navigate the relationships between business objects. The new app’s Data Access layer, on the other hand, shields upper layers from database specifics. This allows for more natural modeling of data, and simpler handling of complex relationships between entities. - The business objects are strongly typed, so we benefit from compile-time type-checking. - Business objects’ members expose themselves through Intellisense, eliminating typos and problems with remembering property names. - We can navigate the object-oriented Domain Model by using the OO-dot syntax, such as condition.SelectedField.Type.AvailableOperators. That is, we just choose an object instance to be our point-of-entry, then navigate from there. The ADO.NET objects in the old app, on the other hand, don’t allow you to easily navigate the data model.
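The layering described above can be sketched in miniature. Python is used here for brevity, and every class and member name (DataAccess, Field, Condition) is an illustrative stand-in, not code from either app:

```python
class DataAccess:
    """Data access layer: all object/relational mapping logic lives here."""
    _instance = None

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def load(self, entity_type, key):
        # One generalized routine replaces many hand-written Load methods.
        row = self._query(entity_type, key)
        return entity_type(**row)

    def _query(self, entity_type, key):
        # Stand-in for real SQL; renaming a column is a one-string edit here.
        return {"name": "Status", "operators": ["=", "<>"]}


class Field:
    """Strongly typed business object exposed to upper layers."""
    def __init__(self, name, operators):
        self.name = name
        self.available_operators = operators


class Condition:
    """Domain model object: business logic lives in the property setter."""
    def __init__(self):
        self._selected_field = None
        self.operator = None

    @property
    def selected_field(self):
        return self._selected_field

    @selected_field.setter
    def selected_field(self, field):
        self._selected_field = field
        # Business rule runs whenever the selected field changes.
        self.operator = field.available_operators[0]
```

With this shape, a caller writes `cond.selected_field = DataAccess.instance().load(Field, key=1)` and navigates the object graph with dot syntax; the operator-reset rule lives in exactly one setter, and the database schema is visible only inside the DAL.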
Earth’s first animal was the ocean-drifting comb jelly, not the simple sponge, according to a new find that has shocked scientists who didn’t imagine the earliest critter could be so complex. What’s wrong with this? Basically, the study showed that the line leading to the ctenophores may have diverged before the other metazoan lineages. Stating comb jellies were the “first animal” is going to lead to people thinking that all animals evolved from comb jellies! Some other stories on this article do specify the ctenophore lineage diverged first, which is somewhat less misleading. However, this lineage diverging first does not mean modern ctenophores were there at the time. We might also consider that at that divergence the other lineage produced led eventually to humans, and we certainly were not swimming about in the ocean with the ctenophores waiting for land-living plants and animals to evolve so we could get out and dry off. Modern humans are much different from their early ancestors, and modern ctenophores may be as well. There are several possible outcomes to this phylogenetic study. First, the findings may have been in error. Many groups are only represented in the study by one or two examples, and the sponges are one of the poorly-represented groups. Secondly, the findings may be accurate, but the ctenophore ancestors may have been quite unlike their modern form at the time they diverged and may have independently evolved their complex characteristics. Thirdly, this may mean after all that sponges have secondarily lost some complex features possessed by their ancestors. This study represents a stride forward in metazoan phylogenetics, but there is still much more to learn! Not long after publishing my post on the improved metazoan tree of life I reviewed T. Ryan Gregory’s paper on understanding evolutionary trees. 
He is likewise vexed by the poor news coverage of this discovery and writes about the phylogenetic fallacy of thinking that early branching equals primitive. If you have a strong stomach and feel up to more mangling of science, you can visit this page and find the statement that the humble opossum descended from Smilodon. The author seems to have absorbed some knowledge of the existence of non-feline saber-tooths along the way, but thinks that Smilodon was not a feline, but a marsupial. The saber-toothed marsupial he’s probably really thinking of is Thylacosmilus. In addition to this major error, the author also makes the mistake of thinking modern species are on a line of descent from extinct species that were simply in the same clade, and certainly not direct ancestors. File that under “a little knowledge is a dangerous thing”!
"It's not as if I have the kids go in and do a science experiment, and then go in the next day and do another experiment, and so on.." my son's 6th grade science teacher told me. Rather, he assured, today's science classes focus on more important things, like communicating scientific ideas through presentations and posters. The science experiment is now an optional home venture: the fourth and final option on the weekly homework sheet, listed after (1) the calculator-facilitated metric conversion worksheet, (2) the calculator-facilitated area & volume worksheet, and (3) the communications assignment (pick a science article, summarize it, and write a personal reflection of what you thought about it). And the experiment's instructions are so imprecise that it's not clear what you're actually supposed to be doing or testing out. Does the amount of salt in water affect the amount of freshwater produced? "Produced"? In one trivial sense, the answer is yes. Any significant amount of salt reduces the amount of freshwater down to zero, because when you add enough salt to freshwater it's no longer fresh. In another trivial sense, the answer is no: adding substance B to substance A doesn't subtract from substance A. Whatever. Maybe the directions will somehow illuminate matters: Mix salt and water to make salt water. Do the proportions matter? A sprinkling of salt? A whole ladle full? Add about 2 inches of the water to a pot. But remember the area and volume sheet! Inches are linear! What on earth is "2 inches of water"? Put an empty glass in the bowl. Just "put?" Centered? On its side? Upside down? Seal plastic wrap over the top, weigh it down with a rock (centered above the bowl?)... Now you've made a solar still. Oh, OK. Let's re-position the cup accordingly. But what if no one at home knows what a solar still is already? Repeat with fresh water. "Two inches" of water? In a pot the same size as the first? 
Actually, it's a good thing this is left unspecified: we don't have two equal pots. (Do most people?) Do two inches of fresh water get you the same amount of H2O as two inches of salt water? Is this what we're trying to find out? Or is starting out with the same amount of H2O a prerequisite for answering a different question about what happens to the water later on? Put the stills outside in the sun. Leave it [sic] alone for a few hours, or even a whole day. When you're ready, measure the water. In inches? And doesn't how long we leave them out affect the answer we get? Assuming we even know what the question is... Yes, I see now that communicating scientific ideas is very important. Perhaps we'll go with Option 3 next time.
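The complaint about "inches" can be made concrete: volume is area times depth, so the same "2 inches of water" means different amounts of water in different pots. A quick sketch (the pot diameters are hypothetical, picked just for illustration):

```python
import math

def water_volume_liters(pot_diameter_cm: float, depth_cm: float) -> float:
    """Volume of water in a cylindrical pot: base area x depth (1 L = 1000 cm^3)."""
    radius = pot_diameter_cm / 2
    return math.pi * radius**2 * depth_cm / 1000

# "2 inches" is about 5 cm of depth; two hypothetical pot sizes
small = water_volume_liters(pot_diameter_cm=18, depth_cm=5)
large = water_volume_liters(pot_diameter_cm=28, depth_cm=5)
print(f"small pot: {small:.2f} L, large pot: {large:.2f} L")
```

Same depth, very different amounts of water — which is exactly why the instructions underdetermine the experiment.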
<urn:uuid:52bb4d0c-8827-40f3-ae07-d9cf140774e5>
3.546875
618
Personal Blog
Science & Tech.
68.173758
Here is one of NASA’s latest videos of the surface of the sun — but it’s not just any video showing coronal mass ejections. This video was shot using a specific “extreme ultraviolet light” to best showcase the plasma in the sun’s atmosphere, also known as the corona. The corona reaches 600,000 Kelvin. In case you have forgotten sophomore chemistry, 273.16 K is equivalent to 32.02 degrees F. To figure out the Fahrenheit equivalent of kelvins, you multiply the kelvin value by 9/5 and then subtract 459.67. The temperature of the corona is therefore 1,079,540.33 degrees F. Check out the video, which Gizmodo describes as “our sun like you have never seen it before:” This video takes SDO images and applies additional processing to enhance the structures visible. While there is no scientific value to this processing, it does result in a beautiful, new way of looking at the sun. The original frames are in the 171 Angstrom wavelength of extreme ultraviolet. This wavelength shows plasma in the solar atmosphere, called the corona, that is around 600,000 Kelvin. The loops represent plasma held in place by magnetic fields. They are concentrated in “active regions” where the magnetic fields are the strongest. These active regions usually appear in visible light as sunspots. The events in this video represent 24 hours of activity on September 25, 2011. This video is public domain and can be downloaded at: http://svs.gsfc.nasa.gov/vis/a010000/a010900/a010990/index.html According to NASA’s description, “there is no scientific value” to the image processing that was used to enhance the structures in this video, but “it does result in a beautiful, new way of looking at the sun.” In the video, the loops you see are plasma “held in place by magnetic fields,” according to NASA. In visible light, the active regions where these loops concentrate appear as sunspots.
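The conversion described above is a one-liner; 459.67 is the standard offset in the kelvin-to-Fahrenheit formula:

```python
def kelvin_to_fahrenheit(k: float) -> float:
    """Convert kelvins to degrees Fahrenheit: multiply by 9/5, then subtract 459.67."""
    return k * 9 / 5 - 459.67

print(kelvin_to_fahrenheit(600_000))  # the corona: about 1,079,540.33 F
print(kelvin_to_fahrenheit(273.16))   # triple point of water: about 32.02 F
```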
<urn:uuid:b06fb6b2-dfcb-44a6-b005-3e79f71325b7>
3.828125
442
Personal Blog
Science & Tech.
59.303231
Research - Pelagic Plastic Plastic in the ocean may be one of the most alarming of today's environmental stories. Plastic, like diamonds, is forever! Because plastics do NOT biodegrade, no naturally occurring organisms can break these polymers down. Instead, plastic goes through a process called photodegradation, in which sunlight breaks plastic down into smaller and smaller pieces until there is only plastic dust. But the plastic always remains a polymer. When plastic debris reaches the sea it can remain for centuries, causing untold havoc in ecosystems. Studies indicate less than 5% of plastic ever gets recycled, while each American is said to contribute some 65 lbs. of plastic to landfills each year. The ocean is especially susceptible to plastic pollution. It takes longer for the sun to break apart plastic in the ocean than on land because of the ocean's cooling capacity. Most plastic floats near the sea surface, where some is mistaken for food by birds and fishes. Plastics are carried by currents and can circulate continually in the open sea. Broken, degraded plastic pieces outweigh surface zooplankton in the central North Pacific by a factor of 6 to 1. That means six pounds of plastic for every single pound of zooplankton. Storms flush plastics downstream and ultimately into the ocean. Plastic debris looks bad, but it behaves worse. Far worse! Plastic pollution negatively affects trillions upon trillions of ocean inhabitants and ultimately humans. "Synthetic Sea" shows how many marine birds and fishes ingest plastic because it mimics the food they eat. The program reveals scientific research indicating how plastic pieces can attract and hold hydrophobic compounds like PCBs and DDT at up to one million times background levels. As a result, floating plastic is like a poison pill. New research on endocrine disrupters in floating plastic debris is therefore being planned by the Algalita Marine Research Foundation.
"Synthetic Sea" is a documentary based on scientific findings backed by published scientific papers.
<urn:uuid:29960791-e1cf-4bda-88b3-8a515940b965>
3.890625
405
Knowledge Article
Science & Tech.
42.145655
Uric acid
Uric acid is an organic compound of carbon, nitrogen, oxygen and hydrogen, with the formula C5H4N4O3. Its conjugate base is the urate anion. It is a minor end-product of nitrogen metabolism in the human body (the main product being urea), and is found in small amounts in urine. In some other animals, such as birds and reptiles, it is the main end-product, and is excreted in faeces. The high nitrogen content of uric acid is why guano is so valuable as a fertiliser in agriculture. The disease gout in humans is associated with abnormal levels of uric acid in the system. Saturation of uric acid in the human blood stream may result in one form of kidney stones, when the acid crystallizes into a solid inside the kidney. A percentage of gout patients eventually get uric acid kidney stones. see also: xanthine oxidase
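As a quick check on the formula C5H4N4O3, the molar mass can be computed from standard atomic weights (the values below are the usual rounded figures):

```python
# Sketch: molar mass of uric acid (C5H4N4O3) from standard atomic weights
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}  # g/mol
FORMULA = {"C": 5, "H": 4, "N": 4, "O": 3}

molar_mass = sum(ATOMIC_WEIGHTS[el] * n for el, n in FORMULA.items())
print(f"{molar_mass:.2f} g/mol")  # about 168.11 g/mol
```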
<urn:uuid:0056f65c-5416-4765-b249-57209fe8b9a8>
3.265625
206
Knowledge Article
Science & Tech.
34.005006
The animal itself was segmented and soft shelled. The head carried an array of five apparently functional eyes and a long flexible snout that appears to be in no way homologous to the appendages on the head of any other lifeform of the time. The body segments each featured a set of gills and a pair of flap-like appendages dissimilar to those of other animals of the time. The rearmost three flaps formed the tail. There were spines associated with the snout and tail. Unlike known arthropods, the head appears not to be formed from fused segments. The animal was covered with what appears to be a soft, flexible shell with no joints between the segments. Opabinia has no known relatives except possibly Anomalocaris. Although Opabinia is a relatively minor constituent of the early faunas, it has historical significance because it was one of the first truly unusual animals to be completely studied and described when redescription of the Burgess Shale faunas was undertaken in the 1970s. Harry Whittington showed quite convincingly in 1975 that the animal, previously thought to be an arthropod, was not an arthropod and, moreover, that it was unlikely that it belonged to any other known phylum. Taken with two other unexpectedly unique arthropods, Marrella and Yohoia, that had previously been described, Opabinia demonstrated that the soft-bodied Burgess faunas were much more complex and diverse than anyone had suspected.
<urn:uuid:1ccc2bda-e71c-4250-b303-b68995973163>
3.8125
298
Knowledge Article
Science & Tech.
34.120105
Once river walleyes spawn, they typically disperse from spawning sites to current break locations to rest and recuperate. If the flow is high, they often seek refuge in flooded wood and slough backwaters, where they may remain until water levels subside.
‘EYE GOTTA SPAWN
In spring, river walleyes seek out classic spawning locations — hard bottom, rock-rubble, or riprap areas swept by current; mussel beds mixed with gravel; or small tributary creeks with gravel washout bottoms. During years of high flow, walleyes may spawn on vegetation, like reed canary grass. They typically start spawning when the water reaches 40°F to 45°F. Males generally arrive at spawning sites about a week or two before females and remain about two weeks longer. Females often hold a short distance away from spawning sites, in slower current, until their eggs ripen. Once water temperatures reach the proper level, spawning begins and is usually staggered over a 14- to 20-day period. “Every spring, walleyes come out of their wintering habitat and move into warmer backwater locations,” says John Pitlo, an Iowa Natural Resources biologist who has spent many hours studying the prespawn and postspawn movements of river walleyes. “Mud bottoms in backwater lakes and shallow sloughs absorb the sun, which can raise water temperatures 2 to 6 degrees warmer than the main channel. “Female walleyes seek out warmer water during the prespawn period to help their eggs mature,” Pitlo says, “often staying in warmer backwaters until their eggs are ripe, before moving to the spawning ground where males await their arrival. Females generally don’t spend much time at the spawning site. In fact, a female may be on the spawning bed for only about half a day, long enough to dump her eggs, and then she’s out of there,” Pitlo explains. “She may return to the same backwater spots she came from if the water level remains the same.
“During the postspawn period, we’ve observed interesting walleye behavior during studies where we radio tagged fish to track their movements. The fish are scattered during this period and may set up in the same spot for several weeks, say behind a tree or in a small clearing in the trees. “When I started these studies, we returned to the same spots day after day and found radio-tagged walleyes in the exact same spots. Just to see if they were dead or alive, we positioned right over them and banged on the side of the boat. Still no signs of life. We finally lowered a paddle into the water to touch them,” Pitlo explains, “and sure enough, they moved.”
<urn:uuid:2ca01423-f712-4300-87cb-2e9029288f41>
2.859375
604
Personal Blog
Science & Tech.
49.958544
The slide show below shows a geodesic dome. Details of how to construct this dome can be found in the book Spherical Models by Magnus Wenninger. It is basically a (spherical) icosahedron with each face split into 16 triangles (a mixture of scalene, isosceles and equilateral), resulting in 320 triangles. There are 12 pentagonal vertices (in light purple); the rest are hexagonal - see below for the maths behind this! The pentagons are regular but the hexagons are not (otherwise you would just have the Buckyball). It took around 20 hours to construct...
Why 12 pentagons?
Suppose there are P pentagons and H hexagons, so the number of faces F = P + H. There are 5 edges on each P and 6 on each H, with each edge shared by two shapes, so the total number of edges E = ( 5*P + 6*H ) / 2. There are 5 vertices on each P and 6 on each H, with each vertex shared by three shapes, so the total number of vertices V = ( 5*P + 6*H ) / 3. Now, Euler's formula states V + F - E = 2, so if we substitute the expressions above, the H terms cancel and we are left with P/6 = 2, i.e. P = 12. So there are always 12 pentagons!
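The counting argument can be checked numerically. A minimal sketch (the function name is mine) that plugs the face, edge and vertex formulas into Euler's formula for several hexagon counts:

```python
# Verify that Euler's formula forces exactly 12 pentagons in any
# pentagon/hexagon mesh with three faces meeting at each vertex.
def euler_characteristic(p: int, h: int) -> float:
    faces = p + h
    edges = (5 * p + 6 * h) / 2     # each edge shared by two faces
    vertices = (5 * p + 6 * h) / 3  # each vertex shared by three faces
    return vertices + faces - edges

for h in (0, 20, 110):  # the number of hexagons does not matter
    assert euler_characteristic(12, h) == 2.0
print("Euler's formula holds with P = 12, for any H")
```

Note that h = 0, p = 12 is the dodecahedron and h = 20, p = 12 is the Buckyball; both satisfy V + F - E = 2.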
<urn:uuid:a501375c-11c3-4785-b824-8bc557b001a0>
3.796875
284
Tutorial
Science & Tech.
72.028106
Since the beginning of August, NASA's Mars Curiosity rover has been exploring the Red Planet and learning as much as it can about the terrain. NASA's Mars Curiosity rover has found evidence of an old streambed on the Red Planet. NASA's Lunar Reconnaissance Orbiter (LRO) has acquired stereo images of the moon in high resolution (0.5 to 2 meters/pixel) that provide highly detailed 3D views of the surface. Astronomers have pieced together the deepest-ever view of the universe, peering back more than thirteen billion years. The robotic arm of NASA’s Curiosity rover recently made contact with a Martian rock for the first time - analyzing the chemical elements inside of a rock dubbed "Jake Matijevic" by scientists. Our Milky Way Galaxy is surrounded by an enormous halo of hot gas hundreds of thousands of light years across, and with as much mass as all the stars in the galaxy. NASA scientists and engineers are eyeing the construction of a manned outpost that would hover in orbit above the far side of the moon. NASA’s Hubble Space Telescope has captured a sharp image of NGC 4634, a spiral galaxy seen exactly side-on. Recent images snapped by NASA's Dawn spacecraft confirm that volatile (easily evaporated) materials have colored Vesta's surface in a broad swath around its equator. NASA's Mars rover Curiosity is taking a break from its drive to Glenelg to examine a football-size rock with its robotic arm. NASA scientists are zapping organics with lasers in an attempt to discover how life arose on Earth. A Starship Enterprise-style warp drive could be a real possibility, according to a non-profit group of scientists and engineers. After four months aboard the International Space Station, three astronauts touched down safely in Kazakhstan early yesterday morning. Astronomers have, for the first time, spotted planets orbiting sun-like stars in a crowded cluster - the best evidence yet that planets can form in dense stellar environments. 
NASA's stalwart Opportunity rover has captured an image of the Martian surface that is puzzling scientists. NASA’s Hubble Space Telescope has captured an image of galaxy NGC 7090. The galaxy is viewed edge-on from the Earth, meaning astronomers are unable to easily see the spiral arms, which are full of young, hot stars. NASA's three stalwart mobile launcher platforms (MLP) are currently being revamped to better accommodate the space agency's next-generation launch vehicles. NASA scientists will next week test whether a heat shield made from the soil of the moon, Mars or an asteroid could stand up to a plunge through Earth's atmosphere. NASA’s Mars Curiosity rover has completed the delicate calibration of a robotic arm as it awaits further instructions in Gale Crater near the Martian equator. NASA's Mars Reconnaissance Orbiter (MRO) has returned the first definitive evidence of carbon dioxide snowfalls on Mars - the only place this is known to happen anywhere in our solar system.
<urn:uuid:1394485b-47ac-4c9f-9b07-16ac4aeb21a8>
3.265625
621
Content Listing
Science & Tech.
39.343436
Did you know that sugar contains some of the essential ingredients for life? That doesn’t mean that eating lots of sweets is healthy. However, sugar is made up of a simple recipe of chemicals called carbon, hydrogen and oxygen. And almost all life on Earth is made up of these three chemicals, plus another one, which is called nitrogen.
Image credit: ALMA (ESO/NAOJ/NRAO)/L. Calçada (ESO) & NASA/JPL-Caltech/WISE Team
<urn:uuid:dc7a9601-121b-48ca-913d-5a9ac9d8870a>
2.8125
106
Knowledge Article
Science & Tech.
58.269
What is the importance of chemical bonds in molecules?
Chemical bonds are what keep the atoms in a molecule together; without them you wouldn't exist, and neither would the tables, the chairs, or anything else visible. No compounds would exist at all — it is only because of the way atoms are held together that compounds exist. Given their current structure, it is impossible for atoms not to form bonds. Hope this helps.
<urn:uuid:027d7ef1-38a3-4436-8c37-e73802b197f6>
2.984375
510
Q&A Forum
Science & Tech.
43.151953
A parasitic plant is one that derives some or all of its sustenance from another plant. About 4,100 species in approximately 19 families of flowering plants are known. Parasitic plants have a modified root, the haustorium, that penetrates the host plant and connects to the xylem, phloem, or both. Parasitic plants are characterized as follows: - 1a. Obligate parasite – a parasite that cannot complete its life cycle without a host. - 1b. Facultative parasite – a parasite that can complete its life cycle independent of a host. - 2a. Stem parasite – a parasite that attaches to the host stem. - 2b. Root parasite – a parasite that attaches to the host root. - 3a. Holoparasite – a plant that is completely parasitic on other plants and has virtually no chlorophyll. - 3b. Hemiparasite – a plant that is parasitic under natural conditions and is also photosynthetic to some degree. Hemiparasites may obtain only water and mineral nutrients from the host plant; many obtain at least part of their organic nutrients from the host as well. For hemiparasites, one term from each of the three sets can be applied to the same species, e.g. - Nuytsia floribunda (Western Australian Christmas tree) is an obligate root hemiparasite. - Rhinanthus (e.g. Yellow rattle) is a facultative root hemiparasite. - Mistletoe is an obligate stem hemiparasite. Holoparasites are always obligate, so only two terms are needed, e.g. - Dodder is a stem holoparasite. - Hydnora spp. are root holoparasites. Plants usually considered holoparasites include broomrape, dodder, Rafflesia, and Hydnoraceae. Plants usually considered hemiparasites include Castilleja, mistletoe, Western Australian Christmas tree and yellow rattle.
Seed germination
Seed germination of parasitic plants occurs in a variety of ways. 
These means can be either chemical or mechanical, and the means used by seeds often depends on whether the parasites are root parasites or stem parasites. Both root and stem parasitic plants have evolved to use one or more means of finding their hosts in order to germinate. Most parasitic plants need to germinate in close proximity to their host plants because their seeds are limited in the amount of resources necessary to survive without nutrients from their host plants. Resources are limited due in part to the fact that most parasitic plants are not able to use autotrophic nutrition to establish the early stages of seeding. Root parasitic plant seeds tend to use chemical cues for germination. In order for germination to occur, seeds need to be fairly close to their host plant. For example, the seeds of the parasitic plant Witchweed (Striga asiatica) need to be within 3 to 4 millimeters (mm) of their host in order to pick up chemical signals in the soil that trigger germination. This range is important because Striga asiatica will only grow about 4 mm after germination. The chemical cues sensed by parasitic plant seeds come from host plant root exudates that leach from the host's root system into the surrounding soil. These cues are a variety of compounds that are unstable and rapidly degraded in soil, and are present within a radius of a few meters of the plant exuding them. If close enough, parasitic plants germinate and follow a concentration gradient of these compounds in the soil toward the host plants. These compounds are called strigolactones. Strigolactone stimulates ethylene biosynthesis in seeds, causing them to germinate. There are a variety of chemical germination stimulants. Strigol was the first of the germination stimulants to be isolated. It was isolated from a non-host cotton plant and has been found in true host plants such as corn and millets. 
The stimulants are usually plant specific; examples of other germination stimulants include sorgolactone from sorghum, orobanchol and alectrol from red clover, and 5-deoxystrigol from Lotus japonicus. Strigolactones are apocarotenoids that are produced via the carotenoid pathway of plants. Strigolactones also cue the growth of mycorrhizal fungi. Stem parasitic plants, unlike most root parasites, germinate using the resources inside their endosperm and are able to survive without a host for a short time. For example, Dodder (Cuscuta spp.) is a parasitic plant whose seed falls to the ground and may remain dormant for up to five years before it senses a host plant nearby. Using the resources in the seed endosperm, Dodder is able to germinate. Once germinated, the plant has 6 days to find and establish a connection with its host plant before its resources run out. Dodder seeds germinate above ground, and the plant then sends out stems in search of its host plant, reaching up to 6 cm before it dies. It is believed that the plant uses two methods of finding a host. The stem is able to pick up its host plant’s scent and orient itself in the direction of its host. Scientists used volatiles from tomato plants (α-pinene, β-myrcene, and β-phellandrene) to test the reaction of C. pentagona and found that the stem will orient itself in the direction of the odor. Some studies suggest that, by using light reflected from nearby plants, dodders are able to select hosts with higher sugar content based on the levels of chlorophyll in the leaves. Once Dodder finds its host, it wraps itself around the host plant’s stem. Using adventitious roots, Dodder taps into the host plant’s stem and creates a haustorium, a specialized connection into the host plant’s vascular tissue. Dodder makes several of these connections with the host as it moves up the plant. 
Host range
Some parasitic plants are generalists and parasitize many different species, even several different species at once. Dodder (Cassytha spp., Cuscuta spp.) and red rattle (Odontites verna) are generalist parasites. Other parasitic plants are specialists that parasitize a few or even just one species. Beech drops (Epifagus virginiana) is a root holoparasite only on American beech (Fagus grandifolia). Rafflesia is a holoparasite on the vine Tetrastigma. - Witchweed, broomrape and dodder cause huge economic losses in a variety of herbaceous crops. Mistletoes cause economic damage to forest and ornamental trees. - Rafflesia arnoldii produces the world's largest flowers, about one meter in diameter. It is a tourist attraction in its native habitat. - Indian paintbrush (Castilleja linariaefolia) is the state flower of Wyoming. - The Oak Mistletoe (Phoradendron serotinum) is the floral emblem of Oklahoma. - A few other parasitic plants are occasionally cultivated for their attractive flowers, such as Nuytsia and broomrape. - Parasitic plants are important in research, especially on the loss of photosynthesis during evolution. - A few dozen parasitic plants have occasionally been used as food by people. - Western Australian Christmas tree (Nuytsia floribunda) sometimes damages underground cables. It mistakes the cables for host roots and tries to parasitize them using its sclerenchymatic guillotine.
Plants parasitic on fungi
About 400 species of flowering plants, and one gymnosperm (Parasitaxus usta), are parasitic on mycorrhizal fungi. They are termed myco-heterotrophs rather than parasitic plants. Some myco-heterotrophs are Indian pipe (Monotropa uniflora), snow plant (Sarcodes sanguinea), underground orchid (Rhizanthella gardneri), bird's nest orchid (Neottia nidus-avis) and sugarstick (Allotropa virgata). - Nickrent, D. L. and Musselman, L. J. 2004. Introduction to Parasitic Flowering Plants. The Plant Health Instructor. 
doi:10.1094/PHI-I-2004-0330-01 - Scott, P. 2008. Physiology and Behavior of Plants: Parasitic Plants. John Wiley & Sons. pp. 103-112. - Runyon, J., Tooker, J., Mescher, M., De Moraes, C. 2009. Parasitic plants in agriculture: Chemical ecology of germination and host-plant location as targets for sustainable control: A review. Sustainable Agriculture Reviews 1. pp. 123-136. - Schneeweiss, G. 2007. Correlated evolution of life history and host range in the nonphotosynthetic parasitic flowering plants Orobanche and Phelipanche (Orobanchaceae). Journal of Evolutionary Biology 20: 471-478. - Lesica, P. 2010. Dodder: Hardly Doddering. Kelseya, Newsletter of the Montana Native Plant Society. Vol. 23: 2, 6. - Parasitic Angiosperms Used for Food? - Sclerenchymatic guillotine in the haustorium of Nuytsia floribunda Further reading - Digital Atlas of Cuscuta (Convolvulaceae) - The Parasitic Plant Connection - The Strange and Wonderful Myco-heterotrophs - Parasitic Flowering Plants - The Mistletoe Center - Parasitic Plants Biology Study Guide - Nickrent, Daniel L. 2002. Parasitic Plants of the World. - Calladine, Ainsley and Pate, John S. 2000. Haustorial structure and functioning of the root hemiparasitic tree Nuytsia floribunda (Labill.) R.Br. and water relationships with its hosts. Annals of Botany 85: 723-731. - Milius, Susan. 2000. Botany under the mistletoe: Twisters, spitters, and other flowery thoughts for romantic moments. Science News 158: 411. - Hibberd, Julian M. and Jeschke, W. Dieter. 2001. Solute flux into parasitic plants. Journal of Experimental Botany 52: 2043-2049.
<urn:uuid:09ff90ae-1259-465b-9878-7ce3d0f189d1>
3.78125
2,327
Knowledge Article
Science & Tech.
40.626023
SOIL REFLECTANCE DATA (FIFE)
Entry ID: FIFE_SOILREFL
Abstract: Soil reflectance properties are an important factor in determining landscape reflectance characteristics. No soil reflectance data were collected as part of the FIFE experiment. Therefore, the FIS staff chose spectra from soils similar to those in the FIFE study area from the atlas of soil reflectance properties (Stoner et al., 1980), which represents a wide range of soil types. The selection of spectra was based on soil particle size, organic carbon content, taxonomic classification, and geography of soils found in the FIFE study area. All measurements were made on uniformly moist, sieved soils, which were equilibrated for 24 hours at a one-tenth bar moisture tension. Soil reflectance was measured using an Exotech Model 20C spectroradiometer adapted for indoor use with a reflectometer equipped with an artificial illumination source, transfer optics, and sample stage. Spectral readings were taken in 0.01 micrometer increments over the 0.52 to 2.32 micrometer wavelength range.
Data Set Citation
Dataset Originator/Creator: FIFE STAFF SCIENCE; HUEMMRICH, K.F.
Dataset Title: SOIL REFLECTANCE DATA (FIFE)
Dataset Release Date: 1994
Dataset Release Place: Oak Ridge, Tennessee, U.S.A.
Dataset Publisher: Oak Ridge National Laboratory Distributed Active Archive Center
Data Presentation Form: Online Files
Dataset DOI: doi:10.3334/ORNLDAAC/114
Online Resource: http://mercury.ornl.gov/ornldaac/send/query?term2=114&term2attribut...
Start Date: 1989-10-31
Stop Date: 1989-10-31
ISO Topic Category
Data Set Progress
Extended Metadata Properties
Creation and Review Dates
DIF Creation Date: 1994-07-24
Last DIF Revision Date: 2011-05-23
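The stated sampling (0.52 to 2.32 micrometers in 0.01 micrometer increments) implies 181 spectral readings per soil sample. A small sketch reconstructing that wavelength grid:

```python
# Wavelength grid implied by the abstract: 0.52-2.32 micrometers, 0.01 steps
start_um, stop_um, step_um = 0.52, 2.32, 0.01

n_bands = round((stop_um - start_um) / step_um) + 1
wavelengths = [round(start_um + i * step_um, 2) for i in range(n_bands)]

print(n_bands)                          # 181 readings per sample
print(wavelengths[0], wavelengths[-1])  # 0.52 2.32
```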
<urn:uuid:b46b60ca-0e78-4f04-b264-fe573a9b2b7c>
2.796875
476
Structured Data
Science & Tech.
39.469971
Climate science is solid because it is based on the laws of physics, we hear sometimes, but perhaps this sentence subliminally conveys a level of certainty that is debatable. Even if the laws of physics are perfectly known, calculations based on these laws may be only approximate. This is the case for climate models. A simple comparison of the mean climate simulated by climate models with real data shows that the story is not that simple. Perhaps some of you will be a bit surprised to read that very few problems in physics can be solved exactly. By 'exactly' physicists mean that a closed formula that can be written on paper has been found that allows us to predict or describe the behavior of a physical system without approximation. Physics students get the very few interesting problems that are solvable exactly and spend quite a lot of time analysing them, because they form the template for the approximate solutions to more complex problems. For instance, the classical harmonic oscillator is a simple problem that we all learn at school and can be applied to calculate approximately the frequency of oscillation of a pendulum, which is a much more complex problem. Similarly, the movement of two point bodies that attract each other gravitationally is an easy problem - solved by Newton a few centuries ago - and can be used to describe approximately the movement of three point bodies. I wrote approximately, because an exact solution of this apparently simple problem is not known. It is not even known whether the solar system is stable or unstable. The list of exactly solvable problems is pretty short. To the two cases mentioned we may add two others from quantum physics: the harmonic oscillator and two charged particles under electrostatic attraction (e.g. the hydrogen atom). Describing the dynamics of the climate is a much more complex problem than all of these examples. It can only be described approximately, with the help of very complex numerical models. 
It can be argued that, nevertheless, the level of approximation is enough for our purposes, since the models are based on the same 'laws of physics'. A quick look at the results presented in the last IPCC report indicates that this is really not that simple. Let's have a look at the mean annual temperature for the present climate as simulated by the IPCC models: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-8-2.html Upper panel: observed mean near-surface annual temperature (contours) and the difference between observed temperature and the mean of all IPCC models (color shading); lower panel: typical error of an individual model. This figure, reproduced from the IPCC report, shows the typical error of one of the IPCC models (the difference between the simulated and observed temperature at each grid cell). We see differences of the order of 1 to 5 degrees, and the areas where the difference is 3 degrees or larger are really not negligible. For illustration, 3 degrees is the difference between the annual mean temperature in Madrid and Casablanca, or between Goteborg and Paris. Some of these differences arise because climate models cannot represent the topography of the surface well, due to their coarse resolution. This is probably the case for Greenland and the Himalayas. Other factors are the uncertainties in the observations (Antarctica). But nevertheless, I think that this is not a reassuring picture. We also have to take into account that climate modellers have certainly optimized the free parameters in their models - among them the uncertain present solar output, but also other internal parameters - to try to achieve the best fit to observations. And yet this fit is not that good. The situation for other variables, such as precipitation and wind, is no better. This is not surprising news, and nobody has tried to hide these discrepancies. They can be found in the IPCC report without difficulty. 
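To make "typical error" concrete, a grid-cell difference map like the one in the figure is usually summarized by its largest bias and its root mean square. A toy sketch with made-up numbers (not actual IPCC data):

```python
import math

# Hypothetical annual-mean temperatures (deg C) at six grid cells
observed  = [14.2, 15.0, 13.1,  9.8, 2.5, 15.5]
simulated = [15.1, 14.2, 14.0, 12.4, 0.1, 15.3]

# Bias at each grid cell, then the root-mean-square error over the grid
biases = [s - o for s, o in zip(simulated, observed)]
rmse = math.sqrt(sum(b * b for b in biases) / len(biases))
print(f"largest bias: {max(biases, key=abs):.1f} C, RMSE: {rmse:.2f} C")
```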
These errors do not necessarily mean that climate models are bad at simulating climate changes from the present state. It could well be that they are skillful in representing the reactions of the climate to perturbations of the external driving factors, such as CO2. But if they are based on the very same well-known 'laws of physics', why is it not possible to simulate the present Earth climate with the accuracy that we require? Some of these errors are as large as the projected temperature changes in the future. Are we missing something fundamental? A quote from the IPCC AR4 Report: 'The extent to which these systematic model errors affect a model's response to external perturbations is unknown, but may be significant.'
WHO IS considered to be the father of algebra? While the word algebra comes from the Arabic language (al-jabr, "restoration") and many of its methods from Arabic/Islamic mathematics, its roots can be traced to earlier traditions, which had a direct influence on Muhammad ibn Musa al-Khwarizmi (c. 780-850). He wrote The Compendious Book on Calculation by Completion and Balancing, which established algebra as a mathematical discipline independent of geometry and arithmetic. The roots of algebra can be traced to the ancient Babylonians, who developed an advanced arithmetical system. Consequently al-Khwarizmi is considered to be the father of algebra, a title he shares with Diophantus. Latin translations of his Arithmetic, on the Indian numerals, introduced the decimal positional number system to the Western world in the 12th century. He revised and updated Ptolemy's Geography, as well as writing several works on astronomy and astrology. His contributions made a great impact not only on mathematics, but on language as well. The word algebra is derived from al-jabr, one of the two operations he used to solve quadratic equations, as described in his book. For a complete intro: http://en.wikipedia.org/wiki/Al-Khwarizmi
Mission Type: Orbiter
Launch Vehicle: Atlas-Able (no. 3 / Atlas D no. 91)
Launch Site: Cape Canaveral, United States, launch complex 12
Spacecraft Mass: 176 kg
Spacecraft Instruments: 1) micrometeoroid detector; 2) high-energy radiation counter; 3) ionization chamber; 4) Geiger-Mueller tube; 5) low-energy radiation counter; 6) two magnetometers; 7) Sun scanner; 8) plasma probe; 9) scintillation spectrometer and 10) solid state detector
Total Cost: $9 - 10 million
Sources: Deep Space Chronicle: A Chronology of Deep Space and Planetary Probes 1958-2000, Monographs in Aerospace History No. 24, by Asif A. Siddiqi; National Space Science Data Center, http://nssdc.gsfc.nasa.gov/

The mission of Able VB, as with its two unsuccessful predecessors, was to enter lunar orbit. Scientific objectives included studying radiation near the Moon, recording the incidence of micrometeoroids, and detecting a lunar magnetic field. Planned lunar orbital parameters were 4,300 x 2,400 kilometers, with a period of 9 to 10 hours. The spacecraft had a slightly different scientific instrument complement from that of its predecessors. This was the third and last attempt by NASA to launch a probe to orbit the Moon in the 1959-60 period. Unfortunately, the Atlas-Able booster exploded 68 seconds after launch, at an altitude of about 12.2 kilometers. Later investigation indicated that the cause was premature Able-stage ignition while the first stage was still firing.
The ice birds
SUMMIT COUNTY - You had to know it was coming sometime. I'm actually surprised I was able to resist posting a penguin photoblog for as long as I have. They're so cute and photogenic, as well as incredibly tame, so you don't even have to have any special wildlife photography skills to photograph them. But I have an ulterior motive, or two of them, actually. I'm hoping these birds are cute enough to drive a record number of page views to a single post - call it a social media experiment. And, I'm concerned about penguins. They are one of the species that just don't really have anywhere to go as the Earth heats up. Some other species of plants and animals may be able to adapt, or find new niches with suitable habitat, but some ice-dependent species may be doomed. In fact, the simple and prolific food chain of the entire Antarctic region is under the global warming gun. In the last half century, winter temperatures on the Antarctic Peninsula - the skinny spit of land sticking up toward South America - have climbed five times faster than the global average. Subarctic conditions around the peninsula have given way to a moist maritime climate, with impacts on the Antarctic birds and mammals that all depend on krill for sustenance. Krill are tiny shrimp-like crustaceans found in great abundance in Antarctic waters. Krill feed on tiny free-floating plants called phytoplankton. In turn, krill are eaten in mass quantities by whales, sea birds, seals and penguins. But changing weather patterns linked to global warming are altering the system. Travelers to Antarctica can witness the changes first-hand, since many of the tourist voyages to the region explore the Peninsula and nearby islands. Some recent studies have shown that more cloudiness and less sea ice near the northern end of the peninsula combined to slow plankton growth. Farther south, sunnier skies and more sea ice spurred greater plankton growth.
Ice-loving Adélie penguins are following the phytoplankton, and the krill that feed on them. In the process, sub-Antarctic species, including Chinstrap and Gentoo penguins, are replacing the Adélies in their former range. Though the Chinstraps and Gentoos are faring better than the Adélies, they, too, are pushing south in pursuit of food. Satellite imagery helps reveal changes in ocean color, temperature, sea ice distribution and wind. Many researchers, including a team from the University of Hawaii, are collecting data from the sea to help pinpoint the changes.
Shepherd's beaked whale (Tasman whale)
Unlike most beaked whales, Shepherd's beaked whales feed on fish, not squid. Body length: 6-7 m. Weight: 2-3 tonnes. Shepherd's beaked whales have a dark brown/black upper surface, fins, flippers and flukes, and a creamy underside. They have a light patch on the top of the head, diagonal stripes on the sides, and a steep, round forehead with a long, narrow beak. They live in the seas around New Zealand and South America, probably in deep water. The behaviour of Shepherd's beaked whale is unknown, and the species is classified as Data Deficient by the 2000 Red List. Shepherd's beaked whales are among the least-known cetaceans: they are known from only about 20 strandings and a few sightings.
Not every crazy idea - say, dropping out of Harvard to start a software firm - is a bad one. But you don't have to be Bill Gates to place your bets that way. Consider atmospheric geoengineering - pumping reflective particles into the stratosphere to reflect sunlight - seen as a way to cut the effects of global warming. In 1991, the eruption of Mt. Pinatubo in the Philippines cooled the atmosphere's average temperature worldwide by almost one degree Fahrenheit, a kind of "global dimming" that served as an inspiration for the idea. Such high-altitude aerosols, different from the ones found in spray cans, can play a big role in climate. A 2006 paper in the journal Science, for example, written by the eminent atmospheric scientist Tom Wigley of the National Center for Atmospheric Research, suggested that annually blasting roughly 500,000 tons of sulfur (about 7% of yearly sulfur production) into the stratosphere for three decades would prevent global ...
Test or set particular bits in a string. Treats the string in EXPR as a vector of unsigned integers, and returns the value of the bit field specified by OFFSET. BITS specifies the number of bits that are reserved for each entry in the bit vector; this must be a power of two from 1 to 32. vec() may also be assigned to, in which case parentheses are needed to give the expression the correct precedence, as in

vec($image, $max_x * $x + $y, 8) = 3;

Vectors created with vec() can also be manipulated with the logical operators |, &, and ^, which will assume a bit vector operation is desired when both operands are strings. The following code will build up an ASCII string saying 'PerlPerlPerl'; the comments show the string after each step. Note that this code works in the same way on big-endian or little-endian machines.

my $foo = '';
vec($foo, 0, 32) = 0x5065726C; # 'Perl'
vec($foo, 2, 16) = 0x5065;     # 'PerlPe'
vec($foo, 3, 16) = 0x726C;     # 'PerlPerl'
vec($foo, 8, 8) = 0x50;        # 'PerlPerlP'
vec($foo, 9, 8) = 0x65;        # 'PerlPerlPe'
vec($foo, 20, 4) = 2;          # 'PerlPerlPe' . "\x02"
vec($foo, 21, 4) = 7;          # 'PerlPerlPer'   # 'r' is "\x72"
vec($foo, 45, 2) = 3;          # 'PerlPerlPer' . "\x0c"
vec($foo, 93, 1) = 1;          # 'PerlPerlPer' . "\x2c"
vec($foo, 94, 1) = 1;          # 'PerlPerlPerl'  # 'l' is "\x6c"

To transform a bit vector into a string or array of 0's and 1's, use these:

$bits = unpack("b*", $vector);
@bits = split(//, unpack("b*", $vector));

If you know the exact length in bits, it can be used in place of the
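The layout implied by the example above (sub-byte fields packed low-bits-first within each byte, wider fields stored big-endian across consecutive bytes) can be mirrored in another language to see why the result is endian-independent. Here is a rough Python analogue of vec() as an lvalue; vec_set and its buffer handling are my own illustrative names and code, not part of any standard library:

```python
def vec_set(buf: bytearray, offset: int, bits: int, value: int) -> None:
    """Rough analogue of Perl's  vec($str, $offset, $bits) = $value.
    bits must be a power of two from 1 to 32, as in Perl."""
    if bits >= 8:
        # Fields of one or more whole bytes are stored big-endian.
        nbytes = bits // 8
        start = offset * nbytes
        if len(buf) < start + nbytes:
            buf.extend(b"\0" * (start + nbytes - len(buf)))
        buf[start:start + nbytes] = value.to_bytes(nbytes, "big")
    else:
        # Sub-byte fields fill each byte from the low-order bits up.
        byte, shift = divmod(offset * bits, 8)
        if len(buf) <= byte:
            buf.extend(b"\0" * (byte + 1 - len(buf)))
        mask = ((1 << bits) - 1) << shift
        buf[byte] = (buf[byte] & ~mask) | ((value << shift) & mask)

# Replaying the Perl example's assignments:
foo = bytearray()
for offset, bits, value in [(0, 32, 0x5065726C), (2, 16, 0x5065),
                            (3, 16, 0x726C), (8, 8, 0x50), (9, 8, 0x65),
                            (20, 4, 2), (21, 4, 7), (45, 2, 3),
                            (93, 1, 1), (94, 1, 1)]:
    vec_set(foo, offset, bits, value)
print(bytes(foo))   # b'PerlPerlPerl'
```

Running the same sequence of assignments as the Perl example reproduces 'PerlPerlPerl', byte for byte.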
make-package package-name &key nicknames use => package

Arguments and Values:
package-name---a string designator.
nicknames---a list of string designators. The default is the empty list.
use---a list of package designators. The default is implementation-defined.

Description:
Creates a new package with the name package-name. Nicknames are additional names which may be used to refer to the new package. use specifies zero or more packages whose external symbols are to be inherited by the new package. See the function use-package.

Examples:
(make-package 'temporary :nicknames '("TEMP" "temp")) => #<PACKAGE "TEMPORARY">
(make-package "OWNER" :use '("temp")) => #<PACKAGE "OWNER">
(package-used-by-list 'temp) => (#<PACKAGE "OWNER">)
(package-use-list 'owner) => (#<PACKAGE "TEMPORARY">)

Side Effects: None.

Affected By: The existence of other packages in the system.

Exceptional Situations: The consequences are unspecified if packages denoted by use do not exist. A correctable error is signaled if the package-name or any of the nicknames is already the name or nickname of an existing package.

Notes: In situations where the packages to be used contain symbols which would conflict, it is necessary first to create the package with :use '(), then to use shadow or shadowing-import to address the conflicts, and then to call use-package once the conflicts have been addressed. When packages are being created as part of the static definition of a program rather than dynamically by the program, it is generally considered more stylistically appropriate to use defpackage rather than make-package.
At the NESL prompt you can type a NESL top-level expression, as defined by the language, or a top-level command, which is used to control or examine various aspects of the environment. The top-level commands are summarized in Figure 1 and most are described in Section 3.

Figure 1: Top-level commands (screendump obtained by typing ?).

A top-level expression takes one of several forms, where exp is any expression and pattern can either be a single variable name or a parenthesized pattern of variable names (the square brackets indicate that the typedef in a function definition is optional). A full syntax for each of these is given in Appendix A of the NESL language definition. Some examples of top-level expressions include:

function double(a) = 2*a;
function add3(a,b,c) = a + b + c;
datatype complex(float,float);
foo = double(3) + add3(1,2,3);
foo;

Expressions that are not assigned to a user-defined variable are assigned to the variable it. If you hit Return before an expression is completed, either for readability or by mistake, a ``>'' is printed at the beginning of each new line until the expression is completed. For example:

<Nesl> 2
> +
> 3;
Compiling..Writing..Loading..Running..
Exiting..Reading..
it = 5 : int

If you get lost, instead of hitting Ctrl-C, try typing a few semicolons to end the expression. For an example NESL session showing many features of the language, see Appendix A.
Senses on Erebus
by Noel Wanner and Paul Doherty

Watch an infant explore a strange object: they'll look at it intently, touch it, shake it, put it in their mouths - almost anything that will help them determine what this new thing might be. Scientists, in their quest to understand the world, are not so different from an infant in their curiosity; it's the tools which are different - tools which scientists use to extend their senses out into ranges far beyond the human, to levels of precision far beyond sense impressions like "this tastes..." During a two-day blizzard on Mount Erebus, with temperatures near -35 C (-30 F) and winds over 45 mph (72 kph), creating a wind chill of -90 F, we had plenty of time to meet the scientists on Erebus. They were using instruments to extend all of their senses to explore the geology of Erebus, to try to understand the "life" of this volcano. Rich Esser is the webmaster of the MEVO, Mount Erebus Volcano Observatory, website. He works with scientist Bill McIntosh to maintain the live webcamera on the rim of Erebus. The video images are used by scientists to find the exact time of volcanic eruptions. The time is used together with seismic data to probe the structure of the rocks beneath the volcano. Jeff Johnson is a geologist and mountain climber; he installed a wide-frequency-range microphone on the summit cone of Erebus. This microphone is sensitive in the subsonic frequency range. These frequencies of sound are emitted during volcanic eruptions; they do not get absorbed by the air, and provide another means of timing the eruptions. In the past, scientists have done chemical analysis of the rocks on Erebus. Crystals grow in the magma beneath Erebus and get spit out of the mountain inside glassy volcanic bombs. The glass quickly weathers away, leaving the mountainside covered in crystals.
Analysis shows that the crystals covering the mountain are feldspar crystals rich in sodium and potassium; similar crystals are also found on Mt. Kenya. These crystals are coveted by almost everyone at McMurdo Station. Several scientists were, in effect, smelling Erebus. Graduate student Jessie Crain set up her "bionic nose," pumping air through filters to collect radioactive particles and gases. These radioactive materials are sent to Paris, where they will be placed into particle counters. The counters will reveal the radioactive isotopes in the plume of Erebus, and thus tell how long the magma has resided in the magma chambers beneath the volcano. Graduate student Tina Calvin plans to fly in a helicopter through the plume of Erebus to suck up volcanic gases and then measure the carbon dioxide concentration in the plume. Carbon dioxide is one of the crucial "greenhouse gases" which help to determine the Earth's temperature, so understanding the role volcanoes play in releasing CO2 is important. Bill McIntosh is installing new broadband seismometers which radio their seismic information down to McMurdo Station to be recorded and forwarded to New Mexico. These seismometers feel the wavelike vibrations of the surface of the mountain generated by earthquakes and eruptions. These "seismic waves" show scientists the internal structure of the volcano. Bill also got the new wind generators on the mountain to work, to provide power to the instruments during the long polar night. Where am I? Inside your inner ear you have sensors which detect your orientation in space. Undergraduate Emily Desmarais uses a tool called GPS to perform a similar function, determining Mt. Erebus' changing position. (GPS stands for Global Positioning System.) Desmarais has installed GPS recorders around the mountain. These recorders monitor the position of the outer surface of the volcano to within a few centimeters (a few fingerwidths).
When the magma chamber beneath the mountain fills or empties, the surface of the mountain responds, expanding and contracting, just like your chest bulges out when you inhale. In the body, all of the sensory signals feed into the brain to be processed. On Erebus, Professor Phil Kyle of New Mexico Tech is the principal investigator. He advises on the science being done by his crew, and then helps them put together the observations into a picture of the inner workings of Mt Erebus. Every year the picture gets a little bit better, but it is never perfect - so the scientists keep observing, hoping to learn more.
Carl Friedrich Gauss
His formal name is Johann Carl Friedrich Gauss, although "Johann" is normally left out. However, to the mathematicians of the world, he is simply Gauss.

Date of Birth: 4/30/1777
Date of Death: 2/23/1855
Birthplace: Braunschweig, Holy Roman Empire

Gauss contributed so much to mathematics that he is sometimes referred to as the Prince of Mathematics. Keeping to the royal theme, Gauss studied many scientific fields, but he was particularly fond of math. He is quoted as referring to math as the "Queen of all the Sciences."

A Tale of Gauss as a Child
A famous story of Gauss comes from his days in elementary school. As the story goes, the teacher gave the students a list of numbers to add together as a means of "busy work." For example, the teacher asked the students to add all of the numbers from 1 to 100 together and get a final answer (how long would that take you?). To the amazement of his teacher and his classmates, Gauss was able to perform this in seconds by using the "trick" below.

1 + 100 = 101, 2 + 99 = 101, 3 + 98 = 101, and so on...

Because there are 50 "pairs" that add to 101, the final sum should be 50 x 101 = 5050. Today, many historians question the validity of this "folk tale," but no one questions Gauss's brilliance. Simply stated, Gauss's mathematical genius is undeniable. He contributed so much to mathematics, but his largest contributions came in the fields of statistics, analysis, and number theory. Return from Carl Friedrich Gauss to other Famous Math People.
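The pairing trick generalizes to any n: the sum 1 + 2 + ... + n is n/2 pairs each totalling n + 1, i.e. n(n + 1)/2. A quick Python sketch confirming the schoolroom case:

```python
def gauss_sum(n: int) -> int:
    """Sum 1..n via Gauss's pairing trick: n/2 pairs, each totalling n + 1."""
    return n * (n + 1) // 2

# Gauss's schoolroom case: 50 pairs of 101
print(gauss_sum(100))   # 5050
assert gauss_sum(100) == sum(range(1, 101))
```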
Editor's Note: This is the latest episode of Energy NOW!, a video program dedicated to energy and environmental issues. You can see the full video at the bottom of this post, and archived episodes are online at EnergyNow.com. First up this week: is it possible to burn coal without putting carbon dioxide into the atmosphere? Coal accounts for almost half the electricity generated in the U.S. and up to 80 percent in rapidly growing countries such as China and India. Scientists have warned that carbon dioxide from coal and other fossil fuels is heating up the planet and changing the Earth's climate. Correspondent Dan Goldstein takes a look at a new technology for washing out the carbon before it can get into the air. This week's energyTHEN takes us back to 1947, and a documentary that cast coal as the hero in America's post-war industrial boom. The film refers to coal as a "black treasure from the Earth" and portrays black smoke and fumes as signs of production and prosperity. Then, Special Correspondent Josh Zepps explores innovations for cleaning up the carbon dioxide that's already in the atmosphere. Scientists at Columbia University have developed a kind of "artificial leaf" that can remove carbon dioxide from the atmosphere faster than actual trees. The captured CO2 can be re-purposed for carbonated drinks, dry ice, even a replacement for gasoline. Can this new technology be deployed on a large enough scale to help the fight against climate change? Next up in "The Mix," Bruce Nilles, national coal campaign director for the Sierra Club, and Evan Tracey, a senior vice president at the American Coalition for Clean Coal Electricity, debate the future of coal. Nilles says his organization's "Beyond Coal" campaign is trying to close one-third of U.S. coal-fired power plants by 2020 and replace them with renewable energy such as wind turbines and solar panels. But Tracey says even with government subsidies, the cost of renewable power is still too high for U.S. consumers.
Nilles and Tracey also debate whether carbon capture and storage technology can cut back on coal's contribution to climate change. Finally, on the "Hot Zone," an unhappy anniversary of sorts, as Los Angeles marks the 68th anniversary of its first "big smog." We look at the first photos taken of the smog-shrouded city on July 26, 1943. It was the middle of World War II and some residents thought the air pollution was a chemical-weapons attack. They later realized the smog came from the smokestacks of industrial plants and tailpipes of cars.
FOR a few years now, astronomers have been quietly confident that the universe is 13.7 billion years old, give or take a hundred million years. They are about to learn that the size and age of the universe are not a done deal. Norbert Przybilla of the University of Erlangen-Nuremberg, Germany, and his colleagues used the 10-metre Keck-II telescope on Mauna Kea, Hawaii, and other telescopes to measure the distance to a so-called eclipsing binary star system in the Triangulum galaxy, also known as M33. The team measured light, velocity and temperature to find the true luminosity of the two stars, which eclipse one another on every orbit. Comparing this luminosity with their observed brightness gave a distance to the galaxy of 3.14 million light years - half a million light years further away than anyone thought (www.arxiv.org/astro-ph/0606279). "This is the farthest distance that anyone has been able ...
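The luminosity-to-distance step described above relies on the inverse-square law: observed flux F = L / (4πd²), so d = √(L / 4πF). A toy Python sketch of that relation (all numbers are made-up placeholders, not the team's data):

```python
import math

def distance_from_luminosity(luminosity_watts: float,
                             flux_w_per_m2: float) -> float:
    """Inverse-square law: flux = L / (4*pi*d^2), so d = sqrt(L / (4*pi*F))."""
    return math.sqrt(luminosity_watts / (4 * math.pi * flux_w_per_m2))

# Round-trip check with placeholder values: a source of known luminosity
# placed at a known distance should be recovered from its measured flux.
L_true = 1.0e30   # watts (hypothetical)
d_true = 1.0e22   # metres (hypothetical)
flux = L_true / (4 * math.pi * d_true**2)
print(distance_from_luminosity(L_true, flux))   # recovers d_true (~1e22 m)
```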
Physicists at the Large Hadron Collider at CERN, Geneva, have discovered a new particle, consistent with the Higgs boson. The Large Hadron Collider (LHC) collides bunches of protons, each containing up to 10^11 protons, together. Physicists are trying to identify whether Higgs bosons are produced in these collisions. It is predicted that one in every hundred million (10^8) of these collisions will produce a Higgs boson. Physicists search for the distinctive signature of the Higgs boson as it decays into other particles. There are two experiments at the LHC looking directly for the Higgs boson: ATLAS and CMS. The latest results from the LHC were announced on 4th July 2012, using the data collected by the LHC in 2011 and the first part of 2012. Both experiments observe evidence for the production of a new particle, consistent with the Higgs boson, with a mass of around 125 GeV. However more data from the LHC will be analysed to understand if the new particle has all the properties we expect the Higgs boson to have.
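Those powers of ten set the scale of the search. A back-of-the-envelope illustration in Python (the 1-in-10^8 figure is the only number taken from the text; the total collision count is a made-up placeholder, not an LHC figure):

```python
# Expected Higgs bosons from a batch of collisions, given the quoted
# production probability of one per hundred million collisions.
P_HIGGS = 1e-8          # from the text: 1 in 10^8 collisions
n_collisions = 1e12     # placeholder batch size (assumed)

expected = P_HIGGS * n_collisions
print(f"{expected:.0f} Higgs bosons expected")   # 10000 Higgs bosons expected
```

Even a trillion collisions yield only a modest number of Higgs bosons, which is why the experiments must sift for a distinctive decay signature rather than observe the particle directly.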
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. April 11, 1997 Explanation: The Earth has once again endured a burst of particles from the Sun. The latest storm, which began Monday, was one of the best documented solar storms to date. At 10 am (EDT) ground monitors of the SOHO spacecraft, which continually monitors the Sun, noticed that a weak spot in the solar corona was buckling again, this time letting loose a large, explosive Coronal Mass Ejection (CME). Almost simultaneously, NASA's WIND spacecraft began detecting bursts of radio waves from electrons involved in this magnetic storm. Supersonic waves rippled through the solar corona as a puff of high energy gas shot out into the Solar System. The above image shows two photographs of the Sun taken about 15 minutes apart and subtracted, highlighting the explosion. The CME gas will have little lasting effect on the Earth, but might make this a good weekend to see an aurora. Authors & editors: NASA Technical Rep.: Jay Norris. Specific rights apply. A service of: LHEA at NASA/GSFC & Michigan Tech. U.
Sep16-12, 03:15 PM  #1
How exactly does a current create a magnetic field and vice versa? And why is it parallel? Do we actually know what causes magnetism at all?

Sep16-12, 03:56 PM  #2

Sep16-12, 04:21 PM  #3
Relativity is an explanation of what happens, and is as good a one as we have. But you can always ask 'why does it work that way?' Why does a moving charge create a magnetic field? Why do magnetic and electric fields exist at all? Nobody knows... but THAT is what happens in this universe. Science almost always explains what happens, not why. We know the 'right hand rule' works, but that doesn't mean we really know why, nor why the electric and magnetic field vectors are perpendicular.
If you look up at the night sky in most cities, you will see a murky haze, and few stars. This occurrence is no longer unique to cities; in fact, 2/3 of the world’s population can no longer see the Milky Way. As night skies become emptier, astronomers worry about observatories becoming obsolete, and others simply fear that they will no longer be able to see stars at night. Light pollution, which has increased dramatically with increased urbanization, has several different components, including glare, clutter, light trespass, energy waste and urban sky glow. Most light pollution is unnecessary and stems from inefficient or poorly-designed lighting sources. Unnaturally bright nights have detrimental effects on certain animals. Searchlights often interfere with birds’ nighttime migratory patterns, increased lighting has made it easier for nocturnal animals to be attacked by predators, and many creatures have experienced altered breeding behavior due to high nocturnal light levels. Humans are also affected by light pollution. Research has indicated that we need darkness to support our biological welfare. A dark skies movement has developed as a response to increasing light pollution. Groups all over the globe have been working towards lighting regulations and there has been recent progress. Austin, Texas, for example, invested in LED streetlights as part of an effort to decrease upward light pollution. In order to incentivize other cities to pass similar ordinances, participating local governments are able to save money and receive credits for greenhouse gas reduction. Improved lighting fixtures have the additional benefit of directing more light towards the streets, which makes both walking and driving at night relatively safer. There are, of course, challenges to the dark skies movement, as updating lighting fixtures can be expensive and many people are apathetic about the issue. 
The good news is that light pollution is a technically reversible problem, as long as there is awareness. The International Dark Sky Association is a non-profit working towards decreasing light pollution. They promote the practice of lighting only what you need, to the extent that safety and recreation can still be accounted for. The organization is also working on creating "dark sky" parks across the globe. Bill Nye (the science guy) is an ambassador of the Dark Sky Association, and you can hear his opinions on the value of keeping the night sky dark here. Hopefully, as lighting solutions develop and people become more aware of the issue, stargazing will not become a thing of the past, accessible only in planetariums. Written by Leslie Wolf, Class of 2015
Let C be a closed contour parameterized by t in the range 0 to 2π, and let f(z) be a function meromorphic inside and on C. The argument principle relates the change in argument of f(z), as z describes C once in the positive direction, to the number of zeros and poles inside the contour. The change in argument for one complete circuit around C is Δ arg f(z) = (1/i) ∮_C f'(z)/f(z) dz. The argument principle then states: Δ arg f(z) = 2π(N - P), where N and P are the numbers of zeros and poles inside the contour, counting multiplicities. This Demonstration shows Δ arg f(z) for six simple functions over a circular contour for t in the range 0 to 2π. At t = 2π, N - P is calculated from the final value of Δ arg f(z). The left pane shows the progress of z over the contour. Zeros of the selected function are shown by red points and poles by blue points. In one of the cases, the pole has order three and is therefore counted three times. The same would be done for zeros of higher order. The display keeps a running tally of the change in argument of f(z) along the contour as the slider moves through the range 0 to 2π. The first plot shows the progress of the contour in the z-plane along with the zeros as red points and poles as blue points. The second is the image of the contour under f. An approximation to N - P is reported at the bottom of the plots. When the slider reaches 2π, the values of N and P are displayed for the selected function.
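The principle is easy to check numerically: integrating f'/f around the contour and dividing by 2πi recovers N - P. A small self-contained Python sketch (the sample function and contour radius are arbitrary choices for illustration, not from the Demonstration):

```python
import cmath
import math

def zeros_minus_poles(f, df, radius=2.0, steps=20000):
    """Approximate N - P for f inside |z| = radius by summing
    f'(z)/f(z) dz over small arcs and dividing by 2*pi*i."""
    total = 0j
    for k in range(steps):
        t0 = 2 * math.pi * k / steps
        t1 = 2 * math.pi * (k + 1) / steps
        z = radius * cmath.exp(1j * (t0 + t1) / 2)            # arc midpoint
        dz = radius * (cmath.exp(1j * t1) - cmath.exp(1j * t0))
        total += df(z) / f(z) * dz
    return round((total / (2j * math.pi)).real)

# f(z) = z^2 (z - 1) / (z - 0.5): a double zero at 0, a zero at 1,
# and a simple pole at 0.5, all inside |z| = 2, so N - P = 3 - 1 = 2.
f = lambda z: z**2 * (z - 1) / (z - 0.5)
df = lambda z: (3*z**2 - 2*z) / (z - 0.5) - z**2 * (z - 1) / (z - 0.5)**2
print(zeros_minus_poles(f, df))   # 2
```

The midpoint sum converges very quickly here because the integrand is smooth on the contour; the zeros and poles only matter through their residues inside it.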
Concept 20 A half DNA ladder is a template for copying the whole. Matthew Meselson and Franklin Stahl invented the technique of density gradient centrifugation and used this to prove that DNA is replicated semi-conservatively. Arthur Kornberg identified and isolated DNA polymerase I — one of the enzymes that can replicate DNA. Matthew Stanley Meselson (1930-) Matthew Meselson was born in Denver, Colorado. He had always wanted to be a chemist and had a huge lab workshop set up in his family's basement and garage. Meselson studied chemistry at the University of Chicago and then did his graduate work at the California Institute of Technology with Linus Pauling. Meselson's thesis project was to use X-ray crystallography to figure out the structure of a specific protein. In 1954, Meselson went to Woods Hole to be a teaching assistant. Here, Meselson met Franklin Stahl — a post-doctoral fellow who was taking courses to learn some molecular biology techniques. Meselson and Stahl had a profitable summer during which they discussed theory and possible experiments. They were especially interested in trying to devise a way to prove or disprove Watson and Crick's model of semi-conservative replication. Meselson and Stahl found themselves so in tune with each other's ideas that they agreed to work together on devising the right experiment. Stahl got a post-doctoral position in Caltech, and by 1957 the two had the experimental proof for the semi-conservative replication of DNA. They did this by inventing a new technique called density gradient centrifugation, which uses centrifugal force to separate molecules based on their densities. Their "classic" paper was published in 1958 and their experiment has been called "one of the most beautiful experiments in biology." In 1957, while doing the experiments with Stahl, Meselson gathered enough data to finish his Ph.D. with Pauling. He then stayed at Caltech, first as a research fellow and then as an assistant professor of chemistry. 
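The logic of the density-gradient experiment can be captured in a few lines. Starting from fully 15N-labelled ("heavy") DNA and replicating in 14N ("light") medium, semi-conservative replication predicts all-hybrid DNA after one generation and a 50/50 hybrid/light mix after two: exactly the banding Meselson and Stahl observed. A schematic Python sketch of that bookkeeping (the strand labels are illustrative, not experimental data):

```python
from collections import Counter

def replicate(molecules):
    """One round of semi-conservative replication: each duplex splits, and
    each old strand templates a new 'light' (14N) partner strand."""
    daughters = []
    for strand_a, strand_b in molecules:
        daughters.append((strand_a, "light"))
        daughters.append((strand_b, "light"))
    return daughters

def band(duplex):
    """Density class the duplex would occupy in the CsCl gradient."""
    heavy = sum(1 for strand in duplex if strand == "heavy")
    return {2: "heavy", 1: "hybrid", 0: "light"}[heavy]

pool = [("heavy", "heavy")]   # generation 0: fully 15N-labelled
for gen in range(1, 3):
    pool = replicate(pool)
    print(gen, Counter(band(d) for d in pool))
# generation 1: all hybrid; generation 2: half hybrid, half light
```

Conservative replication would instead predict a heavy band surviving at every generation, which is what the experiment ruled out.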
Meselson worked on phage recombination, showing that recombination results from the splicing of DNA molecules. In 1960, François Jacob and Sydney Brenner came to his lab at Caltech, where they obtained the data necessary to prove the existence of mRNA. In the fall of 1960, Meselson accepted the position of associate professor of molecular biology at Harvard University, where he is currently the Thomas Dudley Cabot Professor of the Natural Sciences. He discovered the enzymatic basis of host DNA protection, whereby the cell recognizes its own DNA by adding methyl groups to it. Foreign DNA will be attacked and destroyed by restriction enzymes, but host, methylated DNA remains intact. Meselson also discovered the process of DNA mismatch repair, which allows cells to fix mistakes in DNA. Currently, Meselson's research interest is the evolution of sex, and he is using the small invertebrate Rotifera as a model system.

Since 1963, Meselson has been concerned about the use of chemical and biological weapons in warfare. He has acted as a consultant for a number of government agencies and participated in scientific studies of the effects of accidental releases and misuse of biological weapons. Meselson is the co-director of the Harvard Sussex Program on Chemical Biological Weapons (CBW) Armament and Arms Limitation, a program that attempts to set limits on the use of chemical and biological weapons. Meselson is also the co-editor of The CBW Conventions Bulletin.

Matthew Meselson was one of the scientists who investigated the use of biological agents in Vietnam. The U.S. government asked him to analyze the residue left by possible biological weapons. The samples turned out to be bee pollen.

Initially, Meselson and Stahl used phage DNA in their density gradient experiments. Phage DNA did not band well in the centrifuge tubes and gave uninterpretable results. Why might this be so?
Hosted by The Math Forum

Problem of the Week 1152: A Better Cable Connection

A uniform cable attached to a ceiling at one end and holding up a weight at the other end will always break at the top, because the top has to support the weight plus the cable below it. This is wasteful, because the bottom part of the cable need not be as thick as the top. An efficient cable would be no thicker than needed to support the total weight below it.

Imagine a tapered cable of circular cross-section and length L that hangs freely, where at every point the cable is exactly thick enough to support the cable below it plus the hanging weight, which we take to be 1 unit. Let r(x) be the radius of the cable at distance x from the bottom. Assume that a unit area of cable is strong enough to hold a weight of k units below it (i.e., the tensile strength of the cable is k) and that a unit volume of cable weighs d units (density is d). What is the function r(x)?

Source: Julien Beasley, Seattle, who writes: "The inspiration for this problem came to me at a rock concert. I was looking at the lamps suspended from the ceiling on cables, and thinking 'The lower part of the cable is holding less weight than the top. The cable will always break at the top. What form should the cable take so that it would break anywhere with equal probability?'"

© Copyright 2012 Stan Wagon. Reproduced with permission.
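One way to explore the problem numerically: writing A(x) = πr(x)² for the cross-sectional area, the "exactly thick enough" condition is k·A(x) = 1 + d·∫₀ˣ A(t) dt, which on differentiating gives k·A′(x) = d·A(x) with A(0) = 1/k. The sketch below (parameter values invented for illustration; this is an exploration, not the official solution) marches that equation forward and checks it against the exponential it suggests:

```python
import math

# Illustrative parameters, not part of the problem statement
k, d, L = 2.0, 0.5, 3.0

# Euler-march k*A'(x) = d*A(x), the differentiated self-support condition,
# starting from A(0) = 1/k (the bottom holds only the unit weight).
n = 100_000
dx = L / n
A = 1.0 / k
for _ in range(n):
    A += (d / k) * A * dx

# Compare with the closed-form candidate A(x) = (1/k) * exp(d*x/k)
closed = (1.0 / k) * math.exp(d * L / k)
assert abs(A - closed) / closed < 1e-3

# The radius then follows from A = pi * r^2
r_top = math.sqrt(A / math.pi)
assert r_top > math.sqrt(1.0 / (k * math.pi))   # thicker at the top than the bottom
```

The agreement suggests the profile grows exponentially with height, which matches the intuition that each slice must carry everything below it.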
Recent developments in this field have been concerned with (1) new major catalogues of standard Hubble types; (2) slight modifications to existing classification systems; (3) the identification of new types of galaxies and improved understanding of older types; (4) identification of the major orbit resonances in grand design spirals and ringed galaxies; (5) widespread application of electronic detectors with high quantum efficiency to large numbers of galaxies of many types, and over a wide range of passbands; and (6) computer classification. Major catalogues with morphological data continue to be produced. These are summarized in reviews by Corwin 17 and Buta 13. The RC3 combined several of these catalogues and other smaller lists into the largest database of Hubble morphological type information ever compiled. The 17,700 types given in this catalogue are based solely on photographic sources and are on the classification system of de Vaucouleurs 19. Other major catalogues include the Virgo Cluster Catalogue 5 and the RSA; these give types on Hubble's revised system 44, with additions and revisions, some described below. No major new classification systems have been proposed since 1976, though modifications to existing systems have been suggested. For example, Kormendy 38 suggested that lenses be distinguished from rings using their own notation within the framework of the de Vaucouleurs revised Hubble system. He suggested denoting inner lenses by (l) and outer lenses by (L) in the same classification positions where inner and outer rings would be specified. Kormendy also suggested a different approach to morphology, the idea of characterizing galaxies in terms of a small number of "distinct components" (bars, rings, etc.) rather than a large number of morphological "cells". The objective of the approach is to make deductions concerning secular evolution from the ways these components might be expected to interact (see Table 1 of Kormendy 39). 
Kormendy suggested that such an approach leads to the possible conclusion that bars are not permanent features of galaxies but may evolve under certain circumstances to a lens. Whether this evolution actually takes place or not is still uncertain. Another revision to the classification systems is the recognition of "dusty E's." The misclassification of these objects as S0's is a noteworthy problem of catalogues emphasized by Ebneter, Djorgovski, and Davis 28 (and references therein; =EDD). The presence of such "features" in a type of galaxy which was by definition featureless led EDD to suggest that a more physical classification of E's is now warranted. A lovely montage and catalogue of dust-lane ellipticals is provided by Bertola 3. Sandage and Brucato 48 pointed out that the original classes called Irr I and Irr II in the Hubble Atlas are not satisfactory because they combine widely differing objects into the same bin, namely "Irr". To distinguish galaxies which are not E, S0, or S but which have an amorphous appearance to the unresolved light, sometimes with embedded resolved stars, they proposed the term "amorphous" galaxies. Sandage and Brucato emphasize that these objects are similar to, but not precisely like, the Irr II's in the Hubble Atlas, and that some similar objects classified as I0 by de Vaucouleurs may be peculiar spirals or S0's. One of the hallmarks of the amorphous class is a well-developed early-type absorption spectrum spread throughout the disk. In the case of spirals, many aspects of the "grand design" and "flocculent" spiral morphologies have now been quantified 31. These are aspects of spiral structure morphology that are not directly built into Hubble classifications. Flocculent galaxies lack bimodal symmetry and have a spiral-like structure composed only of small pieces of arms. Grand design galaxies generally have a two-armed structure and the arms are longer and more continuous than in flocculent galaxies. 
To account for these differences and for combinations of the two pattern types in many galaxies, Elmegreen and Elmegreen 30 proposed a system of 10-12 "arm classes" or AC's to highlight a systematic orderliness of spiral arms. The AC's are not exactly the same as van den Bergh luminosity classes because they emphasize symmetry and arm length, rather than arm contrast. The identification of the locations of specific dynamical orbital resonances in spiral galaxies has seen much progress in recent years. Research has focussed on two classes of objects: grand design spirals by the Elmegreens, and ringed galaxies by myself. The paper by Elmegreen and Elmegreen 29 summarizes how to recognize the primary orbit resonances in a relatively typical grand design spiral, NGC 1566. For this purpose, purely morphological methods guided by expectations from spiral structure theory are used. The features considered are spiral arm kinks, gaps, spurs, bifurcations, endpoints to star formation ridges, dust-lane crossover points, interarm star formation, and the ends of a weak bar. If consistency can be found between the positions of these features and those inferred for specific resonances from a rotation curve, then the pattern speed of the wave can be derived with reasonable confidence. However, even in an extreme grand design case like NGC 1566, the resonance features are very weak. It takes a great deal of tenacity, for example, for the reader to study and identify clearly all of the features summarized in Table 1 of Elmegreen and Elmegreen 29. Ringed galaxies refer to normal galaxies classified in the de Vaucouleurs revised Hubble system with the symbols (R)SB(r), (R')SB(rs), (R')SAB(s), (R)SAB(r,nr) etc., that is, objects which have inner, outer, or nuclear rings or pseudorings. 
These rings are believed to define the locations of specific orbital resonances with a bar or oval, and if correct they are much more obvious optical features with a direct link to resonances than some of the features seen in the best grand design spirals. Thus, they provide a promising way of indirectly estimating pattern speeds of bars and ovals, of which very little is known. At the moment, there is a great deal of evidence that the outer rings and pseudorings of SB and SAB galaxies trace the location of the outer Lindblad resonance, or OLR. This follows from statistics of their shapes and orientations with respect to bars 10, from their relative sizes with respect to inner rings 2, and most of all from their morphology 15. The Catalogue of Southern Ringed Galaxies 12 is designed specifically to understand the link between rings and resonances, and has been the basis for the studies of Buta 10 and Buta and Crocker 15. A number of interesting findings have been made concerning cluster galaxies. A photometric study of brightest cluster members, or "BCM's", including gE, D, and cD types (Schombert 51, 52, 53) has led to a refined and quantitative classification of these galaxies based on luminosity profile shapes. Schombert has noted that the characteristic extended envelopes of cD galaxies are generally fainter than 10% of the night sky brightness and are not readily seen on PSS prints. Thus, the rather shallow luminosity profiles of cD's are what led to their recognition, in addition to their central location in clusters. It is the existence of a true extended envelope that distinguishes the cD from the D class. As emphasized by Sandage and Binggeli 47 (=SB), the Virgo Cluster contains galaxies of virtually every known morphological type. 
Of particular interest has been the identification in Virgo of dwarf S0, or dS0 galaxies, which morphologically are like S0's but which are of considerably lower luminosity and surface brightness than normal S0's (see also Binggeli and Cameron 4). Most of the galaxies in Virgo fainter than B ≈ 14 appear to be dwarf E, or dE, systems. SB emphasize that the "great void" in luminosity below Sa, Sb, and Sc types is real - there are no convincing cases of dSa, for example. This confirms that the Hubble sequence is largely defined by giant galaxies. However, although no examples of dSa or dSb were found in Virgo, a promising example was found by van den Bergh 63 in the compact, apparent elliptical galaxy NGC 3928, a member of the Ursa Major Cloud of galaxies. The luminosity class system of van den Bergh has been extended to classes V-VI and VI by Corwin (see introduction to RC3) to allow for a greater range of apparent surface brightnesses seen among dwarf and late-type galaxies on the SRC-J sky survey. The RSA luminosity class system was also refined by SB to allow for a greater apparent range of surface brightnesses seen among Im galaxies in the Virgo Cluster. Among the galaxies classified as Im V by SB are "huge" Im types having significant diameters (up to 10 kpc) and peak central surface brightnesses less than 10% of night sky in blue light. These are accompanied by similar huge dE systems. The data from a variety of sources of luminosity classes have been compared and combined in RC3 24. Van den Bergh, Pierce, and Tully 65 (=BPT) have discussed the classification of 231 Virgo Cluster galaxies from CCD images. They propose a revision to the classification system of van den Bergh 62 to include Sd and Sm types, and demonstrate that the accuracy of luminosity classification is improved on digital images (σ(M_T^B) ≈ 0.7 mag) compared to classifications based on photographic plates published in the RSA (σ(M_T^B) ≈ 1 mag). 
Of particular interest in this work was the identification of a possible new class of galaxies, called "Virgo types." These galaxies have fuzzy outer regions and active star formation in their bulges or inner disk regions, and constitute 43% of 88 Virgo cluster spirals. In contrast, BPT find that the Ursa Major cluster includes only 2 "Virgo types" out of 35 spirals, suggesting real differences. BPT suggest that the "Virgo types" represent a mild form of the Butcher-Oemler effect that persists at zero redshift. In a study of the HI and optical properties of cluster galaxies, Bothun, Schommer, and Sullivan 6 identified a class of red, HI-rich, low surface brightness spirals. A sample of these objects is compiled by Schommer and Bothun 54, and two extreme examples of the class, NGC 3883 and UGC 542, were studied by van der Hulst et al 66. The types of these galaxies range from Sa to Sc in Schommer and Bothun 54, and NGC 3883 is quite distinctive for its size and appearance in Abell 1367. Van der Hulst et al 66 interpret these galaxies in terms of a threshold HI surface density for star formation and possible interrupted star formation activity or an altered IMF. An important serendipitous finding from a study of a field of the Virgo Cluster was an object dubbed "Malin 1" 7. This galaxy appears small enough on PSS prints that it was not included in the UGC 42. However, on amplified deep IIIa-J plates, Malin 1 shows an extended, low surface brightness disk surrounding a small bright core. The object is not a member of the Virgo Cluster (it is 20 times as distant) and is now recognized as a new class of giant, HI-rich, low surface brightness disk galaxies the likes of which had not been appreciated before. The properties of Malin 1 are further summarized and described by Impey and Bothun 36, and a second example of the class was reported by Bothun et al 8. 
These objects are now interpreted as disk galaxies whose HI surface density is so low that they evolve only slowly. The study of interacting galaxies has led to the recognition of several new morphologies. Polar ring galaxies 69 are believed to be cases where a small satellite has been disrupted into a polar orbit around an S0. Hoag-type ringed galaxies 55 may be related cases where the central object is an E0 system. X-galaxies are edge-on S0's displaying a distinct X-shape across the center that may also be related to polar ring galaxies 68. "Ocular" galaxies are interacting galaxies displaying an "oval-apex" structure resembling an eye 32. The latter objects are particularly interesting, because they represent a type of bar not distinguished within the Hubble system. A key feature is a double arm on one side, as illustrated in Figure 1 of Elmegreen et al 32. Of particular interest to students of spiral structure is the discovery of a leading spiral arm in the interacting galaxy NGC 4622 16. The arm was first noticed by Byrd on a well-known commercially available photograph published in Shu 56. The galaxy is of type SA(r)ab and shows two major outer arms that wind clockwise, but inside the inner ring a single arm winding in the opposite sense is present. Since the "discovery" photograph was taken in blue light, Buta, Crocker, and Byrd 15 (=BCB) re-observed the galaxy in the Cousins I-band to test whether the arm is stellar or an artifact of dust. The leading arm was found to be a clear feature in the galaxy's old disk population. The fact that only a single leading arm is observed in this case, rather than two, is strong evidence that the arm was generated by a tidal interaction, as discussed in detail by BCB. The widespread use of high quality CCD's, especially the large format TEK CCD's at KPNO and CTIO, has greatly increased the number of large-scale images available for classifying galaxies. 
What is particularly important is that a typical modern CCD can provide in a short amount of time images that are deeper in limiting surface brightness than the SRC-J sky survey, and yet still provide detailed information on the central regions of galaxies. Thus, they bypass the main problems of direct prime focus or Schmidt plates and have the potential of adding greatly to our knowledge of morphology. It is also clear that recent advances in infrared detectors make the development of a classification system in the 1-3 µm wavelength range a real possibility. The advantage of using near-infrared images to type galaxies is their increased sensitivity to the dominant old stellar populations, which tends to enhance the visibility of features such as bars and bulges. The young component of galaxies, which dominates blue-light images for many spirals, as well as dust, will be less prominent and therefore not important for typing purposes. The number of "cells" required to classify galaxies should therefore decrease somewhat. However, going to the infrared will not change the pitch angle of spiral arms or the relative sense of the Hubble sequence. What is clear is that the number of "non-barred" galaxies will probably decrease, as can be gathered just by comparing the ESO-B and ESO-R sky survey charts. Finally, in the future some catalogues of galaxies will probably include automatic classification 1, 58. This is an approach still under development, owing to the difficulty of defining some aspects of morphology, but once a satisfactory methodology is achieved, it has the potential of providing more consistent classifications than might be achieved visually.
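To make the idea of automatic classification concrete, a toy sketch is given below. It is purely schematic and is not any published algorithm: the two features (a central light concentration index and a spiral-arm pitch angle) and all thresholds are invented for illustration, though they stand in for the kinds of measurable quantities a real classifier would use.

```python
def toy_hubble_type(concentration, pitch_deg):
    """Assign a rough Hubble-like type from two invented features.

    concentration: fraction of light in the central region (0..1, assumed)
    pitch_deg: spiral arm pitch angle in degrees (assumed)
    All thresholds are illustrative only.
    """
    if concentration > 0.6:      # strongly concentrated light: early type
        return "E/S0"
    if pitch_deg < 10:           # tightly wound arms
        return "Sa"
    if pitch_deg < 25:
        return "Sb"
    return "Sc"                  # open, loosely wound arms

assert toy_hubble_type(0.8, 0.0) == "E/S0"
assert toy_hubble_type(0.3, 5.0) == "Sa"
assert toy_hubble_type(0.3, 30.0) == "Sc"
```

A production system would of course replace these hand-set rules with parameters fit to large samples of visually classified galaxies, which is precisely why consistency across catalogues is the hoped-for benefit.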
Located 1,000 light-years from Earth in the constellation Perseus, a reflection nebula called NGC 1333 epitomizes the beautiful chaos of a dense group of stars being born. Most of the visible light from the young stars in this region is obscured by the dense, dusty cloud in which they formed. With NASA's Spitzer Space Telescope, researchers can detect the infrared light from these objects. This allows a look through the dust to gain a more detailed understanding of how stars like our sun begin their lives. The young stars in NGC 1333 do not form a single cluster, but are split between two sub-groups. One group is to the north near the nebula shown as red in the image. The other group is south, where the features shown in yellow and green abound in the densest part of the natal gas cloud. With the sharp infrared eyes of Spitzer, researchers can detect and characterize the warm and dusty disks of material that surround forming stars. By looking for differences in the disk properties between the two subgroups, they hope to find hints of the star- and planet-formation history of this region. The knotty yellow-green features located in the lower portion of the image are glowing shock fronts where jets of material, spewed from extremely young embryonic stars, are plowing into the cold, dense gas nearby. The sheer number of separate jets that appear in this region is unprecedented. This leads researchers to think that by stirring up the cold gas, the jets may contribute to the eventual dispersal of the gas cloud, preventing more stars from forming in NGC 1333. In contrast, the upper portion of the image is dominated by the infrared light from warm dust, shown as red. Posted By: Brooke
There's a rather impressive diversity of animals associated with this bamboo coral (Keratoisis sp.), discovered at 1,455 m depth at Davidson Seamount.

Deep-sea Corals and How to Measure Their Age and Growth

Allen H. Andrews, Moss Landing Marine Laboratories
Andrew DeVogelaere, PhD, Marine Scientist/Research Coordinator, Monterey Bay National Marine Sanctuary
Davidson Seamount: Exploring Ancient Coral Gardens

Corals in deep, cold water

When most people think of corals they imagine tropical waters and snorkeling. They are surprised to learn that corals are also found throughout the world in the ice-cold, dark waters of the deep ocean. Cold-water (or deep-sea) corals are part of the taxonomic group called Cnidaria, and they are related to animals like sea anemones and jellyfish. They can live as individuals or as colonies that form extensive reefs. These corals feed by waiting for small food particles to flow past, and then use their stinging cells to capture them. They also provide habitat for other species. These beautiful animals are among the oldest living organisms; some reefs are several thousand years old, and some individual corals live several hundred years.

Before researchers can fully understand the importance and sensitivity of these habitat-forming corals of the deep, we need more information about their life history, such as age, growth, and longevity. So far, age determination studies have found that deep-sea corals can attain ages anywhere from a hundred to perhaps thousands of years. Age and growth of deep-sea corals is typically determined from outgrowth studies in the field, growth-zone counts in the skeletal structure, a radiometric technique (e.g., lead-210 dating), or a combination of these techniques.

Scientists used this cross section of bamboo coral, sampled from Davidson Seamount, for lead-210 dating. The series of holes followed circular patterns that could be seen in the growth structure.
Age was determined based on the decay of lead-210, from the edge of the cross section to near the center. Lead-210 dating is a technique that uses the radioactive decay of lead-210 as a "natural clock" that can reveal estimates of age. This process begins with the natural incorporation of lead-210 from seawater into the coral skeleton. As the coral grows like a tree, laying down growth rings, the radioactivity of lead-210 decreases from the youngest to the oldest part of the skeleton. The reason lead-210 activity decreases is that it slowly decays (a process called radioactive decay) at a known rate, in this case with a half-life of 22.26 years. (A half-life is the time elapsed for the radioactivity of a substance to fall to half its original value.) To measure this change in radioactivity and relate it to age, researchers take a series of samples from the edge of a skeletal cross section to the center. By taking this series of lead-210 measurements, they can use the decrease of lead-210 activity to determine the age and growth of the coral. This approach is useful to about 100 years of age, at which time the activity of lead-210 decreases to background, or supported, levels.

Coral age estimates

In a recent study, researchers estimated the age of bamboo coral from Davidson Seamount (Andrews et al. 2005a). The results from various growth-zone-counting interpretations led to a wide range of estimated age. For a cross section 8 to 11 mm in radius, estimates of 80 to 220 years were determined for the bamboo coral colony. From a series of lead-210 dating samples taken in that study, and a recent follow-up study (Andrews et al. 2005b), support for the older age estimates prevailed. This line of work has been further applied to another species of bamboo coral from New Zealand and related to possible indicators of ocean climate change (Neil et al. 2005, Tracey et al. 2005).
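The "natural clock" arithmetic is simple: each halving of lead-210 activity corresponds to one half-life of elapsed growth. A short sketch (the activity values are hypothetical, for illustration only; real measurements must also subtract the supported background level):

```python
import math

T_HALF = 22.26   # half-life of lead-210, in years

def years_between(activity_young, activity_old):
    """Growth time separating two skeletal samples, from their lead-210
    activities (the younger sample has the higher activity).
    Activities are assumed to be background-corrected."""
    return T_HALF * math.log2(activity_young / activity_old)

# One half-life: activity falls to half
assert abs(years_between(1.0, 0.5) - T_HALF) < 1e-9
# An edge-to-center activity ratio of 4 implies roughly 44.5 years of growth
assert abs(years_between(1.0, 0.25) - 2 * T_HALF) < 1e-9
```

Because the activity decays exponentially toward the supported level, the ratio becomes unmeasurable after a few half-lives, which is why the article notes the method is useful only to about 100 years of age.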
This diagrammatic representation of the decay of lead-210 over time shows that the activity is reduced to half at about 22 years. This trend continues to approach an activity of zero (or supported levels) in an exponential manner. It is this decrease in lead-210 activity that is a measure of age as the activity approaches the asymptote (a line whose distance to a given curve tends to zero).

In the current study, researchers will collect additional specimens from Davidson Seamount to better refine estimates of age using lead-210 dating. The aim of this portion of the study is to collect two relatively large colonies, along with a series of upper limbs or tips from colonies, within and between various locations. The two large colonies will be sampled more extensively for lead-210 dating to provide age determinations with minimal uncertainty. The series of colony tips will be used to relate growth rates within and between colonies and locations on the seamount.

References

Andrews, A.H., G.M. Cailliet, L.A. Kerr, K.H. Coale, C. Lundstrom, and A. DeVogelaere. 2005a. Investigations of age and growth for three species of deep-sea coral from the Davidson Seamount off central California. In: Cold-water Corals and Ecosystems. A. Freiwald and J.M. Roberts, eds. Proceedings of the Second International Symposium on Deep-Sea Corals, Erlangen, Germany, September 8-13, 2003. pp. 965-982.

Andrews, A.H., D.M. Tracey, H. Neil, G.M. Cailliet, and C.M. Brooks. 2005b. Abstract: Lead-210 dating bamboo coral (family Isididae) of New Zealand and California. Third International Symposium on Deep-Sea Corals: Science and Management. University of Miami, Florida. November 28 - December 2, 2005.

Tracey, D.M., J.A. Sanchez, H. Neil, P. Marriott, A.H. Andrews, and G.M. Cailliet. 2005. Abstract: Age and growth, and age validation of deep-sea coral family Isididae. Third International Symposium on Deep-Sea Corals: Science and Management. University of Miami, Florida. November 28 - December 2, 2005.

Neil, H., D.M. Tracey, P. Marriott, R. Thresher, A.H. Andrews, and J. Sanchez. 2005. Abstract: Preliminary evidence of oceanic climate change around New Zealand over the last century: the pole-equator seesaw. Third International Symposium on Deep-Sea Corals: Science and Management. University of Miami, Florida. November 28 - December 2, 2005.
Yes. As John says, the high specific heat capacity of cheese means it must absorb more heat than the crust to reach the same temperature. If you heat a pizza in an oven (say to about 300 degrees), you can observe that the crust reaches that temperature more quickly than the cheese; the cheese takes longer to absorb enough heat. The same applies in reverse when the pizza cools: the crust, having stored relatively little heat energy, doesn't have to lose much to cool off, and it can transfer enough heat to your fingers to cool itself without burning you. The cheese, on the other hand, stores much more heat and cools very slowly, and it also conducts heat to your skin more readily (as John says). So whenever you get that "OUCH" in your mouth or on your fingers, it's because the cheese or the sauce has dumped a lot of heat (and probably a burn) onto your skin!
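A back-of-the-envelope comparison makes this concrete. The specific heat values below are rough assumptions chosen for illustration (moist cheese higher than dry crust), not measured properties of any particular pizza:

```python
# Rough, assumed values for illustration only
C_CHEESE = 3000.0   # J/(kg*K), moist melted cheese (assumed)
C_CRUST = 1800.0    # J/(kg*K), dry baked crust (assumed)

def heat_released(mass_kg, specific_heat, delta_t):
    """Energy (J) a material gives up while cooling by delta_t kelvin:
    Q = m * c * dT."""
    return mass_kg * specific_heat * delta_t

# Equal 50 g portions, both cooling from 90 C down to 40 C
q_cheese = heat_released(0.05, C_CHEESE, 50)   # 7500 J
q_crust = heat_released(0.05, C_CRUST, 50)     # 4500 J
assert q_cheese > q_crust   # the cheese has far more heat to dump into your mouth
```

Gram for gram, the cheese has substantially more energy to shed while cooling, which is exactly why it stays dangerously hot after the crust feels safe to touch.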
This last weekend marked the 10th anniversary of the Biology Leadership Conference. One hundred and two biology educators gathered from all over the U.S. and Canada (plus one attendee from Bangladesh!) for the three-day meeting in Tucson, Arizona. The conference keynote address was given by Hopi Hoekstra, Professor of Zoology and Curator of Mammals at Harvard University. Dr. Hoekstra's lab studies evolutionary genetics, examining the genetic basis of fitness-related traits and what genes can tell us about the evolutionary process.

Dr. Hoekstra, telling a good connection story.

She started her talk with one of those amazing connection stories about Charles Darwin. Her story connected Walter Drawbridge Crick (grandfather of Francis Crick), a shoemaker and amateur naturalist who, many years ago, sent Darwin a sample of a cockle clamped to the leg of a terrestrial beetle, which led to an article that Darwin published in Nature over 100 years ago. A lovely story to demonstrate remarkable connections, overlaid on the shared evolutionary history of all organisms.

25 February 2011 Science Cover

The Hoekstra lab studies different populations of wild mice (genus Peromyscus, commonly known as deer mice) to uncover the genetic and developmental mechanisms underlying variation in cryptic coloration patterns and behavior. Though these mice superficially resemble the common house mouse, Peromyscus have larger eyes and often have two-tone coloring with a darker dorsal color and a lighter belly color. Wild mice are the most abundant mammals in North America. Since they are found in every habitat type, there are many local adaptations to investigate. Not only that, since these mice have been studied for so long by natural historians, there is a tremendous body of work documenting their behavior, physiology, and morphology. As a result, these ping-pong-ball-sized mice are turning out to be a vital, emerging model system for the study of evolution and genetics.
In fact, Hoekstra explains that she thinks of these mice as the Drosophila of North American mammalogy. The fascinating thing about Dr. Hoekstra's research is that it is always a three-part story: environment, phenotype, and genotype. It's inspiring to hear her describe the way researchers in her lab shift effortlessly from hardcore, basic field work to high-tech molecular techniques and methods in the lab. First up in her talk was a description of their work on color variation between organisms and within an organism. As we know, color is involved in a number of important biological processes (warning, mate choice, mimicry, and crypsis). The variation in pigment pattern of deer mice gives the Hoekstra lab an ideal opportunity to examine natural variation and how patterns are formed. Peromyscus polionotus (commonly known as the oldfield mouse) naturally occurs in old, overgrown agricultural fields. These mice have recently invaded new territory on the white sandy beaches of the gulf coast of Florida, the Santa Rosa Islands. The Santa Rosa Island field mice (or beach mice) are a subspecies of the oldfield mouse, differing in their pigmentation and patterning. The oldfield mouse has a dark brown coat, light grey belly, and a striped tail, while the beach mouse lacks pigment on its nose, its sides, and its tail. Since geologists know that these sandy islands are only 6,000 years old, the researchers can deduce that the evolutionary patterns in these populations are relatively recent. It makes sense, if you are a mouse running around on very light-colored sand, that it would be an advantage to blend in with a light-colored coat. But of course, Hoekstra and her team wanted to prove this.

Dark and light mice.

Their plan was to examine the pattern by tagging 100 light mice and 100 dark mice and then releasing them to see who survives. It turns out that's really hard to do. So, instead, they made thousands of plasticine models of mice.
Half the "fake" mice were brown, half were light-colored. They put them out in the environment in equal frequencies and observed what happened. Using model mice, instead of real mice, gave the researchers an added advantage: they were able to cut out any natural variation (odor, behavior, activity patterns) that might confound the experiment. Their results showed clearly that a mismatched mouse (light mouse on dark soil or dark mouse on light sand) had higher levels of predation from owls, hawks, and their other natural predators. In other words, there was a strong selection against conspicuous mice. From this field work, they could conclude that natural selection is certainly a driver in this color variation. While many researchers may stop there, Hoekstra's lab pushes further, taking their questions to the lab to investigate the genetics, and thus the mechanisms. They brought the dark and light mice back to the lab, made crosses, intercrossed hybrids, and then took a look at the genetics of the 2nd generation. They use QTL mapping approaches to identify genomic regions involved with coloration, ultimately hoping to find the causal gene. By examining the 2nd-generation offspring, they can see that dominant and recessive alleles contribute to the pigment pattern. They reason that it's not a single, simple gene at work, since there is continuous variation, but they can also see that it's not hundreds of genes. Based on the nature of the variation in the hybrid generations, they predicted there are most likely 3-5 genes involved. By mapping the genotype of the F2s against their phenotype variations, they pinpointed the Mc1r gene (Hopi explains that they refer to Mc1r as the "dream gene"). They found a mutation in that gene that causes a change in receptor function. That receptor function change impacts melanocytes that produce pigment (Mc1r is basically the switch that determines what type of melanin is produced, resulting in either light or dark coloration).
And with that discovery, the Hoekstra lab found a connection between a single nucleotide change and survival in the wild. But they didn’t stop there. They pinpointed two additional gene loci – Agouti and Corin. But in the case of these two genes, it isn’t a matter of simple coding mutations. Here, it’s likely regulatory mutations, resulting in increased expression. So the precise mechanism is much harder to track down – we’ll have to stay tuned for those results. But, ultimately, pigmentation in these mice is controlled by these three interacting genes. The Hoekstra lab also examines patterning differences between dorsal and ventral pigmentation in mice. The pigment differences are expressed in the dermal cells and dermal papilla. The mainland mouse has a dark brown coat and a grey belly. The beach mouse has an upward shift in the boundary between dark and light, and its flanks and belly are completely white. Suspecting that Agouti expression is important in determining this dorso-ventral boundary, they tried knocking out Agouti as well as increasing its expression to test the correlation. Through this work, they determined that over-expression of Agouti in the mainland mouse delays the maturation of melanocytes, resulting in changes in pigmentation. Removing the foam cast of a burrow. Shifting gears from color differences to a behavioral difference among the wild mice, the Hoekstra lab examines burrowing, looking for the genetic basis of burrow shape. Different species of Peromyscus build dramatically different burrows in the wild and, of course, burrow shape and structure are important for reproduction and survival. It turns out the burrows of oldfield mice are larger and more complex than the burrows of the deer mouse. The oldfield mouse digs a second “escape tunnel” that radiates up from the nest to just below the surface (a good assist when snakes are your predators).
Hoekstra’s team examines the structure of a burrow by first chasing the residents out by blowing air into the tunnel (and catching them). Once the mouse is captured, they inject plastic foam (which she hilariously calls “pheno-foam”), which hardens into a perfect cast of the burrow. The cast is treated as a morphological artifact of the underlying behavior – a perfect example of Richard Dawkins’ “extended phenotype”. Using the same lab methods described for the pigmentation experiments, her lab has now identified four genetic regions that contribute to the genetic basis of burrow shape. To further examine burrowing behavior in their subjects, they’ve created a special “burrow box” (which she refers to as a “Pheno-dome”) with transparent walls that allows them to watch (and record) the mice digging a burrow. Since they’ve found it takes a given mouse roughly 16 hours to dig a burrow, they’ve pioneered a way to track the mouse through time on the video, in an automated fashion. It’s a sort of fast-forwarding through the footage with a colored outline to achieve an automated/enhanced behavioral analysis. Using this method, they are able to record when, how, and how quickly the animals dig. Hoekstra’s efforts to examine natural variation in the wild and then leverage molecular techniques and tools to reveal the genetic mechanisms behind those variations are awe-inspiring. Their work spans so many domains, requiring expertise at the level of the environment, the organism, and the gene (a rare combination in these reductionist days!). What’s more, she clearly communicates their sense of fun and the burning curiosity that drives their work – the discovery of genes to tell the story of evolution. “That perfection of structure and coadaptation which most justly excites our admiration.” - Charles Darwin
Here are links to more information on Dr. Hoekstra’s work:
1. PLOS article.
2. iBioSeminars: three videos of talks by Dr. Hoekstra.
3. Science Daily article.
If the red roof image represents the panels, you are not tracking, and therefore are significantly limiting your collected power. To obtain a chart of the solar path in your area, see the University of Oregon website: http://solardat.uoregon.edu/SunChartProgram.html It is of course not just the path of the sun, but what might be “in the way”, and your weather. Even with the greater amount of atmosphere due to the low angle, at 10 degrees above the horizon there may be available up to 50% of the total solar power. At around 30 degrees above the horizon, the sunlight’s path through the air is “short” enough that essentially full power is available to a panel perpendicular to the incoming solar rays. If you are not tracking, though, that 10 degrees above the horizon is something like 80 degrees off direct impact to your panel, meaning you do not collect much of it. If your panels are two feet wide, to not shade each other on the E/W axis when tilted to only 10 degrees up from the horizon, they must be spaced apart nearly twelve feet. If you limit your morning/evening aim to 30 degrees above the horizon, the panels need to be spaced only four feet apart. Depending upon factors such as your latitude, the time of year, and physical barriers, the difference between ten and thirty degrees may be a lot of solar sky-time missed. Remember that if a solar panel is partially shaded, most lose a significant portion of their power-generating capability, well beyond the percentage of the panel shaded. If you do NOT track at all, a key selection is the angle of the panels. Are you going to align for maximum noontime collection for summer, winter, or the equinoxes? If you align for noon at the equinoxes, noon at the summer and winter solstices will be off by 23.5 degrees (only receiving 92% of potential power).
While someone with better math skills could calculate it accurately, as a ballpark during the solstices when the sun is around 35 degrees east or west of the panel, you are only getting 50% or so of the available power. A significant aspect of the summer solstice is that the sun rises and sets north of an east/west line. Checking the Yuma chart, for us it optimistically appears that the fixed panel would not even "see" the summer sun at due east until around 0840, and the sun would pass north of the panel at around 1520 (6 hours and 40 minutes of exposure).
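The percentages above follow from the cosine of the angle between the sun and the panel's perpendicular. A minimal sketch using a simple cos(θ) model that ignores atmospheric losses (the function name is mine, for illustration only):

```python
import math

def power_fraction(off_angle_deg):
    """Fraction of direct-normal power a flat panel collects when the sun
    is off_angle_deg away from the panel's perpendicular, using the simple
    cosine-of-incidence rule. Atmospheric attenuation is ignored."""
    theta = math.radians(off_angle_deg)
    return max(0.0, math.cos(theta))  # a panel facing away collects nothing

# Sun 23.5 degrees off perpendicular (solstice noon vs. equinox aim):
print(round(power_fraction(23.5), 3))   # ~0.917, i.e. about 92%

# Sun 60 degrees off perpendicular collects only half the power:
print(round(power_fraction(60.0), 3))   # 0.5
```

Note the commenter's 50%-at-35-degrees ballpark combines both azimuth and elevation offsets, so it is lower than the pure cos(35°) value.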
Okay, we’re going to need some other algebraic tools before we go any further into linear algebra. Specifically, we’ll need to know a few things about the algebra of polynomials. Here (and diverging from the polynomials discussed earlier) we’re talking about polynomials in one variable, and with coefficients in the field F we’re building our vector spaces over already. We’ll write this algebra as F[X], where X is now not a “variable”, like it was back in high school calculus. It’s a new element of the algebra. We start with the field F, which is trivially an algebra over itself. Then we just throw in this new thing called X. Then, since we want F[X] to still be an algebra over F, we have to be able to multiply elements. Defining a scalar multiple cX for each c in F is a good start, but we also have to multiply X by itself to get X^2. There’s no reason this should be anything we’ve seen yet, so it’s new. Going along, we get X^3, X^4, and so on. Each of these is a new element, and we also get scalar multiples c X^k, and even linear combinations c_0 + c_1 X + c_2 X^2 + …, as long as there are only a finite number of nonzero terms in this sum. That is, the coefficients c_k are all zero after some point. We customarily take X^0 = 1 — the unit of the algebra. Note here that we’re not using the summation convention for polynomials, though we could in principle. Remember, an algebra is a vector space, and what we’ve said above establishes that the set {1, X, X^2, X^3, …} constitutes a basis for this vector space! The algebra structure can be specified by defining it on pairs of basis elements. Remember that the structure is just a bilinear multiplication, which is just a linear map from the tensor square F[X] ⊗ F[X] to F[X]. And we know that the basis for a tensor product consists of pairs of basis elements. So we can specify this linear map on a basis and extend by linearity — bilinearity — whatever… Anyhow, how should we define the multiplication? Simply: X^m · X^n = X^(m+n). Then the whole rest of the algebra structure is defined for us.
Now this looks like adding exponents, but remember we can just as well think of these as indices on basis elements that just happen to add when we multiply corresponding basis elements. Thus we wouldn’t be out of place using the summation convention here, though we won’t for the moment.
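Thinking of the coefficients as a sequence indexed by basis elements, the rule that indices add under multiplication turns polynomial multiplication into a convolution of coefficient sequences. A quick illustrative sketch (the helper name is mine, not from the post):

```python
def poly_mul(a, b):
    """Multiply polynomials given as coefficient lists indexed by exponent,
    so [c0, c1, c2] represents c0 + c1*X + c2*X^2. Since basis elements
    multiply by adding indices (X^m * X^n = X^(m+n)), the product's
    coefficients are the convolution of the two input sequences."""
    result = [0] * (len(a) + len(b) - 1)
    for m, am in enumerate(a):
        for n, bn in enumerate(b):
            result[m + n] += am * bn  # contribution to the X^(m+n) term
    return result

# (1 + X) * (1 - X) = 1 - X^2
print(poly_mul([1, 1], [1, -1]))  # [1, 0, -1]
```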
COULD YOU PLEASE TELL ME WHAT YOU CALL A GROUP OF WHALES. Whales travel together in relatively small groups called 'pods'. Scientists have spent much time studying the dynamics of these pods, which tend to be very fluid (meaning that individual whales join and leave a pod, changing its structure and dynamics). But pods of whales traveling together are only a part of a larger group that includes all whales in a 'population'. Populations of whales live, feed, migrate, and mate together throughout their lives. A population is very spread out, often over miles and miles, but its members remain in contact using sound. Thanks for your question!
Diagram of the H3+ molecule (top circle) and its breakup into three distinct particles (bottom circle)--H+, H-, and H+. The H3+ molecule contains three protons (indicated by "+" signs in the diagram). When the three protons of H3+ are less than 10 atomic units apart (where 1 a.u. is equal to the size of the hydrogen atom), they are in what is called the "reaction zone." While in the reaction zone, the protons share the two electrons in the molecule and form strong "covalent bonds." Since the electrons are, in a sense, "spread out" among the protons, the H3+ system must be described by the laws of quantum mechanics, the modern theory of matter and energy at the atomic and molecular scale. As the protons spread out (indicated by the v's with arrows--these are called "velocity vectors") and the system increases in size, both electrons coalesce with a single proton, forming H-. Beyond a physical size of ~10 a.u., the H- can be identified as a distinct particle, and the two remaining protons can likewise be treated as distinct particles. In fact, the protons can be thought of as H+ ions--hydrogen is just a proton plus an electron, and H+ is simply hydrogen with the electron removed. The three particles--H+, H-, and H+--act as distinct objects and can be explained to a great extent by the laws of classical physics. They interact with one another and form what is known as a "three-body system." The H+, H-, and H+ interact via the Coulomb force, the "electrostatic" force that particles exert on each other because of their electrical charge. This research will be reported in paper K5.03 of the 1998 Annual Meeting of the Division of Atomic, Molecular, and Optical Physics (DAMOP), to be held May 27-30 in Santa Fe, New Mexico. See also L.M. Wiese, O. Yenen, B. Thaden, and D.H. Jaecks, "Measured Correlated Motion of Three Massive Coulomb Interacting Particles," Physical Review Letters, v. 79, no. 25, 22 December 1997.
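The classical three-body picture above can be sketched numerically. Here is a minimal example that just sums the pairwise Coulomb potential energy in atomic units; the linear geometry and the helper name are illustrative assumptions, not the measured configuration:

```python
import math

def coulomb_energy(charges, positions):
    """Total electrostatic potential energy of point charges in atomic
    units (charge in units of e, distance in bohr, energy in hartree),
    summed over each pair via Coulomb's law U = q_i * q_j / r_ij."""
    total = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            dx = [positions[i][k] - positions[j][k] for k in range(3)]
            r = math.sqrt(sum(d * d for d in dx))
            total += charges[i] * charges[j] / r
    return total

# H+, H-, H+ spaced 10 a.u. apart along a line (illustrative geometry):
q = [+1, -1, +1]
pos = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (20.0, 0.0, 0.0)]
print(coulomb_energy(q, pos))  # ~ -0.15 hartree: attraction dominates
```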
Link to related Physics News Update item (available May 20, 1998)
Characteristics of heliozoans: Actinophrys sol is a common species often referred to as the sun animalcule. Acanthocystis turfacea is a similar species commonly called the green sun animalcule because its body is coloured by harmless symbiotic green algae (zoochlorellae). Actinosphaerium species are multinucleate, often reaching a diameter of 1 mm (0.04 inch).
The Los Alamos Seismic Network (LASN) is located in north-central New Mexico, about 100 km north of Albuquerque. The network has been operated by Los Alamos National Laboratory since September 1973. For the first 10-15 years (to 1985), stations were located throughout northern New Mexico. It now has a more limited geographic extent, but is continually being upgraded and expanded. The stations extend over an area of about 20 km (N-S) by 15 km (E-W), approximately centered on 35 deg 50 min North, 106 deg 20 min West. LASN Station Info North-Central New Mexico Earthquakes See our Links Page for more sites related to seismology The Albuquerque Seismological Laboratory is the U.S. Geological Survey's instrument test and development center, and currently operates a broad-band shallow-borehole triaxial seismometer package as station ANMO. The Department of Earth and Environmental Sciences, Geophysics, at the New Mexico Institute of Mining and Technology ("New Mexico Tech") in Socorro operates networks of short-period seismographs in the Socorro area and in southeast New Mexico, near the DOE WIPP site. Their earthquake listing page can be seen here.
Jo Marchant, consultant In the 1970s, Karl Popper came up with a philosophical theory of reality that involved three interacting worlds: the physical world, the mental world, and "world 3", which comprises all products of the human mind - from ideas, pictures and music to every word ever written. Something very similar to world 3 is now real and increasingly influencing how we live, says George Djorgovski, co-director of the Center for Advanced Computing Research at Caltech. It's called the internet. It's the first morning of Science Foo camp, and I've chosen a session called "virtualisation of science and virtualisation of the world". In fact - fittingly for a meeting being held at Google headquarters - how we deal with life increasingly lived online turns out to be one of the main themes of the day. Djorgovski reckons that before long, being online will mean (among other things) not staring at a computer screen but being immersed in 3D virtual reality. He thinks this will be key to how we'll make scientific discoveries in the future. Forget graphs - two dimensions are totally inadequate for dealing with the vast amounts of data pouring out of everything from high-throughput genome sequencing to atom smashers like the Large Hadron Collider. We'll need machine intelligence capable of analysing these huge data sets, he says, as well as ways to visualise and interact with the results. Such technologies will surely revolutionise education too, with virtual learning replacing the traditional lecture. Djorgovski wants scientists and researchers to get more involved with this process now, pointing out that so far, advances in 3D technology are all coming from the entertainment industry: "We can't let the video game industry drive the future in what's the most important technology on the planet. There has to be more to it than spilling blood and slaying dragons." Sitting round the table are experts in everything from psychology and bioethics to space science.
Pat Kuhl, an expert in early child learning from the University of Washington, wonders what learning everything online will do to young brains. The consensus around the table is that, good or bad, the move into virtual reality environments is inevitable. "So let's try and offer something more than games," says Djorgovski. In a subsequent session on children's minds, Kuhl tells us about the importance of social cues in early learning. For example, it's well known that babies differ in their ability to distinguish sounds, depending on the language they are exposed to, by the time they are 10-12 months old. But Kuhl and her colleagues have recently shown that simply hearing the sounds is not enough. After a few sessions with a Mandarin speaker, American babies could distinguish certain sounds as well as Taiwanese babies, but those given the same exposure via audio or video learned nothing. So if we don't want kids' brains to atrophy in an increasingly virtual world, we must work out how to incorporate the relevant social cues. Kuhl has already found that making the TV screen interactive, so babies can turn it on and off by slapping it, increases - a little bit - how much they learn. She's now experimenting with web cams. In the afternoon, UK journalist and commentator Andrew Marr tackles the question of what will happen to journalism in an online world, particularly as e-readers - which Marr calls a "great engine of destruction" - become widespread. The media we consume will no longer be just words, or just pictures, but a collision of text, video, audio and animated graphics. And people will be able to choose individual items to consume, rather than buying a whole newspaper or watching just one channel. Like most commentators, Marr thinks this will be the end of newspapers - and perhaps of traditional journalists too. But he thinks this can only be a good thing, arguing that journalism, with its short-term focus and trivial level of debate, has been failing us anyway.
In the future he thinks news will come from niche, specialist groups - for example, people interested in access to clean water - coming together online. These might include bloggers, campaigners and lobbyists. Above them, authoritative news aggregators will pick out the most important stories of the day and feed them to the rest of us. Marr says this new model will be good for journalism and for democracy, because the people within each community of interest will be experts, and won't lose interest in a topic in the way that traditional journalists do. I'm sure Marr's right that newspapers as we know them are not going to survive. But I don't feel so optimistic about his vision. I'm not sure that having aggregators pick from a pool of stories written by specialists with an agenda is necessarily going to give us good journalism. Who is going to write articles in a way that non-specialists can understand? Who will make connections between different fields? Who will have the authority to hold politicians to account? Unfortunately the session ends before we have a chance to get into these questions. For some historical perspective, I end the day in a session run by Tilly Blyth, curator of computing at the Science Museum in London. Whereas Marr spoke to a packed lecture hall, now just five of us sit cosily around a table. Blyth tells us how the Science Museum is using online technologies to try to bring the history of science and technology into our everyday lives. One project is an iPhone app that displays stories and pictures from history that are relevant to a user's location. The other involves asking 200 British scientists to tell their life stories, then linking those oral histories to video clips, searchable transcripts, and perhaps the relevant scientific papers. Blyth wants to create a "global memory" for science, so that we can learn from changes that have gone before. "We tend to think that we're living through this amazing period of revolution," she says.
She shows us a satirical illustration from 1880 that depicts an array of futuristic contraptions including a steam-powered horse, a flying man, and a pneumatic tube linking London with Bengal. We aren't the first generation to grapple with the implications of radical technological change. Food for thought as I join the queue for dinner.
The Water World Must Have Life Tue Apr 21 23:56:55 BST 2009 by A. H. Engineer You must have misread this article. Gliese 581 is a red dwarf star, spectral type M3V. This means it is a main-sequence, fusion-powered star. This star, approximately 0.3 times the mass of our sun, is in fact hot and contains enough energy to warm any planets within a certain distance. In this case, solar radiation would be more than enough to maintain the liquid state of the planet's surface as long as the distance is within the habitability band around the star. Too close and the water would boil off; too far away and it would freeze. The main problem with the possibility of life on this planet has nothing to do with the planet itself. Red dwarf stars are notorious for being variables. At some times they will flare up, doubling their brightness in very short periods of time. At other times they can be covered by star spots that can reduce their output by up to 40%. While this is a fairly large red dwarf by most standards, it does have many inherent issues. Because of its power output and size, most of the light output is in the infrared spectrum. This can pose a problem for the organic chemistry needed to form and support life. It's a very interesting planet, and close enough to get some great data from. But I would think even Europa has a better chance of life than this planet.
This has been common knowledge for as long as the theory of evolution has been around, but only recently have scientists been able to identify a link between whales and their land ancestry. That all changed in 2007, when researchers discovered the 48-million-year-old skeleton of a creature called Indohyus. The bones of this tiny deer-like mammal have a thick outer layer, which is typically characteristic of slow-moving aquatic animals such as our modern hippopotamus. This, combined with the fact that its teeth also contain similar oxygen isotope ratios, indicates that Indohyus must have spent much of its time in water. This seems bizarre, but that is often the way in which evolution works. Over the years, environmental factors such as predators likely forced Indohyuses to spend more and more time in the water. The Indohyuses that were the most skilled at swimming were the most likely to evade capture and survive, so this trait became more and more common in subsequent generations. Over millions of years, this adaptation became so dramatic and environmentally essential that now our modern whale spends ALL its time in the water! (This is a very simplistic explanation of evolution, but you get the idea.) Here's a video depicting this idea. The African mousedeer, or chevrotain, has been known to escape predators by taking to the water (it is not closely related to whales, however):
Mercier, A.; Sun, Z.; Baillon, S.; Hamel, J.-F. (2011). Lunar Rhythms in the Deep Sea: Evidence from the Reproductive Periodicity of Several Marine Invertebrates. Journal of Biological Rhythms 26(1): 82-86. While lunar rhythms are commonly documented in plants and animals living in terrestrial and shallow-water environments, deep-sea organisms have essentially been overlooked in that respect. This report describes evidence of lunar periodicity in the reproduction of 6 deep-sea species belonging to 2 phyla. Occurrences of gamete release in free spawners and larval release in brooders exhibited significant peaks around the new and full moons, respectively. The exact nature of this lunar period (endogenous or exogenous rhythm) and its adaptive significance in the deep sea remain elusive. Current knowledge suggests that proxies of moon phases at depth may include fluxes in particulate matter deposition, cyclic currents, and moonlight for species living in the disphotic zone.
Web Dev Reference
3-tier application: a program organized into three major parts - the workstation or presentation interface, the business logic, and the database and related programming - each distributed to one or more separate places on a network.
Agile software development: calls for keeping code simple, testing often, and delivering small, functional bits of the application as soon as they're ready. The focus is to build a succession of parts, rather than delivering one large application at the end of the project.
HTML vocabulary vs. syntax: it is useful to make a distinction between the vocabulary of an HTML document - the elements and attributes, and their meanings - and the syntax in which it is written.
HTML element content models: an HTML element's content model specifies what is allowed inside the element.
Measurement - Key terms
Calibration: The process of checking and correcting the performance of a measuring instrument or device against a commonly accepted standard.
Scientific notation: A method used by scientists for writing extremely large or small numbers by representing them as a number between 1 and 10 multiplied by a power of 10. Instead of writing 0.0007713, the preferred scientific notation is 7.713 · 10^−4.
SI: An abbreviation of the French term Système International d'Unités, or International System of Units. Based on the metric system, SI is the system of measurement units in use by scientists worldwide.
Significant figures: Numbers included in a measurement, using all certain numbers along with the first uncertain number.
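The scientific-notation conversion can be checked mechanically. A short Python sketch (not part of the original glossary) that converts a number to mantissa-and-exponent form:

```python
import math

# Scientific notation: represent a number as m * 10^e with 1 <= |m| < 10.
value = 0.0007713

# Python's "e" format performs the conversion directly:
print(f"{value:.3e}")   # 7.713e-04

# Recovering the mantissa and exponent by hand:
exponent = math.floor(math.log10(abs(value)))   # -4
mantissa = value / 10 ** exponent               # ~7.713
print(mantissa, exponent)
```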
With the state our planet is in today, more and more people are attempting to learn all about green energy: what it is, and what it does for your home, your wallet, and the environment. If you have questions about green energy, keep reading this article and you may just find an answer. In simple terms, green energy is renewable energy that has no adverse effect on the environment at large. For example, oil and coal send up harmful CO2 emissions that cause a greenhouse effect. But other methods, like wind and solar energy, cause no such impact and are thus considered green and renewable. As stated above, when fossil fuels and other harmful gases, chemicals, and substances are burned in order to create energy, they emit carbon into the atmosphere. There is only so much carbon dioxide our atmosphere can hold before we turn into a giant oven. So, by using energy sources that do not release this CO2, the environment doesn't suffer and can begin to stabilize itself. You can find green energy being used all over the place. You're reading this article via the Internet right now, and you may be reading it from a site or a main host whose services are powered by wind or solar. You can also find different automobiles, personal homes, etc., using green energy to receive their power. Different countries are all trying to push through green legislation, with the European Union actually seeking in excess of ten trillion dollars in order to put forth a worldwide green initiative. For America in particular, there are always pieces of legislation being put forth, but none seem to make it out of the House before being shot down. You have probably heard of a few different types of green energy, and the ones you have heard about are probably the most popular. Wind, for example, is the most common type of green energy. And while natural gas isn't considered "green" by most, because it's a gas, this power source is also very safe and very popular.
Solar comes in third, but that’s only because of its inefficiency at this current juncture. When speaking about the least green, you’re looking at oil. Crude oil and gas are incredibly harmful to the atmosphere, throwing out tons of emissions every single day. Coal is close behind, polluting the atmosphere regularly. Unless you want your descendants living on an iceberg, green energy has to take the place of fossil fuels. At first, the ice is going to melt and cause water levels to rise and the planet to heat up. Shortly after that, things are going to freeze, due to ocean current changes and no heat being able to penetrate the atmosphere. The only real drawbacks of green energy are really the costs associated with the sources and the amount of space wind and solar farms take up. Other than that, they do not impact the environment and can actually help to save it. The people living on the planet today are the people responsible for the next generation, so it’s important that we go green as soon as possible. If you’ve been thinking about going green, there’s no better day than today to start.
Friday, April 20, 2012
Rogue waves: high as a 10-story building, they can capsize huge ships on the sunniest of days
Rogue waves, until recently dismissed by scientists as sailors' fantasies, may be more common than previously thought and might go a long way toward explaining why so many big ships — up to two a week — sink, even in fair weather. From Time:
...there was little evidence to back it up [the phenomenon of rogue waves]. But in 1995, an oil rig in the North Sea recorded an 84-ft.-high (25.6 m) wave that appeared out of nowhere, and in 2000, a British oceanographic vessel recorded a 95-ft.-high (29 m) wave off the coast of Scotland.
Last week I wrote about timing how long it took valves at the Hatch plant to open and close. Here's another case. On April 12, 2012, workers tested how long it took the three main steam isolation valves at the Shearon Harris nuclear plant near Raleigh, North Carolina, to travel from the fully open to the fully closed position. These valves are designed to close within 5 seconds to limit how much radioactivity is released to the atmosphere during an accident. Stopwatches have been used when timing valves: when the operator turns the switch in the control room to signal a valve to close (or open), the stopwatch is started when the valve's position lights indicate it has begun moving, and stopped when the position lights indicate the valve's movement has ended. That day at Harris, a calendar would have been more appropriate than a stopwatch. Main steam isolation valve "A" at Harris took 4.51 seconds to close that day. But when the switches for main steam isolation valves "B" and then "C" were turned, the valves did not close. At least not right away. These valves have large springs that hold them closed. The springs for each valve are designed to provide 63,988 pounds (nearly 32 tons) of closing force. Compressed air is supplied to push the valves open against this spring force. This design allows the springs to close the valves (their fail-safe position) when either power or air pressure is lost. Workers went out into the plant to manually vent the compressed air supplied to main steam isolation valves "B" and "C." They heard the tell-tale sound of the air being released, but the valves stayed open. Main steam isolation valve "B" closed 37 minutes after its air supply was removed. Main steam isolation valve "C" closed 4 hours and 7 minutes after its air supply was removed. Workers disassembled all three main steam isolation valves. They found that corrosion had caused some of the internal parts to swell in size by nearly 20 percent.
This growth effectively locked the valves in place against the springs’ closing force even after air pressure had been relieved. Eventually, the spring force overcame the friction to close the valves. The valves had been installed during construction of the plant more than a quarter of a century earlier. The valves’ manufacturer introduced models having internal parts more resistant to corrosion but had never recommended that customers with older valves upgrade them. Workers at Harris replaced all three main steam isolation valves with the new models. The replacement valves were re-tested successfully. The NRC dispatched a special inspection team to Harris in 2012. The NRC found that from the plant’s initial startup until 2000, workers had exercised the main steam isolation valves every three months per the manufacturer’s recommendation. These exercises involved closing each valve ten percent to verify proper functioning of the valves, their actuators, and controls. The plant’s owner discontinued this recommended testing in 2000 as a cost-saving measure. The safety evaluation per 10 CFR 50.59 performed in 2000 for discontinuing the quarterly exercising failed to mention that the valve vendor recommended the exercising or to discuss potential new failure modes – like the one that happened – that might be introduced by eliminating the periodic exercises. The NRC also discovered that the air-operated main steam isolation valves had never been tested under the plant’s air-operated valve testing programs. The main steam isolation valves had been classified as Category 2 valves which do not require testing. But the NRC determined that the main steam isolation valves met the Category 1 definition as performing an active safety-related function of high safety significance. 
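The acceptance test described above reduces to a simple pass/fail comparison: each main steam isolation valve must stroke closed within 5 seconds. A minimal sketch using the closure times reported for the three Harris valves (the function name and data layout are illustrative, not from any plant procedure):

```python
# Pass/fail check for MSIV stroke-time tests.
# Acceptance criterion: full closure within 5.0 seconds.
LIMIT_S = 5.0

def stroke_test(times_s):
    """Return {valve: True if closure time is within the limit} for measured times in seconds."""
    return {valve: t <= LIMIT_S for valve, t in times_s.items()}

# Closure times observed at Harris on April 12, 2012:
# "A" closed in 4.51 s; "B" took 37 minutes; "C" took 4 hours 7 minutes.
measured = {"A": 4.51, "B": 37 * 60, "C": 4 * 3600 + 7 * 60}
results = stroke_test(measured)
# results -> {'A': True, 'B': False, 'C': False}
```

Valve “A” passes by a slim margin; “B” and “C” miss the criterion by factors of hundreds and thousands, which is what made the failures so stark.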
It’s a line from Cool Hand Nuke – “what we’ve got here is failure to communicate.” The company failed to communicate solid justification in 2000 when it discontinued the quarterly exercising of the main steam isolation valves. The manufacturer recommended that the exercising be performed to verify proper functioning. After the valves had been installed at Harris in the 1980s, the manufacturer developed more corrosion resistant materials for the valves’ internals. The company’s justification more than a decade later failed to address how foregoing the exercising might affect the performance of the more corrosion-prone valves. The NRC also failed to communicate how it managed to overlook the facts that (a) the air-operated main steam isolation valves had never been tested under the plant’s air-operated valve program, and (b) the quarterly exercising of the main steam isolation valves had been discontinued more than a decade earlier. After the valves literally took hours to close, NRC inspectors identified both testing irregularities. But why hadn’t NRC inspectors discovered these problems earlier? After all, there are many air-operated valves at Harris, but the main steam isolation valves are among the small minority contained within the NRC-issued operating license for Harris. Is it unreasonable to expect that over the course of a decade an NRC inspector will assess the testing regime – or lack thereof – for the few air-operated valves having such high safety significance that they are explicitly mentioned in the reactor’s operating license? Clearly, the plant’s owner did a poor job testing the main steam isolation valves and needs to do better in the future. But the NRC also has lessons to learn from this and other special inspections it conducts. The NRC dispatches special inspection and augmented inspection teams to plant sites when events may increase the chance of core damage by a factor of 10 or more. The NRC sends out about a dozen such teams annually. 
These team inspections should serve dual purposes: (1) finding and fixing specific problems at the affected plants, and (2) assessing whether programmatic adjustments are needed in the NRC’s safety oversight process. The NRC lacks resources to monitor every test and inspect every inch of piping. These team inspections must be assessed by the NRC to determine if resource reallocations are needed to better focus its oversight efforts. Nuclear safety defense-in-depth demands best efforts by plant owners and the NRC. In this case, both failed. Both failures must be recognized and corrected for safety to be improved in the future. “Fission Stories” is a weekly feature by Dave Lochbaum. For more information on nuclear power safety, see the nuclear safety section of UCS’s website and our interactive map, the Nuclear Power Information Tracker. Support from UCS members makes work like this possible. Will you join us? Help UCS advance independent science for a healthy environment and a safer world.
Halobacterium salinarum needs very little to survive, but a growth medium is vital: it provides the salty environment and the nutrients needed for growth. A warm place around 37 degrees Celsius is ideal but not required; the heat simulates a warm climate and causes H. salinarum to grow at a faster rate. Agitation is also important, since it circulates air and further speeds growth. H. salinarum is a halophilic and thermophilic model organism. It is commonly mistaken for a bacterium because of its name, but it actually belongs to the domain Archaea. It shares certain qualities of both domains: it has gas vesicles, flagella, and a cell wall like bacteria, but it can survive in high-saline environments and at high temperatures like other archaea. H. salinarum is often found in places with high salt concentrations, such as San Francisco Bay, the Great Salt Lake, Yellowstone National Park, and other waters with salinity around 4 M or higher. To survive in these environments without osmotic stress tearing its cell wall apart when salt levels change, H. salinarum relies on osmoprotectants that keep it in equilibrium with its surroundings. These osmoprotectants allow H. salinarum to pump large amounts of salt into the cell, but this can also be a lethal liability: if the cell is exposed to low-molarity water, osmosis floods it with water, causing the membrane to lyse, or burst. To obtain its purple or reddish color, H. salinarum makes bacteriorhodopsin, a protein found in the membrane. The organism can generate energy in several ways, including photophosphorylation, fermentation, the Krebs cycle, substrate-level phosphorylation, and oxidative phosphorylation. Bacteriorhodopsin captures light energy that is used to make other useful chemicals. The cells are single-celled and rod-shaped, and they are notable as among the few archaea capable of phototrophic growth. H. salinarum is often used in lab research because it is similar to eukaryotes in several respects and has simple, completely sequenced DNA, which makes it easier to manipulate and study than many other organisms. Its high-saline growth medium also helps maintain sterility, since little else can grow in it. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
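The osmotic stress discussed above can be put in rough numbers with the van 't Hoff relation Π = iMRT. The sketch below estimates the osmotic pressure across a membrane separating a 4 M NaCl brine from pure water; the dissociation factor and temperature are idealized assumptions for illustration, and real halophile physiology is far more complex:

```python
# Rough van 't Hoff estimate of osmotic pressure for a 4 M NaCl brine.
R = 8.314      # J/(mol*K), gas constant
T = 310.0      # K, roughly the 37 C growth temperature mentioned above
i = 2          # van 't Hoff factor for NaCl (Na+ and Cl-), idealized
M = 4 * 1000   # mol/m^3 (4 mol/L)

pi_pascal = i * M * R * T          # osmotic pressure in pascals
pi_atm = pi_pascal / 101325.0      # ~200 atm
# A pressure of this order across the membrane is why a sudden dip into
# fresh water bursts the cell if salt isn't pumped out fast enough.
```

The result, on the order of 20 MPa, makes the "lethal liability" concrete: no cell wall withstands that gradient for long.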
Solar panels are widely regarded as one of the great innovations of recent decades; however, many myths and misconceptions persist. These include claims about the use of solar panels during the winter and about the efficiency of the panels themselves.

Myth: Solar panel efficiency is too low. As with any new technology, solar panels are continuously developed and improved. They can and will be more efficient in the future, but they are already good enough to give an excellent return on investment.

Myth: Solar won't work in the UK. Solar panels will work anywhere in the world where there is sunshine. Obviously the UK does not receive as much sunlight as some other countries, but solar panels will still produce solar energy even in cloudy conditions. Heat and temperature are irrelevant to solar panels, as they require radiation, not warmth, for photovoltaic generation.

Myth: The solar panels look ugly on my roof. The look of solar panels comes down largely to personal taste. Some will say they are ugly, whilst others will marvel at their aesthetic beauty. However, if the solar panels are fitted flush to the roof, they can often be mistaken for skylights.

Myth: Solar energy is too expensive. Yes, the initial investment can be substantial, with an average-sized installation costing approximately £10,000. But the cost of solar panels has come down drastically and will continue to fall as the technology develops. With the government's Feed-in Tariff scheme, you are looking at a return-on-investment period of around 7 – 8 years.

Myth: Solar panels require maintenance. This is largely untrue. Solar panels are mostly self-maintaining, with rain washing the majority of the dust from the surface. The inverter is the only fragile part of the system, with an average life span of around 20 years before needing to be replaced.
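The return-on-investment claim above (roughly £10,000 installed, 7–8 year payback) can be sanity-checked with a simple undiscounted payback calculation. The annual benefit figure below is an assumption back-derived from the article's numbers, not a quoted tariff rate:

```python
# Simple (undiscounted) payback-period estimate for a solar installation.
install_cost = 10_000.0   # GBP, average installation per the article
annual_benefit = 1_350.0  # GBP/yr, assumed Feed-in Tariff income plus bill savings

payback_years = install_cost / annual_benefit
# ~7.4 years, consistent with the quoted 7-8 year range
```

A real calculation would discount future cash flows and account for tariff changes and panel degradation; this sketch only shows the arithmetic behind the headline number.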
Last modified on 28 May 2013, at 14:54 Plate tectonics is the theory (nothing in science can be absolutely proven) that the Earth's crust is constantly moving, usually only a few centimeters per year. When plates slip suddenly instead of sliding smoothly, there is an earthquake. Plate tectonics mainly deals with the crust of the earth, not the mantle or core. The theory of Pangaea is related to this. The Pangaea theory says that the continents as they are now have moved apart over millions of years to make the world the way it is today, although some creation scientists/geologists say that Pangaea was pulled apart rather suddenly in geological terms (only 40-150 days!). Plate tectonics is the reason earthquakes in, say, Wisconsin are not as strongly felt or as damaging as an earthquake in, say, California: Wisconsin is at the center of a plate, while California sits on the boundary between two plates.
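To put typical plate speeds in perspective, a few centimeters per year adds up over geological time. A minimal sketch (the rate is a representative assumption; real plates move at roughly 2–10 cm per year):

```python
# Cumulative displacement of a plate moving at a steady rate.
RATE_CM_PER_YEAR = 5.0   # representative assumption

def displacement_km(years):
    """Distance travelled in kilometres after `years` at the assumed rate."""
    return RATE_CM_PER_YEAR * years / 1e5   # cm -> km

# displacement_km(100)  -> 0.005 km (about 5 m per human lifetime-scale century)
# displacement_km(2e8)  -> 10000 km, continent-scale drift over 200 million years
```

The same steady rate that is imperceptible over a century rearranges entire oceans over hundreds of millions of years, which is the core of the mainstream Pangaea timeline.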
Processes, sediments, and stratigraphy of the Fly River Delta
Walsh, John P., and Ridd, Peter V. (2009) Processes, sediments, and stratigraphy of the Fly River Delta. In: The Fly River, Papua New Guinea: environmental studies in an impacted tropical river system. Developments in Earth & Environmental Sciences 9. Elsevier, Burlington, MA, USA, pp. 153-176.
View at Publisher Website: http://dx.doi.org/10.1016/S1571-9197(08)...
Voluminous rainfall and rugged, tectonically active mountains are two primary ingredients that make Indo-Pacific islands incredible suppliers of sediment to the ocean. It is estimated that six islands alone (Papua New Guinea (PNG), Sulawesi, Borneo, Sumatra, Java, Timor) supply 20–25% of the total annual sediment load transported to the ocean globally (Milliman et al., 1999). Many rivers draining these islands discharge onto broad, low-gradient continental shelves characterized by large tidal ranges, and these are locations for tide-dominated deltas. The Fly River of PNG is one example.
Item Type: Book Chapter (Research - B1)
This publication does not have an abstract. The first paragraph of this chapter's Introduction is displayed as the abstract.
FoR Codes: 04 EARTH SCIENCES > 0405 Oceanography > 040503 Physical Oceanography @ 100%
SEO Codes: 96 ENVIRONMENT > 9606 Environmental and Natural Resource Evaluation > 960604 Environmental Management Systems @ 100%
Deposited On: 16 Apr 2010 14:15
Last Modified: 12 Feb 2011 03:32
Though the aliens in 2010 told humanity to stay away from Jupiter's moon Europa, that hasn't stopped aerospace engineer Joseph Shoer from designing a probe that could look beneath the moon's icy surface and peek into the oceans below. Shoer proposes a way to search beneath the kilometers-thick ice of Europa by sending light probes into vast rifts on Europa's surface (you can see one above). Experts believe that these rifts may open and close with Europa's tides. Shoer writes: The Ice Fracture Explorer, or IFE, would be a combination lander/penetrator vehicle that I imagine to be a little smaller than the size of one of the MER rovers. Ideally, several IFEs would accompany an orbiter to Europa. The orbiter component of the mission would contain instruments designed to give the planetary scientists on the mission enough information to select a few double-ridged cracks that are actively being worked open and shut by tides. The flight controllers would then dispatch an IFE to each of those cracks . . . The IFE will wait until the crack is closed, and then separate from the landing legs and inflate some gas-bladder cushions, causing the vehicle to roll down towards the center of the double ridge. Using its thrusters for attitude adjustment, the IFE will right itself, centered over the crack . . . Next, the IFE would fire projectiles into the crushed-ice ridges on either side of the vehicle. These projectiles could be barbed, contain chemical flash heaters, or anything else designed to make them really stick into the ice, because they would be the anchors for twin tether lines that unreel from the spacecraft. The IFE would also deploy a high-gain antenna for communicating with the orbiter overhead, since the mission will have to happen very quickly from this point on. . . . As Jupiter rises overhead, its tides will pull apart the two sides of the ice fracture. The IFE will be suspended in the middle as the crack opens, with nothing below it until the ocean 1-10 km down! 
At this point, the IFE will drop its deflated cushions and begin to deploy a smaller penetrator vehicle from its underside. The penetrator is a small, two-stage vehicle with two instrument packages, a hard-shell body, and a data line connecting it to the IFE's main bus. . . Eventually, the penetrator would hit the ocean surface. The water would have iced over, but the weighted penetrator with its reinforced lower body would smash through the ice and reach the liquid water below. At that point, a buoyant surface instrument package would separate from the lower penetrator, which would continue down into the water. The surface instruments would try to identify any interesting chemistry or biology occurring at the water surface, where photosynthesis might take place. The lower body of the penetrator would simply try to go as far down as it can, illuminating the depths and taking pictures. There's a lot more to Shoer's idea, so you'll want to read more about it on his blog.
<urn:uuid:094ad698-1142-4840-9141-f2bd84ac92db>
2.9375
636
Personal Blog
Science & Tech.
49.84902
Science Fair Project Encyclopedia Ohm's law, named after its discoverer Georg Ohm, states that the potential difference (or voltage drop V) between the ends of a conductor (for example, a resistor R) and the current I flowing through R are proportional at a given temperature: V = I·R, where V is the voltage and I is the current; the equation yields the proportionality constant R, which is the electrical resistance of the device. The law is strictly true only for resistors whose resistance does not depend on the applied voltage, which are called ohmic or ideal resistors or ohmic devices. Fortunately, the conditions where Ohm's law holds are very common (Ohm's law is never completely accurate [if R is assumed to be constant] for "real world" devices, because no real device is an ohmic device for every voltage and current: at some level, the device will open or short, for example, by burning up or arcing). The relation V / I = R even holds for non-ohmic devices, but then the resistance R depends on V and is no longer a constant. To check whether a given device is ohmic or not, one plots V versus I and checks whether the graph is a straight line through the origin. The Ohm's law equation is often stated as V = I·R in part because that is the variation very commonly used with resistors. Details of physics and mathematics Physicists often use the so-called microscopic form of Ohm's law: j = σE, where j is the current density (current per unit area), σ is the conductivity (which can be a tensor in anisotropic materials) and E is the electric field. This is the form Ohm originally stated. The common form V = I·R used in circuit design is the macroscopic, averaged-out version. It is important to note that Ohm's law is not an actual mathematically derived law, but an observation supported by significant empirical evidence. There are times when Ohm's law does break down, however, because it is really a simplification. 
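The plot-V-versus-I test described above can be automated: if the device is ohmic, every V/I ratio is the same constant R. A minimal sketch with synthetic data (the 5% tolerance is an arbitrary choice, not part of the law):

```python
# Check whether a device is approximately ohmic: V/I should be constant.
def is_ohmic(currents, voltages, rel_tol=0.05):
    """True if all V/I ratios agree with their mean within rel_tol."""
    ratios = [v / i for i, v in zip(currents, voltages) if i != 0]
    mean_r = sum(ratios) / len(ratios)
    return all(abs(r - mean_r) / mean_r <= rel_tol for r in ratios)

# A 100-ohm ideal resistor: V = I * 100 exactly at every point.
I_vals = [0.01, 0.02, 0.05, 0.10]
V_ohmic = [1.0, 2.0, 5.0, 10.0]

# A diode-like device: V grows much more slowly than I, so V/I is not constant.
V_diode = [0.55, 0.58, 0.62, 0.65]

# is_ohmic(I_vals, V_ohmic) -> True
# is_ohmic(I_vals, V_diode) -> False
```

This is the numerical equivalent of checking that the V–I graph is a straight line through the origin.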
The primary causes of resistance to electrical flow in a metal include imperfections, impurities, and the fact that electrons bounce off the atoms themselves. When the temperature of the metal increases, the collisions between electrons and atoms increase, so that when a substance heats up because of electricity flowing through it (or by whatever heating process), the resistance will increase. The resistance of an ohmic substance depends on temperature in the following way: R = ρL/A, with ρ = ρ0[1 + α(T − T0)], where ρ is the resistivity, L is the length of the conductor, A is its cross-sectional area, T is its temperature, T0 is a reference temperature (usually room temperature), and ρ0 and α are constants specific to the material of interest. In the above expression, we have assumed that L and A remain unchanged within the temperature range. It is worth mentioning that temperature dependence does not make a substance non-ohmic as long as R does not vary with voltage (or V / I = constant) at a given temperature. Relation to heat conduction The equation for the propagation of electricity formed on Ohm's principles is identical with that of Jean-Baptiste-Joseph Fourier for the propagation of heat; and if, in Fourier's solution of any problem of heat-conduction, we change the word temperature to electric potential and write electric current instead of flux of heat, we have the solution of a corresponding problem of electrical conduction. The basis of Fourier's work was his clear conception and definition of conductivity. But this involves an assumption: that, all else being the same, the flux of heat is strictly proportional to the gradient of temperature. Although undoubtedly true for small temperature-gradients, it is not clear that it generalizes. An exactly similar assumption is made in the statement of Ohm's law: other things being alike, the strength of the current is at each point proportional to the gradient of electric potential. 
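The temperature dependence of resistance just described can be sketched numerically. The copper values below are standard textbook figures used purely as an illustration:

```python
# R(T) = rho0 * (1 + alpha*(T - T0)) * L / A for an ohmic conductor.
RHO0  = 1.68e-8   # ohm*m, copper resistivity near room temperature (textbook value)
ALPHA = 3.9e-3    # 1/K, copper temperature coefficient of resistivity (textbook value)
T0    = 20.0      # C, reference temperature

def resistance(T_c, length_m, area_m2):
    """Resistance of a uniform conductor at temperature T_c (Celsius)."""
    rho = RHO0 * (1 + ALPHA * (T_c - T0))
    return rho * length_m / area_m2

# 100 m of 1 mm^2 copper wire:
r_cold = resistance(20.0, 100.0, 1e-6)    # ~1.68 ohms at room temperature
r_hot  = resistance(100.0, 100.0, 1e-6)   # ~2.2 ohms; resistance rises with T
```

Note that at each fixed temperature the wire is still perfectly ohmic: V/I is constant, it is simply a different constant at 100 °C than at 20 °C.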
It happens, however, that with our modern methods it is much easier to test the accuracy of the assumption in the case of electricity than in that of heat. Die galvanische Kette, mathematisch bearbeitet (The galvanic circuit investigated mathematically, 1827) - Calculation of Ohm's law · The Magic Triangle - Calculation of electric power, voltage, current and resistance The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Gas Ring Around Young Star Raises Questions The ring is part of the star’s planet-forming disk, and it’s as far from V1052 Cen as Earth is from the Sun. Discovered with the European Southern Observatory’s Very Large Telescope, its edges are uniquely crisp. Carbon monoxide is often detected near young stars, but the gas is usually spread through the planet-forming disk. What’s different about this ring is that it is shaped more like a rope than a dinner plate, said Charles Cowley, professor emeritus at the University of Michigan, who led the international research effort. “It’s exciting because this is the most constrained ring we’ve ever seen, and it requires an explanation,” Cowley said. “At the present time, we just don’t understand what makes it a rope rather than a dish.” Perhaps magnetic fields hold it in place, the researchers say. Maybe “shepherding planets” are reining it in like several of Saturn’s moons control certain planetary rings. The star’s unique properties first caught the researchers’ attention in 2008, and they have been studying it intensely ever since. Understanding the interaction between central stars, their magnetic fields, and planet-forming disks is crucial for astronomers to reconstruct the solar system’s history. It is also important to account for the diversity of the known planetary systems beyond our own. This new finding raises more questions than it answers about the late stages of star and solar system formation. “Why do turbulent motions not tear the ring apart?” Cowley wondered. “How permanent is the structure? What forces might act to preserve it for times comparable to the stellar formation time itself?” The team is excited to have found an ideal test case to study this type of object. “This star is a gift of nature,” Hubrig said.
A mountain is a landform that extends above the surrounding terrain in a limited area. A mountain is generally much higher and steeper than a hill, but there is considerable overlap, and usage often depends on local custom. Some authorities define a mountain as a peak with a topographic prominence over an arbitrary value: for example, the Encyclopædia Britannica requires a prominence of 2,000 feet (610 m). A mountain is usually produced by the movement of lithospheric plates, either orogenic movement or epeirogenic movement. The compressional forces, isostatic uplift and intrusion of igneous matter force surface rock upwards, creating a landform higher than the surrounding features. The height of the feature makes it either a hill or, if higher and steeper, a mountain. The absolute heights of features termed mountains and hills vary greatly according to an area's topography. The major mountains tend to occur in long linear arcs, indicating tectonic plate boundaries and activity. Mountain creation tends to occur in discrete periods, referred to as orogenies. Two types of mountain are formed depending on how the rock reacts to the tectonic forces – block mountains or fold mountains. Some isolated mountains were produced by volcanoes, including many apparently small islands that reach a great height above the ocean floor. Block mountains are created when large areas are widely broken up by faults creating large vertical displacements. The uplifted blocks are block mountains or horsts. The intervening dropped blocks are termed graben: these can be small or form extensive rift valley systems. This form of landscape can be seen in East Africa, the Vosges, the Basin and Range province of Western North America and the Rhine valley. Where rock does not fault it folds, either symmetrically or asymmetrically. The upfolds are anticlines and the downfolds are synclines; in asymmetric folding there may also be recumbent and overturned folds. 
The Jura mountains are an example of folding. Over time, erosion can bring about an inversion of relief: the soft upthrust rock is worn away so the anticlines are actually lower than the tougher rock of the synclines. Heights of mountains are generally given as heights above mean sea level. The highest mountain on Earth is Everest, 8850 m, set in the world's most significant mountain range, the Himalaya. Other definitions of height are possible. The peak that is farthest from the centre of the Earth is Chimborazo in Ecuador. At 6,272 m above sea level it is not even the tallest peak in the Andes, but because the Earth bulges at the equator and Chimborazo is very close to the equator, it is 2,150 m further away from the Earth's centre than Everest. The peak that rises farthest from its base is Mauna Kea on Hawaii, whose peak is over 9,000 m above its base on the floor of the Pacific Ocean. The tallest known mountain in the solar system is Olympus Mons, located on Mars. Sufficiently tall mountains have very different climatic conditions at the top than at the base, and will thus have different life zones at different altitudes on their slopes. The plants and animals of a zone are somewhat isolated when the zones above and below are inhospitable, and many unique species occur on mountainsides as a result. Extreme cases are known as sky islands. Mountains are not generally favored for human habitation; the weather is harsher, less food is available, and there is little level ground suitable for farming. Most mountains of the world have been left in their natural state, and are today primarily used for recreation. Some mountains are very difficult to climb, and offer spectacular views. Some people therefore enjoy the sport of mountaineering. Mountains are also the site for the sport of downhill skiing. People engaging in these activities often stay at mountain resorts built for the purpose.
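The Chimborazo-versus-Everest comparison above follows from the Earth's equatorial bulge and can be checked with the geocentric-radius formula for the WGS84 ellipsoid. A sketch (the summit latitudes are approximate, and summit elevation is simply added to the local ellipsoid radius):

```python
import math

# WGS84 ellipsoid semi-axes in metres.
A_EQ = 6378137.0    # equatorial radius
B_PO = 6356752.3    # polar radius

def geocentric_radius(lat_deg):
    """Distance from Earth's centre to the ellipsoid surface at a given latitude."""
    phi = math.radians(lat_deg)
    c, s = math.cos(phi), math.sin(phi)
    num = (A_EQ**2 * c)**2 + (B_PO**2 * s)**2
    den = (A_EQ * c)**2 + (B_PO * s)**2
    return math.sqrt(num / den)

# Summit distance from Earth's centre = local ellipsoid radius + elevation.
everest    = geocentric_radius(27.99) + 8850   # ~28 N, 8850 m elevation
chimborazo = geocentric_radius(-1.47) + 6272   # ~1.5 S, 6272 m elevation

diff_m = chimborazo - everest
# ~2.1 km: Chimborazo's summit is farther from the centre, as the text states
```

The computed difference of roughly two kilometres matches the 2,150 m figure quoted above to within the precision of the approximate inputs.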
Springtime promises green fields and woodlands as well as the bright colors and scents of flowers. But try awakening another sense this May. Each year millions of migrant songbirds return to the Northeast, and the way we first notice them is a change in the soundscape. For example, on spring evenings, the male American woodcock puts on its breeding display in the lowlands surrounding Wappinger Creek. This plump, pigeon-sized bird is best located by the loud “peent” call it produces on the ground. After a series of these calls, you might catch the male’s dark shadow climbing skyward with a twittering sound produced by its wing beats. After reaching a height of 100 meters or more, the woodcock then spirals back to the ground, interspersing vocalizations with the twittering wing beats. This display is for the female, as are many of the songs you’ll hear this spring. Birds produce a dazzling range of vocalizations, from chickadees’ simple “hey sweetie” and blue jays’ alarm scold to the haunting songs of veeries and other thrushes, the vocal acrobats of the bird world. In addition to creating a soundscape, which is as true to spring as are the flowers, the ecology of birds is intimately tied to the sounds of other organisms around them. The science of soundscapes is studied at the Cary Institute by Visiting Scientist Ken Schmidt. For more information on soundscapes, visit the kiosk in the lowlands when you’re spotting woodcocks this spring.
HydroClim Minnesota - October 2009 A monthly electronic newsletter summarizing Minnesota's climate conditions and the resulting impact on water resources. Distributed on the Wednesday following the first Monday of each month. State Climatology Office - DNR Waters What happened in September: - September 2009 rainfall was very light across much of Minnesota. Monthly rainfall totals fell short of historical September averages by one to three inches in the eastern two-thirds of the state. Some east central and southeastern Minnesota communities received no measurable rainfall during the first three weeks of the month. Rainfall deficits, along with very warm September temperatures, amplified the drought situation in many locations. [see: September 2009 Climate Summary Table | September 2009 Precipitation Departure Map] - Although this product is intended to summarize September 2009 climate conditions, a notable turn of events during the first week of October must be mentioned. Over three inches of rain has fallen thus far in October across much of the southern two-thirds of Minnesota. The heavy rains have substantially eased many concerns regarding soil moisture deficits that would have impacted agricultural, horticultural, and forestry interests in spring 2010. [see: Total Rainfall - October 1 through 7] - Monthly mean temperatures for September 2009 were very warm, averaging four to eight degrees above the historical average. It was Minnesota's fifth warmest September on record, even though the month ended with record-setting cold days. The warm September temperatures contrasted with the seasonally cool temperatures that persisted through the summer. Extreme temperature values for September ranged from a high of 88 degrees at Marshall on the 18th, to a low of 20 degrees in Embarrass (St. Louis County) on the 30th. Many low temperature records were set in northern Minnesota during the morning of September 30. 
Interestingly enough, no high temperature records were set during September in spite of the persistently warm weather during the first three weeks of the month. [see: September 2009 Climate Summary Table] Where we stand now: - The U. S. Drought Monitor, released on October 1, placed areas of east central Minnesota in the Severe and Extreme categories. An area of north central Minnesota, centered on the Mississippi headwaters region, was also depicted as experiencing Severe drought. Elsewhere across the state, many counties were described as Abnormally Dry or undergoing Moderate drought. The drought designations are the result of two spells of dry weather, one during this year's growing season, and one longer-term. The shorter-term dryness began in April 2009 and persisted through September over nearly all of Minnesota. April through September precipitation totals in many locations fell short of the historical average by more than four inches. The longer-term dry spell commenced in mid-June 2008 and most profoundly affects east central Minnesota. In this area, 16-month precipitation deficits of ten or more inches have led to a significant impact on hydrology. The U.S. Drought Monitor product to be released on October 8 will reflect the impacts of the heavy early-October rains. Large-scale categorical changes can be expected. The U. S. Drought Monitor index is a blend of science and subjectivity where drought categories (Moderate, Severe, etc) are based on several indicators. [see: Drought 2009] - DNR Waters and the U.S. Geological Survey report that stream discharge values for roughly 15 percent of Minnesota measurement sites rank below the 25th percentile in the historical data distribution for the date. Some measurements fall below the 10th percentile when compared with historical early-October values. Some of the lowest stream flow values, relative to historical data, are observed along the upper reaches of the Mississippi River and in northeastern Minnesota. 
By contrast, very high seasonally-weighted stream flow values are reported along the Red River and the upper Minnesota River. [see: USGS Stream Flow | DNR Stream Flow] - The Lake Superior water level is up one inch from last year at this time but remains below the long-term average. Water levels on many south central and east central Minnesota lakes are very low. White Bear Lake, on the Ramsey/Washington county border, is near its all-time recorded low level. [see: Corps of Engineers Great Lakes Water Levels | White Bear Lake Water Level] - The Minnesota Agricultural Statistics Service reports that as of October 4, topsoil moisture was 4% "Very Short", 13% "Short", 70% "Adequate", and 13% "Surplus". Additional rain falling after the October 4 reporting deadline continued to bolster soil moisture supplies. [see: Agricultural Statistics Service Crop Progress and Condition] - The potential for wildfires is currently rated by DNR Forestry as "Low" across Minnesota. [see: Fire Danger Rating Map] - The October precipitation outlook tilts towards above-normal precipitation in the southern two-thirds of Minnesota. Events during the first week of the month have already validated this projection. Elsewhere, the outlook offers equal chances of above, near, and below normal precipitation. Normal October precipitation ranges from one and one half inches in northwestern Minnesota, to over two and one half inches in portions of north central and northeastern Minnesota. [see: Climate Prediction Center 30-day Outlook | October Precipitation Normal Map] - The October temperature outlook depicts no significant tendencies away from historical climatological probabilities. Normal October high temperatures fall from the low to mid 60s early in the month, to the upper 40s by month's end. Normal October low temperatures drop from the low 40s early in the month to near 30 by late October. 
[see: Climate Prediction Center 30-day Outlook | October Temperature Normal Map] - The 90-day precipitation outlook for October through December shows no significant tendencies away from climatological probabilities. The October through December temperature projection indicates a strong tendency towards above-normal temperatures. [see: Climate Prediction Center 90-day Outlook] - The National Weather Service produces long-range probabilistic river stage and discharge outlooks for the Red River, Minnesota River, and Mississippi River basins. These products are part of the National Weather Service's Advanced Hydrologic Prediction Service (AHPS). [see: National Weather Service River Forecast Center] From the author: - Please join us for the 17th Annual Kuehnast Lecture in Meteorology and Climatology on October 15. This year's guest lecturer is Dr. Dennis Baldocchi from the University of California - Berkeley. His topic: Breathing of the Biosphere: How Physics Sets the Limits and Biology Does the Work. [see: 17th Annual Kuehnast Lecture] - The DNR Division of Waters prepares a monthly product providing general information on the quantitative status of water resources across Minnesota. The monthly Hydrologic Conditions Report places current measurements of precipitation, stream flow, lake levels, and ground water levels in historical context. [see: DNR Waters Monthly Hydrologic Conditions Report] Notes from around the state: Upcoming dates of note: - October 15: National Weather Service releases 30/90 day temperature and precipitation outlooks - October 15: 17th Annual Kuehnast Lecture Web sites featured in this edition: - http://climate.umn.edu - Minnesota Climatology Working Group, Minnesota DNR Waters and U of M Dept. 
of Soil, Water, and Climate - http://water.weather.gov - National Weather Service, Advanced Hydrologic Prediction Service - http://www.drought.unl.edu - National Drought Mitigation Center - http://water.usgs.gov/cgi-bin/dailyMainW?state=mn&map_type=weekd - U.S. Geological Survey - http://mndnr.gov/waters - Minnesota Department of Natural Resources, Division of Waters - http://www.lre.usace.army.mil - US Army Corps of Engineers, Detroit District - http://www.nass.usda.gov - USDA, National Agricultural Statistics Service - http://mndnr.gov/forestry - Minnesota Department of Natural Resources, Division of Forestry - http://www.cpc.ncep.noaa.gov - National Weather Service, Climate Prediction Center - http://www.crh.noaa.gov/ncrfc - National Weather Service, North Central River Forecast Center To subscribe to or unsubscribe from HydroClim Minnesota, please notify . Contributions of information and suggestions are welcome!
Pacific Rocky Intertidal Monitoring: Trends and Synthesis Click here for Biodiversity Survey findings Partington Cove is located in the Central Coast region of California, within the Monterey Bay National Marine Sanctuary. This site is located within Julia Pfeiffer State Park. This site is also located in an Area of Special Biological Significance (Julia Pfeiffer Burns Underwater Park), and is near the Partington Point/Julia P. Burns ASBS Mussel Watch site. This site receives low visitation by fishermen and tidepoolers. This moderately sloping site consists of extremely uneven terrain, containing many deep cracks and folds. Partington Cove is dominated by consolidated bedrock, and the area surrounding the site is composed of consolidated bedrock. The primary coastal orientation of this site is west/northwest. Biodiversity Surveys were done by University of California Santa Cruz in 2003 and 2004. The Biodiversity Survey grid encompasses one section that is approximately 20 meters (along shore) x 5 meters (seaward). Click here to view Biodiversity Survey findings at this site. For more information about Partington Cove, please contact Pete Raimondi.
Remember to use the Fill Down command after changing the output rules. The graphs and tables will change automatically with any changes you make to the columns.

Groups: After completing Problems A1-A7, discuss your responses to Problem A7. The tendency may be to answer Problem A7 with particular numbers. For example, the first table will never contain the output 2, because the table starts at 3 and then increases. Think also about what types of numbers will never appear. For example, unless you use negative inputs, the first table will never contain negative numbers, numbers less than 3, fractions, or numbers that aren't multiples of 10 (except 3). Justify your claims. This will push you to think more deeply about exponential functions.

Groups: Before moving on, take a few minutes to read through the chart or put up an overhead of the chart. Take a moment to work a bit with the notation. For example, if you have a rule that is y = 10^x, what would you get for x = 2? For x = 4?

Groups: You also may want to discuss zero as an exponent. Note that this is completely optional; knowledge of zero as an exponent is not assumed anywhere during the session. There are a couple of ways to explain the fact that any number (except zero) to the zero power is 1. First, discuss what 2^0 would mean and why. Then think about the following two explanations:

Multiplying by 1 doesn't change anything, so you can think of powers of 2 (for example) as 1 times some number of 2s.
2^3 = 1 x 2 x 2 x 2 (1 times three 2s)
2^2 = 1 x 2 x 2 (1 times two 2s)
2^1 = 1 x 2 (1 times one 2)
2^0 = 1 (1 times zero 2s)

Alternately, look at decreasing powers of 2:
2^3 = 8
2^2 = 4
2^1 = 2
Each power is half the previous one, so if the pattern is to continue, it must be the case that 2^0 = (1/2) x 2 = 1. Extending this pattern can help you find the meaning of negative exponents, as well. It should be the case that 2^-1 = (1/2) x 1 = 1/2, and in fact this is what negative exponents mean. 
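The decreasing-powers pattern can also be checked numerically. The sketch below is illustrative only (the session itself uses a spreadsheet, not Python): starting at 2^3 = 8, each halving step lowers the exponent by one, which forces 2^0 = 1 and 2^-1 = 1/2 if the pattern continues.

```python
# Walk down the powers of 2: each value is half the previous one.
# If the halving pattern continues past 2^1, it forces 2**0 == 1
# and 2**-1 == 1/2.
value = 2 ** 3                  # start at 2^3 = 8
for exponent in [2, 1, 0, -1]:
    value = value / 2           # one halving step lowers the exponent by 1
    assert value == 2 ** exponent
print(value)                    # ends at 2^-1
```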
Think about why a closed-form rule like y = 3^x would give rise to a table with a constant ratio between successive outputs. A possible explanation is that if the input increases by 1, then the product is multiplied by another 3, so each output will be 3 times the previous one.
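The constant-ratio explanation can be demonstrated with a small table. This Python sketch is illustrative (the session materials use spreadsheet columns instead): it builds the outputs of y = 3^x for a few inputs and confirms that each output is 3 times the previous one.

```python
# Table for the closed-form rule y = 3**x.  Each time the input
# increases by 1, the output picks up another factor of 3, so the
# ratio between successive outputs is constant.
inputs = list(range(6))
outputs = [3 ** x for x in inputs]
ratios = [outputs[i + 1] / outputs[i] for i in range(len(outputs) - 1)]
print(outputs)
assert all(r == 3 for r in ratios)
```

The same check fails for a linear rule such as y = 3x + 1, whose successive outputs differ by a constant amount rather than a constant factor.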