Photoelectric Effect and Non-metals

I know what the photoelectric effect is, but everything I have read only talks about metals. Can the photoelectric effect occur on other substances, such as water?

Non-metals are not likely to exhibit any photoelectric effect, because only metals have 'spare' electrons available in their outer shells. It is those 'spare' electrons which are able to be dislodged by the occasional passing photon, provided that the photon has sufficient energy. The energy of the photon is determined by the frequency of the radiation - and that also determines the colour if it is visible light - blue having more energy than red. Given all that, some non-metals - mostly in the metalloid group - can exhibit weak photoelectric effects if the energy of the impinging photons is high enough - X-rays and so on. Covalent compounds - such as water - are MUCH less likely to show any photoelectric effect, because the electrons are all neatly paired off and bound up in shared electron shells. Not much chance of knocking an electron out of there.

Tennant Creek, AUSTRALIA

In principle, yes. However, the energy required to remove an electron from a non-conductor is much greater than the energy required to remove an electron from a metal, so for practical purposes the effect is only readily observed in metals.

The photoelectric effect can occur with any material, but it will seldom be strong enough to be noticed. In a metal, there are electrons moving through the material that are not bound to any specific atom. These electrons are easy to knock free, so the photoelectric effect happens often. With atoms that hold their electrons tightly, a photon is much less likely to knock an electron free. Also, greater energy is needed to make it happen - a higher-energy photon, perhaps X-rays, rather than visible light. The photoelectric effect can still happen in non-metals, but it is much easier to produce in a metal.

Dr. Ken Mellendorf

Update: June 2012
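A quick numeric check of the frequency-energy argument above. This is a minimal sketch: cesium's ~2.1 eV work function is a textbook value, while the ~6.5 eV ejection threshold used here for liquid water is only an illustrative assumption.

```python
# Photon energy E = h*f = h*c/lambda, compared with an ejection threshold.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

# Threshold values in eV; the water figure is an assumption for illustration.
thresholds_ev = {"cesium (metal)": 2.1, "liquid water (assumed)": 6.5}
for name, threshold in thresholds_ev.items():
    for wl in (650, 450, 100):  # red, blue, extreme-UV/soft X-ray (nm)
        e = photon_energy_ev(wl)
        verdict = "ejects an electron" if e > threshold else "too weak"
        print(f"{wl} nm photon ({e:.2f} eV) vs {name}: {verdict}")
```

Running it shows red light (1.91 eV) fails for both, blue light (2.76 eV) works only for the metal, and only the far higher-energy photon clears the water threshold, matching the answers' point about X-rays.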
More blades give you more cost, but very little increase in efficiency. Three blades turn out to be the optimum. With four or more blades, costs are higher, with insufficient extra efficiency to compensate.

Edit: prompted by a comment, here's some elucidation. Going for more, but shorter, blades is more expensive per unit of electricity generated: if you have 4 shorter blades (rather than three longer ones), the blades sweep through a smaller volume of air (i.e. an amount of air with a lot less energy), swept area being proportional to the square of the radius, and the efficiency is only a few percent higher. You also get higher mechanical reliability with three blades than with two: with two blades, the shadowing effect of tower and blade puts a lot of strain on the bearings. So although it costs more to make a three-bladed turbine, they tend to have a longer life and lower maintenance needs, and thus on balance reduce the unit cost of electricity generated, as the increased availability and reduced maintenance costs outweigh the extra cost of the third blade.

Edit 2: For the nitty-gritty of wind-turbine aerodynamics, wikipedia isn't a bad place to start: http://en.wikipedia.org/w/index.php?title=Wind_turbine_aerodynamics&oldid=426555179
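A back-of-envelope sketch of the swept-area point, using the standard wind-power formula P = 0.5 * rho * A * v^3 * Cp. The blade lengths and Cp values below are made-up numbers for illustration, not from the answer.

```python
import math

RHO = 1.225  # air density, kg/m^3

def power_kw(radius_m, wind_mps, cp):
    area = math.pi * radius_m ** 2           # swept area grows with radius squared
    return 0.5 * RHO * area * wind_mps ** 3 * cp / 1e3

# Same total blade material, assumed: 3 blades x 40 m vs 4 blades x 30 m.
three = power_kw(40, 10, cp=0.45)
four = power_kw(30, 10, cp=0.47)  # "a few percent higher" efficiency, assumed
print(f"3 x 40 m rotor: {three:.0f} kW, 4 x 30 m rotor: {four:.0f} kW")
# The shorter-bladed rotor sweeps only (30/40)^2 = 56% of the area,
# so the small Cp gain nowhere near compensates.
```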
Antipredator adaptations are adaptations developed over evolutionary time which assist prey organisms in their constant struggle against their predators. There are several ways antipredator adaptations can be classified, such as behavioral or non-behavioral, or by taxonomic group. Antipredator behaviors range from chameleons and octopuses that change color in order to better camouflage themselves, to crabs and geometrid moth caterpillars that decorate their bodies to help them hide, to Batesian mimicry in insects, to Thomson's gazelles that stot so as to advertise unprofitability to predators, to selfish herd acts performed by Adelie penguins when they attempt to enter water inhabited by leopard seals. These behaviors are extraordinarily varied from species to species, and some of them are quite odd; some prey actually fight back. Three extremely odd defense behaviors are found in the sea cuke (cucumber), Camponotus saundersi (a Malaysian ant) and the horned lizard found in the desert southwest and Mexico.

The sea cuke has a unique way of fighting off danger. Like other echinoderms, the cuke has a type of collagen in its skin capable of excreting or absorbing water, effectively changing from a "liquid" to a "solid." Cukes can turn their bodies into mush, climb through small cracks, and then solidify into small lumps so that they cannot be extracted. Their more desperate response is called evisceration: the cuke effectively turns itself inside out by expelling portions of its digestive tract and, on occasion, other organs like the respiratory tree or gonads. The cuke does this because its Cuvierian tubules, located in its hindgut, contain toxic chemicals; the toxins can actually kill other fish. Any predator that comes into contact with this noxious substance is likely to think twice about consuming the cuke, which can then regenerate its digestive tract and continue to survive.

The horned lizard found in the desert southwest of the U.S. has an odd defense mechanism as well. When threatened, the lizard increases pressure in its sinus cavities until the blood vessels in the corners of its eyes burst, squirting blood at the attacker.

Camponotus saundersi, an ant species found in Malaysia, also has a very interesting defense. The colony is divided into different functional groups, one of which is soldier ants, charged with defending the colony at all costs. If battle ensues, these ants will actually self-destruct: they have two large glands that run the entire length of their body, and when stressed the ant contracts its abdominal muscles, causing the glands to explode and spray poison in all directions.

These three examples are some of the more bizarre ways that animals defend themselves against predation.

- ↑ Alcock, J. (1998). Animal Behavior: An Evolutionary Approach (8th edition). Sinauer Associates, Inc., Sunderland, Massachusetts. ISBN 0-87893-009-4
The Warming of Greenland
By JOHN COLLINS RUDOLF
Published: January 16, 2007

Flying over snow-capped peaks and into a thick fog, the helicopter set down on a barren strip of rocks between two glaciers. A dozen bags of supplies, a rifle and a can of cooking gas were tossed out onto the cold ground. Then, with engines whining, the helicopter lifted off, snow and fog swirling in the rotor wash. When it had disappeared over the horizon, no sound remained but the howling of the Arctic wind.

''It feels a little like the days of the old explorers, doesn't it?'' Dennis Schmitt said.

Mr. Schmitt, a 60-year-old explorer from Berkeley, Calif., had just landed on a newly revealed island 400 miles north of the Arctic Circle in eastern Greenland. It was a moment of triumph: he had discovered the island on an ocean voyage in September 2005. Now, a year later, he and a small expedition team had returned to spend a week climbing peaks, crossing treacherous glaciers and documenting animal and plant life.

Despite its remote location, the island would almost certainly have been discovered, named and mapped almost a century ago, when explorers like Jean-Baptiste Charcot and Philippe, Duke of Orléans, charted these coastlines, had it not been bound to the coast by glacial ice. Maps of the region show a mountainous peninsula covered with glaciers. The island's distinct shape -- like a hand with three bony fingers pointing north -- looks like the end of the peninsula.

Now, where the maps showed only ice, a band of fast-flowing seawater ran between a newly exposed shoreline and the aquamarine-blue walls of a retreating ice shelf. The water was littered with dozens of icebergs, some as large as half an acre; every hour or so, several more tons of ice fractured off the shelf with a thunderous crack and an earth-shaking rumble.

All over Greenland and the Arctic, rising temperatures are not simply melting ice; they are changing the very geography of coastlines. Nunataks -- ''lonely mountains'' in Inuit -- that were encased in the margins of Greenland's ice sheet are being freed of their age-old bonds, exposing a new chain of islands, and a new opportunity for Arctic explorers to write their names on the landscape.

''We are already in a new era of geography,'' said the Arctic explorer Will Steger. ''This phenomenon -- of an island all of a sudden appearing out of nowhere and the ice melting around it -- is a real common phenomenon now.''

In August, Mr. Steger discovered his own new island off the coast of the Norwegian island of Svalbard, high in the polar basin. Glaciers that had surrounded it when his ship passed through only two years earlier were gone this year, leaving only a small island alone in the open ocean. ''We saw it ourselves up there, just how fast the ice is going,'' he said.

With 27,555 miles of coastline and thousands of fjords, inlets, bays and straits, Greenland has always been hard to map. Now its geography is becoming obsolete almost as soon as new maps are created.

Hans Jepsen is a cartographer at the Geological Survey of Denmark and Greenland, which produces topographical maps for mining and oil companies. (Greenland is a largely self-governing region of Denmark.) Last summer, he spotted several new islands in an area where a massive ice shelf had broken up. Mr. Jepsen was unaware of Mr. Schmitt's discovery, and an old aerial photograph in his files showed the peninsula intact.
''Clearly, the new island was detached from the mainland when the connecting glacier-bridge retreated southward,'' Mr. Jepsen said, adding that future maps would take note of the change. The sudden appearance of the islands is a symptom of an ice sheet going into retreat, scientists say. Greenland is covered by 630,000 cubic miles of ice, enough water to raise global sea levels by 23 feet. Carl Egede Boggild, a professor of snow-and-ice physics at the University Center of Svalbard, said Greenland could be losing more than 80 cubic miles of ice per year. ''That corresponds to three times the volume of all the glaciers in the Alps,'' Dr. Boggild said. ''If you lose that much volume you'd definitely see new islands appear.'' He discovered an island himself a year ago while flying over northwestern Greenland. ''Suddenly I saw an island with glacial ice on it,'' he said. ''I looked at the map and it should have been a nunatak, but the present ice margin was about 10 kilometers away. So I can say that within the last five years the ice margin had retreated at least 10 kilometers.'' The abrupt acceleration of melting in Greenland has taken climate scientists by surprise. Tidewater glaciers, which discharge ice into the oceans as they break up in the process called calving, have doubled and tripled in speed all over Greenland. Ice shelves are breaking up, and summertime ''glacial earthquakes'' have been detected within the ice sheet. ''The general thinking until very recently was that ice sheets don't react very quickly to climate,'' said Martin Truffer, a glaciologist at the University of Alaska at Fairbanks. ''But that thinking is changing right now, because we're seeing things that people have thought are impossible.'' A study in The Journal of Climate last June observed that Greenland had become the single largest contributor to global sea-level rise.
Jan. 15, 2009: A team of NASA and university scientists has discovered substantial plumes of methane floating through the atmosphere of Mars. The find indicates Mars is either biologically or geologically active.
Jan. 14, 2009: NASA's next great Moon rocket promises to do more than land astronauts on the Moon. In its spare time, it could revolutionize the science of astronomy.
Jan. 8, 2009: The biggest full Moon of 2009 is coming this weekend. It's a perigee Moon, as much as 30% brighter than lesser moons we'll see in the months ahead. Get ready for moonlight!
Jan. 7, 2009: Sledgehammer-toting scientists are "bustin' rocks" to make the finest possible simulated lunar regolith (a.k.a. fake moondust) in support of NASA's return to the Moon.
While we are working on losing weight, exploring a new hobby, adding on that addition to the house or preparing for the future, scientists are also actively exploring the future of medicine: nanotechnology. Exhaustively covering nanotechnology would take an entire newspaper, so I'd like to cover a brief portion of this new science and encourage you to research the topic, with all its exciting possibilities.

As you may have noticed in the past 20 years or more, technology is playing an active role in modern medicine. With imaging studies giving doctors the ability to explore the human body noninvasively, pacemakers and diaphragmatic pacing systems sending electricity to make the heart beat and the lungs expand, and a whole slew of other forms of technology currently in the medical field, nanotechnology now takes a swing at becoming the front runner in modern medicine.

Nanotechnology is the science of manipulating extremely small molecules, on the scale of a nanometer, to achieve a certain purpose. A nanometer is one billionth of a meter. By comparison, if a nanometer were the size of a marble, a meter would be the size of planet Earth. A nanometer is also said to be roughly the length the hair on a man's beard grows in the time it takes him to lift a razor to his face. The basic unit of this technology is known as a fullerene, and these fullerenes can be manipulated to do extraordinary things. These "nanorobots" are given a genetic code and are built to do whatever the scientist wishes.

Nanotechnology offers many possibilities, not only to the medical field but to other fields as well. Nanorobots are being used in paint, which can be used, for instance, to paint the inside of a concert hall so that it blocks all incoming cell phone signals and the concert is not disturbed. In the medical field, nanorobots are being constructed to be used on animals to fight infection and even cancer. If this technology is successful, there may come a time when antibiotics and chemotherapy agents are a thing of the past. These nanorobots could theoretically be ingested orally, programmed with a genetic code that causes them to kill off the infection or cancerous cells.

The USA is currently investing roughly 3.2 billion dollars in nanotechnology, with other countries close behind, each trying to be the first to get the technology right. There are many factors that go into finalizing this technology, such as the effect it could have on the environment, as well as what harm could be done with the technology if it were placed in the wrong hands. For more information on this topic, simply type nanotechnology into your computer's browser; you will be amazed at what you'll find.

Justin Glaze is an LPN and contributing columnist for the Walker County Messenger. He can be reached at 678-988-1011 or email@example.com.
this packing using fractal geometry. If you imagine the environment's resources as a fractal, rather than a stick, small species will see that fractal at a higher magnification than large ones. Small species will see, and exploit, all the twists, turns, and branches in resource space invisible to larger species. Large animals perceive the world at a coarser scale and need more space -- from feeding ground, to waterhole, to breeding ground, to roost -- to get life's jobs done.

On Monte Pellegrino, Hutchinson noticed that the two bug species in the pool were of different sizes. If body size determines how much ecological space a species occupies, differences in body size seem an obvious way to occupy different niches and thus avoid competition. Different-sized organisms need different amounts of space and food; they can get and use different types of food, and they can tolerate different environments. Hutchinson sought a rule to explain size differences in coexisting species. In groups of animal species exploiting the same resource, he noticed, each species is often about twice the weight, or 1.3 (the cube root of 2) times the length, of its nearest neighbor. Hutchinson thought that the critical size difference was between the body parts used to obtain food. If you line up the Galapagos finches studied by Darwin in order of their size, each species has a bill about 1.3 times longer than the next smallest finch. Insects go through several different larval stages before adulthood, and each stage is roughly 1.3 times longer than the next.

The idea caught on, and many more examples of what became known as Hutchinson ratios were spotted. In 1977 the ecologists Henry Horn and Robert May noted that in consorts of viols and recorders each instrument, as one moves from treble to tenor or tenor to bass, is about 1.3 times longer than its neighbor. They seem to divide up musical space so that each has a separate job and does not compete with the other group members. And in some sets of iron skillets sold together, they pointed out, each is 1.3 times wider than the next. No one would buy a set of five identical frying pans, but like animals the coexisting skillets have been selected to specialize in handling food items of different sizes. Horn and May suggested that Hutchinson's ratios might apply generally to sets of complementary tools. Jared Diamond, a friend and colleague of MacArthur, offered another example of how competition between species might control
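As an aside, the size-ratio arithmetic above is easy to sketch. This is a minimal illustration only; the starting bill length is a made-up number.

```python
# Hutchinson ratio: each species is ~2x the weight of its nearest smaller
# neighbor, i.e. ~2**(1/3) ≈ 1.26 ("about 1.3") times its length, since
# weight grows with the cube of linear size.
RATIO = 2 ** (1 / 3)

bill_mm = 8.0  # assumed bill length of the smallest finch, in mm
for species in range(1, 5):
    print(f"species {species}: bill ~{bill_mm:.1f} mm")
    bill_mm *= RATIO  # lengths step by ~1.26x while weights step by 2x
```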
range emerges as the best predictor of survivorship in marine bivalves, and argue that such indirect effects are probably more important than generally appreciated. I will also discuss regional variations in the balance of invasions and local origination in the aftermath of the K-T event, which are somewhat unexpected given that the extinction itself tended to increase biotic homogenization on a global scale by preferentially removing the more localized taxa. Invasions and extinctions are also important during times of "normal" extinction intensities, as I will illustrate with reference to the dynamics of the latitudinal diversity gradient. I will conclude with some implications for integrating insights from past and present-day extinctions, and suggest that a powerful approach might involve comparative dissection of extinction patterns according to likely drivers. Throughout, I will note gaps in our understanding that would benefit from combined study of modern and ancient systems.

In this chapter, I will focus mainly on marine bivalves such as mussels, scallops, and cockles. Bivalves are becoming a model system for the analysis of large-scale biogeographic and evolutionary patterns (Crame, 2000, 2002; Jablonski et al., 2003b, 2006; Kidwell, 2005; Valentine et al., 2006; Krug et al., 2007) for several reasons. They are taxonomically rich but not unmanageable (≈3,000 living and fossil genera), and their systematics are increasingly well understood, so that taxonomic standardization and phylogenetic treatment of heterogeneous data are feasible. They have diverse life habits, from filter-feeding to photosymbiosis and chemosymbiosis to carnivory. They occur at all depths from the intertidal zone to deep-sea trenches, and from the tropics to the poles. They are abundant and often well preserved as fossils [although not all habitats and clades are equally represented; Valentine et al. (2006)], and they have diverse shell mineralogies and microstructures, which allows analyses to control statistically for, and thus factor out, some, although not all, of the biases in the fossil record (Kidwell, 2005; Valentine et al., 2006). These favorable attributes do not mean that the bivalve fossil record is perfect, and preservation and sampling biases must always be considered in large-scale analyses [see, for example, the variety of approaches in Alroy (2000), Foote (2003), Bush and Bambach (2004), Bush et al. (2004), Jablonski et al. (2006), and Smith (2007)]. However, our growing knowledge of living and fossil bivalves, including the taxonomic, preservational, and geographic factors that can distort their fossil record, makes this group an excellent vehicle for integrating present-day and paleontological diversity dynamics.

A broad array of organismic and clade-level traits enter into extinction risk for present-day species. For example, in evaluating extinction
In 1969, astronomers discovered that three of the giant planets, Jupiter, Saturn, and Neptune, each radiate about twice as much energy as they receive from the Sun. Those planets each contain a powerful energy source which was inexplicable until J. Marvin Herndon demonstrated in 1992 the feasibility of natural nuclear fission reactors as the energy source for those planets. He initially considered thermal neutron reactors moderated by hydrogen, but soon realized that without hydrogen the reactors would function quite well as fast neutron breeder reactors.

Aware that uranium resides almost exclusively in the alloy portion of the Abee enstatite chondrite, the part corresponding to the Earth's core, in 1993 Herndon published a scientific article entitled "Feasibility of a nuclear fission reactor at the center of the Earth as the energy source for the geomagnetic field" in the Journal of Geomagnetism and Geoelectricity, and followed that a year later with an article in the Proceedings of the Royal Society of London. Both articles, like his 1996 article in the Proceedings of the National Academy of Sciences USA, were based upon calculations Herndon made using Fermi's nuclear reactor theory. These began a series of step-by-step developments which have carried forward to the present [5-11] and have resulted in a fundamentally new understanding of georeactor structure, georeactor dynamics and georeactor generation of Earth's magnetic field.

For more than thirty years, scientists and engineers at Oak Ridge National Laboratory have worked to develop, improve, and validate software for numerically simulating the operation of different types of nuclear reactors. Minor modifications were made to the software allowing numerical simulation of Earth's georeactor, which J. M. Herndon published with D. F. Hollenbach in 2001 in the Proceedings of the National Academy of Sciences USA. The simulation calculations demonstrated that Earth's georeactor is capable of functioning over the entire period which the Earth has existed, 4.5 billion years, and is capable of producing power at the levels estimated to be necessary for powering the geomagnetic field. The calculations also showed that the georeactor would operate as a fast neutron breeder reactor and that it must have some inherent mechanism for regulating operating power and for removing fission products. Moreover, the helium fission products from the georeactor turned out to occur in the same range of compositions as the deep-Earth helium found in oceanic basalt.

Evidence for Georeactor Existence

Since the late 1960s, scientists throughout the world have found traces of helium in volcanic basalt that comes from within the Earth. Two isotopes of helium are observed: helium of mass 3 and helium of mass 4. Helium-4 was not a surprise, because helium-4 is a product of the natural radioactive decay of uranium and thorium. Helium-3, however, was a great mystery, as scientists were unaware of any mechanism for major production of helium-3. Without knowledge of an adequate deep-Earth production mechanism, scientists for more than thirty years have assumed that the observed helium-3 is a relic left over from planetary formation 4.5 billion years ago.
To explain the helium found in volcanic basalt, scientists have also had to assume that about 9 times as much helium-4 from radioactive decay had to have been mixed with the assumed primordial helium-3 in such a way as to give a rather narrow range of compositions (shown statistically for 95% confidence). But then along came the georeactor numerical simulations. More precise helium fission product data exist for two numerical simulations, published in 2003 by J. Marvin Herndon in the Proceedings of the National Academy of Sciences USA. For comparison, two bands represent the ranges of helium ratios for mid-oceanic ridges and abandoned oceanic ridges, plotted against time up to the present age of the Earth; TW stands for terawatts (million megawatts), the unit of georeactor power. Note the upward trend in the helium isotope ratios with the passage of time: the increase in the He-3/He-4 ratio is a consequence of the decrease in helium-4 from radioactive decay as the uranium fuel is consumed by nuclear fission. High He-3/He-4 ratios, some as high as 37 relative to air, are observed in Hawaiian and Icelandic basalts. These high ratios are an indication that the end of the georeactor's life is approaching, which means as well the end of the geomagnetic field, but the time frame is unknown. The helium coming out of the Earth is strong evidence for the georeactor's existence. See the reference for an excellent description of the helium problem solved. Ultimately, other evidence for georeactor existence may arise, such as seismic evidence and antineutrino evidence. At present, resolution of earthquake waves is not sufficient to reveal the georeactor. Detecting antineutrinos from georeactor fission products holds potential, but technological advances are necessary. Antineutrino measurements to date have not refuted the existence of the georeactor, but set an upper limit of 3 TW (terawatts) on its energy production. That 3 TW does not include the contribution from the radioactive decay energy of the georeactor's associated uranium (and possibly thorium).

Georeactor Origin of the Earth's Magnetic Field

For more than a century, since Carl Friedrich Gauss, scientists have known that the seat of the geomagnetic field lies at or near the center of the Earth. Because of energy-draining interactions with the solar wind and with the matter of Earth, scientists also know that there is an energy source, residing at or near the center of Earth, continuously supplying energy to sustain the magnetic field; otherwise the field would soon decay. In 1939 Walter Elsasser proposed that the Earth's magnetic field is produced by a convection-driven dynamo mechanism in Earth's fluid core. Elsasser envisioned convection motions in the fluid core being twisted by planetary rotation into a dynamo, essentially a magnetic amplifier. In 1993, when J. Marvin Herndon demonstrated the feasibility of a nuclear fission reactor at Earth's center, he envisioned the georeactor as being the dynamo's energy source. Beginning in 2007, Herndon began to discover reasons why convection is physically impossible in the Earth's fluid core, and proposed instead that the Earth's magnetic field is generated, not in the fluid core, but in the georeactor fission-product sub-shell [8-12]. There are two reasons why convection is physically impossible in the Earth's fluid core,
but not in the georeactor. First, for long-term stable convection, the top of the fluid has to be cooler than the bottom, so heat brought to the top must be efficiently removed. The Earth's core is covered with a thick, thermally-insulating blanket, the silicate mantle, which prevents efficient heat loss [8, 9]. Second, Herndon discovered that, because of compression by the weight of the Earth above, the matter at the base of the fluid core is too dense to float to the top as a result of thermal expansion; convection under those circumstances is physically impossible [11, 12]. Herndon also discovered that the Rayleigh Number, often used to justify convection, is inappropriate for the core, as the Rayleigh Number was derived for an incompressible fluid, a fluid of constant density, which the core is not: the base of the core is about 23% denser than the top.

The impediments to convection in the Earth's fluid core, as noted by Herndon [11, 12], do not exist within the georeactor sub-shell. It is expected that convective motions within the electrically conducting fluid (or slurry) sub-shell will interact with the Coriolis forces produced by planetary rotation and act like a dynamo, a magnetic amplifier. And, unlike in Earth's fluid core, the georeactor sub-shell contains large amounts of fission-produced elements whose beta decay yields electrons for generating magnetic seed-fields for amplification. The georeactor unit thus acts as both the energy source and the operant fluid for generating the Earth's magnetic field by dynamo action.

Even though variations in nuclear fuel occur over time, Herndon's georeactor uniquely is expected to be self-regulating through establishing a balance between heat production and actinide settling-out. In the micro-gravity environment at the center of Earth, georeactor heat production that is too energetic would be expected to cause actinide sub-core disassembly, mixing actinide elements with neutron-absorbers of the sub-shell, quenching the nuclear fission chain reaction. But as the denser actinide elements begin to settle out of the mix, the chain reaction would re-start, ultimately establishing a balance, an equilibrium between heat production and actinide settling-out: a self-regulating mechanism.

Rapid Magnetic Field Changes

Earth's magnetic field has been decreasing in intensity over the past hundred years. Moreover, the North Magnetic Pole has recently moved at a rapid rate toward Siberia. These observations are taken by some as a possible indication of a forthcoming magnetic reversal, potentially the first in some 700,000 years. External influences, Herndon suggests, intermittently disrupt the stability of georeactor geomagnetic field generation and possibly lead to a magnetic reversal. For example, electrical currents induced by superintense solar outbursts would cause heating in the georeactor sub-shell, possibly disrupting convection. Severe Earth trauma might also disrupt convection in the georeactor's sub-shell. Because the georeactor mass is about one ten-millionth that of the fluid core, such changes might occur quickly.

Origin of Planetary Magnetic Fields

Active internally generated magnetic fields have been detected in six planets (Mercury, Earth, Jupiter, Saturn, Uranus, and Neptune) and in one satellite (Jupiter's moon Ganymede).
Magnetized surface areas of Mars and the Moon indicate the former existence of internally generated magnetic fields in those bodies. Herndon has presented evidence attesting to the commonality of matter in the Solar System, which is like that of the deep interior of Earth, and has made the suggestion [9, 10] that planetary magnetic fields generally arise from the same georeactor-type mechanism which Herndon [8, 9] has suggested powers and generates the Earth's magnetic field.

Origin of Earth's Magnetic Field (video)

This video is best watched in high quality, as it contains an experimental demonstration of why long-term convection, and hence dynamo action, in the fluid core is impossible.

De-bunking Copy-cat Georeactors

After publication of Herndon's scientific articles on Earth's nuclear georeactor [1-9], several "copy-cat" georeactors were published. Copy-cat georeactors, purportedly operating at places within the deep interior of Earth other than at its center, all lack a mechanism for preventing the inevitable meltdown to the center of Earth, and lack a mechanism for self-regulation as disclosed by Herndon for the georeactor.

Georeactor as Hotspot Heat Source

Hotspots power the volcanic activity that is continuing to produce the basalt lava that forms the Hawaiian Islands and Iceland. Seismic tomography appears to image vertical, column-like heat paths extending to the edge of the core for each of those hotspots. Recently, the Norwegian scientists R. Mjelde and J. I. Faleide discovered that basalt eruptions in the Hawaiian Islands and in Iceland varied significantly over time. Remarkably, the pulse-like variations in productivity and time in each case were synchronized, as if being orchestrated by a common mechanism, even though the two hotspots are located on opposite sides of the globe. Subsequent work by Mjelde, Wessel, and Müller suggests the co-pulsations are a global hotspot phenomenon. The commonality appears to represent changes in heat from the Earth's core. Georeactor heat produced by nuclear fission can be variable, unlike heat from the natural decay of long-lived radioactive isotopes, which is essentially constant, decreasing only slightly over very long periods of time.

References

Herndon, J. M., Nuclear fission reactors as energy sources for the giant outer planets. Naturwissenschaften, 1992, 79, 7-14.
Herndon, J. M., Feasibility of a nuclear fission reactor at the center of the Earth as the energy source for the geomagnetic field. Journal of Geomagnetism and Geoelectricity, 1993, 45, 423-437.
Herndon, J. M., Planetary and protostellar nuclear fission: Implications for planetary change, stellar ignition and dark matter. Proceedings of the Royal Society of London, 1994, 453-461.
Herndon, J. M., Sub-structure of the inner core of the Earth. Proceedings of the National Academy of Sciences USA, 1996, 93, 646-648.
Herndon, J. M., Examining the overlooked implications of natural nuclear reactors. Eos, Transactions of the American Geophysical Union, 79, 451, 456.
Hollenbach, D. F. and Herndon, J. M., Deep-Earth reactor: Nuclear fission, helium, and the geomagnetic field. Proceedings of the National Academy of Sciences USA, 2001, 98, 11085-11090.
Herndon, J. M., Nuclear georeactor origin of oceanic basalt 3He/4He, evidence, and implications. Proceedings of the National Academy of Sciences USA, 2003, 100, 3047-3050.
Herndon, J. M., Nuclear georeactor generation of the Earth's geomagnetic field. Current Science, 2007, 93, 1485-1487.
Herndon, J. M., Maverick's Earth and Universe. Trafford Publishing, Vancouver, 2008. ISBN 978-1-4251-4132-5.
Herndon, J. M., Nature of planetary matter and magnetic field generation in the Solar System. Current Science, 2009, 96, 1033-1039.
Herndon, J. M., Uniqueness of Herndon's georeactor: Energy source and production mechanism for Earth's magnetic field. arXiv:0901.4509, 28 Jan 2009.
Herndon, J. M., Origin of the Geomagnetic Field: Consequence of Earth's Early Formation as a Jupiter-Like Gas Giant. Thinker Media, Inc., 2012.
Gando, A., et al., Partial radiogenic heat model for Earth revealed by geoneutrino measurements. Nature Geoscience, 2011, 4,
Rao, K. R., Nuclear reactor at the core of the Earth! A solution to the riddles of relative abundances of helium isotopes and geomagnetic field variability. Current Science, 2002, 82, 126-127.
Mjelde, R. and Faleide, J. I., Variation of Icelandic and Hawaiian magmatism: evidence for co-pulsation of mantle plumes? Mar. Geophys. Res., 2009, 30, 61-72.
Mjelde, R., Wessel, P. and Müller, D., Global pulsations of intraplate magmatism through the Cenozoic. Lithosphere, 2010, 2(5),
Entamoeba is a genus of Amoebozoa found as internal parasites or commensals of animals. Several species are found in humans. Entamoeba histolytica is the pathogen responsible for amoebiasis (which includes amoebic dysentery and amoebic liver abscesses), while others such as Entamoeba coli and E. dispar are harmless. With the exception of Entamoeba gingivalis, which lives in the mouth, and E. moshkovskii, which is frequently isolated from river and lake sediments, all Entamoeba species are found in the intestines of the animals they infect.

Entamoeba cells are small, with a single nucleus and typically a single lobose pseudopod taking the form of a clear anterior bulge. They have a simple life cycle. The trophozoite (feeding-dividing form) is approximately 10-20 μm in diameter and feeds primarily on bacteria. It divides by simple binary fission to form two smaller daughter cells. Almost all species form cysts, the stage involved in transmission (the exception is E. gingivalis). Depending on the species, these can have one, four or eight nuclei and are variable in size; these characteristics help in species identification.

Entamoeba belongs to the Archamoebae, which are unusual in lacking mitochondria. This group also includes Endolimax, which also lives in animals and is similar in appearance to Entamoeba, although this may partly be due to convergence. Certain other genera of symbiotic amoebae, such as Endamoeba, might prove to be synonyms of Entamoeba, but this is still unclear.

Studying Entamoeba invadens, David Biron of the Weizmann Institute of Science and coworkers found that about one third of the cells are unable to separate unaided and recruit a neighboring amoeba (dubbed the "midwife") to complete the fission. They also reported a similar behavior in Dictyostelium.
April 17, 2009

The pygmy seahorse (Hippocampus bargibanti) evolved its knobby body and rosy color to blend in with gorgonians (sea fans) of the genus Muricella, where the seahorse makes its home among the coral reefs of the Western Pacific. These fish are so tiny (only two centimeters in height) and so well camouflaged that they weren't discovered until someone collected a host gorgonian, placed it into an aquarium, and then noticed the little seahorses.

This photograph, by Vickie Coker of Austin, Texas, won first place in the "Macro" category of the University of Miami Rosenstiel School of Marine and Atmospheric Science's 5th Annual Underwater Photography Contest. Check out all the winners on the contest site.
Web development is a broad term for any activity related to developing a web site for the World Wide Web or an intranet. This can include e-commerce business development, web design, web content development, client-side/server-side scripting, and web server configuration. However, among web professionals, "web development" usually refers only to the non-design aspects of building web sites, e.g. writing markup and coding. Web development can range from developing the simplest static single page of plain text to the most complex web-based internet applications, electronic businesses, or social network services.

For larger businesses and organizations, web development teams can consist of hundreds of people (web developers). Smaller organizations may only require a single permanent or contracting webmaster, or secondary assignment to related job positions such as a graphic designer and/or information systems technician. Web development may be a collaborative effort between departments rather than the domain of a designated department.

Web development as an industry

Since the mid-1990s, web development has been one of the fastest growing industries in the world. In 1995 there were fewer than 1,000 web development companies in the United States alone, but by 2005 there were over 30,000 such companies. The web development industry is expected to grow over 20% by 2010. The growth of this industry is being pushed by large businesses wishing to sell products and services to their customers and to automate business workflow, as well as by the growth of many small web design and development companies.

In addition, the cost of web site development and hosting has dropped dramatically during this time. Instead of costing tens of thousands of dollars, as was the case for early websites, one can now develop a simple web site for less than a thousand dollars, depending on the complexity and amount of content. Smaller web site development companies are now able to make web design accessible to both smaller companies and individuals, further fueling the growth of the web development industry. As far as web development tools and platforms are concerned, there are many systems available to the public free of charge to aid in development. A popular example is LAMP (Linux, Apache, MySQL, PHP), which is usually distributed free of charge. This fact alone has led to many people around the globe setting up new web sites daily, contributing to the rise in web development's popularity. Another contributing factor has been the rise of easy-to-use WYSIWYG web development software, most prominently Adobe Dreamweaver and Microsoft Expression Studio (formerly Microsoft FrontPage). Using such software, virtually anyone can develop a web page in a matter of minutes. Knowledge of HyperText Markup Language (HTML) or other programming languages is not required, but is recommended for professional results.

The next generation of web development tools uses the strong growth in LAMP and Microsoft .NET technologies to provide the Web as a way to run applications online. Web developers now help to deliver applications as web services which were traditionally only available as applications on a desk-based computer. Instead of running executable code on a local computer, users are interacting with online applications to create new content. This has created new methods in communication and allowed for many opportunities to decentralize information and media distribution.
Users are now able to interact with applications from many locations, instead of being tied to a specific workstation for their application environment. Examples of dramatic transformation in communication and commerce led by web development include e-commerce. Online auction sites such as eBay have changed the way consumers purchase goods and services. Online resellers such as Amazon.com and Buy.com (among many others) have transformed the shopping and bargain-hunting experience for many consumers. Another good example of transformative communication led by web development is the blog. Web applications such as WordPress and b2evolution have created easily implemented blog environments for individual web sites. Open source content systems such as Typo3, Xoops, Joomla!, and Drupal have extended web development into new modes of interaction and communication.

Web development can be split into many areas, and a typical basic web development hierarchy might consist of:

Client-side coding
- XHTML (in accordance with modern web design standards, XHTML is replacing the older HTML 4; this may change when HTML 5 is adopted by the browser development community)
- Flash (Adobe Flash Player is a ubiquitous client-side platform ready for RIAs; Flex 2 is also deployed to the Flash Player, version 9+)
- Microsoft Silverlight (does not appear to support older Windows 9x versions)

Server-side coding
- PHP (open source)
- ASP (Microsoft proprietary)
- .NET (Microsoft proprietary)
- CGI and/or Perl (open source)
- Java, e.g. J2EE or WebObjects
- Python, e.g. the Django web framework (open source)
- Ruby, e.g. Ruby on Rails (open source)
- Smalltalk, e.g. Seaside
- ColdFusion (Adobe proprietary, formerly Macromedia)
- WebSphere (IBM proprietary)

LAMP servers are the most popular setup used by the web development community. However, lesser-known languages like Ruby and Python are often paired with database servers other than MySQL (the M in LAMP); for instance, some developers prefer a LAPR (Linux/Apache/PostgreSQL/Ruby on Rails) setup for development. Other databases currently in wide use on the web include:
- Microsoft SQL Server
- DB2 (IBM proprietary)

In practice, many web developers will also have interdisciplinary skills / roles, including:
- Graphic design / web design
- Information architecture and copywriting/copyediting, with web usability, accessibility and search engine optimization in mind
- Project management, QA and other aspects common to IT development in general

The above list is a simple website development hierarchy and can be extended to include all client-side and server-side aspects. It is still important to remember that web development is generally split up into client-side coding, covering aspects such as the layout and design, and server-side coding, which covers the website's functionality and back-end systems. Looking at these items from an "umbrella approach", client-side code such as XHTML is executed and stored on a local client (in a web browser), whereas server-side code is not available to a client and is executed on a web server which generates the appropriate XHTML, which is then sent to the client. As the nature of client-side coding allows you to alter the HTML on a local client and refresh the pages with updated content (locally), web designers must bear in mind the importance and relevance of security to their server-side scripts.
If a server-side script accepts content from a locally modified client-side script, the web development of that page shows poor sanitization with relation to security. Web development takes into account many things, such as data entry error checking through forms, as well as sanitization of the data that is entered in those fields (see the sketch at the end of this article). Malicious practices such as SQL injection can be executed by users with ill intent yet only primitive knowledge of web development as a whole. Not only this, but scripts can be exploited to grant unauthorized access, letting a hacker gain information such as email addresses, passwords and protected content like credit card numbers. Some of this is dependent on the server environment (most commonly Apache or Microsoft IIS) on which the scripting language, such as PHP, Ruby, Python, Perl or ASP, is running, and therefore is not necessarily down to the web developer themselves to maintain. However, stringent testing of web applications before public release is encouraged to prevent such exploits from occurring. Keeping a web server safe from intrusion is often called Server Port Hardening.

Many technologies come into play when keeping information on the internet safe as it is transmitted from one location to another. For instance, Secure Sockets Layer (SSL) certificates are issued by certificate authorities to help prevent internet fraud. Many developers employ different forms of encryption when transmitting and storing sensitive information. A basic understanding of information technology security concerns is often part of a web developer's knowledge. Because new security holes are found in web applications even after testing and launch, security patch updates are frequent for widely used applications. It is often the job of web developers to keep applications up to date as security patches are released and new security concerns are discovered.

Recent trends in the sector

Given the rapid growth of this sector, several companies have started to use offshore development in China, India and other countries with a lower cost per developer. Several new Web 2.0 platforms and sites are now developed offshore while the entrepreneurs and management are located in Western countries such as the US, UK and EU.
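As an illustration of the sanitization point discussed above, here is a minimal sketch in Python using its built-in sqlite3 module; the table and column names are made up, and real applications would use whatever database driver their stack provides.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice@example.com', 0)")

user_input = "' OR '1'='1"  # a classic SQL injection payload

# BAD: string concatenation lets the payload rewrite the query logic.
injectable = "SELECT * FROM users WHERE email = '" + user_input + "'"
print(conn.execute(injectable).fetchall())  # returns every row!

# GOOD: a parameterized query treats the input as data, not as SQL.
safe = "SELECT * FROM users WHERE email = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns []
```

The design point is the same in PHP, Perl, Ruby or ASP: keep user-supplied data out of the query text and pass it through the driver's placeholder mechanism instead.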
I am not a specialist in this topic (nuclear rocketry), but I am an older, well-experienced aerospace/mechanical engineer, and I am well-read about these things. I have no hard numbers here, but these concepts really do look feasible and fruitful for manned Mars missions (and more). Using solid-core nuclear rockets as propulsion in anything resembling a safe manner is not a trivial issue, to be sure. I do think the applications of orbit-to-orbit transport, and planetary landing, end up addressing this risk entirely differently. For the orbit-to-orbit transport, most interestingly a Mars mission, issues of artificial gravity should interact very constructively with the need to provide radiation shielding from your reactor. There is a need to stage-off emptied propellant tanks after every “burn”, but if the design comprises a set of docked modules, it is easy to reconfigure into the same length “slender baton” shape at each stage-off. Spin the “baton” end-over-end for gravity: 56 m radius at 4 rpm is 1 full gee at a tolerable spin rate. (This is true even for chemically-powered designs.) See Figure 1. This reconfigurable “slender baton” shape not only maintains radius for artificial gravity at low rpm, it also maintains the much longer distance that is so very necessary for getting shielding benefits out of your remaining propellant tanks. Somewhere around 40 meters of propellant tank fluids and structures should be quite effective at shielding the crew from nuclear radiation, during or between “burns”. The lander is a vastly different proposition. Compact as it has to be for landing stability, shielding “steady state” by distance with tanks and fluids is impossible. The alternative is tons of lead or concrete, etc, also very undesirable. But since the descent and ascent “burns” are brief (minutes only), there is no need to shield “steady state”. Using what little tankage-and-structures shielding benefit that there is, the crew need only endure brief intense exposures, integrating to a very modest accumulated dose, actually. But this does require that the crew evacuate to a surface shelter remote from the lander during the surface stay. And you keep your distance from these landers in orbit, too, except when in use. See Figure 2. However, the numbers show such landers could fly both descent and ascent on a single fueling, and in a very practical design with significant cargo capability, at Mars. The intensity of the exposures might be mitigated slightly by a shift to thorium reactors instead of uranium. This gets the worst-offender plutonium-239 out of the picture. But, fission leaks neutrons, no matter what, so it remains a very serious risk. I like the thorium approach better than uranium, in part because of the slightly-reduced danger from shut-down cores, but mostly because it is a more plentiful fuel. But, I recognize that we have to start with what we know: uranium. Dangers of radiation from a contained core after engine shutdown (the worst risk of all) could be mitigated greatly by the open-cycle gas core concept, which is essentially an “empty steel can” between “burns” (see Figure 3). The light-bulb concepts feature a retained core, and suffer the same risks as solid core after engine shutdown. Only induced radioactivity in the engine shell is still a problem, and that is far less intense, and it decays far quicker. 
That's why I'm such a fan of developing the open-cycle gas core technology as soon as possible, although we still have to start with what we know, that being a highly-enriched uranium solid core. No one has ever actually yet built and tested an open-cycle gas core engine, however.

I am also a very big fan of revising the nuclear rocket engine to use water instead of liquid hydrogen as the propellant fluid. I think the better heat-absorbing characteristics of the water may allow higher reactor power levels, thus offsetting in part the loss of specific impulse due to the higher molecular weight. This has never been implemented and tested. The water provides a better radiation shield, and logistically is far easier to handle and store in space. Further, it seems to be very widely available as native ice at many interesting destinations in the solar system, including Mars.

There are several related articles on this site where I have posted design concepts and estimated performance numbers for "typical" designs, for both the orbit-to-orbit transport and the landers. The best and most realistic of these articles are listed below, by date, title, and a content summary:

9-6-11 Mars Mission Second Thoughts Illustrated - looks at a revision of the original modular design in my original Mars Society convention paper, one with solid core nuclear/min energy transfer propulsion, instead of gas core/fast trip. Otherwise, the mission scenario and hardware designs are the same as that paper. This is revisited, and the discussion extended, in 4-23-12 Update to Mars Mission Design.

7-25-11 Going to Mars (or anywhere else nearby) the posting version - is the posted-here version of my original Mars mission paper at the Mars Society convention in Dallas, Texas, August 2011. The transit vehicle baseline propulsion in that paper was gas core nuclear/fast trip. The reusable solid core nuclear single stage lander performance is outlined very well in that paper. This lander design is very over-conservative, as no credit was given to aerodynamic drag during descent: the descent delta-vee was assumed to be the same as the ascent delta-vee.

7-19-12 Rough-Out Mars Mission with Artificial Gravity - this is the first analysis I ran that deliberately explores the integration of spin-generated artificial gravity into a slow-boat mission by using reconfigurable docked modules to build the orbital transport vehicle. This one assumes the same 60-ton reusable nuclear landers as the original paper, plus solid core nuclear/slowboat propulsion. The landers go with the transport in this analysis; in the original paper, they went separately. But the lander propellant supply still goes separately to Mars in this article. This design uses the 56 m radius figure at 4 rpm to provide 1 full gee of artificial gravity.

6-30-12 Atmosphere Models for Earth, Mars, and Titan - this provides realistic and traceable data for the "typical" Mars entry environment for lander design purposes. (Typical atmospheres for Earth and Titan are also defined.) I got these data from a posted NASA paper, cited within.

7-14-12 "Back of the Envelope" Entry Model - this provides a traceable and realistic means of quickly estimating how much velocity reduction can be achieved on descent to Mars (or any other location with an atmosphere), given a ballistic coefficient, a velocity at entry interface, and a descent angle at entry. I corrected the heat transfer model from a 1956-vintage warhead re-entry calculation that was discussed in the same posted NASA paper.
The dynamics were fine.

8-10-12 Big Mars Lander Entry Sensitivity Study - sensitivity study of end-of-entry altitude to ballistic coefficient, primarily for grazing entry from low Mars orbit. The inherent grazing entry angle from low Mars orbit is important, and this shows up in the sensitivity study. Run with the corrected 1956-vintage model.

8-12-12 Direct-Entry Addition to Mars Entry Sensitivity Study - grazing entry for a typical interplanetary direct transfer speed. As expected, higher entry speeds do put end-of-hypersonics at a much lower altitude. Run with the corrected 1956-vintage model.

8-28-12 Manned Chemical Lander Revisit - a two-stage non-reusable chemical Mars lander design and performance estimate, using the entry results and direct rocket braking for the descent propellant requirement. Ascent is "standard" rocket equation with experiential "jigger factors" for gravity and drag losses. This is a multi-engine approach, using slight cant for supersonic retro plume stability. This article uses a more realistic ballistic coefficient than the design presented in an earlier article. Run with the corrected 1956-vintage model.

Some closely-related discussions are given in two very recent articles:

12-31-12 Mars Landing Options - this one discusses the choices to be made (and their effects on mission design) of in-situ return propellant manufacture versus bringing the ascent propellant with you, and of landing one vehicle at any given site versus landing several close enough together to interact effectively. Too close is also a bad outcome.

12-31-12 On Long-Term Sustainable Interplanetary Travel - this one discusses the merits of choosing a solid-core nuclear propulsion design revised to use water instead of hydrogen as the propellant fluid. Future developments that might prove beneficial are also explored.
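As a quick check of the artificial-gravity figure quoted earlier (56 m radius at 4 rpm for one full gee), here is a minimal sketch; the formula a = ω²r is standard, and the numbers are the article's own.

```python
import math

def spin_gravity_gees(radius_m, rpm):
    omega = rpm * 2 * math.pi / 60          # spin rate in rad/s
    return omega ** 2 * radius_m / 9.80665  # centripetal accel, in gees

print(f"{spin_gravity_gees(56, 4):.2f} gee")  # ~1.00 gee, matching the text

# Solving a = omega^2 * r the other way: radius needed for 1 gee at 4 rpm.
omega = 4 * 2 * math.pi / 60
print(f"{9.80665 / omega ** 2:.1f} m")  # ~55.9 m
```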
<urn:uuid:22bfa88d-134b-4d35-b07c-78ec698834c9>
2.84375
1,898
Personal Blog
Science & Tech.
37.75974
Search "Sun songs" on the Internet and you'll see numerous listings. From the Beatle's "Here Comes the Sun" to the Fifth Dimension's "Aquarius/Let the Sun Shine In," sunlight and warmth make for catchy lyrics. As the closest star to our planet and something critical for all life, the Sun resonates deeply with human emotions. It's also a great source for long-term, renewable energy. All of the world's fussing about oil and coal seems misplaced when an obvious energy machine passes over our heads every day. The situation is like a subsistence farmer starving from failed crops while plentiful game animals stroll through the fields. The Sun is essentially a big generator — a beautiful example of Einstein's E=MC2 in which matter is converted into massive amounts of energy. And it’s there for the taking — once our ingenuity finds a way to make more efficient use of it. There are numerous ways in which the Sun can be an energy source. Heat and visible light are obvious, but infrared and ultraviolet radiation, the solar wind of charged particles passing through space, magnetic fields and gravity are other potential ways. Undoubtedly, additional solar resources exist that we haven't discovered yet. And, with all the debate about global warming, why not capture and use the extra solar heat trapped in the atmosphere? A primary challenge with tapping the sun as a resource is for society as a whole to develop a mindset that focuses on solar power. Similar to America's audacious goal in the 1960s to reach the moon in less than 10 years, we need a national challenge to become energy independent by a fixed time, in large part relying on the Sun. European countries have been quicker to implement solar power than has the U.S. While we've been installing photovoltaics in limited ways, the Germans, for example, have been a leader in using the panels to generate electricity. They put them on rooftops, as well as farmland, the sides of highways and other open spaces. We can do the same, and more importantly, develop the next generation of solar technologies. American companies are creating some imaginative new systems, but continued investment in primary scientific research and practical business R&D are critical. As our country and world think through our energy woes, it will be important to keep the Beatle’s words in mind: "here comes the sun, and I say it's all right" (well, maybe everywhere except Seattle, where we've had an especially gray summer). (Note: If you are interested, check out our past "Brandner Takes")
<urn:uuid:b9f1289a-bab6-4204-afcd-e78ae8e8e448>
2.765625
533
Personal Blog
Science & Tech.
45.67102
Yes, you read that right. It seems impossible unless you grew up in a very cold climate. The story of the theory of freezing is an example of science marching backwards, of a case when folklore was correct instead of establishment science. The fact that hot water freezes faster than cold has been known for many centuries. The earliest reference to this phenomenon dates back to Aristotle in the fourth century B.C. The phenomenon was later discussed in the medieval era, as European physicists struggled to come up with a theory of heat. But by the 20th century the phenomenon was only known as common folklore, until it was reintroduced to the scientific community in 1969 by Mpemba, a Tanzanian high school student. Since then, numerous experiments have confirmed the existence of the “Mpemba effect”, but have not settled on any single explanation. Later, in the 1600s, it was apparently common knowledge that hot water would freeze faster than cold. In 1620 [Francis] Bacon wrote “Water slightly warm is more easily frozen than quite cold”, while a little later Descartes claimed “Experience shows that water that has been kept for a long time on the fire freezes sooner than other water”. In time, a modern theory of heat was developed, and the earlier observations of Aristotle, Marliani, and others were forgotten, perhaps because they seemed so contradictory to modern concepts of heat. However, it was still known as folklore among many non-scientists in Canada, England [15-21], the food processing industry, and elsewhere. It was not reintroduced to the scientific community until 1969, 500 years after Marliani’s experiment, and more than two millennia after Aristotle’s “Meteorologica I”. The story of its rediscovery by a Tanzanian high school student named Mpemba is written up in the New Scientist. The story provides a dramatic parable cautioning scientists and teachers against dismissing the observations of non-scientists and against making quick judgements about what is impossible. If you’re still skeptical that warm water can freeze faster than cold water under some conditions, hear this logic and tremble with misgivings. Remember that a mole of water has ~10²³ particles moving in 3 dimensions, i.e. the full mathematical description has ~10²⁴ parameters for 3-position and 3-momentum of each particle. If you treat temperature as uniform, that’s just one parameter — not the whole story. Ice is a crystal structure, so if the heat allows the molecules to bump into place, they could solidify faster than slow-moving molecules.
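As a toy illustration of how "hot freezes first" can fall out of an otherwise ordinary cooling model, here is a sketch of one proposed mechanism: previously-heated water holds less dissolved gas and so nucleates ice at a warmer temperature, while cold tap water can supercool several degrees before freezing begins. The rate constant, freezer temperature, and nucleation temperatures below are made up purely for illustration.

```python
import math

def minutes_to_nucleation(T0, T_nucleate, T_env=-7.0, k=0.02):
    """Newtonian cooling: minutes for water starting at T0 (deg C) to
    reach its nucleation temperature in an environment at T_env.
    k is a cooling-rate constant per minute; all values illustrative."""
    return math.log((T0 - T_env) / (T_nucleate - T_env)) / k

# Hypothetical numbers: boiled water nucleates at -2 C, cold tap water
# supercools to -6 C before ice forms.
hot  = minutes_to_nucleation(T0=90.0, T_nucleate=-2.0)   # ~148 min
cold = minutes_to_nucleation(T0=30.0, T_nucleate=-6.0)   # ~181 min
print(f"hot: {hot:.0f} min, cold: {cold:.0f} min")
```

Under these (invented) conditions the initially hot sample begins freezing first, despite having had much further to cool.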
<urn:uuid:1761e0f6-95c3-4694-8816-187dc71040cb>
3.765625
556
Personal Blog
Science & Tech.
43.436299
Geology and Geomorphology

The Spring River Basin (Figure Ge02) is located along the border between the Osage Plains and the Springfield Plateau (MDNR 1986). The Osage Plains are a subdivision of the Central Lowland Physiographic Region and encompass the northwest portion of the Spring River Basin. This is an unglaciated area of smooth to rolling plains with low relief formed on Pennsylvanian sedimentary rock (MDNR 1986). The Springfield Plateau forms the western-most member of the three subdivisions of the Ozark Plateau and encompasses most of the Spring River Basin. Elevations range from 1,000 to 1,700 feet above mean sea level (msl). Mississippian limestones underlie the region, and karst features are locally prominent. The southern and eastern portions of the Spring River Basin, including the eastern and southern portions of the North Fork of the Spring River watershed and the Turkey Creek, Shoal Creek, and Center Creek watersheds, have surface layers composed primarily of Mississippian-age limestones (Figure Ge01) (MDNR 1984). A few remnants of Pennsylvanian sandstones and shales are dispersed throughout this area. A substantial portion of this area lies in the Burlington-Keokuk limestone, within which most springs in the area are formed. Springs are relatively common, but generally low yielding (Table Ge01). Base flows are well sustained during dry periods. The northwest portion of the basin, which makes up the western portion of the North Fork of the Spring River watershed, lies within deposits of shale, sandstone, siltstone, limestone, clay, and coal of Pennsylvanian age (MDNR 1984). Springs are poorly developed, infiltration to subsurface strata is limited, and base flows are poorly sustained during dry periods. Three major soil regions are represented within the Spring River Basin; these are Cherokee Prairies, Ozark Borders, and Ozarks (MDNR 1986). Alluvial soils along major stream courses are assigned to the Cherokee Prairies category. Soils in the Cherokee Prairies region historically supported native vegetation consisting primarily of prairie grasses. These soils range from acidic, poorly drained soils to soils which are excessively well drained, droughty, and infertile. Ozarks soils are variable, and productivity encompasses a wide range. Ozarks soils may be stone free, but stone content can exceed 50 percent in some areas. Loess-capped soils and soils located in valleys may be fertile and support improved pastures and grain farming. The Ozark Borders region contains both forest soils and areas of transition between forest-derived and prairie-derived soils. Slope, parent materials, climate, and landforms all contribute to a wide variety of distinct soil types in this region. Soil erosion ranges from 5 to 9 tons/acre/year from sheet and rill erosion on tilled lands, 2.5 to 5 tons/acre/year from sheet and rill erosion on permanent pasture, less than 0.25 tons/acre/year from sheet and rill erosion on non-grazed forests, and 100 to 199 tons/square mile from gully erosion. Approximately 1.4 tons of sediment/acre/year actually reach impoundments and streams within the basin. The sources of eroded sediment are derived as follows: 76% from sheet and rill erosion; 14% from gully erosion; 3% from streambank erosion; and 7% from urban and built-up areas (Anderson 1980). Stream orders were assigned to all streams in the Missouri portions of the basin using 7.5 minute topographic maps. There are a total of 144 third order and larger streams in the basin.
Of this total, 111 are third order, 20 are fourth order, six are fifth order, three are sixth order, and one (Spring River) reaches seventh order before leaving Missouri. Total stream mileages by order are: 1) third order - 593.8 miles; 2) fourth order - 256.8 miles; 3) fifth order - 106.7 miles; 4) sixth order - 225.0 miles; and 5) seventh order - 128.3 miles. Overall, third order and larger streams in the Missouri portion of the basin total 1,310.5 miles. The major streams in the basin, with their respective lengths and orders, are listed in Table Ge02. The basin has been divided into five major sub-basins: Upper Spring River, Lower Spring River/Center Creek, Shoal Creek, and the upper and lower North Fork of the Spring River. Table Ge02 contains watershed areas for fifth order and larger streams in the basin, summarized from Funk (1968) or as determined using available 1:100,000 scale topographic maps. The Spring River and its tributaries drain approximately 2,271 square miles in Missouri. The three sixth order streams, North Fork of the Spring River, Shoal Creek, and Center Creek, drain 640, 472, and 302 square miles, respectively. The fifth order streams have watersheds ranging from 39 to 100 square miles. Stream gradient plots for all third order and larger streams were produced using U.S. Geological Survey (USGS) 7.5 minute topographic maps. This information is available from the Missouri Department of Conservation's (MDC) Southwest Regional Office in Springfield, MO. Average gradients were calculated for third order and larger reaches of the Spring River, North Fork of the Spring River, Shoal Creek, and Center Creek (Table Ge03). Channel gradients reflect the transitional Ozarks/Prairie topography of the basin. The higher gradients of Shoal Creek are more typical of those found in Ozark streams, while the lower gradients of the North Fork of the Spring River are more typical of a prairie stream. The gradients for Spring River and Center Creek are intermediate between the two.
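The stream-order counts above follow the usual Strahler convention. As a quick illustration of how that convention works (not the report's actual GIS method), here is how Strahler order propagates through a drainage network:

```python
def strahler_order(tributaries):
    """Strahler order of a stream segment given the orders of its
    tributaries (an empty list means a headwater, which is order 1)."""
    if not tributaries:
        return 1
    top = max(tributaries)
    # Order rises only when two or more tributaries tie at the maximum.
    return top + 1 if tributaries.count(top) >= 2 else top

# Two first-order headwaters meet: second order.
assert strahler_order([1, 1]) == 2
# A second-order stream absorbing a first-order tributary stays second order.
assert strahler_order([2, 1]) == 2
# Two second-order streams meet: third order.
assert strahler_order([2, 2]) == 3
```

This is why third order and larger streams are a small fraction of the total network: each step up in order requires the confluence of two streams already at the order below.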
<urn:uuid:e474fb17-2fd6-44cc-ab8e-d878c4b1a13b>
4
1,207
Knowledge Article
Science & Tech.
53.093999
Read all about Pythagoras' mathematical discoveries in this article written for students. This article for pupils and teachers looks at a number that even the great mathematician, Pythagoras, found terrifying. If the yellow equilateral triangle is taken as the unit for area, what size is the hole? An introduction to proof by contradiction, a powerful method of mathematical proof. How many differently shaped rectangles can you build using these equilateral and isosceles triangles? Can you make a square? What fractions can you find between the square roots of 56 and 58? Ranging from kindergarten mathematics to the fringe of research, this informal article paints the big picture of number in a non-technical way suitable for primary teachers and older students. The number 2.525252525252.... can be written as a fraction. What is the sum of the denominator and numerator? Using the interactivity, can you make a regular hexagon from yellow triangles the same size as a regular hexagon made from green triangles?
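For reference, the standard technique behind the repeating-decimal puzzle above works like this (shown for the quoted number; multiplying by a power of ten shifts the repeating block so it cancels on subtraction):

```latex
\begin{align*}
x   &= 2.\overline{52} \\
100x &= 252.\overline{52} \\
99x &= 100x - x = 250 \quad\Longrightarrow\quad x = \tfrac{250}{99}
\end{align*}
```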
<urn:uuid:ac47afc0-9c6f-4dff-b22e-7dd770cc73aa>
3.703125
215
Content Listing
Science & Tech.
44.950778
Temporary files and filenames.
- TemporaryFile(mode='w+b', bufsize=-1, suffix='') - Create and return a temporary file (opened read-write by default).
- gettempdir() - Function to calculate the directory to use.
- gettempprefix() - Function to calculate a prefix of the filename to use.
- mktemp(suffix='') - User-callable function to return a unique temporary file name.

__file__ = '/usr/lib/python1.6/tempfile.pyc'
__name__ = 'tempfile'
_pid = None
counter = 0
tempdir = None
template = None
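For context, here is how the modern descendant of this module is typically used (Python 3 API; the 1.6-era bufsize argument is gone, and the context-manager form replaces manual mktemp() plus open() plus unlink()):

```python
import tempfile

# TemporaryFile cleans up after itself when used as a context manager.
with tempfile.TemporaryFile(mode='w+b') as f:
    f.write(b'scratch data')
    f.seek(0)
    print(f.read())           # b'scratch data'

print(tempfile.gettempdir())  # e.g. /tmp
```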
<urn:uuid:4aeae4cb-97db-4adb-93c0-e352dfb8ce8a>
3.203125
132
Documentation
Software Dev.
30.628636
This picture is of the Alpha-Monocerotid meteor outburst in 1995. The Perseid meteor shower, usually the best meteor shower of the year, peaks in August. Over the course of an hour, a person watching a clear sky from a dark location might see as many as 100 meteors. Meteors are produced by small pieces of rock that have broken off a comet and continue to orbit the Sun. The Earth travels through the comet debris in its orbit. As the small pieces enter the Earth's atmosphere, friction causes them to burn up. Credit: S. Molau and P. Jenniskens, NASA Ames Research Center
<urn:uuid:bf2c3808-2b65-4d84-a26d-b62a5e6b52cb>
3.6875
129
Truncated
Science & Tech.
67.823162
Among the exciting preliminary discoveries made by our team of RAP scientists on the eastern edge of the Eastern Kanuku Mountains along the Kwitaro River:
- 198 species of plants (57 families)
- 73 species of large to medium fish
- 193 bird species
- 31 bat species
- 4 rodent species
- 3 opossum species

|The round-eared bat, Tonatia silvicola.|

As a result of this RAP expedition, the Kanuku Mountains now have the highest bat diversity recorded for a single area in the world! Some of these species numbers are new records for the Kanuku Mountains. In science-speak, a "new record" means a species that has never been recorded in that area until now but is already known to science. Finding a species in this location may mean that the range of where these animals are known to live is extended. Assumptions we make about the climate and environment they exist in may change. Other species usually associated with these plants or animals may also be present but as yet unseen, because the RAP team ran out of sampling time to find them. Usually new records for an area indicate that further studies are needed. When I was transmitting these dispatches from a remote place in the Rewa, I spotted a heron that is not supposed to be in Guyana according to the bird books. But I saw it, and it lives there. Books represent what science has recorded, not what really exists in the world. So when we see animals and plants never recorded in an area, we expand our scientific and practical knowledge.

|Last hill on expedition in the Kanuku|

The team and I also added a new camera trap in those mountains that photographs ocelot, margay, and red brocket deer, among other animals, in the hope of catching glimpses of new and unrecorded species in the area.

LEARN MORE: Download the Full Preliminary Report
EXPLORE: More expeditions
<urn:uuid:63a857f0-cded-4a20-ae12-0260beff0bfb>
3.515625
496
Knowledge Article
Science & Tech.
45.369706
Glacier Changes

Financially supported by UNEP and with the help of DEWA/GRID-Europe, the World Glacier Monitoring Service (WGMS) and UNEP launched a new publication on glacier changes. During the 20th anniversary celebrations of the Intergovernmental Panel on Climate Change (IPCC), in September 2008, UNEP/DEWA and the WGMS released the report Global Glacier Changes: Facts and Figures. The report presents the latest fluctuations of glaciers and ice caps and underlines the overall trend of glacier retreat. Among the conclusions of the report: given the urgency of climate change and the need for scientifically-based adaptation strategies, it is now essential to re-initiate interrupted long-term series in strategically important regions. It is equally urgent to strengthen the monitoring network in those regions which at the moment have sparse coverage, and to include the latest technologies, such as high-resolution remote sensing, to complement the traditional field observations. The report is available from www.grid.unep.ch/glaciers

With the support of several UN organizations, including UNEP, WGMS also publishes the five-yearly series 'Fluctuations of Glaciers' (FoG), which includes internationally collected, standardized data on changes in glaciers throughout the world. The objectives of the publication are to reproduce a global set of data which affords a general view of the changes, encourage more extensive measurements, invite further processing of the results, facilitate consultation of the further sources, and serve as a basis for research. In fact, this standardized data publication can be regarded as a working tool for the scientific community, especially concerning the fields of glaciology, climatology, hydrology and Quaternary geology. As such, this effort also contributes to the strengthening of the scientific basis of UNEP's assessment activities, requiring that sound assessments must be based on reliable data. As a key component, the Global Environment Outlook reporting process greatly benefits from the continued data collection on changes in glacier mass and volume. Fluctuations of Glaciers VIII for the years 1995-2000 is available both as a hardcopy and pdf (see http://www.wgms.ch/fog.html). The FoG series is complemented by regular publications of the Glacier Mass Balance Bulletin (MBB), designed to speed up and facilitate access to information concerning glacier mass balances by reporting measured values from selected reference glaciers at two-year intervals. The MBB reports are available at http://www.geo.unizh.ch/wgms/mbb.html.
<urn:uuid:d8bc1942-12c8-4818-9413-398e534a2db2>
2.875
535
Knowledge Article
Science & Tech.
31.695711
Spaceflight is inherently risky, and success is never assured. Case in point: on Tuesday an unmanned Russian cargo spacecraft failed in its attempt to dock with the International Space Station, the Associated Press reported. The craft was expected to try again on Sunday, after engineers try to figure out what went wrong. Embarrassing? Yes. But in the 55 years since Sputnik blasted into orbit to inaugurate the space age, space agencies have endured bigger black eyes. Leaving aside fatal accidents like the Challenger and Columbia space shuttle disasters, NASA has weathered some very embarrassing moments: satellites that inexplicably fell silent. Unmanned spacecraft that became marooned in orbit--or crashed to earth. Craft that returned safely to earth--only to be lost at sea. Planetary probes that...well, you get the picture. And if NASA has gotten its share of shiners (along with many glorious successes), so have other national space agencies. The European Space Agency (ESA), the Soviet Union (and now Russia), Japan, South Korea, and North Korea have all had their own embarrassments. Want to see the biggest space agency black eyes ever? Keep clicking to see our fascinating photo gallery...

Correction: A previous version of this story referred to the Challenger and Discovery disasters. In fact, it was the shuttle Columbia, not Discovery, that was lost.

Liberty Bell 7 Sinks
After the second U.S. manned space mission in 1961, the Liberty Bell 7 capsule was afloat in the Atlantic ocean, awaiting recovery. Astronaut Gus Grissom reported that he heard a dull thud as the hatch blew open--the module began filling with water, and Grissom had to struggle to escape before it sank. Did Grissom "screw the pooch," as Tom Wolfe famously wrote in The Right Stuff? Or was there a mechanical malfunction? The world may never know.

Phobos-Grunt was a 2011 Russian mission to return a sample of soil from Mars' moon Phobos. It would have been the first such sample ever returned to earth. Because of a malfunction in the craft's propulsion system, however, Phobos-Grunt never made it out of low-earth orbit. It remained there, crippled, until early 2012, when its orbit decayed and it disintegrated in the atmosphere off the coast of Chile.

Mars Climate Orbiter Burns Up
This 1998 orbiter mission was supposed to study Mars' climate history and determine if the planet ever held life-sustaining water. But the $125 million craft never made it to Mars, burning up in the atmosphere on the day it was supposed to enter orbit. What caused the costly incident? One team of engineers had performed their calculations in metric units, while another used English units.

Soyuz 5 Lands Hard
After participating in the first in-flight transfer of cosmonauts from one spacecraft to another, pilot Boris Volynov (pictured) led Soyuz 5 back toward Earth. When the service module failed to detach, the craft plummeted toward earth upside-down, subjecting Volynov to extreme heat. The craft subsequently crashed hundreds of miles off course in the Ural mountains, breaking several of Volynov's teeth in the impact and leaving him to await rescue in -38 degree (F) temperatures. He walked "a few kilometers" to a village and took shelter until help arrived. (Source: http://www.wired.com/science/discoveries/news/2009/01/dayintech_0116)

Hubble Telescope Can't See
Although NASA's Hubble Space Telescope has produced incredible images of space for more than 20 years, it was nearly a failure of galactic proportions.
After its launch in 1990, Hubble produced only blurry, grainy pictures due to a faulty mirror. Luckily, it was fixed three years later, and went on to capture some of the most incredible space images ever seen.

Kwangmyŏngsŏng Program Flounders
North Korea has attempted to launch its Kwangmyŏngsŏng satellites four times, and met with four failures. North Korean Unha-2 rockets, pictured, were used on the second and third launches but could not bring their satellite payloads into orbit. Fortunately, all the missions were unmanned.

H-IIA Rocket Self-Destructs
H-IIA is the launch system of the Japanese Aerospace Exploration Agency (JAXA). Its sixth launch was supposed to deliver a spy satellite into orbit, but a rocket booster failed to detach properly from the H-IIA, making the rocket too heavy to enter orbit. Ground control sent a destruct command to the rocket shortly after.

Glory Sputters Out
Climate science was dealt a blow in March 2011, when NASA's Glory satellite--which was supposed to study humans' effect on the Earth's atmosphere--failed to launch. The Taurus XL rocket carrying the observation satellite crashed into the Pacific after liftoff when Glory's protective casing didn't open.

Cluster Breaks Up
Cluster, a group of spacecraft built for the European Space Agency (ESA), was lost in 1996 when launch vehicle Ariane 5 failed to reach orbit. The launch vehicle, on its maiden voyage, self-destructed after a software error caused it to veer off-course. What was the problem? A numeric overflow: a glitch of the same family as the one doomsayers said would bring disaster at 11:59:59 on December 31, 1999. This time, however, the threat was real.

Apollo 13 'Fails Successfully'
NASA's most famous black eye came on April 14, 1970, when an oxygen tank on Apollo 13's service module exploded and the crew narrowly managed to abort the mission safely. The damaged service module that began the drama is pictured here, courtesy of Alan Bean (http://www.alanbeangallery.com/), Apollo astronaut and "first artist on another world."

Naro-1 was a carrier rocket created for South Korea's Korean Aerospace Research Institute. Both attempts to launch Naro-1 ended in failure, with one disintegrating in the atmosphere and another exploding before making it to space.
<urn:uuid:6573439b-774b-42a9-b43a-a8fc3635d218>
3.015625
1,282
Listicle
Science & Tech.
49.673097
Lectures / 16/09/2009, 7:30 pm: Open Space? Is the Sun a Source of Danger?

The Sun and space weather as factors in climate change. The Sun is the star that is closest to Earth. Even the most ancient civilizations were in no doubt as to how important the Sun is for life on Earth. Yet the Sun is by no means immune to change. Solar activity follows a cycle of eleven years. The latest research has revealed that this cycle was repeatedly subject to breakdowns in the past and that this had grave consequences for the global climate. In addition to these long-term changes, eruptions occur on the Sun that may have serious repercussions on our high-tech world: glitches in the Sun's radiation during an eruption may spell danger for astronauts in space; satellites may spin out of control due to surges in their electrostatic charge; the likelihood for short circuits to occur is higher; terrestrial radio traffic and GPS signals may be disturbed; computers in commercial airliners may run amok; and entire national grids may break down owing to excess voltage. All these influences are subsumed under the term space weather. It is the declared objective of modern solar research to make this space weather predictable.
- Hanslmeier, Arnold: The Sun and Space Weather, Springer, 2007
- Hanslmeier, Arnold: Habitability and Cosmic Catastrophes, Springer, 2009
- Hanslmeier, Arnold: Gefahr von der Sonne, BLV, 2001 (out of print; available only from the author)
- Vázquez, Hanslmeier: Ultraviolet radiation in the solar system, Springer, 2006
<urn:uuid:810c7e89-5cc6-4216-9805-c9cd3773f9c3>
3.3125
349
Content Listing
Science & Tech.
45.83946
1. In Ancient Greece a man named Democritus figured out that every single thing in the universe must be made up of tiny particles that can't be cut any further. He called these particles "atoms", which in Greek means "uncuttable".
2. An atom is the smallest particle of an element still having the same chemical properties of the element.
3. The particles smaller than an atom are called subatomic particles.
4. At the center of the atom is a core called a nucleus, which is made up of the subatomic particles called protons and neutrons. Whizzing around the nucleus at incredible speeds are tiny particles called electrons.
5. Electrons are extremely small. It would take nearly 2,000 of them to equal the mass of one proton.
6. There are over 100 different kinds of atoms.
7. By combining these atoms in different ways, we can make anything in the universe.
8. Atoms are so tiny that they can't be seen, even with the most powerful light microscope.
9. Even though protons and neutrons make up a dense nucleus, most of the atom itself is empty space.
10. When atoms combine together they form molecules.
<urn:uuid:e6ae78f3-a60d-4714-91ad-fe11432255f4>
3.765625
258
Listicle
Science & Tech.
60.418638
What makes High and Low pressure, and is one more common during particular seasons? High pressure is created by sinking, downward-moving air, while low pressure is created by rising air. That can be the result of global weather patterns...or very localized ones. For example...sinking cold air can create local zones of high pressure, while heating of the ground can result in rising air...and local zones of low pressure. In the Pacific northwest, high pressure is most common during the summer to early autumn months, while low pressure is most common during the late autumn to winter months.
<urn:uuid:2cfaf600-388c-4671-b287-a0f953bdddfd>
3.25
118
Knowledge Article
Science & Tech.
60.466955
It’s not hard to notice when your co-worker is grouchy, your friend is exhausted, or your boss is overjoyed. Without recognizing it, we easily pick up on other people’s emotions by registering certain behavioral cues. In turn, we understand whether we need to back off, lend a helping hand, or, in the case of the boss, ask for a raise. Now comes the question: If we can do this, then why not computers? Why not robots? Indeed, by picking up on some of these same emotional traits, robots today are learning to act more naturally around their human counterparts.
<urn:uuid:44873754-961e-4c8b-9c40-d1d585c09b03>
2.6875
128
Truncated
Science & Tech.
60.191765
The exploration of the Martian surface has begun. The first trip, a mere three metres, took 78 seconds. Nearly two weeks after it bounced to a landing on the barren plain of Gusev Crater, NASA's Spirit rover is off on its search for signs of water - and life. Cheers erupted and champagne flowed at mission control at the Jet Propulsion Laboratory in Pasadena, California, when engineers received the signal that Spirit's first trip was a success. It sets the stage for a sophisticated geological exploration of Mars. "We have six wheels in the dirt. Mars is now our sandbox, and we are ready to play and learn," JPL director Charles Elachi said. After rolling off its lander onto Martian soil, Spirit had to locate the sun, take pictures of its surroundings and wait for a passing Mars orbiter before it could phone home with confirmation of its success. Principal investigator Steven Squyres, of Cornell University, said: "This is the most significant milestone in the history of the project." Nearly all the high-risk manoeuvres of the craft are now complete, leaving geologists free to begin concentrating on exploration. The roll-off had originally been scheduled for Monday but was delayed because a collapsed airbag from the January 3 landing was partially blocking the planned exit ramp. To avoid the bag, engineers executed a 120-degree pivot of the rover on the lander so it could use a different ramp. The rover took one last look back at the lander, transmitting home a picture of the now-empty and useless platform. Engineers spent the rest of the mission day purging now-useless software from the rover and performing other housekeeping tasks to prepare for Spirit's 90-day journey over the Martian surface. Engineers will continue that process today and begin preparing the craft's instruments to look at the soil around the lander. Spirit will spend at least three days in its present location, about a metre from the lander, using its microscope to examine the soil and allowing engineers on Earth to practise with the other instruments. Spirit will then take off on its excursion at the stately pace of about four centimetres a second. Its first major goal will be a 200-metre wide crater about 300 metres to the north-east. Spirit's twin, Opportunity, is scheduled to land on Mars on January 24.
- Los Angeles Times, Reuters
<urn:uuid:e6880e75-47da-4ff6-afcc-357b9d6e8eb9>
3.03125
554
Truncated
Science & Tech.
48.309281
A picture-perfect pure-disk galaxy

Recent research suggests that bulge-less, or pure-disk, spiral galaxies like NGC 3621 are actually fairly common. February 3, 2011

The bright galaxy NGC 3621, captured here using the Wide Field Imager on the 2.2-meter telescope at the European Southern Observatory’s (ESO) La Silla Observatory in Chile, appears to be a fine example of a classical spiral. But it is in fact rather unusual — it does not have a central bulge and is therefore described as a pure-disk galaxy.

This picture of spiral galaxy NGC 3621 was taken using the Wide Field Imager at ESO’s La Silla Observatory in Chile. NGC 3621 is about 22 million light-years away. It is comparatively bright and can be well seen in moderate-sized telescopes. The data from the Wide Field Imager on the MPG/ESO 2.2-meter telescope used to make this image were selected from the ESO archive by Joe DePasquale as part of the Hidden Treasures competition. ESO and Joe DePasquale

NGC 3621 is a spiral galaxy about 22 million light-years away in the constellation Hydra the Water Snake. It is comparatively bright and visible through moderate-sized telescopes. This picture was taken using the Wide Field Imager on the MPG/ESO 2.2-meter telescope at the La Silla Observatory. Joe DePasquale selected the data from the ESO archive as part of the Hidden Treasures competition, and his picture of NGC 3621 was ranked fourth in the competition.

This galaxy has a flat pancake shape, indicating that it hasn't yet come face to face with another galaxy, as such a galactic collision would have disturbed the thin disk of stars, creating a small bulge in its center. Most astronomers think that galaxies grow by merging with other galaxies in a process called hierarchical galaxy formation. Over time, this should create large bulges in the centers of spirals. Recent research, however, has suggested that bulge-less, or pure-disk, spiral galaxies like NGC 3621 are actually fairly common.

This galaxy is of further interest to astronomers because its relative proximity allows them to study a wide range of astronomical objects within it, including stellar nurseries, dust clouds, and pulsating stars called Cepheid variables, which astronomers use as distance markers in the universe. In the late 1990s, NGC 3621 was one of 18 galaxies selected for a Key Project of the Hubble Space Telescope: to observe Cepheid variables and measure the rate of expansion of the universe to a higher accuracy than had been possible before. In the successful project, 69 Cepheid variables were observed in this galaxy alone.

Multiple monochrome images taken through four different color filters were combined to make this picture. Images taken through a blue filter have been colored blue in the final picture; images through a yellow-green filter are shown as green; and images through a red filter as dark orange. In addition, images taken through a filter that isolates the glow of hydrogen gas have been colored red. The total exposure times per filter were 30, 40, 40, and 40 minutes, respectively.
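To see why the Cepheid variables mentioned above matter, a back-of-the-envelope distance estimate from a single Cepheid's pulsation period and apparent brightness might look like the sketch below. The period-luminosity coefficients are representative textbook V-band values, and the observed magnitude is invented; neither reflects the Key Project's actual calibration or data.

```python
import math

def cepheid_distance_mpc(period_days, apparent_mag):
    """Distance to a classical Cepheid via a period-luminosity relation.
    Coefficients are representative V-band values, for illustration only."""
    abs_mag = -2.76 * (math.log10(period_days) - 1.0) - 4.16
    mu = apparent_mag - abs_mag               # distance modulus m - M
    return 10 ** ((mu + 5.0) / 5.0) / 1e6     # parsecs -> megaparsecs

# A hypothetical 30-day Cepheid observed at V ~ 23.6 in NGC 3621:
print(cepheid_distance_mpc(30.0, 23.6))
# ~6.5 Mpc, i.e. roughly the 22 million light-years quoted above
```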
<urn:uuid:cc9bce3d-4681-4365-aa7c-20d12ac6b79f>
3.21875
669
Knowledge Article
Science & Tech.
39.864156
Climate Change Impacts

Our climate is changing. Sea level rose over 7 inches in the last 100 years along our coast. Ocean temperatures have warmed by 3 degrees F in parts of New England in the last 50 years, while surface water salinity is becoming fresher. “We know the planet is absorbing more energy than it is emitting,” said James E. Hansen, Director of NASA’s Goddard Institute for Space Studies. “So we are continuing to see a trend toward higher temperatures." NASA scientists have produced a visualization that depicts the recent rise in global temperatures as felt over a span of 130 years.

Sea Level Rise
While an incremental increase in temperatures around the world of just a very few degrees may not seem like much, given the variation we experience over a year, or even over a day, it is enough to trigger some serious global changes. As temperatures creep upward, sea ice in the polar regions and glaciers around the world melt at an alarming rate, which in turn leads to an increase in global sea level. Records show an increase of seven inches for the past century. Read about how the Greenland and Antarctic ice sheets are losing mass at an accelerating pace. The Arctic Monitoring and Assessment Program has set a new estimate of global sea level rise of 35 to 63 inches by 2100, up from the 2007 projection of 7 to 23 inches, once the melting of Greenland's massive ice sheet is included in the calculations.

Frequency and Intensity of Storms
The global temperature increases that are already leading to sea level rise are putting heat energy into our oceans, lands, and atmosphere. This energy increase will drive much more active weather patterns, leading to a greater number of storms and an increase in storm severity. Climatic patterns on the planet are normally quite variable from year to year, and the factors that shape our climate are complex, but it is predicted that this variability will become more extreme and less predictable. The impact of future storms will exacerbate many of the other effects of climate change, including sea level rise, coastal storm surge, flooding, and drought. Even the seemingly modest increase in overall global temperature that we already experience is expected to lead to significant changes in climate patterns around the world. Some regions will see higher amounts of precipitation, while others will experience less, and some will experience highly variable and unpredictable climatic patterns. The regions that are expected to suffer from drought the most are those that are already arid, leading to significant crop failure with a significant impact on local populations.

As a direct result of human emissions of CO2, the ocean is becoming more acidic. It has absorbed about half of the CO2 that humans have generated over the last 200 years. Currently, the ocean is absorbing 22-25 million tons of CO2 a day, which has already caused a decrease in the ocean's pH! The average pH of ocean surface waters has been 8.2. It is now at 8.1, and scientists are predicting a further drop of 0.4 pH units if CO2 emissions are not cut dramatically. While this may not sound like much, it is. Already species at the lowest level of the ocean food chain are being seriously impacted. Animals that build calcium-based shells and skeletons may be on the road to extinction. According to fossil records, mass extinctions of living organisms on Earth have occurred five times previously, the last major event coinciding with the extinction of the dinosaurs 65 million years ago.
Due to a combination of factors, primarily habitat loss and climate change, life on the planet is now undergoing the sixth major extinction event. Of all the impacts of human-induced climate change, species loss is one that is entirely irreversible.

How will our communities and ecosystems adapt to changes that are already in the works? Good planning takes into account that the future will be different from the past, and needs to incorporate new information, including future climate change projections. More than ever, adaptive management is necessary. As a Regional Coordinator for the Mass Bays Program, SSCW is a partner in EPA's Climate Ready Estuaries pilot program with the Massachusetts Bays National Estuary Program. Salem Sound Coastwatch will provide a local community link between federal, state and municipal entities, as we learn how communities along the coast can prepare for climate change.

Climate Change and its many impacts are truly global issues that can seem much larger than any one individual could possibly combat. But rather than feel overwhelmed, there are lots of little things that each of us can do to help counter runaway carbon emissions and resulting changes to the planet's climate. These problems are not beyond our collective abilities to solve, but the window of opportunity to reverse many decades of human-induced global damage is closing. Read on to learn about just a few of many things we all can do to contribute to a better tomorrow for us and generations to come.
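Because pH is logarithmic, the seemingly small acidification numbers quoted above translate into large chemical changes. A quick check, using only the figures given in this article:

```python
# pH is -log10 of hydrogen ion concentration, so a pH drop of x
# multiplies [H+] by 10**x.
drop_so_far = 8.2 - 8.1
projected_drop = 0.4

print(10 ** drop_so_far)      # ~1.26: acidity is already up ~26%
print(10 ** projected_drop)   # ~2.5: a further 0.4-unit drop would more
                              # than double hydrogen ion concentration
```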
<urn:uuid:ca655a26-0d52-4f69-a951-bd8930caf2b6>
4.125
977
Knowledge Article
Science & Tech.
39.737288
Above is a MODIS visible satellite image, from the polar orbiting satellite “Aqua”, as it passed over the eastern Pacific ocean just off the coast of the Baja Peninsula of Mexico on April 28. On the upper left portion of the image is Guadalupe Island, which as of 2008 had a population of only 15! On this particular day, the volcanic Guadalupe Island created a barrier for the wind as it moved from northwest to southeast. As the flow is forced around the island and converges downwind, a vortex “street” develops, and continues over 100 miles south and east. These vortex “streets” form in part because the lower troposphere (the lowest part of the atmosphere) is very stable over the eastern Pacific, but the vortices wouldn’t be visible without the stratocumulus clouds that are also present. The phenomenon was named after fluid dynamicist and physicist Theodore von Kármán, and similar vortex streets can also be seen in the wake of ships traveling on rivers. As previously mentioned, these vortex streets form in very stable marine environments, and thus don’t result in the formation of significant adverse weather. They certainly are beautiful from outer space, however. This article from Wired Science shows several other examples from other parts of the world.
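The shedding rate behind an obstacle like Guadalupe Island can be estimated from the Strouhal relation f = St * U / D. The island width and wind speed below are rough guesses chosen only to show the order of magnitude involved.

```python
# Vortex shedding frequency from the Strouhal relation f = St * U / D.
St = 0.2          # typical Strouhal number for a bluff body
U  = 10.0         # wind speed, m/s (assumed)
D  = 30_000.0     # obstacle width, m (roughly Guadalupe Island; assumed)

f = St * U / D                 # shedding frequency, Hz
print(f)                       # ~6.7e-5 Hz
print(1 / f / 3600)            # one vortex shed every ~4 hours
```

The hours-long shedding period is why a single satellite image can capture a whole train of vortices stretching 100 miles downwind.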
<urn:uuid:5944b996-861e-4439-a9ab-bdac28bdd6e4>
3.90625
268
Knowledge Article
Science & Tech.
37.946741
Because the atmosphere is so thick on Venus, however, it's as if Earth had no continents to interrupt a swift equatorial ocean current. So, I can see how the effect on Venus' rotation would be significant. Once the air currents got rolling, they would take on a life of their own (just like bathtubs that drain in a clockwise direction in the north). So, the current winds on Venus are essentially a living fossil--evidence of a more primitive, prograde existence. (Check out papers written by Cheng Zhi Zhang in the Harvard database.) Zhang argues that the atmospheric tides are strong enough to counterbalance the tidal torque exerted by the Sun, and thus make possible the tidal Earth/Venus resonance that makes Venus always present the same side to Earth during Earth/Venus/Sun conjunctions.
<urn:uuid:9b30dc6b-80d7-4d08-b02a-c3306364398e>
2.96875
166
Comment Section
Science & Tech.
41.998922
Common dolphins are fond of coastal waters, but are also found well out to sea. Generally, they prefer surface temperatures greater than 10 degrees Celsius. These dolphins normally travel at 5 to 7 miles per hour (although they are known to reach speeds of 29 miles per hour when pursuing food), and can move up to 150 to 200 miles in a 48-hour period. When swimming, schools follow and dive over prominent features of the ocean bottom. Also, herd movements correlate with the seasonal shifts in population of certain fish. (Alpers, 1961; Baker, 1987; Schevill, 1974; http://whales.ot.com/)

Aquatic Biomes: benthic; reef; coastal
<urn:uuid:e21c8947-c4d1-4128-b76f-b0fcdd121e5c>
3.1875
150
Knowledge Article
Science & Tech.
65.793175
Radiation Assessment Detector (RAD)

Radiation Assessment Detector
About the size of a small toaster, the Radiation Assessment Detector will look skyward and use a stack of silicon detectors and a crystal of cesium iodide to measure galactic cosmic rays and solar particles that pass through the Martian atmosphere. Image credit: NASA/JPL-Caltech/SwRI

The Radiation Assessment Detector (RAD) is one of the first instruments sent to Mars specifically to prepare for future human exploration. The size of a small toaster or six-pack of soda, RAD will measure and identify all high-energy radiation on the Martian surface, such as protons, energetic ions of various elements, neutrons, and gamma rays. That includes not only direct radiation from space, but also secondary radiation produced by the interaction of space radiation with the Martian atmosphere and surface rocks and soils. To prepare for future human exploration, RAD will collect data that will allow scientists to calculate the equivalent dose (a measure of the effect radiation has on humans) to which people would be exposed on the surface of Mars. RAD will also assess the hazard presented by radiation to potential microbial life, past and present, both on and beneath the Martian surface. In addition, RAD will investigate how radiation has affected the chemical and isotopic composition of Martian rocks and soils. (Isotopes are atoms of the same element having the same number of protons but a different number of neutrons.)

A stack of paper-thin silicon detectors and a small block of cesium iodide measure high-energy charged particles coming through the Martian atmosphere. As the particles pass through the detectors, they lose energy, producing electron or light pulses. An internal signal processor analyzes the pulses to identify each high-energy particle and determine its energy. In addition to identifying neutrons, gamma rays, protons, and alpha particles (subatomic fragments consisting of 2 protons and 2 neutrons, identical to helium nuclei), RAD will identify heavy ions up to iron on the periodic table. RAD is lightweight and energy-efficient, so as to use as little of the Mars Science Laboratory's available mass and energy resources as possible.

Radiation Assessment Detector for Mars Science Laboratory
This instrument, shown prior to its September 2010 installation onto NASA's Mars rover Curiosity, will aid future human missions to Mars by providing information about the radiation environment on Mars and on the way to Mars. Image credit: NASA/JPL-Caltech/SwRI
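The "equivalent dose" mentioned above weights the absorbed energy by particle type. A minimal sketch of that bookkeeping follows; the weighting factors are the standard ICRP convention, but the dose numbers are invented and do not come from RAD.

```python
# Equivalent dose (sieverts) = sum over radiation types of
# absorbed dose (grays) * radiation weighting factor w_R.
W_R = {
    "gamma": 1,       # photons
    "proton": 2,
    "alpha": 20,      # heavy charged particles
    "heavy_ion": 20,
}

def equivalent_dose_sv(absorbed_gy):
    """absorbed_gy: dict mapping radiation type -> absorbed dose in grays."""
    return sum(dose * W_R[kind] for kind, dose in absorbed_gy.items())

# Invented daily surface doses, for illustration only:
print(equivalent_dose_sv({"gamma": 5e-5, "proton": 1e-4, "alpha": 2e-6}))
```

The same absorbed energy delivered by alpha particles or heavy ions thus counts 20 times more heavily toward human risk than gamma rays, which is why RAD identifies particle species rather than just totaling energy.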
<urn:uuid:7b4b2023-c613-416e-bef2-ef134c413941>
3.796875
512
Knowledge Article
Science & Tech.
21.014969
4.4. Individual and Orbital Masses of Double Galaxies

A remarkable observational discovery in the dynamics of galaxies in recent years has been the identification of non-Keplerian rotation in the peripheral regions of many galaxies (Shostak and Rogstad, 1973). The prevalence of such flat rotation curves was convincingly demonstrated by Rubin and her collaborators (1980, 1982). The constancy of rotational velocity V(R) to the optical edge of the galaxies, contrasted with the asymptotic Keplerian behaviour V ~ R^(-1/2) for a point mass, indicates the presence of invisible massive haloes around these galaxies. The maximal extent and total mass of the postulated circumgalactic haloes are almost unknown because of the obvious difficulty of measuring V(R) beyond the optical edge of the galaxies. We work around this difficulty by a different method, measuring the masses of galaxies from their orbital motions in isolated pairs in which the separation between components generally exceeds their apparent sizes. However, the mean orbital mass-to-luminosity ratio <f0> = 7.75 obtained in section 4.2 for double galaxies may be accounted for without any additional assumptions about invisible massive haloes. This apparent contradiction between two solid observational results deserves the most careful attention. The proper resolution of this contradiction may be reached by comparing the orbital estimates of masses with the sum of the masses determined from internal motions for pair members. Such an approach has been presented earlier (Karachentsev, 1974, Dickel and Rood, 1980, van Moorsel, 1983), but the small number of cases studied in comparison with the number required for a proper statistical investigation of the problem did not allow a clear result. There currently exists a large number of galaxies located in pairs for which individual masses have been measured from rotation curves, from profile widths of the 21-cm radio HI line, or from the velocity dispersion of stars around the nuclear region of the galaxy. Karachentsev (1985) attempted to put all of these results into a single uniform system, incorporating the three different ways of looking at the observational data.

1. The most direct calculation of the mass is based on measuring the rotation curve of a galaxy. For a model with a spherical distribution of mass contained within some specified radius R, the rotation curve has the form

V(R) = [G M(R) / R]^(1/2), i.e. M(R) = V^2(R) R / G,

where G is the gravitational constant. In the case of a flat rotation curve (V = constant), the mass increases linearly with radius, which may be described as saying that each R yields a distinct estimate of the mass. We will use the mass estimate corresponding to material within the standard isophote 25m/sq.arc sec.,

M25 = Vm^2 A25 / (2G), (4.37)

where Vm is the maximum rotation velocity in the galaxy and A25 is the linear diameter at the chosen isophote. With this technique we can obtain masses for 32 components of pairs. The basic data here are from our observations (Karachentsev and Mineva, 1984b). For five objects (double galaxies 27ab, 355ab, and 603a), estimates of Vm were taken from the works of Rubin and Ford (1983) and the Burbidges (1961, 1963). Note that for galaxies of late structural type it is important to include a thin disk in the model. This reduces the mass by 30% compared to a spherical model. For the moment we have ignored these distinctions.

2. Thanks to the work of Fisher and Tully (1981) and other observers, the line profile of the 21-cm HI line, Wp, has been measured for several thousand spiral galaxies.
Among these are 126 galaxies located in pairs of our catalogue. For galaxies whose angular diameter does not exceed the beam width of the radio telescope, the maximum rotational velocity may be obtained using the calibration presented by Fisher and Tully (1981),

Vm = W20 / (2 sin i),

where W20 is the width of the HI profile at 20% of maximum intensity, and i is the angle of inclination of the axis of rotation to the line of sight. To get the masses M25 in (4.37) we used the estimates of W20 presented by Fisher and Tully (1981). The system was augmented by further data from Hoffmeier et al. (1983), in which we compared estimates of the line width at heights of 20% and 25%. For several galaxies with measured Wp at the 50% level we used the empirical relation <W20> = 1.38 <W50>. The inclination of galaxies was determined from cos i = (b/a)25, where (b/a)25 is the axial ratio reduced to the standard isophote. This method has several limitations. For small values of i, errors in measuring the angle of inclination introduce non-negligible errors in the mass estimator. For close pairs it is possible to include both components in the measurement and measure a line profile coming simultaneously from both galaxies. Further, Lewis (1983) showed that for galaxies with narrow lines there is a tendency to overestimate the width of the line because of statistical fluctuations. This happens most often in estimates for galaxies of late types, of generally low luminosity. Finally, this method is inapplicable to the vast number of galaxies of early type, which are gas poor.

3. In recent years a very extensive sample of galaxies has emerged for which the mass can be estimated from the stellar velocity dispersion σV in the central regions of the galaxy. From the virial energy balance for galaxies with a density distribution following a de Vaucouleurs law, an estimate of the total mass is

M = k σV^2 Ae / (2 ε G), (4.39)

where Ae is the effective linear diameter within which half of the luminosity (or mass) of the galaxy is found, and the coefficient ε = 0.3358 (Poveda, 1958) is characteristic of a de Vaucouleurs profile. The dimensionless factor k incorporates the integration of the measured velocity dispersion along the radius of the galaxy. According to the data presented by Tonry (1983) we adopt k = 1/2. The effective diameter Ae is known for only a small number of galaxies. For 194 galaxies from the RCBG we have a mean relation between Ae and A25 (4.40), which will be used to determine the mass within the standard isophotal diameter of the galaxy, A25. The term cT in (4.40) incorporates a weak dependence on morphological type. Empirical values for this correction are: 0.00 (E), +0.06 (S0), +0.03 (Sa), -0.02 (Sb) and -0.05 (Sc). Therefore, the equivalent of (4.39), rewritten in terms of A25, follows as (4.41). In the work of Tonry and Davis (1981) and White et al. (1983), measurements of σV are given for 69 galaxies in common with our catalogued pairs. We have incorporated these mass estimates. The bulk of the galaxies in this last sample occur among the early types E and S0, which somewhat compensates for the selection bias of the first two methods. Altogether, the three methods give measurements of the individual masses for 227 galaxies in pairs. In the calculation by Karachentsev (1985) there were 119 pairs included for which individual masses were known either for both components or for the brighter one, when its luminosity exceeds 60% of the total luminosity of both pair members. That calculation also includes mass estimates for 35 components of double systems, the luminosity of which does not satisfy this lower limit.
For such galaxies, individual mass-to-luminosity ratios fin were calculated. Comparison of the mean values grouped according to the three methods shows that the various methods yield results which are the same to within the expected statistical errors. The overall distribution of 227 double galaxies according to their individual mass-to-luminosity ratios is shown in figure 26. It has an asymmetric appearance, close to a log-normal form, agreeing very well with the data of Shaw and Reinhart (1973). The mean value of fin is 7.3 ± 0.4 with standard deviation σf = 5.0. It is important to note here that <fin> for the components of pairs agrees very well with the analogous mean for isolated galaxies, <fig> = 7.0 ± 1.0 (Karachentsev and Mineva, 1984a), and further with the uncontaminated mean estimate of orbital mass-to-luminosity ratio 7.8 ± 0.7 obtained from a sample of 286 pairs (see table 8). In section 3.6 it was demonstrated that the luminosities of galaxies in pairs are tightly correlated with one another. An analogous empirical relation also appears for the individual masses of double galaxies. For 73 pairs the correlation coefficient for log M for the two components is +0.68, with 95% confidence interval [0.52, 0.81]. Part of this effect, obviously, arises from observational selection. In contrast to masses or luminosities, their ratio fin should not depend on the basic selection of double systems. Nevertheless, figure 27 demonstrates a much weaker correlation coefficient (+0.48) between log fin for pair members. The mutual association of mass-to-luminosity ratios for double galaxies apparently arises from the basic formation of pair members from common protogalactic surroundings. We will begin with this empirical form, and, by extrapolating it, assume that the value of fin is a constant for the components of pairs. This allows us to add to the 73 pairs with measured sums of individual galaxy masses, another 46 for which the mass of the fainter component is estimated from its luminosity. Therefore, we have a sub-sample of 119 pairs with orbital mass to total individual mass ratios

µ = M / (M1 + M2).

The distribution of values of log µ for 119 pairs is shown in figure 28. The logarithmic scale was chosen to include the entire range of log µ, more than six orders of magnitude. Analysing this distribution, we focus attention on four basic effects which can introduce a large dispersion in µ. We will look at each of these possible effects in turn.

1. Projection effects. If all of the mass of double systems were located within the optical boundaries of their components, the ratio of estimates of the orbital mass of the pair to the sum of the individual masses of the galaxies becomes identically equal to the projection factor (µ = η). For circular orbits, the factor will be bounded in the interval [0, 32/3π]. For the general case of elliptical motion with orbital eccentricity e, the interval of possible values of η extends somewhat higher: [0, (1 + e) 32/3π]. Using a random realisation of (4.11) and (4.12) we modelled the distribution of log η for various values of the eccentricity. Results for e = 0.05, 0.35, 0.65 and 0.95 are presented in figure 29. For pure circular motion, the distribution p(log η) rises monotonically with increasing η and peaks at a value log(32/3π) ≈ 0.53. For the other values of orbital eccentricity, the peak of the distribution on the right side becomes less pronounced, and the maximum of the distribution drops and moves towards lower values of η.
The sensitivity of the form of the distribution p(log η) to the value of the eccentricity allows determination of the type of orbital motion in the catalogue pairs. The best agreement with the observed distribution is for e = 0.25. The density distribution p(log η) for e = 0.25 is shown in figure 28 as the curved line. Low values of eccentricity found in this way agree very well with the results presented in section 4.2.

2. The role of fictitious pairs. The histogram in figure 28 presents the distribution in log µ of all double systems regardless of radial velocity difference or orbital mass-to-luminosity ratio. The information from the modelling (section 3.2) showed that around 44% of the objects in the catalogue would be non-isolated members of systems, or optical pairs. It is obvious that these false double systems have excessive values of µ in the region µ > 64/3π. The relative number of cases in this critical region is only 21%, which allows a simple discrimination of these fictitious pairs. We may introduce two pieces of evidence to support the notion that the tail of the distribution in figure 28 for µ > 64/3π consists of false pairs. Firstly, consider the pairs which satisfy the strictest isolation criterion (++). The relative number of fictitious pairs in this sub-sample should be less by an order of magnitude than in the whole sample. The strictly isolated double systems are shown with cross hatching in figure 28. Not one of these is in the critical region µ > 64/3π. The physical nature of double galaxies may also be shown by signs of interaction between them. Forty-eight pairs with known ratio µ exhibit tails, bridges, or a common atmosphere. Only two of these, numbers 481 and 483, have orbital mass to total galaxy mass ratios higher than the critical value. Both of these pairs are located in the central region of the large cluster in Hercules and their velocities are typical of the radial velocities found in this cluster. Most probably these are chance superpositions along the line of sight of cluster members which manage to imitate interacting pairs.

3. Errors in measured velocities. In calculating the orbital mass of pairs from (4.8) we incorporated the error σy in measuring the radial velocity difference y. Because of the quadratic dependence of M on y, estimates of the orbital mass will be systematically high. The unbiased estimate of orbital mass uses Mc = (32/3π) G^-1 X (y^2 - σy^2). For large values of σy the quantities Mc and µc = Mc / (M1 + M2) may produce spurious values which will falsify the analysis of the observational data. The general effect of errors is to dilute the strong maximum of the distribution p(log µ) and increase the fraction of pairs in the critical region, µ > 64/3π. Calculation of all these effects shows that the observed distribution of pairs according to the value of µ may be explained without incorporating any massive invisible haloes around double galaxies. In view of the considerable importance of this question we will examine it further below.
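As a numerical illustration of the unbiased orbital-mass estimator discussed above, the sketch below evaluates Mc for a hypothetical pair; the separation, velocity difference, and measurement error are invented values, not entries from the catalogue.

```python
import math

G = 4.30e-6                  # gravitational constant, kpc * (km/s)^2 / Msun
COEF = 32.0 / (3.0 * math.pi)

def orbital_mass(X_kpc, y_kms, sigma_y_kms=0.0):
    """Unbiased orbital mass Mc = (32/3pi) G^-1 X (y^2 - sigma_y^2) for a
    pair with projected separation X and radial velocity difference y
    measured with error sigma_y. Returns solar masses."""
    return COEF * X_kpc * (y_kms**2 - sigma_y_kms**2) / G

# Invented pair: 100 kpc projected separation, 120 km/s velocity
# difference, 30 km/s measurement uncertainty.
print(f"{orbital_mass(100.0, 120.0, 30.0):.2e}")   # ~1.1e12 Msun
```

Note how the sigma_y^2 subtraction matters: for noisy velocity differences comparable to sigma_y, the raw y^2 estimator would be dominated by measurement error, which is exactly the bias effect described in point 3.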
<urn:uuid:b271f137-5394-4899-9ddb-f585e39ed0d8>
2.90625
2,941
Academic Writing
Science & Tech.
50.137155
sigwait - wait for queued signals The sigwait() function shall select a pending signal from set, atomically clear it from the system's set of pending signals, and return that signal number in the location referenced by sig. If prior to the call to sigwait() there are multiple pending instances of a single signal number, it is implementation-defined whether upon successful return there are any remaining pending signals for that signal number. [RTS] If the implementation supports queued signals and there are multiple signals queued for the signal number selected, the first such queued signal shall cause a return from sigwait() and the remainder shall remain queued. If no signal in set is pending at the time of the call, the thread shall be suspended until one or more becomes pending. The signals defined by set shall have been blocked at the time of the call to sigwait(); otherwise, the behavior is undefined. The effect of sigwait() on the signal actions for the signals in set is unspecified. If more than one thread is using sigwait() to wait for the same signal, no more than one of these threads shall return from sigwait() with the signal number. If more than a single thread is blocked in sigwait() for a signal when that signal is generated for the process, it is unspecified which of the waiting threads returns from sigwait(). If the signal is generated for a specific thread, as by pthread_kill(), only that thread shall return. [RTS] Should any of the multiple pending signals in the range SIGRTMIN to SIGRTMAX be selected, it shall be the lowest numbered one. The selection order between realtime and non-realtime signals, or between multiple pending non-realtime signals, is unspecified. Upon successful completion, sigwait() shall store the signal number of the received signal at the location referenced by sig and return zero. Otherwise, an error number shall be returned to indicate the error. The sigwait() function may fail if: - The set argument contains an invalid or unsupported signal number. To provide a convenient way for a thread to wait for a signal, this volume of IEEE Std 1003.1-2001 provides the sigwait() function. For most cases where a thread has to wait for a signal, the sigwait() function should be quite convenient, efficient, and adequate. However, requests were made for a lower-level primitive than sigwait() and for semaphores that could be used by threads. After some consideration, threads were allowed to use semaphores and sem_post() was defined to be async-signal and async-cancel-safe. In summary, when it is necessary for code run in response to an asynchronous signal to notify a thread, sigwait() should be used to handle the signal. Alternatively, if the implementation provides semaphores, they also can be used, either following sigwait() or from within a signal handling routine previously registered with sigaction(). Signal Concepts, Realtime Signals, pause(), pthread_sigmask(), sigaction(), sigpending(), sigsuspend(), sigwaitinfo(), the Base Definitions volume of IEEE Std 1003.1-2001, <signal.h>, <time.h> First released in Issue 5. Included for alignment with the POSIX Realtime Extension and the POSIX Threads Extension. The restrict keyword is added to the sigwait() prototype for alignment with the ISO/IEC 9899:1999 standard. 
IEEE Std 1003.1-2001/Cor 2-2004, item XSH/TC2/D6/131 is applied, updating the DESCRIPTION section to state that if more than a single thread is blocked in sigwait(), it is unspecified which of the waiting threads returns, and that if a signal is generated for a specific thread only that thread shall return.
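A minimal usage sketch follows (informative only, not part of the normative text above; compile with -pthread, error checking omitted for brevity). It blocks two signals and then receives them synchronously, as the DESCRIPTION requires:

    #include <signal.h>
    #include <pthread.h>
    #include <stdio.h>

    int main(void)
    {
        sigset_t set;
        int sig;

        sigemptyset(&set);
        sigaddset(&set, SIGINT);
        sigaddset(&set, SIGUSR1);

        /* The signals in set must be blocked before sigwait() is called;
           otherwise the behavior is undefined (see DESCRIPTION above). */
        pthread_sigmask(SIG_BLOCK, &set, NULL);

        for (;;) {
            if (sigwait(&set, &sig) != 0)
                break;                  /* an error number is returned, not -1 */
            printf("received signal %d\n", sig);
            if (sig == SIGINT)
                break;
        }
        return 0;
    }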
<urn:uuid:d0491894-6e3c-48ef-a9b0-d3ecab0584a6>
2.71875
802
Documentation
Software Dev.
46.982192
Are there any reasons I shouldn't do this?

Yes, there is a reason why you shouldn't do this.

Referencing a member variable with this-> is strictly required only when a name has been hidden, such as with:

    class Foo
    {
    public:
        int val;
        void bang(int val);
    };

    void Foo::bang(int val)
    {
        val = val;   // assigns the parameter to itself; the member is never touched
    }

    int main()
    {
        Foo foo;
        foo.val = 42;
        foo.bang(84);
        cout << foo.val;
    }

The output of this program is 42, not 84, because in bang the member variable has been hidden, and val = val results in a no-op. In this case, this-> is required:

    void Foo::bang(int val)
    {
        this->val = val;
    }

In other cases, using this-> has no effect, so it is not needed. That, in itself, is not a reason not to use this->. The maintenance of such a program is, however, a reason not to use it.

You are using this-> as a means of documentation, to specify that the variable that follows is a member variable. However, to most programmers, that's not what using this-> actually documents. What using this-> documents is: "There is a name that's been hidden here, so I'm using a special technique to work around that." Since that's not what you wanted to convey, your documentation is broken.

Instead of using this-> to document that a name is a member variable, use a rational naming scheme consistently, where member variables and method parameters can never be the same.

Edit: Consider another illustration of the same idea. Suppose in my codebase, you found this:

    int (*fn)(int) = pingpong;
    fn(42);

Quite an unusual construct, but being a skilled C++ programmer, you see what's happening here. fn is a pointer-to-function, being assigned the value of pingpong, whatever that is. And then the function pointed to by pingpong is being called with the single argument 42.

So, wondering why in the world you need such a gizmo, you go looking for pingpong and find this:

    static int (*pingpong)(int) = bangbang;

Ok, so what's bangbang?

    int bangbang(int val)
    {
        cout << val;
        return val;
    }

"Now, wait a sec. What in the world is going on here? Why do we need to create a pointer-to-function and then call through that? Why not just call the function? Isn't this the same?"

Yes, it is the same. The observable effects are the same. Wondering if that's really all there is to it, you see:

    /* IMPLEMENTATION NOTE
     * I use pointers-to-function to call free functions
     * to document the difference between free functions
     * and member functions. */

So the only reason we're using the pointer-to-function is to show that the function being called is a free function and not a member function.

Does that seem like just a "matter of style" to you? Because it seems like insanity to me.
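To make the recommended alternative concrete, here is a tiny sketch (my own illustration, using the common trailing-underscore convention; any consistent scheme that keeps member names and parameter names distinct achieves the same thing):

    class Foo
    {
    public:
        void bang(int val) { val_ = val; }  // no hiding is possible, so no this-> is needed
    private:
        int val_;   // members carry the suffix; parameters never do
    };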
<urn:uuid:d47198c4-c082-48a5-b3b2-b2b2e55d1c88>
2.6875
657
Q&A Forum
Software Dev.
67.932941
- Author: Kathy Keatley Garvey If that word is not in your everyday vocabulary, just think of a symbiotic relationship where one organism transports another organism of a different species for the benefit of both. And there you have it--at least part of it--of what evolutionary ecologist Leslie Saul-Gershenz of the University of California, Davis, is doing. Saul-Gershenz researches a species of digger bee, Habropoda pallida, a solitary ground-nesting bee, and its nest parasite, a blister beetle, Meloe franciscanus, found in the Mojave Desert ecosystem. She and Neal Williams (her major professor) of UC Davis and Jocelyn Millar of UC Riverside just received a grant to study digger bee ecology and conservation. They're working with SaveNature.Org, which Saul-Gershenz co-founded. The relationship between the bee and the blister beetle is part of it. What's this symbiotic relationship about? The larvae of the parasitic blister beetle produce a chemical cue or a pheromone similar to that of a female solitary bee to lure males to the larval aggregation. The larvae attach to the male bee and then transfer to the female during mating. The end result: the larvae wind up in the nest of a female bee, where they eat the nest provisions and likely the host egg, Saul-Gershenz says. Like to read more about this exciting research? Saul-Gershenz and Millar published their blister beetle/digger bee work, "Phoretic Nest Parasites Use Sexual Deception to Obtain Transport to their Host's Nest," in the Proceedings of the National Academy of Sciences (PNAS, 2006). That led Pulitzer Prize-winning author Natalie Angier to feature their work in "The Art of Deception," published in the August 2009 edition of the National Geographic magazine. Most recently, U. S. Department of Agriculture (USDA) Forest Service entomologist Michael Ulyshen, writing for the Journal of Natural History, mentioned their work in "Bugback Riding: Transportation for the Masses" (September 2011). Saul-Gershenz said she became interested in the subject while she was a graduate student at San Francisco State University. She wrote "Beetle Larvae Cooperate to Mimic Bees" in the journal Nature (2000). Of her newest grant, she says: “Our preliminary data show that the blister beetle exploits four other native California bees including important pollinators in the genus Habropoda and Anthophora." Historically, M. franciscanus was known to be a nest parasite of Anthophora edwardsii distributed throughout California. You may know Saul-Gershenz as a past president of the Pacific Coast Entomological Society. In 1991, she became the first female president in its 91-year history.
<urn:uuid:755c5d7f-2700-4be8-ac1c-5fa0f848d76b>
3.125
602
Knowledge Article
Science & Tech.
36.335569
Today I'll give another great way to get rings: from semigroups. Start with a semigroup S. If it helps, think of a finite semigroup or a finitely-generated one, but this construction doesn't much care. Now take one copy of the integers for each element of S and direct sum them all together. There are two ways to think of an element of the resulting abelian group: as a function f: S → ℤ that sends all but finitely many elements of S to zero, or as a "formal finite sum" a₁s₁ + … + aₙsₙ, where each aᵢ is an integer and each sᵢ is "1" from the copy of ℤ corresponding to that element of S. I'll try to talk in terms of both pictures since some people find the one easier to understand and some the other. We can go back and forth by taking a valid function and using its nonzero values as the coefficients of a formal sum: f ↦ Σₛ f(s)s. This sum is finite because most of the values of f are zero. On the other hand, we can use the coefficients of a formal sum to define a valid function.

So we've got an abelian group here, but we want a ring. We use the semigroup multiplication to define the ring multiplication. In the formal sum picture, we define (a s)(b t) = (ab)(st), and extend to sums the only way we can to make the multiplication satisfy the distributive law. In the function picture we define [fg](s) = Σ_{uv=s} f(u)g(v), where we take the sum over all pairs (u, v) of elements of S whose product is s. This takes the product of all nonzero components of f and g and collects the resulting terms whose indices multiply to the same element of the semigroup.

The ring we get is called the "semigroup ring" of S, written ℤ[S]. There are a number of easy variations on the same theme. If S is actually a monoid we sometimes say "monoid ring", and note that the ring has a unit given by the identity of the monoid. If S is a group we usually say "group ring". If in any of these cases we start with a commutative semigroup (monoid, group) we get a commutative ring.

So here's the really important thing about semigroup rings. If we take any ring R and forget its additive structure we're left with a semigroup. If we take any semigroup homomorphism from S to this "underlying semigroup" of R we can uniquely extend it to a ring homomorphism from ℤ[S] to R. This is just like what we saw for free groups, and it's just as important.

As a side note, I want to mention something about the multiplication in group rings. Since uv = s only if v = u⁻¹s, we can rewrite the product formula in the function case as [fg](s) = Σᵤ f(u)g(u⁻¹s). This way of multiplying two functions on a group is called "convolution", and it shows up all over the place.
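To see the multiplication formula in action, here is a small sketch of ℤ[S] for S the free monoid of strings under concatenation (my own illustration; the container choices and the use of strings are assumptions for the example, not anything from the post):

    #include <map>
    #include <string>
    #include <iostream>

    // Elements of the monoid ring Z[S], stored as finitely-supported
    // functions S -> Z; only the nonzero-ish entries are kept in the map.
    using Elem = std::map<std::string, long>;

    // Ring multiplication is exactly the convolution formula
    // [fg](s) = sum over uv = s of f(u)g(v): loop over both supports
    // and accumulate each product at the concatenated index.
    Elem mul(const Elem& f, const Elem& g) {
        Elem h;
        for (const auto& [u, a] : f)
            for (const auto& [v, b] : g)
                h[u + v] += a * b;     // u + v is the monoid product (concatenation)
        return h;
    }

    int main() {
        Elem f = {{"x", 1}, {"", 2}};   // x + 2  ("" is the monoid identity)
        Elem g = {{"x", 1}, {"", -2}};  // x - 2
        for (const auto& [s, c] : mul(f, g))
            std::cout << c << "*'" << s << "' ";
    }

Multiplying x + 2 by x − 2 this way prints the terms of x·x − 4; the 'x' term appears with coefficient 0, since the sketch doesn't bother pruning zero coefficients.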
<urn:uuid:dec0fca6-e85d-49f5-8126-cce9d907c64b>
3.15625
584
Personal Blog
Science & Tech.
55.571841
The prominent ridge of emission featured in this vivid skyscape is designated IC 5067. Part of a larger nebula with a distinctive shape, popularly called The Pelican Nebula, the ridge spans about 10 light-years and follows the curve of the cosmic pelican's head and neck. The Pelican Nebula close-up was constructed from narrowband data, mapping emission from sulfur, hydrogen, and oxygen atoms to red, green, and blue colors. Fantastic, dark shapes inhabiting the view are clouds of cool gas and dust sculpted by energetic radiation from young, hot, massive stars. But stars are also forming within the dark shapes. In fact, twin jets emerging from the tip of the long, dark tendril below center are the telltale signs of an embedded protostar cataloged as HH 555. The Pelican Nebula itself, also known as IC 5070, is about 2,000 light-years away. To find it, look northeast of bright star Deneb in the high flying constellation Cygnus.

Credit & Copyright:
<urn:uuid:e6c2ff23-0862-42dc-ae5c-1d7e90aa4a6d>
3.21875
235
Knowledge Article
Science & Tech.
47.20121
If every piece of code worked perfectly the first time, programming would be an easy job. Sadly, that is not the case. In any large project, you will probably spend more time testing, tracking down bugs, and figuring out how to fix them than you spend writing the code in the first place. Dealing with bugs is an unavoidable part of writing software, but there are things you can do to make the job easier.

At every step of the process, you make decisions which influence the risk of creating bugs later on. Experienced programmers understand this and intentionally write their code in a way that minimizes the opportunities to make mistakes. This is sometimes known as "defensive programming".

In this article I make this practice a little more systematic by presenting a taxonomy of "bug opportunities" -- situations in which it is possible to make a programming mistake. I categorize them into five levels, based on the time and effort needed to identify the mistake. I give examples of each level, and suggest strategies for turning the more severe levels into less severe ones.

A bug opportunity is not the same as a bug. With enough caution, even a minefield can be successfully navigated. But you will make mistakes from time to time, and the more opportunities you have for mistakes, the more mistakes you will end up making. A key to writing bug-free software is therefore to avoid situations in which you might make a mistake; or when that is not possible, to choose your risks carefully so that bugs will be found and corrected as quickly as possible.

Most of the examples in this article are in Java, and I assume familiarity with the Java language. None of the principles are Java specific, however, and they apply just as well to almost any other language.
<urn:uuid:1cb9b3f4-54d4-40ea-82b9-ec5784b89fc1>
2.71875
359
Personal Blog
Software Dev.
40.757084
Cave-riddled hills jut steeply from the flat pine savanna of Runaway Creek Nature Reserve in Belize. Tapirs, jaguars and wild pigs call the forest-blanketed hillsides home. The territory also encompasses the range of a group of spider monkeys whose lives University of Calgary anthropologist Mary Pavelka and graduate students Kayla Hartwell and Jane Champion have chronicled for four years. The team has amassed a detailed record that goes beyond the animals' daily comings and goings to include measuring stress hormones and the parasites that inhabit their intestinal tracts. Years in the jungle confer exposure to the natural cycles that inevitably beset any forested ecosystem, opening a broader panorama on the dynamics of the animals' lives. In October 2010 Hurricane Richard ravaged the jungle, uprooting countless trees and stripping foliage. The destruction, which left humans and monkeys disoriented, caused the researchers to switch gears and track the recovery of the forest as well as forge new trails through the most afflicted areas to see how the monkeys fared. Fruit trees had suffered extensive damage, forcing the monkeys to consume what leaves they could find. Just as the jungle started to recover, the dry season brought scorching temperatures—and with it, of course, fire. The surrounding savanna commonly burns, but the abundance of hurricane deadfall drove flames into the hills, reducing huge tracts of forest to char. For weeks smoke and ash choked the researchers as they scrambled over the smoldering tree remains. The fires spared the spider monkeys, but a number of individuals from a nearby group of howler monkeys succumbed to the blaze. Food shortages and the stresses of a decimated habitat forced the spider monkeys to adapt yet again. The group has proved its resilience, but how it will fare with the likelihood of more storms followed by yet more habitat burns, a possible by-product of the planet's inexorable warming, remains unknown.
<urn:uuid:ed067262-f254-48d1-80b4-c87db434be19>
3.71875
389
Truncated
Science & Tech.
33.067011
Late in the afternoon last Friday (one week ago), scattered thunderstorms developed west and north of Washington and moved north-northeast. By 6:00pm, a line of thunderstorms developed to the southwest of Washington and began racing northeastward at over 40 mph. One of the storms near Stafford was severe and a funnel cloud was observed. A tornado warning was issued in eastern Prince William County and southern Fairfax County. As the storms moved north in Fairfax County and Arlington they weakened a bit. The arrival of the thunderstorm was preceded by an ominous-looking shelf cloud that heralded the arrival of gusty winds, which were followed several minutes later by heavy rain. I observed about a dozen lightning strikes from my location at Fairfax Town Center, but nothing that I would classify as severe weather. The shelf cloud did look cool as it pushed through the area. Read below to see a time lapse video of this shelf cloud and to learn more about the shelf cloud.

Time lapse video from Fairfax Town Center showing the shelf cloud moving northward.

A shelf cloud is a low, wedge-shaped cloud that often precedes the arrival of a thunderstorm. It is created by the thunderstorm's downdraft, which advances ahead of the parent thunderstorm. The cool air of the downdraft undercuts the warm air at the surface and condenses a cloud that looks like it is rolling, or plowing, ahead of the thunderstorm. The shelf cloud is accompanied by a gust front which can range from a cool breeze to a violent wind squall. In severe cases, short-lived vortices resembling tornadoes can touch the ground with the shelf cloud. These vortices are called gustnadoes.
<urn:uuid:73ac0676-14fb-4c29-900c-4c076b019a62>
2.9375
348
Personal Blog
Science & Tech.
56.854665
Software Design Using C++

What is a Queue?

A queue is a "waiting line" type of data structure. Much like with a stack, you can insert items into a queue and remove items from a queue. However, a queue has "first come, first served" behavior in that the first item inserted into the queue is the first one removed. This is sometimes abbreviated FIFO (first in, first out). A queue is also a LILO (last in, last out) data structure. Thus, items are removed from the queue in the very same order in which they were put into the queue in the first place.

Uses of Queues

Queues are commonly used in operating systems and other software where some sort of waiting line has to be maintained for obtaining access to a resource. For example, an operating system may keep a queue of processes that are waiting to run on the CPU. It might also keep a queue of print jobs that are waiting to be printed on a printer. Queues are also used in simulations of stores and their waiting lines at the check-out counters.

A Linked List Implementation

The first five files above are the same as the files by the same names in the section on stacks. Then the queue.h file sets up the following abstract base class:

    class QueBaseClass
       {
       public:
          virtual void Insert(const ItemType & Item) = 0;
          virtual void Remove(ItemType & Item) = 0;
          virtual bool Empty(void) const = 0;
       };

Thus we know that we will have Insert, Remove, and Empty functions. The header then derives the following class by public inheritance:

    class LstQueClass : public QueBaseClass
       {
       public:
          bool Empty(void) const;
          void Insert(const ItemType & Item);
          void Remove(ItemType & Item);
       private:
          ListClass List;   // an embedded List object
       };

As with a stack, a queue object (a LstQueClass object to be precise) contains an embedded list for holding the data. The implementation of the three queue functions is easy, since we can simply call upon the appropriate list-processing functions. To insert an item into a queue, we insert it at the rear of the embedded list. To remove an item from a queue, we remove it from the front of the list. See the implementation file for the details. Then look at quetest.cpp for a test program that tries out a queue object.

An Array Implementation

It is also possible to implement queues so that a queue object uses an array to hold the data. For example, if we use an array that can hold 5 items, we might have a picture like the following. This picture shows that the queue contains (in order): 28, 70, 33, 125. Note that 0 for Front indicates that the first item in the queue is the 28, the item at index 0. Similarly, the 3 for Rear shows that the last item in the queue is the 125, the item at index 3. Of course, 4 for Count tells us that there are currently 4 items in the queue.

Let's now remove an item from the queue. Of course, we always remove from the front, so the 28 is removed. The new picture of the queue is as follows. Note that no attempt is made to overwrite the 28. The key is to change the Front index to show that the queue now runs from index 1 to index 3, that is, the queue contains 70, 33, 125. The 28 is still sitting in there as garbage data.

Next, let's insert 64. This leads to the following picture. Note that we insert at the rear of the queue, of course.

Let's next insert 99. You might think that we are out of luck since the last item inserted went at the top index in the array. In such a case you just wrap around to the start of the array and use index 0 if it is unused. This is sometimes referred to as a "circular array", since the effect is the same as what you would get by pasting the two ends of the array together to form a circle.
The picture of the queue follows. Note that the queue is now filled, as you can easily tell by looking at the Count. However, we could remove additional items. Once that is done, new items could be inserted. We might wrap around the array many times in the course of using this queue!

A complete example program that uses an array-based queue is given in the following files. The particular test program is essentially the same as the one we used to try out a list-based queue. The first two files are exactly the same as before. In looking at arrqueue.h and arrqueue.cpp you see that we derived the ArrQueClass by public inheritance from the abstract base class QueBaseClass. A constructor was added so that the Front, Rear, and Count fields of the newly constructed object are correctly initialized for an empty queue. (To see why Rear is initialized as it is, draw a picture of what happens when the first item is inserted into the newly created queue.) A private helping function, called Advance, is set up so that we can just call it anytime we want to advance the Rear to the next index (with wrap around). Note that it is an inline function for efficiency.
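Since the wrap-around logic is the heart of the array implementation, here is a minimal, self-contained sketch of it (my own illustration: the class name IntQueue, the int items, and the silent handling of full and empty queues are simplifying assumptions; the course's actual files differ in these details, and this Advance is written to move either index, not just Rear):

    #include <iostream>

    const int MaxSize = 5;

    class IntQueue
       {
       public:
          IntQueue(void) : Front(0), Rear(MaxSize - 1), Count(0)
             {
             // Rear starts just "behind" Front so that the first
             // Insert advances it to index 0.
             }
          bool Empty(void) const { return Count == 0; }
          void Insert(int Item)
             {
             if (Count == MaxSize) return;   // full; real code would signal an error
             Advance(Rear);
             Data[Rear] = Item;
             Count++;
             }
          void Remove(int & Item)
             {
             if (Count == 0) return;         // empty; real code would signal an error
             Item = Data[Front];
             Advance(Front);
             Count--;
             }
       private:
          static void Advance(int & Index)   // next index, with wrap-around
             { Index = (Index + 1) % MaxSize; }
          int Data[MaxSize];
          int Front, Rear, Count;
       };

    int main(void)
       {
       IntQueue q;
       int item;
       q.Insert(28); q.Insert(70); q.Insert(33); q.Insert(125);
       q.Remove(item);                 // removes 28, the front item
       q.Insert(64); q.Insert(99);     // 99 wraps around to index 0
       while (!q.Empty())
          {
          q.Remove(item);
          std::cout << item << ' ';    // prints 70 33 125 64 99
          }
       return 0;
       }

Tracing main reproduces the pictures described above: after the final insertion the 99 sits at index 0, and the items still come out in first-in, first-out order.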
<urn:uuid:1bf68317-c1d1-404d-b03e-c75db502c546>
4.09375
1,169
Documentation
Software Dev.
65.488537
Three top climate researchers claim that the greenhouse gases already in the atmosphere should have warmed the world more than they have. The reason they have not, they say, is that the warming is being masked by sun-blocking smoke, dust and other polluting particles put into the air by human activity. But they warn that in future this protection will lessen due to controls on pollution. Their best guess is that, as the mask is removed, temperatures will warm by at least 6°C by 2100. That is substantially above the current predictions of 1.5 to 4.5°C. This makes more sense to me than the consensus forecast. As I pointed out in previous posts, human activity has grown exponentially over the past century, yet the consensus model of global warming is approximately linear--going backward as well as forward. To me, this is bizarre. I think it's fairly common to have a linear treatment variable and a linear response variable. I think it's fairly common to have exponential treatment and exponential response. And I think it's even common to have linear treatment and exponential response. But exponential treatment and linear response? The only example I can come up with is an economic example--diminishing returns. You could have exponentially increasing labor on a plot of land, and only linear increases in output. But why should the human impact on climate exhibit diminishing returns--that is, the more we spew into the air, the less effect our spewing has? Instead of a consensus around a linear forecast, it seems to me that we should have divergence involving various nonlinear forecasts. Maybe that sort of divergence would confuse the general public. But it would make more sense to me!
<urn:uuid:43474e9f-ff53-4db2-a09e-1a882d12e830>
2.8125
342
Personal Blog
Science & Tech.
52.928786
Phylogenetics became a science with a consistent and objective methodology after the introduction of phylogenetic systematics, or cladistics, by Willi Hennig in the 1950s and 1960s (1, 2). As a life-long student of ichthyology and a scientist who specializes in phylogenetic methods, I have been interested in hypotheses and graphical depictions of phylogenetic relationships of ray-finned fishes that were published prior to the introduction of cladistics. There is no longer much attention paid to these early views of fish phylogeny, which I think is unfortunate. There is an opinion that with the advent of cladistics, there is no need to study and understand these pre-cladistic hypotheses of fish relationships. However, it is important to note that most biologists in the 19th Century immediately accepted Darwin's fundamental thesis that all life on Earth shares common ancestry (3). Notable examples of ichthyologists who never accepted evolution include Louis Agassiz, a professor at Harvard University, and Albert K. L. G. Günther. The late 19th and early 20th Century ichthyologists who were thinking about how lineages of fishes were related to one another were explicitly attempting to create taxonomies that reflect hypothesized genealogical relationships. The problem is that prior to Hennig there was no standard method to infer these relationships, which meant that even when using the same type of information scientists could arrive at dramatically different conclusions about phylogeny.

Phylogeny of teleost fishes from E. D. Cope's book, Primary Factors of Organic Evolution, 1896.

It is not entirely clear to me what we can specifically learn by studying pre-cladistic efforts at fish phylogeny. Will we discover a hypothesis that is now again finding support in explicit post-Hennig phylogenetic analyses, or will we see reflections of both method and theory that will allow a more nuanced view of how we approach phylogeny inference in the 21st Century? Even if there are no obvious undiscovered gems in these old phylogenetic trees, an understanding of this history will minimally allow us to appreciate the set of objective approaches shared by most comparative biologists interested in phylogeny, regardless of the group of organisms investigated. What I think we do see in these old trees is that the approach used by different scientists to infer relationships was idiosyncratic and often limited by the patterns of biological diversity exhibited in the specific organismal lineage.

Phylogeny of fishes from Gill's 1871 work, Arrangement of the Families of Fishes, or Classes Pisces, Marsipobranchii, and Leptocardii.

The earliest fish phylogeny shown here is from Theodore Gill's very influential and informed classification of fishes that includes Cope's Nematognathi and Müller's Teleostei (11, p. xliii), which, as mentioned above, was not recognized by Cope.

Phylogeny of ray-finned fishes from Dean's book Fishes, Living and Fossil, 1895.

Part II will begin with George A. Boulenger.

1. Hennig, W. 1950. Grundzüge einer Theorie der phylogenetischen Systematik. Berlin: Deutscher Zentralverlag.
2. Hennig, W. 1966. Phylogenetic systematics. Urbana: University of Illinois Press.
3. Darwin, C. 1859. On the origin of species. London: John Murray.
4. Patterson, C. 1977. The contribution of paleontology to teleostean phylogeny, in Major patterns in vertebrate evolution, P.C. Hecht, P.C. Goody, and B.M. Hecht, Editors. Plenum Press: New York. p. 579-643.
5. Patterson, C. 1981.
Significance of fossils in determining evolutionary relationships. Annual Review of Ecology and Systematics. 12:195-223.
6. Haeckel, E. 1866. Generelle Morphologie der Organismen. Berlin: G. Reimer.
7. Cope, E.D. 1871. Observations on the systematic relations of the fishes. American Naturalist. 5:579-593.
8. Cope, E.D. 1871. Contribution to the ichthyology of the Lesser Antilles. Transactions of the American Philosophical Society. N.S., 14:445-483.
9. Cope, E.D. 1872. Observations on the systematic relations of the fishes. Proceedings of the American Society for the Advancement of Science. 20:317-343.
10. Cope, E.D. 1896. Primary factors of organic evolution. Chicago: The Open Court Publishing Company.
11. Gill, T.N. 1872. Arrangement of the families of fishes, or classes Pisces, Marsipobranchii, and Leptocardii. Smithsonian Miscellaneous Collections. 11:i-xlvi, 1-49.
12. Dean, B. 1895. Fishes, living and fossil. New York: Columbia University Press.
<urn:uuid:da9c885a-f702-4e45-a764-9acb0b210c87>
2.90625
1,087
Personal Blog
Science & Tech.
47.737997
Students will approximate the area under a curve using Riemann sums. This will be done by utilizing a program that computes the Riemann sum as well as drawing the graphical representation. The activity concludes with students discovering that if enough subintervals are used, then the area under a curve can be calculated to the required degree of precision.

From Sean Bird's website: "Put this [TI-Nspire] file in MyLib so that you can access the area approximation methods from any document." For other TI-Nspire files from Sean's website, visit:

Choose a function and see the effect on the Riemann sum when you change the slice width. The file itself can be downloaded and installed on your computer or network.

This animation from Lou Talman is a dynamic representation of the area function we introduce in the standard proof of the Fundamental Theorem of Calculus.

Riemann Sums 1
A lovely GeoGebra Java applet from Miguel Bayona, The Lawrenceville School, Lawrenceville, NJ. It demonstrates the lower sum, upper sum, left sum, right sum, midpoint sum and trapezoidal sum for a function of your choosing. Loading the applet can take a while, so be patient.
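The computation behind all of these tools is small enough to sketch in full. A minimal version follows (my own illustration in C++; the integrand, interval, and slice counts are arbitrary choices, not taken from any of the listed files):

    #include <cstdio>

    double f(double x) { return x * x; }   // any integrand will do

    // One Riemann sum for f on [a, b] with n slices; offset selects the sample
    // point in each slice: 0.0 = left endpoint, 0.5 = midpoint, 1.0 = right.
    double riemann(double (*g)(double), double a, double b, int n, double offset)
    {
        double dx = (b - a) / n, sum = 0.0;
        for (int i = 0; i < n; ++i)
            sum += g(a + (i + offset) * dx) * dx;
        return sum;
    }

    int main()
    {
        for (int n = 4; n <= 4096; n *= 8)   // more slices -> closer to the true area
            std::printf("n=%4d  left=%.6f  mid=%.6f  right=%.6f\n", n,
                        riemann(f, 0.0, 1.0, n, 0.0),
                        riemann(f, 0.0, 1.0, n, 0.5),
                        riemann(f, 0.0, 1.0, n, 1.0));
        // The exact area under x*x on [0, 1] is 1/3 = 0.333333...
        return 0;
    }

Running it shows the three sums converging toward the exact area 1/3 as n grows, which is exactly the discovery the first activity aims at.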
<urn:uuid:03f0bd32-96e8-4e40-88ad-76a5c30513e6>
3.625
267
Content Listing
Science & Tech.
43.138
The work of Oliver Heaviside and Laplace put the electrical theories on a firm footing. Heaviside invented an operational calculus for solving differential equations arising out of electrical network analysis, which was justified rigorously later by Laplace transforms (but which makes full sense only when incorporating the theory of distributions). This might not seem historically important at first glance. But all power generation, motors, the light you have in your room, and indeed all uses of electricity were able to be set up properly thanks to the work of these people, and the midnight oil they burned. We wouldn't have computers or MO without electricity distribution everywhere, for instance.
<urn:uuid:8eee1c5c-b58a-45cb-8b06-83d2ed710e74>
2.890625
130
Knowledge Article
Science & Tech.
20.657925
Adults are slender, soft-bodied, with four membranous, extensively veined wings held upright and together (like a butterfly). The forewings are much longer and often overlap the hindwings. When perching, the front pair of legs are often held outward. They have short antennae and large compound eyes. There are 2 long, threadlike cerci (antenna-like appendages extending from the tip of the abdomen). The naiads (nymphs) somewhat resemble the adults, though they lack wings, have a series of leaflike external gills attached below the abdomen, have smaller eyes and often have a flattened head that helps them to adhere to rocks in fast-flowing water. Nymphs possess 3 long cerci (sometimes 2) extending from the tip of the abdomen.
<urn:uuid:a7dd980e-8042-4356-9a1a-936211d9a987>
3.4375
165
Knowledge Article
Science & Tech.
47.395126
Scanning Probe Microscope Piezoelectric Crystals In this resource we disassemble the piezoelectric assembly of a scanning probe microscope. At its core is a white cylinder of the piezoelectric material. If you look closely, it has a granular texture that reflects the fact that it is actually made up of many small crystals. John Bean, University of Virginia Virtual Science Lab Researchers should cite this work as follows: John C. Bean (2005), "Scanning Probe Microscope Piezoelectric Crystals," https://nanohub.org/resources/444.
<urn:uuid:9732f1b3-79f6-45fb-a1bc-479dba080b22>
2.8125
129
Knowledge Article
Science & Tech.
39.599167
Dan Brown in his book, The Da Vinci Code, talks about the "divine proportion" as having a "fundamental role in nature". Brown's ideas are not completely without foundation, as the proportion crops up in the mathematics used to describe the formation of natural structures like snail's shells and plants, and even in Alan Turing's work on animal coats. But Dan Brown does not talk about mathematics, he talks about a number. What is so special about this number? Probabilities and statistics: they are everywhere, but they are hard to understand and can be counter-intuitive. So what's the best way of communicating them to an audience that doesn't have the time, desire, or background to get stuck into the numbers? This article explores modern visualisation techniques and finds that the right picture really can be worth a thousand words. The only good thing about a wash-out summer is that you get to see lots of rainbows. Keats complained that a mathematical explanation of these marvels of nature robs them of their magic, conquering "all mysteries by rule and line". But rainbow geometry is just as elegant as the rainbows themselves. What makes a perfect football? Anyone who plays or simply watches the game could quickly list the qualities. The ball must be round, retain its shape, be bouncy but not too lively and, most importantly, be capable of impressive speeds. We find out that this last point is all down to the ball's surface, the most prized research goal in ball design.
<urn:uuid:608a6b00-4ace-41ee-be82-056fc2c2b1ad>
2.828125
309
Truncated
Science & Tech.
53.808079
Caryophylliina
Stephen D. Cairns

The caryophylliines are known from the early Jurassic (180 million years ago) to the Recent, and occur in most marine environments to a depth of 3200 m. They are an extremely diverse group, consisting of 91 living genera and 457 living species (Cairns et al., 1999). Although some species form large colonies that may contribute to both shallow-water and deep-water reef/bank structure, most members of this suborder are small (less than 30 mm) and inconspicuous, occurring in cryptic shallow-water environments or in cold (as low as -1°C), dark, deep waters. Although several other scleractinian families occur in deep water (e.g., Dendrophylliidae, Micrabaciidae, Fungiacyathidae), the caryophylliines have been the most successful in exploiting the deep-sea realm. Because many occur below the euphotic zone or cryptically in shallow water, most species are azooxanthellate (do not contain symbiotic unicellular dinoflagellates) and therefore rarely attain a large size.

Suborders within the Scleractinia are distinguished by the structure of their septa. The septa of the caryophylliines are lamellar, composed of one fan system of simple, very small trabeculae (minitrabeculate), resulting in a smooth or nearly smooth inner margin (Vaughan and Wells, 1943; Wells, 1956). The superfamilies within this suborder are characterized by the structure of their walls (theca). The most primitive of the three superfamilies, the Volzeioidea, have an exclusively epithecate wall (see Stolarski, 1995, 1996); that of the Flabelloidea is marginothecate; and that of the Caryophyllioidea is septo- or parathecate. In the latter two cases a nontrabecular deposit of epitheca or textura may be added to the exterior of the corallum (Stolarski, 1995).

No phylogenetic analysis has been performed on the suprageneric taxa of the Caryophylliina; however, Stolarski (1995, 1996), Romano and Palumbi (1996), and Romano and Cairns (2000) are currently investigating and reassessing the higher level relationships in the suborder by using characteristics of the corallum microstructure and DNA sequencing, respectively (see Scleractinia branch page). Roniewicz and Morycowa (1993) and Veron (1995: 110) should also be consulted for more recent, non-cladistic evolutionary trees of all scleractinian families.

The higher classification of the suborder Caryophylliina is in a state of flux as more information is being discovered through microstructural and molecular analyses. Stolarski (1996, 2000) has best summarized the conflicting morphological characters, which resulted in two fairly different classification hypotheses for the superfamilies and families of the caryophylliines, both of which are different from the classification presented here. In general, we agree with the work of Stolarski (2000) and adopt his "Hypothesis A", but with some modifications that reflect a more traditional view. Thus, we continue to recognize the superfamily Flabelloidea and place the Guyniidae in that superfamily, but we do accept his new family Stenocyathidae, which he places in the superfamily Caryophyllioidea, and his new family Schizocyathidae, which he places in the superfamily Volzeioidea.

Cairns, S. D., B. W. Hoeksema, and J. van der Land. 1999. Appendix: List of Extant Stony Corals. Atoll Research Bulletin, 459:13-46.
Romano, S. L. and S. R. Palumbi. 1996. Evolution of scleractinian corals inferred from molecular systematics. Science, 271: 640-642.
Romano, S. L. and S. D. Cairns. in press.
Molecular phylogenetic hypotheses for the evolution of scleractinian corals. Bulletin of Marine Science.
Roniewicz, E. and E. Morycowa. 1993. Evolution of the Scleractinia in the light of microstructural data. Courier Forschungsinstitut Senckenberg, 164: 233-240.
Stolarski, J. 1995. Ontogenetic development of the thecal structures in caryophylliine scleractinian corals. Acta Palaeontologica Polonica, 40: 19-44.
Stolarski, J. 1996. Gardineria -- a scleractinian living fossil. Acta Palaeontologica Polonica, 41: 339-367.
Stolarski, J. 2000. Origin and phylogeny of Guyniidae (Scleractinia) in the light of microstructural data. Lethaia, 33: 13-38.
Vaughan, T. W., and J. W. Wells. 1943. Revision of the suborders, families, and genera of the Scleractinia. Geological Society of America, Special Papers, 44: 363 pp.
Veron, J. E. N. 1995. Corals in Space and Time. 321 pp. UNSW Press, Sydney.
Wells, J. W. 1956. Scleractinia. Pp. F328-F444 In: Moore, R. C. (editor) Treatise on Invertebrate Paleontology, Part F: Coelenterata. University of Kansas Press, Lawrence.

Technical assistance was rendered by Adorian Ardelean. Correspondence regarding this page should be directed to Stephen D. Cairns at

Page copyright © 2002

Page: Tree of Life Caryophylliina. Authored by Stephen D. Cairns. The TEXT of this page is licensed under the Creative Commons Attribution-NonCommercial License - Version 3.0. Note that images and other media featured on this page are each governed by their own license, and they may or may not be available for reuse. Click on an image or a media link to access the media data window, which provides the relevant licensing information. For the general terms and conditions of ToL material reuse and redistribution, please see the Tree of Life Copyright Policies.

- First online 28 October 2002

Citing this page: Cairns, Stephen D. 2002. Caryophylliina. Version 28 October 2002. http://tolweb.org/Caryophylliina/19161/2002.10.28 in The Tree of Life Web Project, http://tolweb.org/
<urn:uuid:eda83abb-afad-4098-940e-2be97da7af77>
3.59375
1,447
Knowledge Article
Science & Tech.
43.268442
I am trying to store an object using a key in a Hashtable, and some other object already exists at that location. What will happen? Will the existing object be overwritten? Or will the new object be stored elsewhere?

The existing object will be overwritten and thus it will be lost (strictly speaking, put() returns the value previously stored under that key, so the caller has one last chance to capture it).

An enumeration is an interface containing methods for accessing the underlying data structure from which the enumeration is obtained. It is a construct which collection classes return when you request a collection of all the objects stored in the collection. It allows sequential access to all the elements stored in the collection.

Given the basic properties of Vector and ArrayList, where will you use Vector and where will you use ArrayList?

The basic difference between a Vector and an ArrayList is that Vector is synchronized while ArrayList is not. Thus whenever there is a possibility of multiple threads accessing the same instance, one should use Vector. If multiple threads are not going to access the same instance, then use ArrayList. A non-synchronized data structure will give better performance than a synchronized one.
<urn:uuid:8793cf0a-0b5c-4093-a5d0-95eb6c9e85f5>
2.984375
238
Q&A Forum
Software Dev.
38.86
Trying to create visual representations of what our climate futures might look like is always a taxing and delicate task. Computer-generated images of our familiar coastal cities inundated with sea water certainly attract attention, but they also – quite rightly, perhaps – get slammed for being "alarmist", especially if they are imagined around worst-case predictions. Within this context, it is worth noting a new photography project being orchestrated by British Columbia's Ministry of Environment in Canada.

King Tides (also known as perigean spring tides) are extreme high tide events that occur when the sun's and moon's gravitational forces reinforce one another at times of the year when the moon is closest to the earth. They happen twice a year, but they are typically more dramatic during the winter due to the low pressure cells in the atmosphere that also exert a gravitational pull on the water.

Next week: how birds fly by creating gravity above their wings.
<urn:uuid:7e696a43-abb6-4ea2-be8b-5602181b923d>
3.765625
188
Personal Blog
Science & Tech.
30.424292
Check out some similar questions!

Physics Question on Newton's Laws and the Equations of Motion [ 6 Answers ]
A train is travelling up a 3.73 degree incline at a speed of 3.25 metres per second, when the last cart breaks free and begins to coast without friction. (i) How long does it take for the last cart to come to rest momentarily? (ii) How far did the last cart travel before (momentarily) coming to...

Physics Question on Ideal Gas Law [ 7 Answers ]
A glass bulb of volume 400 cm cubed is connected to another of volume 200 cm cubed by a tube of negligible volume. Both bulbs initially contain dry air at 20 degrees celsius and 1 atm. The larger bulb is then immersed in steam at 100 degrees celsius and the smaller in melting ice at 0 degrees...

Physics Question on Vector Addition and Newton's Laws [ 7 Answers ]
A child on a toboggan (combined weight 70 kg) is pulled from rest on a level surface by two friends. The first friend pulls with a force of 100 N in the North-West direction and the second friend pulls with a force of 60 N in the direction 20 degrees East of North. What is the net force exerted on the...

Physics problem involving Newton's 2nd Law and the Gravitational Force [ 2 Answers ]
You observe an artificial satellite orbiting the earth. You estimate it is at an altitude h = km above the earth's surface and has a mass m = 3500 kg. You wish to calculate when the satellite will be back in the same position. From the second law of motion and the gravitational force law,...

Newton's Second Law [ 2 Answers ]
I need help understanding how to solve this problem, F = ma, if: how much force is needed to accelerate a 1000-kg car at a rate of 3 m/s²? I don't understand the steps involved. Thanks for your help.
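For the last question above, the missing step is a single substitution (standard arithmetic, not quoted from any of the threads):

F = ma = (1000 kg) × (3 m/s²) = 3000 N

The mass multiplies the acceleration directly; no other quantities are involved.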
<urn:uuid:0a4e55a7-b962-45f8-8b2b-a2e49061d299>
3.015625
420
Q&A Forum
Science & Tech.
75.892905
The original experimental scheme required the electrical field to be planar to within 10 nanoradians to achieve the goal sensitivity of 10^-24 e·cm. Newer schemes, using, for example, a central electric field structure with the beam circulating in both directions, relax the planarity requirement appreciably. However, it is still necessary to monitor the direction of the E field to the order of microradians or less, and to establish the effect of known changes generated in the tilt of the E field.

The Jones differential capacitor device was suggested by Farley, who knew of the work of his fellow Fellow of the Royal Society. (See, e.g., Journal of Physics E: Scientific Instruments 1973, Vol 6, p 589.) The intrinsic sensitivity of this instrument appears to be limited only by quantum mechanical considerations; sensitivity better than 10^-10 radians was demonstrated decades ago. The differential capacitor approach offers extreme simplicity, great sensitivity, and modest cost. The first development phase is to incorporate modern electronics, and to demonstrate sensitivity of 100 nanoradians, sufficient to observe earth tides at BNL. The second development phase will explore the use of modern materials unavailable when the device was first developed almost a half century ago.
<urn:uuid:604b0519-8f83-49d0-aee8-cc5d0cd18590>
3.0625
292
Knowledge Article
Science & Tech.
28.764091
Lower Continental Crust Re–Os isotope evidence for the composition, formation and age of the lower continental crust Knowledge of the composition of the lower continental crust is important not only for understanding the formation and evolution of the crust as a whole, but also to evaluate the petrogenesis of continental basalts and the chemical effect of the lower crust recycling into the mantle. We presented Re–Os isotope data for two well characterized suites of lower-crustal xenoliths from North Queensland, Australia, which have average major and trace element compositions similar to estimates of the bulk lower continental crust. Our data indicate that the lower crust has 1 to 2 times as much osmium, about half as much rhenium, and is less radiogenic than the upper continental crust. We interpret the Re-Os isotope systematics to indicate that assimilation and fractional crystallization of basaltic melts are important processes in the formation of the lower crust, and lead to dramatic changes in the osmium isotopic composition of mafic lavas that pond and fractionate there. A consequence of this is that the Re-Os isotopic system should not be relied on to yield accurate mantle extraction ages for continental rocks. Saal, A.E., Rudnick, R.L., Ravizza, G.E. and Hart, S.R. - (1998) - Re-Os isotope evidence for the composition, formation and age of the lower continental crust. Nature 393, 58-61. Other project collaborators: R. Rudnick (University of Maryland); G. Ravizza (University of Hawaii); S. Hart (WHOI)
<urn:uuid:3b184047-ffe2-4e22-ac25-720f79aef2f6>
2.8125
342
Academic Writing
Science & Tech.
37.636846
Disturbing the Nanosphere

Cornell University researchers deliberately created atomic-level disorder in order to probe the workings of heavy fermion compounds. In an interview, Mohammad Hamidian discusses how this work, and a new tool created at Cornell, may be just the thing for revolutionizing 21st century technology.

NEW IMAGING TECHNOLOGY is giving scientists unprecedented views of the processes that affect the flow of electrons through materials. In one such experiment, a team at Cornell University's Laboratory for Atomic and Solid State Physics has altered a familiar tool in nanoscience, the Scanning Tunneling Microscope, to probe for very tiny energy variations. By using the microscope for "Spectroscopic Imaging," scientists were able to visualize what happens when they change the electronic structure of a "heavy fermion" compound made of uranium, ruthenium and silicon. What they saw was widespread disorder – and, with it, clues to the physics of magnetism and conduction. (See press release.) This knowledge also sheds light on superconductivity, the movement of electrons without resistance, which typically occurs at extremely low temperatures and could revolutionize electronics if scientists can find a way to achieve it at something close to room temperature.

The Cornell experiment and its results are presented in the Proceedings of the National Academy of Sciences (see "How Kondo-holes create intense nanoscale heavy-fermion hybridization disorder," PNAS, available online). The research team included J.C. Seamus Davis, a member of the Kavli Institute at Cornell for Nanoscale Science and developer of the SI-STM technique. Working with synthesized samples created by Graeme Luke from McMaster University (Canada), the experiment was designed by Mohammad H. Hamidian, a post-doctoral fellow in Davis' research group, along with Andrew R. Schmidt, a former student of Davis at Cornell and now a post-doctoral fellow in physics at UC Berkeley. Inês Firmo, a graduate student, and Milan Allan, a post-doctoral associate in the Davis lab, contributed to the new analysis techniques developed for this project.

Hamidian recently talked with The Kavli Foundation about the heavy-fermion research, its significance to condensed-matter physics and the study of superconductivity, and the exciting possibilities opened up by Cornell's Spectroscopic Imaging-Scanning Tunneling Microscope (SI-STM). Here are highlights of that conversation:

An Interview with Mohammad H. Hamidian

THE KAVLI FOUNDATION (TKF): Let's begin by discussing heavy fermions. In this case, heavy fermions are electrons that seem "heavy" because they're slowed down by intense interactions with magnetic atoms in the material they're travelling through. What is it about heavy fermion systems that makes them so interesting?

MOHAMMAD H. HAMIDIAN: Heavy fermions provide a platform to study two big issues in condensed matter physics, which are also applicable to the technology of the 21st century. One is magnetic interactions at the atomic scale, and the other is achieving superconductivity through unconventional means.

Mohammad H. Hamidian, post-doctoral fellow in physics at Cornell University.

In heavy fermions we have both of these co-existing at very low temperatures. If we can resolve how superconductivity can co-exist with magnetism, then we have a whole new understanding of superconductivity, which could be applied toward creating high-temperature superconductors.
In fact, magnetism at the atomic scale could become a new tuning parameter of how you can change the behavior of new superconducting materials that we make.

TKF: So magnetism could be a switch to control superconductivity.

HAMIDIAN: Yes. One reason we study high temperature superconductors is to find out what causes the transition temperature of superconductivity to be as high as it is. And knowing that, perhaps we can tune the temperature so it can be even higher.

TKF: And if you do that, what implication does that have for, say, computing?

HAMIDIAN: Many of the devices that people are thinking about using in the future would use superconducting circuits. You may have heard the term "superconducting qubit." These are circuits that have superconducting components in them, but at the present time the only feasible superconductor that can be used is a conventional metal, and you can only make it superconducting at very, very low temperatures – liquid helium temperatures, actually. To make it feasible to eventually use these superconductors in a desktop computer, you have to be able to drive up the transition temperature from very close to absolute zero to room temperature. And to do that, you really have to have an understanding of how the material works.

TKF: So what did you do in this experiment and what did you find?

HAMIDIAN: To start, heavy fermions are materials with complex electron interactions, primarily based on a process known as the Kondo Effect. Electrons are continually switching back and forth between two states: one is being attached to magnetic atoms in fixed positions on a lattice-like landscape, and the other is simply being "smeared" across the system. This constant switching slows them down as they travel through the material, making them seem "heavy."

In the experiment we disturbed this fundamental switching process. We did this by replacing only about 1% of the magnetic atoms with non-magnetic ones. And what we found is that this had a big impact across the whole system.

Left: A view from below of the experimental probe, with the vacuum can removed. The STM head can be seen in white at the bottom while the rest of the device is effectively a refrigerator used to push the temperature down close to absolute zero. Right: The sealed probe, which includes the STM head and other refrigeration. It is iced over in the picture because this is just as we have removed it from a bath of liquid helium (at 4 Kelvin). Courtesy: M. Hamidian

This is interesting, because the presence of magnetic atoms is typically detrimental to electron flow and conventional superconductivity. In this case, however, we found that removing the magnetic atoms proved detrimental to the flow. In other words, by creating a "Kondo hole", the elimination of the Kondo Effect at a given atom, we took away a very essential element of what makes a heavy fermion. The heavy fermion needs to be able to switch between states, and when suddenly electrons could no longer attach to the magnetic atoms, this sent out ripples into the rest of the material, forcing other electrons to change their character as well. In effect, the glue that held the heavy fermion electrons together was severely weakened by a very small number of Kondo holes across the entire material.

TKF: Which reveals that heavy fermion material, which might be critical for understanding high-temperature superconductivity, won't work without this magnetic relationship – even though with other substances, this would typically be detrimental to superconductivity.
So, you uncovered this by specially modifying a scanning tunneling microscope?

HAMIDIAN: Yes. A scanning tunneling microscope is not a microscope in which you use light to see the atoms or to see the electrons; it pulls or pushes electrons into the material. With a new technique, we are now using the microscope for measuring how hard it is to push in and pull out an electron at a given spot in a heavy fermion compound. By doing this, we actually learn a lot about the material's electronic structure. Then by mapping that structure out over a wide area, we can start seeing variations in those electronic states, which come about for quantum-mechanical reasons. Our newest advance, crucial to this paper, was the ability to see, at each atom, the strength of the interactions that make the electrons "heavy."

So how could that lead to anything useful? Well, if you know something about the quantum-mechanical state of a material and the electron flavor, you know a lot. In fact, you know how the electrons move at a given energy, you know what momentum they have, you know how the system has self-organized to lead to those particular values and how it can be tuned by introducing defects.

TKF: Where does your research proceed from here?

HAMIDIAN: The paper's main point was that Kondo holes have a destructive impact on these heavy fermion materials. More important, however, was developing the new SI-STM technology and techniques that enabled us to study this very rich physics. This was an incredibly difficult project, because the energy variations that you have to be able to probe are very tiny; but we now have the ability to even measure the strength of the "glue" which holds heavy fermions together. Ultimately, we are trying to link heavy fermions and superconductivity in a much more solid and comprehensive way. By continuing to develop the SI-STM we hope to open up a whole new channel of atomic-scale study for complex materials that will form the basis of technology in the coming century.

- October, 2011
<urn:uuid:3a1bcd89-3d5f-45cc-b0eb-2b27e3d616a1>
3.21875
1,940
Audio Transcript
Science & Tech.
32.376537
This section describes some of the basic ways in which you can use the Class Browser by giving some examples. If you wish, you can skip this section and look at the descriptions of each individual view: these start with Examining slot information.

When examining a class, the slot names of the class are displayed by default. To examine a class, follow the instructions below:

    (capi:contain
     (make-instance 'capi:push-button-panel
                    :title "Test Buttons"
                    :items '(:one :two :three)))

The push button panel appears on your screen. Examining the class of this object invokes the Class Browser on the button panel. The class is described in the Class Browser.

Figure 7.1 Examining classes in the Class Browser

Notice that, although you invoked the browser on an object that is an instance of a class, the class itself is described in the Class Browser. Similarly, if you had pasted the object into an Inspector, the instance of that object would be inspected. Using the environment, it is very easy to pass Common Lisp objects between different tools in this intelligent fashion. This behavior is achieved using the Common LispWorks clipboard; see Using the clipboard for details.

See Performing operations on selected objects for a full description of the standard action commands available.
<urn:uuid:c8e916a8-17da-4ad9-9e96-a45be596b437>
3.046875
249
Documentation
Software Dev.
52.020333
THE human brain is wonderful at spotting patterns. It's an ability that is one of the foundation stones of science. When we notice a pattern, we try to pin it down mathematically, and then use the maths to help us understand the world around us. And if we can't spot a pattern, we don't put its absence down to ignorance. Instead we fall back on our favourite alternative. We call it randomness. We see no patterns in the tossing of a coin, the rolling of dice, the spin of a roulette wheel, so we call them random. Until recently we saw no patterns in the weather, the onset of epidemics or the turbulent flow of a fluid, and we called them random too. It turns out that "random" describes several different things: it may be inherent, or it may simply reflect human ignorance. Little more than a century ago, it ...
<urn:uuid:e27e14bf-605b-4dea-a02c-c04124fb0c11>
2.984375
209
Truncated
Science & Tech.
63.72487
The PHP Freaks website has a new tutorial posted today that's an introduction to design patterns - what they are and which ideas came from the "Gang of Four". Implementing design patterns is gradually getting more common in the PHP world. The hype around Ruby on Rails, which is based on the Model-View-Controller architectural pattern, has spawned a generation of PHP-based frameworks that embrace this pattern as well, paving the way for others to embrace design patterns in general in their PHP applications. Design patterns are just a standardized way of doing something (like a Factory where, when you request an object, if one already exists you get that one instead of a new one - a behavior sketched in the example below). They illustrate the design pattern concept with a database abstraction system (UML) and a configuration setup where the main class inherits from the child type-specific ones.
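A minimal sketch of that caching Factory behavior (the class and method names here are illustrative, not taken from the tutorial):

<?php
// Caching factory: hands back an existing instance for a given key
// when one has already been created, otherwise builds a new one.
class WidgetFactory
{
    private static $instances = array();

    public static function get($name)
    {
        if (!isset(self::$instances[$name])) {
            self::$instances[$name] = new Widget($name);
        }
        return self::$instances[$name];
    }
}

class Widget
{
    public $name;
    public function __construct($name) { $this->name = $name; }
}

$a = WidgetFactory::get('button');
$b = WidgetFactory::get('button');
var_dump($a === $b); // bool(true) - both calls return the same object
?>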
<urn:uuid:80e58324-4991-4fba-b1b9-4ea4a016704b>
2.6875
170
Personal Blog
Software Dev.
34.8405
Dec1-12, 01:03 PM #1

How do thermocouples work?
Hello there! I am planning to do an experiment on thermocouples but have a few questions.
1. What is the physics behind why the thermocouple works?
2. Why do two different metals need to be used for making this?

Dec1-12, 01:23 PM #2

Do you understand what a Fermi surface is?

Dec1-12, 03:01 PM #3

No, I do not. However, I did look through Wikipedia and I think I might have found a section that pertains to my question. [http://en.wikipedia.org/wiki/Thermoe...rier_diffusion] The part I am not clear on is this. [Taken from that article directly] "If the rate of diffusion of hot and cold carriers in opposite directions is equal, there is no net change in charge. The diffusing charges are scattered by impurities, imperfections, and lattice vibrations or phonons. If the scattering is energy dependent, the hot and cold carriers will diffuse at different rates, creating a higher density of carriers at one end of the material and an electrostatic voltage."

Why would there be no potential difference if the rate of movement of the hot/cold electrons is equal? Why are two different metals necessary? Why would impurities, imperfections, lattice vibrations, or phonons make them diffuse at different rates, and how does that end up creating an electrostatic voltage?
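For reference, the standard relation behind both questions is the Seebeck effect: each metal has its own temperature-dependent Seebeck coefficient S, and the open-circuit voltage of a pair of metals A and B joined at a hot junction is (treating the coefficients as roughly constant over the temperature range):

V \approx \int_{T_{\mathrm{cold}}}^{T_{\mathrm{hot}}} \bigl( S_A(T) - S_B(T) \bigr)\, dT \;\approx\; (S_A - S_B)\,\Delta T

This is also why two different metals are needed: in a loop made of a single metal, the two legs contribute equal and opposite EMFs, so the net voltage is zero no matter how large the temperature difference is.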
<urn:uuid:aa9c5f34-5c57-454e-a86d-ca97aee55c04>
2.859375
454
Comment Section
Science & Tech.
54.601843
The Allen Telescope Array (ATA) is a "Large Number of Small Dishes" (LNSD) array designed to be highly effective for "commensal" (simultaneous) surveys of conventional radio astronomy projects and SETI (search for extraterrestrial intelligence) observations at centimeter wavelengths. The idea for the ATA emerged in a series of workshops convened by the SETI Institute in 1997 to define the path for future development of technology and search strategies for SETI. The advance of computer and communications technology made it clear that LNSD arrays were more efficient and less expensive than traditional large antennas. The final report of the workshop, "SETI 2020," recommended the construction of the One Hectare Telescope (1HT). (A hectare is an area equivalent to a square 100 meters on a side.) The SETI Institute sought private funds for the 1HT, and in 2001 Paul Allen (co-founder of Microsoft) agreed to fund the technology development and first phase of construction (42 antennas). In October 2007 the array began commissioning tests and initial observations. The array is now being used for radio astronomy observations of our galaxy and other galaxies, gamma ray bursts and transient radio sources, and SETI.
<urn:uuid:b6b49a05-8da0-4c41-b737-779b9ae6de52>
3.46875
253
Knowledge Article
Science & Tech.
24.918105
Section: String and data retrieving functions (3stap)
Updated: March 2013
Return to Main Contents

function::kernel_pointer - Retrieves a pointer value stored in kernel memory

addr: The kernel address to retrieve the pointer from

Returns the pointer value from a given kernel memory address. Reports an error when reading from the given address fails.
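A hypothetical one-liner showing the call shape (the address below is a stand-in constant, not a real symbol; in practice the address would come from probe context, such as a target variable):

probe begin {
    addr = 0xffffffff81a00000    # stand-in kernel address
    printf("pointer value: 0x%x\n", kernel_pointer(addr))
    exit()
}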
<urn:uuid:24150854-f25b-4a65-ab60-2c3857f0e3fc>
3.390625
97
Documentation
Software Dev.
52.298696
Geometric Structures: Session I Pleasing Shapes, Careful Looking, Same-Different, and the Power of Words Department of Mathematics York College (CUNY) Jamaica, New York 11451 Our daily lives are enriched by pleasing shapes: Some shapes are pleasing because they are so symmetrical and other shapes are pleasing because they are so complicated. Shapes are the concern of the area of mathematics called geometry. However, here we will adopt a broader view of geometry, which can be thought of as the science of studying visual patterns. Shapes are only one part of the story. Take a careful look at the shape below, which you should think of as being a drawing in the plane (flat surface) of an object in 3 dimensions. Do now 1: a. Write down on a piece of paper a list of as many properties or facts about the object (Figure 3) as you can. (Minimum 5) b. Exchange your list with the person sitting next to you. Use the list of your neighbor to help you add 5 new additional properties to your original list. When one looks at shapes and wants to describe them to other people, one uses words. Some of the words that might be used in describing the object in Figure 1 are: Immediately you may realize that different people may use different words for the same thing. Also different people may use the same word but have different meanings in mind - and I will restrict myself to only using words in English. Furthermore, sometimes words that we use in common everyday language may be used in a different way from the way that these terms are used in mathematics. Practice careful looking at these objects: Here I will try to show that historically the evolution of the meaning of words has helped drive the creation of new mathematics. For example, all of the diagrams in Figure 6 would today be called polygons, but at times in the past this would not have been true! Figure 6 illustrates convex 3-gons and 4-gons and a non-convex polygon with 12 sides, as well as a self-intersecting polygon with 6 sides. Sometimes it will be convenient to think of polygons as points (vertices) joined by rods, and sometimes as filled in regions with a certain number of sides. I will refer to these as rod and membrane models for the polygon concept. When one thinks of a polygon as a system of "rods," it is still customary to refer to a polygon such as the 4-gon in Figure 6 as convex. The usual definition of a convex set is that a set X is convex if for any two points p and q in X the line segment joining p and q is also in X. Strictly speaking, if one is thinking of the 4-gon in Figure 6 as a collection of rods, one should say that this polygon together with its interior points is convex. One of the most fundamental properties of the Euclidean plane is that if one draws a polygon which does not intersect itself such as the one in Figure 7, then the polygon divides the plane into three sets: those points on the polygon, those points in the interior of the polygon, and those points in the exterior of the polygon. This fact (generalized to curves rather than polygons) is known as the Jordan Curve Theorem and is named for the French topologist Camille Jordan. Many of the shapes we will be interested in studying can be understood with the help of a geometric tool called a graph. A graph is a diagram (Figure 8) consisting of dots called vertices (the singular is vertex) which are joined by line segments, which may be straight or curved, called edges. The valence of a vertex where there is no self-loop is the number of distinct edges at the vertex. 
A self-loop contributes two (2) to the valence of the vertex it is located at. In the graph above, the vertex where the self-loop appears has valence 3. The other vertices which are not 1-valent (there are three 1-valent vertices in this graph) have valences 5, 4, 3, 2, 2 and 2. Thus, a complete list of the valences for this graph is: 5, 4, 3, 3, 2, 2, 2, 1, 1, 1. The sum of the valences in a graph is twice the number of edges in the graph. (For the graph above, the valences sum to 5 + 4 + 3 + 3 + 2 + 2 + 2 + 1 + 1 + 1 = 24, so the graph has 12 edges.) Another name for the valence of a vertex is degree of a vertex. Computer scientists often refer to vertices with valence 1 as the leaves of a graph. The path shown has length 3 and the circuit also has length 3. Note that this graph (Figure 8) has been drawn in the plane so that edges in the graph meet only at vertices. Drawings of this kind are known as plane drawings. Sometimes, with advantage, the same object can be looked at from different points of view. For example, the diagram in Figure 9 can be thought of as a polygon with 4 sides and also as a graph with 4 vertices and 4 edges. The diagram in Figure 10 shows a shape which I will not consider a polygon because its sides are not portions of straight lines. For this particular shape, the graph associated with the shape would still be a 3-gon, just the way a triangle's graph would be a 3-gon. This shape consists of circular arcs, and if we join the vertices of the shape with straight lines, we get an equilateral triangle. This shape is known as a Reuleaux triangle, named for the German engineer Franz Reuleaux. This is an example of a shape of constant breadth (width); that is, its width in any direction is the same. The intuitive idea here is that if the shape is grasped in any orientation between the jaws of a vice whose jaws are parallel lines, then the distance between the jaws will always be the same. Many people are surprised that there are shapes other than the circle with this property. In fact, there are infinitely many such shapes. One of the most important fundamental ideas in mathematics is having a notion of when two things that may look different are in some sense the same. Here are some examples: 1/2 and 12/24 look different but they represent the same number; 1 and .9999999..... look different but represent the same number. As a polygon, the shape below (Figure 11) is different from the shape in Figure 9. Figure 9 shows a quadrilateral we would call a trapezoid, a quadrilateral with at least one pair of its sides parallel. Figure 11 also shows a trapezoid, but a special kind of trapezoid which has all sides the same length and interior angles which are right angles. This shape is commonly called a square. There are many words that have been invented to describe different kinds of quadrilaterals, including parallelogram, square, rectangle, rhombus, kite, and trapezoid. Sometimes more than one of these words will apply to the same object. Figure 11 is a rhombus, trapezoid, parallelogram, rectangle, kite, and square. In this situation we can use the definitions of the words to determine that Figure 11 is an example of all of these types of quadrilaterals. However, there are quadrilaterals which are, for example, rhombuses but not squares. Furthermore, thought of as a graph, Figure 9 and Figure 11 show the "same" graph. Both of these figures have 4 vertices and 4 edges. The technical term for when two graphs look different but we can treat them as if they are the same is isomorphic.
While Figure 12 is not a polygon (it has some curved sections), as a graph it is isomorphic to the graphs in Figures 9 and 11. The word "isomorphic" here is designed to connote that one has two things but they have the "same structure." Mathematicians talk about isomorphic graphs, groups, rings, and fields. The term comes up in geometry, topology, and algebra as well as many other parts of mathematics. The formal definition of isomorphic is in terms of the concept of a function called an isomorphism. Isomorphisms preserve some feature of the objects they relate. For graphs, they preserve the way that vertices are joined up to each other by the edges. Figure 13 shows another graph with 4 vertices and 4 edges, but it is not isomorphic to the graphs in Figures 9, 11, 12. If graphs have the same structure it seems natural that their essential properties would be the same. Thus, isomorphic graphs would have the same number of vertices and edges. They would also have the same numbers occurring as the valences of the vertices of the graphs. The graphs in Figures 9, 11, and 12 each have vertices of valence 2, 2, 2, 2. Figure 13 has vertices of valence 3, 2, 2, 1. Hence, the graph in Figure 13 is not isomorphic to the others. Having the same valences does not, however, guarantee that there is an isomorphism between two graphs. All three of the graphs in Figure 14 look different. However, they all have valences 4, 4, 2, 2, 2, 2. Two of the graphs in Figure 14 are isomorphic to each other but not to the third graph. Can you tell which two graphs are isomorphic? The two graphs in Figure 14 which are isomorphic as graphs have a sense in which they are not the same. Whenever one has items which can be distinguished from each other because of additional attributes that were not previously contemplated, one has made progress. It may be helpful to consider a situation outside of mathematics to understand the importance of words and of refining the meaning of words. One of the most important attributes of objects to people who are not color-blind is the phenomenon of color. Color is an amazingly complex subject with issues involving the physics of light, the way the eye perceives color, and linguistic issues. For example, there are languages where there are no separate words to distinguish between blue and green. Furthermore, distinctions are made between the terms color, hue, and saturation. The point is that when an author can distinguish in a novel between azure and sky blue, it enriches the experience of reading. Similarly, geometry is enriched when we can make new distinctions between objects we think of as being familiar. For example, most of the polygons we tend to see that are drawn in the plane are convex. So making the distinction between convex polygons and non-convex polygons broadens our perspective. Now non-convex polygons can be classified according to whether they have edges that intersect themselves (at places other than vertices) or not. Figure 15 shows a convex polygon, one which is not convex, and one which intersects itself and is also not convex. All of these polygons are 4-gons. The diagram in Figure 16 shows a geometric object that has a lot in common with a polygon but is not a polygon. Sometimes this type of geometric figure is referred to as a polygon with holes - in this example, one hole. In fact, so far, a definition for what a polygon is has not been given! Based on these discussions how might you define the word polygon? Do Now 2: a.
How would you define the concept of polygon so that all of the examples in Figure 15 but not the example in Figure 16 would be polygons? b. Draw a variety of polygons which illustrate the ideas of being convex, non-convex, and self-intersecting. c. Give some ideas about how to distinguish between polygons which go beyond the distinctions of being convex, non-convex, and self-intersecting. Hint: i. What can you say about the side lengths and angle measures of polygons? ii. What can you say about the angle "types" (e.g. acute, right, obtuse) of polygons? d. Our discussion has dealt with polygons that can be drawn in a plane. What ideas do you have about how to extend this discussion to 3-dimensional and higher dimensional spaces? e. A common word used in connection with polygons is "regular." How would you define what it means to be a regular polygon?
<urn:uuid:c6aef05f-abfb-4b20-ad98-aa4338bcb3ab>
3.875
2,598
Academic Writing
Science & Tech.
57.512437
A base is a molecule which produces hydroxide ions when added to water (according to the Arrhenius definition), which accepts H+ ions in a chemical reaction (according to the Bronsted-Lowry definition), or which donates an electron pair in a reaction (according to the Lewis definition). Bases have a bitter taste and turn red litmus paper blue. Bases feel slippery when dissolved in water. Most strong bases contain a hydroxide ion. Bases have pH values above 7 (up to 14) on the pH scale, and are the opposite of acids.
<urn:uuid:e530b55f-a2e7-47eb-bb5d-ec33cf07da86>
3.640625
118
Knowledge Article
Science & Tech.
41.474088
A real number is called normal if you can "find" all (finite) digit sequences in its expansion. There are several variant definitions in use, and often it's important to determine which is relevant. First, there's the question of how often each sequence must occur.
- Easiest is the demand that the digit sequence just occur anywhere in the expansion of the number.
- Next, we can demand that the digit sequence appear infinitely many times in the expansion.
- Finally, we can demand that the digit sequence appear with the same density it would have in a truly random independent sequence of digits (for decimal, this means every digit sequence of length k appears with density 10^-k).
Additionally, there's a question of bases:
- We can demand any of the above in base 10 (decimal) only (or in binary, or in some other prespecified base),
- OR -
- We can demand the above in every base.
No matter which choice we make, almost every real number turns out to be normal. So a randomly chosen number in the interval [0,1) is normal with probability 1. But proving a number is normal is usually very hard; we only know of a few explicitly-constructed examples that are normal. Note that rational numbers have a cyclic expansion in every base, so they're never normal.
AxelBoldt asks me to mention Gregory Chaitin's Omega. Chaitin's Omega Ω, for any Turing complete system, is normal (by all definitions). The hand-waving proof is that if it were not, you could make statistical judgements about Turing machines. Unfortunately, it's also the consummate example of an "unconstructable" number (for the exact same reasons), so how much this matters is debatable.
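In symbols, the strongest condition above (for a fixed base b) reads as follows, where N(s, x, n) counts occurrences of the digit string s among the first n base-b digits of x:

\lim_{n \to \infty} \frac{N(s, x, n)}{n} = b^{-k} \quad \text{for every digit string } s \text{ of length } k

A number satisfying this in every base b >= 2 is what the last variant demands; such numbers are usually called absolutely normal.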
<urn:uuid:a594a1c2-237e-4ca1-9401-06f4598eb074>
3.25
376
Knowledge Article
Science & Tech.
42.668953
OpenGL accelerated video renderer

One of the most important components of a video player or a multimedia framework is the video renderer. The quality and performance of the playback greatly depend on it. Despite this, many video players still use obsolete techniques, which cannot fully take advantage of the enormous processing power of modern GPUs. GStreamer is a powerful multimedia framework, but it lacks a modern video renderer which could rival similar solutions provided with popular proprietary operating systems. The goal of this project is to develop such a component using OpenGL acceleration.

The project would consist of two major components:
- the video renderer itself, a standalone library, which would be completely independent of GStreamer
- a GStreamer plugin, a video sink, which would use the video rendering library

The library would use OpenGL 2.1 with GLSL shaders to accelerate all video processing stages (color space conversion, color correction, de-interlacing, high quality resampling, compositing). Both on-screen and off-screen (texture or system memory) rendering would be possible, so the library would also be usable for intermediate processing (as a filter element).
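As one concrete illustration of the shader-based approach, the color space conversion stage could be a GLSL 1.20 fragment shader of roughly this shape (a sketch assuming planar YUV input bound as three single-channel textures and BT.601 coefficients; the uniform names are illustrative):

#version 120
uniform sampler2D tex_y, tex_u, tex_v;

void main() {
    // Sample the three planes; the chroma planes are stored offset by 0.5.
    float y = texture2D(tex_y, gl_TexCoord[0].st).r;
    float u = texture2D(tex_u, gl_TexCoord[0].st).r - 0.5;
    float v = texture2D(tex_v, gl_TexCoord[0].st).r - 0.5;

    // BT.601 YUV -> RGB conversion, one output pixel per fragment.
    gl_FragColor = vec4(y + 1.403 * v,
                        y - 0.344 * u - 0.714 * v,
                        y + 1.770 * u,
                        1.0);
}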
<urn:uuid:d325c61f-624c-4bb1-ae25-65cbc69d0a64>
2.71875
241
Knowledge Article
Software Dev.
20.403992
Java is considered a very safe programming language compared to C and C++, as it doesn't have free() and malloc() to directly do memory allocation and deallocation, you don't need to worry about array overruns in Java as arrays are bounds-checked, and there is no pointer arithmetic in Java. Still, there are some sharp edges in the Java programming language which you need to be aware of while writing enterprise applications. Many of us make subtle mistakes in Java which look correct at first glance but turn out to be buggy when examined carefully. In this series of Java articles I will be sharing some common mistakes Java programmers make while writing applications. As I have said earlier, every day we learn new things but we forget something equally important. This again highlights the importance of code review and following best practices in Java. In this part we will discuss why double and float should not be used in monetary or financial calculations where an exact result is expected.

Using double and float for exact calculation

This is one common mistake Java programmers make until they are familiar with the BigDecimal class. When we learn Java programming we are told to use float and double to represent decimal numbers; we are not told that the result of floating point arithmetic is not exact, which makes these types unsuitable for any financial calculation which requires an exact result and not an approximation. float and double are designed for engineering and scientific calculation, many times don't produce exact results, and the result of a floating point calculation may even vary from JVM to JVM. Look at the example of BigDecimal and the double primitive used to represent a money value (a reconstruction of all three examples in this post appears at the end); it is quite clear that floating point calculation may not be exact, and one should use BigDecimal for financial calculations.

From the example it is pretty clear that the result of a floating point calculation may not be exact at all times, and it should not be used in places where an exact result is expected.

Using the incorrect BigDecimal constructor

Another mistake Java programmers make is using the wrong constructor of BigDecimal. BigDecimal has an overloaded constructor, and if you use the one which accepts a double as its argument you will get the same kind of error as when operating with double. So always use BigDecimal with the String constructor. The reconstructed example below shows BigDecimal constructed with double values. I agree there is not much visible difference between these two constructors, but you have got to remember this.

Using the result of floating point calculation in a loop condition

One more mistake Java programmers make is using the result of a floating point calculation to determine a loop condition. Though this may work some of the time, it may result in an infinite loop at other times. See the example below, where the Java program gets locked inside an infinite while loop: the result of subtracting amount2 from amount1 will not be 1.05; instead it will be "1.0499999999999998", which keeps the boolean condition true.

That's all on this part of learning from mistakes in Java. The bottom line is:
- Don't use float and double in monetary calculations.
- Use BigDecimal, long or int for monetary calculations.
- Use BigDecimal with the String constructor and avoid the double one.
- Don't use floating point results for comparison in loop conditions.
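A reconstruction consistent with the quoted output (the values 2.15 and 1.10 are inferred from the quoted result 1.0499999999999998, and the loop is a representative guess at the third example, shown commented out so the program terminates):

import java.math.BigDecimal;

public class FloatingPointMistakes {
    public static void main(String[] args) {
        // 1. Inexact double arithmetic vs. exact BigDecimal arithmetic.
        double amount1 = 2.15;
        double amount2 = 1.10;
        System.out.println(amount1 - amount2); // prints 1.0499999999999998

        BigDecimal a1 = new BigDecimal("2.15");
        BigDecimal a2 = new BigDecimal("1.10");
        System.out.println(a1.subtract(a2));   // prints 1.05

        // 2. The double constructor carries the binary representation
        //    error into BigDecimal; the String constructor does not.
        System.out.println(new BigDecimal(0.1));   // 0.1000000000000000055511...
        System.out.println(new BigDecimal("0.1")); // 0.1

        // 3. A loop condition built on floating point equality never
        //    becomes false, because the difference is never exactly 1.05:
        // while (amount1 - amount2 != 1.05) {
        //     // runs forever
        // }
    }
}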
<urn:uuid:2746ee19-3da2-4e92-815c-8f1e9bbf0cb4>
3.3125
664
Personal Blog
Software Dev.
34.912902
There are many reasons for the loss of biodiversity around our planet. These serious issues affect both plants and animals. Did you know that throughout history the earth has gone through several periods called mass extinctions? In the past, these occurred due to natural causes such as volcanoes and natural climate change. Today we've entered another period of mass extinction. This time, however, it is caused by human activities! People have overfished, overhunted, destroyed habitats, and used harmful chemicals. Our planet's biodiversity is in danger! Follow the links to learn about the problems threatening our biodiversity.
<urn:uuid:8573b067-c673-4c92-801f-cb849059c165>
3.484375
124
Knowledge Article
Science & Tech.
38.808801
Using the DockPanel layout panel

A DockPanel layout panel in a Windows Presentation Foundation (WPF) project provides a layout area within which you can arrange child objects around the edge of the screen based on compass direction: North, South, East, and West. The DockPanel was historically used for root layout in other forms packages, because it allows panels to be docked around the edge of the screen.

When you add child objects to a DockPanel, the objects are docked to the left side of the panel by default. The last child object that you add can fill the remaining space in the panel if the DockPanel's LastChildFill property is set to True, which is the WPF default; note that the generated XAML shown below sets it to False explicitly. When objects fill up the panel, they are then clipped or hidden by the parent layout container.

If you drag a child object of a DockPanel on the artboard, notice that a large four-way pointer shows the directions that you can dock the object (see image below). You can use this four-way pointer as an alternative way to change dock orientation. Simply drag the object over the direction arrow that you want. Notice that the direction arrow that you select becomes highlighted, to indicate that you can drop the object to dock it in that direction.

Add a DockPanel to a document by selecting DockPanel from the Assets panel or from the layout container button in the Tools panel, and then dragging with the pointer on the artboard. The following XAML code is added to your project:

<Grid x:Name="LayoutRoot">
    <DockPanel HorizontalAlignment="Left" Height="100" LastChildFill="False" VerticalAlignment="Top" Width="100"/>
</Grid>
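In markup, each child's dock direction is set with the attached DockPanel.Dock property. A small hand-written sketch (the element choices are illustrative, not generated by the designer):

<DockPanel LastChildFill="True">
    <Menu DockPanel.Dock="Top"/>
    <StatusBar DockPanel.Dock="Bottom"/>
    <ListBox DockPanel.Dock="Left" Width="120"/>
    <TextBox/> <!-- last child: fills the remaining space -->
</DockPanel>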
<urn:uuid:740beeae-a617-484a-b72d-9891dd4c9a41>
2.796875
354
Documentation
Software Dev.
49.030689
John Dabiri, bioengineer at Caltech, has developed new techniques for studying the motion of aquatic animals. In a recent study in the journal Nature , Dabiri and colleagues explain how swimming animals mix the ocean. Ocean mixing is important for the distribution of gases and nutrients throughout the sea, and can even affect global climate. Video courtesy of Nature John Dabiri, Kakani Katija, John H. Costello and Sean P. Colin. Music by Aaron Kerr and Swallows. Produced by Flora Lichtman
<urn:uuid:ea401c8a-3864-4ede-8ad4-134f49353d7c>
3.1875
111
Truncated
Science & Tech.
42.243291
The Higgs Mechanism part 4: Symmetry Breaking

At last we're ready to explain the Higgs mechanism. We start where we left off last time: a complex scalar field with a gauged phase symmetry that brings in a (massless) gauge field. The difference is that now we add a new self-interaction term to the Lagrangian: where is a constant that determines the strength of the self-interaction. We recall the gauged symmetry transformations:

If we write down an expression for the energy of a field configuration we get a bunch of derivative terms — basically like kinetic energy — that all occur with positive signs and then the potential energy term that comes in the brackets above: Now, the "ground state" of the system should be one that minimizes the total energy, but the usual choice of setting all the fields equal to zero doesn't do that here. The potential has a "bump" in the center, like the punt in the bottom of a wine bottle, or like a sombrero. So instead of using that as our ground state, we'll choose one. It doesn't matter which, but it will be convenient to pick: where is chosen to minimize the potential. We can still use the same field as before, but now we will write

Since the ground state is a point along the real axis in the complex plane, vibrations in the field measure movement that changes the length of , while vibrations in measure movement that changes the phase. We want to consider the case where these vibrations are small — the field basically sticks near its ground state — because when they get big enough we have enough energy flying around in the system that we may as well just work in the more symmetric case anyway. So we are justified in only working out our new Lagrangian in terms up to quadratic order in the fields. This will also make our calculations a lot simpler. Indeed, to quadratic order (and ignoring an irrelevant additive constant) we have so vibrations of the field don't show up at all in quadratic interactions. We should also write out our covariant derivative up to linear terms: so that the quadratic Lagrangian is

Now, the term in parentheses on the right looks like the mass term of a vector field with mass . But what is the kinetic term of this field? And so we can write down the final form of our quadratic Lagrangian:

In order to deal with the fact that our normal vacuum was not a minimum for the energy, we picked a new ground state that did minimize energy. But the new ground state doesn't have the same symmetry the old one did — we have broken the symmetry — and when we write down the Lagrangian in terms of excitations around the new ground state, we find it convenient to change variables. The previously massless gauge field "eats" part of the scalar field and gains a mass, leaving behind the Higgs field.

This is essentially what's going on in the Standard Model. The biggest difference is that instead of the initial symmetry being a simple phase, which just amounts to rotations around a circle, we have a (slightly) more complicated symmetry to deal with. For those that are familiar with some classical groups, we start with an action of SU(2) on a column vector made of two complex scalar fields, with a potential of the form: which is invariant under the obvious action of SU(2) and a phase action of U(1). Since the group SU(2) is three-dimensional there are three gauge fields to introduce for its symmetry and one more for the U(1) symmetry.
When we pick a ground state that breaks the symmetry it doesn't completely break; a one-dimensional subgroup still leaves the new ground state invariant — though it's important to notice that this is not just the U(1) factor, but rather a mixture of this factor and a subgroup of SU(2). Thus only three of these gauge fields gain mass; they become the W and Z bosons that carry the weak force. The other gauge field remains massless, and becomes the photon. At high enough energies — when the fields bounce around enough that the bump doesn't really affect them — then the symmetry comes back and we see that the electromagnetic and weak interactions are really two different aspects of the same, unified phenomenon, just like electricity and magnetism are really two different aspects of electromagnetism.
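The post's rendered formulas are not reproduced above, but the computation it describes has a standard shape; in one common normalization (with symbol choices of my own: φ the complex scalar, A_μ the gauge field, q the gauge coupling, λ the self-coupling, v the ground-state value) it runs:

V(\phi) = \lambda \left( |\phi|^2 - \tfrac{v^2}{2} \right)^2, \qquad D_\mu \phi = \partial_\mu \phi - i q A_\mu \phi

Expanding around the chosen ground state, \phi = \tfrac{1}{\sqrt{2}}(v + h)\, e^{i\theta/v}, and keeping quadratic terms gives

|D_\mu \phi|^2 \supset \tfrac{1}{2} q^2 v^2 A_\mu A^\mu, \qquad V \supset \lambda v^2 h^2,

so the gauge field acquires mass m_A = q v, while the surviving real field h (the Higgs field) has mass m_h = \sqrt{2\lambda}\, v.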
<urn:uuid:8ccd573f-fb45-44b7-b005-a4db3fff5fe4>
2.71875
927
Personal Blog
Science & Tech.
44.29404
Learn more physics! OK you can make some thing float with helium ok i have a rocket i need to float how much helium will it take to float an object say per gm? the baloon has a volume of 28295.8367 cm^3 how much will it float? - Ellison (age 16) According to the paragraph on helium in my CRC Handbook of Chemistry and Physics, 1000 cubic feet of helium in a balloon (presumably not under huge pressure, but around one atmosphere) will lift 68.5 pounds at sea level. Your balloon has a volume very close to one cubic foot, so it can exert a lifting force of 0.0685 pounds or about 1.1 ounces, minus the weight of the balloon material. (published on 10/22/2007) Follow-up on this answer.
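As a rough cross-check with round sea-level numbers (air density about 1.2 kg/m^3, helium about 0.17 kg/m^3 at one atmosphere; both values approximate):

m_{\mathrm{lift}} = (\rho_{\mathrm{air}} - \rho_{\mathrm{He}})\, V \approx (1.2 - 0.17)\,\mathrm{kg/m^3} \times 0.0283\,\mathrm{m^3} \approx 0.029\,\mathrm{kg} \approx 1.0\,\mathrm{oz}

which agrees with the figure quoted above of about 1.1 ounces for a one-cubic-foot balloon, again minus the weight of the balloon material.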
<urn:uuid:435553a4-3e0e-4908-9298-761025872102>
2.8125
177
Q&A Forum
Science & Tech.
80.959167
Sunglint and Clouds off Western South America Astronaut photograph ISS031-E-35310 was acquired on May 15, 2012, with a Nikon D2Xs digital camera using a 180 mm lens, and is provided by the ISS Crew Earth Observations experiment and Image Science & Analysis Laboratory, Johnson Space Center. The image was taken by the Expedition 31 crew. It has been cropped and enhanced to improve contrast, and lens artifacts have been removed. The International Space Station Program supports the laboratory as part of the ISS National Lab to help astronauts take pictures of Earth that will be of the greatest value to scientists and the public, and to make those images freely available on the Internet. Additional images taken by astronauts and cosmonauts can be viewed at the NASA/JSC Gateway to Astronaut Photography of Earth. Caption by William L. Stefanov, Jacobs/ESCG at NASA-JSC. The setting sun highlights cloud patterns—as well as the Pacific Ocean surface itself—in this photograph taken by an astronaut on the International Space Station. The ISS was located over the Andes Mountains of central Chile at the time. The camera view is looking back towards the Pacific Ocean as the Sun was setting in the west (towards the upper right). Light from the setting Sun reflects off the water surface and creates a mirror-like appearance, a phenomenon known as sunglint. Bands of relatively low-altitude cumulus clouds appear like a flotilla of ships, with west-facing sides illuminated by waning sunlight and the rest of the clouds in shadow. Due to the low Sun angle, the clouds cast long and deep shadows over large swaths of the ocean. Given the short camera lens used, an individual cloud shadow may extend for miles. Light gray clouds at image lower left appear to be at a higher altitude. The cloud cover is likely a remnant of a frontal system that moved in from the Pacific and over inland South America a day or two prior to when the image was taken. This image originally appeared on the Earth Observatory. Click here to view the full, original record.
<urn:uuid:c6cac533-d3b2-457c-8e67-9101e7debb08>
3.328125
428
Knowledge Article
Science & Tech.
46.009609
Physics and Feynman's Diagrams In the hands of a postwar generation, a tool intended to lead quantum electrodynamics out of a decades-long morass helped transform physics As theoretical physics blossomed during the 1930s and 1940s, so did the problem of infinities: The equations physicists constructed to answer fundamental questions about the origins of the universe and the interactions of light, energy and matter led again and again to frustrating non-answers. Richard Feynman offered a tool for solving this problem: using a diagram for specifying the terms of the equations. The idea did not catch on at first, but soon the diagrams began spreading, thanks to Freeman Dyson's work in deriving and explaining the new bookkeeping devices and showing a generation of postdocs how to use them. Curiously, they became most popular in fields of physics for which they were less suited—particularly in theories of nuclear-particle interactions. Go to Article
<urn:uuid:8e939665-87cd-4cce-9e82-a8fc07824af3>
3.5
198
Truncated
Science & Tech.
22.86504
magnetic mirror

magnetic mirror, static magnetic field that, within a localized region, has a shape such that approaching charged particles are repelled back along their path of approach. A magnetic field is usually described as a distribution of nearly parallel nonintersecting field lines. The direction of these lines determines the direction of the magnetic field, and the density (closeness) of the lines determines its strength. Charged particles such as electrons tend to move through a magnetic field by following a helical path about a magnetic field line. If the field lines along the path of the particle are converging, the particle is entering a region of stronger magnetic field. The particle continues to circle about the field line, but its forward motion is retarded until it is stopped and finally forced back along its original path. The exact location at which this mirroring occurs depends only upon the initial pitch angle describing its helical path. Two such magnetic mirrors can be arranged to form a magnetic bottle that can trap charged particles in the middle.
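The pitch-angle statement can be made quantitative: the magnetic moment of the gyrating particle is an adiabatic invariant, so a particle that starts where the field strength is B_0 with pitch angle α_0 turns around where the field has grown to B_m:

\mu = \frac{m v_\perp^2}{2B} = \text{constant} \quad \Longrightarrow \quad B_m = \frac{B_0}{\sin^2 \alpha_0}

Particles whose pitch angle satisfies \sin^2 \alpha_0 < B_0 / B_{\mathrm{max}} reach the end of the bottle before mirroring and escape; this "loss cone" is the reason real magnetic bottles leak.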
<urn:uuid:3e393b77-4b22-4e18-906d-40376fbf5edd>
4.125
229
Knowledge Article
Science & Tech.
39.460625
Found 0 - 10 results of 20 programs matching keyword "black sand" For thousands of years, Indian women have created these elaborate geometric designs using a variety of natural materials—flowers, spices, sand, and natural pigment—to mark auspicious occasions, celebrations, and milestones. The Nautilus has discovered several well-preserved shipwrecks on their mission, from ancient Greek trading vessels to modern sailboats. Join us as we talk with chief scientist Katy Croff Bell live aboard the Nautilus and see the latest video of their discoveries. Join Exploratorium educator Ken Finn as he unlocks the mystery behind the black sand (a.k.a. magnetite) at Ocean Beach. This piece explores the origin of magnetite in the Sierra Nevada mountains, its journey down the Sacramento and San Joaquin rivers to the Bay, and the interesting physical properties of this mineral, plus some fun things you can do with it. This After Dark event, which explored the science behind slowing down, included artist Joe Mangrum, who created a sand mandala on the floor of the museum. In this timelapse video, shot over 8 hours, you can see the full arc of the work. Burning Man is a literal hotbed of explosions and fire. Join Exploratorium Senior Scientist Paul Doherty as he looks at the properties that make up fire through the lens of the Burning Man event. Exploratorium Senior Scientist Paul Doherty explains a double rainbow sighting at Burning Man 2010! Slow motion footage of Pyrograph, a work by Earl "Dodger" Stirling that has been described as a cross between Dante's Inferno and the Foucault Pendulum. Like a fiery version of the Exploratorium's classic Drawing Board exhibit, Pyrograph swings a pendulum across a sandy, flaming cauldron and traces out oscillating patterns in colorful fire. We take a tour of Black Island, and speak with Tony Marchetti who for the last 13 years has been running this vital communications station for the U.S. Antarctic operations. Aeolian Landscape is an exhibit in which a miniature wind-swept desert landscape is recreated by an electric fan and finely ground sand that mimics the process of wind picking up and depositing small particles. Visitors can change the direction of the fan, influencing the shape of the dunes. The two Mars Rovers are alive and well after surviving their second Martian winter. Come and see photos of discoveries they made during their third year on Mars, with Exploratorium Senior Scientist Paul Doherty.
<urn:uuid:37afba40-727c-4f66-84f6-edafc8d0f14a>
2.78125
523
Content Listing
Science & Tech.
46.825804
There are now a greater number of global coupled atmosphere-ocean models, and a number of them have been run for multi-century time-scales. This has substantially improved the basis for estimating long time-scale natural unforced variability. There are still severe limitations in the ability of such models to represent the full complexity of observed variability, and the conclusions drawn here about changes in variability must be viewed in the light of these shortcomings (Chapter ...).

Some new studies have reinforced results reported in the SAR. These are:
- The future mean Pacific climate base state could more resemble an El Niño-like state (i.e., a slackened west to east SST gradient with associated eastward shifts of precipitation). Whilst this is shown in several studies, it is not true of all.
- Enhanced interannual variability of daily precipitation in the Asian summer monsoon. The changes in monsoon strength depend on the details of the forcing scenario and model.

Some new results have challenged the conclusions drawn in earlier reports:
- Little change or a decrease in ENSO variability. More recently, increases in ENSO variability have been found in some models, where they have been attributed to increases in the strength of the thermocline. Decadal and longer time-scale variability complicates assessment of future changes in individual ENSO event amplitude and frequency. Assessment of such possible changes remains quite difficult. The changes in both the mean and variability of ENSO are still uncertain.

Finally, there are areas where there is no clear indication of possible changes or no consensus on model predictions:
- Although many models show an El Niño-like change in the mean state of tropical Pacific SSTs, the cause is uncertain. In some models it has been related to changes in cloud forcing and/or changes in the evaporative damping of the east-west SST gradient, but the result remains model-dependent. For such an El Niño-like climate change, future seasonal precipitation extremes associated with a given ENSO would be more intense due to the warmer mean base state.
- There is still a lack of consistency in the analysis techniques used for studying circulation statistics (such as the AO, NAO and AAO), and it is likely that this is part of the reason for the lack of consensus from the models in predictions of changes in such events.
- The possibility that climate change may be expressed as a change in the frequency or structure of naturally occurring modes of low-frequency variability has been raised. If true, this implies that GCMs must be able to simulate such regime transitions to accurately predict the response of the system to climate forcing. This capability has not yet been widely tested in climate models. A few studies have shown increasingly positive trends in the indices of the NAO/AO or the AAO in simulations with increased greenhouse gases, although this is not true in all models, and the magnitude and character of the changes varies across models.
<urn:uuid:39a1b5f2-18ac-451d-959c-dd2ab94516aa>
2.734375
640
Academic Writing
Science & Tech.
29.171289
The COSIMA (Cometary Secondary Ion Mass Spectrometer) on board the Rosetta spacecraft will chemically and isotopically analyze the dust particles that are captured in the coma of comet Wirtanen. The experiment works with secondary ion mass spectrometry (SIMS). A high energy primary ion beam (COSIMA uses 115In+ at 10 keV) hits the target and knocks off molecules, of which typically 0.1 to 10% are ionized; these are the so-called secondary ions. A time-of-flight mass spectrometer is connected in order to enable measurements over a large mass range. The target can be spatially resolved, limited by the cross section of the primary ion beam (ca. 10 µm). COSIMA consists of a dust collector and target manipulator, a light microscope for target inspection, the primary ion source, and the mass spectrometer with its ion optics and ion detector. The development of the instrument resulted from an international collaboration chaired by the Max Planck Institute for Extraterrestrial Physics in Garching, Germany. More detailed information can be found at FMI. The Institute for Space Research (IWF) of the Austrian Academy of Sciences is involved in the hardware and electronics for the primary ion beam system.
<urn:uuid:24478106-79ac-4b79-a1d5-66253f573aa7>
3.328125
263
Knowledge Article
Science & Tech.
33.68873
4.2 The Tracer

You can switch tracing on for one or more functions with the macro trace. Tracing information is normally printed whenever the traced function is entered or exited. Tracing information consists of the name of the function, followed at entry by its arguments and on exit by its return values. For example, the following traces calls to the function cdr:

> (trace cdr)
(CDR)
> (cdr '(1 2 3))
1 Enter CDR (1 2 3)
1 Exit CDR (2 3)
(2 3)

If a function is already being traced, trace calls the macro untrace before starting the new trace. Calling trace with no arguments returns a list of all functions that are currently being traced.

untrace restores functions to their normal state. You can call this macro with one or more function names as arguments. Calling it with no arguments untraces all the functions currently being traced.

Forms are evaluated in the lexical environment at the time of the call to trace. Special forms cannot be traced because they are neither functions nor macros. Calling trace on a macro traces the macro expansion, not the evaluation of the form.
<urn:uuid:5f81f1dc-e46b-4a6a-bb01-a8c77baa9870>
2.8125
249
Documentation
Software Dev.
55.441496
Searches for the named texture. If the current search mode is rwLOCAL, the function searches only the current dictionary. If the current search mode is rwGLOBAL, the function searches the whole of the texture dictionary stack. If the search fails, an attempt is made to read the named texture from disk. If the named texture is found, it is stored into the current texture dictionary.

Arguments:
name - Name of a texture.

Returns:
A pointer to the named texture if successful, and NULL otherwise.

The string supplied as the texture name should form the leaf part (i.e., without path or extension) of the filename of the texture file. Furthermore, for the sake of portability of texture files across the different platforms supported by RenderWare, it is best to choose texture file names that are a maximum of eight characters long and which are acceptable to MS-DOS as file names.

If the function cannot find the named texture in the texture dictionary stack, it will look for a texture file in the directories whose names appear in the shape path. Furthermore, if the specified name does not have a file extension, then this function will also search for the specified name followed by the extensions ".ras", ".env", ".tex", ".bmp" and ".rle". An example of a valid texture name is "marble", which will match the file names "marble", "marble.tex", "marble.ras", "marble.env", "marble.bmp" and "marble.rle".
<urn:uuid:30ce79b6-e813-41db-9e72-a72a69794c67>
2.765625
316
Documentation
Software Dev.
57.216458
Today, we fly to the North Pole. The University of Houston's College of Engineering presents this series about the machines that make our civilization run, and the people whose ingenuity created them.

So, who first reached the North Pole? Getting there lay on people's minds for centuries. Thomas Jefferson suggested going in a balloon. For more than a century after Jefferson, a parade of daring explorers tried to reach the North Pole by ship and land; and they all failed. Robert Peary and four Inuits claimed to've made it in 1909, but their claim has been sharply questioned. When Byrd and Bennett flew a Fokker Trimotor over the Pole in 1926, their navigation was also questioned.

You might say Jefferson was vindicated when Amundsen and Ellsworth finally, and convincingly, flew over the North Pole, not in an airplane, but in the semi-rigid, Italian-built dirigible Norge. The Norge flew from Svalbard, an archipelago in the Arctic Ocean. It crossed the Pole and landed in Alaska.

But that was a dirigible. It could be steered. Much earlier, in 1897, Swedish physicist and aeronaut S. A. Andrée tried to reach the Pole in a balloon. Dirigibles were then still in their infancy, but unsteerable balloons were a mature technology. Andrée was wonderfully confident, convincing, and wholly unrealistic. His idea of how to steer a balloon as it rode with the wind boggles the mind. He meant to fit his balloon with sails which, he understood, would hang slack as he rode with the wind. But, he would fly low and drop a long rope. The rope would drag on the ice below and slow the balloon. Then he could tack against the westerlies, and move northward.

Andrée never did manage to make the drag ropes work. During one test flight, he was blown all the way across Sweden and the Baltic Sea to Finland. But his blind confidence survived. He raised about a million in our dollars today from the King of Sweden, Alfred Nobel, and others. Then he set out with two companions from a base on Spitsbergen Island. While they were still in sight of land, the drag ropes almost pulled the balloon into the sea, then tore loose from the gondola. They were without directional control from the start, and that was the last anyone saw of them -- until 1930.

Then an expedition to a small island east of Spitsbergen found their remains along with Andrée's journal and even undeveloped photos. They'd been blown north, lost hydrogen, and bounced to a safe landing on the ice two days later, their gear intact. They spent the next three months working their way south. They finally died, possibly of trichinosis from uncooked polar bear meat.

Through it all, Andrée kept his optimism. He'd sent a carrier pigeon message from the out-of-control balloon saying things were going well. His subsequent journal, filled with heroic good cheer, became a Swedish saga, reworked in books and a song-cycle. It even became a movie starring Max von Sydow as Andrée. For this tale simply had to be changed from the misguided technology it was, into heroic tragedy -- which it was, as well.

I'm John Lienhard, at the University of Houston, where we're interested in the way inventive minds work.

See the excellent Wikipedia account of Andrée's flight. All images from this source. Above, S. A. Andrée and his balloon, the Eagle, taking off with ropes trailing. For a nice review of early aerial exploration of the North Pole, see: D. D. Jackson, The Explorers. (Alexandria, VA: Time-Life Books, 1983): Chapter 1, The Race to the Top of The World.
For an odd take on our fascination with flying to the North Pole, see Episode 278. Photo taken by the crew of Andrée's Eagle, just after their last descent onto the ice. A legend is made: Schoolchildren looking at the crew's boat, recovered and exhibited. The Engines of Our Ingenuity is Copyright © 1988-2008 by John H. Lienhard.
<urn:uuid:3b6c46c9-a379-4179-bfe5-19584b3a351b>
4.21875
939
Audio Transcript
Science & Tech.
64.35376
Imagine we need to read and eval code from some untrusted source. This code may change the Lisp environment in uncertain ways as its expected behaviour, but it may also contain an error which stops execution in the middle. This is not what we want: the environment should be changed completely or rolled back to its previous state.

A naive approach to solving the problem is to fork() the Lisp image, execute the untrusted code in the child (the child will be a full clone of the parent), and return zero in the case of success and a non-zero value otherwise. The parent may check whether it is safe to execute the code in its own environment this way. It is only a few lines of code (the original listing is garbled, so the version of safe-eval below is one plausible completion of it):

(defun rep (expr)
  (eval (read-from-string expr)))

(defun safe-eval (expr)
  (let ((pid (sb-posix:fork)))
    (cond ((zerop pid)
           ;; Child: run the untrusted code in a full copy of the image.
           (handler-case
               (progn
                 (rep expr)
                 (sb-ext:quit :unix-status 0))
             (error () (sb-ext:quit :unix-status 1))))
          ((plusp pid)
           ;; Parent: wait for the child and check how it exited.
           (multiple-value-bind (pid status)
               (sb-posix:waitpid pid 0)
             (declare (ignore pid))
             (zerop status)))
          (t (error "fork failed~%")))))

It doesn't even matter if the child's image completely crashes due to uber-invalid code or compiler bugs; the parent will survive and understand that the code is bad.

The next question is: how expensive is the fork() operation? The Linux kernel (I'm a Linux-only guy) is smart enough not to copy all the process data one-to-one: it uses the copy-on-write (COW) technique to separate memory pages of the new process from the respective pages of the old process only at the moment a page changes. So, basically, the kernel duplicates task structures for the new process, and that's all. Unfortunately, modern Lisps (like SBCL) create huge images, tens of megabytes and growing up to gigabytes. The in-kernel VMA chains, which describe how much memory the process has and how this memory is organized in the address space of the process, may grow significantly. The cost of the fork operation grows respectively (not to mention overall system slowdown as well).

One accessible and acceptable solution is to use huge pages in programs which manipulate large portions of memory. The normal page size is 4 kilobytes on most systems. However, modern hardware allows setting the page size for part of memory to a larger size. For example, x86-64 supports huge pages of 2 megabytes. I wrote a small test which allocates portions of memory with normal and huge page sizes, and does fork(). Let's see the results:

I wish SBCL's built-in memory manager worked with huge pages...
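The test program and its numbers are not included above; a minimal sketch of the kind of measurement described could look like this (assuming Linux with huge pages reserved, e.g. via /proc/sys/vm/nr_hugepages; the size and names are my own choices):

/* fork-cost.c: map memory with normal or huge pages, touch it, time fork(). */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv) {
    size_t len = 512UL << 20;                 /* 512 MB of anonymous memory */
    int flags = MAP_PRIVATE | MAP_ANONYMOUS;
    if (argc > 1 && strcmp(argv[1], "huge") == 0)
        flags |= MAP_HUGETLB;                 /* 2 MB pages on x86-64 */

    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, flags, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    memset(p, 1, len);                        /* fault every page in */

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    pid_t pid = fork();                       /* the operation being measured */
    if (pid == 0) _exit(0);                   /* child exits immediately */
    waitpid(pid, NULL, 0);
    gettimeofday(&t1, NULL);

    printf("fork+wait took %ld us\n",
           (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec));
    return 0;
}

Fewer pages mean fewer page-table entries to copy at fork() time, which is where the huge-page variant should win.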
<urn:uuid:fee4f3af-a8ed-4555-9910-b29f290c42ab>
2.90625
579
Personal Blog
Software Dev.
47.526971
August 30th, 2011

MODIS true color Red/Green/Blue (RGB) images

A sequence of daily 250-meter resolution MODIS true color Red/Green/Blue (RGB) images from the SSEC MODIS Direct Broadcast site (above) showed the development and evolution of the smoke plume emanating from a swamp fire that was burning in the Bayou Sauvage National Wildlife Refuge near New Orleans during the 26 August – 31 August 2011 time period. The change in daily wind directions resulted in very different smoke dispersion patterns on each day. Smoke from this fire caused air quality alerts to be issued for the New Orleans and Baton Rouge areas. AWIPS images (below) of the 1-km resolution MODIS 0.65 µm visible channel data at 19:16 UTC (2:16 pm local time on 30 August) showed the curving smoke plume; about 9.5 hours later, the fire "hot spot" (black to red to yellow color enhanced pixels) was seen on a 1-km resolution MODIS 3.7 µm shortwave IR image at 04:42 UTC (11:42 pm local time on 30 August).

MODIS 0.65 µm visible image + MODIS 3.7 µm shortwave IR image

August 30th, 2011

GOES-13 Visible (0.63 µm) images (click image to play animation)

An area of disturbed weather that emerged off the coast of Africa over the weekend has acquired sufficient organized convection to be classified as Tropical Storm Katia, the 11th named storm of this active Atlantic hurricane season. Analyses from the CIMSS Tropical Weather Website show the storm just south of a region of dry air from the Sahara. Shear analyses at the site show that Katia is projected to move into a region of decreasing shear in the next 24 hours. In addition, sea surface temperatures are warm. The forecast from the National Hurricane Center suggests slow strengthening over the next 3 days.

Overshooting Tops diagnosed with MSG data (click image to play animation)

Overshooting tops diagnosed using MSG data (at this site) (above) show a decrease in OT generation over the center of the system today, coincident with warming of the cloud tops. Variability in the number of OTs is common, as shown here.

August 29th, 2011

MODIS true color images: 16 August and 28 August 2011

Heavy rainfall associated with Hurricane Irene included 20.40 inches at Virginia Beach, Virginia and 20.00 inches at Jacksonville, North Carolina (HPC summary). Winds gusted as high as 115 mph at Cedar Island, North Carolina. The effects of the heavy rain and strong winds can be seen in a before/after comparison of 250-meter resolution MODIS true color Red/Green/Blue (RGB) images from the SSEC MODIS Today site (above). On the "before" image (16 August 2011), there was a large smoke plume seen from a fire that was burning in the Great Dismal Swamp area in far southeastern Virginia; on the "after" image (28 August 2011), water turbidity was significantly enhanced due to suspended sediment across the Outer Banks region of North Carolina — and a narrow filament of sediment was actually being entrained into the flow of the Gulf Stream. AWIPS images of the corresponding MODIS 0.65 µm visible channel data and the MODIS Sea Surface Temperature (SST) product (below) showed that the enhanced turbidity features seen on the MODIS true color image generally exhibited slightly cooler SST values (in the middle to upper 70s F, blue color enhancement) compared to the waters located closer to the Gulf Stream (SST values in the lower 80s F, darker red color enhancement).
MODIS 0.65 µm visible channel image + MODIS Sea Surface Temperature image Farther to the north, another before/after MODIS true color image comparison revealed additional areas of sediment being carried off the coast of the Northeast US (below). Also note that there was a great deal of sediment in the Hudson River (perhaps better seen in this 20 August / 29 August comparison). MODIS true color images: 26 August and 29 August 2011
<urn:uuid:64c674a6-8bdb-4727-9f30-c18b99bbc2ef>
2.765625
876
Content Listing
Science & Tech.
48.00484
Typically in Erlang you write ..

(fun(X) -> X * 2 end)(6)

to get 12 as the result. With string lambda, you write ..

(utils:lambda("X * 2"))(6)

to get the same result. OK, counting the characters of the module and the function, you grin at the fact that it is really more verbose than the original one. But we have less boilerplate, with the explicit function declaration being engineered within the lambda. A couple more examples of using the string lambda ..

(utils:lambda("X+2*Y+5*X"))(2,5) => 22
(utils:lambda("Y+2*X+5*Y"))(5,2) => 34

Less noisy? Let's proceed ..

lists:map(fun(X) -> X * X end, [1,2,3]).
lists:filter(fun(X) -> X > 2 end, [1,2,3,4]).
lists:any(fun(X) -> length(X) < 3 end, string:tokens("are there any short words?", " ")).

and with string lambdas, you have the same result with ..

utils:any("length(_) < 3", string:tokens("are there any short words?", " ")).

In the last example, _ is the parameter to a unary function and provides a very handy notation in the lambda. Here are some more examples with a unary parameter ..

%% multiply by 5
(utils:lambda("_ * 5"))(3) => 15

%% length of the input list > 2 and every element of the list > 3
(utils:all_and(["length(_) > 2", "utils:all(\">3\", _)"]))([1,2,3,4]).

As per the original convention, -> separates the body of the lambda from the parameters when they are stated explicitly, and the parameters are matched based on their first occurrence in the string lambda ..

If not specified explicitly, parameters can be implicit and can be effectively deduced ..

%% left section implicit match
%% right section implicit match
%% both sections implicit match

Chaining for Curry

-> can be chained to implement curried functions, or deferred invocation ..

Higher Order Functions

String lambdas allow creating higher order functions with much less noise and much more impact than native boilerplate in the language.

%% compose allows composition of sequences of functions backwards
%% higher order functions
utils:map("+1", utils:map("*2", [1,2,3])).
lists:map(utils:compose(["+1", "*2"]), [1,2,3]).

And here are some catamorphisms ..

lists:reverse() is typically defined by Erlang in the usual recursive style:

reverse([] = L) ->
    L;
reverse([_] = L) ->
    L;
reverse([A, B]) ->
    [B, A];
reverse([A, B | L]) ->
    lists:reverse(L, [B, A]).

reverse() using catamorphism and delayed invocation through string lambdas ..

Reverse = utils:lambda("utils:foldl(\"[E] ++ S\", [], _)").

and later ..

Or the classic factorial ..

Factorial = utils:lambda("utils:foldr(\"E*S\", 1, lists:seq(1,_))").

and later ..

Motivation to learn another programming language

I am no expert in functional programming. I do not use functional programming in my day job. But that is why I am trying to learn functional programming. Also it is not that I will be using functional programming to write production code in the near future. We all know how the enterprise machinery works and how a typical application developer has to struggle through the drudgery of boilerplate in his bread-earner job. Learning newer paradigms of programming will teach me how to think differently and how to move gradually towards writing side-effect free programs. The principal take-away from such learning is to be able to write a piece of code that encapsulates my intention completely within the specific block, without having to look around for those unintentional impacts in other areas of the codebase.

I am a newbie in Erlang, and the attempt to port string lambdas to Erlang is entirely driven by my desire to learn the programming language.

The source code is still a bit rusty, may not use many of the idioms and best practices of the language, and may not cover some of the corner cases. Go ahead, download the source code, and do whatever you feel like. But drop in a few comments in case you have any suggestions for improvement.

- lib_lambda.erl (the string lambda implementation)
- lib_misc.erl (miscellaneous utilities)
- lib_lambda_utilities.erl (utility wrappers)
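For readers who want the flavour of the trick before opening lib_lambda.erl: the core of any string-lambda library is turning a string into a fun at runtime, which Erlang's standard erl_scan / erl_parse / erl_eval modules make short work of. The sketch below is purely illustrative (the module name and two-argument shape are mine, not the post's implementation) and handles only explicitly named parameters; the parameter inference and section handling in lib_lambda.erl is where the real work lives.

%% Minimal illustrative sketch, NOT lib_lambda.erl: build a fun from a
%% string when the parameter names are supplied explicitly.
-module(lambda_sketch).
-export([lambda/2]).

lambda(Params, Body) ->
    Src = "fun(" ++ Params ++ ") -> " ++ Body ++ " end.",
    {ok, Tokens, _} = erl_scan:string(Src),          %% string -> tokens
    {ok, [Expr]}    = erl_parse:parse_exprs(Tokens), %% tokens -> abstract form
    {value, Fun, _} = erl_eval:expr(Expr, erl_eval:new_bindings()),
    Fun.

%% (lambda_sketch:lambda("X", "X * 2"))(6) evaluates to 12.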
<urn:uuid:d3144993-0e2f-46bc-995c-d9921d1b2bb2>
3.015625
1,039
Personal Blog
Software Dev.
64.214619
Do animals as simple as flies have personalities?

Definitions of personality vary substantially across fields, and are often left implied. The heart of the phenomenon seems to be that individual organisms display idiosyncratic behavioral patterns that persist for substantial amounts of time. By this definition, flies have abundant personality. The culmination of a couple of years of work by Jamey Kain in our lab, our paper on fly personality was just published in PNAS.

The detection of such personality requires a method to statistically distinguish variability within the behavior of an individual from variability between individuals. If there is more variability between individuals than within (i.e., the observed behavioral distribution is over-dispersed compared to what would be expected from sampling error alone), and the particular behavioral tendencies of individuals persist on subsequent re-evaluation, then you have uncovered personality. There is mounting evidence that organisms ranging from pea aphids to trout exhibit personality in a wide variety of behaviors. We suspect that personality may be universal.

We came to work on this project while trying to map genetic differences between lab strains of Drosophila simulans that exhibit different light preferences. One runs toward light, while the other runs away from light, when startled. These strains are essentially isogenic, harboring no genetic diversity between individuals, and yet we found that the distribution of behavioral scores within each strain was broader than we'd expect based on sampling error alone (we gave each fly a choice to go toward or away from light 20 times, and thus had a fairly precise estimate of its preference). This heterogeneity meant that mapping the genetic underpinnings of the difference in strain mean preference was going to require bigger sample sizes, and consequently be very labor intensive. That's when we decided to build FlyVac, a platform for the autonomous manipulation of flies to measure phototaxis. That's also when we realized that fly phototactic idiosyncrasy was an interesting phenomenon on its own.

So, after a couple of years' work, what have we learned about fly phototactic idiosyncrasy? A number of things:

• All fly strains seem to have it. Only when we put blind flies into the device, or gave them symmetrical light stimuli, did we observe behavioral distributions matching what we'd expect based on sampling error alone. In fact, the phenomenon is not limited to flies. The white clover weevil Ischnopterapion virens performed very similarly.
• It is persistent. Flies recovered from FlyVac and tested up to 28 days later show significantly correlated phototactic preferences.
• Genetic differences between animals cannot explain their behavioral differences. There isn't much genetic diversity to begin with in most of the lines we examined, but additionally we inbred them for 10 generations, mating daughters back to fathers or grandfathers (sorry flies). This did not reduce the magnitude of personality; if anything, it amplified it.
• Personalities cannot be inherited (even by non-genetic means, such as epigenetics). Mating two flies with strong positive (or negative) light preferences had no effect on the behavioral distribution of their progeny. In all cases the progeny had behaviors identically distributed as the parents.
• The gene white, and the neurotransmitter serotonin (whose synthesis may depend on white), act in wild type flies to suppress personality, driving behavior toward homogeneity.
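To make the over-dispersion test concrete, here is a hedged sketch (mine, not the paper's actual statistics) of the comparison at its heart: with 20 choices per fly, the spread of per-fly preference scores in a personality-free population should match binomial sampling error, so a variance ratio well above 1 points to individuality.

import numpy as np

# Hedged sketch of the over-dispersion comparison described above
# (illustrative only). choices is an (n_flies, n_trials) array of
# 0/1 phototactic decisions, e.g. n_trials = 20 as in the experiment.
def overdispersion_ratio(choices):
    scores = choices.mean(axis=1)             # each fly's preference score
    p = scores.mean()                         # population-mean preference
    n_trials = choices.shape[1]
    expected_var = p * (1 - p) / n_trials     # variance from sampling error alone
    return scores.var(ddof=1) / expected_var  # ratios well above 1 suggest personality

A persistence check would then simply correlate each fly's score across two testing sessions.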
Please see our paper for all the gory details.

Why does non-heritable personality exist? This will be the focus of a forthcoming paper from our lab (hopefully soon). But there are two basic ideas. Personality could be essentially noise, i.e. non-adaptive, but tolerated for some other greater good. Possible greater goods include rapid development, or avoiding the metabolic costs of additional signaling pathways or neurons that would suppress the developmental stochasticity that generates the variability. Alternatively, it might be inherently advantageous. Retaining a population with high diversity might mean that when a transient selective pressure arises that favors a subset of animals, they will successfully produce a subsequent generation. If that selective pressure goes away, then the non-heritability of the behaviors means that the subsequent generation will revert immediately to the original distribution of behaviors, well adapted for typical conditions.

Another consideration is that if behaviors are entirely predictable, they can be exploited. If flies always run toward the light, predators will learn this, and any fly that runs away from the light will have an advantage. This is an example of frequency-dependent selection, in which the fitness of a particular phenotype depends on the number of other individuals exhibiting it. Such dynamics often equilibrate to what economists and game-theorists call mixed strategies, when the optimal strategy entails acting randomly from instance to instance.

Jamey and I have a Q and A conversation about this work in the following video/podcast. If you stick around to the end of the video, we'll take you on a walking tour of the lab and show you FlyVac in operation:
<urn:uuid:8f9671c3-82f3-4db1-a58a-dd8b177a9dc2>
3.28125
1,028
Personal Blog
Science & Tech.
24.252282
The physics is actually much easier than it seems at first glance. Power generators are engines just like the everyday ones we see all around us in cars, lawnmowers, snowblowers, etc. Except for newer power sources like some wind and solar systems with electronic inverters, the vast majority of power is supplied by large rotating AC generators turning in sync with the frequency of the grid. The frequency of all these generators will be identical and is tied directly to the RPM of the generators themselves, generally 3600 RPM for gas turbines and 1800 RPM for nuclear plants on a 60 Hz grid.

If there is sufficient power in the generators, then the frequency can be maintained at the desired value (50 Hz or 60 Hz, depending on the locale). The power from the individual generators will lead the grid in phase slightly, by an amount roughly corresponding to the power they deliver to the grid. An increase in the power load is accompanied by a concurrent increase in the power supplied to the generators, generally by the governors automatically opening a steam or gas inlet valve to supply more power to the turbine. However, if there is not sufficient power, even for a brief period of time, then generator RPM and the frequency drop.

This is much like what happens to a car on cruise control if you start going up a hill: if the hill is not too steep you can maintain speed, but once you reach the limits of the torque supplied by the engine, the car and engine slow down. If the combined output of all the generators cannot supply enough power, then the frequency will drop for the entire grid. All the generators slow down, just like your car engine on a hill.

For large grids, the presence of many generators and a large distributed load makes frequency management easier, because any given load is a much smaller percentage of the combined capacity. For smaller grids, frequency fluctuates much more, because delays in matching supplied power are harder to manage when individual loads represent a larger percentage of the generated power.

So a battery system like the one in the article is really designed to keep short-term fluctuations in power demand from dropping the frequency during the finite time the governors and generators need to adjust to new power requirements. These "frequency regulator" power stations can supply very high power in short bursts to keep the power requirements even, so that the other generators never see load changes faster than their mechanical limitations allow them to respond to.
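The lock-step between rotation and frequency can be stated as a one-line formula: electrical frequency equals rotational speed times the number of magnetic pole pairs on the rotor, f = RPM x pole_pairs / 60. A small sketch (my illustration, not from the original answer; the pole counts are the usual ones for these plant types) shows why the two speeds quoted above both yield 60 Hz:

# Synchronous machine relation: f [Hz] = RPM * pole_pairs / 60.
# Illustrative sketch; pole counts are typical, not from the answer.
def grid_frequency_hz(rpm, pole_pairs):
    return rpm * pole_pairs / 60.0

print(grid_frequency_hz(3600, 1))  # 2-pole gas-turbine generator -> 60.0 Hz
print(grid_frequency_hz(1800, 2))  # 4-pole nuclear-plant generator -> 60.0 Hz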
<urn:uuid:12856ec6-8baa-418d-b58e-e9b9ec343485>
3.390625
493
Q&A Forum
Science & Tech.
37.898048
Text File Databases: Part 1

October 19, 2010

There is a lot of data stored in plain-ASCII text files consisting of records separated by newlines, each record consisting of multiple fields, and it is useful to have a function library for dealing with them. This exercise looks at some functions for reading the data; the next exercise will look at some functions for processing the data.

We will consider four common types of text file databases. A file with fixed-length data fields has records of a fixed number of characters, each record containing fields that are similarly in fixed positions; the data may be preceded by a fixed-length header. A file with character-delimited fields has variable-length records, each with fields separated by a single-character delimiter; the delimiter is often a tab or vertical bar. A particular type of variable-length delimited text database is known as comma-separated values, where the delimiter is a comma and fields may be surrounded by double-quote characters so that a comma within a quoted field loses its meaning as a field separator; in that case, a literal double-quote character may appear within a quoted field as two double-quote characters in succession. The fourth type that we will consider is a name-value record, where each record consists of multiple fields, one field per line, separated by blank lines, each field consisting of a type-name and a value separated by a delimiter; this format is often used for databases that have many optional fields, such as bibliographic databases.

We want reader functions for each of these file formats that all return a single record each time they are called, or an end-of-file marker when the input is exhausted, and advance the file pointer to the beginning of the next record. The return value should be a list or array, whichever is convenient, containing the value of one field in each element, except for the name-value record, which should return a list of name/value pairs.

Different operating systems have different methods of signalling the end of a line. For maximum portability, your functions should accept the end of a line indicated by a carriage return, a line feed, or both characters in either order. You should be prepared to accept any type of line marker because the data may come from any source; for instance, your computer running MS Windows with a CRLF line marker may fetch data from a Linux computer with a bare LF for the line marker. You should also accept the final line in the file whether or not it has a trailing line marker.

Your task is to write functions to read one record from each of the four file types described above. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
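By way of illustration (the site's suggested solutions are typically in Scheme, so treat this Python sketch as one possible shape, not the official answer), here is the second reader, for character-delimited fields, honoring the tolerant line-marker rule: CR, LF, CRLF, or LFCR, plus a final line with no marker at all.

# Sketch of the character-delimited reader (illustrative only).
# Yields one record per iteration as a list of field strings.
def records(text, delim='|'):
    i, n = 0, len(text)
    while i < n:
        j = i
        while j < n and text[j] not in '\r\n':
            j += 1
        yield text[i:j].split(delim)
        if j + 1 < n and text[j + 1] in '\r\n' and text[j + 1] != text[j]:
            j += 1              # swallow the partner of a two-character marker
        i = j + 1

# list(records("a|b\r\nc|d")) => [['a', 'b'], ['c', 'd']]

The fixed-length and name-value readers follow the same pattern, differing only in how the record body is sliced into fields; the CSV reader additionally needs a small state machine for quoted fields.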
<urn:uuid:88e37770-b49e-4bf6-9a0b-9798727b0e2d>
2.875
586
Tutorial
Software Dev.
32.261407
A general property of magnetic fields is that they decay with distance from their magnetic source. But in a new study, physicists have shown that surrounding a magnetic source with a magnetic shell can enhance the magnetic field as it moves away from the source, allowing magnetic energy to be transferred to a distant location through empty space.

The basis of the technique lies in transformation optics, a field that deals with the control of electromagnetic waves and involves metamaterials and invisibility cloaks. While researchers have usually focused on using transformation optics ideas to control light, here the researchers applied the same ideas to control magnetic fields by designing a magnetic shell with specific electromagnetic properties. Although no material exists that can perfectly meet the requirements for the magnetic shell's properties, the physicists showed that they could closely approximate these properties by using wedges of alternating superconducting and ferromagnetic materials.

The Polywell depends greatly on advances in magnet technology. This approach may be applicable; I don't know. A practical realization of a magnetic metamaterial still requires all the inconvenience of superconductors, which tempers my enthusiasm. Still, this idea glimmers with potential. Also, I bet I could build and test one. In fact I have almost all the materials on hand. Submit your ideas for an experiment in the comments.

FIG. 4: Enhanced magnetic coupling of two dipoles through free space. In (a), magnetic energy density of two identical cylindrical dipoles separated by a given gap. When separating and enclosing them with two of our shells with R2/R1 = 4 [(b)], the magnetic energy density in the middle free space is similar to that in (a). When the inner radii of the shells are reduced to R2/R1 = 10 [(c)], the magnetic energy is concentrated in the free space between the enclosed dipoles, enhancing the magnetic coupling.
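For scale, a standard magnetostatics aside (my addition, not a result from the paper): the baseline the shell has to beat is the point-dipole falloff. A dipole of moment m has field magnitude at distance r and polar angle theta of

B(r, theta) = (mu_0 m / 4 pi r^3) * sqrt(1 + 3 cos^2 theta),

so an unshielded source loses roughly a factor of eight in field strength for every doubling of distance. That inverse-cube decay is what makes "transferring magnetic energy to a distant location" normally so hard.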
<urn:uuid:9c676403-b18f-4342-9372-55db40839a7c>
3.4375
386
Personal Blog
Science & Tech.
34.627406
At some stage the land underneath West Cumbria will be used as a dump for spent nuclear material produced as a by-product of nuclear electricity generating power stations. West Cumbria, the geologists believe, was formed by rocks laid down at various times between 500 million years ago and 200 million years ago. Since then the landscape has been shaped by great climatic changes over the last two million years, during which the land has been covered by sea, desert and ice at various times. In the past decade Cumbria has suffered unprecedented levels of flooding, which was both unusual and unexpected. Its geology apparently makes it a favourable site for storing nuclear waste.

Now, to store nuclear waste safely is rather difficult. We simply do not know enough about the future to be able to predict accurately what will happen in hundreds of years to uranium stored today. The cunning plan that humanity has devised is to find a piece of land where either none of the inhabitants object to uranium being stored, or where the inhabitants can be rewarded for agreeing to have it stored. Having got the land, the humans plan to dig a very deep hole (the precise depth and dimensions are not yet known), and then dump the waste uranium, suitably stored in containers and encased in cement and concrete. Having done this, the hole will be covered and no doubt some kind of guard placed over it to prevent others from digging it up.

The geology of the land is very important. You cannot dump uranium in places where there are fault lines, known earthquakes, volcanic activity or geological stress. The uranium may be thrown up to the surface at some time in the future, or the containers may be cracked, causing uranium to leach into the deep water table. Stability is important, and most of West Cumbria has this stability.

I expect in ten or twelve thousand years the inhabitants of the land where the hole was dug will probably have forgotten that a uranium dump was ever below it. This, you might think, is no matter, because ten thousand years is longer than recorded history on this planet. However, uranium has been around much longer than humanity, and spent uranium takes quite a long time to decay to the point where it no longer presents a threat to life.

The two key isotopes of uranium involved in nuclear energy are uranium-235 and uranium-238. Both are used in nuclear power stations, but only uranium-235 can sustain a chain reaction, so the fuel is usually enriched to raise the proportion of uranium-235 relative to uranium-238 in the reactor, allowing longer chain reactions to be sustained. Uranium-238 takes 4,500 million years to decay to half its potency. That is rather a long time, but you may be relieved to know that uranium-235 takes rather less: only 713 million years, which is still more time than it took to lay down the rocks of West Cumbria. Uranium is radioactive, and as it decays it emits alpha and beta particles. I do not suggest that the buried uranium will remain dangerous for millions of years, but it certainly will remain very dangerous for ten thousand to eighty thousand years.

The geology might be right for storage of radioactive material, but what about flooding? Perhaps in future the poets of the Lake District will sigh:

I wandered in a radioactive cloud
With isotopes dancing in hills
When all at once I saw a crowd
A host of alpha particles
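Those half-life figures are easy to sanity-check. The decay law says the fraction of an isotope remaining after time t is (1/2)^(t/T), with T the half-life. A quick sketch (my illustration, using the figures quoted above) shows how little of the uranium itself is gone even on the longest timescale worried about here, which is exactly why the repository has to outlast any plausible human memory:

# Fraction of a radioactive isotope remaining after t years,
# given its half-life: N/N0 = 0.5 ** (t / half_life).
def fraction_remaining(t_years, half_life_years):
    return 0.5 ** (t_years / half_life_years)

print(fraction_remaining(80_000, 713e6))  # U-235 after 80,000 years: ~0.99992
print(fraction_remaining(80_000, 4.5e9))  # U-238 after 80,000 years: ~0.99999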
<urn:uuid:9246298c-8199-432a-af29-0392d29dfe8f>
3.546875
685
Personal Blog
Science & Tech.
45.205441
What causes the sun to vary?

We live in the extended atmosphere of a magnetic variable star that drives our solar system and sustains life on Earth. Our Sun varies in every way we can observe it. The Sun gives off light in the infrared, visible, ultraviolet, and at X-ray energies, and it gives off magnetic field, bulk plasma (the solar wind) and energetic particles moving at up to nearly the speed of light, and all of these emissions vary. These variations occur on timescales from milliseconds to billions of years. Most of these variations are related to the solar magnetic field, which is generated by the moving plasma inside the rotating Sun, which together make a dynamo.

Advanced Composition Explorer (ACE) observes particles of solar, interplanetary, interstellar, and galactic origins, spanning the energy range from solar wind ions to galactic cosmic ray nuclei. This mission is part of SMD's Explorers Program. Launched: August 25, 1997. Status: Operating.

The Balloon Array for Radiation-belt Relativistic Electron Losses (BARREL) mission is a balloon-based Mission of Opportunity to augment the measurements of NASA's RBSP spacecraft. This mission is part of SMD's LWS program.

IMP 8 has deepened understanding of the space environment near Earth in many ways. Observations from IMP 8 provided insight into plasma physics, the Earth's magnetic field, the structure of the solar wind and the nature of cosmic rays. Launched: October 26, 1973. Status: Past.

Solar and Heliospheric Observatory (SOHO) is a solar observatory studying the structure, chemical composition, and dynamics of the solar interior. SOHO is a joint venture of the European Space Agency and NASA. This mission is part of SMD's Heliophysics Research program. Launched: December 2, 1995. Status: Operating.

The goal of STEREO is to understand the origin of the Sun's coronal mass ejections (CMEs) and their consequences for Earth. The mission consists of two spacecraft, one leading and the other lagging Earth in its orbit. The spacecraft carries instrumentation ... Launched: October 25, 2006. Status: Operating.

Yohkoh, an observatory for studying X-rays and gamma-rays from the Sun, is a project of the Institute for Space and Astronautical Sciences, Japan. Launched: August 30, 1991. Status: Past.
<urn:uuid:f8e9d8ed-39d9-46a6-9a30-a680417d9ece>
3.921875
526
Content Listing
Science & Tech.
50.615597
Learning Intention: Students will understand the properties of igneous, sedimentary and metamorphic rocks and how they are formed. They will relate changes in landscape formations over time to processes such as weathering, erosion, fossilisation and the movement of continental plates.

Success Criteria: At the completion of this unit each student will be able to distinguish between igneous, sedimentary and metamorphic rocks and give common examples of each. They will be able to describe the way our earth changes over geological time.

This photograph was taken at Nature's Window, near Kalbarri, in Western Australia. You can see the layers of different coloured rock, which indicates that this is sedimentary rock. In the Western District, we see a lot of igneous rocks, formed from the lava flows of volcanoes. Metamorphic rocks are those that have changed over time due to heat and pressure.

- "How do rocks undergo change" is an excellent resource with photographs and an interactive rock-cycle animation.
- The Australian Museum also has some great resources to learn about "Shaping the Earth".
- BBC Bitesize has seven revision bites about the three different types of rocks, weathering, erosion and the rock cycle.
- Discover how rocks are formed with the "Rock Hounds".
<urn:uuid:53772f54-2d92-4bc6-905a-9df74525f41e>
4.46875
275
Tutorial
Science & Tech.
37.807942
Optical data analysis looks to clinical applications

Previous work under the direction of Dr. Reijo Pera and Dr. Baer correlated dark-field time-lapse imaging with gene expression profiling to conclude that an embryo's success in developing to the blastocyst stage can be reliably predicted within two days of fertilization, when the embryo is made up of only four cells and before embryonic gene activation (EGA), by using time-lapse optical imaging and an algorithm that detects three key features of the development cycle.1 The three parameters used to determine successful development to the blastocyst stage are (i) the duration of the first cytokinesis (the very brief last step in mitosis that physically separates the two daughter cells); (ii) the time interval between the first and second mitosis events; and (iii) the time interval between the second and third mitosis events.

Software developed by their team of researchers was used to detect the cells via computer vision, to follow the shape of the cells, and to evaluate the three parameters. The figure shows an example of how the computer vision method detects the cells and their shapes and follows them through this critical stage of development. The outcomes of the algorithm applied to time-lapse imaging were correlated with gene expression data, and successful blastocyst formation was found to be predictable with 93% sensitivity and specificity.

[Figure: The bottom row shows results of the tracking algorithm applied to the top row images of a single embryo. Images were taken at 5-minute intervals, and only a sampling of the images is shown here.1]

In an extension to this study, it was found that the same time-lapse imaging parameters can be used within tighter tolerances to predict ploidy versus aneuploidy with 100% sensitivity and 66% specificity.2 Aneuploidy, a condition found in 50-80% of cleavage-stage embryos, is the presence of an abnormal number of chromosomes, which is either not compatible with live birth or can cause conditions such as Down syndrome. Fragmentation of the embryo was shown to correlate with the time-lapse imaging parameters in predicting types of aneuploidies.

These studies proved promising for applications in in vitro fertilization (IVF), since typical IVF methods do not accurately predict healthy embryos, and usually multiple embryos are transplanted, with negative consequences such as multiple births, the need for fetal reduction, and miscarriage.1 To this end, the PIs and some co-authors of the paper listed in reference 1 have created the company Auxogyn, which brings this time-lapse imaging and automated computer-vision analysis to the clinic in their Early Embryo Viability Assessment (EEVA) System.

The current work of our group applies similar imaging and analysis techniques to study other biological phenomena in culture. Optical imaging is an enabling technology for looking at biological samples because it is less invasive than many other imaging techniques. In many of our applications, we image the cells repeatedly over periods of several days, so the imaging system must be designed to support this effort. I look forward to updating you on more of our work soon. Thanks for reading!

1. C. C. Wong et al., Nat. Biotechnol., 28, 1115-1121 (2010).
2. S. L. Chavez et al., Nat. Commun., 3, 1251 (2012).

CHRISTINE AMWAKE is a PhD candidate in Electrical Engineering at Stanford University under the advisement of Prof. Olav Solgaard.
Her current research interest is applying optical imaging techniques to biomedical applications.
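As a coda for technically minded readers: the three-parameter rule described in the post amounts to a simple window test on the measured times. The toy sketch below is mine, and the numeric windows are invented placeholders, not the values reported in Wong et al. (reference 1); consult the paper for the real intervals.

# Toy version of the three-parameter decision rule (illustrative only;
# the timing windows here are placeholders, NOT the published values).
def predicts_blastocyst(cytokinesis1_min, m1_to_m2_hours, m2_to_m3_hours,
                        windows=((0, 30), (8, 14), (0, 6))):
    values = (cytokinesis1_min, m1_to_m2_hours, m2_to_m3_hours)
    return all(lo <= v <= hi for v, (lo, hi) in zip(values, windows))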
<urn:uuid:904560a2-b981-468c-8f00-0716e786a562>
3.171875
730
Academic Writing
Science & Tech.
37.382956
Open Directory - Science: Physics: Optics: Color

Business: Chemicals: Dyes and Pigments: Pigments
Computers: Graphics: Web: Colors
Health: Conditions and Diseases: Eye Disorders: Color Blindness
Science: Physics: Education: Light and Optics
Science: Social Sciences: Psychology: Sensation and Perception: Color

Causes of Color - Explore the phenomena that create our colorful world.
- Clarifies aspects of colour specification and image coding that are important to computer graphics, image processing, video, and the transfer of digital images to print.
- Several resources about color and its relationship with other human activities.
Glossary of Color Science Terms - When one speaks about color, some of these terms are sure to be used.
Molecular Expressions: Science, Optics and You - An educational resource for the science of optics and the physics of light and color intended for teachers, students, and the general public.
Munsell Color Science Laboratory - Academic laboratory dedicated to research and education in color science, based at Rochester Institute of Technology.
The Munsell Color System - The original book of this name, published in 1921. The concepts of color hue, color value and color chroma are diagrammed and explained.
Theory of Color - Unconventional way of classifying and teaching colors.
- Explanation of applying fluorescent whitening agents to increase perceived whiteness.
efg's Computer Lab: CIE Chromaticity Diagrams - Demonstrates how to display a 1931 CIE chromaticity chart, as well as the transformations needed for the 1960 and 1976 charts. The charts can be displayed using either the 1931 2-degree or 1964 10-degree standard observer.
efg's Computer Lab: Color Mix Lab Report - Interactive demonstration of mixing additive or subtractive colors.
efg's Computer Lab: Maxwell Triangle Lab Report - Demonstrates mixture of light with three primaries. Introduces the concept of chromaticity coordinates.
efg's Computer Lab: Spectra Lab Report - A complete Delphi program, including source code, to display wavelength colors as a function of wavelength and optionally display the emission and absorption spectra of hydrogen.
<urn:uuid:3400b213-66c4-4c52-bb2f-01133d2d1a61>
3.125
517
Content Listing
Science & Tech.
24.879767
How far off are scientists from making a synthetic cell?

So far in synthetic biology, manipulations have been limited to designing different genetic circuits by reshuffling some of the natural genes to perform a particular task, and to moving metabolic pathways from one organism into another. For example, the Keasling group engineered a metabolic pathway in Escherichia coli that produces precursors to the antimalaria drug artemisinin. Currently, the drug is harvested from the Artemisia annua plant and is very expensive to produce. These researchers also reproduced this plant metabolic pathway in yeast, providing an inexpensive way to produce the drug. Furthermore, the Voigt group engineered E. coli to produce spider silk, one of the strongest fibers known.

Another area where there is progress in synthetic biology is designing almost digital-like circuits by perturbing gene control at different levels (transcriptional, translational, posttranslational). For example, scientists can currently design circuits that perform Boolean logic (i.e., involving AND, OR, and NOT operations). In this way, biological organisms can act as sensors, for example, producing an output only when they sense "A" AND they sense "B." In an effort to push the field further, several organizations are trying to make parts of genetic circuits more modular, so that one could just pick the desired parts and combine them like interlocking bricks to have a desired logic/output performed (e.g., see http://biobricks.org/).

However, many challenges still exist. Compared to electronics, biological systems have a lot of noise; rather than predictably producing an exact output, they generate drastically variable results. A lot of "bricks" are not well-defined; therefore, they work only in a particular organism or only under particular conditions. The circuits are not as predictable as one would think, because of unknown interactions with the host organism's machinery. Finally, far too much is still unknown about natural biological processes and circuits, leading to more of a "black box testing" scenario rather than purposeful design. However, because the price of DNA synthesis and sequencing has dropped over the past few years, allowing more and more scientists to tinker with genetic circuits, it is only a matter of time before a very simple synthetic cell is designed.
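To illustrate the Boolean-logic point in code (my sketch, not from the answer): genetic AND gates are often modeled with Hill activation functions, whose output is high only when both inducer concentrations are high. All parameter values here are arbitrary illustrative choices.

# Toy model of a genetic AND gate using Hill activation functions,
# a common abstraction in synthetic-biology modeling. The parameters
# (k, n) are arbitrary, purely for illustration.
def hill(x, k=1.0, n=2):
    return x**n / (k**n + x**n)

def and_gate(inducer_a, inducer_b):
    return hill(inducer_a) * hill(inducer_b)  # near 1 only when both inputs are high

print(and_gate(10.0, 10.0))  # both inducers present -> ~0.98
print(and_gate(10.0, 0.1))   # one inducer missing   -> ~0.01

Real circuits are far noisier than this deterministic toy, which is exactly the challenge the passage above describes.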
<urn:uuid:5b0dc18c-1f16-49da-abec-9ee8649011a1>
3.671875
477
Q&A Forum
Science & Tech.
28.795897
The Water Vapor map shows areas of moist and dry air at mid-levels of the atmosphere (about 12,000 feet). Water vapor, or water vapour, also aqueous vapor, is the gas phase of water. Water vapor is one state of the water cycle within the hydrosphere. Water vapor can be produced from the evaporation of liquid water or from the sublimation of ice. Under normal atmospheric conditions, water vapor is continuously generated by evaporation and removed by condensation.

Whenever a water molecule leaves a surface, it is said to have evaporated. Each individual water molecule that transitions between a more associated (liquid) and a less associated (vapor/gas) state does so through the absorption or release of kinetic energy. The aggregate measurement of this kinetic energy transfer is defined as thermal energy, and it changes only when there is a differential in the temperature of the water molecules. Liquid water that becomes water vapor takes a parcel of heat with it, in a process called evaporative cooling. The amount of water vapor in the air determines how fast each molecule will return to the surface. When a net evaporation occurs, the body of water undergoes a net cooling directly related to the loss of water. Evaporative cooling is restricted by atmospheric conditions.

Humidity is the amount of water vapor in the air. The vapor content of air is measured with devices known as hygrometers. The measurements are usually expressed as specific humidity or percent relative humidity. The temperatures of the atmosphere and the water surface determine the equilibrium vapor pressure; 100% relative humidity occurs when the partial pressure of water vapor is equal to the equilibrium vapor pressure. This condition is often referred to as complete saturation. Humidity ranges from 0 grams per cubic metre in dry air to 30 grams per cubic metre (0.03 ounce per cubic foot) when the vapour is saturated at 30°C.

Another form of evaporation is sublimation, by which water molecules become gaseous directly from ice without first becoming liquid water. Sublimation accounts for the slow mid-winter disappearance of ice and snow at temperatures too low to cause melting.

Water vapor will only condense onto another surface when that surface is cooler than the temperature of the water vapor, or when the water vapor equilibrium in air has been exceeded. When water vapor condenses onto a surface, a net warming occurs on that surface: the water molecule brings a parcel of heat with it. In turn, the temperature of the atmosphere drops slightly. In the atmosphere, condensation produces clouds, fog and precipitation (usually only when facilitated by cloud condensation nuclei). The dew point of an air parcel is the temperature to which it must cool before water vapor in the air begins to condense. Also, a net condensation of water vapor occurs on a surface when the temperature of that surface is at or below the dew point temperature of the atmosphere. Deposition, the direct formation of ice from water vapor, is a type of condensation. Frost and snow are examples of deposition.
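The relative-humidity definition above reduces to a single ratio, and the saturation value can be estimated from temperature. A hedged sketch (my addition; the Magnus-type coefficients used here are one common choice and vary slightly between sources), which also cross-checks the 30 g per cubic metre figure quoted above:

import math

# Saturation vapour pressure via a common Magnus-type approximation
# (result in hPa, input in deg C); coefficients vary between sources.
def saturation_vapor_pressure_hpa(t_c):
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

# Relative humidity: actual vapour pressure over the equilibrium value.
def relative_humidity_percent(vapor_pressure_hpa, t_c):
    return 100.0 * vapor_pressure_hpa / saturation_vapor_pressure_hpa(t_c)

# Cross-check of the "30 grams per cubic metre at 30 deg C" figure using the
# ideal gas law: density = e / (R_v * T), with R_v = 461.5 J/(kg K).
e_sat = saturation_vapor_pressure_hpa(30.0) * 100.0  # convert hPa -> Pa
print(e_sat / (461.5 * 303.15) * 1000.0)             # ~30 g per cubic metre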
<urn:uuid:537daef3-e5f2-474f-884c-6572b122c378>
4.125
636
Knowledge Article
Science & Tech.
30.776304
The Doppler shift is a shift in the wavelength of light or sound that depends on the relative motion of the source and the observer. A familiar example of a Doppler shift is the apparent change in pitch of an ambulance siren as it passes a stationary observer. When the ambulance is moving toward the observer, the observer hears a higher pitch because the wavelength of the sound waves is shortened. As the ambulance moves away from the observer, the wavelength is lengthened and the observer hears a lower pitch. Likewise, the wavelength of light emitted by an object moving toward an observer is shortened, and the observer will see a shift to blue. If the light-emitting object is moving away from the observer, the light will have a longer wavelength and the observer will see a shift to red. By observing this shift to red or blue, astronomers can determine the velocity of distant stars and galaxies relative to the Earth. Atoms moving relative to a laser also experience a Doppler shift, which must be taken into account in atomic physics experiments that make use of laser cooling and trapping.
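For the light case described above, and for speeds much less than c, the shift reduces to a one-line formula. This sketch is my illustration, not part of the original text; it treats a receding source as positive velocity:

# Non-relativistic Doppler shift for light (valid for v << c):
# observed wavelength = emitted * (1 + v/c); v > 0 means the source recedes.
C = 299_792_458.0  # speed of light in m/s

def observed_wavelength(emitted_nm, radial_velocity_m_s):
    return emitted_nm * (1.0 + radial_velocity_m_s / C)

print(observed_wavelength(656.3, 3.0e6))   # 656.3 nm line, source receding at 3000 km/s: redshifted
print(observed_wavelength(656.3, -3.0e6))  # same source approaching: blueshifted

Solving the same relation for velocity, v = c * (observed/emitted - 1), is how the recession speeds of stars and galaxies are read off their spectra.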
<urn:uuid:4d00b578-2e95-4d9f-be09-7bb51f5d3eff>
4.40625
219
Knowledge Article
Science & Tech.
40.153125
Improving our understanding of the role of the oceans and the cryosphere (ice) in the climate system:
• Improving ocean and ice modelling capability.
• Providing advice to government regarding climate mitigation.
• Understanding how the oceans, sea-ice and land-ice could be affected by climate change, and how these changes could feed back onto the climate system.

Improving and evaluating the performance of the ocean and sea ice components of global climate models. Investigating the role of ocean circulation in current and future climates, in particular that of the meridional overturning circulation. Providing policy-relevant evidence and research on avoiding dangerous climate change and its impacts.

Sea level rise is a key marine impact of climate change. Changes in sea ice and land ice have important climate feedbacks, through albedo and ocean circulation. The melt of land ice results in sea level rise. Ocean carbon and biogeochemical cycles influence climate on a range of timescales.

Information on the wide range of work our climate, cryosphere and oceans scientists undertake; the projects they are involved with; and their skills and interests.
<urn:uuid:62363ae9-7353-4065-9254-855794fbf32e>
3.140625
232
Content Listing
Science & Tech.
24.385086
A Comet Impacts Jupiter

Name: gwendolyn williams
Date: 1993 - 1999

Hello, I am gwendolyn williams and I teach high school physical science. Last week a very significant event took place on the planet Jupiter. When school opens in the fall, this event will already be history, and I am wondering if there will be computer-based resources that the students might refer to for information about this event, and some graphics showing what actually happened as the comet impacted Jupiter. Please let me know where I might refer my students for information on this topic. Thanks very much.

There's loads of stuff (text, images, even movies) available if you have a browser like NCSA Mosaic. If you do, here are some "places" (URLs) to go:

These are sites maintained by NASA's Jet Propulsion Lab. (The second was set up as a "mirror" of the first to handle the heavy demand made at the time of the impacts.) These also provide "links" to many other sites all over the world. Two other good starting points:

The SEDS home page has a page for the Shoemaker/Levy comet, and the last URL is for Astro Web, a great place to begin a search for anything astronomy-related.

Update: June 2012
<urn:uuid:113d09b0-b135-456f-b021-f7f2e745a0d6>
2.9375
308
Q&A Forum
Science & Tech.
46.1
Tetrachromacy is the ability to see color through four different "channels." More specifically, an animal with tetrachromacy has four different types of cone cells in each eye. A tetrachromat has a four-dimensional color view, which just means that it sees with four different primary colors as opposed to just three. Some fish, amphibians, reptiles, arachnids and insects are thought to be tetrachromats.

The best example of a tetrachromat is the bird, since most birds are tetrachromats. This means that birds distinguish a greater spectrum of colors than humans do. The zebra finch and the pigeons and doves (Columbidae) are birds that rely on their tetrachromacy as an integral part of their lives. They use their vision to select a mate as well as to find food. In a mate, they generally look for ultraviolet plumage.

Some researchers have hypothesized that some humans might be tetrachromats and might simply not realize it, because they have no point of reference to compare their vision to. Women in particular are suspected of having this capability on rare occasions because of their two X chromosomes. By having two X chromosomes, they could carry extra cone cell pigments.
<urn:uuid:5bd26353-753e-4c28-95fe-05f356e58527>
4.21875
263
Knowledge Article
Science & Tech.
39.639793