"The Processing language was originally created at MIT as part of the Media lab and Aesthetics and Computation group. They needed a way to bridge the gap between software developers, artists, data visualizers, etc., and to do so in a way that allowed new programmers (or non-programmers) to do complex visual work easily. Processing was built using Java, and can be thought of as a simplified Java, with a simplified Java API for drawing and graphics." (source) And so it is - with some little tricks. First, Processing should get initialized in the html page like this A Processing instance will be created, it can get accessed via Amber in this way: processing := <Processing.instances>. After fetching this instance, it can be used like any other Smalltalk Object: processing background: 224. processing line: centerX y: centerY dX: myDX dY: myDY. I converted the clock-example to Amber, you can find it in the Example-section of my Amber Git repo. And maybe soon in the "official" Amber-Examples.
The magnetic and electric fields are force fields. They exert forces on magnetic or charged particles. We can measure these fields easily with instruments that respond to such forces. Magnetic and electric fields therefore have a very real and tangible existence. But these force fields themselves are merely higher order expressions of something more fundamental known as potential fields. Potential fields form an underlying substrate in which certain distortions give rise to magnetic or electric fields. But even without such distortions, and hence without any measurable magnetic or electric fields, the potential field can still exist in its distortion-free state. There are three main potential fields: magnetic vector potential `vec(A)`, scalar electric potential `V`, and gravitational potential `varphi`. These respectively give rise to the three main force fields: magnetic field `vec(B)`, electric field `vec(E)`, and gravitational field `vec(g)`. The following equations show how these relate: `grad xx vec(A) = vec(B)` `-grad V = vec(E)` `-grad varphi = vec(g)` The first equation is pronounced “del cross A equals B” or “curl of A equals B.” This means that curl (vorticity, circulation, twist) in the magnetic vector potential gives rise to a magnetic force field. For instance, if `vec(A)` uniformly circulates counter-clockwise around your computer screen, the equivalent magnetic field `vec(B)` points out of the screen toward you; `vec(B)` is always at right angles to the curled parts of `vec(A)`. A magnetic field line may be visualized as the central axis of a vortex made of vector potential. But if the vector potential has zero vorticity, then no magnetic field arises, yet it can still distort in other ways by fluctuating, diverging, or compressing. The second equation is pronounced “minus grad V equals E” or “negative gradient of V equals E”. Gradients are inclines, increases in some quantity over some distance. 
When the scalar electric potential `V`, also known as voltage, changes over some distance, that establishes an electric field. For example, if electric scalar potential is lower at the left side of your screen and increases steadily toward the right, the electric field from this will point toward the left, down the slope. A positive charged particle released in this field will be propelled toward the left. But if there is no gradient in the scalar potential, meaning if the voltage everywhere on your screen is uniform, then there is no associated electric field. The charged particle will just sit there experiencing no force. However, the voltage can still vary over time, fluctuating everywhere at the same rate, and still the charged particle experiences no force. Modern scientific instruments cannot measure such a field because the electrons within the instruments do not move in a way that creates detectable current. The third equation is similar to the second. It says that the gravitational force field `vec(g)` we are all familiar with, which points down toward the center of the earth and accelerates falling masses at an average rate of 9.8 `m/s^2`, is itself simply the negative gradient of the gravitational potential `varphi`. Or to put it another way, the gravitational potential increases with height, forming a gradient whose downward slope points toward the ground. But again, theoretically if the gravitational potential did not have a slope, there would be no measurable gravity force, and yet the potential field could still vary in other ways such as varying uniformly everywhere within a certain area over time. Once more, such a field cannot be measured with standard scientific instruments because without there being any forces, no reading can be made. 
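Since the relation `-grad V = vec(E)` is just a derivative, it is easy to check numerically. Below is a minimal sketch with made-up illustrative values, using NumPy's finite-difference gradient: a voltage rising linearly across a region yields a uniform field pointing down the slope, while a perfectly uniform voltage yields no field at all, exactly as described above.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)        # positions in metres (hypothetical grid)
V_sloped = 5.0 * x                    # voltage rising steadily to the right
V_flat = np.full_like(x, 5.0)         # uniform 5 V everywhere

E_sloped = -np.gradient(V_sloped, x)  # about -5 V/m everywhere: points left, down the slope
E_flat = -np.gradient(V_flat, x)      # zero everywhere: no force on a charge
```

A charged particle in the flat-potential case feels no force, even though the potential itself is nonzero, which is the point the text is making.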
A fourth equation relates the magnetic vector potential to the electric field: `(del vec(A))/(del t) = -vec(E)` This equation says that if `vec(A)` increases over time, an electric field will arise pointing opposite the physical direction of `vec(A)`. This is why changing magnetic fields are said to give rise to electric fields and vice versa; magnetic fields consist of curled vector potential, and a change in the latter manifests an electric field. What is not taught in physics classes, however, is that a curl-free vector potential that varies over time can induce a dynamic electric field without its corresponding dynamic magnetic field. So what is the significance of potential fields? Well, aside from giving rise to phenomena that we can detect and measure, they can also do things that we cannot detect using standard methods, things that may have effects we might not even imagine possible. What if the frequency of an oscillating but uniform scalar potential (voltage) can affect our mood? Then our mood could be manipulated by such fields without us, limited as we are to mainstream modern technology, ever finding out what the true cause might be. The same can be said for curl-free magnetic vector potential fields, or gradient-free gravitational potential fields. Well, it turns out that technology does exist to detect some of these exotic potential fields that lack any measurable force field components. However, these are out of reach for the average person. See for instance a list of patents dealing with the vector potential. Most of these employ what are known as Josephson junctions, which are quantum mechanical devices that allow direct measurement of the vector potential regardless of whether or not a magnetic force field is present. But good luck to anyone who desires to build or buy a Josephson junction; these require superconducting materials assembled with precision.
They can be found in a less effective configuration in medical MRI machines, employed as the core components of SQUID (Superconducting QUantum Interference Device) detectors designed to measure very weak magnetic fields. The important thing to know about all this is that force-free potential fields have subtle effects on reality at the quantum level. Whereas magnetic and electric fields play a greater role in physical processes involving energy transfers, potential fields work more on the quantum level as phase selectors, probability shapers, and spacetime torsion inducers. This is what scalar physics is all about: using those more exotic aspects of electromagnetic theory that are unknown or ignored by mainstream science. For further discussion and several diagrams concerning the vector potential and its role in electromagnetism, please see my research notes on transverse and longitudinal waves. And if you feel comfortable with the math, then read about one important application of all this: Portal Physics (also known as space-time engineering). For a non-mathematical, diagram-based explanation of potential fields in relation to electricity, gravity, magnetism, and non-Maxwellian wave phenomena, see The Etheric Origins of Gravity, Electricity, and Magnetism.
Earlier today, scientists at the European Space Agency marked a milestone: on March 1, 2002, the largest Earth observation satellite ever built was launched into orbit. During the course of its (extended) lifetime, the Envisat satellite has circled the Earth more than 50,000 times, providing fodder for scientists publishing their research in an estimated 2,000 scientific journals. But it's hardly an anomaly. Despite the well-chronicled budgetary problems affecting space programs around the world, space exploration nonetheless continues to extend our understanding of the solar system (and beyond). Just this week, NASA's Spitzer Space Telescope took the extraordinary image you see on this page of a baby star sprouting two identical jets. Check out the enclosed photo essay chronicling some of the more notable probes now hurtling toward the great beyond, as well as a few that soon will join them in their collective pursuit of knowledge.
Suppose you have to specify the moment in time when a given event occurred, a "zero time". The record must be accurate to the minute, and be obtainable even after thousands of years. All the measures of time we currently have are relative to a well-defined zero, but the zero is not easy to backtrack exactly. One possibility would be to take a sample of carbon with a well-defined, very accurate amount of 14C, and say: the event occurred when the 14C was x%. At any time, by measuring the rate of decay, you would know when the event occurred. This, however, requires a physical entity to measure, which may be lost. Another way would be to give the time elapsed after a well-defined series of solar eclipses. In order to define the context precisely, you would give a list of (say) five consecutive eclipses and the places on Earth where they were total, and then a time gap from the last of the set. At any time in the future, you can feed the specified conditions into a celestial mechanics program and find when the event occurred. Is there a standardized or well-recognized method to do so?
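For the 14C idea, the arithmetic is plain exponential decay. A minimal sketch (the function name is mine; the 5,730-year figure is the accepted half-life of carbon-14):

```python
import math

C14_HALF_LIFE_YEARS = 5730.0  # accepted half-life of carbon-14

def years_since(fraction_remaining: float) -> float:
    """Years elapsed since a sample held 100% of its reference 14C,
    given the fraction still present, assuming exponential decay."""
    if not 0.0 < fraction_remaining <= 1.0:
        raise ValueError("fraction must be in (0, 1]")
    # N(t) = N0 * 2**(-t / t_half)  =>  t = t_half * log2(N0 / N(t))
    return C14_HALF_LIFE_YEARS * math.log2(1.0 / fraction_remaining)

# A sample later measured at 90% of its reference level:
print(round(years_since(0.90)))  # 871 (years)
```

Note this only recovers the elapsed time to within the precision of the decay measurement, which is far coarser than the minute-level accuracy the question asks for; that is part of why the eclipse approach is attractive.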
Plus Magazine, Issue 2. Find out how modern telephone networks use mathematics to make it possible for a person to dial a friend in another country just as easily as if they were in the same street, or to read web pages that are on a computer in another continent. Erlang was the first person to study the problem of telephone networks, and the mathematics underlying today's complex networks is still based on his work. Here is an experiment that you can easily do yourself to test Bernoulli's equation. There are also two questions and answers. The British General Election (May 1997) is an example of how simple mathematical ideas help in understanding information that involves numbers. After 5,000 years, the game of Nine Men's Morris has succumbed to the power of modern computing, plus other recent mathematical discoveries in the world of games. Read about two students at Keele University: Christine Vretta is doing Joint Honours Maths and Physics, and Steve Smith is doing Joint Honours Maths and Computer Science. We talk to Tim Pilkington, a keen basketball player, who has a joint honours BSc in Maths, Physical Education and Sports Science from Loughborough University. Tim has worked as a mathematics teacher and is now working as an accountant.
It's staggering to imagine a time when the Earth and its planetary siblings were nothing but cosmic dust. Yet astronomers agree that this was the state of things some 4.5 billion years ago. Our sun was but a fledgling protostar, continually amassing more matter via gravity and steadily cranking up its internal nuclear fusion. There was no solar system, only a giant, rotating cloud of particles called the solar nebula. To figure out how all that leftover gas and dust led to planets, astronomers have largely studied the structure of our own solar system for clues. They've also looked to distant, younger solar systems still in varying stages of development. With the formation of the sun, the remaining gas and dust flattened into a rotating protoplanetary disk. Within this swirling debris, rocky particles began to collide, forming larger masses that soon attracted even more particles via gravity. These particles contracted under gravity to create planetesimals, which collided with one another to become the solid inner planets. Meanwhile, gases froze into giant balls that would build the outer gas giants. Why did rocky planets form closer to the sun and the gas giants farther away? One theory involves the solar wind, the steady flow of plasma that emanates from a star. When the sun first came into being, this wind was far stronger than it is today -- strong enough to blast lighter elements such as hydrogen and helium away from the inner orbits. When these expelled elements reached the outer orbits, the strength of the solar wind dropped off. The gravity of the outer gas giants quickly drew these elements in, bloating them into their current forms: solid cores of rock and ice covered with gas. This planetary formation theory presumes that gas giants always occur in a solar system's outer orbits. Then, in 1995, astronomers discovered the distant planet 51 Pegasi b, a "hot Jupiter," or gas giant, that orbited very close to its sun. 
This discovery called for new theories, primarily that such planets must form far away from the central star and then move into a closer orbit. Astronomers theorize that such an orbital migration, powered by a gravitational tug-of-war with other cosmic bodies, would take hundreds of millions of years. The journey would also destroy any smaller, inner planets in its path. The more we learn about the structure of other solar systems, the more we learn about the formation of our planets. Explore the links on the next page to learn even more about our cosmic origins.
Deep Seas, Dark Worlds
There's barely enough room for three people to fit. But marine scientists Richard Lutz and Peter Rona of Rutgers University crammed themselves into Alvin, a tiny submersible vehicle made for exploring the deep seafloor. Unlike scientists who boarded Alvin before them, Lutz and Rona weren't going on a research expedition to gather data on ocean life. Rather, they were guiding a camera crew in shooting the IMAX movie Volcanoes of the Deep Sea. This is the first film ever to take an up-close look at deep-sea vents and their ecosystems (all of the living and nonliving things that interact in an environment). Five minutes after the sub began its long, chilly descent into the Pacific Ocean, its passengers were steeped in total darkness: no sunlight penetrates the water's opaque depths. The pilot and crew pulled on thick sweaters and warm socks as the temperature dipped to match that of the frigid water outside the vehicle: only 1.1°C (34°F), just above freezing. Besides the occasional glimmer of luminescent (glowing) sea creatures splashing against Alvin's portholes, there was no life to be seen. But two hours later and 3,600 meters (12,000 feet) deeper, Lutz and Rona found a surprising treasure among the otherwise barren waters. Alvin's blue-green headlights panned across a bizarre scene of red-tipped tube worms, shrimp, and fish crowded around a rocky column spewing smoky water. "It's like a little oasis in the middle of the desert," says Lutz. What was this towering plume? Peering through Alvin's 10-centimeter (4-inch) porthole, the scientists had spied a hydrothermal vent, a hot spring on the seafloor that pumps out water superheated by volcanic activity from inside the earth. Most hydrothermal vents sit on top of the mid-ocean ridge, the longest continuous mountain range on the planet. "It's the largest geographic feature on Earth," says Rona.
But that's not the only thing that sets this mountain range apart: The ridge snakes over more than 64,000 kilometers (35,000 miles) along the bottom of each of the world's oceans. The ridge, which is made of solidified lava, is like a giant zipper being pulled open as two tectonic plates (slowly moving, giant rock slabs that make up Earth's outer shell) spread apart. Molten material erupts through an opening in the ridge, pushing older rock to both sides of the mountain range. As the hot liquid lava meets cold seawater, it instantly solidifies into new crust. "It's like an engine pumping up new material from the earth. As it pumps lava up, it forces each wall of the mid-ocean ridge to move aside," explains Lutz. Because parts of the ridge are continually spreading and forming new sections of seafloor, scientists call these places seafloor spreading centers.
Birth of a Vent
Sometimes the scalding lava cracks as it solidifies. Cold, heavy seawater seeps into the cracks. As the water sinks several kilometers into Earth's interior, metals, minerals, and gases dissolve in the water. When this water comes into contact with molten rock under Earth's crust, it gets heated to temperatures as high as 400°C (750°F), hot enough to melt lead! The water's newfound heat makes it light and buoyant, causing it to rise. As it exits through another set of cracks, the seawater creates a new hydrothermal vent. The minerals and metals that had dissolved in the water crystallize around the vent, forming a tall tube called a chimney. The hydrothermal vent will continue to discharge hot, metal-rich water for decades. Eventually, it becomes clogged with minerals, topples over, or uses up the heat, Rona explains.
Prime Real Estate
Along with minerals and metals, hydrothermal vents also gush forth dissolved gases like hydrogen sulfide, which is toxic to humans and has a potent stench like rotten eggs. Not a great place to set up house, unless you're a hyperthermophile.
These heat-loving microbes live on and around vents and use this nasty gas as an energy source to manufacture sugars and starches to nourish themselves. What's even stranger: The microscopic members of these unique underwater cities breathe the iron that was once locked beneath Earth's oceans but is now dissolved in the watery plumes. These chemosynthetic (chemical-using) microbes give rise to other life that normally couldn't live in the pitch-black deep sea. "These bacteria are the bottom of the food chain at the vents," says Lutz. After bacteria spread into a thick mat around a new vent, other creatures gradually set up colonies in a process called succession. These animals slowly crowd onto the tiny section of livable space, braving the surrounding sea's frigid temperatures and water pressure, which is strong enough to crush an army tank. Of the life that Lutz and Rona spotted on their underwater tour, the shrimp-like amphipods (AM-fih-pods) and copepods (COH-puh-pods) were likely among the first to drift down to graze on the vent's bacteria. The biologists suspect the snail-like limpets and shrimp arrived months later to dine on the amphipods and copepods. Next to follow? Tube worms with tops as red as lipstick. Lobsters, octopuses, mussels, and clams are some of the last creatures to set up house at a vent. Although a few deep-sea creatures are familiar, more than 95 percent of vent animals are completely unknown to science. Researchers find weird new species on almost every dive, including snails covered with plates of iron armor and shrimp with infrared light detectors on their backs instead of eyes. Because these strange creatures can survive super-hot temperatures, high pressure, and darkness (conditions too extreme for humans), Lutz and Rona say the vent's unique ecosystems may hold the key to finding life elsewhere in our solar system.
For example, many scientists believe that hydrothermal vents may exist under the icy surface of Europa, one of Jupiter's moons, and may harbor vent microbes similar to those on Earth. "These bacteria are the most primitive kinds of organisms that we know on Earth," says Lutz. "They're strong evidence that life might have started in the vents and that it could exist someplace else."
How Hydrothermal Vents Form
At the mid-ocean ridge, molten lava rises to the top of the oceanic crust, Earth's outermost layer of rock in the oceans. Cracks in the oceanic crust allow dense seawater to sink beneath the rock surface. During the descent, minerals dissolve into the water. The mineral-rich water then warms, rising back to the seafloor to form deep-sea vents.
Nuclear Power in the World Today (updated April 2012)
- The first commercial nuclear power stations started operation in the 1950s.
- There are now over 430 commercial nuclear power reactors operating in 31 countries, with 372,000 MWe of total capacity.
- They provide about 13.5% of the world's electricity as continuous, reliable base-load power, and their efficiency is increasing.
- 56 countries operate a total of about 240 research reactors and a further 180 nuclear reactors power some 150 ships and submarines.
Nuclear technology uses the energy released by splitting the atoms of certain elements. It was first developed in the 1940s, and during the Second World War research initially focussed on producing bombs by splitting the atoms of particular isotopes of either uranium or plutonium. In the 1950s attention turned to the peaceful purposes of nuclear fission, notably for power generation. Today, the world produces as much electricity from nuclear energy as it did from all sources combined in 1960. Civil nuclear power can now boast over 14,800 reactor years of experience and supplies almost 13.5% of global electricity needs, from reactors in 31 countries. In fact, many more countries than these use nuclear-generated power. Many countries have also built research reactors to provide a source of neutron beams for scientific research and the production of medical and industrial isotopes. Today, only eight countries are known to have a nuclear weapons capability. By contrast, 56 operate civil research reactors, 92 of these in developing countries. Now 31 countries host 434 commercial nuclear power reactors with a total installed capacity of over 372,000 MWe (see linked table). This is more than three times the total generating capacity of France or Germany from all sources. Over 60 further nuclear power reactors are under construction, equivalent to 17% of existing capacity, while over 150 are firmly planned, equivalent to 48% of present capacity.
A list of the countries with nuclear power projects is appended. Sixteen countries depend on nuclear power for at least a quarter of their electricity. France gets around three quarters of its power from nuclear energy, while Belgium, Bulgaria, Czech Republic, Hungary, Slovakia, South Korea, Sweden, Switzerland, Slovenia and Ukraine get one third or more. Japan and Finland normally get more than a quarter of their power from nuclear energy, while in the USA one fifth is from nuclear. Among countries which do not host nuclear power plants, Italy gets about 10% of its power from nuclear, and Denmark about 8%.
Improved performance from existing nuclear reactors
As nuclear power plant construction returns to the levels reached during the 1970s and 1980s, those plants now operating are producing more electricity. In 2011, production was 2518 billion kWh. The increase over the six years to 2006 (210 TWh) was equal to the output from 30 large new nuclear power plants. Yet between 2000 and 2006 there was no net increase in reactor numbers (and only 15 GWe in capacity). The rest of the improvement is due to better performance from existing units. In a longer perspective, from 1990 to 2010, world capacity rose by 57 GWe (17.75%, due both to net addition of new plants and uprating of some established ones) and electricity production rose 755 billion kWh (40%). The relative contributions to this increase were: new construction 36%, uprating 7% and availability increase 57%. In 2011 both capacity and output diminished due to cutbacks in Germany and Japan following the Fukushima accident. One quarter of the world's reactors have load factors of more than 90%, and nearly two thirds do better than 75%, compared with about a quarter of them in 1990. For 15 years Finnish plants topped the performance tables, but the USA now dominates the top 25 positions, followed by South Korea and Russia, then Japan, Taiwan and India.
US nuclear power plant performance has shown a steady improvement over the past twenty years, and the average load factor now stands at around 87%, up from 66% in 1990 and 56% in 1980. This places the USA as the performance leader with nearly half of the top 50 reactors, the 50th achieving more than 95.5% (data to Sept 2011). The USA accounts for nearly one third of the world's nuclear electricity. In 2011, 18 countries averaged better than 80% load factor, while French reactors averaged 76%, despite many being run in load-following mode rather than purely for base-load power. Some of these figures suggest near-maximum utilisation, given that most reactors have to shut down every 18-24 months for fuel change and routine maintenance. In the USA this used to take over 100 days on average, but in the last decade it has averaged about 40 days. Another performance measure is unplanned capability loss, which in the USA has for the last few years been below 2%.
Other nuclear reactors
In addition to commercial nuclear power plants, there are about 240 research reactors operating in 56 countries, with more under construction. These have many uses, including research and the production of medical and industrial isotopes, as well as training. The use of reactors for marine propulsion is mostly confined to the major navies, where it has played an important role for five decades, providing power for submarines and large surface vessels. About 150 ships are propelled by some 180 nuclear reactors, and over 13,000 reactor-years of experience has been gained with marine reactors. Russia and the USA have decommissioned many of their nuclear submarines from the Cold War era. Russia also operates a fleet of six large nuclear-powered icebreakers and a 62,000 tonne cargo ship, which are more civil than military. It is also completing a floating nuclear power plant with two 40 MWe reactors for use in remote regions.
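Load factor, the measure used throughout the figures above, is simply actual output divided by the output a unit would produce running at full capacity for the whole period. A quick sketch (the reactor numbers here are hypothetical, not taken from the cited tables):

```python
def load_factor(output_gwh: float, capacity_gwe: float, hours: float = 8760.0) -> float:
    """Fraction of the period's maximum possible output actually generated.
    8760 hours is one non-leap year."""
    return output_gwh / (capacity_gwe * hours)

# A hypothetical 1 GWe unit generating 7,600 GWh over a year:
print(f"{load_factor(7600.0, 1.0):.1%}")  # about 86.8%
```

This makes clear why the high-80s averages quoted above sit close to the practical ceiling: a refuelling outage of roughly 40 days alone removes about 11% of the year.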
Note: Taipower uses nuclear energy to generate 22% of electricity on the island of Taiwan. See table of the World's Nuclear Power Reactors which complements this paper. WNA, data to publication date.
Science subject and location tags
Articles, documents and multimedia from ABC Science
Friday, 24 May 2013: Citizen scientists have helped solve a decades-old puzzle by assisting astronomers to make the most accurate distance measurements yet for an important star system.
Monday, 25 March 2013: Unnecessary use of antibiotics could unleash a killer hypervirulent strain of gastro bug in Australia, warn infectious disease experts.
Friday, 8 March 2013: Scientists have created a revolutionary super-strong nanomaterial that can be used in a wide range of devices from dental braces and medical implants to cables, solar arrays and mobile phones.
Thursday, 28 February 2013: Craters caused by asteroid or comet impacts may have played an important role in the creation and evolution of life, say Australian scientists.
Tuesday, 22 January 2013: Losing satellite navigation signals could become a thing of the past thanks to new technology being developed to dramatically improve reliability.
Wednesday, 16 January 2013: A recent marine heatwave off Western Australia rapidly shrank the distribution range of an ecologically-important seaweed, researchers report.
Friday, 14 December 2012: A microquasar, produced by a black hole feeding on a star, has been detected in a nearby galaxy for the first time.
Wednesday, 21 November 2012: DNA extracted from ancient latrines has "opened the door" to identifying the plants and animals that existed in northern Australia more than 30,000 years ago.
Friday, 2 November 2012: The first solids to form in our solar system appeared at the same time, more than 4.5 billion years ago, according to a new study.
Monday, 29 October 2012: Scientists have discovered a new species of lizard fighting to survive among the sand dunes outside Perth in Western Australia.
Tuesday, 23 October 2012: Liquid water existed over a far wider area of the solar system than originally thought, a new study confirms.
Friday, 5 October 2012: The first stage in a multi-billion dollar radio telescope has been officially opened at a ceremony in the Western Australia outback.
Thursday, 4 October 2012: A pair of black holes have been found inside a globular cluster - something astronomers thought wasn't possible.
Monday, 20 February 2012: Perth is celebrating the 50th anniversary of John Glenn's historic flight aboard Friendship-7.
Friday, 3 February 2012: Coral growth rates are increasing on some reefs off the coast of Western Australia, a new study has found.
Science Fair Project Encyclopedia
Artificial life, also known as alife or a-life, is the study of life through the use of human-made analogs of living systems. Computer scientist Christopher Langton coined the term in the late 1980s when he held the first "International Conference on the Synthesis and Simulation of Living Systems" (otherwise known as Artificial Life I) at the Los Alamos National Laboratory in 1987.
Nature of the field
Although the study of artificial life does have some significant overlap with the study of artificial intelligence (AI), the two fields are very distinct in their history and approach. Organized AI research began early in the history of digital computers, and was often characterized in those years by a "top-down" approach based on complicated networks of rules. Students of alife did not have an organized field at all until the 1980s, and often worked in isolation, unaware of others doing similar work. Where they concerned themselves with intelligence at all, researchers tended to focus on the "bottom-up" nature of emergent behaviors. Artificial life researchers have often been divided into two main groups (although other groupings are possible):
- The strong alife position states that "life is a process which can be abstracted away from any particular medium" (John Von Neumann). This is notably the position of Tom Ray, who declared that his program Tierra was not simulating life in a computer but synthesizing it.
- The weak alife position denies the possibility of generating a "living process" outside of a carbon-based chemical solution. Its researchers try instead to mimic life processes to understand the appearance of single phenomena. The usual way is through an agent-based model, which usually gives a minimal possible solution. That is: "we don't know what in nature generates this phenomenon, but it could be something as simple as..."
The field is characterized by the extensive use of computer programs and computer simulations, which include evolutionary algorithms (EA), genetic algorithms (GA), genetic programming (GP), artificial chemistries (AC), agent-based models, and cellular automata (CA). Artificial life is a meeting point for people from many other more traditional fields such as linguistics, physics, mathematics, philosophy, computer science, biology, anthropology and sociology, in which unusual computational and theoretical approaches that would be controversial within their home discipline can be discussed. As a field, it has had a controversial history; John Maynard Smith criticized certain artificial life work in 1995 as "fact-free science", and it has not generally received much attention from biologists. However, the recent publication of artificial life articles in the journal Nature is evidence that artificial life techniques are becoming more accepted in the mainstream, at least as a method of studying evolution.
History and contributions
A few inventions of the pre-digital era were early heralds of humankind's fascination with artificial life. Most famous was an artificial duck, with thousands of moving parts, created by Jacques de Vaucanson. The duck could reportedly eat and digest, drink, quack, and splash in a pool. It was exhibited all over Europe until it fell into disrepair. One of the earliest thinkers of the modern age to postulate the potentials of artificial life, separate from artificial intelligence, was math and computer prodigy John Von Neumann. At the Hixon Symposium, hosted by Linus Pauling in Pasadena, California in the late 1940s, Von Neumann delivered a lecture titled "The General and Logical Theory of Automata."
He defined an "automaton" as any machine whose behavior proceeded logically from step to step by combining information from the environment and its own programming, and said that natural organisms would in the end be found to follow similar simple rules. He also spoke about the idea of self-replicating machines. He postulated a machine -- a kinematic automaton -- made up of a control computer, a construction arm, and a long series of instructions, floating in a lake of parts. By following the instructions that were part of its own body, it could create an identical machine. He followed this idea by creating (with Stanislaw Ulam) a purely logic-based automaton, not requiring a physical body but based on the changing states of the cells in an infinite grid -- the first cellular automaton. It was extraordinarily complicated compared to later CAs, having hundreds of thousands of cells which could each exist in one of twenty-nine states, but Von Neumann felt he needed the complexity in order for it to function not just as a self-replicating "machine", but also as a universal computer as defined by Alan Turing. Von Neumann worked on his automata theory intensively right up to his death, and considered it his most important work. Homer Jacobsen illustrated basic self-replication in the 1950s with a model train set -- a seed "organism" consisting of a "head" and "tail" boxcar could use the simple rules of the system to consistently create new "organisms" identical to itself, so long as there was a random pool of new boxcars to draw from. Edward F. Moore proposed "Artificial Living Plants", which would be floating factories that could create copies of themselves. They could be programmed to perform some function (extracting fresh water, harvesting minerals from seawater) for an investment that would be relatively small compared to the huge returns from the exponentially growing numbers of factories.
Freeman Dyson also studied the idea, envisioning self-replicating machines sent to explore and exploit other planets and moons, and a NASA group called the Self-Replicating Systems Concept Team performed a 1980 study on the feasibility of a self-building lunar factory. University of Cambridge professor John Conway invented the most famous cellular automaton in the 1960s. He called it the Game of Life, and publicized it through Martin Gardner's column in Scientific American magazine. Philosophy scholar Arthur Burks, who had worked with Von Neumann (and indeed, organized his papers after his death), headed the Logic of Computers Group at the University of Michigan. He brought the overlooked views of 19th century American thinker Charles S. Peirce into the modern age. Peirce was a strong believer that all of nature's workings were based on logic. The Michigan group was one of the few groups still interested in alife and CAs in the early 1970s; one of its students, Tommaso Toffoli, argued in his PhD thesis that the field should not be overlooked as a mathematical curiosity, because its results were so powerful in explaining the simple rules that underlay complex effects in nature. Toffoli later provided a key proof that CAs were reversible, just as the true universe is considered to be. Christopher Langton was an unconventional researcher, with an undistinguished academic career that led him to a job programming DEC mainframes for a hospital. He became enthralled by Conway's Game of Life, and began pursuing the idea that the computer could emulate living creatures. After years of study (and a near-fatal hang-gliding accident), he began attempting to actualize Von Neumann's CA and the work of E.F. Codd, who had simplified Von Neumann's original twenty-nine state monster to one with only eight states. He succeeded in creating the first self-replicating computer organism in October of 1979, using only an Apple II desktop computer.
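Conway's Game of Life, mentioned above, is easy to state precisely: a live cell with two or three live neighbors survives, and a dead cell with exactly three live neighbors is born. A minimal illustrative sketch (not from the article; the grid size and the "blinker" pattern are arbitrary choices):

```python
from collections import Counter

def step(live, width, height):
    """Advance one Game of Life generation; `live` is a set of (x, y) cells.
    The grid wraps around at the edges (toroidal topology)."""
    # Count, for every cell, how many live neighbors it has.
    counts = Counter(
        ((x + dx) % width, (y + dy) % height)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A vertical "blinker" oscillates with period 2: it flips to horizontal
# and back.
blinker = {(1, 0), (1, 1), (1, 2)}
gen1 = step(blinker, 5, 5)   # horizontal: {(0, 1), (1, 1), (2, 1)}
gen2 = step(gen1, 5, 5)      # back to the original vertical blinker
```

The same bottom-up flavor -- global patterns emerging from purely local rules -- is what drew Langton and the Michigan group to cellular automata in the first place.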
He entered Burks' graduate program at the Logic of Computers Group in 1982, at the age of 33, and helped to found a new discipline. Langton's official conference announcement of Artificial Life I was the earliest description of a field which had previously barely existed: Artificial life is the study of artificial systems that exhibit behavior characteristic of natural living systems. It is the quest to explain life in any of its possible manifestations, without restriction to the particular examples that have evolved on earth. This includes biological and chemical experiments, computer simulations, and purely theoretical endeavors. Processes occurring on molecular, social, and evolutionary scales are subject to investigation. The ultimate goal is to extract the logical form of living systems. Microelectronic technology and genetic engineering will soon give us the capability to create new life forms in silico as well as in vitro. This capacity will present humanity with the most far-reaching technical, theoretical and ethical challenges it has ever confronted. The time seems appropriate for a gathering of those involved in attempts to simulate or synthesize aspects of living systems. Ed Fredkin founded the Information Mechanics Group at MIT, which united Toffoli, Norman Margolus, Gerard Vichniac, and Charles Bennett. This group created a computer especially designed to execute cellular automata, eventually reducing it to the size of a single circuit board. This "cellular automata machine" allowed an explosion of alife research among scientists who could not otherwise afford sophisticated computers. In 1982, brilliant and controversial scientist Stephen Wolfram turned his attention to cellular automata. He explored and categorized the types of complexity displayed by one-dimensional CAs, and showed how they applied to natural phenomena such as the patterns of seashells and the nature of plant growth.
Norman Packard, who worked with Wolfram at the Institute for Advanced Study, used CAs to simulate the growth of snowflakes, following very basic rules. Computer animator Craig Reynolds similarly used three simple rules to create recognizable flocking behaviour in groups of computer-drawn "boids" in 1987. With no top-down programming at all, the boids produced life-like solutions to evading obstacles placed in their path. Computer animation has continued to be a key commercial driver of alife research as the creators of movies attempt to find more realistic and inexpensive ways to animate natural forms such as plant life, animal movement, hair growth, and complicated organic textures. The Unit of Theoretical Behavioural Ecology at the Free University of Brussels applied the self-organization theories of Ilya Prigogine and the work of entomologist E.O. Wilson to research the behavior of social insects, particularly allelomimesis, in which an individual's actions are dictated by those of a neighbor. They wrote a script describing the behavior of termites, then modified the environment and watched the way that the simulated, script-driven insects reacted. They then compared that to the reaction of real termites to identical changes in laboratory colonies, and refined their theories about the rules which underlay the behavior. James Doyne Farmer was a key figure in tying artificial life research to the emerging field of complex adaptive systems, working at the Center for Nonlinear Studies (a basic research section of Los Alamos National Laboratory), just as its star chaos theorist Mitchell Feigenbaum was leaving. Farmer and Norman Packard chaired a conference in May of 1985 called "Evolution, Games, and Learning", which was to presage many of the topics of later alife conferences.
- Stuart Kauffman - Stanley Miller & Harold Urey - Steen Rasmussen - James Crutchfield - Gerald Joyce - John Henry Holland, inventor of genetic algorithms - David Jefferson - Richard Dawkins - John Koza - Danny Hillis - Karl Sims - Thomas Ray - Steve Grand, creator of Creatures - artificial consciousness - artificial chemistry - digital organisms - evolutionary art - clanking replicator - carbon chauvinism - systems biology - wet alife - neural nets - Lindenmayer systems - game theory - "What is life?" - "When can we say that a system, or a subsystem, is alive?" - "What is the smallest system that we can consider alive?" - "Why is nature able to achieve an open-ended evolutionary system, while all human models seem to fall short of it?" - "How can we measure evolution?" - "How can we measure emergence?" - International Society for Artificial Life (ISAL) - Artificial Life (journal) - Introduction to Artificial Life The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
...the stream). The processing and transport of essential elements follow a downstream sequence. Hypotheses attempting to explain ecological processes in running waters include the concept of the river continuum, which explains differences in lotic communities according to the changing ecological factors along the river system. Nutrient spiraling is another concept invoked to explain the...
Common Lisp the Language, 2nd Edition Time is represented in three different ways in Common Lisp: Decoded Time, Universal Time, and Internal Time. The first two representations are used primarily to represent calendar time and are precise only to one second. Internal Time is used primarily to represent measurements of computer time (such as run time) and is precise to some implementation-dependent fraction of a second, as specified by internal-time-units-per-second. Decoded Time format is used only for absolute time indications. Universal Time and Internal Time formats are used for both absolute and relative times. Decoded Time format represents calendar time as a number of components: X3J13 voted in March 1989 (TIME-ZONE-NON-INTEGER) to specify that the time zone part of Decoded Time need not be an integer, but may be any rational number (either an integer or a ratio) in the range -24 to 24 (inclusive on both ends) that is an integral multiple of 1/3600. There appears to be no user demand for floating-point time zones. Since such zones would introduce inexact arithmetic, X3J13 did not consider adding them at this time. This specification does require time zones to be represented as integral multiples of 1 second (rather than 1 hour). This prevents problems that could otherwise occur in converting Decoded Time to Universal Time. Universal Time represents time as a single non-negative integer. For relative time purposes, this is a number of seconds. For absolute time, this is the number of seconds since midnight, January 1, 1900 GMT. Thus the time 1 is 00:00:01 (that is, 12:00:01 A.M.) on January 1, 1900 GMT. Similarly, the time 2398291201 corresponds to time 00:00:01 on January 1, 1976 GMT. Recall that the year 1900 was not a leap year; for the purposes of Common Lisp, a year is a leap year if and only if its number is divisible by 4, except that years divisible by 100 are not leap years, except that years divisible by 400 are leap years. 
Therefore the year 2000 will be a leap year. (Note that the ``leap seconds'' that are sporadically inserted by the world's official timekeepers as an additional correction are ignored; Common Lisp assumes that every day is exactly 86400 seconds long.) Universal Time format is used as a standard time representation within the ARPANET; see reference . Because the Common Lisp Universal Time representation uses only non-negative integers, times before the base time of midnight, January 1, 1900 GMT cannot be processed by Common Lisp. Internal Time also represents time as a single integer, but in terms of an implementation-dependent unit. Relative time is measured as a number of these units. Absolute time is relative to an arbitrary time base, typically the time at which the system began running. get-decoded-time The current time is returned in Decoded Time format. Nine values are returned: second, minute, hour, date, month, year, day-of-week, daylight-saving-time-p, and time-zone. get-universal-time The current time of day is returned as a single integer in Universal Time format. decode-universal-time universal-time &optional time-zone The time specified by universal-time in Universal Time format is converted to Decoded Time format. Nine values are returned: second, minute, hour, date, month, year, day-of-week, daylight-saving-time-p, and time-zone. X3J13 voted in January 1989 (DECODE-UNIVERSAL-TIME-DAYLIGHT) to specify that decode-universal-time, like encode-universal-time, ignores daylight saving time information if a time-zone is explicitly specified; in this case the returned daylight-saving-time-p value will necessarily be nil even if daylight saving time happens to be in effect in that time zone at the specified time. encode-universal-time second minute hour date month year &optional time-zone The time specified by the given components of Decoded Time format is encoded into Universal Time format and returned.
If you do not specify time-zone, it defaults to the current time zone adjusted for daylight saving time. If you provide time-zone explicitly, no adjustment for daylight saving time is performed. internal-time-units-per-second This value is an integer, the implementation-dependent number of internal time units in a second. (The internal time unit must be chosen so that one second is an integral multiple of it.) get-internal-run-time The current run time is returned as a single integer in Internal Time format. The precise meaning of this quantity is implementation-dependent; it may measure real time, run time, CPU cycles, or some other quantity. The intent is that the difference between the values of two calls to this function be the amount of time between the two calls during which computational effort was expended on behalf of the executing program. get-internal-real-time The current time is returned as a single integer in Internal Time format. This time is relative to an arbitrary time base, but the difference between the values of two calls to this function will be the amount of elapsed real time between the two calls, measured in the units defined by internal-time-units-per-second. (sleep n) causes execution to cease and become dormant for approximately n seconds of real time, whereupon execution is resumed. The argument may be any non-negative non-complex number. sleep returns nil.
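The leap-year rule and the 1900 epoch can be checked mechanically. A sketch in Python rather than Lisp, purely for illustration, reproducing the text's example that Universal Time 2398291201 corresponds to 00:00:01 on January 1, 1976 GMT:

```python
# Common Lisp's Universal Time conventions: seconds since midnight,
# January 1, 1900 GMT, with the Gregorian leap-year rule and every day
# exactly 86400 seconds long (leap seconds ignored).

def leap_year_p(year):
    """A year is a leap year iff divisible by 4, except centuries,
    except centuries divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def universal_time(year):
    """Universal Time of 00:00:00 on January 1 of `year`, GMT."""
    days = sum(366 if leap_year_p(y) else 365 for y in range(1900, year))
    return days * 86400

# 1900 is not a leap year; 2000 is.
assert not leap_year_p(1900) and leap_year_p(2000)
# The text's example: 2398291201 is 00:00:01 on January 1, 1976 GMT.
assert universal_time(1976) + 1 == 2398291201
```

The assertions mirror the worked examples in the text; a real Lisp would of course use encode-universal-time directly.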
Vast amounts of methane hydrates are stored in sediments along the continental margins. Their stability is due to the low-temperature, high-pressure conditions found on the seafloor. Global warming could destabilize these hydrates and cause a release of methane (CH4) into the water column and possibly the atmosphere. Seafloor Warming and Methane Releases in the Arctic Ocean Since the Arctic has warmed considerably and is projected to warm further, Arctic bottom water temperatures and their future evolution projected by a climate model were analyzed in a joint modeling effort by a group of physical and biological oceanographers, geologists, geochemists, and atmospheric scientists from the Cluster (A2). The seafloor warming was found to be spatially inhomogeneous, with the strongest impact on shallow regions affected by Atlantic inflow (see Fig.). Within the next 100 years, the warming will affect 25 % of shallow and mid-depth regions containing methane hydrates. Release of methane from melting hydrates in these areas could enhance ocean acidification and oxygen depletion through aerobic microbial consumption in the water column. Contrary to widespread previous estimates, the impact of methane release on global warming, however, was found to be insignificant within the time span considered. Biastoch, A., Treude, T., Rüpke, L.H., Riebesell, U., Roth, C., Burwicz, E.B., Park, W., Latif, M., Böning, C.W., Madec, G., Wallmann, K. (2011) Rising Arctic Ocean temperatures cause gas hydrate destabilization and ocean acidification. Geophys. Res. Lett. 38, L08602, doi: 10.1029/2011GL047222 Caption: Impact of global warming on Arctic methane hydrates within the next 100 years: (a) bottom temperatures in an ocean model (ORCA05), (b) changes due to global warming (KCM), (c) changes in the thickness of the gas hydrate stability zone (geophysical model), (d) changes in the near-bottom pH values by released methane (geochemical calculation). All figures from Biastoch et al. (2011).
DNA is tightly packed in the nucleus of every cell. DNA wraps around special proteins called histones, forming bead-like units of DNA called nucleosomes. These nucleosomes coil and stack together to form fibers called chromatin. Chromatin in turn forms larger loops and coils to form chromosomes.
Jan 3, 2013, 05:44 PM — #1
I thought of this question the other day, and I was unable to solve it. A Google search has not helped, so I thought I might post it here. A point mass hangs from a rod of length "l" attached to a fixed pivot. The only forces acting upon the point mass are the force of gravity and the force of constraint (keeping it a distance "l" from the pivot). Is there a function that describes the motion of the point mass?
Jan 3, 2013, 05:50 PM — #2
Show us what you've tried, and where you're stuck, and then we'll know how to help!
Jan 3, 2013, 06:03 PM — #3
OK. It's not as complicated as a double pendulum. It's just a single pendulum where the mass is constrained to a sphere (rather than the 2-dimensional case where you have a circle). Well, one thought I had was to use conservation of energy, since the total energy is just mgh + (1/2)mv^2 = C. The mass is just a constant, and we can divide it out. From this point, I am stuck, however, and I don't know where to go from here. I was thinking the initial velocity must be perpendicular to the force of constraint and was wondering if you could split up the motion into just x and y components to solve it, but that seemed fruitless upon inspection. I am looking for a general function that describes the motion of the point around the sphere. Your help is appreciated greatly.
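A hedged numerical sketch of what the thread is asking for (not from the forum itself): in spherical coordinates, with theta the polar angle measured from the downward vertical and phi the azimuth, the standard equations of motion are theta'' = sin(theta)cos(theta) phi'^2 − (g/l) sin(theta) and phi'' = −2 theta' phi' cot(theta), the latter expressing conservation of the vertical angular momentum. There is no closed-form solution in general, but a small RK4 integrator traces the motion; all parameter values below are illustrative.

```python
import math

g, l = 9.81, 1.0   # gravity and rod length (illustrative)

def derivs(state):
    """state = (theta, phi, dtheta, dphi); returns its time derivative."""
    theta, phi, dtheta, dphi = state
    ddtheta = math.sin(theta) * math.cos(theta) * dphi**2 \
              - (g / l) * math.sin(theta)
    ddphi = -2.0 * dtheta * dphi / math.tan(theta)  # conserves L_z
    return (dtheta, dphi, ddtheta, ddphi)

def rk4_step(state, h):
    """One classical Runge-Kutta step of size h."""
    def add(s, k, f):
        return tuple(si + f * ki for si, ki in zip(s, k))
    k1 = derivs(state)
    k2 = derivs(add(state, k1, h / 2))
    k3 = derivs(add(state, k2, h / 2))
    k4 = derivs(add(state, k3, h))
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def energy(state):
    """Total energy per unit mass: (1/2)l^2(th'^2 + sin^2 th ph'^2) - g l cos th."""
    theta, _, dtheta, dphi = state
    return 0.5 * l**2 * (dtheta**2 + math.sin(theta)**2 * dphi**2) \
           - g * l * math.cos(theta)

state = (0.5, 0.0, 0.0, 2.0)   # tilted, with some azimuthal speed
e0 = energy(state)
for _ in range(10000):         # integrate 10 seconds of motion
    state = rk4_step(state, 1e-3)
assert abs(energy(state) - e0) < 1e-5   # energy is conserved
```

The energy check is exactly the mgh + (1/2)mv^2 = C conservation law post #3 mentions (with the mass divided out); it is a useful sanity test on any integrator for this problem.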
Narrator: This is Science Today. A new air sampler, which can effectively trap and evaluate fine particle pollutants in both their solid and gas phases, has been developed by scientist Lara Gundel of the Lawrence Berkeley National Laboratory. This sampler can greatly impact future EPA air quality standards and lead to a definitive understanding of how pollutants such as polycyclic aromatic hydrocarbons and pesticides affect our health. Gundel: I guess the innovations we made were a way to separate the gases and particles accurately and then to be able to collect and measure the particles and the gases separately and accurately. Narrator: Gundel achieved this by coating an inner tube of roughened glass inside the filter with sticky resin beads, which were finely ground up until their pores were small enough to trap gas particles. Gundel: We quickly decided that this new technology that we had, these new kinds of sampler designs, could be used in outdoor air, in general atmospheric questions. Narrator: Gundel is currently collaborating with other scientists and the EPA to test this sampler at various sites across the country. For Science Today, I'm Larissa Branin.
Narrator: This is Science Today. One of five nanoscale science research centers funded by the Department of Energy is located at the Lawrence Berkeley National Laboratory. Rick Kelly, the Environmental Health and Safety Manager of the Molecular Foundry, explains the intent of the facility is to foster the development of new nanotechnologies that can benefit society. Kelly: The next revolution in technology in this country is nanotechnology. And the intent of the Department of Energy is to excite people and give them opportunities they might not have at their home research institutions to do nanoscale science, so they use our facilities, they use the expertise of our scientists and they use the techniques and materials we've developed over the years to hopefully develop great new technologies that will help society and help medicine and all the fields that are applicable to science. Narrator: There are all kinds of applications - from the development of cheap, nanoscale solar cells and nanoparticle-containing sunscreens to medical applications. Kelly: We have the ability to use materials such as quantum nanodots to trace things going on in cells - ways to detect cancer; ways in fact, to treat cancer that we've never had before. Narrator: For Science Today, I'm Larissa Branin.
Synchrotron radiation occurs when a charge moving at relativistic speeds follows a curved trajectory. In this section, formulas and supporting graphs are used to quantitatively describe characteristics of this radiation for the cases of circular motion (bending magnets) and sinusoidal motion (periodic magnetic structures). We will first discuss the ideal case, where the effects due to the angular divergence and the finite size of the electron beam—the emittance effects—can be neglected. The angular distribution of radiation emitted by electrons moving through a bending magnet with a circular trajectory in the horizontal plane is given by

d²Φ_B/dθ dψ = (3α/4π²) γ² (Δω/ω) (I/e) y² (1 + X²)² [K_{2/3}²(ξ) + (X²/(1 + X²)) K_{1/3}²(ξ)] ,  (1)

where
Φ_B = photon flux (number of photons per second)
θ = observation angle in the horizontal plane
ψ = observation angle in the vertical plane
α = fine-structure constant
γ = electron energy/mₑc² (mₑ = electron mass, c = velocity of light)
ω = angular frequency of photon (ε = ħω = energy of photon)
I = beam current
e = electron charge = 1.602 × 10⁻¹⁹ coulomb
y = ω/ω_c = ε/ε_c
ω_c = critical frequency, defined as the frequency that divides the emitted power into equal halves
ρ = radius of instantaneous curvature of the electron trajectory (in practical units, ρ[m] = 3.3 E[GeV]/B[T])
E = electron beam energy
B = magnetic field strength
ε_c = ħω_c (in practical units, ε_c[keV] = 0.665 E²[GeV] B[T])
X = γψ
ξ = y(1 + X²)^{3/2}/2

The subscripted K's are modified Bessel functions of the second kind. In the horizontal direction (ψ = 0), Eq. (1) becomes

d²Φ_B/dθ dψ |_{ψ=0} = (3α/4π²) γ² (Δω/ω) (I/e) H₂(y) .  (2)

In practical units [photons·s⁻¹·mr⁻²·(0.1% bandwidth)⁻¹],

d²Φ_B/dθ dψ |_{ψ=0} = 1.327 × 10¹³ E²[GeV] I[A] H₂(y) ,  (2a)

where H₂(y) = y² K_{2/3}²(y/2). The function H₂(y) is shown in Fig. 2-1.

Fig. 2-1. The functions G₁(y) and H₂(y), where y is the ratio of photon energy to critical photon energy.

The distribution integrated over ψ is given by

dΦ_B/dθ = (√3/2π) α γ (Δω/ω) (I/e) G₁(y) .

In practical units [photons·s⁻¹·mr⁻¹·(0.1% bandwidth)⁻¹],

dΦ_B/dθ = 2.457 × 10¹³ E[GeV] I[A] G₁(y) ,

where G₁(y) = y ∫_y^∞ K_{5/3}(y′) dy′. The function G₁(y) is also plotted in Fig. 2-1. Radiation from a bending magnet is linearly polarized when observed in the bending plane.
Out of this plane, the polarization is elliptical and can be decomposed into its horizontal and vertical components. The first and second terms in the last bracket of Eq. (1) correspond, respectively, to the intensity of the horizontally and vertically polarized radiation. Figure 2-2 gives the normalized intensities of these two components, as functions of emission angle, for different energies. The square root of the ratio of these intensities is the ratio of the major and minor axes of the polarization ellipse. The sense of the electric field rotation reverses as the vertical observation angle changes from positive to negative. Synchrotron radiation occurs in a narrow cone of nominal angular width ~1/γ. To provide a more specific measure of this angular width, in terms of electron and photon energies, it is convenient to introduce the effective rms half-angle σ_ψ through

dΦ_B/dθ = (2π)^{1/2} σ_ψ [d²Φ_B/dθ dψ]_{ψ=0} ,

Fig. 2-2. Normalized intensities of horizontal and vertical polarization components, as functions of the vertical observation angle ψ, for different photon energies. (Adapted from Ref. 1.)

Fig. 2-3. The function C(y). The limiting slopes, for ε/ε_c << 1 and ε/ε_c >> 1, are indicated.

where σ_ψ is given by σ_ψ = C(y)/γ. The function C(y) is plotted in Fig. 2-3. In terms of σ_ψ, Eq. (2) may now be rewritten accordingly. In a wiggler or an undulator, electrons travel through a periodic magnetic structure. We consider the case where the magnetic field B varies sinusoidally and is in the vertical direction:

B(z) = B₀ cos(2πz/λ_u) ,  (8)

where z is the distance along the wiggler axis, B₀ the peak magnetic field, and λ_u the magnetic period. Electron motion is also sinusoidal and lies in the horizontal plane. An important parameter characterizing the electron motion is the deflection parameter K, given by

K = eB₀λ_u/2πmₑc = 0.934 λ_u[cm] B₀[T] .  (9)

In terms of K, the maximum angular deflection of the orbit is δ = K/γ.
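The practical expressions above (ρ[m] = 3.3 E[GeV]/B[T] and ε_c[keV] = 0.665 E²[GeV] B[T], plus the standard practical form of the deflection parameter, K = 0.934 λ_u[cm] B₀[T]) are easy to evaluate numerically. A small sketch; the machine parameters are made up for illustration:

```python
# Practical bending-magnet and insertion-device formulas quoted in the
# text.  All constants below come from the text (3.3, 0.665) and the
# standard practical form of Eq. (9) (0.934); parameters are invented.

def bend_radius_m(E_GeV, B_T):
    """Radius of curvature: rho[m] = 3.3 E[GeV] / B[T]."""
    return 3.3 * E_GeV / B_T

def critical_energy_keV(E_GeV, B_T):
    """Critical photon energy: e_c[keV] = 0.665 E^2[GeV] B[T]."""
    return 0.665 * E_GeV**2 * B_T

def deflection_parameter(lambda_u_cm, B0_T):
    """K = 0.934 lambda_u[cm] B0[T] (practical form of Eq. (9))."""
    return 0.934 * lambda_u_cm * B0_T

# A hypothetical 1.9 GeV ring with 1.27 T dipoles:
E, B = 1.9, 1.27
rho = bend_radius_m(E, B)          # ~4.94 m
ec = critical_energy_keV(E, B)     # ~3.05 keV
K = deflection_parameter(5.0, 0.5) # ~2.3 for a 5 cm, 0.5 T device
```

Note that ε_c scales as E²B, so doubling the beam energy at fixed field quadruples the critical photon energy.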
For K ≲ 1, radiation from the various periods can exhibit strong interference phenomena, because the angular excursions of the electrons are within the nominal 1/γ radiation cone; in this case, the structure is referred to as an undulator. In the case K >> 1, interference effects are less important, and the structure is referred to as a wiggler. In a wiggler, K is large and radiation from different parts of the electron trajectory adds incoherently. The flux distribution is then given by 2N (where N is the number of magnet periods) times the appropriate formula for bending magnets, either Eq. (1) or Eq. (2). However, ρ or B must be taken at the point of the electron's trajectory tangent to the direction of observation. Thus the critical energy depends on the horizontal angle θ, reaching its maximum value on axis: ε_c,max[keV] = 0.665 E²[GeV] B₀[T]. When ψ = 0, the radiation is linearly polarized in the horizontal plane, as in the case of the bending magnet. As ψ increases, the direction of the polarization changes, but because the elliptical polarization from one half-period of the motion combines with the elliptical polarization (of opposite sense of rotation) from the next, the polarization remains linear. In an undulator, K is moderate (K ≲ 1) and radiation from different periods interferes coherently, thus producing sharp peaks at harmonics of the fundamental (n = 1). The wavelength of the fundamental on axis (θ = ψ = 0) is given by

λ₁ = (λ_u/2γ²)(1 + K²/2) .  (11)

The corresponding energy, in practical units, is

ε₁[keV] = 0.950 E²[GeV]/[(1 + K²/2) λ_u[cm]] .

The relative bandwidth at the nth harmonic is

Δλ/λ = 1/nN .  (12)

On axis, the peak intensity of the nth harmonic is given by Eq. (13), in which the J's are Bessel functions. The function F_n(K) is plotted in Fig. 2-4. In practical units [photons·s⁻¹·mr⁻²·(0.1% bandwidth)⁻¹], Eq. (13) becomes

d²Φ_n/dθ dψ |_{θ=ψ=0} = 1.744 × 10¹⁴ N² E²[GeV] I[A] F_n(K) .  (13a)

The angular distribution of the nth harmonic is concentrated in a narrow cone whose half-width is given by σ_r′ = (λ/2L)^{1/2}, where L is the length of the undulator (L = Nλ_u).

Fig. 2-4. The function F_n(K) for different values of n, where K is the deflection parameter.
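For the undulator parameters of the Fig. 2-5 example (λ_u = 3.5 cm, K = 1.87, E = 1.3 GeV), the on-axis fundamental follows from the standard relation λ₁ = (λ_u/2γ²)(1 + K²/2), i.e., the text's Eq. (11). A sketch; the electron rest energy mₑc² = 0.511 MeV is supplied here, not taken from the text:

```python
# On-axis fundamental wavelength of an undulator:
#   lambda_1 = (lambda_u / 2 gamma^2)(1 + K^2 / 2)   [Eq. (11)]
# Parameters are those of the Fig. 2-5 example.
M_E_C2_GEV = 0.511e-3   # electron rest energy in GeV (assumed constant)

def fundamental_wavelength_m(lambda_u_m, K, E_GeV):
    gamma = E_GeV / M_E_C2_GEV
    return lambda_u_m / (2.0 * gamma**2) * (1.0 + K**2 / 2.0)

lam1 = fundamental_wavelength_m(0.035, 1.87, 1.3)   # ~7.4 nm (soft x-ray)
```

The 1/γ² factor is why a centimeter-period magnetic structure radiates at nanometer wavelengths: the γ² Doppler upshift compresses the period twice over.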
Additional rings of radiation of the same frequency also appear at larger angular distances. The angular structure of undulator radiation is illustrated in Fig. 2-5 for the limiting case of zero beam emittance. We are usually interested in the central cone. An approximate formula for the flux integrated over the central cone, in units of photons·s⁻¹·(0.1% bandwidth)⁻¹, is

Φ_n = 1.431 × 10¹⁴ N Q_n(K) I[A] .

The function Q_n(K) = (1 + K²/2)F_n/n is plotted in Fig. 2-6. Equation (13) can also be written as d²Φ_n/dθ dψ |_{θ=ψ=0} = Φ_n/2πσ_r′².

Fig. 2-5. The angular distribution of fundamental (n = 1) undulator radiation for the limiting case of zero beam emittance. The x and y axes correspond to the observation angles θ and ψ (in radians), respectively, and the z axis is the intensity in photons·s⁻¹·A⁻¹·(0.1 mr)⁻²·(1% bandwidth)⁻¹. The undulator parameters for this theoretical calculation were N = 14, K = 1.87, λ_u = 3.5 cm, and E = 1.3 GeV. (Figure courtesy of R. Tatchyn, Stanford University.)

Fig. 2-6. The function Q_n(K) for different values of n.

Away from the axis, there is also a change in wavelength: the factor (1 + K²/2) in Eq. (11) must be replaced by [1 + K²/2 + γ²(θ² + ψ²)]. Because of this wavelength shift with emission angle, the angle-integrated spectrum consists of peaks at λ_n superposed on a continuum. The peak-to-continuum ratio is large for K << 1, but the continuum increases with K, as one shifts from undulator to wiggler conditions. The total power radiated by an undulator or wiggler involves the impedance of free space, Z₀ = 377 ohms; in practical units,

P[kW] = 0.633 E²[GeV] B₀²[T] L[m] I[A] .

The angular distribution of the radiated power, in units of W·mr⁻², is proportional to G(K) f_K(γθ,γψ). The behavior of the angular function f_K(γθ,γψ), which is normalized as f_K(0,0) = 1, is shown in Fig. 2-7. The function G(K), shown in Fig. 2-8, quickly approaches unity as K increases from zero. Electrons in storage rings are distributed in a finite area of transverse phase space—position × angle. We introduce the rms beam sizes σ_x (horizontal) and σ_y (vertical), and beam divergences σ_x′ (horizontal) and σ_y′ (vertical).
The quantities ε_x = σ_xσ_x′ and ε_y = σ_yσ_y′ are known as the horizontal and vertical emittances, respectively. In general, owing to the finite emittances of real electron beams, the intensity of the radiation observed in the forward direction is less than that given by Eqs. (2a) and (13a). Finite emittances can be taken into account approximately by replacing these equations with versions in which the radiation widths are convolved with the electron-beam widths, for bends and undulators, respectively. For bending magnets, the electron beam divergence effect is usually negligible in the horizontal plane.

Fig. 2-7. The angular function f_K, for different values of the deflection parameter K, (a) as a function of the vertical observation angle ψ when the horizontal observation angle θ = 0 and (b) as a function of θ when ψ = 0.

Fig. 2-8. The function G(K).

For experiments that require a small angular divergence and a small irradiated area, the relevant figure of merit is the beam brightness B, which is the photon flux per unit phase space volume, often given in units of photons·s⁻¹·mr⁻²·mm⁻²·(0.1% bandwidth)⁻¹. For an undulator, an approximate formula for the peak brightness is

B_n = Φ_n/4π² Σ_x Σ_x′ Σ_y Σ_y′ ,

where, for example, Σ_x = (σ_x² + σ_r²)^{1/2} and Σ_x′ = (σ_x′² + σ_r′²)^{1/2}, and where the single-electron radiation from an axially extended source of finite wavelength is described by σ_r′ = (λ/2L)^{1/2} and σ_r = (2λL)^{1/2}/4π, so that σ_rσ_r′ = λ/4π. Brightness is shown in Fig. 2-9 for several sources of synchrotron radiation, as well as some conventional x-ray sources. That portion of the flux that is transversely coherent is given by the ratio of the diffraction-limited phase space to the actual phase space:

Φ_coh = Φ_n (λ/4π)²/(Σ_x Σ_x′ Σ_y Σ_y′) .

A substantial fraction of undulator flux is thus transversely coherent for a low-emittance beam satisfying ε_xε_y ≲ (λ/4π)². Longitudinal coherence is described in terms of a coherence length l_coh = λ²/2Δλ. For an undulator, the various harmonics have a natural spectral purity of Δλ/λ = 1/nN [see Eq. (12)]; thus, the coherence length is given by

l_coh = nNλ/2 ,  (27)

which corresponds to the relativistically contracted length of the undulator. Thus, undulator radiation from low-emittance electron beams [ε_xε_y ≲ (λ/4π)²] is transversely coherent and is longitudinally coherent within a distance described by Eq. (27).
In the case of finite beam emittance or finite angular acceptance, the longitudinal coherence is reduced because of the change in wavelength with emission angle. In this sense, undulator radiation is partially coherent. Transverse and longitudinal coherence can be enhanced when necessary by the use of spatial and spectral filtering (i.e., by use of apertures and monochromators, respectively). The references listed below provide more detail on the characteristics of synchrotron radiation. Fig. 2-9. Spectral brightness for several synchrotron radiation sources and conventional x-ray sources. The data for conventional x-ray tubes should be taken as rough estimates only, since brightness depends strongly on such parameters as operating voltage and take-off angle. The indicated two-order-of-magnitude ranges show the approximate variation that can be expected among stationary-anode tubes (lower end of range), rotating-anode tubes (middle), and rotating-anode tubes with microfocusing (upper end of range). 1. G. K. Green, “Spectra and Optics of Synchrotron Radiation,” in Proposal for National Synchrotron Light Source, Brookhaven National Laboratory, Upton, New York, BNL-50595 (1977). 2. H. Winick, “Properties of Synchrotron Radiation,” in H. Winick and S. Doniach, Eds., Synchrotron Radiation Research (Plenum, New York, 1979), p. 11. 3. S. Krinsky, “Undulators as Sources of Synchrotron Radiation,” IEEE Trans. Nucl. Sci. NS-30, 3078 (1983). 4. D. F. Alferov, Yu. Bashmakov, and E. G. Bessonov, “Undulator Radiation,” Sov. Phys. Tech. Phys. 18, 1336 (1974). 5. K.-J. Kim, “Angular Distribution of Undulator Power for an Arbitrary Deflection Parameter K,” Nucl. Instrum. Methods Phys. Res. A246, 67 (1986). 6. K.-J. Kim, “Brightness, Coherence, and Propagation Characteristics of Synchrotron Radiation,” Nucl. Instrum. Methods Phys. Res. A246, 71 (1986). 7. K.-J. Kim, “Characteristics of Synchrotron Radiation,” in Physics of Particle Accelerators, AIP Conf. Proc. 184 (Am. Inst. 
Phys., New York, 1989), p. 565. 8. D. Attwood, Soft X-Rays and Extreme Ultraviolet Radiation: Principles and Applications (Cambridge Univ. Press, Cambridge, 1999); see especially Chaps. 5 and 8.
<urn:uuid:4ace650b-f9a9-4a69-b1d3-dc8fbca78e59>
4.125
3,070
Academic Writing
Science & Tech.
58.562295
According to Real Climate, it's extremely unlikely for that to happen: Could there be a methane runaway feedback? The “runaway greenhouse effect” that planetary scientists and climatologists usually call by that name involves water vapor. A runaway greenhouse effect involving methane release (such as invoked here) is conceptually possible, but to get a spike of methane concentration in the air it would have to be released more quickly than the 10-year lifetime of methane in the atmosphere. Otherwise what you’re talking about is elevated methane concentrations, reflecting the increased source, plus the radiative forcing of that accumulating CO2. It wouldn’t be a methane runaway greenhouse effect, it would be more akin to any other carbon release as CO2 to the atmosphere. This sounds like semantics, but it puts the methane system into the context of the CO2 system, where it belongs and where we can scale it. So maybe by the end of the century in some reasonable scenario, perhaps 2000 Gton C could be released by human activity under some sort of business-as-usual scenario, and another 1000 Gton C could come from soil and methane hydrate release, as a worst case. We set up a model of the methane runaway greenhouse effect scenario, in which the methane hydrate inventory in the ocean responds to changing ocean temperature on some time scale, and the temperature responds to greenhouse gas concentrations in the air with another time scale (of about a millennium) (Archer and Buffett, 2005). If the hydrates released too much carbon, say two carbons from hydrates for every one carbon from fossil fuels, on a time scale that was too fast (say 1000 years instead of 10,000 years), the system could run away in the CO2 greenhouse mode described above. It wouldn’t matter too much if the carbon reached the atmosphere as methane or if it just oxidized to CO2 in the ocean and then partially degassed into the atmosphere a few centuries later.
The fact that the ice core records do not seem full of methane spikes due to high-latitude sources makes it seem like the real world is not as sensitive as we were able to set the model up to be. This is where my guess about a worst-case 1000 Gton from hydrates after 2000 Gton C from fossil fuels in the last paragraph comes from. On the other hand, the deep ocean could ultimately (after a thousand years or so) warm up by several degrees in a business-as-usual scenario, which would make it warmer than it has been in millions of years. Since it takes millions of years to grow the hydrates, they have had time to grow in response to Earth’s relative cold of the past 10 million years or so. Also, the climate forcing from CO2 release is stronger now than it was millions of years ago when CO2 levels were higher, because of the band saturation effect of CO2 as a greenhouse gas. In short, if there was ever a good time to provoke a hydrate meltdown it would be now. But “now” in a geological sense, over thousands of years in the future, not really “now” in a human sense. The methane hydrates in the ocean, in cahoots with permafrost peats (which never get enough respect), could be a significant multiplier of the long tail of the CO2, but will probably not be a huge player in climate change in the coming century.
<urn:uuid:c1cf679c-e4f0-4d25-96a4-c19981f8bf13>
3.625
715
Comment Section
Science & Tech.
38.017242
Did you know that Hook Grasses can control water loss by folding up their leaves? Contrary to their common name, Hook Grasses are not grasses but sedges, and they belong to the family Cyperaceae. Sedges are commonly found in wet or poorly drained habitats. Hook Grasses, however, can be found in a much greater diversity of habitats. In New Zealand, Hook Grasses can grow in coastal scrub, forests, swamps, grasslands or herbfields in sub-alpine and alpine habitats. Although Hook Grasses have colonised drier habitats, water is still important for their survival and they use a very clever system, operated by so-called bulliform cells, to regulate water loss. Bulliform cells are large, bubble-shaped cells found in the upper surface of the leaves. In Hook Grasses these cells are found all along the midrib. When water availability is low, these cells shrink, causing the leaf blade to fold. Each side of the leaf blade, at either side of the midrib, moves towards each other like closing a book. By folding their leaves these sedges reduce the area exposed to sunlight and therefore water loss by evaporation. This mechanism allows water to be maintained inside the plant. Once water is available again, these cells enlarge and the leaf blade unfolds again.
<urn:uuid:8f2159a5-e882-4c5c-bd5a-4266cda1bcea>
3.984375
274
Knowledge Article
Science & Tech.
46.821433
The equation for G given in Eqn. 13 for the sphere distribution of Eqn. 11 can also be used to approximate the correct value of G for the zeroth-order logarithmic distribution used here. The parameters a and b in Eqn. 11 are chosen to yield a sphere diameter distribution with the same modal diameter Dm and specific surface as the logarithmic distribution used here. The specific surface and the modal diameter are related to the parameters from the following equations: The corresponding values of the parameters are The corresponding equation for G is Note again that the value of the numerator differs very little from that in Eqn. 10. Since the distribution proposed by Attiogbe is based upon the zeroth-order logarithmic function used here, it is reasonable to expect that a corrected form for Eqn. 13 which is based on the zeroth-order logarithmic distribution would differ very little from the above result. It is this equation for G that is used in Tables 6–8. As in the case of monosized spheres, the Attiogbe equation does not appear to accurately estimate any reported statistic of the void-void spacing distribution. In the data shown in Table 7, the Attiogbe estimate tG is nearly an order of magnitude greater than both the 50th and the 95th percentiles. As the paste air fraction increases from 0.02 to 0.07, the value of tG only decreases by 10%, whereas the measured values decrease by 50%. Table 8 shows the performance of G in estimating the fraction of paste within either t or tG of an air void surface for lognormally distributed radii. Again, since the Lu and Torquato equation performs so well for the lognormally distributed spheres, it is treated as the true value. As in the case of monosized spheres, the parameter G does not provide a useful estimate of the paste volume fraction within either t or tG of an air void surface.
The value of G for the two greatest air contents, although correct, is unremarkable since a cursory analysis would predict that the volume fraction of paste within one half the mean free path should be nearly unity.
<urn:uuid:b60987c1-f0ec-4555-be24-6f7c1f15038a>
2.96875
491
Academic Writing
Science & Tech.
45.994467
[Source: National Space Science Data Center, http://nssdc.gsfc.nasa.gov/nmc/spacecraftDisplay.do?id=1999-067A] DMSP F15 (USA 147) was launched by a Titan rocket from Vandenberg AFB on December 12, 1999 into a 101-minute, sun-synchronous near-polar orbit at an altitude of 840 km and with the Local Time nodes of 21:10 and 9:10. The Defense Meteorological Satellite Program (DMSP) is a Department of Defense (DoD) program run by the Air Force Space and Missile Systems Center (SMC). The program designs, builds, launches, and maintains satellites monitoring the meteorological, oceanographic, and solar-terrestrial physics environments. Each DMSP satellite orbits about 840 km above the surface of the earth. The visible and infrared sensors (OLS) collect images across a 3000 km swath, providing global coverage twice per day. The combination of day/night and dawn/dusk satellites allows monitoring of global information such as clouds every 6 hours. The microwave imager (MI) and sounders (T1, T2) cover one half the width of the visible and infrared swath. These instruments cover polar regions at least twice and the equatorial region once per day. The space environment sensors (J4, M, IES) record along-track plasma densities, velocities, composition and drifts. The data from the DMSP satellites are received and used at operational centers continuously. The data are sent to the National Geophysical Data Center's Solar Terrestrial Physics Division (NGDC/STP) by the Air Force Weather Agency (AFWA) for creation of an archive.
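The quoted 101-minute period is consistent with Kepler's third law for a circular orbit at 840 km altitude. A quick sanity check (the Earth radius and gravitational parameter are standard reference values, not from the source):

```python
import math

# Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu).
MU_EARTH = 3.986004418e14  # m^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6.371e6          # m, mean Earth radius

def orbital_period_minutes(altitude_m):
    """Orbital period (minutes) for a circular orbit at the given altitude."""
    a = R_EARTH + altitude_m  # semi-major axis = Earth radius + altitude
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

period = orbital_period_minutes(840e3)
print(f"Period at 840 km altitude: {period:.1f} min")
```

The result lands within a couple of minutes of the quoted 101-minute figure, as expected for a near-circular orbit.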
<urn:uuid:25d4dc5b-e319-486e-ab92-31c37b82cccc>
3.015625
355
Knowledge Article
Science & Tech.
50.951083
Fisher Science Education, through a partnership with Beyond Benign, developed this classroom-ready slide presentation that briefly describes the 12 principles of green chemistry. The presentation first focuses on the role of chemists in society and introduces green chemistry as a way to solve environmental problems. Each principle is defined and presented in the context of a basic application. Summary prepared May 2008 by Julie Haack, University of Oregon. Fisher Science Education; Beyond Benign. What is Green Chemistry? http://www.fishersci.com/wps/downloads/segment/ScienceEducation/pdf/green_12PrinciplesGreenChem.pdf (accessed June 2011).
<urn:uuid:58228af0-e824-44d1-aaac-c341a6c94cc6>
3.71875
135
Content Listing
Science & Tech.
37.492652
Derived Types: PolygonB, PolygonN An abstract class that serves as a base to its derived types, representing a collection of one or more exterior and interior rings. The rings do not need to be connected to or contained by other rings in the polygon. However, all rings are considered to be part of a single polygon regardless of their location. Rings can be embedded in the interior of other rings. Embedded rings define interior boundaries or holes within the polygon. Exterior rings are oriented in a clockwise direction while interior rings are oriented counter-clockwise. In general, you will work with the normalized polygon type PolygonN, whether creating a polygon or working with a polygon returned from a service request. See the discussion on binary and normalized geometry for more information. When constructing a PolygonN with overlapping rings in the same direction, the server will simplify the polygon for you. If two rings overlap, the overlapping area will be considered an interior ring (hole). If three rings overlap, the overlapping area will be considered an exterior ring and so on.
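The clockwise/counter-clockwise convention can be checked with the signed-area (shoelace) formula. The helpers below are an illustrative sketch, not part of the service's API:

```python
def signed_area(ring):
    """Shoelace formula over a list of (x, y) vertices (closing vertex
    optional). In a y-up coordinate system the result is negative for a
    clockwise ring and positive for a counter-clockwise one."""
    area = 0.0
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]  # wrap around to close the ring
        area += x1 * y2 - x2 * y1
    return area / 2.0

def is_exterior(ring):
    """Exterior rings are clockwise under the convention described above."""
    return signed_area(ring) < 0

square_cw = [(0, 0), (0, 1), (1, 1), (1, 0)]   # clockwise: exterior ring
square_ccw = list(reversed(square_cw))         # counter-clockwise: hole
print(is_exterior(square_cw), is_exterior(square_ccw))
```

A normalization step like the server's could use such a test to decide which overlapping rings become holes.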
<urn:uuid:2a9b0100-8b02-4c5e-a1ee-2394f85f5eaa>
3.203125
228
Documentation
Software Dev.
34.993601
Optogenetics, a brand new field of research in which living, cortical neurons and other cells can be manipulated or controlled with optical technology (namely fiber optic cables), has been heralded as the next big thing for treating such things as heart conditions, paralysis, and even diabetes. Up until now, however, they've only been able to test this technique on rodents — but a recent breakthrough in which scientists were able to control monkeys' brains with light has shown that the concept also applies to humans — an important piece of insight that could lead to dramatic new treatments for cognitive disorders. The concept behind optogenetics is remarkable. The first step in the process is to deliver a gene into brain cells, and this is done by injecting an animal with a special virus. Once delivered, the gene produces a series of light-responsive proteins. These proteins are then either activated or disabled by fiber optic cables that are inserted into the brain. Scientists, including optogenetics pioneer Ed Boyden, have used the technique to control the behaviors of mice, but they weren't sure if it could work on primates — hence the recent initiative by Boyden, Annelles Gerits, Wim Vanduffel, and others to test the procedure on rhesus monkeys. Writing in Technology Review, Susan Young describes the experiment: The behavior studied in today's published report is quite subtle: two monkeys were trained to purposefully move their eyes to a target on a screen when given a cue. But when the relevant optogenetically ready modified neurons were stimulated by light from optical fibers inserted into their brains, the neuronal circuit responsible was sped up, and the monkeys were able to complete this task faster. "It's a simple task, but it is a cognitive task," says study senior author Wim Vanduffel, who splits his time between Harvard Medical School and the University of Leuven. 
"It's a stepping stone," he says, one that opens up new research into understanding brain function. "[Optogenetics] may also become useful in the far future for therapeutic purposes, because if you can activate or deactivate very specific cell types, you can actually target particularly circuitries that are important in different diseases with much more precision than is possible at this moment with drugs or [electrical] stimulation," says Vanduffel. "But there is still a very long way to go before it gets there." The scientists claim that, by virtue of their experiment, they were able to induce both behavioral and functional changes in monkeys. The breakthrough will help scientists to better understand advanced cognition and the various ways the tool could be used in a clinical setting. You can read the entire study at Current Biology.
<urn:uuid:468141bd-1cc8-4ece-a026-e2d012aea4ee>
3.40625
546
Truncated
Science & Tech.
28.854286
Please keep in mind that this is a research project, and there may sometimes be glitches with the interactive software. Please let us know of any problems you encounter, and include the computer operating system, the browser and version you're using, and what kind of connection you have (dial-up modem, T1, cable). When you listen to or play music, or better yet, dance to it, you are very aware of the rhythm. You might notice that every 3rd or 4th beat is played a little louder than the others, or that it is accented somehow. In some types of music, several different rhythms are performed at the same time. When that happens, it is called a polyrhythm. Polyrhythms are a very important feature of African music. In this problem you will be investigating polyrhythms, and then have a chance to build your own so you can investigate the mathematics. Play with the applet using the various rhythms. Experiment to find out what it means to play a 1:2 or a 1:3 rhythm. See how the different rhythms interact when you play them together. When a 1:2 and a 1:3 rhythm are played together, the polyrhythm is called a 2:3. The number of beats it takes a pattern to repeat itself is the phrase length. Explore how long it takes different patterns to repeat. - What is the ratio of the mystery polyrhythm? Explain how you know. - Create a polyrhythm of your own using two of the rhythms in the applet. What is the ratio of the polyrhythm and its phrase length? Explain how you know. - A complicated polyrhythm has the ratio 2:3:4:5:6:7. What is its phrase length? How do you know? Teacher Support Page
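For teachers checking answers: the phrase length of a polyrhythm is the least common multiple of the rhythm numbers, since that is the first beat on which every component pattern realigns. A short sketch (not part of the applet):

```python
from math import gcd
from functools import reduce

def phrase_length(*ratios):
    """Least common multiple of the rhythm numbers: the first count on
    which all component rhythms line up again."""
    return reduce(lambda a, b: a * b // gcd(a, b), ratios)

print(phrase_length(2, 3))             # a 2:3 polyrhythm repeats every 6 beats
print(phrase_length(2, 3, 4, 5, 6, 7)) # phrase length of the "complicated" ratio
```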
<urn:uuid:a2cad3f8-fd14-4334-9514-fda5c1606010>
3.90625
397
Tutorial
Science & Tech.
68.963202
Do cows pollute as much as cars? Agriculture is responsible for an estimated 14 percent of the world's greenhouse gases. A significant portion of these emissions come from methane, which, in terms of its contribution to global warming, is 23 times more powerful than carbon dioxide. The U.N. Food and Agriculture Organization says that agricultural methane output could increase by 60 percent by 2030 [Source: Times Online]. The world's 1.5 billion cows and billions of other grazing animals emit dozens of polluting gases, including lots of methane. Two-thirds of all ammonia comes from cows. Cows emit a massive amount of methane through belching, with a lesser amount through flatulence. Statistics vary regarding how much methane the average dairy cow expels. Some experts say 100 liters to 200 liters a day (or about 26 gallons to about 53 gallons), while others say it's up to 500 liters (about 132 gallons) a day. In any case, that's a lot of methane, an amount comparable to the pollution produced by a car in a day. To understand why cows produce methane, it's important to know a bit more about how they work. Cows, goats, sheep and several other animals belong to a class of animals called ruminants. Ruminants have four stomachs and digest their food in their stomachs instead of in their intestines, as humans do. Ruminants eat food, regurgitate it as cud and eat it again. The stomachs are filled with bacteria that aid in digestion, but also produce methane. With millions of ruminants in Britain, including 10 million cows, a strong push is underway to curb methane emissions there. Cows contribute 3 percent of Britain's overall greenhouse gas emissions and 25 to 30 percent of its methane. In New Zealand, where cattle and sheep farming are major industries, 34 percent of greenhouse gases come from livestock. A three-year study, begun in April 2007 by Welsh scientists, is examining if adding garlic to cow feed can reduce their methane production.
The study is ongoing, but early results indicate that garlic cuts cow flatulence in half by attacking methane-producing microbes living in cows' stomachs [Source: BBC News]. The researchers are also looking to see if the addition of garlic affects the quality of the meat or milk produced and even if the animals get bad breath. Another study at the University of Wales, Aberystwyth, is tracking quantities of methane and nitrogen produced by sheep, which provide a good comparison model for cows because they have similar digestive systems, but are less unruly. The sheep in the study are living in plastic tunnels where their methane production is monitored across a variety of diets. Many other efforts are underway to reduce ruminant methane production, such as attempting to breed cows that live longer and have better digestive systems. At the University of Hohenheim in Germany, scientists created a pill to trap gas in a cow's rumen -- its first stomach -- and convert the methane into glucose. However, the pill requires a strict diet and structured feeding times, something that may not lend itself well to grazing. In 2003, the government of New Zealand proposed a flatulence tax, which was not adopted because of public protest. Other efforts look at the grazing lands being used by livestock farmers, which will be discussed in the next section. So we know that ruminants are producing enormous quantities of methane, but why? Humans produce gases daily, sometimes to their embarrassment, but nowhere near the extent of these animals. On the next page, we'll learn more about the source of the methane problem and some of the controversy behind it.
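The article's figures can be turned into a rough CO2-equivalent estimate. The sketch below assumes the upper 500 L/day figure and the 23× warming factor quoted above; the molar volume and methane molar mass are standard reference values, not from the article:

```python
# Rough CO2-equivalent of one cow's daily methane output.
# Assumptions: 500 L/day (the article's upper estimate), ideal gas at STP.
MOLAR_VOLUME_L = 22.4    # liters per mole of an ideal gas at STP
CH4_MOLAR_MASS = 16.04   # grams per mole of methane
GWP_CH4 = 23             # warming potential relative to CO2, per the article

liters_per_day = 500
ch4_grams = liters_per_day / MOLAR_VOLUME_L * CH4_MOLAR_MASS  # mass of CH4
co2e_kg = ch4_grams * GWP_CH4 / 1000                          # CO2-equivalent

print(f"~{ch4_grams:.0f} g CH4/day, ~{co2e_kg:.1f} kg CO2-equivalent/day")
```

The result, several kilograms of CO2-equivalent per day, is indeed in the same ballpark as a typical passenger car's daily CO2 output, which supports the comparison in the article.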
<urn:uuid:e8ea9066-dc05-414c-8722-83a03d3bf0e8>
3.28125
748
Knowledge Article
Science & Tech.
46.385065
Computer Simulation Modeling of Intermediate Trophic Levels for Across Trophic Level System Simulations of the Everglades/Big Cypress Region Michael Gaines, George Dalrymple, and Donald L. DeAngelis This work has involved the modeling of intermediate trophic levels in the Everglades ecosystem as part of the Across Trophic Level System Simulation (ATLSS) package of models. The fish and herpetological communities play pivotal roles in the Everglades, acting as principal forage sources for such top consumers as wading birds and alligators. Fish modeling: The purpose of this work was to develop a model of the dynamics of a key functional group of fishes in the freshwater in the Everglades/Big Cypress ecosystem. The work performed here developed a predictive model for the numbers, size and age structures, and biomass per unit area of small fishes in any selected part of the freshwater marsh. A key factor included in the model is the seasonal cycle of water depth. This environmental seasonality is echoed in patterns of production of fish biomass, which, in turn, influences the phenology of other components of the food web, including wading birds. Human activities, such as drainage or other alterations of the hydrology, can influence these natural cycles and result in changes in the fish production and the higher trophic levels dependent on this production. In the model the seasonal pattern of fish production in the freshwater Everglades/Big Cypress region is simulated on 5-day time steps. The model illustrates the temporal pattern of production through the year, which can result in very high densities of fish at the end of a hydroperiod (period of flooding), as well as the importance of ponds and other deep depressions, both as refugia and sinks during dry periods.
The model indicates: (1) there is an effective threshold in the length of the hydroperiod that must be exceeded for high fish-population densities to be produced, (2) large, piscivorous fishes do not appear to have a major impact on smaller fishes in the marsh habitat, and (3) the recovery of the small-fish populations in the marsh following a major drought may require up to a year. The last of these results is relevant to assessing anthropogenic impacts on marsh production, as these effects may increase the severity and frequency of droughts. This small-fish model has been extended to cover the whole Everglades landscape, in 500 x 500 meter cells, by the University of Tennessee. This predicts the distribution of fish number and biomass density across the Everglades landscape on 5-day time steps and allows the calculation of a breeding potential index for wading birds, for which fish is a main dietary component. The model is currently providing output that is being included in the ATLSS "Model Outputs for the Central and Southern Florida Comprehensive Review Study" (or C&SF Restudy). In addition, this model will be integrated in 1998 into the wading bird breeding colony models that are nearing completion. Reptile and amphibian modeling: A food web "submodel" of the amphibians (frogs and salamanders) and reptiles (snakes and turtles) has been developed. The model has been parameterized for three ecosystem types within the Everglades/Big Cypress region, using data supplied by George Dalrymple. Linear programming methods have been used to ensure the consistency of these data in predicting biomass standing stocks and fluxes. The reptiles and amphibians of this submodel are important food sources of the alligators and wading birds. The purpose of the model is to predict the dynamics and production of these taxa across the landscape of the Everglades/Big Cypress under a variety of hydrologic scenarios. 
A primary reason for modeling them here is to predict the availability of biomass to higher trophic levels (particularly alligators). In addition, the model will be able to predict the expected year-to-year variability in these populations under realistic climatic conditions, and their responses to changes in system hydrology. This submodel will also be used to provide input data for an alligator model now in preparation. Associated field studies (alligator diets): To help link these intermediate trophic levels with a key top predator, a four-year study of the diet of the alligator, Alligator mississippiensis, was conducted in the Everglades. The first three years of this study led to the amassing of a data set on alligator diets in the central slough area of the southern Everglades. These data will be extremely valuable in attempting to predict the responses of the alligator population to changes in landscape hydrology. On the basis of observations of the investigator and other researchers, the condition of alligators in the slough (away from canals) is far worse than that of alligators living in canals. This has been hypothesized to be a result of more food available to alligators in the canal. Stomach contents of alligators from the canals will be sampled to determine diets through the year, and compared with the diets of alligators in the slough. During the three years of this study, the investigator has tagged approximately one thousand alligators. These alligators are currently being recaptured in another project and statistical analysis will be used to estimate both the current total population size and mortality rate of alligators in the central slough area. A significant part of the funding for this research was provided by the U.S. Department of the Interior South Florida Ecosystem Restoration Program "Critical Ecosystems Studies Initiative" (administered through the National Park Service) and from the U.S.
Geological Survey, Florida Caribbean Science Center. Additional funding for the "Across Trophic Level System Simulation" was also provided by the U.S. Environmental Protection Agency and the U.S. Army Corps of Engineers. Barr, B., 1997, Food habits of the American alligator, Alligator mississippiensis, in the southern Everglades: Ph.D. Dissertation, University of Miami, Coral Gables, Florida. DeAngelis, D.L., Loftus, W.F., Trexler, J.C., and Ulanowicz, R.E., 1997, Modeling fish dynamics and effects of stress in a hydrologically pulsed ecosystem: Journal of Aquatic Ecosystem Stress and Recovery, v. 1, p. 1-13. (This abstract was taken from the Proceedings of the South Florida Restoration Science Forum Open File Report) U.S. Department of the Interior, U.S. Geological Survey, Center for Coastal Geology This page is: http://sofia.usgs.gov/projects/atlss/compsimabsfrsf.html Comments and suggestions? Contact: Heather Henkel - Webmaster Last updated: 15 January, 2013 @ 12:43 PM (TJE)
<urn:uuid:5c7db02c-318a-4231-a4f9-78ede3e429cb>
2.75
1,451
Academic Writing
Science & Tech.
36.56972
Right now, Earth is passing through a swarm of particles shed by the asteroid 3200 Phaethon as it moves through its orbit in the solar system. As we encounter the stream, many of the particles get swept into our atmosphere and get vaporized as they pass through. We see that action as meteors flashing across the sky. They appear to come at us from the constellation Gemini, and so this swarm of meteors is called the Geminid Meteor Shower. Earth entered the stream on December 6th, so you should be able to see some meteors each night through about the 18th of the month. The peak of the Geminids is later this week, on December 13/14. According to a story released by the good folks at Sky & Telescope, the skies should be good and dark for the shower since we’ll be at new moon. If you have good viewing conditions, you can expect to see perhaps one or two meteors (shooting stars) a minute from 10 p.m. Thursday night until dawn on Friday the 14th. Meteor observing couldn’t be easier. Just find a good dark spot outside (and be sure to dress warmly —you could be out there a while) and find the constellation Gemini. Then, you wait for streaks of light to race across the sky, mostly radiating from Gemini — but they can appear anywhere. You will be able to see small flashes of light and if you’re lucky, maybe some bright ones will flare across the sky. As you see these meteors, notice the colors in their trails — particularly if you’re lucky enough to see a fairly good-sized flash. These colors come from the materials in the meteor as it gets vaporized by friction with Earth’s atmosphere. Most meteor flashes will look white or blue-white. One of the most interesting things about this shower is that it’s one of two showers caused by particles of rock from an asteroid. Most other meteor showers come from materials shed by comets as they round the Sun and Earth’s orbit intersects their paths. If you get a chance, check this one out. 
It’s likely to be one of the best meteor showers of the year, so let’s hope the weather is good for all of us to go meteor-hunting!
<urn:uuid:ffe1a97c-4e5e-462f-8a48-a8bfa0f6f83b>
3.09375
487
Personal Blog
Science & Tech.
69.189354
Find information on stone-eating bacteria and learn when it was first observed. Stone-eating bacteria belong to several species in the genus Thiobacillus. They can cause damage to monuments, tombs, buildings, and sculptures by converting marble into plaster. The principal danger seems to come from Thiobacillus thioparus. This microbe's metabolic system converts sulfur dioxide gas (found in the air) into sulfuric acid and uses it to transform calcium carbonate (marble) into calcium sulfate (plaster). The bacilli draw their nutrition from carbon dioxide formed in the transformation. Nitrobacter and Nitrosomonas are other "stone-eating bacteria" that use ammonia from the air to generate nitric and nitrous acid, and there are still other kinds of bacteria and fungi producing organic acids (formic, acetic, and oxalic acids), which can attack the stone as well. The presence of these microbes was first observed by a French scientist, Henri Pochon at Angkor Wat, Cambodia, during the 1950s. The increase of these bacteria and other biologically damaging organisms that threaten tombs and buildings of antiquity is due to the sharp climb in the level of free sulfur dioxide gas in the atmosphere from automotive and industrial emissions.
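The transformation described for Thiobacillus thioparus can be summarized by the familiar acid-carbonate reaction; this is a sketch of the net chemistry, not of the bacterial metabolic pathway:

```latex
% Marble attacked by biogenic sulfuric acid, yielding calcium sulfate
% (plaster) plus the CO2 that the bacilli then use as their carbon source:
\[
\mathrm{CaCO_3 + H_2SO_4 \longrightarrow CaSO_4 + CO_2 + H_2O}
\]
```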
<urn:uuid:3c9ef156-6bd9-4e60-ad36-75daff3c518d>
3.828125
264
Knowledge Article
Science & Tech.
26.827438
Authors: John Hunter Mach argued that, since the acceleration of a body can only be measured relative to other bodies, the inertia of masses is somehow due to the presence of distant matter in the universe. Here an alternative is put forward...that the acceleration (of one part of a body) can be measured relative to other parts. Inertia is thus considered to be due to the force needed to compress (or stretch) a body undergoing acceleration. Comments: 2 pages [v1] 12 Oct 2009
<urn:uuid:2ffba1d6-5f0d-4458-9d58-a73d39376e68>
3.21875
160
Truncated
Science & Tech.
47.418846
Magnetic Field 2
A uniform ion beam of radius R consists of N′ ions per meter, of mass m, speed v, and charge q. Find the Lorentz force (due to both E and B) on an ion at the edge of the beam. At what speed would the force become zero?
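Not an answer key, but a numeric sketch of the physics involved (the constants, function name, and sample values are my own): the beam's line charge density λ = N′q produces an outward field E = λ/(2πε0R) at the edge, while the beam current I = λv produces an azimuthal field B = μ0λv/(2πR) that pinches inward, so the net outward force is qE − qvB = qE(1 − v²/c²), which vanishes as v approaches the speed of light.

```python
import math

EPS0 = 8.8541878128e-12          # vacuum permittivity (F/m)
MU0 = 4e-7 * math.pi             # vacuum permeability (H/m)
C = 1.0 / math.sqrt(MU0 * EPS0)  # speed of light, implied by the two constants

def net_radial_force(q, ions_per_m, v, R):
    """Outward Lorentz force on an ion at the beam edge: qE minus the qvB pinch."""
    lam = ions_per_m * q                   # line charge density of the beam
    E = lam / (2 * math.pi * EPS0 * R)     # radial electric field at r = R
    B = MU0 * lam * v / (2 * math.pi * R)  # azimuthal magnetic field at r = R
    return q * (E - v * B)                 # equals qE * (1 - v**2 / C**2)

# Example beam (illustrative values): electric repulsion dominates at low speed.
q, n, R = 1.6e-19, 1.0e12, 0.01
f_slow = net_radial_force(q, n, 0.0, R)      # pure electric repulsion
f_half = net_radial_force(q, n, 0.5 * C, R)  # scaled by 1 - 0.25 = 0.75
```

At v = C the two terms cancel to within floating-point rounding, matching the textbook answer that the force becomes zero only at the speed of light.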
<urn:uuid:be32b9e0-7a0f-417d-af19-0b86ad608838>
2.890625
86
Q&A Forum
Science & Tech.
85.881729
Alleles: Alternative forms of the same gene region/locus.
Assignment test: A statistical approach to ascribing individuals to their most probable natal populations on the basis of multiple DNA markers (Manel et al. 2005).
FST: A classic measure of population genetic differentiation based on differences in frequencies of genetic polymorphisms (Neigel 2002). It varies between 0 (no differentiation) and 1 (completely different).
Genotypic arrays: Combinations of genotypes across multiple loci (Sunnucks 2000).
Landscape genetics: A discipline combining landscape ecology and population genetics (Manel et al. 2003).
Locus (plural loci): A defined DNA region that can be compared among samples.
Microsatellite: A class of highly resolving DNA locus often applied in molecular population biology (Selkoe and Toonen 2006).
Mitochondrial DNA (mtDNA): The DNA within the mitochondria found within cells, typically inherited through the maternal line in animals.
Parentage analysis: Attribution of offspring to parents based on genotype data (Marshall et al. 1998).
Polymorphism: A polymorphic locus has more than one allele.
<urn:uuid:812baf58-d31f-40df-bd0a-9d6d64e67336>
3.453125
254
Structured Data
Science & Tech.
20.21875
Cold fronts are well-known for their transitions from one air mass (warm and humid) to another (cold and dry). In addition to a drop in temperature and dew point with frontal passage, winds shift from southwest before the front to northwest after frontal passage, showers and thunderstorms (sometimes severe) accompany the front and atmospheric pressure falls as the front approaches and rises following frontal passage. When an arctic cold front plows into an unseasonably warm and humid air mass, the transition is often dramatic. Take a look at this image (Fig. 1) and you can see the spatial variations associated with the arctic front that is currently moving across the central U.S. Further north and west (where the air mass ahead of the front was not as warm and humid), and during nighttime hours, the transition was significant, but not dramatic. Consider Oklahoma City, OK where the front passed late on Friday night into early Saturday morning. The temperature just fell steadily with frontal passage; however, the dew point tumbled about 17 degrees in an hour (Fig. 2). At Memphis, TN, where the front passed through shortly after midnight Sunday, the pre-frontal air mass was much more tropical. Thus, the temperature and dew point tumbled 18 degrees (70 degrees to 52 degrees) from around 1:30 a.m. C.S.T. to 2:45 a.m. C.S.T. (Fig. 3). The biggest drop occurred from 1:31 a.m. C.S.T. to 1:39 a.m. C.S.T. when the mercury fell 9 degrees. That translates into a computed rate of minus 67.5 degrees per hour. Houston, TX saw a 16-degree tumble between 2:27am C.S.T. and 3:53 a.m. C.S.T. on Sunday morning (Fig. 4). During midday Sunday, Baton Rouge, LA (Fig. 5) saw the temperature drop from 75 degrees at 12:53 p.m. C.S.T. to 63 degrees a scant 29 minutes later. Between 2:22 p.m. C.S.T. and 2:24 p.m. C.S.T. the temperature dropped another 6 degrees. That secondary temperature tumble translates into a drop of 180 degrees per hour. 
There is no way that temperatures would fall like this for an entire hour (thank goodness!). I merely determined the computational hourly rates to highlight just how dramatic the temperature tumble really was. Dr. Ken Dewey, a meteorology faculty member at the University of Nebraska – Lincoln, provided another look at the front. Driving north from Austin (site of last week’s Annual American Meteorological Society conference), Dewey passed through the front and shared the following observation. “It was 71 degrees F last evening as we drove north of Dallas…at 75 mph it only took 3 exits to hit the snow squalls. I love these sharp temperature gradients.” I have to agree with Dewey. When frontal transitions are this dramatic, it makes explaining cold fronts a lot easier. © 2013 H. Michael Mogil
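The per-hour figures quoted above are simple extrapolations of short-lived drops; a tiny helper (the function name is my own) makes the arithmetic explicit:

```python
def hourly_rate(degrees_dropped, minutes):
    """Scale a temperature drop observed over a few minutes to a per-hour rate."""
    return degrees_dropped * 60.0 / minutes

memphis = hourly_rate(9, 8)      # 9 degrees from 1:31 to 1:39 a.m. -> 67.5 deg/hr
baton_rouge = hourly_rate(6, 2)  # 6 degrees in 2 minutes -> 180 deg/hr
```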
<urn:uuid:2686f212-4b8f-4b3d-ad96-49c133e0bf19>
3.125
658
Nonfiction Writing
Science & Tech.
79.850955
This is a great question because we hear the term very often during severe weather season. Strong thunderstorms frequently have strong downdrafts associated with them. A downdraft is basically a column of sinking air in a thunderstorm. When there is a particularly strong downdraft, we call it a downburst, or microburst. These microbursts contain significantly rain-cooled air and upon impact with the surface, the air moves outward in all directions. Think of it as a bomb falling from the air with the shock wave being wind. That shock wave is what we call straight line winds. Another reason they are called straight line winds is because the damage created by them is usually very distinguishable. It is clear that there is no rotational damage pattern. For instance, a whole line of trees could be knocked over all in the same direction over a quarter mile as a result of straight line winds, whereas in a tornado, all the trees would be uprooted and deposited in various locations. Straight line winds can blow up to 150 mph, and that's why it's important to stay away from windows during severe storms!
<urn:uuid:a79c0600-dc29-4394-8048-bf03e852724f>
3.609375
232
Knowledge Article
Science & Tech.
55.609261
Search our database of handpicked sites Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest. You searched for We found 16 results on physics.org and 241 results in our database of sites (241 are Websites, 0 are Videos, and 0 are Experiments) Search results on physics.org Search results from our links database Play golf with a twist using protons in magnetic fields. Start with simple fields, then move all the way up to golfing in Earth’s magnetic field. Magnetic fields are produced by electric currents, which can be macroscopic currents in wires, or microscopic currents associated with electrons in atomic orbits. The magnetic field B is defined in ... Shows magnetic field patterns for wire, single turn coil and solenoid. Index from Eric Weisstein's World of Physics leading to information on lots of topics related to magnetic fields. An introduction to magnetic fields with various diagrams and links through to related sections. A simple demonstration enabling us to visualize magnetic fields in 3D. An interactive practical demonstration of a magnetic force on a current carrying wire and the change caused when the magnetic field is introduced. EField allows users to simulate the motion of charged particles moving under the influence of electric fields. A static magnetic field in the z direction is also allowed. The secret lives of invisible magnetic fields are revealed as chaotic ever-changing geometries in this sci-art film. An article about the huge magnetic field that surrounds the Earth and stretches into space. Showing 1 - 10 of 241
<urn:uuid:bea29a3b-4521-4ddf-9815-8cb6ca8788cb>
3.3125
344
Content Listing
Science & Tech.
54.424181
Niranjan Barla’s Chemistry Questions On 19th Feb, 2010, Niranjan Barla sent these questions: Can u plz answer this ques?? 1. The wooden shelves on which conc. Sulphuric Acid bottles are kept are stained black. Why? 2. Dot diag. of ethyne. Conc. sulphuric acid has a very strong affinity for water, which is the reason behind its dehydrating property. It reacts with wood (which is largely cellulose) to dehydrate it (remove the molecules of water from the wood), leaving behind a layer of carbon, which causes the black stain on the wooden shelves on which it is kept. (C6H10O5)n → 6nC + 5nH2O Dot diagram of ethyne
<urn:uuid:4c14bec3-dde1-409d-ab8a-dbb5655cce82>
2.890625
184
Q&A Forum
Science & Tech.
76.504524
Icebergs form when chunks of ice calve, or break off, from glaciers, ice shelves, or a larger iceberg. Icebergs travel with ocean currents, sometimes smashing up against the shore or getting caught in shallow waters. Photo of the Day: Icebergs off the coast of St. Anthony
<urn:uuid:b18f2f62-0eaf-4da3-a020-702096d0324a>
2.96875
76
Knowledge Article
Science & Tech.
60.276513
Back to Deep-Water Corals A Lophelia pertusa colony Oil Effects on Reproduction of the Deep-Water Coral Lophelia pertusa on North Sea Oil Rigs Waller, Roberts & Gass The deep-sea coral Lophelia pertusa was first found growing on North Sea oil platforms in 1999, as the Brent Spar oil-storage buoy was decommissioned (Bell & Smith, 1999) and as the Beryl Alpha platform was surveyed (Pearce, 1999). This work showed corals were relatively abundant at depths beneath the seasonal thermocline in the northern North Sea (Roberts, 2002). The presence of deep-water corals within the NE Atlantic has been known for many years (Broch, 1922; Joubin, 1922; Dons, 1944; LeDanois, 1948; Zibrowius, 1980), but using new technologies, the true extent of the coral populations within European waters is only now being appreciated (Mortensen et al., 1995; Henriet et al., 1998; Mortensen, 2000). The colonisation of man-made structures by L. pertusa has been noted previously (Wilson, 1979), and so these oil rigs may be forming stepping stones in a larval supply route. L. pertusa is a cosmopolitan scleractinian, found from depths of 50 m in the Norwegian fjords (Hovland et al., 1998) to 3600 m on the Mid-Atlantic Ridge (Bett et al., 1997), and in most of the world's oceans. Deep-water reefs are now well recognised around the globe as important biomes for many fish and invertebrate species. It has also been well documented that corals are particularly sensitive to anthropogenic impacts. There have been numerous reports of cold-water corals being adversely affected by deep-water trawling across the globe (Probert et al., 1997; Koslow & Gowlett-Jones, 1998; Freese et al., 1999; Bett, 2001; Brooke, 2002; Hall-Spencer, 2003; Waller & Tyler, in press; Wheeler et al., in press), yet the impact of oil platforms on the ecology of these organisms is unknown.
Drill cuttings, produced water, drains, seabed engineering and sanitary waste would be the main inputs of poisonous chemicals into the benthic system close to oil production centres (Rogers, 1999). Drill cuttings are often laden with heavy metals and produced water may contain oil waste (Rogers, 1999), and benthic epifauna and infauna may also be smothered by these cuttings (Messieh et al., 1991). No studies have yet targeted oil pollution on deep-water corals, yet hydrocarbons have been shown to have detrimental effects on the reproduction of shallow-water species (Loya & Rinkevich, 1979; Guzman & Holst, 1993) and even on reproductive synchrony among colonies (Richmond, 1994). Growth may also be depressed (Birkeland et al., 1976) and even mass mortality may occur (Loya & Rinkevich, 1980; Brown, 1996). Oil pollution can remain for many years on shallow reef sites (Loya & Rinkevich, 1979; Brown, 1996) and so the effects can be long-term (Loya, 1976). Differing species, however, show different responses to oil pollution (Brown, 1996), and so to correctly assess the effects of oil rigs on coral colonies it is important to assess these colonies directly. Bell and Smith (1999) argue that the position of these apparently healthy colonies weighs against any detrimental effect caused by drilling. However, these corals may not, in fact, be subjected to any adverse conditions produced by the rigs, and may only colonise areas where the surrounding physical factors remove all pollutants (Roberts, 2000). This project is in collaboration with researchers Murray Roberts and Susan Gass of the Scottish Association for Marine Science (Dunstaffnage Marine Laboratory). We are investigating the reproduction of colonies of Lophelia pertusa from different locations on oil rigs in the North Sea.
This project mainly examines the effect of drilling fluids on these corals' reproductive processes by studying colonies that have been collected from both exposed and unexposed areas of the drilling platforms. These samples have been collected during routine platform inspections by remotely operated vehicles (ROVs), and we gratefully acknowledge the support of the members of the Atlantic Frontier Environmental Network and Subsea7. There are four main objectives to be achieved by this project: - To examine the gametogenesis and reproductive periodicity of Lophelia pertusa from the North Sea. - To examine whether reproductive and anatomical differences occur between Lophelia pertusa from areas likely to be exposed to drilling discharges, and colonies in unexposed or less exposed areas. - To use the data acquired to assess the general ‘health’ of Lophelia pertusa colonies growing on oil rigs in the North Sea.
<urn:uuid:0b4af4be-56cd-4388-9223-9953c0226d4f>
3.25
1,031
Academic Writing
Science & Tech.
38.472195
These images illustrate the typical spatial resolution used in state-of-the-art climate models around the times of each of the four IPCC Assessment Reports. Around the time of the First Assessment Report (FAR) in 1990, many climate models used a grid with cells of about 500 km (311 miles) on a side (upper left image). By the time of the Second Assessment Report (SAR) in 1996, resolution had improved by a factor of two, producing grid cells 250 km (155 miles) on a side. Models referenced in the Third Assessment Report (TAR) in 2001 generally had reduced grid cell sizes to about 180 km (112 miles), while Fourth Assessment Report (AR4) models typically used a 110 km (68 mile) wide grid cell, further improving resolution. Vertical resolution is not depicted in these images, but has also improved over the years. Typical FAR models had a single-layer "slab ocean" and 10 atmospheric layers; AR4 models often include 30 layers in the oceans and another 30 in the atmosphere. Notice how elements of topography, such as the Alps, are shown in much greater detail in higher-resolution models. This allows such models to begin to make reasonable forecasts of regional climate in the future, a currently emerging capability. Image courtesy of the IPCC (AR4 WG 1 Chapter 1 page 113 Fig. 1.4)
<urn:uuid:cad19808-daf0-4a0c-b7a7-18d1c7cbd2ab>
3.453125
648
Content Listing
Science & Tech.
50.276486
20.1.2 Console I/O - kbhit() Return true if a keypress is waiting to be read. - getch() Read a keypress and return the resulting character. Nothing is echoed to the console. This call will block if a keypress is not already available, but will not wait for Enter to be pressed. If the pressed key was a special function key, this will return '\xe0'; the next call will return the keycode. The Control-C keypress cannot be read with this function. - getche() Similar to getch(), but the keypress will be echoed if it represents a printable character. - putch(char) Print the character char to the console without buffering. - ungetch(char) Cause the character char to be ``pushed back'' into the console buffer; it will be the next character read by getch() or getche(). See About this document... for information on suggesting changes.
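A short sketch of how these calls combine into a non-blocking read (the helper name and the ImportError fallback are my additions; the functions themselves live in the Windows-only msvcrt module this section documents):

```python
def read_key_nonblocking():
    """Return one keypress, or None if no key is waiting (or not on Windows)."""
    try:
        import msvcrt  # Windows-only console module providing kbhit()/getch()
    except ImportError:
        return None    # not on Windows: behave as if no key is waiting
    if not msvcrt.kbhit():
        return None                      # nothing buffered; getch() would block
    ch = msvcrt.getch()
    # A special function key yields '\xe0' first (bytes b'\xe0' on Python 3);
    # the next getch() call returns the actual keycode.
    if ch in ('\xe0', b'\xe0'):
        ch += msvcrt.getch()
    return ch
```

Checking kbhit() before getch() is what keeps the read non-blocking.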
<urn:uuid:c4d11d74-ba00-4c72-94bf-9f68b0119397>
3.421875
219
Documentation
Software Dev.
71.427324
When Michael Lefsky of Colorado State University released a first-of-a-kind map showing the height of the world’s forests in summer 2010, he made clear it was a first draft that would be refined in the future. Sure enough, a second map of global forest canopy height appeared in the pages of the Journal of Geophysical Research about a year later. A team led by Marc Simard of NASA’s Jet Propulsion Laboratory developed the newer map. It shows that, in general, forest canopy heights are highest near the equator and decrease the closer forests are to the poles. The tallest forests, shown in dark green in the map above, tower higher than 40 meters (130 feet) and are found in a band in the tropics that includes the rainforests of the Amazon, central Africa, and Indonesia. One exception: the temperate rainforests in eastern Australia, where stands of eucalyptus, one of the world's tallest flowering plants, reach similar heights. The map shows that temperate conifer forests in the Pacific Northwest—full of Douglas fir, western hemlock, redwood, and sequoia—are home to exceptionally tall trees that grow fairly far from the equator as well. In contrast, boreal forests in Canada, northern Europe, and Russia (comprised mainly of spruce, fir, pine, and larch) tend to have canopy heights less than 20 meters (66 feet). The map produced by Simard and his colleagues has a higher spatial resolution than the first map, so it depicts the world's forests in finer detail. For example, várzea, a type of forest that is frequently flooded by fresh water and tends to be shorter than the surrounding rainforest, is clearly visible between the Amazon and Japurá rivers as a lighter shade of green. 
Both maps are based on data from the Geoscience Laser Altimeter System (GLAS) on the ICESat satellite and the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra and Aqua satellites, but the new map incorporates additional elevation data from the Shuttle Radar Topography Mission (SRTM) and climatology information from both the Tropical Rainfall Measuring Mission (TRMM) and the Worldclim database. Simard’s team took a different approach to merging data from GLAS and MODIS, and they also validated their findings with a network of nearly 70 Fluxnet ground sites around the world. As a result, the forest heights in the newer map are generally taller (particularly in the tropics and in boreal forests) and shorter in mountainous regions. Other versions of Simard's map are available here. - Simard, M. (2011, November). Mapping Forest Canopy Height Globally with Spaceborne Lidar. Journal of Geophysical Research. (116) G04021. - Lefsky, M. (2010, August 5). A Global Forest Canopy Height Map from the Moderate Resolution Imaging Spectroradiometer and the Geoscience Laser Altimeter System. Geophysical Research Letters. (37) L15401. - NASA Jet Propulsion Laboratory. (2011). NASA Map Sees Earth’s Trees in New Light. Accessed April 12, 2012. - ICESat - GLAS
<urn:uuid:7073335f-764b-4d1f-a1fb-9556ca745293>
4
692
Knowledge Article
Science & Tech.
47.254761
You probably have a local "Mystery Spot" in your area, and it undoubtedly features a gravity hill — that is, a place where either water or solid objects seem to move uphill against the force of gravity. But do these uncanny spots really defy the laws of physics? Most publicized gravity hills require a few bucks to get into, but in the San Francisco Bay Area you can see one for free. In Golden Gate Park, as you go down John F. Kennedy Drive — just past the Shoreline Highway that crosses the park and just before you get to Lloyd Lake, which sports some fairly ill-advised fake classical ruins — you'll see a stream running just along the north side of the road. Keep moving along, and you will notice, just before the stream empties into the lake, it goes uphill. I have walked this way again and again for years, and I have never been able to convince myself that the stream isn't flowing up. Many people all around the globe have found themselves happening upon physics dilemmas similar to this. Often they're on sloped roads in their cars, and when the car goes into neutral, they find themselves rolling up the slope of the hill. Occasionally, they drop something round, and watch it skitter up a hill. The phenomenon has sparked a lot of ghost stories, from spectral children to Civil War soldiers trying to roll cannons into position. Since the illusion is most noticed on roads near hills, those with a scientific bent nickname it the Magnetic Hill. People think that the hill behind has to have some magnetic property that is influencing their cars. What's really influencing their cars is good, old-fashioned glitches in human perception. Remember the joke where a grumpy old-timer tells kids that, in his day, they had to walk to school through the snow, uphill, both ways? That's not just a joke. People do tend to see inclines as tilted up more readily than they see them as tilted down.
Researchers at the University of Padova and the University of Pavia found that it's relatively easy to make people see something as uphill, and hard to make them see it as downhill. They created a fake landscape out of angled boards. When the middle board was angled slightly downhill, and the boards around it were angled steeply downhill, most people they asked saw the middle board as going uphill. However, when they reversed the experiment, and showed people a slightly uphill stretch between two steeply uphill boards, it was seen as level, and not downhill. When the experimenters went on to test a horizontal board between two downhill stretches it was seen as uphill. When they tested a downhill segment between two uphill boards, it looked level. In other words, there wasn't anything they could do to make that middle section look downhill to anyone. And these were just boards. Gravity hills make use of the entire landscape. First, most gravity hills are in places where a straight horizon is obscured in every direction. There isn't any horizon line to help people get their bearings. Usually the hill in question is a road cut into two uphill segments, but there are hills that are surrounded by downhill slopes. Generally, there are also trees and no buildings in the area. Buildings tend to stay perpendicular to the ground. Most of the time, trees do as well. When it comes to a choice between light and perpendicularity, though, the trees choose light every time. When they lean so that they're perpendicular to a downhill slope, it looks straight. When everything, the contrasted slope of the landscape, the angle of the trees, and the inability to check anything against a horizon line, comes together on a slightly downhill slope, it will look uphill to whatever hapless chump happens to be wandering along it. Whenever someone checks with a level, or a GPS system the fun is over. Downhill is downhill. 
This illusion, though, is one that generally doesn't fade when you know the trick. So, if you have any mystery spots in your area, do let us know where they are, so we can roll up them.
<urn:uuid:a1fa773f-073e-4998-a998-d8bd433b13fe>
2.703125
831
Nonfiction Writing
Science & Tech.
57.471319
FileStream Constructor (String, FileMode) Assembly: mscorlib (in mscorlib.dll)
ArgumentException: path is an empty string (""), contains only white space, or contains one or more invalid characters.
NotSupportedException: path refers to a non-file device, such as "con:", "com1:", "lpt1:", etc. in an NTFS environment.
ArgumentException: path refers to a non-file device, such as "con:", "com1:", "lpt1:", etc. in a non-NTFS environment.
ArgumentNullException: path is null.
SecurityException: The caller does not have the required permission.
FileNotFoundException: The file cannot be found, such as when mode is FileMode.Truncate or FileMode.Open, and the file specified by path does not exist. The file must already exist in these modes.
IOException: An I/O error occurs, such as specifying FileMode.CreateNew when the file specified by path already exists, or the stream has been closed.
DirectoryNotFoundException: The specified path is invalid, such as being on an unmapped drive.
PathTooLongException: The specified path, file name, or both exceed the system-defined maximum length. For example, on Windows-based platforms, paths must be less than 248 characters, and file names must be less than 260 characters.
ArgumentOutOfRangeException: mode contains an invalid value.
Silverlight for Windows Phone: This member has a SecurityCriticalAttribute attribute on Silverlight for Windows Phone, because the attribute was present in Silverlight 3. This attribute restricts this member to internal use. Application code that uses this member throws a MethodAccessException. For a list of the operating systems and browsers that are supported by Silverlight, see Supported Operating Systems and Browsers.
<urn:uuid:44789907-113a-46e8-85d7-1cd01d65744b>
3.109375
357
Documentation
Software Dev.
49.664695
Contains information about a single instance of a Binding. Assembly: PresentationFramework (in PresentationFramework.dll) The Binding class is the high-level class for the declaration of a binding. The BindingExpression class is the underlying object that maintains the connection between the binding source and the binding target. A Binding contains all the information that can be shared across several objects. A BindingExpression is an instance expression that cannot be shared and that contains all the instance information about the Binding. For example, consider the following, where myDataObject is an instance of the MyData class, myBinding is the source Binding object, and the MyData class is a defined class that contains a string property named MyDataProperty. This example binds the text content of mytext, which is an instance of TextBlock, to MyDataProperty. You can use the same myBinding object to create other bindings. For example, you might use the myBinding object to bind the text content of a check box to MyDataProperty. In that scenario, there will be two instances of BindingExpression that share the myBinding object. This example shows how to obtain the binding object from a data-bound target property. You can do the following to get the Binding object: You must specify the dependency property for the binding you want because it is possible that more than one property of the target object is using data binding. Alternatively, you can get the BindingExpression and then get the value of the ParentBinding property. For the complete example see Binding Validation Sample. If your binding is a MultiBinding, use BindingOperations.GetMultiBinding. If it is a PriorityBinding, use BindingOperations.GetPriorityBinding. If you are uncertain whether the target property is bound using a Binding, a MultiBinding, or a PriorityBinding, you can use BindingOperations.GetBindingBase.
Windows 7, Windows Vista, Windows XP SP2, Windows Server 2008 R2, Windows Server 2008, Windows Server 2003 The .NET Framework and .NET Compact Framework do not support all versions of every platform. For a list of the supported versions, see .NET Framework System Requirements.
<urn:uuid:9adc0aef-bc48-4272-9c65-3998ebfdb800>
2.84375
451
Documentation
Software Dev.
41.070973
If gettype() is used on a variable before it has been initialized, it gives NULL.
User Contributed Notes - Types - [4 notes]
shahnaz khan ¶ 8 years ago
arjini at gmail dot com ¶ 7 years ago
Note that you can chain type casting:
var_dump((string)(int)false); //string(1) "0"
Trizor of www.freedom-uplink.org ¶ 8 years ago
The difference between float and double dates back to a FORTRAN standard. In FORTRAN variables aren't as loosely typed as in PHP and you had to define variable types (OH NOES!). FLOAT or REAL*4 (for all you VAX people out there) defined the variable as a standard precision floating point, with 4 bytes of memory allocated to it. DOUBLE PRECISION or REAL*8 (again for the VAX) was identical to FLOAT or REAL*4, but with an 8 byte allocation of memory instead of a 4 byte allocation. In fact most modern variable types date back to FORTRAN, except a string was called a CHARACTER*N and you had to specify the length, or CHARACTER*(*) for a variable length string. Boolean was LOGICAL, and there weren't yet objects, and there was support for complex numbers (a+bi). Of course, most people reading this are web programmers and could care less about the mathematical background of programming. NOTE: Object support was added to FORTRAN in the FORTRAN90 spec, and expanded with the FORTRAN94 spec, but by then C was the powerful force on the block, and most people who still use FORTRAN use FORTRAN77.
4 years ago
The Object (compound) Type
Like every programming language, PHP offers the usual basic primitive types which can hold only one piece of data at a time (scalar). I am particularly fond of the "object" type (compound) because that allows me to group many basic PHP types together, and I can name it anything I want.
class Person {
    public $firstName;         // a PHP String
    public $middleName;        // a PHP String
    public $lastName;          // a PHP String
    public $age;               // a PHP Integer
    public $hasDriversLicense; // a PHP Boolean
}
Here, I have grouped several basic PHP types together, (3) Strings, (1) Integer, and (1) Boolean...
then I named that group "Person". Since I used the proper syntax to do so, this code is pure PHP, which means that if you run this code, you would have an extra PHP "type" available to you in your scripts, like so:
$myAge = 16; // a PHP Integer - always available
$yourAge = 15.5; // a PHP Float - always available
$hasHair = true; // a PHP Boolean - always available
$greeting = "Hello World!"; // a PHP String - always available
$person = new Person(); // a PHP Person - available NOW!
You can make your own object types and have PHP execute it as if it were part of the PHP language itself. See more on classes and objects in this manual at: http://www.php.net/manual/en/language.oop5.php
<urn:uuid:ab815cd2-e613-4520-b1d0-f22eb59382b6>
2.890625
713
Comment Section
Software Dev.
61.309512
SEND + MORE = MONEY, Part 2 August 3, 2012 In the previous exercise we looked at two slow solutions to the SEND + MORE = MONEY cryptarithm. In today’s exercise we look at two more solutions. Our third solution uses a hill-climbing algorithm. The basic idea is to start with a random solution, score it, then alter it, score the modified solution, keep it if it has a better score than the original, and repeat until the desired solution is found. For the cryptarithm problem, the alteration can be done by swapping the values assigned to two letters chosen randomly, and scoring can be done by computing the difference between SEND + MORE and MONEY; the solution is found when the difference is zero. The problem with hill-climbing is that it can get stuck at a local optimum with no hope of achieving a global optimum. Consider the correct solution to the SEND + MORE = MONEY problem; we give the solution in a list, with O=0, M=1, and so on, and no letter assigned to 3 or 4: (o m y _ _ e n d r s). It is possible (it happened to me when I was writing the program) for hill-climbing to reach the solution (o m y _ e n _ d r s) with a score of 1. It takes two swaps to find the correct solution, but there is only one possible improvement in the score, from 1 to 0, so if a random hill-climb ever reaches the incorrect solution shown above, it will loop forever without reaching the correct solution. Thus, our fourth solution is a variant of hill-climbing that adds additional randomization: a modified solution is always accepted if it has a better score than the original, and it is also accepted sometimes even if it has a worse score than the original, say about once in a hundred times. That way, if the hill-climbing reaches a local optimum, it has a way to “jump” to a different hill and continue to the global optimum. The straight hill-climbing algorithm is fast when it works, taking half a second or less (depending on the randomization). 
The variant hill-climbing algorithm always works, and is equally fast. Your task is to write the two cryptarithm algorithms given above. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
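The randomized variant described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the exercise's suggested solution; the function names, the ten-slot representation with two unused digits, and the once-in-a-hundred acceptance rate are my own choices.

```python
import random

WORDS = ("send", "more", "money")

def value(word, assign):
    """Numeric value of a word under a letter -> digit assignment."""
    n = 0
    for ch in word:
        n = n * 10 + assign[ch]
    return n

def score(assign):
    """|SEND + MORE - MONEY|; zero means the cryptarithm is solved."""
    return abs(value("send", assign) + value("more", assign)
               - value("money", assign))

def solve(seed=0, p_worse=0.01, max_steps=100_000):
    """Randomized hill-climbing: swap two digits, keep improvements,
    and accept a worse state about once in a hundred tries so the
    search can jump off a local optimum."""
    rng = random.Random(seed)
    letters = sorted(set("".join(WORDS)))      # 8 distinct letters
    perm = rng.sample(range(10), 10)
    assign = dict(zip(letters, perm))          # letters take 8 of the digits
    assign["_1"], assign["_2"] = perm[8:]      # 2 spare digits stay swappable
    pool = letters + ["_1", "_2"]
    cur = score(assign)
    for _ in range(max_steps):
        if cur == 0 and assign["s"] != 0 and assign["m"] != 0:
            return assign                      # solved, no leading zeros
        a, b = rng.sample(pool, 2)
        assign[a], assign[b] = assign[b], assign[a]
        s = score(assign)
        if s <= cur or rng.random() < p_worse:
            cur = s                            # keep the swap
        else:
            assign[a], assign[b] = assign[b], assign[a]   # revert it
    return None
```

Dropping `p_worse` to zero gives the straight hill-climber, which is faster when it works but can loop forever at a local optimum such as the `(o m y _ e n _ d r s)` state described above.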
<urn:uuid:bfa1e634-e169-4015-b145-657df01cd79a>
2.703125
537
Tutorial
Software Dev.
50.709286
One of the most frightening and destructive phenomena of nature is a severe earthquake and its terrible aftereffects. An earthquake is a sudden movement of the Earth, caused by the abrupt release of strain that has accumulated over a long time. For hundreds of millions of years, the forces of plate tectonics have shaped the Earth as the huge plates that form the Earth's surface slowly move over, under, and past each other. Sometimes the movement is gradual. At other times, the plates are locked together, unable to release the accumulating energy. When the accumulated energy grows strong enough, the plates break free. If the earthquake occurs in a populated area, it may cause many deaths and injuries and extensive property damage. Today we are challenging the assumption that earthquakes must present an uncontrollable and unpredictable hazard to life and property. Scientists have begun to estimate the locations and likelihoods of future damaging earthquakes. Sites of greatest hazard are being identified, and definite progress is being made in designing structures that will withstand the effects of earthquakes. Many buildings in Charleston, South Carolina, were damaged or destroyed by the large earthquake that occurred August 31, 1886.
<urn:uuid:432c4961-d354-4ca9-921e-45fa557e8ff2>
3.921875
267
Knowledge Article
Science & Tech.
29.405
Albert Einstein's two theories of relativity were the first successful revisions of Newtonian mechanics—a mechanics so simple and intuitive that it was held to be a permanent fixture of physics. Uniting the theories is the idea that two observers traveling relative to each other may have different perceptions of time and space, yet the laws of nature are still uniform, and certain properties always remain invariant. Einstein developed the first theory, the theory of Special Relativity (1905), to explain and extend certain consequences of Maxwell's equations describing electromagnetism, in particular, addressing a puzzle surrounding the speed of light in a vacuum, which was predicted always to be the same, whether the light source is stationary or moving. Special Relativity considers the laws of nature from the point of view of frames of reference upon which no forces are acting, and describes the way time, distance, mass, and energy must be perceived by observers who are in uniform motion relative to each other if the speed of light must always turn out the same for all observers. Two implications of Special Relativity are space and time dilation. As speed increases, space is compressed in the direction of the motion, and time slows down. A famous example is the space traveler who returns to Earth younger than his Earth-dwelling twin, his biological processes proceeding more slowly due to his relative speed. These effects are very small at the speeds we normally experience but become significant at speeds approaching the speed of light (known as relativistic speeds). Perhaps the best-known implication of Special Relativity is the equation E = mc², which expresses a close relation between energy and mass.
The speed of light is a large number (about 300,000 km per second, or 186,000 mi per second), so the equation suggests that even small amounts of mass can be converted into enormous amounts of energy, a fact exploited by atomic power and weaponry. Einstein's General Theory of Relativity extended his Special Theory to include non-inertial reference frames, frames acted on by forces and undergoing acceleration, as in cases involving gravity. The General Theory revolutionized the way gravity, too, was understood. Since Einstein, gravity is seen as a curvature in space-time itself.
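The twin example and the mass-energy relation above are easy to check numerically. The following is my own illustrative Python sketch, not part of the article; the function names are invented for clarity.

```python
C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v):
    """Gamma = 1 / sqrt(1 - v^2/c^2); grows without bound as v -> c."""
    return 1.0 / (1.0 - (v / C) ** 2) ** 0.5

def earth_time(proper_time, v):
    """Time elapsed for the stay-at-home twin while the traveler,
    moving at speed v, experiences proper_time."""
    return proper_time * lorentz_factor(v)

def rest_energy(mass_kg):
    """E = m c^2, in joules."""
    return mass_kg * C ** 2

# A traveler who ages 10 years at 0.8 c returns to an Earth that is
# about 16.7 years older, and a single gram of mass corresponds to
# roughly 9e13 joules.
print(earth_time(10.0, 0.8 * C))
print(rest_energy(0.001))
```

At everyday speeds the Lorentz factor is indistinguishable from 1, which is why these effects go unnoticed outside of particle accelerators and precision clocks.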
<urn:uuid:13f56899-cb2f-4e94-9e9f-5e8e06a8b08e>
3.96875
462
Knowledge Article
Science & Tech.
25.969825
We hear a lot these days about energy. The common paradigm is that energy equals oil or vice versa, oil equals energy. Is that true? According to people like Thomas Bearden and John Bedini, that is simply not true. Thomas Bearden, for instance, has the saying that “there is enough energy inside the space of an empty teacup to boil all the oceans of the world. The fact is well known in the scientific community and was, for example, a favorite quote of Nobel prize-winning physicist Richard Feynman.” For these men, energy is not a matter of oil or wind but a more available and concentrated form that seems to be all around us all the time. Thomas Bearden is famous for saying that we are surrounded by an extremely active, non-visible environment called the vacuum. In this vacuum, virtual particles are popping in and out at extremely rapid rates. These particles are in the unseen world of the vacuum. This energy is available for our use, and scientists have shown how to harness it. John Bedini is a self-educated practitioner in the free energy movement. He has designed systems that run by themselves without common exterior power sources. One of his devices, a small motor, ran for almost 7 years all by itself, under these conditions. He produces these machines and even shows you how to produce them. His company even sells kits that you can buy that demonstrate how these machines work. These men stand on the shoulders of people like Nathan Stubblefield, who developed a wireless telephone that used natural conduction through earth and water. He simply had this system plugged into the ground and it was working great. It used Earth batteries and loops that plug into the ground at the sending station and the receiving station. The power was never switched off, and it worked both day and night. That was back in 1892. Who can forget Nikola Tesla? He has been designated the inventor of radio. It seems Marconi used some of Tesla's work.
Nikola Tesla designed the generator network inside Niagara Falls. His wireless transmission of energy would've solved a lot of our problems of today if his projects hadn't been killed by men like J.P. Morgan. Tesla was way ahead of his time and he knew it. Most of his inventions were patented, while some didn't see the light of day. He was afraid that people who didn't understand his work would take advantage of it and use his ideas to harm others.
<urn:uuid:89482933-e3ec-4e6d-b8ee-9e2f1b439c66>
2.875
509
Personal Blog
Science & Tech.
57.844682
Standard Temperature and Pressure. ALWAYS use liters, degrees Kelvin and pressure in torr for these calculations. Gas volumes are compared at a Standard Temperature and Pressure of 273.15 degrees Kelvin and 760 torr. At STP, one mole of any gas should have a volume of 22.4 liters. Here is an example of determining volume at Standard Temperature and Pressure and the number of moles of gas in a given volume: 2 liters of a gas at 546.3 degrees Kelvin and a pressure of 380 torr will have what volume and contain how many moles at Standard Temperature and Pressure? (Your answer should be 0.5 liters and 0.022321 moles.) The default setting is for 5 significant figures but you can change that by inputting another number in the box above. Answers are displayed in scientific notation and, for easier readability, numbers between .001 and 1,000 will be displayed in standard format (with the same number of significant figures). The answers should display properly but there are a few browsers that will show no output whatsoever. If so, enter a zero in the box above. This eliminates all formatting but it is better than seeing no output at all. Copyright © 1999 - 1728 Software Systems
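The worked example follows directly from the combined gas law, V_STP = V × (P / 760) × (273.15 / T). A short Python sketch of the same arithmetic (function names are my own):

```python
def volume_at_stp(v_liters, t_kelvin, p_torr):
    """Combined gas law: reduce a measured gas volume to STP
    (273.15 degrees Kelvin and 760 torr). Volume in liters."""
    return v_liters * (p_torr / 760.0) * (273.15 / t_kelvin)

def moles_at_stp(v_stp_liters):
    """At STP, one mole of any (ideal) gas occupies about 22.4 liters."""
    return v_stp_liters / 22.4

# The example from the text: 2 liters at 546.3 K and 380 torr.
v = volume_at_stp(2.0, 546.3, 380.0)
n = moles_at_stp(v)
print(v, n)  # 0.5 liters and about 0.022321 moles
```

Halving the pressure doubles the volume and halving the absolute temperature halves it, which is why this particular example works out to exactly 0.5 liters.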
<urn:uuid:1234d669-e443-4a7a-8c7a-5047557c9cec>
3.359375
288
Tutorial
Science & Tech.
57.304112
Science subject and location tags Articles, documents and multimedia from ABC Science Monday, 21 May 2012 Massive extraction of groundwater can resolve a puzzle over a rise in sea levels in past decades, according to scientists in Japan. Friday, 4 May 2012 Scientists have used sound waves to determine the composition of the Earth's inner mantle, possibly solving the mystery of our planet's missing silicon. Thursday, 19 April 2012 A protein produced by bone cells could help in the development of better treatments for osteoporosis. Wednesday, 18 April 2012 Japanese researchers have successfully grown hair on hairless mice by implanting follicles created from stem cells. Monday, 16 April 2012 A new study of quasars has provided further evidence that dark energy is accelerating the expansion of the universe. Tuesday, 3 April 2012 Japanese honeybees attack their enemies by using their brains to process and respond to the threat. Wednesday, 29 February 2012 Hermit crabs see their shells as an extension of their bodies and adapt their movements accordingly. Monday, 6 February 2012 Elderly adults who regularly drink green tea may stay more agile and independent than their peers over time. Friday, 27 January 2012 Jumping spiders use green light to gauge the distance of their jumps, a Japanese study has found. Friday, 11 November 2011 Our brain's primary visual cortex probably focuses our attention rather than recognising what we see, a new study has found. Tuesday, 27 September 2011 Scientists have unravelled how miracle fruit can make sour foods taste sweet. Monday, 12 September 2011 The race to discover gravity waves may be getting closer to the finish line with scientists successfully squeezing light using the physics of quantum mechanics. Thursday, 8 September 2011 How your foot strikes the ground reveals your identity almost as well as a fingerprint.
Friday, 26 August 2011 Samples from an asteroid provide astronomers with a better understanding of the evolution of our solar system. Friday, 8 July 2011 The death throes of giant stars that have gone supernova could be generating the veil of dust that permeates young galaxies.
<urn:uuid:7f10099a-c0af-47ae-8300-d027c48b35c3>
2.703125
436
Content Listing
Science & Tech.
43.01631
Boulder star coral (Montastraea annularis) Classified as Endangered (EN) on the IUCN Red List (1) and listed on Appendix II of CITES (2). Prior to 1994, the wide variability exhibited in the appearance of Montastraea annularis was attributed to the different environmental conditions in which it occurs (3). However, scientists have since discovered that it actually comprises a species complex that can be divided into three distinct species: the type specimen, M. annularis, together with two newly described species, M. faveolata and M. franksi (1) (3) (4). Like other colony-forming corals, colonies of M. annularis are composed of numerous small polyps, which are soft-bodied animals, related to anemones. Each polyp bears numerous tentacles that direct food into a central mouth, where it is digested in a sac-like body cavity. One of the most remarkable and ecologically important features of corals is that the polyps secrete a hard skeleton, called a ‘corallite’, which over successive generations contributes to the formation of a coral reef. The coral skeleton forms the bulk of the colony, with the living polyp tissue comprising only a thin veneer (4). In M. annularis, the colonies are formed by long, thick columns, with only the top parts supporting living tissue. The colour of the living colonies is usually golden brown to tan, but sometimes appears grey or green (3). This common species occurs in the Caribbean, the Gulf of Mexico, Florida, the Bahamas, and Bermuda (1). Montastraea annularis is found at shallow and intermediate depths, from 1 to 20 metres (3). Like many coral species, M. annularis is zooxanthellate, which means that its tissues contain large numbers of single-celled algae called zooxanthellae. The coral and the algae have a symbiotic relationship, in which the algae gain a stable environment within the coral's tissues, while the coral receives nutrients produced by the algae through photosynthesis. 
By harnessing the sun's energy in this way, corals are able to grow rapidly and form vast reef structures, but are constrained to live near the water surface (4). While, on average, zooxanthellate coral can obtain around 70 percent of its nutrient requirements from zooxanthellae photosynthesis, the coral may also feed on zooplankton (5). Around one third of the world’s reef-building corals are threatened with extinction (6). The principal threat to corals is the rise in sea temperature associated with global climate change. This leads to coral bleaching, where the symbiotic algae are expelled, leaving the corals weak and vulnerable to an increasing variety of harmful diseases. Climate change is also expected to increase ocean acidification and result in a greater frequency of extreme weather events such as destructive storms. This is not to mention the localised threats to coral reefs from pollution, destructive fishing practices, invasive species, human development, and other activities (1) (6). In addition to being listed on Appendix II of the Convention on International Trade in Endangered Species (CITES), which makes it an offence to trade M. annularis without a permit (2), this coral falls within several Marine Protected Areas across its range. To specifically conserve M. annularis, recommendations have been made for a raft of studies into various aspects of its taxonomy, biology and ecology, including an assessment of threats and potential recovery techniques (1). For further information on the conservation of coral reefs see: This information is awaiting authentication by a species expert, and will be updated as soon as possible. If you are able to help please contact: - Algae: simple plants that lack roots, stems and leaves but contain the green pigment chlorophyll. Most occur in marine and freshwater habitats. 
- Colony: relating to corals: corals composed of numerous genetically identical individuals (also referred to as zooids or polyps), which are produced by budding and remain physiologically connected.
- Fertilisation: the fusion of gametes (male and female reproductive cells) to produce an embryo, which grows into a new individual.
- Gametes: reproductive cells which carry the genetic information from their parent, and are capable of fusing with gametes of the opposite sex to produce a fertilised egg. In animals, male gametes are called sperm and female gametes are called ova.
- Hermaphroditic: possessing both male and female sex organs.
- Photosynthesis: metabolic process characteristic of plants in which carbon dioxide is broken down, using energy from sunlight absorbed by the green pigment chlorophyll. Organic compounds are made and oxygen is given off as a by-product.
- Polyps: typically sedentary soft-bodied components of Cnidaria (corals, sea pens, etc.), which comprise a trunk that is fixed at the base; the mouth is placed at the opposite end of the trunk, and is surrounded by tentacles.
- Symbiotic: describing a relationship in which two organisms form a close association; the term is now usually used only for associations that benefit both organisms (a mutualism).
- Zooplankton: tiny aquatic animals that drift with currents or swim weakly in water.
IUCN Red List (April, 2010) CITES (April, 2010)
- Weil, E. and Knowlton, N. (1994) A multi-character analysis of the Caribbean coral Montastraea annularis (Ellis and Solander, 1786) and its two sibling species, M. faveolata (Ellis and Solander, 1786) and M. franksi (Gregory, 1895). Bulletin of Marine Science, 55: 151-175.
- Veron, J.E.N. (2000) Corals of the World. Australian Institute of Marine Science, Townsville, Australia.
- Barnes, R.S.K., Calow, P., Olive, P.J.W., Golding, D.W. and Spicer, J.I. (2001) The Invertebrates: A Synthesis, 3rd Edition. Blackwell Science, Oxford.
- Carpenter, K.E. et al.
(2008) One-third of reef-building corals face elevated extinction risk from climate change and local impacts. Science, 321: 560-563.
<urn:uuid:02bc429d-b5f5-4b51-a5b2-436590e08245>
4.1875
1,365
Knowledge Article
Science & Tech.
40.797097
(p. D2) . . . most of the horror-movie scenarios are looking less and less plausible. Climate change will probably occur not with a bang but with a long, slow whimper, as you can see in the new report from the Intergovernmental Panel on Climate Change. The report concludes that it's ''very likely'' that humans are now the main factor warming the climate. But even as the panel's scientists are becoming surer of the problem, and warning of grim consequences this century and beyond, they're eschewing crowd-thrilling catastrophes. Since the last I.P.C.C. report, six years ago, they haven't raised the estimates of future temperatures and sea levels. While Mr. Gore's movie shows coastlines flooded by a 20-foot rise in sea level, the report's projections for the rise this century range from 7 inches to 23 inches. The panel says Greenland's ice sheet will shrink and might eventually disappear, but the process could take ''millennia.'' The Antarctic ice sheet is projected to grow, not shrink, because of increased snowfall. The scientists acknowledge uncertainties and worrisome new signs, like the sudden acceleration in the flow of Greenland's glaciers several years ago. But the panel, unlike Mr. Gore, didn't extrapolate a short-term trend into a disaster, and its caution is vindicated by a report in the current issue of Science that the flow of two of the largest glaciers abruptly decelerated last year to near the old rate. The panel does consider it ''likely'' that future typhoons and hurricanes will be stronger than today's. But it also expects fewer of these storms (albeit with ''less confidence'' in that projection). As for the Gulf Stream, it is ''very unlikely'' to undergo ''a large abrupt transition during the 21st century,'' according to the new report. 
The current is expected to slow slightly, meaning a little less heat from the tropics would reach the North Atlantic, which could be good news for Europe and North America, since that would temper some of the impact of global warming in the north. Whatever happens, you can stop fretting about the Gulf Stream scenario in Mr. Gore's movie and that full-fledged Hollywood disaster film ''The Day After Tomorrow.'' Mr. Gore's companion book has a fold-out diagram of the Gulf Stream and warns that ''some scientists are now seriously worried'' about it shutting down and sending Europe into an ice age, but he must have been talking to the wrong scientists. There wouldn't be glaciers in the English shires even if the Gulf Stream did shut down. To understand why, you need to disregard not only the horror movies but also what you learned in grade school: that the Gulf Stream is responsible for keeping London so much warmer than New York even though England is farther north than Newfoundland. This theory, originated by a 19th-century oceanographer, is ''the earth-science equivalent of an urban legend,'' in the words of Richard Seager, a climate modeler at the Lamont-Doherty Earth Observatory of Columbia University. He and other researchers have calculated that the Gulf Stream's influence typically raises land temperatures in the north by only five degrees Fahrenheit, hardly enough to explain England's mild winters, much less its lack of glaciers. Moreover, as the Gulf Stream meanders northward, it delivers just about as much heat to the eastern United States and Canada as to Europe, so it can't account for the difference between New York and London. Dr. Seager gives the credit to the prevailing westerly winds -- and the Rocky Mountains. When these winds out of the west hit the Rockies, they're diverted south, bringing air from the Arctic down on New York (as in last week's cold spell). 
After their southern detour, the westerlies swing back north, carrying subtropical heat toward London. This Rocky Mountain detour accounts for about half the difference between New York and London weather, according to Dr. Seager. The other half is caused by the simple fact that London sits on the east side of an ocean -- just like Seattle, which has a much milder climate than Siberia, the parallel land across the Pacific. Since ocean water doesn't cool as quickly as land in winter, or heat up as much in summer, the westerly winds blowing over the ocean moderate the winter and summer temperatures in both Seattle and London.
<urn:uuid:54997467-1e60-4c61-82b5-9d8da22f15e0>
2.765625
906
Nonfiction Writing
Science & Tech.
56.510165
There may be something interesting coming out in the climate front over the next few weeks from CERN. Years ago, a researcher named Henrik Svensmark developed a hypothesis that cosmic rays can seed cloud formation, and thus when there are more cosmic rays, there may be more clouds. This is interesting because it may act as a sort of solar amplification. Changes in the sun's output through varying solar cycles are measurable, but seem to some scientists to be too small to drive substantial temperature changes on Earth. But a more active sun tends to blow cosmic rays away from the Earth, thus reducing their incidence. Therefore, if a more active sun reduced cooling clouds, and a less active sun increased cooling clouds, this might explain a larger effect for the sun. I have avoided discussing Svensmark much, since the evidence seemed thin, though several labs recently have confirmed his hypothesis, at least in the laboratory. But Svensmark is definitely a topic among some climate skeptics. The reason is that higher solar activity levels in the second half of the twentieth century coincided with much of the 20th century warming that is blamed on manmade CO2. Svensmark's theory, if true, might force scientists to apportion more of the historic warming to natural causes, thus reducing the estimated sensitivity of the climate to man-made CO2. But apparently the CERN lab has been undertaking a substantial study to confirm or deny Svensmark's hypothesis. The results have not been released, but skeptics are beginning to anticipate that CERN's work has confirmed the hypothesis of cosmic ray cloud seeding. Why? Because of the dog that did not bark, or rather was told not to bark. CERN Director General Rolf-Dieter Heuer told Welt Online that the scientists should refrain from drawing conclusions from the latest experiment. “I have asked the colleagues to present the results clearly, but not to interpret them,” reports veteran science editor Nigel Calder on his blog. Why? 
Because, Heuer says, “That would go immediately into the highly political arena of the climate change debate. One has to make clear that cosmic radiation is only one of many parameters.” Skeptics are suggesting that had CERN disproved Svensmark, and thus protected the hypothesis that CO2 is driving most current warming, they would not have hesitated to draw exactly this conclusion in public. Only a finding considered more consistent with the skeptical position would cause them to go silent, trying to avoid the taint from the politically correct intelligentsia that would come from even partially confirming a skeptic talking point. I have to agree that Heuer's comments seem to telegraph the result. I have read a ton of global warming related studies. And every single one I have read that has ever published negative results vis a vis the hypothesis of catastrophic manmade global warming has felt obligated to put in a sentence at the end that says something like "but of course this does not in any way disprove the hypothesis of anthropogenic global warming and we fully support that hypothesis despite these results." The absolute fear of becoming an outcast for coming up with the "wrong" result is palpable in reading these papers, sort of like the very careful language a report in Soviet Russia might have used to even mildly criticize some aspect of the state. Of course, no such disclaimer can be found with narrow positive results - these are always immediately extrapolated (in fact over-extrapolated in press releases) to be the final nail in the coffin proving once and for all that man is changing the climate in dire ways.
<urn:uuid:5d850629-c34d-417d-875e-7372473d9544>
3.546875
732
Personal Blog
Science & Tech.
36.441063
The outer atmosphere is changing and scientists have yet to find out why. The highest clouds hide a mystery that is as eerie as the evidence is enthralling. First-ever pictures of the clouds shot by NASA reveal bizarre phenomena. The images captured by NASA's AIM probe show clouds that shine in the night. This is a new phenomenon. The clouds look like thin stretches of iridescent cotton wool. They hover around 50 miles above the ground. The twilight clouds are increasing in frequency. Also, they're getting more and more stretched out, scientists say. They are puzzled as to why it is that the clouds "alter rapidly, hour by hour and day by day", the BBC reports. "These are things we don't understand and they all suggest a possible connection to global change; and we need to understand that connection and what it means for the whole atmosphere," Dr James Russell told BBC News. He heads up the Aeronomy of Ice in the Mesosphere (AIM) mission. The clouds are big chunks of ice. They are generated only in cold temperatures and consist of water vapour and small dust particles. Dr Gary Thomas, a colleague of Dr Russell, said that the AIM team is studying massive rings in the clouds which, despite their big size, are extremely variable. "In just a few minutes, these holes are gone and others can appear. And some of these rings are huge - 300-400 miles across," said Thomas. The scientists are studying the permanent footage from their probe to see what it is that triggers the change. The prime suspect is global warming. The scientists say that the clouds' behaviour is altering due to a vital change in the climate's basic make-up. It is a change that has occurred only in the last few years. The NASA spacecraft was launched only in April this year, so the research is relatively young. But the scientists say they're on their way to discovering the main reasons. One big clue they revealed is that temperature is the main factor in the clouds' formation.
AIM images seem to indicate that the clouds hover around the Arctic for about five days before they dissolve. The scientists refer to this as a 'rotation in longitude'. This movement is reflected in the temperature data. "The interesting thing is that the magnitude of the temperature changes is only about five degrees Fahrenheit (3C)," said Dr Scott Bailey, AIM's deputy principal investigator from Virginia Polytechnic Institute and State University. "So, a very small change in temperature leads to a dramatic change in cloud behaviour. We conclude from that that these clouds are a very sensitive measure of temperature change."
<urn:uuid:b61c99e2-f9e5-4efe-9f90-02bbfbe11f36>
3.71875
543
Truncated
Science & Tech.
55.275881
Many marine ecologists think that the biggest single threat to marine ecosystems today is overfishing. Our appetite for fish is exceeding the oceans' ecological limits with devastating impacts on marine ecosystems. Scientists are warning that overfishing results in profound changes in our oceans, perhaps changing them forever. Not to mention our dinner plates, which in future may only feature fish and chips as a rare and expensive delicacy. The fish don't stand a chance More often than not, the fishing industry is given access to fish stocks before the impact of their fishing can be assessed, and regulation of the fishing industry is, in any case, woefully inadequate. The reality of modern fishing is that the industry is dominated by fishing vessels that far out-match nature's ability to replenish fish. Giant ships using state-of-the-art fish-finding sonar can pinpoint schools of fish quickly and accurately. The ships are fitted out like giant floating factories - containing fish processing and packing plants, huge freezing systems, and powerful engines to drag enormous fishing gear through the ocean. Put simply: the fish don't stand a chance. Ocean life health check Populations of top predators, a key indicator of ecosystem health, are disappearing at a frightening rate, and 90 percent of the large fish that many of us love to eat, such as tuna, swordfish, marlin, cod, halibut, skate, and flounder - have been fished out since large scale industrial fishing began in the 1950s. The depletion of these top predator species can cause a shift in entire ocean ecosystems where commercially valuable fish are replaced by smaller, plankton-feeding fish. This century may even see bumper crops of jellyfish replacing the fish consumed by humans. These changes endanger the structure and functioning of marine ecosystems, and hence threaten the livelihoods of those dependent on the oceans, both now and in the future.
The over-exploitation and mismanagement of fisheries has already led to some spectacular fisheries collapses. The cod fishery off Newfoundland, Canada collapsed in 1992, leading to the loss of some 40,000 jobs in the industry. The cod stocks in the North Sea and Baltic Sea are now heading the same way and are close to complete collapse. Instead of trying to find a long-term solution to these problems, the fishing industry's eyes are turning towards the Pacific - but this is not the answer. Politicians continue to ignore the advice of scientists about how these fisheries should be managed and the need to fish these threatened species in a sustainable way.
<urn:uuid:5eb317fe-d4c5-477f-8d6a-405a0c981b07>
3.453125
517
Knowledge Article
Science & Tech.
35.548982
|May12-12, 08:15 PM||#1|
I was just thinking about electromagnetic fields and magnetostatics. Magnetostatics is the study of magnetic fields where they do not change, or change very little. I was wondering if a static magnetic field is still considered part of an electromagnetic field? Since the magnetic field is not changing, there should be no electric field. However, a thought did occur to me: even though the magnetic field may be steady, in reality it can never be in a perfect steady state, because there are still small changes in the currents and even in the material which created it. Even if the changes are over the lifetime of the universe, that is still considered a change, very small but still a change. So from this I assume that there is no such thing as a magnetic field without an electric field. It is just that in, say, a bar magnet which does not seem to change, the electric component of the electromagnetic field is very small because there is very little change in the magnetic field. Is that right? That there is and can never be a magnetic field without an electric field. It seems to make perfect sense now, especially since I read and hear they are both the same thing and create each other, but now I think I actually understand it since I started trying to model it in my head. The problem I used to have was how a magnetic field from an electromagnet could somehow become an electromagnetic field, and the answer is it never did; it was always part of it because it always had an electric component.
|May13-12, 12:09 AM||#2|
The electric and magnetic forces are part of the combined electromagnetic force. I would say that saying the "EM Field" is perfectly fine overall.
If you want to discuss a particular aspect of the field, such as the magnetic field of a magnet, you can simply say "magnetic field". One thing to remember is that an electric field looks like a magnetic field if you are moving by it or it is changing, and the magnetic field looks like an electric field in the same situation. It simply depends on your frame of reference. They are both part of the electromagnetic field, with each part seen under different conditions.
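The frame-dependence described in this reply can be made concrete with the standard field-transformation formulas for a boost along the x axis. Here is a small illustrative Python sketch (my own addition to the thread, SI units); it shows that a field that is purely electric in one frame has a magnetic component in another.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def boost_fields(E, B, v):
    """Lorentz-transform fields (E in V/m, B in tesla, as (x, y, z)
    tuples) into a frame moving with speed v along the x axis.
    Returns (E', B')."""
    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    # Components along the boost are unchanged; transverse ones mix.
    E_prime = (Ex, g * (Ey - v * Bz), g * (Ez + v * By))
    B_prime = (Bx, g * (By + v * Ez / C**2), g * (Bz - v * Ey / C**2))
    return E_prime, B_prime

# A purely electric field in the lab acquires a magnetic component
# for an observer moving past it at half the speed of light.
E2, B2 = boost_fields((0.0, 1000.0, 0.0), (0.0, 0.0, 0.0), 0.5 * C)
print(B2)  # nonzero z-component: the static split is frame-dependent
```

The same function run on a purely magnetic field would produce an electric component, which is the symmetric half of the point made above.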
<urn:uuid:237a2366-c6fc-449f-8feb-f1466ca2787c>
2.859375
607
Comment Section
Science & Tech.
47.06192
The following is from: http://www.ornl.gov/ORNLReview/rev28_2/text/bio.htm Top: Growing forest accumulates carbon until it achieves, over time, a balance between the carbon taken up in photosynthesis and the carbon released back to the atmosphere from respiration, oxidation of dead organic matter, and fires and pests. In the meantime, fossil fuels are used to meet society's energy needs. Bottom: In productive forests, trees can be harvested for use in producing heat or power. Although harvesting may result in less carbon stored in standing biomass and forest soils, biomass fuels replace some of the fossil fuel that would otherwise be burned. The carbon in that fossil fuel remains stored in the ground rather than being released to the atmosphere. In both scenarios there are some energy needs for gathering the resource and converting it into useful energy, but, as the arrows on the transportation system suggest here, these are generally comparatively small. Arrows provide a qualitative indication of the magnitude and direction of carbon flows.
<urn:uuid:16b02005-ead3-401a-8ff5-bb716d784d65>
3.734375
209
Knowledge Article
Science & Tech.
37.3
Astronauts challenge the patriotism of Global Warming Alarmists by planting flags The paper, ‘Rocks Can Fly’ is a cogently-argued scientific refutation of the basic equations used by flight theorists. Apparently, rocket scientists may have incorrectly assumed the forces acting on rockets all along. The study questions the numeric bedrock of the theory of flight by applying data collected by NASA decades ago. It seems during the Apollo Moon landings era NASA devised a whole new set of hitherto unreported equations, more reliable than those relied upon by supporters of the theory of flight, to get Neil Armstrong's carbon boot prints safely planted on that airless Sea of Tranquility. The paper is co-authored by Martin Hertzberg, PhD, Consultant in Science and Technology, Alan Siddons, a former radiochemist and Hans Schreuder, a retired analytical chemist. The researchers had the bright idea of delving back into NASA’s archives to test the "Newton law of Gravitation" equations in fine detail. The three men stumbled on the apparent flaws during an online debate on the science behind the theory of flight. Published online on May 24, 2010, the study argues that the flaw has always lain in Newton's equations. The long-trusted formula has been used by rocket scientists without question - until now. The researchers report that the numbers used in those equations are the “first assumption that rocket science makes when predicting the flight of a rocket.” NASA Abandoned Flawed Gravity Calculations in 1960’s To theory of flight sceptic scientists it seems self-evident that rockets should not be treated like a point mass. It is more properly a complex spinning structure with large variability in air resistance and thrust. But, despite the U.S. government knowing since the 1960's that the graviational equations were of no use to real-world science, these facts don't appear to have been passed on to rocket scientists. 
Rocket Flight Paths Cast Doubt on Gravitational Theory NASA had found that the flight path of rockets was different than expected because rockets are propelled with thrust rather than only being affected by gravity - an empirical fact that challenges the theory of flight. Computer models supporting flight theory had predicted that rockets would fall out of the sky. In fact, the Apollo data proves that rockets can fly in paths not predicted by the gravitational equations because the rockets also have thrust. Thus the success of NASA's moon landings becomes evidence of the unreliability of gravitation equations in real world science. Newton Law Of Gravitation Calculations Way Out The paper tells us how far out Newton's Law Of Gravitation equation could be: "the path of the real rocket is completely different than that predicted by the force of gravity alone. The rocket flies while Newton's Law of Gravitation says it should just fall to the ground!" But it isn't just NASA rockets that defy the gravitational predictions. Rockets belonging to other space agencies don't conform either. As the paper tells us, "The rockets of every country in the world also fly higher than predicted." The three scientists pointedly ask, "Is it any surprise, then, that even a relatively simple body like the biplane would refuse to conform to such a method?" Other scientists have also come out to refute the theory of flight. Some even go as far as to say the theory actually contravenes the established laws of physics. NASA Rockets Do Not Fly "Unusually" High The paper concludes that NASA rockets do not fly "unusually" high. It is the application of the predictive Newtonian gravitation equation that is faulty and overly simplistic and should not be applied in a real-world context. The proven ability of common substances (e.g. paper, when folded) to glide in defiance of gravity makes all such gravitational estimates questionable. Are Gravitational Equations Mere Junk Science? 
Some may be, if this analysis of NASA’s Apollo numbers is correct. Newton's Law of Gravitation failed to give NASA the crucial information it required on rocket flight paths. Thus, NASA scientists had to create their own model to chart the flight path of the rockets astronauts took to the moon. Along with the Flightgate revelations, these new findings contradict the International Civil Aviation Organization (ICAO), which has placed enormous reliance on predictions based on research around flight theory that has now been called into question. Even some International Civil Aviation Organization members have denounced the theory....can't remember which ones.
<urn:uuid:f03f27aa-d5a5-47f9-bd6c-df790dd0f643>
3.453125
943
Personal Blog
Science & Tech.
43.768415
An extremophile (from Latin extremus meaning "extreme" and Greek philiā (φιλία) meaning "love") is an organism that thrives in physically or geochemically extreme conditions that are detrimental to most life on Earth. In contrast, organisms that live in more moderate environments may be termed mesophiles or neutrophiles. In the 1980s and 1990s, biologists found that microbial life has an amazing flexibility for surviving in extreme environments — niches that are extraordinarily hot, or acidic, for example — that would be completely inhospitable to complex organisms. Some scientists even concluded that life may have begun on Earth in hydrothermal vents far under the ocean's surface. According to astrophysicist Dr. Steinn Sigurdsson, "There are viable bacterial spores that have been found that are 40 million years old on Earth — and we know they're very hardened to radiation." On 6 February 2013, scientists reported that bacteria were found living in the cold and dark in a lake buried a half-mile deep under the ice in Antarctica. On 17 March 2013, researchers reported data that suggested microbial life forms thrive in the Mariana Trench, the deepest spot on the Earth. Other researchers reported related studies that microbes thrive inside rocks up to 1900 feet below the sea floor under 8500 feet of ocean off the coast of the northwestern United States. According to one of the researchers,"You can find microbes everywhere — they're extremely adaptable to conditions, and survive wherever they are." Most known extremophiles are microbes. The domain Archaea contains renowned examples, but extremophiles are present in numerous and diverse genetic lineages of bacteria and archaeans. Furthermore, it is erroneous to use the term extremophile to encompass all archaeans, as some are mesophilic. 
Neither are all extremophiles unicellular; protostome animals found in similar environments include the Pompeii worm, the psychrophilic Grylloblattidae (insects), Antarctic krill (a crustacean) and tardigrades (water bears). There are many classes of extremophiles that range all around the globe, each corresponding to the way its environmental niche differs from mesophilic conditions. These classifications are not exclusive. Many extremophiles fall under multiple categories and are termed polyextremophiles. For example, organisms living inside hot rocks deep under Earth's surface are thermophilic and barophilic, such as Thermococcus barophilus. A polyextremophile living at the summit of a mountain in the Atacama Desert might be a radioresistant xerophile, a psychrophile, and an oligotroph. Polyextremophiles are well known for their ability to tolerate both high and low pH levels.
- Acidophile: an organism with optimal growth at pH levels of 3 or below
- Alkaliphile: an organism with optimal growth at pH levels of 9 or above
- Anaerobe: an organism that does not require oxygen for growth, such as Spinoloricus Cinzia. Two sub-types exist: facultative anaerobes and obligate anaerobes. A facultative anaerobe can tolerate both anaerobic and aerobic conditions; an obligate anaerobe, however, would die in the presence of even trace levels of oxygen.
- Cryptoendolith: an organism that lives in microscopic spaces within rocks, such as pores between aggregate grains; these may also be called endoliths, a term that also includes organisms populating fissures, aquifers, and faults filled with groundwater in the deep subsurface. 
- Hyperthermophile: an organism that can thrive at temperatures between 80–122 °C, such as those found in hydrothermal systems
- Hypolith: an organism that lives underneath rocks in cold deserts
- Lithoautotroph: an organism (usually a bacterium) whose sole source of carbon is carbon dioxide and exergonic inorganic oxidation (chemolithotrophs), such as Nitrosomonas europaea; these organisms are capable of deriving energy from reduced mineral compounds like pyrites, and are active in geochemical cycling and the weathering of parent bedrock to form soil
- Metallotolerant: capable of tolerating high levels of dissolved heavy metals in solution, such as copper, cadmium, arsenic, and zinc; examples include Ferroplasma sp., Cupriavidus metallidurans and GFAJ-1
- Oligotroph: an organism capable of growth in nutritionally limited environments
- Osmophile: an organism capable of growth in environments with a high sugar concentration
- Piezophile (barophile): an organism that lives optimally at high hydrostatic pressure; common in the deep terrestrial subsurface, as well as in oceanic trenches
- Polyextremophile (faux Ancient Latin/Greek for 'affection for many extremes'): an organism that qualifies as an extremophile under more than one category
- Psychrophile: an organism capable of survival, growth or reproduction at temperatures of −15 °C or lower for extended periods; common in cold soils, permafrost, polar ice, cold ocean water, and in or under alpine snowpack
- Radioresistant: organisms resistant to high levels of ionizing radiation, most commonly ultraviolet radiation, but also including organisms capable of resisting nuclear radiation
- Thermophile: an organism that can thrive at temperatures between 45–122 °C
- Thermoacidophile: a combination of thermophile and acidophile that prefers temperatures of 70–80 °C and pH between 2 and 3
- Xerophile: an organism that can grow in extremely dry, desiccating conditions; this type is exemplified by the soil microbes of the Atacama Desert
In astrobiology Astrobiology is the field concerned with forming theories, such as panspermia, about the distribution, nature, and future of life in the universe. 
In it, microbial ecologists, astronomers, planetary scientists, geochemists, philosophers, and explorers cooperate constructively to guide the search for life on other planets. Astrobiologists are particularly interested in studying extremophiles, as many organisms of this type are capable of surviving in environments similar to those known to exist on other planets. For example, Mars may have regions in its deep subsurface permafrost that could harbor endolith communities. The subsurface water ocean of Jupiter's moon Europa may harbor life, especially at hypothesized hydrothermal vents at the ocean floor. Recent research carried out on extremophiles in Japan involved a variety of bacteria including Escherichia coli and Paracoccus denitrificans being subject to conditions of extreme gravity. The bacteria were cultivated while being rotated in an ultracentrifuge at high speeds corresponding to 403,627 times "g" (the normal acceleration due to gravity). Paracoccus denitrificans was one of the bacteria which displayed not only survival but also robust cellular growth under these conditions of hyperacceleration which are usually found only in cosmic environments, such as on very massive stars or in the shock waves of supernovas. Analysis showed that the small size of prokaryotic cells is essential for successful growth under hypergravity. The research has implications on the feasibility of panspermia. Recently, on 26 April 2012, scientists reported that lichen survived and showed remarkable results on the adaptation capacity of photosynthetic activity within the simulation time of 34 days under Martian conditions in the Mars Simulation Laboratory (MSL) maintained by the German Aerospace Center (DLR). New sub-types of -philes are identified frequently and the sub-category list for extremophiles is always growing. For example, microbial life lives in the liquid asphalt lake, Pitch Lake. 
Research indicates that extremophiles inhabit the asphalt lake in populations ranging between 106 to 107 cells/gram. Likewise, until recently boron tolerance was known but a strong borophile was undiscovered in bacteria. With the recent isolation of Bacillus boroniphilus, borophiles came into discussion. Studying these borophiles may help illuminate the mechanisms of both boron toxicity and boron deficiency. Industrial uses The thermoalkaliphilic catalase, which initiates the breakdown of hydrogen peroxide into oxygen and water, was isolated from an organism, Thermus brockianus, found in Yellowstone National Park by Idaho National Laboratory researchers. The catalase operates over a temperature range from 30°C to over 94°C and a pH range from 6-10. This catalase is extremely stable compared to other catalases at high temperatures and pH. In a comparative study, the T. brockianus catalase exhibited a half life of 15 days at 80°C and pH 10 while a catalase derived from Aspergillus niger had a half life of 15 seconds under the same conditions. The catalase will have applications for removal of hydrogen peroxide in industrial processes such as pulp and paper bleaching, textile bleaching, food pasteurization, and surface decontamination of food packaging. See also - Rampelotto, P. H. (2010). Resistance of microorganisms to extreme environmental conditions and its contribution to Astrobiology. Sustainability, 2, 1602-1623. - Rothschild, L.J.; Mancinelli, R.L. Life in extreme environments. Nature 2001, 409, 1092-1101 - "Mars Exploration - Press kit" (PDF). NASA. June 2003. Retrieved 14 July 2009. - BBC Staff (23 August 2011). "Impacts 'more likely' to have spread life from Earth". BBC. Retrieved 24 August 2011. - Gorman, James (6 February 2013). "Bacteria Found Deep Under Antarctic Ice, Scientists Say". New York Times. Retrieved 6 February 2013. - Choi, Charles Q. (17 March 2013). "Microbes Thrive in Deepest Spot on Earth". LiveScience. Retrieved 17 March 2013. 
- Glud, Ronnie; Wenzhöfer, Frank; Middleboe, Mathias; Oguri, Kazumasa; Turnewitsch, Robert; Canfield, Donald E.; Kitazato, Hiroshi (17 March 2013). "High rates of microbial carbon turnover in sediments in the deepest oceanic trench on Earth". Nature Geoscience. doi:10.1038/ngeo1773. Retrieved 17 March 2013. - Oskin, Becky (14 March 2013). "Intraterrestrials: Life Thrives in Ocean Floor". LiveScience. Retrieved 17 March 2013. - Thermococcus barophilus sp. nov., a new barophilic and hyperthermophilic archaeon isolated under high hydrostatic pressure from a deep-sea hydrothermal vent. IJSEM, p. 351-359, 49, 1999. - Cavicchioli, R. & Thomas, T. 2000. Extremophiles. In: J. Lederberg. (ed.) Encyclopedia of Microbiology, Second Edition, Vol. 2, pp. 317–337. Academic Press, San Diego. - "Studies refute arsenic bug claim". BBC News. 9 July 2012. Retrieved 10 July 2012. - "GFAJ-1 Is an Arsenate-Resistant, Phosphate-Dependent Organism". Science. 8 July 2012. doi: 10.1126/science.1218455. Retrieved 10 July 2012. - "Absence of Detectable Arsenate in DNA from Arsenate-Grown GFAJ-1 Cells". Science. 8 July 2012. doi: 10.1126/science.1219861 . Retrieved 10 July 2012. - Than, Ker (25 April 2011). "Bacteria Grow Under 400,000 Times Earth's Gravity". National Geographic- Daily News. National Geographic Society. Retrieved 28 April 2011. - Deguchi, Shigeru; Hirokazu Shimoshige, Mikiko Tsudome, Sada-atsu Mukai, Robert W. Corkery, Susumu Ito, and Koki Horikoshi (2011). "Microbial growth at hyperaccelerations up to 403,627 xg". Proceedings of the National Academy of Sciences 108 (19): 7997. doi:10.1073/pnas.1018027108. Retrieved 28 April 2011. - Baldwin, Emily (26 April 2012). "Lichen survives harsh Mars environment". Skymania News. Retrieved 27 April 2012. - de Vera, J.-P.; Kohler, Ulrich (26 April 2012). "The adaptation potential of extremophiles to Martian surface conditions and its implication for the habitability of Mars". European Geosciences Union. Retrieved 27 April 2012. 
- Microbial Life Found in Hydrocarbon Lake. the physics arXiv blog 15 April 2010. - Schulze-Makuch, Haque, Antonio, Ali, Hosein, Song, Yang, Zaikova, Beckles, Guinan, Lehto, Hallam. Microbial Life in a Liquid Asphalt Desert. - A novel highly boron tolerant bacterium, Bacillus boroniphilus sp. nov., isolated from soil, that requires boron for its growth. Extremophiles Vol. 11, p. 217-224. - Anitori, RP (editor) (2012). Extremophiles: Microbiology and Biotechnology. Caister Academic Press. ISBN 978-1-904455-98-1. Further reading - Wilson, Z. E. and Brimble, M. A. (January 2009). "Molecules derived from the extremes of life". Nat. Prod. Rep. 26 (1): 44–71. doi:10.1039/b800164m. PMID 19374122. - Rossi M et al. (July 2003). "Extremophiles 2002". J Bacteriol. 185 (13): 3683–9. doi:10.1128/JB.185.13.3683-3689.2003. PMC 161588. PMID 12813059. - Satyanarayana, T.; Raghukumar, C.; Shivaji, S. (July 2005). "Extremophilic microbes: Diversity and perspectives". Current Science 89 (1): 78–90. - C.Michael Hogan (2010). "Extremophile". Encyclopedia of Earth, National Council of Science & the Environment, eds. E,Monosson & C.Cleveland. - Extreme Environments - Science Education Resource Center - Extremophile Research - Eukaryotes in extreme environments - The Research Center of Extremophiles - DaveDarling's Encyclopedia of Astrobiology, Astronomy, and Spaceflight - The International Society for Extremophiles - Idaho National Laboratory - Polyextremophile on David Darling's Encyclopedia of Astrobiology, Astronomy, and Spaceflight
<urn:uuid:6445d790-83f9-41fc-8b78-fec744d1467f>
4
3,090
Knowledge Article
Science & Tech.
37.204442
|Taq polymerase, exonuclease| DNA polymerase bound to a DNA octamer Taq polymerase is a thermostable DNA polymerase named after the thermophilic bacterium Thermus aquaticus, from which it was originally isolated by Thomas D. Brock in 1965. It is often abbreviated to "Taq Pol" (or simply "Taq"), and is frequently used in the polymerase chain reaction (PCR), a method for greatly amplifying short segments of DNA. T. aquaticus is a bacterium that lives in hot springs and hydrothermal vents, and Taq polymerase was identified as an enzyme able to withstand the protein-denaturing conditions (high temperature) required during PCR. It therefore replaced the DNA polymerase from E. coli originally used in PCR. Taq's optimum temperature for activity is 75–80°C; it has a half-life of greater than 2 hours at 92.5°C, 40 minutes at 95°C and 9 minutes at 97.5°C, and can replicate a 1000 base pair strand of DNA in less than 10 seconds at 72°C. One of Taq's drawbacks is its relatively low replication fidelity. It lacks a 3' to 5' exonuclease proofreading activity, and has an error rate measured at about 1 in 9,000 nucleotides. Its two remaining domains, however, may act in coordination, via coupled domain motion. Some thermostable DNA polymerases have been isolated from other thermophilic bacteria and archaea, such as Pfu DNA polymerase, which possesses a proofreading activity, and are being used instead of (or in combination with) Taq for high-fidelity amplification. Taq makes DNA products that have A (adenine) overhangs at their 3' ends. This may be useful in TA cloning, whereby a cloning vector (such as a plasmid) that has a T (thymine) 3' overhang is used, which complements the A overhang of the PCR product, thus enabling ligation of the PCR product into the plasmid vector. Taq polymerase in PCR In the early 1980s, Kary Mullis was working at Cetus Corporation on the application of synthetic DNAs to biotechnology. 
He was familiar with the use of DNA oligonucleotides as probes for binding to target DNA strands, as well as their use as primers for DNA sequencing and cDNA synthesis. In 1983, he began using two primers, one to hybridize to each strand of a target DNA, and adding DNA polymerase to the reaction. This led to exponential DNA replication, greatly amplifying the amounts of DNA between the primers. However, after each round of replication the mixture needs to be heated above 90°C to denature the newly formed DNA, allowing the strands to separate and act as templates in the next round of amplification. This heating step also inactivates the DNA polymerase that was in use before the discovery of Taq polymerase, the Klenow fragment of the DNA Polymerase I from E. coli. Use of the thermostable Taq polymerase eliminates the need for having to add new enzyme to the PCR reaction during the thermocycling process. A single closed tube in a relatively simple machine can be used to carry out the entire process. Thus, the use of Taq polymerase was the key idea that made PCR applicable to a large variety of molecular biology problems concerning DNA analysis. Patent issues Hoffmann-La Roche eventually bought the PCR and Taq patents from Cetus for $330 million, from which it may have received up to $2 billion in royalties. In 1989, Science Magazine named Taq polymerase its first "Molecule of the Year". Kary Mullis received the Nobel Prize in 1993, the only one awarded for research performed at a biotechnology company. By the early 1990s, the PCR technique with Taq polymerase was being used in many areas, including basic molecular biology research, clinical testing, and forensics. It also began to find a pressing application in direct detection of the HIV virus in AIDS. In December 1999, U.S. District Judge Vaughn Walker ruled that the 1990 patent involving Taq polymerase was issued, in part, on misleading information and false claims by scientists with Cetus Corporation. 
The ruling supported a challenge by Promega Corporation against Hoffman-La Roche, which purchased the Taq patents in 1991. Judge Walker cited previous discoveries by other laboratories, including the laboratory of Professor John Trela in the University of Cincinnati department of biological sciences, as the basis for the ruling. See also - Chien A, Edgar DB, Trela JM (1976). "Deoxyribonucleic acid polymerase from the extreme thermophile Thermus aquaticus". J. Bact. 127 (3): 1550–7. PMC 232952. PMID 8432. - Saiki, RK; et al. (1988). "Primer-directed enzymatic amplification of DNA with a thermostable DNA polymerase". Science 239 (4839): 487–91. doi:10.1126/science.2448875. PMID 2448875. - Saiki, RK; et al. (1985). "Enzymatic amplification of beta-globin genomic sequences and restriction site analysis for diagnosis of sickle cell anemia". Science 230 (4732): 1350–4. doi:10.1126/science.2999980. PMID 2999980. - Lawyer FC, et al. (1993). "High-level expression, purification, and enzymatic characterization of full-length Thermus aquaticus DNA polymerase ...". PCR Methods Appl. 2 (4): 275–87. PMID 8324500. - Tindall KR and Kunkel TA (1988). "Fidelity of DNA synthesis by the Thermus aquaticus DNA polymerase". Biochemistry 27 (16): 6008–13. doi:10.1021/bi00416a027. PMID 2847780. - Bu Z, Biehl R, Monkenbusch M, Richter D, Callaway DJ (December 2005). "Coupled protein domain motion in Taq polymerase revealed by neutron spin-echo spectroscopy". Proc. Natl. Acad. Sci. U.S.A. 102 (49): 17646–51. doi:10.1073/pnas.0503388102. PMC 1345721. PMID 16306270. - Mullis KB (April 1990). "The unusual origin of the polymerase chain reaction". Sci. Am. 262 (4): 56–61, 64–5. doi:10.1038/scientificamerican0490-56. PMID 2315679. - Fore J, Wiechers IR, Cook-Deegan R (2006). "The effects of business practices, licensing, and intellectual property on development and dissemination of the polymerase chain reaction: case study". J Biomed Discov Collab 1: 7. doi:10.1186/1747-5333-1-7. 
PMC 1523369. PMID 16817955. Detailed history of Cetus Corporation and the commercial aspects of PCR. - Guatelli JC, Gingeras TR, Richman DD (1 April 1989). "Nucleic acid amplification in vitro: detection of sequences with low copy numbers and application to diagnosis of human immunodeficiency virus type 1 infection". Clin. Microbiol. Rev. 2 (2): 217–26. PMC 358112. PMID 2650862. - Curran, Chris, Bio-Medicine, Dec. 7, 1999 - Li Y, Mitaxov V, Waksman G (August 1999). "Structure-based design of Taq DNA polymerases with improved properties of dideoxynucleotide incorporation". Proc. Natl. Acad. Sci. U.S.A. 96 (17): 9491–6. PMC 22236. PMID 10449720. |Wikimedia Commons has media related to: Taq polymerase|
<urn:uuid:711f3995-9064-4037-a8cc-dc6bfc9a93f1>
2.90625
1,730
Knowledge Article
Science & Tech.
59.89779
Assigning the result of a Boolean calculation to a variable is no different from assigning the result of a number calculation or a text string join. The following are examples of assigning the results of comparisons into new or existing Boolean variables. 1 var timeToEat = me === hungry; 2 var affordable = price < budget; 3 passed = mark >= pass; Assigning the results of a comparison to a Boolean variable 1 Creates a new Boolean variable timeToEat that will be true if and only if the current values in me and hungry are of the same data type and value; otherwise it will be false. 2 Creates a new Boolean variable affordable that will be true if the value in price is less than the value in budget. 3 Updates the existing variable passed (or creates a new global variable if it doesn't exist) so that it contains true if the value in mark is greater than or equal to the value in pass. You only need to assign the result of a comparison to a Boolean variable like this when you need to test that result in multiple places in your code. Where the result of the comparison is only needed once, it is more common to test the comparison itself, rather than assigning it to a variable first. This tutorial first appeared on www.felgall.com and is reproduced here with the permission of the author.
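As a minimal sketch of that last point (the mark/pass names follow the tutorial's third example; the specific values and the "distinction" threshold are invented for illustration), this is the difference between storing a comparison result and testing the comparison inline:

```javascript
// Hypothetical values for illustration.
var mark = 72;
var pass = 50;

// Stored in a Boolean variable because the result is reused below.
var passed = mark >= pass;

if (passed) {
  console.log("Certificate issued");
}
var summary = passed ? "PASS" : "FAIL"; // reuses the same stored result

// This result is needed only once, so the comparison is tested directly
// instead of being assigned to a variable first.
if (mark >= pass * 1.8) {
  console.log("Awarded with distinction");
}
```

Storing the result once also guarantees that every test sees the same value, even if mark were reassigned between the tests.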
<urn:uuid:4d13388d-c58a-41cc-8d37-3bb19d23dbe1>
3.578125
275
Tutorial
Software Dev.
42.464534
These students in Shishmaref, Alaska live on frozen ground. But frozen ground matters to people all over the world, even if the ground is not frozen where they live. —Credit: Angela Alston People may not think much about frozen ground, but they notice what can happen when the ground freezes. The frozen ground may spit out fence posts, or make roads very bumpy. In some places, entire towns are built on ground that stays frozen all year around. People there count on the ground under their houses to stay frozen hard. Animals and plants, too, have figured out how to take advantage of frozen ground. We can't always see the ways that frozen ground matters. Even people who live where the ground never freezes depend on frozen ground. Frozen ground around the world is connected to Earth's climate. If all, or even just a lot of the frozen ground around the world thawed out, everyone would notice. In the Arctic, some ground that used to be called permanently frozen ground, or permafrost, has already thawed out. It is changing the way people in those areas live. What is frozen ground? Frozen ground occurs when the ground contains water, and the temperature of the ground goes down below 0° Celsius (32° Fahrenheit). It can make a big difference if the ground stays frozen all year, or if the ground freezes and thaws. More than half of all the land in the Northern Hemisphere freezes and thaws every year, and is called seasonally frozen ground. One-fourth of the land in the Northern Hemisphere has an underground layer that stays frozen all year long. Ground that stays frozen for at least two years in a row is called permafrost. What is there to know about frozen ground? All About Frozen Ground explores what is going on in and under the ground when it freezes and thaws. Find out how frozen ground works, and how it makes a difference all over the world: - How will climate change affect frozen ground? - Can plants and animals find water when the ground is frozen? 
- Does frozen ground damage buildings? - How do scientists study permafrost from space? Tingjun Zhang, a permafrost scientist at the National Snow and Ice Data Center, studies frozen ground. He said, “The more we learn, the more we realize that frozen ground isn’t just important when we build roads or houses. Frozen ground is important to people, wildlife, and climate all over the world.”
<urn:uuid:d2623c5f-3070-4a94-b3a1-14c0e43ea72a>
3.3125
514
Knowledge Article
Science & Tech.
66.76663
This illustration shows an artist’s impression of the brown dwarf 2MASSW J1207334-393254, or 2M1207 for short. With a mass that amounts to 25 times that of Jupiter, 2M1207 is surrounded by a circumstellar disc of gas and dust and possesses a planetary companion five times more massive than Jupiter. The planetary companion lies at a very large distance from 2M1207 – the projected distance between the two bodies measuring 55 astronomical units (AU). Sub-millimeter observations performed with the SPIRE instrument on board ESA’s Herschel Space Observatory have shown that the disc’s total mass amounts to about three to five times the mass of Jupiter and that its radius ranges between 50 and 100 AU. With such a massive disc, it is likely that the planetary-mass companion originated directly from disc fragmentation, thus challenging the standard scenario of giant planet formation via core accretion.
<urn:uuid:013a7432-9520-48e6-be36-74cb4bd9c787>
3.421875
190
Knowledge Article
Science & Tech.
36.743703
This may be a naive question, but after the Fukushima Daiichi partial meltdown, and after studying the aftermath of Chernobyl, it seems both could have been helped by this idea. At Chernobyl, the liquidators who cleaned up the disaster tunneled concrete under the reactor core and covered the whole complex in a big containment unit. Why could a plant not have a 30 meter pit below the reactor, filled with water, with a trap door holding the reactor up? This pit could be very thick reinforced concrete, similar to a missile silo. If there were a meltdown, or an imminent meltdown, and there were no other options, the reactor core could be dropped into this pit by triggering the trap door, and then perhaps have lead shot dumped on it or be sealed in concrete. This seems very simple, but there must be some reason why it is not practical. I know that the core would still be hot and would still be a problem, but would this not be better than it being exposed to air?
<urn:uuid:71fdea2a-415a-4fe9-bc9a-5ac80bbee432>
3.40625
189
Q&A Forum
Science & Tech.
49.446623
Aleocharini
James S. Ashe (1947-2005) and Christian Maus
The tribe Aleocharini includes a diverse assemblage of aleocharine staphylinids arranged into 3 subtribes, 16 genera, and over 460 species. The various members of the tribe display a diverse set of biological attributes. Members of the subtribe Aleocharina, particularly the genus Aleochara, are known to be ectoparasitic on fly puparia as larvae, though the biology of most other genera in the subtribe has not been investigated. Members of the subtribe Compactopediina are inquilines in the nests of termites (Kistner 1970b, 1982). Members of the single genus and species in the subtribe Hodoxenina are inquilines in the nest of the termite Microhodotermes viator (Hodotermitidae) (Kistner 1970a). Members of the tribe Aleocharini are found in all zoogeographic regions and on all continents except Antarctica. Members of the subtribe Aleocharina are worldwide in distribution; the Compactopediina are found in the Oriental region; and the Hodoxenina are known only from South Africa. The Aleocharini are characterized by the presence of 5-5-5 segmented tarsi and a pseudosegment on the last segment of the maxillary and labial palpi, so that the maxillary and labial palpi appear to be 5-articled and 4-articled respectively (Lohse 1974, Seevers 1978). Figure. 
Aleochara sp., mouthparts, ventral aspect, showing pseudosegment on maxillary and labial palpi. Seevers (1978) also noted that the velum of the parameres is reticulated, but this feature has not been examined for most taxa, and its generality among all members of the tribe is not known. The Aleocharini share this latter feature (as well as a pseudosegment on the last maxillary segment) with at least some members of the tribe Hoplandriini. Fenyes (1920-21) included several genera with 2- or 3-articled labial palpi in the tribe Aleocharini, but most of these genera have subsequently been moved to other tribes, though Piochardia (subtribe Aleocharini) with supposedly 3-articled labial palpi (Fenyes 1920-21) is still included in the tribe. The monophyly of the Aleocharini is not well established. Due to the shared characteristics discussed in the previous section, it is possible that this tribe may be paraphyletic in relation to the Hoplandriini. The monophyly of the subtribe Aleocharina and the phylogenetic relationships among the 3 subtribes of the Aleocharini have not been investigated. However, it appears that the subtribe Aleocharina is based exclusively on plesiomorphic characteristics in comparison to the other subtribes. Fenyes, A. 1920-21. Coleoptera, Staphylinidae, Aleocharinae. In: Wytsman, P.: Genera Insectorum, fasc. 173B-C. L. Desmet-Vertneuil, Bruxelles: 111-414 Kistner, D. H. 1970a. New termitophilous Staphylinidae from Hodotermitidae (Isoptera) nests. Jour. New York Entomol. Soc. 78: 2-16. Kistner, D. H. 1970b. New termitophiles associated with Longipeditermes longipes (Haviland) II. The genera Compactopedia, Emersonilla, Hirsitilla, and Limulodilla. Jour. New York Entomol. Soc. 78: 17-32. Kistner, D. H. 1982. A revision of the termitophilous genus Discoxenus with a study of the relationships of the genus and notes on its behavior (Coleoptera, Staphylinidae). Sociobiology 7(2): 165-186. Lohse, G. A. 1974. Die Käfer Mitteleuropas. 
Staphylinidae II (Hypocyphtinae and Aleocharinae). Goecke & Evers, Krefeld, Germany. 381 pp. Seevers, C. H. 1978. A generic and tribal revision of the North American Aleocharinae (Coleoptera: Staphylinidae). Fieldiana: Zoology 71: vi, 1-275. Development of this page made possible by National Science Foundation PEET grant DEB 95-21755 to James S. Ashe and a DAAD grant D/97/05475 from the German Government to Christian Maus. All images on this page copyright © 1997 James S. Ashe. James S. Ashe (1947-2005), University of Kansas, Lawrence, Kansas, USA. Correspondence regarding this page should be directed to James S. Ashe (1947-2005) at Page copyright © 1997 James S. Ashe (1947-2005) and Christian Maus. All Rights Reserved. - First online 11 September 1998 Citing this page: Ashe (1947-2005), James S. and Christian Maus. 1998. Aleocharini. Version 11 September 1998. http://tolweb.org/Aleocharini/9832/1998.09.11 in The Tree of Life Web Project, http://tolweb.org/
Physicist: The issue here is that the equinoxes are the two days of the year when the day should be exactly as long as the night. And yet you'll find that on the equinox the day is always slightly longer than 12 hours. As it happens, there's nothing particularly special about the equinoxes. There's just more daytime than nighttime overall. Kind of inspiring! But there's one more slick trick that Earth has for keeping the lights on. Before the light from the Sun can get to us (here on the ground) it has to pass through the atmosphere. While the atmosphere itself doesn't do too much (aside from scattering blue light and helping things breathe) the transition from the airlessness of space to the airfullness of here does change the direction that light travels. This is just refraction, which is the same effect responsible for things not being where they appear to be at the bottom of swimming pools, lenses working, and blurry vision under water. Basically, when light has to pass from a medium where it can travel fast (space counts) to one where it travels slower, the light will bend toward the slower medium. So, every day a handful of physical laws conspire to give us a couple more minutes of sunlight than you might otherwise expect (even on the equinoctes!).
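The size of this effect can be sketched in a few lines. The specific numbers below are standard textbook values assumed for illustration, not figures from the answer above: roughly 0.57 degrees of horizon refraction, the Sun's roughly 0.27 degree angular semidiameter (sunrise and sunset are timed from the Sun's upper edge), and the equatorial rate of 0.25 degrees of sky motion per minute.

```python
# Rough estimate of the "bonus" daylight at the equator on an equinox.
# Assumed standard values (not from the article): ~0.57 deg of
# atmospheric refraction at the horizon, plus the Sun's ~0.27 deg
# semidiameter, since sunrise/sunset are timed from the upper edge.
refraction_deg = 0.57
semidiameter_deg = 0.27
sun_rate_deg_per_min = 360.0 / (24 * 60)  # 0.25 deg/min at the equator

extra_per_horizon = (refraction_deg + semidiameter_deg) / sun_rate_deg_per_min
extra_minutes = 2 * extra_per_horizon     # once at sunrise, once at sunset
print(round(extra_minutes, 1))            # roughly 6.7 extra minutes
```

At higher latitudes the Sun crosses the horizon at a shallower angle, so the extra daylight is even longer than this equatorial estimate.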
In 1716, English astronomer Edmond Halley noted, "This is but a little Patch, but it shews itself to the naked Eye, when the Sky is serene and the Moon absent." Of course, M13 is now modestly recognized as the Great Globular Cluster in Hercules, one of the brightest star clusters in the northern sky. Telescopic views reveal the spectacular cluster's hundreds of thousands of stars. At a distance of 25,000 light-years, the cluster's stars crowd into a region 150 light-years in diameter, but approaching the cluster core upwards of 100 stars could be contained in a cube just 3 light-years on a side. For comparison, the closest star to the Sun is over 4 light-years away. Along with the cluster's dense core, the outer reaches of M13 are highlighted in this sharp color image. The cluster's evolved red and blue giant stars show up in yellowish and blue hues.
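The density contrast implied by those numbers can be made concrete. The figure of two stars (the Sun plus the nearest system counted as one) inside a sphere reaching the nearest star is an assumption for illustration, not from the text:

```python
import math

# Core density from the text: ~100 stars in a cube 3 light-years on a side.
core_density = 100 / 3**3  # stars per cubic light-year

# Solar neighborhood (illustrative assumption): ~2 stars inside a sphere
# whose radius reaches the nearest star at ~4.2 light-years.
local_density = 2 / ((4 / 3) * math.pi * 4.2**3)

# The cluster core is several hundred times denser than our neighborhood.
print(round(core_density / local_density))
```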
One solar day on a planet is the length of time from noon to noon. A solar day lasts 24 hours on planet Earth. On Mercury a solar day is about 176 Earth days long. And during its first Mercury solar day the MESSENGER spacecraft has imaged nearly the entire surface of the planet, generating a global monochrome map at 250 meters per pixel resolution and a 1 kilometer per pixel resolution color map. Examples of the maps, mosaics constructed from thousands of images made under uniform lighting conditions, are shown (monochrome at left), both centered along the planet's 75 degrees East longitude. The MESSENGER spacecraft's second Mercury solar day will likely include more high-resolution imaging of the planet's surface features. (Editor's note: Due to Mercury's 3:2 spin-orbit resonance, a Mercury solar day is 2 Mercury years long.)
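The editor's note follows from the standard relation between a planet's sidereal rotation period, its orbital period, and its solar day. A quick check, assuming Mercury's commonly quoted periods of about 58.6 and 88.0 Earth days:

```python
# Solar day from sidereal rotation and orbital period (prograde rotation):
#   1/solar = 1/rotation - 1/orbit
def solar_day(rotation_days, orbit_days):
    return 1.0 / (1.0 / rotation_days - 1.0 / orbit_days)

mercury = solar_day(58.646, 87.969)
print(round(mercury, 1))           # ~175.9 Earth days, the "176" above
print(round(mercury / 87.969, 2))  # 2.0 -> exactly two Mercury years
```

The 3:2 resonance means the rotation period is exactly two-thirds of the orbital period, which is why the ratio comes out to exactly 2.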
The pyloric network (Figure 1) is part of the stomatogastric ganglion (STG) of crustaceans. The network is a central pattern generator (CPG) that drives the muscles of the pylorus, which is a food filtering organ within the gastric system of these animals. The pyloric network is one of the most researched neural circuits and many details of it are known, including types and numbers of participating cells, connections between these cells, and the transmitters, receptors and neuromodulators used by cells within the network. Consequently, this network is an ideal candidate for developing detailed network simulations of neural systems. Figure 1. A diagram of the pyloric network. The diagram shows the participating neurons, their numbers and the connections between these neurons. Simulations of the pyloric network have been developed since the 1970s. However, these simulations do not include many of the known details about the pyloric network, and produce a behaviour that resembles, at a high level, the behaviour of the biological network while ignoring the behavioural details. These earlier simulations are also application specific, and cannot be easily modified and re-used by researchers. Methods and results We present here a new open source simulation of the pyloric network. The simulation is developed using the Neuron simulation language. Each neuron is simulated using four compartments: soma, primary neurite, axon, and dendrite. The primary neurite is connected to all three other compartments. The connections are implemented by linking the axon outputs of neurons to the corresponding dendrite inputs of other neurons. The effects of neuromodulation are implemented in the form of changing characteristics of neural connections in response to changes in the values of a multi-dimensional modulation state variable (i.e. each component of the vector indicates the presence and concentration of a neuromodulator, the components being labeled by the corresponding modulator).
The values of parameters for the neural compartments are set using the STG neuron database developed by Prinz et al. To determine the right setting of parameters for each cell type, we used the simulator attached to the STG neuron database to check that the output of our model neurons matches the output of the modeled neurons. Our simulation is developed as open source software, allowing other users to download the source code and modify it. In this way other researchers may use this detailed simulation of the pyloric network to check experimental assumptions and also possibly to develop simulations of other related networks (e.g. the gastric mill network).
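The modulation scheme described above can be sketched as a data model. This is an illustrative assumption, not the authors' NEURON code; the cell names and the modulator are examples only:

```python
# Sketch (an assumption, not the authors' implementation): cells with
# four compartments, axon->dendrite synapses, and a modulation state
# vector whose components scale synaptic strength.
COMPARTMENTS = ("soma", "primary_neurite", "axon", "dendrite")

class Synapse:
    def __init__(self, pre, post, base_weight, sensitivity):
        self.pre, self.post = pre, post  # presynaptic axon, postsynaptic dendrite
        self.base_weight = base_weight
        self.sensitivity = sensitivity   # modulator name -> gain per unit concentration

    def weight(self, modulators):
        """Effective strength under the current modulation state vector."""
        gain = 1.0 + sum(self.sensitivity.get(m, 0.0) * conc
                         for m, conc in modulators.items())
        return self.base_weight * gain

# Example: a synapse whose strength rises with (hypothetical) proctolin level.
syn = Synapse("AB", "LP", base_weight=1.0, sensitivity={"proctolin": 0.5})
assert abs(syn.weight({"proctolin": 0.2}) - 1.1) < 1e-9
```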
The neutral theory of molecular evolution suggests that most of the genetic variation in populations is the result of mutation and genetic drift and not selection. Basically, the theory suggests that if a population carries several different versions of a gene, odds are that each of those versions is equally good at performing its job—in other words, that variation is neutral: whether you carry gene version A or gene version B does not affect your fitness. The neutral theory is easily misinterpreted. It does NOT suggest: - That organisms are not adapted to their environments - That all morphological variation is neutral - That ALL genetic variation is neutral - That natural selection is unimportant in shaping genomes The main point of the neutral theory is simply that when we see several versions of a gene in a population, it is likely that their frequencies are simply drifting around. The data supporting and refuting the neutral theory are complicated. Figuring out how widely the neutral theory applies is still the topic of much research.
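The claim that neutral variant frequencies are "simply drifting around" can be illustrated with a minimal Wright-Fisher sketch. The model and its parameters are assumptions for illustration; the page above presents no specific model:

```python
import random

def drift(p, n_individuals, generations, seed=42):
    """Neutral allele frequency under pure genetic drift: each generation,
    2N gene copies are resampled at the current frequency. Neither version
    of the gene is fitter, so nothing pushes the frequency anywhere."""
    random.seed(seed)
    trajectory = [p]
    for _ in range(generations):
        copies = sum(random.random() < p for _ in range(2 * n_individuals))
        p = copies / (2 * n_individuals)
        trajectory.append(p)
    return trajectory

# Frequencies wander between 0 and 1, eventually fixing or vanishing
# by chance alone -- the neutral theory's picture of most variation.
traj = drift(0.5, n_individuals=50, generations=100)
```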
A function call terminates if it returns to its caller or throws an exception. A function call does not terminate if it goes into an infinite loop, halts execution of the program (gracefully or not), or jumps in such a way that it does not return or throw an exception that could be caught by its caller. Taking termination into account, we can model a C++ function by a nontotal relation from proper pre-states to post-states. A state is proper if it is not bottom (see section 2.8.2 Formal Model of States); bottom represents infinite looping and other kinds of nontermination. A pre-state may or may not be related to a post-state by the relation specified in a Larch/C++ specification, and if it is related to a post-state, that post-state could be bottom. (A pre-state that is not related to any post-state is one for which execution is "refused" [Nelson89] [Hesselink92]. Such relations cannot be implemented in C++, but are useful for purposes of program refinement.)
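This relational model can be rendered concretely. The following is an illustrative Python sketch of the semantics, an assumption for exposition rather than anything from the Larch/C++ language itself:

```python
# Model a specification as a relation from proper pre-states to
# post-states. BOTTOM stands for nontermination; a pre-state absent
# from the relation is one for which execution is "refused".
BOTTOM = object()

spec = {
    "x=1": {"x=2"},           # must terminate in post-state x=2
    "x=2": {"x=3", BOTTOM},   # may terminate in x=3, or may loop forever
    # "x=3" absent: execution is refused for this pre-state
}

def allowed_outcomes(pre_state):
    """Return the set of permitted post-states, or None if refused."""
    return spec.get(pre_state)

assert allowed_outcomes("x=3") is None        # refused: unimplementable in C++
assert BOTTOM in allowed_outcomes("x=2")      # nontermination is permitted
```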
Researchers reported today that they have sequenced the genome of a brown algal model organism, identifying genetic and genomic features that help explain its evolutionary past and adaptation to its current environment.
Off Chub Cay, Bahamas, Day 3, 1999 With great success finding Midas slitsnails, the team moves over shallower seas, ready to search for Adanson's slitsnail. Today Pat and Steven get their chances to dive, along with Jerry and José. The sub dives to around 400 feet along a wall so steep it is sometimes undercut. What in the world is a snail, or any other animal, doing there if it can't swim and isn't anchored to the wall? Yet there they are, one of the larger snails, clinging to the wall. We find them dispersed, like Midas, one and then another, ten or fifteen minutes apart. They are easier to spot than Midas with their beautifully mottled garnet and ivory shells. The shells are more pointed, and the spiraling cone shape more acutely angled compared to Midas. They also differ in having a smaller muscular foot that does not extend beyond the edge of the shell, and so is rarely seen except in aquaria. As the sub creeps along the wall, a number of striking things appear. Overall there is more visible life attached to the wall than at greater depths. The wall is craggy, encrusted with sponges, isolated corals and lots of fishes. There isn't the same build-up of organic sediment at this depth as we encountered at 2,500 feet. Undercuts in the wall prevent sediment from accumulating. Adanson's slitsnails are not hard to spot, just scarce, and a challenge to pick up because of their size, the vertical wall, and the way they are sometimes wedged into nooks. Although the team spends six to seven hours each day collecting from the submersible, long hours are also committed to lab work on the ship. The ship is equipped with two labs, one wet and the other dry. All tanks are in the wet lab. For the really deep-water snails there is a cold-water tank. The rest are kept in unchilled tanks. In the tanks, a behavior peculiar to slitsnails is observed. When disturbed, they exude a milky substance into the water.
Released from a special organ, the hypobranchial gland, it flows out through the slit in the shell. What's interesting is that the substance doesn't immediately diffuse into the water. It tends to hold together like thick smoke in still air. Jerry suspects the secretion is used to deter predators and may affect the nervous system. But whose nervous system? These snails almost always show signs of shell repair, and the damage is likely caused by crabs trying to get at the snails. Jerry "milks" the snails to collect and analyze this substance. This kind of thing is not only interesting; the substance could also prove useful to medicine.
* Seismic Waves and Seismic Eruption are free demonstration programs available at: binghamton.edu Transform- two plates slide past each other. In this aerial view of the San Andreas Fault (transform) the trees in the orchard (dots) have been offset by the slipping of the plates. The Pacific Plate is to the left and the North American to the right. This picture shows how things like fences, roads, rivers and buildings can be offset by the sliding of the plates. The exception to this arrangement is "Hot Spots," which are plumes of hot material (rather than belts) in the middle of plates. These spots stay stationary while the plate moves above them. The spot melts through the plate like a blowtorch and produces a volcano above it. As the plate moves, the spot melts through another place, producing a chain of volcanic islands. Hawaii is an example of a hot spot island chain. The movement of the plates is caused by convection currents deep within the Earth. The forces that move the plates around the earth are convection currents inside the mantle. Hotter mantle material rises while cooler material sinks. The crust is split and diverges where the material rises and spreads out. The plates converge and subduct where the material is sinking. The different types of plate boundaries are caused by a combination of the direction of convection as well as the type of crust: continental or oceanic. Puzzle Fit of the continents to form Pangaea (see below) Fossil Evidence (see below) Glacial Evidence (see below) Coal in Antarctica- coal is formed in tropical swamps. Coal was formed when Antarctica was closer to the equator. Magnetic Stripes on the ocean floor (this one's going to take some explaining so it gets its own page) Mountain Chains appear where they should if continents are colliding Puzzle Fit- if the continents were cut out of a map, most of the landmasses would fit together to form a larger supercontinent, which is called Pangaea.
Fossil Evidence- in the picture above, fossils of many land-living organisms have been found on opposite shores. When Pangaea is re-assembled, the fossils match up. Glacial Evidence- when Pangaea is re-assembled, there is evidence of a single ice sheet (at least for this episode) affecting many of the southern continents. When viewed this way, this sheet leaves consistent evidence of a single glacier. When viewed on the current continents, it is inconsistent and even highly improbable. For example, India, which is north of the Equator, has glacial evidence coming from the south! An earthquake is an event where two pieces of crust shift against each other. The rumbling felt is from the rocks slipping, sticking and breaking. The vibrations are called seismic waves. There are different types of seismic waves that vibrate in different ways. The focus is the spot within the earth where the earthquake began. The epicenter is the spot on Earth's surface closest to the focus. A fault is a crack along which the rocks slide. Seismic Waves- during an earthquake, several types of waves are generated. The vibrations felt are actually seismic waves traveling through the Earth. Primary wave- travels phastest so it arrives at seismic stations phirst. Push-pull wave: rock vibrates forward and backward in the same direction that the wave travels ("parallel to propagation"). Passes through solids and liquids (magma). Secondary wave- arrives at a seismic station second. In order to do the calculations that help find the distance to epicenter and the time of the earthquake, you'll need to do math with time. It is difficult to do time math in a calculator, and by hand it does not work exactly the same way as regular math. Regular math is what they call "base 10," which means that whenever you count past 9 you must move over one place to the tens column. Time is base 60. You can count up to 59 seconds and then you go to the minutes column.
For example, if you want to subtract 82 minus 17 in regular base-ten numbers, you would "borrow" a ten and start by taking 7 away from 12:

   82               7 12    (82 is the same as 70 plus 12)
 - 17             - 1  7
 ----             ------
                    6  5

Time math works almost the same way, except instead of taking a ten from the neighboring column you'll take one minute and convert it into 60 seconds:

  3:13:25   ("3 hours, 13 minutes, and 25 seconds")
 -1:09:37

turns into:

  3:12:85   (3 hrs, 12 min, 85 secs is the same as 13 min, 25 secs)
 -1:09:37
  2:03:48

How to Use the P-Wave and S-Wave Travel Time Chart
The P-line shows how much time it takes a P-wave to travel a certain distance. So if you need to know how much time it takes the P-wave to travel 2,000km, it is just over 4 minutes (about 4:05). The S-wave works the same way: for 2,000km it takes 7:20.
To find the distance to epicenter: You are in charge of watching the seismic station tonight when the seismograph detects an earthquake. The earthquake didn't happen where you are- you can't even feel it. As a result, you don't know what distance or direction the earthquake happened. The P-wave and S-wave are separated by 4:05 (4 minutes, 5 seconds). You need to find a spot on the graph where the P-line and the S-line are separated by 4:05. Take a scrap piece of paper, line it up along the left edge of the chart. Put a small tick mark on your scrap paper at zero, and a small tick mark at 4:05. Slide the scrap paper up along the chart until the two tick marks just touch the P and S lines. BE SURE THAT YOUR SCRAP PAPER IS PERFECTLY STRAIGHT UP AND DOWN (use the lines on the grid as a guide). Now that you have found the right spot on the graph, drop a line straight down to the bottom of the graph to read the distance- 2,600km.
To Find The Time That The Earthquake Occurred
When a seismograph detects an earthquake that happened at some distance (2,600km for example), you know that the earthquake happened some time in the past and it took time for the waves to reach your station. But how long ago?
All you need to do is answer the question "how long does it take a P-wave to travel 2,600km?" Find 2,600km on the bottom of the chart. Go straight up until you reach the P-line and read the time from the left of the chart: 5:00 (5 minutes). Now compare times: if you detected the earthquake at 3:17:00 and it took 5:00, then the earthquake happened 5 minutes before 3:17:00, or 3:12:00. The intensity or strength of an earthquake is measured in two main ways: The Richter Scale measures the amount of energy that an earthquake releases. Each number of magnitude is 10x stronger than the number below it. The Mercalli Scale measures the amount of damage from an earthquake. Ranges from I to XII. Based on common earthquake occurrences such as "noticeable by people," "damage to buildings," "chimneys collapse," "fissures open in the ground." Seismic waves as "x-rays": P-Waves travel through solid and liquid. S-Waves travel only through solids. Seismic waves travel faster through denser material. Because of this, the path traveled by a seismic wave is bent towards the surface. Shadow Zone diagram: Photoshop drawing by Phil Medina. Properties of the material (such as density and pressure) that the waves pass through can be inferred from the speed and angle that the waves travel. The layers of the earth are determined by the jumps in velocity and "echoes" of seismic waves. The MOHO is a boundary between the crust and the upper mantle where the velocity of waves jumps up sharply. This sharp increase in velocity is called a discontinuity. A shadow zone occurs on the opposite side of the earth from an earthquake because of the liquid outer core. S-Waves are stopped altogether while the P-Waves are refracted (bent) to create a zone where no waves are picked up at all. This zone is between 102° and 143° around the earth from the earthquake. Lab research and studies of meteorites suggest that the core is made of Iron and Nickel (FeNi).
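The base-60 arithmetic worked through above can also be checked with Python's timedelta, which handles the minute and second borrowing automatically:

```python
from datetime import timedelta

# Origin time: arrival minus the P-wave travel time for 2,600 km.
arrival = timedelta(hours=3, minutes=17, seconds=0)
p_travel = timedelta(minutes=5)
origin = arrival - p_travel
print(origin)  # 3:12:00

# The earlier subtraction example, with the base-60 borrow done for us.
s_p_gap = (timedelta(hours=3, minutes=13, seconds=25)
           - timedelta(hours=1, minutes=9, seconds=37))
print(s_p_gap)  # 2:03:48
```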
Early Warning Sign? Wed Aug 08 19:03:05 BST 2007 by Luther J. Tallen Could the vanishing of coral reefs have been an early warning sign of the earthquake that hit in the Indonesian area today (August 8-9, 2007)? Any temp difference in the water prior to the quake? Sun Aug 19 11:23:11 BST 2007 by Patrick This is a very useful article for my homework and I think that this article is giving me a lot of ideas and it's making it easier for me to do my work. I think it's a very useful article. Mon Dec 03 07:55:33 GMT 2007 by Shamsul Bahari Would it help the coral's chances of survival if artificial roofs or shades were to be laid afloat above coral reefs? These could be in forms of sea weeds or man-made shades like those used in farming. Maybe not if it is the warm water that is the culprit and not the direct heat from the sun. I used to dive a lot along the Terengganu coastal islands and have noticed the tremendous changes from when I was diving in the early sixties and today. What a bloody shame! Yes Thank You! Tue Jan 29 02:15:07 GMT 2008 by Louis P. This helped my essay!!! thank you!!! now my teacher has to believe me ;]
A view can become a page or a block. There is no difference in how a page and a block work. A page needs a path to display the page. A block is assigned to a region to display the page. There might be a few seconds' difference to set them up. After that, you can use blocks in far more ways than are usually shown in tutorials. Blocks have their uses, including constructing complex pages. The content of a page is usually the content of a node in the content region. A view replaces the node to display a list or some other small variation of content. Next you are asked to make the view more complicated. You scratch your head. What you really need is a collection of views displaying different content. You could use Panels and a bunch of other complex approaches. Think about the format again. If you want a list of different views, you can make each one a block, then place them one after the other down the page. Consider a shop. A shop might list the top-selling product in each category with a view sorted by category. The marketing manager then asks for the categories in a different order. The CEO wants some categories to list the top three. The product manager wants one category to list only one product but all the colours for the product. You could make each category a separate view with each view in a block. Each view can have different criteria. You can give your CEO, sales manager, and marketing manager access to the block admin page and let them fight over the sequence of the blocks. What method do you use to determine when to use multiple views? Use one view when you list one set of data. Use multiple views when people want different data selection. For example, some product categories have colours but not all product categories. Some product categories have multiple packs but not all product categories.
If the one list has to be by category, then colour, then pack size, and all the products use the same content type, you could use one view to sort by category, then colour, then pack size. There will be no overlap or conflict or missing data. The same list becomes more complicated when each product category has a different content type and each content type has a different set of attributes. You might be able to produce one list in one view, but it will be sensitive to change, and management will want change. Start with two or more views stacked as blocks to form one list. Another example is a book shop where novels have different descriptions and content types compared to other books. Cook books may have a special content type to allow listing by food type, country, major ingredients, and the reality television show paying for the cook book. If the data is exactly the same for each view but the display is different, consider multiple displays on one view. The technique is described on another page. You make the first view a page to list the data and validate your data selection. You then add some displays to create the blocks. Each display might vary by as little as selecting a different content type. Top-ten lists are popular and soon take up too much space. People start talking about variations. Perhaps the top three items, including teasers, followed by the next seven without teasers.
Revista Brasileira de Biologia Print version ISSN 0034-7108 ARENZON, A.; PERET, A. C. and BOHRER, M. B. C. Growth of the annual fish Cynopoecilus melanotaenia (Regan, 1912) based in a temporary water body population in Rio Grande do Sul State, Brazil (Cyprinodontiformes, Rivulidae). Rev. Bras. Biol. [online]. 2001, vol.61, n.1, pp. 117-123. ISSN 0034-7108. http://dx.doi.org/10.1590/S0034-71082001000100015. The growth of the annual fish Cynopoecilus melanotaenia was studied in its natural environment, in order to obtain information about its biology. A total of 797 specimens of C. melanotaenia were collected on a monthly basis between April 1994 and March 1995 in a temporary water body located in Rio Grande do Sul State, Brazil. The growth curve in total length suggests, for both sexes, fast initial growth. Males present a smaller growth rate than females, but attain a higher average maximum length. Keywords: annual fish; growth; Cynopoecilus melanotaenia; Brazil.
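The abstract does not name the growth model fitted, but fish growth curves of this kind are commonly described with the von Bertalanffy equation. A hypothetical sketch, with all parameter values invented for illustration:

```python
import math

def vb_length(t, l_inf, k, t0=0.0):
    """von Bertalanffy growth: length approaches the asymptote l_inf
    at rate k. A common model for fish growth (assumed here)."""
    return l_inf * (1.0 - math.exp(-k * (t - t0)))

# A lower k paired with a higher l_inf mirrors the abstract's pattern
# for males: smaller growth rate, larger average maximum length.
# (Parameter values below are invented, not from the paper.)
female = vb_length(0.5, l_inf=35.0, k=3.0)  # faster early growth
male = vb_length(0.5, l_inf=40.0, k=2.0)    # slower, but higher asymptote
```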
Work is the transfer of energy. In physics we say that work is done on an object when you transfer energy to that object. For introductory thinking, this is the best definition of work. If you put energy into an object, then you do work on that object. If a first object is the agent that gives energy to a second object, then the first object does work on the second object. The energy goes from the first object into the second object. At first we will say that if an object is standing still and you get it moving, then you have put energy into that object. For example, a golfer uses a club and gets a stationary golf ball moving when he or she hits the ball. The club does work on the golf ball as it strikes the ball. Energy leaves the club and enters the ball. This is a transfer of energy. Thus, we say that the club did work on the ball. And, before the ball was struck, the golfer did work on the club. The club was initially standing still, and the golfer got it moving when he or she swung the club. So, the golfer does work on the club, transferring energy into the club, making it move. The club does work on the ball, transferring energy into the ball, getting it moving. In almost all cases considered when studying mechanical forms of energy, when work is done on an object a force is applied to the object, and the object is displaced while this force is acting upon it. That is, the object moves as a result of a force being placed on it. In the previous golf example the club places a force on the ball, and this force acts on the ball over the short distance through which the club and the ball are in contact as the ball is being hit. Energy is transferred as the force acts over this displacement. The amount of work is calculated by multiplying the force times the displacement. That formula looks like this:

W = F · d

At first we will consider only forces that are aimed in the same direction as the displacement.
For example, we will imagine an object being pushed horizontally to the right, and the object will be moving horizontally to the right as a result of this applied force. Below is an animation that shows just that. The force vector is drawn in blue. It is pushing the object to the right. This force is applied over a displacement. The displacement vector is shown in red. The object starts out standing still. While the force is acting on the object the object picks up speed, that is, it accelerates. When the force quits acting the object quits picking up speed, that is, it quits accelerating. Notice that in the above animation the object picks up speed while the force is acting upon it. This picking up of speed means that the object is gaining more and more energy as the force is acting on it. That is, as the force is acting upon the object, energy is being transferred to the object. Therefore work is being done on the object. Whatever we might imagine is providing the force is the agent that is doing work on the object. In our above discussion the force could be applied by the golf club, and the object in the animation represents the golf ball. This, of course, would need to be thought of as in slow motion! Now, since work is calculated as the product of force times displacement, many different combinations of forces and displacements could yield the same work, or the same energy transfer. For example, in the following animation a larger force acts over a shorter displacement, yet the same amount of work is ultimately done as in our first example above. And in this next animation a smaller force and a larger displacement than are present in the first animation are demonstrated. Again, the same amount of work is done. The same amount of energy is transferred. Later, we will see what happens when a force is applied at an angle to the displacement. For a while, though, we will consider only forces in the same direction as the displacement.
How much work is done if a force of 20 N is used to displace an object 3 m?

| W = F · d | Formula for work. |
| W = (20 N)(3 m) | Plug in values for force and displacement. |
| W = 60 N-m | Work equals 60 units of energy transferred. Looks like the unit for energy transferred, and thus, the unit for energy, is Newton-meter. However, this is not so. |
| W = 60 Joules | Energy units are called Joules; 1 Joule is equal to 1 Newton-meter. A Joule is the MKS metric unit for energy. |
| W = 60 J | Joule is abbreviated J. |

Here are some questions to work with using the above formula.
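The arithmetic in the worked example translates directly to code. A minimal sketch (the `work` helper is a hypothetical name of my own, not from the text):

```python
def work(force_newtons, displacement_meters):
    # W = F * d, valid when the force points along the displacement.
    return force_newtons * displacement_meters

# The worked example: a 20 N force over a 3 m displacement.
print(work(20, 3), "J")  # 60 J
```

Note that a larger force over a shorter displacement, such as 40 N over 1.5 m, transfers the same 60 J of energy, matching the animations discussed above.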
Mesh and bathymetry for the North Sea (left) and resulting tidal range and co-phase lines for the M2 tidal constituent (right). ICOM is being used to study tidal dynamics in modern and geologically ancient basins. The model can be forced astronomically (the equilibrium tide approach) or using oscillating tidal boundary conditions. A mesh for the North Sea and the resulting tidal range and phase for the M2 constituent are shown on the figure on the right. The simulation here was forced at the open boundaries. A mesh for the globe using cartesian coordinates wrapped to a sphere and the resultant ICOM simulation results are shown below. Note the bathymetry-optimised mesh, with enhanced resolution in areas of rapidly changing bathymetry (e.g. the continental slope and mid-ocean ridge regions). The model is forced by the equilibrium tide potential and is solved on the spherical mesh. The results may then be 'unwrapped' to a longitude, latitude grid for visualisation. Terreno mesh of the globe, wrapped to a sphere, focused on the North Atlantic region. Global tidal simulation result for the principal lunar (M2) tidal constituent. Simulation result shown to the left unwrapped to a longitude, latitude grid for ease of visualisation. ICOM is also being used to study tides in geologically ancient seas. The graphics below illustrate a global Cretaceous (115 Ma) example. The palaeogeography is shown on the left, the mesh for the proto-North Atlantic region in the center and the M2 simulated tide on the right. The model was forced with the equilibrium tide potential as in the global example above. Global palaeogeography for the late Aptian, Lower Cretaceous, 115 Ma. Bold lines are mid-ocean ridges and bold lines with triangles are subduction zones. Terreno mesh of the Cretaceous globe, focused on the proto-North Atlantic region. Cretaceous global M2 tide simulation result, focused on the proto-North Atlantic region.
One of my favorite data structures is the binary heap. I first learned about it in my data structures class, and remember marveling at its simplicity and elegance. How could something so simple be so powerful and useful? For those unfamiliar with the binary heap, it is a relatively simple data structure. It is a binary tree, which means each node has at most two children, and on top of this, there are several constraints which are imposed. One is that the children of each node must have a value that is equal to or greater than that of the node itself. This means we have some sort of loose ordering, where nodes closer to the root have a lower value. Secondly, the tree must be complete. This means that all levels of the tree except for the last level must be full, and filled left-to-right. As described, this would be a Min-Heap, since the minimum value is always at the root. If we were to reverse our comparisons, so that the children must be less than or equal to the current node, we would have a Max-Heap with the maximum value at the root. One of the primary uses of a Min/Max Heap is as a Priority Queue. If one inserts nodes based on their "priority", then the nodes with the highest priority are at the root. Below is an example of a Min-Heap, where each node represents a person's age and name. We can see that there is a rough ordering with the youngest person at the root.
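For the curious, Python's standard-library `heapq` module maintains exactly this min-heap property on top of a plain list. A quick sketch of the age/name example (the names and ages here are made up):

```python
import heapq

# Build a min-heap of (age, name) pairs; tuples compare by age first,
# so the youngest person always sits at the root (index 0).
people = [(52, "Dana"), (17, "Ravi"), (34, "Mei"), (61, "Lou")]
heap = []
for person in people:
    heapq.heappush(heap, person)

print(heap[0])              # (17, 'Ravi') -- root of the heap
print(heapq.heappop(heap))  # pops the minimum: (17, 'Ravi')
print(heap[0])              # new root: (34, 'Mei')
```

Both `heappush` and `heappop` run in O(log n) because they only sift a value up or down one root-to-leaf path of the complete tree.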
A class that wraps an array of PointF objects representing a polygon. The class members include:

- RectangleF Bounds
- float MinimumX
- float MaximumX
- float MinimumY
- float MaximumY
- int NumberOfPoints
- bool IsInBounds(PointF pt)
- bool Contains(PointF pt)
- PointF CenterPointOfBounds
- PointF CenterPoint // Not yet implemented
- decimal Area

IsInBounds returns true if the PointF is within the rectangular Bounds of the polygon. Contains returns true if the PointF is actually within the polygon's borders. The code includes both the PolygonF and Polygon classes. Polygon is identical to PolygonF except that it works with an array of Point objects and returns ints instead of floats. First introduced on the blog under Testing to see if a Point is within a Polygon.
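A Contains-style test is most commonly implemented with the ray-casting (even-odd) rule. The sketch below is my own Python illustration of that idea, not the class's actual source:

```python
def contains(polygon, pt):
    """Even-odd ray casting: cast a horizontal ray to the right of pt
    and count how many polygon edges it crosses; odd means 'inside'."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through pt?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
print(contains(square, (2.0, 2.0)))  # True
print(contains(square, (5.0, 2.0)))  # False
```

The separate IsInBounds check is much cheaper (four comparisons against the bounding rectangle), which is why it is useful as a quick rejection test before running the full Contains test.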
Extensive defoliation of deciduous shrubs and trees in Southcentral Alaska has received considerable attention over the last couple of years. Even for those who may not be attentive to what is going on in their natural surroundings, the conspicuous patches of dieback on Skyline and Fuller Lakes Trail on the Kenai National Wildlife Refuge, and at Summit Creek on the Seward Highway, would have been hard to miss. Lately, the Bruce spanworm (Operophtera bruceata) and Epirrita undulata have been the main defoliators of willows, alders, and aspens in the sub-alpine. Birch leafroller (Epinotia solandriana) has been damaging birch on the lowlands, but other herbivorous insects are also involved. Two Old World moths closely related to two of our defoliators (Operophtera brumata and Epirrita autumnata) have been well documented in similar tree line systems in Scandinavia. There, severe outbreaks are cyclical, occurring about every decade and lasting for 2-3 years. The cause of this cyclicity remains unknown, but increasing prevalence of parasitoids and disease within the moth populations over the course of outbreaks does contribute to mortality of moths. In 2012, entomologists with the U.S. Forest Service began investigations on the present defoliation event in Southcentral Alaska, seeking to relate these outbreaks to changes in the Pacific Decadal Oscillation (PDO). The PDO is a 20- to 30-year cycle of variation in sea surface temperatures in the Pacific Ocean and is an important determinant of Southcentral Alaska’s climate. Outbreaks of spruce bark beetles in Southcentral Alaska have been linked to the PDO, tending to occur after a change from the predominantly cool phase to the warm phase of the PDO. The PDO has been mostly in the warm phase since 1978, but appears to have switched to a cool phase around 2008. 
It has been suggested that lower temperatures associated with the cool-phase PDO may favor external-feeding herbivores like the Bruce spanworm and Epirrita undulata. It’s difficult to associate shrub and hardwood defoliation in Alaska to the PDO because we lack long-term quantitative records of defoliation other than the Forest Service’s annual aerial survey data. These data are useful for documenting outbreaks and determining the cause of outbreaks, but the surveys are limited in spatial extent, only covering the narrow path which was flown, and they are spatially inconsistent, not always covering the same areas every year. To overcome these limitations, I analyzed the thirteen years (2000-2012) of available imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite to look for patterns of insect-caused defoliation over the Kenai Peninsula. I used MODIS’ Enhanced Vegetation Index (EVI) product, a measure of the concentration of active chlorophyll (that is, greenness) that is updated every two weeks. The idea behind the analysis is simple. Plants are less green in years when they are heavily damaged by foliage-eating caterpillars than in normal years. To detect defoliation, I first made a composite image representing near maximum productivity by selecting, for each pixel, the third highest value of EVI observed over the 13 years. Within each growing season, I identified pixels having EVI values at least 20 percent lower than the composite image as having been potentially defoliated. I removed water, snowfields, barren areas, and recent burns from consideration. I was able to detect defoliation events generally in agreement with the Forest Service’s aerial survey data, though my current algorithm appears to overestimate the area damaged compared to the aerial survey data. 
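The compositing and thresholding steps described above can be sketched with NumPy (a simplified illustration using a made-up toy array, not the author's actual processing code):

```python
import numpy as np

def defoliation_mask(evi_stack, drop_fraction=0.20):
    """Flag pixels whose EVI falls at least `drop_fraction` below a
    near-maximum composite.

    evi_stack: (years, rows, cols) array, one EVI value per pixel per year.
    The composite takes the third-highest value per pixel across years.
    """
    composite = np.sort(evi_stack, axis=0)[-3]  # third-highest per pixel
    return evi_stack < (1.0 - drop_fraction) * composite

# Toy example: one pixel observed over four "years"; only the last year
# drops more than 20% below the composite (0.6) and gets flagged.
evi = np.array([0.60, 0.65, 0.70, 0.40]).reshape(4, 1, 1)
print(defoliation_mask(evi)[:, 0, 0])  # flags only the low year
```

In the real analysis, water, snowfields, barren areas, and recent burns would be masked out before applying this threshold.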
One reason for this discrepancy is that my MODIS-derived maps provide no information as to the agents that reduced greenness, which could include diverse herbivores, plant diseases, persistent snow (retarded phenology), and urbanization. Upon first perusal of the satellite-derived defoliation maps, I did not perceive evidence of cyclicity of defoliation events within the 13 years considered. The amount of area damaged was high in the years 2005, 2009, and 2011-2012, with 2012 showing the most area affected (680,000 acres) of all years. In 2012, there appeared to be large areas affected by defoliators in the Kenai Mountains, in the hills above Homer, and on the south side of Kachemak Bay. I do not know why 2012 was such a big year for defoliation. Perhaps especially deep snow cover over the winter of 2011-2012 could have provided unusually good insulation for the moths’ eggs, which spend the winter on branches of the host plants where they would usually be exposed to extreme cold. In 2013, I will be collaborating with Forest Service entomologists in their investigations of current defoliator outbreaks in the field. I will also be using MODIS imagery to identify areas of potential defoliation early in the season so that these places can be verified during aerial surveys. Despite its limitations, this simple MODIS method appears to be a useful tool for looking at defoliation events, and it may also be of value for monitoring the dieback or recovery of defoliated areas. Matt Bowser serves as Entomologist at the Kenai National Wildlife Refuge. You can find more information about the refuge at http://kenai.fws.gov or http://www.facebook.com/kenainationalwildliferefuge.
Geostationary satellites are essentially there forever. This is becoming a problem since there are a limited number of places you want to put a geostationary satellite, and most of them are full. Any collisions/explosions in geostationary orbit mean debris will also stay there for a long, long time. For low Earth orbit satellites it depends on their shape, altitude and the space weather. The worst case is a large object in low orbit with large solar panels and hence a lot of drag. The ISS loses 90 m/day and must be constantly boosted; as the orbit gets lower the drag is worse and the falling accelerates - when first launched, the ISS would lose 500 m/day at its lower orbit. The minimum safe orbit is around 150 km; anything that falls to this level will quickly succumb to atmospheric drag.

edit: Hubble is at an orbit of around 600 km. With no more service missions after the end of the Space Shuttle it will re-enter in 10 to 20 years, i.e. 2020-2030.
CRUST & LITHOSPHERE: THE ANELASTIC EARTH: THE MANTLE TRANSITION ZONE: Most seismologists focus on the elastic properties of the Earth like seismic wave speed. I prefer to examine seismic attenuation (a form of anelastic energy loss) and compare it to seismic velocity, which can help distinguish the cause of seismic anomalies in the Earth. The mantle transition zone is a region in the Earth where seismic velocities and density increase rapidly with depth/pressure. Discontinuities at about 410, 520, and 660 km depth are measured with energy that reflects and converts at depth. Better constraints on these discontinuities can help improve our understanding of how Earth cools itself. If the density contrast is too high, it will block material from flowing past this interface. Receiver Functions: Receiver functions are waveforms that characterize the Earth's shear response to teleseismic P-waves. By modeling receiver functions with velocity profiles, we can measure crustal and mantle layer thicknesses and velocity underlying a broadband seismometer. Typically, seismologists use a non-unique, calculus-based, linearized inversion to determine the most appropriate model for a receiver function. By inverting for structure with both receiver functions and surface wave group/phase velocities, the non-uniqueness is reduced. I applied a non-linear method, called the niching genetic algorithm, to invert both phase velocities and receiver functions for velocity models. At the cost of increased computation time, this method imposes fewer a priori constraints, and yields excellent results without over-parameterizing. Chilean Patagonia: The Seismic Experiment in Patagonia and Antarctica deployed five of ten broadband seismometers in Chilean Patagonia from 1997 to 1999.
Using the niching genetic algorithm method I was able to determine that the crust thinned from ~32 km to ~24 km and then infilled with sediment to restore crustal thickness to ~28 km at present day [Lawrence and Wiens, 2003]. The region appears to be in current mass balance. This region has anomalously low topography relative to the rest of the Andes. In fact, the region underwent extension, which thinned the crust, rather than compression, which thickens the crust elsewhere. The southern end of South America has rotated to the east as a result of Scotia Plate motion. The stress and strain related to this rotation may have resulted in the extension which thinned crust here. The Transantarctic Mountains: The Transantarctic Mountains are the largest non-compressional mountain range in the world. They span over 4000 kilometers, marking a 200-kilometer boundary between East Antarctica and West Antarctica. Over the years, much debate has centered over what caused the Transantarctic Mountains to form. Due to the extreme cold and ice cover, it is difficult to directly examine most parts of Antarctica. Therefore, it is crucial to use remote sensing. From 2000 to 2003 the Transantarctic Mountain Seismic Array (TAMSEIS) deployed 41 broadband seismometers from the Ross Sea to the Vostok Subglacial Highlands. Using surface wave phase velocities [Lawrence et al., 2005a], S-wave attenuation, and receiver functions I was able to map out mantle temperatures [Lawrence et al., 2005b] and crustal thickness. The crust thins from ~35 km in East Antarctica to ~20 km in West Antarctica. Beneath the Transantarctic Mountains, in the Dry Valleys region [Lawrence et al., 2005c], there is an anomalously small crustal root considering the large topography (4 kilometers). Using gravity and topography with the known crustal thickness we are able to better constrain the geodynamic processes that caused the mountain uplift.
First, the temperature difference calculated from seismic anomalies (~300 degrees K) is necessary to make the East Antarctic mantle dense enough to account for the observed topography [Lawrence et al., 2005c]. Second, the rapid change in temperature between East Antarctica and West Antarctica likely caused the Mountains to uplift. Differential Measurements: By comparing two teleseismic body waves from the same earthquake, it is possible to both localize regions with seismic anomalies and improve the measurements themselves. The method uses waves with similar ray path geometries to reduce the influence of structure along similar ray paths, and accentuate effects of structure in locations where the paths diverge most. The cross-correlation method produces travel-time residuals, which correspond to isolated anomalous velocities. Spectral division yields a differential attenuation (or dampening) measurement (dt*). D" Attenuation: The lowermost mantle, called D", is a region of extreme heterogeneity on lateral scales from 100s to 1000s of kilometers. The core-mantle boundary is a chemical, thermal, and dynamic boundary layer between the iron core and the silicate mantle. Consequently, measurable anomalous structures occur. For example, beneath Central America, I measured a ~250 km wide velocity and quality factor anomaly which may be characterized as thermal in nature [Fisher et al., 2003]. Radial Quality Factor: (QLM9) Over 30,000 differential ScS-S attenuation measurements were used to calculate the first radial quality factor model with high sensitivity in the lower mantle. The structure is unique, and shows that the Earth is more anelastic with greater radius. The model shows that attenuation is high just above the core-mantle boundary, where high temperatures (due to heating from the core) likely increase the anelasticity [Lawrence and Wysession, 2005a].
Global Quality Factor: (VQM3DA) Using over 70,000 differential measurements we invert for the 3D quality factor structure of the Earth. By analyzing both velocity and quality factor on the same scale, we can more easily understand the source of the heterogeneity. Quality factor is highly dependent upon temperature and water content, so observation of large anomalies indicates one or the other. Depending on how velocity changes, it is possible to infer which one [Lawrence and Wysession, 2006b]. Transition Zone Topography: Seismic interfaces at about 410 and 660 kilometers depth are known to have significant topography [Flanagan and Shearer, 1998]. However, previous evidence led scientists to believe that different measurement techniques (SdS underside reflections and Pds - P-to-S converted waves) resulted in different estimates of global topography. We demonstrated that the two techniques do actually yield the same average and lateral variations in transition zone thickness [Lawrence and Shearer, 2005]. North American Structure: Seismic attenuation and travel times are useful in-situ measurements of the mantle. Using them together we can delineate differences between thermal and water concentration anomalies. In the southern half of the United States attenuation and travel times are correlated. In the northern part they are not. The southern positive correlation likely indicates colder mantle temperatures beneath the old and stable eastern US in comparison to warm temperatures beneath the recent tectonic activity in the west. The presence of water may play a large role in the dynamics of North America. In the far northwest, active subduction likely transports a large amount of water into the mantle, where it weakens the mantle, causing elevated attenuation.
In the surrounding Cascade Mountains/Volcanoes and the Yellowstone/Wyoming area, recent volcanism may have caused melt to suck up all the mantle hydrates, resulting in lower attenuation [Lawrence et al., 2006]. Ambient Noise Tomography: The common coherent signal of "noise" sources recorded on a pair of seismometers can be used to image the structure between the two seismometers. By adding up the many coherent noise sources from microseisms and wind on our shores, the coherent signal is recovered, and the incoherent signal cancels out. The resulting signal contains information pertaining to the structure between the two seismometers such as seismic velocity and seismic amplitude. By calculating many inter-station Green's functions, we can then invert for high-resolution maps of subsurface structure.
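The inter-station Green's function idea can be illustrated with a toy cross-correlation on synthetic data (a hedged sketch of the principle only; real ambient-noise processing involves long stacks, spectral whitening, and instrument corrections):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stations record the same random "noise" wavefield, but station B
# sees it delayed by 5 samples (the inter-station travel time).
noise = rng.standard_normal(2000)
station_a = noise
station_b = np.roll(noise, 5)  # delayed copy of the same field

# Cross-correlating the two records concentrates the coherent energy
# at the propagation lag, approximating the inter-station travel time.
xcorr = np.correlate(station_b, station_a, mode="full")
lags = np.arange(-len(noise) + 1, len(noise))
print("recovered delay:", lags[np.argmax(xcorr)], "samples")
```

Stacking many such correlations over days or months is what lets the coherent inter-station signal emerge while the incoherent noise cancels out, as described above.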
Trinidad's Pitch Lake

The Caribbean country of Trinidad and Tobago has a natural wonder of the world - a Pitch Lake. It's a basin of emulsified asphalt. Pauline Newman reports on efforts to study the area and the hope of discovering new forms of life.

Robyn Williams: The Caribbean country of Trinidad and Tobago has its own natural wonder of the world, Pitch Lake, a basin of emulsified asphalt beside the village of La Brea on the island of Trinidad. It's an incongruous sight; 95 acres of semi-solid black lake, surrounded by reeds and cashew nut trees. And it's quite a tourist attraction; 20,000 people a year visit, navigating the humps and sinks in the surrounding roads where veins of pitch from the deep oil-rich rocks that feed the lake force their way to the surface.

Pauline Newman: So even up on the hill as we were driving down we saw the pitch coming up underneath people's yards.

Lindsey Passey: That's right because it's going right in their yards themselves. The village of La Brea is said to be a village of pitch. Any part of the village that you go, you will see pitch. Everybody in this village really do believe that if they stop mining this Pitch Lake, let's say, 10, 15, 20 years, the whole thing will start coming right back up again until it reaches the level.

Robyn Williams: There are only a handful of sites in the world where asphalt rises from the depths and collects in pools on the surface, and Pitch Lake is the most important commercially. The pitch is exported all over the world to be turned into roads and insulating compounds. But until very recently no one suspected the pitch could harbour life, so no one ever looked. Now, however, astrobiologists interested in strange species that exist in unlikely places have discovered microbes growing in the asphalt. Pauline Newman is visiting Trinidad and exploring the unusual lake to see what the scientists are so very excited about.
Lindsey Passey: My name is Lindsey Passey, licensed tour guide at your service. Okay, come along. Are you ready for this?

Pauline Newman: To reach the Pitch Lake itself we had to cross one of the many clear rainwater-filled pockets and cracks that break the surface. Dragonflies swooped as we rolled up our trousers and waded through pink and purple water nymph lilies past tiny flitting fish.

Lindsey Passey: Honestly, my friends, doctors highly recommend this water for bathing purposes due to the sulphur contents into it. This water helps take care of all minor skin disease, pimples, mosquito bites, even a disease by the name of psoriasis. If you wish to take a bath, well, it's up to you.

Pauline Newman: Creamy yellow sulphur paste collects in patches and the lake can smell of rotten eggs. Its surface hisses and bubbles.

Lindsey Passey: Now, look at these areas here. These humps are really caused by natural gas, methane, forcing itself from the bottom to reach to the surface. You hear a sort of hissing sound when the pressure is very high. You strike a match over it you will see a flame of fire.

Pauline Newman: The Pitch Lake is thought to contain 10 million tonnes of asphalt. It oozes to the surface in patches where it spreads out and dries.

Lindsey Passey: This Pitch Lake was discovered by Sir Walter Raleigh in the year 1595. During that time of the year, my friends, the Pitch Lake was at the level of the road. After all the years they have been extracting pitch, that is what remains.

Pauline Newman: Enough to last another 400 years at the present rate of extraction. The hotter the sun, the more liquid the pitch becomes, and the more dangerous.

Lindsey Passey: Within one week's time, honestly, the whole thing fills itself right back up again.

Pauline Newman: Birds and even cows searching for water in the dry season get stuck and sucked under. All in all, a site not exactly suited to life, you might think.
But physicist Shirin Haque from the University of the West Indies in Trinidad thought otherwise. She leads an international team of astrobiologists who've recently started looking for living organisms in the pitch.

Shirin Haque: The interesting link is that it forms an analogue site for Titan, one of Saturn's moons where you have hydrocarbon lakes out there, and right here on planet Earth we have a natural hydrocarbon lake in the Pitch Lake. So using that type of analogy we are interested in what kind of organisms, microbes can survive in this kind of extreme condition.

Pauline Newman: Discovering life in the asphalt was a long shot, yet that's what Dr Haque and her colleagues seem to have found.

Shirin Haque: To date our samples have probably been about one foot deep into the Pitch Lake, and our initial results have shown the existence of microbes. We've done some deeper sampling where they have active mining, and even that has shown some. So these results need to be confirmed and repeated. But we have been sampling and running it through DNA sequencing as well as checking whether we have anaerobic or aerobic microbes existing there, which brings up as many questions as it might answer.

Paul Davies: It's a very, very basic problem.

Pauline Newman: Paul Davies is a fellow physicist and astrobiologist now at Arizona State University in the US. He was visiting the site.

Paul Davies: Are we just dealing with a surface phenomenon where biology has invaded and inhabited the surface of the pitch which has all sorts of chemical goodies lying around, or is there something going on at depth? Is there some subsurface biology of a form that might be very exotic?

Pauline Newman: Any surface biology would surely rely on oxygen, but oxygen won't be able to penetrate very far into the pitch, so if there is life beneath the surface it would have to get its energy by other means.
Paul Davies: The subsurface ecosystems that have been studied so far (for example, in deep aquifers) these make use of hydrogen as the ultimate source of metabolism and the hydrogen is produced either by radioactivity or by water passing over hot rocks. The question is whether in this sort of petrochemical Mecca that we see all around us here, whether there is a source of free hydrogen as well that subsurface micro-organisms could utilise.

Pauline Newman: And that's one of the things that Riad Hosein is looking for. Riad is Dr Haque's student.

Riad Hosein: I'm from the department of chemistry, I'm here to investigate the chemical basis for the biological phenomena that goes on in the Pitch Lake. And within 2008 we should get some very positive results, from what I've seen thus far.

Pauline Newman: At the very least the results will be interesting.

Paul Davies: It could be something totally unique because this is almost a unique setting in the world. I think it has been very little studied. There could be many, many surprises coming out of this place.

Pauline Newman: But as for the villagers of La Brea, whatever the scientists find, they'll continue to enjoy their oozing black lake as they have for generations. So this I would imagine would be a wonderful playground for the village children. I guess they're not allowed here.

Lindsey Passey: Yes, yes, especially in the dry season when they come on here and play cricket or football or soccer. When there is a crisis of water within the village some of the villagers come down here to do their laundry, they spread it on the pitch, they go and bathe, and by the time they finish bathing their laundry is dry, they pull it up and they head for home. Living is very nice out on this Pitch Lake.

Robyn Williams: Lindsey Passey from the Trinidad village of La Brea ending that report from Pauline Newman.

- Pauline Newman, Hugh Downs School of Human Communication, Arizona State University
- Robyn Williams
- David Fisher
In seismology, a microseism is defined as a faint earth tremor caused by natural phenomena. The term is most commonly used to refer to the dominant background seismic noise signal on Earth, which is mostly composed of Rayleigh waves and caused by water waves in the oceans and lakes. Thus a microseism is a small and long-continuing oscillation of the ground.

Detection and characteristics

Microseisms are very well detected and measured by means of a broad-band seismograph, and can be recorded anywhere on Earth. Dominant microseism signals from the oceans are linked to characteristic ocean swell periods, and thus occur between approximately 4 to 30 seconds. Microseismic noise usually displays two predominant peaks. The weaker is for the larger periods, typically close to 16 s, and can be explained by the effect of surface gravity waves in shallow water. These microseisms have the same period as the water waves that generate them, and are usually called 'primary microseisms'. The stronger peak, for shorter periods, is also due to surface gravity waves in water, but arises from the interaction of waves with nearly equal frequencies but nearly opposite directions. These tremors have a period which is half of the water wave period and are usually called 'secondary microseisms'.
A slight, but detectable, incessant excitation of the Earth's free oscillations, or normal modes, with periods in the range 30 to 1000 s is also caused by water waves, and is often referred to as the "Earth hum". This hum is probably generated like the secondary microseisms but from the interaction of infragravity waves. As a result, from the short period 'secondary microseisms' to the long period 'hum', this seismic noise contains information on the sea states. It can be used to estimate ocean wave properties and their variation, on time scales of individual events (a few hours to a few days) to their seasonal or multi-decadal evolution. Understanding these signals, however, requires a basic understanding of the microseism generation processes.

Generation of 'secondary' microseisms

The interaction of wave trains of different frequencies and directions generates wave groups. For waves propagating almost in the same direction, this gives the usual sets of waves that travel at the group speed, which is slower than the phase speed of water waves (see animation). For typical ocean waves with a period around 10 seconds, this group speed is close to 10 m/s. In the case of opposite propagation directions the groups travel at a much larger speed, which is now 2π(f1+f2)/(k1-k2), with k1 and k2 the wave numbers of the interacting water waves. For wave trains with a very small difference in frequency (and thus wavenumbers), this pattern of wave groups may have the same velocity as seismic waves, between 1500 and 3000 m/s, and will excite acoustic-seismic modes that radiate away.
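To see how a small frequency difference produces seismic-scale pattern speeds, one can plug the deep-water dispersion relation k = (2πf)²/g into the group-speed expression 2π(f1+f2)/(k1-k2) (an illustrative calculation of my own, not from the article):

```python
from math import pi

g = 9.81  # gravitational acceleration, m/s^2

# Two nearly identical ~10 s swells travelling in opposite directions.
f1, f2 = 0.100, 0.099  # Hz

# Deep-water dispersion: k = (2*pi*f)^2 / g for each wave train.
k1 = (2 * pi * f1) ** 2 / g
k2 = (2 * pi * f2) ** 2 / g

# Speed of the interference pattern: 2*pi*(f1 + f2) / (k1 - k2).
c = 2 * pi * (f1 + f2) / (k1 - k2)
print(f"pattern speed: {c:.0f} m/s")  # ~1560 m/s, comparable to seismic speeds
```

Algebraically this reduces to c = g/(2π·Δf) for deep water, so the smaller the frequency difference Δf between the opposing trains, the faster the pattern travels.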
As far as seismic and acoustic waves are concerned, the motion of ocean waves is, to the leading order, equivalent to a pressure applied at the sea surface. This pressure is nearly equal to the water density times the wave orbital velocity squared. Because of this square, it is not the amplitude of the individual wave trains that matters (red and black lines in the figures) but the amplitude of the sum, the wave groups (blue line in figures). Real ocean waves are composed of an infinite number of wave trains and there is always some energy propagating in the opposite direction. Also, because the seismic waves are much faster than the water waves, the source of seismic noise is isotropic: the same amount of energy is radiated in all directions. In practice, the source of seismic energy is strongest when there is a significant amount of wave energy travelling in opposite directions. This occurs when swell from one storm meets waves with the same period from another storm, or close to the coast due to coastal reflection. Depending on the geological context, the noise recorded by a seismic station on land can be representative of the sea state close to the station (within a few hundred kilometers, for example in Central California), or a full ocean basin (for example in Hawaii). In order to understand the noise properties, it is thus necessary to understand the propagation of the seismic waves.

Form of Rayleigh waves modified by the ocean layer: free waves and forced waves

The waves that compose most of the secondary microseismic field are Rayleigh waves.
Both water and solid Earth particles are displaced by the waves as they propagate, and the water layer plays a very important role in defining the celerity, group speed and the transfer of energy from the surface water waves to the Rayleigh waves.
<urn:uuid:eb4211a1-e78a-45d6-a65c-fe91f07dc169>
3.984375
1,203
Knowledge Article
Science & Tech.
38.01347
The U.S. generates 190 million tons of solid waste a year — enough to fill a bumper-to-bumper convoy of garbage trucks halfway to the moon. So why aren't we up to our necks in garbage? Nature recycles garbage all the time, and this recycling is essential to the availability of nutrients for living things. Decomposers are tiny bacteria and fungi, which break down plant and animal waste, making nutrients available for other living things in the process. This is known as decomposition. Decomposition involves a whole community of large and small organisms that serve as food for each other, clean up each other's debris, control each other's populations and convert materials to forms that others can use. The bacteria and fungi that initiate the recycling process, for example, become food for other microbes, earthworms, snails, slugs, flies, beetles and mites, all of which in turn feed larger insects and birds. You can think of the Decomposition Column as a miniature compost pile or landfill, or as leaf litter on a forest floor. Through the sides of the bottle you can observe different substances decompose and explore how moisture, air, temperature and light affect the process. Many landfills seal garbage in the earth, excluding air and moisture. How might this affect decomposition? Will a foam cup ever rot? What happens to a fruit pie, or a tea bag? Which do you think decomposes faster, banana peels or leaves? If you add layers of soil to the column, how might they affect the decomposition process? What would you like to watch decompose?
<urn:uuid:0fc53d20-55fd-460e-8880-7a21fec6408a>
4.0625
357
Knowledge Article
Science & Tech.
54.505386
...to that of paramagnetic materials. Above a temperature called the Néel temperature, thermal motions destroy the antiparallel arrangement, and the material then becomes paramagnetic. Spin-canted (anti)ferromagnetism is a special condition which occurs when antiparallel magnetic moments are deflected from the antiferromagnetic plane, resulting in a weak net magnetism. Hematite...
<urn:uuid:56001932-e03a-40e7-9758-fa8ac3d9ea7e>
2.90625
144
Knowledge Article
Science & Tech.
28.413871
A 10 m long pipe has an inside radius of 70 mm, an outside radius of 80 mm, and is made of stainless steel (k = 15 W/(m·°C)). Its inside surface is held at 150 °C, while its outside surface is at 30 °C. There is no heat generation, and steady-state conditions hold. Compute the rate at which heat is being transferred across the pipe wall.
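One way to work this out is the standard steady-state solution for radial conduction through a cylindrical shell, Q = 2πkL(T₁ − T₂)/ln(r₂/r₁); the numbers below come straight from the problem statement:

```python
import math

k = 15.0       # thermal conductivity of stainless steel, W/(m*degC)
L = 10.0       # pipe length, m
r_in = 0.070   # inside radius, m
r_out = 0.080  # outside radius, m
T_in = 150.0   # inside surface temperature, degC
T_out = 30.0   # outside surface temperature, degC

# Steady radial conduction through a cylindrical shell (no generation):
# Q = 2*pi*k*L*(T_in - T_out) / ln(r_out/r_in)
Q = 2 * math.pi * k * L * (T_in - T_out) / math.log(r_out / r_in)
print(f"Q = {Q/1000:.0f} kW")  # roughly 847 kW
```

Note the logarithmic resistance term: because the shell is thin (70 to 80 mm), ln(r₂/r₁) is small and the heat rate is correspondingly large.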
<urn:uuid:928ff0c8-9e84-4936-b615-fdf38ddd7a54>
2.96875
93
Q&A Forum
Science & Tech.
76.91375
In the last MVC post we looked at Filter Providers but didn't really describe what filters were. Filters are special classes which can execute some functionality at certain points during each MVC request. Historically most filters have been attributes that you can apply to an action method or a controller class (though not all are attributes, as Controllers can also act as filters). Before talking about filters it's worth looking at how a request in MVC is handled. At the lowest level, all web applications behave the same way:

- A browser (or some other client) issues an HTTP request to your application
- Your application performs some processing based on the incoming request
- The application sends an HTTP response back to the client (this can be HTML, XML, JSON or even just a simple status code)

In an ASP.NET MVC application the incoming request is mapped to a method (called an Action) on a class (called the Controller). Once the controller has been created and the correct action method selected, it is the job of the Action Invoker to run the method (action), retrieve its return value (result) and send something sensible back to the client. Along the way the action invoker allows filters to get involved at specific points in the process (and possibly alter it). There are 4 types of filter:

- Authorization filters are executed up front and may prevent the action method from being executed at all if they decide that the user is "not authorized" (implements IAuthorizationFilter)
- Action filters are executed both before and after the action method is executed (implements IActionFilter)
- Result filters run once the action method returns a result and that result is executed; a Result filter can run code both before and after this happens (implements IResultFilter)
- Exception filters run if something goes wrong at any time during the process (implements IExceptionFilter)

I'll describe each of the filter types in future posts. Each filter is represented in MVC 3 as an object of the Filter class. This class contains 3 properties.
The first (called Instance) holds an object that can implement any combination of the 4 filter interfaces defined above (it can even implement none of them, but then your filter will never actually execute). The other two properties (Order and Scope) relate to the order in which filters are run. Filters are run in a very specific order: they are sorted first by their ascending Order value. If two filters have the same Order value then they are sorted by their Scope value in the following order: First, Global, Controller, Action, Last. As an example here is the forwards order of a few filters:

- Order = –1, Scope = First
- Order = –1, Scope = Global
- Order = –1, Scope = Last
- Order = 0, Scope = Controller
- Order = 0, Scope = Action
- Order = 1, Scope = First

Note that Action Filters and Result Filters are actually run twice (once before and once after the thing they are filtering). When these filters are running before the action/result to which they apply, they are run in forward order. When they are running after the action/result to which they apply, they are run in the reverse order. The default filter providers will create filters with the following orders and scopes:

- ControllerInstanceFilterProvider – Exposes the controller as a filter with an order of Int32.MinValue and a Scope of First. This means that the controller will always be the first filter.
- FilterAttributeFilterProvider – Finds filter attributes (inheriting from FilterAttribute) on the controller and on the action method and exposes them using the Controller and Action scope accordingly. The Order value is derived from a property on the attribute itself (FilterAttribute.Order) and defaults to –1. In fact, if you try to set a filter attribute's order to less than –1 it will throw an exception.
- GlobalActionFilterProvider – Provides filters in the Global scope and again derives the order from the filter you provide (which defaults to –1). Unlike the FilterAttribute.Order property, you can set a value less than –1.
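The sorting rule described above can be sketched as a plain two-key sort. The numeric scope values below (First=0, Global=10, Controller=20, Action=30, Last=100) mirror MVC's FilterScope enumeration, but treat this as an illustrative model rather than the framework's actual implementation:

```python
# Hypothetical model of MVC 3 filter ordering: ascending Order, then Scope.
SCOPE = {"First": 0, "Global": 10, "Controller": 20, "Action": 30, "Last": 100}

filters = [
    {"order": 0,  "scope": "Action"},
    {"order": -1, "scope": "Last"},
    {"order": 1,  "scope": "First"},
    {"order": -1, "scope": "First"},
    {"order": 0,  "scope": "Controller"},
    {"order": -1, "scope": "Global"},
]

def forward_order(fs):
    """Forward (before-execution) order: sort by Order, then by Scope.
    After-execution filters would simply run in reversed(forward_order(fs))."""
    return sorted(fs, key=lambda f: (f["order"], SCOPE[f["scope"]]))

for f in forward_order(filters):
    print(f"Order = {f['order']}, Scope = {f['scope']}")
```

Running this reproduces the six-item example list from the post, which is a handy sanity check when reasoning about where a custom filter will land.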
Finally, filters can specify whether or not they will allow multiple instances by setting the property IMvcFilter.AllowMultiple (filters derived from FilterAttribute will provide this value based on any supplied AttributeUsage attribute they have). If filters allow multiples then each instance of the filter that applies will be run for a given action/result. If a filter does not allow multiples then only one filter of the specified type will be executed: the one which is kept is the one with the highest Order and Scope (i.e. the one which would normally be executed last). Filters are a great feature of MVC that allow you to create a very orthogonal design for your application. Although the initial implementation of filters put a lot of code inside of attributes, I don't recommend doing that in MVC 3. Instead you should create marker attributes and use FilterProviders to apply your filter implementation classes where they're needed. In the coming posts I'll look at each of the different types of filters (Authorization, Action, Result and Exception) along with some examples of each from the MVC 3 framework. I'll also discuss the circumstances under which you might like to create your own filters and how you'd go about doing so. One last thing: there are a few attributes that ship in the framework and get applied to action methods but are not Filters. It's easy to get these mixed up, so I'll list some here. NonActionAttribute is an example of an Action Method Selector and ActionNameAttribute is an Action Name Selector. Both of these types of attributes get involved in the MVC pipeline before the Action Invoker executes, so they are not filters.
<urn:uuid:55fab0cb-2fa7-409d-8229-2bccf447290c>
3
1,221
Personal Blog
Software Dev.
41.952358
Creation of the Latent Image on the Film

An emulsion holding grains of photosensitive chemical compounds called silver halides is spread over a film or other material. Light coming through the camera lens from an object being photographed strikes certain areas of the film, rendering the silver halide grains in those areas unstable. This creates an invisible, or latent, image of the object on the film. The areas of the latent image that receive the most light contain the largest number of unstable grains. Upon development they become the darkest areas of the visible image. Conversely, areas that receive little light form the bright parts of the visible image.

The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
<urn:uuid:f87e1b54-d2a4-42b6-8972-60fb1bb150b3>
3.5
186
Knowledge Article
Science & Tech.
34.612178
Avoid arbitrary limits on the length or number of any data structure, including file names, lines, files, and symbols, by allocating all data structures dynamically. In most Unix utilities, "long lines are silently truncated". This is not acceptable in a GNU utility. Utilities reading files should not drop NUL characters, or any other nonprinting characters, including those with codes above 0177. The only sensible exceptions would be utilities specifically intended for interface to certain types of terminals or printers that can't handle those characters. Whenever possible, try to make programs work properly with sequences of bytes that represent multibyte characters; UTF-8 is the most important. Check every system call for an error return, unless you know you wish to ignore errors. Include the system error text (from strerror, or equivalent) in every error message resulting from a failing system call, as well as the name of the file if any and the name of the utility. Just "cannot open foo.c" or "stat failed" is not sufficient. Check every call to realloc to see if it returned zero. Check realloc even if you are making the block smaller; in a system that rounds block sizes to a power of 2, realloc may get a different block if you ask for less space. In Unix, realloc can destroy the storage block if it returns zero. GNU realloc does not have this bug: if it fails, the original block is unchanged. Feel free to assume the bug is fixed. If you wish to run your program on Unix, and wish to avoid lossage in this case, you can use the GNU malloc. You must expect free to alter the contents of the block that was freed. Anything you want to fetch from the block, you must fetch before calling free. If malloc fails in a noninteractive program, make that a fatal error. In an interactive program (one that reads commands from the user), it is better to abort the command and return to the command reader loop. This allows the user to kill other processes to free up virtual memory, and then try the command again.
Use getopt_long to decode arguments, unless the argument syntax makes this unreasonable. When static storage is to be written in during program execution, use explicit C code to initialize it. Reserve C initialized declarations for data that will not be changed. Try to avoid low-level interfaces to obscure Unix data structures (such as file directories, utmp, or the layout of kernel memory), since these are less likely to work compatibly. If you need to find all the files in a directory, use readdir or some other high-level interface. These are supported compatibly by GNU. The preferred signal handling facilities are the BSD variant of signal and the POSIX sigaction function; the USG signal interface is an inferior design. Nowadays, using the POSIX signal functions may be the easiest way to make a program portable. If you use signal, then on GNU/Linux systems running GNU libc version 1, you should include bsd/signal.h instead of signal.h, so as to get BSD behavior. It is up to you whether to support systems where signal has only the USG behavior, or give up on them. In error checks that detect "impossible" conditions, just abort. There is usually no point in printing any message. These checks indicate the existence of bugs. Whoever wants to fix the bugs will have to read the source code and run a debugger. So explain the problem with comments in the source. The relevant data will be in variables, which are easy to examine with the debugger, so there is no point moving them elsewhere. Do not use a count of errors as the exit status for a program. That does not work, because exit status values are limited to 8 bits (0 through 255). A single run of the program might have 256 errors; if you try to return 256 as the exit status, the parent process will see 0 as the status, and it will appear that the program succeeded. If you make temporary files, check the TMPDIR environment variable; if that variable is defined, use the specified directory instead of /tmp.
In addition, be aware that there is a possible security problem when creating temporary files in world-writable directories. In C, you can avoid this problem by creating temporary files in this manner: fd = open (filename, O_WRONLY | O_CREAT | O_EXCL, 0600); or by using the mkstemps function from Gnulib (see mkstemps in Gnulib). In bash, use set -C (long name noclobber) to avoid this problem. In addition, the mktemp utility is a more general solution for creating temporary files from shell scripts (see mktemp invocation in GNU Coreutils).
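For comparison (this is not part of the GNU standards text), Python's standard library wraps the same O_EXCL-style safe creation in the tempfile module, which also honors the TMPDIR environment variable:

```python
import os
import tempfile

# mkstemp opens the file with O_CREAT | O_EXCL and mode 0600,
# avoiding the world-writable-directory race described above,
# and it consults TMPDIR when choosing the directory.
fd, path = tempfile.mkstemp(suffix=".tmp")
try:
    os.write(fd, b"scratch data\n")
finally:
    os.close(fd)
    os.unlink(path)  # clean up: remove the file when done
print("temporary file created and cleaned up")
```

The same pattern (create exclusively, use, unlink) is what the C `open(..., O_CREAT | O_EXCL, 0600)` snippet above achieves directly.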
<urn:uuid:bdcc8354-cc69-4fdc-8ca4-2fc21828e870>
2.96875
1,057
Documentation
Software Dev.
49.407482
What is the shape of a suspended rope? Is there some function that describes it? Answer: it's the cosh curve!

This may be seen quite dramatically by putting up a transparency of the catenary y = cosh x and suspending a rope in front of the transparency projection so that the rope shadow can be compared!

Now, someone may object and say that the curve of x² will also give a good approximation. If they do this, you can talk about how the Taylor series of cosh begins with the quadratic 1 + x²/2, so it is not surprising!

The Math Behind the Fact: Calculus and modeling are useful here: by breaking the rope into lots of little chunks and modeling the forces on each chunk, one can obtain a differential equation whose solution is the cosh curve.

How to Cite this Page: Su, Francis E., et al. "Suspended Rope Trick." Math Fun Facts.
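A quick numeric check of that Taylor-series remark: near x = 0 the parabola 1 + x²/2 tracks cosh x closely, and the two only diverge noticeably as |x| grows (the sample points below are just illustrative):

```python
import math

def quad(x):
    """First two terms of the Taylor series of cosh x: 1 + x^2/2."""
    return 1 + x * x / 2

# Compare cosh x against its quadratic approximation at a few points.
for x in (0.0, 0.5, 1.0, 2.0):
    diff = math.cosh(x) - quad(x)
    print(f"x = {x}: cosh = {math.cosh(x):.4f}, quad = {quad(x):.4f}, diff = {diff:.4f}")
```

The leftover error is dominated by the next Taylor term, x⁴/24, which explains why the parabola fools the eye near the bottom of the rope but not out at the ends.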
<urn:uuid:79c06ceb-5651-4b5e-9ed7-d8c261aca206>
3.28125
218
Tutorial
Science & Tech.
64.076948
Thu Oct 21 13:09:11 BST 2010 by Jack Stephens

The author overlooks the numerous cases where similar environmental conditions seem to have led to animals representing different lines of descent converging on very similar-looking phenotypes, for example penguins and the great auk. These cases seem to strongly suggest the validity of environmentally driven Darwinian selection.

Sat Oct 23 10:19:09 BST 2010 by Mark Bridger

There may be no difference between the chaos theory of evolution and environmentally driven selection in reality. It's a semantic discussion about what you call the environment. The environment would not just be the climate, atmosphere and geological conditions but also the other organisms that exist. There can be a stable relationship between the life forms of a certain time that adapts to and survives major environmental change. But then there can be the loss of one particular life form for some difficult-to-detect reason (maybe a subtle environmental change), and the loss of that form will affect others, leading to a chaotic pulse of change that affects all of life.
<urn:uuid:c5ee9c17-360c-44fb-89cb-2fdde1be80cd>
2.875
259
Comment Section
Science & Tech.
37.227753
Dennis Gabor was a Hungarian-born electrical engineer who won the Nobel Prize for Physics in 1971 for his invention of holography, a system of lensless, three-dimensional photography that has many applications. In 1949 Gabor joined the faculty of the Imperial College of Science and Technology, London, where in 1958 he became professor of applied electron physics. His other work included research on high-speed oscilloscopes, communication theory, physical optics, and television. Gabor was awarded more than 100 patents.
<urn:uuid:4d14093d-960c-48d6-b1ca-ddcc7179ea36>
2.796875
188
Knowledge Article
Science & Tech.
42.679625
This charming little video demonstrates the principle of resonant frequency using oscillating metronomes. The mechanical wind-up metronomes used worldwide during the dreaded Saturday piano lesson employ an inverted pendulum to keep even time intervals. The resonant frequency of the pendulum is adjusted by moving the mass up and down. Sliding the mass higher up the rod decreases the resonant frequency of the pendulum by increasing its rotational inertia. In the demonstration, each metronome is set to the same frequency, but they are originally all out of phase with each other. This means that at any given moment they are each in different parts of their respective cycles -- thus creating a rhythm too complicated for the average music student. When they're sitting on the table they stay out of phase, because they are not affected by the others' vibrations. However, when they're placed on the piece of wood, although the metronomes are not in phase, they impart a net average vibration. Notice how the wood begins to oscillate back and forth as well. As the wood vibrates, it begins to force all of the metronomes into the same pattern. This is a condition called forced resonance. In this case, the forcing frequency of the wood is also the natural or resonant vibration frequency of each metronome, which results in maximum-amplitude vibrations. If you force an object to vibrate at a frequency other than its resonant frequency, its amplitude will decrease. The idea of resonance is particularly relevant when constructing things like bridges and buildings. You don't want these structures to have resonant frequencies similar to those of seismic waves from earthquakes. Ground shaking at the resonant frequency of a building can initiate vibrations in the building at a very large amplitude. This is obviously not a desirable situation, and modern engineers use a variety of techniques to avoid this when designing buildings in earthquake zones. 
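The off-resonance amplitude drop mentioned above follows from the steady-state response of a driven, damped oscillator, A(ω) = (F/m) / sqrt((ω₀² − ω²)² + (γω)²); the parameter values below are illustrative, not measurements of any real metronome or building:

```python
import math

def amplitude(omega, omega0=2.0, gamma=0.1, force_per_mass=1.0):
    """Steady-state amplitude of a driven, damped harmonic oscillator
    with natural frequency omega0 and damping coefficient gamma."""
    return force_per_mass / math.sqrt(
        (omega0**2 - omega**2) ** 2 + (gamma * omega) ** 2
    )

# Amplitude peaks when driving near the natural frequency
# and falls off on either side of it:
for w in (1.0, 1.9, 2.0, 2.1, 3.0):
    print(f"omega = {w}: A = {amplitude(w):.2f}")
```

This is exactly why matching a building's resonant frequency to seismic forcing is dangerous: with light damping, the response at resonance dwarfs the response even slightly off-resonance.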
However, as far as metronomes are concerned -- let them oscillate! Adam Weiner is the author of Don't Try This at Home! The Physics of Hollywood Movies.
<urn:uuid:0d45f902-4cbf-4795-94f7-c6c44c145371>
3.5
473
Personal Blog
Science & Tech.
37.956335
The Eurasian minnow (Phoxinus phoxinus) is a species of freshwater fish. It is a member of the carp family (family Cyprinidae) of order Cypriniformes, and is the type species of genus Phoxinus. It is ubiquitous throughout much of Eurasia, from Britain and Spain to eastern Siberia, predominantly in cold (53.6–68 °F, 12–20 °C) streams and well-oxygenated lakes and ponds. It is noted for being a gregarious species, shoaling in large numbers. The species is of drab coloration and unremarkable appearance, although the males display a red belly during the spawning season. It reaches a maximum overall length of 5.51 in (14 cm). This fish is the archetypal minnow, and is also known as the common or European minnow. It is known by a great variety of names in the various languages spoken across its range: the French grisette, the German Elritze, the Russian гольян (gol'jan), etc. There are also many obsolete binomial synonyms; the original name assigned by Linnaeus was Cyprinus phoxinus. The Eurasian minnow is used commercially primarily as bait. It is also important in laboratory research, due to its stereotypical fishlike qualities. They are also some of the slowest fish.
<urn:uuid:d581bf7a-51c1-4a34-a2ed-9fda30454870>
3.46875
312
Knowledge Article
Science & Tech.
48.737127
Mathematics & Physics articles

Physicists create new form of matter
MIT scientists have brought a supercool end to a heated race among physicists: They have become the first to create a new type of matter, a gas of atoms that shows high-temperature superfluidity.

Can an electron be in two places at the same time?
In something akin to a double-slit experiment, scientists at the Fritz Haber Institute of the Max Planck Society, in co-operation with researchers from the California Institute of Technology in Pasadena, California, have shown for the first time that electrons have characteristics of both waves and particles at the same time, and at virtually the push of a button can be switched back and forth between these states.

Light that travels… faster than light!
A team of researchers from the Ecole Polytechnique Fédérale de Lausanne (EPFL) has successfully demonstrated, for the first time, that it is possible to control the speed of light – both slowing it down and speeding it up – in an optical fiber, using off-the-shelf instrumentation in normal environmental conditions.

New mechanism for metallic magnetism
Predicting the magnetic behavior of metallic compounds is a surprisingly difficult problem for theoretical physicists. While the properties of a common refrigerator magnet are not a great mystery, certain materials exhibit magnetic properties that do not fit within existing theories of magnetism. One such material inspired a recent theoretical breakthrough by physicists at the University of California, Santa Cruz.

Of friction and 'The Da Vinci Code'
The Da Vinci Code, the best selling novel and soon-to-be-blockbuster film, may also be linked some day to the solving of a scientific mystery as old as Leonardo Da Vinci himself — friction.

Research sheds light on ancient mystery
A researcher at Rochester Institute of Technology is unraveling a mystery surrounding Easter Island.
William Basener, assistant professor of mathematics, has created the first mathematical formula to accurately model the island's monumental societal collapse.

Algorithm for learning languages
Cornell University and Tel Aviv University researchers have developed a method for enabling a computer program to scan text in any of a number of languages, including English and Chinese, and autonomously and without previous information infer the underlying rules of grammar. The rules can then be used to generate new and meaningful sentences. The method also works for such data as sheet music or protein sequences.

Scientists gain new insight into nanoscale optics
New research from Rice University has demonstrated an important analogy between electronics and optics that will enable light waves to be coupled efficiently to nanoscale structures and devices.

Physicists measure tiny force that limits how far machines can shrink
University of Arizona physicists have directly measured how close speeding atoms can come to a surface before the atoms' wavelengths change.

Math unites the celestial and the atomic
In recent years, researchers have developed astonishing new insights into a hidden unity between the motion of objects in space and that of the smallest particles. It turns out there is an almost perfect parallel between the mathematics describing celestial mechanics and the mathematics governing some aspects of atomic physics.

Universe evolution favored three and seven dimensions
Physicists who work with a concept called string theory envision our universe as an eerie place with at least nine spatial dimensions, six of them hidden from us, perhaps curled up in some way so they are undetectable. The big question is why we experience the universe in only three spatial dimensions instead of four, or six, or nine.

Presto! It's a semiconductor
Researchers at the University of Pennsylvania may not have turned lead into gold as alchemists once sought to do, but they did turn lead and selenium nanocrystals into solids with remarkable physical properties. In the October 5 edition of Physical Review Letters, online now, physicists Hugo E. Romero and Marija Drndic describe how they developed an artificial solid that can be transformed from an insulator to a semiconductor.

Brownian motion under the microscope
An international group of researchers from the EPFL (Ecole Polytechnique Fédérale de Lausanne), the University of Texas at Austin and the European Molecular Biology Laboratory in Heidelberg, Germany have demonstrated that Brownian motion of a single particle behaves differently than Einstein postulated one century ago.

New equation helps unravel behavior of turbulence
To most people, turbulence is the jolt felt by jet passengers moving through a rough pocket of air. But to scientists, turbulence is the chaotic flow of a gas or liquid, in which parts of the current curl into irregular, ever smaller, tight eddies.

Ultrafast lasers take 'snapshots' as atoms collide
Using laser pulses that last just 70 femtoseconds (quadrillionths of a second), physicists have observed in greater detail than ever before what happens when atoms collide.

Why 'filling-it-up' takes more than 'tank capacity'
You fill up your "empty" fuel tank at the gas station and the pump charges you for more gallons than the tank's rated capacity. Are you being deliberately overcharged?

New approach to studying antimatter
What happens when two atoms, each made up of an electron and its antimatter counterpart, called the positron, collide with each other? UC Riverside physicists are able to see for the first time in the laboratory that these atoms, which are called positronium atoms and are unstable by nature, become even more unstable after the collision.
The positronium atoms are seen to destroy one another, turning into gamma radiation, a powerful type of electromagnetic radiation.

Lightning research sparks new discovery
Lightning, a high-voltage discharge that strikes quickly and sometimes fatally, is very difficult to study. A new and surprising finding by Florida Institute of Technology's Dr. Joseph Dwyer and his team brings the study of lightning research into the laboratory.

What does 'almost nothing' weigh?
If subatomic particles had personalities, neutrinos would be the ultimate wallflowers. One of the most basic particles of matter in the universe, they've been around for 14 billion years and permeate every inch of space, but they're so inconceivably tiny that they've been called "almost nothing" and pass straight through things without a bump.

Mathematics: the loss of certainty
"Pure mathematics will remain more reliable than most other forms of knowledge, but its claim to a unique status will no longer be sustainable." So predicts Brian Davies, author of the article "Whither Mathematics?", which will appear in the December 2005 issue of Notices of the AMS.
<urn:uuid:9a053eaa-eddc-4c63-b0f4-c0b26d407337>
3.21875
1,355
Content Listing
Science & Tech.
22.505844