June 23, 2010

Oil Spill Affecting Marine Life, Top to Bottom
From nesting turtles to larval fish, Gulf ecosystem ravaged
by Sharon Young

A large, disoriented tiger shark swims near a Florida beach. Dolphins are photographed swimming in an oil slick. Pelicans flounder in thick oil. We see these ghastly images in the news every day. The impacts of the disaster caused by the explosion of the Deepwater Horizon oil rig continue to spread, from the largest creatures to the smallest.

As part of The HSUS team recently in the Gulf to assess the oil's impact on wildlife, I saw dying hermit crabs and other small creatures struggling in oily tide pools or unable to walk along the beach. Farther up the food chain, I saw dolphins frolicking in the bays. Although some dolphins live offshore, others spend their entire lives resident in a particular bay; when oil enters the only home they know, they have no place to go. The team also visited de-oiling facilities and saw the plight of the oiled birds firsthand. It was a study in contrasting beauty and devastation.

Nesting and Mating Season at Its Height

No time is good for a massive oil spill, but the timing could not have been worse for the Gulf of Mexico. Endangered turtles are flocking to its beaches to nest, and sharks and giant bluefin tuna are gathered there to spawn, in the only known mating area for some of these species. Countless bird species are in their nesting colonies and dabbling for food along the shoreline.

The oil affects virtually every kind of marine life in the rich ecosystems of the Gulf, from the bottom to the top of the food chain. As oil spreads in the water column and oil-consuming microbes proliferate, oxygen is depleted. This oxygen-poor environment can drive fish and other marine life into shallow waters in less affected areas.

A Chain Reaction

The recent increase in the numbers of deepwater sharks and large fish in coastal waters may be a result of their chasing the schools of smaller fish that have moved there to avoid deeper, oxygen-depleted waters. Oil also clogs the fragile gills of fish, crabs and other marine residents, preventing them from getting oxygen, and the dispersants used to break up the oil are themselves deadly neurotoxins. Even air-breathing animals such as turtles, dolphins and whales are affected, because oil on the surface can contaminate their prey or be inhaled as they rise to breathe.

The incidence of dolphins stranding on beaches is elevated all around the Gulf. Hundreds of endangered turtles have died, and researchers are now attempting to capture them at sea before they can swim into the oil. Some of these species, such as the Kemp's ridley turtle, are already teetering on the brink of extinction.

On the surface, large floating rafts of a dense seaweed called Sargassum host micro-communities of larval fish, tiny marine creatures, crabs and juvenile sea turtles who feed on the abundant life in the mats. These mats can also collect debris and oil, and thereby doom the young generations of fish and turtles who start life sheltered in them. Deep below, sub-surface oiling of deepwater coral beds that teem with life and form the base of a rich ecosystem is of considerable concern.

No End in Sight

The oil continues to spew from the broken well. Dispersants continue to be sprayed. Mats of Sargassum continue to be burned at sea. We may never know the full impact of this disaster.
Though we mourn the dying birds and the dead dolphins and turtles, and even the tiny oil-drenched crab, they are simply the visible sign of a destruction that threatens the health of an entire ecosystem. Most of the destruction and loss of life will remain unseen and unaccounted for, but it is likely to affect the Gulf ecosystem, and the animals and people who depend on it, for years to come.

Sharon Young is a member of the HSUS Oil Spill Assessment Team and is field director for marine issues for The HSUS.
September 23, 2011

It is a concept that forms a cornerstone of our understanding of the universe and of the nature of time – nothing can travel faster than the speed of light. But now it seems that researchers working in one of the world's largest physics laboratories, under a mountain in central Italy, have recorded particles travelling at a speed that is supposedly forbidden by Einstein's theory of special relativity. Scientists at the Gran Sasso facility will unveil evidence on Friday that raises the troubling possibility of a way to send information back in time, blurring the line between past and present and wreaking havoc with the fundamental principle of cause and effect.
The first supersolid – a ghostly, quantum form of matter in which a solid flows, frictionless, through itself – was reportedly made in 2004. But a debate has raged ever since over whether the researchers involved had simply misinterpreted their results. Now two new studies suggest that genuine supersolids have been made after all.

According to quantum theory, supersolidity should kick in at very low temperatures. In a solid, atoms are bound together in a regular lattice, keeping their structure rigid under normal circumstances. But if you cool some solids close to absolute zero, they should become frictionless, flowing supersolids, while retaining their lattice structure.

In the original experiment, Eunseong Kim – now of the Korea Advanced Institute of Science and Technology in Daejeon, South Korea – and Moses Chan of Pennsylvania State University in University Park cooled and pressurised liquid helium until the atoms were forced into a crystal lattice. They then made a cylinder filled with this solid helium spin one way and then the other, over and over again. As they cooled it, the cylinder switched direction more frequently. The researchers concluded that some of the helium was standing completely still, reducing the mass that was rotating along with the cylinder and allowing it to switch more quickly. They assumed that this was because some of the helium had become frictionless due to supersolidity.

Earlier this year, however, this interpretation was challenged by John Reppy of Cornell University in Ithaca, New York. He suggested that the cylinder switched more quickly at lower temperatures because the helium had become a wobbly "quantum plastic", a previously unknown phase of matter that is distinct from a supersolid. The increased elasticity of this new material allowed the cylinder to reverse its rotation more easily, he said.

To test whether Reppy was right, Kim spun the larger apparatus in which the cylinder sits: the apparatus spun in just one direction, while the cylinder spun one way, then the other, as it had before. He reasoned that elasticity should affect only how quickly the cylinder switched direction, not its actual spinning rate. Therefore, if Reppy was right and the solid helium was a quantum plastic, adding a constant underlying rotation should not change the results.

His team found, however, that it did. Unlike in the original experiment, the direction switches did not get faster with falling temperature. The best way to explain this, says Kim, is if the helium is indeed supersolid. That's because, in a supersolid, the constant rotation should cause vortices to form, rather as in a liquid, disturbing the material's quantum properties and reducing the supersolidity.

Holes under pressure

In a tantalising coincidence, Yaroslav Lutsyshyn of the Polytechnic University of Catalonia in Barcelona, Spain, and his colleagues have just found further evidence of supersolidity. Theory suggests that supersolid helium flows because holes form in the crystal lattice. Lutsyshyn's team examined how likely these holes were to form under different pressures, and it turned out that the pressure at which holes formed most easily matched that at which Kim and Chan identified the largest proportion of supersolid helium in their system.

"It's like rabbit ears sticking out of the grass," says Lutsyshyn of his team's results. He also holds that the experiment by Kim's team is strong evidence – but not full proof – that the solid helium contains a supersolid.
Reppy, however, remains unconvinced that Kim's most recent work rules out quantum plasticity as the cause of the apparent supersolid effects. He disputes Kim's assumption that the extra energy added by rotating the whole apparatus would not affect the rate at which a quantum plastic switches. "I'm now fairly certain that the large supersolid signals that they were seeing are manifestations of the elastic properties," he says.
In John D. Pettigrew's lab, there is less to human experience than meets the eyes. Over the past several years, dozens of test subjects have stared through goggles and pressed keys while the neuroscientist squirted ice water into the volunteers' ear canals, fired strong magnetic pulses into their heads or told jokes that made them giggle.

These unusual experiments, which were reported in part last March in Current Biology and presented more fully in November at a neuroscience conference in New Orleans, confirmed that people often cannot see what is plainly before their eyes. More important, the studies suggest that many optical illusions may work not by deceiving our visual system, as long suspected, but rather by making visible a natural contention between the two hemispheres of the human brain. If Pettigrew's theory is correct, then the reason an optical illusion such as the Necker cube outline, which seems to turn inside out periodically, works is that, in some deep biological sense, you are of two minds on the question of what to see.

Reversible figures, such as the Necker cube and drawings of a white vase between black faces, have been curiosities for centuries. And it was in 1838 that Charles Wheatstone first reported an even more peculiar phenomenon called binocular rivalry. When people look through a stereoscope that presents irreconcilable patterns, such as horizontal stripes before one eye and vertical bars before the other, most don't perceive a blend of the two. Instead they report seeing the left pattern, then the right, alternating every few seconds. "Every couple of seconds something goes 'click' in the brain," Pettigrew says. "But where is the switch?" The answer is still unknown.

This article was originally published with the title "Side Splitting".
Betelgeuse

Image from the Hubble Space Telescope, reproduced with permission from AURA/STScI.

What's in a Name: Arabic for "shoulder of the giant". Also known as the Martial Star.
Claim to Fame: First star seen as a sphere instead of a point of light by the Hubble Space Telescope, on March 3, 1995. The 12th brightest star in the sky. Possibly the very next supernova.
Type of Star: Orange-red supergiant (M2 Iab spectral class), 3300 K surface temperature.
How Far Away: Over 300 light-years.
How Big: 1300 times the Sun's diameter; it would overfill the orbit of Jupiter if placed at the Sun's position in the solar system.
How Bright: 54,000 times the Sun's visual luminosity (absolute visual magnitude Mv = -7).
Where to View: The 2nd brightest star in the constellation Orion.
When to View: Best viewed from the Northern Hemisphere during December-March.
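As a consistency check, the luminosity and magnitude entries above agree with the standard magnitude-luminosity relation. Here is a minimal Python sketch (it assumes a solar absolute visual magnitude of about +4.83, a value not given in the table):

import math

# Mv = Mv_sun - 2.5 * log10(L / L_sun): each factor of 100 in luminosity
# is 5 magnitudes, with brighter objects having more negative magnitudes.
MV_SUN = 4.83               # assumed absolute visual magnitude of the Sun
LUMINOSITY_RATIO = 54_000   # "54,000 times the sun's visual luminosity"

mv = MV_SUN - 2.5 * math.log10(LUMINOSITY_RATIO)
print(round(mv, 1))         # -7.0, matching the table's Mv = -7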
BACKGROUND: Inspired by the Namib Desert beetle, which lives in one of the driest regions of the world, researchers at the Massachusetts Institute of Technology, in Cambridge, have developed a new material that can capture and control tiny amounts of water, just as the beetle does. Applications include self-decontaminating surfaces that could channel and collect harmful substances, such as germs, which could then be easily killed or deactivated. It could also be used for lab-on-a-chip diagnostics or DNA screening.

ABOUT DESERT BEETLES: The desert beetle has a built-in water collection system that allows it to survive where there is no water to be found, even when the humidity in the air is close to zero. This is important since normal condensation can't take place in the Namib Desert because the fog is too light. When fog blows across the surface of the beetle's back, water droplets begin to gather on top of the bumps on the insect's back. These bumps attract water. They are also surrounded by waxy, water-repellent channels that pin the water drops to the beetle's back. Over time, the droplets get bigger, until they are large enough to roll down into the insect's mouth.

ABOUT THE MATERIAL: The new material developed by the MIT scientists can capture and control tiny amounts of water because its structure mimics that of the desert beetle. There are two surfaces, one water-repellent and one water-attracting, that act together to separate and channel water drops. The researchers found they could control the surface texture of their material by repeatedly dipping glass or plastic substrates into charged polymer solutions. With every dip, another layer coats the surface, gradually making the material more porous so that it easily attracts water. Adding silica nanoparticles -- particles only a few millionths of a millimeter wide -- creates even more bumps to trap the collected water droplets. The final touch is a Teflon-like coating that makes the material super-water-repellent. The scientists can create any pattern they want by adding more layers of charged polymers or nanoparticles in specific areas.

The Materials Research Society contributed to the information contained in the TV portion of this report.
Today is a special day. You see, today marks the very first day of the rest of my life. In order to mark this special occasion, I am starting a blog. This blog post will forever be remembered (or forgotten) as the first among many. I have decided to dive right into things and begin with one of my favorite mathematical formulas. This formula is incredibly useful, the concept is simple, and it is easy to remember. It is known as the Pythagorean Theorem.

What is it?

The Pythagorean theorem is a formula which directly relates to right triangles. A right triangle is composed of a 90 degree angle, also known as a square angle, and two smaller angles. The theorem gives a relation among the three sides. Using the formula, if you know the lengths of two sides of the triangle, you can find the length of the third side. Generally, the two sides which form the right angle are assigned the variables A and B, and the side opposite the right angle is assigned the variable C (look at the picture for a good visualization). The formula states that A squared plus B squared equals C squared.

Where did it come from?

Although named for Pythagoras, the Pythagorean theorem's history is debated. In all likelihood, Pythagoras was not actually the first to discover this property, nor the first to write about it. Some historians believe the Babylonians may have known about the theorem, and accounts of it in India may go back as far as 800 BC. The methods of proof for the theorem are numerous and vary widely. A great number of these are simple geometric proofs, requiring nothing but some algebra and geometry to understand. If you're interested, there is a long list of them at Wolfram MathWorld.

Why is it important?

The theorem comes up surprisingly often in higher math. You'll see it all the time, especially in calculus and physics. One of the most common usages arises in vectors. A vector is merely a direction and a magnitude, usually visualized as an arrow or a line. In the diagram below, you can see two vectors, A and B. When you put them end-to-end, they add to vector C. Each vector is composed of two components, one vertical and one horizontal. If you add the horizontal length of A to the horizontal length of B, you obtain the horizontal length of C, and the same applies to the vertical lengths. When you have the horizontal and vertical lengths of C, you simply plug them into the Pythagorean theorem to find the length of vector C. Thus, 15² + 9² = C². Solving (with a calculator) gives C ≈ 17.5. It's a surprisingly simple process, with an immense number of applications.
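For readers who would rather compute than reach for a calculator, the same arithmetic is a few lines of Python (the function name here is my own invention; only the math module is standard):

import math

def hypotenuse(a: float, b: float) -> float:
    # The Pythagorean theorem solved for C: c = sqrt(a^2 + b^2).
    return math.sqrt(a ** 2 + b ** 2)

# The vector example from the post: horizontal component 15, vertical 9.
print(hypotenuse(15, 9))  # 17.49..., matching C ≈ 17.5 above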
The following list indicates some of the ways that the mysqld server uses memory. Where applicable, the name of the system variable relevant to the memory use is given:

- All threads share the key buffer; its size is determined by the key_buffer_size variable. Other buffers used by the server are allocated as needed. See Section 8.11.2, "Tuning Server Parameters".

- Each thread that is used to manage client connections uses some thread-specific space. The connection buffer and result buffer each begin with a size equal to net_buffer_length but are dynamically enlarged up to max_allowed_packet bytes as needed. The result buffer shrinks to net_buffer_length after each SQL statement. While a statement is running, a copy of the current statement string is also allocated.

- All threads share the same base memory. When a thread is no longer needed, the memory allocated to it is released and returned to the system unless the thread goes back into the thread cache. In that case, the memory remains allocated.

- The myisam_use_mmap system variable can be set to 1 to enable memory-mapping for all MyISAM tables.

- Each request that performs a sequential scan of a table allocates a read buffer (variable read_buffer_size).

- When reading rows in an arbitrary sequence (for example, following a sort), a random-read buffer (variable read_rnd_buffer_size) may be allocated to avoid disk seeks.

- All joins are executed in a single pass, and most joins can be done without even using a temporary table. Most temporary tables are memory-based hash tables. Temporary tables with a large row length (calculated as the sum of all column lengths) or that contain BLOB columns are stored on disk. If an internal in-memory temporary table becomes too large, MySQL handles this automatically by changing the table from in-memory to on-disk format, to be handled by the MyISAM storage engine. You can increase the permissible temporary table size as described in the section "How MySQL Uses Internal Temporary Tables".

- Most requests that perform a sort allocate a sort buffer and zero to two temporary files depending on the result set size. See Section C.5.4.4, "Where MySQL Stores Temporary Files".

- Almost all parsing and calculating is done in thread-local and reusable memory pools. No memory overhead is needed for small items, so the normal slow memory allocation and freeing is avoided. Memory is allocated only for unexpectedly large strings.

- For each MyISAM table that is opened, the index file is opened once; the data file is opened once for each concurrently running thread. For each concurrent thread, a table structure, column structures for each column, and a buffer of size 3 * N are allocated (where N is the maximum row length, not counting BLOB columns). A BLOB column requires five to eight bytes plus the length of the BLOB data. The MyISAM storage engine maintains one extra row buffer for internal use.

- Handler structures for all in-use tables are saved in a cache and managed as a FIFO. The initial cache size is taken from the value of the table_open_cache system variable. If a table has been used by two running threads at the same time, the cache contains two entries for the table. See the section "How MySQL Opens and Closes Tables".

- A FLUSH TABLES statement or mysqladmin flush-tables command closes all tables that are not in use at once and marks all in-use tables to be closed when the currently executing thread finishes. This effectively frees most in-use memory. FLUSH TABLES does not return until all tables have been closed.

- The server caches information in memory as a result of GRANT, CREATE SERVER, and INSTALL PLUGIN statements.
This memory is not released by the corresponding REVOKE, DROP SERVER, and UNINSTALL PLUGIN statements, so for a server that executes many instances of the statements that cause caching, there will be an increase in memory use. This cached memory can be freed with FLUSH PRIVILEGES.

ps and other system status programs may report that mysqld uses a lot of memory. This may be caused by thread stacks on different memory addresses. For example, the Solaris version of ps counts the unused memory between stacks as used memory. To verify this, check available swap with swap -s. We test mysqld with several memory-leakage detectors (both commercial and Open Source), so there should be no memory leaks.
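As a quick way to read these buffer settings back from a running server, the sketch below queries them with SHOW VARIABLES. This is an illustration rather than part of the manual: it assumes the mysql-connector-python package and placeholder credentials.

import mysql.connector  # assumes: pip install mysql-connector-python

# Hypothetical connection details; substitute your own.
conn = mysql.connector.connect(user="root", password="secret", host="localhost")
cur = conn.cursor()
for name in ("key_buffer_size", "net_buffer_length", "max_allowed_packet",
             "read_buffer_size", "read_rnd_buffer_size", "sort_buffer_size"):
    cur.execute("SHOW VARIABLES LIKE '%s'" % name)
    variable, value = cur.fetchone()
    print(variable, "=", value)
cur.close()
conn.close()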
Programmable matter refers to matter which has the ability to change its physical properties (shape, density, moduli, conductivity, optical properties, etc.) in a programmable fashion, based upon user input or autonomous sensing. Programmable matter is thus linked to the concept of a material which inherently has the ability to perform information processing.

Programmable matter is a term originally coined in 1991 by Toffoli and Margolus to refer to an ensemble of fine-grained computing elements arranged in space (Toffoli & Margolus 1991). Their paper describes a computing substrate composed of fine-grained compute nodes distributed throughout space which communicate using only nearest-neighbor interactions. In this context, programmable matter refers to compute models similar to cellular automata and lattice gas automata (Rothman & Zaleski 1997). The CAM-8 architecture is an example hardware realization of this model. This function is also known as "digital referenced areas" (DRA) in some forms of self-replicating machine science.

In the early 1990s, there was a significant amount of work in reconfigurable modular robotics with a philosophy similar to programmable matter. As semiconductor technology, nanotechnology, and self-replicating machine technology have advanced, the use of the term programmable matter has changed to reflect the fact that it is possible to build an ensemble of elements which can be "programmed" to change their physical properties in reality, not just in simulation. Thus, programmable matter has come to mean "any bulk substance which can be programmed to change its physical properties."

In the summer of 1998, in a discussion on artificial atoms and programmable matter, Wil McCarthy and G. Snyder coined the term "quantum wellstone" (or simply "wellstone") to describe this hypothetical but plausible form of programmable matter. McCarthy has used the term in his fiction. In 2002, Seth Goldstein and Todd Mowry started the claytronics project at Carnegie Mellon University to investigate the underlying hardware and software mechanisms necessary to realize programmable matter. In 2004, the DARPA Information Science and Technology group (ISAT) examined the potential of programmable matter, resulting in the 2005-2006 study "Realizing Programmable Matter", which laid out a multi-year program for the research and development of programmable matter. Finally, in 2007, Alex Wissner-Gross at Harvard University proposed a comprehensive taxonomy for classifying programmable matter according to its physical input and output types. For example, since a field-effect transistor transduces one electrical signal (a voltage) to another (a current), it can be classified as electrical-to-electrical programmable matter.

Approaches to programmable matter

In one school of thought, the programming could be external to the material and might be achieved by the "application of light, voltage, electric or magnetic fields, etc." (McCarthy 2006). For example, in this school of thought, a liquid crystal display is a form of programmable matter. A second school of thought is that the individual units of the ensemble can compute, and the result of their computation is a change in the ensemble's physical properties. An example of this more ambitious form of programmable matter is claytronics, where the units in the ensemble "compute" and the result is a change in the shape of the ensemble. There are many proposed implementations of programmable matter.
Scale is one key differentiator between different forms of programmable matter. At one end of the spectrum, reconfigurable modular robotics pursues a form of programmable matter where the individual units are in the centimeter size range. At the nanoscale end of the spectrum, there are a tremendous number of different bases for programmable matter, ranging from shape-changing molecules to quantum dots. Quantum dots are in fact often referred to as artificial atoms. In the micrometer to sub-millimeter range, examples include claytronics, MEMS-based units, cells created using synthetic biology, and the utility fog concept.

Examples of programmable matter

There are many conceptions of programmable matter, and thus many discrete avenues of research using the name. Below are some specific examples of programmable matter.

"Simple" programmable matter

These include materials that can change their properties based on some input, but do not have the ability to do complex computation by themselves. The physical properties of several complex fluids can be modified by applying a current or voltage, as is the case with liquid crystals. Metamaterials are artificial composites that can be controlled to react in ways that do not occur in nature. One example, developed by David Smith and then by John Pendry and David Schurig, is a material whose index of refraction can be tuned so that it has a different index of refraction at different points in the material. If tuned properly, this could result in an "invisibility cloak." An active area of research is in molecules that can change their shape, as well as other properties, in response to external stimuli. These molecules can be used individually or en masse to form new kinds of materials. For example, J. Fraser Stoddart's group at UCLA has been developing molecules that can change their electrical properties.

Self-reconfiguring modular robotics

Self-reconfiguring modular robotics (SRCMR) is a field of robotics in which a group of basic robot modules work together to dynamically form shapes and create behaviours suitable for many tasks. Like programmable matter, SRCMR aims to offer significant improvements to any kind of object or system by introducing many new possibilities, for example: 1. most important, the incredible flexibility that comes from the ability to change the physical structure and behavior of a solution by changing the software that controls the modules; 2. the ability to self-repair by automatically replacing a broken module, which will make SRCMR solutions incredibly resilient; 3. a reduced environmental footprint from reusing the same modules in many different solutions. Self-reconfiguring modular robotics enjoys a vibrant and active research community.

Claytronics

Claytronics is an emerging field of engineering concerning reconfigurable nanoscale robots ('claytronic atoms', or catoms) designed to form much larger scale machines or mechanisms. The catoms will be sub-millimeter computers that will eventually have the ability to move around, communicate with other computers, change color, and electrostatically connect to other catoms to form different shapes. Cellular automata are a useful abstraction for discrete units interacting to give a desired overall behavior.

Quantum wells

Quantum wells can hold one or more electrons. Those electrons behave like artificial atoms which, like real atoms, can form covalent bonds, but these are extremely weak.
Because of their larger sizes, other properties are also widely different.

Synthetic biology

Synthetic biology is a field that aims to engineer cells with "novel biological functions." Such cells are usually used to create larger systems (e.g., biofilms) which can be "programmed" utilizing synthetic gene networks, such as genetic toggle switches, to change their color, shape, etc.

Programmable matter in fiction

Programmable matter is still, for the most part, a fantastic vision for the future. The ideas behind it are explored in many works of science fiction. Examples:
- The T-1000 from Terminator 2 fits the definition of programmable matter, although it is not described that way in the film. (The term was only just coined the year of the film's release.)
- It is called "wellstone" in many of Wil McCarthy's books and stories.
- It is called "Trillions" in the 1973 children's book Trillions by Nicholas Fisk.
- It is called "reality graphics" in Vernor Vinge's book A Fire Upon the Deep.
- It is called "the Flesh" in Doctor Who.
- David Brin's Kiln People.
- It is called "computronium" in Charles Stross's Accelerando.
- Programmable silicon is used to quickly erect buildings in Peter F. Hamilton's Night's Dawn Trilogy.
- The Replicators from the Stargate universe are based on this technology.
- In the Pendragon Adventure series, "Forge" is a programmable matter device created by Mark Dimond and Andy Mitchell.
- The Caeliar are an alien race in the Star Trek novel trilogy Star Trek: Destiny whose bodies are made up of catoms (or claytronic atoms).
- In the book Extras by Scott Westerfeld, a substance called smart matter is a very useful type of programmable matter, capable of anything one can program it to do.
- In the book Anathem by Neal Stephenson, a substance called "new matter" can alter its physical properties through the application of electrical charges.
- In the movie Super 8, the alien spacecraft is composed of programmable matter.
- In the Pokemon universe, it is possible that Ditto is a form of programmable matter.
- Gort from The Day the Earth Stood Still.
- In almost any work of fiction, biological systems such as humans may be regarded as programmable matter, with the source code written into the DNA of almost every cell in the body.

References
- "CAM8: a Parallel, Uniform, Scalable Architecture for Cellular Automata Experimentation". Ai.mit.edu. Retrieved 2013-04-10.
- "DARPA research solicitation".
- DARPA Strategic Thrusts: Programmable Matter.
- Physically programmable surfaces, Alex Wissner-Gross, Ph.D. Thesis, Department of Physics, Harvard University (2007).
- "UCLA Chemistry and Biochemistry". Stoddart.chem.ucla.edu. Retrieved 2013-04-10.
- (Yim et al. 2007, pp. 43-52) An overview of recent work and challenges.
- McCarthy, Wil (2003). The Wellstone.
- Nicholas Fisk (1973). Trillions. ISBN 0-394-92601-3.
- Vinge, Vernor (1992). A Fire Upon the Deep.
- Brin, David (2002). Kiln People.
- Stross, Charles (2005). Accelerando.
- Goldstein, Seth Copen; Campbell, Jason; Mowry, Todd C. (June 2005). "Programmable Matter". IEEE Computer 38 (6): 99-101. doi:10.1109/MC.2005.198.
- McCarthy, Wil (2006). "Programmable Matter FAQ".
- McCarthy, Wil (2003). Hacking Matter: Levitating Chairs, Quantum Mirages, and the Infinite Weirdness of Programmable Atoms. New York: Basic Books. ISBN 0-465-04428-X.
- Rothman, D.H.; Zaleski, S. (1997). Lattice Gas Cellular Automata.
Cambridge University Press.
- Toffoli, Tommaso; Margolus, Norman (1991). "Programmable matter: concepts and realization". Physica D 47: 263-272. doi:10.1016/0167-2789(91)90296-L.
- Yim, Mark; Shen, Wei-Min; Salemi, Behnam; Rus, Daniela; Moll, Mark; Lipson, Hod; Klavins, Eric; Chirikjian, Gregory (March 2007). "Modular Self-Reconfigurable Robot Systems". IEEE Robotics & Automation Magazine 14 (1): 43. doi:10.1109/MRA.2007.339623.
- "Boston University's Programmable Matter Group".
- "Synthetic Biology at Boston University".
- "Claytronics Project at Carnegie Mellon University".
- "World with controllable Matter".
- "Universally Programmable Intelligent Matter Project".
- "DARPA (US Military) Programmable Matter Thrust".
The Pace of Evolution

Does evolution occur in rapid bursts or gradually? This question is difficult to answer because we can't replay the past with a stopwatch in hand. However, we can try to figure out what patterns we'd expect to observe in the fossil record if evolution did happen in bursts, or if evolution happened gradually. Then we can check these predictions against what we actually observe.

- What should we observe in the fossil record if evolution is slow and steady? If evolution is slow and steady, we'd expect to see the entire transition, from ancestor to descendent, displayed as transitional forms over a long period of time in the fossil record. In the above example, the preservation of many transitional forms, through layers representing a length of time, gives a complete record of slow and steady change. In fact, we see many examples of transitional forms in the fossil record. For example, to the right we show just a few steps in the evolution of whales from land-dwelling mammals, highlighting the transition of the walking forelimb to the flipper.
Transitional forms in whale evolution

- What would we observe in the fossil record if evolution happens in "quick" jumps (perhaps fewer than 100,000 years for significant change)? If evolution happens in "quick" jumps, we'd expect to see big changes happen quickly in the fossil record, with little transition between ancestor and descendent. In the above example, we see the descendent preserved in a layer directly after the ancestor, showing a big change in a short time, with no transitional forms. When evolution is rapid, transitional forms may not be preserved, even if fossils are laid down at regular intervals. We see many examples of this "quick" jumps pattern in the fossil record.

- Does a jump in the fossil record necessarily mean that evolution has happened in a "quick" jump? We expect to see a jump in the fossil record if evolution has occurred as a "quick" jump, but a jump in the fossil record can also be explained by irregular fossil preservation. This possibility can make it difficult to conclude that evolution happened rapidly.

We observe examples of both slow, steady change and rapid, periodic change in the fossil record. Both happen. But scientists are trying to determine which pace is more typical of evolution and how each sort of evolutionary change occurs.
Scientific names for organisms are also known as Latin names. Carl Linnaeus (pictured) started the naming scheme used in biology today, with a few modifications. At the time, Latin was still a fairly dominant language of scholarship, so Linnaeus gave plants, and later animals, names in Latin. I think names based on Greek are also acceptable. Today, the rules for animal names are handled by the International Commission on Zoological Nomenclature (ICZN). There are similar but separate organizations for plants and microbes and, I believe, fossils.

I'm going to do my best to summarize other people's arguments about giving Marmorkrebs a scientific name. (Some of these are based on half-remembered conversational snippets from conferences, so forgive me if I make errors.)

Marmorkrebs seems to belong in the genus Procambarus. The major crayfish taxonomist who described many of the members of this genus was a man with the wonderfully alliterative name of Horton H. Hobbs, Jr. He based many of his species descriptions on the male sex organs... which Marmorkrebs, being all females, do not have.

The next plan of attack might be to use genetics. But there really isn't a lot of precedent for describing species on the basis of genes alone, and I am not sure whether the ICZN code allows it. Regardless of whether it is permissible, there are strongly established traditions in taxonomy: formal species descriptions are supposed to have detailed morphological descriptions, designate a type specimen for safekeeping in a museum, and so on.

And as you consider the problem of whether to give Marmorkrebs a scientific name, it leads to a very core question: What is a species? This is a question that has resulted in a lot of ink being spilled in scientific papers, and simply put, there is no consensus on how to define a species. Based on some comments Keith Crandall has made, Marmorkrebs are genetically extremely similar to Procambarus alleni... so following species concepts that revolve around morphological separation, this might argue for putting Marmorkrebs in with P. alleni. But because Marmorkrebs reproduce asexually, they do not interbreed with sexual species... so following species concepts that revolve around interbreeding, this would suggest that Marmorkrebs should get its own species name.

As the preface to the latest edition of the code of zoological nomenclature notes: "One should always keep in mind that an important function of classifications is information retrieval." So maybe it's not really critical to have a scientific Latin name immediately in these days of search engines and databases... but what is definitely important is to have a consistent name, one that can be found in databases. For obvious reasons, I suggest "Marmorkrebs" be that name for use in titles and keywords.
The mill is silent now, and still. When International Paper succumbed to the slump in logging a few years ago and shut the doors on a plant that had once employed 650 in Gardiner and neighboring Reedsport, commerce in the coastal Oregon communities took a body blow. The pain of the mill closure was compounded by catches of coho salmon that had been dwindling for a decade. As loggers knocked the mud off their caulk boots for the last time and fishermen let their commercial licenses lapse, families drifted away. Schools lost students. Today, Reedsport's many boarded-up storefronts signal a community in distress.

The town sits back from the open ocean, snug against the sheltering hills of the Coast Range and wrapped inside the arc of sand that forms Winchester Bay. But out beyond the bay, past the bar, churns the constant, unceasing movement of what could someday stanch the decline of this place: Pacific Ocean waves. That's because a team of Oregon State University researchers has been inventing devices for creating electricity — clean, renewable, low-impact energy — from the motion of the ocean. And they've zeroed in on Reedsport as the "sweet spot" for testing and demonstrating new technologies — in part because the old mill's power substation, now sitting idle, could quickly be reengaged and once again buzz with electricity.

Just a tiny fraction of the energy contained in the Earth's seas — their currents, tides, waves, and heat — could power the entire planet. Tom Tymchuk is awe-struck by the statistic. "If you could harness even 1 percent of ocean energy, you could light up the world," says the Central Lincoln Public Utility District board member, struggling to take in the enormity of that idea. "Light up the world!"

Compared to wind — the current frontrunner in renewables — waves are a lot more efficient. That's because of what OSU electrical engineer Annette von Jouanne calls "energy density." "Water is about 1,000 times more dense than air," she points out. "That means you can extract more power from a smaller volume, which in turn means lower cost." Besides, waves roll in with a lot more regularity than wind blows. Energy is available from waves upward of 80 percent of the time, compared to 45 percent or less from wind, leading to more efficient scheduling for other energy sources on the grid.

More than 20 agencies, including the Oregon and U.S. departments of energy, are backing OSU's initiative to launch a U.S. Ocean Wave Energy Research, Development and Demonstration Center to create and test wave-power technologies. With members of Oregon's congressional delegation strongly behind the initiative, it's quite possible that the roar of the surf and the tang of salt spray could someday replace the kthunk-kthunk of the mill and the acrid smell of pulp as the sounds and smells of prosperity in Reedsport and other sagging economies up and down the coast.

Surviving the Tempest

The notion of extracting energy from waves is not new. When von Jouanne and her colleague, OSU electrical engineering professor Alan Wallace, began exploring the potential of wave power, their search for prior scientific writings and inventions took them into records two centuries old. As they pored over thousands of patents for turning wave energy into electricity, they pinpointed the big flaw in those earlier designs: too many moving parts. In an environment as tempestuous as the sea, moving parts require frequent maintenance and are vulnerable to breakdowns.
"To capture energy from waves, the device must be survivable, reliable, and maintainable," says von Jouanne, a principal investigator in OSU's wave energy research project. "In the past, there have been some failures because of the survivability issue." Prevailing technologies generate power by compression of a liquid (such as water) or a gas (such as air). Pumps and pistons, valves and filters, hoses and tubes, fittings and couplings and all sorts of switches, gauges, meters and sensors go into operating these systems.

In contrast, with $270,000 from the National Science Foundation and a total of $60,000 in proof-of-concept grants from Oregon Sea Grant at OSU, von Jouanne and Wallace are developing technologies that work with just a handful of basic components, including an electric coil, a buoy and a magnetic shaft secured by a steel cable. One of the OSU devices on the drawing board — which the engineers describe as a "permanent magnet linear generator" — works like this: A spiral of copper wire is secured inside a 12- by 15-foot buoy made of an impervious composite of plastic and fiberglass. The coil surrounds a magnetic shaft, which is stationary and tethered to the ocean floor by a steel cable. As the buoy rises and falls on the waves, the coil moves up and down relative to the shaft, inducing voltage as it passes through the magnetic field. A power take-off cable carries the resulting electric current about 100 feet down to the seafloor, where another cable takes the power generated by many buoys to an onshore substation.

One buoy is projected to generate 100 kilowatts of power, on average. A network of about 500 such buoys could power downtown Portland. Moreover, wave parks could address the state's energy imbalance. West of the Cascades, Oregon consumes about 1,000 megawatts more than it generates. By tapping about 5 percent of the coastline, wave energy could make up the difference, and no new transmission lines would be needed.

The engineers' goal is to produce a device that is lean and streamlined, designed to withstand gale-force winds, monster storms and the vagaries of sea life, from rafts of floating bull kelp to colonies of seals looking for a place to haul out. The engineers are now working on their fourth and fifth prototypes. They call their simplified approach to energy conversion "direct drive." The fishermen just call it common sense. As one lifelong Oregon fisherman, Terry Thompson, puts it, "There's a rule of working in the ocean that fishermen use that goes, 'Keep it simple, stupid.'" Wallace and von Jouanne agree. "Simplicity is the essence of it," Wallace says.

However, embedded in their design is a great deal of engineered precision. The magnetic shafts are made of a steel alloy that creates an exceptionally strong force field. The highly conductive "air-gap" coils are made of solid copper instead of the more common combination of copper and steel used in generator armatures. Thus, the conversion of mechanical motion (waves) into electrical energy can take place with great efficiency and efficacy. The engineers develop their prototypes in OSU's Motor Systems Resource Facility, the highest-power motor and drives testing lab at any U.S. university, and test them across campus in the O.H. Hinsdale Wave Research Laboratory, which boasts a 342-foot flume. But it will be in Reedsport that the wave-energy buoys meet their real test: the Pacific Ocean.
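The article's output figures can be sanity-checked with back-of-envelope arithmetic; the short Python sketch below uses only numbers quoted above.

# All inputs are the article's own estimates.
avg_buoy_output_kw = 100          # projected average output per buoy
portland_network_size = 500       # buoys said to power downtown Portland

park_output_mw = avg_buoy_output_kw * portland_network_size / 1000
print(park_output_mw)             # 50.0 MW for the Portland-scale network

regional_deficit_mw = 1000        # west-of-the-Cascades consumption gap
buoys_to_close_gap = regional_deficit_mw * 1000 / avg_buoy_output_kw
print(buoys_to_close_gap)         # 10000.0 buoys to make up the difference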
Of all the waves washing across the planet, Oregon's are optimal for extracting energy, according to a study by the Palo Alto, California-based Electric Power Research Institute (EPRI). That's because on the West Coast, the trade winds blow strong and steady, and the seafloor is a long, gentle slope, a configuration that lends itself to good wave action. And then there's the old mill just north of Reedsport. In addition to its 50-megawatt electrical substation, it has an outflow pipe stretching 3,000 feet into the ocean — a ready-made conduit for the subsea power cable bringing electricity back to shore.

Von Jouanne and Wallace have been working closely with Justin Klure of the Oregon Department of Energy to promote the Reedsport/Gardiner area as an optimal location for the nation's first commercial wave park. Several developers have stepped forward with first planned phases in the 20- to 30-megawatt range. Manufacturing and fabrication would be performed locally, meaning job opportunities for coastal Oregonians. At about one to three miles offshore, the park will be invisible from the beach, thus preserving views, but close enough to make anchoring and transmission feasible.

Read more on this story at Terra Magazine.
The Xconq networking strategy is based on having each connected program maintain a complete and accurate game state, and propagating user commands from one of the programs to all others. Thus all programs are equal; but in order to serialize multiple simultaneous user commands, one of the programs is a "first among equals" called the master. If a player interacting with a non-master clicks to move, then the command actually just goes to the master and doesn't take effect until the master propagates the command to all programs. The high-level protocol for all this is part of the Xconq kernel, and includes buffering, checksums, and other features. The OS interface code need only provide definitions for these functions:

void open_remote_connection(char *methodname)
    Establishes a connection to another program, using information supplied in methodname.

void low_send(int id, char *buf)
    Sends the contents of buf to the program whose remote id is id.

int low_receive(int *idp, char *buf, int maxchars, int timeout)
    Waits for data from the given remote program, possibly timing out. Returns TRUE if data was received.

void close_remote_connection(void)
    Takes down and cleans up the connection.
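As a toy illustration of the master-relay idea (this is not Xconq source; all names below are invented), the Python sketch shows how funnelling every command through the master serializes simultaneous commands and keeps each program's copy of the game state identical:

class Program:
    # Each connected program keeps a complete copy of the game state.
    def __init__(self, name):
        self.name = name
        self.state = []

    def apply(self, command):
        self.state.append(command)

class Master(Program):
    # "First among equals": commands take effect only when the master
    # broadcasts them, imposing a single global order on user commands.
    def __init__(self, name):
        super().__init__(name)
        self.peers = [self]

    def submit(self, command):
        for program in self.peers:
            program.apply(command)

master = Master("master")
master.peers += [Program("client-1"), Program("client-2")]
master.submit("move unit 7 north")   # a non-master's click ends up here
assert all(p.state == ["move unit 7 north"] for p in master.peers)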
Using ESA's Herschel space observatory, astronomers have discovered vast comet belts surrounding two nearby planetary systems known to host only Earth-to-Neptune-mass worlds. The comet reservoirs could have delivered life-giving oceans to the innermost planets.

In a previous Herschel study, scientists found that the dusty belt surrounding the nearby star Fomalhaut must be maintained by collisions between comets. In the new Herschel study, two more nearby planetary systems – GJ 581 and 61 Vir – have been found to host vast amounts of cometary debris. Herschel detected the signatures of cold dust at 200°C below freezing, in quantities that mean these systems must have at least 10 times more comets than in our own Solar System's Kuiper Belt.

GJ 581, or Gliese 581, is a low-mass M dwarf star, the most common type of star in the Galaxy. Earlier studies have shown that it hosts at least four planets, including one that resides in the 'Goldilocks zone' – the distance from the central sun where liquid surface water could exist. Two planets are confirmed around the G-type star 61 Vir, which is just a little less massive than our Sun. The planets in both systems are known as 'super-Earths', covering a range of masses between 2 and 18 times that of Earth. Interestingly, however, there is no evidence for giant Jupiter- or Saturn-mass planets in either system.

The gravitational interplay between Jupiter and Saturn in our own Solar System is thought to have been responsible for disrupting a once highly populated Kuiper Belt, sending a deluge of comets towards the inner planets in a cataclysmic event that lasted several million years. "The new observations are giving us a clue: they're saying that in the Solar System we have giant planets and a relatively sparse Kuiper Belt, but systems with only low-mass planets often have much denser Kuiper belts," says Dr Mark Wyatt from the University of Cambridge, lead author of the paper focusing on the debris disc around 61 Vir. "We think that may be because the absence of a Jupiter in the low-mass planet systems allows them to avoid a dramatic heavy bombardment event, and instead experience a gradual rain of comets over billions of years."

Which might mean no 'life.'
Primary colors: primary only for us

Q: Why are red, green, and blue the primary colors?

A: What we call "primary colors" has nothing to do with the nature of light. It depends rather on how we humans detect light. "Color really refers to a sensation that a particular wavelength of light [produces]," says Brian DeBroff, ophthalmology professor at the Yale School of Medicine.

Human eyes have two types of cells for absorbing light: rods and cones. The rods, used mostly for night vision, detect primarily light and dark; that is why you see little color at night. The cones detect color and favor some colors over others. They absorb blue, green, and red light best. That's why those colors are called "primary." We perceive white when the cones absorb light containing equal amounts of the three primary colors. By adding together primary-color responses, our eyes can see almost any color.

How do we perceive light at all? Special pigments in the rods and cones absorb light energy and change it into chemical energy, which, in turn, stimulates an impulse within a nerve cell, says John Meyer, entomologist at North Carolina State University.

Colors, however, aren't universal even among humans. Some languages don't have separate words for two primary colors: green and blue, for example, even though these people see both colors. All languages name black and white. Eskimos use 17 words for white as applied to different snows. If a language has a word for another color, it most likely is red, the color of blood.

Different species have different primary colors. Birds are sensitive to red. Bees are blind to red but can see ultraviolet, beyond our range of color perception. Dogs can barely perceive any color.

(Answered by April Holladay, science correspondent, April 24, 2002)
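The additive mixing described above is easy to model. The toy Python sketch below treats a color as an (R, G, B) triple of cone-like responses; it is a simplification of additive color, not of actual retinal physiology:

def mix(*colors):
    # Sum the per-channel responses, capped at full intensity.
    return tuple(min(1.0, sum(c[i] for c in colors)) for i in range(3))

RED, GREEN, BLUE = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)

print(mix(RED, GREEN))         # (1.0, 1.0, 0.0): perceived as yellow
print(mix(RED, GREEN, BLUE))   # (1.0, 1.0, 1.0): equal primaries read as white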
Diversity of Foliar Endophytes in Wild and Cultured Metrosideros polymorpha Inferred from Environmental PCR and ITS Sequence Data

Above me, the forest was alive. Tree fern fronds brushed against each other, honeycreepers piped melodies as they flitted between towering branches, and the trees stood anchored strongly in the soil, their waxy leaves stretching 200 feet higher into the canopy. As my gaze shifted to the fallen leaves surrounding my black rubber boots, I spotted a tiny white mushroom, almost angelic in its presence. The mushroom was no more than an inch tall, with a fragile opaque stem topped by a gilled white cap. It was growing out of an ohi'a leaf that was dark and slimy after many days on the damp and decaying forest floor. As I inspected the other fallen leaves, I noticed that only those of ohi'a trees hosted these tiny fungi. I was astounded that the mushroom was species-specific in its choice of host, and I began to wonder how this mushroom had originally become associated with ohi'a trees. It dawned on me that not only is the forest a system, but each individual tree is an intricately connected hub of ecological activity. Each leaf, even, could be host to many species of fungi.

Fungal foliar endophytes are fungi that live within healthy leaves. The relationship between these fungi and their hosts is symbiotic in some cases: the fungi benefit their hosts via increased drought tolerance, protection against herbivores, and resistance to pathogens (Higgins 2007). In other cases, the relationship is antagonistic. Fungal symbioses with plants are key determinants of biomass, nutrient cycling, and ecosystem productivity (Arnold 2007). Despite many years of research and hundreds of journal articles that mention endophytic fungi, the diversity, phylogenetic position, and relationship of foliar endophytes with their plant hosts are still largely unknown.

The fossil record indicates that plants have been associated with endophytic fungi for more than 400 million years, and that these fungi were likely associated with the first plants to colonize land. Thus, the fungal community within plants has played a significant role in the evolution of life on land. This coevolution involves the reduction of enzymatic capabilities in the fungi, increasing their dependence on the host plant to provide nutrients for growth, and an increase in their production of particular secondary metabolites beneficial in the process of symbiosis (Rodriguez 2009).

Very little work has been done on studying the endophytic communities associated with foliage in Hawai'i, and even less on those of native Hawaiian plants. Given the apparent significance of fungal endophytes to the evolution and overall survival of plant species across the globe, I was very excited to embark on a critical study of the fungal foliar endophytic communities associated with the native Hawaiian plants in my backyard. Work by Hoffman et al. in 2009 points to a need to compare endophytic communities within different sites and plant communities, which would indicate the role of surrounding species on endophytic community composition. Rodriguez et al. in 2009 questioned the evolutionary dynamics of habitat-adapted symbiosis. It is apparent that a study involving the endophytic community of a native Hawaiian species would be not only groundbreaking, but also a crucial element in understanding the role of the foliar endophytic community in the overall health of the plant host within different habitats, natural or not.
The first step in such an understudied field is the most fundamental. With functional application toward conservation in mind, the species chosen for study should be common and widespread in the habitat. Study of the foliar fungal endophytic community of a threatened rare species might seem more critical, but a baseline study utilizing a more dominant species is certainly more functional. Ohi'a lehua (Metrosideros polymorpha) is a common native Hawaiian plant. The word polymorpha means "many forms": ohi'a trees are common in tropical jungles near the ocean shore as well as along the volcanic rim on our island mountains. The towering tree whose slimy leaves I had noticed in the rain forest near my home is an ideal model for native plant communities across the island chain.

As native species become more threatened, garden and greenhouse propagation, followed by outplanting into natural environments, will become increasingly common. Because we know so little about endophytic fungi, the appropriate place to start would be to investigate whether the environment or the host plays the principal role in determining the structure of the fungal foliar endophytic community. The endophytic community within an ohi'a that has grown from seed in a greenhouse may be different from that of an ohi'a plant that has grown in its natural environment, surrounded by other native species that may have contributed crucial elements to its fungal foliar endophytic community.

The main objective of this study is to determine the composition of the foliar fungal endophytic community associated with ohi'a lehua (Metrosideros polymorpha), a common native Hawaiian tree species (Figures 1 and 2). The endophytic communities of M. polymorpha from different elevations (8,100 feet and 350 feet) and from wild and cultivated trees were investigated using live cultures, environmental PCR, cloning, and molecular sequencing. The variable of elevation was introduced in addition to the variable of environment in order to compare two phenotypes of the same species and to broaden the application and implications of this project. The results of this investigation will further our understanding of the similarities between a plant's endophytic community and its surrounding environment. This relationship has implications that could be a critical element of native plant conservation. If individual ohi'a trees growing at different elevations are shown to harbor similar fungal foliar endophytic communities, then it can be inferred that host selection for particular fungal endophytes exists. This inference could be imperative to outplanting efforts. If the communities of fungi are different, then it can be inferred that the surrounding environment largely influences the composition of the fungal foliar endophytic community in a particular tree.

METHODS

Approximately 20 mature leaves per tree were collected. Leaves with galls or other markings were avoided (Figure 3). The samples were kept refrigerated in sealed bags until sterilization. Four trees (two replicates from each elevation) were sampled from the Common Garden in Volcano (elevation 3,600 feet). Leaves of two trees at each elevation (350 feet and 8,100 feet) from the same aged lava flow were collected as well, for a total of eight samples. Figure 4 shows an ohi'a plant that originated from a population at 350 feet elevation growing in the Common Garden. Figure 5 shows an ohi'a plant growing about 50 meters away that originated from a population at 8,100 feet elevation.
The leaves in Figure 4 are waxy and large, whereas those of Figure 5 are smaller and pubescent (furry). Leaves were individually rinsed with running tap water to remove loose surface particles, then submerged in a series of solutions to surface-sterilize them so that the only DNA being analyzed is that inside the leaf. The submersion sequence is as follows: 30 seconds in 95% ethanol, two minutes in 10% chlorine bleach, and two minutes in 70% ethanol (Figure 6). In preparation for DNA extractions, four leaf disks per tree were placed into drying tubes to ensure a sterile environment. Tissue samples (5-10 mg of dried leaf material) were lysed in Lysing Matrix A Tubes using the FastPrep instrument for 20 seconds at a speed setting of 4.0. Total genomic DNA, including that of fungal endophytes, was isolated using the DNeasy Plant Mini Kit (Qiagen Inc.), following the manufacturer's protocol. The fungal-specific primers ITS1F and ITS4 were used to amplify the internal transcribed spacer (ITS) region of the fungal nuclear genome. The PCR protocol consisted of an initial denaturation for three minutes at 96°C, followed by 35 cycles of 94°C for 30 seconds, 55°C to 58°C for 60 seconds, and 72°C for 60 seconds, followed by a final extension at 72°C for 10 minutes. PCR quality and yield were analyzed via gel electrophoresis using a 1% agarose gel. Cloning reactions were performed using a TOPO-TA Cloning Kit for Sequencing (Invitrogen Inc.), following the manufacturer's protocol. For each cloning reaction, 6 to 10 colonies were isolated and screened for the appropriate fungal DNA insert, using restriction digest analysis and PCR.

Restriction Digest Analysis

Restriction digest analysis was performed using a TOPO-TA Cloning Kit for Sequencing (Invitrogen Inc.), following the manufacturer's protocol and using the endonuclease enzyme EcoRI.

PCR (to Analyze Transformants)

PCR was performed to analyze transformants using a TOPO-TA Cloning Kit for Sequencing (Invitrogen Inc.), following the manufacturer's protocol and using the primers ITS1F and ITS4. PCR products were purified for sequencing using a QIAquick PCR Purification Kit (Qiagen Inc.) according to the manufacturer's protocol. Cleaned PCR products were submitted to Elim Biopharmaceuticals Inc. for sequencing. ITS data were recovered from 39 clones. Sequence data were screened and manually edited using Sequencher 4.0. BLAST was used to obtain the closest matches to existing sequences in the NCBI GenBank database. MacClade was used to manually align 37 clones (clones 1F and 4B were removed as outliers), and GARLI was used to generate a maximum likelihood phylogenetic hypothesis. PAUP* was used to calculate pairwise genetic distances between the cloned ITS sequences.

DATA ANALYSIS AND DISCUSSION

This study was conducted to document the diversity of the foliar fungal endophytic community associated with ohi'a lehua (Metrosideros polymorpha) and to assess whether the environment or the host determines the endophytic community structure of different elevational phenotypes. Sequencing of the ITS region was utilized to identify the fungal endophytes associated with wild and cultivated trees. In the Common Garden, the trees had been cultivated in an environment foreign to the one to which they had adapted, surrounded by species such as kahili ginger (Hedychium gardnerianum), hapu'u pulu (Cibotium splendens), and banana poka (Passiflora tripartita). At the low elevation site, the dominant species present were uluhe (Dicranopteris linearis) and strawberry guava (Psidium cattleianum).
At the high elevation site, the species composition was almost entirely native, consisting of a'ali'i (Dodonaea viscosa) and pukiawe (Styphelia tameiameiae). Restriction digest analysis was performed using the endonuclease enzyme EcoRI to remove the fungal DNA insert from the E. coli plasmid. Unfortunately, an EcoRI site was found to exist within the middle of the DNA insert, so the EcoRI enzyme not only digested the EcoRI sites on each end of the insert, but also cleaved the insert in half. The method was still useful for a baseline assessment of the DNA present, as sequences can be compared by fragment length along the ladder. As shown in Figure 7, the ITS sequences recovered from M. polymorpha leaves are divided into two distinct clades, indicating that high and low elevation phenotypes harbor distinct endophytic communities. Each of these clades in turn is composed entirely of endophytes from both wild and cultivated trees of a single elevational phenotype. This suggests that the host, not the growing environment, determines endophytic community composition. Figure 8 confirms that the ITS sequences recovered from environmental PCR are nearly identical within elevational phenotypes, and not within growing environments. Figure 9 illustrates that mean pairwise genetic distances between sequenced endophytes of low elevation phenotypes are greater than those of high elevation phenotypes. This suggests that low elevation phenotypes of M. polymorpha harbor greater genetic diversity than do those at high elevation. Additionally, the mean pairwise genetic distances of endophytes within cultivated trees of both elevational phenotypes are greater than those in a wild environment. The findings of this experiment, from both live cultures and environmental PCR and ITS sequence data, support the conclusion that the host, not the environment, determines the foliar fungal endophytic community of M. polymorpha. Furthermore, high and low elevation phenotypes harbor distinct endophytic communities. Host, not locality (wild versus cultivated), determines endophytic community composition. Low elevation phenotypes of M. polymorpha harbor a greater genetic diversity of fungal endophytes than do those of high elevation. Samples of M. polymorpha from a cultivated environment harbor greater genetic diversity of fungal endophytes than do those from a wild environment. These findings indicate that there is likely a previously undocumented level of selection by plant hosts of fungal endophytes in Hawaiian botanical communities. This study is the first of its kind: the foliar fungal endophytic community of M. polymorpha has never been studied using environmental PCR. The findings are intriguing both because they are novel and because they bear directly on conservation efforts for native Hawaiian plants, and possibly for plants worldwide. They may also point to a previously undocumented evolutionary relationship between foliar fungal endophytes and their hosts. Further studies should examine the surrounding species in each sampled environment to determine how the surrounding community contributes to the endophytic community of M. polymorpha.
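The pairwise genetic distances summarized in Figure 9 boil down to counting differences between aligned sequences. As a rough illustration of the idea behind the simplest (uncorrected) distance PAUP* can compute, here is a sketch in Python; the two "clones" are made-up toy sequences, not data from this study:

    # Uncorrected p-distance: the fraction of compared sites that differ,
    # skipping any position where either sequence has an alignment gap ("-").
    def p_distance(seq_a, seq_b):
        pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
        mismatches = sum(a != b for a, b in pairs)
        return mismatches / len(pairs)

    clone_x = "ACCTGGTA-CGTTAGC"   # hypothetical aligned ITS fragments
    clone_y = "ACTTGGTAACGATAGC"
    print(f"p-distance = {p_distance(clone_x, clone_y):.3f}")

PAUP* also offers model-corrected distances, but the intuition is the same: more mismatches per site means more divergent endophyte sequences.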
Further studies could indicate whether introduced endophytes are harmful, as well as the individual role of each endophyte within the plant's endophytic community. A study comparing the fungi discovered within ohi'a trees to those in a sampling of surrounding plants would help determine the true effect the environment has on foliar fungal endophytes. I hope that with this research as a baseline, the role of foliar fungal endophytes in native Hawaiian plants can be assessed and one day utilized in conservation efforts, so that the breathtaking forestscape of my backyard is a sight for my children's children to behold.

ACKNOWLEDGMENTS

I would like to thank Dr. Brian Perry for his constant guidance and mentorship. The funding for this project was provided by his lab fund at the University of Hawai'i at Hilo. My mother, Betsy Kodis, and my advisor, Jamie Nekoba, both also provided invaluable support so that I could complete this research.

REFERENCES

Arnold, Elizabeth, et al. "Diversity and phylogenetic affinities of foliar fungal endophytes in loblolly pine inferred by culturing and environmental PCR." Mycologia 99.2 (2007): 185-206.
Arnold, Elizabeth, and Francois Lutzoni. "Diversity and Host Range of Foliar Fungal Endophytes: Are Tropical Leaves Biodiversity Hotspots?" Ecology 88.3 (2007): 541-549.
Brown, Alfred E. Benson's Microbiological Applications. 10th ed. Boston: McGraw Hill, 2007, pp. 95-96.
Hata, K., R. Atari, and K. Sone. "Isolation of endophytic fungi from leaves of Pasania edulis and their within-leaf distributions." Mycoscience 43.5 (2002): 369-73. Retrieved 5 Oct 2009 from http://www.springerlink.com/content/h2utgd8tknvpmgv1
Hoffman, Michelle, et al. "Molecular Analysis Reveals a Distinctive Fungal Community Associated with Foliage of Montane Oaks in Southeastern Arizona." Journal of the Arizona-Nevada Academy of Science 40.1 (2008): 99-100.
Higgins, Lindsay, et al. "Phylogenetic relationships, host affinity, and geographic structure of boreal and arctic endophytes from three major plant lineages." Molecular Phylogenetics and Evolution 42 (2007): 543-555.
Rodriguez, R.J., et al. "Fungal endophytes: diversity and functional roles." New Phytologist 182 (2009): 313-330.
White, D.A. "Mycology and Taxonomy." 20 Nov 2009. Retrieved 10 Dec 2009 from http://www.ontarioprofessionals.com/weird3.htm
Zimmerman, Naupaka. Telephone interview. 3 Nov 2009.
Comet Hale-Bopp, the Great Comet of 1997, became much brighter than any surrounding stars. It was seen even over bright city lights. Away from city lights, however, it put on quite a show. Here Comet Hale-Bopp was photographed above Val Parola Pass in the Dolomite mountains of Italy. The comet's blue ion tail, consisting of ions from the comet's nucleus, is pushed out by the solar wind. The white dust tail is composed of larger particles of dust from the nucleus driven by the pressure of sunlight, that orbit behind the comet. Observations showed that Comet Hale-Bopp's nucleus spins about once every 12 hours. A comet that may well exceed Hale-Bopp's peak brightness is expected to fall into the inner Solar System next year. Credit & Copyright: A. Dimai (Col Druscie Obs.)
Arctic ice melt could see rise of "Grolar bear"

LONDON: Scientists have suggested that, due to the adverse effects of Arctic ice melting, the hybrid of a polar bear and a grizzly bear - dubbed the "grolar bear" - might rise in numbers. According to a report, the effects of climate change mean that the hybrid bears could become more common as their habitats increasingly overlap due to global warming. "One of the real things that is happening is that grizzlies are moving north, at the same time the polar bears are forced to be on the beach and we have found a number of grizzly bear polar bear hybrids," said biologist Dr George Divoky, who has worked in the Arctic region for over three decades. "Essentially that could mean that it would save the polar bear genes in the grizzly population," he added. Biologists have already spotted the hybrid species.
A translation or slide involves moving a figure in a specific direction for a specific distance. Vectors are often used to denote translation, because the vector communicates both a distance (its length) and a direction (the way it is pointing). The vector shows both the length and direction of the translation. A glide reflection is a combination of two transformations: a reflection over a line, followed by a translation in the same direction as the line. Reflect over the line shown; then translate parallel to that line. Only an infinite strip can have translation symmetry or glide reflection symmetry. For translation symmetry, you can slide the whole strip some distance, and the pattern will land back on itself. For glide reflection symmetry, you can reflect the pattern over some line, then slide in the direction of that line, and it looks unchanged. The patterns must go on forever in both directions.
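A small worked example may make the two maps concrete (the coordinates are chosen purely for illustration). Translating by the vector $\langle 3, 1 \rangle$, and glide-reflecting across the $x$-axis with a slide of 3:

$$T(x, y) = (x + 3,\; y + 1), \qquad T(2, 5) = (5, 6)$$

$$G(x, y) = (x + 3,\; -y), \qquad G(2, 5) = (5, -5)$$

Note that applying the glide reflection twice gives $G(G(x, y)) = (x + 6,\; y)$, a pure translation, which is why an infinite strip with glide reflection symmetry automatically has translation symmetry as well.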
Joined: 16 Mar 2004
Posted: Fri May 15, 2009 2:29 pm
Post subject: Researchers Develop Novel Renewable Energy Materials

University of Queensland researchers have made a ground-breaking discovery that produces highly efficient miniature crystals which could revolutionise the way we harvest and use solar energy. Professor Max Lu, from UQ's Australian Institute for Bioengineering and Nanotechnology (AIBN), said they were one step closer to the holy grail of cost-effective solar energy with their discovery. "We have grown the world's first titanium oxide single crystals with large amounts of reactive surfaces, something that was predicted as almost impossible," Professor Lu said. "Highly active surfaces in such crystals allow high reactivity and efficiency in devices used for solar energy conversion and hydrogen production. "Titania nano-crystals are promising materials for cost-effective solar cells, hydrogen production from splitting water, and solar decontamination of pollutants. "The beauty of our technique is that it is very simple and cheap to make such materials at mild conditions. "Now that the research has elucidated the conditions required, the method is like cooking in an oven and the crystals can be applied like paints."

Figure caption: A new chemical method guides micrometer-sized TiO2 crystals to expose a large fraction of reactive (001) facets versus nonreactive (101) facets. Credit: Chenghua Sun/Australian Inst. Bioengineering & Nanotechnology

Professor Lu, who was recently awarded a second prestigious Australian Research Council Federation Fellowship, said it wasn't just renewable energy where this research could be applied. "These crystals are also fantastic for purifying air and water," he said. "The same principle for such materials to convert sunlight to electricity is also working to break down pollutants in water and air. "One could paint these crystals onto a window or a wall to purify the air in a room. "The potential of applications of this technology in water purification and recycling are huge." Professor Lu said it would be about five years for the water and air pollution applications to be commercially available, and about 5 to 10 years for the solar energy conversion using such crystals. He said the breakthrough technology was a great example of cross-discipline collaborations, with work by Professor Sean Smith's Computational Molecular Science group at AIBN, who conducted key computational studies and helped the experimentalist researchers to focus on specific surface modification elements for control of the crystal morphology. "First-principle computational chemistry is a powerful tool in aiding the design and synthetic realisation of novel nanomaterials, and this work is a beautiful example of the synergy," Professor Smith said. Professor Lu said the work was also the result of a very fruitful and long-term international collaboration with Professor Huiming Cheng's group from the Chinese Academy of Sciences, a world-class institution with which UQ has many productive research collaborations. The research, which was produced with colleagues Huagui Yang, Chenghua Sun, Shizhang Qiao, Gang Liu, Jin Zou, has been published in the latest edition of scientific journal Nature (doi:10.1038/nature06964).
Mars Rover Command and Control

Name: David E.
Date: February 2004

How does NASA send an instruction to one of the Mars Rover crafts? How does this command travel to the Rover, and how long before the instruction is received?

Very thought provoking question. Using the speed of light as the speed of the radio transmissions, it takes anywhere from 4 to 20 minutes for a one-way trip of radio waves to Mars. I am not totally certain about this, but I believe that NASA uses Earth-based, extremely high gain antennas at relatively high power. Why? Sure, it is all well and good that the communications are "line of sight" between Mars and our planet. However, consider the distance. It takes 1.28 seconds (approximately) for a signal to get to the Moon (one-way trip). It takes 188 times that long (at the nearest Mars-Earth planetary distance) to get a signal to Mars. And physics tells us that radiation (or radio wave density), usually measured in watts/m^2, will drop according to the 1/r^2 rule. To put it in simpler terms: transmit signal A to our Moon and it hits right smack in the middle of the crater Tycho with a radiation power intensity (or signal strength) of, let's say, 0.0001 W/m^2. Now let us aim that same signal A at Mars and target the rover. The rover will "see" the signal at a strength of about 0.0001 / (188^2) W/m^2 ... or ... 2.8 x 10^(-9), that is, 0.0000000028 watts per square meter. My guess is that NASA does in fact use ground-based high power transmitters into high gain antennas just to make sure they can get as much radiation density (radio wave strength) onto the Martian surface as possible ... or at least as allowed by the FCC ;)
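The arithmetic in that answer is easy to reproduce. A quick sketch in Python (the flux value is the answerer's assumed figure, not a real link budget):

    # One-way light time and inverse-square dilution, using the answer's numbers.
    C = 299_792_458.0                 # speed of light, m/s
    MOON_DIST = 3.84e8                # mean Earth-Moon distance, m
    MARS_DIST = 188 * MOON_DIST       # "188 times that long," per the answer

    print(f"Moon one-way delay: {MOON_DIST / C:.2f} s")        # ~1.28 s
    print(f"Mars one-way delay: {MARS_DIST / C / 60:.1f} min") # ~4 min at closest

    flux_moon = 1e-4                  # assumed signal strength at the Moon, W/m^2
    flux_mars = flux_moon / 188**2    # 1/r^2 falloff at 188x the range
    print(f"Flux at Mars: {flux_mars:.1e} W/m^2")              # ~2.8e-9 W/m^2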
by Stefan Rahmstorf, Michael Mann, Rasmus Benestad, Gavin Schmidt, and William Connolley

On Monday August 29, Hurricane Katrina ravaged New Orleans, Louisiana, and Mississippi, leaving a trail of destruction in her wake. It will be some time until the full toll of this hurricane can be assessed, but the devastating human and environmental impacts are already obvious. Katrina was the most feared of all meteorological events, a major hurricane making landfall in a highly populated low-lying region. In the wake of this devastation, many have questioned whether global warming may have contributed to this disaster. Could New Orleans be the first major U.S. city ravaged by human-caused climate change?

The correct answer – the one we have indeed provided in previous posts (Storms & Global Warming II, Some recent updates, and Storms and Climate Change) – is that there is no way to prove that Katrina either was, or was not, affected by global warming. For a single event, regardless of how extreme, such attribution is fundamentally impossible. We only have one Earth, and it will follow only one of an infinite number of possible weather sequences. It is impossible to know whether or not this event would have taken place if we had not increased the concentration of greenhouse gases in the atmosphere as much as we have. Weather events will always result from a combination of deterministic factors (including greenhouse gas forcing or slow natural climate cycles) and stochastic factors (pure chance). Due to this semi-random nature of weather, it is wrong to blame any one event such as Katrina specifically on global warming – and of course it is just as indefensible to blame Katrina on a long-term natural cycle in the climate.

Yet this is not the right way to frame the question. As we have also pointed out in previous posts, we can indeed draw some important conclusions about the links between hurricane activity and global warming in a statistical sense. The situation is analogous to rolling loaded dice: one could, if one were so inclined, construct a set of dice where sixes occur twice as often as normal. But if you were to roll a six using these dice, you could not blame it specifically on the fact that the dice had been loaded. Half of the sixes would have occurred anyway, even with normal dice. Loading the dice simply doubled the odds. In the same manner, while we cannot draw firm conclusions about one single hurricane, we can draw some conclusions about hurricanes more generally. In particular, the available scientific evidence indicates that it is likely that global warming will make – and possibly already is making – those hurricanes that form more destructive than they otherwise would have been.

The key connection is that between sea surface temperatures (we abbreviate this as SST) and the power of hurricanes. Without going into technical details about the dynamics and thermodynamics involved in tropical storms and hurricanes (an excellent discussion of this can be found here), the basic connection between the two is actually fairly simple: warm water, and the instability in the lower atmosphere that is created by it, is the energy source of hurricanes. This is why they only arise in the tropics and during the season when SSTs are highest (June to November in the tropical North Atlantic). SST is not the only influence on hurricane formation.
Strong shear in atmospheric winds (that is, changes in wind strength and direction with height in the atmosphere above the surface), for example, inhibits development of the highly organized structure that is required for a hurricane to form. In the case of Atlantic hurricanes, the El Niño/Southern Oscillation tends to influence the vertical wind shear, and thus, in turn, the number of hurricanes that tend to form in a given year. Many other features of the process of hurricane development and strengthening, however, are closely linked to SST. Hurricane forecast models (the same ones that were used to predict Katrina's path) indicate a tendency for more intense (but not overall more frequent) hurricanes when they are run for climate change scenarios (Fig. 1).

Figure 1. Model Simulation of Trend in Hurricanes (from Knutson et al., 2004)

In the particular simulation shown above, the frequency of the strongest (category 5) hurricanes roughly triples in the anthropogenic climate change scenario relative to the control. This suggests that hurricanes may indeed become more destructive (1) as tropical SSTs warm due to anthropogenic impacts. But what about the past? What do the observations of the last century actually show? Some past studies (e.g. Goldenberg et al., 2001) assert that there is no evidence of any long-term increase in statistical measures of tropical Atlantic hurricane activity, despite the ongoing global warming. These studies, however, have focused on the frequency of all tropical storms and hurricanes (lumping the weak ones in with the strong ones) rather than a measure of changes in the intensity of the storms. As we have discussed elsewhere on this site, statistical measures that focus on trends in the strongest category storms, maximum hurricane winds, and changes in minimum central pressures suggest a systematic increase in the intensities of those storms that form. This finding is consistent with the model simulations. A recent study in Nature by Emanuel (2005) examined, for the first time, a statistical measure of the power dissipation associated with past hurricane activity (i.e., the "Power Dissipation Index" or "PDI" – Fig. 2). Emanuel found a close correlation between increases in this measure of hurricane activity (which is likely a better measure of the destructive potential of the storms than previously used measures) and rising tropical North Atlantic SST, consistent with basic theoretical expectations. As tropical SSTs have increased in past decades, so has the intrinsic destructive potential of hurricanes.

Figure 2. Measure of total power dissipated annually by tropical cyclones in the North Atlantic (the power dissipation index "PDI") compared to September tropical North Atlantic SST (from Emanuel, 2005)

The key question then becomes this: Why has SST increased in the tropics? Is this increase due to global warming (which is almost certainly in large part due to human impacts on climate)? Or is this increase part of a natural cycle? It has been asserted (for example, by the NOAA National Hurricane Center) that the recent upturn in hurricane activity is due to a natural cycle, e.g. the so-called Atlantic Multidecadal Oscillation ("AMO"). The new results by Emanuel (Fig. 2) argue against this hypothesis being the sole explanation: the recent increase in SST (at least for September as shown in the Figure) is well outside the range of any past oscillations.
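Emanuel's PDI is conceptually simple: accumulate the cube of each storm's maximum sustained wind speed over the storm's lifetime, then sum over all storms in a season. A minimal sketch of that bookkeeping, using hypothetical wind records rather than Emanuel's data or code:

    # PDI-style index: sum over storms of the integral of (max wind speed)^3.
    def power_dissipation_index(storm_tracks, dt_hours=6.0):
        """storm_tracks: per-storm lists of max wind speed (m/s) at fixed intervals."""
        dt = dt_hours * 3600.0  # sampling interval in seconds
        return sum(sum(v**3 for v in track) * dt for track in storm_tracks)

    # Two hypothetical storms, 6-hourly best-track winds in m/s:
    season = [[18, 26, 33, 42, 51, 45, 30], [20, 28, 35, 33, 25]]
    print(f"seasonal PDI ~ {power_dissipation_index(season):.2e} m^3/s^2")

Because the winds are cubed, a season's PDI is dominated by its strongest storms, which is why the index tracks destructive potential better than simple storm counts do.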
Emanuel therefore concludes in his paper that "the large upswing in the last decade is unprecedented, and probably reflects the effect of global warming." However, caution is always warranted with very new scientific results until they have been thoroughly discussed by the community and either supported or challenged by further analyses. Previous analysis of the AMO and natural oscillation modes in the Atlantic (Delworth and Mann, 2000; Kerr, 2000) suggests that the amplitude of natural SST variations averaged over the tropics is about 0.1-0.2 °C, so a swing from the coldest to warmest phase could explain up to ~0.4 °C warming.

What about the alternative hypothesis: the contribution of anthropogenic greenhouse gases to tropical SST warming? How strong do we expect this to be? One way to estimate this is to use climate models. Driven by anthropogenic forcings, these show a warming of tropical SST in the Atlantic of about 0.2-0.5 °C. Globally, SST has increased by ~0.6 °C in the past hundred years. This mostly reflects the response to global radiative forcings, which are dominated by anthropogenic forcing over the 20th century. Regional modes of variability, such as the AMO, largely cancel out and make a very small contribution to the global mean SST changes. Thus, we can conclude that both a natural cycle (the AMO) and anthropogenic forcing could have made roughly equally large contributions to the warming of the tropical Atlantic over the past decades, with an exact attribution impossible so far. The observed warming is likely the result of a combined effect: data strongly suggest that the AMO has been in a warming phase for the past two or three decades, and we also know that at the same time anthropogenic global warming is ongoing.

Finally, then, we come back to Katrina. This storm was a weak (category 1) hurricane when crossing Florida, and only gained force later over the warm waters of the Gulf of Mexico. So the question to ask here is: why is the Gulf of Mexico so hot at present – how much of this could be attributed to global warming, and how much to natural variability? More detailed analysis of the SST changes in the relevant regions, and comparisons with model predictions, will probably shed more light on this question in the future. At present, however, the available scientific evidence suggests that it would be premature to assert that the recent anomalous behavior can be attributed entirely to a natural cycle.

But ultimately the answer to what caused Katrina is of little practical value. Katrina is in the past. Far more important is learning something for the future, as this could help reduce the risk of further tragedies. Better protection against hurricanes will be an obvious discussion point over the coming months, to which as climatologists we are not particularly qualified to contribute. But climate science can help us understand how human actions influence climate. The current evidence strongly suggests that: (a) hurricanes tend to become more destructive as ocean temperatures rise, and (b) an unchecked rise in greenhouse gas concentrations will very likely increase ocean temperatures further, ultimately overwhelming any natural oscillations. Scenarios for future global warming show tropical SST rising by a few degrees, not just tenths of a degree (see e.g. results from the Hadley Centre model and the implications for hurricanes shown in Fig. 1 above). That is the important message from science.
What we need to discuss is not what caused Katrina, but the likelihood that global warming will make hurricanes even worse in future.

1. By 'destructive' we refer only to the intrinsic ability of the storm to do damage to its environment due to its strength. The potential increases that we discuss apply only to this intrinsic meteorological measure. We are not taking into account the potential for increased destruction (and cost) due to increasing population or human infrastructure.

Delworth, T.L., and M.E. Mann. Observed and simulated multidecadal variability in the Northern Hemisphere. Climate Dynamics, 16, 661-676, 2000.
Emanuel, K. Increasing destructiveness of tropical cyclones over the past 30 years. Nature, online publication, 31 July 2005; doi:10.1038/nature03906.
Goldenberg, S.B., C.W. Landsea, A.M. Mestas-Nuñez, and W.M. Gray. The recent increase in Atlantic hurricane activity: Causes and implications. Science, 293: 474-479, 2001.
Kerr, R.A. A North Atlantic climate pacemaker for the centuries. Science, 288, 1984-1986, 2000.
Knutson, T.K., and R.E. Tuleya. Impact of CO2-induced warming on simulated hurricane intensity and precipitation: Sensitivity to the choice of climate model and convective parameterization. Journal of Climate, 17(18), 3477-3495, 2004.
Nov. 9, 2010

Carotenoid pigments are the source of many of the animal kingdom's most vivid colours; flamingos' pink feathers come from eating carotenoid-containing shrimps and algae, and carotenoid colours can be seen among garden birds in blackbirds' orange beaks and blue tits' yellow breast feathers. These pigments play a crucial role in sexual signals. According to the study's lead author Dr Tom Pike of the University of Exeter: "Females typically use carotenoid colours to assess the quality of a potential mate, with more colourful males generally being regarded as the most attractive." This long-held assumption is, however, hard to study, because we see colour very differently to fish, and previous studies have not taken such differences into account, instead comparing only the colours perceived by humans. "The major difference between stickleback vision and our own is that they can see ultraviolet light, which is invisible to humans. This may be important because carotenoids reflect ultraviolet light as well as the reds, oranges and yellows that we can see," Dr Pike explains. The model developed by Dr Pike and colleagues from the University of Glasgow and Nofima Marine in Norway mimics the stickleback's visual system, allowing the researchers to determine what 'colours' the fish see. "The model tells us how much of the light reflected from a carotenoid signal is actually detected by a female and how this information might be processed by her brain, and so gives us exciting new insights into how females may use colour to choose the best mates," says Dr Pike. Male sticklebacks can fine-tune the colours they display to females by varying both the overall amount of carotenoids and the relative amounts of the two constituent carotenoids, the red-coloured astaxanthin and the yellow tunaxanthin. The model reveals that sticklebacks' visual system and coloration are extremely well co-adapted, and that females are surprisingly good at assessing the quantity of carotenoids a male is able to put in his signal -- which previous studies by the authors have shown is linked to his parenting ability. The results will help ecologists get a better understanding of why carotenoid-based signals evolved in the first place, and provide insights into why males use the specific carotenoids they do. According to Dr Pike: "There are many carotenoids in the sticklebacks' diet, but males use only two of them for signalling; because the visual system evolved long before male coloration in this species, it suggests that males 'chose' to use those two carotenoids to make the most of what the female fish sees." The study was funded by the UK's Natural Environment Research Council (NERC).

Thomas W. Pike, Bjørn Bjerkeng, Jonathan D. Blount, Jan Lindström, Neil B. Metcalfe. How integument colour reflects its carotenoid content: a stickleback's perspective. Functional Ecology, 2010; DOI: 10.1111/j.1365-2435.2010.01781.x
Oct. 11, 2012

Whales return only briefly to the surface for great lungfuls of air, and their underwater lifestyles had been a complete mystery until a small group of pioneers from various global institutions – including Malene Simon, Mark Johnson and Peter Madsen – began attaching data-logging tags to these enigmatic creatures. Knowing that Jeremy Goldbogen and colleagues had successfully tagged blue, fin and humpback whales to reveal how they lunge through giant shoals of krill, Simon and her colleagues headed off to Greenland, where they tagged five humpback whales to discover how the animals capture and consume their prey: krill and agile capelin. Attaching individual tags behind the dorsal fin on three of the whales – to record their stroke patterns – and nearer the head on the remaining whales – to better measure head movements – the team successfully recorded high-resolution depth, acceleration and magnetic orientation data from 479 dives to find out more about the animals' lunge tactics. Simon, from the Greenland Institute of Natural Resources, Madsen, from Aarhus University, Denmark, and Johnson, from the University of St. Andrews, UK, report how whales choreograph their foraging lunges at depth in The Journal of Experimental Biology at http://jeb.biologists.org. Analysing the whales' acceleration patterns, Simon saw that as the whales initiated a lunge, they accelerated upward, beating the tail fins (flukes) twice as fast as normal to reach speeds of 3 m/s, which is not much greater than the whales' top cruise speeds. However, while the animals were still beating their flukes, the team saw their speed drop dramatically, although the whales never came to a complete standstill, continuing to glide at 1.5 m/s even after they stopped beating their flukes. So, when did the whales throw their mouths open during this sequence? Given that the top speed attained by the whales during the early stages of the lunge was similar to the animals' cruising speeds, and given that the whales were beating their flukes much harder than usual to maintain that speed, the team concludes, 'The implication is that the mouth must already be open and the buccal [mouth] pouch inflated enough to create a higher drag when the high stroking rates… occur within lunges'. In addition, the team suggests that the whales continue accelerating after opening their mouths in order to use their peak speed to stretch the elastic ventral groove blubber that inflates as they engulf water. Once the buccal pouch is fully inflated, the whales continue beating their flukes after closing their mouths to accelerate the colossal quantity of water, before ceasing fluke movement and slowing to a new speed of 1.5 m/s. Finally, the animals filter the water and swallow the entrapped fish over a 46 s period before resuming beating their flukes as they launch the next lunge. Considering that humpback whales and other rorquals were thought to grind to a halt after throwing their jaws wide, and that reaccelerating their massive bodies from a stationary start was believed to make lunge feeding extortionately expensive, the team's discovery that the animals continue gliding after closing their mouths suggests that lunge feeding may be cheaper than previously thought.
However, the team concedes that, despite the potential reduction in energy expenditure, lunge feeding is still highly demanding – the whale must accelerate the 30 tons of water held in its mouth – although they suggest that the high-speed tactic is essential for the massive hunters to engulf their nimble prey.

M. Simon, M. Johnson, P. T. Madsen. Keeping momentum with a mouthful of water: behavior and kinematics of humpback whale lunge feeding. Journal of Experimental Biology, 2012; 215 (21): 3786. DOI: 10.1242/jeb.071092
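To put a rough number on that demand (round figures assumed for illustration, not measurements from the tagged whales):

    # Kinetic energy to bring the engulfed water up to the reported glide speed.
    m_water = 30_000.0   # kg, the "30 tons of water" in the whale's mouth
    v_glide = 1.5        # m/s, post-engulfment glide speed reported by the team

    kinetic_energy = 0.5 * m_water * v_glide**2
    print(f"~{kinetic_energy / 1000:.0f} kJ per lunge, for the mouthful alone")

That is on the order of 34 kJ per lunge just to re-accelerate the water, before counting the drag of lunging with an open mouth.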
Web edition: November 28, 2012

To catch a skittish lunch, blue whales roll their massive bulk over and do a belly-up lunge if that's what it takes. Video cameras and high-tech tags stuck to whales' backs revealed the underwater acrobatics that let the world's largest living predators survive on some of the tiniest prey, says Jeremy Goldbogen of the Cascadia Research Collective in Olympia, Wash. Blue whales manage to grow to about 30 meters in length (longer than two school buses put together) just by opening their huge gapes and lunging to engulf mouthfuls of seawater and whatever creatures are swimming in it. It's a diet based on soup, and it sustains a whale of a body only if the lunger nabs a dense school of prey. Sticking instruments on 22 blue whales off the southern California coast let Goldbogen and his colleagues study dining styles, including 44 rollovers. During a lunge, the rolling maneuver could position the whale's mouth for the best gulp. Between lunges, rolling over may provide a panoramic scan of what to swallow next, Goldbogen and his colleagues report November 28 in Biology Letters.

J.A. Goldbogen et al. Underwater acrobatics by the world's largest predator: 360° rolling manoeuvres by lunge-feeding blue whales. Biology Letters. Posted online Nov. 28, 2012. doi:10.1098/rsbl.2012.0986
S. Milius. How whales, dolphins, seals dive so deep. Science News. Vol. 157, April 8, 2000, p. 230.
Oracle Certification - SQL Fundamentals I - Ampersand Substitution

Single Ampersand Substitution
When Oracle finds a variable starting with an ampersand (&) character, it attempts to resolve it by first checking whether the variable has been defined using the DEFINE command; otherwise the user is prompted for the value, as in the example below.

Double Ampersand Substitution
When a variable will be repeated with the same value throughout a SQL statement, it is preferable to use a double ampersand (&&) substitution. This way, the first time Oracle comes across the variable it prompts the user, and all subsequent instances of that variable will be given the value entered. Each distinct double ampersand substitution causes a session variable to be created.

DEFINE, UNDEFINE and VERIFY
VERIFY controls whether or not variable substitutions are echoed to the screen. The basic syntax is SET VERIFY ON|OFF. DEFINE creates a session variable; the syntax is DEFINE variable=value;. It will also list all defined session variables if called as DEFINE;. UNDEFINE allows a session variable to be removed; the syntax is UNDEFINE variable. It is also possible to control whether or not Oracle scans for ampersand variables to substitute at all. This can be useful when running a script that has ampersands that are not intended for substitution. The syntax is SET DEFINE ON|OFF.
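A minimal SQL*Plus session showing both forms of substitution (the table and column names are illustrative, not from any particular schema):

    SQL> SET VERIFY ON
    SQL> SELECT last_name FROM employees WHERE department_id = &dept;
    Enter value for dept: 10
    old   1: SELECT last_name FROM employees WHERE department_id = &dept
    new   1: SELECT last_name FROM employees WHERE department_id = 10

    SQL> SELECT &&sort_col FROM employees ORDER BY &&sort_col;
    Enter value for sort_col: last_name

    SQL> DEFINE sort_col
    DEFINE SORT_COL        = "last_name" (CHAR)
    SQL> UNDEFINE sort_col

With VERIFY ON, the old/new lines echo each substitution; the double-ampersand variable is prompted for only once, reused for both occurrences, and left behind as a session variable until UNDEFINE removes it.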
A wheel spider is a remarkable creature found in the Namib Desert in southern Africa. Also called a golden wheel spider or a dancing white lady spider, the wheel spider evades predators by burrowing into the sand or by rolling itself into a ball and cartwheeling down sand dunes at a remarkable speed. The wheel spider's primary predator is the parasitic pompilid wasp. One typically expects to find spiders where there is vegetation or other structures in which to build webs to capture prey. The wheel spider is among the more than 400,000 spider species which do not build webs. They are nocturnal hunters, which means they hunt at night, and their prey is insects, which they inject with venom. The wheel spider's venom is not considered to be harmful to humans, however. Typical wheel spiders are about three-fourths of an inch (20 mm) in diameter. They are also called dancing white lady spiders because of their color. The spider camouflages itself against the sand dunes through its unique whitish color, blending into the sand around it. Its species name is Carparachne aureoflava, and it is among a family of so-called huntsman spiders, also called giant crab spiders because of their appearance. During the day, a wheel spider rests mostly protected from predators within a burrow it digs in the sand. The burrow can extend more than 15 inches (40 cm) below the surface of the sand. In the process, the spider astoundingly shifts more than 80,000 times its own body weight to dig the burrow. The wheel spider is so named because of its technique to avoid predators. It rolls into a ball and flings itself down a sand dune. Rolling at a rate of as much as 44 turns per second, the wheel spider can outrun a wasp. It then attempts to burrow itself into another hole before hunting again at night. The primary predator for the wheel spider is the parasitic pompilid wasp, commonly called a spider wasp. The pompilid wasp is a solitary wasp which uses the spider as a host for feeding its larvae. The spider is paralyzed by the wasp's stinger and taken to another inconspicuous location where there is already a nest, or where the wasp will build one. There, the wasp will lay an egg on the abdomen of the spider, which is still alive, and the egg will eventually turn into another wasp. The spider dies at some point in this process.
Learning OOP in PHP ASAP! PHP is so much more than a scripting language. It’s a full-fledged language capable of building very complex applications. By harnessing the full power of Object Oriented Programming, you can reduce the amount of time you spend coding and use it to build better websites. This tutorial will show you how. OOP stands for Object Oriented Programming. OOP is a programming paradigm wherein you create “objects” to work with. These objects can then be tailored to your specific needs, to serve different types of applications while maintaining the same code base. Quite useful indeed. Read the rest of this article on Net Tuts+
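As a taste of what that looks like in practice, here is a minimal sketch (a toy example of my own, not code from the Nettuts+ article): one class acts as a reusable blueprint, and each object tailors it with its own data.

    <?php
    // One class, many objects: the blueprint is written once and reused.
    class Page
    {
        private $title;

        public function __construct($title)
        {
            $this->title = $title;
        }

        public function render()
        {
            return '<h1>' . htmlspecialchars($this->title) . '</h1>';
        }
    }

    $home  = new Page('Home');
    $about = new Page('About Us');
    echo $home->render();   // <h1>Home</h1>
    echo $about->render();  // <h1>About Us</h1>
    ?>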
Many of you have probably heard about asteroid 2005 YU55, the massive rocky body that tomorrow night will pass the planet safely (not, as the doomsayers would have it, collide with Earth in a ball of flames), albeit closer than any asteroid in the last 35 years. And while astronomers are certain we'll be spared this time, the brush with such a massive rock raises important questions about so-called near-Earth objects like asteroids, comets, and meteoroids. So without further ado, here are ten things you probably didn't know about our solar system's more minor bodies.

10. The difference between asteroids, comets, meteoroids, meteors and meteorites

Let's just get this out of the way, shall we? According to NASA's Near Earth Object (NEO) Program, a large, rocky body in orbit around the Sun is referred to as either an asteroid or a minor planet. Asteroids are thought to have been created in the "warmer" solar system, i.e. within Jupiter's orbit. Comets, on the other hand, are believed to have formed in the cold, outer solar system — beyond the orbit of our solar system's outermost planets. Comets and asteroids also differ in composition, the most notable difference being the comet's possession of an icy nucleus, which, when subjected to the relatively warmer temperatures of the inner solar system, begins to vaporize, creating a distinctive glow called a "coma," and a long, bright tail of dust and debris. There are other features that distinguish asteroids and comets, but recent findings continue to blur the lines between the two. Smaller Sun-orbiting particles, thought to originate from comets and asteroids, are known as meteoroids. When a meteoroid enters Earth's atmosphere, it usually vaporizes, becoming a meteor in the process (aka a "shooting star"). If a meteoroid is large enough to make it through Earth's atmo and make landfall without vaporizing completely, it's no longer a meteor, but a meteorite. The same goes for asteroids. (For more info, see NASA's NEO FAQ page, which is also the source of the handy chart featured here.)

9. Meteoroids: there's a lot of them

Seeing as a meteoroid can be classified as pretty much anything bigger than a speck of dust and smaller than an asteroid, it makes sense that there would be quite a few of them orbiting the Sun and burning up in Earth's atmo at any given moment. The International Space Station, for example, is the most heavily shielded spacecraft ever to occupy Earth's orbit. Why? To keep its astronauts and equipment safe from meteoroids. It's estimated that 100,000 of the buggers will make contact with the ISS over the course of its 20-year stint in space, and while the majority of these won't measure larger than a centimeter across, medium-size particles (between 1 cm and 10 cm across) still pose a grave threat to the space station and its crew, and call for impressive-sounding defensive measures like "multi-layered hypervelocity Whipple shields." (For more info on the safety measures employed on the International Space Station, see this informational sheet prepared by NASA's Micrometeoroid and Orbital Debris (MMOD) Protection program.)

8. Your odds of getting smacked by a meteorite

They're slim, even if Earth is constantly being bombarded by meteoroids. The fact is that most of them simply don't survive long enough once breaking atmo to attain meteorite status, and those that do have barely any chance of actually hitting anybody.
Such an event is only confirmed to have happened once, when Ann Hodges of Sylacauga, Alabama (pictured here) was struck in the hip by an eight-pound meteorite after it crashed through her roof and bounced off a radio. Several studies have attempted to calculate the likelihood of a meteorite actually hitting a human target, taking into consideration everything from the average time a person spends outside to the amount of Earth's surface that the average person takes up. One of the most commonly cited figures is from a paper published in Nature in 1985, which calculates the rate of impacts to humans as .005 per year, or once every 180 years.

7. Why you need a telescope to spot 2005 YU55

YU55 is what's known as a C-type — or "carbonaceous" — asteroid, meaning it is especially rich in carbon. The composition of C-type asteroids makes them extremely dark (think darker than charcoal), and therefore difficult to spot with anything weaker than a telescope with at least a 6" mirror. (For those of you with scopes with this much imaging power, be sure to check our instructional on how to spot 2005 YU55 tomorrow night.)

6. 2005 YU55 could help us learn more about Panspermia

Panspermia is the hypothesis that the building blocks for life are found throughout the universe, and are carried around on asteroids, comets and the like. If you've ever heard someone say that life on Earth may have come from space, they were probably talking about Panspermia (or the related astrobiological hypothesis of exogenesis). According to Don Yeomans, manager of NASA's Near-Earth Object Program, the 2005 YU55 flyby presents a unique opportunity to learn about C-type asteroids, without which, Yeomans speculates, humans probably wouldn't exist. Since we rarely get an opportunity to see an asteroid so close (the last time we saw an asteroid like this so close to home was over 30 years ago), astronomers and astrophysicists the world over will be observing the asteroid to learn as much as possible about its composition. Objects like 2005 YU55 play an important part in the Panspermia hypothesis for their role in bringing carbon-based materials to a young Earth. If astronomers were to observe evidence that 2005 YU55 also harbors other organic materials (or even frozen water — see the note above about blurring the lines between asteroids and comets), it would go a long way in supporting the Panspermia hypothesis.

5. Getting hit by 2005 YU55 would suck

Ok, so you almost definitely knew this, but just how much would it suck, exactly? After all, Yeomans says that NASA's Near-Earth Object Program is "extremely confident, 100 percent confident" that 2005 YU55 will not make contact with Earth, but that doesn't mean we can't speculate over how catastrophic it would be were the asteroid actually to hit us. Well, according to Jay Melosh, professor of Earth and atmospheric sciences at Purdue University, if 2005 YU55 were to hit the planet, it would likely produce a crater about 4 miles wide and 1700 feet deep, generating 7+ magnitude earthquakes and, depending on where it struck, devastating tsunami waves (for more info see Impact: Earth!, which is based on Melosh's calculations). Again, will this actually happen? No. But observing the asteroid will help us be better prepared for when one does plot a course for Earth. As Yeomans says, "this [asteroid] is not a threat... But it is an opportunity."

4. Speaking of catastrophic Earth-asteroid impacts, aren't we about due for one of those?
Some of you may have heard about an asteroid named Apophis. Astronomers believe Apophis to be 885 feet across, and estimate that on April 13, 2029, the asteroid will fly within 20,000 miles of Earth's surface — that's closer than the orbit of many of the planet's satellites. Astronomers are confident that Apophis will spare us on its 2029 flyby, but say that there is a 1/250,000 chance that it will pass through a "gravitational keyhole" that would set the asteroid on course for a future planetary impact exactly 7 years later. Could it happen? Yes. Is it likely? No. ...But it could totally happen. 3. But Michael Bay told me we could deflect an incoming asteroid with nuclear weapons... we can do that, right? Actually, yes. Back in 2007, NASA issued a report claiming that the best way to deflect asteroids and other near Earth objects away from Earth was with the use of nuclear devices... in space. Except NASA wouldn't do it by planting it in the asteroid's core, ala Armageddon, they'd do it by triggering the explosion in the vicinity of the asteroid. The power of the explosion would amounts to a nuclear-bomb-sized "nudge" capable of throwing the Asteroid off-course and preventing a collision with Earth. 2. We can even "finesse" an asteroid off a collision course with Earth. That is to say, WITHOUT the use of nuclear explosions... ...because some people think the idea of "weaponizing" space isn't such a good idea, chief among them being the group of astronauts and scientists comprising The B612 Foundation (The Little Prince, anyone?). One of the methods of deflection proposed by The B612 Foundation is to launch a probe of significant mass (weighing 1—2 tons, but depending upon the size of the asteroid) and "parking" it in space — not on the asteroid, but near it. The gravity of the asteroid would pull on the probe, but the mass of the probe would be just enough to pull ever so slightly on the asteroid, as well. If we were to move the probe very slowly with jet propulsion, we could, in theory, gently tug on the asteroid, "finessing" it into a safe orbit. (For more details on this approach, check out this brilliant TED talk on asteroids and the technologies we can use to deflect them away from Earth, delivered earlier this year by Bad Astronomy's Phil Plait at TEDxBoulder. 1. We have found almost all of the asteroids that pose a threat to Earth. We think. Recent findings from NASA's WISE satellite indicate that there are actually far fewer asteroids and near-Earth objects threatening life on Earth than we once thought. What's more, the so called "Near-Earth Asteroid Census" found that 90 percent of NEOs have now been mapped, which should allow us to keep an eye on them in the event one decides to make a bee-line for Earth.
Photo by Barbara Albrecht

Hyalella azteca is a 1/8- to 1/4-inch-long crustacean commonly found in lakes, ponds, and streams throughout North America. They are an important link in the aquatic food chain and a food source for several predators, including fish and various invertebrates. Pesticides such as pyrethroids from residential runoff have recently been discovered to kill Hyalella. Low numbers of aquatic organisms like Hyalella and Ceriodaphnia are an indication of poor water quality.
A controversial idea to brake global warming, first floated by the father of the hydrogen bomb, is affordable and technically feasible, but its environmental impact remains unknown, a trio of US scientists say. Sowing the stratosphere with particles to reflect the Sun and cool the planet is possible with current technology and would cost a fraction of the bill from climate change or from cutting fossil-fuel emissions, they argue. Back in 1997, as man-made global warming became a political issue, US nuclear physicist Edward Teller and others suggested spreading sulphate particles into the upper atmosphere. Carried around the globe on high-speed winds, the whitish particulates, known as aerosols, would reflect the Sun, reducing solar radiation by around one percent. It would provide a cooling similar to when volcanoes spew out clouds of dust, said Teller, who argued this option was far smarter than switching away from cheap and dependable fossil fuels. Teller, a hawk on nuclear weapons who reputedly inspired the movie character Dr. Strangelove, was lashed for an idea that critics said was unworkable and laden with risk. The new study, published in the British journal Environmental Research Letters, makes a cost analysis of so-called solar radiation management, or SRM, by aerosols.
8th July 2009 - 01:13 PM

I am currently working on a uni project which involves getting material properties of different materials (steel, rubber, foam, etc.). Now I have always thought that getting Young's modulus was simply a matter of doing a tensile test, getting the stress-strain graph from stress = P/A and strain = deltaL/L, and then E = stress/strain. However, after doing a compression (flexural bending) test on a steel rebar of 10x10 mm cross-sectional area and 475 mm length, I get the pre-yield part of the graph going up to 6 MPa stress and strain of 0.02. 6/0.02 = 300 MPa, which is about 1000x smaller than the E = 200 GPa that I should be getting. I've already checked the units - 400 N of load gives me about 5 mm of deflection. The equation deflection = 1/48*PL^3/(EI) shows this to be correct. P/A = 400/100 = 4 N/mm^2 = 4 MPa and 5 mm/475 mm = approx. 0.01 strain. I have also done a tensile test which gives me a graph within a similar range. What am I doing wrong?

19th July 2009 - 08:52 AM

Okay, I've finally figured out what I had done wrong. The strain in the stress-strain graph is not the change in length of the bar divided by the length of the bar. Strain can only be obtained through a strain gauge. Now the only thing is, I'm not sure why? I thought that the strain gauge also measured change in length over length - unless there is a specific length that you can only test it on.

19th July 2009 - 09:58 AM

When you bend a beam, you get tension and compression as well as deflection. As such the stress is of two types, and not all the area of the beam contributes. I suggest you re-read the sections on second moment of area and deflection of a beam. And if you have no textbook, here's Wikipedia:
http://en.wikipedia.org/wiki/Deflection_(engineering)
http://en.wikipedia.org/wiki/Area_moment_of_inertia

26th July 2009 - 03:07 AM

With the right units, the figures you give are consistent. Except that 5 mm deflection and 475 mm length do not combine simply to give 0.01 strain (which would already be a lot for steel). By the way, bending combines tension and compression, yes... but the two zones sit near one another, and then the upper and lower parts of the beam aren't allowed to expand freely in the direction of the width. Then some form of Poisson's factor must improve the formula a bit, like (1-n^2), but in a complicated manner that depends on the thickness-to-width ratio. With 10 mm x 10 mm, this correction isn't attainable by a hand computation. You mention rubber: bad luck. All elastomers are extremely nonlinear, and most of them have hysteresis, so computations with Young's modulus, shear modulus, and strength, which work with metals, don't nearly work with elastomers. They work so little that elastomer manufacturers generally refuse to give such numbers; you just get Shore hardnesses, oh good.
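For what it's worth, plugging the original poster's own numbers into the three-point-bend relation, instead of treating deflection/length as strain, already recovers a sensible modulus. A quick sketch:

    # Three-point bending: delta = P*L^3 / (48*E*I)  =>  E = P*L^3 / (48*delta*I)
    P = 400.0        # N, applied load (from the post)
    L = 475.0        # mm, span
    b = h = 10.0     # mm, square cross-section
    delta = 5.0      # mm, measured mid-span deflection

    I = b * h**3 / 12.0                 # second moment of area, mm^4 (~833.3)
    E = P * L**3 / (48.0 * delta * I)   # N/mm^2, i.e. MPa

    print(f"I = {I:.1f} mm^4")
    print(f"E = {E / 1000:.0f} GPa")    # ~214 GPa, close to steel's ~200 GPa

The apparent 1000x discrepancy comes from dividing mid-span deflection by span length, which is a geometric ratio, not the fiber strain at the beam's surface.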
Ivars Peterson's MathTrek

Suppose there is a finite number of primes, Euclid argued. This means there's also a largest prime, n. Multiply all the primes together, then add 1: (2 x 3 x ... x n) + 1. The new number is certainly bigger than the largest prime. If the initial assumption is correct, the new number can't be a prime; otherwise, it would be the largest. Hence, it must be a composite number, divisible by some smaller prime. However, because of the way the number was constructed, any known prime, when divided into the new number, leaves a remainder of 1. Therefore, the initial assumption can't be correct, and there can be no largest prime.

Primes often occur as pairs of consecutive odd integers: 3 and 5, 5 and 7, 11 and 13, 17 and 19, and so on. These so-called twin primes are scattered throughout the list of all prime numbers. However, there's no proof yet that there are infinitely many pairs of primes that differ by only 2. There's now hope that the matter will finally be resolved.

The distribution of primes follows a remarkably simple pattern: the average spacing between primes near a number x is the natural logarithm of x, a number closely related to the number of digits in x. At the same time, the gap between consecutive primes can often be much smaller or much larger than average. Indeed, mathematicians have proved that there are infinitely many pairs of primes that are closer than one-quarter of the average spacing.

More than a year ago, Daniel A. Goldston of San Jose State University and Cem Y. Yildirim of Boğaziçi University in Istanbul announced a proof that, given any fraction, no matter how small, there are infinitely many prime pairs closer together than this fraction of the average. In other words, tight clusters of primes show up among the whole numbers no matter how large the numbers are. However, Andrew Granville of the University of Montreal and Kannan Soundararajan of the University of Michigan soon found a flaw in the purported proof. So it was back to the drawing board.

Now, with the help of Janos Pintz of the Rényi Mathematical Institute of the Hungarian Academy of Sciences in Budapest, Goldston and Yildirim have circumvented the flawed approach and completed the proof, confirming that the spacing between consecutive primes is sometimes very much smaller than the average spacing. The new proof is much shorter and expressed in terms more familiar to number theorists than was the previous effort. In technical language, the theorem states that, for any positive number x, there exist primes p and p' such that the difference between p and p' is smaller than x log p.

The proof of an even stronger result about the size of these gaps is now in circulation among mathematicians, but it hasn't yet been fully verified. At a presentation at the American Institute of Mathematics in Palo Alto, Calif., on May 24, Goldston suggested that, based on results to date, a proof of the twin prime conjecture may not be far away.

Copyright © 2005 by Ivars Peterson

American Institute of Mathematics. 2005. Breakthrough in prime number theory. Press release, June 3. Available at http://www.aimath.org/.
Goldston, D.A., and C.Y. Yildirim. Preprint. Small gaps between primes I. Abstract available at http://front.math.ucdavis.edu/math.NT/0504336.
Goldston, D.A., S.W. Graham, J. Pintz, and C.Y. Yildirim. Preprint. Small gaps between primes or almost primes. Abstract available at http://front.math.ucdavis.edu/math.NT/0506067.
Goldston, D.A., Y. Motohashi, J. Pintz, and C.Y. Yildirim. Preprint. Small gaps between primes exist. Abstract available at http://front.math.ucdavis.edu/math.NT/0505300.
Klarreich, E. 2003. Uncovering a prime failure. Science News 163(May 31):350. Available at http://www.sciencenews.org/articles/20030531/note17.asp.
______. 2003. Prime finding: Mathematicians mind the gap. Science News 163(March 29):195. Available at http://www.sciencenews.org/articles/20030329/fob1.asp.
Peterson, I. 2001. Prime twins. MAA Online (June 4).
______. 1996. Prime theorem of the century. MAA Online (Dec. 23).
Additional information about twin primes can be found at http://primes.utm.edu/top20/page.php?id=1 and http://mathworld.wolfram.com/TwinPrimes.html.
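The ln x spacing rule quoted in the column is easy to check numerically. The following short Python script (an illustration of mine, not part of Peterson's column) sieves the primes just above one million and compares the average gap with the natural logarithm:

    import math

    def primes_up_to(n):
        # Sieve of Eratosthenes
        sieve = bytearray([1]) * (n + 1)
        sieve[0] = sieve[1] = 0
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
        return [i for i, flag in enumerate(sieve) if flag]

    primes = primes_up_to(1_100_000)
    window = [p for p in primes if p >= 1_000_000]
    gaps = [q - p for p, q in zip(window, window[1:])]

    print(sum(gaps) / len(gaps))           # average gap near 10^6: ~13.9
    print(math.log(1_000_000))             # ln(10^6):              ~13.8
    print(sum(1 for g in gaps if g == 2))  # twin-prime pairs still appear here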
[Python-3000] [ANN] Python 3 Symbol Glossary
tjreedy at udel.edu
Sun Nov 2 02:52:43 CET 2008

Over the years, people have complained about the difficulty of finding the meaning of symbols used in Python syntax. So ... I wrote a Python 3 Symbol Glossary. There are .txt and .odt versions. The first is usable; the second is nicer if you have an OpenDocument format viewer or editor such as OpenOffice. There is no .html because conversion mangles the format by deleting tabs.

From the Introduction: The ASCII character set includes non-printable control characters (designated below with a '^' or '\' prefix), letters and digits, and other printable symbols. A few of the control characters and most of the symbols are used in Python code as operators, delimiters, or other syntactic units. Some symbols are used alone, some in multi-symbol units, and some in both. There are separate entries for each syntactic unit and for each different use of a unit. In total, there are nearly 100 entries for over 50 symbols and combinations.

Entries are in ASCII collating (sorting) order, except that ?= entries (where ? is a symbol) follow the one for ? (if there is one) and the general 'op=' entry follows the one for =. The two lines after the entry for '\r' are entries for the invisible blank space ' '. Most entries start with P, I, or S to indicate the syntactic unit's use as a prefix, infix, or suffix. (These terms are here not limited to operators.) If so, a template follows, with italicized words indicating the type of code to be substituted in their place. Entries also have additional explanations. Some syntactic units are split into two subunits that enclose code. Entries for these are the same except that two initials are used, PS or IS, depending on whether the first subunit is a prefix or infix relative to the entire syntactic construct.

If I missed anything or made any errors, let me know. PSF people are free to make any use of this they wish.

Terry Jan Reedy
The exception that is thrown when a non-fatal application error occurs. For a list of all members of this type, see ApplicationException Members.

[Visual Basic] <Serializable> Public Class ApplicationException Inherits Exception
[C#] [Serializable] public class ApplicationException : Exception
[C++] [Serializable] public __gc class ApplicationException : public Exception
[JScript] public Serializable class ApplicationException extends Exception

Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.

ApplicationException is thrown by a user program, not by the common language runtime. If you are designing an application that needs to create its own exceptions, derive from the ApplicationException class. ApplicationException extends Exception but does not add new functionality. This exception is provided as a means to differentiate between exceptions defined by applications and exceptions defined by the system. ApplicationException does not provide information as to the cause of the exception. In most scenarios, instances of this class should not be thrown. In cases where this class is instantiated, a human-readable message describing the error should be passed to the constructor.

ApplicationException uses the HRESULT COR_E_APPLICATION, which has the value 0x80131600. For a list of initial property values for an instance of ApplicationException, see the ApplicationException constructors.

Platforms: Windows 98, Windows NT 4.0, Windows Millennium Edition, Windows 2000, Windows XP Home Edition, Windows XP Professional, Windows Server 2003 family, .NET Compact Framework

Assembly: Mscorlib (in Mscorlib.dll)
September 17, 2012

The accidental introduction of the brown tree snake on Guam has resulted in a loss of birds and a subsequent explosion of spiders. Photo by: Isaac Chellman.

"You can't walk through the jungles on Guam without a stick in your hand to knock down the spiderwebs," says lead author Haldre Rogers, a Huxley Fellow in Ecology and Evolutionary Biology at Rice University. Rogers and her team counted spider webs in order to estimate spider abundance on the island. In the wet season, the team averaged 18.37 spider webs every 10 meters (around 33 feet), and in the dry season spiders were even more abundant: 26.19 webs for every 10 meters. Since there was no historical data on spider abundance prior to the loss of bird communities, the researchers compared Guam's spider abundance to similar nearby islands with intact bird communities: Rota, Tinian, and Saipan. In the dry season, Guam's spider abundance was more than twice as large as its neighbors', but it was the wet season when things got really wild: spider abundance on Guam exploded to 40 times the number on other islands.

Guam's unique subspecies of Micronesian kingfisher has been eradicated from the island by the brown tree snake; however, zoos are currently breeding the bird and hope to return it to the island one day. Photo by: Dylan Kesler.

Brown tree snakes, which are native to Australia and Papua New Guinea, have been an ecological disaster for Guam, but to date no one has determined a method to rid the island of the rarely seen, nocturnal snakes. In 2010, the U.S. Department of Agriculture 'bombed' the island with dead, frozen mice laced with acetaminophen to poison the snakes, and the country already spends over $1 million every year working to make sure the brown tree snake doesn't make it off the island to invade somewhere else. The snake population is believed to be currently in decline due to exceeding its overall carrying capacity, but it's too late for most of Guam's birds. Nine birds have become locally extinct on Guam, with five of these (either species or subspecies) found nowhere else, including the Guam flycatcher (Myiagra freycineti), which was last seen in 1983.

"Birds pollinate our crops, control crop pests, and, it would seem, keep spider populations from exploding," George Fenwick, President of American Bird Conservancy, said in a press statement. "Arachnophobes and those afraid of snakes would do well to stay out of the forests of Guam," he added. But the problem may go beyond Guam, according to Rogers. "With insectivorous birds in decline in many places in the world, I suspect there has been a concurrent increase in spiders," she says. Currently, Guam provides a natural laboratory whereby one can see just how the loss of one group of species impacts others.

Photo of the now extinct Guam flycatcher. Photo by: Smithsonian.

While Rogers and her team suspect that spiders are more abundant on Guam directly because of the lack of predation by birds, they also write that bird loss may have other impacts that benefit spiders. For example, spiders no longer have to compete with insect-eating birds for prey, and they may expend less energy on rebuilding webs destroyed by flying birds. For now, Rogers says snakes and spiders won't keep her away from continuing to explore Guam's bizarre environment, including a rainforest almost wholly empty of birdsong. "Ultimately, we aim to untangle the impact of bird loss on the entire food web, all the way down to plants," she says.
"For example, has the loss of birds also led to an increase in the number of plant-eating insects? Or can this increase in spiders compensate for the loss of birds?" Spider bonanza in Guam. Photo by: Isaac Chellman. CITATION: Rogers H, Hille Ris Lambers J, Miller R, Tewksbury JJ (2012) ‘Natural experiment’ Demonstrates Top-Down Control of Spiders by Birds on a Landscape Level. PLoS ONE 7(9): e43446. doi:10.1371/journal.pone.0043446 Forgotten species: the plummeting cycad (12/06/2010) I have a declarative statement to make: cycads are mind-blowing. You may ask, what is a cycad? And your questions wouldn't be a silly one. I doubt Animal Planet will ever replace its Shark Week with Cycad Week (perhaps the fact that it's 'animal' planet and not 'plant' planet gave that away); nor do I expect school children to run to see a cycad first thing when they arrive at the zoo, rushing past the polar bear and the chimpanzee; nor do I await a new children's book about a lonely little anthropomorphized cycad just looking for a friend. In the world of species-popularity, the cycad ranks pretty low. For one thing, it's a plant. For another thing, it doesn't produce lovely flowers. And for a final fact, it looks so much like a palm tree that most people probably wouldn't know it wasn't. Still, I declare the cycad to be mind-blowing. Massive snake found in Florida (photos) (08/14/2012) Researchers in Florida have documented the biggest snake ever found in Florida. But the snake is an invader — it's not native. Meet the world's rarest snake: only 18 left (07/10/2012) It's slithery, brown, and doesn't mind being picked up: meet the Saint Lucia racer (Liophis ornatus), which holds the dubious honor of being the world's most endangered snake. A five month extensive survey found just 18 animals on a small islet off of the Caribbean Island of Saint Lucia. The snake had once been abundant on Saint Lucia, as well, but was decimated by invasive mongooses. For nearly 40 years the snake was thought to be extinct until in 1973 a single snake was found on the Maria Major Island, a 12-hectare (30 acre) protected islet, a mile off the coast of Saint Lucia (see map below). Island bat goes extinct after Australian officials hesitate (05/23/2012) Nights on Christmas Island in the Indian Ocean will never again be the same. The last echolocation call of a tiny bat native to the island, the Christmas Island pipistrelle (Pipistrellus murrayi), was recorded on August 26th 2009, and since then there has been only silence. Perhaps even more alarming is that nothing was done to save the species. According to a new paper in Conservation Letters the bat was lost to extinction while Australian government officials equivocated and delayed action even though they were warned repeatedly that the situation was dire. The Christmas Island pipistrelle is the first mammal to be confirmed extinct in Australia in 50 years. Invasion!: Burmese pythons decimate mammals in the Everglades (01/30/2012) The Everglades in southern Florida has faced myriad environmental impacts from draining for sprawl to the construction of canals, but even as the U.S. government moves slowly on an ambitious plan to restore the massive wetlands a new threat is growing: big snakes from Southeast Asia. A new paper in the Proceedings of the National Academy of Sciences (PNAS) has found evidence of a massive collapse in the native mammal population following the invasion of Burmese pythons (Python molurus bivittatus) in the ecosystem. The research comes just after the U.S. 
federal government has announced an importation ban on the Burmese python and three other big snakes in an effort to safeguard wildlife in the Everglades. However, the PNAS study finds that a lot of damage has already been done.

California city bans bullfrogs to safeguard native species (01/26/2012) Santa Cruz, California, has become the first city in the U.S. to ban the importation, sale, release, and possession of the American bullfrog (Rana catesbeiana). Found throughout the Eastern and Central U.S., the frogs have become an invasive threat to wildlife in the western U.S. states and Canada.

U.S. implements snake ban to save native ecosystems (01/25/2012) Last week the U.S. Fish and Wildlife Service (USFWS) announced it was banning the importation and sale across state lines of four large, non-native snakes: the Burmese python (Python molurus bivittatus), the yellow anaconda (Eunectes notaeus), and two subspecies of the African python (Python sebae). Although popular pets, snakes released and escaped into the wild have caused considerable environmental damage, especially in the Florida Everglades.
What is the smallest number with exactly 14 divisors?

Many numbers can be expressed as the sum of two or more consecutive integers. For example, 15 = 7+8 and 10 = 1+2+3+4. Can you say which numbers can be expressed in this way?

If it takes four men one day to build a wall, how long does it take 60,000 men to build a similar wall?

Some 4-digit numbers can be written as the product of a 3-digit number and a 2-digit number using the digits 1 to 9 each once and only once. Well done Sally Nelson and Sarah Dunn, S2, Madras College, St Andrew's, for finding altogether six funny factorisations, but there is one more. It is now a Tough Nut to find the last one. You might like to write a computer program to find all seven funny factorisations (one such sketch appears below), or you might come up with a different method. Let us know.

The number 4396 = 2 x 2 x 7 x 157, and there are not many possible combinations. By trial and error we get 4396 = 28 x 157. The number 5796 = 2 x 2 x 3 x 3 x 7 x 23. So 5796 = (2 x 3 x 7) x (2 x 3 x 23) or (2 x 2 x 3) x (3 x 7 x 23), amongst other possibilities which don't turn out to be 'funny'. In this way we find the two funny factorisations 5796 = 42 x 138 and 5796 = 12 x 483. Similarly 5346 = 2 x 3^5 x 11, and the funny factorisations are 5346 = 27 x 198 and 5346 = 18 x 297.

Here you must use the digits 1 to 9 once, but only once, to replace the stars and complete this multiplication example. Firstly I found out the possible solutions for the top row. It could not be a number above 250 or below 100, and it had to end in a 9. The number could not have a 4 or a 6 or another 9. The only possibilities were 129, 139, 159, 179, 189, 219 and 239. So I tried these numbers with every 2-digit number beginning with a 4 until I found the answer: 159 x 48 = 7632.

We received a Python program from Ryan for exhaustively finding solutions to the problem. You can download it here
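For readers who want to try the programming route mentioned above, here is one way the exhaustive search might look in Python. This is an illustrative brute-force sketch of mine, not Ryan's program: it tries every 2-digit by 3-digit product and keeps those where the two factors and the 4-digit product use the digits 1 to 9 exactly once.

    def funny_factorisations():
        found = []
        for a in range(12, 99):            # 2-digit factor
            for b in range(123, 988):      # 3-digit factor
                digits = f"{a}{b}{a * b}"
                # 2 + 3 + 4 digits == 9 forces a 4-digit product;
                # the sorted test forces digits 1-9 once each (so no zeros).
                if len(digits) == 9 and sorted(digits) == list("123456789"):
                    found.append((a, b, a * b))
        return found

    for a, b, p in funny_factorisations():
        print(f"{p} = {a} x {b}")   # prints all seven, including 7632 = 48 x 159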
Image by Getty Images via @daylife

I'm sure, like many of you, I have been watching the news coverage of the crisis in Japan. Perhaps because I am a math teacher, I have been more aware of the amount of numbers in the reports. From earthquake scales to radiation values, we seem to need numerical values to understand or quantify just how severe the disaster has been. How high were the tsunami waves? How far did they travel? How much stronger is a 9.0 quake than a 7.9? What is the safe distance from radiation fallout? And so on. The worst question of all: How many people died?

Numbers are the universal language. One lesson for your students from all of this is that people understand numerical values. Numbers may be pronounced differently in other languages, but they are worth the same no matter where you are. The color blue has a variety of interpretations, but the number two is the same for all. And so, although it is difficult for us to comprehend a disaster like this, numbers are a way for all of us to understand the severity of the event.

The Google Earth Blog has posted links to Google Earth material that covers the earthquake in Japan. This includes before and after overlays of the eastern coast of Japan. Here is a link to their post.
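One of those questions has a tidy numeric answer worth showing students. On the moment magnitude scale, radiated energy grows by a factor of about 10^1.5 (roughly 32) per whole magnitude step, so the comparison is a one-liner in Python (my example, not from the original post):

    # Radiated seismic energy scales roughly as 10^(1.5 * magnitude).
    def energy_ratio(m1, m2):
        return 10 ** (1.5 * (m1 - m2))

    print(round(energy_ratio(9.0, 7.9)))  # ~45: a 9.0 releases about 45 times
                                          # the energy of a 7.9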
In the blizzard of climate prevarication we've had to endure this winter, because, you know, it still gets cold in January, some important points get lost. It's not just about temperature! That stuff we spew into the atmosphere does damage in many ways.

When we humans burn fossil fuels, we pump carbon dioxide into the atmosphere, where the gas traps heat. But much of that carbon dioxide does not stay in the air. Instead, it gets sucked into the oceans. If not for the oceans, climate scientists believe that the planet would be much warmer than it is today. Even with the oceans' massive uptake of CO2, the past decade was still the warmest since modern record-keeping began. But storing carbon dioxide in the oceans may come at a steep cost: It changes the chemistry of seawater...

... But there's a crucial difference between the Earth 100 million years ago and today. Back then, carbon dioxide concentrations changed very slowly over millions of years. Those slow changes triggered other slow changes in the Earth's chemistry. For example, as the planet warmed from more carbon dioxide, the increased rainfall carried more minerals from the mountains into the ocean, where they could alter the chemistry of the sea water. Even at low pH, the ocean contains enough dissolved calcium carbonate for corals and other species to survive. Today, however, we are flooding the atmosphere with carbon dioxide at a rate rarely seen in the history of our planet. The planet's weathering feedbacks won't be able to compensate for the sudden drop in pH for hundreds of thousands of years.

There's more. As permafrost melts, and as the ocean changes, we get methane. Lots of methane.

In the past few days (the paper is from a couple of years ago, Sid interjects), the researchers have seen areas of sea foaming with gas bubbling up through "methane chimneys" rising from the sea floor. They believe that the sub-sea layer of permafrost, which has acted like a "lid" to prevent the gas from escaping, has melted away to allow methane to rise from underground deposits formed before the last ice age. They have warned that this is likely to be linked with the rapid warming that the region has experienced in recent years. Methane is about 20 times more powerful as a greenhouse gas than carbon dioxide, and many scientists fear that its release could accelerate global warming in a giant positive feedback where more atmospheric methane causes higher temperatures, leading to further permafrost melting and the release of yet more methane.
0.5" (13mm). Overall reddish-brown with spines on thorax; often seen carrying leaves and other plant material. NATURAL HISTORY: Feed on fungus that they raise underground. Workers snip off leaves and other plant parts and carry them back to the nest, sometimes in long lines. The leaves and plant parts are "chewed" and fed to the fungus they grow in large chambers underground. New queens will take a "starter" batch of fungus with them to found their new nest colony. This is a great example of mutualism; in this case the ants depend on the fungus for food and the fungus depends on the ants for a place to live and food. See Dale Ward's Ant's of Arizona website for more information and photos. Click on thumbnails below for more images:
Swing a ringing phone inside a long sturdy sock and you will hear the strangely familiar Doppler effect. Sprinkle some iron filings on an old swipe card and pretty magnetic patterns appear right before your eyes. Can you unmix a mixed-up liquid? Master this impossible-looking trick and you'll feel like a ninja Jedi master of the ever-expanding universe... Create a miniature fountain in a bottle with a straw, adhesive putty and the nifty physics of thermal expansion.

Explore more Tricks

Monday, 15 October 2012
Everyone knows ice floats on water, but is this normal behaviour? If you're nodding or simply shrugging your shoulders, grab some olive oil and an ice tray because you're in for a surprise.

Wednesday, 15 August 2012
Try synchronising two or more metronomes so they tick in perfect unison and you'll end up tearing your hair out, unless you know the cunning trick.

Tuesday, 17 July 2012
Make a toroidal vortex 'gun' that shoots doughnut-shaped vortices you have to see to believe.

Wednesday, 13 June 2012
Use the magic of static electricity to make a little loop of obsolete audiocassette tape leap into the air and levitate.

Wednesday, 30 May 2012
Now that you've mastered some basic play dough electronics, apply your creative skills and produce your own shiny, flashy, beepy works of art.

Wednesday, 2 May 2012
Last time, we went through the basics of how to make simple electrical circuits with play dough and light-emitting diodes; now it's time to experiment with electricity.

Tuesday, 17 April 2012
Create colourful electrical circuits with play dough and light-emitting diodes - even the littlies can get involved!

Tuesday, 3 April 2012
Trying to persuade a fussy eater to eat their vegetables? Try red cabbage.

Tuesday, 20 March 2012
Dissolving stuff in water sounds like child's play. Unless you try it with four coloured sugar-coated chocolates!

Tuesday, 6 March 2012
Flying an air-filled water balloon with a straw is child's play. Trying to explain how it flies can be a physics professor's worst nightmare.

Tuesday, 21 February 2012
Predicting how objects roll, fall and trigger each other to move may have more to do with our basic instincts than with the laws of probability. Watch this trick and understand why toddlers are excited by something as simple as a rolling ball.

Tuesday, 13 December 2011
Things that move for no apparent reason give some people the heebie-jeebies. So does maths.

Monday, 14 November 2011
Tricking the mind can often be more than meets the eye. Using nothing more than a ball or a bowl, you too will be able to discover your mysterious sixth sense.

Tuesday, 1 November 2011
Make a cool wave animation and, in the process, learn about the ebb and flow of the surf.

Thursday, 8 September 2011
Why do we think smaller objects are heavier than bigger ones even though they weigh the same? Put your back into it and try not to be fooled by this awesome trick of the mind.
Humans have altered almost everything about the Mekong River ecosystem. They have altered the water quality with fertilizer, pesticide, and urban waste pollution. They have reduced biodiversity by overfishing and fishing with explosives. In addition, they have destroyed habitats through deforestation (the mangroves, for example) and by blocking or destroying spawning grounds. They have altered the path of the river by building dams and blasting canals to open trade routes. This stops the flow of sediments and nutrients that are vital for soil replenishment. Humans have introduced invasive species like hyacinth and mimosa that quickly take over the wetlands and crowd out native plants. Every aspect of the river has been affected by humans.

The river ecosystem provides many services to humans. It provides sustenance in the form of water, fish, and soil to grow rice and other crops. Rivers provide habitat for the greatest concentration of life on the planet (Biodiversity article), and the Mekong is the third most biodiverse inland waterway in the world (The Mekong article). The river ecosystem provides wood for fuel and shelter, and power in the form of hydroelectricity. The river is the basic underpinning of the entire economy of the region.

Human activity threatens that economy. As the fish population declines, it affects the fishing industry as well as all the supporting industries. As the soil becomes less suitable for rice and other crops, agriculture declines. Food becomes scarce. The effect of species extinction is hard to predict. At that point, the fact that the river provides hydroelectricity becomes irrelevant, because the people are starving.

Scientist: Cathy, 16 Jul 09 1:35 PM
Certainly, there are several major costs and disadvantages associated with the damming (and the proposed damming) of the Mekong, and you reviewed them very well. However, energy production will have to increase in the coming years to sustain the growing global population, and especially to provide utility services to the millions of people currently without access. Energy production and consumption are major issues in the world today. With respect to the Mekong, are there other opportunities that the relevant governments could explore that could raise utility standards for those without service now, and for the growing population later, while alleviating the pressure on the natural ecosystem?

Jennifer, 17 Jul 09 3:16 PM
There seem to be problems with every energy source. I think I might have chosen nuclear power over hydroelectric. The main problem is getting rid of the nuclear wastes - transporting them and storing them until they decay. But it seems to be a much more efficient energy source.

Instructor: Cheryl, 18 Jul 09 6:13 AM
There has been a lot of industry, including nuclear power, cropping up along the waterways, and the result is that these industries use a fair amount of water. If you weigh the two options, would you still choose nuclear power?

Jennifer, 20 Jul 09 1:35 PM
Yes, and as I understand it, the water returned to the environment from nuclear power plants is warmer than normal, so thermal pollution is another problem. It seems that the way we are implementing hydroelectric power, with gigantic dams, is maybe too aggressive. Maybe many small dams would have less effect than one big dam.
I think I would still choose nuclear, at least for the short term, until scientists can get solar to work more efficiently or make fusion practical.

Brian, 26 Jul 09 3:25 PM
There are at least three water use issues with thermoelectric power plants (coal, oil, gas, nuclear): 1) The amount of water consumed, which varies with the technology used (closed loop, open loop, wet cooling or evaporative cooling) and seems to range from 1-5% of total water use, or 95-99% non-consumptive use. 2) The total amount of water used. Example: "The Harris reactor near Raleigh, N.C., draws 33 million gallons of water a day, with 17 million gallons lost to evaporation in the cooling towers." (Evaporative cooling withdraws far less water, but most of it is used consumptively.) http://planetsave.com/blog/2008/01/23/water-shortage-could-dry-up-nuclear-power-plants-in-southeast/ 3) The thermal pollution created by water returned to the water sources. In some areas the water being used is groundwater, and the power company has purchased land/water rights to it. http://www.kctribune.com/article.cfm?articleID=18748 An issue from the standpoint of the power company and its customers: power plants have come close to reducing output or closing because insufficient cooling water was available. http://www.istockanalyst.com/article/viewiStockNews/articleid/3234023 Another interesting resource: ENERGY DEMANDS ON WATER RESOURCES

Gloria, 17 Jul 09 9:14 PM
Population is the number one reason that many of our ecosystems are in decline, including the Mekong and Colorado River systems. Solar, solar, solar is still the answer: a mirror-powered electricity production facility, or flex panels on every rooftop. In some areas wind could work pumps or turn turbines. Will it cost more than a dam? Only if you don't consider all of the damage we have been discussing this last week: the displacement of the people behind the dam, the loss of fish that the people rely on as one of their main food sources, the loss of fertility of the soil, and the loss of countless species that we don't even know yet. The Mekong region has a great opportunity to start out this century with the newest ways of improving the lives of its people, not by relying on last century's methods.

Jose, 17 Jul 09 6:29 AM
I had been thinking about what these river basins would be like if humans did not exist. Then I noticed a reference in the text that, without human interference, life would be sustained indefinitely in a "steady state" or "balance of nature" and that "without human disturbance, the net storage of chemical elements within an ecosystem will remain constant over time" [Environmental Science - Earth as a Living Planet, Botkin & Keller, John Wiley & Sons, 2009, page 91]. Makes you wonder if we can restore the balance that we disrupt.
Titan's Lapping Oil Waves

When the European Huygens probe on the Cassini space mission parachutes down through the opaque smoggy atmosphere of Saturn's moon Titan early next year, it may find itself splashing into a sea of liquid hydrocarbons. In what is probably the first piece of "extraterrestrial oceanography" ever carried out, Dr Nadeem Ghafoor of Surrey Satellite Technology and Professor John Zarnecki of the Open University, with Drs Meric Srokosz and Peter Challenor of the Southampton Oceanography Centre, calculated how any seas on Titan would compare with Earth's oceans. Their results predict that waves driven by the wind would be up to 7 times higher but would move more slowly and be much farther apart. The team worked with a computer simulation, or 'model', that predicts how wind-driven waves are generated on the surface of the sea on Earth, but they changed all the basic inputs, such as the local gravity and the properties of the liquid, to values they might expect on Titan.

[Image: The haze of an atmospheric layer on Saturn's moon, Titan. With an atmosphere thicker than Earth's, and composed of many biochemically interesting molecules (methane, hydrogen and carbon), Titan's rich chemistry will continue to interest astrobiologists as they look forward to landing a probe on its surface in 2004-5. Credit: Voyager Project, JPL, NASA]

Arguments about the nature of Titan's surface have raged for a number of years. Following the flyby of the Voyager 1 spacecraft in 1980, some researchers suggested that Titan's concealed surface might be at least partly covered by a sea of liquid methane and ethane. But there are several other theories, ranging from a hard icy surface at one extreme to a near-global hydrocarbon ocean at the other. Other variants include the notion of hydrocarbon 'sludge' overlying an icy surface. Planetary scientists hope that the Cassini/Huygens mission will provide an answer to this question, with observations from Cassini during several flybys of Titan and from Huygens, which will land (or 'splash') on 14 January 2005.

[Image: Huygens parachutes onto Titan. ESA's Huygens probe descends through Titan's mysterious atmosphere to unveil the hidden surface (artist's impression). Credit: ESA]

The idea that Titan has significant bodies of surface liquid has recently been reinforced by the announcement that radar reflections from Titan have been detected using the giant Arecibo radio dish in Puerto Rico. Importantly, the returned signals in 12 out of the 16 attempts made contained reflections of the kind expected from a polished surface, like a mirror. (This is similar to seeing a blinding patch of light on the surface of the sea where the Sun is being reflected.) The radar researchers concluded that 75% of Titan's surface may be covered by 'open bodies of liquid hydrocarbons' -- in other words, seas. The exact nature of the reflected radar signal can be used to determine how smooth or choppy the liquid surface is. This interpretation says that the slope of the waves is typically less than 4 degrees, which is consistent with the predictions of the British scientists, who showed that the maximum possible slope of waves generated by wind speeds up to 7 mph would be 11 degrees.

"Hopefully ESA's Huygens probe will end the speculation," says Dr Ghafoor. "Not only will this be by far the most remote soft landing of a spacecraft ever attempted, but Huygens might become the first extraterrestrial boat if it does indeed land on a hydrocarbon lake or sea."
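The "up to 7 times higher" prediction follows largely from Titan's weak gravity, and the basic scaling is easy to reproduce. Below is a back-of-envelope Python sketch, not the team's model; the constants (Earth g = 9.81 m/s^2, Titan g = 1.35 m/s^2) are standard values I am assuming, and the 1/g height scaling is a simplification of wind-wave physics:

    import math

    G_EARTH, G_TITAN = 9.81, 1.35  # surface gravity, m/s^2 (Titan ~14% of Earth)

    # For comparable wind forcing, wave height scales roughly as 1/g.
    print(G_EARTH / G_TITAN)       # ~7.3: matches the "up to 7 times higher"

    # Deep-water dispersion relation: phase speed c = sqrt(g * L / (2*pi)),
    # so for the same wavelength, Titan's waves move more slowly.
    def phase_speed(g, wavelength):
        return math.sqrt(g * wavelength / (2 * math.pi))

    for wavelength in (10.0, 50.0, 100.0):  # metres
        print(wavelength,
              round(phase_speed(G_EARTH, wavelength), 1),
              round(phase_speed(G_TITAN, wavelength), 1))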
Although Huygens was not designed specifically to survive landing or to float, the chances that it will do so are reasonable. However, the link back to Earth from Huygens via Cassini, which will be flying past Titan and acting as a relay, will last for a maximum of 2 hours. During this time, if the probe is floating on a sea, one of the 6 instruments Huygens is carrying, the Surface Science Package experiment, which is led by John Zarnecki, will be making oceanography measurements. Among the 9 sensors that it carries are ones that will measure the height and frequency of the waves and also the depth of the sea using sonar. It will also attempt to determine the composition of the sea.

[Image: Huygens landing probe to Saturn's moon, Titan. Credit: ESA]

What would the sea look like? "Huygens does carry a camera, so it is possible we shall have some direct images," says Professor Zarnecki, "but let's try to imagine that we are sitting onboard the probe after it has landed in a Titan ocean. What would we see? Well, the waves would be more widely dispersed than on Earth but they will be very much higher - mostly as a result of the fact that Titan gravity is only about 15% of that on Earth. So the surface around us would probably appear flat and deceptively calm, but in the distance we might see a rather tall, slow-moving wave advancing towards us -- a wave that could overwhelm or sink us."

Related Web Pages
JIMO Science, JPL
Primordial Recipe: Spark and Stir
Saturn -- JPL Cassini Main Page
Cassini Imaging Team
Voyager: Beyond the Great Beyond
Titan's Oily Lake
Titan's Icy Bedrock
Alien Landers: Extreme Explorers Hall of Fame
Titan: Biological Birthplace?
Solar System Bodies: Titan (NASA JPL)
The Probe Mission (NASA JPL)
Why Titan? (ESA)
A chord is a line segment whose endpoints are on a circle. A diameter is a chord that passes through the center of a circle. Another one of the parts of a circle is a radius, which is a line segment with one endpoint at the center and one endpoint on the circle. Congruent circles have congruent radii (the plural of radius). Concentric circles have the same center. A central angle has its vertex at the center and its endpoints on the circle.

The definition of a circle is the set of all points in the same plane that are a given distance from a given point. So here we have a given point, our center, and the circle, which is the black line. Those are all the points that are a given distance away from the center, and that given distance is your radius. If you have a chord, a segment, that passes through the center, that is called a diameter.

Now, two commonly confused terms regarding circles are congruent circles and concentric circles. Circles whose radii have the same measure are congruent. Circles that share the same center are concentric, which means the center of the smaller circle is the same as the center of the larger circle. If their radii are different, the circles are not congruent. However, you could have circles that are both concentric and congruent, in which case they simply overlap, because they have the same center and the same radius. So keep that in mind when you are answering questions about circles, usually true-and-false and matching.
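Since congruent-versus-concentric is the classic true-and-false trap here, a tiny Python sketch (my illustration, not part of the lesson) makes the distinction mechanical: congruence compares radii, concentricity compares centers, and a pair of circles can pass both tests at once:

    from dataclasses import dataclass

    @dataclass
    class Circle:
        cx: float  # center x
        cy: float  # center y
        r: float   # radius

    def congruent(a, b):
        return a.r == b.r                    # same radius measure

    def concentric(a, b):
        return (a.cx, a.cy) == (b.cx, b.cy)  # same center

    small = Circle(0, 0, 3)
    big = Circle(0, 0, 5)
    moved = Circle(2, 1, 5)
    twin = Circle(0, 0, 5)

    print(concentric(small, big), congruent(small, big))  # True False
    print(concentric(big, moved), congruent(big, moved))  # False True
    print(concentric(big, twin), congruent(big, twin))    # True True: overlap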
Physics: Understanding Newton's First Law of Motion Drum roll, please. Newton's laws explain what happens with forces and motion, and his first law states: "An object continues in a state of rest, or in a state of motion at a constant speed along a straight line, unless compelled to change that state by a net force." What's the translation? If you don't apply a force to an object at rest or in motion, it will stay at rest or in that same motion along a straight line. Forever. For example, when scoring a hockey goal, the hockey puck slides toward the open goal in a straight line because the ice it slides on is nearly frictionless. If you're lucky, the puck won't come into contact with the opposing goalie's stick, which would cause it to change its motion. In everyday life, objects don't coast around effortlessly on ice. Most objects around you are subject to friction, so when you slide a coffee mug across your desk, it glides for a moment and then slows and comes to a stop (or spills over — don't try this one at home). That's not to say Newton's first law is invalid, just that friction provides a force to change the mug's motion to stop it. What Newton's first law really says is that the only way to get something to change its motion is to use force. In other words, force is the cause of motion. It also says that an object in motion tends to stay in motion, which introduces the idea of inertia. Getting it going: Inertia and mass Inertia is the natural tendency of an object to stay at rest or in constant motion along a straight line. Inertia is a quality of mass, and the mass of an object is really just a measurement of its inertia. To get an object to move — that is, to change its current state of motion — you have to apply a force to overcome its inertia. Say, for example, you're at your summer vacation house, taking a look at the two boats at your dock: a dinghy and an oil tanker. If you apply the same force to each with your foot, the boats respond in different ways. The dinghy scoots away and glides across the water. The oil tanker moves away more slowly (What a strong leg you have!). That's because each one has different masses and, therefore, different amounts of inertia. When responding to the same force, an object with little mass — and a small amount of inertia — will accelerate faster than an object with large mass, which has a large amount of inertia. Inertia, the tendency of mass to preserve its present state of motion, can be a problem at times. Refrigerated meat trucks, for example, have large amounts of frozen meat hanging from their ceilings, and when the drivers of the trucks begin turning corners, they create a pendulum motion that they can't stop from the driver's seat. Trucks with inexperienced drivers can end up tipping over because of the inertia of the swinging frozen load in the back. Because mass has inertia, it resists changing its motion, which is why you have to start applying forces to get velocity and acceleration. Mass ties force and acceleration together. The units of mass (and, therefore, inertia) depend on your measuring system. In the meter-kilogram-second (MKS) system or International System of Units (SI), mass is measured in kilograms (under the influence of gravity, a kilogram of mass weighs about 2.205 pounds). In the centimeter-gram-second (CGS) system, the gram is featured, and it takes a thousand grams to make a kilogram. What's the unit of mass in the foot-pound-second system? Brace yourself: It's the slug. 
Under the influence of gravity, a slug has a weight of about 32 pounds. It's equal in mass to 14.59 kilograms. Mass isn't the same as weight. Mass is a measure of inertia; when you put that mass into a gravitational field, you get weight. So, for example, a slug is a certain amount of mass. When you subject that slug to the gravitational pull on the surface of the earth, it has weight. And that weight is about 32 pounds.
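The slug-versus-kilogram bookkeeping is easy to verify with Newton's second law. A short Python check, using standard values for g and the unit conversions (my numbers, not from the text):

    G = 9.81                 # m/s^2, gravitational acceleration at Earth's surface
    SLUG_IN_KG = 14.59       # 1 slug expressed in kilograms
    NEWTONS_PER_LBF = 4.448  # 1 pound-force in newtons

    def weight_newtons(mass_kg, g=G):
        return mass_kg * g   # weight is mass placed in a gravitational field

    w = weight_newtons(SLUG_IN_KG)
    print(w / NEWTONS_PER_LBF)                    # ~32.2 lbf: the slug's weight
    print(weight_newtons(1.0) / NEWTONS_PER_LBF)  # ~2.205 lbf: weight of 1 kg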
2012 - 2014 Center for Climate System Modeling

Global climate change represents one of the grand challenges facing humanity. Although there are major efforts underway to develop and implement international agreements to limit further growth of the underlying drivers, i.e., primarily the emission of greenhouse gases, substantial changes in Earth's climate lie ahead. In fact, through the forcing by current greenhouse gas concentrations alone, humankind is already committed to a warming of about 1.2°C since pre-industrial times (IPCC, 2007). About 0.8°C of that warming has occurred, while approximately another 0.4°C is in store due to the inertia of the Earth's climate system (IPCC, 2007). Hence, even if greenhouse gas concentrations were to rise no further, Earth will move into another climate state, and because the natural processes that remove these gases from the air are slow, it will remain in this state for the next few centuries (Solomon et al., 2009; Gillett et al., 2011). Consequently, the development of strategies to cope with climate change is urgently required, irrespective of how successful mitigation efforts will be in the coming decades.

This project aims to make fundamental advances in our understanding of, and our ability to quantitatively model, a number of key processes and interactions within the Earth's hydrological cycle, with a special emphasis on those involving strong scale interactions. We focus on four clusters of interactions and processes that were identified as particularly limiting scientific progress: i) local to regional processes governing clouds and rainfall over complex topography, with a view towards the challenge of downscaling climate change information to local scales; ii) regional to global processes relevant for the Earth's energy budget and the global hydrological cycle, including changes in solar radiation and aerosol loads; iii) atmosphere-ocean interactions determining the net transfer of freshwater and the resulting impacts on atmosphere and ocean dynamics in the Southern Ocean region; and iv) atmosphere-land-surface interactions that are essential in determining the water vapor and energy exchange fluxes in Europe and hence in altering droughts and rainfall patterns.
Regular expressions can be scary... really scary. Fortunately, once you memorize what each symbol represents, the fear quickly subsides. If you fit the title of this article, there's much to learn! Let's get started. Section 1: Learning the Basics.

Mark-Jason Dominus. Copyright © 1998 The Perl Journal. Reprinted with permission. This isn't an article about how to use regexes; you've probably seen plenty of those already. It's about how you would write a regex package from scratch, in a language like C that doesn't already have regexes. I'll demonstrate a new module, Regex.pm, which implements regexes from nothing, in Perl.

The Regex Coach is a graphical application for Windows which can be used to experiment with (Perl-compatible) regular expressions interactively. It has the following features: It shows whether a regular expression matches a particular target string. It can also show which parts of the target string correspond to captured register groups or to arbitrary parts of the regular expression. It can "walk" through the target string one match at a time. It can simulate Perl's split and s/// (substitution) operators.
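For readers following along in Python rather than Perl, the features the Regex Coach demonstrates - matching, captured groups, split, and substitution - map directly onto the standard re module. A minimal sketch (my example):

    import re

    text = "2008-11-02: symbols, operators, delimiters"

    # Matching with captured register groups
    m = re.match(r"(\d{4})-(\d{2})-(\d{2})", text)
    if m:
        print(m.groups())         # ('2008', '11', '02')

    # The equivalent of Perl's split operator
    print(re.split(r"[:,]\s*", text))

    # The equivalent of Perl's s/// substitution operator
    print(re.sub(r"\d+", "#", text))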
Web edition: November 16, 2012
Print edition: December 1, 2012; Vol. 182 #11 (p. 4)

NEW DATING METHOD FOR MILLION-YEAR-OLD FOSSILS — A new radioactive dating method promises to close one of the major remaining gaps in methods of fixing dates on the geological and archaeological time scales. The new procedure, based on the radioactive disequilibrium in nature between uranium-234 and its parent U-238, was originated by David Thurber of Columbia's Lamont Geological Observatory at Palisades, N.Y. The research is described in the Journal of Geophysical Research, Nov. 1962. Uranium-234 is an isotope of uranium formed by the radioactive decay of U-238. The "disequilibrium" between the two isotopes possibly can be employed to date sedimentary material — which often contains fossils — as old as 1 million to 1.5 million years.
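To see how disequilibrium yields an age: an initial excess of U-234 decays back toward secular equilibrium (activity ratio 1) with U-234's half-life of about 245,000 years, so a measured activity ratio can be inverted for time. The Python sketch below is my illustration, with an assumed seawater-like initial ratio of 1.15; none of these numbers come from the 1962 article:

    import math

    HALF_LIFE_U234 = 245_000.0          # years (modern value)
    LAM = math.log(2) / HALF_LIFE_U234  # decay constant, 1/years

    def age_from_ratio(ratio, initial_ratio=1.15):
        # Invert ratio(t) = 1 + (A0 - 1) * exp(-LAM * t) for t
        return -math.log((ratio - 1.0) / (initial_ratio - 1.0)) / LAM

    print(round(age_from_ratio(1.10)))  # ~143,000 years
    print(round(age_from_ratio(1.02)))  # ~712,000 years, nearing the method's limit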
Editor's note: This story will appear in our January issue but is being posted early because of a publication in today's Nature.

Thousands of years after the last woolly mammoth lumbered across the tundra, scientists have sequenced a whopping 50 percent of the beast’s nuclear genome, they report in a new study. Earlier attempts to sequence the DNA of these icons of the Ice Age produced only tiny quantities of code. The new work marks the first time that so much of the genetic material of an extinct creature has been retrieved. Not only has the feat provided insight into the evolutionary history of mammoths, but it is a step toward realizing the science-fiction dream of being able to resurrect a long-gone animal.

Researchers led by Webb Miller and Stephan C. Schuster of Pennsylvania State University extracted the DNA from hair belonging to two Siberian woolly mammoths and ran it through a machine that conducts so-called high-throughput sequencing. Previously, the largest amount of DNA from an extinct species comprised around 13 million base pairs—not even 1 percent of the genome. Now, writing in the November 20 issue of Nature, the team reports having obtained more than three billion base pairs. “It’s a technical breakthrough,” says ancient-DNA expert Hendrik N. Poinar of McMaster University in Ontario.

Interpretation of the sequence is still nascent, but the results have already helped overturn a long-held assumption about the proboscidean past. Received wisdom holds that the woolly mammoth was the last of a line of species in which each one begat the next, with only one species existing at any given time. The nuclear DNA reveals that the two mammoths that yielded the DNA were quite different from each other, and they seem to belong to populations that diverged 1.5 million to two million years ago. This finding confirms the results of a recent study of the relatively short piece of DNA that resides in the cell’s energy-producing organelles—called mitochondrial DNA—which suggested that multiple species of woolly mammoth coexisted. “It looks like there was speciation that we were previously unable to detect” using fossils alone, Ross D. E. MacPhee of the American Museum of Natural History in New York City observes.

Thus far the mammoth genome exists only in bits and pieces: it has not yet been assembled. The researchers are awaiting completion of the genome of the African savanna elephant, a cousin of the woolly mammoth, which will serve as a road map for how to reconstruct the extinct animal’s genome. Armed with complete genomes for the mammoth and its closest living relative, the Asian elephant, scientists may one day be able to bring the mammoth back from the beyond. “A year ago I would have said this was science fiction,” Schuster remarks. But as a result of this sequencing achievement, he now believes one could theoretically modify the DNA in the egg of an elephant to match that of its furry cousin by artificially introducing the appropriate substitutions to the genetic code. Based on initial comparisons of mammoth and elephant DNA, he estimates that around 400,000 changes would produce an animal that looks a lot like a mammoth; an exact replica would require several million.
Captive Breeding and Conservation

When effective alternatives are unavailable or unsuccessful, captive breeding can play a crucial role in the recovery of critically endangered species. The popularity of incorporating captive-breeding programs in species recovery plans has increased over the years, and with constantly improving techniques this can make the difference between survival and extinction for many species. Successful examples include the California Condor (Gymnogyps californianus), the Mauritius Kestrel (Falco punctatus), the black-footed ferret (Mustela nigripes) and the Guam rail (Rallus owstoni).

The goal of establishing a captive-breeding program for an endangered species is to build stable, genetically healthy captive populations whose offspring can be reintroduced into the wild, either to boost existing population numbers or to establish new populations in other suitable habitat. Captive breeding has many limitations as a recovery strategy; it is not a long-term solution, and when employed it should always be coupled with recovery objectives for wild populations. Such programs are not to be confused with captive propagation for other purposes, such as public exhibits, research or conservation education.

The Hawaii Endangered Bird Conservation Program is a partnership composed of the Zoological Society of San Diego's CRES, the U.S. Department of the Interior, the State of Hawaii and Hawaii's private landowners. These groups work together to save some of Hawaii's most endangered forest birds. The Maui Forest Bird Recovery Project is involved in capturing wild birds for use in this program, harvesting wild eggs for captive incubation and rearing, and planning future releases of captive-bred birds. To see more about this program visit the CRES website at http://cres.sandiegozoo.org/projects/sp_hawaii_birds.html
Clear and Bright With a Chance of Sunspots and Magnetic Storms Because of our increasing reliance on satellite-driven technology and far-flung power grids, the Sun and its magnetism can wreak havoc on society in a matter of hours. New computer models and observing tools developed at NCAR are sharpening scientists’ views of the vast forces shaping magnetism on and above the Sun’s surface. These tools also help point the way toward prediction of solar storms, as well as the strength and timing of the Sun’s 11-year cycle. The case of the missing sunspots It wasn’t big news when sunspot activity diminished between roughly 1645 and 1715. Astronomers of the day took note of the spots that did occur, but the overall lack of activity didn’t stand out until the discipline itself grew older. In the 1880s, German astronomer Gustav Spörer took a closer look at the 1645–1715 sunspot drought. He concluded that it was more than just the result of a small number of observers in a still-young discipline. A few years later, E.W. Maunder carried out further study on the mysterious minimum. In 1976, NCAR scientist John Eddy labeled the period the Maunder Minimum and named an earlier minimum, from about 1420 to 1570, for Spörer. The two periods fall within a regional cooldown in Earth’s climate known as the Little Ice Age. Today, researchers continue to explore and debate the extent to which variations in the Sun affect climate on Earth. AT THE UNIVERSITIES Atmospheric research in outer space David Charbonneau (Harvard University) was in graduate school in the mid-1990s when astronomers announced the first discovery of a planet outside our solar system. “It was clear that extrasolar planets were going to be a big field of astronomy,” he recalls. “I was very interested in developing new methods of finding them.” Charbonneau pursued his interest while completing his doctorate at Harvard by visiting NCAR and launching a fruitful collaboration with HAO’s Timothy Brown. The two focused on detecting the dimming of light caused when a planet transits, or crosses in front of, its parent star. They broke new scientific ground in 2001 when they used the imaging spectrograph on NASA’s Hubble Space Telescope to detect the first atmosphere on an extrasolar planet. Charbonneau, Brown, and several colleagues recently set up the Trans-Atlantic Exoplanet Survey, a network of small telescopes designed specifically to look for planets orbiting bright stars. In 2004, they detected a Jupiter-sized planet some 500 light years from Earth. Charbonneau expects TrES to lead to new insights into the formation and evolution of planets orbiting other stars by deploying tools such as NASA’s newly launched Spitzer Infrared Space Telescope to determine temperatures, atmospheric compositions, and other properties. “We want to see if most stars like the Sun have systems of planets that are similar to ours,” says Charbonneau. A solar veteran’s forecast: increasing progress One of NCAR’s most senior solar researchers, Thomas Holzer—who came to NCAR in 1973 to start a program investigating Earth’s magnetosphere—believes we are on the verge of critical scientific breakthroughs because of increasingly powerful observing instruments and improvements in solar modeling. “I have a feeling we’re going to be making enormous strides over the next ten years,” he says. In his role as senior scientist, Holzer is helping to oversee research into such issues as the origins of coronal mass ejections and the generation of magnetic fields. 
He is especially interested in solar prominences, which may provide insights into the types of coronal mass ejections that can affect Earth's atmosphere. "We're laying the basis for eventually decent space weather forecasting," he says. Meanwhile, several NCAR groups are collaborating on the Whole Atmosphere Community Climate Model (WACCM), which incorporates Sun-driven effects on Earth's upper atmosphere. The time may not be far off, Holzer adds, when scientists will be able to model the entire Sun-Earth system. "It would be my hope that we will eventually have a coupled model all the way from the solar interior to the surface of Earth," he says.

For centuries, the ebb and flow of weather has engaged the predictive efforts of soothsayers, folklorists, and—more recently—scientists. In the late 20th century, the notion of solar weather prediction joined its earthbound counterpart as a scientific goal. As the threats of solar activity to a burgeoning array of vulnerable technology became clear, solar weather prediction became both more critical and more plausible. So what features tell us most about when, where, and how powerfully the next solar storm will assail satellites, mobile phones, or electrical grids? The vast solar corona, the outermost part of the Sun's atmosphere, holds key clues. It's the launching pad for energized particles that can trigger geomagnetic storms in Earth's atmosphere. Research on the corona has been part of NCAR since 1961, when the new center absorbed the pioneering High Altitude Observatory (HAO). There's now fresh momentum behind HAO's work on solar processes in the corona and other promising regions. New computer models, observations, and a groundbreaking NCAR instrument may yield critical insights into the 11-year cycle of sunspots, the mechanisms that lead to giant eruptions known as coronal mass ejections, and the behavior of magnetic fields.

Will the next solar cycle be on time?

Scientists for generations have speculated about the mysterious mechanisms that drive sunspots. These regions of concentrated magnetic fields at the Sun's surface can cause powerful solar storms, known as coronal mass ejections, that sometimes buffet Earth's atmosphere. But why does sunspot activity tend to ebb and flow in cycles of about 11 years? Equipped with a groundbreaking computer model, NCAR scientists and colleagues may be closing in on some important clues. The key to the 11-year cycle, they believe, has to do with a current of plasma, or electrified gas, that circulates between the Sun's equator and its poles. If this proves to be the case, the finding could lead to better predictions of upcoming solar cycles. For example, the team forecasts that the next one, known as cycle 24, will begin in late 2007 or early 2008—at least six months late—because of a deceleration of the plasma circulation. Scientists have known for years about this current of plasma, known as the meridional flow, which moves at a pace of around 72 kilometers per hour (45 miles per hour) near the surface. But they had not previously connected it to sunspot activity. The meridional flow appears to act as a sort of conveyor belt, slowly transporting remnant magnetic signatures of the sunspots of previous cycles from the Sun's surface to the interior. Inside the Sun, the remnants give rise to a new generation of magnetic fields that produce new sunspots at the surface.
"In our model, we can show how physical processes relate the surface signatures of solar magnetic fields from old cycles to that of the new cycle," explains NCAR's Mausumi Dikpati. The model results are consistent with observed solar features, she adds. Dikpati is working with HAO colleagues Giuliana de Toma, Peter Gilman, and Oran White; also on the team is Charles "Nick" Arge (U.S. Air Force). The team is putting the model through a number of tests. It has already successfully reproduced abnormal elements of cycle 23, when the typical magnetic reversal of the Sun's north and south poles took place slowly. Now the researchers are working on making additional predictions for cycle 24, including when the peak will occur and how intense it will be. Ten years from now, Dikpati hopes the model will be able to make some estimates of sunspot count for the next cycle. The model might also help society brace for an extended period of unusual solar activity, such as that during the Little Ice Age, when the number of sunspots dropped dramatically and temperatures cooled in some regions of the globe.

Forecasting mighty magnetic forces

Sunspots get attention in part because they may cause a far larger type of solar disturbance, one that can propagate all the way to Earth's atmosphere. About once a week when the Sun is relatively quiet and about two or three times a day at the peak of the 11-year solar cycle, a great bundle of plasma escapes from the Sun's surface. This coronal mass ejection, or CME, accelerates through the corona in only a few hours. If it's pointed at Earth, it can irradiate astronauts, disable the circuitry in satellites, knock out surface power grids, degrade the accuracy of the Global Positioning System, and paint the high-latitude skies with shimmering auroras. Forecasts issued by NOAA shortly after a CME emerges from the Sun provide warnings from hours to several days in advance of a potential geomagnetic storm. But will we ever be able to predict one before it erupts? To give society such lead time, scientists will have to learn the precursors of CMEs. They'll need to illuminate the plasma contortions below the solar surface that give birth to a CME and the coronal magnetic fields that shape its evolution.

Spotting a newborn CME is now routine, but viewing the magnetism that lies at its heart—and throughout the surrounding corona—isn't so easy. Since 1998, NASA's Transition Region and Coronal Explorer satellite (TRACE) has parted the curtains somewhat. With a tight focus on small regions, it measures how the magnetic field shapes coronal plasma from the photosphere up through the corona at an exceptionally fine horizontal resolution. While intrigued by the TRACE images, coronal experts have been tantalized by what is still unseen. The arching structures uncovered by TRACE denote only a few of the corona's intricately nested magnetic field lines—arches within arches, as it were. To help see these multilayered structures, many coronal specialists have turned to animation. Technology is on their side: desktop computers and software packages are now powerful enough to produce useful animations in short order. At HAO, Sarah Gibson is using visualization routines based on modeling by colleague Yuhong Fan to see how an idealized twisted tube of magnetic flux—a CME in the making—might appear in observations. She concentrates on a sigmoidal (S-shaped) portion of the twisted field.
This zone is the interface between field lines that are firmly tethered to the dense solar surface and those that have a portion suspended in the atmosphere and thus move more freely. "This is the region where heating is likely to happen during an eruption," says Gibson. Recent x-ray data show that hot coronal gas can take on a sigmoidal structure a few days to weeks before a CME emerges in the same area. While the relationship isn't guaranteed, and it currently has limited use as a forecasting tool, "the link is definitely intriguing from a scientific point of view," says Gibson. "If we can figure out the science behind the eruptions, we'll be in a much better position for making future forecasts."

A Sun-wide glimpse of coronal magnetism

Attempts to observe the solar corona have long been thwarted by the Sun's far-brighter surface, as if someone were trying to decipher a whisper amid a thunderstorm. Eclipses help muffle the visual noise of the solar disk, and filters can artificially block it, but each approach has its limitations. In early 2004, a handful of NCAR researchers fulfilled a long-sought measurement dream. At the National Solar Observatory in New Mexico, they collected the first-ever data on magnetic fields across the entire solar limb (the slice of the Sun's corona perpendicular to Earth). Animations from their instrument, the Coronal Multichannel Polarimeter (CoMP), reveal turbulent, high-velocity magnetic features spewing outward from the Sun's surface. "People have measured coronal magnetism before," says HAO's Steven Tomczyk, "but we believe this is the first time it's being done in a time sequence like this, where you can see an evolving structure. I think we're making important steps and demonstrating that this technology works."

Near the Sun's surface—especially in the photosphere, the lowest part of the Sun's atmosphere—magnetism has been traced for over a decade by ground- and space-based instruments, such as NCAR's Advanced Stokes Polarimeter. These devices infer the magnetic field by measuring several components of visible radiation. The brightness of the polarized light is proportional to the strength of the magnetic field along the line of sight. The HAO team also devised a way to measure two wavelength components simultaneously. Earth's atmosphere scatters a continuously varying amount of background light from the brighter disk into the coronal line of sight. The simultaneous measurements in CoMP allow the varying background signal to be accurately removed while preserving the faint coronal signal.

Using a separate instrument based on a somewhat different approach, a team led by HaoSheng Lin (University of Hawaii) began producing maps of the solar corona later in 2004. "There's a little bit of friendly competition," says Lin. However, both groups hope to pair their devices with a larger, yet-to-be-designed telescope on the order of a meter in diameter. "Ultimately," says Tomczyk, "you want to gather more light."
Elysia is an "opisthobranch" sea slug famous on the internet for its remarkable ability to photosynthesise, giving it the nickname of "solar-powered sea slug". It does this by kleptoplasty – stealing plastids from its algal food. If you note the greenish colour in E. asbecki above (Wägele et al., 2010), the green comes from the harvested chloroplasts. This post will look at this process, and more generally at the biology of the genus.

Before starting, it's worth noting that while Elysia is the most famous of these examples on the internet, it's definitely not the only animal that can photosynthesise by symbiosis with algae. Falkowski & Knoll (2007) have a three-page table (pp. 90-92) listing various eukaryotes where this has been documented, including sponges, corals, ascidians, flatworms, and bivalves.

E. chlorotica, if internet fame is anything to go by, is probably the poster child for the phenomenon, and also the one that exhibits it the best. It can harvest the chloroplasts from any number of ulvophycean and xanthophyte algae, but the most successful relationship is with the xanthophyte alga Vaucheria litorea. The chloroplasts get sequestered intracellularly in the digestive epithelium, and stay functional for over 14 months (Rumpho et al., 2006). This isn't just an opportunistic relationship, but one that has evolved into a real symbiosis, as evidenced by the identification of lateral transfer of V. litorea nuclear genes into the E. chlorotica genome (Schwartz et al., 2010), most tellingly the transfer of the oxygenic photosynthesis gene psbO, and of fcp, associated with light-harvesting complexes (Rumpho et al., 2008). This is also why calling it a symbiosis may be a mistake, as it doesn't involve two genetically independent organisms (Law & Lewis, 1983). Nomenclature aside, the chloroplast is hugely beneficial for the slug, as it provides "free" carbohydrates at times when food isn't available (e.g. in winter, when host algae don't grow). Interestingly, after getting phagocytosed, the algal chloroplast loses its host membrane and the outer two of its four chloroplast membranes, and is found free in the cytoplasm in adults (Rumpho et al., 2000). In juveniles, the chloroplasts are bound within a special membrane.

Elysia contains over 120 species, accounting for ~40% of all sacoglossans. In fact, the phenomenon was first discovered in E. atroviridis, not E. chlorotica (Kawaguti & Yamasu, 1965), with functionality of chloroplasts documented by Taylor (1967) and evidence that the chloroplasts contribute to the animal's metabolism by Trench et al. (1972). Other Elysia species come nowhere near E. chlorotica's efficiency, with most retaining the chloroplasts for a couple of months at best, after which the chloroplasts degrade, get enveloped in phagosomes, and are presumably digested (Marín & Ros, 1993). Feeding allows chloroplasts to get replaced, but this is limited by the availability of algae. It is, however, notable that chloroplasts from multiple species can get sequestered (Curtis et al., 2006), so it's not necessarily a specific interaction. Elysia belongs to the Plakobranchidae, a family in which long-term plastid retention is autapomorphic (Händeler et al., 2009) and generally lasts over a couple of weeks; outside the family, the association lasts for no more than a week.
Individuals behaviourally try to control light intensity in order to maximise the lifespan of the acquired chloroplasts and to regulate photosynthesis rate (Casalduero & Muniain, 2008), but at some point, the chloroplasts just stop working.

A very valid question to ask is how such an association can arise. The ability to acquire plastids from food is most probably an ancestral one in sacoglossans, and is linked to one of the main autapomorphies of the Sacoglossa, a radula with only one row of teeth and one median tooth (Mikkelsen, 1996; pictured above from E. asbecki from Wägele et al. (2010)), which is what allows the animal to pierce the algal cell and suck out the plastids (Jensen, 1997). This feeding style is highly specialised and not found anywhere else in the molluscs, and underlies the reason why sacoglossans feed only on septate and siphonous algae, with only some also feeding additionally on other plants such as seagrass (Jensen, 1981). Each species, depending on its specificity, can also have further modifications to fit the radula perfectly to its host plant (Jensen, 1994), and it's likely that host shifts play a role in diversification and speciation, as hinted at by the work of Trowbridge & Todd (2001) on the shift of a Scottish subpopulation of E. viridis to feeding on an invasive alga within the past 50 years.

The retention of plastids can be imagined as having considerable fitness benefits, for example a free food source in the winter when algae don't grow, or a defensive advantage as camouflage – once they ingest the chloroplasts, the slugs turn green, identical to the background, and this is a tremendous advantage considering their lack of shell. There is also some evidence that their food gives them defensive compounds to sequester as well as chloroplasts, for example chlorodesmin, a fish repellent (Hay et al., 1989).

On to Elysia's general biology. The monophyly of the genus is not known for sure, being strongly supported only by molecular phylogenies (Händeler et al., 2009). Truly diagnostic macroscopic features are not to be found, as is clear from field guides, where sacoglossans are all lumped together as unidentified morphospecies. So if you happen to catch one (your best bet is to look carefully in algal meadows, at any depth, keeping in mind that they will be well-camouflaged), consult an expert or trawl through seaslugforum.net.

As in all gastropods, Elysia mating involves "love darts". Schmitt et al. (2007) describe the process in E. timida, where it basically amounts to a synchronised shooting of hypodermic love darts, followed by a short period of standard vaginal impregnation aided by glandular fluids. You can view three videos from that paper here. Elysia's life cycle involves a planktonic larval stage, with the veliger larva having a sinistral shell and dispersing for a couple of weeks or more. The metamorphosis to the adult takes place when the veliger attaches itself to a film of microorganisms growing in an alga-rich habitat.

If you enjoy my blog, please consider supporting my scientific research by sharing and/or donating to my Petridish project. Thank you!

Casalduero FG & Muniain C. 2008. The role of kleptoplasts in the survival rates of Elysia timida (Risso, 1818) (Sacoglossa: Opisthobranchia) during periods of food shortage. Journal of Experimental Marine Biology and Ecology 357, 181-187.

Falkowski PG & Knoll AH. 2007. Evolution of Primary Producers in the Sea. Academic Press.
Green BJ, Li W-J, Manhart JR, Fox TC, Summer EJ, Kennedy RA, Pierce SK & Rumpho ME. 2000. Mollusc-algal chloroplast endosymbiosis: photosynthesis, thylakoid protein maintenance, and chloroplast gene expression continue for many months in the absence of the algal nucleus. Plant Physiology 124, 331-342.

Händeler K, Grzymbowski YP, Krug PJ & Wägele H. 2009. Functional chloroplasts in metazoan cells – a unique evolutionary strategy in animal life. Frontiers in Zoology 6, 28. DOI: 10.1186/1742-9994-6-28

Hay ME, Pawlik JR, Duffy JE & Fenical W. 1989. Seaweed-herbivore-predator interactions: host-plant specialization reduces predation on small herbivores. Oecologia 81, 418-427.

Jensen KR. 1981. Observations on feeding methods in some Florida ascoglossans. Journal of Molluscan Studies 47, 190-199.

Jensen KR. 1993. Morphological adaptations and plasticity of radular teeth of the Sacoglossa (= Ascoglossa) (Mollusca: Opisthobranchia) in relation to their food plants. Biological Journal of the Linnean Society 48, 135-155.

Jensen K. 1997. Evolution of the Sacoglossa (Mollusca, Opisthobranchia) and the ecological associations with their food plants. Evolutionary Ecology 11, 301-335.

Kawaguti S & Yamasu T. 1965. Electron microscopy on the symbiosis between an elysioid gastropod and chloroplasts of a green alga. Biological Journal of Okayama University 11, 57-65.

Law R & Lewis DH. 1983. Biotic environments and the maintenance of sex – some evidence from mutualistic symbioses. Biological Journal of the Linnean Society 20, 249-276.

Marín A & Ros J. 1993. Ultrastructural and ecological aspects of the development of chloroplast retention in the sacoglossan gastropod Elysia timida. Journal of Molluscan Studies 59, 95-104.

Mikkelsen PM. 1996. The evolutionary relationships of Cephalaspidea s.l. (Gastropoda: Opisthobranchia): a phylogenetic analysis. Malacologia 37, 375-442.

Rumpho ME, Summer EJ & Manhart JR. 2000. Solar-powered sea slugs: mollusc/algal chloroplast symbiosis. Plant Physiology 123, 29-38. DOI: 10.1104/pp.123.1.29

Rumpho ME, Dastoor FP, Manhart JR & Lee J. 2006. The kleptoplast. Advances in Photosynthesis and Respiration 23, 451-473. DOI: 10.1007/978-1-4020-4061-0_23

Rumpho ME, Worful JM, Lee J, Kannan K, Tyler MS, Bhattacharya D, Moustafa A & Manhart JR. 2008. Horizontal gene transfer of the algal nuclear gene psbO to the photosynthetic sea slug Elysia chlorotica. PNAS 105, 17867-17871.

Schmitt V, Anthes N & Michiels NK. 2007. Mating behaviour in the sea slug Elysia timida (Opisthobranchia, Sacoglossa): hypodermic injection, sperm transfer and balanced reciprocity. Frontiers in Zoology 4, 17.

Taylor DL. 1967. The occurrence and significance of endosymbiotic chloroplasts in the digestive glands of herbivorous opisthobranchs. Journal of Phycology 3, 234-235.

Trench RK, Trench ME & Muscatine L. 1972. Symbiotic chloroplasts: their photosynthetic products and contribution to mucus synthesis in two marine slugs. Biological Bulletin 142, 335-349.

Trowbridge CD & Todd CD. 2001. Host-plant change in marine specialist herbivores: ascoglossan sea slugs on introduced macroalgae. Ecological Monographs 71, 219-243.

Wägele H, Stemmer K, Burghardt I & Händeler K. 2010. Two new sacoglossan sea slug species (Opisthobranchia, Gastropoda): Ercolania annelyleorum sp. nov. (Limapontioidea) and Elysia asbecki sp. nov. (Plakobranchoidea), with notes on anatomy, histology and biology. Zootaxa 2676, 1-28.
Scientists have taken to egg swapping in the hopes of reinvigorating the ailing population of streaked horned larks in Washington state. The streaked horned lark is a small bird native to western Washington and Oregon, currently a candidate for endangered species listing according to the U.S. Fish and Wildlife Service. The Washington population of the bird has decreased significantly in recent years due to a lack of genetic diversity and inbreeding. To turn this trend around, wildlife biologists are taking healthy eggs from Oregon's Willamette Valley and swapping them with the sickly eggs of native Washington larks. The hope is to inject the population with some much-needed healthy new blood and to revive this species on the brink. Thus far the project seems to be working, as many of the test mothers have taken to their adopted young as if they were their own, but only time will tell if it's enough to save the species.
What's the News: While the Kepler spacecraft is busy finding solar system-loads of new planets, other astronomers are expanding our ideas of where planets could potentially be found. One astronomer wants to look for habitable planets around white dwarfs, arguing that any water-bearing exoplanets orbiting these tiny, dim stars would be much easier to find than those around main-sequence stars like our Sun. Another team dispenses with stars altogether and speculates that dark matter explosions inside a planet could hypothetically make it warm enough to be habitable, even without a star. "This is a fascinating, and highly original idea," MIT exoplanet expert Sara Seager told Wired, referring to the dark matter hypothesis. "Original ideas are becoming more and more rare in exoplanet theory."

References: Eric Agol. "Transit surveys for Earths in the habitable zones of white dwarfs." doi: 10.1088/2041-8205/731/2/L31. Dan Hooper and Jason H. Steffen. "Dark Matter And The Habitability of Planets." arXiv:1103.5086v1. Image: NASA/European Space Agency

In life, most people try to avoid entanglement, be it with unsavory characters or alarmingly large balls of twine. In the quantum world, entanglement is a necessary step for the super-fast quantum computers of the future. According to a study published by Nature today, physicists have successfully entangled 10 billion quantum bits, otherwise known as qubits. But the most significant part of the research is where the entanglement happened–in silicon–because, given that most of modern-day computing is forged in the smithy of silicon technology, this means that researchers may have an easier time incorporating quantum computers into our current gadgets. Quantum entanglement occurs when the quantum state of one particle is linked to the quantum state of another particle, so that you can't measure one particle without also influencing the other. In this particular study, led by John Morton at the University of Oxford, UK, the researchers aligned the spins of electrons and phosphorus nuclei–that is, the particles were entangled.

The weights, they are a-changin'. What we're taught in school science classes is a streamlined version of a muddier and more complicated reality, and it's no different with something as iconic as the periodic table of elements. This week the venerable chart's overseers decided to fiddle with the atomic weights of 10 elements, changing their values from a single set number to a range of numbers, which is messier but more accurately resembles the messy real world. The reason for the change is that atomic weights are not always as concrete as most general-chemistry students are taught, according to the University of Calgary, which made the announcement, and the snappily named International Union of Pure and Applied Chemistry's Commission on Isotopic Abundances and Atomic Weights, which oversees such weighty matters. [CNET]

Helium is running out, and we've known this for a while. We started stockpiling the stuff near Amarillo, Texas in 1925, in part for dirigible use, and stepped up reserves in the 1960s as a Cold War asset. In 1996, Congress passed the Helium Privatization Act mandating that the United States sell the gas at artificially low prices to get rid of the stockpile by 2015. This February, the National Research Council published a report estimating that, given increasing consumption, the world may run out of helium in 40 years.
That's bad news given helium's current applications in science, technology, and party decorations–and possible future applications in fusion energy. Now physicist Robert Richardson, who won a 1996 Nobel Prize for work using helium-3 to make superfluids, has come forward to stress the folly of underselling our supply of the natural resource. He suggested in several interviews that the gas's price should mirror its actual demand and scarcity. He estimates that typical party balloons should cost $100 a pop. "They couldn't sell it fast enough and the world price for helium gas is ridiculously cheap," Professor Richardson told a summer meeting of Nobel laureates. "Once helium is released into the atmosphere in the form of party balloons or boiling helium, it is lost to the Earth forever," he emphasised. [The Independent] If we don't heed Richardson's warning, here are some sources the United States might have to tap when we run out:

It's big, it's loud, it's Iron Man 2, and it opens today. Like a lot of summer blockbusters, this sequel stretches the laws of physics and the capabilities of modern technology. But, admirably, a lot of the tech in Iron Man 2 is grounded in fact. Spoiler alert! Read on at your own risk.

Palladium and particle colliders

Being Iron Man is killing Tony Stark. As this sequel begins, the palladium core that powers the suit and keeps Stark alive is raising toxicity levels in his bloodstream to alarming highs. It's not hard to see why Iron Man would try palladium—the now-infamous cold fusion experiments that created a storm of hype in 1989 relied on the metal. And it's true that palladium does have some toxicity, though it's been used in alloys for dentistry and jewelry-making. Having exhausted the known elements in the search for a better power source, Stark, ever the DIY enthusiast, builds a particle collider in his workshop. This is actually not crazy: physicist Todd Satogata of Brookhaven National Lab says you can build tiny particle colliders; even whiz-kid teenagers do it. Powering the accelerator, however, might be an issue. At 2.5 miles long, Brookhaven's superconducting collider needs 10 to 15 megawatts of power—enough for 10,000 or 15,000 homes. "For Stark to run his accelerator, he's gotta make a deal with his power company or he's gotta have some sort of serious power plant in his backyard," Satogata says [Popular Mechanics]. In addition, Stark doesn't appear to have the magnets needed to focus a beam as tightly as he does in the film, where it shreds his shop before he gets it focused in the right place. And, as we covered with the recent discovery of element 117, the ultra-heavy lab-created elements that Stark could have created in his accelerator don't last long. However, back in 1994, when only 106 elements dotted the periodic table, DISCOVER discussed the idea some physicists have of an "island of stability" where elements we've yet to discover or create might be able to exist in a stable way. Perhaps Tony found it.

The guts of the suit

After a long quest, the U.S. military gets its hands on Stark's most magnificent piece of technology, the Iron Man suit. What they saw when they looked inside was the work of special effects wiz Clark Schaffer. The silvery suit, originally seen in the first "Iron Man," is shown again in the new movie in an "autopsy" scene in which the government begins tearing it apart to see how it works.
"[The filmmakers] wanted it to look like what you see under the skin of a jet," said Schaffer, who, along with friend and modeler Randy Cooper, worked on the suit in Los Angeles for six weeks. "There's an aesthetic to it. I try to make it look as functional and practical as possible but also something that has beauty to it. That was my baby" [Salt Lake Tribune]. But how might the Iron Man suit be able to stand up to the punishment Stark continually receives? Tech News Daily proposes that he took advantage of something scientists are developing now: carbon nanotube foam with great cushioning power.

Iron Man's nemesis in this second installment is Ivan Vanko, played by the villainous and murky Mickey Rourke, whom you might have seen in previews stalking around a racetrack with seemingly electrified prostheses attached to his arms. The explanation in the film is hand-waved a bit, but it seems Vanko's weapons rely on plasma. Scientists actually are developing weapons based on plasma, such as the StunStrike, which essentially fires a bolt of lightning, creating an electrical charge through a stream of plasma. Researchers have recently even created what appears to be ball lightning in microwave ovens, which Iron Man's "repulsor blasts" resemble [Tech News Daily].

Drones and hacking

Vanko isn't happy with just amazing plasma tentacles, though. Working for Stark's rival military-industrialist Justin Hammer (Sam Rockwell), he develops a horde of ghastly humanoid drones for each branch of the military. That, of course, is straight out of science fact—our military relies increasingly on robots, be they unmanned aerial vehicles, bots on the ground that investigate roadside bombs, or even unmanned subs currently under development. He's a hacker, too, seizing control of an Iron Man suit worn by Don Cheadle as Stark sidekick James Rhodes. As DISCOVER covered in December, that's a real-life worry, too. Hackers figured out how to steal the video feeds from our Predator drones because of an encryption lapse at one step in the process.

DISCOVER: 10 Obscure Elements That Are More Important Than You'd Think (gallery)
DISCOVER: An Island of Stability
DISCOVER: Attaining Superhero Strength in Real Life, and 2 more amazing science projects
DISCOVER: The Science and the Fiction presents the best and worst use of science in sci-fi films
80beats: A Hack of the Drones: Insurgents Spy on Spy Planes with $26 Software
Bad Astronomy: Iron Man = Win

A little square that has been left blank on the periodic table for all these years might finally be filled in. A team of American and Russian scientists have just reported the synthesis of a brand new element–element 117. Says study coauthor Dawn Shaughnessy: "For a chemist, it's so fundamentally cool" to fill a square in that table [The New York Times]. If other scientists confirm the discovery, the still-unnamed element will take its place between elements 116 and 118, both of which have already been tracked down. A paper about element 117 will soon be published in Physical Review Letters, and scientists say the new element appears to point the way toward a brew of still more massive elements with chemical properties no one can predict [The New York Times]. Element 117 was born in a particle accelerator in Russia, where the scientists smashed together calcium-48 — an isotope with 20 protons and 28 neutrons — and berkelium-249, which has 97 protons and 152 neutrons.
The collisions spit out either three or four neutrons, creating two different isotopes of an element with 117 protons [Science News]. The new element 117 takes its place between two superheavy elements that scientists know to be very radioactive and that decay almost instantly. But many researchers think it is possible that even heavier elements may occupy an "island of stability" in which superheavy atoms stick around for a while [Science News]. If this theory holds up, scientists say, the work could generate an array of strange new materials with as yet unimagined scientific and practical uses [New York Times].

Scientists have found a way to safely store notoriously dangerous white phosphorus on the atomic scale: in a cage made of atoms that can only be unlocked by a specific molecule, according to a study published in the journal Science. Containing white phosphorus, a tetrahedral formation of phosphorus atoms, will be useful because the molecule readily reacts if it comes into contact with air. It's not surprising, then, that it is often used in military campaigns to create smokescreens to mask movement from the enemy, as well as an incendiary in bombs, artillery and mortars [ScienceDaily]. White phosphorus is also an essential ingredient in many plant fertilizers and weed killers, so the ability to safely transport and store the molecule would also be an asset for those industries.

Researchers in Germany produced element 112 in 1996, and now that it has been recognized by the International Union of Pure and Applied Chemistry, it will be the newest addition to the periodic table of the elements. It's currently known as ununbium, Latin for 'one-one-two,' but it will be given an official name before it's added to the chart. The new element is one of only 22 elements that are man-made, and it's 277 times heavier than hydrogen, making it the weightiest element on the periodic table. To make it, scientists at Germany's Centre for Heavy Ion Research fused the nuclei of zinc and lead. The atomic number 112 refers to the sum of the atomic numbers of zinc, which has 30, and lead, which has 82. Atomic numbers denote how many protons are found in the atom's nucleus [Reuters]. Creating new elements isn't just a why-not-do-it challenge: it has also helped researchers to understand how nuclear power plants and atomic bombs function [Reuters].
Many mathematical surveys indicate that the proof of the Riemann Hypothesis is the most important open question in mathematics. The rapid pace of mathematics, along with computer-assisted mathematical proofs and visualizations, leads me to believe that this question will be resolved in my lifetime. Math aficionado John Fry once said that he thought we would have a better chance of finding life on Mars than finding a counterexample to the Riemann Hypothesis.

In the early 1900s, British mathematician Godfrey Harold Hardy sometimes took out a quirky form of life insurance when embarking on ocean voyages. In particular, he would mail a postcard to a colleague on which he would claim to have found the solution of the Riemann Hypothesis. Hardy was never on good terms with God and felt that God would not let him die in a sinking ship while Hardy was in such a revered state, with the world always wondering if he had really solved the famous problem.

The proof of the Riemann Hypothesis involves the zeta function, which can be represented by a complicated-looking curve that is useful in number theory for investigating properties of prime numbers. Written as f(x), the function was originally defined as the infinite sum f(x) = 1/1^x + 1/2^x + 1/3^x + 1/4^x + ... When x = 1, this series has no finite sum. For values of x larger than 1, the series adds up to a finite number. If x is less than 1, the sum is again infinite. The complete zeta function, studied and discussed in the literature, is a more complicated function that is equivalent to this series for values of x greater than 1, but it has finite values for any real or complex number, except when the real part is equal to one. We know that the function equals zero when x is -2, -4, -6, ... . We also know that the function has an infinite number of zero values for the set of complex numbers whose real part is between zero and one—but we do not know exactly for what complex numbers these zeros occur. In 1859, mathematician Georg Bernhard Riemann (1826–1866) conjectured that these zeros occur for those complex numbers whose real part equals 1/2. Although vast numerical evidence favors this conjecture, it remains unproven.

The proof of Riemann's Hypothesis will have profound consequences for the theory of prime numbers and for our understanding of the properties of complex numbers. A generalized version of the Hypothesis, when proven true, will allow mathematicians to solve numerous important mathematical problems. Amazingly, physicists may have found a mysterious connection between quantum physics and number theory through investigations of the Riemann Hypothesis. I do not know if God is a mathematician, but mathematics is the loom upon which God weaves the fabric of the universe.

Today, over 11,000 volunteers around the world are working on the Riemann Hypothesis, using a distributed computer software package at Zetagrid.Net to search for the zeros of the Riemann zeta function. More than 1 billion zeros of the zeta function are calculated every day.

In modern times, mathematics has permeated every field of scientific endeavor and plays an invaluable role in biology, physics, chemistry, economics, sociology, and engineering. Mathematics can be used to help explain the colors of a sunset or the architecture of our brains. Mathematics helps us build supersonic aircraft and roller coasters, simulate the flow of Earth's natural resources, explore subatomic quantum realities, and image faraway galaxies. Mathematics has changed the way we look at the cosmos.
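As a quick numerical illustration of the zeros discussed above, here is a minimal Python sketch. It assumes the third-party mpmath library, whose zetazero(n) routine returns the n-th nontrivial zero ordered by height; if the Riemann Hypothesis holds, every value printed has real part exactly 0.5.

    # Sketch: inspect the first few nontrivial zeros of the zeta function.
    # Assumes mpmath is installed (pip install mpmath).
    from mpmath import mp, zeta, zetazero

    mp.dps = 20  # decimal digits of working precision

    for n in range(1, 6):
        rho = zetazero(n)                         # n-th zero in the critical strip
        print(n, rho)                             # e.g. 1  (0.5 + 14.134725...j)
        print("  |zeta(rho)| =", abs(zeta(rho)))  # should be ~0

Running this prints zeros at heights roughly 14.13, 21.02, 25.01, and so on, all on the critical line, consistent with the billions of zeros checked by distributed projects like ZetaGrid.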
Physicist Paul Dirac once noted that the abstract mathematics we study now gives us a glimpse of physics in the future. In fact, his equations predicted the existence of antimatter, which was subsequently discovered. Similarly, mathematician Nikolai Lobachevsky said that "there is no branch of mathematics, however abstract, which may not someday be applied to the phenomena of the real world."
HPF adds directives to Fortran to allow the user to advise the compiler on the allocation of data objects to processor memories. The model is that there is a two-level mapping of data objects to memory regions, referred to as "abstract processors." Data objects (typically array elements) are first aligned relative to one another; this group of arrays is then distributed onto a rectilinear arrangement of abstract processors. (The implementation then uses the same number, or perhaps some smaller number, of physical processors to implement these abstract processors. This mapping of abstract processors to physical processors is implementation-dependent.)

The underlying assumptions are that an operation on two or more data objects is likely to be carried out much faster if they all reside in the same processor, and that it may be possible to carry out many such operations concurrently if they can be performed on different processors. Fortran provides a number of features, notably array syntax, that make it easy for a compiler to determine that many operations may be carried out concurrently. The HPF directives provide a way to inform the compiler of the recommendation that certain data objects should reside in the same processor: if two data objects are mapped (via the two-level mapping of alignment and distribution) to the same abstract processor, it is a strong recommendation to the implementation that they ought to reside in the same physical processor. There is also a provision for recommending that a data object be stored in multiple locations, which may complicate any updating of the object but makes it faster for multiple processors to read the object.

There is a clear separation between directives that serve as specification statements and directives that serve as executable statements (in the sense of the Fortran standards). Specification statements are carried out on entry to a program unit, as if all at once; only then are executable statements carried out. (While it is often convenient to think of specification statements as being handled at compile time, some of them contain specification expressions, which are permitted to depend on run-time quantities such as dummy arguments, and so the values of these expressions may not be available until run time, specifically the very moment that program control enters the scoping unit.)

The basic concept is that every array (indeed, every object) is created with some alignment to an entity, which in turn has some distribution onto some arrangement of abstract processors. If the specification statements contain explicit specification directives specifying the alignment of an array A with respect to another array B, then the distribution of A will be dictated by the distribution of B; otherwise, the distribution of A itself may be specified explicitly. In either case, any such explicit declarative information is used when the array is created. In the case of an allocatable object, we say that the object is created whenever it is allocated. Specification directives for an allocatable object may appear in the specification-part of a program unit, but take effect each time the object is created, rather than on entry to the scoping unit. Alignment is considered an attribute (in the Fortran sense) of a data object.
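As an illustrative sketch of this two-level mapping (the directives are standard HPF, but the particular arrays, template, and processor arrangement are invented for the example):

      REAL A(100), B(100)
!HPF$ PROCESSORS P(4)            ! a rectilinear arrangement of 4 abstract processors
!HPF$ TEMPLATE T(100)            ! an abstract index space; occupies no storage
!HPF$ ALIGN A(I) WITH T(I)       ! level 1: align each array with the template
!HPF$ ALIGN B(I) WITH T(I)
!HPF$ DISTRIBUTE T(BLOCK) ONTO P ! level 2: distribute the template onto processors

Because A and B are ultimately aligned with the same template element for each I, corresponding elements land on the same abstract processor, so an elementwise operation such as A = A + B requires no interprocessor communication.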
If an object A is aligned with an object B, which in turn is already aligned to an object C, this is regarded as an alignment of A with C directly, with B serving only as an intermediary at the time of specification. We say that A is immediately aligned with B but ultimately aligned with C. If an object is not explicitly aligned with another object, we say that it is ultimately aligned with itself. The alignment relationships form a tree with everything ultimately aligned to the object at the root of the tree; however, the tree is always immediately "collapsed" so that every object is related directly to the root.

Every object that is the root of an alignment tree has an associated template or index space. Typically, this template has the same rank and size in each dimension as the object associated with it. (The most important exception to this rule is dummy arguments with the INHERIT attribute, described in Section 4.4.2.) We often refer to "the template for an array," which means the template of the object to which the array is ultimately aligned. (When an explicit TEMPLATE (see Section 3.7) is used, this may be simply the template to which the array is explicitly aligned.)

The distribution step of the HPF model technically applies to the template of an array, although because of the close relationship noted above we often speak loosely of the distribution of an array. Distribution partitions the template among a set of abstract processors according to a given pattern. The combination of alignment (from arrays to templates) and distribution (from templates to processors) thus determines the relationship of an array to the processors; we refer to this relationship as the mapping of the array. (These remarks also apply to a scalar, which may be regarded as having an index space whose sole position is indicated by an empty list of subscripts.)

Every object is created as if according to some complete set of specification directives; if the program does not include complete specifications for the mapping of some object, the compiler provides defaults. By default an object is not aligned with any other object; it is ultimately aligned with itself. The default distribution is implementation-dependent, but must be expressible as explicit directives for that implementation. Identically declared objects need not be provided with identical default distribution specifications; the compiler may, for example, take into account the contexts in which objects are used in executable code. The programmer may force identically declared objects to have identical distributions by specifying such distributions explicitly. (On the other hand, identically declared processor arrangements are guaranteed to represent "the same processors arranged the same way." This is discussed in more detail in Section 3.6.)

Sometimes it is desirable to consider a large index space with which several smaller arrays are to be aligned, but not to declare any array that spans the entire index space. HPF allows one to declare a TEMPLATE, which is like an array whose elements have no content and therefore occupy no storage; it is merely an abstract index space that can be distributed and with which arrays may be aligned.

An object is considered to be explicitly mapped if it appears in an HPF mapping directive within the scoping unit in which it is declared; otherwise it is implicitly mapped.
A mapping directive is an ALIGN, DISTRIBUTE, or INHERIT directive, or any directive that confers an alignment, a distribution, or the INHERIT attribute. Note that we extend this model in Section 8 to allow dynamic redistribution and remapping of objects.
I occasionally come across a new piece of notation so good that it makes life easier by giving a better way to look at something. Some examples:

- Iverson introduced the notation [X] to mean 1 if X is true and 0 otherwise; so for example $\sum_{1 \le n < x} [n\ \text{prime}]$ is the number of primes less than x, and the unmemorable and confusing Kronecker delta function $\delta_n$ becomes [n=0]. (A similar convention is used in the C programming language.)

- The function taking x to x sin(x) can be denoted by x ↦ x sin(x). This has the same meaning as the lambda calculus notation λx.x sin(x) but seems easier to understand and use, and is less confusing than the usual convention of just writing x sin(x), which is ambiguous: it could also stand for a number.

- I find calculations with Homs and ⊗ easier to follow if I write Hom(A,B) as A→B. Similarly, writing $A^B$ for the set of functions from B to A is really confusing, and I find it much easier to write this set as B→A.

- Conway's notation for orbifolds almost trivializes the classification of wallpaper groups.

Has anyone come across any more similar examples of good notation that should be better known? (Excluding standard well-known examples such as commutative diagrams, Hindu-Arabic numerals, etc.)
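As a small aside on the Iverson bracket: languages that treat booleans as the integers 0 and 1 let you use the convention directly, so a sum of brackets becomes a count. A minimal Python sketch (the prime test here is a deliberately naive illustration):

    # Count primes below x by summing Iverson brackets [n prime].
    def is_prime(n):
        return n > 1 and all(n % d != 0 for d in range(2, int(n**0.5) + 1))

    x = 100
    pi_x = sum(is_prime(n) for n in range(1, x))  # True counts as 1, False as 0
    print(pi_x)  # 25 primes below 100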
I agree with Michael that the question is probably expecting you to estimate the answers from the graph. However, this is what I would do if these were experimental results I was analysing.

Let's assume that the blue line is a straight line $f(x)$ and the curve is a quadratic $g(x)$. A straight line has the form $y = ax + b$ where $a$ is the gradient and $b$ is the $y$ intercept. From the graph the $y$ intercept is 1 and the gradient is 4/10, i.e. 0.4. Then:

$$ f(x) = 0.4x + 1 $$

The quadratic is a little harder, but if you know the two zeroes of the quadratic, $x_1$ and $x_2$, then the function has the form $g(x) = A(x - x_1)(x - x_2)$ where $A$ is some constant. In our case the two zeros are both $x = 5$ so the function is $g(x) = A(x - 5)^2$. To find the constant $A$, note that when $x = 0$, $y = 4$, so the constant $A$ must be 4/25 or 0.16.

$$ g(x) = 0.16(x - 5)^2 = 0.16x^2 - 1.6x + 4 $$

So to find the two values of $x$ where the curves cross, we just set $f(x) = g(x)$:

$$ 0.4x + 1 = 0.16x^2 - 1.6x + 4 $$

and a quick rearrangement gives:

$$ 0.16x^2 - 2x + 3 = 0 $$

To get the two solutions to this, use the quadratic formula, and you find the curves cross at $x \approx 1.743$ and $x \approx 10.757$.

As you say, the two particles have the same velocity when the gradients are the same, i.e. $f'(x) = g'(x)$. Differentiating our expressions for $f(x)$ and $g(x)$ gives:

$$ f'(x) = 0.4 $$

$$ g'(x) = 0.32x - 1.6 $$

Set these equal to find the point where the slopes are equal:

$$ 0.4 = 0.32x - 1.6 $$

$$ x = 6.25 $$

Incidentally, I'm a bit concerned by your calculation of the average velocity. Unless there's a bit to the question you haven't posted, the average velocity of A is the distance moved (4 metres) divided by the time taken (10 secs), so the average velocity is 0.4 m/sec, not 1.118.
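If you want to check this algebra by machine, here is a short sketch using the sympy library (assuming it is installed; the coefficients are the ones read off the graph above):

    import sympy as sp

    x = sp.symbols('x')
    f = sp.Rational(2, 5)*x + 1            # f(x) = 0.4x + 1
    g = sp.Rational(4, 25)*(x - 5)**2      # g(x) = 0.16(x - 5)^2

    crossings = sp.solve(sp.Eq(f, g), x)   # exact roots of 0.16x^2 - 2x + 3 = 0
    print([c.evalf(4) for c in crossings]) # [1.743, 10.76]
    print(sp.solve(sp.Eq(f.diff(x), g.diff(x)), x))  # [25/4], i.e. x = 6.25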
The important factor is not the absolute speed of the atoms but their speed relative to each other. I would guess the article on the rubidium atoms is related to making a Bose-Einstein condensate: slow relative speed is needed, otherwise your cloud of atoms just disperses instead of forming a condensate. Increasing the speed of an isolated atom makes absolutely no difference to it, but if you increase the relative speed of atoms in some assemblage, the atoms will collide with each other at that speed. As you increase the relative speed from the low levels of the rubidium atoms, the collision energy will grow to the point where it ionises the atoms, so the atoms form a plasma. If you carry on increasing the speed up to near light speed, the collisions between the nuclei will be so violent that they completely destroy the atomic nuclei and break them into showers of hadrons. We can be confident about what happens in near-light-speed collisions, because this is exactly what the RHIC does. The LHC has also done heavy ion collisions in between finding the Higgs boson. It's worth noting that tokamak fusion reactors work by raising deuterium and tritium ions to very high relative speeds so that the collisions cause the nuclei to fuse. The corresponding temperature is about 100 million degrees, though from a quick Google I couldn't find the velocities of the ions (a rough estimate is sketched below). You don't get fusion in the RHIC/LHC experiments because the energies are so high that the nuclei are completely blown apart.
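For a back-of-the-envelope figure to fill that gap, you can convert temperature to a typical thermal speed via $v_{rms} = \sqrt{3kT/m}$. A minimal Python sketch (constants hard-coded; it treats the ions as an ideal gas, which is only a rough approximation for a fusion plasma):

    import math

    k = 1.380649e-23      # Boltzmann constant, J/K
    m_d = 3.344e-27       # mass of a deuteron, kg
    T = 1e8               # ~100 million kelvin

    v = math.sqrt(3 * k * T / m_d)
    print(f"{v:.3g} m/s")             # ~1.1e6 m/s, about 1,100 km/s
    print(f"{v / 2.998e8:.2%} of c")  # ~0.37% of light speed

So tokamak ions move at roughly a thousand kilometres per second, far below the near-light speeds of RHIC and LHC collisions, consistent with the answer above.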
When ships cross bodies of water, they leave behind visible "tracks" of pollution. NASA has been using satellite imagery to collect data on ship tracks, and the results are mildly disturbing. The image above shows only nitrogen dioxide (NO2) emissions, and is a composite of data collected by the Ozone Monitoring Instrument on NASA's Aura satellite from 2005 through 2012. Nitrogen dioxide causes respiratory problems in humans in addition to creating ground-level ozone and fine particle pollution, and scientists are collecting data to see just how much shipping contributes to global NOx emissions. Right now, estimates are that shipping is responsible for between 15 and 30 percent, with the rest coming from sources as diverse as agricultural burning, oil drilling and even lightning. In the image above, ship tracks can be seen in dark red. They're concentrated around the most heavily trafficked and congested shipping routes, with the most prominent in the Indian Ocean between Singapore and Sri Lanka. Other visible tracks exist in the Mediterranean Sea, the Gulf of Aden and the Red Sea. If you see areas without ship tracks, it isn't necessarily because pollution is absent. In fact, it's often quite the opposite: ship tracks along the coasts of Europe, North America and China are obscured by existing pollution from offshore drilling and coastal cities. (via Satellite Image Shows Tracks of Shipping Pollution | Autopia | Wired.com)

Inflatable dwelling for astronauts to be tested on International Space Station

Prototype habitat, which is just a third of the weight of a traditional capsule, to be roadtested in orbit in 2015.

A low-cost space dwelling that inflates like a balloon in orbit will be tested aboard the International Space Station, opening the door for future free-flying outposts and deep-space astronaut habitats for Nasa. The Bigelow Expandable Activity Module, nicknamed Beam, will be the third orbital prototype developed and flown by privately owned Bigelow Aerospace. The Las Vegas-based company, founded in 1999 by Robert Bigelow, owner of the Budget Suites of America hotel chain, currently operates two small unmanned experimental habitats called Genesis 1, launched in 2006, and Genesis 2, which followed a year later. Beam, about four metres in diameter when inflated, is scheduled for launch in mid-2015, said Mike Gold, director of operations for Bigelow Aerospace. "It will be the first expandable habitat module ever constructed for human occupancy," Gold said. A successful test flight on the space station would be a stepping stone for planned Bigelow-staffed orbiting outposts that the company plans to lease to research organisations, businesses and wealthy individuals wishing to holiday in orbit. Bigelow Aerospace has invested about $250m (£156m) in inflatable habitation modules so far. It has preliminary agreements with seven non-US space and research agencies in the UK, the Netherlands, Australia, Singapore, Japan, Sweden and the UAE. (via Inflatable dwelling for astronauts to be tested on International Space Station | Science | guardian.co.uk)

How many planets are in our galaxy? Billions and billions of them at least.
That's the conclusion of a new study by astronomers at the California Institute of Technology, which provides yet more evidence that planetary systems are the cosmic norm. The team made their estimate while analyzing planets orbiting a star called Kepler-32 — planets that are representative, they say, of the vast majority of planets in our galaxy and thus serve as a perfect case study for understanding how most of these worlds form. "There are at least 100 billion planets in the galaxy, just our galaxy," says John Johnson, assistant professor of planetary astronomy at Caltech and coauthor of the study, which was recently accepted for publication in the Astrophysical Journal. "That's mind-boggling." "It's a staggering number, if you think about it," adds Jonathan Swift, a postdoctoral student at Caltech and lead author of the paper. "Basically, there's one of these planets per star."

M-dwarf study

Like the Caltech group, other teams of astronomers have estimated that there is roughly one planet per star, but this is the first time researchers have made such an estimate by studying M-dwarf systems, the most numerous population of planets known. The planetary system in question, which was detected by NASA's Kepler space telescope, contains five planets. Two of the planets orbiting Kepler-32 had previously been discovered by other astronomers. The Caltech team confirmed the remaining three, then analyzed the five-planet system and compared it to other systems found by Kepler. M-dwarf systems like Kepler-32's are quite different from our own solar system. For one, M dwarfs are cooler and much smaller than the sun. Kepler-32, for example, has half the mass of the sun and half its radius. The radii of its five planets range from 0.8 to 2.7 times that of Earth, and those planets orbit extremely close to their star. The whole Kepler-32 system fits within just over a tenth of an astronomical unit (the average distance between Earth and the sun) — a distance that is about a third of the radius of Mercury's orbit around the sun. The fact that M-dwarf systems vastly outnumber other kinds of systems carries a profound implication, according to Johnson, which is that our solar system is extremely rare. "It's just a weirdo," he says. (via Billions and billions of planets | KurzweilAI)

A study published in the journal PLOS ONE shows for the first time that exposure to radiation levels equivalent to a mission to Mars could produce cognitive problems and speed up changes in the brain that are associated with Alzheimer's disease. While space is full of radiation,…

Happy New Year! December 26, 2012

Happy New Year from the Cassini mission! Credit: NASA/JPL-Caltech (via Cassini Solstice Mission: Happy New Year!)

After two decades of satellite observations, an international team of experts brought together by ESA and NASA has produced the most accurate assessment of ice losses from Antarctica and Greenland to date. This study finds that the combined rate of ice sheet melting…

NASA reports that a community of bacteria has been uncovered from some 65ft below the icy surface of Lake Vida, the largest of several unique lakes found in the McMurdo Dry Valleys. Why is this so exciting? Lake Vida contains no oxygen, is mostly frozen and possesses the…

A few months ago, physicist Harold White stunned the aeronautics world when he announced that he and his team at NASA had begun work on the development of a faster-than-light warp drive.
His proposed design, an ingenious re-imagining of an Alcubierre Drive, may eventually result in an engine that can transport a spacecraft to the nearest star in a matter of weeks — and all without violating Einstein’s law of relativity. We contacted White at NASA and asked him to explain how this real life warp drive could actually work. The above image of a Vulcan command ship features a warp engine similar to an Alcubierre Drive. Image courtesy CBS. The Alcubierre Drive The idea came to White while he was considering a rather remarkable equation formulated by physicist Miguel Alcubierre. In his 1994 paper, “The Warp Drive: Hyper-Fast Travel Within General Relativity,” Alcubierre suggested a mechanism by which space-time could be “warped” both in front of and behind a spacecraft. Red dust swirls from the surface, blue sea salt is tossed within cyclones, green smoke rises from fires, and white sulphate particles stream from volcanoes and fossil fuel emissions. It’s just another day in the life of our planet, as pictured by NASA’s Discover supercomputer. This simulation shows the various types of aerosols - particles and liquid droplets - suspended in the Earth’s atmosphere. It was created using the Goddard Earth Observing System Model, a global atmospheric simulation designed at the Goddard Space Flight Center in Greenbelt, Maryland. The model aims to study the climate system. The Discover computer built this stunning high-res image at a 10-kilometre resolution, but it’s capable of even more detail - as fine as 3.5-kilometre resolution, the highest available for a global climate model. (via Short Sharp Science: Supercomputer portrait reveals Earth’s swirling veil) Parts for the rocket engines of NASA’s Space Launch System will be created using a method of 3D-printing known as selective laser melting. The space agency is taking advantage of new technology to help improve safety and save money as it builds the SLS — a heavy-lift launch vehicle intended to facilitate long-duration deep space exploration, including trips to near-Earth asteroids and, ultimately, to Mars. “It’s the latest in direct metal 3D printing — we call it additive manufacturing now,” says Ken Cooper, leader of the Advanced Manufacturing Team at the Marshall Centre. “It takes fine layers of metal powder and welds those together with a laser beam to fuse a three-dimensional object from a computer file.” Although not all of the rocket parts can be generated using the current SLM process, it can be used to improve the overall safety of the system by creating the geometrically complex pieces which would normally require a lot of welding. (via 3D-Printed Rockets Help Propel NASA’s Space Launch System | Wired Design | Wired.com)
<urn:uuid:bc28b9e7-b81d-4497-bd81-a1fd986706ef>
2.84375
2,055
Content Listing
Science & Tech.
37.506162
II.A.1 Part Two - Compiling and Graphing the Data During this portion of the activity, you and your class will compile all the ant lion data and plot the results on a graph. Each student will need: - Ant Lion Class Data Worksheet - One piece of graph paper Instructions for the teacher: - Copy the Class Data Worksheet on the chalkboard or overhead projector. - Collect data from your student teams column by column as they appear on the Class Data Worksheet. Record these on the board or overhead projector. (You may need a team spokesperson.) Not all teams will have information for each column. Students should fill in their Class Data Worksheets along with you as each team calls out its information. - Once all the data have been recorded on the Data Worksheets, guide students through the steps below to make a graph of the data. The graph you make will address the question: Is there a relationship between ant lion size and pit size? (A small computer plotting sketch follows this activity for teachers who prefer to graph the class data digitally.) - Draw an outline of the graph. (The x-axis, or bottom, horizontal line, should be approximately 16 squares long. The y-axis, or vertical line, should be approximately 16 squares high.) - Label the x-axis "ant lion size." - Label the y-axis "ant lion pit size." - Put a zero at the point where the y-axis and x-axis come together. - On the y-axis, make a mark every 5mm, starting at the zero. Label these marks .5cm, 1cm, 1.5cm, 2cm, 2.5cm, 3cm, 3.5cm, 4cm, 4.5cm and so on until the largest ant lion pit is accounted for. - On the x-axis, make a mark every 2mm, starting with the zero. Label these 2mm, 4mm, 6mm, 8mm, and so on until the largest ant lion is accounted for. - Now plot the data from the Data Worksheet onto your graph. Start with information collected by team #1. Find team #1's ant lion pit size on the y-axis. Lightly draw a horizontal line to the right. - Next find team #1's ant lion size on the x-axis. Draw a vertical line up from that point until it intersects with your first line. Make an easy-to-see dot where the lines intersect. Continue doing this until the data from each team are included in your graph. - To interpret your graph, use the examples below. - If the dots on the graph form an almost straight line or are scattered along an almost straight line as seen in Example A, there is in fact a direct relationship between ant lion size and ant lion pit size. This means that pit size is fairly predictable. From looking at an ant lion, you would know approximately how large a pit it would make. Or, by looking at the pit alone, you could make a good guess about how large the ant lion was that made it. - If the dots are unorganized as in Example B, then no relationship exists between ant lion size and ant lion pit size. This would mean that a big ant lion could make a small pit or a small ant lion could make a big pit, etc. After completing this activity, students should: - Understand predator/prey relationships. - Be able to give examples of adaptations sand-dwelling animals have for digging. - Understand the concept of an ecological niche. - Be able to carefully observe and measure. - Be able to participate in discussion and learn cooperatively. Further Questions and Activities for Motivated Students - Collect as many different kinds of small insects as you can find to feed your captive ant lion. Keep a data sheet to record the time and date, how many and what kind of insects you fed your ant lion, and what was captured and eaten. Continue this for at least one week. Try to answer the following questions: - What is the average number of insects that the ant lion eats in one day?
- What kind of insect does the ant lion prefer? - What is the average distance the ant lion can throw the discarded body of its prey? Be sure to properly care for your captive ant lion. Design an experiment to test whether ant lions prefer wet or dry sand.
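For classes that also want to draw the graph on a computer, here is the minimal plotting sketch mentioned above (Python with matplotlib; the team measurements below are made-up placeholder numbers, not real class data - substitute the values from your own Class Data Worksheet):

# Hypothetical class data: one (ant lion size in mm, pit size in cm) pair per team.
import matplotlib.pyplot as plt

team_data = [(4, 1.0), (6, 1.5), (7, 1.4), (9, 2.2), (11, 2.6), (12, 3.1)]

sizes = [point[0] for point in team_data]  # x-axis: ant lion size (mm)
pits = [point[1] for point in team_data]   # y-axis: ant lion pit size (cm)

plt.scatter(sizes, pits)                   # one easy-to-see dot per team
plt.xlabel("ant lion size (mm)")
plt.ylabel("ant lion pit size (cm)")
plt.title("Is there a relationship between ant lion size and pit size?")
plt.show()

The reading of the plot is the same as for the paper version: dots that fall roughly along a straight line (Example A) indicate a direct relationship, while a shapeless scatter (Example B) indicates none.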
<urn:uuid:2b322655-426e-418f-a8c7-b1392e5d4457>
4.09375
936
Tutorial
Science & Tech.
70.189087
July 2012: To pronounce all the objects and terms in our hobby, we need to be multilingual. May 29, 2012 A middle school science class is learning about the solar system. "The seventh planet from the Sun," the teacher states, "is Yur-AIN-is." Jimmy, the class clown, seizes the opportunity and asks, "Have you ever seen your anus with a telescope?" A smattering of giggles ensues. From achondrite to zodiacal light, astronomy is a veritable minefield of how-do-you-say-it words. We trip over the simple two-letter name of Jupiter's moon Io (pronounced EYE-oh or occasionally EE-oh) and fall flat on our faces as we struggle with a mile-long star name like Zubenelgenubi (zoo-BEN-el-je-NEW-bee). Is the plural of nebula NEB-yoo-lee, NEB-yoo-lie, or NEB-yoo-lay? Is M13 a GLOBE-yoo-ler cluster or a GLOB-yoo-ler cluster? Speaking proper "Astronomese" is akin to struggling with a new language, and with good reason. It is a foreign language!
<urn:uuid:0efe669a-2ce8-4b7d-a243-820120ad129f>
2.75
474
Truncated
Science & Tech.
53.964381
Co-Adaptation and Survival Access to nutritional resources, appropriate microhabitats, and suitable oviposition sites would have been facilitated by the evolution of flight (Dudley 2000). One benefit would have been the many more occupiable niches and the new-found access to plants. (Permission from Kingsolver) Figure 11 illustrates how both temperature and lift are contingent upon wing length and body size. Selective pressures would naturally favor the optimal features of that time period. Short wings can have large thermoregulatory effects, especially in small insects. Because of this, increasing wing length significantly increases temperature excess for wing lengths up to 20-40% of body length (Kingsolver 1985). The wings serve to increase the surface area of absorption. Wings of high thermal conductivity both greatly increase the effective surface area and increase the heat transfer (Kingsolver 1985). This is beneficial because it reduces the constraints that environmental temperature places on insect behavior. (Freely Licensed by Wikimedia) Kingsolver and Koehl note that the conductivity of heat in the wings of present-day insects is small. Fossil records suggest that the wings of early insects were thicker and more heavily veined, and that they contained more hemolymph than do wings of present-day insects (Kingsolver 1985). Assuming this was the case, it would have increased the conductance of heat through the wings. This may have been a necessary mechanism based on the environment at the time. Regardless, we see how insects have evolved over time in response to environmental pressures.
<urn:uuid:38904048-8eda-4347-8ed2-9b220757f39c>
4.28125
386
Academic Writing
Science & Tech.
21.262857
Current Biology, Volume 15, Issue 6, 29 March 2005, Pages 549-554 Cynthia E. Kicklighter, Shkelzen Shabani, Paul M. Johnson and Charles D. Derby Numerous studies have demonstrated that chemical defenses protect prey from predation [1,2,3,4,5,6,7] and have often assumed that these defenses function by repelling predators. Surprisingly, few have investigated the mechanisms whereby predators are affected by these defenses [8, 9]. Here, we examine mechanisms of chemical defense of sea hares (Aplysia californica), which, when attacked by spiny lobsters (Panulirus interruptus), release defensive secretions from ink and opaline glands [10, 11]. We show that ink-opaline facilitates the escape of sea hares by acting through a combination of novel and conventional mechanisms. Ink-opaline contains millimolar quantities of amino acids that stimulate chemoreceptor neurons in the spiny lobster's nervous system. Ink stimulates appetitive and ingestive behavior, opaline can elicit appetitive behavior but can also inhibit ingestion and evoke escape responses, and both stimulate grooming. These results suggest that these secretions function by "phagomimicry," in which ink-opaline stimulates the feeding pathway to deceive spiny lobsters into attending to a false food stimulus, and by sensory disruption, in which the sticky and potent secretions cause high-amplitude, long-lasting chemo-mechanosensory stimulation. In addition, opaline contains a chemical deterrent that opposes appetitive effects. Thus, chemical defenses may act in more complex manners than palatability assays of prey chemistry may suggest. Summary | Full Text | PDF (349 kb) Trends in Biotechnology, Volume 20, Issue 7, 1 July 2002, Pages 276-277 Kristina S Mead Robots are needed to locate the sources of toxic chemical plumes. Lobsters, which track odor plumes for many ecologically crucial activities, can provide inspiration for robot designers. Before accurate search strategies for robots can be developed, how odor molecules are captured by the lobster's chemosensors must be understood. A recent study by Koehl et al. shows how lobster olfactory antennules alter the patterns of concentration in turbulent odor plumes during odor sampling. Abstract | Full Text | PDF (36 kb) Copyright © 1971 The Biophysical Society All rights reserved. Biophysical Journal, Volume 11, Issue 2, 211-234, 1 February 1971 Denis J.M. Poussart Random fluctuations in the steady-state current of neural membrane were measured in the giant lobster axon by means of a low noise voltage-clamp system. The power density spectrum S(f) of the fluctuations was evaluated between 20 and 5120 Hz and found to be of the type 1/f. Mean values of the potassium, sodium, and leakage currents ĪK, ĪNa, and ĪL were also measured by usual voltage-clamp techniques. Comparisons between these two types of data recorded under a number of different experimental conditions, such as presence of tetrodotoxin (TTX), substitution of calcium by lanthanum, and changes in the external concentration of potassium, have strongly suggested that the intensity of the fluctuations is related to the magnitude of ĪK.
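A side note on the noise-analysis method in the last abstract: a 1/f character is usually established by estimating the power density spectrum of the recorded current and checking its slope on log-log axes. Here is a minimal sketch of that procedure (Python with NumPy and SciPy; the signal is synthetic 1/f noise standing in for membrane current, and the 20-5120 Hz band is taken from the abstract):

import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs, n = 20480.0, 2**18                  # sampling rate (Hz) and sample count

# Synthesize pink (1/f) noise by shaping white noise in the frequency domain.
spectrum = np.fft.rfft(rng.normal(size=n))
freqs = np.fft.rfftfreq(n, d=1/fs)
spectrum[1:] /= np.sqrt(freqs[1:])      # amplitude ~ f^(-1/2), so power ~ 1/f
signal = np.fft.irfft(spectrum, n)

# Estimate the power density spectrum S(f), as in a voltage-clamp noise study.
f, psd = welch(signal, fs=fs, nperseg=4096)

# Fit the log-log slope over the 20-5120 Hz band; 1/f noise gives roughly -1.
band = (f >= 20) & (f <= 5120)
slope = np.polyfit(np.log10(f[band]), np.log10(psd[band]), 1)[0]
print(f"fitted spectral slope: {slope:.2f} (about -1 for 1/f noise)")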
<urn:uuid:1133df47-bbc9-46a0-94ab-61fcfa4907dd>
2.703125
712
Academic Writing
Science & Tech.
29.546556
Contact: Jason Bardi American Institute of Physics Caption: Pressure contours show the effect of a front-facing blast at various times after detonating 1.5 kg of C4 explosives from a distance of three meters. Black represents 1.0 atmosphere of pressure, and red indicates pressures over 3.5 atmospheres. Credit: NRL's Laboratory for Computational Physics and Fluid Dynamics Usage Restrictions: none Related news release: The physics of explosives and blast helmets
<urn:uuid:0476f402-b24e-4e08-9b44-31869036a594>
2.765625
99
Truncated
Science & Tech.
37.881753
ANSI Common Lisp 21 Streams 21.2 Dictionary of Streams read-line &optional input-stream eof-error-p eof-value recursive-p => line, missing-newline-p - Arguments and Values: input-stream - an input stream designator. The default is standard input. eof-error-p - a generalized boolean. The default is true. eof-value - an object. The default is nil. recursive-p - a generalized boolean. The default is false. line - a string or the eof-value. missing-newline-p - a generalized boolean. Reads from input-stream a line of text that is terminated by a newline or end of file. If recursive-p is true, this call is expected to be embedded in a higher-level call to read or a similar function used by the Lisp reader. The primary value, line, is the line that is read, represented as a string (without the trailing newline, if any). If eof-error-p is false and the end of file for input-stream is reached before any characters are read, eof-value is returned as the line. The secondary value, missing-newline-p, is a generalized boolean that is false if the line was terminated by a newline, or true if the line was terminated by the end of file for input-stream (or if the line is the eof-value). - Examples:
(setq a "line 1
 line2")
(read-line (setq input-stream (make-string-input-stream a)))
=> "line 1", false
(read-line input-stream)
=> " line2", true
(read-line input-stream nil nil)
=> NIL, true
- Affected By: *standard-input*, *terminal-io*. - Exceptional Situations: If an end of file occurs before any characters are read in the line, an error is signaled if eof-error-p is true. - See Also: The corresponding output function is write-line. - Allegro CL Implementation Details:
<urn:uuid:057a4f4e-0004-40d9-b2a5-7c151be8f618>
3.34375
435
Documentation
Software Dev.
57.628188
They're all estimations; there's obviously no way to know exactly how many stars there are in the entire Universe, as its dimension still remains unknown. http://www.huffingto...e_n_790563.html When scientists previously estimated the total number of stars, they assumed that all galaxies had the same ratio of dwarf stars as the Milky Way, which is spiral-shaped. Much of our understanding of the universe is based on observations made inside our own galaxy and then extrapolated to other galaxies. But about one-third of the galaxies in the universe are elliptical, not spiral, and van Dokkum found they aren't really made up the same way as ours. Using the Keck telescope in Hawaii, van Dokkum and a colleague gazed into eight distant, elliptical galaxies and looked at their hard-to-differentiate light signatures. The scientists calculated that elliptical galaxies have more red dwarf stars than predicted. A lot more. Generally scientists believe there are 100 billion to a trillion galaxies in the universe. And each galaxy – the Milky Way included – was thought to have 100 billion to a trillion stars. Sagan, the Cornell University scientist and best-selling author who was often impersonated by comedians as saying "billions and billions," usually said there were 100 billion galaxies, each with 100 billion stars. Van Dokkum's work takes these numbers and adjusts them. That's because some of those galaxies – the elliptical ones, which account for about a third of all galaxies – have as many as 1 trillion to 10 trillion stars, not a measly 100 billion. When van Dokkum and Conroy crunched the incredibly big numbers, they found that it tripled the estimate of stars in the universe from 100 sextillion to 300 sextillion. Van Dokkum's paper challenges the assumption of "a more orderly universe" and gives credence to "the idea that the universe is more complicated than we think," Ellis said. "It's a little alarmist."
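The arithmetic behind that tripling is simple enough to replay. A minimal sketch (Python; the galaxy counts are the article's round numbers, and the boost factor of 7 is chosen to show what it takes to triple the total, not a figure from the paper itself):

# Old estimate: every galaxy counted as if it held the same stars as a spiral.
GALAXIES = 100e9
STARS_PER_SPIRAL = 1e12   # upper-end old figure, giving ~100 sextillion total
ELLIPTICAL_BOOST = 7      # hypothetical multiplier for elliptical galaxies

old_total = GALAXIES * STARS_PER_SPIRAL
new_total = ((2 / 3) * GALAXIES * STARS_PER_SPIRAL
             + (1 / 3) * GALAXIES * STARS_PER_SPIRAL * ELLIPTICAL_BOOST)

print(f"old estimate: {old_total:.0e} stars")  # 1e+23 = 100 sextillion
print(f"new estimate: {new_total:.0e} stars")  # 3e+23 = 300 sextillion
print(f"ratio: {new_total / old_total:.1f}")   # 3.0

In other words, if a third of all galaxies hold about seven times as many stars as previously assumed (comfortably inside the quoted 1-to-10-trillion range), the universal total triples: exactly the 100-to-300-sextillion jump reported.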
<urn:uuid:dcb05742-1145-4e25-811b-4c17be85b8c4>
3.5625
419
Comment Section
Science & Tech.
49.006217
MODERN agriculture is set to become as bad for the planet's health as global warming, a team of leading environmental scientists has warned. They list rainforest destruction, nitrogen pollution and the spread of diseases such as foot and mouth and BSE among the growing threats from agriculture. "The environmental effects of agriculture are on a trajectory soon to rival those of climate change," says David Tilman, an ecologist at the University of Minnesota. Tilman and nine other ecologists in the US forecast that over the next 50 years growing populations and an increasing demand for meat will mean the world needs new farmland covering an area the size of the US. Meeting this demand will probably see the destruction of "the vast majority of the rainforests and savannah grasslands in Latin America and central Africa, which harbour a large portion of the Earth's biological diversity", Tilman told New Scientist.
<urn:uuid:3a7726a2-5fd2-4127-98d2-d47216fe8e3c>
2.953125
206
Truncated
Science & Tech.
33.618947
Discovery of New Mineral, Krotite, in a CAI The first natural occurrence of a low-pressure CaAl2O4 mineral has been found in a refractory inclusion in a carbonaceous chondrite meteorite, as reported in the May/June 2011 issue of American Mineralogist. The discovery was led by Harold C. Connolly, Jr. of Kingsborough Community College of CUNY, the American Museum of Natural History in New York, and University of Arizona. While synthetic low-pressure and high-pressure CaAl2O4 phases are well known in the field of materials science, only the high-pressure polymorph had been identified previously in nature (in another chondrite). Quantitative elemental microanalysis of the new mineral using the electron microprobe yielded an empirical formula (based on four oxygens) with calcium at 1.02 atoms per formula unit, essentially the ideal composition CaAl2O4. It is now officially approved by the Commission on New Minerals, Nomenclature, and Classification of the International Mineralogical Association as krotite. The mineral's name honors Dr. Alexander N. Krot, a University of Hawaii researcher known for his achievements in meteoritics, especially for studies of the formation of calcium-aluminum-rich inclusions (CAIs) and chondrules, and his significant contributions to the understanding of early Solar System processes. Dr. Krot is also a 2004 recipient of the University of Hawaii Regents' Medal for excellence in research and a friend of PSRD. Krotite is the dominant mineral in the central and mantle areas of an unusual CAI in the NWA 1934 carbonaceous chondrite [Data link from the Meteoritical Bulletin]. The 2.75 mm x 4.5 mm inclusion is composed mainly of aggregates of (colorless and transparent) krotite crystals, with a few other calcium-aluminum or magnesium-aluminum oxides and a few silicates in a thin rim of concentric layers. Cracks, mainly filled with iron and aluminum hydroxides, crosscut the CAI's rim and parts of its interior, giving it a "cracked egg" appearance. The CAI itself is surrounded by a matrix of mostly fine-grained olivine. The discovery team suggests formation in a highly refractory condensate/evaporative environment in the cooling nebular gas. They say that the primary mineral assemblage in this CAI was introduced into a hot gas, which did not melt the CAI but perhaps reacted with surficial krotite crystals to produce the observed layered rim. The cracks may have been caused by compaction during accretion of the parent body, but the meteorite as a whole is essentially unshocked. A final, probably terrestrial, alteration process most likely filled the cracks with hydrated oxides during exposure to the northwest African environment. This CAI, along with its new mineral, krotite, will be the subject of additional research to determine formation and cosmochemical details. Because CAIs were the first solids formed in the solar nebula about 4.6 billion years ago, they help cosmochemists piece together records of nebular and early Solar System processes and how the first solid building blocks eventually turned into asteroids and planets. Ma, C., Kampf, A. R., Connolly, Jr., H. C., Beckett, J. R., Rossman, G. R., Sweeney Smith, S. A., and Schrader, D. L. (2011) Krotite, CaAl2O4, A New Refractory Mineral from the NWA 1934 Meteorite, American Mineralogist, v. 96, p. 709-715. doi:10.2138/am.2011.3693. Ma, C., Kampf, A. R., Connolly, Jr., H. C., Beckett, J. R., Rossman, G. R., Sweeney Smith, S. A. and Schrader, D. L. (2010) Krotite, IMA 2010-038. CNMNC Newsletter, October 2010, page 901; Mineralogical Magazine, v. 74, p. 899-902.
[pdf link] Dating the Earliest Solids in our Solar System. Written by Linda Martel, Hawaii Institute of Geophysics and Planetology, for PSRD
<urn:uuid:8b8bb034-ef8c-4da9-bde7-c2bfcea5eba5>
2.90625
912
Knowledge Article
Science & Tech.
49.878389
Now, let's consider what happens when we are given one of the two outcomes of each event $(a,b)$, drawn consistently from the joint distribution $p(a,b)$, and we want to deduce the other. To be specific, let the pair $(a,b)$ be generated $N$ times, and let the value of B be seen each time. What is the asymptotic average log number of ways to see A given that B is seen each time? Well, let $N_b$ be the number of times that $b$ occurs, and for these occurrences of $b$, count the occurrences $N_{ab}$ of $a$. Define the vectors $(N_{a_1 b}, N_{a_2 b}, \ldots)$ for each $b$; then for a fixed value of B there are $N_b! / \prod_a N_{ab}!$ ways that the A values could be distributed for this $b$. Taking the product of these numbers of ways gives us the number of ways that the A values could be distributed given the B values. This is $\prod_b N_b! / \prod_{a,b} N_{ab}!$. Taking the logarithm and doing the asymptotics gives us $\log \prod_b \frac{N_b!}{\prod_a N_{ab}!} \approx -N \sum_{a,b} p(a,b) \log \frac{p(a,b)}{p(b)}$, where we have $N_{ab} \approx N\,p(a,b)$ and $N_b \approx N\,p(b)$. Averaging by dividing by $N$ gives us the entropy of A given B, or $S(A|B) = -\sum_{a,b} p(a,b) \log p(a|b)$. Note that $S(A|B) = S(A,B) - S(B)$. Similarly, $S(B|A) = S(A,B) - S(A)$. Note that we could have found the log number of ways that A could occur given $b$ as $N_b S(A|B=b)$, and then noted that asymptotically $\sum_b N_b S(A|B=b) \approx N \sum_b p(b) S(A|B=b)$ to average this and find the result above. After working through these examples, the interpretation of entropy as an uncertainty - an additive quantity representing the state of ignorance of the outcome - is straightforward. For example, if A is determined by B, then there is no uncertainty in A given B, immediately $S(A|B) = 0$; further there is no more uncertainty in the joint distribution than there is in the distribution of B, i.e. S(A,B) = S(B). Finally, note that the quantity $M(A,B) = S(A) - S(A|B)$ gives the uncertainty change between not knowing B and knowing B, and is called the mutual information. It is symmetric in its arguments, and can be written as $M(A,B) = S(A) + S(B) - S(A,B)$. The mutual information is clearly a quantity that for two random variables can be labeled the information about one variable that is in the other, and vice-versa. It is the information that each random variable shares about the other. In section 3.12 higher order information functions of this nature are defined, the information correlation functions, and these can be interpreted as the information between a set of random variables. There are several other information functions that are of interest. We may define the redundancy of one random variable in another as the mutual information of the two. We might also define the normalized redundancy of two random variables as the mutual information divided by the joint information (entropy), M(A,B)/S(A,B). This is a quantity that has value zero only for independent processes, and has value one when one process completely determines the other. For two or more random variables the redundancy has been defined as the sum of the single entropies minus the joint entropy, $\sum_i S(A_i) - S(A_1, \ldots, A_n)$ [66, 93]. This redundancy is distinctly different from that of the information correlation functions to be defined in section 3.12. When there are only two processes this is the mutual information. A measure of correlation has been defined as 1-S(B|A)/S(A). Note that this is asymmetric in the processes. It is 0 when the entropy of B given A is equal to the entropy of A, which for identically distributed variables occurs only when they are independent. A symmetric function with similar properties is 2(1-S(A,B)/(S(A)+S(B)))=2M(A,B)/(S(A)+S(B)).
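These identities are easy to verify numerically. A minimal sketch (Python with NumPy; the joint distribution below is an arbitrary made-up example, and natural logarithms are used, so the entropies come out in nats):

import numpy as np

# A made-up joint distribution p(a, b): rows index A, columns index B.
p = np.array([[0.30, 0.10],
              [0.05, 0.25],
              [0.05, 0.25]])
assert np.isclose(p.sum(), 1.0)

def entropy(dist):
    # Shannon entropy in nats, skipping zero-probability cells.
    q = dist[dist > 0]
    return -np.sum(q * np.log(q))

pa = p.sum(axis=1)        # marginal distribution of A
pb = p.sum(axis=0)        # marginal distribution of B

S_A, S_B, S_AB = entropy(pa), entropy(pb), entropy(p)
S_A_given_B = S_AB - S_B  # conditional entropy S(A|B)
M = S_A + S_B - S_AB      # mutual information M(A,B)

print(f"S(A|B) = {S_A_given_B:.4f} nats")
print(f"M(A,B) = {M:.4f} nats")
# Both routes to the mutual information agree, as the text asserts:
assert np.isclose(M, S_A - S_A_given_B)

Replacing p with the product of its marginals drives M(A,B) to zero, matching the statement that the normalized redundancy vanishes only for independent processes.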
<urn:uuid:45ec52a1-0d43-4875-a99a-ea709e96d206>
3.765625
728
Academic Writing
Science & Tech.
51.238479
In fact, of course, the photon number does not increase without limit as atoms keep crossing the resonator. Because the walls are not perfect reflectors, the more photons there are, the greater becomes the chance that one of them will be absorbed. Eventually this loss catches up to the gain caused by atomic injection. About 100,000 atoms per second can pass through a typical micromaser (each remaining perhaps 10 microseconds); meanwhile the photon lifetime within the cavity is typically about 10 milliseconds. Consequently, such a device running in steady state contains about 1,000 microwave photons. Each of them carries an energy of about 0.0001 electron volt; thus, the total radiation stored in the cavity does not exceed one tenth of one electron volt. This amount is much smaller than the electronic excitation energy stored in a single Rydberg atom, which is on the order of four electron volts. Although it would be difficult to measure such a tiny field directly, the atoms passing through the resonator provide a very simple, elegant way to monitor the maser. The transition rate from one Rydberg state to the other depends on the photon number in the cavity, and experimenters need only measure the fraction of atoms leaving the maser in each state. The populations of the two levels can be determined by ionizing the atoms in two small detectors, each consisting of plates with an electric field across them. The first detector operates at a low field to ionize atoms in the higher-energy state; the second operates at a slightly higher field to ionize atoms in the lower-lying state (those that have left a photon behind in the cavity). With its tiny radiation output and its drastic operational requirements, the micromaser is certainly not a machine that could be taken off a shelf and switched on at the push of a button. It is nevertheless an ideal system to illustrate and test some of the principles of quantum physics. The buildup of photons in the cavity, for example, is a probabilistic quantum phenomenon -- each atom in effect rolls a die to determine whether it will emit a photon -- and measurements of micromaser operation match theoretical predictions. An intriguing variation of the micromaser is the two-photon maser source. Such a device was operated for the first time five years ago by our group at ENS. Atoms pass through a cavity tuned to half the frequency of a transition between two Rydberg levels. Under the influence of the cavity radiation, each atom is stimulated to emit a pair of identical photons, each bringing half the energy required for the atomic transition. The maser field builds up as a result of the emission of successive photon pairs. The presence of an intermediate energy level near the midpoint between the initial and the final levels of the transition helps the two-photon process along. Loosely speaking, an atom goes from its initial level to its final one via a "virtual" transition during which it jumps down to the middle level while emitting the first photon; it then jumps down again while emitting the second photon. The intermediate step is virtual because the energy of the emitted photons, whose frequency is set by the cavity, does not match the energy differences between the intermediate level and either of its neighbors. How can such a paradoxical situation exist?
The Heisenberg uncertainty principle permits the atom briefly to borrow enough energy to emit a photon whose energy exceeds the difference between the top level and the middle one, provided that this loan is paid back during the emission of the second photon. Like all such quantum transactions, the term of the energy loan is very short. Its maximum duration is inversely proportional to the amount of borrowed energy. For a mismatch of a few billionths of an electron volt, the loan typically lasts a few nanoseconds. Because larger loans are increasingly unlikely, the probability of the two-photon process is inversely proportional to this mismatch.
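A quick numerical check of the steady-state figures quoted earlier in this passage (Python; it assumes, as a simplification, that every transiting atom deposits exactly one photon in the cavity):

# Order-of-magnitude check of the micromaser numbers in the text.
atom_rate = 1e5           # atoms per second crossing the resonator
photon_lifetime = 10e-3   # seconds before a photon is absorbed by the walls
photon_energy_ev = 1e-4   # energy per microwave photon, in eV

# In steady state, stored photons ~ injection rate x photon lifetime.
n_photons = atom_rate * photon_lifetime
stored_energy_ev = n_photons * photon_energy_ev

print(f"steady-state photon number: {n_photons:.0f}")         # ~1,000 photons
print(f"stored radiation energy: {stored_energy_ev:.1f} eV")  # ~0.1 eV

Both numbers land on the figures in the text: about 1,000 photons, and a total stored energy of roughly a tenth of an electron volt.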
<urn:uuid:cef78f4a-6379-4f3c-a648-4b9c98b049f5>
3.9375
790
Academic Writing
Science & Tech.
34.959013
Observing the Sun - Our nearest Star Sitting a modest 93 million miles from us, and with light taking only minutes to reach us, our Sun gives us a great opportunity to see how a star really behaves. To most people looking out on a warm summer's day the Sun seems a stable and calm globe. When we start to look in a little more detail, this so-called stable globe is a mass of seething gas, forever changing in its appearance. The blemish-free surface is not as blemish-free as you would imagine. Many a feature can be observed on its surface. Sunspots in particular provide fascinating viewing. These cooler regions of the Sun appear dark as they are not as hot as the background surface. Detail can be seen in their shape and it is possible to watch them evolve over periods of days, as they rotate across the face of the Sun. Sometimes regions of white known as faculae can also be seen, and these can be very short-lived, so that you can see them change over shorter periods. A WARNING - ONLY OBSERVE THE SUN USING EITHER THE PROJECTION METHOD, OR BUY SPECIAL FULL APERTURE SOLAR FILTERS - DO NOT USE THE FILTERS OFTEN SUPPLIED WITH SMALL SCOPES WHICH FIT OVER THE EYEPIECE. IT COULD BE THE LAST THING YOU EVER SEE AS THE BACK OF YOUR EYE IS TURNED INTO A BLACK MASS OF CONGEALED DEAD TISSUE - GET THE PICTURE! Even regular solar observers can make mistakes. Make sure your filter is secured over the front of your scope; the wind can always blow the front filter off. Users of a Herschel Wedge can easily forget the polarising filters when changing eyepieces. All that aside, the Sun makes fascinating viewing. I use a Takahashi FS128 and FCT76 for most of my observing. I also use a Herschel Wedge for white light observation; this is not the safest means of observing, but the quality of the image is considerably better than that viewed via a full aperture solar filter IMHO. The manufacturer is Intes and I rate this model very highly. Pictured here is the 31mm Nagler in the Intes Herschel Wedge, and additionally in the photographic mode with a 2x Barlow lens. Above is the LX200 10" with a full aperture solar mask, and in addition the same instrument with an off-axis mask, this time containing an ERF for H-alpha observations. To see examples of my Solar observations click on the links below Solar Observing Page Observations of the Sun in White Light Observations of the Sun in H alpha Total Solar Eclipse Observations Transit of Mercury
<urn:uuid:84b6a656-4fef-479e-8dfc-9dd675d1c68e>
2.75
636
Personal Blog
Science & Tech.
57.855401
All the astronomy sites are buzzing over this amazing image of a sunspot: Click to embiggen. I don’t blame them… it’s gorgeous! And it’s not even real! It’s a computer-generated model of the magnetic field in a sunspot; near the center the field lines are mostly vertical and around the edges they are mostly horizontal. But to me, as a scientist as well as an appreciator of gee-whiz images, it’s this shot that tickles my brain: Click it to see it cromulently embiggened. I know, it doesn’t seem like quite as much to look at, but it is: it’s the first time a computer has modeled in detail the magnetic field line strengths of a pair of coupled sunspots vertically, in three dimensions, into and beneath the surface of the Sun! Wow. You have to understand, magnetic fields are the devil’s own work to model; they’re fiercely complicated. The equations are tough enough to solve, but the field lines interact with one another as the gas moves around, making this sort of modeling just as painfully hard as it could be. We understand quite a bit about sunspots in general, but in detail they are still a mystery; models like this will help us grasp those details. The resolution is incredible; the computer modeled points in the virtual Sun just 10 to 20 miles apart. That meant it needed to keep track of nearly two billion points. That image is amazing, and beautiful. The way they colored it, it looks like a slice beneath the Earth’s surface… but the width of that image is far larger than the Earth itself. As for the science, check out this animated sequence to see just how this simulation allows scientists to understand the movement of the gas in a sunspot, too. You can see the gas flowing outward from the center, and the convection inside the Sun driving parcels of gas up and down. And just as amazing is the computer itself that did this work: NCAR’s Bluefire, which can perform 76 trillion calculations per second. I’m glad they didn’t name it Skynet. This comes at a time when we’re starting to understand how streams of gas under the Sun’s surface relate to its overall sunspot cycle. All of this has been a huge mystery for centuries, and we now live in a time of accelerating understanding of the Sun. And, on top of all this, due for launch later this year is the Solar Dynamics Observatory, a highly sophisticated spacecraft that will study our nearest star in better detail than ever before. This is a great time to be a science geek. Revel in it. Oh, and to the Plasma/Electric Universe believers who always froth and foam about how "mainstream" scientists don’t understand magnetism and plasma: you’re looking increasingly marginalized, dudes. You might want to look into a new line of work, like UFOs, or 9/11 theories. Science makes progress while pseudoscience makes excuses… and your field ("field"! Oh man, I slay me!) is looking weaker every day.
<urn:uuid:9db9d2a8-e00f-4659-9ed4-2dd6f0717741>
3.6875
780
Personal Blog
Science & Tech.
56.552022
July 3, 2012 Last week, paleontologists at the Argentine Museum of Natural Science in Buenos Aires literally unveiled a new dinosaur. Named Bicentenaria argentina to celebrate the museum’s 200th anniversary and just over two centuries of Argentine independence, the dinosaur was presented in a dramatic mount in which two of the predatory dinosaurs face off against each other. As yet, there’s not very much to say about the dinosaur. The paper officially describing Bicentenaria has yet to be published. Based on various news reports, though, Bicentenaria appears to be a 90 million year old coelurosaur. This is the major group of theropod dinosaurs that contains tyrannosaurs, deinonychosaurs, therizinosaurs, and birds, among others, and Bicentenaria is reportedly an archaic member of this group that represents what the earliest coelurosaurs might have looked like. It wouldn’t be an ancestor of birds or other coelurosaur groups – by 90 million years ago, birds and other coelurosaurs had already been around for tens of millions of years – but Bicentenaria may have had a conservative body plan that preserved the form of the dinosaurs that set the stage for other coelurosaurs. For now, though, we’re left to admire the impressive skeletal mount until the paper comes out. May 25, 2012 Dinosaur skeletons are marvelous things. The reconstructed bones of Allosaurus, Stegosaurus, Styracosaurus, Barosaurus and the like are beautiful monuments of natural architecture. But what really makes the skeletons so fantastic is that we know they once cradled viscera and were wrapped in flesh. It’s impossible to look at a dinosaur’s skeleton and not wonder about how the animals looked and acted in life. How social dinosaurs were is one of the most persistent mysteries of their natural history. Rare trackways record the steps of dinosaurs that walked together, and bonebeds containing the bones of multiple individuals of a particular species have sometimes been taken as evidence that the dinosaurs must have been traveling together when they died. But the evidence is never straightforward. Sometimes multiple dinosaurs walked over the same patch of ground at different times, creating trackway slabs that record the independent activities of several dinosaurs rather than a coordinated herd. And just because dinosaurs were preserved together doesn’t necessarily mean that they composed a social group—natural disasters such as drought and flood, as well as transportation of carcasses by water, can create assemblages of animals that didn’t actually flock together in life. Great care is required in piecing together dinosaur lives. With this in mind, I was curious to read a paper by Leonardo Salgado and colleagues in the latest Journal of Vertebrate Paleontology about possible evidence for social sauropods from Cretaceous Patagonia. While searching for a previously discovered dinosaur quarry in Argentina, Salgado and collaborators stumbled across a small bonebed containing the jumbled remains of three sauropods. The deposit was formed over 100 million years ago. The largest dinosaur at the site—presumably an adult—was primarily represented by strings of articulated vertebrae arranged in the classic dinosaur death pose, while two smaller sauropod skeletons were scattered in other parts of the quarry. The dinosaurs are still undergoing study and don’t have a formal identity yet, but they appear to be rebbachisaurids, a group of sauropods that were distant cousins of the more familiar Diplodocus. 
The juvenile dinosaurs alone were a significant find—no one had identified juvenile rebbachisaurids before. But the association of those skeletons is the focus of the new paper. Evidence from trackways and bonebeds has hinted that different sauropods had distinct social structures. Some, such as Alamosaurus, seemed to group together in small herds as juveniles and either become solitary as they grew or form age-segregated adult herds. Other sauropods seemed to live in mixed-age herds, where juveniles remained with older individuals. In the case of the bonebed in Argentina, it would seem that juveniles and adults traveled together. But how do we know these dinosaurs really lived together? The skeletons are incomplete and mostly disarticulated—perhaps they were all washed up to the same spot and buried. Salgado and co-authors present a different interpretation. The bonebed doesn’t seem to be a trap or mire, and the paleontologists noted that the skeletons show “few signs of transport.” It would seem that the sauropods died all at once. The reason why is a mystery. While they frustratingly do not provide details about this scenario, the researchers speculate that “the death of the adult triggered the death of the two juvenile individuals.” The fact that the three dinosaurs were preserved in place, without evidence of transport, seems to be fair evidence that this species of sauropod was social. But even that hypothesis brings up a series of other questions. Did individuals stay with the herd from the time they were born? Was there any form of parental care after the babies left the nest? Did these dinosaurs really form large herds, or did the young simply stick with one of their parents? We still have a lot to learn about the lifestyles of the big and extinct. Myers, T., & Fiorillo, A. (2009). Evidence for gregarious behavior and age segregation in sauropod dinosaurs. Palaeogeography, Palaeoclimatology, Palaeoecology, 274 (1-2), 96-104. DOI: 10.1016/j.palaeo.2009.01.002 Salgado, L., Canudo, J., Garrido, A., & Carballido, J. (2012). Evidence of gregariousness in rebbachisaurids (Dinosauria, Sauropoda, Diplodocoidea) from the Early Cretaceous of Neuquén (Rayoso Formation), Patagonia, Argentina. Journal of Vertebrate Paleontology, 32 (3), 603-613. DOI: 10.1080/02724634.2012.661004 May 24, 2012 Some dinosaur lineages are more famous than others. I can say “tyrannosaur” and most anyone immediately knows what I’m talking about: a big-headed, small-armed predator similar to the notorious Tyrannosaurus rex. The same goes for “stegosaur,” and of course it helps that Stegosaurus itself is the famous emblem of this bizarre group. But public understanding hasn’t kept up with new discoveries. In the past two decades, paleontologists have identified various dinosaur lineages vastly different from the classic types that gained their fame during the Bone Wars era of the late 19th century. One of those relatively obscure groups is the abelisaurids: large theropod dinosaurs such as Carnotaurus with high, short skulls and ridiculously stubby arms that make T. rex look like Trogdor the Burninator. And paleontologists Diego Pol and Oliver Rauhut have just described an animal close to the beginning of this group of supreme predators—a dinosaur from the dawn of the abelisaurid reign. Pol and Rauhut named the dinosaur Eoabelisaurus mefi.
Discovered in roughly 170-million-year-old Jurassic rock near Chubut, Argentina, the mostly complete dinosaur skeleton is about 40 million years older than the next oldest abelisaurid skeleton. Eoabelisaurus, placed in context with other theropod dinosaurs of the same era, represents a time when predatory dinosaurs were undergoing a major radiation. Early members of many terrifying Cretaceous predators such as the tyrannosaurs and abelisaurids had already appeared by the Middle to Late Jurassic. Not all of these Jurassic predators looked quite like their later Cretaceous counterparts. Jurassic tyrannosaurs such as Juratyrant and Stokesosaurus were relatively small predators, unlike their bulky, titanic relatives from the Late Cretaceous. Eoabelisaurus was a little closer to what was to come. Despite being many tens of millions of years older than relatives such as Carnotaurus and Majungasaurus, the newly described dinosaur displays some tell-tale features that characterize the group. While a significant portion of the dinosaur’s skull is missing, the head of Eoabelisaurus had the short, deep profile seen among other abelisaurids. And this dinosaur already had distinct forelimbs. Much like its later relatives, Eoabelisaurus had a strange combination of heavy shoulder blades but wimpy forelimbs, with a long upper arm compared to the lower part of the arm. The dinosaur’s condition was not as extreme as in Carnotaurus—a dinosaur whose lower forelimbs were so strange that we have no idea what, if anything, Carnotaurus was doing with its arms—but they were still comparatively small and tipped with little fingers good for wiggling but probably useless in capturing prey. And with a 40-million-year gap between Eoabelisaurus and its closest kin, there are plenty of other abelisaurids to find. The question is where they are. Is their record so poor that very few were preserved? Or are they waiting in relatively unexplored places? Now that the history of these blunt-skulled predators has been pushed back, paleontologists can target places to look for the carnivores. Pol, D., & Rauhut, O. (2012). A Middle Jurassic abelisaurid from Patagonia and the early diversification of theropod dinosaurs. Proceedings of the Royal Society B, 1-6. DOI: 10.1098/rspb.2012.0660 January 23, 2012 Imagine a dinosaur as massive as Apatosaurus sitting on a nest. It doesn’t really work, does it? We know without a doubt that these large sauropod dinosaurs laid eggs, but there is no conceivable way that the gargantuan dinosaurs could have sat on their grapefruit-sized eggs without crushing them all. There must have been some other way that the eggs could have been kept safe and warm enough to develop properly. One special site in Argentina suggests that some sauropods had a geological solution to the problem. Two years ago, paleontologists Lucas Fiorelli and Gerald Grellet-Tinner announced the discovery of a unique nesting site that sauropods returned to multiple times. During a stretch between 134 million and 110 million years ago, expectant mother sauropods came to this site to deposit clutches of up to 35 eggs within a few feet of geysers, vents and other geothermal features. This basin held naturally heated dinosaur nurseries. A new, in-press paper about the site by Fiorelli, Grellet-Tinner and colleagues Pablo Alasino and Eloisa Argañaraz reports additional details of this site. To date, more than 70 clutches of eggs have been found across an area spanning more than 3,200,000 square feet in a section of rock about four feet thick.
Rather than focusing on the habits of the dinosaurs, however, the new study fills out the geological context of the place as a possible explanation for why the dinosaurs came here. On the basis of geological features and minerals, the authors suggest that the site may have resembled the Norris Geyser Basin of present-day Yellowstone National Park. A series of underground pipes and tubes fed geysers, hot springs and mud pots scattered across an ancient terrain crossed by rivers. The fact that the egg clutches are consistently found near the heat-releasing features is taken by Fiorelli and co-authors as an indication that parent dinosaurs were seeking out these spots to lay their eggs. And this site isn’t the only one. Fiorelli and collaborators also point out that similar sauropod egg sites have been found in South Korea. Exactly what happened to preserve so many nests is not immediately clear, but the eggs were buried in sediments at least partly produced by the surrounding geothermal features. The eggs were eroded and thinned by the acidic nature of the entombing sediment. Some eggs were destroyed by these and other processes, but others held out and became preserved in place. Not all sauropod dinosaurs selected such sites for nests. Particular populations near geothermal features may have received a benefit from the natural heat, but how did other populations and species far removed from these hot spots lay and protect their nests? We still have much to learn about how baby sauropods came into the world. Fiorelli, L., Grellet-Tinner, G., Alasino, P., & Argañaraz, E. (2011). The geology and palaeoecology of the newly discovered Cretaceous neosauropod hydrothermal nesting site in Sanagasta (Los Llanos Formation), La Rioja, northwest Argentina. Cretaceous Research. DOI: 10.1016/j.cretres.2011.12.002 December 21, 2011 Alvarezsaurs are Cretaceous mysteries. These small dinosaurs, a feathered subgroup of coelurosaurs, had long jaws studded with tiny teeth, and their arms were short, stout appendages that some researchers hypothesize were used to tear into anthills or termite mounds. But no one knows for sure. We understand very little about the biology of these dinosaurs, but even as we puzzle over their natural history, more previously unknown genera are being found. The latest is Bonapartenykus ultimus from the Late Cretaceous of Patagonia, and what makes this dinosaur so special is what was found with its bones. Paleontologists Federico Agnolin, Jaime Powell, Fernando Novas and Martin Kundrát describe the new dinosaur in an in-press Cretaceous Research paper. The alvarezsaur was not in good shape when the researchers found it. While some of the bones, particularly those of the leg, were close to their original articulation, Bonapartenykus is represented by an incomplete set of partially damaged bones, without a skull. In life, the dinosaur is estimated to have been about eight and a half feet long. (Subtle characteristics of the preserved vertebra, shoulder girdle, and hips are what led Agnolin and co-authors to identify this animal as an alvarezsaur despite the paucity of bones.) But there was also something else. Next to the bones were the battered remnants of at least two dinosaur eggs. Could these be fossil evidence of a Bonapartenykus that was protecting its nest? Determining who laid those eggs is a difficult task. No evidence of embryos has been found inside the eggs, so we can’t entirely be sure of what kind of dinosaur was growing inside.
The close association between the fossils is the primary line of evidence that the eggs might be attributable to Bonapartenykus. This is the hypothesis favored by Agnolin and co-authors, but they doubt that the small site represents parental care. There is no evidence of a nest. Instead the scientists suggest that the two eggs may still have been inside the dinosaur when it died—a hypothesis based on the previous discovery of an oviraptorosaur from China with a pair of eggs preserved where the dinosaur’s birth canal would have been. When the alvarezsaur perished, the eggs may have fallen out of the body and been preserved with the bones. Yet I wonder if there might be alternative explanations. Just because fossils are found together does not necessarily mean that the organisms those fossils represent interacted in life. Making connections between organisms found at the same site requires a detailed understanding of taphonomy—what happened to those organisms from the time of death to discovery. In this case, the bones of Bonapartenykus are scattered and poorly preserved, and the eggs were also partially broken. Did the animal simply fall apart, as the authors seem to suggest, or were the bones and eggs brought together through rushing water? Perhaps the body of Bonapartenykus was carried by a water flow to the location of the eggs, fell apart after the water receded and then was buried again. This is a bit of armchair speculation on my part, and the hypothesis proposed by Agnolin and co-authors is a reasonable one, but we need a detailed understanding of how this little fossil pocket formed if we are to understand the relationship between the eggs and the bones. The geological and taphonomic details of the fossil site are important for framing hypotheses about what happened so many millions of years ago. We may have to wait for more intricately preserved fossils to be sure. A Bonapartenykus preserved on a nest, or a female dinosaur with eggs preserved within her hips, would do nicely. Agnolin, F., Powell, J., Novas, F., & Kundrát, M. (2011). New alvarezsaurid (Dinosauria, Theropoda) from uppermost Cretaceous of north-western Patagonia with associated eggs. Cretaceous Research. DOI: 10.1016/j.cretres.2011.11.014
<urn:uuid:34a5a9eb-0916-4c91-b0e4-351a79b57c42>
3.703125
3,623
Content Listing
Science & Tech.
38.51349
EWWNP means Exploring Wild and Wonderful Number Patterns Created by Yourself! Investigate what happens if we create number patterns using some simple rules. What happens when you add the digits of a number then multiply the result by 2 and you keep doing this? You could try for different numbers and different rules. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether! Ram divided 15 pennies among four small bags. He could then pay any sum of money from 1p to 15p without opening any bag. How many pennies did Ram put in each bag? A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target. Can you put the numbers 1-5 in the V shape so that both 'arms' have the same total?
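For the digit-sum rule above ("add the digits of a number then multiply the result by 2"), a short program makes it easy to try many starting numbers at once. A minimal sketch (Python; the starting numbers are arbitrary illustrative choices):

def digit_sum_times_two(n):
    # One step of the rule: add the digits of n, then double the result.
    return 2 * sum(int(d) for d in str(n))

def orbit(start, steps=12):
    # Apply the rule repeatedly and record every number visited.
    seq = [start]
    for _ in range(steps):
        seq.append(digit_sum_times_two(seq[-1]))
    return seq

for start in (7, 23, 456):
    print(start, "->", orbit(start))

Most starting numbers quickly fall into a short repeating loop (7, for instance, reaches the cycle 14, 10, 2, 4, 8, 16, while 456 drops into the cycle 6, 12); finding and explaining such loops is exactly the kind of pattern the activity invites you to investigate.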
<urn:uuid:dff68dab-2343-41db-a456-94bbc36aeba8>
3.6875
203
Content Listing
Science & Tech.
71.752667
EL NIÑO / LA NIÑA The term El Niño (Spanish for "the little boy" or "the Christ Child") was originally used by fishermen along the coasts of Ecuador and Peru to refer to above-normal sea-surface temperatures that typically appear around Christmas time in the eastern Pacific Ocean and last for several months. But El Niño's effects are not limited to Peru and Ecuador. They can impact weather patterns around the world, and the disruption of the normal climate can have profound and even tragic consequences. How Does El Niño Work? In normal years, winds tend to blow from east to west across the waters of the tropical Pacific. The easterly winds (remember that wind directions refer to the direction FROM which the wind comes: an easterly trade wind comes from the EAST and blows WEST) push the surface waters westward across the ocean. In turn, this causes deeper, colder waters to rise to the surface. This "upwelling" of deep ocean waters brings with it nutrients that otherwise would remain near the bottom. Fish populations living in the upper waters are dependent on these nutrients for survival. During El Niño years, the winds weaken, reducing or even choking off the upwelling of deep water. The consequent warming of the ocean surface further weakens the winds and strengthens El Niño. As the Pacific continues to heat up, the warmer waters shift eastward, and so do the clouds and thunderstorms that produce heavy rainfall along the equator. This results in changes in jet streams (the winds high aloft), which lead to dry conditions in Indonesia and Australia, and floods in Peru and Ecuador. El Niño events occur on average every 3 to 5 years, though there can be periods of up to a decade without an El Niño. The 1982-83 El Niño was unusually strong. In Ecuador and northern Peru, up to 100 inches of rain fell during a six-month period, transforming the coastal desert into grassland dotted with lakes. Abnormal wind patterns also caused the monsoon rains to fall over the central Pacific instead of on the western shores, which led to droughts and disastrous forest fires in Indonesia and Australia. Overall, the loss to the global economy as a result of the El Niño amounted to more than $8 billion. Likewise, the winter of 1997-1998 was marked by a record-breaking El Niño event. The result was unusual weather in parts of the world, including the U.S. Severe weather events included flooding in the southeastern United States, major storms in the Northeast, and flooding in California. El Niño's twin sister is La Niña ("the little girl" in Spanish). Her effects are, as any siblings would expect, the exact opposite of El Niño's: for instance, precipitation is below normal in California and the southeastern U.S.! La Niña is characterized by below-normal sea surface temperatures in the eastern equatorial Pacific. There are large variations in weather for many U.S. locations, from warm spells to cold waves, during a La Niña winter. El Niño and Global Warming: Any Connection? Scientists still cannot say with certainty that global warming is affecting El Niño events. In January 1999, however, scientists at the National Center for Atmospheric Research (NCAR) and elsewhere reported that global warming may accentuate El Niño's current and future impacts. El Niño events have become more frequent and have had greater climate impacts over the past century. This change in El Niño events corresponds to a rise in global temperatures. To see how El Niño and La Niña change North America's seasons, check out WHEN.
(Adapted, with permission, from the ERA/NASA/NOAA Climate Change Presentation Kit CD-ROM.) Images provided by NOAA
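Because both phases are defined by sea-surface-temperature departures from normal in the eastern equatorial Pacific, the classification itself is easy to illustrate in code. The sketch below is only illustrative: the 0.5 degree C anomaly threshold is a commonly used convention, not a number taken from this article, and the sample anomaly values are invented.

def classify_enso(sst_anomaly_c, threshold=0.5):
    """Label an eastern-Pacific SST anomaly (deg C vs. the long-term mean)."""
    if sst_anomaly_c >= threshold:
        return "El Nino conditions (warmer than normal)"
    if sst_anomaly_c <= -threshold:
        return "La Nina conditions (cooler than normal)"
    return "neutral"

# Hypothetical monthly anomalies, in degrees Celsius:
for anomaly in (2.3, -1.1, 0.2):
    print(anomaly, "->", classify_enso(anomaly))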
<urn:uuid:74da975d-7efa-4c07-8ec1-f69e849a5217>
4.15625
773
Knowledge Article
Science & Tech.
51.239009
Sections 27.1 - 27.3 We're now starting to talk about quantum mechanics, the physics of the very small. At the end of the 19th century one of the most intriguing puzzles in physics involved the spectrum of radiation emitted by a hot object. Specifically, the emitter was assumed to be a blackbody, a perfect radiator. The hotter a blackbody is, the more the peak in the spectrum of emitted radiation shifts to shorter wavelength. Nobody could explain why there was a peak in the distribution at all, however; the theory at the time predicted that for a blackbody, the intensity of radiation just kept increasing as the wavelength decreased. This was known as the ultraviolet catastrophe, because the theory predicted that an infinite amount of energy was emitted by a radiating object. Clearly, this prediction was in conflict with the idea of conservation of energy, not to mention being in serious disagreement with experimental observation. No one could account for the discrepancy, however, until Max Planck came up with the idea that a blackbody was made up of a whole bunch of oscillating atoms, and that the energy of each oscillating atom was quantized. That last point is the key: the energy of the atoms could only take on discrete values, and these values depended on the frequency of the oscillation. Planck's prediction of the energy of an oscillating atom: E = nhf (n = 0, 1, 2, 3 ...) where f is the frequency, n is an integer, and h is a constant known as Planck's constant. This constant shows up in many different areas of quantum mechanics. The spectra predicted for a radiating blackbody made up of these oscillating atoms agree very well with experimentally determined spectra. Planck's idea of discrete energy levels led Einstein to the idea that electromagnetic waves have a particle nature. When Planck's oscillating atoms lose energy, they can do so only by making a jump down to a lower energy level. The energy lost by the atoms is given off as an electromagnetic wave. Because the energy levels of the oscillating atoms are separated by hf, the energy carried off by the electromagnetic wave must be hf. Einstein won the Nobel Prize for Physics not for his work on relativity, but for explaining the photoelectric effect. He proposed that light is made up of packets of energy called photons. Photons have no mass, but they have momentum and they have an energy given by: Energy of a photon: E = hf The photoelectric effect works like this. If you shine light of high enough energy on to a metal, electrons will be emitted from the metal. Light below a certain threshold frequency, no matter how intense, will not cause any electrons to be emitted. Light above the threshold frequency, even if it's not very intense, will always cause electrons to be emitted. The explanation for the photoelectric effect goes like this: it takes a certain energy to eject an electron from a metal surface. This energy is known as the work function (W), which depends on the metal. Electrons can gain energy by interacting with photons. If a photon has an energy at least as big as the work function, the photon energy can be transferred to the electron and the electron will have enough energy to escape from the metal. A photon with an energy less than the work function will never be able to eject electrons. Before Einstein's explanation, the photoelectric effect was a real mystery.
Scientists couldn't really understand why low-frequency high-intensity light would not cause electrons to be emitted, while higher-frequency low-intensity light would. Knowing that light is made up of photons, it's easy to explain now. It's not the total amount of energy (i.e., the intensity) that's important, but the energy per photon. When light of frequency f is incident on a metal surface that has a work function W, the maximum kinetic energy of the emitted electrons is given by: KEmax = hf - W. Note that this is the maximum possible kinetic energy because W is the minimum energy necessary to liberate an electron. The threshold frequency, the minimum frequency the photons can have to produce the emission of electrons, is the frequency at which the photon energy is just equal to the work function: hf0 = W, so f0 = W / h.
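To make the formulas concrete, here is a small numerical sketch. The work function chosen below (2.3 eV, roughly that of an alkali metal such as sodium) is an assumed illustrative value, not one given in the text.

H = 6.626e-34   # Planck's constant, joule-seconds
EV = 1.602e-19  # joules per electron-volt

def max_kinetic_energy_ev(frequency_hz, work_function_ev):
    # KEmax = hf - W; a negative result means no electrons are emitted.
    return (H * frequency_hz) / EV - work_function_ev

work_function = 2.3                   # eV, assumed for illustration
threshold_f = work_function * EV / H  # frequency at which hf = W
print("threshold frequency: %.3g Hz" % threshold_f)   # about 5.6e14 Hz
print("KEmax for f = 6.5e14 Hz: %.2f eV"
      % max_kinetic_energy_ev(6.5e14, work_function))  # about 0.39 eV

Any frequency below the threshold gives a negative result, matching the observation that light below the threshold frequency ejects no electrons no matter how intense it is.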
<urn:uuid:afb64acc-adeb-4fa9-adb5-7765660dd862>
4.40625
879
Academic Writing
Science & Tech.
45.434631
Both x and y hold integer values; you just chose to use 0b binary literals to define them. If you want to combine the two into z, you can easily do so using a shifting operation and binary bitwise operations: >>> x = 0b00010101010 >>> y = 0b1110000 >>> z = (x << 7) | y Do note that Python does not track how many bits of 'precision' you want to keep on such values; you defined y with 7 bits of information, so I had to shift x to the left 7 times to make room for y. You'd have to track this information yourself. Your next problem is representing data as base64, as this format requires you to provide it with bytes only. That's chunks of 8 bits of binary data. This means you'll have to align your bits to the 8-bit boundaries and then turn these into bytes (e.g. strings). You won't be able to get around that, I'm afraid. I'd use the struct module for that, it lets you pack integers and other data-types into bytes with ease: >>> import struct >>> struct.pack('>H', x) '\x00\xaa' >>> struct.pack('>H', x).encode('base64') 'AKo=\n' In the above example I've packed the 11-bit x variable as an unsigned short (using the big-endian standard C format), resulting in 2 bytes of information, then encoded that to base64. The reverse then requires decoding from base64, then an unpack operation: >>> foo = struct.unpack('>H', 'AKo=\n'.decode('base64')) Again, Python doesn't track how many bits of information are important to you; you'll have to track that explicitly.
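Note that the 'base64' string codec used above exists only on Python 2. A sketch of the same round trip on Python 3 (same x value as above) would use the base64 module explicitly:

import base64
import struct

x = 0b00010101010                    # 170, 11 bits of information
packed = struct.pack('>H', x)        # 2 bytes, big-endian unsigned short
encoded = base64.b64encode(packed)   # b'AKo='
value, = struct.unpack('>H', base64.b64decode(encoded))
print(encoded, value == x)           # b'AKo=' True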
<urn:uuid:12238b65-e880-4323-bb6c-fc9a0e848904>
2.875
380
Q&A Forum
Software Dev.
71.584063
Posted on Mar 3, 2010 in Astronomy | 1 comment How does the Milky Way look in the night sky to our unaided eye? To the unaided eye, on a clear night the Milky Way appears as a band of light that stretches across nearly the entire sky. With magnification, the sight is quite different. See Milky Way-Wikipedia for some nice photos. © 2010-2013 scianswers.com. Owned and operated by SciScout LLC. All rights reserved.
<urn:uuid:c712e281-d378-40af-b7c2-55be40a56b11>
3.0625
189
Q&A Forum
Science & Tech.
86.716361
From Uncyclopedia, the content-free encyclopedia “I hope people remain interested in MY old bones” Vertebrate palaeontologists are people unnaturally obsessed with old bones. They should not be confused with geologists or archaeologists, and are distinguished by complete lack of knowledge of anything outside their very small area of expertise. Every vertebrate palaeontologist is the world's leading expert - in fact, the only expert - in their particular field, which means that they can write anything they like in the knowledge that nobody else knows anything which can prove that they are wrong. The History of Vertebrate Palaeontology The most important figure in the early days of the science was William Buckland, and he set standards of behaviour followed by all subsequent palaeontologists. These are to eat anything, dead or alive, to walk at least 60 miles every day, and to stand firm on your beliefs regardless of what the evidence shows. Shunned by most monarchs because of his depraved taste for their hearts, he nevertheless managed to establish himself as an important figure in high Victorian Society. His social obligations took too much time away from his research, so his trained bear attended many functions in his place. Such was the politeness of Victorian Society that nobody commented on this unusual substitution. Other important figures in the history of vertebrate palaeontology were Richard Owen and Gideon Mantell. Richard Owen is well-known as the inventor of names for many anatomical features, most of which were observable only to himself. He was obsessed with lizards, and had a running feud with Gideon Mantell over whose lizard was superior. When Mantell showed him some bones his wife had picked up, he denounced them as being "terrible lizards". The name stuck, and dinosaurs have been called “terrible lizards” ever since. Charles Darwin is an important figure in the history of the science. As a young man he went on a long sea voyage. He was a keen sportsman, and spent much of his time shooting everything that moved, including many small birds such as finches. When he came home, he did little except play the cello to his collection of pet worms in the hope that they would develop an appreciation for music. This enterprise failed, but on hearing of a young man who was talking about a new scientific theory called natural selection, he decided to steal the young man's limelight. He published a book called “The Origins of the Specious”, and it immediately gained widespread fame and notoriety. Darwin reasoned that few would read the book, and as he had written it in such dense and impenetrable language, nobody noticed that it was no more than a manual giving advice on breeding pigeons. The Modern Vertebrate Palaeontologist In the old days, palaeontologists used to look carefully at bones, describe them in great detail, and make suggestions as to how extinct animals were related to each other by using deductive logic. The problem with this method is that it takes a lot of time, and a detailed knowledge of anatomy. Modern methods do away with such old-fashioned concepts. Cladistics, the favoured method of all modern palaeontologists, is based on the principle, well known in computing circles, that the more garbage one loads into a data set, the more reliable the output of any analysis. Hypotheses are tested by running the same data through the programme again, and if it comes up with the same result the hypothesis is verified.
This is good science because it uses computers rather than human intelligence.
<urn:uuid:7beacb22-88f2-49d1-a5a6-925469933322>
2.734375
747
Knowledge Article
Science & Tech.
33.741446
United We Orbit It's a story of spacecraft meets spacecraft. - By James E. Oberg - Air & Space magazine, January 1997 (Page 4 of 5) Before mission STS-71, the astronauts “flew” over 200 approaches in a variety of simulators. Docking with Mir requires a very slow closing speed—barely more than an inch per second during the final approach. It also demands great precision. The docking rings have to be parallel within two degrees in each axis, and the targets have to be aligned within three inches of each other. The astronauts have various tools to help them measure the alignment. A metal “stand-off cross” extends on a rod above and parallel to a black painted cross on the Mir target. If the crosses appear in TV views to line up perfectly, the pilot knows he’s on track. The TV cameras also have grid markings to make it easier for the astronauts to check their alignment. One concern had been the disorienting view caused by the camera’s being at a distance from the pilot’s eyeballs. “You’re not looking at the real world,” explains Precourt. “It’s not like landing an airplane with a view straight out the front windshield.” It’s more like closing your eyes, holding your hands out, and trying to touch your fingertips, he says. But even though it took some getting used to during training, it turned out not to be a problem. Gibson and Precourt, as well as every docking crew after them, learned in the simulators to hit the marks every time, even when jets and instruments and computers failed. On the STS-71 docking, the angular errors were measured in tenths of degrees, almost too small to be noticed. The arrival time was nearly perfect too: They were only two seconds off. Experience has shown that on-time arrival doesn’t matter all that much. “I always argued against getting hung up on the docking time as if it were critical,” says Kevin Chilton. “I wanted to dock a minute later or a minute early just to show it’s not important.” He ended up docking “pretty much on time” anyway. In fact, so far every docking has been a model of precision. “When you think about it,” says Precourt, “it’s pretty amazing that you’d have two vehicles flying in space that are subject to bending and moving, yet the relative position of the docking ports can be precisely known when we arrive.” With at least five more shuttle-Mir missions planned, and with dockings to the international space station scheduled to begin in 1998, orbital docking is finally becoming, if not routine, then at least no cause for great anxiety. Engineers working on the space station have come up with a few modifications to the shuttle-Mir design but not many. They plan to fine-tune the orbiter’s damping mechanism to further reduce the energy transferred to the station at contact. The station also will have a few of the old-style probe-drogue ports, since a variety of Russian, American, European, and Japanese vehicles will have to dock with it. Dockings have now taken place with four different configurations of the shuttle and Mir (approaching the Russian station, with all its protruding solar arrays, modules, and vehicles, is “like docking with a porcupine,” says STS-79 commander Bill Readdy). The STS-74 crew brought up a new docking module to attach to Mir last year, which provides greater flexibility and places the docking interface at a distance from the main station. 
This addition, plus the station’s different configuration and greater mass, may account for the fact that Mir crews are now feeling less of a jolt than Dezhurov and his companions experienced. Readdy says that when Atlantis pulled up to the docking port last September, Shannon Lucid and her cosmonaut crewmates hardly felt a thing. The STS-74 astronauts even came up with a soundtrack to accompany all the slow, graceful maneuvers in space. A Strauss waltz had already been appropriated by Stanley Kubrick, and besides, it evoked Vienna, not Moscow. So Ken Cameron and his Atlantis crew went with Tchaikovsky’s “Swan Lake” for their final approach and docking.
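The tolerances quoted earlier (docking rings parallel within two degrees in each axis, targets aligned within three inches) amount to a simple go/no-go check. The sketch below is purely illustrative and not any NASA tool; the limits come from the article, and the sample readings are invented.

def docking_alignment_ok(angle_errors_deg, offset_in,
                         max_angle_deg=2.0, max_offset_in=3.0):
    # True when every axis angle error and the lateral offset are within limits.
    return (all(abs(a) <= max_angle_deg for a in angle_errors_deg)
            and abs(offset_in) <= max_offset_in)

# An STS-71-style approach: angular errors measured in tenths of a degree.
print(docking_alignment_ok([0.2, -0.1, 0.3], offset_in=1.0))  # True
print(docking_alignment_ok([2.5, 0.0, 0.0], offset_in=1.0))   # False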
<urn:uuid:118536e6-738b-4c09-a7d2-1fa3474d46c7>
2.78125
935
Truncated
Science & Tech.
54.384356
How to make a basic frame The key to creating frames is understanding how they work. Once you have that down, it's pretty simple. A basic framed page is made up of 3 separate web pages: an index.html, a left.html, and a right.html. When you put them all together, you get a framed page. Let's get started: Your first step is to create your left.html and right.html pages. These are 2 normal web pages, and you can put anything you want on them. Usually the left.html is the one with the links, or navigational menu, and the right.html is your main web page. The next step is putting these 2 web pages into a single window. We do this with the index.html. The index.html does not have any text, images, or anything else that you can see; all this page does is put the 2 web pages into a single window. To create an index.html, you first have to create a new web page, and name it, yea you guessed it, index.html. In your index.html place the following code:

<html>
<frameset cols="20%,80%">
<frame name="leftside" src="left.html">
<frame name="rightside" src="right.html">
</frameset>
</html>

Now it's just a matter of editing this code to fit your web page. You can edit the percentages, the frame names, and the file names. cols in the above code stands for columns. The web page we are creating is made up of 2 columns, a left one, and a right one. The 2 percentages after that (20%,80%) are the percentages of the browser window each column will take up. Meaning, the left.html will take up 20% of the window, and the right.html will take up 80% of the window. You can change these to what you think looks best. You can also use the word rows in place of the word cols. This will make the frames horizontal, instead of vertical. Now you have to name your frames; in the above code, left.html is named leftside and the right.html is named rightside. You can edit these to whatever you want. When all of this is done, place all of these pages into the same directory, and view your index.html page to see your frames in place. Click here to see an example © 2001 TheHTMlSource.com, INC. All Rights Reserved.
<urn:uuid:6c36d1c4-96c9-4c4e-86a2-d6dfe5ff8a07>
3.515625
492
Tutorial
Software Dev.
82.386398
Using Math to Predict Future Extinctions "Our study provides a theoretical basis for management efforts that would aim to mitigate extinction cascades in food web networks. There is evidence that a significant fraction of all extinctions are caused not by a primary perturbation but instead by the propagation of a cascade," said Motter. Extinction cascades are often observed following the loss of a key species within an ecosystem. As the system changes to compensate for the loss, availability of food, territory and other resources to each of the remaining members can fluctuate wildly, creating a boom-or-bust environment that can lead to even more extinctions. According to the study, more than 70 percent of these extinctions are preventable, assuming that the system can be brought into balance using only available resources--no new factors may be introduced. Motter explained further, "We find that extinction cascades can often be mitigated by suppressing--rather than enhancing--the populations of specific species. In numerous cases, it is predicted that even the proactive removal of a species that would otherwise be extinct by a cascade can prevent the extinction of other species." The finding may seem counterintuitive to conservationists because the compensatory actions seem to inflict further damage to the system. However, when the entire ecosystem is considered, the effect is beneficial. This news holds promise for those charged with maintaining Earth's biodiversity and natural resources--the health of which can counteract many of the causes of climate change, and some man-made disasters such as the Gulf of Mexico oil spill. The dodo bird, Raphus cucullatus, is one example of extinction due to human activity. The dodo was a large, flightless bird that became extinct in the 1600s. It is likely that a combination of factors including hunting, loss of habitat, and perhaps even a flash flood, stressed the ecosystem on the island of Mauritius, home of the dodo. Some researchers think that human introduction of non-native species, such as dogs, pigs, cats and rats to the island, is what ultimately led to the demise of the dodo. The goal of this project, funded by the National Science Foundation's Division of Mathematical Sciences, is to develop mathematical methods to study dynamical processes in complex networks. Although the specific application mentioned here may be useful in management of ecosystems, the mathematical foundation underlying the analysis is much more universal. The broad concept is innovative in the area of complex networks because it concludes that large-scale failures can be avoided by focusing on preventing the waves of failure that follow the initial event. This approach could be used to stabilize a wide array of complex networks. It can apply to biochemical networks in order to slow or stop progression of diseases caused by variations inside individual cells. It can also be used to manage technological networks such as the smart grid to prevent blackouts. It can even apply to regulation of complicated financial networks by identifying key factors in the early stages of a financial downturn, which, when met with human intervention, could potentially save billions of dollars. The world is a complicated place that gets even trickier when trying to mathematically explain a complex network, especially when the network evolves within an environment that is itself changing. But, Motter says his mathematical model is promising for the study of changing environments.
"Uncertainty itself is not a problem," he quipped. "The problem comes when you cannot estimate uncertainty."
<urn:uuid:1848522a-fab6-45f0-8102-06fd2cfbcbb3>
3.609375
695
Knowledge Article
Science & Tech.
25.731176
SAVING WOLVES IN THE GREAT LAKES REGION Once one of the most widely distributed land mammals in North America, the gray wolf has also been one of the most persecuted. State, local and private bounties and a federal extermination program had nearly eliminated the gray wolf from the lower 48. By the 1970s, Great Lakes wolves survived only in northeastern Minnesota and Lake Superior's Isle Royale National Park. With federal protection, wolves have grown in numbers and dispersed from Minnesota into northern Wisconsin and, from there, into the Upper Peninsula of Michigan. In the spring of 2010, for the first time in decades, wolves raised pups in the northern lower peninsula of Michigan. Such progress clearly demonstrates the effectiveness of the Endangered Species Act, but wolf recovery is far from complete. The latest science shows that wolves in the Great Lakes suffer from hybridization with coyotes, disease, illegal shootings and vehicle kills. Despite the gray wolf's continuing endangerment in Great Lakes states, in December 2011 the U.S. Fish and Wildlife Service stripped the species of Endangered Species Act protections. The Center and allies have successfully sued to derail the Service's past efforts to prematurely reduce and remove federal protections from gray wolves, including overturning three such rules in the Great Lakes — and we're doing everything we can to defend them now. In September 2012, we and Howling for Wolves sued the Minnesota Department of Natural Resources for its failure to provide a formal opportunity for public comment on recently approved rules establishing wolf hunting and trapping. We sought a preliminary injunction to prevent the opening of hunting and trapping seasons that fall, but the injunction was denied. To spur true recovery for all gray wolves, in 2010 the Center filed a scientific petition and notice of intent to sue to compel the Obama administration to develop a national recovery plan that would establish wolf populations in suitable habitat in the Pacific Northwest, California, southern Rockies, New England and Colorado Plateau. Unfortunately, the Fish and Wildlife Service went the opposite direction: in March 2012 it recommended removing federal protections from gray wolves that remained on the endangered species list after wolves in the northern Rocky Mountains and upper Midwest had their protections stripped the year before (though the Service conceded it would still consider protection for subspecies or breeding populations, including Mexican gray wolves, and for populations in the Pacific Northwest and Northeast).
<urn:uuid:a44b0c91-e080-46c2-943d-116136786658>
3.359375
476
Knowledge Article
Science & Tech.
24.852898
1. Create a class with a default constructor (one that takes no arguments) that prints a message. Create an object of this class.
2. Add an overloaded constructor to Exercise 1 that takes an argument and prints it along with your message.
3. Create an array of object handles of the class you created in Exercise 2, but don't actually create objects to assign into the array. When you run the program, notice whether the initialization messages from the constructor calls appear (a rough analogue of Exercises 1-3 is sketched below).
4. Complete Exercise 3 by creating objects to attach to the array of handles.
5. Experiment by running the program using the arguments "before," "after" and "none." Repeat the process and see if you detect any patterns in the output. Change the code and observe the results.
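The exercises assume Java-style syntax, but the ideas translate directly. As a rough Python sketch of Exercises 1 through 3 (Python uses a default argument value where Java would use an overloaded constructor):

class Greeter:
    def __init__(self, extra=None):
        # Default "constructor" prints a message; passing an argument
        # stands in for the overloaded constructor of Exercise 2.
        if extra is None:
            print("Greeter created")
        else:
            print("Greeter created:", extra)

Greeter()             # Exercise 1: default construction prints a message
Greeter("hello")      # Exercise 2: construction with an argument
handles = [None] * 5  # Exercise 3: an array of handles, no objects yet,
                      # so no initialization messages are printed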
<urn:uuid:276ec70e-f8d1-41df-9a52-d3712a191423>
3.140625
161
Tutorial
Software Dev.
53.745935
The Viability of Frozen Human Embryos: Lessons From Animal Research In the long debate over the ethical permissibility of destroying embryonic human beings in research, it has sometimes been suggested that frozen embryos are somehow less human, or less alive, than are unfrozen embryos. For instance, in January 2000, Senator Arlen Specter of Pennsylvania introduced S. 2015, a bill providing for federal funding of human embryonic stem cell research. The bill provided, in pertinent part, that: The human embryonic stem cells involved shall be derived only from embryos that otherwise would be discarded that have been donated from in-vitro fertilization clinics with the written informed consent of the progenitors. In a statement to the press, the senator said, "[T]he discarded embryos are not going to be used for human life. If there was any possibility, I would be the first to oppose their use for scientific research." Though his statement is ambiguous, it appears Sen. Specter meant to suggest that, once frozen, the embryo was either dead or could not be unfrozen, implanted, and brought to live birth. However, frozen embryos remain alive, and can be thawed, implanted and brought to a live, healthy birth. This has, in fact, been accomplished several times, both in the United States and elsewhere. For instance, on February 5, 2004, BBC News reported that an Israeli woman gave birth to twins from embryos frozen twelve years ago. While such reports conclusively rebut Specter's suggestion that frozen embryonic human beings cannot be brought to live birth, there is significant scientific research examining other animals--such as invertebrates, insects, frogs, and turtles--which reinforces the point. In this paper, we summarize research concerning how these creatures survive at low temperatures. Freezing places these creatures in suspended animation, but it does not kill them. We hope this will conclusively rebut any belief that frozen human embryos are either no longer alive or cannot be brought to a live birth. Frozen embryonic human beings, like the frozen animals discussed in this paper, can survive freezing. Indeed, whether frozen embryonic human beings will, as Specter put it, "be used for human life" (or, more accurately, whether they will be born alive) depends solely upon the will, good or ill, of the human beings into whose hands fate has placed them. Nothing about their dilemma robs them of the humanity they share with us. Freeze Survival Techniques in Animals Animals in their natural environments have to face many situations that, if not handled properly, will cause their deaths. One of these life threatening phenomena is cold weather. In the majority of cases, freezing of animal body tissues or organs is lethal. However, there are some species that can undergo some degree of freezing without damage. In order to prevent injuries due to low temperatures, two basic strategies have been developed: freeze-avoidance and freeze-tolerance. In this paper, I will indicate the mechanisms that are necessary to prevent damage by freezing. At the end, I will provide some data for freeze tolerance of intertidal marine invertebrates, insects, frogs, and turtles respectively. Freeze-avoidance and Freeze-tolerance Low temperatures are lethal to cold-blooded animals. These species can adopt two basic strategies to face temperatures below the freezing point of their body fluids (FP): freeze-avoidance and freeze-tolerance.
Both strategies involve adaptations on behavioral, physiological, and biochemical levels. Freeze-avoidance is the safest way to prevent the lethal effects of freezing. The most effective strategy is to find a place for hibernation where the temperature does not fall below the freezing point (FP). However, some animals have developed another strategy. It enables them to keep their body fluids liquid even at temperatures far below FP. The contact between environmental ice and a supercooled animal is often lethal, because so-called "inoculative freezing" can occur. Freeze-tolerance is the most challenging hibernation strategy. Because an animal in a frozen state is completely helpless, there must be good reasons why freeze-tolerance has been developed. There are at least three important advantages achieved by this hibernating strategy. The first advantage is the possibility of early spring emergence. Hibernating in less-protected hibernation sites, animals can detect the warmer spring temperatures sooner than those which hibernate in more protected places. This prolongs the time for their growing season. The second advantage has to do with the protection against predators. Turtles that hatch late in the season stay in place and first surface the following spring. But at that time the hatchlings are older and stronger, and thus, more prepared for facing the challenge of predators. The third main advantage of adopting freeze tolerance is range extension. The species can penetrate into areas where freeze avoidance is not possible. Freeze tolerance requires several mechanisms for preventing possible freezing injuries. Freezing in general must be carefully controlled in order to ensure survival. Ice growth is initiated in two ways. It may be inoculated, i.e., body fluids start to freeze when brought in contact with environmental ice at or below FP. Or it may occur spontaneously in supercooled body fluids. The slower the freezing occurs, the more time the cells have to prepare for freezing. The lower the temperature at which ice formation begins, the faster the ice will be formed, and thus less time will be available for implementing cryoprotective means. The animals that can survive freezing of their body fluids belong to various species. So far, freeze tolerance is observed with some insects (members of Coleoptera, Hymenoptera, Diptera, and Lepidoptera), marine invertebrates (bivalves--Mytilus edulis, Modiolus demissus, Cardium edule, Venus mercenaria; gastropods--Littorina littorea, Nassarius obsoletus, Acmea digitalis, Melampus bidentatus; and barnacles--Balanus balanoides), four species of land hibernating frogs (wood frog--Rana sylvatica, gray tree frog--Hyla versicolor, spring peeper--Hyla crucifer, and chorus frog--Pseudacris triseriata), and reptiles (box turtles--Terrapene carolina, painted turtles--Chrysemis picta, garter snakes, some lizards). A. Marine Invertebrates In the intertidal zone, marine invertebrates can be exposed to sub-zero temperatures twice a day. Because of the environment, marine invertebrates cannot use dehydration to protect themselves from the impacts of freezing. The factors influencing freezing survival include salinity, temperature, and anaerobiosis. B. Insects Examples of freeze tolerance in insects can be found at all stages of life. C. Terrestrially Hibernating Frogs The limits for frog survival are narrow but still sufficient for chosen hibernation sites.
Frogs usually supercool to 28.4 or 26.6 degrees Fahrenheit (-2 or -3 degrees Celsius) and can survive freezing at 21.2 to 17.6 degrees Fahrenheit (-6 to -8 degrees Celsius). It is important to note that the survivable temperatures vary with individual species and location. However, it seems that survival ranges are well matched to the needs of the species. Long-term survival is sufficient--animals of all four species that have been studied survived three days frozen at 26.6 degrees Fahrenheit (-3 degrees Celsius). When frozen, frogs have stiffened limbs, ice crystals under the skin and interspersed with skeletal muscles, and ice filling the abdominal cavity and surrounding all organs. There is no breathing, no heartbeat, and no bleeding when the aorta is severed. Organs such as the liver and the heart are pale, because of blood withdrawal. The heart is the first organ to start functioning again when thawed. Frogs, in order to secure freeze tolerance, also use cryoprotectants (glucose, glycerol, etc.). They might die from dehydration if hibernating in direct contact with air, because so-called "freeze drying" can occur. D. Turtles The above-mentioned species of turtles remain in their nest after hatching. This may cause the hatchlings to be exposed to sub-zero temperatures. Many experiments have been made to determine the characteristics and limits of freeze tolerance of turtles. Before freezing in laboratory conditions, turtles usually supercool. When freezing is just about to occur, the body temperature rises slightly below the FP and controlled ice formation starts. It is difficult to determine exact data for freezing tolerance in individual animal species. These data depend on the way the laboratory experiments are conducted. For example, the lowest survivable temperature (or the length of time animals can be exposed to freezing) depends on how fast the temperature was lowered during the experiments. References: Bailey R. M. (1949). "Temperature toleration of garter snakes in hibernation." Ecology, Vol. 30, No. 2. Block W. (1991). "To freeze or not to freeze? Invertebrate survival of sub-zero temperatures." Functional Ecology, Vol. 5. Churchill T. A., Storey K. B. (1992). "Responses to freezing exposure of hatchling turtles Trachemys Scripta Elegans: Factors influencing the development of freeze tolerance by reptiles." Journal of Experimental Biology, Vol. 167. Churchill T. A., Storey K. B. (1992a). "Natural freezing survival by painted turtles Chrysemys picta marginata and C. picta bellii." American Journal of Physiology, Vol. 262. Claussen D. L., Constanzo J. P. (1990). "A simple model for estimating the ice content of freezing ectotherms." Journal of Thermal Biology, Vol. 15, No. 3/4. Claussen D. L., Zani P. A. (1991). "Allometry of cooling, supercooling, and freezing in the freeze-tolerant turtle Chrysemys picta." American Journal of Physiology, Vol. 261. Costanzo J. P., Iverson J. B., Wright M. F., Lee R. E. (1995). "Cold hardiness and overwintering strategies of hatchlings in an assemblage of northern turtles." Ecology, Vol. 76, No. 6. Diamond J. M. (1989). "Resurrection of frozen animals." Nature, Vol. 339, Issue 6225. Layne J. R. (1992). "Postfreeze survival and muscle function in the leopard frog (Rana pipiens) and the wood frog (Rana sylvatica)." Journal of Thermal Biology, Vol. 17, No. 2. Layne J. R., Lee R. E. (1987). "Freeze tolerance and the dynamics of ice formation in wood frogs (Rana sylvatica) from southern Ohio." Canadian Journal of Zoology, Vol. 65. Layne J. R., Lee R. E., Huang J. L. (1990).
"Inoculation freezing at high subzero temperatures in a freeze-tolerant frog (Rana sylvatica) and insect (Eurosta solidaginis)." Canadian Journal of Zoology, Vol. 68. Mazur P. (1984). "Freezing of living cells: mechanisms and implications." American Journal of Physiology, Vol. 247. Packard G. C., Packard M. J. (1990). "Patterns of survival at subzero temperatures by hatchling painted turtles and snapping turtles." Journal of Experimental Zoology, Vol. 254. Packard G. C., Packard M. J., Ruble K. A. (1993). "Hatchling snapping turtles overwintering in natural nests are inoculated by ice in frozen soil." Journal of Thermal Biology, Vol. 18, No. 4. Storey K. B., Storey J. M. (1988). "Freeze tolerance in animals." Physiological Reviews, Vol. 68, No. 1. Storey K. B., Storey J. M. (1996). "Natural freezing survival in animals." Annual Review of Ecology and Systematics, Vol. 27.
<urn:uuid:5d028ede-1d3f-4acd-a0d8-1a5958bdf1d5>
2.90625
2,560
Academic Writing
Science & Tech.
44.396728
Here’s something you don’t see every day: a tornado on the surface of the sun. NASA’s Solar Dynamics Observatory posted this stunning video, which shows the sun’s plasma sliding and spinning around in the star’s magnetic fields for 30 hours earlier this month. Terry Kucera, a solar physicist with NASA’s Goddard Space Flight Center, told Fox News that the tornado might be as large as the Earth itself and have gusts up to 300,000 miles per hour. By comparison, the strongest tornadoes on Earth, F5 storms, clock wind speeds at a relatively paltry (though incredibly destructive) 300 mph. The sun is an extremely active star, regularly spitting radiation and atomic particles into space. This space weather has direct impacts here on Earth, like forcing the rerouting of planes and lighting up the auroras. (Andrew Prince is a producer on NPR’s Science Desk.) This post was submitted by Andrew Prince / NPR.
<urn:uuid:a3ae770a-8a74-4e44-a00e-889a220f1ab7>
3.453125
212
Truncated
Science & Tech.
57.211535
Joined: 16 Mar 2004 | Posted: Wed Jan 31, 2007 12:04 pm | Post subject: Quantum Dots Aid Cancer Imaging Studies One of the more difficult diagnostic tasks an oncologist faces is determining if cancer has spread to the lymphatic system, particularly when it is necessary to assess lymphatic drainage from two separate drainage basins. Now, using multiple quantum dots, each with a distinct color, researchers at the National Cancer Institute’s Center for Cancer Research have developed a method for accurately mapping lymphatic flow from more than one drainage basin. In a demonstration of this new technique, a research team headed by Hisataka Kobayashi, M.D., Ph.D., used two colors of quantum dots to map lymphatic flow from the breast and upper extremities into common lymph nodes. To track the quantum dots, the investigators used two-color near-infrared lymphangiography. The researchers report their findings in the journal Breast Cancer Research and Treatment. In the course of their studies, the investigators found that proper selection of quantum dot size improved their ability to map lymphatic flow from different regions of the body. Larger quantum dots, approximately 12 nanometers in diameter, proved optimal for imaging lymphatic flow from the upper extremities, while quantum dots of approximately 6 nanometers in diameter worked best for mapping lymphatic drainage from mammary glands. Quantum dots are also proving useful in labeling human blood cells, a finding that could help researchers answer questions about how these cells move through the body. In a paper published in the journal Leukemia Research, a team of investigators headed by H. Phillip Koeffler, M.D., and colleagues at the University of California, Los Angeles, have shown that they can label all major types of human blood cells with quantum dots. As a demonstration of the utility of this technique, the researchers then monitored dividing human blood cells. The investigators also showed that they could label specific types of blood cells by linking the quantum dots to antibodies that recognize molecules found on those cells. As a result, the investigators were able to identify different kinds of blood cells, such as leukemic cells and normal cells, in a mixture of blood cells. This story was posted on 24th October 2006.
<urn:uuid:3c2a3fa7-2c71-4771-90a8-30e54ad89d83>
2.6875
467
Comment Section
Science & Tech.
39.58455
It's a simple one this week: we are looking at sinking and floating. My children love playing with water, so they had a great time testing things today. What you need: - a large bowl of water - Objects to test (anything you like, but good to pick some solid and some hollow) - Fill the bowl or container about 2/3 full of water. - Gently place the objects on the water; some objects will float when you place them on the water gently, but sink when you drop them. The Science bit. Whether an object floats or sinks depends on its density. Density is how tightly packed the material inside an object is. Just because something is heavy does not mean it will sink. For example, ships are very heavy but not very dense, so they float. I let my children pick some items to test and this is what happened. Something else to try would be to make a boat from plasticine and show how marbles or something similar sink on their own, but hopefully float when inside the boat.
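For older children, the rule can even be written down as a calculation: an object floats when its density (mass divided by volume) is less than the density of water. A tiny sketch with invented numbers:

WATER_DENSITY = 1.0  # grams per cubic centimetre

def floats(mass_g, volume_cm3):
    # True if the object's average density is below that of water.
    return (mass_g / volume_cm3) < WATER_DENSITY

print(floats(30.0, 4.0))    # False: a small, dense solid sinks
print(floats(30.0, 120.0))  # True: the same mass spread over a hollow shape floats

That is exactly the ship example: enormous mass, but even more enormous volume.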
<urn:uuid:c1313c73-b26d-4ec0-a6b1-da60f9cd2473>
3.6875
216
Personal Blog
Science & Tech.
66.837421
Editor's Note: This article originally appeared in the October 2005 issue of Scientific American. In May 1972 a worker at a nuclear fuel–processing plant in France noticed something suspicious. He had been conducting a routine analysis of uranium derived from a seemingly ordinary source of ore. As is the case with all natural uranium, the material under study contained three isotopes—that is to say, three forms with differing atomic masses: uranium 238, the most abundant variety; uranium 234, the rarest; and uranium 235, the isotope that is coveted because it can sustain a nuclear chain reaction. Elsewhere in the earth’s crust, on the moon and even in meteorites, uranium 235 atoms make up 0.720 percent of the total. But in these samples, which came from the Oklo deposit in Gabon (a former French colony in west equatorial Africa), uranium 235 constituted just 0.717 percent. That tiny discrepancy was enough to alert French scientists that something strange had happened. Further analyses showed that ore from at least one part of the mine was far short on uranium 235: some 200 kilograms appeared to be missing—enough to make half a dozen or so nuclear bombs. For weeks, specialists at the French Atomic Energy Commission (CEA) remained perplexed. The answer came only when someone recalled a prediction published 19 years earlier. In 1953 George W. Wetherill of the University of California at Los Angeles and Mark G. Inghram of the University of Chicago pointed out that some uranium deposits might have once operated as natural versions of the nuclear fission reactors that were then becoming popular. Shortly thereafter, Paul K. Kuroda, a chemist from the University of Arkansas, calculated what it would take for a uranium ore body spontaneously to undergo self-sustained fission. In this process, a stray neutron causes a uranium 235 nucleus to split, which gives off more neutrons, causing others of these atoms to break apart in a nuclear chain reaction. Kuroda’s first condition was that the size of the uranium deposit should exceed the average length that fission-inducing neutrons travel, about two thirds of a meter. This requirement helps to ensure that the neutrons given off by one fissioning nucleus are absorbed by another before escaping from the uranium vein. A second prerequisite is that uranium 235 must be present in sufficient abundance. Today even the most massive and concentrated uranium deposit cannot become a nuclear reactor, because the uranium 235 concentration, at less than 1 percent, is just too low. But this isotope is radioactive and decays about six times faster than does uranium 238, which indicates that the fissile fraction was much higher in the distant past. For example, two billion years ago (about when the Oklo deposit formed) uranium 235 must have constituted approximately 3 percent, which is roughly the level provided artificially in the enriched uranium used to fuel most nuclear power stations. The third important ingredient is a neutron “moderator,” a substance that can slow the neutrons given off when a uranium nucleus splits so that they are more apt to induce other uranium nuclei to break apart. Finally, there should be no significant amounts of boron, lithium or other so-called poisons, which absorb neutrons and would thus bring any nuclear reaction to a swift halt.
Amazingly, the actual conditions that prevailed two billion years ago in what researchers eventually determined to be 16 separate areas within the Oklo and adjacent Okelobondo uranium mines were very close to what Kuroda outlined. These zones were all identified decades ago. But only recently did my colleagues and I finally clarify major details of what exactly went on inside one of those ancient reactors.
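The 3 percent figure can be checked with a little isotope arithmetic. The half-lives below are standard reference values (about 704 million years for uranium 235 and 4.47 billion years for uranium 238), not numbers taken from the article.

HALF_LIFE_U235 = 7.04e8   # years, assumed reference value
HALF_LIFE_U238 = 4.468e9  # years, assumed reference value

def u235_fraction(years_ago, present_fraction=0.00720):
    """Fraction of uranium 235 in natural uranium `years_ago` years ago (U-234 ignored)."""
    n235 = present_fraction * 2 ** (years_ago / HALF_LIFE_U235)
    n238 = (1 - present_fraction) * 2 ** (years_ago / HALF_LIFE_U238)
    return n235 / (n235 + n238)

print("%.1f%%" % (100 * u235_fraction(2.0e9)))  # about 3.7%, in line with the ~3% quoted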
<urn:uuid:8a5f5a22-b4a3-469a-bb36-6607339f1d7c>
3.9375
755
Truncated
Science & Tech.
34.492419
The Earth Trials Can we test our geoengineering schemes before we have to use them? In their Science article, Alan Robock of Rutgers and colleagues underscore that point. The group has used a NASA computer model to simulate the spraying of a sulfur dose roughly one-seventh as large as Pinatubo, released annually over the Arctic for two decades. (The comparison is messy, but deploying the Pinatubo Option to compensate for expected global warming would use roughly half as much sulfur as was released by the volcano in 1991.) Given the extreme variability in Earth's climate system, it might take an experiment of that size or bigger to get useful data on major side effects, Robock says, and the impacts of such a test would be profound. In the simulation, global rain and snowfall were reduced and the summer monsoon over Africa and Asia was weakened. Some models say that plants, including crops, might thrive with less heat; Robock fears testing the Pinatubo Option could affect "the food and water supplies of 2 billion people." So one group of scientists argues that by gradually increasing the size of our experiments, we can get as much data as possible with minimal risk. Another says that only a dangerous, full-scale deployment can shed light on the crucial issue of how effective a particular dose will be. The point isn't who's right—it's that countries won't be willing to run these potentially harmful tests if scientists can't agree on how useful they'll be. Indeed, those disputes might continue even after a given set of tests was completed. In the case of another potential form of geoengineering—adding trace amounts of iron to the ocean to grow algae to suck carbon out of the sky—we've already seen an example of how this plays out. A dozen small-scale experiments to grow algae blooms have been conducted around the world's oceans since 1993, and there's still no consensus among oceanographers as to what the results suggest. Some scientists say a few of the experiments have worked to permanently sequester some carbon in the deep ocean; others say that carbon was recycled back to the surface. The authors of the Nature and Science papers hope that international agreements or treaties could be put in place ahead of time, to avert ecological harm or political strife. The surest way to minimize risks would be Keith's approach: conduct internationally coordinated, modest tests over a decade or more, teasing out the cooling signal over time. But given the inevitable political and scientific conflicts that would arise, such a long series of tests will just give countries more chances to bail on the project. That's why we may not see any medium-scale, informative tests until we're unlucky enough to be faced with looming, catastrophic changes in the climate. At that point, we'll have no choice but to go all out with a full deployment, with little more than computer-based risk estimates to guide us. From one dangerous global experiment—our current carbon binge—to another. Science reporter Eli Kintisch's book, Hack the Planet: Science's Best Hope—or Worst Nightmare—for Averting Climate Catastrophe, was published in April. Photograph of Mount Pinatubo by Romeo Gacad/AFP/Getty Images.
<urn:uuid:1e51f25c-fbb1-48e5-8a10-990c02e5b3c6>
3.5625
662
Truncated
Science & Tech.
41.045412
ID selectors are a lot like class selectors. They work in a very similar way, and have a similar syntax. The essential difference is that while class selectors apply to one or more elements on a page, ID selectors apply to exactly one element. For example, while you can have many lists of ingredients, you might have only one main title. Given how class selectors work you may find yourself questioning the need for ID selectors at all. It's a good question, but the best answer to it is an unsatisfying "because they're there". The fact is that the ID attribute exists in HTML 4, so there's no reason not to use it in a CSS selector. As we talk about below, the ID attribute and selector have come to be widely used in CSS positioning. As just noted, the syntax for the ID selector is much like that for the class selector. Again there are two kinds of selector: those associated with a particular type of element, and the more general selector that can apply to any element with an ID that matches the ID of the selector. We'll see shortly what it means for an element to have an ID. Solitary ID selectors ID selectors that can apply to any type of element have the simple syntax #idname, for example, #title. This selector selects the single element on a page that has an ID of "title". ID selectors that apply only to a particular type of element (for instance only headings of level one, or paragraphs) have the syntax element-name#idname, for example h1#title. This selector selects the single heading of level 1 with an ID of "title". It will not select any other element with that ID, nor will it select any other headings of level 1. Note that an ID comprises only alphanumeric characters and hyphens. It cannot include underscores and other characters, nor spaces. An ID cannot begin with a numeral. We've already mentioned that ID selectors, like the ID attribute, are kind of redundant when you consider that you already have class selectors and the class attribute. We saw in the section on class selectors that HTML 4.0 introduced the class attribute. HTML elements can have classes, and to give an element a class, you add the attribute class to the opening tag for that element, like this: <p class="ingredients">. IDs are very similar. HTML 4.0 introduced the ID attribute, which is given to an element in a very similar way, by adding the ID attribute to the element tag. For example, to give a heading an ID of title, you use the following tag, <h1 id="title">. Note that in any valid HTML document there should only be one heading 1 with an ID of title. To select this <h1> you would then use the selector h1#title. One area in which ID selectors have become the de facto standard though is in CSS positioning, even though you could do exactly the same thing with classes. Because positioned elements are very often unique on each page, they are distinguished from each other using ID attributes. Then, in the style sheet they are selected with ID selectors.
<urn:uuid:f2829c41-dc87-4733-931f-e81d658f03ff>
3.046875
688
Tutorial
Software Dev.
51.687517
Cascading Style Sheets (CSS) - Use style sheets to control layout and presentation - All pages using CSS must be usable when CSS is disabled or not supported. For example, when an HTML document is rendered without style sheets, it must still be possible to read the document - Allow users to override the formatting, if required. For example, allow the font size or colour and contrast to be changed to suit an individual's needs. - Style sheets must not overwrite the corporate settings for corporate navigation or body text Cascading style sheets (CSS) give the designer of a web page the ability to separate the styling elements from the content. They are of fundamental importance to the usability of a page. It is essential that pages are still usable (graceful degradation) when CSS is not supported by a browser. CSS is the preferred method to control layout and formatting. Do not use tables as a means to control the layout or formatting of a page. Influences on this standard: UK Government Guidelines - Central Office of Information guidelines - Delivering Inclusive Websites: "Consistent design (is) achieved through the use of Cascading Style Sheets where the web developer can reuse the same layout and design for each page in the website. This can be helpful for users with cognitive impairments, and benefits all users". W3C Web Accessibility Initiative: "Tables should be used to mark up truly tabular information.... Content developers should avoid using them to lay out pages."
<urn:uuid:26691a2e-c714-414f-856e-bbe0abbcdd68>
3.765625
309
Knowledge Article
Software Dev.
39.230238
¡Viva la Science! When tiny arms became crooked legs Big Bird is a terrible example to us all, at least when it comes to bird anatomy. Check out those gams and you’ll see why. Like humans, real birds are bipedal, but their legs aren’t straight up and down. Instead, bird legs zigzag in such a way that birds are essentially in a permanent crouch, using their muscles to resist gravity. We humans don’t have to do that―our weight is borne passively on our straighter frames. But of course, we can’t fly. The crouching posture peculiar to birds, says a recent study published in Nature, has everything to do with their evolution from dinosaur ancestors into animals capable of flight. Previously, it was believed that the bird stance came about as a way for bird bodies to balance as massive T-Rex-style tails disappeared. Using 3-D digital reconstruction, however, the authors of the study determined that the key change was actually in the size of those adorable dinosaur arms. According to co-author John R. Hutchinson: The tail is the most obvious change if you look at dinosaur bodies. But as we analyzed, and reanalyzed, and punishingly scrutinized our data, we gradually realized that everyone had forgotten to check what influence the forelimbs had on balance and posture, and that this influence was greater than that of the tail or other parts of the body. Read more about the evolutionary adaptation that made bird flight possible here.
<urn:uuid:57257f9f-57af-489f-a7e8-60b99fc6103f>
3.59375
319
Truncated
Science & Tech.
50.724911
The Current Dewpoint image shows the dew point, contoured every 10 degrees F, for the most recent hour. The dew point (or dewpoint) is the temperature to which a given parcel of air must be cooled, at constant barometric pressure, for water vapor to condense into water. The condensed water is called dew. The dew point is a saturation point. When the dew point temperature falls below freezing it is called the frost point, as the water vapor no longer creates dew but instead creates frost or hoarfrost by deposition. The dew point is associated with relative humidity. A high relative humidity indicates that the dew point is closer to the current air temperature. If the relative humidity is 100%, the dew point is equal to the current temperature. Given a constant dew point, an increase in temperature will lead to a decrease in relative humidity. At a given barometric pressure, independent of temperature, the dew point indicates the mole fraction of water vapor in the air, and therefore determines the specific humidity of the air. The dew point is an important statistic for general aviation pilots, as it is used to calculate the likelihood of carburetor icing and to estimate the height of the cloud base.
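The dependence of dew point on temperature and relative humidity can be approximated numerically with the Magnus formula. The coefficients below are one commonly published set; exact constants vary slightly between sources, so treat this as a sketch rather than an authoritative formula.

import math

A, B = 17.625, 243.04  # Magnus coefficients; B is in degrees Celsius

def dew_point_c(temp_c, rel_humidity_pct):
    # Approximate dew point in degrees C from air temperature and relative humidity.
    gamma = math.log(rel_humidity_pct / 100.0) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

print(round(dew_point_c(30.0, 100.0), 1))  # 30.0: at 100% RH the dew point equals the temperature
print(round(dew_point_c(30.0, 50.0), 1))   # about 18.4: drier air, much lower dew point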
<urn:uuid:9e4e3139-377b-4ecb-a72c-f138a75585c7>
4.3125
280
Knowledge Article
Science & Tech.
33.22753
Scripting is such a great way to relax. Well, at least in my eyes, that's a great way to relax. Scripting can be as easy as telling your OS it needs to do something at a specific time, or as complicated as synchronizing multiple folders on various servers on various platforms. I thought it would be fun to post a little `Computer Science 101` here. Most of us developers love our terminal (possibly even the Microsoft lovers), and we all know how to start off a basic shell application or script. Shebang, hashbang... whatever you call her, she is there to get your script aimed in the right direction. In computing, a shebang (also called a sha-bang, hashbang, pound-bang, hash-exclam, or hash-pling) is the character sequence consisting of the characters number sign and exclamation mark (that is, "#!") when it occurs as the initial two characters on the initial line of a script. Under Unix-like operating systems, when a script with a shebang is run as a program, the program loader parses the rest of the script's initial line as an interpreter directive; the specified interpreter program is run instead, passing to it as an argument the path that was initially used when attempting to run the script. For example, if a script is named with the path "path/to/script", and it starts with the following line: #!/bin/sh then the program loader is instructed to run the program "/bin/sh" instead (usually this is the Bourne shell or a compatible shell), passing "path/to/script" as the first argument. The shebang line is usually ignored by the interpreter because the "#" character is a comment marker in many scripting languages; some language interpreters that do not use the hash mark to begin comments (such as Scheme) may still ignore the shebang line in recognition of its purpose. Some typical shebang lines: #!/bin/sh — Execute the file using sh, the Bourne shell, or a compatible shell #!/bin/csh — Execute the file using csh, the C shell, or a compatible shell #!/usr/bin/perl -T — Execute using Perl with the option for taint checks #!/usr/bin/php — Execute the file using the PHP command line interpreter #!/usr/bin/python -O — Execute using Python with optimizations to code #!/usr/bin/ruby — Execute using Ruby Shebang lines may include specific options that are passed to the interpreter (see the Perl example above). However, implementations vary in the parsing behavior of options; for portability, only one option should be specified (if any) without any embedded whitespace. Go ahead, read more. It's always fun to go back and have a little refresher, isn't it?
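As a minimal demonstration (the file name and interpreter path are just the usual conventions, not anything mandated), save the following two lines as hello.py:

#!/usr/bin/env python3
print("launched via the shebang")

After chmod +x hello.py, running ./hello.py causes the program loader to execute /usr/bin/env python3 with the script path as its argument, which is exactly the mechanism described above.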
<urn:uuid:39c641dd-25d2-4522-a3e4-72f54a70df59>
3.03125
626
Personal Blog
Software Dev.
52.37299
Guile-GSL is a C/Scheme library extension for Guile, GNU's Ubiquitous Intelligent Language for Extensions. It implements a binding to the GNU Scientific Library (GSL). The purpose is to provide an environment similar to GNU Octave, but using Scheme as the programming language. Guile-GSL is a part of GEE, a collection of extensions for Guile. This is a tutorial on the usage of Guile-GSL, not on how to program with Scheme. To understand what's going on here you have to know the basics of Scheme: how to define variables, how to define procedures, how to invoke procedures, and what a cons is. Many people can learn the Scheme basics in a few days; the Internet is full of introductory papers on the subject. The first chapters of SICP are perfect:

Structure and Interpretation of Computer Programs (SICP), Harold Abelson and Gerald Sussman, Copyright (C) 1996 Massachusetts Institute of Technology.

This book is available on the Internet; there is also a GNU Texinfo version. Installation instructions are available on this wiki: GEE-Guile-GSL-install. To load the module:

(define-module (this-script)
  #:use-module (oop goops)
  #:use-module (gee math oop)
  #:use-module (gee math gsl)
  #:duplicates merge-generics)
;; your code goes here

When Guile-GSL is installed, a couple of scripts are placed on the system to use the modules from the Guile REPL (read, eval, print loop): a Bourne shell interface script that sets up the environment and then runs guile with gsh.scm as the script, installed under a path like /usr/local/bin/gsh; and a Guile script that loads Guile-GSL and other modules and then invokes the Guile REPL, installed under a path like /usr/local/libexec/guile-gsl/0.1.0/gsh.scm. If the installation was successful, at the shell prompt we can do:

$ gsh
Guile-GSL version 0.1.0
Written by Marco Maggi.
Copyright (C) 2006, 2007 by Marco Maggi.
This is free software; see the source or use the '--license' option for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
gsh>

gsh> is the default Guile-GSL shell prompt. Both scripts are very simple, and you are encouraged to inspect and modify them to suit your needs. The gsh shell script is there exactly to be customised with site-specific environment variables.

Vectors and matrices are the basic objects of math computations with Guile-GSL. The easiest interface is the one that uses the GOOPS extension of Guile; there is only one class that we have to be aware of, <gsl>, and we rarely need to use it explicitly. Quick code for an overview:

(define a #,(gvr 3 1 2 3))
(define b #,(gvr 3 4 5 6))
(display (+ a b))     ;; -> [5 7 9]
(write (+ a b))       ;; -> #,(gvr 3 5 7 9)
(display (dot a b))   ;; -> 32.0
(display (cross a b)) ;; -> [-3.0 6.0 -3.0]

We may want to define vectors and matrices in one of three ways: just define an "empty" object, define an object with known constant elements, or define an object with elements that we have to compute using math functions. We use the Scheme "read syntax" with the SRFI-10 features included in Guile. There are 4 symbols:

gvr: declares vectors of real numbers (gvr = GSL vector real);
gvc: declares vectors of complex numbers (gvc = GSL vector complex);
gmr: declares matrices of real numbers (gmr = GSL matrix real);
gmc: declares matrices of complex numbers (gmc = GSL matrix complex).

The definitions look like the following:

#,(gvr 3 1 2 3)
#,(gvc 3 1+2i 2-3i 3+4i)
#,(gmr (2 . 3) 1 2 3 4 5 6)
#,(gmc (2 . 3) 1+2i 2-3i 3+4i 4-5i 6+7i 8-9i)

The first element after the tag must be the dimension: for vectors, the number of elements; for matrices, a cons holding the number of rows in the car and the number of columns in the cdr. Notice that when using the read syntax the cons does not need to be quoted. After the tag and the dimension we simply write the elements separated by blanks (spaces, tabs, newlines). The elements must be numbers.

Sometimes we need to define an object with known dimension that will be filled with values later. We do it like this:

(define a (make-gsl-vector-real 3))
(define b (make-gsl-vector-complex 3))
(define c (make-gsl-matrix-real '(3 . 5)))
(define d (make-gsl-matrix-complex '(19 . 2)))

As Scheme dictates, outside the read syntax the conses must be quoted. All the elements are set to 0.0; optionally we can give an additional parameter representing the default:

(define a (make-gsl-vector-real 3 4.5)) ;; -> [4.5 4.5 4.5]

A special maker exists for identity matrices:

(eye 3)
;; -> [[1 0 0]
;;     [0 1 0]
;;     [0 0 1]]

Defining objects with computed values is similar to defining empty objects in that we invoke functions for it:

(define a (read-gsl-vector-real 3 (sin 0.2) (cos 0.5) (tan 0.6)))
(define b (read-gsl-vector-complex 3 (sin 0.2+9i) (cos 0.5) (tan -0.6i)))
(define c (read-gsl-matrix-real '(2 . 3)
  (sin 0.2) (cos 0.5) (tan 0.6)
  (sin 0.3) (cos 0.9) (tan 0.2)))
(define d (read-gsl-matrix-complex '(2 . 3)
  (sin 0.2) 0.5 (tan 0.6)
  (sin 0.2+9i) (cos 0.5) (tan -0.6i)))

Once we have defined vectors and matrices we just apply functions to them. The four arithmetic functions +, -, * and / do what we expect, element by element; there is also the unary - that negates all the elements:

(- #,(gvr 3 1 2 3)) ;; -> [-1 -2 -3]

Other functions, like the hyperbolic ones, are applied to all the elements; the following form:

(sinh #,(gvr 3 1 2 3))

is equivalent to:

(read-gsl-vector-real 3 (sinh 1) (sinh 2) (sinh 3))

DOT does the row-by-column product. If the arguments are both vectors, the operation is the scalar product; if the arguments are a matrix and a vector:

(dot #,(gmc (2 . 3) 1+2i 2-3i 3+4i 4-5i 6+7i 8-9i)
     #,(gvr 3 1 2 3))
;; -> #,(gvc 2 14.0+8.0i 40.0-18.0i)

and if they are two matrices:

(dot #,(gmc (2 . 3) 1+2i 2-3i 3+4i 4-5i 6+7i 8-9i)
     #,(gmc (3 . 2) 1+2i 2-3i 3+4i 4-5i 6+7i 8-9i))
;; -> #,(gmc (2 . 2) 5.0+48.0i   61.0-16.0i
;;                   115.0+50.0i 35.0-168.0i)

There is no such concept as a row vector or a column vector: the vectors are interpreted as rows or columns as need be. We can also transpose matrices and take the conjugate.

The setter/getter synopsis is ugly, but that's the way it is. The synopsis of all the getters is:

(type object key)

so to get elements from a vector:

(define a #,(gvr 3 1 2 3))
(elm a 0) ;; -> 1
(elm a 1) ;; -> 2
(elm a 2) ;; -> 3

Indexes are zero-based. To get elements from a matrix we have to select both the row and the column:

(define a #,(gmr (2 . 3) 1 2 3 4 5 6))
(elm a '(0 . 0)) ;; -> 1
(elm a '(0 . 1)) ;; -> 2
(elm a '(1 . 2)) ;; -> 6

To get a whole row:

(row a 0) ;; -> [1 2 3]
(row a 1) ;; -> [4 5 6]

and to get a whole column:

(column a 0) ;; -> [1 4]
(column a 1) ;; -> [2 5]

The synopsis of all the setters is:

(set! (type object key) value)

To set elements:

(define a #,(gvr 3 1 2 3))
(set! (elm a 0) -1) ;; -> [-1 2 3]

(define b #,(gmr (2 . 3) 1 2 3 4 5 6))
(set! (elm b '(1 . 2)) -1)
;; -> #,(gmr (2 . 3) 1 2  3
;;                   4 5 -1)

To set a whole row or column:

(define b #,(gmr (2 . 3) 1 2 3 4 5 6))
(set! (row b 0) #,(gvr 3 -1 -2 -3))
;; -> #,(gmr (2 . 3) -1 -2 -3
;;                    4  5  6)
(set! (column b 0) #,(gvr 2 -8 -9))
;; -> #,(gmr (2 . 3) -8 -2 -3
;;                   -9  5  6)

To set a diagonal:

(define b #,(gmr (2 . 3) 1 2 3 4 5 6))
(set! (diagonal b 1) #,(gvr 2 -8 -9))
;; -> #,(gmr (2 . 3) 1 -8  3
;;                   4  5 -9)
<urn:uuid:77f8cfcb-62b1-4f60-8a04-d37ffae12506>
2.9375
2,401
Documentation
Software Dev.
94.20755
Extinction occurs when a species dies out completely. It could be argued that a species is effectively extinct before that point, say, if only males or only females remain in a sexually reproducing species, or if the remaining pairs are geographically isolated and prevented from mating. Evolutionists believe that over the history of the earth, many more species have become extinct than have existed at any one time. Sometimes a species is replaced by descendants whose form has altered enough that they are no longer considered the same species. An example of this is the history of the modern horse, which descended through a series of distinct, and now extinct, precursor species.
<urn:uuid:3d6be173-e326-4599-8522-a0195b264b86>
3.703125
126
Knowledge Article
Science & Tech.
29.478642
Ball, M. and Pinkerton, Harry (2006) Factors affecting the accuracy of thermal imaging cameras in volcanology. Journal of Geophysical Research - Space Physics, 111, B11203. ISSN 0148-0227. Full text not available from this repository.

Volcano observatories and researchers are recognizing the potential usefulness of thermal imaging cameras both before and during volcanic eruptions. Obvious applications include measurements of the surface temperatures of active lava domes and lava flows to determine the location of the most active parts of these potentially hazardous features. If appropriate precautions are taken, the new generation of thermal imaging cameras can be used to extract quantitative as well as qualitative information on volcanic activity. For example, they can be used to measure the temperature of lava on eruption and to reveal how the crust cools during flow emplacement. This is important for the validation of lava flow models. To ensure that meaningful temperatures are collected, thermal imaging data must be corrected for instrumental errors, the emissivity of the surface being imaged, atmospheric attenuation, viewing angle and surface roughness. Controlled laboratory experiments have been undertaken to determine the emissivity of smooth and rough samples and the effects of viewing angle, and to quantify the errors. Measured emissivities range from 0.973 ± 0.002 for smooth samples of basalt to 0.984 ± 0.004 for rough samples. Errors in emissivity-corrected temperatures are within ±15°C for lava at 1100°C. Variations from individual sensor receptors, which provide individual pixel temperature data, were found to be 0.6%, and instrumental errors of the cameras used were 0.1%. Apparent temperatures were found to vary by less than the instrumental error for viewing angles up to 30 degrees from normal to the lava, and thereafter increased by ∼1°C per degree. By increasing the apparent viewing distance of a small vent on Mount Etna from 1.5 to 30 m, the maximum temperature is shown to decrease by 53°C due to integrated averaging of radiance over increased pixel areas. At a viewing distance of 250 m the maximum temperature decreased by ∼200°C, with a further 75°C decrease due to atmospheric attenuation for a relative humidity of 50%. However, errors in relative humidity measurements can lead to atmospheric attenuation correction inaccuracies of up to 200°C at viewing distances of 1 km. We show how temperatures measured using thermal imaging cameras can be corrected to give improved estimates of temperature distributions on the surface of active lava flows.

Journal or Publication Title: Journal of Geophysical Research - Space Physics
Additional Information: Copyright (2006) American Geophysical Union. Further reproduction or electronic distribution is not permitted. Ball was a Lancaster PhD student. The unique set of laboratory measurements that forms the core of this research is essential for generating realistic temperature data on active volcanoes using thermal imaging cameras. Pinkerton is the corresponding author. RAE_import_type: Journal article. RAE_uoa_type: Earth Systems and Environmental Sciences
Uncontrolled Keywords: thermal imaging; lava; emissivity
Subjects: G Geography. Anthropology. Recreation > GE Environmental Sciences
Departments: Faculty of Science and Technology > Lancaster Environment Centre
Deposited On: 08 Apr 2008 09:32
Last Modified: 26 Jul 2012 15:45
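As a back-of-the-envelope illustration of the emissivity correction discussed in this abstract, here is a Python sketch that inverts the Stefan-Boltzmann relation L = εσT⁴ to recover a surface temperature from an apparent (brightness) temperature. This is a simplification of mine for illustration only: the paper's actual corrections also account for atmospheric attenuation, viewing angle and surface roughness, and real cameras measure over a finite spectral band rather than the full grey-body curve assumed here.

# Minimal sketch: grey-body emissivity correction for a radiometric reading.
# Assumes total (broadband) radiance, L = emissivity * sigma * T^4, which is
# a simplification of what band-limited thermal cameras actually measure.

def corrected_temperature_c(apparent_c: float, emissivity: float) -> float:
    """Recover surface temperature (deg C) from an apparent brightness temperature.

    The camera reports the temperature a perfect blackbody (emissivity = 1)
    would need in order to emit the observed radiance; dividing by the true
    emissivity and taking the fourth root undoes that assumption.
    """
    apparent_k = apparent_c + 273.15
    surface_k = apparent_k / emissivity ** 0.25
    return surface_k - 273.15

# Example using values from the abstract: rough basalt, emissivity ~0.984,
# and lava near 1100 deg C. The correction here is only a few degrees C,
# well inside the ±15°C error bound quoted above.
print(round(corrected_temperature_c(1094.5, 0.984), 1))  # -> ~1100.0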
<urn:uuid:f2ab65e7-c394-4871-af74-567301682c56>
2.75
702
Academic Writing
Science & Tech.
24.749416
So... what have X-rays and gamma-rays done for ME? Very often, technologies developed for one purpose can be re-used in other ways. NASA has an active program to encourage this for technologies used in its research and exploration programs, to get the most out of each taxpayer dollar. It does this by allowing businesses to use NASA inventions. Studying the X-rays and gamma-rays that come from astronomical objects requires very sensitive instruments. Astronomers always want to be able to study fainter and more distant objects, so they are always designing and building better detector systems. These detectors, though developed to study distant objects in the Universe, have been found to be useful here on Earth as well.
<urn:uuid:2db9ca8b-5ffc-484d-99d8-2088d581cb1a>
3.078125
161
Knowledge Article
Science & Tech.
46.822922