At first, I was somewhat confused by the critical texts issued by the famous environmentalist George Monbiot against the use of biochar as a carbon dioxide reduction agent. Apart from some obvious exaggerations ('turning the planet into charcoal', 'primary source of world heating fuel') and misunderstandings, he talks about the destruction of forests, enormous monoculture plantations, and so on, that would be the effects of large-scale use of charcoal - or biochar, as the term is used to distinguish it from fossil coal. He also writes dismissively and ironically about the obviously beneficial by-products of producing char: the increased plant production from the enhanced microbial activity achieved by mixing char into the soil, and the use of the heat and tars emitted as by-products of the charring process (pyrolysis). He claims that biochar proponents say these by-products could replace the use of fossil fuels throughout the world. At first, I simply thought that Monbiot, and others with him, were reacting with some sort of conditioned reflex against anything that looks like a behavioural turn, or the introduction of a method that could be applied generally but differs from what we do today. Monbiot's part in it surprised me, though, since he had earlier accepted and approved positions that are far more radical. But then I realised the fundamental mistake, made not only by Monbiot but also by some of the biochar proponents. Either they think about removing all the excess carbon from the atmosphere immediately, in one stroke. That is at least 35 ppm worth, or 2.12 x 35 Gt carbon (= about 75 Gt), or almost three times the current net annual plant production of coarse biomass; it would wipe out the plant cover. Or they think about removing all the current emissions (8 Gt C) of carbon as carbon dioxide, plus, hopefully, an extra Gt annually, to successively decrease the carbon dioxide content of the air and move out of the danger zone. 
That would require about two-thirds of the annual production, leading to diversity loss and ruthless exploitation. In that respect, Monbiot is perfectly right. But let us, just for a second, imagine that a responsible way to solve the problem of excess atmospheric carbon dioxide could be found. Then, I imagine that a maximum of 15-20% of the net annual biomass production could be appropriated for charring. That is about the same size as the global forestry sector, which certainly has had severe adverse effects on the face of the Earth. But in contrast to the forestry industry, biomass for carbonisation can be of any kind, from rice husks and other harvest surplus, to twigs and branches, to plants purposely grown for carbon dioxide absorption. Just look at your local environment with 'the eyes of a sequesterer'! Plant production for food, to increase local diversity, or to absorb nutrient leakage does not exclude charring of the residues or decaying plants. A 'black revolution' does not necessarily exclude ethically correct management. Charring just 15% of the global net production does not, on its own, considerably change the global atmospheric carbon dioxide content. And most of us agree that it is impossible to char more, because doing so would undermine our life-support system and only let us jump from the frying pan into the fire. But here, the normally futile way of addressing the climate change problem starts to make sense: if, simultaneously with increasing charring, we could reduce our carbon dioxide emissions considerably - say, by 90% - then the emissions would be smaller than the possible sequestration! With the figures above, the sequestration exceeds the emissions by about 1-2 Gt C per year. 
This reduction in the use of fossil fuels will also reduce our capacity to make food from oil (we call this activity agriculture), but that is another story... Pursuing these combined goals would mean that we had started on a route towards a real decrease in the global carbon dioxide content, together with a possible increase in biodiversity and soil fertility. The attached graph assumes a scenario of increasingly widespread charring combined with a 90% emission reduction achieved over a period of 20 years. Of course it is severely unrealistic, but it points out that there is at least a theoretical possibility of freeing ourselves from the current problems. It also shows that such a Herculean effort may stop the increase of the atmospheric carbon dioxide content within two decades. [Be like the blackbird: It sees the morning long before the sun has risen]
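The essay's back-of-the-envelope figures can be checked in a few lines. This sketch uses only the numbers given in the text (2.12 Gt C per ppm, 35 ppm of excess, 8 Gt C/yr of emissions); the 9 Gt C/yr full-scale sequestration rate and the linear 20-year ramp are illustrative assumptions standing in for the scenario in the graph, not the author's exact model.

```python
# Sketch of the essay's scenario: emissions cut by 90% and charring
# ramped up linearly over 20 years (ramp shape is an assumption).
PPM_TO_GT = 2.12                     # Gt C per ppm of atmospheric CO2
excess_gt = 35 * PPM_TO_GT           # ~74 Gt C of excess carbon in the air
emissions = 8.0                      # Gt C emitted per year today
sequestration = 9.0                  # Gt C/yr charred at full scale (assumed)

atmosphere = excess_gt
for year in range(1, 21):
    ramp = year / 20                 # fraction of the transition completed
    net = emissions * (1 - 0.9 * ramp) - sequestration * ramp
    atmosphere += net                # net flux into the atmosphere this year

print(round(excess_gt), round(atmosphere, 1))  # prints: 74 64.1
```

Even under these generous assumptions the excess only falls by about 10 Gt C in two decades, which matches the essay's point: charring alone cannot fix the problem, but combined with deep emission cuts it can at least turn the curve downward.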
(Submitted October 28, 1996) I am an undergraduate in Astrophysics at the University of Calgary. I am doing a small research project on the evidence for and against a black hole at the center of the Milky Way. I found your email address on the StarChild page dealing with this topic. I was wondering if you had any suggestions of articles or books discussing this subject. Thank you for your time. It is generally believed that a black hole does exist at the center of the Milky Way galaxy. The latest value we have seen is that it has a mass of about 2,000,000 times that of the Sun. In fact, it is believed that this may be common for most galaxies. Observational evidence supports these ideas more and more. However, you must keep in mind that due to the large absorption and source confusion when trying to look into the center of a galaxy, it is very, very hard to see what's there! So we have to be clever about the observations we make and the interpretations of these observations. This is one reason that X-rays and gamma-rays are powerful probes in trying to answer such questions; they are much more likely to "get out" of the central region of the galaxy than other wavelengths. Some references you may find useful (and which give many more references) are: - A more general Milky Way reference: Blitz, Binney, Lo, Bally & Ho 1993, Nature 361, 417. - Sky and Telescope, June 1996, p. 28. - "ASCA View of Our Galactic Center: Remains of Past Activities," Publications of the Astronomical Society of Japan, v. 48, p. 249-255.
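To get a sense of scale for the quoted mass, one can compute the Schwarzschild radius of a 2-million-solar-mass black hole. This little calculation is an illustration added here, not part of the original answer:

```python
# Schwarzschild radius r_s = 2GM/c^2 for the ~2,000,000 solar mass
# black hole quoted above (an order-of-magnitude illustration only).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

M = 2.0e6 * M_sun
r_s = 2 * G * M / c**2
print(r_s / AU)        # roughly 0.04 AU - tiny on galactic scales
```

The event horizon itself would fit comfortably inside Mercury's orbit, which is part of why the central object is so hard to observe directly through the absorption and source confusion mentioned in the answer.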
Team members at NASA's Jet Propulsion Laboratory share the challenges of the Curiosity Mars rover's final minutes before landing on the surface of Mars. It may be described as reasoned - even genius - engineering. But even the engineers who designed it agree it looks crazy. Six vehicle configurations, 76 pyrotechnic devices, 500,000 lines of code, zero margin for error. What exactly will it take to land NASA's next Mars rover, Curiosity, on the surface of Mars on Aug. 5? The latest video from NASA's Jet Propulsion Laboratory breaks down all "7 minutes of terror."
Updated: April 2009 Represents a button that can be selected, but not cleared, by a user. The IsChecked property of a RadioButton can be set by clicking it, but it can only be cleared programmatically. Assembly: PresentationFramework (in PresentationFramework.dll) XMLNS for XAML: http://schemas.microsoft.com/winfx/2006/xaml/presentation, http://schemas.microsoft.com/netfx/2007/xaml/presentation A RadioButton has two states: true or false. The RadioButton is a control that is usually used as an item in a group of RadioButton controls. However, it is possible to create a single RadioButton. Whether a RadioButton is selected is determined by the state of its IsChecked property. When a RadioButton is selected, it cannot be cleared by clicking it. When RadioButton elements are grouped, the buttons are mutually exclusive. A user can select only one item at a time within a group. You can group RadioButton controls by placing them inside a parent container or by setting the GroupName property on each RadioButton. Dependency properties for this control might be set by the control's default style. If a property is set by a default style, the property might change from its default value when the control appears in the application. The default style is determined by which desktop theme is used when the application is running. For more information, see Themes. The following example shows how to create RadioButton controls, group them inside a container, and handle the Checked event. The following code sample creates two separate groups: colorgrp and numgrp. The user can choose one RadioButton in each group. Windows 7, Windows Vista, Windows XP SP2, Windows Server 2008 R2, Windows Server 2008, Windows Server 2003 The .NET Framework and .NET Compact Framework do not support all versions of every platform. For a list of the supported versions, see .NET Framework System Requirements.
It might be worth taking a look at the original text, Galileo's Discourses on Two New Sciences. The reasoning you're looking for is on the Third Day, a translation of which may be found online. The relevant parts are labelled Theorem I and Theorem II in the above-linked translation. To derive that distance in a uniformly accelerated motion (e.g. free fall) goes as time squared, Galileo first argues that The time in which any space is traversed by a body starting from rest and uniformly accelerated is equal to the time in which that same space would be traversed by the same body moving at a uniform speed whose value is the mean of the highest speed and the speed just before acceleration began. This is argued on a graphical basis (see above link). However, even though the pictures may look pretty similar to modern functional representations (e.g. of velocity vs. time) and the arguments involve finding equal areas in different situations, the arguments never involve an actual calculation of an "area" with mixed units, which wasn't yet conceivable at the time (e.g. $m/s \cdot s = m$). In fact, the whole Third Day seems very convoluted precisely because the notion of velocity wasn't clearly numerical yet, since only commensurable (same-unit) quantities could conceivably be operated with (added, divided, ...). Proportions of non-commensurable quantities, however, could be compared (today we'd say they are dimensionless), as in If a moving object traverses two distances in equal intervals of time, these distances will bear to each other the same ratio as the speeds (earlier in the Third Day)
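In modern notation (an anachronism, but a compact restatement of the two theorems), the mean-speed argument runs:

$$\bar{v} = \frac{0 + v_{\text{final}}}{2} = \frac{at}{2}, \qquad d = \bar{v}\,t = \frac{1}{2}at^2,$$

so that for two intervals starting from rest, $d_1/d_2 = t_1^2/t_2^2$ - the dimensionless, proportional form in which Galileo could actually state the result without ever multiplying a speed by a time.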
The prevailing theory of dark matter is the Cold Dark Matter (CDM) hypothesis. This hypothesis is favored because it is assumed that the dark matter particles are non-relativistic - i.e. slow moving. Because they are slow moving, they can essentially orbit in and around the original small density fluctuations, making these small density fluctuations stable. These small density fluctuations can clump into denser clumps due to three-body gravitational interactions. A three-body interaction of small clumps can result in one of the clumps being ejected at a higher speed while the other two clumps slow down and become more gravitationally bound. However, baryonic matter can clump more effectively than CDM since the electromagnetic interactions allow baryonic matter to cool more effectively than the CDM clumps. That is why the prevailing theory is that the DM forms halos that are more distended than the clumps of baryonic matter. Thus a visible galaxy will have a more extended DM halo that extends far beyond the visible stars of the galaxy. The DM halo will also be more spherical than the flattened galactic disk. It is true that most DM models assume the DM particles do have weak interactions, but these interactions aren't required by the CDM model. However, these weak interactions are required if any of the dark matter detection experiments are to be successful. [Note: After more research, I discovered that my original answer was wrong. I now think this answer is correct. Sorry about that.]
- Periodic structure in nuclear matter (1992) - The properties of nuclear matter are studied in the framework of quantum hadrodynamics. Assuming an ω-meson field, periodic in space, a self-consistent set of equations is derived in the mean-field approximation for the description of nucleons interacting via σ-meson and ω-meson fields. Solutions of these self-consistent equations have been found: The baryon density is constant in space, however, the baryon current density is periodic. This high density phase of nuclear matter can be produced by anisotropic external pressure, occurring, e.g., in relativistic heavy ion reactions. The self-consistent fields developing beyond the instability limit have a special screw symmetry. In the presence of such an ω field, the energy spectrum of the relativistic nucleons exhibits allowed and forbidden bands, similar to the energy spectrum of the electrons in solids.
[Mystery bird] Great tit, Parus major, photographed in Helsinki, Finland. [I will identify this bird for you tomorrow] Image: GrrlScientist, 24 November 2008 [larger view]. Please name at least one field mark that supports your identification. Rick Wright, Managing Director of WINGS Birding Tours Worldwide, writes: Chickadees all look alike: chubby, rather long-tailed little birds with fluffy plumage, big heads, and black-and-white faces. In North America, our chickadees are a colorless lot, only the Chestnut-backed straying from the standard pattern of gray and white. It's different in the Old World, where chickadees (there called tits, 'little things', as in "tidbits") come in a range of bright colors. Thus, our quiz bird is green on the back, blue on the wings, and yellow beneath. In Europe, two species — Blue Tit and Great Tit — show that color combination, but only Great Tit has the extensive black "helmet" with bright white cheeks. This bird's relatively muted colors and apparent lack of a strong black stripe down the breast suggest that it is a female. Like Tufted Titmice in eastern North America, Great Tits will begin to sing just after the winter solstice, and their buzzy, syncopated "dzeezeeba, dzeezeeba" chant livens up the gray days of winter in cities and towns across Europe.
Calculations show that the celestial visitor could be dazzlingly bright in November 2013 and be easily visible in broad daylight as it rounds the Sun. Comet ISON is so named because it was first spotted on photos taken by Vitali Nevski and Artyom Novichonok from Russia using the International Scientific Optical Network telescope... That makes it a type of comet called a sungrazer, and there is a risk that the comet - essentially a giant ball of rock and ice - will break up when it makes that close approach. But it could become brighter than the greatest comet of the last century, Comet Ikeya-Seki, which excited astronomers in 1965... The article at The Telegraph indicates that this comet should be "fifteen times brighter than the moon." Comet ISON, which has the official label C/2012 S1, appears to be on a nearly parabolic orbit, which leads scientists to believe that it is making its first trip through the Solar System. This means it may have been dislodged from a vast reservoir of icy debris surrounding the Sun far beyond the planets, called the Oort Cloud. It is a giant ball of rock and ice that is likely to be packed with volatiles, including water ice, which will erupt as brilliant jets of gas and dust when the comet is at its best.
Water: National Coastal Condition Report
National Coastal Condition Report (2001) Download Site
Coastal Research and Monitoring Strategy (PDF, 1.1MB, 70 pages) About PDF files
The Office of Wetlands, Oceans, and Watersheds, Coastal Programs, announces the release of the National Coastal Condition Report. This Report is the first federal effort to provide a comprehensive picture of the health of the nation's coastal waters. This Report initiates a series describing the ecological and environmental conditions in U.S. coastal waters. It summarizes the condition of ecological resources in the estuaries of the United States and highlights several exemplary Federal, State, Tribal, and local programs that assess coastal ecological and water quality conditions. This Report is based on data collected from a variety of Federal, State, and local sources and rates the overall condition of U.S. coastal waters as fair to poor, varying from region to region. It represents a coordinated effort among EPA, the National Oceanic and Atmospheric Administration (NOAA), the U.S. Geological Survey, the U.S. Fish and Wildlife Service and coastal States. The resulting ecological assessment of the nation's estuaries shows estuaries to be in fair condition, varying from poor conditions in the Northeast and Puerto Rico to fair conditions in the Southeast, Gulf of Mexico, Great Lakes and West Coast. Use the links in the table below to download the individual chapters. (Note: If your browser is opening the files instead of downloading them, right click on the link with your mouse and choose "Save Target As" or "Save Link As" from the pop-up menu.) 
Table of Contents
- Report Cover & Table of Contents (PDF format, 4.5MB)
- List of Acronyms (PDF format, 417KB)
- Executive Summary (PDF format, 3.4MB)
- Introduction (PDF format, 7.5MB)
- National Coastal Condition (PDF format, 16.9MB)
- Northeast Coastal Condition (PDF format, 16.2MB)
- Southeast Coastal Condition (PDF format, 13.1MB)
- Gulf of Mexico Coastal Condition (PDF format, 17.2MB)
- West Coastal Condition (PDF format, 8.3MB)
- Great Lakes Coastal Condition (PDF format, 7.9MB)
- Alaska, Hawaii and Island Territories Coastal Condition (PDF format, 7.2MB)
- The Future -- A National Strategy (PDF format, 10.1MB)
- References (PDF format, 1.8MB)
Descriptions and instructions for using the Japanese, Chinese, and Aztec abacus. Best if your browser is version 3.0 or higher so you can take full advantage of the site. College Math Prep for High School Students Multiple choice math problems designed to help high school students assess their readiness for college Math. Designed for students at the precalculus level, Math Lessons demonstrate the use of various basic mathematical concepts in real-life problem situations. Coolmath.com has games and activities for ages 13-100. Related CoolMath sites include CoolMath4Kids.com and FinanceFreak.com. There are pages for parents and for teachers; math games, math lessons, a math dictionary, and more. Need to convert units? This site will help you convert units of length, temperature, speed, volume, weight, cooking, area, fuel, currency and many others. There is even a "miscellaneous" category for those conversions that do not fit other categories. Geometry from the Land of the Incas An eclectic mix of sound, science, and Incan history intended to interest students in Euclidean geometry. The site includes geometry problems, proofs, quizzes, puzzles, quotations, scientific speculation, and more. Mathematics Archives provides organized Internet access to a wide variety of mathematical resources, including such teaching resources as educational software, laboratory notebooks, problem sets and lecture notes, and reports on innovative methods of teaching. In addition, this site has a collection of links to other sites that are of interest to mathematicians. This has many resources for teachers, students and parents at all levels (K-12, college, and advanced mathematics) including class materials, puzzles, publications, graphics, and the very popular "Problem of the Week". Of special interest to science teachers is their extensive selection of applied mathematics links. This great interactive site helps make math fun. 
It has word problems, flash cards, math videos, math games, and more. Although designed for 3rd through 5th graders, the material is also suitable for older students who want or need math enrichment, remediation, or simply a chance to "play" with math. Mancala Web Home Page Mancala games are among those ancient games that last and last because the rules to play are simple, but the subtleties of winning take a long time to master. This online Mancala is one version of these games, also called Kalaha. (Online versions are at the "idiot" and "novice" levels.) Miscellaneous Mathematical Utilities National Council of Teachers of Mathematics The latest on the revision and implementation of math standards, information on relevant professional meetings, even a discussion of block scheduling, all can be found on the NCTM website. Nick's Mathematical Puzzles A collection of geometry, probability, number theory, algebra, calculus, and logic puzzles. Hints are provided, along with answers, fully worked solutions, and links to related mathematical topics. Many of the puzzles are elementary in their statement, yet challenging. The Population Clock A service of the US Census Bureau which lets you know what the population is in the US and the World. Be sure to check out their home page while you're there! Powers of Ten Here is a great site for math and science teachers on the powers of 10 ... wonderful interactive photos! Return to Resource Center Table of Contents Updated: 15 October 2009
This simple demonstration of Einstein's explanation for Brownian motion shows little particles batting about a more massive one, and what it would look like if you could see only the massive one through a microscope. Einstein showed that the overall visible motion, averaged over many observations, exactly matches what you would expect if the little particles were atoms or molecules. (This applet in Java 1.1 may not work in all browsers, and to "Reset" you may have to use your browser to "Refresh" or "Reload" the page.)
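A minimal numerical sketch of the same idea (not the applet's own code, which is not shown here): averaging over many random walks, the mean squared displacement grows linearly with the number of impacts, which is the signature Einstein derived for molecular bombardment.

```python
import random

# Each "impact" kicks the particle one unit left or right at random.
# Averaged over many particles, mean squared displacement ~ number of steps.
random.seed(1)

def msd(n_steps, n_particles=2000):
    """Mean squared displacement after n_steps random unit kicks."""
    total = 0.0
    for _ in range(n_particles):
        x = 0.0
        for _ in range(n_steps):
            x += random.choice((-1.0, 1.0))
        total += x * x
    return total / n_particles

# Doubling the elapsed "time" roughly doubles the mean squared displacement,
# i.e. displacement grows like sqrt(t) - the hallmark of diffusion.
print(msd(100), msd(200))
```

This linear-in-time growth, rather than the linear-in-time displacement of ballistic motion, is exactly what the averaged observations through the microscope match if the invisible bombarding particles are atoms or molecules.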
Astronomers Discover Deep Fried Planets This discovery, published in the science journal Nature, may shed new light on the destiny of stellar and planetary systems, including our solar system. When our Sun nears the end of its life in about 5 billion years, it will swell up to what astronomers call a red giant, an inflated star that has used up most of its fuel. So large will the dying star grow that its fiery outer reaches will swallow the innermost planets of our solar system: Mercury, Venus, Earth and Mars. Researchers believed that this unimaginable inferno would make short work of any planet caught in it - until now. This report describes the first discovery of two planets - or remnants thereof - that evidently not only survived being engulfed by their parent star, but also may have helped to strip the star of most of its fiery envelope in the process. The team was led by Stephane Charpinet, an astronomer at the Institut de Recherche en Astrophysique et Planétologie, Université de Toulouse-CNRS, in France. “When our Sun swells up to become a red giant, it will engulf the Earth,” said Elizabeth ‘Betsy’ Green, an associate astronomer at the University of Arizona’s Steward Observatory, who participated in the research. “If a tiny planet like the Earth spends 1 billion years in an environment like that, it will just evaporate. Only planets with masses very much larger than the Earth, like Jupiter or Saturn, could possibly survive.” The two planets, named KOI 55.01 and KOI 55.02, circle their host star in extremely tight orbits. Having migrated so close, they probably plunged deep into the star’s envelope during the red giant phase, but survived. In the most plausible configuration, the two bodies would respectively have radii of 0.76 and 0.87 times the Earth radius, making them the smallest planets so far detected around an active star other than our Sun. The authors concluded that planetary systems may therefore influence the evolution of their parent stars. 
They pointed out that the planetary system they observed offers a glimpse into the possible future of our own. The discovery of the two planets came as a surprise because the research team had not set out to find new planets far away from our solar system, but to study pulsating stars. Caused by rhythmic expansions and contractions brought about by pressure and gravitational forces that go along with the thermonuclear fusion process inside the star, such pulsations are a defining feature of many stars. By studying the pulsations of a star, astronomers can deduce the object’s mass, temperature, size and sometimes even its interior structure. This is called asteroseismology. “Those pulsation frequency patterns are almost like a finger print of a star,” Green said. “It’s very much like seismology, where one uses earthquake data to learn about the inner composition of the Earth.” To detect the frequencies with which a star pulsates, researchers have to observe it for very long periods of time, sometimes years, in order to measure tiny variations in brightness. “The brightness variations of a star tell us about its pulsational modes if we can observe enough of them very precisely,” Green said. “Let’s say there is one pulsational mode every 5859.8 seconds, and there is another one every 9126.39 seconds. There could be lots of stars with rather different properties that could all manage to pulsate at those two frequencies. However, if we can measure 10, or better yet, 50 pulsational modes in one star, then it’s possible to use theoretical models to say exactly what the star must be like in order to produce those particular pulsations.” For that reason, the team used data obtained from NASA’s Kepler Space Telescope for this study. Unobstructed by the Earth’s atmosphere and staring at the same patch of sky throughout its five-year mission, the Kepler Space Telescope sits in a prime spot to detect tiny variations in brightness of stars. 
Green had been pursuing a survey to look for hot subdwarf stars in the galactic plane of the Milky Way. “I had already obtained excellent high-signal to noise spectra of the hot subdwarf B star KOI 55 with our telescopes on Kitt Peak, before Kepler was even launched,” she said. “Once Kepler was in orbit and began finding all these pulsational modes, my co-authors at the University of Toulouse and the University of Montreal were able to analyze this star immediately using their state-of-the-art computer models.” This was the first time that researchers were able to use gravity pulsation modes, which penetrate into the core of the star, to match subdwarf B star models to learn about their interior structure. While analyzing KOI 55’s pulsations, the team noticed the intriguing presence of two tiny periodic modulations occurring every 5.76 and 8.23 hours that caused the star to flicker ever so slightly, at one five thousandth percent of its overall brightness. They showed that these two frequencies could not have been produced by the star’s own internal pulsations. The only explanation came from the existence of two small planets passing in front of the star every 5.76 and 8.23 hours. To complete their orbits so rapidly, KOI 55.01 and KOI 55.02 have to be extremely close to the star, much closer than Mercury is to our Sun. On top of that, the Sun is a cool star compared to KOI 55, which burns at about 28,000 Kelvin, or 50,000 degrees Fahrenheit. The extremely tight orbits are important because they tell the researchers that the planets must have been engulfed when their host stars swelled up into a red giant. “Having migrated so close, they probably plunged deep into the star’s envelope during the red giant phase, but survived,” lead author Charpinet said. “As the star puffs up and engulfs the planet, the planet has to plow through the star’s hot atmosphere and that causes friction, sending it spiraling toward the star,” Green added. 
“As it’s doing that, it helps strip atmosphere off the star. At the same time, the friction with the star’s envelope also strips the gaseous and liquid layers off the planet, leaving behind only some part of the solid core, scorched but still there.” “We think this is the first documented case of planets influencing a star’s evolution,” Charpinet said. “We know of a brown dwarf that possibly did that, but that’s not a planet, and of giant planets around subdwarf B stars, but those are too far away to have had any impact on the evolution of the star itself.” “I find it incredibly fascinating that after hundreds of years of being able to only look at the outsides of stars, now we can finally investigate the interiors of a few stars even if only in these special types of pulsators and compare that with how we thought stars evolved,” Green said. “We thought we had a pretty good understanding of what solar systems were like as long as we only knew one: ours. Now we are discovering a huge variety of solar systems that are nothing like ours, including, for the first time, remnant planets around a stellar core like this one.”
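The claim that the planets orbit "much closer than Mercury" can be sanity-checked with Kepler's third law. The ~0.5 solar-mass value assumed below is typical of hot subdwarf B stars but is not given in the article, so treat the result as an order-of-magnitude sketch only.

```python
import math

# Kepler's third law: a^3 = G * M * T^2 / (4 * pi^2).
# Star mass of ~0.5 solar masses is an ASSUMED typical subdwarf B value.
G = 6.674e-11            # m^3 kg^-1 s^-2
M_star = 0.5 * 1.989e30  # kg (assumption, not from the article)
T = 5.76 * 3600          # orbital period of KOI 55.01 in seconds

a = (G * M_star * T**2 / (4 * math.pi ** 2)) ** (1 / 3)
au = a / 1.496e11
print(au)  # a few thousandths of an AU; Mercury orbits at ~0.39 AU
```

Even with generous uncertainty in the stellar mass, the orbit comes out nearly two orders of magnitude tighter than Mercury's, which is why the planets must have passed through the star's envelope during the red giant phase.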
Before we try and understand the program, let us run it to see if it works! The PLANStart tool is provided as a means to inject PLAN programs into the network directly from the command line. It is invoked by using the Java interpreter. Typing 'java PLANStart' gives you the usage of PLANStart:
% java PLANStart
usage : java PLANStart [-v] [-p port] <code> <RB> <router IPv4 address>
The code argument is the name of the file that contains the PLAN program. The RB argument specifies the initial amount of resource to hand to the packet going into the network. The last argument specifies the host that is to be the entry point into the network. The -v option produces verbose output. The -p option allows you to specify the outgoing TCP port, much as was described above for the active router. This also serves to set the port portion of the identity for the PLANStart program should the thisHost local service routine be invoked during the initial invocation (see the ping example below). Note that PLANStart and ARMain should use the same port number, or they will not be able to communicate properly. Suppose that you typed the Hello world program into a file called Helloworld.plan, and that you want to use an active host called ``MyActiveNode'' as the network entry point. Let us also give this program an initial resource amount of 10. Now type:
% java PLANStart Helloworld.plan 10 MyActiveNode
The program waits for you to give it the next input: the initial invocation. Recall that a PLAN program is a list of definitions - when the program arrives at an active host, you must specify which function to start executing and with what arguments. In the case of the Hello world program, there is only one function to execute, and it takes no parameters, so you type its name with an empty argument list. Out comes the response:
IPv4 : MyActiveHost.domain.name/some.ip.address says : Hello world!
Notice that you must press Ctrl-C to get back to the command line. 
This is because the PLANStart tool, after injecting the program into the network, waits for possible responses. Since it does not know how many responses to expect, it has to wait indefinitely until told explicitly to stop. As a final note, had you included the -v option, you would have seen the following:
% java PLANStart Helloworld.plan 10 MyActiveNode
Checking for parse errors in source code ...
Binding Top Environ ...
Getting Initial Resource ...
Getting Address of Active Router ...
Parsing the initial invocation ...
This is useful in debugging errors. At this point you would type the initial invocation, as noted above.
Biogeography of deep-sea hydrothermal vent faunas
By Cindy Lee Van Dover

Driving from New York to Miami across 15 degrees of latitude, hardwood forests give way to palmetto scrub. From Washington State to Southern California, again across 15 degrees of latitude, evergreen forests disappear and desert cacti dominate. Any traveler who has noticed these changing botanical provinces has practiced the science of biogeography, the study of the patterns of distributions of organisms and the processes that determine these patterns. Cindy Lee Van Dover, Chief Scientist of this expedition, is a professor of biology at the College of William and Mary in Williamsburg, Virginia. She is an expert on hydrothermal vent ecology.

On land, latitudinal changes in climate - humidity, rainfall, temperature - help determine the geographic distributions of organisms. But hydrothermal vents, two to three miles beneath the surface of the sea, are not greatly affected by surface climate. Instead, distributions of hydrothermal vent organisms on mid-ocean ridges appear to be influenced by features such as deep-ocean circulation patterns, by major topographic characteristics such as deep, cross-cutting fracture zones or changes in depth of the ridge system, and by the position and movement of tectonic plates over time. Based on recent explorations, we now know that across more than 30 degrees of latitude along the East Pacific Rise, there is a single hydrothermal biogeographic province! Giant tubeworms, clams, and mussels -- and many smaller species of polychaete worms, shrimp-like crustaceans, and snails -- have immense ranges, despite physiological and ecological requirements that restrict the adults to isolated vent habitats separated by tens to hundreds of kilometers. Giant tube worms, called Riftia, grow up to 6 feet long and are commonly found at vent sites on the East Pacific Rise.
A different species of tube worm, called Ridgeia, lives at vent sites on the Juan de Fuca Ridge in the Northeast Pacific. The composition of the animal communities (fauna) at hydrothermal vents is far from the same all over the world's oceans, however. For example, there is a difference between the vent fauna of the East Pacific Rise off the western coast of Mexico and the vent fauna of the Juan de Fuca Ridge off Vancouver, Canada. Why should these Pacific vent faunas be different? The East Pacific Rise goes terrestrial at the mouth of the Colorado River in the Gulf of California, becoming the San Andreas Fault. The fault moves back off-shore at Mendocino, California, and gives way again to a triplet of mid-ocean ridge spreading centers of the northeast Pacific that includes the Juan de Fuca Ridge. Dr. Verena Tunnicliffe, a biologist at the University of Victoria, suggests that at one time, before the North American Plate overrode the mid-ocean ridge, there was a single biogeographic province in the eastern Pacific. With the placement of a continental barrier to dispersal, the hydrothermal faunas began to diverge, eventually forming the sister species we observe today. An example of this is the closely related tubeworm species found at the East Pacific Rise and the Juan de Fuca Ridge. If Juan de Fuca Ridge and East Pacific Rise vent faunas are sisters, then the vent animals of the Mid-Atlantic Ridge are cousins several times removed. There are many shared families, some shared genera, but few shared species between Atlantic and Pacific vents. For example, shrimp in the family Alvinocaridae are found at both East Pacific Rise and Mid-Atlantic Ridge vents, but Alvinocaris lusca is the Pacific shrimp species, while Rimicaris exoculata dominates at some Atlantic vents. Why should Atlantic and Pacific vent faunas differ? There is no single satisfying explanation.
Because the basic ingredients of hydrothermal systems -- basalt and seawater -- are relatively uniform, and the major element chemistry of vent fluids in the Atlantic and Pacific is similar, it seems unlikely that differences in the chemical setting of the hydrothermal sites play a dominant role. Even so, there can be differences in the chemical setting of individual vents that contribute to the differences in the animals observed. More importantly, geographic isolation of the Atlantic and Pacific ridge systems can account for some of the faunal differences. Trace the global mid-ocean ridge system starting from the East Pacific Rise in the Gulf of California. There is currently no direct connection to the Atlantic without following the Pacific-Antarctic Ridge south of Australia and into the Indian Ocean along the Southeast and then Southwest Indian Ridges. The Southwest Indian Ridge, after it passes the tip of the African continent, loops northward to become the Mid-Atlantic Ridge. If the East Pacific Rise vent fauna can disperse only by step-by-step migration along mid-ocean ridges, then isolation by distance alone might account for much of the difference between Atlantic and Pacific vent faunas. Mid-ocean ridges of the Indian Ocean may prove to be the corridor for exchange of faunas between Atlantic and Pacific vents. The Japanese discovery of the Kairei Field on the Central Indian Ridge and our detailed sampling on this expedition are already helping to unravel some of the questions we posed nearly five years ago when we first proposed this research. Of the species we collected from the Kairei Field, some appear to be linked to Atlantic vent faunas, while many others are familiar from Pacific hydrothermal vents. As is usual for any scientific endeavor, our observations lead us to further questions.
To resolve faunal affiliations and to understand what underlying processes control biogeographic patterns at deep-sea hydrothermal vents, many more vents on the global mid-ocean ridge system need to be explored. A map shows the global distribution of major hydrothermal vent sites; colored circles show vents with similar animal communities. Funding to support this research comes from the National Science Foundation (Biological Oceanography) and The College of William & Mary.
Animal diversity in the soil mapped for the first time

ANIMALS that live in the soil play an important role in the ecosystem, but little is known about their correlation with organisms above the ground. A study of soils from 11 areas around the world shows that, unlike most species above the ground, soil animals have restricted distributions. A team of ecologists from the US and UK collected soil samples from 11 sites - including tropical forest in Costa Rica, arid grasslands in Kenya, warm temperate forest in New Zealand, tundra and boreal regions of Alaska and Sweden, and shrub steppe of Argentina - to conduct a comprehensive molecular analysis of the global distribution of soil animals such as nematodes and microarthropods. Through testing and sequencing of the genetic material, they found that 96 per cent of the population of soil animals was restricted to a single location. This means each ecosystem is unique, with its own soil fauna. Conserving soil fauna at one location would not help in its distribution to other locations, says Jim Garey, lead researcher from the University of South Florida in the US. When asked the reason for the restricted distribution of soil fauna, the researchers said it could be related to nutrients. If soil nutrients are tied up in plants that are above ground, they are less available to soil fauna. Greater diversity of plants and animals above the ground results in high levels of inorganic nitrogen and low pH in the soil. High soil nitrogen favours growth of bacteria instead of fungi.
Studies have shown that soil animals are more diverse in fungi-dominated soil and less diverse in bacteria-dominated soil, Garey explains. It has been accepted that a wider range of species can be found above ground at the equator than at the poles. The study proves this does not apply to species living underground. “It was not true for the latitudes we studied,” explains Garey. The study was published in Proceedings of the National Academy of Sciences on October 17.
Copy will copy the contents of newValue to an ASCII, wide-character, or binary string, from the offset location for a given length.

TVariable& Copy( const TVariable &newValue, int offset = 0, int length = 0 );

newValue - The new value.
offset - The start location, in bytes from the start of the string. If offset is negative, the returned string will start at the offset'th character from the end of the string.
length - The number of bytes to copy from offset. If this value is zero, the remaining length of the string is used. If length is negative, that many characters are omitted from the end of the string (after the start position has been calculated for a negative offset).

These methods return a temporary const value created on the stack.
- This method returns a TVariable of the same type as the original TVariable.
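The offset/length rules above can be modeled in a few lines. The following is an illustrative sketch of the documented semantics only (it is not the TVariable implementation, and the function name is hypothetical):

```python
def copy_semantics(s, offset=0, length=0):
    """Model of the documented offset/length rules for Copy.

    Illustration only, not the TVariable implementation.
    """
    # A negative offset counts from the end of the string.
    start = offset if offset >= 0 else len(s) + offset
    if length == 0:
        end = len(s)           # zero length: take the remainder of the string
    elif length > 0:
        end = start + length   # positive length: take that many characters
    else:
        end = len(s) + length  # negative length: omit that many from the end
    return s[start:end]
```

For example, on the string "abcdef", an offset of -3 yields "def", and an offset of 0 with length -2 yields "abcd".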
Captive Animals and Evolution

This is about natural selection/evolution. I'm not very clear on this subject, so please clear this up for me. If I was to take animals from nature (lions, for example) and took care of them (fed them by hand, food predigested and full of nutrients for them to grow), would they lose certain attributes they had gained over thousands of years of evolution?

If you take one lion and do these things, no. Organisms don't evolve in their own lifetime. And traits don't just fade away without use. Look at housecats, which are relatives of lions. When we speak of evolution, we are talking about populations evolving over time. There would have to be a random mutation that took place in, say, the tooth-length gene in one lion, and because that lion didn't hunt anymore, that random mutation wouldn't matter. It wouldn't affect whether that lion reproduced or not. Perhaps that lion would pass the mutated gene on to a few of its offspring, and then there would be more like him - and if they didn't hunt either, the mutation wouldn't matter. Organisms don't evolve because they NEED to or because they don't NEED a trait. When mutations happen, if the environment is such that organisms can survive with the mutated trait, there will be more offspring with that trait. Now if some time down the road, the offspring of this original lion were suddenly set free and had to hunt again, they would probably be at a disadvantage and wouldn't survive long in the wild.

This is an interesting question. If you're talking about individual animals during one generation, then no, you won't see any inherited traits lost or gained. There may be other characteristics that change during an individual's lifetime. A lion's claws may grow long if it is kept in captivity, for example, or its muscles may atrophy if it doesn't get much exercise. However, an individual's genetic makeup will not change. Its offspring will have all the normal traits of lions living in the wild.
On the other hand, if you're talking about keeping a population of animals in captivity for many generations, then the situation is different. The most important thing to think about is how these animals reproduce. Do they have free choice of mates or do you control their breeding? Do all individuals have an equal chance of reproducing or will some have more offspring than others? Also, it is important to ask questions about the population being kept. How large is the population? Does the population accurately reflect the natural population? If you control the breeding, then the population absolutely can change over the course of a few generations. Indeed, this has happened with dogs and cats (and sheep and camels and wheat and broccoli and apples and yeast and all other domesticated organisms). In this case, evolution can be strikingly rapid. If your animals are allowed to have free choice of mates, then the story may be different. Some may end up reproducing more than others. If there is some inherited trait that helps them reproduce, then their offspring may inherit that trait, and those offspring will also reproduce more, etc. This is called natural selection. The key is this: future generations will most closely resemble those individuals who reproduce most successfully. Notice we are talking about whole populations. Populations evolve; individuals do not. There is a common misconception that if some trait is not used, then it will go away ("use it or lose it"). But evolution doesn't work that way. It doesn't really matter if a trait is used or not; all that matters is whether that trait helps the individual reproduce. Harmful traits tend to disappear from populations. Helpful traits tend to become more common in populations. 
Neutral traits will tend to stick around, but they typically won't become any more common than they already are. In the case of the lions, if the animals are kept reasonably healthy and they reproduce normally, then we should not expect them to lose their sharp teeth or their muscular physiques. If you control their breeding, though, you could probably breed those kinds of traits out of them.

Update: June 2012
60 Second Adventures in Astronomy: Track 1

Ever wondered where the Universe came from? Or, more importantly, where it's headed? Voiced by David Mitchell, this series of 60-second animations examines different scientific concepts, from the Big Bang to relativity, from black holes to dark matter. The series also explores the possibility of life beyond Earth and considers why David Bowie is still none the wiser about life on Mars.

- Duration: 15 mins
- Published on: Thursday 20th December 2012
- Introductory Level
- Posted under: Across the Sciences

Just how big was the Big Bang? Discover how scientists have calculated the exact volume of the noise created at the birth of the Universe.

- Read a transcript of this track - you'll need a PDF viewer, such as Adobe's free Adobe Reader
- See details of the Open University course this album comes from
- Discover more from The Open University and iTunesU at open.edu/itunes

Tracks in this podcast:
1. The Big Bang - Just how big was the Big Bang? Discover how scientists have calculated the exact volume of the noise created at the birth of the Universe.
2. Supernovae - What happens when a star explodes? Learn how all the elements in the Universe were formed, and where exactly your favourite silver necklace comes from.
3. Exoplanets - We can't see exoplanets, but we know they're there. This episode explores how scientists have studied distant stars to learn more about the invisible planets that orbit them.
4. A Day on Mercury - No-one on Mercury could claim there's not enough hours in the day. Find out how you'd pass the time on a planet where a single day lasts two years.
5. The Rotating Moon - The Moon is like a loyal servant to a Queen, and never turns its back on the Earth. Discover how the Moon's orbit means we always see its best side.
6. Life on Mars - Discover how asteroids and microbes flying through space could hold the secret to life on Earth and Mars.
7. Event Horizons - Just what is the point of no return? German physicist Karl Schwarzschild calculated the event horizon of black holes, and it can tell us more about the eventual fate of all the galaxies.
8. Dark Matter - Fritz Zwicky was a Swiss astronomer who discovered dark matter in the Universe. But what's the matter with dark matter?
9. Special Relativity - Who had more fun in life, Albert Einstein or Richard Feynman? Whichever one of them was travelling faster.
10. Large Hadron Collider - Some thought it would create another Universe, while others thought it would suck us all into a black hole. But the Large Hadron Collider is not as dangerous as we thought.
11. Dark Energy - Who'd have thought Albert Einstein could make a mistake? Dark Energy explores how Einstein was right all along about the expanding Universe. We never should have doubted him.
12. Black Holes - Is it possible to make your own black hole? DIY experts take note.
A big, colorful dragonfly hovering over a pond. The Green Darner or Common Green Darner (Anax junius), named after its resemblance to a darning needle, is a species of dragonfly in the family Aeshnidae. It is one of the most common and abundant species throughout North America, and its range extends south to Panama. It is well known for its long migration from the northern United States south into Texas and Mexico. It also occurs in the Caribbean, Tahiti, and Asia from Japan to mainland China. It is the official insect of the state of Washington in the United States. The Green Darner is one of the largest dragonflies in existence: males grow to 76 mm (3.0 in) in length with a wingspan of up to 80 mm (3.1 in). Females oviposit in aquatic vegetation, laying eggs beneath the water surface. Nymphs (naiads) are aquatic carnivores, feeding on insects, tadpoles, and small fish. Adult darners catch insects on the wing, including winged ants, moths, mosquitoes, and flies.
During the week of May 13th, the CO2 level at the Mauna Loa Observatory in Hawaii topped 400 ppm repeatedly. Daily levels of CO2 can vary due to weather, and there are seasonal trends as well. The level of atmospheric greenhouse gases continues to increase, and is now more than 120 ppm above where it stood when the Industrial Revolution began. For more on the Keeling Curve, see http://keelingcurve.ucsd.edu/. Find out more about greenhouse gases and warming.

The week of May 19 brings dozens of tornadoes to Tornado Alley in the states of Oklahoma, Kansas, Iowa, Illinois and Missouri. On May 20th, a massive tornado struck Moore, Oklahoma, devastating communities - destroying over 100 homes and hitting two elementary schools and a hospital - with many casualties and deaths. Our thoughts are with our friends and colleagues suffering from these storms. For more on the May 20th storms, see the NOAA Storm Prediction Center Storm Report.

Despite growing awareness of the problem of plastic pollution in the world's oceans, little solid scientific information has existed about the nature and scope of the issue. This week, a team of researchers...
Almost one in ten species of European butterflies (37 species) are under threat of extinction, and almost one-third are declining, according to a major new report from IUCN and Butterfly Conservation Europe. A further 10% are close to being threatened, and one species, the Madeiran Large White, is probably extinct, not having been seen for 20 years. Because butterflies are good indicators of biodiversity, the results indicate a serious crisis for Europe's wildlife. Around one-third of all European butterfly species are unique to Europe, and the report shows that 15 of these are now globally threatened. The main factor causing the declines has been the extensive loss of key habitats such as flower-rich grassland and wetlands, due to agricultural intensification. Changes in habitat management and the abandonment of pastures in mountain areas have also taken their toll. Over half of European butterflies rely on traditional grazing to maintain their flower-rich grassland habitats. Such systems are being abandoned on a massive scale as they cannot compete economically with modern, highly intensive agriculture. Climate change is thought to be a serious future threat to many species. Among the most endangered species are the Danube Clouded Yellow, now thought to be confined to a few sites in Romania, and the Violet Copper, a beautiful wetland species that has undergone drastic declines in many countries. The only British species on the Endangered list is the Large Blue, which became extinct here in 1979 but has since been successfully re-introduced. It is declining rapidly in every other country where it occurs in Europe. Two other British butterflies are in serious decline at a European level (classed as Near Threatened): the Duke of Burgundy and the Lulworth Skipper. Both had their worst ever year in Britain last year, declining by 65% and 87% respectively since 2000.

Dr Martin Warren, Chief Executive of Butterfly Conservation and one of the report's authors, said: "The rapid decline of so many species is extremely worrying. They point to a major loss of wildlife and wild habitats across Europe. Far more effort is needed to support the traditional farming systems on which many species depend and to protect key areas from development." The new Red List of European butterflies was produced by a team of over 50 experts from countries across Europe, co-ordinated by Butterfly Conservation Europe and IUCN. Europe has an exceptionally rich butterfly fauna comprising 435 resident species, including spectacular species like the Apollo and Swallowtail.
The KEY specifier defines the access keys for records in an indexed file. It takes the following form:

The defaults are CHARACTER and ASCENDING. The key starts at position e1 in a record and has a length of e2-e1+1. The values of e1 and e2 must cause the following calculations to be true:

1 .LE. (e1) .AND. (e1) .LE. (e2) .AND. (e2) .LE. record-length
1 .LE. (e2-e1+1) .AND. (e2-e1+1) .LE. 255

If the key type is INTEGER, the key length must be either 2 or 4.

Defining Primary and Alternate Keys

You must define at least one key in an indexed file. This is the primary key (the default key). It usually has a unique value for each record. You can also define alternate keys. RMS allows up to 254 alternate keys. If a file requires more keys than the OPEN statement limit, you must create the file using another language or the File Definition Language (FDL).

Specifying and Referencing Keys

You must use the KEY specifier when creating an indexed file. However, you do not have to respecify it when opening an existing file, because key attributes are permanent aspects of the file. These attributes include key definitions and reference numbers for subsequent I/O operations. However, if you use the KEY specifier for an existing file, your specification must be identical to the established key attributes. Subsequent I/O operations use a reference number, called the key-of-reference number, to identify a particular key. You do not specify this number; it is determined by the key's position in the specification list: the primary key is key-of-reference number 0; the first alternate key is key-of-reference number 1; and so forth.

For More Information: For details on the FDL, see the OpenVMS Record Management Services Reference Manual.
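The constraints on e1 and e2 can be expressed as a small validity check. The sketch below is an illustration of the documented rules only, not part of RMS; the function name is an assumption:

```python
def validate_key(e1, e2, record_length, key_type="CHARACTER"):
    """Check the documented constraints on an indexed-file key
    spanning byte positions e1..e2 (illustrative sketch, not RMS)."""
    length = e2 - e1 + 1
    # 1 .LE. e1 .AND. e1 .LE. e2 .AND. e2 .LE. record-length
    if not (1 <= e1 <= e2 <= record_length):
        return False
    # 1 .LE. (e2-e1+1) .AND. (e2-e1+1) .LE. 255
    if not (1 <= length <= 255):
        return False
    # INTEGER keys must be exactly 2 or 4 bytes long.
    if key_type == "INTEGER" and length not in (2, 4):
        return False
    return True
```

For instance, a CHARACTER key at positions 1-10 of an 80-byte record passes, while an INTEGER key of length 3 fails.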
How did animals respond to the ice age? Ancient bison preserved in the Canadian arctic have given scientists a clue. Why did rainforests evolve so much more diversity than other parts of the globe? Are you familiar with the La Brea Tar Pits? Learn more on this Moment of Science. During the current ice age glaciers have advanced and retreated over twenty times. We’re living during one of the warmer periods when the glaciers have retreated.
A recent article in Scientific American by Jennifer Ackerman entitled "The Ultimate Social Network" highlights a particular problem when trying to sequence the genomes of eukaryotic organisms. The problem is that the organism in question, whether it is an ant, a butterfly, a polar bear, a frog or a blue whale, is not a singular organism at all. In fact, the organism in question plays host to many millions of other organisms, mainly prokaryotic bacteria, along with viruses, fungi and parasites. In humans, for example, the genetic material from the microbiome outnumbers the human genome by at least 10 to 1. This is also expected to be true of all other eukaryotic species which harbour and maintain a symbiotic relationship with their microbiome. The genes from the microbiome help process beneficial compounds and act to temper host immune defences, for example. Therefore, when extracting DNA from a eukaryotic organism, you have to consider what other genomes you may be preparing and sequencing alongside the desired genome of interest. It cannot simply be a case of freeze-drying an insect, crushing it into a powder and then extracting the DNA, as the resultant samples will contain a highly mixed and diverse set of genomes, in which the genome of interest may be present at only a very low ratio. So, be warned! When assembling genomes, be sure you know what your starting material actually contains.
Highlights: Glycolysis II

1. Conversion of F6P to fructose-1,6-bisphosphate (F1,6BP - need to know) is catalyzed by the enzyme phosphofructokinase (PFK) (need to know). This reaction also requires ATP and is an irreversible reaction. ATP is an allosteric effector: high levels of ATP inhibit the enzyme, and low levels stimulate it. This is consistent with the energy needs of the cell - when ATP is low, cells need glycolysis to run, so PFK is turned ON. When ATP is high, cells don't need glycolysis to run, so PFK is turned OFF. PFK is a major control point for glycolysis because it stops the pathway for entry of either glucose or fructose.

2. In the next step of glycolysis, the six-carbon F1,6BP is split into two three-carbon pieces (DHAP and G3P) in a reaction catalyzed by aldolase. This reaction is very unfavorable when there are equal concentrations of products and reactants. To make the reaction go forward in the cell, cells "push" (increase amounts of reactants) and "pull" (decrease concentrations of products) to make the process favorable.

3. In reaction 5, DHAP is converted to G3P, so from this step forward, there are two of each molecule. The enzyme catalyzing this reaction is triose phosphate isomerase, one of the most efficient enzymes known.

4. Reaction 6 of glycolysis involves the only oxidation. The enzyme responsible is glyceraldehyde-3-phosphate dehydrogenase (G3PDH). In the reaction, the aldehyde of G3P is converted to an acid group, which is subsequently linked to a phosphate. Note that the energy of the oxidation provides the necessary energy to put the phosphate on. ATP is not required.

5. In reaction 7, ATP is generated in a reaction catalyzed by phosphoglycerate kinase. In this reaction, 1,3-BPG + ADP <=> 3-PG + ATP. This reaction is referred to as a substrate-level phosphorylation (ATP being made directly from ADP by transfer of a phosphate from another molecule with phosphate).
Substrate-level phosphorylation is one of three types of phosphorylation in cells. The others are oxidative phosphorylation (in mitochondria) and photophosphorylation (in the chloroplasts of plants).

8. Reaction 8 is an isomerization catalyzed by phosphoglycerate mutase. This enzyme starts with 3-phosphoglycerate (3-PG) and converts it to 2-phosphoglycerate (2-PG). In between, an intermediate known as 2,3-BPG is made. It is stable and can diffuse from the enzyme and interact with hemoglobin.

9. Reaction 9 involves removal of a water molecule from each three-carbon intermediate to form the high-energy molecule called phosphoenolpyruvate (PEP).

10. Reaction 10 is the "big bang" reaction of glycolysis. It produces another ATP for each PEP (by substrate-level phosphorylation), and in turn, each PEP is converted to pyruvate, the end product of glycolysis. The enzyme, pyruvate kinase, is an important one, as it provides yet another control point for glycolysis. Pyruvate kinase is controlled by both allosteric and covalent modifications. This reaction is VERY energetically favorable and helps to "pull" earlier reactions that are not so favorable. It also contributes a fair amount of heat.

11. Glycolysis is regulated by three enzymes - hexokinase (inhibited by G6P), phosphofructokinase (inhibited by ATP), and pyruvate kinase (inhibited by ATP). I will say more about regulation later.

12. Pyruvate is the ending point for glycolysis. Which pathway is taken from that point forward depends on the needs of the cell. Since cells have a VERY strong interest in keeping glycolysis going, the primary consideration is keeping NAD+ levels high. Under aerobic conditions (plenty of oxygen), NAD+ is readily made from NADH without problems. Thus under aerobic conditions, cells (animal and microbial cells) convert pyruvate to acetyl-CoA, CO2, and NADH, since the NADH can readily be converted back to NAD+.

13.
Under anaerobic conditions, animals convert pyruvate to lactate using the enzyme lactate dehydrogenase (producing NAD+ from NADH). Under anaerobic conditions, microbial cells undergo NON-OXIDATIVE (no electrons lost) decarboxylation (formation of CO2) to produce acetaldehyde, followed by reduction of acetaldehyde by NADH to form ethanol and NAD+.

14. Addition of electrons and protons to pyruvate (from NADH) creates lactate and regenerates NAD+. This is important in muscles when they run low on oxygen.

15. Metabolism of glucose by anaerobic pathways does not release nearly as much energy as when glucose is metabolized by the aerobic pathway. Note that conversion of pyruvate to ethanol by microorganisms is a two-step process. The last step in the process is catalyzed by alcohol dehydrogenase. In microorganisms, the direction of the reaction is towards producing ethanol. Animals also have an alcohol dehydrogenase, but they use it in the reverse direction, to break down ethanol. The product of the reverse reaction is acetaldehyde, which may be responsible for hangovers.
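The substrate-level phosphorylation steps above can be tallied per molecule of glucose. This is a minimal bookkeeping sketch (the hexokinase ATP investment occurs in Glycolysis I, before the reactions covered here):

```python
# Net ATP yield of glycolysis by substrate-level phosphorylation,
# tallied per molecule of glucose (illustrative bookkeeping only).
invested = {
    "hexokinase": 1,           # glucose -> G6P (from Glycolysis I)
    "phosphofructokinase": 1,  # F6P -> F1,6BP (reaction above, uses ATP)
}
# After the aldolase split, each reaction below runs twice per glucose,
# once for each three-carbon intermediate.
produced_per_triose = {
    "phosphoglycerate kinase": 1,  # 1,3-BPG -> 3-PG
    "pyruvate kinase": 1,          # PEP -> pyruvate
}
gross = 2 * sum(produced_per_triose.values())
net = gross - sum(invested.values())
print(f"gross ATP: {gross}, net ATP: {net}")  # gross ATP: 4, net ATP: 2
```

This is why glycolysis is said to yield a net of 2 ATP per glucose even though 4 ATP are generated in the payoff phase.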
Contents of this page:
- H+ flux linked to ATP synthesis or hydrolysis
- Composition & roles of major domains of the ATP Synthase
- Binding change mechanism
- Structure of F1 & central stalk
- Evidence for rotation
- Fo & peripheral stalk subunits

F1Fo ATP Synthase of mitochondria, chloroplasts, and bacteria is represented schematically at right. When the electrochemical H+ gradient is favorable, F1Fo catalyzes ATP synthesis coupled to spontaneous H+ flux toward the side of the membrane where F1 protrudes. E.g., in mitochondria, the pH and electrical gradients drive H+ transport from the intermembrane space to the matrix compartment. The chemiosmotic theory is discussed elsewhere. If no membrane potential or pH gradient exists to drive the forward reaction, the Keq favors the reverse reaction, ATP hydrolysis (ATPase activity). In some bacteria, the reverse reaction has a physiological role, providing a mechanism for ATP-dependent creation of a proton gradient that drives other reactions.

Viewed by electron microscopy with negative staining, the ATP synthase appeared as "lollipops" on the inner mitochondrial membrane, facing the matrix (V & V Fig. 22-36 p. 827). Higher-resolution cryo-electron microscopy later showed each lollipop to have two stalks. E.g., see the movie on a website of J. Rubinstein.

Roles of major subunits were determined in studies of submitochondrial particles (SMP). If mitochondria are treated with ultrasound, the inner membrane breaks and reseals as vesicles, with F1 on the outer surface. Since F1 of intact mitochondria faces the interior matrix space, these SMP are said to be inside out. F1, the lollipop head, when extracted from SMP, catalyzes ATP hydrolysis (the spontaneous reaction in the absence of an energy input). Thus F1 contains the catalytic domain(s). After removal of F1, the SMP membrane containing Fo is leaky to H+. Adding back F1 restores the normal low permeability to H+. Thus it was established that Fo includes a "proton channel."
Either oligomycin or DCCD blocks the H+ leak in membranes depleted of F1. Thus oligomycin and DCCD inhibit the ATP Synthase by interacting with Fo.

ATP synthase complexes of bacteria, mitochondria and chloroplasts are all very similar, with only minor differences. Mitochondria are believed to have evolved from symbiotic aerobic bacteria ingested by an anaerobic host cell. The limiting membrane of the bacterium became the inner mitochondrial membrane. Mitochondria contain a small DNA chromosome, but genes that encode most mitochondrial proteins are located in the nucleus, consistent with transfer of some DNA to the nucleus during evolution. The subunit composition of the ATP Synthase was first established for E. coli, which has an operon that encodes genes for all subunits. Stalk subunits were classified initially as being part of either F1 or Fo, based on whether they co-purified with extracted F1. Mammalian mitochondrial F1Fo is slightly more complex than the bacterial enzyme, with a few additional subunits. Also, since names were assigned based on apparent molecular weights, some subunits were given different names in different organisms. There is evidence that the ATP Synthase (F1Fo) may form a complex with the adenine nucleotide translocase (ADP/ATP antiporter) and the phosphate carrier (Pi/H+ symporter). This complex has been designated the ATP Synthasome.

The binding change mechanism of energy coupling was proposed by Paul Boyer. He shared the Nobel prize for this model, which accounts for the existence of 3 catalytic sites in F1. For simplicity, only the catalytic β subunits are shown in the diagram at right. It is proposed that an irregularly shaped "shaft" linked to Fo rotates relative to the 3 β subunits, which are arranged in a ring. The rotation is driven by flow of H+ through Fo. The conformation of each β subunit changes sequentially, as it interacts with the rotating shaft. Each of the 3 β subunits is in a different stage of the catalytic cycle at any time.
For example, the green subunit shown above changes conformation sequentially through the stages of the catalytic cycle. This model is supported by two major lines of evidence:

1. The crystal structure of F1 with the central stalk was determined by John Walker, who shared the Nobel prize for that achievement. The γ (gamma) subunit was found to include a bent helical loop that constitutes a "shaft" within the ring of α and β subunits.

Shown at right is bovine F1, treated with DCCD to yield crystals in which more of the central stalk is ordered, allowing structure determination. (Structure solved by C. Gibbons, M. G. Montgomery, A. G. W. Leslie, & J. E. Walker, 2000, PDB 1E79). Subunit colors: α yellow, β green, γ red, δ blue, and ε magenta. Note the wide base of the rotary shaft, including part of γ as well as the δ and ε subunits. Recall that the bovine δ subunit, which is located at the base of the shaft, is equivalent to the ε subunit of bacterial F1.

In crystals of F1 not treated with DCCD (PDB file 1COW), less of the shaft structure is elucidated, but ligand binding may be observed under more natural conditions. The 3 β subunits are found to differ in conformation and bound ligand: Bound to one β subunit is a non-hydrolyzable analog of ATP (assumed to be the tight conformation). Bound to another β subunit is a molecule of ADP (assumed to be the loose conformation). The third β subunit has an empty active site (assumed to be the open conformation). These findings are consistent with the binding change mechanism, which predicts that each of the three β subunits, being differently affected by the irregularly shaped rotating shaft, will be in a different stage of the catalytic cycle. Additional data are consistent with there being an intermediate conformation between the major transitions discussed above. This intermediate conformation may have nucleotide bound at all three sites.
By one model, considering the left-most image in the diagram above: ATP synthesis (on the green subunit) is associated with transition to an intermediate conformation that allows binding of ADP + Pi to the adjacent, previously empty site (magenta subunit). A further conformational change then occurs as ATP formed in the previous step is released (from the cyan subunit). See also recent articles, especially the paper by Kagawa et al. Explore at right the structure of bovine F1 with bound ADP and AMPPNP. The non-hydrolyzable AMPPNP is used as a substitute for ATP, which would hydrolyze during crystallization.

2. Rotation of the γ shaft relative to the ring of α and β subunits was demonstrated by H. Noji, R. Yasuda, M. Yoshida & K. Kinoshita. β subunits of a bacterial F1 were tethered to a glass surface, as represented at right. A fluorescent-labeled actin filament (shown in yellow) was attached to the protruding end of the γ subunit. Video recording showed the fluorescent actin filament rotating like a propeller. The rotation was found to be ATP-dependent. Studies using varied techniques have shown ATP-induced rotation to occur in discrete 120° steps, with intervening pauses. Some observations indicate that each 120° step consists of 80-90° and 30-40° substeps, with a brief intervening pause. Such substeps are consistent with evidence for an intermediate conformation between the major transitions, discussed above. Although the binding change mechanism is widely accepted, some details of the reaction cycle are still debated. View videos showing F1 rotation, at a website that includes details of the experimental approach used. Then view at right an animation based on observed variation in conformation of F1 subunits attributed to rotation of the γ shaft.

The c subunit of Fo has a hairpin structure, with 2 transmembrane α-helices and a short connecting loop. (Structure at right determined via NMR by M. E. Girvin, V. K. Rastogi, F. Abildgaard, J. L. Markley, & R. H.
Fillingame, 1998). The small c subunit (79 amino acid residues in E. coli) is also referred to as proteolipid, because of its hydrophobicity. One α-helix of the c subunit includes an aspartate or glutamate residue whose carboxyl group reacts with DCCD (Asp61 in E. coli). Mutation studies have shown this DCCD-reactive carboxyl group, which is located in the middle of the bilayer, to be essential for H+ transport through Fo.

View at right a low resolution, partial structure of yeast F1 with the central stalk and attached Fo c subunits (D. Stock, A. Leslie, & J. Walker, 1999, PDB file 1QO1). Display as backbone and color chain. Question: How many c subunits are in the Fo c-ring? Visualize the aspartate residue near the middle of one transmembrane segment of each c subunit.

An atomic resolution structure of the complete ATP Synthase, including F1 and Fo with peripheral as well as central stalks, has not yet been achieved. However, partial or complete structures of individual protein constituents, mutational studies, and evidence for inter-subunit interactions have defined the roles of most subunits. The image at right, depicting models of mitochondrial and bacterial ATP Synthase subunit structure, was provided by Dr. John Walker. Keep in mind that some equivalent subunits from different organisms are assigned different names. The proposed "rotor" consists of the ring of 10 c subunits, plus the central stalk (subunits γ, δ, & ε in the mitochondrial enzyme; or γ & ε in E. coli). The proposed "stator" consists of the 3 α and 3 β F1 subunits, the a subunit of Fo, and a peripheral stalk that connects these. The peripheral stalk consists of two b subunits and δ in E. coli, or subunits b, d, F6, and OSCP in bovine mitochondria.

Mitochondrial ATP Synthase / E. coli ATP Synthase

The b subunit includes a membrane anchor, one transmembrane α-helix in E. coli and two in mammalian F1Fo, that interacts with the intramembrane a subunit. A polar, α-helical domain of b extends out from the membrane.
OSCP, which is homologous to the E. coli δ subunit, interacts with the protruding end of the b subunit and with the distal end of an F1 α subunit. This linkage, along with interactions of the b subunit with residues on the surface of F1, is postulated to hold back the ring of α and β subunits, keeping it from rotating along with the central stalk.

The a subunit of Fo (271 amino acid residues in E. coli) is predicted, e.g., from hydropathy plots, to include several transmembrane α-helices. It has been proposed that the intramembrane a subunit contains two half-channels or proton wires (each a series of protonatable groups or embedded water molecules) that allow passage of protons between the two membrane surfaces and the interior of the bilayer. Protons may be relayed from one half-channel or proton wire to the other only via the DCCD-sensitive carboxyl group of a c-subunit. Recall that the essential carboxyl group of each c-subunit (Asp61 in E. coli) is located half way through the membrane (see above). An essential arginine residue on one of the transmembrane a-subunit α-helices has been identified as the group that accepts a proton from Asp61 and passes it to the exit channel. As the ring of 10 c subunits rotates, the c-subunit carboxyls relay protons between the 2 a-subunit half-channels. This allows H+ gradient-driven H+ flux across the membrane to drive the rotation. It has been proposed that rotation of the ring of c subunits may result from concerted swiveling movements of the c-subunit helix that includes the DCCD-sensitive Asp61 and of the transmembrane a-subunit helices having the residues that transfer H+ to or from Asp61, as protons are passed from or to each half-channel. See also Fig. 22-43 p. 832.

Copyright © 1998-2007 by Joyce J. Diwan. All rights reserved.
Most of us are familiar with Gray Squirrels, those critters that make "kuk, kuk, kuk" sounds and flick their tails around our back yards. Like all rodents, Gray Squirrels have two pairs of chisel-shaped incisors. Squirrels' incisors grow five to six inches each year. Gnawing wears them down to "normal" length. The animals rely on nuts for food in cold weather, storing their snacks underground and in tree cavities. Squirrels have an amazing ability to locate buried nuts using their sense of smell, retrieving at least 85% of their underground eats. Sciurus carolinensis - page includes species description, phylogeny, skull illustrations, geographic distribution with range maps of North Carolina and the Great Smoky Mountains National Park, habitat, and conservation biology. Part of the Discover Life Web site. The Eastern Gray Squirrel is also one of North Carolina's State Symbols. Right photo: Hoss
Comment: 19:34 - 20:39 (01:05) Source: Annenberg/CPB Resources - Earth Revealed - 21. Groundwater Keywords: groundwater, resource, oil, hydrocarbon, energy, life, replenishment, infiltration, well Our transcription: The great irony about groundwater is how little attention it often gets in discussions about dwindling resources. We worry about running out of oil, and there's no question that it's of great importance for our mobile economy. But when all is said and done, life could go on even without hydrocarbons. There are, after all, alternative sources of energy. But when it comes to water, there is no substitute. Without water, life is simply not possible. Fortunately, groundwater resources are not as limited or irreplaceable, for groundwater is more rapidly replenished by nature than is oil. But the time scale of natural replenishment is very long, at least by human standards. At the current level of infiltration it would take thousands of years to restore the water supply that has been pumped from beneath many cities in but a fraction of that time.
Science Fair Project Encyclopedia

Examples of monomers are hydrocarbons such as those of the alkane, alkene, and alkyne homologous series. Other hydrocarbon monomers such as styrene and ethene form polymers used to make plastics like polystyrene and polyethene. Amino acids are natural monomers, and polymerize to form proteins. Glucose monomers can also polymerize to form starch, amylopectin and glycogen polymers. The polymerization reaction is known as a dehydration or condensation reaction (due to the formation of water (H2O) as one of the products): a hydrogen atom and a hydroxyl (-OH) group are lost to form H2O, and an oxygen atom forms the bond between each pair of monomer units. Note that short polymers built from monomers can also be called dimers, trimers, tetramers, pentamers, octamers, 20-mers, etc. if they have 2, 3, 4, 5, 8, or 20 monomer units, respectively. Any number of monomer units may be indicated by the appropriate prefix, e.g., a decamer being a chain of 10 monomer units. Larger numbers are often stated in English in lieu of Greek. The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details
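As a sketch of this condensation reaction, the idealized overall equation for glucose polymerizing to a starch-like chain (neglecting the end groups of a finite chain) can be written as:

```latex
% Idealized condensation polymerization of glucose:
% each glycosidic linkage formed releases one molecule of water.
n\,\mathrm{C_6H_{12}O_6} \;\longrightarrow\; (\mathrm{C_6H_{10}O_5})_n \;+\; n\,\mathrm{H_2O}
```

Note that the atom counts balance on both sides (6n C, 12n H, 6n O), which is a quick check that one H2O is lost per monomer incorporated.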
Peppered moth (Biston betularia)
|Size||Adult wingspan: 45-62 mm (2)|
Caterpillar length: up to 60 mm (3)
Common and widespread (2). In its typical form, the Peppered Moth has a pepper-and-salt camouflage pattern. In some areas it also has a sooty black or 'melanic' form known as carbonaria (2) (4). The long, stick-like caterpillar may be various shades of brown or green, with small warts and projections that resemble bark. The head is deeply notched (3). This species is widespread throughout most of the British Isles and often fairly numerous (2). The melanic form is most frequent in the industrial areas of central Scotland, northern England, the Midlands and London, but is currently declining (5). Found in woodland, hedgerows, parks and gardens, even in urban areas (6). This species is single-brooded, with a protracted emergence. Adults are on the wing from May into August. Females lay their eggs in large batches, but the newly hatched caterpillars soon disperse. They spin silk threads and float off downwind until they land again by chance. Fortunately they can eat a very wide range of deciduous trees and shrubs. Caterpillars feed only at night, and are full-grown in September. The pupal stage overwinters in the soil and adults emerge the following spring (6). The Peppered Moth has been widely used as a textbook example of evolution by natural selection. During the industrial revolution, sooty deposits darkened much of the habitat. The melanic form of the moth was first recorded in Manchester in 1848. Within 50 years it had almost replaced the typical form both there and in other industrial areas (5). Classic experiments carried out by Kettlewell during the 1950s suggested that bird predation was the crucial factor. The melanic form was better camouflaged when resting on sooty tree trunks and branches than was the paler typical form, hence it survived better.
Conversely, the typical form was at an advantage in unpolluted areas where the tree bark was covered in lichens. In recent years, various doubts have been cast on this simplistic explanation, and, less fairly, on the quality of Kettlewell's work (5) (7). While his pioneering experiments were undoubtedly flawed when judged by modern standards, his basic premise is still accepted by most evolutionary biologists, although the full picture may well have been more complicated. Following the Clean Air Acts, introduced from 1964 onwards, smoke pollution and soot deposition have been greatly reduced. The melanic form of the Peppered Moth has now lost its advantage and undergone a dramatic decline in frequency. It is now scarce in areas where it previously dominated and may soon disappear completely. Whatever the finer details of its rise and fall, the carbonaria story is a fascinating one. Not currently threatened. No conservation action has been targeted at this species. Enjoying Moths by Roy Leverton (Poyser). Hooper, J. (2002) Of Moths and Men: Intrigue, tragedy and the Peppered Moth. Fourth Estate, London. Information supplied and authenticated by Roy Leverton with the support of the British Ecological Society:
- Pupal stage: stage in an insect's development, when huge changes occur that reorganise the larval form into the adult form. In butterflies the pupa is also called a chrysalis.
- Single-brooded: (also known as 'univoltine') insect life cycle that takes 12 months to complete, and involves a single generation. The egg, larva, pupa or adult overwinters as a dormant stage.
- National Biodiversity Network Species Dictionary (March 2003): http://www.nhm.ac.uk/nbn/
- Skinner, B. (1984) Colour identification guide to moths of the British Isles. Viking, Middlesex.
- Carter, D.J. & Hargreaves, B. (1986) A field guide to caterpillars of butterflies and moths in Britain and Europe. William Collins Sons and Co. Ltd., London.
- Leverton, R. (2001) Enjoying moths.
T & AD Poyser, Ltd., London.
- Majerus, M.E.N. (1998) Melanism: Evolution in Action. Oxford University Press, Oxford.
- Leverton, R. (2004). Pers. comm.
- Hooper, J. (2002) Of Moths and Men: Intrigue, tragedy and the Peppered Moth. Fourth Estate, London.
- Paper, being a light and foldable raw material, is a cost-efficient and simple means of generating electrically conducting structures. Paper is becoming a high-tech material. Researchers at the Max Planck Institute of Colloids ... Read More
- A new semiconductor device capable of emitting two distinct colours has been created by a group of researchers in the US, potentially opening up the possibility of using light-emitting diodes (LEDs) universally for cheap ... Read More
- An Australian team led by researchers at the University of New South Wales has achieved a breakthrough in quantum science that brings the prospect of a network of ultra-powerful quantum computers connected via a ... Read More
- Applying femtosecond x-ray methods, researchers at the Max Born Institute in Berlin, Germany and the Ecole Polytechnique Federale de Lausanne, Switzerland observed an extremely fast collective electron transfer of ~100 molecular ions after excitation ... Read More
- The application of light for information processing opens up a multitude of possibilities. However, to be able to adequately use photons in circuits and sensors, materials need to have particular optical and mechanical properties. Researchers ... Read More
- ... intramolecular or intermolecular. Here we report results on radical cation stability and reactivity for possible redox shuttles, in order to understand the possible mechanisms for the shuttle reaction in batteries. DFT design and study of ... Read More
- By introducing individual silicon atom 'defects' using a scanning tunnelling microscope, scientists at the London Centre for Nanotechnology have coupled single atoms to form quantum states. Read More
- A team of scientists from the University of California, Los Angeles (UCLA) and Northwestern University has produced 3-D images and videos of a tiny platinum nanoparticle at atomic resolution that reveal new details of ... Read More
- Many discoveries in physics came as a big surprise, for example the phenomenon that some materials lose almost all their electrical resistance at low temperatures, or that others become superconductors at unexpectedly high temperatures. In ... Read More
- Micrometer-level naked-eye detection of cesium ions, a major source of contamination in the vicinity of radioactive leaks, is demonstrated in a material developed by researchers in Japan. Read More
- Researchers at the Vienna University of Technology show that a recently discovered class of materials can be used to create a new kind of solar cell. Read More
- Researchers in Japan have developed a way to detect caesium contamination on a scale of millimetres, enabling the detection of small areas of radioactive contamination. Read More
- An MIT researcher has developed a technique that provides a new way of manipulating heat, allowing it to be controlled much as light waves can be manipulated by lenses and mirrors. Read More

Please recommend us on Facebook, Twitter and more: Tell us what you think of Chemistry 2011 -- we welcome both positive and negative comments. Have any problems using the site? Questions? Chemistry2011 is an informational resource for students, educators and the self-taught in the field of chemistry. We offer resources such as course materials, chemistry department listings, activities, events, projects and more along with current news releases. The history of the domain extends back to 2008 when it was selected to be used as the host domain for the International Year of Chemistry 2011 as designated by UNESCO and as an initiative of IUPAC that celebrated the achievements of chemistry. You can learn more about IYC2011 by clicking here.
With IYC 2011 now over, the domain is currently under redevelopment by The Equipment Leasing Company Ltd. Are you interested in listing an event or sharing an activity or idea? Perhaps you are coordinating an event and are in need of additional resources? Within our site you will find a variety of activities and projects your peers have previously submitted or which have been freely shared through creative commons licenses. Here are some highlights: Featured Idea 1, Featured Idea 2. Ready to get involved? The first step is to sign up by following the link: Join Here. Also don’t forget to fill out your profile including any professional designations.
|Are Birds Really Dinosaurs?|
Evidence presented on this site is overwhelmingly in favor of birds being the descendants of a maniraptoran dinosaur, probably something similar (but not identical) to a small dromaeosaur. Dr. Jacques Gauthier created the first well-accepted, detailed phylogeny of the diapsids. His work provided strong, compelling support for the theory that birds are theropod dinosaurs. The development of the theory is traced and a list of twenty major skeletal characteristics the first birds shared with many coelurosaurian dinosaurs is included. The site contains many active links for further study.
Intended for grade levels:
Type of resource:
Technical requirements: No specific technical requirements, just a browser required
Cost / Copyright: Copyright 1996-2000 by The Museum of Paleontology of The University of California, Berkeley; the Regents of the University of California; and The Paleontological Society. No part of the referring document residing on the server may be reproduced or stored in a retrieval system without prior written permission of the publisher, except for educational purposes, and in no case for profit.
DLESE Catalog ID: DLESE-000-000-005-091
Resource contact / Creator / Publisher: Author: John R Hutchinson
Earthquakes In Arkansas

Numerous earthquakes occur every year throughout the State of Arkansas, but most go unnoticed. Earthquakes that are felt can be startling, and serve as good reminders that Arkansas is located near one of the most hazardous earthquake zones in the country. Earthquakes have been documented in Arkansas as far back as 1699, by missionaries traveling down the Mississippi River near Helena (Phillips County), Arkansas. Although it is uncommon for major earthquakes to occur far away from active tectonic boundaries, earthquakes associated with the New Madrid seismic zone (NMSZ), an active earthquake zone extending from Cairo, Illinois, to Marked Tree (Poinsett County), Arkansas, have been some of the largest earthquakes ever to strike North America.

What Causes An Earthquake?

Earthquakes are caused by movement along geologic faults, or fractures in the Earth's crust. When a fault moves, energy is released and transfers through the earth, causing the shaking that is experienced during an earthquake. Arkansas has hundreds, if not thousands, of faults. Most of these faults are considered inactive. However, faults associated with the New Madrid seismic zone are active, and deeply buried beneath many layers of unconsolidated sediment and sedimentary rock, making them almost impossible to identify on the Earth's surface. These faults exist within a failed rift zone, known as the Reelfoot Rift, which developed in the Earth's crust over 600 million years ago.

Types Of Faults

Strike Slip Fault
|If the movement of a fault is predominately horizontal, the fault is considered a strike-slip fault.
The San Andreas Fault zone is one of the most famous examples of a strike-slip fault in the United States.|

Dip Slip Faults
|If the movement along a fault is predominately vertical, the fault is considered to be a dip-slip fault; in other words, the displacement occurs along the dip plane of the fault. There are two main types of dip-slip faults: normal faults and reverse faults. In order to determine the differences between dip-slip faults it's important to understand the terms hanging wall and footwall.|
|Normal faults occur when the footwall is displaced upward with respect to the hanging wall.|
|Reverse faults occur when the hanging wall is displaced upward with respect to the footwall.|

Where Do Earthquakes Occur?

Most earthquakes occur at plate boundaries. Earthquakes associated with faults on plate boundaries are called interplate earthquakes. About 5% of earthquakes are intraplate earthquakes and occur in the interior of a tectonic plate.

There are two scales in common use that give some measure of the size of an earthquake. These scales are quite commonly confused but measure very different parameters of the seismic event. The first scale, and the one most commonly heard on the news, is a magnitude scale. The magnitude scale was first developed in the 1930s and is an objective instrumental measurement. It is based on the amplitude and amount of displacement of an instrument record trace, calibrated and corrected with known distance, magnification and instrument factors. The magnitude scale is a logarithmic scale, meaning that each unit of increase (1, 2, 3…) of the scale value indicates a ten-fold increase in the value being measured. Therefore, if a magnitude 3 earthquake shows a record trace displacement of 1 mm, then a magnitude 4 earthquake will show a trace displacement of 10 mm. Each increase in magnitude corresponds with about 33 times more released energy. For example, a magnitude 1 earthquake is equal to 56 kilograms of explosive energy.
A magnitude 2 earthquake is then equal to 1,800 kilograms of explosive energy (approximately 56 times 33). A magnitude 7 earthquake releases 56,000,000,000 kilograms of explosive energy, compared to a magnitude 8 earthquake, which releases 33 times more energy: approximately 1,800,000,000,000 kilograms! As one can see, the amount of energy released drastically increases the higher the magnitude of the earthquake.

|This diagram was produced in cooperation with the USGS and the University of Memphis.|

The other scale often used to indicate earthquake size is the Modified Mercalli Intensity scale. Intensity is a qualitative measure of the strength of ground shaking at a particular site. It is a subjective scale relying on the observations of people trained to relate the earthquake effects to a numeric scale from I to X+ (Roman numerals are used to distinguish this as an empirical scale). Because this is an observational scale, the intensity will decrease as a function of the distance from the epicenter. Most people (but not all) in an area experiencing Intensity III effects will just feel the earthquake shock. Intensity VII areas are indicated when damage is slight to moderate. Some people may have found it difficult to stand during the quake; chimneys, windows and plaster walls will be cracked; furniture may have overturned. Although damage may be moderate in an area of Intensity VII, it will be mostly architectural damage, i.e. it will look bad but almost all damage will be superficial. Beyond Intensity VII we start seeing structural damage and collapse of some buildings. At first only poorly designed structures are destroyed, but as the Intensity level reaches X, damage increases significantly.
|The Modified Mercalli Intensity Scale|
|MMI Value||Perceived Shaking||Potential Damage||Full Description|
|I.||Not Felt||None||Not felt.|
|II.||Weak||None||Felt by persons at rest, on upper floors, or favorably placed.|
|III.||Weak||None||Felt indoors. Hanging objects swing. Vibration like passing of light trucks. May not be recognized as an earthquake.|
|IV.||Light||None||Hanging objects swing. Vibration like passing of heavy trucks; or sensation of a jolt like a heavy ball striking the walls. Standing motor cars rock. Windows, dishes, doors rattle. Glasses clink. Crockery clashes. In the upper range of IV, wooden walls and frames creak.|
|V.||Moderate||Very Light||Felt outdoors; direction estimated. Sleepers wakened. Liquids disturbed, some spilled. Small unstable objects displaced or upset. Doors swing, close, open. Shutters, pictures move. Pendulum clocks stop, start, change rate.|
|VI.||Strong||Light||Felt by all. Many frightened and run outdoors. Persons walk unsteadily. Windows, dishes, glassware broken. Knickknacks, books, etc., off shelves. Pictures off walls. Furniture moved or overturned. Weak plaster and masonry cracked. Trees, bushes shaken (visibly, or heard to rustle).|
|VII.||Very Strong||Moderate||Difficult to stand. Noticed by drivers of motor cars. Furniture broken. Damage to masonry, including cracks. Weak chimneys broken at roof line. Fall of plaster, loose bricks, stones, tiles, cornices (also unbraced parapets and architectural ornaments). Waves on ponds; water turbid with mud. Small slides and caving in along sand or gravel banks. Large bells ring. Concrete irrigation ditches damaged.|
|VIII.||Severe||Moderate/Heavy||Steering of motor cars affected. Damage to masonry. Twisting, fall of chimneys, factory stacks, monuments, towers, elevated tanks. Frame houses moved on foundations if not bolted down. Branches broken from trees. Changes in flow or temperature of springs and wells. 
Cracks in wet ground and on steep slopes.|
|IX.||Violent||Heavy||Some masonry destroyed, heavily damaged or completely collapsed. General damage to foundations. Frame structures, if not bolted, shifted off foundations. Frames racked. Serious damage to reservoirs. Underground pipes broken. Conspicuous cracks in ground. In alluvial areas sand and mud ejected, earthquake fountains, sand craters.|
|X.||Extreme||Very Heavy||Most masonry and frame structures destroyed with their foundations. Some well-built wooden structures and bridges destroyed. Serious damage to dams, dikes, embankments. Large landslides. Water thrown on banks of canals, rivers, lakes, etc. Sand and mud shifted horizontally on beaches and flat land. Rails bent slightly.|

In general, earthquakes smaller than magnitude 2.5 will not be felt in most situations; they are simply too small and will certainly do no damage. Earthquakes between magnitude 2.5 and 5 will be increasingly felt but generally do little damage. When earthquakes scale to over magnitude 5, their effects become highly significant. Earthquakes of magnitude 5 to 6 will almost always cause some damage, but most of it will be architectural rather than structural. An earthquake of magnitude 6 can cause significant architectural damage and some structural damage and collapse of poorly-built structures. The area of damage will tend to be roughly bulls-eye shaped, with the greatest damage in the immediate epicentral area and lessening damage radially away from that area. This general pattern will be distorted more or less by variations in the nature of the bedrock and topography of the region in relation to the source of the earthquake and the exact manner in which it releases its energy.
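The energy scaling described above can be sketched in a few lines. This assumes the standard Gutenberg-Richter energy relation, log10(E) = 1.5 M + 4.8 (E in joules), which gives a factor of 10^1.5 ≈ 32 per whole magnitude unit, close to the ~33 quoted in this article; it is an illustrative sketch, not the article's own formula.

```python
# Gutenberg-Richter energy relation (E in joules):
#   log10(E) = 1.5 * M + 4.8
def quake_energy_joules(magnitude):
    return 10 ** (1.5 * magnitude + 4.8)

# Energy ratio between consecutive whole magnitudes:
step = quake_energy_joules(7.0) / quake_energy_joules(6.0)
print(round(step, 1))  # 31.6 -- roughly a 32-fold jump per unit

# By contrast, the recorded trace amplitude grows only 10x per unit,
# which is why the "ten-fold" and "~33-fold" figures differ.
```

Running this for any pair of consecutive magnitudes gives the same ratio, which is the point of a logarithmic scale: the step size in energy is constant in multiplicative terms.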
An earthquake of magnitude 7 will cause near-total devastation in the epicentral area and cause structural damage and collapse of poorly built structures over a much larger area (remember, this earthquake releases roughly 30 times more energy than a magnitude 6 event). The great earthquakes, those over magnitude 8, will destroy most of the infrastructure in a very large area, several tens of miles in diameter, and can cause structural damage and collapse of poorly built structures as much as 100 miles away. Great earthquakes can still be felt many hundreds of miles away.

Illustration courtesy of the USGS

Two main types of seismic waves are generated in an earthquake. Body waves are seismic waves that travel through the interior, or body, of the earth. There are two main types of body waves: primary waves and secondary waves. Primary waves, or P waves, are compressional waves and travel faster than secondary, or S, waves. As a result, P waves are the first to be detected by a seismic station after an earthquake has occurred, hence “primary”. Secondary waves, or S waves, move rock particles side to side, perpendicular to the direction the wave is traveling. S waves travel slower than P waves and are recorded by seismic stations after the P wave arrival. Surface waves travel through the very outer layers of the earth’s crust and arrive at a seismic station after the P and S waves. The two main types of surface waves are Rayleigh and Love waves. Rayleigh waves travel in a circular or rolling motion through the earth’s crust, similar to that of an ocean wave on the water. Love waves travel in a horizontal, side-to-side motion through the outer layer of the earth’s crust.

Determining an Earthquake Epicenter

The amount of time that passes between the arrival of the primary wave and the secondary wave allows seismologists to determine the distance from the seismic station to the earthquake epicenter.
When a seismic station records a P wave arrival, the longer it takes for the S wave to arrive, the farther away the earthquake epicenter.
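The S-P lag can be turned into a distance with a one-line formula: both waves cover the same distance d, so the lag equals d/vs - d/vp. The velocities below (about 6 km/s for P waves and 3.5 km/s for S waves in the crust) are typical textbook values, not figures from this article; a minimal sketch:

```python
def epicentral_distance(sp_lag_s, vp_km_s=6.0, vs_km_s=3.5):
    """Estimate the distance (km) to an epicenter from the S-P arrival lag.

    Both waves travel the same distance d, so the lag is
    d/vs - d/vp, which rearranges to d = lag / (1/vs - 1/vp).
    The crustal velocities are assumed, illustrative values.
    """
    return sp_lag_s / (1.0 / vs_km_s - 1.0 / vp_km_s)

# A 10-second S-P lag puts the epicenter roughly 84 km away
print(f"{epicentral_distance(10.0):.0f} km")
```

In practice, one station gives only a distance, which is why readings from at least three stations are triangulated to pin down the epicenter.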
Some bacteria have hair- or whip-like appendages called flagella used to ‘swim’ around. Others produce thick coats of slime and ‘glide’ about. Some stick out thin, rigid spikes called fimbriae to help hold them to surfaces. Some contain little particles of minerals that orient with the planet’s magnetic fields to help the bacteria figure out whether they’re swimming up or down. Some bacteria move about their environment by means of long, whip-like structures called flagella. They rotate their flagella like tiny outboard motors to propel themselves through liquid environments. They may also reverse the direction in which their flagella rotate so that they tumble about in one place. Other bacteria secrete a slime layer and ooze over surfaces like slugs. Others are fairly stationary.
What if low-lying Gulf Coast communities could gauge the impact of future storms? Well, now there’s an app for that. Under the direction of Dr. Jorge Brenner, associate director of marine science, The Nature Conservancy in Texas has developed a web portal and series of tools that will help coastal managers, scientists, the conservation community and people within the Gulf of Mexico Governor’s Alliance predict how hurricanes, storm surges and sea level will affect their cities and coastal habitats in the future. Nearly 90 years into the future, to be exact; this online tool utilizes maps and models along with different inundation levels, ranging from a few inches to several feet, to assess coastal conditions in the years 2025, 2050, 2075 and 2100. Brenner and his team ran analyses in five different locations in three different Gulf Coast states: Such a tool couldn’t come at a better time; the Gulf of Mexico is a workhorse, pumping more than $230 billion a year into our national economy and supporting upwards of 20 million jobs. If the five Gulf Coast states (Texas, Louisiana, Mississippi, Alabama and Florida) were considered a country, they would comprise the seventh largest economy in the world. But despite the region’s economic power, we’ve seen the devastation Mother Nature can bring. The National Oceanic and Atmospheric Administration predicts sea rise of three millimeters per year in the Gulf of Mexico, an important factor in future storm preparedness. The results of Brenner’s model, along with scenario maps and online tools, link to the Conservancy’s Gulf of Mexico Resilience Decision Support Tool, and this suite of tools can, in turn, be combined with geographic information systems data to understand the impacts of those rising sea levels.
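For a rough sense of scale, NOAA's three-millimeters-per-year figure can be projected onto the tool's four planning horizons. This sketch assumes a constant linear rate and a 2010 baseline year, both simplifications; the Conservancy's actual model works from inundation scenarios, not this arithmetic:

```python
RATE_MM_PER_YEAR = 3.0   # NOAA estimate for the Gulf of Mexico
BASELINE_YEAR = 2010     # assumed reference year for this sketch

def projected_rise_mm(year, rate=RATE_MM_PER_YEAR, baseline=BASELINE_YEAR):
    """Cumulative sea-level rise (mm) at `year`, assuming a constant rate."""
    return rate * (year - baseline)

# The four horizons assessed by the online tool
for year in (2025, 2050, 2075, 2100):
    print(year, f"{projected_rise_mm(year):.0f} mm")
```

Even this linear lower bound gives about 270 mm (roughly ten inches) by 2100, before storm surge is added on top.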
Source Newsroom: Cornell University Rachel Bean, professor of astronomy at Cornell University, comments on new data released by the European Space Agency offering a detailed map of relic radiation from the Big Bang. “The European Space Agency's Planck satellite has measured the oldest cosmic fossil, the cosmic microwave background (CMB) radiation, with exquisite precision. Its map of the sky is markedly better than the results of its predecessor, NASA's WMAP satellite, on which I worked, which itself transformed our understanding of this relic astronomical signal. “The CMB map is a snapshot of the universe 380,000 years after the Big Bang. This imprint provides physicists with a singular insight into the universe at a time when it was governed by energies and physical laws inaccessible to experiments here on Earth. Planck's results will have a powerful influence in guiding theories of the early universe, which physicists hope to connect to fundamental laws of physics using theories such as string theory. “The CMB maps also give a record of what the photons have encountered along the way, from the very first stars, to the super-heated, million-degree-Fahrenheit gas in clusters of galaxies, millions of light years across. Now, the Planck survey has enabled us to read this historical record of the universe more precisely than ever before. One example is an unparalleled detection of how the CMB light is deflected as it feels the gravitational pull of galaxies and galaxy clusters it's passed on its way to us, ‘gravitational lensing.’ How the light's path is distorted gives a direct insight into the properties of gravity and how it relates to normal and dark matter and their underlying distribution. “The Planck results make a significant improvement in our understanding of the matter in our universe and how the universe came into being. 
It will not be alone, however, in its impressive combination of precision and sky coverage; a number of upcoming astrophysical surveys will refine the CMB maps further, while others will provide censuses of galaxies and galaxy clusters, and their gravitational influence, across all observable space and time.” Contact Syl Kacapyr for information about Cornell's TV and radio studios.
As has been previously mentioned in this unit, a sound wave is created as a result of a vibrating object. The vibrating object is the source of the disturbance that moves through the medium. The vibrating object that creates the disturbance could be the vocal cords of a person, the vibrating string and soundboard of a guitar or violin, the vibrating tines of a tuning fork, or the vibrating diaphragm of a radio speaker. Any object that vibrates will create a sound. The sound could be musical or it could be noisy; but regardless of its quality, the sound wave is created by a vibrating object. Nearly all objects, when hit or struck or plucked or strummed or somehow disturbed, will vibrate. 
If you drop a meter stick or pencil on the floor, it will begin to vibrate. If you pluck a guitar string, it will begin to vibrate. If you blow over the top of a pop bottle, the air inside will vibrate. When each of these objects vibrates, it tends to vibrate at a particular frequency or a set of frequencies. The frequency or frequencies at which an object tends to vibrate when hit, struck, plucked, strummed or somehow disturbed is known as the natural frequency of the object. If the amplitudes of the vibrations are large enough and if the natural frequency is within the human frequency range, then the vibrating object will produce sound waves that are audible. All objects have a natural frequency or set of frequencies at which they vibrate. The quality or timbre of the sound produced by a vibrating object is dependent upon the natural frequencies of the sound waves produced by the objects. Some objects tend to vibrate at a single frequency and they are often said to produce a pure tone. A flute tends to vibrate at a single frequency, producing a very pure tone. Other objects vibrate and produce more complex waves with a set of frequencies that have a whole number mathematical relationship between them; these are said to produce a rich sound. A tuba tends to vibrate at a set of frequencies that are mathematically related by whole number ratios; it produces a rich tone. Still other objects will vibrate at a set of multiple frequencies that have no simple mathematical relationship between them. These objects are not musical at all and the sounds that they create could be described as noise. When a meter stick or pencil is dropped on the floor, it vibrates with a number of frequencies, producing a complex sound wave that is clanky and noisy. Because the frequency of a wave equals the speed of the wave divided by its wavelength, an alteration in either speed or wavelength will result in an alteration of the natural frequency. 
The role of a musician is to control these variables in order to produce a given frequency from the instrument that is being played. Consider a guitar as an example. There are six strings, each having a different linear density (the wider strings are more dense on a per meter basis), a different tension (which is controllable by the guitarist), and a different length (also controllable by the guitarist). The speed at which waves move through the strings is dependent upon the properties of the medium - in this case the tightness (tension) of the string and the linear density of the strings. Changes in these properties would affect the natural frequency of the particular string. The vibrating portion of a particular string can be shortened by pressing the string against one of the frets on the neck of the guitar. This modification in the length of the string would affect the wavelength of the wave and in turn the natural frequency at which a particular string vibrates. Controlling the speed and the wavelength in this manner allows a guitarist to control the natural frequencies of the vibrating object (a string) and thus produce the intended musical sounds. The same principles can be applied to any string instrument - whether it is the harp, harpsichord, violin or guitar. As another example, consider the trombone with its long cylindrical tube that is bent upon itself twice and ends in a flared end. The trombone is an example of a wind instrument. The tube of any wind instrument acts as a container for a vibrating air column. The air inside the tube will be set into vibration by a vibrating reed or the vibrations of a musician's lips against a mouthpiece. While the speed of sound waves within the air column is not alterable by the musician (they can only be altered by changes in room temperature), the length of the air column is. For a trombone, the length is altered by pushing the tube outward away from the mouthpiece to lengthen it or pulling it in to shorten it. 
This causes the length of the air column to be changed, and subsequently changes the wavelength of the waves it produces. And of course, a change in wavelength will result in a change in the frequency. So the natural frequency of a wind instrument such as the trombone is dependent upon the length of the air column of the instrument. The same principles can be applied to any similar instrument (tuba, flute, wind chime, organ pipe, clarinet, or pop bottle) whose sound is produced by vibrations of air within a tube. There are a variety of classroom demonstrations (some of which are fun and some of which are corny) that illustrate the idea of natural frequencies and their modification. A pop bottle can be partly filled with water, leaving a volume of air inside that is capable of vibrating. When a person blows over the top of the bottle, the air inside is set into vibrational motion; turbulence above the lip of the bottle creates disturbances within the bottle. These vibrations result in a sound wave that is audible to students. Of course, the frequency can be modified by altering the volume of the air column (adding or removing water), which changes the wavelength and in turn the frequency. The principle is similar to the frequency-wavelength relation of air columns; a smaller volume of air inside the bottle means a shorter wavelength and a higher frequency. A toilet paper roll orchestra can be created from different lengths of toilet paper rolls (or wrapping paper rolls). The rolls will vibrate with different frequencies when struck against a student's head. A properly selected set of rolls will result in the production of sounds that are capable of a Tony Award rendition of "Mary Had a Little Lamb." Maybe you are familiar with the popular water goblet prom trick that is often demonstrated in a Physics class. Obtain a water goblet and clean your fingers. Then gently slide your finger over the rim of the water goblet. 
If you are fortunate enough, you might be able to set the goblet into vibration by means of slip-stick friction. (It is not necessary to use a crystal goblet. It is often said that crystal goblets work better; but the trick is just as easily performed with clean fingers and an inexpensive goblet.) Like a violin bow being drawn across a string, the finger sticks to the glass molecules, pulling them apart at a given point until the tension becomes too great. The finger then slips off the glass and subsequently finds another microscopic surface to stick to; the finger pulls the molecules at that surface, slips and then sticks at another location. This process of stick-slip friction occurring at a high frequency is sufficient to set the molecules of the glass into vibration at its natural frequency. The result is enough to impress your dinner guests. Try it at home!! Perhaps you have seen a pendulum bob vibrating back and forth about its equilibrium position. While a pendulum does not produce a sound when it oscillates, it does illustrate an important principle. A pendulum consisting of a longer string vibrates with a longer period and thus a lower frequency. Once more, there is an inverse relationship between the length of the vibrating object and the natural frequency at which the object vibrates. This very relationship carries over to any vibrating instrument - whether it is a guitar string, a xylophone, a pop bottle instrument, or a kettledrum. To conclude, all objects have a natural frequency or set of frequencies at which they vibrate when struck, plucked, strummed or somehow disturbed. The actual frequency is dependent upon the properties of the material the object is made of (this affects the speed of the wave) and the length of the material (this affects the wavelength of the wave). 
It is the goal of musicians to find instruments that possess the ability to vibrate with sets of frequencies that sound musical (i.e., mathematically related by simple whole number ratios) and to vary the lengths and (if possible) properties to create the desired sounds. A physics instructor makes a water goblet sing at its natural frequency.
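The speed-and-wavelength reasoning in this section can be made concrete. For a string fixed at both ends, the fundamental frequency is f = v / (2L) with v = sqrt(T/μ); for a bottle-like air column closed at one end, it is f = v / (4L). The numeric values below are illustrative assumptions, not taken from the text:

```python
import math

def string_fundamental(length_m, tension_n, linear_density_kg_m):
    """Fundamental frequency of a string fixed at both ends: f = v / (2L)."""
    speed = math.sqrt(tension_n / linear_density_kg_m)  # wave speed in the string
    return speed / (2.0 * length_m)

def closed_tube_fundamental(length_m, sound_speed_m_s=343.0):
    """Fundamental of an air column closed at one end (pop bottle): f = v / (4L)."""
    return sound_speed_m_s / (4.0 * length_m)

# Shortening a string (fretting it) raises its natural frequency:
print(f"{string_fundamental(0.65, 70.0, 0.001):.0f} Hz")  # full-length string
print(f"{string_fundamental(0.50, 70.0, 0.001):.0f} Hz")  # fretted, shorter string
```

The same two formulas capture both demonstrations above: fretting a guitar string shortens L and raises the pitch, while adding water to a pop bottle shortens the air column and does the same.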
Hi. My name is Isabel and I am from St. Mark Catholic school. I just wanted to know- What is the main thing that you will be studying on your expedition trip? Thank you for taking your time to read this. On 6/18/2012 10:03 AM, webmaster [at] polartrec [dot] com wrote: Thanks for the question! The main thing our group is studying is the effects of earlier snowmelt and increased warming on the ecosystem. We are particularly interested in the relationships among the plants, their roots, and the soil environment they live in. One of the key elements needed for growth and decomposition is nitrogen (N), and we are focusing on the cycling of N in the system, looking to see when during the season it is most abundant, when it is least abundant, and why. The relationship between the release of carbon (C) from the soil really depends on N availability. This is a difficult relationship to study in the field, but a very important one to figure out. Knowing more about the nitrogen will help scientists better understand the effects of earlier snowmelt and/or increased warming, both phenomena that are currently occurring with climate change!
The Younger Dryas (YD) impact hypothesis is a recent theory that suggests that a cometary or meteoritic body or bodies hit and/or exploded over North America 12,900 years ago, causing the YD climate episode, extinction of Pleistocene megafauna, demise of the Clovis archaeological culture, and a range of other effects. The physical evidence interpreted as signatures of an impact event can be separated into two groups. The first group consists of evidence that has been largely rejected by the scientific community and is no longer in widespread discussion…. The second group consists of evidence that has been active in recent research and discussions:…. Over time, however, these signatures have also seen contrary evidence rather than support. In summary, none of the original YD impact signatures have been subsequently corroborated by independent tests. Of the 12 original lines of evidence, seven have so far proven to be non-reproducible. The remaining signatures instead seem to represent either (1) non-catastrophic mechanisms, and/or (2) terrestrial rather than extraterrestrial or impact-related sources. The YD impact hypothesis made a big splash at AGU in 2007, and we’ve written about it a few times since. Our assessment (in 2007) was that this would need a lot of confirmatory evidence to get accepted, and even if it was, it did not provide much explanation for other, very similar, abrupt changes in the record. In 2009, we were still skeptical and noted that “the level of proof required for this extraordinary idea will need to be extraordinarily strong”. Unfortunately, as this paper makes clear, neither a lot of confirmatory evidence nor extraordinarily strong proofs have been forthcoming. This paper is unlikely to be the very last word on the subject, but it is likely to be the last time the mainstream paleo-climatologists are going to pay this much heed unless some really big new piece of evidence comes to light. 
However, while the specifics of this particular hypothesis and its refutation are interesting in many ways… The YD impact hypothesis provides a cautionary tale for researchers, the scientific community, the press, and the broader public. Let’s be specific… … since there are indeed lessons that can be drawn here:
- ‘Bold’ ideas can get published and get serious people to pay attention. The claims about the YD impact were entirely at odds with mainstream views, yet taken seriously and looked at by a wide variety of other researchers.
- Like most bold ideas that initially raise skeptical eyebrows, the evidence for this one decreased with time. This is not inevitable, but it is not unusual.
- Science is self-correcting because other scientists take the time to look for new evidence backing up or refuting initial ideas, and go back and re-interpret what was previously done.
- Even eventually discarded ideas can provide abundant directions for good science to get done. For instance, a fair amount of research into nanodiamonds has occurred because of the interest in this idea.
- The media loves the ‘radical new idea’ presented by ‘outsider’ scientists (3 documentaries on this so far, a big NYT piece). It fits a lot of the romantic archetype of what science is supposed to be about. It has controversy, narrative and outsize personalities. Whether the ideas are good or not is barely relevant.
- The Feynmanian ideal of a single scientist both proposing and refuting their own new idea is very rare. In practice, the roles of proposing and refuting are far more often done by the scientific community as a whole, not an individual.
- Scientists gain credibility for doing careful work and not going beyond the evidence in interpreting it. This is opposite to what gains readership on blogs. 
:-)

The Younger Dryas, an extremely abrupt, and still mysterious, interval of climate change, will no doubt continue to excite people across the field of paleo-climate, but we hypothesize that the impact hypothesis has had all the impact it’s going to.
| Group | 13 | Melting point | 660.323 °C, 1220.581 °F, 933.473 K |
| Period | 3 | Boiling point | 2519 °C, 4566.2 °F, 2792.15 K |
| Block | p | Density (kg m−3) | 2698 |
| Atomic number | 13 | Relative atomic mass | 26.982 |
| State at room temperature | Solid | Key isotopes | 27Al |
| Electron configuration | [Ne] 3s2 3p1 | CAS number | 7429-90-5 |
| ChemSpider ID | 4514248 | Molar heat capacity (J mol−1 K−1) | 24.2 |
| Young's modulus (GPa) | Unknown | Shear modulus (GPa) | Unknown |
| Bulk modulus (GPa) | Unknown | | |

(ChemSpider is a free chemical structure database.)

Analysis of a curious metal ornament found in the tomb of Chou-Chu, a military leader in 3rd-century China, showed it to be 85% aluminium. How it was produced remains a mystery. By the end of the 1700s, aluminium oxide was known to contain a metal, but it defeated all attempts to extract it. Humphry Davy had used electric current to extract sodium and potassium from their so-called ‘earths’ (oxides), but his method did not release aluminium in the same way. The first person to produce it was Hans Christian Oersted at Copenhagen, Denmark, in 1825, and he did it by heating aluminium chloride with potassium. Even so, his sample was impure. It fell to the German chemist Friedrich Wöhler to perfect the method in 1827 and obtain pure aluminium for the first time, by using sodium instead of potassium.

Chemistry in Its Element - Aluminium

You're listening to Chemistry in its element, brought to you by Chemistry World, the magazine of the Royal Society of Chemistry. This week, the chemical cause of transatlantic linguistic friction. Is it an um or an ium at the end? It turns out us Brits might have egg on our faces as well as a liberal smattering of what we call aluminium. Kira J. Weissman: 'I feel like I'm trapped in a tin box at 39,000 feet'. 
It's a common refrain of the flying-phobic, but maybe they would find comfort in knowing that the box is actually made of aluminium - more than 66,000 kg of it, if they're sitting in a jumbo jet. While lamenting one's presence in an 'aluminium box' doesn't have quite the same ring, there are several good reasons to appreciate this choice of material. Pure aluminium is soft. However, alloying it with elements such as copper, magnesium, and zinc dramatically boosts its strength while leaving it lightweight, obviously an asset when fighting against gravity. The resulting alloys, sometimes more malleable than aluminium itself, can be moulded into a variety of shapes, including the aerodynamic arc of a plane's wings, or its tubular fuselage. And whereas iron rusts away when exposed to the elements, aluminium forms a microscopically thin oxide layer, protecting its surface from further corrosion. With this hefty CV, it's not surprising to find aluminium in many other vehicles, including ships, cars, trucks, trains and bicycles. Happily for the transportation industry, nature has blessed us with vast quantities of aluminium. The most abundant metal in the earth's crust, it's literally everywhere. Yet aluminium remained undiscovered until 1808, as it's bound up with oxygen and silicon into hundreds of different minerals, never appearing naturally in its metallic form. Sir Humphry Davy, the Cornish chemist who discovered the metal, called it 'aluminum', after one of its source compounds, alum. Shortly after, however, the International Union of Pure and Applied Chemistry (or IUPAC) stepped in, standardizing the suffix to the more conventional 'ium'. In a further twist to the nomenclature story, the American Chemical Society resurrected the original spelling in 1925, and so ironically it is the Americans and not the British that pronounce the element's name as Davy intended. 
In 1825, the honour of isolating aluminium for the first time fell to the Danish scientist Hans Christian Øersted. He reportedly said of his prize, 'It forms a lump of metal that resembles tin in colour and sheen" - not an overly flattering description, but possibly an explanation for airline passengers' present confusion. The difficulty of ripping aluminium from its oxides - for all early processes yielded only kilogram quantities at best - ensured its temporary status as a precious metal, more valuable even than gold. In fact, an aluminium bar held pride of place alongside the Crown Jewels at the 1855 Paris Exhibition, while Napoleon is said to have reserved aluminium tableware for only his most honoured guests. It wasn't until 1886 that Charles Martin Hall, an uncommonly dogged, amateur scientist of 22, developed the first economic means for extracting aluminium. Working in a woodshed with his older sister as assistant, he dissolved aluminium oxide in a bath of molten sodium hexafluoroaluminate (more commonly known as 'cryolite'), and then pried the aluminium and oxygen apart using a strong electrical current. Remarkably, another 22 year-old, the Frenchman Paul Louis Toussaint Héroult, discovered exactly the same electrolytic technique at almost exactly the same time, provoking a transatlantic patent race. Their legacy, enshrined as the Hall-Héroult process, remains the primary method for producing aluminium on a commercial scale - currently millions of tons every year from aluminium's most plentiful ore, bauxite. It wasn't only the transportation industry that grasped aluminium's advantages. By the early 1900s, aluminium had already supplanted copper in electrical power lines, its flexibility, light weight and low cost more than compensating for its poorer conductivity. 
Aluminium alloys are a construction favourite, finding use in cladding, windows, gutters, door frames and roofing, but are just as likely to turn up inside the home: in appliances, pots and pans, utensils, TV aerials, and furniture. As a thin foil, aluminium is a packaging material par excellence, flexible and durable, impermeable to water, and resistant to chemical attack - in short, ideal for protecting a life-saving medication or your favourite candy bar. But perhaps aluminium's most recognizable incarnation is the aluminium beverage can, hundreds of billions of which are produced annually. Each can's naturally glossy surface makes an attractive backdrop for the product name, and while its thin walls can withstand up to 90 pounds of pressure per square inch (three times that in a typical car tyre), the contents can be easily accessed with a simple pull on the tab. And although aluminium refining gobbles up a large chunk of global electricity, aluminium cans can be recycled economically and repeatedly, each time saving almost 95% of the energy required to smelt the metal in the first place. There is, however, a darker side to this shiny metal. Despite its abundance in Nature, aluminium is not known to serve any useful purpose for living cells. Yet in its soluble, +3 form, aluminium is toxic to plants. Release of Al3+ from its minerals is accelerated in the acidic soils which comprise almost half of arable land on the planet, making aluminium a major culprit in reducing crop yields. Humans don't require aluminium, and yet it enters our bodies every day - it's in the air we breathe, the water we drink, and the food we eat. While small amounts of aluminium are normally present in foods, we are responsible for the major sources of dietary aluminium: food additives, such as leavening, emulsifying and colouring agents. Swallowing over-the-counter antacids can raise intake levels by several thousand-fold. 
And many of us apply aluminium-containing deodorants directly to our skin every day. What's worrying about all this is that several studies have implicated aluminium as a risk factor for both breast cancer and Alzheimer's disease. While most experts remain unconvinced by the evidence, aluminium at high concentrations is a proven neurotoxin, primarily affecting bone and brain. So, until more research is done, the jury will remain out. Now, perhaps that IS something to trouble your mind on your next long-haul flight. Researcher Kira Weissman from Saarland University in Saarbruken, Germany with the story of aluminium, and why I haven't been saying it in the way that Humphry Davy intended. Next week, talking of the way the elements sound, what about this one. There aren't many elements with names that are onomatopoeic. Say oxygen or iodine and there is no clue in the sound of the word to the nature of the element, but zinc is different - zinc, zinc, zinc, you can almost hear a set of coins falling into an old fashioned bath. It just has to be a hard metal. In use, zinc is often hidden away, almost secretive. It stops iron rusting, sooths sunburn, keeps dandruff at bay, combines with copper to make a very familiar gold coloured alloy and keeps us alive but we hardly notice it. And you can catch up with the clink of zinc with Brian Clegg on next week's Chemistry in its Element. I'm Chris Smith, thank you for listening and goodbye. Chemistry in its element is brought to you by the Royal Society of Chemistry and produced by thenakedscientists dot com. There's more information and other episodes of Chemistry in its element on our website at chemistryworld dot org forward slash elements. Mining and Sourcing data: British Geological Survey – Natural Environment Research Council. Text: John Emsley, Nature’s Building Blocks: An A-Z Guide to the Elements, Oxford University Press, 2nd Edition, 2011. 
Additional information for platinum, gold, neodymium and dysprosium obtained from Material Value Consultancy Ltd, www.matvalue.com. Data: CRC Handbook of Chemistry and Physics, CRC Press, 92nd Edition, 2011; G. W. C. Kaye and T. H. Laby, Tables of Physical and Chemical Constants, Longman, 16th Edition, 1995. Members of the RSC can access these books through our library.
Carbon footprints designed by Christian Guthier for a climate change campaign at the UN Global Climate Change conference in Poznan. It's a start, anyway. Today, the Environmental Protection Agency releases a new national greenhouse gas database, made up of self-reported data from 9 groups of polluters around the country: refineries, power plants, chemical facilities, "other industrial" facilities, landfills, metals, minerals, pulp and paper plants, and government and commercial sites. We've got plenty of those here, and it's a fun tool to play around with if you're interested in the climate impacts of industry. Though it probably won't surprise you. Let's give it up for the BP Carson refinery…which reported 3,960,504 metric tons of carbon dioxide released and 492.652 metric tons of hydrogen produced in 2010. LA County's big footprint winner! Orange County can barely compete: combined GHG emissions for AES Huntington Beach, the biggest polluter behind the Curtain, are just over 14% of that: 572,203 metric tons of greenhouse gas released in 2010. Hyperion Treatment Plant processes our biofluids. For the last 5 years, LA's been shooting them into the ground. This is always the sort of topic that makes me want to talk like Homer Simpson. When you treat sewage and spit the water out one side, a spongy, sterilized byproduct comes out the other. That's "biosolids," and for the last 5 years, LA has been testing a new way to deal with them...a way that is, in fact, "the nation’s first full scale application of deep well injection technology." Explaining what that means is complicated but cool. All of the crap we send into sewers produces 1 million pounds of biosolids in southern California. The City of Los Angeles and specifically its Hyperion Treatment Plant can raise its hand and take credit for a quarter of that..."pathogen free, exceptional quality, Grade A biosolids." Some of those biosolids get composted in Griffith Park. 
And for 11 years much of those high-quality biosolids have been trucked to a field in Kern County and spread over non-food farmland. They serve as fertilizer for Green Acres Farm, a 4,000-acre property near Bakersfield that LA bought specifically so that it could have land on which to spread solid waste. LA farms alfalfa and other feedstock grains that the city sells locally in Kern County.
An artist’s conception of a black hole gobbling a star. (Credit: NASA/CXC/M. Weiss). NASA’s Swift spacecraft caught something interesting on the night of March 28th, 2011. Launched in 2004, the spacecraft is designed to detect extragalactic x-ray and gamma-ray flashes. And what a flash it caught in GRB 110328A: a burst four billion light-years distant that peaked at a brightness one trillion times that of our own Sun. But what’s truly interesting is that the power curve seen by astronomers was consistent with a galactic-mass black hole devouring a star. Word on the astro-street from the Bad Astronomer, Phil Plait, is that a yet-to-be-released set of Hubble follow-up images of the region seems consistent with the burst occurring near the core of a distant galaxy. In addition, NASA’s Fermi satellite, which also watches for gamma-ray bursts, has detected no past activity from the galaxy in question; this was an individual event without precedent. Did astronomers witness a “death by black hole” of a star? Perhaps such an event could occur if a nearby passage of another star put the body on a doomsday orbit. An interesting side note: astronomers established a thread to track GRBs in another pair of science/astronomy blogs that you might have heard of, the Bad Astronomy/Universe Today bulletin board. Much of the initial discovery and follow-up action occurred there, a forum worth following. And they say, “What good is blogging…”
© 1997, Astronomical Society of the Pacific, 390 Ashton Avenue, San Francisco, CA 94112. They are born. They take shape. They go through a turbulent youth, and then they live out their lives in a predictable pattern. Maybe they have companions they provide for. Someday they rapidly decline and die. Stars, in many ways, are just like people. Our Sun, constant and permanent though it may seem, is no exception. Once, people regarded the Sun as a different sort of object than the stars. It ruled the day; stars adorned the night. But over the past few centuries astronomers have come to recognize that the Sun is just one middle-aged member of the vast family of stars. From far away, the Sun would look just like any other star -- a point of light. And like any other star, the Sun is mortal. The realization that the Sun is a star has done wonders for astronomy. By studying the Sun, the closest star, scientists have learned about all stars. Conversely, by studying the stars, in all their variety, scientists have learned about the past and future of the Sun. This, in turn, has told them about the past and future of life on Earth. After all, the Sun is the ultimate root of our food chain. When the Sun came into being, it provided the light and warmth needed to make Earth a hospitable place. When it dies, our planet will no longer be fit for living things. Despite the progress of astronomy over the past centuries, our knowledge of stars is by no means complete. Recent advances in astronomy, ranging from instruments capable of observing 100 stars at once to the Keck telescope with its enormous mirror 10 meters (33 feet) across (compared to Hubble Space Telescope's 2.4 meters), have joined forces with powerful new computers to propel our understanding of how stars are born, evolve, and die.
That Time Bomb in the Middle
Activity: Pinhole Protractor
The womb.
The Orion nebula, long a favorite of backyard observers, is the tip of a huge cloud of gas and dust floating in interstellar space. Within the cloud are dense lumps where stars are sired. Photo courtesy of Lick Observatory. The Sun's extreme age -- about 4.6 billion years -- came as a surprise to most scientists. Astronomers already knew the basic facts about the Sun. It is simply a huge ball of gas, mostly hydrogen, held together by its own powerful gravity; it gives off light because of some source of energy within it. Astronomers thought that this source of energy was a slow but steady contraction of the Sun under the force of gravity, much as a house slowly settles. But this source of energy would only have kept the Sun alive for 20 million years. Other sources of energy -- say, a huge fire -- would burn out even more quickly. The solution to this age discrepancy is an example of how leaps in scientific understanding frequently involve insights from disparate fields of study. In the years after World War I, British astronomer Arthur Eddington put together three ideas and boldly proposed a new energy source for the Sun. First, astronomers knew that the Sun has to be extremely hot and dense in its center if it is to support its own weight. Gas at a high temperature exerts a strong pressure, and this holds up the outer layers of the Sun. Second, physicists had recently compared the weight of four atoms of hydrogen with that of one helium atom. Both the hydrogen quadruplet and the helium atom are composed of essentially the same number of subatomic particles. Yet the helium weighs less. Third, Albert Einstein's new theory of relativity showed that matter can be converted into energy (E=mc2). At first glance, these three ideas might seem totally unrelated. But from them, Eddington deduced that the Sun's energy source was a process then unknown on Earth: the nuclear fusion of hydrogen to helium. The word nuclear has gotten a bad rap. Normally people utter it in the same breath as mass death. 
But in happier circumstances, nuclear processes are responsible for maintaining all life on Earth. Deep in the hot and dense core of the Sun, hydrogen atoms are squeezed together, or fused, into helium atoms -- roughly akin to crunching a few baseballs together and getting a football. A helium atom has less mass than the hydrogen atoms from which it was created, and this missing mass turns into energy. Few other schemes can generate as much energy as nuclear fusion. A small amount of hydrogen can produce an immense amount of energy -- which is why nuclear bombs are so destructive, and why the Sun can keep going for billions of years. The baby. Swaddled in a disc of dust and gas not much larger than our solar system, this newborn star is a recent addition to the Orion family. The proud parents have the Hubble Space Telescope to thank for the portrait. Photo courtesy of Chris O’Dell of Rice University and NASA. The closest example of such a stellar nursery is in the Orion constellation, a pattern of bright stars easily visible from the Northern Hemisphere in winter. For thousands of years, the pattern has reminded many viewers of a person with one raised arm, wearing a belt. If you look below the belt, there are four bright, blue stars called the Trapezium. If you look even more closely with binoculars, a fuzzy patch called the Orion nebula becomes visible. This is a stellar nursery -- an enormous, lumpy cloud of cold gas and dust which is turning into hundreds of new stars. The gas is mostly hydrogen; the dust is something like the dust in a desert storm: basically, microscopic rocks. Within the clouds are hundreds of condensed, cold lumps of gas and dust. A disturbance, such as a blast wave from a nearby stellar explosion, can cause each lump to begin collapsing under its own weight. We can see many examples of such star-forming regions. It seems that stars, like people, are born in families. For stars, these very large families are called clusters, and we know of 1,500 such clusters. 
Astronomers presume that the Sun was also born into a family, but, as seems to be typical of clusters, the Sun's cluster probably broke up in the first 100 million years of its life. About two-thirds of stars are actually born with nearby twins or triplets, but the Sun is alone. Astronomers aren't in complete agreement on where the clouds themselves come from, but it's likely that the gas and dust have more than one source. There is the pristine hydrogen gas synthesized in the creation of the universe [see "The Biggest Bang of Them All," The Universe in the Classroom, first quarter 1997]. There is the gas and dust that our galaxy has pilfered from its satellite galaxies, such as the "Magellanic stream," a streamer of gas ripped out of the nearby Large Magellanic Cloud. And there is the gas and dust from previous generations of stars. When stars die, they blow much of their material back into space, where it can form other stars. Stars in the Galaxy are the ultimate recycling machines: they use gas and dust over and over again. The sisters. The Pleiades star cluster contains 200 stars, all born 50 or so million years ago. The wisps of dust around the stars might be remnants of the cloud from which the bicentuplets emerged. Photo courtesy of Mount Wilson Observatory. When the massive lump of cold dust and gas which became our Sun collapsed, the nuclear forces began to come into play. The weight of all that dust and gas produced great pressure and density at the center, and the friction of the infalling particles released heat. When the temperature in the core reached several million degrees, the hydrogen atoms started to fuse together, forming helium atoms. This released energy, the pressure increased, more atoms fused together, more energy was released, and so on, and so on. A chain reaction started that will go on for billions of years. 
The outward pressure created by this nuclear fusion counterbalanced the inward pressure of gravity, and when the two canceled each other out, the natal lump of dust and gas stopped collapsing. Astronomers think this process took about 100 million years. The Sun was born. Although the embryonic Sun slurped up most of the gas and dust from the lump, some crumbs were left over. As this extra material spun around the center, the centrifugal force prevented it from falling into the center. Instead, it flattened into a whirling disc. Astronomers have seen such discs around many young stars. Within these discs, scientists think that blobs of material clump together into the smallish bodies we call planets, asteroids, and comets.
by Terry Burton, Digital Media Coordinator Thanks to amazing new pictures from the High Resolution Imaging Science Experiment (HiRISE) camera mounted on NASA’s Mars Reconnaissance Orbiter, you can imagine that you’re pressing your nose to the window of a plane flying over the Red Planet: cruising Mars from the comfort of your chair. The camera is operated by the University of Arizona, Tucson, and is part of NASA’s Mars Reconnaissance Orbiter (MRO) mission. The MRO was launched August 12, 2005, and is searching for evidence that water persisted on the surface of Mars for a long period of time. Scientific instruments aboard the MRO are zooming in for extreme close-up photography of the martian surface, analyzing minerals, looking for subsurface water, tracing how much dust and water are distributed in the atmosphere, and monitoring daily global weather. The HiRISE camera is the largest ever flown on a planetary mission. This camera is capable of showing objects as small as three feet across — the size of your dining room table!
- de Lucia, M. and Bejan, A., Thermodynamics of energy storage by melting due to conduction or natural convection, Trans. ASME, J. Sol. Energy Eng. (USA), vol. 112, no. 2, pp. 110-116. (last updated on 2007/04/08) The authors describe the most basic thermodynamic aspects of the process of energy storage by melting of a phase change material when the energy source is a stream of hot single-phase fluid. The first part of the paper considers the melting process ruled by pure conduction across the liquid phase, and the second part deals with the quasi-steady melting dominated by natural convection. The paper establishes the relationship between the total irreversibility of the melting process and design parameters such as the number of heat transfer units of the heat exchanger placed between the energy source and the phase change material, the duration of the melting process, and the position of the energy storage process on the absolute temperature scale. Keywords: heat transfer; melting; thermal analysis; thermal energy storage; thermodynamics.
Concurrent programming concerns operations that appear to overlap in time and is primarily concerned with the complexity that arises from non-deterministic control flow. The quantitative costs associated with concurrent programs are typically both throughput and latency. Concurrent programs are often IO-bound, but not always; concurrent garbage collectors, for example, are entirely on-CPU. The pedagogical example of a concurrent program is a web crawler. This program initiates requests for web pages and accepts the responses concurrently as the results of the downloads become available, accumulating a set of pages that have already been visited. Control flow is non-deterministic because the responses are not necessarily received in the same order each time the program is run. This characteristic can make it very hard to debug concurrent programs. Some applications are fundamentally concurrent, e.g. web servers must handle client connections concurrently. Erlang is a language designed specifically for distributed concurrent programming with fault tolerance, but many other languages provide features for concurrent programming, such as asynchronous workflows in the F# programming language. Parallel programming concerns operations that are overlapped for the specific goal of improving throughput. The difficulties of concurrent programming are evaded by making control flow deterministic. Typically, programs spawn sets of child tasks that run in parallel, and the parent task continues only once every subtask has finished. This makes parallel programs much easier to debug. The hard part of parallel programming is performance optimization with respect to issues such as granularity and communication. The latter is still an issue in the context of multicores because there is a considerable cost associated with transferring data from one cache to another. 
Dense matrix-matrix multiplication is a pedagogical example of parallel programming; it can be solved efficiently with Strassen's divide-and-conquer algorithm by attacking the sub-problems in parallel. Cilk pioneered the most promising techniques for high-performance parallel programming on shared-memory computers (including multicores), and its technology is now offered by Intel in their Threading Building Blocks (TBB) and by Microsoft in .NET 4, so it is also easily accessible from the F# programming language.
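The fork-join pattern described above can be sketched in plain C with POSIX threads. This is only a minimal illustration of the idea, not the Cilk or TBB API: the parent spawns one child task to sum half of an array, sums the other half itself, and joins before combining the results, so control flow is deterministic. The function and struct names here are invented for the example.

```c
#include <pthread.h>
#include <stddef.h>

/* One task: sum a contiguous range of ints. */
typedef struct { const int *data; size_t len; long sum; } Task;

static void *sum_range(void *arg) {
    Task *t = arg;
    t->sum = 0;
    for (size_t i = 0; i < t->len; i++)
        t->sum += t->data[i];
    return NULL;
}

/* Fork-join: spawn a child for the left half, handle the right half in
   the parent, then join. The join point guarantees both subtasks are
   finished before the results are combined, whichever finishes first. */
long parallel_sum(const int *data, size_t n) {
    Task left  = { data,         n / 2,     0 };
    Task right = { data + n / 2, n - n / 2, 0 };
    pthread_t child;
    pthread_create(&child, NULL, sum_range, &left);
    sum_range(&right);           /* parent works on its own subtask */
    pthread_join(child, NULL);   /* wait for the child before combining */
    return left.sum + right.sum;
}
```

Compile with `-pthread`. A real implementation would recurse and cut over to a sequential loop below some granularity threshold, which is exactly the tuning problem mentioned above.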
With all the bad news coming out of the record-setting drought of 2012, there are a couple of nuggets of good news. First, the Gulf of Mexico "dead zone" is much smaller than normal, and we have had a record-low number of tornadoes in the month of July. The Gulf's "dead zone" is a pocket of water off the mouth of the Mississippi River that is lacking oxygen because of nutrients in the river water. These nutrients allow plankton to bloom in huge quantities. The plankton uses all the available oxygen, making it impossible for other sea life to survive in the dead zone. This has a huge impact on the fishing industry. Scientists have never seen this area as small as it is this year. After an extremely active, and record-breaking start to our tornado season, we have done a complete turnaround when it comes to tornado activity. This is directly attributable to the lack of available moisture. It does seem, though, that the storms that have developed may have produced more than the average number of straight-line wind damage reports.
Manual Section... (3) - page: strsep

NAME
strsep - extract token from string

SYNOPSIS
#include <string.h>
char *strsep(char **stringp, const char *delim);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

DESCRIPTION
If *stringp is NULL, the strsep() function returns NULL and does nothing else. Otherwise, this function finds the first token in the string *stringp, where tokens are delimited by symbols in the string delim. This token is terminated with a '\0' character (by overwriting the delimiter) and *stringp is updated to point past the token. In case no delimiter was found, the token is taken to be the entire string *stringp, and *stringp is made NULL.

RETURN VALUE
The strsep() function returns a pointer to the token, that is, it returns the original value of *stringp.

NOTES
The strsep() function was introduced as a replacement for strtok(3), since the latter cannot handle empty fields. However, strtok(3) conforms to C89/C99 and hence is more portable.

BUGS
Be cautious when using this function. If you do use it, note that:
- This function modifies its first argument.
- This function cannot be used on constant strings.
- The identity of the delimiting character is lost.

SEE ALSO
index(3), memchr(3), rindex(3), strchr(3), strpbrk(3), strspn(3), strstr(3), strtok(3)

COLOPHON
This page is part of release 3.24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at http://www.kernel.org/doc/man-pages/. This document was created by man2html, using the manual pages. Time: 15:26:50 GMT, June 11, 2010
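A short usage sketch of strsep() based on the description above (the split() helper is invented for illustration): it tokenizes a buffer in place and, unlike strtok(3), preserves empty fields.

```c
#define _GNU_SOURCE   /* strsep() is not part of strict ISO C */
#include <string.h>

/* Split buf in place on any delimiter character in delim, storing at
   most max token pointers in out; returns the number of tokens found.
   buf is modified: each delimiter is overwritten with '\0'. */
size_t split(char *buf, const char *delim, char **out, size_t max) {
    size_t n = 0;
    char *rest = buf;   /* strsep() advances this pointer past each token */
    char *tok;
    while (n < max && (tok = strsep(&rest, delim)) != NULL)
        out[n++] = tok;
    return n;
}
```

Applied to the buffer "alpha,,beta" with delimiter ",", split() returns three tokens: "alpha", an empty string, and "beta" - the empty field that strtok(3) would have skipped.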
On Today’s Date: August 29 & 30 August 29, 2010Posted by Lofty Ambitions in Other Stuff, Science, Space Exploration. Tags: Apollo, Nobel Prize, Nuclear Weapons, Physics Sixty-one years ago, the Cold War began. On this date in 1949, the Soviet Union detonated its first atomic bomb. As in all great Russian narratives, the main character went by several names—Joe-1, RDS-1, First Lightning, Special Jet Engine, Stalin’s Jet Engine, and Russia Does It Herself—but the end result was the same: a completely surprised American government, military, and populace. The 22-kiloton device had been designed by a team headed by Professor Igor Kurchatov at Laboratory No. 2, sometimes referred to as Los Arzamas (a play on the name of the town where the first American atomic bombs were designed, Los Alamos). Kurchatov was known as the beard, because he began to grow a beard at the outbreak of World War II and refused to shave it off until the Russians had won. He wore that beard, in various styles, until he died. His ashes are buried in the Kremlin Wall. British physicist, mathematician, and software developer Stephen Wolfram was born on this date ten years after the first Soviet atomic bomb test. Wolfram developed the system Mathematica and wrote A New Kind of Science, a tome of almost 1200 pages asserting that computational systems or simple programs, rather than traditional mathematics, should be used to understand nature. When we checked Amazon today, it was #8 in the Modeling and Simulation bestseller list and #40 in Science Research. On August 30, 1984, the Space Shuttle Discovery launched on its maiden voyage, a six-day mission to deploy three telecommunications satellites. Later, Discovery launched the Hubble Telescope and carried astronaut John Glenn back into space when he was 77 years old. This shuttle also flew the missions immediately following the Challenger and Columbia disasters. Discovery has flown more missions than any other Shuttle. 
Its final mission is scheduled to launch November 1, 2010, the Shuttle program’s penultimate flight. The National Air and Space Museum has dibs on Discovery after that. August 30 is also the birthday of astronaut Jack Swigert, who was assigned to the Apollo 13 mission three days before launch, when the crew had been exposed to German measles and Ken Mattingly had no immunity. Houston, we’ve had a problem here—those are Swigert’s words. He died of bone cancer, after being elected to Congress, but before being sworn in. If you’re feeling negative on Monday, it could be because British physicist J. J. Thomson, Nobel laureate and discoverer of the electron, died on August 30, 1940. He is also credited with the demonstration of hydrogen’s single electron, the discovery of isotopes, and the invention of the mass spectrometer. So today, remember that things are not merely as they appear to the naked eye.
Keeling Curve Lessons
Lessons for long-term earth observations
Charles David Keeling directed a program to measure the concentrations of CO2 in the atmosphere that continued without interruption from the late 1950s through the present. This program, operated out of Scripps Institution of Oceanography, is responsible for the Mauna Loa record, which is almost certainly the best-known icon illustrating the impact of humanity on the planet as a whole. Lessons can be learned about making long-term measurements based on the experiences of this program. The idea of making measurements at Mauna Loa arose while Keeling was a post-doc at Caltech. In the course of working on a project involving carbon in river water - a project that incidentally required making measurements of CO2 in air - he made a key discovery: when he sampled air remote from forests, cities, and other obvious sources or sinks for CO2, he always got almost the same value of 310 ppm. Previous measurements of CO2 in the atmosphere did not show such constancy, but these measurements had been made by wet chemical methods that were considerably less accurate than the dry manometric method he was employing. This postdoctoral experience taught him two key lessons that were to guide his entire career: (1) that the earth system might behave with surprising regularity, and (2) the necessity of making highly accurate measurements to reveal that regularity.
Twitter / KHayhoe: "@ChairmanAl no; point is, if the planet were populated by 200k hunter-gatherers living in tents, climate change would be no big deal"
Flashback: How Human Beings Almost Vanished From Earth In 70,000 B.C. : Krulwich Wonders... : NPR
...once in our history, the world-wide population of human beings skidded so sharply we were down to roughly a thousand reproductive adults. One study says we hit as low as 40. Forty? Come on, that can't be right. Well, the technical term is 40 "breeding pairs" (children not included). More likely there was a drastic dip and then 5,000 to 10,000 bedraggled Homo sapiens struggled together in pitiful little clumps, hunting and gathering for thousands of years until, in the late Stone Age, we humans began to recover. But for a time there, says science writer Sam Kean, "We damn near went extinct." Then - and this is more conjectural, based on arguable evidence - an already cool Earth got colder. The world was having an ice age 70,000 years ago, and all that dust hanging in the atmosphere may have bounced warming sunshine back into space. Sam Kean writes, "There's in fact evidence that the average temperature dropped 20-plus degrees in some spots," after which the great grassy plains of Africa may have shrunk way back, keeping the small bands of humans small and hungry for hundreds, if not thousands, of more years. So we almost vanished.
Effect of acid rain on algae
Green algae and red algae will be able to survive in acidic water as they are acidophilic plants. Acid rain is caused by pollutant gases such as sulfur dioxide and nitrogen oxides, which are emitted into the atmosphere as industrial waste and by vehicles. These gases combine with moisture in the atmosphere to become sulfuric acid and nitric acid. Rainwater mixes with these acids and turns acidic. Rainwater that contains these acids has a lower pH reading and is called acid rain. Acid rain precipitates at locations far from the source of the pollution. This is because pollutant gases are carried by the wind for days or weeks and over long distances before they combine with water vapor and return to the ground as rainfall. Normally, rainwater has a pH value of around 6.0, but acid rain has pH levels of between 4.0 and 5.5. Acid rain can cause serious damage to plants and the environment. Plant roots are damaged, and plants may experience stunted growth or may even die as a result of acid rain. Most algae are also killed by acid rain. However, some types of algae are acidophilic and are able to survive in acidic waters. Basic safety requirements: use gloves and safety goggles/masks when handling the sulphuric acid.
Cosmic rays - 100 years of discovery A century ago - as scientists raced to find out where different types of radiation came from - an Austrian physicist took his electroscopes up in a hot air balloon and made a discovery that was to change thinking for good. High in the sky, Victor Hess discovered that background radiation came from space - and not, as had been thought, from the ground here on Earth. Take a look at how cosmic rays were first understood - and see how far our knowledge has come since then - with Simon Peeters, a senior lecturer in Physics and Astronomy at the University of Sussex. Click bottom right for image information. Images courtesy Science Photo Library, Getty Images, Nasa, Google, Aspera and the Pierre Auger Observatory. Music by KPM Music. Slideshow production by Paul Kerley and Darren Baskill. Publication date 8 August 2012. The BBC is not responsible for the content of external websites.
Invasive species are species that are not native to an area and which may compete with and displace native species. - UNEP Releases Papers on Bioenergy Sustainability, 25 October 2010 by IISD Reporting Services: "The UN Environment Programme (UNEP) has published a series of four Issue Papers on bioenergy sustainability, aiming to inform decision makers on debates and emerging issues in this policy area, as well as options for improving the sustainability of the production and consumption of bioenergy." - "The first paper presents potential socioeconomic and environmental challenges related to land use, land use change, and bioenergy....The second paper looks at the confluence of bioenergy and water, highlighting how bioenergy production interacts with water quality, efficiency of water use, and research gaps." - "The third paper looks at risks, including biodiversity impacts, of introducing potentially invasive species as bioenergy feedstocks, and the fourth at the importance of incorporating stakeholder engagement in bioenergy planning, as well as methods to do so. The Issue Papers were presented on the sidelines of the 10th meeting of the Conference of the Parties (COP 10) to the Convention on Biological Diversity (CBD), in Nagoya, Japan." - 'Invasive' biofuel crops require monitoring and mitigation measures, 21 January 2010 by ENN/European Consumers Bioenergy Division: "Biofuel crops will impact on biodiversity and natural ecosystems unless tightly controlled, says a panel of European experts." - The Bern Convention "adopted a recommendation on potentially invasive alien plants being used as biofuel crops (Recommendation 141, 2009). They warn that some biofuel crops are able to escape as pests, and in so doing impact on native biodiversity. As rural communities plan to grow more biofuel crops, the likelihood of new and harmful 'invasions' will increase apace." 
- "Therefore the Council of Europe made recommendations, which are legally binding on member states: - 1. Avoid the use of biofuel crops already recognised as invasive; - 2. Carry out risk assessments for new species and genotypes; - 3. Monitor the spread of biofuel crops into natural habitats and their effects on native species; - 4. Mitigate the spread and impact on native biodiversity wherever biofuel crops escape cultivation." - Biofuel crops pose invasive pest risk, 22 April 2009 by ENN/Public Library of Science: Researchers concluded "that biofuel crops proposed for use in the Hawaiian Islands are two to four times more likely to establish wild populations or be invasive in Hawaii and in other tropical areas when compared to a random sample of other introduced plants." - "The researchers used a weed risk assessment that examines a plant's biology, geographic origin, pest status elsewhere, and published information on its behavior in Hawaii to identify plants with a high risk of becoming invasive pests in Hawaii or other Pacific islands." - "'By identifying the species with the highest risk, and pushing for planting guidelines and precautionary measures prior to widespread planting, we hope to spare the Hawaiian Islands and similar tropical ecosystems from future economic and environmental costs of the worst invaders while encouraging and promoting the use of lower risk alternative crops,' said Christopher Buddenhagen, co-author of "Assessing Biofuel Crop Invasiveness: A Case Study." - The article noted that "Despite reservations about their adverse environmental impacts, no attempt has been made to quantify actual, relative or potential invasiveness of terrestrial biofuel crops at an appropriate regional or international scale, and their planting continues to be largely unregulated." 
- Fuel crops 'pose invasion risk', 20 May 2008 by BBC: "Nations should avoid planting biofuel crops that have a high risk of becoming invasive species," according to a report by the Global Invasive Species Programme (GISP) released at a meeting of the UN Convention on Biological Diversity (CBD). - The report urges that biofuel crop selection be preceded by careful assessments, and that plants selected should be native species and those with low risk of spreading and degradation of native habitat. Species selection is site-specific: "For example, a crop like Arundo donax (giant reed), which would cause concern in North America, would not cause the same concern in its native habitat in places like Eurasia....Giant reed, which is naturally flammable, increases the risk of wildfires in places such as California, threatening human settlements as well as native species." - Download the GISP report, "Biofuel crops and the use of non-native species: Mitigating the risks of invasion", at http://www.gisp.org/publications/briefing/index.asp.
Force Field development. As an example, let us construct a force field compatible with AMBER94. Draw the molecule with the mouse: open the Free drawing panel in the Build > Free drawing menu or by pressing the toolbar button. Select methane as a primer for the model by clicking the button on the kernel box. Right-click the hydrogen atom to transform it to carbon. Click the Add hydrogen button. Choose fluorine from the list of elements and transform two hydrogen atoms into it by right-clicking them. A crude model of 1,2-difluoroethane is ready. Let us transform it into two models, in gauche and trans conformations. To construct a gauche model, invoke the Geometry Editor. Choose the Dihedral angle mode and select the F-C-C-F angle by consecutively clicking the corresponding atoms. Specify an angle value of 60 degrees and press Enter. The resulting model has a gauche conformation but wrong bond lengths. Rapid preliminary optimization can be done with semiempirical quantum chemical methods. Open the GAMESS panel in Tools > PC GAMESS or by pressing the toolbar button. Select the OPTIMIZE calculation. A semiempirical method can be selected in the Assembly panel, e.g., Basis > PM3. After the optimization starts, the molecule rapidly changes its conformation. The geometry can be refined further using ab initio calculations. Let us first use the 3-21+G basis with electron correlation at the MP2 level, then perform the optimization in MP2 / 6-311+G(2d,p). AMBER94 was developed on the basis of 6-31G(d) calculations, but we will shift to MP2 / 6-311+G(2d,p), since the 2d bases provide substantially better results and current computers are powerful enough to make such calculations routine. An MP2 / 6-311+G(2d,p) optimization yields an accurate model in a gauche conformation. Save it to a file. Potential-derived charges on atoms. With a model in a sensible conformation, one can determine the partial charges on atoms.
Since our molecule includes only four heavy atoms, the electrostatic field around it can be calculated using a relatively large basis, e.g., aug-cc-pVTZ. For typical calculations, aug-cc-pVDZ can be recommended, since it yields dipole moments close to experimental values (with MP2 electron correlation). Select the Electrostatics calculation, MP2 correlation, and the aug-cc-pVTZ basis. After a calculation of about one hour, partial charges are assigned to the model atoms. Do not forget to save the model after the calculation. Repeat the above steps to generate a model of difluoroethane in a trans conformation. The charges on atoms in the gauche and trans models differ slightly. Fortunately, the differences in this case are not great, and we can simply average the charges on equivalent atoms in both conformations. The conformational dependence of charges is a serious problem in the parametrization of molecular mechanics force fields; some kind of averaging is a common solution. Parameters of valence interactions. Construction of an accurate model includes a good approximation of the torsion energy. This requires either valid experimental data or adequate quantum chemical calculations. The torsion potentials cannot be specified by analogy with other molecules; such analogies are not practicable. The torsion potentials must be specified after the partial charges on atoms have been determined, since they depend heavily on the charges. Let us calculate the energies of our optimized gauche and trans models in MP4 / aug-cc-pVDZ. Select the aug-cc-pVDZ basis in the Standard section, specify MP4 in the panel above, and select the ENERGY calculation. Do not forget to test the settings with the Check button, and start the calculation. (The calculation lasted 13 min on our PC.) Thus, at the MP4 / aug-cc-pVDZ level, the gauche conformation proved more stable than the trans conformation by 0.0010739464 Hartree (0.674 kcal/mol or 2.82 kJ/mol). The experimental values are 0.83 kcal/mol (S.
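The charge-averaging step described above can be sketched in a few lines. The charge values below are illustrative placeholders, not actual fitted output; the point is only the mechanics of averaging equivalent atoms across the two conformers and checking that the molecule stays neutral.

```python
# Average potential-derived charges over the gauche and trans conformers.
# Charge values are illustrative placeholders, not actual fitted output.
gauche = {"C": 0.1060, "H": 0.0730, "F": -0.2520}
trans = {"C": 0.1052, "H": 0.0734, "F": -0.2520}

averaged = {atom: (gauche[atom] + trans[atom]) / 2 for atom in gauche}

# 1,2-difluoroethane has 2 C, 4 H, and 2 F; the total charge must stay zero.
total = 2 * averaged["C"] + 4 * averaged["H"] + 2 * averaged["F"]
assert abs(total) < 1e-9, "averaged charges must preserve neutrality"

print(f"C {averaged['C']:.4f}  H {averaged['H']:.4f}  F {averaged['F']:.4f}")
```

Averaging per equivalence class like this only works if the atoms really are symmetry-equivalent; otherwise a constrained fit over both conformers is needed.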
Nonoyama et al., 51st Annual Meeting of the Chemical Society, Kanazawa, Japan) and 0.57 ± 0.09 kcal/mol (K.B. Wiberg and M.A. Murcko, J. Phys. Chem., 1987, vol. 91). Before selecting the force field parameters, we have to specify molecular mechanics types for the atoms. Force fields by different authors have different sets of types; Fine Types were introduced to allow switching between them. Fine Types are the most specialized types for all supported force fields. The /data/ForceField folder contains the FineType.mlm file with a table of conversion from Fine Types to particular force field types. Unique types can be created for all atoms in the molecule, but we will use the -CH2- and H2C- types for carbon and hydrogen, respectively. Create a new Fine Type F-tutor for fluorine:

F-tutor - - - - F - - - - - - - - - - - - - - - - -

We specified that it should be converted to the F type of the AMBER94 field. Restart the program to apply the FineType.mlm file changes. Save the corresponding types to the model files. The result can look as follows:

@Table Atoms 8
str str double double double double str str str str double double double str
ID Element X Y Z Q Type Mol Res Flag R* Epsilon Mass Comment
0 C -0.19366 -0.04181 -0.06123 0.1056 -CH2- 0 - - - - -
1 C 1.31260 0.05690 -0.12459 0.1056 -CH2- 0 - - - - -
2 H -0.51562 -1.08063 0.01309 0.0732 H2C- 0 - - - - -
3 H -0.65415 0.42715 -0.93092 0.0732 H2C- 0 - - - - -
4 F -0.61717 0.63461 1.08129 -0.2520 F-tutor 0 - - - - -
5 H 1.77308 -0.41205 0.74510 0.0732 H2C- 0 - - - - -
6 F 1.73611 -0.61953 -1.26711 -0.2520 F-tutor 0 - - - - -
7 H 1.63456 1.09571 -0.19893 0.0732 H2C- 0 - - - - -

@Table Bonds 7
str str str double double str
ID1 ID2 Order R-eqv Force Comment
0 1 s - -
0 2 s - -
0 3 s - -
0 4 s - -
1 5 s - -
1 6 s - -
1 7 s - -

Let us open the model and calculate its energy. A warning appears, "No such angle parameter: CT CT F", pointing to missing parameters in the current force field (AMBER94).
The full list of missing parameters is given in /Report/FF_error.txt. Another way to determine the missing parameters is to save the model in the mmol format. Specify the missing parameters; in this example, they will be chosen by analogy with those available in AMBER94. For an accurate model, they can be determined by approximating quantum mechanical calculations; however, they are not critical for many tasks. Specify the following parameters:

CT CT F 40.0 109.50 tutor
HC CT F 50.0 109.50 tutor

The energy calculation now finds no unspecified parameters. Although the torsion angles have not been specified, AMBER94 has default values for the *-CT-CT-* angles. We can refine the model by setting the F-CT-CT-F value. Include the following line in the /data/ForceField/AMBER94_3_torsional.xls file:

F CT CT F 1 1.0 180.0 1 tutor

This line should follow the lines with asterisks to override the default value. The force constant was temporarily set to unity. Its value should be selected so that the difference between the trans and gauche conformation energies approximates that obtained by quantum mechanics or experiment. The comparison should be performed at local energy minima; accordingly, both models should be optimized after each parameter change. The Optimization panel is invoked from the Compute > Optimize menu. The default parameters are suitable for our task. Several experiments with the force constant demonstrate that a V/2 value of 2.02 kcal/mol provides the desired energy difference.
F CT CT F 1 2.02 180.0 1 tutor

As a result, a simple model of 1,2-difluoroethane was constructed using the following steps:
- Geometry determination by quantum mechanical calculations
- Potential-derived charge determination
- Specification of force constants for valence bonds and angles
- Adjustment of the torsion potential to quantum chemical or experimental data

A finer model requires the calibration of van der Waals and electrostatic interactions to fit the vaporization heat and density of liquid 1,2-difluoroethane to experimental data.
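To make the fitting step concrete, here is a rough sketch of the arithmetic behind the AMBER torsion term, E = (V/2)(1 + cos(n·phi − gamma)). It treats the F-C-C-F term in isolation (phi ≈ 60° for gauche, 180° for trans, with n = 1 and gamma = 180°) and asks what V/2 would reproduce the 0.674 kcal/mol trans-gauche gap on its own. This is deliberately naive: in the actual procedure the gap is measured between fully re-optimized minima with all force-field terms present, which is why the fitted value quoted above (2.02 kcal/mol) is larger than this bare estimate.

```python
import math

def torsion_energy(v2, n, gamma_deg, phi_deg):
    """AMBER-style torsion term: (V/2) * (1 + cos(n*phi - gamma))."""
    return v2 * (1 + math.cos(math.radians(n * phi_deg - gamma_deg)))

def gap(v2):
    """Trans minus gauche energy from the bare F-C-C-F term alone."""
    return torsion_energy(v2, 1, 180.0, 180.0) - torsion_energy(v2, 1, 180.0, 60.0)

target = 0.674              # kcal/mol, the MP4/aug-cc-pVDZ gap quoted above
v2 = target / gap(1.0)      # the gap is linear in V/2, so one division suffices
print(f"bare-term estimate: V/2 = {v2:.3f} kcal/mol")
```

The gap from the bare term is (V/2)·(1 + cos 0°) − (V/2)·(1 + cos(−120°)) = (3/2)·(V/2), so this single-term estimate lands well below 2.02; the difference is absorbed by the coupled bond, angle, and nonbonded terms during re-optimization.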
House of dreams. From The Economist, March 3, 2012. Physicists rarely become household names. Pretty much anyone watching television in Britain will have heard of Brian Cox, who is credited with making physics sexy again. But before him you would probably have to go back a century or so to Albert Einstein, or three centuries to Isaac Newton, to find a name that is universally recognised. One day, though, Peter Higgs and his eponymous boson might outshine them all. Mr Higgs's road to stardom began with a short, equation-riddled paper published in 1964. In it he predicted the existence of a particle which gives other subatomic species their mass. The challenge Mr Higgs set ultimately led to the construction of the Large Hadron Collider (LHC), the most ambitious and, at SFr10 billion ($10 billion), the most expensive scientific experiment in history. It has also sparked a mini-publishing boom of books to explain what all the fuss is about. In "Higgs Force" Nicholas Mee, a fellow of the Royal Astronomical Society with a doctorate in theoretical particle physics from Cambridge University, lays out why the Higgs matters, and what is being done to find it. The LHC smashes together subatomic particles called protons in a 27km underground circular tunnel outside Geneva at within a whisker of the speed of light. Its scientists then study the detritus in cathedral-sized detectors.
The previous chapter introduced changesets and the commands mkpatch and dopatch (fancy variations on the theme of diff and patch). In this chapter, we'll look in a bit more detail at how changesets are used in archives, how they are used by the commands, and what this implies for how you can make the best use of them. Suppose that you get the latest revision of a project, make some changes, write a log message, and commit those changes to an archive. What happens? In essence, commit:
1. Computes a changeset that describes what changes you've made compared to the latest revision.
2. Creates a directory for the new revision in the archive.
3. Stores your log message and the changeset in the archive.
In that light, you might want to go back and review an earlier section: How it Works – commit of a New Revision in Checking-in Changes. Earlier, you learned that the cat-archive-log command retrieves a log message from an archive (see Studying Why Alice Can Not commit in The update/commit Style of Cooperation). You can also retrieve a changeset from an archive:

% cd ~/wd
% tla get-changeset hello-world--mainline--0.1--patch-1 patch-1
[...]

get-changeset retrieves the changeset from the archive and, in this case, stores it in a directory called patch-1. (The format of changesets is described in The arch Changeset Format.) The changeset format is optimized for use by programs, not people. It's awkward to look at a changeset "by hand". Instead, you may wish to consider getting a report of the patch in diff format by using:

% tla show-changeset --diffs patch-1

If you've been following along with the examples, you'll recognize the output format of show-changeset from the changes command introduced earlier (see Oh My Gosh – What Have I Done? in Checking-in Changes). When you commit a set of changes, it is generally "best practice" to make sure you are creating a clean changeset. A clean changeset is one that contains only changes that are all related and for a single purpose.
For example, if you have several bugs to fix, or several features to add, try not to mix those changes up in a single changeset. There are many advantages to clean changesets, but foremost among them are: Easier Review – It is easy for someone to understand a changeset if it is only trying to do one thing. Easier Merging – As we'll learn in later chapters, there are circumstances in which you'll want to look at a collection of changesets in an archive and pick and choose among them. Perhaps you want to grab "bug fix A" but not "new feature B". If each changeset has only one purpose, that kind of cherry-picking is much more practical.
The British government is considering forcing biotech companies to use "DNA bar coding" to identify genetically modified organisms. The National Institute of Agricultural Botany (NIAB) in Cambridge, UK, was granted a patent this week on a DNA bar-coding technique. The technology would make it easier for regulators to trace GM food or detect crops that have been contaminated by GM strains. It could also have wider uses. Banknotes or designer clothes made from bar-coded cotton would be harder to counterfeit. A spokesman for Britain's Department for Environment, Food and Rural Affairs (DEFRA) says it is too early to commit to any one method, but told New Scientist that such technology would be "actively encouraged". A recent European Union directive gives governments the power to make it compulsory. "We have been talking about techniques for encoding unique identifiers in the context of GMOs for some time," says Howard Dalton, DEFRA's chief scientific adviser. "Any development which would help in the process of detecting and identifying GMOs would be welcomed." The idea is to add the same unique sequence to all GM organisms, regardless of how else they are modified. That means a single, simple DNA test could identify any product as GM if it contains intact DNA. Since such a sequence would not code for any protein, it would not affect a plant's properties. Most creatures' genomes are already littered with vast stretches of non-coding DNA. DNA bar codes could also provide detailed information about a product. NIAB's patent describes how a series of sequences that contain compressed information - such as which company made the GM organisms and what modifications it has - could be added. "Simpler techniques for access to that information will help us ensure effective traceability and labelling through the food supply chain. This will ensure consumer choice and increase confidence," says Dalton. 
Detecting GM products is difficult at present, because you have to know what you are looking for, says Derek Matthews, a molecular biologist at NIAB. For example, you need to know the short sequences that flank any added piece of DNA, or the sequence of added genes or of the DNA regions that control their activity. But biotech companies are often reluctant to reveal such information because of fears that other companies may copy their technology. For instance, Gro-Ingunn Hemre at the National Institute of Nutrition and Seafood Research in Bergen, Norway, has been trying for nearly three years to get data and material from a number of biotech companies for a research project, without success. "Very, very difficult" The recent EU directive also requires biotech companies to supply detailed information on every GM product, including how to identify it, before approval. But companies are still reluctant to cooperate. "It's very, very difficult to get stuff out of them, even though they are legally obliged," says Matthews. He thinks most companies would prefer genetic bar codes, since this would allow them to label their products without giving away any secrets. The Agricultural Biotechnology Council, which represents the British industry, has given the idea a cautious welcome. Over many generations, DNA bar codes could be corrupted or lost, but it won't matter if only a few plants in a field lose their bar code. And NIAB's patent includes techniques for error correction just like those used in computers.
|Aug1-12, 01:43 AM||#1| I have looked through my optics textbook and many websites about single-slit diffraction. They all end up deriving an equation that looks something like this: I = I0*(sinc(B))^2, where B = (1/2)*k*b*sin(theta), k = wavenumber, b = slit width. I don't know if there's something I'm not understanding, but I have a hard time believing that the intensity only depends on the angle. Shouldn't intensity decrease as distance from the slit increases? |Aug1-12, 04:13 AM||#2| You are correct that the intensity does also vary with distance - the diffraction pattern is wider at greater distances so must be fainter. In the formula you quoted, I0 is the on-axis intensity (i.e. I(θ=0)=I0), and this is where the distance dependence has been "hidden". Generally, you don't care about distance dependence in far-field diffraction because the transverse distribution is where the interesting physics is, so your text has hidden the boring bits in the interests of clarity. Well spotted. If you wanted to insert a distance term, the formula above tells you the way the pattern spreads perpendicular to the slit, and you could measure laser beam width at different distances to get the spread parallel to the slit. The product of the two is the overall dependence of I0 on distance from the slit. Does that make sense?
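To see the quoted formula in action, here is a small numerical check; the wavelength and slit width are arbitrary illustrative values. It evaluates I = I0*(sin(B)/B)^2 with B = (1/2)*k*b*sin(theta), treating theta = 0 as the sinc limit, and confirms that the first minimum falls at sin(theta) = lambda/b, where B = pi.

```python
import math

def intensity(theta, wavelength, slit_width, i0=1.0):
    """Single-slit Fraunhofer intensity: I = I0 * (sin(B)/B)**2,
    with B = 0.5 * k * b * sin(theta) and k = 2*pi/wavelength."""
    k = 2 * math.pi / wavelength
    big_b = 0.5 * k * slit_width * math.sin(theta)
    if abs(big_b) < 1e-12:        # central maximum: the sinc limit is 1
        return i0
    return i0 * (math.sin(big_b) / big_b) ** 2

lam, b = 633e-9, 50e-6            # illustrative: red laser, 50-micron slit
theta0 = math.asin(lam / b)       # first minimum, where B = pi
print(intensity(0.0, lam, b))     # on-axis value is I0
print(intensity(theta0, lam, b))  # essentially zero at the first minimum
```

As the answer above notes, I0 itself carries the distance dependence: at a larger screen distance the same angular pattern is spread over a wider strip, so I0 drops accordingly.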
Social structure of short-finned pilot whales. Since the early 1980s, researchers have studied the social structure of short-finned pilot whales. Mostly, they used the non-invasive photo-identification technique in order to document relationships between free-ranging animals at sea. They photographed individuals of distinct groups and compared association patterns between individuals. Short-finned pilot whales were shown to live in stable social groupings and are believed to form matrilineal kinship groups. Within such groups all individuals are genetically related to the oldest female, and mating occurs between individuals from different groups. Underwater video footage shows a large group meeting with more than 25 individuals, and a subgroup with a newborn that still has fetal folds and stays close to its mother. From 2003 to 2011, Filipe Alves and colleagues photo-identified 364 individuals south and east of Madeira island. 85 animals were re-sighted during 1-8 successive years. The mean group size was 18 individuals and ranged from 2 to 60. They found a population consisting of long-term resident, regular visitor, and transient groups. By further using molecular genetic techniques, the researchers found genetic relatedness between group members. Combined with their observation that individuals from different groups temporarily associate, this suggests that short-finned pilot whales form matrilineal kinship groups and that mating might occur between individuals of different groups. Tenerife, Canary Islands. From 1991 to 1992, Jim Boran and Sara Heimlich photo-identified 495 short-finned pilot whales within 46 pods southwest off Tenerife. Based on patterns of occurrence, the area is frequented by visitor and resident groups.
It was found that during meetings of different groups, the highest-ranked associations of reproductive females were with males from other pods, suggesting that mating occurs between individuals of different pods. These findings show parallels to the social structure of matrilineal pods of resident killer whales. Pacific coast of Japan. From 1986 to 1987, Tomio Miyashita, Toshio Kasuya and Kyoichi Mori photo-identified at least 101 free-ranging short-finned pilot whales off the Pacific coast of Japan. However, this was a feasibility study and did not reveal insights into long-term social affiliations. Further research by Toshio Kasuya and Helene Marsh on sympatric groups suggests that their social structure resembles the group structures observed for cetacean species living in matrilineal kinship groups. Big Island, Hawaii. From 1985 to 1988, Susan Shane and Dan McSweeney photo-identified short-finned pilot whales off the island of Hawaii. They found that 30 whales were sighted two or more times and 30 whales were identified between seasons. The data indicated a degree of pod cohesiveness, and individual adult males did not associate with the same pod all the time. Comparable to the analysis of the data obtained off California, pilot whale pods in Hawaiian waters are fairly stable. Santa Catalina Island, California. From 1983 to 1986, Susan Shane and Dan McSweeney photo-identified short-finned pilot whales in waters off Santa Catalina Island. They resighted 32 whales on two or more days, and 15 during two or more seasons. Their association patterns indicated a degree of social affiliation between some individuals, and it is suggested that pilot whale pods are fairly stable. Individual adult males did not associate with the same pod all the time.
The kelvin (symbol: K) is the SI unit of temperature. It is defined as the fraction 1/273.16 of the thermodynamic (absolute) temperature of the triple point of water. For more information about the topic Kelvin, read the full article at Wikipedia.org.
Could porous rocks deep in the ocean floor be a place to stash unwanted carbon dioxide? While researchers are trying to develop ways to scrub unwanted carbon dioxide from industrial and power plant emissions, the problem of what to do with the captured CO2 is a tricky one. Some have proposed injecting the gas deep into oil wells, while others suggest chemical ways to convert the gas into a solid form that could be buried. Now, scientists at the Lamont-Doherty Earth Observatory suggest that undersea basalt formations some 8,000 feet under the ocean off the shore of the Pacific Northwest could absorb up to 120 years worth of US CO2 emissions. Their work was reported last week in The Proceedings of the National Academy of Sciences. In this segment, we'll talk with one of the researchers behind the proposal about how it might work. Produced by Annette Heist, Senior Producer
The 24-inch telescope of the Capilla Peak Observatory of the University of New Mexico was used to observe Jupiter during the week of the impact of comet Shoemaker-Levy-9 onto Jupiter. Images of Jupiter were obtained in several narrow-band interference filters. The opacity in the spots is calculated as a function of wavelength, and used to infer the distribution of the debris with altitude.
Proceedings of the Horiba International Conference "New Direction of Ocean Research in the Western Pacific": Past, Present and Future of UNESCO/IOC/WESTPAC Activity for 50 years and the JSPS Project "Coastal Marine Science". Section I: Research Articles / Harmful microalgae. Field studies in the Upper Gulf of Thailand and Manila Bay on red tides of green Noctiluca scintillans with the photosynthetic symbiont Pedinomonas noctilucae showed that the vertical maximum of N. scintillans often occurred below the halocline at 10 to 15 meter depths, suggesting that salinity influenced the vertical distribution of this organism. We therefore examined the influence of salinity on the vertical distribution of N. scintillans under laboratory conditions. A three-layer system with salinities of 10, 20 and 31 was produced in black polyethylene tubes 20 cm in diameter and 1.5 m long. Tubes were exposed to a 12:12 LD cycle under a light intensity of 70 μmol m(-2) s(-1) provided at the top. Two sets of experiments were conducted to observe the behavior of N. scintillans for a week: in the first set, N. scintillans cells were added at the surface of the tubes, while in the second, cells were added at the bottom. In the stratified tubes, all cells released at the surface died immediately at the beginning of the experiment, while cells released at the bottom slowly migrated toward the upper layer and distributed uniformly throughout the column within 24 h. In the control tubes with a uniform salinity of 28, it took less time (3 h) for cells released from the bottom to attain a uniform distribution. During the latter half of the experiment most cells stayed at the surface. These results indicate that N. scintillans is able to tolerate a wide range of salinity, and that an acclimation period is needed to adapt to low salinity conditions; this adaptive feature may be an important factor in maintaining its population and forming red tides in river mouth areas.
- Coastal Marine Science 35(1), 70-72, 2012. International Coastal Research Center, Atmosphere and Ocean Research Institute, the University of Tokyo
The object factory is actually a general mechanism used throughout the JNDI. In this lesson, object factories are used to transform information stored in the directory into Java objects that applications can use. Typically, these are objects that the application uses directly (for example, a Person, Drink, or Fruit object). The following discussion introduces you to other uses of object factories. It is intended as background information for API users. Developers of service providers can find full discussions of these topics in the Beyond the Basics trail and the Building a Service Provider trail. Federation and Context Factories. You saw how an object can be bound into the directory. What if the object happens to be the root of another naming system? In LDAP, for example, you can bind an object that is the root of a file system. You can then supply an object factory whose role is to convert the information stored in the LDAP directory about the file system into the root context of the file system. This type of object factory is called a context factory. Given information about the context object to create, a context factory will create and return an instance of Context. The file system in this example is called the nns (see the Federation lesson). Just as the nature of the information stored in a directory about a Java object can vary (from a reference to attributes to a serialized object), so can the nature of the information stored in a directory about the nns. In the file system example, you might store a URL that identifies the file system's server and protocol information as a JNDI reference. By storing nns information in a directory, you are federating naming systems, thereby allowing them to resolve composite names. See the Federation lesson for details.
URL Context Factories. A special kind of context factory is a URL context factory, which creates contexts for resolving URLs, or contexts whose locations are specified by URLs. For example, an LDAP URL context factory can create a context for accepting arbitrary LDAP URLs. The same LDAP URL context factory can create a context identified by an LDAP URL; that context will then be able to resolve names relative to the location specified by the URL. URL context factories are used for federation and are also used by the initial context to resolve and process requests for URLs. In fact, in the remote reference example, the remote object is stored in the directory as a reference that contains an RMI URL. When the object is looked up from the directory, the JNDI uses an RMI URL context factory to look up and return the object from the RMI registry named in the URL.
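The lesson's idea of a factory turning stored directory data into an application object can be sketched directly against the javax.naming API. The Person class, the "name" address type, and the stored values here are hypothetical illustrations, not part of the lesson's actual code; the sketch only shows the getObjectInstance contract: inspect the stored Reference, and either reconstruct the object or return null so other factories get a chance.

```java
import javax.naming.Context;
import javax.naming.Name;
import javax.naming.RefAddr;
import javax.naming.Reference;
import javax.naming.StringRefAddr;
import javax.naming.spi.ObjectFactory;
import java.util.Hashtable;

// Hypothetical application class reconstructed from directory data.
class Person {
    final String name;
    Person(String name) { this.name = name; }
}

// An object factory that turns a stored Reference back into a Person.
class PersonFactory implements ObjectFactory {
    @Override
    public Object getObjectInstance(Object obj, Name name, Context nameCtx,
                                    Hashtable<?, ?> environment) throws Exception {
        if (obj instanceof Reference) {
            Reference ref = (Reference) obj;
            if (Person.class.getName().equals(ref.getClassName())) {
                // "name" is an address type chosen for this sketch.
                RefAddr addr = ref.get("name");
                return new Person((String) addr.getContent());
            }
        }
        return null;  // not ours: let JNDI try other factories
    }
}

public class FactoryDemo {
    public static void main(String[] args) throws Exception {
        // What the directory would hand back: a Reference describing a Person.
        Reference ref = new Reference(Person.class.getName(),
                                      new StringRefAddr("name", "Alice"));
        Person p = (Person) new PersonFactory().getObjectInstance(ref, null, null, null);
        System.out.println(p.name);
    }
}
```

In a real deployment the factory is not invoked by hand as in main above; JNDI locates it through the factory class name stored in the Reference or through the java.naming.factory.object environment property.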
In the following code example, the desc element is used to define a description of an element. This element can be read programmatically to analyze SVG structure. Copy this sample to a text file and save it with the .html file extension. Run it in Internet Explorer 9 to see a greenyellow ellipse. The element will look like this: <!DOCTYPE HTML> <html> <head></head> <body> <svg width="400" height="400"> <ellipse cx="150" cy="100" rx="100" ry="75" fill="greenyellow"> <desc>This is the description of an ellipse.</desc> </ellipse> </svg> </body> </html> Note: In addition to the attributes, properties, events, methods, and styles listed above, SVG elements also inherit core HTML attributes, properties, events, methods, and styles. - Scalable Vector Graphics: Document Structure, Section 5.11.5 The SVGDescElement object has these events: - onload: Occurs when the browser has fully parsed the element and all of its descendants. The SVGDescElement object has these properties: - className: Gets the names of the classes that are assigned to this object. - focusable: Determines if an element can acquire keyboard focus (that is, receive keyboard events) and be a target for field-to-field navigation actions (such as when a user presses the Tab key). - ownerSVGElement: Gets the nearest ancestor svg element. - style: Gets a style object. - viewportElement: Gets the element that established the current viewport. - xmlbase: Gets or sets the base attribute on the element. - xmllang: Gets or sets a value that specifies the language that is used in the contents and attribute values of an element. - xmlspace: Gets or sets a value that indicates whether white space is preserved in character data. This article contains content originally from external sources. Portions of this content come from the Microsoft Developer Network: [Windows Internet Explorer API reference Article]
A symbol is like an immutable string, but symbols are normally interned, so that two symbols with the same character content are normally eq?. All symbols produced by the default reader (see Reading Symbols) are interned. The two procedures string->uninterned-symbol and gensym generate uninterned symbols, i.e., symbols that are not eq?, eqv?, or equal? to any other symbol, although they may print the same as other symbols. Regular (interned) symbols are only weakly held by the internal symbol table. This weakness can never affect the result of an eq?, eqv?, or equal? test, but a symbol may disappear when placed into a weak box (see Weak Boxes) used as the key in a weak hash table (see Hash Tables), or used as an ephemeron key (see Ephemerons).

|(symbol? v) → boolean?|
|v : any/c|
Returns #t if v is a symbol, #f otherwise.
|> (symbol? 'Apple)|
|> (symbol? 10)|

|(symbol->string sym) → string?|
|sym : symbol?|
Returns a freshly allocated mutable string whose characters are the same as in sym.
|> (symbol->string 'Apple)|

|(string->symbol str) → symbol?|
|str : string?|
Returns an interned symbol whose characters are the same as in str.
|> (string->symbol "Apple")|
|> (string->symbol "1")|

|(string->uninterned-symbol str) → symbol?|
|str : string?|
|> (string->uninterned-symbol "Apple")|
|> (eq? 'a (string->uninterned-symbol "a"))|

|(gensym [base]) → symbol?|
|base : (or/c string? symbol?) = "g"|
Returns a new uninterned symbol with an automatically-generated name. The optional base argument is a prefix symbol or string.
|> (gensym "apple")|
Rhope is a dynamically typed dataflow programming language that also borrows some ideas from other paradigms. Unlike mainstream programming languages, statements are not necessarily executed in the order they are written, but instead based on their dependencies. Statements that do not share dependencies run in parallel. Most operations have value semantics (i.e. modifying an object makes a copy rather than changing the original), making this parallelism safe. For managing global state, Rhope has a transaction mechanism.

Nit is a statically typed object-oriented programming language. The goal is to propose a statically typed programming language where structure is not a pain. It has a simple, straightforward style and can usually be picked up quickly, particularly by anyone who has programmed before. While object-oriented, it allows procedural styles. The Nit Compiler (nitc) produces efficient machine language binaries.

Brace is a dialect of C that looks like Python. It has coroutines, hygienic macros, header generation, and libraries with graphics and sound. It is meant to be good for beginners, kids, and experts. Brace is translated to C, then compiled, with #! support and cached executables. It is fairly portable, and runs on GNU/Linux, Unix, and Windows with MinGW. It should also run on Mac OS X. It comes with a lot of demo programs, many with animated graphics.

Crules is a dynamic programming language that takes influences from Python, Perl, and Haskell. The main motivation for this language was the design of a new paradigm or feature called "rules". A rule is a potential entry point which has dependencies rather than parameters. Any rule can be overridden to have different or no dependencies. Since the language itself can decide on the best course of action for an operation, dependencies become preconditions for execution. It also features lazy evaluation, object orientation, variadic and anonymous-parameter functions, and reflection.
These features help make the language truly dynamic.

Shannon is a general purpose stream-oriented programming language; it is concise and yet feature-rich. Streams, FIFOs, and Unix shell-style pipes are first-class concepts in the language. You can connect functions and FIFOs within your program similar to the way you connect processes with pipes in the Unix shell. These constructs in Shannon, however, are highly efficient, as no true multitasking is involved, and at the same time they allow you to write more concise and readable code for chained data processing. A state is a special type of function that returns a reference to its own local data and any nested functions it may have. In effect, states implement classes in terms of OOP, and yet classes per se aren't part of the language. A special type of module marked as "persistent" is an effective replacement for databases and SQL. This allows you to access persistent shared data using native Shannon constructs, eliminating the need for an extra query language. Intuitive and minimalist syntax and semantics are used. In particular, "minimalist semantics" means fewer things to remember and more possibilities. Shannon is statically typed, although it provides dynamic typing facilities as well.

I is a programming language that was designed to be efficient to write and run. The system incorporates many major libraries, allowing the creation of major projects such as Aciqra. It is an interpreted language and supports CGI scripting through the use of the CGI for Aciv/I extension.

Hybris (hybrid scripting language) is a dynamic scripting programming language created to help developers automate everyday procedures in an easy and fast way. Although it is a high-level language, Hybris supports dynamic library linking, native C function calls, and a lot of other low-level functionality.

mbrChunker is a utility that allows you to mount raw disk images (created by dd, dcfldd, dc3dd, FTK Imager, etc.) and create VMDK files.
It does this by taking the raw image, analyzing the master boot record (physical sector 0), and extracting the specific information that is needed to create a working VMDK file that points to your raw image. It can also extract information such as heads, cylinders, and sectors per track. With version 0.3.15, the tool now has the ability to search for hex byte offsets within any binary file. It will give you the byte location of every hex pattern found. More information about this can be found in the README.
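The MBR layout such a tool relies on is simple enough to sketch. Here is a rough Python illustration (my own, not mbrChunker's actual code) of pulling the four partition-table entries and their CHS fields out of physical sector 0:

```python
import struct

def parse_mbr(sector0: bytes):
    """Parse the four primary partition entries from a 512-byte master boot record."""
    if len(sector0) != 512 or sector0[510:512] != b"\x55\xaa":
        raise ValueError("not a valid MBR (bad length or 0x55AA signature)")
    partitions = []
    for i in range(4):
        entry = sector0[446 + 16 * i : 446 + 16 * (i + 1)]
        status, head, sect_cyl, cyl_lo, ptype = entry[0], entry[1], entry[2], entry[3], entry[4]
        lba_start, num_sectors = struct.unpack("<II", entry[8:16])
        if ptype == 0x00:
            continue  # unused slot
        partitions.append({
            "bootable": status == 0x80,
            "type": ptype,
            "start_head": head,
            "start_sector": sect_cyl & 0x3F,                      # bits 0-5
            "start_cylinder": ((sect_cyl & 0xC0) << 2) | cyl_lo,  # bits 6-7 are the cylinder's high bits
            "lba_start": lba_start,
            "num_sectors": num_sectors,
        })
    return partitions

# A toy image's sector 0: one bootable FAT32-LBA (0x0C) partition at LBA 2048
mbr = bytearray(512)
mbr[446:462] = struct.pack("<B3xB3xII", 0x80, 0x0C, 2048, 204800)
mbr[510:512] = b"\x55\xaa"
print(parse_mbr(bytes(mbr)))
```

A real tool would feed the resulting geometry and extents into the VMDK descriptor it writes; the dictionary keys here are my own naming, chosen for readability.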
6.2. The peak-background split

We now consider the central mechanism of biased clustering, in which a rare high-density fluctuation, corresponding to a massive object, collapses sooner if it lies in a region of large-scale overdensity. This `helping hand' from the long-wavelength modes means that overdense regions contain an enhanced abundance of massive objects with respect to the mean, so that these systems display enhanced clustering. The basic mechanism can be immediately understood via the diagram in figure 8; it was first clearly analysed by Kaiser (1984) in the context of rich clusters of galaxies. What Kaiser did not do was consider the degree of bias that applies to more typical objects; the generalization to consider objects of any mass was made by Cole & Kaiser (1989; see also Mo & White 1996 and Sheth et al. 2001).

Figure 8. The high-peak bias model. If we decompose a density field into a fluctuating component on galaxy scales, together with a long-wavelength `swell' (shown dashed), then those regions of density that lie above a threshold in density of $\nu$ times the rms will be strongly clustered. If proto-objects are presumed to form at the sites of these high peaks (shaded, and indicated by arrows), then this is a population with Lagrangian bias - i.e. a non-uniform spatial distribution even prior to dynamical evolution of the density field. The key question is the physical origin of the threshold; for massive objects such as clusters, the requirement of collapse by the present imposes a threshold of $\nu \gtrsim 2$. For galaxies, there will be no bias without additional mechanisms to cause star formation to favour those objects that collapse first.

The key ingredient of this analysis is the mass function of dark-matter haloes. The universe fragments into virialized systems such that $f(M)\,dM$ is the number density of haloes in the mass range $dM$; conservation of mass requires that

$$ \int M f(M)\, dM = \rho_0 . $$

A convenient related dimensionless quantity is therefore the multiplicity function, $M^2 f(M) / \rho_0$, which gives the fraction of the mass of the universe contained in haloes of a unit range in $\ln M$. The simplest analyses of the mass function rest on the concept of a density threshold: collapse to a virialized object is deemed to have occurred where the linear-theory density contrast $\delta$, averaged over a box containing mass $M$, reaches some critical value $\delta_c$. Generally, we shall assume the value $\delta_c = 1.686$ appropriate for spherical collapse in an Einstein-de Sitter universe.

Now imagine that this situation is perturbed, by adding some constant shift $\epsilon$ to the density perturbations over some large region. The effect of this is to perturb the threshold: fluctuations now only need to reach $\delta = \delta_c - \epsilon$ in order to achieve collapse. The number density is therefore modulated:

$$ f \;\to\; f - \epsilon\, \frac{df}{d\delta_c} . $$

This gives a bias in the number density of haloes in Lagrangian space: $\delta f / f = b_{\rm L}\, \epsilon$, where the Lagrangian bias is

$$ b_{\rm L} = -\frac{1}{f}\, \frac{df}{d\delta_c} . $$

In addition to this modulation of the halo properties, the large-scale disturbance will move haloes closer together where $\epsilon$ is large, giving a density contrast of $1 + \epsilon$. If $\epsilon \ll 1$, the overall fractional density contrast of haloes is therefore the sum of the dynamical and statistical effects: $\delta_{\rm halo} = \epsilon + b_{\rm L}\, \epsilon$. The overall bias in Eulerian space ($b = \delta_{\rm halo} / \epsilon$) is therefore

$$ b = 1 + b_{\rm L} . $$

Of course, the field $\epsilon$ can hardly be imposed by hand; instead, we make the peak-background split, in which $\delta$ is mentally decomposed into a small-scale and a large-scale component - the latter of which we identify with $\epsilon$. The scale above which the large-scale component is defined does not matter so long as it lies between the sizes of collapsed systems and the scales at which we wish to measure correlations. To apply this, we need an explicit expression for the mass function.

The simplest alternative is the original expression of Press & Schechter (1974), which can be written in terms of the parameter $\nu = \delta_c / \sigma(M)$:

$$ \frac{M^2 f(M)}{\rho_0} = \sqrt{\frac{2}{\pi}}\; \nu\, e^{-\nu^2/2}\, \frac{d\ln\nu}{d\ln M} . $$

We now use $d/d\delta_c = \sigma(M)^{-1}\,(d/d\nu) = (\nu/\delta_c)\,(d/d\nu)$, since $\sigma(M)$ is not affected by the threshold change, which yields

$$ b = 1 + \frac{\nu^2 - 1}{\delta_c} . $$

This says that $M_*$ haloes ($\nu = 1$) are unbiased, low-mass haloes are antibiased and high-mass haloes are positively biased, eventually reaching the $b = \nu / \sigma$ value expected for high peaks. The corresponding expression can readily be deduced for more accurate fitting formulae for the mass function, such as that of Sheth & Tormen (1999):

$$ b = 1 + \frac{q\nu^2 - 1}{\delta_c} + \frac{2p/\delta_c}{1 + (q\nu^2)^p}, \qquad q \simeq 0.707,\quad p \simeq 0.3 . $$

We can now understand the observation that Abell clusters are much more strongly clustered than galaxies in general: regions of large-scale overdensity contain systematically more high-mass haloes than expected if the haloes traced the mass. This phenomenon was dubbed natural bias by White et al. (1987). However, applying the idea to galaxies is not straightforward: we have shown that enhanced clustering is only expected for massive fluctuations with $\nu \gtrsim 1$, but galaxies at $z = 0$ fail this criterion. The high-peak idea applies well at high redshift, where massive galaxies are still assembling, but today there has been time for galaxy-scale haloes to collapse in all environments. The large bias that should exist at high redshifts is erased as the mass fluctuations grow: if the Lagrangian component to the biased density field is kept unaltered, then the present-day bias will tend to unity as the fluctuations continue to grow (Fry 1986; Tegmark & Peebles 1998). Strong galaxy bias at $z = 0$ therefore requires some form of selection that locates present-day galaxies preferentially in the rarer haloes with $M > M_*$ (Kauffmann, Nusser & Steinmetz 1997). This dilemma forced the introduction of the idea of high-peak bias: bright galaxies form only at the sites of high peaks in the initial density field (Bardeen et al. 1986; Davis et al. 1985).
This idea is commonly, but incorrectly, attributed to Kaiser (1984), but it needs an extra ingredient, namely a non-gravitational threshold. Attempts were therefore made to argue that the first generation of objects could propagate disruptive signals, causing neighbours in low-density regions to be `still-born'. It is then possible to construct models (e.g. Bower et al. 1993) in which the large-scale modulation of the galaxy density is entirely non-gravitational in nature. However, it turned out to be hard to make such mechanisms operate: the energetics and required scale of the phenomenon are very large (Rees 1985; Dekel & Rees 1987). These difficulties were only removed when the standard model became a low-density universe, in which the dynamical argument for high galaxy bias no longer applied.
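As a quick numerical check of the limiting behaviours of the Press-Schechter bias $b = 1 + (\nu^2 - 1)/\delta_c$, here is a short Python sketch (my own illustration, using the spherical-collapse value $\delta_c = 1.686$):

```python
DELTA_C = 1.686  # linear-theory collapse threshold (Einstein-de Sitter)

def eulerian_bias(nu):
    """Press-Schechter peak-background-split bias: b = 1 + (nu^2 - 1) / delta_c."""
    return 1.0 + (nu * nu - 1.0) / DELTA_C

# nu = delta_c / sigma(M); nu = 1 corresponds to M* haloes
for nu in (0.5, 1.0, 2.0, 4.0):
    print(f"nu = {nu}: b = {eulerian_bias(nu):.3f}")

# High-peak limit: b -> nu^2 / delta_c (equivalently nu / sigma)
print(eulerian_bias(10.0) / (10.0**2 / DELTA_C))  # close to 1
```

The output shows the three regimes in the text: antibias for $\nu < 1$, $b = 1$ exactly at $\nu = 1$, and strong positive bias for rare high peaks.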
The degree to which fresh water and saltwater mix in an estuary is measured using isohalines. Isohalines are areas in the water that have equal salt concentrations, or salinities. In estuaries, salinity levels are generally highest near the mouth of a river where the ocean water enters, and lowest upstream where fresh water flows in. To determine isohalines, scientists measure the water's salinity at various depths in different parts of the estuary. They record these salinity measurements as individual data points. Contour lines are drawn connecting data points that have the same salinity measurements. These contour lines showing the boundaries of areas of equal salinity, or isohalines, are then plotted onto a map of the estuary. The shape of the isohalines tells scientists about the type of water circulation in that estuary.
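As a rough illustration of the interpolation step between measured data points, here is a small Python sketch (my own, with made-up salinity numbers) that locates where a chosen isohaline crosses a transect of point measurements:

```python
def isohaline_crossings(positions, salinities, level):
    """Linearly interpolate the positions where salinity equals `level` along a transect."""
    crossings = []
    samples = list(zip(positions, salinities))
    for (x0, s0), (x1, s1) in zip(samples, samples[1:]):
        if (s0 - level) * (s1 - level) < 0:   # level lies strictly between the two samples
            t = (level - s0) / (s1 - s0)
            crossings.append(x0 + t * (x1 - x0))
        elif s0 == level:                      # a sample sits exactly on the isohaline
            crossings.append(x0)
    return crossings

# Salinity (PSU) rising from fresh water upstream (km 0) toward the mouth (km 40)
km  = [0, 10, 20, 30, 40]
psu = [2, 8, 18, 26, 33]
print(isohaline_crossings(km, psu, 20.0))  # → [22.5]
```

Repeating this along many transects and depths, and connecting the resulting points, produces the contour lines described above.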
At Horns Rev wind farm off the coast of Denmark, clouds sometimes appear in winter in the wake of the turbines. I've only seen photos of the phenomenon when the wind direction is exactly aligned with the grid layout - that is, when it's blowing directly from a turbine to its closest neighbour. That may be because it's most picturesque then (and thus most likely to be photographed), or it may be that there's something going on in the fluid dynamics that requires that alignment for the phenomenon to occur.

I guess there are several things at work here:
- wake losses are highest when the wind is exactly aligned with one axis of the turbine grid;
- air temperature varies with height above the water;
- the temperature is low enough that the air is already close to forming fog (in the photo, there looks to be a layer of mist just above the sea's surface);
- the turbine's wake is mixing air from different altitudes.

I'm wondering if it's possible to predict when the phenomenon in the photo here might occur. So my question is: what's the specific formulation of what's going on here - what does the quantification of causes and effects look like?
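One quantitative piece of this is the classic mixing-cloud argument (the same one used for contrails): because saturation vapour pressure is convex in temperature, a linear mixture of two nearly saturated air parcels at different temperatures can end up supersaturated even though neither parcel was. A rough Python sketch (my own illustration, using the Magnus approximation; the temperatures and humidities are made-up numbers, not Horns Rev data):

```python
import math

def e_sat(T_c):
    """Saturation vapour pressure (hPa) over water, Magnus approximation."""
    return 6.112 * math.exp(17.62 * T_c / (243.12 + T_c))

def mixture_supersaturated(T1, rh1, T2, rh2, frac):
    """Mix two parcels (temperatures in deg C, relative humidities 0-1).

    Temperature and vapour pressure mix roughly linearly, but saturation
    pressure is convex in T, so the mixture can exceed saturation even
    when neither parcel does -- the contrail / wake-fog mechanism.
    """
    T_mix = frac * T1 + (1 - frac) * T2
    e_mix = frac * rh1 * e_sat(T1) + (1 - frac) * rh2 * e_sat(T2)
    return e_mix > e_sat(T_mix)

# Cold, near-saturated air just above the sea mixed with milder moist air from aloft
print(mixture_supersaturated(2.0, 0.99, 12.0, 0.98, 0.5))  # → True
```

This is only the thermodynamic half of the answer; the turbine-wake part (how much air from which heights gets mixed, and why grid alignment matters) would need a fluid-dynamics model on top of it.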
There is a very deep antipathy between duplication and abstraction. This continues our series on the four Big Ideas in software development. Be sure to check earlier issues for articles on Cohesion and Coupling. This month, our goal is to cast new light on abstraction.

A naive approach to object-oriented design is to create a system using classes that model real-world things. If you learned about object-oriented programming in the 1990s, chances are someone had you model Dog, Cat, Mammal, and Animal classes as an exercise. Abstraction meant implementing the parts that related to your software needs. You might have designed a bark() behavior, but not a tailWag() behavior, along with a few supporting attributes such as size and speed. Your classes were a straightforward abstraction of the real world, each with as many attributes and behaviors as made sense for that real-world element. From this introduction to abstraction comes quite naturally a mindset that the best way to create an object-oriented design is to model the real world, leaving a few bits out. This is not necessarily wrong, but it is misleading. Abstraction is deeper and more profound than this mindset makes it sound. We base our primary focus on abstraction on a definition by Uncle Bob Martin: Abstraction is the elimination of the irrelevant and the amplification of the essential. See how we just emphasized both essential phrases in that definition and eliminated the BS about "the real world?"

Abstract and Concrete

We start our discussion of abstraction with the concept of abstract types vs. concrete types. Abstract types do not completely specify behavior, whereas concrete types contain specific code details for all behaviors. Purely abstract types in C#, Java, and the like (where absolutely no behavior is defined) are known as interfaces. From concept to code, then, abstraction is directly implemented in the form of interfaces.
The set of behaviors supported by a class appears as a standalone declaration, a contract of sorts: The FineCalculator interface captures the concept of determining how much to charge library patrons for borrowed materials that they return late. The interface captures only this singular concept and no implementation details (other than the argument and return types). A fine calculator implementation will complete the interface by implementing charge(). You might imagine BookFineCalculator, MovieFineCalculator, and NewReleaseFineCalculator implementations. Though implementations may vary, the abstract concept of determining an appropriate fine charge for a given number of late days is likely to remain unchanged from the point of view of a FineCalculator user.

Among the benefits of having this abstraction are:
- The specific portion of the client code that must obtain fines can be written once, regardless of the material types involved. "If" statement logic isn't littered throughout the client: "if the material is a book, calculate the fine using this algorithm, otherwise if it's a movie, calculate it that way, otherwise…."
- New FineCalculators can be introduced, and existing algorithms changed, without touching virtually any other code elsewhere in the system (a great example of Bertrand Meyer's open-closed principle).
- The interface isolates the client software from any changes to the implementation details of each FineCalculator algorithm, as long as it continues to meet its contract (which is assured by unit tests).
- The client software can be unit-tested in isolation and thus not have to depend on interacting with any one specific material type. Tests for the client can substitute a test-double that implements the same abstraction solely for purposes of testing. This ability to test against a simple, in-memory construct isolates the client code being tested from dependency on a collaborating class that might be volatile, slow, or even non-existent.
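The article's original listings are not reproduced here, but the FineCalculator idea can be sketched in Python with an abstract base class (the charge() signature and the fine amounts are my own assumptions, for illustration only):

```python
from abc import ABC, abstractmethod

class FineCalculator(ABC):
    """Abstract concept: what to charge for a late return."""
    @abstractmethod
    def charge(self, days_late: int) -> float: ...

class BookFineCalculator(FineCalculator):
    def charge(self, days_late: int) -> float:
        return 0.10 * days_late   # assumed per-day rate, for illustration

class MovieFineCalculator(FineCalculator):
    def charge(self, days_late: int) -> float:
        return 0.50 * days_late   # assumed per-day rate, for illustration

def total_fine(calculator: FineCalculator, days_late: int) -> float:
    # Client code written once, with no per-material "if" logic
    return calculator.charge(days_late)

print(total_fine(BookFineCalculator(), 5))   # → 0.5
print(total_fine(MovieFineCalculator(), 5))  # → 2.5
```

A test double is one more one-line subclass, which is exactly the testing benefit described above.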
Without the interface, the client is dependent on concrete details of the algorithms, which are likely to change over time. Introducing an abstraction layer, in the form of an interface, basically nets you all the positive benefits of reduced coupling.

Generalization is also Abstraction

It would be possible to name the charge() method something like CalculateChargeForBooksOverTenDaysLate(), but that has a problem of over-specification and implementation exposure. It is not an essential feature of FineCalculator that the charge is for a book (modern libraries lend a variety of materials), nor that the charge is only calculated for lateness over 10 days. A name that reveals only relevant and correct information is an abstraction. One may back into abstraction by stripping irrelevant details from names in the system. With a name like FineCalculator a developer can know in an instant if this is a class he wishes to subclass or not. Generalization has limits, though. The simpler name "Calculator" lacks evocative value. Uncertain whether to implement its interface, a developer may create a new interface (duplication!), hack new behavior into existing code (complication!), or directly modify the caller of the existing FineCalculators with code for calculating his specific fine (duplication and coupling!)

The TDD community has been recently buzzing with the realization that code becomes more general as tests become more specific, revealing that test-driving code alone will push it to a more appropriate level of abstraction. It is still up to the human(s) at the keyboard to change the class and method names to match.

Data Duplication vs. Abstraction

There is a very deep antipathy between duplication and abstraction. One frequent example we've encountered is the pervasive use of a parameterized collection object.
For example, the library system works with lists of holdings: Throughout the code, you'll find dozens of references to the List<Holding> type, often in signatures or method calls: This is a subtle form of duplication: We have to specify two pieces of information—the collection type and the type to which the collection is bound—in every appropriate code place. Suppose we must now associate additional characteristics with the collection of holdings as a whole, such as a date stamp to indicate when the collection was created. We can pass this date stamp around as an additional argument here and there where appropriate: This opens up the door to increasingly long method signatures over time, instead of helping the system to evolve gracefully. The date is really an attribute of the collection of holdings as a whole—yet we have no abstraction in which we could capture that information. Prefer instead to create an abstraction that simply encapsulates the two: This amplifies what's important—the collection of holdings—and buries the irrelevant fact that holdings are stored as a sequential list. As you need, you can easily incorporate new behaviors into HoldingSet without having to revisit numerous method signatures throughout the application. The abstractions become richer over time instead of the parameter lists becoming more cumbersome. Abstraction drives out duplication.

The same principle applies to a loose collection of primitive parameters. Perhaps a repeating set of (latitudeHours, latitudeMinutes, latitudeSeconds, longitudeHours, longitudeMinutes, longitudeSeconds) might indicate a missing map coordinate abstraction? Do the coordinates have related methods scattered about the code?

Code Duplication vs. Abstraction

You may frequently find two-line or even single-line duplications. In the Risk game implementation we've looked at, there is a large class named Risk which looks to control everything about the game.
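Returning to the holdings example for a moment, the missing collection abstraction can be sketched in Python (the class and attribute names beyond HoldingSet are my own assumptions):

```python
from datetime import date

class Holding:
    def __init__(self, title: str):
        self.title = title

class HoldingSet:
    """Encapsulates the collection of holdings plus collection-level facts
    (such as when the set was created), instead of passing a bare list and
    a date stamp through every method signature."""
    def __init__(self, holdings=None, created_on=None):
        self._holdings = list(holdings or [])    # the storage detail stays buried
        self.created_on = created_on or date.today()

    def add(self, holding: Holding) -> None:
        self._holdings.append(holding)

    def __len__(self) -> int:
        return len(self._holdings)

    def __iter__(self):
        return iter(self._holdings)

shelf = HoldingSet([Holding("Agile Java")], created_on=date(2012, 1, 1))
shelf.add(Holding("Clean Code"))
print(len(shelf), shelf.created_on)  # → 2 2012-01-01
```

New collection-level behaviors land in HoldingSet from now on, rather than in ever-longer parameter lists.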
Within this multi-thousand-line class are numerous methods and lines of code involving both an offensive player (attacker) and a defensive player: The class that controls the game includes additional information related to making attacks: Similar code is sprinkled through the Risk class. Virtually every place there is code relating to an attacker, there is also code relating to a defender. The related code can be rolled into a single abstraction, an Attack: With this design change, you see very subtle bits of unnecessary (duplicate) code disappear. For example, we were able to change the method name isValidAttack to isValid once we moved it into the Attack class. The client code becomes simpler overall. We've moved two lines of complexity involving interaction with a game object into a single method in the Attack class, rollDice. That change didn't eliminate any duplication yet, but it did simplify the client and achieved command-query separation (i.e. we can ask for attacker and defender results multiple times without having to re-roll the dice): Further, we made it possible to change the implementation of how dice are rolled without having to open and touch the client class. The game object is now referenced in the client only when constructing the Attack object. The design isn't yet "perfect"—perhaps we should move the rollDice, getAttackerDice, and getDefenderDice methods into the Attack class itself—but we now have a new home into which we can relocate attack-related code. With the introduction of this previously missing abstraction, our many-thousand-line blob class shrinks by perhaps a few dozen lines of code. As the Risk class shrinks over time, additional opportunities for abstraction become more obvious. Abstraction begets abstraction.

Spotting "missing" abstractions takes a bit of practice. Here are a few smells that might point to the need for additional abstractions:
- Code chunks that seem to repeat (perhaps not exactly) throughout the code
- ctrl-c / ctrl-v
- "I know I saw something similar somewhere else in the code."
- Extensive detailed test setup

Sometimes, you'll spot two lines, or even a single line, that redundantly specifies code. Here's a bit of ugliness used to add two new menu items, and corresponding actions, to an existing menu: Don't hesitate to factor these couplets into a single method! While they may not represent a top-level abstraction like a class, helper methods in the same class are still abstractions—you're replacing a complex implementation detail with a simple declaration: And once you've created such methods, you may start to notice that they too may be better suited in another class, whether existing or new. Further, you might recognize that things are a bit disjoint and implicit—it seems as if there's an action object somewhere that is associated with the key identified by CleanAction. A good goal for this code might be to shape it into something like: Of course, the library type for menu may not support this—perhaps it's time to create your own abstractions that wrap the third-party types.

We hear the same resistance to these ideas all the time: "But all these new method calls and object instantiations are going to degrade performance." When we hear this, we recommend that the programmers try it and measure. You will be surprised to find what is fast, what is slow, and why. The world changes too fast to blindly follow rules of thumb about performance.

Unit tests, particularly those created as a virtue of doing test-driven development (TDD), must document the essence of what's going on:
- What data is being created for purposes of the test?
- What behavior is being executed?
- How do we know that the expected behavior happened?

It's far too easy to drown these three key test elements in a sea of difficult-to-understand test code. Tests must amplify what's essential and bury what's not relevant to understanding the requirement.
Tests that are not sufficiently abstract will be difficult to understand and will break for all the wrong reasons. Test abstraction is such a significant element of doing TDD well that we've chosen to discuss it in an upcoming article.

Abstraction is where object-oriented software design starts. We strive to build a system that presents straightforward concepts to the reader, not overwhelming masses of detail. The process of abstracting drives out duplication and reveals more natural abstractions over time, making the code easier to read and easier to test. A well-abstracted design imparts meaning and provides easy navigation. We can deftly navigate the system through its simple abstractions, and push them aside when we need to get to the nitty-gritty implementation details.

Jeff Langr has been happily building software for three decades. In addition to co-authoring Agile in a Flash with Tim, he's written another couple books, Agile Java and Essential Java Style, contributed to Uncle Bob's Clean Code, and written over 90 articles on software development. Jeff runs the consulting and training company Langr Software Solutions from Colorado Springs.

Tim Ottinger is the other author of Agile in a Flash, another contributor to Clean Code, a 30-year (plus) software developer, agile coach, trainer, consultant, incessant blogger, and incorrigible punster. He writes code. He likes it.
In this blog I have done a lot of looking at pairs of nearby towns on the Great Plains. The reason for doing this is that such temperatures should be somewhat similar. They should not have strong biases over short distances like 15 miles. Today's example is of Geneva, Nebraska and Fairmont, Nebraska, two towns merely 15 miles apart. These two towns are especially interesting, as Fairmont has an elevation of 1641 feet and Geneva 1644 feet. No one can claim that there is a lapse rate of any importance between these two towns. The thermometers are, however, in a particularly good place to test out the urban heating of a house. Geneva's MMTS is located 12 feet from a house, in a neighborhood. Below is the picture. Fairmont, Nebraska's Stevenson screen is sited at least 100 meters away from a house. Its station is shown below. Pictures of both stations are from Anthony Watts' surfacestations.org site. Geneva should be subject to heat emitted from the house during the winter and thus should be warmer during the winter than Fairmont. This is precisely what we see. Every winter, Geneva, Nebraska is warmer than Fairmont by about a degree. This can't be due to CO2, because CO2 doesn't go up in the winter at Geneva and down at Fairmont. This can only be due to heat affecting the validity of the Geneva measurements. Why in the world the global warming hysteriacs act as if they are collecting data of sufficient scientific quality is far beyond me. As a physicist, I learned that the first thing one must do is ensure that the measurements are free of bias--and this is something that the climatologists are not doing. I would point readers to my previous blog calculating how much heat is added to the radiation field of a city, even a small one, by our modern lifestyle: how energy use warms the earth
Code development consists mainly of programming languages, debugging tools, version management tools, compiling tools, and Integrated Development Environments (IDEs), in which all of the above are coupled into a single software application. Links are provided to various compilers used in scientific computing, such as FORTRAN, C, C++, Java and, more recently, Python.

GNU Compiler Collection: GNU's project to produce a world-class optimizing compiler. It works on multiple architectures and in diverse environments. Currently GCC contains front ends for C, C++, Objective C, GNU Fortran-95, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj, ...). For manuals on using the various GCC compilers, check out the GCC online documentation.

g77: The GCC front end for FORTRAN 77. It is a very good FORTRAN 77 compiler. It does not, however, have the -r8 option, which compiles a program as double precision. This may be a reasonable compiler design philosophy, but in many cases it causes problems when porting a code from SUN / DEC / HP workstations onto Linux systems. The g77 manual is available at the GCC online documentation site.

gfortran: I was happy to receive this link by mail. It had been 3 years since I migrated to using the GNU C compiler for scientific computing, because there was no "truly free" FORTRAN-95 compiler available then. I thank Paul Thomas for this link.

g95: gfortran above and g95 are reportedly offshoots of the same CVS tree. It has an impressive list of programs that compile and run using this compiler.

fort77 and f2c: fort77 is a Perl program which invokes the f2c command (a Fortran-to-C translator) transparently, so it can be used just like a real Fortran compiler. fort77 can be used to compile Fortran, C, and assembler code, and can link the code with the f2c libraries. If you install fort77, you'll also need to install the f2c package. This combination does not have the "-r8" problem. You can download fort77 and f2c from the above link.
- lush: An object-oriented programming language which combines the flexibility of an interpreted language with the efficiency of a compiled language. It has full interfaces to numerical libraries (GSL, LAPACK, BLAS) and graphics libraries (OpenGL), which allow the creation of graphics and 3D animations, and many other features that sound too good to be true. I have not yet tried it out, but it sounds very promising.
- Scientific Python: You may want to explore Python for your scientific computing needs. Python is an interpreted, interactive, object-oriented programming language. It has a number of extensions for numerics, plotting and data storage, and combined with Tk it lets you develop very good GUIs for your codes. The most exciting aspect is that it simplifies programming, because modules for almost anything (vectors, tensors, transformations, derivatives, linear algebra, Fourier transforms, statistics, etc.) are available. You can also wrap C and Fortran libraries from Python. Finally, if you want to write a numerical scheme of your own, you may find that it is simpler in Python. There are also interfaces to netCDF (portable binary files), MPI and BSPlib (parallel programming). You can explore Python for scientific computing further here:
- Scientific-Python: A collection of modules for scientific computing in Python. All the necessary modules can be downloaded as either a tar file or an RPM file from here. The maintainer Konrad Hinsen also has a nice tutorial on scientific computing in Python.
- SciPy: An open source library of scientific tools for Python. It includes modules for graphics and plotting, optimization, integration, special functions, signal and image processing, genetic algorithms, ODE solvers, etc.
In this section links are given mainly to debugging tools for GCC and FORTRAN. I understand that Python has a debugging module built in, though I have not used it.
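On the point that writing a numerical scheme of your own is often simpler in Python: this can be tried with nothing beyond the standard library. A minimal sketch (the function names and step count are my own choices, not from any of the packages above), integrating dy/dt = -y with the forward Euler method and comparing against the exact answer exp(-t):

```python
import math

def euler(f, y0, t0, t1, n):
    """Integrate dy/dt = f(t, y) from t0 to t1 in n forward-Euler steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)   # one explicit Euler step
        t += h
    return y

# dy/dt = -y with y(0) = 1 has the exact solution y(t) = exp(-t)
approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 10000)
exact = math.exp(-1.0)
print(approx, exact)
```

Halving the number of steps roughly doubles the error, which is the expected first-order behaviour of the scheme.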
The purpose of a debugger is to allow you to see what is going on inside a program while it executes, or what the program was doing when (or if) it crashed.
- Ftnchek: A FORTRAN checker designed to detect errors in a Fortran program that a compiler usually does not. It is therefore best to run ftnchek on your FORTRAN programs after they have compiled without errors. Its purpose is to assist the user in finding semantic errors: errors that are legal in the Fortran language but are wasteful or may cause incorrect operation. An on-line manual is available. This project is looking for volunteers to bring ftnchek up to the Fortran 90 standard.
- gdb: All programs written in the languages supported by GCC can be debugged using gdb, an excellent interactive, command-line debugger. You can compile your programs with the -g option, which inserts debugging information into the executable. gdb can start your programs, stop them on specified conditions and at specified locations, and examine what happened when your program stops. In a large code with multiple cascading calls to various functions, it can back-trace the function calls. You can also download the document Debugging with GDB and a quick reference card.
- xxgdb: A front end to the gdb debugger. Useful for beginners to gdb, as it lays out the gdb commands as buttons, with one area for viewing source (in which you can set breakpoints, etc., with a click of the mouse) and another area for viewing the debugging results.
- DDD: The GNU Data Display Debugger, GNU DDD, is a graphical front-end for command-line debuggers such as GDB, DBX, WDB, Ladebug, JDB, XDB, the Perl debugger, or the Python debugger. Besides "usual" front-end features such as viewing source texts, it also has a good interactive graphical data display, where data structures are displayed as graphs. Follow this link for a DDD manual in PostScript / HTML / PDF format.
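gdb's ability to back-trace cascading function calls has a standard-library analogue in Python, which is handy when comparing the two workflows. A small sketch with made-up function names, capturing the chain of calls after a crash:

```python
import traceback

def inner(x):
    return 1 / x          # raises ZeroDivisionError when x == 0

def outer():
    return inner(0)

try:
    outer()
except ZeroDivisionError:
    # format_exc() renders the cascade of calls that led to the crash,
    # much like "backtrace" at the gdb prompt
    tb = traceback.format_exc()
    print(tb)
```

Unlike gdb, no special compile flags are needed; the interpreter always carries the frame information.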
It will be worth your while investing some time in learning to use one of the version control tools below (CVS is what I use) if you are into any serious code development.
- Concurrent Versions System: CVS is one of the most popular version control systems running on the Linux operating system. "Popular Linux projects like Apache, EGCS, GIMP, and others are using CVS to coordinate their efforts" is how the URL linked above describes their effort.
- Project Revision Control System: PRCS, the Project Revision Control System, is the front end to a set of tools that (like CVS) provide a way to deal with sets of files and directories as an entity, preserving coherent versions of the entire set. PRCS was designed primarily by Paul N. Hilfinger, with input and modifications by Luigi Semenzato and Josh MacDonald, and is written and maintained by Josh MacDonald. Its purpose is similar to that of SCCS, RCS, and CVS, but (according to its authors, at least) it is much simpler than any of those systems. This page is where information on the latest developments in PRCS can be found.
- Gbuild: gbuild is a script written in the Bourne shell language to simplify package maintenance by allowing you to automate code updates from CVS, compilation, and the building of tar/rpm/srpm packages. Some external scripts on which certain functions of gbuild depend are written in Perl. gbuild is released under the GPL.
Integrated development environments (IDEs) can be very useful for building code and ideally come with all of the above tools (i.e. a compiler, a debugger and a version control tool). In addition, IDEs usually provide a makefile generator, documentation help, online help manuals, etc.
- KDevelop: An easy-to-use C/C++ IDE (Integrated Development Environment) for Linux. It supports KDE/Qt, GNOME, and plain C and C++ projects. This site has a lot of documentation... a highly browsable site for software developers.
Specifically, KDevelop manages or provides:
- all development tools needed for C++ programming, such as the compiler, linker, automake and autoconf;
- KAppWizard, which generates complete, ready-to-go sample applications;
- a class generator, for creating new classes and integrating them into the current project;
- file management for sources, headers, documentation, etc. to be included in the project;
- the creation of user handbooks written with SGML and the automatic generation of HTML output with the KDE look and feel;
- automatic HTML-based API documentation for your project's classes, with cross-references to the libraries used;
- internationalization support for your application, allowing translators to easily add their target language to a project;
- WYSIWYG (what you see is what you get) creation of user interfaces with a built-in dialog editor;
- debugging your application by integrating KDbg;
- editing of project-specific pixmaps with KIconEdit;
- the inclusion of any other program you need for development, by adding it to the "Tools" menu according to your individual needs.
VDKBuilder: VDKBuilder is a tool that helps programmers construct GUI interfaces, and edit, compile, link, and debug within an integrated environment. Using VDKBuilder dramatically reduces development time, since all code related to GUI construction and signal processing is automatically generated, maintained and updated. It is distributed under the GNU Public License. Visit the site to download the software.
Thiomargarita namibiensis
Brandie Amsden
Thiomargarita namibiensis is a unique bacterium: not only does it live where most bacteria cannot survive, it is also the largest bacterium ever found. It took the record for largest bacterium from Epulopiscium fishelsoni by being one hundred times larger. These prokaryotic, spherical bacteria are about 0.75 millimeters in diameter, which makes them visible to the naked eye. The organism is generally found in chains of ten or more cells. It is also very easy to notice because it shines like a pearl; this pearly color gives it the name "Sulfur Pearl of Namibia". The rest of its name comes from the fact that it eats sulfur and that it was found off the coast of Namibia. Heide Schulz of the Max Planck Institute for Marine Microbiology and her colleagues stumbled across this fascinating microbe deep down on the ocean floor. They were aboard the Russian research vessel Petr Kottsov looking for Thioploca and Beggiatoa off the coast of Namibia when the whiteness of this microbe caught their eye. Heide Schulz knew right away that this new discovery was a microorganism, but it took a little more to convince the others that she was right. So far this is the only species in the genus, and no pure cultures have been made outside of the original environment. Researchers have been able to bring the microorganism back to the lab for study, but it must be maintained in material from its environment. Obtaining a pure culture has been a challenge, because the cell is so big that other bacteria colonize its mucous sheath. The giant size of this microorganism comes from the large vacuole inside it, which takes up about ninety-eight percent of the interior; a few sulfur globules and the cytoplasm fill the remainder of the space. It needs the large vacuole to store the nitrate it requires for survival, because it cannot move.
Since the cells cannot move, they wait for whatever nitrate they are given. Each time there is a storm, the ocean floor is stirred up, and the mucous sheath that links the cells together catches the nitrate. The amount of nitrate they capture allows them to go about three months before they would die of starvation. The type of environment they live in allows them to obtain the sulfur they need to break down the nitrate, which keeps them alive. Researchers are still trying to learn more about this microorganism, and they hope that it can be used to clean up ocean waters where there is a lot of runoff. With more research they can also look further into how the nitrogen and sulfur cycles work together. This organism and others like it also help keep the ocean from smelling like rotten eggs. For the moment, researchers are very curious about how Thiomargarita namibiensis is able to store large amounts of nitrate over a period of time.
*Disclaimer - This report was written by a student participating in a microbiology course at the Missouri University of Science and Technology. The accuracy of the contents of this report is not guaranteed and it is recommended that you seek additional sources of information to verify the contents.
A tantalizing glimpse inside this dome was captured after sunset at the mountaintop Pic du Midi Observatory in the French Pyrenees. But while most observatories are just beginning their work at sunset, this observatory's day was done. The instrument looming within is CLIMSO (for Christian Latouche IMageur Solaire), dedicated to exploring dynamic phenomena across the surface and atmosphere of the Sun. To image the solar atmosphere, or corona, CLIMSO uses coronagraphs. Developed by French astronomer Bernard Lyot in the 1930s, coronagraphs block light from the center of the telescope beam to create an artificial eclipse and allow a continuous view of the solar corona. In this surreal twilight scene above a sea of clouds, the dome's interior was revealed by the single, long exposure as the open slit rotated across the field of view.
yield point, in mechanical engineering, is the load at which a solid material that is being stretched begins to flow, or change shape permanently, divided by its original cross-sectional area; or the amount of stress in a solid at the onset of permanent deformation. The yield point, alternatively called the elastic limit, marks the end of elastic behaviour and the beginning of plastic behaviour. When stresses less than the yield point are removed, the material returns to its original shape. For many materials that do not have a well-defined yield point, a quantity called yield strength is substituted: the stress at which a material has undergone some arbitrarily chosen amount of permanent deformation, often 0.2 percent. A few materials start to yield, or flow plastically, at a fairly well-defined stress (the upper yield point) that falls rapidly to a lower steady value (the lower yield point) as deformation continues. Any increase in stress beyond the yield point causes greater permanent deformation and eventually fracture. See deformation and flow.
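As a worked illustration of the definition (stress equals load divided by original cross-sectional area, compared against a yield strength), here is a short sketch. The 250 MPa yield strength is a typical textbook value for mild steel; the load and rod diameter are made up for the example:

```python
import math

yield_strength_pa = 250e6    # assumed 0.2%-offset yield strength, mild steel
load_n = 40000.0             # applied tensile load in newtons (hypothetical)
diameter_m = 0.012           # rod diameter, 12 mm (hypothetical)

area_m2 = math.pi * (diameter_m / 2) ** 2   # original cross-sectional area
stress_pa = load_n / area_m2                # stress = load / area

deforms_permanently = stress_pa > yield_strength_pa
print(f"stress = {stress_pa / 1e6:.0f} MPa, yields: {deforms_permanently}")
```

Here the computed stress exceeds the yield strength, so on unloading this rod would not return to its original shape.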
- Find geographical coordinates using Google Maps (revisited) Joor Loohuis, 2010-03-08 A little while ago we posted a short instruction on how to find the map coordinates of a location for use on a Google Map. With some new additions to Google Maps, this has now become even easier.
- Find geographical coordinates using Google Maps Joor Loohuis, 2009-05-10 There are occasions where you need accurate geographical coordinates of a location, for example for putting a marker on a map. Assuming you don't have a GPS receiver handy, or the location you're interested in isn't around the corner, finding coordinates can be a pain. Fortunately, Google Maps provides a little tool that will help you find coordinates for any location you can display on a map.
- Microformats: embedding information in webpages the right way Armijn Hemel, 2009-04-14
- Unintentional side effects of using microformats Joor Loohuis, 2009-04-13 Interpreting a website using microformats semantics has consequences for web designers and web developers, even if they don't intentionally use them. Certain markup is assumed to have a specific meaning, which may result in functional and presentational side effects. Therefore, designers and developers should be aware of the syntax of microformats, and avoid it if the semantics are not applicable.
Strontium Isotopic Signatures of the Streams and Lakes of Taylor Valley, Southern Victoria Land, Antarctica: Chemical Weathering in a Polar Climate Source: Aquatic Geochemistry, Volume 8, Number 2, June 2002 , pp. 75-95(21) We have collected and analyzed a series of water samples from three closed-basin lakes (Lakes Bonney, Fryxell, and Hoare) in Taylor Valley, Antarctica, and the streams that flow into them. In all three lakes, the hypolimnetic waters have different 87Sr/86Sr ratios than the surface waters, with the deep water of Lakes Fryxell and Hoare being less radiogenic than the surface waters. The opposite occurs in Lake Bonney. The Lake Fryxell isotopic ratios are lower than modern-day ocean water and most of the whole-rock ratios of the surrounding geologic materials. A conceivable source of Sr to the system could be either the Cenozoic volcanic rocks that make up a small portion of the till deposited in the valley during the Last Glacial Maximum or from marble derived from the local basement rocks. The more radiogenic ratios from Lake Bonney originate from ancient salt deposits that flow into the lake from Taylor Glacier and the weathering of minerals with more radiogenic Sr isotopic ratios within the tills. The Sr isotopic data from the streams and lakes of Taylor Valley strongly support the notion documented by previous investigators that chemical weathering has been, and is currently, a major process in determining the overall aquatic chemistry of these lakes in this polar desert environment. Document Type: Research article Affiliations: 1: Corresponding author, E-mail: firstname.lastname@example.org 2: Byrd Polar Research Center, The Ohio State University, Columbus, Ohio 43210-1002 3: U.S. Geological Survey, 3215 Marine Street, Boulder, Colorado 80303 4: U.S. 
Geological Survey, 345 Middlefield, Menlo Park, California 94025 5: Department of Geological Sciences, University of Alabama, Tuscaloosa, Alabama 35487 6: Desert Research Institute, 2215 Raggio Parkway, Reno, Nevada 89512 Publication date: 2002-06-01
We have several members of staff who are using YouTube to enhance their teaching, and as an alternative to more traditional methods of communication. The videos available on the YouTube channels below range from worked examples from the undergraduate syllabus to sometimes controversial lectures on the history, foundations and philosophy of science and mathematics. The makers of these videos would welcome your feedback, which can be provided via YouTube. Note that although UNSW has provided support to produce some of these videos, they are personal presentations by the staff members involved. The views expressed and the content presented in these videos are the personal views of the authors and are not necessarily endorsed by the School of Mathematics and Statistics. Students in particular should note that any instructional videos on these channels are not necessarily appropriate in level or method for the particular course they are enrolled in. If in doubt please ask your course lecturer. Dr Chris Tisdell's YouTube Channel is very popular, with a considerable number of subscribers. Some of his playlist topics include Engineering Mathematics, Several Variable Calculus / Vector Calculus, and Mathematics for Finance & Actuarial Studies. A/Prof Norman Wildberger's YouTube Channel has attracted a large following. He has playlists WildTrig (explaining Rational Trigonometry), WildLinAlg (a first course on Linear Algebra), MathFoundations, History of Mathematics, and Universal Hyperbolic Geometry. Prof John Perram's YouTube Channel - Prof. Perram uses Wolfram's Mathematica to generate mathematical narratives, carrying out technical manipulations step-by-step with Mathematica code. He has applied this method to tutorial problems in first year calculus and algebra and Engineering Maths (MATH2019). Dr Denis Potapov's YouTube Channel contains an assortment of mathematical demonstrations. 
His collection includes videos on isometrical transformations of spherical triangles, and pendulum waves simulation. He also has an Advanced Maths Lecture series. Prof James Franklin discusses his book "What Science Knows: And How It Knows It". The book describes some colourful examples of discoveries in the natural, mathematical, and social sciences and the reasons for believing them. It also examines the limits of what science knows, giving special attention both to mysteries that may be solved by science, and those that may in principle be beyond its reach. Randell Heyman's YouTube Channel features animated videos to explain and discuss mathematics problems. Randell is a tutor within the School of Mathematics and Statistics who also participates in our Girls Do The Maths workshops and other outreach activities. The School of Mathematics and Statistics also has a series of videos hosted on UNSWTV. A compilation of videos from the PRIMA 2009: 1st PRIMA Congress, which was a large international congress held at the University of New South Wales in 2009, can also be found on UNSWTV. The footage includes talks from several eminent mathematicians.
Climatic and Environmental Processes (Centennial to Millennial)
Causes of Climate Change Over the Past 1000 Years: Forcing Factors
When looking for climate processes and the forces that influence them at periods ranging from 100 to 1000 years, paleoclimatologists splice instrumented data with calibrated proxy data such as tree rings, cores from icecaps, glaciers, marine and lake sediment layers, and corals, and evidence of vegetation change found in pollen samples and packrat middens. Some climate patterns, or possible millennial-scale oscillations, have been observed in the paleo record that are not necessarily operating today. For example, scientists examining the long-term climate record from ice cores have noted a ~900-year oscillation in the North Atlantic. In their article "Holocene climate variability on centennial-to-millennial time scales," Schulz and Paul (2002) note that "Proxies of atmospheric temperature and humidity from Greenland and northern/central Europe show evidence for 900 year climate oscillations between 3,000 and 8,500 years ago. The magnitude of the climate perturbations in Europe was probably large enough to affect human societies, especially since they occurred during the important transition from hunting-gathering life style to sedentary agriculture."
[Image of Mount St. Helens from USGS]
While the climate of the past 1000 years may have its own unique qualities, such as the growth in human population, it does serve as a fascinating case study of how climate varies in both subtle and sometimes dramatic ways. According to the research of Crowley (2000), between 40-65% of decadal-scale temperature variations during the past 1000 years prior to 1850 were caused by changes in solar irradiance and volcanism. While individual volcanoes usually only impact climate for a year or so, clustered eruptions can perturb the climate system for longer periods of time.
Reconstructions of temperatures and climate forcing are crucial for developing and testing climate models that can separate natural climate variability from the impact of human activities, such as the release of carbon dioxide from the combustion of fossil fuels and changes in land cover. Scientists rely on paleoclimatic proxy records to reconstruct variability and climate patterns at the 1000-year scale. Cores from coral reefs, ice, and ocean and lake sediments can provide an array of information, including temperature, precipitation, chemical composition of air or water, biomass or vegetation, volcanic eruptions and solar activity, with varying degrees of accuracy and detail. The figure below is from Mann, M. E., Rutherford, S., et al. (2003), "Optimal Surface Temperature Reconstructions Using Terrestrial Borehole Data," in JGR Atmospheres. The figure shows comparisons between different Northern Hemisphere temperature reconstructions and the instrumental record. Shown are smoothed (40-year lowpassed) reconstructions and, in the case of the Mann et al. reconstruction, the associated 95% confidence interval. Shown for comparison are the HPS00 reconstruction and the areally weighted mean of gridded HPS00 borehole reconstructions.
The explosions of egg yolks which had been in a microwave oven for only one minute, on the puncture of the yolk by a fork, seem to be explicable as a nucleation event (Letters, 1 February). The doctors were right to assume that the yolk was heated above its boiling point. This can happen because of the absence of any suitable nucleus on which boiling can occur. This is a common situation in clean liquids. The additional assumption by the doctors that the pressure inside the yolk had risen was not correct. The superheated liquid would be only at atmospheric pressure, explaining why the weak membrane of the yolk could remain intact. However, on puncture by the fork, the foreign surface introduced in this way would contain an abundance of air pockets and nonwetted areas, allowing prolific nucleation of vapour bubbles. If the yolk was highly superheated, then the bubble ...
The data is dismal. If climate change continues unmitigated as it has for the past century, temperatures around the world will increase 5 degrees Celsius (9 degrees Fahrenheit) by 2100 -- an increase as large as the difference between today's climate and that of the last ice age. This change won't impact the world equally: local changes vary from almost none to more than 10 degrees Celsius, depending on scenario, location and season. All of these maps were designed using Development Seed's TileMill, an easy-to-use open-source map design tool that we've written about here before, and hosted on MapBox Hosting. TileMill is free to download and has loads of documentation to help people get started making maps. For design tips on map making, check out a blog post from Development Seed's AJ Ashton on the thinking behind the design of these maps.
Preparing for climate change
These maps tell the story of the anticipated impact of climate change, from the basics of where we'll see the biggest increase in temperature and fluctuation in precipitation levels to larger societal impacts on food security, countries' economies, and people's vulnerability to natural disasters. With these maps, the World Bank aims not only to show the urgency of preparing for climate change, but also to target efforts to the countries and regions that will be most affected. This map shows the expected worldwide temperature increases, assuming that global population continues to increase and regionally oriented economic growth is slower than in other scenarios. Agriculture is expected to be one of the most affected industries, impacting countries' economies -- and only more so for ones whose GDP (gross domestic product) is made up largely of agriculture-related business. For example, agriculture is 61.3 percent of Liberia's GDP and 47.68 percent of Ethiopia's, while it's just 1.24 percent of the U.S. GDP.
Low-lying coastal areas will likely be more vulnerable to increased flooding, with countries such as Bangladesh, Myanmar and India at highest risk due to the huge populations that live there. More details on the maps are available in this blog post by Development Seed's Alex Barth. The data powering the maps is all publicly available from the World Bank, as part of its larger open data push with data.worldbank.org. This and other related climate data is all housed in its Open Data Resources for Climate Change. The World Bank is encouraging people to use this data and is hosting an Apps for Climate challenge to promote and reward this use. Check out the details, and be sure to submit your app by March 16.
Web edition: August 27, 2012 You’ve probably learned lessons by watching other people goof up. For example, if you saw another kid ride her bike too fast around a corner and fall down, you might ride your bike more slowly on that turn. “We humans are very sensitive to others’ mistakes,” Masaki Isoda of the Okinawa Institute of Science and Technology in Japan told Science News. And the same is true for other animals, his new data show. Isoda’s team has discovered that in monkeys, a small part of the animal’s brain is activated when a companion monkey makes an error. The finding appeared August 5 in a scientific journal. Sanders, L. Monkey brains sensitive to others’ flubs. Science News, Vol. 182, August 6, 2012, p. 12.
Sir David Baulcombe is one of the world's top scientists, whose work identified small RNAs, and he's a nice person as well. He will be a Keynote Speaker at the upcoming UK Plant Sciences Federation meeting in Dundee, Scotland, in April 2013, which is sure to be a stimulating meeting: http://www.plantsci2013.org.uk/programme/
DARPA's Physical Intelligence program represents a potential major advance in artificial intelligence research, as the "physical intelligence" device would not require computer programming or the use of human controllers to provide directions, as with traditional robots. Instead, the device operates via nano-scale interconnected wires that send signals through synthetic synapses, just like the human brain. Such a system is capable of remembering information, meaning that robots might be able to act like humans in the foreseeable future. Compared to traditional artificial intelligence systems that rely on conventional computer programming, this one "looks and 'thinks' like a human brain," said James K. Gimzewski, professor of chemistry at the University of California, Los Angeles. Gimzewski is a member of the team that has been working under sponsorship of the Defense Advanced Research Projects Agency (DARPA) on a program called Physical Intelligence. The stated objective of the program is: "The analysis domain is to develop analytical tools to support the development of human-engineered physically intelligent systems and to understand physical intelligence in the natural world."
The increasingly ambiguous divide between man and machine just got blurred that much more with Stanford's recent announcement: scientists have successfully created the first truly biological transistor made entirely out of genetic material.
We believe quantum computing may help solve some of the most challenging computer science problems, particularly in machine learning. Machine learning is all about building better models of the world to make more accurate predictions.
If we want to cure diseases, we need better models of how they develop. If we want to create effective environmental policies, we need better models of what’s happening to our climate. And if we want to build a more useful search engine, we need to better understand spoken questions and what’s on the web so you get the best answer. So today we’re launching the Quantum Artificial Intelligence Lab. NASA’s Ames Research Center will host the lab, which will house a quantum computer from D-Wave Systems, and the USRA (Universities Space Research Association) will invite researchers from around the world to share time on it. Our goal: to study how quantum computing might advance machine learning. Machine learning is highly difficult. It’s what mathematicians call an “NP-hard” problem. That’s because building a good model is really a creative act. As an analogy, consider what it takes to architect a house. You’re balancing lots of constraints -- budget, usage requirements, space limitations, etc. -- but still trying to create the most beautiful house you can. A creative architect will find a great solution. Mathematically speaking the architect is solving an optimization problem and creativity can be thought of as the ability to come up with a good solution given an objective and constraints. Classical computers aren’t well suited to these types of creative problems. Solving such problems can be imagined as trying to find the lowest point on a surface covered in hills and valleys. Classical computing might use what’s called “gradient descent”: start at a random spot on the surface, look around for a lower spot to walk down to, and repeat until you can’t walk downhill anymore. But all too often that gets you stuck in a “local minimum” -- a valley that isn’t the very lowest point on the surface. That’s where quantum computing comes in. It lets you cheat a little, giving you some chance to “tunnel” through a ridge to see if there’s a lower valley hidden beyond it. 
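The hill-and-valley picture can be made concrete with a toy example (the surface, starting points and step size are my own choices, not anything from the Lab's work): plain gradient descent on a curve with one deep valley and one shallow one.

```python
def f(x):
    return x**4 - 3 * x**2 + x    # two valleys: deep near x=-1.3, shallow near x=1.1

def df(x):
    return 4 * x**3 - 6 * x + 1   # derivative of f

def descend(x, rate=0.01, steps=2000):
    """Walk downhill from x until the steps run out."""
    for _ in range(steps):
        x -= rate * df(x)
    return x

deep = descend(-2.0)    # rolls into the deep valley
stuck = descend(2.0)    # gets trapped in the shallow local minimum
print(deep, stuck, f(deep) < f(stuck))
```

Started on the right-hand slope, the walk settles in the shallow valley and never sees the deeper one beyond the ridge; that trapped end state is exactly what quantum tunneling is hoped to escape.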
This gives you a much better shot at finding the true lowest point -- the optimal solution. We’ve already developed some quantum machine learning algorithms. One produces very compact, efficient recognizers -- very useful when you’re short on power, as on a mobile device. Another can handle highly polluted training data, where a high percentage of the examples are mislabeled, as they often are in the real world. And we’ve learned some useful principles: e.g., you get the best results not with pure quantum computing, but by mixing quantum and classical computing. Can we move these ideas from theory to practice, building real solutions on quantum hardware? Answering this question is what the Quantum Artificial Intelligence Lab is for. We hope it helps researchers construct more efficient and more accurate models for everything from speech recognition, to web search, to protein folding. We actually think quantum machine learning may provide the most creative problem-solving process under the known laws of physics. We’re excited to get started with NASA Ames, D-Wave, the USRA, and scientists from around the world.

There is a fundamental chasm in our understanding of ourselves, the universe, and everything. To solve this, Sir Martin takes us on a mind-boggling journey through multiple universes to post-biological life. On the way we learn of the disturbing possibility that we could be the product of someone else's experiment.

By: Mike Wall Published: 04/18/2013 02:55 PM EDT on SPACE.com NASA's Kepler space telescope has discovered three exoplanets that may be capable of supporting life, and one of them is perhaps the most Earth-like alien world spotted to date,...

String theory is one of the more popular candidates to combine quantum mechanics and relativity into a grand unified theory. But it had remained completely untestable until recent experiments at the Large Hadron Collider.
Global environmental challenges How green is the Amazon? Not as green as it used to be, as shown in an analysis of satellite images made during last year’s record-breaking drought. Because greenness is an indication of health in the Amazon, a decline in this measurement means this vast area is getting less healthy — bad news for biodiversity and some native peoples in the region. What does a drop in the greenness index look like? It looks gold, orange and red in a graphic accompanying an article to be published in the journal Geophysical Research Letters: Gray areas are the norm, based on a decade of satellite observations that cover every acre (actually every square kilometer) on the planet. Dots that are gold, orange or deep red show areas with a decrease in greenness. Scientists call this the Normalized Difference Vegetation Index (NDVI on this chart) or the greenness index.
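The greenness index has a standard per-pixel definition: healthy leaves reflect strongly in the near-infrared and absorb red light, so NDVI = (NIR − Red) / (NIR + Red). A minimal sketch (the reflectance values below are invented illustrations, not taken from the satellite data in the article):

```python
def ndvi(nir, red):
    # Normalized Difference Vegetation Index from near-infrared and red
    # reflectance (standard definition); ranges from -1 to +1.
    return (nir - red) / (nir + red)

# Dense, healthy canopy: strong NIR reflection, strong red absorption.
print(round(ndvi(0.50, 0.08), 2))  # -> 0.72
# Drought-stressed canopy: the contrast, and the index, drop.
print(round(ndvi(0.30, 0.15), 2))  # -> 0.33
```

A decline in this number relative to the decade-long baseline is what shows up as the gold, orange and red dots in the graphic.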
The RMMaterial class is a special material that is only of use if you are creating images via a RenderMan renderer and you want to write your shaders in external shader files or use shaders that are already compiled. The material may consist of a surface shader, a displacement shader and an interior shader. The shader source files (or only the shader names) are passed via RMShader instances as arguments to the constructor. If the RMShader instance points to a file, the material object will take care of the compilation of the file. Otherwise, it is up to you to compile the shader and make sure that the renderer can find it. surface specifies the surface shader to use. It can either be a string containing the shader name or a RMShader instance representing the shader. You can also pass None if no surface shader should be instantiated. displacement specifies the displacement shader to use. It can either be a string containing the shader name or a RMShader instance representing the shader. You can also pass None if no displacement shader should be instantiated. displacementbound is a tuple (coordinate system, distance) that specifies the maximum displacement. The distance is the maximum amount that a surface point is displaced and is given in the specified coordinate system. interior specifies the interior shader to use. It can either be a string containing the shader name or a RMShader instance representing the shader. You can also pass None if no interior shader should be instantiated. color is the color that should be set via RiColor(). opacity is the opacity that should be set via RiOpacity(). The parameters of the shaders are made available as attributes of the material objects. The corresponding slots can be obtained by adding the suffix _slot to the name. Attribute names in the surface shader have priority over the attributes in the displacement shader which in turn has priority over the interior shader. 
This means, if there are identical parameter names in all shaders you will access the parameter of the surface shader. You can also access the attributes of each shader via the surface, displacement and interior attributes which contain the corresponding RMShader instances.

    mat = RMMaterial(surface = RMShader("mysurface.sl"),
                     displacement = RMShader("c:\\shaders\\dented.sl"),
                     color = (1,0.5,0.8))
    ...
    Sphere(pos=(1,2,3), radius=0.5, material=mat)

In this example, the material uses the surface shader mysurface.sl and the displacement shader c:\shaders\dented.sl. The shaders will be compiled automatically because the shader source files are given (instead of just the shader names). The RMShader class encapsulates a single RenderMan shader. shader is either the name of a shader or the shader source file. If a shader file is given then the shader is read to extract the parameters. Each parameter will be made available as slot. transform is a mat4 containing a transformation that should be applied to the shader. This means you can transform the shader relative to the object it is applied to. cpp determines the preprocessor that should be used when extracting parameters. cpperrstream is used to output errors from the preprocessor. includedirs is a list of strings that contain directories where to look for include files. defines is a list of tuples (name, value) that specify the predefined symbols to use (see the function slparams.slparams() (section slparams — Extracting RenderMan Shader parameters) for details). params can be used to declare parameters if the shader source is not available. The value must be a dictionary that contains token/value pairs. The token may contain an inline declaration. Any additional keyword argument is also considered to be a shader parameter. 
However, this parameter cannot have an inline declaration, so it is recommended to declare the parameter afterwards using the declare() method, otherwise no declaration will be written in the RIB file and you have to take care of the declaration yourself. Return the shader name or None if the name is not known. Return the shader type as a string ("surface", "displacement", "light", ...) or None if the type is not known. Declare a shader parameter. name is the parameter name. name may also contain the entire declaration in SL syntax. In this case, all other arguments are ignored, otherwise they provide the missing information. type is the only parameter that is mandatory if name does not contain the entire declaration. It contains the name of the SL parameter type (float, string, color, point, vector, normal, matrix). cls is the storage class (uniform, varying). arraysize specifies the size of the array and default contains the default value. When a parameter is declared it is added to the list of known parameters and a corresponding slot (<name>_slot) is created.

    shader.declare('uniform float Ka=0.5')
    shader.declare('uniform float Ka')
    shader.declare('float Ka')
    shader.declare('Ka', type='float')

A parameter that was specified in the constructor is used as default value when the parameter is declared. In this case, any default value passed to the declare() method is ignored. Return a dictionary containing the parameters for the current time. The key is the parameter name (containing an inline declaration if available) and the value is the current value of the parameter.
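To make the inline-declaration syntax above concrete, here is a simplified sketch of the kind of parsing declare() has to perform on strings like those in the examples. This is an illustration of the SL declaration forms only, not cgkit's actual implementation, and it handles just the simple cases shown above:

```python
import re

def parse_sl_declaration(decl):
    # Split an SL-style inline declaration such as 'uniform float Ka=0.5'
    # into (storage class, type, name, default). Missing pieces come back
    # as None (the type must then be supplied via keyword arguments).
    m = re.match(
        r'(?:(uniform|varying)\s+)?'                                   # storage class
        r'(?:(float|string|color|point|vector|normal|matrix)\s+)?'     # type
        r'(\w+)'                                                       # name
        r'(?:\s*=\s*(.+))?$',                                          # default value
        decl.strip())
    if m is None:
        raise ValueError("not a valid declaration: %r" % decl)
    cls, typ, name, default = m.groups()
    return cls or "uniform", typ, name, default

print(parse_sl_declaration('uniform float Ka=0.5'))
# -> ('uniform', 'float', 'Ka', '0.5')
print(parse_sl_declaration('Ka'))
# -> ('uniform', None, 'Ka', None)
```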
Glaciology of Blue Ice Areas in Antarctica Our main objective for this project is to contribute to the glaciological understanding of blue ice areas in Antarctica. Most of the Antarctica Ice Sheet has a snow/firn cover of ~100m that overlies the ice. In blue ice areas, the ice is on the surface and there is no blanket of snow. This is because wind and sublimation remove more snow than is accumulated by snowfall, causing a negative mass balance. Typically, blue ice areas form where mountains obstruct ice flow and high winds transport a lot of snow. These conditions cause ice to flow upwards, towards the surface (Figure 1). By walking in the direction of the Allan Hills in Figure 1, we are essentially walking back in time over older and older ice. As a result, we can obtain long records by collecting a ‘horizontal ice core’ as opposed to a vertical one (Figure 2). We will investigate the ice dynamics of two blue ice areas – Mt. Moulton and the Allan Hills. Below is a brief list of techniques that we will use. We’ll explain them in more detail in our logbook: - GPS (Global Positioning System): Both field sites have a pre-existing network of velocity markers. We will determine the displacement of these markers to update our velocity measurements. - Radar: Radar profiles will be collected in order to map the bedrock topography and assess how well ice layers are preserved as they pass over bedrock bumps. - ECM (Electrical Conductivity Measurements): ECM profiling is a way to delineate ice layers while in the field. Conductivity measurements vary depending on sea salt concentrations, volcanic tephras, etc. - Shallow Ice Cores: Outside of the blue ice area, shallow firn cores will be collected to determine accumulation rates. - Meteorite Dating: The ice dynamics of blue ice areas make them the primary collection zones for meteorites. These meteorites help us date the surrounding ice (through the radioactive decay of cosmogenic nuclides). 
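The meteorite dating mentioned in the last item rests on the standard radioactive-decay law N = N0·e^(−λt). A toy sketch (the nuclide choice and the measured fraction below are illustrative assumptions, not project data):

```python
import math

def exposure_age(fraction_remaining, half_life_years):
    # Invert N = N0 * exp(-lambda * t) for t, with lambda = ln2 / t_half.
    decay_const = math.log(2) / half_life_years
    return math.log(1.0 / fraction_remaining) / decay_const

# E.g. chlorine-36 has a half-life of about 301,000 years, so a meteorite
# retaining half its original activity has sat there for ~one half-life.
print(round(exposure_age(0.5, 301_000)))  # -> 301000
```

Because the meteorites accumulate on the blue ice surface, an age for a meteorite puts a constraint on the age of the ice around it.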
Visit our Expedition Page to read about our field work.
This topic was created by BristolBachelor. <b>SPACE</b> the final frontier This is our place to talk about all things space. Feel free to join-in, regardless of whether you are just interested or are a real-life rocket scientist! OK, here's a question We want to build a big space habitat, large enough for lots of people to live on - and do useful work (as well as zero-G disco of course). Until then, we're just visitors to space. And I assume we'll need gravity - otherwise it's not really a permanent presence. So which is easiest? To go off and capture an asteroid for materials and manufacture it in space? Or to build it in bits on earth, and fly it up there and assemble? I guess I'm asking is the 2001 big spinning space station possible, or even worthwhile, when you might be able to hollow out, seal and spin an asteroid much more easily? Neither is going to be easy, or quick of course. I guess for either to be practical, we're going to need to assume better technology for getting to orbit. Or do we have a chicken and egg problem that's going to keep us out of space for centuries? We haven't got the raw materials, or the means to process them up there. And it's really hard to launch them. You can't do manufacturing in orbit without practise, but you can't support a large permanent presence without a facility big enough to be self-sufficient, which you can't build without the raw materials, that you haven't got the infrastructure to mine, or the rockets to lift... There's raw materials and energy for the taking, just as soon as you've got up there with sufficient kit to start you off. Then you can achieve close to self-sufficiency, and then start export to earth. Permanence in space has always seemed just over the horizon, if only we could make a few big pushes, and make a sustained investment. But is it actually such a mess of huge, interlinked problems, that it will take centuries? Leaving space to the scientists until then... 
Re: OK, here's a question I'd go for the hollowed-out asteroid approach. Much less crap to lift into orbit - structural materials are right where you want them. You could send up teams of miners to do the main prep - maybe even do it on a commercial basis (got an asteroid here guv - already hollowed out for you). It would need some international agreements, of course: who owns the asteroids anyway? Also, there's the problem of how you control the placement of them (assuming there is a desire to bring asteroids closer in to Earth). Re: Re: OK, here's a question I think an asteroid makes most sense, but the question is, how do you do the first one? What are the miners going to live in while they build it? What are they going to build it with, and how do you get it there? Once you have a factory in space, you're laughing, and everything else is gravy. But getting that first one built might be incredibly hard, hence my question about it taking centuries. The ISS has cost $100bn odd, and only houses 6 people (plus the odd visitor). As you say, there's an arms control aspect to all this. What's to stop a government with a bloody great rock on a rocket, from dropping it on someone they don't like? Although that's true of a big enough space station too. But I think the political problems are mild compared to the technical and financial ones. Re^3: OK, here's a question I guess the first one has to be kicked off in an automated fashion. Assuming you've licked the problem of mining in a vacuum and have a machine to do most of the work, you start out by hollowing out living quarters. Nothing fancy - just a space which is lined appropriately for insulation and air-tightness. Then hollow out equipment sheds/docks to hold the cool gear that the miners will use to do the more refined job of structuring the thing in accordance with its intended use. Then you move the miners in. 
Before all that, though you have to design a power plant which has sufficient megawattage to satisfy all needs (including particularly the excavation). That has to be delivered and installed before anything else happens. Probably before this you have to select the asteroid and move it to where you want it - strapping something to it, or maybe something more clever. Gonna need some geologists (or is that asterologists?) to survey and choose the appropriate asteroid. Lots to think about, so little time. Maybe the answer is simple: get China (eg) to announce that it has plans to mine/convert asteroids, and see how quickly the rest respond. I think that the theories about putting an asteroid in orbit just don't allow for their mass and inertia. The amount of fuel you would need to alter its course to get closer to Earth, and then alter it again so that it is captured by Earth's gravity must be staggering. I think you'd be better off looking at that piece of rock that's already in orbit. But it doesn't stop there. Just think what you want to build your disco from, then work back from there. You'll start with a mini digger (always fancied one, personally), but then you'll want to sort the rocks, crush them, melt them, mix them, cast them, shape them..... eventually you'll end up with a piece of metal of 5kg that needed 1,000,000kg of stuff on the moon to make :( I think that for the next 100 years, it's going to be pre-fabricate it on Earth, ship it to orbit and bolt it together there. As for the cost of the ISS, I think a lot of the costs that make up the numbers were the very expensive shuttle launches, and the amount of work performed to work out "how to" rather than actually doing. I think that the repeat orders of living spaces and common equipments will be a lot lower. The technology I'm waiting for is the one that overcomes inertia. Come on Higgs, where is your Boson, and how does it work? 
Re: Moving asteroids I can't say I have a handle on what it would take to shift an asteroid. I guess it boils down to 'how badly do you want one?' I agree that we'd be better off looking at Near-Earth Objects, but there might be claim issues there (a bit like if someone were to start mining in Antarctica). Mining the metals to be used might be a bit over the top in the short term, but given that the asteroid itself provides much of the structure, maybe the needs aren't too onerous. I'd be interested in what actual mining engineers think of the problems. I concur on the technology to overcome inertia. It's in my list of 'Three things that would revolutionise space exploration' - the other two being Instantaneous Communication and Cold Fusion. All three preferably, but I think any one would change things dramatically. Re: OK, here's a question (got an asteroid here guv - already hollowed out for you). but we did need to lift a few thousand tons of polyfilla to fill all the cracks....... we did fill all the cracks didn't we?? WTF? why are we doing a civil engineering project without having a civil engineer on board... er Huston, you have a problem Re: OK, here's a question You'll need water and air to survive.... guess where it started! Antihydrogen fusion!!! Relativistic Perturbation Mantle.... antihydrogen fusion in a self contained sphere produces Sprites above thunderstorms. Anti-Nuclear-fusion "Mantle" produces high energy photons for the aether/dark matter for the universe. This is the first stage in water and air production for the universe. Energy-helium spun off to the moon- Carbon sealing in the fusion- Oxygen in massive quantities that stores as liquid slowing the process back to Carbon sealing this fusion generator in,then it starts to burn/convert the first Carbon carbon ring to high energy photons, this is a new discovery that allows for Compton Scattering to take place. Mantle produces the background photon for our biosphere as well as rejuvenate it. 
Re: OK, here's a question I have made a discovery in Lightning physics that revealed the very centerpiece of our worlds. Since the beginning of time lightning effect on hydrogen has been producing anti-hydrogen. One form of this is a Gamma ray producer 12 ft. ringed sphere, Relativistic Perturbation (Mantle) For 4.5 billion years... lightning in earths atmosphere has been producing this self contained sphere utilizing rings of Carbon, Liquid oxygen surrounding a sphere of fused negative energy (anti-hydrogen fusion). Mantle's release comes in the form of Sprites above storms. Mantle produced the atmosphere Ionosphere and anti-protons produces dark matter around earths Ionosphere. "Mantle's are the missing link in the Earths physiology and the tie between Man Metaphysics and the Dimensional world." Ronald Patrick Marriott. Mantle for short produces Gamma rays found by NASA's FERMI satellite during Sprite production. PAMELA satellite found the antiprotons around the Van Allen belt produced by Mantle's discharge energy that follows the magnetic field lines from Earth. Mantle's discovery will lend efforts to light speed "Warp" travel, wormhole production and dimension technology. The release of charged liquid Oxygen is converting to Air/water for the atmosphere and electrons for the Ionosphere. It produces force fields and an endless supply of highly charged Liquid Oxygen to repair the earth with. This will supply us with clean energy any place on earth or space. I am developing team oriented corporate structures to grab the steep developmental curves of high energy physics for my discoveries. These technologies are for advancement of the human race in the areas of Food production, Computer storage/dimensional, Transportation, Infrastructure, Medical, Space Travel and Dimension Building using high energy particle physics from my discoveries with Mantle. It's not that hard, depends how fast you want it done. 
Moving an asteroid from the belt to Earth orbit is not actually all that hard or expensive... Although the energy term is far away from anything we can get into space, much less out beyond Mars, we don't have to lift the fuel, just the engine and we've done things at that scale many times already. Let's take a cubic kilometre of asteroid, that's about 10^10 tonnes, or 10^13 Kg and say we need to change its velocity by 2 kilometres per second, our old friend e = 1/2 MV^2 tells us the energy is 2,000,000,000,000,000,000 joules, enough to run your PC for a 1/4 of a million years That's a big electricity bill, but in the belt, the Sun gives you about 30 W per square metre. Solar panels don't need to mass much and we have the capacity to deliver a couple of square kilometres to the belt for far less than a manned mission to Mars; my call is that it would be about the same price as the lander we just sent there, maybe twice as much, maybe half. Two square kilometres would give us about 60 megawatts which we could then direct into a mass driver engine. Basically smash the rock to dust, charge it and spit it out the back, having been accelerated by an electric field. The thrust-to-weight ratio would make Jeremy Clarkson vomit with contempt, not far off blowing at your monitor. But... There's no friction, and if you blew at your monitor 24*7*52 you'd have a very fast screen indeed. At 100% efficiency, it would be in Earth orbit 2-3 years after the probes landed. Of course we won't get 100% efficiency, or anything close. But solar is a 24*7 resource, so if we're prepared to wait 10 years or send a couple of more probes it is actually something we could build today. 
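The post's back-of-envelope numbers are easy to check with the same kinetic-energy formula (all inputs are the poster's assumptions, not measurements). Note the energy actually comes out to 2 × 10^19 J, ten times the figure quoted, so at a steady 60 MW the delivery time is far longer than a few years even at perfect efficiency:

```python
mass_kg = 1e13         # ~1 cubic km of rock, per the post
delta_v_m_s = 2_000.0  # the assumed 2 km/s velocity change

# Standard kinetic energy: e = 1/2 * M * V^2
energy_j = 0.5 * mass_kg * delta_v_m_s**2
print(f"{energy_j:.1e} J")  # -> 2.0e+19 J

# Time to deliver that energy with the post's 60 MW solar array, assuming
# (unrealistically) 100% conversion into kinetic energy -- this lands in
# the thousands of years, not the 2-3 years the post suggests:
seconds = energy_j / 60e6
print(f"{seconds / 3.156e7:.0f} years")
```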
We already have rockets that can get there, space borne solar cells are a technology older than most people reading this, smashing rocks is well understood and if it wasn't for the fact that space flight is purely a way of propping up the profits of aerospace firms, it would cost a few billion and take 15 years from start to finish. Re: It's not that hard, depends how fast you want it done. How about by 2040? It would be a neat solution to the following: Oh, and I like the idea of smashing the rock to dust and using that to drive it. Didn't they recently talk about making structures on the moon only using the resources found on the moon? If so, the more logical way would be to build the station from that as it's a lot closer than any asteroid and less problems lifting the parts off the surface than from the earth. How about the moon No need to transport it, hollow it out and use the material to build the disco like a big mooncrete Bond lair megaplex. Simples Re: How about the moon The moon is said to be hollow already. If you put some sort of equipment up there that could repel it from a planet's orbit, perhaps by absorbing the flux energy of the planet itself you'd have enough room for entire cities, and could use it to travel the galaxy. Who's to say it hasn't already been done? The main problem I think a lot of people would have is the whole "steering a ruddy huge rock into orbit". Who has control? How is it being aimed? And, of course, the lawyers will ask: Can you be 100% sure that no software failure, no hardware failure, or no other, unknown failure will cause your asteroid to lose control and smash into the Earth, killing (potentially) billions? The answer to which has to be "No. Nothing like that can be guaranteed 100%." Which is the reason we're never going to get decent sized nuclear reactors into orbit. Because there's a CHANCE that the rocket could fail, and the resultant crash could spread nuclear fuel over the face of the planet. 
Fear, more than anything, is working to keep humans Earthbound. I'm trying to find a site that allows me to "see" what satellites or other orbiting objects were visible from a given point on the ground at a given date. I've checked heavens-above but, although it's a great site, it doesn't provide this capability (or at least not during daylight hours)... Google only took me so far... The reason for the question (as if it matters) is that I saw today a really faint dot in the sky, around 11:30 am, I watched it by sheer chance in the first place, as it was almost invisible. It wasn't Sirius, as it seemed to move slowly and in a seemingly straight line, but seemed too distant to be a plane. I thought it could be the ISS (it wasn't) or maybe an Iridium satellite, but can't find a way to be sure. Little green men from Outer Space? I hope not... Re: Satellite tracking I've used this one with partial success: "http://www.satview.org/". Lack of success was mainly due to the usual British weather - if there's anything to see there are usually clouds in the way.
There is a lesson that statisticians, especially of the Bayesian persuasion, have been hammering into our skulls for ages: do not subtract background. Nevertheless, old habits die hard, and old codes die harder. Such is the case with X-ray aperture photometry. When C counts are observed in a region of the image that overlaps a putative source, and B counts in an adjacent, non-overlapping region that is mostly devoid of the source, the question that is asked is, what is the intensity of a source that might exist in the source region, given that there is also background. Let us say that the source has intensity s, and the background has intensity b in the first region. Further let a fraction f of the source overlap that region, and a fraction g overlap the adjacent, “background” region. Then, if the area of the background region is r times larger, we can solve for s and b and even determine the errors: from C = fs + b and B = gs + rb,

s = (rC - B)/(rf - g) and b = (fB - gC)/(rf - g),

with variances sigma_s^2 = (r^2 C + B)/(rf - g)^2 and sigma_b^2 = (f^2 B + g^2 C)/(rf - g)^2.

Note that the regions do not have to be circular, nor does the source have to be centered in it. As long as the PSF fractions f and g can be calculated, these formulae can be applied. In practice, f is large, typically around 0.9, and the background region is chosen as an annulus centered on the source region, with g~0. It always comes as a shock to statisticians, but this is not ancient history. We still determine maximum likelihood estimates of source intensities by subtracting out an estimated background and propagating error by the method of moments. To be sure, astronomers are well aware that these formulae are valid only in the high counts regime ( s,C,B>>1, b>0 ) and when the source is well defined ( f~1, g~0 ), though of course it doesn’t stop them from pushing the envelope. This, in fact, is the basis of many standard X-ray source detection algorithms (e.g., celldetect). Furthermore, it might come as a surprise to many astronomers, but this is also the rationale behind the widely-used wavelet-based source detection algorithm, wavdetect. 
The Mexican Hat wavelet used with it has a central positive bump, surrounded by a negative annular moat, which is a dead ringer for the source and background regions used here. The difference is that the source intensity is not deduced from the wavelet correlations and the signal-to-noise ratio ( s/sigma_s ) is not used to determine source significance, but rather extensive simulations are used to calibrate it.
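The classical estimate described here is compact enough to sketch in code. Variable names follow the text (C and B are the observed counts, f and g the PSF fractions, r the area ratio), assuming the two-region model C = f·s + b and B = g·s + r·b:

```python
from math import sqrt

def aperture_photometry(C, B, f=0.9, g=0.0, r=1.0):
    # Solve the two linear equations C = f*s + b and B = g*s + r*b
    # for the source intensity s and background intensity b, and
    # propagate Poisson errors (variance ~ counts) by the method of
    # moments -- the classical estimate, valid only for large counts.
    d = r * f - g
    s = (r * C - B) / d
    b = (f * B - g * C) / d
    sigma_s = sqrt(r**2 * C + B) / abs(d)
    sigma_b = sqrt(f**2 * B + g**2 * C) / abs(d)
    return s, b, sigma_s, sigma_b

# With a perfect aperture (f=1, g=0) and equal areas (r=1), this reduces
# to plain background subtraction: s = C - B.
s, b, sigma_s, sigma_b = aperture_photometry(100, 20, f=1.0, g=0.0, r=1.0)
print(s, b)  # -> 80.0 20.0
```

The Gaussian error propagation is exactly where the high-counts caveat ( s, C, B >> 1 ) enters: for low counts the Poisson variances are a poor summary, and the estimate of s can even come out negative.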
Filed under: Uncategorized | Tags: bluefin tuna, cesium, cesium tuna, Cesium-134, Cesium-137, Fukushima, meltdown, radioactivity Five months after the Fukushima disaster, Fisher of Stony Brook University in New York and a team decided to test Pacific bluefin that were caught off the coast of San Diego. To their surprise, tissue samples from all 15 tuna captured contained levels of two radioactive substances—cesium-134 and cesium-137—that were higher than in previous catches. The results “are unequivocal. Fukushima was the source,” said Ken Buesseler of the Woods Hole Oceanographic Institution, who had no role in the research. Bluefin tuna absorbed radioactive cesium from swimming in contaminated waters and feeding on contaminated prey such as krill and squid, the scientists said. As the predators made the journey east, they shed some of the radiation through metabolism and as they grew larger. Even so, they weren’t able to completely flush out all the contamination from their system. “That’s a big ocean. To swim across it and still retain these radionuclides is pretty amazing,” Fisher said. All well below “safe levels,” of course, according to govt. agencies. That’s fine, I won’t be eating any. More for you!
I’m sure this has been written about ad nauseam, but I spent some time yesterday explaining it to someone who didn’t understand, and now I feel like writing it up a bit more formally. What is attr_accessible? In Ruby on Rails, attr_accessible allows you to specify which attributes of a model can be altered via mass-assignment (most notably by new(attrs)). Any attribute names you pass as parameters will be alterable via mass-assignment, and all others won’t be. How does mass-assignment work normally? By default, mass-assignment methods accept a hash of attribute values, each keyed by their associated attribute’s name. If I ran User.new with a hash containing a name, an email, and a list of course IDs, a new instance of the User model would be created with those attributes set. In addition to creating a user with the appropriate attributes, this will update the specified courses to be owned by this user (assuming a user has_many courses in our app). How can this be abused? Very easily. What if someone slipped an is_teacher value into that hash? This Draco Malfoy fellow may not actually be a teacher, but the system is none the wiser. Of course, the developer would never write the hash out by hand; in a real Rails app, the code is going to pass params[:user] straight into the mass-assignment. The elements in params[:user] are taken from the POST/GET/PUT data passed along when the action was run. They’re thrown blindly into the mass-assignment, and any attributes whose names match the keys will be set. “So what’s the big deal? Just don’t include an ‘is_teacher’ field in the web form, and the param won’t be there.” This is true for innocent users, but the malicious ones (and Draco Malfoy is definitely a malicious one) have an easy way around this. A web form is just a way to make it easy for users to pass data to your app. There are other ways. 
For example, if I wanted to register for the app via the command line instead of a browser, I could do it with curl. This sends a request to http://myapp.com/users/ and passes data in the exact format it would’ve appeared if I’d filled out a web form that asked for a name and email address. However, I could also add an is_teacher parameter to the same request. Because is_teacher is an attribute name in my User model, and mass-assignment methods blindly accept whatever attributes they see, Draco Malfoy has just made himself a teacher. Even worse, I could use this to grab courses that may not be mine, by passing course IDs in the same way. Draco Malfoy has now taken courses 1 and 2 away from whoever they originally belonged to (Dumbledore, if my memory serves me) and given them to himself. How can we prevent this? There are a few obvious but clumsy ways. We could skip mass assignment, setting each individual attribute in our controller, but this will introduce a lot of duplicate and unnecessary code. We could explicitly pull unwanted parameters out of the hash before the mass assignment. This also introduces a lot of duplicate code. If we ever add new columns that we want to restrict, or decide we want to unrestrict a column, we’re going to have to go through the update actions, and any others that perform mass assignment. We could factor these out into some sort of sanitize_params method on each model. This is a better solution, but you still have to call it in every action that alters the data. It’s definitely not as good as the built-in one: attr_accessible. We can add attr_accessible, listing the safe attributes, to the top of the model. Then, if we ever add more restricted attributes (more horcruxes, for example), no one has to think about updating the actions that perform mass assignment. What does this not do? I saw one person say “Why would I put anything in attr_accessible? Why would I want any of my attributes to be hackable?” Make no mistake: attr_accessible is no substitution for proper access control. If all users have write access to all other users, attr_accessible will let one user change another’s name attribute if it’s specified. 
Regular authentication and access control must be used to prevent users from writing to model instances that they shouldn’t be able to write to. Once this is done correctly, attr_accessible can be used to prevent a malicious user from altering data of her own that she shouldn’t be able to alter. To be clearer: it could be considered “hacking” if a user were able to change everyone’s name to “Voldemort”. attr_accessible can’t prevent this; you need to do proper authentication with something like Authlogic. Once you’ve set your controllers up to prevent a user from even attempting to change another user’s data, you’ve prevented this “hack”. If the user tries to change his own name to “Voldemort”, that’s totally fine. We don’t care if he does it via the web app, curl, or anything else; users are allowed to change their own name. Including name in attr_accessible isn’t making it “hackable”, because it’s an attribute that users should be able to change. If the user tries to change his is_teacher attribute from false to true, that is considered “hacking”. We don’t want to let users do this, so we exclude the attribute from attr_accessible to prevent it.

Are attributes excluded from attr_accessible immutable? No. They can still be altered, just not via mass-assignment. If I exclude is_teacher from attr_accessible, I can still assign it directly through its own writer method, and that will work just fine. The difference is, it forces you to set the attribute explicitly, so there’s no potential for accidentally setting an attribute unexpectedly passed to mass-assignment. This way, I can allow my non-dangerous attributes to be set via mass-assignment with attr_accessible, then explicitly provide or deny control over dangerous attributes in other actions.
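The whitelist behavior can be sketched in plain Ruby. This mimics what attr_accessible does rather than using the real Rails implementation, and the class and attribute names are hypothetical:

```ruby
class User
  attr_accessor :name, :email, :is_teacher

  # Whitelist playing the role of `attr_accessible :name, :email`;
  # is_teacher is deliberately excluded.
  ACCESSIBLE = [:name, :email].freeze

  def initialize(attrs = {})
    # Keys outside the whitelist are silently dropped during mass-assignment.
    attrs.each do |key, value|
      send("#{key}=", value) if ACCESSIBLE.include?(key.to_sym)
    end
  end
end

blocked = User.new(name: "Draco Malfoy", is_teacher: true)
# blocked.is_teacher is nil: the mass-assigned key was filtered out.

allowed = User.new(name: "Dumbledore")
allowed.is_teacher = true  # explicit assignment still works just fine
```

The excluded attribute isn’t immutable; it simply can’t be reached through the mass-assignment path.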
On Determining Changes in Intertidal Marine Species Ranges

Helmuth, B., Yamane, L., Lalwani, S., Matzelle, A., Tockstein, A. and Gao, N. 2011. Hidden signals of climate change in intertidal ecosystems: What (not) to expect when you are expecting. Journal of Experimental Marine Biology and Ecology 400: 191-199.

Using a simple heat budget model that was ground-truthed with approximately five years of in situ temperature data obtained by biomimetic sensors, Helmuth et al. "explored the sensitivity of aerial (low tide) mussel body temperature at three tidal elevations to changes in air temperature, solar radiation, wind speed, wave height, and the timing of low tide at a site in central California USA (Bodega Bay)." The six U.S. scientists say their results suggest that "while increases in air temperature and solar radiation can significantly alter the risk of exposure to stressful conditions, especially at upper intertidal elevations, patterns of risk can be substantially reduced by convective cooling such that even moderate increases in mean wind speed (~1 m/sec) can theoretically counteract the effects of substantial (2.5°C) increases in air temperature." They also indicate that "shifts in the timing of low tide (+1 hour), such as occur [when] moving to different locations along the coast of California, can have very large impacts on sensitivity to increases in air temperature," noting that "depending on the timing of low tide, at some sites increases in air temperature will primarily affect animals in the upper intertidal zone, while at other sites animals will be affected across all tidal elevations." In addition, they report that "body temperatures are not always elevated even when low tide air temperatures are extreme," due to "the combined effects of convective cooling and wave splash." Helmuth et al.
say their findings suggest that the timing and magnitude of organismal warming "will be highly variable at coastal sites, and can be driven to a large extent by local oceanographic and meteorological processes." Thus, they "strongly caution against the use of single environmental metrics such as air temperature" for "making projections of the impacts of climate change."
The first of three articles on the History of Trigonometry. This takes us from the Egyptians to early work on trigonometry in China. The second of three articles on the History of Trigonometry. This article tells you all about some early ways of measuring as well as methods of measuring tall objects we can still use today. You can even have a go at some yourself! Mathematics has always been a powerful tool for studying, measuring and calculating the movements of the planets, and this article gives several examples. If you would like a new CD you would probably go into a shop and buy one using coins or notes. (You might need to do a bit of saving first!) However, this way of paying for the things you want did. . . . Noticing the regular movement of the Sun and the stars has led to a desire to measure time. This article for teachers and learners looks at the history of man's need to measure things. Calendars were one of the earliest calculating devices developed by civilizations. Find out about the Mayan calendar in this article.
wxPython is a very powerful and easy-to-use graphical user interface (GUI) toolkit. Because it is as cross-platform as Python itself, you can develop an application with it and not have to worry about porting issues. Graphical user interfaces rely heavily on object-oriented programming, and wxPython is no different: you first instantiate an object and then work with that object's attributes and methods. In this guide, we will look at the rudiments of a very simple program that prints "Hello, World!" to a window on your desktop.
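A minimal version of such a program might look like the following sketch. It uses the standard wx API (wx.App, wx.Frame, wx.Panel, wx.StaticText); details such as the window title are arbitrary choices, not requirements of the toolkit:

```python
# Minimal wxPython "Hello, World!" sketch (assumes wxPython is installed,
# providing the `wx` package).
import wx

class HelloFrame(wx.Frame):
    def __init__(self):
        # A Frame is a top-level window; parent=None makes it independent.
        super().__init__(parent=None, title="Hello")
        panel = wx.Panel(self)                       # container for widgets
        wx.StaticText(panel, label="Hello, World!")  # the text itself

if __name__ == "__main__":
    app = wx.App()     # every wxPython program needs exactly one App object
    frame = HelloFrame()
    frame.Show()       # windows are created hidden; Show() displays them
    app.MainLoop()     # hand control to the GUI event loop
```

Note the object-oriented pattern described above: we instantiate the App and the Frame, then call their methods (Show, MainLoop) to drive the interface.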
SOHO's solar wind of change
21 May 2003

We have known for 40 years that space weather affects the Earth, which is buffeted by a 'wind' from the Sun, but only now are we learning more about its precise origins. Solving the mystery of the solar wind has been a prime task for ESA's SOHO spacecraft. Its latest findings, announced on 20 May 2003, may overturn previous ideas about the origin of the 'fast' solar wind, which occurs in most of the space around the Sun. Earlier results from SOHO established that the gas of the fast wind leaks through magnetic barriers near the Sun's visible surface. Straight, spoke-like features called plumes have also been seen rising from the solar atmosphere at the polar regions, where much of the fast wind comes from. According to previous ideas, the gas of the fast wind streams out in the gaps between the plumes. "Not so," says Alan Gabriel of the Institut d'Astrophysique Spatiale near Paris, France. Careful observations with SOHO now suggest that most of the fast wind leaves the Sun via the plumes themselves, which are denser than their surroundings. Gabriel and his team tracked gas rising at about 60 kilometres per second to a height of 250 000 kilometres above the Sun's visible surface. "If this controversial result is right, it will clear up a big misunderstanding," says Bernhard Fleck, ESA's Project Scientist for SOHO. "We need to know how the fast wind is subsequently accelerated to 750 kilometres per second. To find out, we'd better be looking in the right places." SOHO has also investigated the origin of a slower wind, half the speed of the fast wind, which comes from the Sun's equatorial regions. The gas of the 'slow' wind leaks from triangular features called 'helmets', which can plainly be seen protruding into the Sun's atmosphere during a solar eclipse. Blasts of gas called 'coronal mass ejections' also contribute to the solar wind in the equatorial zone of the Sun.
The ESA/NASA Ulysses spacecraft has twice passed over the poles of the Sun and signalled the relative importance of these fast and slow winds. Its measurements show that the fast wind predominates in the heliosphere, which is a huge bubble blown into interstellar space by the Sun's outpourings, extending far beyond the outermost planets. In interplanetary space, the fast wind often collides with the slow wind. Like the mass ejections, the collisions create shock waves that agitate the Earth's space environment. The four satellites of ESA's Cluster mission are now studying the interaction between the solar wind and our planet's defences. The Earth's magnetic field creates a bubble within the heliosphere, but it does not give us perfect protection from the Sun's storms. Ulysses, SOHO, and Cluster together provide an extraordinary overview of solar behaviour and its effects, both near and far in the Solar System.

Note to editors: The new solar wind results, obtained with the SUMER instrument on SOHO, are published by A.H. Gabriel, F. Bely-Dubau and P. Lemaire in the Astrophysical Journal, 20 May 2003. SOHO is a project of international cooperation between ESA and NASA.
Image courtesy of Pacific Northwest National Laboratory

Computational modeling of uranium oxide ions with aluminum oxide provides insights that are contributing to the development of a cheap and effective way to clean up nuclear waste sites. A simple computational methodology was developed to determine how uranium adheres to aluminum oxide, a mineral found in complex soil environments. This research provides insight into how radionuclides like uranium interact with soil minerals, which may lead to efficient, more affordable solutions for cleaning contaminated ground.

Determining how radioactive material sticks to soil and affects its movement into nearby water sources is a major challenge for cleaning up nuclear waste sites. This waste, which may include uranium, can be diffuse as well as difficult to isolate and remove. To reduce the cost and complexity of complete removal, innovative and inexpensive methods are needed to expedite clean-up efforts around the world, especially in sites with vast areas of contamination.

Scientists at Pacific Northwest National Laboratory discovered that the surface of a common soil mineral, aluminum oxide, adheres to uranium, making it less mobile. The researchers assembled a detailed picture of how uranium adheres to the mineral surface using a computational model. By modeling the behavior of uranium in a complex subsurface environment, they were able to show that uranium sticks to the surface of aluminum oxide without changing it in any way, and that a more acidic environment improves how well the two stick together. The cluster-model approach used by the researchers allows a straightforward comparison to be made between different sorption mechanisms, and its predictions can be directly related to X-ray absorption measurements. This approach can be used to model surface reactivity and can be further utilized in other complex model systems.

Wibe A. de Jong
Basic Research: Office of Science Basic Energy Sciences program and Biological and Environmental Research program (EMSL)
Glezakou, V. and W. A. de Jong. "Cluster-models for Uranyl(VI) Adsorption on α-Alumina." J. Phys. Chem. A 115(7): 1257-1263. [DOI: 10.1021/jp1092509]
BES, CSGB, BER DOE Laboratory, SC User Facilities, BER User Facilities, EMSL
The Red River giant softshell turtle (Rafetus swinhoei). © Asian Turtle Program

The Red River giant softshell turtle (Rafetus swinhoei) may be the rarest and most threatened of all turtles, as well as one of the largest: only four individuals are known to remain alive in the world. Two long-term captive animals in China were brought together three years ago and have produced eggs, but these eggs failed to develop. One lone animal confined in Hoan Kiem Lake in downtown Hanoi is revered as a symbol of Viet Nam's independence. And the last animal remaining in the wild became the reluctant subject of a hostage drama when his home reservoir burst its dam in November 2008 and he was caught downriver by a fisherman; the turtle was handed over to conservationists only after protracted negotiations, and was then released back into its native wetland.
Researchers Discover Doped Aluminum May Make Hydrogen Fuel Cells More Practical
November 1, 2011

(Image caption: Aluminum doped with titanium was able to catalyze hydrogen activation.)

We already know that hydrogen is a green fuel. The catch is that hydrogen is dangerous to store, both at fueling stations and aboard the vehicle, and the catalyst material used in a hydrogen fuel cell is often platinum or another rare and very expensive metal. A team of researchers from the University of Texas at Dallas and Washington State University think that they may have found a much cheaper catalyst material to advance the adoption of fuel cell technology.

The new catalyst material that the researchers are investigating is a doped aluminum alloy surface: aluminum doped sparingly with titanium. Using controlled temperatures and pressures, the team studied the titanium-doped aluminum surface, searching for signs of catalytic reactions taking place near the titanium atoms. To detect the catalytic reaction, the team used the spectroscopic signature of carbon monoxide, added to the test specifically to help locate signs of a reaction.

(Image caption: Mercedes-Benz B-Class hydrogen fuel cell vehicle.)

"We've combined a novel infrared reflection absorption-based surface analysis method and first principles-based predictive modeling of catalytic efficiencies and spectral response, in which a carbon monoxide molecule is used as a probe to identify hydrogen activation on single-crystal aluminum surfaces containing catalytic dopants," says lead researcher Yves J. Chabal of the University of Texas at Dallas.

The titanium added to the aluminum advances the process by helping hydrogen bind to aluminum to form aluminum hydride. When used as a fuel storage device, aluminum hydride could be made to release the hydrogen it holds by raising the temperature of the storage medium. Other researchers have been studying other materials for storing hydrogen.