The embryonic revolution in materials science now taking place—specifically in “smart materials” and superlight materials—offers strong evidence that there are no limits to growth. So-called smart materials, as defined on Wikipedia, “are materials that have one or more properties that can be significantly changed in a controlled fashion by external stimuli.” They can produce energy by exploiting differences in temperature (thermoelectric materials) or by being stressed (piezoelectric materials). Other smart materials save energy in the manufacturing process by changing shape or repairing themselves in response to external stimuli. These materials have all passed the “proof of concept” phase (i.e., they are scientifically sound), and many are in the prototype phase. Some are already commercialized and penetrating the market.

For example, the Israeli company Innowattech has placed piezoelectric materials under a one-kilometer stretch of highway to “harvest” the wasted stress energy of vehicles passing over it and convert that energy to electricity. This is called “parasitic energy harvesting.” The company reckons that Israel has stretches of road where the traffic could efficiently produce 250 megawatts. If this is verified, consider the tremendous electricity potential of the New Jersey Turnpike or the freeways of Los Angeles and elsewhere. Consider the potential of railway and subway tracks. We are talking about tens of thousands of potential megawatts produced without any fossil fuel.

Thermoelectric materials can transform wasted heat into electricity. Some estimate that the wasted heat from industrial processes alone could provide up to 20% of America’s electricity needs, which would make cogeneration even more efficient. Cogeneration is already making headway around the industrialized world and still has tremendous unexploited potential; again, this would yield tremendous savings in fossil fuels.

Smart glass is already commercialized and can save significant energy in heating, air conditioning, and lighting—up to 50% savings in energy in retrofitted buildings (such as the former Sears Tower in Chicago). New buildings designed to take maximum advantage of this and other technologies could save even more. Since buildings consume about 40% of America’s electricity production, this technology alone could, over time, reduce electricity consumption by 20%. Even greater savings in electricity could be realized by replacing incandescent and fluorescent lighting with LEDs, which use one-tenth the electricity of incandescents and half that of fluorescents. The United States could flatline its electricity consumption, gradually replacing fossil-fuel electricity production with alternatives. Conservation of energy and parasitic energy harvesting, as well as urban agriculture, would greatly cut the planet’s energy consumption and air and water pollution.

Waste-to-energy technologies could also begin to replace fossil fuels. Garbage, sewage, and all forms of organic trash, agricultural waste, and food-processing waste are essentially hydrocarbon resources that can be transformed into ethanol, methanol, biobutanol, or biodiesel. These can be used for transportation, electricity generation, or feedstock for plastics and other materials. Waste-to-energy essentially recycles carbon dioxide already in the environment rather than introducing new CO2. These technologies also prevent methane from entering the atmosphere.
Methane, a product of rotting organic waste, contributes just 28% of the amount that CO2 contributes to global warming but is 25 times more powerful as a greenhouse gas. Numerous prototypes of a variety of waste-to-energy technologies are already in place. When their declining costs meet the rising costs of fossil fuels, they will become commercialized and, if history is any judge, replace fossil fuels very quickly—just as coal replaced wood in a matter of decades and petroleum replaced whale oil in a matter of years.

But it is superlight materials that have the greatest potential to transform civilization and ultimately help introduce a “no limits to growth” era. I refer, in particular, to carbon nanotubes—alternatively referred to as buckyballs or buckypaper (in honor of Buckminster Fuller). Carbon nanotubes are between 0.01% and 0.002% the width of a human hair, more flexible than rubber, and 100 to 500 times stronger than steel per unit of weight. Imagine the energy savings if planes, cars, trucks, trains, elevators—everything that needs energy to move—were made of this material and weighed 1% of what they weigh now. Present costs and production methods make this impractical, but that infinite resource—the human mind—has confronted and solved this problem before.

Take the example of aluminum. One hundred fifty years ago, aluminum was more expensive than gold or platinum. When Napoleon III held a banquet of state, he provided his most-honored guests with aluminum plates; less-distinguished guests had to make do with gold plates. When the Washington Monument was completed in 1884, it was fitted with an aluminum cap—the most expensive metal in the world at the time—as a sign of respect to George Washington. The cap weighed 2.85 kg, and aluminum at the time cost $1 per gram (or $1,000 per kg). A typical day laborer working on the monument was paid $1 per day for 10–12 hours of work. In other words, today’s common soft-drink can, which weighs 14 grams, could have bought about 15 ten-hour days of labor in 1884. Today’s U.S. minimum wage is $7.50 an hour. In other words, using labor as the measure of value, a soft-drink can would cost $1,125 today (or $80,000 a kilogram).

Then, in 1886, a process discovered independently by two chemists—American Charles Martin Hall and Frenchman Paul Héroult—turned aluminum into one of the cheapest commodities on earth. Aluminum now costs $3 per kilogram, or $3,000 per metric ton. The soft-drink can that would have cost $1,125 without the process now costs four-tenths of a cent, or $0.004.

Today, industrial-grade carbon nanotubes cost about $50–$60 per kilogram. This is already far cheaper than aluminum in 1884 in real value, if we use the cost of labor as the measure of value. Yet revolutionary methods of production are now being developed that will drive the costs down even more radically. For instance, researchers at Cambridge University in England are working on a new electrochemical production method (in the prototype stage) that could produce 600 kilograms of carbon nanotubes per day at a projected cost of around $10 per kilogram, or $10,000 a metric ton. This cost-saving process will do for carbon nanotubes what the Hall–Héroult process did for aluminum. Nanotubes will become the universal raw material of choice, displacing steel, aluminum, copper, and other metals and materials. Steel currently costs about $750 per metric ton.
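A quick arithmetic check of where these figures lead, using the article's own numbers (nanotubes at roughly 100 times steel's strength per unit of weight, and the projected $10 per kilogram):

\[
\frac{1000\ \text{kg of steel}}{100} = 10\ \text{kg of nanotubes}, \qquad 10\ \text{kg} \times \$10/\text{kg} = \$100.
\]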
Nanotubes of strength equivalent to a metric ton of steel would cost $100 if this Cambridge process (or others being pursued in research labs around the world) is successful. Imagine planes, trucks, buses, cars, and elevators that weigh 5%, 2%, or even 1% of what they weigh today. Imagine the savings in conventional energy. Imagine the types of alternative energy that would become practical. Imagine the positive impact on the environment of replacing many industrial and mining processes and thus lessening air and groundwater pollution.

The most promising use of nanotubes is to turn them into paper. “Buckypaper” looks like ordinary carbon paper. It appears flimsy, but it will revolutionize the way we make everything from airplanes to cars to buildings to household appliances. It is 100 times stronger than steel per unit of weight, conducts electricity like copper, and disperses heat like steel or brass. Ben Wang, director of Florida State University’s High-Performance Materials Institute, claims, “If you take just one gram of nanotubes, and you unfold every tube into a graphite sheet, you can cover about two-thirds of a football field.” Since other research has indicated that carbon nanotubes could be a suitable foundation for producing photovoltaic energy, consider the implications of this statement. Several grams of this material could be the energy-producing skin of new generations of dirigibles—making these airships energy autonomous. Such energy-neutral airships could replace airplanes as the primary means of transporting air freight.

Is this a futurist fable, or is it entirely within the scope of development in the next 20 years (or even 10)? Modern history has shown that anything human beings decide they want done can be done in 20 years if it does not violate the laws of nature. The atom bomb was developed in four years from the time the decision was made to build it; putting a man on the Moon took eight years from the time the decision was made to do it. It is a reasonable conjecture that, by 2020 or earlier, an industrial process for the inexpensive production of carbon nanotubes will be developed, and that this is the key to solving our energy, raw materials, and environmental problems.

The revolution in materials science will help enable us to become self-sufficient in energy. It will enable us to create superlight vehicles and structures that produce their own energy and obviate the need to pump oil or mine many resources. Carbon nanotubes will replace steel, copper, and aluminum in a myriad of functions. Whatever residual need we might have for such materials will be satisfied by recycling the existing reserves already in the system. Such developments will help overcome the limits of growth and enable human civilization to become a self-contained system.

Tsvi Bisk is director of the Center for Strategic Futurist Thinking and author of The Optimistic Jew: A Positive Vision for the Jewish People in the 21st Century (Maxanna Press, 2007). He is also THE FUTURIST’s contributing editor for Strategic Thinking. E-mail bisk@futurist-thinking.co.il.
<urn:uuid:a98694e5-68ed-404f-a1a4-e2e232120cdf>
3.78125
2,106
Nonfiction Writing
Science & Tech.
36.916856
It wasn’t easy to feel the sun this morning with 19 below zero and a sharp northwest wind. No matter what the season, the sun’s brilliance remains the same, but you’ll strain to sense the warmer side of its personality on days like today. When the thermometer scrapes bottom in my town, Lake Superior exhales foggy breath just like people do. We call it lake steam or ice fog. Colder air blowing over the warmer open water suddenly drops in temperature; the water it’s carrying condenses into millions of wispy vapors. The swirls combine into clouds that rise into a tidal wave of steam in the distance. Raw, menacing, ethereal – pick your adjective. We love the apparition and consider it one of the many intangible reasons we choose to live here.

After a period of doldrums, solar activity is picking up again, with several picturesque and magnetically active sunspot groups dotting the sun’s face. In particular, Region 1401 has a busy, complicated mix of magnetic polarities (north and south magnetic poles) that’s been responsible for an ongoing series of flares. Once the group rotates more directly into our line of sight, we might see some effects on Earth. Meanwhile, material from a January 16 coronal mass ejection (CME) is expected to touch our planet starting late tonight through the 20th. That means an increased chance of northern lights for observers at higher latitudes. If you live in the northern U.S. or southern Canada, it’s worth checking the northern sky both nights.

The sun is a middle-aged star with about five billion years of an active, exciting life remaining before it runs out of nuclear fuel. In the year 5,000,000,001 A.D. – give or take – the sun will shed its outer layers to reveal a carefully kept secret – a tiny, compressed core called a white dwarf star. Though only as big as the Earth, a white dwarf is twice as hot as the sun and so fantastically dense that a teaspoon of the stuff would weigh as much as an elephant. Surrounding the dwarf will be a butterfly- or ring-shaped cloud of gas astronomers call a planetary nebula. The name comes from its resemblance to the round shape of a planet. The scenic clouds are the remains of the sun’s outer layers, expelled by powerful stellar winds during its tumultuous transition to white dwarfdom. Every time we observe a planetary nebula through our telescopes, we see the sun’s distant future.

European astronomers released a brand-new photo today of the Helix planetary nebula in the constellation Aquarius, taken in infrared light. The main ring of the Helix is two light years across and glows due to excitation from strong ultraviolet light emitted by the white dwarf at its center. Each of the fine strands radiating from the nebula’s center spans the size of our solar system and is composed of hydrogen molecules. To learn more about the Helix, please click HERE.

Assuming the Earth survives until the time the sun becomes a white dwarf, we’ll still revolve around it as always, but what we call “sun” will be only a pinpoint of white fire in a twilight-dark sky.
<urn:uuid:e5b67139-faeb-4fe8-a58b-9b678e8c783e>
2.71875
679
Personal Blog
Science & Tech.
54.220475
Scientific name: Leptidea sinapis

One of the smallest and most dainty of the white butterflies found in Britain. Rare in southern England and the Burren region of western Ireland. A small butterfly with a slow flight, usually encountered in sheltered situations such as woodland glades or scrub. Upperwings are white with rounded edges; males have a black mark on the edge of the forewing. Undersides are white with indistinct grey markings.

Males fly almost continuously throughout the day in fine weather, patrolling to find a mate. Females spend much of their time feeding on flowers and resting. In the characteristic courtship display, the male lands opposite the female and waves his head and antennae backwards and forwards with his proboscis extended. The butterfly has a localised distribution in England and Wales and has declined rapidly over the past few decades.

Size and Family
- Medium sized
- Wing span range (male to female): 42mm

Conservation status
- Listed as a Section 41 species of principal importance under the NERC Act in England
- Listed as a Section 42 species of principal importance under the NERC Act in Wales
- UK BAP status: Priority Species
- Butterfly Conservation priority: Medium
- European status: Not threatened
- Protected in Great Britain for sale only

Caterpillar foodplants
Various legumes are used, commonly Meadow Vetchling (Lathyrus pratensis), Bitter-vetch (L. linifolius), Tufted Vetch (Vicia cracca), Common Bird’s-foot-trefoil (Lotus corniculatus) and Greater Bird’s-foot-trefoil (L. pedunculatus). (Note that some vetches are not used, notably Bush Vetch, V. sepium, and Common Vetch, V. sativa.)

Distribution
- Countries: England and Wales
- This rapidly declining species used to be found across much of southern England and into eastern Wales. Its strongholds are now the woods of the West Midlands and Northamptonshire and the coastline of East Devon.
- Distribution trend since 1970s: Britain -65%

Habitat
The Wood White breeds in tall grassland or light scrub in partially shaded or edge habitats. In Britain, most colonies breed in woodland rides and clearings, though a few large colonies occur on coastal undercliffs. A few smaller colonies occur on disused railway lines and around rough, overgrown field edges (for example in north Devon). In Ireland, more open habitats are used, often far from woodland, including rough grassland with scrub, road verges, hedges, and disused railway lines.
<urn:uuid:f81c90dd-d656-4d0f-9b0c-b5536805b58c>
3.59375
552
Knowledge Article
Science & Tech.
42.653482
!( 1 || 1 && 0 )

ANSWER: 0 (AND is evaluated before OR)

The above is from the C++ tutorial. The tutorial gives the answer as 0 (or false), with the explanation "(AND is evaluated before OR)". My question is: if 1 && 0 is 0 (or false), doesn't the ! outside the parentheses convert the final result to !0 (1, or true)?
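One way to see why the tutorial's answer is still 0: the ! negates the result of the whole parenthesised expression, and that expression is 1 || (1 && 0) = 1 || 0 = 1, not 1 && 0 alone. A minimal C++ sketch of the step-by-step evaluation:

#include <iostream>

int main() {
    // && binds tighter than ||, so the expression parses as !(1 || (1 && 0)).
    bool inner_and = 1 && 0;         // 0 (false)
    bool inner_or  = 1 || inner_and; // 1 (true) -- this is what ! applies to
    bool result    = !inner_or;      // !1 -> 0 (false)
    std::cout << !( 1 || 1 && 0 ) << ' ' << result << '\n';  // prints: 0 0
    return 0;
}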
<urn:uuid:380ae58c-c2ea-4baa-93b2-fb8e9d658b92>
2.875
86
Q&A Forum
Software Dev.
82.142302
SUMMARY: Scientists have figured out how they can use special instruments on board two NASA satellites to detect the early stages of plankton "blooms". These blooms are caused by excessive runoff of industrial fertilizer, which makes marine algae grow - sometimes so thickly that the water looks black. Bacteria consume the algae and use up oxygen in the water, which can kill fish in large quantities. The MODIS instruments on NASA's Terra and Aqua satellites can detect the glow of the plankton's chlorophyll from orbit and pinpoint exactly where large blooms are forming.

View full article

What do you think about this story? Post your comments below.
<urn:uuid:6a14f971-a3e1-45de-8a00-c37ab18da910>
3.5625
133
Comment Section
Science & Tech.
46.208396
Pub. date: 2008 | Online Pub. Date: April 25, 2008 | DOI: 10.4135/9781412963893 | Print ISBN: 9781412958783 | Online ISBN: 9781412963893 | Publisher: SAGE Publications, Inc.

OCEAN CURRENTS AFFECT not only the temperature, but also the precipitation on land areas adjacent to the ocean. A cold ocean current near land causes the air just above the water to be cold, while the air above it is warm. There is very little opportunity for convection, thus denying moisture to nearby land. Coastal deserts of the world usually border cold ocean currents. Contrary to this, warm ocean currents, such as the Gulf Stream, bring moisture to the adjacent land areas. The Gulf Stream, together with its northern extension towards Europe, the North Atlantic Drift, is a powerful, warm, and swift Atlantic Ocean current that originates in the Gulf of Mexico, exits through the Straits of Florida, and follows the eastern coastlines of the United States and Newfoundland (a Canadian island) before crossing the Atlantic Ocean. It carries a huge amount of warm water to northerly lands, which has enormous significance to ...
<urn:uuid:35a51b45-22c2-4399-b3c1-7bcfe3dd57c0>
4.40625
246
Structured Data
Science & Tech.
50.811643
Poly allows the user to view and interact with polyhedra and their nets, thus connecting two- and three-dimensional geometries.

Exploring The Software:
- Begin by going to the File menu and opening "Preferences". I suggest you check the "3-D Shaded", "3-D Edges", and "2-D Net" views.
- Start with one of the Platonic solids. Select the 3-D shaded view. To explore one of the interactive options, try a demonstration routine under View > "Start Demo". You can control this action with a click-&-drag on the slider, or have the slider move automatically with a grab-drag-&-release action.
- Another interactive option is to grab-&-drag the figure in the viewing field, or use a grab-drag-&-release action to put it into motion.
- You can use View > "Printer" to set up what you'd want to have printed on a handout.
- You can get information about the polyhedra categories in the Help menu.

Thinking About the Mathematics:
- A net is an arrangement of connected polygons that become faces when the two-dimensional net is folded to form a three-dimensional solid. Note the arrangement of the squares in the net shown for the cube. How many other nets could be folded into a cube? How many arrangements of four triangles can be folded into a tetrahedron? Based on thinking about these two examples, how many possible nets do you expect to find for the octahedron? For which of the Platonic and Archimedean solids can you have symmetrical nets?
- You would want to add tabs to a printed net in order to create a polyhedron. Where would the best places be to add tabs for a given figure?
- Each of the Platonic and Archimedean solids exhibits 2-fold and 3-fold symmetries, and is said to belong to either the "2-3-4" or "2-3-5" symmetry family depending on whether it also exhibits 4-fold or 5-fold symmetry. That is, the solid exhibits 2-fold, 3-fold, 4-fold and/or 5-fold symmetries centered about a vertex, edge, or face. Rotating the closed figures helps visualize these different symmetries.
- What is the essential characteristic that determines whether a solid is in the 2-3-4 or 2-3-5 family?
- How many polyhedra can you identify that exhibit only one symmetry? How many polyhedra can you find that exhibit two symmetries? What combinations are possible or not possible?
<urn:uuid:10f097fa-1553-4659-8670-18807f8d2d98>
3.359375
564
Tutorial
Science & Tech.
56.622204
Why does a can of cranberry sauce roll so far?

A can of cranberry sauce rolls so far because it is thick and heavy. There are two physical quantities of concern here: momentum and angular momentum. When you push the can, you give it momentum. By making the can roll, you also give it angular momentum. The thickness of the sauce makes the cranberry sauce rotate with the can itself. Both quantities are proportional to mass and speed. The can stops only after both momentum and angular momentum have been removed. Because a rolling object does not slide along the table, friction cannot take away the momentum or angular momentum. Only air molecules can do this. The more mass and velocity an object has, the more time is required for the momentum to be drained. [A more extreme example is a bowling ball vs. a balloon. If both are moving at the same speed, it is much more difficult to stop the bowling ball because of its much greater mass.]

Update: June 2012
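For readers who want the quantities behind this answer, the standard definitions (not part of the original page) are

\[
p = mv, \qquad L = I\omega,
\]

where m is mass, v is speed, I is the moment of inertia and ω is the angular speed. Because the thick sauce rotates with the can rather than sloshing, it contributes its full mass to both m and I, so for a given rolling speed both quantities are large and take longer for air resistance to drain away.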
<urn:uuid:6476862f-960d-439f-9e1d-c4670d1826a6>
3.34375
224
Knowledge Article
Science & Tech.
55.605
Analysis of a crater-forming meteorite impact in Peru
Article first published online: 16 SEP 2008
Copyright 2008 by the American Geophysical Union.
Journal of Geophysical Research: Planets (1991–2012), Volume 113, Issue E9, September 2008
How to Cite: (2008), Analysis of a crater-forming meteorite impact in Peru, J. Geophys. Res., 113, E09007, doi:10.1029/2008JE003105.
- Issue published online: 16 SEP 2008
- Article first published online: 16 SEP 2008
- Manuscript Accepted: 3 JUN 2008
- Manuscript Revised: 27 APR 2008
- Manuscript Received: 5 FEB 2008
Keywords: meteorite fall

The fireball producing a crater-forming meteorite fall near Carancas, Peru, on 15 September 2007 has been analyzed using eyewitness, seismic, and infrasound records. The meteorite impact, which produced a crater 13.5 m in diameter, is found to have released of order 10¹⁰ J of energy, equivalent to ∼2–3 tons of TNT high explosive, based on infrasonic measurements. Our best fit trajectory solution places the fireball radiant at an azimuth of 82° relative to the crater, with an entry angle of 63° from the horizontal. From entry modeling and infrasonic energetics constraints, we find an initial energy for the fireball in the range 0.06–0.32 kton TNT equivalent. The initial velocity of the meteoroid is restricted to below 17 km/s from orbit considerations alone, while modeling suggests an even lower best fit velocity close to 12 km/s. The initial mass of the meteoroid is in the range of 3–9 tons. At impact, modeling suggests a final end mass of order a few metric tons and an impact velocity in the 1.5–4 km/s range. We suggest that the formation of such a substantial crater from a chondritic mass was the result of the unusually high strength (and correspondingly low degree of fragmentation in the atmosphere) of the meteoritic body. Additionally, the high altitude of the impact site (3800 m above sea level) resulted in an almost one order of magnitude higher impact speed than would have been the case for the same body impacting close to sea level.
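As a sanity check on the TNT equivalence quoted in the abstract (using the standard convention 1 ton TNT = 4.184 × 10⁹ J):

\[
\frac{10^{10}\ \text{J}}{4.184\times 10^{9}\ \text{J/ton}} \approx 2.4\ \text{tons of TNT},
\]

consistent with the ∼2–3 ton figure above.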
<urn:uuid:a78689ef-08b7-4d68-bfa4-1ed86656a391>
2.8125
489
Academic Writing
Science & Tech.
56.002051
OK – so this did not really happen. It’s a PR stunt to mark the start of science month on the TV channel, Eden. However, the video does talk about what could happen if a real meteor of this size hit London, and it would be a lot more devastating than one crushed taxi.

The video talks about the meteor that wiped out the dinosaurs 65 million years ago. This could be a nice intro to a lesson on extinction or to promote a discussion on theories: are we sure this is what happened? Why?

For a physics lesson you could use the video to look at why such a small-ish meteor would cause so much damage. The video estimates the damage that would really be caused by a meteor this size, but how did they work it out? You could discuss why the meteor would have so much kinetic energy when it hit the ground (a rough worked example is sketched at the end of this post).

A simple experiment you can do with KS3 and KS4 students is to model the impact of a meteor on the surface of the Earth using a ball of modelling clay dropped into a tray of flour. They can choose the independent variable they use (mass of ‘meteor’, height it is dropped from, surface area of ‘meteor’) and measure the dependent variable – the width of the ‘crater’. There is lots of scope for talking about control variables and analysing the results. To make sure the crater is visible, a good idea is to sprinkle poster paint powder over the flour. Another tip is to do this experiment outside.

For KS5 physics classes (and beyond) there is a nice discussion on this post from the Wired blog on working out the amount of energy a meteor this size would transfer to the surface of the Earth.

Back to Eden’s Science Month. It runs all day every day on Sky 532 and Virgin 208 across July. Highlights in this first week include The Code (Wednesday 4th July at 10pm) and Deadliest Volcano (Thursday 5th July at 7pm). It looks like there could be lots of interesting shows that you could incorporate into many science lessons.
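The promised worked example: kinetic energy is E = ½mv². The mass and speed below are illustrative assumptions, not figures from the video:

\[
m = 10^4\ \mathrm{kg},\ v = 20\ \mathrm{km/s}\ \Rightarrow\ E = \tfrac{1}{2}\times 10^4 \times (2\times 10^4)^2 = 2\times 10^{12}\ \mathrm{J},
\]

roughly half a kiloton of TNT equivalent, which is why even a modest meteor is so destructive. In the flour-tray model, the clay ball’s impact energy is set by the drop height: E = mgh.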
<urn:uuid:82b39935-5192-4a6d-8f73-704c5fe3b0d9>
4.21875
430
Personal Blog
Science & Tech.
64.28241
Monday, November 11, 1996 Home range of the wolves Other wolf researchers have learned that the amount of food (or "prey density") in an area has a great effect on how large an area the wolves travel in. This area is called their "home range." The size of the home range depends in part on how much ground the wolves need to cover in order to find enough prey to survive. At certain times of the year, and throughout some entire years, there is not enough food (berries, acorns, etc.) available in the higher elevation forests for the animals that the wolves prey upon. The lack of food causes the prey animals to move to other areas. It may also limit their ability to reproduce and raise young. This in turn forces the wolves to go elsewhere to search for prey. If the wolves settle on private property or begin preying upon domestic animals, we have to capture them and return them to pens or move them to another recovery area. We use radio-tracking to follow the 10 collared red wolves here in the Park. In general, the home range is 16 square kilometers (6 square miles) in Cades Cove. It is as much as 10 to 20 times larger for wolves in the higher elevation forests. Breeding pairs, especially the females, limit their range during the denning season, but expand it again as the pups grow and need more food. The current population is between 10 and 24 wolves. The Park is 500,000 acres (780 square miles), but we cannot predict how many wolves can be supported without more accurate information on prey densities throughout the park. We caught 52 raccoons in Cades Cove but have only caught 9 raccoons at Tremont. The low number is not as bad as it seems at first because we are only using half the number of traps at Tremont as we did at Cades Cove. We also caught opossums, rabbits, skunks, and a gray fox. The rabbits and opossums are also prey for the red wolves but they rarely kill skunks or foxes. We hope to use the information from the raccoon studies in the future to select areas to release red wolves that will give them the best chances of surviving.
<urn:uuid:0621f50e-5089-4829-ab98-7ba40e8bf50b>
3.625
457
Knowledge Article
Science & Tech.
58.33384
Flying over the three-dimensional Moon – DLR planetary researchers generate a new model of the Moon

The landing sites of Apollo 11, 12 and 14 are located centrally in a region depicted in tranquil blue. In these colour-coded 3D images of the lunar surface, blue is used to indicate low-lying flat ground. It was not until the later missions that the US became more adventurous in the choice of a landing site. For example, the astronauts on the Apollo 15 and 17 missions were sent into regions of the Moon that posed a much greater challenge. In the model created by the DLR Institute of Planetary Research in Berlin-Adlershof, these areas are depicted in green, indicating that these regions are at a slightly higher elevation and are not as flat as those used for the earlier Moon landings.

To make this 3D depiction possible, the wide-angle camera (LROC WAC) on board the American LRO spacecraft recorded images from an altitude of 50 kilometres. In the next step, DLR project scientist Frank Scholten from the Institute of Planetary Research evaluated the 70,000 stereo images, using special software to compare them pixel by pixel, then used the information relating to where each picture was taken and the direction of view of the camera to calculate roughly 100 billion 3D points. The result is a 3D model covering about 37 million square kilometres, which is more than 98 percent of the lunar surface and over twice the area of Russia.

The Moon in focus

It took a network of 40 computers two weeks of computing time to perform these elaborate calculations. The software required for this task was developed at the DLR Institute of Planetary Research and had already been employed successfully on image data from other planets; for example, the Mars Express mission. The result, known as the GLD100 (Global Lunar Digital Terrain Model), delivers elevation figures at 100-metre intervals right across the surface of the Moon. "Over the last few years, planetary research has been focusing primarily on other planets, Mars being just one example. The Moon remained in the background during this period," explains Scholten.

The team led by DLR planetary researcher Jürgen Oberst performed its measurements of the Moon in several different ways. Camera imagery was complemented by data from the Lunar Orbiter Laser Altimeter (LOLA), which employs laser pulses to measure elevations on the lunar surface; these were then compared with the data in the GLD100 elevation model. The two methods complement one another; the laser instrument provides extremely accurate elevations but covers only part of the lunar surface. Gaps of several kilometres still exist, particularly in regions near the lunar equator. The cameras on board the LRO compensate for this because they are able to completely cover large areas. "Our elevation model will help planetary researchers to examine questions for which an accurate and complete knowledge of the topography of the Moon is important," says Scholten. With this data, scientists wish to investigate a number of things, including whether the central latitudes of the Moon are home to any deep craters where water ice might exist in the permanent shadows, in a similar way to the regions close to the two poles. The elevation model clearly depicts the diverse landforms - for example, mountains, craters and rilles.
The colour-coded view depicts the third dimension - altitude - in colours ranging from blue (roughly -9100 metres) to red/white (roughly +10,760 metres). Whereas the 'front', or Earth-facing side of the Moon, with its flat plains, or maria, and the Apollo landing sites, appears for the most part in blue and green, the hitherto relatively unexplored far side of the Moon - the side not visible from Earth - has its high ground depicted in red. This far side is home to the lowest as well as the highest points on the Moon. "This depiction clearly shows how gigantic and deep the South Pole-Aitken Basin is," explains DLR planetary researcher Ulrich Köhler. This basin has a diameter measuring about 2500 kilometres, making it the largest known impact crater in the Solar System. It is about 13 kilometres deep "and is perhaps a window on the distant past of the Moon because it may extend down to the original mantle," suggests Köhler.

Using the data from this elevation model, scientists can also simulate low-altitude flights across the lunar surface. The 'sightseeing' flights over the Apollo 15 and Apollo 17 landing sites show clearly that the astronauts landed close to mountain ranges several thousand metres in height and set out from there to explore the Moon.

Assessment for future Moon landings

"With this data, we are laying important foundations for future Moon missions, whether manned or unmanned," states lunar researcher Ulrich Köhler. "These 3D maps of the Moon enable us to better evaluate future landing sites." There are a total of seven instruments on board the NASA orbiter; DLR Space Administration funds the German members of the LRO team. With each new orbit of the Moon, and with each new image of the lunar surface, the planetary researchers are able to further refine their 3D model of Earth's companion. "Every month, we cover the entire surface of the Moon once more with the camera," explains Frank Scholten. "This data is included in our model on a continuous basis, which enables us to view the surface in ever greater detail."

Source: DLR German Aerospace Center
<urn:uuid:94ca0221-5e97-4a3c-84df-d19e2e2c9251>
3.875
1,135
Knowledge Article
Science & Tech.
37.255979
Add your answer here.

Check out some similar questions!

Physical science [1 Answers]
When a wire is made smaller, the resistance increases. Which happens to the electric

Science investigatory project title - physical science category [1 Answers]
I want to look for our title defense for tomorrow in research.

Physical science [1 Answers]
A carpenter lifts a 10-kg piece of wood 1.5 m above the ground, producing an acceleration of 9.8 m/s². The carpenter then carries the wood to a truck that is 10 meters away. How much force is needed to raise the piece of wood?

Physical Science [4 Answers]
What is the formula for speed?

Physical Science [1 Answers]
What is the kinetic energy of an object that has a mass of 12 kilograms and moves with a velocity of 10 m/s?

View more Energy questions

Search
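For the kinetic energy question above, the standard formula gives the answer directly (this working is ours, not a posted reply):

\[
E_k = \tfrac{1}{2}mv^2 = \tfrac{1}{2}\times 12\ \mathrm{kg} \times (10\ \mathrm{m/s})^2 = 600\ \mathrm{J}.
\]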
<urn:uuid:7d7e2fdf-3413-414b-9d44-3e31da9a09d0>
2.703125
190
Q&A Forum
Science & Tech.
64.330451
Object-Oriented Design is based on the position that the more closely a program models the real-world problem it represents, the better the program will be. In many cases, the data definitions on a project are more stable than the functionality, so building a design based on the data, as object-oriented design does, is a more stable approach. Object-oriented design uses several ideas that are important to modern programming.

The Four Object-Oriented Design Principles
- Abstraction: ignore irrelevant details and concentrate on what is relevant to the current process.
- Encapsulation: bind code and data together; done via classes.
- Inheritance: classes can inherit the properties and methods of other classes, which are 100% encapsulated, hence safe code re-use.
- Polymorphism: use a single interface for multiple functions (we will come back to this later).

Steps in Object-Oriented Design
- Identify the objects and their attributes, which are the data.
- Determine what can be done to each object.
- Determine what each object can do with other objects.
- Determine the parts of each object that will be visible to other objects: which parts will be public and which should be private.
- Define each object's public interface.

Examples of Bad and Good Programs in JAVA
Examples of Bad and Good Programs in C++

Department of Computer Science, University of Regina. [CS Dept Home Page]
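As an illustrative companion to the linked example programs (a minimal sketch of our own, not the course's code), here are all four principles in a short C++ program:

#include <iostream>

// Abstraction: Shape exposes only what callers need (an area),
// hiding irrelevant detail behind one interface.
class Shape {
public:
    virtual double area() const = 0;  // single interface, multiple behaviours
    virtual ~Shape() = default;
};

// Inheritance: Circle and Square inherit Shape's interface.
// Encapsulation: each keeps its data private, bound to its methods.
class Circle : public Shape {
    double radius_;
public:
    explicit Circle(double r) : radius_(r) {}
    double area() const override { return 3.14159265 * radius_ * radius_; }
};

class Square : public Shape {
    double side_;
public:
    explicit Square(double s) : side_(s) {}
    double area() const override { return side_ * side_; }
};

int main() {
    Circle c(1.0);
    Square s(2.0);
    const Shape* shapes[] = { &c, &s };
    for (const Shape* sh : shapes)        // polymorphism: the same call
        std::cout << sh->area() << '\n';  // dispatches to different code
    return 0;
}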
<urn:uuid:b976f2b8-df42-4646-b305-3bc5ac3dc296>
3.84375
303
Tutorial
Software Dev.
31.713664
Programming in standardized high-level languages has the benefit of being readily portable across architectures. One would think that portability holds as long as programmers constrain themselves to features specified by the language standard and employ compilers that faithfully implement the standard. Unfortunately, this is not always the case. Migration of software between architectures becomes problematic when byte order dependent code exists in the source code base, only to be discovered when runtime problems surface. In large, legacy code bases consisting of millions of lines of code, it is very difficult to find all of the byte order dependencies and transform them into endian-neutral code using known techniques.

The Bi-Endian Compiler (BEC) enables applications to execute with the byte order semantics for which they were designed. For example, the BEC implementation discussed in this article enables applications to execute with big-endian semantics on a little-endian processor. Employing BEC requires the programmer to designate the byte order of all data. During compilation, BEC inserts code sequences, where necessary, to load data into processor registers such that the data is in native byte order before operations are performed. Subsequently, code sequences are inserted that transform results in native byte order into the resulting data's declared byte order before storing to memory.

This article first provides background on the subject by reviewing byte order dependencies and current techniques to mitigate issues involving them. The BEC is then introduced, discussing the language features necessary to express byte order and the underlying compiler implementation. The porting process is discussed, showing how to effectively apply the compiler and its features to port an application. Performance optimization and evaluation is then detailed, showing techniques used to improve the performance of the implementation. The conclusion summarizes and offers thoughts on future directions.

Endianness, or byte order, is the format in which multibyte data is stored in memory. It specifies the location of the most significant and least significant bytes that comprise a multibyte type such as a 32-bit integer. The two types of endian architectures are termed Big-Endian and Little-Endian. Discussion of the advantages and disadvantages of each has been characterized as being akin to a religious war. Regardless, both big-endian and little-endian architectures exist, and this can cause problems when migrating between architectures due to byte order dependent code.

Example 1 shows a code snippet that produces differing output depending upon the byte order of the processor architecture used to execute it. On a big-endian processor, where the most significant byte is stored in the lowest memory address, the pointer ap points to 0x12. On a little-endian processor, where the least significant byte is stored in the lowest memory address, the pointer ap points to 0x78. Legacy code bases built up over several years by many different programmers can be littered with such snippets of code, motivated in many cases by optimization; assuming the location of a smaller subset of bytes in a multibyte element can save memory transactions.
#include <stdio.h>

int a = 0x12345678;
char *ap = (char *)&a;

int main(void)
{
    printf("%2x %x\n", *ap, a);
    return 0;
}

Output on a big-endian processor:
12 12345678

Output on a little-endian processor:
78 12345678

Example 1: Byte order dependent code example and output (Source: Intel Corporation).

Techniques for transforming byte order dependent code into endian-neutral code are well understood. In the aforementioned example, macros could be defined whose implementation would be platform dependent but would agree upon which byte of a larger component is considered first, second, and so on. These techniques require the programmer to first identify byte order dependent code and then make manual code changes to enforce endian neutrality. In comparison, BEC does not require the programmer to find the specific byte order dependent code, but only to identify the byte order of the data. The compiler then enforces that the correct byte order semantics are executed. In the example from Example 1, if the code was written to assume big-endian order, the programmer only specifies that the variable a is big-endian, and the compiler ensures that the expectation is met. In the conservative case, the programmer could declare that the entire program should execute with big-endian semantics and the compiler would enforce it.

A second approach to migrating byte order dependent code is encapsulated by binary translation techniques. These techniques encompass more in that they enable execution of one processor's instruction set architecture (ISA) on a processor with a different ISA by intelligently and efficiently translating between the two. This approach is attractive due to its relative ease of use for the customer; Apple employed its Rosetta technology to help migrate from the PowerPC architecture to Intel architecture. Compared with BEC, however, this approach typically incurs greater overhead, as the application is translated during runtime without the benefit of aggressive static compiler optimization techniques.

Bi-Endian Evolution and Implementation

The BEC has extensions to C and C++ in which byte order is a type attribute and can be bound to a built-in type, a typedef, or a type as part of a variable declaration. The byte order attribute can be bound to pointer types and floating point types, and can be part of a type chain consisting of multiple pointer indirections, integral types, and floating point types. The following sections describe the evolution of the BEC, its language extensions, the dataflow analyses implemented, and big-endian data initialization.

As the early proposals for a BEC prototype were discussed, it was necessary to dispel a common misconception. Often, engineers and managers reviewing the proposal voiced concerns that the compiler would be unable to determine the intentions of the programmer. They viewed byte order dependencies as they are encountered in a debugger or binary translator. It was often necessary for the prototyping team to explain that if the byte order of types is part of the program specification, the intentions of the programmer are clearly stated. It only became necessary to provide mechanisms for explicitly declaring byte orders at varying granularities of scope.

The earliest proof-of-concept BEC was demonstrated using the C benchmarks in SPEC2000. These benchmarks were compiled such that all types were declared as big-endian. The resulting executables were run on a processor that supports only little-endian. The next large-scale demonstration involved compiling the Linux operating system.
The byte order of the fields within the Internet Protocol (IP) header in the network stack is maintained as big-endian by using hton() macros, which convert between network order (big-endian) and the host byte order (little-endian). As a demonstration of the BEC's capabilities, the header fields were explicitly declared as big-endian and the macros were removed. The functionality of the compiled operating system was unchanged. The BEC technology is thus an alternative to the use of hton() macros to maintain network byte order.

Debugging of mixed-endian code, which is code containing uses of both big-endian and little-endian types, was supported by modified versions of a proprietary debugger and GNU gdb. The DWARF v3.0 specification provides a means for specifying byte order. Various other DWARF mechanisms allow specifying the location of pieces of data, as would be necessary when the actual byte order of data is other than specified by the programmer. An example where the byte order of data can differ from its specification is when a value declared as big-endian is operated upon, such as in an addition. During execution, the data element is represented in a register in little-endian format (on a little-endian processor) for correct operation. Display of the data in a debugger needs to account for this difference. In a large development project, the person who is debugging such code may not be aware of the actual byte order of the data.

The primary function of the language extensions is to enable the programmer to communicate the byte order of compilation units, code sections, and individual declarations to the compiler. Example 2 is a code sample employing each of the above. The source file, file.c, is compiled using the -little-endian option, which specifies that all data declarations in the compilation unit are little-endian. Similarly, the compiler supports a -big-endian option, which specifies that all data declarations in the compilation unit are big-endian. Declarations that are affected by these options are said to have been declared implicitly in either a big-endian or little-endian context. The variable a in the example would be stored in little-endian byte order. For convenience, we refer to such a variable as a little-endian variable.

In the file, #pragma byte_order (push, bigendian) specifies that declarations following the pragma are big-endian. The variable b would be stored in big-endian byte order. In addition, the optional parameter push specifies that a stack of byte orders is maintained, which enables byte order declarations spanning nested include files. This declaration method is also implicit and overrides the byte order specified at the command line. A section of code that has an implicit declaration bound to it is termed a big- or little-endian section (depending on the specified byte order). At the finest granularity, an explicit declaration occurs via a byte order attribute. The variable c would be stored in big-endian byte order. The byte order attribute overrides both implicit methods.

icc -little-endian file.c

/* file.c */
int a = 0x12345678;                             /* little-endian */
#pragma byte_order (push, bigendian)
int b = 0x12345678;                             /* big-endian */
#pragma byte_order (pop)
int __attribute__((bigendian)) c = 0x12345678;  /* big-endian */

Example 2: Byte order declaration example (Source: Intel Corporation).

By default, the byte order of system include files is the same as that of the target architecture.
The BEC allows specification of the implicit byte order of individual compilation units, for instance include files, without modification of the source code. The BEC provides command line options to specify directory sub-trees and to specify the implicit byte order of the compilation units contained in each sub-tree. The sub-tree identification mechanism is based upon regular expression matching of the directory path name. The implicit byte order specification is made by way of source code prolog and epilog files that contain #pragma byte_order specifications. The prolog and epilog files become part of the post-preprocessor file that is actually translated by subsequent phases of the compilation.

Similar to many modern compilers, the BEC consists of multiple phases, transforming one representation of the code to another, beginning with the source code and concluding with the executable. Figure 1 illustrates the compilation phases.

Figure 1: BEC Compilation Phases (Source: Intel Corporation, 2011).

The front-end phase parses the source code and transforms it into an abstract syntax tree (AST). During this phase, byte order attributes are associated with the program types represented in the AST, depending upon the byte order context at the point of declaration, as discussed in the previous section. The BEC employs a proprietary intermediate language (IL) to represent the program under compilation. The IL translation phase converts the AST representation into this IL representation. Translated variables may have byte order conversion operations (BOCOs) placed both before and after the variable in cases where the variable's type has a byte order opposite that of the underlying target.

The optimization phase operates on the IL representation to make execution of the code on the target platform more efficient. Since the BOCOs are represented in the IL just like any other instructions, standard compiler optimizations can be applied. These optimizations, with a description of each, include:
- Common subexpression elimination: removes redundant BOCOs for unused data.
- Code motion: moves BOCOs up to the function entry, which reduces the number of BOCOs.
- Constant propagation: determines if a constant that requires a BOCO has already been loaded (and converted), which eliminates unnecessary BOCOs.

In addition, an optimization solely designed to remove redundant BOCOs is invoked, termed the "bswap elimination optimization," which will be discussed in greater detail later in this article. The code generator phase converts the IL representation into the binary code specific to the target platform. BOCOs are implemented using either hardware shift instructions or BSWAP instructions. Hardware BSWAP instructions provide an efficient method of converting between byte orders.

In the BEC, pointer data types can also be attributed with a byte order. As a result, there may be situations where a pointer has the opposite byte order from the target platform. Pointers of the opposite byte order must be byte-swapped upon initialization. This presents a challenge when pointers are initialized by link-time constants, because these constants are unknown at compile time and are resolved later during the linking stage. For the initial proof-of-concept BEC, the linker was modified to support other-endian, linker- or loader-resolved constants. This created an unnecessary dependency within the tool chain with respect to the new technology.
Subsequent versions of the BEC employed a flexible set of alternatives for resolving such constants. In order to perform the necessary byte swap operations for pointers, the compiler generates and places special initialization data in a section of the object file, the .initdata section. This information is used in a three-step data initialization process, detailed as follows:
- At the static data initialization step, a post-link tool is employed, which initializes data that can be initialized statically, such as data that does not have relocations associated with it.
- At the dynamic loader initialization step, a dynamic loader has an opportunity to complete initialization based on the information from the .initdata section. This step is optional and requires a modified operating system loader.
- The dynamic runtime initialization step employs a runtime routine to initialize data stored in the opposite byte order from the underlying platform. This routine is automatically invoked prior to passing control to the main routine.

Porting an application from a big-endian architecture to a little-endian one consists of three main steps:
- Compile the application with the BEC in big-endian compilation mode. Resolve compiler-diagnosed issues such as warnings.
- Employ a symbol consistency checking mechanism to resolve possible incompatibilities between different compilation units.
- Manually review the code and debug with BEC technology-enabled debuggers.
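To make the BOCO insertion described above concrete, here is a rough sketch of the kind of code sequence the compiler conceptually generates around a use of a big-endian variable on a little-endian target. This is our illustration rather than Intel's actual generated code; __builtin_bswap32 is the GCC/Clang-style intrinsic that typically maps to a hardware BSWAP instruction:

#include <stdint.h>

/* b_storage points to a value the programmer declared big-endian, so its
 * bytes sit in memory in big-endian order even on a little-endian machine. */
uint32_t add_one_bigendian(const uint32_t *b_storage)
{
    uint32_t reg = __builtin_bswap32(*b_storage);  /* BOCO: to native order   */
    reg = reg + 1;                                 /* operate in native order */
    return __builtin_bswap32(reg);                 /* BOCO: back to declared  */
}                                                  /* (big-endian) order      */

If the result feeds directly into another big-endian operation, the two back-to-back swaps cancel; this is exactly the kind of redundancy the bswap elimination optimization removes.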
<urn:uuid:4b456402-4510-4d7e-845b-523d33a9d21c>
3.109375
3,053
Documentation
Software Dev.
23.678999
Emissions of air pollutants by sector in 2005, EU-27

The TOFP factors are as follows: NOX 1.22, NMVOC 1, CO 0.11 and CH4 0.014 (de Leeuw, 2002). Results are expressed in NMVOC equivalents (kilotonnes, kt). Data not available for Iceland (emissions of CO, NMVOC and NOX were not reported) and Malta (CO).

The figure also shows the emissions of acidifying pollutants (sulphur dioxide SO2, nitrogen oxides NOX and ammonia NH3), each weighted by an acid equivalency factor prior to aggregation to represent their respective acidification potentials. The acid equivalency factors are given by: w(SO2) = 2/64 acid eq/g = 31.25 acid eq/kg, w(NOX) = 1/46 acid eq/g = 21.74 acid eq/kg and w(NH3) = 1/17 acid eq/g = 58.82 acid eq/kg.

The graph shows the emissions of primary PM10 particles (particulate matter with a diameter of 10 µm or less, emitted directly into the atmosphere).
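Putting the TOFP weighting into a single formula, with an illustrative worked example (the emission figures below are made up for illustration and are not from the dataset):

\[
\mathrm{TOFP} = 1.22\,E_{\mathrm{NO_X}} + 1.00\,E_{\mathrm{NMVOC}} + 0.11\,E_{\mathrm{CO}} + 0.014\,E_{\mathrm{CH_4}} \quad [\text{kt NMVOC-equivalent}]
\]

For example, a sector emitting 100 kt NOX, 50 kt NMVOC, 200 kt CO and 300 kt CH4 would contribute 1.22(100) + 1.00(50) + 0.11(200) + 0.014(300) = 122 + 50 + 22 + 4.2 = 198.2 kt NMVOC-equivalent.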
<urn:uuid:71af0f92-f8a9-4dcc-8c71-6db9d4f97f47>
2.765625
249
Structured Data
Science & Tech.
72.313378
State of the Environment 2011 Committee. Australia state of the environment 2011. Independent report to the Australian Government Minister for Sustainability, Environment, Water, Population and Communities. Canberra: DSEWPaC, 2011.

Humans have both direct and indirect effects on biodiversity. Direct effects mainly involve taking species (e.g. taking animals and plants as food, harvesting plants for ornamental purposes, or removing plants or animals that have become pests). Indirect effects happen as a result of other activities associated with human existence, such as growing food, using industrial processes that either consume natural resources or introduce heat or chemicals into the environment, and clearing land for urban development, agriculture, mining or other activities.

Since European settlement, harvesting has had major detrimental effects on many terrestrial species, including red cedar, koalas, and various kangaroos and wallabies. Harvesting of fish and other species is an issue in inland water and marine environments (see Chapter 4: Inland water and Chapter 6: Marine environment). Forestry and firewood collection are discussed in Chapter 5: Land. Salvage logging (logging of trees after wildfires) is partly a harvesting issue and partly a habitat modification issue. It has major effects on wildlife, especially species that live in tree hollows.95-96

Harvesting of native terrestrial species—such as kangaroos, wildflowers or marine species—is strictly regulated (Box 8.4). Illegal harvesting of some species, such as orchids, is frequently mentioned as a threat in species listings. In some cases, harvesting is used as a tool to manage populations of native species that are becoming pests due to changes in their environment that remove previous restrictions on population size.34

International movement of wildlife and wildlife products is regulated under Part 13A of the Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act). Commercial export of regulated wildlife and wildlife products (see www.environment.gov.au/biodiversity/wildlife-trade/lists/exempt/index.html for exemptions) may occur only where the specimens have been derived from a captive breeding program, artificial propagation program, aquaculture program, wildlife trade operation (WTO) or wildlife trade management plan (WTMP) approved under the EPBC Act. Most WTOs and WTMPs involve the wild harvest of native wildlife. WTOs are approved for a maximum period of three years and WTMPs are approved for a maximum period of five years. As of July 2011, there are three small-scale WTOs for harvesting whole plants (ferns), one small-scale WTO for cut flowers, five small-scale WTOs for invertebrates (mostly insects, but one is for hermit crabs), one small-scale WTO for mammals (wallaby skin and fur) and one existing-stock WTO for taxidermied specimens (primarily birds and mammals) (see www.environment.gov.au/biodiversity/wildlife-trade/sources/operations/index.html).

The environment minister or their delegate must not approve a WTO unless satisfied that the operation is not detrimental to the survival of the taxon or the conservation status of the taxon to which the operation relates, and that it will not threaten any relevant ecosystem. WTMPs are generally developed and implemented by state or territory authorities (Table A).
Plans must address the legislative context of the proposed trade; general management procedures, such as licensing; the different types of harvest or production covered under the plan; monitoring and assessment; and reporting and compliance. The minister or their delegate may not approve a management plan as a WTMP unless they are satisfied that the application has appropriately assessed the environmental impact of the actions covered by the plan. WTMPs must be ecologically sustainable, and not detrimental to the survival of the taxa, the conservation status of the taxa or any relevant ecosystem.

Harvesting of native seeds from the wild is one aspect of biodiversity harvesting that is becoming increasingly important. Seeds can be collected for conservation and restoration purposes as well as for commercial gain.97 Seed harvesting can potentially cause damage to the parent plants or reduce the viability of the source population through repeated harvesting. Attention needs to be paid to how this industry functions to enable restoration of biodiversity and improve the extent and condition of native vegetation.

Table A: Wildlife trade management plans and their jurisdictions
| Plan | Jurisdiction(s) |
| NSW cut flowers | NSW |
| Kangaroos (four plans) | NSW, Qld, SA and WA |
| Crocodiles (three plans) | WA, NT (NT has separate plans for freshwater and saltwater crocodiles) |

NSW = New South Wales; NT = Northern Territory; Qld = Queensland; SA = South Australia; Tas = Tasmania; Vic = Victoria; WA = Western Australia

Further information: information on the sustainability of the kangaroo harvest can be found at www.environment.gov.au/biodiversity/wildlife-trade/wild-harvest/kangaroo/index.html. Links to the management plans can be found at www.environment.gov.au/biodiversity/wildlife-trade/sources/management-plans/index.html.

Various measures of a society’s ecological footprint (the amount of natural resources consumed by people and the area of land required to support that consumption) are available; one of these is produced by the Global Footprint Network (Figure 8.13). In the most recent data, Australia ranks as having the eighth largest ecological footprint of all nations. It is difficult to translate this into an effect on biodiversity, partly because the land from which Australians obtain their resources is not all within Australia, partly because people in other countries obtain their resources from Australia, and partly because the calculations do not reveal in detail which resources are being taken from which ecosystems and which species are being affected. Nevertheless, there is a strong message that the lifestyles of Australians are exerting relatively strong pressures on ecosystems compared with people in many other countries.

Figure 8.13 Ecological footprint for consumption for a range of high-consuming and low-consuming countries (81 countries are omitted from the middle of the figure)

In another international comparison, based on a set of measures of environmental degradation, Australia had the ninth worst absolute environmental impact out of 171 countries.100 More than a decade of research using CSIRO’s Australian Stocks and Flows Framework, together with ecological footprint analysis and local-scale modelling, emphasises the importance for all Australian cities of managing not only population growth, but where and how people live, and the consumption of natural resources per person.56 This research suggests, for example, that Sydney will have trouble avoiding further losses of biodiversity, because growth will require conversion of relatively undegraded habitat.
However, Perth and Melbourne have the scope to minimise losses, because they can develop previously cleared areas where the major effects on biodiversity have already been felt. Australia's high footprint is largely caused by our lifestyles, which use high levels of natural resources in an inefficient way. In the Northern Territory, for example, the average ecological footprint of the Indigenous population is 6.4 global hectares per person, while the footprint for the non-Indigenous population is around 9 global hectares per person.101 (Global hectares are a measure of biocapacity: one global hectare represents the average productivity of all biologically productive hectares on Earth.) These analyses were based on various data sources from 1998 to 2004. The lower footprint of the Indigenous population is partly due to traditional use of ecosystem resources, but it is also due to poverty.

Use of inland water for agricultural and other purposes is considered in detail in Chapter 4: Inland water. Here we summarise key points in relation to biodiversity. Water is extracted from the environment by households and businesses and in agricultural and other production industries for a range of purposes, including direct consumption by humans (e.g. drinking, cooking) and indirect consumption (e.g. use in production of food, manufacturing of goods that contain water or rely on inputs that use water, cooling systems, transport).102 Extraction of water from the environment places pressures on biodiversity by affecting flow rates in waterways, which affects the habitat for plants and animals living in those waterways, as well as the wetting and drying cycles that are important for many species' breeding. It also affects the provision of food for waterway-dependent species. Indirect pressures include effects on hydrological cycles, which affect the levels and condition of underground water on which many species of plants and animals depend. The condition of soils is affected when rising watertables bring salt to the surface.

Water consumption is considered in the analyses of ecological footprints. Australia's water consumption was 14 101 gigalitres in 2008–09, a decrease of 25% from 2004–05. Agricultural activities accounted for 7589 gigalitres, or 54%, of total Australian water consumption in 2008–09. This is a decrease from 2004–05, when agriculture accounted for 65% of water consumption, reflecting restricted supplies during southern Australia's extended drought. The major impacts of water consumption, changed flow regimes and changed hydrology are on wetlands and the species that depend on them, on animals and plants that live in and around waterways, and in some cases on the landscape more broadly. These impacts are discussed further in Chapter 4: Inland water and Chapter 5: Land.
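As a cross-check on these figures, the 2004–05 quantities implied by the percentages above can be recovered with a few lines of arithmetic. The sketch below is an illustration added here, not part of the SoE report; the 2004–05 values are back-calculated from the quoted percentages rather than separately reported.

```python
# Back-calculate the 2004-05 water figures implied by the SoE 2011 numbers.
total_2008_09 = 14_101   # gigalitres, reported total consumption
ag_2008_09 = 7_589       # gigalitres, reported agricultural consumption

total_2004_05 = total_2008_09 / (1 - 0.25)  # 2008-09 was a 25% decrease
ag_2004_05 = 0.65 * total_2004_05           # agriculture was 65% in 2004-05

print(f"Agriculture share, 2008-09: {ag_2008_09 / total_2008_09:.0%}")  # ~54%
print(f"Implied 2004-05 total:      {total_2004_05:,.0f} GL")           # ~18,801
print(f"Implied 2004-05 ag. use:    {ag_2004_05:,.0f} GL")              # ~12,221
```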
<urn:uuid:7b157ad2-f2aa-4104-8a51-ef355ca59099>
3.328125
1,964
Knowledge Article
Science & Tech.
30.140996
Mon Jan 14 00:22:56 GMT 2008 by Erin
What does the P in 88P Howell's comet mean?

Fri Feb 22 02:28:11 GMT 2008 by Pj
It means periodic. Most known comets are periodic. This means that its orbit can be calculated and its future sightings can be predicted (like Halley's Comet).

I Love This Paper
Wed Jan 30 20:42:37 GMT 2008 by Josh
This research is awesome, I study this all the time.

Sun May 04 18:56:21 BST 2008 by Dalton
Approximately how many years would it take Swift-Tuttle to impact Earth, and how can we be sure the data up to date now is accurate? Scientists compile data all the time, and many of them disagree on data and many other astronomical issues...

Wed Apr 01 07:37:10 BST 2009 by fred
That is a good article.
<urn:uuid:f9e463e2-cb48-42af-af27-8c7ce288536c>
3.21875
245
Comment Section
Science & Tech.
79.978117
C14 Decay and Nitrogen Ion
Date: Winter 2011-2012

When carbon-14 decays to nitrogen it gains a proton; however, this means that the nitrogen atom is "short" one electron, so it has more protons than electrons. Does the new nitrogen atom stay as an ion, and if not, where does it get the "extra" electron?

Charge is conserved in all interactions. When the neutron decays, it produces an electron and an antineutrino as well as a proton. The electron produced usually is moving so fast that it escapes, but electrons can shuffle around. Generally a lot of free radicals are created.

Richard E. Barrans Jr., Ph.D., M.Ed.
Department of Physics and Astronomy
University of Wyoming

I do not know if I have ever seen an answer to this question. As you know, however, this decay does not occur in a vacuum; it occurs with all kinds of other substances around it. Further, the beta particle that is emitted is--for all practical purposes--an electron. Conceivably, the beta particle (electron) could be entrapped by neighboring newly formed nitrogens.

Ray Tedder, NBCT

You are correct in stating that as C-14 becomes N-14 there is a transmutation of a neutron to a proton, and we should therefore expect that N-14 is positively charged. However, from an empirical point of view we cannot capture a single C-14 becoming N-14. We can only observe a mass of C-14 and realize that there is a spontaneous decay in the sample to N-14. We can then look at the N-14 to see if it is neutral or not. When we do this we find very little (if any) positively charged N-14. Now, is it because the N-14 is produced as a neutral atom, is produced as a positively charged N-14 which captures the electron that is also produced in the same beta decay, or the positive N-14 is produced and then later captures free electrons? Hard to say, because all we can observe is the mass of the sample, not individual C-14s and N-14s.

Greg (Roberto Gregorius)

Update: June 2012
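For reference, the decay the answers describe can be written as a balanced nuclear equation (standard notation, not part of the original exchange). The charge bookkeeping shows why the daughter nitrogen starts out as a positive ion:

```latex
{}^{14}_{6}\mathrm{C} \;\longrightarrow\; {}^{14}_{7}\mathrm{N}^{+} + e^{-} + \bar{\nu}_{e}
% Mass number: 14 = 14 + 0 + 0
% Charge:      +6 = (+7) + (-1) + 0
% The daughter is born as N+ and stays ionized only until it captures
% an electron from its surroundings, as discussed above.
```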
<urn:uuid:1997ac85-f4be-4070-8b1f-96a8291c8342>
3.546875
501
Q&A Forum
Science & Tech.
64.684351
The Effect of Dieoffs of Asian Clams (Corbicula fluminea) on Native Freshwater Mussels (Unionidae)

There is a great deal of concern about the declining freshwater mussel fauna of North America. Although deteriorating water quality and habitat degradation may account for much of the decline, it has been suggested that the exotic Asian clam, Corbicula fluminea, may be having an effect on native unionids. Negative impacts may result directly from competition or indirectly, because of Corbicula population crashes that release ammonia and reduce dissolved oxygen in the sediment. Laboratory tests were conducted to determine the relative sensitivity of native mussel and Asian clam life stages to unionized ammonia, and mussel glochidia were the most sensitive (24-hr LC50 of 0.11 mg/L NH3-N). Juvenile and adult mussels were similarly sensitive, with average 96-hr LC50s of 0.49 and 0.52 mg/L NH3-N, respectively. Adult C. fluminea were the least sensitive, having an average LC50 of 0.80 mg/L NH3-N. The EPA standard test organism, Ceriodaphnia dubia, had one of the lowest LC50s (0.07 mg/L NH3-N) of the five species, and the fathead minnow, Pimephales promelas, had the highest (1.18 mg/L). The differing sensitivities of the various life stages are important when trying to determine the impact of an Asian clam dieoff. If a dieoff occurs at a time of year when the more sensitive life stages, such as glochidia, are present, then the impact on mussel recruitment may be greater.

Two miniature artificial stream tests were used to determine the effect of clam density on dieoff rate, ammonia production and dissolved oxygen levels. Only clams at the highest density of 10,000/m2 experienced 100% mortality. Unionized ammonia levels exceeded 4.0 mg/L, and dissolved oxygen levels dropped below 1.0 mg/L during the dieoff. The amount of unionized ammonia produced was twofold greater than the concentration that produced an LC50 in adult C. fluminea and ~40 times greater than the LC50 for V. iris glochidia. Factors thought to have contributed to the C. fluminea dieoff were flow rate, low dissolved oxygen levels, temperature and perhaps ammonia. A complete dieoff did not occur until flow was stopped and dissolved oxygen concentrations began to drop. One hundred percent mortality occurred in 38 days for the first test, and 21 days in the second test. Higher water temperatures in the first test (26 °C) compared to the second test (average 21.7 °C) are thought to have resulted in the faster dieoff.
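Because the toxicity values above are expressed as un-ionized ammonia (NH3-N), it may help to see how the un-ionized fraction is derived from total ammonia. The sketch below is not code from this study; it uses the widely cited temperature-dependent pKa of Emerson et al. (1975) to illustrate the standard calculation:

```python
def fraction_unionized(ph: float, temp_c: float) -> float:
    """Fraction of total ammonia present as toxic, un-ionized NH3.

    pKa from Emerson et al. (1975): pKa = 0.09018 + 2729.92 / T(K).
    """
    pka = 0.09018 + 2729.92 / (temp_c + 273.15)
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# Warmer, more alkaline water shifts total ammonia toward the toxic NH3 form.
for ph, t in [(7.0, 21.7), (8.0, 21.7), (8.0, 26.0)]:
    print(f"pH {ph}, {t} C: {fraction_unionized(ph, t):.1%} un-ionized")
```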
<urn:uuid:ad3f0c84-4a11-4f58-8720-97860a6824e0>
2.78125
588
Knowledge Article
Science & Tech.
51.423157
Plasma classification (types of plasma)
From The Plasma Universe (a Wikipedia-like encyclopedia)

Plasmas are described by many characteristics, such as temperature, degree of ionization, and density; the magnitudes of these, and the approximations made by the models describing them, give rise to plasmas that may be classified in different ways.

Pseudo-plasmas vs real plasmas

A real plasma may have complex characteristics that exhibit complex phenomena. To model its behavior, scientists may approximate and simplify a real plasma's characteristics; this pseudo-plasma may or may not be an adequate representation of a real plasma. Pseudo-plasmas tend to neglect double layers, instabilities, filamentary structures, plasma beams, electric currents, and other potentially important properties.

Cold, warm and hot plasmas

In the laboratory, in the positive column of a glow discharge tube:

- "..there is a plasma composed of the same number of electrons and ions. [..] In low pressure gas discharge, the collision rate between electrons and gas molecules is not frequent enough for thermal equilibrium to exist between the energy of the electrons and the gas molecules. So the high-energy particles are mostly composed of electrons while the energy of the gas molecules is around room temperature. We have Te >> Ti >> Tg where Te, Ti and Tg are the temperatures of the electron, ion and gas molecules, respectively. This type of plasma is called a "cold plasma"."
- "In a high pressure gas discharge the collision between electrons and gas molecules occurs frequently. This causes thermal equilibrium between the electrons and gas molecules. We have Te ≃ Tg. We call this type of plasma a "hot plasma"."
- "In cold plasma, the degree of ionization is below 10^-4."

Hot plasma (thermal plasma)

A hot plasma is one which approaches a state of local thermodynamic equilibrium (LTE). A hot plasma is also called a thermal plasma, or, in the Russian literature, a "low temperature" plasma in order to distinguish it from a thermonuclear fusion plasma. Such plasmas can be produced by atmospheric arcs, sparks and flames.

Cold plasma (non-thermal plasma)

A cold plasma is one in which the thermal motion of the ions can be ignored. Consequently there is no pressure force, the magnetic force can be ignored, and only the electric force is considered to act on the particles. Examples of cold plasmas include the Earth's ionosphere (about 1000 K, compared with the Earth's ring current temperature of about 10^8 K) and the glow discharge in a fluorescent tube.

An ultracold plasma is one which occurs at temperatures as low as 1 K and may be formed by photoionizing laser-cooled atoms. Ultracold plasmas tend to be rather delicate, experiments being carried out in vacuum.

Degree of ionization

The degree of ionization of a plasma is the proportion of charged particles to the total number of particles including neutrals and ions, and is defined as:

α = n+ / (n + n+)

where n is the number density of neutrals and n+ is the number density of charged particles (α is the Greek letter alpha).

Degree required to exhibit plasma behaviour

Umran S. Inan et al write:

- "It turns out that a very low degree of ionization is sufficient for a gas to exhibit electromagnetic properties and behave as a plasma: a gas achieves an electrical conductivity of about half its possible maximum at about 0.1% ionization and has a conductivity nearly equal to that of a fully ionized gas at about 1% ionization."

In a plasma where the degree of ionization is high, charged particle collisions dominate.
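As a quick numerical illustration of the definition of α above (a sketch added here, not part of the encyclopedia entry; the number densities are illustrative orders of magnitude, not measurements):

```python
def degree_of_ionization(n_neutral: float, n_charged: float) -> float:
    """alpha = n+ / (n + n+): the fraction of all particles that are charged."""
    return n_charged / (n_neutral + n_charged)

# Representative, purely illustrative number densities (cm^-3).
examples = {
    "fluorescent-tube discharge": (1e16, 1e12),  # weakly ionized
    "ionosphere (illustrative)":  (1e9, 2e6),    # alpha ~ 2x10^-3, as quoted below
    "fusion plasma":              (1e0, 1e14),   # essentially fully ionized
}
for name, (n, n_plus) in examples.items():
    print(f"{name}: alpha = {degree_of_ionization(n, n_plus):.2e}")
```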
In plasmas with a low degree of ionization, collisions between charged particles and neutrals dominate. The degree of ionization at which a gas begins to behave as a plasma varies between different types of plasma, and may be as little as 10^-6:

- "Among the many types of plasma, those commonly employed for plasma processing are low temperature, low density, non-equilibrium, collision-dominated environments. By low temperature, we mean "cold" plasmas with a temperature normally ranging from 300 K to 600 K; by low density we mean plasmas with neutral gas number densities of approximately 10^13 to 10^16 molecules cm^-3 (pressure between ~0.1 and 10^3 Pa) which are weakly ionized, between 10^-6 and 10^-1."
- ".. Coulomb collisions will dominate over collisions with neutrals in any plasma that is even just a few percent ionized. Only if the ionization level is very low (<10^-3) can neutral collisions dominate."

Alfvén and Arrhenius also note:

- "The transition between a fully ionized plasma and a partially ionized plasma, and vice versa, is often discontinuous (Lehnert, 1970b). When the input energy to the plasma increases gradually, the degree of ionization jumps suddenly from a fraction of 1 percent to full ionization. Under certain conditions, the border between a fully ionized and a weakly ionized plasma is very sharp."

Fully ionized plasma

A fully ionized plasma has a degree of ionization approaching 1 (i.e. 100%). Examples include the solar wind (interplanetary medium), stellar interiors (the Sun's core), and fusion plasmas.

Partially ionized plasma (weakly ionized gas)

A partially ionized plasma has a degree of ionization that is less than 1. Examples include the ionosphere (2x10^-3) and gas discharge tubes. The aurora may exhibit properties of both a weakly ionized gas and a weakly ionized plasma:

- "If we observe an aurora in the night sky we get a conspicuous and spectacular demonstration of the difference between gas and plasma behavior. Faint aurorae are often diffuse and spread over large areas. They fit reasonably well into the picture of an ionized gas. The degree of ionization is so low that the medium still has some of the physical properties of a gas that is homogeneous over large volumes. However, in certain other cases (e.g., when the auroral intensity increases), the aurora becomes highly inhomogeneous, consisting of a multitude of rays, thin arcs, and draperies, a conspicuous illustration of the basic properties of most magnetized plasmas."

Associate Professor of Physics Richard Fitzpatrick writes:

- "Note that plasma-like behaviour ensues after a remarkably small fraction of the gas has undergone ionization. Thus, fractionally ionized gases exhibit most of the exotic phenomena characteristic of fully ionized gases."

High density plasma

Medium density plasma

Low density plasma

Dusty plasmas and grain plasmas

A dusty plasma is a plasma containing nanometer- or micrometer-sized particles suspended in it. A grain plasma contains larger particles than dusty plasmas. Examples include comets, planetary rings, exposed dusty surfaces, and the zodiacal dust cloud.

Colloidal plasmas, liquid plasmas and plasma crystals

"A macroscopic Coulomb crystal of solid particles in a plasma has been observed. Images of a cloud of 7-μm "dust" particles, which are charged and levitated in a weakly ionized argon plasma, reveal a hexagonal crystal structure. The crystal is visible to the unaided eye."
"Colloidal plasmas may "condense" under certain conditions into liquid and crystalline states, while retaining their essential plasma properties. This "plasma condensation" therefore leads to new states of matter: "liquid plasmas" and "plasma crystals." The experimental discovery was first reported in 1994". "Liquid and crystalline phases can be formed in so-called complex plasmas — plasmas enriched with solid particles in the nano- to micrometre range. The particles absorb electrons and ions and charge negatively up to a few volts. Due to their high mass compared to that of electrons and ions the particles dominate the processes in the plasma and can be observed on the most fundamental — the kinetic level. Through the strong Coulomb interaction between the particles it is possible that the particle clouds form fluid and crystalline structures. The latter is called 'plasma crystal'." Active and passive plasmas Hannes Alfvén writes: - "Passive plasma regions, which can be described by classical hydrodynamic theory. They transmit waves and high energy charged particles but if the field-aligned currents exceed a certain value they are transferred into. - Active plasma regions: These carry field-aligned currents which give them filamentary or sheet structure with thickness down to a few cyclotron radii (ionic or even electronic). They transmit energy from one region to another and produce electric double layers which accelerate particles to high energies. Active regions cannot be described by hydromagnetic theories. Boundary conditions are essential and may be introduced by circuit theory" - "These regions may transmit different kinds of plasma waves and flow of high energy particles. There may be transient currents perpendicular to the magnetic field changing the state of motion of the plasma but not necessarily associated with strong electric fields and currents parallel to the magnetic field. A plasma of this kind fills most of space." - "Besides the passive plasma regions there are also small but very important regions where filamentary and sheet currents flow (Alfvén, 1977a). By transferring energy and producing sharp borders between different regions of passive plasmas, they are of decisive importance to the overall behaviour of plasmas in space. There are two different - but somewhat related - types of such regions which we shall call plasma cables and boundary current sheets." Ideal and non-ideal plasmas An ideal plasma is one in which Coulomb collisions are negligible, otherwise the plasma is non-ideal. "At low densities, a low-temperature, partly ionized plasma can be regarded as a mixture of ideal gases of electrons, atoms and ions. The particles travel at thermal velocities, mainly along straight paths, and collide with each other only occasionally. In other words, the free path times prove greater than those of interparticle interaction. With an increase in density, mean distances between the particles decrease and the particles start spending even more time interacting with each other, that is, in the fields of surrounding particles. Under these conditions, the mean energy of interparticle interaction increases. When this energy becomes comparable with the mean kinetic energy of thermal motion, the plasma becomes non-ideal." 
High Energy Density Plasmas (HED plasmas)

References:

- ^ Kiyotaka Wasa, Shigeru Hayakawa, Handbook of Sputter Deposition Technology: Principles, Technology and Applications (Materials Science and Process Technology Series) (1992), William Andrew Inc., 304 pages, ISBN 0815512805 (page 95)
- ^ Maher I. Boulos, Pierre Fauchais, Emil Pfender, Thermal Plasmas: Fundamentals and Applications (1994), Springer, ISBN 0306446073 (page 6)
- ^ Souheng Wu, Polymer Interface and Adhesion, CRC Press, ISBN 0824715330 (page 299)
- ^ Marcel Goossens, An Introduction to Plasma Astrophysics and Magnetohydrodynamics (2003), Springer, 216 pages, ISBN 1402014333 (page 25)
- ^ The Sun to the Earth -- And Beyond: Panel Reports, National Research Council (U.S.) (2003), 246 pages, ISBN 0309089727 (page 59)
- ^ A. J. van Roosmalen, J. A. G. Baggerman, S. J. H. Brader, Dry Etching for VLSI, Springer, 254 pages, ISBN 0306438356 (page 14)
- ^ T. Killian, T. Pattard, T. Pohl, and J. Rost, "Ultracold neutral plasmas", Physics Reports 449, 77 (2007)
- ^ Steven L. Rolston, "Ultracold neutral plasmas", Trends, July 14, 2008, American Physical Society
- ^ Umran S. Inan, Marek Gołkowski, Principles of Plasma Physics for Engineers and Scientists, Cambridge University Press, 2011, ISBN 0521193729, 9780521193726, 284 pages (page 4)
- ^ Loucas G. Christophorou, James Kenneth Olthoff, Fundamental Electron Interactions With Plasma Processing Gases (2004), Section 3.1, Low-Temperature, Low-Density, Non-Equilibrium Plasmas, 76 pages, ISBN 0306480379 (page 39)
- ^ Robert J. Goldston, Paul Harding Rutherford, Introduction to Plasma Physics, "Fully and Partially Ionized Plasmas" (page 164)
- ^ Lehnert, B., "Minimum temperature and power effect of cosmical plasmas interacting with neutral gas", Cosmic Electrodynamics (1970) 1:397
- ^ a b Hannes Alfvén and Gustaf Arrhenius, Evolution of the Solar System (1976), Part C, Plasma and Condensation, "15. Plasma Physics and Hetegony"
- ^ Francis Delobeau, The Environment of the Earth (1971), 132 pages, ISBN 902770208X (page 13)
- ^ Richard Fitzpatrick, Introduction to Plasma Physics: A Graduate Level Course, "Introduction: 1.2 What is plasma?", p. 6
- ^ Mihaly Horanyi and Colin J. Mitchell, "Dusty Plasmas in Space: 6. Saturn's Rings: A Dusty Plasma Laboratory", Journal of Plasma and Fusion Research, Vol. 82, No. 2, pages 98-102 (2006)
- ^ H. Thomas et al., "Plasma Crystal: Coulomb Crystallization in a Dusty Plasma", Phys. Rev. Lett. 73, 652-655 (1994)
- ^ G. E. Morfill, H. M. Thomas, U. Konopka, and M. Zuzic, "The plasma condensation: Liquid and crystalline plasmas", Physics of Plasmas 6, 1769 (1999)
- ^ Gregor E. Morfill et al., "A review of liquid and crystalline plasmas—new physical states of matter?", Plasma Phys. Control. Fusion 44 (2002) B263-B277
- ^ Hannes Alfvén, "Plasma in laboratory and space", Journal de Physique Colloques 40, C7 (1979) C7-1-C7-19
- ^ Hannes Alfvén, "Electric Currents in Cosmic Plasmas", Reviews of Geophysics and Space Physics, vol. 15, Aug. 1977, pages 271-284
- ^ V. E. Fortov, Igor T. Iakubov, The Physics of Non-Ideal Plasma, World Scientific, 2000, ISBN 9810233051, ISBN 9789810233051, 403 pages (page 1)
<urn:uuid:fdce9fc7-c001-41cc-b62e-4c21261c1613>
3.796875
3,181
Knowledge Article
Science & Tech.
43.019745
Mopalia lignosa (Gould, 1846)

Mopalia lignosa, San Simeon, CA (Photo by: Dave Cowles, 1997)

How to Distinguish from Similar Species: The hairs are flexible, not stiff and thick as in Mopalia muscosa. Other Mopalias have longer hairs or have tubercles instead of pits on the plates.

Geographical Range: Prince William Sound, Alaska to Point Conception, CA

Depth Range: Middle and low intertidal

Habitat: Sides or bottoms of large boulders on open coast

Biology/Natural History: This species is usually under rocks, and begins crawling when the rock is turned over. They are the fastest-crawling chitons I have seen (are those racing stripes on the plates?). They feed on many types of algae, especially diatoms and Ulva, and sense light with aesthetes in the plates.

Morris et al., 1980
O'Clair and O'Clair, 1998

Another view of Mopalia lignosa. Note the hairs between the plates. Photo by Dave Cowles, San Simeon, CA 1997
<urn:uuid:1d1eb7a5-7c95-4d3c-b6b8-aad79e83decd>
2.71875
268
Knowledge Article
Science & Tech.
44.446491
Which came first, the galaxy or the black hole?

Every big galaxy we see in the sky has a supermassive black hole at its heart, a dark monster that may be millions or billions of times the mass of the Sun. But did the black hole form first, or the galaxy… or did they grow together? Astronomers think they may be on to the answer: black holes form first, or at least grow more quickly, and galaxies grow around them.

A supermassive black hole gobbling down matter at the center of a galaxy. Illustration courtesy NASA/Dana Berry, SkyWorks Digital Inc.

It's been known for a decade or two that big galaxies have giant black holes in their cores. It was a surprise at first, but now it's expected that they all do. The Milky Way certainly does; our black hole is 4 million times the mass of the Sun, and it sits at the geometric center of the galaxy. Every time we look carefully at big galaxies, we see evidence for such a beast.

The second surprise came when the masses of the black holes were compared to the masses of their host galaxies. No matter how big or how small the galaxy, the mass of the black hole it harbored scaled right along with it (or, more accurately, with the bulge of stars, gas, and dust at the galactic center… sort of like the downtown region of a big city). Galaxies with bigger central bulges have bigger black holes; smaller galaxy bulges mean more modest black holes. This shocked astronomers.

Mind you, as scary and big as these galactic black holes are, they are still a tiny fraction of the mass of their host galaxy's bulge, about one-tenth of a percent, in fact. And that's just for the bulge; the black hole is only about a thousandth of a percent of the total mass of the entire galaxy. So the black hole is downright dinky compared to the galaxy. How could it possibly affect this gigantic structure around it?

This scaling issue means that somehow, the black hole and the galaxy must "know" about each other; either the black hole affected how big the galaxy got, or the galaxy itself somehow shaped the size of the black hole, or some third characteristic shaped them both. But which was it?

One way to figure out which came first, the galaxy or the black hole, is to look at very distant galaxies. When we look at nearby galaxies we see this scaling between the galaxy and its black hole. But in these cases, the galaxies are old compared to the age of the Universe; we're looking at 10-12 billion year old galaxies in a 13.7 billion year old Universe. By now, most galaxy and black hole-forming processes have settled down and stopped. But if we look at very distant galaxies, we see them as they were when they were much younger, only a billion or two years old. If black holes grow more quickly or more slowly than their host galaxies, then looking that far back in distance and time may make that easier to see.

This is what Chris Carilli, of the National Radio Astronomy Observatory (NRAO), and an international team of astronomers did.

Radio observations of young galaxies find that the black holes form first, or at least grow more quickly than the galaxy. Credit: NRAO/AUI/NSF, SDSS

When they looked at distant, young galaxies, they found that the black holes were more massive relative to their host galaxies than they are today! That's a big result. It implies very strongly that somehow, the black holes grew first, with the galaxies growing more slowly around them. This is the first big breakthrough in the galaxy/black hole chicken-and-egg problem.
It’s a key finding that will allow astronomers to pursue the myriad questions that we still have… like, do the formations of the galaxies and black holes start at the same time, but black holes grow more quickly? Does the black hole reach its adult size and stop while the galaxy is still growing? And the real killer question: what the heck process is going on that relates the final mass of the black hole to the mass of the galaxy? We still don’t know. But we have ideas… one is that black holes are messy eaters. As matter falls onto the nascent hole, it forms a flattened disk. This channels a vast wind that blows out from the superheated material in the disk that is just above the black hole’s point of no return. This gale of subatomic particles and energy blows out into the galaxy, affecting how stars form and how material from the proto-galactic cloud falls onto the galaxy itself. Eventually, the black hole runs out of material to feed on, the wind shuts off, and the growth stops… but at the same time, that wind has blown away all the material the galaxy was using to grow and form stars. So this process would link the growth and eventual size of the black hole to the far larger galaxy around it. The thing is, this is just an idea; we don’t know if it’s right or not! But this new work studying distant galaxies has provided a vital piece of evidence to what’s actually going on. Now astronomers can focus their attention on these galaxies and see if they can tease out more details, more clues so that they can solve one of the biggest outstanding cosmic mysteries today: how, specifically, did galaxies form? Or, if you prefer: how did we get here? After all, almost every question in astronomy boils down to this one. And every day, we get a bit closer to the answer. Links to this Post - O buraco negro no fim do universo talvez não exista | Bender Blog | January 7, 2009 - Personal Science-Related Starlinks | Mike Brotherton: SF Writer | January 8, 2009 - Interesting stuff for the 2009 silly season « The Outer Hoard | January 15, 2009 - A black hole wind is rising | Bad Astronomy | Discover Magazine | January 21, 2009 - Las galaxias crecen de las semillas de un agujero negro. | InternetParaGanar.com | October 27, 2009
<urn:uuid:2a8781dd-5be9-4b20-b082-f6e91542e9fe>
3.765625
1,312
Personal Blog
Science & Tech.
58.151655
In a Los Angeles laboratory, researchers have let loose scores of what amount to living micromachines. Dwarfed by a comma, each tiny device consists of an arch of gold coated along its inner surface with a sheath of cardiac muscle grown from rat cells. With each of the muscle bundles' automatic cycles of contraction and relaxation, the device takes a step. Viewed under a microscope, "they move very fast," says bioengineer Jianzhong Xi of the University of California, Los Angeles (UCLA). "The first time I saw that, it was kind of scary." Xi and his UCLA colleagues Jacob J. Schmidt and Carlo D. Montemagno describe their musclebots in the February Nature Materials.
<urn:uuid:75d38764-ffa4-4e40-8b9d-4190118883c3>
2.84375
148
Comment Section
Science & Tech.
46.085714
Effects of Clearcutting on Soil Water Depletion in an Engelmann Spruce Stand Water Resources Research Soil water depletion was monitored for five growing seasons on 0.4 hectare plots in a mature stand of Engelmann spruce in northern Utah. Three plots were then clearcut and in the first season soil water depletion was 20 to 25 cm less than on an uncut plot. This change, which represents a savings of water previously lost to evapotranspiration, is considerably greater than reported for comparable studies in aspen and lodgepole pine. The effects of clearcutting on soil water depletion are expected to persist for as many as 50 years. In the first winter after cutting, peak snow water equivalent in the clearcut plots averaged 91 cm, or 31 cm greater than for the uncut control plots. Hart, G. E., and D. A. Lomas (1979), Effects of clearcutting on soil water depletion in an Engelmann spruce stand, Water Resour. Res., 15(6), 1598–1602, doi:10.1029/WR015i006p01598.
<urn:uuid:b7106d7e-b6cd-4fc6-9262-60474413df0e>
2.8125
230
Academic Writing
Science & Tech.
66.497439
Every 11 years or so, for reasons scientists have yet to fully comprehend, the sun comes a little undone. Magnetic storms rage across its surface, sunspots erupt like acne, and clouds of electrically charged particles fly outward at 2 million miles per hour. When those clouds reach Earth, they can overload power lines and disrupt communications, but they also have an oddly lyrical side: The rain of solar particles lights up the aurora, one of nature's most breathtaking spectacles.

Now is the perfect time to witness both the beauty and the beast. The current solar cycle apparently reached its maximum late last year. Auroras usually hit their peak the following year, so the view this month should be about as good as it gets. And these days, a fleet of spacecraft constantly monitors the sun's violent behavior and its earthly consequences. The results of these round-the-clock observations are available to anyone with an Internet connection; you just have to know where to look.

The SOHO satellite (sohowww.nascom.nasa.gov), launched in 1995, monitors the solar eruptions that unleash the energy of a billion atomic bombs in hours or less. For the past three years, the ACE spacecraft (www.srl.caltech.edu/ACE) has hovered a million miles sunward of Earth, measuring these blasts up to an hour before they reach us. Meanwhile, the year-old IMAGE satellite (pluto.space.swri.edu/IMAGE) generates global portraits of the subatomic particles from the sun as they crash into Earth's magnetic field. This information, combined with input from other, earlier satellites, enables researchers at the National Oceanic and Atmospheric Administration to produce plots of electrical activity (sec.noaa.gov/pmap) as readily as TV weathercasters track hurricanes.

Auroras light up Earth's poles as a solar storm sweeps by. The IMAGE satellite captures the swiftly changing glow. Photo by Harald Frey, Stephen Mende/Space Sciences Laboratory/University of California at Berkeley

Early warnings from our space sentinels have helped us diminish the destructive impact of the sun's outbursts. On March 13, 1989, during the last solar maximum, a dazzling dusk-to-dawn light show over North America sparked electrical discharges that overloaded transformers and plunged Quebec into darkness. This time around, power companies know when to reroute current to compensate for solar-induced surges.

As an added bonus, IMAGE and its kin reveal where auroras are most active. When the charged particles from the sun reach Earth, some get trapped in our planet's magnetic field and reverberate between the poles, appearing in the IMAGE maps as doughnut-shaped zones of electrical activity around the Arctic and the Antarctic. Each aurora seen from the ground is a tiny segment of that enormous ring. Places like Fairbanks, Alaska, that sit right below the glowing doughnut enjoy overhead displays nearly every clear night. Down in the lower 48, auroras are much rarer but can look even more spectacular because they are usually seen from the side, a perspective that is often more majestic. The auroras may slowly unfold into luminous rays, blotches, arcs, lines, and curtains, or undergo split-second changes. Usually they are pale green, but occasionally the color plunges to deep bloodred.

With satellite readings available at the click of a mouse, you no longer have to guess blindly about the timing of solar eruptions and auroras.
Today's Space Weather (sec.noaa.gov/today.html) issues three-day solar-storm forecasts and current photographs of the sun, while the SpaceWeather.com site presents a full report on current solar activity in an attractive format. But the best way to catch glorious auroras is still persistence. Look to the north from time to time on clear, moonless nights. One sighting and you'll appreciate, in a way no satellite can capture, the subtle alchemy that turns solar turmoil into a wall of ghostly fire.
<urn:uuid:2d612aac-81ed-4645-b546-d8c93f49eb24>
3.296875
839
Knowledge Article
Science & Tech.
46.529134
LCM is an abbreviation for 'Least Common Multiple'. It is also called the 'lowest common multiple' and, less commonly, the 'smallest common multiple'. For a given set of two or more integers, the least common multiple is the smallest positive integer that is divisible by every integer in the set.

The fundamental method of finding the least common multiple for a set of integers is to first list the multiples of each integer. Pick out the integers that appear in all such lists, then find the lowest among them. That is the least common multiple. For example, let us try to find the least common multiple of 4 and 6. The multiples of 4 are: 4, 8, 12, 16, 20, 24, 28, 32, 36, 40 ... The multiples of 6 are: 6, 12, 18, 24, 30, 36, 42 ... The integers common to both lists are: 12, 24, 36 ... The smallest of these is 12. Therefore, the least common multiple of 4 and 6 is 12.

There is another method to find the lowest common multiple, using the following formula: LCM of x, y = (x*y)/(GCF of x and y). Let us try this method to find the least common multiple of 12 and 16. The GCF of 12 and 16 is 4. Therefore, the least common multiple of 12 and 16 = (12*16)/4 = 3*16 = 48.

In cases where you need to find the lowest common multiple of many numbers, especially large ones, you can find it by prime factorization. The method here is to do the prime factorization of each number. The lowest common multiple is then the product of the highest power of each prime number appearing in any of the factorizations. Let us find the least common multiple of 9 and 12 by this method. The prime factorization of 9 is 3*3 = 3^2. The prime factorization of 12 is 3*4 = 3*2*2 = 3*2^2. The product of the highest powers of the primes is 3^2 * 2^2 = 9*4 = 36. Hence, the least common multiple of 9 and 12 is 36.

Knowing the lowest common multiple of a set of integers is very helpful for finding the lowest common denominator of a set of fractions: the lowest common denominator is the lowest common multiple of the denominators of the given fractions. LCM problems are therefore mostly found in such applications. For example, the lowest common denominator of the fractions (1/3) and (1/5) is 15, which is the lowest common multiple of 3 and 5.
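All three methods are easy to express in code. Here is a short illustrative Python sketch (the function names are mine, not from the text) using the GCF-based formula, which extends to many numbers by folding the two-number LCM across the list:

```python
from functools import reduce
from math import gcd  # GCF and GCD are the same thing

def lcm(x: int, y: int) -> int:
    """Least common multiple via LCM(x, y) = x*y / GCF(x, y)."""
    return x * y // gcd(x, y)

def lcm_many(*numbers: int) -> int:
    """LCM of any number of integers: fold the two-number LCM across them."""
    return reduce(lcm, numbers)

print(lcm(4, 6))       # 12, matching the listing method
print(lcm(12, 16))     # 48, matching the formula method
print(lcm(9, 12))      # 36, matching the prime-factorization method
print(lcm_many(3, 5))  # 15, the lowest common denominator of 1/3 and 1/5
```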
<urn:uuid:83130e85-e607-41db-9cc3-25e27b7a0799>
4.34375
575
Tutorial
Science & Tech.
76.162555
Annu. Rev. Astron. Astrophys. 1999. 37. Copyright © 1999. All rights reserved.

Far-ultraviolet radiation was first detected from early-type galaxies by the Orbiting Astronomical Observatory-2 in 1969. This was a major surprise because it had been expected that such old stellar populations would be entirely dark in the far-UV. To the contrary, not only did elliptical galaxies and the bulges of early-type spirals contain bright UV sources, but their energy distributions actually increased to shorter wavelengths over the range 2000 to 1200 Å, resembling the Rayleigh-Jeans tail of a hot thermal source with Te ≳ 20,000 K. The effect was therefore called the "UV-upturn," the "UV rising-branch," or, more simply, the "UVX." It was only the second new phenomenon (after X-rays from the active galaxy M87) discovered by space astronomy outside our Galaxy.

Controversy flourished over the interpretation of the UVX for the next 20 years because of the slow accumulation of high-quality UV data. More recent evidence has winnowed the alternatives and strongly supports the idea that the UVX is a stellar phenomenon (as opposed to nuclear activity, for example) associated with the old, dominant, metal-rich population of early-type galaxies. It is the most variable photometric feature of old stellar populations. It appears to be produced mainly by low-mass, helium-burning stars in extreme (high temperature) horizontal branch and subsequent phases of evolution. Such objects have very thin envelopes (M_env ≲ 0.05 M_sun) overlying their cores. On both theoretical and observational grounds, the lifetime UV outputs of these stars are exquisitely sensitive to their physical properties. They depend strongly, for instance, on helium abundance; the UV spectrum is the only observable in the integrated light of old populations with the potential to constrain their He abundances. More remarkably, changes of only a few 0.01 M_sun in the mean envelope mass of an extreme horizontal branch population can significantly affect the UV spectrum of an elliptical galaxy. If this interpretation is correct, then far-UV observations become a uniquely delicate probe of the star formation and chemical enrichment histories of elliptical galaxies. They do, that is, once we understand the basic astrophysics of these advanced evolutionary phases and their production by their parent populations. However, this is one of the last underexplored corners of normal stellar evolution, and a complete interpretation is not yet at hand, even for nearby systems such as globular clusters where full color-magnitude diagram information is available.

The key physical process involved in producing the small-envelope stars is mass loss during low-gravity phases on the red giant branch and subsequent asymptotic giant branch. Serious modeling of mass loss has only recently begun, and we so far have little intuition for the effects of population characteristics such as metal abundance. Although the interpretation of the integrated light of galaxies has heretofore relied on astrophysics established and tested in the context of local stars, it may be that the UVX problem will be the first where observations of galaxies will act as strong diagnostics of stellar evolution theory. At any rate, it is clear that to understand the controlling mechanisms of the UVX in galaxies we must conjoin integrated light observations of distant galaxies with the stellar astrophysics of globular clusters and hot field stars in our own and nearby galaxies.
There are broader ramifications of this interpretation as well. UV light acts as a tracer for stellar mass loss. As the primary source of fresh interstellar gas and dust in old populations, stellar mass loss is directly linked to a diverse set of other important phenomena, including gas recycling into young generations of stars, galactic winds, X-ray cooling flows, far-infrared interstellar emission, dust in galaxy cores, and gas-accretion fueling of nuclear black holes. The UV light also traces the production of low-mass stellar remnants. The hot UVX stars, regardless of their origin, are important distributed contributors to the interstellar ionizing radiation field of old populations. It is possible that the UVX is influenced by, and therefore reflects, galaxy dynamics. Finally, characterization of the UV light of nearby ellipticals, its separation into young or old stellar sources, and its predicted evolution is also basic to the development of realistic "K-corrections" for cosmological applications to high redshift galaxies and to interpretation of the cosmic background light.

There has been excellent progress over the last decade in understanding the UVX phenomenon, but the first question that might occur to the reader is why it took 30 years simply to identify its source. The answer lies in our historically limited capability for extragalactic UV observations, a subject we discuss in the next section. Following that, we describe the discovery of far-UV light from old populations and its basic observational characteristics, the lively debate over the leading alternative interpretations, and the confluence of theory and new observations that has led to the currently accepted interpretation. We also discuss several of the other observational opportunities presented by the generally faint UV background in galaxies.

By "early-type" galaxies in this paper, we mean ellipticals, S0s, and the large bulges of spirals of types Sa and Sb, although most of the detailed analysis to date has concentrated on Es and S0s.
<urn:uuid:082a5665-3ff5-4ab5-83d7-2e7caaee124e>
3.3125
1,093
Academic Writing
Science & Tech.
28.941855
GISS Inventing Temperatures In Africa–Part II

By Paul Homewood

In an earlier post, GISS Inventing Temperatures In Africa, we discovered there was only one (yes, one) rural station in the whole GISS Temp database for Africa that met these two conditions:

1) A record from 1941 or earlier up to 2011. (There are many stations with gaps in their record, but I have not excluded these.)
2) No more than 20% missing data between 2001 and 2011.

Further analysis suggested that even for the few rural stations still operational, most seemed to have very short lifespans, making them useless for assessing climatic trends. Furthermore, the readings in the last few years were extremely sparse, with typically 10 months each year with no temperature logged at all. All of this seemed to cast great doubt on the accuracy of the GISS temperature record, which claims that Africa is one of the fastest warming places in the world, second only to polar regions.

So how does the GISS surface record compare with the satellite record? Using the tool at CO2 Science, we can plot temperature trends for specific areas of the planet by longitude and latitude. GISS data is not available in such detail, but GHCN data is, and as we have seen already, GISS temperature trends are essentially based on GHCN data.

So, first of all, let us look at the area [10E to 40E] by [20S to 20N]. This covers most of the southern part of the continent from Botswana up past the equator and up to Sudan and Niger. (I have tried as far as possible to keep sea area to a minimum, hence the omission of South Africa.) GHCN and UAH (MSU) temperatures for this area are shown below. The trend increase for GHCN is 0.79C, while for UAH it is only 0.26C.

The second area to look at is [20N to 30N] by [10W to 30E]. This covers most of North Africa from Mauritania in the West to Egypt in the East. (This area shows the highest temperature anomaly on the GISS map, indicating that 2011 is between 1C and 2C warmer than the 1951-80 baseline.) In this case, the trend increase is 1.27C for GHCN and 0.33C for UAH.

Although the GHCN record is incomplete in the 1990's, the GHCN trend is consistent with the GISS claim of an anomaly of up to 2C. In the first area, too, the GHCN figures correlated well with the GISS map, which showed a mixture of areas ranging from 0.2C in the South to 1.0C in the North. The UAH figures do pick up the fact that the Northern band is very slightly warmer than the Southern section, but seem to indicate that the GISS/GHCN surface temperature in Africa is so grossly overestimated as to be worthless.

We already know that GISS temperature anomalies in the Arctic, which again are not based on anything remotely resembling a proper temperature record, are much higher than what are shown by UAH satellites. It is beginning to appear that the whole GISS Surface Temperature Record is now utterly unfit for purpose and irretrievably damaged.
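For readers unfamiliar with the "trend increase" figures quoted above: they come from fitting a straight line to an annual anomaly series and multiplying the slope by the length of the record. A minimal sketch of that standard calculation follows; the data here are made-up placeholders, not actual GHCN or UAH values.

```python
import numpy as np

def trend_increase(years: np.ndarray, anomalies: np.ndarray) -> float:
    """Total change over the record implied by a least-squares linear fit.

    Returns slope (degrees C per year) times the span of the record in years.
    """
    slope, _intercept = np.polyfit(years, anomalies, 1)
    return slope * (years[-1] - years[0])

# Placeholder series: 0.02 C/yr warming plus noise over 1979-2011.
rng = np.random.default_rng(0)
years = np.arange(1979, 2012)
anoms = 0.02 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

print(f"Trend increase: {trend_increase(years, anoms):.2f} C")  # ~0.6 C
```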
<urn:uuid:53d4f7f4-fd2a-4062-b67f-4a0da0d8b990>
3.046875
692
Personal Blog
Science & Tech.
59.406729
If a number N is expressed in binary by using only 'ones,' what can you say about its square (in binary)?

This article for the young and old talks about the origins of our number system and the important role zero has to play in it.

Using balancing scales, what is the least number of weights needed to weigh all integer masses from 1 to 1000? Placing some of the weights in the same pan as the object, how many are needed?

Start with any number of counters in any number of piles. 2 players take it in turns to remove any number of counters from a single pile. The loser is the player who takes the last counter.

A collection of games on the NIM theme.

Have you seen this way of doing multiplication?

A composite number is one that is neither prime nor 1. Show that 10201 is composite in any base.

Explore a number pattern which has the same symmetries in different bases.

Let N be a six digit number with distinct digits. Find the number N given that the numbers N, 2N, 3N, 4N, 5N, 6N, when written underneath each other, form a latin square (that is, each row and each...).

In 'Secret Transmissions', Agent X could send four-digit codes error free. Can you devise an error-correcting system for codes with more than four digits?

Start with any number of counters in any number of piles. 2 players take it in turns to remove any number of counters from a single pile. The winner is the player to take the last counter.

How can Agent X transmit data on a faulty line and be sure that her message will get through?

Show that the infinite set of finite (or terminating) binary sequences can be written as an ordered list whereas the infinite set of all infinite binary sequences cannot.

We are used to writing numbers in base ten, using 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. E.g. 75 means 7 tens and five units. This article explains how numbers can be written in any number base.

Find b where 3723(base 10) = 123(base b).

Explore the factors of the numbers which are written as 10101 in different number bases. Prove that the numbers 10201, 11011 and 10101 are composite in any base.

You have worked out a secret code with a friend. Every letter in the alphabet can be represented by a binary value.

Find all 3 digit numbers such that by adding the first digit, the square of the second and the cube of the third you get the original number, for example 1 + 3^2 + 5^3 = 135.

An example of a simple Public Key code, called the Knapsack Code, is described in this article, alongside some information on its origins. A knowledge of modular arithmetic is useful.

Ask a friend to choose a number between 1 and 63. By identifying which of the six cards contains the number they are thinking of, it is easy to tell them what the number is.
<urn:uuid:96322d0f-5f5a-45b8-886f-3df332b882eb>
4
651
Content Listing
Science & Tech.
70.69847
Here are several equations from real life. Can you work out which measurements are possible from each equation?

Which line graph, equations and physical processes go together?

This is our collection of tasks on the mathematical theme of 'Population Dynamics' for advanced students and those interested in mathematical modelling.

Are these statistical statements sometimes, always or never true? Or is it impossible to say?

Can you suggest a curve to fit some experimental data? Can you work out where the data might have come from?

Why MUST these statistical statements probably be at least a little...

Get further into power series using the fascinating Bessel's equation.

Use your skill and knowledge to place various scientific lengths in order of size.

Can you judge the length of objects with sizes ranging from 1 Angstrom to 1 million km with no wrong attempts?

How efficiently can you pack together disks?

Which units would you choose best to fit these situations?

Work with numbers big and small to estimate and calculate various quantities in physical contexts.

Many physical constants are only known to a certain accuracy. Explore the numerical error bounds in the mass of water and its constituents.

Use the computer to model an epidemic. Try out public health policies to control the spread of the epidemic, to minimise the number of sick days and deaths.

Formulate and investigate a simple mathematical model for the design of a table mat.

Which dilutions can you make using only 10ml pipettes?

Invent scenarios which would give rise to these probability density functions.

Go on a vector walk and determine which points on the walk are closest to the origin.

When you change the units, do the numbers get bigger or smaller?

See how enormously large quantities can cancel out to give a good approximation to the factorial function.

Estimate these curious quantities sufficiently accurately that you can rank them in order of size.

How much energy has gone into warming the planet?

Each week a company produces X units and sells p per cent of its stock. How should the company plan its warehouse space?

Match the descriptions of physical processes to these differential equations.

Build up the concept of the Taylor series.

Find the distance of the shortest air route at an altitude of 6000 metres between London and Cape Town given the latitudes and longitudes.

A simple application of scalar products of vectors.

Work with numbers big and small to estimate and calculate various quantities in biological contexts.

By exploring the concept of scale invariance, find the probability that a random piece of real data begins with a 1.

The probability that a passenger books a flight and does not turn up is 0.05. For an aeroplane with 400 seats, how many tickets can be sold so that only 1% of flights are over-booked?

Look at the advanced way of viewing sin and cos through their power series.

Explore the possibilities for reaction rates versus concentrations with this non-linear differential equation.

If a is the radius of the axle, b the radius of each ball-bearing, and c the radius of the hub, why does the number of ball bearings n determine the ratio c/a? Find a formula for c/a in terms of n.

To investigate the relationship between the distance the ruler drops and the time taken, we need to do some mathematical modelling...

Analyse these beautiful biological images and attempt to rank them in size order.

An observer is on top of a lighthouse. How far from the foot of the lighthouse is the horizon that the observer can see?
What functions can you make using the function machines RECIPROCAL and PRODUCT and the operator machines DIFF and INT?

Work out the numerical values for these physical quantities.

Are these estimates of physical quantities accurate?

Could nanotechnology be used to see if an artery is blocked? Or is this just science fiction?

Get some practice using big and small numbers in chemistry.

Simple models which help us to investigate how epidemics grow and die out.

Work with numbers big and small to estimate and calculate various quantities in biological contexts.

Explore the properties of perspective drawing.

Can Jo make a gym bag for her trainers from the piece of fabric she has?

Investigate circuits and record your findings in this simple introduction to truth tables and logic.

Looking at small values of functions. Motivating the existence of the Taylor expansion.

Learn about the link between logical arguments and electronic circuits. Investigate the logical connectives by making and testing your own circuits and fill in the blanks in truth tables to record...

How do you write a computer program that creates the illusion of stretching elastic bands between pegs of a Geoboard? The answer contains some surprising mathematics.

Explore the relationship between resistance and temperature.

Explore the meaning of the scalar and vector cross products and see how the two are related.

How is the length of time between the birth of an animal and the birth of its great great ... great grandparent distributed?
<urn:uuid:b72d70f3-9cf2-4574-a395-21d644885a31>
3.34375
1,031
Content Listing
Science & Tech.
49.718862
What is a Link?: Links are often the first thing that most developers want to learn. Links are what allow your readers to move around the Web. In fact, links put the "hyper" in "hypertext" (definition of hypertext).

What is an Anchor or Bookmark?: Anchors or bookmarks are a special type of link that points to a spot within a page, rather than the very top. Creating bookmarks is a little more work than creating plain links, because you have to mark the location of the anchor as well as the link itself. But once you understand that, you'll find it's easy to add internal links.

How to Use Web Page Links: You may think, I've got my a tag, what more do I need to know to use links? Well, first, there's how you're linking to your pages: use absolute or relative paths to the pages. But once you've created the link paths, you're not done. You need to create links people will click on. Free tip: people don't click on "click here". You can tell people to "click here" over and over on your site, and they still won't click. Make your links information-heavy and people will be more likely to click.

Basic Uses for Links: There are lots of things you can use links for, beyond just linking to your favorite band's website.

Link Tips and Tricks: But there is more you can do with links than just link to things.

Styling Your Links: Often, when people think of styling links, they only think of changing the color. But with modern browsers, you can use CSS to change much more than the color of your links. You can make your links look like buttons or boxes depending upon what CSS properties you use.

But before you get too excited about creating and styling your links, you should be aware of the link "dark side": link rot. This happens when good links go bad. You can avoid it on your own site by forwarding links to their new location if they move. If you have a lot of links on your site, then you'll want to use a link checker. Link checkers are a specific type of HTML validator that checks the links on your page (or sometimes your whole site) to find the broken ones. I recommend running a link checker regularly across your link collections. You would be surprised how quickly they can go bad.
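To make the link and bookmark mechanics above concrete, here is a minimal HTML sketch. The file names and id values are invented for illustration:

```html
<!-- A plain link, using a relative path and information-heavy link text -->
<a href="tutorials/anchors.html">Read the anchor and bookmark tutorial</a>

<!-- Mark a spot in the page as an anchor by giving it an id -->
<h2 id="link-rot">Link Rot</h2>

<!-- Link to that bookmark from within the same page... -->
<a href="#link-rot">Jump to the section on link rot</a>

<!-- ...or from any other page -->
<a href="links.html#link-rot">What is link rot?</a>
```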
<urn:uuid:1d3282c7-b605-4dea-9f5e-19265c0a8c33>
2.8125
509
Tutorial
Software Dev.
74.538851
Tuesday, 14 February 2012
Why do we need such a big telescope? What will it look like? How will it be built? How will it handle the data?

Tuesday, 10 January 2012
I always thought the idea of a rotating space station would neatly get around the problem of no gravity, by artificially creating it by spin. Yet none of the current spacecraft designs include this feature. Why has this simple solution been abandoned?

Wednesday, 23 November 2011
I was watching a movie where a super solar flare was about to enter the Earth and destroy all living things on the planet. Could this happen?

Tuesday, 23 August 2011
If the universe is expanding and galaxies are moving away from each other, why do astronomers say the Milky Way and Andromeda galaxies are going to collide?

Thursday, 28 July 2011
I've heard it said that without interstellar dust to obscure starlight, the night sky would be ablaze with light. Is this true?

Wednesday, 15 December 2010
If the summer solstice falls on the longest day, why doesn't it also coincide with the earliest sunrise and the latest sunset?

Thursday, 7 October 2010
I often hear scientists talking about using 'spectroscopy' to study distant stars. How does it work and what can you really tell about an object by the light it gives off?

Thursday, 17 June 2010
How far north of the equator can you see the Southern Cross group of stars?

Wednesday, 13 January 2010
How far would I have to travel above the Earth before I could say I was in space? Is there an exact point where the Earth's atmosphere ends and space begins?

Tuesday, 25 August 2009
Many years ago we were told that the farthest galaxies detected by the then most powerful telescope were 15 billion light-years away. How far away can we detect galaxies now?

Thursday, 18 June 2009
I've just bought a new phone that is GPS-enabled and it got me wondering, how does GPS actually work? And how accurate is it?

Thursday, 26 March 2009
Many Australian cities have a tremendous amount of light pollution. So how do city slickers get to view the wonders of the sky?

Thursday, 6 November 2008
How is it that these relatively tiny objects (compared to the Sun) can travel such huge distances and yet orbit the Sun rather than be swallowed by it?

Tuesday, 2 September 2008
If objects in space grow bigger as a result of smaller pieces colliding and 'sticking' together, why don't small asteroids look like coco pops?
<urn:uuid:de7743c5-9b58-43e5-8a95-14db9513b939>
3.546875
552
Q&A Forum
Science & Tech.
69.183462
Bats around the world are losing habitat to ever-expanding cities. Urbanization is more complete and irreversible than other encroachments, such as agriculture, and causes some of the greatest local extinction rates. Not only is natural habitat reduced to small, often tiny, remnant patches, but native plants are replaced with often-invasive nonnatives. The result is an ecosystem that becomes increasingly fragmented and homogenized as you move from rural areas to the urban center. Some bat species adapt and survive in an urbanized environment, roosting in buildings, for instance, and foraging at streetlights. Others, especially those with specialized behaviors, do not. Tropical forests are not immune to urban sprawl. The great Atlantic Forest that stretches along much of Brazil’s eastern coast is a “biodiversity hot spot” that once covered 476,000 square miles (1.2 million square kilometers). Today, just 8 percent – about 38,600 square miles (99,900 square kilometers) – remains intact. What once was forest is home now to about 70 percent of Brazil’s 169 million people. The southeastern state of Espírito Santo, where little is known of local bat diversity and ecology, is growing so rapidly that we must increase our knowledge of native bats in order to create and implement conservation and management plans before it is too late. The capital city of Vitória was founded in 1551 on an offshore island, but urbanization was insignificant until the 1940s. In the ’60s, the city spread onto the mainland as industrialization expanded. Its population is estimated at 1.8 million people, and the city is listed as a high-priority site for biodiversity conservation. I chose Vitória to study the complex relationship between urbanization and biodiversity and the general urban ecology of bats. I examined habitat uses and needs of various species and bats’ use of urban parklands and wooded and non-wooded streets. The research was supported in part by a BCI Student Research Scholarship funded by the U.S. Forest Service International Programs. The city has eight municipal parks where forest remnants can provide a refuge for wildlife. For our study, we chose three parks ranging in size from about 5.5 acres (2.3 hectares) to 25 acres (10 hectares) for sampling. We also sampled three wooded streets (bordered by trees and other plants) and three non-wooded streets. Researchers have reported that wooded thoroughfares can serve as corridors along which birds move from park to park in an urban landscape. Do these streets also serve as pathways for bats? We conducted a total of 31 netting sessions between August 2006 and July 2007. At each session, we monitored mist nets at all sampling sites for four hours beginning at sunset. We recorded each captured bat by species, sex, age, reproductive status, weight and dimensions. Fecal samples were collected for later analysis, and each bat was banded before release. We captured a total of 172 bats representing 10 species in four families. By far the most common species was the great fruit bat (Artibeus lituratus), with 114 captures. This was the only species netted along non-wooded streets. We also observed several bats of the Noctilionidae family (known as bulldog bats) feeding over a small parkland lake and under a bridge, but captured none. Urban parks, despite their modest size and visitation by people, were the overwhelmingly favored habitat. 
All 10 species and 71 percent of the bats in our sample were netted at the parks, which typically offer native trees (although usually with nonnatives as well), small lakes and other resources. Only two species, 40 great fruit bats and six white-lined broad-nosed bats (Platyrrhinus lineatus), were captured along wooded streets, accounting for 27 percent of the total. Barren streets produced only three great fruit bats. We also identified several day roosts of these two species in trees on the campus of the Federal University of Espírito Santo, near the sampling areas.

Fruit-eating bats completely dominated our sample with 81 percent of captures. These included great fruit bats, white-lined broad-nosed bats and big-nosed tent-making bats (Uroderma magnirostrum). Flying insect-eating bats, gleaning insectivores, nectar-eating bats and omnivores each accounted for 6 percent or less. It is quite possible that flying insectivores, such as those often reported around streetlights, are underrepresented because of the limitations of mist netting. Insect-eating bats frequently fly at higher altitudes than other bats, and their especially sophisticated echolocation makes them better able to avoid the nets. The relative scarcity of these bats may also reflect our sampling locations and the foraging preferences of urbanized insectivores.

The diversity I found in Vitória clearly falls well short of that in the intact forests of Espírito Santo, in which 36 species have been recorded. Comparing bat diversity of forests and urban areas is difficult, however, because so few forest areas have been systematically sampled. My previous research at Espírito Santo's Paulo Cesar Vinha State Park recorded a total of 15 species. These results confirm previous studies that find sharply decreased species richness and abundance in urban landscapes compared to less-disturbed areas. It also appears unlikely that wooded streets offer bats the same park-connecting corridors as reported for birds. The Vitória sample indicates that for bats, tree-lined streets are, statistically, much more similar to non-wooded streets than to urban parks. Only two bat species used them, both in low numbers. These data also suggest that the amount and complexity of vegetation likely play a large role in maintaining the abundance and diversity of bats.

This urban bat population, like others reported elsewhere, is dominated by a single species, the great fruit bat, which apparently has a high tolerance for urbanization and a strong ability to adapt to changing conditions. We also reported the first observation (published in Biota Neotropica) of this species feeding on Maclura tinctoria (fustic wood) fruits, which adds a new item to the known diet of this important seed-dispersing species. These fruits have large seeds that are not swallowed by the bats, which instead eat the pulp and discard the seeds. A diet of fruits with large, uneaten seeds may indicate an important resource that is not detected in traditional diet studies that evaluate fecal samples. The only species in our study that had not previously been reported in urban areas of Brazil is the silver-tipped myotis (Myotis albescens). I hope this contribution to the knowledge base of urban ecology can help conservation efforts in the cityscapes where increasing numbers of bats find themselves forced to adapt or die.
Sound ecological principles, such as preserving remnant natural habitat, restoring native plant species to modified habitats, and planting more native trees along streets and boulevards, can help at least some bats and other wildlife survive the growth of cities.
<urn:uuid:051552a9-4a00-465f-9987-737b0597e1d3>
3.859375
1,477
Academic Writing
Science & Tech.
38.664481
For the past two years, the hype surrounding the Simple Object Access Protocol (SOAP) has barely waned, although its opponents have gradually risen in number. While some critics are simply tired of hearing about Web services, a small handful of Internet architects have come up with a surprisingly good argument for pushing SOAP aside: there's a better method for building Web services in the form of Representational State Transfer (REST).

REST is more an old philosophy than a new technology. Whereas SOAP looks to jump-start the next phase of Internet development with a host of new specifications, the REST philosophy espouses that the existing principles and protocols of the Web are enough to create robust Web services. This means that developers who understand HTTP and XML can start building Web services right away, without needing any toolkits beyond what they normally use for Internet application development.

The key to the REST methodology is to write Web services using an interface that is already well known and widely used: the URI. For example, exposing a stock quote service, in which a user enters a stock quote symbol to return a real-time price, could be as simple as making a script accessible on a Web server via a URI. Any client or server application with HTTP support could easily call that service with an HTTP GET command. Depending on how the service provider wrote the script, the resulting HTTP response might be as simple as some standard headers and a text string containing the current price for the given ticker symbol. Or, it might be an XML document.

This interface method has significant benefits over SOAP-based services. Any developer can figure out how to create and modify a URI to access different Web resources. SOAP, on the other hand, requires specific knowledge of a new XML specification, and most developers will need a SOAP toolkit to form requests and parse the results.

Lighter on Bandwidth

Another benefit of the RESTful interface is that requests and responses can be short. SOAP requires an XML wrapper around every request and response. Once namespaces and typing are declared, a four- or five-digit stock quote in a SOAP response could require more than 10 times as many bytes as would the same response in REST. SOAP proponents argue that strong typing is a necessary feature for distributed applications. In practice, though, both the requesting application and the service know the data types ahead of time; thus, transferring that information in the requests and responses is gratuitous.

How does one know the data types, and their locations in the response, ahead of time? Like SOAP, REST still needs a corresponding document that outlines input parameters and output data. The good part is that REST is flexible enough that developers could write WSDL files for their services if such a formal declaration were necessary. Otherwise, the declaration could be as simple as a human-readable Web page that says, "Give this service an input of some stock ticker symbol, in the format q=symbol, and it will return the current price of one share of stock as a text string."

Probably the most interesting aspect of the REST vs. SOAP debate is the security angle. Although the SOAP camp insists that sending remote procedure calls through standard HTTP ports is a good way to ensure Web services support across organizational boundaries, REST followers argue that the practice is a major design flaw that compromises network safety.
REST calls also go over HTTP or HTTPS, but with REST the administrator (or firewall) can discern the intent of each message by analyzing the HTTP command used in the request. For example, a GET request can always be considered safe because it can't, by definition, modify any data. It can only query data. A typical SOAP request, on the other hand, will use POST to communicate with a given service. And without looking into the SOAP envelope (a task that is both resource-consuming and not built into most firewalls) there's no way to know whether that request simply wants to query data or delete entire tables from the database.

As for authentication and authorization, SOAP places the burden in the hands of the application developer. The REST methodology instead takes into account the fact that Web servers already have support for these tasks. Through the use of industry-standard certificates and a common identity management system, such as an LDAP server, developers can make the network layer do all the heavy lifting. This is not only helpful to developers, but it also eases the burden on administrators, who can use something as simple as ACL files to manage their Web services the same way they would any other URI.

Not For Everything

To be fair, REST isn't the best solution for every Web service. Data that needs to be secure should not be sent as parameters in URIs. And large amounts of data, like that in detailed purchase orders, can quickly become cumbersome or even out of bounds within a URI. In these cases, SOAP is indeed a solid solution. But it's important to try REST first and resort to SOAP only when necessary. This helps keep application development simple and accessible.

Fortunately, the REST philosophy is catching on with developers of Web services. The latest version of the SOAP specification now allows certain types of services to be exposed through URIs (although the response is still a SOAP message). Similarly, users of the Microsoft .NET platform can publish services so that they use GET requests. All this signifies a shift in thinking about how best to interface Web services. Developers need to understand that sending and receiving a SOAP message isn't always the best way for applications to communicate. Sometimes a simple REST interface and a plain text response does the trick, and saves time and resources in the process.
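To make the contrast concrete, here is roughly what the two styles of request look like on the wire. The host name, paths, and XML element names below are invented for illustration; only the envelope namespace is the standard SOAP 1.1 one.

```
REST request (the verb alone tells a firewall this is a read):

  GET /stockquote?q=ACME HTTP/1.1
  Host: quotes.example.com

SOAP request (the intent is buried inside the POSTed envelope):

  POST /services/quote HTTP/1.1
  Host: quotes.example.com
  Content-Type: text/xml; charset=utf-8

  <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
    <soap:Body>
      <GetQuote xmlns="urn:example:quotes">
        <symbol>ACME</symbol>
      </GetQuote>
    </soap:Body>
  </soap:Envelope>
```

The REST response could be as small as the plain string 12.34, while the SOAP response wraps the same number in yet another envelope, which is the byte-count difference described above.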
<urn:uuid:5379a667-c8d1-448e-b4d4-cbb29901b9dc>
2.859375
1,155
Knowledge Article
Software Dev.
39.083626
The research teams raced to map the Universe by locating the most distant supernovae. More sophisticated telescopes on the ground and in space, as well as more powerful computers and new digital imaging sensors (CCDs, Nobel Prize in Physics in 2009), opened the possibility in the 1990s to add more pieces to the cosmological puzzle. The teams used a particular kind of supernova, called a type Ia supernova. It is an explosion of an old compact star that is as heavy as the Sun but as small as the Earth. A single such supernova can emit as much light as a whole galaxy. All in all, the two research teams found over 50 distant supernovae whose light was weaker than expected - this was a sign that the expansion of the Universe was accelerating. The potential pitfalls had been numerous, and the scientists found reassurance in the fact that both groups had reached the same astonishing conclusion. For almost a century, the Universe has been known to be expanding as a consequence of the Big Bang about 14 billion years ago. However, the discovery that this expansion is accelerating is astounding. If the expansion continues to speed up, the Universe will end in ice. The acceleration is thought to be driven by dark energy, but what that dark energy is remains an enigma - perhaps the greatest in physics today. What is known is that dark energy constitutes about three quarters of the Universe. Therefore the findings of the 2011 Nobel Laureates in Physics have helped to unveil a Universe that to a large extent is unknown to science. And everything is possible again.
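The inference rests on type Ia supernovae acting as "standard candles": their intrinsic brightness is essentially known, so comparing it with their apparent brightness yields their distance. In standard astronomical notation (a textbook relation, not taken from the post):

```latex
% Distance modulus: apparent magnitude m minus absolute magnitude M
% fixes the distance d (in parsecs).
m - M = 5 \log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)
```

Supernovae that looked fainter than this relation predicted for a decelerating universe were farther away than expected, which is the signature of accelerating expansion.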
<urn:uuid:1bd09602-7212-4389-824f-fa41762e4e83>
4.3125
335
Personal Blog
Science & Tech.
46.213952
Harlequin Ladybird Project

The UK Harlequin Ladybird Survey was established to track the spread of the invasive alien ladybird, Harmonia axyridis. The project was established by the Centre for Ecology & Hydrology (CEH), Cambridge University and Anglia Ruskin University through the NBN Trust, and with start-up funding from Defra. The rapid initiation of this project, in response to the first UK sighting of the harlequin ladybird, enabled high-resolution distribution data to be gathered, documenting the spread of this species from the time of its arrival. Volunteer engagement has been critical to the success of this project. Members of the public register their sightings of harlequin ladybirds on the Survey website. Records are then verified by one of the survey co-ordinators through either digital photographs or ladybird specimens. The website has been designed to instruct the public in identification and also provides regular updates on the progress of the harlequin and related research. The Harlequin Ladybird Survey has also benefited from the latest web-based technology. Data is transferred regularly to the NBN Gateway and a current map is displayed on the survey website. Furthermore, scientists from CEH and Cambridge University are now using the survey to understand more complex aspects of the ecology of the harlequin ladybird within the UK. There is no doubt that without public involvement this unique dataset would not have been achieved. Over 20,000 harlequin ladybird records (many of multiple individuals) have been logged since the survey was launched in March 2005, and there are currently over 10,000 observations on the Gateway.
<urn:uuid:c611bfaa-5318-4f74-b364-ed5d1d895180>
3.0625
326
Knowledge Article
Science & Tech.
25.853785
CHANGES in the amount of cosmic dust raining onto the Earth could help explain why the climate has for the past million years been alternating between ice ages and warmer interglacial periods. For years, researchers have suspected that ice ages happen because of variations in the Earth's orbit which move us slightly farther from the Sun every 100 000 years or so. But the reduction in the amount of solar radiation reaching the Earth as a result is not, by itself, enough to plunge the planet into an ice age. Kenneth Farley and Desmond Patterson of the California Institute of Technology in Pasadena have now found that the amount of cosmic dust arriving at the seafloor also varies on a 100 000-year cycle - exactly corresponding to the cycle of ice ages and interglacials. The idea that cosmic dust might periodically cause the planet to cool was suggested in September ...
<urn:uuid:0a8aa5c3-2591-4b55-9bf5-f0a3feafddb0>
4.21875
201
Truncated
Science & Tech.
47.044737
A Device is a physical computational resource with processing capability upon which artifacts may be deployed for execution. Devices may be complex (i.e., they may consist of other devices).

Node (from Nodes), on page 218. In the metamodel, a Device is a subclass of Node. No additional attributes. No additional associations. No additional constraints.

A device may be a nested element, where a physical machine is decomposed into its elements, either through namespace ownership or through attributes that are typed by Devices.

A Device is notated by a perspective view of a cube tagged with the keyword «device».
Figure 10.14 - Notation for a Device

The following changes from UML 1.x have been made: Device is not defined in UML 1.x.
<urn:uuid:4be769c7-0988-450f-b128-bb380b9e47f5>
2.90625
203
Documentation
Software Dev.
38.258213
Whale Lice and Barnacles

Barnacles are a fact of life for gray whales; a single whale can carry hundreds of pounds of them. These barnacles attach themselves to gray whales in the lagoons where the whales are born and stay with them for as long as they live. Barnacles depigment the skin where they attach themselves to the whale; when the barnacles die and fall off, they leave a small round white circle or ring. Barnacle scars create a unique pattern on each whale, which can help in identifying the gray whale.

Gray whales also have whale lice, which are not true parasites: they feed on the skin and damaged tissue, actually helping the whales. The lice gather around open wounds or scars of the whale. These lice can spread from mother whales to their calves during birth and nursing. Whale lice appear as orange colored patches around the barnacles and in crevices of the whale's body, such as creases and the mouth line. To get rid of the whale lice, whales rub themselves along the sea bottom or breach. Gray whales feed on amphipods and other prey in the bottom sediments, and scrape off barnacles and whale lice as they feed.

Above is a picture of "Scarback" the gray whale, taken this past summer on the Whales Tail; she is the most famous resident gray whale off Depoe Bay. Scarback has been around since 1979 and can be identified by the large scar on her right dorsal hump. It is believed that Scarback got her wound from an exploding harpoon sometime between 1985 and 1988.
<urn:uuid:729adaf8-58fc-4332-bde7-475a5e8cdefa>
3.65625
330
Knowledge Article
Science & Tech.
56.401269
Corallinaceae, Red Algae, Coralline algae, Carpet Algae.

Also known as Red Algae, Coralline algae and Carpet Algae. Found on coral and rocky reefs in places of high-energy wave action, growing in dense patches of matt-like turf over large areas. Looks very much like a large doormat!! Varies in colour.
Length - spreading
Depth - 0-3m

As many as 9000 species of algae are spread across the world's oceans; some can be as much as 30 metres across while others are just a slippery scum over rocks. Algae grow wherever there is enough light and life-supporting conditions. They support a huge amount of marine life, both as food and as a home.
<urn:uuid:16691d37-c115-473f-889a-044a06972829>
3.03125
204
Truncated
Science & Tech.
58.060147
Short Summaries of Articles about Mathematics in the Popular Press "Number Crunch: A Group of Mathematicians is Trying to Cut the World's Lengthiest Theorem Down to Size," by Kim McDonald. Chronicle of Higher Education, 21 June 1996, page A8. The notion of a group is central to mathematics. In addition to providing a way to represent mathematically the symmetries of objects such as crystals, exploring the structure of groups has an inherent mathematical fascination. The proof of the "Enormous Theorem," completed in 1980, is a patchwork of results that established a classification of an important collection of groups: the finite simple groups. Like the Scientific American piece on this topic (see The Not So Enormous Theorem), this article describes current efforts by mathematicians to condense, correct, and complete the proof and publish it in a projected 12-volume set. The present article provides a little more detail about what groups are and why the theorem is significant.
<urn:uuid:891d5d08-4702-411a-8ca2-2d9d32dee035>
3
202
Content Listing
Science & Tech.
34.536392
Well, given this news from the BBC, http://news.bbc.co.uk/2/hi/science/nature/8377128.stm , it turns out that the claims that temperatures haven't been rising were in fact wrong (sometimes even the BBC makes mistakes). Predicting climate change is only vague in as much as they don't know what the exact effects will be, nor can they pinpoint exactly where they will be felt. Whether or not climate change will happen is no longer a prediction; it will happen and already is happening.

Fiona, has climate change ever NOT happened? Climate has always changed, from hour to hour, day to day, century to century, and millennium to millennium. Moreover, it's also changed quite fast, without human help. My favorite example is the asteroid that knocked out the dinosaurs, which changed our climate overnight. Many scoff at such an example, saying it's not representative of typical pre-human-induced climate change, which takes a LONG time. Does it really? The last glaciation ended about 11,000 years ago. Yes, humans were around, but there weren't many and nobody had an SUV. Most climate scientists agree that we went from having glaciers covering half of Europe to today's environment in just 20 years! Bill Bryson mentioned this in A Short History of Nearly Everything, where he points out that we exchanged the climate of Sweden for that of Texas in just about 20 years. Please read this academic paper (or at least the abstract) to learn that previous major climate changes were much faster than many people might imagine. It concludes, "From present understanding of the record of the last 150,000 years, at least a few large climate changes certainly occurred on the timescale of individual human lifetimes, the most well-studied and well-established of these being the ending of the Younger Dryas, and various Holocene climate shifts. Many other substantial shifts in climate took at most a few centuries, and they too may have occurred over a few decades." Another researcher points out that the last cold period (13,000 years ago) took just six months to blow in. In short, climate change not only happens without our help, it also can do it quite quickly, within a human lifetime. Since we hardly understand why any of this happens (and we can't seem to predict the weather just 5 days from now), it's a bit presumptuous to believe we have a crystal ball that can accurately predict our climate 10, 100, or 1,000 years from now.

The general public doesn't really understand the concept of global warming, so when they hear that global average temperatures will increase by 2°C (or 4°C or 6°C or whatever), they just assume that everywhere across the board will get warmer by 2°C. However, that is not at all the case. Some places will get colder, some hotter, some wetter, some drier. In some places the temperature difference might be more like 6°C or 8°C - or as much as 10°C or as little as 0.5°C - even though the global average change is 2°C. There will also be more extreme weather events, such as hurricanes, tornadoes, freak snowstorms in summer, temperatures above 20°C in the winter, and excessively heavy rainfall in short periods of time, e.g. last week in Cumbria in Northern England, where record rainfall caused flooding with water levels reaching over 2.5 metres (8ft 2in). So although the projected increases in temperature don't sound like much, the local and regional consequences could/will actually be quite significant.

You make a great point. Climate will CHANGE.
That means for some it will change for the worse, for others it will change for the better.
- A desert may blossom thanks to climate change
- A tundra might explode with trees, arable land, and wildlife
- Antarctica might finally be livable again (it used to have 30 meter trees covering it 45 million years ago during the Thermal Maximum of the Cenozoic - and yes, Antarctica was located at the south pole back then too, so it was really hot overall on this planet)

The news focuses on the negative about climate change. I like a more balanced approach. Why don't we consider the millions of species that will benefit from climate change? Yes, many species will die. Is that news? Not really. 99.99% of all species that have ever lived are extinct. Many were knocked out by climate change, both the slow and the fast kind. Species come and go, but life goes on.

Climate prophets stupidly claim that nature "can't handle such fast climate changes." However, all the life on this planet is living proof that nature can certainly handle rapid climate change! Consider the hundreds of wild temperature swings (far worse than a mere 6 degrees) that the Earth has gone through. Many of those swings occurred over just a few decades, and some even happened overnight (the dino extinction, Yellowstone's toxic eruptions, etc.). Yet, we're still here. In fact, evolutionary biologists rejoice that these punctuated, dramatic, and quick temperature swings have occurred. The world wouldn't be the same without them. Without climate change, including swift climate change, humans (and many other species) wouldn't be here today. We'd still be bacteria, which brings up....

Even if we are causing the current climate change (which I agree is probable) and we end up killing lots of species, it's not unprecedented. We'll just be following in the footsteps of our ancestors. No, I'm not talking about early humans who hunted much of the Earth's mega-fauna to extinction. I'm talking about our real early ancestor, the cyanobacteria. Our common ancestor was responsible for the Oxygen Catastrophe 2.4 billion years ago. Alas, our great-granddaddy was the biggest mass murderer in Earth's history. For all the species living on the planet, oxygen was a deadly poison gas. Did our relative care? No, he just kept burping and farting oxygen every time he digested water. The climate has never been so drastically transformed by any other living organism. The cyanobacteria managed to kill nearly every living thing on the planet. It makes our human touch seem delicate. Life does what it must to reproduce, damn everyone else. We're no different than our uber-genocide producing ancestor. Perhaps we ought to be wiser, but we're not.

Contrary to what you have previously suggested, calling it "climate change" is not a cop-out, nor is it a matter of hesitancy, keeping things vague, or trying to ensure that they are right regardless of what happens.

Perhaps, but my main point is that it's not a very interesting term. Such a prediction is like saying "I predict the sun will rise tomorrow." Climate has always changed, and, as I mentioned earlier, the climate has also changed quickly (all on its own, without us nasty little humans twisting its arm). Most importantly, life on Earth has dealt with it just fine, thank you.
If it could survive the Snowball Earth climate (when the whole Earth was frozen, about 1 billion years ago), and it could survive the Great Oxygen Catastrophe, and it could also thrive in the Thermal Maximum of the Cenozoic, then I think it can handle 2-10 degrees of warming. Of course, some humans may not like it. On the other hand, others will. Again, the media ignores all the humans who will benefit from climate change. People who live in the Nevada desert, the Canadian tundra, or the Australian outback may all benefit as their climates change to something that humans generally prefer (people in those areas may see their climate get wetter, warmer, or cooler, respectively).

First, the media told us it was global warming. When they realized that half the industrialized world would be grateful for a couple of degrees of warming, they switched to "climate change", but with a twist. They focused on change...for the worse. When the doomsday preachers warn about climate change, they imply that "where you live, your climate will get worse." Although nobody knows for sure, this is extremely unlikely. A much more likely scenario is that some people will enjoy better weather, while others will suffer worse weather. And some won't see any substantial change at all. Hence, winners and losers.

Moreover, who decides what's bad anyway? A snowmobiler will hate that Alaska warms up, while an Alaska farmer will love it. One woman will cry if Arizona gets drier, but an environmentalist will be happy that the desert fox has a wider habitat. A beach bum will hate it if Bali gets cooler, but those who suffer from heat strokes will love it.

Although nobody really knows our climate future, there are enough predictions out there that in 2020 or 2070 many people will be able to say "I told you so." Meanwhile, the rest will quietly change the subject. That's why I don't make predictions, other than the obvious one that will always be right: climate is changing and will change in the future. And it may change quickly or slowly. And for better or for worse. And don't ask me why. Once the news says that climate has stopped changing, then I will be very worried. That would be a once-in-an-Earth-time event.

Finally, I am not suggesting that we should ignore climate change. I have written an article describing what we ought to do about climate change. My main point here is that we should be skeptical and humble during any climate change debate. Most of all, we should have a wider perspective. This wider perspective includes considering longer periods than the last 150 years or even the last million years. It includes considering ALL species (both the winners and losers). It includes considering ALL humans (both the winners and losers). With such a broad perspective, we might have a more intelligent debate. Thanks for the thoughts!
<urn:uuid:901c9d1c-b6f1-49f9-b4bc-5df10224e66d>
3
2,091
Comment Section
Science & Tech.
57.488886
Sandra Tucker, Crown Fine Arts Academy, 2128 S. St. Louis, Chicago IL 60623

This lesson is designed for primary level students. They will observe the transfer of momentum.

Materials:
- 2 or more pairs of rollerblades
- 2 cars - each car has a spring attached to the front bumper and velcro attached to the rear bumper
- a weight (that will fit on one of the cars)

Begin with a rollerblade demonstration. Tell the students they will need to observe what happens. Students of approximately the same weight will put on rollerblades and will participate in four demonstrations. (Be sure students wearing rollerblades are also wearing helmets and other protective gear.)
1. One person stands still. The other person skates into him. At the point when they would collide, they push off from each other.
2. Repeat the above, but this time, as they meet they grab on to each other.
3. Both students skate towards each other. When they meet, they push off from each other.
4. Repeat the above, but this time, they grab on to each other.
This entire series can be repeated using one heavier student (or two students together) and one lighter student. See if the results are the same. If possible, let all students try these demonstrations with a partner.

Now repeat the demonstration using the 2 cars. The students will observe and discuss what they see. The teacher can draw diagrams of the cars' movement on the board.
1. One car moving - one car still. The springs on the cars are facing each other. Observation: The moving car stops and the still car moves in the same direction the moving car was going.
2. One car moving - one car still. The velcro on the cars is facing each other. Observation: The cars stick together and move slowly in the direction the moving car was going.
3. Both cars moving towards each other. The springs on the cars are facing each other. Observation: Each car moves in the opposite direction but slower.
4. Both cars moving towards each other. The velcro on the cars is facing each other. Observation: The cars stop.
These demonstrations can be repeated putting a weight on one of the cars. Some of the observations will differ. Students should observe that movement transfers from one car to the other. This is called the transfer of momentum. They should also see that the weight of the car (or other object) affects the speed and direction of the movement.

Now do a demonstration using cue balls. Line up two cue balls. Have a student hit them with a cue stick so they move at an angle. Use a string to show where the ball was hit and the path each ball takes. The students should be able to see a 90 degree angle. Ask them if they see an "L". Repeat several times. Now give the students a bag of different size marbles and string and let them try to see what kinds of angles are made when you hit two marbles together. Have them observe if the angle changes when the marbles are the same compared to when the marbles are different. Discuss the observations with the class.

Students should be able to describe what "transfer" means, when discussing transfer of momentum.
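For teachers who want the arithmetic behind demonstration 2 of the car series, momentum before and after a "stick together" collision is equal. The masses and speed below are invented for illustration:

```latex
% Momentum conservation for a perfectly inelastic ("velcro") collision:
m_1 v_1 + m_2 v_2 = (m_1 + m_2)\, v_f
\qquad
v_f = \frac{(30\,\mathrm{kg})(2\,\mathrm{m/s}) + (30\,\mathrm{kg})(0)}{30\,\mathrm{kg} + 30\,\mathrm{kg}} = 1\ \mathrm{m/s}
```

The pair moves off together at half the original speed. In demonstration 4, the two momenta are equal and opposite, so they sum to zero and the cars stop.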
<urn:uuid:23d1ab31-81ac-49ab-bb9a-865e96adc953>
4.0625
714
Tutorial
Science & Tech.
71.404878
An applet is a program that runs in the browser, embedded in a web page. It is not a system-level program but a network-level program. The Applet class is the superclass of any applet. An applet viewer is used to view or test an applet, to check whether it is running properly. In this program we will see how to draw different types of shapes: a line, a circle and a rectangle. Different methods of the Graphics class from the java.awt.* package are used to draw each shape. The methods used in the program are explained below.

The drawLine() method draws a line in the applet. Here is the syntax of the drawLine() method:
drawLine(int X_from_coordinate, int Y_from_coordinate, int X_to_coordinate, int Y_to_coordinate);

The drawString() method draws the string given as a parameter. Here is the syntax of the drawString() method:
drawString(String string, int X_coordinate, int Y_coordinate);

The drawOval() method draws a circle or oval. Here is the syntax of the drawOval() method:
drawOval(int X_coordinate, int Y_coordinate, int width, int height);

The drawRect() method draws a rectangle. Here is the syntax of the drawRect() method:
drawRect(int X_coordinate, int Y_coordinate, int width, int height);

Here is the HTML code of the program:
<APPLET CODE="CircleLine.class" WIDTH="800" HEIGHT="500"></APPLET>
The Java code of the program is sketched below.
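A minimal applet consistent with the methods above and with the CircleLine class named in the HTML tag might look like the following. The coordinates and the drawn text are invented for illustration:

```java
import java.applet.Applet;
import java.awt.Graphics;

// Draws a line, a string, a circle and a rectangle using java.awt.Graphics.
public class CircleLine extends Applet {
    public void paint(Graphics g) {
        g.drawLine(20, 20, 250, 20);                  // line from (20,20) to (250,20)
        g.drawString("Shapes drawn by an applet", 20, 50);
        g.drawOval(20, 70, 100, 100);                 // circle: equal width and height
        g.drawRect(150, 70, 140, 90);                 // rectangle, 140 wide by 90 tall
    }
}
```

Compile it to CircleLine.class, put the class file next to the HTML page, and open the page in the applet viewer to see the three shapes and the text.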
<urn:uuid:7b91e076-3d8f-45eb-a341-854e731daf6a>
3.671875
419
Documentation
Software Dev.
52.597681
One might think having no limbs would put a damper on the love life, but not for snakes. When a female snake is ready to mate, she begins to release a special scent (pheromones) from skin glands on her back. As she goes about her daily routine, she leaves an odor trail as she pushes off resistance points on the ground (See Getting Around). If a sexually mature male catches her scent, he will follow her trail until he finds her. The male snake begins to court the female by bumping his chin on the back of her head and crawling over her. When she is willing, she raises her tail. At that point, he wraps his tail around hers so the bottoms of their tails meet at the cloaca -- the exit point for waste and reproductive fluid. The male inserts his two sex organs, the hemipenes, which then extend and release sperm. Snake sex usually takes under an hour, but it can last as long as a whole day. Female snakes reproduce about once or twice a year; however, the methods of birth vary among species. Some snakes give birth to live young (from one to 150 at a time), while others lay eggs (from one to 100 at a time); some even combine these methods by holding eggs internally until they hatch, and the babies are born live. For the most part, female snakes do not sit on their eggs like a hen, but in some cases they will protect their eggs (and their young) for a few days after they leave the mother's body.
<urn:uuid:8177be25-d54e-44f7-aa50-08b002c1d00d>
3.203125
332
Knowledge Article
Science & Tech.
61.931576
One of the reasons for launching the Hubble Space Telescope in 1990 was to eliminate the filter of the atmosphere that affects earth-bound observations of the night sky. The results speak for themselves: more than 10 000 peer-reviewed papers using Hubble data, around 98% of which have citations (only 70% of all astronomy papers are cited). There are plenty of other filters at work on Hubble's data: the optical system, the electronics of image capture and communication, space weather, and even the experience and perceptive power of the human observer. But it's clear: eliminating one filter changed the way we see the cosmos. What is a filter? Mathematically, it's a subset of a larger set. In optics, it's a wavelength-selection device. In general, it's a thing or process which removes part of the input, leaving some output which may or may not be useful. For example, in seismic processing we apply filters which we hope remove noise, leaving signal for the interpreter. But if the filters are not under our control, if we don't even know what they are, then the relationship between output and input is not clear. Imagine you fit a green filter to your petrographic microscope. You can't tell the difference between the scene on the left and the one on the right—they have the same amount and distribution of green. Indeed, without the benefit of geological knowledge, the range of possible inputs is infinite. If you could only see a monochrome view, and you didn't know what the filter was, or even if there was one, it's easy to see that the situation would be even worse. Like astronomy, the goal of geoscience is to glimpse the objective reality via our subjective observations. All we can do is collect, analyse and interpret filtered data, the sifted ghost of the reality we tried to observe. This is the best we can do. What do our filters look like? In the case of seismic reflection data, the filters are mostly familiar: - the design determines the spatial and temporal resolution you can achieve - the source system and near-surface conditions determine the wavelet - the boundaries and interval properties of the earth filter the wavelet - the recording system and conditions affect the image resolution and fidelity - the processing flow can destroy or enhance every aspect of the data - the data loading process can be a filter, though it should not be - the display and interpretation methods control what the interpreter sees - the experience and insight of the interpreter decides what comes out of the entire process Every other piece of data you touch, from wireline logs to point-count analyses, and from pressure plots to production volumes, is a filtered expression of the earth. Do you know your filters? Try making a list—it might surprise you how long it is. Then ask yourself if you can do anything about any of them, and imagine what you might see if you could.
<urn:uuid:0e7bc9e3-e665-4e59-9542-b779bf25454e>
3.796875
600
Nonfiction Writing
Science & Tech.
43.150394
Wesley R. Elsberry | Joined: May 2002

I think Roger Cuffey's discussion of transitional fossils is a good starting place. He has an online version of his 1974 paper (scanned, apparently), and this is the section that gives four classes of transitional sequences. Although the broad patterns and many details in the history of life are well known, many other details remain to be learned. Because of the unevenness of our knowledge, therefore, we can conveniently distinguish several different types of transitional-fossil situations. Let us consider these now, starting with that situation where our knowledge is most complete, and proceeding through situations in which knowledge is progressively less complete. First, some groups have been so thoroughly studied that we know sequences of transitional fossils which grade continuously from one species to another without break (Table 1), sometimes linking several successive species which cross from one higher taxon into another (Table 2). We can say that situations of this kind display transitional individuals. Among the many available examples of transitional individuals, some particularly convincing examples can be noted. corals (Carruthers, 1910, p. 529, 538; Easton, 1960, p. 175; Moore, Lalicker, & Fischer, 1952, p. 140; Weller, 1969, p. 123), gastropods (Fisher, Rodda, & Dietrich, 1964), pelecypods (Kauffman, 1967; Kauffman, 1969, p. N198-200; Kauffman, 1970, p. 633), echinoids (Beerbower, 1968, p. 136, 138; Kermack, 1954; Nichols, 1959a, 1959b; Olson, 1965, p. 98; Rowe, 1899). Second, other fossil groups have been well enough studied that we know sequences of transitional fossils comprising a series of chronologically successive species grading from an early form to a later form (Table 3), again sometimes crossing boundaries separating different higher taxa (Table 4). This type of situation can be termed successive species. Published descriptions of successive species lack explicit discussion of individuals transitional between the species, although frequently such exist in the author's collection but are not discussed because they are not directly pertinent to his purposes. Again, some especially persuasive examples of successive species can be seen, among: foraminiferans (Wilde, 1971, p. 376), brachiopods (Greiner, 1957; Raup & Stanley, 1971, p. 124), pelecypods (Easton, 1960, p. 348; Kay & Colbert, 1965, p. 327; Moore, Lalicker, & Fischer, 1952, p. 447; Newell, 1942, p. 21, 42, 47-48, 51-52, 60, 63, 65; Olson, 1965, p. 97; Stenzel, 1949; Stenzel, 1971, p. N1079-1080), ammonoids (Cobban, 1961, p. 740-741). In many fossil groups, our understanding is relatively less complete, thus giving rise to a third type of situation which we can label successive higher taxa. Here, we may not have complete series of transitional individuals or successive species, but the genera (or other higher taxa) represented in our collections form a continuous series grading from an earlier to a later form, sometimes crossing from one higher-rank taxon into another (Table 5). Because genera are relatively restricted in scope, many series of successive genera have been published. However, families and higher-rank higher taxa are so broad in concept that they are not usually used to construct transitional-fossil sequences, although occasionally they are (Bulman, 1970, p. V103-104; Easton, 1960, p. 436; Flower & Kummel, 1950, p. 607). Finally, in some fossil groups, our knowledge is quite fragmentary and sparse.
We then may know of particular fossils which are strikingly intermediate between two relatively high-rank higher taxa, but which are not yet connected to either by a more continuous series of successive species or transitional individuals. We can refer to these as isolated intermediates, a fourth type of situation involving transitional fossils, a type which represents our least-complete state of knowledge. Isolated intermediates include some of the most famous and spectacular transitional fossils known, such as Archaeopteryx (Colbert, 1969, p. 186-189; Romer, 1966, p. 166-167). This form is almost exactly intermediate between the classes Reptilia and Aves (Cuffey, 1971a, p. 159; Cuffey, 1972, p. 36), so much so that "the question of whether Archaeopteryx is a bird or a reptile is unimportant. Both viewpoints can be defended with equal justification" (Brouwer, 1967, p. 161). The fossil onychophorans (Moore, 1959, p. O19; Olson, 1965, p. 190) and the fossil monoplacophorans (Knight & Yochelson, 1960, p. 177-83; Raup & Stanley, 1971, p. 308-309) have been regarded as annelid-arthropod and annelid-mollusk inter-phylum intermediates, respectively. Moreover, although invertebrate phylum origins tend to be obscure for several reasons (Olson, 1965, p. 209-211), recently discovered, Late Precambrian, soft-bodied invertebrate fossils may well alter that situation, particularly after certain peculiar forms are studied and compared with Early Cambrian forms (Kay & Colbert, 1965, p. 99, 103; Weller, 1969, p. 247). Mention of this last prompts me to point out parenthetically that the appearance of shelled invertebrates at the beginning of the Cambrian has been widely misunderstood. The assertion is frequently made that all the major types of animals appeared suddenly and in abundance then. In actual fact, collecting in successive strata representing continuous sedimentation from Late Precambrian into Early Cambrian time reveals a progressive increase upward in abundance of individuals. Moreover, the various higher taxa - particularly the various classes and orders reflecting adaptation to different modes of life - appear at different times spread over the long interval between the Early Cambrian and the Middle Ordovician. Finally, because of widespread interest in questions of man's origins, it is well worth emphasizing that a rather complete series of transitional fossils links modern man continuously and gradationally back to mid-Cenozoic, generalized pongids (see references in Table 2). In spite of statements to the contrary . . . , the fossil record of the Hominoidea, the superfamily containing man and the apes, is quite well known, and it is therefore possible to outline a tentative evolutionary scheme for this group (Uzzell & Pilbeam, 1971, p. 615). (Cuffey and Moore on ASA)

So the fossils which I utilize in my TFEC fall into the first category that Cuffey mentions, that of "transitional individuals". There are several morphospecies between the G. trilobus parent species and the O. universa daughter species.

"You can't teach an old dogma new tricks." - Dorothy Parker
<urn:uuid:49e915a7-4b9b-4818-8f68-97cede839c25>
2.8125
1,547
Comment Section
Science & Tech.
46.458156
Named for a forgotten constellation, the Quadrantid Meteor Shower is an annual event for planet Earth's northern hemisphere skygazers. It usually peaks briefly in the cold, early morning hours of January 4. The shower's radiant point on the sky lies within the old, astronomically obsolete constellation Quadrans Muralis. That position is situated near the boundaries of the modern constellations Hercules, Bootes, and Draco. Many of this year's Quadrantid meteors were dim, but the one captured in this north-looking view is bright and easy to spot. In the foreground is the Maurice River's East Point Lighthouse, located near the southern tip of New Jersey on the US east coast. The likely source of the dust stream that produces Quadrantid meteors was identified as an asteroid.
<urn:uuid:079d918e-704c-4c1d-9e37-1ddcc8949a99>
2.890625
178
Knowledge Article
Science & Tech.
45.46107
Saprophytes: Where Do They Come From?
"David Brayford", IMI, D.BRAYFORD at cabi.org
Mon Mar 3 08:07:28 EST 1997

I might have missed the beginning of this conversation but, anyhow......, fungi have a whole range of ecological strategies involving a saprotrophic component, so there is no single answer to your questions. For a good source of background information on this subject area check out the book: Cooke, R.C. & Rayner, A.D.M. (1984 - blimey, seems like yesterday, Alan). Ecology of Saprotrophic Fungi. Longman, London & New York, 415 pp. ISBN 0-582-44260-5. Lots of interesting things therein.
Dave Brayford, IMI

Subject: Saprophytes: Where Do They Come From?
Date: 01 March 1997 8:51

Sorry, Jessie, it's not that I disbelieve you at all, it's just that I like to get second opinions. My question is: Where do saprophytes go when there is nothing dead around? Are they growing innocuously in living substrate until the substrate dies and they take over? If so, are they draining some nutrients from the substrate and, thereby, weakening it somehow? Or do they just exist as spores that float around and don't actually germinate until a suitable substrate is found? Any answers and/or citations are appreciated!
<urn:uuid:6a4025be-acaa-448a-b7fd-516bb3d7996e>
3.078125
337
Comment Section
Science & Tech.
65.360034
Habitat Protection and Research

It is widely accepted that some fishing methods, such as bottom trawling and dredging, impact on the seabed and may cause damage to marine habitats and ecosystems. Most fishing of this type occurs in water depths of less than 1200m (these make up around 30 percent of our economic zone). The Ministry of Fisheries has recognised this problem and has a number of closures in place to protect the seabed. The Ministry has also commissioned a range of research projects to find out more about seabed habitats and communities and how they are affected by bottom trawling and dredging.

Current and Future Management

Policy and strategic frameworks to implement sea-bed protection have been under development for some time now, and include the Strategy for Managing the Environmental Effects of Fishing and the Marine Protected Areas Policy and Implementation Plan. The government has already closed a number of areas to trawl and dredge fishing. These include coastal trawl and dredge closures to protect sponge gardens in Spirits Bay (Northland) and bryozoan reefs off Separation Point (Tasman region). There are also a range of Seamount Closures (announced in November 2000) and Benthic Protection Areas (announced in April 2007) across New Zealand's offshore waters. Combined, these protect about 32 percent of deep water habitats from bottom trawling and dredging. These measures are a step towards meeting objectives in the New Zealand Biodiversity Strategy and ensuring that pristine areas of ocean sea-bed are maintained for posterity.

The government's approach to protecting seabed habitats was outlined in the Strategy for Managing the Environmental Effects of Fishing (SMEEF) that was launched in 2005. This approach will be to set standards based on the relative vulnerability of habitats (standards will determine how much of each habitat will remain free of impact). The assessment of vulnerability will incorporate the relative resilience of biological and physical components of each habitat, the reversibility of the impact and the relative importance of the habitat to ecosystem function. The effect of bottom trawling depends on the extent of trawling, the type of habitat affected, and how much of that habitat exists. The government realises that impacts will occur when fishing, just as impacts occur on land, but wants to constrain those impacts so they do not become adverse. Working out the point at which an impact becomes an 'adverse effect' is difficult and will no doubt be controversial, but that is the role of the standard-setting process. Work is underway to assemble all the available information, and fill in critical knowledge gaps with new research. The research projects are tailored to provide the most cost-effective way of providing guidance on the impacts of fishing and what measures can be developed to usefully provide a way of assessing risk, vulnerability and resilience. Habitats will be mapped, and standards as described above will be developed. Decisions can then be made about the need for, and location of, any further closures.

Benthic Impacts Strategy

The Benthic Impacts Strategy sets out the government's process for setting limits around the effects of fishing on sea-bed habitats. The limits are called Habitat Standards. A Habitat Standard will define how much of each sea-bed habitat must remain free of damage, including from fishing.
This will ensure that the effects of fishing do not stop sea-bed habitats functioning and contributing effectively to fish production and the marine ecosystem. The standard will be based around an assessment of the risk that fishing poses to each habitat type in question. The assessment will take into account:
- how sensitive the biological and physical components of each habitat are;
- the reversibility of the likely impacts; and
- the relative importance of the habitat to ecosystem function.

The first step in the process is to identify the range and location of broad sea-bed habitat types in New Zealand's Territorial Seas (within 12 nautical miles of the coast) and offshore areas. A habitat type is made up of physical attributes (like what the sea-bed is made of and how deep it is) and the plant and animal communities living there. To start with, habitat types will be identified from existing national classification systems and databases, but will be modified as more information becomes available. The Ministry of Fisheries is currently undertaking, or plans for 2007/08, a range of research projects that will give more information on sea-bed habitats, particularly soft-sediment and seamount habitats in offshore areas. The extent and intensity of fishing in this depth zone varies considerably, and a current project is mapping the footprint of trawling and dredging over the past 16 years to characterise trends in fishing patterns and changes in effort as fishing fleet behaviour has changed over time.

Zoning the ocean and mapping habitats from an ecological standpoint is not straightforward. Several projects have studied seamount habitats, and it has been widely accepted that the benthos living there can be fragile, very long-lived and vulnerable to fishing. A current study is comparing recovery rates on seamounts closed to fishing with those where fishing has continued. However, seamounts form just a fraction of the seabed area, and attention is now turning to soft sediment habitats that underpin many of our largest trawl and dredge fisheries. A recently completed project showed that the utility of the Marine Environment Classification system (which the fishing industry used as a starting point for proposing BPAs) in predicting broad-scale habitat types from physical and oceanographic information (e.g. water temperature, depth, slope, etc.) can be significantly improved by adding biological information. The most comprehensive biological data available is the distribution and abundance of fish species, some of which are closely associated with sea-bed habitats. Existing data about sea-bed invertebrates are also being tested for use in the Marine Environment Classification system. However, information about soft sediment sea-bed communities in New Zealand is not comprehensive and the Ministry is seeking to improve the situation. A multi-agency project to map sea-bed biodiversity and habitats on the Chatham Rise and Challenger Plateau will yield new information about offshore soft sediments and their associated communities from depths of 200 to 1200 m. New work to assess the effects of trawling on different habitat types is also planned for 2007/08.
<urn:uuid:c01dedfd-2fde-4024-8969-bc562f825257>
3.8125
1,307
Knowledge Article
Science & Tech.
26.209823
The following is an excerpt: Cornell researchers have taken a leap toward meeting those needs by discovering a gene that could lead to new varieties of staple crops with 50 percent higher yields. The gene, called Scarecrow, is the first discovered to control a special leaf structure, known as Kranz anatomy, which leads to more efficient photosynthesis. Plants photosynthesize using one of two methods: C3, a less efficient, ancient method found in most plants, including wheat and rice; and C4, a more efficient adaptation employed by grasses, maize, sorghum and sugarcane that is better suited to drought, intense sunlight, heat and low nitrogen. View the original article here: Scientists discover genetic key to efficient crops – Phys.Org
<urn:uuid:1a71bb70-4d67-456d-9d5b-274254287f20>
3.84375
156
Truncated
Science & Tech.
30.361635
Studies analyzing some 20 years of footage taken of orangutans in the jungles of Borneo are giving scientists fascinating insights into how these great apes communicate. The footage shows the orangutans using mime as a means of 'talking' with one another. These messages include the desire to have an itch scratched and a termite nest opened. From an article in the Guardian: The study suggests they are capable of more complex communication than previously thought, and resort to mimes to elaborate on messages directed at other apes and their former keepers. The orangutans observed formerly lived in captivity, but the footage,…
<urn:uuid:d7d56753-b33f-4967-b6df-084e6b1dea6c>
2.78125
201
Content Listing
Science & Tech.
30.855
By studying orangutan populations, a team of researchers headed by anthropologist Michael Krützen from the University of Zurich has demonstrated that great apes also have the ability to learn behaviors socially and pass them down through a great many generations. The researchers provide the first evidence that culture in humans and great apes has the same evolutionary roots, thus answering the contentious question as to whether the variation in behavioral patterns in orangutans is culturally driven or caused by genetic factors and environmental influences. In humans, behavioral innovations are usually passed down culturally from one generation to the next through social learning. For many, the existence of culture in humans is the key adaptation that sets us apart from animals. Whether culture is unique to humans or has deeper evolutionary roots, however, remains one of the unsolved questions in science. About ten years ago, biologists who had been observing great apes in the wild reported a geographic variation of behavior patterns that could only have come about through the cultural transmission of innovations, much like in humans. The observation triggered an intense debate among scientists that is still ongoing. To this day, it is still disputed whether the geographical variation in behavior is culturally driven or the result of genetic factors and environmental influences. Humans are not the only ones to exhibit culture Anthropologists from the University of Zurich have now studied whether the geographic variation of behavioral patterns in nine orangutan populations in Sumatra and Borneo can be explained by cultural transmission. "This is the case; the cultural interpretation of the behavioral diversity also holds for orangutans – and in exactly the same way as we would expect for human culture," explains Michael Krützen, the first author of the study just published in Current Biology. The researchers show that genetic factors or environmental influences cannot explain the behavior patterns in orangutan populations. The ability to learn things socially and pass them on evolved over many generations, not just in humans but also in apes. "It looks as if the ability to act culturally is dictated by the long life expectancy of apes and the necessity to be able to adapt to changing environmental conditions," Krützen adds, concluding that, "Now we know that the roots of human culture go much deeper than previously thought. Human culture is built on a solid foundation that is many millions of years old and is shared with the other great apes." Largest dataset for any great ape species In their study, the researchers used the largest dataset ever compiled for a great ape species. They analyzed over 100,000 hours of behavioral data, created genetic profiles for over 150 wild orangutans and measured ecological differences between the populations using satellite imagery and advanced remote-sensing techniques. "The novelty of our study," says co-author Carel van Schaik, "is that, thanks to the unprecedented size of our dataset, we were the first to gauge the influence genetics and environmental factors have on the different behavioral patterns among the orangutan populations." When the authors examined the parameters responsible for differences in social structure and behavioral ecology between orangutan populations, environmental influences and, to a lesser degree, genetic factors played an important role, proving that the parameters measured were the right ones.
This, in turn, was pivotal for the main question of whether genetic factors or environmental influences can explain the behavioral patterns in orangutan populations. "That wasn't the case. As a result, we could prove that a cultural interpretation for behavioral diversity also holds true for orangutans," van Schaik concludes. Michael Krützen, Erik P. Willems, and Carel P. van Schaik: Culture and Geographic Variation in Orangutan Behaviour, in: Current Biology, Volume 21, Issue 21, first published online: October 20, 2011, doi:10.1016/j.cub.2011.09.01 Source: Universität Zürich, www.uzh.ch
<urn:uuid:2c353199-a612-4dc7-98f1-182b5334be22>
3.859375
1,441
Knowledge Article
Science & Tech.
38.167842
The vermin only teaze and pinch Their foes superior by an inch. So, naturalists observe, a flea Has smaller fleas that on him prey; And these have smaller still to bite 'em, And so proceed ad infinitum. What's the most common living thing on Earth? A smart guess would be to start with bacteria, which make up over half of all biomass on Earth (you did watch that episode of my YouTube show, right?). And since the oceans cover considerably more than half the planet, a marine bacterium would also make sense. Until very recently, almost everyone with an opinion on these things would agree. See that bacterium in the electron microscope image above? It's called Pelagibacter ubique, an ocean-dwelling microbe whose family of relatives makes up a stunning one-third of aquatic life forms (by sheer numbers). Why so numerous? One popular theory was that because they are immune to infection by bacterial viruses, they could grow unchecked. Thanks to some creative science, that theory now appears dead wrong (here's the paper in Nature). By diluting seawater over and over (that is what grad students are for), Stephen Giovannoni's team was able to isolate single ocean viruses, most of which had never been identified before. Then they stuck them in with P. ubique and waited. Amazingly, some of the viruses could infect this previously uninfectable bacterium! Those little blobs in the image above are actually viruses ready to burst out of their unfortunate little host. They sequenced the DNA of those viruses, went back to the ocean, and found that one of them, with the unremarkable name "HTVC010P", is … well, basically everywhere. This "smaller flea", which itself feeds upon something so mind-bogglingly numerous, is now our best candidate for "The World's Most Abundant Thingy". Whether or not it's a life form? I'll leave that up to you …
<urn:uuid:ce2a041b-3d39-4345-82fa-bb2565027193>
2.984375
444
Personal Blog
Science & Tech.
49.569332
Below is an image of a biaxial mineral. Inside the mineral is a biaxial indicatrix, which is slightly more complicated than a uniaxial indicatrix. Optically speaking, biaxial minerals are more complicated than uniaxial minerals. Biaxial minerals have far less symmetry in their crystalline form than uniaxial minerals, and it is this lack of symmetry which yields the two optic axes and two isogyres when we take an interference figure. Biaxial minerals have three allowed vibration directions: light can travel as the alpha ray, the beta ray, or the gamma ray. These three vibration directions correspond to the three crystallographic axes. Whereas in the uniaxial system the relative order of the refractive indices can change, the order of the refractive indices is fixed in the biaxial system. The alpha ray, corresponding to the X-axis, always has the lowest refractive index. The beta ray, corresponding to the Y-axis, has the middle-value refractive index. The gamma ray, corresponding to the Z-axis, always has the greatest refractive index. Determining the sign of a biaxial mineral using an indicatrix is more difficult than for a uniaxial mineral. Since the order of the refractive indices is fixed, we can no longer base the sign on the refractive indices alone. Instead, we must look at the optic axes. The optic axes lie at some angle from the Z-axis, and the sign is determined by their position relative to this axis. If we make a circular section which is perpendicular to the optic axes and has a radius equal to the Y-axis, we can think of this as being equivalent to the omega vibration direction in the uniaxial system. We look at the angle of inclination of the optic axes from the Z-axis; the full angle between the two optic axes is called the 2V. When the optic axes are within 45 degrees of the Z-axis, the mineral is positive. When the optic axes are within 45 degrees of the Y-axis, the mineral is negative.
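As a quick cross-check on the sign convention, here is a small Python sketch using the rule of thumb that follows from the fixed ordering alpha < beta < gamma: when beta lies closer to alpha, the acute bisectrix is the Z-axis and the mineral is biaxial positive; when beta lies closer to gamma, it is negative. The function name and the example indices are illustrative assumptions:

    def optic_sign(alpha, beta, gamma):
        # Requires the fixed biaxial ordering alpha < beta < gamma.
        assert alpha < beta < gamma, "indices must satisfy alpha < beta < gamma"
        # beta nearer alpha -> optic axes cluster about the Z-axis -> positive
        return "positive" if (gamma - beta) > (beta - alpha) else "negative"

    # Example: indices in the range of a typical biaxial positive mineral
    print(optic_sign(1.609, 1.612, 1.619))  # -> positive

When beta lies exactly midway, 2V is 90 degrees and the sign is indeterminate; real determinations use the measured 2V from an interference figure, as described above.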
<urn:uuid:25e020df-db85-4e67-aa97-1399fb820cd1>
3.828125
425
Academic Writing
Science & Tech.
38.635118
"THE past isn't dead," wrote William Faulkner. "It isn't even past." As far as biology goes, he had a point. Relics of the most distant biological past live on in every creature on Earth. Now a team of researchers have pieced together some of these relics to produce a surprising picture of our primitive ancestors. They say that, in certain ways, the earliest life on Earth was more like the complex cells of animals and plants than the bacteria normally regarded as life's most ancient ancestors. If they're correct, evolutionary biologists will have to redraw the whole tree of life and re-evaluate the origins of modern cells. The researchers, biologists from Massey University in New Zealand, started with the assumption that life began in an RNA world. Today, most RNA molecules act as cellular intermediaries, converting the genetic information stored in DNA into the proteins that provide cells with structure ... To continue reading this article, subscribe to receive access to all of newscientist.com, including 20 years of archive content.
<urn:uuid:71752d27-d39c-45aa-add1-1f551acda72e>
3.453125
212
Truncated
Science & Tech.
48.626396
Someone correct me if I am wrong, but I did some back-of-the-envelope calculations that seem to show that, if you have everything at 100% efficiency, it takes 32 horsepower to accelerate 550 pounds at one g (9.8 m/s^2, or 32 ft/s^2 if you will) while moving at about 32 ft/s. Or 32 hp to give 250 kg 1 g of acceleration at that speed; pick your units :) So I imagine something like this: a metal runway with grooves perpendicular to the length, like a linear gear, and a dragster with metal gear-matching wheels so you get 100% coupling of power. So assuming the gear combo has zero friction, it still connects the motor to the wheels. So the whole thing weighs 250 kg or 550 pounds and has a 32 hp electric motor, also running at 100% efficiency. That is my working assumption. Is that scenario wrong? So you look at a major rocket like the shuttle or the Saturn V, fueled by LOX and liquid hydrogen. So you get the most bang for the buck, so to speak, the best you can do with chemical rocket fuel, and you clock in at a specific impulse of about 450 seconds. It seems to me that kind of rocket is only getting about a thousandth of the energy actually turned into usable thrust, with the rest just being 'wasted' as heat. Does that sound about right? So if you are in space and have a very long lightweight cable, say hooked to the surface of the moon, and you have a spacecraft say 100,000 km away from the moon so you can ignore its gravity, and you have a reel-in motor that just winds up the cable, wouldn't that be about the same thing as a motor-driven acceleration on Earth, but now you don't even have gravity or atmosphere to consider? Under those conditions, wouldn't you get close to that number I mentioned, 32 hp = 1 g of acceleration for 550 pounds (250 kg)? I am trying to visualize how much energy is wasted in chemical rockets, or fusion rockets for that matter, vs how much thrust you actually get for the energy expended.
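The slippery part of the poster's figure is that power, unlike force, depends on speed: P = F v = m a v, so "32 hp for 1 g on 550 lb" is only true at one particular speed. A minimal Python sketch (assuming 100 percent efficiency, as the post does):

    def power_for_acceleration(mass_kg, accel_ms2, speed_ms):
        # Instantaneous power (W) to hold this acceleration at this speed.
        force_n = mass_kg * accel_ms2   # F = m a
        return force_n * speed_ms       # P = F v

    HP = 745.7   # watts per mechanical horsepower
    m, g = 250.0, 9.8                   # ~550 lb in kg; 1 g in m/s^2
    for v in (1.0, 9.75, 20.0):         # 9.75 m/s is about 32 ft/s
        p = power_for_acceleration(m, g, v)
        print("v = %5.2f m/s -> %6.1f kW = %5.1f hp" % (v, p / 1000, p / HP))

This reproduces the 32 hp figure only at about 32 ft/s (9.75 m/s); holding 1 g at higher speed takes proportionally more power. That speed dependence, plus the fact that a rocket spends most of its energy accelerating exhaust rather than payload, is why horsepower-style comparisons with chemical rockets come out looking so lopsided.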
<urn:uuid:c4cbf870-bcda-4f94-927b-9880daaf3a45>
2.734375
426
Comment Section
Science & Tech.
67.722473
Aug 7, 2012, 10:56 PM | #1 Entropy of liquid nitrogen. I tried to find the entropy of liquid nitrogen in various data books, using nitrogen's CAS number and MSDS, but I was only able to get the entropy of nitrogen down to 100 K, not below that. Does anyone know what the entropy of liquid nitrogen is, or where I can find it? Aug 8, 2012, 01:26 AM | #2 I suggest the following approximation to extrapolate the value of the entropy below 100 K. The approximation won't be valid for temperatures far below 100 K, but it should be valid within a limited range around 100 K. Hypothesize that the specific heat of liquid nitrogen is constant. Furthermore, assume that the liquid nitrogen is being cooled down from 100 K at whatever pressure you got that value of entropy from. The specific heat at constant pressure is defined as C_P = T (dS/dT) at constant P, where T is the temperature, S is the entropy and P is the pressure. You can look up C_P for liquid nitrogen in tables. Since C_P is almost constant for liquids, it doesn't matter precisely. If you assume that C_P is constant, then you can integrate this equation easily: ΔS = C_P ln(T/T_0), where ΔS is the change in entropy, T is the new temperature and T_0 = 100 K is the starting temperature. Putting the two equations together, S(T) = S(100 K) + C_P ln(T/100 K). You say you have S(100 K), and you can look up C_P. Thus, you can estimate the entropy at any temperature, i.e., S(T). Here are some hints. Note "ln" is the Napierian logarithm, or the "natural logarithm". The difficult part may be finding the best value of C_P. For a liquid, there is very little difference between C_P and C_V: the specific heat at constant pressure is almost the same as the specific heat at constant volume for a liquid. So if your table gives you C_V instead of C_P, just use it. As I said, C_P is almost constant over a large temperature range for a liquid. Similarly, it will be constant over a limited pressure range for a liquid. Therefore, if you can only find C_P at 77 K, use it. Final hint: there may be something wrong with your value of S(100 K). Check very carefully whether that value is really for 100 K or 77 K. The standard tables prefer values at 1 atmosphere. Your question implies a larger value of pressure, which is possible. However, check. You may have found the entropy of gaseous nitrogen at 100 K. This will not work. Also note that all this is based on an approximation. One must be wary of approximations. However, this hypothesis is probably the best estimate possible given the limited information in your problem. Aug 8, 2012, 02:03 AM | #3 Thanks for your reply. I would like to add a few things. A correction - I have the entropy of nitrogen gas at 100 K, not of liquid nitrogen at 100 K. The equation from which I derived the entropy recommends using it in the temperature range of 100 K to 500 K. It might be safe to extrapolate down to 78 K, but below this temperature there will be a sudden drop in entropy (due to the change of phase, gas to liquid). After the phase change there will again be a slow and smooth reduction in entropy with reduction in temperature. I want to know the entropy of nitrogen at the phase-change point, or below it (liquid nitrogen), at 1 bar pressure. Aug 8, 2012, 07:09 AM | #4 Entropy of liquid nitrogen. Well, finding the entropy of liquid nitrogen is more difficult than I thought. As a rough approximation, I took it to be 120 J/(mol K). Following is the reasoning: EN2(liquid) = (EBr2(liquid)/EBr2(gas))*EN2(gas) = (152/245)*192 = 120. I guess this should be close enough. Aug 8, 2012, 01:18 PM | #5 At 1 atmosphere, the boiling point of nitrogen is 77 K.
The nitrogen gas has to be cooled down to the boiling point, condensed into a liquid, and then cooled down to the temperature that you want the entropy at. 1) S(gas, 77 K) = S(gas, 100 K) + C_P[diatomic ideal gas] ln(77 K/100 K) 2) S(liquid, 77 K) = S(gas, 77 K) - ΔH(vaporization, nitrogen)/77 K 3) S(liquid, T) = S(liquid, 77 K) + C_P[liquid nitrogen] ln(T/77 K) Here, ΔH(vaporization, nitrogen) is the heat of vaporization for nitrogen, C_P[diatomic ideal gas] is the specific heat at constant pressure for a diatomic gas, C_P[liquid nitrogen] is the specific heat at constant pressure for liquid nitrogen, and T is the temperature of interest. The heat of fusion is also called the enthalpy of fusion. The three steps above should be valid down to the freezing point of nitrogen. Extending this method to temperatures just below the freezing point should be obvious. At temperatures far below the freezing point, the specific heat will stop being constant with temperature. In any case, you will need to look up C_P[liquid nitrogen] and ΔH(vaporization, nitrogen) in a table. C_P[diatomic ideal gas] is the same for all diatomic ideal gases, and nitrogen is a diatomic gas. The units have to be consistent, so you may have to convert the values that you look up in a table. C_P[liquid nitrogen] ≈ C_V[liquid nitrogen]. You already have S(gas, 100 K). I couldn't find it in the CRC manual, but you have it. Here is the rest of the information that you will need. Handbook of Chemistry and Physics, 84th Edition, 2003-2004, David R. Lide, Editor in Chief, Properties of Cryogenic Liquids: T(nitrogen vaporization) = 77.35 K; ΔH(vaporization, nitrogen) = 25.3 J/g; C_P[nitrogen gas] = 1.34 J/(g K); C_P[liquid nitrogen] = 2.042 J/(g K). These values, together with the formulas above, determine the entropy density in units of J/(g K) at any temperature T in K. Aug 8, 2012, 04:35 PM | #6 I gave another post where I detail a better way to calculate the entropy. The main physical reason I used that method is that entropy is a perfect differential for reversible processes; therefore, one can break the entropy change into three components. The proportionality that you just used is based on what, exactly? The method that I suggested requires looking up the heat of vaporization of nitrogen and the specific heat of liquid nitrogen. I think that would work far better than your proportionality. Aug 8, 2012, 05:47 PM | #7 Try the NIST database. http://webbook.nist.gov/chemistry/fluid/ Aug 8, 2012, 10:52 PM | #8 Thanks Darwin and Q_Goest! The information is there on the NIST site that Q_Goest mentioned. The entropy of liquid nitrogen at 1 bar pressure and 77.244 K is 79.313 J/(mol K). I was quite off the mark. I would like to close this thread now.
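Post #5's three-step recipe is easy to put into code. One caution: the 25.3 J/g quoted above matches nitrogen's heat of fusion; the heat of vaporization that step 2 actually needs is roughly 199 J/g. A minimal Python sketch, with the entropy of the gas at 100 K set to an assumed placeholder value you would replace with your own figure:

    import math

    GAS_CP = 1.34        # J/(g K), N2 gas (CRC value quoted in the thread)
    LIQ_CP = 2.042       # J/(g K), liquid N2 (CRC value quoted in the thread)
    T_BOIL = 77.35       # K, boiling point of N2 at 1 atm
    DH_VAP = 199.0       # J/g, heat of vaporization (NOT the 25.3 J/g above)
    S_GAS_100K = 5.71    # J/(g K), assumed entropy of N2 gas at 100 K, 1 atm

    def s_liquid(T):
        # Entropy per gram of liquid N2 at temperature T (K), via post #5's path.
        s = S_GAS_100K + GAS_CP * math.log(T_BOIL / 100.0)  # 1) cool the gas
        s -= DH_VAP / T_BOIL                                # 2) condense it
        s += LIQ_CP * math.log(T / T_BOIL)                  # 3) cool the liquid
        return s

    print(s_liquid(77.244) * 28.014)   # ~78 J/(mol K)

With these inputs the estimate comes out near 78 J/(mol K), within a couple of percent of the 79.313 J/(mol K) NIST value quoted in post #8.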
<urn:uuid:ceb4c255-c0b5-45b6-8722-a588d51ebf27>
2.734375
1,752
Comment Section
Science & Tech.
69.182238
Genome Particle Depressions (WIP) The following example takes a PRT Loader saved from Particle Flow and selects the vertices of the collision plane according to the distance to the closest particle. The resulting selection is then used to Push the vertices to produce depressions in the "ground". - Steps TBD RESULT: As the particles fall to the ground, they produce matching depressions: The left image shows frame 100, where all particles are already on the ground. The right image shows just the Plane and the PRT Loader without the particle mesh to reveal the depressions in the ground: Arbitrary Shaped Particles The above example works well with spheres since the falloff from a point particle is spherical. But if the particles have an arbitrary shape, it might not be very useful. Here is an alternative approach which does not fall directly into Particle Data Sampling since it uses the actual mesh of the particles - either a Mesher compound object made from the Particle Flow, or a Frost particle mesher assigning the shapes on the fly. - Steps TBD RESULT: As the mesh particles approach the surface, the IntersectRay will find the hit point and compare the distance to the threshold. If the mesh has penetrated the surface, the vertex selection will be set to the penetration distance. Thus, a Push modifier with a value of 1.0 will push the surface down by exactly as much as the penetration (left image). The right image shows the depressions without the Frost mesh: And here is the side view of the plane with the depressions visible below it: Adding a Relax modifier with a value of 1.0 and 10 iterations, and scaling the Soft Selection value by a factor of 2.0 to allow the relax to curve around the objects without intersecting them much, produces a smoother depression in the ground: This works, of course, with any shapes since we are ray-tracing the geometry from every vertex: Note: As mentioned already, Genome is history-independent. This means that if your particles are set to BOUNCE off the plane multiple times before coming to rest, their depressions would come and go as they move around. To solve this, you would have to spawn a static particle on each collision and send the spawned particles to a new event. These particles will be the "stand-ins" for the imprints and should be meshed separately to be raytraced by Genome.
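The Genome setup itself is node-based, but the per-vertex logic is simple enough to sketch in a few lines. Here is a self-contained toy in Python (runnable on its own, not Genome/MAXScript code), with spheres standing in for the Frost mesh and all names invented:

    # For each ground vertex, find how far any particle surface has sunk below
    # it; that depth becomes the soft-selection value a Push modifier turns
    # into a depression.
    class Sphere:
        def __init__(self, cx, cy, cz, r):
            self.cx, self.cy, self.cz, self.r = cx, cy, cz, r

    def penetration(vx, vy, vz, spheres):
        depth = 0.0
        for s in spheres:
            horiz2 = (vx - s.cx) ** 2 + (vy - s.cy) ** 2
            if horiz2 <= s.r ** 2:                    # vertex lies under the sphere
                bottom = s.cz - (s.r ** 2 - horiz2) ** 0.5
                depth = max(depth, max(0.0, vz - bottom))
        return depth

    grid = [(x * 0.5, y * 0.5, 0.0) for x in range(5) for y in range(5)]
    particles = [Sphere(1.0, 1.0, 0.3, 0.5)]          # resting slightly "into" the ground
    for v in grid:
        d = penetration(v[0], v[1], v[2], particles)
        if d > 0.0:
            print("vertex", v, "pushed down by", round(d, 3))

The same idea generalizes to the IntersectRay variant described above: replace the analytic sphere test with a ray cast from each vertex and compare the hit distance against the threshold.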
<urn:uuid:2be9f225-af9b-4625-a01c-c94381cfc160>
3.0625
508
Documentation
Software Dev.
43.835953
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. 2012 September 23 Explanation: Yesterday was an equinox, a date when day and night are equal. Today, and every day until the next equinox, the night will be longer than the day in Earth's northern hemisphere, and the day will be longer than the night in Earth's southern hemisphere. An equinox occurs midway between the two solstices, when the days and nights are the least equal. The picture is a composite of hourly images taken of the Sun above Bursa, Turkey on key days from solstice to equinox to solstice. The bottom Sun band was taken during the winter solstice in 2007 December, when the Sun could not rise very high in the sky nor stay above the horizon very long. This lack of Sun caused winter. The top Sun band was taken during the summer solstice in 2008 June, when the Sun rose highest in the sky and stayed above the horizon for more than 12 hours. This abundance of Sun caused summer. The middle band was taken during the Vernal Equinox in 2008 March, but it is the same sun band that Earthlings saw yesterday, the day of the Autumnal Equinox. Authors & editors: Jerry Bonnell (UMCP) NASA Official: Phillip Newman Specific rights apply. A service of: ASD at NASA / GSFC & Michigan Tech. U.
<urn:uuid:fffaf28b-a28f-4bb8-8503-9a5bb79756c1>
3.65625
313
Knowledge Article
Science & Tech.
55.180463
We write numbers in a base b as a series of symbols (digits), each having b possible states (values). With n symbols you have b^n possible combinations, so log_b(N) can be seen as the number of digits needed to write N. The "log = size" equality is true within an error of 1 because the log is real-valued, whereas the number of digits is an integer. For instance, log_10(100) = 2, while writing "100" takes three digits. Nonetheless, viewing the log merely as the size can be helpful to get some intuition about it and gives interesting insights in various fields where the log is central. This may help beginners/students to get a better feel for the log than from its usual mathematical definitions and properties (e.g., inverse of exponentiation, integral of 1/x, tool to transform multiplication into addition or to deal with large numbers, etc.). It is a pity that this function has been baptized with such a long and complicated name. Many students would have an easier life were it named "size" instead of "logarithm"! As an advanced example, in information theory, b is the number of different possible states available to each symbol and the log is the quantity of information contained in the sequence of symbols. When b = 2, each digit is called a bit, and the bit is the unit of information. If p is the probability of a state, then 1/p is the number of states with probability p, and log_2(1/p) is the number of bits needed to write this number. Then the entropy, the sum of p*log_2(1/p) over all states, is the average number of bits (average information) needed to write the number of available states. Nothing is known about the nature of the states, only their number (they can be a dice roll or a gas microstate), so the more there are on average, the more we know we are missing knowledge; hence the usual interpretation of entropy as a measure of our ignorance (and that is why a six-faced die has a much smaller entropy than a gas). A symbol with b states can be a digit, but it can also be anything less abstract with a known number of states (coin, dice, atom, molecule, degree of freedom, ...). Hence, the entropy in statistical mechanics can be seen more physically as a number of energy containers (b states each) present in the system. The physical meaning depends on the base you choose. The base b usually depends on the field of application: base 2 in information theory, base 10 in the daily decimal system, and the natural base e = 2.71828... in math/physics. In the latter case, though, you have to imagine a virtual symbol/container with 2.71828... states! Indeed, e = lim (1 + 1/n)^n as n goes to infinity, which you can picture as the number of states you get from an infinite collection of coins each contributing an infinitesimal fraction of a state. The constant e is a virtual way to deal naturally with a base (or a number of symbols, since b^x = e^(x ln b)) varying continuously instead of discretely.
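A few lines of Python make the "log = size" reading concrete (the function names are mine):

    import math

    def digits(n, base=10):
        # Exact digit count by repeated division (floats can wobble at exact powers).
        count = 0
        while n:
            n //= base
            count += 1
        return max(count, 1)

    print(math.log10(1000), digits(1000))   # 3.0 vs 4 digits: off by at most 1

    def entropy_bits(probs):
        # Average number of bits needed per symbol: sum of p * log2(1/p).
        return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

    print(entropy_bits([1.0 / 6] * 6))      # fair six-faced die: ~2.585 bits
    print(entropy_bits([1.0]))              # certain outcome: 0 bits

The die's 2.585 bits is just log_2(6), the "size" of writing one of six equally likely states, while a certain outcome needs no digits at all.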
<urn:uuid:6c95437c-c89e-4c5f-b175-3ad238459242>
3.765625
579
Tutorial
Science & Tech.
45.10587
Most of the Derby tools are JDBC applications. A JDBC application is one that uses the classes in the java.sql package to interact with a DBMS. When you work with JDBC applications, you need to know about several concepts. The most basic is the connection. A JDBC connection is the object through which commands are sent to the Derby engine and responses are returned to the program. Establishing a connection to a specific database is done by specifying an appropriate database URL. The following sections provide background information to help in understanding the Derby database connection URL.
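For orientation, here are two typical forms of the Derby connection URL (the database name sampleDB is illustrative): jdbc:derby:sampleDB connects to an existing embedded database named sampleDB, while jdbc:derby:sampleDB;create=true asks Derby to create the database first if it does not already exist. Attributes such as create are appended to the URL after a semicolon.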
<urn:uuid:ecd083d7-af1b-4c8e-81da-da270785753b>
3
115
Documentation
Software Dev.
48.509413
MSYS is a collection of GNU utilities such as bash, make, gawk and grep that allows building of applications and programs which depend on traditionally UNIX tools being present. It is intended to supplement MinGW and make up for the deficiencies of the cmd shell. An example would be building a library that uses the autotools build system. Users will typically run "./configure" and then "make" to build it. The configure shell script requires a shell script interpreter, which is not present on Windows systems but is provided by MSYS. A common misunderstanding is that MSYS is "UNIX on Windows". MSYS by itself does not contain a compiler or a C library, and therefore does not give the ability to magically port UNIX programs over to Windows, nor does it provide any UNIX-specific functionality like case-sensitive filenames. Users looking for such functionality should look to Cygwin or Microsoft's Interix instead. Up to MSYS 1.0.11, all components of MSYS were distributed in one single installer you downloaded and ran. While convenient, this made it difficult to update individual components, so all the MSYS components are now available as separate downloads managed by mingw-get (see Getting_Started; currently an alpha release). For convenience, you can follow the instructions below to install 1.0.11. Previous MSYS versions (up to 1.0.11) These instructions were based on the Enlightenment Wiki. Thanks to Vincent Torri for pointing them out. The total size of the installation of MSYS/MinGW is around 110 MB. Be sure to have enough space on your hard disk. - If you haven't already installed MinGW on your system, install MinGW in C:\MinGW. It is better not to install it in the same directory as MSYS, though there should be no problem since MSYS 1.0.11. In the installer, choose "Download and install", then "Current" (it will install gcc 4.4.0). - Install MSYS 1.0.11. I usually install it in C:\msys\1.0, but you can use any directory that you prefer. Check http://sourceforge.net/projects/mingw/files/ for more recent versions of all these files. - Next, the post-install process will ask for the directory where MinGW was installed. Enter "c:/mingw". If you make a mistake, you can change it by editing the "C:\msys\1.0\etc\fstab" file; make sure to use LF line endings. An example fstab may contain: c:/mingw /mingw c:/java /java - Install MSYS DTK 1.0 in C:\msys\1.0. - Install MSYS Core 1.0.11. It is an archive; untar it in C:\msys\1.0. - Set the environment variable HOME to C:\msys\1.0\home. Now you should have a cyan "M" link on the Desktop. When you double-click on it, a terminal should be launched. Using MSYS with MinGW It is convenient to have your MinGW installation mounted on /mingw, since /mingw is on the MSYS PATH by default. For this to work, just type (assuming MinGW is in c:\mingw): mount c:/mingw /mingw To install 3rd-party libraries and applications which use the autotools build system, the following commands are often used: run "./configure --prefix=/mingw", then "make", then "make install". Installing to "/usr/local" should be avoided, since the MinGW compiler won't look there by default. Building for MSYS To build an application for MSYS (as opposed to using MSYS), users will need to install the MSYS Toolchain. It contains headers and libraries for MSYS along with a patched version of GCC and Binutils. See HOWTO Create an MSYS Build Environment. MSYS should never be treated as a target platform; it is meant only as a means to update the MSYS components or the MSYS runtime DLL itself. Resulting programs will only run under MSYS.
MinGW build vs MSYS build Some programs can be tricky when used under the MSYS shell. One such example is sed: $ ls *.txt -1 | sed -e s/.txt/\&\!/g Normally, sed will append "!" to the end of every .txt file name, but if sed was compiled and linked using MinGW, MSYS will treat it as a native application and will try to change "/" to "\" to compensate for the difference between UNIX and WIN32 paths, resulting in unpredictable behaviour when used under the MSYS shell.
<urn:uuid:6a9c25e2-4b41-4272-a258-7fa1384cc8ac>
2.828125
1,011
Tutorial
Software Dev.
70.939936
As recently as last year, scientists believed the asteroid, known as 2011 AG5, had "significant potential" to threaten Earth, but a new analysis conducted with the help of Hawaii's Gemini North telescope has found that the asteroid will likely miss Earth by about 550,000 miles (more than twice the distance to the moon). If it were to strike Earth, AG5 would create an explosion several thousand times more powerful than the atomic bombs used in World War II. Scientists say that an asteroid the size of AG5 is likely to hit Earth about once every 10,000 years. Currently, the only asteroid scientists believe could threaten Earth is known as 2007 VK184, a 426-foot-wide asteroid that has a one in 1,820 chance of hitting Earth in June 2048. By Jason Koebler
<urn:uuid:d30c03f0-66e8-4def-aee4-0671fe501dc5>
3.609375
161
Personal Blog
Science & Tech.
46.44
Climate Change Volcanoes and Climate When some volcanoes erupt, they send gases up into the stratosphere. Emitted sulfur dioxide is converted into sulfate aerosols, which are tiny suspended particles that circle the globe and reduce the amount of solar radiation that reaches the Earth. Although the actual volcanic activity may only last several days, the impacts on climate can last for a few years, which is the time it takes for the aerosols to fall out of the stratosphere. The eruption of Mount Pinatubo in the Philippines in June 1991 sent about 20 million tons of sulfur dioxide and ash as high as 20 miles into the Earth's atmosphere. The Pinatubo sulfur dioxide cloud was the largest stratospheric disturbance observed since the advent of Earth-observing satellite imagery in the 1970s. Following the eruption, global temperatures decreased by as much as 0.5 degrees C (about 1 degree F) in late 1992. In addition, sulfate aerosols emitted during the eruption interacted with human-made chlorofluorocarbons (CFCs), which destroy ozone, and led to the lowest-ever recorded levels of stratospheric ozone. Climate models showed that the eruption also caused a change in North Atlantic wind patterns, which led to a warmer winter season in Europe during 1991-92. This image from the Multi-angle Imaging SpectroRadiometer (MISR) shows the breakup of the Larsen B ice shelf, which collapsed and broke away from the Antarctic Peninsula during February and March, 2002. Scientists believe the collapse was accelerated by warm summer temperatures. (Image credit: NASA/GSFC/LaRC/JPL, MISR Team) The Larsen B Ice Shelf Breakup Picture a chunk of ice the size of the state of Rhode Island - about 3200 square km (1235 square miles). That's the approximate size of the northern section of the Larsen B ice shelf, which shattered and broke off from the Antarctic continent in early 2002. Researchers discovered the ice shelf breakup while analyzing satellite data from the Moderate Resolution Imaging Spectroradiometer (MODIS). The shattered ice sent thousands of icebergs drifting across the Weddell Sea. Ice shelves are immense plates of partially floating glacial ice that are fed by glaciers and also by the accumulation of new snow. The Larsen B event was the largest in a series of ice shelf retreats along the Antarctic Peninsula during the last 30 years. Since the 1940s, temperatures in the region have warmed by about 0.5 degrees C (1 degree F) per decade. If the warming trend continues, scientists believe the next ice shelf to the south, the Larsen C, may also begin to disintegrate. This graphic crudely illustrates the effect of hypothetical, spatially uniform sea level rise scenarios. The red areas indicate regions of the southeastern United States that would be below sea level at rises of 1, 2, 4, and 8 meters (about 3, 6.5, 13, and 26 feet), respectively. (Image credit: NOAA, Geophysical Fluid Dynamics Laboratory) Sea Level Rise Mean sea level refers to the height of the sea in relation to a surface reference point. Sea level goes through periodic cycles of change throughout geologic time. According to the Intergovernmental Panel on Climate Change (IPCC), average global sea level has risen 10 to 25 cm (4 to 10 inches) during the past century. Recent analyses of satellite data also show that the rate of sea level rise is accelerating - rising by about 5 cm (2 inches) between 1985 and 2005. The main cause of sea level rise is water expansion due to rising global temperatures, known as thermal expansion.
Melting glaciers and ice caps are responsible for about one-third of sea level rise. Many scientists believe that the Antarctic ice sheet is fairly stable and will not pose a threat to sea level within the next 100 years. However, the Greenland ice sheet may gradually melt and add its water to the oceans. About one-third of the world's population resides in coastal areas. Miami, Florida is one of many U.S. coastal cities that are vulnerable to rising sea level. (Image credit: NOAA) Global sea level rise threatens both human development and natural habitats through flooding, coastal erosion, and saltwater intrusion. Natural features such as dunes and wetlands can be destroyed, making certain areas more vulnerable to hurricanes and cyclones. About one-third of the world's population resides in coastal areas. Even a 0.3-meter (1-foot) increase in average sea level would flood low-lying U.S. coastal cities such as New Orleans, New York, Boston, Charleston, and Miami. In recent years, Earth-observing satellites, such as TOPEX/Poseidon, have enabled scientists to monitor global sea level and, therefore, predict future changes.
<urn:uuid:acdb5760-e1f1-48d3-a160-e9b5114bae0e>
4.15625
998
Knowledge Article
Science & Tech.
43.247718
Scientific Investigations Report 2006–5294 The northern High Plains aquifer is the primary source of water used for domestic, industrial, and irrigation purposes in parts of Colorado, Kansas, Nebraska, South Dakota, and Wyoming. Despite the aquifer's importance to the regional economy, fundamental ground-water characteristics, such as vertical gradients in water chemistry and age, remain poorly defined. As part of the U.S. Geological Survey's National Water-Quality Assessment Program, water samples from nested, short-screen monitoring wells installed in the northern High Plains aquifer were analyzed for major ions, nutrients, trace elements, dissolved organic carbon, pesticides, stable and radioactive isotopes, dissolved gases, and other parameters to evaluate vertical gradients in water chemistry and age in the aquifer. Chemical data and tritium and radiocarbon ages show that water in the aquifer was chemically and temporally stratified in the study area, with a relatively thin zone of recently recharged water (less than 50 years) near the water table overlying a thicker zone of older water (1,800 to 15,600 radiocarbon years). In areas where irrigated agriculture was an important land use, the recently recharged ground water was characterized by elevated concentrations of major ions and nitrate and the detection of pesticide compounds. Below the zone of agricultural influence, major-ion concentrations exhibited small increases with depth and distance along flow paths because of rock/water interactions. The concentration increases were accounted for primarily by dissolved calcium, sodium, bicarbonate, sulfate, and silica. In general, the chemistry of ground water throughout the aquifer was of high quality. None of the approximately 90 chemical constituents analyzed in each sample exceeded primary drinking-water standards. Mass-balance models indicate that changes in ground-water chemistry along flow paths in the aquifer can be accounted for by small amounts of feldspar and calcite dissolution; goethite and clay-mineral precipitation; organic-carbon and pyrite oxidation; oxygen reduction and denitrification; and cation exchange. Mixing with surface water affected the chemistry of ground water in alluvial sediments of the Platte River Valley. Radiocarbon ages in the aquifer, adjusted for carbon mass transfers, ranged from 1,800 to 15,600 14C years before present. These results have important implications with respect to development of ground-water resources in the Sand Hills. Most of the water in the aquifer predates modern anthropogenic activity, so water removed by excessive pumping is not likely to be replenished by natural recharge in a meaningful timeframe. Vertical gradients in ground-water age were used to estimate long-term average recharge rates in the aquifer. In most areas, the recharge rates ranged from 0.02 to 0.05 foot per year. The recharge rate was 0.2 foot per year in one part of the aquifer characterized by large downward hydraulic gradients. Nitrite plus nitrate concentrations at the water table were 0.13 to 3.13 milligrams per liter as nitrogen, and concentrations substantially decreased with depth in the aquifer. Dissolved-gas and nitrogen-isotope data indicate that denitrification in the aquifer removed 0 to 97 percent (average = 50 percent) of the nitrate originally present in recharge.
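The age-gradient method has a simple back-of-the-envelope form: under the common piston-flow approximation, the long-term recharge rate is roughly the porosity times the depth below the water table divided by the ground-water age at that depth. A minimal Python sketch with illustrative numbers (not values from the report):

    def recharge_rate(porosity, depth_ft, age_yr):
        # Piston-flow estimate of recharge (ft/yr) from one age-depth pair.
        return porosity * depth_ft / age_yr

    # Illustrative: 30% porosity, water 2,000 years old at 300 ft below the
    # water table gives 0.045 ft/yr, near the upper end of the report's range.
    print(round(recharge_rate(0.30, 300.0, 2000.0), 3))

In practice the report fits the full vertical age profile rather than a single pair, but the scaling is the same.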
The average amount of nitrate removed by denitrification in the aquifer north of the Platte River (Sand Hills) was substantially greater than the amount removed south of the river (66 as opposed to 0 percent), and the extent of nitrate removal appears to be related to the presence of thick deposits of sediment on top of the Ogallala Group in the Sand Hills that contain electron donors, such as organic carbon and pyrite, to support denitrification. Apparent rates of dissolved-oxygen reduction and denitrification were estimated on the basis of decreases in dissolved-oxygen concentrations and increases in concentrations of excess nitrogen gas and ground-water ages along flow paths from the water table to deeper wells. Median rates of dissolved-oxygen reduction and denitrification south of the Platte River were at least 10 times smaller than the median rates north of the river in the Sand Hills. The relatively large denitrification rates in the Sand Hills indicate that the aquifer in that area may have a greater capacity to attenuate nitrate contamination than the aquifer south of the river, depending on rates of ground-water movement in the two areas. Small denitrification rates south of the river indicate that nitrate contamination in that part of the aquifer would likely persist for a longer period of time. Posted April 2007. Contents: Purpose and Scope; Description of Study Area; Land Use and Water Use; Vertical Changes in Lithology; Vertical Hydraulic Gradients; Vertical Gradients in Water Chemistry; Major Ions and Trace Elements; Vertical Gradients in Ground-Water Age; Summary and Conclusions. McMahon, P.B., Böhlke, J.K., and Carney, C.P., 2007, Vertical Gradients in Water Chemistry and Age in the Northern High Plains Aquifer, Nebraska, 2003: U.S. Geological Survey Scientific Investigations Report 2006–5294, 58 p., 2 appendixes.
<urn:uuid:30928774-1a7c-4252-9d59-d05ec3e1730e>
3.046875
1,195
Knowledge Article
Science & Tech.
27.407004
The core of modern Unix shared libraries September 3, 2011 The fundamental Unix development that enabled modern shared libraries is the shared copy-on-write file mapping, i.e. mmap(). There are two problems with shared libraries, with the second stemming from the first. The first problem is how to get them loaded into the process's address space at all, and it has many possible solutions. The easiest solution is actually to have the kernel do it, since after all the kernel is already mapping code and data from the executable; all it needs to do is map some additional things from another file (or several other files). (Kernel loading was actually how some early shared library implementations worked, but that's another entry.) The second problem comes from the first problem, and it is how to deal with shared libraries being mapped into different places in memory in different processes while still sharing as much physical memory between processes as possible. Here there are many fewer solutions and most of them are not very good (position-independent code has limits, for example). The only really good solution is to fix things up by applying runtime relocations to the shared library, using copy-on-write to de-share only the pages that needed relocations. In theory the kernel could do this relocation as it loaded each shared library into your process. In practice no one wants to have that much complicated code running in the kernel, so the relocation needs to happen in user space. The simplest way to do that is to have user space handle the entire job of loading shared libraries, and to do that user space needs a way to set up those shared copy-on-write mappings for shared libraries, i.e. it needs mmap(). (Doing all shared library loading in user space also allows for all sorts of useful flexibility and powerful features that would be awkward or out of place in a kernel implementation. For that matter, it allows the implementation itself to be replaced.)
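User-space copy-on-write mappings are easy to see in action. Here is a small Python sketch using the standard mmap module (POSIX-only; the file path is made up). MAP_PRIVATE is exactly the primitive a dynamic linker relies on: pages stay shared with the file, and with every other process mapping it, until a relocation-style write de-shares just the touched page:

    import mmap, os

    PAGE = mmap.PAGESIZE
    with open("/tmp/libdemo.bin", "wb") as f:       # stand-in for a library file
        f.write(b"\x90" * PAGE)

    fd = os.open("/tmp/libdemo.bin", os.O_RDONLY)
    # MAP_PRIVATE = copy-on-write: writes go to a private copy of the page
    # and never reach the file; untouched pages remain shared.
    m = mmap.mmap(fd, PAGE, mmap.MAP_PRIVATE, mmap.PROT_READ | mmap.PROT_WRITE)
    m[0:4] = b"RELC"                                # a "relocation" write
    print(m[0:4])                                   # b'RELC' in this process only
    with open("/tmp/libdemo.bin", "rb") as f:
        print(f.read(4))                            # file untouched: b'\x90\x90\x90\x90'
    m.close(); os.close(fd)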
<urn:uuid:a2e25092-95dd-4445-872a-b948b8d206df>
2.828125
407
Personal Blog
Software Dev.
32.885426
Science Fair Project Encyclopedia Ring wave guide The free-particle Schrödinger equation, -(hbar^2/(2 m r^2)) d^2(phi)/d(alpha)^2 = E phi, must be solved under the circularity condition. Let the ring's radius be r; then in one dimension d^2(phi)/d(alpha)^2 = -(2 m r^2 E/hbar^2) phi, where alpha is the position angle on the ring. We get a plane wave running around the ring. The solution of this is phi = exp(i k alpha) with k^2 = 2 m r^2 E/hbar^2. The circularity condition is now that phi(alpha + 2 pi) = phi(alpha), which forces k to be an integer n. Now the wave function becomes phi_n = exp(i n alpha), with energies E_n = n^2 hbar^2/(2 m r^2). Quantum states found: n = 0: - phi = 1, - a constant function, and E = 0. This represents a stationary particle (no angular momentum spinning around the ring). n = 1: - This produces two independent states, exp(i alpha) and exp(-i alpha), that have the same energy level (degeneracy) and can be linearly combined arbitrarily; instead of these one can choose the sine and cosine functions. These two states represent particles spinning around the ring in clockwise and counterclockwise directions. The angular momentum is +hbar or -hbar. n = 2 (and higher): - the energy level is proportional to n^2, the angular momentum to n. There are always two (degenerate) quantum states. Conclusion: filling all levels up to a given n accommodates a total of 2n + 1 spatial states. In organic chemistry, aromatic compounds contain atomic rings, such as benzene rings (the Kekulé structure) consisting of five or six, usually carbon, atoms. So does the surface of "buckyballs" (buckminsterfullerene). These molecules are exceptionally stable. The above explains why: the ring behaves like a circular wave guide. The excess (valency) electrons spin around in both directions. Every energy level is filled by electrons, as electrons additionally have two possible orientations of their spins, giving 2(2n + 1) = 4n + 2 electrons in total. The rule that 4n + 2 excess electrons in the ring produce an exceptionally stable ("aromatic") compound is known as the Hückel rule. The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License.
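As a quick numerical companion (the names are mine), the level-filling count behind the Hückel rule can be checked directly: each level with n > 0 holds two spatial states, each spatial state holds two spin orientations, so filling levels 0 through n gives 2(2n + 1) = 4n + 2 electrons:

    def electrons_filling(n_max):
        # One n = 0 state plus two states for each 1 <= n <= n_max, times two spins.
        spatial = 1 + 2 * n_max
        return 2 * spatial

    for n in range(4):
        print(n, electrons_filling(n), 4 * n + 2)   # the last two columns agree

The sequence 2, 6, 10, 14, ... is exactly the set of aromatic electron counts the Hückel rule names.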
<urn:uuid:e0b698c2-6ef6-464c-a6cb-15e8bb44c9b1>
3.9375
405
Knowledge Article
Science & Tech.
38.414254
Many solar flares are expected to hit Earth in 2012 and 2013, but are those warnings just another Y2K? Back in 1999, people around the world were fearful of just what was going to happen on January 1st, 2000. Most thought the digital clocks would shut down, erasing our bank statements, setting off classified missiles, and creating pandemonium the likes of which we'd never seen. On the morning of January 1st, 2000, the world woke up to discover—nothing. And much like the apocalyptic disappointment of that day, the solar flares set to hit Earth last week barely made a ripple in our magnetic field. Image Source: NASA Despite the let-down, experts continue to goad the public, promising more solar flares to come during the sun's 11-year storm cycle. Not only should we anticipate some communication errors; popular terrestrial radio might be affected too. For those of us who are frequent fliers, NASA has recommended the use of alternative flight paths in areas where the flares are destined to rain down. Last week's bout disrupted the GRAIL satellites, two probes orbiting the moon in hopes of sending back some valuable information on the uninhabitable rock. While the public is largely unaware of it, airlines such as Delta avoided flying over the North Pole as they are wont to do, which actually saved them from potentially dangerous equipment failures. No one wants to see another repeat of Alive, especially if solar flares are preventing proper communication. The solar flare itself isn't what's got people on Earth in a panic. What really concerns scientists are the coronal mass ejections (CMEs) that accompany solar flares. CMEs are the energy and matter eruptions that occur with a flare. When a CME hits the Earth's magnetosphere travelling at 965 kilometers per second, the resulting collision causes our upper atmosphere to wiggle like a bowl full of Jell-O. Believe it or not, CMEs are what cause the aurora borealis. While this might seem like an innocuous event, any time the aurora borealis is particularly stunning, communication lines in that area of the world are fraught with issues. Earth has been dealing with solar flares for its entire lifetime, and as noted above, every 11 years we go through this sun cycle. As with every other cyclic event, experts are warning that global chaos may still occur, especially in 2013, provided we make it through the Mayan apocalypse. 2013 is the projected "big year" for the solar storm, and NASA unwisely warned the public that a large solar flare might be capable of paralyzing power grids, the Internet, cell phones, and all other satellite communications for days to months! NASA then went on to explain—for everyone to hear—how a hostile country could mount a successful nuclear attack, not on the planet, but in the upper parts of the atmosphere. Such a detonation would have the same effects as a huge CME hitting the magnetosphere and creating an electromagnetic storm. Communication systems would be knocked out along with most other electronics. Of course, the government feels that a threat of this caliber is far from becoming a reality; we haven't even located those darn nuclear weapons in the Middle East yet. It's up to you now: do you firmly believe that one of those expected solar storms is going to initiate the end of our civilization in the coming two years?
<urn:uuid:3a13631b-f76f-4526-aec2-e10b4f424d77>
3.03125
705
Personal Blog
Science & Tech.
43.994036
If an MVC framework is being followed, then most of the content of a JSP should simply present and format the data (the "Model" objects of the MVC framework) being passed to the JSP by the Controller (or its delegate). Many feel that when using MVC, code in a JSP which performs a task not directly related to presentation (such as business logic, validation, error handling, etc.) probably belongs elsewhere. For example, the most natural home for validation code is the Model Object. This is a specific example of a general guideline of lasting value: the idea of separating the "layers" of the application - the user interface, the business logic or model, and the database. The underlying reason for this separation is, ultimately, to allow people with different skills to make significant and effective contributions to the development of an application. User interface designers should be concerned almost entirely with presentation issues, while a database expert should be concerned almost entirely with writing SQL. (See as well the package by feature, not layer topic.) See also: Package by feature, not layer; Validation belongs in a Model Object
<urn:uuid:1ccffe71-d3a3-4447-aa56-48192e6a142b>
2.984375
230
Knowledge Article
Software Dev.
30.16853
The environmental sample processor (ESP) is a molecular biology lab packed (robotic technician and all) inside a canister the size of a kitchen garbage can. A decade in development, the present version of the robot works well in surface waters. At prearranged times, or when triggered by another sensor, the ESP filters 2 L (about half a gallon) of seawater into its metal-alloy housing. The sample is washed and treated with chemicals that break down cell membranes. The freed cell contents are then washed over a wafer and passed to a series of robotic devices. These devices are where the molecular chemistry happens. Standard tests detect gene products and answer specific questions: by detecting specific RNA sequences, the system can find out which organisms are present; by testing for particular proteins, it can ask what those organisms are doing. (For example, an RNA test can look for a particular kind of harmful algae; a protein test can learn how much toxin it is producing.) In surface waters, these tests are useful for monitoring algae and bacteria populations. They can also identify larger creatures such as mussel and barnacle larvae, helping biologists piece together normally hidden parts of these animals' life cycles. For the MARS observatory, MBARI is developing a deep-water ESP that can do the same work in the deep ocean's cold and extreme pressure. Procedures for genetic identification of marine organisms, and for clarifying their role in biogeochemical cycles, typically require collecting samples at sea and returning them to a fully equipped laboratory for individual analyses. To reduce time and costs, MBARI initiated the development of the Environmental Sample Processor (ESP) and new techniques to allow remote application of molecular probe technology. With the ESP, researchers will be able to conduct molecular biological analyses remotely, in real time over a sustained period, and with interactive capability. The organisms will be analyzed fresh from the sea, in a timely manner, and at significantly reduced cost. This video gives some background information to help explain the design, engineering and deployment of the Environmental Sample Processor (ESP). Astrobiology Magazine: (04/07/07) Taking the Ocean's Pulse Astrobiology Magazine: (03/30/06)
<urn:uuid:3d42fbf1-f58a-4ebb-9a76-da354be99a0f>
3.625
470
Knowledge Article
Science & Tech.
26.149871
How you can use Perl plotting through the "gnuplot" charting variant In order to do plotting and graphics in Perl, you can find several different modules on the net. Generally speaking, you have at least 2 great opportunities: code everything in Perl using one of the CPAN-available modules, or use some other plotting program, called externally from a Perl script. In your decision you must take into consideration the specifics of the application you have to develop and the types of chart (2-D or 3-D plotting and rendering) you want to implement. Next, I am going to present the "gnuplot" charting variant. 1. The gnuplot gnuplot is an interactive plotting program that can be used to render charts in either 2-D or 3-D, in many different formats. It is an external program and you can talk to it using a module interface (see below). It's available for UNIX, IBM OS/2, MS Windows, DOS, Macintosh, VMS, Atari and many other platforms. Though it is copyrighted, you can freely distribute it on the condition that you do not modify it. And now, if you are curious and want to get and try it, please use the link below in order to download the current released version: gnuplot for Perl plotting From the same page you can browse through other useful links such as FAQ, Demos, Tutorials, etc. Among its features, gnuplot can: - draw charts of many types: lines, points, bars, contours, surfaces, stacked histograms - draw text on the chart - place multiple plots on one page - use multiple axes on a single plot - output many file types (JPEG, PBM, PDF, PNG, GIF, ...) - animate your graphs and a lot of other features which you can find on the page accessible through the link presented above. In order to use this plotting program in Perl, you must download the Graphics-GnuplotIF module, which represents a dynamic Perl interface to gnuplot and can be obtained by accessing the page: "http://search.cpan.org/~mehner/Graphics-GnuplotIF-1.4/" This module allows you to send requests to gnuplot through simple Perl subroutine calls. The gnuplot program is launched as a separate process, the plot commands being sent through a pipe (be sure that the operating system you are using supports pipes). You can start several independent plots from one script, each plot having its own pipe. 2. The Chart::Graph module and the gnuplot() function Another available opportunity for Perl plotting is to use the function gnuplot() of the Chart::Graph module, which is a Perl extension serving as a front-end to gnuplot, XRT, and Xmgrace. The function gnuplot() can be used with almost the same options and arguments as the gnuplot application itself (see point 1 above). You can download this module by accessing the page: "http://psb.sbras.ru/docs/CPAN/pub/CPAN/modules/by-module/Chart/Chart-Graph-3.2.tar.gz" For more information about how you can use the function gnuplot(), please see its documentation page.
<urn:uuid:f44c6476-40ae-496c-bb11-d738a4bbf262>
2.84375
719
Tutorial
Software Dev.
57.165694
matlab /mit/2.670/Computers/Matlab/Examples (starts up matlab and adds that directory to the places matlab will look for .m files in.) A function file begins with a line of the form function return-values = functionname(arguments) so that matlab will recognize it as a function. Each function needs to have its own file, and the file has to have the same name as the function. If the first line of the function is function answer = myfun(arg1,arg2) answer = (arg1+arg2)./arg1 then the file must be named myfun.m. The function has arg1 and arg2 to work with inside the function (plus anything else you want to define inside it, and possibly some global variables as well), and by the end of the function, anything that is supposed to be returned should have a value assigned to it. This particular function is just one line long, and it returns answer, which is defined in terms of the two arguments arg1 and arg2. nargin used within a function tells you how many arguments the function was called with. You can write functions that can accept different numbers of arguments and decide what to do based on whether it gets two arguments or three arguments, for example. eval lets you take a string and run it as a matlab command. For example, if I have to plot 20 similar data files for trials and I want to load each file and use the filename in the title, I can write a function that takes the filename as a string as an argument. To load it in the function, I can use str = ['load ' filename] to put together a command string, and eval(str) to run that command. Then to use the filename in the title, I can use str = ['title(''' filename ''')'] (the doubled quotes produce a quoted string inside the built command). feval evaluates a function for a given set of arguments. For example, feval('sin',[0:pi/4:2*pi]) is the same thing as saying sin([0:pi/4:2*pi]). If you're dealing with a situation where you might want to specify which function to use as an argument to another function, you might use feval.
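Putting nargin and eval together, here is a minimal sketch of the kind of plotting helper described above (the file name trial1.dat and the variable-naming convention are hypothetical examples; save the code as plottrial.m to match the function name):

    function n = plottrial(filename)
    % PLOTTRIAL  Load an ASCII data file, plot it, and title the plot with the file name.
    if nargin < 1                        % called with no argument: fall back to a default file
        filename = 'trial1.dat';
    end
    eval(['load ' filename]);            % builds and runs the command: load trial1.dat
    varname = strtok(filename, '.');     % "load trial1.dat" creates a variable named trial1
    data = eval(varname);                % fetch that variable by name
    plot(data);
    eval(['title(''' filename ''')']);   % builds and runs: title('trial1.dat')
    n = size(data, 1);                   % return how many rows were plotted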
<urn:uuid:24a931ce-0b8d-4d6e-a8f9-09848f83aa54>
3.34375
463
Documentation
Software Dev.
60.291292
<chemistry> A white crystalline aromatic hydrocarbon, C10H8, analogous to benzene, and obtained by the distillation of certain bituminous materials, such as the heavy oil of coal tar. It is the type and basis of a large number of derivatives among organic compounds. Formerly called also naphthaline. (01 Mar 1998)
<urn:uuid:349dcee8-c6bc-416d-a092-03cb6c01029d>
2.6875
92
Knowledge Article
Science & Tech.
28.222011
PINE trees near the Chernobyl nuclear plant in Ukraine are altering their DNA in response to the radioactive fallout from the reactor accident in 1986. The changes act as a defence mechanism that prevents the trees' genome from being destabilised by radiation. The explosion of one of Chernobyl's reactors showered trees in the surrounding area with a huge amount of radioactivity. Scots pines (Pinus sylvestris) ... To test the theory, Olga Kovalchuk from the University of Lethbridge in Alberta, Canada, with Andrey Arkhipov and Nikolai Kuchma from the Chernobyl Centre in Ukraine, conducted long-term experiments in which they planted uncontaminated Scots pine seeds in the highly contaminated soil where Chernobyl's radioactive ...
<urn:uuid:0dd371c1-7d75-4a3d-9dc1-4917c4887b6b>
3.390625
171
Truncated
Science & Tech.
33.930406
A story in a software project is essentially a single feature that has been defined through a study of the project before the first iteration. In an agile project, however, there is no need to analyze the project completely up front. It is important to note that when creating project tasks it is necessary to use Epics, Stories and sub-tasks. Epics are large user stories which are further dissected into several smaller user stories. It is advisable to use them if the software project has complex functionality. Agile and testing tools are also related: agile testing is a software testing practice that follows the principles of agile software development. Agile development further puts forth that testing is not a separate phase but an integral part of software development, along with coding. Agile development is a methodology that offers a new way of thinking about software development. In any agile project, tasks are the main unit of work, and so tasks need to be organized, allocated, scheduled and tracked; in this regard JIRA excels as one of the major agile tools. The agile methodology has revolutionized the way software is developed. It has helped teams deliver higher-quality code quickly, frequently and in better alignment with what customers look for. JIRA is a proper agile tool: it keeps you agile because it is a lightweight, web-based tool for task and issue tracking. To know more please visit us at http://www.rommanasoftware.com/
<urn:uuid:d7ed10b7-a27b-4984-94f8-a728147a001c>
2.8125
303
Truncated
Software Dev.
34.807857
Climate change is occurring as a result of human activity. Greenhouse gases are being emitted into the atmosphere, primarily from the burning of fossil fuels such as coal, oil or gas, to meet the needs of our modern lifestyles. Everyday things we take for granted – such as the heating and lighting in our homes, or our transportation systems – all release carbon dioxide into the atmosphere. Even the products we buy, from carpets to computers, produce emissions during their manufacture and transportation. Trees and woodlands play a crucial role in regulating our climate. They remove carbon dioxide from the atmosphere, storing it as carbon through the biochemical process of photosynthesis. Because they are such large organisms, trees are capable of absorbing and storing large amounts of carbon through this process. The carbon is held in the trunk, branches, leaves and roots of each tree, and even in the forest soil. A single tree can hold up to 4 tonnes of carbon. Scroll down for more information, or use the link at the foot of this page to download a set of posters which we have developed as part of our activities for the UN's International Year of Forests. (Also available as 9 separate files in our Discovery Zone.) The world's forests will play a very big part in providing a solution to climate change, and there are four key things we can do to help: 1. Manage our woodlands sustainably We must manage our woodlands carefully and maximise their ability to store carbon effectively. When trees are young they soak up carbon very quickly. As trees get older, carbon absorption slows down until it reaches a steady state. At this point a forest doesn't absorb any more carbon, but it has become a vast carbon reservoir. Good management of our forests means cutting down some trees to maintain a range of different tree ages. This maximises the absorption capacity of the whole woodland. Find out more about sustainable woodland management. 2. Protect the woodlands we already have Many of the world's forests are being destroyed. This not only means that we have fewer trees to absorb the carbon we produce, but it also leads to the release of all the carbon stored in them. Consequently, deforestation is responsible for the release of almost 6 billion tonnes of carbon dioxide (CO2) emissions every year. 3. Use more wood in place of high-carbon materials Wood has the lowest energy consumption of any commonly used building material. Replacing one cubic metre of concrete or red brick with the same volume of timber would save 1 tonne of carbon. There is only one major building material that uses the sun's energy to renew itself in a continuous sustainable cycle, and the only one that is renewable: wood. It uses less energy and produces less air and water pollution than the energy-intensive manufacture of steel and concrete. Not only this, but anything made from wood will continue to store carbon for hundreds of years! We can all make a significant contribution to climate change reduction by using wood in place of energy-intensive materials such as steel or concrete. 4. Use wood for fuel Climate change means we have to look for alternative sources of energy, and those produced by natural methods are the best to use. You may have heard about green energy such as wind or solar power, but burning wood is also a clean renewable energy source – providing the wood comes from sustainably managed woodland and is burned close to source to reduce transportation.
Extracting firewood and other wood from woodland is one way to ensure that woodlands are managed rather than neglected (because there are economic benefits in doing so). It also benefits the rural economy by providing local jobs and diversification opportunities for farmers and land owners. The Environmental Change Network (ECN), set up in 1992, is a multi-agency monitoring programme funded by a number of government departments. It aims to detect changes in the UK environment as a result of climate change. Measurements are taken in the atmosphere and in soil and water chemistry, and regular surveys of a number of sensitive plant and animal species are also carried out. Key water and terrestrial habitats are monitored, including woodlands. For more information visit www.eci.ox.ac.uk.
Attachments (click to download):
- Climate_Change_Fact_IYF_all.pdf (3.46 MB)
- Climate_Change_Fact_IYF_Lowres.pdf (1.72 MB)
<urn:uuid:67adc4d7-3e8e-4c33-b0a6-d65e659621b3>
4.125
912
Knowledge Article
Science & Tech.
44.522894
Dielectrics have the strange property of making space seem bigger or smaller than it looks. The dielectric constant value tells you how much smaller or bigger the space gets. It shows itself in two ways. First, when you put some dielectric between two electric charges it reduces the force acting between them, just as if you'd moved them apart. Secondly, the dielectric constant of a material affects how electromagnetic signals (light, radio waves, millimetre-waves, etc.) move through the material. A high value of dielectric constant makes the distance inside the material look bigger. This means that light travels more slowly. It also 'scrunches up' the waves so that they behave as if the signal had a shorter wavelength. For electromagnetic waves, just like the forces between charges, the dielectric warps the space to make it look a different size. Content and pages maintained by Jim Lesurf, using HTMLEdit on a StrongARM powered RISCOS machine. University of St. Andrews, St Andrews, Fife KY16 9SS, Scotland.
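In symbols (standard textbook relations, added here for reference; eps_r is the relative dielectric constant, and a non-magnetic material with mu_r close to 1 is assumed):

    F = (1 / (4 pi eps0 eps_r)) * q1 q2 / r^2    (Coulomb force, reduced by eps_r)
    v = c / sqrt(eps_r)                          (signal speed inside the material)
    lambda = lambda0 / sqrt(eps_r)               (wavelength 'scrunched up' by the same factor)

So a material with eps_r = 4 halves both the speed and the wavelength of a signal, exactly as if all the distances inside it had been doubled.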
<urn:uuid:bb7f646b-6466-4bb8-9828-d3577317b923>
3.625
242
Knowledge Article
Science & Tech.
50.942
If you have an hour, the drilling team gives a presentation and a Q and A. They explain the significance of the first non-Earth drilling. They've driven the Rover over to a flat area of rocks they call 'John Klein,' in a depressed region called 'Yellowknife Bay,' beyond Glenelg, which was originally a target point from the landing site. There's a group of likely fine-grained (siltstone or mudstone?) rocks on the Martian ground. They've photographed white veins in the rocks, among other features, and used the ChemCam to determine the veins are probably a calcium sulfate, which on Earth usually forms when water percolates through rocks, but they're still doing analysis. They're now using the drill for the first time: they did a test drill of 2 centimeters, and then drilled 6 centimeters down into these flat 'river'-looking rocks in John Klein. The Rover scooped up the material, and it's gray in color, unlike the surface, which has been reddened by iron oxidation. You can download these photos or view them as a slideshow, and the Rover team keeps updating them with each new tool they use and each new location they move the Rover. You can track the whole mission that way, in photos with captions explaining what's going on. Here's an animation of how the drill works (follow that link for all video updates).
<urn:uuid:49fb8b35-2c50-49fc-b141-14d57f9e96ba>
3.640625
306
Personal Blog
Science & Tech.
59.150909
The Klingon Birds of Prey were first introduced in Star Trek III: The Search for Spock. Two classes existed, the B'rel-class and the K'Vort-class, roughly scout and light cruiser classes. They were formidable ships, not only because of their lovable Klingon crews, forward torpedo launchers, and disruptor cannons, but because of their cloaking abilities*. Moving from the dark depths to the twilight zone**, any animal with cloaking abilities would rule the ocean. Well, maybe they wouldn't be like ocean royalty, but they would do alright. Nearly every group of animals has transparent brethren that live in the well-lit open ocean. In darker, deeper water, a majority of denizens are red or black. In both cases, this coloration, or lack of it, serves to cloak the animal. But what's an animal to do if it lives in between these zones (not a sharp boundary but a grey area full of scoundrels) or needs to migrate between the two? A red or black creature in lighter, shallower waters easily contrasts against the light coming from above. A transparent animal, finding itself in the deep, would be easily distinguishable by the direct light cast from another organism's bioluminescence. If only, like a Bird of Prey, an organism could switch its cloaking on and off. Two such creatures, the octopus Japetella heathi (right) and the squid Onychoteuthis banksii (left), can do exactly this. When you shine a direct light on the normally transparent Japetella heathi or Onychoteuthis banksii, mimicking a bioluminescent beam, its chromatophores are triggered, turning the animal opaque. But the octopus, like a crafty Klingon, is strategic in triggering the chromatophore response. Objects or shadows near the octopus did not trigger a response. Yet tactile stimulation (poking it with a probe, a big stick, or whatever is nearby) or blue light did activate the cloaking device. Both animals consistently reflected twice as much light when in the transparent mode compared with the pigmented mode. Indeed, in the cloaked state, the octopus was able to achieve the same reflectance as the red and black fishes and invertebrates of the deep. These cephalopods seem to understand the ancient Klingon proverb tugh qoH nachDaj je chevlu'ta', or A fool and his head are soon parted. Best be no fool and cloak often***. Sarah Zylinski, Sönke Johnsen (2011) Mesopelagic Cephalopods Switch between Transparency and Pigmentation to Optimize Camouflage in the Deep. Current Biology Vol. 21, Issue 22, pp. 1937-1941. *I would also note that early Klingon Birds of Prey also had sweet submarine-style periscopes. **I'm not referring to young emo vampires either, although maybe cloaking is useful against them as well. ***Not really related to this post, but I feel compelled to say KKKKKKKHHHHHHAAAAAANNNNNN!!!!!!!
<urn:uuid:dfc13dd2-727d-492b-8dba-77ecc9227854>
3.3125
663
Personal Blog
Science & Tech.
51.900443
Devices use signing information to check an application's source and validity before allowing it to access protected APIs. For test purposes, you can create a signing key pair to sign an application. The keys are as follows:
- A private key that is used to create a digital signature, or certificate.
- A public key that anyone can use to verify the authenticity of the digital signature.
You can create a key pair with the Keystores Manager as described in Managing Keystores and Key Pairs.
<urn:uuid:5e12b926-06dc-46f3-abe3-d37287a6efde>
2.734375
99
Documentation
Software Dev.
35.690143
Migratory Shorebirds of the East Asian - Australasian Flyway: Population estimates and internationally important sites By M. Bamford, D. Watkins, W. Bancroft, G. Tischler and J. Wahl Wetlands International - Oceania, 2008 ISBN 9789058820082 - Migratory Shorebirds of the East Asian - Australasian Flyway: Population estimates and internationally important sites (PDF - 10.7 MB) Report by chapter - Acknowledgements and Summary (PDF - 288 KB) - Introduction (PDF - 1.3 MB) - Methods (PDF - 201 KB) - Overview (PDF - 446 KB) - Species accounts (part 1 of 3) (PDF - 2.6 MB) - Species accounts (part 2 of 3) (PDF - 2.5 MB) - Species accounts (part 3 of 3) (PDF - 1.9 MB) - Country accounts (PDF - 2.7 MB) - References (PDF - 185 KB) - Appendices (PDF - 267 KB) About the document Migratory shorebirds present a particular conservation challenge because their patterns of movement take them across international boundaries, in some cases almost spanning the globe. They utilise different sites in different countries at different times of the year, and conservation of these species therefore requires the management of the suite of sites that are important to them. To identify important sites requires count data and population estimates to put those count data into perspective. The need for this information in the East Asian - Australasian region was identified in the Asia-Pacific Migratory Shorebird Action Plan, and Wetlands International undertook to implement this component of the Plan through this review. This review therefore aimed to: - Develop population estimates for shorebirds in the East Asian - Australasian (EAA) Flyway; - Identify sites of international importance for migratory shorebirds in the EAA Flyway. This review is the first time that the identification of sites of international importance for migratory shorebirds across the EAA Flyway has been conducted.
<urn:uuid:bb2e4a8d-bd97-4e35-82a1-1c4dae950449>
3.21875
447
Academic Writing
Science & Tech.
42.26191
Digital Classroom Resources: Projectile Motion Simulation C. Jay Hutchings and Nadina Duran-Hutchings This Flash Forum "Sharing Area" article presents a game in which the user must determine the correct angle and initial velocity for a projectile that must hit a roving target. The mathematics involved is primarily algebra derived from simple calculus concepts, and it is explained in help files included with the applet. The Flash source code can be downloaded from the "Sharing Area" of the MathDL Flash Forum.
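The algebra in question is presumably the standard projectile-motion relations (textbook kinematics, stated here for reference rather than taken from the applet's help files): for launch speed v0 at angle theta, with gravity g and no air resistance,

    x(t) = v0 cos(theta) t
    y(t) = v0 sin(theta) t - (1/2) g t^2

so on level ground the projectile lands at range R = v0^2 sin(2 theta) / g, and hitting a target at distance R means choosing an angle/velocity pair that satisfies this equation.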
<urn:uuid:e53d718b-2ca0-49b7-aef9-57b70f9f1b3b>
3.40625
124
Content Listing
Science & Tech.
24.042702
Loci: Convergence: Reductio ad absurdum, which Euclid loved so much, is one of a mathematician's finest weapons. It is a far finer gambit than any chess play: a chess player may offer the sacrifice of a pawn or even a piece, but a mathematician offers the game. A Mathematician's Apology, London, Cambridge University Press, 1941. The Enigmatic Number e: A History in Verse and Its Uses in the Mathematics Classroom Links to Resources: Biographies and Topics The hyperlinks to biographical information in the poem's text are from the excellent source: J.J. O'Connor and E.F. Robertson, The MacTutor History of Mathematics Archive, University of St. Andrews, Scotland. Additional biographical and historical links are provided below for variety and pedagogical usefulness. Jost Bürgi (1552-1632) Christian Goldbach (1690-1764) Euler's correspondence with Goldbach: Lettre XV, November 25, 1731 Charles Hermite (1822-1901) Christiaan Huygens (1629-1695) Gottfried Leibniz (1646-1716) Nicolaus Mercator (1620-1687) Gregorius Saint-Vincent (1584-1667) Brook Taylor (1685-1731) Wikipedia (http://en.wikipedia.org) is a rich online source of information about most of the topics mentioned in the poem. Unfortunately, Wikipedia's open-editing policy makes the accuracy of the information on its site uncertain. For this reason we provide links to different online sources. Whenever possible, we have chosen sites where the information is presented with students and educators in mind. Derangements and the Hat-Check Problem; i (imaginary unit); Logarithm and natural logarithm (ln); Roots of unity; Series (infinite sums). Glaz, Sarah, "The Enigmatic Number e: A History in Verse and Its Uses in the Mathematics Classroom," Loci (April 2010), DOI: 10.4169/loci003482
<urn:uuid:0e96c816-302b-409c-b470-6e0286ac9a45>
2.734375
463
Content Listing
Science & Tech.
42.170117
I know radon is harmful and a threat to health, but are there any uses for it? Also, do you know what scientists were involved in the research of it? Try contacting Commonwealth Edison; they have lots of information about radon, especially now, and should be of help to you. Noble gases are characterized by their lack of chemical reactivity. However, radon (a noble gas) should be even more reactive than xenon, a noble gas that has been caused to undergo a chemical reaction. Given that radon is radioactive and has a half-life of only about 3.82 days, it doesn't hang around long enough to be converted into any kind of chemical compound. Other than the fact that it is a pesky air pollutant, I can't think of any practical use for radon. Perhaps another Newton scientist could add something regarding radon's utility.
<urn:uuid:6c05e08a-4c6a-4b26-840b-0c579ddaebd3>
3.03125
206
Comment Section
Science & Tech.
53.224088
The Most Complete List of Solar Energy Facts on the net! Ok, so let's state the obvious: the Sun's energy is responsible for much of the continuation of life on earth. Don't believe me? Look at plants and plankton. They make our oxygen, which creates ozone, which protects us from the sun. Plants create most of the food on the planet for everything from whales to grasshoppers. Even the dead dinosaurs that we burn in our cars (fossil fuel) are derived from animals which once lived off of plants. Today, much of our food, lumber and fuel is derived from plants. Let's face it: photosynthesis is awesome. But will we ever see solar energy become one of our primary sources of power? Here are some interesting facts about the power of the sun and our modern photovoltaic systems. Cool Video About Solar Potential How Does Solar Energy Work? There are two types of solar energy technology currently in use: photovoltaic and thermal. Photovoltaic energy is created when sunlight photons hit a solar panel, knocking electrons in the material loose and creating an electrical current. Here is a more in-depth explanation I've written on how solar panels work. Solar thermal energy is turning out to be much easier to harness. We all know the sun gives off heat, so solar thermal energy is simply harnessing that heat via collectors and using it to power solar-powered electricity plants, roof-mounted hot water heaters and solar pool warmers. Solar thermal energy actually seems to hold some of the largest potential as far as electrical production goes. Some of the largest electrical plants currently in the nation — for that matter, in the world — are powered by solar thermal energy, as opposed to by photovoltaic cells. If you are looking for Solar Energy Facts for Kids, try scrolling down towards the bottom of the page. Hopefully some of these facts and ideas will help with lesson plans. Interesting Facts About the Sun - About one million Earths could fit inside the Sun. - Sunlight takes 8.3 minutes to reach earth's outer atmosphere. The sun could disappear and we would still receive sunlight for 8.3 minutes. - In one hour, enough sunlight strikes the Earth to provide the entire planet's energy needs for one year. - Fossil fuel is a form of stored solar energy: biomass that once captured solar energy has been changed into fuel by earth's geologic activity. - About 50% of that sunlight reaches earth's surface and is absorbed; approximately 30% of the sunlight is reflected back by the earth. - About 1,366 watts per square meter of energy reaches the earth's atmosphere. Depending on location, about 1,000 watts per square meter reaches the ground. Photovoltaic Solar Panel Facts - Solar panels are considered to be somewhat inefficient, as they can only convert a maximum of about 20% of the sunlight they are presented with. By comparison, a gasoline engine is about 18% efficient and a diesel engine is nearly 50% efficient. Solar thermal energy is approaching 30% to 40% efficiency using Stirling engines, and anywhere from 25% to 35% using solar-powered steam generators. - Solar panels will not reach grid parity — matching the cost of current energy production — until panels cost about $1 per watt to build. - Some estimates put solar panel output at about 10 watts per square foot. - A highly efficient solar panel (one square meter in size) can produce up to 1.4 kWh/day depending on location.
The further you are away from the equator, the less sunlight (insolation) your area will receive. - Depending where you live, you may need more solar panels to account for shorter days and less sun. See our Full Solar Report for more calculations. Using The Sun's Heat To Generate Power - According to Schott, a German-based solar manufacturer, solar thermal energy costs about 15 cents per kilowatt-hour. That is only about 5 cents more per kilowatt-hour than conventional energy typically costs. - The efficiency of solar thermal electrical plants increases the more concentrated the heat is. - Solar thermal power plants typically use either parabolic systems to collect heat, or a mirror array to concentrate the power on a main power tower. - Concentrated solar power plants are more efficient; however, the costs of creating tracking systems for the solar panel arrays are also higher, and material must be used that can handle temperatures above 800 degrees Celsius. Facts About Home Solar Energy Systems - More than 10,000 homes in the United States have solar energy installed. - Solar panels cost about $3-4 per watt for a grid-tied system. Stand-alone systems with full battery backups are considerably more. Complete grid-tie kits (with the required inverter!) can be purchased from retailers such as Amazon and Sams Club at about $4 per watt. - Solar panels typically increase the value of your home. Homes in Southern California that have solar panels installed currently see an increased home value of approximately $5.50 per watt of solar energy. That means that a solar panel system that costs $20,000 to install could bring a return of $30,000+ (for a 5 kilowatt system). - There are many government and state programs that provide incentives and reimbursements for installing solar panels. - Depending where you live, you may need more solar panels to account for shorter days and less sun. See our Full Solar Report for more calculations on solar concentrations. (A rough worked sizing example appears at the end of this page.) Solar Energy Vehicle Racing - Solar-powered vehicle races have been around since 1985. - The World Solar Challenge and the North American Solar Challenge are two of the largest such races held each year. - The World Solar Challenge is in Australia and covers 1877 miles across the continent. The solar-powered cars exceed 90 mph! Largest Solar Plants In The World - The largest photovoltaic power plant is in Sarnia, Canada. It has a 97-megawatt capacity. Most large power plant installations instead rely on solar thermal energy due to its efficiency. - The world's largest solar energy station is currently the collection of 9 power plants called the Solar Energy Generating Systems (SEGS), developed in the 1980s out in the Mojave desert, with a production capacity of 354 megawatts (MW). - Spain also has two large power plants. The Solnova plant is three 50 MW towers for a total potential of 150 MW. The Andasol plant takes the title as second-largest behind the SEGS facility with five 50 MW plants for a total output potential of 250 MW. Facts About Current US Energy Usage - 8% of all energy usage in the United States is created by alternative energy. - Solar energy only accounts for 0.2% of that. - Nuclear electric power – 9% - Coal power – 21% - Natural gas – 25% - Petroleum – 37% Facts About Current US Alternative Energy Usage - Renewable energy accounts for 8% of the United States energy usage. It is broken down as below: - Of the alternative energy sources, solar energy creates about 1% of that energy.
- Geothermal energy accounts for about 5% - Wind energy is 9% - Biofuels are 20% - Wood is 24% - Hydropower is 35% - The United States is second in producing renewable energy, behind China. History of Solar Power - In the 15th century, Leonardo da Vinci was one of the first to design an industrial use for the sun. He designed a system of concave mirrors to heat water. - The photovoltaic properties of silicon were demonstrated in 1954 by Bell Laboratories. The first functional photovoltaic panels were developed for use on satellites. Experiment Ideas For Kids – Demonstrate the Power of Sunshine! The sun is VERY dangerous, especially when working with mirrors and magnifying glasses. Children should never be allowed to handle equipment that could be hot or redirect sun rays, as blindness and burns could easily result. thesolarenergyfacts.net takes no responsibility for any damages. As with any good experiment, make sure that yours has a control, an independent variable and a dependent variable. The Sun provides about 300 BTUs of heat per square foot per day. You might try experiments that demonstrate this heat: - Cooking eggs on a sidewalk, or on a frying pan laid in the sun. - Using a magnifying glass to ignite paper. - Putting thermometers under glass, dark paper, light paper and in open air to compare results. For a more visual effect you could use ice cubes instead of thermometers to show the differences. - More Resources for Teaching About Sunlight
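To see how the ballpark figures quoted above fit together, here is the rough sizing arithmetic promised earlier (the 5 kW system size, the $3.50-per-watt price and the 5 full-sun hours per day are illustrative assumptions, not measured data):

    System cost:  5,000 W x $3.50/W = $17,500
    Panel area (at ~10 W per square foot):  5,000 W / 10 W/sq ft = 500 sq ft
    Daily output (at ~5 full-sun hours):  5 kW x 5 h = 25 kWh per day

At the roughly 15 cents per kilowatt-hour quoted above, 25 kWh/day is worth about $3.75 a day, which is the kind of number to weigh against the up-front cost.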
<urn:uuid:c7c33218-4702-4b05-b74a-e1d2089c4fa6>
3.453125
1,857
Content Listing
Science & Tech.
46.393812
Global warming has felt like breaking news a few times in recent years, but the first big pulse of coverage and public attention came in 1988. A blog about climate change, the environment and sustainability. How can we protect the planet for our children? Andrew C. Revkin looked at the latest research on global warming for AARP The Magazine. Q&As With Andrew C. Revkin: The Intergovernmental Panel on Climate Change released its latest report on Feb. 2, which provided a grim and powerful assessment of the future of the planet. The Times' Andrew C. Revkin answered readers' questions and responded to comments. In a series of articles, a team of Times reporters described how the world is, and is not, moving toward a more secure, and less environmentally damaging, relationship with energy. Several of the writers responded to questions and comments. Bill Clinton sits down with New York Times reporter Andrew C. Revkin after announcing his new plan to fight global climate change at the Large Cities Climate Summit in New York. Malawi, India, the Netherlands and Australia will experience global warming in very different ways. Science reporter Andy Revkin examines the long-term social consequences of rising temperatures and seas around the globe. Dr. James Hansen, NASA's top climate scientist, says the Bush administration tried to stop him from talking about emissions linked to global warming. Your search for WETLANDS in Global Warming returned 14 articles. Global warming caused by industrial pollutants in the atmosphere is likely to shrink forests, destroy most coastal wetlands, reduce water quality and quantity in many areas and otherwise cause extensive environmental disruption in the United States over the next century, according to a draft report by the Environmental Protection Agency. (October 20, 1988, U.S. News) FRANCIS MARION, an American general in the Revolutionary War, bedeviled the British with his surprise attacks from the marshy areas of South Carolina. The short, sickly-looking fellow was among our earliest partisan fighters, pioneering guerrilla tactics against regular troops in skirmishes at Tearcoat Swamp and Halfway Swamp. (March 4, 1990, Magazine) New York City will be hit hard by the effects of global warming over the next century, as the sea level rises and washes away beaches, floods the subways and creates new wetland areas in Brooklyn, Queens and Staten Island, an environmental group said yesterday. The group, the Environmental Defense Fund, working in collaboration with Columbia University researchers, released a report yesterday called "Hot Nights in the City," on what it considers the likely effects of the widely forecast ris... (June 30, 1999, New York and Region) LONG ISLAND should prepare now for the incremental effects of global warming, planners and scientists said last week in response to an international report that offered some dire predictions.
The United Nations-sponsored report and a regional analysis of it project that warming may submerge wetlands, batter beaches and coasts, flood low-lying areas, increase rainfall and bring more storms and hurricanes within the next several decades if current trends continue. (March 4, 2001, New York and Region) Scientists and state officials are considering a broad diversion of the Mississippi River into Louisiana's sediment-starved marshes. (September 19, 2006, Science) The Democrats' return to power in Congress has raised hopes that progress can be made on vital matters like global warming, oil dependency, national parks and threatened wetlands. (January 1, 2007, Opinion) Global warming is drying up mountain lakes and wetlands in the Andes and threatening water supplies to major South American cities including La Paz, Bogotá and Quito, World Bank research shows. The risk is especially great to an Andean wetland habitat called the paramo, which supplies 80 percent of the water to Bogotá's seven million people. Rising temperatures are causing clouds that blanket the Andes to condense at higher altitudes. Eventually this so-called dew point will miss the mountains... (July 21, 2007, World) Water levels in the three upper Great Lakes are wavering far below normal, and experts expect Lake Superior to reach a record low in the next two months. (August 14, 2007, U.S.) If approved, the law would prohibit the importation of fuels derived from crops grown on certain kinds of land — including forests, wetlands or grasslands. (January 15, 2008, Business) Ecologists fear that global warming will make protected landscapes inhospitable to prized species. (January 29, 2008, Science) Sea level rise fueled by global warming threatens the barrier islands and coastal wetlands of the Middle Atlantic states, a federal report warned. (January 17, 2009, U.S.) As sea levels rise, tidal flooding is disrupting life in Norfolk and all along the East Coast, a development many climate scientists link to global warming. (November 26, 2010, Front Page; also January 24, 2012, Science Blog) New York's elected officials are talking about a topic that has been close to unmentionable during the presidential campaign. (November 2, 2012, N.Y. / Region)
<urn:uuid:35971728-be26-4cb7-8146-ea5a48e1a0aa>
2.75
1,570
Content Listing
Science & Tech.
50.061009
Since I am a Java guy I am also posting this query in the Java forum. When we use the -genkey argument, the keytool "generates a key pair (a public key and associated private key). Wraps the public key into an X.509 v1 self-signed certificate, which is stored as a single-element certificate chain". When we use the -selfcert argument, the keytool "generates an X.509 v1 self-signed certificate, using keystore information including the private key and public key associated with alias". If -genkey generates a self-signed certificate, what does -selfcert do? I can't understand what actually happens between -genkey and -selfcert. What does self-sign mean in both cases? -genkey generates a private and public key in addition to creating a cert. -selfcert creates a cert using a specified, already existing key. A self-signed certificate means that the certificate chain does not lead to a Certification Authority (CA) who validates you are who you say you are. A user who encounters a self-signed cert in an applet or web server will be notified that the certificate is questionable. Have a look at the page you linked, in the section marked "Certificate Chains", for more.
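For concreteness, a typical sequence looks roughly like this (the alias, keystore name, distinguished name and validity period are made-up values; the flags themselves are standard JDK keytool options):

    keytool -genkey -alias mykey -keyalg RSA -keystore my.keystore -validity 365
        (creates the key pair and wraps the public key in a first self-signed certificate)

    keytool -selfcert -alias mykey -keystore my.keystore -dname "CN=New Name, O=Example" -validity 365
        (generates a replacement self-signed certificate for the same, already existing key pair)

So -genkey means "make keys, plus a starter certificate", while -selfcert means "the keys already exist, regenerate the certificate", e.g. after changing the distinguished name or extending the validity period.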
<urn:uuid:4cdd0d56-ef7c-4712-80d9-c94d3c4f3bf6>
3.109375
260
Q&A Forum
Software Dev.
49.784
http://ecoworldly.com/2008/10/07/sci...greatest-lake
In a report published in the journal Nature, researchers from Tokyo's Institute of Technology and the Swiss Federal Institute of Aquatic Science and Technology have observed the cichlid evolve into a new species better adapted to sighting its prey and predators. Researchers looked at two species, conspicuous by their red or blue colours. They determined through lab experiments that certain genetic mutations helped some fish adapt their vision at deeper levels to see the colour red, and others in shallower water to recognise shades of blue. The researchers showed that the eyes have adapted to this difference, so that fish that live in deeper water have a pigment in their eyes that is more sensitive to red light, while shallow-water fish were sensitive to blue. I thought DF would find this article interesting. I think cichlids are already interesting because of their close relationship with saltwater fish.
<urn:uuid:3ba00206-808e-466a-a822-98d61dca0ad3>
3.5
196
Comment Section
Science & Tech.
45.18875
From: Eirik Berg
I see that you are interested in how to colonize and live on Mars. Some time ago, I read a suggestion about using some of the large lava tunnels which are suspected to be widespread on the planet to do so; an area is closed at both ends and isolated from within, and optical fibres attached to panels of lenses are used to lead sunlight down under the surface. But since then, not much has been mentioned about this suggestion. Are they still toying with it, or do they stick to capsules on the surface?
From: Greg Bear
It's a cool idea, but we don't know much about the extent or nature of whatever lava tunnels there might be on Mars. We're just going to have to go there and pull out the ol' ladder and rope kit and flashlight. I'd recommend contacting the Mars Society and reading Robert Zubrin's books to follow up on Mars colonization ideas.
From: Eirik Berg
Thanks for the reply. Yes, of course humans would first have to establish themselves before they move on to the next step. But after that, it could really start to become exciting. Found some info about it at last: And the sunlight collected from the panels above could be used for other things than just giving light to the plants and solar cells as well. Some of it could be focused on black, light-absorbing material, raising the temperature. When a certain temperature had been reached, cells of wax would melt, absorbing the heat. During the night the wax would "freeze" again, releasing the heat into the caves. As the article says, it's ironic that humans would leave earth and settle down on Mars just to become cavemen. That is, if there really are such lava tubes with the desired potential on the red planet.
<urn:uuid:87f62612-2af1-4663-8b95-be7d9421dd6a>
2.78125
373
Comment Section
Science & Tech.
59.403344
[Haskell-cafe] how to print out intermediate results in a recursive function?
qiqi789 at gmail.com
Sat Feb 4 19:23:07 CET 2012
I have a question: how can I print out the intermediate number lists in a mergesort recursive function like the following one?

merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys) = if x <= y then x : merge xs (y:ys)
                                else y : merge (x:xs) ys

mergesort []  = []
mergesort [x] = [x]
mergesort xs  = let (as, bs) = splitAt (length xs `quot` 2) xs
                in merge (mergesort as) (mergesort bs)

main = do
  print $ mergesort [5,4,3,2,1]

In the main function, it only prints out the final number list. But I'd like to print out the number lists at every recursive level. How can I do that? Thanks.
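(Editor's note, not part of the original thread: the usual answer is to wrap the recursive result in Debug.Trace.trace, which prints its string argument as a side effect whenever the value is forced. A minimal sketch:)

import Debug.Trace (trace)

-- mergesort, printing each level's merged result as it is produced
mergesort :: (Show a, Ord a) => [a] -> [a]
mergesort []  = []
mergesort [x] = [x]
mergesort xs  =
  let (as, bs) = splitAt (length xs `quot` 2) xs
      merged   = merge (mergesort as) (mergesort bs)
  in  trace ("merged: " ++ show merged) merged

merge :: Ord a => [a] -> [a] -> [a]
merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys)
  | x <= y    = x : merge xs (y:ys)
  | otherwise = y : merge (x:xs) ys

main :: IO ()
main = print (mergesort [5,4,3,2,1 :: Int])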
<urn:uuid:a33c77fb-5352-457a-81f3-52f14495b6bf>
2.703125
260
Comment Section
Software Dev.
77.763571
Jurassic Park: fact or fiction? Can the steps taken in Jurassic Park (the movie) be done in real life? I can only give a personal observation here. We are really, at this time, on the cutting edge of DNA research regarding these topics. More often, as the scientist in the movie stated, we get so involved in the 'could' of DNA research, but we must continue asking ourselves if we 'should'. There are clearly moral/social issues involved in DNA research, and, given the ever increasing costs of research versus ever dwindling research dollars, I suspect research geared more towards improving our existence, rather than bringing back ancient creatures for reasons which cannot be clearly shown to benefit our lives now, will be followed. I suspect that given a pure preserved sample of dinosaur blood, perhaps from an amber-entombed mosquito, and given the technology of 'filling in' missing DNA segments, the cloning would indeed be possible. For what purpose, however? Better, in my opinion, to spend limited dollars on finding genetic causes of birth defects, heart disease, and cancer, along with a host of other debilitating illnesses like arthritis and diabetes. Thanks for using NEWTON! I think that Ric is over-optimistic in thinking that the Jurassic Park experiments could be made to work now. Even if we did clone all of the genome of a dinosaur - and remember, we haven't yet done this for any vertebrate species that is readily available, much less one with a very limited sample - we would not know how to put those pieces together into the correct order; the order is key to proper regulation, which is key to proper development. No, I don't think we're even close to that. Ric's other comments on research priority and resources are more of a moral or political nature, and are worth careful consideration. I agree that bringing back extinct species is not currently possible. I also do not believe it will be possible in the foreseeable future. Even if the information in the DNA could be retrieved and 'filled in' (which is not currently possible), we would still not have the developmental information that is stored in the female gamete, or egg. This information comes from the mother and requires the existence of a previous generation. Remember that the experiments in which cattle have been cloned involved transplanting nuclei from one individual into egg cells in which the nucleus has been destroyed. The developmental information in the egg cell is crucial to normal development. Unless an intact egg cell could be found or synthesized, Jurassic Park will remain on hold.
<urn:uuid:bc4b2aef-11b6-4fa4-9ce3-99d7d5866bba>
2.765625
578
Comment Section
Science & Tech.
39.46231
Abstract - Variability in the Responses of Black-billed Magpies to Natural Predators Buitron, D. 1983. Variability in the responses of black-billed magpies to natural predators. Behaviour 87. pp. 209-233. Encounters between black-billed magpies (Pica pica) and a variety of natural predators were observed during 3 breeding seasons in Wind Cave National Park, South Dakota. Raptors were the most frequently encountered potential predators, with magpies reacting more strongly to falcons than to hawks. Reactions to crows and squirrels were most frequent and intense during laying and incubation, while raptors in flight and coyotes were responded to most vigorously during the second half of the nestling period and the first two weeks of fledging. Perched raptors were almost always mobbed vigorously. Diving to within 2 inches of a predator appeared to be effective in driving it away. The roles of chasing and alarm calling were less clear, but in addition to alerting mates and offspring to danger, such behavior would impede efficient hunting by the predator and so might contribute to its departure.
<urn:uuid:7f4cb008-0dcb-4716-8ec0-0ffbfa4dd1f8>
3.296875
277
Knowledge Article
Science & Tech.
47.942145
…continued
Observing from the City
City and suburban observers gained a new claim to the deep sky when nebula filters were developed in the late 1970s. These function on a straightforward principle. Emission nebulae give off light at narrow wavelengths that differ from those of sodium- and mercury-vapor streetlights. By using a multilayer interference filter, the spectrum of visible light can be cut finely enough to separate these wavelengths. The result is a much darker sky, somewhat dimmer stars and galaxies, and only slightly dimmer planetary and emission nebulae. This enhanced contrast can, in many circumstances, more than make up for the relatively small amount of light lost from the nebula, and so it stands out more clearly. These filters do not bring country skies to the city, but they do help. One technique for detecting nebulae, especially tiny planetaries, is "blinking" with the filter. Hold it at the eye and move it rapidly in and out of the line of sight; a nebula will blink relative to the surrounding stars. Alternatively, blinking can be done by tilting the filter back and forth while looking through it, since it loses its effectiveness when at an angle. Several nebula (or "light pollution") filter designs are available. They use somewhat different strategies for different types of objects and conditions. The biggest promise that technology holds out for those who can afford it in both money and time is the CCD camera. By 2000, CCD (charge-coupled device) cameras had taken over and vastly expanded high-end amateur astronomy, and their prices are declining every year. A CCD camera has two enormous strengths. First, the CCD chip is many times more sensitive to light than either your eye or photographic film. Second, it feeds a digitally recorded image from the telescope directly into your computer, where the image can be enhanced, analyzed, measured, and manipulated. The most important manipulation is the ability to subtract away an extremely light-polluted background, as if by magic, with hardly any loss of data. An 8-inch telescope can now record 15th- or even 16th-magnitude stars in the worst city light pollution or moonlight. This is several times fainter than the same telescope can show stars to the eye under black, mountaintop conditions! Drawbacks to CCDs include the very small field of view, the difficulty of aiming this field where you want, and problems of focusing. The equipment may be temperamental; the telescope mounting must be as rigid and controllable as for long-exposure astrophotography. And, of course, you're looking at a computer screen, not stars. It has been said that CCD astronomy is about working with equipment and computers, not skygazing. The most important advance that CCDs represent is the science that can be done with the recorded images. For much of the 19th century, amateurs were almost on a par with professional astronomers in terms of the useful science they could do. Then amateurs fell very, very far behind, but now CCD cameras in dedicated hands are making up some of this lost ground. Amateurs are discovering asteroids in great numbers, performing professional-quality variable-star studies, detecting the 19th-magnitude optical afterglows of gamma-ray bursts near the limits of the observable universe, taking spectra of stars and galaxies, imaging the planets more finely than was once thought possible, and much more. No machine, however, will ever replace the simplicity and delight of examining the stars directly, as a part of living nature.
Duck and Cover
"Light pollution" is the glow in the sky itself. It should not be confused with local lights that shine directly into the observer's eyes. Local lights are more aggravating but easier to defeat. Many observers have cooperative neighbors who turn off outdoor lights on request. A good way to break the ice on this issue is to offer views through your telescope. If you can't observe in the shade of trees or walls, you might rig a tarpaulin to shield your site. Max Wyssbrod lives in Lucerne, Switzerland, which he calls "the brightest country in Europe." His "cloth observatory" consists of four aluminum poles 10 feet long that fit into tubes cemented into the ground in a 10-foot square. The four walls are black cloth; guy ropes add stability. The whole rig, along with an 8-inch Schmidt-Cassegrain telescope, takes 15 minutes to set up. Another strategy is to shield only your eye and the back end of the telescope. An old-fashioned photographer's black cloth or equivalent, or a cape that can be thrown up over your head, does the trick. Any telescope in bright local lights should also have a long dewcap or side shield to keep the light out of the tube. Eyepieces should have rubber eyecups. "I use a black hood, and blinders I made from cardboard fitted to each side of my face," writes Charles Haun of Morristown, Tennessee. "This works quite well." Hiding under cloth and wearing blinders may seem an ignominious way to experience the glories of the cosmos. But such is the garb that amateur astronomers shall increasingly wear as they march bravely into the future.
<urn:uuid:ce245eeb-0c38-4970-bd7d-4a58d70d0676>
3.390625
1,105
Tutorial
Science & Tech.
46.152218
Potential Flow Around a Sphere
The potential (inviscid) flow around a sphere is equivalent to the superposition of a 3D doublet and a free stream, which reduces to the simple analytic equation:
Cp = 1 - (9/4) cos^2(theta)
where Cp is the pressure coefficient and theta is the angle measured perpendicular to the flow direction.
Surface Pressure Coefficient Contours. Pressure coefficient comparison between potential flow theory for a sphere and computation. Notice the excellent agreement between the computation and the theory.
Try For Yourself
The most convenient way to view and edit this case is to use our Professional add-on that combines all the add-ons used during this example.
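For reference, the derivation behind that equation (standard potential-flow results, added here; phi denotes the angle measured from the stagnation point): the surface speed of the doublet-plus-free-stream solution is

    V = (3/2) U sin(phi)

so the pressure coefficient on the surface is

    Cp = 1 - (V/U)^2 = 1 - (9/4) sin^2(phi)

With theta measured perpendicular to the flow direction, as defined above, sin(phi) = cos(theta), which recovers Cp = 1 - (9/4) cos^2(theta). The minimum, Cp = -5/4, occurs at the sphere's equator.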
<urn:uuid:4453595e-a845-45cc-9290-9269e84d2eb9>
2.84375
151
Tutorial
Science & Tech.
30.156067
A parsec is the unit for expressing distances to stars and galaxies, used by professional astronomers. It represents the distance at which the radius of the Earth's orbit subtends an angle of one second of arc; thus a star at a distance of one parsec would have a parallax of one second, and the distance of an object in parsecs is the reciprocal of its parallax in seconds of arc. For example, the nearest triple-star system, Alpha Centauri, has a parallax of 0.753 second of arc; hence, its distance from the Sun and the Earth is 1.33 parsecs. One parsec equals 3.26 light-years, which is equivalent to 3.09x10^13 km (1.92x10^13 miles). In the Milky Way Galaxy, wherein the Earth is located, distances to remote stars are measured in terms of kiloparsecs (1 kiloparsec = 1,000 parsecs). The Sun is at a distance of 8.5 kiloparsecs from the centre of the Milky Way system. When dealing with other galaxies or clusters of galaxies, the convenient unit is the megaparsec (1 megaparsec = 1,000,000 parsecs). The distance to the Andromeda Galaxy (Messier 31) is about 0.7 megaparsec. Some galaxies and quasars have likely distances on the order of about 3,000 megaparsecs, or 9,000,000,000 to 10,000,000,000 light-years. Excerpt from the Encyclopedia Britannica without permission.
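The kilometre figure follows directly from the definition (a standard conversion, added for clarity):

    1 pc = 1 AU / tan(1 arcsec) ≈ 206,265 AU ≈ 206,265 x 1.496x10^8 km ≈ 3.09x10^13 km

and the distance rule quoted above is simply d (parsecs) = 1 / p (arcseconds), e.g. d = 1/0.753 ≈ 1.33 pc for Alpha Centauri.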
<urn:uuid:1db818e4-3b58-4497-b811-f153334bba4b>
4
333
Knowledge Article
Science & Tech.
66.570473
How do you measure a field, such as an electric or magnetic field? The field itself is of course not visible. But you can see the effects of a field and use that for the visualization. For example, in the case of magnetic fields a nice high-school type of experiment is to use iron filings sprayed around a magnet. The filings align along the field lines (so that the force on them is minimal), and this alignment makes the field lines visible. Although this kind of visualization technique works very well for larger objects, things get trickier if the fields are concentrated within the tiniest spots, only a few nanometers large. There aren't many objects that could be used to measure fields on that scale. Yet Xiang Zhang and colleagues from the University of California, Berkeley, have achieved exactly that and developed a method capable of measuring electromagnetic fields down to an area only 15 nanometres in size. The fields they measure are so-called hotspots that form at the surface of metals or around metallic nanostructures such as nanoparticles or nanoscale bow-tie structures. There, collective movements of the electrons – the surface plasmons – can create huge electromagnetic fields. This is very much in analogy to the way any other antenna works: oscillating electrical currents create electromagnetic radio waves. On the surface of metal nanostructures the same happens, but because the geometry is so small the effect is much larger. These hotspots are very efficient antennas indeed. Light that is directed on the hotspot with a frequency close to the resonance of these plasmons is very tightly focussed into this tiny space. If a molecule happens to be in that hotspot, the interaction with this highly concentrated light is very strong. It is no surprise that this effect is explored for sensing applications such as surface-enhanced Raman spectroscopy (SERS). The efficiency of SERS and other applications, however, depends on the precise distribution of electromagnetic fields at these hotspots. It is a somewhat surprising drawback that the electromagnetic field of these hotspots could not be measured until now. In their experiments, Zhang and colleagues use the ultimate analogue of iron filings to visualize the electromagnetic field of hotspots: single molecules. They take a dilute solution of dye molecules and pour this over the hotspots. Then, they turn the light on. Due to the antenna effect of the plasmons the light gets focussed into the hotspot, where it excites the dye molecules and causes them to emit light. But the dye molecules can only emit light for a little while before their electronic states become saturated. Each dye molecule only sends out a brief flash of light before it goes dark again for a certain period of time. The researchers made sure that the dye solution is so dilute that on average only one molecule at a time emits a flash, which they are able to locate with an accuracy of 1.2 nanometres. Furthermore, due to the random movement of the dye molecules, the flashes always occur at different locations. The intrinsic light emission of each molecule is the same all the time, but the intensity the camera sees depends strongly on the antenna effect of the hotspot, which in turn depends on the local electromagnetic field at the position of the dye molecule. In that way the locations and intensities of the light flashes from the dye molecules provide a direct map of the electromagnetic fields in the hotspot – with nanometer resolution.
At the moment the technique is only able to measure two-dimensional images of the hotspots, but three-dimensional sensing seems not too unlikely. Nevertheless, already at this stage this tool provides very useful feedback for the study of hotspots and the further development of applications such as SERS. Cang, H., Labno, A., Lu, C., Yin, X., Liu, M., Gladden, C., Liu, Y., & Zhang, X. (2011). Probing the electromagnetic field of a 15-nanometre hotspot by single molecule imaging. Nature, 469 (7330), 385-388. DOI: 10.1038/nature09698
<urn:uuid:eb1ce7a9-8bde-4b65-b9c8-dc26c884fb83>
3.765625
844
Academic Writing
Science & Tech.
42.793359
Problem: Show that √2 is an irrational number (can't be expressed as a fraction of integers). Solution: Suppose to the contrary that √2 = a/b for positive integers a, b, and that this representation is fully reduced, so that b is as small as possible. Consider the isosceles right triangle with side length b and hypotenuse length a, as in the picture on the left. Indeed, by the Pythagorean theorem, the length of the hypotenuse is b√2 = a, since b² + b² = (b√2)² = a². Swinging a b-leg to the hypotenuse, as shown, we see that the hypotenuse can be split into parts b and a − b, and hence a − b is an integer. Call the point where the b and a − b parts meet P. If we extend a perpendicular line from P to the other leg, meeting it at a point Q, we get a second, smaller isosceles right triangle. Since the segments PQ and QC (where C is the right-angle vertex) are symmetrically aligned (they are tangents to the same circle from the same point), they too have length equal to a − b. Finally, we may write the hypotenuse of the smaller triangle as b − (a − b) = 2b − a, which is also an integer. So the lengths of the sides of the smaller triangle are integers, but by triangle similarity, the hypotenuse-to-side-length ratios are equal: a/b = (2b − a)/(a − b), and obviously from the picture the latter numerator and denominator are smaller numbers. Hence, a/b was not in lowest terms, a contradiction. This implies that √2 cannot be rational. This proof is a prime example of the cooperation of two different fields of mathematics. We just translated a purely number-theoretical problem into a problem about triangle similarity, and used our result there to solve our original problem. This technique is widely used all over higher-level mathematics, even between things as seemingly unrelated as topological curves and groups. Finally, we leave it as an exercise to the reader to extend this proof to a proof that whenever n is not a perfect square, √n is irrational. The proof is quite similar, but strays from nice isosceles right triangles.
<urn:uuid:00d43d0b-af78-46af-b7e7-228faa73cdb0>
3.171875
395
Academic Writing
Science & Tech.
39.921972
Your question, as it seems to me, is mainly a nomenclature issue. Class, in mathematics, usually means something described by a formula: all things with a certain property. Equivalence classes, for example, are called classes but could just as well be sets. In fact, sets are also classes in the context of set theory. You can see this in other places as well (e.g. conjugacy classes). In set theory the Russell paradox (and various other paradoxes as well) showed that not every collection is a set. This is why the notion of a class was invented. It turns out that this notion is (at least philosophically) close to other notions of class in mathematics. Formally, however, note that there are "normal functions, normal spaces, normal distributions, etc.", but many of these are somewhat orthogonal and have nothing to do with one another. Classes of set theory are constructs in set theory, while conjugacy classes are constructs in group theory (used elsewhere as well, of course). So to sum up, the word "class" is commonly used to describe a definable collection (definable from what? well, that depends on the particular context); and as Qiaochu remarked, in a very technical sense every set is a class anyway (in the set-theoretical context of the word).
<urn:uuid:3716f2de-be19-4a02-af7d-b6a90efe5542>
3
283
Q&A Forum
Science & Tech.
49.245922