Tundra is the global biome that consists of the treeless regions in the north (Arctic tundra) and high mountains (alpine tundra). The vegetation of tundra is low growing, and consists mainly of sedges, grasses, dwarf shrubs, wildflowers, mosses, and lichens. The word "tundra" is derived from the Finnish word "tunturi," which refers to the upland treeless parts of hills and low mountains free of woodlands.
Tundra climates are extremely cold and snowy in winter. Summers are cool. The southern or lower limit of trees corresponds roughly to a mean July temperature between 10 and 12 degrees Celsius (50 and 53.6 degrees Fahrenheit), but in maritime areas the limiting summer temperature can be lower. Low shrubs, less than about 1 meter (3.2 feet) tall, and peaty soils are common near treeline. In the northern extremes and at higher elevations, the landscapes are predominantly barren with scattered wildflowers, such as purple mountain saxifrage and Arctic poppies, mosses, and lichens. Most of the Arctic tundra regions are underlain by permafrost, ground that is permanently frozen beneath a shallow layer of soil that thaws annually.
Tundra ecosystems have a variety of animal species that do not exist in other regions, including the Arctic hare, musk oxen, lemmings, Arctic ground squirrels, and ptarmigan. Other animals migrate annually to the Arctic including caribou and many species of birds.
The Arctic tundra is the least exploited of Earth's biomes. It is a unique biological laboratory for scientists to study unaltered ecosystems. The chief ecological concerns in the Arctic tundra are cumulative impacts of oil and mineral exploitation, roads, tourism, and long-range transport of air pollution from industrial centers to the south. Global warming is likely to have its greatest effect on tundra. Major concerns are the fate of permafrost and the carbon contained in Arctic peat. Decomposition of this carbon could increase the concentration of carbon dioxide in the atmosphere.
Wielgolaski, F. E. Polar and Alpine Tundra. The Netherlands: Elsevier, 1997.
|Main Problem: Where is the start address of the
buffer? This must be known in advance to overwrite the
copy of %i7 accordingly.
|Minimal variations (different C compiler, different
compilation options, different libraries, different
release of the operating system) cause this address to vary.
|The nop operations (no operation)
increase the probability that we hit our malicious code.
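The layout these slides describe (a NOP sled in front of the shellcode, with the guessed buffer address repeated over the saved return-address slot) can be sketched as follows. The shellcode placeholder and the guessed address are made up; this illustrates the byte layout only, not a working exploit.

```python
# Illustrative layout of a classic stack-smashing payload:
# a NOP sled, the shellcode, and the guessed buffer address repeated
# so that one copy lands on the saved copy of %i7.
NOP = b"\x01\x00\x00\x00"                 # SPARC "nop" (sethi 0, %g0), big-endian
SHELLCODE = b"<machine code>"             # placeholder for the exec("/bin/sh") code
GUESSED_ADDR = (0xEFFF_F000).to_bytes(4, "big")  # hypothetical stack address

def build_payload(buffer_size: int) -> bytes:
    """Fill the buffer with NOPs, append the shellcode, then repeat the
    guessed address so one copy overwrites the saved return address."""
    sled_len = buffer_size - len(SHELLCODE)
    payload = NOP * (sled_len // len(NOP)) + SHELLCODE
    payload += GUESSED_ADDR * 8           # several copies to cover alignment slack
    return payload

p = build_payload(256)
# The sled makes many guesses "good enough": jumping anywhere into the
# NOPs slides execution forward into the shellcode.
```

The wider the sled, the larger the set of start addresses that still lead into the shellcode, which is exactly why the slides say the nops "increase the probability" of a hit.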
|Next problem: How to address relative to the
location of the code? Solution: After a call
operation, the address of that call instruction is held in %o7.
|Hint: The SPARC processor is a three-address machine
where the two source operands are specified first,
followed by the target register.
|Afterwards, we prepare the exec system call:
%o0 is the first parameter which points to
the path of the binary we want to execute. In
this example: ``/bin/sh''.
|The second parameter in %o1 points to the vector argv
which consists of ``/bin/sh'', ``-c'', and the command we
want to execute.
Netlink is a flexible, robust, wire-format communications channel typically used for kernel-to-user communication, although it can also be used for user-to-user and kernel-to-kernel communications. Netlink communication channels are associated with families or "busses", where each bus deals with a specific service; for example, different Netlink busses exist for routing and for several other kernel subsystems. More information about Netlink can be found in RFC 3549.
Over the years, Netlink has become very popular, which has brought about a very real concern that the supply of Netlink family numbers may be exhausted in the near future. In response, the Generic Netlink family was created, which acts as a Netlink multiplexer, allowing multiple services to share a single Netlink bus.
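As a concrete sketch of what travels on a Netlink bus, the fixed 16-byte message header (struct nlmsghdr: length, type, flags, sequence number, and sending-port id) can be packed by hand. The message type and flag values below are placeholders, not taken from a real family definition.

```python
import struct

# struct nlmsghdr from <linux/netlink.h>:
#   u32 nlmsg_len, u16 nlmsg_type, u16 nlmsg_flags,
#   u32 nlmsg_seq, u32 nlmsg_pid  -- 16 bytes, host byte order.
NLMSG_HDRLEN = 16

def pack_nlmsghdr(payload: bytes, msg_type: int, flags: int,
                  seq: int, pid: int) -> bytes:
    # nlmsg_len covers the header plus the payload that follows it.
    length = NLMSG_HDRLEN + len(payload)
    return struct.pack("=IHHII", length, msg_type, flags, seq, pid) + payload

# Example: a hypothetical request with an empty payload.
msg = pack_nlmsghdr(b"", msg_type=16, flags=1, seq=1, pid=0)
```

Real programs would normally use a library such as libnl (listed below) rather than packing headers by hand, but the fixed header is what every Netlink and Generic Netlink message starts with.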
- Generic Netlink HOWTO kernel API
- libnl - A user space library for Netlink.
- Iproute2 utilities use Netlink internally to communicate with the kernel.
- RFC 3549 Linux Netlink as an IP Services Protocol
Magnetic Free Fall
Two cylindrical slugs of metal were allowed to free-fall down a
hollow vertical tube made of copper. One of the slugs, a magnet, took
significantly longer to fall through the tube than the other.
The moving magnetic field induces an electric current in the surrounding
conductor, and the magnetic field resulting from this "eddy current"
opposes the field of the falling magnet. If the conductor were a
superconductor, that is, if no energy were lost by the induced current, the
induced magnetic field would be able to completely prevent the permanent
magnet from moving. This is the principle behind magnetic levitation. Your
question sounds like a homework question; for the actual equations
governing this behavior, look in your physics book under "eddy currents" or "Lenz's law."
Richard Barrans Jr., Ph.D.
The changing magnetic flux sets up eddy currents in the copper which
produce their own magnetic field opposing the change in magnetic flux.
This produces a force on the magnetic slug.
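A minimal model of the slow fall treats the eddy-current drag as proportional to speed, F = -kv, so the magnet approaches a terminal velocity mg/k rather than free-falling. The mass and drag coefficient below are assumed values for illustration, not measurements.

```python
import math

# Toy model: gravity plus velocity-proportional eddy-current drag.
# Starting from rest, v(t) = v_t * (1 - exp(-k*t/m)), with v_t = m*g/k.
m = 0.05   # magnet mass in kg (assumed)
g = 9.81   # gravitational acceleration, m/s^2
k = 2.0    # eddy-current drag coefficient in kg/s (assumed)

v_terminal = m * g / k   # terminal velocity the magnet approaches

def speed(t: float) -> float:
    """Speed at time t for a magnet released from rest."""
    return v_terminal * (1.0 - math.exp(-k * t / m))
```

With these illustrative numbers the magnet settles to about 0.25 m/s almost immediately, while the non-magnetic slug simply accelerates at g the whole way down; that difference is the longer fall time observed in the demonstration.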
Update: June 2012
Laser cooling of neutral atoms
The past year has seen major advances and surprises in laser cooling of neutral atoms in optical molasses— an arrangement of counterpropagating laser beams that strongly damps the motion of atoms. For atoms at rest in equal intensity, oppositely propagating laser beams, the total force is, of course, zero. But for atoms moving in light tuned to a frequency below the atomic resonance, the traditional view of optical molasses was that the Doppler shift causes atoms to absorb light more strongly from the beams that oppose their motion. This produces a damping force that both cools the atoms and provides a viscous confinement within the intersecting laser beams (although there is no restoring force). In particular, the simple theory for two-level atoms predicted that the lowest atomic kinetic energy would be ħΓ/4, where Γ is the decay rate of the atomic state excited by the laser. For sodium atoms cooled on the yellow D-line transition, this corresponds to a temperature of 240 μK.
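The 240 μK figure quoted above can be checked directly from the Doppler-limit relation k_B·T = ħΓ/2, using a standard literature value for the sodium D-line natural linewidth (assumed here; it is not given in the abstract itself).

```python
import math

# Doppler cooling limit: k_B * T = hbar * Gamma / 2
hbar = 1.054571817e-34          # reduced Planck constant, J*s
k_B = 1.380649e-23              # Boltzmann constant, J/K
Gamma = 2 * math.pi * 9.79e6    # sodium D2 natural linewidth, rad/s (literature value)

T_doppler = hbar * Gamma / (2 * k_B)
print(f"Doppler limit for sodium: {T_doppler * 1e6:.0f} microkelvin")
```

The result comes out near 235 μK, consistent with the "240 μK" figure the abstract rounds to.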
|May29-09, 10:14 PM||#1|
Torque on Baseball going through pitching machine
1. The problem statement, all variables and given/known data
I am designing a shaft for a pitching machine that uses a spinning tire to accelerate a ball and must determine the torque acting on the shaft when a ball goes through the machine. A 1/4 horsepower motor is spinning an 8 inch tire at 1900 RPM, constant angular velocity. Weight of baseball is 5.25 oz (0.328 lb). What is the force that is imparted on the ball tangent to the wheel, and with what velocity does it exit? I know the angle over which the ball is acted on by the tire to be 0.581 radians.
2. Relevant equations
I am unsure of which equations will be relevant. Maybe vf^2 = vo^2 + 2as, Torque = Force x Radius, Power = omega (angular velocity) x Torque, F = ma
3. The attempt at a solution
The ball is acted on by the tire over arc length s. Using s = r*theta, I get s to be 0.059 m. But where I am stuck is how to relate the rotation of the tire to the acceleration of the ball. Once I find this acceleration, I can easily back out the tangential force, torque and resulting velocity. Please feel free to change units if you prefer. Thanks for your help!
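A rough starting point for the numbers in this problem is the tire's surface speed, omega * r, which bounds the ball's exit speed if there is no slip. The half-speed estimate at the end is a common rule of thumb for one-wheel machines, not a derived result.

```python
import math

# Surface speed of the spinning tire described above.
rpm = 1900.0
omega = rpm * 2 * math.pi / 60      # angular velocity, rad/s (~199)
r = (8 / 2) * 0.0254                # 8-inch tire -> radius in metres
v_surface = omega * r               # ~20.2 m/s (~45 mph), no-slip upper bound

# In a one-wheel machine the ball is pinched between the tire and a fixed
# plate, so it also picks up backspin; a common rough estimate is that the
# ball's centre then moves at about half the tire's surface speed.
v_half = v_surface / 2
```

Comparing the measured exit speed against these two bounds tells you how much slip and spin-up actually occur, which is what the acceleration (and hence the shaft torque) depends on.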
Alternative Energy Nuclear
Nuclear energy is electricity generated by turbines driven by steam, which is produced using the heat released when uranium atoms undergo fission. Fission is the splitting of an unstable nucleus into smaller, more stable nuclei. (The sun's energy, by contrast, is generated by nuclear fusion.) Nuclear energy has long been considered an alternative to other forms of energy. Let us look at some of the benefits and dangers of nuclear energy.
Benefits:
1. Nuclear energy is considered a very clean and cheap form of energy. It is a perfect solution for modern day problems like global warming.
2. Nuclear energy can be used to generate propulsion steam for submarines and aircraft, though this is not done on a commercial level as yet.
3. It can be used to generate power for domestic usage, shops, factories, etc.
4. It is useful in medicinal and scientific research.
5. Nuclear energy does not pollute the environment with the emission of dangerous gases like carbon dioxide, nitrogen oxides, etc.
6. France, which has nuclear power plants throughout the country, has reduced its total pollution by 80-90%. That is one reason why France has such clean and healthy air.
7. The area of land required to produce 1,000 MW of energy with a nuclear plant is very small compared to a wind farm.
8. The waste that comes out of a nuclear plant is not entirely waste. It still contains about 97% of the original fuel and, if reprocessed properly, can be recycled and used for other purposes.
Dangers:
1. Nuclear waste must be disposed of properly; otherwise, there is a risk of catastrophic events.
2. Radioactive substances released into the atmosphere by nuclear power plants can contaminate or poison the land for centuries to come.
3. Nuclear fission must be carried out carefully, under trained experts; any mishap or error could prove fatal.
4. The risk of terrorism could increase. Nuclear material is a key ingredient in atomic weapons, and if it were made easily available, terrorist activities would be much harder to stop.
5. The chances of war between countries could increase. Where countries now negotiate even the most critical matters, easy access to nuclear weapons could push even small disputes toward military confrontation.
The Nuclear Debate:
Even though nuclear energy has many benefits, people are still reluctant to switch over to it completely because of the dangers attached to it, and because uranium, which is required for fission, is relatively scarce and not available in abundance. Moreover, few venture capitalists are willing to invest in building nuclear reactors, because the process is seen as risky and not fully tried and tested.
I wrote earlier about my trek to the northeastern corner of California to attend the Golden State Star Party. My objective was to photograph a galaxy, and after checking out the pristine skies during my first night of observing, I selected my targets: M81 and M82, a close-by pair of galaxies in Ursa Major (the Big Dipper).
Some nearby galaxies appear fairly large in the sky, surprisingly, almost the size of a crescent moon in the case of M81. But the surface brightness is extremely low, so such galaxies are not visible to the naked eye. Even through my telescope eyepiece, M81 is little more than a smudge of light. To see interesting detail takes a very large telescope, a long photographic exposure, or both. Since I had never taken photographs of this type, preparation took months.
There were three areas where I needed to do research, purchase equipment, and practice:
- Even though my telescope has a mount that compensates for the earth's rotation, I knew that small errors could creep in and smear my pictures unless I made corrections. I've done this by hand for relatively short exposures, but I knew that my photos of M81 and M82 would take hours of exposure, so I purchased and learned to use an autoguider, which is an auxiliary camera on a small telescope piggy-backed to the main one. If the target moves even a fraction of a pixel off center, the autoguider sends a correction to the telescope mount to compensate.
- When doing long exposures with a digital camera, one typically breaks the total exposure into pieces (5 minutes each in my case), and then stacks the resulting exposures in a program like Photoshop. This stacking reduces the noise in the resulting photograph. You can press the shutter by hand, but I chose to obtain special software and cables to remotely control the Nikon DSLR that I planned to use for the photographs. From my laptop computer, its screen covered with red Plexiglas® so as not to disrupt my night vision, I could view through and control both my autoguider and my Nikon.
- Finally, it is very difficult to focus a telescope with a camera attached because the image is so dim. I chose to purchase and use a Bahtinov mask, which is a strangely shaped mask that I put over the lens of my telescope only while I am focusing it. The mask produces a diffraction pattern around any star. This pattern indicates whether the image is in focus and, if not, in which direction to turn the focus knob - it's pretty slick!
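The noise-reduction benefit of stacking described in the second point can be demonstrated with a short simulation: averaging N frames of the same scene cuts random noise by roughly the square root of N. All numbers below are synthetic.

```python
import random

random.seed(42)  # reproducible synthetic "exposures"

def noisy_frame(signal: float, sigma: float, npix: int) -> list:
    """One simulated sub-exposure: a flat signal plus Gaussian noise."""
    return [signal + random.gauss(0, sigma) for _ in range(npix)]

def stack(frames: list) -> list:
    """Average the frames pixel-by-pixel, as stacking software does."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def rms_error(frame: list, signal: float) -> float:
    """Root-mean-square deviation of a frame from the true signal."""
    return (sum((p - signal) ** 2 for p in frame) / len(frame)) ** 0.5

signal, sigma, npix = 100.0, 10.0, 2000
single = noisy_frame(signal, sigma, npix)
stacked = stack([noisy_frame(signal, sigma, npix) for _ in range(16)])
# 16 stacked frames should show roughly 1/4 the noise of a single frame.
```

This is why breaking a multi-hour exposure into 5-minute pieces and combining them in software recovers faint galaxy detail that any single frame buries in noise.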
Having worked most of the kinks out in advance, I was very happy with my results; however, there was a surprise. I found that earth-orbiting satellites were streaking across five of my exposures. Fortunately, when stacking the images, it is possible to make these streaks disappear. In my next entry I'll discuss the nature and size of M81 and M82.
Curious About Astrophotography?
Whether you come at it from an interest in astronomy or an interest in photography, astrophotography is an eye-opening field that combines art, science, and technology. If you are interested in learning more about astrophotography with digital cameras, these resources will help get you started:
- The Digital Photos and Dynamic Range Science Buddies Project Idea discusses important information that any astrophotographer needs to understand.
- Catching the Light is a good general source of information about digital astrophotography.
July 7, 2009 The goal of DNA barcoding is to find a simple, cheap, and rapid DNA assay that can be converted to a readily accessible technical skill that bypasses the need to rely on highly trained taxonomic specialists for identifications of the world's biota. This is driven by a desire to open taxonomic identifications to all user groups and by the short supply of taxonomists; for many groups, specialists do not exist at all.
Although DNA barcoding is being rapidly accepted in the scientific literature and popular press, some scientists warn that we are being too hasty in wholeheartedly embracing this technique. Dr. David Spooner, a researcher with the USDA and an expert in the potato and tomato family (Solanaceae), offers just such a cautionary note against accepting this technique without closer examination.
One of the critical issues surrounding the DNA barcoding debate is that using a section of DNA may not adequately distinguish among closely related species or complex groups. Moreover, in plants, there is still much debate over which gene sequence region should be used and its reliability. In animals, the 5' segment of mitochondrial cytochrome oxidase subunit I (COI) is relatively established as a barcoding marker, but Spooner highlights many groups where COI fails to distinguish species. The COI region fails completely for plants because it evolves at a slower rate in plants and has a much more variable sequence.
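The requirement being tested here can be shown with a toy calculation: a barcode region works only when between-species sequence differences clearly exceed within-species variation. The sequences below are invented for illustration, not real barcode data.

```python
def p_distance(a: str, b: str) -> float:
    """Proportion of differing sites between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

species1_a = "ATGGCTTACCTTGGATCA"
species1_b = "ATGGCTTACCTAGGATCA"   # within-species variant (differs at 1 site)
species2   = "ATGGCATACCTTGGGTCA"   # second species (differs at 2 sites vs species1_a)

within = p_distance(species1_a, species1_b)
between = p_distance(species1_a, species2)
# A usable barcode needs a "barcoding gap": between-species distances
# clearly larger than within-species distances. When the gap closes
# (slow-evolving marker, recent divergence, hybridization), identification fails.
```

Spooner's point is that in groups like section Petota the gap collapses: ITS shows too much within-species variation, while the plastid markers show too little variation between species.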
The search for alternative barcoding regions in plants is especially problematic. Although several gene sequences have been proposed for plants, none of them serves as a universal barcode marker. Regions that have been proposed for plants include a section of the nuclear ribosomal DNA: the internal non-transcribed spacer region (ITS); and various plastid regions to include the trnH-psbA intergenic spacer and the plastid genes rpoC1, rpoB, and matK.
Spooner tested the utility of barcoding in a well-studied but complex plant group, wild and cultivated potatoes (Solanum section Petota). Section Petota includes over 200 species and is widespread throughout the Americas. In his study, 63 ingroup species and 9 outgroup species (in the genera Solanum, Capsicum, and Datura) were used. DNA was extracted from young leaves of single plants. Spooner tested the most frequently suggested DNA barcoding regions for plants: ITS, trnH-psbA, and matK. He found that none of these regions were very accurate at distinguishing or serving as markers for species boundaries in the section Petota.
There was too much intraspecific variation in the nuclear ITS region, and the plastid markers did not have enough variation and thus failed to group together some well-supported species. Section Petota is a complex group because, among other things, there is much hybridization among species; some of the species have multiple divergent copies of their DNA arising from past hybridization (allopolyploidy); there is a mixture of sexual and asexual reproduction; and there is possible recent species divergence. Such complications are not uncommon in many plant groups. Spooner concludes that a variety of morphological and molecular approaches are needed, and we cannot rely on a DNA barcode alone to distinguish among species in complex groups such as section Petota.
Spooner extrapolates from the taxonomic difficulties of section Petota to many other groups possessing similar biological traits, and points out that DNA barcoding needs to be accepted with great caution as these groups have not been tested with the technique. He also urges caution against limiting the identification, or in some cases even the definition of a species to a small sequence of DNA. He suggests that the search for a DNA barcoding marker or limited set of such markers that reliably identify the majority of life forms will be a continuously elusive goal.
- Spooner et al. DNA barcoding will frequently fail in complicated groups: An example in wild potatoes. American Journal of Botany, 2009; 96 (6): 1177 DOI: 10.3732/ajb.0800246
A day at the beach is a wonderful way to spend time with your family and friends. You can swim, play games and build sand castles. But have you ever wondered how the beach you are standing on came to be? How, for example, did all of that sand get there? Beaches are formed and continually changed by the ocean's waves moving rock particles onshore, offshore and along the shore. In this activity, you can investigate how beach formations are made by some parts of a beach that can resist erosion from the waves more than other parts.
A beach is a geologic formation made up of loose rock particles such as sand, gravel and shell fragments deposited along the shoreline of a body of water. A beach has a few key features. The berm is the part that is mostly above water; this is the active shoreline. The top of the berm is known as the crest, and the part that slopes toward the water is called the face. At the bottom of the face there may be a trough and, further seaward, there may be sandbars parallel to the beach.
The erosion of rock formations in the water, coral reefs and headlands create rock particles that the waves move onshore, offshore and along the shore, creating the beach. Continual erosion of the shoreline by waves also changes the beach over time. One change that erosion can cause is the appearance of a headland. This is land that juts out from the coastline and into the water and affects how the surrounding shoreline is eroded.
• Paint-roller pan
• Measuring cup
• Digital camera
• Plastic 500-milliliter water bottle (empty)
• Adult volunteer to help take pictures
• Small gravel, such as aquarium gravel
• Cover the bottom of the paint-roller pan with five cups of sand. Build up a beach with most, but not all, of the sand at the shallow end of the pan.
• Slowly pour six cups of water into the deep end of the pan. Let the water and sand settle for five minutes. How has the beach changed during this time?
• Take a picture of your beach so that you have a record of how it looked in its original state. Where is the shoreline (the area where beach and water meet)?*
• Lay a plastic bottle horizontally so it is floating in the water in the deep end of the pan.
• For two minutes bob the water bottle up and down with your fingertips to create waves. If the waves get so big that water splashes out of the pan, make them smaller. How does the water swirl? How does the shoreline change after one minute? What about after two minutes?
• After two minutes of bobbing the bottle, take a picture of the beach. How does it look compared with the first picture?
• Empty, clean and dry the paint-roller pan. Prepare a "beach" again, as you did for the preparation. When the beach is complete, make a "headland" by creating a mound out of two cups of small gravel in the middle of the shoreline. The headland should be partly in the water and partly on the beach. Take a picture of the beach with the headland.
• Again, lay the plastic bottle horizontally so it is floating in the water. For two minutes, bob the water bottle up and down with your fingertips. Again, if the waves are so big that water splashes out, make them smaller. How does the water swirl? How does the shoreline change after one minute? What about after two minutes?
• After two minutes, take a picture of the beach. How does it look compared with the previous picture?
• How does the headland affect where the water goes? How does it affect how much the shoreline erodes?
• Extra: Repeat this activity at least two more times with a ruler taped to the side of the pan. Exactly how much shoreline erosion occurs with and without a headland?
• Extra: Try increasing or decreasing the speed of bobbing the bottle. Does this affect how the beach changes over time?
• Extra: Pour a large volume of water all at once into the deep end of the pan to simulate a storm surge or a tsunami. What happens to the beach?
*Correction (6/26/12): The sentence was edited after posting to clarify meaning. In this activity, experimenters are asked to observe changes in the shoreline.
Following some discussion in last year's tutorials, I found the following formula for Bayes' theorem. It states that, given that event B has occurred, the probability that it was due to cause Ai is

P(Ai | B) = P(B | Ai) P(Ai) / Σj P(B | Aj) P(Aj)

In other words, the posterior probability of event Ai given that B has occurred is the joint probability of Ai and B divided by the sum of the joint probabilities of all possible causes together with B.
In the case of this tutorial, the probability P(A1) that the woman is a carrier, given that she is unaffected (event B), is the probability that she would be a carrier and unaffected (the joint probability of those two things) divided by the sum of all the joint probabilities (that possibility plus the possibility that she is not a carrier and is unaffected).
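A numerical version of this calculation, using illustrative prior and conditional probabilities; these specific numbers are assumptions for the example, not taken from the tutorial.

```python
# Cause A1: woman is a carrier; cause A2: she is not.
# Event B: she is unaffected. All probabilities below are illustrative.
prior_carrier = 0.5
p_unaffected_if_carrier = 0.5
prior_noncarrier = 0.5
p_unaffected_if_noncarrier = 1.0

# Joint probabilities of each cause occurring together with event B.
joint_carrier = prior_carrier * p_unaffected_if_carrier          # 0.25
joint_noncarrier = prior_noncarrier * p_unaffected_if_noncarrier # 0.50

# Bayes' theorem: posterior = joint / sum of all the joints.
posterior_carrier = joint_carrier / (joint_carrier + joint_noncarrier)
# -> 0.25 / 0.75 = 1/3
```

Whatever the actual numbers, the structure is the same: one joint probability in the numerator, the sum of all of them in the denominator.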
Which is what I said all along!
A University of Rochester study shows that baboons are able to understand numbers. Experimenters showed the monkeys peanut-filled cups and the monkeys then chose which cup contained more peanuts. Read more about the experiment and its conclusions...
Mammoth Cave in Kentucky is the largest cave in the world, with more than 390 miles of passageway and new discoveries adding several miles to this total each year.
Spelunkers compete to explore it, early 1900s farmers competed to sell it, and now the best minds in math and physics compete to explain it. For centuries, researchers have understood the basics: caves form when water trickles through tiny rock fractures. But the question has still remained: how does a small flow of water erode rock fast enough to make 300-mile tunnels? Now, an answer emerges from a series of math equations. This discovery has applications in everything from the safety of dams to the fate of nuclear waste. Read the full article here.
The most common question students ask math teachers at every level is “When will I use math?” WeUseMath.org is a non-profit website that helps to answer this question. This website describes the importance of mathematics and many rewarding career opportunities available to students who study mathematics.
Further information

Is water blue?

Water does absorb a tiny amount of light, but in small amounts, we don't notice it. Water actually absorbs light that has a reddish color. When there's a lot of water – like a whole lake full – we notice that red light from the Sun has actually been absorbed. So what color do we see when reddish light is gone? Blue! Blue is the color absorbed the least by water, so it's the one we see the most. Blue light also scatters more than other light. That's the reason why the sky is blue. That scattering affects the color of water too. It just takes a big enough sample of water to notice these things. That's why you won't see color in the glass you're drinking.
Exploration

Blue Water (Through Milk)

If you want to make the water in your drinking glass appear blue, you can actually do it by adding something white: milk! A few drops of milk in a glass of water will make it appear slightly blue. That's because the white milk is helping to scatter light, and the blue light is scattering more than other colors.
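A Beer-Lambert sketch makes the glass-versus-lake difference concrete. The absorption coefficients below are approximate literature values for pure water, used here only to illustrate the scale of the effect.

```python
import math

# Beer-Lambert attenuation: transmitted fraction = exp(-a * depth).
# Pure water absorbs red light far more strongly than blue light.
a_red = 0.6    # absorption coefficient near 700 nm, 1/m (approximate)
a_blue = 0.01  # absorption coefficient near 450 nm, 1/m (approximate)

def transmitted_fraction(a: float, depth_m: float) -> float:
    return math.exp(-a * depth_m)

glass = 0.1   # 10 cm of water
lake = 10.0   # 10 m of water

red_glass = transmitted_fraction(a_red, glass)    # ~94%: no visible tint
blue_glass = transmitted_fraction(a_blue, glass)  # ~99.9%
red_lake = transmitted_fraction(a_red, lake)      # well under 1%: red is gone
blue_lake = transmitted_fraction(a_blue, lake)    # ~90%: blue survives
```

Over 10 cm both colors pass almost untouched, which is why a drinking glass looks clear; over 10 m the red is essentially gone while most of the blue remains.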
Sources & links
Chaplin, Martin. "Water Absorption Spectrum." Water Structure and Science, 7 June 2009. Accessed 28 July 2009. <http://www.lsbu.ac.uk/water/vibrat.html>
"Industrial glass." Encyclopædia Britannica Online, 2009. Accessed 29 July 2009. <http://www.britannica.com/EBchecked/topic/1426115/industrial-glass>
This is an image of a mercury bulb thermometer. The temperature is measured by reading the number next to the thin black line that goes partly up the yellow tube.
Click on image for full size
Image courtesy of Wikipedia Creative Commons
Thermometers measure temperature. "Thermo" means heat and "meter" means to measure. You can use a thermometer to measure the temperature of many things, including the temperature of the air, the temperature of our bodies, and the temperature of the food when we cook. Temperature is a measure of the hotness and coldness of an object.
Thermometers usually have a bulb at the base and a long glass tube that extends to the top. The glass tube of a thermometer is filled with alcohol or mercury. Both mercury and alcohol expand when heated and contract when cooled. Inside the glass tube of a thermometer, the liquid has no place to go but up when the temperature is hot and down when the temperature is cold. Numbers placed alongside the glass tube mark the temperature at each level the liquid reaches.
Other types of thermometers include dial thermometers and electronic thermometers. Electronic thermometers measure temperature much more quickly than mercury and dial thermometers.
Thermometers can be marked in Fahrenheit, Celsius, or another scale called Kelvin. Fahrenheit is used mostly in the United States, most of the rest of the world uses Celsius, and Kelvin is used mainly by scientists.
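The three scales are related by simple conversion formulas, which can be written as two small functions:

```python
# Fahrenheit and Kelvin from Celsius:
#   F = C * 9/5 + 32,  K = C + 273.15
def c_to_f(c: float) -> float:
    return c * 9 / 5 + 32

def c_to_k(c: float) -> float:
    return c + 273.15

# Water freezes at 0 C = 32 F = 273.15 K and boils at 100 C = 212 F.
```

So a mild day of 20 degrees Celsius reads 68 on a Fahrenheit thermometer and 293.15 on the Kelvin scale.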
Seasonal to Interannual Forecasts
March 2005 archive copy
The Climate Impacts Group (CIG) translates global-scale climate forecasts and conditions into regional-scale climate forecasts for Pacific Northwest (PNW) resource managers and the general public. The El Niño/Southern Oscillation (ENSO) is the most important factor for seasonal forecasting, changing the odds for different types of winter and spring weather (e.g. warmer/drier, cooler/wetter) in the PNW. The climate outlook also provides the basis for natural resource forecasts, including the CIG's annual streamflow forecasts.
What's Next for the Pacific Northwest?
Updated March 23, 2005
The climate outlook is reviewed monthly and updated as needed.
Current indicators for Pacific climate:
Over the past few weeks, sea surface temperatures (SSTs) in the eastern tropical Pacific have been trending to near average values (and even below average values in the far eastern equatorial Pacific), indicating that the weak El Niño event of 2004-05 has ended (see definition). In contrast, tropical SSTs in the western half of the tropical Pacific remain about 1°C warmer than average. The warmth of the western tropical Pacific has been relatively stable since late 2001. Because the greatest warming has remained in the western half of the tropical Pacific, many climate scientists have questioned whether the recent warm period deserves the standard "El Niño" label, or whether it should be characterized as something else. Either way, the region east of the date line has cooled since January 2005, and the majority of current forecasts call for either near average or slightly above average SSTs in the eastern tropical Pacific for the next few seasons. It is interesting to note that a pattern of dry weather in Australia/Indonesia and heavy rainfall further east developed in February, and along with this came very strong westerly wind anomalies and an extreme negative swing in the Southern Oscillation Index (the SOI for February was -4.1 standard deviations!). This strong month-long disturbance in the western Pacific has since faded away, but it did force strong downwelling along the equator that is now slowly moving eastward (as a packet of equatorial Kelvin wave signals). It is not yet clear how this one-month event will influence tropical climate in the next few months and seasons, but it is likely to favor a warming trend in eastern equatorial Pacific SSTs for the next month or two. For more information on the current ENSO state and forecast, see the forecast summaries provided by the NOAA Climate Prediction Center and the International Research Institute for Climate Prediction.
Seasonal to interannual forecasts for the state of the Pacific Decadal Oscillation (PDO) index (based on a pattern of North Pacific SSTs) are an emerging science. A major source of uncertainty in developing PDO forecasts is our lack of understanding of what causes the observed multi-year persistence in the PDO index and, more importantly, what triggers PDO regime shifts. However, a strong tendency for year-to-year persistence of the PDO index along with a well-established statistical relationship with the state of ENSO provides a means for making skillful 1-year projections of the PDO index.
Using that simple statistical method with the observed PDO index values from July 2003-June 2004, combined with a prediction that SSTs in the NINO3.4 region of the tropical Pacific would be in the range of +0.4°C to +1.2°C, yields a prediction for a July 2004-June 2005 PDO index value ranging from ~ +0.5 to +1. Observed PDO index values for July 2004 to February 2005 are: 0.44, 0.85, 0.75, -0.11, -0.63, -0.17, 0.44 and 0.81 (averaging out to ~0.30). Note that the average value for the NINO3.4 index from July 2004-February 2005 is ~0.70, about in the middle of the range predicted last fall.
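As a quick check of the averages quoted above, the eight observed monthly PDO index values can be averaged directly (a minimal sketch; the variable names are mine, and the values are simply copied from the text):

```python
# Observed monthly PDO index values quoted in the text above.
pdo_values = [0.44, 0.85, 0.75, -0.11, -0.63, -0.17, 0.44, 0.81]

# Simple arithmetic mean over the observation period.
mean_pdo = sum(pdo_values) / len(pdo_values)
print(round(mean_pdo, 2))  # prints 0.3, matching the "~0.30" in the text
```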
What will it mean for the PNW in coming months?
The latest seasonal forecasts from NOAA's Climate Prediction Center call for a slight tilt in the odds favoring a warm spring and summer for the entire Pacific Coast of North America.
At this time it seems unlikely that the PDO pattern has played a major role in influencing North Pacific climate in recent months, but instead recent changes in the PDO index reflect a strong SST response to recent atmospheric circulation patterns. While circulation patterns for the fall and early winter did not resemble those most often observed in "classic" El Niño fall/winter periods, the jet stream pattern that prevailed in February did. This "El Niño-like" circulation pattern included a persistent blocking ridge over the PNW region that extended into the Gulf of Alaska, and most of the storm activity during February was directed at central and southern California and the interior southwest US.
Even though the Pacific Northwest has experienced warmer and drier than average fall and winter climate, the large-scale circulation patterns causing these conditions resembled those typically associated with an El Niño event only during our exceptionally dry month of February. It was also during February that the most significant shift in tropical rainfall of the past 6 months was observed, and there too the patterns of anomalously wet and dry weather resembled those often seen in past El Niño episodes. In the past few weeks the strong shift in tropical rainfall patterns has faded away, as have the weak warm SST anomalies in the eastern equatorial Pacific. NOAA's Climate Prediction Center forecast calls for an increased likelihood that western Oregon and Washington will experience above average temperatures for the next few months, while the remainder of the region has climatological odds for both precipitation and temperature this spring and summer.
It is important to note that these climate forecasts indicate relatively subtle shifts in the odds for warmer/cooler temperatures and more/less precipitation in the PNW rather than a deterministic (or exact) climate prediction for the next 2 seasons. Simply stated, expectations for continued very weak El Niño to ENSO neutral conditions in the tropics and trends for warmer spring and summer west coast temperatures yield a climate outlook for the PNW that has higher than average odds for a warm spring and summer in western Oregon and Washington.
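A "shift in the odds" can be made concrete with tercile probabilities. The numbers below are purely illustrative and are not taken from the CPC forecast: climatology gives each temperature tercile a 1/3 chance, and a slight warm tilt moves a few percentage points toward the "above average" category without making any outcome certain.

```python
# Illustrative tercile probabilities only -- the specific numbers are assumed,
# not taken from the NOAA CPC forecast discussed above.
climatology = {"below": 1 / 3, "near": 1 / 3, "above": 1 / 3}
warm_tilt = {"below": 0.27, "near": 0.33, "above": 0.40}

# The three categories still exhaust all outcomes...
assert abs(sum(warm_tilt.values()) - 1.0) < 1e-9
# ...but "above average" is now favored, without being guaranteed.
assert warm_tilt["above"] > climatology["above"]
```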
Pacific Northwest Resource Outlooks
- Water Resources Forecasts (streamflow and other hydrologic conditions)
- Salmon survival forecast
- Forecast of extreme weather events
Climate Prediction Resources
The links below provide access to the latest information on the current state of global and regional climate, as well as links to global and regional climate predictions.
The Current State of the Tropical Pacific
- Real-time data from moored ocean buoys (from NOAA’s TAO array)
- ENSO diagnostic discussion (from NOAA’s Climate Prediction Center)
- Weekly ENSO update (from NOAA’s Climate Prediction Center)
- ENSO Quick Look (from the International Research Institute for Climate Prediction)
- Monitoring El Niño/La Niña (from NOAA’s Climate Prediction Center)
Predictions of Tropical Pacific Conditions
- Seasonal Niño3 sea surface temperature anomaly plume forecasts (from the European Center for Medium-Range Weather Forecasts)
- ENSO forecast forum (from NOAA’s Climate Prediction Center)
- Statistical Probabilistic ENSO Predictions (from the International Research Institute for Climate Prediction)
- Sea surface temperature forecasts (from the International Research Institute for Climate Prediction)
The Current State of the Globe
- Climate diagnostics bulletin (from NOAA’s Climate Prediction Center)
- Monitoring climate in the Extratropics and Tropics (from NOAA’s Climate Prediction Center)
- The North Atlantic Oscillation (NAO) (from NOAA’s Climate Prediction Center)
- Monthly climate information digest (from the International Research Institute for Climate Prediction)
- Accumulated daily precipitation time series graphs (from NOAA’s Climate Prediction Center)
- Daily global and regional precipitation analysis (from NOAA’s Climate Prediction Center)
- Index of Climate Prediction Center’s climate monitoring activities and data
Current and Predicted U.S. Conditions
- Monthly to seasonal climate outlooks (from NOAA’s Climate Prediction Center)
- Northern Hemisphere snow report (updated monthly by NOAA/NCEP)
- Spring and summer streamflow forecasts (from the USDA Natural Resources Conservation Service)
- Drought in the US
- Water supply forecasts and snowpack conditions for the Western U.S.
- Experimental seasonal fire risk forecasts (from the U.S. Forest Service)
- Western U.S. climate conditions and forecasts (from the Western Regional Climate Center)
Pacific Northwest Conditions
- Western Washington water and snowpack (from Seattle City Light)
- Seattle water supply conditions and outlook (from Seattle Public Utilities)
- Coastal conditions (from NOAA’s CoastWatch)
- Data on PNW snowpack (from the Western Regional Climate Center)
Introducing parallel programming
Introducing .NET parallel programming
This is about the parallel programming features of .NET 4, specifically the Task Parallel Library (TPL), Parallel LINQ, and the legion of support classes that make writing parallel programs with C# simpler and easier than ever before.
With the widespread use of multiprocessor and multicore computers, parallel programming has gone mainstream. Or it would have, if the tools and skills required had been easier to use and acquire.
Microsoft has responded to the need for a better way to write parallel programs with enhancements to the .NET Framework.
.NET has had support for parallel programming since version 1.0, now referred to as classic threading, but it was hard to use and made you think too much about managing the parallel aspects of your program, which detracted from focusing on what needed to be done.
The new .NET parallel programming features are built on top of the classic threading support. The difference between the TPL and classic threading becomes apparent when you consider the basic programming unit each uses. In the classic model, the programmer uses threads. Threads are the engine of execution, and you are responsible for creating them, assigning work to them, and managing their existence. In the classic approach, you create a little army to execute your program, give all the soldiers their orders, and keep an eye on them to make sure they do as they were told. By contrast, the basic unit of the TPL is the task, which describes something you want done. You create tasks for each activity you want performed, and the TPL takes care of creating threads and dealing with them as they undertake the work in your tasks. The TPL is task-oriented, while the classic threading model is worker-oriented.
Tasks let you focus primarily on what problem you want to solve instead of on the mechanics of how it will get done. If you have tried parallel programming with classic threads and given up, you will find the new features have a refreshing and enabling approach. You can use the new features without having to know anything about the classic features. You’ll also find that the new features are much better thought out and easier to use.
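As a minimal sketch of the task-oriented style (the class name, loop bounds, and printed strings here are my own, not from the text), a .NET 4 program can describe work as tasks and let the TPL decide how threads execute them:

```csharp
using System;
using System.Threading.Tasks;

class TaskDemo
{
    static void Main()
    {
        // Describe WHAT to do; the TPL schedules threads to do it.
        Task<long> sum = Task.Factory.StartNew(() =>
        {
            long total = 0;
            for (int i = 1; i <= 1000; i++) total += i;
            return total;
        });

        // Data parallelism without creating or managing threads yourself.
        Parallel.For(0, 4, i => Console.WriteLine("working on chunk " + i));

        // .Result waits for the task to finish and retrieves its value.
        Console.WriteLine(sum.Result); // 500500
    }
}
```

Notice that no thread is created, started, or joined explicitly anywhere in the program; the tasks say what should happen, and the runtime decides how.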
You can define an inline function by using
defsubst instead of
defun. An inline function works just like an ordinary
function except for one thing: when you compile a call to the function,
the function's definition is open-coded into the caller.
Making a function inline makes explicit calls run faster. But it also has disadvantages. For one thing, it reduces flexibility; if you change the definition of the function, calls already inlined still use the old definition until you recompile them.
Another disadvantage is that making a large function inline can increase the size of compiled code both in files and in memory. Since the speed advantage of inline functions is greatest for small functions, you generally should not make large functions inline.
Also, inline functions do not behave well with respect to debugging,
tracing, and advising (see Advising Functions). Since ease of
debugging and the flexibility of redefining functions are important
features of Emacs, you should not make a function inline, even if it's
small, unless its speed is really crucial, and you've timed the code
to verify that using
defun actually has performance problems.
It's possible to define a macro to expand into the same code that an
inline function would execute. (See Macros.) But the macro would be
limited to direct use in expressions—a macro cannot be called with
mapcar and so on. Also, it takes some work to
convert an ordinary function into a macro. To convert it into an inline
function is very easy; simply replace
defun with
defsubst.
Since each argument of an inline function is evaluated exactly once, you
needn't worry about how many times the body uses the arguments, as you
do for macros. (See Argument Evaluation.)
Inline functions can be used and open-coded later on in the same file, following the definition, just like macros.
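A minimal sketch (the function name is mine): defining a small function with
defsubst so that the byte compiler open-codes the calls that follow the
definition.

```elisp
;; A small function: a good candidate for inlining.
(defsubst my-square (x)
  "Return X multiplied by itself."
  (* x x))

;; Compiled calls after this point are open-coded, as if (* 7 7) were
;; written directly; each argument is still evaluated exactly once.
(my-square 7)  ; => 49
```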
See also the
Browse High School Calculus
Stars indicate particularly interesting answers or
good places to begin browsing.
Selected answers to common questions:
Maximizing the volume of a box.
Maximizing the volume of a cylinder.
Volume of a tank.
What is a derivative?
- Inverse of a Multivariate Function [05/30/2002]
Let f:NxN -> N such that f(x,y) = 2^x(2y + 1) - 1 for all natural
numbers x, y. Let the inverse of f, g be given by g:N -> NxN. Find the
inverse of the function g.
- Inverting Functions [07/19/2002]
To find the inverse of a function y=f(x), do I interchange the
variables x and y, or do I solve for x in terms of y?
- Is Infinity a Number ... in Inversive Geometry? [01/15/2010]
A reader attempts to demonstrate infinity as concretely measurable in
an inversive geometric construction. Doctor Tom explains analyzes the
argument, weighing the pros and cons of the axioms of non-Euclidean
geometries, and going on to expose an apparent paradox.
- Is pi Squared Rational or Irrational? [01/07/2011]
Doctor Ali provides a proof by contradiction.
- Is the Function Invertible? [09/22/1997]
For the following functions f(x) decide if the function is invertible as a
function from R to R...
- Iterated Limits [5/25/1996]
I don't know how to do the following problems...
- Jerk - Derivative of Acceleration [03/16/2001]
I need a mathematical term that begins with the letter J, its definition,
and a high-school-level explanation of the term.
- kth Derivative of x^n [03/04/2003]
What is a quick way to find the nth derivative of a function? Example:
the 7th derivative of x^8.
- LaGrange Error for a Taylor Polynomial [05/28/2000]
How can you find the LaGrange error for a Taylor polynomial?
- Lagrange Multipliers [01/08/1998]
I have a problem with Lagrange Multipliers - can you help?
- Lagrange Multipliers [01/28/2001]
The temperature of a point(x,y,z) on the unit sphere is given by
T(x,y,z)=xy+yz. Using Lagrange multipliers, find the temperature of the
hottest point on the sphere.
- Lagrange Multipliers and Constraints [11/24/1998]
When using the Lagrange Multiplier method, how do you determine which of
the two equations is the constraint?
- Laplace Transform [01/25/2001]
What are Laplace transforms, and what are their applications?
- Latus Rectum [08/08/2002]
I'm trying to find the definition, an explanation, a formula, or
anything that will help me to better understand what latus rectum is.
- Learning Differential Equations [7/15/1996]
What resources can I use to learn about differential equations?
- Lengthening Shadow [10/13/2003]
A man 6 feet tall walks at 4 miles an hour directly away from a
lampost 18 feet tall. Why does his shadow lengthen at a constant rate?
- Length of a Cubic Curve [12/10/1997]
I need to calculate the true length of a cubic curve.
- Let f(x) = 1 + 1/2 + 1/3 + ... + 1/[(2^n)-1] [05/15/1999]
Which of the following inequalities are correct?
- L'Hopital's Rule [4/23/1995]
I can't figure out how to take the limit using L'Hopital's rule on this
- L'Hopital's Rule [8/11/1996]
How can I solve the following problem: limit as x-> infinity of x^(1/ x)?
- L'Hopital's Rule and Limits [8/7/1995]
Find the limit of [sin 3x / tan (x/3)] as x goes to 0.
- Lifting an Object of Changing Mass [7/28/1996]
Suppose you grab the end of a chain that weights 3 lb/ft and lift it
straight up off the floor at a constant speed of 2 ft/s: determine the
force as a function of height; how much work do you do in lifting the top
of the chain 4 feet?
- Limit As n Approaches Infinity [9/7/1996]
How do I find the limit (n-> -infin.) of (sqrt(2n^2+1))/(2n-1)?
- Limit Intuitions [10/01/1998]
Can you explain the intuition behind the formal definition of a limit?
derive the equation of the tangent line of a function at a given point?
- Limit of n^2/2^n as n Tends to Infinity [12/05/2002]
Demonstrate that the limit as n tends to infinity of the fraction n^2
/ 2^n = 0.
- The Limit of Sin(1/x) [10/02/1998]
Why does the sine of 1/x have no limit as x approaches 0?
- Limit of x sin(1/x) [04/23/2002]
I assumed from the graph that the function had a limit at x=0 of 0,
but since it involves sin(1/0) I can not prove this using the basic
trigonometric limits (sin x/x and (1-cos x)/x), L'Hopital's
rule, or by rearranging the equation. Can you help?
- Limit Problem [7/16/1995]
Find the limit of (x^2 + 2x) / (5x - 5) as x tends to infinity.
- A Limit Problem [10/24/1996]
What is the limit of (x^2 -4) / (2x) as x approaches infinity?
- Limit Problems [08/08/1998]
What is the limit of (sqrt(2-t) - sqrt(2))/t as t->0?
- Limit Proof [7/13/1996]
If lim(An/n) = L and L>0, how do you show that lim(An) = + infinity?
- Limit Proofs with L'Hopital's Rule [05/27/1998]
How would you prove the following three limits? ...
- Limits Approaching Infinity [02/16/2002]
Is the only way to answer them by memorizing all the general questions
you may come across?
- Limits - Indeterminate Forms [10/12/1997]
I cannot do a problem where I need to convert into the form 0/0 and then
use L'Hopital's Rule...
- Limits of Multi-Variable Functions [11/12/2004]
Find the limit, if it exists, or show that the limit doesn't exist:
lim (x*y*cos(y))/(3*x^2 + y^2) as (x,y) ==> (0,0).
- Limits of Sequences [02/25/2001]
Is the limit of [(1 + 1/sqrt(n))^(1.5n)], as n goes to infinity, e? What
is the limit as n goes to infinity of [(1 + a/n)^n], where a is not equal
- Limits of the Natural Logarithm [07/22/1998]
Can you help us find the limit of x (ln x)^n as x goes to infinity, for
all n? Do we use L'Hopital's Rule?
- Linearity and Concavity [12/07/2003]
Why does a linear function have no concavity?
- Line Tangent to an Ellipse [03/29/2003]
Find the equation of the tangent to the ellipse x^2 + y^2 = 76 at each
of the given points: (8,2),(-7,3),(1,-5). Write your answers in the
form y = mx + b.
- Logistic Growth of A Rumor Spreading [04/11/1998]
Assuming logistic growth, find how many people know the rumor after two
I have always thought was defined as , where is the principal square root. Now I heard that you can't take the square root of negative numbers. That makes me confused.
How do you solve this equation for example:
Do you have to specify that the solution can be complex as well? Is it okay then to take the square root of negative numbers? (Or is it called the complex square root, and how do you know when it is the complex square root that's intended?)
And when is a number negative? Is it when the real part of it is negative? Or are negative numbers only possible for real numbers?
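In most programming languages the distinction asked about here is explicit: the real square root function rejects negative inputs, while a separate complex square root accepts them. A small Python sketch; the equation x² = -4 is a hypothetical stand-in, since the original equation did not survive:

```python
import cmath  # cmath.sqrt is the principal *complex* square root

# math.sqrt(-4) would raise ValueError: the real square root is undefined here.
# cmath.sqrt returns the principal complex square root instead.
root = cmath.sqrt(-4)       # 2j
solutions = [root, -root]   # x**2 == -4 has two complex solutions: 2j and -2j
```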
Let me try yet one more type of explanation, which I will confine to the PN junction diode (covers virtually all diodes used in modern circuits).
The diode consists of a p-doped region (p-type) slapped up against an n-doped region (n-type). In the p-type, the electron (e-) flow is largely accomplished by electrons moving from hole to hole. This is, electrically, exactly analogous to (and is often visualized as) holes moving in a direction opposite to e- flow (although there is no physical movement of positive charge). In the n-type, there are loosely bound e- which can be donated (moved).
At the PN junction of the diode, loosely bound e- in the n-type fall into the holes of the adjacent p-type. What you then have is an abundance of e- in a thin layer of the p-type layer at the junction, and a depletion of them (creating a net positive charge) in a thin layer of the n-type. This sets up a voltage field of positive in the n-type relative to negative in the p-type. This pushes any free e- in the n-type further away from the junction. The result is a thin PN layer which has no free holes and no free e-. The layer becomes an insulator.
Now, if you apply a positive voltage to the p-type and a negative at the n-type, e- in the p-type are removed, making free holes. Simultaneously, the positive voltage is counteracting the reverse voltage which had been set up in the PN junction, and e- in the n-type are forced closer to the p-type, where they can cross over and fill up the new holes. Current flows.
If, however, you apply positive voltage to the n-type and negative to the p-type ("reverse-biasing" the diode), you simply reinforce the voltage gradient which was already naturally set up in the PN junction. The e- are forced even farther away from the PN junction, and the insulative boundary (depletion region) thickens. No current flows.
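The qualitative behavior described above (current flows under forward bias, essentially none under reverse bias) is captured by the standard Shockley ideal-diode equation. The explanation above does not derive it; the sketch below just illustrates the asymmetry, with an assumed saturation current of 1 pA and room-temperature thermal voltage:

```python
import math

def diode_current(v, i_s=1e-12, v_t=0.02585):
    """Shockley ideal-diode equation: I = I_s * (exp(V / V_T) - 1).

    v   : applied voltage (positive = forward bias on the p-type side)
    i_s : reverse saturation current (assumed 1 pA here)
    v_t : thermal voltage at room temperature (~25.85 mV)
    """
    return i_s * (math.exp(v / v_t) - 1.0)

# Forward bias: current grows exponentially with voltage.
# Reverse bias: current saturates at -i_s, i.e. effectively zero.
```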
To go more in-depth than that might take a good portion of a graduate course in materials science. I hope what I have written suffices.
The mass of asteroid (21) Lutetia
Measurements obtained during the 10 July 2010 Rosetta flyby of asteroid (21) Lutetia enabled Rosetta scientists to determine the mass of Lutetia with greater precision than ever before. The mass estimate of 1.7 × 10^18 kg turned out to be much lower than expected from ground-based observations. The Rosetta error bar is smaller than the measurement point.
For further details see Pätzold et al. (2011).
Trace gases and aerosols are major factors influencing the climate. With the help of highly complex installations, such as MIPAS on board of the ENVISAT satellite, researchers try to better understand the processes in the upper atmosphere. Now, scientists have completed a comprehensive overview of sulfur dioxide measurements.
Concerns continue to grow about the effects of climate change on fire. Wildfires are expected to increase 50 percent across the United States under a changing climate, and by over 100 percent in areas of the West by 2050, as projected by some studies. Of equal concern to scientists and policymakers alike are the atmospheric effects of wildfire emissions on climate.
The Signum Representation
I’ve just noticed that there’s a very important representation I haven’t mentioned. Well, actually, I mentioned it in passing while talking about Rubik’s group, but not very explicitly. And it’s a very important one.
Way back when I defined permutation groups I talked about a permutation being even or odd. Remember that we showed that a permutation can be written out as a composite of transpositions which swap two objects. In general this can be done in more than one way, but if it takes an even number of swaps to write a permutation in one way, then it will take an even number of swaps in any other way, and similarly for permutations requiring an odd number of swaps. In this way we separate out permutations into the “even” and “odd” collections.
The composite of two even permutations or two odd permutations is even, while the composite of an even and an odd permutation is odd. This is just like the multiplication table of the group $\mathbb{Z}_2$, with “even” for the group’s identity and “odd” for the other group element. That is, we have a homomorphism $\mathrm{sgn}: S_n \to \mathbb{Z}_2$ for every permutation group $S_n$.
Now to make this into a representation we’re going to use a one-dimensional representation of $\mathbb{Z}_2$. We have to send the group identity to the field element $1$, but we have a choice to make for the image of the other element. We need to send it to some field element $x$, and this element must satisfy $x^2 = 1$ for this to be a representation. We could choose $x = 1$, but this just sends everything to the identity, which is the trivial group representation. There may be other choices around, but the only one we know always exists is $x = -1$ (note we’re tacitly assuming that $1 \neq -1$, i.e., that the field is not of characteristic $2$).
So we define the one-dimensional signum representation of $S_n$ by sending all the even permutations to the $1 \times 1$ matrix whose single entry is $1$, and sending all the odd permutations to the matrix whose single entry is $-1$. Often we’ll just ignore the “matrix” fact in here, and just say that the signum of an even permutation is $1$ and the signum of an odd permutation is $-1$. But secretly we’re always taking this and multiplying it by something else, so we’re always using it as a linear transformation anyway.
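The parity underlying this representation is easy to compute. A sketch in Python (the function name and the 0-based convention are mine): decompose the permutation into disjoint cycles, using the fact that a cycle of length k factors into k - 1 transpositions.

```python
def signum(perm):
    """Sign of a permutation of 0..n-1 given as a sequence: +1 if even, -1 if odd.

    Decomposes perm into disjoint cycles; a cycle of length k is a product of
    k - 1 transpositions, so each even-length cycle flips the sign once.
    """
    seen = [False] * len(perm)
    sign = 1
    for start in range(len(perm)):
        if not seen[start]:
            length, j = 0, start
            while not seen[j]:          # walk the cycle containing `start`
                seen[j] = True
                j = perm[j]
                length += 1
            if length % 2 == 0:         # even-length cycle = odd number of swaps
                sign = -sign
    return sign
```

Composing permutations multiplies their signs, which is exactly the homomorphism property described above.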
Science Fair Project Encyclopedia
The Myxozoa are a group of microscopic, parasitic animals. Originally they were considered protozoa, and included with other non-motile forms in the group Sporozoa. However, as their distinct nature became clear they were removed to their own phylum. They are now generally considered to have developed from multicellular animals, and are classified with them.
Many Myxozoa have a two-host lifecycle, involving a fish and an annelid worm or bryozoan. Infection occurs by valved spores. These contain one or two sporoblast cells and one or more polar capsules, containing filaments that anchor the spore to its host. The sporoblasts are then released as a motile form called an amoebula, which penetrates the host tissues and develops into one or more multinucleate plasmodia. Certain nuclei later pair up, one engulfing another, to form new spores.
The polar capsules are very similar in structure and appearance to the stinging cells of Cnidaria. On account of this the Myxozoa have been generally held to be extremely reduced cnidarians, and in particular have been considered close relatives of Polypodium , with some genetic support. More recent studies of Hox genes, however, point to an origin among the Bilateria. This has been given strong support by the discovery that Buddenbrockia , a worm-like parasite of bryozoans up to 2 mm in length, belongs among the Myxozoa. Genetically it is almost indistinguishable from the other forms, and it has Myxozoan-like spore capsules, but it retains a bilateral body form with longitudinal muscles. This serves as a missing link between the Myxozoa and their multicellular ancestors.
Myxozoa are split into two classes, Malacosporea and Myxosporea. The outdated subgroup Actinosporea is now recognized as a life cycle phase of Myxosporea.
- Class Malacosporea
- Buddenbrockia plumatellae
- Tetracapsuloides bryosalmonae , an important parasite of salmon
- Class Myxosporea
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
June 26, 2004
Arable lands constitute 13.5% of the continental surface.
Permanent arable lands constitute 4% of the world's surface.
One third of the continental surface is being deprived of its fertile layer of soil.
Since 1490, more than 4 billion hectares of fertile soil have been destroyed by climate variability, harvesting, grazing, industrialization, and urban growth.
About 20 billion tons of fertile soils are destroyed each year by fire and winds.
March 1987 -- Intense winds dragged a dense cloud of reddish dust that darkened the daytime sky, forming a band which stretched from the Tropic of Cancer to near the Equator. The spectacle, which dyed the daylight red, lasted for more than four hours.
Ten years later, in August 1997, during an excursion to the city of Zacatecas, Mexico, I stumbled upon a Pan-American highway blocked by reddish sand dunes! The large, healthy farming areas that I had seen 20 years before had turned into a far-reaching desert. Sand dunes and rocky soils dominated the scene.
In April 2001, once again, the northern sky was covered by a dense cloud of dust. The reddish-hued light wrapped our group for more than six hours. The origin of that reddish dust was the same as what I had seen in 1987: the dust had been picked up by the winds from the fields of the states of Zacatecas, Durango and San Luis Potosi. On that occasion, the area of dust dispersal was wider than during the 1987 event; I knew this because I also received reports of the phenomenon from Texas, US.
We call the phenomenon "red wind" because it transports reddish dust.
WHAT DOES DESERTIFICATION MEAN?
Desertification refers to the degradation of soil in arid, semiarid and subhumid lands due to climate variability and human activities.
Among human activities, we identify farming as the main cause of desertification, because farmers persistently grow crops and graze farm animals on the same lands for more than 20 years.
There is also a link between acid rain -caused by atmospheric pollution- and desertification, because sulfur-derived acids destroy the diverse life forms that supply biodegradable materials to soils, for example bacteria and earthworms.
Desertification can also occur naturally, as the last phase of an ecological succession. However, it has been accelerated by the association of climate variability with the expansion of human communities, which demand ever greater quantities of food from agriculture and stockbreeding as they grow. As a result, arid, semiarid and subhumid lands have been degraded on every continent by farming overexploitation.
The European countries most affected by land degradation are Armenia, Azerbaijan, Bulgaria, Romania and Russia.
Africa faces a dramatic and astounding situation: more than three billion hectares of prairies and subhumid forests are undergoing desertification due to the unrestrained destruction of vast prairies by fire, agricultural and stockbreeding overexploitation, irregular rainfall -arriving after long periods of drought- and severe wind erosion of soils. The rate of soil degradation has been calculated at more than 100 thousand hectares per year. Desertified areas in Africa already reach 73% (almost two thirds of the African continent).
Asia holds 1.7 billion hectares of arid regions. Worse still, Asian populations keep spreading beyond the edges of the deserts (like a gravitational field expanding over other gravitational fields).
In China, fast industrialization has propitiated the destruction of grasslands and humid forests through the development of industrial parks and the increase of rice harvests. Desertification in China has been sped up by the production of charcoal from woodlands as a main supply of energy: large areas of tropical rain forest have been chopped down to obtain charcoal for a booming industry.
There are 20.5 million square kilometers of deserts in Latin America. The Atacama Desert is one of the most famous. It lies in Chile and extends from the southern border of Peru to the northern border of Argentina; it is almost 1,000 km in length and 160 km wide. The Atacama is an old arid region with volcanic and saline soils; however, the agriculture exploited at its peripheries has propitiated a hasty advance of desertification in barely ten years.
In Mexico, a belt of severe desertification extends from Sonora to San Luis Potosi. This belt is a wide strip that advances from Chiapas to Baja California and includes a great portion of the Mexican Central Plateau (Anahuac Plateau), covering the states of Sonora, Chihuahua, Durango, Zacatecas, Coahuila, and San Luis Potosi. Excessive agricultural exploitation of rural lands has generated severe degradation of the soils. Other areas of the Mexican Republic that display serious desertification are the southwest of Veracruz and the states of Chiapas, Guerrero, Oaxaca and Michoacan; in these states, agricultural overexploitation and the devastation of woodlands have largely contributed to soil erosion.
In the USA, wind and water erosion, continuous drought, and intensive farming on till and no-till fields have produced a conspicuous desertification that places the plight of the Midwest and Great Plains prairies among the most destructive phenomena in the country.
Texas shares with the Midwest and the Great Plains a very high level of soil degradation, the result of continuous drought, wind erosion, and reckless exploitation of soils by rural and industrial activities.
The uncontrollable and irreversible advance of desertification in the United States is a deeply frustrating situation, and a clear example of how political affairs directly influence the devastation of formerly dynamic environments.
WHY IS DESERTIFICATION SO BAD FOR LIFE ON EARTH?
Arid, semiarid, and grassland or subhumid ecosystems are the natural habitat of thousands of plant and animal species, which are annihilated when the biomes where they live are destroyed. This has been one of the main causes of the decline of biodiversity on our planet.
Semiarid and grassland or subhumid ecosystems produce almost 15% of the planet's oxygen.
Humus, the fertile layer of arid, semiarid, grassland, and subhumid ecosystems, retains rainwater for longer periods, preventing its rapid and torrential runoff. Humus also slows down the evaporation of water.
Desertification is defined as the loss of the fertile layer of soil (humus), the layer that supports wild and cultivated plants alike and that also retains water. Desertification therefore hurts the economies of nations sustained by agriculture, because it weakens agricultural productivity.
Degradation of soils means less food production for humans and their livestock. Damage to fertile soils equals less food and more starvation on our planet: Dfs = St/F (damage to fertile soils [Dfs] is proportional to starvation [St] and inversely proportional to food production [F]).
WHAT CAN WE DO?
Not much. The responsibility belongs to all of us; however, we can do almost nothing to set back the rapid deterioration of the fertile soil layer.
Abandoning all fertile lands for the next 100 years would regenerate only 1% of the total soils destroyed so far.
It is urgent that we stop, now, the chaotic and impulsive destruction of woodlands around the globe. The practice of burning grasslands and shrubs to strip land for farming and urbanization must be classified as an ominous transgression against humankind.
Each nation has created research commissions to assess the best way to stop desertification; however, we know little about those commissions' work because, in the fight against desertification, we humans are at an absolute disadvantage. We know how to destroy the Biosphere, but we do not know how to drive it backwards.
DESERTIFICATION IS AN IRREVERSIBLE PROCESS. TODAY, YOU AND I ARE AWARE OF THIS PROBLEM. ALL WE CAN DO IS TEACH OTHERS NOT TO REPEAT OUR PAST MISTAKES, AND TO PREVENT THE PROBLEM RATHER THAN TRY TO FIX IT... ONCE ESTABLISHED IN A SPECIFIC AREA, DESERTIFICATION CANNOT BE REVERSED.
The team used Fe3O4 nanoparticles to produce a diverse range of propargylamines at moderate to high yields. These compounds can serve as building blocks and skeletons of biologically active compounds, the researchers note.
The Fe3O4 nanoparticles were magnetically separated (Figure 1), washed with ethyl acetate, air dried and then used again without further purification. The materials went through 12 such cycles without significant loss of catalytic activity, say the researchers.
Work is underway on using the nanoparticles for other reactions, notes Chao-Jun Li of McGill's Department of Chemistry. | <urn:uuid:319170d5-050b-400f-8b6a-ce6b0ed71db7> | 2.796875 | 122 | Truncated | Science & Tech. | 20.305 |
Taye, M.T. and Willems, P. 2012. Temporal variability of hydroclimatic extremes in the Blue Nile basin. Water Resources Research 48: 10.1029/2011WR011466.
The authors write that the upper Blue Nile basin is one of the most important river basins in Africa, because "it contributes about 60% of the Nile's flow at Aswan, Egypt (Yates and Strzepek, 1998; Sutcliffe and Parks, 1999; Conway, 2005) ... and its availability is a matter of survival for Egypt and Sudan." In addition, they say it is the largest and most economically imperative water resource for Ethiopia, which "is planning irrigation and hydropower projects using the Blue Nile River (Tesemma et al., 2010), while all the other riparian countries are working on increasing their share of the water to boost their economic developments."
What was done
In studying this important river and the region that feeds it, in the words of Taye and Willems, "the temporal variability of basin-wide rainfall extremes and river flow extremes from four gauging stations was investigated," and "on the basis of a quantile anomaly analysis method, decadal variations in extreme daily, monthly, and annual quantiles were studied, and the periods of statistical significance were identified."
What was learned
The two Belgian scientists found that, in regard to river flows and rainfall depths, "the 1980s had statistically significant negative anomalies in extremes in comparison with the long-term reference period of 1964-2009, while the 1960s-1970s and the 1990s-2000s had positive anomalies, although less significant." Most important of all, however, they report that "there is neither consistent increasing nor decreasing trend in rainfall and flow extremes of recent years." And, therefore, they say that "anticipated trends due to global warming could not be identified."
What it means
Once again, for another part of the world, and in spite of the oft-repeated "doom-and-gloom" prognostications of climate alarmists, the global warming experienced over the past half-century or so has not led to either extreme increases or decreases in rainfall and subsequent river flow in Africa's upper Blue Nile Basin.
Conway, D. 2005. From headwater tributaries to international river: Observing and adapting to climate variability and change in the Nile basin. Global Environmental Change 15: 99-114.
Sutcliffe, J.V. and Parks, Y.P. 1999. The Hydrology of the Nile. IAHS Special Publication 5.
Tesemma, Z.K., Mohamed, Y.A. and Steenhuis, T.S. 2010. Trends in rainfall and runoff in the Blue Nile Basin: 1964-2003. Hydrological Processes 24: 3747-3758.
Yates, D.N. and Strzepek, K.M. 1998. Modeling the Nile basin under climatic change. Journal of Hydrologic Engineering 3: 98-108.
Reviewed 22 August 2012
g_assert() is used for debugging. It's typically used in areas where an expression SHOULD evaluate to true, and if it's not, then it throws out an error to the console. So, if you see this:
g_assert (value == NULL)
Then the developer is expecting "value == NULL" to be a true expression. If value is not null, then an error message will be seen. You will see this used OFTEN throughout the GTK+ and the GLib libraries. Think of it as us, the programmers, telling our code that we "assert" the following expression to be true, and to let us know otherwise.
Try calling a gtk_widget_xxx function on a widget that has not been initialized yet: before your program crashes (or just doesn't work), you'll see some assertion failure errors. Without them, all you would know is that your program isn't working. But now you know exactly why.
When your program is ready to release, hopefully you aren't seeing any assertion errors!
So let's look at your snippet:
g_file_get_contents ("foo.txt", &contents, NULL, &err);
g_assert ((contents == NULL && err != NULL) || (contents != NULL && err == NULL));
At this point, we expect one of two conditions, contents is NULL and error is set, or contents is set and error is NULL. Otherwise, something we hadn't predicted has happened. So we're not expecting that assertion to output any error messages.
Same thing here:
g_assert (contents == NULL);
fprintf (stderr, "Unable to read file: %s\n", err->message);
When we get into the error handling block, we expect the contents to be NULL because there was an error. That's why we aren't freeing the contents. But if for some reason, it isn't NULL, we're going to spit out an error message about that freakish condition.
Now, This is a bit redundant if you ask me, because this was already checked in the previous assertion. But that's what it's doing.
So in short, removing assertions does not change how the code runs per se, but they assist in debugging critical error conditions--conditions the developer intends to never happen.
Where's Tyche, the 10th (make that 9th) planet? Getting the full story.
John Matese and Daniel Whitmire of the University of Louisiana at Lafayette recently made the news when they announced the possible discovery of a gas giant planet, which they named Tyche, in the Oort Cloud at the extreme edge of the Solar System (previously). Now ars electronica breaks down the evidence behind the announcement, what can be done to confirm or disprove its existence, and how long it could take.
posted by scalefree on Mar 3, 2011
People have been upset about Pluto's demotion for some time now. (While classical music fans have just had a love/hate relationship with this whole process.) But astronomical hate mail has never been as cute as the missives Neil deGrasse Tyson has received over the years from tots upset at poor Pluto's ouster.
posted by greekphilosophy on Mar 15, 2010
Tonight NASA is scheduled to launch the Kepler Mission (named after planetary legislator Johannes Kepler) with the goal of finding Earth-size planets in orbit around stars in the Cygnus-Lyra region of the sky. Over the next 3 and a half years it will maintain a nearly unblinking gaze on the approximately 100 thousand stars in the region. NASA expects it to find about 50 Earth-size planets, as well as hundreds that are larger. You can watch the launch live on NASA TV. [more inside]
posted by borkencode on Mar 6, 2009
Mars and Beyond - 50 years ago, this animated episode of Tomorrowland aired on Disneyland a few months after the launch of Sputnik - an entertaining melange of astronomy, sci-fi, pop culture, science, speculation, and surreality. Walt himself and Wernher von Braun make guest appearances, and clip 5 is particularly trippy. (Parts 2
posted by madamjujujive on Jun 10, 2007
Scientists have discovered a planet composed of scorching hot ice. Originally thought to be a gas giant due to its mass, it's actually only four times the size of Earth and most likely composed of exotic forms of ice, such as Ice VII and Ice X, with a surface temperature of 300 °C.
posted by Artw on May 16, 2007
Hubble harvests 100 new planets during a 7-day sweep of the bulge of the Milky Way. If confirmed, it would almost double the number of known planets to about 230. "I think this work has the potential to be the most significant advance in discovering extra-solar planetary systems since the first planets were discovered in the mid-1990s."
posted by stbalbach on Jul 1, 2004
Transits of Venus occur every 130 years or so, when Venus can be observed passing across the face of the sun. Chasing Venus is an online exhibition by Smithsonian Institution Libraries that tells the story of how the transit has been observed since the 17th century, with early observations in England, illustrated accounts of expeditions by 18th-century astronomers to various parts of the world, and early uses of photography to record observations in the 19th century. Includes links to animations of transits reconstructed from Victorian photographs, and details of a lecture series on Thursdays in April and May (first one April 8). The first transit since 1882 is this year.
posted by carter on Apr 4, 2004
Reflections on a Mote of Dust
"We succeeded in taking that picture [from deep space], and, if you look at it, you see a dot. That's here. That's home. That's us. On it, everyone you ever heard of, every human being who ever lived, lived out their lives. The aggregate of all our joys and sufferings, thousands of confident religions, ideologies and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilizations, every king and peasant, every young couple in love, every hopeful child, every mother and father, every inventor and explorer, every teacher of morals, every corrupt politician, every superstar, every supreme leader, every saint and sinner in the history of our species, lived there on a mote of dust, suspended in a sunbeam."
Carl Sagan, "Pale Blue Dot"
posted by crasspastor on Sep 11, 2001
A Mars lander is set to touch down on Mars sometime between December 1st and December 20th of this year. Keep your eyes peeled on this Mars site; it will be the primary location of new information about the mission. I doubt they'll find water on Mars though...
posted by mathowie on Nov 16, 1999
We are not alone.... a new planet outside of our solar system was found today. It's only a matter of time before the little green men come down to greet us.
posted by mathowie on Nov 13, 1999
A rational function f(t) with coefficients in a field K is a fraction g(t)/h(t) where g(t) and h(t) are in K[t] and h(t) ≠ 0.
a fraction whose numerator and denominator are both polynomials in X
a function formed by taking the ratio of two polynomials
a function that is expressed as the ratio of two polynomials
a function that looks like a fraction
a function which is the ratio of two polynomials
a function whose value is given by a rational expression
Functions constructed using the roots or quotients of polynomials. Example: f(x) = x^(1/2); g(x) = (2x^3 - 4x^2 + 2) / (x^2 + 1)
A ratio of two polynomial functions, f(x) = p(x)/q(x), where q(x) ≠ 0.
In mathematics, a rational function is any function whose output can be given by a formula that is the ratio of two polynomials. | <urn:uuid:a2fc91ee-4638-4cff-b61e-32eb7f71e834> | 3.40625 | 209 | Structured Data | Science & Tech. | 51.023529 |
The Indexed Sequential Access Method, or ISAM , is a technique for organizing data and efficiently retrieving it. It is designed for efficient operation in two modes: random access and sequential access; hence the name Indexed Sequential Access Method.
The ISAM is implemented as a C function library. The purpose of the ISAM library is to manage index and data files. The combination of the data file and its associated index file is called a "database".
The ISAM library defines the operations that can be performed on a database. There are six basic operations that are performed by any ISAM library:
To make it easier to perform database operations, there is the notion of a "current record". This is an internal position indicator that points to a particular record in the database. Some ISAM operations apply to the current record. Functions are provided for changing the internal position indicator. For example, there are functions for moving to the next or previous record. These types of operations are always relative to an index.
A database will often have more than one index. For example, the card catalog has an author index, a title index, and a subject index. In addition, the card catalog might have a unique index for finding a book by its ISBN number. Database operations are performed relative to a particular index. Each index defines a different ordering of the records. For example, the title index orders the records alphabetically by book title, while the author index orders them by the author names. Each index has its own current record, which will generally be a different record than the current record in another index. Since the indexes define a logical ordering of records, the physical ordering (i.e., the order in which the records are physically stored in the data file) is generally not important.
Remember that an index is a collection of keys that are values extracted from one or more fields of a record. An index contains one key for each record in the database. Each key points to the complete record from which it was extracted. If an index contains keys that are composed from more than one field, the keys are called compound (or multiple field) keys.
In our card catalog example, the author index might contain keys consisting of a single string value extracted from the last name field. This would cause the author index to order records by last name. For example, the name Thomas Jefferson would appear in sequence before the name George Washington, since the letter 'J' appears alphabetically before the letter 'W'. But what if we had two authors named Jefferson, Thomas Jefferson and Timothy Jefferson. Which would come first in the ordered sequence? The answer is that it could be either, since the ordering is not affected by the author's first name. If we wanted the author index to ensure that Thomas Jefferson appeared in sequence before Timothy Jefferson, we would need to include the first name field as part of the key.
Instead of using single field keys, we could make the author index using compound keys consisting of two string values extracted from both the last name and first name fields of each record. The order in which the two strings are stored in the compound key is important.
If the last name field is stored first, followed by the first name field, then the author index will order the records by last name. Then any records with the same last name will be further ordered by first name. For example, the name Thomas Jefferson will appear in sequence before the name Timothy Jefferson, since the letter 'h' in Thomas appears alphabetically before the letter 'i' in Timothy.
On the other hand, if the first name field was stored in the key before the last name field, then the author index would order the records by first name. Then any records with the same first name would be further ordered by last name. With this ordering, the name George Washington would appear in sequence before the name Thomas Jefferson, since the letter 'G' would appear alphabetically before the letter 'T'. Since it would probably be preferable for the author index to order records by last name rather than first name, the compound key should store the last name first, followed by the first name.
Each field in a record is used to store a particular type of information (or data). The most common data type is a string. A string is simply a sequence of characters terminated by a binary zero (or null character). A string field is used to store textual information. In our card catalog example, string fields would be used to store the author's first and last names. The title and subject heading would also be stored in string fields.
The ISAM library supports two types of string fields: normal and case sensitive. The string type becomes an issue when strings are compared. For normal string fields, the string "ABC" would be equal to the string "abc", even though one string contains upper-case characters and the other contains lower-case characters. However, for case sensitive string fields, "ABC" would not be equal to "abc". Using the standard ASCII character set, the string "ABC" would be less than "abc" because the character 'A' appears before the character 'a' in the character set.
String fields can also be used to store numerical data. However, the standard C numeric data types (int, long, float, double, etc.) are stored internally in a binary format. This binary format is a bit- packed representation of a numeric value that is suitable for performing arithmetic operations. Before storing a numeric value in a string field, it must first be converted to string format using a standard C function like sprintf. Likewise, before performing arithmetic operations with a numeric value stored as a string, it must first be converted back to binary format using a standard C function like sscanf.
Instead of using string fields to store numerical data, a more efficient alternative is to use binary fields. A binary field stores numeric data using the internal binary format. Therefore it is not necessary to convert numeric values to and from string format. An additional benefit is that binary format saves disk space. For any given numeric data type (e.g., type short int), the binary format is fixed-length for all values (typically two bytes), whereas the length of the string format depends on the value (e.g., "10" is three bytes in length while "32000" is six bytes; remember that a string requires a terminating null character).
There is also a disadvantage to using binary fields for storing numeric data. The size of a standard C numeric data type is implementation dependent. One compiler might treat the type int as a two byte (16-bit) value, while another might treat it as a four byte (32-bit) value. A compiler might use different sizes based on whether you are compiling for a 16-bit or 32-bit operating system. Another problem is that the byte ordering of binary data can be dependent on the computer processor (least significant versus most significant byte first). Also, binary floating point formats can vary (not all compilers use the standard IEEE format). As a result, a database that contains numeric data stored in binary format will not be as portable as one that stores numeric data in string format. The tradeoff is efficiency versus portability.
The ISAM library supports three standard binary integer data types: short int, unsigned short int and long int. Although the sizes can vary, most compilers treat short int as a 16-bit (two byte) signed integer, unsigned short int as a 16-bit (two byte) unsigned integer, and long int as a 32-bit (four byte) signed integer. Most processors store integer values in little endian (least significant byte first) format. Therefore, these binary integer data types are at least somewhat portable.
The ISAM library also supports the two standard binary floating point data types: float and double. Most compilers store floating point values using the standard IEEE format, with float stored as a four byte value and double stored as an eight byte value.
Finally, the ISAM library supports a generic binary data type. This is a fixed-length field that can hold any type of data. Although fixed-length, the size can be any specified number of bytes. For example, a single byte binary field could be used to efficiently store numeric values of limited range (0..255 or -128..127).
A database record typically contains many string fields. The length of the strings stored in a particular field usually varies from one record to the next. For example, a record might contain a string field that is used for general comments. One record might need a comment that is a couple of hundred characters in length, while the next record might not need any comment at all.
Some database programs (e.g., dBASE®) only support fixed-length records (i.e., each record is the same size). With fixed-length records, the string fields in each record must be large enough to accommodate the longest strings ever stored in those fields. For a given string field, most records will contain strings that are shorter than the longest string. However, since space is reserved for the longest string possible for that field, there is a lot of unused (or wasted) disk space. For example, if a string field allowed a maximum of 300 characters, then a zero length string stored in that field would waste 300 bytes of disk space.
The ISAM library solves the problem of wasted disk space by storing variable length records (i.e., each record may be a different size). With variable length records, a given string field may be a different length in each record. The string fields are only large enough to accommodate the actual strings that are stored in each record. An empty or zero length string requires only one byte of storage (i.e., the string terminating null character consumes one byte of disk space). | <urn:uuid:24869e18-4e5a-4d19-8168-52417a8bc7c0> | 3.875 | 2,035 | Documentation | Software Dev. | 48.738887 |
A collection is an object whose primary function is to store a number of like objects. An object called CarCollection may contain any number of Car objects. Collections can traditionally be accessed in the same manner as arrays, which means CarCollection[n] represents a particular Car object. This is true in C#, Java, and more - but not PHP, unfortunately. Since PHP has only recently begun to develop a package of built-in objects (the SPL, Standard PHP Library), its ability to support collections in the accepted behavioral sense is very limited.
The authors' goal is to simulate this kind of collection handling using a datatype that PHP does have: arrays. They walk you through the creation of a foundation class, one that simply allows you to get and fetch items from the array. Extending that makes it possible to create a customized method for sorting personal data (by name).
This image of irrigated agriculture in the deserts of central Saudi Arabia, 450 km west of Riyadh, was taken by the Landsat 7 satellite on February 5, 2000, while orbiting 700 km above the surface of the Earth at a speed of roughly 26,000 km/h. The Saudis manage to make the desert bloom by pumping fossil water from deep below the Earth’s surface. A well at the center of each of these fields feeds a center pivot irrigation system which spreads water in large circles up to one kilometer in diameter. The aquifers which supply these fields are ancient and finite. When the fossil water runs out, the desert sands will return. Like the irrigation projects of many arid regions, the Saudis’ desert jewels will soon fade.
I began building this image by selecting a set of infrared bands that would best tell the story of irrigated agriculture in Saudi Arabia. Landsat satellites see the Earth through eight spectral bands: the reds, greens, and blues of human experience, along with much longer wavelengths of infrared and thermal light. After using image processing software to assemble the initial false color composite, I selected a 100 km-wide subset and chose contrast and saturation levels to accentuate the most interesting features of the image. Because healthy vegetation reflects strongly in the near infrared, the Saudis' alfalfa and wheat fields are painted red against the desert background. To an astronaut these fields would appear green, and the intense brightness of the desert would likely wash out the complex mineralogical patterns picked up by Landsat.
Aside from providing beautiful images, Landsat is used in desert environments to build maps of irrigated agriculture, to prospect for water and minerals, and to manage natural resources. The workhorse of NASA’s Earth observing satellite constellation, Landsat-series satellites have been on orbit continuously since 1972, making Landsat the longest-running satellite collection program in the world. | <urn:uuid:653293a5-1494-4a0e-b7f5-6af5ea497a8a> | 3.703125 | 388 | Knowledge Article | Science & Tech. | 34.06476 |
The alligator snapping turtle (Macrochelys temminckii) is one of the largest freshwater turtles in the world. It is not closely related to, but is often associated with, the common snapping turtle. The alligator snapping turtle is characterized by a large, heavy head, and a long, thick shell with three dorsal ridges of large scales (osteoderms) giving it a primitive appearance reminiscent of some of the plated dinosaurs.
The largest freshwater turtle in North America, the alligator snapping turtle is found primarily in southeasten United States waters.
This alligator snapping turtle was in a reptile park in South Africa. Never seen one till that day. | <urn:uuid:e17a8717-ae35-4662-b8fd-bb92f209b84c> | 3.109375 | 139 | Knowledge Article | Science & Tech. | 32.24359 |
Carbon-14 is present in all living things. When a living thing dies, the amount of isotope at that time starts to decay.
The function for radioactive decay is R(t) = R0(1/2)^(t/h), where R(t) is the radioactivity/gram of carbon-14 at time t after death, R0 is the radioactivity/gram of carbon-14 at the time of death, and h is the half-life of carbon-14. The half-life of carbon-14 is 5370 years. After 3000 years, how much carbon-14 radioactivity/gram remains in a dead tree?
So far I've set up the equation: R(t) = R0(1/2)^(3000/5370).
But how are you supposed to solve for R(t) when you have 2 unknowns? | <urn:uuid:06955cd6-93a1-479f-80cd-4bbb34109123> | 2.734375 | 159 | Q&A Forum | Science & Tech. | 82.919423 |
Melted stone from Waidhofen/Thaya. TV report in "Österreich Bild", ORF, 28 Aug 2007.
reference: Elisabeth Widensky
Ball lightning is an atmospheric phenomenon, the physical nature of which is still controversial. The term refers to reports of a luminous object which varies in size from golf ball to several meters in diameter. It is sometimes associated with thunderstorms, but unlike lightning flashes arcing between two points, which last a small fraction of a second, ball lightning reportedly lasts many seconds. There have been some reports of production of a similar phenomenon in the laboratory, but some question whether it is the same phenomenon.
Reports
Despite over 10,000 reported sightings of the phenomenon, ball lightning has often been regarded as nothing more than a myth, fantasy, or hoax. Reports of the phenomenon were dismissed due to lack of physical evidence, and were often regarded the same way as UFO sightings.
A 1960 paper reported that 5% of the US population reported having witnessed ball lightning. Another study analyzed reports of 10,000 cases.
Ball lightning is photographed very rarely, and details of witness accounts can vary widely. Many of the properties observed in ball lightning accounts conflict with each other, and it is very possible that several different phenomena are being incorrectly grouped together. It is also possible that some photos are fakes.
The discharges reportedly appear during thunderstorms, sometimes issuing from a lightning flash, but large numbers of encounters reportedly occur during good weather with no storms within hundreds of miles.
A report from an area of central Africa with a very high incidence of lightning said that ball lightning used to appear from a certain hill just before the onset of the rainy season. (The report also notes a reluctance, typical of many people, to report such phenomena.)
Ball lightning reportedly tends to rotate or spin and can possess odd trajectories such as veering off at an angle or rocking from side to side like a leaf falling. Fireballs can also move with or against the wind. Other motions include a tendency to float (or hover) in the air and take on a ball-like appearance. Its shape has been described as spherical, ovoid, teardrop, or rod-like with one dimension being much larger than the others. Many are red to yellow in colour, sometimes transparent, and some contain radial filaments or sparks. Other colours, such as blue or white occur as well.
Sometimes the discharge is described as being attracted to a certain object, and sometimes as moving randomly. After several seconds the discharge reportedly leaves, disperses, is absorbed into something, or, rarely, vanishes in an explosion. Some accounts have the balls passing freely through wood or glass or metal, while other accounts report circular holes in the wood or glass or metal. Some report explosions when the balls contact electrical wiring or the vaporisation of water when the balls enter water. Some accounts say the balls are lethal, killing on contact, while other accounts say the opposite.
[A 19th-century depiction of ball lightning]

Tesla reportedly could consistently make ball lightning in his Colorado lab, with one account saying that he was able to temporarily contain the balls in wooden boxes.
Pilots in World War II described an unusual phenomenon for which ball lightning has been suggested as an explanation. The pilots saw small balls of light "escorting" bombers, flying alongside their wingtips. Pilots of the time referred to the phenomenon as "foo fighters," initially believing that the lights were from enemy planes. However there are other theories as to the identity of the foo fighters.
Submariners in WWII gave the most frequent and consistent accounts of small ball lightning in the confined submarine atmosphere. There are repeated accounts of inadvertent production of floating explosive balls when the battery banks were switched in or out, especially if mis-switched, or when the highly inductive electrical motors were mis-connected or disconnected. A later attempt to duplicate those balls with a surplus submarine battery resulted in several failures and an explosion.
Volcanoes, and the atmosphere and earth around them, have been known to produce ball lightning and other luminous effects, with or without electrical storms. These accounts vary greatly.
Other accounts place ball lightning as appearing over a kitchen stove or wandering down the aisle of an airliner. One report described ball lightning following and engulfing a car, causing the electrical supply to overload and fail. In 1773, two clergymen recalled seeing a ball of light drop down into their fireplace. Seconds later, it exploded.
Some researchers suggest that ball lightning has a more diverse range of properties than previously thought (e.g. Singer, 1971). Japanese investigators (e.g. Ofuruton et al.) report that Japanese ball lightning can occur in fine weather and be unconnected with lightning. The diameter is said to be typically 20–30 cm, but sometimes larger, up to a few meters. Ball lightning can reportedly split and recombine and can exhibit large mechanical energy, such as carving trenches and holes into the ground (e.g. Fitzgerald, 1978).
Santa Barbara Field Guides - Butterflies
Wingspread: 1.5-1.75 in.
Recognition: Small; female is off-white with black FW tips and adjacent spot on leading edge of FW; male has same markings as the female but more yellow-toned, and also has pattern of patchy brownish yellow stripes on HW.
Flight period: The adults are active from March to June.
Hostplants: The larva feeds on rock cress (Arabis), and other plants in the mustard family (Brassicaceae) such as desert candle, peppergrass, jewel flower and tansy-mustard.
Habitat: Found in rocky canyons, ridges, and hillsides.
Distribution: Found from southern Oregon south to Baja California, and also east into Idaho, Colorado, and New Mexico. In the spring it can be found in the Mojave Desert, in the Coachella Valley, and around Palm Springs. It is probably present only in the drier parts of Santa Barbara County, such as the Cuyama Valley.
Not far from milepost 200 on a stretch of the Pacific Coast Highway near the Oregon Dunes National Recreation Area is a humble water hole known in some biology circles as Slimy Log Pond. It was from this inauspicious pool that a water flea (Daphnia pulex) dubbed The Chosen One was plucked in 2000, and became the first crustacean to have its genome sequenced.
Analysis of The Chosen One's genome shows that this Lilliputian crustacean contains the most genes of any animal sequenced to date. It also has the potential to accelerate scientists' understanding of synthetic chemicals' effects on the environment—and human health.
The world's most common small freshwater feeder—gobbling up algae in lakes and ponds the world over—Daphnia are also a staple of fishes' diets, providing a crucial link in food webs. This minuscule animal—barely visible to the naked eye—has long been an invaluable aquatic indicator species and is used by agencies across the globe to take stock of the health of freshwater systems.
As such a well-studied species, Daphnia is poised to become a key model organism for delving deeper into the study of environmental genomics. Improved understanding of the interactions between genes and the environment could also help diminish the deleterious effects of chemicals on human health.
The sequence details, published online February 3 in Science, turned up the most shared genes with humans of any arthropod that has been sequenced to date. This genetic overlap means that the sentinel species could also end up being "a surrogate for humans to show the effects of the chemicals on shared pathways," says John Colbourne of Indiana University Bloomington's Center for Genomics and Bioinformatics and the lead author on the new paper. "The majority of the genome is a reflection of how the animal has evolved to cope with environmental stress."
Previous genetic snapshots of Daphnia have hinted at its overall makeup. But whole genome sequences provide "much better information about the function of genes, and allow us to be much more comprehensive in understanding the effects of toxicants," says Chris Vulpe of the Nutritional Science and Toxicology group at University of California, Berkeley, who was not involved in the new study. "It really adds to your ability to understand what's going on" in the environment.
Named for the Greek mythological nymph Daphne (who shuns the god Apollo's advances and in Ovid's telling was transformed into a tree), aquatic Daphnia, with its gently branching antennae, generally reproduce without males by passing along a diploid genome (a complete set of chromosomes) to offspring. This consistency creates clone lines, making them excellent candidates for laboratory study.
But like the water they often live in, these crustaceans "have a really muddy biology," Colbourne says. That murkiness, however, has turned out to be fertile territory for genetic research, he notes. "The genome is a lot more plastic and a lot more responsive to the environment than we had given it credit for."
Researchers working to sequence Daphnia—as part of the Daphnia Genomics Consortium—were expecting to find a genome about the size of the fruit fly's, with its 14,000 genes. So they were stunned to find that the D. pulex genome contains at least 30,907 genes—nearly 8,000 more than the human genome. Some 36 percent of these genes have not previously been identified in any other organism. And researchers found that rather than being evolutionary deadweight, most of these unfamiliar genetic signatures "tend to be the genes that are most responsive to Daphnia's ecology," Colbourne says.
Not all of the crustacean's genes are active at any given time. Rather, a large portion of them are switched on or off with changes in the flea's environment. They are "more or less environment-specific," Colbourne says. Although they are "coding for the same proteins, they're being expressed differently depending on what environmental stresses you expose the animal to." And finding the genes that allow the animal to tolerate outside stressors—whether they are chemicals or UV radiation—could help researchers search for parallel pathways in humans.
One of the reasons the Daphnia genome contains so many genes, the researchers found, is that gene duplication in this species occurs at a much higher rate than in other familiar species—about 30 percent higher than in humans and about three times the rate in fruit flies.
"There's obviously a selective advantage to having so many genes," Colbourne says. "We were able to discover for the very first time that newly duplicated genes can acquire new functions very, very rapidly." In other species duplicate genes tend to become harmful or irrelevant and thus get weeded out quickly. Daphnia genes stick around longer, suggesting that they are often put to good use—and quickly—responding to environmental factors. | <urn:uuid:72425c06-f01e-4c68-b754-3279d27ecbe1> | 3.6875 | 1,021 | Knowledge Article | Science & Tech. | 33.606727 |
Phascinating Physics

This page contains answers to questions that have to do with physics. They explore time, the speed of light, the universal forces and much more! Click on a topic to see our "Quickie" questions archive or scroll down for more in-depth answers.
Quickie Phascinating Physics
Particle Physics (15 questions)
In-depth questions and answers

1) Is it possible to travel through time?
2) Is there an object in the universe that can travel faster than the speed of light?
3) What are the universal forces?
4) Why are atoms "smashed" in an accelerator? What is learned by that process?
5) How can a neutrino be detected?
6) Can you blow bubbles in space?
7) How do scientists know what an atom looks like, to make a model of one?
8) If 2 cars are traveling at the speed of light and the one in the back turns on its headlights, would the car in front be able to see them?
9) Is it true that positrons run down the time and electrons run up the time?
10) Are there any places that you can become weightless other than space?
11) If the Earth is round like a ball, why aren't we standing at odd angles around it?
12) Why is it said that a person traveling in spaceships at near speed of light comes back younger than his counterpart on earth?
13) What is Schrodinger's cat paradox?
14) What is the gravity assist, swingby approach used with spacecraft?
15) What is twice as cold as zero?
16) What is gravity and how does it work?
17) How do airplanes fly?
This colorful supernova remnant is called W49B, and inside it astronomers think they may have found the Milky Way’s youngest black hole. It’s only 1,000 years old, as seen from Earth, and 26,000 lightyears away.
From a vantage point on NASA’s Chandra X-ray Observatory, astronomers observed and measured the remnant and determined it to be very unique. The supernova explosion of this massive star was not symmetrical like most, and instead of collapsing to form a telltale neutron star at its center, this supernova seems to have a black hole.
On Friday, February 15, astronomers will get an unusually good look at a near-Earth asteroid called 2012 DA14. It will be the first time a known object of this size will come this close to Earth—a mere 8 percent of the distance between us and our moon.
The asteroid, which measures 150 feet across, was first spotted by astronomers when it zoomed by Earth this time last year. Its fly-bys occur about once a year, since its orbit around the sun is very similar to our own.
Apophis is a big name in the world of asteroids, and on Wednesday the famed space object will be making an appearance for astronomers across the globe.
A flurry of apocalyptic hoopla was generated in 2004 when astronomers found an asteroid that looked like it may be headed for Earth. Apophis measures almost 1,000 feet across, and if it were to hit Earth, the fateful collision would occur on Friday the 13th, in April of 2029. So astronomers set out to take more pictures of the asteroid's orbit and better estimate the chances of a collision. As a clearer picture of its orbit emerged, the odds went from 1 in 300, to 1 in 45, to zero. But that doesn't mean the threat is gone.
A composite image of a molecular cloud used as a model to determine how stars are formed.
Hot off the astronomical press: the star census is complete. An international team of astronomers has conducted the first, comprehensive survey of stellar formation in the universe. The undertaking was ten times bigger than any star formation study before it, and confirmed that the rate of star formation has slowed significantly over time. But the researchers upped the stakes with this one by finding that the universe is now almost out of star-making materials.
Although we can detect the planets orbiting distant stars through indirect methods, an optical image would provide much more information about how planets form and evolve. But those stars are so much brighter than the planets around them that the starlight simply drowns out the smaller orbs, like a flashlight beam in bright daylight. But now, researchers have developed an imaging system called Project 1640—a collaboration between the American Museum of Natural History, the California Institute of Technology, and NASA’s Jet Propulsion Laboratory—that can create a dark space around a planet to snap its photo.
Update, July 16: The Project 1640 researchers provided some more images showing how the system works, so we assembled them into the gallery below.
Images courtesy of Project 1640 | <urn:uuid:de9ed0f3-8cee-4566-b38c-7e4bcbe40f91> | 3.828125 | 642 | Content Listing | Science & Tech. | 45.382146 |
Balancing Chemical Equations
We have now determined symbols and formulas for all the ingredients of chemical equations, but one important step remains. We must be sure that the law of conservation of mass is obeyed. The same number of atoms (or moles of atoms) of a given type must appear on each side of the equation. This reflects our belief in Dalton’s third postulate that atoms are neither created, destroyed, nor changed from one kind to another during a chemical process. When the law of conservation of mass is obeyed, the equation is said to be balanced.
As a simple example of how to balance an equation, let us take the reaction which occurs when a large excess of mercury combines with bromine. In this case the product is a white solid which does not melt but instead changes to a gas when heated above 345°C. It is insoluble in water. From these properties it can be identified as mercurous bromide, Hg2Br2. The equation for the reaction would look like this:
Hg + Br2 → Hg2Br2 (1)
but it is not balanced because there are 2 mercury atoms (in Hg2Br2) on the right side of the equation and only 1 on the left.
An incorrect way of obtaining a balanced equation is to change this to
Hg + Br2 → HgBr2 (2)
This equation is wrong because we had already determined from the properties of the product that the product was Hg2Br2. Equation (2) is balanced, but it refers to a different reaction which produces a different product. The equation might also be incorrectly written as
Hg2 + Br2 → Hg2Br2 (3)
The formula Hg2 suggests that molecules containing 2 mercury atoms each were involved, but our previous microscopic experience with this element indicates that such molecules do not occur.
In balancing an equation you must remember that the subscripts in the formulas have been determined experimentally. Changing them indicates a change in the nature of the reactants or products. It is permissible, however, to change the amounts of reactants or products involved. For example, the equation in question is correctly balanced as follows:
2Hg + Br2 → Hg2Br2 (4)
The 2 written before the symbol Hg is called a coefficient. It indicates that on the microscopic level 2 Hg atoms are required to react with the Br2 molecule. On a macroscopic scale the coefficient 2 means that 2 mol Hg atoms are required to react with 1 mol Br2 molecules. Twice as much Hg is required to make Hg2Br2 as was needed for HgBr2.
To summarize: Once the formulas (subscripts) have been determined, an equation is balanced by adjusting coefficients. Nothing else may be changed.
EXAMPLE 1 Balance the equation
Hg2Br2 + Cl2 → HgCl2 + Br2
Solution Although Br and Cl are balanced, Hg is not. A coefficient of 2 with HgCl2 is needed:
Hg2Br2 + Cl2 → 2HgCl2 + Br2
Now Cl is not balanced. We need 2 Cl2 molecules on the left:
Hg2Br2 + 2Cl2 → 2HgCl2 + Br2
We now have 2Hg atoms, 2Br atoms, and 4Cl atoms on each side, and so balancing is complete.
Most chemists use several techniques for balancing equations.1 For example, it helps to know which element you should balance first. When each chemical symbol appears in a single formula on each side of the equation (as in Example 1), you can start wherever you want and the process will work. When a symbol appears in three or more formulas, however, that particular element will be more difficult to balance and should usually be left until last.
1Laurence E. Strong, Balancing Chemical Equations, Chemistry, vol. 47, no. 1, pp. 13-16, January 1974, discusses some techniques in more detail.
EXAMPLE 2 When butane (C4H10) is burned in oxygen gas (O2), the only products are carbon dioxide(CO2) and water. Write a balanced equation to describe this reaction.
Solution First write an unbalanced equation showing the correct formulas of all the reactants and products:
C4H10 + O2 → CO2 + H2O
We note that O atoms appear in three formulas, one on the left and two on the right. Therefore we balance C and H first. The formula C4H10 determines how many C and H atoms must remain after the reaction, and so we write coefficients of 4 for CO2 and 5 for H2O:
C4H10 + O2 → 4CO2 + 5H2O
We now have a total of 13 O atoms on the right-hand side, and the equation can be balanced by using a coefficient of 13/2 in front of O2:
C4H10 + 13/2 O2 → 4CO2 + 5H2O
Usually it is preferable to remove fractional coefficients since they might be interpreted to mean a fraction of a molecule. (One-half of an O2 molecule would be an O atom, which has quite different chemical reactivity.) Therefore we multiply all coefficients on both sides of the equation by two to obtain the final result:
2C4H10 + 13O2 → 8CO2 + 10H2O
(Sometimes, when we are interested in moles rather than individual molecules, it may be useful to omit this last step. Obviously the idea of half a mole of O2 molecules, that is, 3.011 × 1023 molecules, is much more tenable than the idea of half a molecule.)
Another useful technique is illustrated in Example 2. When an element (such as O2) appears by itself, it is usually best to choose its coefficient last. Furthermore, groups such as NO3, SO4, etc., often remain unchanged in a reaction and can be treated as if they consisted of a single atom. When such a group of atoms is enclosed in parentheses followed by a subscript, the subscript applies to all of them. That is, the formula Ca(NO3)2 involves 1 Ca, 2 N, and 2 × 3 = 6 O atoms.
EXAMPLE 3 Balance the equation
NaMnO4 + H2O2 + H2SO4 → MnSO4 + Na2SO4 + O2 + H2O
Solution We note that oxygen atoms are found in every one of the seven formulas in the equation, making it especially hard to balance. However, Na appears only in two formulas:
2NaMnO4 + H2O2 + H2SO4 → MnSO4 + Na2SO4 + O2 + H2O
as does manganese, Mn:
2NaMnO4 + H2O2 + H2SO4 → 2MnSO4 + Na2SO4 + O2 + H2O
We now note that the element S always appears with 4 O atoms, and so we balance the SO4 groups:
2NaMnO4 + H2O2 + 3H2SO4 → 2MnSO4 + Na2SO4 + O2 + H2O
Now we are in a position to balance hydrogen:
2NaMnO4 + H2O2 + 3H2SO4 → 2MnSO4 + Na2SO4 + O2 + 4H2O
and finally oxygen. (We are aided by the fact that it appears as the element.)
2NaMnO4 + H2O2 + 3H2SO4 → 2MnSO4 + Na2SO4 + 3O2 + 4H2O
Notice that in this example we followed the rule of balancing first those elements whose symbols appeared in the smallest number of formulas: Na and Mn in two each, S (or SO4) and H in three each, and finally O. Even using this rule, however, equations in which one or more elements appear in four or more formulas are difficult to balance without some additional techniques which we will develop when we investigate reactions in aqueous solutions.
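Atom bookkeeping like this is easy to mechanize. Below is a short Python sketch (an illustration added here, not part of the original text) that counts the atoms in a formula, including parenthesized groups such as Ca(NO3)2, and checks whether a proposed set of coefficients balances an equation:

```python
import re
from collections import Counter

# Tokens: an element symbol with an optional count, or parentheses
# (a closing parenthesis may carry a group multiplier, as in '(NO3)2').
TOKEN = re.compile(r'([A-Z][a-z]?)(\d*)|(\()|(\))(\d*)')

def count_atoms(formula):
    """Return a Counter of atoms in a formula such as 'Ca(NO3)2'."""
    stack = [Counter()]
    for element, num, open_p, close_p, group_num in TOKEN.findall(formula):
        if element:
            stack[-1][element] += int(num or 1)
        elif open_p:
            stack.append(Counter())          # start a new group
        elif close_p:
            group = stack.pop()
            mult = int(group_num or 1)       # multiplier after ')'
            for el, n in group.items():
                stack[-1][el] += n * mult
    return stack[0]

def is_balanced(reactants, products):
    """Each side is a list of (coefficient, formula) pairs."""
    def total(side):
        atoms = Counter()
        for coeff, formula in side:
            for el, n in count_atoms(formula).items():
                atoms[el] += coeff * n
        return atoms
    return total(reactants) == total(products)

# Equation (4): 2Hg + Br2 -> Hg2Br2
print(is_balanced([(2, 'Hg'), (1, 'Br2')], [(1, 'Hg2Br2')]))      # True
# Example 2: 2C4H10 + 13O2 -> 8CO2 + 10H2O
print(is_balanced([(2, 'C4H10'), (13, 'O2')],
                  [(8, 'CO2'), (10, 'H2O')]))                     # True
```

Running the same check on the final equation of Example 3 also returns True: each side carries 2 Na, 2 Mn, 3 S, 8 H, and 22 O atoms.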
The balancing of chemical equations has an important environmental message for us. If atoms are conserved in a chemical reaction, then we cannot get rid of them. In other words we cannot throw anything away. There are only two things we can do with atoms: move them from place to place or from compound to compound. Thus when we "dispose" of something by burning it, dumping it, or washing it down the sink, we have not really gotten rid of it at all. The atoms which constituted it are still around someplace, and it is just as well to know where they are and what kind of molecule they are in. Discarded atoms in places where we do not want them and in undesirable molecules are known as pollution.
The Who, What, When, Where and Why of Chemistry
Chemistry is not a world unto itself. It is woven firmly into the fabric of the rest of the world, and various fields, from literature to archeology, thread their way through the chemist's text.
Stephen Davey, associate editor for Nature Chemistry, blogged at the Sceptical Chymist about visiting the National Archives and seeing the Declaration of Independence, the Constitution and the Bill of Rights. He was surprised to find that the documents were stored under helium as opposed to argon - and wondered why. That started me wondering as well, particularly since the inert gases are not interchangeable in all circumstances (you can use helium to dilute the air mixture for diving, but not argon, for example.)
Helium is both more expensive than argon (not an issue in this context; the cost of the gas inside the cases has got to be the least expensive piece!) and more difficult to work with. It can leak out through materials that seem air- and watertight. That's why those latex balloons that looked so cheery on the day of the party are withered and droopy by the morning. They're waterproof, but not helium-proof.
In the 1950s the US National Bureau of Standards (now NIST) was charged with deciding on the best way to preserve the Charters of Freedom (the three founding documents of the United States of America). (You can read the full report here.) Helium was chosen, despite its propensity to leak through many materials, partly because a high-purity local source was readily available but mostly because of its thermal conductivity.
The designers of the encasements wanted a way to measure the pressure of the helium within the cases without having to open them, or remove a sample. Since the thermal conductivity of helium is very different than that of air, changes in the thermal conductivity (how heat moves between the panes) could be used to detect leaks. Argon's thermal conductivity is similar to air, so if argon leaked out and air in, the change would be hard to detect.
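As a rough illustration of why this trick favors helium, compare thermal conductivities. The numbers below are approximate room-temperature textbook values that I am supplying for the sketch; they do not come from the blog post or the NBS report:

```python
# Approximate thermal conductivities near room temperature, in W/(m*K).
# Rough textbook values -- an assumption of this sketch.
k = {'helium': 0.15, 'argon': 0.018, 'air': 0.026}

for gas in ('helium', 'argon'):
    ratio = k[gas] / k['air']
    print(f"{gas}: {ratio:.1f}x the conductivity of air")
```

Helium conducts several times as much heat as air, so air leaking into a helium-filled case changes the heat flow between the panes dramatically; argon sits within roughly 30 percent of air, so the same leak would barely register in such a measurement.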
New casements were designed about ten years ago, with argon as the gas of choice this time. Sapphire ports are embedded to allow the atmosphere inside the cases to be monitored spectroscopically - by passing a beam of light through the port. Since the new methods of monitoring don't require the inert atmosphere to have a different thermal conductivity, it allows argon - which can't wiggle its way out the way helium can - to be used.
The photo is from The Science News-Letter, vol. 62 (Dec. 6 1952), p. 359. | <urn:uuid:08406da4-416f-479f-8f77-4b0e26ebe4d2> | 3.25 | 549 | Personal Blog | Science & Tech. | 49.786346 |
The data are 3He exposure ages from lateral moraine bands on Mount Waesche, a volcanic nunatak in Marie Byrd Land, West Antarctica. The proximal part of the moraine, up to 45 meters above the present ice level, was deposited approximately 10,000 years ago, well after the glacial maximum in the Ross Embayment. The upper, distal part of the moraine may record multiple earlier ice advances. The data were all generated by crushing and melting mineral separates (mostly olivine) in vacuo, with measurements made on a noble gas mass spectrometer at Woods Hole Oceanographic Institution. Full details can be found in Ackert et al. (Science, 1999, vol. 286, p. 276-280).
From MicrobeWiki, the student-edited microbiology resource
Higher order taxa
Domain (Bacteria); Phylum (Actinobacteria); Class (Actinobacteria); Order (Actinomycetales); Suborder (Catenulisporineae); Family (Catenulisporaceae); Genus (Catenulispora)
Description and significance
The Actinobacteria phylum is known to include freshwater life, marine life and some common soil life. Members of this phylum are important in the decomposition of organic material and the carbon cycle, which puts nutrients back into the environment. Actinobacteria are also of high pharmacological interest because they can produce secondary metabolites (3). C. acidiphila is known only from soil in Gerenzano, Italy. C. acidiphila forms aerial and vegetative mycelia (2). Since it is part of the class Actinobacteria and the order Actinomycetales, it may produce novel metabolites or be an antibiotic producer; however, no production of novel metabolites has yet been reported (1).
The complete genome of C. acidiphila was sequenced and published in 2009; this was the first complete genome sequenced from the Actinobacterial family Catenulisporaceae. The genome is 10,467,782 bp in length and comprises one circular chromosome. The G-C content of the DNA is 69.8%. Of the 9122 predicted genes, 99.28% were protein-coding genes and just 0.76% were classified as RNA genes (2). For more information about the known functions of this genome, see tables 3 and 4 in the following article: “Complete genome sequence of Catenulispora acidiphila type strain” (2).
Cell and colony structure
The Catenulispora genus consists of Gram-positive, non-motile and non-acid-fast organisms that form branching hyphae. The vegetative mycelium is non-fragmenting, and the aerial hyphae septate into chains of cylindrical arthrospores (resting sporelike cells produced by some bacteria). In C. acidiphila the spores have an average diameter of about 0.5 µm and range in length from 0.4-1 µm (1).
C. acidiphila is an aerobic species, but it is also capable of non-pigmented and reduced growth under anaerobic and microaerophilic conditions. It has the ability to hydrolyze starch and casein. It can use the following carbon sources for energy: glucose, fructose, glycerol, mannitol, xylose and arabinose. C. acidiphila cannot reduce nitrates. Hydrogen sulfide (H2S) is also produced by this species (1, 2). The strain of C. acidiphila was also resistant to lysozyme, which had not been reported for the Catenulispora genus (2). The mechanism by which it produces hydrogen sulfide is not known at this time.
C. acidiphila is an acidophilic species that grows well in the pH range of 4.3-6.8, but optimally at a pH of 6.0. It grows optimally at temperatures between 22-28 °C; however, it can grow between 11-37 °C. As of right now, C. acidiphila has only been found in Gerenzano, Italy (1).
As of right now, C. acidiphila is not known to cause any infections or diseases. However, some species of Actinobacteria are known to form a wide variety of secondary metabolites. Since secondary metabolites are a source of potent antibiotics, the Streptomyces species have been the main organisms targeted by the pharmaceutical industry (3). Since C. acidiphila is part of the Actinobacteria phylum, it could possibly be targeted by the pharmaceutical industry as well (2).
Busti, E, et al. "Catenulispora Acidiphila Gen. Nov., Sp. Nov., a Novel, Mycelium-forming Actinomycete, and Proposal of Catenulisporaceae Fam. Nov." International Journal of Systematic and Evolutionary Microbiology (2006) 56: 1741-746. DOI: 10.1099/ijs.0.63858-0
Copeland, Alex, Alla Lapidus, Tijana Glavina, et al. "Complete Genome Sequence of Catenulispora Acidiphila Type Strain (ID 139908T)." Standards in Genomic Sciences (2009) 1: 119-125 DOI:10.4056/sigs.17259
Ventura, Marco, et al. “Genomics of Actinobacteria: Tracing the Evolutionary History of an Ancient Phylum.” Microbiol Mol Biol Rev. 2007 September; 71(3): 495–548. DOI: 10.1128/MMBR.00005-07
Edited by Dustin J Ambrose, student of Dr. Lisa R. Moore, University of Southern Maine | <urn:uuid:a1e9ee0a-e412-4395-a340-8fbe37fd272d> | 3.234375 | 1,133 | Knowledge Article | Science & Tech. | 40.955064 |
An active object framework is a callback-based form of multitasking for computer systems. Specifically, it is a form of cooperative multitasking and is an important feature of the Symbian operating system.
Within the framework, active objects may make requests of asynchronous services (e.g. sending an SMS message). When an asynchronous request is made, control is returned to the calling object immediately (i.e. without waiting for the call to complete). The caller may choose to do other things before it returns control back to the operating system, which typically schedules other tasks or puts the machine to sleep. When it makes the request, the calling object includes a reference to itself.
When the asynchronous task completes, the operating system identifies the thread containing the requesting active object, and wakes it up. An "active scheduler" in the thread identifies the object that made the request, and passes control back to that object.
The implementation of active objects in Symbian is based around each thread having a "request semaphore". This is incremented when a thread makes an asynchronous request, and decremented when the request is completed. When there are no outstanding requests, the thread is put to sleep.
In practice there may be many active objects in a thread, each doing its own task. They can interact by requesting things of each other, and of active objects in other threads. They may even request things of themselves.
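The scheme described above can be sketched in miniature. The following is an illustrative, Symbian-agnostic model; the class and method names (ActiveObject, ActiveScheduler, runL) echo the concepts but are not real Symbian APIs:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal sketch of the active-object pattern described above.
// Names mirror the concepts, not any actual Symbian interface.
abstract class ActiveObject {
    boolean requestPending = false;
    abstract void runL();            // callback invoked when the request completes
}

class ActiveScheduler {
    private final Queue<ActiveObject> completed = new ArrayDeque<>();
    private int outstanding = 0;     // plays the role of the "request semaphore"

    void issueRequest(ActiveObject o) { o.requestPending = true; outstanding++; }

    // In a real system the OS signals completion; here we complete explicitly.
    void complete(ActiveObject o) { completed.add(o); }

    void run() {
        while (outstanding > 0) {
            ActiveObject o = completed.poll();
            if (o == null) break;    // a real scheduler would sleep here
            outstanding--;
            o.requestPending = false;
            o.runL();                // pass control back to the requesting object
        }
    }
}

public class ActiveObjectDemo {
    static final StringBuilder log = new StringBuilder();

    public static void main(String[] args) {
        ActiveScheduler sched = new ActiveScheduler();
        ActiveObject sms = new ActiveObject() {
            void runL() { log.append("SMS sent;"); }
        };
        sched.issueRequest(sms);          // control returns immediately
        log.append("doing other work;");  // caller is free to do other things
        sched.complete(sms);              // the "OS" reports completion
        sched.run();                      // scheduler dispatches the callback
        System.out.println(log);          // doing other work;SMS sent;
    }
}
```

Note how the caller's work is logged before the callback runs, even though the request was issued first; that inversion is the essence of the pattern.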
| <urn:uuid:935b9938-5c1d-461c-a26b-f084d8e5cb67> | 3.109375 | 302 | Knowledge Article | Software Dev. | 47.31 |
Constructor Overloading In this article we will combine two techniques to allow a number of constructors to be defined for a single class. This gives the class the flexibility to construct objects in a variety of ways according to the manner in which they are to be used. Creating Overloaded Constructors Creating an overloaded constructor is as simple as adding overloaded methods. The additional constructors are simply added to the code. Each constructor must have a unique signature, i.e. the parameter types must differ from all other constructors. To demonstrate the use of overloaded constructors we will create a new … | <urn:uuid:bc66e6f3-5122-4632-bce7-adb004f9194d> | 2.828125 | 122 | Truncated | Software Dev. | 35.301429 |
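The article is cut off above, but the idea it describes can be shown with a short sketch (the Rectangle class here is hypothetical, not the article's own example). Each constructor has a distinct parameter list, and the simpler ones delegate to the full one via this(...):

```java
// Illustrative example of constructor overloading.
class Rectangle {
    final double width, height;

    Rectangle() { this(1.0, 1.0); }              // no-argument: default unit square
    Rectangle(double side) { this(side, side); } // one parameter: a square
    Rectangle(double width, double height) {     // fully specified rectangle
        this.width = width;
        this.height = height;
    }

    double area() { return width * height; }
}

public class OverloadDemo {
    public static void main(String[] args) {
        System.out.println(new Rectangle().area());         // 1.0
        System.out.println(new Rectangle(3.0).area());      // 9.0
        System.out.println(new Rectangle(2.0, 5.0).area()); // 10.0
    }
}
```

The delegation keeps the initialization logic in one place, so adding another constructor never duplicates it.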
April 16, 2010
The very first exercise at Programming Praxis evaluated expressions given in reverse-polish notation. In today’s exercise, we write a function to evaluate expressions given as strings in the normal infix notation; for instance, given the string “12 * (34 + 56)”, the function returns the number 1080. We do this by writing a parser for the following grammar:
expr → expr + term | expr – term | term
term → term * fact | term / fact | fact
fact → ( expr ) | number
The grammar specifies the syntax of the language; it is up to the program to provide the semantics. Note that the grammar handles the precedences and associativities of the operators. For example, since expressions are made up from terms, multiplication and division are processed before addition and subtraction.
The normal algorithm used to parse expressions is called recursive-descent parsing because a series of mutually-recursive functions, one per grammar element, call each other, starting from the top-level grammar element (in our case, expr) and descending until the beginning of the string being parsed can be identified, whereupon the recursion goes back up the chain of mutually-recursive functions until the next part of the string can be parsed, and so on, up and down the chain until the end of the string is reached, all the time accumulating the value of the expression.
Your task is to write a function that evaluates expressions according to the grammar given above. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
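One way to do it (a sketch in Java, not the site's suggested solution) is to transcribe the grammar into mutually recursive methods, replacing the left recursion with iteration so that subtraction and division stay left-associative:

```java
// Recursive-descent evaluator for the infix grammar above (sketch).
public class InfixEval {
    private final String s;
    private int pos = 0;

    InfixEval(String s) { this.s = s.replaceAll("\\s+", ""); } // drop whitespace

    static double eval(String expr) { return new InfixEval(expr).expr(); }

    private double expr() {                    // expr -> term (('+'|'-') term)*
        double v = term();
        while (pos < s.length() && (s.charAt(pos) == '+' || s.charAt(pos) == '-')) {
            char op = s.charAt(pos++);
            v = (op == '+') ? v + term() : v - term();
        }
        return v;
    }

    private double term() {                    // term -> fact (('*'|'/') fact)*
        double v = fact();
        while (pos < s.length() && (s.charAt(pos) == '*' || s.charAt(pos) == '/')) {
            char op = s.charAt(pos++);
            v = (op == '*') ? v * fact() : v / fact();
        }
        return v;
    }

    private double fact() {                    // fact -> '(' expr ')' | number
        if (s.charAt(pos) == '(') {
            pos++;                             // consume '('
            double v = expr();
            pos++;                             // consume ')'
            return v;
        }
        int start = pos;                       // number: digits only, for brevity
        while (pos < s.length() && Character.isDigit(s.charAt(pos))) pos++;
        return Double.parseDouble(s.substring(start, pos));
    }

    public static void main(String[] args) {
        System.out.println(eval("12 * (34 + 56)")); // 1080.0
    }
}
```

Each method corresponds to one grammar element, and the call chain expr → term → fact encodes the operator precedences directly, just as the exercise describes.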
| <urn:uuid:e2e4cd22-6e19-4742-a81f-7c644960cfb5> | 3.65625 | 346 | Tutorial | Software Dev. | 30.871613 |
Our analysis points to a primarily natural cause for the Russian heat wave. This event appears to be mainly due to internal atmospheric dynamical processes that produced and maintained an intense and long-lived blocking event. Results from prior studies suggest that it is likely that the intensity of the heat wave was further increased by regional land surface feedbacks. The absence of long-term trends in regional mean temperatures and variability together with the model results indicate that it is very unlikely that warming attributable to increasing greenhouse gas concentrations contributed substantially to the magnitude of this heat wave.
Such attribution, they warn, may only be a matter of time:
To assess this possibility for the region of western Russia, we have used the same IPCC model simulations to estimate the probability of exceeding various July temperature thresholds over the period 1880-2100 (Figure 4). The results suggest that we may be on the cusp of a period in which the probability of such events increases rapidly, due primarily to the influence of projected increases in greenhouse gas concentrations.
It would be interesting to assess how much time such attribution might take using a method similar to what we used for detection of trends in hurricane damage under model projected futures.
For an earlier discussion see this, and thanks KK. | <urn:uuid:b1fd5ffa-19fb-45eb-9627-13c5df417b35> | 3.09375 | 246 | Personal Blog | Science & Tech. | 22.228204 |
In mathematics, you don’t understand things. You just get used to them. -Johann von Neumann
Sometimes, I have to deal with series: lots of numbers all added together. Some series clearly approach a limit, like the following:
1 + 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + …
I can visualize this in terms of pies. (What do you want? I’m hungry!) One = one whole pie. So that first number starts me out with one whole pie.
When I add the second number in my series, I’m clearly adding half a pie to that, for a total of a pie and a half.
When I add the third term, I’ve now got one-and-three-quarters pies, and by adding the fourth term, I’ve got one-and-seven-eighths pies.
If I add more and more terms in this series, I’m going to get closer and closer to two whole pies, but I’ll never quite reach it, always being just a tiny sliver away.
And that’s okay! It means that the sum of this series is two. The series has a limit, or — in math-speak — the series converges.
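That creeping-up-on-2 behavior is easy to check numerically. A quick sketch (mine, not the original post's):

```java
// Partial sums of 1 + 1/2 + 1/4 + 1/8 + ... climb toward 2 without reaching it.
public class GeometricDemo {
    static double partialSum(int terms) {
        double sum = 0, term = 1;
        for (int i = 0; i < terms; i++) {
            sum += term;   // add the current slice of pie
            term /= 2;     // the next slice is half as big
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(partialSum(2));  // 1.5   (a pie and a half)
        System.out.println(partialSum(4));  // 1.875 (one-and-seven-eighths pies)
        System.out.println(partialSum(50)); // just a hair under 2
    }
}
```

No matter how many terms you add, the sum stays strictly below 2; that bound, together with the ever-shrinking gap, is exactly what convergence means here.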
But not all series converge! Take a look at this one:
1 + 1 + 1 + 1 + 1 + 1 + …
The first term starts me out with a pie. Fine, so did the last series. But when I add the second term in, I realize I now have two pies!
In fact, if I add the third one in I have three pies, if I add the fourth in I get four pies, and so on! The more terms I add, the more pies I get.
This last part — that the series doesn’t approach a limit — is really important. It means that I can keep adding more terms, but when I do, the value of the series keeps changing; it doesn’t approach one single value. Therefore, this series does not converge, and the word we give to this is that the series diverges.
Or, to quote Lindsay Lohan (correctly) from Mean Girls, The Limit Does Not Exist!
But what if I were a tease about it? What if I gave you this series:
1 – 1 + 1 – 1 + 1 – 1 + 1 – 1 + …
After one term, there is a pie, but after two terms, there aren’t any pies; the second term takes the first pie away!
And it looks like the third term adds a pie, and the fourth one takes it away, and this continues. You’re adding pies (one-by-one) and taking them away (one-by-one) as you keep on adding terms.
So you might ask yourself, does the series converge, and if it does, what is its limit?
A mathematician is likely to tell you that the limit does not exist. Why not? Because for half of the terms, the sum is one full pie. But for the other half of the terms, the sum is zero pies. I could group these terms together to show you. For example, if I put my parentheses like so:
( 1 – 1 ) + ( 1 – 1 ) + ( 1 – 1 ) + ( 1 – 1 ) + ( 1 – 1 ) + …
it’s clear that this series sums up to zero. But what if I’m clever, and I organize my terms like this:
1 + ( -1 + 1 ) + ( -1 + 1 ) + ( -1 + 1) + ( -1 + 1) + …
I can see that my series sums up to one whole pie! So who’s right: is it zero pies, one pie, or does the solution simply not exist? To get it right, you have to ask yourself what happens on average.
Well, for half of the terms you have a pie, and for the other half of the terms you have no pies. On average? You have half a pie. Even though at no point do you actually have half-a-pie, this series sums to one-half. (For more rigor, you might want to read this.)
1 – 1 + 1 – 1 + 1 – 1 + 1 – 1 + … = 1/2.
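The "what happens on average" idea has a name: the Cesàro sum, the limit of the average of the partial sums. A quick numerical check (my sketch, not from the post):

```java
// Average the first n partial sums of Grandi's series 1 - 1 + 1 - 1 + ...
// The partial sums bounce between 1 and 0, but their average settles at 1/2.
public class CesaroDemo {
    static double cesaroMean(int n) {
        double partial = 0, total = 0;
        for (int k = 0; k < n; k++) {
            partial += (k % 2 == 0) ? 1 : -1;  // terms alternate +1, -1
            total += partial;                  // accumulate the partial sums
        }
        return total / n;                      // average of the first n partial sums
    }

    public static void main(String[] args) {
        System.out.println(cesaroMean(10));      // 0.5
        System.out.println(cesaroMean(1000001)); // ~0.5000005, closing in on 1/2
    }
}
```

Even though no partial sum ever equals one half, the running average of the partial sums converges to it, which is the precise sense in which the series "sums" to 1/2.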
This is a hard one to wrap your head around, and many mathematicians and physicists that I knew in graduate school were unable to do it. Want to try something more ambitious? Have a go at this one:
1 – 2 + 3 – 4 + 5 – 6 + 7 – 8 + …
(Answer here.) And if this isn’t up your alley, don’t worry. We’ll go back to the astrophysics on Wednesday. | <urn:uuid:9cec421c-b3ab-4786-81cd-60e73a452436> | 2.921875 | 1,036 | Personal Blog | Science & Tech. | 88.819815 |
What is Renewable Energy?
Our country currently relies heavily on fossil fuels, such as coal, oil, and natural gas, for its energy. These fossil fuels are nonrenewable; that is, they will eventually run out, becoming too expensive or too environmentally damaging to develop. In contrast, renewable energy resources, like wind and solar energy, are constantly replenished and will never run out.
The majority of renewable energy technologies are directly or indirectly powered by the sun. Our planet is in equilibrium such that heat radiation into space is equal to incoming solar radiation; the resulting level of energy can roughly be described as the Earth's climate. The oceans absorb a major fraction of the incoming radiation. Most radiation is absorbed at low latitudes around the equator, but this energy is dissipated around the globe in the form of winds and ocean currents. Wave motion may play a role in the process of transferring mechanical energy between the atmosphere and the ocean through wind stress. Solar energy is also responsible for the distribution of precipitation, which is tapped by hydroelectric projects, and for the growth of plants used to create biofuels.
There are vast amounts of renewable energy available for our planet to harvest. An example of the scale of renewable sources versus our global demand for energy is explored in the chart below:
The volume of the cubes in the chart above represents the amount of available geothermal, wind and solar energy in terawatts (TW); one terawatt is one trillion (10^12) watts. The red cube on the right represents the proportional total global energy consumption.
Types of Renewable Energy
There are many different types of renewable energy. The most popular are solar power, wind power, biofuels and biomass, water power, hydrogen, geothermal, wave power, and tidal power. For more information on other types of renewable energy, please visit the following:
Biofuels and Biomass
The organic matter that makes up plants is known as biomass. Biomass can be used to produce electricity, transportation fuels, or chemicals. The use of biomass for any of these purposes is called biomass energy.
Geothermal
Geothermal energy taps the Earth's internal heat for a variety of uses, including electric power production, and the heating and cooling of buildings.
Hydrogen
Hydrogen is always combined with other elements, like with oxygen to make water. Once separated from another element, hydrogen can be burned as a fuel or converted into electricity.
Water Power or Hydropower
Flowing water creates energy that can be captured and turned into electricity. This is called hydroelectric power or hydropower.
Solar Power
The bulk of all renewable energy comes either directly or indirectly from the sun.
Tidal Power and Wave Power
The ocean produces energy from the sun's heat and mechanical energy from the tides and waves. | <urn:uuid:259880cc-dbf4-4ff1-a431-349717409743> | 3.828125 | 568 | Knowledge Article | Science & Tech. | 28.277634 |
Science Fair Project Encyclopedia
Harbour Porpoise range
The Harbour Porpoise (Phocoena phocoena) is one of six species of porpoise, and so one of about eighty cetacean species. The Harbour Porpoise, as its name implies, stays close to coastal areas or river estuaries and as such is the most familiar porpoise to whale watchers. This porpoise often ventures up rivers and has been seen hundreds of miles from the sea.
The species is sometimes known as the Common Porpoise in texts originating in the United Kingdom, though this usage appears to be dying out.
The Harbour Porpoise is a little smaller than the other porpoises. It is about 75 cm long at birth. Males grow up to 1.6 m and females to 1.7 m. The females are correspondingly heavier, with a maximum weight of around 76 kg compared with the males' 61 kg. The body is robust and the animal is at its maximum girth just in front of its triangular dorsal fin. The beak is poorly demarcated. The flippers, dorsal fin, tail fin and back are a dark grey. The sides are a slightly speckled lighter grey. The underside is much whiter, though there are usually grey stripes running along the throat from the underside of the mouth to the flippers.
Harbour Porpoises live up to 25 years.
Population and distribution
The species is widespread in cooler coastal waters in the Northern Hemisphere, largely in areas with a mean temperature of about 15°C. In the Atlantic, Harbour Porpoises may be present in a concave band of water running from the coast of western Africa round to the eastern seaboard of the United States, including the coasts of Spain, France, the United Kingdom, Ireland, Norway, Iceland, Greenland and Newfoundland. There is a similarly-shaped band in the Pacific Ocean running from Sea of Japan, Vladivostok, the Bering Strait, Alaska and down to Seattle and Vancouver. There are diminishing populations in the Black and Baltic Seas.
Harbour Porpoises are not and never have been actively hunted by whalers because they are too small to be of interest—an adult is about the same size and a little lighter than the average adult human. The global population is in the hundreds of thousands and the Harbour Porpoise is not under threat of widespread extinction. However a key concern is the large number of porpoises caught each year in gill nets and other fishery equipment. This problem has led to a documented decrease in the number of Harbour Porpoises in busy fishing seas such as the Black and Baltic. It is known that the porpoises' echolocation is sufficiently discriminating to detect the presence of the nets, but this does not stop porpoises from becoming trapped. Scientists have developed beacons to attach to the nets to try to deter curious porpoises. These are not yet widespread and there is some controversy regarding their use—some concerns have been raised about the value of adding more noise pollution to the seas.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | <urn:uuid:4986fb50-0b3c-4139-83f5-9fd09429a75c> | 3.640625 | 680 | Knowledge Article | Science & Tech. | 48.68348 |
La Niña events greatly influence Australia’s climate.
The 2010–11 and 2011–12 La Niña events were two of the most significant in Australia’s recorded meteorological history.
The following pages explore the ‘story’ and ‘background’ of these La Niña events. The ‘story’ section follows the evolution of these extraordinary events and their widespread impacts on the weather of Australia during 2010 through 2012. The ‘background’ section gives an overview of the physical processes driving La Niña and El Niño events, and outlines the ways in which these events typically alter weather in Australia.
Unless otherwise indicated, all temperature and rainfall anomalies (i.e. departures from average) in this publication are calculated with respect to the 1961–1990 average, as recommended by the United Nations World Meteorological Organization.
At a glance: the impact of these La Niña events in Australia
The successive La Niña events spanning 2010–12 were associated with record rainfall over much of Australia and some of the biggest floods in living memory. This followed years of severe drought in many parts of the country, and while it brought relief to many Australians, it also brought devastation to others.
Some facts about the 2010–11 and 2011–12 La Niña events
The 2010–11 La Niña event was one of the strongest on record, comparable in strength with the La Niña events of 1917–18, 1955–56 and 1975–76.
In October and December 2010, and February and March 2011, the Southern Oscillation Index values (a measure of a La Niña's strength) were the highest recorded for each month since records commenced in 1876.
2011 was Australia's coolest year in a decade (2001–2011).
2010 was Australia's third-wettest calendar year on record.
The Murray–Darling Basin experienced its wettest calendar year on record in 2010 and Western Australia experienced its wettest year on record in 2011.
2011 was Australia's second-wettest calendar year (with the wettest year since national rainfall records began in 1900 being 1974 – also a La Niña year).
Ocean temperatures to the north of Australia were highest on record in 2010.
April 2010 to March 2012 was Australia's wettest two-year period on record.
Widespread flooding occurred in many parts of Australia associated with the record rainfalls. | <urn:uuid:971c4fd6-16a2-4742-b5ef-4b2ec35abfb0> | 3.53125 | 497 | Knowledge Article | Science & Tech. | 49.616579 |
Carbon dioxide (CO2) is a chemical molecule consisting of one carbon atom covalently bonded to two oxygen atoms. At atmospheric pressure and temperature, carbon dioxide is a colorless, odorless gas that exists naturally as a trace gas in the Earth's atmosphere. It is a fundamental component of the Earth's carbon cycle, with a considerable number of sources, both natural and man-made; moreover, there are a significant number of natural carbon sinks including oceans, peatlands, forests and other biota.
Carbon dioxide is an important greenhouse gas produced by human activities, primarily through the combustion of fossil fuels; however, methane, chlorofluorocarbons and other gases are more potent greenhouse gases on a per-molecule basis. Its concentration in the Earth's atmosphere has risen by more than 35% since the Industrial Revolution. Charles D. Keeling was a pioneer in the monitoring of carbon dioxide concentrations in the atmosphere. Atmospheric mixing ratios for carbon dioxide are now higher than at any time in at least the last 800,000 years, standing at 385 parts per million (ppm) compared to a pre-industrial high of 280 ppm. The current rate of increase is around two ppm per year (see Figure 1).
Sinks of Carbon Dioxide
Carbon dioxide is stored in a number of media, including seawater, soils and plant biomass (via photosynthesis). While all of these processes have not been quantified in detail, they represent massive fluxes and sinks for sequestration of carbon.
Sources of Carbon Dioxide
Respiration, both on land and in the sea, is a key component of the global carbon cycle. On land, an estimated 60 Pg C (60 billion tonnes) is emitted to the atmosphere each year by autotrophic respiration. A similar amount, about 55 Pg C, is emitted as a result of heterotrophic respiration.
In the sea, autotrophic respiration is thought to account for about 58 Pg of the dissolved inorganic carbon in surface waters each year, with the contribution of heterotrophic respiration being 34 Pg C.
Although respiration is a large source of carbon dioxide, it is currently smaller than the amount of CO2 that is removed from the atmosphere annually by photosynthesis, the biochemical process by which plants and other autotrophic organisms convert carbon dioxide into biomass. Consequently, respiration does not currently represent a net source of carbon dioxide to the atmosphere.
Emissions of CO2 due to volcanic activity, though sometimes large on a local scale, are relatively minor on a global scale, accounting for between 0.02 and 0.05 Pg C per year, or less than 1 percent of yearly human-generated carbon dioxide emissions.
It is estimated that man-made changes in land-use have, until now, produced a cumulative global loss of carbon from the land of about 200 Pg. Widespread deforestation has been the main source of this loss, estimated to be responsible for nearly 90 percent of losses since the mid-nineteenth century. Losses primarily occur due to the relatively long-term carbon sinks of forests being replaced by agricultural land.
The conversion of land from forested to agricultural land can have a wide range of negative effects as far as greenhouse gas emission is concerned. Soil disturbance and increased rates of decomposition in converted soils can both lead to emission of carbon to the atmosphere, with increased soil erosion and leaching of soil nutrients further reducing the potential for the area to act as a sink for carbon.
Current estimates suggest land-use changes lead to the emission of 1.7 Pg C per year in the tropics, mainly as a result of deforestation, and to a small amount of uptake (about 0.1 Pg C) in temperate and boreal areas - so producing a net source of around 1.6 Pg C per year.
Energy - Stationary Sources
Of the carbon dioxide emissions arising from fossil fuel combustion—up to 6.5 Pg C each year—around 40% is a result of electricity generation, with coal-fired generation being the leading sector. Other stationary sources include industrial (particularly iron and steel manufacture), emissions resulting from oil extraction, refinement and transportation, and domestic and commercial fossil fuel use.
Energy - Mobile Sources
Globally, transport-related emissions of carbon dioxide are growing rapidly. They currently constitute around 24% of anthropogenic CO2 emissions. Road transport dominates these emissions, though off-road, air and marine transport emissions are also significant. The use of petroleum as a fossil fuel for transportation dominates carbon dioxide emissions from this source. In 1999, in the U.S., more than 30 percent of fossil fuel-related carbon dioxide emissions were a direct result of transportation, with about two-thirds of this being from gasoline consumption by motor vehicles and the remainder coming from diesel and jet fuel use in lorries and aircraft, respectively.
Carbon dioxide is produced in lime and cement manufacture as a result of the heating of limestone. The final amount of CO2 produced varies depending on the type of cement being made. Globally, this source is estimated to amount to 0.2 Pg C emitted to the atmosphere each year. Significant carbon dioxide emissions (around 0.25 Pg C per year) also result from its use in chemical feedstocks.
Though responsible for large CO2 emissions over short time-scales, the net CO2 emissions due to biomass burning are difficult to quantify owing to the subsequent uptake of CO2 through regrowth of vegetation. An unsustainable fraction (i.e., not offset by regrowth) equivalent to about 10% of total emissions is generally assumed for biomass used in energy generation, with this figure being incorporated into the total emissions resulting from land-use change.
| <urn:uuid:6d9ffe77-05d7-410c-8782-2e57d322d59e> | 3.875 | 1,275 | Knowledge Article | Science & Tech. | 41.67117 |
The International Astronautical Congress is in full swing in Prague today, with regular updates flowing over #IAC2010 on Twitter and the first session of interstellar import now in progress as I write this. It’s a session on interstellar precursor missions that includes, in addition to Ralph McNutt (JHU/APL) on the impact of the Voyager and IBEX missions, a series of papers from the Project Icarus team ranging from helium-3 mining to communications via the gravitational lens of both the Sun and the target star (no specific target has yet been chosen for Icarus).
Claudio Maccone will be summarizing where we stand with the FOCAL mission, envisioned as the first attempt to exploit gravitational lensing for astronomical observations. But I’ll turn today to Marc Millis, who will wrap up the precursor session with a discussion of the first interstellar missions and their dependence on things we can measure, such as energy. The notion here is to look at the energy required for an interstellar mission and to weigh this against predictions of when those energy levels will be accessible and available to be used for space purposes.
Energy Needs Drive Spaceflight
How to get a handle on energy growth trends? Millis uses annual data on the world’s energy production from 1980 to 2007, calculating the ratio of each year’s energy production to the preceding year, then finding the average and standard deviation of all 27 of these years. How soon until Earth becomes a Kardashev Type I civilization — one capable of mastering all the energy reaching the Earth from the Sun? Acknowledging the wide span of uncertainty in the result, Millis pegs the earliest year this could occur as 2209, with a nominal date of 2390 and a latest date of 6498. A constant growth rate is assumed, which balances depletion of natural resources against unforeseen advances in new energy sources, leaving growth rates relatively stable.
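The arithmetic behind such projections is simple compound growth: solve for how many years a constant growth ratio takes to carry production from today's level to the target. Here is a sketch with stand-in numbers; the current-production and Type I figures below are my rough assumptions, not Millis' data:

```java
// Illustrative compound-growth extrapolation of the kind described above.
// The energy figures are assumptions for demonstration, not Millis' inputs.
public class EnergyExtrapolation {
    // Smallest n with current * growth^n >= target, i.e.
    // n = ceil( log(target/current) / log(growth) ).
    static int yearsToReach(double currentWatts, double targetWatts, double annualGrowth) {
        return (int) Math.ceil(Math.log(targetWatts / currentWatts) / Math.log(annualGrowth));
    }

    public static void main(String[] args) {
        double current = 1.8e13; // ~18 TW: rough present-day production (assumption)
        double typeI   = 1.7e17; // ~solar power intercepted by Earth (assumption)
        // Two illustrative growth rates, bracketing a nominal case:
        System.out.println(2010 + yearsToReach(current, typeI, 1.019)); // slower growth
        System.out.println(2010 + yearsToReach(current, typeI, 1.043)); // faster growth
    }
}
```

Because the growth rate sits in an exponent, small changes in it swing the projected year by centuries, which is exactly why the spread between earliest, nominal, and latest dates is so wide.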
Fascinating as they are in their own right, I won’t go through all the numbers (I’ll link to the paper when it becomes available online). But note the key factors here, which are the total amount of energy produced by our species and the proportion of that energy devoted to spaceflight. For the latter, Millis compares the annual Space Shuttle launch rate against the total annual energy consumed by the United States, finding that the maximum ratio of Shuttle propulsion energy to total US energy consumed occurred in the year 1985, equaling 1.3 x 10^-6. The average ratio over the years 1981 to 2007 is 5.5 x 10^-7. Millis then takes the maximum ratio (over an order of magnitude greater than the average ratio) to calculate the earliest opportunity for future missions. What he calls the Space Devotion Ratio is thus 1.3 x 10^-6.
The Alpha Centauri Calculation
When could we launch a 10^4 kilogram interstellar probe to Alpha Centauri based on these calculations? Assume 75 years as the maximum travel time that might be acceptable to mission scientists and assume a rendezvous rather than a flyby mission, acknowledging the need to acquire substantial amounts of data at the destination. Millis extrapolates from existing deep space probes to arrive at a putative mass, adding the needed margins to ensure survival over a 75-year transit and the substantial communications overhead to relay information to Earth.
As to propulsion options, Millis works with two possibilities, the first being an ideal case that assumes 100% conversion of stored energy into kinetic energy of the vehicle (think ‘idealized beam propulsion’ or even some kind of space drive), the second being an advanced rocket with an exhaust velocity of 0.03c. We thus wind up with two sets of figures, again based on energy availability. Millis then converts the propulsion energy figures into equivalent world energy values, using the Space Devotion Ratio he first calculated earlier for US space involvement.
The result: The earliest launch for a 75-year probe is 2247, with a nominal date of 2463. This assumes idealized propulsion; i.e., a breakthrough technology like a space drive. Fall back on advanced rocket concepts and the energy requirements are much higher, with the nominal launch date of the probe now becoming 2566, the earliest possible date being 2301.
Strategies for Interstellar Research
The play in the numbers is huge, the uncertainty in the results caused by the wide span in possible energy production growth rates. Interestingly, Millis’ finding that the earliest interstellar mission will not be possible for two centuries coincides with earlier estimates from Bryce Cassenti and Freeman Dyson based on economic and technological projections. We can, obviously, adjust the numbers based on our projections of technological growth, and as with any projection, sudden changes to world economic patterns would be a substantial wild card.
But Millis argues that in the absence of a single technological solution, it would be premature to focus on specific propulsion options to the exclusion of other, more theoretical alternatives. For that matter, it would be foolish to be inhibited by the ‘incessant obsolescence’ postulate (a term that Millis himself coined), noting that earlier missions may well be overtaken by faster ones launched at a later date. Instead, what he calls ‘cycles of short-term, affordable investigations’ targeting key questions whose answers we can hope to find today are the best way to proceed. And that means continuing our investigations of everything from the already operational solar sails to technologies that today seem impossible, such as travel faster than the speed of light. | <urn:uuid:a756587a-4833-4798-8d53-a2ff5b459748> | 2.859375 | 1,133 | Personal Blog | Science & Tech. | 35.598614 |
William Henry Perkin
A photograph that William Henry Perkin took of himself at the age of 14—four years before he discovered the first synthetic dyestuff. CHF Collections.
In 1856, during Easter vacation from London’s Royal College of Chemistry, 18-year-old William Henry Perkin (1838–1907) synthesized mauve, or aniline purple—the first synthetic dyestuff—from chemicals derived from coal tar. Like Friedrich Wöhler’s accidental synthesis of urea, Perkin’s chemical manipulations were designed to produce a quite different product—quinine. His teacher, August W. Hofmann, one of Justus von Liebig’s former students, had remarked on the desirability of synthesizing this antimalarial drug, which at that time was derived solely from the bark of the cinchona tree, by then grown mainly on plantations in southeast Asia. Against Hofmann’s recommendation and with the financial support of his father, a construction contractor, Perkin commercialized his discovery and developed the processes for the production and use of the new dye. In 1857 he opened his factory at Greenford Green, not far from London.
William Henry Perkin. Williams Haynes Portrait Collection, CHF Collections.
From this modest beginning grew the highly innovative chemical industry of synthetic dyestuffs and its near relative, the pharmaceutical industry, which improved the quality of life for the general population. These two industries also stimulated the search for a better understanding of the structure of molecules. Perkin, at the age of 36, sold his business so that he could devote himself entirely to research, which included early investigations of the ability of some organic chemicals to rotate plane-polarized light, a property used in considering questions of molecular structure. | <urn:uuid:c27f0459-5053-46a6-a0b5-067e32bb83f7> | 3.359375 | 375 | Knowledge Article | Science & Tech. | 29.025495 |
This site contains 25 questions on the topic of plate tectonics, which covers the development of the theory, crustal movements, geologic features associated with tectonics, and plate boundaries (convergent, divergent, transform). This is part of the Principles of Earth Science course at the University of South Dakota. Users submit their answers and are provided immediate verification.
Intended for grade levels:
Type of resource:
No specific technical requirements, just a browser required
Cost / Copyright:
Copyright 1997 Timothy H. Heaton, Department of Earth Sciences, University of South Dakota
DLESE Catalog ID: DLESE-000-000-001-750
This resource is part of 'Earth Science Practice Exams'
Resource contact / Creator / Publisher:
Author: Dr Timothy H. Heaton
University of South Dakota | <urn:uuid:7b2c6690-49b8-4470-a9c0-82875eac4363> | 3 | 171 | Content Listing | Science & Tech. | 22.109414 |
Converting to String
String class provides us with the method String.valueOf(some stuff) that basically converts "some stuff" into a string.
But we can also concatenate "some stuff" with an empty String ("").
Both these methods result in a String representation of an object (or other data type), so my question is: what is the difference between the two, what are the advantages and disadvantages of one over the other, and which method do you guys usually use when forced to convert something to a String, and why?
The API documentation of valueOf() describes what it does, but note that there are various overloads that take different argument types.
String concatenation/conversion is dealt with in the JLS 15.18.1.
If this is a homework question, your best bet is to read what these things actually do, then decide and express your own ideas and preferences for comment.
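To make the comparison concrete, here is a minimal sketch (class and variable names are mine); the practical differences show up with null references and with overload resolution:

```java
public class ToStringDemo {
    public static void main(String[] args) {
        int n = 42;

        // Both produce the same String; the compiler turns "" + n into
        // StringBuilder appends under the hood.
        String viaValueOf = String.valueOf(n);   // resolves to valueOf(int)
        String viaConcat = "" + n;
        System.out.println(viaValueOf.equals(viaConcat));   // true

        // With a null reference, both yield the four-character string "null":
        Object nothing = null;
        System.out.println("" + nothing);                   // null
        System.out.println(String.valueOf(nothing));        // null

        // Caveat: the bare call String.valueOf(null) is ambiguous, resolves
        // to the valueOf(char[]) overload, and throws NullPointerException.
    }
}
```

Concatenation is terser and never throws for references; valueOf makes the conversion explicit and signals intent when no other concatenation is happening.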
US usage, definition 1: A quadrilateral which has a pair of opposite sides which are parallel. The parallel sides are called the bases, and the other two sides are called the legs.
US usage, definition 2: A quadrilateral which has one parallel pair of opposite sides and one non-parallel pair of opposite sides. The parallel sides are called the bases, and the other two sides are called the legs.
UK usage: The same as the US word trapezium. The UK word trapezium means the same as the US word trapezoid, and vice-versa.
Commentary: Under US definition 1, a parallelogram is a type of trapezoid. Under US definition 2, a parallelogram is not a type of trapezoid. Regardless of which definition you prefer, the trapezoid area formula can be used to find the area of a parallelogram.
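The commentary's final claim can be checked directly: the trapezoid area formula is half the sum of the bases times the height, and when the two bases are equal it collapses to base times height. A small sketch (method and class names are mine):

```java
public class TrapezoidArea {
    // Area of a trapezoid: the mean of the two bases times the height.
    static double area(double base1, double base2, double height) {
        return 0.5 * (base1 + base2) * height;
    }

    public static void main(String[] args) {
        // A parallelogram with base 5 and height 3: both "bases" are 5.
        System.out.println(area(5, 5, 3));  // 15.0, the same as base * height
        // A genuine trapezoid with bases 4 and 6 and height 3.
        System.out.println(area(4, 6, 3));  // 15.0
    }
}
```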
rectangle, square, height of a trapezoid, trapezoid rule, polygon
Stentor is a very large ciliate measuring from 500-2000 microns long when extended. There are a variety of species of Stentor.

Stentor coeruleus is a very large trumpet-shaped, blue to blue-green ciliate with a macronucleus that looks like a string of beads (dark connected dots on the left). With many myonemes, it can contract into a ball. It may also swim freely, either extended or contracted. A stentor uses its cilia to sweep food down into its gullet.

Stentor polymorphus is 500-1500 microns long and hosts many Zoochlorellae, which make this species green. The Zoochlorellae live in symbiosis with the Stentor.

Stentors are always fun to watch and are available from science supply companies. Some instructors do not have luck keeping their cultures alive.

The Stentor shown at lower left is Stentor coeruleus or Stentor polymorphus. The image was taken with a phase contrast microscope. Notice the macronucleus (round circles that look like a string of beads).

Video of a Stentor captured with a microscope courtesy of James Tripp.
As co-leader of the new DOE Center for Research on Enhancing Carbon Sequestration in Terrestrial Ecosystems (CSITE), Gary Jacobs of ORNL's Environmental Sciences Division (ESD) explains some of the concepts that the center's 28 scientists are studying and some of the questions they seek to answer.

Vegetation and Soil:
Natural Scrubber for Carbon Emissions?
An Interview with ORNL's Gary Jacobs

CSITE co-leaders Blaine Metting (left) and Gary Jacobs inspect a field of Queen Anne's lace (Daucus carota) at the Fermi National Accelerator Laboratory's National Environmental Research Park.
How can forests, pastures, croplands, and soils reduce carbon dioxide levels in the atmosphere in an era of increasing industrialization, and how can CSITE facilitate this process?
Mac Post of ESD, one of our experts on the carbon cycle, says that several lines of evidence indicate that the terrestrial biosphere, most likely the Northern Hemisphere, could be taking up a net amount of nearly 2 billion tons of carbon per year. Some of this uptake is perhaps due to regrowth of forests harvested previously this century in North America and Europe. Another factor may be the enhanced growth of natural vegetation resulting from rising atmospheric carbon dioxide concentrations stimulating photosynthesis. This Northern Hemisphere sink is larger than the estimated 0.5 to 1.5 billion tons of carbon emitted to the atmosphere through conversion of natural ecosystems to agriculture, primarily in tropical regions. Thus, the earth's vegetation and soil could act as a huge natural scrubber for carbon dioxide emissions from industrial sources and land-use changes.
Terrestrial ecosystems remove atmospheric carbon dioxide by plant photosynthesis during the day, which results in plant growth (roots and shoots) and increases in microbial biomass in the soil. Plants release some of the stored carbon back into the atmosphere through respiration. When a plant sheds leaves and roots die, this organic material decays, but some of it can be protected physically and chemically as dead organic matter in soils, which can be stable for up to thousands of years. The decomposition of soil carbon by soil microbes releases carbon dioxide to the atmosphere. This decomposition also mineralizes organic matter, which makes nutrients available for plant growth. The total amount of carbon stored in an ecosystem reflects the long-term balance between plant production and respiration and soil decomposition.
CSITE seeks to demonstrate through research that forests, pastures, cropland, other vegetation, and their associated soils can be managed and manipulated to sequester even more carbon from the atmosphere. People can help reduce carbon dioxide releases to the atmosphere and enhance carbon sequestration by protecting and adding to ecosystems that store carbon. For example, we can preserve forests instead of burning them to clear land for farms. We can grow more trees. We can reduce soil erosion. Some agricultural and forestry management techniques are already helping to sequester additional carbon. The focus of our center, however, is to do research to determine the most effective, most acceptable ways to manipulate and manage ecosystems to increase carbon storage in above-ground biomass and below-ground roots and soil. For example, we will look at different approaches to fertilizing and cultivating forest plantations and crops. R&D is needed to understand, measure, implement, and assess these strategies.
What is the scope of research for CSITE?
We will try to discover how changes in land use and land management affect the ability of vegetation and soil to sequester carbon. To measure and predict these changes, we will rely on various tools, ranging from remote sensing to simulation modeling. We seek to understand how microbial activity, soil aggregation, and other processes at the molecular level control carbon sequestration in vegetation and the soil. We will also do several assessments: We will determine scientifically the national potential for sequestering carbon in terrestrial ecosystems. We will evaluate the actual net effect on greenhouse warming potential of practices that enhance carbon sequestration in terrestrial ecosystems. This assessment will take into account the greenhouse gas costs of improving plant productivity, such as the increased carbon dioxide and nitrogen oxide emissions associated with fertilizer production and machinery operation. We will develop a quantitative understanding of the environmental impacts of increasing carbon sequestration in terrestrial ecosystems and create improved tools for predicting impacts. Finally, we will analyze soil carbon sequestration to determine its economic and social impacts, especially possible pressures on land use and production in the agricultural and forest sectors.
Why is ORNL particularly well qualified to study carbon sequestration in terrestrial ecosystems?
CSITE's proposal was successful partially because we formed a national partnership with some of the best researchers and institutions in the field. The future success of CSITE depends on whether our collaborative team can address the most pressing scientific challenges. As for ORNL's specific qualifications, several ESD researchers (Mac Post, Jeff Amthor, Gregg Marland, Stan Wullschleger, and Bob Luxmoore, for example) have considerable experience modeling, monitoring, and conducting large experiments on forests and other ecosystems, with an emphasis on understanding the carbon cycle and impacts of global change on ecosystems. Rich Norby, Paul Hanson, and others have studied the effects on forest growth of increasing carbon dioxide concentrations and changing inputs of water. Janet Cushman and Lynn Wright have managed a national program for developing better ways to raise faster-growing biomass crops for energy production. ESD staff also played a key role in writing a chapter on soils and vegetation for DOE's report, Carbon Sequestration: State of the Science, which was compiled, edited, and published by ORNL staff. (This chapter is the source of much of the material in this interview.) ESD also is home to several key global climate change data centers, such as the Center for Carbon Dioxide Information and Analysis, a NASA Distributed Active Archive Center, and the Atmospheric Radiation Measurements Program data center. Studying the ecological effects of global change has historically been one of the Lab's strengths.
Why is soil management important for carbon sequestration?
Soils are estimated to contain about 75% of all terrestrial carbon. Because 25 billion tons of soils are lost through wind and water erosion each year, there is an incentive to prevent erosion not only to benefit agriculture but also to increase carbon sequestration. One solution is to produce and protect soils high in carbon-containing organic matter because they have better texture and are better able to absorb nutrients, retain water, and resist erosion. Soil organic matter processes are a particular emphasis of ESD's Chuck Garten and our Argonne National Laboratory collaborators, Julie Jastrow and Mike Miller.
Jeff Amthor and others estimate that some 40 to 60 billion tons of carbon have been lost from soils since the great agricultural expansions of the 1800s. Removal of natural perennial vegetation and cultivation of the land have caused declines of soil organic matter by 50 to 60% in the top 20 centimeters of soil and 20 to 30% in the top meter of soil. This decline is due largely to a decrease in the formation of new organic matter below the ground and the loss of natural mechanisms that protect soil organic carbon from decomposition and oxidation. Cultivated soil is exposed to the air, so during decomposition by soil microbes, the soil organic matter is oxidized, and the carbon is released to the atmosphere as carbon dioxide.
Can changes in farming practices enable soils to store more carbon?
Cesar Izaurralde and Norm Rosenberg, two of our PNNL partners, are experts in agricultural systems. They point out that soil carbon can be increased by reduced-till agriculture, in which the soil is barely disturbed before crops are planted, and by the practice of returning crop residues to soil to reduce wind erosion. The U.S. Conservation Reserve Program (CRP) of the US Department of Agriculture, which since 1985 has been paying farmers to retire land from cultivation for up to 15 years and plant it in grass to stabilize it, is also increasing soil carbon storage. Some evidence suggests that levels of soil organic carbon have doubled over the past 20 years in the upper 18 centimeters of soil placed in the CRP. In addition, erosion of the land enrolled in the CRP has decreased 21%. All of these practices reflect mainly the "recovery" of soil carbon previously lost because of earlier cultivation.
What changes in forestry practices would help plants and soils more efficiently remove carbon dioxide from the atmosphere?
Forests in the United States are being managed to produce harvestable fiber and maintain cover, increase water storage, and retain litter. One major challenge is to slow the rate of deforestation. If this trend could be reversed and if reforestation occurs, some modeling studies suggest that, globally, forests could sequester from 200 to 500 billion tons of carbon by 2090. These values are large and controversial, and estimating the potential for carbon sequestration is one of the research challenges. A big challenge is to determine how to manage forest nutrients to achieve both profitable productivity and net carbon storage. Strategies are needed to address both fertilization and incorporation of forest residue into soils.
How can more carbon be stored in soils and plants?
More carbon can be stored below ground by increasing the depth of soil carbon, boosting the density of carbon in the soil, and decreasing the rate at which soil carbon decomposes. CSITE will be focusing initially on the latter two. Harvey Bolton of PNNL, Jizhong Zhou of ESD, and Mike Miller of ANL will be looking at microbiological processes that could be manipulated to reduce decomposition rates of soil organic matter. ESD's John McCarthy will be investigating molecular-scale interactions among clay particles and soil organic matter in search of a better way to protect the organic matter. We hope that other research programs will provide complementary results. For example, advances in biochemical research may produce a "smart fertilizer" that increases a soil's organic content and ability to retain water, protects its organic matter, and improves its texture so it can hold more carbon. Another important R&D area would be the development of new ways to produce fertilizer that use less energy and reduce carbon emissions.
More carbon can be sequestered in vegetation, possibly even by genetically engineering plants to increase their carbon retention. Plants could be engineered to produce cellular structures more resistant to decomposition, increasing the lifetime of soil organic matter and thus sequestering more carbon in soils. We need to find ways to make carbon accumulate faster, increase the vegetation's carbon density, and use biomass carbon in long-lived structural materials and industrial products.
What are the other benefits of storing more carbon in vegetation and soils?
Creating conditions for higher plant productivity and accumulation of soil organic matter will not only sequester more carbon but also restore degraded ecosystems worldwide. Carbon sequestration strategies would improve soil and water quality, decrease nutrient loss, reduce soil erosion, improve wildlife habitats, increase water conservation, and produce additional biomass for energy and other products. Understanding how to increase soil carbon stocks in agricultural lands may be critical to the future sustainability of food production.
How will you know if carbon sequestration is increasing in a terrestrial ecosystem?
A critical question is whether new sensors will be required or whether process knowledge ("rules of thumb") will be sufficient to estimate changes in carbon sequestration based on the implementation of observable land management practices. Developing measurement and sensing techniques to verify increased carbon sequestration in terrestrial ecosystems and to monitor its effects will be challenging. Detecting changes in terrestrial carbon concentrations at large scales will not be easy. An important R&D goal identified by DOE in its roadmap report is to develop in situ, nondestructive, below-ground sensors to quantify rates and limits of carbon accumulation over various times and land areas. To determine whether increases have occurred in aboveground biomass, new advances in satellite-based remote sensing will be required. ORNL could certainly play a role in developing some of the needed sensor technology.
A new research paper (PDF format) suggests that astrobiologists and geologists should work in a virtual world that extends their senses rather than using traditional approaches. The paper presents the argument using the Matrix films as an analogy. As an example, they offer the idea of a human with ViA wearable computer equipment that interfaces via neural network software to the vision system of a Mars probe (the software is named, appropriately, EditOr or NEO). The human and the computer each provide the types of image processing they're good at, resulting in some interesting views. See the Cyborg Astrobiologist project website for more information.
It's really neat, but... there is a 6-7 minute wait for a light-speed signal to reach Mars, then a 6-7 minute wait for the light-speed signal to return (or is it 6-7 minutes round trip total?). Unless someone gets quantum twin-particle communications going, it's going to be hard to get a decent interactive speed going from Earth with something on Mars in real time.

I personally would go insane trying to telepresence a robot on Mars from Earth at those time-delay speeds for everything you want to do.
Oh dear. Don't get me wrong, I like sci-fi movies. But to me a computer science paper should be about computer science, not just a promotion for a piece of fiction. Quoting scenes from sci-fi films and including many Hollywood images in a science paper just makes me feel
These procedures are in structures
(make-time integer) -> time
(current-time) -> time
(time? x) -> boolean
(time-seconds time) -> integer
(time->string time) -> string
A time record contains an integer that represents time as the number of seconds since the Unix epoch (00:00:00 GMT, January 1, 1970). Make-time's record holds its argument, while current-time's holds the current time. Time? is a predicate that recognizes times, and time-seconds returns the number of seconds in a time record.
(time=? time time) -> boolean
(time<? time time) -> boolean
(time<=? time time) -> boolean
(time>? time time) -> boolean
(time>=? time time) -> boolean
These predicates compare times by their seconds counts.
Time->string returns a string representation of time in the following form:
"Wed Jun 30 21:49:08 1993 "
Hardness and specific gravity are two of the major characteristics of rocks. Hardness of a rock or mineral is its resistance to scratching and may be described relative to a standard scale of 10 minerals known as the Mohs scale. F. Mohs, an Austrian mineralogist, developed this scale in 1822.
Specific gravity is the number of times heavier a gemstone of any volume is than an equal volume of water; in other words, it is the ratio of the density of the gemstone to the density of water.
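Because specific gravity is a pure ratio, it is conveniently measured by hydrostatic weighing: weigh the stone in air, weigh it suspended in water, and divide the weight in air by the weight of water displaced (the loss of weight). A small sketch with illustrative numbers (class and method names are mine):

```java
public class SpecificGravity {
    // Hydrostatic weighing: SG = weight in air / (weight in air - weight in water).
    // The denominator equals the weight of the displaced water.
    static double specificGravity(double weightInAir, double weightInWater) {
        return weightInAir / (weightInAir - weightInWater);
    }

    public static void main(String[] args) {
        // A stone weighing 10.0 g in air and 7.5 g suspended in water
        // displaces 2.5 g of water, so its specific gravity is 10.0 / 2.5.
        System.out.println(specificGravity(10.0, 7.5));  // 4.0
    }
}
```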
The following is a table of the 10 minerals from Mohs' hardness scale:
|Talc||1|
|Gypsum||2|
|Calcite||3|
|Fluorite||4|
|Apatite||5|
|Orthoclase (feldspar)||6|
|Quartz||7|
|Topaz||8|
|Corundum (ruby or sapphire)||9|
|Diamond||10|
The following table shows the hardness of various common materials.
|Piece of chalk||1|
|Plaster of Paris||2|
Sea Fan Corals
This species of coral belongs to the “gorgonian soft coral family”, hence they are commonly known as the “Sea Fan”. They have a flower-like appearance, as they grow colonially in a flat, fan-like pattern. All species grow about 2 feet high, and the colonies may grow up to 5 feet tall. This coral has a flexible structure, which sways with the water currents. The fan-shaped colonies usually grow across the current, increasing their ability to hold prey. They are abundantly found along the Atlantic coasts of Florida, Bermuda, and the West Indies.
Stag Horn Corals-
Found 10-160 feet below the surface in protected clear water. Their colonies cover large areas of the reef. Stag horn corals show tentacles in multiples of three. These tiny fingerlike tentacles emerge at night. They are most common in shallow reef environments with bright light and moderate to high water motion. They need oxygenated water. Environmental destruction has led to a dwindling of their populations, and they are listed as a candidate species under the Endangered Species Act of 1973.
Ivory Bush Corals-
Ivory bush corals show colonies that are clumps of short, thin, highly fused branches. They show dense branches that grow in clumps, which are characteristic to this coral. Their short branches are crooked in shape. The ivory bush corals thrive in areas of high sedimentation including hard bottoms, lagoons and black reef areas to depths of 40 feet. They are found in shallow reef environments and in rarity. Ivory bush corals thus survive in varying conditions of the environment.
Mountainous Star Corals-
The mountainous star corals are massive, mound-shaped coral colonies found in the Gulf of Mexico, the Flower Gardens, the Florida Keys reef, throughout the Caribbean, and other areas of the tropical Atlantic. They are healthy, fully pigmented star corals. They rise from 150 feet below the water to just a few feet from the surface.
Colonies become pale when algal pigment is lost through exposure to high water temperature or solar radiation. This condition is reversible.
Finger Leather Corals-
The finger leather corals are commonly referred as spaghetti finger leather coral, soft finger corals or thin finger corals. They are slimy to touch. They are known to exist in various sizes. These corals are always seen attached to a small piece of rock. They periodically go through regeneration stage where they form a waxy coating all over their body and within a day or two they shed the top layer of their skin. This is a normal interesting phenomenon seen in them.
Brain Corals-
Brain corals show structure similar to that of the brain, hence the name “brain coral”. They are commonly found in massive colonies. These brain corals appear green, gray, purple, brown, or yellow-brown in color. Their colonies are found at depths of 3 to 130 feet. Their unique shape and beauty add attractiveness to the underwater world of the coral reef. They are of utmost commercial importance, as they attract lots of tourists in the Caribbean Sea.
Elkhorn Corals-
Elkhorn corals are large branching corals with thick and sturdy elk-antler-like branches. They have tree-like colonies up to 13 feet across and 6.5 feet tall, with flat-tipped branches. Over the last 10,000 years these have been one of the three most important, unique Caribbean corals contributing to reef growth and development and providing essential fishery habitat.
These images show two Jupiter-sized sunspot groups on the face of the Sun (left) and an extreme close-up of a different, smaller sunspot group (right). The lefthand image was taken on Oct. 24, 2003 by the SOHO (Solar & Heliospheric Observatory) spacecraft. The righthand image was taken on July 15, 2002 by the Swedish 1-m Solar Telescope on the island of La Palma off the western coast of Africa. The central dark part of the large sunspot in the middle of the righthand image is about 14,000 km (8,700 miles) across... slightly larger than Earth!
Images courtesy SOHO (NASA & ESA) and the Royal Swedish Academy of Sciences.
Technical Summary for Mathematicians
This is an introduction to finite fields, rings, and groups, with applications to modern cryptography.
The examples used here are all ultimately derived from Z/nZ, by means of constructions like products, field extensions, and groups of invertibles.
The RSA cryptosystem, probably the most popular, is based on the difficulty of inverting exponentiation (extracting e-th roots) in the group (Z/pqZ)×.
It is conjecturally as difficult to solve this problem as it is to factor pq (as evidence, note that the order of any element divides lcm(p-1, q-1)), which is, in turn, conjecturally computationally difficult.
Of course, there are special values (of pq and the base) which are of low order and so must be avoided.
Other well-investigated cryptosystems are based on the discrete log problem in GF(p^d)×. Again, it is open whether such discrete log problems really provide "trapdoor" functions.
Further technical information and references can be found at the RSA FAQ.
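As a toy illustration of the arithmetic, here is textbook RSA with the classic worked example p = 61, q = 53, e = 17 (deliberately tiny numbers; real deployments use primes hundreds of digits long plus padding schemes). java.math.BigInteger supplies all the modular arithmetic:

```java
import java.math.BigInteger;

public class ToyRSA {
    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(61), q = BigInteger.valueOf(53);
        BigInteger n = p.multiply(q);                        // public modulus pq = 3233
        BigInteger pm1 = p.subtract(BigInteger.ONE);
        BigInteger qm1 = q.subtract(BigInteger.ONE);
        // Every element's order divides lcm(p-1, q-1), as noted above.
        BigInteger lambda = pm1.multiply(qm1).divide(pm1.gcd(qm1));  // lcm(60, 52) = 780
        BigInteger e = BigInteger.valueOf(17);               // public exponent, coprime to lambda
        BigInteger d = e.modInverse(lambda);                 // private exponent: 413
        BigInteger m = BigInteger.valueOf(65);               // the "message"
        BigInteger c = m.modPow(e, n);                       // encrypt: 65^17 mod 3233 = 2790
        BigInteger r = c.modPow(d, n);                       // decrypt: recovers 65
        System.out.println(n + " " + d + " " + c + " " + r); // 3233 413 2790 65
    }
}
```

Recovering m from c and the public pair (n, e) alone is exactly the e-th root problem discussed above; knowing the factorization of n makes it easy.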
The Woodrow Wilson Leadership Program in Mathematics
The Woodrow Wilson National Fellowship Foundation
CN 5281, Princeton NJ 08543-5281
Since no one else did, here is the explanation of uranium series, electron spin resonance, and optically stimulated luminescence.

Uranium series dating dates the uranium and thorium which is incorporated into travertine (cave limestone) deposits. Ground water, seeping into caves, contains uranium 238 and 234, which are highly soluble in water, but not daughter elements such as thorium 230, which is insoluble. After the uranium is deposited in the limestone, the uranium 238 and 234 decay to thorium 230 and then on to other isotopes. By measuring the ratios, and knowing the half-lives of the radioactive series, one can date the travertine and thus anything it covers. All the thorium and the isotopes below it must have originally been uranium. The method is good from 5000 years ago back to about 1 million years.
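To first order (assuming no initial thorium in the deposit and uranium-234 in equilibrium with uranium-238, simplifications that real analyses must correct for), the thorium-230/uranium-234 activity ratio grows toward 1 as 1 - e^(-lambda*t), so an age can be sketched like this (the half-life is rounded and the names are mine):

```java
public class USeriesSketch {
    // Approximate half-life of thorium-230, in years.
    static final double TH230_HALF_LIFE = 75_000.0;

    // Simplified age from a measured 230Th/234U activity ratio in (0, 1),
    // assuming no initial thorium and 234U/238U secular equilibrium.
    static double ageYears(double activityRatio) {
        double lambda = Math.log(2) / TH230_HALF_LIFE;  // decay constant per year
        return -Math.log(1.0 - activityRatio) / lambda;
    }

    public static void main(String[] args) {
        // A ratio of 0.5 corresponds to exactly one thorium-230 half-life.
        System.out.printf("%.0f%n", ageYears(0.5));     // 75000
    }
}
```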
Electron spin resonance (ESR) measures damage to crystalline material such as tooth enamel. As a tooth lies in the earth it is bombarded by natural radioactivity. This disrupts the crystal, displacing electrons in the crystal lattice. By bombarding the crystal with microwaves, some of the energy is absorbed by the trapped electrons, indicating the extent of the radiation damage. By knowing the ambient radiation level in the deposit a tooth came from, one can calculate how long the thing has been lying there. ESR works from the present to more than a million years ago.
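The bookkeeping behind trapped-charge methods like ESR reduces, to first order, to dividing the total radiation dose recorded in the crystal by the annual dose rate of the surrounding deposit. A sketch with illustrative (not measured) numbers:

```java
public class EsrAgeSketch {
    // Trapped-charge age: accumulated (equivalent) dose / annual dose rate.
    static double ageYears(double equivalentDoseGray, double doseRateGrayPerYear) {
        return equivalentDoseGray / doseRateGrayPerYear;
    }

    public static void main(String[] args) {
        // A tooth that has accumulated 100 Gy in a deposit delivering
        // 1 milligray per year has been buried for about 100,000 years.
        System.out.println(Math.round(ageYears(100.0, 0.001)));  // 100000
    }
}
```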
Thermoluminescence is similar to ESR. When a mineral like flint or quartz is heated or exposed to sunlight, the crystal lattice is annealed a bit and displaced electrons are 'neutralized'. When the object is buried it does not see the sun, but radiation damage once again accumulates, causing displaced electrons. By stimulating the object with light, there is a glow which emanates from the object which can be measured. This glow indicates how much damage the flint has suffered. Once again, by knowing the ambient radiation level, one can calculate how long it has been since the mineral saw sunlight or how long since it was heated. The range is from 5000 to 500,000 years for this method.
Info can be found in Encyclopedia of Human Evolution, S. Jones ed., p.
BTW, thermoluminescence was discovered by either Boyle or Hooke, who took a crystal to bed with him and placed it next to his body. In the dark the crystal glowed. No one knows what he was really doing with the crystal.

Foundation, Fall and Flood
Adam, Apes and Anthropology http://www.isource.net/~grmorton/dmd.htm
A molecular sieve is a material with very small holes of precise and uniform size. These holes are small enough to block large molecules and allow small molecules to pass. Many molecular sieves are used as desiccants. Examples: Activated charcoal and silica gels are molecular sieves.
According to IUPAC notation, microporous materials have pore diameters of less than 2 nm (20 Å) and macroporous materials have pore diameters of greater than 50 nm (500 Å); the mesoporous category thus lies in the middle with pore diameters between 2 and 50 nm (20-500 Å).
Microporous material (<2 nm)
- Zeolites (aluminosilicate minerals, not to be confused with aluminium silicate)
- Porous glass: 10 Å (1 nm), and up
- Active carbon: 0-20 Å (0-2 nm), and up
- Montmorillonite intermixes
- Halloysite (endellite): Two common forms are found, when hydrated the clay exhibits a 1 nm spacing of the layers and when dehydrated (meta-halloysite) the spacing is 0.7 nm. Halloysite naturally occurs as small cylinders which average 30 nm in diameter with lengths between 0.5 and 10 micrometres.
- Montmorillonite intermixes
Mesoporous material (2-50 nm)
Macroporous material (>50 nm)
Molecular sieves are used as adsorbents for gases and liquids. Molecules small enough to pass through the pores are adsorbed, while larger molecules are not. A molecular sieve differs from a common filter in that it operates on a molecular level and traps the adsorbed substance. For instance, water molecules may be small enough to enter the pores while larger molecules are not, so water is forced into the pores, which act as a trap that retains the penetrating water molecules. Because of this, molecular sieves often function as desiccants. A molecular sieve can adsorb water up to 22% of its own weight. The principle of adsorption to molecular sieve particles is somewhat similar to that of size exclusion chromatography, except that, without a changing solution composition, the adsorbed product remains trapped: in the absence of other molecules able to penetrate the pore and fill the space, a vacuum would be created by desorption.
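The size-exclusion idea can be made concrete by comparing molecules' approximate kinetic diameters with a sieve's pore size. The diameters below are rounded values from standard tables and the pore size is nominal, so treat the sketch as illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SieveDemo {
    public static void main(String[] args) {
        // Approximate kinetic diameters in angstroms (rounded literature values).
        Map<String, Double> kineticDiameter = new LinkedHashMap<>();
        kineticDiameter.put("H2O", 2.65);
        kineticDiameter.put("CO2", 3.30);
        kineticDiameter.put("O2", 3.46);
        kineticDiameter.put("N2", 3.64);
        kineticDiameter.put("CH4", 3.80);

        double poreDiameter = 3.0;  // a 3A sieve has pores of roughly 3 angstroms
        for (Map.Entry<String, Double> entry : kineticDiameter.entrySet()) {
            String verdict = entry.getValue() <= poreDiameter ? "adsorbed" : "excluded";
            System.out.println(entry.getKey() + ": " + verdict);
        }
        // Only H2O fits the pores, which is why a 3A sieve can dry a gas
        // stream without co-adsorbing N2, O2, CO2, or CH4.
    }
}
```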
Molecular sieves are often utilized in the petroleum industry, especially for the purification of gas streams, and in the chemistry laboratory for separating compounds and drying reaction starting materials. For example, in the liquefied natural gas (LNG) industry, the water content of the gas needs to be reduced to very low values (less than 1 ppmv) to prevent it from freezing (and causing blockages) in the cold section of LNG plants.
They are also used in the filtration of air supplies for breathing apparatus, for example those used by scuba divers and firefighters. In such applications, air is supplied by an air compressor, and is passed through a cartridge filter which, dependent on the application, is filled with molecular sieve and/or activated carbon, finally being used to charge breathing air tanks. Such filtration can remove particulates and compressor exhaust products from the breathing air supply.
Methods for regeneration of molecular sieves include pressure change (as in oxygen concentrators), heating and purging with a carrier gas (as when used in ethanol dehydration), or heating under high vacuum. Regeneration temperatures range from 175 °C to 315 °C depending on molecular sieve type. In contrast, silica gel can be regenerated by heating it in a regular oven to 120 °C (250 °F) for two hours. However, some types of silica gel will "pop" when exposed to enough water. This is caused by breakage of the silica spheres when contacting the water.
| Model | Pore diameter (Å) | Bulk density (g/ml) | Water adsorption (wt %) | Attrition/abrasion W (%) | Used in |
|---|---|---|---|---|---|
| 3A | 3 | 0.60–0.68 | 19–20 | 0.3–0.6 | Desiccation of petroleum cracking gas and alkenes; selective adsorption of H2O in insulated glass (IG) and polyurethane |
| 4A | 4 | 0.60–0.65 | 20–21 | 0.3–0.6 | Adsorption of water. As sodium aluminosilicate, it is FDA-approved (see below) for use as a molecular sieve in medical containers to keep contents dry and as the food additive E-554 (anti-caking agent). Preferred for static dehydration in closed liquid or gas systems, e.g. in packaging of drugs, electric components and perishable chemicals; water scavenging in printing and plastics systems; drying saturated hydrocarbon streams. Adsorbed species include SO2, CO2, H2S, C2H4, C2H6, and C3H6. Generally considered a universal drying agent in polar and nonpolar media; separation of natural gas and alkenes; adsorption of water in non-nitrogen-sensitive polyurethane |
| 5A-DW | 5 | 0.45–0.50 | 21–22 | 0.3–0.6 | Degreasing and pour-point depression of aviation kerosene and diesel; alkene separation |
| 5A (small, oxygen-enriched) | 5 | 0.4–0.8 | ≥23 | ? | Specially designed for medical and health-care oxygen generators |
| 5A | 5 | 0.60–0.65 | 20–21 | 0.3–0.5 | Desiccation and purification of air; dehydration and desulphurization of natural gas; desulphurization of petroleum gas; oxygen and hydrogen production by pressure swing adsorption |
| 10X | 8 | 0.50–0.60 | 23–24 | 0.3–0.6 | High-efficiency sorption; used in desiccation, decarburization and desulphurization of gases and liquids, and separation of aromatic hydrocarbons |
| 13X | 10 | 0.55–0.65 | 23–24 | 0.3–0.5 | Desiccation, desulphurization and purification of petroleum gas and natural gas |
| 13X-AS | 10 | 0.55–0.65 | 23–24 | 0.3–0.5 | Decarburization and desiccation in the air-separation industry; separation of nitrogen from oxygen in oxygen concentrators |
| Cu-13X | 10 | 0.50–0.60 | 23–24 | 0.3–0.5 | Sweetening of aviation kerosene and corresponding liquid hydrocarbons |
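The pore sizes in the table lend themselves to a simple selection rule: a sieve can adsorb molecules whose kinetic diameter fits through its pores, while excluding larger species. The sketch below encodes that rule; the kinetic diameters are approximate literature values added here purely for illustration, and real sieve selection also weighs capacity, co-adsorption and regeneration.

```python
# Pore diameters (in Ångström) taken from the table above.
SIEVE_PORES = {"3A": 3, "4A": 4, "5A": 5, "10X": 8, "13X": 10}

# Approximate kinetic diameters in Ångström (illustrative literature values).
KINETIC_DIAMETER = {"H2O": 2.65, "CO2": 3.3, "N2": 3.64, "C3H8": 4.3}

def smallest_admitting_sieve(molecule: str):
    """Smallest-pore sieve that still admits the molecule (None if none do)."""
    d = KINETIC_DIAMETER[molecule]
    fits = [(pore, model) for model, pore in SIEVE_PORES.items() if pore >= d]
    return min(fits)[1] if fits else None

print(smallest_admitting_sieve("H2O"))   # 3A — dries a stream while excluding larger species
print(smallest_admitting_sieve("C3H8"))  # 5A
```

Picking the smallest admitting pore mirrors how 3A is used for drying: water gets in, nearly everything else stays out.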
As of April 1, 2012, the FDA has approved sodium aluminosilicate (sodium silicoaluminate) for direct contact with consumable items under 21 CFR 182.2727. Prior to this approval, Europe had used molecular sieves with pharmaceuticals, and independent testing suggested that molecular sieves met all government requirements, but the industry had been unwilling to fund the expensive testing required for government approval.
Distinction from zeolite
| Molecular sieves | Zeolites |
|---|---|
| Able to distinguish materials on the basis of their size | A special class of molecular sieves with aluminosilicates as the skeletal composition |
| May be crystalline, non-crystalline, para-crystalline or pillared clays | Highly crystalline materials |
| Variable framework charge with porous structure | Anionic framework with microporous and crystalline structure |
- 3A Molecular sieve
- 4A Molecular Sieve
- 5A Molecular Sieve
- Activated carbon
- Lime (mineral)
- Silica gel
- J. Rouquerol et al. (1994). "Recommendations for the characterization of porous solids (Technical Report)". Pure Appl. Chem. 66 (8): 1739–1758. doi:10.1351/pac199466081739.
- Brindley, George W. (1952). "Structural mineralogy of clays". Clays and Clay Minerals 1: 33–43. doi:10.1346/CCMN.1952.0010105.
- Molecular Sieves Study
- Spence Konde, "Preparation of High-Silica Zeolite Beads From Silica Gel," retrieved 2011-09-26
- Specification and Function
- "Sec. 182.2727 Sodium aluminosilicate.". U.S. Food and Drug Administration. 1. Retrieved 10 December 2012.
- Molecular Sieves' Use
Global warming is increasingly rendering Inuit and other Arctic peoples at a loss for words. They simply do not have names in their languages for the temperate species flocking up from the south.
They have plenty of ways of describing their own wildlife - some have more than 1,000 words for reindeer - but none for, say, the robin, which is only now venturing north of the treeline.
The Inuit are reduced to describing it as "the bird with the red breast" in their language, Inuktitut, said Sheila Watt-Cloutier, chairwoman of the Inuit Circumpolar Conference, the top elected representative of the Inuit worldwide.
Nor, she said, are there words for salmon, hornets and barn owls, all of which are appearing in the Arctic for the first time. "We can't even describe what we are seeing," she added.
Last month, the Arctic Council presented the most comprehensive report ever carried out on the climate of the region, which is made up of parts of the United States, Canada, Russia, Iceland, Norway, Sweden, Finland and Denmark. It concluded that the far north is warming up twice as fast as the rest of the planet.
The Arctic Climate Impact Assessment, the result of years of study by more than 250 scientists and published by Cambridge University Press, said that summer sea ice has declined dramatically; Arctic glaciers are shrinking rapidly; permafrost is thawing and warm waters are thrusting ever further into northern seas, threatening serious changes to the world's climate.
It added that polar bears "are unlikely to survive as a species" if the sea ice melts, because they would be crowded out by brown bears and grizzlies, which would be much better suited to the new environment.
Another report, on changes in North American wildlife because of climate change, concluded that the Arctic fox is already retreating in the face of a steady northwards expansion of the red fox.
Much larger and more aggressive, the red fox easily beats its Arctic cousin in fights but cannot cope with extreme cold. As global warming has taken hold, however, it has already advanced 600 miles north in parts of Canada, said the report, which was written by professors from the universities of Texas and Colorado.
Ms Watt-Cloutier said: "Climate change is happening first and fastest in the Arctic. Our elders have intimate knowledge of the land, sea and ice and have observed disturbing changes to the climate and wildlife. If we can reverse the emission of the pollution that causes climate change in time to save the Arctic from the devastating impact of global warming, then we can spare suffering for millions of people around the globe."
See also the
Dr. Math FAQ:
Browse High School Triangles and Other Polygons
Stars indicate particularly interesting answers or
good places to begin browsing.
Selected answers to common questions:
Area of an irregular shape.
Pythagorean theorem proofs.
- Area of an Octagon [10/26/2001]
I am trying to figure out the square footage of an octagon-shaped house.
Each wall measures 15 ft. in length.
- Carrying a Ladder around a Corner [02/28/2003]
A ladder of length L is carried horizontally around a corner from a
hall 3 feet wide into a hall 4 feet wide. What is the length of the longest ladder that will go around the corner?
- Congruence and Triangles [12/13/1997]
Can you please explain how to determine, using SSS, SAS, and ASA, whether two shapes are congruent or not?
- General Area Formula [02/14/2002]
Is there an all-inclusive formula for the area of a square, rectangle,
parallelogram, trapezoid, and triangle?
- Polygon Angles [02/14/1997]
What is the sum of the measure of the angles in polygons with sides 3-50?
- Polygon Diagonal Formula [8/20/1996]
Does the polygon diagonal formula apply to other parts of geometry?
- Possible Areas of a Triangle [12/27/2001]
Exploring the areas of a triangle with side lengths 6 and 7.
- Pythagorean Proof Based on the Principles of Scaling [04/04/2002]
I've decided to do a project with some connections to the Pythagorean
theorem, but the project requires innovative ideas.
- The Pythagorean Theorem [07/07/1997]
Could you please explain the Pythagorean Theorem?
- Pythagorean Theorem and non-Right Triangles [03/09/2002]
Why doesn't the Pythagorean theorem work for triangles other than right triangles?
- Pythagorean Theorem: Why Use the Converse? [07/15/2003]
Why use the converse of the Pythagorean theorem?
- Theorem or Postulate? [11/03/2002]
Shouldn't the three triangle postulates - SSS, ASA, and SAS - be called theorems?
- Triangle Congruence [05/13/2003]
I don't understand how to tell if two triangles are congruent.
- Two Crossing Ladders [07/01/2003]
Two walls are 10 ft. apart. Two ladders, one 15 ft. long and one 20
ft. long, are placed at the bottoms of the walls leaning against the
opposite walls. How far from the ground is the point of intersection?
- 16-sided Regular Polygon [07/31/2001]
How can I construct a 16-sided polygon?
- 30-60-90 and 45-45-90 Triangles [03/15/1999]
If I have a triangle that is 30-60-90 or 45-45-90, how do I find all the
sides when given only one side? Where does trigonometry come in?
- 8 Sticks, No Triangle [05/12/2003]
No triangle can be formed using any three of 8 sticks, each of integer
length. What is the shortest possible length of the longest of the eight sticks?
- AAA, ASS, SSA Theorems [11/16/2001]
Why can't AAA, ASS, and SSA be used to determine triangle congruence?
- Acute Angles in a Triangle [12/02/1998]
What is the greatest number of angles smaller than a right angle that a
triangle can have?
- Altitudes and Bisectors of a Triangle [05/25/1999]
Prove that the altitudes of a triangle are bisectors in the triangle
formed by connecting the meeting points of the altitudes with the sides
of the original triangle.
- The Ambiguous Case [04/01/2003]
How many triangles can be constructed if, for example, a=4, A=30, and
c=12? Or a=9, b=12, and A=35?
- Angle Between Two Sides of a Pyramid [10/29/1999]
How can I compute the angle formed by two sides of a frustum of a pyramid?
- The Angle Bisector and Equal Side Ratios [05/17/1998]
Given triangle ABC and angle bisector BD, show that AB/AD = BC/CD.
- Angle-bisector Proof [10/16/1997]
Prove that in a triangle ABC, a pair of angle-bisectors cannot be
- Angle Bisector Theorem [10/11/2002]
The bisector of an interior angle of a triangle divides the opposite
side internally into two segments that are proportional to the adjacent sides.
- Angle Measurements of Triangles inside Semicircle [11/26/1998]
If the area of a triangle inside a semicircle is equal to the area
outside the triangle within the semicircle, then find the values of the
acute angles in the triangle.
- Angle of Elevation [01/22/1997]
A tree 66 meters high casts a 44-meter shadow. Find the angle of
elevation of the sun.
- Angle, Side Length of a Triangle [9/4/1996]
What is the relation between the angles and side lengths of a triangle?
- Angle-Side-Side Does Not Work [11/12/2001]
Can you give me a construction to show that Angle-Side-Side does not
prove two triangles congruent?
- Angles of a Cyclic Quadrilateral [07/14/1998]
ABCD is a cyclic quadrilateral with AB parallel to DC. Angle DAC = 40
- Angles of a Triangle [02/04/2003]
Why do the angles of a triangle always add up to 180 degrees?
- Angles of Stars [08/18/1997]
What are the interior and external angles of stars built on regular
pentagons and octagons.
- Another Isosceles Triangle [10/27/1999]
A triangle has sides of length 29, 29, and 40 cm. How can I find another
isosceles triangle with the same perimeter and area that also has sides
of integral length?
- Ant and Rectangle [01/22/2001]
Does the ant walk along the diagonals of the rectangle?
- Apothem of a Hexagon [6/11/1996]
What is the formula for the apothem of a regular hexagon?
- Apothem of a Triangle [03/21/2001]
Find the apothem and radius of a triangle with a side of length 12.
- Applying Euler's Methods [07/27/1999]
Questions about prime divisors, triangle constructions, decomposing
quartic polynomials, and rational roots.
- Approximating Pi using Geometry [08/12/1998]
I need to know a simple method to find the approximate value of pi using geometry.
- Arcs Inside a Square [07/25/1999]
What is the area of the figure created by the intersection of two arcs
drawn in a square of sidelength 5 units?
- Area and Perimeter in Polygons [06/24/1999]
How can I prove the formula A = (a^2n)/(4tan(180/n)) for computing the
area of a regular n-gon with sidelength a? How does this compare to the
area of a circle?
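The last entry's formula is easy to explore numerically. A quick Python sketch (not part of the archive) verifies A = a²n/(4 tan(180°/n)) on a unit hexagon, then fixes the apothem at r = 1 so the n-gon area, n·tan(π/n), shrinks toward the area π of its inscribed circle:

```python
import math

def ngon_area(a, n):
    """Area of a regular n-gon with side length a: A = a^2 * n / (4 tan(pi/n))."""
    return a * a * n / (4 * math.tan(math.pi / n))

# Sanity check: a unit-side regular hexagon has area 3*sqrt(3)/2.
assert abs(ngon_area(1, 6) - 3 * math.sqrt(3) / 2) < 1e-12

# With the apothem fixed at r = 1, the side is a = 2*tan(pi/n), and the
# polygon area decreases toward pi * r^2 as the polygon approaches a circle.
for n in (3, 6, 12, 96, 1000):
    a = 2 * math.tan(math.pi / n)
    print(n, round(ngon_area(a, n), 5))
```

The printed areas decrease toward π ≈ 3.14159 as n grows, which is the comparison the question asks about.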
I wish just to present the problem and then muse on its educational significance both for my personal learning of mathematics, and for that of my students.
N is the 4-digit integer 6_9_. If these two digits are reversed, explain why the resulting number must be 2970 more.
(posted by @dmarain to @cuttheknotmath)
I immediately look toward the base system when digits are switching around. When a digit moves from one place to another, it takes on a new meaning. I expect this new meaning will give me the difference I desire.
The first number can be represented as (with a and b base-10 digits):
6×1000 + a×100 + 9×10 + b×1
When we switch the digits, the new number becomes:
9×1000 + a×100 + 6×10 + b×1
The second is always larger (which is an interesting discussion to have with students), so we subtract to keep the difference positive. We see that because a and b did not change position in the place-value system, their values remain constant. Therefore, they cancel out upon subtraction and have no bearing on the final solution. This is why the difference is a constant regardless of the two digits.
9000 + 60 − (6000 + 90) = 9060 − 6090 = 2970
In this way, we show that this fact is always true in the general case (there are 100 cases in total: two free digits, ten choices each).
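For the skeptics, all 100 cases can also be checked by brute force. This little Python sketch (mine, not part of the original problem) confirms the constant difference:

```python
# Check every 4-digit number of the form 6a9b: swapping the 6 and the 9
# (the thousands and tens digits) always adds exactly 2970.
def reversed_difference(a, b):
    original = 6000 + 100 * a + 90 + b   # 6a9b
    swapped = 9000 + 100 * a + 60 + b    # 9a6b
    return swapped - original

differences = {reversed_difference(a, b) for a in range(10) for b in range(10)}
print(differences)  # {2970} — a single value across all 100 cases
```

A one-element set is exactly the "list of examples" proof made exhaustive, which is a nice bridge to the algebraic argument above.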
To students, the idea of proof can seem elitist. In high school, a list of examples where a claim holds is often treated as sufficient: once the list gets long enough, the proof is considered concluded. In this case, it would be easy enough to show the students that there are 100 cases; this may discourage a plug-and-chug method. Instead, number tricks like this help students realize two things:
1. The basic qualities of our base 10 number system
2. The many interesting patterns that numbers create
In my curriculum, both of these topics are mandated. Posing this problem to a class of grade 10s gives them opportunity to create hypotheses, test them out, and then dive deeper into the number system to look around. I would imagine that such an activity would pair nicely with one on scientific notation or binary numbers. If they are really keen, a proof like the one above may be deciphered. Then we drag algebra into the mix as well.
There are many areas of useful mathematics that are left out of textbooks. As teachers, our pursuit of learning can greatly affect our teaching.
Starting with the second session, there will be a brief quiz during each session. Each review quiz counts for 5% of the course grade.
In a house in Cambridge, the water from the faucet suddenly started showing some particulate matter, which is suspected to be copper from a pipe. It was brought to the MIT Reactor for analysis. You are asked to calculate the activity that would be produced by thermal neutron activation, if 1 gram of copper is irradiated in the reactor flux of 4 × 10^12 n·cm^-2·s^-1 for 2 hours.
The answer should contain: the activity equation, the parameters and values used, and the activity calculated.
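A numerical sketch of the activation equation A = Nσφ(1 − e^(−λt)) for the dominant ⁶³Cu(n,γ)⁶⁴Cu channel is below. The nuclear data (⁶³Cu abundance, cross section, ⁶⁴Cu half-life) are nominal literature values supplied as assumptions, since the problem does not give them; the short-lived ⁶⁶Cu channel is ignored here.

```python
import math

# Activation equation: A = N * sigma * phi * (1 - exp(-lambda * t))
# Nominal nuclear data for 63Cu(n,gamma)64Cu — assumed literature values:
N_A = 6.022e23             # Avogadro's number, atoms/mol
M_CU = 63.55               # molar mass of natural copper, g/mol
ABUNDANCE_63 = 0.692       # isotopic abundance of 63Cu
SIGMA = 4.5e-24            # thermal (n,gamma) cross section, cm^2 (~4.5 b)
HALF_LIFE_S = 12.7 * 3600  # half-life of 64Cu, s

phi = 4e12                 # reactor flux from the problem, n cm^-2 s^-1
t = 2 * 3600               # irradiation time, s
mass = 1.0                 # g of copper

N_target = mass / M_CU * N_A * ABUNDANCE_63    # 63Cu atoms in the sample
lam = math.log(2) / HALF_LIFE_S                # decay constant of 64Cu, s^-1
activity = N_target * SIGMA * phi * (1 - math.exp(-lam * t))  # Bq

print(f"64Cu activity after 2 h: {activity:.2e} Bq ({activity / 3.7e10:.2f} Ci)")
```

With these assumed constants the irradiation reaches only about 10% of saturation, since 2 h is short compared with the 12.7 h half-life.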
An entrepreneur wants to know whether a particular area of interest has molybdenum and antimony. What radioisotopes can be used for thermal neutron activation analysis? Provide all the relevant information for the X(n,γ)Y reaction: identify the parent and daughter nuclei, the activation cross section, the half-life of the daughter product, and the predominant gamma-ray energy for identification.
An unknown sample powder was found in an envelope. It was brought to the reactor for analysis. The gamma spectrum revealed significant gamma-ray peaks at energies of 320 keV, 1368 keV and 2754 keV. Identify the content of the powder.
The weights of the empty vial and of the vial + sample powder were each taken 6 times. Write the formula for the propagation of errors, calculate the error in the weight of the sample powder, and interpret the results.
Weights (in grams) of the empty vial, weighed separately for 6 times:
1.14470, 1.14475, 1.14472, 1.14476, 1.14478, 1.14475
Weights (in grams) of the vial + sample powder, weighed separately for 6 times:
1.35041, 1.35040, 1.35029, 1.35018, 1.35026, 1.35035
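One common reading of problem 4, sketched in Python: the sample weight is the difference of the two means, and the propagated error for a difference is the quadrature sum of the two spreads. Whether to use sample standard deviations (as here) or standard errors of the mean is a choice the problem leaves open.

```python
import statistics as st

vial = [1.14470, 1.14475, 1.14472, 1.14476, 1.14478, 1.14475]
vial_plus = [1.35041, 1.35040, 1.35029, 1.35018, 1.35026, 1.35035]

w = st.mean(vial_plus) - st.mean(vial)         # sample powder weight, g
s1, s2 = st.stdev(vial), st.stdev(vial_plus)   # sample standard deviations, g

# Propagation of errors for a difference of two measured quantities:
# sigma_w = sqrt(s1^2 + s2^2)
sigma_w = (s1 ** 2 + s2 ** 2) ** 0.5

print(f"sample weight = {w:.5f} g +/- {sigma_w:.5f} g")
```

Note that the filled vial scatters noticeably more than the empty one, so it dominates the propagated error.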
Arsenic is determined in river sediment samples. The abundance of As in the standard is 145 ppm. The gamma-ray energy of 76As is 559 keV. The gamma peak areas of the sample and standard are respectively, 32699, and 1533496 for the same counting times. The delays from the end of irradiation for the sample and the standard counting are 5.953 d, and 4.252 d. The weights of the sample and standard are 0.38476 g and 0.41669 g. Calculate the abundance in the sample. Estimate the propagation of errors. You may use the weighing error from problem 4 above.
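Problem 5 is a comparator-method calculation. The sketch below computes only the abundance (not the error budget), decay-correcting both peak areas back to the end of irradiation; the ⁷⁶As half-life of about 26.3 h is a literature value assumed here, since the problem does not state it.

```python
import math

# Comparator INAA: abundance = abundance_std * (A_s / A_std) * (w_std / w_s),
# after decay-correcting each peak area to the end of irradiation.
HALF_LIFE_D = 26.3 / 24            # assumed 76As half-life, days
lam = math.log(2) / HALF_LIFE_D    # decay constant, d^-1

abundance_std = 145.0              # ppm As in the standard
peak_sample, peak_std = 32699.0, 1533496.0
delay_sample, delay_std = 5.953, 4.252   # days after end of irradiation
w_sample, w_std = 0.38476, 0.41669       # g

corr_sample = peak_sample * math.exp(lam * delay_sample)
corr_std = peak_std * math.exp(lam * delay_std)

abundance = abundance_std * (corr_sample / corr_std) * (w_std / w_sample)
print(f"As in sediment: {abundance:.1f} ppm")
```

The extra 1.7 days of decay on the sample side nearly triples its corrected count rate relative to the standard, which is why the decay correction cannot be skipped.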
In the MIT-EAPS INAA Laboratory, an internal standard has been analyzed 10 times. Nd is one of the Rare Earth Elements. Its measured abundance values (in ppm) are
24.0 ± 0.7, 23.7 ± 0.7, 24.0 ± 0.5, 24.3 ± 0.9, 23.7 ± 1.0, 24.3 ± 1.0, 24.0 ± 0.7, 23.8 ± 0.6, 24.0 ± 0.7, 24.7 ± 0.9.
The reference value of this standard is 24.7 ± 0.3. Write the formulae and calculate the precision and accuracy of this measurement. Express the precision and accuracy in percentage.
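A sketch of problem 6 under one common convention: precision as the relative standard deviation of the repeated measurements, and accuracy as the relative deviation of the mean from the reference value.

```python
import statistics as st

measured = [24.0, 23.7, 24.0, 24.3, 23.7, 24.3, 24.0, 23.8, 24.0, 24.7]
reference = 24.7

mean = st.mean(measured)
# Precision: relative standard deviation of the repeated measurements.
precision_pct = 100 * st.stdev(measured) / mean
# Accuracy: relative deviation of the mean from the reference value.
accuracy_pct = 100 * abs(mean - reference) / reference

print(f"mean = {mean:.2f} ppm, precision = {precision_pct:.1f}%, accuracy = {accuracy_pct:.1f}%")
```

Both come out in the low single-digit percent range, typical of routine INAA on rare earth elements.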
Our department chairperson came to know that an equipment grant would be available soon, so a memo was sent to our gamma-spectroscopy group asking about the importance of a gamma spectrometer. Write up the usefulness of a gamma spectrometer and describe its components. Then look at the latest products on the Canberra Web site under its product category, and write one or two lines about each product you would select.
The Department of Agriculture has learned that some fruit trees in Florida are contaminated, and wants to send some dry leaves for analysis of arsenic. Suggest a suitable standard.
The MIT Libraries has asked you for some suggestions for new books on neutron activation analysis. What book titles can you suggest, which they do not already own?
Now that you are familiar with trace element analysis of materials by neutron activation analysis, briefly describe its application by giving one example.
Electric current and magnetism
When two magnets are close together, magnetic forces are at work. Two magnets repel or attract each other. A wire carrying an electric current and a magnet also repel or attract, because an electric current creates a magnetic field.
Materials: solder wire, enameled wire, horseshoe magnet, battery, switch, and conducting wire.
Close the switch. What happens?
Change the direction of the electric current and switch the magnets.
The Physics Philes, lesson 16: Appreciate the Gravity of the Situation
In which balls are thrown, gravity works, and energy is added.
Holy cow, you guys. This is the 16th post of this series and, frankly, I long ago ran out of clever ways to introduce each installment. (You may have noticed.) So I’m just going to jump right into it.
Last week I attempted to explain gravitational and elastic potential energy. I’m running a little behind this week, so I’m going to have to save elastic potential energy for next week. I spent too much time last week at the Renaissance Festival and watching the special features on my Blu-ray of The Avengers. But for now, it’s game on, gravitational potential energy. GAME. ON.
Let’s say that you throw a 0.145 kg ball straight up in the air. By throwing the ball you give it an upward velocity of 20.0 m/s. If we ignore air resistance (which, of course, we will do), how high will the ball fly?
Because the ball is being thrown straight up, we don't need an x-axis. I'll treat point one as the point at which the ball leaves your hand and point two as the point where the ball reaches its maximum height. The equation we need is conservation of energy, K1 + U1 = K2 + U2, with K = (1/2)mv² and U = mgy.
OK. Which variables do we know? As the free body diagram shows, our target variable is y2, the ball’s maximum height. Luckily, we know everything else:
m = 0.145 kg
v1 = 20.0 m/s
v2 = 0 m/s
g = 9.80 m/s²
y1 = 0
The velocity at point two is zero because the ball will have reached its maximum height and will instantaneously be at rest. Since y1 = 0, the potential energy at point one is zero. The same is true for kinetic energy at point two; since the ball is at rest, kinetic energy at point two is zero. Do you understand why this would be? Remember that potential energy is mass times gravity times height. If the height is zero, then the whole expression must equal zero. Kinetic energy is 1/2 times mass times velocity squared. If the velocity is zero, the entire expression must equal zero. Dig?
So we know that the gravitational potential energy at point one is zero and the kinetic energy at point two is zero. From this we know that mgy2 = (1/2)mv1², so y2 = v1²/(2g) = (20.0 m/s)² / (2 × 9.80 m/s²) = 20.4 m.
Voila! Neglecting air resistance, that ball went 20.4 m in the air. But what if we have other forces doing work on the ball, as well? What if your hand moved up 0.50 m while you’re throwing the ball. Your hand once again gives the ball an upward velocity of 20.0 m/s. Assuming the force your hand exerts is constant, what is the magnitude of the force? And what is the speed of the ball 15.0 m above where it left your hand?
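Before tackling the second scenario, the first result is easy to verify in a couple of lines (a quick sketch, not part of the original post):

```python
# Numeric check of the first problem: with the release point as the origin,
# (1/2) m v1^2 = m g y2, so the mass cancels and y2 = v1^2 / (2 g).
g = 9.80   # m/s^2
v1 = 20.0  # m/s, speed as the ball leaves your hand
y2 = v1 ** 2 / (2 * g)
print(f"maximum height: {y2:.1f} m")  # 20.4 m
```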
In this case, there is nongravitational work being done, so we need this equation: K1 + U1 + W_other = K2 + U2.
In this situation, we have three points: y1 = −0.50 m (taking the release point as the origin, the throw starts half a meter below it); y2 = 0 m; and y3 = 15.0 m. We also know that the velocity at y1 is 0 m/s (because the ball starts at rest) and the velocity at y2 is 20.0 m/s. The velocity at y3 is one of our target variables. The nongravitational force, which is our other target variable, acts only between points y1 and y2. Let's determine the nongravitational force first.
To figure out the nongravitational force, we need to calculate the other work indicated on the left side of the above equation. To do that, we first need the energies at points one and two: K1 = 0, K2 = (1/2)(0.145 kg)(20.0 m/s)² = 29.0 J, U1 = (0.145 kg)(9.80 m/s²)(−0.50 m) = −0.71 J, and U2 = 0.
The gravitational potential energy at point one is negative because the ball starts below the origin, but don't worry about that. To find the other work done, we add the difference between the kinetic energies at points one and two to the difference between the gravitational potential energies at those same points: W_other = (29.0 J − 0) + (0 − (−0.71 J)) = 29.7 J.
So the other work done is equal to 29.7 J. Now how do we use this number to find the force? Remember the work equation? W = Fs, where s is the displacement. The displacement in this case is (y2 − y1), because the other force acts only between those points. Rearranging, F = W_other/(y2 − y1) = 29.7 J / 0.50 m = 59.4 N.
Great! We’ve found the force! Now we need to find the speed at point three. Between points two and three, we don’t need to worry about other work, so we can just use the equations from the first problem.
When I first encountered these equations, I was a little intimidated, especially when they are expressed in symbols. For a while I was getting the kinetic energy at point one confused with the kinetic energy at point two, and the same with the potential energies. But as I spent some time with the equations, it all became clear. Or clearer, at least.
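For completeness, here's a numeric check of the whole second scenario (a sketch with my own variable names; the origin sits at the release point, so the hand starts at y1 = −0.50 m):

```python
# Numeric check of the second problem.
m, g = 0.145, 9.80             # kg, m/s^2
y1, y2, y3 = -0.50, 0.0, 15.0  # hand start, release point, and 15 m up (m)
v1, v2 = 0.0, 20.0             # m/s at points one and two

K1, K2 = 0.5 * m * v1 ** 2, 0.5 * m * v2 ** 2
U1, U2, U3 = m * g * y1, m * g * y2, m * g * y3

W_other = (K2 - K1) + (U2 - U1)  # work done by the hand
F = W_other / (y2 - y1)          # constant hand force over 0.50 m

K3 = K2 - (U3 - U2)              # only gravity acts between points 2 and 3
v3 = (2 * K3 / m) ** 0.5

print(f"W_other = {W_other:.1f} J, F = {F:.1f} N, v3 = {v3:.1f} m/s")
```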
I wish I had time to get into elastic potential energy this week, but I can't. Sorry. Next week we'll explore elastic potential energy in all its snappy glory!
|Breakthrough Research on Platinum–Nickel Alloys|
Two out of three of the kinetic barriers to the practical use of polymer electrolyte membrane (PEM) hydrogen fuel cells in automobiles have been breached: the impractically high amount of extra energy needed for the oxidation reduction reaction (ORR) on the catalyst and the loss of catalytic surface areas available for ORR. Using a combination of probes and calculations, a group of scientists has demonstrated that the Pt3Ni(111) alloy is ten times more active for ORR than the corresponding Pt(111) surface and ninety times more active than the current state-of-the-art Pt/C catalysts used in existing PEM fuel cells. This new variation of the platinum–nickel alloy is the most active oxygen-reducing catalyst ever reported.
For practical purposes, automobiles using PEM fuel cells are still test vehicles. Historically, there have been a number of obstacles to the use of such fuel cells in vehicles. These obstacles have centered around the kinetic limitations on the oxygen reduction reaction (ORR). First, the substantial overpotential, or extra energy needed for the ORR at practical operating current densities, reduces the thermal efficiency to well below the thermodynamic limits, typically to about 43% at 0.7 V (versus a theoretical thermal efficiency of 83% at 1.23 V). Second, the dissolution and/or loss of Pt surface area in the cathode, due to degradation by unwanted byproducts such as hydroxides, must be greatly reduced. Third, an approximately fivefold reduction in the amount of platinum in current PEM fuel cell stacks is needed to meet the cost requirements of large-scale automotive applications.
Previous studies led to incremental improvements in catalyst performance, but large increases have been elusive. However, thanks to researchers from Berkeley Lab, Argonne National Laboratory, the University of Liverpool, and the University of South Carolina, the first and second obstacles to an efficient PEM fuel cell have been surmounted.
A combination of in situ and ex situ surface-sensitive probes and density functional theory (DFT) calculations was used to study ORR on Pt3Ni(hkl) single-crystal surfaces, identify which surface properties govern the variations in reactivity of PtNi catalysts, and determine how surface structures, surface segregation, and intermetallic bonding affect the ORR kinetics. Techniques used include low-energy electron diffraction spectroscopy(LEEDS), Auger electron spectroscopy (AES), low-energy ion scattering (LEIS), and synchrotron-based high-resolution ultraviolet photoemission spectroscopy (UPS). UPS data were obtained using ALS Beamline 9.3.2.
The researchers worked with three single crystals of Pt3Ni alloy: 100, 110, and 111. All three crystals, when compared with their pure platinum counterparts, showed improvement in oxidation reduction, but the Pt3Ni(111) alloy showed the most significant improvement.
The Pt3Ni(111) surface has an unusual electronic structure (d-band center position) and arrangement of surface atoms in the near-surface region. Under operating conditions relevant to fuel cells, its near-surface layer exhibits a highly structured compositional oscillation in the outermost and third layers, which are Pt-rich, and in the second atomic layer, which is Ni-rich. This causes a weakening of the bonds between the Pt surface atoms and the OH– molecules. The weakening increases the number of active sites available for O2 adsorption. As the kinetics of O2 reduction are determined by the number of free Pt sites available for the adsorption of O2, the intrinsic catalytic activity at the fuel-cell-relevant potentials (E > 0.8 V) has been found to be ten times more active than the corresponding Pt(111) surface. The observed catalytic activity for the ORR on Pt3Ni(111) is the highest ever observed on cathode catalysts, including the Pt3Ni(100) and Pt3Ni(110) surfaces.
The next step is to engineer nanoparticle catalysts with electronic and morphological properties that mimic the surfaces of pure single crystals of Pt3Ni(111). In this way, the amount of platinum will be reduced without a loss in cell voltage, while also maintaining the maximum power density. This will drive down the cost, and the third obstacle to an efficient, affordable hydrogen fuel cell will disappear.
Research conducted by V.R. Stamenkovic (Berkeley Lab and Argonne National Laboratory), B. Fowler and C.A. Lucas (University of Liverpool, U.K.), B.S. Mun and P.N. Ross (Berkeley Lab), G. Wang (University of South Carolina), and N.M. Marković (Argonne National Laboratory).
Research funding: U.S. Department of Energy, Office of Basic Energy Sciences (BES); General Motors Corp.; and the U.K. Engineering and Physical Sciences Research Council. Operation of the ALS is supported by the U.S. Department of Energy, Office of Basic Energy Sciences (BES).
Publication about this research: V.R. Stamenkovic, B. Fowler, B.S. Mun, G. Wang, P.N. Ross, C.A. Lucas, N.M. Marković, "Improved oxygen reduction activity on Pt3Ni(111) via increased surface site availability," Science 315, 493 (2007).
Science Fair Project Encyclopedia
Civil defense siren
A civil defense siren, air raid siren, or tornado siren is an electrically powered mechanical device for generating sound, used to warn of approaching danger and to indicate when the danger has passed. Initially designed to warn of air raids, sirens were adapted to warn of nuclear attack and of natural phenomena such as tornadoes. The generalized nature of the siren has led to its being largely replaced by more specific warnings, such as the U.S. Emergency Broadcast System.
Sound is generated by a motor driving a shaft with a fan mounted at either end, one fan having a few more blades than the other. Around each fan is a housing with a number of slots cut to match the number of fan blades. The blades draw air in at the end and force it out through the slots in the housing; because of this design, the air output is cut on and off alternately, producing the sound. Modern sirens can reach 140 dB at 30 metres (100 feet).
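Each rotor chops the airflow once per port per revolution, so its fundamental pitch is simply (number of ports) × (revolutions per second). The sketch below uses hypothetical port counts and motor speed, chosen only to show how unequal blade counts give the characteristic two-tone sound:

```python
# Fundamental siren pitch: ports * rev/s. The 10/12-port rotors and 1500 RPM
# below are hypothetical illustration values, not from any specific siren.
def siren_tone_hz(ports: int, rpm: float) -> float:
    return ports * rpm / 60.0

rpm = 1500
low, high = siren_tone_hz(10, rpm), siren_tone_hz(12, rpm)
print(f"low rotor: {low:.0f} Hz, high rotor: {high:.0f} Hz")  # 250 Hz and 300 Hz
```

Varying the motor speed shifts both tones together, which is how the rising-and-falling warning signal described below is produced.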
A number of different sound forms can be created. During World War II, for a "red warning" of approaching danger, the siren was run normally, producing a tone that rose and fell regularly between one high and one low pitch, corresponding to the number of blades on each fan and the speed at which they turned. A "white warning" (all clear) was a single continuous tone. Sometimes there was a "take cover" warning for immediate danger, in which the power to the motor was cut for a moment at intervals to change the tone produced. After World War II, two further warnings were introduced for nuclear attack: a "grey warning" indicated approaching fallout with 2½ minutes of long steady tones divided by equal periods of silence, the silence being created with a manual shutter. A "black warning", also for manual sirens, was either a Morse code 'D' (–··) or three quick tones, indicating imminent danger of fallout.
Public sirens are often sounded by local municipalities when a tornado warning has been issued for an area by a government agency, such as the National Weather Service in the U.S., or the Meteorological Service of Canada. Sirens may also be sounded even before a warning, if a tornado, waterspout, or other funnel cloud is spotted by police, firefighters, or other personnel trained as a spotter. In some places, such as Cobb County, Georgia, sirens are sounded if a severe thunderstorm warning is issued while there is a tornado watch out for the region.
Some areas, such as Mexico City, have warning systems for major earthquakes. Because the seismic detection system can give several seconds notice of earthquakes (which generally occur over 100 km away on the Pacific coast), lives can be saved when people can scramble to greater safety, or at least less danger. This is not as effective where major earthquakes occur very near or even right under cities, such as Los Angeles or San Francisco.
The garnet group of minerals shows crystals with a habit of dodecahedra and trapezohedra. They are nesosilicates with the same general formula, A3B2(SiO4)3. The chemical elements in garnet include calcium, magnesium, aluminium, iron(II), iron(III), chromium, manganese, and titanium. Garnets show no cleavage and a dodecahedral parting. Fracture is conchoidal to uneven; some varieties are very tough and are valuable for abrasive purposes. Hardness is 6.5–7.5, specific gravity is 3.1–4.3, luster is vitreous to resinous, and they can be transparent to opaque.
There is a misconception that garnet is a red gem, but in fact garnets come in a wide variety of colors including purple, red, orange, yellow, green, brown, black, and colorless. The lack of a blue garnet was remedied in 1998 following the discovery of color-change blue to red/pink material in Bekily, Madagascar; these stones are very rare. Color-change garnets are by far the rarest garnets except uvarovite, which does not come in cuttable sizes. In daylight, their color can be shades of green, beige, brown, grey and, rarely, blue, changing to a reddish or purplish/pink color in incandescent light. By composition, these garnets are a mix of spessartine and pyrope, as are Malaya garnets. The color change of these new garnets is often more intense and more dramatic than that of top-quality alexandrite, which is frequently disappointing yet still sells for many thousands of dollars (US) per carat. It is expected that blue color-change garnets will match alexandrite prices or even exceed them, as the color change is often better and these garnets are much rarer. The blue color-change type is mainly caused by relatively high amounts of vanadium (about 1 wt.% V2O3).
Six common varieties of garnet are recognized based on their chemical composition: pyrope, almandine (or carbuncle), spessartite, grossularite (varieties of which are hessonite or cinnamon-stone and tsavorite), uvarovite, and andradite. The garnets make up two solid-solution series: 1. pyrope-almandine-spessartite and 2. uvarovite-grossularite-andradite.
Garnet Group Members
Grossularite is a calcium-aluminium garnet with the formula Ca3Al2(SiO4)3, though the calcium may in part be replaced by ferrous iron and the aluminium by ferric iron. The name grossularite is derived from the botanical name for the gooseberry, grossularia, in reference to the green garnet of this composition that is found in Siberia. Other shades include cinnamon brown (the cinnamon-stone variety), red, and yellow. Because of their inferior hardness to zircon, which the yellow crystals resemble, they have also been called hessonite, from the Greek meaning inferior. Grossularite is found in contact-metamorphosed limestones with vesuvianite, diopside, wollastonite, and wernerite.
One of the most sought after varieties of gem garnet is the fine green grossular garnet from Kenya and Tanzania called tsavorite. This garnet was discovered in the 1960s in the Tsavo area of Kenya, from which the gem takes its name.
Pyrope, from the Latin pyropos, means similar to fire. It is ruby-red in color and chemically a magnesium aluminium silicate with the formula Mg3Al2(SiO4)3, though the magnesium can be replaced in part by calcium and ferrous iron. The color of pyrope varies from deep red to almost black. Transparent pyropes are used as gemstones.
A variety of pyrope from Macon County, North Carolina, is a violet-red shade and has been called rhodolite, from the Greek meaning "a rose." In chemical composition it may be considered essentially an isomorphous mixture of pyrope and almandite, in the proportion of two parts pyrope to one part almandite. Pyrope has the nicknames Cape ruby, Arizona ruby, California ruby, Rocky Mountain ruby, and Bohemian garnet (from the Czech Republic). Another intriguing find is the blue color-change garnet from Madagascar, a pyrope-spessartine mix. The color of these blue garnets is not like sapphire blue in subdued daylight but more reminiscent of the greyish blues and greenish blues sometimes seen in spinel. However, in white LED light the color is equal to the best cornflower-blue sapphire or D-block tanzanite; this is due to the blue garnet's ability to absorb the yellow component of the emitted light.
Almandite, sometimes called almandine, is the modern gem known as carbuncle (though originally almost any red gemstone was known by this name). The term "carbuncle" is derived from the Latin meaning "little spark." The name almandite is a corruption of Alabanda, a region in Asia Minor where these stones were cut in ancient times. Chemically, almandite is an iron-aluminium garnet with the formula Fe3Al2(SiO4)3; the deep red transparent stones are often called precious garnet and are used as gemstones (being the most common of the gem garnets). Almandite occurs in metamorphic rocks like mica schists, associated with minerals such as staurolite, kyanite, andalusite, and others. Almandite has the nicknames Oriental garnet, almandine ruby, and carbuncle.
Spessartite or spessartine is manganese aluminium garnet, Mn3Al2(SiO4)3. Its name is derived from Spessart in Bavaria. It occurs most often in granite pegmatites and allied rock types and in certain low-grade metamorphic phyllites. Spessartite of a beautiful orange-yellow is found in Madagascar. Violet-red spessartites are found in rhyolites in Colorado and Maine.
Uvarovite is a calcium chromium silicate with the formula Ca3Cr2(SiO4)3. It is a rather rare garnet, bright green in color, usually found as small crystals associated with chromite in peridotite and serpentinite, or sometimes in crystalline marbles and schists. It is found in the Urals of Russia and at Outokumpu, Finland. Knorringite is a rare variety in which magnesium replaces calcium. It is often found in kimberlites and used as an indicator mineral in the search for diamonds.
Andradite is a calcium-iron garnet, Ca3Fe2(SiO4)3. It is of variable composition and may be red, yellow, brown, green, or black. The recognized subvarieties are topazolite (a golden yellow or green variety), demantoid (green), and melanite (black). Andradite is found both in deep-seated igneous rocks like syenite and in serpentines, schists, and crystalline limestone. Demantoid has been called the "emerald of the Urals" from its occurrence there, and is one of the most prized of garnet varieties.
In yttrium iron garnet (YIG), Y3Fe2(FeO4)3, the five iron(III) ions occupy two octahedral and three tetrahedral sites, with the yttrium(III) ions coordinated dodecahedrally by eight oxygen ions. The iron ions in the two coordination sites exhibit different spins, resulting in magnetic behaviour. By substituting specific sites with rare earth elements, for example, interesting magnetic properties can be obtained. One example of this is gadolinium gallium garnet, Gd3Ga2(GaO4)3, which is synthesized for use in magnetic bubble memory. Yttrium aluminium garnet (YAG), Y3Al2(AlO4)3, is used as a synthetic gemstone. When doped with neodymium (Nd3+), these YAG crystals are useful as the lasing medium in lasers.
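As a rough illustration of why the opposed spins on the two iron sublattices still leave a net moment, here is a minimal spin-counting sketch. It is a back-of-the-envelope estimate, assuming the textbook spin-only value of about 5 Bohr magnetons per high-spin Fe3+ ion and fully antiparallel sublattices; under those assumptions it reproduces YIG's well-known net moment of about 5 Bohr magnetons per formula unit:

```python
# Ferrimagnetic spin count for yttrium iron garnet, Y3Fe2(FeO4)3.
# Assumptions: each high-spin Fe3+ (five unpaired d electrons)
# contributes ~5 Bohr magnetons, and the tetrahedral and octahedral
# sublattices point in opposite directions.
MU_B_PER_FE3 = 5      # spin-only moment of one Fe3+ ion, in Bohr magnetons
tetrahedral_fe = 3    # Fe3+ ions on tetrahedral sites per formula unit
octahedral_fe = 2     # Fe3+ ions on octahedral sites per formula unit

net_moment = (tetrahedral_fe - octahedral_fe) * MU_B_PER_FE3
print(net_moment)  # 5 Bohr magnetons per formula unit
```

The same counting shows why rare-earth substitution changes the magnetic properties: a substituted ion with its own moment on one sublattice shifts the balance between the two sums.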
The Osmundaceae family of ferns is the only family of the order Osmundales, which in turn is the only order in the class Osmundopsida. This is an ancient and fairly isolated group, often known as the "flowering ferns" because of the striking aspect of their ripe sporangia. The sporangia are borne naked on highly dimorphic fronds. They are larger than those of most other ferns.
Ferns of this family form heavy rootstocks with thick mats of wiry roots. Many species form short trunks; in the case of the genus Todea, they are sometimes considered as tree ferns because of the trunk, although it is relatively short.
The leaf tissue ranges from very coarse, almost leathery in the case of the Cinnamon fern (Osmunda cinnamomea), to delicate and translucent, as in the case of the genus Leptopteris.
Since the early 1980s, a massive dysfunction of reef organisms has been observed. It affects not only corals but also other animal-algae symbioses. The coral loses its colors, hence the common name "bleaching".
The first described mass bleaching began in Bonaire in June 1979 and ended in February 1980. It occurred on the windward coast of Bonaire from 10 to 40 m depths. Bleaching also occurred in several areas in Florida and the Great Barrier Reef. In 1987-1988, the bleaching occurred worldwide and was the most extensive bleaching ever recorded.
Recent mass bleaching has always taken place during the warmest part of the year. The temperature is in most cases characterized as "above normal". Together with experimental evidence that raising the temperature above the local normal can induce bleaching, this has led temperature to be considered the critical factor. Mass bleaching might be the first signal of global warming! It is still unknown whether temperature is the cause of mass bleaching.
High levels of irradiance were also suggested as a cause of mass bleaching. In addition to enhancing photosynthesis, light may also warm the upper surface. Dozens of reports disproved this suggestion after research in turbid water.
High UV irradiance has also been suggested as a contributor to bleaching. The fact most difficult to reconcile with the UV hypothesis is the occurrence of bleaching at 90 meters depth. Bleaching has also occurred in an underwater cave. An experiment with UV-opaque and UV-transparent plastic plates at 18 meters yielded similarly reduced bleaching of larger foraminifers underneath both of them after four months, but densities increased only under the UV-opaque shields.
The main knowledge on the reef mass bleaching phenomenon may be summarized as follows:
- It is a global phenomenon, observed in shallow marine tropical areas;
- It is very recent and sudden (since 1979);
- It is probably increasing in magnitude and becoming chronic;
- No coherent spatial patterns emerge (depth, reef zonation, geographic area), except perhaps a relationship with groove structures and passes;
- It affects almost all, probably all, and quasi-exclusively animal-algae symbioses, which constitute the foundation of reef ecosystems;
- In very general terms, it preferentially affects fast-growing, or better said, highly photosynthesizing associations;
- It affects them en masse, with subsequent variable mortality.
After more than a decade of mass bleaching events, it has not been possible to identify their origin clearly, though it is widely believed to be related to "global changes":
- It is generally associated with the warmest temperatures, but not with exceptional ones in a few well-studied cases. It is as yet difficult to relate it to global warming;
- It is frequently associated with calm weather, which might have various consequences (on hydrological patterns, water transparency, air-sea exchanges, and physiology). Changes in the last decade of other weather parameters during bleaching remain to be studied;
- Light is often involved, probably in relation to photosynthesis;
- UV cannot be directly, and can hardly be indirectly, responsible for bleaching, as UV levels have not yet increased in the tropics;
- Carbon dioxide build-up remains the last serious hypothesis, though largely because its effects are almost unknown.
Mass bleaching is a threat to all reef ecosystems, potentially more dangerous than any other disturbance, such as sea-level rise or fall, local pollution, or a future UV increase. The lack of an explanation for the phenomenon makes it even more alarming than the impact of ozone depletion in polar and temperate regions or acid rain on temperate forests, for which remedies are known.
In 2012, a report of the International Union for Conservation of Nature (IUCN) showed that around the islands of the Dutch Caribbean (Bonaire, Saba, and Sint Eustatius), together with Aruba, Curaçao, and Sint Maarten, only 30% of the coral is still alive.
Special thanks to Erez Wolf for his underwater pictures.
The following examples show you how to use string literals. String literals are widely used to identify filenames or when messages are displayed to users. First, we'll look at single-quoted strings, then double-quoted strings.
A single-quoted string is pretty simple. Just surround the text that you'd like to use with single quotes, e.g. 'Hello, World!'.
Strings are pretty simple. But what if you wanted to use a single quote inside the literal? If you did this, Perl would think you wanted to end the string early, and a compiler error would result. Perl uses the backslash (\) character to indicate that the normal function of the single quote (ending a literal) should be ignored for a moment.
Tip: The backslash character is also called an escape character, perhaps because it lets the next character escape from its normal interpretation.
A literal is given below. Notice how the single quote is used.
So for example we can do
'These are David Marshall\'s Course Notes'
or more advanced we can do
'Fiona asked: "Are you enjoying David\'s Perl Course"'
The single quotes are used here specifically so that the double quotes can be used to surround the spoken words. Later, in the section on double-quoted literals, you'll see that the single quotes can be replaced by double quotes if you'd like. You need to know only one more thing about single-quoted strings: you can add a line break to a single-quoted string simply by adding line breaks to your source code, as demonstrated below:
print 'Bill of Goods
Bread:   $34.45
Fruit:   $45.00
         ======
         $79.45';
How a solar flare could send us back to the Stone Age
A powerful enough solar flare could knock out our power grids, disrupt our GPS satellites, and bring the global economy to a halt, warns a British scientist.
The storms can also disrupt communications on transoceanic flights. Sometimes when that happens, they will either divert or cancel flights. So that would be like the disruption we had in Europe from the volcano two years ago, where they had to close down airspace for safety reasons.
Q: What went wrong in the 1989 storm?
A: In the U.K., there were two damaged transformers that had to be repaired. But no power cuts. The worst thing is what happened in Quebec. In Quebec, the power system went from normal operation to failure in 90 seconds. It affected around 6 million people. The impact was reckoned to be $2 billion Canadian in 1989 prices.
We had lots of disruption to communications to spacecraft operations. The North American Aerospace Defense Command has big radars tracking everything in space, and as they describe it, they lost 1,600 space objects. They found them again, but for a few days they didn't know where they were.
Q: Is that the biggest geomagnetic storm on record?
A: We always describe the storm in 1859 as the biggest space weather event. We know there were huge impacts on the telegraph, which suggests there would be similarly severe impacts on modern power grids. It's hard to compare it to the 1989 event because of the changes in our technology.
Q: Many systems have been built to withstand a storm as big as the 1989 event. Is that good enough?
A: A serious concern would be whole regions losing electrical power for some significant time. Here in the U.K., the official assessment is that we could lose one or two regions where the power might be out for several months.
Q: What would the consequences be?
A: In the modern world, we use electricity for so many things. We require electrical power to pump water into people's houses and to pump the sewage away. (You can imagine) what could happen if the sewage systems aren't pumping stuff away.
If you don't have power, you can't pump fuel into vehicles. If you don't have any fuel, traffic could come to a standstill.
Q: Could the economy function?
A: Most of the time you're using credit cards, debit cards or you'll be getting money out of an ATM. If you've lost the power, the computers in the bank that keep track of our money will have back-up power, but not the ATMs or the machines in the shops. So if you had a big power outage, it wouldn't be long before we'd be trying to find cash.
Q: What are the chances that something like this will happen soon?
A: A recent paper (published in February in the journal Space Weather) tried to estimate the chance of having a repeat of 1859 and came up with a value of a 12 percent chance of it happening in the next 10 years. That's quite a high risk.
Q: What can be done?
A: The biggest step is to make more and more people aware of the issue, so they're thinking about it in the way they design things. That's the most critical part.
I think it's also getting a better picture of these very violent past events. We'd like to find out more about the scope of those events. We have a lot of old data from past events that's on paper – in newspapers and so on – and we're busy trying to find ways to turn it into digital.
Q: We had a recent flare-up of publicity in March thanks to a solar storm that didn't really amount to much. Is this sort of coverage a good thing or a bad thing?
A: It makes such a good scare story, and it's entertaining. It was a mildly interesting event, certainly, but not at all big-league stuff. It makes people think, "Oh it's nothing really," so experts like myself are in danger of being in the crying-wolf situation. That's something that is a concern to me, personally.
(This interview was edited for space and clarity.)
Let's have some time to think about that.
Tick, tock, tick, tock, tick, tock...
Large amounts of ozone -- around 50% more than predicted by the world's state-of-the-art climate models -- are being destroyed in the lower atmosphere over the tropical Atlantic Ocean. This startling discovery was made by a team of scientists from the UK's National Centre for Atmospheric Science and Universities of York and Leeds. It has particular significance because ozone in the lower atmosphere acts as a greenhouse gas and its destruction also leads to the removal of the third most abundant greenhouse gas, methane.

No matter that the world is cooling and that models are yet to get anything right even once. Let's spend trillions of dollars anyway.
The findings come after analysing the first year of measurements from the new Cape Verde Atmospheric Observatory, recently set up by British, German and Cape Verdean scientists on the island of São Vicente in the tropical Atlantic. Alerted by these Observatory data, the scientists flew a research aircraft up into the atmosphere to make ozone measurements at different heights and more widely across the tropical Atlantic. The results mirrored those made at the Observatory, indicating major ozone loss in this remote area.
So, what's causing this loss? Instruments developed at the University of Leeds, and stationed at the Observatory, detected the presence of the chemicals bromine and iodine oxide over the ocean for this region. These chemicals, produced by sea spray and emissions from phytoplankton (microscopic plants in the ocean), attack the ozone, breaking it down. As the ozone is destroyed, a chemical is produced that attacks and destroys the greenhouse gas methane. Up until now it has been impossible to monitor the atmosphere of this remote region over time because of its physical inaccessibility. Including this new chemistry in climate models will provide far more accurate estimates of ozone and methane in the atmosphere and improve future climate predictions.
Professor Alastair Lewis, Director of Atmospheric Composition at the National Centre for Atmospheric Science and a lead scientist in this study, said: "At the moment this is a good news story -- more ozone and methane being destroyed than we previously thought - but the tropical Atlantic cannot be taken for granted as a permanent 'sink' for ozone. The composition of the atmosphere is in fine balance here- it will only take a small increase in nitrogen oxides from fossil fuel combustion, carried here from Europe, West Africa or North America on the trade winds, to tip the balance from a sink to a source of ozone"
Professor John Plane, University of Leeds said: "This study provides a sharp reminder that to understand how the atmosphere really works, measurement and experiment are irreplaceable. The production of iodine and bromine mid-ocean implies that destruction of ozone over the oceans could be global".
Dr Lucy Carpenter, University of York and UK co-ordinator of the Observatory added: "This observatory is a terrific facility that will enable us to keep an eye on the chemical balance of the atmosphere and feed this information into global climate models to greatly improve predictions for this region in the future".
How will the James Hansens and Al Gores and Nicholas Sterns and Tim Flannerys be viewed in years to come?
Not well, I suspect.
Probably in the same light as eugenics advocates.
There are a couple of points to be made with our simple JSP. First, it shows the use of the ApplicationResource file that we mentioned earlier, which frees you from hard-coding text into your application.
The <html:errors/> tag is what uses the ActionErrors collection that we created to return any errors that occurred during validation.

The form defined within the <html:form action="login.action" focus="userName"> tag is really what starts the Struts process. The action that is defined here, login.action, must match an ActionMapping in the struts-config.xml file. From there, the appropriate actions, forms, and forwards are defined. On the submit of this form, the action is trapped by the Struts ActionServlet. We'll discuss how that is defined below.
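For illustration, an ActionMapping for the login flow described above might look roughly like the following struts-config.xml sketch. The package, class, and bean names here are assumptions made up for this example, not the article's actual file:

```xml
<struts-config>
  <!-- The form bean backing Login.jsp (class name assumed) -->
  <form-beans>
    <form-bean name="loginForm"
               type="com.oreilly.strutssample.LoginForm"/>
  </form-beans>

  <!-- Maps the login.action request to its Action class -->
  <action-mappings>
    <action path="/login"
            type="com.oreilly.strutssample.LoginAction"
            name="loginForm"
            scope="request"
            input="/Login.jsp">
      <forward name="success" path="/Welcome.jsp"/>
    </action>
  </action-mappings>
</struts-config>
```

The path attribute (without the extension) is what the form's action="login.action" resolves to once the servlet mapping strips the .action suffix, and the forward element names the JSP the Action can hand off to on success.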
Our Welcome.jsp simply demonstrates how it's possible to use the standard mechanisms available in JSP to pass information from actions to pages:

<title>Welcome to Struts</title>
<p>Welcome <%= (String)request.getSession().getAttribute("USERNAME") %></p>
<p>You have logged in successfully!</p>

The USERNAME attribute was set in the session during the login action.
Build the appropriate configuration files
We've spoken a lot about the struts-config.xml file. Usually, the contents of this file are built along the way. However, at this step in the development flow, it's good to take a step back, examine the config file closely, and make sure that everything matches. The final names of all of the Action classes, JSPs, and forms should be well known and defined in this file. The one thing that we haven't talked about yet is the web.xml file. This file is used by the JSP container, in this case Tomcat, to give specific information regarding the application. The web.xml file for the StrutsSample application looks like:
<?xml version="1.0" encoding="ISO-8859-1"?>
<!-- This is the web-app configuration that allows the strutsSample to work under Tomcat. -->
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.2//EN">
<!-- Struts Tag Library Descriptors -->
The <servlet> tag is used to define the instance of org.apache.struts.action.ActionServlet. In this case, we are calling it oreilly. Two init parameters are worth calling out here: the application parameter that defines the resource file holding all of the strings for the application, and the location of the struts-config.xml file. If you noticed, the url-pattern of the servlet mapping uses the same extension we used in our form action attribute. The servlet mapping tells Tomcat to send all requests that end in .action to the oreilly servlet. You can specify whatever pattern you want here. You might see references to the .do extension in Struts, but I think .action is more explicit. The welcome file list is the default page to display when accessing the application. Finally, we list any of the Struts taglibs we might be using.
Now that we have all of the pieces in place, it is relatively easy to build, test, and deploy your application. Using Ant to build the application is relatively painless. If you haven't used Ant before, I would definitely recommend looking into it. It is easy to learn and provides a clean way to maintain a build environment. I've put together a build.xml file that works with this sample. You can download all of the files used in this article, build and then create a strutsSample.war file by simply typing Ant in the directory where build.xml is located. You need to first download and install the Ant release. It takes about 10 minutes to download and set up.
Our application structure is as follows:
Struts config files (struts-config.xml, web.xml)
Classes directory (strutsSample application package structure)
Lib directory (struts.jar)
Once you have your application in a .war file, you just put it in the webapps directory of Tomcat, and start Tomcat. The .war file will be expanded and a new context will be created in Tomcat for the application. The web.xml file that we created tells Tomcat where to get the rest of the information it needs about the application. You should then be able to access the application by typing http://localhost:8080/strutsSample. Tomcat defaults to use port 8080, so no special setup is required. Login.jsp will display as the welcome page, and you can test away.
In this series of articles, we walked through the entire process of taking an application requirement and building an application from scratch using the Struts framework. While there are more files involved with Struts than with JSP, there are definite benefits in terms of being able to use an MVC model and apply it to a complex environment. The first application takes the most time just getting accustomed to where all the pieces fit together.
Hopefully with the help of this series of Struts articles, you now have a good understanding of exactly what the components are, where they fit, what's necessary, and a good development flow to follow. Struts is just in its infancy and I think will prove to be a valuable tool for doing what we all like to do -- build applications.
Why we have a scientific consensus on climate change
Posted on 23 March 2011 by Thomas Stemler
A short piece for the general audience of RTR radio, Perth, Australia.
(listen to the original audio podcast)
Recently a research group analysed the current literature on climate science. Their aim was to find out how many of the active researchers in the field agree on man-made climate change. The answer is, 97 out of 100 agree that the climate is changing and that we are causing it.
From my own experience, such a high proportion is quite unusual. As scientists we are trained to be professional sceptics, who doubt everything and who moreover love a good debate. Therefore putting 3 scientists together in a room sometimes results in an argument with 5 different opinions.
While this is the more enjoyable side of science, the more important point is that being sceptical lets us identify errors and improve our understanding of nature.
Climate science is a very special science. It includes experts who study the dynamics and data from the atmosphere, the oceans, glaciers, and so on. Some of us specialise in building models, others use them to make predictions.
So how come that 97 % of the experts agree that the current warming is not natural but a consequence of burning fossil fuels?
First, it is because all our data show that the global mean temperature is increasing, that the glaciers and the arctic ice are melting and therefore sea levels are rising.
Second, we know that burning fossil fuel releases CO2 into the atmosphere. The properties of CO2 were first studied by John Tyndall in the late 1850s. Tyndall was an experimental physicist interested in how different gases absorb heat. John Tyndall's observations were remarkable. His pioneering work eventually inspired physicists to develop the theory of quantum mechanics, but his results about CO2 also led Arrhenius in 1896 to the conclusion that burning fossil fuel will result in global warming. So climate science is a very old science indeed; we have known about CO2 for more than 150 years.
Nowadays we know how much CO2 we put into the atmosphere by using it as our global garbage bin for fossil fuel. All our climate observations show a global increase in temperature. This increase is consistent with the well established properties of CO2.
Taking this into account it is no longer surprising that 97% of the professional sceptics working in the area of climate science agree that we are currently witnessing man-made climate change. The only question remaining is, what do we do? Ignore the facts or generate energy from other sources?
Dr Thomas Stemler is a physicist who is currently an Assistant Professor of Mathematics at the University of Western Australia. He is an expert in forecasting of complex nonlinear dynamical systems.
This podcast is now available on iTunes (or search for "Climate Podcasts from the University of Western Australia" in the iTunes store). Alternatively, you can subscribe to the stream via feedburner.
Story: http://www.msnbc.msn.com/id/46486262/ns/technology_and_science-space/
Anyone out under the stars in the early evening lately likely cannot help but notice two brilliant objects dominating the western sky: the planets Venus and Jupiter.
Venus, because it is closer to the sun than Earth, never strays far from the sun in our sky. Jupiter, being outside the Earth's orbit, can appear anywhere along the ecliptic — the path of the sun, moon, and planets across the sky. Venus and Jupiter are gradually growing closer, and will pass each other on March 13.
The moon, meanwhile, is making its monthly trip around the Earth and will pass these two planets on Saturday and Sunday this week (Feb. 25 and 26). The moon appears close to Venus on Saturday night, and then near Jupiter on Sunday night. The view on either night will be what astronomers call a triple conjunction.
The sky map of Venus, Jupiter and the moon for this story shows how they will appear during the celestial triple play.
Surface Temperature Reconstructions for the last 2,000 Years
8000 B.C. Such recession has thus occurred in the past due to natural variability, but has been rare in the most recent few millennia.
In the Andes, at the same glacier where the dated plant material was exposed (Quelccaya), melting in the 1980s was strong enough to destroy the geochemical signature of annual layers in the ice beneath (Alley 2006; Thompson et al. 2003, in press). An ice core taken from Quelccaya in the late 1970s showed that such melt had not happened in at least the previous millennium. This strongly suggests anomalous warmth in the late 20th century. The Quelccaya ice cap has existed without interruption for more than 1,000 years. If its present rate of shrinkage continues, it will disappear entirely within a few decades.
Over the last few decades, the floating ice shelves along the Antarctic Peninsula have been disintegrating, following a progressively southward pattern (Vaughan and Doake 1996, Cook et al. 2005). This is primarily a result of higher temperatures inducing surface melt (van den Broeke 2005). Analysis of sediment cores from the seafloor (Domack et al. 2005) beneath one of the largest former shelves (the Larsen B, which disintegrated in the late 1990s) indicates that this ice shelf had persisted throughout the previous 10,000 years, providing further evidence that recent decades have been anomalously warm.
In one e-mail from 1999, the center's director, Phil Jones, alludes to one of Mann's articles in the journal Nature and writes, "I've just completed Mike's Nature trick of adding in the real temps to each series for the last 20 years (i.e., from 1981 onwards) and from 1961 for Keith's to hide the decline."
Mann said the "trick" Jones referred to was placing a chart of proxy temperature records, which ended in 1980, next to a line showing the temperature record collected by instruments from that time onward. "It's hardly anything you would call a trick," Mann said, adding that both charts were differentiated and clearly marked.
Mann is certainly right: "trick" is a word scientists use all the time to mean a clever way to refocus a problem or transform it somehow to make progress. If I had a dollar for every time I heard a professor say "Now here's a trick you can use," I'd be able to buy a Lexus.
An elementary example of what might be considered a trick is converting from Euclidean (x, y) coordinates to polar (r, θ) coordinates, in which a calculation simplifies. A more complicated example is dimensional regularization in quantum field theory: since some observables come out infinite when calculated directly, you instead do the calculation in 4+d dimensions and then, at the end, let d go to zero, and the answers are finite. (No, there is no good mathematical basis for this, as a mathematician will tell you; but, as a physicist will tell you, it works.)
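A classic instance of the polar-coordinate trick (my illustration, not one from the emails): the Gaussian integral has no elementary antiderivative in (x, y) coordinates, but squaring it and switching to (r, θ) makes it trivial:

```latex
I = \int_{-\infty}^{\infty} e^{-x^2}\,dx
\quad\Longrightarrow\quad
I^2 = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy
    = \int_{0}^{2\pi}\!\int_{0}^{\infty} e^{-r^2}\,r\,dr\,d\theta
    = 2\pi\cdot\tfrac{1}{2} = \pi
```

so I = √π. Nothing hidden, nothing dishonest — just a transformation that makes the problem tractable.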
Skeptics who are focusing on this particular word just do not understand some of the inside language scientists use.
In 1903 the word on the street was that Pierre and Marie Curie were the front-runners for the Nobel Prize in Physics for their work on radioactivity — inherent in which was the hypothesis that the atom was not the most basic particle but could emit subatomic particles. Some were affronted by the idea that a woman could have played any significant part in this work, and they argued for awarding the prize to Pierre and French physicist Henri Becquerel, but not Marie.
When Pierre caught wind of this, he argued vehemently on his wife’s behalf. When the award was finally presented to both Curies and Becquerel, Marie was lauded at the presentation as a “help meet” to Pierre. Thus, Marie Curie became the first woman to win a Nobel Prize. The insulting irony was that Pierre had given up his work on crystals and magnetism to literally help his wife blaze a new trail in chemistry and physics with her work on radioactivity.
This year marks the 100th anniversary of Marie Sklodowska Curie’s second Nobel Prize — this time in chemistry for the discovery of polonium and radium. The first woman to win a Nobel became the first person to win two. But the second award was not without controversy. After Pierre’s death in 1906, Marie was rumored to have begun an affair with French physicist Paul Langevin. The scandal broke around the same time as her second award. She refused to let the slander mar her scientific work. She wrote to a critic that “I believe there is no connection between my scientific work and the facts of private life.”
One hundred years later, Madame Curie stars in an exhibit at the Nobel museum in Stockholm — giving her the credit that she was denied by many during her lifetime. Marie died in 1934 of aplastic anemia most likely due to her lifelong exposure to radiation. A year later, her daughter Irene Joliot-Curie and her husband Frederic Joliot won the Nobel Prize for Chemistry for their work on the synthesis of radioactive elements. Irene died in 1956 of leukemia, also likely due to her exposure to radioactive materials.
The opening of the exhibit coincided with the European Multidisciplinary Cancer Congress. Madame Curie’s discovery of radiation proved to be a double-edged sword. Exposure to ionizing radiation is associated with several cancers — lung, skin, thyroid, multiple myeloma, breast, and stomach. However, the physics of radiation underlie many imaging techniques that allow physicians to noninvasively identify and follow tumors in the body. Radiation also turns out to be an effective treatment of certain cancers. Her pioneering investigation provided the groundwork for cancer research that greatly increased the odds of survival for many cancer patients.
The Solvay Conferences in Brussels were initiated to have the brightest minds of the age work on preeminent open problems in both physics and chemistry. The most famous meeting was held in 1927 and is noted for the presence of so many scientific luminaries addressing the newly proposed quantum theory. Seventeen of the 29 members were Nobel winners or would become winners. In the photo, Marie Curie — with two Nobel prizes to her name — takes her place alongside Albert Einstein, Niels Bohr, Erwin Schrodinger, and Werner Heisenberg, among others.
over the rainbow!
The agitation of the charged particles that are present in all matter causes all objects to emit electromagnetic radiation. Objects emit this energy, but can also transmit, absorb, and reflect it. The sun is one of the main natural sources of electromagnetic energy on Earth, but there are also many artificial sources, such as electric lamps, microwave ovens, and cell phones.

The sun's electromagnetic spectrum includes radiation with very short wavelengths, such as gamma rays and X-rays (of the order of 1/100 of a micron), and very long wavelengths, such as radio waves (which can be several kilometres long). A relatively small part of the electromagnetic spectrum is especially important for us: the part that covers wavelengths from 0.4 to 0.7 µm, or visible light. Within this range each of the colours of the rainbow corresponds to a specific wavelength. Thus, blue = approx. 0.45 µm, green = approx. 0.55 µm, and red = approx. 0.65 µm.
Electromagnetic radiation's characteristics can undergo changes during transmission. For example, the Earth's atmosphere (notably the ozone layer) luckily blocks some of the solar radiation that is extremely harmful to human beings. The radiation's properties are sometimes changed partially during the transfer. Thus, the ocean's upper layers absorb the parts of the solar radiation that correspond to red and green light, which explains the blue colour of the deeper layers. At a certain depth the water absorbs all of the visible light rays and the ocean becomes pitch black.

When radiation reaches an object, the object may absorb one part and reflect another part of the radiation. In all cases, the electromagnetic radiation's characteristics will be modified. Plants use mainly the red part of the solar spectrum to carry out photosynthesis. The reflected light spectrum is thus devoid of this red band, and the light reflected by the leaves appears green.

Spectral signatures can be represented graphically as shown here, with the reflected electromagnetic radiation's frequencies (or wavelengths) plotted on the abscissa (x axis) and intensities on the ordinate (y axis). As we can see from the graph, different objects do not all absorb the same parts of solar radiation. Consequently, their reflected ray spectra are different. The pattern of the electromagnetic spectrum reflected by an object is called its spectral signature.
Remote sensing makes use of this property: analysing the characteristics of the electromagnetic spectra reflected by objects (their spectral signatures) allows one to determine some of the objects' properties, within limits. Human vision basically uses the same principle: it uses colours to identify objects, for example to select the ripest apple. The sensors used in remote sensing, however, make it possible to broaden the field of analysis to include parts of the electromagnetic spectrum that are well beyond visible light.

A spectro-radiometer is usually used to analyse all the details of an electromagnetic spectrum. This instrument can analyse all of the frequencies of a given electromagnetic radiation. Other, simpler, instruments measure only certain parts of the frequency spectrum, known as spectral bands. Although the data provided by such multispectral radiometers are discrete, that is, non-continuous, they can also be used to distinguish different types of materials.
The sensors used in remote sensing cover the ultraviolet (<0.3 µm), visible (0.4–0.7 µm), near-infrared (0.7–1.5 µm) and thermal infrared (up to 1000 µm, or 1 mm) ranges. As a rule, they merely measure and analyse the radiation reflected by the objects that are lit by the sun; they are thus passive systems. Other remote sensing systems send out signals that strike the Earth's surface and then analyse their echoes; these are active systems. The latter usually operate in the microwave or radar wave range, working with wavelengths from 1 cm to 1 m.
First, let's write down what we know about each voxel:
voxel = (x, y, z, color) // or some other information
The general way is simply this:
set of voxels = set of (x,y,z, color)
Note that the triplet (x,y,z) identifies each voxel uniquely, since a voxel is a point in space and no two voxels can occupy the same place (I believe we are talking about static voxel data).
This should be fine for simple data. But it is by no means a fast data structure.
Rendering is, AFAIK, done by a scanline algorithm. Tom's Hardware's article on voxels has an image of the scanline algorithm.
If fast lookup is needed, then the fastest data structure for lookup is a hash (or an array, map, ...). So you have to make a hash out of it. Naively, we just want the fastest way to get an arbitrary element:
array [x][y][z] of (color)
This has O(1) lookup of a voxel by its x,y,z coordinates.
The problem is that its space requirement is O(D^3), where D is the range of each of the x, y and z coordinates (forget real numbers: even if they were chars, with a range of 256 values, there would be 256^3 = 2^24 = 16,777,216 elements in the array).
But it depends on what you want to do with the voxels. If rendering is what you want, then this array is probably what you want. The problem of storage still remains, though ...
If storage is the problem
One method is to use RLE compression in the array. Imagine a slice of voxels (a set of voxels where one coordinate is held constant ... like the plane z = 13, for example). Such a slice would look like a simple drawing in MSPaint. A voxel model, I'd say, usually occupies only a fraction of all the possible places (the D^3 space of all possible voxels). I believe that "take a pair from the triplet of coordinates and compress along the remaining axis" would do the trick (for example, take [x][y] and for each element compress all the voxels along the z axis at that x,y ... there should be only zero to a few elements, so RLE would do fine here):
array [x][y] of RLE compressed z "lines" of voxel; each uncompressed voxel has color
Another method to solve the storage problem would be to use a tree data structure instead of an array:
tree data structure = recursively classified voxels
for octrees: recursively classified by which octant does voxel at (x,y,z) belong to
- Octree, as mentioned by Nick. It should compress voxels well. An octree even has decent lookup speed; I guess it is O(log N), where N is the number of voxels.
- An octree should be able to store arbitrary voxel data decently.
If the voxels are a simplistic heightmap, you might store just that. Or you might store the parameters of a function that generates the heightmap, i.e., generate it procedurally ...
And of course you can combine all these approaches. But don't overdo it: test that your code works, and measure that it is REALLY faster (so the optimization is worth it).
Besides octrees, there is RLE compression of voxels; google "voxlap", "ken silverman" ...
There is a list of resources and discussion on how to make a fast voxel renderer, including papers and source code.
The mittenpunkt or middlespoint of a
triangle ABC is the point of concurrence of the lines from the
excenters D, E, F through the corresponding triangle side
midpoints G, H, N. The mittenpunkt of triangle ABC is the
symmedian point of the excentral triangle DEF. The point was
studied by Christian Heinrich von Nagel in 1836.
Writes a byte to the current position in the stream and advances the position within the stream by one byte.
Assembly: mscorlib (in mscorlib.dll)
For an example of creating a file and writing text to a file, see How to: Write Text to a File. For an example of reading text from a file, see How to: Read Text from a File. For an example of reading from and writing to a binary file, see How to: Read and Write to a Newly Created Data File.
Use the CanWrite property to determine whether the current instance supports writing.

Notes to Implementers:
The default implementation on Stream creates a new single-byte array and then calls Write. While this is formally correct, it is inefficient. Any stream with an internal buffer should override this method and provide a much more efficient version that writes to the buffer directly, avoiding the extra array allocation on every call.
Windows 7, Windows Vista, Windows XP SP2, Windows XP Media Center Edition, Windows XP Professional x64 Edition, Windows XP Starter Edition, Windows Server 2008 R2, Windows Server 2008, Windows Server 2003, Windows Server 2000 SP4, Windows Millennium Edition, Windows 98, Windows CE, Windows Mobile for Smartphone, Windows Mobile for Pocket PC, Xbox 360, Zune
The .NET Framework and .NET Compact Framework do not support all versions of every platform. For a list of the supported versions, see .NET Framework System Requirements.
Argument Passing and Naming Conventions
All arguments are widened to 32 bits when they are passed. Return values are also widened to 32 bits and returned in the EAX register, except for 8-byte structures, which are returned in the EDX:EAX register pair. Larger structures are returned in the EAX register as pointers to hidden return structures. Parameters are pushed onto the stack from right to left.
The compiler generates prolog and epilog code to save and restore the ESI, EDI, EBX, and EBP registers, if they are used in the function.
Note When a struct, union, or class is returned from a function by value, all definitions of the type need to be the same, else the program may fail at runtime.
For information on how to define your own function prolog and epilog code, see Naked Function Calls.
The following calling conventions are supported by the Visual C/C++ compiler.
Keyword                    Stack cleanup   Parameter passing
__cdecl                    Caller          Pushes parameters on the stack, in reverse order (right to left)
__stdcall                  Callee          Pushes parameters on the stack, in reverse order (right to left)
__fastcall                 Callee          Stored in registers, then pushed on stack
thiscall (not a keyword)   Callee          Pushed on stack; this pointer stored in ECX
For related information, see Obsolete Calling Conventions.
END Microsoft Specific
Methane gas hydrate forming below a rock overhang at the sea floor on the Blake Ridge diapir. This image, taken from the DSV Alvin during the NOAA-sponsored Deep East cruise in 2001, marked the first discovery of gas hydrate at the sea floor on the Blake Ridge. Methane bubbling out of the sea floor below this overhang quickly freezes, forming this downward-hanging hydrate deposit, dubbed the "inverted snowcone."
Windows to the Deep
Exploration of the Blake Ridge
July 22 – August 3, 2003
On this exploration, scientists used the Alvin submersible and other tools to explore the biology, physics, and chemistry of sea-floor methane seeps at water depths of 2,000 m to 2,800 m off the coast of the southeastern United States. These seeps occur where methane hydrate deposits (a solid form of methane and water that is stable at high pressures and low temperatures) rise to shallow depths beneath the sea floor and break down to produce methane gas. The Alvin dives explored three sea-floor features where scientists found chemosynthetic communities that live on or near the sea-floor emission sites.
Background information for this exploration can be found on the left side of the page. Daily updates and more detailed logs and summaries of exploration activities are posted below and to the right.
Updates & Logs
Click images or links below for detailed mission logs.
August 1, 2003
Have you heard of organic carbon that microbes degrade to make methane? Join the search
July 31, 2003
The young science party consists mainly of graduate and undergraduate students. Read about a first Alvin dive experience in today's log.
July 30, 2003
The crew returns to the Blake Ridge Diapir in search of ice shrimp and gas hydrate. They also experience the strong smell of hydrogen sulfide gas.
July 29, 2003
The crew has completed the first dive ever on the Cape Fear Diapir. Near the landing site, divers encountered the most biologically diverse location of the day.
July 28, 2003
Discovery of a new species of polychaete worm at the Blake Ridge seeps
July 27, 2003
Dive two at Blake Ridge Diapir yielded "waves of clams as far as we could see."
July 26, 2003
The next two dives will occur at the Blake Ridge Diapir, a site known for its concentration and diversity of seep organisms, hydrate, and other features such as bacterial mats.
July 25, 2003
The first Alvin dive is complete. This was the first dive ever along the northeastern eroded flank (side) of Blake Ridge.
Subduction zones account for 90% of global seismic moment release, generating damaging earthquakes and tsunamis with potentially disastrous effects on heavily populated coastal areas (e.g., Lay et al., 2005). Understanding the processes that govern the nature and distribution of slip along these plate boundary fault systems is a crucial part of evaluating earthquake and tsunami hazards. More generally, characterizing fault behavior through direct sampling, near-field geophysical observations, and measurement of in situ conditions at the depths of coseismic slip is a fundamental goal of modern earth science. To this end, several recent and ongoing drilling programs have targeted portions of active plate boundary faults that either slipped coseismically during large earthquakes or nucleate clusters of smaller events. These efforts include the San Andreas Fault Observatory at Depth (Hickman et al., 2004), the Taiwan-Chelungpu Drilling Project (Ma et al., 2006; Hirono et al., 2006), and IODP NanTroSEIZE drilling (Tobin and Kinoshita, 2006a, 2006b).
NanTroSEIZE is a multiexpedition, multistage project focused on understanding the mechanics of seismogenesis and rupture propagation along plate boundary faults. The IODP science plan outlines a coordinated effort to sample and instrument the plate boundary system at several locations offshore the Kii Peninsula (Figs. F1, F2). The main objectives are to improve understanding of
As NanTroSEIZE progresses, scientists will evaluate a set of core hypotheses through a combination of riser and riserless drilling, long-term observatories, and associated geophysical, laboratory, and numerical modeling efforts. The following hypotheses are paraphrased from the original IODP proposals and outlined in Tobin and Kinoshita (2006a, 2006b):
Sediment-dominated subduction zones such as Nankai margin are characterized by repeated occurrence of great earthquakes of ~M 8.0 (Ruff and Kanamori, 1983). Although the causative mechanisms are not well understood (e.g., Byrne et al., 1988; Moore and Saffer, 2001; Saffer and Marone, 2003), the updip limit of the seismogenic zone is thought to correlate with a topographic break along the outer rise of the forearc (e.g., Byrne et al., 1988; Wang and Hu, 2006). At Nankai, high-resolution seismic reflection profiles clearly document an out-of-sequence thrust or megasplay fault system that branches from the plate boundary (décollement) within the coseismic rupture zone of the 1944 Tonankai M 8.2 earthquake (Park et al., 2002) (Fig. F2). As stated above, two of the first-order goals of this project are to document the role of the megasplay fault in accommodating plate motion and to characterize its mechanical and hydrologic behavior. Ultimately, we plan to intersect the plate interface itself at seismogenic depths.
The Japanese Center for Deep Earth Exploration (CDEX) conducted three coordinated riserless expeditions during 2007–2008 as Stage 1 of NanTroSEIZE, drilling a series of sites across the continental margin offshore the Kii Peninsula. The transect is located within the inferred coseismic slip region of the 1944 Tonankai M 8.2 earthquake (Figs. F1, F2) (Tobin and Kinoshita, 2006a, 2006b). The first expedition (IODP Expedition 314) successfully obtained a comprehensive suite of geophysical logs and other downhole measurements at sites along the transect using state-of-the-art logging-while-drilling (LWD) technology (Kinoshita et al., 2008). Unfortunately, the expedition ended before LWD data could be obtained from any of the "subduction input" sites. This was followed by a coring expedition (IODP Expedition 315) to collect materials from and to characterize in situ conditions within the accretionary wedge and Kumano forearc basin at IODP Sites C0001 and C0002 (Ashi et al., 2008). The third expedition (IODP Expedition 316) collected core samples from shallow fault zones, including the frontal thrust near the trench (IODP Sites C0006 and C0007) and the older accretionary prism and megasplay fault at 400 meters below seafloor (mbsf) (IODP Sites C0004 and C0008) (Kimura et al., 2008).
NanTroSEIZE proceeds to Stage 2 in 2009. IODP Expedition 319 will drill two holes for future long-term observatories, which in conjunction with a planned dense ocean floor network system will monitor earthquakes and tsunamis (Kinoshita et al., in press). The first-ever riser hole in scientific ocean drilling history will be drilled and cased to 1600 mbsf at a site just above the locked zone of the plate interface. A riserless cased hole for another long-term observatory is also included in the expedition plan.
Some of the tasks remaining from NanTroSEIZE Stage 1 will be implemented following the riser expedition. Expedition 322 will characterize the sedimentology, physical properties, physical and chemical hydrogeology, and in situ conditions of the incoming sediment and uppermost igneous crust at proposed Site NT1-07A. A companion site (proposed Site NT1-01A) is included in the contingency plan.
The smallest fish on record is no longer the 8 mm Indo-Pacific goby, but rather a 7.9 mm member of the carp family known as Paedocypris progenetica. Discovered in a peat bog on the island of Sumatra by Switzerland's Maurice Kottelat and Singapore's Tan Heok Hui, this fish is remarkable not only because it is so small, but because of the way it has adapted to thrive in its environment.
First, the fish lives in murky peat bog water with a pH of 3. This is about 100 times more acidic than rainwater, so it is amazing to find a fish that is actually able to live in it.
Secondly, it has “bizarre grasping fins” with exceptionally large muscles. The purpose of these is unclear at the moment, but it is theorized that fish uses them to grasp its mate during copulation.
Finally, and most importantly according to Kottelat, is the scientific significance of finding a complete vertebral column in such a tiny body. Apparently this is nearly unheard of in organisms this small.
It will be interesting to learn more about this amazing little fish as more research is conducted. The Natural History Museum reports that several populations of Paedocypris have already been lost due to habitat destruction caused by rampant development and intensive farming, so researchers are trying to learn all they can about this amazing specimen before it too becomes extinct.
Birds have a lousy sense of smell, right? That common perception may apply to some modern-day birds, but that wasn’t always the case. Early birds, frankly, smelled like dinosaurs, meaning that they inherited a pretty respectable sense of smell from their dinosaurian kin. The typical scenario had been that as birds evolved flight, the senses of vision and balance increased and the olfactory sense diminished. Darla Zelenitsky (University of Calgary) and François Therrien (Royal Tyrrell Museum) invited Ryan Ridgely and me to join forces in testing this scenario by studying the evolution of the olfactory bulb, the part of the brain receiving information on odors, across the transition from small theropod dinosaurs to birds. As our new article in Proceedings of the Royal Society B reveals, birds started out with a full sensory toolkit, including a pretty capable sniffer. And we also learned a thing or two about non-avian theropods along the way.
Assembling the team
Collaboration was obvious. François and Darla had done some important work on the olfactory apparatus of theropod dinosaurs, culminating in their excellent 2009 article. Our Ohio team has had NSF funding since 2003 to look at the evolution of the brain and sensory systems of dinosaurs and other archosaurs. So, it was natural to combine our data and expertise to tackle the transition to birds. Teasing apart this transition has been the target of our NSF grants all along, and so Ryan and I had already sampled a number of the key advanced non-avian coelurosaurian dinosaur species (e.g., dromaeosaurs, troodontids) and basal bird species (e.g., Archaeopteryx, Hesperornis, Ichthyornis), as well as some basal members (e.g., Lithornis) of the evolutionary group that includes all modern-day birds (Neornithes).
Making sense of avian olfactory evolution
You can check out the published article for details of our findings. Our OU WitmerLab site and Darla’s site both have more information. Basically, our results showed that many advanced non-avian maniraptorans had pretty respectable senses of smell as judged by the sizes of their olfactory bulbs. Bulb size was maintained across the transition to birds, as far as we can tell, and even increased a bit in basal birds. Moderately large olfactory bulbs persist in basal members of Neornithes, the group that includes all modern-day species of birds. Lithornis, a 58-million-year old neornithine perhaps related to ratites and tinamous, gave us our best evidence for early members of the modern radiation, but even among birds living today, many basal (primitive) species (ducks, flamingos) have relatively large olfactory bulbs.
All this suggests that birds inherited a good sense of smell from their ancestors and that a pretty potent sniffer was maintained in many bird groups. Some birds indeed have tiny olfactory bulbs and are not as reliant on odors. A huge group, the perching birds (passerines, such as crows, cardinals, and sparrows), have very small olfactory bulbs, as do parrots. Previously, the assumption had been that birds in general had a weak sense of smell and that a good sense of smell is an evolutionary specialization. Our study reverses that polarity: a good sense of smell is basically an avian trait, itself inherited from dinosaurs. It’s been greatly reduced a few separate times (albeit in one case in an extremely diverse group, passerines), as well as greatly enhanced a few separate times (albatross, turkey vultures).
Olfaction and the not-quite-total extinction of dinosaurs
Nearly all dinosaurs—including birds—went extinct at the end of the Cretaceous. Only “modern birds” (neornithines) survived. What was different about neornithines? The reality is that it’s still generally mysterious why some species survive mass extinctions and others die out, but a common trait of some survivors is that they are generalists that can do many things well. Basal neornithines weren’t the sensory specialists that we once thought. Rather they were generalists that could use all their senses, which may have given them an edge in comparison to other flying vertebrates like others birds or pterosaurs. Being excellent flyers maybe gave them an edge over non-avian dinosaurs, many of which were also sensory generalists. So, the ancestors of modern-day birds could not only fly to search for better conditions, but they had the complete sensory toolkit to find food, mates, and suitable habitats. That may have been enough to tip the scales toward survival. This scenario constitutes little more than speculation at this point—a future research direction—which is why it got about half a sentence in our published article. We know next to nothing about the olfactory capabilities of, say, enantiornithine birds, a diverse group that didn’t survive the extinction. I’m still banking on neornithines being mostly lucky, but it’s worth keeping this hypothesis in mind.
Nocturnal dinosaurs: seeing…and smelling?…in the dark
This was a good week for dinosaur sensory biology. A couple days after our article on olfaction came out, Lars Schmitz and Ryosuke Motani’s article on eyeballs, vision, and daily activity patterns came out in Science. They found that many of the same theropods that we found to have moderately large olfactory bulbs also had the eye-socket parameters of nocturnal animals. Many nocturnal animals have a keen sense of smell, and so these results fit nicely. On the other hand, they found that basal birds were probably more active in the daytime (diurnal), whereas we found basal birds to have a sense of smell that was pretty comparable to, if not better than, the non-avian maniraptorans. Our avian samples only share Archaeopteryx, and so it’s hard to know what to make of this difference. To be honest, I trust Lars and Ryosuke’s vision-based assessments of activity patterns much more than assessments based on olfactory capabilities (a good sense of smell can be pretty handy at any time of day). Still, combining data like these is another interesting future direction and has bearing on Ryan’s and my ongoing work on dinosaur brain and sensory evolution. It’s a good time to be a dinosaur biologist!
– Larry Witmer
In southern Vermont, researchers study the at-risk Bicknell’s Thrush
Bryan Pfeiffer / Wings Photography
It’s 3:30 a.m. on the summit of Stratton Mountain. The sun won’t rise for another two hours, but scientists from the Vermont Center for Ecostudies (VCE) have already begun their search for “the canary of the mountains,” more commonly known as Bicknell’s Thrush.
While their coffee percolates inside the Ski Patrol hut, VCE field directors Sara Frey and her husband Juan Klavins open mist nets around Stratton’s high-elevation trails to catch the rare bird, which breeds only in the montane fir forests of the U.S. Northeast and parts of the Gaspé Peninsula in Québec.
Frey and Klavins, along with a field technician and a project intern from the University of Vermont, have shared this Balsam fir habitat with the Bicknell’s Thrush for the better part of six weeks, collecting blood samples, banding the birds and finding their nests using radio telemetry.
“This work is difficult,” admits Frey, who sleeps in the bed of her Toyota pickup at night with Klavins. The research team has endured varying weather, black flies and little sleep for weeks on the summit. But as scientists who study a bird facing imminent extinction from climate change, the conservation of Bicknell’s takes priority over their own discomfort.
“[Bicknell’s] are very secretive,” Frey says, “and it takes patience to study their behavior, when you can look at one.”
Frey says she became “hooked” on Bicknell’s research in 2003 when she got involved with an ongoing ecological and demographic study of the bird. Because the species breeds only in these mountainous regions, where VCE biologists say problems such as climate change and the atmospheric deposition of mercury are amplified, the Bicknell’s is at risk of losing its habitat and, ultimately, of extinction.
“It is a species that is so closely associated with a specific habitat — montane forests of the Northeast — that it can not only help us understand how to direct conservation of Bicknell’s Thrush but the ecosystem as a whole,” says Frey, who has been working with Woodstock-based Kent McFarland, a VCE conservation biologist who joined the Bicknell’s research in 1994.
With help from biologists such as Frey and citizen scientists — trained volunteers — McFarland and his colleagues have collected extensive data on Bicknell’s Thrush and their diminishing habitats. Their most recent work was included in the peer-reviewed paper “Potential Effects of Climate Change on Birds of the Northeast,” published in April in Mitigation and Adaptation Strategies for Climate Change.
In the paper, McFarland and fellow biologist Dan Lambert from the American Bird Conservancy predict that the current pace of carbon dioxide emissions will raise temperatures enough to produce major population declines in high-elevation birds like the Bicknell’s. With only about 40,000 of them left, there’s reason to worry, says McFarland, who has accompanied Frey and the research team on the summit for the day.
Using computer-simulated warming, the duo found that raising the mean summer temperature by 1 degree Celsius would reduce the availability of suitable Balsam fir habitat for the Bicknell’s by more than half. An increase of 2 degrees, expected before the end of the century, would eliminate all breeding sites in the Catskills and most in the Green Mountains.
“Bicknell’s are so tied to this Balsam fir forest that they don’t seem to have much plasticity to go to another forest type,” McFarland says, pointing to digital maps of the species’ “richness” on his laptop while he sits in the cluttered Ski Patrol hut. “Its ability to change that rapidly or adapt that rapidly would surprise me.”
The team has just returned from a net run in which they pulled a Bicknell’s Thrush from the tangles of a mist net, which is made of thin nylon and is virtually invisible. A partnering biologist from the Dominican Republic, where the Bicknell’s Thrush winters, holds the olive-brown bird as it flutters in his hands and gives a slurred whistle. This particular Bicknell’s was banded by the VCE “almost a year ago to the day,” McFarland says.
After updating the bird’s data, Pat Johnson, the team’s field technician, returns the bird to its nesting area then checks on its nest, dubbed “stumpy” — a tight bed of twigs and fungus tucked into a mangled stump deep within the forest.
These forests, McFarland explains, adapted to a slow warming period nearly 3500 years ago by moving to higher elevations. That took more than a thousand years. Considering the current carbon emission scenarios issued by the Intergovernmental Panel on Climate Change, McFarland isn’t expecting a repeat of that leisurely adaptation.
“I will die and there will still be Bicknell’s Thrush,” he says. “My daughter? Not so sure. My daughter’s daughter? There probably will not be Bicknell’s Thrush if climate change continues as predicted. I don’t know if trees can react that fast. This [current] warming is going to happen in 150 to 200 years. It’s a totally different beast. The forest zones will literally lose ground.”
To make matters worse, the VCE is now saying that Bicknell’s and other high-elevation birds are susceptible to the atmospheric deposition of mercury as well. The same coal-burning plants emitting carbon and contributing to global warming are dumping mercury on the mountain peaks of the Northeast.
McFarland says mercury deposition maps of the Northeast encompass the Bicknell’s Thrush habitats. The mercury works its way up the food chain: the Bicknell’s Thrush eats leaf-eating insects, and the Sharp-shinned Hawk eats the Bicknell’s. While the biologists at VCE aren’t certain of mercury’s effects on the birds, their speculations aren’t positive.
“We’re the tail pipe of the nation,” explains McFarland, as VCE Director Chris Rimmer extracts a blood sample from a Sharp-shinned Hawk to test its mercury level. “Mercury from the Ohio Valley is transported and comes to us in rain and snow, and a lot of it gets wicked out in the needles of the high-elevation trees.”
The Dummerston office of avian ecologist Hector Galbraith lies roughly 30 miles southeast of where the Vermont Center for Ecostudies researched the Bicknell’s Thrush.
Galbraith is also the director of the Climate Change and Energy Initiative at the Manomet Center for Conservation Sciences in Massachusetts. In a phone conversation, Galbraith says the organization should become a model for state and private research organizations in the coming decades.
“This is the exact sort of research we need to be doing to track these birds in real time and deal with climate change, because Bicknell’s are so exquisitely sensitive and vulnerable to climate change,” he says. “Their thermal habitat may very well go before the fir forest [does].”
Galbraith notes Bicknell’s and their habitat will need to move up 900 feet, but they don’t have 900 feet to move. “Whatever way you look at it, this is not good news for Bicknell’s Thrush,” he says.
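Galbraith’s 900-foot figure is consistent with a simple lapse-rate calculation. The sketch below is a back-of-the-envelope illustration only, not VCE’s or Galbraith’s actual model; the roughly 6.5 °C-per-kilometer lapse rate is an assumed textbook value.

```python
# Back-of-the-envelope sketch (illustrative only): how far upslope must a
# thermal habitat band shift to offset a given amount of warming, assuming
# the standard environmental lapse rate of roughly 6.5 degrees C per km?

M_PER_FT = 0.3048
LAPSE_RATE_C_PER_M = 6.5 / 1000.0  # air cools ~6.5 degrees C per 1000 m of elevation

def upslope_shift_m(warming_c, lapse_rate=LAPSE_RATE_C_PER_M):
    """Elevation gain needed to keep local temperature unchanged."""
    return warming_c / lapse_rate

shift_m = upslope_shift_m(2.0)  # the ~2 degree C scenario discussed above
print(f"{shift_m:.0f} m, or about {shift_m / M_PER_FT:.0f} ft")
```

A 2 °C warming works out to roughly 300 meters, about 1,000 feet, of upslope shift: the same order as the figure Galbraith cites.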
Despite its diminishing habitat and susceptibility to mercury deposition, among other effects jeopardizing the species’ demography, it is not listed as “endangered” or “threatened” by the U.S. Fish and Wildlife Service (USFWS).
USFWS Spokesperson Valerie Fellows says research grants are typically prioritized by species depending on their endangered or threatened status at the federal and state level. In the 2007 fiscal year, USFWS’ National Wildlife Refuge System provided VCE with $16,062 for its work with Bicknell’s Thrush; in 2008, it was just $3,600. Vermont Fish & Wildlife and private foundations brought the annual funding for this project to about $100,000.
“We have got to get real about funding these long-term monitoring studies,” protests Galbraith. “We are heading for major ecological change. Climate change is not just a thing of the future, it’s happening now.
“Even at first glance it’s obvious that the species is fragmented,” he continues. “It has a small population with limited distribution in the U.S. and its habitat is quite vulnerable. With that said, there have been a lot of petitioned candidates recently but few actual listings.”
Waiting out the political climate in Washington, the VCE is focusing on saving their feathered friend, and has spearheaded the formation of the International Group for Bicknell’s Thrush Conservation. “We’d like to get the bird on a corrective course and we needed a formal entity,” McFarland explains. “In the end, we’ll have a road map for Bicknell’s Thrush habitat: Here’s the problem, here’s the solution, and here’s how we’re going to do it.”
Meanwhile, the field season is about to end atop Stratton. The team has ingested enough Thai noodles, bean burritos and Yerba Maté to tide them over until next summer, but as they pack up their gear, they’ve got something new to ponder: a Bicknell’s-Veery hybrid.
Frey, who has kept a blog while on the mountain, mentions the discovery in her June 20 post: “This bird sang a slightly odd Veery song and then about every 3rd song it sang a short Veery song that ended abruptly and broke into the ending of a Bicknell’s thrush song! We quickly captured it in a mist net playing its own song back to itself.”
VCE plans to run DNA tests on the bird to find out whether the mother is a Bicknell’s or a Veery, says McFarland. The hybrid is not necessarily an indication of climate change, as some recent reports have suggested.
“Veery is very nearby Bicknell’s Thrush habitat and they are closely related,” he says. “It is just interesting to find. Now, if it began to occur more and more, then it might be interesting to understand why. Hybrids occur among organisms all the time.”
It’s now after 11 a.m. The team takes down the mist nets. McFarland stuffs his pockets with the nylon and carries the netting poles over his shoulder, cracking jokes at Rimmer’s expense. “We have fun too,” he admits.
McFarland takes a moment to rest inside the hut. Tugging on the binoculars strapped around his long-sleeved flannel, he glances out the window.
“I’m not necessarily a religious person, but I sort of find my mystery in life in things like Bicknell’s Thrush,” he muses. “I would be really sad if they go. These are like great works of art. You’re not going to recreate them again.”
In signal processing and electrical engineering, phase response is the relationship between the phase of a sinusoidal input and the phase of the output signal of any device that accepts an input and produces an output, such as an amplifier or a filter.
Amplifiers, filters, and other devices are often characterized by their amplitude and/or phase response. The amplitude response is the ratio of output amplitude to input amplitude, usually expressed as a function of frequency. Similarly, the phase response is the phase of the output with the input as reference; the input is defined as zero phase. A phase response is not limited to the range 0° to 360°, because phase can accumulate without bound, for example through a long time delay.
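As a concrete illustration (a hypothetical example, not part of the original text), consider a first-order RC low-pass filter with frequency response H(jω) = 1/(1 + jωRC). Its phase response is −arctan(ωRC), so the output lags the input by 45° at the cutoff frequency:

```python
import math

def rc_lowpass_phase_deg(freq_hz, r_ohms, c_farads):
    """Phase of the output relative to the input, in degrees (input = 0 degrees)."""
    w = 2 * math.pi * freq_hz
    return -math.degrees(math.atan(w * r_ohms * c_farads))

R, C = 1_000.0, 1e-6                 # 1 kilohm, 1 microfarad (example values)
fc = 1 / (2 * math.pi * R * C)       # cutoff frequency, ~159 Hz
print(round(rc_lowpass_phase_deg(fc, R, C), 6))   # -45.0 degrees at cutoff
```

At very low frequencies the phase approaches 0°, and at very high frequencies it approaches −90°; a chain of such stages can accumulate far more than 360° of total phase.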
methane
methane, also called marsh gas, colourless, odourless gas that occurs abundantly in nature as the chief constituent of natural gas, as a component of firedamp in coal mines, and as a product of the anaerobic bacterial decomposition of vegetable matter under water (hence its alternate name, marsh gas). Methane also is produced industrially by the destructive distillation of bituminous coal in the manufacture of coal gas and coke-oven gas. The decomposition of sludge by anaerobic bacteria in sewage-treatment processes also produces a gas rich in methane.
Methane is the simplest member of the paraffin series of hydrocarbons. Its chemical formula is CH4. It is lighter than air, having a specific gravity of 0.554. It is only slightly soluble in water. It burns readily in air, forming carbon dioxide and water vapour; the flame is pale, slightly luminous, and very hot. The boiling point of methane is −162 °C (−259.6 °F) and the melting point is −182.5 °C (−296.5 °F). Methane in general is very stable, but mixtures of methane and air, with the methane content between 5 and 14 percent by volume, are explosive. Explosions of such mixtures have been frequent in coal mines and collieries and have been the cause of many mine disasters.
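Two of the figures above follow from simple arithmetic. The short sketch below (not from the original article) recomputes the specific gravity from standard molar masses and encodes the quoted 5-14 percent explosive range:

```python
M_CH4 = 16.04   # molar mass of methane, g/mol
M_AIR = 28.97   # mean molar mass of dry air, g/mol

specific_gravity = M_CH4 / M_AIR
print(round(specific_gravity, 3))   # 0.554, matching the figure quoted above

def is_explosive(methane_vol_percent):
    """True when a methane-air mixture lies in the 5-14 percent-by-volume range."""
    return 5.0 <= methane_vol_percent <= 14.0

print(is_explosive(9.5), is_explosive(3.0))   # True False
```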
The chief source of methane is natural gas, which contains from 50 to 90 percent methane, depending on the source. Methane produced by the destructive distillation of bituminous coal and by coal carbonization is important in locations where natural gas is not plentiful.
Since commercial natural gas is composed largely of methane, their uses may for all practical purposes be considered identical. Because of its abundance, low cost, ease of handling, and cleanliness, such gas is widely used as a fuel in homes, commercial establishments, and factories.
Methane is an important source of hydrogen and some organic chemicals. Methane reacts with steam at high temperatures to yield carbon monoxide and hydrogen; the latter is used in the manufacture of ammonia for fertilizers and explosives. Other valuable chemicals derived from methane include methanol, chloroform, carbon tetrachloride, and nitromethane. The incomplete combustion of methane yields carbon black, which is widely used as a reinforcing agent in rubber used for automobile tires.
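The steam-methane reaction mentioned above is CH4 + H2O → CO + 3H2. As a rough stoichiometric sketch (illustrative only, not from the article, and assuming complete conversion), the hydrogen yield per kilogram of methane is:

```python
M_CH4 = 16.04   # molar mass of methane, g/mol
M_H2 = 2.016    # molar mass of hydrogen, g/mol

def h2_yield_kg_per_kg_ch4():
    """Mass of H2 produced per kg of CH4, assuming complete conversion."""
    mol_ch4 = 1000.0 / M_CH4        # moles of CH4 in 1 kg
    mol_h2 = 3 * mol_ch4            # 3 mol H2 per mol CH4, from the equation above
    return mol_h2 * M_H2 / 1000.0   # kg of H2

print(round(h2_yield_kg_per_kg_ch4(), 3))   # 0.377 kg H2 per kg CH4
```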