text large_stringlengths 148 17k | id large_stringlengths 47 47 | score float64 2.69 5.31 | tokens int64 36 7.79k | format large_stringclasses 13 values | topic large_stringclasses 2 values | fr_ease float64 20 157 |
|---|---|---|---|---|---|---|
A method of measuring refractive indices of gases by observations of Rayleigh scattered light is described. The method depends on the fact that the intensity of such scattered light is proportional to the square of the refractivity of the gas. Results are presented for several gases at a wavelength of 1216 Å. The values are placed on an absolute basis by use of the value of the refractive index of nitrogen found at 1220 Å by the Čerenkov method. Some extensions of the method are suggested.
P. GILL and D. W. O. HEDDLE, "Determination of the Refractive Indices of Gases in the Vacuum Ultraviolet. II. The Rayleigh Scattering Method," J. Opt. Soc. Am. 53, 847-851 (1963) | <urn:uuid:0d6bc395-8022-4e5b-9264-d0baf393838e> | 3 | 166 | Academic Writing | Science & Tech. | 62.256462 |
If you are a follower of TV crime shows, it is likely that you’ve come across one of the CSI offshoots (CSI stands for Crime Scene Investigation) and a slightly less well known show called ‘Cold Case‘. In both these shows, difficult crimes (usually murders) are solved using the most up-to-date forensic methods and incredible detective work. However, it will be obvious to even the most jaded TV watcher that the CSI crew get to have a lot more fun with the latest gadgets and methodologies. The reason for that is clear: with a fresh crime scene there is a lot more evidence around and a lot more techniques that can be brought to bear on the problem. In a ‘Cold Case’ (where the incident happened years before), options are much more limited.
Why bring this up here? Well it illustrates nicely how paleo-climate research fits in to our understanding of current changes. Let me explain….
For the last 30 years or so, the amount of information we have about the planet has gone up by a couple of orders of magnitude – mainly due to satellite information on atmospheric (radiation, temperature, humidity, rainfall, cloudiness, composition etc.), ocean surface (temperature, ice cover, windiness) and land properties (land cover, albedo) etc. Below the surface, we are now measuring much more of the ocean changes in heat content and carbon. This data, while still imperfect, has transformed our view of the climate such that the scientists studying it can seriously discuss details of problems that twenty years ago were not even thought of as issues. “CSI – Planet Earth” if you like.
Comparatively, the amount of information we have for any period in the past is less – hundreds (in some cases a few thousand) of records of climate ‘proxy’ data (i.e. records that are related to climate, such as tree rings or isotope ratios, but that aren’t direct thermometers or rain gauges) that are not necessarily optimally spaced, nor necessarily well-dated, nor uncontaminated by non-climate influences. However, there is the great advantage of a much longer time period to work with, as well as a greater variety of changes to investigate. Think of the people that work on that as the ‘Cold Case’ crew.
The most prevalent reasonably scientific question about current climate changes is ‘how do we know that this isn’t natural variability?’. A number of versions of that question came up in the House hearing last week (a nice report from the proceedings can be found here). Some of those comments were serious, some were ridiculous, but all essentially pointed to the same issue. Kevin Trenberth and Richard Alley answered it best when they pointed out that the causes of ‘natural variability’ – whether the sun, volcanoes or ocean changes – should be detectable (but haven’t been), and that the anthropogenic ‘hypothesis’ should have consequences that are also detectable (which have). Add in the modelling studies which indicate that current conditions can’t be explained without including greenhouse gases and you have a pretty solid case that what is happening is in large part anthropogenic.
A rather more specious comment heard often (including at this hearing) is that ‘if it was warmer before, then the current warming must be natural’ or alternatively ‘if you can’t explain all of the past changes, how can you explain anything now?’. First of all, there are many periods in Earth history that are unequivocally accepted to be warmer than the present – the Pliocene (3 million years ago), the Eocene (50 million years ago) and the mid-Cretaceous (100 million years ago) for instance. Less clearly, the Eemian interglacial period or the Early Holocene may have been slightly warmer than today. Thus, if that logic were appropriate, no-one should bother worrying about climate change until sea levels start to approach mid-Cretaceous levels (about 100m above today’s level!).
However, the logic is fatally flawed. It is akin to a defense lawyer arguing that their client can’t possibly have committed a particular murder because other murders have happened in the past that were nothing to do with them. That would get short shrift in a courtroom, and the analogous point gets short shrift in the scientific community too. Of course, it is possible that our suspect was involved in previous murders too – but obviously the further back you go, the harder it is to pin it on them. And clearly, there will be past murders where they have a clear alibi.
A better tactic for the defense is obviously to try and pin it on someone else – and if that someone else has a record – then all the better. Therefore, ‘the sun did it’ is a frequent accusation, but as we have discussed here quite often, this time around the sun has an alibi and there are reliable witnesses to back him up.
Given the better information and resources available for the CSI crew, it is natural that their assessment of the current case will generally hold sway. Cold Cases (or paleo-climate) are of course of paramount interest: they provide a much wider set of conditions that set the stage for the modern analyses and provide plenty of test cases for us to hone our techniques (such as climate modelling). However arguments from paleo are extremely unlikely to trump the modern analyses – whether they refer to the medieval warm period or the Phanerozoic.
So to summarise, CSI-Planet Earth have a good case for pinning the latest warming on greenhouse gases. Cold Case has evidence that they were involved in some previous cases (the last glacial period for instance), though they’ve definitely ruled our suspect out for a few others (e.g. the 8.2kyr event). It would be hard to argue that our suspect should be acquitted because there have been some crimes they didn’t commit!
Update: I should have linked to this Newsday piece: Hot on their global trail by Bryn Nelson where I first tried out this analogy. | <urn:uuid:b4a84f90-a6e0-4b07-88ba-609e25137bcc> | 2.890625 | 1,290 | Personal Blog | Science & Tech. | 43.982561 |
Viruses are, by far, the smallest organisms in the world. Viruses are stripped down to an absolutely minimal design: a protein capsule containing DNA or RNA (a number of viruses contain their genetic information in RNA instead of DNA). Viruses survive and reproduce by infecting a cell and commandeering the cellular synthetic machinery to make more viruses. Then the viruses lyse the cell and start the cycle over again.
There is an entire class of viruses known as bacteriophages that prey exclusively on bacteria. Many of these viruses have a common structure.
The DNA is injected into the bacteria through the baseplate.
HIV, the virus that causes the disease AIDS and is the most famous virus in the world right now, has this basic structure:
This is an image of rhinovirus 14, one of the many rhinoviruses.
What you are seeing are the interlocking proteins of the virus capsid. Each color represents a different type of protein.
This image was developed at the University of Wisconsin. This and many other virus structures are available: Follow this link to view an example virus structure. | <urn:uuid:3018f554-8b9e-4b48-89ab-7b702dfe27f6> | 3.890625 | 232 | Knowledge Article | Science & Tech. | 37.678071 |
Several groups have loudly declared their intentions in the past couple of years to attempt human cloning, but the announcement by Advanced Cell Technology in Worcester, Mass., that it had succeeded (as reported in Scientific American and elsewhere) still seemed to catch many people off guard. Some of that surprise had less to do with the deed itself than with controversies over whether ACT had accomplished all that it claimed and how the news was spread [see page 18]. In retrospect, however, the idea that human cloning would emerge less contentiously looks naive.
The first, most serious reservations are the scientific ones. ACT acknowledged that its work fell far short of producing a human embryo with stem cells of therapeutic interest and settled instead for a demonstration that human cells can be cloned. Other scientists are skeptical of even that claim, if not openly dismissive of it. Carrying an embryo to only the six-cell stage is no proof of cloning at all, they say, because a few early rounds of cell division can occur in a genetically inert egg cell. ACT might better have waited to publish until more convincing results were in hand. Time will tell whether or not ACT's claim stands up.
This article was originally published with the title A Ready-Made Controversy. | <urn:uuid:755aa2cb-616f-48ec-adac-320903ab7a50> | 2.84375 | 249 | Truncated | Science & Tech. | 47.013123 |
For their research, Katija and Dabiri trained their sights not on krill but on small jellyfish, which can also swarm in large schools. They tracked how individual jellyfish carried water as they swam upward in the water column by observing the track of glowing dye injected into the water [see video below] as well as by measuring the kinetic energy the jellies generated in their wakes.
But why settle for such small sea dwellers? Although one might expect massive animals, such as whales, to have more impact on mixing individually, Dabiri, an assistant professor of aeronautics and bioengineering, explains that smaller organisms that travel in large schools—crustaceans and zooplankton for example—would have more of a global impact because they're so widespread and numerous.
Per Darwin's theory, however, it is not just critical mass that matters, but body shape. Dabiri explains that the quickest and most efficient swimmers—those that are smooth and bullet-shaped—are the least effective mixers, whereas slower and more saucer-shaped creatures will drag along proportionately more water.
How much water is moving? For it to have much importance for mixing purposes, water needs to be carried about a meter. From the observations and numerical simulations, Dabiri notes, "We expect that fluid is being carried at least on the magnitude of meters—if not tens of meters."
Extrapolating from their work, Katija and Dabiri suggest that in large schools these organisms likely have an even greater mixing power. In a massive krill migration for example, "it will be much more difficult for water to slip through the cracks" and not be carried along, Dabiri says.
But no one is quite sure how—and whether—the dynamic is actually playing out across the world's oceans. "It's not clear how you will go from that to a global model," Dewar says. Other considerations include how organisms' swimming style would affect water transport and how the combined force of these animals' drift might add up to a worldwide impact on ocean circulation. If it turns out to be as large a component as some are beginning to think, it will need to be incorporated into computer climate models. And that would be no small task because today's models are not nuanced enough to include data at the level of a school, much less an individual animal—to say nothing of complexities involving possible feedback loops down the road.
"Our paper raises more questions than it answers," Dabiri acknowledges. But, he says, it is casting light on what might be an important dynamic of oceans that has been right under our noses—or at least our hulls. | <urn:uuid:3542a611-0323-4bb9-8728-29fad3c1ae3f> | 4.15625 | 555 | Knowledge Article | Science & Tech. | 40.798333 |
Published: Tuesday, January 29, 2013 Last Updated: Tuesday, January 29, 2013
An international team of scientists has sequenced the genome of the chickpea, a critically important crop in many parts of the world, especially for small-farm operators in marginal environments of Asia and sub-Saharan Africa
Full access to this article is for registered users only. Registration is free-of-charge and allows access to all content on our web communities.
Already registered? Then please log in at the top of the page. | <urn:uuid:b668e717-eae6-4d63-9876-8039e57d5d94> | 3.265625 | 106 | Truncated | Science & Tech. | 37.47 |
In quantum eraser experiments, getting information about one entangled photon decides whether the second photon behaves classically or quantum-mechanically (i.e. interferes). The optical path lengths for these photons choose the time order of these events, so we can delay the "decision" until after what it decides about. But in the "standard version" of such delayed-choice quantum erasure, this decision is made randomly by physics.
I've just found a much stronger version - one in which we can control this decision, affecting earlier events.
Here is a decade-old Phys. Rev. A paper about its successful realization, and here is a simple explanation:
We produce two entangled photons - first spin up, second spin down or oppositely.
Photon s passes through a double slit on which two different quarter-wave plates are installed, changing its polarization to circular in two different ways.
Finally there are two possibilities:
x y R L
y x L R
where columns are: linear polarization of p, initial linear polarization of s, circular polarization of s after going through slit 1, circular polarization of s after going through slit 2.
So if we know only the final circular polarization of s, we still don't know which slit was chosen, so we should get interference. But if we additionally know if p is x or y, we would know which slit was chosen and so interference pattern would disappear.
So let us add a polarizer on the p path - depending on its rotation we can or cannot get the required information - by rotating it 45 degrees we choose between classical and interfering behavior of s ... but depending on the optical path lengths, this choice can be made later ...
Why can't we send information back in time this way?
For example, place the s detector in the first interference minimum - while the brightness of the laser is constant, rotating the p polarizer should affect the average number of counts at the s detector.
What for? For example, to construct a computer with a time loop using many such single-bit channels - immediately solving NP-hard problems like finding satisfying cryptokeys (a key which, used to decrypt, doesn't produce noise):
Physics from QFT to GRT is Lagrangian mechanics - it finds the action-optimizing history of the field configuration - e.g. closing hypothetical causal time loops, like solving the problem we gave it.
Ok, the problem is when there is no satisfying input - time paradoxes - so physics would have to lie to break the weakest link of such a cause-and-effect loop.
Could it lie? I think it could - there are plenty of thermodynamic degrees of freedom which seem random to us, but if we could create additional constraints like causal time loops, physics could use these degrees of freedom to break the weakest link of such a loop.
What is wrong with this picture? | <urn:uuid:094f2564-a56b-4043-a9b8-2674d61d82f5> | 2.90625 | 549 | Comment Section | Science & Tech. | 47.706279 |
Sometimes, mineral compositions that are stable at high temperature become unstable when temperature decreases. They may exsolve ("unmix") so that a grain that was once uniform contains blebs, patches or stringers of two minerals. Exsolution is generally only visible in XP views.
Exsolution in Feldspar and Pyroxene
The views above, both XP, show two examples of exsolution. The one on the left shows a large K-feldspar grain with a single twin down its center. Included in the K-feldspar are stringers of plagioclase. K-feldspar with unmixed plagioclase is called perthite. The K-feldspar and plagioclase existed as a single solid solution mineral at high temperature but unmixed due to cooling.
The view on the right is of a clinopyroxene grain. At high temperatures, pyroxenes with intermediate Ca/Mg values are stable. Upon cooling they may unmix into two pyroxenes -- one Ca-rich and the other Ca-poor -- yielding a striped grain like the one shown.
The view of perthite (left) comes from the University of Lille, France: http://www.univ-lille1.fr/geosciences/cours/cours_mineralo/cours_mineralo_3.html.
The view on the right comes from the University of Melbourne: | <urn:uuid:ceace9ec-7609-447c-a4a4-5ad371221e6f> | 3.40625 | 308 | Knowledge Article | Science & Tech. | 43.924649 |
Forbes and Mississippi Governor Haley Barbour say we shouldn't really worry much about the Deepwater Horizon oil spill—after all, natural oil seeps are constantly leaking hundreds of thousands of barrels of oil into the Gulf of Mexico and everything is fine. So, are they right?
As you might guess, there's a bit of distortion going on here.
Natural seeps are real—kind of the underwater oil deposit equivalent of a natural spring of water popping up through the ground on land. They really do release a lot of oil into the world's oceans—as much as 14 million barrels per year. But, as Cutler Cleveland, Professor in the Department of Geography and Environment at Boston University, wrote on the Oil Drum blog, they do that at a much slower rate than man-made oil spills.
The Deepwater Horizon site releases 3 to 12 times the oil per day compared to that released by natural seeps across the entire Gulf of Mexico. By May 30, the Deepwater Horizon site had released between 468,000 and 741,000 barrels of oil, compared to 60,000 to 150,000 barrels from natural seeps across the entire Gulf of Mexico over the same 39 day period.
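As a quick sanity check on those figures (my own back-of-the-envelope arithmetic in Python, not Cleveland's calculation):

```python
days = 39
spill_low, spill_high = 468_000, 741_000   # barrels from the Deepwater Horizon site by May 30
seep_low, seep_high = 60_000, 150_000      # barrels from all Gulf natural seeps over the same period

# Per-day rates
spill_per_day = (spill_low / days, spill_high / days)   # roughly 12,000 to 19,000 barrels/day
seep_per_day = (seep_low / days, seep_high / days)      # roughly 1,500 to 3,800 barrels/day

# Lowest spill estimate vs. highest seep estimate, and vice versa
print(spill_per_day[0] / seep_per_day[1])   # ~3x
print(spill_per_day[1] / seep_per_day[0])   # ~12x
```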
Natural seeps also don't run constantly or consistently. They stop and start, and put out more or less oil over time. And most of the seeps have been seeping for a very long time.
Why does all this matter? I've said it before and I'll say it again: Dose makes the poison.
Smaller amounts of oil, released at a slower rate, into a local ecosystem that has evolved in tandem with the ongoing natural seep isn't as big of a deal as a whole metric crap-ton of oil dumped quickly into a larger area of ocean. (Just like smaller amounts of Corexit oil dispersant can be legitimately safe, even though we don't know anything about the toxicity of the product when used in huge quantities.)
The existence of natural oil seeps is not a legitimate argument against the very real need for concern about the effects of a massive oil spill. Tell your friends. And maybe Gov. Barbour, if you get a chance.
Oil Drum blog writers Gail Tverberg and Prof. Dave Summers were instrumental in answering my questions about oil seeps. For more details on the seeps, read Cutler Cleveland's full post, and this follow-up by Summers.
Maggie Koerth-Baker is the science editor at BoingBoing.net. She writes a monthly column for The New York Times Magazine and is the author of Before the Lights Go Out, a book about electricity, infrastructure, and the future of energy. You can find Maggie on Twitter and Facebook. | <urn:uuid:1f65d497-b969-4e80-b372-15f4930ccd86> | 3 | 562 | Personal Blog | Science & Tech. | 59.218612 |
Alligators are cold-blooded. They cannot generate their own heat. Instead, they take on the temperature of their environment. Notice in the infrared images how the alligator is cool compared to the warm-blooded human holding it. The alligator's temperature is close to room temperature. Notice also how cold the alligator's eyes are. In the wild, alligators will adjust their body temperature by entering water
to cool off, or by basking in sunlight to warm up. | <urn:uuid:2f4fc5d3-c238-48dc-b921-ccc4748fa295> | 3.421875 | 103 | Knowledge Article | Science & Tech. | 42.594231 |
Research @ KICP
Projects Archive: CAPMAP
Cosmic Background Radiations
CAPMAP is an attempt to measure the polarization anisotropy of the cosmic microwave background (CMB) using the 7m Crawford Hill Telescope in New Jersey, along with intensive instrumentation additions.
The Cosmic Microwave Background (CMB), as the relic radiation from the hot, dense phase of the early universe, is an invaluable cosmological probe. In particular, the small variations in the CMB from point to point across the sky encode an immense amount of information regarding the structure and composition of the early universe. Indeed, variations in the intensity, or temperature, of the CMB have now been measured with sufficient accuracy and precision that they place important constraints on cosmological models. However, these temperature anisotropies are not the only structures observable in the CMB. The CMB should also have an anisotropic polarized component, which represents a source of additional cosmological information.
Even though there have been several searches for this cosmological polarization in the three decades since Penzias and Wilson (1965) first reported the detection of the "unpolarized" microwave background, the polarized component of the CMB has so far eluded detection. This is not that surprising, since the current body of theory strongly predicts that the polarization of the CMB should be an order of magnitude smaller than the temperature anisotropies (roughly one part in a million, or about 5 µK at most), which is below the detection threshold of all measurements to date. However, with the increasing sensitivity of microwave receivers, the limits on the polarized component of the CMB have steadily improved and the tightest limits are now around 10 µK (Subrahmanyan et al., 2000; Hedman et al., 2001; Keating et al., 2001).
Current efforts to detect and measure the cosmological polarization can be divided into two groups. On the one hand, there are experiments which were designed primarily to measure the temperature anisotropies, but which have some polarization sensitivity as well. The recent Saskatoon, TOCO and ATCA experiments are in this tradition, as are the ongoing CBI, DASI, BOOMERanG and MAXIPOL projects. On the other hand, there are experiments which were designed exclusively to look for CMB polarization. POLAR, which has produced the strongest limit on CMB polarization yet, falls within this group, along with the ongoing COMPASS and Polatron experiments. The project described here is another of these dedicated polarimeters: the Princeton IQU Experiment or PIQUE (the IQU refers to the Stokes parameters that describe the polarization state of electromagnetic radiation).
KICP Highlights & News
Talks, Lectures, & Workshops | <urn:uuid:7ffd5220-c510-4f0a-a739-3069dfa72e20> | 2.78125 | 575 | Knowledge Article | Science & Tech. | 21.653775 |
The question is:
Given the triangle ABC with AB = 5 and BC = 5√3/3
The measure of the angle A is 30 degrees. How many choices are there for the measure of angle C?
I've hit a problem with the final solution.
By the law of sines, sin C / AB = sin A / BC, so sin C / 5 = sin 30° / (5√3/3). Multiplying both sides by 5 gives sin C = 5 · (1/2) / (5√3/3) = 3/(2√3) = √3/2.
Since I'm dealing with sin, I reference the unit circle and see that sin is √3/2 at π/3 and 2π/3.
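For completeness, a quick check (my own, not part of the original post) that both candidate angles give a valid triangle:

```latex
C = 60^\circ \Rightarrow B = 180^\circ - 30^\circ - 60^\circ = 90^\circ, \qquad
C = 120^\circ \Rightarrow B = 180^\circ - 30^\circ - 120^\circ = 30^\circ
```

Both leave a positive angle for B, so there are indeed two valid choices for C.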
That means I have 2 choices for angle C? | <urn:uuid:c0d7df4a-ed4f-4a2f-b0ab-19d06d91d8fa> | 3.140625 | 107 | Q&A Forum | Science & Tech. | 86.96 |
Beaches turning to mud and changes in wildlife are among the signs of a warming climate recorded by an Inuit community in Canada.
By Alex Kirby
BBC News Online environment correspondent
They say increasingly unpredictable weather is significantly altering the way they live.
They are having to get used to unfamiliar birds and fish arriving from further south.
Some believe the warming could be the start of a process that may prove unstoppable.
The Inuit live in Sachs Harbour, a tiny community on Banks Island, which lies north of Canada's North West Territory, lapped by the Beaufort Sea.
Coping with the unknown
They reveal their forebodings in Sila Alangotok - Voices From The Tundra, a film made by Television Trust for the Environment (TVE) in its Earth Report Series for BBC World.
Click here to watch BBC World's report on Sachs Harbour.
One resident, Rosemarie Kuptana, is on the board of the International Institute for Sustainable Development.
She tells TVE: "What's scary is that there's uncertainty because we don't know when to travel on the ice, and our food sources are getting further and further away.
Musk oxen are replacing vanishing caribou
"We can't read the weather like we used to. It's changing our way of life. We live in a very extreme and harsh climate now.
"We've always had extreme weather conditions, whether it's 24-hour sunlight, or whether we've got blizzards with no visibility in the winter. What is more extreme now is that there's no predictability."
The earlier springs and later autumns make it harder for the people of Sachs Harbour to predict when they can hunt and trap.
There are novel species, as John Lucas explains: "Springtime comes around, and you start seeing different kinds of birds, barn owls, that sort of thing.
"We've never seen them up here before. We're getting different kinds of geese, ducks, mallard, pintails that we never used to see around here."
No way: Mud covers a beach
There are now salmon to be caught - another sign of warming weather, the Inuit believe.
But it is what is happening to the land itself that many of them find most disturbing.
Most of Banks Island is covered by permafrost, which is now melting. John Keogak tells TVE: "I'd say about '87 we started noticing these mudslides. Before, it used to be a little sloughing from the snow left on the side of the banks.
"But now it's the permafrost that's coming down, and the ground being disturbed, and more of the permafrost being exposed to the sun and the heat and the wind.
"Now there's more rain and the sun is shining all the time... Once this starts I don't know what's going to stop it... I think the bigger it gets the faster it will go.
Arctic char: Salmon are moving in
"It just started off small. Down here we used to be able to walk along the beach - now it's all mud."
Another resident recalls how, when he worked at the local airport, he reported a thunderstorm.
Warning the world
He was told: "You guys can't get thunderstorms. It's too cold." This year, in contrast with the recent trend, has turned out surprisingly cold.
Rosemarie Kuptana asks: "How can we prepare ourselves for such unpredictability? What will happen to us if we can no longer rely on our instincts and traditional wisdom?
"I believe the Arctic is a very important ecosystem to the health of the rest of the planet.
"I guess what we can do is just try and educate people and say: 'Hey, watch out, this is what's happening to us.'" | <urn:uuid:789bf757-27b9-41d5-a0c0-273fc07a8c25> | 3.15625 | 805 | Truncated | Science & Tech. | 64.530119 |
Consider a watch face which has identical hands and identical marks for the hours. It is opposite to a mirror. When is the time as read direct and in the mirror exactly the same between 6 and 7?
The ten arcs forming the edges of the "holly leaf" are all arcs of circles of radius 1 cm. Find the length of the perimeter of the holly leaf and the area of its surface.
An equilateral triangle is sitting on top of a square.
What is the radius of the circle that circumscribes this shape? | <urn:uuid:0095e3f8-5ddd-49aa-8278-8e1214731ae2> | 3.046875 | 117 | Q&A Forum | Science & Tech. | 70.805 |
Hugh Pickens writes "The LA Times reports that scientists analyzing infrared light reflected by 24 Themis, one of the largest asteroids in the solar system, have discovered evidence of water ice as well as organic compounds — findings that bolster a leading theory for the origins of life on Earth that the essential building blocks of life came from asteroids. 'Up until now there was no sign that asteroids had any abundant organics or ice on them,' says Joshua P. Emery, a planetary astronomer at the University of Tennessee. Typically, ice on the surface of an object such as 24 Themis would quickly vaporize and vanish, says planetary scientist Richard Binzel. 'Seeing freshly exposed ice on the surface, now that's a surprise. It has to be replenished from below, somehow.' The possibility that water could have come from asteroids adds weight to the theory that water and organic molecules may not have originated on Earth because the Earth did not become conducive to water or organic molecules until relatively recently." | <urn:uuid:1c693633-5418-4dd0-b8f7-452153f851c5> | 3 | 198 | Truncated | Science & Tech. | 27.914412 |
There’s a large element of “chance” in all biological systems. Whether it’s a biochemical process within a cell, the movement of cells throughout an organism, or the evolution of those organisms, stochasticity plays a large part in biology. Unfortunately, this is often missed by most students of biology — either because they fail to grasp the concept or it is never even presented to them.
Yann points us to an article in PLoS Biology on teaching students how to appreciate the importance of randomness (I prefer to call it stochasticity) in biology:
Klymkowsky MW, Garvin-Doxas K (2008) Recognizing Student Misconceptions through Ed’s Tools and the Biology Concept Inventory. PLoS Biol 6(1): e3 doi:10.1371/journal.pbio.0060003
The article presents some approaches that can be used by instructors when teaching about various aspects of biology to reinforce the role that stochasticity plays in all of biology.
Time dilation is a physics concept related to relativity and special relativity.
Types of time dilation
In Albert Einstein's theories of relativity, there are two types of time dilation. In special relativity, clocks that are moving with respect to (according to) a stationary observer's clock run slower. For example, if Person A moves faster than Person B, Person A will experience time at a slower rate, and a clock he is carrying will tick slower than the clock person B is carrying.
In general relativity, clocks that are near to a strong gravitational field (such as a planet) run slower.
Time dilation due to relative velocity
The formula for determining time dilation in special relativity is:

Δt' = Δt / √(1 - v²/c²)

- Δt is the time interval for an observer (e.g. ticks on his clock) – this is known as the proper time,
- Δt' is the time interval for the person moving with velocity v with respect to the observer,
- v is the relative velocity between the observer and the moving clock,
- c is the speed of light.

It could also be written as:

Δt' = γ Δt

- γ = 1 / √(1 - v²/c²) is the Lorentz factor.
A simple summary is that more time is measured on the clock at rest than the moving clock; therefore, the moving clock is "running slow".
When both clocks are not moving relative to each other, the two times measured are the same. This can be proven mathematically by setting v = 0, which gives γ = 1 and therefore Δt' = Δt.
For example: In a spaceship moving at 99% of the speed of light, a year passes. How much time will pass on earth?
Substituting v = 0.99c and Δt = 1 year into the formula: Δt' = 1 year / √(1 - 0.99²) ≈ 7.09 years.
So approximately 7.09 years will pass on earth, for each year in the spaceship.
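A quick numerical check of this example (a minimal Python sketch; the function and variable names are mine, not from any particular library):

```python
import math

def dilated_time(proper_time, v, c=299_792_458.0):
    """Time measured by the other observer for a given proper time, at relative speed v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)  # Lorentz factor
    return gamma * proper_time

# One year of proper time aboard a ship travelling at 99% of the speed of light
print(dilated_time(1.0, 0.99 * 299_792_458.0))  # ~7.09 years pass on Earth
```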
In ordinary life, people move at speeds much less than the speed of light; even considering space travel, speeds are not great enough to produce easily detectable time dilation effects, and such vanishingly small effects can be safely ignored. It is only when an object approaches speeds on the order of 30,000 km/s (10% the speed of light) that time dilation becomes important.
However, there are practical uses of time dilation. One such example is with regard to keeping the clocks on GPS satellites accurate. Without accounting for time dilation, GPS'es would be useless. | <urn:uuid:7517089b-ad13-4e1a-bb42-74f902a8f9a5> | 3.953125 | 470 | Knowledge Article | Science & Tech. | 47.824464 |
Theta/gamma nested oscillations form a neural code:
An item is represented in working memory by firing within a gamma cycle; different items are represented in their order at different discrete phases of a theta cycle.
- Storage of 7 +/- 2 short-term memories in oscillatory subcycles. Lisman JE, Idiart MA. Science. 1995 Mar 10; 267 (5203): 1512-5. [ PDF format ]
- Gating of human theta oscillations by a working memory task.
Raghavachari S, Kahana MJ, Rizzuto DS, Caplan JB, Kirschen MP, Bourgeois B, Madsen JR, Lisman JE. J Neurosci. 2001 May 1;21(9):3175-83 [ PDF format ]
- The theta/gamma discrete phase code occurring during the hippocampal phase precession may be a more general brain coding scheme. Lisman J. Hippocampus. 2005; 15(7): 913-22. [ PDF format ]
- Theta oscillations in human cortex during a working-memory task: evidence for local generators. Raghavachari S, Lisman JE, Tully M, Madsen JR, Bromfield EB, Kahana MJ. J. Neurophysiol. 2006 Mar; 95(3): 1630-8.
[ PDF format ] | <urn:uuid:602976da-346a-47e9-aed6-96627c801d0f> | 2.703125 | 297 | Content Listing | Science & Tech. | 66.829322 |
points in space
root at power7200.ping.be
Fri Jan 17 03:08:29 EST 1997
In article <c07craig-ya023180001001971216460001 at news.csus.edu>,
c07craig at sfsu.edu (c weiser) writes:
>Suppose we know vector PQ which is in a plane in space. We also know angle
>theta, which is the angle between PQ and PR. PR is also in the same plane
>as PQ, and the unit length of PQ is the same as PR. So, given all this,
>how do we find vector PR?
>In other words, how do we solve for point R in three dimensional space?
>Another way to put it is how can one use a polar coordinate system that is
>in an arbitrary plane in space?
>please email me at c07craig at sfsu.edu (zero between c and 7)
It depends a bit on how you get your plane defined, but I'd say the
simplest thing to do is to use an orthogonal transformation that
reduces your vector PQ to the OX axis, and the perpendicular direction
to the plane to the OZ axis.
So, if PQ is given by (a,b,c) in the Oxyz system, and the plane
is perpendicular to (d,e,f) (because the plane has as eq. dx + ey + fz = 0)
(and let us assume that both vectors are normalized to 1)
then we have to think up a transformation Oxyz --> OXYZ
that maps (a,b,c) onto (1,0,0) and maps (d,e,f) onto (0,0,1).
Assuming PQ is part of the plane, we have to have ad + be + cf = 0
(otherwise something is wrong).
Finding the 3rd axis is easy in this case, just take the cross product
of (a,b,c) and (d,e,f). That will give you a vector (g,h,i). At least
if you apply a minus sign.
Now define a matrix A = [ [a,b,c], [g,h,i], [d,e,f] ]
Define B = transpose of A.
Clearly, B . [1, 0, 0]^T = [a, b, c]^T
B . [0, 1, 0]^T = [g, h, i]^T
B . [0, 0, 1]^T = [d, e, f]^T
So, B is the inverse operator of the one we wanted, hence
the sought-for transformation is inverse(B).
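To make the recipe concrete, here is a small sketch of the whole construction in Python/NumPy (the helper name and the choice of the in-plane axis as normal × PQ are my own; the poster's sign convention may differ):

```python
import numpy as np

def find_R(P, Q, normal, theta):
    """Return R such that PR lies in the plane, |PR| = |PQ| and angle(PQ, PR) = theta (radians)."""
    P = np.asarray(P, dtype=float)
    PQ = np.asarray(Q, dtype=float) - P
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)

    x = PQ / np.linalg.norm(PQ)   # new X axis: along PQ
    y = np.cross(n, x)            # new Y axis: in the plane, perpendicular to PQ
    A = np.vstack([x, y, n])      # rows are the new basis vectors; A maps old coords -> new coords

    PQ_new = A @ PQ               # equals (|PQ|, 0, 0) up to rounding
    PR_new = np.array([np.cos(theta) * PQ_new[0],
                       np.sin(theta) * PQ_new[0],
                       0.0])      # rotate by theta within the plane
    PR = A.T @ PR_new             # A is orthogonal, so its transpose maps back to the old coords
    return P + PR
```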
Hope this helps, and please check it, I might have made some transposition errors.
More information about the Comp-bio | <urn:uuid:85684b4c-9f29-42e2-be49-9d9c6cf7fdfc> | 2.84375 | 633 | Comment Section | Science & Tech. | 89.469337 |
This new meteorological theory is as revolutionary as Galileo's assertion to the Pope that the earth revolved around the sun, and not the other way round... and it has huge implications for climate change in the fight to save the planet's forests.
"First published in 2007 by two Russian physicists, Victor Gorshkov and Anastassia Makarieva, the still little-known biotic pump theory postulates that forests are the driving force behind precipitation over land masses. Since the biotic pump turns modern meteorology on its head, it has faced stiff resistance from some meteorologists and journals. Meanwhile, it has received little attention in the public or policy-sphere. Yet if Gorshkov and Makarieva's theory proves correct, it would have massive implications for global policy towards the world's forests, both tropical and temperate.
"The biotic pump is a mechanism in which natural forests create and control ocean-to-land winds, bringing moisture to all terrestrial life," Gorshkov and Makarieva told mongabay.com in a recent interview. According to them it is condensation from forests, and not temperature differences, that drives the winds which bring precipitation over land."
Below is an excerpt from the long interview published at the link provided above, when asked about the phenomenon of reduced precipitation in the Amazon.
Victor Gorshkov and Anastassia Makarieva: According to recent analyses, during 1973-2003 precipitation in the Amazon River basin was declining at a rate of 0.3 percent annually, which means a trend of about 10 percent for the entire period. This does not include the most recent devastating droughts of 2005 and 2010. In the meantime, deforestation in the basin has amounted to about 30 percent during the same period. Deforestation mostly disturbed southern and south-eastern parts of the basin, where the precipitation/evaporation is less than in the basin core. Assuming that the total biotic pump intensity is a function of the integral of local precipitation over the total forest-covered area, one can conclude that the decrease in precipitation intensity is of the same order of magnitude as the degree of biotic pump deterioration. As deforestation marches to the interior of the basin and affects the ever more productive forests with the most precipitation, the disruption of the water cycle in the basin will increase disproportionately.
Another story about the Biotic Pump process written more in laypersons terms
Long-term cross-disciplinary study of the Amazon rainforest water cycle. | <urn:uuid:31c137ea-e2e0-4931-aafd-0387cd51b557> | 3.203125 | 509 | Personal Blog | Science & Tech. | 31.285694 |
[void] is an object which represents "not present" or "non-existent". This is used primarily for method return values. For example, [map->find] will return [void] when the object being searched for is not contained within the map. [void] is the default return value for a method which does not explicitly return anything. It is recommended that [void] only be used in method returns and in value comparison. To illustrate, inserting a void into a map would make it difficult to determine whether or not the value exists within the map.
[void] in Lasso has the following attributes.
This example shows how void is used to indicate "not found" when searching a map.
local(mp = map('a'=1, 'b'=2, 'c'=null))
void == #mp->find('c') // yields false. 'c' was contained within the map
#mp->find('d') == void // yields true. 'd' was not contained within the map
Please note that periodically LassoSoft will go through the notes and may incorporate information from them into the documentation. Any submission here gives LassoSoft a non-exclusive license and will be made available in various formats to the Lasso community. | <urn:uuid:922b2609-7928-4d56-bd35-9b71ea170acd> | 2.890625 | 263 | Documentation | Software Dev. | 56.383333 |
Wheels are a pretty effective method of getting around. Is there any reason why they never evolved in nature?
• It is not true to say that nature hasn't invented the wheel: bacteria have been using it to get around for millions of years. It is the basis of the bacterial flagellum, which looks a bit like a corkscrew and which rotates continuously to drive the organism along. About half of all known bacteria have at least one flagellum.
Each is attached to a "wheel" embedded in the cell membrane that rotates hundreds of times per second, driven by a tiny electric motor. Electricity is generated by rapidly changing charges in a ring of proteins that is attached to the surrounding membrane. Positively charged hydrogen ions are pumped out from the cell surface using chemical energy. These then flow back in, completing the circuit and providing the power for the ...
To continue reading this article, subscribe to receive access to all of newscientist.com, including 20 years of archive content. | <urn:uuid:9991a861-ddcf-40d9-8046-2df85374df6e> | 3.875 | 207 | Truncated | Science & Tech. | 48.746629 |
Dr. Ingo A. Pecher
Examination of marine seismic reflection data for the occurrence and distribution of gas hydrates on the continental margins of New Zealand
Brief summary of research topic
Gas hydrate, an ice-like compound of water and gas molecules, trapped in marine sediments stores immense amounts of methane, which must be taken carefully into account as a climate-relevant greenhouse gas and as a possible cause of seafloor instability. In addition, methane from gas hydrates could play a dominant role as a major energy resource in the future. So far, the origin and the natural controls on hydrates and their impacts on the environment are poorly understood.
The occurrence of gas hydrates is most often indicated by the geophysical registration of a bottom-simulating reflection (BSR). The BSR is a seismic reflection which can be recognised by a negative reflection-coefficient. It evolves at a boundary between shallow sediments that contain gas hydrates and deeper sediments storing free methane gas. The existence of free gas below the BSR greatly reduces the seismic velocity in those sediments, thereby creating a seismic reflection at the boundary. Further, it is characteristic for BSR structures to follow isotherms which are nearly parallel to the morphology of the sea floor, as opposed to following a stratigraphic horizon (Figures 1&2).
Figure 1: Brute stack of a six channel seismic profile acquired by R/V L’Atalante in 1993 on the GeodyNZ-Sud cruise (normal-moveout velocity used: 1500-1700 m/s).
Figure 2: Instantaneous amplitude section of Figure 1, after a broad bandpass (1/5-150/200 Hz) was applied.
Figure 3: Fiordland research area and the different surveys that are covering it. The numbers indicate line numbers (yellow & red) and shotpoints (green), respectively.
Nowadays, methane (and therefore gas hydrates, as one of its major sources) is increasingly held responsible for climate changes and mass extinctions of flora and fauna in Earth's history. In the event of a fast destabilisation of gas hydrates, the free gas may become an important agent of climate change. For example, releases of methane from hydrates seem to have been a driving force in rising temperatures since the last glacial maximum.
Recent studies show that free methane gas below the BSR migrates upwards through overlying sediments. Therefore an aim of the project will be to examine the dynamics of methane migration in New Zealand's margins. Any free gas that escapes into the ocean and further on into the atmosphere would have a dominant impact on the environment.
Additionally, knowledge of the distribution of gas hydrates will contribute to natural hazard prevention. Recent investigations indicate that hydrate instability may cause subsea landslides on the continental slope and therefore must be considered as a possible triggering mechanism for tsunamis.
My PhD project will investigate the occurrence and distribution of gas hydrates on the continental margins of Fiordland (southwest of New Zealand's South Island) and Hikurangi (east of New Zealand's North Island). An extensive marine seismic reflection data set covering those areas already exists, acquired by various government and research agencies and the oil industry (Figure 3). They will be reprocessed with the GLOBE Claritas software. By evaluating the data, it is possible to compute a 3D model of the underground. Hence, the gas hydrate resources on New Zealand's continental shelves will be estimated. Further, the impact of gas hydrates on the slope stability on convergent margins and the mechanisms controlling gas hydrate formation will be investigated.
In summary, the goals of this project are twofold: first to locate and delineate new hydrate deposits and second to understand the interaction of sub-seafloor methane with the ocean. | <urn:uuid:50291901-1ec6-4090-9ca0-d0df21fe218e> | 3.5 | 801 | Academic Writing | Science & Tech. | 28.85 |
File Clam (Limaria sp.) These delicate bivalve molluscs live on the underside of rocks, or under shell rubble in rock pools in the intertidal zone. They usually have a cream or whitish shell and red or pale orange tentacles. They swim actively by beating their tentacles while rhythmically opening and closing both shell valves. File clams filter feed on plankton, and their swimming behaviour undoubtedly helps them evade predators. Handling should be avoided – not because they are dangerous but because they are delicate and easily damaged, and their tentacles stick to skin and frequently break off.
File Clams occur throughout the Indo-west Pacific.
Queensland Museum's Find out about... is proudly supported by the Thyne Reid Foundation and the Tim Fairfax Family Foundation. | <urn:uuid:1f05adfd-fbf4-4a6e-8142-13ef3333611a> | 3.5 | 158 | Knowledge Article | Science & Tech. | 45.296408 |
May 6, 2002 - Defining Constant Objects and Arrays
Tips: May 2002
Yehuda Shiran, Ph.D.
When you define a constant Array, only the reference to the data remains constant. The content of the Array can be changed. It allows you to keep the address of an object or an array fixed, and it also keeps you from deleting the reference to them. You can freely change the content of the object or each one of the array elements, but you cannot change the length of the array or the structure of the object. The following example declares an array of 12 objects of the Object data type (ECMAScript Edition 3):

const arrayOfObjects : Object[] = new Object[12];

To learn more on JScript .NET, go to Column 108, JScript .NET, Part II: Major Features.

Anywhere in the application you can assign a value to an element of the array:
arrayOfObjects = "contentOfEighthElement"; | <urn:uuid:12d015d2-463b-4035-94da-ffa7157400df> | 2.859375 | 215 | Tutorial | Software Dev. | 56.601581 |
To warm up, here are a few pictures I found at the KING-5 weather site (taken today by donmonroe) near the Skagit Valley.
Here is a great video of the convection developing yesterday: click here.
Such active convection was not isolated in western Washington, but extended over and east of the Cascades. You can see the story in the visible satellite imagery. At 9:30 AM (first image), there were clouds and a few light showers around.
Why such vigorous convective showers today? The reason: there was a huge change in temperature with height....also known as the vertical lapse rate. Today at 5 PM, at around 18,000 ft (500 hPa pressure) the temperatures were around -36C (-35F), while near sea level the temperatures were around 50F. That is a very large difference in temperature. Such large vertical temperature changes result in the development of vertical instability, or convection, just as you see in your hot cereal pot when you heat it from below. Meteorologists can appraise the potential for convection by plotting temperature and dewpoint on a sounding chart. Here is one at 5 PM for Quillayute on the coast (red is temperature, blue dashed is dewpoint, and the numbers on the left are pressure levels in hPa--1000 hPa is near sea level, 700 hPa around 10,000 ft, etc). Believe me, a meteorologist would take notice of this large change in temperature with height.
With strong convection we get the possibility of severe thunderstorms (or at least as severe as we get around here). Today two funnel clouds were reported descending out of thunderstorms. One in Yelm (check out Scott Sistek's blog for more info on this--I don't want to steal his thunder) and the other was in Pasco (around 7 PM tonight).
And let me remind you that today is the 40th anniversary of the strongest tornado to strike the Northwest...the 1972 F3 Vancouver,Wa tornado that injured 300 people and killed six.
|Some damage from the Vancouver tornado of 1972| | <urn:uuid:17872ba0-31b3-45f7-8c0a-e56995e542da> | 2.921875 | 438 | Personal Blog | Science & Tech. | 58.091127 |
Jacob, U et al. (2011): (Table A1) Species list of the high Antarctic Weddell Sea food web. doi:10.1594/PANGAEA.788061, Supplement to:Jacob, Ute; Thierry, Aaron; Brose, Ulrich; Arntz, Wolf E; Berg, Sofia; Brey, Thomas; Fetzer, Ingo; Jonsson, Tomas; Mintenbeck, Katja; Möllmann, Christian; Petchey, Owen L; Riede, Jens O; Dunne, Jennifer A (2011): The role of body size in complex food webs: A cold case. Advances in Ecological Research, 45, 181-223, doi:10.1016/B978-0-12-386475-8.00005-8
Human-induced habitat destruction, overexploitation, introduction of alien species and climate change are causing species to go extinct at unprecedented rates, from local to global scales. There are growing concerns that these kinds of disturbances alter important functions of ecosystems. Our current understanding is that key parameters of a community (e.g. its functional diversity, species composition, and presence/absence of vulnerable species) reflect an ecological network's ability to resist or rebound from change in response to pressures and disturbances, such as species loss. If the food web structure is relatively simple, we can analyse the roles of different species interactions in determining how environmental impacts translate into species loss. However, when ecosystems harbour species-rich communities, as is the case in most natural systems, then the complex network of ecological interactions makes it a far more challenging task to perceive how species' functional roles influence the consequences of species loss. One approach to deal with such complexity is to focus on the functional traits of species in order to identify their respective roles: for instance, large species seem to be more susceptible to extinction than smaller species. Here, we introduce and analyse the marine food web from the high Antarctic Weddell Sea Shelf to illustrate the role of species traits in relation to network robustness of this complex food web. Our approach was threefold: firstly, we applied a new classification system to all species, grouping them by traits other than body size; secondly, we tested the relationship between body size and food web parameters within and across these groups and finally, we calculated food web robustness. We addressed questions regarding (i) patterns of species functional/trophic roles, (ii) relationships between species functional roles and body size and (iii) the role of species body size in terms of network robustness. Our results show that when analyzing relationships between trophic structure, body size and network structure, the diversity of predatory species types needs to be considered in future studies.
The species list that encompasses 488 consumer and resource species from the high Antarctic Weddell Sea was compiled by analyzing over 500 publications: for a full description of the methods used and a full list of these publications see Jacob (2005, http://nbn-resolving.de/urn:nbn:de:gbv:46-diss000118684) | <urn:uuid:5c2f682c-d03c-426a-83f8-597b6442887e> | 2.9375 | 648 | Academic Writing | Science & Tech. | 33.960707 |
The term, yield, carries the same meaning in chemistry as it does in any field where an amount is produced. Farmers look at yield of a crop per acre. Bankers look at the yield (interest earned) on savings, investments, etc. Chemists look at the yield of a chemical reaction. Yield consistently means the amount produced to all these individuals.
The theoretical yield of a reaction is determined from the balanced equation. The balance tells us that for some input (reactants for chemists; seed, water, and fertilizer for farmers) we should get a certain output. Recall that all chemical reactions express amounts in terms of atoms, molecules, or moles. Chemical reaction equations cannot express masses directly because chemicals interact part to part and masses are not parts. Think about nuts, bolts, and washers. In a normal application, one nut and one washer are attached to a single bolt. The bolt, the washer, and the nut each have their own mass but the interactions are part to part. The masses have no real bearing on the interaction. We can, however, measure matching numbers of bolts, washers, and nuts by weighing some (figuring out the individual masses just like we figure out atomic or molecular masses) and then weighing large batches of each just as a chemist weighs chemicals. Just as bolts, washers, and nuts are counted individually, a mole, as should be understood, is a count of parts. A single mole is always 6.022 x 10^23 parts, whether the parts are atoms, molecules, or what have you.
Here's an example:
Iron (II) oxide is heated in a furnace with carbon. This produces Fe metal and carbon dioxide. The basic translation of these English statements into "chemistry" is:

2 FeO + C → 2 Fe + CO2
The theoretical yield of Fe product from FeO is 2 moles for every two moles reacted. This ratio can be reduced to 1:1 because 2:2 is equivalent to 1:1. BUT we can't measure a mole directly. We can only measure mole equivalents through the use of mass. Chemists use atomic masses for elements and molar masses for compounds. Thus one needs to calculate the molar mass of FeO which is 55.847 g + 15.999 g = 71.846 g. The atomic mass for Fe is read directly from the Periodic Table and is 55.847 g. So the reaction equation above indicates that reacting 143.692 g (2 moles) of FeO with at least one mole of C will produce 111.694 g (2 moles) of Fe. I say at least one mole of C because the reaction equation shows that the minimum ratio of FeO:C is 2:1. Any more than 1 mole C will simply leave some unreacted C.
Exact moles are not always used in an experiment. Some chemicals are expensive; some can be quite toxic. Various constraints may cause the chemist to work with parts of a mole BUT the balance above tells us we are fine as long as we maintain the ratios of reactants. This means that for any mole amount of FeO, we need half that amount (2:1) of C for a balanced reaction.
As an example, lets assume someone brought in an FeO sample that weighed 215.0 g. If the sample is pure, we can calculate the theoretical yield of pure Fe using the reaction equation shown above. The first step is to figure the number of moles of FeO to which 215.0 g is equivalent.
1 mole     x mole
-------- = --------
71.846 g   215.0 g

Solving for x we find x = 2.99 moles.
From the equation above, this means that we should produce 2.99 moles of Fe since the reaction is 1 mole:1 mole. See that 2.99:2.99 is equivalent to 1:1. Knowing that the theoretical yield is 2.99 moles, we can calculate the theoretical yield as a mass instead of as moles, since we can't weigh moles without "Dr. Boyle's Mole Balance(tm)". The calculation is found in the following manner.
   x g        55.847 g
----------- = ----------
2.99 moles     1 mole

Solving for x, we find that the original 215.0 g of FeO should (theoretically) produce (yield) 166.98 g of Fe.
This brings us from the world of theory to the world of reality. Of course, nothing we do is 100%. Our bodies are not 100% efficient at using the food we eat. Our autos are not 100% efficient at burning the fuel we pour into them.
In refining the ore (FeO) above, reality sets in. In all lab procedures, there is a dose of reality. The amount of material that is produced when the experiment is run is known as the actual yield. Just as a farmer has an expected yield per acre (theoretical yield), the same farmer has an actual yield (harvest) per acre. Chemists are no different. The farmer might figure the percentage yield and so might chemist. In both cases, the calculation is simple. In both cases, the actual yield is measured at the final harvest stage. The chemist removes the Fe from the reaction container. The farmer hauls the produce to the barn, silo, or what have you and the amount that was really produced is measured. Actual yields only come from doing the experiment. They cannot be obtained any other way. The reaction equation tells us the theory. Running the experiment gives us the real results.
And so we come to calculating percent yield.
The percent yield for a chemical reaction is calculated identically to percent yield of farm produce or a savings bond. We take the actual yield divided by the theoretical yield and multiply by 100 to normalize the data. Percents are numbers normalized to the range of 0-100. Normalizing means adjusting from the actual range to a desired range.
I haven't run the experiment but I can give you an actual yield off the top of my head just for the sake of calculating % yield.
Assume, after the reaction was carried out, the Fe was weighed and found to be 125 g. The % yield is found thus:
% yield = (125 g Fe / 166.98 g Fe) x 100 = 74.86%
So here you have the various parts of theoretical yield, actual yield, and percent yield. Remember that to determine the theoretical yield, one MUST have a balanced reaction. Then IF the amount of reactants is given in any unit other than moles, one must convert to moles so that the amount of product can be determined. Lastly, one must convert back to the starting units, say grams, to express the yield in terms that can be directly measured.
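If you want to check this arithmetic yourself, here is a small script (a sketch only; it assumes a pure FeO sample, the 1:1 Fe:FeO mole ratio from the balanced equation, and the molar masses quoted above):

# Molar masses used in this lesson (g/mol)
M_FEO = 55.847 + 15.999   # 71.846
M_FE = 55.847

def percent_yield(mass_feo_g, actual_fe_g):
    """Return the theoretical Fe yield (g) from pure FeO and the percent yield."""
    moles_feo = mass_feo_g / M_FEO        # grams of FeO -> moles of FeO
    theoretical_fe_g = moles_feo * M_FE   # 1 mol FeO -> 1 mol Fe
    return theoretical_fe_g, 100 * actual_fe_g / theoretical_fe_g

theo, pct = percent_yield(215.0, 125.0)
print(round(theo, 2), "g Fe theoretical;", round(pct, 1), "% yield")

Carrying the unrounded mole value through gives 167.12 g and 74.8%; rounding to 2.99 moles first, as done above, gives 166.98 g and 74.86%.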
Go try some problems! | <urn:uuid:1c41494e-0f90-48c5-b863-61a52b3afaa1> | 4.4375 | 1,413 | Nonfiction Writing | Science & Tech. | 70.914703 |
In geology the term compression refers to a set of stresses directed toward the center of a rock mass. Compressive strength refers to the maximum compressive stress that can be applied to a material before failure occurs. When the maximum compressive stress is in a horizontal orientation, thrust faulting can occur, resulting in the shortening and thickening of that portion of the crust. When the maximum compressive stress is vertical, a section of rock will often fail in normal faults, horizontally extending and vertically thinning a given layer of rock. Compressive stresses can also result in folding of rocks. Because of the large magnitudes of lithostatic stress in tectonic plates, tectonic-scale deformation is always subjected to net compressive stress.
This page is available in 1 language | <urn:uuid:ff2a371f-3df1-4dad-a752-c36950385be9> | 4.25 | 188 | Knowledge Article | Science & Tech. | 32.752027 |
ArmonSore wrote:I take the same stance here. It's not wrong to say that a feather is accelerating, but it's not right either. It's unprovable(which sounds to me to be a very godelian statement). So we needn't say that the feather is accelerating at all. For example, if we're in a spaceship that is accelerating upwards at 9.8 m/s^2, and we drop a feather, we can't tell the difference between doing the same experiment on earth. Which is to say, the feather does the same exact thing as it does in a vacuum near the earth's surface. So you can talk about either as being true, if you want to.
But this bothers me a little bit. My above argument seems to break down since we "know" that we're in a gravitational field, not free space. If it is impossible to tell the difference then how do we know this to be true? Does someone have an answer to this? Is there a completely consistent way to view our experience on earth as not being due to a gravitational field?
I believe the key here is that earth's gravitational field is non-uniform - it weakens with distance, and points toward the center of the earth. These variations in strength and vector differentiate it from what would be observed if Earth were accelerating (at least if we are able to correlate experiments conducted in different portions of the field), and indicate that it is in fact gravitational in nature. | <urn:uuid:12f3e665-f8a1-4180-a75a-7bc5f93cd145> | 2.71875 | 306 | Comment Section | Science & Tech. | 63.408483 |
Taxonomic name: Castor canadensis (Kuhl 1820)
Common names: American beaver (English), beaver (English), Canadian beaver, castor (French), castor americano (Spanish), North American beaver (English)
Organism type: mammal
Castor canadensis (beaver) is native to North America, and has been introduced to Tierra del Fuego in southern South America, Finland, France, Poland and Russia in recent times. In its introduced range, the damming activity of the beaver can cause flooding which can damage forests. They also have the ability to quickly cut down large numbers of trees. In Finland, they compete with native beaver populations. In their native range, they cause flooding on major highways by plugging highway culverts.
Castor canadensis (beaver) is a large herbivorous rodent typically found near water. Adults may be up to 1200mm long and weigh between 18-47kg. Colour ranges from yellowish-brown to black with reddish-brown most common. Guard hairs are long and coarse and the under fur is dense and lead grey in colour. The tail is broad, scaly and dorsoventrally flattened. It is black in young animals but becomes lighter with age. Adaptations for aquatic life include nictitating membranes on the eyes, valvular ears and nose, lips closing behind incisors and webbed hind feet (Jenkins and Busher, 1979; Nummi, 2006).
lakes, natural forests, range/grasslands, riparian zones, scrub/shrublands, tundra, water courses, wetlands
Castor canadensis (beavers) are always found close to water and they require forest to provide food and building material (Nummi, 2006). Beavers have a unique ability to cut trees and this allows them to build mud and wood lodges in which they live, nest and store food. Lodges may be completely surrounded by water or built on the banks of ponds, lakes or streams. They are also able to build watertight dams which create ponds behind them where the beavers are then able to build lodges (Jenkins and Busher, 1979). This behaviour alters large areas of habitat and is the reason why beavers are termed “ecosystem engineers” (Nummi, 2006).
Castor canadensis (beavers) are known as "ecosystem engineers" for their ability to alter the physical and chemical nature of water bodies and their adjacent terrestrial systems in both their native and introduced range (Nummi, 2006). Two recent studies have investigated the impacts of beavers on ecosystems in their introduced range in southern South America. Beavers have been found to cause significant reduction in forest cover up to 30 m from water, effectively removing riparian forest. In their introduced range in South America, beavers modify the original ecosystem from closed Nothofagus forest to a grass- and sedge-dominated meadow. Nothofagus forest and seedlings are suppressed by beavers but herbaceous plants have been shown to increase in number and diversity. Unfortunately most of the increase in herbaceous plant diversity is due to invasion of the areas by non-native species (Anderson et al. 2006). Deforestation caused by C. canadensis also has the immediate effect of increased erosion due to exposed slopes (Lizarralde et al, 2004). Forests may not completely regenerate in meadows for more than 20 years after removal of beavers due to flooding and sediments completely covering the forest floor, which impedes seedling germination and establishment (Martinez Pastur et al, 2006). Anderson and Rosemond (2006) investigated the effect of beavers on the aquatic ecosystem and found that ponds created by beavers had increased productivity but at the expense of significantly reduced macroinvertebrate diversity. Via physical, chemical and geomorphological alterations, beavers modify the structure and function of entire biotic communities and ecosystems. Lizarralde et al (2004) found that beaver-colonized sites in the Tierra del Fuego Archipelago, Argentina had submerged vegetation and algae indicative of high nitrogen concentrations. Wood debris from fallen trees causes an accumulation of organic material that modifies the biochemical composition of waters, sediments, soils and adjacent riparian areas. These alterations make beaver-altered sites more suitable for introduced fish species (Salmo trutta fario, Salvelinus fontinalis and Oncorhynchus mykiss) and sustain invertebrate communities typical of slow-water habitats (Lizarralde et al, 2004). Beavers dam the river in which their lodge occurs, and sometimes the dam breaks causing extensive flooding. Dams act as barriers to migration in the stream and also form areas of impounded water behind them, increasing water temperature (Alexander, 1998). Beavers are also known for their ability to rapidly clear a forested area, and also cause flooding to roads by plugging highway culverts (Jensen et al. 2001).
Castor canadensis (beavers) are trapped and used primarily for their pelt (Langan, 1991). Beavers are being reintroduced to areas where they have been made extinct to improve wetland ecosystems.
Castor canadensis (beavers) can swim up to 8 km an hour. They secrete waterproofing oil from glands at the base of their tail.
Native range: North America from northern Mexico to northern Canada.
Known introduced range: Finland, France, Poland, Russia, Argentine and Chilean Tierra del Fuego.
Introduction pathways to new locations
Landscape/fauna "improvement": Castor canadensis (beaver) was introduced to southern South America during an Argentine government program to establish furbearers in Tierra del Fuego.
Natural dispersal: Castor canadensis (beavers) have colonized adjacent islands dispersing themselves only by their own means.
Other: Castor canadensis (beaver) was introduced to Finland as part of a programme to reintroduce the European beaver (C. fiber). (Nummi, 2006). They were introduced to Poland and farmed (Nummi, 2006)
Local dispersal methods
Escape from confinement: Castor canadensis (beavers) have escaped from fur farms etc.
Most Castor canadensis (beaver) management is through various forms of trapping for pelts. Demand for pelts has decreased so now there is little incentive for trappers to hunt beavers. Beaver colonies have been moved to other areas but in most cases other beavers move into the area and replace the beavers that were removed. Similar problems occur with trapping – removing the resident population simply allows other beavers to replace them. Dams in Canada have been blown up but it is a costly process and frequently new dams are created in the same place. Jensen et al (2001) suggest installing oversized culverts as a way of discouraging beaver plugging activity. McKinstry and Anderson (1998) state that Hancock and Bailey traps are typically used for live trapping beavers, but are bulky and expensive, and suggest steel cable snares as an alternative.
Castor canadensis (beavers) are "choosy generalist" herbivores. They eat leaves, twigs and bark of most species of woody plants growing near water and also herbaceous plants, particularly aquatics. Whilst they have a wide-ranging diet they show a large preference for certain plant species such as aspen (Populus spp.) and willow (Salix spp.). Roots and rhizomes of water lilies are a particularly important source of winter food (Jenkins and Busher, 1979).
Castor canadensis are monogamous. They usually become sexually mature during their second winter at the age of 1.5 years, although it can be delayed until 2.5 years or later (Nummi, 2006). Beavers mate once a year during winter. Gestation lasts about 105 days and the sole litter is born in spring. Litter size is usually between three and four, but can vary from one to nine (Jenkins and Busher 1979; Hill 1982 in Nummi, 2006). Kits weigh about 500g at birth.
The offspring are born fully furred and with their eyes open. They can swim within 24 hours and after several days they are also able to dive out of the lodge without any accompaniment. They leave the dam at two years of age (Anderson, 2002).
Principal sources: Christopher Anderson and Brett Maley, Institute of Ecology, University of Georgia, Athens GA 30605 and Omora Foundation, Puerto Williams, XII Region, Chile.
Nummi, P. 2006. NOBANIS – Invasive Alien Species Fact Sheet – Castor canadensis.
Jenkins, S.H. and Busher, P.E. 1979. Castor canadensis. Mammalian Species
Jenkins and Busher, 1979 and Nummi, 2006
Compiled by: Viki Aldridge, University of Washington, Tacoma, Supervised by Deborah Rudnick and IUCN/SSC Invasive Species Specialist Group (ISSG)
Last Modified: Sunday, 13 December 2009 | <urn:uuid:5cc247e8-7c58-4f81-8871-a6b83c43f0b3> | 3.65625 | 1,914 | Knowledge Article | Science & Tech. | 35.239909 |
How Big Cities' Bad Air Pollutes the Sierras
Nearly every afternoon, winds from the ocean blow pollution through three major passes in the coastal ranges -- the Carquinez Strait, Altamont Pass, and Pacheco Pass -- into the Central Valley and up against the Sierra. The streams of air carrying Bay Area emissions mix with locally generated pollution from automobile traffic, small engine exhaust, industry, and agriculture in the Valley and are diverted both north and south.
The Valley's geography is like a giant bathtub -- with a lid on top in the form of inverted layers of cool and warm air that cannot mix. This inversion layer traps both local and transported dirty air, sometimes for weeks or even months. Organized wind patterns in the summer help create an eddy or swirl-like pattern that circulates around the Valley "tub."
Winds move south in the daytime, transporting pollution toward Fresno and Bakersfield. At night, the process reverses, taking it back north. The next day, the cycle begins again and continues until weather patterns change. This collection of trapped pollutants rises up into the Sierra on a daily basis, giving large areas of the mountains some of the worst air quality in the nation.
|© 2003 Delphi International. All Rights Reserved.| | <urn:uuid:ce84793e-fe2d-4d00-b070-fdc4b15b2456> | 3.203125 | 290 | Knowledge Article | Science & Tech. | 51.374206 |
To investigate the relationship between the distance the ruler drops and the time taken, we need to do some mathematical modelling...
Two trains set off at the same time from each end of a single straight railway line. A very fast bee starts off in front of the first train and flies continuously back and forth between the. . . .
In which Olympic event does a human travel fastest? Decide which events to include in your Alternative Record Book.
The triathlon is a physically gruelling challenge. Can you work out which athlete burnt the most calories?
These Olympic quantities have been jumbled up! Can you put them back together again?
Can you sketch graphs to show how the height of water changes in different containers as they are filled?
Make your own pinhole camera for safe observation of the sun, and find out how it works.
Many physical constants are only known to a certain accuracy. Explore the numerical error bounds in the mass of water and its constituents.
How much energy has gone into warming the planet?
Work out the numerical values for these physical quantities.
Examine these estimates. Do they sound about right?
Have you ever wondered what it would be like to race against Usain Bolt?
Get some practice using big and small numbers in chemistry.
When a habitat changes, what happens to the food chain?
Can you work out which drink has the stronger flavour?
Is it cheaper to cook a meal from scratch or to buy a ready meal? What difference does the number of people you're cooking for make?
Explore the properties of isometric drawings.
Invent a scoring system for a 'guess the weight' competition.
If I don't have the size of cake tin specified in my recipe, will the size I do have be OK?
Make an accurate diagram of the solar system and explore the concept of a grand conjunction.
Use trigonometry to determine whether solar eclipses on earth can be perfect.
Work with numbers big and small to estimate and calculate various quantities in physical contexts.
Are these estimates of physical quantities accurate?
Can you deduce which Olympic athletics events are represented by the graphs?
Explore the relationship between resistance and temperature
How would you go about estimating populations of dolphins?
Work with numbers big and small to estimate and calculate various quantities in biological contexts.
Analyse these beautiful biological images and attempt to rank them in size order.
When you change the units, do the numbers get bigger or smaller?
Can you work out what this procedure is doing?
Which dilutions can you make using only 10ml pipettes?
Can you suggest a curve to fit some experimental data? Can you work out where the data might have come from?
Estimate these curious quantities sufficiently accurately that you can rank them in order of size
Which units would you choose best to fit these situations?
Use your skill and knowledge to place various scientific lengths in order of size. Can you judge the length of objects with sizes ranging from 1 Angstrom to 1 million km with no wrong attempts?
Starting with two basic vector steps, which destinations can you reach on a vector walk?
Andy wants to cycle from Land's End to John o'Groats. Will he be able to eat enough to keep him going?
Can you rank these sets of quantities in order, from smallest to largest? Can you provide convincing evidence for your rankings?
Is it really greener to go on the bus, or to buy local?
Where should runners start the 200m race so that they have all run the same distance by the finish?
Work with numbers big and small to estimate and calculate various quantities in biological contexts.
What shapes should Elly cut out to make a witch's hat? How can she make a taller hat?
How would you design the tiering of seats in a stadium so that all spectators have a good view?
Imagine different shaped vessels being filled. Can you work out what the graphs of the water level should look like?
What shape would fit your pens and pencils best? How can you make it?
Explore the properties of perspective drawing.
Can you draw the height-time chart as this complicated vessel fills with water?
This problem explores the biology behind Rudolph's glowing red nose.
An observer is on top of a lighthouse. How far from the foot of the lighthouse is the horizon that the observer can see?
In Fill Me Up we invited you to sketch graphs as vessels are filled with water. Can you work out the equations of the graphs? | <urn:uuid:296cc7e2-18da-402d-b2cf-ae071b23a1a1> | 3.75 | 932 | Content Listing | Science & Tech. | 60.449012 |
People have been using gold particles dispersed in water — gold hydrosols — for medical purposes for over 1000 years. Recently, hydrosols containing gold nanoparticles have become particularly popular because they have exciting potential in cancer therapies, pregnancy tests and blood sugar monitoring.
What have muesli, social networking sites and flocks of birds got to do with mathematics? Scientists and students from the University of Bath will be explaining all at the Royal Society's prestigious Summer Science Exhibition, which opens today.
Water is essential for life on Earth, and it is a resource we all take for granted. Yet it has many surprising properties that have baffled scientists for centuries. Seemingly simple ideas such as how water freezes are not understood because of water's unique properties. Now scientists are utilising increased computer power and novel algorithms to accurately simulate the properties of water on the nanoscale, allowing complex structures of hundreds or thousands of molecules to be seen and understood.
When insects go foraging, they zoom off from their nest in complex zig-zag paths. How do they manage to find their way back home? And how do they manage to do so along a straight path? These questions are explored in an exhibit at the Royal Society Summer Science Exhibition, currently taking place at the Southbank Centre in London.
The shadow of the late Martin Gardner looms large in Manchester this week as the workshop How to talk maths in public, organised by the Institute of Mathematics and its Applications, draws to a close. The question on everyone's mind is "who will fill the enormous hole left by his absence"? | <urn:uuid:68f468c6-b661-480e-88df-7c9a3375d834> | 3.390625 | 320 | Content Listing | Science & Tech. | 37.820882 |
Mathematicians Confirm Life on Mars
A new study uses mathematical techniques to examine the Viking data. The study examines the raw data for signs of complexity, an indicator of life forms. Chemical processes are not complex; life forms are. The study appears to indicate that the labelled release results were produced by life. The paper, with Gilbert Levin as co-author, appears in the International Journal of Aeronautical and Space Sciences:
Complexity Analysis of the Viking Labelled Release Experiments
This writer had the privilege of working with the scientists who in 1996 founds signs of fossil life on Martian meteorite ALH84001. That evidence has also been disputed. The argument over life on Mars may not be settled until scientists have a sample of Martian soil beneath their microscope. In today's funding climate, Mars has a low priority. While NASA was promised the basic research funding to go beyond Earth, planetary science budgets are being slashed. A complicated sample return mission is in jeopardy without US participation. Another way would be to send a scientist to Mars with a microscope. We all hope that the argument is settled someday. | <urn:uuid:7941e535-8516-477b-9d94-73e971ecb79e> | 2.78125 | 226 | Personal Blog | Science & Tech. | 35.52494 |
Botany online 1996-2004. No further update, only historical document of botanical science!
Cellulose is composed of linear chains of covalently linked glucose residues. It is very stable chemically and extremely insoluble. In the primary cell wall, a single glucose polymer consists of roughly 6000 glucose units; in the secondary wall, their number increases to 13,000 - 16,000 units. Cellulose chains form crystalline structures called microfibrils. A microfibril with a diameter of 20 - 30 nm contains about 2000 molecules.
Crystalline and non-crystalline sections alternate. In the crystalline sections, the cellulose forms three-dimensional lattices due to the formation of the highest possible number of hydrogen bonds. This high degree of organization is not achieved in the other sections, which are called paracrystalline. Crystals polarize light, so by studying cellulose between crossed polarizers the main orientation of the microfibrils can be determined. In the primary wall they occur in every possible orientation (disperse texture). During the development of the secondary wall they are deposited in layers (as lamellae). The microfibrils of each layer are parallel to each other (parallel texture), and their orientation changes from layer to layer. Often, especially in very strong cell walls (like those of cotton), the microfibrils are arranged screw-like around the cell's axis. In such cases the turning angle changes from layer to layer (screw-like texture).
© Thomas A. NEWTON
Although cellulose is by far the most common macromolecule - nature synthesizes roughly 10^11 tons per year and breaks most of it down again - astonishingly little is known about its biosynthesis. The enzyme (or enzyme complex?) cellulose synthase is still a largely hypothetical quantity.
Since the preparation techniques (freeze etching) were optimized several years ago, directed particles (or particle complexes) have been observed at the outer side of the plasma membrane. It is assumed that they take part in the synthesis of cellulose. In some algae (Micrasterias, Spirogyra) and also in cells of higher plants, the complexes form hexagonal rosettes (R. M. BROWN, D. MONTEZINOS, 1976; O. KIERMAYER and U. B. SLEYR, 1979; W. HERTH, 1983).
A striking correlation exists between the orientation of cortical microtubules and that of the neighbouring microfibrils (spatially separated by the plasma membrane). Despite existing criticism, it looks as if the microtubules take part in orienting cellulose synthase (the directed arrangement of the complexes seen in the electron microscope) and thus exert an indirect influence on the orientation of the microfibrils. Experiments with microtubule-disrupting agents influence the orientation pattern of newly developing microfibrils.
© Peter v. Sengbusch - Impressum | <urn:uuid:4cac7380-7b4b-40d6-a506-c9fa4f8543a9> | 3.015625 | 616 | Knowledge Article | Science & Tech. | 30.147371 |
Ahh…There is nothing like a Christmas tree design created with bacteria and other microorganisms in a petri dish. If I was a scientist stuck in a lab all day long, and if I needed some holiday cheer, I would definitely try to hook up some petri dishes with festive fungi too. Let me ask you this, when you think of fungus, what is the first thing that pops into your mind? For me, it’s the black or green moldy stuff that grows on blocks of cheese in my refrigerator. I’m not even sure if that is technically fungi, but it sure is disgusting. The scientist who created these festive fungi hopes to change the bad reputation it has, and she hopes to show others that it can be beautiful in its own way.
These petri dish fungal delights were created at the J. Craig Venter Institute on the 3rd floor in the fungal room. If you click over to the website, you’ll even get treated to what kind of fungi these Christmas trees and snowman were made from. Aspergillus, Penicillium and Neosartorya are just a few of the strains used. What fascinates me most about these is how this petri dish artist, Stephanie Mounaud, was able to predict the patterns in which these fungi would grow. I remember from biology class that it isn’t easy to determine that, which lets me know that she really knows her stuff. According to her blog post, she is already brainstorming for next year’s designs.
I can’t help but wonder if she captured these photographs at just the right moment, and if now they are grown beyond their original petri dishes. I also wonder what she did with these when this project was over. I mean, this fungi is too beautiful to just kill with bleach. Wow, I never thought I’d write that fungi was beautiful, but it is.
Fungi Christmas Trees In Petri Dishes
(Click Images To Enlarge) | <urn:uuid:c17330b7-3afc-4ea0-8c98-3dfcda8e7c6e> | 2.78125 | 421 | Personal Blog | Science & Tech. | 64.752941 |
solenoid
solenoid, a uniformly wound coil of wire in the form of a cylinder having a length much greater than its diameter. Passage of direct electric current through the wire creates a magnetic field that draws a core or plunger, usually of iron, into the solenoid; the motion of the plunger often is used to actuate switches, relays, or other devices.
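For a rough sense of the numbers, the field inside a long, ideal solenoid is B = μ0·n·I, where n is the number of turns per unit length and I is the current. A short sketch of that formula (the turn count, length, and current below are made-up illustrative values):

import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, in T*m/A

def solenoid_field(turns, length_m, current_a):
    """Magnetic field (tesla) inside a long, ideal solenoid: B = mu0 * n * I."""
    n = turns / length_m        # turns per metre
    return MU_0 * n * current_a

# Example: 500 turns wound over 0.25 m carrying 2 A
print(solenoid_field(500, 0.25, 2.0))  # about 5.0e-3 T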
What made you want to look up "solenoid"? Please share what surprised you most... | <urn:uuid:0b94e974-7ca1-4bdf-9bb4-1d0071dd6ed7> | 3.28125 | 103 | Knowledge Article | Science & Tech. | 50.977998 |
...in fact appears to be in a somewhat lower density region than the immediate surroundings, where early B stars are relatively scarce. There is a conspicuous grouping of stars, sometimes called the Cassiopeia-Taurus Group, that has a centroid at approximately 600 light-years distance. A deficiency of early type stars is readily noticeable, for instance, in the direction of the constellation...
What made you want to look up "Cassiopeia-Taurus Group"? Please share what surprised you most... | <urn:uuid:2a6ff110-e432-4346-9dd3-7cbdee649a18> | 3.09375 | 139 | Knowledge Article | Science & Tech. | 49.13617 |
Common Lisp the Language, 2nd Edition
Every character has three attributes: code, bits, and font. The code attribute is intended to distinguish among the printed glyphs and formatting functions for characters. The bits attribute allows extra flags to be associated with a character. The font attribute permits a specification of the style of the glyphs (such as italics).
The treatment of character attributes in Common Lisp has not been entirely successful. The font attribute has not been widely used, for two reasons. First, a single integer, limited in most implementations to 255 at most, is not an adequate, convenient, or portable representation for a font. Second, in many applications where font information matters it is more convenient or more efficient to represent font information as shift codes that apply to many characters, rather than attaching font information separately to each character.
As for the bits attribute, it was intended to support character input from extended keyboards having extra ``shift'' keys. This, in turn, was imagined to support the programming of a portable EMACS-like editor in Common Lisp. (The EMACS command set is most convenient when the keyboard has separate ``control'' and ``meta'' keys.) The bits attribute has been used in the implementation of such editors and other interactive interfaces. However, software that relies crucially on these extended characters will not be portable to Common Lisp implementations that do not support them.
X3J13 voted in March 1989 (CHARACTER-PROPOSAL) and in June 1989 (MORE-CHARACTER-PROPOSAL) to revise considerably the treatment of characters in the language. The bits and font attributes are eliminated; instead a character may have implementation-defined attributes. The treatment of such attributes by existing character-handling functions is carefully constrained by certain rules.
Implementations are free to continue to support bits and font attributes, but they are formally regarded as implementation-defined attributes. The rules are generally consistent with the previous treatment of the bits and font attributes.

My guess is that the font attribute as currently defined will wither away, but the bits attribute as defined by the first edition will continue to be supported as a de facto standard extension, because it fills a useful small purpose.
The value of char-code-limit is a non-negative integer that is the upper exclusive bound on values produced by the function char-code, which returns the code component of a given character; that is, the values returned by char-code are non-negative and strictly less than the value of char-code-limit.
Common Lisp does not at present explicitly guarantee that all integers between zero and the value of char-code-limit are valid character codes, and so it is wise in any case for the programmer to assume that the space of assigned character codes may be sparse.
The value of char-font-limit is a non-negative integer that is the upper exclusive bound on values produced by the function char-font, which returns the font component of a given character; that is, the values returned by char-font are non-negative and strictly less than the value of char-font-limit.
X3J13 voted in March 1989 (CHARACTER-PROPOSAL) to eliminate char-font-limit.
Experience has shown that numeric codes are not an especially convenient, let alone portable, representation for font information. A system based on typeface names, type styles, and point sizes would be much better. (Macintosh software developers made the same discovery and have recently converted to a new font identification scheme.)
The value of char-bits-limit is a non-negative integer that is the upper exclusive bound on values produced by the function char-bits, which returns the bits component of a given character; that is, the values returned by char-bits are non-negative and strictly less than the value of char-bits-limit. Note that the value of char-bits-limit will be a power of 2.
X3J13 voted in March 1989 (CHARACTER-PROPOSAL) to eliminate char-bits-limit. | <urn:uuid:d6db6df3-914e-4f64-bf0e-6cb6adefaeb6> | 3.40625 | 856 | Documentation | Software Dev. | 31.528643 |
object passing: The ability to pass a copy of an object from one G2 process to another via an external interface. Object passing is accomplished through the use of a remote procedure declaration to specify which attributes of the object to send.
Off-line license: A fundamental G2 license type providing G2 for a stand-alone system.
On-line license: A fundamental G2 license type providing the capability to communicate or access other systems.
one-to-many: The cardinality of a relation, where one instance of the first class can be related to any number of instances of the second class.
one-to-one: The cardinality of a relation, where one instance of the first class can be related to, at most, one instance of the second class.
operand: In an expression, a term that participates in an arithmetic, class-qualified, concatenation, logical, or relational operation.
operation: In object-oriented programming, a function or transformation that can be applied to instances, typically in different ways for members of different classes.
operator: In an expression, a reserved symbol or character that specifies a type-specific operation.
Operator Logbook: A special workspace for displaying informational messages and signalling G2 errors. You control the placement and other properties of the logbook workspaces using the Logbook Parameters system table.
output frequency: In GFI, the interval at which to write data to a GFI output file.
output interface object: An object that GFI uses to obtain values from variables and parameters in a knowledge base and write them to an external data file.
overlay file: The output file created after using the Overlay utility, described next.
Overlay utility: A Gensym-provided utility for creating an overlay C source file from a template file as one of the steps in using foreign functions. | <urn:uuid:3fc79670-e841-43c9-8314-1532af952011> | 2.859375 | 385 | Documentation | Software Dev. | 24.659212 |
Posted by Alison on Sunday, February 19, 2012 at 11:21pm.
You don't unless it's moles of a solvent.
You could have 5 moles water and calculate the volume that would occupy if you knew the density.
5 mols H2O x (18 gH2O/1 mol H2O) = grams H2O, then
mass H2O = volume H2O x density H2O
My teacher told me something about 22.4 liters. do you know what she was talking about?
That's one of the first numbers you memorize. 22.4 L is the volume occupied by one (1) mole of a gas (technically an ideal gas) at STP (standard temperature and pressure).
So if you have 2 moles of a gas at 1 atm and 273K, it will occupy 22.4 x 2 = 44.8 L. (1 atm and 273 K are standard P and T.)
1 mole of H2 is 2 grams, 1 mole of He is 4 g, 1 mol of H2S is 34 g, and each of those, since all of them are 1 mol, will occupy 22.4 L at STP.
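(Not part of the original thread, but a quick sketch of the 22.4 L/mol idea, assuming ideal-gas behaviour at STP:)

MOLAR_VOLUME_STP = 22.4  # litres occupied by 1 mole of an ideal gas at STP (1 atm, 273 K)

def gas_volume_at_stp(moles):
    """Volume in litres occupied by `moles` of an ideal gas at STP."""
    return moles * MOLAR_VOLUME_STP

def moles_from_volume_at_stp(litres):
    """Moles of an ideal gas that occupy `litres` at STP."""
    return litres / MOLAR_VOLUME_STP

print(gas_volume_at_stp(2.0))          # 44.8 L, matching the example above
print(moles_from_volume_at_stp(5.6))   # 0.25 mol (e.g. 5.6 L of Cl2 at STP)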
Awesome thanx man! u r my vertual hero!
You must not have high standards. :-).
lol anyone who can help me in any way, tht person is my hero! So are you a real doctor? just curious lol!!! :)
Not an M.D. Ph. D. in chemistry. Retired teacher.
oh thts awesome! Thank you so much for all the help tonight. You Rock Dr Bob!!! :)
But yes, that's a "real" doctor, too.
Yeah i got tht. I hope to make a difference in life some day like you are. Just not a teacher lol. i cnt handle bad kids. haha well ima let you get back to helping. Bye
Chemistry - Convert from 5.6 moles of Cl2 (g) to liters at STP
chemistry - Convert 66 moles of NO2 gas at STP to liters
chemistry help - our teacher said on our final chemistry lab test there will be ...
chemistry - how many grams of sodium carbonate are needed to prepare 0.250 L of ...
Physics - Can someone help me with this conversion question? Suppose you are ...
Chemistry - A few questions...how many grams are in 48.7x10^24? I don't know...
Chemistry - Calculate the average disappearance of A between t=0 min and t=10 ...
chemistry - DrBob222 - For this question can u please explain steps 3 and 4 4.0 ...
chemistry - What volume of HCl gas is produced by the reaction of 2.4 liters of ...
chemistry - ihave two solutions. in the first solution , 1.0 moles of sodium ...
For Further Reading | <urn:uuid:ba5f7557-c1b0-4b0b-98b9-a730aa81c508> | 3.015625 | 631 | Comment Section | Science & Tech. | 97.421795 |
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °
Topic review (newest first)
The sum of the lengths of any two sides is greater than the length of the third side.
here is a proof for right angled triangle:
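(The working that originally followed this line seems not to have survived; a standard argument, assuming legs a and b and hypotenuse c, runs like this:)
c² = a² + b² < a² + 2ab + b² = (a + b)²
and since both sides are positive, taking square roots gives c < a + b.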
Ooohh ... so you CAN have one side of a triangle longer than the other two, it just somehow slips into imaginary space ...
Let's take the lengths of sides Mathsy has given.
Try disproving it and fail. Just try drawing a triangle with lengths of 2, 3 and 6. The 2 and 3 are too short and too far apart to be able to join up.
How do we prove that the sum of the lengths of any two sides is greater than the lenght of the third side in the first place?
Ganesh is right, but as BC=4, it can be continued further:
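(The continuation appears to be missing here; presumably it went along these lines, using BC = 4 and AC = 8 - AB:)
AB + BC > AC ⇒ AB + 4 > 8 - AB ⇒ AB > 2
AC + BC > AB ⇒ (8 - AB) + 4 > AB ⇒ AB < 6
So the triangle inequality gives 2 < AB < 6.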
The Triangle inequality is
The sum of the lengths of any two sides is greater than the length of the third side. In triangle ABC, BC = 4 and AC = 8 - AB. Write an inequality for AB.
Look up monthly U.S., Statewide, Divisional, and Regional Temperature, Precipitation, Degree Days, and Palmer (Drought) rankings for 1-12, 18, 24, 36, 48, 60-month, and Year-to-Date time periods. Data and statistics are as of January 1895.
Please note, Degree Days are not available for Agricultural Belts
Utah Temperature Rankings, December 1959
More information on Climatological Rankings
(out of 119 years)
|78th Coldest||1909||Coldest since: 1956|
|41st Warmest||1917||Warmest since: 1958| | <urn:uuid:720d1438-f6f2-4c62-b93e-5bc01fc406fd> | 2.703125 | 134 | Structured Data | Science & Tech. | 50.631579 |
Emissions from fires to the atmosphere II
The amount of dioxins, PAH (polycyclic aromatic hydrocarbons) and VOC (volatile organic compounds) emitted from fires to the atmosphere per year has been estimated. The estimate is based on the number of fires in buildings, vehicles, waste and forest fires in Sweden in 1999. It is estimated that the total emission of dioxins from fires is in the range 0.5 - 1.4 g TEQ. The total emissions of PAH and VOC is in the ranges 2-12 ton and 13-200 ton, respectively.
The estimated emission of dioxins from fires has also been investigated and approximately corresponds to the total emission from traffic or half the emissions from municipal waste combustion (Swedish data from 1993).
The fire statistics show that the amount of material combusted in building fires during a year is approximately 7500 ton, while that from forest fires is 2600 ton. Additionally, 2000 - 3000 ton is combusted in vehicle fires, fires in containers etc. The total amount of material consumed in fires in 1999 was somewhat lower compared to the estimate in the previous study from 1994. A probable explanation is a decrease in the number of fires between 1994 and 1999. However, the structure of the fire statistics has changed since 1994, which may have influenced the result. The new structure of the fire statistics prevented any reliable assessment of the underlying reasons for the differences between 1994 and 1999.
In addition to the more common types of fires during a year, individual large incidents may contribute significantly to the total emission. Such incidents include fires in municipal landfills or specific waste storage facilities (such as those for used tyres). An assessment of the consequences of such incidents has been made. This assessment implies that a large contribution to the emission of dioxins could be expected from fires in landfills and from fires in waste plastics (PVC) and tyres. Fires in deposits of wood chips and tyres are also significant potential sources of PAH and VOC
Full details of the project results are found in the report P21-407/02 that can be ordered from the Swedish Rescue Services Agency. | <urn:uuid:0f51ebf9-9462-4693-b32b-7e880ac3eaa0> | 3.140625 | 439 | Knowledge Article | Science & Tech. | 42.245 |
A full hemispherical tank of radius R drains under the influence of gravity from a circular hole of radius r at the bottom of the tank. The velocity of fluid flowing from the hole is v = √(2gh) (Torricelli's law), where g is the gravitational acceleration and h is the height of water at time t
, which is shown in the tank and plotted below. | <urn:uuid:afd1d85e-96ea-49d9-ae95-ad62f51065b7> | 2.828125 | 74 | Tutorial | Science & Tech. | 41.748276 |
Molar mass: 207.32 g/mol
Except where noted otherwise, data are given for materials in their standard state (at 25 °C, 100 kPa)
Lewisite is an organoarsenic compound, specifically an arsine. It was once manufactured in the U.S. and Japan for use as a chemical weapon, acting as a vesicant (blister agent) and lung irritant. Although colorless and odorless, impure samples of lewisite are a yellow or brown liquid with a distinctive odor that has been described as similar to scented geraniums.
Chemical reactions
The compound is prepared by the addition of arsenic trichloride to acetylene in the presence of a suitable catalyst:
- AsCl3 + C2H2 → ClCHCHAsCl2
Lewisite, like other arsenous chlorides, hydrolyses in water to form hydrochloric acid:
- ClCHCHAsCl2 + 2 H2O → ClCHCHAs(OH)2 + 2 HCl
This reaction is accelerated in alkaline solutions, with poisonous (but non-volatile) sodium arsenite being the coproduct.
Mode of action as chemical weapon
Arsenite inhibits important biochemical pathways of the human body. Arsenite poisoning specifically targets the E3 component of pyruvate dehydrogenase. Pyruvate dehydrogenase, an efficient route to producing ATP, is involved in the conversion of pyruvate to acetyl-CoA, which subsequently enters the TCA cycle. Arsenite has a high affinity for dihydrolipoamide, which is associated with the E3 component of pyruvate dehydrogenase. Binding results in inhibition of the enzyme and can lead to dire consequences. Nervous pathology usually arises from arsenite poisoning because the nervous system essentially relies on glucose as its only catabolic fuel.
It can easily penetrate ordinary clothing and even rubber; upon skin contact it causes immediate pain and itching with a rash and swelling. Large, fluid-filled blisters (similar to those caused by mustard gas exposure) develop after approximately 12 hours. These are severe chemical burns. Sufficient absorption can cause systemic poisoning leading to liver necrosis or death.
Inhalation causes a burning pain, sneezing, coughing, vomiting, and possibly pulmonary edema. Ingestion results in severe pain, nausea, vomiting, and tissue damage. The results of eye exposure can range from stinging and strong irritation to blistering and scarring of the cornea. Generalised symptoms also include restlessness, weakness, subnormal temperature and low blood pressure.
Chemical composition
Lewisite is usually found as a mixture of 2-chlorovinylarsonous dichloride with bis(2-chloroethenyl)arsinous chloride ("lewisite 2") and tris(2-chlorovinyl)arsine ("lewisite 3").
Lewisite was first synthesised in 1904 by Julius Arthur Nieuwland during studies for his PhD. His method involved reacting acetylene with arsenic trichloride in the presence of an aluminium chloride catalyst. Exposure to the resulting compound made Nieuwland so ill he was hospitalized for a number of days.
Lewisite is named after the US chemist and soldier Winford Lee Lewis (1878–1943). In 1918 Dr John Griffin (Julius Arthur Nieuwland's thesis advisor) drew Lewis's attention to Nieuwland's thesis at Maloney Hall, a chemical laboratory at The Catholic University of America, Washington D.C. Lewis then attempted to purify the compound through distillation but found that the mixture exploded on heating until it was washed with HCl.
Lewisite was developed into a secret weapon (at a facility located in Cleveland, Ohio (The Cleveland Plant) at East 131st Street and Taft Avenue) and given the name "the new G-34" to confuse its development with mustard gas. Production began at a plant in Willoughby, Ohio on November 1, 1918. It was not used in World War I, but experimented with in the 1920s as the "Dew of Death."
After World War I, the US became interested in lewisite because it was not flammable. It had the military symbol of "M1" up into World War II, when it was changed to "L". Field trials with lewisite during World War II demonstrated that casualty concentrations were not achievable under high humidity due to its rate of hydrolysis, and its characteristic odor and lacrimation forced troops to don masks and avoid contaminated areas. The United States produced about 20,000 tons of lewisite, keeping it on hand primarily as an antifreeze for mustard gas or to penetrate protective clothing in special situations.
It was replaced by the mustard gas variant HT (a 60:40 mixture of sulfur mustard and O Mustard), and declared obsolete in the 1950s. It is effectively treated with British anti-lewisite (dimercaprol). Most stockpiles of lewisite were neutralized with bleach and dumped into the Gulf of Mexico, but some remained at the Deseret Chemical Depot located outside of Salt Lake City, Utah , although as of January 18, 2012 the last of the global stockpile there was destroyed.
In 2010, lewisite was found in a World War I weapons dump in Washington, D.C.
Controversy over Japanese depots of lewisite in China
In mid-2006, China and Japan were negotiating disposal of stocks of lewisite in northeastern China, left by Japanese military during World War II. Residents of China have died over the past twenty years from accidental exposure to these stockpiles.
- Lewisite I - Compound Summary, PubChem.
- U.S. National Research Council, Committee on Review and Evaluation of the Army Non-Stockpile Chemical Materiel Disposal Program (1999). Disposal of Chemical Agent Identification Sets. National Academies Press. p. 16. ISBN 0-309-06879-7.
- Berg, J.; Tymoczko, J. L.; Stryer, L. (2007). Biochemistry (6th ed.). New York: Freeman. pp. 494–495. ISBN 978-0-7167-8724-2.
- "Lewisite(L): Blister Agent". Emergency Response Database. CDC / NIOSH. 2008.
- Vilensky, J. A. (2005). Dew of Death - The Story of Lewisite, America's World War I Weapon of Mass Destruction. Indiana University Press. p. 4. ISBN 0253346126.
- Vilensky, J. A.; Redman, K. (2003). "British Anti-Lewisite (Dimercaprol): An Amazing History". Annals of Emergency Medicine 41 (3): 378–383. doi:10.1067/mem.2003.72. PMID 12605205.
- Vilensky, J. A. (2005). Dew of Death - The Story of Lewisite, America's World War I Weapon of Mass Destruction. Indiana University Press. pp. 21–23. ISBN 0253346126.
- Upton native's role was the best defense; WWI masks thwarted[dead link]
- Vilensky, J. A. (2005). Dew of Death - The Story of Lewisite, America's World War I Weapon of Mass Destruction. Indiana University Press. p. 50. ISBN 0253346126.
- Tabangcura, D. Jr.; Daubert, G. P. "British anti-Lewisite Development". Molecule of the Month. University of Bristol School of Chemistry.
- Code Red - Weapons of Mass Destruction [Online Resource] - Blister Agents
- Tucker, J. B. (2001). "Chemical weapons: Buried in the backyard" (pdf). Bulletin of the Atomic Scientists 57 (5): 51–56. doi:10.2968/057005014.
- Abandoned Chemical Weapons (ACW) in China[dead link] | <urn:uuid:dd794f1b-156a-4e82-be25-0c5b48b81481> | 3.140625 | 1,721 | Knowledge Article | Science & Tech. | 48.187877 |
I am trying to prove this and found the proof, but have no idea how we are able to multiply by (B-1A-1) in the first step when it is not in the original statement. I understand it, but not where this comes from. Also, on the second side, where does the AB come from?
Question: Prove: (AB)-1 = B-1A-1.
Solution: To show that B-1A-1 is the inverse of AB, it suffices to check that multiplying AB by B-1A-1 on either side gives the identity matrix I; that is why B-1A-1 is introduced even though it does not appear in the original statement. Using the associativity of matrix multiplication,
(AB)(B-1A-1) = A(BB-1)A-1 = AIA-1 = AA-1 = I
(B-1A-1)(AB) = B-1(A-1A)B = B-1IB = B-1B = I.
Thus AB is invertible and B-1A-1 is its inverse | <urn:uuid:038386b6-77ff-48d5-bdb0-a6b48f1084e1> | 3.328125 | 179 | Q&A Forum | Science & Tech. | 80.605608 |
An atoll not at all what I expected
The Center for Coastal Monitoring and Assessment (CCMA) within the National Centers for Coastal Ocean Science (NCCOS) at NOAA is among those responsible for creating maps of coral reefs. Maps are a critical part of nearly every aspect of coral reef protection. In 2009, NOAA’s Coral Reef Conservation Program (CRCP) asked us to map the coral reefs of Majuro, a Pacific Ocean atoll and the most populous island in the Marshall Islands chain. We have lots of experience with mapping reefs worldwide but none in an atoll with the geological makeup of a place like Majuro. With no elevation, a land area only a few hundred feet wide, and a deep central lagoon, the atoll looked quite different from the satellite images of other coral reefs we’ve mapped in the last ten years.
Our objective for this two week mission was to map as many of the unique reef types around the atoll as possible and match them up with the colors and textures shown in remote sensing images, a process called “ground truthing.” Being physically present in the study area is an essential part of accurately mapping natural resources. We needed to confirm what we believed were coral reefs based on analysis of a satellite imagery was actually a coral reef. CRCP will eventually use the maps we created to make decisions about how to conserve and manage those coral reefs. You can read more about this project on the NCCOS website, or download the report titled, Majuro Atoll, Republic of the Marshall Islands Coral Reef Ecosystems Mapping Report.
Using a GPS, a stack of satellite images, a note book, and video and still cameras, we made our way from site to site and recorded bottom type, reef zone, water depth, coral and algal cover, and any other notable aspects of the habitats at each site that could make our maps more accurate. We used small motor boats, kayaks, snorkeling, and simply hiking along the shoreline and wading to get to the necessary locations, some of which have seen very little human contact.
What we found
Despite the language barrier, we found the Marshallese eager to offer us access to the reefs in their backyards. Pointing out their house in a satellite picture and gesturing at a draft of our maps was always enough to get us to the right location for site validation.
We visited 311 spots and used the data that we collected to validate information from satellite imagery. When completed, we drew 1,829 reef ecosystem features covering 366 km2. We documented the locations of over 700 patch reefs, nearly 200 aggregate reefs, 6 km2 of dramatic spur and groove reef formations, and many other reef habitats that ringed the island in concentric circles. These maps, videos, and images are all available in a variety of formats to help people understand the atoll. Printable atlas-style maps, map computer files for technical users (in GIS), and interactive internet-based maps with links to field videos and pictures, satellite imagery, and other information; all are available to help guide science, education, and management activities on Majuro reef ecosystems.
The State of Majuro Coral Reefs
This was a great opportunity to do a comparative study of reef health with reefs found elsewhere in the world. The vast areas of live, impossibly iridescent blue and pink toned corals at Majuro, and the regular presence of small sharks (a group devastated in many parts of the world), were heartening to witness first-hand. It was disappointing to see the degraded condition of the reef habitats near some developed areas. Adjacent to one densely populated survey site, the islanders’ homes were little more than corrugated metal and salvaged lumber boxes backed right up to the water’s edge. As we surveyed the lagoon floor near these communities, we found reefs with less coral that were overgrown with algae or fouled with garbage.
The approach that we used here can be used to map other atolls and seamounts throughout the Marshall Islands.
Potential future work could focus on systems near more heavily populated areas, with development pressures, or those with key conservation or monitoring programs such as Ailinglaplap, Ailuk, Arno, Bikini, Jaluit, Kwagelain, Mili, and Rongelap.
NCCOS Blogger Biography: Matt Kendall has authored papers on fish habitats, biodiversity, marine parks, and seafloor mapping and has conducted research in a diversity of ocean settings including Hawaii, Samoa, Puerto Rico, the Virgin Islands, Chesapeake Bay, and coastal Georgia. Dr. Kendall joined CCMA in 1998, where he is a currently a scientist with CCMA’s Biogeography Branch. Prior to joining NOAA, he worked as a researcher at Florida Marine Research Institute and the Smithsonian Environmental Research Center. Dr. Kendall earned a Ph.D. from the University of Maryland, an M.S. from North Carolina State University, and a B.S. from the University of South Carolina. | <urn:uuid:2c04b4d4-c740-4ec2-9052-6b4c04638671> | 3.125 | 1,053 | Personal Blog | Science & Tech. | 38.623585 |
The recording device that captured the sounds of black smoker venting sits here between waters that are 660 F.
Credit: University of Washington
So you're a fish.
Right now some tubeworm tartare and clams on the half shell would really hit the spot, so you're headed for the all-night café. "All-night" being the operative word because the volcanic ridge you're tooling along is nearly 1.5 miles below the surface. The term "where the sun don't shine" perfectly describes the place. It's pitch black. Darn, but what's that loud rumbling up ahead? Must be one of those pesky black smokers. Some of those babies can fry your face off. A detour is highly indicated.
The long-held assumption that black smokers are silent is wrong, according to recently published research led by Timothy Crone, a University of Washington doctoral student in oceanography. It's prompting scientists to wonder: Could the sound and vibrations of black smokers be the reason fish in total darkness avoid being poached by waters as hot as 750 F? And might similar sounds guide them to the smorgasbord of tube worms, mussels, shrimp, snails and other fauna at vents with more temperate waters?
Want to be the first on your block to hear what a black smoker sounds like? Go to http://uwnews.org/article.asp?articleID=30030 where audio of a black smoker has been combined with a video into a short movie.
The research was reported online during the inaugural month of the Public Library of Sciences' interactive journal, PLoS ONE. Aimed at involving more people in science, published results are available without a subscription and can include a wealth of audio, video and other materials.
Artist's conception of the liquid ocean beneath Europa's icy surface. Some scientists think that life similar to the microbes that live around hydrothermal vents deep in the Earth's oceans could also survive in similar environments on Europa.
Hydrothermal vents, discovered in the 1970s, are found along volcanically active ridges where seawater seeps into the seafloor, picks up heat and minerals and then vents back into the ocean depths. The hottest and most vigorous of the vents are black smokers, so called because when the fluids they emit hit the icy cold seawater, minerals in the fluids precipitate out and it looks just like dark, billowing smoke.
Because of a paper published 15 years ago, it had been thought the vents were probably playing only the sounds of silence. Still a number of scientists suspected that the vents could be generating sounds, given the obvious turbulence of the flows, Crone says.
It was decided that new recordings should be attempted because Crone and other oceanographers are looking for new ways to measure vent flows, which are a source of heat and minerals in the world's oceans that scientists would like to understand better. Commonly used instruments to measure flow are often short lived when inserted in the superheated, corrosive black-smoker fluids.
How much simpler if the vents were generating some kind of sound that could be recorded and correlated to flows, Crone says.
With funding from two organizations that help take fields of research and instrumentation in new directions, the UW Royalty Research Fund and the W.M. Keck Foundation, a deep-sea digital acoustic recording system was deployed in the Main Endeavour vent field. The field is on the seafloor about 300 miles west of Seattle on the Juan de Fuca Ridge. Crone recorded 45 hours of sound at the vent scientists call "Sully" and 136 hours at the vent called "Puffer."
That's the sound of Sully you're hearing as the video runs. Crone likens the sound to the rumbling of an avalanche or a forest fire.
How loud would it be if you were sitting a foot away? (That's something you couldn't actually do because the pressure where most black smokers are found is so intense that you'd implode.) The sound level would be somewhere between conversational speech and a hairdryer, Crone says.
Four possible mechanisms might be causing -- or contributing to -- the noise, the researchers say. For example, the flow could be pulsating or its volume could be changing as its waters cool. Dissimilar fluids in the flow could generate noise where they mix. Or the fluids rushing through the nooks and crannies of the smoker vent itself could be creating noise.
It is thought that Jupiter's moon Europa may have an ocean of liquid water below its icy surface. If environments similar to black smokers exist on Europa, they may be potential environments for life.
The sounds also appear to change as flows change in reaction to such things as the Earth's tides, the authors say. Read the full report on PLoS ONE at http://dx.doi.org/10.1371/journal.pone.0000133. Crone's co-authors include a professor at Lamont-Doherty Earth Observatory, Columbia University, and Jeffrey Parsons, formerly a UW faculty member, now with Herrera Environmental Consultants.
Buried within the broad range of sounds that produce the rumbling, Crone's analysis revealed the surprise that the vents also produce resonant tones. There could be a number of things generating such tones. For example, flows along the cavities and bumps inside the vent structures may cause tones in the same way jug band members produce sound by blowing across the mouths of their jugs, causing the air inside the jug to resonate and produce a deep tone.
Both Sully and Puffer produce resonant tones at several different frequencies that we can't discern with all the other noise generated by the vents. But you can hear examples of tones that Crone pulled out from the racket by listening at http://uwnews.org/article.asp?articleID=30030.
"With these resonant tones, each vent within the vent field is likely to have its own unique acoustic signature," Crone says.
If so, and if fish are actually using vent sounds to navigate, then the distinctive tones might be how fish find their way back to cooler vents where the eats have been particularly good.
In that case, being on top of old smoky -- all covered in sounds -- would be a good thing indeed.
Hydrothermal vents are environments on Earth that support unique forms of life, such as thermophilic microbes. Some scientists believe these systems could be models for environments in other locations. For instance, if a liquid ocean exists beneath the icy surface of Jupiter's moon Europa, hydrothermal vents on the moon just might be an excellent place to search for microbial life beyond our planet.
Related Web Sites
Catching an Underwater Eruption
Earth's Hidden Biospheres
Debating Life's Boundaries
Europa on Earth
The Lure of Europa | <urn:uuid:2a630cd5-481f-4a79-8770-a3d0044a0559> | 3.015625 | 1,417 | Knowledge Article | Science & Tech. | 54.772941 |
Animal Species:Striped Pygmygoby, Eviota sebreei (Jordan & Seale, 1906)
The Striped Pygmygoby can be recognised by its colour pattern. The species occurs throughout much of the Indo-Pacific.
Sebree's Pygmy Goby
The Striped Pygmygoby is translucent, with a stripe running laterally from the snout to the caudal peduncle. There are white dashes along the top of the stripe and also below it along the abdomen. The lateral stripe is interrupted on the caudal peduncle by a small yellow spot.
The species grows to 3 cm in length.
It has a widespread distribution throughout much of the Indo-Pacific. In Australia it occurs from north-western Australia and from the northern Great Barrier Reef to northern New South Wales.
The map below shows the Australian distribution of the species based on public sightings and specimens in Australian Museums. Click on the map for detailed information. Source: Atlas of Living Australia.
Distribution by collection data
Other behaviours and adaptations
It is often observed perched on living coral, as shown in the image.
- Myers, R.F. 1999. Micronesian Reef Fishes. Coral Graphics. Pp. 330.
- Randall, J.E., Allen, G.R. & R.C. Steene. 1997. Fishes of the Great Barrier Reef and Coral Sea. Crawford House Press. Pp. 557.
Mark McGrouther , Collection Manager, Ichthyology | <urn:uuid:b5c38c41-811c-4fec-8d92-54bf06cb6f16> | 3.296875 | 321 | Knowledge Article | Science & Tech. | 61.369152 |
In 2012, DOE granted Washington's team and their project, the Climate End Station, a total of 86 million processor hours through the Innovative and Novel Computational Impact on Theory and Experiment program. The team has 56 million processor hours on Jaguar and 30 million processor hours on Argonne National Laboratory's supercomputer to generate climate simulations. This is equivalent to the power of 28 million dual-core laptops for one hour. However, unlike millions of separate laptops, the processors in Jaguar's massive parallel array are interconnected, allowing them to perform millions of calculations simultaneously and making more complex simulations possible.
For example, our team’s first baseline experiment will require a total of 200 simulated years. For such a simulation to occur within the timeframe of a single supercomputer resource allocation (typically one year), an integration rate of multiple
simulated years per day is required. A high-resolution, century-long climate experiment of the type contemplated here requires enormous amounts of computer time – on the order of 8 million CPU-hours per simulated century.
“8 million CPU-hours per simulated century”
But how does that compare to modeling the north tower crash into the World Trade Center?
100 hours on 8 processors to simulate 0.5 sec, and 30 hours on 16 processors to simulate 0.37 sec; that works out to 800 CPU-hours and 480 CPU-hours respectively.
It took about 80 hours using a high-performance computer containing 16 processors to produce the first simulation
Impact simulations were performed using the nonlinear finite-element-based dynamic analysis software LS-DYNA [version 970 r5434a SMP] (LSTC 2005) on the IBM multi-processor nanoregatta computer system at Purdue University. Typically, we simulated the first 0.5 second of the time after impact and used an adaptive incremental approach resulting in an average time-step of 1.0x10-6 sec.
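The CPU-hour figures above are just wall-clock hours multiplied by processor count; a quick sketch (my own, in Python, using only the numbers quoted above) makes the comparison with the climate run explicit:

# CPU-hours = wall-clock hours x number of processors
impact_runs = [
    ("0.5 sec of simulated impact", 100, 8),    # 100 hours on 8 processors
    ("0.37 sec of simulated impact", 30, 16),   # 30 hours on 16 processors
]
for label, hours, procs in impact_runs:
    print("%s: %d CPU-hours" % (label, hours * procs))   # 800 and 480

climate_century = 8000000        # CPU-hours per simulated century, quoted above
print("ratio: %.0f to 1" % (climate_century / (100 * 8)))   # about 10000 to 1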
Simulating the collapse of a WTC tower should be relatively trivial compared to a climate simulation. There were 2,800 perimeter panels from the 9th floor to the top and fewer than 2,000 column sections in the core. There were probably about 9,000 horizontal beams in the core. So 50,000 components would probably be enough for a good WTC simulation, and producing one at some point in the last 10 years should not have been a problem. It is not much compared to a climate simulation with millions of cells.
How much computing power was needed to design the WTC? It was done in the early 60s. The SR-71 Blackbird was flying at 2,000 mph in 1964. That is more impressive than a skyscraper. The groundbreaking for the WTC was in 1966. So the computing power available at the time was not too impressive compared to the 1980s when some early climate models were done, but the buildings stood for 28 years and withstood 100 mph winds on several occasions. I have not heard about the Empire State Building or any other skyscrapers failing because of Sandy’s fury.
So it is certainly curious that, with the computing power available almost 40 years after completion of the towers, we can't get a good computer simulation of the supposed collapses based on publicly available, human-readable data, and yet people who believe that good climate simulations are possible do not have a problem with the lack of satisfactory building-collapse models.
So with global warming we are dealing with a huge object with lots of unknowns that have yet to be resolved. But with the 9/11 problem we have a man-made object of a kind of which hundreds have been built around the world, and yet we cannot even agree that we should have information as simple as the tons of steel and tons of concrete on every level. The steel had a kind of feedback loop: the more steel placed near the top, the more steel was needed below to support it.
9/11 is a scientific farce. It is hardly sophisticated enough to be dignified with being called a fraud. | <urn:uuid:4a5e04ff-e9de-43e5-922b-d4f0bb868c94> | 2.96875 | 808 | Comment Section | Science & Tech. | 47.687313 |
Recovery of Vegetation
Table 5 shows a list of the species that survived the eruption in the "Devastation Area." No plant survived on the Kilauea crater floor (habitat 1) or the cinder cone (habitat 2).
In the spatter-with-tree-snags habitat (3), four species survived. Several of the Metrosideros tree snags, initially believed to be dead, resprouted from the base. These were trees near the eastern border of habitat 3, which adjoined the relatively undamaged rain forest (Fig. 2). The spatter was less than 30 cm deep where resprouting from the base occurred. The resprouting trees were larger than 20 cm diameter at the base. The flushing from the base of completely defoliated trees began at the border, where the spatter was shallowest. It progressed toward about 30 m from the border, where the spatter was about 50 cm deep. The territorial spread was reflected in increasing frequency values from 14% in year 1 to 38% in year 9. In contrast, survival of the two tree-fern species (Cibotium glaucum and Sadleria cyatheoides) was fairly constant throughout the observational period. After the ash fallout, all fronds were slashed off. New fronds began to resprout after 6 months (year 1). Tree ferns survived only where the apex of the stem was not buried. In contrast, in year 3 the exotic orchid Spathoglottis plicata appeared from bulbs that were buried under 10 cm of spatter near the rain-forest border.
In the pumice-with-tree-snags area (habitat 4), only one individual of Cibotium glaucum survived on transect DD' (Fig. 2). The trunk of this individual was buried under about 1.5 m of ash and the top 50 cm remained above the surface. The tree fern regained full vigor soon after year 1 and maintained this vigor throughout the period of observation.
The largest number of species (23) survived in habitat 5. Compared to habitat 4, the ash blanket was shallower; it varied along transect AA' from 300 cm at the border of habitat 4 to 25 cm at the border of habitat 6 (Fig. 2). The physical damage from the fallout itself was also much reduced in comparison to habitat 4. For example, most of the Metrosideros trees retained their bark on the leeward side near the border of habitat 4, but they retained bark all around the stem and even on smaller branches and on the few foliage remnants near the border of habitat 6. Recovery was almost immediate within a few months after the fallout.
In this habitat also, a large number of shrub species survived, namely, nine native and three exotic species (Table 5). Also, the two tree ferns plus two herbaceous ferns (Nephrolepis hirsutula and Pteridium decompositum) were among the survivors. The other surviving herbaceous plants were either rather tall (at least 30 cm), caespitose hemicryptophytes (the two sedges, the grass), or geophytes resprouting from bulbs or fleshy rhizomes (Astelia, Tritonia, Hedychium, and Spathoglottis). (In the older rain forests, Astelia grows normally as an epiphyte.) The herbaceous survivors occurred only in areas with less than 30 cm pumice deposit. The shrubs survived where the ash was less than 60 cm deep.
TABLE 5. Surviving species in 1968 by habitats. Values are in percent frequency.
Among the shrubs, 7 of the 11 species were completely buried in year 1. The buried shrub species were five natives (Vaccinium reticulatum, Dubautia scabra, Styphelia tameiameiae, Coprosma ernodeoides, and Osteomeles anthyllidifolia) and two exotics (Fuchsia magellanica var. discolor and Rosa sp.). The two exotic shrubs occurred only where the ash blanket was less than 40 cm deep. Among the other native shrubs (Dubautia ciliolata, Vaccinium calycinum, Wikstroemia sandwicensis, and Dodonaea viscosa) were completely buried individuals that resprouted after the first examination in year 1. Thus, in contrast to the trees, all shrub species of habitat 5 were capable of resprouting after their shoot systems had been buried to the top or were broken off and buried by ash. This was not observed in the tree fern Cibotium, but instead it was observed in a few individuals of Sadleria. Vegetative resprouting from fully buried trunks was probably not observed in Cibotium because of its rarity in the study area rather than any lack of capability. Among the herbaceous survivors, nearly all shoots of the geophyte species had disappeared under the ash. Their new shoots surfaced in year 2. Several individuals of buried caespitose hemicryptophytes (Deschampsia australis, Machaerina angustifolia) also resprouted after year 1.
In the thin fallout area (habitat 6), 14 species were found to survive under the 10-25-cm-deep pumice blanket. This smaller number of survivors in comparison to the 23 surviving species in habitat 5 is not related to the disturbance factor, but to the original edaphic and climatic difference. Here in habitat 6, the number of species was smaller to begin with. The original substrate under the new pumice blanket was a hard-crusted ash layer that had been deposited in association with moisture during an earlier explosion. The former surface resembled a pavement with fissures. The taller perennial plants were more or less restricted to growing in these fissures, while small annuals, such as the sedge Bulbostylis capillaris, and lichens grew on small, shallow, loose aeolian ash pockets on the pavement surfaces. These lichens and annuals had disappeared under the new thin fallout surface, but probably all perennial species survived. These included the tree Metrosideros polymorpha, five native shrubs (including a new species not originally found in any of the other habitats, Rumex giganteus), five fern species, two sedges, and one forb (see Table 5). In addition to the original edaphic peculiarity, the floristic difference of habitat 6 in comparison to habitat 5 was related to the lower annual rainfall, longer dry season, decreased cloud cover, and increased frequency of drying winds characteristic of the upper Kau Desert (habitat 6).
The surviving species were remarkable for their capacity to reproduce vegetatively. However, several of the surviving woody species also showed increased sexual reproduction. The success of their increased flowering, fruiting, and spore formation activity is reflected in the seedling frequency recorded in Table 6. These woody plant seedlings became established in most cases near surviving individuals so that a contagious pattern developed. Vaccinium reticulatum and Styphelia tameiameiae survivors produced abundant berries only in habitat 6. This is reflected in the seedling presence in this habitat. Abundant flowering occurred on nearly all recovered Metrosideros individuals in habitats 5 and 6 in year 1. The outcome was the successful establishment of Metrosideros seedlings in both habitats 2-4 years after the eruption (Table 6).
TABLE 6. Seedlings of surviving woody species in habitats 5 and 6 (% frequency).
Last Updated: 1-Apr-2005 | <urn:uuid:23c68f38-ee7b-4ce2-8dd6-4730e868583c> | 3.890625 | 1,618 | Academic Writing | Science & Tech. | 42.79402 |
A weather satellite is a type of satellite that is primarily used to monitor the weather and climate of the Earth. These meteorological satellites, however, see more than clouds and cloud systems. City lights, fires, effects of pollution, auroras, sand and dust storms, snow cover, ice mapping, boundaries of ocean currents, energy flows, etc., are other types of environmental information collected using weather satellites. Other environmental satellites can detect changes in the Earth's vegetation, sea state, ocean color, and ice fields. For example, the 2002 oil spill off the northwest coast of Spain was watched carefully by the European ENVISAT, which, though not a weather satellite, flies an instrument (ASAR) which can see changes in the sea surface. The Antarctic ozone hole is mapped from weather satellite data. Collectively, weather satellites flown by the U.S., Europe, India, China, Russia, and Japan provide nearly continuous observations.
| <urn:uuid:717eb29a-17bc-4182-ba4c-e0196c4ac766> | 3.9375 | 194 | Knowledge Article | Science & Tech. | 30.306 |
But "inertial observers" doesn't necessarily imply the Lorentz transformation unless you assume both postulates of SR. Inertial observers as defined in Newtonian physics all observe the same laws of physics (first postulate satisfied), and all see each other traveling at constant velocity, but there's no invariant speed postulate and their coordinates transform according to the Galilei transformation. Likewise, I showed that if you just wanted to satisfy the second postulate but not the first, you could have a family of coordinate systems that all see light moving at c, and that all see each other traveling at constant velocity, but where the coordinates transform according to a different transformation. If you're going to go mucking about with the postulates, you can't start out assuming that the phrase "inertial observer" will mean exactly the same thing as it does in SR, with different frames related by the Lorentz transformation; that'd just be circular reasoning rather than an actual "derivation".
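To make the contrast concrete, here is a small numerical check (my own Python sketch, not part of the original thread, with an arbitrary frame velocity of 0.6c): transform an event lying on a light ray with both the Galilei and the Lorentz rules. Only the Lorentz transformation preserves the speed of light.

c = 3.0e8               # speed of light, m/s
v = 0.6 * c             # velocity of the primed frame (assumed for illustration)
t = 1.0                 # s
x = c * t               # an event on a light ray, x = c*t

# Galilei transformation: x' = x - v*t, t' = t
xg, tg = x - v * t, t
print(xg / (c * tg))    # 0.4 -- light does NOT move at c in the new frame

# Lorentz transformation: x' = gamma*(x - v*t), t' = gamma*(t - v*x/c**2)
gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
xl = gamma * (x - v * t)
tl = gamma * (t - v * x / c ** 2)
print(xl / (c * tl))    # 1.0 -- the second postulate is satisfied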
It's the time coordinate that determines the rate of ticking, not the distance coordinate.
Again, "the reality of the non-inertial observer" is meaningless since there is no single way to construct a coordinate system where a non-inertial observer is at rest. You have to talk about coordinate systems, not "observers".
No, you'd have a single non-inertial coordinate system, not two different ones for different parts of the trip. Since time dilation doesn't work the same way in non-inertial coordinate systems as it does in inertial ones, there's no problem getting the twin paradox to work out; at some point the inertial twin would just have his clock ticking faster relative to coordinate time than the non-inertial one. This section of the twin paradox FAQ features a diagram showing what lines of simultaneity could look like in a single non-inertial coordinate system (drawn relative to the space and time axis of the inertial frame where the inertial twin is at rest):
You can see that during the phase where the non-inertial twin "Stella" is accelerating, the clock of the inertial twin "Terence" will elapse much more time than hers. Lines of constant position in this non-inertial system aren't drawn in, you could draw them any way you like (including curved lines so that Stella could be at a constant position throughout her trip) and have a valid non-inertial system.
And you understand how in Rindler coordinates, any given clock in that family of accelerating clocks can be ticking at a constant rate relative to coordinate time, and occupying a fixed coordinate position? | <urn:uuid:2ec2947b-9d51-439f-baec-a270c7f70e5e> | 2.78125 | 557 | Comment Section | Science & Tech. | 23.765582 |
… if your data do not look like a quadratic!
This is a post about global sea-level rise, but I put that message up front so that you’ve got it even if you don’t read any further.
Fitting a quadratic to test for change in the rate of sea-level rise is a fool’s errand.
I’d like to explain why, with the help of a simple example. Imagine your rate of sea-level rise changes over 100 years in the following way:
Fig. 1. Rate of sea-level rise as it changes over 100 years. This is a fictitious example chosen for illustrative purposes. It’s simply a polynomial curve, see appended matlab script.
It starts at 1 mm/year, then in the middle of the century it hovers for a while around 2 mm/year (namely between 1.8 and 2.2), and in the end it climbs to 3 mm/year. There can be no question about the fact that the rate of sea-level rise increases overall during those 100 years. It increases by a factor of three, from one to three millimetres per year, although not at a steady rate. You could fit a linear trend to the above curve and also find an increase in the rate – although a linear trend would not be a great description of what is going on, because the increase in rate is clearly not linear. (Note that a linear increase in the rate corresponds to a quadratic sea-level curve.)
You can easily compute the sea-level curve that follows from the above rate by integrating it over time, and it looks like this:
Fig. 2. This sea level curve is the integral of the curve in Fig. 1 and thus contains the same information, but when viewed in this way it is hard to judge by eye whether sea-level rise has accelerated. The better way to answer this question is by looking at the rate curve, i.e. Fig. 1.
Now here it comes: if you fit a quadratic (by the standard least-squares method) to this sea-level curve, the quadratic term (i.e. the acceleration) is negative! So by this diagnostic, sea level rise supposedly has decelerated, i.e. the rate of rise has slowed down! This clearly is nonsense and misleading (we know the truth in this case, it is shown in Fig. 1), and this nonsense results from trying to fit a quadratic curve to data that do not resemble a quadratic. You can call it a misapplication of curve fitting, or the use of a bad model.
Now to real data
Is this just a bizarre, unrealistic example? No! Because the basic shape of this example resembles the observed global sea-level curve from about 1930 to 2000. The red curve below is the rate of rise as diagnosed from the Church&White (2006) global sea-level data set, as shown and described in more detail in Rahmstorf (Science 2007):
Fig. 3. Rate of global sea-level rise based on the data of Church & White (2006), and global mean temperature data of GISS, both smoothed. The satellite-derived rate of sea-level rise of 3.2 ± 0.5 mm/yr is also shown. The strong similarity of these two curves is at the core of the semi-empirical models of sea-level rise. Graph adapted from Rahmstorf (2007).
Why would it have such a funny shape, with the rate of rise hovering around 2 mm/year in mid-century before starting to increase again after 1980? I think the reason is physics: the warmer it gets, the faster sea-level rises, because for example land ice melts faster. The sea-level rate curve has an uncanny similarity to the GISS global temperature, shown here in blue. The rate of SLR may well have stagnated in mid-century because global temperature also did not rise between about 1940 and 1980, and in the northern hemisphere even dropped.
Houston & Dean
The “sea-level sceptics” paper by Houston and Dean in 2011 claimed that there is no acceleration of global sea-level rise, by doing two things: cherry-picking 1930 as start date and fitting a quadratic. The graphs above show how this could give them their result despite the clear threefold acceleration from 1 to 3 mm/yr during the 20th Century. In our rebuttal published soon after (Rahmstorf and Vermeer 2011), we explained this and concluded:
Houston and Dean’s method of fitting a quadratic and discussing just one number, the acceleration factor, is inadequate.
(There’s quite a few other things wrong with this paper and several responses have been published in the peer-reviewed literature (5? I lost track); we also discussed it at Realclimate.)
So the bottom line is: the quadratic acceleration term is a meaningless diagnostic for the real-life global sea-level curve. Instead, one needs to look at the time evolution of the rate of sea-level rise, as has been done in a number of peer-reviewed papers. For example, Rahmstorf et al. (2012) in their Fig. 6 show the rate curve for the Church&White 2006 and 2011 and the Jevrejeva et al. 2008 sea-level data sets, corrected for land-water storage in order to isolate the climate-driven sea-level rise. In all cases the rate of rise increases over time, albeit with some ups and downs, and recently reaches rates unprecedented in the 20th Century or (for the Jevrejeva data) even since 1700 AD. Similar results have been obtained for regional sea-level on the German North Sea coast (Wahl et al. 2011).
Pitfalls of rate curves
When looking at real data, one needs to be aware of one pitfall: unlike the ideal example shown at the outset, real tide gauge data contain spurious sampling noise due to inadequate spatial coverage, so it is not trivial to derive rates of rise. One needs to apply enough smoothing (as in Fig. 3 above) to remove this noise, otherwise the computed rates of rise are dominated by sampling noise and have little to do with real changes in the rate of global sea-level rise. Holgate (2007) showed decadal rates of sea-level rise (linear trends over 10 years), but as we have shown in Rahmstorf et al. (2012), those vary wildly over time simply as a result of sampling noise and are not consistent across different data sets (see Fig. 2 of our paper). Random noise in global sea level of just 5 mm standard deviation is enough to render decadal rates meaningless (see Fig. 3 of our paper)!
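To get a feel for the numbers, here is a small illustration (my own Python/NumPy sketch, not taken from the papers cited): add 5 mm of random year-to-year sampling noise to a perfectly steady 2 mm/yr rise and the fitted 10-year trends scatter by roughly half a millimetre per year, even though the true rate never changes.

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(100)
true_rate = 2.0                                # mm/yr, held constant
sea_level = true_rate * years + rng.normal(0.0, 5.0, size=years.size)

# linear trend over each overlapping 10-year window
decadal_rates = [np.polyfit(years[i:i + 10], sea_level[i:i + 10], 1)[0]
                 for i in range(len(years) - 10)]
print(np.mean(decadal_rates), np.std(decadal_rates))
# the mean is close to 2 mm/yr, but individual "decadal rates" scatter with a
# standard deviation of roughly 0.5 mm/yr -- spurious variability from noise alone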
The quality of the data set is important – some global compilations contain more spurious sampling noise than others. Personally I think the approach taken by Church and White (2006, 2011) probably comes closest to the true global average sea level, due to the method they used to combine the tide gauge data.
And one needs to consider boundary effects at the beginning and end of the data series. Boundary effects at the start of the curve are not a big deal because the rate curve is rather flat before the 20th Century. And luckily, at the end this is also not a big problem since we have the satellite altimeter data starting from 1993 as an independent check on the most recent rate of sea-level rise, which confirm that it is now a bit over 3 mm/year, where also the smoothed rate curve in Fig. 3 ends. We now have almost 20 years of altimeter data that show a trend consistent with the tide gauges, but less noisy, since the satellite data have good global coverage.
So remember: don’t fit a quadratic to data that do not resemble a quadratic. Instead, look at the time evolution of the rate of sea level rise. And remember there is something called physics: this time evolution must be expected to have something to do with global temperature. And indeed it does.
A note for the technically minded:
The quadratic fit to the sea-level curve can be written as:
SL(t) = a t^2 + b t + c, where t= time and a, b and c are constants.
The rate of rise is the time derivative:
rate(t) = 2a t + b.
Often 2a is called the acceleration. That is because when we are talking about acceleration, it is the rate of rise that is of prime interest. The question is: how does this rate of sea-level rise change over time? And not: how quadratic does the sea-level curve look? Hence the second, rate equation is the relevant one, and we call 2a and not a the acceleration in the quadratic case. 2a is the slope of the rate(t) curve.
Now the interesting thing is that in the example given above, you get a negative a when you fit a quadratic to the sea-level data in Fig. 2, but you get a positive a when you make a linear fit to the rate curve in Fig. 1. You’d probably find the latter more informative since it has to do with how the rate of rise has changed, which is the question of prime interest. But as argued above, for such a time evolution it is neither a good idea to fit sea level with a quadratic nor to fit the rate curve with a straight line – it’s a bad model that gives inconsistent results.
% script to produce idealised sea level curves
x = -50:49;                    % time axis in years; assumed here to span the 100 years of the example (this line is missing from the original listing)
a = 1.42e-5; b = -0.0159; c = 2;
rate = a*x.^3 + b*x + c;       % rate of rise shown in Fig. 1
slr = cumsum(rate);            % integrate the rate to get sea level (Fig. 2)
% compute quadratic fit
p = polyfit(x, slr, 2);
acceleration = 2 * p(1)        % comes out negative, as described in the text
Church, J. A., and N. J. White (2006), A 20th century acceleration in global sea-level rise, Geophys. Res. Let., 33(1), L01602.
Church, J. A., and N. J. White (2011), Sea level rise from the late 19th to the early 21st Century, Surveys in Geophys., 32, 585-602.
Holgate, S. (2007), On the decadal rates of sea level change during the twentieth century, Geophys. Res. Let., 34, L01602.
Houston, J., and R. Dean (2011), Sea-level acceleration based on US tide gauges and extensions of previous global-gauge analysis, J. Coast. Res., 27(3), 409-417.
Rahmstorf, S. (2007), A semi-empirical approach to projecting future sea-level rise, Science, 315(5810), 368-370.
Rahmstorf, S., and M. Vermeer (2011), Discussion of: Houston, J.R. and Dean, R.G., 2011. Sea-Level Acceleration Based on U.S. Tide Gauges and Extensions of Previous Global-Gauge Analyses., J. Coast. Res., 27, 784–787.
Rahmstorf, S., M. Perrette, and M. Vermeer (2012), Testing the Robustness of Semi-Empirical Sea Level Projections, Clim. Dyn., 39(3-4), 861-875.
Wahl, T., J. Jensen, T. Frank, and I. Haigh (2011), Improved estimates of mean sea level changes in the German Bight over the last 166 years, Ocean Dyn., 61, 701-715. | <urn:uuid:024e0866-0c7e-4869-8916-a2a814f3e376> | 2.984375 | 2,500 | Comment Section | Science & Tech. | 69.154862 |
Uranus is the seventh planet from the Sun, named after the ancient Greek deity of the sky, the father of Kronos (Saturn) and grandfather of Zeus (Jupiter). Uranus was the first planet discovered in modern times; it was found by Sir William Herschel on 13 March 1781.
Uranus and Neptune have different internal and atmospheric compositions from those of the larger giants and astronomers sometimes place them in a separate category - 'ice giants.' Uranus' atmosphere is composed primarily of hydrogen and helium, and contains a higher proportion of water, ammonia and methane, along with traces of hydrocarbons. It is the coldest planetary atmosphere in the Solar System.
Like the other giant planets, Uranus has a ring system, a magnetosphere, and numerous moons. The Uranian system has a unique configuration among the planets because its axis of rotation is tilted sideways.
This wider view of Uranus reveals the planet's faint rings and several of its satellites. The area outside Uranus was enhanced in brightness to reveal the faint rings and satellites. The outermost ring is brighter on the lower side, where it is wider. It is made of dust and small pebbles, which create a thin, dark, and almost vertical line across the right side of Uranus (especially visible on the natural-colour image). The bright satellite on the lower right corner is Ariel, which has a snowy white surface. Five small satellites with dark surfaces can be seen just outside the rings. Clockwise from the top, they are: Desdemona, Belinda, Portia, Cressida, and Puck. Even fainter satellites were imaged in deeper exposures, also taken with the Advanced Camera in August 2003. | <urn:uuid:39d60a4d-541a-470d-bde6-80e6d7b04f34> | 3.75 | 349 | Knowledge Article | Science & Tech. | 35.78657 |
Scientists with the National Oceanic and Atmospheric Administration announced recently that carbon dioxide levels in Earth's atmosphere measured just shy of 400 parts per million. Evidence from ice core samples and other means strongly suggests that this is the highest carbon dioxide has been since humans first appeared on Earth. In fact, one scientist said that the last time carbon dioxide levels were this high, the sea level was between 33 and 66 feet higher.
In a city bordered on three sides by water, that should trouble everyone. | <urn:uuid:edb0863a-6446-49ed-90e6-d669ef9cea36> | 3.125 | 106 | Truncated | Science & Tech. | 44.3825 |
Unlike the Hubble Space Telescope, the Cosmic Origins Spectrograph (COS) isn’t designed to capture visual images. Instead, COS is designed to perform spectroscopy, which is the study of the interaction of matter and electromagnetic (EM) radiation. Each object leaves a unique signature on any light that it emits, absorbs, or scatters. By studying that light with a spectrograph we can determine much about the object that interacted with the light, including the object’s temperature, density, velocity, and chemical composition.
The matter involved might be atoms, ions, molecules, or solids, and the radiation involved could be any type of EM wave. When radiation interacts with matter, the radiation’s energy can be absorbed or scattered. Spectroscopic analysis reveals which energies are being absorbed by a given sample, and the resultant profile pinpoints the composition of the matter. Because electrons absorb or emit electromagnetic energy when they shift between levels (from ground state to excited state or vice versa), and because each element has a unique distribution of electrons, spectroscopy can be used to detect the presence of specific elements, the distribution of those elements within a sample, or the relative density of a sample.
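As a concrete illustration of such spectral fingerprints (a sketch of my own, not from the article): the visible emission lines of hydrogen follow from the Rydberg formula, and it is this kind of characteristic pattern, shifted into the ultraviolet, that an instrument like COS measures.

# Balmer series of hydrogen: transitions from level n down to level 2
R = 1.097e7                     # Rydberg constant, 1/m

for n in range(3, 7):
    inv_wavelength = R * (1.0 / 2 ** 2 - 1.0 / n ** 2)
    print("n = %d -> 2: %.1f nm" % (n, 1e9 / inv_wavelength))
# prints roughly 656, 486, 434 and 410 nm -- the familiar red, blue-green and
# violet lines that identify hydrogen in any spectrum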
A primary objective of the COS mission is to analyze the structure and composition of the large-scale “cosmic web” of galaxies, super-clusters, and gas that make up the universe. By focusing on very distant quasars and analyzing how their light is affected by passing through the web, COS will be able to detect and identify what the cosmic web is made of based on the material’s spectral fingerprints. Spectroscopy can be done for any wavelength of light, but COS is focused on two “energy windows” in the ultraviolet band. In addition to using spectroscopy to analyze the cosmic web, COS will also compare near and far galaxies to help inform models of galactic evolution.
Classroom Activity: Spectra-search
Students (individually or in groups) research the emission spectra of 5 - 10 of the elements on the periodic table. Based on those spectral patterns, students formulate a theory about energy emissions and the other properties of those elements. Students (or groups) present their theory and what they discovered about their chosen elements. Each individual or group also shares at least one new question that came up during the research. The class as a whole discusses those questions and the steps they might take to answer them.
David Leckrone, HST Chief Scientist: COS is the most sensitive spectroscope that we have ever flown in space. Spectroscopes or spectrographs are so important for research. They produce ugly pictures but they are the nuts and bolts of physical science. They put the physics in astrophysics. COS was conceived in the mid-1990s by Dr. Jim Green and his colleagues at the University of Colorado primarily to study the cosmic web, which is made up of the largest-scale structures of matter in the entire universe. If you want to know what something is made of, how hot it is, how dense it is, how fast it's moving in space, how fast it's rotating, for example, a spectrograph will give you all that information. With COS we can acquire information like that farther out across the universe than we've been able to do before.
Randy Kimble, Project Scientist, HST Development Project: Spectroscopy is taking light from an object and breaking it up into the different colors that that light consists of. Each of the elements, each of the chemical elements, has characteristic wavelengths, characteristic colors at which it emits light when you heat it up or absorbs light. For example if I have a tube full of hydrogen between me and that light, instead of seeing the normal spectrum of that light when I look at it with a spectrograph, I'll see that spectrum but with some of the light taken away at the wavelengths where hydrogen has its characteristic absorptions and so by measuring that the depth of those notches and the velocities and the width of them and so on, you can infer all kinds of things about the physical state of that cloud. COS has taken a really key part of spectroscopic science and said, How can we do that in the absolutely best, most efficient way and that is to measure the properties of the material between the galaxies looking back into the universe. As the galaxies form, there's a lot of material that does not collapse into the galaxies and there's other material that is ejected from galaxies by supernova explosions and so on and so that intergalactic gas, the so-called intergalactic medium, carries a lot of information about the history of the universe.
David Leckrone, HST Chief Scientist:When you couple that story, sort of the global, cosmic process of how you formed a large scale structure of how material is distributed in the universe and what role that played in forming new galaxies and then you use Wide Field Camera 3 to investigate how did the galaxies themselves change internally with time and over space you know looking back through the history of the universe. All of that kind of ties together into the full story of where we came from.
Randy Kimble, Project Scientist, HST Development Project: Its going into the COSTAR slot and so there is nothing whatsoever lost in doing that because COSTAR is not needed anymore. COSTAR was put up in the first servicing mission and it was used to deploy correcting optics in front of some of the first generation instruments, the first generation spectrographs for example. Correcting optics to correct where the spherical aberration that had been inadvertently built into the HST primary mirror. All the more recent instruments include that correction within the new instrument itself. So right now COSTAR doesn't have anything to do. All the other instruments in the so-called axial bays of HST have their own internal correction and so the COSTAR space is freely available and they'll pull that out at no loss of science to HST whatsoever and replace it with this terrific new spectrograph.
Academic standards correlations on Teachers' Domain use the Achievement Standards Network (ASN) database of state and national standards, provided to NSDL projects courtesy of JES & Co.
We assign reference terms to each statement within a standards document and to each media resource, and correlations are based upon matches of these terms for a given grade band. If a particular standards document of interest to you is not displayed yet, it most likely has not yet been processed by ASN or by Teachers' Domain. We will be adding social studies and arts correlations over the coming year, and also will be increasing the specificity of alignment. | <urn:uuid:d47aa99d-a636-4707-b836-ff85c7e350ff> | 4.15625 | 1,372 | Knowledge Article | Science & Tech. | 35.453579 |
Good Morning, let's take a look at the phenomenon of Radar Blooms this morning for the Saturday Lecture Series:
There are a lot of interesting anomalies that you may see on displays that show NEXRAD (or any kind of) weather radar data. Some are caused by software, some are caused by the radar misinterpreting what it sees. None are worth some of the conspiracy theories that non-scientists have come up with.
Last month, blog reader Mike asked what is responsible for the radar “bloom” (or “radar blobs”) that occurs nationwide, but especially in the Southeast U.S. in Spring and Fall. What he is referring to is the gradual growth of non-precipitation objects on radar after sunset (and the data fades after sunrise). During the night, this causes a large blob around each radar site. I have uploaded some examples from that night.
EXAMPLES OF RADAR BLOOM: In the Huge AccuWeather Raw U.S. Loop and the Huge NWS Raw U.S. Loop, you are seeing the raw data from each NEXRAD radar plotted on a U.S. map. But in the Small AccuWeather Processed Northeast Loop, AccuWeather's computer algorithms and meteorologists have attempted to "clean up" the radar by taking out areas of data that they thought were invalid. This caused the "cookie cutter" hole around Indianapolis and the lack of clutter in the Southeast. The "C"-shaped object over the Great Lakes is rain from a low pressure system, though you can still see the "blooms" around and inside it. There are also a couple of things of note in the Indianapolis Radar Site Raw Loop – the "spike" in the first frame is a "sunset spike" and is caused by the radar being temporarily "blinded" by the setting sun. The blobs of blue and brown in the Northeast quadrant are areas of rain moving south from the aforementioned low pressure system.
I knew what Mike was referring to was a type of “Ground Clutter” – also known as false echoes – a wide-ranging problem with weather radars, I just didn’t know what specifically was causing it. So, I set out to do some research on Google, but I couldn’t come up with an explanation, and apparently neither could anyone else who writes blogs or web pages. In the late 1990′s, I wrote several articles on radar anomalies and Ground Clutter for AccuWeather.com properties — but I never was able to explain this one.
NOAA [JessePedia], who owns and operates the radars in the national network, has an excellent page explaining how radar beams work. It included the illustrations below about Superrefraction and Ducting (the radar beam is shown in comparison to a faded “normal” radar beam at the top of the illustrations). In both cases, the radar beam curves quicker than the curve of the Earth. I suspected this was to blame for the Radar Bloom.
In the case of "Ducting," the radar beam bends so much that it hits the earth, causing extremely high dBZ returns (because the ground is much thicker than your average raindrop when the beam runs into it). dBZ, or "decibels of Z," is the way radar data (hopefully precipitation) is measured. The colors you see on radars correspond to dBZ levels, higher meaning more intense. When the radar beam hits the Earth, this phenomenon is called "high dBZ anomalous propagation" and is a real problem because, to the untrained eye, it looks just like thunderstorms.
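For readers unfamiliar with the scale, dBZ is simply a logarithmic measure of the radar reflectivity factor Z. The sketch below (my own addition, using the classic Marshall-Palmer Z-R relation, which the post itself does not mention) shows roughly how dBZ values map onto rainfall rates:

import math

def dbz(Z):
    # Z is the reflectivity factor in mm^6 per m^3
    return 10.0 * math.log10(Z)

# Marshall-Palmer relation: Z = 200 * R**1.6, with R the rain rate in mm/hr
for rain_rate in (0.1, 1.0, 10.0, 50.0):
    Z = 200.0 * rain_rate ** 1.6
    print("%5.1f mm/hr  ->  %4.1f dBZ" % (rain_rate, dbz(Z)))
# drizzle comes out under 10 dBZ, while a 50 mm/hr downpour is around 50 dBZ --
# which is why ground returns from AP can masquerade as heavy thunderstorms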
EXAMPLES OF HIGH-DBZ AP: Notice on this example, a Northeast Still Image, how the high dBZ AP in Canada and New York looks a lot like the thunderstorms off the coast of the Carolinas. If you Download* This Northeast Loop then you can see that, while the thunderstorms move, the AP stays still. On the Binghamton Radar Site Raw Loop, notice how the AP mimics the mountain tops, because the beam won't make it to the valleys once it hits the mountains. Notice also in the northwest part of the image how there are no echoes over the lake, because the surface is too flat to reflect back to the radar.
Other websites confirmed this explanation of Ducting, but while this is great, it doesn't explain radar "bloom," which is much lower on the dBZ scale* (see below), nor does it explain why it grows and shrinks with time.
Since I couldn’t get an answer online, I wrote in to the NOAA radar experts. After a couple of returned emails due to a bad form on their site, I finally got in contact with Joe Chrisman from the ROC (Radar Operations Center) Engineering Branch, who explained:
When the sun goes down and the surface begins to cool, the change in refractive index in the lowest few (to several) hundred feet of the atmosphere tend to bend the radar beam toward the surface. This bending holds the radar beam near the surface for extended distances, where it encounters scatterers that would not normally be available above the boundary layer. These scatterers include insects, bats, aerosols, particulate matter, etc., and account for the increased radar return referred to as “radar bloom.”
To decode that answer a little, what he’s saying is that it is, in fact, superrefraction that causes radar bloom.
In the case of superrefraction, the beam bends low to the ground but, unlike Ducting, it doesn’t run into the ground (until it gets out of range anyway). With the beam so close to the ground, it keeps running into multiple insects/dust/other particulates as it moves outward from the radar. As the superrefraction becomes worse, the radar beam travels farther than it had previously, and encounters even more of these particles, causing the amount of clutter on the screen to “grow.” As the superrefraction decreases in the morning, it shrinks.
Why does refraction itself (be it Super, Sub or Ducting) occur? That’s a more complicated question and I’ll let you read the NOAA page for a lengthy explanation. Basically, where the beam travels with respect to the Earth’s curvature is determined by a complex equation of pressure, temperature and humidity that can vary greatly in small distances, and it’s possible you might have more than one type of refraction occurring at the same time.
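For the curious, the "complex equation" is usually expressed through the radio refractivity N; the sketch below uses standard textbook values (my own addition, not from the NOAA page) for how the vertical gradient of N sorts the beam into normal refraction, superrefraction, or ducting:

def refractivity(P_hPa, T_K, e_hPa):
    # radio refractivity N in "N-units"; P = total pressure, T = temperature,
    # e = water vapour partial pressure
    return 77.6 * P_hPa / T_K + 3.73e5 * e_hPa / T_K ** 2

print(refractivity(1013.0, 288.0, 10.0))   # roughly 320 N-units near the surface

# Commonly quoted thresholds for the vertical gradient dN/dh (N-units per km):
#   about -40        normal refraction
#   -79 to -157      superrefraction (beam bent toward the surface)
#   below -157       ducting (beam bent back into the ground)
dN_dh = -120.0
if dN_dh <= -157:
    print("ducting")
elif dN_dh <= -79:
    print("superrefraction")
else:
    print("normal to subrefractive")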
P.S. "Tropospheric Ducting" is a similar phenomenon by which radio waves propagate thousands of miles further than they normally would due to atmospheric conditions, causing, in one documented case, an FM radio in Hawaii to pick up a radio station from Mexico (if you have an FM radio in your car and have trouble picking up FM stations in your own town then you understand why that would be quite unusual). | <urn:uuid:1295c634-192b-4562-b6a6-8073866fa7f6> | 2.796875 | 1,507 | Comment Section | Science & Tech. | 52.536376 |
The Guardian has an exciting-looking article, entitled “New to Nature”, which is about a centipede, Scolopendropsis duplicata, which has been discovered in Tocantins State, central Brazil. This discovery is described in this open access article from the journal Zootaxa by Chagas Jr, Edgecome and Minelli. (Incidentally, Chagas Jr’s first name is highly appropriate: “Amazonas”; his nickname, it appears, is “Amazing”.)
S. duplicata is particularly interesting because it has “too many” segments. S. duplicata is part of the scolopendromorph order of centipedes, all of which have either 21 or 23 leg-bearing (= “trunk”) segments. As its name suggests, S. duplicata has done something rather odd – it has nearly twice as many segments as expected, either 39 or 43. None of the 700 other scolopendromorph species has more than 23. Furthermore, S. duplicata is unique in that the number of segments (39 or 43) varies within a population.
There have been previous reports of intra-specific variability in trunk segment number, (e.g. Scolopendropsis bahiensis, and species in the order Geophilomorpha) but this variability was always seen between populations, not within. So in the small world of centipede biologists, this is a cracking discovery!
Classification is a difficult business, and everything above the species (the “sapiens” in Homo sapiens) level is effectively a human construct, a way we use to classify organisms, and to describe the process of evolution, rather than something that has real biological meaning. Nevertheless, classification not only helps us make sense of the world, it also provides evolutionary hypotheses that can be tested by morphological and genetic studies
Centipedes as a whole are classified along with millipedes as part of the Myriapoda sub-phylum of arthropods. There are over 3000 species, grouped into five orders. The basal group is assumed to be the Geophilomorpha , which look most like millipedes. Species in this order can have up to 177 segments, but in all other orders 23 segments has hitherto been assumed to be the maximum. Unlike the millipedes, each segment has only one pair of legs.
So normally, if you found a species with such a radically different form – nearly twice as many segments, and unprecedented population-level variability – you’d tend to think they were in different genera (the “Homo” part of H. sapiens). However, everything else about S. duplicata clearly indicates its proximity to other scolopendromorphs, so the authors comment dryly:
“We note the paradox that variability in scolopendromorph segmentation is a remarkable discovery, and yet S. duplicata and S. bahiensis are so similar in other respects and their sister group relationship so highly corroborated that generic separation is unwarranted.”
It seems probable that the genetic basis of the segment variability seen within S. duplicata and between S. duplicata and closely related species is due to variability in homeobox (“hox”) genes that control the way that segmentation takes place. However, things are not quite so simple. In the Discussion, the authors note that all the 43-segmented individuals they dissected were females, while males were only found in the 39-segment group. This suggests that – like in some Geophilomorph species – this species may show sexual dimorphism in segment number.
So whatever is controlling the polymorphism, it would appear to be some interaction between the sex determination genes in this species (and I know nothing about sex determination in myriapods, but it would appear to be on the basis of XY chromosomes, as in most insects and most chelicerates) and the hox genes.
There has been a long argument – going back over 100 years – about how the various arthropod groups should be grouped together. The current wave of molecular data shows that insects and crustaceans are more closely related to each other than they are to the other arthropods (the “pan-crustacean” hypothesis), while insects + crustaceans group together with the myriapods to form the “mandibulata” because they have mandibles, rather than chelicera, which is the mouth appendage seen in the Chelicerata.
The saddest part of the description of S. duplicata by Chagas-Junior et al comes at the end, and suggests the centipede may no longer be extant:
“Most specimens of S. duplicata were found in pitfall traps for reptiles and amphibians in the dry, xeric “cerrado”, a vegetation typical of central Brazil. All specimens were collected before flooding of the Luis Eduardo Magalhães hydroelectric power plant, in the Tocantins River, and the type locality is now under water. Vegetation around the lake is the same as that at the now submerged type locality. An expedition organized by the first author in June 2007 failed to discover any specimens of S. duplicata, even though a forest patch 500 m away from the type locality was sampled. Thus, the original habitat of this species may have been impacted by the flooding of the hydroelectric power plant, and further expeditions are needed to seek additional individuals of this remarkable Brazilian species.”
Finally, why on earth are we talking about this now? The web is full of chatter about it – just try googling Scolopendropsis duplicata and you’ll see what I mean. Because, although The Guardian doesn’t mention it, S. duplicata is not “new to nature” – the Chagas, Jr article was published back in 2008…
The answer, it appears, is the Natural History Museum website, which had S. duplicata, as its “species of the day”, and has a great interactive page, based on the Zootaxa article, of which Gregory Edgecombe of the NHM was a co-author. The Guardian and other websites obviously picked up on this, as did I… The sudden interest must be a trifle perplexing (but pleasing) to “Amazing” Amazonas and his colleagues.
• Most species are carnivorous (they can even eat bats!)
• Like insects, they have trachaea for respiration and mandibles for eating.
• Most species are oviparous (i.e. they lay eggs), but some are viviparous (i.e. they bear live young).
• Like most chelicerates (spiders etc; harvestmen – opiliones – are an exception) they do not have penetrative sex, but the male makes a spermatophore out of silk, which the female picks up and uses to fertilize her eggs. | <urn:uuid:551d0b26-6e90-4c32-9ec3-43828826e7fe> | 3.203125 | 1,518 | Personal Blog | Science & Tech. | 37.244886 |
In the late 1950s, the primary means of international communication was by undersea cable or by bouncing radio waves off the ionosphere. The US was concerned that in the event of a major conflict, the cables could be cut. This would force all communications through the less reliable radio connection through the ionosphere. The solution, in the days before communication satellites? Create a man-made ionosphere using 480 million tiny copper dipole antennas.
In 1961 the Air Force launched the first needle dispenser. The experiment was called Project West Ford, and it was hoped that this demonstration would prove the concept so that 2 more permanent communication rings could be deployed. The needles in the first launch failed to properly disperse, but a second launch in May of 1963 was successful.
The 20 kg of copper needles were packed inside the spacecraft in blocks of napthalene gel that would evaporate when released into space. After that the needles would gradually disperse over a period of two months. The resulting cloud, in the shape of a donut, was 5 km wide, 30 km thick and encircled the globe at an altitude of 3700 km.
These tiny antennas were designed to operate at a half-wavelength of the military X-band (8 GHz) communication frequency. When a radio wave struck the copper needles, each would reflect the signal in all directions.
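That "half-wavelength" figure fixes the physical size of the needles; a quick back-of-the-envelope calculation (my own, not from the article):

c = 3.0e8                 # speed of light, m/s
f = 8.0e9                 # X-band frequency, Hz
wavelength = c / f        # about 3.75 cm
print("half-wave dipole length: %.2f cm" % (wavelength / 2 * 100))   # about 1.9 cm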
Communication was first attempted 4 days after the launch, so the needles were more densely spaced than at final dispersal. Voice transmission was described as 'intelligible', and data speeds of 20,000 bits per second were obtained. However, by July only 400 bits per second could be transmitted after further dispersal.
Despite the success of the test, no further West Ford missions were ever developed. One reason was the backlash from the scientific community, who feared that a giant cloud of tiny antennas would block astronomical research. Even the Soviets joined in the protest, with Pravda claiming "U.S.A. Dirties Space." Additionally, satellite technology was developing at a sufficient pace that the low-tech solution of bouncing radio waves off copper antennas seemed outdated already.
The Kennedy White House issued a statement on Project West Ford, as shown below, and Ambassador Adlai E. Stevenson had to defend the project to the U.N.
No further launches of orbiting dipoles will be planned until after the results of the West Ford experiment have been analyzed and evaluated. The findings and conclusions of foreign and domestic scientists (including the liaison committee of astronomers established by the Space Science Board of the National Academy of Sciences) should be carefully considered in such analysis and evaluation. Any decision to place additional quantities of dipoles in orbit, subsequent to the West Ford experiment, will be contingent upon the results of the analysis and evaluation and the development of necessary safeguards against harmful interference with space activities or with any branch of science. Optical and radio astronomers throughout the world should be invited to cooperate in the West Ford experiment to ascertain the effects of the experimental belt in both the optical and the radio parts of the spectrum. To assist in such cooperation, they should be given appropriate information on a timely basis. Scientific data derived from the experiment should be made available to the public as promptly as feasible after the launching.
Perhaps the most enduring legacy of Project West Ford was its contribution to international space law, as it established a US policy that the international scientific community would be consulted before the launch of such experiments. This was later codified into the Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space (the Principles Treaty). | <urn:uuid:412edebe-f29e-45b1-a219-809e73211f1a> | 3.59375 | 724 | Knowledge Article | Science & Tech. | 37.92351 |
Some versions of the Python interpreter support editing of the current input line and history substitution, similar to facilities found in the Korn shell and the GNU Bash shell. This is implemented using the GNU Readline library, which supports Emacs-style and vi-style editing. This library has its own documentation which I won’t duplicate here; however, the basics are easily explained. The interactive editing and history described here are optionally available in the Unix and Cygwin versions of the interpreter.
This chapter does not document the editing facilities of Mark Hammond’s PythonWin package or the Tk-based environment, IDLE, distributed with Python. The command line history recall which operates within DOS boxes on NT and some other DOS and Windows flavors is yet another beast.
If supported, input line editing is active whenever the interpreter prints a primary or secondary prompt. The current line can be edited using the conventional Emacs control characters. The most important of these are: C-A (Control-A) moves the cursor to the beginning of the line, C-E to the end, C-B moves it one position to the left, C-F to the right. Backspace erases the character to the left of the cursor, C-D the character to its right. C-K kills (erases) the rest of the line to the right of the cursor, C-Y yanks back the last killed string. C-underscore undoes the last change you made; it can be repeated for cumulative effect.
History substitution works as follows. All non-empty input lines issued are saved in a history buffer, and when a new prompt is given you are positioned on a new line at the bottom of this buffer. C-P moves one line up (back) in the history buffer, C-N moves one down. Any line in the history buffer can be edited; an asterisk appears in front of the prompt to mark a line as modified. Pressing the Return key passes the current line to the interpreter. C-R starts an incremental reverse search; C-S starts a forward search.
The key bindings and some other parameters of the Readline library can be customized by placing commands in an initialization file called ~/.inputrc. Key bindings have the form

key-name: function-name

or

"string": function-name

and options can be set with

set option-name value
# I prefer vi-style editing:
set editing-mode vi
# Edit using a single line:
set horizontal-scroll-mode On
# Rebind some keys:
Meta-h: backward-kill-word
"\C-u": universal-argument
"\C-x\C-r": re-read-init-file
Note that the default binding for Tab in Python is to insert a Tab character instead of Readline's default filename completion function. If you insist, you can override this by putting

Tab: complete

in your ~/.inputrc. (Of course, this makes it harder to type indented continuation lines if you're accustomed to using Tab for that purpose.)
Automatic completion of variable and module names is optionally available. To enable it in the interpreter’s interactive mode, add the following to your startup file:
import rlcompleter, readline
readline.parse_and_bind('tab: complete')
This binds the Tab key to the completion function, so hitting the Tab key twice suggests completions; it looks at Python statement names, the current local variables, and the available module names. For dotted expressions such as string.a, it will evaluate the expression up to the final '.' and then suggest completions from the attributes of the resulting object. Note that this may execute application-defined code if an object with a __getattr__() method is part of the expression.
A more capable startup file might look like this example. Note that this deletes the names it creates once they are no longer needed; this is done since the startup file is executed in the same namespace as the interactive commands, and removing the names avoids creating side effects in the interactive environment. You may find it convenient to keep some of the imported modules, such as os, which turn out to be needed in most sessions with the interpreter.
# Add auto-completion and a stored history file of commands to your Python
# interactive interpreter. Requires Python 2.0+, readline. Autocomplete is
# bound to the Esc key by default (you can change it - see readline docs).
#
# Store the file in ~/.pystartup, and set an environment variable to point
# to it: "export PYTHONSTARTUP=/home/user/.pystartup" in bash.
#
# Note that PYTHONSTARTUP does *not* expand "~", so you have to put in the
# full path to your home directory.

import atexit
import os
import readline
import rlcompleter

historyPath = os.path.expanduser("~/.pyhistory")

def save_history(historyPath=historyPath):
    import readline
    readline.write_history_file(historyPath)

if os.path.exists(historyPath):
    readline.read_history_file(historyPath)

atexit.register(save_history)
del os, atexit, readline, rlcompleter, save_history, historyPath
| <urn:uuid:46fa8fc5-ca42-4339-afe9-d57ef0b648fd> | 3.09375 | 1,098 | Documentation | Software Dev. | 42.480278 |
The new sets module contains an implementation of a set datatype. The Set class is for mutable sets, sets that can have members added and removed. The ImmutableSet class is for sets that can't be modified, and instances of ImmutableSet can therefore be used as dictionary keys. Sets are built on top of dictionaries, so the elements within a set must be hashable.
Here's a simple example:
>>> import sets
>>> S = sets.Set([1,2,3])
>>> S
Set([1, 2, 3])
>>> 1 in S
True
>>> 0 in S
False
>>> S.add(5)
>>> S.remove(3)
>>> S
Set([1, 2, 5])
>>>
The union and intersection of sets can be computed with the union() and intersection() methods; an alternative notation uses the bitwise operators | and &. Mutable sets also have in-place versions of these methods, union_update() and intersection_update().
>>> S1 = sets.Set([1,2,3])
>>> S2 = sets.Set([4,5,6])
>>> S1.union(S2)
Set([1, 2, 3, 4, 5, 6])
>>> S1 | S2  # Alternative notation
Set([1, 2, 3, 4, 5, 6])
>>> S1.intersection(S2)
Set()
>>> S1 & S2  # Alternative notation
Set()
>>> S1.union_update(S2)
>>> S1
Set([1, 2, 3, 4, 5, 6])
>>>
It's also possible to take the symmetric difference of two sets. This is the set of all elements in the union that aren't in the intersection. Another way of putting it is that the symmetric difference contains all elements that are in exactly one set. Again, there's an alternative notation (^), and an in-place version with the ungainly name symmetric_difference_update().
>>> S1 = sets.Set([1,2,3,4])
>>> S2 = sets.Set([3,4,5,6])
>>> S1.symmetric_difference(S2)
Set([1, 2, 5, 6])
>>> S1 ^ S2
Set([1, 2, 5, 6])
>>>
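A short continuation of the example above showing the in-place form (the sets module names it symmetric_difference_update()):

>>> S1 = sets.Set([1,2,3,4])
>>> S2 = sets.Set([3,4,5,6])
>>> S1.symmetric_difference_update(S2)
>>> S1
Set([1, 2, 5, 6])
>>>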
There are also issubset() and issuperset() methods for checking whether one set is a subset or superset of another:
>>> S1 = sets.Set([1,2,3])
>>> S2 = sets.Set([2,3])
>>> S2.issubset(S1)
True
>>> S1.issubset(S2)
False
>>> S1.issuperset(S2)
True
>>>
See About this document... for information on suggesting changes. | <urn:uuid:65b0f9dc-a525-495d-b3e2-e9d01fa5ed9b> | 3.171875 | 592 | Documentation | Software Dev. | 84.421748 |
8.8. Dealing with Deviations and Errors
As discussed in Chapter 3, any system has to deal with deviations and errors. Deviations are those conditions that can be expected to occur in normal processing. Errors are those conditions resulting from hardware or software malfunctions.
8.8.1. Signaling Errors and Deviations
Tim and I separated the conditions that occur during processing into errors and deviations. We needed to decide how to signal each type of condition. Errors from hardware and software malfunctions are signaled with exceptions; deviations, which can be expected in normal processing, are signaled with their own Deviation classes, described below.
8.8.2. Deviation Conventions
The com.samscdrental.failures package contains deviations and exceptions:
Deviation.java
CDCategoryFormatDeviation.java
CheckInDeviation.java
CheckOutDeviation.java
CustomerIDFormatDeviation.java
DollarFormatDeviation.java
ImportFileDeviation.java
ImportFormatDeviation.java
LateReturnDeviation.java
NameFormatDeviation.java
ParseLineDeviation.java
PhysicalIDFormatDeviation.java
PrinterFailureDeviation.java
StatusDeviation.java
UPCCodeFormatDeviation.java
SeriousErrorException.java
To separate deviations from exceptions, all expected errors are derived from Deviation.
Find methods for collections in Java return null if a matching object is not found. "Consistency is Simplicity" suggests that the collections in Sam's system also do so. If CustomerID is not found in CustomerDataAccess, the find method returns null. This situation might be a permanent error if the customer was removed from the collection. It might be a failure due to an error in inputting CustomerID. We might add a check digit or other error checking on the CustomerID to ensure that the string value is input correctly. That way, we would deal with the failure as close to the source as possible.
We created the SeriousErrorException class, which is an unchecked exception. An unchecked exception does not have to be indicated in the throws clause for a method. A method throws SeriousErrorException when an error occurs of such severity that the program should not continue. The exception represents a malfunction in hardware or software rather than a condition expected in normal processing.
8.8.3. Errors When Importing a File
Importing a file provides a whole series of errors that need to be handled. The interface contract (see Chapter 3) for the import needs to be spelled out in some detail. For example, when the file is read, what should occur if the file has bad or incorrectly formatted text in it? Should the program reject the entire file or just ignore the line with the incorrect format? When we are parsing the strings in the input file, we might get multiple errors on each line or we might have multiple lines with errors. Do we report each one or just the first one?
When a CDDisc object is to be added to CDDiscDataAccess, what should occur if there is already an object with the same PhysicalID? Should the current CDDisc object be overwritten or should the new CDDisc object be ignored? In the latter case, should the user be notified of the duplication?
No one solution is correct in all situations. The client and the users ultimately decide how to handle errors. However, for any solution, the user should not have to look inside the code to determine what went wrong. Any message displayed to the user should describe the problem, and what to do about it, in terms the user can understand.
|< Day Day Up >| | <urn:uuid:f9519b1e-bdf3-4b9b-a432-e068be4e5f59> | 3.046875 | 689 | Documentation | Software Dev. | 44.159875 |
Tapping the Sun's life-sustaining energy is about more than using the light and heat that we see and feel. It is the conversion of solar energy to electricity that powers our homes, cars, and the computer that enables you to view this web site. The area of solar energy is so broad that covering everything those two simple words encompass would take many books. Fortunately, that is not what we did. We have broken the solar energy section down into the 8 basic types of solar energy and provided a brief synopsis of each type on this page.
Photosynthesis - Sunlight provides energy through photosynthesis. This energy is recoverable through burning of wood and fossil fuels such as coal, petroleum, and natural gas which are created through the process of photosynthesis.
Wind Energy - Sunlight heating the ground and lower atmosphere produces the wind which powers wind turbines.
Hydroelectric power - Sunlight stored as the gravitational energy of water through the water cycle can be extracted with dams and electric generators. Hydroelectric power is renewable and considered a "clean" energy since no burning is required, but it is limited in quantity.
Hydroelectric power station Baie James,
Hydroelectric Dam Diagram
Ocean Energy - Ocean tides have been harnessed to make electricity, along with a variety of other methods that make use of the motions and thermal gradients in the ocean. A heat engine can extract useful energy from the temperature difference between the sun-warmed surface layers of the ocean and the colder depths, in a process called ocean thermal energy conversion (OTEC). This technology is complex, which so far limits the use of the tremendous amount of energy stored in ocean thermal gradients.
Picture of an OTEC power plant
Passive Solar Heating - Direct heating of a building by maximizing the solar gain in the winter and minimizing it in the summer is called passive solar heating. Designs of northern homes and buildings use rock, water, and other materials to store solar heat during the day and release it at night.
Active Solar Heating - In contrast to passive solar heating is active space and water heating. A water or air solar collector is used to heat a fluid that is used as the heating system or water heater for a building. In larger active thermal power generating systems, focused mirrors are used to concentrate and direct the sunlight into a boiler that produces steam to generate electricity.
Electric Photovoltaic Cells - Sunlight can be converted directly into electricity with a photovoltaic cell, or "solar cell." Solar cells, which have no moving parts, are expensive, but they are an ideal way to convert sunlight directly into electricity. Today's typical calculator uses a solar cell for its power.
©Copyright 1998 Elizabeth
Beckett, Holly Bernitt, and Vishwa Chandra. | <urn:uuid:c325006e-c854-40df-ae91-ee4179443529> | 3.703125 | 583 | Knowledge Article | Science & Tech. | 32.380814 |
Distribution of Meteor Hits
Name: David E.
Is there a map that charts, over the history of the earth, where meteors have hit?
Yes there certainly is. It appears every year in the Royal Astronomical Society of Canada's Observer's Handbook. You can get one from the Royal Astronomical Society of Canada (see www.rasc.ca), and maybe from Sky & Telescope Magazine, though I am not sure about that one.

All the best

David H. Levy Ph. D.
Here is an interesting Internet site that shows a distribution of meteor strikes in the world:

Of course the distribution will be skewed towards areas that are inhabited by humans, because meteor strikes can't be found by people if there are not any people in the area.

This is not a complete map, because I know of several meteor strike sites that are not on this map.

You might be able to find more maps by going to http://www.google.com and searching for "meteor strike sites".
Click here to return to the Astronomy Archives
Update: June 2012 | <urn:uuid:90bf3a96-7e64-4632-9877-5fb7fc78603f> | 2.921875 | 233 | Q&A Forum | Science & Tech. | 61.914845 |
This illustration shows how an Atomic Force Microscope (AFM) is used to image a line of graphene made by a pencil. The scale spans ten orders of magnitude, from the microscope and pencil to the atoms that compose the scanning probe and pencil line. As the viewer zooms into the line, graphite flakes, and eventually a single layer of graphene, become visible. On the AFM, a silicon cantilever with a sharp atomic tip and a laser with a photodiode measure the up and down motion as the probe maps out the graphene sample.
Audience: 11 and up
Tags: pencil, zoom, poster, illustration, AFM, atomic force microscope, graphite, graphene, atomic, lead
Permissions: This linked product was created by another institution (not by the NISE Network). Contact the owning institution regarding rights and permissions.
This product does not have any linked evaluations. | <urn:uuid:b3899c97-18db-4bef-a31d-e47d1bf9e5b0> | 3.4375 | 183 | Truncated | Science & Tech. | 30.356176 |
A means of color evaluation utilizing the temperature (in degrees Kelvin) to which a black object would need to be heated in order to produce light of a certain wavelength (or color). Substances, when heated, will tend to incandesce—or, in other words, as their constituent atoms or molecules absorb increasing amounts of energy, they will emit light and the wavelength of that light will vary by temperature and by substance. Iron, for example, when heated, emits light that is pale red; heated further, it emits white, then blue light. The black object described in terms of the evaluation of color temperature, is essentially a theoretical "perfect blackbody," a substance so black that it absorbs all the light that strikes it. In theory, as it increases in temperature, it will emit colors in a predictable manner. For example, at 2000 Kelvin (or 2000 K), it will emit red light; at 5000 K, it will emit white light; and at 10,000 K, it will emit blue light.
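A rough numerical sketch of the relationship described above (using Wien's displacement law, which links a blackbody's temperature to its peak emission wavelength; the code and constant below are illustrative additions, not part of the original definition):

# Wien's displacement law: lambda_peak = b / T, with b ~ 2.898e-3 metre-kelvins
WIEN_B = 2.898e-3

def peak_wavelength_nm(temp_k):
    # Peak blackbody emission wavelength, in nanometres
    return WIEN_B / temp_k * 1.0e9

for t in (2000, 5000, 10000):
    print("%6d K -> peak emission near %4.0f nm" % (t, peak_wavelength_nm(t)))

# 2000 K  -> ~1450 nm (peak in the infrared; the visible tail looks red)
# 5000 K  -> ~580 nm  (near the middle of the visible band, i.e. 'white')
# 10000 K -> ~290 nm  (peak in the ultraviolet; visible output skews blue)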
Light bulbs are often described in terms of their color temperature, as it is the heating of specific substances within them that actually produces the light. A generic 100-watt incandescent bulb (which essentially produces light by heating its tungsten filament until it glows brightly) has a color temperature of 2860 K. Direct sunlight—which produces white light, or light possessing an even distribution of wavelengths (or colors)—has a color temperature of about 5000 K, and is considered an important characteristic of the standard viewing conditions for the evaluation of color. | <urn:uuid:0d9cd695-c970-47b5-8953-cdaf148c5808> | 3.828125 | 318 | Knowledge Article | Science & Tech. | 37.823239 |
One of the best-known bits of folk wisdom about invasive species is that they settle down after a while to become part of a rebalanced ecosystem, and stop being a problem. This is an appealing idea, but how often is it true?
It turns out that scientists don't have very good answers to questions such as: Do the effects of invasive species diminish after a while? How long does this take? How much do effects diminish — a lot or a little? What causes the effects of an invader to diminish?
We haven't been able to watch many invasions continuously for a long time (funders prefer short studies to long ones, alas), so questions such as these are among the biggest unanswered questions in invasion ecology.
Our group at the Cary Institute is watching an interesting example right now. The effects of the zebra mussel invasion on the Hudson River ecosystem seem to be diminishing, only 20 years after this invader appeared in the river.
The zebra mussel is a European species that came to North America in the mid-1980s in the ballast water of ocean-going ships. It was first seen in the Hudson in 1991. By the end of 1992, zebra mussels outweighed all other animals in the river, and their population filtered a volume of water equal to all the water in the river every one to two days.
As a result, plankton populations fell by 50 percent to 90 percent, which caused large changes to the food web, water chemistry and water clarity. In fact, nearly everything we measure in the river changed when zebra mussels appeared.
But now it looks like these changes might not be permanent. Since 2000, some parts of the Hudson's ecosystem have begun to recover. Populations of native mussels, which were falling by 20 percent to 60 percent each year, stabilized and even began to increase. Numbers of crustaceans, worms and other small invertebrates fell by about 50 percent, then came back nearly to pre-invasion levels. This is especially welcome news, because these animals are the chief food of most of the Hudson's fishes.
So have zebra mussels disappeared from the Hudson? Hardly. Billions of these animals still live in the river. But they're not having an easy time of it; their survival rates are now less than 1 percent of what they were in the early 1990s. As a result, most of the remaining zebra mussels are young and small, which softens their impact on the ecosystem.
We are still trying to understand what is causing these dramatic changes to the Hudson's ecosystem. Blue crabs are part of the story. They are eating many more zebra mussels than they used to, and are responsible for part (but not all) of the increased mortality on zebra mussels. There is certainly more to the story than blue crabs, though, and the research group at the Cary Institute is now trying to uncover the rest of that story.
Can we conclude that the effects of zebra mussels and other invaders last for only a few years and aren't worth worrying about? As intriguing as our results may be, I am not ready to reach this conclusion for three reasons.
First, we don't know whether the Hudson's recovery is permanent or whether the ecosystem will slide back into a zebra-mussel dominated state. Second, even if the Hudson recovers fully and permanently in the next few years, zebra mussels did cause large ecological changes and economic damage in the Hudson for 20 years. These effects seem too large to simply dismiss. Think how we would react if some industrial polluter caused such changes for 20 years!
Finally, regardless of what happens with zebra mussels in the Hudson, we know that many invaders don't settle down so quickly. Our yards and gardens are still full of dandelions, Japanese beetles and European slugs; chestnut blight and Dutch elm disease still ravage our forests; and carp and water-chestnut still thrive in our waterways, even though these invaders have been here for decades to centuries.
So what we can actually conclude is that some invasions may settle down fairly quickly, while others may have severe effects for centuries or longer. It also seems likely this problem will keep invasion ecologists busy for a while. | <urn:uuid:5a333e1d-7ded-47d5-90b8-399a1d67ded7> | 3.734375 | 878 | Nonfiction Writing | Science & Tech. | 52.355121 |
Common Lisp the Language, 2nd Edition
The most primitive form for function invocation in Lisp of course has no name; any list that has no other interpretation as a macro call or special form is taken to be a function call. Other constructs are provided for less common but nevertheless frequently useful situations.
apply function arg &rest more-args
This applies function to a list of arguments.
The function may be a compiled-code object, or a lambda-expression, or a symbol; in the latter case the global functional value of that symbol is used (but it is illegal for the symbol to be the name of a macro or special form).
X3J13 voted in June 1988 (FUNCTION-TYPE) to allow the function to be only of type symbol or function; a lambda-expression is no longer acceptable as a functional argument. One must use the function special form or the abbreviation #' before a lambda-expression that appears as an explicit argument form.
The arguments for the function consist of the last argument to apply appended to the end of a list of all the other arguments to apply but the function itself; it is as if all the arguments to apply except the function were given to list* to create the argument list. For example:
(setq f '+)
(apply f '(1 2)) => 3
(setq f #'-)
(apply f '(1 2)) => -1
(apply #'max 3 5 '(2 7 3)) => 7
(apply 'cons '((+ 2 3) 4)) => ((+ 2 3) . 4) not (5 . 4)
(apply #'+ '()) => 0
Note that if the function takes keyword arguments, the keywords as well as the corresponding values must appear in the argument list:
(apply #'(lambda (&key a b) (list a b)) '(:b 3)) => (nil 3)
This can be very useful in conjunction with the &allow-other-keys feature:
(defun foo (size &rest keys &key double &allow-other-keys)
  (let ((v (apply #'make-array size :allow-other-keys t keys)))
    (if double (concatenate (type-of v) v v) v)))

(foo 4 :initial-contents '(a b c d) :double t)
  => #(a b c d a b c d)
funcall fn &rest arguments
(funcall fn a1 a2 ... an) applies the function fn to the arguments a1, a2, ..., an. The fn may not be a special form or a macro; this would not be meaningful.
X3J13 voted in June 1988 (FUNCTION-TYPE) to allow the fn to be only of type symbol or function; a lambda-expression is no longer acceptable as a functional argument. One must use the function special form or the abbreviation #' before a lambda-expression that appears as an explicit argument form.
(cons 1 2) => (1 . 2)
(setq cons (symbol-function '+))
(funcall cons 1 2) => 3
The difference between funcall and an ordinary function call is that the function is obtained by ordinary Lisp evaluation rather than by the special interpretation of the function position that normally occurs.
The value of call-arguments-limit is a positive integer that is the upper exclusive bound on the number of arguments that may be passed to a function. This bound depends on the implementation but will not be smaller than 50. (Implementors are encouraged to make this limit as large as practicable without sacrificing performance.) The value of call-arguments-limit must be at least as great as that of lambda-parameters-limit. See also multiple-values-limit. | <urn:uuid:cdacc826-d3ac-4ab5-a188-659b6dcc68fa> | 3.078125 | 789 | Documentation | Software Dev. | 60.968464 |
The term electromagnetic induction refers to the generation of an electric current by passing a metal wire through a magnetic field. The discovery of electromagnetic induction in 1831 was preceded a decade earlier by a related discovery by Danish physicist Hans Christian Oersted (1777–1851). Oersted showed that an electric current produces a magnetic field. That is, if you place a simple magnetic compass near any of the electrical wires in your home that are carrying a current, you can detect a magnetic field around the wires. If an electric current...
| <urn:uuid:ba988973-8b57-4444-a4ff-a1ed0f2a301a> | 3.8125 | 221 | Truncated | Science & Tech. | 43.15022 |
Johhny Electriglide wrote:
Read Hansen's "Storms of My Grandchildren". It is a common misconception to lump Gen IV nuclear with the other ones. Gen IV reactors use the waste of the other reactors and produce very little and only 10 year half life waste themselves. There is enough nuclear waste stored to power all of them replacing coal fired plants for 500 years and then they can run off of deuterium in sea water, no more uranium mining needed, ever. Good designs were trashed by Clinton in '94(to appease the anti-any-nuclear supporters) and those people who worked on the plans are mostly still around and remember. There are now 8 of them running well around the world, and we need thousands more by 2020. By the way, they have no proliferation danger either---read Hansen's book.
It would be nice if everywhere was perfect for solar as here.
The nuclear industry has been telling this sort of thing for 60 years I am not holding my breath.
Generation IV reactors (Gen IV) are a set of theoretical nuclear reactor designs currently being researched. Most of these designs are generally not expected to be available for commercial construction before 2030. http://en.wikipedia.org/wiki/Generation_IV_reactor
So we start building them around 2030 then we spend another 20 years sorting out the bugs, in the meantime we have a technology that we know works. Yes there are some problems associated with implementing large scale renewables but we do have the theoretical knowledge to sort them out.
The Nuclear power industry is simply a problem desperately looking for a reason to be. | <urn:uuid:e45dc781-1716-4093-ae00-b1cb4abc0885> | 2.703125 | 333 | Comment Section | Science & Tech. | 53.106778 |
Would rotating arms on a spacecraft (like in Sci-Fi movies) actually produce artificial gravity?
It’s pretty common in Sci-Fi movies to have a spacecraft with rotating arms that create artificial gravity for the craft. Would this really generate sufficient centrifugal force to create artificial gravity? Would those arms need to be moving very fast to achieve 1 G-force (I realize the math would be dependent on the length of the arms)? What about astronauts traveling up and down the length of the arm-shafts themselves, would the artificial gravity get stronger the further away they traveled from the central axis? Would they be constantly vomiting from motion sickness?
Are there other factors that would make this technology impractical or unrealistic, or other interesting consequences of a technology such as this?
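For a rough sense of the numbers (a back-of-the-envelope sketch, not an answer from the thread; the radii below are arbitrary examples): the "gravity" in a rotating craft is just centripetal acceleration a = ω²r, so you can solve for the spin rate that gives 1 g and see how the pull varies along an arm.

import math

ONE_G = 9.81  # m/s^2

def rpm_for_one_g(radius_m):
    # Spin rate, in revolutions per minute, giving 1 g at the given radius
    omega = math.sqrt(ONE_G / radius_m)   # rad/s, from a = omega^2 * r
    return omega * 60.0 / (2.0 * math.pi)

for r in (10, 50, 100, 225):
    print("arm radius %3d m -> %.1f rpm for 1 g" % (r, rpm_for_one_g(r)))

# Felt 'gravity' grows linearly with distance from the hub:
omega = math.sqrt(ONE_G / 100.0)          # craft tuned to give 1 g at 100 m
for r in (25, 50, 75, 100):
    print("at %3d m from the hub: %.2f g" % (r, omega ** 2 * r / ONE_G))

Long arms help twice over: the spin rate (commonly cited as a motion-sickness risk once it exceeds a few rpm) stays low, and the head-to-foot difference in apparent gravity becomes small.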
This question is in the General Section. Responses must be helpful and on-topic. | <urn:uuid:b4eee250-0346-4097-9d51-2187733e1fa6> | 3.546875 | 176 | Q&A Forum | Science & Tech. | 43.237132 |
Radical plan to combat global warming 'may raise temperatures'
A controversial proposal to create artificial white clouds over the ocean in order to reflect sunlight and counter global warming could make matters worse, scientists have warned.
The proposed scheme to create whiter clouds over the oceans by injecting salt spray into the air from a flotilla of sailing ships is one of the more serious proposals of researchers investigating the possibility of "geoengineering" the climate in order to combat global warming.
Geoengineering – deliberately altering the global climate – was dismissed as outlandish fantasy a decade ago but has recently been seen as a serious topic of study, given the international failure to curb global emissions of carbon dioxide and the possibility of extreme climate change.
However, a study into the effects of creating man-made clouds which reflect sunlight and heat back into space has found that the strategy could end up having the opposite effect by interfering with the natural processes that lead to the formation of reflective white clouds over the ocean.
A team of scientists from Britain and Finland found that spraying salt water into the air to encourage the formation of clouds may actually hinder natural cloud formation over the coastal regions of the continents because of other pollutants from industrial activities.
"Our research suggests that attempts to generate brighter clouds via sea spray geoengineering would at best have only a tiny effect and could actually cause some clouds to become less bright," said Professor Ken Carslaw of the University of Leeds.
White clouds form naturally over the ocean as a result of saltwater spray being blown high into the air. The salt crystals form tiny particles on which cloud droplets form and the denser the droplets, the whiter the cloud and the more reflective it is towards incoming sunlight.
Twenty years ago, scientists proposed that it might be possible to augment this process with a fleet of ships designed to spray saltwater into the air. Calculations suggested that this could cool the planet if carried out on a large enough scale.
However, a computer model used by Professor Carslaw and his colleagues suggested that it would be difficult to create a uniform layer of saltwater spray and that natural particles in the air, called aerosols, could interfere with the process. "The formation of clouds from artificial sea spray is particularly sensitive to background levels of aerosol. This means that injecting spray around coastal areas where there is a lot of air pollution from land may not produce enough extra cloud drops to stave off global warming," Professor Carslaw said.
"In some locations, the artificial spray particles may hinder natural drop formation and could have an opposite effect on climate to that intended. In practice, generating a uniform covering of reflective clouds over large regions of the world's oceans would be extremely challenging," he said.
This notion involves augmenting the natural process of white-cloud formation over the oceans to reduce levels of incoming sunlight and heat. But it would not help the increasing acidity of the seas because it fails to address rising levels of carbon dioxide in the atmosphere.
Another idea is to emit sulphate particles high into the atmosphere to reflect sunlight back into space. These would simulate what happens in a volcanic eruption when the aerosol particles from the eruption cut out sunlight and cause limited global cooling. The sulphates would wash out within a couple of years but again this "solution" does not address ocean acidity, or the potential acidity of the sulphate aerosols.
Being able to emulate the way trees convert carbon dioxide gas into solid carbon-containing substances is seen as the best geoengineering idea. But nobody has been able to do it better than trees – so why not simply plant more forests? This proposal reduces carbon dioxide concentrations and so helps ocean acidity.
Mirrors in space
The idea is to create a huge reflective surface between earth and the sun that could be adjusted to interfere with incoming solar radiation. Apart from the immense technical difficulties, the political implications of who controls this technology are problematic to say the least.
| <urn:uuid:c7e317f3-e3f3-4625-afdd-39d6f14303c2> | 2.984375 | 1,074 | Truncated | Science & Tech. | 35.540882 |
Marissa Ahlering is a prairie ecologist for The Nature Conservancy in Minnesota, North and South Dakota. Dr. Ahlering is monitoring the Conservancy’s prairie sites, gathering the data needed to guide their management. She knows these prairies well, having done research on the habitat needs of two grassland-nesting birds in the region. She’s also studied elephants – in East Africa – using methods that are valuable for studying prairie wildlife, too.
Kristen Blann is a freshwater ecologist for The Nature Conservancy in Minnesota, North and South Dakota. It’s a region of numerous lakes, streams, rivers and wetlands, and Dr. Blann is actively involved in the Conservancy’s aquatic conservation planning. That work can include “fishing” for endangered freshwater mussels (no mussels were harmed), or developing an ecological framework for managing water resources across regions as large as Minnesota’s entire Great Lakes Basin.
Meredith Cornett is Director of Conservation Science for The Nature Conservancy in Minnesota, North and South Dakota. Those are states with a wide variety of habitats that interest her research staff: arid badlands, tallgrass prairies, the North Woods of Minnesota’s canoe country and Mississippi River blufflands (to name only a few). Dr. Cornett is keenly interested in forests, and her field work is helping develop strategies for managing northern forests for the future’s changing and uncertain climate.
Phil Gerla is an aquatic ecologist and hydrologist helping The Nature Conservancy with prairie and wetland conservation in Minnesota, North and South Dakota. That work includes the Glacial Ridge Project, the largest prairie and wetland restoration project in America. Dr Gerla is guiding the transformation of a drainage ditch into a prairie stream at Glacial Ridge, and his research is revealing what happens to water quality and quantity when former croplands are restored to native grasslands.
Mark White is a forest ecologist for The Nature Conservancy in Minnesota and the Dakotas. White is seeking ways to conserve the biodiversity of the forests that extend from central Minnesota north to Canada. It’s a task that must take into account the forest’s own variability, human uses and other factors such as invasive species. Even deer can have a profound effect, and White is developing strategies to keep deer from altering the make-up of tomorrow’s forests. | <urn:uuid:c63ab4c5-99d9-4550-bb1a-1ec621b36a4d> | 2.734375 | 506 | Content Listing | Science & Tech. | 32.153158 |
Humans were the main culprit behind a series of ancient bird extinctions, according to a newly published report. Writing Monday, scientists pointed to human colonization of the Pacific Islands as the cause.
Posted by Brian 57 days ago (http://www.theverge.com)
A 30-year study of birds and roadkill may have given us a look into how animals respond to man-made changes in the environment. According to a study published last week in Current Biology, an area...
Posted by weathernms 59 days ago (http://www.examiner.com)
There have been widespread reports of a meteor streaking across the evening sky along the U.S. East Coast.
Posted by jon 59 days ago (http://www.foxnews.com)
Massive volcanic eruptions may have led to the extermination of half of Earth's species some 200 million years ago, a new study suggests.
Posted by mikedehaan 64 days ago (http://www.decodedscience.com)
The Mythbusters showed that bubble wrap cannot protect you from the g-forces of a crash or fall. Here is the math for falling in bubble wrap.
Posted by prerista 64 days ago (http://www.space.com)
A massive sun eruption on March 15 sent solar particles toward Earth that could supercharge northern lights displays. The particles were travelling at 3.2 million mph, NASA says.
An earthquake early-warning system being tested in California has proved successful ahead of Monday's moderate earthquake, according to seismologists.
Astronaut Chris Hadfield has become the first Canadian commander of the International Space Station (ISS).
NewsMeBack is the place for everyone who likes citizen journalism and social news, a place for you where you can put your news, events from everyday life, something interesting that happened in your life or else. | <urn:uuid:3d4f7951-918f-40ae-b066-de54b721db9c> | 2.6875 | 385 | Content Listing | Science & Tech. | 60.435101 |
Search our database of handpicked sites
Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest.
You searched for
"internal combustion engine"
We found 11 results on physics.org and 51 results in our database of sites
(50 are Websites, 0 are Videos, and 1 is an Experiment)
Search results on physics.org
Search results from our links database
A brief description of how Gas Turbine Engines ( Jet Engines) work. From HowStuffWorks.com.
A site about Charles Babbage's Analytical Engine - the first design for a general-purpose computer.
History of the Stirling engine and an explanation of how it works.
A lesson on Total Internal Reflection which extends into the detailed mathematics.
A good brief description of how Steam Engines work. A nice, simple, easy to understand site
A brief description of how Rotary (Wankel) Engines work. Part of Marshall Brain's HowStuffWorks.com.
A brief description of how Stirling Engines work. This site is part of Marshall Brain's HowStuffWorks.com.
A brief description of how Two-stroke Engines work from HowStuffWorks.com. Good, clear diagrams
A brief explanation why ion engines need a vacuum
A brief description of how Internet Search Engines work, including an insight of future searchers not yet developed. From HowStuffWorks.com.
Showing 11 - 20 of 51 | <urn:uuid:e0eb2299-7ac3-4dc6-a970-29fbda5444d1> | 3.015625 | 324 | Content Listing | Science & Tech. | 61.33905 |
Search our database of handpicked sites
Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest.
You searched for
We found 16 results on physics.org and 199 results in our database of sites
(198 are Websites, 1 is a Video, and 0 are Experiments)
Search results on physics.org
Search results from our links database
Streetline collects live data from street-based sensors to tell users where empty parking spaces can be found via a smartphone app. An example of the internet of things
Interested in space? Current news on Space Shuttle missions, info on space technology, history, and space related activities for children.
The History of Space Science and Astronomy includes a huge amount of information on: The Millennium - A Space and Astronomy Timeline; Man-in-Space Firsts ; Space Factoids
Covers a large amount of information on space and space related topics - both recent news and archive.
A portal with loads of information about space missions, research, space travel and more.
Includes a vast amount of information regarding space, space exploration and associated topics.
A page on space elevators and how they could spark a revolution in space travel.
Explore the stories of three exciting space missions: "Friendship 7" - John Glenn Orbits the Earth; "Apollo 11" - First Moon Landing and "STS-7" - First American Woman in Space.
After only a century of powered flight, we have escaped the confines of planet Earth, and forged our way into the vast expanse of space. Explore the past, present and future of space travel on this ...
Multiple choice questions about outer space at three levels. Requires some knowledge of NASA missions as well as Earth and Space knowledge.
Showing 1 - 10 of 199 | <urn:uuid:857fe859-79a1-4220-ac24-99e99ff73ca4> | 2.796875 | 387 | Content Listing | Science & Tech. | 49.234955 |
The muon (named after and represented by the Greek letter mu (μ)) is an elementary particle with negative electric charge and a spin of 1/2. It has a mean lifetime of 2.2 μs, longer than that of any other unstable lepton, meson, or baryon except for the neutron. Together with the electron, the tau, and the neutrinos, it is classified as a lepton. Like all fundamental particles, the muon has an antimatter partner of opposite charge but equal mass and spin: the antimuon, also called a positive muon. Muons are denoted by μ− and antimuons by μ+.
For historical reasons, muons are sometimes referred to as mu mesons, even though they are not classified as mesons by modern particle physicists (see History). Muons have a mass of 105.7 MeV/c2, which is 206.7 times the electron mass. Since their interactions are very similar to those of the electron, a muon can be thought of as a much heavier version of the electron. Due to their greater mass, muons do not emit as much bremsstrahlung radiation; consequently, they are highly penetrating, much more so than electrons.
Since the production of muons requires an available center of momentum frame energy of over 105 MeV, neither ordinary radioactive decay events nor nuclear fission and fusion events (such as those occurring in nuclear reactors and nuclear weapons) are energetic enough to produce muons. Only nuclear fission produces single-nuclear-event energies in this range, but due to conservation constraints, muons are not produced.
On earth, all naturally occurring muons are apparently created by cosmic rays, which consist mostly of protons, many arriving from deep space at very high energy.
When a cosmic ray proton impacts atomic nuclei of air atoms in the upper atmosphere, pions are created. These decay within a relatively short distance (meters) into muons (the pion's preferred decay product), and neutrinos. The muons from these high energy cosmic rays, generally continuing essentially in the same direction as the original proton, do so at very high velocities. Although their lifetime without relativistic effects would allow a half-survival distance of only about 0.66 km at most, the time dilation effect of special relativity allows cosmic ray secondary muons to survive the flight to the earth's surface. Indeed, since muons are unusually penetrative of ordinary matter, like neutrinos, they are also detectable deep underground and underwater, where they form a major part of the natural background ionizing radiation. Like cosmic rays, as noted, this secondary muon radiation is also directional. See the illustration above of the moon's cosmic ray shadow, detected when 700 m of soil and rock filters secondary radiation, but allows enough muons to form a crude image of the moon, in a directional detector.
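A quick order-of-magnitude sketch (illustrative code using the lifetime and mass quoted above; the chosen energies are arbitrary) makes the time-dilation point concrete:

import math

C = 2.998e8      # speed of light, m/s
TAU = 2.2e-6     # mean muon lifetime at rest, s
M_MU = 105.7     # muon mass, MeV/c^2

def mean_decay_length_km(energy_mev):
    # Mean distance travelled before decay for a muon of the given total energy
    gamma = energy_mev / M_MU                   # Lorentz (time-dilation) factor
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)    # speed as a fraction of c
    return gamma * beta * C * TAU / 1000.0

print("no time dilation, v ~ c : %.2f km" % (C * TAU / 1000.0))   # ~0.66 km
for e_mev in (1000.0, 3000.0, 10000.0):
    print("total energy %5.0f MeV : %.1f km" % (e_mev, mean_decay_length_km(e_mev)))

Even a few-GeV muon therefore survives, on average, over distances comparable to the depth of the atmosphere, which is why the cosmic-ray muon flux described above reaches the ground.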
The same nuclear reaction described above (i.e., hadron-hadron impacts to produce pion beams, which then quickly decay to muon beams over short distances) is used by particle physicists to produce muon beams, such as the beam used for the muon g-2 gyromagnetic ratio experiment (see link below). In naturally-produced muons, the very high-energy protons to begin the process are thought to originate from acceleration by electromagnetic fields over long distances between stars or galaxies, in a manner somewhat analogous to the mechanism of proton acceleration used in laboratory particle accelerators.
Muons are unstable elementary particles and are heavier than electrons and neutrinos but lighter than all other matter particles. They decay via the weak interaction to an electron, two neutrinos and possibly other particles with a net charge of zero. Nearly all of the time, they decay into an electron, an electron-antineutrino, and a muon-neutrino. Antimuons decay to a positron, an electron-neutrino, and a muon-antineutrino:
μ− → e− + ν̄e + νμ
μ+ → e+ + νe + ν̄μ
The tree-level muon decay width (neglecting the electron mass) is Γ = G_F^2 m_μ^5 / (192 π^3), where G_F is the Fermi coupling constant and m_μ is the muon mass.
A photon or electron-positron pair is also present in the decay products about 1.4% of the time.
The decay distributions of the electron in muon decays have been parametrized using the so-called Michel parameters. The values of these five parameters can be predicted unambiguously in the Standard Model of particle physics—no deviation with respect to these predictions has yet been found.
Certain neutrino-less decay modes are kinematically allowed but forbidden in the Standard Model. Examples are μ− → e− + γ and μ− → e− + e+ + e−.
A positive muon, when stopped in ordinary matter, can also bind an electron and form an exotic atom known as muonium (Mu) atom, in which the muon acts as the nucleus. The positive muon, in this context, can be considered a pseudo-isotope of hydrogen with one ninth of the mass of the proton. Because the reduced mass of muonium, and hence its Bohr radius, is very close to that of hydrogen, this short lived "atom" behaves chemically — to a first approximation — like hydrogen, deuterium and tritium.
where the first errors are statistical and the second systematic.
The difference between the g-factors of the muon and the electron is due to their difference in mass. Because of the muon's larger mass, contributions to the theoretical calculation of its anomalous magnetic dipole moment from Standard Model weak interactions and from contributions involving hadrons are important at the current level of precision, whereas these effects are not important for the electron. The muon's anomalous magnetic dipole moment is also sensitive to contributions from new physics beyond the Standard Model, such as supersymmetry. For this reason, the muon's anomalous magnetic moment is normally used as a probe for new physics beyond the Standard Model rather than as a test of QED (Phys.Lett. B649, 173 (2007)).
Because its mass was intermediate between that of the electron and the proton, Anderson initially called the new particle a mesotron, adopting the prefix meso- from the Greek word for "mid-". Shortly thereafter, additional particles of intermediate mass were discovered, and the more general term meson was adopted to refer to any such particle. Faced with the need to differentiate between different types of mesons, the mesotron was in 1947 renamed the mu meson (with the Greek letter μ (mu) used to approximate the sound of the Latin letter m).
However, it was soon found that the mu meson significantly differed from other mesons; for example, its decay products included a neutrino and an antineutrino, rather than just one or the other, as was observed in other mesons. Other mesons were eventually understood to be hadrons—that is, particles made of quarks—and thus subject to the residual strong force. In the quark model, a meson is composed of exactly two quarks (a quark and antiquark), unlike baryons which are composed of three quarks. Mu mesons, however, were found to be fundamental particles (leptons) like electrons, with no quark structure. Thus, mu mesons were not mesons at all (in the new sense and use of the term meson), and so the term mu meson was abandoned, and replaced with the modern term muon.
Revolutionary Muon Experiment to Begin with 3,200-Mile Move of 50-Foot-Wide Particle Storage Ring Massive Device Will Travel from New York to Illinois by Barge and Truck This Summer
May 08, 2013; UPTON, NY -- The following information was released by Brookhaven National Laboratory: The muon g-2 storage ring, in its... | <urn:uuid:2037f363-fff7-488a-9f6d-49b0c0a6b5d8> | 3.953125 | 1,602 | Knowledge Article | Science & Tech. | 36.328926 |
Sour Showers; September 2010; Scientific American Magazine; by Michael Tennesen; 2 Page(s)
The acid rain scourge of the 1970s and 1980s that killed trees and fish and even dissolved statues on Washington, D.C.’s National Mall has returned with a twist. Rather than being sulfuric acid derived from industrial sulfur emissions, the corrosive liquid is nitric acid, which has resulted not just from smokestacks but also from farming.
Besides dissolving cement and limestone and lowering the pH of lakes and streams, acid rain leaches critical soil nutrients, injuring plants, and liberates toxic minerals that can enter aquatic habitats. To combat the problem the first time around, the U.S. Environmental Protection Agency passed the Clean Air Act Amendments of 1990, which cut sulfur emissions from power plants by 59 percent from 1990 to 2008. Emissions of nitrogen compounds, however, have not fallen as steeply. | <urn:uuid:82243c8f-bce8-447f-811d-f505964da49f> | 2.90625 | 192 | Truncated | Science & Tech. | 50.142047 |
Rivers and streams are reaching record levels as a result of Hurricane Irene’s rainfall, with more than 80 USGS streamgages measuring record peaks.
On Tuesday, August 23, 2011, at 01:51 PM, a magnitude 5.8 earthquake occurred 38 miles outside of Richmond, VA.
USGS scientists study walruses off the northwestern Alaska coast in August as part of their ongoing study of how the Pacific walrus are responding to reduced sea ice conditions in late summer and fall.
Secretive and rare stream-dwelling amphibians are difficult to find and study. Scientists at the US Geological Survey and University of Idaho have developed a way to detect free-floating DNA from amphibians in fast-moving stream water.
USGS scientists are collecting water samples and other data to determine trends in ocean acidification from the least explored ocean in the world.
A new geologic map of Lassen Volcanic National Park and vicinity has been created. The map area includes the entire Lassen Volcanic Center, parts of three older volcanic centers, and the products of regional volcanism.
Within the rivers, streams, and lakes of North America live over 200 species of freshwater mussels that share an amazing life history. Join us in Reston, VA to explore the fascinating reproductive biology and ecological role of one of nature’s most sophisticated fishermen.
USGS scientists are working to characterize the contaminants and habitats for a number of aquatic species along the lower Columbia River.
The effects of drought are felt throughout the United States and the world, and USGS science has a prominent role in understanding the causes and consequences of this hydrological phenomenon.
In support of the Famine Early Warning Systems Network, USGS scientists use satellite remote sensing to assess agricultural conditions that foretell famine.
Receive news and updates: | <urn:uuid:0efb218b-3c13-4f0e-8b95-319823697428> | 3.109375 | 375 | Content Listing | Science & Tech. | 31.858776 |
WorldChanging has covered the rising trend of biomimicry before: taking design cues directly from nature.
The latest example comes from Wilmington, DE company NanoCyte, which has filed a patent application for a dermal injection system utilizing the stinging cells of jellyfish.
Jellyfish and other Cnidaria have stinging cells called cnidocysts. These shoot a tiny hollow thread, at incredibly high speed, into anything that touches a "trigger" near the cell's opening, and then pump toxin through the thread into the target.
The inventors propose extracting the toxin without killing or triggering the cells, simply by incubating them for a few minutes at around 70°C. The empty cells could then be soaked in whatever chemical is to be injected, they say.
The cells would be applied to a patient's skin in a patch and then pressure combined with a few low-voltage electric pulses should trigger the cells to fire. They would shoot out their tubules, penetrate the skin and inject the new chemical. Because there is no toxin, the injection should be very quick and painless, and the threads would be extracted once the patch is removed.
Such a bio-based system could be used in the treatment of diabetes and skin diseases such as acne, as well as being a rather novel way of applying tattoos. It's not entirely clear if the system would use cells harvested from living jellyfish or grown as cultures, though -- after one nasty childhood incident at the beach in Corpus Christi, TX in 1982 -- I find it difficult to feel particularly concerned for the well-being of the jellyfish in question.
via: New Scientist
Other than standing in awe in front of a huge tank filled with various jellyfish with a black background and black-lit for what seemed hours I've not had any encounters with them.
This seems pretty incredible. Hopefully it would not result in their "farming".
did find your last comment unnecessary. was a good article until then.
hope you someday find a way to forgive nature for what it might have done to you over 25 years ago.
I doubt we'd have to farm them in the traditional sense(and don't see anyway to do it in a cost-effective and efficient manner). Cultures on the other hand, may be just what the doctor ordered.
Not to be nitpicky, but technically speaking this is bioutilization, not biomimicry. (Biomimicry is getting inspiration from nature for how we do things; bioutilization is using the actual organism or its remains for stuff, like wood in houses or horseshoe crab blood in cancer drugs.) Certainly both are part of neobiological industry, which we'll see more of in coming decades; we have to be a little more careful with bioutilization, to make sure we're harvesting sustainably, but it can be quite beneficial.
Jeremy: totally right, of course.
Mamat: I was kidding. I love the jellyfish. I love the nature. The nature is awesome.
Well, except for sunsets, which I find totally appalling. | <urn:uuid:06328218-1ebb-4adb-9d68-a589cdf9abab> | 2.6875 | 648 | Comment Section | Science & Tech. | 51.158453 |
The north of England is dominated by rocks of Carboniferous age, which give it a distinctive scenery and history, where local coal fuelled the world’s first industrial landscape.
The geology is extremely well known, because of the importance of the coal deposits, but also because of the continuing excellence of the British Geological Survey. A recent paper shows how their deep knowledge allows them to identify and quantify cycles of sedimentation, some of which are less than 100,000 years in duration (a geological eye-blink).
Spotting the cycles
In an earlier post I’ve written about the rock types found in this area, the Pennine Basin of northern England, so here I’ll cover the broad geological context only.
The early Carboniferous in England was a time of extensive rifting, caused by plate tectonic goings-on further south. This created deep ‘gulfs’ in the grabens and shallow platforms between (horsts and grabens if you’re feeling German). All sedimentation was marine, mud in the gulfs and limestone on the platforms. By the mid-Carboniferous the extension had finished, but the thermal disruption it caused remained, meaning that cooling of the crust caused slow but constant subsidence through the rest of the Carboniferous. The mid to late Carboniferous (Namurian and Westphalian, in local terms) was dominated by shallow water, mostly non-marine sedimentation. A time of rivers, deltas and coal swamps, all close to sea level.
It's long been noticed that there are regular sequences within these rocks. Coal deposits occur regularly and can be correlated from pit to pit for tens of kilometres. In a similar way 'marine bands', thin layers of shale containing marine fossils, are seen again and again. These marine bands contain goniatite fossils (older relatives of ammonites) which evolve rapidly and can also be correlated from place to place. Often the marine bands are succeeded by coarsening-upwards sequences that move into non-marine rocks – in turn topped by another marine band.
As recently as the 1980s this regularity was explained rather feebly in terms of ‘avulsion of deltas’ or some such. Even to this spotty teenager, it wasn’t a convincing story. When sequence stratigraphic concepts arrived soon after, they were a natural fit, particularly when marine bands were correlated across different basins in Europe, showing that the cause couldn’t be local.
There are extensive Carboniferous glacial deposits in many parts of the world. The idea that the waxing and waning of polar ice-caps has a major influence on sedimentary patterns across the world is now commonplace, and it fits these rocks well. Melting of polar ice will cause flooding globally, putting marine mud on top of areas previously above sea level – this was as true for the Carboniferous as it may be for the Anthropocene.
Measuring the cycles
In Nature and timing of Late Mississippian to Mid Pennsylvanian glacio-eustatic sea-level changes of the Pennine Basin, UK Colin Waters and Daniel Condon of the British Geological Survey take a massive data set and use it to quantify how long these cycles of sedimentation took.
Sequence stratigraphy emphasises the identification of significant surfaces that correspond to significant changes in sea level. Sequence boundaries are associated with sea-level falls and parasequence boundaries with sea-level maxima. Waters and Condon link Pennine rocks to sequence stratigraphy: “The marine bands occur at the base of marine to non-marine upward-coarsening cycles, equating to the parasequence of the Exxon sequence-stratigraphic model“; marine bands are maximum flooding surfaces. They identify 47 of these and use current day areal extent to infer which ones represent bigger sea-level rises. Minor unconformities, where valleys have been cut into older sediments, can be linked to sequence boundaries – if sea-level falls then river channels will deepen. These palaeo-valleys are rather subtle structures, but they have been mapped out across northern England.
Waters and Condon start by looking at distinctive layers of mud with great names, one is a bentonite, the other a tonstein. These are layers of volcanic ash and they contain primary zircons, volcanic grains that lock in the age of the eruption. Analysis of these grains allows them to calculate accurate dates for when the layers were deposited.
These dates are not just of local interest. Carboniferous rocks in Europe are correlated on the basis of marine fossils, such as goniatites in marine bands. From this, geologists create a biostratigraphy that allows you to know the age of a rock from the fossils within it. The ideal is a global biostratigraphy, but the nature of the fossils found in Carboniferous rocks makes this difficult.
This section of the European biostratigraphy shows how fossils track the passage of time. Note there is no age on there. The rate at which new fossil species arise, or sediments are deposited, is not known. Dating a volcanic ash layer, which is found in a particular position in the biostratigraphy, allows you to put absolute dates against the table above, to start to build up a chronostratigraphy. There are other ways of linking the cycles in the sediments to absolute ages, as we shall see…
Understanding ancient cycles
Interpreting the patterns of rocks in terms of sequence stratigraphy provides further constraints on timing. Patterns of sea-level change are linked to periodic variations in the Earth's orbit and spin, known as Milankovitch cycles. For the Carboniferous we expect long cycles of 413,000 years and shorter ones of 112,000 years.
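A toy version of the arithmetic (entirely made-up numbers, not data from the paper) shows how two radiometrically dated ash beds bracketing a run of marine bands let you test which orbital period fits:

# Hypothetical ages for two ash beds and the marine bands between them
older_ash_ma = 318.6        # millions of years (illustrative values only)
younger_ash_ma = 317.2
marine_bands_between = 12

interval_kyr = (older_ash_ma - younger_ash_ma) * 1000.0
print("average spacing: %.0f kyr per marine band"
      % (interval_kyr / marine_bands_between))

# Compare with the Carboniferous orbital periods mentioned above
for name, period_kyr in (("413 kyr cycle", 413.0),
                         ("112 kyr cycle", 112.0)):
    print("%-14s -> %.1f bands expected" % (name, interval_kyr / period_kyr))

With 12 bands in 1.4 million years, the shorter period fits almost exactly; this is the style of argument the paper makes with real dates and real counts of marine bands.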
Putting all these constraints together and using their massive data set, Condon and Waters build up a picture of how distant ice-caps controlled English rocks.
Starting with the big picture, they posit four major ‘ice ages’ for the period in question, each lasting approximately 1 million years. The interglacial periods are associated with no paleo-valleys and few marine bands – sea level is fairly stable.
For the intervening rocks, they see two patterns in the marine bands. At times they follow a 400,000-year cycle, at others 111,000 or 150,000 years. The patterns of rocks in England are controlled by ancient variations in the Earth’s orbit and rotation. This is an extraordinary thing. The link between the two is the ebb and flow of ice-caps half-way across the world – in geology it sometimes feels like absolutely everything is inter-connected.
For rare cases where multiple marine bands contain the same fossils, Condon and Waters infer these must be related to even shorter sub 100,000 year Milankovitch cycles. This is less well-proven as it is based on the assumption that the rate of change of goniatite species is relatively constant.
Although focussed on a small region, this research is interesting in many ways. Firstly it shows how stratigraphers use multiple lines of evidence to build up a picture of earth history. Condon and Waters put dates on the duration of the Ice Ages which are of use when studying rocks of this age anywhere on earth. Also it gives a taste of how aggregating data gives new insights; to map out the marine bands they drew on countless individual data points collected by the BGS over many years.
The work of stratigraphers is not glamorous but it is important. To build up a history of the earth, knowing when things happened is vital.
References, other information
Colin N. Waters & Daniel J. Condon (2012). Nature and timing of Late Mississippian to Mid-Pennsylvanian glacio-eustatic sea-level changes of the Pennine Basin, UK. Journal of the Geological Society. DOI: 10.1144/0016-76492011-047
A late draft of this paper is available to you all via an open access portal.
Courtesy of the BGS, here’s a view of the geology of northern England. | <urn:uuid:32ba81f2-64ae-4fd9-ab57-a5929aba7358> | 3.71875 | 1,725 | Academic Writing | Science & Tech. | 39.30795 |
A linked list can be viewed as a group of items, each of which points to its neighbouring item(s). An item in a linked list is known as a node. A node contains a data part and one or two pointer parts which contain the addresses of the neighbouring nodes in the list.
A linked list is a data structure that supports dynamic memory allocation and hence solves the fixed-size problem of using an array.
Types of linked lists
The different types of linked lists include:
1. Singly linked lists
2. Circular linked lists
3. Doubly linked lists
Simple/Singly Linked Lists
In singly linked lists, each node contains a data part and an address part. The address part of the node points to the next node in the list.
Node Structure of a linked list
An example of a singly linked list can be pictured as shown below. Note that each node is pictured as a box, while each pointer is drawn as an arrow. A NULL pointer is used to mark the end of the list.
The head pointer points to the first node in a linked list. If head is NULL, the linked list is empty.
A head pointer to a list
Possible Operations on a singly linked list
Insertion: Elements are added at any position in a linked list by linking nodes.
Deletion: Elements are deleted at any position in a linked list by altering the links of the adjacent nodes.
Searching or Iterating through the list to display items.
To insert or delete items from any position of the list, we need to traverse the list starting from its root till we get the item that we are looking for.
Implementation of a singly linked list
Creating a linked list
A node in a linked list is usually a structure in C and can be declared as
struct node
{
    int info;               /* data part of the node */
    struct node *next;      /* address of the next node in the list */
}; //end struct
A node is dynamically allocated as follows:
p = malloc( sizeof(struct node) );    /* in C++ this would be: p = new Node; */
For creating the list, the following code can be used:
current_node = malloc( sizeof(struct node) );
if( root_node == NULL )   // the first node in the list
The code creates the list by taking values until the user inputs -999.
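Since only a fragment is shown above, here is a fuller sketch of one possible creation loop. It is illustrative rather than definitive: the function name create_list is invented here, while the struct node declared earlier and the -999 sentinel are taken from the text.
#include <stdio.h>
#include <stdlib.h>

/* Builds the list by reading integers until -999 is entered.
   Returns a pointer to the first node (the root of the list). */
struct node *create_list(void)
{
    struct node *root_node = NULL, *current_node = NULL, *new_node;
    int value;

    scanf( "%d", &value );
    while ( value != -999 )
    {
        new_node = malloc( sizeof(struct node) );
        new_node->info = value;
        new_node->next = NULL;

        if ( root_node == NULL )            /* the first node in the list */
            root_node = new_node;
        else
            current_node->next = new_node;  /* link it after the last node */

        current_node = new_node;            /* remember the last node added */
        scanf( "%d", &value );
    }
    return root_node;
}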
Inserting an element
After getting the position and the element which needs to be inserted, the following code can be used to insert the element into the list.
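A minimal sketch of such an insertion is given below. It is one possible implementation, not the only one: the function name insert_element and the 1-based position convention are assumptions made here, and it relies on the struct node and headers introduced above.
/* Inserts new_element at position pos (1 = front of the list).
   Returns the (possibly new) root of the list. */
struct node *insert_element(struct node *root_node, int new_element, int pos)
{
    struct node *new_node = malloc( sizeof(struct node) );
    struct node *temp_node = root_node;
    int i;

    new_node->info = new_element;

    if ( pos == 1 || root_node == NULL )     /* insert at the beginning */
    {
        new_node->next = root_node;
        return new_node;                     /* the new node becomes the root */
    }

    /* walk to the node just before the insertion point */
    for ( i = 1; i < pos - 1 && temp_node->next != NULL; i++ )
        temp_node = temp_node->next;

    new_node->next = temp_node->next;        /* link the new node to the rest of the list */
    temp_node->next = new_node;              /* link the previous node to the new node */
    return root_node;
}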
The following figure illustrates how a node is inserted at an intermediate position in the list.
The following figure illustrates how a node is inserted at the beginning of the list.
Deleting an element
After getting the element to be removed, the following code can be used to remove the particular element.
if ( root_node != NULL ) {
    temp_node = root_node;
    if ( temp_node->info == input_element ) {        /* the first node holds the element */
        root_node = temp_node->next;
        free ( temp_node ) ;
    } else {
        while ( temp_node->next != NULL && temp_node->next->info != input_element )
            temp_node = temp_node->next;
        if ( temp_node->next != NULL ) {              /* the element was found further down the list */
            delete_node = temp_node->next;
            temp_node->next = delete_node->next;      /* unlink the node before freeing it */
            free ( delete_node ) ;
        }
    }
}
The following figures illustrate the deletion of an intermediate node and the deletion of the first node from the list.
To display the elements of the list
temp_node = root_node;
while( temp_node != NULL )
{
    printf( "%d ", temp_node->info );   /* print the data part of the node */
    temp_node = temp_node->next;        /* move to the next node */
}
The following figure illustrates the above piece of code.
The effect of the assignment temp_node = temp_node->next
Efficiency and advantages of Linked Lists
Although searching a linked list requires the same number of comparisons as searching an array, the advantage lies in the fact that no items need to be moved after insertion or deletion.
As opposed to fixed size of arrays, linked lists use exactly as much memory as is needed.
Individual nodes need not be contiguous in memory. | <urn:uuid:cb3e8cd7-cf53-4514-a447-17688d3a16ad> | 3.71875 | 779 | Documentation | Software Dev. | 47.018399 |
Catlin Arctic Survey 2010
The 2010 Arctic Survey focused on the implications of sea ice loss, specifically ocean change and acidification. The expedition carried out vital research into how greenhouse gases (GHG) could affect the Arctic Ocean's marine life, including species that are essential to life on our planet.
High atmospheric carbon dioxide (CO2) levels have led to a host of environmental responses, two of which are relevant to the 2010 survey:
- the widely publicised reduction in the sea ice cover on the Arctic Ocean, the loss of which will lead to further climate change
- an increase in acidification of the oceans worldwide.
Because CO2 is more readily absorbed by cold water, changes highlighted by scientific research in the Arctic Ocean could act as a global early-warning system.
The objectives of the 2010 survey were to:
- establish an ‘Ice Base’ from which scientists could carry out first-hand research, studying ocean acidification and other potential changes to Arctic waters resulting from carbon emissions
- take further measurements of the Arctic sea ice in a different location from the 2009 expedition.
The results of the 2010 Catlin Arctic Survey are still being analysed; the conclusions will be posted here when they become available.
The extent of sea ice in the Arctic is decreasing. There is a significant probability that:
- by around 2020 only 20 per cent of the Arctic Ocean basin will have sea ice cover in the late summertimes
- by 2030-40, the white 'North Pole ice' will have been transformed into an entirely blue, open ocean in the summers.
Atmospheric CO2 concentrations are higher now than they have been for at least 800,000 years. As the Arctic sea ice melts, growing expanses of cold water (which more easily absorb CO2) are being exposed to these higher levels.
Marine biologists, oceanographers and polar explorers are studying the impact of this increased absorption, which is changing the water's chemistry and making it more acidic. And scientists believe the polar oceans will be the first to experience the impact of 'ocean acidification' because they are colder.
A quarter of all CO2 emissions are absorbed by the Earth's oceans, which means the seas reduce the impacts of this GHG on climate. However, CO2 also plays an important role in determining the pH of surface salt water, which is currently slightly alkaline.
When CO2 dissolves in seawater it forms a weak acid and the oceans naturally accommodate small changes. But the rate at which atmospheric CO2 is currently increasing is so fast that the natural buffering systems of the oceans can’t cope.
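The underlying chemistry, summarised here as a standard textbook description rather than a finding of the survey itself, is that dissolved CO2 combines with water to form carbonic acid, which then releases hydrogen ions:
CO2 + H2O ⇌ H2CO3 ⇌ H+ + HCO3-
The extra hydrogen ions lower the pH and also reduce the availability of the carbonate ions that shell-building organisms rely on.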
This is leading to a slight decrease in pH – 'ocean acidification' – which may cause seawater to become corrosive to the shells and armour-plating of some marine creatures within decades. Individual species, habitats and ecosystems would all be threatened.
If global CO2 emissions from human activities continue to rise on current trends, the average pH of the oceans could fall to a lower level than at any time for millions of years.
Ocean acidification video | <urn:uuid:940bd1ec-deb3-4468-a171-5e558b752be6> | 3.8125 | 634 | Knowledge Article | Science & Tech. | 32.985441 |
Photo taken by Mark J. Madigan - Walsenburg, Colorado - May 20, 2003
Lenticular clouds, technically known as altocumulus standing lenticularis, are stationary lens-shaped clouds that form at high altitudes, normally aligned at right-angles to the wind direction.
Where stable moist air flows over a mountain or a range of mountains, a series of large-scale standing waves may form on the downwind side. Lenticular clouds sometimes form at the crests of these waves. Under certain conditions, long strings of lenticular clouds can form, creating a formation known as a wave cloud.
Power pilots tend to avoid flying near lenticular clouds because of the turbulence of the rotor systems that accompany them, but sailplane pilots actively seek them out. This is because the systems of atmospheric standing waves that cause "lennies" (as they are sometimes familiarly called) also involve large vertical air movements, and the precise location of the rising air mass is fairly easy to predict from the orientation of the clouds.
"Wave lift" of this kind is often very smooth and strong, and enables gliders to soar to remarkable altitudes and great distances. The current gliding world records for both distance (over 3,000km) and altitude (14,938m) were set using such lift.
Lenticular clouds have been mistaken for UFOs (or "visual cover" for UFOs) because these clouds have a characteristic lens appearance and smooth saucer-like shape.
There is also a fascinating print medium called Lenticular Printing.
Mt. Hood and a Lenticular Cloud - NASA - April 17, 2013
Photo by Dahlia Rudolph at Mt. Shasta - October 5, 2011
View form the International Space Station
Photos by Harvey Carruth at The Dalles, Oregon - May 5, 2011
Taken by Kevin Lahey
Taken by Thedra
Taken by Peter K. - April 8, 2008 - Palm Desert, California
Taken by Stuart Anderson - August 3, 2006 - Saskatchewan, Canada
Taken by Joan Smith - Dec. 26, 2006 - in Sedona, Arizona
Alberta, Canada - June 21, 2005
Taken by Hanne Elmose - June 2004 - Sierra Nevada, Spain
November 26, 2003 - Space.com
Astronomers are always looking up. Sometimes they see interesting things that aren't as far up as we normally think they're looking. Peter Michaud, a public information officer for the Gemini Observatory in Hawaii, took this picture yesterday of an unusual cloud formation above the islands. It is called a lenticular cloud, due to its lens-shaped appearance. These clouds are formed by so-called mountain waves of air created by strong winds forced over high mountains.
In this case, the mountain is Mauna Kea, a 13,796-foot peak (4,260 meters) where one of the two Gemini telescopes sits, along with several other observatories. (A twin to the Hawaiian Gemini scope is situated in Chile.) "At the high points in the wave, moisture in the air condenses out to form a cloud," Michaud explained. "In the photo you can see that the wave established this morning displayed two peaks. Actually there were four -- two more were downstream from Mauna Loa, but the other two were not as impressive as Mauna Kea's!"
Mount Baker - Washington State 2002
Taken by Scott Hunziker - 2002 - Mount Rainier Washington State
Taken by Jim Griggs - Sunrise Rocky Mountain National Park in Colorado 2002
Throughout history, these cloud formations could have been interpreted as visitors in spaceships.
Painted in 1420
This fresco is located in the San Francesco Church in Arezzo, Italy.
UFOs in History
They may have been lenticular clouds
Hurricanes are also well-known for their furious wind speeds, but the strongest hurricanes recorded had wind speeds between 160 and 190 mph.
Tornadoes produce the strongest wind speeds known, with the most violent estimated to produce winds of 261 to 318 mph, but no wind gauge has yet recorded such speeds directly -- they are inferred from damage estimates.
In Seattle, the highest wind gust ever recorded at Sea-Tac was 64 mph during the 1993 Inauguration Day Storm. But wind speeds have reached between 90-100 mph along the Coast and northern Interior before.
The sun's new solar cycle, which is thought to have begun in December 2008, will be the weakest since 1928. That is the nearly unanimous prediction of a panel of international experts, some of whom maintain that the sun will be more active than normal.
But even a mildly active sun could still generate its fair share of extreme storms that could knock out power grids and space satellites.
Solar activity waxes and wanes every 11 years. Cycles can vary widely in intensity, and there is no foolproof way to predict how the sun will behave in any given cycle.
In 2007, an international panel of 12 experts split evenly over whether the coming cycle of activity, dubbed Cycle 24, would be stronger or weaker than average.
The group did agree the sun would probably hit the lowest point in its activity in March 2008 before ramping up to a new cycle that would reach its maximum in late 2011 or mid-2012.
But the sun did not bear out those predictions. Instead, it entered an unexpectedly long lull in activity with few new sunspots. It is thought to have reached its minimum in December 2008, and now seems to be slowly waking up. One such sign is two new active regions captured this week by the ultraviolet camera on one of NASA's twin STEREO probes (see image).
'Ready to burst out'
"There's a lot of indicators that Cycle 24 is ready to burst out," panel chair Doug Biesecker of the National Oceanic and Atmospheric Administration Space Weather Prediction Center in Boulder, Colorado, told reporters on Friday.
The panel now expects the sun's activity will peak about a year late, in May 2013, when it will boast an average of 90 sunspots per day. That is below average for solar cycles, making the coming peak the weakest since 1928, when an average of 78 sunspots was seen daily.
Sunspots are Earth-sized blotches that coincide with knotty magnetic fields. They are a common measure of solar activity – the higher the number of sunspots, the higher the probability of a major storm that could wreak havoc on Earth (see Space storm alert: 90 seconds from catastrophe).
A lower number of sunspots could mean space weather will be relatively mild in the coming years. But Biesecker cautions it may be too early to call. "As hard as it is to predict sunspot number, it's even harder to predict the actual level of solar activity that responds to those sunspots," he told reporters. If there are fewer storms, they could still be just as intense, he said.
But not everyone on the panel expects the coming cycle to be weaker than average. "The panel consensus is not my individual opinion," says panel member Mausumi Dikpati of the High Altitude Observatory in Boulder, Colorado.
Dikpati and her colleagues have developed a solar model that predicts a bumper crop of sunspots and a cycle that is 30% to 50% stronger than the previous cycle, Cycle 23.
Because it is still early in the new cycle, it is too soon to say whether the sun will bear out this prediction, Dikpati says. "It's still in a quiet period," she told New Scientist. "As soon as it takes off it could be a completely different story."
Have your say
Voting Record Keeper
Fri May 08 23:49:49 BST 2009 by "Count the votes" Al
Did the Sun vote on this?
Bad news for Al Gore though, either way; poor fellow, lost another election ! No SGW (for Sun GW).
Voting Record Keeper
Sat May 09 00:35:12 BST 2009 by Jonah Gruber
You must be one of those people that believe that sunspots contribute to global warming exclusively and that anthropogenic warming isn't happening because of this?
Why do you embrace so easily what a minority of scientists believe and disregard so easily what a majority of scientists believe?
Voting Record Keeper
Sat May 09 07:15:58 BST 2009 by billisfree
I didn't know anyone was keeping score what scientists believe in.
Counting news articles on various subjects isn't exactly scientific.
Voting Record Keeper
Sun May 10 00:02:58 BST 2009 by Simon Smart
Take a look at (long URL - click here)
All fully cited scientists who disagree with the scientific consensus. As you'll see it's not a very long list and at least some of them have been funded by oil companies! There's no list of scientists who agree because it would be way, way too long
Voting Record Keeper
Sat May 09 21:25:50 BST 2009 by TxDragin
"Why do you embrace so easily what a minority of scientists believe and disregard so easily what a majority of scientists believe?"
WOW bro... It wasn't that long ago the modern thinkers and scientist thought the world was flat. Well if they could be here now. LOL think about it.
Voting Record Keeper
Sat May 09 22:11:10 BST 2009 by billisfree
REREAD my post, TxDragon.
I asked WHO is keeping track of the percentage of scientists who believe a cause (e.g. AGW).
Since you seem to claim, the majority of the scientists support AGW, post the source of your data. Happy surfing. I'll await your word.
BTW - most scientists and sailors knew the world was round in Columbus's time. It's a schoolboy's impression that everyone thought the world was flat because schoolbook authors wrote the books with a simplistic slant.
Voting Record Keeper
Mon May 11 09:01:35 BST 2009 by Michael
Did the Sun vote on this?
I expect it was a survey of Sun readers.
Sat May 09 00:39:26 BST 2009 by daqman
I'm a scientist and have read and experienced a lot. I've seen all the information on climate change and all of the evidence points to it being caused by human activity. We are just starting to realize this and argue but maybe it is too late. If only we had more time then maybe we could change our ways. If only, maybe. Then I start to read that the sunspot cycle is disturbed and is, maybe, repeating the same cycle it had a few hundred years ago during the "mini ice age" and that the century may be cooler than the greenhouse effect would predict. Then, atheist or at least agnostic that I am I start to wonder, maybe there is a God?
Sat May 09 07:27:51 BST 2009 by billisfree
"I've seen ALL the information on climate change and ALL of the evidence points to it being caused by human activity.
Never in my life, have I ever encountered anybody who read, listened to or knew EVERYTHING on ANY particlar subject.
Try reading JOHN LOCKE's book on human reasoning. He agrued in the 1590's that people base their conclusions on a LIMITED set of facts.
Sat May 09 10:45:33 BST 2009 by very narrow
I take it that you are a classic example of this, just going off what one persons book said.
Sat May 09 11:08:58 BST 2009 by Flatdog
As there is very little known about what makes climate tick, I would think that it is quite easy to read all the evidence, particularly as most of the articles are just regurgitating what has been said before. So Daqman isn't really claiming a great deal, and his opinions are about as worthless as anyone elses.
We will probably never really know how climate works, because it is just so chaotic, and influenced by forces that we are not aware of.
Sat May 09 16:16:05 BST 2009 by Dev
I dont buy what you are selling my friend. Scientist? Please lol
Sat May 09 17:22:31 BST 2009 by Gareth
....So Daqman isn't really claiming a great deal, and his opinions are about as worthless as anyone elses...
His opinions are probably more informed than yours flatdog and speakind as another scientist myself (in solar physics) his statements are correct to the best of our knowledge.
Sat May 09 17:13:38 BST 2009 by Gareth
Certainly the weak solar cycle (if indeed it does turn out to be weak) may dampen the effects of climate change and buy us some time as you say.
It's a double edged sword of course because the worst thing we could do in that situation is just throw our hands up and say "ah so it was the Sun causing climate change afterall" and just carry on with business as usual. Once the grand minimum (if there is one) ends then we would suddenly see a massive and very rapid temperature increase as the Sun returns to normal. We cannot allow ourselves to be complacent.
Mon May 11 19:12:00 BST 2009 by Raven Morris
Earth has never had a constant temperature, and never will have one (unless we get really fancy tech to modify it for us). Climate change *is* the constant. Wether or not we are making the Earth grow hotter right now is of little consequence -- life will continue on Earth as it has for millions of years, with or without us around to enjoy it.
If people actually cared about the ability for the planet to sustain life, then there would be no focus on carbon footprints, and every focus on common sense things. Corporations placing the pursuit of money above all else -- and the consumers supporting these corporations ignorantly -- are causing the erradication of life on Earth.
Nuclear power plants in the USA create millions of litres of radioactive waste with half-lives the same age as our planet (4.5 billion years). Then corporations like Lockheed Martin take this radioactive waste and use it in weapons manufacture, as it's cheaper than other elements... and then the USA proceeds to use these weapons all over the planet Earth, spreading the radioactive waste around. We will not be rid of this toxic waste until the end of our planet.
So please, why do people focus on the possibility of Earth getting 2, 6 (or even 30) degrees hotter, when there are far more massively important issues to the survival of our planet? Our oceans are going barren and toxic. Species by the thousand are disappearing forever. Yet people only "care" about "global warming".
It's nothing but hypocritical propaganda to pass a global carbon tax. Taxing emmissions will have no effect on reducing further emmissions, it will merely be factored into the monetary cost of products and people will accept this as normal. It will provide people with a false sense of "atonement for their sins", and nothing more.
If people care about the planet, then prove it by ceasing consuming needless products and by starting to respect and protect Earth's amazing (and sadly dwindling) ecosystems.
I care about all life and ecosystems on this planet, and I do real world things to protect them. That is what is important for everyone to do, not buy into global warming propaganda while continuing to buy the latest cellphones, clothes, shoes, music players, cars, SUVs, eating at fast food restaurants, consuming, consuming, consuming.
Mon May 11 20:18:19 BST 2009 by billisfree
"Nuclear power plants in the USA create millions of litres of radioactive waste with half-lives the same age as our planet (4.5 billion years). "
Anything with a half-life of 4.5 BILLIONS years is practically NOT radioactive. It breaks down so very slowly that it is virtually harmless.
The most dangerous radioactive materials have a very short half life and put out a lot of radioactivity in a short time and become harmless in a matter of months.
Claiming that nuclear reactors put out millions of liters of radioactive waste with a half life of 4.5 billions years is not acurate, because the radioactive waste has many different isotopes with many different half lifes.
The very carbon 14 in our own bodies is radioactive with a half life of 14,700 years. We also EAT this stuff all the time.
Our sun creates MILLIONS(?) of tons of radioactive carbon 14 EVERY DAY - far more than our nuclear plant. And NONE of this carbon 14 is being safely buried.
Mon May 11 03:44:56 BST 2009 by derekcolman
If you are really a scientist, I wonder if you have read this paper, and if you can produce a paper to refute it.
If you can, it would be very interesting to us skeptics. Who knows, you might even change our minds
Tue May 12 01:54:15 BST 2009 by billisfree
I'm an engineer and have taken heat-transfer courses.
I admit it's a bit over my head. I really would have to sit down and refresh my memory some of the equations. That calls for some fresh reseach and slow reading.
I can see this is not for the layman to read.
No question, there are a few uncertain factors involved and nothing is simple. There are some very smart people working on this problem.
Just scanning over it, I don't see anything that I can readily dispute. Best I can do, is just read any peer-review reports and see if anybody disputes it.
I'm sure there are people claiming it's all "rocket science" and they don't give a a $hit.
Tue May 12 18:57:48 BST 2009 by steve
So, you're a scientist, are you? Not that I disagree that human activity is at least a leading contributor to global warming, but anyone who says they are "a scientist" is suspect. A chemist knows no more about climatology than I do and has no more credibility than any other layman.
If you were really a scientist, you would have said "I'm a climatologist" or just shut up about being a scientist.
Sorry, I don't believe you. More likely you're a high school student.
BTW, I'm not a scientest but I've read the reports too.
Sat May 16 07:33:06 BST 2009 by billisfree
most all "scientists" have a very extensive science background in different fields. Chemist have to understand physics, climatologist should understand chemistry, engineers should have a working knowledge of heat transfer, etc. Engineers, have to know a bit about chemistry, physics, math, civil engr, mech engr, elec engr, economics.
One cannot be a climatologist and NOT know physics and chemistry and vice versa.
In reality - EVERYONE is a scientist to some degree - even YOU. Saying one is a "scientist" can be a gray area.
When you hear the phase, "Scientists say..." beware! Don't assume outright that the person using that phase speaks with authority. They may or may not know what they are talking about.
Sat May 09 01:26:30 BST 2009 by Jeff Lang
Anyone who is a scientist and has extensively studied climate science would know that even a lower than average solar cycle is unlikely to impact temperatures very much (less than 0.05C).
It is likely, however, the minority scientists on the panel who are predicting an even lower solar cycle are the ones who will be proven right. This isn't just a lower than average solar activity period right now, it is much below anything we have measured in a long, long time. We are essentially at a minimum for more than two years now.
Even then, the Sun seems to be a very stable star and there won't be a very big impact from a really low solar cycle.
Sat May 09 02:37:05 BST 2009 by Rob Chansky
I need more clarification.
Because what I hear is not that the Sun's recent ever-so-slight dimness is affecting the earth's weather, it's a mechanism more like this:
Less sunspots-->less solar wind-->less solar wind to push out the high-energy cosmic rays from outside the solar system-->fewer low clouds-->colder weather coming.
Pretty convoluted, I know, but some pretty sober-sounding people (as sober as you sound; and that's been my one accurate predictor of truth on the web I cling to) have advocated that this mechanism exists.
I can see you've decided whom to believe, and good for you, but frankly I've heard too much hype on the AGW side to swallow it whole. That doesn't mean I'll dismiss it either. I just wanted you to know you were clarifying the wrong thing there.
Sat May 09 07:40:01 BST 2009 by billisfree
"Less sunspots-->less solar wind-->less solar wind to push out the high-energy cosmic rays from outside the solar system-->fewer low clouds-->colder weather coming."
Strange logic, but every step must be solidly confirmed.
How about this one:
More CO2 in air -> more heat trapped in air ->
warmer weather -> more hot girls -> shorter skirts -> more boys looking at legs instead of paying attention in class -> lower grades -> more morons -> more men writing to NS to tell us everything we need to know.
It just ain't that simple, man!
When we analyzed the two-slit experiment and the diffraction grating, we assumed that the individual slits were so narrow that they acted like point sources of waves. That is an accurate approximation if the slits are narrow compared to the wavelength. But in the case of wider slits, even a single slit causes a diffraction pattern, because every point in the slit serves as a source of a wave. Different parts of the slit have different distances to a given point on the screen, which causes interference effects.
Consider a slit of width a as shown. Consider two points of emission, 1 and 2, one from the upper half of the slit and one from the lower half, that are separated by a/2. For light that travels in the direction θ, the contribution from point 1 will cancel the contribution from point 2 if the path difference Δx is a half-integral number of wavelengths. Hence there will be a minimum in the intensity (dark spot) for those angles. Additional dark regions can be found by dividing the slit into 4, 6, 8, ... regions. The general formula is
a sin θ = m λ
This formula looks just like the formula for the two-slit problem,
but the interpretation is different in two ways:
(1) it describes minima (dark spots) rather than maxima; and
(2) m = 1,2,3,4,... and -1,-2,-3,-4,... but not m=0.
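As an illustrative worked example (the numbers are chosen for convenience and are not from the text): for a slit of width a = 0.1 mm illuminated with light of wavelength λ = 500 nm, the first minimum (m = 1) occurs where sin θ = λ/a = 0.005, i.e. θ ≈ 0.29°. On a screen 1 m away, the first minima sit about 5 mm on either side of the centre, so the central bright band is roughly 1 cm wide, far wider than the slit itself.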
The single-slit diffraction pattern has a central maximum that covers the region between the m=1 dark spots. The first secondary maximum appears somewhere between the m=1 and m=2 minima (near but not exactly half way between them). The secondary maximum has a weaker intensity than the central maximum. The subsequent maxima are still weaker.
Cyberdyne Systems Corporation is working on a powerful new processor, but due to a management snafu, your code has been allowed only 512 Kilobytes (524288 Bytes) to implement your application's heap! For those unfamiliar with the heap, it is an area of memory where a process can allocate variably sized blocks of memory at run-time.
The problem here is that you can't use any pre-built code or libraries to serve your memory allocation needs in this tiny space, so you are now going to have to re-implement your own malloc and free functions!
Your goal is to implement two functions, regardless of language, named "malloc" and "free". Malloc takes a number of bytes (up to a maximum of 128 Kb at a time) and returns either a new address (array) that your process can use, or an invalid pointer (empty array) if there is not enough free space. This array must be contiguous (i.e. a single continuous block of up to 128 Kb). Free simply takes the given array and allows it to be reused by future malloc calls.
Your code must work only within 512Kb, meaning that both the allocated memory AND the related data structures must reside in this 512Kb space. Your code itself is not part of this measurement. As an example, if you use a linked list that requires one byte of over-head for each allocated chunk, you must be able to fit both this linked-list structure and the allocated spaces in the same 512Kb.
There are many methods to implement a heap structure; investigate either the Linux Slab allocator, or try to stick to the obvious solutions of linked lists. Don't forget to coalesce freed spaces over time! An example of this situations is when you have three blocks, left, middle, and right, where the left and right are unallocated, but the middle is allocated. Upon free-ing the middle block, your code should understand that there aren't three free blocks left, but instead one large unified block!
Formal Inputs & Outputs:
void* malloc(size_t ByteCount); // Returns a pointer to available memory that is the "ByteCount" size in bytes
void free(void* Ptr); // Frees a given block of memory on this heap
No formal output is required, but a helpful tool for you to develop is printing the memory map of the heap, useful for debugging.
Sample Inputs & Outputs:
void* PtrA = Malloc(131072); // Allocating 128Kb; success
void* PtrB = Malloc(131072); // Allocating 128Kb; success
void* PtrC = Malloc(131072); // Allocating 128Kb; success
void* PtrD = Malloc(131072); // Allocating 128Kb; fails, unlikely to return 128Kb since any implementation will require memory over-head, thus you will have less than 128Kb left on the heap before calling this function
free(PtrC); // Free 128Kb; success
void* PtrD = Malloc(131072); // Allocating 128Kb; success
It is likely that you will have to implement this simulation / code in C or C++, simply because many high-level languages such as Java or Ruby will hide the necessary low-level memory controls required. You can still use these high-level languages, but you must be very strict about following the memory limitation rule. | <urn:uuid:9fd62f98-d28a-420d-8f6b-04dfbcb4fcd2> | 3.015625 | 741 | Documentation | Software Dev. | 44.594038 |
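One way to approach this, sketched below under stated assumptions, is a first-fit free list whose block headers live inside the 512 Kb buffer itself. The names my_malloc, my_free and header_t and the exact header layout are invented for illustration (the challenge's malloc/free would take their place), and alignment handling is omitted for brevity.
#include <stddef.h>
#include <stdint.h>

#define HEAP_SIZE 524288            /* the full 512 Kb budget, headers included */

/* Per-block header kept inside the heap itself. */
typedef struct header {
    size_t size;                    /* payload size in bytes */
    int    used;                    /* 1 if the block is allocated */
    struct header *next;            /* next block in address order */
} header_t;

static uint8_t  heap[HEAP_SIZE];
static header_t *first = NULL;

void *my_malloc(size_t bytes)
{
    header_t *h;
    if (bytes == 0 || bytes > 131072)           /* 128 Kb per-request cap */
        return NULL;
    if (first == NULL) {                        /* lazy init: one big free block */
        first = (header_t *)heap;
        first->size = HEAP_SIZE - sizeof(header_t);
        first->used = 0;
        first->next = NULL;
    }
    for (h = first; h != NULL; h = h->next) {   /* first-fit search */
        if (!h->used && h->size >= bytes) {
            if (h->size >= bytes + sizeof(header_t) + 1) {   /* split off the remainder */
                header_t *rest = (header_t *)((uint8_t *)(h + 1) + bytes);
                rest->size = h->size - bytes - sizeof(header_t);
                rest->used = 0;
                rest->next = h->next;
                h->next = rest;
                h->size = bytes;
            }
            h->used = 1;
            return (void *)(h + 1);             /* payload starts right after the header */
        }
    }
    return NULL;                                /* no contiguous block is big enough */
}

void my_free(void *ptr)
{
    header_t *h;
    if (ptr == NULL)
        return;
    ((header_t *)ptr - 1)->used = 0;            /* mark the block free */
    for (h = first; h != NULL && h->next != NULL; ) {   /* coalesce adjacent free blocks */
        if (!h->used && !h->next->used) {
            h->size += sizeof(header_t) + h->next->size;
            h->next = h->next->next;
        } else {
            h = h->next;
        }
    }
}
Printing the memory map for debugging is then just a walk over the header chain, reporting each block's address, size and used flag.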
Earthscope (Image 5)
June 11, 2010
A completed transportable array station on Lummi Island, Wash. The station will record earthquakes occurring locally, nationally and worldwide. Scientists will use the data to produce images of the Earth's interior and to gain new insights into the earthquake process.
More about this image: In a modern-day journey to the center of the Earth, geologists are exploring the structure and evolution of the North American continent at scales from hundreds of kilometers to less than a millimeter--from the structure of a continent, to individual faults, earthquakes and volcanoes. The project is called EarthScope. With approximately $200 million in funding from the National Science Foundation (NSF), EarthScope began development in 2004, and will continue over a five-year period. It is expected to operate for an additional 15 years. EarthScope is using multiple technologies to explore the structure and tectonics of North America. For example, a four-kilometer-deep observatory was drilled directly into the San Andreas fault to measure the physical conditions under which earthquakes occur there. One of 875 permanent Global Positioning System (GPS) stations has been installed, which can measure relative distance changes of less than 0.5 millimeters. The Lummi Island station is one of an eventual network of 400 seismographic stations that will spread across the United States, making observations at more than 2,000 geographic locations to map the structure and composition of North America. EarthScope also provides unique educational opportunities as a national experiment, with its sensors located at more than 3,000 sites across the United States for measuring and observing plate tectonics in real time. For more information, visit the EarthScope Web site.
In this segment, Ira talks with author Alan Weisman about what the world might be like if humans were suddenly to disappear from the planet. Would a human-free Earth be more environmentally friendly? Would a sudden removal of humans disrupt the planet's ecosystems still more? In his book "The World Without Us" (St. Martin's Press, 2007), Weisman says that in as little as two days without human intervention, the New York City subway system would be flooded -- and in as little as a year after a mass human disappearance, every nuclear power plant on Earth would have run out of coolant and failed or melted down. How long would it take the planet to heal itself after humans left? And what would happen to our cities, cultural artifacts, and other creations?
This is a holiday rebroadcast of a previously recorded Science Friday, so please don't try to call in.
Produced by Annette Heist, Senior Producer | <urn:uuid:15843660-ff85-4506-8d15-a2e138135f93> | 2.796875 | 191 | Truncated | Science & Tech. | 56.303384 |
Bizarre Weather Around the Solar System
Hurricanes, tornadoes, and sulfuric-acid rain.
Bizarre weather is not restricted to Earth. Hurricane Sandy was a speck of dust compared to some of the cataclysms currently taking place around the solar system. Jupiter, for example, is going through a tumultuous time right now. The gas giant has suffered more meteor impacts in the past four years than has ever been observed, and large cloud formations are spontaneously changing color or disappearing as quickly as they form.
But Jupiter is not the only planet in our solar system that experiences bizarre weather. Icy methane rainstorms, planet-wide sand storms, and lead-melting temperatures afflict other planets and their moons. Check out the weather forecast around the solar system, then go enjoy the weather outside—whatever it may be, it’s bound to be better than any of the following.
A 300-Year-Old Hurricane Three Times the Size of Earth
This famous megastorm, dubbed the Great Red Spot, is at least 400 years old and dates back to the time when Galileo first aimed his telescope at Jupiter and its moons in the early 1600s—so for all we know, the storm could be much older than that. Scientists believe the storm might owe its red color to sulfur in the atmosphere, but they remain uncertain about what precisely gives it its crimson hue.
In the past couple of years, a new sibling storm has erupted. The Little Red Spot, or Red Spot Jr., formed from the merger of three smaller white-colored storms in Jupiter’s southern hemisphere.
NASA/ESA/A. Simon-Miller (Goddard Space Flight Center)/I. de Pater/M. Wong (UC Berkeley).
The Little Red Spot, at center in the picture above, has kept growing since it was discovered in 2006 and is now about the size of Earth—and with wind speeds of 400 mph, it is now spinning as fast as its larger predecessor.
Dry Ice Snow
HiRISE/MRO/LPL (U. Arizona)/NASA.
We’ve known for a while there’s water ice on Mars, both on the northern polar ice cap and away from it, but in September, NASA's Mars Reconnaissance Orbiter detected carbon-dioxide snow clouds and snowfall. It’s the first evidence of this kind of snow anywhere in our solar system. This photograph from July 2011 (toward the end of the Martian summer) shows what happens when warm weather causes a section of the vast carbon-dioxide ice cap to sublimate directly into gas, leaving behind oddly-shaped, seemingly gold-lined pits around the Red Planet's south pole.
NASA/Mattias Malmer © 2005.
Venus is like Earth on (sulfuric) acid. Its atmosphere is made of dense carbon-dioxide clouds and this extremely corrosive substance, which reacts violently when water is added. The acid precipitates from clouds, but due to the extreme temperatures, it evaporates before reaching the ground, making for some very short-lived acid rain.
Greenhouse Effect From Hell
NASA/Caltech/JPL/Mattias Malmer © 2005.
Similar to Earth only in size and shape, Venus was taken over by a runaway greenhouse effect millions of years ago and turned into a hellish nightmare hot enough to melt lead. The planet has scorching temperatures of 860 degrees Fahrenheit or more year-round and a crushing atmosphere with more than 90 times the pressure of Earth's. It’s no wonder probes that landed on the second planet from the Sun have survived only a few hours before being destroyed.
Supersonic Methane Winds
Clouds of frozen methane whirl across Neptune, our solar system's windiest world, at more than 1,200 mph—similar to the top speed of a U.S. Navy fighter jet. Meanwhile, Earth's most powerful winds hit a puny 250 mph. Some cloud formations, such as a swift-moving one called “scooter,” circle the planet every 16 hours. Neptune’s top wind layer blows in the opposite direction to the planet’s rotation, which could mean there’s a slushy interior of thick layers of warmer water clouds beneath the methane.
Featured above is the Great Dark Spot, which was believed to be similar to Jupiter’s Great Red Spot—a fast cyclonic storm like a hurricane or typhoon. But the Hubble Space Telescope disproved that when it showed the spot disappearing and reappearing somewhere else in the planet. Scientists then speculated that the megastorm might be a hole in the methane clouds, like our very own, now-shrinking hole in the ozone layer.
Erratic, Gigantic Dust Storms
Because of a dry, rocky, desert-like surface, dust storms are very common on Mars. They can engulf the entire planet, raise the atmospheric temperature by up to 30 degrees Celsius, and last for weeks. The storm pictured above, though huge, lasted less than 24 hours. It spread along the north seasonal polar cap edge in late northern winter in a region called Utopia Planitia.
Tornadoes and Dust Devils
NASA/JPL-Caltech/University of Arizona.
A dust devil about half a mile high swirls over a sandy Martian surface on a late spring afternoon. Winds on Mars are powered by solar-heat convection currents, as they are on other planets, including Earth. During spring, when Mars is the farthest from the sun, the planet gets less sunlight, but even then dust devils relentlessly scour the surface and move around freshly deposited dust. This dust devil, 30 yards wide, was whirling around the Amazonis Planitia region of northern Mars.
NASA/JPL/University of Arizona.
Saturn’s largest moon, Titan, looks a lot like Earth in its cloud cover and terrain. Except this moon’s clouds are made of methane. Titan has a methane cycle that is similar to the Earth’s water cycle. Since methane has a much lower melting point than water (a frosty minus 295.6 F), it fills lakes on the surface of this frigid moon, saturates clouds in the atmosphere, and falls again as rain. This thick atmosphere, in which organic molecules float around freely, could potentially be ripe for life—or brimming with it already.
Nitrogen Ice Clouds
Triton, Neptune's largest moon, is the coldest place in our solar system. It has an average temperature of minus 315 F. This image, taken by Voyager 2 in August 1989, shows the large, pinkish south polar cap, which may consist of a slowly evaporating layer of nitrogen ice. The nitrogen then forms clouds a few kilometers above the surface.
Triton has a weird, backward orbit and has been inching closer to Neptune each year. When the two finally collide, in about 10 million to 100 million years, the moon will be shredded into rings perhaps as beautiful as those of Saturn.
This storm, eight times the surface area of Earth, has been raging since December 2010 on Saturn. NASA's Cassini spacecraft took this photo during a turbulent spring in northern Saturn. At its most intense, the storm generated more than 10 lightning flashes per second.
"Cassini shows us that Saturn is bipolar," said Andrew Ingersoll, a Cassini imaging team member at the California Institute of Technology in Pasadena, Calif. "Saturn is not like Earth and Jupiter, where storms are fairly frequent. Weather on Saturn appears to hum along placidly for years and then erupt violently."
Climate change is a reality on Earth, and it is severe and undeniable around our solar system. In fact, Venus’s greenhouse effect and, more recently, the vast amount of evidence for running water in Mars’s past are helping scientists understand climate change on our own planet. | <urn:uuid:e18e5ed7-9b50-4d5e-8ed2-46e748daccc1> | 3.34375 | 1,638 | Content Listing | Science & Tech. | 53.08112 |
There are currently more than 7,500 offshore oil platforms actively probing the earth's crust for black gold. Their relatively minimal appearance at the surface belies the sheer magnitude of human construction beneath the waves. Oil platforms are among the world's tallest man-made structures. Compliant tower platforms reach up to 900 meters in depth (in contrast, the tallest building is 828 meters). These rigs are not permanent structures. As the wells run dry and sea water corrodes steel jackets, the wells are capped and rigs decommissioned. At least 6,500 offshore platforms are slated for decommissioning by 2025, which begs the question: what do we do with inactive oil platforms?
The now defunct Minerals Management Service devised a seemingly ideal solution; one that would save the oil companies huge amounts of money, pass responsibility of the decaying rigs onto state and federal governments, and create huge amounts of habitat for deep-sea organisms. Instead of removing oil platforms, the Rigs-to-reefs program would knock them over, or leave them standing where they are, creating artificial reefs. Everybody wins, especially oil companies.
On paper, the principles behind using oil platforms as artificial reefs appears sound. We know that fish aggregate around artificial reefs, that hard substrate will be rapidly colonized by invertebrate communities, and that the communities that accumulate around active rigs can be rich in biodiversity. With so many rigs going offline in the next 15 years, it’s hard to argue against their conversion into reef habitat. In the deep Gulf of Mexico, loss of hard substrate has disrupted community dynamics, and the addition of new structures might provide valuable stepping stones for dispersal.
Other artificial reef project have been successful. The Alabama coastline has nearly 20,000 artificial reefs. These reefs, mostly made of concrete, but also old ships, discarded construction equipment, and industrial detritus aggregate fish and have been cited as a major component in snapper and grouper recovery. Artificial reefs are not all good news. Although they provide a boost to fish habitat and subsequent income from fisheries and tourism (including fishing and diving), they also alter community composition and fisheries induced selection.
All of this is compounded by the fact that oil platforms are not pristine environments. While the most obvious and imminent ecologic disruption comes from leaking oil, as we saw during the Deep Water Horizon Disaster that began last April, long term ecosystem damage in the area immediately surrounding the rig is primarily due to the leaching of heavy metals, such as those found in drilling mud, and anti-fouling chemicals. This means that for an artificial reef to be effective, the structure must first be removed from the water and decontaminated, a prospect that will remove the economic benefit to the oil company.
In Louisiana and Texas, inshore rigs-to-reefs programs transfer responsibility for management to the state, absolving oil companies of the heavy cost of clean-up and leaving state tax-payers with the bill. Only 100 rigs have been converted under this program, which is notably unpopular among most stakeholders, oil companies excepted.
There are further questions regarding whether these reefs actually boost fish populations, or act as fish aggregating devices, leaving populations open to continuous fisheries pressure.
The idea to use old oil platforms as artificial reefs is not a bad one, and I can certainly see scenarios in which it could be done effectively. In the current climate of offshore exploration, where accountability is low and shirking responsibility is the name of the game, providing an additional avenue for oil companies to cut corners seems ill advised. Once sunk, there is no practical proposal for long term monitoring and no plan for clean-up if it turns out that these reefs are harmful to the native ecosystem. The deep Gulf of Mexico has always be hard surface limited, and the ecosystem benefits of a rig-reef are low compared to the potential for damage.
If you interested in the type of science/industry cooperative efforts that could provide the rigorous, robust data sets that would be needed to properly evaluate a rig-to-reef program, check out the SERPENT Project.
Macreadie, P., Fowler, A., & Booth, D. (2011). Rigs-to-reefs: will the deep sea benefit from artificial habitat? Frontiers in Ecology and the Environment DOI: 10.1890/100112
Mason B (2003). Doubts swirl around plan to use rigs as reefs. Nature, 425 (6961) PMID: 14586435 | <urn:uuid:dc672d03-2371-49c2-9d9e-3f3c0b9799e3> | 3.265625 | 924 | Nonfiction Writing | Science & Tech. | 38.735617 |
Meet Kimberly Casey: Studying How Debris Influences Glaciers
Kimberly Casey is a glaciologist who spends a fair amount of time in the office analyzing satellite data. But when she talks about her fieldwork on remote glaciers, one suspects she could do pretty well in a triathlon, too. Casey has carried 70-pound backpacks up mountain crossings in the Himalayas and waded ice-cold streams in the European Alps to collect samples and take measurements for her research on glacier debris pollution.
During her Ph.D. research, Casey studied six glacier sites around the world: from volcanically influenced glaciers in Iceland and New Zealand to dust-influenced glaciers in Nepal and Switzerland to bare-ice and soot-influenced glaciers in Norway.
Q&A With Kimberly Casey
Q. Why do you study particulate pollution on glaciers?
A. Dust, volcanic ash, and soot particles are deposited on glaciers worldwide. The color of particulates on a glacier surface determines the amount of solar energy absorbed, which affects how much a glacier melts. Volcanic ash can be grey or black, while dust tends to be brown or red-brown. The thickness of the dust and debris on a glacier affects its melt rate, too. Because glaciers are a key water resource in many parts of the world, it is important to understand how melt rates may be changing over time. Glaciers are also of key importance to understanding global climate: the amount of ice cover on Earth affects how solar radiation is absorbed and reflected from Earth's surface.
Q. What kinds of debris are most frequently found on glaciers?
A. Dust is very common, as well as soot. Volcanic ash, or tephra, is dependent on the glacier's geographic location relative to the volcano and the eruption frequency. Dust comes from Earth's large deserts, like the Sahara. It also comes from local geology. Soot can come from forest fires, from combustion of oil (for example, from our cars) and coal mines.
Q. How far can particulates travel?
A. The distance particulates travel depends on their size, how long they can stay in the atmosphere – gravity comes into play here. For example, soot is a relatively small particulate; it can travel quite far. A fire in Canada can cause soot to travel to Greenland's ice sheet. Dust can be larger, but it still travels quite far. Saharan dust is often found on glaciers in the European Alps. The Antarctic ice sheet gets dust from Australian deserts.
Q. Is deposition of particulates on glaciers evolving?
A. The issue of how climate change is affecting particulate pollution is currently being studied. Scientists suspect that changes to the amount and frequency of forest fires might be affecting how much soot is traveling to glaciers. Similarly, with climate change, dryness is becoming more prevalent and as a result, there's more dust. One study documented increased dust transported to glaciers in the Swiss Alps, which in turn was increasing glacier melt rates.
Q. You visited several glaciers to measure debris composition. What kind of tools did you use for your fieldwork?
A. I used a field spectrometer, a "fancy camera" with several hundred spectral bands, to measure glacier surface properties. I also took physical samples of snow, ice and debris. The field spectrometer has spectral bands from the visible to the short-wave infrared, similar to satellite instruments such as MODIS (an instrument that flies aboard NASA’s Aqua and Terra satellites), Landsat (a series of Earth-observing satellite missions jointly managed by NASA and USGS) and Hyperion (a sensor aboard the joint NASA-USGS EO-1 satellite). I used the field-collected spectral data and chemical analysis of the samples to get a precise measurement of glacier debris composition, and then I was able to compare this with what I was measuring from satellite data.
Q. How does the field spectrometer work to measure debris composition?
A. The field spectrometer measures different surface values in its 200-plus spectral bands depending on the chemical composition of glacier dust and debris. I used the characteristic reflectance signature of surface materials to decide what's actually on the ground. In the lab, I analyzed my samples using geochemical techniques. These methods gave me the exact chemical composition of the snow, ice and debris. So I knew without a doubt what was on the surface of these glaciers. I used this as ground truth to compare with the field spectra --the field-collected camera data-- and the satellite spectra.
Q. What did you find out?
A. I found out I was able to use data from ASTER (an instrument on NASA’s Terra satellite) and Hyperion to map which types of particulates are on the glaciers. This hadn't been done before, looking at the specific geochemistry of the glaciers. From this project I was able to establish some methods for using satellite data to map dust and debris types on any glacier around the globe. We now have a satellite record of over a decade and we can look back at how dust and debris on glaciers has changed over time and how this is affecting the melt of glaciers. Going to the field to collect samples or do measurements is expensive, and it would be hard to get to the 200,000-plus glaciers on Earth. So it's important to use Earth-observing satellite data to quickly and efficiently map glaciers.
Q. You were able to visit very cool places during your research, but getting to some of these spots must have required a lot of physical work. Which glacier was the hardest to reach?
A. The Himalayan glaciers were the most rigorous glaciers to get to. I took a 30-minute flight from Kathmandu, Nepal, towards Mt. Everest, and landed in Lukla (9,100 ft.). We had to plan out our ascent to the glacier carefully because this was a high-elevation site; if one walks too fast and gains too much altitude in one day, one can become very sick. So we had to pace ourselves and took seven days to travel about 20 miles and 7,000 ft. up to Ngozumpa glacier (16,100 ft). It was seven days of hiking with my field spectrometer, a very heavy backpack of about 45 lbs., my sampling equipment, a laptop to analyze data and a few personal items. In total, it was 70 lbs. or so. I was part of a small science team, and we had a few Sherpas to help us carry our science gear. At the study site, we stayed in small mountain huts and each day we'd climb up and over the moraine to get to the glacier's debris-covered area. I'd work with my field spectrometer, taking measurements from 10 a.m. to 2 p.m. on sunny days -- that's when the field spectrometer is most suitable to use. At each site, I'd also take samples of snow, ice and debris, record positions, and take surface temperatures.
Q. How long did you spend there?
A. I spent two months in the field in Nepal, first at Ngozumpa glacier and then at Khumbu glacier. To get to Khumbu glacier, two other scientists, a Sherpa and I walked over the Cho La mountain pass with our gear. It was very physically demanding work and unfortunately some of the scientists got sick. At Khumbu glacier, I had three study areas, the highest just beyond Everest Base Camp, at the Khumbu glacier icefall (18,015 ft.).
Q. Did you get sick?
A. I didn't, but the field assistant of the other scientist I was with did, so they had to descend. I was on my own at Khumbu for a couple weeks. I did have a Sherpa with me: together we continued the field spectrometry and sample collection.
Q. What did the Sherpa think of what you were doing?
A. He had never guided a scientist before. He did not know what he was in for: he told me that this had been his longest time out, because typically a tourist or a hiker will hire him for a one-week trip. So this was the longest time he had been with a person out in the field. Working on the actively moving glacier was also surprising to him. We became a great team managing the equipment transport to the different sites: the field spectrometer, the sample bags… It was really great.
Q. Of all the glaciers you visited, which was your favorite one?
A. I sure liked the Himalayan glaciers. I also liked the glaciers near the Matterhorn in Switzerland. These Swiss glaciers were both clean ice and heavily debris-covered, so you had very different glaciers nearly side by side in such a beautiful setting. The glaciers on New Zealand's North Island were also quite impressive with all the volcanic emissions -- sulfur gas from a volcanic lake creates a yellow and greenish patina on the glaciers.
Q. It sounds like there aren't two identical glaciers on Earth.
A. Just like with snowflakes, it would be very difficult to find two glaciers that look alike.
Q. Now that you're back at Goddard, how do you want to expand your research?
A. I'm going to use remote sensing data to quantify atmospheric particulates over glaciers. I'll look at seasonal variations and check different study sites. Because we have over 10 years of satellite data in some cases, I can map how particulates are changing over time over glaciers. At Goddard there's a lot of expertise in this type of analysis as well as atmospheric transport. I look forward to comparing results with atmospheric scientists working here.
Q. Why do you want to measure what's in the air around the glaciers instead of checking what's directly on top of them?
A. If I can look at what's in the air over glaciers, this also helps to nail down, for example, how soot from forest fires or coal combustion or desert dust is traveling to these glaciers. I can map the atmospheric transport, or the pathways that these different particulates are using to get to glaciers, and pinpoint their origin.
For information about NASA's Cryospheric Sciences Laboratory, visit: | <urn:uuid:b648c969-49ec-4b68-bb1a-b2d2d4ab78b5> | 3.859375 | 2,172 | Audio Transcript | Science & Tech. | 60.039724 |
This page is supposed to hold ideas and goals for 3D rendering in GeoTools/GeoWidgets.
This is a call for opinions and ideas. So please answer.
Anything geometry-related I could find.
Any more known geometric-related libraries?
It would be favourable to have 1D-3D objects stored in an intuitive, geography-aware geometry API. The former means, for example, that objects can be constructed the way a geographer would expect: a sphere would be defined by a center point [x,y,z,crs] and a radius [double,Unit?]. The latter means that 3D objects are stored with 3D coordinates and a 3D CRS that defines their exact location on earth.
The same API should also contain 1D and 2D objects: Coordinates (= Points), lines, arcs, flat polygons and the like. 2D and 3D objects should work together seamlessly: if no elevation data is available (as in shapefiles), "flat" 2D objects (elevation=0) could be created. The same objects could also be derived from a 2D projection of 3D objects onto the earth's surface. And solids such as boxes could deliver their faces in the form of 2D objects (this time usually with elevation != 0).
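To make this a little more concrete, here is a very rough sketch of what such an API could look like. All names (Coordinate3D, GeoSphere) are invented for this page and are not part of GeoTools or any other existing library; only the GeoAPI CoordinateReferenceSystem interface is real. Please improve or replace.

// Minimal sketch of a geography-aware geometry API as discussed above.
// Class and method names are hypothetical; only the GeoAPI CRS interface is real.
import org.opengis.referencing.crs.CoordinateReferenceSystem;

/** A 3D position tied to a CRS, i.e. the [x, y, z, crs] tuple mentioned above. */
final class Coordinate3D {
    final double x, y, z;
    final CoordinateReferenceSystem crs;

    Coordinate3D(double x, double y, double z, CoordinateReferenceSystem crs) {
        this.x = x; this.y = y; this.z = z; this.crs = crs;
    }
}

/** A sphere defined the way a geographer would expect: center point plus radius. */
final class GeoSphere {
    final Coordinate3D center;
    final double radius;   // a Unit could be carried along here as well

    GeoSphere(Coordinate3D center, double radius) {
        this.center = center;
        this.radius = radius;
    }

    /** Axis-aligned bounding box of the sphere in the CRS of its center. */
    double[] boundingBox() {
        return new double[] {
            center.x - radius, center.y - radius, center.z - radius,
            center.x + radius, center.y + radius, center.z + radius
        };
    }
}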
In the case of "flat" objects (with elevation=0) JTS operations could be used, possibly with the need to convert back and forth since the 2D objects would probably not be subclasses of JTS objects, which are "flat" 2D only.
Anyway, at a later stage, 3D geographic functions would need to be added, such as 3D buffer, distance, intersection, splitting of 3D objects and so on.
Is there any API that fulfils all of the above ideas or comes close to it? Something like JTS, but in 3D, maybe?
The geometries described above should be rendered either on a 2D or a 3D canvas. For this, Java2D and Java3D, respectively, are the implementations of choice in the AWT-compatible Java world, and both are known to work.
Are there working alternatives to Java3D for 3D rendering?
One of the goals is that the same geometries can be used in either context. In either case -- 2D or 3D rendering -- the geometries have to be converted into objects understood by Java2D or Java3D. For example, a sphere would need conversion to a java.awt.geom.Ellipse2D (or a subclass) for Java2D, or to a com.sun.j3d.utils.geometry.Sphere (or a subclass) for Java3D. These objects would then be reused every time the map/scene has to be rerendered.
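A sketch of that conversion step, reusing the hypothetical GeoSphere type from the earlier example, could look roughly like this. Only the target classes (java.awt.geom.Ellipse2D and the Java3D Sphere utility) are real API; the converter class itself is illustrative.

// Sketch only: how the geo geometry might be turned into Java2D/Java3D objects.
import java.awt.geom.Ellipse2D;
import com.sun.j3d.utils.geometry.Sphere;

final class ShapeConverters {

    /** 2D case: project the sphere onto the map plane as a circle (elevation ignored). */
    static Ellipse2D toJava2D(GeoSphere s) {
        double d = 2 * s.radius;
        return new Ellipse2D.Double(s.center.x - s.radius, s.center.y - s.radius, d, d);
    }

    /** 3D case: build the Java3D scene-graph node for the sphere.
     *  Positioning it on the earth would be done via a TransformGroup around this node. */
    static Sphere toJava3D(GeoSphere s) {
        return new Sphere((float) s.radius);
    }
}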
Actually, Java3D could just as well be used to render a "flat" map the way every conventional MapCanvas does. However, I'd suggest having a Java2D-based alternative for people who don't need 3D functions and therefore don't want to have the Java3D libraries installed. (These, by the way, are partly licensed under BSD and partly under the Java Research License (JRL) for noncommercial use and the Java Distribution License (JDL) for commercial use.)
In its current state, pure Java3D is neither intuitive to a geographer, nor is it geography-aware. It has no notion of "the earth" or of coordinate reference systems and their meaning. However, it would be possible to create a thin wrapping API that hides the Java3D internals (such as TransformGroups and the whole scene-graph tree approach) and instead provides (geo)object-oriented functions.
The geographer would just add 3D georeferenced, styled objects into the wrapped Java3D "universe" and define the location and position of the viewer. The underlying implementation would take care of the Java3D internals (the scene graph, TransformGroups, conversions and so on).
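One way such a wrapper could look to its user is sketched below. All names here (GeoUniverse, GeoStyle) are hypothetical and only illustrate the idea of hiding the scene-graph tree behind a geo-oriented API; GeoSphere and Coordinate3D are the invented types from the earlier sketch.

// Hypothetical wrapper API -- nothing here exists yet, it is a design sketch.
interface GeoUniverse {
    /** Add a georeferenced, styled object; the wrapper builds the scene graph for it. */
    void add(GeoSphere geometry, GeoStyle style);

    /** Place the virtual camera; the wrapper translates this into Java3D view transforms. */
    void setViewer(Coordinate3D position, double heading, double tilt);
}

/** A deliberately tiny style abstraction, far simpler than full SLD. */
interface GeoStyle {
    java.awt.Color fillColor();
    float transparency();   // 0 = opaque, 1 = fully transparent
}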
A point to check is how well Java3D could cope with SLD styling, or styling in general. I expect this to be rather tricky!
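As a first stab at that question, a very simple fill style could be mapped onto a Java3D Appearance roughly as follows. Appearance, Material and TransparencyAttributes are real Java3D classes; GeoStyle is the hypothetical interface from the previous sketch, and real SLD support (strokes, labels, rule-based symbolization) would be considerably harder than this.

// Sketch: mapping a simple fill style to a Java3D Appearance.
import javax.media.j3d.Appearance;
import javax.media.j3d.Material;
import javax.media.j3d.TransparencyAttributes;
import javax.vecmath.Color3f;

final class StyleMapper {
    static Appearance toAppearance(GeoStyle style) {
        Appearance app = new Appearance();

        // Use the fill color for both diffuse and ambient lighting components.
        Color3f fill = new Color3f(style.fillColor());
        Material mat = new Material();
        mat.setDiffuseColor(fill);
        mat.setAmbientColor(fill);
        app.setMaterial(mat);

        // Optional transparency, e.g. for semi-transparent buffer zones.
        if (style.transparency() > 0f) {
            app.setTransparencyAttributes(new TransparencyAttributes(
                    TransparencyAttributes.NICEST, style.transparency()));
        }
        return app;
    }
}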
Which of the GT developers and other interested readers are familiar with Java3D? I am, but I am not an expert. Matthias
Have there already been articles about using Java3D for rendering styled geo-objects? If so, please add them here.
As is well known, Coordinate Reference Systems (CRS) include the typical GeographicCRS and ProjectedCRS, but also GeocentricCRS, TemporalCRS and EngineeringCRS. So, in theory, when a 3D object constructor requires a CoordinateReferenceSystem object, any of these can be passed in. The implementation would need to either cope with them correctly or throw an exception (for example if a TemporalCRS is passed in).
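A sketch of that "cope or throw" behaviour, using the GeoAPI CRS interfaces that GeoTools already builds on, might look like this. Which CRS types to accept is of course a design decision; the choice below is only an example.

// Sketch: validating the CRS passed to a 3D geometry constructor.
import org.opengis.referencing.crs.CoordinateReferenceSystem;
import org.opengis.referencing.crs.GeocentricCRS;
import org.opengis.referencing.crs.GeographicCRS;
import org.opengis.referencing.crs.ProjectedCRS;
import org.opengis.referencing.crs.TemporalCRS;

final class CrsChecks {
    /** Reject CRS types that cannot position a 3D geometry on the earth. */
    static void checkSupported(CoordinateReferenceSystem crs) {
        if (crs instanceof TemporalCRS) {
            throw new IllegalArgumentException("A TemporalCRS cannot position a 3D geometry");
        }
        if (!(crs instanceof GeographicCRS || crs instanceof ProjectedCRS
                || crs instanceof GeocentricCRS)) {
            throw new IllegalArgumentException("Unsupported CRS type: " + crs.getClass());
        }
        // GeocentricCRS is accepted here, but bounding boxes derived from it will
        // need conversion before they can be merged with other layers (see below).
    }
}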
Dealing with ProjectedCRS and GeographicCRS is fine: they usually have an axis pointing north-south and one pointing east-west. The third one (if it exists) points up-down. Objects having such a CRS would produce a bounding box with these three axes.
Problems arise when a GeocentricCRS is passed in. Although it would define the location of coordinates as precisely and unambiguously as a ProjectedCRS or GeographicCRS does, it would usually produce a bounding box in geocentric coordinates. This BBox would need conversion to some ProjectedCRS or GeographicCRS in order to get (for example) merged with the BBoxes of other map layers or of other features and feature types of the same layer. However, conversion from a geocentric BBox to a geographic BBox (or back) is very inaccurate.
Converting to either a ProjectedCRS or a GeographicCRS is also awkward in code, since there is no common interface that covers both.
GridCoverages are essentially flat, but with an elevation model quite interesting things can happen...
Use coverages as elevation models...
TINs are both geometrical objects (multiple 3D triangles) and coverages ...
Java3D could render them as a lattice model or as an actual surface. Nice to have.
Objects could move through space over time, or they could change their size over time. Some objects might have a start and an end date and exist only between those.
Java3D can cope with this. But the geometry API and the Java2D rendering API would need to deal with it too. Just something to think about...
...go here. Feel free to add ideas on how best to achieve the above.
Or possibly there are better general approaches than the above? | <urn:uuid:7e05e520-b199-4438-9fab-cf28ad68a2b7> | 2.90625 | 1,347 | Comment Section | Software Dev. | 57.023365 |
The following is shared from the frequently asked questions page on the Center for Invasive Species Research website…
How do invasive species move from place to place?
Invasive species reach new areas outside of their home range in one of two ways: (1) self introduction on their own, or (2) with human assistance that may be deliberate or accidental.
Self-introduction of species into new areas is not a new phenomenon. This process has been happening for millions of years at a very slow rate, and often introductions occur between close neighbors. For example, New Zealand has acquired bird, plant, and insect species that are carried by winds across the Tasman Sea from Australia.
Humans have greatly altered the speed at which species are moved around the world, and species introductions are occurring between areas that are separated by vast distances and across natural barriers (e.g., oceans and mountains) that had previously prevented the long-distance movement of species. The speed at which long-distance spread can happen has greatly accelerated due to air travel, which allows people to reach most places on earth within 72 hours or less. Short travel times (hours on a plane as opposed to weeks or months on a boat at sea) have greatly increased the survival chances of invasive species traveling with humans.
Humans have deliberately moved an incredible number of plant and animal species around the globe either for food, as part of international commerce (e.g., the pet and nursery trade), or for sport (e.g., hunting and fishing). It is estimated that there are 50,000 non-native species of plants and animals living in the USA.
Occasionally, some of these species that were once beneficial while under human control (e.g., weeds that were originally garden plants) become problematic when they escape and start to colonize and breed in areas where they are not wanted. For example, plants like salt cedar from Eurasia have invaded the desert southwest of the USA because humans deliberately moved them there for the control of constantly eroding desert sands.
Often, invasive species are moved accidentally by humans. This can occur through hitch-hiking unnoticed on plants that are being moved (e.g., tiny insects or diseases on leaves or in potting soil), in ballast water that is used to stabilize large transport ships, or inside other animals (e.g., diseases that kill birds have been spread by the commercial trade in exotic pet birds).
Air Traffic: The above video shows world air flight traffic over a 24 hour period. Notice how the number of planes flying changes between daytime and nighttime. Even at night there are still lots of planes flying around the world. Every flight can potentially move an invasive species into a new area.
Dec 15, 2000, 2:56 PM
Post #1 of 8
Quiz 1 -- Equal Parts
String Division (2 Quizzes in 1!)
Given a string, how can you efficiently divide the string into x pieces, where each piece has y characters, except for possibly the last, which has between 1 and y characters?
For the love of Perl, please test your solutions before posting them -- if you find a place where your algorithm fails, post THAT along with your attempt, so that people might be able to help you.
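For readers who just want a starting point, here is one straightforward way to do the fixed-size split -- shown in Java rather than Perl purely to illustrate the idea without spoiling anyone's Perl solution; the method name and structure are mine. In Perl, the same idea usually collapses into a single global pattern match with a bounded quantifier.

// Sketch: split a string into pieces of y characters; the last piece may be shorter.
import java.util.ArrayList;
import java.util.List;

final class EqualParts {
    static List<String> chunk(String s, int y) {
        List<String> pieces = new ArrayList<>();
        for (int i = 0; i < s.length(); i += y) {
            // substring end is clamped so the final piece keeps whatever is left (1..y chars)
            pieces.add(s.substring(i, Math.min(i + y, s.length())));
        }
        return pieces;
    }
}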
Quiz 2 -- Approximate Parts
This is an introduction to greedy algorithms. Greedy algorithms do what is locally best, and, if they are successful, end up doing the right thing.
So you are given a string, and you are asked to split it into x pieces, but can only split on whitespace. And you are asked to make the pieces as close to the same length as possible. Example:
Splitting "would you like some pizza?" into two pieces results in "would you like" and "some pizza?", whereas splitting it into 3 pieces results in "would you", "like some", and "pizza?"
Assume the string is a list of words (runs of non-whitespace characters) joined by single spaces (that is, you'll never have two spaces in a row, and there is only space between words, not at the beginning or end of the string).
You might find it easier to approach the problem for splitting into 2 pieces, and then generalizing from there.
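And here is a rough greedy sketch for the approximate split, again in Java only to make the idea concrete: aim for (remaining characters) / (pieces remaining) characters per piece, and greedily take words until the current piece reaches that target. The names are mine, and this is not claimed to be optimal -- improving on it is part of the fun.

// Sketch: greedy split on whitespace into roughly equal-length pieces.
import java.util.ArrayList;
import java.util.List;

final class ApproximateParts {
    static List<String> split(String s, int pieces) {
        String[] words = s.split(" ");
        List<String> out = new ArrayList<>();
        int w = 0;                                   // index of next unused word
        int remainingChars = s.length();
        for (int p = pieces; p > 0 && w < words.length; p--) {
            int target = remainingChars / p;         // locally "fair" share for this piece
            StringBuilder piece = new StringBuilder(words[w++]);
            // keep taking words while short of the target, but leave enough words
            // so that every remaining piece still gets at least one
            while (w < words.length && piece.length() < target
                    && words.length - w >= p - 1) {
                piece.append(' ').append(words[w++]);
            }
            remainingChars -= piece.length() + 1;    // +1 for the space consumed
            out.add(piece.toString());
        }
        return out;
    }
}

On the pizza example this produces "would you like" / "some pizza?" for two pieces and "would you" / "like some" / "pizza?" for three, matching the statement above.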
Jeff "japhy" Pinyan -- accomplished hacker, teacher, lecturer, and author | <urn:uuid:415b15b4-9ad2-4a85-9427-d81dfc1ec8ec> | 4.09375 | 339 | Comment Section | Software Dev. | 62.181544 |
Generally, weather-related science fair projects score well with teachers and judges because they require time and effort, much like plant projects. It is possible to do idea #2 or #3 in a weekend if the weather cooperates (i.e. it rains or snows when you need it to), but typically you will need at least 1 month, and 6 to 12 months are more common. So, if you haven’t started this year’s project yet, it might be too late, but this would be a great time to start next year’s project.
IDEA #1: One strategy for a weather-related project is to build your own piece of equipment (Part I) and then use that equipment to track the weather over a long period of time (Part II). For example, you could build a barometer from common household items and then compare its measurements to a “real” barometer, or the nightly news reports for atmospheric pressure. This would also work with a home-made thermometer or anemometer.
IDEA #2: Another idea would be to build your own rain gauges and use them to record rainfall relative to a spatial gradient. Perhaps distance from a building, stream, patch of trees, or other structure of interest. Alternatively, you could place them in cardinal directions around the structure or in areas of varying land use (e.g. rural to suburban to city). Add depth to this project by testing the rain water collected with a pH test kit from a pet store. Ultimately that would give you several variables to analyze (time, spatial gradient, rainfall totals, and pH).
IDEA #3: If you have less time, but live in a snow-prone zone, you could monitor snowfall amounts relative to spatial gradients within the local landscape. Or select items that might be used to pre-treat roads (salt, sand, gravel, kitty litter, etc) and put them in measured patches on your driveway/yard before a predicted storm. If you have access to a camera, set one up (from the window) and see if you can detect differences in snowmelt as snow falls on the different items (will also make great visual aids for backboard). At specified times, go out and measure the depth or percent coverage of snow on the patches. Don’t forget to replicate the patches. | <urn:uuid:194133bf-1a3c-4f23-a7e9-cc6b0958934e> | 3.140625 | 490 | Tutorial | Science & Tech. | 57.124983 |