text large_stringlengths 148 17k | id large_stringlengths 47 47 | score float64 2.69 5.31 | tokens int64 36 7.79k | format large_stringclasses 13 values | topic large_stringclasses 2 values | fr_ease float64 20 157 |
|---|---|---|---|---|---|---|
Sigma and Pi Bonds
There are two types of covalent bonds: sigma (σ) and pi (π) bonds. Sigma bonds occur when s orbitals, p orbitals pointing along the axis to the other bonding atom, or hybridized orbitals overlap head-on; the region of maximum electron density is between the two bonded atoms and lies along the axis between them. Single bonds are always σ bonds. Sigma bonds are like taking the index finger from each hand and pointing them directly at each other, so one fingertip touches another. Just as you can rotate your fingers without losing contact, bonded atoms can rotate freely about a sigma bond.
Pi bonds occur when p orbitals (usually those left over from orbital hybridization) perpendicular to the axis of bonding overlap; in this case, the region of highest electron density is "above" or "sideways from" the axis between bonding atoms. It is important to note that hybridized or s orbitals never make π bonds. Pi bonds make up the second bond in a double bond and the second and third in a triple bond. Pi bonds are like pointing two fingers at the ceiling and moving them sideways until they touch. Unlike sigma bonds, pi bonds cannot rotate and maintain the bond; you can see this in our finger model, as turning your fingers out of alignment breaks the connection. Therefore, double or triple-bonded atoms cannot rotate relative to each other. | <urn:uuid:02a3e6d7-3f24-470c-ad8d-4cdc67fa3575> | 3.953125 | 296 | Knowledge Article | Science & Tech. | 45.315709 |
For Louisiana in particular, a key area of concern is coastal marshes. They are the breeding ground as well as home base for a wide range of marine life vital to the region’s fishing industries. Moreover, the wetlands provide a first barrier against storm surges from hurricanes.
But southern Louisiana’s wetlands already are stressed – vanishing as the Mississippi Delta sinks beneath the ocean at a rate that, by some estimates, averages 50 acres a day. In addition, the fisheries off the coast are exposed to an annual “dead zone” each spring as nutrient-rich water from the continental heartland moves down the Mississippi and into the Gulf, triggering algae blooms. When the algae die and decompose, the process uses up much of the dissolved oxygen in the water. Fish flee, but bottom dwellers – crabs and other shellfish – generally can’t move fast enough to do so.
If the blowout “turns into something that takes months to shut off ... that is our biggest concern,” says James Cowan Jr., a fisheries ecologist at Louisiana State University in Baton Rouge. With the ecosystem already distressed, “We are concerned it may be at a tipping point.”
In trying to assess the potential effect of oil on the Gulf Coast wetlands, a 1969 spill in Massachusetts’ Buzzards Bay might offer close – if still imperfect – parallels, say Dr. McDowell and Woods Hole colleague Christopher Reddy. | <urn:uuid:764191f7-a689-4052-81da-3197bfae2cf5> | 3.4375 | 298 | Truncated | Science & Tech. | 51.302701 |
Mangroves are a collection of salt-tolerant evergreen trees that live in tropical and sub-tropical coastal environments and line approximately 8% of the world's coastlines. They are unique because they occupy both land and water and are sometimes referred to as 'floating forests'. The aspect of mangroves that enables them to 'float' is their aerial roots, which develop in fine muds or sandy sediments. These roots form a dense tangled network below the water surface, providing a home and shelter for a diverse number of species. They also prop the tree up (hence the term 'prop roots') and take in oxygen at low tide.
There are over 50 species of trees and shrubs that are classified as 'true' mangroves. Each of these species has adapted to the salty water conditions it grows in. Most mangrove shorelines are made up of two or three zones, each dominated by different mangrove species. The largest mangrove coverage is in the Americas, where just four species exist: the red, black, white and button mangroves. Red mangroves grow nearest the shoreline, as they have a particularly high salt tolerance, with black mangroves growing further landward in more swamp-like conditions. Some species of black mangroves develop pencil-like tubes called pneumatophores to aid breathing in the rich, oxygen-free sediment they grow in. White and button mangroves grow further inland in drier conditions and are only in contact with seawater at high tide.
Over half the world's mangrove forests have been destroyed over the last 30–40 years to make way for commercial enterprises such as aquaculture (mainly shrimp farming), agriculture and coastal development. Intensive shrimp farming has devastating environmental effects. Not only does the practice clear large areas of coastal habitat, including mangrove forests, but it also pollutes nearby coastal waters and marine habitats such as coral reefs with waste matter from the shrimp ponds.
Increased human settlement along our coastlines also leads to agricultural expansion, which is believed to be the most destructive human impact on mangrove forests due to the scale of the problem. Unregulated urban development increases pollution and alters the distribution and use of water, and the growth of tourism in tropical regions over recent decades is only compounding the problem.
Furthermore, overfishing is another (indirect) factor affecting mangrove communities. Many commercial fisheries are overfishing fish stocks and removing juvenile as well as adult fish from our seas. A large majority of juvenile fish would go on to spawn and use the mangrove habitat as a breeding ground and fish nursery. Mangroves support a complex community of species, and fish play a vital part in this community as they consume large deposits of decomposed leaf, bark and twig litter produced by the mangrove trees. Small fish in turn attract and feed larger fish, and this forms a healthy cycle that supports the community. The removal of fish from this cycle can lead to a major imbalance and could compromise the health of the forests.
The importance of mangroves and why we should care
Mangrove forests are important habitats for a number of reasons. Firstly, they are vital ecosystems that deliver a full range of what are referred to as 'ecosystem services'. These services include provisioning services, such as food, wood and fibre; regulating services, such as flood regulation and water purification; supporting services, such as nutrient cycling; and finally cultural services, including recreational tourism and education.
Mangroves are also known as rich centres of biodiversity as they provide a home and shelter for many species including fish, birds, frogs, snakes, insects and several endangered crocodile species. Mammals also occupy these forests, ranging from small animals like swamp rats and monkeys to large carnivores like tigers that use the dense foliage as cover. Mangroves are also important nursery areas for many species of fish. Overfishing is a global problem and we are fishing at an accelerated rate without allowing fish stocks to recover; mangroves are therefore vital in providing breeding grounds for fish.
Along with protecting coastlines from erosion by acting as a natural barrier and flood defence, mangroves also filter pollutants from river run-off and prevent a harmful build-up of sediment from reaching the oceans and nearby marine habitats such as coral reefs. Mangroves and coral reefs have a symbiotic (mutually beneficial) relationship – the reef protects the coast where the mangroves grow from being eroded by the sea, and the forest traps sediment washed from the land, preventing it from reaching the reef. Both mangrove forests and coral reefs found in coastal areas provide protection and breeding grounds for fish – a key source of income and nutrition for people in these regions.
The solution: what we can do
Efforts to save mangroves from commercial development are becoming more popular as the benefits of mangroves become more widely known. In Thailand, community management volunteer programs have been effective in restoring damaged mangrove forests, and in some areas across Western and North East Africa mangrove reforestation projects are also under way. However, establishing new mangrove plantations on coastal mudflats has not always been easy due to the lack of nutrients (nitrogen, phosphorus and iron) from inland water flow in some coastal regions where reforestation projects have been carried out. Ideally, the best solution is to protect existing mangrove forests and to restore damaged forests in regions where they naturally occur. Establishing marine reserves or Marine Protected Areas (MPAs) along mangrove-lined coastlines is one of the most effective ways to prevent deforestation and urban development. You can help by signing a petition to establish more marine reserves.
It must also be noted that mangroves grow in some of the poorest regions of the world, where commercial enterprises such as shrimp farming and agriculture are in the short term more economically viable than protecting mangroves. Therefore, it's vital that conservation organizations and governments work with local communities and provide incentives (such as employment), education and training through community-led projects that highlight the benefits of mangroves, so that people want to conserve them. Unfortunately, it's often a lack of funds that prevents projects such as these from succeeding; but there are many great projects urging people to get involved by actively volunteering or by asking for a small donation to support field projects. More information on a successful (and now completed) community-led reforestation project run by the United Nations Development Programme (UNDP) can be found here.
Mangroves are wonderfully diverse habitats that carry out their processes and functioning modestly ‘behind the scenes’ without a big fuss or drama. Many people do not really know or fully understand how important these forests are and are not even aware of the many natural benefits they provide; benefits we all take for granted. We often try and replicate these benefits, but as with all things in nature, no man-made system can ever be more effective at doing its job or replace the thing that naturally evolved to do it. Mangroves ‘do their job’ extremely well providing they are left alone to get on with it (through protection and necessary conservation). We must act now to conserve these incredible habitats by voting for more marine reserves and by educating people about the multi-benefits these floating forests provide us with. For more information about ways to make a difference and for ways to get involved, please click here. | <urn:uuid:db34d4e1-59f6-4b2d-b090-f953a518ca44> | 4.03125 | 1,549 | Knowledge Article | Science & Tech. | 25.531877 |
Major Section: HISTORY
The keyword command :oops will undo the most recent :ubt (or :u, which we here consider just another :ubt). A second :oops will undo the next most recent :ubt, a third will undo the :ubt before that one, and a fourth :oops will return the logical world to its configuration before the first :oops.
Consider the logical world (see world) that represents the current extension of the logic and ACL2's rules for dealing with it. The :ubt and :u commands ``roll back'' to some previous world (see ubt). Sometimes these commands are used to inadvertently undo useful work and users wish they could ``undo the last undo.'' That is the function provided by :oops.
:Oops is best described in terms of an implementation. Imagine a ring of four worlds and a marker (*) indicating the current ACL2 world:
      * w0
      /  \
    w3    w1
      \  /
       w2
This is called the ``kill ring'' and it is maintained as follows. When you execute an event the current world is extended and the kill ring is not otherwise affected. When you execute :u, the current world marker is moved one step counterclockwise and that world in the ring is replaced by the result, say w0', of the :u:
        w0
      /  \
  *w0'    w1
      \  /
       w2
If you were to execute events at this point, w0' would be extended and no other changes would occur in the kill ring.
When you execute :oops, the marker is moved one step clockwise. Thus the kill ring becomes
      * w0
      /  \
   w0'    w1
      \  /
       w2
and the current ACL2 world is w0 once again. That is, the last :u has been undone and we are back in w0. Similarly, a second :oops will move the marker to w1, undoing the undo that produced w1. A third :oops will make w2 the current world. Note however that a fourth :oops restores us to the configuration previously displayed above in which w0' has the marker.
In general, the kill ring contains the current world and the three most recent worlds in which a :ubt or :u were done.
While ubt may appear to discard the information in the events undone, we can see that the world in which the ubt occurred is still available. No information has been lost about that world. But ubt does discard information! Ubt discards the information necessary to recover from the third most recent :ubt. :Oops, on the other hand, discards no information; it just selects the next available world on the kill ring, and doing enough :oopses will return you to your starting point.
We can put this another way. You can freely type :oops and inspect the world that you thus obtain with :pc and other history commands. You can repeat this as often as you wish without risking the permanent loss of any information. But you must be more careful with :ubt: while isolated :ubts seem ``safe'' because the most recent ones can always be undone, information is lost when you exceed the capacity of the kill ring.
We note that :u may remove compiled definitions (except in Lisps such as OpenMCL, in which functions are always compiled). When the original world is restored using :oops, restored functions will not generally be compiled, though the user can remedy this situation; see comp.
Finally, we note that our implementation of oops can use a significant amount of memory, because of the saving of old logical worlds. Most users are unlikely to experience a memory problem, but if you do, then you may want to disable oops by evaluating
(reset-kill-ring 0 state); | <urn:uuid:baf99a76-a5a2-42e9-bbdc-f947de83e08f> | 2.71875 | 771 | Documentation | Software Dev. | 54.495852 |
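The ring-and-marker bookkeeping described above is easy to model. The following sketch is plain Python rather than ACL2, and the class, method and event names are invented purely for illustration; it mimics only the four-slot kill ring behaviour (events extend the current world, :u moves the marker counterclockwise and stores the rolled-back world, :oops moves it clockwise without discarding anything), not real ACL2 worlds.

```python
class KillRing:
    """Toy model of the four-world kill ring described above (not ACL2 itself)."""

    def __init__(self):
        self.ring = [(), (), (), ()]  # four saved worlds, each a tuple of event names
        self.cur = 0                  # position of the marker (*)

    def world(self):
        return self.ring[self.cur]    # the current world

    def event(self, name):
        # Executing an event extends the current world; the ring is otherwise untouched.
        self.ring[self.cur] = self.ring[self.cur] + (name,)

    def undo(self):
        # Like :u / :ubt -- compute the rolled-back world, move the marker one step
        # "counterclockwise", and store the result there.  The world we just left
        # stays in the ring, which is why :oops can recover it.
        rolled_back = self.ring[self.cur][:-1]
        self.cur = (self.cur - 1) % 4
        self.ring[self.cur] = rolled_back

    def oops(self):
        # Like :oops -- just move the marker one step "clockwise"; nothing is discarded.
        self.cur = (self.cur + 1) % 4


kr = KillRing()
kr.event("defun-sq")
kr.event("defthm-sq-nonneg")
kr.undo()           # roll back the most recent event
print(kr.world())   # ('defun-sq',)
kr.oops()           # undo the undo
print(kr.world())   # ('defun-sq', 'defthm-sq-nonneg')
```

Four consecutive oops() calls walk all the way around the ring and return to the slot they started from, mirroring the behaviour described above for a fourth :oops.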
This is an interesting piece, based on new research just published in Nature, which shows the world is more sensitive to CO2 than it has been in roughly the past 12-15 million years. Here's the full repost from Skeptical Science:
Today's Climate More Sensitive to Carbon Dioxide Than in Past 12 Million Years (via Skeptical Science)
Posted on 10 June 2012 by John Hartz This is a reprint of a news release posted by the National Science Foundation (NSF) on June 6, 2012. Geologic record shows evolution in Earth’s climate system Core samples were collected at the sites noted in the North Pacific Ocean. Credit: Jonathan LaRiviere…
I'm the director of CleanTechnica, the most popular clean energy website in the world, and Planetsave, a leading green and science news site. I've been covering green news of various sorts since 2008, and I've been especially focused on solar energy, electric vehicles, bicycling, and wind energy for the past few years. You can also find my work on Scientific American, Reuters, Think Progress, GE's ecomagination site, several sites in the Important Media network, & many other places. To connect on some of your favorite social networks, go to zacharyshahan.com or click on some of the links below. | <urn:uuid:fd14572f-c049-44f4-887c-7a5683cc74e6> | 2.875 | 282 | Personal Blog | Science & Tech. | 42.900572 |
It turns out that the elliptical orbit of the Earth has little effect on the seasons. Instead, it is the 23.45-degree tilt of the planet's rotational axis that causes us to have winter and summer.
The diagram below demonstrates what happens.
In this diagram, you can see the axis of rotation and the equator. The Northern Hemisphere (at the top) is currently experiencing winter, and the Southern Hemisphere is experiencing summer. By looking at how sunlight is landing on the planet in the diagram, you can clearly see two things:
- The Southern Hemisphere is getting about three times as much sunlight as the Northern Hemisphere.
- The North Pole is getting zero sunlight, which is why it experiences 24 hours of darkness in January.
That huge difference in the amount of sunlight reaching the ground in the different hemispheres is what causes the seasons.
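A rough way to put numbers on the tilt effect is to compare the strength of noon sunlight at the two solstices for one location. The short sketch below is illustrative only: the 45-degree latitude is an arbitrary example, and the calculation considers just the angle of the noon Sun on flat ground, ignoring day length and the atmosphere, both of which make the real seasonal contrast in daily sunlight even larger.

```python
import math

TILT = 23.45      # Earth's axial tilt, in degrees
LATITUDE = 45.0   # example latitude (degrees north); an illustrative choice

def noon_intensity(latitude_deg, declination_deg):
    """Relative intensity of noon sunlight on flat ground (1.0 = Sun directly overhead)."""
    # At local noon the Sun's zenith angle is the difference between the
    # observer's latitude and the solar declination.
    zenith = math.radians(abs(latitude_deg - declination_deg))
    return max(math.cos(zenith), 0.0)

summer = noon_intensity(LATITUDE, +TILT)   # June solstice: Sun overhead at 23.45 N
winter = noon_intensity(LATITUDE, -TILT)   # December solstice: Sun overhead at 23.45 S

print(f"Relative noon sunlight in summer: {summer:.2f}")
print(f"Relative noon sunlight in winter: {winter:.2f}")
print(f"Summer-to-winter ratio: {summer / winter:.1f}")   # about 2.5 at 45 degrees
```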
Here are some interesting links:
- Why is southern exposure so sought after when searching for an apartment in the city?
- What is the Chandler wobble?
- How Compasses Work | <urn:uuid:411686be-2577-48bb-84a6-d235f39d5d99> | 4.15625 | 216 | Knowledge Article | Science & Tech. | 55.127821 |
Modern Spectral Climate Patterns in Rhythmically Deposited Argillites of the Gowganda Formation (Early Proterozoic), Southern Ontario, Canada
NOTE: At the time of publication, the author Gary Hughes was not yet affiliated with Cal Poly.
Rhythmically deposited argillites of the Gowganda Formation (ca. 2.0–2.5 Ga) probably formed in a glacial setting. Drop stones and layered sedimentary couplets in the rock presumably indicate formation in a lacustrine environment with repeating freeze–thaw cycles. It is plausible that temporal variations in the thickness of sedimentary layers are related to interannual climatic variability, e.g. average seasonal temperature could have influenced melting and the amount of sediment source material carried to the lake. A sequence of layer couplet thickness measurements was made from high-resolution digitized photographs taken at an outcrop in southern Ontario, Canada. The frequency spectrum of thickness measurements displays patterns that resemble some aspects of modern climate. Coherent periodic modes in the thickness spectrum appear at 9.9–10.7 layer couplets and at 14.3 layer couplets. It is unlikely that these coherent modes result from random processes. Modern instrument records of regional temperature and rainfall display similar spectral patterns, with some datasets showing significant modes near 14 yr in both parameters. Rainfall and temperature could have affected sedimentary layering in the Gowganda argillite sequence, and climate modulation of couplet thickness emerges as the most likely explanation of the observed layering pattern. If this interpretation is correct, the layer couplets represent predominantly annual accumulations of sediment (i.e. they are varves), and the thickness spectrum provides a glimpse of Early Proterozoic climatic variability. The presence of interannual climate patterns is not unanticipated, but field evidence presented here may be of some value in developing a climate theory for the Early Proterozoic.
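The spectral approach described in the abstract can be sketched with standard signal-processing tools. The example below is not the authors' code or data; it generates a synthetic couplet-thickness series containing weak cycles near the 10.3- and 14.3-couplet periods mentioned above plus random noise, and recovers them with a simple periodogram.

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)

# Synthetic stand-in for measured couplet thicknesses (mm) -- NOT the Gowganda data.
n = np.arange(512)  # couplet index
thickness = (2.0
             + 0.3 * np.sin(2 * np.pi * n / 10.3)   # ~10-couplet cycle
             + 0.2 * np.sin(2 * np.pi * n / 14.3)   # ~14-couplet cycle
             + 0.25 * rng.standard_normal(n.size))  # measurement noise

# One "sample" per couplet, so frequency is in cycles per couplet and
# period = 1 / frequency is in couplets.
freq, power = periodogram(thickness, fs=1.0, detrend="linear")

# Report the strongest spectral peaks (skipping the zero-frequency bin).
strongest = np.argsort(power[1:])[::-1][:3] + 1
for i in strongest:
    print(f"period = {1.0 / freq[i]:5.1f} couplets, power = {power[i]:.2f}")
```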
Gary B. Hughes, Robert Giegengack, and Haralambos N. Kritikos. "Modern Spectral Climate Patterns in Rhythmically Deposited Argillites of the Gowganda Formation (Early Proterozoic), Southern Ontario, Canada" Earth and Planetary Science Letters 207.1-4 (2003): 13-22.
Available at: http://works.bepress.com/gbhughes/10 | <urn:uuid:321d003c-303f-41a9-ade3-1bc57754b02a> | 2.9375 | 486 | Academic Writing | Science & Tech. | 24.77177 |
If one needs a source of energy that is not exhaustible and self
sustaining perhaps earth energy/neg energy fits the bill. The size of
the collector plates would be relitive to the current.
but we have to get the static magnetic
> field to become
> oscillatory. If you can make a magnetic field from a permanent
> magnet oscillate
> by either mechanical means or field modulation means, there would be
> your answer.
You can't get work energy without current. To get current one needs
prospective pos and neg potentials re/static charges. The larger the
difference the more the current. Permanent magnets induce current but the energy
has to come from somewhere. Tesla generators always generate
simultaneously pos and neg energy to create current. So the key is to
have inexhaustible sources of neg and pos energy.
> Now, this mechanical or field modulation (FM) means would also have
> to be free..
> or at least the magnetic field oscillations (integrate for power
> extraction) would have
> to supply the oscillations means.
As per ground radio specs, it appears that inducing a small specified
sine wave can be amplified by passing through a ground current. If this
is correct then you would have a stronger current received via the
receiver than what is used to initiate the signal. Maybe I'm wrong but
it seems that earth currents tune themselves to sine waves, rather than
> Any takers for this kinda project? The TOMI is close..where magnetics
> are used to
> rotate a ball bearing. If one could produce that...ball bearing
> rotating continually..
> I'd have to say.. you would be 75% there to answers to free energy.
In a perfect no resistance world this would work.
> v/r Ken Carrigan
> PS... it also could be that high frequency oscillations are the answer
> vice slower or
> lower oscillations. Something like a car.. for gas mileage? Too fast
> and wind velocity
> friction decreases efficiency but too slow also reduces efficiency.
> There could be this
> relationship for oscillating magnetic field for extraction of power.
> I have done some
> research on a faraday disk and it does hold true about the magnetic
> field rotation versus
> velocity.... k*V^3 is frictional losses.
I think you have got it in a nutshell: what is the right resonance at which
earth energy best converts to sine waves? | <urn:uuid:23097d80-f36f-471a-8ac2-f2d3347f76e8> | 2.796875 | 544 | Comment Section | Science & Tech. | 57.47588 |
Whale song is the sound made by whales to communicate.
The word "song" is used in particular to describe the pattern of regular and predictable sounds made by some species of whales (notably the humpback) in a way that is reminiscent of human singing.
For more information about the topic Whale song, read the full article at Wikipedia.org, or see the following related articles: | <urn:uuid:f91eab16-b534-4d9d-949b-ba6a32d52599> | 3.046875 | 103 | Knowledge Article | Science & Tech. | 32.324388 |
Do you like your strawberry jelly with or without the seeds? Are you glad to have a seed-free watermelon, or do you enjoy spitting the seeds into the garden? You might not like finding seeds in your fruit, but fruit is a plant's tool for dispersing seeds to create offspring. In this activity you will investigate how many seeds can be dispersed for each type of fruit. Based on the number of seeds they produce, how productive do you think some of your favorite fruits are?
Many plants grow fruit to enclose and protect their seeds, which need to spread out to grow new plants. Animals love to eat sweet, juicy fruit. This approach would seem like a poor way for plants to protect their seeds, so why would making fruit that is tasty be beneficial? When an animal eats fruit the fleshy part is digested. The seeds, however, pass without harm through the digestive system and are spread by the animal when it excretes (poops). In this way, they are deposited farther from the original plant (along with a little bit of fresh fertilizer) and can grow into a new plant. This is called seed dispersal, and it is just one strategy that plants use to spread seeds over a wide area and make more plants.
You might think that all fruit-bearing plants would pack as many seeds as possible into each fruit to maximize the number of new plants that will grow. But, in fact, different plants have different strategies for seed production and dispersal. Some fruits produce many, many seeds to make sure that at least some will grow, even if most fail. Other fruits put all of their resources into producing and protecting one very large seed.
• Different types of fruits: Try to include a pepper, tomato and apple as well as a squash or cucumber (yes, all of these are technically considered the "fruits" of their plants)
• Cutting board
• Paper towels
• Go to the grocery store and pick out different kinds of fruit. Don't just stick to traditional fruits, try some new ones as well. Some produce you might think are vegetables are really fruit! Try to include at least one pepper, tomato and apple, along with a squash or cucumber. Avoid seedless varieties.
• Tip: Bananas do have seeds, but they are very tiny, appearing as little black spots in the center of a banana slice. You can try to count them, but it is not recommended!
• Tip: If you dissect a pepper, be sure to wash your hands before you touch your eyes after handling the seeds. Pepper seeds can be spicy and cause a burning sensation! Use a mild pepper variety, such as a bell pepper, if you are very sensitive.
• You may need an adult to help you when cutting the fruit open.
• Begin to dissect your first fruit, removing the seeds and placing them on a paper towel. In the fruit, are the seeds arranged in a certain pattern?
• When you are done removing the seeds, count the number of seeds on the paper towel. How many seeds were in the fruit?
• Tip: If you are dissecting a cucumber or squash, instead of removing the seeds you can try cutting the fruit lengthwise, counting the rows of seeds, and then slicing the fruit the other way to determine how many seeds are in one row. Multiply these two numbers together to get a good approximation of the total number of seeds.
• One at a time, continue to dissect each fruit, place the seeds on a paper towel, then count them. Be sure to keep the seeds from different fruits separated.
• How many seeds are in each fruit? Which held the most seeds? The least? Did similar types of fruit produce similar numbers of seeds?
• How do seeds from different types of fruit look similar or different? In each fruit, were there similar patterns in which the seeds were arranged?
• Extra: Try this activity again but use multiple fruit of each type, such as multiple peppers, tomatoes, cucumbers and squash. Does the same type of fruit always hold a similar number of seeds, or does the amount vary a lot?
• Extra: Is fruit size related to seed quantity? Repeat this activity but this time use a ruler to measure each fruit before you count their seeds to see if larger fruits tend to produce more seeds than smaller ones. (You can also use a scale to weigh each fruit as an alternative way to measure fruit size.) Do larger fruits make more seeds?
• Extra: Are seedless fruit varieties really seedless? Dissect several different varieties of seedless fruits and look for seeds. Are "seedless" fruit varieties completely seedless, or simply have fewer seeds than normal? What is the decreased seed productivity of seedless varieties compared with normal varieties on a fruit-to-fruit comparison basis? | <urn:uuid:58b0aa6e-5def-4c9f-8506-c19bf7de821a> | 3.765625 | 990 | Tutorial | Science & Tech. | 60.999412 |
Researchers from UTA and UNAM appear on a local television program in the state of Coahuila to discuss their investigations of Mexican amphibians, reptiles, and parasites in August of 2007.
Herpetology of Mexico is a preliminary attempt to characterize the vast amphibian and reptile diversity of Mexico. The majority of information on this website has resulted from fieldwork conducted by an international team of researchers from the University of Texas at Arlington (UTA) and the Universidad Nacional Autonoma de México (UNAM). This research was made possible by generous grants from the National Science Foundation (DEB-0102383 and DEB-0613802 to J. A. Campbell).
Mexico is a country of particular biogeographical interest because it includes the transition zone between the two great faunal regions of the Western Hemisphere, the Nearctic and Neotropical. About 1100 described species of amphibians and reptiles are known from Mexico, and 62% of these species are known only from Mexico. Approximately one hundred new species of amphibians and reptiles have been described from Mexico since 1980, even without an organized survey of the country. It is anticipated that 200+ species of amphibians and reptiles, and a far larger number of parasites, remain to be discovered in Mexico. The UTA/UNAM investigations conducted between 2001 and 2010 have resulted in large natural history collections of amphibians, reptiles, and their parasites. Thus far these collections have allowed for a better understanding of species distributions and, most importantly, have led to the discovery of several new species.
The urgency for describing Mexican biodiversity is more acute than for other tropical countries, for several reasons. First, human population density in Mexico has now reached the point where the last vestiges of forests are disappearing in many areas, and these forests are recognized for their extremely high biodiversity and levels of endemism. Second, the diversity of amphibians and reptiles in Mexico is higher than any other country in the world, regardless of size or location. Finally, dramatic population declines have been identified in both amphibians and reptiles; some species have already been lost and others will follow soon.
All content © The University of Texas at Arlington. All rights reserved. | <urn:uuid:49fd072d-8b2a-478c-bebd-1c4682e519dc> | 3.1875 | 480 | Knowledge Article | Science & Tech. | 21.039852 |
I know someone who bought earphones that shine light in your ears. According to what he was told, there are neurons that sense light and then make you feel wide awake when activated, which seemed like snake oil to me. Apparently the pineal gland may be able to sense light, and it does secrete melatonin - a sleep-regulating hormone. I'm still sceptical though, as it's stuck in the middle of your brain. Would shining lights in your ears be able to have any effect on how awake you feel?
There is no known mechanism for light detection through the ears in humans, as far as I know. It is certainly true that the pineal gland is part of the system that regulates the circadian rhythm (briefly, the daily sleep-wake cycle). However, while the pineal gland in birds and other non-mammalian vertebrates is directly sensitive to light, the mammalian pineal gland is not (see, for review, Doyle and Menaker, 2007 and Csernus, 2006).
In all animals, the circadian rhythm is regulated by a photoperiod cue and therefore requires light detection. In mammals, the light sensors are found exclusively in the retina, the sensory portion of the eye. There are two classes of light detecting cells in the retina. First, rod and cone photoreceptors mediate vision in the usual sense of the word. These cells contain proteins called opsins that absorb photons of light and thereby excite the photoreceptors that contain them, informing the brain that light was detected.
A second class of photosenstive cells in the retina are called intrinsically photosensitive retinal ganglion cells (ipRGCs) (see Do and Yau, 2010 for review). These cells mediate "non-image-forming" vision and are an important part of the circadian rhythm pathway. They also contain an opsin called melanopsin which is a photosensitive pigment. This is not to be confused with melatonin, which is the sleep hormone released by the pineal gland. The ipRGCs in the retina send the photoperiod cue to a brain area called the suprachiasmatic nucleus (SCN). The SCN then signals to the pineal gland.
If we are generous and assume that these light-emitting headphones are the result of misunderstandings, we can guess that the confusion arises from (1) the fact that some animals have a directly photosensitive pineal gland, but not mammals and (2) that the pineal gland secretes melatonin but not the photosensitive pigment melanopsin.
Update: From a bit of research, it turns out that the company selling the headphones is not "confused" as I politely offered. I don't think this site is the appropriate forum to refute their research or claims. Suffice to say that the retina is the only part of the human brain shown to be photosensitive.
I believe there are light sensors (TRPV3) in the skin for infrared light (heat) that convey that information back to the brain from the skin. This is a kind of light detection, but it is not direct detection like the rhodopsins in the eye.
By the way, even without passing information on to neurons, cells probably have a lot of sensors they may use to respond to their local environment. This recent article talks about how olfactory receptors can be found in lung and gut cells. So it's quite possible that the conventional light-detecting genes (rhodopsins) would be found in skin cells, but they may not convey information to neurons. | <urn:uuid:c8710e56-de85-4d74-87f8-82ed5e0a91af> | 3.203125 | 732 | Q&A Forum | Science & Tech. | 47.79 |
This week's highlight taxon is the Musophagidae, a uniquely African family of birds that contains the turacos (or should that be "turacoes"?), also spelt "touracos". Turacos are about 20 species of medium-sized frugivorous (fruit-eating) birds. The name of the type genus, Musophaga, means "plantain eater", which is also used as the vernacular name for some of the largest species, though it seems that they rarely, if ever, actually eat plantains (plantains, for those not in the know, are bananas eaten baked when they are still fairly green and contain more starch). Figs seem to be a much more favoured food. Rutgers & Norris (1972) mentions them also feeding on leaves, insects and occasionally even meat, though it is unclear whether turacos actually eat the latter in the wild.
Turacos are best known for their brilliant coloration, and not without reason. The red turacin and the green turacoverdin are unique among bird pigments in being metal-bearing porphyrins, the same class of compounds as the heme found in haemoglobin, though the turaco pigments contain copper rather than iron. Turacin is unique to turacos, while there is evidence that turacoverdin (or a very similar compound) is also found in the jacana (Jacana spinosa), the blood pheasant (Ithaginis cruentata) and the roulroul (Rollolus roulroul), a species of partridge (Dyck, 1992). Turacoverdin (and the pigments in the latter three species, if they are distinct) is also notable as the only truly green pigment known from any bird. Green coloration in other birds is the result of structural coloration (from the crystalline structure of the feathers), or the combination of yellow pigment and blue structural coloration (Hill & McGraw, 2006). Rutgers and Norris (1972) mention a persistent belief that turacin is water-soluble, and actually washes out when the bird bathes. While this belief is untrue, turacin is soluble in more alkaline solutions.
Turacin and turacoverdin are found in four of the six or seven genera of turacos. The genera Crinifer and Corythaixoides (together forming the subfamily Criniferinae) lack both pigments and are mostly grey. Species of Corythaixoides are also known as "go-away birds" after the sound of their calls. These two genera, as well as the great blue turaco (Corythaeola cristata) which is placed in its own subfamily, are referred to as the grey turacos. Corythaeola is also mostly grey, but does have small patches of turacin (the vent) and turacoverdin (a small breast-stripe). The remaining turacos, referred to as the turacin-bearing turacos, are included in the subfamily Musophaginae (Veron & Winney, 2000). Over half the species of turaco belong to the genus Tauraco, which are mostly a spectacular green. Tauraco, the purple-crested turaco (Gallirex porphyreolophus)* and the Ruwenzori turaco (Ruwenzorornis johnstoni) contain both turacin and turacoverdin (the latter two species may be included in the genus Musophaga). The remaining species of Musophaga in the strict sense lack turacoverdin and are a brilliant blue colour, with bright red crests coloured with turacin.
*Gallirex is a very neat genus name - it means "king of the chickens".
The relationships of the turacos to other birds have long been problematic. The most common position attributed to them has been as sister taxon to the cuckoos (Cuculidae), but other suggested relationships have been with the Galliformes or Opisthocomus (the hoatzin). The morphological analysis of Livezey and Zusi (2007) supported the traditional position, while the molecular analysis of Ericson et al. (2006) placed both Musophagidae and Cuculidae in a grade of largely terrestrial birds, also including the Grues (cranes and rails) and Otididae (bustards), sitting at the base of the mostly aquatic "higher waterbird" clade, but was unresolved as to whether the two families formed a monophyletic group.
Dyck, J. 1992. Reflectance spectra of plumage areas colored by green feather pigments. The Auk 109 (2): 293-301.
Ericson, P. G. P., C. L. Anderson, T. Britton, A. Elzanowski, U. S. Johansson, M. Källersjö, J. I. Ohlson, T. J. Parsons, D. Zuccon & G. Mayr. 2006. Diversification of Neoaves: integration of molecular sequence data and fossils. Biology Letters 2 (4): 543-547.
Hill, G. E., & K. J. McGraw. 2006. Bird Coloration. Harvard University Press.
Livezey, B. C., & R. L. Zusi. 2007. Higher-order phylogeny of modern birds (Theropoda, Aves: Neornithes) based on comparative anatomy. II. Analysis and discussion. Zoological Journal of the Linnean Society 149 (1): 1-95.
Rutgers, A., & K. A. Norris (eds.) 1972. Encyclopaedia of Aviculture vol. 2. Blanford Press: London.
Veron, G., & B. J. Winney. 2000. Phylogenetic relationships within the turacos (Musophagidae). Ibis 142 (3): 446-456. | <urn:uuid:d8769021-6421-4589-b290-07107a1160d1> | 4.0625 | 1,281 | Knowledge Article | Science & Tech. | 49.17533 |
Dingemans, B.J.J. and Bakker, E.S. and Bodelier, P.L.E. (2011) Aquatic herbivores facilitate the emission of methane from wetlands. Ecology, 92, 1166-1173. ISSN 0012-9658.
Official URL: http://dx.doi.org/10.1890/10-1297.1
Wetlands are significant sources of atmospheric methane. Methane produced by microbes enters roots and escapes to the atmosphere through the shoots of emergent wetland plants. Herbivorous birds graze on helophytes, but their effect on methane emission remains unknown. We hypothesized that grazing on shoots of wetland plants can modulate methane emission from wetlands. Diffusive methane emission was monitored inside and outside bird exclosures, using static flux chambers placed over whole vegetation and over single shoots. Both methods showed significantly higher methane release from grazed vegetation. Surface-based diffusive methane emission from grazed plots was up to five times higher compared to exclosures. The absence of an effect on methane-cycling microbial processes indicated that this modulating effect acts on the gas transport by the plants. Modulation of methane emission by animal–plant–microbe interactions deserves further attention considering the increasing bird populations and changes in wetland vegetation as a consequence of changing land use and climate change.
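As a rough illustration of how static-flux-chamber measurements of the kind mentioned in the abstract are commonly converted into an emission rate, the sketch below fits a line to hypothetical headspace CH4 concentrations and converts the slope to a surface flux with the ideal gas law. The readings, chamber dimensions and variable names are all invented for illustration and are not taken from this study.

```python
import numpy as np

# Hypothetical chamber readings: CH4 mixing ratio (ppm) over a 30-minute enclosure.
minutes = np.array([0.0, 10.0, 20.0, 30.0])
ch4_ppm = np.array([1.9, 2.6, 3.2, 3.9])   # invented example values

# Illustrative chamber geometry and conditions (not from the paper).
volume_m3 = 0.015        # chamber headspace volume
area_m2 = 0.07           # water/sediment surface covered by the chamber
temp_k = 293.15          # air temperature inside the chamber
pressure_pa = 101325.0
R = 8.314                # J mol^-1 K^-1
M_CH4 = 16.04            # g mol^-1

# Slope of the concentration rise, in ppm per hour.
slope_ppm_per_h = np.polyfit(minutes / 60.0, ch4_ppm, 1)[0]

# Moles of air enclosed in the chamber (ideal gas law).
mol_air = pressure_pa * volume_m3 / (R * temp_k)

# ppm h^-1  ->  umol CH4 h^-1  ->  mg CH4 m^-2 h^-1
umol_per_h = slope_ppm_per_h * mol_air      # 1 ppm = 1 umol CH4 per mol of air
flux_mg_m2_h = umol_per_h * M_CH4 / 1000.0 / area_m2

print(f"Diffusive CH4 flux ~ {flux_mg_m2_h:.2f} mg CH4 m^-2 h^-1")
```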
|Institutes:||Nederlands Instituut voor Ecologie (NIOO)|
|Deposited On:||04 Jan 2011 01:00|
|Last Modified:||24 Apr 2012 16:42|
| <urn:uuid:16cd5714-60f1-42dc-9ae5-2f453f328018> | 2.875 | 358 | Academic Writing | Science & Tech. | 39.585106 |
It was like something out of a science fiction film. Scientists shot a rocket toward where Mars would be in several months. On that rocket was a capsule containing a 1-ton (0.9-metric-ton) rover. The capsule detached from the rocket and made its way to the upper atmosphere of Mars. Then the really crazy stuff happened.
After the capsule's descent to Mars's surface slowed, first by encountering the relatively thin atmosphere of the red planet and then by deploying the largest parachute ever built by NASA, the sky crane and rocket boosters took over.
With rockets firing, the sky crane lowered the rover down to the surface on cables. Just after the rover touched down, the sky crane cables separated from the rover and the crane flew off to crash a safe distance away. And the entire procedure happened automatically with no human control -- in fact, the rover had been sitting on the planet's surface for several minutes before we knew for sure that it had worked.
The landing was a marvel of science and engineering. And this was just the beginning of the mission! The rover has since begun to explore its surroundings and send data back to us about the conditions on Mars.
Honorable Mentions: It was a big year for space missions! In 2012, we also saw the successful launch of the SpaceX Dragon vehicle, which rendezvoused with the International Space Station. A secondary mission to place a prototype communications satellite into orbit failed when safety parameters fell below NASA's criteria. And in October, Felix Baumgartner broke several world records as he skydived from a balloon floating higher than 127,000 feet (38,710 meters), prompting many to refer to his accomplishment as a space jump. | <urn:uuid:163aca14-ab65-459c-b567-02c9f9292344> | 3.671875 | 346 | Listicle | Science & Tech. | 49.846849 |
Wednesday, May 16, 2012 - 07:30 in Paleontology & Archaeology
As history has repeatedly shown, where there are valuable minerals to be unearthed, adventurous humans will arrive in droves even if it means battling extreme conditions and risking life and limb.
- More gold -- and other minerals -- in them thar hills?Tue, 24 Jul 2012, 14:34:58 EDT
- Caltech team finds evidence of water in moon mineralsWed, 21 Jul 2010, 13:30:37 EDT
- Sea life 'facing major shock'Tue, 21 Aug 2012, 10:35:35 EDT
- Mineral diversity clue to early Earth chemistryThu, 28 Feb 2013, 16:36:06 EST
- Space radar to improve miners' safetyThu, 19 Jun 2008, 10:07:44 EDT | <urn:uuid:4b755518-2d87-4a37-bf3e-85d0a17248b4> | 3.1875 | 167 | Content Listing | Science & Tech. | 34.191925 |
§ Mr. Morley
Annual monitoring for evidence of Lindane in the area of the English Channel where MV Perintis sunk was undertaken between 1989 and 1993. This found that Lindane concentrations in seawater were low. The conclusion was that the English Channel had not been contaminated as a result of the sinking and that the container of Lindane on MV Perintis had sunk intact.
The Lindane (a pure, crystalline material not formulated in a carrying solvent) was packed in plastic sacks within the container. Advice from Defra's Centre for Environment, Fisheries and Aquaculture Science is that dissolution of the Lindane in seawater at the ambient seabed temperature, leaching from the sacks, and diffusion from within the container into the surrounding water, are all likely to be slow processes and the impact on marine life is therefore expected to be negligible. | <urn:uuid:ec3136f9-44f2-41cc-a08e-9979f9e870d0> | 2.953125 | 177 | Knowledge Article | Science & Tech. | 31.475952 |
No one has observed any evidence for proton decay. That might be disappointing professionally for physicists, but it's good news for the universe. If it turns out to be possible, proton decay could be the beginning of the end of everything. Here's why.
How do we start with protons and end with the end of the universe? We begin with what's in those protons. Inside protons are quarks. Quarks are one of the two most basic particles we can find. Quarks are subject to the strong force, the force that keeps a nucleus together. Each quark has essentially been assigned a baryon number of one third. The most famous baryons are protons and neutrons which have three quarks each, amounting to a baryon number of one. (Also famous are the antiprotons, which have a negative baryon number. So if a proton and antiproton were simultaneously created, the overall baryon number of the system is zero.) Because the charge of the quarks in protons and neutrons is a little different, the two particles have different charges. They also have slightly different masses. The neutron is a little chunkier, which means it can be involved in a change that involves the other fundamental piece of matter in the universe.
Leptons are separate from quarks. They are things like the electron, the neutrino, and their counterparts the antineutrino and antielectron. None of them are affected by the strong force. They have lepton numbers, and their anti-counterparts have negative lepton numbers.
Lepton numbers and baryon numbers seem meaningless, until you know that no reaction in the universe has ever been observed that changed the overall baryon or lepton number. This led to the laws of conservation of baryon number and lepton number. Think of them like you would the conservation of energy and conservation of mass. A sudden change in lepton number would be like an apple just disappearing into nothing, or a burst of energy coming from nowhere.
Which is why scientists were so puzzled, at first, by the interaction of leptons and quarks in the decay of the neutron. When a neutron decays, it turns into a proton, and sheds an electron. Since a proton is positive, and an electron is negative, charge was conserved, but it seemed to scientists like lepton number had completely changed. Later, they realized that this decay involved the emission of an anti-neutrino (specifically, an anti-electron neutrino, which is a neutrino associated with electron interactions.) Since the electron had a lepton number of +1, and the anti-electron neutrino had a number of -1, the number was conserved, as was mass, as was charge. The decay entirely involved the weak force, which meant that the strong force wasn't messing around with any leptons. And all was well.
Protons are the lightest of the baryons. They can't shed anything else, unless their quarks dissolve into lighter particles. But this would subtract baryons and add leptons from out of nowhere. It was decided that it couldn't happen.
Then along came a little thing called the Grand Unified Theory. This is an unrealized theory that holds that all the forces can reach some level of equivalence, and can be explained with one unifying and quantifiable idea. It's very aesthetically pleasing. The problem is, if the strong and the weak force are equivalent, then leptons and baryons are equivalent as well. Remember the conservation of mass and the conservation of energy? This would be like Einstein's realization that E = mc^2, and mass and energy are equivalent - that one can be subbed in for another. Suddenly an apple could disappear, and a sudden burst of energy could appear. Matter could be converted into energy. Under the Grand Unified Theory, baryons could be converted into leptons. Baryon number and lepton number are no longer conserved.
Protons could then break down into positrons and pions. Although there are various mechanisms of proton decay, scientists think that protons have a life of about 10^25 to 10^33 years. Which is a pity, since at this point the universe will have plenty of problems of its own. By 10^30 years, the universe's stars will have first receded out of view of each other, and burned out until they went dark. Energy is what organizes atoms - gravitational energy brings particles together to form stars and planets, and solar energy that heats up planets gives life a chance. By then, the most intense bursts of energy will come from chunks of matter falling into black holes. That might be the only way to wring energy from the universe. And it won't work, because the matter itself will simply dissolve. Once the baryons have flat-out fizzled into leptons, there's no way of getting them back without the input of a lot of energy. Proton decay means that any civilization - any matter at all - that makes it that long will literally dissolve, as even hydrogen dissolves into smaller particles.
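A back-of-the-envelope calculation shows why lifetimes this long can still be probed: with enough protons in one place, even a mean life of 10^33 years predicts the occasional decay, so seeing none is informative. The sketch below runs the arithmetic for a kiloton of water; the choice of water and the detector mass are illustrative assumptions, not details from the article.

```python
AVOGADRO = 6.022e23        # molecules per mole
WATER_MOLAR_MASS = 18.0    # grams per mole
PROTONS_PER_H2O = 10       # 8 protons in the oxygen nucleus + 2 hydrogen nuclei

mass_grams = 1.0e9         # one kiloton of water (illustrative detector size)
molecules = mass_grams / WATER_MOLAR_MASS * AVOGADRO
protons = molecules * PROTONS_PER_H2O      # roughly 3e32 protons

for lifetime_years in (1e25, 1e33):
    # For exponential decay with mean life tau, a sample of N protons decays at a
    # rate of about N / tau per year while N is still essentially unchanged.
    decays_per_year = protons / lifetime_years
    print(f"mean life {lifetime_years:.0e} yr -> about {decays_per_year:.1e} decays per year")
```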
Not a cheery picture is it?
Top Image: NASA and The Hubble Heritage Team (STScI)
Second Image: NASA, ESA and The Hubble Heritage Team (STScI/AURA)
Third Image: NASA, JPL-Caltech, J. Rho (SSC/Caltech) | <urn:uuid:31d796eb-2b06-4cb8-9225-43ad78c988ea> | 4.03125 | 1,150 | Nonfiction Writing | Science & Tech. | 52.138176 |
“Hydrothermal vents are home to animals found nowhere else on the planet that get their energy not from the Sun but from breaking down chemicals, such as hydrogen sulphide,” said Professor Alex Rogers of Oxford University’s Department of Zoology, who led the research. “The first survey of these particular vents, in the Southern Ocean near Antarctica, has revealed a hot, dark, ‘lost world’ in which whole communities of previously unknown marine organisms thrive.”
“What we didn’t find is almost as surprising as what we did,” said Professor Rogers. “Many animals such as tubeworms, vent mussels, vent crabs, and vent shrimps, found in hydrothermal vents in the Pacific, Atlantic, and Indian Oceans, simply weren’t there.”
The team reports its findings in this week’s issue of the online, open-access journal PLoS Biology.
Source and Image: The Daily Galaxy | <urn:uuid:42806314-c953-4440-bfbb-56b9bf4d91a0> | 3.78125 | 209 | Truncated | Science & Tech. | 37.212 |
Has extreme weather increased in recent years? The science is still unsettled on whether climate change has resulted in more intense hurricanes, so let's restrict our attention to tornadoes and heavy rain. There is evidence that global warming has caused an increase in very heavy precipitation events--the kind most responsible for major floods. However, there is no evidence that climate change has caused an increase in tornadoes and severe thunderstorms, though preliminary research suggests this may occur late this century.
Are tornadoes and severe thunderstorms getting more numerous and more extreme due to climate change? To help answer this question, let's restrict our attention to the U.S., which has the highest incidence of tornadoes and severe thunderstorms of any place in the world. At a first glance, it appears that tornado frequency has increased in recent decades (Figure 1).
|Figure 1. The number of tornadoes reported in the U.S. since 1950. Image credit: High Plains Regional Climate Center, University of Nebraska-Lincoln. Source.|
However, this increase may be entirely caused by factors unrelated to climate change:
Given these uncertainties in the tornado data base, it is unknown how the frequency of tornadoes might be changing over time. The "official word" on climate science, the 2007 United Nations IPCC report, stated it thusly: "There is insufficient evidence to determine whether trends exist in small scale phenomena such as tornadoes, hail, lighting, and dust storms."
Furthermore, we're not likely to be able to develop methods to improve the situation in the near future. The current Doppler radar system can only detect the presence of a parent rotating thunderstorm that often, but not always, produces a tornado. Until a technology is developed that can reliably detect all tornadoes, there is no hope of determining how tornadoes might be changing in response to a changing climate. According to Doswell (2007): "I see no near-term solution to the problem of detecting detailed spatial and temporal trends in the occurrence of tornadoes by using the observed data in its current form or in any form likely to evolve in the near future."
Violent tornadoes are not increasing
Violent tornadoes (EF4 and EF5 on the Enhanced Fujita Scale, or F4 and F5 on the pre-2007 Fujita Scale), though rare, cause a large fraction of the tornado deaths reported each year. These storms are less likely to go uncounted, since they tend to cause significant damage along a long track. Thus, the climatology of violent tornadoes may offer a clue as to how climate change may be affecting severe weather. Unfortunately, we cannot measure the wind speeds of a tornado directly, except in very rare cases when researchers happen to be present with sophisticated research equipment. Tornadoes are categorized using the Enhanced Fujita (EF) scale, which is based on damage. So, if a violent tornado happens to sweep through empty fields and never destroy any structures, it will never be rated as a violent tornado. Thus, if the number of violent tornadoes has actually remained constant over the years, we should expect to see some increase in these storms over the decades, since more buildings have been erected in the paths of tornadoes.
However, if we look at the statistics of violent U.S. tornadoes since 1950 (Figure 2), there does not appear to be any increase in the number of these storms. In fact, there was only one tornado of EF5 intensity reported during the eight year period 2000-2007, the tornado that devastated Greensburg, Kansas in 2007 (although Canada did report its first EF5 tornado in history on June 22, 2007). The previous eight year period of 1992-1999 had six F5 tornadoes, so we can't say that climate change has caused an increase in the strongest tornadoes in recent years. Note that the EF scale to rate tornadoes was adopted in 2007, but the transition to this new scale still allows valid comparisons of tornadoes rated EF5 on the new scale and F5 on the old scale.
|Figure 2. The incidence of violent (EF4/EF5, or F4/F5) tornadoes by decade since 1950. The asterisk by the decade of the 2000s indicates that the statistics extend only through February 2008. We can expect that another 20% of this decade's violent tornado activity will occur in 2008 and 2009.|
An alternate technique to study how climate change may be affecting tornadoes is look at how the large-scale environmental conditions favorable for tornado formation have changed through time. Moisture, instability, lift, and wind shear are needed for tornadic thunderstorms to form. The exact mix required varies considerably depending upon the situation, and is not well understood. However, Brooks (2003) attempted to develop a climatology of weather conditions conducive for tornado formation by looking at atmospheric instability (as measured by the Convective Available Potential Energy, or CAPE), and the amount of wind shear between the surface and 6 km altitude. High values of CAPE and surface to 6 km wind shear are conducive to formation of tornadic thunderstorms. The regions they analyzed with high CAPE and high shear for the period 1997-1999 did correspond pretty well with regions where significant (F2 and stronger) tornadoes occurred. The authors plan to extend the climatology back in time to see how climate change may have changed the large-scale conditions conducive for tornado formation.
Del Genio et al. (2007) used a climate model with doubled CO2 to show that a warming climate would make the atmosphere more unstable (higher CAPE) and thus prone to more severe weather. However, decreases in wind shear offset this effect, resulting in little change in the amount of severe weather in the Central and Eastern U.S. late this century. The speed of updrafts in thunderstorms over land increased by about 1 m/s in their simulation, though, since upward moving air needed to travel 50-70 mb higher to reach the freezing level. As a result, the most severe thunderstorms got stronger. In the Western U.S., the simulation showed that drying led to fewer thunderstorms, but the strongest thunderstorms increased in number by 26%, leading to a 6% increase in the total amount of lightning hitting the ground each year. If these results are correct, we might expect more lightning-caused fires in the Western U.S. late this century, due to enhanced drying and more lightning.
Using a high-resolution regional climate model (25 km grid size) zoomed in on the U.S., Trapp et al. (2007) found that the decrease in 0-6 km wind shear in the late 21st century would more than be made up for by an increase in instability (CAPE). Their model predicted an increase in the number of days with high severe storm potential for almost the entire U.S., by the end of the 21st century. These increases were particularly high for many locations in the Eastern and Southern U.S., including Atlanta, New York City, and Dallas (Figure 3). Cities further north and west such as Chicago saw a smaller increase in the number of severe weather days.
|Figure 3. Number of days per year with high severe storm potential historically (blue bars) and as predicted by the climate model (A2 scenario) of Trapp et al. 2007 (red bars).|
We currently do not know how tornadoes and severe thunderstorms may be changing due to changes in the climate, nor is there hope that we will be able to do so in the foreseeable future. Preliminary research using climate models suggests that we may see an increase in the number of severe storms capable of producing tornadoes late this century. However, this research is just beginning, and much more study is needed to confirm these findings. The lack of an increase in violent EF4 and EF5 tornadoes in recent decades implies that climate change has not yet increased tornado activity.
Are heavy rain events becoming more frequent due to climate change? That is a difficult question to answer, since reliable records are not available at all in many parts of the world, and extend back only a few decades elsewhere. However, we do have a fairly good set of precipitation records for many parts of the globe, and those records show that the heaviest types of rains--those likely to cause flooding--have increased in recent years. According to the United Nations' Intergovernmental Panel on Climate Change (IPCC) 2007 report, "The frequency of heavy precipitation events has increased over most land areas". Indeed, global warming theory has long predicted an increase in heavy precipitation events. As the climate warms, evaporation of moisture from the oceans increases, resulting in more water vapor in the air. According to the 2007 IPCC report, water vapor in the global atmosphere has increased by about 5% over the 20th century, and 4% since 1970. Satellite measurements (Trenberth et al., 2005) have shown a 1.3% per decade increase in water vapor over the global oceans since 1988. Santer et al. (2007) used a climate model to study the relative contributions of natural and human-caused effects to increasing water vapor, and concluded that this increase was "primarily due to human-caused increases in greenhouse gases". This was also the conclusion of Willett et al. (2007).

More water vapor equals more precipitation
This increase in water vapor has very likely led to an increase in global precipitation. For instance, over the U.S., where we have very good precipitation records, annual average precipitation has increased 7% over the past century (Groisman et al., 2004). The same study also found a 14% increase in heavy (top 5%) and a 20% increase in very heavy (top 1%) precipitation events over the U.S. in the past century. Kunkel et al. (2003) also found an increase in heavy precipitation events over the U.S. in recent decades, but noted that heavy precipitation events were nearly as frequent at the end of the 19th century and beginning of the 20th century, though the data from that era are not as reliable. Thus, there is large natural variation in extreme precipitation events.

Pollution may contribute to higher precipitation
It is possible that increased pollution is partly responsible for the increase in precipitation and in heavy precipitation events in some parts of the world. According to Bell et al. (2008), summertime rainfall over the Southeast U.S. is more intense on weekdays than on weekends, with Tuesdays having 1.8 times as much rain as Saturdays during the 1998-2005 period analyzed. Air pollution particulate matter also peaks on weekdays and has a weekend minimum, making it likely that pollution is contributing to the observed mid-week rainfall increase. Pollution particles act as "nuclei" around which raindrops condense, increasing precipitation in some storms.

The future of flooding
It is difficult to say if the increase in heavy precipitation events in recent years has led to more flooding, since flooding is critically dependent on how much the landscape has been altered by development, upstream deforestation, and what kind of flood control devices are present. One of the few studies that did attempt to quantify flooding (Milly et al., 2002) found that the incidence of great floods has increased in recent decades. In the past century, the world's 29 largest river basins experienced a total of 21 "100-year floods"--the type of flood one would expect only once per 100 years in a given river basin. Of these 21 floods, 16 occurred in the last half of the century (after 1953). With the IPCC predicting that heavy precipitation events are very likely to continue to increase, it would be no surprise to see flooding worsen globally in the coming decades.
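As a rough back-of-the-envelope check on how unusual that clustering is, one can ask how often 16 or more of 21 floods would land in a given half of the century if each flood were equally likely to fall in either half. The small Java sketch below does this simple binomial calculation; it assumes the floods are independent and is only an illustration, not the statistical analysis Milly et al. actually performed.

    public class FloodClustering {
        public static void main(String[] args) {
            int n = 21;          // great floods observed over the century
            double p = 0.0;      // P(16 or more fall in the second half by chance)
            for (int k = 16; k <= n; k++) {
                p += choose(n, k) * Math.pow(0.5, n);
            }
            System.out.printf("P(>=16 of 21 in one given half) = %.4f%n", p);  // about 0.013
        }

        // Binomial coefficient "n choose k", computed with doubles.
        static double choose(int n, int k) {
            double c = 1.0;
            for (int i = 1; i <= k; i++) {
                c = c * (n - k + i) / i;
            }
            return c;
        }
    }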
Brooks, H.E., J.W. Lee, and J.P. Craven, 2003, "The spatial distribution of severe thunderstorm and tornado environments from global reanalysis data", Atmospheric Research Volumes 67-68, July-September 2003, Pages 73-94.
Doswell, C.A., 2007, "Small Sample Size and Data Quality Issues Illustrated Using Tornado Occurrence Data", E-Journal of Severe Storms Meteorology, Vol 2, No. 5 (2007).
Del Genio, A.D., M-S Yao, and J. Jonas, 2007, Will moist convection be stronger in a warmer climate?, Geophysical Research Letters, 34, L16703, doi: 10.1029/2007GL030525.
Marsh, P.T., H.E. Brooks, and D.J. Karoly, 2007, Assessment of the severe weather environment in North America simulated by a global climate model, Atmospheric Science Letters, 8, 100-106, doi: 10.1002/asl.159.
Trapp, R.J., N.S. Diffenbaugh, H.E. Brooks, M.E. Baldwin, E.D. Robinson, and J.S. Pal, 2007, Severe thunderstorm environment frequency during the 21st century caused by anthropogenically enhanced global radiative forcing, PNAS 104 no. 50, 19719-19723, Dec. 11, 2007.

Heavy precipitation references
Bell, T. L., D. Rosenfeld, K.-M. Kim, J.-M. Yoo, M.-I. Lee, and M. Hahnenberger (2008), "Midweek increase in U.S. summer rain and storm heights suggests air pollution invigorates rainstorms," J. Geophys. Res., 113, D02209, doi:10.1029/2007JD008623.
Kunkel, K. E., D. R. Easterling, K. Redmond, and K. Hubbard, 2003, "Temporal variations of extreme precipitation events in the United States: 1895-2000", Geophys. Res. Lett., 30(17), 1900, doi:10.1029/2003GL018052.
Groisman, P.Y., R.W. Knight, T.R. Karl, D.R. Easterling, B. Sun, and J.H. Lawrimore, 2004, "Contemporary Changes of the Hydrological Cycle over the Contiguous United States: Trends Derived from In Situ Observations," J. Hydrometeor., 5, 64-85.
Milly, P.C.D., R.T. Wetherald, K.A. Dunne, and T.L. Delworth, "Increasing risk of great floods in a changing climate", Nature 415, 514-517 (31 January 2002) | doi:10.1038/415514a.
Santer, B.D., C. Mears, F. J. Wentz, K. E. Taylor, P. J. Gleckler, T. M. L. Wigley, T. P. Barnett, J. S. Boyle, W. Brüggemann, N. P. Gillett, S. A. Klein, G. A. Meehl, T. Nozawa, D. W. Pierce, P. A. Stott, W. M. Washington, and M. F. Wehner, 2007, "Identification of human-induced changes in atmospheric moisture content", PNAS 104 15248-15253, 2007.
Trenberth, K.E., J. Fasullo, and L. Smith, 2005: "Trends and variability in column-integrated atmospheric water vapor", Climate Dynamics 24, 741-758.
Willett, K.M., N.P. Gillett, P.D. Jones, and P.W. Thorne, 2007, "Attribution of observed surface humidity changes to human influence", Nature 449, 710-712 (11 October 2007) | doi:10.1038/nature06207. | <urn:uuid:05b4270b-6b42-49ca-84ab-7212b4e2aecb> | 3.59375 | 3,397 | Knowledge Article | Science & Tech. | 61.549723 |
Census of Marine Life: Wild and Wonderful Creatures
The Census of Marine Life was a ten-year effort by scientists from around the world to answer the age-old question, “What lives in the sea?” It was an international effort to assess the diversity, distribution, and abundance of marine life in our ocean, and the project officially concluded in October 2010.
Browse a small sampling of the amazing marine life documented by Census scientists in this photo slideshow. You can learn more about the ambitious survey in the book, Citizens of the Sea: Wondrous Creatures from the Census of Marine Life.
See more marine life and biodiversity in a photo slideshow of marine species at risk and watch a video to learn about how scientists are gathering vast amounts of ecological information to build a catalog of life on the island of Moorea. | <urn:uuid:2ebfa9f1-a969-412f-ab7d-d0f436c5ab6a> | 3.515625 | 173 | Truncated | Science & Tech. | 29.553 |
The Hubble Space Telescope is only 17 years old, but already, it has turned in a series of great discoveries and images.
On its birthday, Hubble's handlers released 48 images of the Carina Nebula as new stars are being born. Carina is estimated to be 7,500 light-years away. These images include data from a ground-based telescope in Chile, which allows them to be displayed in full color.
In the Carina Nebula, the images show outflowing winds and ultraviolet radiation from at least a dozen brilliant stars. Meanwhile, physical forces are acting on the space … Read more | <urn:uuid:045047ac-3463-493c-a760-e34dc9f5ea2c> | 2.75 | 122 | Truncated | Science & Tech. | 45.958412 |
Published March 24, 2011
Southwest Snowpack (updated 3/17/11)
Data Source(s): National Water and Climate Center, Western Regional Climate Center
Snowpack levels continued to drop over the past month as very little precipitation fell across most of Arizona and New Mexico. The current La Niña event drew winter storms northward into the Upper Colorado River Basin states of Utah, Colorado, and Wyoming. As of March 17, nearly all Snow Telemetry (SNOTEL) stations in Arizona and New Mexico measured below-average to well below-average snow water equivalent (SWE) (Figure 8). SWE in the central Mogollon Rim area was the lowest in Arizona, measuring only 15 percent of average. The Verde River Basin measured the highest SWE in the state, at only 53 percent of average. In New Mexico, the San Miguel, Dolores, Animas, and San Juan river basins in the northern part of the state had near-average levels, measuring 92 percent of average SWE. The snowpack in the southern portion of the state fared worse, with the Mimbres, San Francisco, and Gila river basins containing only 8, 23, and 28 percent of average SWE, respectively. While many river basins in the region measured near-record levels of snowpack during the winter months of 2010, this winter was almost the complete opposite.
Forecasts show a weakening La Niña pattern but still call for elevated chances of below-average precipitation for the spring months. As a result, streamflow forecasts anticipate below-average to well below-average runoff from most basins in the Southwest, except those with headwaters in the Rocky Mountains to the north of Arizona and New Mexico where precipitation has been higher this winter.

Notes:
Snowpack telemetry (SNOTEL) sites are automated stations that measure snowpack depth, temperature, precipitation, soil moisture content, and soil saturation. A parameter called snow water content (SWC) or snow water equivalent (SWE) is calculated from this information. SWC refers to the depth of water that would result by melting the snowpack at the SNOTEL site and is important in estimating runoff and streamflow. It depends mainly on the density of the snow. Given two snow samples of the same depth, heavy, wet snow will yield a greater SWC than light, powdery snow.
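As a simple illustration of the definition (the numbers here are invented for the example), SWE is just the snow depth scaled by the snowpack's density relative to liquid water:

    public class SweExample {
        // SWE: the depth of liquid water the snowpack would melt down to.
        // Liquid water has a density of about 1 g/cm^3.
        static double snowWaterEquivalentCm(double snowDepthCm, double snowDensityGPerCm3) {
            return snowDepthCm * snowDensityGPerCm3;
        }

        public static void main(String[] args) {
            System.out.println(snowWaterEquivalentCm(100, 0.10));  // light powder: 10 cm of water
            System.out.println(snowWaterEquivalentCm(100, 0.40));  // wet, heavy snow: 40 cm of water
        }
    }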
This figure shows the SWC for selected river basins, based on SNOTEL sites in or near the basins, compared to the 1971–2000 average values. The number of SNOTEL sites varies by basin. Basins with more than one site are represented as an average of the sites. Individual sites do not always report data due to lack of snow or instrument error. CLIMAS generates this figure using daily SWC measurements made by the Natural Resource Conservation Service.
Southwest Climate Outlook Staff
- Michael Crimmins, UA Extension Specialist
- Stephanie Doster, Institute of the Environment Editor
- Dan Ferguson, CLIMAS Program Director
- Gregg Garfin, Founding Editor, Institute of the Environment
- Zack Guido, CLIMAS Associate Staff Scientist
- Gigi Owen, CLIMAS Assistant Staff Scientist
- Nancy J. Selover, Arizona State Climatologist
- Jessica Swetish, CLIMAS Publications Assistant
Please direct your Southwest Climate Outlook comments and suggestions to Zack Guido.
The CLIMAS Web site contains official and non-official forecasts, as well as other information. While we make every effort to verify this information, please understand that we do not warrant the accuracy of any of these materials.... Read full disclaimer | <urn:uuid:847ca877-abf2-4062-92d1-d314028a63a3> | 2.6875 | 752 | Knowledge Article | Science & Tech. | 31.774047 |
The DOD's vessels ORV Sagar Kanya and FORV Sagar Sampada were deployed for studying the impact of the tsunami on the ocean environment and its resources in the Bay of Bengal and Arabian Sea during January-February 2005.
Immediately after the tsunami, the vessel ORV Sagar Kanya sailed from Goa on 3rd January 2005 with scientists from NCAOR, NIO, Goa and NIO, Regional Centre, Visakhapatnam and carried out multi-disciplinary observations along the affected coastal areas of the Malabar coast and Nagapattinam, Cuddalore, Pondicherry and Chennai on the Coromandel coast.
Leading scientists and media have raised apprehensions with regard to the impact of tsunami on the fishery resources also. In the given scenario, a team of scientists from Cochin University of Science & Technology, NIO-RC, Kochi, Annamalai University and CMLRE, Kochi sailed out from Kochi onboard FORV Sagar Sampada on 5.1.2005 to carry out detailed investigations on the impact of tsunami on the benthic communities, water chemistry and productivity patterns of the coastal waters covering Kerala, Tamil Nadu and Andhra coasts and Andaman and Nicobar Islands.
Impact assessment was made through comparison of the results with previous data collected from these areas by the earlier cruises of FORV Sagar Sampada. Concurrent measurements on the near coastal waters i.e. up to 30 m depth were also undertaken by involving scientists from NIO-RC, Kochi; CUSAT; CMFRI, CMLRE and Annamalai University.
From all these analyses, DOD-India has prepared a preliminary report on the tsunami of 26th December 2004. You can download it here: Tsunami Report | <urn:uuid:2708dc55-664e-4510-87ed-172a923d4cfb> | 2.75 | 371 | Comment Section | Science & Tech. | 29.875682 |
More evidence that Darwin's theory of natural selection as the origin of new species is wrong
From Jane Harris-Zsovan's recent story at Design of Life blog:
Darwin's theory of natural selection requires offspring to diverge from a common ancestor to create new species. It requires genetic differences to increase as descendants adapt to their environmental niches.
It is this 'natural selection' and 'adaptation' that creates species. And, as the newly created species continue to adapt, they should become more different over time. Following this line of thought, hybrids should be less viable than their parents.
Not only is there evidence that natural selection oscillates over time, but some hybrids, in both plant and animal kingdoms, are better suited to their environments than their parents.
In the case of Darwin's finches, even the 'purebred' finch populations show little tendency to sustain changes in size or shape of their beaks over the long term. This scenario is exactly what Darwinian theory doesn't predict.
For more go here.
Note: I haven't been blogging much recently, and the reason is that I was editing a book. I am now back from that. More later. | <urn:uuid:5e6c946c-1fcf-477e-81a4-bb5b57182879> | 3.078125 | 246 | Personal Blog | Science & Tech. | 46.523864 |
[If you cannot see the You Tube video below, you can click here for a high quality mp4 video.]
Interviewees: Richard Prum & Jakob Vinther, Yale University
by Caroline Parnass
Yale University graduate student Jakob Vinther was studying a fossil squid from the late Jurassic period, about 150 million years ago. Looking at the darkly colored, organic material where the squid’s ink sac was, he wondered how, on occasion, these ink sacs are preserved in the fossils. So Vinther took a small sample from the specimen and viewed it under a scanning electron microscope. He was surprised to find that the ink granules in the sample were preserved with their original shapes.
“And this ink is composed of a pigment called melanin,” says Vinther. “I thought it was quite spectacular that you could actually recognize melanin in a fossil that is 150 million years old.”
Melanin is a natural pigment that gives the color to human hair and skin, as well as many birds’ feathers. Paleontologists have assumed for years that the small, rod-shaped objects they find on many fossil bird feathers under the microscope are bacteria that had eaten away the bird feathers and were fossilized along with the feather in a matted, organized fashion. Vinther points out that melanin granules in bird feathers are organized in a very specific manner that is easily recognizable.
When he looked at a fossil feather under the electron microscope, he says, “They were organized in the way that they are in a feather, so they are aligned according to the rays of the feather.” After observing these so-called “bacteria,” Vinther hypothesized that these objects were not bacteria at all, but rather preserved melanin granules like those he had seen in the fossil squid.
When Vinther showed his images to Richard Prum, a professor of ecology and evolutionary biology at Yale, Prum recalls, “I immediately was on Jakob’s side. I immediately thought that these things are melanin granules.”
The research team substantiated their claim by looking at a very rare type of fossil bird feather: one with black and white bands of color running horizontally. They found that while these melanin granules were found on the black portions of the feather, they could not be seen on the white portions.
The End of Artistic License?
This research now allows scientists to say things about the external appearance of fossilized species that would have never been possible to say prior to it.
“For centuries, people have been doing paintings or drawings, renderings, of fossil organisms, trying to understand what they may have looked like. Most of those have been entirely fantasy,” says Prum, adding, “This has been some of the first bit of science on something that is so vivid to us when we look at living animals today: What colors are they?”
This discovery could now allow us to predict the coloration of other fossilized remains.
“You quite often find fossil fish and fossil dinosaurs and birds where you have the eye preserved, and we have melanin inside the eye as well,” says Vinther. “You also find insects with color bands preserved. So I immediately thought, actually melanin might be much more wide-spread in the fossil record than hitherto accepted.”
Perhaps the most surprising and most exciting application of this research is that it may allow us to predict the colors of many dinosaurs.
“These include many of our most well loved dinosaurs,” says Prum. “Like velociraptor, the dinosaur that chased the kids around the kitchen in Jurassic Park, was actually fully plumaged.”
While these dinosaur feathers were not used for flight until the appearance of the transitional species Archaeopteryx, the first known bird, they were probably useful for warmth. Prum says we could even learn more about the color of one of the most famous dinosaurs of all, Tyrannosaurus rex.
Detail from The Age of Reptiles © 1966, 1975, 1985, 1989 Yale Peabody Museum. All rights reserved.
“In the classic mural The Age of Reptiles in the Yale Peabody museum, they depicted T-rex, which is one of the iconic, huge, bipedal, meat-eating dinosaurs,” he says. “Recent fossil discoveries have shown that the closest relative of these huge tyrannosaurids actually had tiny skin appendages or fossil feathers—’dino-fuzz.’ Our recent discovery of melanin granules in fossil feathers may allow us to reconstruct what some of these dinosaurs really looked like, and that’s going to depend on having more and exciting fossil specimens.”
The team has thus far only been able to recognize black, white and red/brown melanin granules. But more research is being done to look at the biochemical composition of the preserved melanin granules and see whether or not there is information that can help them distinguish between a greater variety of colors.
So could this mean that fearsome creatures like T-rex might have been bright and multi-colored rather than the dour grey-green that artists have typically imagined?
“Even now many of the fossil feathers of dinosaurs from the Liaoning of China have very well preserved feathers that indicate that they may have had … feather patterns like black and white spots or brightly pigmented patterns within the plumage,” says Prum.
Before you know it, pictures in textbooks may need to be redone with T-rex looking like a big macaw.
| <urn:uuid:95fb1396-eae7-43d5-b2c9-5a9e92b2ffa3> | 3.765625 | 1,182 | Truncated | Science & Tech. | 39.878313 |
Washington: There are more than 210,000 known life forms in the world's oceans, but this could be a fraction of the total number of marine species, according to early results from a marine census released today.
Scientists from around the world are expected to complete the census of the world's oceans by 2010, when they hope to have a better understanding of the waters that cover nearly 70 per cent of the Earth's surface.
With nearly half of the world's population of 6.3 billion living along ocean coasts, experts say the big deep has been under-explored.
"The census is an attempt to level the playing field and I hope that by 2010 we will know as much about life in the oceans as life on land," said Ron O'Dor, a squid expert from Nova Scotia who is coordinating the census.
Hundreds of scientists from more than 50 countries are involved in the $US1 billion ($A1.42 billion) census, being sponsored by governments and a US foundation, and experts are meeting in Washington this week to plan the next seven years of research.
O'Dor said just three years into the study they were making new finds weekly, at an average rate of 160 fish species a year. Those fish are not necessarily new species, but have been never been recorded by humans.
Over 15,300 species of marine fish are now in the census database and experts involved in the count expect the final tally to be roughly 20,000.
About 1,700 other animals and plants are also being cataloged each year and scientists estimate 210,000 marine life forms are currently known but the final number could be 10 times higher.
While new species are being documented, scientists are alarmed at how many species have died out, been lost to overfishing, pollution or climate change.
Recent research on the depletion of sharks and other large predators suggests the size spectrum of marine animals is shrinking toward the small, said Fred Grassle, chair of the scientific steering committee of the census.
Large fish have been depleted by about 90 per cent in the past half century and fishing grounds are being destroyed by large fleets who are delving deeper.
"By changing one part of the ecosystem, the whole food chain changes," said O'Dor.
The obvious challenge in conducting the census is the vast size of the oceans and complete darkness at lower levels kilometres below the surface - what scientists call the Dark Zone.
Among recent finds at this depth have been giant squid and massive red jelly fish with muscular arms. The squid swims so fast, it has been impossible to catch.
Just as animal movements are tracked on land, scientists are tagging the movement of fish by attaching digital instruments to athletic fish from sharks and tuna to sea turtles.
Tagging fish leads to less duplication when counting species and provides an accurate record of movement.
Scientists are also interested in looking at masses under the water, called seamounts, and what species thrive there.
"People knew there were isolated islands under the sea but we are finding that 70 per cent of the species on one seamount are not found on another. The fact these seamounts are coming under increasing fishing pressure damages that habitat," said Grassle.
| <urn:uuid:741707bf-6ded-4a9c-9600-0f1e17a3b909> | 3.203125 | 696 | Truncated | Science & Tech. | 51.93083 |
Duroquinone Molecule Nano-Brain
A nano-brain consisting of a hexagonal duroquinone molecule can carry out 16 times more operations than a normal computer transistor. All in a package hundreds of times smaller than the wavelength of visible light. This molecule resembles a hexagonal plate with four cones linked to it, "like a small car," explained researcher Anirban Bandyopadhyay, an artificial intelligence and molecular electronics scientist at the National Institute for Materials Science at Tsukuba in Japan.
(Molecular brains arranged to mimic our nervous system)
The duroquinone molecule nano-brain might prove to be the controller for all of the tiny gadget parts that nanotech researchers have created - motors, propellers, switches, elevators, sensors and so on.
Scientists operate the device by tweaking the center duroquinone with electrical pulses from an extremely sharp electrically conductive needle. The molecule and its four cones can shift around in a variety of ways depending on different properties of the pulse — say, the pulse's strength.
Since weak chemical bonds link the center duroquinone with the surrounding 16 duroquinones, each of those shifts too. Imagine, for instance, a spider in the middle of a web made of 16 strands. If the spider moves in one direction, each thread linked to it experiences a slightly different tug from all the others.
In this way, a pulse to the central duroquinone can simultaneously transmit different instructions to each of the surrounding 16 duroquinones. The researchers say this design was inspired by that of brain cells, which can radiate branches out like a tree, with each branch used to communicate with another brain cell.
Ultimately, the nano-brain idea could be implemented in a three-dimensional sphere of 1,024 duroquinones. This means it could perform 1,024 instructions at once, for 4^1024 different outcomes (4^1024 = 2^2048, a number 617 digits long).
I think that duroquinone molecule nano-brains would be just the thing we need to make science-fictional inventions like lithocules possible:
...each lithocule knew exactly where it was supposed to go and what it was supposed to do. They were tetrahedral building blocks of calcium and carbon, the size of poppyseeds, each equipped with a power source, a brain and a navigational system.
(Read more about lithocules from Neal Stephenson's The Diamond Age)
You might also be able to get your Philip K. Dick's autofac up and running, as suggested today by the excellent Frolix_8.
Via LiveScience; thanks to Misja van Laatum for the tip on this story.
| <urn:uuid:44632b1b-974a-42d1-9491-67026ed469cc> | 3.453125 | 1,320 | Comment Section | Science & Tech. | 50.377222 |
variable = rand() % max;
rand() produces a random integer value between 0 and RAND_MAX.
% is called the modulus operator. It's basically what you learned as the remainder in division, e.g. 5 % 3 = 2 and 16 % 5 = 1.
As for this expression, it only ever returns a number from 0 to max - 1, because a remainder can never equal the divisor, i.e. a % b can never equal b. | <urn:uuid:8b58daaf-7acb-4541-a7f3-fd73fefd7775> | 3.015625 | 102 | Q&A Forum | Software Dev. | 79.200685 |
THE FORMATION OF THE MOON: The Earth's moon formed just 30 to 50 million years after the sun was formed, when an object the size of Mars collided with Earth, releasing a giant cloud of dust along with the moon. Using a spectrograph on the Spitzer Space Telescope researchers identified the chemical make up of the dust and debris particles left floating in space. These chemicals provided evidence that high-speed collisions -- the same type of collision that most likely created our moon – occurred in space.
ABOUT THE SPITZER TELESCOPE: The Spitzer Space Telescope was launched on August 25, 2003. Spitzer detects the infrared energy radiated by objects in space. Most of this infrared radiation is blocked by the Earth's atmosphere and cannot be observed from the ground. Spitzer allows us to peer into regions of space that are hidden from optical telescopes. Many areas of space are filled with vast, dense clouds of gas and dust that block our view. Infrared light however can penetrate these clouds, allowing us to peer into regions of star formation, the centers of galaxies, and into newly forming planetary systems. Infrared also brings us information about the cooler objects in space, such as smaller stars which are too dim to be detected by their visible light, extrasolar planets, and giant molecular clouds. | <urn:uuid:ea437fb1-2eae-4e47-a6b9-dcaff9710b47> | 4.34375 | 267 | Knowledge Article | Science & Tech. | 43.913907 |
View Full Version : moving earth with comet/asteroids
2001-Nov-25, 04:01 AM
I ran across a small newspaper article a month or so ago about the possibility of moving Earth into a larger/different orbit by managing the paths of comets or asteroids. Does anyone know the whereabouts of that actual article, or other sources of information regarding this idea?
2001-Nov-25, 05:06 AM
I recall this idea being discussed at the old board a few months ago.
There was a paper by Don Korycansky (http://www.ucolick.org/~kory/) and others in Astrophysics and Space Science, 275, p 349-366. A 20 page PDF version of it can be found here (http://www.ucolick.org/~kory/astro_eng/astro_eng.pdf), and the topic was also discussed in Scientific American (http://www.sciam.com/2001/0601issue/0601scicit6.html) as well as a number of newspapers.
As an aside, there is an obvious typo in the Scientific American article. They write of the 100 km object having a mass of 1016 tons, but I think they mean 10^16 tons. Even so, I think the number is wrong. A 100 km diameter round object with a mass of 10^16 tons would have a density of around 19 tons/m^3. Even the density of iron is less than half of that. They must have used a 100 km radius when they calculated the mass.
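A quick check of that arithmetic, sketched in Java with the same assumed figures (100 km diameter, 10^16 metric tons):

    public class AsteroidDensity {
        public static void main(String[] args) {
            double radiusM  = 50_000.0;                                    // 100 km diameter
            double volumeM3 = 4.0 / 3.0 * Math.PI * Math.pow(radiusM, 3);  // about 5.2e14 m^3
            double massTons = 1e16;                                        // the figure quoted above
            System.out.println(massTons / volumeM3 + " tons per cubic metre");  // about 19
        }
    }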
A search at Google on the name "Don Korycansky" will turn up a bunch of info. The url is: http://www.google.com
(Hmmm, edited because it wouldn't accept a hot link to Google without appending a bunch of other stuff to my code -- weird)
[ This Message was edited by: Torsten on 2001-11-25 00:15 ]
| <urn:uuid:a610276a-a7d8-4793-bb46-bca7d262851c> | 3.1875 | 462 | Comment Section | Science & Tech. | 81.837726 |
An ejecta blanket is a generally symmetrical apron of ejecta that surrounds a crater; it is layered thickly at the crater’s rim and thin to discontinuous at the blanket’s outer edge.
structure of impact craters, showing surrounding ejecta
After an impact event, the falling debris forms an ejecta blanket surrounding the crater. Approximately half the volume of ejecta falls within 1 crater radius of the rim, or 2 radii from the center of the crater. The ejecta blanket becomes thinner with distance and increasingly discontinuous. Over 90% of the debris falls within approximately 5 radii of the center of the crater. Ejecta which falls within that area is considered proximal ejecta. Beyond 5 radii, the discontinuous debris is considered distal ejecta.
- ^ David Darling. "ejecta blanket". The Encyclopedia of Astrobiology, Astronomy, and Spacecraft. Retrieved 2007-08-07.
- ^ French, Bevan M. (1998). "Ch 5: Shock-Metamorphosed Rocks (Impactites) in Impact Structures". Traces of Catastrophe: A Handbook of Shock-Metamorphic Effects in Terrestrial Meteorite Impact Structures. Houston: Lunar and Planetary Institute. pp. 74–78. | <urn:uuid:bc6f9cce-16d8-484f-ac0e-43c16a2e1550> | 3 | 268 | Knowledge Article | Science & Tech. | 42.061225 |
How do you explain a scientific breakthrough in a soundbite, let alone the creation of the universe? That must be the daily problem faced by the PR flak at CERN, the Geneva-based European Nuclear Research Facility. Scientists investigating the creation of the universe hit the front pages this week with a new discovery; top prize to anyone who could put it into a Tweet.
Physicists at CERN said Wednesday they have discovered a new subatomic particle which bears remarkable similarity to the Higgs boson. Apparently this gives a potential clue as to why elementary particles have mass… Still with us?
A CERN spokesman told the media, “The results are preliminary, but the five-sigma signal at around 125 GeV we’re seeing is dramatic. This is indeed a new particle.” Another scientist chimed in, “We observe in our data clear signs of a new particle, at the level of five sigma, in the mass region around 126 GeV.” In spite of these “clarifications” the media found a way to describe the discovery–the “God particle” became the shorthand. But does anyone understand what it means?
The PR Verdict: “C” (Distinctly OK) for CERN and the PR surrounding its discovery. While it rates top marks for global coverage and for getting key messages and data included, it’s only a “C” because we still have no idea what actually happened or what any of it means.
The PR Takeaway: When in doubt that anyone will understand your announcement, talk about benefits and not content. This is one example where the subject matter is truly too daunting for any PR flak with a clipboard and red pen. A couple of soundbites might have been useful to explain what this could mean in its practical application, if there is one. Failing that, what is the question that can now be answered but which could not be answered a week ago? That might be the tweet CERN was looking for.
Could CERN have come up with a better way to relate their discovery? Do you know what the discovery means? Give us your PR Verdict, below. | <urn:uuid:08237e82-7116-45eb-bb28-0776094e8c27> | 2.734375 | 456 | Personal Blog | Science & Tech. | 54.710618 |
Lightning follows the Sun
Lightning follows the Sun
Space imaging team discovers unexpected preferences
One of a series of stories covering the quadrennial International Conference on Atmospheric Electricity, June 7-11, 1999, in Guntersville, Ala.
"We've been watching the global distribution and established a picture of how it changes as a function of time of day, season, and even from year to year," said Dr. Hugh Christian of the Global Hydrology and Climate Center in Huntsville, Alabama. Christian is the principal investigator for the Lightning Imaging Sensor (LIS) on the Tropical Rainfall Measuring Mission, and its predecessor, the Optical Transient Detector (OTD) on Microlab 1.
Above: Lightning likes land: Data from the Lightning Imaging Sensor shows that most lightning strikes occur over land where the ground can warm the air more effectively. This map covers the latitudes 35 deg. N to 35 deg S overflown by the Tropical Rainfall Measuring Mission carrying the LIS. Links to 937x224-pixel, 37KB GIF. (NASA/GHCC)
These two sensors provide a view of lightning from above the cloud tops, thus revealing the large percentage of cloud-to-cloud and intracloud flashes that cannot be seen from the ground. LIS and OTD both register the time of a lightning flash and its location within the instrument's field of view. This is then overlaid with weather pictures and other data so scientists can track the locations and frequency of lightning and how it corresponds with other weather phenomena.
Since their launches - OTD four years ago and LIS 18 months ago - Christian and his team have generated several maps showing global lightning patterns.
Left: At the Global Hydrology and Climate Center in Huntsville, Ala., Hugh Christian (foreground), Steve Goodman and Richard Blakeslee (background) monitor data from the Lightning Imaging Sensor aboard the Tropical Rainfall Measuring Mission. (NASA)
The first of these patterns to emerge was the discovery that lightning is more common in storms over land than over oceans. "It's probably a consequence of enhanced convection from increased warming over land." The LIS team announced its initial findings in 1998. Today, Christian will present expanded findings that buttress their claim.
"Over land, we see tremendous diurnal changes, a strong peak in lightning in the afternoon over land," Christian continued. "Over water we see very little variation. We believe it's due to the land absorbing heat and causing strong convection. On the other hand, water can store a lot more heat, and releases it slowly."
Lightning patterns also vary from one season to the next.
"We see tremendous variations in extratropical regions," meaning areas north or south of the tropics of Cancer and Capricorn. "You see lightning activity truly following the sun. As summer in the northern hemisphere progresses, you see lightning moving farther north," Christian continued. "You see a similar pattern in the southern hemisphere, but not so pronounced because there isn't as much land outside the tropics."
Voltage (June 18, 1999) Scientists discuss biology, safety, and statistics of lightning strikes.
News shorts from Atmospheric Electricity Conference (June 16, 1999) Poster papers on hurricanes and tornadoes summarized.
Soaking in atmospheric electricity (June 15, 1999) 'Fair weather' measurements important to understanding thunderstorms.
Lightning position in storm may circle strongest updrafts (June 11, 1999) New finding could help in predicting hail, tornadoes
Lightning follows the Sun (June 10, 1999) Space imaging team discovers unexpected preferences
Spirits of another sort (June 10, 1999) Thunderstorms generate elusive and mysterious sprites.
Getting a solid view of lightning (June 9, 1999): New Mexico team develops system to depict lightning in three dimensions.
Learning how to diagnose bad flying weather (June 8, 1999): Scientists discuss what they know about lightning's effects on spacecraft and aircraft.
Three bolts from the blue (June 8, 1999): Fundamental questions about atmospheric electricity posed at conference this week.
Lightning Leaders Converge in Alabama (May 24, 1999): Preview of the 11th International Conference on Atmospheric Electricity.
What Comes Out of the Top of a Thunderstorm? (May 26, 1999): Gamma-rays (sometimes).
Lightning research at NASA/Marshall and the Global Hydrology and Climate Center.
It most certainly can be used on a small-scale, short-term basis to monitor the progress of storms.
"We can use lightning to monitor and study storms, including severe thunderstorms," Christian explained, since the lightning can only be generated by convection within a cloud system. "It's tightly coupled with the dynamics and physics of the storm. We use it to monitor its evolution and life."
As successful as OTD and LIS have been - and they are expected to continue operating for roughly another year and another six years, respectively - they can only be used in research. Their view is limited to a small area directly under their satellites, so global or even regional monitoring is impossible.
To fill that role, Christian and his team are studying designs for a Lightning Mapping Sensor that would be placed aboard geostationary weather satellites. From 35,680 km (22,300 mi) up, the sensor could track severe activity and enhance meteorologists' warning capabilities.
LIS primer from the Global Hydrology and Climate Center
Global Hydrology and Climate Center home page
More Space Science Headlines - NASA research on the web
NASA's Earth Science Enterprise Information on Earth Science missions, etc.
45th Weather Squadron at Patrick AFB,
lightning reference page.
National Severe Storms Laboratory, Norman, OK
Numerical Modeling at National Severe Storms Laboratory
The New Mexico Tech 3D Lightning Mapping System
Lightning Detection and Ranging project at Kennedy Space Center.
National Severe Storms Laboratory Photo Library, where we got a lot of the neat pictures in these stories.
For more information, please contact:|
Dr. John M. Horack , Director of Science Communications
Curator: Bryan Walls
NASA Official: John M. Horack | <urn:uuid:83d2a98d-f3a8-4afd-bd65-8eeff7496a29> | 3.234375 | 1,373 | Content Listing | Science & Tech. | 42.82194 |
The Hottest Planet Venus is a dim world of intense heat and volcanic activity. Similar in structure and size to Earth, Venus' thick, toxic atmosphere traps heat in a runaway 'greenhouse effect.' The scorched world has temperatures hot enough to melt lead. Glimpses below the clouds reveal volcanoes and deformed mountains. Venus spins slowly in the opposite direction of most planets. | <urn:uuid:3b3b065c-1559-4d5f-bcc3-09abd59bbfe4> | 3.046875 | 78 | Knowledge Article | Science & Tech. | 44.668393 |
Chel Anderson is a botanist and plant ecologist for the Minnesota Department of Natural Resources. She lives here in Cook County and joins us periodically to talk about phenology or what’s going on in the woods right now. Welcome, Chel.
Well, Chel, we had one big storm. Let’s talk about storm damage.
Anderson: Yeah, that’s a great topic, I think, very immediate, shall we say, since we’re right in the aftermath and getting a better sense of just what all of the impacts are that are easy to say. And, of course, one of the most obvious, besides damage to your home or your yard or local buildings, is all the trees down. And, there are a lot of trees down.
Oh, I’ve talked to so many people that basically had to carve their way out of their driveways. And, I know I did in a number of cases run into that and a number of people, particularly away from the shore, there were a lot of trees down along the shore, but this was back inland where you might not expect quite so much.
Anderson: Right, true, and I think it's a great reminder of how, while things like the big blowdown in 1999 really loom large and for good reason in our minds, this kind of damage to the canopy trees in particular in the forest and woodlands is happening all the time and these smaller events, in terms of the area and mass that they affect, may be smaller and not as easy to visualize. There is still a tremendous—there's probably literally millions of trees that in this storm were impacted whether or not they came down altogether or were damaged somehow would vary quite a bit, but there is a tremendous effect on the canopy trees in particular. And, to some degree, this is a very important process within forest and woodland ecosystems, because it brings larger woody material down to the forest floor which is utilized by many things and is very important to various functions within any forest ecosystem. But, of course, it can also have negative effects in that it can open up areas of light on the forest floor, which can make for significant changes to plant life that normally grows in the shade. And, in terms of long-term impacts of something like climate change, this is one of the effects that we anticipate we will see as a result of climate change here and that is something that is generally referred to as the thinning of the forest canopy. This will not necessarily happen in huge events like ’99 blowdown, but will happen as a result of more frequent, larger, more impactful storms, like the one we’ve just had, and take out canopy trees and bring more light to the forest floor.
I’m assuming those would be the leafy trees?
Anderson: Yeah, well, any tree that is growing at the top of the structure of the vegetation.
So, some of the white pines?
Anderson: It could be conifers; it could be deciduous species like aspen and birch. A storm at this time of year has a relatively smaller effect on our leafy trees, our deciduous leafy trees, because they’ve lost their leaves, so they present less of an obstruction to the wind. So, the wind is exerting less force on them than if this had happened two months ago when they still had their leaves.
Yeah, they wouldn’t have that kind of wind-sail thing.
Anderson: Exactly. So, the conifers—as a proportion of the total forest canopy out there—they’ve probably taken a larger hit than the aspen or birch, for instance.
A lot of spruce and balsam seem to come down. Does that have anything to do with their root structure?
Anderson: Well, in part. They do have very broad root systems, but so do many of our deciduous species. But, it mostly has to do with the fact that they present a lot of surface to the wind, compared to the other trees, and they’re less protected by the other trees, because the wind can sail through those bare canopies of the other trees. And, of course, the biggest effects on any species of tree out there are going to be the trees that are on the edge of something, because they’re going to take the brunt of the wind first and they’re going to be totally exposed to the force of the wind. If you’re a tree growing within the forest, then you’re sheltered by your fellow trees around you. And trees that grow on the edge, if they’ve started from the beginning growing on the edge, they develop extra special strength qualities of strength both in their roots and in their trunks and branches that help them, you know, resist and be more firm to the wind. So, that doesn’t mean they can’t be toppled, they can, but they definitely adapt to that as they grow, just like your tomato plants. If you grow your tomato plants inside before you put them out, and you never put them until they’ve got to be tall, ready, looking good, you put them out, they fall right over, because the plant has not been exposed to any jostling by the wind. So, the cells have not responded to that particular condition and they aren’t ready for it and they need time. Again, as I said, bring coarse, woody debris down to the forest floor is a very important thing and that’s not a bad thing, and having it there is a good thing. So, if you have stuff that’s come down in the forest around that isn’t in your way, other than if you have a huge area blown down around your home or something, don’t feel the immediate need or that there’s some ecological need to go out and tidy it up, because the decaying of that material on the forest floor is essential to a healthy ecosystem, and it helps prevent the big rains that we had, which is another component of this storm, from going like a bullet to the nearest stream or lake, and slowing down the water is something that this kind of coarse, woody debris as it decays becomes like sponges and it soaks things up and slows water from moving across the surface of the land when it falls so heavily like it did this time at times. Big storms like we had are the kinds that really can contribute huge amounts of sediment—if a watershed isn’t in good shape in the forest around it—it can contribute tremendous sediment loads that can really spoil spawning options for fish. And, a lot of that sediment, in a big storm like this, ends up in Lake Superior, which has a lot of near-shore spawning for species that, again, require areas of boulders and gravel and cobbles that are not covered with sediment and where all the spaces in between them aren’t filled in with finer sediments of clay.
So, in other words, with a storm like that, there was good news and bad news.
Anderson: Yeah, as with most things.
Chel Anderson, DNR botanist and plant ecologist. Thanks again for helping us understand what’s going on around us and putting the big October blow into perspective.
Anderson: You’re welcome. | <urn:uuid:b28a5128-c413-49d7-bbd3-87a7dcc2cf15> | 3.046875 | 1,552 | Audio Transcript | Science & Tech. | 55.565876 |
This tutorial assumes that you know the basics behind binary code, bits, and bit-wise operators. You don't need to know what every operator does by heart, but you should understand the fundamentals of bit-wise manipulation and have the ability to look up anything you don't know. If you need to learn these things or just want a refresher, you can check out my tutorial on Understanding Bit-Wise Operators in Java.
Let me say first of all, that often you don't need to use bit-wise operators to accomplish a goal. I'm simply showing some things you can do with the operators. It will be up to you to decide when and if you can apply these operators to your program.
The most common example I can think of for bit-wise operators are Colors. Yep, the pixels on your screen. Look up an RGB chart and you will see that values can range from 0-255 for each red, green and blue. That is only 1 byte of information for each color value. Using bit-wise operators, we can store small blocks of data in a larger block of data. In the case of colors, we can store the four bytes corresponding to red, green, blue, and alpha within 1 single integer. In fact, the Color class in Java internally stores the color value as an integer. The way it works is the left-most byte in the integer is the alpha value which can be from 0-255, the next byte to the right is red, then green, then blue. It looks like this:
aaaa aaaa rrrr rrrr gggg gggg bbbb bbbb
where a, r, g, and b are individual bits that are the value of that part of the color.
So let's create a color using just an integer and some bit-wise operators. First, let's pick the color and write down the RGB values we want. I am going to choose this nice lime green color. The values are, Red=173, Green=255 and Blue=47. The following is a simple GUI that displays a panel of the set color. (Note: the Color class handles all the bit-wise manipulation for you. You would never be obligated to do it yourself like this.)
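A minimal sketch of such a program follows; the exact layout is illustrative, but the createColor() method shows the ARGB packing described above:

    import java.awt.Color;
    import java.awt.Dimension;
    import javax.swing.JFrame;
    import javax.swing.JPanel;

    public class ColorDemo {

        // Pack alpha, red, green and blue into one int, in ARGB byte order.
        private static int createColor(int alpha, int red, int green, int blue) {
            alpha &= 0xff;   // keep only the low byte of each component
            red   &= 0xff;
            green &= 0xff;
            blue  &= 0xff;
            return (alpha << 24) | (red << 16) | (green << 8) | blue;
        }

        public static void main(String[] args) {
            int argb = createColor(255, 173, 255, 47);   // opaque lime green

            JPanel panel = new JPanel();
            panel.setPreferredSize(new Dimension(400, 300));
            // Color(int, boolean) interprets the int with the same ARGB layout.
            panel.setBackground(new Color(argb, true));

            JFrame frame = new JFrame("Bit-wise color demo");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(panel);
            frame.pack();
            frame.setVisible(true);
        }
    }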
As an exercise, take a look at the createColor() method and try to understand what the operators are doing. Get out a piece of paper and write yourself a little math problem with the bits. Discover what's happening behind the scenes. (Remember &= or <<= work the same way as +=: apply the operator to the variable and the operand, then assign the result back. E.g., alpha &= 0xff is the same as alpha = alpha & 0xff.)
Bonus Question: Calculate how many pixels there are on your entire screen using your resolution values. Our small window only had 400 x 300. If each of those pixels has to represent an RGBA value (4 bytes), how much memory, in bytes, does that require?
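For example, assuming a common 1920 x 1080 display (substitute your own resolution):

    public class ScreenMemory {
        public static void main(String[] args) {
            long pixels = 1920L * 1080L;   // 2,073,600 pixels
            long bytes  = pixels * 4L;     // 4 bytes (ARGB) per pixel = 8,294,400 bytes
            System.out.println(bytes / (1024.0 * 1024.0) + " MiB");  // roughly 7.9 MiB
        }
    }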
Another use for bit-wise operators would be using individual bits as flags. When I was working in C++, I tended to see this used a lot. You would define a single bit to represent a boolean value or flag. You could set a bunch of these and then | them together to get combinations of flags as 1 parameter. Tell me, if you didn't have to write the class, which constructor would you prefer to use in the main method?
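A hypothetical sketch of the kind of class being described (the class name and flag names are invented for the example):

    public class Window {

        // Each option gets its own bit, so any combination can be OR'ed together.
        public static final int RESIZABLE     = 1 << 0;  // 0001
        public static final int FULLSCREEN    = 1 << 1;  // 0010
        public static final int BORDERLESS    = 1 << 2;  // 0100
        public static final int ALWAYS_ON_TOP = 1 << 3;  // 1000

        // Option 1: one boolean parameter per setting.
        public Window(boolean resizable, boolean fullscreen,
                      boolean borderless, boolean alwaysOnTop) { /* ... */ }

        // Option 2: a single int whose bits carry all the settings.
        public Window(int flags) {
            boolean resizable  = (flags & RESIZABLE) != 0;   // test each bit with &
            boolean fullscreen = (flags & FULLSCREEN) != 0;
            // ...
        }

        public static void main(String[] args) {
            Window a = new Window(true, false, true, false);
            Window b = new Window(RESIZABLE | BORDERLESS);   // same settings, one argument
        }
    }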
As you can see, we can break all of our Objects / raw types into a series of bytes and using bit-wise operators we could convert them to any other type. We could also design Objects to use the least amount of memory possible. I can fit a TicTacToe board into an integer and still have room in that integer to keep a score for the player. Most people would probably have the score itself as an integer, and an array of chars (or worse Strings) for the board. Though in Java, we don't really overly concern ourselves with memory management. Most of the time, it's better to have understandable code than to have optimally efficient code, but it's a nice skill to know. | <urn:uuid:3d9a49e3-4479-41aa-ad7f-8237a47e5384> | 3.875 | 859 | Comment Section | Software Dev. | 68.293721 |
Java programming tutorial for beginners
The aim of this and the following pages is to teach you how to program Java, assuming you've done little or no programming in the past. Before you proceed with this tutorial, you should probably read through the guide to getting started with Java: that guide will tell you how to install the necessary software you need to program in Java. Especially if you're a complete beginner, you should make sure you've followed the getting started guide before starting on this tutorial, as it takes you through the mechanics of using the Java development tools. (An exception might be if you are working with a programming tutor who has taken you through these mechanics, or shown you a different tool than the one that we explain.) In this tutorial, we're going to concentrate mainly on the actual Java programming language, not on the mechanics of using the tools.
As you read through the tutorial, you'll frequently see example program snippets. In general, you should try running them, and altering them slightly to see how your changes affect how they run. To see how to run them, see the page on running your first line of Java— you essentially follow this procedure, but insert several lines as necessary.
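For instance, a first program of the kind you will run and then tweak might be no more than this (the guide's own examples may differ):

    public class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello, world!");   // prints one line to the console
        }
    }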
The topics currently covered in this tutorial are as follows. They're generally designed to be read in order and link on from one another:
Written by Neil Coffey. Copyright © Javamex UK 2012. All rights reserved. | <urn:uuid:9c16416a-0c87-4b1d-bf89-55eba71c3f00> | 3.109375 | 293 | Tutorial | Software Dev. | 49.689479 |
What exactly creates high and low pressure...and is there a tendency for one to be found more frequently during certain times of the year?
High pressure is generated by sinking air, low pressure by rising air. They can be generated on a very small scale by air rising from heated pavement (you see hawks soaring in these thermals) or cold air moving down a mountainside into a valley (why the coldest temperatures are found in such places). Highs and lows can be generated on a much bigger scale by the large scale circulation of the atmosphere. The same basic principles (sinking air/high pressure, rising air/low pressure) apply. In terms of locations, high pressure is most commonly found on a global scale in the tropics and over/near the poles...low pressure over the equator and in what we call the mid-latitudes...essentially spanning the continental U.S. Highs and lows in this zone (the mid-latitudes) tend to shift; for example in the Pacific Northwest, high pressure is most common during the summer and early autumn months, low pressure in the late autumn and winter months. | <urn:uuid:c5fb6c98-3f5c-45c6-bda9-6c5ae06cfbe5> | 3.828125 | 232 | Knowledge Article | Science & Tech. | 57.904567 |
THERE was no splash, just a flash of red as the body plunged into the pool and disappeared. At first, Simon Pollard thought his eyes were playing tricks on him. Spiders don't commit suicide, yet this one - a small red crab spider about a centimetre across - seemed to have thrown itself into the mouth of a carnivorous plant.
Pollard, a spider expert from Canterbury Museum in Christchurch, New Zealand, had never seen anything like this before his visit to Sarawak, in the Malaysian part of the island of Borneo. The slimy fluid inside the trap of the pitcher plant spells death to any unsuspecting insect. Lured to the rim by sweet drops of nectar, most are unable to keep their footing on the slippery surface and fall to their doom.
But Pollard soon discovered that reckless behaviour was quite normal among the local red crab spiders. Whenever he ...
To continue reading this article, subscribe to receive access to all of newscientist.com, including 20 years of archive content. | <urn:uuid:208bf5f8-bc3c-4d82-aa22-0ef163c8319c> | 3.3125 | 212 | Truncated | Science & Tech. | 58.882429 |
So the shrinking Arctic ice will make it easier for us to tap the Arctic's oil and gas reserves (21 January, p 24). That's oil and gas that we will burn, producing carbon dioxide that will cause the ice cap to shrink even further. Since less polar ice means less sunlight is reflected from the Earth's surface, and hence more energy is absorbed, the melting of polar ice is already starting a positive feedback mechanism leading to global warming. Do we really need to add to the problem?
To continue reading this article, subscribe to receive access to all of newscientist.com, including 20 years of archive content. | <urn:uuid:49ed897d-29ad-44a3-b0f7-fc24f6261aaf> | 3.40625 | 127 | Truncated | Science & Tech. | 57.72777 |
For undocumented functions, this is a particular oddity. For me, this function has so far returned only two values for any particular image: 1 and 8. And, after experimenting with it about forty or fifty times, I think I understand how it works, based simply on my experience.
The value returned is the number of bits used for the particular color channel within an image. That means how variable a particular color is within an image. While I received 1 and 8, it could theoretically be as much as 16 or 32, depending on how the technology evolves over, say, the next decade or so.
To best understand this function, you should probably also understand the ImageMagick function getColorValue. In terms of the ImageMagick class, a color can be measured between 0 and 1, in terms in of a particular pixel. Red, Green, Blue, or, any particular color of any color scheme, can be something like 0.501960784314. But, since each pixel is a combination of the three colors, you can have 0.845 for red, 0.254 for green, and 0.11 for blue.
How does this all tie in with the getChannelDepth function? Easy. If all of the pixels in an image are either values 1 or values 0 for the particular Red/Green/Blue values, then this function will return a 1 for 1 bits per pixel for that color channel. If, however, any single pixel in the image for the inputted channel parameter isn't exactly 1 or 0 for the particular color channel, then this function will return an 8 for 8-bit colors per pixel for that color channel.
If you receive back a 1 for every single color channel put in the parameter, that means you're dealing with an image that's 16-bit -- you know, like those computer games published in 1982, or the Atari console games. You won't ever forget a 16-bit color green, trust me. If you get back an 8 for every single color channel put in the parameter, that means you're dealing with any standard, modern image.
You can input any color channel, based on the channel constants available within the ImageMagick class. See them here: http://www.php.net/manual/en/imagick.constants.php#imagick.constants.channel . That means a format like imagick::CHANNEL_UNDEFINED, but with the "_UNDEFINED" value being anything here: undefined, red, gray, cyan, green, magenta, blue, yellow, alpha, opacity, matte, black, index, all, and default.
For any image with one pixel color of RGB value 1 / 0.501960784314 / 0.501960784314 (#FF8080), you get this result:
Channel - 'Undefined' : 1
Channel - 'Red' : 1
Channel - 'Gray' : 1
Channel - 'Cyan' : 1
Channel - 'Green' : 8
Channel - 'Magenta' : 8
Channel - 'Blue' : 8
Channel - 'Yellow' : 8
Channel - 'Alpha' : 1
Channel - 'Opacity' : 1
Channel - 'Matte' : 1
Channel - 'Black' : 1
Channel - 'Index' : 1
Channel - 'All' : 8
Channel - 'Default' : 8
If all colors are between 0 and 1 with getColorValue function, each of these results with be 1. If you're dealing with an image that has full color spectrum depth (almost any given photograph), you'll get 8 for red, gray, cyan, green, magenta, blue, yellow, all, and default, with a 1 for the other remaining channels. Perhaps some use for automated image editing, like use with posterize or oilpaint functions.
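If the behaviour described in this note is accurate, the per-channel decision can be mimicked in a few lines. The sketch below (Python, purely illustrative) is not the ImageMagick implementation, just the rule the note describes applied to the 0-to-1 normalized channel values mentioned earlier; the sample pixel values are made up to match the #FF8080 example.

```python
# Mimic of the per-channel depth rule described in the note above:
# a channel needs only 1 bit if every pixel value is exactly 0 or 1; otherwise report 8.

def channel_depth(channel_values, full_depth=8):
    return 1 if all(v in (0.0, 1.0) for v in channel_values) else full_depth

# One pixel with normalized RGB (1, 0.501960784314, 0.501960784314), i.e. #FF8080.
red, green, blue = [1.0], [0.501960784314], [0.501960784314]
print(channel_depth(red), channel_depth(green), channel_depth(blue))   # prints: 1 8 8
```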
(PECL imagick 2.0.0)
Imagick::getImageChannelDepth — Gets the depth for a particular image channel
int Imagick::getImageChannelDepth ( int $channel )
Gets the depth for a particular image channel.
Parameters
TRUE on success.
| <urn:uuid:5e4b572a-41ae-4fc4-add0-c531a0675d0e> | 2.734375 | 916 | Documentation | Software Dev. | 59.784694 |
This is the local valid time for which the observation was taken. There may be some ambiguity around the times when Iowa changes from CDT->CST and vice-versa.
This value is simply the temperature of the air. Values are in units of degrees Fahrenheit.
Wind direction can sometimes be a hard concept to interpret. The values are in integer degrees for where the wind is blowing from: 0° is a wind from the North, 90° is a wind from the East, 180° is a wind from the South, and 270° is a wind from the West.
For example, if the value is 90, this is an easterly wind, meaning that if you were facing east, the wind would be in your face.
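As an aside, a common first step when consuming this field is turning the degrees into a compass label. The conversion below is a generic sketch, not part of the observation format itself.

```python
# Convert a wind-direction value in degrees ("blowing from") to a 16-point compass label.
LABELS = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
          "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]

def compass_label(degrees: float) -> str:
    index = int((degrees % 360) / 22.5 + 0.5) % 16
    return LABELS[index]

print(compass_label(90))    # E, the easterly wind from the example above
print(compass_label(180))   # S
```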
For the schoolNet sites, this value is an instantaneous measurement of the wind speed. Values are in knots.
This value is the accumulated amount of precip recorded at the site for the local day. Values are in inches.
This value is the accumulated amount of precip recorded at the site for the current month. Values are in inches.
Instantaneous values of solar radiation. Values are in Watts per meter squared.
Relative humidity is expressed as a percentage. It is a measure of the amount of water vapor currently in the air versus the capacity of the air.
Atmospheric pressure expressed in inches of mercury. | <urn:uuid:e238b4f5-e271-4136-94f8-e569bc3b0c01> | 2.9375 | 285 | Structured Data | Science & Tech. | 64.215375 |
are in Italics
Fish & Wildlife Mgmt.
Model Design & Building
Pulp and Paper
Reptile & Amphibian Study
Small Boat Sailing
Soil & Water
- Tell the meaning of the following: alpha particle, atom, background radiation, beta particle, curie, fallout, half-life, ionization, isotope, neutron, neutron activation, nuclear energy, nuclear reactor, particle accelerator, radiation, radioactivity, roentgen, and X ray.
- Make three-dimensional models of the atoms of the three isotopes of hydrogen. Show neutrons, protons, and electrons. Use these models to explain the difference between atomic weight and number.
- Make a drawing showing how nuclear fission happens. Label all details. Draw a second picture showing how a chain reaction could be started. Also show how it could be stopped. Show what is meant by a "critical mass."
- Tell who five of the following people were. Explain what each of the five discovered in the field of atomic energy: Henri Becquerel, Niels Bohr, Marie Curie, Albert Einstein, Enrico Fermi, Otto Hahn, Ernest Lawrence, Lise Meitner, William Roentgen, and Sir Ernest Rutherford. Explain how any one person's discovery was related to one other person's work.
- Draw and color the radiation hazard symbol. Explain where it should and should not be used. Tell why and how people must use radiation or radioactive materials carefully.
- Do any THREE of the following:
- Build an electroscope. Show how it works. Put a radiation source inside it. Explain any difference seen.
- Make a simple Geiger counter. Tell the parts. Tell which types of radiation the counter can spot. Tell how many counts per minute of what radiation you have found in your home.
- Build a model of a reactor. Show the fuel, the control rods, the shielding, the moderator, and any cooling material. Explain how a reactor could be used to change nuclear into electrical energy or make things radioactive.
- Use a Geiger counter and a radiation source. Show how the counts per minute change as the source gets closer. Put three different kinds of material between the source and the detector. Explain any differences in the counts per minute. Tell which is the best to shield people from radiation and why.
- Use fast-speed film and a radiation source. Show the principles of autoradiography and radiography. Explain what happened to the films. Tell how someone could use this in medicine, research, or industry.
- Using a Geiger counter (that you have built or borrowed), find a radiation source that has been hidden under a covering. Find it in at least three other places under the cover. Explain how someone could use this in medicine, research, agriculture, or industry.
- Visit a place where X ray is used. Draw a floor plan of the room in which it is used. Show where the unit, the person who runs it, and the patient would be when it is used. Describe the radiation dangers from X ray.
- Make a cloud chamber. Show how it can be used to see the tracks caused by radiation. Explain what is happening.
- Visit a place where radioisotopes are being used. Explain by drawing how and why it is used.
- Get samples of irradiated seeds. Plant them. Plant a group of nonirradiated seeds of the same kind. Grow both groups. List any differences. Discuss what irradiation does to seeds. | <urn:uuid:48337d88-445d-4d63-ba9f-f08f76f387cf> | 3.875 | 768 | Tutorial | Science & Tech. | 55.195631 |
How big is the shuttle?
- The shuttle measures 122.2 feet long, 56.67 feet high, with wingspan of 78.06 feet. The height of the full shuttle stack, including the external fuel tank, is 184.2 feet. Gross weight is 4.5 million pounds at liftoff. That’s almost four times as weighty as the heaviest airplane ever built, the 1.2-million-pound Russian An-225 airplane. But when it returns, the orbiter weighs 230,000 pounds, about as much as a Boeing 757 jet.
- The cargo bay measures 60 feet long and 15 feet in diameter, and can carry cargo equivalent to the size of a school bus. Maximum payload is 29.5 metric tons, or 32.5 U.S. tons. The average school bus weighs 15 tons.
How much money is involved?
- Researchers estimated in 2005 that the average cost of a shuttle mission would be $1.3 billion over the life of the program, and roughly $1 billion for the last five years of operation.
- NASA says that salaries for civilian astronaut candidates are based on the federal government's General Schedule pay scale for grades GS-12 through GS-13. Each person's grade is determined according to his or her academic achievements and experience. According to the figures cited by NASA, a GS-12 starts at $65,140 per year and a GS-13 can earn up to $100,701 per year. However, those figures are adjusted to reflect different localities, and civilian astronauts in Houston would receive higher pay levels. Military astronauts receive the salary associated with their rank and experience.
How fast does the shuttle fly?
- Like any other object in low Earth orbit, the shuttle must reach speeds of about 17,500 miles per hour to remain in orbit. The exact speed depends on the shuttle's orbital altitude, which normally ranges from 190 to 330 miles above sea level, depending on the mission. A rough check of this figure is sketched below.
- During ascent, astronauts feel a maximum acceleration of 3 G’s, or three times the force of gravity on Earth’s surface. Most roller coasters give riders a maximum rush of 3 to 5 G’s.
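The 17,500-mph figure can be sanity-checked with the circular-orbit relation v = sqrt(GM/r). The sketch below uses standard values for Earth's gravitational parameter and mean radius; it is a back-of-the-envelope estimate, not NASA's own calculation.

```python
# Rough check of the quoted ~17,500 mph orbital speed, assuming a circular orbit.
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6           # mean Earth radius, m
METERS_PER_MILE = 1609.344

def circular_orbit_speed_mph(altitude_miles: float) -> float:
    r = R_EARTH + altitude_miles * METERS_PER_MILE
    v_ms = math.sqrt(MU_EARTH / r)   # circular-orbit speed in m/s
    return v_ms * 3600.0 / METERS_PER_MILE

for altitude in (190, 330):
    print(f"{altitude} mi altitude: about {circular_orbit_speed_mph(altitude):,.0f} mph")
```

At 190 to 330 miles this comes out around 17,000 to 17,300 mph, consistent with the "about 17,500 miles per hour" quoted above.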
How far does the shuttle go?
- A typical flight plan calls for 250 orbits, giving each crew member about 6.6 million frequent-flier miles.
© 2013 msnbc.com Reprints | <urn:uuid:f602eade-7a06-4345-be25-4f88277d21a1> | 3 | 500 | Listicle | Science & Tech. | 78.427772 |
Trapping education manual for the beginning or inexperienced trapper intended to provide information on North Dakota's predators and furbearing animals and the basics on how to trap them using good trapping skills and sound fur management.
Links by map location or station number to real-time stage and streamflow, real-time water quality, ground-water data, long-term hydrographs, and annual water-data reports to view and download for past and current water conditions in North Dakota.
USGS water resources home page for North Dakota with links to hydrologic studies and long-term and real-time data on streamflow, ground and surface water, droughts, and water quality plus district and publications information.
Assessment of the importance of the Conservation Reserve Program in preventing the decline of grassland breeding birds by preserving grassland habitats in North Dakota. Published as Wilson Bulletin v. 107 no. 4, pp. 709-718 (1995).
Homepage linking the historic journey of Lewis and Clark and the research of the USGS in the same region with links to mapping history, remarkable points on the Missouri River, educational activities, photo gallery, and publications.
Summary of part of the USGS interdivisional Mississippi Basin Carbon project that will study the changes in climate and the environment through carbon cycle changes recorded in lake sediments in the Upper Mississippi River Basin. | <urn:uuid:39b74aa8-5189-4032-96a8-c7d68ca743cb> | 2.890625 | 280 | Content Listing | Science & Tech. | 32.994091 |
In 1990, oceanographer John Martin (1935-1993) published an article demonstrating that increasing the concentration of iron in the surface waters of the Southern Ocean resulted in increased phytoplankton (i.e., microscopic plants such as diatoms) production. Like all plants, phytoplankton converts carbon dioxide (CO2) into organic compounds like sugars through the process of photosynthesis. The by-product of this reaction is oxygen (O2). With respect to biological processes, this is referred to as “primary production”, and the oceans have the highest rates of primary production in the world.
The importance of this study was immediately recognized throughout the scientific community. Simply stated, the “Iron Hypothesis” suggests that increasing iron concentrations in the surface ocean can result in increasing primary production in the oceans, making it a natural process to decrease CO2 concentration in the air. As the plants go through their life cycles, they will eventually die and sink to the seafloor, permanently “sequestering” the carbon stored in their bodies. Of course, the process is much more complex. But, this demonstrates that human activity is not just one-way: i.e., humans only pumping excess CO2 into the air. We can also devise ways in which we can remove large amounts of this excess gas. These processes fall under the broad category of “Carbon Sequestration”. | <urn:uuid:84c48a86-8707-4bf8-b207-9ed85dc158bc> | 3.6875 | 297 | Personal Blog | Science & Tech. | 38.294 |
Date of this Version
Computers use algorithms to evaluate polynomials. This paper will study the efficiency of various algorithms for evaluating polynomials. We do this by counting the number of basic operations needed; since multiplication takes much more time to perform on a computer, we will count only multiplications. This paper addresses the following: a) How many multiplications does it take to evaluate the one-variable polynomial $\sum_{i=0}^{n} a_i x^i = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$ when the operations are performed as indicated? (Remember that powers are repeated multiplications and must be counted as such.) Write this number of multiplications as a function of n. b) Use mathematical induction to prove that your answer is correct. c) Find another way to evaluate this polynomial by doing the operations in a different order so that fewer multiplications are needed. Hint: Think of ways to intermix addition and multiplication and experiment with polynomials of lower degree. Write the number of multiplications as a new function of n. The best algorithm will use only n multiplications. Explain the algorithm you will use. d) How many multiplications does it take to evaluate the two-variable polynomial $\sum_{i=0}^{n}\sum_{j=0}^{n} a_{ij} x^i y^j$ when the operations are performed as indicated? Write this number of multiplications as another function of n. e) Use mathematical induction to prove that your answer is correct. f) Find another way to evaluate the two-variable polynomial by doing the operations in a different order so that fewer multiplications are required. Write down the associated function of n. Do you think that this is the most efficient algorithm? If not, hunt for a better algorithm. | <urn:uuid:22bbc919-f3cf-4b91-b60d-3f4e386fe327> | 4.1875 | 373 | Academic Writing | Science & Tech. | 47.793496 |
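Part (c) is pointing at Horner's rule: rewriting a0 + a1x + ... + anx^n as a0 + x(a1 + x(a2 + ...)) so that only n multiplications are needed. The sketch below, an illustration rather than part of the paper, contrasts the naive term-by-term evaluation (building each power by repeated multiplication) with Horner's form and counts the multiplications in each.

```python
def eval_naive(coeffs, x):
    """Evaluate sum(a_i * x^i) term by term, building each power by repeated multiplication."""
    total, mults = coeffs[0], 0
    for i in range(1, len(coeffs)):
        power = x
        for _ in range(i - 1):   # x^i costs i - 1 extra multiplications
            power *= x
            mults += 1
        total += coeffs[i] * power
        mults += 1
    return total, mults          # n(n + 1)/2 multiplications for degree n

def eval_horner(coeffs, x):
    """Horner's rule: a0 + x*(a1 + x*(a2 + ...)); exactly n multiplications."""
    total, mults = coeffs[-1], 0
    for a in reversed(coeffs[:-1]):
        total = total * x + a
        mults += 1
    return total, mults          # n multiplications for degree n

coeffs = [5.0, -2.0, 0.0, 3.0, 1.0]   # an arbitrary degree-4 example
print(eval_naive(coeffs, 2.0))        # (41.0, 10), where 10 = 4*5/2
print(eval_horner(coeffs, 2.0))       # (41.0, 4)
```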
Quiz: What You Don’t Know About Wind Energy
Photograph by Pascal Rossignol, Reuters
You've seen those tall white turbines turning on hillsides or on windy plains, but how much do you know about the energy captured from wind?
As early as 200 B.C., wind power was used in China, and later, in Persia. How were the first windmills that appeared in Europe in the Middle Ages different from those earlier turbines?
- They were used to generate electricity.
- They turned on a horizontal axis.
- They turned on a vertical axis.
- They were used to grind grain.
Pumping water and grinding grain were the first recorded uses of wind power, but the iconic early windmills of Europe differed from their forerunners in Persia and China in that they turned on a horizontal axis instead of a vertical one.
Wind energy actually relies on what other force of nature?
- Solar energy
- Water power
- Geothermal power
- The moon's gravitational pull
Wind is caused by the uneven heating of the Earth's surface by the sun.
In the mid-1980s, the diameter of the typical wind turbine rotor, including the hub and the blades, was about 65 feet (20 meters). How did the size of rotors change over the next two decades?
- They are now half the size.
- They doubled in size.
- They tripled in size.
- They are five times larger.
The wind energy industry achieved major advances in power and efficiency with rotors that measure about 330 feet (100 meters) in diameter. By 2011, prototypes with a diameter of about 420 feet (128 meters) were the largest turbines developed.
The Roscoe Wind Complex, with more than 600 turbines on 100,000 acres (40,468 hectares) of cotton farmland south of Abilene, Texas, was the world’s largest wind farm in 2011. It generates enough electricity to power how many homes?
- 1.2 million
- 2.6 million
Roscoe's total capacity of 781.5 megawatts delivers enough energy to power 265,000 Texas homes, says operator E. ON Climate & Renewables.
What wind speed is said to be necessary to make large wind energy systems economically viable?
- 3 meters per second (7 miles per hour)
- 6 meters per second (13 mph)
- 9 meters per second (20 mph)
- 12 meters per second (27 mph)
Large systems are generally said to require wind speeds of 6 meters per second (13 mph.) Stanford University researchers estimate that suitable areas are widespread enough globally that if just 20 percent of potential were captured, it would satisfy the world’s electricity demand seven times over.
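One reason such a threshold matters is that the power available in the wind grows with the cube of the wind speed. The sketch below applies the standard formula P = 1/2 * rho * A * v^3 to a 100 m rotor; the air density and the two wind speeds are illustrative assumptions, and real turbines capture well under the theoretical Betz limit of roughly 59 percent of this available power.

```python
# Available wind power P = 0.5 * rho * A * v^3 for a given rotor diameter (illustrative only).
import math

RHO_AIR = 1.225   # kg/m^3, assumed sea-level air density

def available_power_mw(rotor_diameter_m: float, wind_speed_ms: float) -> float:
    area = math.pi * (rotor_diameter_m / 2.0) ** 2
    return 0.5 * RHO_AIR * area * wind_speed_ms ** 3 / 1e6

for v in (6.0, 12.0):   # doubling the speed gives roughly eight times the power
    print(f"100 m rotor at {v} m/s: {available_power_mw(100.0, v):.2f} MW available")
```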
According to the U.S. Fish and Wildlife Service, what is responsible for the most bird deaths?
- Building window strikes
- Communications towers
- Wind turbines
Building window strikes may account for 97 million to 976 million bird deaths each year; cars, 60 million; communications towers, 4 to 5 million, the agency says. Wind turbine rotors kill an estimated 33,000 birds annually.
What measure has been shown to significantly reduce bat deaths at wind turbine sites?
- Strobe lights
- Intermittent high-pitched sound
- Reduction in turbine rotations
- No measures have been shown to mitigate bat mortality due to wind turbines.
A slight increase in the speed at which turbines are programmed to begin rotating reduced rotations and cut bat deaths by as much as 93 percent with less than a 1 percent power loss, a published study by researchers at Bat Conservation International found.
Off what coast was the world's first commercial offshore wind facility built?
- Galveston, Texas
- Borkum, Germany
- Kent, in the United Kingdom
- Vindeby, Denmark
The world's first commercial offshore wind facility was built in 1991 off the coast of Vindeby, Denmark, in the Baltic Sea.
How large is the wind energy potential off the coasts of the United States, compared to the nation’s current installed electricity capacity, according to government estimates?
- 10 percent
- 20 percent
- 50 percent
- 100 percent
The U.S. Department of Energy (DOE) estimates that more than 900,000 megawatts, close to total current installed U.S. electrical capacity, of potential wind energy exists off the United States coasts. But as of 2011, the U.S. had no offshore wind installations.
What nation had the largest amount of wind energy capacity installed as of 2010?
- The United States
In 2010, China overtook the United States as leader in installed wind energy capacity, with a building spree that increased its wind power base by 73 percent in just one year.
In what country does wind power provide the largest share of the nation’s electricity?
- The United States
Denmark leads the world in reliance on wind power, which provides 21 percent of its electricity. The next most wind-dependent nation is Portugal, at 18 percent.
By 2010, there were enough wind turbines to generate what percentage of the world’s electricity?
- 2.5 percent
- 5.5 percent
- 10.5 percent
- 20.5 percent
Total installed wind capacity by the end of 2010 was enough to generate 430 terawatt-hours of electricity per year, 2.5 percent of world power demand and more than the total electricity demand of the United Kingdom.
Great work! For you, learning about wind energy is a breeze!
Way to go! Your wind power knowledge is more than bluster! See if taking the quiz again can produce a better score.
Could be better. See if you can catch a better current on a second try.
Brush up on your wind energy facts at The Great Energy Challenge, and then retake the quiz to see how much you’ve learned.
Special Report: The Great Shale Gas Rush
The shale gas industry maintains that it protects drinking water and land. But mistrust has been sown in rural communities.
The industry promises jobs to a state badly in need of an economic boost, but the work so far isn't where you might expect it to be.
Track the growing mark that energy companies have etched on Pennsylvania since first producing natural gas from shale. | <urn:uuid:44733185-9903-4b69-8441-adb9093b8d62> | 3.609375 | 1,338 | Q&A Forum | Science & Tech. | 57.730828 |
Our hypothesis is that ‘spatial and temporal variations in diversity and ecosystem function of the sea ice microbial community are sensitive indicators of changing climatic conditions’. The research will develop baseline long-term data on microbial biodiversity and community structure in the “grass” of ice-covered regions – the primary and secondary producers at the base of the food web. We will ... do this using both conventional methods and molecular technology, and will quantify abundances and species identifications using a range of traditional and modern techniques including DNA fingerprints, high throughput sequencing and single cell genome amplification. We will develop an understanding of the functional role of various components of the sea ice microbial community using ecophysiological methods we have developed over nearly 20 years of Antarctic research coupled with new technologies brought together with our international collaborators on this project.
Microorganisms are the most diverse and by far the most abundant biological entities in marine environments and they are often sensitive indicators of environmental change because of their rapid lifecycles. Given the projected changes to the volume and extent of annual sea ice, SIMCO or ‘sea ice microbial communities’ provide an ideal model system to measure the effects of environmental change in Polar Regions. This research will generate a bio-inventory of the microorganisms in sea ice using both conventional methods and molecular technology, and will quantify abundances to generate community fingerprints for each field site.
In 2010-2011 our research was conducted at Cape Evans. The following data was collected.
- CTD (conductivity/temperature/depth) casts of temperature, conductivity, salinity, depth, live chlorophyll were made each day at solar noon through the sea ice to bottom (25m).
- PAR recorded at 5m above sea ice, using Skye light sensors (units of microeinsteins/m2/s for PAR and microW per cm2 for UVB). Data were recorded daily at solar noon.
- HOBO data loggers embedded in sea ice at approx 300mm intervals through the depth of the ice (2m). Recording continuously at 10 min intervals.
In 2011-2012 our research was conducted at Turtle Rock and Cape Evans. The following data was collected.
- Turtle rock: CTD casts of temperature, conductivity, salinity, depth, live chlorophyll were made each day at solar noon through the sea ice to bottom (25m).
- Air temperature, solar visible and UVB irradiances were recorded each day at solar noon (approx 1330)
- Cape Evans: CTD casts of temperature, conductivity, salinity, depth, live chlorophyll were made on selected days at approximately solar noon through the sea ice to bottom (30m) | <urn:uuid:89ecedc4-dbc5-48c2-bf6b-392c2ae21db7> | 3.3125 | 555 | Academic Writing | Science & Tech. | 23.769209 |
Catalysts are compounds that accelerate a chemical reaction but are not consumed in the reaction. They do this by providing an alternate reaction mechanism, lowering the activation energy of a reaction; as we showed in previous lessons, the activation energy of a reaction is closely related to its speed. It is very important to remember that catalysts do not affect K, the equilibrium constant, ΔH, or any other factors but rate. Catalysts may be destroyed in one phase of a reaction and regenerated in another, or they may not be involved directly at all. An example of the former phenomenon is the catalytic destruction of ozone (O3) by nitrogen oxides:
NO + O3 → NO2 + O2
NO2 + O → NO + O2
In this reaction, both NO and NO2 are catalysts, and the reaction can start with either step. This particular reaction is important environmentally, as ozone depletion increases the amount of harmful ultraviolet radiation reaching the earth's surface. Some aircraft engines, especially those found on the Concorde supersonic transport, generate nitrogen oxides; reducing the NOx pollution of aircraft engines is a major goal in aerospace today. This is just one example of the effects catalysts have on our lives.
In the example above, the catalyst was in the same state (gaseous) as the reactants, so it is called a homogenous catalyst. Homogenous catalysts are often destroyed and remade in a reaction, as were the nitrogen oxides above. In other reactions, the catalyst is in a different phase from the reactants; these catalysts are referred to as heterogeneous. Heterogeneous catalysts are not usually consumed and regenerated in a reaction. In a reaction equation, the catalyst or element is shown over the arrow, representing the catalyst's involvement by accelerating the reaction.
Metallic heterogeneous catalysts are particularly important in industrial applications. Over a trillion dollars worth of chemicals each year are generated through catalyzed reactions, from fertilizers to fuels and pharmaceuticals. Palladium, platinum, and rhodium are some of the most common metallic catalysts in industrial use. These metals are also used in the catalytic converter of an automobile, which breaks down some of the more harmful pollutants in engine exhaust (including carbon monoxide, nitrogen oxides, and uncombusted gasoline).
A particular class of catalysts called enzymes are found in living organisms. Since our bodies are too cool for most reactions to occur at sufficient speeds, enzymes must be used. These large organic molecules accelerate one and only one reaction; the reactants in the catalyzed reaction (called substrates) fit into the enzyme in a particular geometric manner. Improper substrates will not fit into the active site, the area on the enzyme where the catalysis takes place. The substrates then join together with the enzyme, forming a molecule called the enzyme-substrate complex, which decays into the desired products and the enzyme. This theory of enzyme-substrate relation is called the "lock and key" model, due to the effect of shape on the process. Enzymes affect almost every reaction in our bodies, including the decomposition of sugar into the forms of energy our cells can use. Without these important catalysts, life as we know if would not be possible.
The concentration of catalysts has some effect on reaction speed. If only a few catalysts are available, the effect on the rate will be small. However, once every set of reacting molecules can be catalyzed, adding more catalyst will have no effect. In general, though, catalysts appear in the rate law of the reaction they catalyze. | <urn:uuid:b143cef4-81a4-4c36-90e9-897cc912b82e> | 4.4375 | 722 | Knowledge Article | Science & Tech. | 24.262004 |
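A rough way to see why lowering the activation energy matters so much is the Arrhenius equation, k = A exp(-Ea/RT). In the sketch below the temperature, pre-exponential factor, and the two activation energies are made-up illustrative values rather than data for any particular reaction.

```python
# Arrhenius equation: k = A * exp(-Ea / (R * T)), with illustrative values only.
import math

R = 8.314           # J/(mol*K), gas constant
T = 310.0           # K, roughly body temperature
A_FACTOR = 1.0e13   # 1/s, assumed pre-exponential factor

def rate_constant(ea_kj_per_mol: float) -> float:
    return A_FACTOR * math.exp(-ea_kj_per_mol * 1000.0 / (R * T))

k_uncatalyzed = rate_constant(75.0)   # assumed barrier without a catalyst
k_catalyzed = rate_constant(50.0)     # assumed lower barrier along the catalyzed path
print(f"speed-up from lowering Ea by 25 kJ/mol: {k_catalyzed / k_uncatalyzed:.1e}x")
```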
Author(s): D. R. Steward, P. Le Grand & E. A. Bernard
This manuscript presents an overview of the Analytic Element Method, and illustrates how this mathematical technique is ideally suited to utilization within a GIS geodatabase model. The Analytic Element Method contains a set of analytic elements that exactly satisfy the governing partial differential equation and represent flow associated with a point, line or polygon. Elements are superimposed to simulate local and regional flow within an infinite domain. A GIS geodatabase model is presented here, which organizes spatial data in a vector format that relates directly to analytic elements. Scripts have been developed to automate the creation of groundwater models from the GIS geodatabase using the computer model MLAEM (Multi-Layer Analytic Element Model). An example is presented to illustrate the efficacy of this approach.
Keywords: Analytic Element Method, groundwater, ground water, Geographic Information System, GIS, geodatabase.
The Analytic Element Method was developed by Otto D. Strack and his students at the University of Minnesota over the past 30 years. This mathematical representation of groundwater flow has been published in books by Strack and Haitjema, and the most recent advances in the methodology were recently summarized in . The AEM began as a means of solving problems with idealized shapes (e.g., flow around cylindrical objects) and has evolved into a robust solution technique capable of modeling local detail within regional models the size of nations (e.g., NAGROM in The Netherlands; www.riza.nl).
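To give a feel for the superposition idea: in an analytic element model each well contributes a logarithmic term to the discharge potential, and the contributions simply add, together with any regional (uniform) flow. The sketch below is a generic illustration of that superposition, not code from MLAEM or from the geodatabase scripts described in this paper; the well positions, discharges, and background flow are assumed values.

```python
# Superposition of point elements (wells) plus uniform flow, analytic-element style.
# Each well adds Q/(2*pi) * ln(r) to the discharge potential; contributions sum linearly.
import math

wells = [
    {"x": 0.0,   "y": 0.0,   "Q": 500.0},    # pumping well, m^3/day (assumed)
    {"x": 250.0, "y": 100.0, "Q": -300.0},   # injection well (assumed)
]
BACKGROUND_QX = 0.5   # uniform regional flow component, m^2/day (assumed)

def discharge_potential(x: float, y: float) -> float:
    phi = -BACKGROUND_QX * x   # uniform flow term
    for w in wells:
        r = math.hypot(x - w["x"], y - w["y"])
        phi += w["Q"] / (2.0 * math.pi) * math.log(r)
    return phi

print(discharge_potential(100.0, 50.0))
```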
Size: 292 kb
Paper DOI: 10.2495/BE050181
This paper can be found in the following book: Boundary Elements XXVII: Incorporating Electrical Engineering and Electromagnetics | <urn:uuid:1435ebf9-da17-4443-8ea5-c9659c050589> | 3 | 444 | Academic Writing | Science & Tech. | 32.080031 |
A line of fixed length l moves so that its ends are on the coordinate axes. Determine the locus of P on this line which divides it in the ratio m:n. What is the locus if m=n?
Here are some hints.
First say that $A=(a,0)$ and $B=(0,b)$ are the endpoints of the segment on the x-axis and on the y-axis respectively.
Because the line segment has constant length, $a^2 + b^2 = l^2$.
If $m = n$ then $P$ is the midpoint of the segment, so $P = (a/2, b/2)$.
As $a \to 0$, $b \to l$, and vice versa.
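One way to carry the hints through to an answer, sketched here under the assumption that P divides the segment from A to B so that AP:PB = m:n:

```latex
% Sketch (assuming AP:PB = m:n, with A on the x-axis and B on the y-axis)
\[
A=(a,0),\qquad B=(0,b),\qquad a^{2}+b^{2}=l^{2},\qquad
P=(x,y)=\left(\tfrac{na}{m+n},\,\tfrac{mb}{m+n}\right).
\]
\[
a=\tfrac{(m+n)x}{n},\quad b=\tfrac{(m+n)y}{m}
\quad\Longrightarrow\quad
\frac{x^{2}}{\left(\tfrac{nl}{m+n}\right)^{2}}
+\frac{y^{2}}{\left(\tfrac{ml}{m+n}\right)^{2}}=1.
\]
\[
\text{If } m=n:\qquad x^{2}+y^{2}=\left(\tfrac{l}{2}\right)^{2}.
\]
```

So for m not equal to n the locus is an ellipse with semi-axes nl/(m+n) and ml/(m+n), and for m = n it is a circle of radius l/2 centred at the origin.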
| <urn:uuid:4d47a22a-f185-44ae-8d94-4bf3d22b130a> | 3.71875 | 119 | Q&A Forum | Science & Tech. | 77.687238 |
What is KOH COMPOUND?
What is the ionic name for the compound KOH? Potassium Hydroxide What is the name of the compound KOH? KOH is Potassium Hydroxide What is the name of this compound KoH?
Best Answer: is this a serious thai travel forum question. ... "Potassium Hydroxide" if you're looking for an answer about chemistry. "Island" if you want to know what Koh ...
KOH is potassium hydroxide. It is highly basic, forming strongly alkaline solutions in water and other polar solvents.
What is KOH COMPOUND? Mr What will tell you the definition or meaning of What is KOH COMPOUND
Potassium hydroxide is an inorganic compound with the formula K OH, commonly called caustic potash. Along with sodium hydroxide (NaOH), this colorless solid is a prototypical strong base. ... KOH reacts with carbon dioxide to give bicarbonate: KOH + CO 2 → KHCO 3 Manufacture
What is the compound name for KOH? ChaCha Answer: KO is refer to as potassium-oxygen. KO is not a stable compound. Thanks for using C...
What is THE NAME OF THE IONIC COMPOUND KOH? Mr What will tell you the definition or meaning of What is THE NAME OF THE IONIC COMPOUND KOH
The name for the ionic compound KOH is Potassium Hydroxide. In general, ionic compounds consist of a metal and one or more non-metals. The only exception for this
What kind of compound is KOH? ChaCha Answer: KOH is an ionic compound between a Potassium ion and a Hydroxide ion.
Best Answer: First you name the cationic (positive) element or radical (in KOH, this is potassium (K)), then you name the anionic (negative) element or radical (in KOH, this ...
What-Is-the-Compound-Koh - what is the compound name for koh? : KO is refer to as potassium-oxygen. KO is not a stable compound. Thanks for using C...
What Is Koh Compound? - Find Questions and Answers at Askives, the first startup that gives you an straight answer
What is Potassium Hydroxide. Buy potash - potassium hydroxide . Buy lye - sodium hydroxide. It is a chemical compound with formula KOH . Pure potassium hydroxide forms white, deliquescent crystals.
Best Answer: **potassium hydroxide** ... It's simple. :) K is potassium OH is hydroxide KOH is Potassium Hydroxide. ... Potassium hydroxide ... Potassium hydroxide ...
Best Answer: It is Inorganic compound and ionic compound. not binary. kOH in the water=K+(aq) + OH-(aq) so it is not covalent. good luck
What is the compound name for CH4? The chemical name for the compound CH4 is Methane. Methane is a colorless, odorless natural gas usually found within the Earth or ...
Which Compound Is Insoluble In Water Koh? - Find Questions and Answers at Askives, the first startup that gives you an straight answer
potassium hydroxide, chemical compound with formula KOH. Pure potassium hydroxide forms white, deliquescent crystals. For commercial and laboratory use it is usually in the form of white pellets.
Best Answer: potassium hydroxide-Its a very strong base inwater, and extremely toxic. ... Pottasium hydroxide. K= pottasioum OH= hydroxide ... yeah, it's potassium hydroxide ...
Naming Compounds Worksheet II Write the name for each compound. 1. HCl 41. NH 4F 2. KOH 42. AsCl 5 3. HgOH 43. KHCO 3 4. KCl 44. K 2O 5. FeCl 3 45.
potassium hydroxide. n. A caustic white solid, KOH, used as a bleach and in the manufacture of soaps, dyes, alkaline batteries, and many potassium compounds.
The compound produced when acetaldehyde is reacted with KOH in ether under ice-bath condition is a. Butanal b. 3-Hydroxybutanal c. 2-Butenal d. a mixture of butanal and 2-butenal
The skin sample is placed on a slide with KOH solution and the solvent DMSO. This solution slowly dissolves the skin cells but not the fungus cells. The fungus cells can then be seen with a microscope. Color stains may be used so that the fungus is easier to see. If you have patches ...
Get the step by step solution for "what is the ionic compound for KOH"
Reacts forming KOH: Structure; Crystal structure: Antifluorite (cubic), cF12: Space group: ... is an ionic compound of potassium and oxygen. This pale yellow solid, the simplest oxide of potassium, is a rarely encountered, highly reactive compound.
what is the percentage composition of these compounds: KOH,H2SO,SnO2? 1. Points. Asked by secure26 - 2 years ago Tags CHEMISTRY. All Answers. Sort By. Show. thinkfirst Level 18 / Scientist Answered 2 years ago-KOH ...
Potassium hydroxide which has a common name caustic potash is an inorganic compound with the formula KOH. It is used as bleach and in the production of soaps, dyes, alkaline batteries, and many potassium compounds.
On treatment with KOH, compound A (Shown above) is converted to B (C8H8O) which does not have an absorption in the 3200-3600 cm-1 region of its IR spectrum.
Is it valid to express KOH in this way (see attached image)? I wasn't sure if structural formulae are applicable to ions. The single bond to me denotes a covalent bond, which isn't applicable to the ionic compound KOH, is it?
potassium hydroxide n. A caustic white solid, KOH, used as a bleach and in the manufacture of soaps, dyes, alkaline batteries, and many potassium compounds. Also called ...
Prepare potassium hydroxide from miscellaneous compounds. Potassium hydroxide can be made (though it is impractical) from hydride, acetylide, azide, ... Potassium hydroxide, also called caustic potash, is a chemical compound with the formula KOH.
The pure HX compounds are not considered to be acids. The pure materials are gases at room temperature except for water. HF(g) HCl(g) HBr(g) HI (g) H 2 O(l) H 2 S(g) Bases are ... KOH(aq) NaOH(aq) CsOH(aq) Al(OH) 3 (aq) Ca(OH) 2 (aq) Metal ...
Product # Description. Add to Cart. 35127: Potassium hydroxide solution volumetric, 0.1 M KOH in ethanol (0.1N)
yeah me too did the same but i even wrote the first one when it follows SN1 reaction.so, i wanted to know what is the apt answer?
Potassium hydroxide, also called caustic potash, is a chemical compound with the formula KOH. The purified material is a white solid that is commercially available in the form ...
The previous pattern was that most compounds were soluble with some exceptions. With hydroxides it is the other way around. There is a short list of those that are soluble: NaOH, KOH, NH 4 OH, Ca (OH) 2, Sr(OH) 2, Ba(OH) 2. The rest of ...
Compound A has a molecular formula C5H9Br. Compound A reacts with KOH in ethanol to give compound B?
On other hand, the pyrazolothiadiazineone derivative 13 was obtained by treating compound 1 with CS2 and alcoholic KOH in 1:2:2 molar ratio. Under PTC conditions, compound 1, CS2, and ethyl cyanoacetate or malononitrile to gave the pyrazolopyridine derivatives 16 and 17.
What Is Potassium Hydroxide?. Potassium hydroxide, also known as potassium hydrate and caustic potash, is a strong alkaline chemical available in pellets and flakes. With the chemical formula KOH, potassium hydroxide is one of the compounds people call lye. Potassium hydroxide is ...
Binary Ionic Compounds and Their Properties; The Covalent Bond; Covalent Molecules and the Octet Rule; Writing Lewis Structures for Molecules; Ionic Compounds Containing Polyatomic Ions; Atomic Sizes; Ionic Sizes; Further Aspects of Covalent Bonding;
... by decomposing molten potassium hydroxide (KOH) with a voltaic battery. soap production and saponification. TITLE: soap and detergent (chemical compound) SECTION: Alkali Sodium hydroxide is employed as the saponification alkali for most soap now produced.
Ternary Compounds. Ternary compounds are composed of three different elements. ... KOH. Note the importance of upper and lower case - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Example 2.
Identify the formula and name of each of these ionic compounds that use polyatomic ions.
Compounds containing hydroxide ion (OH-) are insoluble Exceptions Alkali metal hydroxides (LiOH, NaOH, KOH) are soluble Ca(OH) 2, Sr(OH) 2, Ba(OH) 2 are soluble. 3. Compounds containing CO 3 2-, PO 4 3-, S 2-, SO 3 2-are insoluble Exceptions ...
How many atoms or moles of an element are in one mole of compound? I have a question. In KOH, how many ATOMS of K are in the compound? I say that it is 1 divided by 6.022 × 10 23 because there are that many atoms in a mole.
During decomposition, one compound splits apart into two (or more pieces). ... KOH ---> 4) NaCl ---> Example #1. How to figure out the right (or product side): (1) Identify the type of compound decomposing: NaClO 3 is a chlorate
Molar mass of KOH = 56.10564 g/mol. Convert grams Potassium Hydroxide to moles or moles Potassium Hydroxide to grams. Molecular weight calculation: ... Using the chemical formula of the compound and the periodic table of elements, ...
KOH: MOL WT. 56.1: H.S. CODE: 2815.20: TOXICITY. Oral rat LD50: 273 ... PubChem Compound Summary - Potassium hydroxide. IPCS INCHEM - Potassium hydroxide. KEGG (Kyoto Encyclopedia of Genes and Genomes) - Potassium ...
| <urn:uuid:883e81f0-b658-4f59-b423-d6e683f05a2b> | 2.734375 | 2,524 | Q&A Forum | Science & Tech. | 62.841786 |
The encoding classes are primarily intended to convert between different encodings or code pages and a Unicode encoding. Encoding.Unicode (UTF-16) encoding is used internally by the .NET Framework, and Encoding.UTF8 encoding is often used for storing character data to ensure portability across machines and cultures.
The StringBuilder class is designed for operations that perform extensive manipulations on a single string. Unlike the String class, the StringBuilder class is mutable and provides better performance when concatenating or deleting strings.
For more information, see Character Encoding in the .NET Framework and the MSDN blog Shawn Steele's Thoughts about Windows and .NET Framework Globalization APIs.
|Decoder||Converts a sequence of encoded bytes into a set of characters.|
|DecoderFallbackException||The exception that is thrown when a decoder fallback operation fails. This class cannot be inherited.|
|Encoder||Converts a set of characters into a sequence of bytes.|
|EncoderFallbackException||The exception that is thrown when an encoder fallback operation fails. This class cannot be inherited.|
|Encoding||Represents a character encoding.|
|StringBuilder||Represents a mutable string of characters. This class cannot be inherited.|
|UnicodeEncoding||Represents a UTF-16 encoding of Unicode characters.|
|UTF8Encoding||Represents a UTF-8 encoding of Unicode characters.| | <urn:uuid:b02f5d90-4d6d-49c7-8857-14cf6bf1a9c9> | 3.53125 | 309 | Documentation | Software Dev. | 23.340285 |
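The UTF-8 versus UTF-16 distinction the table refers to can be illustrated outside .NET as well. The snippet below uses Python purely to show the byte sequences the two encodings produce for the same text; it is not .NET code.

```python
# Same characters, different byte sequences under UTF-8 and UTF-16 (little-endian).
text = "héllo"

utf8_bytes = text.encode("utf-8")
utf16_bytes = text.encode("utf-16-le")

print(utf8_bytes.hex(" "))     # 68 c3 a9 6c 6c 6f              ('é' takes two bytes)
print(utf16_bytes.hex(" "))    # 68 00 e9 00 6c 00 6c 00 6f 00  (two bytes per character)
print(utf16_bytes.decode("utf-16-le") == text)   # True: round-trips to the original string
```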
Giant Isopods for Valentine's Day
February 14, 2003
Tracey Smart, Master's Candidate
Oregon Institute of Marine Biology
University of Oregon
After three days of beautiful weather and calm seas, Valentine's Day dawned on the ship in the company of 3- to 4- ft swells, clouds, and wind. Unfortunately, this meant that the day's scheduled dives had to be cancelled. A dive had been planned at Bush Hill, home to hundreds, if not thousands, of tubeworm colonies. Several of the scientists had been waiting for more specimens and were disappointed. Nevertheless, all was not lost.
Instead of diving on Bush Hill, we returned to a muddy plain near the Brine Pool to retrieve the large trap that we had baited and released five days earlier. We were hoping to collect large crabs, the giant isopod Bathynomus giganteus, and small amphipods that, hopefully, would be attracted to the bait. We were not disappointed!
With the winch on the starboard side of the ship, we pulled up three barrels of rope and our trap, brimming with isopods. Once the trap was on board, we removed and examined the animals. Several large Bathynomus, each more than a foot long, were among the 20 or so captured. Among them were half a dozen females that possessed their oostegites (brooding legs that develop in the molt preceding reproduction). That none of these females carried any developing young led us to hypothesize that they had only recently spawned. This hypothesis was further supported by the fact that only two very small isopods (about 6 cm long) were brought up with the trap.
Two spider crabs were also in the trap, one of which carried embryos firmly attached to her abdomen. Both crabs also carried barnacles, but of two very different types. The smaller crab was infected with a Rhizocephalan barnacle, a parasite that attaches to its host and then reprograms the host's reproductive behavior and biology to accomplish its own reproduction. The gravid female crab, on the other hand, played host to epibiotic lepadomorph barnacles on the back of her carapace.
For me, the most exciting find of the day was the four mid-sized Bathynomus that carried hundreds of epibiotic barnacles. I had planned to use them to investigate the morphological characteristics that develop in barnacles living on other organisms. We had also planned on lowering a box core into the mud on this plain, and looking at the animals and bacteria living in this often forgotten habitat. Unfortunately, by afternoon the swells were even bigger, so we were content to return to our rooms to ride out the waves. | <urn:uuid:3a31f382-9fdc-45de-bec2-16e1664cf4bc> | 2.84375 | 565 | Personal Blog | Science & Tech. | 44.746057 |
This is a question for which I've found it surprisingly hard to find a good answer. Biology texts talk mystically about the ATP->ADP reaction providing energy to power other reactions. I'd like to know some more details. Is this following roughly right ?
- Each reaction in a cell has a specific enzyme.
- Each enzyme has binding sites for, say, two molecular species AND for an ATP molecule.
- When a reaction takes place, the two species bind to the enzyme, and a little later, an ATP molecule binds.
- For some reason (why ?), the ATP->ADP reaction is now energetically favourable, so the high-energy bond breaks.
- This releases electromagnetic energy at some characteristic frequency.
- Certain bonds in the enzyme have a resonant frequency that allow them to absorb this electromagnetic energy (the EM energy disturbs molecular dipoles ?).
- The 3D structure of the enzyme is disturbed (i.e. it bends) in such a way that the 2 molecular species are mechanically forced together, providing sufficient activation energy for the reaction in question.
- The newly formed species no longer binds nicely to the enzyme (why ?) so it detaches, as does the ADP, which also doesn't bind as nicely as ATP.
- The End.
So, is that an accurate summary ? Anyone care to add some more physical details ? (like all the thermodynamics and quantum chemistry I have no idea about) | <urn:uuid:43c30423-aaec-49a6-b0eb-2249d51a3cab> | 2.734375 | 300 | Q&A Forum | Science & Tech. | 47.956105 |
This chapter describes how to use the Science Collection and introduces its conventions.
The following code demonstrates the use of the Science Collection by plotting a histogram of 10,000 trials of rolling two dice.
#lang racket
(require (planet williams/science/random-source))
(require (planet williams/science/discrete-histogram-with-graphics))
(let ((s (make-random-source))
      (h (make-discrete-histogram)))
  (random-source-randomize! s)
  (for ((i (in-range 10000)))
    (let ((die1 (+ (random-uniform-int 6) 1))
          (die2 (+ (random-uniform-int 6) 1)))
      (discrete-histogram-increment! h (+ die1 die2))))
  (discrete-histogram-plot h "Histogram of the Sum of Two Dice"))
The following figure shows the resulting histogram:
The Science Collection is a collection of modules each of which provides a specific area of functionality in numerical computing. Typical user code will only load the modules it requires using the require special form.
For example the code in Section 2.1 requires two of the modules from the Science Collection: random-source and discrete-histogram-with-graphics. This is specified using the following forms:
(require (planet williams/science/random-source))
(require (planet williams/science/discrete-histogram-with-graphics))
Each of these statements will load the corresponding module from the Science Collection—
There are two sub-collections of the Science Collection. These are:
Loading modules from either of these sub-collections requires that the sub-collection be specified when using the require special form. For example, to load the module for the Gaussian random distribution, the following is used:
As a shortcut, the entire Science Collection can be loaded using one of the following, depending on whether or not the graphic routines are needed:
Support for the graphical functions within the modules of the Science Collection has been separated from the fundamental numerical computing functionality of the modules. This facilitates the use of the numerical computing functions in non-graphical environment or when alternative graphical presentations are desired.
By convention, when graphical functions are included for a specific numerical computing area, there are three modules that provide the functions:
module: the basic numerical computing functions
module-graphics: the graphical functions
module-with-graphics: both the basic numerical computing and graphical functions
This might be used in implementing higher-level graphical interfaces.
In general, either the module or module-with-graphics module is loaded. However, the module-graphics module can be loaded when only the graphical routines are being referenced.
For example, the example code in Section 2.1 requires both the basic numerical computing and graphical functions for the discrete histogram functionality. Therefore, it loads the discrete-histogram-with-graphics module using the form:
The graphical routines are implemented using the plot collection (PLoT) provided with Racket. The plot collection is, in turn, implemented using the Racket Graphics Toolkit MrED. Both of these modules are required to be present to use the graphical functions. | <urn:uuid:dcd100b7-8460-4908-8ba1-bd9e87ff8b8f> | 3.4375 | 666 | Documentation | Software Dev. | 22.587218 |
Sci. STKE, 17 August 2004
PLANT BIOLOGY Identification of Stress-Regulated MicroRNAs
Small RNAs, either microRNAs (miRNAs) or short-interfering RNAs (siRNAs) are emerging as important regulators of gene expression. MicroRNAs (miRNAs) are short, single-stranded RNAs that inhibit gene expression by posttranscriptionally targeting RNAs for degradation. Short-interfering RNAs (siRNAs) are double-stranded RNAs that also mediate gene silencing. Sunkar and Zhu constructed a library of small RNAs from Arabidopsis seedlings exposed to various stresses, cold, dehydration, high salt, or abscisic acid, in order to identify new small RNAs and gain insight into the roles of small RNAs for mediating abiotic stress responses. Twenty-six new miRNAs were identified and were classified into 15 new families, with two miRNAs falling into previously reported families. In addition, 102 putative siRNAs were identified. Target transcripts regulated by the miRNAs were predicted based on sequence analysis and suggest that more than one miRNA may target the same transcript, that a single miRNA may target multiple transcripts, and that often these transcripts tend to be from members of the same gene family. For example, miR400 is predicted to interact with at least 10 members of the PPR (pentatricopeptide repeat) family. Furthermore, the abundance of several miRNAs was increased or decreased in response to various abiotic stresses, and several of the newly identified miRNAs exhibited tissue or developmental stage-specific expression patterns. These results open the door for further exploration of the role of small RNAs in abiotic stress signaling.
Citation: Identification of Stress-Regulated MicroRNAs. Sci. STKE 2004, tw294 (2004).
Science Signaling. ISSN 1937-9145 (online), 1945-0877 (print). Pre-2008: Science's STKE. ISSN 1525-8882 | <urn:uuid:560af384-235c-425b-ba2f-235bb4488d49> | 2.921875 | 428 | Academic Writing | Science & Tech. | 33.317194 |
This science fair project was performed to find out what conditions will make tablets dissolve faster. The experiment involved using Alka-Seltzer antacid tablets, water and hydrochloric acid at different temperatures.
The Alka-Seltzer antacid tablet will dissolve most quickly in a hydrochloric acid solution and at the highest temperature.
Alka-Seltzer is an antacid medication used for relief of heartburn and for neutralizing stomach acid. The pH level in our stomach is normally between 2 and 3. The food that we eat will enter the stomach where acid is then secreted to help in its digestion. When too much food is eaten, even more acid is produced. At times, the pH level falls below 2. This is the cause of heartburn.
Alka-Seltzer tablets consist of aspirin, sodium bicarbonate and citric acid. As an antacid the tablet will provide quick pain relief and help to raise the pH level in the stomach to between 3 or 4.
Alka-Seltzer medication is taken by dissolving the tablet in water and drinking it. Once the tablet is placed in the water, the citric acid will dissolve in water, making it acidic. The acidic solution will react with the sodium bicarbonate and release carbon dioxide. The releasing of carbon dioxide gas will create fizzy bubbles in the drink. The mixture is then drunk to reduce stomach discomfort and to neutralize stomach acids.
The materials required for this science fair project:
- 8 Alka-Seltzer tablets
- 8 beakers
- 1 large beaker
- 1 bottle of ice water
- 1 pack of ice cubes
- 800ml of Hydrochloric acid at pH 2
- 1 hot plate
- 1 thermometer
- pH paper
- 1 measuring cylinder
- 1 stopwatch
1. For this science project, the independent variable is whether water or hydrochloric acid is used as the solution, and the temperature of the solution. The dependent variable is the time taken by the Alka-Seltzer tablet to fully dissolve in the solution. This is determined using a stopwatch. The constants (control variables) are the amount of water used, the size of the tablet and the air pressure in the room.
2. The first beaker is filled with 200ml of ice water. Water and ice cubes are added to bring the temperature to 15°C, and the volume of the water is adjusted back to 200ml. 1 Alka-Seltzer tablet is placed in the water and the time taken for the tablet to fully dissolve is checked using the stopwatch and recorded in the table given below.
3. Using another 3 beakers filled with 200ml water and the hot plate, the temperature of the water is brought to 25°C, 35°C and 45°C. One Alka-Seltzer tablet is placed in each beaker and the time taken for the tablet to dissolve is measured and recorded in the table below.
4. The remaining 4 beakers are filled with hydrochloric acid. The temperature of the first beaker is brought to 15°C as explained in procedure 2 and the time for the Alka-Seltzer tablet to dissolve is measured and recorded in the table below.
5. The temperature of the remaining 3 beakers filled with hydrochloric acid are brought to 25°C, 35°C and 45°C using the hot plate and the time required for the Alka-Seltzer tablet to dissolve is measured and recorded in the table below
The results show that as the temperature of the solution increases, the Alka-Seltzer tablet dissolves more quickly. The tablet also dissolves more quickly in acidic solutions, compared to water.
Time for Alka-Seltzer tablet to dissolve (seconds)
The chart below represents our experiment results.
The hypothesis that Alka-Seltzer antacid tablets will dissolve most quickly in hydrochloric acid solution and at the highest temperature is proven to be true.
Antacids are used for quick relief from heartburn and stomach discomfort, especially after a heavy meal. In order to see quick results, the tablets are best taken with warm water so that they dissolve most quickly. Most medications are in fact taken best with warm water rather than cold water.
Try to repeat the science fair project by using other tablets like vitamin C.
The science fair project can also be repeated using an acidic solution to observe how long it takes the acidity to be neutralized by the antacid tablet. | <urn:uuid:8d8c63ff-2be2-4b1c-8035-411894a286b0> | 3.734375 | 942 | Tutorial | Science & Tech. | 50.562702 |
Chemical reactions triggered by ultraviolet (hv) in the thin martian atmosphere.
Credit: Sushil, UMich
Six years ago, then NASA Associate Administrator Wesley Huntress, Jr., stated, "Wherever liquid water and chemical energy are found, there is life. There is no exception." Few years have been as opportune as 2004 in presenting astrobiology with so many remarkable vistas and fresh perspectives on this fundamental triad of water, chemical energy and life.
Consider this year's accomplishments of those dedicated to searching for life in the universe.
Landing on Mars not once, but twice. Then finding evidence for water on opposite sides of the red planet. Picking up what appears to be methane signals in the martian atmosphere, one of the residues that might prove one day to be the product of underground biology. Scientists began to discuss seriously what colonization strategies make sense.
Setting off to explore the even richer atmosphere of the Earth-like moon, Titan. Spiraling into orbital capture around Saturn and photographing its majestic rings.
Flying through the tail of a comet and heading home after collecting the first extraterrestrial samples from such dusty iceballs. Launching the Deep Impact probe to smash into a comet and watch how the dust and ice get kicked up.
Filling the astronomy catalogs with well over a hundred new planets, including what may prove to be the first visible exoplanet. Finding some nearby candidates that might occupy temperate locations or safely orbit Sun-like stars.
Witnessing the once-per-century passage of our neighboring Venus across the face of the Sun. The MESSENGER probe took off on its decade long tour of the inner solar system to orbit Mercury.
Discovering the largest planetoids beyond Pluto among those outer nurseries where only comets visit.
The editors of Astrobiology Magazine revisit the highlights of the year and where possible point to one of the strongest lineups ever for beginning a new turn of the calendar. Between the marathon still being run by the twin Mars rovers and the expected descent to Saturn's moon, Titan, next year promises no letdowns.
Perspective view. Ophir Chasm in northern Marineris Valley network.
Credit: ESA/Mars Express
Number seven on the countdown of 2004 highlights was detection of methane on Mars. Relatively high levels of methane have been detected on Mars using a combination of ground based spectroscopy and the orbiting Mars Express probe.
Mars resembles Earth more than any other planet in our solar system, and studying its atmosphere gives us a greater understanding of our own.
Having methane appear on Mars is something of a mystery, because the planet was not believed to have active volcanism or tectonics. Could the methane be evidence of martian life forms buried underground?
Methane on Mars could be produced by non-biological methods or by biological ones. "Biologically produced methane is one of many possibilities," said Sushil Atreya, professor and director of the Planetary Science Laboratory in the University of Michigan College of Engineering. "Methane is a potential biomarker, if a planet has methane we begin to think of the possibility of life on the planet. On Earth, methane is almost entirely derived from biological sources."
How the methane got to Mars is the big question, and there are several possible sources, Atreya said. The most exciting scenario is that methanogens---microbes that consume the Martian hydrogen or carbon monoxide for energy and exhale methane---dwell in colonies out of sight beneath the surface of the red planet.
Perspective view. Ophir Chasm in the northern Marineris Valley network.
Credit: ESA/Mars Express
"These are anaerobic so they don't need oxygen to survive, if they are there," Atreya said. "If they are there, they would be underground."
Spectroscopy detected an average of 10 parts per billion by volume (ppbv) of methane on Mars, a small amount compared to the approximately 1700 ppbv on Earth. The methane gas was distributed unevenly over Mars' surface, which tends to support the theory that an internal, on-site source, rather than a comet, is generating the methane, said Atreya.
Speculation is tempting, but many more experiments are necessary before drawing any conclusions.
"While it's tantalizing to think there are living things on Mars, we aren't in a position to say that is what is causing the methane," Atreya said.
- Mars Reconnaissance Orbiter (MRO) launch, Mars Orbiter to collect high-resolution, 1-meter, images in stereo-view of Mars
- European Venus Express, Venus Orbiter for two-year nominal mapping life [486 days, two Venus year]
- New Horizons, Pluto and moon Charon flyby, mapping to outer solar system cometary fields and Kuiper Belt
- Dawn, Asteroid Ceres and Vesta rendezvous and orbiter, including investigations of asteroid water and influence on meteors
- Kepler, Extrasolar Terrestrial Planet Detection Mission, designed to look for transiting or earth-size planets that eclipse their parent stars [survey 100,000 stars]
- Europa Orbiter, planned orbiter of Jupiter's ice-covered moon, Europa, uses a radar sounder to bounce radio waves through the ice
- Japanese SELENE Lunar Orbiter and Lander, to probe the origin and evolution of the moon
- Japanese Planet-C Venus Orbiter, to study the Venusian atmosphere, lightning, and volcanoes.
- Mars Scout mission, final selections August 2003 from four Scouts: SCIM, ARES, MARVEL and Phoenix
- French Mars Remote Sensing Orbiter and four small Netlanders, linked by Italian communications orbiter
- BepiColumbo, European Mercury Orbiters and Lander, including Japanese collaborators, lander to operate for one week on surface
- Mars 2009, proposed long-range rover to demonstrate hazard avoidance and accurate landing dynamics
Related Web Pages
2003: Year in Review
Solar System Exploration Survey
Mars Opportunity Rover
Mars Spirit Rover
Planet Ten: Beyond Pluto? | <urn:uuid:0bafe7dd-b45d-4b4d-b3cb-c1a6fbbb96a8> | 3.296875 | 1,279 | Content Listing | Science & Tech. | 28.479879 |
In order to help standardize
the mathematics used in science, setting rules on
measurements is key. The failure of the Mars Climate
Orbiter in 1999 was the result of confusion over
what standard unit of measure was to be used.
There are 4 categories in which standards of
measure are applied: time, distance, mass and
temperature. At the bottom of the page are some useful conversion factors.
Units of time are a simple matter: things occur in
a linear fashion and so a value of time is assigned.
Units of measure are: seconds, minutes, hours,
years, and so on. There are no "metric" or "English"
units of measure in time; however, there can be some
confusion as time coordinates are a part of the
equatorial measure of right ascension. For example,
the right ascension (RA) coordinate of a given object
might be 18 hours and 38 minutes. That has nothing to do
with a phenomenon that occurred at hour 18 and minute 38.
Time in relation to Relativity is another matter - covered in the Cosmology section.
In astronomy, distances can be very large. While
on Earth we measure distance (and dimensions for
that matter) in either centimeters, meters, and
kilometers for the metric system and inches, feet,
and miles for the English system, distances to
distant galaxies are given in light-years.
As a rule of thumb, science tends to lean more
towards the metric system so variations on the meter
(kilometer, centimeter, millimeter, and so on) are
commonplace. Measures in light-years and parsecs do
not suffer the same confusion, but the light-year
seems to be more popular than using the parsec.
For very large distances, there are really three
choices: astronomical units, light-years, and
parsecs (for really distant galaxies - like the ones near the edge of the observable Universe, it is common to give distance in "redshift").
Unit                      Approximate size
Astronomical Unit (AU)    mean Earth-Sun distance
Light-year (ly)           9.5 x 10^15 meters (63,240 AU)
Parsec (pc)               3.1 x 10^16 meters (206,265 AU)
An astronomical unit is the mean distance between
the Earth and the Sun.
A light-year is the distance light travels in 1
year; light travels at roughly 300,000 km/s.
A parsec is the distance at which a separation of 1 AU
"subtends" an angle of one arc-second. In English, this
means that if an observer were far enough away from the
Sun that the Earth-Sun distance appeared only 1
arc-second across, the observer would be 1 parsec away.
Parallax is the apparent shifting of an
object due to a change in the observer's viewpoint.
Example: hold a pen at
arm's length and alternate closing one eye at a time
- the apparent shift of the object is called parallax.
Also, for comparison purposes: 1 parsec = 3.26 ly.
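For readers who want to convert between these units programmatically, here is a minimal sketch. The light-year and parsec constants are the rounded values quoted above; the AU value is the standard 1.496 x 10^11 m, which is not stated explicitly in this article, and the function names are only illustrative.

    # Rounded conversion constants (the AU value is a standard figure, not from the table above).
    METERS_PER_AU        = 1.496e11   # mean Earth-Sun distance
    METERS_PER_LIGHTYEAR = 9.5e15     # about 63,240 AU
    METERS_PER_PARSEC    = 3.1e16     # about 206,265 AU, or 3.26 light-years

    def lightyears_to_parsecs(ly):
        return ly * METERS_PER_LIGHTYEAR / METERS_PER_PARSEC

    def au_to_lightyears(au):
        return au * METERS_PER_AU / METERS_PER_LIGHTYEAR

    print(lightyears_to_parsecs(3.26))   # roughly 1 parsec
    print(au_to_lightyears(63240))       # roughly 1 light-year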
1 inch = 2.54 cm
1 foot = 0.3048 meters
1 mile = 1.609 kilometers
1 micrometer = 1 µm = 10^-6 meters
1 nanometer = 1 nm = 10^-9 meters
1 minute = 60 seconds
1 hour = 3600 seconds
1 day = 86,400 seconds
1 year = 3.156 x 10^7 seconds
1 km/s = 10^3 m/s
1 mi/h = 0.447 m/s
1 mi/h = 1.47 ft/s
Back to Top | <urn:uuid:3dec8c2b-7446-4024-8d3e-8945ac5cf5da> | 4 | 753 | Knowledge Article | Science & Tech. | 64.3132 |
- Cascading Style Sheets (CSS)
- Document Object Models (DOM)
In order to understand the complexities of DHTML, it is useful to examine its components in more detail.
Cascading Style Sheets
CSS are sophisticated codes that separate web content from the web display - the style, positioning, colors, fonts, and so on. CSSP, or CSS Positioning, allows pixel-level control over HTML element positioning. The separation of the presentation style of web documents from the content with CSS2 (CSS level 2) simplifies Web authoring and site maintenance. "CSS2 supports media-specific style sheets so that authors may tailor the presentation of their documents to visual browsers, aural devices, printers, braille devices, handheld devices, etc. This specification also supports content positioning, downloadable fonts, table layout, features for internationalization, automatic counters and numbering, and some properties related to user interface" (W3 Consortium). The W3C offers an excellent tutorial to learn CSS2, called the CSS2 Specification.
Putting them together
DHTML is usually applied to achieve three tasks:
- Position or placing blocks of content on the page and moving them around
- Style Modifications which change the look and feel of the page
- Event handling or relating user events to changes in positioning or other style modifications
A helpful group of tutorials are available through HTML Goodies, called DHTML and Layer Tutorial.
The links included in this article offer introductions and how-tos to begin your journey in mastering this newest development in html coding. Any quick search on Google will bring many more resources to your attention, readily available at your fingertips. As browser manufacturers work on their incompatibility and the use of higher version browsers become more commonplace, DHTML will become a mandatory part of any professional designer's itinerary.
HTML Highlight Article Series
PART 1: Should a Credible Designer Know HTML?
PART 2: HTML 3.2 - The Birth of Wilbur
PART 3: HTML 4.0 AND 4.01 - More of a Good Thing!
PART 4: XHTML : Web Coding for Refined Design
PART 5: DHTML : Dynamic Web Coding | <urn:uuid:30be1d71-a457-4b0c-ac14-88183a2123c2> | 3.390625 | 449 | Truncated | Software Dev. | 38.10692 |
Where in the world are savannas? In this BrainPOP UK movie, Tim and Moby teach you where savannas are located and how their coordinates affect the climate, vegetation, and animal life there. In this educational, animated movie you’ll learn how tall savanna grass can grow, where the largest savannas can be found, and the name of the most famous savanna of all. You’ll also find out about some of the amazing animals that live on the savanna — including lions, zebras, and elephants! Discover why savannas don’t have many trees and what prevents the savanna from becoming a tropical rainforest. Just make sure to steer clear of that rhino! | <urn:uuid:237247f6-86bd-45da-ab58-0763a98b3e88> | 2.90625 | 151 | Truncated | Science & Tech. | 47.0575 |
Database Interaction with PL/SQL, part 3
Combining TABLE and RECORD - Oracle
Jagadish Chatarji has been writing about database interactions with Oracle PL/SQL. The last part started on TYPE, RECORD, and TABLE declarations of PL/SQL. This one now goes further into TABLE, RECORD, and using them together. It will also introduce NESTED TABLES.
In my previous article, I explained PL/SQL records. Now we shall combine RECORDs with TABLEs to achieve effective results in a simple way. Let us consider the following example.
declare
  type t_empRec is record (
    ename  emp.ename%type,
    sal    emp.sal%type,
    deptno emp.deptno%type
  );
  type t_emptbl is table of t_emprec;
  v_emptbl t_emptbl;
begin
  select ename, sal, deptno
    BULK COLLECT into v_emptbl
    from emp;
  for i in v_emptbl.first .. v_emptbl.last loop
    dbms_output.put_line(v_emptbl(i).ename || ',' || v_emptbl(i).sal || ',' || v_emptbl(i).deptno);
  end loop;
end;
The above program retrieves 'ename', 'sal' and 'deptno' columns from 'emp' table and displays all of those details. In my previous article, I displayed the same but used %ROWTYPE. Now in this program, I am combining the definitions of RECORD and TABLE to store only the data we need (but not the entire row). The most important statement in the above program is the following:
type t_emptbl is table of t_emprec;
That statement defines a PL/SQL table named 't_emptbl'. But the content (rows) within that table should match with the structure defined in the following declaration:
type t_empRec is record (
  ename  emp.ename%type,
  sal    emp.sal%type,
  deptno emp.deptno%type
);
So, indirectly 't_emptbl' can have any number of rows with only the fields 'ename', 'sal' and 'deptno'. This is a wonderful technique to define PL/SQL tables with our own fields. The rest of the program is just similar to the example I gave in my previous article. | <urn:uuid:4b58d36b-57de-4a5b-856f-ff1cb7b8c3ce> | 2.71875 | 523 | Documentation | Software Dev. | 58.196804 |
Indian Ocean Tsunami Animation
Watch an animation of the 2004 Boxing Day Indian Ocean Tsunami.
Download the 4.3 megabyte animation here. It is in *.mov format.
Best played with Apple Quicktime Player.
Modelling performed by William Power using the MOST software developed by Vasily Titov at PMEL.
The Indian Ocean tsunami of 26 December 2004 was generated by a very large earthquake off the coast of Sumatra. At magnitude 9.3, the earthquake was larger than first reported, and the second largest ever recorded by scientific instruments. The earthquake ruptured across an area that extends for 1200 kilometres. This rupturing deformed the seabed, raising a huge volume of water above its normal level, which then spread out as the tsunami.
The tsunami caused huge devastation and in excess of 280,000 deaths in countries bounding the Indian Ocean. By the time it reached New Zealand, the tsunami was measured to be as much as half a metre in height at some locations, despite the great distance between here and the earthquake, and the landmass of Australia in between. How can we explain this?
The speed of tsunami waves is determined by the depth of water in which the tsunami travels. One consequence of this is that the tsunami path is refracted, or bent, by changes in water depth, in much the same way that light is refracted when it passes through a glass lens. At the start of the tsunami most of the wave energy traveled either west towards Sri-Lanka and India, or east towards Thailand. Some of the waves spread out to the south, where wave energy was channeled by a ridge of undersea mountains which extends down to the south of Australia. From here some of the waves traveled east towards the South Pacific.
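The depth dependence described above can be illustrated with the standard shallow-water approximation v ≈ √(g·d). This formula is not given in the article itself; it is simply the usual first-order estimate for waves that are much longer than the water is deep.

    import math

    G = 9.81  # gravitational acceleration in m/s^2

    def tsunami_speed_kmh(depth_m):
        """Shallow-water wave speed, in km/h, for a given water depth in metres."""
        return math.sqrt(G * depth_m) * 3.6

    # Deep ocean versus a shallow plateau such as the Campbell Plateau.
    for depth in (4000, 500, 50):
        print(f"depth {depth:5d} m -> roughly {tsunami_speed_kmh(depth):4.0f} km/h")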
To the south of New Zealand lies the Campbell Plateau, a large shallow area which is an extension of the New Zealand landmass, and which extends for about a thousand kilometres into the Southern Ocean. When the tsunami waves reached this plateau they slowed, and were refracted north towards the east coast of South Island. This may explain why some of the largest waves from this tsunami in New Zealand were measured in Timaru. | <urn:uuid:84ee878c-d4f9-4dcb-8743-890c345be89e> | 4.0625 | 449 | Truncated | Science & Tech. | 53.966865 |
What is Nuclear Fission?
Nuclear fission results in a great deal of energy being released in an explosion.
CREDIT: Nevada Division of Environmental Protection
Nuclear fission is a reaction in which a large nucleus breaks apart into two smaller nuclei, releasing a great deal of energy.
Nuclei can fission on their own spontaneously, but only certain nuclei, like uranium-235 and plutonium-239, can sustain a fission chain reaction. This is because these nuclei release neutrons when they break apart, and these neutrons can slam into other nuclei, causing them to also break apart and release more neutrons.
Uranium-235 is the fuel of choice in all commercial reactors (and even one natural reactor). The uranium fuel is packed into the core and usually surrounded by a moderator, which is a substance that slows down the neutrons so they have a better chance of inducing fission.
Once the chain reaction gets going, the heat from the core is typically used to boil water and drive a steam turbine. The chain reaction can be slowed down and even turned off by introducing control rods, which contain materials that absorb neutrons.
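The difference between a chain reaction that dies out, stays steady, or grows can be shown with a toy generation-by-generation model. This is an illustration of the idea only, not a model of a real reactor; the multiplication factor k and the starting population are arbitrary.

    def neutron_population(k, generations, start=1000):
        """Toy model: neutron count after each generation with multiplication factor k."""
        counts = [start]
        for _ in range(generations):
            counts.append(counts[-1] * k)
        return counts

    for k in (0.9, 1.0, 1.1):
        final = neutron_population(k, generations=20)[-1]
        print(f"k = {k}: about {final:.0f} neutrons after 20 generations")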
It makes a great gift idea for all ages. more info>> | <urn:uuid:e847a134-7dc4-4096-93bd-76e209c7d483> | 4.25 | 359 | Knowledge Article | Science & Tech. | 52.381099 |
The FLSTSSN gathers data on dead, sick, or injured (i.e., stranded)
sea turtles found in Florida.
This network functions as a part of an eighteen-state network
led by National Oceanic and Atmospheric Administration's National
Marine Fisheries Service (NMFS). In Florida, strandings are
documented by the Fish and Wildlife Research Institute (FWRI) staff
biologists and by a network of permitted participants located
around the state.
Live strandings are rescued and
transported to properly permitted rehabilitation facilities.
Strandings data collected on a standardized reporting form include
date, species, location, carapace length and width, carcass
condition, carcass disposition, and information on anomalies (e.g.,
entanglement, propeller damage, and fibropapillomas).
Additionally, certain carcasses are collected by FWRI staff for
gross or detailed necropsy. FWRI reports Florida sea turtle
strandings to NMFS as a part of a management plan intended to
reduce the incidental take of turtles in the shrimp fishery.
FWRI also uses sea turtle stranding data to monitor
mortality and to detect and describe any unusual stranding events.
Stranding data collected through the FLSTSSN have been used
extensively in the identification of mortality factors and in the
development of recovery actions (e.g., Turtle Excluder Device
requirements and gill net regulations).
Maps of Statewide Turtle Stranding Data are available on the Marine
Resources GIS (MRGIS) data page.
Server is at the bottom of the MRGIS IMS article. To make a map of
turtle data, select "turtles" as the current map view, and navigate
using menu on the left side. | <urn:uuid:a69773ef-8bf3-4160-95a6-981775608d5d> | 2.796875 | 384 | Knowledge Article | Science & Tech. | 31.393908 |
Milky Way's neighbour galaxies may have brushed closely long ago
A recent study at the National Science Foundation's Green Bank Telescope (GBT) has revealed that two of our Milky Way's neighbor galaxies may have had a close encounter billions of years ago.
The new observations have confirmed a discovery of 2004 that claimed the presence of hydrogen gas streaming between the giant Andromeda galaxy, also known as M31, and the Triangulum Galaxy, or M33.
"The properties of this gas indicate that these two galaxies may have passed close together in the distant past," said Jay Lockman, of the National Radio Astronomy Observatory (NRAO).
"Studying what may be a gaseous link between the two can give us a new key to understanding the evolution of both galaxies," he added.
Both the galaxies, about 2.6 and 3 million light-years, respectively, from Earth, are members of the Local Group of galaxies including our own Milky Way and around 30 others.
The hydrogen "bridge" between the galaxies was discovered earlier in 2004 by astronomers using the Westerbork Synthesis Radio Telescope in the Netherlands, but disputed on technical grounds.
Detailed studies with the highly-sensitive GBT indicated the existence of the bridge, and showed six dense clumps of gas in the stream.
Observations of these clumps revealed that they have roughly the same relative velocity with respect to Earth as the two galaxies, supporting the argument that they are part of a bridge between the two.
When galaxies pass close to each other, one result is "tidal tails" of gas pulled into intergalactic space from the galaxies as lengthy streams.
"We think it's very likely that the hydrogen gas we see between M31 and M33 is the remnant of a tidal tail that originated during a close encounter, probably billions of years ago," said Spencer Wolfe, of West Virginia University.
"The encounter had to be long ago, because neither galaxy shows evidence of disruption today," he added.
"The gas we studied is very tenuous and its radio emission is extremely faint -- so faint that it is beyond the reach of most radio telescopes," Lockman said.
"We plan to use the advanced capabilities of the GBT to continue this work and learn more about both the gas and, hopefully, the orbital histories of the two galaxies," he added.
-With inputs from ANI.
May 19, 2013 at 10:08 PM | <urn:uuid:557f75e0-3c23-4357-8294-bdd4824a0362> | 3.03125 | 582 | Comment Section | Science & Tech. | 41.846418 |
- What are the fundamental building blocks of matter?
- How can I identify them?
- Which forces hold them together?
- How do these forces work?
- How far have the secrets of forces and matter been understood so far?
Find the answers to these and other questions
by browsing, reading, and working through some of the educational
materials on particle physics collected here. Most
of the material contains interactive elements, some even real
particle physics events for making your own measurements,
and understanding particle physics "hands-on". The
material was collected for the IPPOG Particle Physics Masterclasses, where some
of the measurements form the practical exercises for high
school students spending a day at one of the Research
Institutes. More info on the teaching systems, which are
suited for a wide range of readers, is accessible via the
menu in the left column. | <urn:uuid:b75b5ecb-b6ed-47fb-9612-4a76c94de97f> | 3.296875 | 190 | Content Listing | Science & Tech. | 42.472668 |
The dehydration of sucrose
Demonstrations designed to capture the student's imagination, by Colin Baker of Bedford School.
In this issue: the dehydration of sucrose
Sucrose is a disaccharide with the formula C12H22O11. On hydrolysis, it yields the two monosaccharides, glucose (aldohexose) and fructose (ketohexose), and on dehydration produces a complex carbonaceous solid residue.
The reaction between sucrose and concentrated H2SO4
With this reaction, there is a time delay of almost one minute before the reaction proceeds. The acid starts to go yellow as the dehydration begins. The rate of dehydration then accelerates as the acid heats up because the reaction is exothermic. As the sugar molecules are stripped of water, the heat generated turns the water into steam which then expands the remaining carbon into a porous, smoking, black column. This expands out of the reaction vessel, producing a choking acrid vapour and the smell of burned sugar. At this stage I normally remind my students that sulfuric acid is highly corrosive and will burn skin so they must avoid contact with it.
50-60g Granulated sugar, (sucrose), C12H22O11
25-30cm3 Concentrated sulfuric acid, H2SO4
Put the sugar into the beaker and stand on a heat-proof mat. With care pour the acid onto the top of the sugar and then stand back. Since this demonstration produces sulfur dioxide as a waste product, it should be performed in a well-ventilated room or a fume cupboard.
Sulfuric acid contact with the eyes or skin can cause permanent damage. Concentrated solutions of acid are extremely corrosive and when sulfuric acid is dissolved in water enough heat is released to make water boil. Sulfur dioxide is toxic in high concentration and is a severe respiratory irritant at lower concentration. The typical exposure limit is 2 parts per million (ppm), a level which can readily be exceeded in a laboratory with poor ventilation. Some people, especially those prone to asthma, may be especially sensitive to sulfur dioxide. In the presence of moisture sulfur dioxide forms an acidic, corrosive solution, which in contact with the skin or eyes may lead to burns.
Precise amounts of chemical are unnecessary but do not use any other form of sucrose other than normal (household) granulated sugar.
Pure sulfuric acid is an oily liquid. Odourless and colourless, the pure acid freezes at 10.5°C to produce a white crystalline solid consisting of a three dimensional hydrogen-bonded network which persists in the liquid and aqueous state, making such solutions viscous. The pure acid decomposes slightly on standing and warming to evolve sulfur trioxide and water. Thus the acid reaches its maximum boiling point mixture of 330°C and its maximum concentration of 98.33 per cent.
Concentrated sulfuric acid has a molarity of 18M and has a strong affinity for water, making it very useful for drying gases which it does not react with, eg SO2, Cl2, N2, and O2. As well as being a good reagent to dehydrate carbohydrates, concentrated H2SO4 also dehydrates crystalline hydrates, organic alcohols and acids. In the reaction with sucrose, the acid dehydrates the carbohydrate to carbon and then oxidises the carbon:
C12H22O11 →(dehydrate) 12C(s) + 11H2O(l) →(oxidise) CO2, SO2
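As a rough sanity check on the quantities listed earlier (about 55 g of sugar and 28 cm3 of 18 M acid), the acid is present in a roughly threefold molar excess over the sucrose. The molar mass used below is an assumed standard value, not taken from the article.

    # Approximate molar mass of sucrose, C12H22O11, in g/mol.
    M_SUCROSE = 342.3

    sugar_g    = 55    # midpoint of the 50-60 g suggested above
    acid_cm3   = 28    # midpoint of the 25-30 cm3 suggested above
    acid_molar = 18    # concentrated H2SO4 is roughly 18 mol/L

    mol_sugar = sugar_g / M_SUCROSE
    mol_acid  = acid_molar * acid_cm3 / 1000.0

    print(f"sucrose: ~{mol_sugar:.2f} mol, H2SO4: ~{mol_acid:.2f} mol")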
The acid is a powerful oxidising agent as shown by its reactions with solid halides:
NaI + H2SO4 → HI + NaHSO4
H2SO4 + 8HI → 4I2 + H2S + 4H2O
These reactions can be used to good effect when teaching oxidation number and balancing equations. It is worth noting that a solid potassium bromide and concentrated sulfuric acid mixture is used to convert organic alcohols to halogenoalkanes.
CH3CH2OH + HBr → CH3CH2Br + H2O
The KBr-H2SO4 mixture generates hydrogen bromide in situ. | <urn:uuid:aa33bc70-8b54-487d-93c4-319cdd55ac5c> | 3.71875 | 879 | Tutorial | Science & Tech. | 34.988077 |
Sep. 8, 2009 Optical clocks like the strontium clock in the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig could be the atomic clocks of the future; some of them though are already ten times more precise and stable than the best primary caesium atomic clocks. Now they might also become more compact and even portable, maybe in the future even travel to space.
PTB scientists have shown how some fundamental difficulties, which a more simple set-up had previously hindered, could be avoided. They have written about this in the current edition of the journal Physical Review Letters. In the next step they want to build such a clock. They already have a practical application in mind: the clock could help to determine geographical heights even more exactly than before.
An optical clock is so exact because its "pendulum" swings so quickly. The same effect that makes a quartz clock more precise than a classical grandfather clock is behind this: the periodically swinging element within, the oscillating quartz crystal, moves by far more quickly than the pendulum of the grandfather clock; thus the scale can to some extent be divided more precisely and also be more precisely checked. The "pendulum" of a caesium atomic clock swings even more quickly: i.e. that microwave radiation which can bring about a spin change in each electron of a caesium atom. Precisely the microwave frequency at which this effect is largest defines the second. An optical atomic clock works with the still higher frequency of optical radiation -- that is with an even faster pendulum.
As the movement of the atoms leads to very large frequency shifts through the Doppler effect, in the best of these clocks the atoms are slowed down to a hundredth of the speed of a pedestrian in a first preparation step with the aid of laser cooling. In a lattice clock a further step then follows in which the atoms are held in potential wells. These are created through the intensive light field of a laser. Several tens of thousands of strontium atoms are trapped in this so-called optical lattice. The movement of the atoms is thus limited to the fraction of an optical wavelength, so that shifts through the Doppler effect can be ignored.
A few hundred atoms which can disturb each other are trapped in each potential well. If the isotope strontium-87, a fermion, is used, two of these particles do not come close to each other at very low temperatures due to the Pauli principle. That is the reason why this isotope is used to construct optical clocks. But as it can only be cooled relatively complicatedly with laser light and, moreover, only has a natural abundance of 7 %, it is, in principle, not so well suited for simple, transportable clocks or even for clocks suitable for space.
The isotope strontium-88 with over 80 % natural abundance, which is also easier to cool, is, however, a boson. That means that even at the lowest temperatures many collisions between the atoms occur. They can lead to losses and to a shift and broadening of the reference line. How strongly these collisions influence the accuracy of the clock was, however, not known previously. In an experiment at PTB, these influences have now been measured in detail for the first time. The results of the investigation have shown how the optical lattice has to be dimensioned and how many atoms may be stored in it to operate a very accurate lattice clock also with strontium-88. A clock is now being built on this basis which is more compact and more transportable than the previous lattice clocks.
The gravitational red shift at the Earth's surface, which amounts to a fractional frequency shift of about 10^-16 per metre of height difference, is being discussed as a possible first use, for the precise determination of height above the geoid. So the clock could be used to improve, for example, gravitation maps.
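The figure of about 10^-16 per metre can be checked against the weak-field estimate Δf/f ≈ g·h/c². This formula is not from the article; it is the usual first-order expression for the gravitational red shift near the Earth's surface.

    g = 9.81       # m/s^2
    c = 2.998e8    # speed of light, m/s

    shift_per_metre = g * 1.0 / c**2
    print(f"fractional frequency shift: {shift_per_metre:.2e} per metre")  # about 1.1e-16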
Note: If no author is given, the source is cited instead. | <urn:uuid:bd61b414-4a0a-409c-a77d-b2fcef4c8f86> | 3.84375 | 855 | Truncated | Science & Tech. | 45.668366 |
Friday, February 13, 2009
We asked Byron Tapley, director of the Center for Space Research in the Cockrell School of Engineering, if there’s concern about the GRACE mission.
GRACE stands for Gravity Recovery and Climate Experiment. It is a joint operation of NASA and the German Center for Air and Space Flight. Tapley is the project’s principal investigator.
GRACE consists of two satellites that orbit the Earth in unison at an altitude of 311 miles–and have since March 2002. They measure changes in the Earth’s gravity and have provided data for scores of scientific papers.
“The collision is a significant concern for all Earth orbiting missions,” Tapley said in an e-mail message. “The GRACE satellites are at a lower altitude that the satellites that collided and in principal are not subject to immediate threat, but the pieces created by the collision will eventually come down.”
He said it will take time to determine what threat the debris might pose to the GRACE satellites.
The satellites have the capability to avoid collisions, he said.
“The assessment of the probability for impact is an on-going concern and the actions would be planned as a threat is identified,” Tapley said.
He added, “The project has a task to consider minimizing the probability of collision with other satellites.” | <urn:uuid:20150e14-92b3-4e87-b8b6-4db1d37efca3> | 3.359375 | 284 | Audio Transcript | Science & Tech. | 40.407727 |
Tornadoes can occur at any time of the year and just about anywhere in the world. However, the unique geography of the United States helps to produce some of the most favorable conditions for their development.
The months with the greatest number of tornadoes overall are April, May, and June, but tornadoes can and do occur during any month of the year. Tornado seasons vary in different parts of the United States.
In the Southeast, the peak season for tornadoes is February through April. In the northern Plains, tornadoes are most likely to develop from June through August.
Generally, tornado frequency is high in the South in late winter and early spring; and in the Plains, Midwest, and Ohio Valley from early spring through summer.
The reason for this is that low-level heat, moisture, and instability necessary for tornado formation are usually confined to southern regions early in the year, and only reach northern sections with regularity late in the spring.
Also, the jet stream (associated with the upper-level weather systems that can contribute to thunderstorm and tornado formation) migrates northward from winter to summer.
A secondary season of tornado activity has been observed across the South in late autumn as the jet stream, located well to the north in the summer, migrates southward to its typical wintertime position.
Tornadoes can occur at any time of day, but are most likely to occur between 2 and 7 p.m. In Tornado Alley, typically defined as the region from Texas north to Nebraska, very few tornadoes occur in the morning. In the Southeast, a significant percentage occur during the night and morning hours.
Tornadoes can vary greatly in terms of their length, width, direction and speed. The median path length of a tornado is just under one mile, while its path width averages 50 yards, half the length of a football field.
Since most tornadoes are formed in conjunction with severe thunderstorms, forecasters first must determine where thunderstorms are likely to form and reach the severity necessary for tornado formation.
The difference between whether a thunderstorm will be severe with large hail and strong winds but no tornado, or whether it will spawn a potentially deadly tornado is very subtle.
When evaluating areas for tornadic potential, forecasters traditionally examine observations and computer data to locate regions where strong instability and wind shear coexist.
While forecasters can determine areas where instability and wind shear might contribute to favorable tornadic conditions, NEXRAD Doppler radar is one of the best indicators of impending tornado formation.
By displaying swirling cloud-level winds that often precede a tornado, Doppler radar can help forecasters pinpoint areas where tornado formation may be occurring or is imminent. This can give meteorologists valuable lead time for the issuance of watches and warnings. | <urn:uuid:fcdb4c5f-3b0a-4635-83f1-f4b77424787b> | 4.1875 | 572 | Knowledge Article | Science & Tech. | 32.800519 |
Tritium is a kind of hydrogen. Tritium is radioactive. Tritium has a half-life of 12.3 years. Pretend you started with 100 kilograms of tritium. After 12.3 years (one half-life), only half (50 kg) of the tritium would be left (yellow lines). After 24.6 years (two half-lives), only one-quarter (25 kg) of the original 100 kg of tritium would be left (blue lines).
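The worked example above follows directly from the half-life relation N = N0 · (1/2)^(t / t_half); a minimal sketch (the function name is illustrative):

    def remaining_mass(initial_kg, years, half_life_years=12.3):
        """Mass of tritium remaining after a given time, from the half-life relation."""
        return initial_kg * 0.5 ** (years / half_life_years)

    for t in (12.3, 24.6, 36.9):
        print(f"after {t} years: {remaining_mass(100, t):.1f} kg")   # 50.0, 25.0, 12.5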
Original artwork by Windows to the Universe staff (Randy Russell). | <urn:uuid:39b4cd61-77f6-4300-a1f9-bc68c314ba08> | 3.234375 | 120 | Knowledge Article | Science & Tech. | 84.23169 |
The scientists are removing the muscle layer to reveal the pygmy right whale’s unusual bone structure.
This is Dr Sentiel Rommel’s thoughts on the rib structure:
You can see the ribs gradually changing to the unique flattened and overlapping ribs on the right. The space between the ribs allows them to move as the whale breathes and also accommodates changes in volume that occur as the air is compressed by water pressure when the whale dives through deep water.
The flattened ribs don’t have as much space between them and overlap. The study of terrestial animals that have wide, flattened ribs (the pangolin and anteater) have shown that these ribs proably increase the stiffness of the body.
So possibly a stiffer body has advantages in the way they swim, but this is purely speculative at the moment!
Hi this is Anton on Jane’s blog. This morning we discovered that the second rib on the left side of the animal is broken. We can tell from the bloody area around the break that the animal suffered this injury prior to death. Dead animals don’t bleed. This injury probably occured at the time of the stranding but most likely did not contribute to cause of death. | <urn:uuid:963f2797-5717-4738-849e-c9d95f7cb460> | 3.140625 | 257 | Personal Blog | Science & Tech. | 53.971053 |
Following (below left) is the Hockey Stick diagram endorsed by Geoffrey Boulton, General Secretary of the Royal Society of Edinburgh, in their December 2009 (post-Climategate) Policy Advice statement Climate Change and the U.N Copenhagen Summit here. On the right is a confirmation plot from data archived in late 2009 and February 2010 by Boulton’s associate and 2007 hire at the University of Edinburgh, Gabrielle Hegerl (showing that I’ve located the precise data version for the Boulton hockey stick.)
Boulton Figure 3 Caption: Estimates of mean decadal temperatures over the land areas between 30°N and 90°N in the northern hemisphere during the last 1500 years. Prior to the instrumental record of the last 150 years (shown in red), temperatures are deduced from tree-rings, lake sediments and ice cores. The dashed lines show the range of higher frequency variability in the data. The record shows an early mediaeval cool period from prior to about 950AD, a mediaeval warm period until about 1200AD, the so-called Little Ice Age from about 1450 to 1850AD and the very strong late 20th Century warming. Temperatures in sub-surface rocks can be used to deduce long-term changes in surface temperature that naturally smooth out inter-annual variations to show long-term trends. Temperature records from 631 boreholes have been used in this way to show how distinctive the 20th Century warming has been compared with the preceding 400 years. (From: Hegerl, G.C. and others. 2007. Detection of human influences on a new, validated 1500-year temperature reconstruction; Journal of Climate, 20 (4): 650-666.)
Boulton’s Policy Advice assured his readers that this reconstruction was “independent of the University of East Anglia reconstruction, about which there has recently been much controversy”:
Several independent estimations have now been made of the global or hemispheric average temperatures for the last two millennia. Figure 3 is one of these, and shows that the late 20th Century warming has been rapid and large compared with earlier periods (note that this is independent of the University of East Anglia reconstruction, about which there has recently been much controversy). If we look in more detail at the 20th Century warming however (Figure 3),we see that the pattern of climate change has been much more complex than the smoothly accelerating pattern of greenhouse gas concentration
Hegerl et al (J Clim 2007) describes a couple of different versions of her reconstruction: one step starts in 1505 (12 records) ; one in 1251; one in 946 (7 records) and one in 558 (5 records). Hegerl did not provide accurate digital or even paper citations for the records; the archive is smoothed data. The task of identifying the provenance of each of the series is further complicated by the fact that the only information about the smoothing is that it is “decadally smoothed” – the exact filter is not reported.
The version shown in the Boulton Policy Paper is the 5-record version starting in 558. I’m now in a position to identify the provenance of each of these 5 records – something that takes a lot of patience.
The first record in this group is described by Hegerl as follows:
Western United States: This time series uses an RCS processed tree-ring composite used in Mann et al. (1999), and kindly provided by M. Hughes, and two sites generated by Lloyd and Graumlich (1997), analyzed by Esper et al. (Boreal and Upper Wright), and provided by E. Cook. The Esper analyses were first averaged. Although there are a number of broad similarities between the Esper and Hughes reconstructions, the correlation is only 0.66. The two composites were averaged.
CA readers, but apparently not rigorous Journal of Climate peer reviewers, know that MBH99 goes back only to 1000 and that there is no candidate “RCS processed tree ring composite” in MBH99. Needless to say, this is Mann’s infamous PC1 (from Mann and Jones 2003, not MBH99)- accepted without demur by rigorous Journal of Climate peer reviewers even though Mann’s PC1 and the use of strip bark had been sharply criticized in the NAS panel report. In the CH5 reconstruction, only the PC1 is used (the strip bark foxtails are not averaged in.)
Next is a series described as follows:
Northern Sweden: This is from Grudd et al. (2002) by way of Esper.
This is Tornetrask – a site used in every multiproxy reconstruction that I know of – including, for example, MBH99, Jones et al 1998 and Briffa 2000. Esper’s RCS chronology is only slightly different than the RCS chronology from Briffa (2000). The measurement data used in Briffa 2000 wasn’t archived, but based on the measurement data in Briffa 2008, it looks like the Briffa 2000 and Esper measurement datasets matched. RCS methods are pretty trivial mathematically and thus there isn’t a lot of difference between the two chronologies. This is a ring width chronology (not an MXD density chronology) and, in this case, wasn’t bodged.
The third record is also familiar to CA readers:
Taimyr Peninsula: This is from Naurzbaev et al. (2002) by way of Esper.
Taymir, like Tornetrask, is a Briffa 2000 site and used in most recent multiproxy reconstructions. The Esper version of measurement data is smaller than the Briffa 2008 version, which pulls in some Schweingruber sites (the Briffa 2000 version has never been archived, but is presumably fairly similar to the Esper version.)
The fourth record is a West Greenland isotope series from Fisher et al 1996. This is also used in virtually every multiproxy study: MBH99, Jones et al 1998, Mann and Jones 2003, Moberg 2005, etc.
The 5th record is a Chinese composite from Yang et al 2002, used in many studies. This is used in many recent multiproxy studies as well.
East Asia: This is the high-resolution record (10-yr average) from Yang et al. (2002).
Although this supposedly has a 10-year interval, it used the Thompson Dunde ice core previously smoothed to 50-year intervals.
The serial re-use of these proxies is very familiar to CA readers – a point confirmed by Wegman et al (2006).
Far from the Boulton hockey stick – a composite of the Mann PC1, Tornetrask, Taymir, West Greenland isotopes and the Yang composite – being “independent” of the controversial East Anglia reconstructions (regardless of whatever Boulton had in mind here precisely), the Boulton hockey stick is not “independent”.
Boulton and the Team that can’t shoot straight.
See CA category here for prior posts on Hegerl. | <urn:uuid:a616afd7-59e2-4528-a670-25a1ad5be2e5> | 3.125 | 1,484 | Personal Blog | Science & Tech. | 49.346565 |
Brief Summary
BiologyInformation on the biology of Bannerman's turaco is extremely limited, but fruits are believed to make up the bulk of its diet, primarily the fruits and berries of Podocarpus milanjensis, as well as figs (4). Breeding takes place during the early rainy season from March to June. Nests are rather flimsy platforms of twigs hidden in trees or bushes among a tangle of creepers or the thick foliage on outer branches, and have been recorded anywhere between 1.5 and 10 m above ground. Clutches typically contain two white eggs, which are incubated by both the male and female (4). | <urn:uuid:c5af61ee-e5ac-470f-a9ce-695f40a3d478> | 3.265625 | 140 | Knowledge Article | Science & Tech. | 44.52 |
GNU parallel is a shell tool for executing jobs in parallel locally or using remote computers. A job is typically a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. If you use xargs today you will find GNU parallel very easy to use, as GNU parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. If you use ppss or pexec you will find GNU parallel will often make the command easier to read. GNU parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU parallel as input for other programs.
Asymptote is a powerful descriptive 2D and 3D vector graphics language for technical drawing, inspired by MetaPost but with an improved C++-like syntax. It provides for figures the same high-quality level of typesetting that LaTeX does for scientific text. Asymptote is a programming language as opposed to just a graphics program. It can exploit the best features of script (command-driven) and graphical user interface (GUI) methods. High-level graphics commands are implemented in the language itself, allowing them to be easily tailored to specific applications.
tex-upmethodology provides a complete set of LaTeX styles that permit you to write documents according to a UP-based methodology. Its major features are document history, task management, design and specification documentation, and helping tools. tex-upmethodology is officially supported by CTAN.
TXR is a new data munging language to replace the likes of awk and Perl. TXR's special pattern language provides template-based matching of entire documents or large sections of documents. It also contains a language for functional and imperative programming. It is written in C and takes the form of a utility that is portable to Unix-like platforms and Windows.
AutoLaTeX is a tool for managing small to large sized LaTeX projects. The typical AutoLaTeX input file is simply a series of variable definitions in a Makefile for the project. This Makefile was automatically generated by a provided Perl script. The user can easily perform all required steps to do such tasks as preview the document or produce a PDF file. AutoLaTeX will keep track of files that have changed and how to run the various programs that are needed to produce the output. One of the best features of AutoLaTeX is to provide translation rules to automatically generate the figures which will be included into the PDF.
Recoll is a personal full text desktop search tool based on Xapian. It provides an easy to use, feature-rich, easy administration interface with a Qt-based GUI. Text, HTML, PDF, PostScript, MS Word, OpenOffice, Wordperfect, KWord, Abiword, maildir, and mailbox mail folder formats are supported, along with their compressed versions and quite a few others. Powerful query facilities are provided. Multiple character sets are supported, and internal processing and storage uses Unicode UTF-8. Stemming is performed at query time and the stemming language can be switched after indexing.
Template Data Interface (TDI, /ʹtedɪ/) is a markup templating system written in Python with (optional but recommended) speedup code written in C. Unlike most templating systems, TDI does not invent its own language to provide functionality. Instead, you simply mark the nodes you want to manipulate within the template document. The template is parsed, and the marked nodes are presented to your Python code, where they can be modified in any way you want.
Sanzang is a compact and simple cross-platform machine translation system. It is especially useful for translating from the CJK languages (Chinese, Japanese, and Korean), and it is very suitable for working with ancient and otherwise difficult texts. Unlike most other machine translation systems, Sanzang is small and approachable. Any user can develop his or her own translation rules, and these rules are simply stored in a text file and applied at runtime. | <urn:uuid:82f757d7-98cc-4924-846b-140f57122d32> | 2.90625 | 894 | Content Listing | Software Dev. | 40.325644 |
Saturday, October 13, 2007
Andrew Steckl has revealed an intriguing new piece of evidence that nature has more power than humans have yet to imagine. His work on photonics at the University of Cincinnati has focuses on intensifying the light produced by LEDs with biological material.
“Biological materials have many technologically important qualities — electronic, optical, structural, magnetic,” says Steckl. “DNA has certain optical properties that make it unique. It allows improvements in one to two orders of magnitude in terms of efficiency, light, brightness — because we can trap electrons longer.”
His big idea: using salmon sperm. His main focus is on creating products that are more environmentally sustainable. This material is a readily available by-product of the fishing industry, and is thrown away by the ton.
Steckl believes that the use of biological materials has the potential to improve all our current electronic technologies, plus close the loop between industry and waste.
For the original article link here
For the hype at Tree Hugger and ignorant comments from the peanut gallery link here | <urn:uuid:30bc5bff-cb34-4c9a-b194-c0868de26e57> | 3.09375 | 221 | Personal Blog | Science & Tech. | 25.30477 |
Dear forum members,
please can someone help me to understand how to do this problem
Determine the sum of all quotients where m and n are whole numbers and
I don't understand how to do this.
Any help is appreciated.
Thank you in advance!
Thank you so much!
Now I get the method. But won't it be quite a long calculation, even if using the arithmetic formula. My teacher wrote a "short cut" on a sample answer sheet as follows
the i-1 should be above the sigma sign, and the k=1 below, and there should be no equal sign before the k, but I just can't get it to work like that. Hope it is still clear enough.
Can you please help me figure out what he is trying to tell?
I noticed that the amount numbers in brackets after the fraction multiplier, are always one less in amount than the numerator of the fraction, but I don't know how to make use of that when creating a shortcut to calculate the sum.
Did you understand why he gave this formula? I'll try to explain a part of it...
This is because in a first time, you sum the fractions while variating the numerator, then when variating the denominator.
The numerator is, as Mr F mentioned the sum of the terms of an arithmetic sequence.
The thing is, normally, it's n(n+1)/2, with n the number above the summation sign (the upper limit of the sum).
So here, it's (i-1)(i-1+1)/2 according to the formula.
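In standard notation, the identity being used here (assuming the inner sum runs over k from 1 up to i-1) is:

    \sum_{k=1}^{i-1} k \;=\; \frac{(i-1)\,i}{2}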
So you can simplify the general term of the sequence.
And you now have to calculate | <urn:uuid:bdab3068-1c6b-4fe4-9bf3-bd059a206f13> | 2.96875 | 359 | Comment Section | Science & Tech. | 74.510622 |
Understanding where to locate celestial objects, and how those objects move
across the sky is fundamental to enjoying the hobby of astronomy. Most amateur
astronomers adopt the simple practice of "star-hopping" to locate
celestial objects by using star charts or astronomical software which identify
bright stars and star patterns (constellations) that serve as "road
maps" and "landmarks" in the sky. These visual reference
points guide amateur astronomers in their search for astronomical objects.
And, while star-hopping is the preferred technique, a discussion of using
setting circles for locating objects is desirable since your telescope is
provided with this feature. However, be advised, compared to star-hopping,
object location by use of setting circles requires a greater investment
in time and patience to achieve a more precise alignment of the telescope's
polar axis to the celestial pole. For this reason, in part, star-hopping
is popular because it is the faster, easier way to become initiated in the
IMPORTANT NOTICE! Never use a telescope or spotting scope to look at the Sun! Observing the Sun, even for the shortest fraction of a second, will cause irreversible damage to your eye as well as physical damage to the telescope or spotting scope itself.
The Celestial Sphere
Understanding how astronomical objects move: Due to the Earth's rotation,
celestial bodies appear to move from East to West in a curved path through
the skies. The path they follow is known as their line of Right Ascension
(R.A.). The angle of this path they follow is known as their line of Declination
(Dec.). Right Ascension and Declination is analogous to the Earth-based
coordinate system of latitude and longitude.
Understanding celestial coordinates: Celestial objects are mapped
according to the R.A. and Dec. coordinate system on the "celestial
sphere," the imaginary sphere on which all stars appear to be placed.
The Poles of the celestial coordinate system are defined as those 2 points
where the Earth's rotational axis, if extended to infinity, North and South,
intersect the celestial sphere. Thus, the North Celestial Pole is that point
in the sky where an extension of the Earth's axis through the North Pole
intersects the celestial sphere. In fact, this point in the sky is located
near the North Star, or Polaris.
On the surface of the Earth, "lines of longitude" are drawn between
the North and South Poles. Similarly, "lines of latitude" are
drawn in an East-West direction, parallel to the Earth's equator. The celestial
equator is simply a projection of the Earth's equator onto the celestial
sphere. Just as on the surface of the Earth, imaginary lines have been drawn
on the celestial sphere to form a coordinate grid. Celestial object positions
are specified on this grid much as positions on the Earth's surface are
specified by their latitude and longitude.
The celestial equivalent to Earth latitude is called "Declination,"
or simply "Dec," and is measured in degrees, minutes or seconds
north ("+") or south ("-") of the celestial equator.
Thus any point on the celestial equator (which passes, for example, through
the constellations Orion, Virgo and Aquarius) is specified as having 0°0'0"
Declination. The Declination of the star Polaris, located very near the
North Celestial Pole, is +89.2°.
The celestial equivalent to Earth longitude is called "Right Ascension,"
or "R.A." and is measured in hours, minutes and seconds from an
arbitrarily defined "zero" line of R.A. passing through the constellation
Pegasus. Right Ascension coordinates range from 0hr0min0sec up to (but not
including) 24hr0min0sec. Thus there are 24 primary lines of R.A., located
at 15 degree intervals along the celestial equator. Objects located further
and further east of the prime (0h0m0s) Right Ascension grid line carry increasing
With all celestial objects therefore capable of being specified in position
by their celestial coordinates of Right Ascension and Declination, the task
of finding objects (in particular, faint objects) in the telescope can be
simplified. The setting circles, R.A. and Dec. of the telescope may be dialed,
in effect, to read the object's coordinates, positioning the object in the
vicinity of the telescopic field of view. However, these setting
circles may be used to advantage only if the telescope is first properly
aligned with the North Celestial Pole.
Lining Up with the Celestial Pole
Objects in the sky appear to revolve around the celestial pole. (Actually,
celestial objects are essentially "fixed," and their apparent
motion is caused by the Earth's axial rotation). During any 24 hour period,
stars make one complete revolution about the pole, making concentric circles
with the pole at the center. By lining up the telescope's polar axis with
the North Celestial Pole (or for observers located in Earth's Southern Hemisphere
with the South Celestial Pole), astronomical objects may be followed, or
tracked, simply by moving the telescope about one axis, the polar axis.
If the telescope is reasonably well aligned with the pole, therefore, very
little use of the telescope's Declination flexible cable control is necessary; virtually
all of the required telescope tracking will be in Right Ascension. (If the
telescope were perfectly aligned with the pole, no Declination
tracking of stellar objects would be required). For the purposes of casual
visual telescopic observations, lining up the telescope's polar axis to
within a degree or two of the pole is more than sufficient: with this level
of pointing accuracy, the telescope can track accurately by slowly turning
the telescope's R.A. flexible cable control and keep objects in the telescopic
field of view for perhaps 20 to 30 minutes. | <urn:uuid:fb7c1e8b-b98b-4436-a7d1-f4f5a73bcda2> | 3.90625 | 1,256 | Documentation | Science & Tech. | 39.860686 |
Mantle dynamics, uplift of the Tibetan Plateau, and the Indian Monsoon
Article first published online: 14 JUN 2010
Copyright 1993 by the American Geophysical Union.
Reviews of Geophysics
Volume 31, Issue 4, pages 357–396, November 1993
How to Cite
Molnar, P., P. England, and J. Martinod (1993), Mantle dynamics, uplift of the Tibetan Plateau, and the Indian Monsoon, Rev. Geophys., 31(4), 357–396, doi:10.1029/93RG02030.
- Issue published online: 14 JUN 2010
- Article first published online: 14 JUN 2010
Convective removal of lower lithosphere beneath the Tibetan Plateau can account for a rapid increase in the mean elevation of the Tibetan Plateau of 1000 m or more in a few million years. Such uplift seems to be required by abrupt tectonic and environmental changes in Asia and the Indian Ocean in late Cenozoic time. The composition of basaltic volcanism in northern Tibet, which apparently began at about 13 Ma, implies melting of lithosphere, not asthenosphere. The most plausible mechanism for rapid heat transfer to the midlithosphere is by convective removal of deeper lithosphere and its replacement by hotter asthenosphere. The initiation of normal faulting in Tibet at about 8 (± 3) Ma suggests that the plateau underwent an appreciable increase in elevation at that time. An increase due solely to the isostatic response to crustal thickening caused by India's penetration into Eurasia should have been slow and could not have triggered normal faulting. Another process, such as removal of relatively cold, dense lower lithosphere, must have caused a supplemental uplift of the surface. Folding and faulting of the Indo-Australian plate south of India, the most prominent oceanic intraplate deformation on Earth, began between about 7.5 and 8 Ma and indicates an increased north-south compressional stress within the Indo-Australian plate. A Tibetan uplift of only 1000 m, if the result of removal of lower lithosphere, should have increased the compressional stress that the plateau applies to India and that resists India's northward movement, from an amount too small to fold oceanic lithosphere, to one sufficient to do so. The climate of the equatorial Indian Ocean and southern Asia changed at about 6–9 Ma: monsoonal winds apparently strengthened, northern Pakistan became more arid, but weathering of rock in the eastern Himalaya apparently increased. Because of its high altitude and lateral extent, the Tibetan Plateau provides a heat source at midlatitudes that should oppose classical (symmetric) Hadley circulation between the equator and temperate latitudes and that should help to drive an essentially opposite circulation characteristic of summer monsoons. For the simple case of axisymmetric heating (no dependence on longitude) of an atmosphere without dissipation, theoretical analyses by Hou, Lindzen, and Plumb show that an axisymmetric heat source displaced from the equator can drive a much stronger meridional (monsoonlike) circulation than such a source centered on the equator, but only if heating exceeds a threshold whose level increases with the latitude of the heat source. Because heating of the atmosphere over Tibet should increase monotonically with elevation of the plateau, a modest uplift (1000–2500 m) of Tibet, already of substantial extent and height, might have been sufficient to exceed a threshold necessary for a strong monsoon. The virtual simultaneity of these phenomena suggests that uplift was rapid: approximately 1000 m to 2500 m in a few million years. Moreover, nearly simultaneously with the late Miocene strengthening of the monsoon, the calcite compensation depth in the oceans dropped, plants using the relatively efficient C4 pathway for photosynthesis evolved rapidly, and atmospheric CO2 seems to have decreased, suggesting causal relationships and positive feedbacks among these phenomena.
Both a supplemental uplift of the Himalaya, the southern edge of Tibet, and a strengthened monsoon may have accelerated erosion and weathering of silicate rock in the Himalaya that, in turn, enhanced extraction of CO2 from the atmosphere. Thus these correlations offer some support for links between plateau uplift, a downdrawing of CO2 from the atmosphere, and global climate change, as proposed by Raymo, Ruddiman, and Froehlich. Mantle dynamics beneath mountain belts not only may profoundly affect tectonic processes near and far from the belts, but might also play an important role in altering regional and global climates. | <urn:uuid:4e54af9d-fbbf-4b9c-9e2e-4da8d908821c> | 2.734375 | 935 | Academic Writing | Science & Tech. | 21.328126 |
Colorado Insects of Interest
Scientific Name: Araneus gemmoides Chamberlin and Ivie
Class: Arachnida (Arachnids)
Order: Araneae (Spiders)
Family: Araneidae (Orbweaver spiders)
Identification and Descriptive Features: The large, full-grown females (Figure 1) are the stage most often seen. These are about 5-7 mm long and 4.5-5.5 mm wide. Males (Figure 7) are considerably smaller, about half as large. Both sexes have a bulbous abdomen with a pair of projections on the front. A faint white line runs down the midline of the front of the abdomen and is usually crossed with small V-shaped markings. Overall coloration is highly variable and ranges from straw-colored (Figure 2) to dark grayish brown (Figure 3). The combination of the abdominal projections, dimples and markings leads to the common name “cat-faced spider”. Araneus gemmoides is also sometimes referred to as a “monkey-faced spider”.
The cat-faced spider captures prey by use of a sticky web that it usually establishes among vegetation a few feet above ground. The web is of concentric design, typical of those spiders in the orb-weaver spider family, with spiraling sticky coils. As the web is damaged, it may be torn down, consumed and reconstructed regularly. The spiders may also relocate their webs repeatedly. Late in the season cat-faced spiders are most often seen near porch lights or just outside windows, areas that are likely to attract flying insects.
During the day the spiders sometimes may be seen in the center area of the web or even at work in its repair. However, usually they remain within an area of retreat at a corner of the web, often hidden. While waiting within the retreat a leg maintains contact with a thread of the web to detect vibrations indicating snared prey. When an insect does get stopped in the web the spider moves out, swaths it with sheets of silk, then paralyzes it with its digestive saliva. The prey is then usually carried back to the area where the spider later consumes it.
Cat-faced spiders do have many natural enemies including various insects and other spiders that may feed on them. Probably the most conspicuous predator is the black and yellow mud dauber (Sceliphron caementarium), which captures spiders, paralyzes them with a sting, and uses them to provision their mud nest cells (Figure 8).
If handled, a mature cat-faced spider may give a sharp pinch of a bite, although it cannot normally pierce the skin. Furthermore, it is not a dangerous species and does not possess venom that produces any serious effects on humans.
Related Species: Many orbweaver spiders (Araneidae family) occur in Colorado and make their characteristic patterned webs amongst vegetation. At least four other Araneus species occur in the state, but none are nearly as commonly encountered as is the cat-faced spider. Other orbweaver spiders that are common primarily occur in the genera Neoscona (Figure 9), Aculepeira and Argiope.
The information herein is supplied with the understanding that no discrimination is intended and that listing of commercial products, necessary to this guide, implies no endorsement by the authors or the Extension Services of Nebraska, Colorado, Wyoming or Montana. Criticism of products or equipment not listed is neither implied nor intended. Due to constantly changing labels, laws and regulations, the Extension Services can assume no liability for the suggested use of chemicals contained herein. Pesticides must be applied legally complying with all label directions and precautions on the pesticide container and any supplemental labeling and rules of state and federal pesticide regulatory agencies. State rules and regulations and special pesticide use allowances may vary from state to state: contact your State Department of Agriculture for the rules, regulations and allowances applicable in your state and locality. | <urn:uuid:96267b75-a798-440c-a99b-371cd628c213> | 3.671875 | 818 | Knowledge Article | Science & Tech. | 34.722927 |
Sodom’s apple milkweed (Calotropis procera)
Sodom’s apple milkweed fact file
Sodom’s apple milkweed description
Growing as a spreading shrub or a small tree, Sodom's apple milkweed (Calotropis procera) has simple stems with only a few branches, which are light grey-green in colour and covered in a fissured, corky bark (3) (4) (5) (6) (7). The fairly large, grey-green leaves grow in opposite pairs along the stems and are smooth, with a pointed tip and heart-shaped base (2) (3) (4) (6) (7). The large, waxy, white flowers have deep purple spots or blotches at the base of each of the five petals, and are grouped in clusters, known as umbels (3) (4) (6) (7). Sodom's apple milkweed produces a simple, fleshy fruit in a grey-green inflated pod, containing numerous flat, brown seeds with tufts of long, white silky hair (‘pappus’) at one end. Sodom's apple milkweed exudes a milky white sap (latex) when the plant is cut or broken, which although toxic is widely used in many traditional medicines (2) (3) (4) (5) (6) (7).
- Also known as
- apple of Sodom, ashkhar, desert apple, giant milkweed, rubber bush, Sodom apple, Sodom's milkweed.
- Height: 2.5 – 6 m (2)
Glossary
- Cross-pollination: the transfer of pollen between flowers on different plants.
- Naturalised: term used to describe a species that was originally introduced from another country, but becomes established, maintains itself and invades native populations.
- Perennial: a plant that normally lives for more than two seasons. After an initial period, the plant produces flowers once a year.
- Umbel: in plants, a usually umbrella-shaped flower cluster in which the individual flower stalks originate at roughly the same point.
References
- UNEP-WCMC (November, 2010)
- Orwa, C., Mutua, A., Kindt, R., Jamnadass, R. and Simons, A. (2009) Calotropis procera. Agroforestree Database: A Tree Reference and Selection Guide. World Agroforestry Centre, Kenya.
- Hawaiian Ecosystems at Risk project (HEAR) (November, 2010)
- Brandes, D. (2005) Calotropis procera on Fuerteventura. Working Group for Vegetation Ecology, Institute of Plant Biology, Technical University Braunschweig, Germany.
- Ecocrop (November, 2010)
- Francis, J.K. (2003) Calotropis procera. U.S. Department of Agriculture, Forest Service, International Institute of Tropical Forestry, Puerto Rico.
- Parsons, W.T. and Cuthbertson, E.G. (2001) Noxious Weeds of Australia. CSIRO Publishing, Australia.
- Germplasm Resources Information Network (GRIN) (November, 2010)
Sodom’s apple milkweed biology
Sodom's apple milkweed is a perennial species (5) and flowering and fruiting occurs throughout the year, with each plant producing hundreds to thousands of seeds which are dispersed by the wind. After rainy periods, Sodom's apple milkweed seedlings will emerge in large numbers, although very few will grow to reach maturity (6). Cross-pollination occurs by insects, particularly by species such as the monarch butterfly (Danaus plexippus), which uses Sodom's apple milkweed as a host plant for various stages of its life cycle (2). The latex produced by Sodom's apple milkweed is toxic when ingested by mammals, affecting the heart, as well as causing nausea and vomiting (7).
Sodom’s apple milkweed range
Sodom's apple milkweed is native to much of Africa (excluding south and central Africa), the Arabian Peninsula and southern Asia. Sodom's apple milkweed has also been introduced, and is now naturalised, in Australia, many Pacific islands, Central and South America, South Africa and the Caribbean Islands (2) (3) (4) (6) (7) (8).
Sodom’s apple milkweed habitat
A drought-resistant, salt-tolerant species, Sodom's apple milkweed grows in open habitats and is particularly common in overgrazed pastures and on poor soils where there is little competition from grasses. Sodom's apple milkweed is also found along roadsides, watercourses, river flats and coastal dunes, and is often prevalent in disturbed areas (2) (3) (5) (6).
Sodom’s apple milkweed status
Sodom's apple milkweed has yet to be classified by the IUCN.
Sodom’s apple milkweed threats
There are no known threats to Sodom's apple milkweed.
Sodom’s apple milkweed conservation
There are no known conservation measures in place for Sodom's apple milkweed.
by Professor Paul Weiss
Designing, synthesizing, assembling, operating, and measuring molecular
devices give us the ability to study the ultimate limits of function.1-4 We seek to understand the rules and limits associated
with such devices given that we are able to know the precise positions and
connections of all the atoms in the entire system. Experiments, theory, and
simulations are used in concert to this end.
While no one has come up with the means to "wire" these molecules and
assemblies into functional architectures for technological use, we can explore
how far precise structures might be pushed, test the extent to which cross-talk
and interference of densely packed devices affect performance, and elucidate the
role of environment on function and stability.
New nanoscale tools have enabled our exploration and ultimately the
manipulation of the atomic-scale world. In addition to structural measurements,
acquiring local spectra across the rotational, vibrational, and electronic
energy ranges has become possible. Local "action spectra" can be used to
determine the energy thresholds for motion and other dynamics. Functional
measurements at these scales in combination with the above are now enabling
comprehensive views of the relationships between molecular structure, assembly
and interactions, and operation.4
As nanoscale tools become more sophisticated, so does our ability to design
and to assemble increasingly complex, precise supramolecular structures and
devices. "Isolated" species on surfaces and in controlled matrices can be made
to function in predictable ways and with high efficiencies.2,3 The situation changes when a number of
functional or interacting components are placed together. Such interactions can
be designed to stabilize an assembly or a state of the system, or even to
interrogate the system by constraining spatial relationships. All of these
possibilities mimic biological components and systems. We are currently limited
by our ability to probe these systems.
Key to the future will be understanding and exploiting such interactions in
order to enable higher order function and greater complexity.5 This may need to include controlling precise spacings in
order to limit cross-talk, excitation transfer, and mechanical interference
(steric hindrance). Likewise, molecular and supramolecular systems may
ultimately be designed to function hierarchically with efficiencies rivaling
those found in biological systems.
1. Z. J. Donhauser, B. A. Mantooth, K.
F. Kelly, L. A. Bumm, J. D. Monnell, J. J. Stapleton, D. W. Price Jr., D. L.
Allara, J. M. Tour, and P. S. Weiss, Conductance Switching in Single Molecules
through Conformational Changes, Science 292, 2303 (2001).
2. P. S. Weiss, Functional Molecules and Assemblies in
Controlled Environments: Formation and Measurements, Accounts of Chemical
Research 41, 1772 (2008).
3. A. S.
Kumar, T. Ye, T. Takami, B.-C. Yu, A. K. Flatt, J. M. Tour, and P. S. Weiss,
Reversible Photo-Switching of Single Azobenzene Molecules in Controlled
Nanoscale Environments, Nano Letters 8, 1644 (2008).
4. A. M. Moore and P. S. Weiss, Functional and Spectroscopic
Measurements with Scanning Tunneling Microscopy, Annual Reviews of Analytical
Chemistry 1, 857 (2008).
5. D. B. Li, R. Baughman, T. J. Huang,
J. F. Stoddart, and P. S. Weiss, Molecular, Supramolecular, and Macromolecular
Motors and Artificial Muscles, MRS Bulletin 34, 671 (2009).
Our work in this area is currently supported by the National Science Foundation,
the Department of Energy, and the Kavli Foundation.
Copyright AZoNano.com, Professor Paul S. Weiss (University of
California, Los Angeles) | <urn:uuid:dbd5360d-d2f8-4d39-85f7-80f13a868a29> | 2.765625 | 880 | Academic Writing | Science & Tech. | 42.233897 |
Cold-blooded organisms, more technically known as poikilothermic, are animals that have no internal metabolic mechanism for regulating their body temperatures. Some (usually smaller) animals have unregulated temperatures, but most have sophisticated physiological and behavioral techniques for obtaining their desired core body temperature from the environment. Cold-blooded animals are often referred to as ectotherms.
Ectotherms depend largely on external sources of heat, such as solar radiation. As the environmental temperature increases, the animal's metabolic rate will increase. Lizards, fish, and amphibians are examples of ectotherms. Whereas an endotherm, or warm-blooded animal, will use up to 98% of its energy for heat production, an ectotherm has all this energy available for activity, growth, repair and reproduction.
Examples of this temperature control include:
Many homeothermic, or warm-blooded, animals also make use of these techniques at times. For example, all animals are at risk of overheating on hot days in the desert sun, and most homeothermic animals can shiver.
Poikilotherms often have more complex metabolisms than homeotherms. For an important chemical reaction, poikilotherms may have four to ten enzyme systems that operate at different temperatures. As a result, poikilotherms often have larger, more complex genomes than homeotherms in the same ecological niche. Frogs are a notable example of this effect.
Because their metabolism is so variable, poikilothermic animals do not easily support complex, high-energy organ systems such as brains or wings. Some of the most complex adaptations known involve poikilotherms with such organ systems. One example is the swimming muscles of Tuna, which are warmed by a heat exchanger. In general, poikilothermic animals do not use their metabolisms to heat or cool themselves. For the same body weight poikilotherms need 1/3 to 1/10 of the energy of homeotherms. They therefore eat only 1/3 to 1/10 of the food needed by homeothermic animals.
It is comparatively easy for a poikilotherm to accumulate enough energy to reproduce. Poikilotherms in the same ecological niche often have much shorter generations than homeotherms: weeks rather than years.
This energy difference also means that a given niche of a given ecology can support three to ten times the number of poikilothermic animals as homeothermic animals. However, in a given niche, homeotherms often drive poikilothermic competitors to extinction because homeotherms can gather food for a greater fraction of each day.
Poikilotherms succeed in some niches, such as islands, or distinct bioregions (such as the small bioregions of the Amazon basin). These often do not have enough food to support a viable breeding population of homeothermic animals. In these niches, poikilotherms such as large lizards, crabs and frogs supplant homeotherms such as birds and mammals. | <urn:uuid:faae1b54-0280-4571-9ca0-41d51efd7798> | 4.09375 | 633 | Knowledge Article | Science & Tech. | 22.250668 |
The metal in main-group organometallic compounds can be any of the elements in the s block (i.e., groups 1 and 2) or any of the heavier elements in groups 13 through 15. (Groups 13–18 constitute the p block.) The elements at the borderline between the d block and p block—namely, zinc, cadmium, and mercury—will be discussed along with the...
In the mathematical subfield of numerical analysis, numerical stability is a desirable property of numerical algorithms. The precise definition of stability depends on the context, but it is related to the accuracy of the algorithm.
Sometimes a single calculation can be achieved in several ways, all of which are algebraically equivalent in terms of ideal real or complex numbers, but in practice when performed on digital computers yield different results. Some calculations might damp out approximation errors that occur; others might magnify such errors. Calculations that do not magnify approximation errors are called numerically stable. One of the common tasks of numerical analysis is to try to select algorithms which are robust — that is to say, have good numerical stability.
As an example of an unstable algorithm, consider the task of adding an array of 100 numbers. To simplify things, assume our computer only has two digits of precision (for example, you can only represent numbers in the hundreds as 100, 110, 120, etc.).
The obvious way to do this would be the following pseudo-code:

    sum = 0
    for i = 1 to 100 do
        sum = sum + a[i]
    end
That looks reasonable, but assume the first element in the array is 1.0 and the other 99 elements are 0.01. In pure math, the answer would be 1.99. However, on our two-digit computer, once the 1.0 was added into the sum variable, adding in 0.01 would have no effect on the sum, and so the final answer would be 1.0 – not a very good approximation of the real answer.
A stable algorithm would first sort the array by the absolute values of the elements in ascending order. This ensures that the numbers closest to zero will be taken into consideration first. Once that change is made, all of the 0.01 elements will be added, giving 0.99, and then the 1.0 element will be added, yielding a rounded result of 2.0 – a much better approximation of the real result.
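A rough sketch of this example in Python (the two-digit machine is imitated by rounding every intermediate sum to two significant digits; the helper names are invented for illustration):

    from math import floor, log10

    def round2(x):
        # Round x to two significant digits, as the hypothetical two-digit computer would.
        if x == 0.0:
            return 0.0
        return round(x, int(1 - floor(log10(abs(x)))))

    def machine_sum(values):
        total = 0.0
        for v in values:
            total = round2(total + v)    # every intermediate result is rounded
        return total

    data = [1.0] + [0.01] * 99                     # the exact sum is 1.99
    print(machine_sum(data))                       # naive order: 1.0, the small terms are lost
    print(machine_sum(sorted(data, key=abs)))      # ascending order: 2.0, the rounded value of 1.99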
There are different ways to formalize the concept of stability. The following definitions of forward, backward, and mixed stability are often used in numerical linear algebra.
Consider the problem to be solved by the numerical algorithm as a function f mapping the data x to the solution y. The result of the algorithm, say y*, will usually deviate from the "true" solution y. The main causes of error are round-off error, truncation error and data error. The forward error of the algorithm is the difference between the result and the solution; in this case, Δy = y* − y. The backward error is the smallest Δx such that f(x + Δx) = y*; in other words, the backward error tells us what problem the algorithm actually solved. The forward and backward error are related by the condition number: the forward error is at most as big in magnitude as the condition number multiplied by the magnitude of the backward error.
In many cases, it is more natural to consider the relative error |Δx| / |x| instead of the absolute error Δx.
The algorithm is said to be backward stable if the backward error is small for all inputs x. Of course, "small" is a relative term and its definition will depend on the context. Often, we want the error to be of the same order as, or perhaps only a few orders of magnitude bigger than, the unit round-off.
The usual definition of numerical stability uses a more general concept, called mixed stability, which combines the forward error and the backward error. An algorithm is stable in this sense if it solves a nearby problem approximately, i.e., if there exists a Δx such that both Δx is small and f(x + Δx) − y* is small. Hence, a backward stable algorithm is always stable.
An algorithm is forward stable if its forward error divided by the condition number of the problem is small. This means that an algorithm is forward stable if it has a forward error of magnitude similar to some backward stable algorithm.
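As a concrete (and deliberately simple) illustration of forward versus backward error, not tied to any particular library, consider approximating a square root with a low-precision result:

    from math import sqrt

    x = 2.0
    y_true = sqrt(x)       # the exact solution, up to machine precision
    y_star = 1.414         # result of a hypothetical low-precision algorithm

    forward_error = y_star - y_true    # how far the computed answer is from the true one
    x_star = y_star ** 2               # the data whose exact square root is y_star
    backward_error = x_star - x        # the perturbation of the data that was actually solved

    print(forward_error)     # about -2.1e-4
    print(backward_error)    # about -6.0e-4: we solved sqrt(1.999396) exactly

Here the relative forward error (about 1.5e-4) is the condition number of the square root (1/2) times the relative backward error (about 3.0e-4), consistent with the relation stated above.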
The above definitions are particularly relevant in situations where truncation errors are not important. In other contexts, for instance when solving differential equations, a different definition of numerical stability is used.
In numerical ordinary differential equations, various concepts of numerical stability exist, for instance A-stability. They are related to some concept of stability in the dynamical systems sense, often Lyapunov stability. It is important to use a stable method when solving a stiff equation.
Yet another definition is used in numerical partial differential equations. An algorithm for solving an evolutionary partial differential equation is stable if the numerical solution at a fixed time remains bounded as the step size goes to zero. The Lax equivalence theorem states that an algorithm converges if it is consistent and stable (in this sense). Stability is sometimes achieved by including numerical diffusion. Numerical diffusion is a mathematical term which ensures that roundoff and other errors in the calculation get spread out and do not add up to cause the calculation to "blow up".
Signals formed from random processes usually have a bell shaped pdf. This is called a normal distribution, a Gauss distribution, or a Gaussian, after the great German mathematician, Karl Friedrich Gauss (1777-1855). The reason why this curve occurs so frequently in nature will be discussed shortly in conjunction with digital noise generation. The basic shape of the curve is generated from a negative squared exponent:

    y(x) = e^(-x²)
This raw curve can be converted into the complete Gaussian by adding an adjustable mean, μ, and standard deviation, σ. In addition, the equation must be normalized so that the total area under the curve is equal to one, a requirement of all probability distribution functions. This results in the general form of the normal distribution, one of the most important relations in statistics and probability:

    P(x) = (1 / (√(2π) σ)) e^(-(x-μ)² / (2σ²))
Figure 2-8 shows several examples of Gaussian curves with various means and standard deviations. The mean centers the curve over a particular value, while the standard deviation controls the width of the bell shape.
An interesting characteristic of the Gaussian is that the tails drop toward zero very rapidly, much faster than with other common functions such as decaying exponentials or 1/x. For example, at two, four, and six standard
deviations from the mean, the value of the Gaussian curve has dropped to about 1/19, 1/7563, and 1/166,666,666, respectively. This is why normally distributed signals, such as illustrated in Fig. 2-6c, appear to have an approximate peak-to-peak value. In principle, signals of this type can experience excursions of unlimited amplitude. In practice, the sharp drop of the Gaussian pdf dictates that these extremes almost never occur. This results in the waveform having a relatively bounded appearance with an apparent peak-to-peak amplitude of about 6-8σ.
As previously shown, the integral of the pdf is used to find the probability that a signal will be within a certain range of values. This makes the integral of the pdf important enough that it is given its own name, the cumulative distribution function (cdf). An especially obnoxious problem with the Gaussian is that it cannot be integrated using elementary methods. To get around this, the integral of the Gaussian can be calculated by numerical integration. This involves sampling the continuous Gaussian curve very finely, say, a few million points between -10σ and +10σ. The samples in this discrete signal are then added to simulate integration. The discrete curve resulting from this simulated integration is then stored in a table for use in calculating probabilities.
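A minimal sketch of that table-building procedure (Python with NumPy; the grid size and interpolation are illustrative choices, not the only way to do it):

    import numpy as np

    x = np.linspace(-10.0, 10.0, 2_000_001)          # fine grid, in units of the standard deviation
    pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)     # standard normal pdf (mean 0, sigma 1)
    cdf = np.cumsum(pdf) * (x[1] - x[0])             # running sum of samples approximates the integral

    def phi(z):
        # Look up the tabulated cumulative distribution at z.
        return float(np.interp(z, x, cdf))

    print(phi(-2))            # roughly 0.0228
    print(phi(1) - phi(-2))   # roughly 0.8186, the ~81.85% figure worked out below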
The cdf of the normal distribution is shown in Fig. 2-9, with its numeric values listed in Table 2-5. Since this curve is used so frequently in probability, it is given its own symbol: Φ(x) (upper case Greek phi). For example, Φ(-2) has a value of 0.0228. This indicates that there is a 2.28% probability that the value of the signal will be between -∞ and two standard deviations below the mean, at any randomly chosen time. Likewise, the value: Φ(1) = 0.8413, means there is an 84.13% chance that the value of the signal, at a randomly selected instant, will be between -∞ and one standard deviation above the mean. To calculate the probability that the signal will be between two values, it is necessary to subtract the appropriate numbers found in the Φ(x) table. For example, the probability that the value of the signal, at some randomly chosen time, will be between two standard deviations below the mean and one standard deviation above the mean, is given by: Φ(1) - Φ(-2) = 0.8185 or 81.85%
Using this method, samples taken from a normally distributed signal will be within ±1σ of the mean about 68% of the time. They will be within ±2σ about 95% of the time, and within ±3σ about 99.75% of the time. The probability of the signal being more than 10 standard deviations from the mean is so minuscule, it would be expected to occur for only a few microseconds since the beginning of the universe, about 10 billion years!
Equation 2-8 can also be used to express the probability mass function of normally distributed discrete signals. In this case, x is restricted to be one of the quantized levels that the signal can take on, such as one of the 4096 binary values exiting a 12 bit analog-to-digital converter. Ignore the 1/(√(2π)σ) term; it is only used to make the total area under the pdf curve equal to one. Instead, you must include whatever term is needed to make the sum of all the values in the pmf equal to one. In most cases, this is done by
generating the curve without worrying about normalization, summing all of the unnormalized values, and then dividing all of the values by the sum. | <urn:uuid:b26f49de-f35d-4e82-a975-9c27fcd38005> | 4.0625 | 1,036 | Academic Writing | Science & Tech. | 52.670961 |
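For instance, a sketch of that normalization step for a quantized signal (the 12-bit levels come from the text above; the mean and standard deviation here are arbitrary illustrative values):

    import numpy as np

    levels = np.arange(4096)                               # the 4096 codes of a 12 bit converter
    mu, sigma = 2048.0, 300.0                              # assumed mean and standard deviation
    pmf = np.exp(-(levels - mu) ** 2 / (2 * sigma ** 2))   # unnormalized bell shape
    pmf = pmf / pmf.sum()                                  # divide by the total so the values sum to one

    print(pmf.sum())    # 1.0, up to floating point round-off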
Tue, 11 Dec 2001 10:42:55 +0000
> Simon> gcd x y is the greatest POSITIVE integer that divides
> Simon> both x and y.
> I find it confusing to read a definition which contains redundant
> information. Instead, I'd suggest to add something like:
> "Note: this number is always positive"
Or, perhaps easier on the eye,
"gcd x y is the greatest (positive) integer that divides both x and y."
Keith Wansbrough <email@example.com>
University of Cambridge Computer Laboratory. | <urn:uuid:54defbf9-a3af-47bc-a7a6-293bd549cc34> | 3.171875 | 130 | Comment Section | Software Dev. | 58.4075 |
The seasons are caused by the tilt of Earth's axis (23.4°) and not by the fact that Earth's orbit around the Sun is an ellipse. The average distance of Earth from the Sun is 93 million miles; the difference between aphelion (farthest away from the Sun) and perihelion (closest to the Sun) is 3 million miles, so that perihelion is about 91.4 million miles from the Sun. Earth goes through the perihelion point a few days after New Year's Day, just when the Northern Hemisphere has winter. Aphelion is passed during the first days of July. This by itself shows that the distance from the Sun is not important within these limits. What is important is that when Earth passes through perihelion, the northern end of Earth's axis happens to tilt away from the Sun, so that the areas beyond the Tropic of Cancer receive only slanting rays from a Sun low in the sky.
The tilt of Earth's axis is responsible for four lines you find on every globe. When, say, the North Pole is tilted away from the Sun as much as possible, the farthest points in the North that can still be reached by the Sun's rays are 23.4° from the pole. This is the Arctic Circle. The Antarctic Circle is the corresponding limit 23.4° from the South Pole; the Sun's rays cannot reach beyond this point when we have midsummer in the North.
When the Sun is vertically above the equator, the day is of equal length all over Earth. This happens twice a year, and these are the “equinoxes” in March and in September. After having been over the equator in March, the Sun will seem to move northward. The northernmost point where the Sun can be straight overhead is 23.4° north of the equator. This is the Tropic of Cancer; the Sun can never be vertically overhead to the north of this line. Similarly the Sun cannot be vertically overhead to the south of a line 23.4° south of the equator—the Tropic of Capricorn.
This explains the climatic zones. In the belt (the Greek word zone means “belt”) between the Tropic of Cancer and the Tropic of Capricorn, the Sun can be straight overhead; this is the tropical zone. The two zones where the Sun cannot be overhead but will be above the horizon every day of the year are the two temperate zones; the two areas where the Sun will not rise at all for varying lengths of time are the two polar areas, Arctic and Antarctic.
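The geometry above can be turned into a rough rule of thumb for where the Sun is overhead on a given date. The sketch below uses a common cosine approximation and the 23.4° tilt; it is illustrative only, not a precise ephemeris:

    from math import cos, radians

    TILT = 23.4    # axial tilt in degrees

    def subsolar_latitude(day_of_year):
        # Latitude where the Sun is overhead; day 1 is January 1. The curve swings
        # between about -23.4 (Tropic of Capricorn, near the December solstice)
        # and about +23.4 (Tropic of Cancer, near the June solstice).
        return -TILT * cos(radians(360.0 / 365.0 * (day_of_year + 10)))

    print(round(subsolar_latitude(80), 1))     # close to 0 near the March equinox
    print(round(subsolar_latitude(172), 1))    # about +23.4 near the June solstice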
Information Please® Database, © 2007 Pearson Education, Inc. All rights reserved.
A Java applet is a small piece of code that works as an individual application. It is delivered as Java bytecode, the compiled instruction format that the Java Virtual Machine (JVM) executes, so applets rely on the JVM to run. However, you can also run them in standalone tools such as the Sun Applet Viewer to test applets. Another advantage of applets is that you don't have to write the source code in Java itself: all you need is a programming language that compiles to Java bytecode, like Jython.
A Little History about Java Applets
Java applets were not introduced at the same time the Java language itself came into being; they evolved over the following few years, and final, working versions of Java applets were released by 1995. Initially the applications were basic, but developers and experts alike were quick to notice the potential of applets, and over time more and more applets with extensive applications came to the fore.
What Exactly Are Applets?
Basically, Java applets are used to provide an interactive user experience on websites. Websites are usually designed using markup and styling languages such as HTML and CSS, which cannot by themselves support this kind of interactivity; integrating Java applets can provide it. That is because the Java bytecode that developers use to build applets is independent of the platform it runs on. So, irrespective of whether it is a Windows, Mac, Linux or UNIX system, the performance of the applets is not affected.
Also, you don’t have to make any sort of investment if you want to write the code for Java applets. True, you have to learn java programming but once you have done that, there are a number of open source tools to compile and test your applets. In fact, you can develop production level java applets using these open source tools.
Most web browsers run Java applets in a sandbox. Basically, what this means is that the browser does not give the applet access to local data, that is, data present on the computer. When you access a Java applet, the code for the applet is downloaded from a web server and embedded into the web page that the browser loads. The page itself remains HTML, while the applet is embedded in it as Java bytecode.
Java Applet Advantages
Firstly, Java applets are platform independent, so they work on almost all operating systems. Applets also don't have compatibility issues with different versions of Java: irrespective of whether you are using Java 4, 5 or 6, applets work just fine. They are also supported by almost all the major web browsers. There will be little delay in loading Java applets because they cache quickly, and the speed with which applets are executed is also high when compared to other programming languages such as C++. An applet can also be used as a real-time application.
16 April 2009
Gran Sasso National Laboratory survives the Italian earthquake
Last week a major earthquake struck Italy about 10 kilometres from the Laboratori Nazionali del Gran Sasso (Gran Sasso National Laboratory). The inhabitants of the city of L'Aquila nearby the laboratory suffered numerous injuries and loss of lives, as well as extensive damage to buildings, homes and historic structures. I know L’Aquila very well, having stayed there many times while I was performing an experiment at the Gran Sasso National Laboratory. I feel very badly for the friendly people of L’Aquila and I hope there will be a rapid recovery and rebuilding of this charming old Italian city. The Gran Sasso National Laboratory, the most ambitious deep underground research facility in the world, suffered no significant damage, and for that we can be thankful.
|Aerial view of the Gran Sasso Mountain and external laboratories|
|Zichichi (right) discussing plans for the Gran Sasso National Laboratory|
The Gran Sasso National Laboratory is the creation of Antonino (Nino) Zichichi. He proposed in 1979 to build an underground laboratory close to the Gran Sasso highway tunnel. In 1982 the Italian Parliament approved construction and the scientific facilities were completed in 1987. Creating an all-purpose laboratory deep underground was a very new and forward-looking concept at that time. Perhaps the most prescient idea of Zichichi's was to orient the experimental halls towards CERN, enabling the detection of neutrinos created at CERN and having travelled 730 kilometres underground to the Gran Sasso.
The features of the Gran Sasso National Laboratory are very impressive. The double road tunnel is 10.4 kilometres long and is part of a major highway between Rome and the Adriatic Sea. One end of the tunnel is near the city of L'Aquila and the other side comes out near the small town of Teramo. Near the centre of the road tunnel they built a bypass to the underground scientific laboratory. The overburden is about 1400 metres of rock (mostly limestone), giving about a factor of a million reduction in the cosmic ray flux compared to the surface. There are almost 20,000 metres squared of laboratory space and at any given time about 1000 scientists from 25 countries use the facilities. The broad research programme includes neutrino physics, dark matter searches, nuclear astrophysics, gravitational waves, geophysics and biology.
My own long-term involvement in the Gran Sasso National Laboratory dates back to the beginning of the laboratory. Following my interest in magnetic monopoles, I was intrigued by predictions coming from models of Grand Unification that suggested that magnetic monopoles existed, but that their mass was about 10¹⁵ GeV. Such extremely heavy monopoles would not have been observed in previous searches using accelerators, cosmic rays and even moon rocks. In fact, such super-heavy mass monopoles could only have been produced in the early universe and the relic monopoles could be detected today.
|The MACRO detector of the author (magnetic monopoles and atmospheric neutrinos)|
|Integrated protons on target at CERN for the first Opera data run (all images: Gran Sasso National Laboratory)|
These very heavy monopoles would easily penetrate the earth, and that was the key to our search for them in a deep underground space where most ordinary cosmic rays would have been absorbed by the overburden. We formed an Italian-US collaboration, led by Enzo Iarocci, now chair of the International Linear Collider Steering Committee, and myself. Our experiment, MACRO, did not observe any GUT magnetic monopoles, but we set stringent limits on their existence. As is often the case for large particle physics detectors with new capability, we produced other important physics, including evidence for atmospheric neutrino oscillations.
In the present Gran Sasso experimental programme, a couple of experiments deserve special attention. The OPERA and ICARUS experiments are large-scale neutrino detectors for detecting neutrinos originating at CERN and travelling 730 kilometres underground to Gran Sasso. These experiments are sensitive to neutrino oscillations, by observing the appearance of electron or tau neutrinos from a beam of muon neutrinos originating at CERN. ICARUS-600 is a 600-ton detector that is pioneering efforts to develop a liquid argon time projection chamber for neutrino detection. ICARUS should be in position to detect neutrinos within this year and they are proposing to follow up with a larger-scale detector. The construction of OPERA is finished and the experiment has completed their first significant data run using the CERN neutrino beam in 2008 from about 2×10¹⁹ protons on target.
Another important experimental effort is aimed at the direct detection of dark matter. The leading candidate is the yet to be discovered supersymmetric particles, also an important goal for LHC and ILC. The DAMA experiment has reported results that could be indicative of dark matter, but so far these results have not been confirmed by searches using other techniques.
The laboratory, being deep underground also provides practical experience for the ILC, for example it is built under a mountain with horizontal access like the Japanese ILC sample site. They have been concerned with safety issues and, in fact, suffered from some early problems that were compounded by an accident in the Borexino experiment in 2002 when 50 litres of trimethylbenzene were discharged into the environment. No damage resulted, but further construction was halted for some time.
Eugenio Coccia, an old friend and colleague of mine from research on gravitational waves, has been the Director of the Gran Sasso National Laboratory since 1993. He has paid particular attention to improving community relations and to safety concerns while being director. He deserves much credit for the laboratory surviving this terrible large earthquake unscathed.
-- Barry Barish | <urn:uuid:04e784da-c613-45b1-a0c7-5785c17232a6> | 2.9375 | 1,209 | Nonfiction Writing | Science & Tech. | 24.843992 |
Asexual Reproduction in Diatoms
Imagine a hat box: the lid is the same size and shape as the box itself, except that the lid is just slightly bigger so that it fits snugly over the box. This is how a diatom cell wall, or frustule, is shaped. The "lid" is called the epivalve and the "box" is called the hypovalve. The two valves are held together by a sort of tape, called girdle bands. All three of these components together make up the frustule and all are made of the same material: glass.
In order to divide into two daughter cells, the cell grows and pushes out the two valves to make room. But when it is actually ready to divide, it must do something with the rigid cell wall. Instead of shedding the entire cell wall and each daughter cell having to create a brand new one, diatoms give each daughter cell one valve. This is a great strategy because laying down silicon dioxide from seawater is metabolically costly. However, it has its drawbacks. Unfortunately, the daughter cells can only make a new hypovalve, the smaller half of the hat box, and so for each reproductive cycle a diatom produces one original-size cell and one slightly smaller cell. Over many generations, diatoms can shrink in size considerably, as shown in the figure below:
Diatoms shrink with successive generations!
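A toy calculation (arbitrary size units; the step size is chosen only for illustration) makes the shrinking effect concrete: every division keeps one daughter at the parent's size and makes the other one step smaller, so the average size of the population drifts steadily downward.

    STEP = 1.0    # illustrative size lost by the "hypovalve" daughter at each division

    def divide(size):
        # One daughter inherits the parent's larger valve (same size),
        # the other inherits the smaller valve and is one step smaller.
        return [size, size - STEP]

    population = [100.0]    # start from a single full-size cell
    for generation in range(5):
        population = [child for cell in population for child in divide(cell)]

    print(len(population))                      # 32 cells after 5 generations
    print(min(population))                      # 95.0: the smallest cell is 5 steps smaller
    print(sum(population) / len(population))    # 97.5: the mean size has dropped as well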
© 2001 Caren E. Braby | <urn:uuid:239c606d-fac0-4dbf-9c3b-4d0a46190803> | 4 | 301 | Knowledge Article | Science & Tech. | 51.903462 |
Water, water everywhere - but at least one protein can function without
the wet stuff. Researchers from the University of Bristol, UK, swapped
the coating of water on myoglobin proteins - which normally carry oxygen
to muscle and give raw meat its red colour - with a synthetic polymer
that acts as a surfactant, effectively turning the proteins into a
viscous liquid with the consistency of thick treacle.
Then they used a neutron-scattering technique to observe how well the
proteins could move, a measure of their proper functioning. They found
that the protein-polymer hybrids moved as well as proteins in water,
remaining flexible and exhibiting the usual internal dynamics.
Importantly, they could still bind oxygen as well as myoglobin does in
living tissue. The finding overturns the dogma that water is the most
important biological molecule.
Previous studies have shown that modifying proteins with polymers can
lead to therapeutic applications. For example, applying a polyethylene
glycol (PEG) coating - a process known as PEGylation - can mask a
protein and help it avoid rejection by the immune system. But where
previous studies have required some kind of solvent for the protein to
function, the Bristol team were able to observe proteins functioning
normally in an entirely solvent-free environment.
Among the applications the team intends to explore are wound dressings
in which the liquid protein is applied like a paste. It could then act
like an oxygen pump, with a chemical reaction between the protein layer
and a glucose membrane drawing oxygen down through the dressing to the
surface of the skin. | <urn:uuid:9202ac95-fd5e-49a0-ae4a-22a4211e3a71> | 3.4375 | 340 | Knowledge Article | Science & Tech. | 33.334819 |
Feb2-13, 02:20 PM, #1
Trouble interpreting Bode diagram
Trying to get my head around frequency response and Bode plots, but I'm having trouble interpreting what's going on. The bode plot of a function is expressing what would happen to my function if I multiply it with sin(ωt), right?
To get a better feel for it, I asked Matlab to give me the Bode plot of 1/(s^2+1), ie sin(t). Shouldn't I be getting a graph of the different amplitude ratios for sin(t)*sin(ωt)?
But according to Matlab, the amplitude ratio approaches infinity at ω = 1. I don't get it. Can anyone explain what I'm not getting?
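(As a side check, the magnitude curve MATLAB draws for this system can be reproduced by evaluating the transfer function on the imaginary axis, H(jω) = 1/(1 - ω^2); a rough sketch in Python rather than MATLAB:)

    import numpy as np

    omega = np.logspace(-1, 1, 500)            # 0.1 to 10 rad/s
    H = 1.0 / ((1j * omega) ** 2 + 1.0)        # H(j*omega) = 1 / (1 - omega^2)
    magnitude_db = 20 * np.log10(np.abs(H))

    print(magnitude_db.max())   # a large peak near omega = 1; the ideal curve is unbounded there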
How reliable are climate models?
What the science says...
While there are uncertainties with climate models, they successfully reproduce the past and have made predictions that have been subsequently confirmed by observations.
There are two major questions in climate modeling - can they accurately reproduce the past (hindcasting) and can they successfully predict the future? To answer the first question, here is a summary of the IPCC model results of surface temperature from the 1800s - both with and without man-made forcings. None of the models is able to reproduce the recent warming without taking rising CO2 levels into account. No one has created a general circulation model that can explain climate's behaviour over the past century without CO2 warming.
Figure 1: Comparison of climate results with observations. (a) represents simulations done with only natural forcings: solar variation and volcanic activity. (b) represents simulations done with anthropogenic forcings: greenhouse gases and sulphate aerosols. (c) was done with both natural and anthropogenic forcings (IPCC).
Predicting/projecting the future
A common argument heard is "scientists can't even predict the weather next week - how can they predict the climate years from now". This betrays a misunderstanding of the difference between weather, which is chaotic and unpredictable, and climate which is weather averaged out over time. While you can't predict with certainty whether a coin will land heads or tails, you can predict the statistical results of a large number of coin tosses. In weather terms, you can't predict the exact route a storm will take but the average temperature and precipitation over the whole region is the same regardless of the route.
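A trivial simulation (illustrative only) makes the coin-toss point concrete: a single toss cannot be predicted, but the average over many tosses can be.

    import random

    random.seed(0)    # any seed; fixed only so the run is repeatable

    single_toss = random.choice(["heads", "tails"])    # unpredictable, like tomorrow's weather
    fraction_heads = sum(random.random() < 0.5 for _ in range(100000)) / 100000

    print(single_toss)
    print(fraction_heads)    # close to 0.5: the long-run statistics are predictable, like climate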
There are various difficulties in predicting future climate. The behaviour of the sun is difficult to predict. Short-term disturbances like El Nino or volcanic eruptions are difficult to model. Nevertheless, the major forcings that drive climate are well understood. In 1988, James Hansen projected future temperature trends (Hansen 1988). Those initial projections show good agreement with subsequent observations (Hansen 2006).
Figure 2: Global surface temperature computed for scenarios A, B, and C, compared with two analyses of observational data (Hansen 2006).
Hansen's Scenario B (described as the most likely option, and the one that most closely matched the actual level of CO2 emissions) shows close correlation with observed temperatures. Hansen overestimated future CO2 levels by 5 to 10%, so if his model were given the correct forcing levels, the match would be even closer. There are deviations from year to year, but this is to be expected: the chaotic nature of weather adds noise to the signal, but the overall trend is predictable.
When Mount Pinatubo erupted in 1991, it provided an opportunity to test how successfully models could predict the climate response to the sulfate aerosols injected into the atmosphere. The models accurately forecast the subsequent global cooling of about 0.5 °C soon after the eruption. Furthermore, the radiative, water vapor and dynamical feedbacks included in the models were also quantitatively verified (Hansen 2007).
Figure 3: Observed and simulated global temperature change during Pinatubo eruption. Green is observed temperature by weather stations. Blue is land and ocean temperature. Red is mean model output (Hansen 2007).
Uncertainties in future projections
A common misconception is that climate models are biased towards exaggerating the effects from CO2. It bears mentioning that uncertainty can go either way. In fact, in a climate system with net positive feedback, uncertainty is skewed more towards a stronger climate response (Roe 2007). For this reason, many of the IPCC predictions have subsequently been shown to underestimate the climate response. Satellite and tide-gauge measurements show that sea level rise is accelerating faster than IPCC predictions. The average rate of rise for 1993-2008 as measured from satellite is 3.4 millimetres per year while the IPCC Third Assessment Report (TAR) projected a best estimate of 1.9 millimetres per year for the same period. Observations are tracking along the upper range of IPCC sea level projections (Copenhagen Diagnosis 2009).
Figure 4: Sea level change. Tide gauge data are indicated in red and satellite data in blue. The grey band shows the projections of the IPCC Third Assessment report (Copenhagen Diagnosis 2009).
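As a quick back-of-the-envelope check on the figures quoted above, the gap between the observed and projected rates over 1993-2008 can be tallied directly (a rough illustration only; it ignores acceleration and uncertainty ranges):

```python
# Simple arithmetic using only the rates quoted in the text above.
observed_rate = 3.4      # mm per year, satellite altimetry
projected_rate = 1.9     # mm per year, IPCC TAR best estimate
years = 2008 - 1993      # 15 years

extra_rise = (observed_rate - projected_rate) * years
print(f"Observed rise exceeds the projection by roughly {extra_rise:.0f} mm")  # ~22 mm
```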
Similarly, summertime melting of Arctic sea-ice has accelerated far beyond the expectations of climate models. The area of sea-ice melt during 2007-2009 was about 40% greater than the average prediction from IPCC AR4 climate models. The thickness of Arctic sea ice has also been on a steady decline over the last several decades.
Figure 5: Observed (red line) and modeled September Arctic sea ice extent in millions of square kilometres. Solid black line gives the average of 13 IPCC AR4 models while dashed black lines represent their range. The 2009 minimum has recently been calculated at 5.10 million km2, the third lowest year on record and still well below the IPCC worst case scenario (Copenhagen Diagnosis 2009).
Do we know enough to act?
Skeptics argue that we should wait until climate models are completely certain before we act on reducing CO2 emissions. If we waited for 100% certainty, we would never act. Models are in a constant state of development to include more processes, rely on fewer approximations and increase their resolution as computer power develops. The complex and non-linear nature of climate means there will always be a process of refinement and improvement. The main point is that we now know enough to act. Models have evolved to the point where they successfully predict long-term trends and are now developing the ability to predict more chaotic, short-term changes. Multiple lines of evidence, both modeled and empirical, tell us global temperatures will rise by about 3°C with a doubling of CO2 (Knutti & Hegerl 2008).
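For a rough sense of what "3°C per doubling" implies, the sketch below applies the standard logarithmic dependence of equilibrium warming on CO2 concentration; the logarithmic form and the example concentrations are assumptions added here for illustration, not figures from the text:

```python
# Illustrative only: warming ~ sensitivity * log2(C / C0), with the 3 C
# sensitivity per doubling taken from the figure quoted above.
import math

def warming(c_final_ppm, c_initial_ppm, sensitivity_per_doubling=3.0):
    """Approximate equilibrium warming (deg C) for a change in CO2 concentration."""
    return sensitivity_per_doubling * math.log2(c_final_ppm / c_initial_ppm)

print(warming(560, 280))   # one full doubling      -> 3.0 C
print(warming(420, 280))   # a 50 percent increase  -> about 1.75 C
```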
Models don't need to be exact in every respect to give us an accurate overall trend and its major effects - and we have that now. If you knew there were a 90% chance you'd be in a car crash, you wouldn't get in the car (or at the very least, you'd wear a seatbelt). The IPCC concludes, with a greater than 90% probability, that humans are causing global warming. To wait for 100% certainty before acting is recklessly irresponsible.
Last updated on 9 July 2010 by John Cook. | <urn:uuid:88ab9203-4fc5-4da7-9f2a-7293df38a6d8> | 3.484375 | 1,326 | Knowledge Article | Science & Tech. | 41.126856 |
Alula Australis 4?
On March 26, 2012, a team of astronomers using NASA's Wide-field Infrared Survey Explorer (WISE) submitted a pre-print which revealed a very dim and cool T8.5 brown dwarf (designated WISE J111838.70+312537.9) that exhibits common proper motion with this multiple-star system. With an angular separation from the primary of about 8.5 arc-minutes, the astronomers estimated a wide physical separation of about 4,000 AUs. Supporting earlier analyses, the sub-solar metallicity and low chromospheric activity of the primary (Xi UMa A) suggest that the system is at least two billion years old. Based on the infrared luminosity and color of the substellar object, the mass of this brown dwarf is estimated to be between 28 and 58 Jupiter-masses, for an estimated age of the star system of between two and eight billion years (Wright et al, 2012).
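As a rough consistency check (not part of the original analysis), the quoted angular separation can be converted to a projected physical separation using the small-angle rule that one arcsecond at one parsec corresponds to one AU, assuming the ~27.3 light-year distance adopted later in this article:

```python
# Convert 8.5 arc-minutes at ~27.3 light-years into a projected separation in AU.
LY_PER_PARSEC = 3.2616

distance_pc = 27.3 / LY_PER_PARSEC           # ~8.4 parsecs
separation_arcsec = 8.5 * 60.0               # 8.5 arc-minutes in arcseconds

# 1 arcsecond of separation at 1 parsec corresponds to 1 AU, so:
projected_separation_au = separation_arcsec * distance_pc
print(round(projected_separation_au))        # roughly 4,300 AU, consistent with ~4,000
```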
Also known as Xi (or Ksi) Ursae Majoris, Alula Australis is derived from the Arabic for "first spring" or leap "south" of the "Gazelle" (whose stars have since been gathered up in Constellation Leo Minor), in combination with Alula Borealis ("north") or Nu Ursae Majoris. It is a relatively compact, multiple system of four to five components, of which one recently has been determined to be a brown dwarf. (See an animation of the orbits of star groups Aab and Babc? and their potentially habitable zones, with a table of basic orbital and physical characteristics.)
Initial, space-based parallax measurements by the HIPPARCOS Mission proved to be "confused" by the short-period ("curious epicyclic") motions of this multiple star system (R. F. Griffin, 1998). To improve statistical reliability, NASA's NStars Database averaged ground-based measurements (1995 Yale Parallax Catalogue) with modified HIPPARCOS (Staffan Soderhjelm, 1999) to provide a revised distance of about 27.3 light-years from Sol. The system lies in the southern part (11:18:10.94+31:42.2:C~, ICRS 2000.0) of Constellation Ursa Major, the Great Bear, which also encompasses the Big Dipper or Plow (Plough) -- south of Alula Borealis (Nu Ursae Majoris).
The two bright stars of the system were discovered on May 2, 1780, by Sir William Friedrich Wilhelm Herschel (1738-1822), who was born Friedrich Wilhelm Herschel and who discovered the planet Uranus in 1781 -- which led to his appointment in 1782 as private astronomer to the King of England. Their relative positions were first accurately measured in 1826 by Friedrich Georg Wilhelm von Struve (1793-1864), who became director of Russia's Dorpat Observatory in 1817 and founded and directed the Pulkovo Observatory in 1837. (Struve also surveyed 120,000 stars from 1819 to 1827, published an extensive monograph of Halley's Comet based on observations in 1835 and his findings on 2,640 double stars in 1837, and measured the parallax of Vega from 1835 to 1838.) Because the relative motion of the brighter components implied actual physical association rather than coincidental visual alignment, Félix Savary (1797-1841) was inspired to calculate the first orbit ever for a "double star" by applying Newton's "laws of gravity" in 1828, whose solution was subsequently provided by Herschel in 1829. A summary with more information on the history of astronomical investigations into this multiple star system from 1780 to recent years can be found in R. F. Griffin (1998).
|Object||Mean Separation (AUs)||Orbital Period (years)||Eccentricity||Inclination (°)||Mass (Solar)||Diameter (Solar)||...||...||Metallicity (Solar)|
|Aab-Babc? Mass Center||0.0||...||...||...||...||...||...||...||...|
|Aab Mass Center||10.6||59.9||0.412||121.2||...||...||...||...||...|
|Alula Australis Aa||0.466||1.83||0.53||91||1.05||0.97||...||...||0.98|
|Alula Australis Ab||1.224||1.83||0.53||91||0.4||...||...||...||...|
|Disrupted H.Z. Aab?||1.3||1.5||0||91||...||...||...||...||...|
|Babc? Mass Center||10.6||59.9||0.412||121.2||...||...||...||...||...|
|Alula Australis Ba||0.34||<=1||?||?||0.9||...||...||...||0.76|
|Alula Australis Bb||0.06||0.003||0||?||0.15||...||...||...||...|
|Alula Australis Bc?||0.76||<=1||?||?||>=0.5||...||...||...||...|
|Disrupted H.Z. Ba?||1.1||1.2||0||?||...||...||...||...||...|
Xi Ursae Majoris Aa
This star is a yellow-orange main sequence dwarf star of spectral and luminosity type F8.5-G0 Ve, with about 105 percent of Sol's mass (Wulff Dieter Heintz, 1996), 97 percent of its diameter (Johnson and Wright, 1983, page 671), and about 1.1 times its luminosity. It may be nearly (98 percent) as enriched as Sol with elements heavier than hydrogen ("metallicity"), based on its abundance of iron (Cayrel de Strobel et al, 1991, page 291). Compared with Star B, Star A has a high lithium abundance. Given its relatively low chromospheric activity and the similarity of its Ca-II lines to Sol's, the system is likely to be more than two billion years old (Strobel et al, 1994). The star is a New Suspected Variable designated NSV 5165. Useful star catalogue numbers for the star include: Xi/Ksi UMa, 53 UMa, HR 4375, Gl 423 A, Hip 55203, HD 98231 A, BD+32 2132 A, SAO 62484, LHS 2390, LTT 13045, LFT 790, Struve (Sigma) 1523, ADS 8119 A, and BDS 5734.
Spectroscopic and astrometric analyses reveal a companion Ab. First detected as a periodic orbital perturbation in 1905 by N. E. Norlund, this companion has been cited by the Sixth Catalog of Orbits of Visual Binary Stars as being separated "on average" from Star Aa by a semi-major axis of 0.054", which may be a misinterpretation of Heintz (1996). In any case, at a distance of about 27.3 ly, with a period of 1.834 years and a combined mass for Aab of around 1.45 Solar-masses, the semi-major axis should be around 1.69 AUs. The two stars have a highly elliptical orbit, and more recent radial velocity analyses suggest that the eccentricity is closer to 0.53, rather than the 0.61 value derived from visual observations (Griffin, 1998). Hence, Aa and Ab may move as close as 0.8 and as far as 2.6 AUs, at an inclination from the perspective of an observer on Earth of 91° (Heintz, 1996). The orbit of an Earth-like planet (with liquid water) around this tight binary (Aab) would have to be centered around 1.3 AUs -- between the orbital distances of Earth and Mars in the Solar System -- with an orbital period between one and two Earth years. Given that the closest separation of the stars in the binary pair Aab is around 0.8 AU (given an eccentricity of 0.53), however, such a planetary orbit appears unlikely to be stable. Furthermore, the noncoplanarity of the orbits of star pairs Aab and Babc? reduces the likelihood of stable planetary orbits at increasing distances from each star or binary pair (Alan Hale, 1994).
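The quoted semi-major axis and the closest/farthest separations follow from Kepler's third law in solar units (a³ = M P², with a in AU, M in solar masses and P in years) together with the eccentricity; a minimal sketch reproducing those numbers:

```python
# Reproduce the inner-pair (Aab) figures quoted above.
total_mass = 1.45          # solar masses, Aa + Ab
period = 1.834             # years
eccentricity = 0.53

semi_major_axis = (total_mass * period**2) ** (1.0 / 3.0)   # Kepler's third law
periastron = semi_major_axis * (1.0 - eccentricity)
apastron = semi_major_axis * (1.0 + eccentricity)

print(f"semi-major axis ~ {semi_major_axis:.2f} AU")                 # ~1.69 AU
print(f"closest ~ {periastron:.1f} AU, farthest ~ {apastron:.1f} AU")  # ~0.8 and ~2.6 AU
```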
On the other hand, the wide binary pairs Aab and Babc? are separated by an "average" distance of about 21.2 AUs (from a semi-major axis of 2.533" at 27.3 ly) in an elliptical orbit (e= 0.412) of 59.9 years, so that the two star pairs get as close as 12.5 AUs and as far away as 29.9 AUs (Wulff Dieter Heintz, 1996; revising earlier estimates, including Mason et al, 1995). The orbital inclination of the two pairs from the perspective of Earth shifts from 122.1° over 1935 to 1995 to 121.2° over 1995-2034. (See an animation of the orbits of star groups Aab and Babc? and their potentially habitable zones, with a table of basic orbital and physical characteristics.)
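A similar check recovers the wide-pair figures from the 2.533-arcsecond semi-major axis, the ~27.3 light-year distance and the 0.412 eccentricity (a rough sketch, ignoring projection effects):

```python
# Convert the angular semi-major axis to AU, then take the orbital extremes.
distance_pc = 27.3 / 3.2616                # light-years to parsecs
a_au = 2.533 * distance_pc                 # arcsec * parsecs -> AU, ~21.2 AU

print(round(a_au, 1))                      # ~21.2 AU
print(round(a_au * (1 - 0.412), 1))        # closest approach,    ~12.5 AU
print(round(a_au * (1 + 0.412), 1))        # greatest separation, ~29.9 AU
```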
Xi Ursae Majoris Ab and Bb may be dim red dwarfs, like Gliese 623 A (M2.5V) and B (M5.8Ve) at lower right (NASA image).
Xi Ursae Majoris Ab
This companion object appears to have about four-tenths of Sol's mass according to Heintz (1996), which would correspond to a star of around spectral type M3 (Mason et al, 1995). It may have around one percent of Sol's luminosity. Despite its faintness, its radius may be half as great as that of star Aa (Griffin, 1998, page 20).
Xi Ursae Majoris Ba
This star is a yellow-orange main sequence dwarf star whose spectral and luminosity type has been estimated in the range from G0-5 Ve. The star may have about 90 percent of Sol's mass, 91 percent of its diameter (Johnson and Wright, 1983, page 671), and 67 percent of its luminosity. It is about 300 °K cooler than Star Aa. Star Ba may be only 76 percent as enriched as Sol with elements heavier than hydrogen ("metallicity"), based on its abundance of iron (Cayrel de Strobel et al, 1991, page 291). Moreover, the star seems to be depleted in lithium because it has maintained comparably dynamo-induced chromospheric activity resulting from a relatively fast, synchronous (tidally-locked) rotation with its companion, and so it may have lost about 10 percent more of its matter than would a single star of its mass and age (Strobel et al, 1994). Star Ba may have a brown dwarf companion (see Bb below) in a "torch orbit," with an average separation of 0.06 AU in a highly circular orbit (e=0.00) whose period is completed within four days.
The orbit of an Earth-like planet around the tight binary system that star Ba forms with its brown dwarf companion in the liquid water zone would have to be centered around 1.1 AU -- a little farther than Earth's orbital distance around Sol -- with an orbital period exceeding one Earth year. However, if the existence of a relatively close, second companion (see Star Bc below) around Bab -- with an orbital period of 2.2 to 2.9 years or less -- is confirmed, then a planetary orbit in Star Ba's water zone may not be stable over the long run. Useful catalogue numbers for this star include: HR 4374, Gl 423 B, HD 98230, and LHS 2391.
Xi Ursae Majoris Bb (HD 98230 b) appears to be too massive to be a brown dwarf -- like Gliese 229 b with its own dark satellite, as imagined by Whatmough (© John Whatmough, artwork from Extrasolar Visions, used with permission).
Xi Ursae Majoris Bb (or HD 98230 b?)
This low-mass companion was discovered using radial velocity measurements in 1996, possibly confirming Louis Berman's discovery of a spectroscopic companion to Star B in 1931. It has at least 37 times the mass of Jupiter and a circular orbit (e~ 0) with a period of 3.98 days and a semi-major axis of ~0.06 AUs. A recent analysis of radial velocities, however, discusses the possibility of the Bb companion being an orange-red (late K-type) dwarf star, based on suspected mass ratios among the binary pairs, without any mention of the 1996 brown dwarf finding (Griffin, 1998, pp. 293-294). According to James Kaler, however, "Variations caused by surface activity lead to a rotation period of 4.0 days, which when compared with the projected equatorial velocity (2.8 kilometers per second) allows the axial and presumably orbital tilts to be found, which in turn allows for a true estimation of mass and shows the dim companion to be a cool-end class M dwarf with a mass of 0.15 Suns, placing it well above the brown dwarf limit of 0.075 solar masses ..." Past calculations of orbital elements and system mass ratios based on astrometry -- and other visual observations -- and the spectral type of Star Ba (G0-5) indicate that Xi Ursae Majoris Bb is not massive enough to fully account for subsystem B (e.g., Wulff Dieter Heintz, 1996, page 411), and suggest the existence of a stellar companion (i.e., Bc).
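A sketch of that rotation argument, under stated assumptions: the radius of Star Ba is taken as 0.91 times Sol's (the figure given earlier in this article), the rotation is assumed synchronous with the 4.0-day orbit, and the projected equatorial velocity is the 2.8 km/s quoted by Kaler. The resulting tilt is illustrative only, not Kaler's published value:

```python
# True equatorial speed from v = 2*pi*R/P, then the tilt from sin(i) = (v sin i) / v.
import math

R_SUN_KM = 695_700.0
radius_km = 0.91 * R_SUN_KM        # assumed: 91 percent of Sol's radius
period_s = 4.0 * 86_400.0          # assumed synchronous with the 4.0-day orbit

v_equatorial = 2.0 * math.pi * radius_km / period_s   # ~11.5 km/s
sin_i = 2.8 / v_equatorial                            # projected / true speed

print(f"equatorial speed ~ {v_equatorial:.1f} km/s")
print(f"implied tilt i ~ {math.degrees(math.asin(sin_i)):.0f} degrees")  # ~14 degrees
```

A nearly pole-on tilt like this is what inflates the 37-Jupiter-mass minimum into a considerably larger true mass for the companion.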
If it exists, Xi Ursae Majoris Bc may be an orange-red dwarf star, like Epsilon Eridani at left center of meteor (© Torben Krogh & Mogens Winther, Amtsgymnasiet and EUC Syd Gallery; student photo used with permission).
Xi Ursae Majoris Bc?
Analysis of only one of 27 speckle interferometric observations (obtained with Kitt Peak and Canada-France-Hawaii telescopes) uncovered a fifth visual component to this multiple system (Mason et al, 1995). This object, however, apparently "never [has] had any effect whatsoever upon the astrometric and radial-velocity behaviour of the observable components whose existence is reliably established" (Griffin, 1998, pages 275-276). The star may be an orange-red, main sequence dwarf of spectral and luminosity type K2-3 V and have an orbital period with Star Ba of 2.2 to 2.9 years. Subsequently, Heintz (1996, page 411) suggested that such a companion to Star Ba would have to have a mass of at least half Sol's to reach detectable brightness, and that, among other orbital requirements, Bc's period would have to be less than an Earth year in order to account for the absence of effects on Ba's radial velocities and positions.
Brown Dwarfs or Planets?
When brown dwarfs were just a theoretical concept, astronomers differentiated those hypothetical objects from planets by how they were formed. If a substellar object formed the way a star does, from a collapsing cloud of interstellar gas and dust, then it would be called a brown dwarf. If it formed by gradually accumulating gas and dust inside a star's circumstellar disk, however, it was called a planet. Once the first brown dwarf candidates were actually found, however, astronomers realized that it was quite difficult to rule definitively on the validity of competing hypotheses about how a substellar object actually formed without having been there. This problem is particularly difficult to resolve in the case of stellar companions, objects that orbit a star -- or two.
Although brown dwarfs lack sufficient mass (at least 75-80 Jupiters) to ignite core hydrogen fusion, the smallest true stars (red dwarfs) can have such cool atmospheric temperatures (below 4,000° K) that it is difficult to distinguish them from brown dwarfs. While Jupiter-class planets may be much less massive than brown dwarfs, they are about the same diameter and may contain many of the same atmospheric molecules. (© American Scientist; artwork by Linda Huff for Martin et al, 1997; used with permission)
University of California at Berkeley astronomer Ben R. Oppenheimer, who helped to discover Gliese 229 b, is part of a growing group that would like to define a brown dwarf as a substellar object with the mass of 13 to 80 (or so) Jupiters. While these objects cannot fuse "ordinary" hydrogen (a single proton nucleus) like stars, they have enough mass to briefly fuse deuterium (hydrogen with a proton-neutron nucleus). Therefore, stellar companions with less than 13 Jupiter masses would be defined as planets.
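A toy encoding of that mass-based definition (the 13- and 80-Jupiter-mass boundaries are the figures given in the text; the function name is ours):

```python
# Classify a substellar or stellar object purely by mass, per the definition above.
def classify_substellar(mass_in_jupiters: float) -> str:
    if mass_in_jupiters < 13:
        return "planet"          # cannot fuse deuterium
    if mass_in_jupiters <= 80:
        return "brown dwarf"     # briefly fuses deuterium, not ordinary hydrogen
    return "star"                # sustains ordinary hydrogen fusion

# The WISE companion described earlier, at an estimated 28-58 Jupiter masses,
# falls squarely in the brown-dwarf range under this definition.
print(classify_substellar(45))
```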
Other prominent astronomers, such as San Francisco State University astronomer Geoffrey W. Marcy who also has helped to discover many extrasolar planets, note that there may in fact be many different physical processes that lead to the formation of planets. Similarly, there may also be many different processes that lead to the creation of brown dwarfs, and some of these may also lead to planets. Hence, more observational data may be needed before astronomers can determine how to make justifiable distinctions in the classification of such substellar objects.
The following star systems are located within 10 light-years, plus more bright stars within 10 to 20 ly, of Alula Australis.
|Star System||Spectra & Luminosity||Distance (ly)|
|BD+36 2219 AB||M1 Ve |
|61 Ursae Majoris||G8 Ve||4.7|
|Groombridge 1830||G8 VIp||5.4|
|WD 1126+185||DC8 /VII||6.5|
|GJ 1138||M V||6.8|
|AC+23 468-46||M2.5 V||7.0|
|AC+27 28217||M3.5 V||7.1|
|G 122-49||M V||8.5|
|GJ 1134||M V||8.6|
|* plus bright stars *||. . .|
|Beta Comae Berenices||F9.5-G0 V||13|
|11 (SV) Leonis Minoris AB||G8 Vv |
|Xi Bootis AB||G8 Ve |
Up-to-date technical summaries on these stars can be found at: the Astronomisches Rechen-Institut at Heidelberg's ARICNS, the NASA Exoplanet Archive, and the Research Consortium on Nearby Stars (RECONS). Additional information may be available at Roger Wilcox's Internet Stellar Database.
Constellation Ursa Major is only visible from the northern hemisphere. The seven stars of the Big Dipper in this constellation are famous as the traveller's guide to Polaris, the North Star. For more information about the stars and objects in this constellation, go to Christine Kronberg's Ursa Major. For an illustration, see David Haworth's Ursa Major.
For more information about stars including spectral and luminosity class codes, go to ChView's webpage on The Stars of the Milky Way.
Note: Special thanks to Andrew Tribick about the 2012 discovery of a T8.5 brown dwarf and to David Bellomy for pointing out the implausibility of the reported semi-major axis for the inner binary pair Aab and encouraging us to derive a more plausible, average orbital separation.
© 1998-2012 Sol Company. All Rights Reserved. | <urn:uuid:3e80f136-2346-4b07-b042-8a6bfb20effd> | 3.109375 | 4,186 | Knowledge Article | Science & Tech. | 72.500693 |
A geologic and oceanographic study of the waters and Continental Shelf of the Gulf of the Farallones adjacent to the San Francisco Bay region. The results of the study provide a scientific basis to evaluate and monitor human impact on the marine environment.
The Sediment Transport Instrumentation Facility at USGS Woods Hole Field Center maintains and deploys oceanographic instrumentation for the study of coastal and ocean circulation and sediment transport.
USGS project to understand coastal evolution and modern beach behavior; to identify and model the physical processes affecting coastal ocean circulation and sediment transport; and to identify sediment sources and construct a regional sediment budget.
Topics in Coastal and Marine Sciences provides background science materials, definitions, and links to give a common context for users from a variety of backgrounds. Coastal erosion was chosen as the first topic.
Site with links to projects of the field center of the Woods Hole Coastal Marine Geology Program on underwater areas between shorelines and the deep ocean, off the U.S. East Coast, the Gulf of Mexico, and in parts of the Caribbean and Great Lakes. | <urn:uuid:7d7ac812-8f22-4642-91b6-6b805ef5d0a1> | 3.140625 | 216 | Content Listing | Science & Tech. | 28.265488 |