Egg | Larva/Caterpillar | Pupa | Adult
The primary job of the adult stage is to reproduce—to mate
and lay the eggs that will become the next generation. Monarchs do not mate until
they are three to eight days old. When they mate they remain together from one afternoon
until early the next morning—often up to 16 hours! Females begin laying eggs
immediately after their first mating, and both sexes can mate several times during
their lives. Adults in summer generations live from two to five weeks.
Each year, the final generation of monarchs (which emerges in
late summer and early fall) has an additional job. They migrate to
overwintering grounds (see image below), either in central Mexico for eastern
monarchs or in California for western monarchs. Here they survive the long winter
until conditions allow them to return to their breeding grounds. These adults
can live up to eight or nine months.
Male and female monarchs can be distinguished easily. Males have
a black spot (indicated by red arrow in image above) on a vein on each hind wing
that is not present on the female. These spots are made of specialized scales which
produce a chemical used during courtship in many species of butterflies and moths,
although such a chemical does not seem to be important in monarch courtship. The
ends of the abdomens are also shaped differently in males and females, and females
often look darker than males and have
wider veins on their wings.
The body of an adult butterfly is divided into the same major parts as the larva:
head, thorax, and abdomen.
There are four main structures on the adult head: eyes,
antennae, palpi, and proboscis. A butterfly’s relatively enormous compound
eyes are made up of thousands of ommatidia (see
image below), each of which senses light and images. The two antennae and
the two palpi, which are densely covered with scales, sense molecules in the air
and give butterflies a sense of smell. The straw-like proboscis is the butterfly’s
tongue, through which it sucks nectar and water for nourishment. When not in use,
the butterfly curls up its proboscis.
The thorax is made up of three segments, each of which has a pair
of legs attached to it. The second and third segments also have a pair of wings
attached to them. The legs end in tarsi (singular,
tarsus), which grip vegetation and flowers when the
butterfly lands on a plant. Organs on the back of the tarsi "taste" sweet
liquids. Monarchs and other nymphalid butterflies look like they only have four
legs because the two front legs are tiny and curl up next to the thorax.
Monarchs clustered in an Oyamel Fir tree at the
Sierra Chincua overwintering site in central Mexico.
The El Niño-Southern Oscillation (ENSO)
Floods and mudslides in Ecuador, droughts and wildfires in Australia, and extreme California rainstorms – could all of these events be triggered by the same thing? Yes, they can. It’s the El Niño-Southern Oscillation (ENSO), a combination of changes in the ocean and atmosphere that affect weather in many areas of the world.
Normally, trade winds move water at the ocean surface from the eastern tropical Pacific towards the western Pacific. This creates upwelling of cold nutrient-rich water off the coast of Peru and Chile, which supports a diversity of marine life. The western Pacific is in a low pressure system and has wet weather. The eastern Pacific, in a high pressure system, is dry. But every 3 to 7 years the atmosphere and ocean change during El Niño and La Niña events – the two extremes of ENSO.
Air pressure rises in the western Pacific and falls in the central and eastern Pacific during El Niño, the warm phase of ENSO which happens in Northern Hemisphere winter. Without the strong pressure gradient, the trade winds weaken. Without the trade winds pushing the tropical Pacific Ocean water west, the warm and nutrient-poor water of the western Pacific spreads east, piling up water in the eastern tropical Pacific. This weakens the upwelling of nutrient-rich water in the eastern Pacific. Not as much marine life can survive in the warm water as can in the cool nutrient-rich waters.
During La Niña, the cold phase of ENSO, the trade winds grow stronger across the Pacific because the low pressure over the western Pacific strengthens, as does the high pressure over the central and eastern Pacific. This causes more upwelling of ocean water off the coast of Peru and Chile, making the surface water of the eastern tropical Pacific unusually cold.
Both El Niño and La Niña events can have far-reaching effects on the weather. Intense rainstorms and flooding, extreme droughts, the strength of the Atlantic hurricane season, and winter storms in many areas of the world are affected by ENSO events. ENSO may also have an impact on the North Atlantic Oscillation as it has an effect on the Arctic troposphere. These impacts are called teleconnections.
Nuclear Blasts Show Terrifying Power
By Tony Long
It was 63 years ago today that the United States detonated the very first atomic bomb. Three weeks later, the only two A-bombs dropped in warfare destroyed Hiroshima and Nagasaki, Japan. Many nuclear -- and thermonuclear -- bombs have been tested since. Here are some images.
Operation Upshot-Knothole, conducted at the Nevada Proving Ground between March 17 and June 4, 1953, consisted of 11 atmospheric tests: three airdrops, seven tower tests and one airburst. Upshot-Knothole involved the testing of new theories, using both fission and fusion devices.
House No. 1, located 3,500 feet from ground zero, was completely destroyed on the first day of testing. The elapsed time from the first picture to the last was 2⅔ seconds. The camera was completely enclosed in a 2-inch lead sheath as a protection against radiation. The only source of light was that from the detonation. Frame No. 1 (upper left) shows the house lighted by the blast. Frame No. 2 (upper right) shows the house on fire.
Courtesy National Nuclear Security Administration/Nevada Site Office
This article was written by Sarah Rich in January 2007. We're republishing it here as part of our month-long editorial retrospective.
For several years, I lived in an apartment in San Francisco where my bedroom window faced directly onto Haight Street. The quietest time of day fell somewhere between the bar-hoppers and homeless retiring to their respective sleeping quarters, and the early cooing of pigeons and idling of delivery trucks. But that's not to say it was ever truly quiet. Like most city-dwellers, I came to find the sounds of the street more peaceful than silence, in a way, and I didn't really mind the near-continuous melodies of horns, sirens and voices. Sound, like smell, is something we adapt to much more quickly than physical changes in our surroundings. But the behavioral changes we make according to our soundscape can in fact be a powerful indicator of environmental and cultural trends in our region.
Even more powerful are the adaptations we can observe in wildlife as their habitat changes due to the presence of human activity. This is the primary focus of the Acoustic Ecology Institute, a New Mexico-based non-profit working to "increase personal and social awareness of our sound environment, through education programs in schools, regional events, and our internationally recognized website," and to build "a comprehensive [online] clearinghouse for information on sound-related environmental issues and scientific research."
Perhaps the best way to understand their work is to hear about a few examples offered up in their recent news posts:
Study Confirms Birds' Changing Songs in Cities - Field studies in ten European cities, including London, Paris, and Prague, have confirmed that great tits adapt their songs to be better heard above a variety of noise conditions. The city-dwelling birds, a species that has adapted well to urban settings, were compared to forest-dwelling birds nearby. In songs important for mate attraction and territory defense, the urban songs were shorter and sung faster than the forest songs. The urban songs also showed an upshift in frequency that is consistent with the need to compete with low-frequency environmental noise, such as traffic noise. The capacity of great tits to sing within a relatively wide frequency range, and the ability to adjust songs by leaving out lower frequencies, seems critical to the bird's ability to thrive despite urban noise. Species without these capacities may have no other choice than to escape city life. An earlier study by the same researchers had identified frequency differences in great tit songs in one urban area, reflecting the amount of low-frequency noise they had to be heard above; this study expands the findings to include many populations of tits, and compares urban to rural populations.
Another story that struck me for its artistic quality is the installation of "One Square Inch of Silence" in Olympic National Park by natural sound recordist and master listener, Gordon Hempton. His project stood as a call to the National Park system to establish sonic refuges within park boundaries, where increased visitor traffic has tainted the purity of the soundscape and generally raised the volume throughout the otherwise protected acreage. "Quiet is going extinct," he says. His installation can be found by following directions on his website. Visitors are invited to scrawl their impressions quietly on a piece of paper and leave them in a jar at the site.
It's not just about nature and wildlife, though. The AEI tracks advances in science and medicine through sound, such as a study published by the Optical Society of America illuminating the possibility of early detection of metastasizing melanoma by listening to sounds emitted by cancer cells:
The unprecedented, minimally invasive technique causes melanoma cells to emit noise, and could let oncologists spot early signs of metastases -- as few as 10 cancer cells in a blood sample -- before they even settle in other organs...The team's method, called photoacoustic detection, combines laser techniques from optics and ultrasound techniques from acoustics, using a laser to make cells vibrate and then picking up the characteristic sound of melanoma cells. The microscopic granules of melanin contained in the cancer cells absorb the energy bursts from the blue-laser light, going through rapid cycles of expanding as they heat up and shrinking as they cool down. These sudden changes generate ultrasonic sounds which propagate in the solution like tiny tsunamis.
Their areas of inquiry span ocean, wilderness and metropolis, revealing a huge amount of information about our changing planet just by taking the time to listen.
Acoustic Ecology and the Extinction of Silence is part of our month long retrospective leading up to our anniversary on October 1. For the next four weeks, we'll celebrate five years of solutions-based, forward-thinking and innovative journalism by publishing the best of the Worldchanging archives.
The Australian ghost bat (Macroderma gigas) is one of the largest of all microbats. It is the only carnivorous bat in Australia and is included among a number of species referred to as false vampire bats (see story, page 11). It perches in vegetation, darting down to catch small mammals and reptiles, killing them with a quick bite, then carrying them to a safe perch for eating. It also captures small, sleeping birds, probably learning the locations of important roosts.
Ghost bats live in northern Australia and New Guinea in diverse habitats from arid expanses to lush rain forests. One of the largest known colonies inhabits a gold mine adit, and they are also known to roost in caves and crevices. They are secretive and rare, but often an accumulation of remains from their prey will distinctly mark a roost.
Photo by Merlin D. Tuttle
I wrote another column for Discover (the actual magazine), which is now available online. It’s about how far back in cosmological time we can push our knowledge on the basis of actual data, not mere theory.
Of course we literally look back in time every time we peer into a telescope, since it takes time for light to travel to us from distant objects. But there’s an earliest moment we can possibly see using light — the moment of recombination, about 380,000 years after the Big Bang, when electrons hooked up with protons and other nuclei to form atoms. Earlier than that, the electrons were floating around freely, bumping into photons, and generally making the universe opaque.
So we have to be a bit more clever. And we have been: using the fact that the early universe was a nuclear fusion reactor, and observing the surviving abundances of light elements to pin down what conditions were like at that time. This technique gets us within seconds of the Big Bang. But if things break just right — the dark matter turns out to be a weakly-interacting particle, whose properties we can study here on Earth — we might be able to push the data-informed era much earlier back than that.
Think about what that means: Sitting here on Earth, cosmologists extrapolated our understanding back 13.7 billion years, to a few seconds after the universe began. We used that understanding to make predictions about the current universe—and we were right. We may not know for sure whether it will rain tomorrow, but we do know exactly how protons and neutrons bounced around like Super Balls in the nuclear inferno of the Big Bang. This will surely go down as one of the most impressive accomplishments of the human intellect.
And yet cosmologists want to do better still. The goal is to discover relics that predate even Big Bang Nucleosynthesis. At the moment that’s not quite possible, but there is one promising candidate: dark matter, the dense but unseen stuff that holds galaxies together.
Roughly speaking, if we get lucky, we could learn about conditions in the universe about 1/10,000th of a second after the Big Bang. We’d like to go even much earlier than that, but let’s not forget to be impressed at how well we’ve already done.
|Poster: Earth's Energy Budget|
|This NASA poster depicting the Earth's Energy Budget, with Activities, is available at http://science-edu.larc.nasa.gov/energy_budget.|
|Tour of the Electromagnetic Spectrum|
http://missionscience.nasa.gov/ems/index.html This series of eight videos covers an introduction to electromagnetic waves and the different regions throughout the spectrum (Radio, Microwave, Infrared, Visible, Ultraviolet, X-Rays and Gamma Rays). These animated videos explain each region with examples from real world NASA science applications.
http://missionscience.nasa.gov/ems/TourOfEMS_Booklet_Web.pdf Also available is a 29-page booklet that can be downloaded covering the spectrum, wave behaviors, anatomy of an electromagnetic wave, and how images are created. Each region of the spectrum is detailed with images from NASA missions.
This kid-friendly Web site for ages 10-12 answers the big questions about global climate change using simple illustrations, humor, interactivity and age-appropriate language. Includes a collection of Earth-science-related games and a Green Careers section which profiles real people doing jobs that help slow climate change.
|My NASA Data|
Students of all ages can investigate microsets of NASA Earth science satellite data, including atmosphere, biosphere, cryosphere, ocean and land surface. Many new data types continue to be added to the collection along with online lesson plans, teacher-friendly documentation, computer tools and an Earth science glossary. MY NASA DATA can be used with existing curriculum and to enable students to practice science inquiry and math or technology skills using real measurements of Earth system variables and processes. Examples of climate and energy-related lessons include:
|Climate Change Wildlife and Wildlands Toolkit|
This resource for middle school and informal education focuses on climate change impacts on wildlife and wildlands across the United States. The toolkit divides the country into 11 distinct "eco-regions" based on a number of factors including geography and habitat type.
|Earth Exploration Toolbook|
Earth Exploration Toolbook is a collection of computer-based Earth science activities for middle school to college level instruction. Each activity, or chapter, introduces one or more data sets and an analysis tool that enables users to explore some aspect of the Earth system. Step-by-step instructions in each chapter walk users through an example—a case study in which they access data and use analysis tools to explore issues or concepts in Earth system science. Several chapters use NASA Earth science data.
|Earth & Sky Podcasts|
Bruce Wielicki on Clouds and Earth's Energy Balance - In this Clear Voices for Science podcast (8 minutes), NASA scientist Bruce Wielicki spoke with EarthSky's Jorge Salazar about how clouds affect the energy Earth receives, keeps, and emits back to space—Earth's energy balance. Wielicki is the principal investigator for NASA's Clouds and the Earth's Radiant Energy System (CERES) mission.
NASA eClips™ are short, educational video segments that inspire and engage students, helping them see real world connections. The programs are produced for targeted audiences: Our World (K–5), Real World (grades 6–8), and Launchpad (grades 9–12 and the general public). Examples of NASA eClips™ related to climate and energy are:
Programs - Get Started
GLOBE (Global Learning and Observations to Benefit the Environment) is a worldwide hands-on, primary and secondary school-based science and education program. GLOBE's vision promotes and supports students, teachers and scientists to collaborate on inquiry-based investigations of the environment and the Earth system. GLOBE is sponsored by NASA, NOAA, NSF and U.S. Department of State.
A new GLOBE Student Climate Research Campaign (coming September 2011) will engage students from around the world in the process of investigating and researching their local climate and sharing their findings globally. SCRC is comprised of learning activities, events, and research investigations. Teachers can start preparing their students now to participate in SCRC. For more information, go to: http://www.globe.gov/explore_science/conduct_research/scrc.
|S'COOL (Students' Cloud Observations On-Line)|
S'COOL is a real-time, collaborative science experiment that elementary through secondary students conduct with NASA scientists. Participants make ground truth observations of clouds for comparison with satellite data. These observations help NASA scientists validate the measurements from NASA's CERES satellite instrument (Clouds and Earth's Radiant Energy System). The S'COOL Web site includes several educational resources, including tutorials, cloud ID charts and ideas for projects. The site also includes information on Roving Cloud Observations for S'COOL, a program for citizen scientists.
|NASA's Global Climate Change Education (GCCE) Initiative|
NASA's Global Climate Change Education Initiative has awarded several grants to organizations across the United States to explore innovative ways to teach the science surrounding global climate change and Earth system science. Funded projects include all levels of formal and informal education, including a range of activities, such as courses and workshops for educators, learning resources, citizen science projects, research opportunities for teachers and students and more. Visit the NASA GCCE Web site to learn more about the funded projects and link to resources and datasets for climate change education.
This article explains the features and characteristics of the Java language.
Java is Simple:
Java is designed to be easy for the professional programmer to learn and use. If you have some programming experience, you will not find Java hard to master. In Java there are a small number of clearly defined ways to accomplish a given task.
Java is Purely OOP:
Java is a pure object-oriented language. Everything in Java is inside a class. The object model in Java is simple and easy to extend.
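For instance, even the smallest stand-alone program must be written as a class. The class name and message below are arbitrary, but the structure is what Java requires:

// Everything lives inside a class; even the program's entry point (main) is a method of a class.
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello from a pure object-oriented language");
    }
}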
Java programs are Robust:
In today’s world programs must execute reliably in a variety of systems. To gain reliability, Java restricts us in a few key areas that force us to find our mistakes early in program development. Java is a strictly typed language; it checks code at compile time and also at run time. Many hard-to-track-down bugs that often turn up in hard-to-reproduce run-time situations are simply impossible to create in Java.
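As a small, hypothetical illustration of that compile-time checking, the assignment commented out below is rejected by the compiler instead of surfacing later as a run-time bug:

public class TypeCheckDemo {
    public static void main(String[] args) {
        int count = 10;
        // String label = count;                   // rejected at compile time: incompatible types
        String label = Integer.toString(count);    // the conversion must be made explicit
        System.out.println(label);
    }
}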
The output of a Java compiler is not executable code; rather, it is bytecode. Bytecode is a highly optimized set of instructions designed to be executed by the Java run-time system known as the Java Virtual Machine (JVM). The JVM is an interpreter for bytecode. Translating a Java program into bytecode makes it much easier to run a program in a wide variety of environments, because now we only have to implement the JVM for each platform. Thus it is the easiest way to create truly portable programs.
When we use a Java-compatible Web browser, we can safely download Java applets without fear of viral infection or malicious intent. Java achieves this protection by confining a Java program to the Java execution environment and not allowing it access to other parts of the computer.
Chunk:Cascading Style Sheet
A Cascading Style Sheet or CSS is used to control the presentation of an HTML page. For example, a CSS file will often control the font, margins, color, background graphics, and other aspects of a web page's appearance. CSS allows you to separate the content of an HTML page from its appearance. In Joomla!, CSS files (for example, template.css) are normally part of the template.
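As an illustration, a template.css file might contain rules like the following (the selectors and values here are invented, not taken from any particular Joomla! template); note that they describe only presentation, never content:

/* Hypothetical template.css fragment: presentation rules only. */
body {
    font-family: Arial, Helvetica, sans-serif;                  /* font */
    color: #333333;                                              /* text color */
    margin: 0 2em;                                               /* margins */
    background: #f4f4f4 url("images/header-bg.png") repeat-x;   /* background graphic */
}
h1 {
    color: #005599;
}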
General Astronomy/Thermal Radiation
All objects emit energy in the form of electromagnetic radiation. As the atoms are shaken by random thermal motion, the moving charge of the electrons causes them to emit a changing electromagnetic field. In general, the cooler the body, the slower the motion of its atoms and molecules, and the longer the wavelength of emitted radiation. Thus a human body emits mostly in the infrared part of the spectrum, making night vision cameras so valuable to the military and police. But the tungsten filament of an incandescent light bulb is at a much higher temperature (roughly 3000 K or about 5000 degrees F), causing it to emit mostly visible light.
Thus the spectrum and intensity of the emitted radiation can be used to determine the object's temperature from a distance. If a material is heated above 700 kelvins, it begins to glow visibly - starting out as a dark red color and moving towards the blue end of the spectrum with increasing temperature. However, most objects radiate over a wide range of wavelengths, and the effective color perceived by the human eye may not be fully indicative of the true temperature. For example, the Sun appears white to most observers, but its surface temperature is about 5800 K (roughly 10,000 degrees Fahrenheit), and the wavelength at which it radiates most of its energy is spectroscopically equivalent to a green color. However, when the human eye detects the various wavelengths we receive from the Sun, in the particular ratios the Sun emits them, our eye-brain connection perceives it as white.
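A rough way to put numbers on this is Wien's displacement law, which says the peak wavelength of thermal emission is inversely proportional to temperature. A short Python sketch (the temperatures are the approximate values quoted above):

# Wien's displacement law: peak wavelength (metres) = b / T, with b ~ 2.898e-3 m*K.
B_WIEN = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_nm(temperature_k):
    return B_WIEN / temperature_k * 1e9   # metres -> nanometres

for label, t in [("human body", 310), ("tungsten filament", 3000), ("Sun's surface", 5800)]:
    print("%-17s %5d K   peak near %5.0f nm" % (label, t, peak_wavelength_nm(t)))
# Prints roughly 9300 nm (far infrared), 970 nm (near infrared), and 500 nm (green).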
See also the
Dr. Math FAQ:
0.9999 = 1
0 to 0 power
n to 0 power
0! = 1
dividing by 0
Browse High School Number Theory
Stars indicate particularly interesting answers or
good places to begin browsing.
Selected answers to common questions:
Infinite number of primes?
Testing for primality.
What is 'mod'?
- Proof that Sqrt(3) is Irrational [08/14/1997]
How does one prove that sqrt(3) is irrational? or others? Is there a
general algorithm? How about just for primes?
- Proof That the Cube Root of 3 is Irrational [05/22/2000]
How can I show that the cube root of 3 is irrational?
- Proof with Pigeonhole Principle [09/20/2001]
Prove that among five points selected inside an equilateral triangle with
sides of length 2, there always exists a pair at a distance not greater
- Proof with Powers of 2 and a Product [10/22/2008]
Prove that starting with any power of 2 there is a number such that
the product of those two numbers will contain only the digits 1 and 2.
For example, 4*3 = 12, 8 * 14 = 112, 64*33 = 2112.
- Properties of the Phi Function [01/19/1999]
What are some properties of the phi function? What about the phi function
and prime numbers?
- Prove 101 the Only Prime [12/14/2002]
In this sequence of integers, all in base 10: 101, 10101, 1010101,
101010101, 10101010101, .......,,,, etc., prove that 101 is the only
prime in the sequence.
- Prove a and b are Perfect Squares [12/28/2001]
Let a and b be positive integers such that (a,b) = 1 and ab is a perfect
square. Prove that a and b are perfect squares.
- Prove a = b = c [01/27/2002]
When a^2 + b^2 + c^2 = ab + bc + ca and abc does not equal 0, prove that
a = b = c.
- Prove that 7 + 17*sqrt(17) is Irrational [09/22/2002]
I need to prove that 7 + 17*sqrt(17) is irrational. I know to set it
equal to a/b, and have already proven that any square root of a prime
number is irrational.
- Prove That an Expression is a Multiple of 10 [12/19/2002]
If a and b are positive integers, prove that (a^5)*(b) - (a)*(b^5) is
a multiple of 10.
- Prove that Log A is Irrational [06/14/1998]
Can you help me prove that a common log of a number (not powers of 10) is irrational?
- Prove that n^3 + 2n is Divisible by 3 [04/15/2001]
Prove by mathematical induction that n^3 + 2n is divisible by 3 for all natural numbers n.
- Prove Twin Primes Greater Than 3 Divisible by 12 [10/08/2002]
Prove that if p and q are twin primes, each greater than 3, then p+q
is divisible by 12.
- Prove x^2+y^2 Not Divisible by 4 [09/20/2001]
Prove that if x and y are odd, then x^2 + y^2 is even but not divisible by 4.
- Proving a Number is Prime [10/13/2004]
How do you really prove that 2 or some other number is a prime number?
- Proving a^x = a^y iff x = y [12/13/2000]
How can I prove that a^x = a^y iff y = x for all real numbers x and y?
- Proving De Moivre's Theorem [12/03/1997]
Prove De Moivre's theorem: (cos(x)+isin(x))^n = cos(nx) + isin(nx).
- Proving Divisibility [04/30/2002]
Prove that 1^99 + 2^99 + 3^99 + 4^99 + 5^99 is divisible by 15.
- Proving Divisibility [09/11/2003]
Prove that (n^2 - n) is divisible by 2 for every integer n; that
(n^3 - n) is divisible by 6; and that n^5 - n is divisible by 30.
- Proving e is Irrational [11/19/1997]
My professor suggested using a proof by contradiction, but I don't
understand how to do it.
- Proving Fermat's Last Theorem for N = 4 [05/18/2000]
How can you prove Fermat's Last Theorem for the specific case n = 4?
- Proving O(n) [01/23/2001]
How would you prove that an equation is of order n, or n squared?
- Proving Perfect Squares [07/05/1998]
Suppose a, b, and c are positive integers, with no factor in common,
where 1/a + 1/b = 1/c. Prove that a+b, a-c, and b-c are all perfect squares.
- Proving Phi(m) Is Even [04/22/1998]
Explain why phi(m) is always even for m greater than 2...
- Proving the Associative Property [02/24/2001]
How can I prove that a binary operation is associative, if all I am given
is a table for the operation?
- Proving the Properties of Natural Numbers [03/08/2000]
How can you prove or derive the commutative, associative, and
distributive properties of numbers?
- Proving the Square Root of 2 is Irrational [02/04/2004]
How can you prove that the square root of 2 is irrational using the
Rational Root Theorem?
- Proving the Square Root of a Prime is Irrational [07/15/1998]
How do you prove that if p is prime, the square root of p is irrational?
- Public Key Encryption [03/29/1999]
Examples and discussion of operations used for encryption, including mod.
- Pythagorean Quadruplets [12/28/1998]
I am trying to find a formula that generates Pythagorean quadruplets
a,b,c,d such that a^2 + b^2 + c^2 = d^2.
- Pythagorean Theorem, Fermat's Last Theorem [5/16/1996]
Can the Pythagorean theorem be done with 3 different numbers?
- Pythagorean Triple [8/28/1996]
What is the formula for finding the three lengths in a Pythagorean triple
where the shortest side is even?
- Pythagorean Triples [10/07/1997]
What is a Pythagorean triple?
- Pythagorean Triples [04/14/1997]
Why can't all the numbers in a Pythagorean triple be prime?
- Pythagorean Triples [07/14/1997]
Is there a formula to determine the solutions to the following equations?
a^2 + b^2 = c^2, a^3 + b^3 + c^3 = d^3...
- Pythagorean Triples [11/19/1997]
I need to know the first five Pythagorean triples after 3,4,5...
- Pythagorean Triples [05/22/1999]
What is the general formula for all sides of any triple?
- Pythagorean Triples [05/31/1999]
Is there a procedure for finding Pythagorean triples?
- Pythagorean Triples [5/18/1995]
How can the relation between Pythagorean triples be expressed as a
- Pythagorean Triples Divisible by 5 [11/17/2000]
Do all right triangles with integer side lengths have a side with a
length divisible by 5?
Radioactivity, Temperature, Pressure
Name: Bob B.
Date: Thursday, August 22, 2002
Does the decay rate of U238 vary with temperature and pressure?
Not significantly. U238 decays by emitting an alpha particle (a helium
nucleus). The decay rate is determined almost entirely by the environment
in the nucleus. Temperature and pressure changes that we are capable of
making affect the electron cloud and do not affect the nucleus.
Maybe the sorts of temperatures and pressures you could get in a fusion
reactor would measurably affect the decay rate, but I doubt it.
Richard E. Barrans Jr., Ph.D.
Director of Academic Programs
PG Research Foundation, Darien, Illinois
Radioactive decay is a nuclear process. The rate of fission (in the case
of U238) as well as of other decays is proportional to the concentration of
nuclei. So the rate will change slightly with large changes in the
temperature and pressure because the volume of the fissionable material
decreases with increasing pressure and decreasing temperature. However, this
is a minor effect if the concentration of nuclei is significantly less than
the so-called critical mass.
When the concentration of nuclei reaches the critical mass, the decay
events produce sufficient neutrons to cause a chain reaction, i.e. the
neutrons strike other nuclei causing them to decay, which produces more
neutrons which cause other nuclei to decay,......
This is what happens in an atomic bomb, and under more controlled
conditions in a nuclear reactor. You can find this discussed in lay terms in
the book: "The Making of the Atomic Bomb" by Richard Rhodes.
However, the temperatures and pressures that can be obtained with
standard equipment do not affect the rate of radioactive decay. What happens
in stars etc. where enormous pressures are generated is also excluded.
So far as is known, decay rates of radioactive elements are constant and
independent of the conditions about which you ask. If decay rates were found
to be temp/pressure dependent, a good deal of geologic dating work would be
called into question.
Update: June 2012
This is a commentary on DOI:10.1029/2003JD003610
Climate and Dynamics
Earthshine and the Earth's albedo: 2. Observations and simulations over 3 years
Article first published online: 29 NOV 2003
Copyright 2003 by the American Geophysical Union.
Journal of Geophysical Research: Atmospheres (1984–2012)
Volume 108, Issue D22, 27 November 2003
How to Cite
(2003), Earthshine and the Earth's albedo: 2. Observations and simulations over 3 years, J. Geophys. Res., 108(D22), 4710, doi:10.1029/2003JD003611.
- Issue published online: 29 NOV 2003
- Article first published online: 29 NOV 2003
- Manuscript Accepted: 26 AUG 2003
- Manuscript Revised: 12 AUG 2003
- Manuscript Received: 19 MAR 2003
Since late 1998, we have been making sustained measurements of the Earth's reflectance by observing the earthshine from Big Bear Solar Observatory. Further, we have simulated the Earth's reflectance for both the parts of the Earth in the earthshine and for the whole Earth. The simulations employ scene models of the Earth from the Earth Radiation Budget Experiment, simulated snow/ice cover, and near-real-time satellite cloud cover data. Broadly, the simulations and observations agree; however, there are important and significant differences, with the simulations showing more muted variations. During the rising phase of the Moon we measure the sunlit world to the west of California, and during the declining lunar phase we measure the sunlit world to the east. Somewhat surprisingly, the one third of the Earth to the west and that to the east have very similar reflectances, in spite of the fact that the topographies look quite different. The part to the west shows less stability, presumably because of the greater variability in the Asian cloud cover. We find that our precision, with steady observations since December 1998, is sufficient to detect a seasonal cycle. We have also determined the annual mean albedos both from our observations and from simulations. To determine a global albedo, we integrate over all lunar phases. Various methods are developed to perform this integration, and all give similar results. Despite sizable variation in the reflectance from night to night and from season to season (which arises from changing cloud cover), we use the earthshine to determine annual albedos to better than 1%. As such, these measurements are significant for measuring climate variation and are complementary to satellite determinations.
A while ago I had a discussion with someone about the speed of an object when it hits the ground. When does an object hit the ground faster:
- When you launch the object to the sky so it can fall down;
- Or when you launch the object to the ground so it doesn't have to deal with much air resistance.
The attraction “The Booster” has an arm that moves at a constant speed (about 120 km/h), is about 80 m high, and undergoes full revolutions.
In this situation the cart gets detached at 90 degrees.
I think that the cart hits the ground with a higher speed when it's launched to the ground because of:
- The air resistance: when you launch it to the sky, it has a longer path to travel, so it needs to travel through much more air.
- The cart has the same initial speed either way; over a shorter path there is less to slow the cart down;
- Nothing provides the cart with additional energy;
- I guess that the "final gravity" is the same.
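To check the intuition, here is a very simplified, purely vertical model in Python: a cart released from 80 m at 120 km/h, either straight up or straight down, with quadratic air drag. The drag constant is an arbitrary illustrative value, not the real cart's; without drag both cases would land at exactly the same speed.

# Simplified 1-D model: gravity plus quadratic drag. K is a made-up drag constant.
G = 9.81        # m/s^2
K = 0.003       # assumed drag constant per unit mass, 1/m (illustrative only)
H = 80.0        # release height, m
V0 = 120 / 3.6  # release speed, m/s (about 33.3 m/s)
DT = 0.001      # time step, s

def impact_speed(v0):
    """Integrate until the cart reaches the ground; positive v means upward."""
    y, v = H, v0
    while y > 0:
        a = -G - K * v * abs(v)   # drag always opposes the current direction of motion
        v += a * DT
        y += v * DT
    return abs(v)

print("released upward:   %.1f m/s" % impact_speed(+V0))
print("released downward: %.1f m/s" % impact_speed(-V0))
# With drag, the downward release hits the ground slightly faster, as argued above.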
NASA: We May Be On the Verge of a “Mini-Maunder” Event.
This week, scientists from the US Solar Observatory and the US Air Force Research Laboratory have discovered – to their great surprise – that the sun’s activity is declining, and that we might experience the lowest solar output we’ve seen since 1645-1715. The Register describes it in dramatic tones:
What may be the science story of the century is breaking this evening.
Scientists who are convinced that global warming is a serious threat to our planet say that such a reduced solar output would simply buy us more time… delaying the warming trend, but not stopping or reversing it.
On the other hand, scientists who are skeptical about global warming say that the threat is a new mini ice age. (Remember that scientists have been convinced in the past that we would have a new ice age, and even considered pouring soot over the arctic in the 1970s to help melt the ice in order to prevent another ice age. Obama’s top science advisor was one of those warning of a new ice age in the 1970s. And see this.)
NASA reports this week that we may be on the verge of another Maunder Minimum (a period with an unusually low number of sunspots, leading to colder temperatures):
Much has been made of the probable connection between the Maunder Minimum, a 70-year deficit of sunspots in the late 17th-early 18th century, and the coldest part of the Little Ice Age, during which Europe and North America were subjected to bitterly cold winters. The mechanism for that regional cooling could have been a drop in the sun's EUV output; this is, however, speculative.
ENABLE TRIGGER (Transact-SQL)
Enables a DML, DDL, or logon trigger.
Enabling a trigger does not re-create it. A disabled trigger still exists as an object in the current database, but does not fire. Enabling a trigger causes it to fire when any Transact-SQL statements on which it was originally programmed are executed. Triggers are disabled by using DISABLE TRIGGER. DML triggers defined on tables can also be disabled or enabled by using ALTER TABLE.
To enable a DML trigger, at a minimum, a user must have ALTER permission on the table or view on which the trigger was created.
To enable a DDL trigger with server scope (ON ALL SERVER) or a logon trigger, a user must have CONTROL SERVER permission on the server. To enable a DDL trigger with database scope (ON DATABASE), at a minimum, a user must have ALTER ANY DATABASE DDL TRIGGER permission in the current database.
A. Enabling a DML trigger on a table
The following example disables trigger uAddress that was created on table Address, and then enables it.
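A sketch of the statements that example refers to (it assumes a DML trigger named uAddress already exists on a table named Address; schema qualification is omitted here):

DISABLE TRIGGER uAddress ON Address;
GO
ENABLE TRIGGER uAddress ON Address;
GO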
B. Enabling a DDL trigger
The following example creates a DDL trigger safety with database scope, disables it, and then enables it.
IF EXISTS (SELECT * FROM sys.triggers
           WHERE parent_class = 0 AND name = 'safety')
    DROP TRIGGER safety ON DATABASE;
GO
CREATE TRIGGER safety ON DATABASE
FOR DROP_TABLE, ALTER_TABLE
AS
    PRINT 'You must disable Trigger "safety" to drop or alter tables!'
    ROLLBACK;
GO
DISABLE TRIGGER safety ON DATABASE;
GO
ENABLE TRIGGER safety ON DATABASE;
GO
It is very hard to even think about the state of early man on earth when he had no home, no instruments or weapons, and no clothes. Man, unlike other living beings, has a more active and functional brain. However, his brain is not the sole factor which has made him far more progressive, advanced and developed than any other animal.
The first invention of man is said to be a primitive tool which consisted of a split stone and served a wide range of purposes. After this basic tool, man prepared the hand axe, knife, and many other tools and instruments. All these discoveries and inventions led to the evolution of human civilization.
The word “Science” is derived from the Latin word “Scientia”, which means knowledge. Science is probably the most important and helpful subject of study for the human race.
Most Famous Scientists and Inventors in History
Inventions and discoveries are generally the direct result of systematic research work. On certain occasions, however, inventions are a chance event. The famous antibiotic Penicillin was accidentally discovered by Sir Alexander Fleming when he was attempting to study staphylococci bacteria. Sometimes necessity makes scientists discover new things. For instance, guided missiles had to be developed by German scientists during World War II in order to destroy and defeat England.
The classic theories put forward by Pythagoras, Aristotle, Archimedes, Socrates, Plato, Jabir Ibn Hayyan, etc. are still relevant today, and have made crucial contributions to scientific developments.
The Era of Modern Science
The age of science really took off in the middle of the 17th century. Robert Boyle was the first scientist to introduce the method of experiments and its importance in the field of science. In this period, various institutions of science and inventors started working in different countries of Europe such as England, France and Germany.
Many famous scientists, such as Galileo and Newton, put forward many principles and theories in different fields and areas of science. Since then modern science has utilized and built upon those methods and techniques to discover many of the things we take for granted in our world.
Fascinating article in the Guardian here, which suggests that in order to buy us more time from global warming we should paint everything white?
It sounds simple, but the effect could be dramatic. Study after study has shown that buildings with white roofs stay cooler during the summer. The change reduces the way heat accumulates in built-up areas – known as the urban heat island effect – and allows people who live and work inside to switch off power-hungry air conditioning units.
Aware of the benefit, California has forced warehouses and other commercial premises with flat roofs to make them white since 2005, and, if such an effort could be extended, the results could make a big difference.
Together, roads and roofs are reckoned to cover more than half the available surfaces in urban areas, which have spread over some 2.4% of the Earth’s land area. A mass movement to change their colour, Akbari calculates, would increase the amount of sunlight bounced off our planet by 0.03%. And, he says, that would cool the Earth enough to cancel out the warming caused by 44bn tonnes of CO2 pollution. If you think that sounds like a lot, then you’re right. It would wipe out the expected rise in global emissions over the next decade. It won’t solve the problem of climate change, Akbari says, but could be a simple and effective weapon to delay its impact – just so long as people start doing it in earnest. “Roofs are going to have to be changed one by one and to make that effort at a very local level, we need to have an organisation in place to make it happen,” he says. Groups in several US cities, including Houston, Chicago and Salt Lake City, are on board with his plan, and he is talking to others.
The idea is a form of geo-engineering, a broad term used to cover all schemes that tackle the symptoms of climate change, namely catastrophic temperature rise, without addressing the root cause, our spiralling greenhouse gas emissions. And if altering all of the world's roofs and roads sounds extreme, then take a look at some ideas from the other end of the geo-engineering scale: giant mirrors in space, shiny balloons to float above the clouds and millions of fake plastic trees to suck carbon from the air. An increasing number of climate scientists argue that the world has little choice but to investigate such drastic options. Carbon emissions since 2000 have risen faster than anyone thought possible, mainly driven by the coal-fuelled boom in China, and a global temperature rise of 2-3C seems inevitable. Last year a special edition of a Royal Society journal dedicated to geo-engineering said the geo-engineering schemes "may be risky, but the time may well come when they are accepted as less risky than doing nothing".
References: FORMAT integer printing (p. 388-9)
Edit history: Version 1, Pavel, 10-Jun-87
Version 2, Masinter, 15-Jun-87
There are times when users would like to print out numbers with some punctuation
between groups of digits. The "commachar" argument to the ~D, ~B, ~O, ~X, and
~R FORMAT directives was introduced to fill that need. Unfortunately, the
interval at which the commachar should be printed is always every three digits.
This constraint is annoying when a different interval would be more appropriate.
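For reference, with the current fixed interval a call such as the following groups digits in threes:

(format nil "~:D" 1234567) => "1,234,567"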
Add a fourth argument to the ~D, ~B, ~X, and ~O FORMAT directives, and a fifth
argument to the ~R directive, to be called the "comma-interval". This value
must be an integer and defaults to 3. When the : modifier is given to any of
these directives, the "commachar" is printed between groups of "comma-interval" digits.
Under the proposal, the following forms return the indicated values:
(format nil "~,,' ,4:B" 13) => "1101"
(format nil "~,,' ,4:B" 17) => "1 0001"
(format nil "~19,0,' ,4:B" 3333) => "0000 1101 0000 0101"
(format nil "~3,,,' ,2:R" 17) => "1 22"
(format nil "~,,'|,2:D" #xFFFF) => "6|55|35"
The current specification of FORMAT gives the user control over almost all of
the facets of printing integers. Only the wired-in constant for the
comma-interval remains, even though there are uses for varying that number. For
example, in many contexts, it would be convenient to be able to print out
numbers in binary with a space between each group of four bits. FORMAT does not
currently allow specification of the commachar printing interval so users
needing this functionality must write it themselves, duplicating much of the
logic in every implementation's integer printing code. Other uses, requiring
other intervals, can be imagined. For example, using a "commachar" of #\Newline
and a "comma-interval" of, say, 72, very large bignums could be printed in such
a way as to ensure line-breaks at appropriate places.
Current practice:
No released implementations currently support this feature.
Cost to implementors:
The change in the implementation of FORMAT is reasonably minor and, in most
cases, highly localized. Usually, the change is as simple as taking an extra
parameter and using it instead of a wired-in value of 3.
Cost to users:
Since the proposal involves the addition of an argument to certain directives,
the change would be entirely upward-compatible. No user code would need to be
Cost of non-adoption:
Users desiring this functionality will have to write it themselves, duplicating
much of the logic involved in printing integers at all.
One of the few remaining inflexibilities in integer printing is eliminated with
a net increase in user-visible functionality.
By parameterizing one of the final pieces of wired-in behavior from integer
printing, this small part of the workings of FORMAT would be perceived as having
been cleaned up.
Several members of the cleanup committee endorsed this proposal. No objections were raised.
Diatoms have cell walls made of silica, constructed like pill- or date-boxes, and can be preserved in sediments over many centuries. The structure of the walls can be seen after cells have been treated to remove the organic contents.
Diatom species are usually identified by the shape and patterns of the silica walls. Strong, short ribs under a keel along the valve (half wall) are a diagnostic character for the genus Nitzschia, of which N. sigmoidea is the type species.
The different species of Nitzschia are usually recognised by the shape and size of their valves, the density and orientation of rows of pores across their surface, and the short ribs that span the raphe, which often lies along the ridge of a lateral keel. The raphe is usually formed of a pair of longitudinal slits in the wall through which mucilage can be secreted.
Diatom cells are described in two views – valve and girdle view. Cells of N. sigmoidea are more or less sigmoid in girdle view, and linear with wedge-shaped ends in valve view. Cells are about 100-500µm long (0.1-0.5mm), 8-15µm wide in valve view, and about 30µm wide in girdle view. Girdle width varies with the stage of the cell cycle. The raphe keel is slightly off-centre, with irregularly spaced ribs spanning the keel (about 5-7 in 10 µm). Parallel rows of pores cross the valve surface, 23-27 rows in 10µm.The girdle region is formed of several open bands that also contain pores.
With electron microscopy more detail of the wall structure is visible.
|An alligator lily in Everglades National Park.|
Michael Huston is now involved in a high-stakes project whose ultimate success may depend on whether his ideas about species diversity, ecological modeling, and land-use tradeoffs are proven right or wrong. The Clinton administration has made the restoration of Florida Bay, Everglades National Park, and other threatened South Florida ecosystems a national priority. One of the highest scientific priorities in the restoration effort is the generation of information using a computer modeling system based on ideas developed at ORNL and the University of Tennessee at Knoxville (UTK). Use of this individual-based modeling approach pioneered at ORNL has been identified as essential by an interagency task force that includes the Army Corps of Engineers, the U.S. Geological Service, the Environmental Protection Agency, the National Park Service, and the National Biological Service, along with such state and local agencies as the South Florida Water Management District. This multi-institutional collaboration is known as the Across Trophic Level System Simulation (ATLSS).
For the ATLSS project, Huston, Don DeAngelis (now with the U.S. Geological Survey), and Lou Gross (UTK) are linking multiple individual-based models. However, rather than fish and trees, the individuals in these models represent some of the major endangered species of South Florida, including the Florida panther, the Everglades kite, and the wood stork, along with their prey: deer, snails, and fish. In the ATLSS modeling system, virtual panthers stalk their simulated prey on a computerized landscape derived from satellite images. If the virtual water is too deep, the movement of the panthers is restricted, but that of their primary prey, the white-tailed deer, is restricted even more. The depth and distribution of water define the unique properties of the Everglades and Big Cypress Swamp, and restoration of the natural pattern of water availability is the primary method for restoring the ecosystem. The objective of the ATLSS models is to predict the biological effects of different patterns of water availability. Such information will assist the Army Corps of Engineers in formulating the restoration plan.
One of Huston's responsibilities is to use information available from hydrologic models to predict how the ecosystem's vegetation will respond in wet and dry seasons and in wet and dry years. The answers will reveal how much food will be available for the deer and how many deer will be available for panthers to feed on under different scenarios of water availability. The virtually flat landscape of South Florida presents an entirely different challenge to Huston than did the Walker Branch Watershed project for which he performed hydrologic and landscape modeling. The topographic relief on Walker Branch Watershed near ORNL exceeds that of the entire state of Florida, and the slope gradient of the Everglades is only about one inch per mile. "Water still runs downhill in Florida," he says, "but just not very fast."
The deterioration of the ecosystems of the Everglades and Florida Bay has resulted from competing demands for land and water from the rapidly growing urban and agricultural areas around Miami. Resolution of this conflict between man and nature is where Huston's ideas about land use and biodiversity may come into play. The primary agricultural activity is located on the deepest and most fertile soils at the northern end of the Everglades, just downstream from Lake Okeechobee.
"This was a rational land use decision," Huston argues, "and need not lead to a serious loss in biodiversity. Plant diversity in this region was naturally low, and the vast stands of sawgrass have simply been replaced by sugar cane and other crops. This highly productive region is ideal for supporting animal diversity, rather than plant diversity. It may now actually support a higher density and diversity of threatened and endangered predators than it did prior to the coming of agriculture. We saw bald eagles nesting next to sugar cane fields, thriving on rats and rabbits. Other rare predators, such as bobcats, otters, foxes, barn owls, and a variety of hawksand of course, 'Florida gators'are also common in the agricultural area, surviving in and around the cane fields."
|Although many conservationists see agriculture as the primary evil threatening the Everglades, Huston sees things differently.|
|Cypress trees in Big Cypress National Preserve.|
Although many conservationists see agriculture as the primary evil threatening the Everglades, Huston sees things differently. "The fertile soils of the Everglades agricultural area give it a contrasting and very important role in comparison with the rest of the system." Huston has seen, just as he expected, that the highest plant diversity is found in unproductive "wet prairies" with stunted pines and cypress. "These unproductive wetlands are great for protecting plant diversity, but lousy for most animals, especially the large predators," he says. "Different parts of the system make different contributions to the health of both the environment and the economy of South Florida."
Huston's task now is to convince decision makers who will determine the fate of the Everglades that man and nature can coexist sustainably, to their mutual benefit.
View a collection of video clips and associated information related to projectile motion.
PBS Teachers - View a lesson plan on the topic of projectile motion. The lesson plan centers around the physics of juggling and is presented by PBS Teachers.
Shockwave Studios - The Projectile Simulator activity from the Shockwave Studios is an excellent accompaniment for this reading.
Curriculum Corner - This collection of sense-making activities from The Curriculum Corner will help your students understand projectile motion.
Treasures from TPF - Need ideas? Need help? Explore The Physics Front's treasure box of catalogued resources on projectile motion.
PhET Simulation: Projectile Motion - This projectiles lesson from The Physics Classroom Tutorial provides a thorough background of projectiles to complement the PhET simulation.
What is a Projectile?
In Unit 1 of the Physics Classroom Tutorial, we learned a variety of means to describe the 1-dimensional motion of objects. In Unit 2 of the Physics Classroom Tutorial, we learned how Newton's laws help to explain the motion (and specifically, the changes in the state of motion) of objects that are either at rest or moving in 1-dimension. Now in this unit we will apply both kinematic principles and Newton's laws of motion to understand and explain the motion of objects moving in two dimensions. The most common example of an object that is moving in two dimensions is a projectile. Thus, Lesson 2 of this unit is devoted to understanding the motion of projectiles.
A projectile is an object upon which the only force acting is gravity. There are a variety of examples of projectiles. An object dropped from rest is a projectile (provided that the influence of air resistance is negligible). An object that is thrown vertically upward is also a projectile (provided that the influence of air resistance is negligible). And an object which is thrown upward at an angle to the horizontal is also a projectile (provided that the influence of air resistance is negligible). A projectile is any object that once projected or dropped continues in motion by its own inertia and is influenced only by the downward force of gravity.
By definition, a projectile has a single force that acts upon it - the force of gravity. If there were any other force acting upon an object, then that object would not be a projectile. Thus, the free-body diagram of a projectile would show a single force acting downwards and labeled force of gravity (or simply Fgrav). Regardless of whether a projectile is moving downwards, upwards, upwards and rightwards, or downwards and leftwards, the free-body diagram of the projectile is still as depicted in the diagram at the right. By definition, a projectile is any object upon which the only force is gravity.
Projectile Motion and Inertia
Many students have difficulty with the concept that the only force acting upon an upward moving projectile is gravity. Their conception of motion prompts them to think that if an object is moving upward, then there must be an upward force. And if an object is moving upward and rightward, there must be both an upward and rightward force. Their belief is that forces cause motion; and if there is an upward motion then there must be an upward force. They reason, "How in the world can an object be moving upward if the only force acting upon it is gravity?" Such students do not believe in Newtonian physics (or at least do not believe strongly in Newtonian physics). Newton's laws suggest that forces are only required to cause an acceleration (not a motion). Recall from Unit 2 that Newton's laws stood in direct opposition to the common misconception that a force is required to keep an object in motion. This idea is simply not true! A force is not required to keep an object in motion. A force is only required to maintain an acceleration. And in the case of a projectile that is moving upward, there is a downward force and a downward acceleration. That is, the object is moving upward and slowing down.
To further ponder this concept of the downward force and a downward acceleration for a projectile, consider a cannonball shot horizontally from a very high cliff at a high speed. And suppose for a moment that the gravity switch could be turned off such that the cannonball would travel in the absence of gravity? What would the motion of such a cannonball be like? How could its motion be described? According to Newton's first law of motion, such a cannonball would continue in motion in a straight line at constant speed. If not acted upon by an unbalanced force, "an object in motion will ...". This is Newton's law of inertia.
Now suppose that the gravity switch is turned on and that the cannonball is projected horizontally from the top of the same cliff. What effect will gravity have upon the motion of the cannonball? Will gravity affect the cannonball's horizontal motion? Will the cannonball travel a greater (or shorter) horizontal distance due to the influence of gravity? The answer to both of these questions is "No!" Gravity will act downwards upon the cannonball to affect its vertical motion. Gravity causes a vertical acceleration. The ball will drop vertically below its otherwise straight-line, inertial path. Gravity is the downward force upon a projectile that influences its vertical motion and causes the parabolic trajectory that is characteristic of projectiles.
A projectile is an object upon which the only force is gravity. Gravity acts to influence the vertical motion of the projectile, thus causing a vertical acceleration. The horizontal motion of the projectile is the result of the tendency of any object in motion to remain in motion at constant velocity. Due to the absence of horizontal forces, a projectile remains in motion with a constant horizontal velocity. Horizontal forces are not required to keep a projectile moving horizontally. The only force acting upon a projectile is gravity! | <urn:uuid:5978acec-2770-4428-97d0-5802007d2877> | 4 | 1,167 | Tutorial | Science & Tech. | 44.025425 |
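As a quick numerical illustration of these ideas (not part of the original tutorial), here is a short Python sketch; the launch speed, angle, and time step are arbitrary example values:

```python
import math

g = 9.8                      # downward acceleration due to gravity (m/s^2)
v0 = 20.0                    # illustrative launch speed (m/s)
angle = math.radians(45)     # illustrative launch angle

vx = v0 * math.cos(angle)    # horizontal velocity: no horizontal force, so it never changes
vy0 = v0 * math.sin(angle)   # initial vertical velocity: changed only by gravity

dt = 0.25
t = 0.0
while True:
    x = vx * t                       # constant-velocity horizontal motion (inertia alone)
    y = vy0 * t - 0.5 * g * t**2     # uniformly accelerated vertical motion (gravity alone)
    if y < 0:
        break
    print(f"t={t:4.2f} s  x={x:6.2f} m  y={y:5.2f} m")
    t += dt
```

The horizontal position grows by the same amount every step because no horizontal force acts, while the vertical position rises and then falls under the single downward force of gravity, tracing the parabolic trajectory described above.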
May11-12, 06:00 PM | #1
Periodic Table/ Elements toxic to one another?
So arsenic is poisonous to oxygen-based life forms (humans). What would be poisonous to a sulfur-based life form (if they existed)?
May11-12, 09:34 PM | #2
There are metallotolerant bacteria, which can tolerate relatively high levels of As and other heavy metals.
Toxicity is a term with respect to life forms. Elements are not toxic to one another.
May11-12, 09:55 PM | #3
If you look at the periodic table, arsenic is right below phosphorus in Group V A. Dmitri Mendeleev, a Russian chemist in the 1800s, showed that elements in the same groups have similar chemical properties. Arsenic is so similar to phosphorus that arsenic can bind to several enzymes that catalyze reactions of phosphorus-containing compounds in our body and inhibit them, leading to a loss of function and eventual cell death.
atk.ObjectFactory — the base object class for a factory used to create accessible objects for objects of a specific GType.
This class is the base object class for a factory used to create an accessible object for a specific GType. The appropriate registry method is normally called to store in the registry the factory type to be used to create an accessible of a particular GType; the factory then provides an object that implements an accessibility interface on behalf of the object it wraps.

A factory can also be informed that it is no longer being used to create accessibles. When this happens, the factory may need to inform the objects it has created that they need to be re-instantiated. Note: this is primarily used for runtime replacement of object factories in object registries.
Web edition: January 9, 2013
For years, scientists had assumed that geysers and other types of hot springs spewed water that came from deep inside Earth. Now researchers working at a Philippine volcano have stumbled across an exception. Hot springs flowing into a lake in the volcano’s crater come from a shallow source — the lake itself. The new finding suggests there may be plenty of similar spots worldwide. | <urn:uuid:22a313bf-54dc-4918-be39-191009b310a0> | 3.109375 | 83 | Truncated | Science & Tech. | 50.778424 |
It's difficult to think of free oxygen, which makes up 21 percent of earth's atmosphere, as a poisonous gas. Virtually all organisms alive today need oxygen to survive. Even plants, which expel oxygen into the atmosphere, need oxygen to live. There was a time early in earth's history, however, when living organisms not only did not need oxygen, but were killed when exposed to the element.
Three and a half billion years ago, Earth's atmosphere contained almost no free oxygen. Instead, it consisted mainly of carbon dioxide, perhaps as much as 100 times more carbon dioxide than contained in today's atmosphere. During this time, Earth's only life forms were aquatic, one-celled organisms -- primitive forms of bacteria -- that extracted energy from a variety of sources. To most of these organisms, free oxygen was poisonous.
At about the same time, new types of primitive bacteria evolved. Unlike their predecessors, these bacteria could convert energy directly from the Sun, through photosynthesis. One type, bluish-green organisms called cyanobacteria, used sunlight to convert carbon dioxide and water into glucose (sugar). These organisms expelled oxygen gas -- a by-product of photosynthesis -- into the oceans.
As the cyanobacteria multiplied and spread around the globe, they released more and more free oxygen into the oceans. This oxygen combined with the molecules of living bacteria, killing them off as it did so. It also combined with other matter, such as the iron dissolved in the ocean. Only after the oxygen combined with all of the matter available did free oxygen begin to accumulate in the oceans. And when the oceans became saturated with oxygen, it began to make its way into the atmosphere.
Organisms that don't need free oxygen to survive are called anaerobic; all living things that need free oxygen, including us, are aerobic organisms. As the oceans and atmosphere filled with oxygen gas, aerobic life evolved further, at some point adding a nucleus to the cell (to protect the cell's DNA from the harmful effects of oxygen) and eventually to multi-cellular organisms. Anaerobic life, on the other hand, was pushed back to areas where there was little or no oxygen. Today, anaerobic organisms like those that existed more than three and a half billion years ago live in oxygen-free places such as volcanic hot vents, where they get their energy in ways other than from carbon dioxide, water, and sunlight. | <urn:uuid:48edee81-8caa-4634-bd05-9b1b8517ce66> | 3.859375 | 493 | Knowledge Article | Science & Tech. | 35.606667 |
Franklin Fact Archive
THURSDAY AUGUST 24 - OCEAN TEMPERATURES FROM SPACE: A WEB SITE
Now that we're in hurricane season, forecasters look very carefully at measurements of ocean temperature. That's because hurricanes generally don't form unless the water's at least 80°F, and once a hurricane gets going, there's a strong correlation between hurricane intensity and the warmth of the water. Enter satellite images. From space, satellites can estimate the sea-surface temperature to within a degree or two. Amazingly, this is possible even where it's cloudy. These measurements are colorized and displayed in maps such as the one linked here, which shows recent global ocean temperatures.

You can get images like this on the web, including closeups of the Atlantic and the East Coast waters. These days, there's no better way to monitor the ocean than from space.
Physics is the study of basic properties, materials, and forces in our Universe. Our new physics section will start off with some background material about space, time, and matter. It will also include sections on mechanics, electricity and magnetism, thermal physics, atomic physics and particle physics, and tools for math and science (vectors, coordinate systems, units of measurement, etc.).
The new physics section will also have links to many data sources relevant to Earth and space science. The links below provide access to some of the physics-related materials currently on the site. They are organized roughly into the categories we will use in the forthcoming physics section.
Starting Points for Science
- Space - Distance - Astronomical Unit (AU), Angstrom, Wavelength
- Time - Year, The Seasons
- The Four Fundamental Forces
- Orbital Mechanics
- Fluid Mechanics
Electricity and Magnetism
- The Magnetic Field
- The Force of Magnetism
- The Invisible World of Magnetic Fields
- Disk Magnet and Compass interactive (Flash)
- Bar Magnet and Compass interactive (Flash)
- Material which is magnetic
- Generating a Magnetic Field
- Basic Facts About the Effects of Magnetic Fields on Charged Particles
- Basic Facts About Spiral Motion
- Basic Facts About Bounce Motion
- The Facts About Drift Motion
- Electromagnetic Radiation
Thermal Physics
- No entries here yet. The new physics section will include topics like heat, temperature, the laws of thermodynamics, conduction & convection, etc.
Atomic Physics and Particle Physics
- A Model of the Atom
- Elementary Particles
- Fundamental Forces
- Fusion Reactions
- Elements (Chemical Elements), The Periodic Table of the Elements
- What's a molecule?
Tools for Math and Science
- No entries here yet. The new physics section will include topics like vectors, coordinate systems, units of measurement, etc. | <urn:uuid:33f6a69b-c0a8-4dcc-bf95-03b966be2a93> | 3.6875 | 393 | Content Listing | Science & Tech. | 21.801042 |
A fluxgate magnetometer for measuring magnetic fields.
Image courtesy the Auroral Observatory of the University of Tromso, Norway.
Instruments & Techniques for Space Weather Measurements
How do scientists measure space weather? Let's take a look!
Scientists watch the Sun with special telescopes. Some of the telescopes are on Earth, while others are on satellites. Some of the telescopes are for normal, visible light, but others are for different kinds of electromagnetic radiation. Some telescopes watch infrared (IR), ultraviolet (UV), or even X-ray radiation coming from the Sun.
Solar astronomers use a coronagraph to view the Sun's atmosphere. They use spectroscopes to detect the different kinds of elements in the Sun. A new technique called "helioseismology" even lets scientists "see" inside the Sun!
The Sun gives off light, but it also shoots out radiation. When radiation particles from the Sun get to Earth, radiation detectors on satellites and on Earth measure their types and energy levels.
When radiation from the Sun hits Earth's atmosphere, the radiation can make the atmosphere "glow". The aurora, or Northern and Southern Lights, are an example of this. We can study such "glows" and take pictures of them from Earth or from space.
Some regions of Earth's atmosphere are electrically charged. The electrically charged regions are called the ionosphere. Space weather affects the ionosphere. Scientists study the ionosphere by bouncing radio waves off of it.
Magnetic fields are an important part of space weather. As space weather changes, the strengths and directions of magnetic fields change. Scientists use instruments called magnetometers to measure magnetic fields. There are magnetometers at many places on Earth. There are also magnetometers on satellites around Earth and even on spacecraft circling other planets or exploring different parts of our Solar System.
I found this article interesting.
Scientists Clueless over Sun's Effect on Earth
By Robert Roy Britt, LiveScience Senior Writer
posted: 05 May 2005 02:01 pm ET
While researchers argue whether Earth is getting warmer and if humans are contributing, a heated debate over the global effect of sunlight boiled to the surface today.
And in this debate there is little data to go on.
A confusing array of new and recent studies reveals that scientists know very little about how much sunlight is absorbed by Earth versus how much the planet reflects, how all this alters temperatures, and why any of it changes from one decade to the next.
Determining Earth's reflectance is crucial to understanding climate change, scientists agree.
Reports in the late 1980s found the amount of sunlight reaching the planet's surface had declined by 4 to 6 percent since 1960. Suddenly, around 1990, that appears to have reversed.
"When we looked at the more recent data, lo and behold, the trend went the other way," said Charles Long, senior scientist at the Department of Energy's Pacific Northwest National Laboratory.
Long participated in one of two studies that uncovered this recent trend using satellite data and ground-based monitoring. Both studies are detailed in the May 6 issue of the journal Science.
Thing is, nobody knows what caused the apparent shift. Could be changes in cloud cover, they say, or maybe reduced effects of volcanic activity, or a reduction in pollutants.
This lack of understanding runs deeper.
A third study in the journal this week, tackling a related aspect of all this, finds that Earth has reflected more sunlight back into space from 2000 to 2004 than in years prior. However, a similar investigation last year found just the opposite. A lack of data suggests it's impossible to know which study is right.
The bottom line, according to a group of experts not involved in any of these studies: Scientists don't know much about how sunlight interacts with our planet, and until they understand it, they can't accurately predict any possible effects of human activity on climate change.
Reflecting on the problem
The percentage of sunlight reflected back into space by Earth is called albedo. The planet's albedo, around 30 percent, is governed by cloud cover and the quantity of atmospheric particles called aerosols.
Amazingly, one of the best techniques for measuring Earth's albedo is to watch the Moon, which acts like a giant mirror. Sunlight that reflects off Earth in turn reflects off the Moon and can be measured from here. The phenomenon, called earthshine, was first noted by Leonardo da Vinci.
Albedo is a crucial factor in any climate change equation. But it is one of Earth's least-understood properties, says Robert Charlson, a University of Washington atmospheric scientist. "If we don't understand the albedo-related effects," Charlson said today, "then we can't understand the effects of greenhouse gases."
Charlson's co-authors in the analysis paper are Francisco Valero at the Scripps Institution of Oceanography and John Seinfeld at the California Institute of Technology.
Plans and missions designed to study the effects of clouds and aerosols have been delayed or cancelled, Charlson and his colleagues write.
To properly study albedo, scientists want to put a craft about 1 million miles out in space at a point where it would orbit the Sun while constantly monitoring Earth.
The satellite, called Deep Space Climate Observatory, was once scheduled for launch from a space shuttle in 2000 but has never gotten off the ground. Two other Earth-orbiting satellites that would study the albedo have been built but don't have launch dates. And recent budget shifts at NASA and other agencies have meant some data that's available is not being analyzed, Charlson and his colleagues contend.
While some scientists contend the global climate may not be warming or that there is no clear human contribution, most leading experts agree change is underway.
Grasping the situation is crucial, because if the climate warms as many expect, seas could rise enough to swamp many coastal communities by the end of this century.
Charlson says scientists understand to within 10 percent the impact of human activity on the production of greenhouse gases, things like carbon dioxide and methane that act like a blanket to trap heat and, in theory, contribute to global warming. Yet their grasp of the human impact on albedo could be off by as much as 100 percent, he fears.
One theory is that if humans pump out more aerosols, the small particles will work to reflect sunlight and offset global warming. Charlson calls that "a spurious argument, a red herring."
Greenhouse gases are at work trapping heat 24 hours a day, he notes, while sunlight reflection is only at work on the day side of the planet. Further, he said, greenhouse gases can stay in the atmosphere for centuries, while aerosols last only a week or so.
"There is no simplistic balance between these two effects," Charlson said. "It isn't heating versus cooling. It's scientific understanding versus not understanding."
I would like to see the amount of money we are spending on the CO2 aspect of the story and the amount spent on the solar aspect of climate change. | <urn:uuid:7bdb5df9-4eb6-4d45-9346-68e6669c89c4> | 3.421875 | 1,147 | Comment Section | Science & Tech. | 46.000662 |
Heat transfer coefficient
The heat transfer coefficient, in thermodynamics and in mechanical and chemical engineering, is used in calculating the heat transfer, typically by convection or phase transition between a fluid and a solid:

Q = h · A · ΔT

- Q = heat flow (heat input or heat lost), J/s = W
- h = heat transfer coefficient, W/(m2K)
- A = heat transfer surface area, m2
- ΔT = difference in temperature between the solid surface and surrounding fluid area
From the above equation, the heat transfer coefficient is the proportionality coefficient between the heat flux, that is heat flow per unit area, q/A, and the thermodynamic driving force for the flow of heat (i.e., the temperature difference, ΔT).
The heat transfer coefficient has SI units of watts per square meter kelvin: W/(m2K).
There are numerous methods for calculating the heat transfer coefficient in different heat transfer modes, different fluids, flow regimes, and under different thermohydraulic conditions. Often it can be estimated by dividing the thermal conductivity of the convection fluid by a length scale. The heat transfer coefficient is often calculated from the Nusselt number (a dimensionless number). There are also online calculators available specifically for heat transfer fluid applications.
An understanding of convection boundary layers is necessary to understanding convective heat transfer between a surface and a fluid flowing past it. A thermal boundary layer develops if the fluid free stream temperature and the surface temperatures differ. A temperature profile exists due to the energy exchange resulting from this temperature difference.
The heat transfer rate can then be written as

q = h · A · (Ts − T∞)

and, because heat transfer at the surface is by conduction,

q = −k · A · (∂T/∂y) evaluated at y = 0

These two terms are equal; thus

h = −k · (∂T/∂y)|y=0 / (Ts − T∞)

Making it dimensionless by multiplying by a representative length L,

h · L / k = −L · (∂T/∂y)|y=0 / (Ts − T∞)

The right hand side is now the ratio of the temperature gradient at the surface to the reference temperature gradient, while the left hand side is similar to the Biot modulus. This becomes the ratio of conductive thermal resistance to the convective thermal resistance of the fluid, otherwise known as the Nusselt number, Nu.
Alternative Method (A simple method for determining the overall heat transfer coefficient)
A simple method for determining an overall heat transfer coefficient that is useful to find the heat transfer between simple elements such as walls in buildings or across heat exchangers is shown below. Note that this method accounts only for conduction within materials; it does not take into account heat transfer through methods such as radiation. The method is as follows:

1 / (U · A) = 1 / (h1 · A1) + x_w / (k · A) + 1 / (h2 · A2)

- U = the overall heat transfer coefficient (W/m2 K)
- A = the contact area for each fluid side (m2) (with A1 and A2 expressing either surface)
- k = the thermal conductivity of the material (W/mK)
- h = the individual convection heat transfer coefficient for each fluid (W/m2 K)
- x_w = the wall thickness (m)

As the areas for each surface approach being equal, the equation can be written as the transfer coefficient per unit area as shown below:

1 / U = 1 / h1 + x_w / k + 1 / h2

NOTE: Often the value for x_w is referred to as the difference of two radii, where the inner and outer radii are used to define the thickness of a pipe carrying a fluid; however, this figure may also be considered as a wall thickness in a flat plate transfer mechanism or other common flat surfaces such as a wall in a building when the area difference between each edge of the transmission surface approaches zero.
In the walls of buildings the above formula can be used to derive the formula commonly used to calculate the heat through building components. Architects and engineers call the resulting values either the U-Value or the R-Value of a construction assembly like a wall. Each type of value (R or U) are related as the inverse of each other such that R-Value = 1/U-Value and both are more fully understood through the concept of an overall heat transfer coefficient described in lower section of this document.
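As a rough illustration of how the layer resistances combine into a U-value and R-value for a wall (conduction through the layers plus the two surface films, with radiation ignored as noted above), here is a short Python sketch; the layer thicknesses, conductivities, and surface coefficients are made-up example numbers, not values from this article:

```python
# Series resistances per unit area for a flat wall (assumed example layers)
h_inside = 8.0      # assumed inside surface convection coefficient, W/(m2 K)
h_outside = 25.0    # assumed outside surface convection coefficient, W/(m2 K)

# (thickness in m, thermal conductivity in W/(m K)) for each layer -- illustrative
layers = [(0.012, 0.25),   # plasterboard
          (0.100, 0.04),   # insulation
          (0.100, 0.77)]   # brick

r_total = 1.0 / h_inside + sum(x / k for x, k in layers) + 1.0 / h_outside
u_value = 1.0 / r_total   # W/(m2 K); R-value is its reciprocal

print(f"R-value = {r_total:.2f} m2 K/W, U-value = {u_value:.2f} W/(m2 K)")
```

Adding or removing a layer simply adds or removes one x/k term from the series, which is why architects can tabulate R-values per layer and sum them.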
Convective heat transfer Correlations
Although convective heat transfer can be derived analytically through dimensional analysis, exact analysis of the boundary layer, approximate integral analysis of the boundary layer and analogies between energy and momentum transfer, these analytic approaches may not offer practical solutions to all problems when there are no mathematical models applicable. As such, many correlations were developed by various authors to estimate the convective heat transfer coefficient in various cases including natural convection, forced convection for internal flow and forced convection for external flow. These empirical correlations are presented for their particular geometry and flow conditions. As the fluid properties are temperature dependent, they are evaluated at the film temperature , which is the average of the surface and the surrounding bulk temperature, .
Natural convection
External flow, Vertical plane
Churchill and Chu correlation for natural convection adjacent to vertical planes. NuL applies to all fluids for both laminar and turbulent flows. L is the characteristic length with respect to the direction of gravity, and RaL is the Rayleigh Number with respect to this length.
For laminar flows, in the range of RaL ≤ 10^9, the following equation can be further improved.
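The correlation equations themselves appear to have been lost from this copy of the text (they were likely rendered as images in the source). As a stand-in, here is a Python sketch of the vertical-plate calculation; the numerical constants come from the commonly published form of the Churchill and Chu correlation rather than from this page, and the air properties at the end are illustrative:

```python
def nu_churchill_chu(ra, pr):
    """Mean Nusselt number for natural convection on a vertical plate.

    Constants (0.825, 0.387, 0.492, 9/16, 8/27) are those of the widely
    cited Churchill-Chu correlation, quoted as valid over the full
    Rayleigh-number range for both laminar and turbulent flow.
    """
    term = 0.387 * ra**(1 / 6) / (1 + (0.492 / pr)**(9 / 16))**(8 / 27)
    return (0.825 + term)**2

# Example: air (Pr ~ 0.71) at Ra = 1e9 on a 1 m tall plate, k ~ 0.026 W/(m K)
ra, pr, length, k_fluid = 1e9, 0.71, 1.0, 0.026
nu = nu_churchill_chu(ra, pr)
h = nu * k_fluid / length   # heat transfer coefficient, W/(m2 K)
print(f"Nu = {nu:.0f}, h = {h:.1f} W/(m2 K)")
```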
External flow, Vertical cylinders
For cylinders with their axes vertical, the expressions for plane surfaces can be used provided the curvature effect is not too significant. This represents the limit where boundary layer thickness is small relative to cylinder diameter D. The correlations for vertical plane walls can be used when

D / L ≥ 35 / Gr_L^(1/4)

where Gr_L is the Grashof number.
External flow, Horizontal plates
W.H. McAdams suggested the following correlations. The induced buoyancy will be different depending upon whether the hot surface is facing up or down. For a hot surface facing up or a cold surface facing down,
For a hot surface facing down or a cold surface facing up,
The characteristic length is the ratio of the plate surface area to perimeter. If the plane surface is inclined at an angle θ from the vertical, the equations for a vertical plane by Churchill and Chu may be used for θ up to about 60°. When boundary layer flow is laminar, the gravitational constant g is replaced with g·cosθ for calculating Ra in the equation for laminar flow.
External flow, Horizontal cylinder
For cylinders of sufficient length and negligible end effects, Churchill and Chu has the following correlation for
External flow, Spheres
For spheres, T. Yuge has the following correlation. for Pr≃1 and
Forced convection
Internal flow, Laminar flow
Sieder and Tate has the following correlation for laminar flow in tubes where D is the internal diameter, μ_b is the fluid viscosity at the bulk mean temperature, μ_w is the viscosity at the tube wall surface temperature.
Internal flow, Turbulent flow
The Dittus-Boelter correlation (1930) is a common and particularly simple correlation useful for many applications. This correlation is applicable when forced convection is the only mode of heat transfer; i.e., there is no boiling, condensation, significant radiation, etc. The accuracy of this correlation is anticipated to be ±15%.
For a fluid flowing in a straight circular pipe with a Reynolds number between 10 000 and 120 000 (in the turbulent pipe flow range), when the fluid's Prandtl number is between 0.7 and 120, for a location far from the pipe entrance (more than 10 pipe diameters; more than 50 diameters according to many authors) or other flow disturbances, and when the pipe surface is hydraulically smooth, the heat transfer coefficient between the bulk of the fluid and the pipe surface can be expressed as:

h = (k / D_H) · Nu

- k - thermal conductivity of the bulk fluid
- D_H - hydraulic diameter
- Nu - Nusselt number
- Nu = 0.023 · Re^0.8 · Pr^n (Dittus-Boelter correlation)
- Pr - Prandtl number
- Re - Reynolds number
- n = 0.4 for heating (wall hotter than the bulk fluid) and 0.33 for cooling (wall cooler than the bulk fluid).
The fluid properties necessary for the application of this equation are evaluated at the bulk temperature thus avoiding iteration
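A short Python sketch of the estimate just described; the water properties plugged in at the end are rough illustrative values evaluated at the bulk temperature, and the Reynolds-number check simply mirrors the validity limits quoted above:

```python
def h_dittus_boelter(re, pr, k_fluid, d_hydraulic, heating=True):
    """Convective coefficient for turbulent pipe flow via Nu = 0.023 Re^0.8 Pr^n."""
    if not (1e4 <= re <= 1.2e5):
        raise ValueError("correlation quoted for Re between 10,000 and 120,000")
    n = 0.4 if heating else 0.33          # exponents as given in the text
    nu = 0.023 * re**0.8 * pr**n
    return nu * k_fluid / d_hydraulic     # h = (k / D_H) * Nu, in W/(m2 K)

# Illustrative: water near room temperature in a 25 mm pipe
h = h_dittus_boelter(re=5.0e4, pr=7.0, k_fluid=0.6, d_hydraulic=0.025, heating=True)
print(f"h ~ {h:.0f} W/(m2 K)")
```

The result lands inside the typical 500 to 10,000 W/(m2K) range for water quoted later in this article, which is a useful sanity check on such estimates.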
Forced convection, External flow
In analyzing the heat transfer associated with the flow past the exterior surface of a solid, the situation is complicated by phenomena such as boundary layer separation. Various authors have correlated charts and graphs for different geometries and flow conditions. For Flow parallel to a Plane Surface, where x is the distance from the edge and L is the height of the boundary layer, a mean Nusselt number can be calculated using the Colburn analogy.
Thom correlation
There exist simple fluid-specific correlations for heat transfer coefficient in boiling. The Thom correlation is for flow boiling of water (subcooled or saturated at pressures up to about 20 MPa) under conditions where the nucleate boiling contribution predominates over forced convection. This correlation is useful for rough estimation of expected temperature difference given the heat flux:
- is the wall temperature elevation above the saturation temperature, K
- q is the heat flux, MW/m2
- P is the pressure of water, MPa
Note that this empirical correlation is specific to the units given.
Heat transfer coefficient of pipe wall
The resistance to the flow of heat by the material of the pipe wall can be expressed as a "heat transfer coefficient of the pipe wall". However, one needs to select if the heat flux is based on the pipe inner or the outer diameter. Assuming the wall thickness is small compared with the pipe diameter, the wall can be treated as a flat plate and

h_wall = k / x

where k is the effective thermal conductivity of the wall material and x is the wall thickness.

If the above assumption does not hold, then the wall heat transfer coefficient can be calculated using the following expression (referenced to the inner diameter):

h_wall = 2k / (d_i · ln(d_o / d_i))

where d_i and d_o are the inner and outer diameters of the pipe, respectively.
The thermal conductivity of the tube material usually depends on temperature; the mean thermal conductivity is often used.
Combining heat transfer coefficients
For two or more heat transfer processes acting in parallel, heat transfer coefficients simply add:

h = h1 + h2 + ...

For two or more heat transfer processes connected in series, heat transfer coefficients add inversely:

1 / h = 1 / h1 + 1 / h2 + ...

For example, consider a pipe with a fluid flowing inside. The rate of heat transfer between the bulk of the fluid inside the pipe and the pipe external surface is:

q = (ΔT · A) / (1 / h + t / k)

- q = heat transfer rate (W)
- h = heat transfer coefficient (W/(m2·K))
- t = wall thickness (m)
- k = wall thermal conductivity (W/m·K)
- A = area (m2)
- ΔT = difference in temperature.
Overall heat transfer coefficient
The overall heat transfer coefficient is a measure of the overall ability of a series of conductive and convective barriers to transfer heat. It is commonly applied to the calculation of heat transfer in heat exchangers, but can be applied equally well to other problems.
For the case of a heat exchanger, U can be used to determine the total heat transfer between the two streams in the heat exchanger by the following relationship:

Q = U · A · ΔT_LM

- Q = heat transfer rate (W)
- U = overall heat transfer coefficient (W/(m²·K))
- A = heat transfer surface area (m2)
- ΔT_LM = log mean temperature difference (K)
The overall heat transfer coefficient takes into account the individual heat transfer coefficients of each stream and the resistance of the pipe material. It can be calculated as the reciprocal of the sum of a series of thermal resistances (but more complex relationships exist, for example when heat transfer takes place by different routes in parallel):

1 / (U · A) = 1 / (h1 · A1) + R + 1 / (h2 · A2)

- R = Resistance(s) to heat flow in pipe wall (K/W)
- Other parameters are as above.
The heat transfer coefficient is the heat transferred per unit area per kelvin. Thus area is included in the equation as it represents the area over which the transfer of heat takes place. The areas for each flow will be different as they represent the contact area for each fluid side.
The thermal resistance due to the pipe wall is calculated by the following relationship:

R = x / (k · A)
- x = the wall thickness (m)
- k = the thermal conductivity of the material (W/(m·K))
- A = the total area of the heat exchanger (m2)
This represents the heat transfer by conduction in the pipe.
As mentioned earlier in the article the convection heat transfer coefficient for each stream depends on the type of fluid, flow properties and temperature properties.
Some typical heat transfer coefficients include:
- Air - h = 10 to 100 W/(m2K)
- Water - h = 500 to 10,000 W/(m2K)
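As a rough worked example of how the series resistances and the log mean temperature difference combine, here is a hedged Python sketch; the film coefficients are picked from the typical ranges just listed, while the wall properties, area, and terminal temperature differences are invented for illustration:

```python
import math

# Illustrative film coefficients (within the typical water range above) and a thin steel wall
h_hot, h_cold = 4000.0, 1500.0         # W/(m2 K)
wall_thickness, k_wall = 0.002, 16.0   # m and W/(m K), assumed stainless steel
area = 12.0                            # m2, assumed heat transfer area

# Overall coefficient from series resistances (areas taken equal for simplicity)
u_overall = 1.0 / (1.0 / h_hot + wall_thickness / k_wall + 1.0 / h_cold)

# Log mean temperature difference for assumed terminal temperature differences
dt1, dt2 = 40.0, 15.0                  # K at the two ends of the exchanger
lmtd = (dt1 - dt2) / math.log(dt1 / dt2)

q = u_overall * area * lmtd            # total heat transfer rate, W
print(f"U = {u_overall:.0f} W/(m2 K), LMTD = {lmtd:.1f} K, Q = {q/1000:.0f} kW")
```

Note how the overall coefficient ends up closest to the smallest film coefficient in the series; the weakest link dominates the resistance chain.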
Thermal resistance due to fouling deposits
Surface coatings can build on heat transfer surfaces during heat exchanger operation due to fouling. These add extra thermal resistance to the wall and may noticeably decrease the overall heat transfer coefficient and thus performance. (Fouling can also cause other problems.)
The additional thermal resistance due to fouling can be found by comparing the overall heat transfer coefficient determined from laboratory readings with calculations based on theoretical correlations. It can also be evaluated from the development of the overall heat transfer coefficient with time (assuming the heat exchanger operates under otherwise identical conditions). This is commonly applied in practice; the following relationship is often used:

1 / U_exp = 1 / U_pre + R_f

- U_exp = overall heat transfer coefficient based on experimental data for the heat exchanger in the "fouled" state,
- U_pre = overall heat transfer coefficient based on calculated or measured ("clean heat exchanger") data,
- R_f = thermal resistance due to fouling.
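A minimal sketch of that bookkeeping; both U values are invented stand-ins for a design ("clean") calculation and a later measurement on the fouled exchanger:

```python
u_clean = 950.0    # W/(m2 K), overall coefficient predicted for the clean exchanger (assumed)
u_fouled = 700.0   # W/(m2 K), overall coefficient inferred from later plant data (assumed)

# Rearranging 1/U_fouled = 1/U_clean + R_f gives the fouling resistance per unit area
r_fouling = 1.0 / u_fouled - 1.0 / u_clean   # m2 K/W
print(f"R_f = {r_fouling:.5f} m2 K/W")
```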
See also
- Convective heat transfer
- Heat sink
- Churchill-Bernstein Equation
- Heat pump
- Heisler Chart
- Thermal conductivity
- Fourier number
- Nusselt number
- James R. Welty; Charles E. Wicks; Robert E. Wilson; Gregory L. Rorrer., "Fundamentals of Momentum, Heat and Mass transfer" 5th edition, John Wiley and Sons
- S.S. Kutateladze and V.M. Borishanskii, A Concise Encyclopedia of Heat Transfer, Pergamon Press, 1966.
- F. Kreith (editor), "The CRC Handbook of Thermal Engineering", CRC Press, 2000.
- W.Rohsenow, J.Hartnet, Y.Cho, "Handbook of Heat Transfer", 3rd edition, McGraw-Hill, 1998.
- This relationship is similar to the harmonic mean; however, note that it is not multiplied with the number n of terms.
- Coulson and Richardson, "Chemical Engineering", Volume 1,Elsevier, 2000
- Turner C.W.; Klimas S.J.; Bbrideau M.G., "Thermal resistance of steam-generator tube deposits under single-phase forced convection and flow-boiling heat transfer", Canadian Journal of Chemical Engineering, 2000, vol. 78, No 1, pp. 53-60 | <urn:uuid:0b25143b-f51f-4198-bfc6-7362a079099e> | 4.03125 | 3,097 | Knowledge Article | Science & Tech. | 36.275179 |
Ocean Temps Reach Record Highs in Northeast
During the first six months of 2012, sea surface temperatures in the Northeast Shelf Large Marine Ecosystem were the highest ever recorded, according to the latest Ecosystem Advisory issued by NOAA’s Northeast Fisheries Science Center (NEFSC). Above-average temperatures were found in all parts of the ecosystem, from the ocean bottom to the sea surface and across the region, and the above average temperatures extended beyond the shelf break front to the Gulf Stream.
The annual 2012 spring plankton bloom was intense, started earlier and lasted longer than average. This has implications for marine life from the smallest creatures to the largest marine mammals like whales. Atlantic cod continued to shift northeastward from its historic distribution center.
Read more: http://www.laboratoryequipment.com/news/2012/09/ocean-temps-reach-record-highs-northeast | <urn:uuid:faa59d21-6910-4734-bd2a-82ba4c80aec3> | 2.75 | 188 | Truncated | Science & Tech. | 36.672143 |
One of the areas of physics that has been getting a lot of publicity is the area of metamaterials/left-handed materials that produce a negative index of refraction. The "sexy" aspect of this study is the possibility of using such left-handed materials to "cloak" an object from electromagnetic radiation, even if only within a limited bandwidth. I've highlighted several of these cloaking reports previously (read here, here, and here).
Now comes something that throws cold water over everything. A paper that appeared in PRL recently pointed out that while any cloaking device, even a perfect one, cannot be detected via EM radiation, shooting charged particles at them allows them to be detected!
Abstract: A perfect invisibility cloak is commonly believed to be undetectable from electromagnetic (EM) detection because it is equivalent to a curved but empty EM space created from coordinate transformation. Based on the intrinsic asymmetry of coordinate transformation applied to motions of photons and charges, we propose a method to detect this curved EM space by shooting a fast-moving charged particle through it. A broadband radiation generated in this process makes a cloak visible. Our method is the only known EM mechanism so far to detect an ideal perfect cloak (curved EM space) within its working band.
So all you need to do is shoot electrons and, voila, you can detect these cloaked objects. Of course, this doesn't work too well in air since, depending on the electrons' energy, the mean free path of the electrons in air can be quite limited. But in the vacuum of outer space, that's a different story. Just think, we could have told Capt. Kirk how he could have detected those cloaked Klingon warbirds! It would have been so easy!
B. Zhang and B.-I. Wu, Phys. Rev. Lett. v.103, p.243901 (2009). | <urn:uuid:79d16225-f865-44bf-b3de-a1cd77a07a1d> | 2.78125 | 387 | Personal Blog | Science & Tech. | 53.586312 |
While not all manmade tunnels are tiled, this type of surface is used for underground roadways and railways all around the world. As it turns out, there are some very good reasons for this.
The main reason why tunnels are tiled is the same reason why most showers are tiled. Smooth ceramic tile is fairly easy to clean because particles of dirt and grime generally don't get imbedded in the material. Instead, most particles collect on top of the tile, and will wash away with detergent and water. This resistance to grime (as well as to moisture) makes tile an extremely durable and easily cleaned material. Even under harsh conditions, it takes a long time for tile to deteriorate, and it's pretty easy to get all of the grime off.
This is especially important in roadway tunnels because of all the exhaust emitted by the cars passing through. These fumes, along with any dirt or roadway salt kicked up by the cars' tires, get trapped inside the tunnel, leading to a constant accumulation of gunk on the walls. When things get too dirty, city workers can use high-pressure water hoses or extendible brushes to clean off the tile.
Tiles also work well in tunnels because you can easily arrange them to cover pretty much any shape. If you had a cylindrical tunnel, for example, you could attach individual tiles all along the wall and ceiling as if the surface were straight. All of the individual tiles would be flat, but each one would sit at a slightly different angle than the tiles below it and above it. Overall, then, the flat tiles would form a curved surface. To cover the same space with a metal, you would need to prefabricate a metal sheet so that it curved in exactly the same way as the walls and ceiling. You would also need to design metal sheets to fit around any unusual ledges or corners, while tile would adapt easily to these surface areas.
Tiles are also easy and inexpensive to replace should they become worn or damaged. Instead of re-installing an entire sheet of metal, you can simply pop out any busted tiles and put in new ones. It's also simple to see when a tile needs to be replaced, even from a good distance; the tile will have obvious cracks in it. This makes it much easier to fix any damage to the tunnel surface before it becomes a problem.
Tile is also a good reflective surface. In roadway tunnels, this makes for safer driving conditions using less artificial lighting. Scattered fluorescent lights, as well as the cars' headlights, reflect light off the walls, illuminating all areas of the tunnel. This is just an added bonus, of course: There are many other materials that reflect light more effectively, but they are not nearly as durable.
Weird creatures trapped in a sunless, frigid, briny dungeon deep beneath the ice at Blood Falls — has a nice horror-movie ring to it, doesn’t it? And for microbes, they have quite the exotic back story. They’ve been living in dark pockets of an ancient saltwater lake trapped under the advancing Taylor Glacier in Antarctica between 2 million and 4 million years ago.
How did the microbes survive all that time without light, photosynthesis, or outside nutrients? Jill Mikucki, one of the researchers who reported the microbes' existence in 2007, now reports on their unusual survival mechanism. In the current issue of Science, Dr. Mikucki and other researchers conclude that the microbes metabolize organic matter in the water by using sulphate to facilitate reduction of iron in the bedrock — the same iron that helps produce the rusty color of Blood Falls when the water from the subglacial pools reaches the snout of the glacier.
The microbes have been living in pools of brine beneath 400 meters of glacial ice, according to Dr. Mikucki, a research associate in the department of earth sciences at Dartmouth. I asked her to explain to Lab readers the chief significance of these discoveries. Her answer:
Our work has demonstrated how microbes gain energy and grow below ice in the absence of sunlight for extended periods of time. Subglacial environments are difficult to access and therefore remain largely unexplored. The fact that we were able to describe how microbes can eke out a living below ice is rather interesting considering that we only recently realized life existed below glaciers.
When subglacial brine is actively discharged it provides a window into an ecosystem that has been cut off from the surface for possibly millions of years. The microbes have remained viable despite these extreme circumstances. There are also interesting implications for how life might have survived during past so-called 'Snowball Earth' events, or below the ice-caps of Mars, or on Europa.
I welcome your thoughts on this finding. Or any idea for the plot of that horror movie at Blood Falls. | <urn:uuid:cc8ad878-b645-4e39-8b1d-13e2ba3f5942> | 3.734375 | 426 | Personal Blog | Science & Tech. | 39.161667 |
presence in clay minerals
The water adsorbed between layers or in structural channels may further be divided into zeolitic and bound waters. The latter is bound to exchangeable cations or directly to the clay mineral surfaces. Both forms of water may be removed by heating to temperatures on the order of 100°–200° C and in most cases, except for hydrated halloysite, are regained readily at ordinary...
Climate Change FAQ's
This is the last point of inflection on the T(h) curve and the region of lowest temperature (in the mesopause it can drop to 150 K). Above it the temperature increases again: we enter the realm of the thermosphere. In the thermosphere the temperature first rises sharply; within some 30-40 km it climbs back through the whole 150-300 K range found below and keeps rising. At an altitude of 150 km the temperature has already passed 500 K, and what happens next depends on whether it is day or night. During the day the temperature continues to rise to 1500-2000 K; at night the rise is much smaller, to about 700-1000 K. In both cases, from a height of approximately 200-250 km the temperature stops rising and then remains constant: we have entered the region of the isotherm. What lies above? It is usually said that the thermosphere gives way to the exosphere, although that term arose from dividing the atmosphere into "spheres" not by temperature but by the dominant process that determines the composition of the atmosphere; stratification by composition is much simpler than by temperature. From Earth's surface up to an altitude of 105-110 km the density of atmospheric gas is high enough that all motion in the atmosphere is motion of the atmospheric gas as a whole. It is impossible to single out the separate motion of, say, nitrogen and oxygen molecules: particles of different kinds are continuously mixed. This process is called turbulent mixing, or turbulent diffusion. Turbulent diffusion tends to keep the composition of the atmospheric gas constant with altitude, which is why up to the specified height the composition of the atmospheric gas remains unchanged, subject only to variations in the relative concentrations of chemically active minor components such as nitrogen oxide, ozone, and so on. The region of the atmosphere from Earth's surface up to 105-110 km is called the homosphere, that is, the region of constant composition. Above it ends the realm of turbulent diffusion, which mixes all of the gases.
These web pages provide an explanation of forces in accelerating reference frames. Topics include centrifugal force and what is felt in vehicles that are speeding up or slowing down. This is part of "From Stargazers to Starships", an extensive web site that introduces physics and astronomy using topics in space exploration and space science.
Joining up nanocircuits
Posted: Tue Aug 19, 2008 11:25 am
A team of scientists based in the UK and Germany have covalently bonded strings of porphyrin molecules on a gold surface - a step forward in the quest to develop nano-electronics.
Other researchers have linked more than two molecules on surfaces as supramolecular structures before, but the patterns were held together only by non-covalent methods, such as hydrogen bonding and van der Waals interactions.
Non-covalent links are reversible and relatively fragile, says Stefan Hecht, chair of organic chemistry and functional materials at Humboldt University in Berlin and a member of the team. But covalent bonds are more stable and can transport electrical charge.
The team uses porphyrins, flat square-shaped molecules with four phenyl arms, one extending from each edge. The molecules are synthesized so that some or all the arms have a bromine atom at the end. The bromine atoms are removed by heating the molecules, leaving behind carbon radicals that combine through covalent carbon-carbon bonds, linking the porphyrin molecules.
Two methods were used to activate the molecular building blocks. The first involves depositing intact molecules onto the gold surface and then heating. In the second method, the molecules were activated in the evaporator and deposited on the surface, which is at room temperature. In both cases, the activated building blocks are covalently connected directly on the surface upon thermal diffusion.
Hecht says the inspiration for the research project came from earlier work by physicists constructing covalent bonds between two molecules using a scanning tunnelling microscope (STM).
'Our idea was to take bigger molecules and extend this to something much more sophisticated,' he said. The success of the experiments is a first step toward eventual assembly of stable nano-architectures that could be used for molecular electronics and sensing devices, Hecht added.
Neil Champness, chair of chemical nanoscience at the University of Nottingham, called the team's success 'a very significant breakthrough'.
'They have an incredible level of accuracy to determine the relative positions of the molecules they are linking,' said Champness. 'I know a variety of groups trying to do what has been done.'
Although the team used porphyrins and a gold surface, Champness suspects that the technique eventually could be used with other sorts of molecules and surfaces.
Potential applications for molecular electronics would include wires, transistors, capacitors, and magnetic storage, he said.
Champness believes the first devices could be available in as little as 10 years and may eventually lead to the development of tiny computers. 'That is the Star Trek view of the world,' he admitted. 'We are a long, long way from that and [this] paper is a step in that direction.' | <urn:uuid:79ba81ea-8151-4371-bf93-6cb04d44a3c9> | 3.4375 | 610 | Comment Section | Science & Tech. | 29.762666 |
Virus and Cells
Name: Nikki D.
Date: April 2004
Based on the structure of the plasma membrane and
transport mechanisms, how does an airborne virus enter the cell of the
Look up "endocytosis" in a good biology textbook. It's the process where bits of the cell
membrane are pinched off into the interior of the cell. It's a way that cells bring needed
things into the cell. Often, viruses hitch a ride on this process, binding to a part of
the membrane (or a protein in the membrane) that is going into the cell.
Paul Mahoney, PhD
On the surface of every cell are receptor-type molecules, kind of like "docking stations". They are proteins so they have a particular 3D shape. Some of these are attached to membrane transport proteins, like gates in the membrane. If the virus can adapt to be shaped like these, they can be taken inside the cell. Some drill their way through.
Many viruses have an outer coat that is made from the plasma membrane of the cells in which they were made...handy for meeting up and joining with other cell membranes...rather like two soap bubbles joining, one small and one large.
Update: June 2012 | <urn:uuid:dc6e799a-239f-4ff6-a7b0-aa43177ac175> | 3.84375 | 273 | Knowledge Article | Science & Tech. | 60.663658 |
By Brendan Borell of Nature magazine
One in five of the world’s invertebrate species are threatened with extinction, according to the latest report from the Zoological Society of London (ZSL).
From the checkerspot butterfly to the giant squid, spineless creatures are thought to represent around 99% of biodiversity on Earth. However, until now, scientists have never attempted a comprehensive review of their conservation status. In fact, fewer than 1% of invertebrates had been assessed by the International Union for Conservation of Nature (IUCN), which has listed threatened species on its Red List since 1963.
“When I first took a look at the Red List, it was biased towards larger, more charismatic species,” says Ben Collen, a biodiversity scientist at the ZSL Institute of Zoology in London, who coordinated the invertebrate study and co-edited the report. “The project we’ve been running for the past five years tries to put invertebrates on the Red List in a systematic way.”
Collen and his colleagues conclude that the greatest threat is to freshwater invertebrates, including crabs and snails, followed by terrestrial and marine invertebrates. More mobile animals, such as butterflies and dragonflies, tended to have the least risk of extinction.
The report estimates that 34% of freshwater invertebrates could be under threat, including more than half of the world's freshwater snails and slugs. In the southeastern United States, which is a freshwater diversity hotspot, almost 40% of molluscs and crayfish could be wiped out owing to the effects of dams and pollution. In the oceans, almost one-third of reef-building corals are endangered largely because of climate change, which causes coral bleaching and ocean acidification.
Overall, habitat loss, pollution and invasive species represented the biggest threats to invertebrate diversity around the world. The proportion of species at risk (one-fifth) is similar to findings in vertebrates and plants. The report will be formally presented on 7 September at the World Conservation Congress in Jeju, South Korea, where conservationists, scientists and government leaders will meet to discuss conservation and development issues.
Because of the lack of information on threats to invertebrates and because so many species are thought to remain unidentified, Collen and his collaborators used a 'sampled approach' to evaluate 1,500 known species in each of 4 representative taxonomic groups — freshwater molluscs, dragonflies, dung beetles and butterflies. They used standard IUCN criteria that take into account population trends and the geographic ranges of species. They also included comprehensive assessments of crayfish, freshwater crabs, reef-building corals and cephalopods in the evaluation.
Overall, Collen says that he and his team queried thousands of experts by e-mail and convened a series of workshops to help gather data on the prevalence of invertebrate species. They supplemented this with data from independent taxonomic and regional assessments, such as the IUCN's Pan-Africa Freshwater Biodiversity Assessment, bringing the total number of invertebrate species in their study to over 12,000.
Although specialists applaud the focus on under-appreciated taxa, some feel that it is premature to draw conclusions from a limited sample. “I hope that it gets the attention from policy-makers that it deserves,” says Terry Erwin, an entomologist at the Smithsonian Institution in Washington DC. “The numbers they use are good enough for public consumption, although not very accurate scientifically.” | <urn:uuid:6f36e12a-181f-4041-a237-95734a6312de> | 3.6875 | 743 | Truncated | Science & Tech. | 26.492992 |
SVG: Modularized and mobile (1/2) - exploring XML
SVG: Modularized and mobile
Mid-February the W3C released last call drafts of SVG 1.1 and SVG Mobile. SVG 1.1 is a modularized form of SVG 1.0, specified in W3C XML Schema rather than DTDs. By dividing up the functionality of SVG into modules, it facilitates the production of SVG profiles. Mobile SVG creates two such profiles: SVG Tiny, which is mainly aimed at cellphones, and SVG Basic for Personal Digital Assistants (PDAs).
The scope of SVG
SVG is a language for describing two-dimensional graphics in XML. SVG allows for three types of graphic objects: vector graphic shapes (e.g., paths consisting of straight lines and curves), images and text. Graphical objects can be grouped, styled, transformed and composited into previously rendered objects. The feature set includes nested transformations, clipping paths, alpha masks, filter effects and template objects.
SVG drawings can be interactive and dynamic. Animations can be defined and triggered either declaratively (i.e., by embedding SVG animation elements in SVG content), as shown in column 49, or via scripting.
Sophisticated applications of SVG are possible through a supplemental scripting language that accesses the SVG Document Object Model (DOM), which provides complete access to all elements, attributes and properties. A rich set of event handlers such as onmouseover and onclick can be assigned to any SVG graphical object. Because SVG leverages and remains compatible with other Web standards, features like scripting can be applied to XHTML and SVG elements simultaneously within the same Web page.
What's new in version 1.1?
SVG 1.0 is a W3C Recommendation. This version 1.1 provides a modularization of the SVG 1.0 language which can be used to build profiles of SVG such as SVG Mobile, discussed below. There are also a small number of new features, including:
- the 'solidColor' element
- a description of metadata that can be used in SVG documents that use a geographic coordinate system
- simple text wrapping
- advanced compositing and blending operations
The modularization of SVG is a decomposition of SVG 1.0 and a small set of extra features into a collection of abstract modules that provide specific types of functionality. These modules may be combined with each other and with other modules to create SVG subset and extension document types that qualify as members of the SVG-family of document types.
Identifiers for SVG files have been suggested as follows:
| Identifier | Value |
| --- | --- |
| MIME type under registration | image/svg+xml |
| Suggested file extension, Mac creator | .svg, "svg " |
| Suggestion for compressed files | .svgz, "svgz" |
| Public Identifier 1.1 | PUBLIC "-//W3C//DTD SVG 1.1//EN" |
| System Identifier 1.1 | http://www.w3.org/TR/2002/WD-SVG11-20020215/DTD/svg11.dtd |
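For orientation only (this example is mine, not part of the specification text), a minimal SVG 1.1 document using the suggested identifiers could look like the following; the circle and its onclick handler are arbitrary illustrations of the markup and of DOM scripting:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
  "http://www.w3.org/TR/2002/WD-SVG11-20020215/DTD/svg11.dtd">
<svg xmlns="http://www.w3.org/2000/svg" version="1.1" width="200" height="200">
  <!-- a red circle that turns blue when clicked, via the SVG DOM -->
  <circle cx="100" cy="100" r="50" fill="red"
          onclick="evt.target.setAttribute('fill', 'blue')"/>
</svg>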
Compatibility with other Standards
SVG leverages and integrates with other W3C specifications and standards efforts. By leveraging and conforming to other standards, SVG becomes more powerful and makes it easier for users to learn how to incorporate SVG into their Web sites.
The following describes some of the ways in which SVG maintains compatibility with, leverages and integrates with other W3C efforts:
- SVG is an application of XML and is compatible with the XML 1.0 Recommendation.
- SVG is compatible with namespaces.
- SVG utilizes XLink for URI referencing and requires support for base URI specifications defined in XML-Base.
- SVG's syntax for referencing element IDs is a compatible subset of the ID referencing syntax in XPointer.
- SVG content can be styled by either CSS or XSL.
- SVG includes a complete Document Object Model level 1 and incorporates many of the facilities described in DOM level 2, including the CSS object model and event handling.
- SVG's animation features are modeled after Synchronized Multimedia Integration Language (SMIL) Animations.
- SVG attempts to achieve maximum compatibility with both HTML 4 and XHTML 1.0.
- In environments which support DOM2 for other XML grammars (e.g., XHTML) and which also support SVG and the SVG DOM, a single scripting approach can be used simultaneously for both XML documents and SVG graphics, in which case interactive and dynamic effects will be possible on multiple XML namespaces using the same set of scripts.
Let's look at two profiles of SVG modules.
Produced by Michael Claßen
Created: Mar 04, 2002
Revised: Mar 04, 2002 | <urn:uuid:f7c46298-b188-4e3a-91d7-f453f2616e6c> | 2.703125 | 1,001 | Knowledge Article | Software Dev. | 54.959285 |
Bug of the Week
Sowbug (Order Isopoda)
Lots of regional names for these guys: sowbug (Midwest U.S.), pillbug (Great Lakes region and northeast U.S.), potato bug (Great Lakes and Intermountain West), roly-poly (most common in the southeastern U.S.) and wood louse (scientists).
The crater-like surface being negotiated by the sowbug is the plaster wall directly in back of the BugLady’s computer. It was taken in the third week of February. The BugLady did not ask whence it came or where it was going and doesn’t really want to know; there’s plenty of sowbug food here, and welcome to it.
These “bugs” are not insects (Class Insecta) but are in the order Isopoda (“equal legs”) (Class Crustacea), distantly related to lobsters, shrimps, crayfish and crabs. They like the dark and damp and are found under leaves, logs, flowerpots, etc. They breathe through gill-like organs that have to be kept damp (90% humidity), and they can sense when the humidity is too low and will seek out spots that are damper.
The young hatch from eggs carried by the female in a brood sac. They hatch looking like pale “mini-me’s” and simply grow instead of metamorphosing (yes, an actual word!). They have 6 pairs of legs until the first time they molt, when they get a 7th pair. They are odd in that they molt in stages, so if you find a two-toned sowbug, it’s because the skin on its anterior (front) half is new, but the skin on its posterior is older (Hey! Settle down! It just hasn’t been shed yet!). After molting, it tries to puff its soft skin up some so that when its armor hardens, there will be a little more room inside.
Sowbugs are at home indoors or out, summer or winter.
They are scavengers, eating organic matter indiscriminately – plant or animal. They turn larger pieces of plants into much smaller ones, thus providing food for much smaller organisms. Some do eat young plants and so are considered pests. Some are considered pests because people get startled when they turn over a flowerpot and eighty-gazillion pillbugs scatter like so many vampires avoiding the sun (but, as the Bug Lady can testify, the true champs for “Creepy-ness Factor” are the herds of earwigs that inhabit her mailbox or that drop out of the touch pad of the garage door opener).
Despite their defenses - when they roll up and tuck in their legs and antennae, their armored dorsal shells protect their proverbial soft underbellies, plus, many species can resort to chemical warfare by producing a noxious substance in glands along the sides of their bodies - pill bugs are eaten by spiders and birds.
The Bug Lady heard a cautionary tale about a sowbug and a naturalist (who might have been showing off just a wee bit with a class). He ate a sowbug, to demonstrate something, and the sowbug spent the next four hours – despite subsequent consumption of water, bread and other edibles (by the naturalist), trying to climb up his esophagus. Stronger than stomach acid!
Check out Life in a Bucket of Soil by Alvin and Virginia Silverstein. | <urn:uuid:9414482b-8558-433a-83af-f7723cd47861> | 2.828125 | 738 | Knowledge Article | Science & Tech. | 61.438308 |
Kepler's Supernova, from Wikimedia Commons
We are all made of stardust
I am still mulling over the things we learned and the things we saw at our star party at Leasburg Dam State Park last week. The instructor explained that we actually are made of stardust, because most elements on earth were created in the hearts of stars. I checked further when I got home (and once I was thawed out from our chilly evening in the dark) and found this information:
... the heaviest elements -- such as gold, lead and uranium -- were produced in a supernova explosion during the cataclysmic end of a huge star's life, says LSU physicist Edward Zganjar (pronounced Skyner).
"Those elements were ejected into space by the force of the massive explosion, where they mixed with other matter and formed new stars, some with planets such as earth. That's why the earth is rich in these heavy elements. The iron in our blood and the calcium in our bones were all forged in such stars. We are made of stardust," Zganjar said. (quote is from Science Daily, 1999.) | <urn:uuid:9801580c-e0e3-4400-a4d1-65eaf1a20338> | 3.046875 | 235 | Personal Blog | Science & Tech. | 58.977518 |
New Horizons mission
New Horizons is a NASA mission to explore Pluto and other distant objects in the Solar System. The spacecraft was launched in 2006 and passed Jupiter in March 2007. As of September 2008 it has passed beyond Saturn's orbit, and is expected to reach Pluto in 2015.
It is the first mission ever launched to explore Pluto. The mission's primary objectives are to study the structure of Pluto and its moon Charon, and learn about their soil and atmospheres. It will also study the Kuiper Belt. | <urn:uuid:24765af6-1c90-47f0-b892-0a3d0836b5e9> | 3.390625 | 105 | Knowledge Article | Science & Tech. | 62.216658 |
Water is H2O. You know that. Well, sometimes water breaks apart into two halves: a hydrogen ion (H+) and a hydroxide ion (OH-). Notice that if you put the hydrogen and hydroxide ions back together, you will restore H2O (2 hydrogen atoms and one oxygen atom). If you go looking around in web pages, you will see that the hydrogen ion is also often referred to as a proton. Since the hydrogen ion often attaches to another water molecule, it often makes H3O+, called a hydronium ion... I only mention this in case you see it on the web.
When pure water exists, its water molecules are in a constant state of splitting and rejoining. However, whenever a water molecule splits, the number of hydrogen ions and hydroxide ions will always remain the same. In this animation, I drew three water molecules. During the course of the animation, each of the water molecules breaks. But, even when all are broken, the concentration (represented by brackets, [ ]) of hydrogen ions remains the same as the concentration of hydroxide ions. In this way, the water is balanced in the number of hydrogen and hydroxide ions. When this balance exists, as in pure water, the pH of the solution is neutral.
Please note that in addition to a neutral pH, the solution carries no electrical charge. I only bring this up because for some reason, students get electrical neutrality confused with pH neutrality. They are different. But in all of the examples I will be giving you on this page, the solution carries absolutely no electrical charge. Even the strong acids and bases I show you will be electrically neutral. OK?
Getting back to pH... what I mean by "water balance" is that the concentration of hydrogen ions is the same as the concentration of hydroxide ions ([H+] = [OH-]). Whenever this balance is tipped, the pH is no longer neutral. Take a look at the typical pH scale:
A solution with a pH from 0 to 6.9 is an acid, while a solution with a pH from 7.1 to 14 is a base (can also be called an "alkaline" solution). Acids and bases do not have an even balance of hydrogen ions with hydroxide ions. Acids have more hydrogen ions, while bases have more hydroxide ions. This has nothing to do with electrical charges! Please note, that when a solution becomes more acidic, its pH decreases, while when it becomes more basic, its pH increases; the terms "increase" and "decrease" when used with pH only refer to pH number, and the more acidic the pH, the lower the number.
Let me summarize in this table:
| Neutral pH | Acidic pH | Basic pH |
| --- | --- | --- |
| [H+] = [OH-] | [H+] > [OH-] | [H+] < [OH-] |
| pH = 7 | low pH number | high pH number |
The pH scale is odd. Who ever heard of a scale going from 0 to 14? Why is 7 neutral? The reason it is so odd is because the concentration of these ions is measured on a logarithmic scale, not a regular scale. I'm not sure how much you know about logarithmic scales, probably very little... and you don't need to understand it just to understand pH... so, to skip past the details, you will just need to understand that every step of the pH scale represents a tenfold change in ion concentration. That is, a pH of 5 has ten times as many hydrogen ions in it as a pH of 6.
There's a neat little JAVA applet at a website where you can Play in the Acid-Base Pool. There you can see that if you double or triple the concentration of hydrogen ions, you still won't get the pH to change a full step. Try out that applet just to make this little idea sink in.
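To make the logarithmic idea concrete, here is a small worked example (not part of the original page); it uses the standard definition that pH is the negative base-10 logarithm of the hydrogen ion concentration in moles per litre:

import math

def pH(hydrogen_ion_concentration):
    # pH = -log10([H+]), with [H+] in moles per litre
    return -math.log10(hydrogen_ion_concentration)

print(pH(1e-7))  # pure water: 7.0 (neutral)
print(pH(1e-6))  # ten times more hydrogen ions: 6.0, one full pH step more acidic
print(pH(2e-7))  # merely doubling the hydrogen ions only moves the pH to about 6.7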
Well, the one solution for which pH is defined in our book is blood. The pH of blood should always be 7.4. When the pH changes from 7.4 at all, we start to have terrible troubles. If the pH of blood gets more acidic, that is called acidosis. Acidosis is when blood pH drops below 7.35. However, if blood pH drops below 6.8, the person with acidosis dies. Our bodies cannot tolerate blood pH lower than 6.8. And if our blood pH gets more basic, or alkaline, that is called alkalosis. Alkalosis is when the blood pH rises above 7.45. But if it rises above 8.0, the affected person dies.
You see, our bodies are extremely sensitive to blood pH. Any blood pH more acidic than 6.8 or more basic than 8.0 causes death. You can imagine, I hope, that if a blood pH of 6.75 is deadly, it cannot feel too good to have a blood pH of 7.0. Right? It also cannot feel too good to have a blood pH of 7.8. Right? We are so sensitive to blood pH that we have immediate symptoms with either acidosis or alkalosis.
[Table in the original: some acidosis symptoms and some alkalosis symptoms]
Certain molecules give off hydrogen ions or hydroxide ions when put into solution. For example, hydrochloric acid (HCl) tends to give off hydrogen ions. I have shown this in the animation here. HCl dissociates in water into hydrogen ions and chloride ions. For every hydrogen ion there is one chloride ion, so there is no electrical charge build-up. But, suddenly, the water isn't balanced anymore. Instead, there are a lot more hydrogen ions than hydroxide ions. This larger concentration of hydrogen ions is what makes this solution an acid.
To the left is a similar animation, but this time, I have made a base. You see, if you add sodium hydroxide (NaOH) to a solution, it dissociates. Upon dissociation, NaOH splits into a sodium ion and a hydroxide ion. When this happens, for every sodium ion there is a hydroxide ion, so there is no electrical charge generated. But, the increase in hydroxide ions causes there to be more hydroxide ions than hydrogen ions. For this reason, this solution is a base.
There are many molecules that can give off hydrogen ions or hydroxide ions. There are also molecules that make acids or bases by sucking up hydrogen ions or hydroxide ions... Let me try to explain.
If you add a molecule that tends to suck up hydrogen ions into a solution, what is going to happen to the pH of the solution? To answer this, first consider what will happen to the hydrogen ion concentration... will it increase or decrease? If hydrogen ions are sucked up by the newly added molecule, the hydrogen ion concentration will decrease, because fewer hydrogen ions can be floating around. If the hydrogen ion concentration decreases, that will tend to make the solution more basic. | <urn:uuid:d201724f-9e5d-4323-a858-3bf3cb10aeb1> | 3.671875 | 1,477 | Knowledge Article | Science & Tech. | 64.919325 |
The internal wave structure under the shore fast ice of McMurdo Sound was surveyed. An internal wave is a wave propagating along a density stratification in the water mass. The discontinuity in density is caused by a difference in the temperature or salinity on either side of the boundary surface in the water mass. Previous studies conducted either single CTD profiles or spot temperature and salinity measurements using sample bottles and reversing thermometers. These methods lack the regular time scale necessary for determining internal wave structure. By carrying out monitoring of the water column it was hoped to find internal waves missed by earlier oceanographic surveys of McMurdo Sound. 46 CTD casts were taken from two locations near Scott Base, in McMurdo Sound. Two series of casts were taken: the first consisted of three days when casts were taken on an approximately hourly basis, and the other contained data taken over thirteen days.
New Plasma Device Considered The ‘Holy Grail’ Of Energy Generation And Storage
Scientists at the University of Missouri have devised a new way to create and control plasma that could transform American energy generation and storage.
Randy Curry, professor of electrical and computer engineering at the University of Missouri’s College of Engineering, and his team developed a device that launches a ring of plasma at distances of up to two feet. Although the plasma reaches a temperature hotter than the surface of the sun, it doesn’t emit radiation and is completely safe in proximity to humans.
While most of us are familiar with three states of matter – liquid, gas and solid – there is also a fourth state known as plasma, which includes things such as fire and lightning. Life on Earth depends on the energy emitted by plasma produced during fusion reactions within the sun.
The secret to Curry’s success was developing a way to make plasma form its own self-magnetic field, which holds it together as it travels through the air.
“Launching plasma in open air is the ‘Holy Grail’ in the field of physics,” said Curry. | <urn:uuid:55d8fa5a-6167-45e6-9f3c-69dfd0428570> | 3.578125 | 236 | Truncated | Science & Tech. | 34.47 |
The Idea of Space Exploration
The dream of space travel is an ancient one. The first known description of a flight to the Moon is from an ancient Greek writer, Lucian of Samosata (190 A.D.). He had one of the characters in his, "A True Story" don eagle wings and fly to the Moon in order to learn how the stars came to be "scattered carelessly up and down the universe." Since then, many writers, scientists, and others have dreamed of traveling to the Moon, other planets, and even stars.
Most people don't realize that our ability to fly spaceships to the Moon or Mars is a direct result of Newton's work in the 1600s! The idea of space exploration is an extension of the Copernican revolution, and the actual scientific basis of space travel came from Newton. His understanding of force and his kinematical equations answer the fundamental questions of space flight. Specifically, they allow one to calculate escape velocities and orbital trips not just around the Earth, but all the way to the Moon or Mars.
"Newton's Cannon" thought experiment. Click here for original source URL.
Newton realized that a satellite could be launched into space. In his 1687 masterwork, Principia, he described a "thought experiment" on how it could be done. A thought experiment is a hypothetical experiment run through in one's mind that is difficult to carry out in practice. For example, one of Newton's laws of motion states that an object in uniform motion (constant velocity) will continue in uniform motion (the same velocity) until acted upon by an outside force. In reality, projectiles are subject to air resistance, rolling objects are subject to friction, and outside forces are always present. Thus, thought experiments allow one to imagine that if friction or air resistance were eliminated, uniform motion would be the result, even though reality precludes actually doing the experiment.
Space Shuttle Discovery night launch of STS-131. Click here for original source URL.
This was Newton's thought experiment for launching something into orbit: imagine a mountain so high that it projects above the Earth's atmosphere (you see why this would be a difficult experiment to actually perform!). Now imagine a cannon pointing out from the mountain top, parallel to the Earth's surface. If you fired a cannon ball at modest speed, the ball would fall near the foot of the mountain. At a higher speed, the ball would fall farther away. At a high enough speed, the ball would be falling towards the Earth (due to the force of gravity) at exactly the same rate that the curved surface of the Earth is “falling away” from it. In this case, it would continue to travel all the way around the Earth. This is exactly how we define an orbit. Note that each path obeys Kepler's laws: even the paths that hit the Earth's surface are segments of ellipses around the center of the Earth. The final orbit is the special case of an ellipse that is a circle. Thus, launching a satellite into orbit around Earth is in fact a 17th-century idea of Isaac Newton!
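As a back-of-the-envelope illustration (mine, not the article's), the speeds involved in Newton's thought experiment follow directly from his law of gravitation; the constants below are standard values for Earth:

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

# Circular orbit just above the surface (ignoring the atmosphere, as the thought
# experiment does): gravity supplies exactly the needed centripetal force.
v_orbit = math.sqrt(G * M_EARTH / R_EARTH)

# Escape speed: kinetic energy equal to the depth of the gravitational potential well.
v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)

print(f"circular orbital speed: about {v_orbit / 1000:.1f} km/s")  # ~7.9 km/s
print(f"escape speed: about {v_escape / 1000:.1f} km/s")           # ~11.2 km/s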
Author: Chris Impey
Editor/Contributor: Ingrid Daubar-Spitale
Editor/Contributor: Pamela Gay | <urn:uuid:d9821b07-69ee-45f0-b5b5-bda78014224d> | 3.6875 | 673 | Knowledge Article | Science & Tech. | 50.860549 |
Exploring the geoengineering of climate using stratospheric sulfate aerosols: The role of particle size
Article first published online: 26 JAN 2008
Copyright 2008 by the American Geophysical Union.
Geophysical Research Letters
Volume 35, Issue 2, January 2008
How to Cite
Exploring the geoengineering of climate using stratospheric sulfate aerosols: The role of particle size (2008), Geophys. Res. Lett., 35, L02809, doi:10.1029/2007GL032179.
- Issue published online: 26 JAN 2008
- Article first published online: 26 JAN 2008
- Manuscript Accepted: 19 DEC 2007
- Manuscript Revised: 26 NOV 2007
- Manuscript Received: 1 OCT 2007
- climate change;
- global warming
Aerosols produced in the lower stratosphere can brighten the planet and counteract some of the effects of global warming. We explore scenarios in which the amount of precursors and the size of the aerosol are varied to assess their interactions with the climate system. Stratosphere-troposphere exchange processes change in response to greenhouse gas forcing and respond to geoengineering by aerosols. Nonlinear feedbacks influence the amount of aerosol required to counteract the warming. More aerosol precursor must be injected than would be needed if stratosphere troposphere exchange processes did not change in response to greenhouse gases or aerosols. Aerosol particle size has an important role in modulating the energy budget. A prediction of aerosol size requires a much more complex representation and assumptions about the delivery mechanism beyond the scope of this study, so we explore the response when particle size is prescribed. More aerosol is required to counteract greenhouse warming if aerosol particles are as large as those seen during volcanic eruptions (compared to the smaller aerosols found in quiescent conditions) because the larger particles are less effective at scattering incoming energy, and trap some outgoing energy. About 1.5 Tg S/yr are found to balance a doubling of CO2 if the particles are small, while perhaps double that may be needed if the particles reach the size seen following eruptions. | <urn:uuid:68e39b59-88ee-40a9-bc9a-567981d09a04> | 3.03125 | 446 | Academic Writing | Science & Tech. | 28.703125 |
The nice part about science is the ability to retest things as new data and better methods become available. In the case of climate change, new data and updated models are producing increasingly higher warming predictions for the end of this century. MIT joined other entities in retesting their predictions with their Integrated Global System Model. The IGSM is used to make probabilistic projections of climate change from 1861 to 2100. Back in 2003, at the time of their original predictions, the projected end-of-century median surface temperature was 2.4°C higher than the climatological average of the preceding century. Armed with additional data and significant updates to the model, their latest prediction is an astounding 5.1°C (median value) in the 2091 to 2100 time period. That’s more than double the value found just a handful of years ago. I can guarantee, and I’m sure they would agree, that their data isn’t completely sufficient; nor is their model accounting for critical feedback processes, many of which we’re only now becoming aware of.
Their new study also includes new predictions of CO2 concentrations over the next 80 years. Their new 5th percentile projection is higher than their 2003 median (50th percentile) at just under 700ppm (current values are 387ppm and increasing). Their new 50th percentile projection is almost as high as their 2003 95th percentile projection: 866ppm vs. 900ppm. Finally, their new 95th percentile projection registers at a nearly unfathomable 1100ppm. Concentrations of CO2 leading up to 1100ppm would certainly open the door to out-of-control climate feedback processes, the kind which nobody would want to deal with.
Warming in their simulations range from 3.1°C to 7.3°C by 2100. They make sure to note that not one of their 400 simulations resulted in globally averaged temperature increases of less than 2°C. Not one. That’s a very significant result. Why the big change? The authors explain:
Rather than interacting additively, these different affects appear to interact multiplicatively, with feedbacks among the contributing factors, leading to the surprisingly large increase in the chance of much higher temperatures.
That multiplicative description is characteristic of non-linear systems, such as the climate system. It’s quite frankly something that many climate change deniers/delayers don’t understand or gloss over. Additive changes of GHG emissions result in multiplicative surface temperature changes down the road. We don’t have to inject too much CO2 or other gases to generate large temperature increases. Which little additive change in emissions will result in more feedback processes kicking in? We don’t know. As such, I don’t think it’s worth it to continue emitting GHGs until we see the feedback has kicked in – it will be too late to slow things down at that point.
Another important result: polar amplification is present in their simulations. By that, I mean that just as has already been observed in the past 30 years, polar temperatures are expected to increase more than temperatures across the mid-latitudes and tropics. There are some differences between the Northern and Southern Hemispheres. Their median percentile projection calls for a 10°C rise at the north pole by 2091-2100 compared to 1981-2000, a 7°C rise at 45°N and a 6°C rise at the Equator. At 45°S, the median temperature change is predicted to be slightly more than 4°C; the south pole temperature change is predicted to be about 7°C.
Does anybody think the Arctic ice sheet can exist year round with 10°C warmer annual temperatures? I certainly don’t. This report identifies a 5% probability of Arctic Ocean ice disappearing in the summer by 2100. I don’t think it will take until 2025 before that happens. Again, the poles are observed and sampled very infrequently in time and space. We simply don’t have solid ideas of how polar climate dynamics behave – not in stable conditions and certainly not in unstable conditions. | <urn:uuid:96add12c-ece6-44d4-8b0e-8d7125382377> | 3.203125 | 853 | Personal Blog | Science & Tech. | 51.231607 |
Agile scrum methodology is a method for managing projects and application development. Agile methodology is an approach that can help teams tackle the unpredictability of software building by using incremental, iterative work cadences known as sprints. It was inspired by the sequential or waterfall development used back in the 1970s for building large software applications. The waterfall concept is that each phase is completed before the next can begin; this helps to ensure that the project never gets ahead of itself regardless of deadlines. This can be frustrating for some designers who feel that their input needs to come earlier, but it also means that the project is completed as a whole rather than some parts being finished before others. It is a way of building the software by each designer putting their parts in together at the end, though this may mean that a lack of communication hinders the project in the end, as the parts have not been designed to work together from the get-go. It is designed to enhance existing software processes such as the agile RUP methodology.
Many choose to implement agile methodology because of the benefits. The development process reduces development costs and the time it takes before the project hits the market. Managers see this as a bonus since it obviously means faster sales, while designers may be dismayed because it can also mean setbacks when the program has to be broken apart and restructured to work together as a whole rather than as a sum of parts. Since teams can gather information at the same time, there is no waiting on another team to finish. This is ideal for multiple small groups of experts creating top-quality output fast, but there is a lot of negativity and criticism outside of that sphere, rendering agile methodology somewhat limited. It is considered to be part of a development culture that “thrives on chaos” since there is little cohesion to developer methods as long as they reach the intended goal. There is also great criticism in that there is little evidence to support the developers' claims that the process is faster and more reliable than traditional methods where developers work together.
Learning agile methodology might seem rather useless unless you work in software development, but the principles can be applied to any team environment with projects and deadlines. An agile methodology tutorial or basic how-to is a good way to start, but it would be prudent to invest in some quality literature on the subject if you plan on implementing the concept. There are several parts inside agile methodology, and what you learn will depend on which area you wish to concentrate on. Scrum development, for example, takes its name from the rugby term. The project is constantly passed back and forth between teams until the goal is achieved. The benefit of this is that it is very functional when dealing with manufacturing or development, as problems are tackled quickly as they occur. There are set methods and roles within the “scrum,” just as with rugby. There is an overseer or scrum master, an owner or financier, and the cross-functional, self-organizing team who perform analysis, design, implementation, testing tasks, etc. Usually the teams will work in sprints to complete each task as it is needed. Sprints are limited to a specific unit of time, between one week and one month, and are a constant length. This is an easy way to break up deadlines and know that a project will stay on track. At the end of each sprint there will be a completed portion of the project. The best way to illustrate this process is through an agile scrum methodology diagram, which can easily be found online.
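As a rough illustration of the time-boxing described above (a sketch of my own, not something from the original article), a sprint can be modelled as a fixed-length container holding the slice of work the team has pulled in:

from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Sprint:
    # A time-boxed iteration: constant length, one goal, and a slice of the backlog.
    goal: str
    start: date
    length_days: int = 14
    items: list = field(default_factory=list)

    @property
    def end(self):
        return self.start + timedelta(days=self.length_days)

backlog = ["user login", "report export", "search filters", "payment flow"]
sprint = Sprint(goal="ship login and export", start=date(2024, 3, 4), items=backlog[:2])
print(f"Sprint runs {sprint.start} to {sprint.end}: {sprint.items}")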
When deciding if this method is right for your team, it may be important to do your research first and learn all you can about implementing it, its pros and cons, etc. Consider talking to your team and asking for their input on the proposed method and how they feel it will affect productivity. Finding agile scrum methodology interview questions online might be worth looking into; then consider having the team take the time to research the concept themselves before putting it to a vote or simply deciding whether the method will function in your environment. Implementing the system will require a certain amount of delegation and trust that the team can complete the project in parts. If your confidence in the team is not secure, it might be worth waiting on the implementation or risking a setback in the deadline.
Agile methodology is a productive method for small software groups. It can have setbacks depending on the implementation, and it certainly does not always work; there are many elements that are not taken into account when implementing agile methodology, which is why it can be seen as a chaotic method where many are left to their own devices with the hope that everything will be “alright in the end.” Agile can be broken down into a variety of different methods, of which agile scrum methodology is the simplest and most functional.
(Science: zoology) One of the larger divisions of crustacea, so called because the thorax and abdomen are both segmented; Tetradecapoda. It includes the Amphipoda and Isopoda.
Origin: NL, fr. Gr., joint + a shell.
Problem 11: Incomplete dominance in a dihybrid cross.
Tutorial to help answer the question
In Mendel's experiments, the spherical seed character (SS) is completely dominant over the dented seed character (ss). If the characters for height were incompletely dominant, such that TT are tall, Tt are intermediate and tt are short, what would be the phenotypes resulting from crossing a spherical-seeded, short (SStt) plant to a dented-seeded, tall (ssTT) plant?
Phenotype of offspring of an incompletely dominant trait
All of the offspring would have the genotype SsTt, so every plant would be spherical-seeded (S is completely dominant over s) and intermediate in height (Tt gives the intermediate phenotype). Mendel selected traits that did not display partial dominance to study.
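The same cross can be checked mechanically. The short sketch below is not part of the original tutorial and its helper functions are purely illustrative, but it enumerates the gametes each parent can produce and combines them, confirming that every offspring is SsTt:

from itertools import product

def gametes(genotype):
    # One seed-shape allele plus one height allele per gamete, e.g. 'SStt' -> {'St'}
    seed, height = genotype[:2], genotype[2:]
    return {s + t for s, t in product(seed, height)}

def offspring(parent1, parent2):
    # Combine every gamete of one parent with every gamete of the other
    kids = set()
    for g1, g2 in product(gametes(parent1), gametes(parent2)):
        seed = "".join(sorted(g1[0] + g2[0]))    # 'S' + 's' -> 'Ss'
        height = "".join(sorted(g1[1] + g2[1]))  # 't' + 'T' -> 'Tt'
        kids.add(seed + height)
    return kids

print(offspring("SStt", "ssTT"))  # {'SsTt'}: spherical seeds, intermediate height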
The Biology Project
University of Arizona
Tuesday, August 13, 1996
All contents copyright © 1996. All rights reserved. | <urn:uuid:51a79060-2dd4-4cd1-a2d3-1c6127640b9a> | 3.203125 | 181 | Tutorial | Science & Tech. | 43.185366 |
In its natural state, air is a good insulator. However, if it’s adequately ionized, it can ultimately lead to “corona discharge”. What does that mean and why is it important? Let’s find out.
What’s Involved in Corona Discharge?
Arcing is probably something that you are all familiar with. This is when the electric potential between two electrodes is great enough that electrons pass from one electrode to the other, even through air. Lightning is a classic example of this. Before the conditions for arcing are reached, applying an electric potential between two electrodes can lead to limited conduction of electricity and corona discharges. This usually occurs with electrodes that have small, sharply curved surfaces, such as a thin wire. The air molecules in the vicinity of such an electrode become ionized, and positively charged particles and electrons are free to move in relation to each other.
Corona discharge. Image attributed to G1MFG.
In the vicinity of this corona, the resulting plasma essentially becomes a conductor of electricity with both positive and negative effects on the fluid, such as air, that is just outside of the region of the corona. Under controlled circumstances, corona discharges are a good thing; they enable you to make photocopies, purify air in air conditioners, keep the swimming pool clean, and so forth. On the other hand, particularly when this type of discharge happens spontaneously, it’s less than advantageous. High-voltage systems, such as generators for instance, can wind up wasting energy or harm surrounding components if corona discharges are allowed to occur.
Working with the Effects of Corona Discharge
The negative effects of a corona discharge were presented at the COMSOL Conference 2012 in Milan. This presentation showed how a high-voltage generator is harmful to the stator winding’s insulation system if a corona discharge was allowed to develop. Their solution was to apply end-winding corona protection (ECP) to the insulation where the design of the ECP is important to the overall performance of the generators.
A user’s story published in COMSOL News 2012 also reported on engineers that needed to consider the phenomenon. In order for their electrostatic precipitators to work optimally, they need to apply as high a voltage between the electrodes as possible. This would maximize the charging of polluting particles from their flue gas and ensure they would migrate towards and be captured by one of the electrodes, thus cleaning the flue gas. If a corona develops, then energy in charging the flue gas is lost towards maintaining the corona. Furthermore, if it continues to develop arcing can occur, which would not at all be a good thing in the confined spaces of an electrostatic precipitator.
The two previous examples were modeled using the flexibility of COMSOL Multiphysics to approximate the behavior of a corona discharge and its effects on the fluid (air) it interacts with. A more rigorous model that can be found in our Model Gallery also considers the ionizing reactions that occur within the fluid and treats the phenomenon as a multiphysics problem.
Plasmas and coronas are certainly magical and beautiful. They have been astounding people for millennia, and they’re a phenomenon worth trying to understand.
April 9, 2013 at 9:11 am
hi, I’m Kherbouche Fouad, a PhD student in electrical engineering. I work on electrostatic precipitators, and I am trying to design a laboratory-scale electrostatic precipitator in a corona wire / rotating cylinder configuration. I need help regarding the simulation of the electrical parameters and the trajectory of particles subjected to the electric field inside the electrostatic precipitator using COMSOL Multiphysics 4.3 (the AC/DC module and the particle tracing module for charged particle trajectories). In this context, I would be grateful for any help with my simulation. Thanks for all.
April 9, 2013 at 2:15 pm
I think you will find it more appropriate to get help for your model through our Support Center (https://cww.comsol.com/support/). Here, we will make sure you reach the right expert and you will have a way to share your model.
These newcomers wreak havoc by changing, degrading, or displacing native habitat. They may bring disease, and compete directly with wildlife for living space as well as food.
Invasive species move around the world in many ways:
Invasive Species and the Great Lakes
The Great Lakes, one of the greatest freshwater resources on Earth, is changing dramatically because of more than 160 invasive species that are tearing the fabric of the food web. The invaders include
Many invasive species have arrived in the Great Lakes by way of ballast water released from ships. Others were released by people dumping aquariums or accidentally letting go of bait fish. The sea lamprey alone, which was first noted in Lake Ontario in the 1830s, not only alters habitat, but is a very aggressive fish, out-competing native fish for prey.
Invasive Species in the Chesapeake Bay: Nutria
This South American rodent, introduced by the fur industry in the 1940s, has destroyed 7,000-8,000 acres of refuge marshland in the Chesapeake Bay region, impacting all the wildlife that calls those wetlands home. They feed on the roots of marsh plants, and are so prolific that their impacts are huge. State and local government and private organizations are working together to discover the best eradication program.
Homepage photo of salmon embryo courtesy of Haruka Fujimaki, Mount Holyoke College, Mass., student. The image won 9th place in the Olympus Bioscapes Photomicroscopy Competition and was published in the Dec. 2009 issue of Scientific American.
Already on the brink, Atlantic salmon face a changing climate
Atlantic salmon are among the most imperiled species in the Northeast Region. While at one time hundreds of thousands of salmon made their epic migration from the oceans of Greenland to their natal rivers in Maine, now only remnant populations remain. Recovering this iconic species is a priority for the U.S. Fish and Wildlife Service.
The odds have been against the salmon. The construction of dams, overfishing, habitat loss, and water pollution have collectively caused their decline.
We are learning that climate change may be yet another threat to the species’ survival. Recent trends toward earlier snowmelt runoff, less river ice, and warming water temperatures are affecting areas in Maine where the Atlantic salmon is protected as an endangered species.
“Adult salmon appear to be returning to the Penobscot River about two weeks ahead of when they migrated historically. We collect these fish every year to reproduce them in a hatchery setting, and we’ve observed that they’re ready to spawn a little earlier each autumn,” says Paul Santavy, manager of the U.S. Fish and Wildlife Service’s Maine Fisheries Program Complex.
He explains that when spawning occurs in the hatchery on this accelerated schedule the young fish mature too early in the spring. In the incubators, they absorb their internal yolk sacs and need to be released into the river to feed before the water is warm enough to support the invertebrate animals they need as food.
Biologists at the Craig Brook National Fish Hatchery, part of the Atlantic salmon recovery operation that Santavy oversees, are experimenting with ways to delay the spawning cycle that they’re seeing in the Penobscot River salmon. By artificially manipulating water temperatures and length of daylight they are attempting to bring the captive fish more in sync with seasonal river conditions. Hatchery-raised fish comprise 95-percent of the salmon population in the Penobscot.
An additional concern in recovering the Atlantic salmon is that the rivers in the Downeast region of Maine are experiencing extreme fluctuations in water flows – heavier rainfalls happening less frequently, combined with earlier snow and melting river ice – that have been predicted in certain climate change models. During these weather events, the high-flowing water doesn’t have a chance to absorb into the river beds, which contain minerals that serve as a buffer to changing water chemistry. This lack of buffering can increase the acidity of the water. After the rainwater flushes out the system, river flows now tend to decrease rapidly unlike the gradual decline seen in the past.
According to Santavy, these shifting river conditions occur at a time in the spring when the juvenile salmon are very vulnerable. In the wild, the fish have just a one-month window of time to transform from parr into smolt before migrating to the sea.
Biological studies conducted by the U.S. Geological Survey (U.S.G.S.) show that it can take weeks for a salmon in this life stage to recover after being exposed to acidic conditions. Acidic pH levels in the water lower than 6 (7 is neutral) can cause naturally occurring aluminum levels in the water to become reactive to the fish. This can prevent calcium from binding to the fishes’ gills – a necessary process for their transition to the marine environment. If this is the case, Santavy says it could help explain the reason smolt are being lost in the estuaries, and would be devastating to recovery efforts.
In an effort to better understand how changes in these river systems affect Atlantic salmon, a team of scientists from the U.S.G.S., Maine Cooperative Research Unit, National Oceanic and Atmospheric Administration, Maine Department of Marine Resources, National Research Council and the University of Maine is researching how changes in summer low stream flows and stream temperatures in the northeast may affect the populations. The study will also evaluate management options that may mitigate the effects of these conditions. The study is funded by the U.S.G.S. National Global Warming and Wildlife Science Center.
Atlantic Salmon Life Cycle
Atlantic salmon are anadromous, meaning that they travel from the sea to spawn in fresh water. Remarkably, the salmon imprint on their home rivers, recognizing their chemical fingerprint, and find their way back to them as adults to reproduce. The eggs hatch during winter and the tiny salmon, called fry, emerge from the gravel in spring. Juvenile salmon, called parr, remain in fresh water for up to three years before undergoing physiological changes to prepare for their migration into salt water. Salmon remain in the sea for two winters before they return to rivers and streams to spawn. The adult fish are about 2 ½ feet long and weigh about ten pounds.
To learn more about the Atlantic salmon life cycle, check out this video: FLV version (3:38 - 62.08MB - Video by Bob Michelson) | <urn:uuid:385d6db5-db04-4ad0-87a0-253628c4a1fa> | 3.765625 | 1,071 | Knowledge Article | Science & Tech. | 44.137389 |
RSS is a family of web feed formats used to publish frequently updated content such as blog entries, news headlines or podcasts. An RSS document, which is called a "feed," "web feed," or "channel," contains either a summary of content from an associated web site or the full text. RSS makes it possible for people to keep up with their favorite web sites in an automated manner that's easier than checking them manually.
In this article we'll see how to parse RSS Feeds using the PEAR package XML_RSS.
Issue this command in the command line,
pear install --alldeps XML_RSS
require_once 'XML/RSS.php';
$feed_uri = "http://www.go4expert.com/external.php?type=rss&forumids=17";
$rss =& new XML_RSS($feed_uri);
$rss->parse();
print("<h1>New PHP articles from <a href=\"http://www.go4expert.com\">Go4Expert</a></h1>\n");
foreach ($rss->getItems() as $item) {
    print("<a href=\"" . $item['link'] . "\">" . $item['title'] . "</a><br />\n");
}
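If you also want the feed's own channel information (title, link, description), XML_RSS exposes that too; the getChannelInfo() call below follows the PEAR documentation, but treat this fragment as an illustrative assumption rather than part of the original article:

// Illustrative only: channel-level metadata for the parsed feed (assumed accessor).
$channel = $rss->getChannelInfo();
print("<p>" . $channel['title'] . " - " . $channel['description'] . "</p>\n");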
Rapid Changes in Oxygen Isotope Content of Ice Cores Caused by Fractionation and Trajectory Dispersion near the Edge of an Ice Shelf
by Larry Vardiman, Ph.D.
Published in Creation Ex Nihilo Technical Journal, volume 11 (part 1), pp. 52–60, 1997.
© 1997 Creation Science Foundation, Ltd. A.C.N. 010 120 304. All Rights Reserved.
Oxygen isotopes in ice cores extracted from polar regions exhibit a decreasing trend in the ratio of the heavy to light isotopes from the beginning of the “Ice Age” to its end, at which point the trend reverses sharply and then remains fairly constant for several thousand years. This trend has been interpreted by the conventional climate community to have occurred over about 100,000 years and is due primarily to changes in oceanic and atmospheric temperatures as the lighter isotope of oxygen is preferentially transferred slowly from the oceans to ice during glaciation and the rapid transfer back to the ocean during deglaciation. This paper will explore an alternative explanation for this trend. The growth of ice shelves during the “Ice Age” is shown to cause a decreased isotopic ratio at long distances from the edge of an ice shelf because of the fractionation of isotopes as a function of the vertical temperature distribution in the atmosphere and the dispersion of snow by crystal type, fall velocity, and wind fields. If ice shelves grew slowly during the “Ice Age” and melted rapidly during deglaciation, the trend observed in ice cores can be explained in thousands of years, consistent with a short interpretation of earth history.
Ice Cores, Oxygen Isotopes, “Ice Age”, Sea-Floor Sediments, Greenland Ice Shelves, Precipitation Trajectories, Ice Crystals, Fractionation, Horizontal Dispersion
Eventually Linearizable Shared Objects
Shared objects are a useful abstraction in the design of concurrent systems. A concurrent system consists of a collection of sequential processes communicating through shared objects. A shared object can be made tolerant to process failures by storing a copy of the shared object at each process and by having the processes coordinate their actions to implement a certain degree of consistency. The more consistent the local copies are kept, the easier it is to design a distributed application using the replicated object. | <urn:uuid:ecb66218-bb24-4039-876f-e99667e49999> | 2.703125 | 95 | Knowledge Article | Software Dev. | 20.420819 |
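As a toy illustration of that replication idea (a sketch only, under strong simplifying assumptions; it is not the paper's algorithm and it ignores failures, message loss and ordering), each process keeps a local copy and applies every operation it hears about:

class Replica:
    # One process's local copy of a shared counter.
    def __init__(self):
        self.value = 0

    def apply(self, delta):
        # Called for the process's own operations and for those received from peers.
        self.value += delta

def broadcast(replicas, delta):
    # Deliver an operation to every copy; a real system must cope with loss and delay.
    for replica in replicas:
        replica.apply(delta)

replicas = [Replica() for _ in range(3)]  # three processes, each holding a copy
broadcast(replicas, +1)                   # one process increments the shared counter
print([r.value for r in replicas])        # all copies agree: [1, 1, 1]

The more reliably and consistently such updates are delivered and ordered at every copy, the stronger the consistency of the replicated object, which is the trade-off the abstract alludes to.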
In recent media interviews, I keep getting asked the same question: Where are our hurricanes? Aren't we having a quiet season? It's almost as if, following Katrina and a widespread sense that hurricanes are in some way tied to global warming, the public has come to think that the failure of the former to appear means that the latter isn't anything to worry about.
The truth is that we're just now moving into the peak of hurricane season in the Atlantic: Most storms occur in August, September, October. Despite a quiet June and July this year, hurricane forecasters William Gray and Phil Klotzbach of Colorado State University still say they expect 15 total storms (of tropical storm strength or higher) to appear (PDF). So far we've only seen two by their reckoning.
Meanwhile, as this August 4 image of the Caribbean's "tropical cyclone heat potential" demonstrates, there are already many stretches of ocean out there that are ripe for hurricane formation. The patches of orange and red denote deep columns of warm water, which can provide immense fuel for hurricane development.
So don't write off this year's storm season quite yet. The truth about hurricanes is that, because they're weather phenomena, sometimes it can seem as if the atmosphere throws a switch and suddenly they're everywhere. Inactivity can turn into activity in quite a hurry, as conditions become sufficiently ripe. During 2004, when Florida was struck by four powerful hurricanes, all occurred within the space of about six weeks. And the first storm in that year (Alex) wasn't named until August 1. This year, by contrast, we've already seen our C-storm, Chantal.
Meanwhile, scientists continue their vigorous debate over the relationship between hurricanes and global warming in general. A new study suggests that the total number of storms in the Atlantic has risen markedly over the past century -- but of course, other scientists disagree. For more on that debate, see here and here. One thing about the 2007 hurricane season is certain: It will provide one more year of data, helping us (albeit only slightly) determine which camp of scientists is right in this dispute.
Finally, it's important to keep in mind that hurricanes occur globally, and there has been quite a lot of activity of late in the northwestern Pacific region. In the past month two powerful typhoons have slammed Japan, and now another one (Pabuk, currently a tropical storm) is forecast to careen towards Taiwan and China within the next few days. However, so far Pabuk hasn't really been obeying forecasts -- so we'll have to watch it closely.
The Daily Green has asked me to follow storms this season and provide commentary on global warming and related issues -- so this will be the first post of many. I hope it has been illuminating, and there's much more to follow.
Frank Jeffrey and Mike Coon, PowerFilm Solar
Chris - One of the problems with conventional solar panels is that they're very heavy. They're also fragile and they're stiff, and that means it's very tricky to transport them and to install them. A flexible solar cell that you could roll up and then readily transport would be an ideal solution. PowerFilm Solar is an American company and they're doing just this. They're developing what we call 'thin film photovoltaics' and we're joined now by Dr. Frank Jeffrey and also Mike Coon; they're from PowerFilm Solar and they're going to explain to us how they work. Frank, hello, welcome to the Naked Scientists.
Frank - Thank you.
Chris - Tell us first if you would, how does your architecture, your flexible cells, actually differ from the rigid ones that we see people putting on their roofs? How do you make them bend?
Frank - The principal part of the solar cell itself in our cells is amorphous silicon, which has an extremely high absorption coefficient, so that we can have extremely thin semiconducting material that will still absorb a good portion of light. That thin material, even though if it were thick like a crystalline wafer it would break, in the same type of structure when it's thin enough becomes flexible and tends to bend rather than break. So, that's the key part: our basic absorber layer that absorbs the solar energy is only, say, 5,000 angstroms thick. So it's quite thin and flexible and we put it on a thin film plastic substrate that is also flexible and adds mechanical support and strength to the solar component.
Chris - So when you say it's flexible, how flexible are we talking? Could you roll this up like a newspaper or would it not tolerate that kind of treatment?
Frank - Well, if we have the basic substrate and solar material itself, we can roll it up to the diameter of a pencil and it does just fine. Actually, in some applications we do roll that small for storage. That's mostly a space type application but normally, we put a heavier encapsulant on the outside to protect against earth's atmosphere, and that means around maybe a 3-inch diameter is what a commercial cell or module that we sell will roll up to comfortably.
Chris - That's still pretty impressive to get it down so small. If we could zoom in with a microscope and just examine the structure of your cells, what would we see? If you could just paint a picture for us so people can appreciate exactly how they're configured.
Frank - Okay. Maybe an electron microscope in order to see it, but we start out with a basic film of polyamide plastic to build it all on. So that's the bottom layer that you would see, and that may be a 25 to 50-micron thick plastic film. On top of that, we put a metal layer, principally aluminium, that acts as a back electrical contact. That is able to carry the electrons off the back surface of the solar module or solar cell. Then there are six layers of silicon forming actually 2 diodes, a thick diode in the bottom that absorbs the red light and a thinner diode in the top that absorbs the blue light. By having 2 diodes, we get a higher operating voltage and lower current, which means we don't lose as much energy in resistance of the leads coming in and out. Then on top, we have a transparent oxide conductor. It's not all that easy to make something that's both transparent and conductive, but that's the type of film we use. That allows the light to come in and also carries the current off the face of the solar cell. So that's the stack from top to bottom, and then clear plastic, generally a polymer encapsulation both front and back, to protect it from moisture and outside weathering, and that type of stuff.
Chris - Ingenious to manage to have something that absorbs both the red and the blue so that you don't waste any energy. How much energy do you extract? If I compared your system head-to-head with one that I could buy off the shelf to put on my roof today, how would the efficiencies compare?
Frank - If you compare to the different technologies out there, ours is quite a bit less per square foot. We generate about 5 watts per square foot as opposed to crystalline silicon which is more in a range of 15 watts per square foot. So, the output is considerably less. Part of the point of our approach also is very low cost manufacturing so that ultimately, we can be competitive in the cost per watt generated and in specific markets that require us to be lightweight and thin such as integrating it in building panels, we can be competitive with crystalline silicon on a cost basis, and in an application basis.
Chris - So no such thing as a free lunch. And Mike Coon, let's bring you in here. I suppose the payback must be that you've got very good portability, there must be many applications for something like this which can be rolled up, packaged away, and taken somewhere where you need instant power on demand in the middle of nowhere.
Mike - That's right. On one end of the continuum, we have our products which serve the portable power market especially well because of the lightweight nature of our material and on the other end of the continuum, talking more about our building integrated products, our larger scale products which can be up to 30 feet or approximately 10 metres long for larger scale building integrated applications. But on the portable area, the light weight is especially important because we can provide power unlike others that can be extremely lightweight, can be portered in, and can be extremely durable. For example, the US Military has shot holes through it and it continues to perform and that's because of this printed interconnect which Dr Derrick Grimmer developed early on for the company.
Chris - Tremendous! So in other words, you've got something which is quite literally bomb proof. How are you actually seeking to use this? Who are your markets? Who is taking this product and deploying it in the field?
Mike - Yes. There are currently three primary market segments that weíre serving and a fourth one which weíre launching and in process of gearing up for. The existing markets that weíre selling in to are the commercial industrial markets, a variety of applications ranging from providing panels for GPS asset tracking on semi-tractor trailers to remote data collection, electric golf carts, campers, RV panels, the whole gamut. Also the military market is very important for us. Weíve developed products which range on the one hand from small AA chargers to 5 to 60 watt portable chargers to charge everything from ruggedised laptops and notebooks, to medical refrigeration, to remote sensors, as well as in our larger 1 to 3 kilowatt power shade products which provide not only a remote portable power, but also the shade benefits that was mentioned earlier. These are very rugged durable products that can go over existing shelter structures and about four or five man hours can be set up with two to four soldiers.
Chris - So I guess that if you've got say, a military camp, they're in the middle of nowhere, previously, power had to come from someone carting either very heavy batteries or a big diesel generator and all the fuel for it. Now, you've got a system where you could deploy this, it looks because itís flexible, like a tent to all intents and purposes, and itís going to provide mobile power.
Mike - Thatís right. Itís very much designed to meet the power needs of todayís war theatre which remote outposts are incredibly important, increasingly important, such as in Afghanistan. We provide energy solutions which can be integrated either independently for targeted use of power as well as integrating part of overall hybrid systems. One of the important aspects of our technology is it does reduce fuel consumption which can be incredibly costly in those remote areas. Reducing fuel consumption reduces convoys, which reduces the cost of the fuel as well as the potential risk of casual losses with those fuel convoys.
Chris - Terrific. Well thank you very much for joining us to tell us about your work. Thatís Mike Coon and Frank Jeffrey. They're both from PowerFilm Solar with flexible photovoltaic cells that you can even turn into tents.
I bought some amorphous pv panels. They were 450x300mm and output 300mA @ 15v. After 2 years the output was a tenth of that. I tried one that was still in it's box and it was ok. It seems the only way to maintain output is to keep them out of the sun. David, Sun, 20th Feb 2011 | <urn:uuid:35335698-5c62-4a82-8142-70bf6c1cbbfd> | 3.4375 | 1,806 | Audio Transcript | Science & Tech. | 54.349209 |
Salps—transparent organisms that range from 0.5 to five inches long—are the subject of MIT-WHOI Joint Program graduate student Kelly Rakow Sutherland's PhD dissertation. Observing them in their habitat off the Pacific Coast of Panama, she focused on how the creatures balance their food-energy resources, which they use to propel themselves. Salps swim and eat by rhythmic pulses. Each pulse draws seawater in through a siphon opening at the front end of the animal. Then the salp contracts muscle bands, and the water shoots out another siphon at its rear end, producing a jet that propels the animal forward. In the process, food particles in the water (mostly tiny plankton) are caught on a mesh of mucus strands inside the salp’s mostly hollow body. There are about 40 species of salps with widely diverse body shapes. Rakow Sutherland posits that some species may be good at filtering seawater through their feeding mesh at the expense of being good swimmers, and vice versa.
(Photo by Larry Madin, Woods Hole Oceanographic Institution) | <urn:uuid:29656b01-0474-4ff9-bbf2-11852f5de3e9> | 3.890625 | 227 | Knowledge Article | Science & Tech. | 50.175028 |
US Department of Agriculture, Natural Resources Conservation Service, Soil Survey Division, World Soil Resources
Orange and red show the areas where the risk of human-induced desertification is highest. As can be seen immediately, the most affected areas lie along the equator in Africa. Developing countries such as India, as well as the Middle East and Eastern Europe, are also thought to be affected.
This also goes together with natural deforestation and desertification.
Desertification and droughts are major causes of ecosystem losses and species mass migration or extinction.
MARSEILLE, France (AlertNet) - Water must be used more efficiently and its waste reduced if the world is to meet rising food demand from a fast-expanding population amid the pressures of climate change, experts have said ahead of World Water Day.
Marked each year on March 22, the United Nations hopes the 2012 event will focus attention on water's critical role in feeding the world.
LONDON, Feb 13 (Reuters) - Global warming will get worse as agricultural methods accelerate the rate of soil erosion, which depletes the amount of carbon the soil is able to store, a United Nations' Environment Programme report said on Monday.
Soil contains huge quantities of carbon in the form of organic matter, which provides nutrients for plant growth and improves soil fertility and water movement.
The top metre of soil alone stores around 2,200 billion tonnes of carbon, which is three times the level currently held in the atmosphere, said the UNEP Year Book 2012.
PRETORIA, 14 November 2011 (IRIN) - Soaring temperatures and erratic rains brought on by a changing climate may radically alter water flows in the world’s major river basins, including the Limpopo in southern Africa, forcing people to give up farming in some areas, says a new study. | <urn:uuid:88f03ca5-b6d8-4463-8959-931fc45f9c18> | 3.515625 | 375 | Content Listing | Science & Tech. | 31.176036 |
This video from the USA is called 6ft Black Rat Snake.
Global Warming Beneficial to Ratsnakes
Jan. 8, 2013 — Speculation about how animals will respond to climate change due to global warming led University of Illinois researcher Patrick Weatherhead and his students to conduct a study of ratsnakes at three different latitudes — Ontario, Illinois, and Texas. His findings suggest that ratsnakes will be able to adapt to the higher temperatures by becoming more active at night.
- Media is missing climate in heatwave story (theconversation.edu.au)
- CIA closes down intelligence center focused on climate change (pri.org)
- Surprise, Climate Change Denier Appointed To Head House Science Committee (wonkette.com)
- Nepal Government Publishes Plan For Assessment On Climate Change (chimalaya.org) | <urn:uuid:2f404efe-a0de-4547-9a2b-88ba9f88264a> | 3.078125 | 176 | Personal Blog | Science & Tech. | 30.446667 |
Leaking Gravity May Explain Cosmic Puzzle
By Sara Goudarzi
Special to SPACE.com
posted: 28 February 2005
WASHINGTON, D.C. - Scientists may not have to go over to the dark side to explain the fate of the universe.
The theory that the accelerated expansion of the universe is caused by mysterious "dark energy" is being challenged by New York University physicist Georgi Dvali. He thinks there's just a gravity leak.
Scientists have known since the 1920s that the universe is expanding. In the late 1990s, they realized that it is expanding at an ever-increasing pace. At a loss to explain the stunning discovery, cosmologists blamed it on dark energy, a newly coined term to describe the mysterious antigravity force apparently pushing galaxies outward.
This repulsive, unknown force is believed to make up more than 70 percent of the mass-energy budget of the universe.
But the existence of dark energy is far from proven, and some researchers believe they and their colleagues simply don't understand gravity at larger scales. The gravitational pull between any two objects becomes less with distance. But in Dvali's view, it weakens more than standard theory predicts.
Dvali would modify the theory of gravity so that the universe becomes self-accelerating, eliminating the need for dark energy. He presented his work here earlier this month at the annual meeting of the American Association for the Advancement of Science.
Dvali borrows from string theory, which states that there are extra, hidden dimensions beyond the four we are familiar with: three directions and time. String theory suggests that gravitons -- hypothetical elementary particles transmitting gravitational forces -- can escape to other dimensions. Dvali says this would cause "leaks" in gravity over cosmic proportions, reducing gravitational pull at larger distances more than expected.
"The gravitons behave like sound in a metal sheet," says Dvali. "Hitting the sheet with a hammer creates a sound wave that travels along its surface. But the sound propagation is not exactly two-dimensional as part of the energy is lost into the surrounding air. Near the hammer, the loss of energy is small, but further away, it's more significant."
The effect is to alter the space-time continuum, speeding up universal expansion.
"Virtual gravitons exploit every possible route between the objects," Dvali said, "and the leakage opens up a huge number of multi-dimensional detours, which brings about a change in the law of gravity."
The speeding up of the universe suggests that Einstein's laws of General Relativity, describing the interaction of space and matter, must be modified at large cosmic distances.
"It is this modification, and not dark energy, that is responsible for the accelerated expansion of the universe," Dvali concludes.
The idea might be testable.
Gravity leakage should create minor deviations in the motion of planets and moons. Astronauts on the Apollo 11 mission installed mirrors on the lunar surface. By shooting lasers at the mirrors, a reflected beam can be monitored from Earth to measure tiny orbital fluctuations. Dvali said deviations in the Moon's path around Earth might reveal whether gravity is really leaking away. | <urn:uuid:15b34fbe-838b-4e47-a5b3-5962e903aec1> | 3.015625 | 660 | Truncated | Science & Tech. | 38.735577 |
Tony Phillips, National Aeronautics and Space Administration, Science@NASA, Marshall Space Flight Center
This audio clip reports on a rare hurricane in the South Atlantic that crashed into the coast of Brazil. Weather satellites have been circling Earth for more than 40 years and during that time they have never before spotted hurricanes in the south Atlantic. It was thought that vertical wind shears in the South Atlantic are too strong for hurricanes. People in Brazil were not even sure that it was a hurricane until information from several satellites confirmed it. This site is enhanced by an audio version of the text, photographs, a map, and a remotely sensed image. | <urn:uuid:5738b88e-65df-4202-aff7-1237d8136957> | 3.4375 | 128 | Truncated | Science & Tech. | 34.156054 |
Imagine being able to detect harmful bacteria, viruses, or other contaminants with a simple swipe of a paper towel. Far-fetched as it might sound, such a technology already exists, though it’s not yet on the market.
The so-called “nano napkin” is a napkin made of ultra-fine polymer fibers 100 billionths of a meter (or 100 nanometers) wide. The fibers are coated with antibody-bearing proteins. When the antibodies come in contact with a specific pathogen, they release a dye that changes the color of the napkin.
So far, Cornell researchers have only developed a napkin to detect E. coli bacteria. Wipe the napkin on a surface contaminated with E. coli and it turns yellow—a potentially useful tool in a meat-packing plant or industrial kitchen. By substituting different antibodies, napkins could be made to detect any number of contaminants, including anthrax, bird flu, or even the common cold virus. Multiple antibodies on one napkin could allow for detection of dozens of biohazards at once.
Though the napkin is in its preliminary stages, the technology promises to be inexpensive and, clearly, easy to use. | <urn:uuid:6681d248-54dc-4b06-82ff-2ba8c41bc16b> | 3.921875 | 248 | Knowledge Article | Science & Tech. | 40.183333 |
Ozone Hole Meteorology: 2008 Potential Vorticity
The Antarctic polar vortex acts as a barrier to the exchange of polar and midlatitude air. The potential vorticity (PV) is a conserved quantity that acts as a tracer for motion on an isentropic surface. Plotting contours of PV on an isentropic surface readily shows the extent of the polar vortex. In the Antarctic winter, the PV is more negative going from midlatitudes to the pole. The spacing of PV contours is very wide in the midlatitudes, very tight at the polar vortex edge, and then widens again inside of the vortex. The edge of the vortex is defined to be where the contours of PV are closest together. The area inside of this edge can be determined as the area of the polar vortex.
A useful quantity in analyzing PV and vortex behavior is the equivalent latitude. Since PV tends to decrease from the equator to the South Pole, we calculate the area under successive PV contours. The areas are rearranged so that they vary from 1 at the equator to 0 at the pole. The transformed contours of PV can be thought of as being symmetrically arranged around the pole and monotonically decreasing from the equator to the pole. The latitudes on this transformed map projection are called equivalent latitudes. Plotting PV vs. equivalent latitude shows an interesting shape: an "S" curve, where the upswing area of the "S" is located at the vortex edge. Finding the maximum in the first derivative of PV with respect to equivalent latitude determines the vortex edge.
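A rough sketch of how this procedure might be coded is shown below. The arrays of PV contour values and of the area enclosed poleward of each contour are assumed inputs (they are not provided on this page), and the conversion from area to latitude uses the standard polar-cap relation rather than anything stated here.

```python
# Hedged sketch: equivalent latitude and vortex-edge detection from PV contours.
# Inputs are assumptions: `contour_pv` holds the PV value of each contour and
# `enclosed_area` the area (m^2) enclosed poleward of it on the isentropic surface.
import numpy as np

R_EARTH = 6.371e6  # mean Earth radius, m

def equivalent_latitude(enclosed_area):
    """Latitude of the polar cap whose area equals each contour's enclosed area."""
    frac = np.asarray(enclosed_area) / (2.0 * np.pi * R_EARTH**2)  # 1 at equator, 0 at pole
    frac = np.clip(frac, 0.0, 1.0)
    return np.degrees(np.arcsin(1.0 - frac))   # poleward equivalent latitude, degrees

def vortex_edge(contour_pv, enclosed_area):
    """Equivalent latitude where |dPV/d(equivalent latitude)| is largest."""
    eq_lat = equivalent_latitude(enclosed_area)
    dpv = np.gradient(np.asarray(contour_pv, dtype=float), eq_lat)
    return eq_lat[np.argmax(np.abs(dpv))]      # the vortex edge, per the definition above
```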
Comparison to all years
The following figures show the daily progression through the ozone hole season, comparing the current year to the climatology of all other years.
Why study the parabola?
The parabola has many applications in real life. For example, radiation reaches a parabolic antenna in nearly parallel rays and is reflected to a single point, the focus, where the antenna's receiver is placed. The parabola is the graph of a quadratic function, and quadratic equations are important in real-world models.
Where can I see more parabolas?
The pictures below show that the paths of a stream of water, a baseball, and a golf ball all trace parabolas.
You have seen that the graphs of functions of the form f(x) = ax + b are lines. Parabolas are the graphs of a family of functions known as quadratic functions. A quadratic function has a squared term.
A quadratic function is a function of the form f(x) = ax² + bx + c, where a, b, and c are real-number constants and a is not 0.
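As a small illustration of this definition, the sketch below evaluates a quadratic function and locates the vertex of its parabola; the coefficients are made up for the example and are not taken from the lesson.

```python
# Evaluate a quadratic f(x) = a*x**2 + b*x + c and locate the parabola's vertex.
# The coefficients below are illustrative only.

def quadratic(a, b, c):
    """Return f(x) = a*x^2 + b*x + c as a callable."""
    return lambda x: a * x**2 + b * x + c

a, b, c = 1.0, -4.0, 3.0        # f(x) = x^2 - 4x + 3
f = quadratic(a, b, c)

vertex_x = -b / (2 * a)         # axis of symmetry of the parabola
vertex_y = f(vertex_x)          # a minimum since a > 0 (a maximum if a < 0)

print(vertex_x, vertex_y)                 # -> 2.0 -1.0
print([f(x) for x in range(5)])           # -> [3.0, 0.0, -1.0, 0.0, 3.0]
```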
In this site, you can review parabolas using simulations in the various examples. Most of the simulators are from the ExploreLearning website, which has very useful simulations and quizzes. You can log in free for a one-month trial. Please start the review from Lesson 1.
last modified March 20, 2005
Misoon Park: e-mail | <urn:uuid:7f9fd46b-4ed1-4cc1-8357-ee7a4079d8ff> | 3.84375 | 276 | Tutorial | Science & Tech. | 65.032824 |
Solar Eclipses Overview
Solar and lunar eclipses are not independent. For both kinds of eclipse the Sun, Earth and Moon have to be aligned. If the lunar orbit plane did coincide with the ecliptic plane, there would be a lunar eclipse at each Full Moon and a solar eclipse at each New Moon. However, due to the inclination (5° 8' 43") of the Moon's orbit compared to that of the Earth, for an eclipse to occur the Moon has to pass through the plane of the Earth's orbit close to Full Moon, or close to New Moon. This happens about twice per year. Due to perturbations in the Moon's orbit the eclipse cycle is not exactly six months.
When these conditions are met, there can be a lunar eclipse and a solar eclipse within 15 days of each other. So, before the solar eclipse of 11 August 1999, there was a lunar eclipse of 28 July 1999.
A periodicity in the cycle of eclipses was noticed by ancient Greek astronomers. Its duration is 6585.32 days (or 18 years and 10.3 days), after which time the position of the Sun and the Moon, as viewed from the Earth, recur. This is known as the 'Saros cycle'. Within a given Saros cycle the eclipses succeed each other in almost identical manner, except that the observation position on Earth shifts by 120 degrees (0.32 days) from one eclipse to the next.
On average there are 42 solar eclipses (14 partial solar eclipses, 28 central solar eclipses) and 42 lunar eclipses (including 14 total lunar eclipses) per Saros cycle.
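As a rough illustration of the Saros period in use, the sketch below steps the 11 August 1999 eclipse mentioned above forward by one cycle. This is only date arithmetic with the 6585.32-day period quoted here; a real eclipse prediction requires full lunar theory.

```python
# Step an eclipse date forward by one Saros period (6585.32 days, as quoted above).
from datetime import datetime, timedelta

SAROS_DAYS = 6585.32

eclipse = datetime(1999, 8, 11)                  # total eclipse of 11 August 1999
next_in_series = eclipse + timedelta(days=SAROS_DAYS)

print(next_in_series.date())                     # -> 2017-08-21, one Saros later
# The extra 0.32 day shifts the track of visibility about 120 degrees in longitude.
```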
Geometry and timing of eclipses
For a total eclipse, the region of totality is a narrow band (up to 300 km wide) around the line of centrality. Outside the zone of totality (or annular eclipse) there exists a much more extended area (more than 7000 km) where the eclipse is partial. The shape of this area depends on the respective positions of the Earth, Moon and Sun. The amount of the Sun covered is greatest closest to the totality zone, and decreases symmetrically away from it.
The Moon's shadow sweeps across the Earth at high speed (more than 2500 km/h) from West to East. This determines the duration of the partial eclipse that can last as much as three hours, and of the total eclipse (from seconds to a few minutes) depending on the position of the observer.
The maximum duration of a total eclipse in the most favourable conditions corresponds to 7min 30s in equatorial regions and 6min 10s at the latitude of Paris.
Events during a total eclipse
A total eclipse is one of the most impressive natural phenomena. Before totality, the first phase of partial eclipsing of the solar disk takes some 40 minutes, during which the luminosity progressively drops to a few percent of normal. The cooling of the local atmosphere is already noticeable.
The moment of totality comes very suddenly, at the precise instant when the solar surface is totally occulted, and the ambient luminosity drops to one part in 10 000 of normal solar light in a few seconds. The last photospheric rays shine through the valleys at the lunar limb, letting a little light from the solar photosphere through just before second contact and just after third contact, producing the fast-flashing phenomenon of Baily's Beads for a few seconds. In the middle of the day, suddenly it is deep twilight, with stars to be seen.
We can distinguish four key moments during a solar eclipse:
- the 1st contact, when the apparent disks of the Sun and Moon touch for the first time (the start of a partial phase lasting some 40 minutes),
- the 2nd contact at the start of totality,
- the 3rd contact, at the end of totality (after a few seconds or minutes), followed by another partial phase lasting some 40 minutes more and ending with
- the 4th contact when the apparent disks separate.
The last photospheric light together with the pink ring of the chromosphere and the white inner corona is known as the "Diamond ring". During the few minutes of totality, the dark lunar disk appears with a ring of pink pearls (the solar prominences). During totality one also sees a large diffuse aura (the solar corona), with streamer structures at low solar latitudes and fine radial structures (or plumes) near the poles.
Last Update: 22 Feb 2006 | <urn:uuid:96da5265-177d-414f-b3b3-e564515c6a13> | 4.34375 | 927 | Knowledge Article | Science & Tech. | 50.150405 |
The opinions I more particularly allude to, are those of Berthollet on the Laws of chemical affinity; such as that chemical affinity is proportional to the mass, and that in all chemical unions, there exist insensible gradations in the proportions of the constituent principles. The inconsistence of these opinions, both with reason and observation, cannot, I think, fail to strike every one who takes a proper view of the phenomena.
Whether the ultimate particles of a body, such as water, are all alike, that is, of the same figure, weight, &c. is a question of some importance. From what is known, we have no reason to apprehend a diversity in the particulars: if it does exist in water, it must equally exist in the elements constituting water, namely, hydrogen and oxygen. Now it is scarcely possible to conceive how the aggregates of dissimilar particles should be so uniformly the same. If some of the particles of water were heavier than others, if a parcel of the liquid on any occasion were constituted principally of these heavier particles, it must be supposed to affect the specific gravity of the mass, a circumstance not known. Similar observations may be made on other substances. Therefore we may conclude that the ultimate particles of all homogeneous bodies are perfectly alike in weight, figure, &c. In other words, every particle of water is like every other particle of water; every particle of hydrogen is like every other particle of hydrogen, &c.
Besides the force of attraction, which, in one character or another, belongs universally to ponderable bodies, we find another force that is likewise universal, or acts upon all matter which comes under our cognisance, namely, a force of repulsion. This is now generally, and I think properly, ascribed to the agency of heat. An atmosphere of this subtile fluid constantly surrounds the atoms of all bodies, and prevents them from being drawn into actual contact. This appears to be satisfactorily proved by the observation, that the bulk of a body may be diminished by abstracting some of its heat: But from what has been stated in the last section, it should seem that enlargement and diminution of bulk depend perhaps more on the arrangement, than on the size of the ultimate particles. Be this as it may, we cannot avoid inferring from the preceding doctrine on heat, and particularly from the section on the natural zero of temperature, that solid bodies, such as ice, contain a large portion, perhaps 4/5 of the heat which the same are found to contain in an elastic state, as steam.
We are now to consider how these two great antagonist powers of attraction and repulsion are adjusted, so as to allow of the three different states of elastic fluids, liquids, and solids. We shall divide the subject into four Sections; namely, first, on the constitution of pure elastic fluids; second, on the constitution of mixed elastic fluids; third, on the constitution of liquids, and fourth, on the constitution of solids.
[I have omitted the sections of chapter II. --CJG]
Chemical analysis and synthesis go no farther than to the separation of particles one from another, and to their reunion. No new creation or destruction of matter is within the reach of chemical agency. We might as well attempt to introduce a new planet into the solar system, or to annihilate one already in existence, as to create or destroy a particle of hydrogen. All the changes we can produce, consist in separating particles that are in a state of cohesion or combination, and joining those that were previously at a distance.
In all chemical investigations, it has justly been considered an important object to ascertain the relative weights of the simples which constitute a compound. But unfortunately the enquiry has terminated here; whereas from the relative weights in the mass, the relative weights of the ultimate particles or atoms of the bodies might have been inferred, from which their number and weight in various other compounds would appear, in order to assist and to guide future investigations, and to correct their results. Now it is one great object of this work, to shew the importance and advantage of ascertaining the relative weights of the ultimate particles, both of simple and compound bodies, the number of simple elementary particles which constitute one compound particle, and the number of less compound particles which enter into the formation of one more compound particle.
If there are two bodies, A and B, which are disposed to combine, the following is the order in which the combinations may take place, beginning with the most simple: namely,
The following general rules may be adopted as guides in all our investigations respecting chemical synthesis.
From the application of these rules, to the chemical facts already well ascertained, we deduce the following conclusions; 1st. That water is a binary compound of hydrogen and oxygen, and the relative weights of the two elementary atoms are as 1:7, nearly; 2d. That ammonia is a binary compound of hydrogen and azote, and the relative weights of the two atoms are as 1:5, nearly; 3d. That nitrous gas is a binary compound of azote and oxygen, the atoms of which weigh 5 and 7 respectively; that nitric acid is a binary or ternary compound according as it is derived, and consists of one atom of azote and two of oxygen, together weighing 19; that nitrous oxide is a compound similar to nitric acid, and consists of one atom of oxygen and two of azote, weighing 17; that nitrous acid is a binary compound of nitric acid and nitrous gas, weighing 31; that oxynitric acid is a binary compound of nitric acid with oxygen, weighing 26; 4th. That carbonic oxide is a binary compound, consisting of one atom of charcoal, and one of oxygen, together weighing nearly 12; that carbonic acid is a ternary compound, (but sometimes binary) consisting of one atom of charcoal, and two of oxygen, weighing 19; &c. &c. In all these cases the weights are expressed in atoms of hydrogen, each of which is denoted by unity.
In the sequel, the facts and experiments from which these conclusions are derived, will be detailed; as well as a great variety of others from which are inferred the constitution and weight of the ultimate particles of the principal acids, the alkalis, the earths, the metals, the metallic oxides and sulphurets, the long train of neutral salts, and in short, all the chemical compounds which have hitherto obtained a tolerably good analysis. Several of the conclusions will be supported by original experiments.
From the novelty as well as importance of the ideas suggested in this chapter, it is deemed expedient to give plates, exhibiting the mode of combination in some of the more simple cases. A specimen of these accompanies this first part. The elements or atoms of such bodies as are conceived at present to be simple, are denoted by a small circle, with some distinctive mark; and the combinations consist in the juxta-position of two or more of these; when three or more particles of elastic fluids are combined together in one, it is supposed that the particles of the same kind repel each other, and therefore take their stations accordingly.
1. Hydrogen, its relative weight: 1
3. Carbone or charcoal: 5
22. An atom of ammonia, composed of 1 of azote and 1 of hydrogen: 6
23. An atom of nitrous gas, composed of 1 of azote and 1 of oxygen: 12
24. An atom of olefiant gas, composed of 1 of carbone and 1 of hydrogen: 6
25. An atom of carbonic oxide, composed of 1 of carbone and 1 of oxygen: 12
26. An atom of nitrous oxide, 2 azote + 1 oxygen: 17
27. An atom of nitric acid, 1 azote + 2 oxygen: 19
28. An atom of carbonic acid, 1 carbone + 2 oxygen: 19
29. An atom of carburetted hydrogen, 1 carbone + 2 hydrogen: 7
30. An atom of oxynitric acid, 1 azote + 3 oxygen: 26
31. An atom of sulphuric acid, 1 sulphur + 3 oxygen: 34
32. An atom of sulphuretted hydrogen, 1 sulphur + 3 hydrogen: 16
33. An atom of alcohol, 3 carbone + 1 hydrogen: 16
34. An atom of nitrous acid, 1 nitric acid + 1 nitrous gas: 31
35. An atom of acetous acid, 2 carbone + 2 water: 26
36. An atom of nitrate of ammonia, 1 nitric acid + 1 ammonia + 1 water: 33
37. An atom of sugar, 1 alcohol + 1 carbonic acid: 35
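To make the arithmetic behind the plate explicit, the short sketch below recomputes several of the compound weights from the element weights. Hydrogen (1) and carbone (5) are listed above; azote (5) and oxygen (7) are not in the excerpted legend but follow from the ratios Dalton states in the chapter (1:5 for hydrogen to azote in ammonia, 1:7 for hydrogen to oxygen in water).

```python
# Recompute a few of Dalton's compound weights from his relative atomic weights.
# Hydrogen and carbone are from the plate legend; azote and oxygen are inferred
# from the 1:5 and 1:7 ratios stated in the chapter.
ELEMENTS = {"hydrogen": 1, "azote": 5, "carbone": 5, "oxygen": 7}

def compound_weight(composition):
    """Sum of (number of atoms) x (relative weight) over the elements present."""
    return sum(n * ELEMENTS[el] for el, n in composition.items())

examples = {
    "ammonia (1 azote + 1 hydrogen)":        {"azote": 1, "hydrogen": 1},   # 6
    "nitrous gas (1 azote + 1 oxygen)":      {"azote": 1, "oxygen": 1},     # 12
    "nitrous oxide (2 azote + 1 oxygen)":    {"azote": 2, "oxygen": 1},     # 17
    "nitric acid (1 azote + 2 oxygen)":      {"azote": 1, "oxygen": 2},     # 19
    "carbonic acid (1 carbone + 2 oxygen)":  {"carbone": 1, "oxygen": 2},   # 19
}

for name, comp in examples.items():
    print(name, compound_weight(comp))
```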
The list of new and emerging technologies enabled by the convergence of nanotechnology, life sciences and information technology is one item longer today following the announcement from the University of Glasgow about inorganic biology.
The project head, Lee Cronin, explains that “All life on earth is based on organic biology (i.e. carbon in the form of amino acids, nucleotides, and sugars etc) but the inorganic world is considered to be inanimate.
“What we are trying to do is create self-replicating, evolving inorganic cells that would essentially be alive. You could call it inorganic biology.”
But Professor Cronin has just used a number of phrases, perhaps intentionally, which will trigger yet another debate about playing God, raise worries about what happens when the cells escape from the lab and take over the world, and bring up the subject of responsible versus irresponsible innovation.
Whether developing a technology such as inorganic biology is classified as responsible or irresponsible depends as much on your ethical and religious views as it does on the science. The only sure thing is that the technology will be developed anyway once the genie is out of the bottle, and as with many other technologies we have to attempt to manage them in a way that gives us the best shot at producing beneficial effects.
Responsible innovation is something that seems to be trending, at least in Europe, as a way of ensuring that new and emerging technologies do not create any unpleasant side effects. To some extent it seems similar to the precautionary principle, which has been used as an argument against everything from GMOs to nanotechnology, and can be used as an effective tool to sway political opinion against any new technology.
I would suggest, however, that thinking about responsible innovation should start only when technology reaches the stage of commercialisation, and that everything up to that point is just scientific curiosity. The howls of “what if science creates a monster?” have to be balanced against the progress that science has made over the past three hundred years, and while the products of science have not always been beneficial, we can live lives free of cholera and access whatever information we want whenever we want. It is impossible to see, from the lab bench, the final application of any technology – neither the inventors of the transistor nor science fiction writers predicted the mobile phone, and I can’t remember anyone in the dot.com era predicting Facebook or Twitter.
So responsible innovation should be something for companies to practice rather than scientists, just like open innovation. It’s an idea that fits nicely alongside the drift towards sustainability, shifting from the linear take-make-waste model that has been used ever since the industrial revolution to a more cyclical zero waste one enabled by life sciences. But the concept of responsible innovation needs more definition. Was the development of nuclear weapons responsible innovation, as some would argue that they ended the Second World War and prevented a third one, or does their acquisition by rogue states such as North Korea render the whole field irresponsible? Was the development of polymers responsible, as it enabled huge advances in quality of life, or irresponsible as much of the plastic waste produced ends up in landfills or in the world’s oceans?
While industry is changing, and far more questions are being asked about safety and ethics than in the mid twentieth century, the idea of responsible innovation becomes far more dangerous in the hands of governments and regulatory bodies. An increasing number of publicly funded projects require applicants to answer all kinds of questions about the ethics and sustainability of the proposed research. Adding a fluffy, ill-defined term such as ‘responsible’ to the mix raises the risk of research being judged by personal rather than scientific criteria. It would certainly be irresponsible to start demanding answers about responsibility too early, and before defining an end use or application of the technology, something that would risk putting the brakes on innovation and add to regulatory confusion. The use of nanotechnology in food, drugs or solar cells, for example, requires vastly different regulatory structures, even if the same nanomaterials are used for each application.
Is inorganic biology responsible or irresponsible innovation? It is way too early to answer that question, and we shouldn’t even try until we know what it will be used for. It may even prove to be a scientific dead end, and much of the debate about ethics, safety and regulation will end up as productive and relevant as the debate about ‘gray goo.’ | <urn:uuid:c3efda46-fcfa-41fd-8da7-14afc5180439> | 2.71875 | 910 | Personal Blog | Science & Tech. | 24.744844 |
All I have seen on it says there is no steam involved, simply
mechanical pressure...I'm not sure how the thing works....there was a
guy named Schaeffer in Chicago (now dead) who was using high intensity
mechanical shock pressure applied to water to produce steam.
I have a bit of information on him with a couple of bad copies of
photos. I think it works along the lines of 'hydrosonics' as with the
Griggs counter-rotating drums, spaced about 1/4" apart and which
causes molecular shearing of the water to produce heat from that action.
A similar device is the Yusmar vortex heat generator by Dr. Potapov
over in Russia. It uses a compressive vortex chamber and is claimed
to produce sufficient heat to heat buildings simply by water flowing
at high velocity. Is this the one you are referring to as Pavlov?
Pavlov was the guy who did behavioral research and is best remembered
for causing dogs to salivate on hearing a bell.
To see something on the Yusmar, check out;
Tests of the Yusmar in the states (they were looking for overunity
effects, more so than the heating that is claimed in the device, though
Potapov says it is self-running once started);
A similar anomalous effect is claimed in a self-running device using
high velocity, high pressure cooking oil as described in the Clem engine.
The thinking with regard to this heating effect is that the phenomenon
takes the form of a Ranque-Hilsch tube (you know, the air vortex tubes)
which produce heat on one side and cold on the other, using molecular
separation based on heat density/size of the air molecules.
The heat swollen excited molecules are vectored into one path, like a
sieve or filter...the smaller, cooler ones are the cold ones and pass
to the other path.
How to build a Ranque-Hilsch vortex tube;
---Marinus Berghuis wrote:
> At 10:19 7/01/99 -0800, you wrote:
> >Hi Phil!
> >There does appear to be a way to mechanically produce a pressure from
> >water that can be scaled to almost any size.
> >Good Day Jerry,
> While cleaning up my files, came across this one and ask, would this
> pressure device work on the same principle as Pavlov's water heater?
> Surely if you increase the pressure from the pump, the water would
> eventually turn into steam !
> I found a brass gear pump on the property and thought of mounting
> this on a
> C.N.G. tank and let it pump to its maximum and see what happens!
> Do you know of anyone who has tried the experiment?
> please advise
Lichens are classified by their fungal components. The name
given to a lichen is actually the name of its fungal component. Therefore,
the identity of the photobiont does not determine the taxonomic grouping
of a lichen.
The taxonomic classification of C. coralloides is as follows:
Kingdom: Fungi: have non-motile bodies made up of apically elongating
filaments called hyphae, a life cycle with sexual and asexual reproduction,
haploid thalli resulting from zygotic meiosis, and heterotrophic
nutrition. Spindle pole bodies, not centrioles, usually are associated
with the nuclear envelope during cell division. The characteristic
wall components are chitin (beta-1,4-linked homopolymers of N-acetylglucosamine
in microcrystalline state) and glucans primarily alpha-glucans
(alpha-1,3- and alpha-1,6- linkages) (Griffin, 1993).
Division: Ascomycota: is the division of sac-forming fungi. This
is the largest group of fungi and at least 15,000 of them form lichens. These
fungi are grouped together because they hold their spores in sacs called
asci, which are organized within the fertile tissue. However,
some lichenologists and mycologists think that lichens should be grouped
together into a division of their own, Lichenes.
Order: Lecanorales: fruiting bodies are apothecia.
Family: Teloschistaceae: This family is made up of crustose,
foliose and fruticose lichens that have lecanorine apothecia (lecanorine
means that the margin of the apothecium contains photobionts and is
similar in color and texture to the thallus) with (usually) orange
apothecial discs (anthraquinones present). Spores are colorless and
polarilocular. Members of the family Teloschistaceae usually live on
trees and rocks from arctic to temperate regions. The genera within
the family Teloschistaceae are Caloplaca, Blastenia, Fulgensia, Gasparrinia, and Teloschistes.
-Lichen families are generally characterized by the
structure of their fruiting bodies.
Genus: Caloplaca: is a large heterogeneous group of mostly
crustose species. All of the species of Caloplaca have orange
apothecial disks that react deep red/purple with KOH (because they
have anthraquinones); sometimes the thallus and vegetative structures
also react deep red/purple with KOH. Spores are colorless, two-celled.
-Lichen genera can be told apart by the color and septation
of their ascospores.
Species: coralloides: is a dwarf fruticose growth form.
The thallus is greenish yellow to orangeish yellow in color. The thin
yellow prothallus is sometimes present around the base of branches.
The branches are terete and dichotomous to subdichotomously branched.
The thallus of C. coralloides is nodulose and has pseudocyphellae.
Algae are present in scattered clumps below the cortex and hypothecium.
Apothecia can be absent or present, but are common and 0.4-2.0 mm in
size. Apothecia are present terminally or laterally on branches. The
apothecial disc can be concave, flat or convex and is darker orange
than the rest of the thallus. The hymenium (fertile layer in apothecia)
is 65-90 um thick; paraphyses (vertically oriented hyphae in hymenium)
are mostly unbranched but occasionally branched. The spores of C.
coralloides are polaribilocular or two-celled, ellipsoid and colorless.
Pycnidia mostly present and abundant and are immersed to somewhat raised,
orange, and slightly glossy. The major cortical pigment is parietin,
but C. coralloides also has emodin, teloschistin, parietinic
acid and fallacinal. C. coralloides is confined to the coast
and grows in the lower part of the supralittoral zone mostly on vertical
surfaces of hard rocks. C. coralloides is not sorediate.
- Lichen species are delineated somewhat randomly by
the structure of the ascocarp and vegetative characters. The more complex foliose
and fruticose lichens are usually separated by the presence or absense of soredia,
isidia, cilia, pseudocyphellae and lobules, and by the type of rhizines, the
structure of the exciple, spore size and septation and chemistry.
The Photobiont: Trebouxia
The green algal component of C. coralloides is most likely Trebouxia,
the most common green algal photobiont in lichens. The taxonomy of Trebouxia is
Cyanobacteria can also be photobionts in lichens (although this is
not the case in C. coralloides). The most common cyanobacteria
lichen photobiont is Nostoc, which is found in most jelly lichens. | <urn:uuid:c9284b56-9f68-4c4e-ba2d-567b830423fd> | 3.96875 | 1,198 | Knowledge Article | Science & Tech. | 24.872152 |
“It all started when I was about six years old and saw that fantastic tornado in The Wizard of Oz.”
Photo by Carsten Peter, National Geographic
About Tim Samaras
Photo by Carsten Peter, National Geographic
As families scrambled to avoid deadly tornadoes, Tim Samaras raced straight toward them. He careened across the United States' notorious Tornado Alley on a mission: Predict the exact coordinates of an unborn tornado, arrive before it does, and place a weather-measurement probe directly in the twister's violent, swirling path.
"Data from the probes helps us understand tornado dynamics and how they form. With that piece of the puzzle we can make more precise forecasts and ultimately give people earlier warnings," Samaras explained. Since current warnings average a slim 13 minutes, every extra second of warning can be a lifesaver for residents facing a twister's wrath.
"It all started when I was about six years old and saw that fantastic tornado in The Wizard of Oz," Samaras said. About 20 years ago he began storm chasing. He spent every May and June putting 25,000 miles on his vehicle, chasing zigzagging tornadoes across the Plains.
"About five years ago, as an engineer," he noted, "I designed the next generation of probe to measure pressure drops inside tornadoes." A history-making instrument, Samaras's "turtle" probe has recorded record-breaking drops in pressure—the condition that triggers a tornado's extreme wind speeds. "This information is especially crucial, because it provides data about the lowest ten meters of a tornado, where houses, vehicles, and people are."
His car jammed with GPS gear, radios, scanners, a wireless Internet connection, and satellite tracking devices, Samaras constantly checked the forecast, data, and sky. "I only have one shot at being at the right spot," he said. "The worst is being five minutes late. One traffic jam or detour and you can miss the whole show. That's why we try to anticipate the action and arrive while there's still nothing but blue sky. The storms develop right over our heads, and we follow them as they form."
Often the fury fizzled. Tornadoes developed from only two out of every ten storms Samaras followed. And deploying a probe was only possible during two out of every ten tornadoes. "The odds are really against us," he admitted. "Storm chasing is probably the most frustrating thing one can do."
But then there were days like June 24, 2003. On a sleepy country road near Manchester, South Dakota, a half-mile-wide F4 tornado dropped from the sky and barreled across the landscape with more than 200-mile-an-hour winds. At precisely the right place and time, Samaras deployed three probes, the last one placed as he leapt from his car a mere 100 yards ahead of the approaching tornado. Sixty seconds later the tornado crossed that exact spot, full force.
"That's the closest I've been to a violent tornado, and I have no desire to ever be that close again," he recalled. "The rumble rattled the whole countryside, like a waterfall powered by a jet engine. Debris was flying overhead, telephone poles were snapped and flung 300 yards through the air, roads ripped from the ground, and the town of Manchester literally sucked into the clouds. You could see the tornado's path perfectly carved through a cornfield where, like a giant harvester, it had mowed stalks down to the ground."
Amazingly, his probe survived the tornado's direct hit, unmoved from the spot where it had been deployed. Thanks to the pyramid shape Samaras designed, wind actually pushes the probe into the ground, helping to hold the device in place.
A six-inch-high weather station encased in steel, the probe has sensors that measure humidity, pressure, temperature, wind speed, and direction.
"When I downloaded the probe's data into my computer, it was astounding to see a barometric pressure drop of a hundred millibars at the tornado's center. That's the biggest drop ever recorded—like stepping into an elevator and hurtling up 1,000 feet in ten seconds."
Every storm is different. Some require deploying probes as baseball-size hail falls. In another instance Samaras watched telephone poles fall in front of him, sending arcs of sparks exploding across the road as he made his escape. Yet another tornado developed at night, only allowing glimpses of its oncoming path during flashes of lightning.
"My passion for storm chasing has always been driven by the beautiful and powerful storms displayed in the heartland each spring."
From The Society
"We were shocked and deeply saddened by the news that longtime National Geographic grantee Tim Samaras was killed in a tornado in Oklahoma on Friday, May 31, along with Tim's son Paul and their colleague Carl Young. Tim was a courageous and brilliant scientist who fearlessly pursued tornadoes and lightning in the field in an effort to better understand these phenomena.
The National Geographic Society made 18 grants to Tim for research over the years for field work like he was doing in Oklahoma at the time of his death, and he was one of our 2005 Emerging Explorers. Tim's research included creation of a special probe he would place in the path of a twister to measure data from inside the tornado; his pioneering work on lightning was featured in the August 2012 issue of National Geographic magazine.
Though we sometimes take it for granted, Tim's death is a stark reminder of the risks encountered regularly by the men and women who work for us. This is an enormous loss for his family, his wide circle of friends and colleagues and National Geographic." | <urn:uuid:6572ddb0-58cd-4eb9-bc17-f29f4bd43a61> | 2.90625 | 1,175 | Nonfiction Writing | Science & Tech. | 46.97052 |
Applications of Special Relativity
The Twin Paradox
The so-called 'Twin Paradox' is one of the most famous problems in all of science. Fortunately for relativity it is not a paradox at all. As has been mentioned, Special and General Relativity are both self-consistent within themselves and within physics. We will state the twin paradox here and then describe some of the ways in which the paradox can be resolved.
The usual statement of the paradox is that one twin (call her A) remains at rest on the earth while another twin flies from the earth to a distant star at a high velocity (compared to c). Call the flying twin B. B reaches the star and turns around and returns to earth. The twin on earth (A) will see B's clock running slowly due to time dilation. So if the twins compare ages back on earth, twin B should be younger. However, from B's point of view (in her reference frame) A is moving away at high speed as B moves towards the distant star, and later A is moving towards B at high speed as B moves back towards the earth. According to B, then, time should run slowly for A on both legs of the trip; thus A should be younger than B! It is not possible that both twins are right: the twins can compare clocks back on earth, and either A's must show more time than B's or vice versa (or perhaps they are the same). Who is right? Which twin is younger?
The reasoning from A's frame is correct: twin B is younger. The simplest way to explain this is to say that in order for twin B to leave the earth and travel to a distant star she must accelerate to speed v. Then when she reaches the star she must slow down, turn around, and accelerate in the other direction. Finally, when B reaches the earth again she must decelerate from v to land once more on the earth. Since B's route involves acceleration, her frame cannot be considered an inertial reference frame, and thus none of the reasoning applied above (such as time dilation) holds in her frame. To deal with the situation in B's frame we must enter into a much more complicated analysis involving accelerating frames of reference; this is the subject of General Relativity. It turns out that while B is moving with speed v, A's clock does run comparatively slowly, but while B is accelerating A's clock runs faster, to such an extent that the overall elapsed time is measured as being shorter in B's frame. Thus the reasoning in A's frame is correct and B is younger.
However, we can also resolve the paradox without resorting to General Relativity. Consider B's path to the distant star lined with many lamps. The lamps flash on and off simultaneously in twin A's frame. Let the time measured between successive flashes of the lamps in A's frame be Δt_A. What is the time between flashes in B's frame? As we learned earlier, the flashes cannot occur simultaneously in B's frame; in fact B measures the flashes ahead of her to occur earlier than the flashes behind her (B is moving towards those lamps ahead of her). Since B is always moving towards the flashes which happen earlier, the time between flashes is less in B's frame. In B's frame the distance between flash-events is zero (B is at rest), so Δx_B = 0; thus Δt_A = γ(Δt_B - vΔx_B/c²) gives:
Δt_B = Δt_A/γ
Thus the time between flashes is less in B's frame than in A's frame. Let N be the total number of flashes that B sees during her entire journey. Both twins must agree on the number of flashes seen during the journey. Thus the total time of the journey in A's frame is T_A = NΔt_A, and the total time in B's frame is T_B = NΔt_B = N(Δt_A/γ). Thus:
T_B = T_A/γ
Thus the total journey time is less in B's frame and hence she is the younger twin.
All this is fine. But what about in B's frame? Why can't we employ the same analysis of A moving past flashing lamps to show that in fact A is younger? The simple answer is that the concept of 'B's frame' is ambiguous; B is in fact in two different frames depending on her direction of travel. This can be seen on a Minkowski diagram:
If d is the distance from the earth to the star in A's frame, then in B's frames that distance is length-contracted to d/γ, so the total journey time recorded by B is:

T_B = 2d/(γv)

During each leg B measures A's clock to run slow by a factor of γ, so the time B attributes to A over the two legs is only T_B/γ. The remaining portion of A's time, call it τ, is the stretch of A's worldline that B's lines of simultaneity sweep over at the turnaround:

T_A = T_B/γ + τ = 2d/(γ²v) + τ

What is τ? We can see from the diagram that the slopes of the lines of simultaneity are ±v/c (that is, tanθ = v/c), so the skipped stretch of A's worldline satisfies cτ = 2d tanθ = 2dv/c, giving τ = 2dv/c². Thus:

T_A = 2d/(γ²v) + 2dv/c² = 2d/v
Comparing T_A and T_B we see T_B = T_A/γ, which is the same result we arrived at above. A measures more time and B is younger.
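A quick numerical check of these relations is given below; the values v = 0.8c and d = 4 light-years are illustrative assumptions, not taken from the text.

```python
# Numerical check of T_B = T_A/gamma and of the turnaround term tau = 2dv/c^2.
# v = 0.8c and d = 4 light-years are illustrative values only.
import math

c = 1.0                         # work in units where c = 1 (times in years)
v = 0.8                         # B's cruising speed as a fraction of c
d = 4.0                         # one-way distance in A's frame, light-years

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

T_A = 2 * d / v                 # round-trip time measured by A
T_B = T_A / gamma               # round-trip time measured by B
tau = 2 * d * v / c**2          # A's time "skipped over" at B's turnaround

print(round(T_A, 6), round(T_B, 6))        # -> 10.0 6.0 : B returns younger
print(round(T_A / gamma**2 + tau, 6))      # -> 10.0, i.e. T_A = T_B/gamma + tau
```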
Mite is a common name for most members of the subclass Acari, a large, diverse group of tiny ARACHNIDS that also includes TICKS. Worldwide, over 40 000 species of mites have been described of an estimated 500 000 to one million; in Canada, about 2500 of an estimated 11 000. Mites may exceed INSECTS in numbers of individuals: up to one million mites, representing some 100 species, may occur in one square metre of forest soil and litter in Canada.
The earliest fossils are 380 million years old. Mite remains of present-day genera are commonly found in pieces of 80- to 100-million-year-old amber from Canada, Mexico and Europe.
As their common name indicates, "mites" are unusually small; most range from 0.1 to 10 mm. Mites come in all colours; many are dull, but some, especially water mites, are bright red, blue or green. Like spiders, mites lack well-defined abdominal segments, but unlike spiders their abdomen is not separated from the rest of the body by a narrow waist and does not bear silk-producing spinnerets. Mites (and ticks) are unique among arachnids (except for a small obscure tropical order, the Ricinulei) in having a movable headlike anterior body region, the gnathosoma, which articulates with the rest of the body, the idiosoma.
Mites grow by gradual metamorphosis, and typically have the following stages: egg, 6-legged larva, 8-legged nymph (of which there may be 1 to 3 stages) and 8-legged sexually reproductive adult.
Mites and ticks are the most ubiquitous single animal group, living in nearly every terrestrial and aquatic habitat, including deep soils and forest canopies, cold and thermal springs and subterranean waters. They occur in all types of streams, ponds, lakes and in sea waters of continental shelves and deep sea trenches along Canada's extensive coasts. Mites may disperse by air currents or by birds, mammals and flying insects.
Many mites have developed nonpredacious feeding habits (feeding on bacteria, yeasts, fungi, algae, mosses and higher plants). Others parasitize insects and vertebrates (except fish), some being found in secretive spots (in auditory organs of moths, in respiratory passages of bees, inside bird quills, under lizard scales, in cloacal cavities of turtles, in lungs of seals and in human facial and chest pores).
Relationship with Humans
Mites are both destructive and beneficial. Herbivorous spider mites are pests of various crops, forest trees and ornamentals. Some herbivorous eriophyid mites form galls or transmit plant viruses, including wheat streak mosaic in Canada. Mites cause great economic losses in stored grain, other food and organic products.
House dust mites concentrate allergenic materials. Other mites - eg, chiggers, mange and scabies mites - are important parasites and sometimes transmit human and livestock diseases.
Some species are beneficial as predators of herbivorous mites; others feed on weeds. Oribatid mites are important in decomposing organic matter, recycling nutrients and in soil formation. The importance of mites as bioindicators of soil and water quality is just beginning to be understood.
See also BIODIVERSITY.
Oribatid mite (photo by Barbara Eamer).
EVERT E. LINDQUIST and VALERIE M. BEHAN-PELLETIER
David E. Walter and Heather C. Proctor, Mites: Ecology, Evolution and Behaviour (1999).
Links to Other Sites
The Canadian National Collection of Insects, Arachnids and Nematodes
This website provides information about the scope and contents of the Canadian National Collection of Insects, Arachnids and Nematodes. Check the “Index” link for illustrated descriptions of various taxonomic groups.
Dr. Donald A. Chant
An obituary for acclaimed scientist and environmental activist Dr. Donald A. Chant. From the website for the Entomological Society of Canada.
An online guide to benthic invertebrates found in or on the bottom sediments of rivers, streams, and lakes in Ontario and other regions of Canada. From ecospark.ca | <urn:uuid:38c8ed50-3567-46bd-97ce-264c58d1798f> | 4.09375 | 917 | Knowledge Article | Science & Tech. | 37.555793 |
Symmetrical Vertical Curves
A symmetrical vertical curve is one in which the horizontal distance from the PVI to the PVC is equal to the horizontal distance from the PVI to the PVT. In other words, l1 equals l2.
The solution of a typical problem dealing with a symmetrical vertical curve will be presented step by step. Assume that you know the following data:
g1 = +9%
g2 = –7%
L = 400.00 feet, or 4 stations
The station of the PVI = 30 + 00
The elevation of the PVI = 239.12 feet
The problem is to compute the grade elevation of the curve to the nearest hundredth of a foot at each 50-foot station. Figure 11-17 shows the vertical curve to be solved.
Figure 11-17.—Symmetrical vertical curve.
STEP 1: Prepare a table as shown in figure 11-18. In this figure, column 1 shows the stations; column 2, the elevations on tangent; column 3, the ratio of x/l; column 4, the square of that ratio, (x/l)²; column 5, the vertical offsets, (x/l)²(e); column 6, the grade elevations on the curve; column 7, the first difference; and column 8, the second difference.
STEP 2: Compute the elevations and set the stations on the PVC and the PVT.
Knowing both the gradients at the PVC and PVT and the elevation and station at the PVI, you can compute the elevations and set the stations on the PVC and the PVT. The gradient (g1) of the tangent at the PVC is given as +9 percent. This means a rise in elevation of 9 feet for every 100 feet of horizontal distance. Since L is 400.00 feet and the curve is symmetrical, l1 equals l2 equals 200.00 feet; therefore, there will be a difference of 9 x 2, or 18, feet between the elevation at the PVI and the elevation at the PVC. The elevation at the PVI in this problem is given as 239.12 feet; therefore, the elevation at the PVC is 239.12 – 18 = 221.12 feet.
Calculate the elevation at the PVT in a similar manner. The gradient (g2) of the tangent at the PVT is given as –7 percent. This means a drop in elevation of 7 feet for every 100 feet of horizontal distance. Since l1 equals l2 equals 200 feet, there will be a difference of 7 x 2, or 14, feet between the elevation at the PVI and the elevation at the PVT. The elevation at the PVT therefore is
239.12 – 14 = 225.12 feet.
In setting stations on a vertical curve, remember that the length of the curve (L) is always measured as a horizontal distance. The half-length of the curve is the horizontal distance from the PVI to the PVC. In this problem, l1 equals 200 feet. That is equivalent to two 100-foot stations and may be expressed as 2 + 00. Thus the station at the PVC is
30 + 00 minus 2 + 00, or 28 + 00.
The station at the PVT is
30 + 00 plus 2 + 00, or 32 + 00.
List the stations under column 1.
STEP 3: Calculate the elevations at each 50-foot station on the tangent.
From Step 2, you know there is a 9-foot rise in elevation for every 100 feet of horizontal distance from the PVC to the PVI. Thus, for every 50 feet of horizontal distance, there will be a rise of 4.50 feet in elevation. The elevation on the tangent at station 28 + 50 is
221.12 + 4.50 = 225.62 feet.
The elevation on the tangent at station 29 + 00 is
225.62 + 4.50 = 230.12 feet.
Figure 11-18.—Table of computations of elevations on a symmetrical vertical curve.
The elevation on the tangent at station 29 + 50 is
230.12 + 4.50 = 234.62 feet.
The elevation on the tangent at station 30 + 00 is
234.62 + 4.50 = 239.12 feet.
In this problem, to find the elevation on the tangent at any 50-foot station starting at the PVC, add 4.50 to the elevation at the preceding station until you reach the PVI. At this point use a slightly different method to calculate elevations because the curve slopes downward toward the PVT. Think of the elevations as being divided into two groups: one group running from the PVC to the PVI; the other group running from the PVT to the PVI.
Going downhill on a gradient of –7 percent from the PVI to the PVT, there will be a drop of 3.50 feet for every 50 feet of horizontal distance. To find the elevations at stations between the PVI and the PVT in this particular problem, subtract 3.50 from the elevation at the preceding station. The elevation on the tangent at station 30 + 50 is
239.12 – 3.50, or 235.62 feet.
The elevation on the tangent at station 31 + 00 is
235.62 – 3.50, or 232.12 feet.
The elevation on the tangent at station 31 + 50 is
232.12 – 3.50, or 228.62 feet.
The elevation on the tangent at station 32 + 00 (PVT) is
228.62 – 3.50, or 225.12 feet.
The last subtraction provides a check on the work you have finished. List the computed elevations under column 2.
STEP 4: Calculate (e), the middle vertical offset at the PVI. First, find (G), the algebraic difference of the gradients, using the formula
G = g2 – g1 = –7 – (+9) = –16.
The middle vertical offset (e) is calculated as follows:
e = LG/8 = [(4)(–16)]/8 = –8.00 feet.
The negative sign indicates that e is to be subtracted from the PVI.
STEP 5: Compute the vertical offsets at each 50-foot station, using the formula (x/l)²e. To find the vertical offset at any point on a vertical curve, first find the ratio x/l; then square it and multiply by e; for example, at station 28 + 50, the ratio is
x/l = 50/200 = 1/4.
Therefore, the vertical offset is
(1/4)²e = (1/16)e.
The vertical offset at station 28 + 50 equals
(1/16)(–8) = –0.50 foot.
Repeat this procedure to find the vertical offset at each of the 50-foot stations. List the results under columns 3, 4, and 5.
STEP 6: Compute the grade elevation at each of the 50-foot stations.
When the curve is on a crest, the sign of the offset will be negative; therefore, subtract the vertical offset (the figure in column 5) from the elevation on the tangent (the figure in column 2); for example, the grade elevation at station 29 + 50 is
234.62 – 4.50 = 230.12 feet.
Obtain the grade elevation at each of the stations in a similar manner. Enter the results under column 6.
Note: When the curve is in a dip, the sign will be positive; therefore, you will add the vertical offset (the figure in column 5) to the elevation on the tangent (the figure in column 2).
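Because Steps 2 through 6 are purely arithmetic, they are easy to check with a short script. The sketch below is a minimal, illustrative check using the data of this example; the variable names and printout format are not part of the original table. Grades in percent equal feet per station, since one station is 100 feet.
# Minimal check of Steps 2-6 for the example curve (values in feet and stations).
g1, g2 = 9.0, -7.0                 # tangent grades, percent (feet per station)
L = 4.0                            # curve length in stations (400 ft)
l = L / 2.0                        # half-length: 2 stations
pvi_sta, pvi_elev = 30.0, 239.12   # PVI at station 30 + 00, elevation 239.12 ft

e = L * (g2 - g1) / 8.0            # middle vertical offset: (4)(-16)/8 = -8.00 ft
pvc_sta, pvc_elev = pvi_sta - l, pvi_elev - g1 * l   # 28 + 00, 221.12 ft
pvt_sta = pvi_sta + l                                # 32 + 00

sta = pvc_sta
while sta <= pvt_sta + 1e-9:
    if sta <= pvi_sta:             # ascending tangent, PVC to PVI
        tangent = pvc_elev + g1 * (sta - pvc_sta)
        x = sta - pvc_sta          # distance from the PVC, stations
    else:                          # descending tangent, PVI to PVT
        tangent = pvi_elev + g2 * (sta - pvi_sta)
        x = pvt_sta - sta          # distance from the PVT, stations
    offset = (x / l) ** 2 * e      # column 5 (negative on a crest)
    curve = tangent + offset       # column 6, grade elevation on the curve
    print(f"station {sta:5.2f}: tangent {tangent:6.2f}, offset {offset:5.2f}, curve {curve:6.2f}")
    sta += 0.5                     # every 50 feet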
STEP 7: Find the turning point on the vertical curve.
When the curve is on a crest, the turning point is the highest point on the curve. When the curve is in a dip, the turning point is the lowest point on the curve. The turning point will be directly above or below the PVI only when both tangents have the same percent of slope (ignoring the algebraic sign); otherwise, the turning point will be on the same side of the curve as the tangent with the least percent of slope.
The horizontal location of the turning point is either measured from the PVC if the tangent with the lesser slope begins there or from the PVT if the tangent with the lesser slope ends there. The horizontal location is found by the formula
xt = gL/G
where:
xt = distance of turning point from PVC or PVT
g = lesser slope (ignoring signs)
L = length of curve in stations
G = algebraic difference of slopes.
For the curve we are calculating, the computation would be xt = (7 x 4)/16 = 1.75 stations; therefore, the turning point is 1.75 stations, or 175 feet, from the PVT (station 30 + 25).
The vertical offset for the turning point is found by the formula (xt/l)²e, where l is the half-length of the curve in stations.
For this curve, then, the computation is (1.75/2)² x 8 = 6.12 feet.
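As a check, a short continuation of the earlier sketch (using the same assumed data and variables) reproduces these turning-point numbers:
# Turning point of the example curve (continues the sketch above).
g_lesser = min(abs(g1), abs(g2))       # 7, the flatter tangent (it ends at the PVT)
G = abs(g2 - g1)                       # 16
xt = g_lesser * L / G                  # 1.75 stations from the PVT -> station 30 + 25

tp_offset = (xt / l) ** 2 * abs(e)     # (1.75/2)^2 * 8 = 6.12 ft
tp_tangent = pvi_elev + g2 * (l - xt)  # 239.12 - 1.75 = 237.37 ft (point on tangent)
tp_curve = tp_tangent - tp_offset      # 237.37 - 6.12 = 231.25 ft
print(xt, tp_offset, tp_tangent, tp_curve)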
The elevation of the POVT at 30 + 25 would be 237.37, calculated as explained earlier. The elevation on the curve would be
237.37 – 6.12 = 231.25 feet.
STEP 8: Check your work.
One of the characteristics of a symmetrical parabolic curve is that the second differences between successive grade elevations at full stations are constant. In computing the first and second differences (columns 7 and 8), you must consider the plus or minus signs. When you round off your grade elevation figures following the degree of precision required, you introduce an error that will cause the second difference to vary slightly from the first difference; however, the slight variation does not detract from the value of the second difference as a check on your computations. You are cautioned that the second difference will not always come out exactly even and equal. It is merely a coincidence that the second difference has come out exactly the same in this particular problem. | <urn:uuid:90532031-12dd-47fe-a460-158fbced40d5> | 3.8125 | 2,114 | Tutorial | Science & Tech. | 81.169047 |
Convection currents occur within:
Focus Question: What is the source of energy for convection currents
in the geosphere?
Convection currents in the magma drive plate tectonics.
Heat generated from the radioactive decay of elements deep
in the interior of the Earth creates magma (molten rock) in the asthenosphere.
The asthenosphere (70 ~ 250 km) is part of
the mantle, the middle sphere of the Earth that extends to 2900
km. It contrasts with the more rigid lithosphere, the outer
shell of the Earth (0 ~ 70 km) that contains the continental
crust (made up of less dense granitic rocks) and the oceanic
crust (more dense basaltic rocks) that are broken up into more
than a dozen rigid plates.
For more info, see:
How do the plates move?
Large convection currents in the asthenosphere transfer
heat to the surface, where plumes of less dense magma break apart the
plates at the spreading centers, creating divergent plate boundaries.
As the plates move away from the spreading centers, they
cool, and the higher density basalt rocks that make up ocean crust get
consumed at the ocean trenches/subduction zones. The crust is recycled
back into the asthenosphere.
Subduction of Plates
Because ocean plates are denser than continental plates, when these
two types of plates converge, the ocean plates are subducted beneath
the continental plates. Subduction zones and trenches are convergent
margins. The collision of plates is often accompanied by earthquakes
Focus Question: Where is the source of heat in the atmosphere-hydrosphere system?
The source of heat is from the sun, above.
The teachers were asked to sketch the variation in the distribution
of heat from the equator to the poles, noting the difference in
the angle of incidence with latitude and how this would affect heating.
This led to discussions about the multiple currents/cells that are driven
by unequal heating driving currents both vertically (creating high and
low pressure systems by descending and ascending air masses) and horizontally.
Focus Question: If the hydrosphere were a closed system
with only an external source of heat from the sun, what simple temperature
patterns would you expect to see in the ocean basins?
One would expect to see warmer temperatures at the equator
and cooler temperatures at the poles leading to two large convection cells
from the equator to the poles, one in each hemisphere.
Teachers map their hypotheses:
Focus Question: How well does this simple basin model
illustrate the real convection cells in the ocean-atmosphere system?
The teachers pondered this and other questions to be addressed
further during Session #2 on November 2, 2002. | <urn:uuid:36c0e833-9e45-411b-a80e-b4f648e91065> | 4.5625 | 593 | Academic Writing | Science & Tech. | 37.022518 |
XML is used in many aspects of web development, often to simplify data storage and sharing.
If you need to display dynamic data in your HTML document, it will take a lot of work to edit the HTML each time the data changes.
With XML, data can be stored in separate XML files. This way you can concentrate on using HTML/CSS for display and layout, and be sure that changes in the underlying data will not require any changes to the HTML.
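As an illustration (the tag names, fields, and values below are invented for the example), a small script can regenerate an HTML fragment from the XML data whenever the data changes, leaving the page's layout untouched:
# Sketch: the data lives in an XML document; the HTML list is generated from it.
import xml.etree.ElementTree as ET

xml_text = """
<catalog>
  <book><title>XML Basics</title><price>19.95</price></book>
  <book><title>Styling with CSS</title><price>24.50</price></book>
</catalog>
"""

root = ET.fromstring(xml_text)       # in practice: ET.parse("books.xml").getroot()
items = []
for book in root.findall("book"):
    title = book.findtext("title")
    price = book.findtext("price")
    items.append(f"<li>{title}: ${price}</li>")

html_fragment = "<ul>\n" + "\n".join(items) + "\n</ul>"
print(html_fragment)                 # drop this fragment into the HTML page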
In the real world, computer systems and databases contain data in incompatible formats.
XML data is stored in plain text format. This provides a software- and hardware-independent way of storing data.
This makes it much easier to create data that can be shared by different applications.
One of the most time-consuming challenges for developers is to exchange data between incompatible systems over the Internet.
Exchanging data as XML greatly reduces this complexity, since the data can be read by different incompatible applications.
Upgrading to new systems (hardware or software platforms), is always time consuming. Large amounts of data must be converted and incompatible data is often lost.
XML data is stored in text format. This makes it easier to expand or upgrade to new operating systems, new applications, or new browsers, without losing data.
Different applications can access your data, not only in HTML pages, but also from XML data sources.
With XML, your data can be made available to all kinds of "reading machines" (handheld computers, voice machines, news feeds, etc.), and more accessible to blind people or people with other disabilities.
A lot of new Internet languages are created with XML.
Here are some examples:
If they DO have sense, future applications will exchange their data in XML.
The future might give us word processors, spreadsheet applications and databases that can read each other's data in XML format, without any conversion utilities in between.
| <urn:uuid:9ca95ccd-f377-4e0f-a3dc-d443487735c9> | 3.421875 | 406 | Knowledge Article | Software Dev. | 46.780227 |
I have just returned from an absolutely marvelous visit to the Cayman Islands where I was participating in the Nassau grouper spawning project, called Grouper Moon, which is run jointly by the Cayman Islands Government and REEF (http://www.reef.org/programs/grouper_moon). This species of grouper gather together near the full moon in winter months to spawn at specific sites every year. Unfortunately this fidelity has made them very vulnerable to overfishing as the fishers know exactly when and where the aggregation will form. The historical aggregations consisted of tens of thousands of hungry fish so fishing was lucrative and very heavy. The largest fish were being harvested just as they are making the next generation, which seems nonsensical but such traditional fisheries are very difficult to stop. The Cayman Islands finally closed their grouper aggregations in 2003, and the Grouper Moon project has been studying the spawning ever since. Although greatly depleted from historical times, the 3500-4000 fish at the Little Cayman aggregation is relatively healthy. I saw several spawning ‘rushes’ where a large female shoots up through the water column releasing eggs as she goes, surrounded by several males releasing sperm. The eggs are fertilized in the water, and the larvae spend several weeks in the plankton before settling in their nursery habitats to grow big enough to move out to the reef. Nassau groupers’ natural range extends from northern Florida to South America, but intensive fishing has severely depleted populations and unfortunately the historical spawning aggregations that were fished out decades ago have still not returned. The Little Cayman spawning aggregation is one of the very few remaining in the Caribbean, and fortunately seems to be holding its own, with more juveniles appearing every year. Thanks to the considerable efforts of the Caymanian scientists and the REEF program, this aggregation will hopefully survive into the future. | <urn:uuid:424e0dad-b7de-4ec2-b233-4a58cd8b0550> | 2.8125 | 383 | Personal Blog | Science & Tech. | 30.048915 |
December 1, 2010
Having already found more than 500 planets circling distant stars, scientists are getting better at understanding what they’re made of. A group led by Jacob Bean at the Harvard-Smithsonian Center for Astrophysics reports in this week’s Nature that they’ve analyzed the atmosphere of a planet only slightly larger than our own for the first time. And they may have found water —or rather, steam.
The planet, called GJ 1214b, has a radius about 2.6 times larger than Earth’s, and orbits a star located 40 light-years away in the direction of the constellation Ophiuchus. Scientists knew from previous observations that the planet must have an atmosphere, because its density is too low for an all-rocky planet. Theoretical models suggest three possibilities: A) a cloud-free hydrogen atmosphere, B) high clouds or haze obscuring a deeper hydrogen atmosphere, and C) an atmosphere made mostly of water vapor.
Bean and his colleagues used the 3.6-meter Very Large Telescope in Chile to analyze the spectrum of starlight filtering through the planet’s atmosphere. The data led them to rule out option A and favor option C, the steam world (although B is still a possibility). And future infrared observations should be able to distinguish between B and C.
Sadly, though, “the planet would not harbor any liquid water due to the high temperatures present throughout its atmosphere,” say the authors.
| <urn:uuid:2d11218c-e55f-46e1-bca4-84ac1f68b675> | 3.421875 | 317 | Personal Blog | Science & Tech. | 52.562919 |
Minnesota Flash Floods: 1970-2012
This is a continuation of the book Sixteen Year Study on Minnesota Flash Floods. This document was published in January, 1988 by the Minnesota Department of Natural Resources Division of Waters State Climatology Office and the University of Minnesota Soil Science Department. That study looked at sixteen years of flash floods from 1970 to 1985. In addition, flash floods from 1986 to 2012 are included below.
The definition of a flash flood used here is the occurrence of 6 inches or more of rainfall within a 24-hour period. The size of a flash flood is measured as the area in square miles over which a rainfall of 4 inches or more occurs. The rationale for using these criteria is that, first, a rainfall of six inches in a 24-hour period is near the 100-year return period in Minnesota and, second, a rainfall of 4 inches or greater approximates the level at which newspaper reports indicate increased erosion or other economic damages. There are a total of 117 flash flood events documented here since 1970.
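Expressed as a rule, with made-up rainfall totals for a handful of hypothetical grid cells of known area, the criteria look like this:
# The report's criteria: an event needs >= 6 inches of rain in 24 hours somewhere;
# its size is the total area (square miles) that received >= 4 inches.
# The cell list is invented sample data: (24-hour rainfall in inches, cell area in sq mi).
cells = [(7.2, 10.0), (5.1, 10.0), (4.3, 10.0), (2.0, 10.0)]

is_flash_flood = any(rain >= 6.0 for rain, _ in cells)
size_sq_mi = sum(area for rain, area in cells if rain >= 4.0)

if is_flash_flood:
    print(f"Flash flood event; 4-inch area = {size_sq_mi:.0f} square miles")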
Flash Flood Events Year by Year | <urn:uuid:0d06341f-1617-4cd0-a716-77cdcebd3f4b> | 3.265625 | 215 | Knowledge Article | Science & Tech. | 46.631233 |
Moons of the Solar Systemby Mark Hughes
There are more than 140 known moons throughout our solar system. Most are barren rock, while some have atmospheres, volcanoes, and maybe even oceans of liquid water or methane. Follow this slideshow to learn more about the moons of our solar system.
1 of 10
The moon around Earth is known simply as "the Moon," but it is also named Luna, after the Roman moon goddess. Luna is one of the largest moons in the solar system; it does not have an atmosphere and its surface is rocky. The people of Earth are used to Luna shining down on them. Luna does not create its own light however; it simply reflects the light from the Sun. Notice the red spot in the upper right of the picture, that's Mars in the distance.
Fun Fact: If you lived on the Moon, you would see the Earth go through phases, much like Earthlings see the Moon do.
Photo source: NASA | <urn:uuid:79a7b89a-e11f-4786-90ee-8e31f65c9c73> | 3.671875 | 198 | Listicle | Science & Tech. | 65.290997 |
void GB.HashTable.Add ( GB_HASHTABLE hash , const char *key , long len , void *data )
Inserts data into the hash table.
- hash points at the hash table.
- key is the key associated with the data.
- len is the length of the key.
- data is the data pointer.
The keys are copied into the hash table, so the string that key
points at can be temporary.
If a data was already associated with the specified key, this data is silently replaced by the | <urn:uuid:4d0d868b-cbd7-41e8-9aae-28e1e257f87f> | 2.765625 | 127 | Documentation | Software Dev. | 67.217282 |
Sunday, October 4, 2009
Do Big Earthquakes Cause Tremors Elsewhere?
With the spate of large earthquakes worldwide, some people have asked if large earthquakes can set off smaller earthquakes elsewhere. In short, the answer is yes, but the big earthquakes don't make other earthquakes; they may help trigger earthquakes in places where stresses are close to the breaking point anyway. These smaller quakes usually occur within a few hours of a major tremor. But there are some odd complications that are now being recognized.
Some new research, reported in NatureNews, suggests that large distant quakes can weaken other fault zones and cause activity months later. This assertion is supported by the fact that the period of 2005-2007 following the devastating magnitude 9 Sumatra-Andaman quake (the tsunami killed nearly a quarter of a million people) had the largest number of large earthquakes than any comparable period since the early 1900's.
The abstract for the research article can be found here (the entire article requires a fee). The full article: Taira, T., Silver, P. G., Niu, F. & Nadeau, R. M. (2009). Remote triggering of fault-strength changes on the San Andreas fault at Parkfield, Nature 461, 636-639.
The photos today show the San Andreas fault on the Carrizo Plains south of Parkfield, and the Landers fault in the Mojave Desert. The 1992 quake on the Landers fault (mag. 7.3) set off tremors at Parkfield months later. | <urn:uuid:8525bc4f-bbe1-46ef-a776-72cbb5acd77a> | 3.5625 | 318 | Personal Blog | Science & Tech. | 60.3525 |
Why == (and not just =)
mailund at birc.dk
Wed Oct 22 18:12:22 CEST 2003
On Mon, 20 Oct 2003 00:59:30 +0200, Carlo v. Dango wrote:
>> expression syntax to boot. The original language in the Algol family,
>> Algol 60, used ":=" as the assignment operator as I recall, a tradition
>> followed by Pascal.
>> Frankly, I'd rather have the assignment operator be something
>> like a left arrow: "<-" perhaps. I think it makes more sense,
>> and it avoids the confusion between one = and two.
> actually Algol was meant to have the <- rather than the := but due to
> problems writting the lexer/parser, it became :=
> if you want a language which uses the <- operator then have a look at the
> language "Beta" it takes a little getting used to, when used nested ;)
Actually, if I recall correctly, BETA <URL:http://www.daimi.au.dk/~beta/>
uses the "->" operator--assignment goes the other way.
In BETA there's syntactical difference between assignment and equality
tests (= for testing equality, -> for assigning), but assignment and
method calls uses the same syntax. So assigning to variable x, and
calling function f, uses the same operator:
5 -> x; 5 -> f;
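In Python, by contrast, assignment is a statement and equality testing an expression, so writing one where the other is expected is usually caught immediately; a quick illustration:
x = 5          # assignment statement: binds the name x to 5
print(x == 5)  # equality expression: prints True, does not rebind x

# if x = 5:    # SyntaxError -- plain assignment is not an expression here
#     pass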
More information about the Python-list | <urn:uuid:8fb31c72-62c8-4f99-8287-e0d0110e6ecc> | 2.953125 | 322 | Comment Section | Software Dev. | 57.81057 |
A member of RSC staff (and Chemistry World fan) recently suggested to me that it’s been 100 years since the idea of solar fuels was born. His evidence? A paper by Italian chemist Giacomo Luigi Ciamician published in Science on 27 September 1912. In it, Ciamician proposes how we might harness the enormous power of the Sun to produce fuels from plants:
‘Is it possible or, rather, is it conceivable that…the cultivation of plants may be so regulated as to make them produce abundantly such substances as can become sources of energy…? I believe that this is possible.’
Although he doesn’t use the term, Ciamician is clearly talking about biofuels:
‘…it seems quite possible that the production of organic matter may be largely increased… The harvest, dried by the sun, ought to be converted, in the most economical way, entirely into gaseous fuel…’
And from there he goes on to describe artificial photosynthesis:
‘For our purposes the fundamental problem from the technical point of view is how to fix the solar energy through suitable photochemical reactions. To do this it would be sufficient to be able to imitate the assimilating processes of plants.’
The paper covers the use of sunlight to power the production of all kinds of useful compounds, not just fuels. But it’s this idea of capturing energy from the sun – deliberately and directly – to store in chemical form for later use that is arguably its most compelling. The idea falls within a generalised concept of solar power (or solar energy) but can be demarcated from making electricity directly from sunlight, as photovoltaic solar cells do.
And it’s a hot topic today. Earlier this year, the RSC published a report into solar fuels and artificial photosynthesis describing the rapid rate of progress in this area in recent years.
Indeed, the whole paper seems very prescient. Ciamician highlights a widespread and growing dependence on fossil fuels and questions how industry would cope with a sudden and unexpected price spike.
Perhaps unsurprisingly, he makes a few false steps in his comments about biofuels:
‘There is no danger at all of using for industrial purposes land which should be devoted to raising foodstuffs. An approximate calculation shows that on the Earth there is plenty of land for both purposes, especially when the various cultivations are properly intensified and rationally adapted to the conditions of the soil and the climate.’
But to be fair there were fewer than two billion people on the planet back in 1912. Who could have predicted the impact of a four fold increase over the next 100 years?
In predicting how our rampant thirst for energy would lead us to the Sun, Ciamician seems to be peering into the future with remarkable clarity. | <urn:uuid:2576cc7b-9526-4966-aa0c-5bc18256413c> | 3.65625 | 582 | Personal Blog | Science & Tech. | 39.800953 |
A new supercomputer simulation shows the collision of two neutron stars can naturally produce the magnetic structures thought to power the high-speed particle jets associated with short gamma-ray bursts (GRBs).
A European team working on the LISA Pathfinder mission has completed an extensive series of ground tests on the spacecraft's optical payload. The tests successfully achieved - for the first time on a spacecraft instrument - the incredible precision that will be required to confirm the existence of gravitational waves.
NASA's Fermi Gamma-ray Space Telescope has discovered 12 new gamma-ray-only pulsars and has detected gamma-ray pulses from 18 others. The finds are transforming our understanding of how these stellar cinders work.
Dr. Joan M. Centrella and Dr. John G. Baker are the 2008 recipients of the John C. Lindsay Memorial Award for Space Science. NASA's Goddard Space Flight Center in Greenbelt, Md., honors one or more of its civil servant space scientists each year with this award, which is the center's highest honor for outstanding contributions in space science.
It's well known that black holes can slow time to a crawl and tidally stretch large objects into spaghetti-like strands. But according to new theoretical research from two NASA astrophysicists, the wrenching gravity just outside the outer boundary of a black hole can produce yet another bizarre effect: light echoes.
NASA scientists have reached a breakthrough in computer modeling that allows them to simulate what gravitational waves from merging black holes look like. The three-dimensional simulations, the largest astrophysical calculations ever performed on a NASA supercomputer, provide the foundation to explore the universe in an entirely new way. | <urn:uuid:a0297073-1899-4679-9603-03c253fbee76> | 2.984375 | 333 | Content Listing | Science & Tech. | 41.73029 |
Worksheet: When Dragons Eat the Sun
1. What city and state did you select?
2. Why are all the things in the sky moving together?
3. What is happening to all the things in the East sky?
4. What is happening to all the things in the West sky? Why?
5. In real life, why can't you see stars and planets during the daytime?
6a. How would you describe the movement of
the objects in the North sky?
6b. Is there one star that seems to be standing still? Why?
6c. What is the name of this star?
6d. Why would that star be of greater
interest to sailors than other stars?
7. What happens to the movement of the stars when you set the time step to "solar days"?
8. How many days does it take the moon to make one cycle through
the sky, returning to its original position?
9a. Describe the path of the noon sun over the seasons (summer, fall, winter, spring).
9b. Why does the moon have phases?
9c. Which other planets have phases? Why just those?
10a. Which is the slowest moving planet in the sky? Why is it such a slowpoke?
10b. Why do the stars jerk every 4 years when the time step is set to "years"?
11a. What causes a solar eclipse?
11b. What is the difference between a partial solar eclipse and a total solar eclipse?
11c. What does a total solar eclipse look like on earth?
11d. Why do you have to be in just the right place on earth to see a total solar eclipse?
11e. Estimate the speed with which the totality spot moves across southern Africa
during the total eclipse of June 21, 2001.
11f. Could you keep up with it in a speeding car? Why or why not?
11g. How old will you be when the next total solar eclipse is visible from the
11h. Draw a sketch of the sun at the closest approach to totality during
the eclipse of August 21, 2017, as viewed from your home location.
11i. Is the eclipse more or less total when viewed from Orlando, Florida?
11j. What is the best place you can find to view the total eclipse? | <urn:uuid:0103d2f6-640c-4076-ba97-bc9ad4822fe4> | 3.171875 | 501 | Tutorial | Science & Tech. | 88.710968 |
Comment: 10:50 - 12:11 (01:21)
Source: Annenberg/CPB Resources - Earth Revealed - 11. Evolution Through Time
Keywords: "Dee Trent", life, atmosphere, oxygen, photosynthesis, stromatolite, algae, ozone, "carbon dioxide", "solar radiation", Precambrian, Paleozoic, limestone, ocean, horse, camel, humans, sycamore, "carbon cycle"
Our transcription: The first appearance of life out of the seas was possible in large part because the planet's atmosphere had become oxygen rich due to photosynthesis of stromatolitic algae.
As oxygen built up, an ozone layer formed high in the atmosphere shielding the Earth's surface from deadly solar radiation.
Although this process was actually well underway in the Precambrian, it was not until the mid Paleozoic that life was sufficiently developed to take advantage of the dry land habitats the ozone layer protected.
At the same time oxygen increased in the atmosphere, another atmospheric gas, carbon dioxide, decreased because carbon dioxide had become an important building block of life.
And a lot of the carbon dioxide is trapped in organisms in seawater and actually eventually becomes lime oozes on the sea floor, which may eventually become limestones.
Organisms are very much a part of the control of our environment.
The plants take in the carbon dioxide and give off oxygen.
The plants, the limestones, the ocean, the atmosphere, human beings, horses and camels, the sycamore tree outside are all interrelated in a big cycle called the "carbon cycle."
Geology School Keywords | <urn:uuid:757ca62f-f9c1-410a-8422-13d358e7fbbb> | 3.75 | 342 | Knowledge Article | Science & Tech. | 20.132222 |
There are more than 200 thousand earthquakes on the map, with data from 1898 to 2003. The Pacific "Ring of Fire" is visible near the center of the image, as is the Mid-Atlantic Ridge on the right.
Most quakes occur on fault lines, but there are instances of quakes happening away from fault lines.
The yellow colors on the map (best seen near Alaska, Japan, and New Zealand) are the strongest quakes with magnitudes above 8.0.
You won't see the colors from the strong earthquakes in Haiti (2010), Chile (2010), Japan (2011), and Indonesia (2004) because they are not included in the data. | <urn:uuid:9335cf76-c0ea-46de-be2e-8b1360adab06> | 3 | 137 | Knowledge Article | Science & Tech. | 65.195786 |
Science Fair Project Encyclopedia
Mohs scale of mineral hardness
Mohs' scale of mineral hardness characterizes the scratch resistance of various minerals through the ability of a harder material to scratch a softer. It was created by the German mineralogist Friedrich Mohs and is one of several definitions of hardness in materials science.
Mohs based the scale on ten readily available minerals. Materials are characterised against the scale by finding the hardest material that they can scratch.
The table below shows a comparison with absolute hardness as measured by a sclerometer. Mohs' is a purely ordinal scale: in absolute terms, for example, corundum (9) is only twice as hard as topaz (8), but diamond (10) is almost four times as hard as corundum.
| Mohs hardness | Mineral | Absolute hardness |
| 1 | Talc (Mg3Si4O10(OH)2) | 1 |
| 2 | Gypsum (CaSO4·2H2O) | 3 |
| 3 | Calcite (CaCO3) | 9 |
| 4 | Fluorite (CaF2) | 21 |
| 5 | Apatite (Ca5(PO4)3(OH,Cl,F)) | 48 |
| 6 | Orthoclase Feldspar (KAlSi3O8) | 72 |
| 7 | Quartz (SiO2) | 100 |
| 8 | Topaz (Al2SiO4(OH,F)2) | 200 |
| 9 | Corundum (Al2O3) | 400 |
| 10 | Diamond (C) | 1600 |
Some mnemonics traditionally taught to geology students to remember this table are "The Girls Can Flirt And Other Queer Things Can Do", "The Geologist Can Find An Ordinary Quartz, (that) Tourists Call Diamond", or "Ten Green Cows Flew Away, Feeling Queer To Come Down" (which uses an ambiguous 'F' for Feldspar).
An alternative table is shown below which has been modified to incorporate additional substances that may fall in between two levels.
| 7 | Vitreous pure silica |
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details | <urn:uuid:f94444cf-35fe-4d61-988a-09b66c771d8d> | 3.859375 | 314 | Knowledge Article | Science & Tech. | 34.851777 |
Windows Programming in Python: Creating COM Servers - Page 3
Annotating the Class with Attributes - Python
In an earlier article I discussed accessing COM components from within Python programs. However, I left a question dangling, namely, can COM servers be created in Python, and can they be accessed by applications created in other languages or platforms such as Visual Basic? The answer is an emphatic yes.
Every Python class representing a COM interface must expose itself as a COM object. To achieve this, the PythonCOM framework requires certain attributes to be associated with the Python class that needs to be exposed as a COM object. There are three main attributes, which are:
- _public_methods_
- _reg_progid_
- _reg_clsid_
All of the above attributes are required in exposing a Python class as a COM object - from the simplest to the most complex COM objects.
The _public_methods_ attribute takes a list of all those methods that need to be exposed via the COM. The _reg_progid_ attribute is used to assign the ProgID for the new object, that is, the name that the users of this object must use to create the object. It is the human readable name of the object. Finally, the _reg_clsid_ attribute sets the unique CLSID for the object. These IDs must not be copied. Instead new ones should be created using pythoncom.CreateGuid().
Let's take a look at the same example of the COM server:
def double(self, arg):
    # trivial test function to check it's alive
    return arg * 2
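For context, a fuller sketch shows how that method sits inside a registrable class. The ProgID below is made up for this example, and the CLSID is a placeholder; a real one should be generated with pythoncom.CreateGuid():
# Illustrative sketch of a complete COM server class (names are examples only).
class PythonUtilities:
    _public_methods_ = ['double']              # methods exposed via COM
    _reg_progid_ = "PythonDemos.Utilities"     # example ProgID (human-readable name)
    _reg_clsid_ = "{00000000-0000-0000-0000-000000000000}"  # placeholder; use pythoncom.CreateGuid()

    def double(self, arg):
        # trivial test function to check it's alive
        return arg * 2

if __name__ == '__main__':
    # Running the script directly registers the class as a COM server.
    import win32com.server.register
    win32com.server.register.UseCommandLine(PythonUtilities)
Once registered, a client such as Visual Basic or VBScript could create the object with CreateObject("PythonDemos.Utilities") and call its double method.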
The _reg_clsid_ contains a 32 bit class id for the object. The _reg_progid_ contains the human readable name using which the object can be called. Here the _public_methods_ contains the list of all the methods exposed via COM. Here there is only one -- double. That completes this section. In the next section, I will be creating a real world example and call it from Visual Basic. | <urn:uuid:5e661e71-4578-4644-814f-23cd5862fa9f> | 2.71875 | 404 | Documentation | Software Dev. | 52.150324 |
There are very few ways of conducting experiments without the influence of Earth's gravity. One of these platforms became available on 25 November 2012, when a rocket was launched from the Swedish Esrange Space Center in Kiruna. The high-altitude sounding rocket MAPHEUS-3 (Materialphysikalische Experimente unter Schwerelosigkeit, Material Physics Experiments under Microgravity Conditions) of the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) carried four experiments that were subjected to three and a half minutes of microgravity conditions during the flight. Among other things, the experiments involved melting metal samples in furnaces. The solidified samples were recovered on 26 November 2012 with the help of a snowmobile.
"If we conduct these experiments on Earth, lift forces are exerted on the molten metals," explains Andreas Meyer from the DLR Institute of Material Physics in Space. "When we conduct experiments in microgravity, we can overcome these and observe physical processes without any interference." The small furnaces began heating the aluminium- rich alloys for the ATLAS diffusion experiment before the rocket was launched. Eighty seconds after MAPHEUS-3 lifted off, the various liquefied components mixed in microgravity. "This is a process about which we still know very little." The 'demixing' of molten metals was also a subject of study during the MAPHEUS-3 campaign. With the DEMIX experiment, scientists investigated the behaviour of copper-cobalt alloys during the melting process. "With the results of the MAPHEUS-3 flight, we can revise the existing models for this process and adjust them accordingly," states Meyer. "This demixing process is employed in industry – and so it is also of interest to this sector to test the current models."
Fundamental research with a video camera
For the MEGraMa experiment, researchers from the DLR Institute of Material Physics in Space filmed the impact behaviour of particles with a diameter of less than one millimetre. Four magnets accelerated the spherules in a controlled manner during the flight; in the meantime, a video camera recorded how the spherules lose energy as they collide with one another. "With this, we investigate the behaviour of granular gases," emphasises Institute Director Meyer. "This process is not yet fully understood."
To prepare for the next flight campaign – MAPHEUS-4 – the rocket also carried a newly developed furnace that was subjected to microgravity conditions. Measuring just 40 by 40 by 20 millimetres, it will melt six samples during a flight scheduled for next year. "The smaller the furnace, the lower the amount of energy needed to heat it up." One advantage of this new furnace is that it is 'transparent' to X-rays, which enables the direct study of the changes in composition taking place in the interior of the liquefied metal samples.
Recovered by a snowmobile
The launch was performed by staff from DLR's MObile ROcket BAse (MORABA). "We are, to some extent, responsible for MAPHEUS' 'flight ticket'. In addition to the launch itself, this includes the provision of the launcher and rocket engines, which are developed in-house, as well as the overall integration of the rocket," explains DLR engineer Markus Pinzer. Microgravity was achieved 80 seconds after lift-off at an altitude of 100 kilometres; the rocket reached a maximum altitude of 140 kilometres. After the capsule was returned to Earth by parachute, a team quickly located it, with the experiments on board, and were able to recover it one day after its launch using a snowmobile. "The flight was very challenging, both scientifically and technologically," emphasised Project Manager Martin Siegl from the DLR Institute of Space Systems. "Complex development work on the experiments, a variety of tests and an intensive preparation phase all culminated in those few minutes of flight." Now it is time to evaluate the acquired data and analyse the resolidified metal samples. | <urn:uuid:4bbbd621-61fd-4769-81ee-2c17997a312d> | 3.328125 | 836 | Knowledge Article | Science & Tech. | 30.072752 |
Global Distribution and Circulation of Water
Ground water is a part of the hydrosphere (Nace, 1960), which includes all of the water of the oceans, rivers, lakes, lower atmosphere, and subterranean environments. Ground water accounts for less than 1% of all the water in the hydrosphere. As tiny as this amount might first appear, it is nearly seven times the amount of fresh surface water available at any one time.
There is a constant interchange of water throughout the hydrosphere from sea water to atmospheric moisture to surface water to ground water. This change of water forms is known as the hydrologic cycle. The basic components of this cycle are illustrated in Figure 1.
There is no beginning or end to the hydrologic cycle, which involves a layer of the atmosphere 10 miles thick and at least 1/2 mile of the lithosphere (soil and rock) (Chow, 1964). Precipitated water returns to the earth in a variety of forms and may be intercepted or transpired back to the atmosphere by plants. It may run over the ground into streams and lakes or evaporated back into the air. A small amount moves downward through the soil to a zone of saturation.
This ground water is later discharged directly to streams and lakes or to springs and seeps, from which it may run off to streams or be evaporated into the atmosphere. Ultimately, surface and ground water flow back to the oceans to complete the cycle, which may take hours, or thousands and even millions of years.
|Copyright © 2005 All rights reserved.| | <urn:uuid:2e1c2f6b-34c1-4f95-807c-a06657815dae> | 3.765625 | 353 | Knowledge Article | Science & Tech. | 46.517308 |
CHARLES DARWIN's theory of evolution has been the source of much controversy since its publication in 1859, most recently involving the intelligent design (ID) lobby in the US. Now the theory is fuelling another debate, although for once the battle lines have nothing to do with religion.
Instead of pitting God against science, the emerging spat centres on evolutionary algorithms (EAs), which mimic the processes of natural selection and random mutation by "breeding", selecting and re-breeding possible designs to produce the fittest ones.
EAs take two parent designs - for a boat hull, say - and blend components of each, perhaps taking the surface area of one and the curvature of another, to produce multiple hull offspring that combine the features of the parents in different ways. Then the algorithm selects those offspring it considers are worth re-breeding - in this case those with the right combination of parameters to ...
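A minimal sketch of that breed-and-select loop, with invented hull parameters and an invented fitness score standing in for a real hydrodynamic evaluation:
# Toy evolutionary loop: blend two parent designs, keep the fittest offspring.
import random

def fitness(hull):
    # Invented stand-in for a real hydrodynamic evaluation.
    return -(hull["surface_area"] - 12.0) ** 2 - (hull["curvature"] - 0.8) ** 2

def crossover(a, b):
    # Blend components of the two parents, with a small random mutation.
    return {k: random.choice((a[k], b[k])) + random.gauss(0, 0.05) for k in a}

population = [{"surface_area": random.uniform(8, 16),
               "curvature": random.uniform(0.2, 1.5)} for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # keep the fittest designs
    children = [crossover(random.choice(parents), random.choice(parents))
                for _ in range(10)]                # breed new candidate hulls
    population = parents + children

print(max(population, key=fitness))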
| <urn:uuid:d25ba019-b1f3-44fa-8ab6-62e44463d73f> | 2.96875 | 210 | Truncated | Science & Tech. | 37.871013 |
Inspiration often comes from nature, and Anthony Reale took his from the basking shark, the second largest fish in the sea (after the whale shark). The basking shark is a krill raker, swimming through the sea with its huge mouth open and filtering the tiny organisms from the water. Reale looked at the shark's mouth and saw beautiful efficiency borne of millions of years of evolution: The gills direct flowing water around to the back of the mouth, drawing water in with a more consistent flow pattern. He took these elements and applied them to an in-water turbine housing. When tested at the University of Michigan's hydrodynamics lab, it demonstrated a 40 percent better efficiency than comparable turbines. Reale is now shopping the patent-pending design around to investors. | <urn:uuid:707f9523-24d8-4631-bd2d-2b031858759a> | 2.734375 | 160 | Listicle | Science & Tech. | 45.427727 |
This section tells you how to create a custom formatter for a logger handler. Java provides two formatter types, SimpleFormatter and XMLFormatter, but the java.util.logging package also lets you create a custom formatter and attach it to a handler. A custom formatter is simply a user-defined formatter. A logger handler writes log records to a file using the formatter attached to it.
Description of the program:
The program creates a logger and writes log records to a given file through a FileHandler whose append property is set to true, so new records are added to the end of the file. The records are written in the format defined by the custom formatter; here, the formatter writes the date to the file in 'heading 1' format.
Description of code:
Above is a method of the Formatter class that returns the formatted log records as a String. This method takes a handler as its argument.
Here is the code of program:
| <urn:uuid:1da57645-6ea6-4a7b-b626-627144fd5edd> | 2.71875 | 238 | Documentation | Software Dev. | 43.866875 |