(THE EFFECT OF TEMPERATURE ON THE RATE OF A REACTION)
The purpose of this demonstration is to show the effect of temperature upon the rate of a reaction.
This demonstration is appropriate for a general or first-year college-prep course. The rate of formation of CO2 (g) from the reaction of Alka-Seltzer tablets with water at different temperatures is easily observed by comparing the rates of inflation of balloons attached to the flasks in which the reactions are carried out.
Approximately 10 minutes.
*See Modifications / Substitutions
Safety goggles must be worn by those doing the demonstration.
In place of Erlenmeyer flasks, 10-12 oz. soda bottles may be used.
- Alka-Seltzer tablets
- tap water - hot, room temperature, and cold
- balloons (previously blown up to stretch them)
- 125-mL Erlenmeyer flasks*
- chemical scoops, spatulas or spoons
Solutions may be flushed down the drain with water.
Bicarbonate and hydrogen ions, produced when Alka-Seltzer dissolves in water, react according to the following equation:

HCO3^-(aq) + H^+(aq) → H2O(l) + CO2(g)
- Ask students to help with the demonstration; it will require at least one student per flask.
- Fill one flask about halfway with hot tap water. Add an equal volume of room temperature tap water to the second flask and an equal volume of cold tap water to the third.
- Practice putting the balloon over the mouth of the flask.
- Break three Alka-Seltzer tablets into pieces of comparable size.
- Add one broken tablet to each flask.
- Immediately put a balloon over the mouth of each flask.
- Note the rate of balloon inflation.
- Compare the sizes of the inflated balloons when the reactions are finished to demonstrate that only the rate of reaction is affected, not the final amount of product.
Increasing the temperature increases the rate of reaction because at the higher temperature, a greater percentage of ions in the sample have energy greater than the required activation energy for the reaction. The observed rate of inflation of the balloons, which is shown to be related to the temperature at which the reaction takes place, is a measure of the rate of formation of carbon dioxide gas.
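As a quantitative aside (not part of the original demonstration write-up), this temperature dependence of a rate constant k is commonly summarized by the Arrhenius equation,

k = A e^(-Ea / RT)

where Ea is the activation energy, R is the gas constant, and T is the absolute temperature. Raising T makes the exponential factor larger, so k, and with it the rate of CO2 formation, increases, while the total amount of CO2 produced stays fixed by the amount of Alka-Seltzer.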
Check balloons before doing the demonstration to be sure that they will inflate with the pressure generated.
Smoot, Robert, Chemistry: A Modern Course, Merrill Publishing Co., Columbus, OH, 1983. This work describes a similar demonstration.
Submitted by Patti Ruff, Bill Vitori, Irene Walsh, Doug Wilbur, and Joe Don Wilkins
Woodrow Wilson Leadership Program in Chemistry
The Woodrow Wilson National Fellowship Foundation
CN 5281, Princeton NJ 08543-5281
Concept A: Absolute Zero.
The lowest possible temp is -273degC (more precisely, -273.15degC).
It is labeled 0K on the Kelvin scale.
K = degC + 273
degC = K - 273
Gas volumes are proportional to K temps.
A temp change from 20K to 40K will double the volume.
A temp change from 20degC to 40degC does not.
(It will increase the volume by a factor of 313K / 293K, about 1.07.)
ALL TEMPS USED IN GAS LAWS MUST BE IN KELVINS.
Concept B: Standard Temperature and Pressure. (STP)
Solids and liquids have a defined volume.
Gases have a defined volume if the temperature and pressure are defined.
Standard Temperature = 0degC = 273K
Standard Pressure = 1 atmosphere (atm) = 760 mmHg = 760 torr = 101.3 kPa = 14.7 psi (pounds per square inch) = 29.9 inHg.
You'll probably have to convert between these units.
Equation 1: The combined gas law.
P2V2 / T2 = P1V1 / T1
Temps have to be in Kelvins.
Pressures have to have the same units, but it doesn't matter which one.
Volumes have to have the same units, but it doesn't matter which one.
This equation is used to figure out a new value when a fixed amount of gas undergoes a change in P, V, or T. Typically a volume of gas will be collected at lab conditions (say, 25degC and 740. mmHg) and you will be asked to calculate the volume it would occupy at STP.
V2 = V1P1T2 / (P2T1).
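A small Python sketch of this typical problem (the function and sample volume are illustrative, not from the original notes):

```python
def volume_at_stp(v1, p1_mmHg, t1_degC):
    """Correct a gas volume measured at lab conditions to STP
    using P1 V1 / T1 = P2 V2 / T2 (temperatures in kelvins)."""
    t1 = t1_degC + 273.0          # degC -> K
    p2, t2 = 760.0, 273.0         # STP: 760 mmHg and 0 degC
    return v1 * (p1_mmHg / p2) * (t2 / t1)

# 100. mL collected at 25 degC and 740. mmHg:
print(volume_at_stp(100.0, 740.0, 25.0))   # ~89.2 mL at STP
```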
Equation 2: Ideal Gas Law
PV = nRT.
This equation is used to calculate one value when the other 3 are known.
T must be in Kelvins.
P and V must have the same units as R. (or R the same units as P and V)
n measures moles.
R is a constant and can be calculated based on the units given in the problem by substituting standard conditions and volume (22.4L or equivalent) for 1 mole of a gas.
R = (PV / nT)
R = (1 atm)(22.4 L) / [(1 mole)(273 K)] = 0.0821 L·atm/(mol·K)
R = (760 torr)(22.4 dm^3) / [(1 mole)(273 K)] = 62.4 torr·dm^3/(mol·K), etc.
By substituting m/M (mass / molar mass) for n, a couple of other interesting equations show up.
PV = mRT / M
P = mRT / MV
PM / RT = m/V = D
For molar mass
PV = mRT / M
MPV = mRT
M = mRT / PV
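A worked sketch of that rearrangement in Python (the sample values are invented for illustration):

```python
R = 0.0821  # L*atm/(mol*K), from the substitution shown above

def molar_mass(mass_g, p_atm, v_L, t_K):
    """M = mRT / (PV)."""
    return mass_g * R * t_K / (p_atm * v_L)

# Hypothetical sample: 1.00 g of gas occupying 0.820 L at 1.00 atm and 273 K
print(molar_mass(1.00, 1.00, 0.820, 273.0))   # ~27.3 g/mol
```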
Equation 3: Graham's Law
v1 / v2 = sqrt(M2 / M1)
v (lower case) stands for velocity, not volume (upper case V).
M stands for molar mass.
This equation applies to two samples of different gases at the same temp and pressure.
If the velocity of one gas is known, the velocity of the other can be calculated.
Alternatively, the "relative rates" of gas1 to gas2 can be calculated.
Or gas1 is how many times faster than gas2?
In the latter two examples it's that ratio that's important, not a specific value for v1 or v2.
Note that the subscripts are criss-crossed in the equation.
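A minimal Python sketch (the gas choices are just an example):

```python
import math

def relative_rate(m1, m2):
    """Graham's law: v1 / v2 = sqrt(M2 / M1)."""
    return math.sqrt(m2 / m1)

# How many times faster is H2 (2.0 g/mol) than O2 (32.0 g/mol)?
print(relative_rate(2.0, 32.0))   # 4.0
```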
It's possible you will also have to calculate the volume of a dry gas, but that requires an explanation of open and closed manometers.
You may also have to understand how to do a calculation involving a real gas.
The real-gas equation (typically the van der Waals equation) contains corrections that compensate for two assumptions made about ideal gases:
1. That particles are point masses (they have mass but occupy no space).
2. That collisions are completely elastic (no energy is lost).
See it and a description of the rest of the laws and some examples at
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2006 August 7
Explanation: Our Sun is still very active. In the year 2000, our Sun went through Solar Maximum, the time in its 11-year cycle when the most sunspots and explosive activity occur. Sunspots, the Solar Cycle, and solar prominences are all caused by the Sun's changing magnetic field. Pictured above is a solar prominence that erupted in 2002 July, throwing electrons and ions out into the Solar System. The above image was taken in the ultraviolet light emitted by a specific type of ionized helium, a common element on the Sun. Particularly hot areas appear in white, while relatively cool areas appear in red. Our Sun should gradually quiet down until Solar Minimum, the quietest part of the cycle, occurs. No one can precisely predict when Solar Minimum will occur, although some signs indicate that it has started already!
Authors & editors:
NASA Web Site Statements, Warnings, and Disclaimers
NASA Official: Jay Norris. Specific rights apply.
A service of: EUD at NASA / GSFC
& Michigan Tech. U. | <urn:uuid:88be11d4-de8d-492c-8475-e850f41310c1> | 3.546875 | 252 | Knowledge Article | Science & Tech. | 43.020577 |
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2008 June 18
Explanation: What if the atmosphere above you became one gigantic lens? This actually happens when a nearly transparent sheet of pyramid-shaped ice crystals falls from the sky in a common orientation. These ice crystals act together like millions of miniature ice mirrors, with external and internal reflections from different faces creating arcs and halos of different radii. An amazing display of pyramid ice crystal halos was captured on June 5 above Tampere, Finland. Visible above are very unusual sun halos of 9, 18, 20, 23, and 24 degrees. In contrast, thin and flat falling ice crystals will produce a halo of 22 degrees only. The high clouds containing the ice crystals are faintly visible, as are some sundogs. The usual Sun image was covered behind a light post, and the above image was significantly digitally sharpened. It is not currently known how large areas of nearly uniform pyramidal ice crystals form.
Authors & editors:
Jerry Bonnell (UMCP)
NASA Official: Phillip Newman Specific rights apply.
A service of: ASD at NASA / GSFC
& Michigan Tech. U. | <urn:uuid:1309cec4-34fd-4e84-baa9-bf242be7d547> | 3.59375 | 263 | Knowledge Article | Science & Tech. | 37.503707 |
| Author | Massachusetts Institute of Technology |
| Publisher | Massachusetts Institute of Technology |
| Linking from code with a different license | Yes |
The MIT License is a free software license originating at the Massachusetts Institute of Technology (MIT). It is a permissive free software license, meaning that it permits reuse within proprietary software provided all copies of the licensed software include a copy of the MIT License terms. Such proprietary software retains its proprietary nature even though it incorporates software under the MIT License. The license is also GPL-compatible, meaning that the GPL permits combination and redistribution with software that uses the MIT License.
Software packages that use one of the versions of the MIT License include Expat, PuTTY, the Mono development platform class libraries, Ruby on Rails, Lua (from version 5.0 onwards), and the X Window System, for which the license was written.
Because MIT has used many licenses for software, "MIT License" is considered ambiguous by the Free Software Foundation. "MIT License" may refer to the "Expat License" (used for Expat) or to the "X11 License" (also called "MIT/X Consortium License"; used for the X Window System by the MIT X Consortium). The "MIT License" published on the official site of Open Source Initiative is the same as the "Expat License".
The X11 License adds the following clause restricting promotional use of the copyright holders' names:

Except as contained in this notice, the name(s) of the above copyright holders shall not be used in advertising or otherwise to promote the sale, use or other dealings in this Software without prior written authorization.
The XFree86 Project uses a modified MIT License for XFree86 version 4.4 onward. The license includes a clause that requires attribution in software documentation, like the original 4-clause BSD license. The Free Software Foundation contends that this addition is incompatible with version 2 of the GPL, but compatible with version 3:
The end-user documentation included with the redistribution, if any, must include the following acknowledgment: "This product includes software developed by The XFree86 Project, Inc (http://www.xfree86.org/) and its contributors", in the same place and form as other third-party acknowledgments. Alternatively, this acknowledgment may appear in the software itself, in the same form and location as other such third-party acknowledgments.
A common form of the MIT License (from OSI's official site, which is the same version as the "Expat License", and which is not identical to the X source code) is defined as follows:
Copyright (C) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Comparison to other licenses
The MIT License is similar to the 3-clause "modified" BSD license, except that the BSD license contains a notice prohibiting the use of the name of the copyright holder in promotion. This is sometimes present in versions of the MIT License, as noted above.
The original BSD license also includes a clause requiring all advertising of the software to display a notice crediting its authors. This "advertising clause" (since disavowed by UC Berkeley) is present in the modified MIT License used by XFree86.
The MIT License states more explicitly the rights given to the end-user, including the right to use, copy, modify, merge, publish, distribute, sublicense, and/or sell the software.
The Simplified BSD license used by FreeBSD is essentially identical to the MIT License, as it contains neither an advertising clause, nor a prohibition on promotional use of the copyright holder's name.
Also similar in terms is the ISC license, which has a simpler language.
The University of Illinois/NCSA Open Source License combines text from both the MIT and BSD licenses; the license grant and disclaimer are taken from the MIT License.
- Stallman, Richard. "Various Licenses and Comments about Them # Expat License". Free Software Foundation. Retrieved 5 December 2010.
- Stallman, Richard. "Various Licenses and Comments about Them # X11 License". Free Software Foundation. Retrieved 5 December 2010.
- "Open Source Initiative OSI - The MIT License:Licensing". Open Source Initiative. Retrieved 5 December 2010.
- Dickey, Thomas E. "NCURSES — Frequently Asked Questions (FAQ)".
- "XFree86 License (version 1.1)". XFree86 Project. Retrieved 2007-07-12.
- "Various Licenses and Comments about Them". Free Software Foundation. Retrieved 2011-05-10.
- "To All Licensees, Distributors of Any Version of BSD". University of California, Berkeley. 1999-07-22. Retrieved 2006-11-15.
- MIT License variants
- The MIT License template (Open Source Initiative official site)
- Expat License
- X11 License
- XFree86 License
The factorials and binomials have a very long history connected with their natural appearance in combinatorial problems. Such combinatorial-type problems were known and partially solved even in ancient times. The first mathematical descriptions of binomial coefficients, arising from expansions of (a + b)^n, appeared in the works of Chia Hsien (1050), al-Karaji (about 1100), Omar al-Khayyami (1080), Bhaskara Acharya (1150), al-Samaw'al (1175), Yang Hui (1261), Tshu shi Kih (1303), Shih-Chieh Chu (1303), M. Stifel (1544), Cardano (1545), Scheubel (1545), Peletier (1549), Tartaglia (1556), Cardan (1570), Stevin (1585), Faulhaber (1615), Girard (1629), Oughtred (1631), Briggs (1633), Mersenne (1636), Fermat (1636), Wallis (1656), Montmort (1708), and De Moivre (1730). B. Pascal (1653) gave a recursion relation for the binomial, and I. Newton (1676) studied its cases with fractional arguments.
It was known that the factorial grows very fast. Its growth speed was estimated by J. Stirling (1730), who found the famous asymptotic formula for the factorial named after him. A special role in the history of the factorial and binomial belongs to L. Euler, who introduced the gamma function as the natural extension of the factorial (n!) for noninteger arguments and used notations with parentheses for the binomials (1774, 1781). C. F. Hindenburg (1779) used not only binomials but introduced multinomials as their generalizations. The modern notation n! was suggested by C. Kramp (1808, 1816). C. F. Gauss (1812) also widely used binomials in his mathematical research, but the modern binomial symbol was introduced by A. von Ettinghausen (1826); later Förstemann (1835) gave the combinatorial interpretation of the binomial coefficients.
A. L. Crelle (1831) used a symbol for the generalized factorial a(a+1)(a+2)...(a+n-1). Later, P. E. Appell (1880) attached the name "Pochhammer symbol" to the notation for this product because it was widely used in the research of L. A. Pochhammer (1890).
While the double factorial was introduced long ago, its extension for complex arguments was suggested only several years ago by J. Keiper and O. I. Marichev (1994) during the implementation of the function Factorial2 in Mathematica.
The factorial and binomial functions have many classical combinatorial applications.
What is a Magnetar?
A magnetar is a type of neutron star with an extremely powerful magnetic field, the decay of which powers the emission of high-energy electromagnetic radiation, particularly X-rays and gamma rays.1
On March 5, 1979, several months after dropping probes into the toxic atmosphere of Venus, two Soviet spacecraft, Venera 11 and 12, were drifting through the inner solar system on an elliptical orbit. It had been an uneventful cruise. The radiation readings on board both probes hovered around a nominal 100 counts per second. But at 10:51AM EST, a pulse of gamma radiation hit them. Within a fraction of a millisecond, the radiation level shot above 200,000 counts per second and quickly went off scale.
Eleven seconds later gamma rays swamped the NASA space probe Helios 2, also orbiting the sun. A plane wave front of high-energy radiation was evidently sweeping through the solar system. It soon reached Venus and saturated the Pioneer Venus Orbiter’s detector. Within seconds the gamma rays reached Earth. They flooded detectors on three U.S. Department of Defense Vela satellites, the Soviet Prognoz 7 satellite, and the Einstein Observatory. Finally, on its way out of the solar system, the wave also blitzed the International Sun-Earth Explorer.
The pulse of highly energetic, or “hard,” gamma rays was 100 times as intense as any previous burst of gamma rays detected from beyond the solar system, and it lasted just two tenths of a second. At the time, nobody noticed; life continued calmly beneath our planet’s protective atmosphere. Fortunately, all 10 spacecraft survived the trauma without permanent damage. The hard pulse was followed by a fainter glow of lower-energy, or “soft,” gamma rays, as well as x-rays, which steadily faded over the subsequent three minutes. As it faded away, the signal oscillated gently, with a period of eight seconds. Fourteen and a half hours later, at 1:17AM on March 6, another, fainter burst of x-rays came from the same spot on the sky. Over the ensuing four years, Evgeny P. Mazets of the Ioffe Institute in St. Petersburg, Russia, and his collaborators detected 16 bursts coming from the same direction. They varied in intensity, but all were fainter and shorter than the March 5 burst.
Astronomers had never seen anything like this. For want of a better idea, they initially listed these bursts in catalogues alongside the better-known gamma-ray bursts (GRBs), even though they clearly differed in several ways. In the mid-1980s Kevin C. Hurley of the University of California at Berkeley realized that similar outbursts were coming from two other areas of the sky. Evidently these sources were all repeating, unlike GRBs, which are one-shot events [see “The Brightest Explosions in the Universe,” by Neil Gehrels, Luigi Piro and Peter J. T. Leonard; Scientific American, December 2002]. At a July 1986 meeting in Toulouse, France, astronomers agreed on the approximate locations of the three sources and dubbed them “soft gamma repeaters” (SGRs). The alphabet soup of astronomy had gained a new ingredient.
Another seven years passed before two of us (Duncan and Thompson) devised an explanation for these strange objects, and only in 1998 did one of us (Kouveliotou) and her team find, at the burst position, the remains of a star that exploded 5,000 years ago. Unless this overlap was pure coincidence, it put the source 1,000 times as far away as theorists had thought—and thus made it a million times brighter than the Eddington limit. In 0.2 second the March 1979 event released as much energy as the sun radiates in roughly 10,000 years, and it concentrated that energy in gamma rays rather than spreading it across the electromagnetic spectrum.2
About 26 magnetars are known.
Mathematics is an activity of investigation and exploration. Informally, both calculi and algebras are tools which consist of sets of symbols and systems of rules (usually called axioms) for manipulating those symbols.
Calculi tend to be specified/defined/explored/used to answer questions of "calculation" or reckoning, in some very general sense. Calculi tend to be used to investigate properties of objects (e.g., "What is the area under the curve?").
Algebras tend to be specified/defined/explored/used to answer questions about how different "things" are related, in some very general sense. Algebras tend to be used to study the relationship between objects (e.g., "Is this equation 'the same' as that equation?").
I think it is safe to say that the term "algebra" today carries a bit more meaning to most mathematicians than the general term "calculus".
The Calculus (as taught in high-school or undergraduate university), also known as "infinitesimal calculus", is a calculus focused on limits, functions, derivatives, integrals, and infinite series. It is chiefly concerned with calculations or answering questions about change. The Calculus uses the complex numbers (chiefly) as a foundation for this investigation.
Opening a book on computer science, you might find a "calculus of computation" which might involve symbols and rules which let one "calculate" or "discover" behavioral properties of a computer program. As a foundation, such a calculus might use "states" and "transitions", instead of the complex numbers, to ground the investigation.
Elementary Algebra (ie. high-school algebra) is, informally, the study of relationships of variables and structures (e.g. equations) arising from combining variables according to certain rules (i.e. performing "operations"). It uses the complex numbers as the basic foundation in which one could "check" or "verify" statements, but quickly one finds that "calculating with numbers" is not that useful (or practical) in investigating relationships between equations.
"The general theory of arithmetic operations is algebra: so we can also develop an algebra of set theory." - Concepts of Modern Mathematics, Ian Stewart
In that sense, Elementary Algebra is more "abstract" than arithmetic, and is often the subject where schools (specifically bad teachers) lose a student's interest and attention in mathematics. It is a tragedy, since it is exactly at Elementary Algebra that things get interesting.
In computer science or other engineering disciplines, you might find a "process algebra" when reasoning about how various states of a computer program relate to each other. We can ask questions like: is a specification of a collection of processes "functionally equivalent" to another specification (i.e., do they do the same thing, as in the case of a particular hardware design versus a software program)? The same "process algebra" could possibly be used to reason about how the various "states" of a garage door opener relate to each other. Such an algebra might use states, transitions, and time as a foundation.
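To make this concrete, here is a toy Python sketch (all states and events are invented for illustration) of the kind of labeled transition system a process algebra reasons about:

```python
# Garage-door opener as a labeled transition system.
TRANSITIONS = {
    ("closed",  "press"): "opening",
    ("opening", "press"): "stopped",
    ("opening", "limit"): "open",
    ("open",    "press"): "closing",
    ("closing", "press"): "stopped",
    ("closing", "limit"): "closed",
    ("stopped", "press"): "closing",
}

def run(state, events):
    """Fold a sequence of events through the transition relation."""
    for event in events:
        state = TRANSITIONS.get((state, event), state)  # ignore impossible events
    return state

print(run("closed", ["press", "limit", "press", "limit"]))  # -> "closed"
```

Two specifications could then be deemed "functionally equivalent" if they end in the same state for every possible event sequence.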
Climate & Energy
Ever since Earth's creation, the oceans have been absorbing vast amounts of carbon dioxide in a system that keeps our planet's atmosphere in balance. Now, because of the burning of fossil fuels, the oceans are becoming saturated with carbon. Carbon dioxide is changing the very chemistry of the oceans, causing them to become more acidic and jeopardizing the future of coral reefs and organisms that produce shells. The oceans are also warming, thus increasing the intensity of storms, causing sea levels to rise and disrupting ecosystems and ocean circulation.
Yesterday marked the 40th anniversary of Landsat, the longest-running program focused on acquiring satellite photos of Earth. The Landsat satellite snaps one complete photo of the Earth's surface every 16 days, and the petabytes of photos collected over the years have given scientists a view into how our planet's surface has changed over time, whether by natural or human-caused means. Google is currently working to make the photos easily enjoyable by the general public by transforming them into time-lapse videos.
Here are a few of the time-lapses created so far:
Deforestation of the Amazon
Growth of Las Vegas
Drying of the Aral Sea
You can dive into the data yourself by visiting Google Earth Engine.
Over the past several decades, paleontologists—including [Jack] Horner—have found ample evidence to prove that modern birds are the descendants of dinosaurs, everything from the way they lay eggs in nests to the details of their bone anatomy. In fact, there are so many similarities that most scientists now agree that birds actually are dinosaurs, most closely related to two-legged meat-eating theropods like Tyrannosaurus rex and velociraptor.

That's funny. Here's where Jurassic Park comes in:
But “closely related” means something different to evolutionary biologists than it does to, say, the people who write incest laws. It’s all relative: Human beings are almost indistinguishable, genetically speaking, from chimpanzees, but at that scale we’re also pretty hard to tell apart from, say, bats.
These regulatory genes—the master switches of development—contain the recipes for making certain proteins that stick to different stretches of the genome, where they function like brake shoes, controlling at what time during development, and in what part of the body, other genes (for things like growth-factor proteins or actual structural elements) get turned on. The same basic molecular components get deployed to make the six-legged architecture of an insect or fish fins or elephant trunks. Different body shapes aren't the result of different genes, though genetic makeup certainly plays a role in evolution. They're the result of different uses of genes during development. So making a chicken egg hatch a baby dinosaur should really just be an issue of erasing what evolution has done to make a chicken. "There are 25 years of developmental biology underlying the work that makes Horner's thought experiment possible," says Carroll, now a molecular biologist at the University of Wisconsin-Madison. Every cell of a turkey carries the blueprints for making a tyrannosaurus, but the way the plans get read changes over time as the species evolves.

That is one of the best definitions of Hox genes that I have ever seen. The problem is that you are looking at a developmental level that is very basic. It will be interesting to see how this is applied in the next few years.
As alternative biochemistries go, a fair amount of both serious science and science fiction has suggested the idea of silicon based life — life which uses silicon to form its main structures as opposed to the carbon that makes up life on Earth. After all, despite the fact that carbon-based life is the only kind of life we know of, it would be blinkered to assume that no other life is possible. In his book, The Cosmic Connection, Carl Sagan coined the phrase "carbon chauvinism" to describe such assumptions. It's an interesting concept to think about, largely because of how utterly alien silicon-based life could be!
The basic assumption behind the idea of silicon-based life is simple. Silicon sits right underneath carbon in the periodic table, so it should have similar chemistry. The two certainly do share a few things in common — both can form four bonds, both are reasonably reactive (but not too reactive) and both have fairly extensive chemistries. For instance, both react with hydrogen to form methane (CH4) and silane (SiH4).
Sadly for silicon, there are a lot of ways in which it's simply not as good as carbon. The biggest difference is that carbon can catenate. In other words, it can form strong chemical bonds with itself, meaning that carbon can form rings, chains and cages in all manner of shapes and sizes. The number of possible structures that carbon can make is theoretically infinite, given enough carbon atoms. Silicon, on the other hand, doesn't much like to catenate. Silicon-silicon bonds are relatively weak, so silicon chains are unstable. So much so that, at least under Earth-like conditions, it's extremely difficult to form any silicon chains at all, let alone rings or anything else. Disilane (Si2H6) will actually combust spontaneously in air. Water too, will break it apart (forming silicic acids). Also, as I found out previously, silicon isn't even too fond of bonding with carbon.
Even considering the possibility of siloxanes (chains made up of silicon and oxygen), there's one other big fact standing in the way of silicon biochemistry. Earth's crust is full of silicon. Oxygen too. Actually, Earth is rather odd in the Universe, in that it contains much more silicon than carbon. That life chose carbon anyway and mostly ignored silicon is surely an indicator that either silicon is unsuitable, or that carbon simply works better.
So it’s unlikely we’ll find anything like the Horta living on any of these new planets being discovered. Sorry Trek fans.
It’s worth mentioning though, that while Earth life isn’t actually based on silicon, some life does use it. Diatoms create their tiny but beautiful shells out of silica. Even you and I contain silica.
But carbon’s the bit that counts…
Mike Shanahan (Under the Banyan) reports on the rapid decline of seed dispersing and other animals in the world’s most botanically diverse forest. (And no, it doesn’t have anything to do with that bug-bear of northern-hemisphere conservationists, overpopulation, as he details in a comment.)
Mike at Under the Banyan reports on the seemingly daunting but ultimately encouraging struggle to recover a forest devastated by logging in Borneo.
The national park managers showed us before and after photographs that revealed how they were slowly turning a wasteland into something that once more resembled a forest. Since 2005, they have planted more than a million trees on 5,000 hectares of the burnt and deforested land. In 2012, they aim to plant trees on another 2,000 hectares.
This is just a start. Because forests like that at Sebangau store vast quantities of carbon below ground in their buried peat and above ground in their trees, they can play an important role in limiting climate change.
It means that efforts to reforest Sebangau could be among the first projects in line for funding under an international scheme called REDD+ that will allow polluting companies and countries to offset their carbon emissions by paying to plant trees and protect forests.
Read the rest of the post to learn how this could help save one of our closest animal cousins from extinction.
In a recent post at Under the Banyan, Mike rails at the FAO's definition of "forest" and the poor policy decisions it can lead to:
Scientists have tried to explain how important real forests are for limiting climate change, tackling poverty and creating green economies based on timber and other forest products.
But the fate of forests gets decided in concrete capitals where policymakers pore over green-tinged maps and financial spreadsheets that show only some of the costs and benefits of changing a real forest into anything else.
Right now, somewhere in the world, one of these policymakers is reading a technical document about forests — they are reading small black print on a dull pale page and they are probably wishing the document or the day was shorter.
It makes me wonder how many of the bureaucrats who will decide the fate of the world’s tropical forests have actually walked in one. And how the protectors of the forests can encourage more policymakers to take that journey.
Magnetomotive force, also known as magnetic potential, is the property of certain substances or phenomena that gives rise to magnetic fields. Magnetomotive force is analogous to electromotive force or voltage in electricity.
The standard unit of magnetomotive force is the ampere-turn (AT), represented by a steady, direct electrical current of one ampere (1 A) flowing in a single-turn loop of electrically conducting material in a vacuum. Sometimes a unit called the gilbert (G) is used to quantify magnetomotive force. The gilbert is defined differently, and is a slightly smaller unit than the ampere-turn. To convert from ampere-turns to gilberts, multiply by 1.25664. Conversely, to convert from gilberts to ampere-turns, multiply by 0.795773.
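A trivial sketch of those conversions in code (the factors are copied from the paragraph above):

```python
def at_to_gilbert(ampere_turns):
    """Ampere-turns -> gilberts."""
    return ampere_turns * 1.25664

def gilbert_to_at(gilberts):
    """Gilberts -> ampere-turns."""
    return gilberts * 0.795773

print(at_to_gilbert(10.0))      # ~12.57 G
print(gilbert_to_at(12.5664))   # ~10.0 AT
```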
Although the standard definition of magnetomotive force involves current passing through an electrical conductor, permanent magnets also exhibit magnetomotive force. The same is true for planets with magnetic fields, such as the Earth, Jupiter, Saturn, Uranus, and Neptune. The Sun also generates magnetomotive forces, particularly in the vicinity of sunspots.
Rufous-fronted Laughingthrush Garrulax rufifrons is restricted to the mountains of west and central Java, Indonesia. It is currently classified as Near Threatened on the basis that it approaches the thresholds for Vulnerable under criteria A2cd+3cd; B1ab(i,ii,iii,v). It has a very small range, which is not severely fragmented, but within which it has become scarce as a result of exploitation for the cage-bird trade, as well as habitat loss in some areas. The global population size has not been quantified, but the species is described as uncommon (del Hoyo et al. 2007). Adequate data are lacking on the precise magnitude of declines, but the population is suspected to be in moderately rapid decline overall.
Recent information suggests that the threats faced by this species are greater than previously thought. The species is heavily trapped for trade, but published data on this are lacking (D. Yong in litt. 2012). It is also threatened by habitat loss and disturbance, and is likely to be impacted by climate change in the future (D. Yong in litt. 2012). As a result, this species may qualify for uplisting to Vulnerable under criteria A, B and/or C of the IUCN Red List. If there is evidence to suggest that the population is declining at a rate of at least 30% over three generations (estimated at c.14 years in this species [BirdLife International, unpubl. data]), it would qualify as Vulnerable under criteria A2cd+3cd+4cd. If the species’s range has become severely fragmented owing to on-going habitat loss, or it is found at ≤10 locations, this species would qualify as Vulnerable under criterion B1ab(i,ii,iii,v). If population estimates show that the population is <10,000 mature individuals, and it is continuing to decline by ≥10% over the past 14 years, it would qualify as Vulnerable under criterion C1, or if all subpopulations are ≤1,000 mature individuals, it would warrant uplisting under criterion C2a(i).
Very little is known from the majority of sites within this species’s range (D. Yong in litt. 2012); more details are required in order to determine its threat status. Further information is particularly welcome on this species’s population size, trends, distribution and the severity of threats.
del Hoyo, J., Elliott, A. and Christie, D. (2007) Handbook of the Birds of the World, vol. 12: Picathartes to Tits and Chickadees. Barcelona, Spain: Lynx Edicions.
All points are locations. The connections between the points have a specific weight. Not all connections are bidirectional (a dot marks a start travel point). When Calculate is pressed, all routes from the selected location are calculated. When a route is selected in the listbox, the shortest route is visually shown by coloring the start dots red.
In this example, the shortest route from 0 to 4 is going through location 2, 1 and then 4.
Dijkstra was a Dutch computer scientist who invented a fast and simple way to calculate the shortest path between two points. Many examples I have found on the Internet implement that algorithm but none of them have done it in an Object Oriented way. So I thought of making my own.
Using the Code
The code contains two projects:
GUI: Shows the information visually
- To add locations, click on the 'Add Location' button and then click on the map where you want to add locations.
- To add routes, click on the 'Add Location' button to deactivate the add-location mode, then click on a start location, then click on an end location. The weight of the route can be configured on top.
RouteEngine: Calculates the route
I will only go into details about the RouteEngine. How the UI is handled is not so important for this article, but if you need information about it, you can always ask.
Connection: This class holds the information about the connection between two dots. This is a one-directional connection from A (the start point is visually shown with a dot) to B with a specific weight attached.
Location: Just a location (for example 1).
RouteEngine: This class will calculate all routes from one given start Location.
Route: This class holds the information about a route between two points (generated by the RouteEngine).
Location: The simplest class. It only holds a name to display.
Connection: This class contains two Location objects and a weight:

```csharp
public Connection(Location a, Location b, int weight)
{
    this._a = a;
    this._b = b;
    this._weight = weight;
}
```
Route: This class contains a route. It has only a list of connections and the total weight. This class is generated by the route engine.
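A hypothetical sketch of what this class might look like (member names are assumed, not taken from the article's source):

```csharp
using System.Collections.Generic;

// Route: the connections that make up a path, plus its total weight.
public class Route
{
    public List<Connection> Connections { get; } = new List<Connection>();

    // "Infinite" until the engine has found a path to this destination.
    public int Cost { get; set; } = int.MaxValue;
}
```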
RouteEngine: This is the class that drives the component. The algorithm is as follows:
1. Set the startPosition as active.
2. Set the total weight of all routes to infinite.
3. Iterate through all connections of the active position and store their weight if their weight is smaller than their current weight.
4. Set the active position as used.
5. Set the nearest point (at whatever location) that isn't used as active.
6. Repeat steps 3, 4, and 5 until all positions are used.
The following method performs all these steps (plus some extra checking and bookkeeping). The Dictionary returned holds each destination Location and the corresponding Route to it.
```csharp
// Reconstructed sketch: the excerpt lost its braces, so control flow is a best guess.
Location _locationToProcess = null;
foreach (Location _location in _shortestLocations)
{
    if (_shortestPaths[_location].Cost == int.MaxValue)
        continue;                       // not yet reachable; skip
    _locationToProcess = _location;     // nearest unused location
    break;
}

var _selectedConnections = from c in _connections
                           where c.A == _locationToProcess
                           select c;

foreach (Connection conn in _selectedConnections)
{
    // Relax: record the cheaper cost if this connection improves the route.
    if (_shortestPaths[conn.B].Cost > conn.Weight + _shortestPaths[conn.A].Cost)
        _shortestPaths[conn.B].Cost = conn.Weight + _shortestPaths[conn.A].Cost;
}
```
- 24 December, 2007: First release
Glen Paul: G'day, and welcome to CSIROpod, I'm Glen Paul. The State of the Climate 2012, recently released by CSIRO and the Australian Bureau of Meteorology, details the latest observations of Australia's climate, and provides analysis of the factors that influence it.
The headline finding in the State of the Climate 2012 is that Australia's land and oceans have continued to warm in response to rising CO2 emissions from the burning of fossil fuels. Though much of Australia has swung from drought to floods since the last State of the Climate summary in 2010, it notes that the long-term warming trend has not changed, with each decade having been warmer than the previous decade since the 1950s. And what's more, the rate of change is increasing.
Joining me on the line to discuss the State of the Climate 2012 is CSIRO's Dr Michael Raupach. Mike, what are the fundamental differences showing up in this summary compared to the observations made in 2010?
Dr Raupach: Well the first difference is that over the last two years there has been a cooling trend in Australia because of two significant La Niña events in the Pacific Ocean. The major issue that everybody wonders of course is whether this cooling trend is the start of something much bigger, that somehow global warming has ceased, or the alarm bells have been ringing much too loudly.
This by all objective analysis is not the case. The cooling trend over the last couple of years in Australia is no indication that global warming has ceased, and the main reason for that is that we can attribute the cooling trend pretty reliably to the two La Niña events in the Pacific Ocean, which have the consequence of pulling down temperatures globally. This is a well know feature of La Niñas, that correlate very well with mild temporary coolings in the global atmosphere and climate, and this is what’s caused the temporary drop in Australia's temperature.
So we do not think that the cooler temperatures over the last two years in Australia are a signal that in any sense global warming has stopped.
Glen Paul: So the rains and the ensuing floods haven’t muddied the waters, so to speak, in relation to what is climate change. But how do you separate weather patterns such as El Niño and La Niña from the effects of climate change?
Dr Raupach: That is done by distinguishing the short term from the long term, and we know that short term weather has always fluctuated on scales from days to years, we've had climate variability on all of those scales for as long as we’ve been able to record the weather, and historically of course anecdotes tell us that this has been around all the time throughout human civilisation and longer.
So we, in order to detect climate change, have to screen this kind of short term variability out of the longer term trends, and this can be done in a number of ways by applying various filters to the record to remove the influence of El Niños and La Niñas, and other causes of short term fluctuations, including volcanic eruptions.
And when this is done the longer term trends remain clear, they’ve been clear for a century or more in the record of observations that we have, so those longer term trends are one – by far from the only one – but one of a number of reasons why the climate community believes that climate change is a reality, that the major cause is anthropogenic human emissions of carbon dioxide and other greenhouse gases, and that longer term we need to bring those emissions down in order to prevent climate change from getting into really dangerous territory in the coming Century.
Glen Paul: Hmm. And how and when will climate change begin to impact on these weather patterns themselves?
Dr Raupach: Well there has of course been speculation that climate change is already impacting on weather patterns, that we’ve had the warmest sea surface temperatures on record off the north coast of Australia, this has been associated with very high rainfall events, particularly in the north, and to some extent on the eastern part of the country. So there’s some evidence beginning to emerge that there is a link between these high temperatures, certainly they can be attributed to the combined influence of La Niña and a general warming of the climate, and those high rainfall events in northern and eastern Australia.
Glen Paul: Uh-huh. Now according to the summary, concentrations of long-lived greenhouse gases in the atmosphere reached a new high in 2011. What sort of increases are we seeing there?
Dr Raupach: Since 2000 we’ve seen an increase in greenhouse gas emissions, particularly the emissions of carbon dioxide, at around 3 per cent per year. That has continued up to the last year, despite a small downward blip in 2009, associated with the global financial crisis. That blip was completely overcome the following year, 2010, when the rate of increase of carbon dioxide emissions from fossil fuels was about 5.9 per cent.
So we've seen through the last decade fossil fuel emissions of CO2 increasing at about 3 per cent per year on average. That’s translated to a rise in CO2 concentrations which on average through that decade, the last decade, has been close to two parts per million per year; last year it was indeed two parts per million per year. And the consequence of that is that we’ve seen an increase over the last Century in CO2 concentrations from preindustrial values of around 280 parts per million, to the current value which is now well above 390 parts per million.
I checked the other day the reading from the Mauna Loa CO2 observations, and the reading for December 2011 was 393 parts per million for CO2. So we are indeed seeing this long-term trend, very steady, despite the inter-annual variability in the strength of the CO2 sinks in land and ocean, that’s causing CO2 concentrations to rise at the moment by about two parts per million per year.
Glen Paul: Hmm, and of course with the rise in CO2 emissions comes the further warming of the land and the oceans. Does sea level rise continue to be an issue in the report?
Dr Raupach: The sea level rise does continue to be an issue. It’s been going up globally by an average of about just over three millimetres per year for the last three decades. Sea level rises around Australia are not uniform, and the sea level does not go up like water in a still bath which is having a slow tap run into it; it doesn’t rise uniformly all around the world. Sea levels vary, and the rates of rise vary from place to place.
Australia, for reasons associated with ocean circulations and wind patterns on the oceans, has seen larger rises around its coastlines on average than the rest of the world. But the major story is that we can expect to see sea level rise continuing, and most of that sea level rise is because of thermal expansion of the oceans – that is the ocean waters expanding as the climate warms.
Ninety percent of the excess energy that the earth’s system is receiving as a result of excess greenhouse gases in the atmosphere ends up in the oceans, and that excess energy is responsible for the bulk of the sea level rise that we’re seeing.
Glen Paul: OK. Now, much of Australia's population is concentrated along the coast, and some might say, well they’ve not noticed any significant change in sea level down at the local wharf, where they’ve fished for years. When might they start to notice or what might that wharf look like in 50 years time if things continue the way they're going?
Dr Raupach: Well, if sea level rise continues as predicted we’ll be getting rises of anywhere between 30 centimetres and well over a metre, with a median prediction of well over half a metre by the end of this Century, by 2100. So if half of that was realised by 2050, then we’d be looking at sea levels which are anywhere between 15 and 50 to 60, or even higher, centimetres above where they are now on the median.
Of course, this will cause a difference to sea levels of wharves, but perhaps more importantly it will cause increased damage as a result of storm events, because the relationship between inland propagation of a storm and sea level is highly leveraged. Storms will come in a lot further inland as a result of storm surges and the like, in response to each metre of sea level rise by something like a factor of one to a hundred.
Glen Paul: And obviously that’s not the kind of news that we want to hear about our future, yet you don’t come to these conclusions by pulling facts and figures out of your hat, to put it politely. A lot of work and top notch science goes into these reports. How does it make you feel when you hear some commentators being so dismissive of the research?
Dr Raupach: There is a tremendous tendency in the public debate about climate change to have that debate revolve around factoids, around small pieces of information which are taken out of context, and one example is the claim by some commentators that for Australia to meet its emissions reductions target of 5 per cent below 2000 levels by the year 2020 would make zero difference, or negligible difference to global climate change, on the grounds that Australia contributes only 1.3 per cent of global emissions anyway.
This ignores the fact the atmosphere is a globally shared commons, and climate change is a global problem, and therefore for Australia to take a stance like that would be inviting the view that Australia is not serious about climate change at all. Or we could draw the analogy between a speeder on a road who argues that he can drive perfectly safely, and therefore speed limits should not apply to him, and our society takes a dim view of that kind of thing.
And it’s likely in coming years that the global society will take a dim view of nations which take the corresponding view of the global atmosphere, that is that to the extent that climate change is perceived as a global problem, it’s everybody else's problem, not our own.
So we, Australia, need to meet our emissions targets, not only because we will make a difference – we will – but also because it’s the only way of us showing that we expect other countries to do the same thing.
Glen Paul: Fair enough. And of course people are welcome to form their own opinions by reading the summary, which is available as a downloadable PDF on the CSIRO website. Thank you very much for talking to me today about it, Mike.
Dr Raupach: Thanks, Glen.
Glen Paul: Dr Michael Raupach. For more information find us online at www.csiro.au. You can like us on Facebook, or follow us on Twitter at CSIROnews.
Capella, brightest star in the constellation Auriga; Bayer designation α Aurigae; 1992 position R.A. 5h16.1m, Dec. +45°59′. Capella is a yellow giant star of spectral class G8 III and is also a spectroscopic binary star with a component of spectral class F. Its apparent magnitude of 0.06 makes it the 6th-brightest star in the sky. Capella is about 45 light-years from the earth. Its name is from the Latin for "little she-goat."
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
The Universe Tonight
Submillimeter Astronomy on Mauna Kea
By Dr. Hiroko Shinnaga
(Staff Research Scientist at the Caltech Submillimeter Observatory)
Hiroko Shinnaga is a staff research scientist at the Caltech Submillimeter Observatory. She acquired her Doctor of Science degree at Ibaraki University in Japan. Before she started working at the CSO, she was a postdoctoral fellow at the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan and at the Harvard-Smithsonian Center for Astrophysics for the Submillimeter Array (SMA) project.
When you look up at the dark night sky, you'll find so many stars are shining above you. How and where were they born? How do they evolve? And how do they end their lives? Those are some of the research questions in submillimeter astronomy.
Submillimeter waves (300–1000 microns) are longer than infrared and shorter than radio waves. For this reason, the technology used for submillimeter instruments makes use of both radio and infrared technologies. Water vapor in the Earth's atmosphere strongly absorbs submillimeter radiation; therefore, submillimeter telescopes need to be located at high, very dry sites such as the summit of Mauna Kea. Submillimeter astronomy is one of the newest and least explored areas of astronomy because of technical challenges and because it requires very dry weather conditions.
Submillimeter radiation is mainly generated in very cold dusty regions of space. Stars are born inside dense cold clouds where visible light cannot penetrate. When stars reach a certain age (about ten billion years for Sun-like stars), they start shedding their outer envelope material into space before they end their lives. A large submillimeter telescope, like the one that the Caltech Submillimeter Observatory (CSO) has, can catch these scenes, which cannot be seen at other wavelengths.
The CSO is a 10.4-meter (34.1-foot) diameter radio telescope, one of the world's pioneering telescopes dedicated to submillimeter astronomy. The world-famous physicist Dr. Robert Leighton designed the telescope. The CSO has been in operation since 1987 and has been led by Professor Thomas Phillips in the Physics Department at Caltech. The radio-style dish has a homologous steel-tube backup structure. The eighty-four panels of the telescope are lightweight aluminum honeycomb hexagons. The hexagon concept was later followed by the Keck telescopes. Many key instruments at the observatory were developed by current/former graduate students and staff members at the California Institute of Technology (Caltech) and at the CSO.
The CSO has two types of instruments: SIS (Superconductor-Insulator-Superconductor) heterodyne receivers and bolometers. The heterodyne receivers can measure motions of interstellar material. The bolometers, on the other hand, are submillimeter cameras that can take pictures of astronomical objects at submillimeter wavelengths.
Mauna Kea is one of the best sites in the world for submillimeter astronomy. At Mauna Kea are the top three observatories for submillimeter astronomy: the CSO (10.4 meter diameter), the James Clerk Maxwell Telescope (JCMT, 15 meter diameter), and the Submillimeter Array (SMA, eight 6-meter dishes). These observatories are operated by different institutions. However, they can work together to create the very best submillimeter instrument in the world, named the Expanded Submillimeter Array (eSMA). The eSMA is able to observe with very high spatial resolution. Using this very special instrument, astronomers have acquired initial science results.
At the CSO, locally in Hilo, there are ten staff members, including one technical manager, one administrative assistant, two electro-mechanical technicians, two electronics technicians, one electronics engineer, and three scientists. Dr. Walter Steiger, who used to be a technical manager at the CSO, contributes to our outreach activities as a volunteer. Although we don't have many staff members, the CSO team maintains the telescope as a world premier telescope, and its performance and efficiency are exceptionally high. The CSO is open to astronomers from all over the world. You can find more information about our observatory at our websites, www.submm.caltech.edu/cso/ and www.cso.caltech.edu/outreach/kiosk/newresults_cso.html.
The Environment type exposes the following members.
Terminates all processes.
Finalizes the MPI environment. Users must call this routine to shut down MPI.
|Equals||(Inherited from Object.)|
Verifies that the MPI environment has been finalized by calling Dispose().(Overrides Object..::.Finalize()()().)
|GetHashCode||(Inherited from Object.)|
|GetType||(Inherited from Object.)|
|MemberwiseClone||(Inherited from Object.)|
|ToString||(Inherited from Object.)|
Translates an MPI error code into an appropriate exception, then throws that exception.
Determine whether the MPI environment has been finalized.
Returns the rank of the "host" process, if any.
Determine whether the MPI environment has been initialized.
Returns the rank of the process (or processes) that can perform I/O via the normal language facilities. If no such rank exists, the result will be null; if every process can perform I/O, this will return the value anySource.
Returns the maximum allowed tag value for use with MPI's point-to-point operations.
Returns the name of the currently executing processor. This name does not have any specific form, but typically identifies the computere on which the process is executing.
The level of threading support provided by the MPI library.
Returns the time, in seconds, since some arbitrary time in the past. This value is typically used for timing parallel applications.
Returns the resolution of Time, in seconds. | <urn:uuid:c4f63843-e9f7-44c8-a21d-a438015d9695> | 2.703125 | 344 | Documentation | Software Dev. | 35.781667 |
|Sep20-05, 11:56 AM||#1|
I wanted to do some research on Atomic dust.
The process would take a metal and vaporize it by a high powered LASER in a vacuum and then allow the metal vapor to cool in the vacuum an then fall to the bottem of the chamber as Atomically fine dust particles of the metal.
Is there any research out there that is attemting the study of this?
The only thing I have researched so far that's even close to this is the making of nanotubes with Carbon from vaporization techniques.
I wanted to see if Atomically fine metal dust could be Hydrolically pressed with materials that have lower melting points on an Atomically even mixture ratio and it's properties, Possibly making some odd dielectrics.
|Sep20-05, 07:03 PM||#2|
Sounds an awful lot like vacuum vapor deposition to me.
For example, when the mirror of a big telescope needs to be replaced - they put it in
a vacuum chamber and vaporize aluminum in the chamber. As the chamber is cooled,
the aluminum vapor "plates out" on all the interior surfaces including the mirror.
Dr. Gregory Greenman
|Sep20-05, 07:44 PM||#3|
You will have trouble with this. The atoms move way to fast to fall and they
will end up hitting the walls like little bullets.
|Sep20-05, 10:14 PM||#4|
I do see a problem trying to vaporize in a vacuum. As Morbius points out, the vapor simply plates out on the nearest cold surface.
On the other hand, possibly doing this in a partial vacuum with a noble gas like Xe or Kr might help cool the metal particles, and it will probably be particles rather than atoms.
There are various folks doing metal vaporization and nanoparticle manufacturing. Penn State has some folks doing physical vapor deposition using electron beam heating, and I am quite sure there are many more folks doing this too. NASA Glenn is a possibility, and so are the DOE Labs like Argonne, Oak Ridge, and perhaps Los Alamos and Livermore - but I am just speculating.
There is a trade group that does powder metallurgy, and I would expect they use PVD techniques. Most applications involve pressing metal and mechanically-blended alloy mixtures in power form to near net shape and then sintering. This approach is particularly useful for refractory metals and alloys which are extremely difficult to machine.
|Similar Threads for: Atomic Dust|
|What is ......Dust?||General Astronomy||7|
|Automobile energy cost - Dust to Dust report||Mechanical Engineering||15|
|atomic mass vs. atomic number||Introductory Physics Homework||9|
|Io sending dust out||General Astronomy||0| | <urn:uuid:f025b8c6-0294-44fe-81bd-fd919950bfb6> | 2.75 | 604 | Comment Section | Science & Tech. | 53.468349 |
Notocypraea subcarnea (Beddome 1896)
A Brief Guide to Identification
by Randy Bridges
The original description of Notocypraea subcarnea (Beddome 1896) was followed for years by confusion and mystery. With his notes on the taxonomy of this rare species Lorenz (2005, 2007) effectively clarified the matter. Unfortunately, in those unable to read or unwilling to learn, there still exists some uncertainty regarding this valid (Lorenz 2005; Meyer 2005) taxon. This report outlines the key characteristics of subcarnea, intended to help collectors identify this interesting cowry.
Since the majority of available specimens are beach collected dead shells identification may sometimes be challenging. Normally the key features are evident even in very dead shells. In many collections subcarnea is misattributed to N. angustata (Gmelin 1791) (Fig. 4) or N. comptonii f. casta Schilder & Summers 1963 (Fig. 5). Upon carefully examining the following characteristics the distinction between these taxa is not difficult.
I. Large, sparse and hazy marginal spotting. The marginal spotting is the most readily identifying feature of subcarnea. This characteristic is visible in the specimens shown in Figs. 1 and 3, and the type specimens (Fig. 2). In comparison, angustata exhibits smaller, more numerous and crowded marginal spotting (Fig. 4). N. comptonii f. casta typically exhibits no marginal spotting (Fig. 5) or, in some intermediate forms, characteristically small and crowded spotting.
II. Fine and more numerous teeth. Especially noticeable on the columellar side, this is a very distinguishing feature which is even more apparent in the larger and more elongate varieties. The posterior columellar side shows ribbing while this is mostly smooth in angustata.
III. Wide, strongly curved aperture. The aperture in angustata is narrower and straighter. The anterior terminal ridge in subcarnea is shorter and more curved than angustata.
Other characteristic traits such as the depressed shape and pale color can also be helpful in diagnosing subcarnea. With some care and thoughtful examination, identifying this cowry is relatively easy. It is hoped that collectors will find this brief guide useful for that purpose.
Fig. 1. Specimens of N. subcarnea, from Tasmania.
Row 1: left: Holotype, 22.5 mm.
Row 1: right: North Tasmania, trawled, 22.2 mm.
Row 2: Tinderbox Bay, Hobart, Tasmania, diver at 6 m, 21.3 mm.
Row 3: Port Stanley, Tasmania, beached, 26 mm.
Row 4: Port Stanley, Tasmania, beached, both 29 mm, coll. Chiapponi.
Photo kindly provided by courtesy of Dr. Felix Lorenz
Fig. 2. Beddome's original type specimens of N. subcarnea
Left: Holotype (Natural History Museum, London)
Compiled from Beddome (1898), pl. 21, figs. 8-10.
See also above Fig. 1., Top row: left.
Fig. 3. N. subcarnea from Half Moon Bay, Stanley, Tasmania, 31.5 mm.
Larger, more elongate variety.
Fig. 4. N. angustata from Port McDonnell, South Australia, 28.3 mm.
Fig. 5. N. comptonii f. casta from Port McDonnell, South Australia, 25.8 mm.
BEDDOME, C. E. (1896). Note on Cypraea angustata, Gray, var. subcarnea, Ancey. Proceedings of the Linnean Society of New South Wales, 21:467-468. Web, <http://biostor.org/reference/53586>
BEDDOME, C. E. (1898). Notes on species of Cypraea inhabiting the shores of Tasmania. Proceedings of the Linnean Society New South Wales, 22:564-576, pl. 21. Web, <http://biostor.org/reference/53589>, Pl. 21. Web, <http://biodiversitylibrary.org/page/3345641>
GROVE, S. J. (2011). A Guide to the Seashells and other Marine Molluscs of Tasmania:. Web, <http://www.molluscsoftasmania.net/Species pages/Notocypraea subcarnea.html>
LORENZ, F. (2005). Taxonomic Notes on Two Poorly Known Species of Notocypraea (Gastropoda: Cypraeidae). Visaya, 1(5):16-21.
LORENZ, F. (2007). A preliminary revision of the living Notocypraea. Web, <http://www.cowries.info/shell/noto/noto.html>
MEYER, C. (2005). Notocypraea cf. subcarnea, Cowrie Genetic Database Project. Web, <http://www.flmnh.ufl.edu/cowries/subcarnea.htm> | <urn:uuid:2c972a1b-a6c5-451a-8f1c-8b30375addd5> | 2.84375 | 1,107 | Nonfiction Writing | Science & Tech. | 59.008805 |
About 3,250 square kilometers of Antarctica's Larsen B ice shelf shattered and tore away from the continent's western peninsula early this year, sending thousands of icebergs adrift in a dramatic testimony to the 2.5 degrees Celsius warming that the peninsula has experienced since the 1950s. Those wayward chunks of ice also highlighted a perplexing contradiction in the climate down under: much of Antarctica has cooled in recent decades.
Two atmospheric scientists have now resolved these seemingly disparate trends. David W. J. Thompson of Colorado State University and Susan Solomon of the National Oceanic and Atmospheric Administration Aeronomy Laboratory in Boulder, Colo., say that summertime changes in a mass of swirling air above Antarctica can explain 90 percent of the cooling and about half of the warming, which has typically been blamed on the global buildup of heat-trapping greenhouse gases. But this new explanation doesn't mean that people are off the hook. Thompson and Solomon also found indications that the critical atmospheric changes are driven by Antarctica's infamous ozone hole, which grows every spring because of the presence of chlorofluorocarbons (CFCs) and other human-made chemicals in the stratosphere.
This article was originally published with the title A Push from Above. | <urn:uuid:963957d5-80af-47c0-ab9e-619599675e2d> | 3.578125 | 249 | Truncated | Science & Tech. | 30.795415 |
The weather office in charge of charged with monitoring and forecasting the potential for severe weather over the 48 continental United States is the Storm Prediction Center (SPC) located in Norman, OK. The information provided by SPC will give you critical information concerning the threat of severe weather in your locale.
Convective Outlooks consist of a narrative and a graphic depicting severe thunderstorm threats across the continental United States. The outlook narratives are written in technical language, intended for sophisticated weather users, and provide the meteorological reasoning for the risk areas.
This product also provides explicit information regarding the timing, the greatest severe weather threat and the expected severity of the event. The graphics that accompany the narratives provide vital information to help plan your day.
Convective Outlooks are divided into four periods.
|Day 1||This is the risk of severe weather today through early morning of the following day. Day 1 forecasts are issued five times daily; 06z (around midnight), 13z (around sunrise), 1630z (mid-morning), 20z (mid-afternoon), and 01z (early evening). This is the forecast you will see on SPC's frontpage. (What is Z-time?)|
|Day 2||Day 2 continues from the ending of Day 1 (tomorrow morning) for the next 24 hours. These are issued twice daily; 07z (1:00 am central time, CST or CDT) and 1730z (around noon).|
|Day 3||This is the forecast for the subsequent 24 hours. Day 3 forecasts are issued daily by 2:30 a.m. central time (0830z on standard time and 0730z on daylight time).|
|Days 4-8||A severe weather area depicted in the Day 4-8 period (issued at 10z (4:00 am central time, CST or CDT) indicates a 30% or higher probability for severe thunderstorms (e.g. a 30% chance that a severe thunderstorm will occur within 25 miles of any point).|
In convective outlook graphics, the green shading depicts a 10% or higher probability of thunderstorms during the valid period.
A yellow shaded area indicates a slight (SLGT) risk of severe thunderstorms during the forecast period. Depending on the size of the area, approximately 5-25 reports of one inch or larger hail, and/or 5-25 wind events, and/or 1-5 tornadoes would be possible.
The red shaded area indicates a moderate (MDT) risk of severe thunderstorms are expected. The moderate risk indicates a potential for a greater concentration of severe thunderstorms than the slight risk, and in most situations, greater intensity of the severe weather.
The fuschia shaded area indicates a high (HIGH) risk of severe thunderstorms are expected. A high risk area suggests a major severe weather outbreak is expected, with a high concentration of severe weather reports and an enhanced likelihood of extreme severe (i.e., violent tornadoes or very damaging convective wind events occurring across a large area).
With a HIGH RISK event, the potential exists for 20 or more tornadoes, some possibly EF2 or stronger, or an extreme derecho potentially causing widespread wind damage and higher end wind gusts (80+ mph) that may result in structural damage.
Finally, a "SEE TEXT" label will be used for areas where a 5% probability of severe is forecast, but the coverage or intensity is not expected to be sufficient for a slight risk.
The Public Severe Weather Outlooks (PWO) are issued when a potentially significant or widespread tornado outbreak is expected. This plain-language forecast is typically issued 12-24 hours prior to the event and is used to alert NWS field offices and other weather customers concerned with public safety of a rare, dangerous situation. The PWO is reserved for for all high risks and for moderate risks with a strong risk for tornadoes and/or widespread damaging winds. The SPC issues about 30 PWOs each year.
The MCD basically describes what is currently happening, what is expected in the next few hours, the meteorological reasoning for the forecast, and when/where SPC plans to issue the watch (if dealing with severe thunderstorm potential). Severe thunderstorm MCDs provide you with extra lead time on the severe weather development.
When conditions become favorable for severe thunderstorms and tornadoes to develop, SPC usually issues a severe thunderstorm or tornado watch. Tornadoes can occur in either type of watch, but tornado watches are issued when conditions are especially favorable for tornadoes. Severe thunderstorm watches are blue with tornado watches in red.
Watches are large areas, 20,000 to 40,000 square miles, and are issued by county. They are numbered sequentially (the count is reset at the beginning of each year). A typical watch has a duration of about four to six hours but it may be canceled, replaced, or re-issued as required. A watch is not a warning, and should not be interpreted as a guarantee that there will be severe weather!
When a watch is issued, stay alert for changing weather conditions and possible warnings. Any warnings will be issued by your local NWS Weather Forecast Office.
Usually this decision is based on a number of atmospheric clues and parameters, so the decision to issue a PDS watch is subjective. There are no hard threshold or criteria. PDS watches are most often issued in association with high risk convective outlooks. | <urn:uuid:d6523399-e021-4060-b51b-04296171892d> | 2.6875 | 1,136 | Knowledge Article | Science & Tech. | 45.813284 |
An example of the total emissivity curve as resulting from the present version of the Arcetri Code calculation is displayed in Fig. 1. It has been evaluated assuming cm-3 and adopting the Arnaud & Raymond 1992 (Fe ions) and Arnaud & Rothenflug 1985 (other ions) ion fractions and the Feldman 1992 element abundances. The calculation has been performed for temperatures in the K range, but it is important to note that for temperatures smaller than K opacity effects play an important role in line and continuum radiation formation. As these effects are not accounted for in the present calculation, the total emissivity curve and the radiative losses curves should be taken with caution below this temperature limit.
Fig. 2 shows the contribution of the most abundant elements to the total emissivity curve. Hydrogen is responsible for nearly all the radiative losses at chromospheric temperatures, while iron provides most of the output energy at high temperature. Continuum radiation may be neglected for all temperatures lower than a few million degrees, but at very high temperature free-free continuum radiation dominates the total emissivity.
Figs. 3 and 4 display the total emissivity for some of the most abundant elements in astrophysical plasmas together with the strongest lines of each of the element's ions. For some temperatures the total emissivity of some elements is dominated by the emission of a very small number of very strong lines; these are some of the strongest spectral features observed in solar and stellar spectra.
3.1. Effect of the electron density
Both continuum and line radiation may be electron density dependent, and this may cause the radiative losses and the total emissivity curve to be density dependent as well. As in literature the total emissivity curve is usually given as a function of temperature only, it is important to check the density dependence of this curve.
Continuum radiation electron density dependence stems from the two-photons continuum process: the populations of the H-like level and of the He-like level, decaying to the ground level through a two-photon process, may be altered by collisional de-excitation when electron density reaches a critical value. However the two-photons continuum represents a minor contribution to the continuum radiation at coronal densities and temperatures. Line radiation electron density dependence is given by the role played by collisional excitation and de-excitation into level population; this dependence may provide precious diagnostic tools for determining the electron density of the emitting plasma. Another source of density dependence is given by density effects on ionization and recombination coefficients. Summers 1972 and 1974 calculated density-dependent ionization equilibrium finding that ion fractions change as a function of density, mostly because of the density dependence of the dielectronic recombination coefficient; Vernazza & Raymond 1979 also find that under coronal condition ion fractions are density dependent, mostly due to collisional ionization and dielectronic recombination. Plasma microfields also may have a significant effect on dielectronic recombination, giving a further density dependence to ion fractions. Badnell et al. 1993 carried out quantal calculations for dielectronic recombination of [ C iv] in an electric field, finding that the dielectronic recombination rate could change by 40%.
However, in the literature ion fractions are usually reported as a function of electron temperature only, so in the present work it is not possible to check the effects of their density dependence on the total emissivity curve due to ionization balance.
In order to assess the density dependence due to level population we have performed the theoretical calculation of this curve assuming four different values of the electron density: cm-3. Outside this density range line radiation is density insensitive: for higher densities ion level populations for the most important lines have reached Boltzmann equilibrium, while for lower densities collisional de-excitation becomes negligible compared to radiative decay and the Coronal Model Approximation (yielding density insensitive line Contribution Functions ) may be adopted.
Fig. 5 displays the percentual difference
(with , and cm-3) between total emissivity curves calculated at different densities as a function of electron temperature. As expected, the greatest differences are found with the curves at cm-3, which are very similar, because density-dependence affects line emissivity mostly between and cm-3. Differences are always smaller than 25% and show a marked temperature dependence, being highest at transition region and coronal temperatures and decreasing down to zero at the edges of the selected temperature range.
The maximum at coronal temperatures is given by the presence of a host of strong density dependent lines formed in quiet corona, mainly from Fe, Mg and Si ions. The high temperature tail is dominated by strong, density insensitive lines and free-free continuum; the low temperaure tail is dominated by density insensitive transition region and chromospheric lines and for this reason there are small differences between computations carried out assuming different density values.
3.2. Effect of different datasets and approximations in level population computation
Level populations are strongly sensitive to any change or problem in the atomic parameters, collision strengths and transition probabilities as well as in the approximation adopted for their calculations, and this affects line radiation. It is therefore important to check the effects of different transition probabilities datasets on the resulting total emissivity curve.
As big improvements have been done in the present version of the Code versus the older version described in Landini & Monsignori Fossi 1990, we have performed a comparison between the present results and those obtained using the 1990 version of the Arcetri Code. The adopted element abundances are from Allen 1973. There are three main differences between the two versions of the Arcetri Code: (a) the old 1990 Code calculated all line intensities using the Coronal Model Approximation , (b) the collision rates were calculated using Gaunt factors and (c) radiative data came from different literature sources than in the present version of the Code.
Thus, the present comparison allows to check also the effects of different assumptions in level population calculations on the resulting total plasma emissivity.
Fig. 6 displays the percentual difference
between the two versions of the Code as a function of electron temperature. It is possible to see that rather high differences (up to 60%) are found at transition region temperatures, and smaller discrepancies occur at coronal temperatures. In the positive section of Fig. 6 the older version of the Arcetri Code has higher total emissivity than the more recent version at transition region temperatures. This is due to the presence of few very bright transitions from [ O iv], [ O v], [ C iv] whose emissivities have very different values in the two versions of the Code; their difference is due both to the use of different datasets and to the different approximations used in level population calculation leading to an overestimation of line emissivity for these transitions in the old version of the Code. The negative section of the diagram is due to the much larger number of lines included in the new version.
3.3. Effect of ionization equilibrium
Ion fractions are necessary to both line and continuum calculation and any difference in their values are usually reflected into the total emissivity curve. We have checked the changes between curves calculated adopting different ion fractions datasets. All these calculations have been carried out assuming ionization equilibrium, and ion fractions come from Shull & Steenberg 1982 (SS, but with H and He ion fractions coming from Arnaud & Rothenflug 1985), Arnaud & Rothenflug 1985 (RO), Arnaud & Rothenflug 1985 plus Arnaud & Raymond 1992 for the Fe ions (RA), Mazzotta et al. 1998 (MA).
Fig. 7 displays the percentual differences
between the results obtained adopting RA ion fractions and those obtained with the other three datasets. The overall differences are smaller than 40%, and the greater differences are found with RO ion fractions. These are due to Fe ion fractions, dominating the high temperature tail of the total emissivity curve (the other elements' ion fractions being the same). Differences with the SS and MA results are much smaller.
On the overall, the effect of the use of different ion fractions onto the total emissivity curve may rise up to a maximum of 40%, and are smaller than 20% at transition region and chromospheric temperatures below K.
3.4. Effect of element abundances
Variation of the chemical composition of the emitting plasma may change the total emissivity curve by very large amounts.
It has been long acknowledged that element abundances change in solar plasmas, and their values seem to be associated to magnetic structures in the solar atmosphere (see the reviews of Feldman et al. 1992, Feldman 1992, Mason 1995). These variations seem to be related to the First Ionization Potential (FIP) of the emitting elements (e.g. Haisch et al. 1996). Cook et al. 1989 determined the radiative loss function using photospheric, chromospheric and coronal abundaces and found huge differences in the K temperature range; they also found that these changes have serious effects on loop models.
Also Bhringer & Hensler 1989 and Sutherland & Dopita 1993 have studied the effects of metallicity variations on total emissivity curve, finding huge differences as metallicity decreases from the solar value. This is due to the importance of line radiation from elements with at temperatures between and K.
The importance of abundance changes pointed out by these authors has led us to check the effect of different element abundance values on the resulting total emissivity curve. As this curve is dominated by the emission of some elements, these effects are expected to be very large. In order to check these effects total emissivities have been calculated assuming several different sets of element abundances: Allen 1973 (AL), Feldman 1992 (FE), Grevesse & Anders 1991 (GA), Meyer 1985 (ME) and Waljeski et al. 1994 (WA).
Fig. 8 displays the percentual differences
found between the emissivity curve calculated using FE abundances and those calculated adopting the other datasets. There are huge variations (up to a factor 2.5) between FE and WA total emissivities, while differences up to 70% are found between FE and the other three sets of abundance values.
Considering the results shown in Fig. 8, and that differences up to factor 9 have been observed in the solar atmosphere between distinct structures very close to each other (e.g. Young & Mason 1997), abundance variations are a key factor for the evaluation of total plasma emissivity and need to be carefully chosen in order to be able to properly determine the plasma radiative losses.
© European Southern Observatory (ESO) 1999
Online publication: June 18, 1999 | <urn:uuid:430f305e-c97a-4b0c-a352-8b8e773d08a6> | 3.25 | 2,229 | Academic Writing | Science & Tech. | 24.489162 |
Your online destination for news articles on planets, cosmology, NASA, space missions, and more. You’ll also find information on how to observe upcoming visible sky events such as meteor showers, solar and lunar eclipses, key planetary appearances, comets, and asteroids.
A neutron star in a binary star system is spewing matter into space at nearly the speed of light.
Published: January 29, 2004
New evidence points to binary stars producing planetary nebulae.
Published: January 22, 2004
Scientists discover an extrasolar planet's magnetic field is creating sunspots on its parent star.
Published: January 17, 2004
For the first time, astronomers uncover a companion star among a supernova's remains.
Published: January 11, 2004
Space telescopes spy ancient galaxy clusters in the young universe, shedding light on the years following the Big Bang.
Published: January 5, 2004
Look for this icon. This denotes premium subscriber content.
Learn more » | <urn:uuid:deb18df3-73c4-4a37-9aed-1a3950eeb768> | 3.09375 | 202 | Content Listing | Science & Tech. | 39.235781 |
From this excellent answer I learned (correct me if I am wrong) that when writing a block cipher with say key size 128 bit, one has to pad the password given (variable size) so that it becomes exactly ...
Suppose you need to authenticate yourself to a program with the password - but the program's source code is public, the program doesn't have access to any private information and all your ...
How is synchronization of counter values achieved in the HOTP protocol? As I understand it, the server increments its counter value only if a match (of the OTP value) is found. What happens at ...
Suppose I need to store login information for a third-party website for a few users, how would I go about doing it? Since I am logging into a third party website, I need the password in plain-text, ... | <urn:uuid:c3daad68-fb58-40ac-a37a-4f7aafbeef68> | 3.4375 | 174 | Q&A Forum | Software Dev. | 67.841 |
When the Pickler encounters an object of a type it knows nothing about -- such as an extension type -- it looks in two places for a hint of how to pickle it. One alternative is for the object to implement a __reduce__() method. If provided, at pickling time __reduce__() will be called with no arguments, and it must return either a string or a tuple.
If a string is returned, it names a global variable whose contents are pickled as normal. The string returned by __reduce__ should be the object's local name relative to its module; the pickle module searches the module namespace to determine the object's module.
When a tuple is returned, it must be between two and five elements
long. Optional elements can either be omitted, or
None can be provided
as their value. The semantics of each element are:
In the unpickling environment this object must be either a class, a callable registered as a ``safe constructor'' (see below), or it must have an attribute __safe_for_unpickling__ with a true value. Otherwise, an UnpicklingError will be raised in the unpickling environment. Note that as usual, the callable itself is pickled by name.
None, then instead of calling the callable directly, its __basicnew__() method is called without arguments; this method should also return the unpickled object. Providing
Noneis deprecated, however; return a tuple of arguments instead.
obj.extend(list_of_items). This is primarily used for list subclasses, but may be used by other classes as long as they have append() and extend() methods with the appropriate signature. (Whether append() or extend() is used depends on which pickle protocol version is used as well as the number of items to append, so both must be supported.)
(key, value). These items will be pickled and stored to the object using
obj[key] = value. This is primarily used for dictionary subclasses, but may be used by other classes as long as they implement __setitem__.
It is sometimes useful to know the protocol version when implementing __reduce__. This can be done by implementing a method named __reduce_ex__ instead of __reduce__. __reduce_ex__, when it exists, is called in preference over __reduce__ (you may still provide __reduce__ for backwards compatibility). The __reduce_ex__ method will be called with a single integer argument, the protocol version.
The object class implements both __reduce__ and __reduce_ex__; however, if a subclass overrides __reduce__ but not __reduce_ex__, the __reduce_ex__ implementation detects this and calls __reduce__.
An alternative to implementing a __reduce__() method on the object to be pickled, is to register the callable with the copy_reg module. This module provides a way for programs to register ``reduction functions'' and constructors for user-defined types. Reduction functions have the same semantics and interface as the __reduce__() method described above, except that they are called with a single argument, the object to be pickled.
The registered constructor is deemed a ``safe constructor'' for purposes of unpickling as described above.
See About this document... for information on suggesting changes. | <urn:uuid:af4b1170-3966-455c-a127-d7564c2177f7> | 2.765625 | 715 | Documentation | Software Dev. | 36.461491 |
Scientists from The Ecosystems Center have conducted a program of long-term research at several sites in northern Alaska since 1975. The Centerís arctic research program is based at Toolik Lake, in the northern foothills of the Brooks Range (Fig. 1). The three principal components of the program are: (1) lake studies, (2) river studies, and (3) terrestrial landscape studies. In each of these three areas, long-term monitoring of ecosystem processes (such as primary production) is conducted in order to document their natural variability and to detect trends. Research in each area also includes the monitoring of long-term experiments, designed to help elucidate how major ecosystem processes are controlled. The long-term nature of these experiments also allows us to trace the effects of these manipulations as they cascade up or down, through several trophic levels, over many years. Many of the results of these experiments have been discussed in previous Annual Reports, and the work has led to over 70 scientific publications (seeArctic LTER Web Page).
My part in this larger research program is to understand the role of fish in arctic streams. We have conducted river fertilization (added P & N) in two arctic streams for over 15 years. The purpose of these experiments was to understand how a river system might respond to increased nutrient inputs, such as those that might result from global climate change of disturbance in the watershed (Fig. 3). We found that growth of both adult and age 0 grayling (Thymallus arcticus) was enhanced by nutrient addition. Growth variation between years, however, often exceeds differences caused by nutrient enhancement . Fluctuations in river temperature and discharge (m3/s) may play a role in controlling year to year variation in grayling growth by influencing food availability, energetic demands and prey distribution (Deegan et al., in press). We have also shown that arctic grayling may exert top-down controls in stream ecosystems by predation on insects (Deegan et al. 1997; Golden and Deegan 1998). My current work focuses on the effects on lakes of the immigration of grayling to overwinter. | <urn:uuid:25322597-c57a-44f6-b6cc-374c6da8b51a> | 3.03125 | 441 | Academic Writing | Science & Tech. | 42.500389 |
The smallest of the rorqual whales, the minke whale is also the most abundant (2). Two species are now recognised, the northern hemisphere minke whale (the subject of this species page) and the southern hemisphere minke whale (Balaenoptera bonaerensis) (5). Minke whales are slim in shape (2), with a pointed 'dolphin-like' head (2), bearing a double blow-hole (5). The smooth skin is dark grey above, the belly and undersides of the flippers are white, and there is often a white band on the flipper (5) (6). When seen at close quarters, minke whales have variable 'smoky' patterns which have been used to photo-identify individuals (2).
No one has provided updates yet. | <urn:uuid:1d8937d1-595d-4ec0-99e7-5850ceecc7e8> | 3 | 168 | Knowledge Article | Science & Tech. | 48.776316 |
Tuesday, May 3, 2011 - 21:30 in Earth & Climate
Rock climbers are having a negative impact on rare cliff-dwelling plants, ecologists have found. In areas popular with climbers, conservation management plans should be drawn up so that some cliffs are protected from climbers, experts urge.
- Climbers leave rare plants' genetic variation on the rocksTue, 3 May 2011, 22:32:08 EDT
- How high can a climber go?Fri, 8 Jan 2010, 10:58:15 EST
- Mountaineers measure lowest human blood oxygen levels on recordWed, 7 Jan 2009, 17:36:38 EST
- Taking dex can improve high altitude exercise capacity in certain climbersTue, 11 Aug 2009, 12:50:25 EDT
- When cancer cells can't let goMon, 13 Apr 2009, 8:56:50 EDT | <urn:uuid:045bae3d-27b7-4a1b-af99-2918919a83a7> | 3.125 | 179 | Content Listing | Science & Tech. | 52.991344 |
You just pick a laser, press a button, and you may transfer your armchair (or a spaceship such as the International Space Station) from one corner of your room to another. It's handy, especially for armchair physicists.
Well, so far, they can do such things with microorganisms and similar objects, not with spaceships and armchairs.
It's interesting that the beam isn't "pushing" the objects by the photons' momentum: it is "tugging" them which was pointed out to be theoretically possible in some recent papers. The new experimental results were reported in Nature Photonics yesterday:
Experimental demonstration of optical transport, sorting and self-arrangement using a ‘tractor beam’Read the abstract, it's pretty neat.
They claim that the cells may be moved by micrometers and under the beam, they spontaneously sort and rearrange themselves into interesting geometric patterns. Moreover, these patterns move in the opposite direction than the building blocks they are composed of. This behavior of the tractor beam is likely to be used to organize cells in biology very soon because the experimental infrastructure seems to be very cheap and undemanding.
Incidentally, Intel plans to upgrade Stephen Hawking to a state-of-the-art bioelectronic device capable of speaking 10 words a minute and other virtues. | <urn:uuid:763940b7-5ba9-4ba5-a40e-2999e3aa3ca4> | 3.109375 | 274 | Personal Blog | Science & Tech. | 34.688178 |
What is causing the variability in global mean land temperature?
Article first published online: 13 DEC 2008
Copyright 2008 by the American Geophysical Union.
Geophysical Research Letters
Volume 35, Issue 23, December 2008
How to Cite
2008), What is causing the variability in global mean land temperature? Geophys. Res. Lett., 35, L23712, doi:10.1029/2008GL035984., , , and (
- Issue published online: 13 DEC 2008
- Article first published online: 13 DEC 2008
- Manuscript Accepted: 3 NOV 2008
- Manuscript Revised: 30 OCT 2008
- Manuscript Received: 16 SEP 2008
Diagnosis of climate models reveals that most of the observed variability of global mean land temperature during 1880–2007 is caused by variations in global sea surface temperatures (SSTs). Further, most of the variability in global SSTs have themselves resulted from external radiative forcing due to greenhouse gas, aerosol, solar and volcanic variations, especially on multidecadal time scales. Our results indicate that natural variations internal to the Earth's climate system have had a relatively small impact on the low frequency variations in global mean land temperature. It is therefore extremely unlikely that the recent trajectory of terrestrial warming can be overwhelmed (and become colder than normal) as a consequence of natural variability. | <urn:uuid:5c73bccb-3a74-4838-912c-f998d89d4e5c> | 2.953125 | 279 | Academic Writing | Science & Tech. | 34.560227 |
In the context of databasesand object databases, in particularreplication is the ability to duplicate an object from one database (call it "database A") into another ("database B"). The duplication is performed in such a way that both object instances can be modified, so that at a later time the duplicated object can be returned to its original database, and any differences between the object in A and the object in B can be resolved. There are variations on this scenario, but that's the basic idea.
Users of mobile devices might recognize this as "synchronization." In such applications, the desktop system maintains a set of "master" databases that are replicated onto the handheld device. In the process of using the handheld, you add phone numbers, delete appointments, create new shopping lists, and so on. And, at some future point, you synchronize the mobile device with the desktop, which records the changes you've made into the master database. In short, replication allows a subset of information to be drawn from one database into another, and permits the user to operate on that subset in a disconnected fashion.
Suppose you want to implement replication. What capabilities do database systems need to provide for error-freeor, at least, as close to error-free as we could reasonably getreconciliation of data modified by disconnected users? (In this article, I assume that the databases employed are object databases. So, when I speak of "data," I speak of "objects.")
Certainly, the first requirement would be some way of identifying an object, even as clones of it are replicated from one database to another. In other words, when you put an object into the database, a unique identifier must be attached to it. That identifier must be bound to the object in such a way that, when the object is replicated into another database, the identifier goes with it. (You cannot count on the object's data content as the sole means of identifying that object.) Ideally, this identifier will be invisible and unmodifiable, except under unusual circumstances. After all, there is no reason to make its presence and value known to anything other than whatever framework handles the replication process.
Furthermore, this identifier must be universally unique. Suppose you begin with database A. You replicate a subset of its data into database B. Then you replicate data from database A into database C (some of the objects in C may also be objects in B). Suppose further that, after replication, object X was created in B, and object Y was created in C, and that both objects are of the same class. We must be guaranteed that the unique identifiers of X and Y are universally unique, even though databases B and C (and, in fact, A as well) are disconnected. If, by some chance, object X and object Y received the same IDs, then it would be difficult to impossible to reconcile both databases back to the master database.
Accurate reconciliation of disconnected databases also demands that some sort of version information be attached to an object when the object is modified. This is necessary so that synchronization can determine which of two replicated objects is the most up to date. Ideally, this would be information along the lines of "this object was modified at date and time xxx." At the least, we need a modification flag, such as the one on the Palm OS, which indicates when a record has been made dirty. Otherwise, the synchronization application will have to examine each and every object's content (that is, each object passing through synchronization), comparing it with the original (the object in the master database), to see if any changes have occurred, and attempt to deduce from those changes which object is older. Such resolution code would be unacceptably complex and slow.
Finally, a synchronization system must provide a mechanism that allows the developer to manage conflict resolution. Under normal circumstances, when an object that has been replicated is being reconciled back to the main database, the replication framework code can examine version information (assuming it exists) and determine whether the master database object or the replicated object is the most up to date. Conflicts occur when the version information is such that the winner is not apparent. The developer must be able to provide code that can resolve the confusion. | <urn:uuid:8a58d228-c8cd-483e-bfcf-ceb3c874a276> | 2.953125 | 863 | Knowledge Article | Software Dev. | 31.513934 |
FOSSIL BATS OF THE AMERICAS
late Oligocene, Arikareean NALMA
The Brooksville 2 site was discovered in 1994 in a limestone quarry about 10 km northeast of Brooksville in Hernando County in central Florida. The site consists of clays and sands filling karst solution features in marine Oligocene limestone. The fossils were collected mostly by screenwashing because of the abundance of microvertebrates and the rarity of larger mammals (Hayes 2000). The vertebrate fauna includes frogs, lizards, snakes, and a diverse sample of mammals. Hayes (2000) studied the Brooksville marsupials, insectivores, lagomorphs, rodents, and carnivores. Mammals of medium to large body size include: the horse (Miohippus), two small artiodactyls (Nanotragulus and the camelid Nothokemas), and six species of small carnivores (three mustelids and three canids). Besides bats, the small mammal fauna consists of a didelphid marsupial (Herpetotherium), two insectivores (the erinaceid Parvericius and Centetodon), one lagomorph (Megalagus), and at least three rodents (the castorid Agnotocastor, the heteromyid Proheteromys, and an entoptychine).
The maximum age of Brooksville 2 (Hayes 2000) is constrained by the presence of the erinaceid insectivore Parvericius and entoptychine rodents, both of which appear at the beginning of the late early or medial Arikareean (=Ar2; ~28 Ma; Tedford et al. 1996). The minimum age is limited by the occurrence of Centetodon, Megalagus, Agnotocastor, and Miohippus, all of which go extinct at the end of the early Arikareean (~24 Ma). The artiodactyls Nanotragulus loomisi and Nothokemas waldropi are characteristic of early Arikareean faunas. Hayes (2000) placed the Brooksville 2 LF in the late early Arikareean (Ar2, 24-28 Ma). The similarity between the bat faunas from Brooksville 2 and I-75 indicates that these two faunas are not more than several million years apart in age, suggesting placement of Brooksville 2 early in the Ar2 (~26-28 Ma).
The chiropteran sample from Brooksville 2 consists of about 200 fossils representing five species, including mandible and maxilla fragments with teeth, isolated teeth, and limb bones. The Brooksville bat fauna includes a new genus and species of mormoopid, two undescribed species (one large, one small) representing a new genus of emballonurid, a large undescribed genus and species that may be a phyllostomid, and a single tooth of a molossid. Brooksville shares the first four of these species with I-75. The new genus of possible phyllostomid is also represented by a smaller species at I-75. After Thomas Farm, Brooksville 2 has the second largest bat sample from any North American Tertiary site. The Brooksville and I-75 bat faunas are very similar; both are dominated by the same species of mormoopid and large emballonurid. | <urn:uuid:b052993d-f445-41ca-9c2c-29c530a617a3> | 3 | 720 | Knowledge Article | Science & Tech. | 24.936272 |
Urey’s original chart
Urey’s chart as of 2012
Other stable isotopes
Unstable but long lived
Unstable: 5He and 8Be
(Left) Isotope chart constructed by Harold C. Urey in 1931 in his search for deuterium. The labels dem-
onstrate the conception of nuclear structure at the time: The nucleus was thought to consist exclusively
of protons and tightly bound “nuclear-electrons.” The neutron was unknown. Filled circles indicate
known isotopes; open ones show those that might be expected to be found by following an obvious
trend. Urey saw this as a “road map” to the likely existence of deuterium. (Right) Today’s road map,
showing the stable isotopes known to Urey (black); stable isotopes discovered since—just 2H and 3He
(blue); and long-lived but unstable isotopes of practical importance, including the neutron (red). Two
isotopes indicated in the chart on the left have since been found to be unstable, 5He and 8Be.
t dawn on Thanksgiving Day, 1931, no one in the world had an accurate idea of the
nature of the atomic nucleus. That day, at Columbia University in New York, Harold C.
Urey and George M. Murphy measured the optical emission spectra of samples of hydro-
gen gas received by railway express shipment from the National Bureau of Standards in Washington,
D.C., U.S.A. Urey arrived home late for Thanksgiving dinner, but with the news that he had discovered
the mass 2 isotope of hydrogen. He later remarked, “I thought maybe my discovery might have the prac-
tical value of, say, neon in neon signs. My colleagues felt I was exaggerating [its] importance.”
As it happens, the discovery by optical spectroscopy of that isotope, which Urey and his collabora-
tors subsequently named “deuterium,” transformed our understanding of nuclear structure. It made
possible the first thermonuclear explosion 21 years later, and, just this past December, it provided
perhaps the first direct glimpse of primordial gas created in the Big Bang.
A history of isotope chemistry
In the early 20th century, chemists were puzzled by the existence of isotopes: atoms of the same
chemical element with different weights. These atoms had been discovered in 1912 by Soddy in his
study of the decay of uranium to radon. He realized that there could be versions of an element whose
masses were different, even though their chemical properties were the same. He named this concept
an isotope, which is Latin for “same place.” In other words, atoms of differing weight could occupy
the same place in the periodic table as the original element. In 1913, J.J. Thomson succeeded in separating isotopes of neon by passing a beam of neon ions through a magnetic field, which deflects an
ion in proportion to the ratio of its electric charge and mass.
Aston’s construction of a mass spectrograph in 1919 made possible the discovery of many other
isotopes of stable elements. By Thanksgiving Day 1931, almost all of the stable isotopes of light atoms
that are known today had been found, mostly via mass spectroscopy.
Everything changed within a few months. Deuterium, the heavy stable isotope of hydrogen, was
discovered Thanksgiving afternoon in the optical spectrum of the hydrogen atom. The neutron was discovered in February 1932. Shortly thereafter, Werner Heisenberg’s suggestion that neutrons and protons
were alternative quantum states of the same particle deepened physicists’ understanding of the structure
of the nucleus, and the electrolysis of water proved to be an efficient means for producing deuterium.
May 2012 | 37 | <urn:uuid:001b7fee-52bf-4317-be8e-d536421aa19d> | 3.65625 | 821 | Knowledge Article | Science & Tech. | 45.177619 |
Like a mosquito on a summer evening, a bacteriophage is either feasting or in search of its next meal. But a bacteriophage isn’t interested in human blood and isn’t flying around your backyard. Bacteriophages — aka phages — are microscopic killers of bacteria. Wherever you find bacteria, which is nearly everywhere, you’ll find phages.
A phage is a virus, and the feasting begins when, like any virus, it attaches to its bacterium host and injects its DNA. The phage DNA hijacks the bacterium’s machinery and begins to reproduce itself. Soon, the bacterium is teeming with new phages that burst forth from the bacterium, destroying it. The hunt for a new victim begins.
Not long after Canadian scientist Felix d’Herelle gave a name to viruses that infect bacteria in 1917, he recognized their potential to treat disease. Using sewage, he isolated the dysentery phage and put it in solution. After he and other doctors in a Paris hospital drank a few pints to test it, they administered their phage solution to children dying from dysentery, who were cured the next day.
D’Herelle traveled across Europe and the Soviet Union with his microscopic miracles. When Alexander Fleming stumbled on penicillin in 1928, however, the world had a magic bullet for bacterial infections. Except in a few countries, the use of phages declined into oblivion.
Today, with fast evolving, antibiotic-resistant bacteria, phages are back in the spotlight. With bioinformatics tools, researchers are seeking to understand them at the level of DNA and genes.
In recent work with Pittsburgh Supercomputing Center computational resources, and an important boost from PSC training, Aleisha Dobbins of Howard University and coworkers at the Pittsburgh Bacteriophage Institute, reported the complete genome analysis of a well-known but little understood phage. Her results — reported in the Journal of Bacteriology (April 2004) — reveal the entire DNA sequence and identify the genes of the SP6 bacteriophage.
With phages, the sheer numbers are almost scary — 10 million populate a milliliter of seawater, about 50 drops. Phages comprise the majority of organisms on the planet, and through the recycling of carbon in the oceans may be responsible for up to a quarter of the planet’s energy turnover.
Electron micrograph of two SP6 virus particles, the roughly hexagonal-shaped head is about 50 nanometers in diameter.
“When people hear the word ‘virus,’ they think trouble,” says Dobbins. “But phages kill bacteria and have no effect on humans. They can be used in addition to antibiotics. With interest in phage therapy resurfacing, it’s important to do sequence analysis and map the genes of more phages.”
Research in phages is also part of the effort to defeat human viral disease. Having all the genes they need to replicate themselves, phages are similar to viruses that invade humans, but easier to study because their hosts are bacteria, not humans. If researchers learn how phages assemble their protein houses, called capsids, they may develop the means to dismantle them. Without the capsid shells, both phages and human viruses are harmless bits of floating DNA.
The SP6 phage in particular attracts attention because of its host. Phages are picky parasites. Each one invades a particular bacterium, and SP6 goes after Salmonella, the nasty bacteria that dwell in raw meat and cause food poisoning. While SP6 hasn’t been yet been used to treat food poisoning, it is widely used in biotechnology.
SP6’s RNA polymerase, an enzyme that transcribes DNA into RNA, is commonly used in genetic technology to modify and clone the DNA sequences of bacteria. Despite wide use, SP6 had not been sequenced and most of its genes had not been identified before Dobbins’ work.
As a Ph.D student working on her dissertation, Dobbins planned to focus on one of SP6’s genes. Her plans became more ambitious, however, when she went to a PSC bioinformatics workshop, led by PSC scientist and sequence-analysis expert Hugh Nicholas. Through the workshop, Dobbins gained the ability to tackle the much larger project of the entire SP6 genome.
“Through this workshop,” says Dobbins, “I gained knowledge of the bioinformatics tools I needed to sequence the genome. And I learned how to use software to identify the termination sequences.”
From July 12 to 23, 2004, PSC hosted 19 faculty and staff from nine universities for its two-week course, “Developing Bioinformatics Programs.” PSC scientists Hugh Nicholas (1st row center) and David Deerfield (2nd row right) led the course. Five interns from three universities stayed at PSC for five weeks to continue work on their research projects.
The tools of bioinformatics, which marry information science and statistics with the life sciences, allow researchers to understand biological systems like never before. But researchers need to learn about these new and rapidly improving tools. PSC’s “Developing Bioinformatics Programs” course introduces faculty and graduate students from minority-serving institutions to the computational, mathematical, and biological issues of bioinformatics.
“Bioinformatics computer programs in general involve implementing a mathematical model and comparing the model with the data to see how they relate to each other,” says PSC scientist Hugh Nicholas. “The most common bioinformatics task involves taking a sequence from a biologist’s laboratory and comparing it to all sequences in the database looking for relationships according to a mathematical model of sequence evolution.”
The two-week workshop trains faculty who plan to establish an introductory bioinformatics course at their home institution, and graduate students, such as Aleisha Dobbins, to use bioinformatics tools to complete a research project. The course is sponsored by a grant from the Minority Access to Research Careers Branch of the Branch of Division of Minority Opportunity in Research of the National Institute of General Medical Sciences. It grew out of a bioinformatics workshop originally developed through support from NIH’s National Center for Research Resources, which also supports Tourney, PSC’s sequence-analysis computer, used during the workshop and by university students in courses developed through the workshop. Tourney is available to support bioinformatics course work at any U.S. academic institution.
Nicholas introduced Dobbins to researchers at the Pittsburgh Bacteriophage Institute (PBI) at the University of Pittsburgh. Dobbins and PBI co-directors Roger Hendrix and Graham Hatfull decided that rather than examining one gene, it made sense to sequence and examine the entire genome. This would allow them to compare SP6 with other well-known phages and, perhaps, to draw conclusions about SP6’s evolution, information that relates directly to the ability of bacteria to evolve and defeat antibiotics.
Phages and bacteria evolve in conjunction, bound together in the race to outwit one another and survive. Through this evolutionary drama, phages introduce new genes into the bacteria population. “Most human pathogens are as toxic as they are because of genes that were brought in by phages,” says PBI’s Hendrix. “There’s a lot of interest in what this population looks like and by comparing them to each other we can start to see how the population evolved up to where it is now.”
At PBI, Dobbins sequenced SP6’s entire genome of over 40,000 nucleotides, the building blocks of DNA. She also identified some of the genes and their order. With training from Nicholas and the PSC workshop, she used Tourney, PSC’s sequence-analysis computer, to identify the terminator sequences — regions of the genome that signal RNA polymerases to stop transcribing and disconnect. With Tourney, Dobbins also compared SP6 sequences with databases of known phage gene sequences and thereby identified SP6’s genes.
Dobbins identified SP6 as being part of the T7 phage family, which includes T7 and T3, two of the most well researched phages — a family relation that had been suspected, but not verified. Because of their similarities, Dobbins used a template of the T7 RNA polymerase, which is also used in genetic technology, to build a model of SP6’s polymerase, the gene she hoped to examine in her original research plan.
Phages in the T7 family presumably evolved from the same ancestor as SP6, and have many similarities in sequence and gene placement. But there are many family mysteries. Through comparative analysis, Dobbins found that one sequence of genes appearing in the same place in most phages in the T7 family was in a much different place in SP6. The group is not only in a different place, but reversed in order. As with any research, answers spark new questions.
Dobbins’ work will feed an ongoing discussion about phage evolution. While some believe that they evolved from a common ancestor millions of years ago, others argue that similar structures arose independently, or diverged more recently.
“The evolution of bacteriophages has not totally been traced,” said Dobbins. “We don’t know how they have evolved or what kind of effect this phage had on bacteria. We completed the sequence and found that it had 52 genes, and of the 52, 64 percent are unique to SP6. A lot of additional work needs to be done to identify the function of those genes.” | <urn:uuid:e6624d0d-2bad-459f-85aa-2d0f2fca06d0> | 3.859375 | 2,068 | Knowledge Article | Science & Tech. | 37.354687 |
[Tutorial] Basic tutorial about class basics
View Single Post
07-23-2008, 09:57 PM
Join Date: Oct 2007
Location: Manchester, UK
Nice work m8.
These variables are called class member variables and you may set a value to them when defining them.
While this may be partly true, they are not bound to a specific name, many people call them object properties or just variables or even attributes, all are valid in my mind. There is nothing mystical about it, they are variables and related functions encapsulated within the instantiated object's (or class's if they are bound to the class like a static) namespace.
Classes are objects which contain methods, member variables, are able to be inherited and much more, objects make our life as developers easier by cutting down on repetitive code.
Not to flog an already dying horse here but I think this point needs stressing, classes are anything but objects they are merely blueprints to an object, much like an architects plans to your house, hence why the 'new' keyword followed by the class name is used to create objects of that class type and the scope resolution operator '::' for static or function/variables bound to the class (i.e. not an object).
Whilst it may be good to be 'technical' and 'picky' about names of class/object functions/variables for a basic tutorial I hardly think its necessary and can be confusing for a new OOP'er to start abstracting names for functions (remember they are still declared using 'function') for a tutorial of this scope its unhelpful.
Aslong as the basic facts are underlined and understood by the reader its done its job.
mysql> SELECT * FROM `users` WHERE `users`.`clue` > 0;
Empty set (0.00 sec)
View Public Profile
Send a private message to sketchMedia
Visit sketchMedia's homepage!
Find More Posts by sketchMedia | <urn:uuid:b0b33b76-a4d9-4739-99ae-19ee3808ecb7> | 3.390625 | 413 | Comment Section | Software Dev. | 46.355769 |
This example is part of the article Add Multimedia to your Web documents, part 2.
We can decide to send a raw text file, with
text/plain, to avoid to have to cut and paste the source code of your program or html web page in a
pre and write things like that which are difficult to update:
<h4>An html head example</h4> <pre> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <meta http-equiv="content-type" content="text/html; charset=utf-8" /> <title>Your Title</title> </head> </pre>
Example: inserting some python code
<object data="http://example.org/source/wiki2xhtml.py" type="text/plain"> <p>Source code of <a href="/source/wiki2xhtml.py">wiki2xhtml</a></p> </object>
Another example: presenting HTML source code
<object data="http://example.org/source/example.html" type="text/plain"> <p>Source code of the <a href="/source/example.html">HTML file</a></p> </object>
Something to be careful of: the MIME-type sent by the server has precedence over the MIME-type given in the object element. If you don't have access to the configuration of your Web server, you could use a copy of your HTML file with a .txt extension, such as example.html.txt. Usually webservers are configured to send .txt files as text/plain; otherwise you can locally configure it to be sent as text/plain. For example, you can achieve this with a .htaccess file in Apache. | <urn:uuid:53e0d652-0fbe-48ef-bcba-ae055b5652c6> | 3 | 445 | Documentation | Software Dev. | 80.918109 |
While the USGS frantically works to keep the public from being aware of the increase in earthquakes, there
are limits to how far the cover-up can go.
Question: As of late, I have observed that USGS and EMSC are a bit lax on their
reports of earthquakes. Sometimes not seeing anything reported for hours from the last
update, or seeing periods of 90-200 minutes with no earthquakes in between. My
question is, are they leaving swarms of earthquakes out of their reports, or there are
actually periods of no earthquakes?
ZetaTalk Answer 8/7/2010: Greater than 90% of the earthquake activity is being altered by the USGS
at present, which is under orders to prevent any clue being given to the public about the Earth changes
caused by the presence of Planet X. Over a decade ago, the approach was to de-sensitize the live
seismographs periodically, so the displays do not turn black worldwide, and to ignore the twice-a-day
patterns showing up on these seismographs. Then any quakes that could be dropped were dropped.
This was obvious to some who were watching the database manipulation. Quakes in the list would
suddenly disappear. This was particularly the case where a quake happened in a remote location, or
out in the ocean. Dumbing down the magnitude quickly followed, but in order to ensure the public did
not notice, the USGS took control of all websites reporting quake statistics. At times, this control
breaks, and discrepancies are reported to the public. Some countries rebel. Quake swarms are another
source of control, as they pepper the databases with many quakes and skew the statistics, and thus are
pulled from the database. Else the question is raised, why so many?
Despite the coverup, the rise in earthquakes is apparent. These charts, secured from the Lindquist Research
site show the rise in quakes 6+, 7+, and 8+ during 1973 to the present year of 2010.
Certainly, quake since 1973 graphed geographically show the fault lines, and where the increase in quakes can
be expected to hit.
Blooming is an astrophysics term describing the effect when light is attracted to a planetary body or star as it
passes, so that it bends toward that planetary body or star and thus, in the view from Earth, appears to be
brighter than would otherwise be the case. Lately, due to the presence of Planet X, both Venus and Jupiter
have appeared to bloom in SOHO images or photos. Venus developed a bubble that pulled toward the Sun
and then popped, on the Stereo Behind images. Per the Zetas, this was a result of light blooming, as Venus
stood between Planet X and the Stereo Behind satellite.
VIDEO: Bubble Popping
ZetaTalk Explanation 8/7/2010: Venus has been pushed back and forth in front of the Earth to simulate
its orbit for earthlings viewing Venus, for some years. Ever since Planet X arrived in the inner solar
system in 2003, and Venus, as the Earth and her Dark Twin, came round in their orbits to encounter
Planet X standing before them. We have stated that maintaining the Element of Doubt is done so that
the establishment on Earth does not panic and start mowing the common man down in the streets. As
they have begun retreating to their bunkers, leaving their political and corporate positions, the truth
can gradually be revealed. Thus, we have hinted of late that Venus may not be where expected.
Certainly as Earth's sister planets re increasingly squeezed in the cup with the Earth, these planets can
be expected to loom large and then eventually escape the cup altogether. This time has not yet arrived.
Is the odd flare from Venus on the Stereo Behind images related to these relaxed rules? These images
from Stereo Behind are taken from the stereo satellite that rides behind the Earth in her orbit, and thus
is pointed in the direction of Planet X which is coming at the Earth in a retrograde manner, from the
right. Thus, these flares are nothing more than the blooming effect which is sometimes seen in
astronomy, where light bends toward a gravity sink, and adds to the appearance of light coming from
that gravity sink.
Jupiter also had a blooming effect in a photo from Sevastopol in the Ukraine. Coincidentally, the series of
photos taken also showed Jupiter making a huge leap to the right within a 20 minute period - a capture of the
Earth wobble on film! At this time of night, Jupiter would be seen in the SE, so the camera was facing in the
direction where Planet X would be, just over the eastern horizon.
My friend has photographed interesting object in the sky. It was in Sevastopol, Ukraine.
The photo is made on the good camera with 30X zoom. According to program
Stellarium, there there could be only Jupiter. Venus was not visible. This too bright and
too big to be Jupiter. For 20 minutes the object has strongly moved. What is it? The
camera has been directed on the East. Time approximately midnight 0:00 (Sevastopol) -
10:00 PM (Greenwich). In this place there could be only Jupiter. But it could not be
such bright and big. 30X it is not enough approach that Jupiter was such size!
ZetaTalk Explanation 8/7/2010: This is Jupiter, with the blooming effect noted also for Venus on the
Stereo Behind images recently. The photographer was looking to the SE, as dawn approached, thus
looking toward Planet X. Blooming is where light from a distant object (Jupiter) is bent toward a
gravity source (Planet X). This intensifies light from the distant object as the light rays are bent and
bunched as they pass that gravity source. But the real story is not the blooming, but the sudden
movement of Jupiter across the night sky! Caught on film, the Earth wobble!
Crop Circle Comparison
Crop circle progression, where a new crop circle is similar to one laid years ago or days earlier, have been
showing up increasingly. We noted that in Issue 194 when the Serpentine Dance that had been in place since
early 2004 was to end in 2010. Here a crop circle from 2003 was compared to a recent 2010 circle
Now another comparison to a crop circle laid in Saskatchewan on Aug 8, 2003 to one laid in Winters in the
UK on August 2, 2010. Per the Zetas, this is an overview, as 2003 was the year Planet X arrived in our inner
solar system, and now in 2010 it is outbound.
ZetaTalk Analysis 8/7/2010: Comparing these two very similar crop circles, one can see the
progression. In Saskatchewan on August 8, 2003, Planet X had arrived at the Sun, but was at the
opposite side from the Earth, whereas in Winters on August 2, 2010 it is outbound, moving away from
Sun. Which pole Planet X is pointing at the Sun can be seen in the key insignia. The N Pole is pointing
AT the Sun in 2003 but away from it in 2010. The Sun's ability to buffer the Earth from the magnetic
effects of Planet X can also be seen, in that in 2003 there are 3 rings around the Sun insignia, shown
as leaning slightly towards Planet X, but in 2010 there are only two rings. The portion of Planet X's
magnetic influence that the Sun cannot buffer is seen as broken out, in front of Planet X as a single
ring. This is what is bombarding Earth at present.
Another recent crop circle in Italy, laid on June 29, 2010, shows this inbound/outbound path clearly, in a
single crop circle.
ZetaTalk Analysis 8/7/2010: For Speyer in Italy, this is a depiction of the passage, in short form. The
slinging orb on either side is Planet X, first as it entered the solar system in 2003 with its retrograde
orbit, and on the left as it leaves. The Earth found herself on the opposite side of Planet X in the spring
of 2003, but for the passage the hapless Earth will be on the same side. Technically speaking, Planet X
is closer to the Sun while it creeps past on its way outbound, but for this crop circle, which is a short
form, the depiction is inbound/outbound end of story.
We noted that in Issue 196 where the new wobble pattern that developed during July showed a progression
over the month.
A single crop circle laid in Eartfield shows progression within days, and is still building! Per the Zetas, this is
showing the effect of the changed wobble pattern that occurred in July, 2010. It is a daily increase in the
rattling effect, and is building!
ZetaTalk Analysis 7/31/2010: Despite claims that this crop circle is fake, it is genuine. Thefts from cars
and false donation boxes have inspired the cries of a fraud, in an attempt to keep people from a field
not closely watched and clearly a trap for the unwary. What does the progression of overlapping orbs
represent? It is a growth, in a day, not only of the encircling reach of the top orbs, but of the size of the
orbs themselves. This pictorially represents what we have been trying to relay regarding the new Earth
wobble, which we will now start called the Earth rattle. The Earth moves in several directions at once,
or in quick succession, jerking back and forth to meet often conflicting directives from the frantic
magnetic field of Earth which is likewise trying to meet quickly changing directives from the dominant
magnetic field of Planet X. The encircling reach represents a change from where the Earth had been
only moments before, so that the past affects the future. The Earth may be in the middle of her usual
Figure 8 wobble pattern and suddenly get pushed into opposition, for instance. The force and
frequency of this type of rattling of the Earth will continue to increase.
We noted the heatwaves assaulting the northern hemisphere in Issue 194. Now a crop circle duet that seems
to be referencing these heatwaves has appeared in Wickham Green on July 30, 2010.
Someone on the Pole Shift ning noted a heatwave correlation or at least a similar appearance.
Could this be referring to the current heatwaves, at least in part, and the lines referencing latitude lines? Per the
Zetas, this, and more is inferred. The angle and tilt of the duet vs a vs each other also has significance. Our
weather is going to get very lively!
ZetaTalk Analysis 8/7/2010: As tempting as it may be to align these two crop circles with the current
heat wave pattern, this is not the sole message. Note that one of the circles is laid across the tram
lines, but the other is laid at an angle across the tran lines, nor are either of these fields arranged so
that their tram lines are parallel to each other. All this is not by accident. We have spoken of the daily
wobble being affected by temporary leans to the left and leans into opposition, all amidst the daily
wobble and quickly switching about. This will of course affect the heatwaves that have beset the
northern hemisphere this past summer. Are the lines on the circles equivalent to attitude lines? They
count 16 lines in both cases, in both circles. If the center one is the Equator, and the large circles to be
interpreted as heatwave centers, then this would place the heatwaves at latitude 70! At present, this is
not the case. Could matters go to this extreme, at least for hours of daylight during a tense lean into
opposition where these upper latitudes were baked in direct exposure to the Sun for long hours? This is
what is being implied here. Why are the circles not parallel to each other, nor even evenly crossing the
tram lines? If you look at both together, as they are laid, you can see the degree a wobble might take
during such an extreme mixed back where leans to the left and into opposition are combined with the
current Figure 8 of a daily wobble. This is a 45° variance! And all this is not the anticipated severe
wobble which will lead into the last weeks. This is just the Earth changes, escalating!
You received this Newsletter because you Subscribed to the ZetaTalk Newsletter service. If undesired, you can quickly
Unsubscribe. You can always access prior Newsletters from the Archives. | <urn:uuid:91e0c386-5eb1-4264-a8d7-da820fb25965> | 2.84375 | 2,684 | Comment Section | Science & Tech. | 57.757297 |
Today a new generation of scientists is discovering just how powerful the McGurk Effect really is. University of California, Los Angeles, psychologist Ladan Shams has been able to create a McGurk-like illusion by displaying a flash paired with varying numbers of corresponding beeps. If Shams delivered a flash with a pair of beeps, participants were more likely to say they also saw two flashes.
These illusions provide evidence of a powerful strategy our brains
use to cope with the uncertain signals we get from our senses.
Certain regions of the brain take input from two or more senses and then combine them in a single sensory channel to sharpen the information overall.
Michael Beauchamp and Audrey Nath, neuroscientists at the University of Texas Health Science Center at Houston, have pinpointed one of the crucial nodes where these streams of information meet. They delivered short magnetic pulses that briefly shut down different parts of the brain. When they applied the pulses to a strip of the brain near the ear, a region called the superior temporal sulcus, the McGurk Effect was diminished.
Beauchamp and Nath followed up on that study with a new one in which they scanned people’s brains with functional magnetic resonance imaging (fMRI) as they played McGurk videos of mismatched sounds and lip movements. They found that the left superior temporal sulcus became more active in people who experienced the illusion and remained less active in those who didn’t. Beauchamp and Nath’s work suggests that when the illusion occurs, it is because the superior temporal sulcus discounts some of the signals coming from one sensory region of the brain in favor of others.
We don’t mix up our senses willy-nilly, however. There is a window of less than a tenth of a second in which a stimulus from one sense can affect the others. As my misadventure with Netflix showed, my brain was accustomed to balancing sight and sound to make sense of what people were saying without my even noticing, but the sound lag during that episode of Law and Order was so long that the two sensory streams created confusion instead.
Sight and sound are not the only senses we mingle in our brains. What we touch can affect what we see or hear. Our very understanding of the shape of our own body can be informed not just directly, through our eyes, but also by the pressure of our feet on the ground, the stretch of ligaments in our shoulders, and the wiggle of balance-sensing nerve hairs in our inner ears. Together, our senses let us control our bodies, keeping us from falling over every time we stand up.
But this much integration comes with an astonishing ability to be duped. In 1998 Matthew Botvinick and Jonathan Cohen, two psychologists then at the University of Pittsburgh, found they could make people feel as if a rubber hand were really their own. All they had to do was put a rubber hand in front of their subjects and have them put their real hand behind a screen. The scientists simultaneously began to stroke the real hand and the fake one with paintbrushes. In a matter of seconds, people reported that the rubber hand felt as if it were part of their own body and that they even felt it being stroked.
Neuroscientist Valeria Petkova of the Karolinska Institute in Sweden expanded the rubber hand illusion in 2008. Instead of making people feel a rubber hand was part of their body, she wanted to swap entire bodies. First she had volunteers put on goggles fitted with video screens for lenses. The lenses, in turn, transmitted video from cameras positioned to correspond to a mannequin’s eyes. The cameras were pointed down at the mannequin’s body, so that the mannequin seemed to occupy the precise position of the volunteer’s own body. One scientist stroked the abdomen of the mannequin, while another stroked the abdomens of the volunteers. The effect has been described as a total body swap—the synchronized acts of stroking made participants feel as if the mannequin’s body were their own.
These results were intriguing but based largely on what the volunteers told Petkova about how they felt. That left the researchers wondering what was actually going on in their brains. To find out, they redesigned the experiment so that the volunteers lay in an fMRI scanner that recorded their brain activity. While the scan was in progress, participants looked down at their bodies and a scientist stroked their hand or abdomen. Then the scientists again took participants through the whole-body swap. Once the volunteers felt that the mannequin’s body was their own, the same brain regions became active as when they actually looked down at their own bodies.
The tricks we use to integrate our senses take time to develop. As children grow up, they get better and better at combining sights and sounds. When scientists compare children of the same age, they discover a fascinating pattern: The ones who are better at combining sights and sounds tend to score higher on intelligence tests. It’s possible, some scientists suggest, that helping children combine their senses through training exercises will enable them to do better in school. Manipulating our senses could also help people who have lost a limb. Researchers at the Rehabilitation Institute of Chicago are using the rubber hand illusion to teach amputees to feel artificial limbs as their own.
Learning how the brain mingles its senses can do more than shed light on the latest glitch in streaming video. It may also be able to help some people rewire their sense of reality for their own good.
Carl Zimmer is an award-winning biology writer and author of The Tangled Bank: An
Introduction to Evolution. He also has a blog called The Loom on this site. | <urn:uuid:981b1cb0-d3da-467d-8425-af366e1704a4> | 3.28125 | 1,189 | Nonfiction Writing | Science & Tech. | 48.654761 |
Occurs in coral-rich areas of lagoon and seaward reefs. Graze on algae, usually in groups of 20 individuals (Ref. 5503, 48637). Adults usually in small groups and sometimes schooling. Juveniles solitary and usually among corals (Ref. 48637). Its numerous, small pharyngeal teeth may have evolved in response to a shift in diet from macroalgae to filamentous algae (Ref. 33204). Form resident spawning aggregations (Ref. 27825). Monogamous (Ref. 52884). Group and pair spawning have been observed. The flesh is never poisonous (Ref. 4795).
- Myers, R.F. 1991 Micronesian reef fishes. Second Ed. Coral Graphics, Barrigada, Guam. 298 p. (Ref. 1602) http://www.fishbase.org/references/FBRefSummary.php?id=1602&speccode=4306
| <urn:uuid:45d153c4-7d17-42d9-8a6b-70ed0c65f688> | 2.765625 | 206 | Knowledge Article | Science & Tech. | 70.11359 |
Ice caps not likely to face rapid, irreversible melting as previously thought, researcher claims – meaning polar bears could survive.
The polar bear can be saved from extinction – but only if action is taken quickly to make deep cuts to greenhouse gas emissions, a new study shows.
The study, published today in the journal Nature, conflicts with previous research, which suggested that Arctic temperatures are already on track to exceed the threshold required to trigger rapid, irreversible ice loss.
Researchers from Polar Bears International said sea ice in the Arctic, which polar bears use as a platform on which to hunt seals and breed, is unlikely to undergo a rapid and irreversible decline when temperatures rise beyond a certain threshold.
"It's widely believed that nothing can be done to save the polar bear," said author Steven Amstrup of Polar Bears International in Winnipeg, Canada. "But that's not true."
According to Andrew Derocher, a polar bear...
- With climate changes, polar bear and brown bear lineages intertwine (Thu, 7 Jul 2011, 12:37:49 EDT)
- Polar bears: On thin ice? Extinction can be averted, scientists say (Wed, 15 Dec 2010, 15:32:44 EST)
- Ancestry of polar bears traced to Ireland (Thu, 7 Jul 2011, 12:37:59 EDT)
- Polar bear births could plummet with climate change (Tue, 8 Feb 2011, 12:05:22 EST)
- Polar bears no longer on 'thin ice': researchers say polar bears could face brighter future (Tue, 21 Dec 2010, 15:02:32 EST) | <urn:uuid:cf70007d-41fa-45ed-bbcb-e67523aba99b> | 3.8125 | 312 | Content Listing | Science & Tech. | 47.228333 |
New Impact on Jupiter
July 21, 2009. Posted by jcconwell in Asteroid, Astronomy, planets.
Tags: Asteroid, Jupiter, Solar System
Taken from the article by Nancy Atkinson at Universe Today
Amateur astronomer Anthony Wesley from Canberra, Australia, captured an image of Jupiter on July 19 showing a possible new impact site. Anthony’s image shows a new dark spot in the South Polar Region of Jupiter, at approximately 216° longitude in System 2. It looks very similar to the impact marks made on Jupiter when comet Shoemaker-Levy 9 crashed into the gas giant in 1994. (But read the Bad Astronomer’s post, which suggests the black spot could also be weather.)
UPDATE (7/20): It has been confirmed that this is an impact on Jupiter. Mike Salway shared the news that Glenn Orton from JPL has imaged the Jupiter black spot with the NASA Infrared Telescope and confirmed it’s an impact. | <urn:uuid:c1afebd0-3579-4b24-b10b-b90483bdf443> | 2.96875 | 199 | Personal Blog | Science & Tech. | 44.649 |
On the face of it, an artist and a theoretical physicist might seem an unlikely pairing. But Turner Prize-winning sculptor Grenville Davey and string theorist David Berman's collaboration is producing beautiful, thought-provoking work inspired by the fundamental structure of the Universe. Julia Hawkins interviewed them to find out more about how the Higgs boson and T-duality are giving rise to art.
The Strong Fields, Integrability and Strings
programme, which took place at the Isaac
Newton Institute in 2007, explored an area that
would have been close to Isaac Newton's heart:
how to unify Einstein's theory of gravity, a
continuation of Newton's own work on
gravitation, with quantum field theory, which
describes the atomic and sub-atomic world, but
cannot account for the force of gravity.
Progress in pure mathematics has its own tempo. Major questions may remain open for decades, even centuries, and once an answer has been found, it can take a collaborative effort of many mathematicians in the field to check
that it is correct. The New Contexts for Stable Homotopy Theory programme, held at the Institute in 2002, is a prime example of how its research programmes can benefit researchers and lead to landmark results.
Few things in nature are as dramatic, and potentially dangerous, as ocean waves. The impact they have on our daily lives extends from shipping to the role they play in driving the global climate. From a theoretical viewpoint water waves pose rich challenges: solutions to the equations that describe fluid motion are elusive, and whether they even exist in the most general case is one of the hardest unanswered questions in mathematics.
Many people's impression of mathematics is that it is an ancient edifice built on centuries of research. However, modern quantitative finance, an area of mathematics with such a great impact
on all our lives, is just a few decades old. The Isaac Newton Institute quickly recognised its
importance and has already run two seminal
programmes, in 1995 and 2005, supporting
research in the field of mathematical finance.
When the mathematician AK Erlang first used probability theory to model telephone networks in the early twentieth century, he could hardly have imagined that the science he founded would one day help solve a most pressing global
problem: how to wean ourselves off fossil fuels and switch to renewable energy sources. | <urn:uuid:238b970c-6583-48a9-93ae-a1d82ec1f553> | 2.828125 | 487 | Content Listing | Science & Tech. | 29.523029 |
This observation shows part of the floor of a large impact crater in Arabia Terra. This crater formed in the distant past when a large asteroid or comet struck Mars, and has been heavily modified since formation. The crater was partially filled by sediments, forming the rock outcrops and layers visible in this image.
After this material was laid down, part of the deposits was eroded away. The central part of the image has been carved especially deeply, forming a distinct depression.
This depression has been a site of aeolian (wind) transport of sand in more recent times. A particularly interesting aspect of this site is that there appears to have been multiple styles of aeolian activity. Both large sand dunes (the dark hills, deep blue in the color image) and smaller ripples (sharp, light-toned narrow ridges) are visible. While ripples are often found in association with dunes, the different colors suggest that the material is not the same. (At full resolution, the surfaces of both the dunes and the large ripples are covered with much smaller ripples.)
Even where the ripples and dunes are in contact, there is a distinct contrast between the materials. In the subimage, dark sand appears to fill a trough between two large light ripples, suggesting that the dark sand has moved more recently. This could be due to different grain sizes, since certain sizes are most easily lifted by the wind | <urn:uuid:fdacf472-3579-4b24-b10b-b90483bdf19c> | 3.75 | 295 | Knowledge Article | Science & Tech. | 45.585679 |
Sea turtles have long fascinated both biologists and conservationists. All of the seven species found in the world’s oceans are listed as either endangered or threatened. Of these, five species are found in waters of the Indian subcontinent. On this part of the site you can learn about the species, distribution, biology, life history and the identification of sea turtles.
Over millions of years of their existence, sea turtles have evolved a variety of remarkable strategies for survival. They use a wide range of habitats (sandy beaches, coral reefs, sea grass beds, etc.), thus playing a critical role as flagship species for the conservation of the oceans’ ecosystems and diversity. Many of these habitats face mounting threats today around the world. Sea turtles are also an important part of the traditional culture of many coastal indigenous peoples all round the world.
Sea turtles migrate long distances between their feeding grounds and nesting sites. After they hatch and return to the sea, only the females return as adults to nest; males may never come back to land at all. Consequently, knowledge of their biology has been confined to the small time interval when they come on to land to nest. Thus there are many questions that scientists are only just beginning to understand: Where do the hatchlings go after they leave the nesting beach? Does the turtle come back to nest on the same beach where it hatched? How do females navigate to the same nesting beaches again and again, covering several thousand kilometres? | <urn:uuid:c8d75eff-217a-417c-8e7e-b6cbbd3e0224> | 3.8125 | 294 | Knowledge Article | Science & Tech. | 43.922448 |
Galileo plunged into Jupiter's crushing atmosphere on Sept. 21, 2003. The spacecraft
was deliberately destroyed to protect one of its own discoveries - a possible ocean beneath the
icy crust of the moon Europa.
Galileo changed the way we look at our solar system. The spacecraft was the first to fly past
an asteroid and the first to discover a moon of an asteroid. It provided the only direct
observations of a comet colliding with a planet.
Galileo was the first to measure Jupiter's atmosphere with a descent probe and the first to
conduct long-term observations of the Jovian system from orbit. It found evidence of subsurface
saltwater on Europa, Ganymede and Callisto and revealed the intensity of volcanic activity on Io.
Read on to learn more about the historic legacy of the Galileo mission. | <urn:uuid:134b7519-1b09-4572-9873-c1b0ccd97b71> | 4 | 177 | Knowledge Article | Science & Tech. | 41.53 |
NASA image created by Jesse Allen, Earth Observatory, using expedited ASTER data provided by NASA/GSFC/MITI/ERSDAC/JAROS and the U.S./Japan ASTER Science Team.
The Karymsky Volcano in far northeastern Russia had been erupting several times a day for about a week prior to emitting this ash plume on June 19, 2006. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA’s Terra satellite captured this false-color image. In this picture, red indicates vegetation, which is lush around the volcano but very sparse on its slopes. The water of Karymskoye Lake appears in blue. The volcano’s barren sides are dark gray, and the volcanic plume and nearby haze appear in white or gray.
Karymsky Volcano is the most active volcano in the eastern volcanic zone of the Kamchatka Peninsula. The volcano is composed of alternating layers of hardened lava, ash, and rocks. Historical eruptions have involved explosive eruptions of lava fragments and the release of volcanic gases. At the time of the June 19 eruption, Karymsky had an alert status of orange, indicating that a small ash eruption was expected or confirmed, but not likely to exceed an altitude greater than 7,620 meters (25,000 feet) above sea level.
This image originally appeared on the Earth Observatory. | <urn:uuid:2db74493-3b24-4686-af3a-3152d48361ee> | 3.875 | 302 | Knowledge Article | Science & Tech. | 37.164324 |
Taxonomic name: Mnemiopsis leidyi (Agassiz 1865)
Synonyms: Mnemiopsis gardeni L. Agassiz 1860, Mnemiopsis mccradyi Mayer, 1900
Common names: American comb jelly, comb jelly, comb jellyfish (English), Rippenqualle (German), sea gooseberry, sea walnut (English), Venus' girdle, warty comb jelly
Organism type: comb jelly
The ctenophore, Mnemiopsis leidyi, is a major carnivorous predator of edible zooplankton (including meroplankton), pelagic fish eggs and larvae, and is associated with fishery crashes. Commonly called the comb jelly or sea walnut, it is indigenous to temperate, subtropical estuaries along the Atlantic coast of North and South America. In the early 1980s, it was accidentally introduced via the ballast water of ships to the Black Sea, where it had a catastrophic effect on the entire ecosystem. In the last two decades of the twentieth century, it invaded the Azov, Marmara and Aegean Seas, and recently it was introduced into the Caspian Sea via the ballast water of oil tankers.
Mnemiopsis leidyi is a comb jelly with a length of up to 100 mm. The body is laterally compressed, with large lobes arising near the stomodeum, generating 4 deep, noticeable furrows that characterize the genus. It has four rows of small, but numerous, ciliated combs which are iridescent by day and may glow green by night (NIMPIS, 2002). The colour is usually transparent or slightly milky and translucent (Shiganova 2003).
estuarine habitats, marine habitats
The native habitat of the ctenophore, Mnemiopsis, is in temperate to subtropical estuaries along the Atlantic coast of North and South America (Mayer, 1912). M. leidyi is tolerant of a wide range of salinity, temperature and water quality conditions over a broad range of inshore habitats. Since its unintentional introduction to the Black Sea, Mnemiopsis has spread to adjacent bodies of water, inhabiting waters of salinities ranging from 3‰ in the Sea of Azov to 39‰ in the eastern Mediterranean, and temperatures ranging from 4°C in winter to 31°C in summer (Dumont and Shiganova).
Mnemiopsis leidyi is a major zooplankton predator and is associated with fishery crashes (Costello, 2001). A carnivorous predator on edible zooplankton (including meroplankton), pelagic fish eggs and larvae, M. leidyi causes negative impacts right through the food chain of the areas it has invaded. In the Black Sea and the Sea of Azov, the zooplankton, ichthyoplankton and zooplanktivorous fish stocks all underwent profound changes.
The pelagic ecosystem of the Black Sea was degraded, manifesting as sharply decreased biodiversity, abundance, and biomass of the main components of the pelagic ecosystem: zooplankton (Dumont and Shiganova). Fish stocks in the Black Sea and Sea of Azov have suffered due to predation on their eggs and larval stages and on their food supplies (Shiganova 2003).
A cascading effect occurred at the higher trophic levels, from a decrease in zooplankton stock and collapsing planktivorous fish, to vanishing predatory fish and dolphins. Similar effects occured at lower trophic levels: from a decrease in zooplankton stock to an increase in phytoplankton, which was released from zooplankton grazing pressure. The majority of these effects were top-down, but a few were also bottom-up. Similar effects, but less pronounced, were recorded in the Sea of Marmara. Effects on Mediterranean food webs have, so far, remained insignificant. Salinity is probably supraoptimal there, and several predators prevent M.leidyi from reaching outbreak levels.
Mnemiopsis is probably the most-studied ctenophore genus in the world because of its great abundance in estuaries in heavily populated areas of the United States, and because of its explosive population growth after accidental introduction into the Black Sea in the early 1980s. But after the invasion of a new ctenophore of the genus Mnemiopsis into the Black Sea, a question arose regarding which species was invasive. L.N. Seravin (1994) made a revision of the genus Mnemiopsis with the conclusion that it includes only one polymorphic species of lobate ctenophore, Mnemiopsis leidyi, to which this new ctenophore belongs. Richard Harbison also supports this point of view (personal communication in Dumont and Shiganova).
Native range: The native habitat of the ctenophore, Mnemiopsis, is in temperate to subtropical estuaries along the Atlantic coast of North and South America between 40 degrees north and 46 degrees south (Mayer, 1912; Costello, 2001).
Known introduced range: The unintentional introduction of M. leidyi to the Black Sea in the early 1980s allowed it to secondarily expand its range to the adjacent seas of Azov, Marmara, the Aegean and perhaps the eastern Mediterranean (Studenikina et al, 1991, Shiganova et al, 2001). However, nowhere were conditions as optimal and perennial as in the Black Sea and the surface waters of the Sea of Marmara. It has to re-invade the Sea of Azov each year. Low numbers take advantage of the Black Sea current to reach the northern Aegean Sea where they disperse, according to the dominant circulation patterns. However, its presence in Saronikos Gulf and Elefsis Bay could be also due to ballast water release as elsewhere in the eastern Mediterranean Sea (Shiganova et al., 2001).
Introduction pathways to new locations
Ship ballast water: In the early 1980s, Mnemiopsis leidyi was accidentally introduced via the ballast water of ships to the Black Sea where it had a catastrophic effect on the entire ecosystem. It was also introduced into the Caspian Sea via the ballast water of oil tankers.
Biological: Eradication may be impossible in practice. A variety of predators (including medusae and fish) consume M. leidyi in its native regions. Reduction of M. leidyi populations in the Black Sea occurred after one of its predators, the ctenophore Beroe ovata, was introduced to the region (Costello, 2001).
One of the factors that provoked the high level of population development of M. leidyi in the Black Sea, but was not observed within its natural range (the estuarine waters of North America), was the absence of a predator feeding on M. leidyi and controlling its population size (Purcell et al., 2001). In 1997, another invader, the ctenophore Beroe ovata Mayer 1912, was found in the northeastern Black Sea. It is a predator feeding on planktivorous comb jellies - especially M. leidyi (Konsulov and Kamburskaya, 1998). As with its predecessor, B. ovata arrived with ballast waters from the same coastal waters of North America (Seravin et al., 2002). Development of B. ovata considerably decreased the population of M. leidyi that had deformed the Black Sea ecosystem for over a decade. The reduction of the M. leidyi population limited its influence on the ecosystem and consequently we observed a recovery of the main components of the Black Sea pelagic ecosystem – zooplankton (including meroplankton), phytoplankton, dolphins and fish as well as their eggs and larvae (Shiganova et al., 2000a,b; 2001c).
Conscious of this, and bearing in mind the devastating impact of M. leidyi on the fisheries in the Black and Azov Seas in the 1990s, we began a number of initiatives in 2001 with a view to taking stock of the situation, reviewing and assessing remedial measures and taking concrete actions. After deliberation, we proposed the introduction of a potential predator of M. leidyi as the only truly viable option. As shown by the example of the Black Sea, the best – and so far only – candidate for this is another ctenophore species, Beroe ovata. After the accidental introduction of Beroe ovata to the Black Sea, the abundance of M. leidyi here immediately dropped to levels so low that no further damage was inflicted. In fact, the ecosystem almost immediately began to recover. It is anticipated that the results of a Beroe ovata introduction in the Caspian will be similar. Summer 2003 is now the target date for the implementation of this plan (Dumont and Shiganova, unpublished).
A wide range of zooplanktonic prey; varies with ctenophore development. Early cydippid stages utilize protozoa and microzooplankton; lobate forms feed primarily on crustaceans (often copepods and cladocerans), mollusc larvae, eggs, and young fish larvae (Costello, 2001).
Mnemiopsis leidyi is a free-spawning, simultaneous hermaphrodite capable of self-fertilization (Costello, 2001). It possesses gonads containing both the ovary and the spermatophore bunches in its gastrodermis. The total number of simultaneously forming eggs depends on food availability and on temperature - production of 2,000-3,000 eggs per day by adults at high food concentrations is common. The embryo is formed completely within the original egg cover. It has a size of about 0.12-0.14 mm and acquires its specific form and tentacular structures. When the larva attains mobility, the egg cover softens and becomes flexible. The life span of egg-producing individuals may be many months (Costello, 2001).
Totally planktonic life history; early tentaculate larvae resemble Cydippida ctenophores but metamorphose into the mature lobate form. No current evidence of resting stages (Costello, 2001).
The embryo acquires double rows of cilia, a well-developed pair of lateral tentacles, and a large, apical sense-organ. The entodermal part of the gastro-vascular system consists of 6 lateral diverticula from a central chamber; 2 of these lateral branches lead into the bases of the tentacles and the other 4 lead outward toward the 4 double rows of cilia. The ectodermal buccal pouch or stomodeum has become a long, laterally compressed tube, with its broad axis 90° from the tentacular axis of the animal. Until this time the animal swims about quite freely within the egg-envelope; at this stage its cilia may be observed beating in a normal manner and its tentacles to elongate or contract in response to stimuli. Soon after this the larva breaks through the egg-envelope and escapes into the water. Here it passes through developmental stages which are very similar to those of the young Pleurobrachia.
The tentacles acquire numerous lateral filaments and elongate greatly, as in Pleurobrachia. When the animal is 5 mm long, the oral lobes begin to develop as two simple outgrowths on both sides of the mouth in the sagittal plane of the animal. At the time when the oral lobes begin to develop, the meridional ventral canals and the paragastric tubes begin to elongate downward. The former give rise to the characteristic loops in the oral lobes. Four meridional vessels extend downward and fuse with the circum-oral vessel. The primary tentacle-bulbs migrate downward to lie close by the sides of the mouth. The auricles appear last of all, after the lobes have developed to some extent. When it attains a length of 10 mm, the animal becomes ellipsoidal in outline. The appearance of its lobes and auricles resembles that of the adult Bolinopsis. Afterward the deep, lateral furrows extend upward to the level of the apical sense-organ and the animal acquires the characteristic form of Mnemiopsis (Mayer, 1912). Embryonic development takes about 20-24 hours in the upper water layer of the Black Sea at 23 degrees C. The size of hatched larvae is 0.3-0.4 mm.
This species has been nominated as among 100 of the "World's Worst" invaders
Reviewed by: Dr. Tamara Shiganova. P.P.Shirshov Institute of Oceanology, Russian Academy of Sciences, Russia.
Compiled by: Dr. John Costello, Biology Dept., Providence College, Providence, RI, USA.
Dr. Hermes Mianzan, National Institute for Fisheries Research and Development (INIDEP), Argentina.
Dr. Tamara Shiganova. P.P.Shirshov Institute of Oceanology, Russian Academy of Sciences, Russia.
Last Modified: Monday, 30 May 2005 | <urn:uuid:829741a3-9393-4417-9951-0b7635def64d> | 2.796875 | 2,813 | Knowledge Article | Science & Tech. | 37.436754 |
FIRST they were just for light. Now an invisibility cloak has been designed to hide magnetic fields.
Exotic substances called metamaterials steer light around very tiny objects, rendering them invisible. Alvaro Sanchez at the Autonomous University of Barcelona, Spain, and colleagues have come up with a design that uses similar principles to hide one magnetic field from another.
First, surround one field with a superconductor to shield it from the other. Next, add layers of metamaterials. These interact with magnetic fields and could be arranged so they steer the second field around the superconductor as if there were nothing there (New Journal of Physics, DOI: 10.1088/1367-2630/13/9/093034).
The design might allow people with pacemakers to have MRI scans, or boost the stealth of ships and planes, whose magnetic fields can give away their ...
| <urn:uuid:9719b3cf-e27e-4574-8d4b-c4f5d55f6fbf> | 3.34375 | 215 | Truncated | Science & Tech. | 51.000112 |
XML lets you use a Document Type Definition (DTD) to describe the markup (elements and other constructs) available in any specific type of document. However, the design and construction of a DTD can be complex and non-trivial, so XML also lets you work without a DTD. DTDless operation means you can invent markup without having to define it formally, provided you stick to the rules of XML syntax.
To make this work, a DTDless file is assumed to define its own markup by the existence and location of elements where you create them. When an XML application encounters a DTDless file, it builds its internal model of the document structure while it reads it, because it has no DTD to tell it what to expect. There must therefore be no surprises or ambiguous syntax: the document must be `well-formed' (must follow the rules).
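A rough sketch of what DTDless operation means in practice (using Python's standard library parser; the document and element names are invented for this illustration): a well-formed file parses with no declarations at all, the processor inferring the structure as it reads.
import xml.etree.ElementTree as ET
doc = "<note><to>Ada</to><body>Meet at noon.</body></note>"
root = ET.fromstring(doc)  # no DTD: the structure is built as the markup is read
print(root.tag, [child.tag for child in root])  # prints: note ['to', 'body']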
To understand why this concept is needed, look at standard HTML as an example:
- The <IMG> element, which is defined (in the SGML DTDs for HTML) as EMPTY, doesn't have an end-tag (there is no such thing as </IMG>); and many other HTML elements (such as <P>) allow you to omit the end-tag for brevity.
- If an XML processor reads an HTML file without knowing this (because it isn't using a DTD), and it encounters <P> or many other start-tags, it would have no way to know whether or not to expect an end-tag, which makes it impossible to know if the rest of the file is correct or not, because it has now lost track of whether it is inside an element or if it has finished with it.
Well-formed documents therefore require start-tags and end-tags on every normal element, and any EMPTY elements must be made unambiguous, either by using normal start-tags and end-tags, or by affixing a slash to the start-tag before the closing > as a sign that there will be no end-tag.
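A minimal demonstration of that rule (again with Python's standard parser; the element names are arbitrary): the slash form is accepted, while an HTML-style bare start-tag is rejected as not well-formed.
import xml.etree.ElementTree as ET
ET.fromstring("<a><img/></a>")  # empty-element syntax: accepted
try:
    ET.fromstring("<a><img></a>")  # bare <img> with no end-tag: rejected
except ET.ParseError as err:
    print("not well-formed:", err)  # reported as a mismatched tag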
All XML documents, both DTDless and valid, must be well-formed. They must start with an XML Declaration if necessary (for example, identifying the character encoding or using the Standalone Document Declaration):
<?xml version="1.0" encoding="iso-8859-1" standalone="yes"?>
David Brownell notes: XML that's just well-formed doesn't need to use a Standalone Document Declaration at all. Such declarations are there to permit certain speedups when processing documents while ignoring external parameter entities--basically, you can't rely on external declarations in standalone documents. The types that are relevant are entities and attributes. Standalone documents must not require any kind of attribute value normalization or defaulting, otherwise they are invalid.
Rules for well-formedness:
- there is exactly one top-level (root) element, and all other elements nest properly inside it, with no overlapping;
- every start-tag has a matching end-tag, or uses the empty-element form ending in />;
- all attribute values are enclosed in quotes;
- the characters < and & occur in character data only as the references &lt; and &amp; (or inside a CDATA section).
Valid XML files are well-formed files which have a Document Type Definition (DTD) and which conform to it. They must already be well-formed, so all the rules above apply.
A valid file begins with a Document Type Declaration, but may have an optional XML Declaration prepended:
<!DOCTYPE advert SYSTEM "http://www.foo.org/ad.dtd">
The XML Specification predefines an SGML Declaration for XML which is fixed for all instances and is therefore hard-coded into most XML software (the declaration has been removed from the text of the Specification and is now in a separate document). The specified DTD must be accessible to the XML processor using the URL supplied in the SYSTEM Identifier, either by being available locally (i.e. the user already has a copy on disk), or by being retrievable via the network.
It is possible (many people would say preferable) to supply a Formal Public Identifier with the PUBLIC keyword, and use an XML Catalog to dereference it, but the Specification mandates a SYSTEM Identifier so this must still be supplied (after the PUBLIC identifier: no further keyword is needed):
<!DOCTYPE advert PUBLIC "-//Foo, Inc//DTD Advertisements//EN"
        "http://www.foo.org/ad.dtd">
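A sketch of how a validating parse can be run in code, assuming the third-party lxml library (one tool among several; nothing in the XML Specification requires it) and a one-element DTD invented for illustration:
from lxml import etree
dtd = etree.DTD(open("ad.dtd"))  # assume ad.dtd contains e.g. <!ELEMENT advert (#PCDATA)>
tree = etree.fromstring("<advert>Buy now</advert>")
print(dtd.validate(tree))  # True only if the document obeys the DTD
print(dtd.error_log)       # lists the violations when validation fails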
The test for validity is that a validating parser finds no errors in the file: it must conform absolutely to the definitions and declarations in the DTD. | <urn:uuid:c38fde61-b663-4424-842c-7ab3f41b8c44> | 3.40625 | 896 | Documentation | Software Dev. | 43.88734 |
Description of the Geospatial Multi-Agency Coordination (GeoMAC) project, online maps of current wildland fire locations using Netscape Communicator or Microsoft Internet Explorer, and user guide on how to use mapping application.
Fact sheet describing the value of The National Map designed as a network of digital databases that will provide a consistent national geographic data framework in responding to natural hazards and human-induced disasters.
Using a geographic dataset of structures, with more than 5500 structures that were destroyed or damaged by wildfire since 2001, we identified the main contributors to property loss in two extensive, fire-prone regions in southern California.
The so-called "100-year" flood is really more like the 4 ½ year flood. This can help emergency managers enhance public awareness of how often flooding truly occurs in a region. It also could help convince those people in harm's way that preparedness is m
Video: Learn what USGS scientists have discovered about landslide dynamics and which slopes are most susceptible to sliding. Hear the devastating stories of Bay Area residents affected by landslides and learn to recognize the danger signs. | <urn:uuid:752058cd-cf36-4bb2-872e-877a8d7d4da2> | 3.171875 | 225 | Content Listing | Science & Tech. | 22.035968 |
Have you ever heard of the phrase, “Once in a blue moon”? If you have, did you ever think about where that saying could have come from?
If you were outside gazing at the stars last Friday, August 31st, you would have caught a glimpse of a large full moon out in the night sky. What you probably didn’t realize was that this full moon was one of the two full moons in August. You might be thinking, “But we’re only supposed to have one full moon a month!” — and if you are, you would be right.
The moon goes through a “lunar cycle,” meaning that it goes through phases (such as full, quarter, half, and new moon) — essentially, we can view the moon from Earth as a constantly changing fraction that can be figured out like a math equation! This lunar cycle takes about 29.5 days to complete, from one new moon to the next. If you divide the number of days in a year (365) by that, you end up with about 12 lunar cycles (one for each month!). That means for each season of the year — spring, summer, fall and winter — there are 3 full moons, and each of these moons has a specific seasonal name (such as Moon After Yule and Grain Moon).
Here’s the catch: there are about eleven more days in the calendar year than there are days in the 12 lunar cycles. What does this mean? It means that every few years (about every 2.7 years) those extra days add up to a whole extra lunar cycle, and one of the seasons gets a fourth full moon — one extra full moon in addition to the usual three full moons in a season. When this happens, the third of those four full moons is called the “Blue Moon.”
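The arithmetic behind that 2.7-year figure is easy to check for yourself (a quick sketch in Python; the 29.53-day lunar cycle and the 365.25-day average year are standard round values):
synodic_month = 29.53  # days from one new moon to the next
year = 365.25          # average days in a calendar year
print(round(year / synodic_month, 2))        # 12.37 lunar cycles per year
extra_days = year - 12 * synodic_month       # about 10.9 leftover days each year
print(round(synodic_month / extra_days, 1))  # ~2.7 years to pile up an extra full moon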
It would make sense to think that the fourth moon would be the Blue Moon, since it’s the extra one added on, right? Actually, the reason why the third moon is called the Blue Moon is because all of the other seasonal full moons have established and set names — we can’t change those because they go along with the times and seasons of the year, so the third moon gets the special title!
The next Blue Moon is supposed to appear in July of 2015. It’s going to be a while before we can see another one, but in the meantime, all of us can brush up on our facts about the moon and solar system in time for the next awesome thing that happens in space! | <urn:uuid:ba3193a0-70a2-456a-8500-dda9091451e0> | 3.28125 | 532 | Personal Blog | Science & Tech. | 74.578639 |
New Efforts May Harness SUN LIGHT (Oct, 1934)
New Efforts May Harness SUN LIGHT
By Robert E. Martin
SUNSHINE, our greatest source of potential power, is now largely wasted. It is highly probable, however, that a few years hence science will find a way to harness the mighty energy of the sun’s radiation. Solar engines and solar heating apparatus will then make it economically practicable for us to use at least a small portion of our now-wasted sunshine to run our factories, light our streets, cook our food, and warm our houses. In the United States we use, each year, something like a half billion tons of coal, a half billion barrels of oil, and fifty billion horsepower hours of water power for heat, light, and power.
If it were possible to convert all this energy into power—which of course it isn’t—it would produce seven trillion horsepower hours. If it were possible to convert completely into power all the solar energy that each year falls on the United States in the form of sunshine, it would amount to seven thousand trillion horsepower hours. Of course, some of the sunshine that comes to us through 93,000,000 miles of space is needed for the general heating of the earth and for the growing of plant life: but above those fundamental needs, solar radiation provides a potential supply of power many thousand times as great as the amount now supplied by other sources.
Solar radiation experts estimate that the sun emits 12,500 horsepower of energy for every square foot of the 585 billion square miles of surface it exposes to the earth. By far the greater part of this almost unthinkable amount of power is lost on its long journey through space, but the radiant energy that reaches the outer surface of the earth’s atmosphere is equivalent to 7,300 horsepower per acre, and at noon on a clear day 5.000 horsepower per acre is transmitted through our atmosphere to the land surface of the earth. The theoretical power value of the sunshine that falls on the 133 square miles of the city of Philadelphia is equal to the power that could be generated by a hundred Niagaras. The Sahara Desert, in a single day, receives solar energy equal to the power that would be produced by burning 6.000 million tons of coal.
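Those per-acre figures can be sanity-checked in modern units (a quick sketch; the horsepower and acre conversion factors are standard, and the comparison value is a present-day estimate, not from the article):
HP_TO_WATTS = 745.7
ACRE_TO_M2 = 4046.86
flux = 5000 * HP_TO_WATTS / ACRE_TO_M2  # the article's noon figure, converted
print(round(flux))  # about 921 watts per square metre, close to the roughly
                    # 1,000 W/m2 quoted today for direct midday sunlight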
No one thinks that it ever will be possible to convert into mechanical power anything like all of the theoretical power value of the heat reaching the earth from the sun. Steam boilers and steam engines have been built for a good many years now, but no boiler or engine has been built that can convert all the heat of coal into its theoretical equivalent of actual power. The heat value of good coal equals 12,760 horsepower hours a ton, but the best result ever obtained from a ton of coal with a boiler and steam engine is 1,470 brake horsepower hours, 11.5 per cent of the fuel’s heat value.
That the use of solar radiation for power is no vague dream of the far-distant future is shown by the fact that at present a solar power plant with a thermal efficiency of 4.32 per cent —over one third of the efficiency of the best steam engine—has been built and is being operated.
Dr. Charles Greeley Abbot, the secretary of the Smithsonian Institution and the world’s leading authority on solar radiation, says that before long we shall find a commercially practicable method of harnessing sunshine. “Financial success probably awaits the solver of the problems of collecting solar heat for power purposes,” he says. “With our present outlook it seems to me likely that within another generation or two power demands will lead to the sun as the most available source of supply.”
Over 2,000 years ago, a few wise men knew enough about solar radiation to concentrate the sun’s rays for their own benefit. Among them were the pagan priests of ancient Rome, who occasionally allowed the sacred fire in the temple of Vesta, the Goddess of the Hearth, to go out, and then rekindled it by placing a piece of carefully dried wood in the focus of a conical metal reflector and letting the sun do the rest. Also there is a classical story that the famed philosopher Archimedes, when a Roman fleet was attacking Syracuse in 214 B.C., set fire to the Roman warships by concentrating sun rays on them by means of mirrors erected on the shore.
It was in an effort to prove the possibility of this tale that, in 1747, Buffon, a French naturalist, made the earliest known scientific experiments leading toward the utilization of solar energy. He mounted over 300 small glass mirrors on a frame so that each of them could be adjusted separately, and so that all of them could be made to concentrate their rays at any desired distance. With this apparatus he set fire to wood over 200 feet away, and melted silver at a tenth of that distance. A few years later Hoesen, a mechanician of Dresden, built a mirror ten feet in diameter whose concentrated rays almost instantly melted coins. To-day, almost two centuries later, scientists still are interested in burning glasses. Dr. George E. Hale, astronomer at the Mount Wilson Observatory in California, recently designed a fifteen-foot instrument with thirty lenses that generates a temperature of 6,000 degrees centigrade that melts steel wire as fast as an ordinary gas burner would melt butter.
About twenty years after Buffon’s experiments, H. B. de Saussure, a Swiss scientist, invented the solar hot box. Realizing that it is not until the sun’s radiant energy strikes some material object that it is converted into heat, and that black, which absorbs all of the sun’s rays, is the most efficient color for this conversion, he constructed a small wooden box, painted it black inside, and covered it with two sheets of plain glass with an air space between them. Just how high the temperatures were that he obtained in this box when he set it in the sun is not known, but when, in 1837, Sir John Herschel used a similar box in Cape Town, a thermometer in it registered 240 degrees Fahrenheit. Sir John astonished his neighbors by using his apparatus for the homely purpose of frying eggs and stewing meats and vegetables. So it seems that his crude hot box was a sort of rough draft of the solar cooker that in recent years Dr. Abbot has developed to a high degree of efficiency.
De Saussure and Herschel had been content to prove that it was possible to collect the sun’s heat. In 1874, August Mouchot, a brilliant French engineer and the greatest of the pioneer harnessers of the sun, took a long and bold stride forward. He used concentrated reflected sun rays to generate steam in a boiler, and used that steam to operate a small engine.
Mouchot’s apparatus, the first solar power plant, consisted of two principal parts—a reflector and a boiler. The reflector, a truncated copper cone lined with thin silver leaf, looked like a big lamp shade pointed skyward. It was thirty-two inches deep, had a diameter of forty inches at its base, and a diameter of 102 inches at its mouth. A hand-operated mechanism made it possible to shift the reflector to follow the movement of the sun. Attached to the lower base of the reflector, its axis the same as the reflector’s, was the boiler, a blackened cylinder made of copper about one-tenth inch thick, eleven inches in diameter, and thirty-two inches long, enclosed by a glass cylinder four inches greater in diameter. The space between the boiler and its enclosing glass cylinder was filled by a two-inch layer of hot air. Inside the boiler was another copper cylinder, somewhat smaller in diameter, only twenty inches long, and hollow except for feed and steam pipes. About twenty-one quarts of water could be heated between the two copper cylinders, and the steam chamber had a capacity of about ten quarts. On a bright day, the sun’s rays concentrated on the boiler by the reflector produced a steam pressure of thirty pounds per square inch in forty minutes. The pressure then was raised rapidly to seventy-five pounds per square inch, the safety limit of the lightly-constructed boiler. On a very warm day the boiler vaporized over five quarts of water an hour, and the small engine it ran, driving a pump, developed one-half horsepower.
WITH the financial assistance of the French government, Mouchot continued his solar-power experiments for twenty years. One of his later plants had a boiler made of several tubes placed side by side, which, when tested over the span of a year by independent engineers, showed the excellent boiler efficiency of forty-nine per cent. Some of his plants were used successfully for pumping water in Algeria.
In America, John Ericsson, the Swedish-born engineer and inventor who had done his adopted country so valuable a service by designing the Monitor that fought the Confederate ironclad Merrimac, was working industriously on the problem of obtaining cheap power from the sun. In 1883 he built in New York a solar power plant, his eighth, that was comparatively inexpensive and highly efficient. The reflecting apparatus consisted of a rectangular trough eleven feet long and sixteen feet broad, built of straight wood staves supported by curved iron ribs. To these staves were attached mirrors made of common window glass silvered on the underside. The trough, revolving around a pivot so that it could be made to follow the movement of the sun, was supported by light steel trusses, to which was attached a water heater six and one-fourth inches in diameter and eleven feet long. The sun’s rays concentrated on the heater by the reflector produced sufficient steam to operate an engine with a six-inch working cylinder and an eight-inch stroke. Ericsson was eighty years old when he built this machine. Had he been younger it is probable that he would have developed it to very high efficiency.
Other inventors continued working on the problem. In 1904, residents of Pasadena, Calif., were astounded by the erection of the largest and most powerful mirror-type solar generator that ever has been built. The brain child of Aubrey G. Eneas, an Englishman living in Boston, it had a cone-shaped reflector thirty-six feet in diameter that weighed over four tons, moved by a clock-controlled motor so as always to be in accurate focus with the sun. The mirrors of the reflector were of white glass, one-sixteenth of an inch thick, sprung to the curvature of the frame.
THE boiler, formed of two concentric steel tubes enclosed in two glass tubes with an air space between them, was thirteen feet six inches long, and was placed at the axis of the reflector. The water was circulated up between the steel tubes, and down the inner tube. About thirteen and one-half square feet of sunshine was concentrated on each square foot of the outer surface of the boiler. The machine transformed about four per cent of the solar radiation intercepted by the mirror into mechanical work, and gave an all-day average of about two and one-half horsepower. Eneas built several similar plants, which were used for pumping water, in southern California and in Arizona.
THE late Frank Shuman, of Philadelphia, came the closest of any of the sun harnessers to making solar generation of power a commercial success. Starting work in 1906 on the hot box principle, he built several successful experimental plants. In 1911 English capitalists became interested, and the following year he was invited to build a large sun-power plant in Egypt. Professor C. V. Boys, the English physicist who invented the quartz fibers now largely used in instruments of precision, and A. S. E. Ackermann, an English consulting engineer, became associated with Shuman in the work. At the suggestion of Professor Boys the design of the absorbers was changed from the old hot box to a reflector-lined trough something like the one that was used by Ericsson. The boilers were placed on edge at the focus of the reflectors, so that both sides would receive the reflected rays, and were covered by a single layer of glass enclosing an air space around the boiler.
Each channel-shaped reflector and its boiler was 205 feet long. The five reflectors were automatically heeled so as to follow the sun all day. A total area of 13,269 square feet of sunshine was caught, and the maximum amount of steam produced was twelve pounds per 100 square feet, equivalent to one brake horsepower per 183 square feet of sunshine. The maximum output for an hour’s run was fifty-five and one-half brake horsepower, about ten times the power production ever before obtained by a solar-generating plant, and equal to sixty-three horsepower per acre of land occupied by the plant.
In 1916, while engaged in making solar-radiation observations at Mount Wilson, Dr. Abbot built a solar cooker that gives twenty-four-hour-a-day service.
EXCEPT that concentrated sun rays, instead of a fire, are the source of heat, and that engine-cylinder oil instead of water is heated, this apparatus is much like an ordinary bathwater heater. The sun’s rays are reflected on a blackened copper heater tube, covered by two concentric glass tubes, by a cylindrical trough of light sheet steel lined with glossy sheet aluminum. This reflector, which is twelve and one-half feet long and seven and one-half feet wide, is mounted on a steel frame with its long dimension parallel to an axis pointing toward the North Star. An ingenious arrangement of counterweights, controlled by an inexpensive alarm clock, moves the reflector sufficiently for it to follow the daily march of the sun from east to west.
On a platform about six feet above the reflector stands a twenty-by twenty-four-by thirty-six-inch steel reservoir, with two ovens, each nine by eleven by sixteen and one-half inches, in its back. A copper pipe, one and one-half inches in diameter, passes down under the reflector, turns, and returns in the focus of the sun rays, as described.
Although shaded by trees so that only about seven hours of sunlight a day are available, the temperature of the cooker’s ovens always remains above boiling, and many varieties of food may be cooked at night. At most times the ovens are hot enough to bake bread. | <urn:uuid:12e34ca7-dce2-453f-a54a-84ac58467fb8> | 3.625 | 3,022 | Personal Blog | Science & Tech. | 47.882448 |
Earlier this month we wrote about a study of adaptable ants that changed their leaf-gathering strategies to bypass a roadblock thrown in their way. These clever insects solve traffic jams much more easily than big-brained humans do, and now scientists want to borrow their secrets to ease our highway woes.
Ants leave a trail of pheromones to show others the best way back to the nest; when others follow, they leave their own pheromones and the trail is reinforced. They all work together through what biologists call “distributed intelligence.” You can see this skill demonstrated in a Slate video here.
Unfortunately, getting a swarm of humans to all work together isn’t so easy. But thankfully, we might not have to—our cars could do it for us. Some scientists think we could copy ant ingenuity by teaching our cars to talk to one another. In one model, called Inter-Vehicle Communication, cars would send out a signal when they drop below 10 mph that tells other cars that they’re entering a traffic jam. If all the vehicles in one area send this message, other drivers—whose cars are receiving this data—know to find an alternate route.
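A toy sketch of that broadcast rule (all class and method names here are invented for illustration; only the 10 mph threshold comes from the article):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Minimal sketch of the Inter-Vehicle Communication idea: a car dropping
// below the threshold speed broadcasts the road segment it is stuck on,
// and other cars reroute once enough warnings accumulate for a segment.
public class JamBroadcastSketch {
    static final double JAM_THRESHOLD_MPH = 10.0; // threshold from the article

    // Stand-in for the shared radio channel between vehicles.
    static final List<String> warnings = new ArrayList<>();

    static void onSpeedUpdate(String segment, double speedMph) {
        if (speedMph < JAM_THRESHOLD_MPH) {
            warnings.add(segment); // like an ant reinforcing a pheromone trail
        }
    }

    static boolean shouldReroute(String plannedSegment, int threshold) {
        return Collections.frequency(warnings, plannedSegment) >= threshold;
    }

    public static void main(String[] args) {
        onSpeedUpdate("I-95 N @ exit 12", 8.0);
        onSpeedUpdate("I-95 N @ exit 12", 5.5);
        onSpeedUpdate("I-95 N @ exit 12", 9.2);
        System.out.println(shouldReroute("I-95 N @ exit 12", 3)); // true
    }
}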
Commuters aren’t the only people who can learn from ants—according to Slate, companies that move merchandise by truck have turned to algorithms based on ants to figure out the most efficient routes through a congested area.
Now if only we could lift 20 times our body weight like some ants can. | <urn:uuid:502b3910-e59f-46f5-8564-a34bd650c309> | 3.140625 | 311 | Personal Blog | Science & Tech. | 53.45477 |
The Taiga is a forest biome that rings the upper latitudes of the northern hemisphere. The Taiga is dominated by black spruce and paper birch growing together. The sunlight is low in intensity and there are only 50-100 frost-free days during the summer, yet the taiga is the largest forest biome on the planet. The taiga has very low biodiversity, yet liberals prevent it from being harvested for charcoal and paper production. Ted Stevens was very supportive of logging the taiga in Alaska. Each year Alaska loses approximately the area of Rhode Island to forest fires. These forest fires lead to significant air pollution, with more particulates than the combined output of all the coal-fired power plants in the world, releasing gigatons of CO2 in the process, which is supposed by liberals to lead to global warming.
Ergastic substances are non-protoplasmic materials found in cells. The living protoplasm of a cell is sometimes called the bioplasm, as distinct from the ergastic substances of the cell. The latter are usually organic or inorganic substances that are products of metabolism, and include crystals, oil drops, gums, tannins, resins and other compounds that can aid the organism in defense, maintenance of cellular structure, or simply substance storage. Ergastic substances may appear in the protoplasm, in vacuoles, or in the cell wall.
Although proteins are the main component of living protoplasm, proteins can occur as inactive, ergastic bodies—in an amorphous or crystalline (or crystalloid) form. A well-known amorphous ergastic protein is gluten. | <urn:uuid:4c4a3a14-05af-49c9-9fd3-8639c82b69a3> | 3.171875 | 184 | Knowledge Article | Science & Tech. | 31.6924 |
Java 6 added a new utility class for reading input from character-based devices, including the command line. java.io.Console can be used to read input from the command line, but unfortunately it doesn't work in most IDEs, such as Eclipse and NetBeans. As per the Javadoc, a call to System.console() returns the console attached to the JVM if it was started from an interactive command prompt, and returns null if the JVM was started by a background process or a scheduled job. java.io.Console not only provides a way to read input from the command prompt, but also supports reading passwords without echoing them. The Console.readPassword() method reads a password and returns a character array; the password is masked while being entered, so that no onlooker can see it as you type. Here is a code example of how to read a password and other input from the command prompt or console using java.io.Console. By the way, apart from Console, you can also use Scanner or BufferedReader to read input from the command prompt, as shown in this example.
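The post's original listing did not survive, so here is a minimal, self-contained example of the usage described above (the null check matters because System.console() returns null inside most IDEs):

import java.io.Console;
import java.util.Arrays;

public class ConsoleDemo {
    public static void main(String[] args) {
        Console console = System.console();
        if (console == null) {
            // Happens when the JVM is not attached to an interactive terminal,
            // e.g. when running inside Eclipse or NetBeans.
            System.err.println("No console available; run this from a command prompt.");
            return;
        }
        String user = console.readLine("Enter username: ");
        char[] password = console.readPassword("Enter password: "); // input is not echoed
        console.printf("Welcome, %s! Your password has %d characters.%n",
                user, password.length);
        Arrays.fill(password, ' '); // wipe the password from memory when done
    }
}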
Saturday, May 18, 2013
Saturday, April 27, 2013
In this article I am giving examples of some SQL queries that are often asked in interviews of candidates with one or two years of experience in this field. Whenever you apply for a Java developer position, or any other programmer position, the interviewer expects that if you have worked on any project for a year or two, you have certainly had to handle database queries like these, so they test your skill by asking this type of simple query.
Question 1: SQL Query to find second highest salary of Employee
Answer: There are many ways to find the second highest salary of an Employee in SQL; you can use either a SQL join or a subquery to solve this problem. Here is a SQL query using a subquery:
SELECT MAX(Salary) FROM Employee WHERE Salary NOT IN (SELECT MAX(Salary) FROM Employee);
See How to find second highest salary in SQL for more ways to solve this problem.
Question 2: Write a SQL Query to find the max salary from each department.
Answer: SELECT DeptID, MAX(Salary) FROM Employee GROUP BY DeptID;
Question 3: Write a SQL Query to display the current date.
Answer: SQL Server has a built-in function called GETDATE() which returns the current timestamp.
Question 4: Write a SQL Query to check whether a date passed to the query is a date of the given format or not.
Answer: SQL has an ISDATE() function which checks whether the passed value is a valid date in the specified format; it returns 1 (true) or 0 (false) accordingly.
SELECT ISDATE('1/08/13') AS "MM/DD/YY";
It will return 0 if the passed string is not a valid date in the expected format.
Question 5: Write a SQL Query to print the names of distinct employees whose DOB is between 01/01/1960 and 31/12/1975.
Answer: SELECT DISTINCT EmpName FROM Employees WHERE DOB BETWEEN '01/01/1960' AND '31/12/1975';
Question 6: Write a SQL Query to find the number of employees, grouped by gender, whose DOB is between 01/01/1960 and 31/12/1975.
Answer: SELECT COUNT(*), sex FROM Employees WHERE DOB BETWEEN '01/01/1960' AND '31/12/1975' GROUP BY sex;
Question 7: Write a SQL Query to find employees whose salary is equal to or greater than 10000.
Answer: SELECT EmpName FROM Employees WHERE Salary >= 10000;
Question 8: Write a SQL Query to find the names of employees that start with 'M'.
Answer: SELECT * FROM Employees WHERE EmpName LIKE 'M%';
Question 9: Find all Employee records containing the word "Joe", regardless of whether it was stored as JOE, Joe, or joe.
Answer: SELECT * FROM Employees WHERE UPPER(EmpName) LIKE UPPER('%joe%');
Question 10: Write a SQL Query to extract the year from a date.
Answer: SELECT YEAR(GETDATE()) AS "Year";
Hope this article helps you get in some quick practice whenever you are about to attend an interview and don't have much time to go deep into each query.
Other Interview Questions posts from Java67 Blog
Wednesday, April 17, 2013
Though modern IDEs like Eclipse, IntelliJ IDEA, or NetBeans can generate equals, hashCode, and compareTo methods for your value classes, it's equally important that you know how to write them by hand. By overriding the equals and hashCode methods yourself, you learn how they work and what kinds of errors you can get; most importantly, it's expected of you as a Java programmer in any core Java interview. More often than not, you will see a coding question in Java which asks you to override the equals(), hashCode(), compare(), and compareTo() methods for a value class. Since I have already shared some tips on how to override the compareTo method in Java, and a couple of examples of writing your own Comparator in Java, here I am sharing another simple example of overriding the equals and hashCode methods. If you know the rules of overriding equals and hashCode, you know that whenever you override equals you must also override hashCode; otherwise your object will not behave properly in various collection classes, e.g. Map or Set, which rely on equals, compareTo, and hashCode to implement their invariants (e.g. Set implementations must not allow duplicates).
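As a concrete illustration (a generic sketch of my own, not the exact class from the original post), here is a small value class with equals and hashCode overridden by hand:

import java.util.Objects;

public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;              // reflexive: same reference
        if (!(o instanceof Point)) return false; // also rejects null
        Point other = (Point) o;
        return x == other.x && y == other.y;     // compare significant fields
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y); // equal objects must produce equal hash codes
    }
}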
Sunday, March 31, 2013
String class provides the split() method to split a String in Java based upon any delimiter, e.g. comma, colon, space, or any arbitrary pattern. The split() method splits the string based on the delimiter provided and returns a String array containing the individual Strings. Actually, split() takes a regular expression, which in the simplest case can be a single word. split() is also an overloaded method in java.lang.String, and its overloaded version takes a limit parameter which controls how many times the pattern is applied during the splitting process. If this limit is a positive number n, then the pattern is applied at most n-1 times; if it is negative or zero, the split operation is applied as many times as possible. For example, if we split the String "First,Second,Third" on a comma and provide a limit of 2, then the pattern runs once, and split() returns a String array with 2 Strings, "First" and "Second,Third". Since this method accepts a Java regular expression, it throws PatternSyntaxException if the syntax of the regular expression is invalid.
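A short runnable illustration of the limit parameter described above (expected outputs shown in comments):

import java.util.Arrays;

public class SplitDemo {
    public static void main(String[] args) {
        String csv = "First,Second,Third";

        // No limit: the pattern is applied as many times as possible.
        System.out.println(Arrays.toString(csv.split(",")));
        // -> [First, Second, Third]

        // Limit of 2: the pattern is applied at most once (n - 1 times),
        // so the remainder stays in the last element.
        System.out.println(Arrays.toString(csv.split(",", 2)));
        // -> [First, Second,Third]

        // split() takes a regex, so metacharacters must be escaped.
        System.out.println(Arrays.toString("a.b.c".split("\\.")));
        // -> [a, b, c]
    }
}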
Saturday, March 23, 2013
String class in Java provides several methods to replace characters, CharSequences, and substrings of a String. Since String is immutable in Java, every time you perform an operation on a String, whether a replacement or removing white space, it generates a new String object. There are four methods to replace content in a String in Java:
replace(char oldChar, char newChar)
replace(CharSequence target, CharSequence replacement)
replaceAll(String regex, String replacement)
replaceFirst(String regex, String replacement)
Out of these, the second one, which takes a CharSequence, was added in Java 1.5. CharSequence is actually a super interface of String, StringBuffer, and StringBuilder in Java, which means you can pass a String, StringBuffer, or StringBuilder object as the argument to this replace method. replaceFirst() and replaceAll() are very powerful and accept regular expressions. replaceFirst() replaces only the first match, while replaceAll() replaces all matches with the provided replacement String.
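Here is a brief sketch exercising each variant (the example strings are mine, not from the original post):

public class ReplaceDemo {
    public static void main(String[] args) {
        String s = "one fish two fish";

        System.out.println(s.replace('f', 'd'));         // one dish two dish
        System.out.println(s.replace("fish", "bird"));   // one bird two bird (CharSequence)
        System.out.println(s.replaceFirst("fish", "*")); // one * two fish (regex, first match)
        System.out.println(s.replaceAll("\\s+", "_"));   // one_fish_two_fish (regex, all matches)

        // The original string is untouched: String is immutable.
        System.out.println(s);                           // one fish two fish
    }
}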
Tuesday, March 12, 2013
wait, notify, and notifyAll methods are used for inter-thread communication in Java. wait() allows a thread to check for a condition and wait if the condition isn't met, while notify() and notifyAll() inform waiting threads to recheck the condition after the state of a shared variable has changed. A good example of how wait and notify work is the producer-consumer problem, where one thread produces, and waits if the bucket is full, while the other thread consumes, and waits if the bucket is empty. Both producer and consumer threads notify each other as well: the producer thread notifies the consumer thread after inserting an item into the shared queue, while the consumer thread notifies the producer after consuming an item from the queue. Though both notify() and notifyAll() are used to notify threads waiting on a shared object, there are some subtle differences between notify and notifyAll in Java. When we use notify(), only one of the waiting threads gets the notification, while in the case of notifyAll(), all threads waiting on that object get notified. This concept confuses many Java programmers, beginners and experienced alike. In fact, this is one of the three most popular questions on the wait and notify concept, along with why wait and notify are defined in the Object class and why wait and notify must be called from a synchronized method or block. In this article we will focus on the difference between the wait, notify, and notifyAll methods in Java.
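For reference, here is a compact producer-consumer sketch built on wait() and notifyAll() over a shared queue (a minimal illustration, not the full example referenced in the post):

import java.util.ArrayDeque;
import java.util.Queue;

public class ProducerConsumer {
    private static final int CAPACITY = 5;
    private final Queue<Integer> queue = new ArrayDeque<>();

    public synchronized void produce(int item) throws InterruptedException {
        while (queue.size() == CAPACITY) {
            wait();      // bucket full: wait until a consumer makes room
        }
        queue.add(item);
        notifyAll();     // wake waiting consumers to recheck their condition
    }

    public synchronized int consume() throws InterruptedException {
        while (queue.isEmpty()) {
            wait();      // bucket empty: wait until a producer adds an item
        }
        int item = queue.remove();
        notifyAll();     // wake waiting producers to recheck their condition
        return item;
    }
}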
Friday, March 8, 2013
There are 3 main ways to convert a String to an int in Java: using the constructor of the Integer class, the parseInt() method of java.lang.Integer, or the Integer.valueOf() method. Though all of those return an instance of java.lang.Integer, which is a wrapper class for the primitive int value, it's easy to convert an Integer to an int in Java. From Java 5 onward you don't need to do anything; autoboxing automatically converts Integer to int. For Java 1.4 or lower, you can use the intValue() method of java.lang.Integer to convert an Integer to an int. As the name suggests, parseInt() is the core method for converting a String to an int in Java. parseInt() accepts a String which must contain decimal digits; the first character may be an ASCII minus sign (-) to denote a negative integer. parseInt() throws NumberFormatException if the provided String is not convertible to an int value. By the way, parseInt() is an overloaded method, and its overloaded version takes a radix, or base, e.g. 2, 8, 10, or 16, which can be used to convert binary, octal, or hexadecimal Strings to int in Java. Integer.valueOf() is another useful method to convert a String to an Integer in Java; it offers caching of Integers from -128 to 127. Internally, valueOf() also calls the parseInt() method for the String-to-int conversion. In this Java programming tutorial, we will see all three ways to convert a String to an int value in Java.
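The three conversion routes mentioned above, side by side (a minimal sketch; note that the Integer constructor, while valid here, has since been discouraged in favor of valueOf):

public class StringToIntDemo {
    public static void main(String[] args) {
        int a = new Integer("42");          // constructor (verbose; discouraged today)
        int b = Integer.parseInt("42");     // returns a primitive int directly
        int c = Integer.valueOf("42");      // returns Integer; auto-unboxed since Java 5
        int d = Integer.parseInt("101", 2); // radix 2: binary "101" -> 5

        System.out.println(a + " " + b + " " + c + " " + d); // 42 42 42 5

        try {
            Integer.parseInt("42abc");
        } catch (NumberFormatException e) {
            System.out.println("Not a number: " + e.getMessage());
        }
    }
}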
Is what you see the way it is?
Structure of the reasoning so far:
We have made a model.
- Earth rotates counterclockwise as seen from the north.
- Earth revolves around the sun counterclockwise as seen from the north.
- Earth's axis is tilted.
We deduce consequences that can be tested by observation.
- Day and night.
- Motion of the sun through the ecliptic.
Davison E. Soper, Institute of Theoretical Science,
University of Oregon, Eugene OR 97403 USA
Astronomy in the News
- Archives -
Monster Asteroid 1998 'QE2' Misses Earth by Mere 3.6M Miles on May 31
Huge Asteroid Crashes into Moon
Colossal Solar Flare, Strongest of 2013, Shoots from Sun
'Ring of Fire' Solar Eclipse on Thursday
Russian Space Junk Almost Destroys NASA Telescope
New App Measures Light Pollution Level
NASA Eyes Monster Hurricane on Saturn
Russia Now Charging NASA $70 Million Per Seat to Fly U.S.
Bizarre Binary Star System Pushes Study of Relativity to New Limits
Here's What the Big Bang Sounded Like
Kepler Telescope Spots Two Planets in Life-Friendly Orbits
Nuclear Fusion Rocket Could Reach Mars in 30 Days
Sun Unleashes Biggest Solar Flare of the Year
NASA Satellite to Hunt for Planets Orbiting Other Stars
Sun's Magnetic 'Heartbeat' Revealed
Green Meteorite May Be from Mercury
Chameleon Pulsar Baffles Astronomers
Big Sunspot Unleashes an Intense Solar Flare
Largest Spiral Galaxy Found
Curiosity 2 Weeks from Historic Landing on Mars
SpaceX Scrubs Launch to ISS over Rocket Engine Problem
'Zombie' Planet Fomalhaut B Shocks Scientists
'Eye of Sauron' Hosts Rogue Planet
Key Telescopes Threatened by Looming Budget Cuts
Celestial Flybys Set to Thrill
Curiosity Rover Finds 'Flower' on Mars
13 Must-See Stargazing Events in 2013
European Space Agency Project Will Change How Astronomers View Our Galaxy
Moon's Age Calculator
There are a few nice calculators online for determining the age of the moon past the last new moon. Some only present the current age and some allow you to enter a day of the year, but this one allows you to enter both date and time to get a more precise number. It has a one-hour resolution, but that's 24x as precise as the others. There might be another calculator that allows an even finer setting, down to the minute or second, but I don't know about it...
Eavesdropping on Satellites
1963 was five years since America's first communications satellite, Echo, was placed in orbit. Echo was a passive, spherical reflector that merely provided a good reflective surface for bouncing radio signals off of. By 1963, the space race was well underway and active communications satellites were being launched at a rapid pace. Spotting and tracking satellites has long been a popular pastime with two types of hobbyists: amateur astronomers using telescopes and binoculars, and amateur radio operators using antennas and receivers...
Lunar Libration with Phases
Tom Ruen released this animation of the moon showing its monthly phase progression. We have all witnessed the phases of the moon, but have you noticed that its apparent size varies due to its elliptical orbit (0.0549 eccentricity) around the earth? What the animation really emphasizes is something you may have never noticed: a libration motion, also due to the elliptical orbit and the moon's 6.7° axial tilt with respect to its orbital plane. Libration causes the pronounced rocking motion. If you viewed the planets from the sun, they would all display the same combination of motions because all have eccentric (elliptical) orbits...
Amateur Radio Astronomy in QST Magazine
QST is the official publication of the American Radio Relay League (ARRL), the world's oldest and largest organization for amateur radio enthusiasts. Many amateur radio operators also have an interest in astronomy, and as such, articles covering topics in amateur radio astronomy occasionally appear. There are also quite a few articles dealing indirectly with aspects of astronomy, such as Earth-Moon-Earth (EME) communications, where signals are bounced off the moon's surface in order to facilitate transmission (although it is really more of a hobby achievement). The October 2012 edition of QST had an article entitled "Those Mysterious Signals," which discusses galactic noise in the 10-meter band. Arch Doty (W7ACD) writes about the low-level background noise that is persistent in the high frequency (HF) bands. At HF, Cygnus A and Cassiopeia A are major sources of cosmic noise, for example. Low-level signals come from pulsars...
Exoplanet Discoveries to Date Are Just a Drop in the Bucket
Scientific American has a nifty interactive graphic showing the relative positions and distances of the 629 known exoplanets. According to a recent study, each star hosts 1.6 planets on average.
Installation Video for the Feathertouch MicroFocuser onto a CPC 800 Deluxe HD
After reading as many reviews on dual stage focusers, I finally decided on the Feathertouch SCT MicroFocuser for my newly acquired Celestron CPC 800 Deluxe HD telescope. I wanted a dual stage focuser with a light touch instead of an electric focuser. The instructions were available online and it looked like a cinch to install. In fact, it looked so easy that I decided to make a video in front of a live audience (the camera) without a dry run. Being fairly adept at such things, I figured that any departure from simplicity would be immediately obvious. Without rushing, it took 6 minutes and 15 seconds from beginning to end. The video is a little longer since I couldn't help editorializing for a couple minutes at the end...
Telescope & Sky, Website of the Stars
There is a new computer font available to astronomers: Galaxy. Well, not really, not yet, but at some point there probably will be. The rendition of "Telescope and Sky" shown here was generated automatically by a website called "My Galaxies." Thanks to thousands of volunteers worldwide that have participated in The Galaxy Zoo's project of classifying galaxies, a set of letter-shaped galaxies has been identified that can be used to write words like "telescope and sky." It appears that so far God (or the Big Bang - take your pick) did not create a full set of upper case letter-shaped galaxies. Some letters can be considered upper or lower case, like Cc Ii Jj Oo Pp Ss Uu Vv Ww Xx Zz. As you might imagine, there are number-shaped galaxies as well. After all, mathematics is the language of the universe. Judging by the shape of the number "1," I'm guessing that particular galaxy is French, possibly in deference to early astronomer Giovanni Domenico Cassini, who among other...
August 31, 2012
This photo was taken at around 11:00 pm on August 31, 2012, a few hours before the moon was completely full at 13:58 UTC (09:58 EDT) the next morning, by which time it would be below the horizon. A blue moon originally referred to the third (not the fourth) full moon in a single season with four full moons. Those seasons were not determined by the strict astronomical alignments used today, but instead were based on ecclesiastical dates determined by the Church...
Glowing Trees a Problem
A controversy brews over the merits of breeding plants that glow like a lightning bug. Proponents say glowing trees could eventually replace electric street lights, thereby reducing pollution created by generating stations. Opponents say messing around with tree genes is dangerous and should be disallowed, since it could lead to unanticipated environmental ramifications for both plant and animal species. The unique aspect of this effort is that it is being pursued primarily by genetic hobbyists rather than corporations - at least for now. There is bound to be huge financial potential in such a patented line of plants. My opposition to the concept is primarily a concern for light pollution projected skyward. Astronomers have a difficult enough time with ever-encroaching sources of ambient light, but a planet overrun by cross-bred and mutated glowing plants (and possibly animals), especially if they are capable of emitting levels high enough to replace street lights, would effectively blind billions of dollars of investments in telescopes...
U.S. Department of State Says Spacesuits Are Weapons
Who would have guessed that you need the blessing of the U.S. Department of State if you want to make and sell spacesuits? Yep, spacesuits are classified as weapons since, by bureaucratic logic (yeah, a non sequitur), if you have the capability to attain a presence at an altitude that requires a spacesuit, you can be a strategic threat to the nation. Here is a story about a startup company in Brooklyn, NY, that found out the hard way about the spacesuit-weapon requirement. There is a rapidly growing demand for functional-yet-stylish spacesuits for safeguarding wealthy space tourists who will soon be blasting off to the top of Earth's atmosphere, where space officially begins (at about 50 miles / 80 km). BTW, I tried finding the official policy on spacesuit production on the Department of State website, but their search engine keeps failing - must be busy deleting files on the Benghazi massacre.
"Jupiter's moons are invisible to the naked eye and therefore can have no influence on the earth, and therefore would be useless, and therefore do not exist." - Francisco Sizzi (Prof. of Astronomy), dismissing Galileo's sighting of the moons. Now there is a prime example of reductio ad absurdum absurdity.
First Light for My Celestron CPC 800 Deluxe HD
It only took 32 years, but I finally have the telescope I have dreamed of having since I first peered through an 8" Celestron telescope at a meeting of the Macon Astronomy Club of Macon, Georgia, while stationed as a radar maintenance technician at Robins AFB, Georgia. In September 2012, I made the decision to purchase Celestron's high-end CPC 800 Deluxe HD telescope. It is a fine piece of work. A year and a half ago I bought the Celestron NexStar 8SE telescope as my first scope in 20 years. At the time it did not seem prudent to spend north of two thousand dollars on a telescope when I didn't know for sure whether the enthusiasm would still be there after so long. The single arm of the NexStar 8SE mount gave me pause, but after reading comments by many people on some of the astronomy forums, it seemed good enough for casual observing and entry level....
Home Planetarium from the 1969 Sears Wish Book
Here on page 544 of the Sears 1969 Christmas Wish Book is a home planetarium setup. The 7" diameter star projector had over 60 constellations. For an extra $19.99 you could buy a plastic hemispherical dome that would actually make the star projector useful. According to the U.S. Bureau of Labor Statistics' inflation calculator, the total cost of the star projector and dome ($35.98 in 1969) would equal $224.61 in 2012 money.
An Experiment with Gravity
This is pretty cool. If I owned a good receiver, I would definitely give it a try. In 1970, when this Popular Electronics article was written, a lot of Hams were still using tube receivers, so the recommendation to let the equipment warm up for several hours prior to making the fine frequency adjustments was good advice. Nowadays the warm-up time and stability of receivers should permit 30 minutes or so to suffice (even ovenized frequency references need time to stabilize when first powered up). Unless I missed it, the author does not explicitly state that the frequency change measured over time is due to gravity acting on the mass of the crystal reference, but I suspect that is his intention, since part of the experiment involves disconnecting the antenna and shielding the receiver from outside interferers. Over a lunar month (29.5 days) we experience spring tides and neap tides, which respectively maximize and minimize the vector sum of gravity and therefore should produce the greatest excursions. Maybe with a super-stable source, a larger-scale phenomenon such as a planetary syzygy could be detected (but I doubt it).
of the Night Sky
Goldpaint Photography has an amazing collection of time-lapse videos and still photos of the night sky. Shot from locations with very dark skies, these works are awe-inspiring. Living in a city environment as I do, it is hard to imagine seeing so many stars.
Squeal on CPC 800 Deluxe HD
My new CPC 800 Deluxe HD telescope has a loud squeal on the elevation axis when the clutch is loosened enough to rotate the OTA easily, but not enough to allow it to rotate under its own weight. Celestron claims this is normal. They graciously replaced my original telescope with another new one and it has the exact same squeal. I know it is not the same telescope that I returned because I had etched my initials on the bottom of the original.
I made a 35-second video demonstration of the squeal, which the Celestron agent viewed and determined it was OK.
I really like this telescope otherwise, and maybe I expect too much. Has anyone else noticed the squeal? Do you accept Celestron's claim that this is to be expected?
Telescopes from the 1969 Sears Wish Book
Here on page 545 of the Sears 1969 Christmas Wish Book is a selection of three refractor telescope models. I can remember having an el cheapo (a little Spanish lingo there) telescope as a kid living in Annapolis, Maryland, and being dumb enough to screw the sun filter into the eyepiece to look at the sun during the total solar eclipse of 1970 (I was 12 years old at the time), when the path of totality ran just 50 miles or so south of my home. For safety reasons, telescopes usually don't include solar filters that screw onto the eyepiece anymore.
In Memory of Neil Armstrong: Tranquility Base Photo
Apollo 11 astronaut Neil Armstrong died on August 25, 2012. As most Americans over the age of 30 know, Armstrong was the first human to set foot on the moon. On July 20, 1969, in fulfillment of President Kennedy's 1961 challenge to put a man on the moon and return him home safely by the end of the decade, Armstrong made a giant leap for mankind. That day in 1969 I launched a model rocket as part of Estes' commemorative effort. Last night, in his memory, I took this photo of the Tranquility Base region of the moon. Thank you, and rest in peace, Mr. Armstrong.
Find out what it's like on other planets. Learn how far away the stars are. Try a fun, space-themed project.
Constellations can help you sort the twinkling dots scattered across the night sky. Connect the stars to see what deep-sky wonders emerge.
The Sun, an average-sized, middle-aged star, formed almost 5 billion years ago from a cloud of gas and dust.
Mercury, the closest planet to the Sun, takes only 88 days to orbit the Sun.
The surface of Venus, the brightest object in the sky after the Sun and Moon, is covered with craters, mountains, volcanoes, and lava plains.
Earth is the third planet from the Sun and takes 23 hours, 56 minutes to spin on its axis one time.
The Moon, located 238,000 miles from Earth, has a temperature of 225°F during the day that drops to –243°F at night.
Rust in the soil creates the Red Planet's signature color.
Jupiter is the largest planet in our solar system, with a diameter of 89,000 miles.
Saturn, the sixth planet from the Sun, has a ring system made up of ice and rock particles, some as big as a minivan.
Uranus, the third-largest planet in the solar system, has an average temperature of –350° F and does not have a solid surface.
Neptune has 13 moons; the two largest are Triton and Nereid.
Pluto, reclassified as a dwarf planet in 2006, is located nearly 40 times as far from the Sun as Earth.
Asteroids, chunks of rock and metal that orbit the Sun, sometimes collide with the Earth. This is one possible explanation for the extinction of dinosaurs.
Comets, thought to be leftovers of the early solar system, are made of dust, rocks, organic compounds, and ice.
Observe the changing position of the Sun to determine the cardinal points.
Grab a thick blanket to lie back on and your favorite pair of binoculars. It's time to take your child on a tour of the Milky Way.
Make yourself looney by viewing craters and even making your own.
Small "Pompeii" worm worm coming out of its tube. These worms live in the hottest water of any of the vent animals and are one of the most thermally tolerant animals on Earth. Credit: Stephen Low Productions.
June 26, 2007 A discovery that radically changed our understanding of the planet we live on celebrates its 30th anniversary this month. Scientists first discovered volcanic hot vents surrounded by bizarre animals thriving in total darkness at the bottom of the Pacific Ocean in 1977 and at the end of June an international team of scientists, including many of the original explorers, will honor the landmark discovery at a special meeting and public event in the Galápagos Islands, located just south of the discovery site. | <urn:uuid:19ab9d6d-3d3f-462d-a71d-9c848ca86178> | 3.109375 | 143 | Truncated | Science & Tech. | 35.752607 |
Fungal DNA barcoding
DNA barcoding projects at Kew include harvesting sequences from fungarium specimens to populate publicly accessible sequence databases, identification of mycorrhizal fungi on plant roots, and diversity surveys of tropical macrofungi.
DNA barcoding is a systematic way to link DNA data with reference specimens to facilitate identification. There is a lack of DNA sequence data associated with the record number of specimens in Kew's fungal collection; currently only about 400 out of its ~1.25 million specimens are represented in GenBank. This neglect masks the true value of Kew's fungarium, which, being the largest and one of the most extensive collections of fungi from around the world, offers a tremendous opportunity for generating voucher-based DNA barcodes. Barcoding of Kew's fungarium will provide invaluable service to the scientific community as well as improve public access to the information contained within this extensive repository of preserved fungi.
The objectives of this project are to develop a high specimen throughput facility for DNA barcoding one of the world's premier fungal collections, to investigate options for enhancement of the fungal identification services carried out at Kew, to investigate inter- and intra-specific genetic diversity and species delimitation, and thus to enhance the utility and value of the collections held in Kew's fungarium.
Kew Mycology has been involved in fundamental work to establish an official barcode marker for fungi. A recent study compared the utility of the "universal barcode" marker, a portion of the mitochondrial cytochrome oxidase I gene (a.k.a. COI or COX1), with the most widely used genetic region for identification of fungi (nuclear ribosomal internal transcribed spacers; ITS) for DNA barcoding in mushrooms and allies (Dentinger et al. 2011). It was demonstrated that the COI gene in mushrooms and allies can be interrupted by multiple large introns at variable locations, introducing an insurmountable technical hindrance to high-throughput data generation with this locus. Moreover, COI sequences compared directly with ITS were less variable and, ultimately, unable to distinguish among closely related species of Boletus that were resolved using ITS. This work has contributed to a recently submitted, formal proposal to designate ITS as the official DNA barcode marker for Fungi (Schoch et al., submitted).
Another study has examined the use of the fungal collections at Kew to enhance taxonomic coverage in GenBank (Brock et al. 2009). DNA sequences of the ITS region were generated from a diverse set of 279 specimens, and bioinformatic analyses showed that c. 70% of the fungarium's taxonomic diversity was not yet represented in GenBank and that a further c. 10% of the sequences matched solely to "environmental samples" or fungi otherwise unidentified. It was concluded that the not-yet-sampled diversity residing in fungaria can substantially enlarge the coverage of GenBank's fully identified sequence pool, both to ameliorate the problem of environmental unknowns and to aid in the detection of truly novel fungi by molecular data.
Several additional current but independent research projects at Kew are using DNA barcoding methods. For example, DNA barcoding is being applied to understanding species diversity of British waxcaps (genus Hygrocybe) and earthtongues (family Geoglossaceae), the ecology of mycorrhizae, and coevolution of mushrooms, insects, and orchids in a tripartite mimicry system.
Key publications 2006-2011:
- Dentinger, B.T.M., Didukh, M.Y. Moncalvo, J.-M. (2011). Comparing COI and ITS as DNA barcode markers for mushrooms and allies (Agaricomycotina). PLoS ONE 6(9): e25081. doi:10.1371/journal.pone.0025081
- Bidartondo, M.I., Brock, P.M. & Doring, H. (2009). How to know unknown fungi: the role of a herbarium. New Phytol. 181: 719-724.
- Bidartondo, M.I., Ameri, G. & Doring, H. (2009). Closing the mycorrhizal DNA sequence gap. Mycological Research 113: 1025-1026.
Project Leader: Dentinger, Bryn T M
Herbarium, Library, Art, & Archives
Bryn Dentinger, Paul Cannon, Martyn Ainsworth, Heidi Döring, Begoña Aguirre-Hudson, Martin Bidartondo
Project Partners and Collaborators
Jean-Marc Moncalvo (University of Toronto/Royal Ontario Museum)
National Science and Environmental Research Council (NSERC)
Additional funding and support listed at bolnet.ca
The Royal Society | <urn:uuid:1e08b02c-29fe-46d5-bde1-e5032ab114cc> | 2.75 | 1,029 | Knowledge Article | Science & Tech. | 33.919144 |
I was reading about determinants and what they actually represent. I've been working with solutions to the Ax = b problem where A has a determinant which is zero (i.e. it has no inverse). Determinants were always just a thing you computed, and there were magic properties about them. I suddenly realise why, intuitively, inv(A) exists iff det(A) != 0, simply from the geometric interpretation of the determinant.
The determinant of an n x n matrix represents the "signed volume" of an n-dimensional cube after being multiplied by A (transformed by it). So A is a transformation matrix. If the transformation maps the cube to zero volume, and thus the determinant is zero, the image has been squashed flat in at least one dimension.
Now zero is evil because multiplying anything by it yields 0. There is no information left regarding the original value, since any value would have sufficed to obtain a zero.
Simply speaking, can you invert 0a = 0? What is a? a could be any value you like, so there is no inverse. Likewise for one of our dimensions.
So, when det(A) = 0 we have a matrix transformation that kills one of the dimensions (it maps all values onto a (hyper)plane) no matter what is transformed by it. When we write Ax = b, we know that b carries no information in some direction, and therefore we cannot determine exactly where we started: x could have had any position along the deficient dimension, since that component was mapped to zero by the transformation. Therefore you could potentially find infinitely many solutions, since one or more of those dimensions has no constraint.
Of course, when I say "in a dimension," we are strictly talking about directions in some n-dimensional basis. So suppose you have a plane in R^3, and a transformation matrix that projects all points in 3D onto that plane. In the direction of the normal to that plane we have a scale of zero (an eigenvalue, as it happens), and thus the projection matrix is a matrix with no inverse.
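A tiny numerical illustration (my own example, not the blog's): the matrix that projects R^3 onto the xy-plane has determinant zero, and two different starting points collapse onto the same image, so no inverse can tell you which one you started from.

public class SingularDemo {
    // Determinant of a 3x3 matrix by cofactor expansion along the first row.
    static double det3(double[][] m) {
        return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
             - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
             + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    }

    static double[] apply(double[][] m, double[] x) {
        double[] b = new double[3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                b[i] += m[i][j] * x[j];
        return b;
    }

    public static void main(String[] args) {
        // Projection onto the xy-plane: the z direction is scaled by 0.
        double[][] P = { {1, 0, 0}, {0, 1, 0}, {0, 0, 0} };
        System.out.println("det(P) = " + det3(P)); // 0.0 -> no inverse

        double[] b1 = apply(P, new double[] {2, 3, 7});
        double[] b2 = apply(P, new double[] {2, 3, -4});
        // Both inputs land on (2, 3, 0): the z component is unrecoverable.
        System.out.println(java.util.Arrays.toString(b1));
        System.out.println(java.util.Arrays.toString(b2));
    }
}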
If you’re interested and didn’t quite get what I was talking about, see http://www.youtube.com/watch?v=n-S63_goDFg | <urn:uuid:0dff6041-6c4c-430f-b744-9bc76f52f8e4> | 3.15625 | 477 | Personal Blog | Science & Tech. | 56.609977 |
This is clear evidence in support of the theory of continental drip. Initially all land lies at the North Pole, which is therefore up, as we all know. Then the land slowly drips down, which is why all continents are pear-shaped with little driplets at the bottom (like Sri Lanka and Madagascar).
When all the land has dripped to the bottom, north becomes south (thus explaining the periodic shifts in magnetic poles as well) and it all drips back again.
Is the ozone layer comprised solely of oxygen?
No, it's mostly ordinary air (about 78% nitrogen, 21% oxygen, 1% argon, plus traces of other gases). The ozone present is created by the action of sunlight on the oxygen. The wavelengths of sunlight that induce this chemistry are absorbed in the stratosphere, so they don't reach the surface. That's why the "ozone layer" is so high up.

For ozone to be formed in the lower atmosphere, some air pollution needs to be present. So human activities both create ozone in the lower atmosphere, where we don't want it, and destroy ozone in the upper atmosphere, where we do want it.
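For reference, the photochemistry behind "sunlight acting on the oxygen" above is the first half of the classic Chapman mechanism (standard textbook chemistry, not part of the original answer; M is any third body, such as N2, that carries off the excess energy):

\mathrm{O_2} + h\nu \rightarrow 2\,\mathrm{O} \qquad (\lambda < 242\ \mathrm{nm})

\mathrm{O} + \mathrm{O_2} + \mathrm{M} \rightarrow \mathrm{O_3} + \mathrm{M}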
The ozone layer or ozonosphere is the region in the upper atmosphere between 6 and 30 miles (10-50 km) altitude where there are appreciable concentrations of ozone. So it belongs mainly to the stratosphere.

The average composition of the low atmosphere (up to about 15 km) is: nitrogen, oxygen, argon, carbon dioxide, ozone, methane, nitric oxide, hydrogen, nitrous oxide, carbon monoxide, and water vapour.

These gases are also present in the ozone layer, but the increased concentration of ozone there drives a further series of chemical processes. So one also finds atomic oxygen and hydrogen, hydroxyl and methyl radicals, hydrogen peroxide, and water vapour.

One must also know that even though the ozone layer is about 40 km (25 miles) thick, the atmosphere there is very tenuous, and the total amount of ozone, compared to more abundant atmospheric gases, is quite small. It is mentioned (Britannica) that if all of the atmospheric ozone in a vertical column through the entire atmosphere were compressed to sea-level pressure, it would form a layer only a few tenths of a centimeter thick.

And since it is so necessary... one must take good care of it!

And thanks for asking NEWTON!
(Dr. Mabel Rodrigues)
Update: June 2012 | <urn:uuid:2efa7ddc-62ce-42fd-babe-88055dadbd1d> | 3.96875 | 456 | Audio Transcript | Science & Tech. | 42.43939 |
And then may come some proper planet names. Exoplanets are presently referred to by their host star plus a letter (b, c, d, ...) depending on how many other planets are present in the system, and the stars themselves are typically just cataloged by the first syllable or two of the constellation that they appear in, with some digits before or after; hence those R2D2-like designations.
The International Astronomical Union (IAU), which officially names heavenly bodies, has resisted naming exoplanets, for now. "But once we find some really important, Earth-like ones, the IAU will probably be forced to make a naming decision," Cash says. And with 50 billion-plus exoplanets waiting for names, who knows? Maybe our grandchildren will grow up learning about real planets with names dreamed up by George Lucas.
Mass of fire, Speed of sight
Q: In my science class we have learned about mass, and I'm just wondering, does fire have any mass? It has stumped all the teachers I asked (even my science teacher), and I would really like to know. (Jimmy, Sioux City, Iowa)
A: Fire is a series of actions and changes that produces a result: a process. In fact, it is an oxidation process (called combustion or burning) that gives out heat and light energy as well as glowing gas and a small amount of plasma.

The flame color depends on the temperature of the hot burning gases and the material that burns. Courtesy of the US Forest Service and Wikipedia

So, talking about fire's mass is like talking about the mass of digestion, or boiling, or getting a driver's license, although it certainly makes us think about what fire is. But a process doesn't have mass.

Flames are another matter. They are burning gases, and gases certainly do have mass.
Flame colors as chemical indicators by Rod Nave, HyperPhysics
Q: I heard that at 100 mph what seems to be in front of you is actually behind you, because of how fast your eyes receive images. Is this true, and if so, how? What speed can your eyes keep up with? (Joe, Virginia Beach, Virginia)

A: Our eyes see faster than you think they do. The entire process, from light entering the eye to the brain perceiving the image, takes a mere 50 milliseconds (one sixth of an eye blink). See WonderQuest's speed of human sight.
A pilot weaves among pylons in the Red Bull Air Race at Kemble Airfield, England. Courtesy of Red Bull Air Race and Wikipedia.
Fifty milliseconds isn't much time, but it is time. So no matter how slowly you go, you will travel some distance during that interval. At 100 mph (160 km/h), you'll cover about 7 feet (2 m). At any speed, your side peripheral vision will pick up objects that are actually behind you. How much difference does that make? Not much. In fact, not much even going 300 mph (480 km/h).
"We reach speeds up to 300 mph (when entering the track)," says Kirby Chambliss, aerobatic champion and winner of the 2006 Red Bull Air Race. "The only time I have trouble seeing the pylons is when I am pulling 11 to 12 G due to tunnel vision. If you are on a straight part of the track it is no problem to see the pylons even at 300 mph." Video of the race.
"...if we focus or try to see the pylons as we go through them [at speeds of about 200 mph (320 km/h)], they would actually be behind us by the time our brain could return a control input," says Mike Mangold, 2005 RBAR world champion. "As I approach a gate, I position my aircraft to fly through what I perceive to be the center of the gate. I do not see the pylons as I go in between them because I am already looking ahead to the next computation. If I look too far ahead, then my ability to accurately position my airplane degrades." Mike looks about one second ahead, which is approximately 300 feet (90 m).
Kirby and Mike travel up to about 22 feet (7 m) during those 50 milliseconds, and can judge distances ahead well enough that a discrepancy of that size doesn't matter. What speed your eyes can keep up with depends on how accurately you need to know your position.
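The arithmetic behind those figures, for anyone who wants to check it (assuming the 50-millisecond visual latency quoted above):

public class LatencyDistance {
    public static void main(String[] args) {
        double latencySeconds = 0.050; // visual latency from the answer above
        double[] speedsMph = {100, 200, 300};
        for (double mph : speedsMph) {
            double feetPerSecond = mph * 5280.0 / 3600.0;
            double feet = feetPerSecond * latencySeconds;
            System.out.printf("%3.0f mph -> %.1f ft (%.1f m) per 50 ms%n",
                    mph, feet, feet * 0.3048);
        }
        // 100 mph -> 7.3 ft; 200 mph -> 14.7 ft; 300 mph -> 22.0 ft
    }
}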
Red Bull Edge, Kirby Chambliss, 5X US Acrobat champion
To see, brain assembles sketch images eyes feed it, University of California, Berkeley, UniSci
Colors are composed by brain, not eyes, Cornell University
(Answered April 18, 2006) | <urn:uuid:5a2c1c21-e57d-45ef-9c11-21211860f39b> | 3.265625 | 809 | Q&A Forum | Science & Tech. | 70.110576 |
Are humans really so tiny and insignificant that their activity makes no difference on climate? Some people say so but, if you are of this opinion, you might change your mind once you read how much human activities affect the Earth's crust. For instance, the amount of rock and soil we move every year would fill the whole Grand Canyon in about 50 years.
An excerpt from Ugo Bardi's "Depleting the Earth," a book being prepared in collaboration with the Club of Rome.
The amount of minerals extracted nowadays is immense, and it becomes even larger if we count as "mining" the consumption of fertile agricultural soil by erosion. It is estimated that about 4 billion tons of agricultural soil are eroded in the United States alone and dumped into the oceans every year (1). For the whole world, the total has been estimated at 75 billion tons per year by Pimentel et al. (2) and at 120 billion tons per year by Hooke (3). These amounts dwarf those produced by natural erosion, which are at least one order of magnitude smaller.
To this amount related to agriculture we must add the rock and sand moved by the construction industry. From the USGS data, we find that the worldwide production of sand and gravel may exceed 15 billion tons per year. The total world production of concrete in 2008 was 2.8 billion tons, about 450 kg per person on average; China alone produces more than a billion tons per year.
According to Bruce Wilkinson (4) we can visualize the total amount of rock and soil yearly moved by humans considering that these amounts are “ca. 18,000 times that of the 1883 Krakatoa eruption in Indonesia, ca. 500 times the volume of the Bishop Tuff in California and about 2 times the volume of Mount Fuji in Japan. At these rates, this amount of material would fill the Grand Canyon of Arizona in ca. 50 years.”
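As a sanity check on Wilkinson's figure, here is a back-of-envelope sketch in Python; the Grand Canyon volume (about 4,200 cubic km) and the bulk density of moved material (about 2 tonnes per cubic metre) are my assumptions, not numbers from the text:

    canyon_km3 = 4200   # assumed volume of the Grand Canyon, km^3
    density = 2.0       # assumed bulk density of rock and soil, t/m^3

    for label, gt_per_year in (("Pimentel et al.", 75), ("Hooke", 120)):
        km3_per_year = gt_per_year * 1e9 / density / 1e9  # tonnes -> m^3 -> km^3
        years = canyon_km3 / km3_per_year
        print(f"{label}: {km3_per_year:.0f} km^3/yr, canyon filled in {years:.0f} years")

With the construction industry's sand, gravel and concrete added on top of soil erosion, the total moves into the range of Wilkinson's "ca. 50 years."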
1. John Baez, Azimuth, 29 Jun 2011 (accessed 12 Aug 2011)
2. Pimentel, D., C. Harvey, P. Resosudarmo, K. Sinclair, D. Kurz, M. McNair, S. Crist, L. Shpritz, L. Fitton, R. Saffouri, and R. Blair, 1995, Science, New Series, Vol. 267, pp. 1117-1123
3. Hooke, R.L.B., 2000, "On the history of humans as geomorphic agents", Geology, v. 28, pp. 843-846
4. Wilkinson, B.H., 2005, "Humans as geologic agents: a deep-time perspective", Geology, v. 33, pp. 161-164
How is Technology Used
Weather dominates our lives. It controls how we work, live and play, and accounts for nearly 20 percent of the information presented in local newscasts. Many of us plan our daily activities by looking at a forecast. Will it snow? Will it be humid? What are the chances of rain for the weekend?
These are the questions people want answered about their climate, but where does one go to explore the “weather” in the aquatic environment of fish, oysters or bay grasses?
The answer: Eyes on the Bay. http://www.eyesonthebay.net.
Water quality mapping technologies improve the spatial resolution of this data, enabling DNR to determine the extent and impact of harmful conditions such as low dissolved oxygen or algal blooms. New technologies also allow timely, relevant data to be presented in a compelling format on the EOTB website. DNR is also partnering with NASA to provide satellite imagery of the Chesapeake Bay watershed. These images provide valuable information and data that scientists can use to understand why an event such as an algal bloom may have occurred. The orbit of the NASA satellite provides imagery for studying environmental issues across the globe, such as cloud cover, vegetation on land, and water temperatures. (www.aqua.nasa.gov)
More Than Just Data
Courtesy of NASA/GSFC/MITI/ERSDAC/JAROS, | <urn:uuid:e9c3675e-2990-437b-9081-9b6ba2bfddf7> | 3.359375 | 296 | Knowledge Article | Science & Tech. | 46.612547 |
Why doesn't Python have a "with" statement like some other languages?
Starting with Python 2.5, Python does in fact have a with statement, which is used to control execution of code in a specific context:
with manager() as ctx:
    ... do something with ctx ...
Here, methods on the object returned by the call to manager will be called before and after the with-body has been executed, no matter what happens inside the statement. This is primarily used for resource management. For example, the following statement guarantees that the file is closed after the file has been processed:
with open(filename) as f:
    process(f.read())
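Concretely, the hooks involved are __enter__ and __exit__. A minimal sketch (the class name and the prints are illustrative only, not part of any standard API):

    class Manager:
        def __enter__(self):
            print("before the with-body")   # acquire a resource here
            return self                     # bound to the name after "as"
        def __exit__(self, exc_type, exc_value, traceback):
            print("after the with-body")    # release the resource, even on error
            return False                    # don't suppress exceptions

    with Manager() as ctx:
        print("inside the with-body")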
However, this article is about the kind of with-statement that’s available in Object Pascal and several other languages, where it is basically used to add the attributes of a given object to the current scope. In pseudo-Python, it could look something like:
with-object obj:
    attrib1 = value
    attrib2 = value
where attrib1 and attrib2 would then refer to object attributes, rather than local variables with the same names.
Unfortunately, there's no obvious way to implement such a construct for Python, since it would be ambiguous.
Some languages, such as Object Pascal, Delphi, and C++, use static types. So it is possible to know, in an unambiguous way, what member is being assigned in a “with” clause. This is the main point — the compiler always knows the scope of every variable at compile time.
Python uses dynamic types. It is impossible to know in advance which attribute will be referenced at runtime. Member attributes may be added or removed from objects on the fly. This would make it impossible to know, from a simple reading, what attribute is being referenced — a local one, a global one, or a member attribute.
For instance, take the following incomplete snippet:
def foo(a):
    with-object a:
        print x
The snippet assumes that “a” must have a member attribute called “x”. However, there is nothing in Python that guarantees that. What should happen if “a” is, let us say, an integer? And if I have a global variable named “x”, will it end up being used inside the with block? As you see, the dynamic nature of Python makes such choices much harder.
The primary benefit of “with-object” and similar language features (reduction of code volume) can, however, easily be achieved in Python by assignment. Instead of:
function(args).dict[key][index].a = 21
function(args).dict[key][index].b = 42
function(args).dict[key][index].c = 63
write:

ref = function(args).dict[key][index]
ref.a = 21
ref.b = 42
ref.c = 63
This also has the side-effect of increasing execution speed because name bindings are resolved at run-time in Python, and the second version only needs to perform the resolution once. If the referenced object does not have a, b and c attributes, of course, the end result is still a run-time AttributeError exception. | <urn:uuid:063824f6-f790-4f21-8a00-a555bb0ef00c> | 3.515625 | 674 | Q&A Forum | Software Dev. | 55.437702 |
In a study published in Environmental Research Letters, Cohen et al. (2012) note that over the last four decades Arctic temperatures have warmed at nearly double the global rate, citing Solomon et al. (2007) and Screen and Simmonds (2010); and they state that “coupled climate models attribute much of this warming to rapid increases in greenhouse gases and project the strongest warming across the extratropical Northern Hemisphere during boreal winter due to ‘winter (or Arctic) amplification’,” citing Holland and Bitz (2003), Hansen and Nazarenko (2004), Alexeev et al. (2005) and Langen and Alexeev (2007).
However, they say that “recent trends in observed Northern Hemisphere winter surface temperatures diverge from these projections,” noting that “while the planet has steadily warmed, Northern Hemisphere winters have recently grown more extreme across the major industrialized centers,” and reporting that “record cold snaps and heavy snowfall events across the United States, Europe and East Asia garnered much public attention during the winters of 2009/10 and 2010/11 (Blunden et al., 2011; Cohen et al., 2010),” with the latter set of researchers suggesting that “the occurrence of more severe Northern Hemisphere winter weather is a two-decade-long trend starting around 1988.”
So what’s going on here?
Cohen et al. say that “whether the recent colder winters are a consequence of internal variability or a response to changes in boundary forcings resulting from climate change remains an open question.” But like most scientists who love to resolve dilemmas, they go on to propose their answer to the puzzle, suggesting that “summer and autumn warming trends are concurrent with increases in high-latitude moisture and an increase in Eurasian snow cover, which dynamically induces large-scale wintertime cooling.”
But, again, who knows? The only thing that is certain, as Cohen et al. describe it, is that “traditional radiative greenhouse gas theory and coupled climate models forced by increasing greenhouse gases alone cannot account for this seasonal asymmetry.” And so we have yet another reason why so many scientists are so skeptical about the ability of even the most sophisticated of today’s climate models to adequately portray reality. | <urn:uuid:8a7ad48a-eaea-497f-b11d-8051904f0d25> | 3.234375 | 481 | Personal Blog | Science & Tech. | 30.285211 |
A stylesheet is a set of rules for the transformation of XML. The stylesheet syntax allows one to intermix a sequence of xml-micros, xml-macros, and top-level Scheme expressions.
The grammar for stylesheet is given below:

stylesheet-form ::= (stylesheet body-item*)
body-item ::= xml-micro-form | xml-macro-form | top-level-scheme-expression
A stylesheet form evaluates to a stylesheet object.
The semantics of top-level definitions within a stylesheet is identical to that of PLT Scheme's units.
An xml-micro performs a single transformation of its argument. The result of this transformation is not further "expanded".
The grammar for an xml-micro is given below:

micro-form ::= (xml-micro trigger-tag expression)
trigger-tag ::= element-tag | *text* | *data*
The expression in an xml-micro must evaluate to a function that takes an XML node as its argument and (generally) returns a node or nodeset.
An xml-macro performs a transformation of its argument; the result of this transformation is further "expanded" until no further transformations are possible.
The grammar for an xml-macro is given below:

macro-form ::= (xml-macro trigger-tag expression)
trigger-tag ::= element-tag | *text* | *data*
The expression in an xml-macro must evaluate to a function that takes an XML node as its argument and returns a node or nodeset.
No further transformations of a node are possible when no more xml-micros or xml-macros are triggered by that node under the stylesheet being applied.
stylesheet->expander :: stylesheet -> (function node -> node)
This function converts a stylesheet to an expander function. The returned function may be applied to an XML node to apply the stylesheet's transformations.
This form is used to combine multiple stylesheets. The result of evaluating a compound-stylesheet is a new stylesheet object.
The grammar for a compound-stylesheet is given below:

compound-ss-form ::= (compound-stylesheet expression+)
Each expression must evaluate to a stylesheet object.
The example below is a complete working stylesheet which translates a collection of poetry into HTML. The 'micro' for the poem element formats a poem into HTML. The 'macro' for the book element creates a skeletal HTML document, with a new contents element which contains both a table of contents element and the poems to be formatted. The 'micro' for the contents element formats the table of contents and uses xml-expand to invoke the expander on each of the poems. poetry->html is bound to the result of applying stylesheet->expander to the stylesheet, producing a function which takes a single argument: the poetry book element.
(define poetry->html
  (stylesheet->expander
    (stylesheet
      (define-element toc)
      (define-element contents)
      (xml-micro poem
        (lambda (x)
          (xml-match x
            [(poem title: ,t poet: ,a tag: ,m
                   (stanza (line ,l1) (line ,l) ...) ...)
             (h4:div (h4:p)
                     (h4:a h4:name: m)
                     (h4:strong t) (h4:br) (h4:em a)
                     (list (h4:p) l1 (list (h4:br) l) ...) ...)])))
      (xml-micro contents
        (lambda (x)
          (xml-match x
            [(contents (toc (poem title: ,t poet: ,a tag: ,m . ,rest) ...) ,p* ...)
             (h4:div (h4:p) "Table of Contents:"
                     (h4:ul (h4:li (h4:a h4:href: (string-append "#" m) t)) ...)
                     (xml-expand p*) ...)])))
      (xml-macro book
        (lambda (x)
          (xml-match x
            [(book title: ,bt ,p* ...)
             (h4:html (h4:head (h4:title bt))
                      (h4:body (h4:h1 bt)
                               (contents (toc p* ...) p* ...)))]))))))
One way to decrease your use of fossil fuels is to heat your water via solar radiation. A solar water heater runs cold water through dark, radiation-absorbing tubing; systems can be as simple as black PVC pipe on your roof. Assume one such system absorbs at 45% efficiency in a location where the average solar power hitting the panels is 210 W/m^2. The water begins at a temperature of 16.0 °C and needs to be heated to 61.0 °C. If the area of the solar panels is 32.0 m^2, what volume of water can be heated in one hour?
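One way to work the problem, sketched in Python; the specific heat of water (4186 J/(kg·K)) and its density (1000 kg/m^3) are standard values assumed here, since the problem does not state them:

    efficiency = 0.45
    flux = 210.0          # W/m^2, average solar power per unit area
    area = 32.0           # m^2
    dT = 61.0 - 16.0      # K, required temperature rise
    c = 4186.0            # J/(kg*K), specific heat of water (assumed)
    rho = 1000.0          # kg/m^3, density of water (assumed)

    energy = efficiency * flux * area * 3600   # joules absorbed in one hour
    mass = energy / (c * dT)                   # from Q = m c dT
    print(f"{energy:.3e} J heats {mass:.1f} kg = {mass / rho * 1000:.1f} L of water")

That works out to roughly 58 litres (about 0.058 m^3) per hour.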
Full Lab Manual
Introduction & Goals
Chemistry & Background
In Your Write-up
In this two-week experiment you will explore the behavior of monoprotic and polyprotic acids. In week one, you will investigate the acid-base properties of acetic acid, CH3COOH, and phosphoric acid, H3PO4. By titrating, you will examine the acid and conjugate-base species present across the pH scale and the composition of buffers at different pH values. In week two, you will use your knowledge and skills to identify unknown solutions as acids, bases, or buffers and determine their identity from their pKa, pKb, or pH values.
How do you expect pH to change with added titrant?
What chemical species are present at what pH?
What is the buffer region?
How does a buffer work?
Which equilibrium is most important in each pH region?
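For the buffer questions above, the Henderson-Hasselbalch relation, pH = pKa + log10([A-]/[HA]), is the standard working tool. A small illustrative sketch in Python (the pKa of 4.76 for acetic acid is an assumed textbook value, not a number from this manual):

    import math

    def buffer_pH(pKa, base, acid):
        # Henderson-Hasselbalch: pH from the conjugate base/acid ratio
        return pKa + math.log10(base / acid)

    print(buffer_pH(4.76, 0.10, 0.10))  # equal concentrations: pH = pKa = 4.76
    print(buffer_pH(4.76, 1.00, 0.10))  # 10x more base: pH rises by one unit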
Trustees of Dartmouth College, Copyright 1997-2003
Why do I have to catch an exception when I use System.in.read()?
System.in is an InputStream, and like any InputStream, an I/O error may occur when you read from it. For example, the Java program may have been run as a background process with no standard input, causing an error during a read attempt.
InputStream.read() throws an IOException if an error occurs while reading from the stream. This exception is not derived from RuntimeException, so it is a checked exception that must be caught (or declared in a throws clause) or your program will not compile.
Source apportionment and loads (riverine and direct) of nutrients to coastal waters
Assessment made on 01 May 2004
- Mar 26, 2013 - Nutrients in transitional, coastal and marine waters (CSI 021) - Assessment published Mar 2013
- Jul 05, 2011 - Nutrients in transitional, coastal and marine waters (CSI 021) - Assessment published Jul 2011
- Nov 29, 2005 - Nutrients in transitional, coastal and marine waters (CSI 021) - Assessment published Nov 2005
- Jul 27, 2004 - Nitrate in groundwater
- Jul 27, 2004 - Frequency of low bottom oxygen concentrations in coastal and marine waters
- Jul 26, 2004 - Phosphorus in lakes - Eutrophication indicators in lakes
Classification: Water (Primary theme)
Coasts and seas
- WEU 007
Policy issue: Are discharges of organic substances and nutrients decreasing?
Discharges of both phosphorus and nitrogen from all quantified sources to the North Sea and Baltic Sea have decreased since the 1980s.
Agriculture is now the major source of nitrogen and phosphorus discharges into the North Sea, whereas for the Baltic Sea agriculture is the main source of nitrogen pollution and urban wastewater the main source of phosphorus pollution.
Data for the Black Sea are less comprehensive than for the Baltic and North Sea, but indicate that riverine discharges are the largest sources of nitrogen and phosphorus.
Comprehensive data are also not available for the Mediterranean, but all coastal cities discharge their (treated or untreated) sewage to the sea and only 4 % have tertiary treatment, indicating that the nutrient input from this source may be high. Agriculture is also intensive in the region, and 80 rivers have been identified as contributing significantly to the pollution of the Mediterranean (EEA 1999).
There were significant reductions in phosphorus discharges to the North Sea from urban wastewater treatment (UWWT) works, industry and other sources between 1985 and 2000. The reduction from agriculture has been less marked, and agriculture was also the largest source of discharges in 2000. Nitrogen discharges to the North Sea decreased significantly from all four sources between 1985 and 2000, with agriculture the major source in 2000. However, some countries, such as Norway, Sweden and the UK, reported increases in riverine discharges (and, for the UK, direct discharges) of nitrogen to the North Sea between 1985 and 2000, whereas the other states reported reductions (North Sea Progress Report 2002).

Even though the data for the Baltic Sea are less recent (late 1980s to 1995), they give a picture similar to that for the North Sea, with significant reductions in discharges of nitrogen and phosphorus from agriculture, UWWT, industry and aquaculture. In 1995, the major sources of phosphorus and nitrogen to the Baltic Sea were UWWT and agriculture, respectively. Regarding point sources, the 50 % HELCOM reduction target was achieved for phosphorus by almost all the Baltic Sea countries, while most countries did not reach the target for nitrogen (HELCOM 2000, http://www.vyh.fi/eng/orginfo/publica/electro/fe524/fe524.htm).

Information relating to the Black Sea is less comprehensive in terms of source apportionment and how loads have changed with time. In 1996, the most significant sources of phosphorus and nitrogen to the Black Sea were riverine inputs. The major rivers in the Black Sea catchment are the Danube, Dnieper, Don, Southern Bug and Kuban, covering an area of around 2 million km2 and receiving wastewater from more than 100 million people, heavy industry and agricultural areas. The Danube contributes about 65 % of the total nitrogen and phosphorus discharges from all sources.
Download detailed information and factsheets | <urn:uuid:224284b9-74ed-4488-895d-e3777f7c618e> | 2.6875 | 760 | Structured Data | Science & Tech. | 31.357899 |
Acids and Alkalis
The process of making a solid come from a solution is called precipitation. The solid itself is called a precipitate.
An insoluble salt (one that doesn't dissolve) can be made by reacting the appropriate soluble salt with an acid, an alkali or another salt.
For examples of precipitation, see the tests for ions, including precipitation using the alkali sodium hydroxide, the barium chloride test for sulfate ions, and the silver nitrate test for bromide, chloride and iodide ions.
Precipitation reactions can be used to remove ions from water. Sodium carbonate can be used to precipitate calcium carbonate. This is a way of making hard water into soft water.
calcium chloride + sodium carbonate → calcium carbonate + sodium chloride

CaCl2(aq) + Na2CO3(aq) → CaCO3(s) + 2NaCl(aq)

The ionic equation is Ca2+(aq) + CO32-(aq) → CaCO3(s)
A precipitate can be separated from the solution by filtration.
The precipitate can then be left somewhere warm to dry.
Copyright © 2012 Dr. Colin France. All Rights Reserved. | <urn:uuid:a7c397d1-df1a-4c30-aae0-902d1e825a00> | 4.34375 | 294 | Knowledge Article | Science & Tech. | 40.097415 |
The W3C has started a Compound Document Formats (CDF) Working Group. The CDF Working Group grew out of a Web Applications and Compound Documents Workshop to explore issues around standardization for compound documents and specification of the behavior of some format combinations, addressing the need for an extensible and interoperable Web.
The CDF Working Group focuses on combinations of specific namespace vocabularies that will become CDF profiles, such as a rich media profile for mobile devices that might include XHTML and SVG Tiny. Other examples include combinations like XHTML and XForms, or XHTML and a subset of VoiceXML using the X+V profile.
A namespace uniquely identifies a set of names so there is no ambiguity when objects have different origins but the same names are mixed together. An XML namespace is a collection of element types and attribute names, which are uniquely identified by the name of the unique XML namespace of which they are a part. In an XML document, any element type or attribute name can thus have a two-part name that consists of the namespace name and the element or attribute name.
A Compound Document by Inclusion (CDI) combines XML markup from several namespaces into a single physical document. A number of standards exist, and continue to be developed, that are descriptions of XML markup within a single namespace; XHTML, XForms, VoiceXML, and MathML are some prominent examples of such standards, each having its own namespace. Each of these specifications focuses on a single aspect of rich-content development. For example, XForms focuses on data collection and submission, VoiceXML on speech, and MathML on the display of mathematical notations.
To authors of content, each of these many standards is useful and important. However, it is the combination of elements from any number of these standards that lends true flexibility and power to rich document creation. A document may be created to be displayed within a Web browser, and to include an input form, a scalable graphic, and a bit of mathematical notation -- all on the same page. XHTML, XForms, SVG, and MathML, respectively, serve these needs, and therefore you can combine them into a single multi-namespace document.
Consider this simple example: a compound document combining XHTML and MathML. The namespace declarations in Listing 1 are marked with appended comments that match the numbered descriptions that follow:
Listing 1. A simple compound document
<?xml version="1.0" encoding="iso-8859-1"?>
<xhtml:html xmlns:xhtml="http://www.w3.org/1999/xhtml"><!-- 1 -->
  <xhtml:body>
    <xhtml:h1>A Compound document</xhtml:h1>
    <xhtml:p>A simple formula using MathML in XHTML.</xhtml:p>
    <mathml:math xmlns:mathml="http://www.w3.org/1998/Math/MathML"><!-- 2 -->
      <mathml:mrow>
        <mathml:msqrt>
          <mathml:mn>49</mathml:mn>
        </mathml:msqrt>
        <mathml:mo>=</mathml:mo>
        <mathml:mn>7</mathml:mn>
      </mathml:mrow>
    </mathml:math>
  </xhtml:body>
</xhtml:html>
- XHTML Namespace declaration: Each XHTML element in Listing 1 is qualified with the xhtml prefix, which is bound to the XHTML namespace URI.
- MathML Namespace declaration: Each MathML element in Listing 1 is qualified with the mathml prefix, which is bound to the MathML namespace URI.
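To see the two namespaces at work programmatically, here is a small sketch (mine, not from the article) that parses an abbreviated Listing 1 with Python's standard xml.etree.ElementTree; the parser reports each element's tag as {namespace-URI}localname, which is exactly the two-part naming described above:

    import xml.etree.ElementTree as ET

    # Listing 1, abbreviated: an XHTML document embedding a MathML fragment.
    doc = """<xhtml:html xmlns:xhtml="http://www.w3.org/1999/xhtml">
      <xhtml:body>
        <mathml:math xmlns:mathml="http://www.w3.org/1998/Math/MathML">
          <mathml:mn>49</mathml:mn>
        </mathml:math>
      </xhtml:body>
    </xhtml:html>"""

    for element in ET.fromstring(doc).iter():
        print(element.tag)   # e.g. {http://www.w3.org/1999/xhtml}body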
Figure 1 is a rendered version of the simple compound document in Listing 1 which combines XHTML and MathML for rich content.
Figure 1. Rendered simple compound document
Compound documents can be composed of a single document that contains multiple namespaces, as in Listing 1. This is a Compound Document by Inclusion (CDI). However, a compound document can also be composed over several documents, in which one document of a particular namespace references another, separate document of a different namespace. For example, a root or top-most document might contain XHTML content for defining and formatting a page. This parent XHTML document can reference another document, of another namespace, through the use of the XHTML <object> tag. You can repeat this for as many documents as necessary. The root document plus this collection of separate, referenced documents is a Compound Document by Reference (CDR). Figure 2 is a simple CDR document in which an XHTML root document contains a reference to a separate SVG child document that has markup for three colored circles.
Figure 2. Compound Document by Reference
And of course, a compound document can be a hybrid of both a CDI and a CDR.
Model Driven Development (MDD) is an approach and set of techniques for developing better software faster. The Object Management Group (OMG) has labeled this notion of MDD as Model Driven Architecture (MDA), and has developed a set of standards to assist in MDD. The process begins with the definition of business logic early in the requirements phase of software development. This business logic might be modeled in the Unified Modeling Language (UML), based upon the abstraction of the business logic. One or more resulting models form the basis for generating code to produce an implementation.
Some reasons to use MDD are:
- Speeds up the development process
- Business logic is independent from the platform
- If business logic changes, the model is changed
- Expertise is applied to the business model, not the software
- Decreases the costs of software development
You can represent models in many forms, such as UML, XML Model Interchange, Essential Meta Object Facility, and W3C XML Schema.
Eclipse is an open source tool integration platform, most often used as a Java development environment. As a tool integration platform, Eclipse has a varied and ever-growing set of editors and utilities, one of which is the Eclipse Modeling Framework (EMF).
EMF is a tools sub-project of the Eclipse Open Source Project. EMF is a modeling and data integration framework, as well as a code generation framework for building plug-ins for Eclipse. EMF uses ECore, a meta-language that describes models and provides runtime support for them; ECore is based upon a subset of the OMG Meta Object Facility 2.0 (MOF) called Essential MOF (EMOF). EMF models are persisted as XML Model Interchange (XMI) documents. EMF provides viewing and command-based editing of the model, as well as a basic editor for manipulating and serializing instance documents based on an EMF model. EMF models can be created from annotated Java code, XML documents, or UML models.
EMF serves as the backbone for MDD in Eclipse.
You can create CDRs and edit them with existing XML editors, since the references to other documents use generic reference mechanisms such as the <xhtml:object> tag. However, editors for CDIs require knowledge of more than just how to validate instances of the separate documents they reference in order to offer a directed editing experience. An editor that supports compound documents must have specific information about which tags from one namespace can be inserted as children of tags from another namespace. These cross-namespace relationships can be both bidirectional and recursive. A compound document profile defines which tags can be inserted under which other tags for a set of mixed namespaces. Several explicit compound document profiles exist today, such as XHTML/X+V (a subset of VoiceXML) and XHTML/MathML/SVG.
To provide a concrete example, consider an XHTML+XForms compound document profile that must define which XForms tags can exist as child tags for specific XHTML tags and vice versa. One requirement for this profile is that an xhtml:div element can have as a child an xforms:repeat element, which can have as a child another xhtml:div element, which can in turn have as a child an xforms:input element, as shown in Listing 2.
Listing 2. XHTML and XForms nested tags
<xhtml:div>
  <xforms:repeat model="model_PostalAddress"
                 id="repeat_AddressLine_model_PostalAddress"
                 nodeset="/hrxml:PostalAddress/hrxml:DeliveryAddress/hrxml:AddressLine">
    <xhtml:div>
      <xforms:input ref="." model="model_PostalAddress">
        <xforms:label>Address Line</xforms:label>
      </xforms:input>
    </xhtml:div>
  </xforms:repeat>
</xhtml:div>
This nesting of tags needs to be explicitly defined with mechanisms beyond xsd:anyAttributes, because validating and directed editors, and user-agent implementers who write rendering code for browsers, need more explicit detail to unambiguously validate and guide document construction, and to build the processing and rendering engines.
When considering compound document creation and editing tooling, keep in mind that you need to accommodate two users: the compound document schema architect and the instance document creator.
The compound document schema architect wants to efficiently express the definition for how to combine specific namespace vocabularies using defined profiles. This is the person who builds the implementation of a compound document profile.
The instance document creator wants to leverage the profile, but has no interest in building or editing profiles. The instance document creator simply wants to create well-formed and valid instances of documents that adhere to a profile, preferably with a directed editor and correct-by-construction experience. In this experience, restricted choices are offered to the editor for valid context-sensitive choices according to the profile.
EMF as an open modeling technology is a natural fit for defining compound document profiles. You can then use the EMF ECore models to create Eclipse-based editors for document creation and serialization.
The model-driven approach to compound document tooling begins with Platform Independent Models (PIMs) of each functional namespace (XHTML, XForms, SVG, and so on) that will be included in a profile. A PIM is a high-level abstraction that does not consider implementation specifics, but rather expresses only the intent of what is being modeled. PIMs can take many forms, such as W3C XML Schema, RELAX NG, Schematron, MOF, or UML models. Once the PIM models for all the profile schemas are created, they can be transformed to Platform Specific Models (PSMs), all of the same normative type. For example, the PSMs might all be XML Schemas, UML models, or EMF ECore models. Next, the profile is realized by creating cross-model references between the models, representing the places where tags from one namespace may be referenced by, or inserted under, another. For example, a profile for XHTML+XForms would need to define that an <xforms:model> tag can be inserted under the <xhtml:head> tag. Figure 3 shows this PSM XHTML+XForms profile annotation as a UML aggregation relationship between the head class from the XHTML PIM model and the model class from the XForms PIM model.
Figure 3. PSM cross-model relationship in UML
You can transform the PSMs into EMF ECore models, which can be created from UML models or XML Schemas using EMF-provided tooling. In the example in Figure 3, the aggregation relationship becomes an EReference in the PSM ECore model. Creating these models and realizing the profiles as references across these models is the role of the compound document schema architect. These PSM models that realize the compound document profile are then used to drive a directed editor, which the instance document creator uses to create and edit instances that adhere to the profile. Figure 4 is a profile for XHTML+XForms+XML Events from PIM to PSM to serialized instance documents.
Figure 4. Model-driven compound document editor profile creation
A model-driven approach is an efficient way to create functional PIMs of specific namespaces that can be used to create PSMs of combinations of namespaces to represent profiles. You can reuse PIM models many times in different combinations to form as many profiles as required. Using Eclipse EMF ECore models is an ideal way to get directed editing and serialization for the creation of an instance document in a Compound XML Document Editor.
The Compound XML Document Editor (available at IBM alphaWorks) is a dynamic editor framework that uses ECore models to drive model-based compound document construction.
You can add any type whose instances are serialized to XML to the Compound XML Document Editor framework without the need to write any Java code. The Compound XML Document Editor uses model repositories, in which ECore models are stored. Once you drop an ECore model into a Compound XML Document Editor model repository and start the Compound XML Document Editor, you can create or dynamically edit instance documents from these ECore models. You can create model repositories to accommodate as many models and compound document profiles as necessary.
You can swap out individual models, or you can switch out entire model repositories at runtime. Furthermore, you can make changes to ECore models on the fly that are immediately reflected in the editor and in serialized instance documents.
The Compound XML Document Editor comes with ECore models for XHTML, XForms, XML Events, SVG, SMIL, VoiceXML, XUL, MathML, and XLink. Figure 5 shows the available profile combinations in the default model repository with XHTML as the root document; it includes a profile that allows inclusion of elements and attributes from several other namespaces.
Figure 5. Default model repository
The Compound XML Document Editor uses the underlying EMF models to provide a directed editing experience by restricting the allowable right-click options for tag insertion. This is illustrated in Figure 6: The profile is honored by an EMF editor that interrogates the PSM model and allows only valid entries in accordance with that compound document profile. Element attributes are represented as properties in a property sheet.
Figure 6. Directed editing
Once you have created a document, you can render it directly from configurable right-click menu options for browsers that support the compound document profile used in the document (see Figure 7).
Figure 7. Rendering options
Figure 8 shows an insurance form for Automobile Loss Reporting based on ACORD schemas rendered in the X-Smiles browser.
Figure 8. X-Smiles rendered XForm
The Compound XML Document Editor is a standards-based, model-driven, compound document development framework that supports dynamic compound document creation and serialization. The Compound XML Document Editor utilizes Model Driven Development concepts with Eclipse EMF to help develop flexible compound documents and the profiles that define them.
Acknowledgements: Thanks to Simon Johnston and Steve Speicher.
- Learn more about Eclipse and the Eclipse Modeling Framework at eclipse.org.
- Stay current on the latest developments with the W3C's Compound Document Formats (CDF) Working Group, which grew out of a Web Applications and Compound Documents Workshop.
- The W3C is also home to many of the specifications mentioned in this article, such as:
- Visit the Object Management Group (OMG) site where you'll find more information on these technologies:
- Confused by all the XML standards out there? Uche Ogbuji's developerWorks article series on XML standards can help you sort through it all:
- Part 1 -- The core standards (January 2004)
- Part 2 -- XML processing standards (February 2004)
- Part 3 -- The most important vocabularies (February 2004)
- Part 4 -- Detailed cross-reference of the most important XML standards (March 2004)
- Read more about XML User Interface Language (XUL) on Mozilla.org.
- Visit the home page for the RELAX NG schema language.
- Schematron is a language for making assertions about patterns found in XML documents.
- Find hundreds more XML resources on the developerWorks XML zone.
- Learn how you can become an IBM Certified Developer in XML and related technologies.
Get products and technologies
- Download the Eclipse-based Compound XML Document Editor from IBM alphaWorks.
- Check out the Unified Modeling Language (UML) site for more information on this popular modeling tool. You can also find more UML-related resources at the developerWorks Rational area.
Kevin E. Kelly is a Senior Software Engineer with the IBM Corporation working on Software Standards. Kevin is a member of the W3C XForms Working Group as well as the W3C Compound Document Format Working Group. His focus is on the client technology and evolving open standards-based technologies for faster, more efficient standards adoption through XML-based and model-driven approaches. Before joining IBM, Kevin spent eight years at Rational Software working on UML modeling and Java technologies. Kevin holds a B.S. from Mercer University, and a M.S. from the University of Montana.
Jan Joseph Kratky is the lead developer for the Compound XML Document Editor and XML Forms Generator. Currently a software engineer with IBM Emerging Software Standards in Research Triangle Park, North Carolina, he holds a B.A. from Cornell University and an M.S. from Rensselaer Polytechnic Institute. A Sun Certified Java Programmer and Sun Certified Web Component Developer, Jan has worked with Java technologies since 1997, and with Eclipse technologies since 2001. | <urn:uuid:271691c9-0fbd-48db-bc16-b602f98abc43> | 3.234375 | 3,696 | Documentation | Software Dev. | 38.307093 |
O'Reilly Book Excerpts: JXTA in a Nutshell
Getting Started with JXTA, Part 1
In part one in this series of book excerpts from JXTA in a Nutshell, learn about setting up the JXTA Java environment as well as JXTA class files, shells, and peers.
Getting Started with JXTA
In this chapter, we'll see how to get started with JXTA. Although JXTA is a language- and platform-neutral specification, we'll focus on using the standard JXTA applications for the Java platform. The basic concepts that you'll learn in this chapter are applicable to any JXTA implementation using any language; we chose to illustrate the concepts of JXTA using the Java platform because it allows for the simplest discussion of JXTA concepts, and because the Java platform gives us a common basis for our examples, regardless of the computer on which you might run them.
We'll start by discussing how to set up a Java environment to run JXTA programs. Then we'll look in depth at one particular program: the JXTA Shell. Examining the shell will allow us to look in depth at each of the protocols and techniques that JXTA defines; working through the examples in this chapter should provide you with a working knowledge of the key concepts of the JXTA platform and how programs operate within that platform.
Setting Up a Java Environment
The first step in using JXTA is to set up your environment. In this case, that means setting up a Java environment to run JXTA, for which you'll need three things: a Java platform, the JXTA Java class libraries, and any JXTA programs that you want to run.
The Java Platform
For the Java platform, you'll need the Java 2 Standard Edition (J2SE), Version 1.3.1 or later (Version 1.4 is preferred). Work is ongoing in the JXTA community to allow JXTA to run on the Java 2 Micro Edition platform (J2ME); once that work is complete, then the steps we discuss here should work on a J2ME platform as well.
This chapter focuses on running and explaining existing JXTA applications. Therefore, if you're using the Java 2 platform, you need only the Java 2 runtime environment (J2RE). If you plan on programming with JXTA (using the examples in subsequent chapters), then you'll need the Java 2 Software Developer's Kit (SDK).
There are a number of ways to obtain current releases of the Java platform. If your system is running Solaris, Microsoft Windows, or Linux, the simplest way is to download the SDK from http://java.sun.com/j2se/. For other operating systems, check with your system vendor. More commonly, many integrated development environments (IDEs) come with support for Java (and hence a Java platform).
Once you've obtained and installed Java, you must make sure that the java executable is in your standard path.
The JXTA Class Files and Programs
You can obtain all the JXTA files you need at http://download.jxta.org/easyinstall/install.html. On this page, you can obtain the JXTA demo package for a variety of platforms. In fact, the JXTA demo implementation at this site is written completely in Java; the difference between the platforms lies only in how the parts of the implementation are packaged and how they are installed. Therefore, for Microsoft Windows, download an executable (.exe) file; for Solaris, download a shell script, and so on.
When you execute the installation program, you are prompted for a directory in which to install the code. On Unix systems, the default directory is ./JXTA_Demo; on Microsoft Windows, the default directory is C:\Program Files\JXTA_Demo. Within the directory you select, the installation creates the following:
This directory contains a set of jar files that contains the JXTA implementation and another set that contains implementations of the JXTA demo applications.
The JXTA Shell is an interactive application that lets you look at the JXTA environment and try out basic JXTA functionality. We'll examine the shell in detail later in this chapter.
This is another sample JXTA application; it contains functionality to chat one-on-one, chat with a group, and share files. This application uses all of the standard facilities of JXTA, so it is a good example on which to model other JXTA P2P applications.
This is all you need to use JXTA technology, both as an end user and as a developer. If you're going to do JXTA development, you should add each of the jar files in the lib directory to your classpath. If you're simply going to run the sample applications, there are scripts in each application directory that set up the classpath and run the application.
In our examples throughout this book, we assume that you've installed this hierarchy into /files/JXTA_Demo (C:\files\JXTA_Demo). We'll also assume that your classpath contains the current directory and the necessary jar files from the lib directory:
As you become more familiar with JXTA, you may want to get involved with other JXTA projects, use other JXTA applications, or examine the JXTA source code. You can download all of these things from http://www.jxta.org/project/www/download.html.
Basic JXTA Concepts
Now that we have all of this software, we'll use it to explain a little more about the basic JXTA concepts we outlined in Chapter 1, including how a JXTA application is constructed. We'll use the JXTA Shell as the basis for our exploration, since it provides us with an interactive tool that uses the JXTA platform to perform its operations.
JXTA Shell Syntax
Before we dive into the shell, here are some notes on its syntax. Like any shell, the JXTA Shell issues a prompt (JXTA>) at which you type in commands.
Shell commands have two kinds of output. Most of them simply send their output to the screen. Some commands, however, produce an object as their output. These objects should be saved in a shell variable. If you do not save the object, most commands will create a new object with a default name to hold the return value; if you're going to need the object, it's easier to assign a name to it yourself. Shell variables are created in JXTA by assigning a new name to the output of such a command.
Here are some simple examples. The env command produces as its output a list of all the shell variables and their values:

JXTA>env
stdin = Default InputPipe (class net.jxta.impl.shell.ShellInputPipe)
parentShell = Root Shell (class net.jxta.impl.shell.bin.Shell.Shell)
Shell = Root Shell (class net.jxta.impl.shell.bin.Shell.Shell)
stdout = Default OutputPipe (class net.jxta.impl.pipe.NonBlockingOutputPipe)
consout = Console OutputPipe (class net.jxta.impl.shell.ShellOutputPipe)
consin = Default Console InputPipe (class net.jxta.impl.shell.ShellInputPipe)
stdgroup = Default Peergroup (class net.jxta.impl.peergroup.StdPeerGroup)
Shell variables are created by assigning a new name to the output of a command that creates an object. mkadv is such a command; here we store the object it creates in the myadv shell variable:
JXTA>myadv = mkadv -p
You can print out the content of certain variables by using the cat command. If the variable has structured data, cat will print it out:
JXTA>cat myadv
<?xml version="1.0"?>
<!DOCTYPE jxta:PipeAdvertisement>
<jxta:PipeAdvertisement>
  <id>
    jxta://59616261646162614A757874614D50474168B1395E034DEA90F3BC8CD7D361840000000000000000000000000000000000000000000000000000000000000401
  </id>
</jxta:PipeAdvertisement>
A list of all shell commands can be obtained via the man command; the man command can also print out help for a specific command (e.g., man mkadv). A complete shell reference appears in Chapter 12.
Joined: 03 Oct 2005
Posted: Thu May 04, 2006 8:41 am    Post subject: Artificial Compound 'Nano-Eye' Is Modelled on Insect Eyes
Bioengineers at Berkeley University Create an Artificial Compound 'Nano-Eye' That Is Modelled on the Eyes of Insects
Using the eyes of insects such as dragonflies and houseflies as models, a team of bioengineers at University of California, Berkeley, has created a series of artificial compound eyes.
These eyes can eventually be used as cameras or sensory detectors to capture visual or chemical information from a wider field of vision than previously possible, even with the best fish-eye lens, said Luke P. Lee, the team's principal investigator. Potential applications include surveillance; high-speed motion detection; environmental sensing; medical procedures, such as endoscopies and image-guided surgeries, that require cameras; and a number of clinical treatments that can be controlled by implanted light delivery devices.
They are the first hemispherical, three-dimensional optical systems to integrate microlens arrays - thousands of tiny lenses packed side by side - with self-aligned, self-written waveguides, that is, light-conducting channels that themselves have been created by beams of light, said Lee, the Lloyd Distinguished Professor of Bioengineering at UC Berkeley.
The eyes are fully described for the first time in the April 28 issue of the journal Science.
"I've always wanted to create an advanced, three-dimensional optical system," Lee said, "but conventional microfabrication technology is two-dimensional. So, I started thinking about basing a fabrication system on the developmental stages of insect eyes that I'd learned about as a biophysicist and bioengineer."
What he and his team came up with is a low-cost, easy-to-replicate method of creating pinhead-sized polymer resin domes spiked with thousands of light-guiding channels, each topped with its own lens. Not only are these units packed together in the same hexagonal, honeycomb pattern as in an insect's compound eye, but each is also remarkably similar in size, design, shape and function to an ommatidium, the individual sensory unit of a compound eye.
Just like pins in a pincushion - or a dragonfly's 30,000 ommatidia - the team's artificial ommatidia are each oriented at a slightly different angle. Lee's team has shown that the lenses and waveguides of the artificial eyes focus and conduct light in the same way as an insect's eye.
While an insect's ommatidia each end in a photoreceptor cell that transmits a light signal to the creature's optic nerve, Lee plans to couple his team's ommatidia with CCD photodiodes, the light-capturing units used in digital cameras and camcorders. He also has plans to link them to spectroscopes for chemical detection and analysis.
"The lenses and waveguides are the most important part of the system," Lee said. "People have said that it would be totally impossible to create them with an angle, but now that we've done it, we're ready to integrate imaging or chemical sensing into the eyes."
While conventional microfabrication techniques are expensive and use high temperatures, Lee and his team borrowed from nature, using a low temperature system, photopolymerization, and self-aligning, self-writing technology.
To create the artificial eye, the team first needed to construct a hemispherical mold of the eye's outer layer, a structure consisting of thousands of microlenses. Using existing technology, they made a flat array of these tiny, domed lenses arranged in the hexagonal honeycomb pattern. On top of this, they applied a thin slab of an elastic polymer called polydimethylsiloxane, or PDMS, creating a concave pattern of the lenses in the polymer. By affixing the PDMS membrane over the opening of a vacuum chamber and applying negative air pressure, they pulled it into the dome shapes they needed, controlling its form by using different pressures.
They then had a hemisphere-shaped cup pocked with some 8,700 indentations: a compound-eye mold that could be used over and over again using soft lithography technology, a set of methods developed over the last decade to replicate nanoscale-sized structures.
The material they chose for the artificial eyes was an epoxy resin that cures into a hardened form when exposed to ultraviolet light. They poured the resin into the dimpled molds, baked it at a low temperature just long enough to slightly harden the material, then turned out the contents: little resin hemispheres with a surface packed with 8,700 raised mounds. When struck by a beam of light, each of these mounds acts as a lens, focusing the light and sending it into the material below. Like a welder's torch burning a hole into metal, over time the focused light beams etch holes in the resin creating the tiny channels called self-written waveguides.
Because these channels are formed at the angle of the light beams that strike them, Lee used a condenser lens to bend his light source into a spoke-like pattern of beams that converges on the eye's dome. The end result is that the waveguides pierce the resin at angles that head toward the center of the dome, just like the converging ommatidia of an insect eye.
Because the microlenses create the waveguides, each microlens is perfectly aligned with its waveguide. The self-alignment, self-writing processes are crucial to the creation of the artificial compound eye, said Lee, because these processes will also align the microlenses and waveguides with the pixels of CCDs and spectroscopes.
"Who knows? Maybe this is how insect eyes are created, too," said Lee. "First, there are the lenses, and then as light keeps coming in, they make their own optical paths and connect with the visual system."
Lee speculates that the artificial compound eyes will be put to use within a few years. Their first applications may be in ultra-thin camera phones. After that, he expects to see them used in camcorders for omnidirectional surveillance imaging and such uses as small, hidden, wearable cameras.
Source: Berkeley University.
This story was posted on 3 May 2006. | <urn:uuid:697656ad-a76b-4c9e-a5ce-4a36864534ae> | 2.75 | 1,347 | Comment Section | Science & Tech. | 33.593078 |
The beetles are able to recognise themselves in the dim dusk light because their metallic colouration optimises the contrast of the beetles against the forest background (Théry et al 2008).
Although the brooding behaviour of Coprophanaeus lancifer remains unclear, it is likely to be similar to that of related species that have been studied.
Once a carcass has been located, the beetles work in male-female pairs to bury the animal in a burrow.
The female beetle then tears parts of the carcass using its toothed forelegs and head, and forms pear-shaped brood masses from the decomposing flesh in an underground chamber. These masses are then covered in a layer of soil for protection and a single egg is laid in each.
The larvae hatch and feed in the brood mass, which contains sufficient substrate to allow the beetle to complete its development. Coprophanaeus lancifer is immensely strong and pairs of beetles have been studied moving pig carcasses the size and weight of an adult human (Ururahy-Rodrigues et al 2008).
Although the beetles favour vertebrate carrion, they are also sometimes attracted to dung and I have even collected them in Suriname using dead millipedes as bait. | <urn:uuid:0fe4b3b5-058a-4650-b8b3-0df956612a6a> | 3.46875 | 258 | Knowledge Article | Science & Tech. | 38.577386 |
Photo Credit: David Cappaert, www.forestryimages.org
SCIENTIFIC NAME: Rana sylvatica
DESCRIPTION: Wood frogs are distinguished from other frogs by a dark mask through eyes that resembles that of a robber's mask. Their body can have color variations from brown to pink and adults are 1.4 to 3.25 inches in length.
DISTRIBUTION: The wood frog can withstand extreme cold and even freezing. They are found as far north as Labrador and Alaska and are the only North American frog that lives north of the Arctic Circle. The wood frog can be found from the Canadian Maritimes west to Alaska, with southern portions of its range extending from southern Minnesota and Wisconsin to Arkansas, Tennessee, Alabama and northern South Carolina to Maryland.
In Alabama, wood frogs are rare and local in distribution. They are documented from twelve locations in the eastern Ridge and Valley and upper Piedmont from Mount Cheaha, in Talladega County, south to Horseshoe Bend in Tallapoosa County. Wood frogs are thought to be declining in Alabama, but their status is poorly known.
HABITAT: Wood frogs are usually found in moist, deciduous forests with a lot of leaf litter and lay eggs in vernal pools. During winter, they take shelter in leaf litter or under a log.
FEEDING HABITS: Adult wood frogs feed on insects, arachnids, slugs, worms, and snails. Tadpoles are herbivorous.
LIFE HISTORY AND ECOLOGY: Wood frogs have adapted to very cold climates by freezing over the winter. First, they stop breathing and their heart stops beating. They then produce an antifreeze-like substance that prevents the water within their cells from freezing. However, ice does form in the spaces between the cells. Once the weather warms, the frogs thaw and begin feeding and mating again.
Wood frogs emerge from hibernation in mid to late January to February, usually during the first warm, rainy nights of the year, and congregate in large numbers at woodland pools. Males give an explosive call that sounds like the "quack" of a mallard duck. Eggs are laid in 3 to 4 inch diameter globular masses that are usually attached to existing vegetation in the pond. Each mass may contain up to 3,000 eggs. The eggs hatch within 2 to 3 weeks and the tadpole stage lasts between 6 and 10 weeks. Maturity may be reached in 2 to 3 years, depending on sex and population. In the wild they usually live no more than 3 years.
AUTHOR: Ericha Nix, Wildlife Biologist, Alabama Division of Wildlife and Freshwater Fisheries | <urn:uuid:254549b0-2b57-4930-bac3-f6370b6cb7bd> | 3.640625 | 560 | Knowledge Article | Science & Tech. | 55.985283 |
Science Museum of Virginia sea stars
Strange creatures are our sea stars; they have no blood, no brains, and if we chop them up, as long as there is a fifth left, they will grow everything back. As for the no brains thing, anyone who has seen “SpongeBob SquarePants” can attest that Patrick Star, SpongeBob’s best friend, is not the sharpest knife in the drawer. Comedy is not the only reason Patrick is a little slow on the uptake. The creator of SpongeBob, Stephen Hillenburg, taught marine biology at Orange County Ocean Institute in California and puts weird facts like that into the story and characters. Sea stars actually have something going on upstairs, but it’s just a nerve ring instead of a brain.
Breathing is another thing that our dear sea stars don’t do like most of the creatures we come in contact with. They absorb sea water through a small dot normally located somewhere on the top facing side of the sea star; this is called a madreporite. The water they absorb is used in their circulatory system (yes, you read that right, sea water being used for blood). While they have the water they might as well make the most of it and absorb the oxygen out of it.
For vision the sea star uses a tiny dot on the end of each arm to see. If you find a sea star large enough you may notice the tiny dot (it looks like someone put the point of a highlighter on the very tip of the arm). Their vision is not like ours and is more like dark and light (sun’s out - sun’s not out).
To get around, the sea star uses its arms, with hundreds of tiny tube feet on each arm. None of the arms is dominant. Our Forbes sea stars have 5 arms each and have been clocked at a whopping five inches a minute! That is a sea star run! Full speed, pedal to the metal, run! (That's 0.005 mph.) When you don't have to run down your food, and most things either don't want to eat you or will only take a bite that you will grow back, speed is not a major concern. Their favorite food is most bivalves (animals with 2 shells) like oysters, mussels, and clams. The creatures that they are most concerned about avoiding are crabs, bottom-dwelling fish, sea gulls, sea urchins, lobsters and (be surprised) humans.
The History of Steam
Just like English, there is a language of steam that needs to be learnt before we dive into the basics of understanding steam.
- Terms and Definitions
- Unit converter
- Geography of a process plant
- Steam Table
Units of Temperature. Temperature is the degree of hotness, with no implication of the amount of heat energy available. The temperature scale is used as an indicator of thermal equilibrium between two systems in contact with each other.
Temperature difference, as used
in many heat transfer calculations, may be expressed in either °C
or K. Since both scales have the same increments, a temperature
difference of 1°C has the same value as a temperature
difference of 1 K.
The Celsius (°C) scale. This
is the scale most commonly used by the engineer, as it has a
convenient (but arbitrary) zero temperature, corresponding to the
temperature at which water will freeze.
absolute or K (kelvin) scale. This scale has the same
increments as the Celsius scale, but has a zero corresponding to
the minimum possible temperature when all molecular and atomic
motion has ceased. This temperature is often referred to as
absolute zero (0 K) and is equivalent to -273.16°C.
Fahrenheit (°F) scale. This scale is used in the FPS
system(US and Canada), but not much elsewhere. To convert °F
to °C, use the formula:
is a comparision of the various scales of temperature, shown
of Pressure is defined as 1 newton of force per square metre
(1 N/m²). The SI unit of pressure is the pascal (Pa), but as
Pa is such a small unit the kPa (1 kilonewton/m²) or MPa (1
Meganewton/m²) tend to be more appropriate to steam
However, probably the most commonly used
metric unit for pressure measurement in steam engineering is the
bar. This is equal to 105 N/m², and approximates to 1
Absolute pressure (bar a)
This is the
pressure measured from the datum of a perfect vacuum. So, a
perfect vacuum has a pressure of 0 bar a.
pressureless state of a perfect vacuum is "absolute zero".
Absolute pressure is, therefore, the pressure above absolute
At mean sea level, for instance, the pressure exerted
by the atmosphere is 1.033 kg/cm2 absolute, when measured as
kilograms per square centimeter. This is always assumed to be 1
kg/cm2 a for calculations.
At sea level, the pressure can
also be stated as 1.013 25 bar a (1 atm) , when measured in bars.
This is always assumed to be 1 bar a for calculations.
pressure is also commonly measured in millimeters of mercury, or
Gauge pressure (bar g)
gauge is the pressure as measured by the pressure gauge measured
from the datum of the atmospheric pressure.Gauge pressure
indication is shown as kg/cm2g.
The pressure gauge –
bourdon tube type - measures pressure relative to the outside
atmospheric pressure. This is rounded off to 1 bar a or 1 kg/cm2a
– at (MSL). Therefore, to convert bar g to bar a, we add 1
bar, and to convert kg/cm2g to kg/cm2a, we again add 1 kg/cm2.
pressure + Atmospheric pressure = Absolute pressure
6 bar g + 1
bar = 7 bar a
10 kg/cm2g + 1 kg/cm2 = 11 kg/cm2a
above atmospheric will therefore, always yield a positive gauge
pressure. Conversely a vacuum or negative pressure is the pressure
below that of the atmosphere. A pressure of -1 bar g corresponds
closely to a perfect vacuum. In the other units, a vacuum exists
below zero kg/cm2g .
bar g = 11 bar a = 10.2 kg/cm2g = 11.2 kg/cm2a = 145 psig = 1 MPa
= 106 N/m2
the data given in the steam table has pressure in kg/cm2gauge and
abs . You can check the steam tables and see that in the lower
values for pressure, the enthalpy values vary a lot more than the
higher pressure readings. Therefore it is very important you
convert bar g to bar a, when the steam tables have pressures
mentioned in bar terms.
Differential pressure ΔT
is simply the difference between two pressures. When calculating
difference in pressure, the reference point becomes meaningless.
Therefore, the difference between two pressures will have the same
value whether these pressures are measured in gauge pressure or
absolute pressure, as long as the two pressures are measured from
the same reference.
is the study of energy changes accompanying physical and chemical
changes. The term itself clearly suggests what is happening --
"thermo", from temperature, meaning energy, and
"dynamics", which means the change over time.
Thermodynamics can be roughly encapsulated with these
Heat and Work / Energy / Enthalpy / Entropy / Free
Energy is the capacity to do work
(a translation from Greek-"work within"). Therefore work
and energy are one and the same. The SI unit for work and energy
is the joule, defined as 1 Nm.
The total energy of a system
is composed of the internal, potential and kinetic energy. The
temperature of a substance is directly related to its internal
energy. The internal energy is associated with the motion,
interaction and bonding of the molecules within a substance. The
external energy of a substance is associated with its velocity and
location, and is the sum of its potential and kinetic energy.
physics, the amount of mechanical work done can be determined by
an equation derived from Newtonian mechanics
= opposing force x displacement
is in joules (N*m) (or calories, but we are using primarily SI
opposing force is in newtons (kg*m/s2)
is in meters
In chemical reactions, work is primarily
related with expansion. It is generally defined as :
= (area X applied pressure) X displacement
value of displacement X area is actually the change in
volume. If we imagine a reaction taking place in a container of
some volume, we measure work by pressure times the change in
= dV x P
is the change in volume, in litres
Heat and Work
and work are both forms of energy. They are also related forms, in
that one can be transformed into the other. Heat energy (such as
steam engines) can be used to do work (such as pushing a train
down the track). Work can be transformed into heat, such as might
be experienced by rubbing your hands together to warm them up.
Work and heat can both be described using the same unit of
measure. Units of heat energy used may be calorie (cal), Joule (J,
SI unit) or Btu.
Typically, we use the SI units of
Joules (J) and kilojoules (kJ). But sometimes, the calorie is the
unit of measure (MKS unit). Heat energy is measured in
kilocalories, or 1000 calories.
is defined as an amount of heat required to change temperature of
one gram of liquid water by one degree Celsius.
1 cal =
1 kcal = 1000 cal = 4186.8 J
joule is a derived unit defined as the work done or energy
required, to exert a force of one newton for a distance of one
metre, so the same quantity may be referred to as a newton metre
or newton-metre with the symbol N·m.
One Joule is
the mechanical energy which must be expended to raise the
temperature of a unit weight (2 kg) of water from 0°C to 1°C,
or from 32°F to 33°F.
1 J (Joule) = 2.389 X 10-4
Btu. A Btu - British thermal unit - is the
amount of heat energy required to raise the temperature of one
pound of cold water by 1º F. Or, a Btu is the amount of heat
energy given off by one pound of water in cooling, say, from 70º
F to 69º F.
Heat. Heat is energy transferred as
a result of a temperature differences. Energy as heat passes from
a warm body (with higher temperature) to a cold body (with lower
temperature). Heat is a form of energy and as such is part of the
enthalpy of a liquid or gas.
It is a measure of energy
available with no implication of temperature. To illustrate , the
one kcal that raises one kg of water from 18ºC to 19ºC
could come from the surrounding air at a temperature of 35ºC
or from a flame at a temperature of 1,500ºC.
flow. The transfer of energy as a result of the difference in
temperature alone is referred to as heat flow.
What is the
difference btween temperature and Heat?
Temperature is the
cause. Heat is the effect.
Watt is the SI unit of
power and can be defined as 1 J/s of heat flow.
Capacity. Heat Capacity of a system is the amount of heat
required to change temperature of the whole system by one
Specific Heat Capacity
A measure of the
ability of a substance to absorb heat. It is the amount of energy
(kcal) required to raise 1 kg of water 1° C. Thus specific
heat capacity is expressed in kcal/kg/°C.
The specific heat
capacity of water is 1 kcal/kg/° C. This means that an
increase in enthalpy of 1 kcal will raise the temperature of 1 kg
of water by 1° C.
Specific heat capacity.
Specific heat, given by the symbol "C", is generally
The amount of heat required to raise the
temperature of one (1) kilogram of a substance by one (1) degree.
The heat required to raise one (1) gram of a material one
(1) degree. It can be thought of as the ability of a substance to
Water has a
very large specific heat capacity (4.19 kJ/kg°C) or, 1
cal/gram°C compared with many fluids. Water is therefore, a
good heat carrier.
heat may be measured in kJ/kgC,kcal/kg°C, cal/gram°C or
Btu/lb°F. For comparing units, check the unit converter for
more information. Specific heat capacities for different materials
can be found in the Material Properties section.
of Heat Required to Rise Temperature
The amount of heat
needed to heat a subject from one temperature level to an other
can be expressed a
= specific heat capacity (kcal/kg°C)
Q = amount of heat
M = mass (kg)
ΔT = rise in temperature of
the material in degrees Celsius (°C)
Heat transfer is the flow of enthalpy from matter
at a high temperature to matter at a lower temperature when
brought into contact.
This is the term
given to the total energy, due to both pressure and temperature,
of a fluid (such as water or steam) at any given time and
condition. More specifically it is the sum of the internal energy
and the work done by an applied pressure.
The basic unit of
measurement is the SI unit joule (J). Since one joule represents a
very small amount of energy, it is usual to use kilojoules (kJ)
In MKS units, the basic unit of measurement for
all types of energy is kcal/kg.
specific enthalpy is a measure of the total energy (enthalpy) of a
unit mass (1 kg), and the units are usually kJ/kg or
Heat of the Liquid (Enthalpy of Saturated
Water) – hf
Expressed in kcal's, this is the amount
of heat required to raise the temperature of 1 kg of water from 0°
C to the boiling point of a given pressure/temperature
correlation. Also referred to as Sensible Heat.
hf – heat of the fluid.
Latent Heat of
Evaporation (Enthalpy of Evaporation) – hfg
in kcal's, this is the amount of heat required to change 1 kg of
boiling water to 1 kg of steam. This same amount of heat is
released when a kg of steam is condensed back to a kg of water.
The quantity of latent heat will vary with the pressure and/or
temperature of a closed system.
Written as hfg –
heat incurred in change of state from fluid to gas, or
Total Heat of Steam (Enthalpy of Saturated Steam)
The sum of the Heat of the Liquid and Latent
Heat of Evaporation, also expressed in kcal's.
hg – Total enthalpy of saturated
Subscript f = Fluid or liquid state, for
example hf: liquid enthalpy
Subscript fg = Change of state
liquid to gas, for example hfg: enthalpy of evaporation
g = Total, for example hg: total enthalpy
The density ρ of a substance can
be defined as its mass (m) per unit volume (V). The specific
volume (vg) is the volume per unit mass and is therefore the
inverse of density. In fact, the term ‘specific’ is
generally used to denote a property of a unit mass of a substance.
m = Mass (kg)
V = Volume (m3)
= Specific volume (m3/kg)
The SI units of density (ρ)
are kg/m³, whilst conversely the units of specific volume
(Vg) are m³/kg.
Another term used as a measure of density is the
specific gravity. It is a ratio of the density of a substance (ρs)
and the density of pure water (ρw) at
standard temperature and pressure (STP). This reference condition
is usually defined as being at atmospheric pressure and 0°C.
Sometimes it is said to be at 20°C or 25°C and is referred
to as normal temperature and pressure (NTP).
density of water at these conditions is approximately 1 000 kg/m³.
Therefore substances with a density greater than this value will
have a specific gravity greater than 1, whereas substances with a
density less than this will have a specific gravity of less than
Since the specific gravity is a ratio of two densities,
it is a dimensionless variable and has no units. Therefore in this
case the term specific does not indicate it is a property of a
unit mass of a substance. The specific gravity is also sometimes
known as the relative density of a substance.
Temperature (boiling point). The temperature for a
corresponding Saturation Pressure at which a liquid boils into its
vapor phase. The liquid can be said to be saturated with thermal
(heat) energy. Any addition of thermal energy results in a phase
Boiling Point. A somewhat clearer (and
perhaps more useful) definition of boiling point is "the
temperature at which the vapor pressure of the liquid equals the
pressure of the surroundings".
The pressure at which vaporization (boiling) starts to occur for a
corresponing Saturation temperature. For water at 100°C, the
saturation pressure is 1 atm and, for water at 1 atm, the
saturation temperature is 100°C.
The term 'saturation'
defines a condition in which a mixture of vapor and liquid can
exist together at a given temperature and pressure.
and other units
– °C or °K
Pressure SI unit – Pascal
Pa = 1 N/m2,
too small a unit
Common unit = Bar
1 Bar =105
Atmospheric Pressure = 1 Bar abs at MSL
0 Bar abs
Bar Gauge + atm pressure = Bar abs
Volume = 1 / Density in m3/kg
Gravity = Density ratio to water
Energy SI unit = 1 Joule =
1 Nm = 4.186 cal
Common unit = kilocalorie
1 kcal = heat
reqd to raise 1 kg water by 1°C
1 kcal = 4186.8 Joules
= sp. heat capacity in kcal/kg °C
SI and other units
mWC = meters water column
1 Bar = 10
1 Bar = 14.23 PSI (Lbs/in2)
psi = 10.54 Kg/cm2g
psi = 3.5 Kg/cm2g
is the pressure exerted by a static head of water column. Remember
that 10 meters of head = 1 bar
pressure + Atmospheric pressure = Absolute pressure
most common pressure for utilization of steam is 3.5 kg/cm2g.
Density is about a thousandth of water.
• Boilers come
from an era when the industrial revolution was at its peak in
Britain. Therefore, the world standards for boilers are British.
ie, the MKS system.
• Specific gravity is dimensionless as
it is a ratio. Density of any lquid relative to water. It is used
mostly for fuel.
• The SI unit of 1 Joule is too small,
therefore the kilocalorie was developed.
base units. The
International System of Units (SI) is founded on seven base units:
Length, Mass, Time, Electrical current, Thermodynamic temperature,
Luminous temperature and Amount of substance. They are defined in
an absolute way without referring to any other units. We will be
working with the following four units only..
derived units. Derived units are algebraic combinations of the
seven base units with some of the combinations being assigned
special names and symbols.
properties of matter are measured at STP - Standard temperature
Temperature: freezing point of pure water, 0°C
Pressure: 760 mm Hg or one atmosphere
1. Heating Water
What is the energy needed to heat a mass
of 1.0 kg of water from 0°C to 100°C when the specific
heat of water is 1kcal/kg°C.
Q = M Cp ΔT
1 kg X 1 (kcal/kg°C) X (100 - 0)(°C)
= 100 kcal
If 200 kgs of a substance at 22 °C with a specific
heat of 0.88 kcal/kg°C is heated with 10,000 kcal of energy,
what is the new temperature of the substance?
Q = M Cp ΔT
= Q / M Cp
= 10,000 / 200 X 0.88 = 56.82
temperature is 22+56.82 = 78.82°C
Assume that water at 50° C water is fed to a boiler
at atm pressure. This begins to boil at 100° C. 1 kcal will be
required to raise each kg of water by 1° C. Therefore, for
each kg of water, the increase in enthalpy required to raise the
temperature from 50° C to 100° C is:
(100 - 50) x 1 =
the boiler holds 10000 kg mass the increase in enthalpy to bring
the total mass of water to it's boiling point is therefore:
kcal/kg x 10000 kg or 5,00,000 kcal.
must be remembered, this figure is not the sensible heat, but
merely the increase in sensible heat required to raise the
temperature from 50° C to 100° C. The datum point of the
steam tables is water at 0° C, which is assumed to have a heat
content of zero for our purposes.
total sensible heat of water at 100° C is therefore:
- 0) x 1 = 100 kcal/kg | <urn:uuid:8b83ce57-4823-4df9-9834-ca6521f6f0b0> | 3.890625 | 4,328 | Knowledge Article | Science & Tech. | 58.849814 |
High pressure or an ‘anticyclone’ usually brings a variety of settled, dry weather, depending on the season. During winter the weather under high pressure over the UK can be often cold and cloudy during the daytime with nights often cold enough to allow frost to form. Whilst in summer it can cause days to be warm and sunny with nights being often mild, sometimes warm enough to make sleeping very uncomfortable.
Anticyclones form when air subsides, falls, unlike low pressure which forms when air rises. As air subsides it gradually warms, this warming can stop clouds from forming. However if there is some warm air located near the ground, some air may rise and form areas of patchy or high cloud. Anticyclones can be slow-moving and sometimes stubborn to clear. Any depressions which try to get to close to it may circulate around it and dump bad weather somewhere else. With high pressure, winds are lightest at the centre and circulate clockwise in the northern hemisphere and counter-clockwise in the southern hemisphere.
There are two main types of anticyclone, a cold and warm anticyclone. Cold anticyclones form typically over polar climates, here temperatures are very low and the air is often cold and dense. An inversion tends to develop at low altitudes with anticyclones; this prevents clouds from building any further. If this is so, any cumulus which does form during the daytime will quickly stop growing and spread into a layer of stratocumulus and then disperse when night comes. At night when the temperatures drop below freezing frosts are very likely to form.
Warm anticyclones form mainly over tropical or sub-tropical climates, where temperatures are often warm both at day and night. With these highs air is subsiding at quite a depth through the troposphere, this tends to hold back and restrict any cloud formation, if any cloud do form, they will often be erratic and well broken, these being mainly cumulus and stratocumulus.
‘Anticyclone gloom’ forms when the air at the surface is warm and moist, extensive stratus or fog occurs under the stable, calm conditions, it will remain this way unless the sun is strong enough to burn it away. However in winter low stratus or fog could persist for days, or even weeks in extreme cases.
A ‘blocking high’ is an area of high pressure which remains stationary for days or even weeks on end. Usually with ‘normal anticyclones’ they tend to appear and disappear after a short period of time and then allow other weather systems to take over or allow other areas of high pressure to takes its place, however these blocking highs have other ideas.
Ridges are little areas of high pressure which extend out of anticyclones or ahead of depressions; the key difference is that ridges don’t have a closed-structured centre.
So where are areas of high pressure likely to form?
Sometimes an area of high pressure may well form over Scandinavia, if this is so it may well drag in warmer air from off the continent, in summer, or drag in very cold air in winter, with the winds coming from a south-easterly direction, or the high may bring in an easterly.
An area of high pressure may well form out in the Atlantic and remain there, just west of the UK, if this is the case, depressions may well approach from the north-west or indeed the north, once these depressions move through they could well bring colder air.
The ‘Azores high’ is what usually influences our weather in summer, it can bring warm or even hot weather during daytime and make nights feel quite warm and sticky.. The high may move northward to cover Scotland or stay stuck just south of the UK.
Whatever the situation, an area of high pressure can form anywhere at any time and one thing is for certain, it is capable of changing the weather.
© Lee Johnson 2002 | <urn:uuid:d5b3216c-5ab0-44d9-8404-84861949715a> | 3.78125 | 830 | Knowledge Article | Science & Tech. | 46.011818 |
Visibility and Air Pollution
What is Visibility?
Particulate matter and gaseous air pollution affects visibility to some degree in every national park. Air pollution can create a white or brown haze that affects not only how far we can see but also how well we are able to see the colors, forms, and textures of a scenic vista. Haze results from air pollutants, sucha as fine particles that absorb and scatter sunlight. Both natural and manmade sources contribute to haze-causing particles and gases in the atmosphere. Natural sources include windblown dust and soot from wildfires. Manmade sources include motor vehicles, power plants, and industrial operations. Such air pollutants are often carried by wind hundreds of miles from where they originated.
Visual range is one measure of visibility and is defined as the greatest distance at which a large black object can be seen and recognized against the background sky. The larger the visual range the better the visibility. It is not directly measured but rather calculated from a measurement of light extinction which includes the scattering and absorption of light by particles and gases. Extinction depends on the mass and chemical composition of the particles and gases and is a quantitative measure of how the passage of light from a scenic feature to an observer is affected by air pollutants. Extinction is monitored with transmissometers, nephelometers, or reconstructed from measurements of particle mass and chemical composition. | <urn:uuid:03302d20-2b94-40a3-8850-ff08e8c29a7f> | 3.90625 | 282 | Knowledge Article | Science & Tech. | 28.768744 |
[Previous] | [Session 26] | [Next]
T.R. Spilker (JPL/Caltech)
In cooperation with NASA's Solar System Exploration Subcommittee and its working groups, JPL is studying planetary science missions proposed for launch near the end of the next decade. Results will focus resources on developing technologies that enable a set of missions in which extraordinary scientific advances reward meeting severe technological challenges. This paper describes Saturn Ring Observer, an innovative, chemical-propulsion-only approach to a previous mission concept that enables close-up observation of Saturn's rings to obtain fundamental new information about ring dynamics, some relevant to planetary system formation. It describes the mission's science goals, provided by the Astrophysical Analogs in the Solar System Campaign Strategy Working Group, and the resulting mission concept.
The primary goal is understanding ring processes and evolution as a model for the origin of planetary systems. This involves direct observations of kinematic processes and parameters in the rings, direct observations of the physical nature and distribution of the particles, measuring local surface mass density over a wide radial range, and mapping the optical depth profile at high (~ 10 m) radial resolution and at several co-rotating longitudes.
The ring opening angle as seen from the approaching spacecraft sets an arrival time window at Saturn. Saturn orbit insertion uses a single-pass aerocapture followed by direct insertion near the Huygens gap at apoapsis. The science mission concept calls for placing the spacecraft in a ring-particle-like orbit with a very small inclination; frequent small plane-change maneuvers enable `hovering' 3 ±0.5 km removed from the ring plane. One month of science operations follow insertion, with as many as four changes in radial position totaling several thousand km.
This work was carried out at the Jet Propulsion Laboratory/California Institute of Technology, under contract to NASA. | <urn:uuid:e087acc6-8fd7-4925-9074-6258d497211e> | 2.8125 | 385 | Academic Writing | Science & Tech. | 23.943684 |
LEO > Low Earth Orbit > Polar Sun-Synchronous
Landsat 1, 2 and 3 operated in a circular, Sun-synchronous, near-polar orbit at
an altitude of approximately 913 km (567 miles), with a nominal 9:30 a.m.
crossing of the Equator during the descending ... mode. They circled the Earth
every 103 minutes, completing 14 orbits per day and viewing the entire Earth
every 18 days. The Landsat orbits are selected and trimmed so that each
satellite ground trace repeats its Earth coverage at the same local time every
day. Repetitive image centers are maintained to within 37 km (23 miles). The
orbits of Landsat 4 and 5 are repetitive, circular, Sun-synchronous, and
near-polar at a nominal altitude of 705 km (438 miles) at the Equator. The
satellites cross the Equator from north-to-south on a descending orbital node
at approximately 9:45 a.m. on each pass. Each orbit takes nearly 99 minutes,
and the spacecrafts complete just over 14 orbits each per day, covering the
entire Earth (poles excepted) every 16 days. During processing, data obtained
is framed into individual scenes of the Earth's surface. The ground
instantaneous field of view (IFOV) of the Landsat 1-3 MSS is 79m x 79m pixel
(resolution elements); the Landsat 4 and 5 MSS IFOV is 82m x 82m pixel. MSS
line scanning devices continually scan the Earth in a nominal 185 km swath
perpendicular to the Landsat orbital track. The coverage patterns result in
14-percent image sidelap at the Equator for Landsat 1-3 data; Landsat 4 and 5
image side lap at the Equator is 7.3-percent. Image sidelap percentages
increase proportionally as the latitude increases.
Taken from the NSSDC System for Information Retrieval and Storage (SIRS). For
more information contact the NSSDC Coordinated Request and User Support Office,
301-286-6695 (NASA Goddard Space Flight Center, Code 933.4, Greenbelt, Maryland
U.S. Geological Survey, 1979, Landsat Data Users Handbook, (Revised): U.S.
Geological Survey, p. 1-1 to AH-1.
U.S. Geological Survey and National Oceanic and Atmospheric Administration,
1984, Landsat 4 Data Users Handbook: U.S. Geological Survey, p. 1-1 to 5-1.
National Oceanic and Atmospheric Administration, 1986, Landsat Data Users
Notes, Number 35, 20p. | <urn:uuid:1115b508-fb95-4c4e-858b-4a8084848a4a> | 3.03125 | 577 | Knowledge Article | Science & Tech. | 61.484493 |
Geometry Problem 62. Square diagonal and Inscribed Circle.
Level: High School, College, SAT Prep.
In the figure below, ABCD is a square,
the inscribed circle O and the arc BD of center A meet at E.
Prove that CE is one half of the diagonal of the square. | <urn:uuid:e5de8136-b7ef-4464-af45-0d593a2f18d4> | 3.265625 | 66 | Tutorial | Science & Tech. | 71.9535 |
Recursion is one of the tough programming technique to master. Many programmers working on both Java and other programming language like C or C++ struggles to think recursively and figure out recursive pattern in problem statement, which makes it is one of the favorite topic of any programming interview. If you are new in Java or just started learning Java programming language and you are looking for some exercise to learn concept of recursion than this tutorial is for you. In this programming tutorial we will see couple of example of recursion in Java programs and some programming exercise which will help you to write recursive code in Java e.g. calculating Factorial, reversing String and printing Fibonacci series using recursion technique. For those who are not familiar with recursion programming technique here is the short introduction: "Recursion is a programming technique on which a method call itself to calculate result". Its not as simple as it look and mainly depends upon your ability to think recursively. One of the common trait of recursive problem is that they repeat itself, if you can break a big problem into small junk of repetitive steps then you are on your way to solve it using recursion.
How to solve problem using Recursion in Java
In order to solve a problem using recursion in Java or any other programming language e.g. C or C++, You must be able to figure out :
1) Base case, last point which can be resolve without calling recursive function e.g. in case of Fibonacci series its
1 and 2 where result will be 1. In case of recursive power function its zero power which is equal to 1 or in case of calculating Factorial its factorial of zero which is equal to 1.
2) With every recursive method call, Your problem must reduce and approach to base case. If this is not the case than you won't be able to calculate result and eventually die with java.lang.StackOverFlowError
Recursion Programming Example in Java
In our programming example of recursion in Java we will calculate Fibonacci number of give length using recursion. In case of Fibonacci number current number is sum of previous two number except first and second number which will form base
case for recursive solution.
If you look above example of recursion in Java you will find that we have a base case where program returns result before calling recursive function and than with every invocation number is decreased by 1. This is very important to reach solution using recursive technique.
Programming Exercise to solve using Recursion
Here are few more programming exercise to learn Recursion programming technique in Java programming language. This exercise are solely for practicing. In order to understand Recursion properly you must try to think recursive e.g. look tree as collection of small tree, look string as collecting of small String, look staircases as collection of small staircase etc. Any way try to solve following programming exercise by using Recursion programming technique for better understanding
1. Print Fibonacci series in Java for a given number, see here for solution
2. Calculate factorial of a give number in Java, see here for solution of this programming exercise
3. Calculate power of a give number in java
4. Reverse a String using recursion in Java, see here for solution
5. Find out if there is a loop in linked list using recursion
This was simple introduction of Recursion programming technique to Java programmer with most basic examples. There are lot more to learn on Recursion including different types of recursion e.g. tail recursion, improving performance of recursive algorithm using memoization or caching pre calculated result etc. Its also recommended not to use recursive method in production code instead write iterative code to avoid any stackoverflow error.
Other programming tutorials from Javarevisited Blog | <urn:uuid:4ab8b3a8-9fae-4800-a420-e5e8b236c6e1> | 4.21875 | 773 | Tutorial | Software Dev. | 37.17752 |
Sometimes learning math might give you some fun, especially for someone who love numbers. Math also can teach you small tricks like mind-reading.
Try to think of a number between 1 and 9. Then multiply it by number 9 and add the digits of this new number together. Then subtract 4 from your answer and you will be left with a single digit number. Next, try convert this number to a letter. If you number is 1 it becomes A, 2 becomes B, 3 becomes C, 4 becomes D, 5 becomes E, 6 becomes F and so on. Now think of a type of animal that begins with your chosen letter and imagine that animal as strongly as you can. Hold it vividly in the forefront of your mind. It’s an Elephant.
This is a very simple trick, and you ought to be able to work out how I was able to guess the animal of your choice with such a high likelihood of success. There is a little mathematics involved, in that some simple properties of numbers are exploited, but there is also a psychological and even zoological ingredient as well.
There is another trick of this general sort that involves only the properties of numbers. It uses the number 1089, which you may well already have listed among your favorites. It was the year in which there was an earthquake in England, it is also a perfect square (33 x 33); but its most striking property is the following.
Pick any 3 digit number in which the digits are all different (like 153), make a second number by reversing the order of the 3 digits (become 351). Now take the smaller of the two numbers away from the larger (351 – 153 = 198, if your number has only 2 digits, like 23 then put a 0 (zero) in front, so 023). Now add this to the number you get by writing it backwards (so 198 + 891 = 1089). Whatever number you chose at the outset, you will end up with 1089 after this sequence of operations. | <urn:uuid:f2a3ceb3-9ec8-4ad8-915f-4d3451ff6180> | 3.421875 | 410 | Personal Blog | Science & Tech. | 69.123784 |
Oct. 26, 2000
NASA's NEAR Shoemaker spacecraft swooped 5 kilometers above the surface of 433 Eros on Oct 26th, marking its closest-ever approach to the tumbling space rock. Scientists hope the flyby will uncover clues about extra boulders and missing craters on the near-Earth asteroid.
Sept. 1, 2000
This morning a half-kilometer wide asteroid is zooming past Earth barely 12 times farther from our planet than the Moon. In cosmic terms, it's a near miss, but there is absolutely no danger of a collision. Instead, the encounter offers astronomers an unusually good opportunity to study a near-Earth asteroid.
July 14, 2000
This weekend the Moon, the Sun and the Earth will align for the longest total Lunar Eclipsein 140 years. The best places to see the event are in and around the Pacific Ocean, including Hawaii and Australia. Observers along the west coast of North America will be able to see a partial eclipse just before The Moonsets on Sunday morning.
June 30, 2000
The Earth will reach its greatest distance from the Sun this year on the 4th of July, but don't expect a break from the heat of northern summer. This article discusses Earth's slightly elliptical orbit and the effects (some negligible, some substantial) that lopsided orbits have on planets around the solar system.
Aug. 6, 2008
July 31, 2000
Comet LINEAR continued to blow itself apart this weekend as astronomers around the world monitored the action. The comet is still bright enough to see through amateur telescopes, but it's fading fast. This story compares the breakup of comet LINEAR with another famous fragmented comet, Shoemaker-Levy 9, that collided with Jupitersix years ago.
Sept. 13, 2000
The sunspot number has been remarkably low this week, but that didn't stop the Sun from unleashing an unusual type of solar flare yesterday. As a result of the explosion, a coronal mass ejection is heading toward our planet. It could trigger an auroral display when it hits Earth's magnetosphere around Sept. 14.
April 26, 2000
X-rays scattered by interstellar dust grains have led scientists to develop a new way of estimating distances to cosmic objects using data from NASA's Chandra X-ray Observatory. The new technique could help astronomers in their quest to understand the size and age of the universe.
Oct. 4, 2000
NASA-funded scientists are experimenting with miniature magnetospheres as an innovative means of space transportation. If the group succeeds, next-generation spacecraft may come equipped with fuel-efficient magnetic bubbles that speed their occupants from planet to planet and ward off the worst solar flares.
March 9, 2000
Predicting solar activity can be tricky but now Space Weatherforecasters have a way to predict the future. Researchers using the orbiting Solar and Heliospheric Observatory have developed a new method to see what's on the far side of our star before it rotates over the Sun's limb to face Earth. | <urn:uuid:02174ca5-7069-4bc7-a9e9-5e66b165531b> | 3.453125 | 623 | Content Listing | Science & Tech. | 53.322573 |
In this chapter we’ll be taking a look at sequences and
(infinite) series. Actually, this chapter
will deal almost exclusively with series. However, we also need to understand
some of the basics of sequences in order to properly deal with series. We will therefore, spend a little time on
sequences as well.
Series is one of those topics that many students don’t find all that
useful. To be honest, many students will never see series outside of their calculus class.
However, series do play an important role in the field of ordinary differential equations and
without series large portions of the field of partial differential equations would not be
In other words, series is an important topic even if you
won’t ever see any of the applications.
Most of the applications are beyond the scope of most Calculus courses
and tend to occur in classes that many students don’t take. So, as you go through this material keep in
mind that these do have applications even if we won’t really be covering many
of them in this class.
Here is a list of topics in this chapter.
We will start the chapter off with a brief
discussion of sequences. This section
will focus on the basic terminology and convergence of sequences
More on Sequences Here we will take a quick look about monotonic
and bounded sequences.
Basics In this section we will discuss some of the
basics of infinite series.
Series Convergence/Divergence Most of this chapter will be about the
convergence/divergence of a series so we will give the basic ideas and
definitions in this section.
Series Special Series We will look at the Geometric Series,
Telescoping Series, and Harmonic Series in this section.
Integral Test Using the Integral Test to determine if a
series converges or diverges.
Comparison Test/Limit Comparison Test Using the Comparison Test and Limit Comparison
Tests to determine if a series converges or diverges.
Alternating Series Test Using the Alternating Series Test to determine
if a series converges or diverges.
Absolute Convergence A brief discussion on absolute convergence and
how it differs from convergence.
Test Using the Ratio Test to determine if a series
converges or diverges.
Test Using the Root Test to determine if a series
converges or diverges.
Strategy for Series A set of general guidelines to use when
deciding which test to use.
Estimating the Value of a Series Here we will look at estimating the value of
an infinite series.
Power Series An introduction to power series and some of
the basic concepts.
Power Series and Functions In this section we will start looking at how
to find a power series representation of a function.
Taylor Series Here we will discuss how to find the
Taylor/Maclaurin Series for a function.
Applications of Series In this section we will take a quick look at a
couple of applications of series.
Binomial Series A brief look at binomial series. | <urn:uuid:4bc42b49-fd6e-401b-8999-a565f2562b41> | 4.1875 | 651 | Content Listing | Science & Tech. | 44.216974 |
Coastal & Marine Geology InfoBank
Our Mapping Systems
The USGS and Science Education
USGS Fact Sheets
ground penetrating radar
Comment: 04:14 - 05:19 (01:05)
Source: Annenberg/CPB Resources - Earth Revealed - 21. Groundwater
Keywords: "Earth's surface", sandstone, sediment, soil, water, groundwater, "pore space", porosity, aquifer
Our transcription: Other land surfaces consist of less solid aggregates of materials, such as sandstone, or sediment, or ordinary soil.
Here the water can work its way down through the gaps or intersticies between the individual bits of matter.
Although permeable rock will let the water through, it still may not be a good source of groundwater.
For a rock to contain abundant groundwater it also needs to have a lot of open spaces or pores.
The capacity to transmit water is called "permeability," and the capacity to store water is called "porosity."
The ideal rock material for the accumulation of groundwater is both porous and permeable.
This kind of material is known as an "aquifer", from the Latin for "water bearing."
Sandstone is a good example of an aquifer.
Geology School Keywords | <urn:uuid:0fb9370b-38a6-4e5b-98cb-bc09350a7a6a> | 3.90625 | 269 | Knowledge Article | Science & Tech. | 41.551688 |
Main page Blog Astronomy news Astronomy facts Astronomy sites
A Galactic Collision in Action
This new image shows the results of a vast collision between two galaxies. This strange object is known as NGC 7252, or Arp 226, and has the odd nickname Atoms-for-Peace. The picture was taken by the Wide Field Imager on the MPG/ESO 2.2-meter telescope at ESO's La Silla Observatory in Chile. It is a combination of exposures taken through blue and red filters, for a total exposure time of more than four hours. The field of view is about 18 arcminutes across.
A galaxy collision is one of the most important processes influencing how our Universe evolves, and studying them reveals important clues about galactic ancestry. Luckily, such collisions are long drawn-out events that last hundreds of millions of years, giving astronomers plenty of time to observe them.
This picture of Atoms-for-Peace represents a snapshot of its collision, with the chaos in full flow, set against a rich backdrop of distant galaxies. The results of the intricate interplay of gravitational interactions can be seen in the shapes of the tails made from streams of stars, gas and dust. The image also shows the incredible shells that formed as gas and stars were ripped out of the colliding galaxies and wrapped around their joint core. While much material was ejected into space, other regions were compressed, sparking bursts of star formation. The result was the formation of hundreds of very young star clusters, around 50 to 500 million years old, which are speculated to be the progenitors of globular clusters.
Atoms-for-Peace appears to be a harbinger of our own galaxy's fate. Astronomers predict that in three or four billion years the Milky Way and the Andromeda Galaxy will collide, much as has happened with Atoms-for-Peace. But don't panic: the distance between stars within a galaxy is vast, so it is unlikely that our Sun will end up in a head-on collision with another star during the merger.
The object's curious nickname has an interesting history. In December 1953, President Eisenhower gave a speech that was dubbed Atoms for Peace. The theme was promoting nuclear power for peaceful purposes a especially hot topic at the time. This speech and the associated conference made waves in the scientific community and beyond to such an extent that NGC 7252 was named the Atoms-for-Peace galaxy. In a number of ways, this is oddly appropriate: the curious shape that we can see is the result of two galaxies merging to produce something new and grand, a little like what occurs in nuclear fusion. Furthermore, the giant loops resemble a textbook diagram of electrons orbiting an atomic nucleus.
Posted by: Sean Source | <urn:uuid:8f2e2a81-9371-425c-b36a-a5639d7071ee> | 3.6875 | 561 | Personal Blog | Science & Tech. | 46.412413 |
Deriving the Ramsey Numbers
School: ALBUQUERQUE ACADEMY
Area of Science: Mathematics -- Graph Theory
Ramsey numbers are part of a field of mathematics called graph theory. km (where m is any positive integer) is defined as a graph that contains m points, also known as nodes, with all possible line segments in between each point.
Ramsey numbers show the minimum number n of all the different segment coloring schemes using only blue and red for the graph kn that does not contain a graph of kr or kb the entirely red or blue graph, respectively, for that n. This is abbreviated in the notation: K(r, b)=n where K is a Ramsey function, and r and b are independent variables as described above. n is the result of the function.
Our project intends to find n for r and b values that we choose and to improve upon existing limits for these variables. We plan to use Java as our main programming language. We plan to test a possible n by testing all possible colorings of kn and see if there is a coloring that has no red kr and no blue kb. We will use the computer to test these n and solve for certain pairs of r and b.
If we are unable to get a definitive value for n, then we will have instead found an upper bound for the Ramsey number at that point, which is what mathematicians currently have for many points. We also plan to improve the time it takes to run the program by certain techniques designed at limiting the possibilities and that reduce complexity by powers of two as we go on to solve more and more complex Ramsey numbers. In the end, we hope to have a program that will help mathematicians solve for some of the more complex Ramsey Numbers.
Dr. David Metzler
Sponsoring Teacher: Jim Mims
Mail the entire Team | <urn:uuid:e6943b18-6ad5-466b-90b6-1ce4b1a4882a> | 4.21875 | 382 | Academic Writing | Science & Tech. | 55.267635 |
What does it mean? Part 2
Dr. Bob and I are taking turns explaining the implications of the recent Nature paper. In the first post, Dr. Bob discussed two very important questions: "How big?" and How massive?" In this post, I'm going to cover another big-picture topic: the orientation of the disk.
I'll start with what we thought happened. Most of the literature drew the disk as something that more-or-less bisected the F-star, following Kemp's 1986 drawing (note, a similar drawing was also published in the 1985 epsilon Aurigae conference proceedings):
We now know that this picture was very, very close to being correct. Considering that the parameters in the paper (see pg. L13 of Kemp's work above) all scale to the radius of the F-star, Kemp et. al. did a good job of estimating the parameters for the disk just from polarimetric data and modelling! The only large change is the point of first contact which was assumed to be in the northern hemisphere. Instead, it is clearly in the southern hemisphere.
A big question, and a good team project, might be to investigate whether or not Kemp's polarization data is consistent with the current findings. If it isn't, this could mean the disk or orbit has precessed significantly in the 27 year since the last eclipse! (if you are interested in doing this, please contact me).
For those of you who do not have access to Nature, I have extracted a few parameters in the table below (these are rounded, the parameters in the paper are more precise):
|Disk Radius*||3.8 AU|
|Disk Thickness**||0.76 AU|
|Central Opening (?)||0.5 AU?|
|Disk Inclination**||85 +/- 5 or 95 +/- 5 degrees|
|Disk Tilt**||< 20 degrees|
|Orbital Inclination (?)||88 +/- 2|
* From "Infrared images of the transiting disk in the ε Aurigae system" by Kloppenborg et. al. 2010 April, Nature
** From "Taming the Invisible Monster: System Parameter Constraints for Epsilon Aurigae from the Far-Ultraviolet to the Mid-Infrared" by Hoard et. al. 2010 ApJ.
(?) Unknown parameter, this is a best-guess.
Try out these values in the new light curve generator and see how close you can get the light curve to match with previously observed values! Do the parameters work? If not, what do you think is wrong? Do we have the entire picture or is there more to discover? | <urn:uuid:9bee31ba-4e71-42a5-b005-beeb3ed88932> | 2.796875 | 560 | Personal Blog | Science & Tech. | 67.513261 |
Assembly and testing of NASA's Mars Science Laboratory spacecraft is far enough along that the mission's rover, Curiosity, looks very much as it will when it is investigating Mars.
[b]Above[/b]: [i]This image was taken April 4, 2011, inside the Spacecraft Assembly Facility at NASA's Jet Propulsion Laboratory, Pasadena, Calif. Aside from the setting and its placement on ground support equipment, the rover appears much as it will after landing on Mars in August 2012.[/i]
Testing continues this month at NASA's Jet Propulsion Laboratory, Pasadena, Calif., on the rover and other components of the spacecraft that will deliver Curiosity to Mars. In May and June, the spacecraft will be shipped to NASA Kennedy Space Center, Fla., where preparations will continue for launch in the period between Nov. 25 and Dec. 18, 2011.
The mission will use Curiosity to study one of the most intriguing places on Mars -- still to be selected from among four finalist landing-site candidates. It will study whether a selected area of Mars has offered environmental conditions favorable for microbial life and for preserving evidence about whether Martian life has existed. | <urn:uuid:5ca854f0-7d47-416f-94a8-dd64610aa1b3> | 3 | 232 | Comment Section | Science & Tech. | 41.949886 |