text large_stringlengths 148 17k | id large_stringlengths 47 47 | score float64 2.69 5.31 | tokens int64 36 7.79k | format large_stringclasses 13 values | topic large_stringclasses 2 values | fr_ease float64 20 157 |
|---|---|---|---|---|---|---|
Earth-Moon Center of Gravity
Name: David C.
Where is the center of gravity for the earth-moon system, and what effect does this have on the orbit of the earth around the sun?
The center of gravity of the earth-moon system is inside the earth on the
line between the centers of the two bodies. This center of mass proceeds
around the sun in an elliptical orbit.
Richard E. Barrans Jr., Ph.D.
PG Research Foundation, Darien, Illinois
These web sites offer some insight into what appears to be a simple question.
In the crudest approximation, the center of gravity of the moon/earth system is
about 4,700 km from the center of the earth, or about 75% of the earth's radius.
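As a quick check (my arithmetic, using standard textbook values for the mean earth-moon distance and the moon/earth mass ratio), the barycenter distance from the earth's center follows from the two-body lever rule:

$$ r = d \cdot \frac{m_{\text{Moon}}}{m_{\text{Earth}} + m_{\text{Moon}}} \approx 384{,}400\ \text{km} \times \frac{0.0123}{1.0123} \approx 4{,}670\ \text{km}, $$

which is roughly 73% of the earth's 6,371 km radius, placing the center of mass well inside the earth.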
However, things are much more complex than that.
The sun cannot be neglected; in fact it has a large effect on tides. So it
is the sun/moon/earth system that must be taken into account. In addition,
there are at least ten more drags, wobbles, and various wiggles caused by many
sources, including the planets, the tidal drag of the oceans, and the fact
that the center of the earth's rotation changes. It is all very complicated, and
still there are small earth/sun/moon motions for which there is no good
explanation as yet.
Update: June 2012 | <urn:uuid:f6c2eed9-fd53-4d4f-a8c2-5dc5b6dc568e> | 3.875 | 298 | Q&A Forum | Science & Tech. | 69.184502 |
As a prelude to a new book, Nigel Calder (who was the editor of New Scientist for four years in the 1960s) has written an op-ed for the Times (UK) basically recapitulating the hype over the Svensmark cosmic ray/climate experiments we reported on a couple of months ago (see Taking Cosmic Rays for a spin). At the time we pointed out that while the experiments were potentially of interest, they are a long way from actually demonstrating an influence of cosmic rays on the real-world climate, and in no way justify the hyperbole that Svensmark and colleagues put into their press releases and more ‘popular’ pieces. Even if the evidence for solar forcing were legitimate, any bizarre calculus that takes evidence for solar forcing of climate as evidence against greenhouse gases for current climate change is simply wrong. Whether cosmic rays are correlated with climate or not, they have been regularly measured by the neutron monitor at Climax Station (Colorado) since 1953 and show no long-term trend. No trend = no explanation for current changes.
Technical Note: We have changed the contact email for the blog to reduce the amount of unsolicited email. If you want to contact us at the blog, please use contact-at-realclimate.org. | <urn:uuid:0dbd3977-218a-4bf4-9ed4-1912bf9e464f> | 2.765625 | 255 | Comment Section | Science & Tech. | 33.718403 |
It is estimated that two-thirds of sulfur dioxide (SO2) air pollution in North America comes from coal power plants. In a recent scientific article published in Geophysical Research Letters, a team of scientists has confirmed that SO2 levels in the vicinity of U.S. coal power plants have fallen by nearly 50% since 2005.
This finding, using satellite observations, confirms ground-based measurements of declining SO2 levels. In many parts of the world, ground-based monitoring does not exist or is not extensive; therefore, the Ozone Monitoring Instrument (OMI) on the Aura satellite could potentially measure levels of harmful emissions in regions of the world where reliable ground monitoring is unavailable.
Previously, space-based SO2 monitoring was limited to plumes from volcanic eruptions and detecting anthropogenic emissions from large source regions, such as in China. A new spatial filtration technique allows the detection of individual pollution sources in Canada and the U.S.
"What we’re seeing in these satellite observations represents a major environmental accomplishment," said Bryan Bloomer, an Environmental Protection Agency scientist familiar with the new satellite observations. "This is a huge success story for the EPA and the Clean Air Interstate Rule," he said.
I'm assuming that you aren't at home watching dense legal proceedings related to the regulation of molecules in our atmosphere. So here's the timeline of a recent important story.
OK, you're up to date. Unfortunately the media is framing this issue in military terms. "The coming battle." "EPA and Republicans spar over climate change." "EPA blocks Republican rocket launcher with sweet ion science shield." Yeah, I made that last one up. But we don't need battles, we need conversations and action.
My point is that this issue is a great opportunity to have a discussion about how science is used in our public policy decisions. Do you think the EPA is too focused on the scientific findings related to climate change? Are they ignoring the economic impacts? Are you frustrated with some of the Republican views that outright deny the scientific findings on what's causing climate disruption? Are they ignoring real facts? Could this issue be alleviated by better science education?
Chemicals known as dispersants are now being used against the ever-increasing amount of oil leaking out of a deepwater well head. Dispersants help break the larger masses of oil into smaller droplets which will mix into the water. These dispersants are being sprayed onto the surface slicks and are also being injected directly into the oil flowing out almost a mile under the surface.
Officials said that in two tests, that method appeared to be keeping crude oil from rising to the surface. They said that the procedure could be used more frequently once evaluations of its impact on the deepwater ecology were completed. (New York Times)
Dispersant chemicals contain solvents, which help them dissolve into and throughout the oil mass, and a surfactant, which acts like soap. Surfactant molecules have one end that sticks to water and one end that sticks to oil. This, along with wave action, breaks masses of oil into droplets small enough that they stay suspended under water, rather than floating back to the surface.
Such cleanup products can only be used by public authorities responding to an emergency if they are individually listed on the National Contingency Plan Product Schedule.
Many of the first dispersants used in the 70s and 80s did show high toxicity to marine organisms. However, today there is a wealth of laboratory data indicating that modern dispersants and oil/dispersant mixtures exhibit relatively low toxicity to marine organisms.
On occasions the benefit gained by using dispersants to protect coastal amenities, sea birds and intertidal marine life may far outweigh disadvantages such as the potential for temporary tainting of fish stocks. (ITOPF)
Here is a link to one product on their list: Oil Gone Easy Marine S200.
According to National Geographic News, "Dispersants only alter the destination of the toxic compounds in the oil." Moving the oil off the surface protects the birds and animals along the shoreline but will increase the oil exposure for fish, shrimp, corals, and oysters. I hate to mention what hurricanes will do to this situation.
My mom just sent me an e-mail. Why's that worthy of a Buzz post? Well, it just so happens that she's on board the OSV Bold, the US Environmental Protection Agency's only ocean and coastal monitoring ship. (It's crawling along the coast of Maine right now.) From the boat, scientists are able to sample the water column, ocean bottom, and sea life to get a sense of how the ocean is being impacted by human activities, and how we can better manage what goes into it. If you're curious, you can follow the adventures of the OSV Bold on Twitter, or read the daily observations log. (There's a photo of Moms in the batch posted for day 4, but her face isn't visible. Just trust me: she's the beautiful one on the Bold. Oh, and lest you think this is a completely frivolous and nepotistic post, check it: www.whitehouse.gov picked up the story, too.)
Awesome Fourth of July fireworks can be viewed from our Science Museum of Minnesota each year during the Taste of Minnesota celebration. Fireworks are often shot over water to minimize fire danger. Ever wonder what kind of chemicals rain down into the Mississippi River during a fireworks display?
Part of learning chemistry is to understand what is called the flame test. Unknown chemical compounds, when heated in a flame, will generate different colors. Lithium yields red, copper gives blue or blue-green, sodium gives yellow, and aluminum and titanium produce white.
Chemists are attempting to make fireworks less harmful to the environment.
Perchlorates, which are used to help the fireworks’ fuel burn, were named in an EPA health advisory earlier this year (which recommended a maximum of 15 micrograms per liter of drinking water), as they have been linked to disruption of the thyroid gland. (Scientific American)
A 2007 U.S. Environmental Protection Agency (EPA) study found that perchlorates spiked by up to 1000 times normal after the fireworks display and took 20 to 80 days to return to normal depending on surface temperatures.
Live Science explains some of the strange ingredients in fireworks. For example, chemists add bismuth trioxide to the flash powder to get that crackling sound, dubbed "dragon eggs." Ear-splitting whistles take four ingredients, including a food preservative and Vaseline.
Tubes, hollow spheres, and paper wrappings work as barriers to compartmentalize the effects. More complicated shells are divided into even more sections to control the timing of secondary explosions.
A recently released report warns that the Great Lakes have been invaded by foreign aquatic species, resulting in ecological and environmental damage amounting to hundreds of millions of dollars.
The National Center for Environmental Assessment issued the warning in a study released January 5, 2009. It identified 30 nonnative species that pose a medium or high risk of reaching the lakes and 28 others that already have a foothold and could disperse widely. The findings support the need for detection and monitoring efforts at those ports believed to be at greatest risk.
One preventive measure that works 99% of the time is to flush out the ballast tanks with salty seawater. This usually kills any foreign marine life hitchhiking a ride in the ballast tank water. Both Canada and the United States have made this a requirement for almost two decades now. Both nations also recently have ordered ships to rinse empty tanks with seawater in hopes of killing organisms lurking in residual pools on the bottom.
Nanotechnology research is kicking into full gear the world over, but almost everyone agrees that we simply don't know how to properly regulate its use. What will particles billions of times smaller than a meter do to our bodies and the environment? Well... they might cure our cancers and clean up our water. But they also might penetrate our blood-brain barriers and stick in our gray matter, or cause ecosystems to decline due to tiny, tiny pollutants.
Well, at least our government is beginning to look at this stuff. The EPA announced on Thursday that it will be regulating all use of nano-silver in US commercial products. If you make odor-eating socks with nano-silver, you now have to make sure that it won't get out into the environment and cause harm.
The city of Berkeley, California is also looking at creating the first local government nanotech regulations. This isn't surprising for two reasons.
I will be watching this closely and hope that the concerned community members and the scientists can come to some middle ground where research isn't totally crippled by massive regulation but where unknown safety risks are considered.
Fun times in the nanoworld. | <urn:uuid:6fe0f36a-581a-48ef-8e44-26c47745474d> | 3.046875 | 1,854 | Personal Blog | Science & Tech. | 43.610655 |
Astronomers have switched on the first 42 radio dishes of the Allen Telescope Array and are collecting data -- both for conventional radio astronomy, and the search for extraterrestrial life. A total of 350 dishes are planned for the array, which will allow astronomers to image large portions of the sky in one exposure. The operators of the telescope say its design will allow rapid astronomical observations and analysis.
The project, built in an arid valley near the town of Hat Creek, just north of Lassen Volcanic National Park in northern California, is funded in large part by Paul Allen of Microsoft fame. The astronomers plan to have the full 350-dish array operating within three years. In this segment, Ira talks with one of the project leaders about the telescope and what it will be doing in the years ahead.
Produced by Karin Vergoth | <urn:uuid:8fd96fdf-908d-4c14-9a87-8c1862c4fcab> | 3.1875 | 173 | Truncated | Science & Tech. | 37.95 |
Regions or areas bounded by drainage divides and occupied by drainage systems; specifically the tract of country that gathers water originating as precipitation and contributes it to a particular stream channel or system of channels, or to a lake, reservoir, or other body of water. [Glossary of Geology, 4th ed.]
Description of studies conducted to evaluate the surface water, ground water, water interactions, and water quality of Methow River Basin in Washington. With links to related science topics, datasets, maps, project summaries, and news.
Primary homepage for the National Water Quality Assessment (NAWQA) Program studying water quality in river, aquifer and coastal water basins throughout the nation. Links to reports, data, models, maps and national synthesis studies.
The Great and Little Miami River Basins form a National Water Quality Assessment (NAWQA) Program study unit for studying the status, trends, and changes affecting the nation's water quality. Site links to data, publications, maps, and results.
The Puget Sound Basin is a National Water Quality Assessment (NAWQA) Program study unit for studying the status, trends, and changes affecting the nation's water quality. Site links to data, publications, and results.
Program to compile data from the National Water-Quality Assessment Program study units to study national trends with links to sediment coring sites, video on Salt Lake City, study units identification, and publications.
Data warehouse for national water quality program with links to chemical, biological, and physical data for water, sediment and animal tissues, nutrient, pesticide, and VOC levels, streamflow, and ground water levels from national study units.
Reports concentration of organic compounds here, to serve as a baseline against which future measurements can be compared and to provide a general assessment of the quality of local water treatment efforts. | <urn:uuid:2cb3cecb-d6d6-4dde-93ab-6c4924c10540> | 3.171875 | 377 | Content Listing | Science & Tech. | 20.632707 |
- Arnaud Le Hors, W3C
- Robert S. Sutor, IBM Research (for DOM Level 1)
Several of the following term definitions have been borrowed or
modified from similar definitions in other W3C or standards documents.
See the links within the definitions for more information.
- 16-bit unit
- The base unit of a DOMString. This indicates that indexing on a DOMString occurs in units of 16 bits. This must not be misunderstood to mean that a DOMString can store arbitrary 16-bit units. A DOMString is a character string encoded in UTF-16; this means that the restrictions of UTF-16 as well as the other relevant restrictions on character strings must be maintained. A single character, for example in the form of a numeric character reference, may correspond to one or two 16-bit units.
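For illustration (a TypeScript sketch of my own, not part of the original glossary), a character outside the Basic Multilingual Plane occupies two 16-bit units:

const s: string = "A\u{1D11E}";            // "A" plus MUSICAL SYMBOL G CLEF (U+1D11E)
console.log(s.length);                     // 3: length counts 16-bit units, not characters
console.log(s.charCodeAt(1).toString(16)); // "d834": the clef's high surrogate
console.log([...s].length);                // 2: iterating by code points yields characters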
- ancestor
- An ancestor node of any node A is any node above A in a tree model, where "above" means "toward the root."
- API
- An API is an Application Programming Interface, a set of functions or methods used to access some functionality.
- anonymous type name
- An anonymous type name is an implementation-defined, globally unique qualified name provided by the processor for every anonymous type declared in the schema.
- child
- A child is an immediate descendant node of a node.
- client application
- A [client] application is any software that uses the
Document Object Model programming interfaces provided by the
hosting implementation to accomplish useful work. Some
examples of client applications are scripts within an HTML
or XML document.
- COM
- COM is Microsoft's Component Object Model [COM], a technology for building applications from binary software components.
- convenience method
- A convenience method is an operation on an object that could be accomplished by a program consisting of more basic operations on the object. Convenience methods are usually provided to make the API easier and simpler to use or to allow specific programs to create more optimized implementations for common operations. A similar definition holds for a convenience property.
- data model
- A data model is a collection of descriptions of data
structures and their contained fields, together with the operations
or functions that manipulate them.
- descendant
- A descendant node of any node A is any node below A in a tree model, where "below" means "away from the root."
- document element
- There is only one document element in a document. This element node is a child of the Document node. See Well-Formed XML Documents in XML [XML 1.0].
- document order
- There is an ordering, document order, defined on all
the nodes in the document corresponding to the order in which the first
character of the XML representation of each node occurs in the XML
representation of the document after expansion of general entities. Thus,
the document element node
will be the first node. Element nodes occur before their children. Thus,
document order orders element nodes in order of the occurrence of their
start-tag in the XML (after expansion of entities). The attribute nodes
of an element occur after the element and before its children. The
relative order of attribute nodes is implementation-dependent.
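A small illustration (my own example, not from the specification): in the fragment below, document order is the a element, then its attributes x and y (whose relative order is implementation-dependent), then b, then the text, then c.

<a x="1" y="2"><b/>some text<c/></a>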
- ECMAScript
- The programming language defined by the ECMA-262 standard [ECMAScript]. As stated in the standard, the ECMAScript term "property" is used in the same sense as the IDL term "attribute".
- element
- Each document contains one or more elements, the boundaries of which are either delimited by start-tags and end-tags, or, for empty elements, by an empty-element tag. Each element has a type, identified by name, and may have a set of attributes. Each attribute has a name and a value. See Logical Structures in XML [XML 1.0].
- information item
- An information item is an abstract representation of some
component of an XML document. See the [XML Information set]
- logically-adjacent text nodes
- Logically-adjacent text nodes are Text or CDATASection nodes that may be visited sequentially in document order without entering, exiting, or passing over Element, Comment, or ProcessingInstruction nodes.
- hosting implementation
- A [hosting] implementation is a software module that provides an implementation of the DOM interfaces so that a client application can use them. Some examples of hosting implementations are browsers, editors and document repositories.
- HTML
- The HyperText Markup Language (HTML) is a simple markup language used to create hypertext documents that are portable from one platform to another. HTML documents are SGML documents with generic semantics that are appropriate for representing information from a wide range of applications. [HTML 4.01]
- inheritance
- In object-oriented programming, the ability to create new classes (or interfaces) that contain all the methods and properties of another class (or interface), plus additional methods and properties. If class (or interface) D inherits from class (or interface) B, then D is said to be derived from B. B is said to be a base class (or interface) for D. Some programming languages allow for multiple inheritance, that is, inheritance from more than one class or interface.
- interface
- An interface is a declaration of a set of methods with no information given about their implementation. In object systems that support interfaces and inheritance, interfaces can usually inherit from one another.
- language binding
- A programming language binding for an IDL specification is an implementation of the interfaces in the specification for the given language. For example, a Java language binding for the Document Object Model IDL specification would implement the concrete Java classes that provide the functionality exposed by the interfaces.
- local name
- A local name is the local part of a qualified name. This is called the local part in Namespaces in XML [XML Namespaces].
- method
- A method is an operation or function that is associated with an object and is allowed to manipulate the object's data.
- model
- A model is the actual data representation for the information at hand. Examples are the structural model and the style model representing the parse structure and the style information associated with a document. The model might be a tree, or a directed graph, or something else.
- namespace prefix
- A namespace prefix is a string that associates
an element or attribute name with a namespace URI in
XML. See namespace
prefix in Namespaces in XML [XML Namespaces].
- namespace URI
- A namespace URI is a URI that identifies an XML namespace. This is called the namespace name in Namespaces in XML [XML Namespaces]. See also sections 1.3.2 "URIs" and 1.3.3 "Namespaces" regarding URIs and namespace URIs handling and comparison in the DOM APIs.
- namespace well-formed
- A node is a namespace well-formed XML node if it is a well-formed node and follows the productions and namespace constraints. If [XML 1.0] is used, the constraints are defined in [XML Namespaces]. If [XML 1.1] is used, the constraints are defined in [XML Namespaces 1.1].
- object model
- An object model
is a collection of
descriptions of classes or interfaces,
together with their member data, member functions,
and class-static operations.
- parent
- A parent is an immediate ancestor node of a node.
- partially valid
- A node in a DOM tree is partially valid if it is well-formed (this part is for comments and processing instructions) and its immediate children are those expected by the content model. The node may be missing trailing required children yet still be considered partially valid.
- qualified name
- A qualified name is the name of an element or
attribute defined as the concatenation of a local name
(as defined in this specification), optionally preceded by a
namespace prefix and colon character. See
Qualified Names in
Namespaces in XML [XML Namespaces].
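A compact illustration (my own example): in the element below, the qualified name is x:item, the namespace prefix is x, the local name is item, and the namespace URI is http://example.org/ns.

<x:item xmlns:x="http://example.org/ns"/>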
- read only node
- A read only node is a node that is immutable. This means its list of children, its content, and its attributes, when it is an element, cannot be changed in any way. However, a read only node can possibly be moved, when it is not itself contained in a read only node.
- root node
- The root node is a node that is not a child of any other node. All other nodes are children or other descendants of the root node.
- schema
- A schema defines a set of structural and value constraints applicable to XML documents. Schemas can be expressed in schema languages, such as DTD, XML Schema, etc.
- sibling
- Two nodes are siblings if they have the same parent node.
- string comparison
- When string matching is required, it is to occur as though the comparison was between two sequences of code points in ISO 10646.
- token
- An information item such as an XML Name which has been tokenized.
- tokenized
- The description given to various information items (for example, attribute values of various types, but not including the StringType CDATA) after having been processed by the XML processor. The process includes stripping leading and trailing white space, and replacing multiple space characters by one. See the definition of tokenized in XML [XML 1.0].
- well-formed
- A node is a well-formed XML node if its serialized form, without doing any transformation during its serialization, matches its respective production in [XML 1.0] or [XML 1.1] (depending on the XML version in use) with all well-formedness constraints related to that production, and if the entities which are referenced within the node are also well-formed. If namespaces for XML are in use, the node must also be namespace well-formed.
- XML
- Extensible Markup Language (XML) is an extremely simple dialect of SGML which is completely described in this document. The goal is to enable generic SGML to be served, received, and processed on the Web in the way that is now possible with HTML. XML has been designed for ease of implementation and for interoperability with
both SGML and HTML. [XML 1.0] | <urn:uuid:ec825b0c-193f-40e6-8e2a-aa7702ba5d7f> | 3.65625 | 2,135 | Structured Data | Software Dev. | 37.05627 |
Just a quickie on what will probably become a fairly large story:
A difference in the way British and American ships measured the temperature of the ocean during the 1940s may explain why the world appeared to undergo a period of sudden cooling immediately after the Second World War.
The scientists point out that the British measurements were taken by throwing canvas buckets over the side and hauling water up to the deck for temperatures to be measured by immersing a thermometer for several minutes, which would result in a slightly cooler record because of evaporation from the bucket.
This finding actually makes the AGW story go more smoothly:
Professor Jones said that the study lends support to the idea that a period of global cooling occurred later during the mid-twentieth century as a result of sulphate aerosols being released during the 1950s with the rise of industrial output. These sulphates tended to cut sunlight, counteracting global warming caused by rising carbon dioxide.
"This finding supports the sulphates argument, because it was bit hard to explain how they could cause the period of cooling from 1945, when industrial production was still relatively low," Professor Jones said.
Although it's perhaps a bit of an embarrassment.
And the weird thing is, Steve McIntyre seems to have got to this one first. Too bad Steve grinds out blog posts rather than writing up a real paper now and again.
Go through the links for details. The James Annan post (linked through "a bit of...") is especially good. | <urn:uuid:db86bd2a-1385-4036-9b01-d43c6cbef7b1> | 3.140625 | 306 | Comment Section | Science & Tech. | 46.447556 |
Termites And Global Warming
It is an established fact that termites cause more damage in dollar terms worldwide than the combined ravages of fire, flood, earthquakes, tornadoes and hurricanes.
Having come to terms with those statistics, we now have to contemplate the notion that termites are also responsible for 18% of the world’s methane output.
Many people mistakenly believe that methane (CH4) causes damage to the globe’s Ozone Layer, but the problem is even worse, because methane is responsible for Global Warming, and that is a far more complex and serious problem.
It’s believed that around 38 % of the greenhouse effect is caused by methane, putting it second on the list of offending gases behind carbon dioxide. Methane breaks down in the atmosphere to form carbon dioxide, ozone, and water, all of which absorb heat. The temperature of the atmosphere rises, the ice caps melt, and before you know it, you’re pumping the Pacific out of your cellar.
Termites release an estimated 80 billion kilograms of “Greenhouse gas” per year.
Consider that there are an estimated 240 quadrillion termites scurrying about the planet (that’s 60 million of those insect pests for every man, woman and child), and that the billions of tiny, burrowing Isoptera are “letting rip” every second of every day.
There are more than 2000 different species of termites and the amounts of methane produced varies considerably between species, with some producing no methane at all. Methane is produced in termite guts, by symbiotic bacteria and protozoa, during food digestion.
The primary impact of humans on termite methane is reduction of emissions through termite habitat destruction. Many of the most important methane producing termite species are found in tropical forest areas, huge swathes of which are destroyed each year for logging, agriculture and housing developments. Additionally, in North America and elsewhere colonies of termites are regularly exterminated due to the threat they pose to wooden structures.
It is estimated that tropical forests, grasslands, and savannahs of Africa, Asia, Australia, and South America regions contribute approximately 80% of global termite emissions.
Who would have thought that having annual termite inspections, installing termite baits and monitors, and all other available strategies would assist in minimising our “Carbon Footprint”? | <urn:uuid:108d16ed-9b04-4c5b-b430-fbb0b815c0ae> | 3.1875 | 510 | Personal Blog | Science & Tech. | 23.793632 |
Many transition metal solutions are brightly colored. From left to right, aqueous solutions of: cobalt(II) nitrate; potassium dichromate; potassium chromate; nickel(II) chloride; copper(II) sulfate; potassium permanganate.
Question: Why Are Transition Metals Called Transition Metals?
Most of the elements on the Periodic Table are transition metals. These are elements that have partially filled d sublevel orbitals. Have you ever wondered why they are called transition metals? What transition are they undergoing?
The term dates back to 1921, when English chemist Charles Bury referred to a transition series of elements on the periodic table with an inner layer of electrons that was in transition between stable groups, going from a stable group of 8 to one of 18, or from a stable group of 18 to one of 32. Today these elements are also known as d block elements. The transition elements all are metals, so they are also known as transition metals.
Transition Metal Properties | List of Transition Metals | <urn:uuid:60cf6154-ad50-420a-a38f-f214ead6ba08> | 4.125 | 214 | Knowledge Article | Science & Tech. | 39.159339 |
Synchronous Channels using MVars
An MVar in Haskell is a shared variable that is either full, or empty. Trying to write to a full one, or read from an empty one, will cause you to block. It can be used as a one-place buffered asynchronous channel. Consider if you didn’t want choice, or conjunction or any of the fancy features of CHP, but you do want to build a synchronous channel using MVars. How would you do it?
MVars — The Obvious Way
There is a very straightforward way to turn asynchronous channels into synchronous channels: you form a pair of channels, and use one for the writer to send the reader the data, and the other for the reader to send the writer an acknowledgement:
-- Imports cover both versions in this post.
import Control.Applicative ((<$>), (<*))
import Control.Concurrent.MVar
import Control.Monad (liftM2, liftM3)
import Data.Maybe (fromJust)

data SimpleChannel a = SimpleChannel (MVar a) (MVar ())

newSimpleChannel :: IO (SimpleChannel a)
newSimpleChannel = liftM2 SimpleChannel newEmptyMVar newEmptyMVar

writeSimpleChannel :: SimpleChannel a -> a -> IO ()
writeSimpleChannel (SimpleChannel msg ack) x = putMVar msg x >> takeMVar ack

readSimpleChannel :: SimpleChannel a -> IO a
readSimpleChannel (SimpleChannel msg ack) = takeMVar msg <* putMVar ack ()
Let’s assume that context-switching is the major cost in these algorithms, and examine how many times a process must block in the above algorithm. We know that this must be at least one; whoever arrives first will have to block to wait for the second participant.
We’ll start with what happens if the writer arrives first. The writer arrives, puts the value into the data MVar, then blocks waiting for the ack. The reader arrives, takes the data and sends the ack, at which point the writer wakes up. So in this case: one block.
If the reader arrives first, it will block waiting for the data MVar. The writer will arrive, put the value into the data MVar, then block waiting for the ack. The reader will wake up, take the data and send the ack; then the writer will wake up. So here we had two blocks. The writer blocking is unnecessary; if the reader was already there waiting, there is no need for the writer to wait for an ack, it could just deposit the value and go — if it knew the reader was there.
MVars — The Faster Way
There are several ways to remove that second block. One way is to have an MVar as a sort of status MVar. When the reader or writer arrives, they try to put into this MVar. If they succeed, they are first and they wait on a second MVar. If they fail, they are second and act accordingly, emptying the status variable and waking up the first party:
data FastChannel a = FastChannel (MVar (Maybe a)) (MVar ()) (MVar a)

newFastChannel :: IO (FastChannel a)
newFastChannel = liftM3 FastChannel newEmptyMVar newEmptyMVar newEmptyMVar

writeFastChannel :: FastChannel a -> a -> IO ()
writeFastChannel (FastChannel sts ack msg) x = do
  first <- tryPutMVar sts (Just x)
  if first
    then takeMVar ack                  -- first to arrive: will block
    else takeMVar sts >> putMVar msg x -- second: deposit data, wake the reader

readFastChannel :: FastChannel a -> IO a
readFastChannel (FastChannel sts ack msg) = do
  first <- tryPutMVar sts Nothing
  if first
    then takeMVar msg -- first to arrive: will block
    else (fromJust <$> takeMVar sts) <* putMVar ack () -- second: take data, send ack
This version is, in my benchmarks, twice as fast as the first version, which suggests that context-switching really is the expensive part of these algorithms. In fact, I started out with the first version in this post, but CHP’s more featured and complex algorithms were coming out faster because I only ever block once. It was only when I improved the MVar version to the second one above that the results were as I expected. | <urn:uuid:f395692c-ce0f-4ccb-b569-a1fef49f09da> | 3.0625 | 866 | Documentation | Software Dev. | 49.957733 |
Even before design of NDBCLUSTER began in 1996, it was evident that one of the major problems to be encountered in building parallel databases would be communication between the nodes in the network. For this reason, NDBCLUSTER was designed from the very beginning to permit the use of a number of different data transport mechanisms. In this Manual, we use the term transporter for these.
The MySQL Cluster codebase provides for four different transporters:
TCP/IP using 100 Mbps or gigabit Ethernet, as discussed in Section 5.2.8, “MySQL Cluster TCP/IP Connections”.
Direct (machine-to-machine) TCP/IP; although this transporter uses the same TCP/IP protocol as mentioned in the previous item, it requires setting up the hardware differently and is configured differently as well. For this reason, it is considered a separate transport mechanism for MySQL Cluster. See Section 5.2.9, “MySQL Cluster TCP/IP Connections Using Direct Connections”, for details.
Shared memory (SHM). For more information about SHM, see Section 5.2.10, “MySQL Cluster Shared-Memory Connections”.
SHM is considered experimental only, and is not officially supported.
Scalable Coherent Interface (SCI), as described in the next section of this chapter, Section 5.2.11, “SCI Transport Connections in MySQL Cluster”.
Most users today employ TCP/IP over Ethernet because it is ubiquitous. TCP/IP is also by far the best-tested transporter for use with MySQL Cluster.
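To make this concrete, here is a rough sketch of how an explicit TCP transporter between two data nodes can be declared in the cluster's config.ini file (a hypothetical fragment, not taken from this manual; the node IDs and buffer sizes are placeholder values):

[tcp]
# Hypothetical example: a TCP transporter between data nodes 2 and 3,
# with enlarged send and receive buffers.
NodeId1=2
NodeId2=3
SendBufferMemory=2M
ReceiveBufferMemory=2M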
We are working to make sure that communication with the ndbd process is made in “chunks” that are as large as possible because this benefits all types of data transmission.
For users who desire it, it is also possible to use cluster interconnects to enhance performance even further. There are two ways to achieve this: Either a custom transporter can be designed to handle this case, or you can use socket implementations that bypass the TCP/IP stack to one extent or another. We have experimented with both of these techniques using the SCI (Scalable Coherent Interface) technology developed by Dolphin Interconnect Solutions. | <urn:uuid:a7b2f1ac-ee64-4da5-a1ad-e3141c97c531> | 2.765625 | 477 | Documentation | Software Dev. | 47.409471 |
There are other theories around the PETM and the details remain uncertain. However it can be said with some confidence that the PETM started with a warming due to CO2 emissions that was then amplified by a low carbon13 source, methane seeming the least problematic option. In most areas of science one can afford to remain agnostic about issues in doubt; however, Anthropogenic Global Warming (AGW) and the risks it poses, methane hydrate being just one, do not allow for a 'wait and see' approach. So my working assumption is that the PETM was indeed a warming amplified by methane, with marine hydrates being a likely candidate for a major player in the amplification.
Working on the assumption that methane emissions were the amplifying factor, the next issue is whether the geological record can tell us the speed of the emissions. McInerney & Wing's 2011 review paper states that the onset of the negative excursion of carbon13 took less than 10,000 years, that being the time from the non-depleted samples to maximum depletion of carbon13. The onset in continental cores took an estimated 8,000 to 23,000 years. Whilst such timescales are geologically rapid, on human timescales this would be more of a chronic release of methane. Schmidt & Shindell investigate the PETM using an atmospheric chemistry model and climate model. They find that 0.3 Gt (1 Gt = 1,000,000,000 tons) per year most closely matches the forcing that tallies with the geological record, and that a larger emission scenario of 3 Gt per year over 500 years produces a higher forcing than is justified by geological observations. This does not mean that the PETM was not catastrophic, nor does it mean humanity is unlikely to invoke this feedback by emissions of CO2. Furthermore, as I've argued in the previous post, there is good reason to expect transient pulses of methane as part of the process of hydrate dissolution. However, overall the PETM was a catastrophe that unfolded at a slower pace than some public commentary on this issue might lead one to expect.
Figure 1c of Hansen & Sato 2011. Estimated deep ocean temperature changes for the last 500k years.
This figure is presented to illustrate the Holsteinian and Eemian interglacials, which were warmer than the Holocene, the interglacial from which the current epoch, the Anthropocene, has risen. It is noted in the text of Hansen & Sato that the temperature changes of the deep oceans were only around two thirds as large as global temperature changes. To that must then be added polar amplification, the physics for which would apply in the past as now, so there is good reason to suppose that the Arctic was warmer still.
Jakobsson et al examine beach records from the Holocene. They found that between 8,000 and 6,000 years before present Arctic sea-ice was substantially lower than at present. They show that July 65degN insolation was around 10% higher than at present. Funder et al examine beach ridges as indicative of sea-ice on the northern coast of Greenland, they find beach ridges that are indicative of the presence of open ocean where there are ice shelves at present. They also note that temperatures in the north of Greenland were around 2 to 4degC warmer than at present. Dyck et al find that the early holocene is dominated by a positive Arctic Oscillation (AO) mode, with enhanced first year sea-ice growth in the seas off Siberia due to an enhanced transport of sea-ice from the east to the west Arctic (enhanced transpolar drift).
The recent trend in Arctic warming. Provided for context. (If this is your image please let me know as I've lost the attribution.)
The Holocene Thermal Maximum (HTM) was a period of warming following the last ice age, after the start of the Holocene; the warmth was driven by higher summer insolation than at present. Kaufman et al found that the West Arctic (mainly Arctic North America) was only 2degC warmer than the 20th century average, which would make it only marginally warmer than present Arctic temperatures. However the pattern is complex, with different areas of the region studied having different timings of the HTM and different levels of warming. Renssen et al carried out a modelling study and found that Arctic temperatures were up to an average of around 2degC warmer, with the Laurentide ice sheet reducing that by just under 0.5degC. However the Laurentide ice sheet was over North America, and as Hansen & Lebedeff have shown, temperature anomalies are spatially coherent up to around 1200 km, so it seems reasonable to suggest that the Laurentide ice sheet had little effect on Siberian temperatures. With regards to the HTM in the Russian Arctic sector, I can't find much at all. Wikipedia states "The Holocene Climate Optimum warm event consisted of increases of up to 4 °C near the North Pole (in one study, winter warming of 3 to 9 °C and summer of 2 to 6 °C in northern central Siberia)". But the source of that claim is a Russian published paper that didn't come up in my searches and has a very brief abstract. So it is possible I've missed important Russian literature.
Even though temperatures show an equivocal picture, being at least around current Arctic levels but perhaps some degrees warmer, sea-ice proxy studies show a significantly reduced Arctic sea-ice pack. It has already been observed that since 2007 end-of-summer sea surface temperatures in areas of open ocean are several degrees higher than previously, as the ocean exposed to sunlight is warmed (the ice-albedo effect), that warming being mixed down into the ocean column. The physics behind this are general, so would have applied during the HTM. Whereas the recent warming may only have come close to matching HTM temperatures in the last decade, the HTM sea-ice state was lower, and crucially the warmth (however warm it was) and reduced sea-ice lasted many centuries, enough time for warming of the Arctic Ocean to penetrate the sediments. Yet we don't see catastrophic releases of methane during the HTM. This point also applies to the Holsteinian and Eemian interglacials. As they're much older periods, data is sparse; however, Hansen cites temperatures of the deep ocean as having been warmer than the present.
It is often asserted in public discussion that Arctic sea-ice has a tipping point that will drive a rapid transition to a seasonally sea-ice free state. Such assertions have been made in association with claims of an imminent destabilisation of methane hydrates on the ESAS. However these arguments seem to fall before the evidence, because evidence from the past shows less sea-ice and higher temperatures with substantially greater summer solar forcing produced neither a rapid transition to a stable seasonally sea-ice free state, nor a massive (enough to substantially raise atmospheric methane) venting of methane from the ESAS. One possible explanation for this apparent problem with regards methane and the ESAS is the history of methane hydrates on the ESAS.
Recently Andy Revkin has interviewed Dr Shakhova and Dr Semiletov; in that interview he posts comments from them about modelling research into the effects of post-glacial inundation of the ESAS, together with a response from the leader of the team who conducted the modelling study, Dr Dmitrenko. Firstly, elsewhere I've seen the Dmitrenko paper queried on the basis of citation numbers; that is not a sound method: science is about understanding, not a popularity contest. Dmitrenko et al model permafrost subject to the recent changes in the Arctic and find that the recent warming (the last 30 years) has only lowered the hydrate stability zone by 1 m; they conclude that the emissions of methane from the ESAS are due to a long process of warming initiated by the inundation of the ESAS by sea level rises following the last glacial. It is claimed by Semiletov & Shakhova that Dmitrenko et al assumed a thaw point of zero degrees (C), whereas observations show hydrate melting below zero. This is important because the sediments in that area are warming up from substantially below zero. Dmitrenko responds to this criticism stating unequivocally that this assessment of the model is wrong and that the model does simulate unfrozen sediments at below zero degrees, rounding off by saying that their paper had not been carefully read. This is a strong response that can be checked with reference to the paper (indeed it includes a quote from the paper). So I do not think Shakhova and Semiletov are correct in their criticism of Dmitrenko et al.
Dmitrenko et al raise a crucial issue. Shakhova, Semiletov and their colleagues working on the ESAS are involved in crucial cutting-edge work on methane emissions from that region, and the possibility of increased emissions in future. However, because their work is cutting edge, the observations are new and lack longer-term context. What Dmitrenko et al's study suggests is that the current hydrate melt and consequent methane emissions are the result of a longer-term process, so these emissions may have been going on for millennia, unobserved. I am reminded of the example of Bryden et al and observations of the Meridional Overturning Circulation (MOC) in the North Atlantic. Bryden's initial results appeared to show a large reduction in the MOC, leading some excitable commentators to declare that the MOC (otherwise known as the Thermohaline Circulation) was shutting down. At the time RealClimate expressed doubt about the slowing of the MOC, and were subsequently proven to be correct; see here for related posts at RC.
Likewise it is possible that the observations of methane emissions from the ESAS are sampling something that has been going on for some time, possibly millennia. With spatially limited observations over a short timescale it is no surprise that the observations don't yet answer the question of whether that is the case. However these observations are crucial because they at least present the start of a series of data that will reveal whether what is being seen is intensifying as the process of Arctic warming, driven by AGW, proceeds.
Semiletov et al 2012 state that:
Our measurements of CH4 taken in 1994–9 and 2003–10 over the ESAS demonstrate that the system is in a destabilisation period (Semiletov 1999a, Shakhova et al 2010a, 2010b, 2010c).
Shakhova et al 2010a is a paper that uses seven scenarios of methane emissions, different amounts over short and long timescales, and calculates the impacts of these on global average temperature and radiative forcing. Shakhova et al 2010b and 2010c contain more detail relevant to the question of whether the ESAS is destabilised; they can be taken to support the hypothesis that the ESAS is destabilised. Shakhova & Semiletov 2006 find that in 2003 the area-weighted methane flux was 4.86 × 10^10 g/cm²/hour and in 2004 it was 3.02 × 10^10 g/cm²/hour, supporting the large interannual variability noted elsewhere. So whilst the research clearly shows that methane hydrates are destabilising, this does not determine whether this is an exceptional condition unique to recent decades and due to AGW. Dmitrenko et al's findings, that the destabilisation is a long-term situation with its origins in the inundation and massive warming at the start of the Holocene, are not rejected by the above references.
All this said, the work of Shakhova, Semiletov, and colleagues is vital, and I've been impressed by what I've read of their work. Together with observations from ground stations and satellite, their work provides the first basis of a marine methane hydrate early warning system. When the currently destabilised methane deposits on the ESAS start to join their land counterparts in a substantially increasing trend of emission, the groundwork these scientists are laying will be crucial. Significant funding for their efforts can be viewed as a prudent insurance policy for the planet, especially as humanity seems hell-bent on Plan A (Business as Usual).
Dmitrenko et al, 2011, "Recent changes in shelf hydrography in the Siberian Arctic: Potential for subsea permafrost instability."
Dyck et al, 2010, "Arctic sea-ice cover from the early Holocene: the role of atmospheric circulation patterns."
Funder et al, 2011, "A 10,000-Year Record of Arctic Ocean Sea-Ice Variability—View from the Beach."
Hansen & Sato, 2011, "Paleoclimate Implications for Human-Made Climate Change."
Jakobsson et al, 2010, "New insights on Arctic Quaternary climate variability from palaeo-records and numerical modelling."
Kaufman et al, 2004, "Holocene thermal maximum in the western Arctic (0–180 W)"
Francis et al, 2006, "Interglacial and Holocene temperature reconstructions based on midge remains in sediments of two lakes from Baffin Island, Nunavut, Arctic Canada."
McInerney & Wing, 2011, "The Paleocene-Eocene Thermal Maximum: A Perturbation of Carbon Cycle, Climate, and Biosphere with Implications for the Future."
Renssen et al, 2004, "Simulating the Holocene climate evolution at northern high latitudes using a coupled atmosphere-sea ice-ocean-vegetation model."
Shakhova et al 2010a "Predicted methane emission on the East Siberian shelf."
Shakhova et al 2010b "Geochemical and geophysical evidence of methane release from the inner East Siberian Shelf."
Shakhova et al 2010c "Extensive methane venting to the atmosphere from sediments of the East Siberian Arctic shelf" | <urn:uuid:3730f832-306f-4711-993b-18a5c5913493> | 3.421875 | 2,897 | Personal Blog | Science & Tech. | 36.420123 |
The BOREAS AFM-03 team used the NCAR Electra aircraft to make sounding measurements to study the planetary boundary layer using in situ and remote-sensing measurements. Measurements were made of wind speed and direction, air pressure and temperature, potential temperature, dewpoint, mixing ratio of H2O, CO2 concentration, and ozone concentration. Twenty-five research missions were flown over the NSA, SSA, and the transect during BOREAS IFCs 1, 2, and 3 during 1994. All missions had from 4 to 10 soundings through the top of the planetary boundary layer. This sounding data set contains all of the in situ vertical profiles through the boundary layer top that were made (with the exception of "porpoise" maneuvers). Data were recorded in 1-second time intervals. | <urn:uuid:e4ede275-327b-4467-ab28-c34471a8cac2> | 3.078125 | 168 | Knowledge Article | Science & Tech. | 40.212866 |
Catching and Throwing Standard Exception Types
The following guidelines describe best practices for some of the most commonly used exceptions provided by the .NET Framework. For a complete list of exception classes provided by the Framework, see the documentation.
Exception and SystemException
Do not throw System.Exception or System.SystemException.
Do not catch System.Exception or System.SystemException in framework code, unless you intend to re-throw.
Avoid catching System.Exception or System.SystemException, except in top-level exception handlers.
ApplicationException
If you are designing an application that needs to create its own exceptions, you are advised to derive custom exceptions from the System.Exception class. It was originally thought that custom exceptions should derive from the System.ApplicationException class; however, in practice this has not been found to add significant value. For more information, see Designing Custom Exceptions.
InvalidOperationException
Do throw a System.InvalidOperationException exception if the object is in an inappropriate state. System.InvalidOperationException should be thrown if a property set or a method call is not appropriate given the object's current state. For example, writing to a System.IO.FileStream that has been opened for reading should throw a System.InvalidOperationException exception.
This exception should also be thrown when the combined state of a set of related objects is invalid for the operation.
ArgumentException, ArgumentNullException, and ArgumentOutOfRangeException
Do throw System.ArgumentException or one of its subtypes if bad arguments are passed to a member. Prefer the most-derived exception type if applicable.
The following code example demonstrates throwing an exception when an argument is null (Nothing in Visual Basic).
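The original sample is not reproduced here; a minimal C# sketch of the kind of check described (the method and parameter names are illustrative only) might look like this:

public void ProcessInput(string input)
{
    if (input == null)
    {
        // Report the offending parameter by name.
        throw new ArgumentNullException("input");
    }
    // ... continue processing ...
}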
Do set the System.ArgumentException.ParamName property when throwing System.ArgumentException or one of its derived types. This property stores the name of the parameter that caused the exception to be thrown. Note that the property can be set using one of the constructor overloads.
Do use value for the name of the implicit value parameter of property setters.
The following code example shows a property that throws an exception if the caller passes a null argument.
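Again as an illustrative sketch rather than the original sample, note the use of value, the implicit parameter of the property setter:

private string name;

public string Name
{
    get { return name; }
    set
    {
        if (value == null)
        {
            // "value" is the conventional parameter name for property setters.
            throw new ArgumentNullException("value");
        }
        name = value;
    }
}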
NullReferenceException, IndexOutOfRangeException, and AccessViolationException
Do not allow publicly callable APIs to explicitly or implicitly throw System.NullReferenceException, System.AccessViolationException, System.InvalidCastException, or System.IndexOutOfRangeException. Do argument checking to avoid throwing these exceptions. Throwing these exceptions exposes implementation details of your method that may change over time.
StackOverflowException
Do not explicitly throw System.StackOverflowException. This exception should be explicitly thrown only by the common language runtime (CLR).
Do not catch System.StackOverflowException.
It is extremely difficult to programmatically handle a stack overflow. You should allow this exception to terminate the process and use debugging to determine the source of the problem.
OutOfMemoryException
Do not explicitly throw System.OutOfMemoryException. This exception should be thrown only by the CLR infrastructure.
ComException and SEHException
Do not explicitly throw System.Runtime.InteropServices.COMException or System.Runtime.InteropServices.SEHException. These exceptions should be thrown only by the CLR infrastructure.
Do not catch System.Runtime.InteropServices.SEHException explicitly.
ExecutionEngineException
Do not explicitly throw System.ExecutionEngineException.
Portions Copyright 2005 Microsoft Corporation. All rights reserved.
Portions Copyright Addison-Wesley Corporation. All rights reserved.
For more information on design guidelines, see the "Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries" book by Krzysztof Cwalina and Brad Abrams, published by Addison-Wesley, 2005. | <urn:uuid:6b494fba-2025-4c47-aad5-457a63838450> | 2.796875 | 744 | Documentation | Software Dev. | 23.729781 |
The Babel fish is one of the more inspired forms of fictional biodiversity. It features in the Hitchhiker’s Guide to the Galaxy by Douglas Adams (sadly no relative), and allows his antihero, the ape descendant Arthur Dent to traverse the universe with only his speaking handheld digital assistant, the Hitchhiker’s Guide, for company (forget Siri, Douglas Adams got there first). The Babel fish is described as ‘small, yellow and leech-like’, and when it had slithered into Arthur Dent’s ear, he could understand anything that was said, in any language of the universe. As usual in the Hitchhiker’s Guide, this turns out not to be entirely a good thing.
Many conservationists seem to hope that ecosystem services will work like a Babel fish for them. For decades they have hammered on about how valuable nature is, and nobody has paid much notice. Humanity blithely goes on strip-mining the earth’s stock of natural capital and burning it, getting rich or just keeping alive. But the ecosystem services Babel fish promises to change all that. Insert it into public discourse, and when conservationists speak of wildlife, biodiversity, endangered species or habitat loss, their listeners will hear human wellbeing, natural capital, nature’s supply chain, the stuff humans get for free. When we speak about the importance of conservation, everyone will automatically understand what we mean.
The Babel fish of ecosystem services works by translating conservationists’ ideas about nature into the language of economics. The idea is that economists in their turn can use it to explain why the conservation of nature matters, because the language of economics already has the ear of policy makers and the public. The romantic attachment between conservation biology and environmental economics has been growing for several decades, inspired by work like the late David Pearce’s Blueprint for a Green Economy, and Robert Costanza’s classic attempt to calculate the value of the world’s ecosystem services and natural capital in 1997. The Millennium Ecosystem Assessment, the TEEB reports, and the UK’s National Ecosystem Assessment all serve, with different degrees of success, to ‘make nature economically visible’, as the TEEB’s mission expresses it.
So far, so good, but the thing about interpretation is that it is never perfect. What people hear is often not quite what was meant. This is particularly true where we talk not about concrete things, but in word pictures. Ecosystem services is not really a translating fish, but a metaphor, and, as Mary Midgley observed in New Scientist in 2011, ‘the trouble with metaphors is that they don’t just mirror scientific beliefs, they also shape them. Our imagery is never just surface paint, it expresses, advertises and strengthens our preferred interpretations’.
One of the problems with metaphors is that you stop noticing them. Richard Norgaard points out that the idea of nature as a fixed stock of natural capital that sustains a flow of ecosystem services is no longer the eye-opening metaphor it was in the early 1990s. Instead of a means to communicate the delusion of endless growth, it has become a dominant model for environmental policy, supporting a thriving industry of professionals. In the banal language of everyday politics, ‘we need to recognise that if we withdraw something from Mother Nature’s Bank, we’ve got to put something back in to ensure that the environment has a healthy balance and a secure future’.
A second problem with metaphors is that something is lost in translation. Michael Sandel makes this point about the idea of the market in his book What Money Can't Buy. Markets leave their mark, he says: they can crowd out important nonmarket values, and market reasoning empties public life of moral argument. This kind of point has been made in the context of conservation - for example, Douglas McCauley argues: 'if we oversell the message that ecosystems are important because they provide services, we will have effectively sold out on nature'.
A third problem is that metaphors develop a life of their own. The metaphor of ecosystem services triggers what Emery Roe calls a policy narrative, a story that secures a basis for policy action. The idea of ecosystem services is taken to mean not only that nature has an economic value that needs to be calculated carefully and taken fully into account in any economic decisions (undoubtedly true), but also that markets can be created to determine that value and trade in associated commodities. Cue market-based conservation, payments for ecosystem services and novel commodities like carbon futures. Welcome Norgaard's thriving industry, a diverse community of economists (ecological, environmental, political, institutional) attempting to measure flows of services from different ecosystems under different conditions, and to work out who wins and who loses in the resulting scramble for benefits. Welcome too businesses, who see opportunity in the idea of ecosystem services, in terms of new products, new brands and an escape from regulation.
These (and others like them) are not novel points about the challenges of applying the ecosystem services approach to biodiversity conservation. Each can be debated, as to the extent to which it is happening, and whether (and for whom) it is a good thing. My point is not the problems or the potential, but the power of the metaphor of ecosystem services, and the narrative that it drives. They are taking conservation to strange places. Mary Midgley is quite right: metaphors shape our beliefs, and the idea of ecosystem services is transforming conservation in ways that are only slowly becoming apparent.
Perhaps the most worrying thing about the ecosystem services Babel fish is whether, ultimately, it works. Economists may like the idea of ecosystem services, and politicians, policy-makers and business leaders may find it useful, but the acid test is whether it helps ordinary people understand what conservationists care about, and why. Morgan Robertson, on his Wetlandia blog, suggests it may not: a 2010 opinion survey for the Nature Conservancy in the USA found that ecosystem services ranked 13th out of 16 terms used to describe the benefits of nature (natural capital was 15th). Conservationists (with the economists and policy makers they like talking to) appear to be in a bubble, mouthing unintelligible jargon to the bemused citizens of planet earth. Business as usual in the world of planet management, in fact. Tony Juniper tells me that his new book What Has Nature Ever Done for Us (due out in January 2013) manages not to use the term ecosystem services at all. Maybe he is onto something.
Douglas Adams knew a thing or two about conservation and extinction, and he never underestimated the power of the absurd. The Hitchhiker’s Guide records the sad fact that ‘the Babel fish, by removing all barriers to communication between different races and cultures, has caused more and bloodier wars than anything else in the history of creation’. He envisaged the races of the universe understanding only too well that they had nothing in common. Maybe conservation will have the reverse problem, if the ecosystem service Babel fish doesn’t translate its ideas very well.
Either way, my conclusion is simple: beware the blandishments of the fish in your ear. | <urn:uuid:a7ba9f28-c358-490f-b389-d2c6f97243e1> | 3.03125 | 1,497 | Personal Blog | Science & Tech. | 30.584952 |
The Exponential Clothesline
Many students have difficulty expressing large numbers using exponential notation. The exponential clothesline helps to pre-assess student understanding of exponents by providing a visual representation of exponential notation (powers of ten). The Exponential Clothesline Conversion Table will also pre-assess and refresh student understanding of fractions and decimals.
Provide each group of students with the following materials:
One 5-meter piece of clothesline (or string)
Fourteen (14) clothespins (or paper clips)
Fourteen (14) index cards with the following numbers identified as follows:
0 written in red
1, 2, and 3, in blue
10^1, 10^2, 10^3, 10^6, 10^9, and 10^12 in green
10^-1, 10^-2, 10^-3, and 10^-4 in black
Numbers written in black on different colored index cards are more visual, and any incorrectly placed numbers are immediately recognizable in a large classroom. The Number and Exponent Cards described above are also available in pdf format for you to download.
Depending on your group, you may wish to provide some larger (or smaller) exponents or include a set of numbers with exponents such as 2 × 10^0, 2 × 10^1, 2 × 10^2 and 2 × 10^-2.
Give each group of students the clothesline, clothespins, and a set of numbered index cards. Randomly distribute the cards as evenly as possible within each group. Have the students string the clothesline, and ask the student in each group with the number 0 to attach that index card approximately one-sixth of the way from the left end of the clothesline. The students with the number 1 should attach it 25 cm to the right of the number 0.
Explain to the students that their clothesline represents a number line and that they are going to add whole numbers and numbers expressed in scientific notation to the number line. Ask the students who have numbers 2 and 3 in each group to place their numbers on the clothesline. Most students will correctly place whole numbers on the number line. Give the students the task of placing the remaining numbers on the number line in their correct locations. [It is important to explain to your students that this number line is not to scale.] Most groups will discuss and accurately place 10^1, 10^2, 10^3, 10^6, 10^9, and 10^12 and other higher powers of ten; however, when they begin to place the negative exponents on the clothesline, most students will place them to the left of 0, which is a common mistake.
Numbers with negative exponents actually fall between 0 and 1 (for example, 10^-1 = 0.1). Encourage the students to make changes if they think any of the numbers are not in the correct order. If any of the groups think that their number lines are correct and there are still numbers placed out of sequence, hand out the Exponential Clothesline Conversion Table. The purpose of the conversions is to express the exponents as whole numbers, fractions and decimals. After the final conversion of the exponents to decimals, students who have the negative exponents incorrectly placed usually begin to see that numbers with negative exponents are still greater than 0 and rearrange their number lines accordingly.
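For anyone who wants to check the ordering numerically, here is a minimal sketch in Python (the card values are the ones used in the activity above; any similar list works):

    # Decimal values of the clothesline cards (powers of ten)
    exponents = [12, 9, 6, 3, 2, 1, -1, -2, -3, -4]
    values = {f"10^{e}": 10.0 ** e for e in exponents}

    # Sort the cards by value, exactly as they should hang on the line
    for label, value in sorted(values.items(), key=lambda item: item[1]):
        print(f"{label:>6} = {value}")

    # Every 10^-n is a small positive number, so those cards belong
    # between 0 and 1 on the clothesline, never to the left of 0.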
Use the completed Exponential Clothesline Conversion Table to enter into a discussion about what scientific notation is and why it is useful.
The Exponential Clothesline Conversion Table
Modeling the Electromagnetic Spectrum: Middle School
Modeling the Electromagnetic Spectrum: High School | <urn:uuid:c532c9f7-c16f-4443-b32a-01ec5ab6501f> | 4.15625 | 715 | Tutorial | Science & Tech. | 42.866862 |
The chemistry of inkjet printer ink
Whatever technology is applied to printer hardware, the final product consists of ink on media, so these two elements are vitally important when it comes to producing quality results. The quality of output from inkjet printers ranges from poor, with dull colors and visible banding, to excellent, near-photographic quality.
Two entirely different types of ink are used in inkjet printers: one is slow and penetrating and takes about ten seconds to dry, and the other is fast-drying ink which dries about 100 times faster. The former is generally better suited to straightforward monochrome printing, while the latter is typically used for color printing. Because different inks are mixed to create colors, they need to dry as quickly as possible to avoid blurring. If slow-drying ink is used for color printing, the colors tend to bleed into one another before they’ve dried.
The ink used in inkjet technology is water-based, and this caused the results from some of the earlier printer models to be prone to smudging and running. Oil-based ink is not really a solution for this problem because it would impose a far higher maintenance cost on the hardware. Printer manufacturers are making continual progress in the development of water-resistant inks, but the output from inkjet printers is still generally poorer than from laser printing.
One of the major goals of inkjet manufacturers is to develop the ability to print on almost any media. The secret to this is ink chemistry, and most inkjet manufacturers will jealously protect their own formulas. Companies like Hewlett-Packard, Canon and Epson invest large sums of money in research to make continual advancements in ink pigments, qualities of light fastness and water fastness, and suitability for printing on a wide variety of media.
Today's inkjets use dyes, based on small molecules (<50 nm), for the cyan, magenta and yellow inks. These have high brilliance and a wide color gamut, but are neither light-fast nor water-fast enough. Pigments, based on larger (50 to 100 nm) molecules, are more waterproof and fade-resistant, but they aren't transparent and cannot yet deliver the range of colors available from dye-based inks. This means that pigments are currently only used for the black ink. Future developments will likely concentrate on creating water-fast and light-fast CMY inks based on smaller pigment-type molecules.
written by the Kansas State Physics Education Research Group and Dean A. Zollman
This resource illustrates the interference patterns for diffraction through double slits. A series of sources, electrons, protons, neutrons, photons, and pions, can be chosen for the virtual experiment. The energy of the source, the slit separation, and the flux rate can all be adjusted. The diffraction patterns are built from random flashes on a screen. The screens can be saved for comparison of different experiments.
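For readers who would like a feel for how such a pattern builds up from individual flashes, here is a minimal sketch in Python (not the resource's own code; the wavelength, slit separation and slit width are illustrative values):

    import numpy as np

    # Far-field double-slit pattern: cos^2 interference inside a sinc^2 envelope
    wavelength, d, a = 1.0, 5.0, 1.0          # arbitrary units
    x = np.linspace(-3, 3, 2000)              # screen position (arbitrary units)
    beta = np.pi * d * x / wavelength
    alpha = np.pi * a * x / wavelength
    intensity = np.cos(beta) ** 2 * np.sinc(alpha / np.pi) ** 2

    # Draw random "flashes" with probability proportional to the intensity
    p = intensity / intensity.sum()
    flashes = np.random.choice(x, size=500, p=p)
    print(np.round(np.sort(flashes)[:10], 3))  # a few of the leftmost hits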
Double Slit Diffraction:
Is Referenced By Matter Waves
Matter Waves uses the example of a double slit experiment to explore the wave-like properties of quantum particles. (Relation by Bruce Mason)
Is Required By Light and Waves
The Light and Waves tutorial uses the double slit electron experiment to demonstrate the wave-like properties of electrons. (Relation by Bruce Mason)
Is Required By Interpreting Wave Functions
Interpreting Wave Functions uses a Double Slit Diffraction simulation to demonstrate the need for a description of the measurement probability. (Relation by Bruce Mason)
Is Part Of Visual Quantum Mechanics
Visual Quantum Mechanics contains a series of simulations, tutorials, and pedagogy to help in the learning of concepts in quantum physics. This material is suitable for a wide range of students. (Relation by Bruce Mason)
Concave Mirror Images
Why is it that you can see a REAL image in a concave mirror when you are looking into the mirror?
A real image is not necessarily an image that is really you. A real image
is one that really has light coming together at it. Place a piece of paper
where the image appears to be located. If you see a picture on the paper,
then the image is real. An image that appears to be in front of the mirror
is usually real. An image that seems to be behind the mirror is virtual.
The light does not go behind the mirror. A real image can be used to expose
film, creating a photograph. This cannot be done with a virtual image. To
use a virtual image for this, you must use a lens or another mirror to
create a real image on the film.
Dr. Ken Mellendorf
Illinois Central College
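As a quick numerical check (an added illustration, not part of the answers above), the mirror equation 1/f = 1/d_o + 1/d_i makes the real/virtual distinction concrete; a minimal sketch in Python:

    # Mirror equation: 1/f = 1/d_o + 1/d_i  (distances in cm, f > 0 for concave)
    f = 10.0                    # focal length of the concave mirror
    for d_o in (30.0, 5.0):     # object outside, then inside, the focal length
        d_i = 1.0 / (1.0 / f - 1.0 / d_o)
        kind = "real (in front of the mirror)" if d_i > 0 else "virtual (behind the mirror)"
        print(f"object at {d_o} cm -> image at {d_i:.1f} cm, {kind}")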
Somewhat belated, but the web site below explains it pretty well:
Click here to return to the Physics Archives
Update: June 2012 | <urn:uuid:2514f02d-d061-49d0-9536-3ac7ec64486f> | 3.4375 | 219 | Knowledge Article | Science & Tech. | 59.175796 |
Waves, Sound and Light: Wave Basics
Wave Basics: Audio Guided Solution
A wave is traveling in a rope. The diagram below represents a snapshot of the rope at a particular instant in time.
Determine the number of wavelengths which is equal to the horizontal distance between points …
a. … C and E on the rope.
b. … C and K on the rope.
c. … A and J on the rope.
d. … B and F on the rope.
e. … D and H on the rope.
f. … E and I on the rope.
Audio Guided Solution
Click to show or hide the answer!
b. 3.5 wavelengths
c. 4.0 wavelengths
d. 1.5 wavelengths
e. 2.0 wavelengths
f. 1.75 wavelengths
Habits of an Effective Problem Solver
- Read the problem carefully and develop a mental picture of the physical situation. If necessary, sketch a simple diagram of the physical situation to help you visualize it.
- Identify the known and unknown quantities and record them in an organized manner; oftentimes they can be recorded on the diagram itself. Equate given values to the symbols used to represent the corresponding quantity (e.g., v = 12.8 m/s, λ = 4.52 m, f = ???; a worked sketch follows this list).
- Use physics formulas and conceptual reasoning to plot a strategy for solving for the unknown quantity.
- Identify the appropriate formula(s) to use.
- Perform substitutions and algebraic manipulations in order to solve for the unknown quantity.
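As a minimal illustration of the last step with the example values quoted in the list above (v = 12.8 m/s and λ = 4.52 m, with f unknown):

    # Wave equation v = f * wavelength, rearranged for the unknown frequency
    v = 12.8            # wave speed in m/s (example values from the list above)
    wavelength = 4.52   # wavelength in m
    f = v / wavelength
    print(f"f = {f:.2f} Hz")  # about 2.83 Hz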
Read About It!
Get more information on the topic of Waves at The Physics Classroom Tutorial.
Return to Problem Set
Return to Overview | <urn:uuid:9cd3da61-f6be-44ef-96d7-e66d28763252> | 3.90625 | 365 | Tutorial | Science & Tech. | 71.252578 |
Net ocean heat content changes are very closely tied to the net radiative imbalance of the planet since the ocean component of the climate system has by far the biggest heat capacity. Thus we have often made the point that diagnosing this imbalance through measurements of temperature in the ocean is a key metric in evaluating the response of the system to changes in CO2 and the other radiative forcings (see here).
In a paper I co-authored last year (Hansen et al, 2005), we compared model results with the trends over the 1993 to 2003 period and showed that they matched quite well (here). Given their importance in evaluating climate models, new reports on the ocean heat content numbers are anticipated quite closely.
Recently, a new preprint with the latest observations (2003 to 2005) has appeared (Lyman et al, hat tip to Climate Science) which shows a decrease in the ocean heat content over those two years, decreasing the magnitude of the long-term trend that had been shown from 1993 to 2003 in previous work (Willis et al, 2004) – from 0.6 W/m2 to about 0.33 W/m2. This has generated a lot of commentary in some circles, but in many cases the full context has not been appreciated.
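To put such numbers in context, a planetary imbalance quoted in W/m2 can be converted into an upper-ocean warming rate with back-of-envelope physics. The sketch below uses rough textbook values for the ocean, not numbers from the paper:

    # Rough conversion: radiative imbalance -> warming rate of the top 700 m
    imbalance = 0.33                 # W/m^2, averaged over the whole Earth
    earth_area = 5.1e14              # m^2
    seconds_per_year = 3.15e7
    joules_per_year = imbalance * earth_area * seconds_per_year  # ~5e21 J/yr

    ocean_area = 3.6e14              # m^2
    depth = 700.0                    # m, the layer analysed in these studies
    mass = ocean_area * depth * 1025.0   # kg, with seawater at ~1025 kg/m^3
    heat_capacity = 3990.0               # J/(kg K) for seawater
    dT_per_year = joules_per_year / (mass * heat_capacity)
    print(f"{joules_per_year:.1e} J/yr -> about {dT_per_year*1000:.0f} mK/yr")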
With any new data sets there are a number of questions that must always be asked: Are the measurements really representing what is claimed? (in particular, are there sampling or definitional problems?). Do related data provide some support for the results? If correct, what are the potential causes? and, most importantly, what part of the changes are related to predictable deterministic effects? This last question brings up the issue of model evaluation, because of course, the models can only be expected to reproduce the deterministic long-term component.
Given some of the ongoing discussion, it obviously still needs to be pointed out that year-to-year fluctuations in any of the key metrics of planet’s climate are mostly a function of the weather and cannot be expected to be captured in climate models, whose ‘weather’ is uncorrelated with that in the real world. So claims that two years worth of extra data of any quantity somehow prove or disprove climate models are simply erroneous. Clearly, life would be simpler without weather ‘noise’ cluttering up the system, but this is something that just needs to be dealt with. Dealing with it means paying more attention to long term changes than to short term fluctuations and making sure that enough ensemble simulations are made with the models to isolate the signal from the noise.
Going back to the data, are there any potential problems? Well, as addressed by the authors, this time frame is the period when the ARGO floating profilers really start to be important in improving the coverage of data (look at the difference in coverage in their figure 8 between 2002 and 2005). The profilers have clearly been the best thing to happen to ocean observations in decades. Not using the profilers gives a smaller recent change – but with increased error bars because of the deterioration of the sampling. Additionally, some parts of the ocean, particularly the Arctic are still not being sampled sufficiently. These effects may yet prove to be part of the story.
What about any supporting data? One problem is that if the ocean has lost heat at the suggested rate, then the thermal expansion part of recent sea level rise should have decreased (i.e. sea level should have dropped). Overall, however, sea level has continued to rise unabated according to the altimeter satellites. The only way to reconcile the results would be to have had a sharp compensating increase in freshwater from the ice sheets adding to sea level (from 0.7 mm/yr to 2.9 mm/yr). This is conceivable (though unlikely), but clearly would not be good news!
If, however, we assume that the data are reasonably accurate, what could be going on? Some of the changes are clearly due to ocean circulation changes - an increased advection of warm water from the sub-tropical Atlantic to the North for instance - but the biggest contribution comes from the changes seen in the sub-tropical South Pacific. The heat can either have been subducted below the 700m level (the bottom depth for this analysis), advected sideways (no real evidence for that though), or lost through the surface (either to the atmosphere, or directly out to space). The third possibility is thought the most likely.
This in turn could have had a number of possible causes: 'natural' tropical variability - for instance, the winter (DJF) tropical Pacific cooled over these two years, possibly as part of larger-scale ENSO variability. Alternatively, it may be due to a change in the forcings. Possible candidates are an as-yet-unquantified increase in aerosol forcings from Asian sources. These haven't been included in simulations since the data on emissions aren't yet in.
On a larger point, the radiative imbalance in the AR4 models is a function of how effectively the oceans sequester heat (more mixing down implies a greater imbalance) as well as what the forcings are. Therefore, there is a variation in that modelled value across the models – some of which are smaller than our reported figure (all are significantly positive though).
A slightly more subtle (and slightly more valid) criticism is that the reported magnitude of decadal variability in the OHC numbers is larger than is seen in most coupled models. Some recent work has shown that sampling may play a role here, but it wouldn’t necessarily be surprising if this was so. Even in our paper last year we stated that earlier reported decadal variations were not well simulated. There is obviously much that remains to be understood about annual to decadal variability, however, it must be remembered that it is only on the longer time scales that we expect the forced signal to dominate over the internal ‘noise’. On this basis the ocean heat content changes remain a good validation of the climate model simulations. | <urn:uuid:80dddb75-a183-4626-9211-21ca88acb8f9> | 2.8125 | 1,241 | Comment Section | Science & Tech. | 43.021669 |
by Staff Writers
Reno NV (SPX) Oct 16, 2012
An extraordinarily crowded planetary system is providing critical clues for understanding why most known planetary systems appear different from our own solar system. Using data from NASA's Kepler space mission, scientists are investigating the properties of KOI-500, a planetary system that crams five planets into a region less than one twelfth the size of the Earth's orbit.
Dr. Darin Ragozzine, a postdoctoral researcher at the University of Florida, presented recent findings about this system this Tuesday at the annual meeting of the American Astronomical Society's Division for Planetary Sciences in Reno, NV.
KOI-500 is an especially compact planetary system, hosting five planets whose "years" are only 1.0, 3.1, 4.6, 7.1, and 9.5 days. "All five planets zip around their star within a region 150 times smaller in area than the Earth's orbit, despite containing more material than several Earths (the planets range from 1.3 to 2.6 times the size of the Earth).
At this rate, you could easily pack in 10 more planets, and they would still all fit comfortably inside the Earth's orbit," Ragozzine notes. KOI-500 is approximately 1,100 light-years away in the constellation Lyra, the harp.
NASA's Kepler mission searches for exoplanets - planets around other stars - by observing over 160,000 stars simultaneously and identifying small dips in a star's brightness due to the shadow of a distant planet. Kepler has opened a whole new chapter in the study of exoplanets by discovering hundreds of planetary systems containing multiple closely-spaced planets.
These discoveries include a surprising new population of planetary systems that contain several planets packed in a tiny space around their host stars. KOI-500 is the most compact of them all.
"From the architecture of this planetary system, we infer that these planets did not form at their current locations. The planets were originally more spread out and have 'migrated' into the ultra-compact configuration we see today," said Ragozzine.
Although recent theories for the formation of the large planets of the outer solar system also involve planets moving during the formation process, it is still unclear how the inner planets in the solar system, including Earth, avoided this fate.
Using Kepler data, astronomers can measure the sizes and orbits of planets orbiting Sun-like stars more precisely than ever before, giving birth to a new subfield of study.
In the case of KOI-500, the planets are so close together that their mutual gravity pushes and pulls on their orbits, causing slight changes in the times that the planets pass in front of their host star.
By detecting this effect, Dr. Ji-Wei Xie, a postdoctoral researcher at Nanjing University and the University of Toronto, recently confirmed that the two candidates orbiting farthest from KOI-500 were actually planets.
Ragozzine's work, still unpublished, goes farther, confirming additional planets and characterizing their masses and orbits.
Additionally, four of the planets orbiting KOI-500 follow synchronized orbits around their host star in a completely unique way - no other known system contains a similar configuration. Work by Ragozzine and his colleagues suggests that planetary migration helped to synchronize the planets.
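A quick way to see the synchronization is to take ratios of the neighbouring orbital periods quoted above; this is only a sketch, and the published analysis is far more careful:

    # Orbital periods of the five KOI-500 candidates, in days (from the article)
    periods = [1.0, 3.1, 4.6, 7.1, 9.5]
    for inner, outer in zip(periods, periods[1:]):
        print(f"{outer}/{inner} = {outer / inner:.2f}")

    # The outer three pairs sit close to the small-integer ratios 3:2 and 4:3,
    # the kind of resonant chain expected after slow inward migration.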
"By precisely characterizing the delicate arrangement of planets in this extraordinarily crowded system, Kepler is providing insights into the formation of KOI-500 and other compact planetary systems," said Eric Ford, an associate professor of astronomy at the University of Florida and a contributor to the study.
"As the most compact system of a new compact population of planets, KOI-500 will become a touchstone for future theories that will attempt to describe how compact planetary systems form," said Ragozzine.
"Learning about these systems will inspire a new generation of theories to explain why our solar system turned out so differently."
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2009 February 4
Explanation: On the distant planet HD 80606b, summers might be dangerous. Hypothetical life forms floating in HD 80606b's atmosphere or lurking on one of its (presently hypothetical) moons might fear the 1,500 Kelvin summer heat, which is hot enough to melt not only lead but also nickel. Although summers are defined for Earth by the daily amount of sunlight, summers on HD 80606b are more greatly influenced by how close the planet gets to its parent star. HD 80606b, about 200 light years distant, has the most elliptical orbit of any planet yet discovered. In comparison to the Solar System, the distance to its parent star would range from outside the orbit of Venus to well inside the orbit of Mercury. In this sequence, the night side of HD 80606b is computer simulated as it might glow in infrared light in nearly daily intervals as it passed the closest point in its 111-day orbit around its parent star. The simulation is based on infrared data taken in late 2007 by the Spitzer Space Telescope.
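The orbital geometry behind that Venus-to-Mercury comparison can be sketched from literature values for HD 80606b, roughly a semi-major axis of 0.45 AU and an eccentricity of 0.93 (these figures are not from the caption itself):

    # Closest and farthest distances from a and e (approximate literature values)
    a, e = 0.45, 0.93                      # AU, dimensionless
    r_min, r_max = a * (1 - e), a * (1 + e)
    print(f"closest approach ~{r_min:.3f} AU (Mercury orbits at ~0.39 AU)")
    print(f"farthest point  ~{r_max:.2f} AU (Venus orbits at ~0.72 AU)")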
Authors & editors:
Jerry Bonnell (UMCP)
NASA Official: Phillip Newman Specific rights apply.
A service of: ASD at NASA / GSFC
& Michigan Tech. U. | <urn:uuid:3a979607-f167-406c-887c-87760d8efe7e> | 3.609375 | 290 | Knowledge Article | Science & Tech. | 51.744737 |
(The second installment in Saul Griffith's energy literacy and climate change presentation -- Ed.)
The result of our energy use:
Carbon Dioxide concentration in the atmosphere.
Scientists have now been studying the concentration of carbon dioxide in our atmosphere for more than half a century; They principally use two techniques: Direct measurement from research sites, which measure the current concentration direct from that atmosphere using highly sensitive instruments, and indirect measurement, which requires inferring the concentrations from ice-cores taken from glaciers, and from ice-cores drilled into the antarctic ice-pack.
The graph above shows the CO2 concentration as measured by different methods for the last 1,000 years.
What this graph represents is the main reason we are all becoming increasingly aware of our environmental impact. This graph tells us about how we are risking our own future.
This might be the most important plot of natural phenomena that science has ever produced. We need to keep looking at these graphs to see how we are doing. We also need to increase our confidence in the reasons for the historical variations of this graph due to natural climate cycles.
There is much earlier data: European Project for Ice Coring in Antarctica (EPICA) covers the last 650,000 years. Here scientists have determined CO2 levels by analyzing bubbles enclosed in the ice. CO2 data from 0 to 420,000 years are from earlier measurements from ice cores from Vostok station [Petit et al., 1999], and Taylor Dome [Indermühle et al., 2000]. The isotopic records indicate the sequence of six full glacial cycles [EPICA Community Members, 2004]. New CO2 data measured at the University of Bern are from ice older than 420,000 years and extend the legendary Vostok record by more than 50 percent back in time. These data confirm that the present CO2 concentrations in the atmosphere are unprecedented for at least the last 650,000 years.
Recent rate increase in atmospheric CO2 concentration
In the last 50 years, the rate of increase in CO2 has increased. The Mauna Loa studies by Roger Revelle were pioneering. Uptake of CO2 by Northern Hemisphere trees, causes seasonal variations, which can be seen at this detail. There is no denying that a rapid increase in CO2 concentrations is happening. In fact, in the last few years scientists have become concerned that the rate of increase has itself increased. This might be an indication that we've reached the limits of the earth's ecosystems ability to absorb CO2.
For reference, pre industrial concentrations were around 280 ppm. (More information on ppm concentrations from Worldchanging.)
Those who still doubt the scientific evidence of climate change should pause to note that despite the complexity of the system, we can in fact measure discrete things very accurately, and what's more, by multiple techniques. Presented here are two independent observations of the same phenomenon that are in very close agreement. The observations were widely separated geographically and geologically, and this greatly increases our confidence in the conclusion of rapidly rising atmospheric CO2 levels.
Where is all the Carbon?
Carbon will seek to find a chemical equilibrium. As we push more carbon into the atmosphere by burning fossil fuels, we change the equilibrium and the carbon concentrations in these various deposits changes. A Giga Tonne of Carbon (1 GtC) is 1 billion tonnes. At current rates the increase of CO2 in the ocean results in increased acidity which is reducing the ocean productivity by lowering the growth of plankton and killing coral reefs. Burning forests and deforestation release the carbon trapped in vegetation into the atmosphere. The Indonesian forest fires of recent times released as much as 0.7GtC to the atmosphere.
The thing that we try to understand when looking at this slide is the flows of CO2 through the reservoirs where it is "stored". It is comforting to note that the reservoirs are much larger than the flows, which gives us hope of slowing and even reversing the buildup.
How is Human Activity changing the carbon balance?
Fossil fuels were carbonaceous things hundreds of millions of years ago that over time (heat and pressure) became oil, gas, coal, and those things we generally know as fossil fuels.
We now burn those things at a rate much faster than the oceans and other natural systems can absorb them. As a measure of the natural rate at which carbon is stored via photosynthesis, the current estimate is around 40 gigawatts (GCEP, Stanford, Exergy Flows). That is the rough rate at which new oil and coal is being made - if you wait a few million years to harvest it.
The cartoon above shows you a very simple form of the carbon picture. At the rate of 2 GtC/yr the acidity of the ocean actually increases. This implies that even if we only add 2 GtC to the atmosphere and consequently to the oceans, we have another problem (ocean acidification) as well as the CO2 problem and climate change problem to deal with. The result of trying to force the 7 GtC into the atmosphere with only 2 GtC coming back out is a net increase of 5 GtC into the atmosphere yearly; consequently the CO2 concentration in the atmosphere increases.
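That arithmetic converts directly into the observed concentration trend. A minimal sketch, using the common approximation that 1 ppm of atmospheric CO2 corresponds to about 2.13 GtC:

    # Net carbon flux into the atmosphere -> CO2 concentration growth
    emitted = 7.0       # GtC per year pushed into the atmosphere
    absorbed = 2.0      # GtC per year taken back up by the oceans
    net = emitted - absorbed            # 5 GtC/yr stays in the air
    gtc_per_ppm = 2.13                  # standard conversion factor
    print(f"{net:.0f} GtC/yr ~ {net / gtc_per_ppm:.1f} ppm/yr rise")  # ~2.3 ppm/yr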
The result of CO2 change is climate change.
The extra CO2 in the atmosphere creates something now widely known as “the greenhouse effect”. Through mechanisms described in much more detail in the resources, heat is trapped within the blanket of earth’s atmosphere and contributes to the heating of the whole planet. This is a complex phenomenon (for a great case study read about the atmosphere in the “winds of change”). This is why it will heat in some places and cool in others even when the overall, or average, trend is for global warming.
In the past 25 years (as you will have seen in Inconvenient Truth) the temperature has risen sharply, breaking all sorts of records.
Both the first two graphs of CO2 concentration in the atmosphere are rather misleading, because the y-axis does not start at zero. The first graph, for example, makes it seem as if the current CO2 concentration is almost 6 times higher than it was a thousand years ago, when it is actually 1.3 times as high. That factor of 1.3 is probably very frightening and important, but the graph itself is a bit absurd.
Over 400 World Wide Prominent Scientists Disputed Man-Made Global Warming Claims in 2007. See http://tinyurl.com/2dv6nz
The idea that the atmosphere behaves like a greenhouse is a misunderstood concept, it is not correct.
In the atmosphere water vapour is a major component (1%-4%) and it has similar optical properties to glass. The water molecule is in the form of a shallow V with two hydrogen atoms attached to the heavier central oxygen atom. It absorbs radiation by the various modes of vibration of this system and by rotation (think of it as two light balls attached to a heavy one by springs). Pure liquid water is actually blue, as it absorbs slightly in the red, though heavy water is colourless because the heavier atoms do not vibrate so readily at optical frequencies. Like the greenhouse glass, the water vapour acts as a one-way energy valve. Thus not only does water keep the planet warm, but it also maintains a fairly constant temperature. It is this greenhouse effect that makes life on earth possible.
The other and major mechanism in the greenhouse is the inhibition of convection. This is not ideal, as there is heat loss from the roof by convection. The Earth, in contrast, loses no heat by convection, but only by radiation. Convection in the oceans and atmosphere merely serves to redistribute the heat.
Find more about computer models here.
The unit circle is the set of all points (cos t, sin t), but nesting the sine function in this formula, (cos t, sin(sin t)), gives the sine oval, which becomes progressively more squarish with increased levels of nesting (controlled by the slider). Change the graph to "sin" to examine the nested sine function alone, whose squarish behavior is not shared by the other trig functions.

The plotted points are (cos t, sin(sin(...sin(t)...))), with the sine applied n times, where n is some positive integer. Using n = 1 gives the unit circle. For larger n, the y coordinate is given by the n-fold nested sine, and increasing the nesting of the sine function results in an increasingly squarish oval. Similarly, a plot of the nested sine function approaches a simple square wave of decreasing amplitude.
It is logical to also inquire what happens if other trigonometric functions are nested, but it turns out that their behavior is quite different: the nested cosine function never becomes square but approaches and oscillates about a constant value of 0.739085…. The other basic trig functions show increasing regions of oscillation, generally between +∞ and -∞. The exception is cosecant, which in an increasing number of intervals of decreasing size approaches +1 or -1, and is undefined at the boundaries of these intervals. | <urn:uuid:735bb7fb-41e9-4fec-884e-55dc39175685> | 3.28125 | 244 | Tutorial | Science & Tech. | 35.585032 |
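The contrasting behaviours are easy to reproduce numerically; a short sketch iterating both functions from the same starting point:

    import math

    x_sin, x_cos = 1.0, 1.0
    for n in range(200):
        x_sin = math.sin(x_sin)  # creeps toward 0, ever more slowly
        x_cos = math.cos(x_cos)  # converges to the fixed point of cosine
    print(f"sin iterated: {x_sin:.6f}")  # small and still shrinking
    print(f"cos iterated: {x_cos:.6f}")  # 0.739085..., where cos(x) = x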
Comprehensive Description
Biology: A pelagic (Ref. 26340) schooling species usually found in offshore reefs (Ref. 9710). Juveniles are encountered along shores of sandy beaches, also over muddy bottoms (Ref. 9626). May penetrate into brackish water and ascend rivers. Feeds on fishes, shrimps, and other invertebrates (Ref. 3277). Often approaches divers (Ref. 9710). Eggs are pelagic (Ref. 4233).
An Introduction to the Philosophy of Mathematics
Table of Contents
1. Mathematics and its philosophy
2. The limits of mathematics
3. Plato's heaven
4. Fiction, metaphor, and partial truths
5. Mathematical explanation
6. The applicability of mathematics
7. Who's afraid of inconsistent mathematics?
8. A rose by any other name
9. Epilogue: desert island theorems.
The Arctic is warming much faster than the rest of the planet, and as a result sea ice is receding, opening northern sea routes. This will increase the level of commercial shipping and give easier access to the resource wealth of the region (hydrocarbons, minerals, and fish). The detrimental effect on land and marine wildlife will be compounded by the pollution that comes with increased commercial activity.
With the economic gain comes the desire to protect rights and investments, and the resulting potential for conflict. All of this is at odds with the traditional livelihoods of the Arctic’s indigenous peoples.
So far, there has been a remarkable spirit of cooperation among Arctic stakeholders as they recognize the common problems and needs that they all face.
Arctic States and some non-Arctic ones have expressed interest in policies across all areas: safety, the environment, sustainable economic development, sovereignty, and indigenous and social development. Of particular relevance in this study is the European Union, which has had a northern policy since 1999 and will be issuing a revision in 2012.
The Arctic is a challenging region: distances are vast, the weather is difficult, and for much of the year it is dark. Although increasing, Arctic populations are small. Space technologies have many attributes that make them ideal for application in the Arctic context: satellites can see remote areas that could not be accessed in any other way, they can cover wide areas with relatively little infrastructure and they can provide types of information that are not available from any other source.
Space technologies can contribute to Arctic policy priorities in many ways.
The report shows convergence of policies among states, as well as with the capabilities of satellite systems. Space technologies have been contributing to Arctic policy priorities for quite some time. However, these assets will need to be renewed and enhanced if the increasing future challenges of the Arctic are to be met.
Polar View is the Arctic and Antarctic component of the Global Monitoring for Environment and Security Service Element (GSE) initiative of the European Space Agency and the European Union. It is a collaborative project involving about 20 partners, including research institutes, government agencies and private sector technology, environmental and engineering firms. | <urn:uuid:0f49cb98-8ed4-457b-a128-b4211457d7d4> | 3.3125 | 436 | Knowledge Article | Science & Tech. | 21.826487 |
Our astronaut William Anders must keep warm when he is in space. NASA spent a long time designing and making materials that would keep heat in and stop astronauts getting cold. A material that does not let heat pass through easily is called a THERMAL INSULATOR. Click on the button below to try an online experiment with thermal insulators. When you have looked at it, think about how you could set up the same experiment in your classroom.

Scroll down the page for more ideas to help you.

When you have looked at the experiment, try the following things in your classroom:
- Try the experiment for real using different materials.
- Vary the times allowed for cooling. Graph the results (a sketch for this step follows the list). What do you notice?
- What other experiments could you design to investigate thermal insulators?
- Can you design an experiment to measure keeping things cool?
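For the graphing step, classes with access to a computer could also compare their measurements against Newton's law of cooling. The sketch below uses invented cooling constants for two wrappings; real values would come from the experiment:

    import math

    # Newton's law of cooling: T(t) = T_room + (T_start - T_room) * exp(-k t)
    T_room, T_start = 20.0, 80.0    # degrees C
    materials = {"foil wrap": 0.08, "wool wrap": 0.03}  # per-minute constants
    for name, k in materials.items():
        temps = [T_room + (T_start - T_room) * math.exp(-k * t)
                 for t in range(0, 31, 5)]              # every 5 minutes
        print(name, [round(T, 1) for T in temps])

    # The curve that falls more slowly belongs to the better thermal insulator.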
Check out some similar questions!
Chemistry ! [ 2 Answers ]
What steps do I take to solve how many moles of atoms are in 40.1g Ca
Chemistry [ 1 Answers ]
What are the types of these compounds: zinc sulfite, calcium hydride and dichromic acid?
Chemistry question about Organic Chemistry [ 9 Answers ]
Could you explain to me why I ended up with two marks out of a possible 6 in these questions? I think that I answered them correctly but I am not sure so I would like your help so that then I may tell my instructor.
Chemistry [ 2 Answers ]
Sir, what is a Carnot engine?
New to Chemistry [ 6 Answers ]
I just went back to school after 15 years and am really having a hard time with chemistry formulas.
View more Chemistry questions
by Duncan Steel

This article was printed in the IMO's journal WGN.
There is evidence that there were two massive bolide explosions which occurred over South America in the 1930's. One seems to have occurred over Amazonia, near the Brazil-Peru border, on August 13, 1930, whilst the other was over British Guyana on December 11, 1935. It is noted that these dates coincide with the peaks of the Perseid and Geminid meteor showers, although any association with those showers is very tentative. The identification of such events is significant in particular in that they point to the need for re-assessment of the frequency of Tunguska-type atmospheric detonations.
1 - The Rio Curaca
In 1989 an article by N. Vasilyev and G.V. Andreev in the IMO Journal (1) drew attention to a discussion, published in 1931 by L.A. Kulik (2), of a possible Brazilian counterpart to the Tunguska explosion of 1908. The Brazilian event, which occurred on August 13, 1930, was described in the papal newspaper L'Osservatore Romano, the report being derived from Catholic missionaries working in Amazonia. That report, in Italian, was used as the basis of a front-page story in the London newspaper The Daily Herald (since closed down), which was published on March 6, 1931, and then seen by Kulik. (For the interested reader, a copy of the story is reprinted at the end of this article.)

The locality of the explosion gives it its name: the Rio Curaca event. This is close to the border between Brazil and Peru, at latitude 5 degrees South, longitude 71.5 degrees West.
Both of these newspaper stories were discussed in a recent paper by Bailey and co-workers (3), who provide an English translation of the story which appeared in L'Osservatore Romano. Since that paper should be accessible to many readers of WGN, I will not give an extensive account of it here. I will, however, just mention that although the eye-witness accounts do cover the phenomena which one might expect to be produced by a massive bolide, there are some other interesting reported observations which would require some explanation. These include the following:
- An ear-piercing "whistling" sound, which might be understood as being a manifestation of the electrophonic phenomena which have been discussed in WGN over the past few years.
- The sun appearing to be "blood-red" before the explosion. I note that the event occurred at about 8h local time, so that the bolide probably came from the sunward side of the earth. If the object were spawning dust and meteoroids - that is, it was cometary in nature - then, since low-inclination, eccentric orbits produce radiants close to the sun, it might be that the solar coloration (which, in this explanation, would have been witnessed elsewhere) was due to such dust in the line of sight to the sun. In short, the earth was within the tail of the small comet, if this explanation is correct.
- There was a fall of fine ash prior to the explosion, which covered the surrounding vegetation with a blanket of white: I am at a loss with regard to this, if the observation is correct (and not mis-remembered as being prior-, rather than post-impact).
Bailey et al. also discuss the fact that the Rio Curaca event occurred on the day of the peak of the annual Perseid meteor shower, but conclude that this is likely to be purely a coincidence. The date is also close to August 10, on which day in 1972 a large bolide was filmed skipping through the upper atmosphere above western Wyoming and Montana, departing from the earth above Canada (4). Again, this may be merely a coincidence.

A brief discussion of the event is also given by R. Gorelli in the August 1995 issue of Meteorite! magazine.
2 - The Rupununi
I now move on to the suspected explosion over British Guyana in 1935. The main source for information on this event is a story entitled Tornado or Meteor Crash? in the magazine The Sky (the forerunner of Sky and Telescope) of September 1939 (5). A report from Serge A. Korff of the Bartol Research Foundation, Franklin Institute (Delaware, USA) was printed, he having been in the area - the Rupununi region of British Guyana - a couple of months later. The date of the explosion appears to have been December 11, 1935, at about 21h local time. I might note that this is near the date of the peak of the Geminid meteor shower, but yet again this may be merely a coincidence. The location is given as being near latitude 2 deg 10 min North, longitude 59 deg 10 min West, close to Marudi Mountain.
be greater than that involved in the Tunguska event itself. On his
suggestion, a message was sent to William H. Holden, who in 1937 was
in the general region with the Terry-Holden expedition of the
American Museum of Natural History. That group hiked to the top of
Marudi Mountain in 1937 November and reported seeing an area some
miles across where the trees had been broken off about 25 feet above
their bases, although regrowth over two years in this tropical
jungle had made it difficult to define the area affected.
confirmed, on returning to New York, that he believed the
devastation was due to an atmospheric explosion of cosmic origin.
explorer and author, Desmond Holdridge, also visited the region in
the late 1930's and confirmed the suspicion that a comet or asteroid
detonation was responsible.
Korff obtained several local reports, the best being from a Scottish
gold miner, Godfrey Davidson, who reported having been woken by the
explosion, with pots and pans being dislodged in his kitchen, and
seeing a luminous residual trail in the sky.
A short while later,
whilst prospecting, he cam across a devastated region of the jungle
he estimated to be about five by ten miles (8 by 16 kilometers),
with the trees all seeming to have been pushed over.
Holden was unsure of the origin of the flattening of the forest, and
pointed out that similar destruction can result from tornados.
Holdridge, however, reported eye-witness accounts in accord with a
large meteoroid/small asteroid entry, with a body passing overhead
accompanied by a terrific roar (presumably electrophonic effects),
later concussions, and the sky being lit up like daylight.
aircraft operator, Art Williams, reported seeing an area of forest
more than twenty miles (32 kilometers) in extent which had been
destroyed, and he later stated that the shattered jungle was
elongated rather than circular, as occurred at Tunguska and would be
expected from the air blast caused by an object entering away from
the vertical (the most likely entry angle for all cosmic projectiles
is 45 degrees).
There is a report of the Guyanan event, largely derived from the account in The Sky, in the newsletter Meteor News for March 1974. Apparently as a result of that, the publishers (Karl and Wanda Simmons, of Callahan, Florida) had some correspondence with a Mr. F.A. Liems of Paramaribo, Surinam, concerning a possible crater/event at Wahyombo in that country; he gives the location as latitude 5.25 deg North, longitude 56.05 deg West. The letters date from 1976; apparently Liems died in 1982. In 1990, as a result of Andreev's article in WGN about the Brazilian event, Wanda Simmons sent copies to him, and he kindly sent copies on to me. Notes/maps/letters are included, but it is difficult to know what to make of them: my impression is that this concerns something that occurred some time ago, not in this century, and its linkage with an incursion by an asteroid or comet is far from clear.
1) N. Vasilyev, G. Andreev, WGN 17:6, 1989, pp. 247-248.
2) L.A. Kulik, Priroda i Ljudi 13-14, 1931, p. 6.
3) M.E. Bailey, D.J. Markham, S. Massai, J.E. Scriven, The Observatory 115, 1995, pp. 250-253.
4) Sky and Telescope 44, 1972, pp. 269-272.
5) The Sky, September 1939, pp. 8-10 and p. 24.
Below is the wording of the newspaper article printed in The Daily Herald on March 6, 1931.

Another colossal bombardment of the earth from outer space has just been revealed. Three great meteors, falling in Brazil, fired and depopulated hundreds of miles of jungle. News of this catastrophe has only now reached civilization because the meteors fell in the remote S. American interior. It was yet another lucky escape of mankind from an appalling and unrealized peril.

The last great meteor fell in Siberia in 1908, in a district so remote that only last year were details of its destruction given to the world. Had either of these two meteor falls chanced to strike a city in a densely populated country, frightful loss of life and damage would have been caused.

"A meteor", Mr. C.J.P. Cave, an ex-president of the Royal Meteorological Society, stated recently, "carries in front of it a mass of compressed and incandescent air. When it strikes the earth, this air 'splashes' in a hurricane of fire..."

The Brazilian meteors are reported (says the Central News) by Father Fidello of Aviano, writing from San Paulo de Alivencia in the state of Amazonas, to the papal newspaper, 'L'Osservatore Romano'. The meteors fell almost simultaneously during an amazing storm. Terrific heat was engendered. Immediately they struck the ground the whole forest was ablaze. The fire continued uninterrupted for some months, depopulating a large area.

The fall of the meteor was preceded by remarkable atmospheric disturbances. At 8 o'clock in the morning the sun became blood-red and a penumbra spread all over the sky, producing the effect of an eclipse. Then an immense cloud of reddish powder filled the air and it looked as if the whole world was going to blaze up. The powder was succeeded by fine cinders which covered trees and vegetation with a blanket of white. There followed a whistling sound that pierced the air with ear-breaking intensity, then another and another. Three great explosions were heard and the earth trembled.

The Siberian meteor of 1908 completely destroyed the forest over an area of 70 miles in diameter. Its roar was heard 600 miles away and its glare maintained twilight all night even in England.
What are the disadvantages of solar power?
Solar power has few disadvantages. It offers the promise of free, clean, reliable energy, as well as a slew of other advantages on a larger social and economic scale. However, solar power is not perfect.
We can rely on the sun rising every morning, and we even know its whereabouts in the sky, but as surely as it rises, it must go down. Therefore, solar panels can only collect solar energy about half the time. Given our relatively poor ability to store electricity at this time, this intermittency is a disadvantage. Grid-tied systems are a happy compromise and largely mitigate this problem for home solar power systems.
High up-front costs are another disadvantage for solar power. Federal, state, local and/or utility rebates and incentives are very generous right now, which makes home solar power very affordable in terms of end costs. But up front, homeowners must still come up with several thousand dollars and then wait for tax credits to come through. There are loans and lease options available to cover these costs, and costs are continually dropping as supply increases and technology improves.
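One way to reason about those up-front costs is a simple payback estimate. The figures below are placeholders rather than quoted prices:

    # Simple payback period for a home solar system (illustrative numbers only)
    gross_cost = 15000.0      # installed cost, dollars
    incentives = 6000.0       # rebates and tax credits, received later
    annual_savings = 1200.0   # avoided electricity purchases per year
    payback_years = (gross_cost - incentives) / annual_savings
    print(f"simple payback: {payback_years:.1f} years")  # 7.5 years here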
For many newer solar power technologies, degradation of solar cell materials is another disadvantage. While current solar panels on the market will last 25-30 years and longer at reasonable efficiencies, they also cost a lot. The quest for high-efficiency low-cost solar panels is continually hindered by degradation. But once again, technology is improving so fast that today's major obstacle will likely be tomorrow's minor inconvenience.
Bats can see as well as humans can, but they have also evolved a sophisticated method of using sound, called echolocation, that enables them to navigate and find food in the dark. Bats echolocate by emitting high-frequency sound pulses through the mouth or nose and listening to the echo. From this echo, a bat can determine the size, shape and texture of objects in its environment. Bat echolocation is so sophisticated that these animals can detect an object the width of a human hair.
Shouting Bats & Whispering Bats
Bats can be broadly characterized by their echolocation calls as shouting bats and whispering bats. Big brown bats and little brown bats are shouters and produce sounds (if we could hear them) of 110 decibels, similar to the loudness of a smoke alarm. Northern long-eared bats are whispering bats and produce sounds of 60 decibels (similar to the level of normal human conversation). Shouters tend to forage for food in open spaces; whisperers glean insects from the foliage of trees and forage in the cluttered environments of forest interiors.
Not All Bats Echolocate
About 70% of all bat species worldwide have this ability. Also, bats aren't the only animals that use echolocation. Whales, dolphins, porpoises, oilbirds and several species of shrews, tenrecs, and swiftlets use a similar technique.
Most bat echolocation occurs beyond the range of human hearing. Humans can hear from 20 Hz to 15-20 kHz, depending on age. Bat calls can range from 9 kHz to 200 kHz.
Some bat sounds humans can hear. The squeaks and squawks that bats make in their roosts or which occur between females and their pups can be detected by human ears, but these noises aren't considered to be echolocation sounds.
All Bats in Maryland Echolocate
All bats in Maryland echolocate and all eat insects. Lots of them. Bats can eat more than 50% of their body weight in insects each night. Nursing females may eat their entire body weight each night-as many as 4,500 or more small insects, including insects which are agricultural pests or garden pests. One study documented that a big brown bat maternity colony of 150 bats ate 38,000 cucumber beetles, 16,000 June beetles, 19,000 stinkbugs, and 50,000 leafhoppers in a summer!
Moths are food for many bats and some moths have evolved many interesting tactics to survive bat attacks. Some species have fuzzy wings that will reflect bat echolocation pulses. Other moths in the families Noctuidae and Arctiidae have "ears" which can sense bat echolocation. These "ears" are membranes stretched over sensors and can be located on the head, body or wings of the moth. Once a bat is detected, these moths may fly in loops, make noises to startle the bat, or fold up their wings and dive to avoid capture.
Some bats have evolved methods to counter moth evasive maneuvers such as producing pulses that can detect fuzzy wings. Other bats will use frequencies beyond the range of insect hearing or confuse insects by flying erratically. Bats also fly erratically in the act of catching insects. Although bats will catch insects on the fly with their mouths, they also can scoop insects with their tail or wing membrane and then reach down and snatch the insect with their mouth. This maneuver results in the erratic flight people observe when bats are feeding.
Eavesdropping on Bats
Humans have also devised methods for eavesdropping on bats. Bat detectors are machines with ultrasonic microphones that can detect bat echolocation and output the incoming call within the range of human hearing, allowing bat enthusiasts to "hear" bats as well as see them searching for and catching food. With experience, observers can use bat detectors to determine whether bats are present or absent in an area.
Photo of Anabat Detector
Courtesy of Titley Electronics Pty Ltd
Manufacturers of the Anabat bat detection and identification system, and radio telemetry equipment - operating under a quality system, based on the International Standard, ISO 9002. P.O. Box 19, Ballina NSW 2478, Australia - www.titley.com.au | <urn:uuid:2a3e0322-5bf8-4bc6-821e-417cff5df584> | 4.03125 | 902 | Knowledge Article | Science & Tech. | 51.092886 |
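One common detector design divides the ultrasonic call frequency down into the audible range by counting zero crossings. The sketch below illustrates that general idea only; the class name, sample rate and call frequency are invented for illustration, and real detectors also deal with amplitude, noise and display.

```csharp
using System;

// A minimal sketch of frequency division: every 10th zero crossing of the
// ultrasonic input toggles a square-wave output, so a 45 kHz call is heard
// as a 4.5 kHz tone.
class FrequencyDivisionSketch
{
    static void Main()
    {
        const int sampleRate = 250000;  // samples per second (synthetic signal)
        const double callHz = 45000.0;  // synthetic "bat call" frequency
        const int division = 10;

        int n = sampleRate;             // one second of signal
        int crossings = 0, toggles = 0;
        double previous = 0.0, level = 1.0;

        for (int i = 0; i < n; i++)
        {
            double sample = Math.Sin(2 * Math.PI * callHz * i / sampleRate);
            bool crossed = (sample < 0) != (previous < 0);
            if (i > 0 && crossed && ++crossings % division == 0)
            {
                level = -level;         // toggle after every 10th crossing
                toggles++;
            }
            previous = sample;
        }

        // Two toggles make one output cycle: expect callHz / division.
        Console.WriteLine($"Input: {callHz} Hz, audible output: ~{toggles / 2.0} Hz");
    }
}
```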
Habitat and Ecology:
Behaviour This species is fully migratory, the main routes of migration being along Arctic coastlines (Snow and Perrins 1998). It arrives on the breeding grounds in early June (Madge and Burn 1988, Scott and Rose 1996), where it may breed in small, loose colonies (Madge and Burn 1988, del Hoyo et al. 1992, Snow and Perrins 1998) or dispersed in single pairs (Snow and Perrins 1998), especially in the high Arctic where the habitat is unsuitable for large groups (Kear 2005a). There is a high degree of synchrony in egg laying and hatching (Johnsgard 1978), with the adults moulting c.10 days after the young hatch (mid-July to mid-August (Scott and Rose 1996)), during which they become flightless for c.21-30 days (Johnsgard 1978, Scott and Rose 1996). Most individuals moult near the breeding grounds (Scott and Rose 1996), although immatures, unsuccessful breeders (Johnsgard 1978) and some more southerly breeding groups (Flint et al. 1984) may undertake pre-moult migrations (Johnsgard 1978) and form large moulting concentrations well away from nesting areas (Flint et al. 1984). After the post-breeding moult, flocks leave the breeding grounds in early September, with some arriving in wintering areas as early as mid-September and others making stopovers en route and arriving later (Madge and Burn 1988). It leaves its wintering quarters again from mid-March to mid-April (Madge and Burn 1988). During the non-breeding season the species remains gregarious, gathering in groups of only a few to several thousands of individuals (Snow and Perrins 1998), although it is rarely found in very large flocks (Kear 2005a).

Habitat Breeding The species breeds in coastal Arctic tundra (del Hoyo et al. 1992), in or close to wet coastal meadows with abundant grassy vegetation (Kear 2005a) and on tundra-covered flats with tidal streams (only just above the high tide line) (Johnsgard 1978). In some parts of its range it shows a preference for nesting on small grassy islands (Johnsgard 1978, Madge and Burn 1988, Kear 2005a) in tundra lakes and rivers, especially if nesting Sabine's Gulls Xema sabini (Kear 2005a), Snowy Owls Bubo scandiaca (Flint et al. 1984, Kear 2005a), Peregrine Falcons Falco peregrinus (Flint et al. 1984) or large raptors are present to deter predators (Kear 2005a). High Arctic nesters may also breed widely dispersed over icy tundra, well away from water (Kear 2005a). Non-breeding Outside of the breeding season the species becomes predominantly coastal, inhabiting estuaries (del Hoyo et al. 1992, Kear 2005a), tidal mudflats (Madge and Burn 1988, Kear 2005a), sandy shores (del Hoyo et al. 1992), coastal saltmarshes (Kear 2005a) (especially in the spring) (Scott and Rose 1996) and shallow muddy bays (Kear 2005a). In recent years the species has taken to grazing on coastal cultivated grasslands (Madge and Burn 1988, Scott and Rose 1996) and winter cereal fields (Scott and Rose 1996), but rarely occurs on freshwater wetlands except on passage (Madge and Burn 1988).

Diet The species is mainly herbivorous (del Hoyo et al. 1992), although it may take animal matter (e.g. fish eggs, worms, snails and amphipods) (Johnsgard 1978). Breeding In its breeding habitat the diet of the species consists of mosses, lichens, aquatic plants (del Hoyo et al. 1992), sedges, tundra grass Dupontia spp., arrowgrass Triglochin spp. and saltmarsh grass Puccinellia spp. (Alaska) (Kear 2005a), although the young may also take insects and aquatic invertebrates (Johnsgard 1978).
Non-breeding Outside of the breeding season the species predominantly takes marine microscopic and macroscopic algae (del Hoyo et al. 1992) (e.g. seaweeds, Ulva spp. (Kear 2005a)) and other aquatic plants linked with saline or brackish waters (del Hoyo et al. 1992) in the intertidal zone (e.g. especially eelgrass Zostera spp. (Madge and Burn 1988, Kear 2005a), as well as Ruppia maritima, Spartina alterniflora, Salicornia spp., and arrowgrass Triglochin spp.) (Kear 2005a).

Breeding site The nest is a shallow depression (Flint et al. 1984, del Hoyo et al. 1992) on the ground (del Hoyo et al. 1992). Although the species often nests close to water (del Hoyo et al. 1992), typically within a few hundred metres of the tideline (Snow and Perrins 1998), high Arctic nesters may breed on icy tundra well away from water (Kear 2005a) (some up to nearly 10 km inland) (Snow and Perrins 1998), often near boulders where the snow clears first (Kear 2005a).

Management information An investigation carried out in one of the species's wintering areas (UK) found that it was most likely to forage on dry, improved grasslands that had high abundances of the grass Lolium perenne, were between 5 and 6 ha in area, and were at a distance of up to 1.5 km inland or 4-5 km along the coast from coastal roosting sites (Vickery and Gill 1999). The species was found to show a preference for grasslands with short, dense swards c.5 cm in height, a characteristic that can be gained through summer management plans involving either mechanical cutting, livestock (sheep or cattle) grazing regimes, or cutting and then grazing (although over longer periods of time the selective grazing of sheep rather than cattle, and frequent rather than infrequent cutting, may be more likely to enhance tillering and produce the short, dense sward favoured by this species) (Vickery and Gill 1999). Fertilising the grassland with nitrogen in the autumn at a rate of 50 kg N ha-1 was found to increase the overall species use of the habitat by 21% compared with unfertilised areas (Vickery and Gill 1999), and fertilising at a rate of 75 kg N ha-1 was found to increase the overall species use of the habitat by 9-29% and to remove any preference the geese showed for short sward heights (between 5 and 11 cm) (Vickery and Gill 1999). In other fertilising experiments, grazing intensity of the species was found to increase linearly with increasing levels of fertiliser (from 0 kg N ha-1 to 150 kg N ha-1), although responses in grazing intensity at fertiliser levels lower than 50 kg N ha-1 were found to be short-lived (c.2 months after fertiliser application) (Vickery and Gill 1999). | <urn:uuid:63daa7fe-9778-4dd8-babc-7b92cd87700a> | 3.28125 | 1,529 | Knowledge Article | Science & Tech. | 60.529115 |
What's the one sure-fire way to save an animal from extinction? Turn it into a pet, says Michael Archer, director of the Australian Museum in Sydney. With many of the country's native animals in serious trouble, he wants Australians to stop canoodling with dogs and cats and cuddle a quoll or a possum instead. Besides, says Archer, native animals are kinder to the environment-and a lot more fun. Stephanie Pain meets the man with a mission to make you love marsupials.
Why should Australians swap their traditional pets for native animals?
Australia is fundamentally different from every other continent and it's filled with unique animals. Since Europeans arrived, native species have been disappearing almost as fast as those obliterated by the last great mass extinction 65 million years ago. And what we lose here, the whole world loses. Traditional methods of conservation clearly aren't enough, so we have to explore ...
| <urn:uuid:2680c817-b031-49d8-9a86-155271c4f59d> | 2.875 | 214 | Truncated | Science & Tech. | 47.467045 |
There are three main groups of meteorite. They differ in the amount of iron-nickel metal they contain.
Each group of meteorites is split into many more classes and types depending on the minerals they contain, their chemistry and their structure.
Most iron meteorites are thought to be the cores of asteroids that melted early in their history. They consist mainly of iron-nickel metal with small amounts of sulphide and carbide minerals.
Stony-iron meteorites consist of almost equal amounts of iron-nickel metal and silicate minerals and are amongst the most beautiful of meteorites.
The majority of meteorite falls are stony meteorites consisting mainly of silicate minerals. | <urn:uuid:a4416017-fc55-47f3-aaf0-d39e2a97d626> | 3.609375 | 142 | Knowledge Article | Science & Tech. | 32.696055 |
First of all, I'm Daison, 21 y/o, from the Philippines.
Before diving into PHP, you must learn the basics, because you can't create content without HTML and CSS.
You can learn HTML here.
After learning HTML, combine it with CSS. CSS is, in reality, like the paint and tools you use to design a wall or a building: HTML is the structure, while CSS is the design of your web page. Learn how to use those tools.
After doing the above:
You may now go to the PHP section of w3schools.com
, PHP is a scripting language developed for web applications. Its syntax borrows a lot from the Perl programming language. PHP is a "server side" script, which means the code runs on a web server, so you need a PHP interpreter; I suggest you use "XAMPP" as a development tool only.
For advanced PHP, go to:
Do not skip ahead to my advanced suggestions! The basics are the most important, and you are required to learn them before going on to the advanced methods.
| <urn:uuid:46b0aa57-4be1-4ff8-b2c9-3dc94a245f38> | 2.796875 | 255 | Comment Section | Software Dev. | 73.276588 |
Quantifying Seasonal Air-Sea Gas Exchange Processes Using Noble Gas Time-Series: A Design Experiment
Journal of Marine Research, 2006
A multi-year time-series of measurements of five noble gases (He, Ne, Ar, Kr, and Xe) at a subtropical ocean location may allow quantification of air-sea gas exchange parameters with tighter constraints than currently available by other methods. We have demonstrated this using a one-dimensional upper ocean model forced by 6-hourly NCEP reanalysis winds and heat-flux for the Sargasso Sea near Bermuda. We performed ensemble model runs to characterize the response of the modeled noble gas saturation anomalies to a range of air-sea gas exchange parameters. We then used inverse calculations to quantify the sensitivity of the parameters to hypothetical observations. These calculations show that with currently achievable measurement accuracies, noble gas concentrations in the Sargasso Sea could be used to constrain the magnitude of equilibrium gas exchange to ± 11%, the magnitude of the total air injection flux to ±14%, and the magnitude of net photosynthetic oxygen production to ±1.5 mol O2 m-2 y-1. Additionally, we can use noble gases to quantify the relative contributions of bubbles that are partially dissolved to bubbles that are completely dissolved. These constraints are based on idealized assumptions and may not fully account for some of the uncertainties in the meteorological data, in lateral transport processes, and in the solubilities of the noble gases. As a limited demonstration, we applied this approach to a time series of He, Ne, Ar, and O2 measurements from the Sargasso Sea from 1985 to 1988 (data from Spitzer, 1989). Due to the limited number of gases measured and the lower accuracy of those measurements, the constraints in this example application are weaker than could be achieved with current capabilities.
FILE » stanley2006_38764.pdf | <urn:uuid:e08b193c-0f77-4be7-a7f7-8d126a22c0fe> | 2.765625 | 391 | Academic Writing | Science & Tech. | 20.565473 |
The presence of neutrons in atomic nuclei accounts for the occurrence of isotopes: samples of an element whose atoms contain different numbers of neutrons and hence exhibit different "nuclidic masses". The nuclidic mass is the mass of a "nuclide", where a nuclide is the term used for any atom whose nuclear composition (number of protons and neutrons) is defined. For example, naturally occurring hydrogen has two stable nuclides, ¹₁H and ²₁H, which also are isotopes of one another. More than 99.98 percent is "light" hydrogen, ¹₁H. This consists of atoms each of which has one proton, one electron, and zero neutrons. The rest is "heavy" hydrogen or deuterium, ²₁H, which consists of atoms which contain one electron, one proton, and one neutron. Hence the nuclidic mass of deuterium is almost exactly twice as great as that of light hydrogen. By transmutation of lithium, it is also possible to obtain a third isotope, tritium, ³₁H. It consists of atoms whose nuclei contain two neutrons and one proton. Its nuclidic mass is about 3 times that of light hydrogen.
The discovery of isotopes and its explanation on the basis of an atomic structure built up from electrons, protons, and neutrons required a change in the ideas about atoms which John Dalton had proposed. For a given element, all atoms are not quite identical in all respects, especially with regard to mass. It is the number and distribution of electrons, which occupy most of the volume of an atom, that determine the chemical behavior of atoms. The number of protons in the nucleus of each element is important in determining its chemical properties, because the total positive charge of the nucleus determines how the electrons are distributed. All atoms of the same element have the same atomic number, but different isotopes have different nuclidic masses. | <urn:uuid:6c263d3d-58c6-4558-bbbd-ccfe902238c2> | 4.09375 | 636 | Knowledge Article | Science & Tech. | 37.608511 |
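As a worked illustration of how nuclidic masses combine, consider the average mass of natural hydrogen (a sketch: the 0.02 percent deuterium abundance is a round figure consistent with the text, and the nuclidic masses are quoted in unified atomic mass units):

$$\bar{m}_{\mathrm{H}} \approx 0.9998 \times 1.0078\,\mathrm{u} \;+\; 0.0002 \times 2.0141\,\mathrm{u} \;\approx\; 1.008\,\mathrm{u},$$

which is why the tabulated atomic weight of hydrogen is so close to, but not exactly, the nuclidic mass of ¹₁H.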
Many plant populations fluctuate between years of high and low seed production. These fluctuations, often called mast-seeding, have typically been attributed to fluctuations in resource availability.
Recent theoretical models show that mast-seeding could also be due to the dynamics of resource allocation within individuals, synchronized among individuals by pollen limitation in low flowering years. This "pollen coupling" mechanism appears to explain synchronous, alternate-year flowering in a perennial wildflower, Astragalus scaphoides. Our past research shows that plants are more pollen limited in low-flowering years, and that pollen coupling models fit to data for this species predict alternate-year flowering in the absence of environmental forcing. Here, we present tests of two additional components of pollen coupling. First, we directly tested whether preventing fruit set caused plants to flower in successive years.
Removing flowers in 2005, a high flowering year, caused these plants to flower again in 2006. Because few other plants flowered in 2006, these plants did not set seed, and flowered again in 2007. We also directly measured resource depletion by quantifying N, P, and nonstructural carbohydrates before flowering, after fruiting, and at the end of the season in plants with flowers removed. | <urn:uuid:79d88bd0-c8e7-42c7-aa6b-6e3bbdfd23d2> | 2.96875 | 252 | Academic Writing | Science & Tech. | 29.234424 |
Refraction is the act of preventing a rule from firing multiple times in succession. Without refraction, at each loop iteration the same rules can be added to the agenda repeatedly. This is because the same conditions are satisfied. To prevent a single rule from firing repeatedly, a refraction condition is implemented. There are many types of refraction rules.
Some systems use a refraction condition that each rule may only be fired once. Once a rule has been fired, it may never be fired again until the system is reset.
In an intermittent rule refraction condition, a rule may never fire twice consecutively, and may not be added to the agenda again until a different rule fires first.
In a changing antecedent refraction condition, a rule may only fire if the attributes in the antecedent have changed. | <urn:uuid:35c5f35c-5c6a-4484-93a4-4f2a7d5bff0e> | 3 | 178 | Knowledge Article | Software Dev. | 37.618774 |
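A minimal sketch of the first refraction condition above ("each rule may only be fired once"), with all names and types invented for illustration; production engines implement refraction per activation (the rule plus the particular facts that matched it) inside their agendas:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Rule
{
    public string Name;
    public Func<HashSet<string>, bool> Condition; // antecedent test
    public Action<HashSet<string>> Action;        // consequent
}

class RefractionDemo
{
    static void Main()
    {
        var facts = new HashSet<string> { "temperature-high" };
        var rules = new List<Rule>
        {
            new Rule
            {
                Name = "start-fan",
                Condition = f => f.Contains("temperature-high"),
                Action = f => f.Add("fan-on")
            }
        };

        var fired = new HashSet<string>(); // refraction memory

        bool changed = true;
        while (changed)
        {
            changed = false;
            // Build the agenda: rules whose conditions are satisfied.
            foreach (var rule in rules.Where(r => r.Condition(facts)))
            {
                if (!fired.Add(rule.Name))
                    continue; // refracted: this rule already fired once

                rule.Action(facts);
                Console.WriteLine($"Fired: {rule.Name}");
                changed = true;
            }
        }
    }
}
```

Without the fired set, start-fan would be added to the agenda and fired on every iteration, since its condition remains satisfied.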
There are many geometries of galaxies including the spiral galaxy characteristic of our own Milky Way. In the remarkable deep space photograph made by the Hubble Space Telescope, every visible object except for the foreground stars is another galaxy. The image is a composite of 342 images taken over a period of 10 days with the Wide Field and Planetary Camera 2 on December 18-28, 1995. With an average exposure on the order of 30 minutes, that amounts to about 170 hours of exposure. NASA compares the field of view with that of a dime at 75 feet. The camera was directed almost perpendicular to the galactic plane to get a field of view as clear of foreground stars as possible. The NASA illustration at left shows the location in the constellation Ursa Major. | <urn:uuid:47d2cd93-4cca-4214-9bef-45dee162c83a> | 3.15625 | 149 | Knowledge Article | Science & Tech. | 43.126667 |
Algebraic Rules and Graphing
Rules for Working with Logarithms
The rules for working with logarithms are relatively straight-forward and follow from the definition of the logarithmic function.
The rules for logarithmic operations that you need to be familiar with are as follows:
Log(a × b) = Log(a) + Log(b)
Log(a ÷ b) = Log(a) - Log(b)
Log(a + b) = Log(a + b) ; not much you can do with this!
Log(a - b) = Log(a - b) ; you can't do much here either!
Log(a^b) = b × Log(a)
Though these rules are not proven here they follow rather simply from the definition of logarithms. To save space we won't bother with the proofs. For one thing you can test out these rules on your calculator to check and convince yourself. For instance, try the following:
Log(2 × 3) = Log(2) + Log(3) ?? Is this true?
Log(2 × 3) = Log(6) = 0.77815, so this is the numerical value for the left hand side. Now, Log(2) = 0.30103 and Log(3) = 0.47712, for the right hand side. Adding the values for the right hand side together we get: Log(2) + Log(3) = 0.77815, which is the same as Log(6). Obviously, this does not prove the relation, but perhaps it helps to convince you that the rules are not random and are based on a definition for the logarithm that is consistent with the rules of algebraic powers.
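If you prefer to let a program do the checking, a minimal C# sketch (any language with a base-10 logarithm function would work the same way):

```csharp
using System;

// A quick numerical check of the product and power rules for base-10 logs.
// This proves nothing in general; it just shows the identities holding for
// the sample values used in the text.
class LogRulesCheck
{
    static void Main()
    {
        Console.WriteLine($"Log(2 x 3)      = {Math.Log10(2 * 3):F5}");             // 0.77815
        Console.WriteLine($"Log(2) + Log(3) = {Math.Log10(2) + Math.Log10(3):F5}"); // 0.77815
        Console.WriteLine($"Log(2^3)        = {Math.Log10(Math.Pow(2, 3)):F5}");    // 0.90309
        Console.WriteLine($"3 x Log(2)      = {3 * Math.Log10(2):F5}");             // 0.90309
    }
}
```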
Some Aspects of Logarithms
The definition of the logarithmic function, as you saw in section 8.1, leads to some interesting results that might seem unexpected at first glance.

Log(1) = 0

Try taking Log(1) on your calculator: you get 0, because 10 raised to the power of 0 equals 1.
Negative Answers When Taking Log
Now try taking the logarithm of a number that is less than one but greater than zero; for example, try taking Log(0.5). You will notice that you end up with a negative answer, -0.301029995. So, why is the logarithm of 0.5 a negative number? Again, if one considers the definition of logarithms this makes sense. You are asking what number do I have to raise 10 to the power of in order to get 0.5? If you take 100 you will get 1 and if you take 101 you will get 10. Thus, in order to get a value less than one you'll have to take 10 to the power of some number that is less than zero, thus 10 to the power of a negative number! So, don't be surprised when you take the Log of a number and you end up with a negative answer. If the number you are taking the logarithm of is less than one you will get a negative number out.
Log(0) = undefined!!!!
Finally, try taking Log(0) on your calculator. This is perhaps going to annoy you, and you might even start thinking that your calculator is broken! Log(0) will result in the calculator giving you an error message, meaning that what you are trying to do is not a valid operation. To understand this result, imagine you start taking the logarithm of numbers less than one, like you did above for Log(0.5), but keep taking the Log of smaller and smaller numbers: try Log(0.1), Log(0.01), Log(0.001), which should give you -1, -2, -3 respectively. Each time you will get a more negative number. If you were to iterate this process, getting closer and closer to zero, you would notice that you get ever larger negative numbers. You'll probably run out of patience before you get anywhere, but as you take the Log of very, very small numbers you end up with extremely large negative numbers, until the function becomes undefined because you have gone too close to zero! So Log(0) is undefined, and besides this you don't need to know more about it at this level. In fact, you should only take the Log of numbers that are greater than zero. The logarithmic function we have introduced here is undefined for zero or for numbers less than zero (i.e., Log(-1) also gives you an error).
Graph of the Logarithm Function
To get a larger picture of what the logarithmic function does we can take the function y = Log(x) and plot it on an x-y graph. If you don't feel comfortable graphing functions you should probably review the section on graphing functions. Basically, to plot y = Log(x), one can make a table of x values and take the logarithm of each of these values and create a table of y values. This is shown below:
table of x values: (0.1, 0.5, 1, 2, 5, 10, 20)
table of y values: (-1, -0.301, 0, 0.301, 0.699, 1, 1.301)
(Note: each value in the y-table is equal to the Log of the corresponding value in the x-table ... thus -1 = Log(0.1), -0.301 = Log(0.5), etc ...)
We can then plot an x-y graph, by taking pairs of values from each table and plotting each one as a point on an x-y grid. This yields the following graph: | <urn:uuid:ffd5bcee-5652-4937-a8ce-c67721c24ccd> | 4.125 | 1,219 | Tutorial | Science & Tech. | 80.179248 |
- (latitude, longitude) points $P_1, P_2,\ldots, P_n$.
- Presumably, all the points should form a dense cloud. However, noise is possible.
- The virtual center of the points.
For instance, 99% of the points may lie within a circle with 1km radius, except for 1% scattered outside that circle at a distance larger than 1km from any point inside the circle. Then this 1% is noise.
Unfortunately, I do not know how to define the noise properly. But the virtual center I am looking for should be close enough to most of the points. If most of the points are close to it, then I do not mind that some be far away.
If it is not too hard, I would like to be able to recognize more than one dense cloud amongst the points. In which case, each cloud could be reduced to its virtual center and thus I will have to find the new virtual super center of the virtual center cloud. That super center is the final result.
I am not a mathematician, so my descriptions are vague. But I am pretty sure that this is a well known problem and it probably has a trivial solution.
This question is similar to Detect Abnormal Points in Point Cloud, however, my space is two dimensional, which probably does not matter. Still.
The points are indeed on the surface of a sphere, a spheroid actually, Earth more precisely. However, the distance between them is not large enough to take the Earth's curvature into account, so it may be safely assumed that the surface is flat and longitude is X and latitude is Y. | <urn:uuid:641f8b14-11cd-46b8-a1a8-6274714c9375> | 2.828125 | 345 | Q&A Forum | Science & Tech. | 63.975163 |
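One common answer to this kind of question is the geometric median, which is far less sensitive to a small fraction of distant noise points than the plain centroid. The sketch below computes it with Weiszfeld's iteration, treating longitude as X and latitude as Y per the flat-surface assumption; recognizing several separate clouds would additionally need a clustering step (e.g. DBSCAN), which is not shown. All names and tolerances are illustrative.

```csharp
using System;
using System.Linq;

class VirtualCenter
{
    static (double X, double Y) GeometricMedian((double X, double Y)[] pts)
    {
        // Start from the centroid and refine with Weiszfeld's iteration.
        double cx = pts.Average(p => p.X), cy = pts.Average(p => p.Y);
        for (int iter = 0; iter < 100; iter++)
        {
            double wx = 0, wy = 0, wsum = 0;
            foreach (var p in pts)
            {
                double d = Math.Sqrt((p.X - cx) * (p.X - cx) + (p.Y - cy) * (p.Y - cy));
                if (d < 1e-12) continue;   // coincident point: skip
                double w = 1.0 / d;        // weight points by inverse distance
                wx += w * p.X; wy += w * p.Y; wsum += w;
            }
            double nx = wx / wsum, ny = wy / wsum;
            if (Math.Abs(nx - cx) + Math.Abs(ny - cy) < 1e-12) break; // converged
            (cx, cy) = (nx, ny);
        }
        return (cx, cy);
    }

    static void Main()
    {
        // 99 points near (0,0) plus one far outlier, as in the 99%/1% example.
        var rng = new Random(1);
        var pts = Enumerable.Range(0, 99)
            .Select(_ => (rng.NextDouble() - 0.5, rng.NextDouble() - 0.5))
            .Append((100.0, 100.0))
            .ToArray();
        var c = GeometricMedian(pts);
        Console.WriteLine($"Virtual center: ({c.X:F3}, {c.Y:F3})"); // stays near (0,0)
    }
}
```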
On This Page
Overview and Description
A codepage is a list of selected character codes (characters represented as code points) in a certain order. Codepages are usually defined to support specific languages or groups of languages that share common writing systems. All Windows codepages can only contain 256 code points. The first 128 code points (0-127) represent the same characters in most codepages, to allow for continuity and legacy code. It is the upper 128 code points, 128-255 (0-based), where codepages differ considerably.
For example, codepage 1253 provides the character codes required by the Greek writing system, and codepage 1250 provides the characters for Latin writing systems including English, German and French. It is the upper 128 code points that contain either the accented characters or the Greek characters. Thus you cannot store Greek and German in the same code stream unless you put some type of identifier in place to indicate what codepage you are referencing.
This becomes even more complex when dealing with Asian character sets. Because Chinese, Japanese and Korean contain more than 256 characters, a different scheme needed to be developed, but it had to be based on the concept of 256-character codepages. Thus DBCS (Double Byte Character Sets) were born.
Each Asian character is represented by a pair of code points (thus double-byte). For programming awareness, a set of code points is set aside to represent the first byte of a pair; these lead bytes are not valid unless they are immediately followed by a defined second byte. DBCS meant that you had to write code that would treat these pairs of code points as one, and it still disallowed the combining of, say, Japanese and Chinese in the same data stream, because depending on the codepage the same double-byte code points represent different characters for the different languages.
In order to allow for the storage of different languages in the same data stream, Unicode was created. This one "codepage" can represent more than 64,000 characters, and with the introduction of surrogates it can represent more than 1,000,000. The use of Unicode in Windows 2000 allows for easier creation of world-ready code, because you no longer have to worry about which codepage you are addressing, nor whether you have to group code points to represent one character.
Please note that when writing Unicode applications for Win95/98/ME you still need to convert the Unicode code points back to Windows codepages. This is because the Win95/98/ME GDI is still ANSI based. This is made easy with the functions WideCharToMultiByte and MultiByteToWideChar. See "Unicode and Character Sets" on MSDN.
For information about encodings in web pages, please see MLang on MSDN.
Encodings in Win32
See Unicode and Character Sets on MSDN.
Encodings in the .NET Framework
The .NET Framework is a platform for building, deploying, and running Web services and applications that provides a highly productive, standards-based, multilanguage environment for integrating existing or legacy investments with next-generation applications and services. The .NET Framework uses Unicode UTF-16 to represent characters, although in some cases it uses UTF-8 internally. The System.Text namespace provides classes that allow you to encode and decode characters, with support that includes ASCII, UTF-7, UTF-8 and UTF-16 (Unicode), as well as the code-page encodings exposed through Encoding.GetEncoding.
The .NET Framework provides support for data encoded using code pages. You can use the Encoding.GetEncoding(Int32) method to create a target encoding object for a specified code page, specifying the code page number as the Int32 parameter. The following code example creates an encoding object, enc, for code page 1252.
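A minimal reconstruction of such an example:

```csharp
using System.Text;

// Create an encoding object for code page 1252 (Western European).
Encoding enc = Encoding.GetEncoding(1252);
```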
After you create an encoding object that corresponds to a specified code page, you can use the object to perform other operations supported by the System.Text.Encoding class.
The one additional type of support introduced to ASP.NET is the ability to clearly distinguish between file, request, and response encodings. To set the encoding in ASP.NET for code, page directives, and configuration files, you'll need to do the following.
In page directives:
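(A sketch with illustrative values; CodePage and ResponseEncoding are the relevant attributes:)

```aspx
<%@ Page Language="C#" CodePage="1252" ResponseEncoding="utf-8" %>
```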
In a configuration file:
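(Again with illustrative values; the globalization element carries the file, request, and response encodings:)

```xml
<configuration>
  <system.web>
    <globalization fileEncoding="utf-8"
                   requestEncoding="utf-8"
                   responseEncoding="utf-8" />
  </system.web>
</configuration>
```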
The following code example in C# uses the Encoding.GetEncoding method to create a target encoding object for a specified code page. The Encoding.GetBytes method is called on the target encoding object to convert a Unicode string to its byte representation in the target encoding. The byte representations of the strings in the specified code pages are displayed.
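(An illustrative reconstruction; the sample string and code pages are arbitrary:)

```csharp
using System;
using System.Text;

class EncodingExample
{
    static void Main()
    {
        string text = "Encoding: äöü";               // a Unicode string
        foreach (int codePage in new[] { 1252, 850 })
        {
            Encoding target = Encoding.GetEncoding(codePage);
            byte[] bytes = target.GetBytes(text);    // Unicode -> code page bytes
            Console.WriteLine("Code page {0}: {1}",
                codePage, BitConverter.ToString(bytes));
        }
    }
}
```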
To determine the encoding to use for response characters in an Active Server Pages for the .NET Framework (ASP.NET) application, set the value of the HttpResponse.ContentEncoding property to the value returned by the appropriate method. The following code example illustrates how to set HttpResponse.ContentEncoding.
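(A minimal sketch:)

```csharp
// Inside an ASP.NET page or HTTP handler:
Response.ContentEncoding = System.Text.Encoding.UTF8;
```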
The last major area of discussion involves encodings in console or text-mode programming. In the section that follows, you'll find information on using the Win32 API and C run-time (CRT) library functions, CRT console input/output (I/O), and Win32 text-mode I/O, should you need to deal with this sort of application.
Encodings in Web Pages
Generally speaking, there are four different ways of setting the character set or the encoding of a Web page.
Setting and Manipulating Encodings
Since Web content is currently based on Windows or other encoding schemes, you'll need to know how to set and manipulate encodings. The following describes how to do this for HTML pages, Active Server Pages (ASP), and XML pages.
Internet Explorer uses the character set specified for a document to determine how to translate the bytes in the document into characters on the screen or on paper. By default, Internet Explorer uses the character set specified in the HTTP content type returned by the server to determine this translation. If this parameter is not given, Internet Explorer uses the character set specified by the meta element in the document, taking into account the user's preferences if no meta element is specified. To apply a character set to an entire document, you must insert the meta element before the body element. For clarity, it should appear as the first element after the head, so that all browsers can translate the meta element before the document is parsed. The meta element applies to the document containing it. This means, for example, that a compound document (a document consisting of two or more documents in a set of frames) can use different character sets in different frames. Here is how it works:
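(For example, declaring UTF-8; the charset value is the part you substitute:)

```html
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
```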
You substitute the charset value with any supported character-set friendly name (for example, UTF-8) or any code-page name (for example, windows-1251). (For more information, see Character Set Recognition on MSDN.)
Internally, ASP and the language engines it calls (such as Microsoft Visual Basic Scripting Edition (VBScript), JScript, and so forth) all communicate in Unicode strings. However, Web pages currently consist of content that can be in Windows or other character-encoding schemes besides Unicode. Therefore, when form or query-string values come in from the browser in an HTTP request, they must be converted from the character set used by the browser into Unicode for processing by the ASP script. Similarly, when output is sent back to the browser, any strings returned by scripts must be converted from Unicode back to the code page used by the client. In ASP these internal conversions are done using the default code page of the Web server. This works great if the users and the server are all using the same language or script (more precisely, if they use the same code page). However, if you have a Japanese client connecting to an English server, the code page translations just mentioned won't work, because ASP will try to treat Japanese characters as English ones.
The solution is to set the code page that ASP uses to perform these inbound and outbound string translations. Two mechanisms exist to set the code page:
How are these code-page settings applied? First, any static content (HTML) in the .asp file is not affected at all; it is returned exactly as written. Any static strings in the script code (and in fact the script code itself) will be converted based on the CODEPAGE setting in the .asp file. Think of CODEPAGE as the way an author (or better yet, the authoring tool, which should be able to place this in the .asp file automatically) tells ASP the code page in which the .asp file was written.
Any dynamic content, such as Response.Write(x) calls where x is a variable, is converted using the value of Response.CodePage, which defaults to the CODEPAGE setting but can be overridden. You'll need this override, since the code page used to write the script might differ from the code page you use to send output to a particular client. For example, the author may have written the ASP page in a tool that generates text encoded in JIS, but the end user's browser might use UTF-8. With this code-page control feature, ASP now enables correct handling of code-page conversion.
The behavior of the browser set by the meta tags (described earlier) in the server-side script can be achieved by setting the Response.Charset property. Setting this property would instruct the browser how to interpret the encoding of the incoming stream. Generally this value should always match the value of the session's code page.
For example, for an ASP page that did not include the Response.Charset property, the content-type header would be:
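(Illustrative:)

```
Content-Type: text/html
```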
If the same .asp file included:
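(an illustrative charset value:)

```asp
<% Response.Charset = "ISO-8859-1" %>
```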
the content-type header would be:
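(matching the value set above:)

```
Content-Type: text/html; charset=ISO-8859-1
```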
All XML processors are required to understand two transformations of the Unicode character encoding: UTF-8 (the default encoding) and UTF-16. The Microsoft XML Parser (MSXML) supports more encodings, but all text in XML documents is treated internally as the Unicode UTF-16 character encoding.
The encoding declaration identifies which encoding is used to represent the characters in the document. Although XML parsers can determine automatically if a document uses the UTF-8 or UTF-16 Unicode encoding, this declaration should be used in documents that support other encodings.
For example, the following is the encoding declaration for a document that uses the ISO 8859-1 encoding (Latin 1):
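(The standard form of such a declaration:)

```xml
<?xml version="1.0" encoding="ISO-8859-1"?>
```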
Encodings in Console
Programmers can use both Unicode and SBCS or DBCS encodings when programming console, or "text-mode," applications. For legacy reasons, non-Unicode console I/O functions use the console code page, which is an OEM code page by default. All other non-Unicode functions in Windows use the Windows code page. This means that strings returned by the console functions might not be processed correctly by the other functions and vice versa. For example, if FindFirstFileA returns a string that contains certain non-ASCII characters, WriteConsoleA will not display the string properly.
Always keeping track of which encoding is required by which function-and appropriately converting encodings of textual parameters-can be hard. This task was simplified with the introduction of the functions SetFileApisToOEM, SetFileApisToANSI, and a helper function AreFileApisANSI. The first two affect non-Unicode functions exported by KERNEL32.dll that accept or return a file name. As the names suggest, the SetFileApisToOEM sets those functions to accept or return file names in the OEM character set corresponding to the current system locale, and SetFileApisToANSI restores the default, Windows ANSI encoding for those names. Currently selected encoding can be queried with AreFileApisANSI.
With SetFileApisToOEM at hand, the problem with the results of FindFirstFileA (or GetCurrentDirectoryA, or any of the file-handling functions of the Win32 API) that cannot be passed directly to WriteConsoleA is easily solved: after SetFileApisToOEM is called, FindFirstFileA returns text encoded in OEM, not in the Windows ANSI character set. This solution is not a universal remedy against all Windows ANSI versus OEM incompatibilities, however. Imagine you need to get text from a file-handling function, output it to the console, and then process it by another function, which is not affected by SetFileApisToOEM. This absolutely realistic scenario will require the encoding to be changed. Otherwise, you will need to call SetFileApisToOEM to get data for console output, then SetFileApisToANSI and get the same text, just in another encoding, for internal processing. Another case when SetFileApisToOEM does not help is handling of the command-line parameters: when the entry point of your application is main (and not wmain), the arguments are always passed as an array of Windows ANSI strings. All this clearly complicates the life of a programmer who writes non-Unicode console applications.
To make things more complex, 8-bit code written for console has to deal with two different types of locales. To write your code, you can use either Win32 API or C run-time library functions. ANSI functions of Win32 API assume the text is encoded for the current console code page, which the system locale defines by default. The SetConsoleCP and SetConsoleOutputCP functions change the code page used in these operations. A user can call chcp or mode con cp select= commands in the command prompt; this will change the code page for the current console. Another way to set a fixed console code page is to create a console shortcut with a default code-page set (only available on East Asian localized versions of the operating system). Applications should be able to respond to a user's actions.
Locale-sensitive functions of C run-time library (CRT functions) handle text according to the settings defined by a (_w)setlocale call. If (_w)setlocale is not called in the code, CRT functions use the ANSI "C" language invariant locale for those operations, losing language-specific functionality.
The declaration of the function is:
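(The standard C run-time declaration:)

```c
char *setlocale(int category, const char *locale);
```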
The "category" defines the locale-specific settings affected (or all of them, if LC_ALL is specified). The variable-locale -is either the explicit locale name or one of the following:
".OCP" and ".ACP" parameters always refer to the settings of the user locale, not the system locale. Hence they should not be used to set LC_CTYPE. This category defines the rules for Unicode to 8-bit conversion and other text-handling procedures, and must follow the settings of the console, accessible with GetConsoleCP and GetConsoleOutputCP.
The best long-term solution for a console application is to use Unicode, since Unicode interfaces are defined for both the Win32 API and C run-time library. The latter programming model still requires you to set the locale explicitly, but at least you can be sure the text seen by Win32 and CRT does not require transcoding. | <urn:uuid:157fcc43-7a09-43ec-b8aa-667919d73d64> | 3.703125 | 3,148 | Documentation | Software Dev. | 38.919807 |
Physics, a branch of science, is the study of matter and energy and the forces that act on them. Modern physics connects ideas about the laws of symmetry and conservation (of energy, momentum, charge, and parity). The word physics comes from the Greek word ἡ φύσις, "nature".
What is physics? [change]
Physics is the study of energy and matter in space and time and how they are related to each other. Physicists assume the existence of mass, length, time and electric current and then define (give the meaning of) all other physical quantities in terms of these base quantities. Mass, length, time, and electric current are never defined, but the standard units used to measure them are always defined. In the International System of Units (abbreviated SI from the French Système International), the metre is the basic unit of length, the kilogram is the basic unit of mass, the second is the basic unit of time, and the ampere is the basic unit of electric current.
In addition to these four units, there are three others: the mole, which is the unit of the quantity of matter; the candela, which measures luminous intensity (the power of lighting); and the kelvin, the unit of temperature.
Physics studies how things move, and the forces that make them move. For example, velocity and acceleration are used by physics to show how things move. Also, physicists study the forces of gravity, electricity, magnetism and the forces that hold things together.
Physics studies very large things and very small things. For instance, physicists can study stars, planets and galaxies, but they can also study small pieces of matter, such as atoms and electrons. They may also study sound, light and other waves. As well as that, they could examine energy, heat and radioactivity, and even space and time. Physics not only helps people understand how objects move, but how they change form, how they make noise, how hot or cold they will be, and what they are made of at the smallest level.
Physics and Mathematics [change]
Physics is a quantitative science because it is based on measuring with numbers. Maths is used in physics to make models that try to guess what will happen in nature. The guesses are compared to the way the real world works. Physicists are always working to make their models of the world better.
Advanced knowledge [change]
General description [change]
Physics is the science of matter and how matter interacts. Matter is any physical material in the universe. Everything is made of matter. Physics is used to describe the physical universe around us, and to predict how it will behave. Physics is the science concerned with the discovery and characterization of the universal laws which govern matter, movement and forces, and space and time, and other features of the natural world.
Breadth and goals of physics [change]
The sweep of physics is broad, from the smallest components of matter and the forces that hold it together, to galaxies and even larger things. There are only four forces that appear to operate over this whole range. However, even these four forces (gravity, electromagnetism, the weak force associated with radioactivity, and the strong force which holds protons and neutrons in an atom together) are believed to be different parts of a single force.
Physics is mainly focused on the goal of making ever simpler, more general, and more accurate rules that define the character and behavior of matter and space itself. One of the major goals of physics is making theories that apply to everything in the universe. In other words, physics can be viewed as the study of those universal laws which define, at the most basic level possible, the behavior of the physical universe.
Physics uses the scientific method [change]
Physics uses the scientific method. That is, data from experiments and observations are collected. Theories which attempt to explain these data are produced. Physics uses these theories to not only describe physical phenomena, but to model physical systems and predict how these physical systems will behave. Physicists then compare these predictions to observations or experimental evidence to show whether the theory is right or wrong.
The theories that are well supported by data and are especially simple and general are sometimes called scientific laws. Of course, all theories, including those known as laws, can be replaced by more accurate and more general laws, when a disagreement with data is found.
Physics is Quantitative [change]
Physics is more quantitative than most other sciences. That is, many of the observations in physics may be represented in the form of numerical measurements. Most of the theories in physics use mathematics to express their principles. Most of the predictions from these theories are numerical. This is because the areas which physics has addressed are more amenable to quantitative approaches than other areas. Sciences also tend to become more quantitative with time as they become more highly developed, and physics is one of the oldest sciences.
Fields of physics [change]
Classical physics normally includes the fields of mechanics, optics, electricity, magnetism, acoustics and thermodynamics. Modern physics is a term normally used to cover fields which rely on quantum theory, including quantum mechanics, atomic physics, nuclear physics, particle physics and condensed matter physics, as well as the more modern fields of general and special relativity. Although this distinction can be found in older writings, it is of little recent interest as quantum effects are now understood to be of importance even in fields previously considered classical.
Approaches in physics [change]
There are many approaches to studying physics, and many different kinds of activities in physics. There are two main types of activities in physics; the collection of data and the development of theories.
The data in some subfields of physics is amenable to experiment. For example, condensed matter physics and nuclear physics benefit from the ability to perform experiments. Experimental physics focuses mainly on an empirical approach. Sometimes experiments are done to explore nature, and in other cases experiments are performed to produce data to compare with the predictions of theories.
Some other fields in physics, like astrophysics and geophysics, are primarily observational sciences because most of their data has to be collected passively instead of through experimentation. Nevertheless, observational programs in these fields use many of the same tools and technology that are used in the experimental subfields of physics.
Theoretical physics often uses quantitative approaches to develop the theories that attempt to explain the data. In this way, theoretical physics often relies heavily on tools from mathematics. Theoretical physics often can involve creating quantitative predictions of physical theories, and comparing these predictions quantitatively with data. Theoretical physics sometimes creates models of physical systems before data is available to test and validate these models.
These two main activities in physics, data collection and theory production and testing, draw on many different skills. This has led to a lot of specialization in physics, and the introduction, development and use of tools from other fields. For example, theoretical physicists apply mathematics and numerical analysis and statistics and probability and computer software in their work. Experimental physicists develop instruments and techniques for collecting data, drawing on engineering and computer technology and many other fields of technology. Often the tools from these other areas are not quite appropriate for the needs of physics, and need to be adapted or more advanced versions have to be produced.
There are many famous physicists. Isaac Newton studied gravity. Galileo Galilei studied light and how planets move. Albert Einstein made a theory for how light can make electrons move, and studied how gravity affects light and space. James Clerk Maxwell proved that light is a type of electromagnetic wave.
Prominent theoretical physicists [change]
Famous theoretical physicists include
- Galileo Galilei (1564–1642)
- Christiaan Huygens (1629–1695)
- Isaac Newton (1643–1727)
- Leonhard Euler (1707–1783)
- Joseph Louis Lagrange (1736–1813)
- Pierre-Simon Laplace (1749–1827)
- Joseph Fourier (1768–1830)
- Nicolas Léonard Sadi Carnot (1796–1842)
- William Rowan Hamilton (1805–1865)
- Rudolf Clausius (1822–1888)
- James Clerk Maxwell (1831–1879)
- J. Willard Gibbs (1839–1903)
- Ludwig Boltzmann (1844–1906)
- Hendrik A. Lorentz (1853–1928)
- Henri Poincaré (1854–1912)
- Nikola Tesla (1856–1943)
- Max Planck (1858–1947)
- Albert Einstein (1879–1955)
- Milutin Milanković (1879–1958)
- Emmy Noether (1882–1935)
- Max Born (1882–1970)
- Niels Bohr (1885–1962)
- Erwin Schrödinger (1887–1961)
- Louis de Broglie (1892–1987)
- Satyendra Nath Bose (1894–1974)
- Wolfgang Pauli (1900–1958)
- Enrico Fermi (1901–1954)
- Werner Heisenberg (1901–1976)
- Paul Dirac (1902–1984)
- Eugene Wigner (1902–1995)
- Robert Oppenheimer (1904–1967)
- Sin-Itiro Tomonaga (1906–1979)
- Hideki Yukawa (1907–1981)
- John Bardeen (1908–1991)
- Lev Landau (1908–1967)
- Anatoly Vlasov (1908–1975)
- Nikolay Bogolyubov (1909–1992)
- Subrahmanyan Chandrasekhar (1910–1995)
- Richard Feynman (1918–1988)
- Julian Schwinger (1918–1994)
- Feza Gursey (1921–1992)
- Chen Ning Yang (1922– )
- Freeman Dyson (1923– )
- Gunnar Källén (1926–1968)
- Abdus Salam (1926–1996)
- Murray Gell-Mann (1929– )
- Riazuddin (1930– )
- Roger Penrose (1931– )
- George Sudarshan (1931– )
- Sheldon Glashow (1932– )
- Tom W. B. Kibble (1932– )
- Steven Weinberg (1933– )
- Gerald Guralnik (1936–)
- Sidney Coleman (1937–2007)
- C. R. Hagen (1937–)
- Ratko Janev (1939– )
- Leonard Susskind (1940– )
- Michael Berry (1941– )
- Bertrand Halperin (1941–)
- Stephen Hawking (1942– )
- Alexander Polyakov (1945–)
- Gerardus 't Hooft (1946– )
- Jacob Bekenstein (1947–)
- Robert Laughlin (1950–)
Other pages [change]
- An equation (e.g., f = m a) is called a "law" when there are clear empirical results that substantiate it.
- Different people, however, have different definitions of what they regard physics to be, and another common definition is that, "physics is the science of nature" [1,2,3,4,5,6].
Other websites [change]
|Wikimedia Commons has media related to: Physics| | <urn:uuid:d8eb1a03-d5bb-4766-b30d-c0634e0baa2d> | 3.609375 | 2,464 | Knowledge Article | Science & Tech. | 38.64656 |
Science subject and location tags
Articles, documents and multimedia from ABC Science
Wednesday, 1 September 2010
Scribbly Gum Nature Feature You know it's spring when you're woken in the early hours of the morning by the deafening calls of channel-billed cuckoos looking for love.
Wednesday, 14 April 2010
The fairytale Little Red Riding Hood has inspired Australian scientists to invent a new weapon in the fight to save endangered native marsupials from being poisoned by cane toads.
Monday, 30 November 2009
An Australian project tapping Aborigines' knowledge to avert devastating fires that stoke climate change is the world's best example of linking indigenous peoples to carbon markets, says a new report.
Friday, 8 May 2009
Termites could help miners locate gold and diamond reserves saving money and time, says one researcher.
Thursday, 19 March 2009
Traditional Aboriginal burning practices in Australia's savannah country could reduce national greenhouse emissions by nearly 5 megatonnes a year and trigger a million-a-year new industry, says one expert.
Wednesday, 3 December 2008
Early evidence suggests native animals may be trained to avoid poisonous cane toads, using dead toads spiked with a chemical that induces nausea, say researchers.
Monday, 17 November 2008
Climate change may not be as severe as predicted, suggests an international study that shows current modelling of carbon dioxide emissions from soils are overestimated by as much as 20%.
Friday, 14 November 2008
News analysis Investing in agriculture in Australia's north could not only be a way to beat climate change, but could also help feed the planet, says a new report.
Thursday, 16 October 2008
A call for Australians to eat kangaroos to combat climate change might be a case of tuck in now before it's too late, research by an Australian biologist suggests.
Tuesday, 19 August 2008
A new study suggests children do not need to know the words for numbers in order to be able to count, and that basic mathematical ability is hardwired in the human genome.
Friday, 27 June 2008
News analysis A centralised national dump is needed for Australia's growing stockpiles of radioactive waste, say radiation safety experts, but some critics say that's not the safest option.
Tuesday, 19 February 2008
Aboriginal teenagers are turning community perceptions on their head by saying traditional tucker such as stingray and bush plums are their favourite foods, an Australian study says.
Thursday, 23 August 2007
Science Feature No matter where in Australia you are on August 28, you'll be in for a top show once the Sun goes down.
Thursday, 6 April 2006
Scribbly Gum Easter bilbies are an increasingly popular alternative to the traditional chocolate rabbit. But the real bilbies are much harder to find - living secretive lives in isolated deserts across Australia, waiting for the right conditions to start a family.
Thursday, 9 February 2006
Scribbly Gum Across the Top End, baby frilled-neck lizards are hatching and heading for the heights of the nearest tree. There they'll begin their high-rise life, only descending to grab some take-away food, meet up with mates or move house. | <urn:uuid:5d3a44e9-1ce0-4919-a5b4-20bbaeb42d59> | 2.890625 | 660 | Content Listing | Science & Tech. | 34.280034 |
WHAT CAUSES EARTHQUAKES? An earthquake is the result of a sudden release of stored energy in the Earth's crust triggered by shifting tectonic plates. The Earth's lithosphere is an elaborate network of interconnected plates that move constantly -- far too slowly for us to be aware of them, but moving nonetheless. Occasionally they lock up at the boundaries, and this creates frictional stress. When the strain becomes too great, the rocks give way, breaking and sliding along fault lines. This can give rise to a violent displacement of the Earth's crust, which we feel as vibrations or tremors as the pent-up energy is released. Only 10% or so of the total energy is released in the seismic waves; the rest is converted into heat, used to crush and deform rock, or released as friction.
HOW DO SCIENTISTS RATE EARTHQUAKES? An earthquake's magnitude describes how much the ground moves. The scale is logarithmic, which means that when the magnitude increases by one (say from 3 to 4, or from 4 to 5) the amount of ground motion increases by ten times. That is, a magnitude 3 quake leads to ten times as much ground motion as a magnitude 2 quake, and a magnitude 2 leads to ten times as much motion as a magnitude 1. This means that a magnitude 3 is a hundred times as violent as a magnitude 1, and a hundred times less violent than a magnitude 5.
The magnitude scale also tells us just how much energy an earthquake released. For example, a magnitude 1 earthquake releases the same amount of energy as 30 pounds of TNT exploding. Although a magnitude 2 earthquake makes the ground move ten times as much as a magnitude 1, it releases 32 times as much energy -- or roughly as much as a ton of TNT. A magnitude 5 earthquake packs the punch of a moderate nuclear weapon, and a magnitude 12 quake would be enough to put a crack all the way through the center of the Earth. | <urn:uuid:4eb49a15-a753-43d7-87dd-a0d2f69ffa80> | 4.28125 | 411 | Knowledge Article | Science & Tech. | 57.144725 |
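The two scalings quoted above can be written compactly (a standard seismological relation, with A the ground-motion amplitude and E the radiated energy):

$$\frac{A_2}{A_1} = 10^{\,M_2 - M_1}, \qquad \frac{E_2}{E_1} = 10^{\,1.5\,(M_2 - M_1)} \approx 32 \quad \text{for } M_2 - M_1 = 1.$$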
FIGURE S2. Southern Hemisphere height-longitude sections at 60°S for height anomalies (contours) and temperature anomalies (shaded). Positive values are indicated by solid contours and dark shading, while negative anomalies are indicated by dashed contours and light shading. The contour interval for height anomalies is 60 m and for temperature anomalies is 2°C. Anomalies are calculated from the 1979–95 base-period monthly means. | <urn:uuid:965103a0-be4e-41dd-8123-56b1c56adb21> | 3.21875 | 99 | Knowledge Article | Science & Tech. | 29.646154 |
Carbon dioxide is an important gas in lakes. Although it is highly soluble in water, most carbon dioxide in lakes is formed as an end product of respiration. As carbon dioxide dissolves in water, it forms a series of compounds, including carbonic acid, bicarbonate and carbonate. The resulting carbonate chemistry, along with common anions such as hydroxide (OH-) and sulfate (SO42-), contributes to the alkalinity (buffering capacity) of the water. Alkalinity is a measure of the ability of water to resist changes in pH, which is a measure of the amount of acidity. A neutral pH is 7; acidic conditions have pH less than 7; and alkaline solutions have pH greater than 7.
Many aquatic organisms have fairly strict pH requirements, so the amount and stability of pH is very important. For example, poorly buffered lakes are unable to resist changes in pH caused by acidic precipitation, and the resulting low pH values (<5.5) reduce the diversity of organisms to only those few adapted to low pH.
Alkalinity is a conservative parameter, which means it does not change readily in well-buffered lakes. On the other hand, pH values may vary both temporally and spatially within a lake. During intense photosynthesis in the euphotic zone, carbon dioxide and carbonic acid become less abundant. With less of this acid, pH values may rise to as high as 9. Additionally, respiration in the hypolimnion of a productive lake produces an excess of carbon dioxide, which dissociates to carbonic acid and lowers the pH.
Although lake sediments serve as an ultimate sink for whatever is in the water, movement of materials is not solely toward the sediments. Strong storms and turnover events may resuspend sediment, which is eventually mixed into the surface waters by internal currents or during periods of turnover. The chemical environment in the sediments is dynamic: biological and chemical processes continually bring change. When dead plant material settles onto the sediment, bacterial decomposers use this organic matter as food and convert the organic phosphorus to phosphate and the organic nitrogen to ammonia. If oxygen is present, ammonia can be oxidized to nitrate, and oxidized iron can tie up the phosphate as ferric phosphate. However, if reducing conditions occur, the nitrate is reduced back to ammonia, and the ferric iron is reduced to ferrous iron, which cannot hold the phosphate. Phosphate is then released back into the water. This process is referred to as "internal phosphorus loading" and is a major source of phosphorus in eutrophic lakes.
The echinoderms - meaning spiny skin - are the starfish, sea urchins and sea cucumbers.
Starfish like the Bloody henry starfish and the Common starfish which live in our waters are predators that move across the seabed using special organs called tube feet.
Visible spines on the outside of urchins and on the skin of starfish and sea cucumbers are there to deter predators.
On the sandflats at St Martin’s and Tresco there are important populations of burrowing Heart urchins. Crevice Sea cucumbers and featherstars are often seen on reefs by divers. | <urn:uuid:65a56a64-2dd6-4f5c-8fec-d0080edcdbdc> | 3.265625 | 134 | Knowledge Article | Science & Tech. | 44.263099 |
|Title||New technique for determining the optical constants of liquids|
|Author(s)||C. Keefe, J. Pearson|
|Abstract||The traditional techniques of transmission and attenuated total reflectance (ATR) spectroscopy for determining the optical constants of liquids are not practical or reliable for very strong absorption bands. Specular reflectance can be used in these cases, but for volatile liquids it is impossible to separate the reflectance spectrum of the liquid from the absorption spectrum of the vapor above the liquid. Methods using special cells have been described in the literature to prevent the liquid from evaporating. In this paper, a similar technique that makes use of traditional transmission cells is presented. It is shown that this new technique generates k(ν̃) spectra for strong absorption bands that are accurate to approximately 2%.|
This image shows the nucleus of comet Tempel 1 and the nucleus of comet Hartley 2. (The comets are placed next to each other for comparison; they are nowhere near each other in space.) As you can see, comets come in all shapes and sizes.
Tempel 1 is five times larger than Hartley 2. Jets are easily seen coming off Hartley 2 but extensive processing was required to see jets on Tempel 1.
Tempel 1 is 7.6 kilometers (4.7 miles) in the longest dimension. Hartley 2 is 2.2 kilometers (1.4 miles) long.
NASA's Deep Impact spacecraft took both images. When the spacecraft took the Hartley 2 image, it was called the EPOXI mission.
Image credit: NASA/JPL-Caltech/UMD
Look up monthly U.S., Statewide, Divisional, and Regional Temperature, Precipitation, Degree Days, and Palmer (Drought) rankings for 1-12, 18, 24, 36, 48, 60-month, and Year-to-Date time periods. Records begin in January 1895.
Please note, Degree Days are not available for Agricultural Belts
Utah Temperature Rankings, October 1942
More information on Climatological Rankings
(out of 119 years)
|Ranking||Ties||Note|
|78th Coldest||1919||Coldest since: 1941|
|41st Warmest||1988||Warmest since: 1940|
The human brain may be the most complex object in the universe, but its construction mostly depends on one thing: the shape of neurons.
Different kinds of neuron are selective about which other neurons they connect to and where they attach. Specific signalling chemicals are thought to be vital in guiding this process.
Henry Markram of the Swiss Federal Institute of Technology in Lausanne and colleagues built 3D computer models of the rat somatosensory cortex, each containing a random mix of cell types found in rat brains, but no signalling chemicals. Nevertheless, 74 per cent of the connections ended up in the correct place, merely by allowing the cells to develop into their normal shape.
The results suggest that much of the brain could be mapped without incorporating signalling chemicals. This is good news for neuroscientists struggling to map the brain's dizzying web of connections. "It would otherwise take decades to map each synapse in the brain," says Markram.
The work could also help untangle the causes of conditions like schizophrenia that are thought to be caused by flaws in brain wiring. If Markram's work proves correct, malformed neurons that don't connect up properly could be a factor.
Journal reference: Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.1202128109
Flip between macro and nanoscale images of familiar objects to learn about ways that nanotechnology is inspired by nature, surprising properties at the nanoscale, and new applications in nanotechnology. Includes print your own cards.
"Exploring Structures - Butterfly" is a hands-on activity in which visitors investigate how some butterfly wings get their color. They learn that some wings get their color from the nanoscale structures on the wings instead of pigments.
"Exploring Materials - Graphene" is a hands-on activity in which visitors use tape and graphite to make graphene and test the conductivity of graphite. They learn that graphene is a single layer of carbon atoms arranged in a honeycomb pattern.
This forum plays on very real concerns and fears of students: academic performance and taking standardized tests. The crux of this forum is: if there was a supplement or embedded nanotechnology available to the public that will enhance your cognitive abilities by making you smarter or give you instantaneous access to the internet, how would you or local community handle it? Is it cheating? By taking on roles that are somewhat familiar to them, they can put themselves into the shoes of decision makers whether they are parents, teachers, or principals.
This program describes a weeklong summer camp for high school students. The camp does not assume any previous knowledge of the field and thus is open to students from all backgrounds. It is hands-on and application-based, and it gives a broad overview of nanoscience/nanotechnology as a field with many career opportunities. Students are able to gain a comprehensive understanding through activities that introduce them to the unique properties at the nanoscale, through lab tours, through discussion groups on societal and ethical implications of nanoscience/nanotechnology, and through an open house at the conclusion of the camp where they present projects to families and friends.
Host Ira Flatow focuses on various nano-related topics and issues. Podcasts are available to stream from the Science Friday website. Available episodes cover recent developments and directions for research in the fields of nanomaterials and nanotechnology. Topics include imaging at the nanoscale, buckyballs, graphene, nanomedicine, computing, fibers, and electronics.
The impact of drought on tree water use was investigated in remnant forest in the Liverpool Plains, NSW. Tree water use was measured using commercial sap flow testers from December 2002 to September 2004, a span corresponding with a period of drought and a period of higher rainfall. Understorey evapotranspiration and soil moisture were also measured. Water use by the stand of trees was a larger proportion of annual total rainfall during the drought period (87%) than during the post-drought period of higher rainfall (50%). Understorey water use was about 20% of rainfall, suggesting that the understorey evapotranspiration component of the water balance can make a significant contribution to total water use and to the availability of water for groundwater recharge. The results indicate that the remnant forest was able to survive the drought because of deep roots. The findings also demonstrate the valuable role forests play in maintaining the hydrological balance and in ameliorating the development of dryland salinity in agricultural areas.
- The construction of the Death Star has been estimated to cost more than $850,000,000,000,000,000. We're working hard to reduce the deficit, not expand it.
- The Administration does not support blowing up planets.
- Why would we spend countless taxpayer dollars on a Death Star with a fundamental flaw that can be exploited by a one-man starship?
However, look carefully (here's how) and you'll notice something already floating in the sky -- that's no Moon, it's a Space Station! Yes, we already have a giant, football field-sized International Space Station in orbit around the Earth that's helping us learn how humans can live and thrive in space for long durations. The Space Station has six astronauts -- American, Russian, and Canadian -- living in it right now, conducting research, learning how to live and work in space over long periods of time, routinely welcoming visiting spacecraft and repairing onboard garbage mashers, etc. We've also got two robot science labs -- one wielding a laser -- roving around Mars, looking at whether life ever existed on the Red Planet.
Keep in mind, space is no longer just government-only. Private American companies, through NASA's Commercial Crew and Cargo Program Office (C3PO), are ferrying cargo -- and soon, crew -- to space for NASA, and are pursuing human missions to the Moon this decade.
Even though the United States doesn't have anything that can do the Kessel Run in less than 12 parsecs, we've got two spacecraft leaving the Solar System and we're building a probe that will fly to the exterior layers of the Sun. We are discovering hundreds of new planets in other star systems and building a much more powerful successor to the Hubble Space Telescope that will see back to the early days of the universe. | <urn:uuid:01a5c7f0-a5f5-4462-aa3b-c4e1d257d813> | 2.75 | 373 | Truncated | Science & Tech. | 49.145441 |
Four-Nucleon Combinations to Alpha Nuclides
Nuclei are thought of as being composed of protons and neutrons. Their binding energies are computed in terms of the deficit of their masses compared to the masses of their constituent protons and neutrons. However, there is a good deal of evidence that the protons and neutrons form alpha particles whenever possible. (The nuclides which could, and probably do, contain an integral number of alpha particles are here called alpha nuclides.) Any additional neutrons beyond those tied up in alpha particles form pairs where possible. Therefore the binding energy of a nuclide is largely a matter of the binding energies of the subparticles (alpha particles and nucleon pairs) of which it is composed. There is an additional component of binding energy for a nuclide which comes from the arrangement of those subparticles within the nuclide. This component will be referred to as excess binding energy.
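As a rough illustration of the mass-deficit calculation mentioned above, here is a short Python sketch; the particle masses are standard values in atomic mass units, not numbers taken from this page.

```python
# Binding energy from the mass deficit: BE = (Z*m_p + N*m_n - m_nucleus) * c^2
M_PROTON = 1.007276   # proton mass, u (standard value)
M_NEUTRON = 1.008665  # neutron mass, u (standard value)
U_TO_MEV = 931.494    # energy equivalent of 1 u, MeV

def binding_energy(z, n, nuclear_mass_u):
    """Binding energy in MeV of a nucleus with z protons and n neutrons."""
    deficit = z * M_PROTON + n * M_NEUTRON - nuclear_mass_u
    return deficit * U_TO_MEV

# The alpha particle (He-4 nucleus, mass ~4.001506 u):
print(round(binding_energy(2, 2, 4.001506), 1))  # ~28.3 MeV
```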
When the excess binding energies are computed for the alpha nuclides and the values plotted versus the number of alpha particles in the nuclide the result is as shown below.
For the data and further detail of the alpha particle substructure of nuclides see Alpha Particle Substructure of Nuclides.
The above graph indicates a shell structure for the alpha nuclides. There is a shell of two alpha particles, then a shell of twelve and then a shell of at least eleven.
If the excess binding energies are computed for the nuclides which could contain an integral number of alpha particles plus four additional neutrons the result also shows a shell structure.
This indicates that the capacity of the third shell of alpha particles is indeed eleven.
The excess binding energy computed above took into account only the binding energy of the alpha particle substructures. The four neutrons would have the binding energy of at least two neutron pairs. There is the possibility that four neutrons might form a structure of greater binding energy than that of two neutron pairs, just as the binding energy of an alpha particle is much greater than the binding energy of two proton-neutron pairs (deuterons). To investigate this possibility and others, the binding energies of the alpha-plus-four-neutrons nuclides are compiled along with those which contain a proton-neutron pair and another neutron pair. For comparison, the effect of the addition of another alpha particle is included. These are shown below.
[Table: Binding Energies (MeV) of Nuclides Containing an Integral Number of Alpha Particles Plus a Four-Nucleon Combination; columns: + alpha, + 4 neutrons, + p-n & n-n pair]
For a small number of alpha particles the addition of four neutrons or a p-n pair and n-n pair does not have much of an effect on binding energy. For a large number of alpha particles the case is much different.
The relevant information is the excess binding energy; the binding energy above what is accounted for by the alpha particles. This information is shown below.
[Table: The Excess Binding Energies of Nuclides Which Have a Four-Nucleon Combination Added to an Alpha Nuclide; columns: + alpha, + 4 neutrons, + p-n & n-n pair]
The above data are shown in the following display. The red profile is for an added alpha particle. The effect of this four nucleon combination is immediately about 35 MeV and stays at that level until an alpha nuclide of 14 is reached. The effect then falls to about 31 MeV.
The yellow profile is for the effect of four additional neutrons. This effect is small for the small alpha nuclides but it quickly rises and then increases linearly with the number of alphas. The sharp rise and then smaller rise is a shell phenomenon. The linear rise indicates that the four neutron combination is interacting with each alpha particle in a shell.
The green profile is for the effect of a combination of a p-n pair and n-n pair on binding energy. The effect is small for small alpha nuclides but rises quickly to a level of 25 MeV and then increases linearly toward a level of about 40 MeV. Again the linear increase with the number of alpha particles in a shell indicates that the p-n and n-n pair combination is interacting with each of the alpha particles in a shell.
The absence of a linear rise for the effect of an additional alpha particle indicates that the alpha particles are not all interacting with each other.
There is a hump that appears in each of the profiles, indicating that it is a structural feature of the nuclides.
(To be continued.)
Artist's concept of the Dawn spacecraft with Vesta and Ceres.
Dawn, part of NASA's Discovery Program of competitively selected missions, was launched in 2007 to orbit the large asteroid Vesta and the dwarf planet Ceres. The two bodies have very different properties from each other. By observing them both with the same set of instruments, Dawn will probe the early solar system and specify the properties of each body.
The Dawn mission to Vesta and Ceres is managed by JPL, a division of the California Institute of Technology in Pasadena, for NASA's Science Mission Directorate, Washington. The University of California, Los Angeles, is responsible for overall Dawn mission science. Other scientific partners include Planetary Science Institute, Tucson, Ariz.; Max Planck Institute for Solar System Research, Katlenburg-Lindau, Germany; DLR Institute for Planetary Research, Berlin; Italian National Institute for Astrophysics, Rome; and the Italian Space Agency. Orbital Sciences Corporation of Dulles, Va., designed and built the Dawn spacecraft. | <urn:uuid:eb169ac5-2d28-42d8-be6c-b56cfdb0012b> | 2.96875 | 206 | Knowledge Article | Science & Tech. | 27.74 |
reveals hydrological response of the cave to extreme rainfall events in the Midwest, USA. Cave-flooding events are identified within the two samples by the presence of detrital layers composed of clay-sized particles. Comparison with instrumental records of precipitation demonstrates a strong correlation between these cave-flood events and extreme rainfall observed in the Upper Mississippi Valley. A simple model is developed to assess the nature of rainfall capable of flooding the cave. The model is first calibrated to the last 50 years (1950–1998 A.D.) of instrumental daily precipitation data for the town of Spring Valley and verified with the first 50 years of record, from 1900 to 1949 A.D. Frequency analysis shows that these extreme flood events have increased since the last half of the nineteenth century. Comparison with other paleohydrological records shows increased occurrence of extreme rain events during periods of higher moisture availability. Our study implies that increased moisture availability in the Midwestern region, due to rising temperatures from global warming, could lead to an increase in the occurrence of extreme rainfall events.
Is this getting as boring for everyone else as it is for me?
This picture shows the oil slick off the coast of Louisiana. NASA's Aqua satellite took the picture on April 25, 2010.
Image courtesy of NASA/MODIS.
Huge Oil Spill in Gulf of Mexico
A large oil drilling rig in the Gulf of Mexico caught fire and sank in April 2010. Eleven workers were killed and several others injured in the accident.
After the oil rig sank, a huge oil slick formed in the Gulf of Mexico near the delta of the Mississippi River. Clean-up crews haven't yet (as of April 28th) been able to close off the damaged oil well. Each day about 200,000 gallons of oil leak into the waters of the Gulf of Mexico. The oil slick covers about 600 square miles and is less than 20 miles from the shore of Louisiana.
The Coast Guard set fire to a section of the oil slick to try to get rid of some of the oil. They may try to burn more of it. They hope to stop the oil from coming ashore at wildlife refuges along the coast of Louisiana and Mississippi.
The official kilogram. Credit: BIPM
By Margaret Harris
Pick the correct definition of a kilogram:
a) the mass of a body with a de Broglie wavelength of 6.626069311 x 10^-34 m at a velocity of 1 m/s
b) a mass of a body at rest such that Planck’s constant h is 6.626069311 x 10^-34 Js
c) a mass of exactly 5.0184512725 x 10^25 unbound carbon-12 atoms at rest in their ground state
d) the mass of a lump of platinum-iridium sitting under three vacuum jars in a French laboratory
Readers with an interest in metrology will know that the answer is d) — and anyone who didn’t know it could probably have guessed from the photo. But why is the kilogram, alone of all SI units, defined by something so un-fundamental as a lump of metal?
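As a quick numerical aside (not from the original post), the figures in options (a) and (c) can be checked against one kilogram, using the standard value of the atomic mass unit:

```python
H = 6.626069311e-34              # Planck's constant as quoted in the post, J*s

# Option (c): 5.0184512725e25 carbon-12 atoms, each of mass 12 u
U_IN_KG = 1.66053906660e-27      # atomic mass unit in kg (CODATA value)
print(5.0184512725e25 * 12 * U_IN_KG)  # ~1.000 kg

# Option (a): de Broglie wavelength lambda = H/(m*v), so m = H/(lambda*v)
wavelength, v = 6.626069311e-34, 1.0   # m, m/s
print(H / (wavelength * v))            # 1.0 kg by construction
```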
The difficulty, as Bryan Kibble explained this afternoon in a talk at the QuAMP conference in Leeds, is that several of the alternatives have problems of their own. Options a) and b) both rely on pinning down a value for Planck’s constant, and thus might seem like the best way to go; indeed, one of them may actually become the new SI definition, perhaps as early as 2011. However, Kibble argued, both options are somewhat circular, swapping uncertainty in the kilogram for uncertainty in other Planck-derived units, and there’s not really any new science involved in them.
A definition in terms of carbon-12 atoms — or indeed, any kind of atoms — would be more satisfying, Kibble says, but as efforts like the Avogadro project at the UK’s National Physical Laboratory have shown, counting atoms isn’t a trivial task.
Nobody offered any solutions during the question period after the talk, but we did manage to pin down one thing: any fluctuations in fundamental constants (like the fine structure constant, for example) will not affect the kilogram problem — at least not for around 1000 years. So that’s all right then. | <urn:uuid:0c2e30ff-3b83-4030-ac2c-c95e0197aed9> | 3.28125 | 457 | Nonfiction Writing | Science & Tech. | 54.193256 |
Table of contents
Electron configuration describes the distribution of electrons among different orbitals (including shells and subshells) within atoms and molecules.
There are four principal orbital types (s, p, d, and f), which are filled according to the energy level and valence electrons of the element. The four types hold different numbers of electrons: an s-subshell can hold 2 electrons, while the p, d, and f-subshells can hold up to 6, 10, and 14 electrons, respectively. The s-block primarily denotes group 1 or group 2 elements, the p-block denotes group 13, 14, 15, 16, 17, or 18 elements, and the f-block denotes the Lanthanide and Actinide series. The main focus of this module, however, will be on the electron configuration of transition metals, which are found in the d-orbitals (d-block).
The electron configuration of transition metals is special in the sense that they can be found in numerous oxidation states. Although the elements can display many different oxidation states, they usually exhibit a common oxidation state depending on what makes that element most stable. For this module, we will work only with the first row of transition metals; however the other rows of transition metals generally follow the same patterns as the first row.
The s, p, d, and f-orbitals are identified on the periodic table below:
First Row Transition Metals
In the first row of the transition metals, the ten elements that can be found are: Scandium (Sc), Titanium (Ti), Vanadium (V), Chromium (Cr), Manganese (Mn), Iron (Fe), Cobalt (Co), Nickel (Ni), Copper (Cu), and Zinc (Zn).
Below is a table of the oxidation states that the transition metals can or cannot form. As stated in the boxes, the “No” indicates that the elements are not found with that oxidation state. The “Rare” signifies the oxidation states that the elements are rarely found in. Lastly, the “Common” identifies the oxidation states that the elements readily found in.
Oxidation States vs. First Row Transition Metals
Filling Transition Metal Orbitals
The electron configuration for the first row transition metals consists of 4s and 3d subshells with an argon (noble gas) core. This applies only to the first row; adjustments are necessary when writing the electron configuration for the other rows of transition metals. The noble gas preceding the row is written as the core, with brackets around the element symbol (i.e. [Ar] for the first row transition metals), and the electron configuration follows an [Ar] nsxndx format. For the first row transition metals, the electron configuration is simply [Ar] 4sx3dx. The energy level, "n", can be determined from the periodic table simply by looking at the row the element is in. However, there is an exception for the d-block and f-block, in which the energy level "n" for the d-block is "n-1" ("n" minus 1) and for the f-block is "n-2" (see the following periodic table for clarification). In this notation, the "x" in nsx and ndx is the number of electrons in the given subshell (s-subshells hold a maximum of 2 electrons, p-subshells up to 6, d-subshells up to 10, and f-subshells up to 14). To determine "x", simply count the number of boxes on the periodic table that you pass before reaching the element whose configuration you are writing.
Example of Determining Energy Levels (n)
For example, if we want to determine the electron configuration for Cobalt (Co) at ground state, we would first look at the row number, which is 4 according to the periodic table below; meaning n = 4 for the s-orbital. In addition, since we know that the energy level for the d orbital is "n-1", therefore n = 3 for the d-orbital in this case. Thus, the electron configuration for Cobalt at ground state would simply be Co: [Ar] 4s23d7. The reason why it is 3d7 can be explained using the periodic table. As stated, you could simply count the boxes on the periodic table, and since Cobalt is the 7th element of the first row transition metals, we get Co: [Ar] 4s23d7.
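The counting procedure just described is mechanical enough to automate. Below is a rough Python sketch that fills subshells in the usual Madelung (diagonal-rule) order; note that it ignores well-known exceptions such as Cr and Cu, so treat it as a first approximation only.

```python
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}
AZIMUTHAL = {"s": 0, "p": 1, "d": 2, "f": 3}

def ground_state(electrons):
    """Fill subshells in Madelung order: sort by (n + l), ties broken by n."""
    subshells = [(n, l) for n in range(1, 8) for l in "spdf" if AZIMUTHAL[l] < n]
    subshells.sort(key=lambda nl: (nl[0] + AZIMUTHAL[nl[1]], nl[0]))
    parts = []
    for n, l in subshells:
        if electrons <= 0:
            break
        count = min(CAPACITY[l], electrons)
        parts.append(f"{n}{l}{count}")
        electrons -= count
    return " ".join(parts)

print(ground_state(27))  # cobalt: ... 3p6 4s2 3d7, matching the [Ar] 4s2 3d7 worked out above
```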
Transition Metals with an Oxidation State
In the ground state, the electron configuration of the transition metals follows the format ns2ndx. For transition metals that are charged (i.e. Cu+), electrons are removed from the s orbital before the d orbital, giving configurations of the form ns0ndx or ns1ndx.
It is helpful to first write down the electron configuration of an element at its ground state before attempting to determine the electron configuration of an element with an oxidation state. See examples below.
Example with Vanadium
Vanadium at Ground State (Neutral):
V: 3 d-electrons = [Ar] 4s23d3
Vanadium with an Oxidation State of +4:
V4+: [Ar] 4s03d1
Or you can also write it as V4+: [Ar] 3d1
Example with Nickel
(A video in the original page demonstrates how to write the electron configuration for Nickel (Ni) and Zirconium (Zr) from the d-block.)
Nickel at Ground State:
Ni: 8 d-electrons = [Ar] 4s23d8
Nickel with an Oxidation State of +2:
Ni2+: [Ar] 4s03d8
Or simply Ni2+: [Ar] 3d8
In this example, the electron configuration for Ni2+ still kept its 3d8, but lost the 4s2 (became 4s0) because the s-orbital has the highest energy level of n = 4 in this case. Therefore, the s-orbital will lose its electrons first, before the d-orbital, and so Ni2+ can be written as [Ar] 4s03d8 OR [Ar] 3d8.
Electron Configuration of a Second Row Transition Metal (Rhodium)
Rhodium at Ground State:
Rh: 7 d-electrons = [Kr] 5s24d7
Rhodium with an Oxidation State of +3:
Rh3+: [Kr] 5s04d6
Or simply Rh3+: [Kr] 4d6
Electron Configuration of a Third Row Transition Metal (Osmium)
Note: Osmium is stable with oxidation states of +2, +3, +4, as well as +8.
Osmium at Ground State:
Os: 6 d-electrons = [Xe] 6s24f145d6
Osmium with an Oxidation State of +2:
Os: [Xe] 4f145d6
Osmium with an Oxidation State of +3:
Os: [Xe] 4f145d5
For fourth row transition metals, the electron configuration is very similar to that of the third row transition metals. However, for a fourth row transition metal, you would follow the format [Rn] 7sx5fx6dx rather than the third row format of [Xe] 6sx4fx5dx.
To see an example of an element from the second row or third row transition metals, see "Electron Configuration of a Second Row Transition Metal (Rhodium)" and "Electron Configuration of a Third Row Transition Metal (Osmium)".)
Practice: determine the electron configuration for each of the following ions.
A) V2+ B) V3+ C) V5+ D) Cr2+ E) Cr3+ F) Cr6+ G) Mn2+ H) Mn3+
I) Mn4+ J) Mn6+ K) Mn7+ L) Fe2+ M) Fe3+ N) Co2+ O) Co3+ P) Cu2+ Q) Zn2+
See File Attachment for Solutions. (You will probably need Adobe Reader to open the PDF file.)
Molecular Biology and Genetics
Statistics of barcoding coverage
|Specimen Records:||10||Public Records:||2|
|Specimens with Sequences:||6||Public Species:||1|
|Specimens with Barcodes:||6||Public BINs:||1|
|Species With Barcodes:||2|
Geographic range
They are moderately sized lizards with laterally compressed bodies, and typically have well-developed head crests in the shape of a casque. The crest is sexually dimorphic in Basiliscus, being well developed only in males, but is present in both sexes of Corytophanes and Laemanctus (Pough et al. 2003).
In Corytophanes, the head crests are used in defensive displays where the lateral aspect of the body is brought about to face a potential predator in an effort to look bigger (Pough et al. 2003). Unlike many of their close relatives, they are unable to break off their tails when captured, probably because the tail is essential as a counterbalance during rapid movement.
Casque heads are forest-dwelling lizards.
Despite the small size of the group, it includes both egg-laying species and some that give birth to live young.
Genera and species
- Genus Basiliscus
- Genus Corytophanes
- Genus Laemanctus
- Frost, D.R, and R. Etheridge. 1989. A Phylogenetic Analysis and Taxonomy of Iguanian Lizards (Reptilia: Squamata). Univ. Kansas Mus. Nat. Hist., Misc. Pub. (81): 1-65. ("Corytophanidae Fitzinger, 1843", p. 34.)
- Dahms Tierleben. www.dahmstierleben.de/systematik/Reptilien/Squamata/Iguania/corytophanidae.
- Bauer, Aaron M. (1998). In Cogger, H.G. & Zweifel, R.G.. Encyclopedia of Reptiles and Amphibians. San Diego: Academic Press. pp. 134–136. ISBN 0-12-178560-2.
Further reading
- Fitzinger, L. 1843. Systema Reptilium, Fasciculus Primus, Amblyglossae. Braumüller & Seidel. Vienna. 106 pp. + indices. (Family Corythophanae, p. 52.)
- Pough FH, Andrews RM, Cadle JE, Crump ML, Savitsky AH, Wells KD. 2003. Herpetology, Third Edition. Upper Saddle River, NJ: Pearson Education, Inc. pp. 129.
Creationists often like to claim that complex traits cannot arise from the "simple" processes of mutation and selection. They often claim that these processed are not even observable (even though we've been observing them since we began breeding plants and animals).
Now, researchers have used evolution to make ROBOTS.
And not just any robots - robots that walk, hunt each other, evolve their shape, and are even altruistic - a distinctly mammalian trait. All of that was evolved, starting with nothing more than a collection of parts and a simple mutation/selection algorithm.
The above image shows one such robot - in this case, a walking robot whose shape evolved through autonomous design and fabrication. That's a fancy way of saying the parts were randomly put together, mutated, and selected until a functioning robot was formed.
Pretty freakin' cool if you ask me!
This is hardly the first time computers and robots have been used in evolutionary experiments, but what makes this experiment unique is that it's the first time evolution of structure and evolution of behaviour have been combined in one experiment. In the past, groups would either use a set structure whose behaviours evolved, or a pre-set series of behaviours that an object was then evolved to fit.
So what does this actually teach us about evolution? Especially given the very simple evolutionary algorithm they used (see above image) - an algorithm far simpler than the "algorithm" of biological evolution.
As it turns out, this study teaches us a lot about evolution, notably:
- Small mutations can lead to very rapid changes in form/behaviour. All of the behaviours appeared quite quickly in these experiments - usually a functioning behaviour/structure would appear in a few dozen generations, and after 100 or so generations the behaviour/structure would be highly defined.
- Once a behaviour/trait is formed, it is optimised very rapidly.
- Very simple systems (in this case consisting of a few hundred parts - compared to the thousands to tens-of-thousands of genes in living organisms) can be moulded by evolution into extremely complex beings, capable of complex - even cooperative - behaviours.
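For readers curious what a bare-bones mutation/selection loop looks like, here is a minimal Python sketch. It evolves a toy bit-string "genome" toward a fixed target; it is a generic illustration of the technique, not the actual algorithm from Floreano and Keller's paper.

```python
import random

TARGET = [1] * 20  # toy goal: a genome of all ones

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]                       # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(20)]     # reproduction with mutation
print(generation, fitness(population[0]))  # typically converges within a few dozen generations
```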
Floreano, D., & Keller, L. (2010). Evolution of Adaptive Behaviour in Robots by Means of Darwinian Selection PLoS Biology, 8 (1) DOI: 10.1371/journal.pbio.1000292 | <urn:uuid:a08995ce-1b83-4e8c-a6cb-040f04347d2e> | 3.796875 | 485 | Personal Blog | Science & Tech. | 39.082267 |
Given the equation =0, find the values of k such that one root of the equation is less than 1; another root is in the interval (1, 4); and the third root is greater than 4.
[Problem submitted by Steve Lee, LACC Professor of Mathematics.]
Solution for Problem 7:
Let f(x)= , then a rough sketch of the graph of f(x) is the following.
k must satisfy both (1) and (2). | <urn:uuid:28460343-c494-4060-806d-a47516ee6ebb> | 3.15625 | 98 | Q&A Forum | Science & Tech. | 87.515 |
Can anybody help with these please?!
Solve this problem by writing an equation. The length of a rectangular room is 6.2 m and the width is 3.3 m. What is its area?
Express this problem as an equation. The length of a rectangular car park is 128 m and the width is 63 m. What is its area?
Write the rule for finding the area of a rectangle as an equation which will be true for all rectangles.
Express this problem as an equation. The radius of a circle is 6.3 m. What is its diameter? Thanks
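One possible way to set these up (writing A for area, l for length, w for width, r for radius, and d for diameter):

1. A = l × w = 6.2 × 3.3 = 20.46, so the area is 20.46 m².
2. A = l × w = 128 × 63 = 8064, so the area is 8064 m².
3. For any rectangle: A = l × w.
4. d = 2r = 2 × 6.3 = 12.6, so the diameter is 12.6 m.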
Sea ice in the Arctic continues to track significantly below average, with the 2nd-lowest readings for the month (depending on the day) in the modern era. Weather conditions around Antarctica caused a temporary stall in sea ice freezing, pushing extent toward below-average conditions before a recent partial recovery. Global sea ice area therefore took a turn for the worse during June and early July, approaching historical lows reached only a couple of times before. Within the last month, global sea ice area reversed the gains made in May, which had nearly eliminated the deficit from climatological conditions that characterized the first four months of 2011, and instead declined rapidly to a 2 million sq. km deficit by early July.
To help put this in context, only three previous times in recent history have seen conditions as bad as they are today: in 2007, 2008 and 2010. The difference between these previous occurrences and current conditions is profound: they previously occurred around September, when Arctic ice reached its annual minima. This, of course, is July. There are over two months left before melting in the Arctic stops. Will a new record low sea ice area be recorded this year? Stay tuned.
Portions of the Arctic are experiencing warmer near-surface conditions in 2011 than at the same point in 2007, when the record low extent of sea ice was recorded. Additionally, warmer water than in past years continues to be transported into the Arctic Ocean at rates that are quickening (more warm water flowing through the Ocean faster – not a good thing for long-term ice survivability). Weather conditions (local pressure centers, resulting wind patterns, etc.) will have the final influence on what conditions in Sep. 2011 look like. As this summer has progressed, the dipole anomaly has again been established. Prior to the late 1990s, this atmospheric phenomenon didn’t occur. It is postulated that it is setting up in response to climate change. Updating my guess from last month, I think 2011 might challenge 2007 for setting the record low extent. The extent is hovering at daily record low values and the dipole has set up again. It will only take a couple of storm systems to prevent 2011 from setting the record low, however. But I don’t think it will miss it by much.
Read the rest of this entry → | <urn:uuid:4b661730-5ae6-4f4a-b85f-a5baae771ffb> | 3.109375 | 469 | Personal Blog | Science & Tech. | 50.232805 |
They flipped the switch, the protons flew, and the world hasn't blown up, yet.
Three hundred feet underground on the French-Swiss border, the biggest physics experiment in history launched yesterday. The Large Hadron Collider.
The biggest atom smasher ever built: a seventeen-mile collision track, and sky-high hopes for cosmic breakthroughs in our understanding of the universe — of muons and gluons and quarks, of dark matter and black holes and — maybe — whole new space-time dimensions.
This hour, On Point: Particle physics, a giant new tool, the shape of the universe, and you.
You can join the conversation. What are your hopes and fears for the earth’s largest atom smasher? What’s the cosmic question you want answered when it makes its own big bang?
Joining us from Paris is Adrian Cho, staff writer for Science magazine. He was at the European Organization for Nuclear Research in Switzerland yesterday when they fired up the Large Hadron Collider for its first big test.
Joining us from Driggs, Idaho, is Leon Lederman. He’s an experimental physicist, and director emeritus of the Fermilab atom smasher, outside Chicago. He won the Nobel Prize for Physics in 1988 for his work on neutrinos, and he coined the term “God particle” for the Higgs boson with his 1992 book “The God Particle: If the Universe is the Answer, What is the Question?”
Joining us from New York is Lisa Randall. She’s a professor of theoretical physics at Harvard University, renowned for her work on string theory and author of “Warped Passages: Unraveling The Mysteries Of The Universe’s Hidden Dimensions.”
The official site of the Large Hadron Collider at CERN explains the science behind all the excitement.
And just for laughs, here’s a (sort of) rap video about the LHC… | <urn:uuid:ce4af507-4afe-4e38-baed-22ae65630a10> | 2.796875 | 423 | Truncated | Science & Tech. | 56.014623 |
Ionization Techniques for Volatile Analytes Entering the MS from a GC
Chemical Ionization (CI):
Today, most mass spectrometers can perform both electron ionization and chemical ionization, with different interchangeable ionization units. The CI unit is less open, in order to limit diffusion and contain the reagent gas longer, promoting chemical ionization. Several reagent gases are used, including methane, propane, isobutane, and ammonia, with the most common being methane. CI is referred to as a soft ionization technique since less energy is transferred to the original analyte molecule, and hence less fragmentation occurs. In fact, one of the main purposes of using CI is to observe the molecular ion, represented by M•+ or M•−, or a close adduct of it, such as MH+, MH2+, or M plus the reagent ion (i.e. M+CH3 with methane as the reagent gas or M+NH3 with ammonia as the reagent gas). Notice again that neutral, negative, and positive fragments are produced, but only the positive fragments are of use in positive CI detection, while negative ion fragments are detected in negative CI mode.
This section will limit its discussion to methane CI, the most common case. Methane enters the ionization chamber at about 1000 times the concentration of the analyte molecules. While the electron beam in EI is usually set at 70 eV, in CI lower energies are used, in the range of 20 to 40 eV. This energy level produces electrons that react with methane to form CH4•+, CH3+, and CH2•+. These ions rapidly react with un-ionized methane in the following manner:
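CH4•+ + CH4 → CH5+ + CH3•
CH3+ + CH4 → C2H5+ + H2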
The CH5+ and C2H5+ ions then collide with the analytes (represented by M) and form MH+ and (M−1)+ by proton and hydride transfer:
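CH5+ + M → MH+ + CH4 (proton transfer)
C2H5+ + M → (M−1)+ + C2H6 (hydride transfer)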
Note that several types of ions can occur: (M+1)+ or MH+ from proton transfer, (M−1)+ from hydride transfer, and (M+CH3)+ and even (M+C2H5)+ from addition reactions. By inspecting the mass spectrum for this pattern, the molecular mass of the analyte can be deduced. Similarly, if other reagent gases are used, such as propane, isobutane, and ammonia, similar proton transfer, hydride transfer, and adduct formation can occur. The usual goal of CI is to obtain a molecular weight from the molecular ion, which would usually not be present in an EI spectrum.
A relatively simple illustration of a CI chamber and its reactions is shown in the animation below. This animation is similar to the EI animation, but the continuous addition of a reagent gas, methane, causes the gas to be ionized by the beam of electrons. Subsequently, the ionized methane reacts with analytes exiting the GC column. Methane is preferentially ionized by the beam of electrons due to its significantly higher concentration as compared to analytes from the GC. Positively charged fragments are drawn into the focusing lens and mass analyzer by a positively charged repeller plate (not shown) and the negatively charged accelerator plate.
Animation 5.2. Illustration of a CI Chamber and Reagent Gas-Analyte Reactions.
Chemical ionization is most commonly used to create positive ions, but some analytes, such as those containing acidic groups or electronegative elements (i.e. chlorinated hydrocarbons), will also produce negative ions that can be detected by reversing the polarity of the accelerator and detector systems. Some of these analytes produce superior detection limits with CI as opposed to EI, while others only give increased sensitivity (the slope of the response-to-concentration line). Negative ions are produced by the capture of thermal electrons (relatively slow electrons with less energy than those common in the electron beam) by the analyte molecule. Thermal electrons are present from the low-energy end of the distribution of electrons produced by the lower-energy CI source (~20 eV as opposed to 70 eV in EI). These low-energy electrons arise mostly from the chemical ionization process but also from analyte/electron collisions. Analyte molecules react with thermal electrons in the following manner, where R−R′ is the unreacted analyte molecule and R represents an organic group:
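R−R′ + e− → R−R′•− (electron capture)
R−R′ + e− → R• + R′− (dissociative electron capture)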
The identification of negative ion fragmentation patterns of analytes can be used in the same manner as in EI or positive ion CI. But note that extensive fragmentation libraries exist only for 70eV electron ionization (EI). Many analysts create their own reference libraries with the analysis of reference materials that will later be used for the identification of unknown analytes extracted from samples.
Figures 5.3 and 5.4 contain CI spectra for the same compounds analyzed by EI in Figure 5.1 and 5.2, respectively. Note the obvious lack of fragmentation with the CI source and the presence of molecular ions in the CI spectra.
Figure 5.3. Fragmentation of Cyclohexanol by CI.
Figure 5.4. Fragmentation of Decanoic Acid Methyl Ester by CI.
To summarize, for GC-MS systems, individual analytes exit the GC column and are ionized and fragmented using electron or chemical ionization. Since the detector in an MS is universal (it responds to any positively charged ion), it is necessary to separate the molecular ion and its fragments by their mass or mass-to-charge ratio. This separation is performed in a mass analyzer, which is explained in the section below. But first, some mass analyzers require the beam of ion fragments to be focused, and all require the ion fragments to be accelerated in a linear direction.
©Dunnivant & Ginsbach, 2008 | <urn:uuid:f4308fed-f7a5-4a5c-8b48-1a410fe54c56> | 2.9375 | 1,212 | Academic Writing | Science & Tech. | 38.892452 |
I am not in favor of total digital enhancement of photographic images. Too much room for human bias. Disclaimers must be issued when a photo is enhanced.
"Old Moon Images Get Modern Makeover"
March 31st, 2009
WOODLANDS, Texas — Images of the moon gleaned from NASA spacecraft more than 40 years ago are now getting a 21st century makeover thanks to the Lunar Orbiter Image Recovery Project (LOIRP).
Back in 1966 and 1967, NASA hurled a series of Lunar Orbiter spacecraft to the moon. Each of the five orbiters were dispatched to map the landscape in high-resolution and assist in charting where best to set down Apollo moonwalkers and open up the lunar surface to expanded human operations.
By gathering the vintage hardware to playback the imagery, and then upgrading it to digital standards, researchers have yielded a strikingly fresh look at the old moon. Furthermore, LOIRP's efforts may also lead to retrieving and beefing up video from the first human landing on the moon by Apollo 11 astronauts in July 1969.
Dennis Wingo, LOIRP's team leader, detailed the group's work in progress during last week's 40th Lunar and Planetary Science Conference.
Teamed with SpaceRef.com, LOIRP's saga is one of acquiring the last surviving Ampex FR-900 machinery that can play analog image data from the Lunar Orbiter spacecraft. Wingo noted that the work is backed by NASA's Exploration Systems Mission Directorate, the space agency's Innovative Partnership Program, along with private organizations, making it possible to overhaul old equipment, digitally upgrade and clean-up the imagery via software.
LOIRP is located at NASA's Ames Research Center at Moffett Field, Calif. There, project members are taking the analog data, converting it into digital form and reconstructing the images.
By moving them into the digital domain, Wingo said, the photos now offer a higher dynamic range and resolution than the original pictures, he added.
"We're going to be releasing these to the whole world," Wingo said.
Use of the refreshed images, contrasted to what NASA's upcoming Lunar Reconnaissance Orbiter (LRO) mission is slated to produce, has an immediate scientific benefit. That is, what is the frequency of impacts on the Moon's already substantially crater-pocked surface?
"We'll be able to get crater counts," Wingo told SPACE.com. "LRO imagery of the same terrain imaged decades ago will provide a crater count over the last 40 years."
Frozen in time
There's also a more down to Earth output thanks to LOIRP scientists.
They have used a Lunar Orbiter 1 image of the Earth for climate studies, basically a snapshot frozen in time that shows the edge of the Antarctic ice pack on August 23, 1966.
The team is working with the National Snow and Ice Data Center in Boulder, Colorado to correlate their images of the Earth with old NASA Nimbus 1 and Nimbus 2 spacecraft imagery that flew at about the same time — in the mid-1960s — as the Lunar Orbiter 1. Nimbus satellites were meteorological research and development spacecraft.
Wingo said that the original Nimbus images may have been recorded on an Ampex FR-900 — so by processing the original Nimbus tapes there is a very good chance that they can provide NASA with polar ice pack data from ten years earlier.
One treasure hunt outing by LOIRP may lead to finding what some term as "lost" Apollo 11 slow scan tapes, Wingo said.
"We don't think they are lost. People have been looking for the wrong tapes," he said, explaining that they were recorded on Ampex FR-900 equipment — not on another type of recorder as previously thought.
Wingo said those Apollo tapes are stored at the Federal Records Center, labeled and ready for a look see.
"We think for the 40th anniversary of Apollo we may be able to get the original slow scan tapes," Wingo said. If so, the hope is to recover them and give the public a higher-quality, never-before-seen view of human exploration of the Moon.
There is a lesson learned output from LOIRP.
"In the beginning, very few people thought this could be done...but now they have seen the results," Wingo said.
It is not enough to have 100 year recording medium, Wingo explains. Without the retention of the specific era equipment that images are archived on, it will be impossible for future generations to recover older NASA or other satellite data, he advised.
This is a general issue, not specific to the Lunar Orbiter program. The retention of critical hardware should be a requirement for flight efforts. The original historic Apollo 11 slow scan images have been lost due to inattention to this critical detail, Wingo concluded. | <urn:uuid:02d506e3-33b4-47b6-a9cb-e30f498b9ded> | 2.78125 | 1,007 | Personal Blog | Science & Tech. | 45.984009 |
In this article I will discuss the trigonometric properties of triangles.
Triangles have unique trigonometric properties, just like other geometrical figures. The inscribed circle of a triangle is constructed by bisecting the triangle's angles; the three bisectors meet at a common point, and the perpendicular distance from that point to any side is the radius of the inscribed circle. The circumscribed circle is constructed by erecting the perpendicular bisectors of the sides; these meet at a common point, and the distance from that point to any corner of the triangle is the circumradius. The radii of the inscribed and circumscribed circles are related mathematically to the dimensions of any triangle.
In any triangle, the tangent to the inscribed circle is perpendicular to the radius at the point of contact, and the angle bisectors meet at the centre of the inscribed circle. Let X be the point where the incircle touches side AB (the side opposite angle C, labeled c). Then tan(A/2) = r/AX, where AX is the distance from vertex A to X. In the same way, tan(B/2) = r/BX. That is, AX = r/tan(A/2) = r·cot(A/2) and BX = r·cot(B/2). However, AB = AX + BX, so c = r·[cot(A/2) + cot(B/2)]. In other words, r = c/[cot(A/2) + cot(B/2)]. In the same manner, r = b/[cot(A/2) + cot(C/2)] and r = a/[cot(B/2) + cot(C/2)]. It follows that for any triangle [cot(A/2) + cot(C/2)]/[cot(A/2) + cot(B/2)] = b/c and [cot(B/2) + cot(C/2)]/[cot(A/2) + cot(C/2)] = a/b. Applying the sine rule, a/sin A = b/sin B, that is, b·sin A = a·sin B, therefore a/b = sin A/sin B; in the same manner, b/c = sin B/sin C. That is, the inscribed circle's radius can be expressed in terms of one side length, the sine ratio of two angles, and the cotangents of two half-angles, as follows:
r = c · (sin B / sin C) / [cot(A/2) + cot(C/2)]
r = b · (sin C / sin B) / [cot(A/2) + cot(B/2)]
r = a · (sin B / sin A) / [cot(A/2) + cot(C/2)]
Circumscribed radius of circle of any triangle
The angle that a chord subtends at the circumference of a circle is half the angle it subtends at the centre. Accordingly, if one bisects the sides of a triangle to construct its circumscribed circle, the angle a side subtends at the centre is twice the corresponding angle of the triangle. In addition, the perpendicular from the centre of a circle bisects the chord, i.e. one side of the triangle. Hence the angle between the circumradius and this perpendicular equals the angle of the triangle opposite that side. Say this angle is A and the opposite side is a; then sin A = (a/2)/R, where R is the radius of the circumscribed circle of the triangle. In other words, R·sin A = a/2, so R = a/(2·sin A) = (a/2)·cosec A, or, using sin A = 2·sin(A/2)·cos(A/2), R = a/[4·sin(A/2)·cos(A/2)].
R = a / [4 sin(A/2) cos(A/2)]
R = b / [4 sin(B/2) cos(B/2)]
R = c / [4 sin(C/2) cos(C/2)]
That is, if one knows the length of any side of a triangle and the sine of the opposite angle, one can calculate the radius of the circumscribed circle. For example, the circumradius of an equilateral triangle with sides of 4 cm is 4/(4 sin 30° cos 30°). Since sin 30° = 0.5 and cos 30° = √3/2, R = 4/(4 × 0.5 × √3/2) = 4/√3 ≈ 2.31 cm.
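These identities are easy to check numerically. The Python sketch below computes r and R for an arbitrary triangle in two independent ways: from the half-angle formulas above, and from the standard formulas r = Area/s and R = abc/(4·Area), with Heron's formula supplying the area.

```python
import math

a, b, c = 5.0, 6.0, 7.0                      # any valid triangle
A = math.acos((b*b + c*c - a*a) / (2*b*c))   # law of cosines
B = math.acos((a*a + c*c - b*b) / (2*a*c))

s = (a + b + c) / 2                          # semi-perimeter
area = math.sqrt(s * (s-a) * (s-b) * (s-c))  # Heron's formula

def cot(x):
    return 1.0 / math.tan(x)

r = c / (cot(A/2) + cot(B/2))                # half-angle formula derived above
R = a / (4 * math.sin(A/2) * math.cos(A/2))  # half-angle form of a / (2 sin A)

print(r, area / s)            # both ~1.633 (inradius)
print(R, a*b*c / (4 * area))  # both ~3.572 (circumradius)
```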
In this article we will be seeing how to add a polygon on top of the Silverlight Bing Map Control. Refer to the earlier article for creating a Silverlight Bing Map Control.
MapPolygon:
The MapPolygon class accepts a list of points that define its shape and location on the map. The MapPolygon class is used to represent a polygon on the map.
Namespace: Microsoft.Maps.MapControl
Assembly: Microsoft.Maps.MapControl (in microsoft.maps.mapcontrol.dll)
Create Silverlight application:
Location: contains the altitude and coordinate values of a location on the map.
Fill: gets or sets the fill of the shape.
Opacity: gets or sets the opacity of the shape.
Stroke: gets or sets the stroke color of the shape.
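A minimal XAML sketch of the idea is shown below; the credentials placeholder and the coordinates are illustrative assumptions, not values from the original article:

```xml
<m:Map xmlns:m="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"
       CredentialsProvider="YOUR-BING-MAPS-KEY">
    <m:MapPolygon Fill="Orange" Opacity="0.6" Stroke="Black" StrokeThickness="2">
        <m:MapPolygon.Locations>
            <!-- Each Location holds a latitude/longitude pair -->
            <m:LocationCollection>
                <m:Location Latitude="47.60" Longitude="-122.30" />
                <m:Location Latitude="47.60" Longitude="-122.10" />
                <m:Location Latitude="47.50" Longitude="-122.20" />
            </m:LocationCollection>
        </m:MapPolygon.Locations>
    </m:MapPolygon>
</m:Map>
```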
Silverlight Bing Map Control in SharePoint 2010
Introduction to Microsoft Silverlight | <urn:uuid:38a75bdd-c0ba-4066-8912-92fa6ab817bc> | 2.71875 | 189 | Tutorial | Software Dev. | 54.983182 |
The image below is an artist's conception of the surface of the 10 Myr-old star TW Hydrae (TW Hya). This young star is surrounded by a disk of gas and dust (the dull orange ring-like structure at the edge of the frame). Material from the inner edge of the disk rains onto the stellar surface, as indicated by the white, wispy trails. Once this gas hits the star, it produces bright spots. The energy of the accreted gas also drives a wind of outflowing material.
At a distance of only 100 light years, TW Hya is the nearest young star with an opaque circumstellar disk. CfA scientists in the Radio and Geoastronomy division use the Submillimeter Array to study the structure and chemistry of the disk. In the SSP division, scientists use the MMT and other ground-based and satellite telescopes to study the accreted gas and the ejected material in this and other young stars.
The land snail Cepaea nemoralis is highly polymorphic for shell colour and banding (Figure 1). Within and between populations individuals can display a shell colour of several shades of yellow, pink, or brown, and any number of bands from 0 to 5. Generally C. nemoralis populations are monomorphic for lip colour, with the common morph being dark brown or black. However, in a few European areas white or pale-lipped individuals are also present in some populations, e.g. Cantabria in Spain; the Pyrenees; Denmark; Yorkshire and Cornwall in England; North Wales; Western Scotland; and the west coast of Ireland (Figure 2). These polymorphic populations consist of both the dark-lipped and white-lipped individuals, and additionally pale brown-lipped individuals, which are possibly heterozygotes (Cain et al., 1968). Very rarely, some populations in these areas are monomorphic for the rare lip colour, e.g. in the Pyrenees.
The genes for shell colour, lip colour, and banding are very tightly linked forming a supergene (Jones et al. 1977). Yet, while around 98% of British populations are polymorphic for shell colour and banding, lip colour is almost completely invariant, with there being only a few known British locations that contain populations that are polymorphic for this trait. It is therefore unusual that while natural selection operates so that shell colour and banding are variable, the monomorphism in the lip colour is probably maintained by strong stabilising selection.
There are several linked hypotheses that may explain the distribution of white-lipped C. nemoralis: 1) repeated evolution of the same character due to common environment or selection 2) the populations are derived from the same Pleistocene refugial populations or 3) introgression of the white-lip allele from the sister species Cepaea hortensis. The aim of the project was to establish by means of genetic methods as to whether the populations containing individuals with the rare lip colour all have a common origin. Additionally, the project would also contribute towards understanding the postglacial colonisation of Britain and Ireland.
The last glacial period
The Last Glacial Maximum (LGM; 23 to 18 kyr ago) was at its height around 18 kyr ago during the Pleistocene, when the European ice sheet extended as far south as 52°N with an area of permafrost stretching to 47°N (Hewitt, 2004)(Figure 3). Glaciers also formed on the Southern European Mountains, such as the Alps and Pyrenees, creating a barrier that to a great extent blocked the pathways of migrating species. It is therefore assumed that the majority of temperate species, such as Cepaea nemoralis, survived within the ice-free southern peninsulas i.e. Iberia, Italy, and the Balkans (Bennett 1997; Hewitt, 1999).
The end of the Younger Dryas period (~ 10 Kyr ago) marked the beginning of the latest interglacial phase, the Holocene. Species that were previously confined to refugial areas tracked the warming climate, generally migrating in a northward direction, and filled any available niches en route (Hewitt, 1999). Perhaps modern day European populations that are polymorphic for lip colour all originated from the same southern refugial area, most likely Iberia.
During the summer of 2007, I conducted a transect down the west coast of France, then across the Pyrenees and the north coast of Spain. In all, I collected over 200 individuals from 20 separate locations (Figure 4). Populations were located by thorough inspection of suitable habitats, with care taken so as not to sample near sites where recent introductions are likely, such as near agricultural areas, parks, or private gardens. Where possible between 10 and 30 individuals were collected at each site, from an area no larger than 10 x 10 m. Samples were returned to the lab and frozen on arrival.
The research so far
To date, fragments of the mtDNA gene cytochrome oxidase subunit I (COI) and 16S rRNA for over 950 Cepaea nemoralis individuals from >100 Western European sites have been sequenced and analysed. Intriguingly, the data strongly suggests that individuals from the west coast of Ireland are derived from populations in the Pyrenees, supporting the long-known “Lusitanian” origin of the Irish fauna. Some populations from both of these areas also contain a high proportion of large, white-lipped C. nemoralis. However, there is little evidence that other white-lipped populations in mainland Britain and Europe are primarily derived from the Iberian populations. Additionally, there is no evidence for introgression from the sister species Cepaea hortensis.
The next step for this study is to compare the Irish Cepaea nemoralis mitochondrial DNA patterns to the fossil records from both the east and west coast of Ireland (Preece et al., 1986; Speller, 2006). Additionally, palaeoclimatic niche modeling of refugial areas in Iberia may be constructed to establish the potential distribution and movement of Spanish C. nemoralis during the Pleistocene and Holocene. Finally, sequencing of additional genes, such as a noncoding nuclear gene, and microsatellite work would be useful to support original findings.
I am extremely grateful to the Conchological Society for helping to fund my field work last summer. I would like to thank my supervisor, Dr. Angus Davison, for his help and advice on the project, and the University of Nottingham for funding the lab work. In addition, many thanks to Prof. Robert Cameron and Prof. Steve Jones for their useful discussions about the project.
Bennett, K. 1997 Evolution and ecology: the pace of life. Cambridge University Press.
Cain, A.J., Sheppard, P.M., and King, J.M.B. 1968 The Genetics of Some Morphs and Varieties of Cepaea nemoralis (L.). Phil. Trans. R. Soc. Lond. B. 253, 383–396.
Hewitt, G. 1999 Postglacial recolonization of European Biota. Biol. J. Linn. Soc. 68, 87–112.
Hewitt, G. 2004 Genetic consequences of climatic oscillations in the Quaternary. Phil. Trans. R. Soc. Lond. B. 359, 183–195.
Jones, J. S., Leith, B. H., and Rawlings, P. 1977 Polymorphism in Cepaea: A problem with too many solutions? Ann. Rev. Ecol. Syst. 8, 109-143.
Preece, R. C., Coxon, P., and Robinson, J. E. 1986 New biostratigraphic evidence of the Post-glacial colonization of Ireland and for Mesolithic forest disturbance Journal of Biogeography 13, 487-509.
Speller, G. R. 2006 Molluscan biostratigraphy, stable isotope analyses and dating of Irish Holocene tufas. University of Cambridge.
Thomaz, D., Guiller, A., and Clarke, B. 1996 Extreme divergence of mitochondrial DNA within species of pulmonate land snails Proc. R. Soc. Lond. B 263, 363-368.
Figure 1. Diagram to show a selection of the shell pattern polymorphisms of Cepaea nemoralis. A. Dark pink, unbanded. B. Bright pink, unbanded. C. Pale pink, unbanded. D. Dark yellow, unbanded. E. Dark yellow, 5-banded (12345). F. Dark yellow, 5-banded (spread bands). G. Faint pink, with fused bands. H. Dark yellow, with fused bands. I. Dark yellow, 3-banded (00345). J. Yellow-white, 5-banded (12345). K. Pale yellow, 5-banded (12345). L. Pale yellow, punctate bands. M. Pale yellow, faint bands. N. Dark yellow, mid-banded (00300). O. Pink, mid-banded (00300).
Figure 2. Map to show areas in which populations containing white-lipped individuals are found (circled). The approximate distribution of Cepaea nemoralis is indicated by shading. Insert shows the lip colour polymorphism of C. nemoralis (left: white-lip; middle: pale brown-lip; right: dark-lip).
Figure 3. Map to show the extent of the ice sheet and permafrost in Europe during the Last Glacial Maximum (~18,000 years ago). The grey area represents the ice sheet; sea ice is indicated by the hatched area; the dotted line shows the extent of the permafrost (adapted from Hewitt, 1999).
Figure 4. Map to show the field trip collection sites (white circles; black circles represent sampling sites from Thomaz et al, 1996). The distribution of Cepaea nemoralis is indicated by shading. | <urn:uuid:03110a61-f837-4571-b4b9-f5da8d6b4954> | 3.453125 | 1,916 | Academic Writing | Science & Tech. | 54.143768 |
Experiment of the Week -#199 Blowing Out a Candle
Since it is mid-December, it is time once again for me to read Michael
Faraday's Chemical History of a Candle. This wonderful book is one of the
best of his Christmas lectures for children and it is still in print over
a century later. If you have never read this book, I highly recommend it,
both for the information and to experience Faraday's marvelous style of
science. In keeping with my annual tradition, I have an experiment for
you that uses a candle. What is involved in blowing out a candle? You know
that if you blow on the flame, it goes out, but why? To find out, you will need:
* a candle
* a lighter or matches
* a candle holder or some aluminum foil
* a dinner plate
WARNING! THIS EXPERIMENT USES A LIT CANDLE. Be safe and use good
judgement. Never experiment with fire when you are alone. You should
always have one or more adults with you, in case there is an accident.
Place your candle in a holder. If you don't have a holder, you can make
a temporary one by crumpling a ball of aluminum foil around the base of
the candle and then pressing the bottom of it against a table to
flatten it enough for the candle to sit upright. Light the candle and
watch it burn for a minute or so. Then blow it out. Great! It worked. I
hope you liked it.
Wait a minute! WHY did it work? If you ask different people, you will
get different answers. Some people say that blowing on it cools
the flame enough to put it out. Others say that the carbon dioxide in
your breath smothers the flame. Let's experiment with each of these to
see if we can find out.
First, let's see if cooling the flame will put it out. If cool air will
put it out, then cold air should definitely do it. Relight your candle.
If it is a cold day, take your candle outside. If the weather outside
is not cold, open your freezer and hold the candle in the cold air.
Does it go out? No. Even in very cold air, the candle continues to
burn. That tells us that it is not cooling that lets us blow out the candle.
OK, then maybe it is the carbon dioxide in your breath that puts out
the flame. When you breath air in, you absorb some of the oxygen from
the air and put back some carbon dioxide. Carbon dioxide itself does
extinguish a flame. Instead, it is the lack of oxygen which can cause a
flame to go out. To see if that is the reason you can blow out a
candle, we need to try blowing out the flame with air that has the
normal amount of oxygen in it. To do that, you need something that you
can use to fan through the air. I tried a plastic dinner plate and
found that it worked very well. Light the candle again. Place it on a
flat surface. Hold the dinner plate in your hand and use it like a fan
to blow air at the candle flame. With one quick snap of your wrist, you
should be able to get enough of a blast of air
to put out the candle. Now we know that blowing out a candle is not due
to changes in the amount of oxygen or carbon dioxide.
Then what does make the flame go out? To find out, relight your candle.
Now start blowing gently on the flame. You don't want to blow it
out. Instead, you want it to begin flickering. Watch the flame as
you blow. Observe carefully as you blow a bit harder and a bit
harder. You should notice that the flame always seems to stay attached
to the candle wick. If it breaks away from the wick, then the flame goes out.
That is why you can blow out a candle. The wick itself is not burning.
Instead, as the wax gets hot, it melts and then comes apart to form
several new chemicals. One of these chemicals is a gas that will burn.
This burning gas is what makes the flame. As long as the flame is
around the wick, it continues to heat more wax and form more gas to
burn. If you blow the flame away from the wick, then it runs out of gas
to burn and goes out.
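If you are curious about the chemistry, here is a representative combustion
reaction for that burnable gas, assuming a typical paraffin-wax component
such as C25H52 (the exact mixture varies from candle to candle):

$$\mathrm{C_{25}H_{52} + 38\,O_2 \longrightarrow 25\,CO_2 + 26\,H_2O}$$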
Included with permission from Robert Krampf to post his experiments on my web site.
From landfills and laboratories to archeological digs and deep space
The Columbus Dispatch - November 14, 2011 02:05 PM
I always knew that the ash tree standing behind my Westerville home was an endangered species.
As the Dispatch's environment and science reporter, I've written stories for more than five years now about the voracious emerald ash borer, an invasive insect from Asia that's literally chewing its way through Ohio's ash trees.
Ash borer larvae kill trees by eating tunnels through the soft wood under the bark that supply trees with water and nutrients. An adult tree is supposed to die within three to five years of being attacked.
My tree, which was taller than my two-story house, looked perfectly fine last year. This year it was completely dead.
When landscapers came last Tuesday to cut it down, there wasn't a single section of this tree that wasn't riddled with the larvae tunnels shown in these photographs and the D-shaped exit holes that adult borer beetles chew through the bark when they are ready to emerge.
I thought my tree would have more time. Dan Herms, an Ohio State University entomologist, said that mature trees die more quickly in infested areas once the population of ash borers reach "critical mass."
The Mars rover Opportunity has set a pretty nice record.
The other day, the rover drove 263 feet, moving its odometer - which has been ticking along since January 2004 - to 22.220 miles. Not bad for a mission that was supposed to last only 90 days! Really. So who owned the previous NASA driving record? That would have been the Lunar Roving Vehicle driven by Apollo 17 astronauts Eugene Cernan and Harrison Schmitt. Those two, visiting the moon for three days in December 1972, drove their rover 22.210 miles.
Professionally, I develop primarily in Ada. When I do get a chance to program in C++, I miss some of the features that Ada provides. Ranged types is one of these features. Ada allows the programmer to constrain a numerical type to a particular range. Intermediate expression results are allowed to exceed the range, but an exception is raised if an attempt is made to assign an out-of-range value to a variable. An exception is also raised if a result of an expression overflows the bounds of the ranged type's base type. This article details a class template I have created to implement a C++ type with similar behavior.
One requirement I have for the class is that it must be written in strictly conforming Standard C++. This conflicts with the desire that the code be efficient as possible. Checking for overflow in C++ without resorting to inline assembly can be expensive. One of the reasons Ada has efficient ranged checked types is because these types are part of the language. An Ada compiler can analyze the code and remove those checks that are unnecessary. I decided to implement this behavior in my class using templates. What is needed is a method to determine at compile-time if a given mathematical operation can overflow.
The possible occurrence of two conditions needs to be determined at compile-time. These conditions are overflow and out-of-range. Overflow occurs when the result of an operation exceeds the limits of the primitive used in defining the ranged type. An out-of-range condition occurs when an attempt is made to assign a value to a variable that is outside the bounds of the variable's defined range.
Method of Analysis
The simplest condition is out-of-range. This is merely a matter of checking that the constructor is only called with values within the allowed range. The bounds of the argument type dictate whether checks are made. If either the upper or lower bound of the type is within the range, the check is removed. For example, assume the range of type
signed char is [-128, 127]. If a variable with a range of [0, 65536] is constructed with a value of type
signed char, only the lower bound would be checked. Since the value cannot exceed the variable's upper bound, the check can be removed.
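A minimal sketch of that rule, with hypothetical names (Lo, Hi, ranged) that are mine rather than the article's:

#include <limits>
#include <stdexcept>

// Illustrative only: the constructor tests each bound only when the
// argument's type can actually violate it. check_lo and check_hi are
// compile-time constants, so the compiler removes the dead branches.
template <long Lo, long Hi>
class ranged {
public:
    template <typename U>
    explicit ranged(U v) : value_(static_cast<long>(v)) {
        const bool check_lo = static_cast<long>(std::numeric_limits<U>::min()) < Lo;
        const bool check_hi = static_cast<long>(std::numeric_limits<U>::max()) > Hi;
        if ((check_lo && static_cast<long>(v) < Lo) ||
            (check_hi && static_cast<long>(v) > Hi))
            throw std::out_of_range("ranged: value outside range");
    }
private:
    long value_;
};

// ranged<0, 65536> built from a signed char can only be too small,
// so only the lower-bound comparison survives in the generated code.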
The more thorny issue is how to determine if an overflow is possible. Keeping track of the maximum range resulting from an operation is one possible way. For example, if two variables of a type with the range [-100, 100] are multiplied, then the resulting range is [-10000, 10000]. Any possibility of overflow may be determined by doing this for each sub-expression. This is messy and overly complex, because it requires overflow checking when calculating each new range.
There is an easier way. Instead of keeping track of maximum and minimum range values, the code keeps track of the maximum possible digits. Ignoring the sign bit, a variable with the range [-100, 100] will have a maximum of 7 binary digits. The maximum resulting binary digits may be determined for every mathematical operation. Multiplying two numbers, each with a maximum of 7 binary digits, creates a product with at most 14 binary digits. So, for each sub-expression: 1) calculate the maximum digits required to hold the result, based on the maximum digits of each operand and the type of operation; 2) locate a primitive type with the same signedness as the base type that will hold a value of at least the resulting number of digits; 3) if the implementation has no such primitive type, flag the operation as having a possibility of overflowing; 4) do the operation, checking for overflow if necessary. Obviously, this method only works for integral types. So, currently floating point types are not supported. | <urn:uuid:d8537a77-fcde-4dc3-95b0-1fd00ee9469d> | 3 | 766 | Documentation | Software Dev. | 43.957135 |
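The digit bookkeeping itself can be sketched with a few metafunctions. Again, this is illustrative (and uses C++11 static_assert for brevity); mul_digits, add_digits, and holder are hypothetical names, not the article's:

#include <cstdint>

// Value digits (sign bit excluded) needed to hold a product or a sum.
template <int Da, int Db>
struct mul_digits { static const int value = Da + Db; };

template <int Da, int Db>
struct add_digits { static const int value = (Da > Db ? Da : Db) + 1; };

// Smallest primitive with at least N value digits; fits == false would
// signal step 3 above: no primitive is wide enough, so check at runtime.
template <int N, bool Fits32 = (N <= 31)>
struct holder { typedef std::int64_t type; static const bool fits = (N <= 63); };

template <int N>
struct holder<N, true> { typedef std::int32_t type; static const bool fits = true; };

int main() {
    // [-100, 100] needs 7 value digits; the product needs 14, which
    // fits a 32-bit int, so no runtime overflow check is required.
    typedef holder<mul_digits<7, 7>::value> product;
    static_assert(product::fits, "product cannot overflow");
    product::type p = 100 * 100;
    return p == 10000 ? 0 : 1;
}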
Bang!: How is the Energy of Seismic P-Waves Transmitted Through the Earth?
How is the energy of seismic P-waves transmitted through the earth?
- Masking tape
- String
- Scissors
- 5 marbles
- Cut 5 pieces of string, each 12 inches (30 cm) long.
- Tape one piece of string to each of the marbles.
- Tape the free end of each string to the edge of a table. Adjust the position and length of the strings so that the marbles are the same height and touching each other.
- Pull one of the end marbles to the side, and then release it.
- Observe any movement of the marbles.
The marble swings down, striking the closest marble in its path, and stops moving. The marble on the opposite end swings outward, and strikes its closest neighboring marble when it swings back into its original position. The cycle of the end marbles swinging back and forth continues for a few seconds.
Raising the end marble gives it energy, which is transferred to the marble it strikes. This energy is passed from one marble to the next, as each marble pushes against the next. The end marble is pushed away from the group. The transfer of energy from one marble to the next simulates the transfer of energy between particles of the earth during a seismic P-wave (primary pressure wave of an earthquake).
The first sign that an earthquake has occurred is the hammerlike blow felt and heard as a P-wave exits through the earth's surface. Before that, P-waves move through liquids and solids by compressing (pressing together) the particles of earth directly in front of them. The compressed particles quickly spring back to their original position as soon as the energy moves on. The crust of the earth moves upward as it is hit with the energy of the P-wave, and then settles back into place when the energy moves on.
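For reference, the P-wave speed in an elastic material is set by the material's stiffness and density:

$$v_p = \sqrt{\frac{K + \tfrac{4}{3}\mu}{\rho}}$$

where K is the bulk modulus, μ is the rigidity (shear modulus), and ρ is the density. This is why the "Check it Out!" question below has different answers for the crust, mantle, and core.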
- Would it affect the transmission of energy if the marbles were not in line? Stick pieces of clay under the strings on the side of the table in order to change the position of the marbles. Be sure that the marbles touch at some point, but that each marble is at a different height.
- Would changing the distance between particles affect the transfer of energy? Repeat the original experiment, moving the pieces of tape supporting the marbles farther apart so that there is a slight separation between each marble.
- Use a Slinky to demonstrate the particle movement of a seismic P-wave as it moves from the focus (starting point) of an earthquake to the epicenter (the point on the earth's surface directly above the focus). The Slinky can be used as part of a project presentation by slightly stretching it vertically and attaching its top and bottom loops to the display. Compress four to five loops together at one end, then release.
- Seismic waves move more slowly through sand because the energy of the waves moves forward in different directions as the sand particles move outward in all directions. To demonstrate this, cover the end of a paper towel tube with a paper towel. Secure the paper towel to the tube with a rubber band. Fill the tube with uncooked rice or bird seed. Use your fingers to press down on the rice as you try to push the rice down and out through the paper towel.
Check it Out!
P-waves are the swiftest seismic waves. Find out the speed of P-waves as they travel through the different layers of the earth's interior: crust, mantle, and core. Display a diagram of a cross section of the earth, with speeds of P-waves indicated for each layer.
Warning is hereby given that not all Project Ideas are appropriate for all individuals or in all circumstances. Implementation of any Science Project Idea should be undertaken only in appropriate settings and with appropriate parental or other supervision. Reading and following the safety precautions of all materials used in a project is the sole responsibility of each individual. For further information, consult your state’s handbook of Science Safety. | <urn:uuid:b96d8163-a062-4a92-9b86-861d70ed7862> | 4.1875 | 827 | Tutorial | Science & Tech. | 57.861235 |
Australia State of the Environment Report 2001 (Theme Report)
Prepared by: Ann Hamblin, Bureau of Rural Sciences, Authors
Published by CSIRO on behalf of the Department of the Environment and Heritage, 2001
ISBN 0 643 06748 5
Accelerated erosion and loss of surface soil (continued)
Active gullying and sheet erosion catchments - case studies * [L Indicators 1.4 and 1.5]
In recent years more studies of selected major rivers in different climatic regions have been undertaken to identify which combinations of climates and land uses are most vulnerable to accelerated water erosion. The application of remote sensing has also increased our capacity to record accurately those irregular, rare storm events that are responsible for the majority of accelerated erosion and sediment transport as plumes issuing from estuaries. This has been dramatically documented for the north-west of Western Australia, where both Landsat TM and NOAA-AVHRR real-time satellite surveillance have been used to record such episodic events as cyclonic damage and flooding (see satellite photo).
The Ashburton River and its tributaries are badly degraded only in the headwaters of the main channel and in the lower reaches; the majority of the catchment is in good to excellent condition. Despite this, a massive flood that occurred after 470 mm of rain fell in 24 hours in February 1997 caused enormous erosion and redeposition of sediment along the flooded extent of the main channel. The peak in-channel water depth was 20 m, and a sediment plume extended 15 km offshore and covered 500 km2. This event was monitored with NOAA-14 AVHRR imagery by DOLA, but because of the low relief and difficulties in discriminating between parallel-oriented rib erosion and deposition features, the actual extent of the erosion could not be accurately assessed.
Such events are rare. The peak flow on this occasion was four times the volume of the previously highest recorded peak in 1975-76, but a system might experience three or four such events in a century.
The Water and Rivers Commission in Western Australia has assessed the state of the northern rivers of the Indian Ocean, Timor Sea and Western Plateau Drainage basins in a recent report (WRC 1997), and concluded that by far the most serious cause of land and river degradation in the north-west is the removal of natural riverine and catchment vegetation. This results in rapid headward extension of gullies and networks of eroded channels in the upper reaches. The WRC considered the problem to be growing in magnitude, because of very intensive grazing pressure, particularly in the Timor Sea and Indian Ocean basins, where over two-thirds of the land area is in pastoral leases. The riparian zones are of critical importance in stabilising the landscape, and the contrast between their condition in most pastoral leases and the condition of vacant Crown land or national parks demonstrates the extent to which pastoralism has caused such degradation.
The tropical north-east of Queensland is also a susceptible region; large volumes of soil can be eroded in the severe cyclonic storms even where there is no land disturbance. However, changes in land cover and use over the past 150 years have exacerbated the potential rate of erosion substantially. Consequently in recent years fears have been raised over the impact of sediments and nutrient exports on other ecosystems in general and in particular the Great Barrier Reef.
Satellite image of sediment plume extending 50 km from the mouth of the Gascoyne River two weeks after a major cyclone.
Source: SPOT image of 2 March 1995 showing the sediment plume from the flooding Gascoyne River, WA. Seriously degraded inland areas can also be seen in this image. Image provided by WA Department of Land Administration-Remote Sensing Services
Most sediments are exported during infrequent, intense storms, including adsorbed phosphorus, organic material and pesticides that may be on suspended clay particles. For example, during cyclone Sadie in 1994, the Herbert River discharged over 100 000 tonnes of suspended sediments, sourced principally from grazing land. This would be sufficient to cover the whole of Sydney in 2 cm of soil (Mitchell and Bramley 1997).
Table 7 summarises the relationship between land use, annual flow and sediment export. Grazing lands are responsible for over 80% of the estimated annual sediment export from these north Queensland rivers. The proportion of each catchment remaining pristine, with full native tropical vegetation, varies from 2% to 76%.
|Catchment|Mean flow ('000s ML)|Area (km²)|Sediment yield (kg/ha)|Pristine ('000s t)|Pristine (% of area)|Grazing ('000s t)|Cropping ('000s t)|Urban ('000s t)|Total sediment ('000s t)|
|NE Cape York|19 100|43 300|484|130|21|1 963|3|0|2 096|
|Burdekin-Haughton|10 850|133 510|212|12|2|2 741|73|2|2 829|
|Fitzroy|7 100|142 646|130|41|9|1 589|229|2|1 861|
|Herbert|5 000|10 130|543|23|16|462|64|1|550|
|Johnstone|4 700|23|2 436|60|23|271|235|1|567|
|Mossman-Daintree|4 250|2 615|1 024|104|76|111|52|1|268|
|Mulgrave-Russell|4 200|2 020|2 328|66|49|192|212|1|471|
|Burnett-Kolan|2 900|39 470|177|32|17|599|229|2|698|
|Pioneer-O'Connell|2 650|3 925|1 838|32|19|464|233|1|720|
|Barron|40|2 175|1 150|12|76|75|20|4|114|
|Total|61 090|389 409| |565| |8 818|1 430|17|10 660|
Source: Davis and Hamblin (1998).
Despite these large sediment losses, and the probable impact of extensive grazing areas within these vulnerable catchments, accelerated erosion is difficult to detect and distinguish from natural erosion unless it becomes severe. As a result it is given less prominence in the media and in policy than other land degradation issues such as salinity or pesticide pollution.
Concern for the condition of the Great Barrier Reef in recent years has focused more attention on erosion in this region of Queensland. Monitoring by remote sensing has revealed that high-intensity cyclonic rainfall causes large plumes of fresh water to enter the coastal zone, carrying varying amounts of sediment. The Burdekin River discharge measurements for the period 1966-1995 has recently been coupled to local wind and tidal measurements in a verified three-dimensional hydrodynamic model that simulates such river plumes, in order to assess the overall impact of the river on the central portion of the Great Barrier Reef. These results indicated that the plume regularly extends over 400 km to the north of the river mouth in coastal waters, but individual events follow different paths because of the complexity of coastal topography, islands and reef matrices. The work predicts that shelf edge reefs are not affected by the Burdekin River plumes, but the inner shelf reefs are affected whenever offshore winds prevail during flood events (McAllister et al. 2000).
Sediment samples taken from the coast to the outer reef in the Great Barrier Reef lagoon in the Townsville region by the CRC Reef Research Centre confirm that very little sediment of land origin occurs in the mid-shelf or outer reefs, and most terrestrial-derived sediment is confined to near-shore locations. The sediments retrieved as grab samples from these areas are predominantly of ancient volcanic air-carried ash.
While these recent research findings may relieve some of the worst fears that have arisen that erosion from land clearing might eventually destroy the outer reef, there is no doubt that the Cairns region of the Great Barrier Reef, for example has been severely affected by induced erosion over the past 100 years. The shallow zone of the inshore reefs has been markedly affected, and coastal rivers have become drains bringing eroded mud to estuary mouths. Mud has accumulated rapidly in the Cairns waterfront region since the 1950s, probably at 10 to 15 times the natural rate (Wolansky and Spagnol 2000).
Figure 15: Burdekin River plumes in 1974 (the largest flood ever modelled) and 1995 (an average year) at 63 and 65 days after the event.
Source: CRC Reef Research Centre
* These case studies are provided as examples only, and the findings presented here cannot necessarily be applied to other areas. | <urn:uuid:1f424c64-5657-410a-b166-ca5f0fef943b> | 3.40625 | 1,862 | Academic Writing | Science & Tech. | 40.003041 |
Communications satellites orbit directly over the poles; how does this help them do their job? And what about a weather satellite directly orbiting over the equator?
Not all communications satellites orbit over the poles. Some are in geostationary orbits like the GOES weather satellites. Satellites that need to "see" all parts of Earth (but not all at once) travel in relatively low-altitude, near-polar orbits, with Earth turning beneath them. That way, during one complete Earth rotation, nearly every part of Earth passes beneath them. Want to learn more about orbits? http://scijinks.gov/orbits | <urn:uuid:d8b0377e-c146-4c7b-97f9-0f9c1e185f6d> | 3 | 132 | Q&A Forum | Science & Tech. | 44.515754 |
The style elements provide a straightforward way to control the font. These elements fall into two categories: physical styles and logical styles. The style elements provide only crude control over the appearance of your text. For more sophisticated techniques, refer to “Font Control”.
A logical style indicates that the enclosed text has some sort of special meaning. We’ve already seen the
em element, which indicates that the enclosed text should be emphasized (stressed when spoken). The list of available logical style elements includes:
em— Emphasizes text. Usually rendered as italic.
Please do <em>not</em> feed the monkeys.
strong— Strongly emphasizes text. Usually rendered as bold.
<strong>WARNING:</strong> Never feed the monkeys under any circumstances.
cite— Indicates a citation or reference. Usually rendered as italic.
For more information, please refer to <cite>The Dangers of Monkey Feeding, Vol 2</cite>.
dfn— Indicates the defining instance of a term; usually applied when the term first appears. Usually rendered as italic.
Monkeys have sharp <dfn>canines</dfn>, sharp pointy teeth to bite you with.
abbr— Indicates an abbreviation or acronym, such as RADAR (RAdio Detection And Ranging); also provides a title attribute that may contain the fully-spelled-out version. Hovering your mouse over the abbr causes many browsers to display a "tooltip" with the contents of the title attribute. Rendering is inconsistent; some browsers display a dotted underline, while others do nothing special.
In particular, beware of <abbr title="Monkeys Of Unusual Size">MOUS</abbr>es.
code— Indicates computer code fragments and commands. Usually rendered in a monospaced font.
<code>10 PRINT "I LOVE MONKEY CHOW"<br> 20 GOTO 10</code>
Wait — “Usually rendered as italic?” What does that mean? Shouldn’t
dfn always render as italic?
Well, not necessarily.
By default, most browsers render
em as italic. However, this is only a convention. Nothing requires browsers to use italic, and in fact, some browsers (such as a text-to-speech synthesis browser) might be completely unable to use italic.
Although the default is italic, you can override this using CSS. For example, you could specify that on your website, all
em elements render as red and bold.
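For example, a one-rule style sheet along these lines (a minimal sketch) would do it:

/* Render every em element on the site in red, bold text. */
em { color: red; font-weight: bold; }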
Okay… but why do em, dfn, and
cite all render the same by default? If I want italic, why wouldn't you just use
em and forget about the rest?
Well, sure, you could do that. However, using a richer set of elements gives you finer control. For example, you could declare that emphasized text is red and bold, but all citations are green and italic. You can also use logical style elements to extract more meaning out of a website. For example, if you knew that a website uses the
cite element consistently, you could easily write a program to extract a list of citations. (But don’t obsess over that point; there are better ways to store and consume this sort of information.)
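As a toy illustration of that idea — a quick C++ sketch using <regex>, which is fine for a throwaway script even though real HTML deserves a real parser:

#include <iostream>
#include <regex>
#include <string>

int main() {
    // Stand-in page; in practice you would read the HTML from a file.
    std::string page =
        "<p>See <cite>The Dangers of Monkey Feeding, Vol 2</cite> and "
        "<cite>Monkey Chow Quarterly</cite>.</p>";
    std::regex cite_re("<cite>(.*?)</cite>");
    for (std::sregex_iterator it(page.begin(), page.end(), cite_re), end;
         it != end; ++it)
        std::cout << (*it)[1] << '\n';   // prints the text of each citation
    return 0;
}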
The key point to remember is that a
cite element is a citation, not a chunk of italic text. The italics are just a useful side effect.
Inline vs. Block Elements
Unlike the paragraph and header elements, the style elements listed above don’t mark off a “block” of text. The physical style elements are inline elements that perform their work without adding extra line breaks:
Example 2.9. Inline vs. Block Elements
<p> 1. This is a paragraph with a section of <em>emphasized text</em> inside of it. </p> <em> 2. This is a section of emphasized text with <p>a paragraph</p> inside of it. </em>
The first sentence results in one “block” with a couple of emphasized words inside. In the second sentence, the
p element breaks the text up into multiple blocks.
Physical style elements specify a particular font change. For example, to make text bold, you can mark it off with the <b> and </b> tags. The list of available physical style elements includes:
b— Makes text bold. Appropriate for product names, …
Today, UniStellarDefenseCorp is proud to announce <b>Neutrinozon</b>, the only neutrino-based death ray on the market.
i— Makes text italic. Appropriate for ship names, internal monologues and thoughts, …
This exciting new product has already been installed on many advanced warships, including the <i>I.S.S. Hood</i>.
sub— Makes text a subscript. Appropriate for scientific and mathematical notation, …
Although the standard electron neutrino (ν<sub>e</sub>) configuration packs plenty of punch, muon neutrino (ν<sub>μ</sub>) upgrades are available on request.
sup— Makes text a superscript. Appropriate for footnotes, scientific and mathematical notation, …
With an intensity of 1.76x10<sup>6</sup> cm<sup>-2</sup>s<sup>-1</sup>, nothing can repel firepower of this magnitude!
The physical styles are subtly different from the logical styles. The logical style element
strong means “something strongly emphasized,” while the physical style element
b just means, “something that is bold.”
A Digression: Physical Styles and Semantic Markup
These days, you’ll sometimes hear people claim that the physical styles are yucky and bad and should never be used. This is because logical styles contain small quantities of a rare earth named “Semanticism”. Semanticism can be mined and processed into the power source for farms of spinning Academic Paper Turbines, which serve to feed and clothe members of our society who would otherwise starve to death.
Although it is true that certain physical styles are obsolete, the i and sup elements are appropriate to use in certain situations. For example, compare these code samples:
My grandfather served on the <i>U.S.S. Maine</i>.
My grandfather served on the <em>U.S.S. Maine</em>.
In this case, i is more appropriate than em, unless you think it's appropriate to always be shouting the "U.S.S. Maine" part. Not that i is all that wonderful, but em is just flatly wrong. Maybe it would be nice if we had a vessel element, but HTML is a small language, and the i element is the best we can do.
For a more extreme example, consider the quantity "2 to the x power," represented in HTML as 2<sup>x</sup>. If we take away the superscript, this leaves us with 2x, which is most emphatically not equal to "2 to the x power." Even though the sup element literally means, "move the x above the line," this physical change to the text has actual mathematical meaning! (Thanks to Jacques Distler for pointing this one out.)
Introduction
earthquake, trembling or shaking movement of the earth's surface. Most earthquakes are minor tremors. Larger earthquakes usually begin with slight tremors but rapidly take the form of one or more violent shocks, and end in vibrations of gradually diminishing force called aftershocks. The subterranean point of origin of an earthquake is called its focus; the point on the surface directly above the focus is the epicenter. The magnitude and intensity of an earthquake is determined by the use of scales, e.g., the moment magnitude scale, Richter scale, and the modified Mercalli scale.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
The Life and Death of Planet Earth by Peter Ward and Donald Brownlee, Times Books/Henry Holt, $25, ISBN 1591020638
TWO phrases spring to mind when you read this book: "you never had it so good" and "the end of the world is nigh". These pessimistic and doom-provoking thoughts echo continuously throughout this riveting read.
Authors Peter Ward and Donald Brownlee are both professors at the University of Washington in Seattle. Ward, a palaeontologist, uses the fossil record to explain how Earth got to its present state. Brownlee, an astronomer, gazes into the celestial crystal ball and foretells how our planet is going to change. Neither has good news.
Palaeontological evidence shows that life began as soon as the physical and chemical conditions allowed. Slimy, single-celled organisms originated about 3800 million years ago, 800 million years after Earth formed. A mere 500 million years ago animals appeared, and quickly dominated the ...
Same velocity means same
acceleration for two objects.
Two cars are set to drag race. The two cars are driven by
Dastardly Dave and Honest Abe. Dastardly Dave jumps the flag and
starts at t = 0 s, accelerating at a constant rate. One second later
Honest Abe starts, accelerating at a constant rate. At t = 4 s, they
both have the same speed. Compare their accelerations at t = 4 s.
Since the change in speed is the same for
both drivers (from rest to the "same speed" at 4 s), and Honest Abe
took a smaller time (3 s) to reach that speed than did Dastardly Dave
(4 s), Honest Abe must have a larger magnitude for his acceleration.
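In symbols, with v the common speed at t = 4 s:

$$a = \frac{\Delta v}{\Delta t}, \qquad \frac{a_{\mathrm{Abe}}}{a_{\mathrm{Dave}}} = \frac{v/(3\ \mathrm{s})}{v/(4\ \mathrm{s})} = \frac{4}{3}$$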
This website was written by Tom
Brown and Jeff
Crowder. Please let us know if you encounter any
difficulties in accessing the site, or if you have any suggestions on
how we may improve it. | <urn:uuid:260ac3d0-0755-4a7e-87ad-1c71151a757c> | 3 | 208 | Tutorial | Science & Tech. | 62.876316 |
Waves, Sound and Light: Sound and Music
Sound and Music: Audio Guided Solution
Olivia and Mason are doing a lab which involves stretching an elastic cord between two poles which are 98 cm apart. They use a mechanical oscillator to force the cord to vibrate with the third harmonic wave pattern when the frequency is 84 Hz. Determine the speed of vibrations within the elastic cord.
Audio Guided Solution
Habits of an Effective Problem Solver
- Read the problem carefully and develop a mental picture of the physical situation. If necessary, sketch a simple diagram of the physical situation to help you visualize it.
- Identify the known and unknown quantities and record them in an organized manner; often they can be recorded on the diagram itself. Equate given values to the symbols used to represent the corresponding quantity (e.g., v = 345 m/s, λ = 1.28 m, f = ???).
- Use physics formulas and conceptual reasoning to plot a strategy for solving for the unknown quantity.
- Identify the appropriate formula(s) to use.
- Perform substitutions and algebraic manipulations in order to solve for the unknown quantity.
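Worked out for this problem (try it yourself before peeking): the third-harmonic standing wave fits three half-wavelengths into the cord's length L, so

$$\lambda_3 = \frac{2L}{3} = \frac{2(0.98\ \mathrm{m})}{3} \approx 0.653\ \mathrm{m}, \qquad v = f\,\lambda_3 = (84\ \mathrm{Hz})(0.653\ \mathrm{m}) \approx 54.9\ \mathrm{m/s}$$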
Read About It!
Get more information on the topic of Sound and Music at The Physics Classroom Tutorial.
Return to Problem Set
Return to Overview | <urn:uuid:f227162e-353f-41a7-9999-ac0c82a8c569> | 3.984375 | 276 | Tutorial | Science & Tech. | 50.541588 |
This sounds like it is an excellent science fair project idea. Doing a carefully controlled experiment on a basic procedure that many scientists use for their research is definitely a worthwhile idea. Progress in science is made one experiment at a time, so I would encourage you to continue.
The Wikipedia article includes some background information on SDS, and reference 6 is a link to a database for household products that include SDS. Please note that you will have to search using all of the alternative names for SDS, such as sodium lauryl sulfate to find all of the products that contain this detergent. http://en.wikipedia.org/wiki/Sodium_dodecyl_sulfate
You would need to look up some of these products and try to pick one that does not have any other ingredients. If you can find one that does not contain other ingredients, this would be a perfect choice for your experiment.
However, it might be best to try to purchase the pure reagent grade SDS, even if it means filling out the forms. Using the pure detergent would ensure that you would not be including unknown variables in your experiment with the reagent.
Here is the information from this website about the rules for projects involving hazardous chemicals. http://www.sciencebuddies.org/science-f ... chem.shtml
SDS is a detergent and while it would be a good idea to wear safety glasses and gloves when working with the pure substance, it is not particularly hazardous, so I don’t think you would need special approval for working with this reagent. However, do check with your teacher and go ahead and submit the approval forms if there are any questions about the requirements for approval. Your project could be disqualified at judging if it is entered without required approval. http://www.sciencebuddies.org/science-f ... chem.shtml
Here is a material safety data sheet (MSDS) for SDS that includes precautions for handling this chemical. http://www.sciencelab.com/msds.php?msdsId=9925002
So what are you planning to do for your experiment? How are you going to measure your results?
All posts tagged ‘x-rays’
Hidden Blades, Glowing Scorpions and Bug Genitalia: Great Science Images From the American Museum of Natural History
New high-speed videos show us what we already knew: Dogs make a giant mess at the water bowl.
But the clips above and below, filmed in X-ray and visible light, challenge assertions that canines drink by scooping up fluid with a backward-curled tongue. Instead, dogs pull up a column of liquid and chomp it — just like cats do.
“It only looks like dogs scoop with back of their tongue. They drink the same way as cats, just sloppier,” said evolutionary biologist Alfred Crompton of Harvard University.
Crompton leads a study of dog drinking published May 25 in the Journal of the Royal Society Biology Letters. The new work follows research on cat-lapping mechanics published last fall by a group at MIT. In that study, researchers deconstructed how cats drink, and suggested dogs drink differently.
“We didn’t use X-ray video like Crompton. When we saw their clip, we were like, ‘Wow, it is the same!’” said physicist and mechanical engineer Pedro Reis of MIT, a co-author of last year’s cat research.
An ordinary star may have lived through a catastrophic explosion that created one of the most famous supernova remnants in astronomy. A new look at the cosmic debris cloud known as Tycho’s supernova remnant shows an arc of material that could explain what creates a key type of supernova.
“It looks like this companion star was right next to an extremely powerful explosion and it survived relatively unscathed,” said astronomer Q. Daniel Wang of the University of Massachusetts in Amherst in a press release. The study appears in the May 1 Astrophysical Journal.
The remnant is named for the great Danish astronomer Tycho Brahe, infamously known for his metal nose and more respectably for describing in 1572 the stellar explosion that bears his name. It was formed by a Type Ia supernova, which are useful in measuring astronomical distances because of their reliable brightness. Type Ia supernovas have also been used to show that the universe's expansion is accelerating, and to probe the mysterious force called dark energy that's pushing the universe apart.
A team of physicists, engineers and radiologists recently revived a first-generation X-ray device that had been collecting dust in a Dutch warehouse. The antique machine still sparked and glowed like a prop in an old science fiction movie, and used thousands of times more radiation than its modern counterparts to make an image.
The old machine was originally built in 1896 by two scientists in Maastricht, the Netherlands, just weeks after German physicist Wilhelm Conrad Röntgen reported his discovery of X-rays — an achievement that won him the first-ever Nobel Prize in physics and sparked a rash of copycat experiments.
H.J. Hoffmans, a physicist and high school director in Maastricht, and L. Th. van Kleef, director of a local hospital, assembled the system from equipment already on hand at Hoffmans’ high school and used it to take some of the first photographs of human bones through the skin, including in van Kleef’s 21-year-old daughter’s hand.
Since then, X-rays, which are the right wavelength to tunnel through muscle but are slowed by denser bones, have become almost synonymous with medical imaging. But most of those first X-ray systems were lost to history. Because the techniques and technology to measure radiation doses weren’t invented until decades after the first X-ray machines came about, no one knows exactly how powerful those systems were.
“There’s a gap in knowledge with respect to these old machines,” said medical physicist Gerrit Kemerink of the Maastricht University Medical Center. “By the time they could measure the properties, these machines were long gone.”
About a year ago, when Kemerink’s colleague at the hospital dug Hoffmans and van Kleef’s aging machine out of storage to use in a local TV program on the history of health care in the region, Kemerink grew curious about what the gadget could do. In a paper published online in Radiology, Kemerink reports the first-ever diagnostics on a first generation X-ray device.
“I decided to try to do some measurements on this equipment, because nobody ever did,” he said.
In brilliant bursts of light from the world’s most powerful X-ray laser, physicists have taken snapshots of living viruses and see the 3-D shape of proteins frozen in nanometer-scale crystals.
The technique is described Feb. 3 in two Nature papers, and the images are the first biological subjects to be captured by bouncing X-rays off single particles.
“This really lets us see things that were invisible before,” said study co-author Marvin Seibert, a Stanford University physicist. “The most important thing is that scientists will be able to solve the structures of new biological molecules.” | <urn:uuid:5decd011-3f7a-42d0-920e-a82a6df0223f> | 2.828125 | 1,104 | Content Listing | Science & Tech. | 45.950206 |
Hello Space Fans, welcome to another edition of Space Fan News.
So the big news this week came out on Monday with the announcement of the confirmation of Kepler-22b, a roughly Earth-sized planet in the habitable zone of a star a lot like the Sun.
I posted a video about this on Tuesday called "The Promise of Kepler-22b," where I laid out all we know about this planet.
All I want to say about it today is to emphasize that we really don't know much about Kepler-22b. All we've learned is what can be gleaned by watching a planet pass in front of a star, because that's the kind of observation Kepler does. So there's not a lot of detail we can get from an observation like that, but we can get some things.
We know its size: it has a radius roughly 2.4 times that of Earth
We know it's orbit around Kepler-22: it takes 290 days to orbit once around it.
We know the star is like our Sun and I've heard statements that it is 10 billion years old.
We know how far away it is: 600 light years.
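One more number follows from the figures above: assuming Kepler-22 has roughly one solar mass (reasonable for a star "a lot like the Sun"), Kepler's third law turns the 290-day period into an orbital distance:

$$a = \left(\frac{P}{1\ \mathrm{yr}}\right)^{2/3} \mathrm{AU} = \left(\frac{290}{365.25}\right)^{2/3} \mathrm{AU} \approx 0.85\ \mathrm{AU}$$

so the planet sits a bit closer to its star than Earth does to the Sun.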
We don't know what it's made of; we don't know if it is rocky or not; we don't know if it even has an atmosphere.
Those things are usually detected spectroscopically and the Kepler-22 system is so far away and the planet so small that even if we put our best spectrographs on it, we wouldn't be able to measure it. Just too far away.
Now... the James Webb Space Telescope on the other hand will have instruments onboard uniquely able to look at exoplanets and determine these things. It has a large enough primary to possibly even resolve many of these systems.
Which is reason number 812 why we need the JWST.
So if you don't know about it or haven't seen it yet, check out my video posted on Tuesday, you can find it easily on my channel page.
Next, astronomers have found the largest black holes ever.
Earlier this week it was announced that astronomers using the Gemini North Telescope in Hawai'i - along with researchers from the universities of Texas, Michigan, the Dunlap Institute for Astronomy and Astrophysics at the University of Toronto, Canada, as well as NOAO in Arizona - have found two supermassive black holes, each approaching 10 billion solar masses.
Now here's some perspective: the supermassive black hole at the center of the Milky Way is only four million solar masses. The one in the center of the Andromeda galaxy is 140 million solar masses.
These black holes are in a different league entirely.
The event horizon of each of these black holes is five to ten times bigger than the orbit of Pluto.
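That size claim checks out against the Schwarzschild radius formula:

$$R_s = \frac{2GM}{c^2} \approx 2.95\ \mathrm{km} \times \frac{M}{M_\odot}$$

so M = 10^10 solar masses gives R_s ≈ 3 × 10^10 km ≈ 200 AU — about five times the roughly 40 AU radius of Pluto's orbit.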
These are 10 billion solar mass black holes. We're talking about black holes two thousand times bigger than the one at the center of our galaxy.
Remember, there are two basic classes of black holes: there are stellar sized black holes that are floating around inside our galaxy. Then there are supermassive black holes devouring stars in the centers of galaxies and powering quasars. We're talking about that kind here.
So now, with this discovery, astronomers are starting to wonder just how big these things can get.
Very large black holes are thought to have been around when the universe was very young, from about one to three billion years after the Big Bang. The evidence for this comes from quasars. These are large, energetic sources believed to be the result of black holes devouring their host galaxies. Quasars are among the most distant objects ever observed. Since the distance from us is closely linked to how far back in time we're looking, the very, very distant objects we see in telescopes are from a time when the universe was young, so the existence of these quasars tells us there were powerful black holes around back then powering them.
But that was 10 billion years ago, quasars are seen very far away, the quasars in nearby galaxies have gone dark, but the black holes are still there.
So where are they now?
Astronomers believe they are lurking at the centers of ancient elliptical galaxies. Elliptical galaxies are smooth, featureless galaxies containing primarily older, low mass stars. Very little star formation is happening here, these galaxies are dying.
The black holes at the centers of these galaxies are no longer fed by accreting gas and have become dormant and hidden. We see them only because of their gravitational pull on nearby orbiting stars.
The astronomers found these black holes in NGC 3842 and NGC 4889: each a giant elliptical galaxy and the brightest member of a galaxy cluster; NGC 3842 lies about 320 million light-years away in the Leo galaxy cluster, and NGC 4889 is the brightest member of the famous Coma galaxy cluster some 336 million light-years away. Both of these galaxies are bright enough to see in amateur telescopes, so look em up space fans.
Since astronomers have no idea what the upper limit on the size of a black hole can be, expect to hear more about larger and larger black hole discoveries in the future.
Finally, the Swift Space Telescope observed a very strange gamma ray burst. Gamma-ray bursts, or GRBs, are the universe's most luminous explosions, emitting more energy in a few seconds than our sun will during its entire energy-producing lifetime.
This burst, known as GRB 101225A, was discovered in the constellation Andromeda by Swift's Burst Alert Telescope at 1:38 p.m. EST on Dec. 25, 2010.
So you know what's coming next: yep. It's being called the Christmas burst.
The gamma-ray emission lasted at least 28 minutes, which is unusually long. Follow-up observations of the burst's afterglow by the Hubble Space Telescope and ground-based observatories were unable to determine the object's distance.
This gamma ray burst is so unusual, astronomers think it could have happened in one of two ways.
One scenario says that a solitary neutron star minding its own business gets hit by a passing comet which it quickly devours, causing the gamma ray burst.
Another possibility is that a neutron star is engulfed by, spirals into, and merges with an evolved giant star in a distant galaxy.
Both scenarios involve a neutron star causing mayhem.
The team seems to be leaning toward the second scenario.
They're proposing the burst occurred in an exotic binary system where a neutron star orbited a normal star that had just entered its red giant phase, enormously expanding its outer atmosphere. This expansion engulfed the neutron star, resulting in both the ejection of the giant's atmosphere and rapid tightening of the neutron star's orbit.
Once the two stars became wrapped in a common envelope of gas, the neutron star may have merged with the giant's core after just five orbits. The end result of the merger was the birth of a black hole and the production of oppositely directed jets of particles moving at nearly the speed of light, followed by a weak supernova.
The particle jets produced gamma rays. Jet interactions with gas ejected before the merger explain many of the burst's signature oddities. Based on this interpretation, the event took place about 5.5 billion light-years away, and the team has detected what may be a faint galaxy at the right location.
Well, that's it for now Space Fans, as always thank you for watching, and Keep Looking Up.
Kepler's first confirmed earth-sized planet in a habitable zone (Kepler 22B).
http://astrobites.com/2011/12/06/the-news-and-super-earth-kepler-22b/ <---Include stuff from here!
Habitable zone yes, but Kepler 22-b mass and high gravity rules out the possibility of terrestrial life
Comet Falling into Neutron Star
Kepler 21b Confirmed from Kitt Peak:
Strange new species of ultra-red galaxy discovered (might make a better IM):
Two record-breaking black holes found:
E-ELT Gets Funding Approval | <urn:uuid:1e5e1978-8ad0-4632-9592-23c74781d4ad> | 3.078125 | 1,699 | Personal Blog | Science & Tech. | 55.103395 |
Answer: I would guess that over the course of a day and a half, evaporation caused the fluid in the straw to become less dense. Diet soda is less dense than water, and as the water evaporated out of the straw, leaving more concentrated diet soda behind, it would become even less dense than usual. The fluid in the cup would have experienced somewhat less evaporation because it's not directly exposed to air. As a result the comparatively more dense fluid in the cup would push the less dense column of fluid up the straw. Once you take a sip, you have removed the lower density fluid, and the level goes back to normal. BTW, the situation would have been reversed if you'd had regular (non-diet) soda in your cup because sugary soda syrup is denser than water.
BS Physics from the University of Maryland
Former Applications Physicist at the Super Conducting Super Collider (SSC)
Lisa from Michigan | <urn:uuid:8fc03081-202e-4c3e-8460-091a089e3cc2> | 3.078125 | 195 | Q&A Forum | Science & Tech. | 42.452583 |
Arthropods are a large group of animals distinguished by having a chitinous, segmented exoskeleton (outer skeleton) and jointed appendages. Examples of arthropods found in the Pacific Northwest include:
1. Insects: Characterized by 3 pairs of jointed legs and body divided into 3 major segments. Frequently one may see one pair of antennae on the head and many possess wings. Insects are terrestrial and aerial. Only a handful may be found living by choice in marine habitats.
2. Arachnids: Characterized by 4 pairs of jointed legs and body divided into 2 major segments. Antennae are lacking from the head and none possess wings.
3. Crustaceans: Characterized by 5 pairs of large jointed legs (one pair may be pincers) or in some cases one pair of legs per body segment. The body is frequently divided into 2 major areas, with the tail divided further into smaller segments. Crustaceans have 2 pairs of antennae. They are largely aquatic, although some are terrestrial. None have wings.
4. Centipedes: Characterized by a small head with similarly shaped body segments. One pair of jointed legs is found per body segment. One pair of antennae is found on the head.
5. Millipedes: Characterized by a small head with similarly shaped body segments. Two pairs (occasionally one pair) of jointed legs are found per body segment. One pair of antennae is found on the head. | <urn:uuid:ff34d4dd-b82b-4d53-80ab-3fcf9b6bbbc5> | 3.546875 | 317 | Knowledge Article | Science & Tech. | 50.870371 |
Taken from Mars Robotics Lesson Background
Robots: Machines On the Move
The first known use of the term "robot" was by Czech playwright Karel Capek, who in 1920 wrote a play called R.U.R.: Rossum's Universal Robots. Capek used the Czech word "robot," which means "worker" or "laborer," to describe the mechanical slaves that were portrayed in his play. The first publicly-displayed robots were "Elektro" and his trusty mechanical dog, "Sparky," who were highlighted at the 1939 World's Fair Exhibition in New York City. Elektro could dance and recite a handful of words, while Sparky would happily bark alongside him. While robots were a mere curiosity in the late 1930's, they are an integral part of our daily lives today. Some robots are simple, like the automatic sprinkler system in many people's lawns. Others are more complex like the factory robots used to assemble cars or the robotic explorers NASA has sent to Mars and the rest of the solar system. Simple or complex, all robots obey the same principles and are designed using the same process.
Robots in the Real World
Unlike in science fiction, robots in the real world rarely resemble human beings. Walking, while learned naturally by every young child, is a surprisingly difficult skill. Robots, with their less-than-precise sensors and motors, have a great deal more trouble mastering this task. Fortunately, robots rarely need to walk. Many robots never move from the location where they were installed!
Although research is underway to give robots artificial intelligence and "fuzzy logic" capabilities, most real robots do not have the intelligence displayed by the robots of films. In most cases, a high degree of intelligence isn't a requirement for the task the robot must perform. Once taught the steps needed to carry out the job, the robot can simply perform those steps over and over, relying on its human controllers to step in when a problem arises.
Some robots must operate in hazardous environments or in environments where humans cannot directly interact with them. In these cases, it is necessary for the robot to have much more decision-making power so that it can respond to its environment and to unforeseen circumstances. The classic examples of this case are NASA's robotic explorers to Mars and the rest of the solar system. Sending out a repair person simply isn't an option when the machine is over 100 million km (~60 million miles) away!
Taken from Mission Possible
Robotic Exploration of the Solar System
Almost everything that we know about the Universe beyond the solar system was discovered through observations made on the Earth or in space near the Earth. However, much of what we know about the solar system has been discovered by robotic spacecraft sent to make close-up observations of the objects in our planetary system. In fact, we have learned more about the solar system in the last 50 years using robotic spacecraft than from all previous ground-based observations.
After the launch of Sputnik 1 satellite in 1957, which ushered in the Space Age, robotic spacecraft (and in the case of the Moon, human space flight) have been used to study various worlds in the solar system, with at least one spacecraft visiting each planet. In addition, robotic spacecraft have visited moons, asteroids and comets. There are spacecraft currently on their way to examine the dwarf planets Ceres and Pluto, and the spacecraft flying by Pluto may also examine at least one of the Kuiper Belt Objects, which are icy worlds beyond the orbit of Neptune discovered in the last few years. Other spacecraft missions are carrying out more detailed observations of the many different worlds in the solar system, and many more are being planned. While spacecraft are often unique in their detailed design, there are three basic types of missions: flyby, orbital or lander missions.
When planning an exploration of another world, scientists need to consider what kind of information they want to gather. They need to formulate the scientific goals of the mission, and then figure out what is the best way to meet the goals within their budget. If the study cannot be conducted with ground-based observations or telescopes located near the Earth in space, they must consider the extra cost of sending a spacecraft to explore the world by flying by, orbiting or landing on the target world. The exploration gets more complex and expensive as you progress from ground-based observations to a flyby, an orbital and a landing spacecraft mission. Most often, the final mission is a compromise between what the scientists want to find out about their target, and what real-world constraints allow.
The simplest way to explore a world close-up is to have a spacecraft just fly by the body without going into orbit around it or landing on it. A flyby can get much more detailed information on the object than Earth-based observations. However, the spacecraft can only make useful observations of the world while it is nearby, and depending on the trajectory of the spacecraft, the time for observations may be limited and only a small portion of the object facing the spacecraft as it flies by may be viewable. This means that a flyby mission requires a lot of planning to optimize the way the data is gathered. Usually, the details of the planned observations -- which instrument to use at each moment, where to point the instrument, what kind of data to take, etc. -- are stored in a computer program on the spacecraft before the flyby, and the program begins executing automatically at some distance from the target. The gathered data is then sent back to the Earth for analysis after the flyby is concluded.
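The pre-stored observation program described above can be pictured as a time-tagged command list that executes with no round trip to Earth. The sketch below is purely illustrative; the instruments, timings, and actions are invented, not drawn from any real mission.

```python
# Toy time-tagged flyby sequence (all names and times invented).
# The point: commands are stored in advance and fire at preset times
# relative to closest approach, with no ground contact during the flyby.

sequence = [
    # (seconds relative to closest approach, instrument, action)
    (-3600, "camera",       "wide-angle approach mosaic"),
    (-600,  "spectrometer", "surface composition scan"),
    (0,     "camera",       "high-resolution imaging of the lit hemisphere"),
    (600,   "magnetometer", "record field during departure"),
    (3600,  "radio",        "begin replay of stored data to Earth"),
]

for t, instrument, action in sorted(sequence):
    print(f"t={t:+6d} s  {instrument:12s} {action}")
```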
The costs of a robotic flyby mission vary depending on the world that is being explored. Typically the costs involve consideration for the following aspects:
- Designing and building the instruments needed to get the desired science data;
- The power needed to run the spacecraft and its instruments;
- Launching the spacecraft;
- The amount of fuel needed to fly to the world;
- Communications needed between the Earth and the spacecraft;
- Human labor for the scientists and engineers working on the mission;
- The length of the mission.
While a flyby mission is the simplest (and the most likely to be successful) spacecraft mission to explore another world, it usually only offers a snapshot of one part of the world. A more complicated mission, but also one that can offer a more comprehensive science investigation, is an orbital mission, in which the spacecraft goes into an orbit around the target world. The main complication in this kind of mission compared with the flyby is the orbit insertion maneuver: firing the spacecraft's engines to change the trajectory so that the gravity of the target world can "capture" the spacecraft into an orbit around the object (a rough propellant estimate for such a burn is sketched after the list below). An orbital mission can obtain more detailed information than a flyby since it not only will be able to see much more of (if not the entire) world, but it also can spend a longer time making repeated observations of the same area. In addition to the costs described in the context of a flyby mission, the following additional aspects must be considered for a robotic orbital mission:
- Propellant required for the orbit insertion maneuver and for possible orbit correction maneuvers needed later;
- Hardware and software engineering necessary to prepare the spacecraft for the orbit insertion maneuver and for orbital operations;
- Additional instruments that may be desired for a more comprehensive science investigation;
- More involved communications with ground control on the Earth.
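The propellant cost of the insertion burn mentioned above can be roughed out with the Tsiolkovsky rocket equation. The delta-v and engine numbers below are assumptions for illustration, not values from any particular mission.

```python
import math

def propellant_fraction(delta_v, isp, g0=9.81):
    """Fraction of arriving mass that must be propellant.

    Rocket equation: delta_v = isp * g0 * ln(m0 / m_final),
    so m_prop / m0 = 1 - exp(-delta_v / (isp * g0)).
    """
    return 1.0 - math.exp(-delta_v / (isp * g0))

# Assumed: a ~1,000 m/s insertion burn with a 320 s bipropellant engine.
frac = propellant_fraction(delta_v=1000.0, isp=320.0)
print(f"about {frac:.0%} of the arriving mass must be propellant")
# -> roughly 27%, which is why the insertion maneuver drives up cost.
```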
The landing of a spacecraft, or of a probe launched from a flyby or orbiting spacecraft, on another world entails additional complexity over an orbital mission. In addition to flying to the world, the mission must plan for a safe landing of the probe. In some cases, the probe is designed to just crash on the world and provide as much information as possible before the crash, but in most cases careful planning is required to ensure a soft, safe landing on the target world's surface. Spacecraft can be slowed down during descent by firing the engines at precise moments for a predetermined duration, or by using parachutes if the target world has a substantial atmosphere. The spacecraft may also include cushioning (such as air bags) to prevent a jarring landing on the surface. Often, these options are combined to ensure a safe landing. A lander mission is riskier than a flyby or an orbital mission, since there are more chances for something to go wrong. For example, about half of all lander missions sent to Mars have failed for one reason or another. On the other hand, a lander mission can provide much more detailed information on the world than the other kinds of missions, often making the higher risk acceptable. A lander can examine the world's surface features close-up and use tools to burrow underground, drill into rocks, or take samples for analysis within the spacecraft. While most landers are stationary, some have been designed to move around the surface, providing detailed information over a larger area. In addition to the costs of a flyby mission, as well as those of the orbital mission (if the mission includes an orbiting component), a lander mission involves the following additional cost considerations:
- Fuel to slow down the spacecraft for landing;
- Engineering and additional hardware for landing (e.g., parachute, cushioning);
- Software engineering to prepare the spacecraft for landing;
- Engineering necessary to make communications from the surface back to the Earth reliable;
- Additional instruments that may be desired for a more comprehensive science investigation.
Communications with Robotic Missions
Communications with spacecraft studying other worlds are done using radio waves, which travel at the speed of light. As a result, the time between sending a signal to the spacecraft and receiving the response varies from a couple of seconds (for missions exploring the Moon) to several hours (for missions investigating the outer reaches of the solar system.) This delay makes it necessary for the spacecraft to be able to execute many commands on their own, without direct input from ground control on the Earth. Therefore, the computer programs operating robotic spacecraft must be designed carefully. For example, before firing the spacecraft's engines to make a course correction maneuver, a signal is sent from the ground control to the spacecraft to have the computer execute a series of commands to complete the necessary operations, but providing additional commands is usually not possible before the maneuver is completed. Communication with spacecraft is done using large radio antennas on Earth, such as NASA's Deep Space Network, which includes three radio antenna facilities located around the world. The time to use the network must be planned in advance. The cost for using these communication facilities can be several million dollars, depending on the frequency of communications and the amount of data transmitted.
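Those delay figures are easy to check: one-way light time is just distance divided by the speed of light. The distances below are approximate, illustrative values.

```python
C_KM_S = 299_792.458  # speed of light, km/s

distances_km = {
    "Moon":              384_400,
    "Mars (closest)":    54_600_000,
    "Mars (farthest)":   401_000_000,
    "Neptune (approx.)": 4_500_000_000,
}

for world, d in distances_km.items():
    t = d / C_KM_S  # one-way delay in seconds
    print(f"{world:18s} one-way: {t:10.1f} s  ({t / 60:7.1f} min)")
# Moon ~1.3 s; Mars ~3 to 22 min; Neptune ~4 hours one way.
```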
From MarsBound! Mission to the Red Planet
The Design Process
Like the scientific process, the design process is not a simple, linear progression from one step to the next, resulting in a finished product. Although there are steps, the design process is an iterative one: designing, modifying, testing, and designing again until a finished product is made. A central tenet of engineering, however, is that there is no such thing as a "perfect" design. Each design solution has constraints, limitations that are placed on the solution. For example, cost is a common constraint, as is the reliability or the strength of the materials being used. It will almost always turn out, however, that a design that excels in one aspect of the problem to be solved will be poor in another aspect. Making and justifying these trade-offs is a major part of the design process. Keeping in mind that the design process is not as linear as it may appear, here are the steps that are normally identified as being part of the design process:
- Clearly identify the problem, identifying all aspects of the issue. It's not enough to identify the problem in broad terms, for example, "There is too much traffic near our school." The specific aspects of the problem need to be identified. For example, is the traffic moving too fast, are there too many cars on the road, or is there simply poor traffic management and routing? Usually it is the "end consumer" who will specify the problem to be solved, so this is a good opportunity to explore the sociological implications of the technology that result from the design as well!
- Identify the functional requirements the solution must meet. If, in our previous example, the problem is poor traffic management near the school grounds, your functional requirements might include, "Traffic must enter and exit the school area within one minute," and, "It must be easy to pick up and drop off students." The functional requirements should be written so that if they are satisfied, the problem itself will also be satisfied.
- Identify the constraints to the solution. Again using our school traffic example, the possible constraints might be, "All traffic must remain below 15 MPH," or, "Vehicles must not pass closer than 10 m from the school building."
- Design a prototype. This is the step that most people think of as "design" or "engineering", but actually this is just one step in the overall process. The prototype could be a simple concept model (perhaps a drawing on a piece of paper for our school traffic example) or a complete working model (temporary lines painted on the pavement near the school). The goal is to develop something that can be tested to see if it satisfies the functional requirements and constraints. Note that the prototype does not have to satisfy all of the functional requirements. It is perfectly acceptable (and common) to test only one aspect of a complex problem at a time.
- Evaluate the prototype. In this step, the designer must test and evaluate his or her proposed solution. Note that this is more than simply asking, "Does it work?" In this step the designer must instead ask, "How well does it work?" Graphs and charts are a common way to display the results of this test and evaluation process. Continuing with our example, the designers in this case might collect data on how many cars pass near the school, how fast they travel, or how long it takes to load and unload passengers.
- Revise and retest as needed. Based on the data collected in the previous step, the designer can see where the proposed design can be improved or what new trade-offs will have to be made. The engineer then goes back to step four (and sometimes back to step one!) and repeats the process until the design satisfies, as near as possible, all of the functional requirements and constraints.
- Present the final product. Once the design is finished, it must be demonstrated to the "end consumer" who identified the problem in the first place. Ultimately, it is the consumer, the user of the technological solution, who decides if the problem has really been solved. If the consumer is not satisfied, usually the problem has not been well-specified or the consumer may not understand the constraints that must be placed on the solution.
For further information about NASA robots, go to http://www-robotics.jpl.nasa.gov/index.cfm
Far-Ranging Robots Videos
Titles and Introduction
Mars Exploration Missions
Failure is the Mother of Invention
Ways to Land Curiosity
Questions and Answers
Mars Engineering E/PO | <urn:uuid:ceaa2210-1019-4352-b1ff-4aca2ff3e84b> | 3.484375 | 3,120 | Knowledge Article | Science & Tech. | 38.452339 |
ansible basic operating theory explained
Talking on IRC tonight pointed out a gap in the ansible docs. Specifically, they don't explain the dirt-simple nature of how it works.
0. ansible has modules – modules are just executable code/scripts in any language you want – there are only 2 requirements:
a. that whatever language you want to write them in is available on the remote system(s)
b. that the modules return json as their results.
1. ansible connects to a host(or many hosts) using ssh
2. ansible shoves across the module(s) you want to run
3. ansible shoves across the arguments you want to pass to the module(s)
4. ansible runs the modules with the arguments
5. ansible gets back json from the modules and sends it to the calling script/program to be handled and/or displayed.
Now – for a lot of people the only module they really care about is the ‘command’ or ‘shell’ module – which just lets you run a command directly on the system and it returns the results to the calling program. Pretty handy for any number of things. However, you can write a custom module – which is really nothing more than a script that ansible runs remotely. Ansible just handles the communication/execution part to multiple systems at the same time and returns the results back to you, sensibly.
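To make the two module requirements concrete, here is a toy module: just a standalone script that prints JSON. How arguments actually reach a module has varied across ansible versions, so treat the argument handling below as an illustrative assumption, not the exact contract.

```python
#!/usr/bin/env python
# Toy "module": any executable that emits JSON results (requirement b above).
import json
import platform
import sys

def main():
    result = {
        "hostname": platform.node(),
        "platform": platform.system(),
        "args_received": sys.argv[1:],  # assumption: args arrive via argv
        "changed": False,
    }
    print(json.dumps(result))  # ansible parses this JSON on the way back

if __name__ == "__main__":
    main()
```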
So that’s the dead-simple version of what ansible can do.
How do you as an admin wanting to test it out get started?
git clone https://github.com/ansible/ansible.git
echo "somehost-i-have-root-on" > ~/ansible-hosts
If you have a root ssh key setup then you can run:
bin/ansible all -i ~/ansible-hosts -a "uptime"
if you don’t have a root ssh key setup then run:
bin/ansible all -k -i ~/ansible-hosts -a "uptime"
it will prompt you for the root password
Add more hosts to ~/ansible-hosts to talk to more at the same time.
Syndicated 2012-04-11 05:29:40 from journal/notes | <urn:uuid:74cf9d10-c792-4b51-a113-91f38f9f21c6> | 3.046875 | 490 | Comment Section | Software Dev. | 63.252843 |
Science Fair Project Encyclopedia
Prumnopitys is a genus of conifers belonging to the Podocarp family, Podocarpaceae. The eight recognized species of Prumnopitys are densely branched, dioecious evergreen trees up to 40 metres in height. The leaves are similar to those of the yew, strap-shaped, 1-4 cm long and 2-3 mm broad, with a soft texture; they are green above, and with two blue-green stomatal bands below. The seed cones are highly modified, reduced to a central stem 1-5 cm long bearing several scales; from one to five scales are fertile, each with a single seed surrounded by fleshy scale tissue, resembling a drupe. These berry-like cone scales are eaten by birds, which then disperse the seeds in their droppings.
The species are distributed on both sides of the Pacific, in eastern Australia, New Zealand, and New Caledonia, and along the mountain ranges of western South America from Chile to Venezuela and Costa Rica. This distribution indicates Prumnopitys' origins in the Antarctic flora, which evolved from the humid temperate flora of southern Gondwana, an ancient supercontinent.
A Chilean species, the Lleuque, widely known under the name Prumnopitys andina or Podocarpus andinus, has been treated by some botanists as Prumnopitys spicata (Molloy & Muñoz-Schick 1999); however this name is illegitimate (Mill & Quinn 2001).
Several species of Prumnopitys are used for timber.
- Gymnosperm Database: Prumnopitys
- de Laubenfels, D. J. 1988. Coniferales. P. 337-453 in Flora Malesiana, Series I, Vol. 10. Dordrecht: Kluwer Academic.
- Molloy, B. P. J. & Muñoz-Schick, M. 1999. The correct name for the Chilean conifer Lleuque (Podocarpaceae). New Zealand J. Bot. 37: 189–193. Available online (pdf file).
- Mill, R. R. & Quinn, C. J. 2001. Prumnopitys andina reinstated as the correct name for ‘lleuque’, the Chilean conifer recently renamed P. spicata (Podocarpaceae). Taxon 50: 1143 - 1154. Abstract.
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details | <urn:uuid:f562d0f9-935f-4f54-a0e5-6417503a8a86> | 3.359375 | 556 | Knowledge Article | Science & Tech. | 52.119266 |
This shock wave plows through space at over 500,000 kilometers per hour. Moving toward the bottom of this detailed color composite, the thin, braided filaments are actually long ripples in a sheet of glowing gas seen almost edge on. Cataloged as NGC 2736, its narrow shape suggests its popular name, the Pencil Nebula. About 5 light-years long and a mere 800 light-years away, the Pencil Nebula is only a small part of the Vela supernova remnant. The Vela remnant itself is around 100 light-years in diameter and is the expanding debris cloud of a star that was seen to explode about 11,000 years ago. Initially, the shock wave was moving at millions of kilometers per hour but has slowed considerably, sweeping up surrounding interstellar gas. | <urn:uuid:ffdb9d3b-01f9-4e8c-a5f4-be17a64d3719> | 3.40625 | 171 | Knowledge Article | Science & Tech. | 55.862443 |
Hubble’s constant
Hubble’s constant, in cosmology, constant of proportionality in the relation between the velocities of remote galaxies and their distances. It expresses the rate at which the universe is expanding. It is denoted by the symbol H0, where the subscript denotes that the value is measured at the present time, and named in honour of Edwin Hubble, the American astronomer who attempted in 1929 to measure its value. With redshifts of distant galaxies measured by Vesto Slipher, also of the United States, and with his own distance estimates of these galaxies, Hubble established the cosmological velocity-distance law:velocity = H0 × distance.According to this law, known as the Hubble law, the greater the distance of a galaxy, the faster it recedes. Derived from theoretical considerations and confirmed by observations, the velocity-distance law has made secure the concept of an expanding universe. Hubble’s original value for H0 was 150 km (93 miles) per second per 1,000,000 light-years. Modern estimates, using measurements of the cosmic microwave background radiation left over from the big bang, place the value of H0 at between 21.5 and 23.4 km (13.3 and 14.5 miles) per second per 1,000,000 light-years. The reciprocal of Hubble’s constant lies between 13 billion and 14 billion years, and this cosmic time scale serves as an approximate measure of the age of the universe.
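The time-scale claim in the last sentence is a straightforward unit conversion: invert H0 after expressing it in inverse seconds. A minimal sketch:

```python
# Hubble time (1 / H0), with H0 given in the article's units:
# km per second per 1,000,000 light-years.

KM_PER_LY = 9.4607e12    # kilometers in one light-year
SEC_PER_YR = 3.156e7     # seconds in one year

def hubble_time_gyr(h0):
    h0_per_s = h0 / (1e6 * KM_PER_LY)   # convert H0 to 1/s
    return 1.0 / h0_per_s / SEC_PER_YR / 1e9

for h0 in (21.5, 23.4):
    print(f"H0 = {h0} -> 1/H0 ~ {hubble_time_gyr(h0):.1f} billion years")
# -> about 13.9 and 12.8 billion years, close to the quoted 13-14 billion.
```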
| <urn:uuid:6a75ed3d-6e2e-46c2-af6c-4d36e7faf25a> | 3.921875 | 333 | Knowledge Article | Science & Tech. | 51.605294 |
Each rational number can be written in infinitely many forms, for example 3 / 6 = 2 / 4 = 1 / 2. The simplest form is when the numerator a and the denominator b have no common divisors, and every non-zero rational number has exactly one simplest form of this type with positive denominator.
The decimal expansion of a rational number is eventually periodic (in the case of a finite expansion the zeroes which implicitly follow it form the periodic part). The same is true for any other integral base above 1. Conversely, if the expansion of a number for one base is periodic, it is periodic for all bases and the number is rational.
Two rational numbers a/b and c/d are equal if and only if ad = bc
Additive and multiplicative inverses exist in the rational numbers.
Any positive rational number can be expressed as a sum of distinct reciprocals of positive integers.
For any positive rational number, there are infinitely many different such representations. These representations are called Egyptian fractions, because the ancient Egyptians used them. The Egyptians also had a different notation for dyadic fractions. See also Egyptian numerals.
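One standard way to produce such a representation (not the only one, and not prescribed by the text above) is the greedy method attributed to Fibonacci and Sylvester: repeatedly subtract the largest unit fraction that still fits.

```python
# Greedy Egyptian-fraction expansion for a rational 0 < x < 1.
from fractions import Fraction
import math

def egyptian(x: Fraction) -> list:
    assert 0 < x < 1, "kept to proper fractions for simplicity"
    parts = []
    while x:
        n = math.ceil(1 / x)        # smallest denominator with 1/n <= x
        parts.append(Fraction(1, n))
        x -= Fraction(1, n)
    return parts

print(egyptian(Fraction(7, 15)))
# -> [Fraction(1, 3), Fraction(1, 8), Fraction(1, 120)]
```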
To conform to our expectation that 2 / 4 = 1 / 2, we define an equivalence relation ~ upon these pairs with the following rule: (a, b) ~ (c, d) if and only if ad = bc.
This equivalence relation is compatible with the addition and multiplication defined above, and we may define Q to be the quotient set of ~, i.e. we identify two pairs (a, b) and (c, d) if they are equivalent in the above sense. (This construction can be carried out in any integral domain, see quotient field.)
We can also define a total order on Q by writing (a, b) ≤ (c, d) whenever ad ≤ bc (taking the denominators b and d to be positive).
The rationals are the smallest field with characteristic 0: every other field of characteristic 0 contains a copy of Q.
The set of all rational numbers is countable. Since the set of all real numbers is uncountable, we say that almost all real numbers are irrational, in the sense of Lebesgue measure, i.e. the set of rational numbers is a null set.
The rationals are a densely ordered set: between any two rationals, there sits another one, in fact infinitely many other ones.
The rationals are a dense subset of the real numbers: every real number has rational numbers arbitrarily close to it. A related property is that rational numbers are the only numbers with finite expressions of continued fraction.
By virtue of their order, the rationals carry an order topology. The rational numbers also carry a subspace topology, as a subset of the real numbers. The rational numbers form a metric space by using the metric d(x, y) = |x - y|, and this yields a third topology on Q. All three topologies coincide and turn the rationals into a topological field. The rational numbers are an important example of a space which is not locally compact. The rationals are characterized topologically as the unique countable metric space without isolated points. The space is also totally disconnected. The rational numbers do not form a complete metric space; the real numbers are the completion of Q.
In addition to the absolute value metric mentioned above, there are other metrics which turn Q into a topological field. For each prime number p and any non-zero integer a, let |a|_p = p^(-n), where p^n is the highest power of p dividing a; in addition write |0|_p = 0. For any rational number a/b, we set |a/b|_p = |a|_p / |b|_p.
Then d_p(x, y) = |x - y|_p defines a metric on Q.
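A short sketch of this p-adic absolute value in exact rational arithmetic:

```python
from fractions import Fraction

def padic_abs(x: Fraction, p: int) -> Fraction:
    """|x|_p = p**(-n), where n is the exponent of p in x; |0|_p = 0."""
    if x == 0:
        return Fraction(0)
    n = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:   # count factors of p in the numerator
        num //= p
        n += 1
    while den % p == 0:   # factors of p in the denominator count negatively
        den //= p
        n -= 1
    return Fraction(1, p) ** n

# d_p(x, y) = |x - y|_p: 1 and 26 are "close" 5-adically.
print(padic_abs(Fraction(1) - Fraction(26), 5))   # -> 1/25
```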
The metric space (Q, d_p) is not complete, and its completion is the p-adic number field Q_p. | <urn:uuid:f1d8c476-935f-4f54-a0e5-6417503a8a80> | 3.84375 | 1,084 | Knowledge Article | Science & Tech. | 39.98552 |
Proteins, Amino Acid, Peptide, Peptide bond, Polypeptide??? IM CONFUSED?
Hey guys, I'm studying for my bio exam and I'm just so confused between all of the terms mentioned in the questions; I'm having a hard time understanding which one is part of which. I would REALLY APPRECIATE it if you guys could help me understand this stuff... it's just so vague and confusing... so please explain in a way that is crystal clear. Please help!
Re: Proteins, Amino Acid, Peptide, Peptide bond, Polypeptide??? IM CONFUSED?
Amino acid :- Amino acids are molecules containing an amine group, a carboxylic acid group and a side chain that varies between different amino acids: NH2-CH(R)-COOH (where R is the side chain). There are 20 naturally occurring amino acids, and they polymerize to give peptides, polypeptides and proteins.
Peptide Bond :- A peptide bond is a covalent bond that is formed between two amino acids when the carboxyl group of one amino acid reacts with the amino group of another amino acid. Simply [O=C-NH], while an amide group is [O=C-NH2].
Peptide :- Peptides are short polymers of amino acids linked by peptide bonds. There is some disagreement about the exact cutoff, but chains of up to about 50 amino acids are usually called peptides or polypeptides; if the number of amino acids is more than 50, the molecule is called a protein.
Polypeptide is essentially the same as peptide; some chemists use it specifically for chains having more than 2 but fewer than 50 amino acids.
Protein :- A chain having more than about 50 amino acids linked by peptide bonds.
Go to this link for more details.
| <urn:uuid:b365ee4c-ac1a-43ba-8d03-22c7f6dc6000> | 2.8125 | 706 | Comment Section | Science & Tech. | 55.84189 |
THE CASE OF THE FUEHRER'S TUNIC
The Evidence: Some years ago, a collector in San Diego, California, paid top dollar for a Nazi tunic whose label stated that it had belonged to Adolf Hitler. Then, concerned about its authenticity, he asked Palenik to examine it.
The Science: Palenik removed two kinds of fiber from the label. Viewing the first type under the polarized light microscope, he identified it as polyester (bottom left). The other fiber was cotton. When Palenik subjected it to the fluorescence microscope, he observed that it had been treated with an optical brightener, a substance that is applied to fabrics to make them look especially white (bottom right). The hitch: Neither polyester nor optical brighteners were available until after World War II. The collector sued; the case was settled shortly before it was scheduled to go to trial.
| <urn:uuid:3faa7b35-05f1-4603-a734-f7a2d1623767> | 2.9375 | 239 | Truncated | Science & Tech. | 41.722365 |