FINAL LANDING: Space shuttle Atlantis is seen in this still from a video camera on the exterior of the International Space Station after the two spacecraft undocked on July 19, 2011, during the final shuttle mission STS-135. It was the last time a NASA shuttle cast off from the orbiting lab. Image: NASA TV
NASA's space shuttles have racked up an amazing set of accomplishments over the last 30 years, not to mention the miles and statistics. But after three decades and 135 flights, the era of the NASA space shuttle is at an end.
The final shuttle flight, NASA's STS-135 mission aboard Atlantis, will land Thursday (July 21) to cap a 13-day trip that delivered supplies and spare parts to the International Space Station. [Photos: Shuttle Bids Farewell to Space Station]
With NASA's reusable space plane fleet retiring, here is a by-the-numbers look back at the iconic 30-year program:
$209 Billion: The estimated total cost of NASA's 30-year space shuttle program from development through its retirement.
3,513,638: The weight in pounds of cargo that NASA's space shuttles have launched into orbit. That's more than half the payload weight of every single space launch in history since 1957 combined.
229,132: The amount of cargo (in pounds) that NASA's shuttles have returned to Earth from space through 2010.
198,728.5: The number of man-hours NASA shuttles spent in space during their 30-year history. That's about 8,280 days of manned spaceflight, NASA officials said.
20,830: The number of orbits of Earth completed by NASA shuttles before the last 13-day mission of Atlantis during the STS-135 flight. Atlantis will add another 200 orbits to that tally.
3,000: The peak temperature (in degrees Fahrenheit) experienced by NASA shuttles during the hottest moments of atmospheric re-entry.
1,323: Number of days spent in space during NASA shuttle flights between April 1981 and July 2011. That includes the 13 days of the final shuttle flight, as well as the 31,440 hours, 59 minutes and 33 seconds logged across the other 134 missions.
833: The total number of crewmembers of all 135 space shuttle missions, with some individuals riding multiple times and 14 astronauts killed during the Challenger and Columbia accidents.
789: The number of astronauts and cosmonauts who have returned to Earth on a NASA shuttle. Some spaceflyers actually launched into orbit on Russian Soyuz vehicles and returned home on a shuttle.
355: The actual number of individual astronauts and cosmonauts who have flown on the space shuttle. That breaks down to 306 men and 49 women hailing from 16 different countries.
234: The total number of days space shuttle astronauts spent at the International Space Station between 1998 and 2011, the construction phase of the orbiting laboratory.
180: The total number of satellites and other payloads, including components for the International Space Station, deployed by NASA space shuttles.
135: Total number of NASA space shuttle missions that will have flown between 1981 and 2011. NASA added the prefix of "STS" (Space Transportation System) to each shuttle mission. Of the 135 missions, 133 flights went as planned, with two ending in disaster. [Most Memorable Shuttle Missions]
52: The total number of satellites, space station components and other payloads returned from orbit on NASA shuttle missions.
37: The number of times a NASA shuttle has docked at the International Space Station during the outpost's lifetime.
A Quick Introduction to UML Sequence Diagrams
This UML sequence diagram tutorial introduces the commonly used elements of UML sequence diagrams and explains how to use them.
All diagrams in this guide were created with Trace Modeler, an easy-to-use and smart editor for UML sequence diagrams developed by the author. Check out the 30-second demo to see how easy it really is.
UML Sequence Diagrams
UML sequence diagrams are used to show how objects interact in a given situation. An important characteristic of a sequence diagram is that time passes from top to bottom: the interaction starts near the top of the diagram and ends at the bottom (i.e. Lower equals Later).
A popular use for them is to document the dynamics in an object-oriented system. For each key collaboration, diagrams are created that show how objects interact in various representative scenarios for that collaboration.
The UML sequence diagram gallery contains many examples, but here's a typical sequence diagram based on a system use case scenario:
The diagram above shows how objects interact in the "rent item" collaboration when the item is not available during the requested period.
To clarify how execution switches from one object to another, a blue highlight was added to represent the flow of control. Note that this highlight is not part of the diagram itself.
As with all UML diagrams, comments are shown in a rectangle with a folded-over corner:
To relate the comment to whatever diagram elements it is about, connect them with dashed lines.
Objects as well as classes can be targets on a sequence diagram, which means that messages can be sent to them. A target is displayed as a rectangle with some text in it. Below the target, its lifeline extends for as long as the target exists. The lifeline is displayed as a vertical dashed line.
The basic notation for an object is
Where 'name' is the name of the object in the context of the diagram and 'Type' indicates the type of which the object is an instance. Note that the object doesn't have to be a direct instance of Type; an indirect instance is possible too, so 'Type' can even be an abstract type.
Both name and type are optional, but at least one of them should be present. Some examples:
As with any UML element, you can add a stereotype to a target. Some often-used stereotypes for objects are «actor», «boundary», «control», «entity» and «database». They can be displayed with icons as well:
An object should be named only if at least one of the following applies:
- You want to refer to it during the interaction as a message parameter or return value
- You don't mention its type
- There are other anonymous objects of the same type and giving them names is the only way to differentiate them
Try to avoid long but non-descriptive names when you're also specifying the type of the object (e.g. don't use 'aStudent' for an instance of type Student). A shorter name carries the same amount of information and doesn't clutter the diagram (e.g. use 's' instead).
When you want to show how a client interacts with the elements of a collection, you can use a multiobject. Its basic notation is
Again, a name and/or type can be specified. Note however that the 'Type' part designates the type of the elements and not the type of the collection itself.
The basic notation for a class is
Only class messages (e.g. shared or static methods in some programming languages) can be sent to a class. Note that the text of a class is not underlined, which is how you can distinguish it from an object.
When a target sends a message to another target, it is shown as an arrow between their lifelines. The arrow originates at the sender and ends at the receiver. Near the arrow, the name and parameters of the message are shown.
A synchronous message is used when the sender waits until the receiver has finished processing the message, only then does the caller continue (i.e. a blocking call). Most method calls in object-oriented programming languages are synchronous. A closed and filled arrowhead signifies that the message is sent synchronously.
The white rectangles on a lifeline are called activations and indicate that an object is responding to a message. An activation starts when the message is received and ends when the object is done handling the message.
When messages are used to represent method calls, each activation corresponds to the period during which an activation record for the call is present on the call stack.
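In code, a synchronous message is just an ordinary blocking call: the sender's activation stays on the call stack while the receiver's activation sits on top of it. A minimal sketch (the OrderService/OrderRepository names are invented for illustration, not taken from the tutorial):

```python
class OrderRepository:
    """Hypothetical receiver of the save() message."""
    def __init__(self):
        self._orders = []

    def save(self, item):
        # The receiver's activation: it lasts exactly as long as this
        # call's activation record is present on the call stack.
        self._orders.append(item)
        return len(self._orders)

class OrderService:
    """Hypothetical sender."""
    def __init__(self, repo):
        self.repo = repo

    def place_order(self, item):
        # Synchronous message: execution blocks here until save() returns;
        # the (optional) return arrow then carries order_id back.
        order_id = self.repo.save(item)
        return order_id

service = OrderService(OrderRepository())
print(service.place_order("book"))  # -> 1
```

The filled arrowhead in the diagram corresponds to the fact that `place_order` cannot continue until `save` has finished.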
If you want to show that the receiver has finished processing the message and returns control to the sender, draw a dashed arrow from receiver to sender. Optionally, a value that the receiver returns to the sender can be placed near the return arrow.
If you want your diagrams to be easy to read, only show the return arrow if a value is returned. Otherwise, hide it.
Messages are often considered to be instantaneous, i.e. the time it takes to arrive at the receiver is negligible. For example, an in-process method call. Such messages are drawn as a horizontal arrow.
Sometimes, however, it takes a considerable amount of time to reach the receiver (relatively speaking, of course). For example, a message across a network. Such a non-instantaneous message is drawn as a slanted arrow.
You should only use a slanted arrow if you really want to emphasize that a message travels over a relatively slow communication channel (and perhaps want to make a statement about the possible delay). Otherwise, stick with a horizontal arrow.
A found message is a message of which the caller is not shown. Depending on the context, this could mean that either the sender is not known, or that it is not important who the sender was. The arrow of a found message originates from a filled circle.
With an asynchronous message, the sender does not wait for the receiver to finish processing the message; it continues immediately. Messages sent to a receiver in another process or calls that start a new thread are examples of asynchronous messages. An open arrowhead is used to indicate that a message is sent asynchronously.
A small note on the use of asynchronous messages: once the message is received, both sender and receiver are working simultaneously. However, showing two simultaneous flows of control on one diagram is difficult. Usually authors show only one of them, or show one after the other.
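In most languages, an asynchronous message corresponds to starting a thread or posting to another process. A hedged sketch using Python's threading module (the send_email function is an invented example):

```python
import queue
import threading

results = queue.Queue()

def send_email(address):
    # hypothetical receiver: it processes the message on its own thread
    results.put(f"sent to {address}")

# Asynchronous message: start() returns immediately, so the sender keeps
# running while the receiver works (the open-arrowhead case).
worker = threading.Thread(target=send_email, args=("a@example.com",))
worker.start()

# ... the sender would continue with other work here ...

worker.join()  # re-synchronize only because we want to read the result
print(results.get())  # -> sent to a@example.com
```

After `start()` there really are two simultaneous flows of control, which is exactly why such interactions are awkward to show on a single diagram.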
Message to self
A message that an object sends itself can be shown as follows:
Keep in mind that the purpose of a sequence diagram is to show the interaction between objects, so think twice about every self message you put on a diagram.
Creation and destruction
Targets that exist at the start of an interaction are placed at the top of the diagram. Any targets that are created during the interaction are placed further down the diagram, at their time of creation.
A target's lifeline extends as long as the target exists. If the target is destroyed during the interaction, the lifeline ends at that point in time with a big cross.
A message can include a guard, which signifies that the message is only sent if a certain condition is met. The guard is simply that condition between brackets.
If you want to show that several messages are conditionally sent under the same guard, you'll have to use an 'opt' combined fragment. The combined fragment is shown as a large rectangle with an 'opt' operator plus a guard, and contains all the conditional messages under that guard.
A guarded message or 'opt' combined fragment is somewhat similar to the if-construct in a programming language.
If you want to show several alternative interactions, use an 'alt' combined fragment. The combined fragment contains an operand for each alternative. Each alternative has a guard and contains the interaction that occurs when the condition for that guard is met.
At most one of the operands can occur. An 'alt' combined fragment is similar to nested if-then-else and switch/case constructs in programming languages.
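As a rough programming analogue (all names invented for illustration), a guarded message maps to a plain if, an 'opt' fragment to an if wrapping several statements, and an 'alt' fragment to if/else where exactly one branch runs:

```python
def apply_discount(amount):
    return amount * 0.9

def checkout(cart, is_member):
    total = sum(cart.values())
    # 'opt' combined fragment: the discount message is only sent
    # when the guard [is_member] holds
    if is_member:
        total = apply_discount(total)
    # 'alt' combined fragment: one operand per alternative, and at
    # most one of them occurs
    if total >= 100:        # [total >= 100]
        shipping = 0
    else:                   # [else]
        shipping = 10
    return total + shipping

print(checkout({"book": 100}, is_member=False))  # -> 100
print(checkout({"book": 50}, is_member=True))    # -> 55.0
```

The guard text in brackets on the diagram plays the same role as the condition of the if-statement.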
When a message is prefixed with an asterisk (the '*'-symbol), it means that the message is sent repeatedly. A guard indicates the condition that determines whether or not the message should be sent (again). As long as the condition holds, the message is repeated.
The above interaction of repeatedly sending the same message to the same object is not very useful, unless you need to document some kind of polling scenario.
A more common use of repetition is sending the same message to different elements in a collection. In such a scenario, the receiver of the repeated message is a multiobject and the guard indicates the condition that controls the repetition.
This corresponds to an iteration over the elements in the collection, where each element receives the message. For each element, the condition is evaluated before the message is sent. Usually though, the condition is used as a filter that selects elements from the collection (e.g. 'all', 'adults', 'new customers' as filters for a collection of Person objects). Only elements selected by the filter will receive the message.
If you want to show that multiple messages are sent in the same iteration, a 'loop' combined fragment can be used. The operator of the combined fragment is 'loop' and the guard represents the condition to control the repetition.
Again, if the receiver of a repeated message is a collection, the condition is generally used to specify a filter for the elements.
For example, to show that the bounds of a drawing are based on those of its visible figures, we could draw the following sequence diagram:
Several things are worth noting in this example
- a local variable 'r' was introduced to clarify that it is the result of getBounds that is added.
- naming the resulting Rectangle 'bounds' avoids the introduction of an extra local variable.
- the loop condition is used as a filter on the elements of the figures collection.
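A hypothetical code rendering of this interaction might look as follows; the Figure and Drawing classes and the union helper are assumptions, not taken from the article, but they mirror the local variable 'r', the 'bounds' result and the [visible] filter discussed above:

```python
class Figure:
    def __init__(self, x, y, w, h, visible=True):
        self.visible = visible
        self._rect = (x, y, x + w, y + h)

    def get_bounds(self):
        return self._rect

def union(a, b):
    # smallest rectangle containing both a and b
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

class Drawing:
    def __init__(self, figures):
        self.figures = figures

    def get_bounds(self):
        bounds = None
        # 'loop' fragment with guard [visible]: only the visible
        # figures receive the getBounds message
        for fig in self.figures:
            if fig.visible:
                r = fig.get_bounds()  # the local variable 'r' from the text
                bounds = r if bounds is None else union(bounds, r)
        return bounds

d = Drawing([Figure(0, 0, 10, 10),
             Figure(5, 5, 20, 20),
             Figure(100, 100, 5, 5, visible=False)])
print(d.get_bounds())  # -> (0, 0, 25, 25)
```

Note how the invisible figure is filtered out by the guard, exactly as a filter condition on a multiobject would do on the diagram.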
Of all the UML diagram types, the sequence diagram is the one where it matters most to choose the right tool for the job. The reason is that you have very little freedom when it comes to positioning elements on a sequence diagram:
- some elements must be placed in a certain region
- some elements must surround others
- many elements are interconnected
- most elements have a fixed orientation
- the grid-like structure practically demands a uniform spacing
- there are plenty of opportunities for elements to overlap in a bad way
You really need a tool that was designed with sequence diagrams in mind. Don't even think about using a general-purpose drawing tool; you'll waste hours connecting, resizing and laying out shapes.
With that many constraints you would think that current tools take care of the layout for you, right? Think again.
Most UML-based CASE tools offer only basic support for sequence diagrams and have low usability. Although they're an improvement over general-purpose editors, they offer little assistance when it comes to layout issues and you'll still waste a lot of time moving elements around.
When you evaluate a tool, find out how it reacts when you change an existing diagram. Add stuff, move elements around and look at the resulting diagram. Is it still a visually pleasing diagram, or do you have to step in and manually redo the layout?
A checklist of things to try when you're evaluating a tool for sequence diagrams:
Add a new message.
Did you have to connect arrows and activations by hand?
Move a target all the way to the left or right.
Are the message arrows still connected to the correct side of their activations?
Insert a new message in the middle of the diagram.
Were the existing elements below it automatically moved down to make space for it?
Change the receiver of a message.
Did the activations and arrows adjust themselves accordingly? Even when you made it into a message to self?
Move a message or an activation up or down.
Were they and the elements around them adjusted accordingly or did you have to do that yourself?
Move a message in and out of a combined fragment.
Was the fragment automatically resized and moved in correspondence to its contents?
Scroll down so that the targets are no longer visible.
Does the tool provide any hints as to which lifeline belongs to which target?
Change the name of a message.
Did you edit it in-place on the diagram itself?
Pick the tool with the best automatic layout features; it will save you an enormous amount of time.
I never quite found what I was looking for and ended up creating my own. Trace Modeler supports all of the above and has some really neat layout features and benefits that I haven't seen anywhere else. I invite you to watch the demo and then try it yourself!
This quick introduction discussed only a few of the possible constructs you may encounter on a UML sequence diagram. A forthcoming article will discuss when and how to use sequence diagrams in projects. Look for it in the articles section or subscribe to the news feed to receive notification when new articles and content are published.
As you set off to find out all there is to know about UML sequence diagrams, bear in mind that it is more important to know when and how to use them, than to know every possible construct in the UML specification.
For practical tips on how to improve your own diagrams, have a look at the Pimp my diagram series of articles. Each episode discusses and improves a real-world sequence diagram example found on the web.
Be sure to take a peek in the gallery section, it contains many examples of sequence diagrams.
When you've reached this point in the text, you should be able to understand most sequence diagrams. You're also a very persistent reader! I leave you with a final thought: the subject of a sequence diagram doesn't have to be an interaction in a software system; any kind of interaction will do. For example, take a close look at the following diagram
Brief blurbs about recent arthropod news and research:
- The blue crab, Callinectes sapidus, has been found in England for the second time ever. These ill-tempered, but delicious, swimming crabs are native to North America, where they represent a major marine fishery despite serious conservation concerns. Previously, blue crabs have turned up in Japan and the Mediterranean. It is conventionally thought that these crabs were brought in as larvae in ship ballast water and have since gained a foothold in their new homes. It is possible that this blue crab in Cornwall also came over from America in ballast water, or it could have been carried on ocean currents up from the Mediterranean population. It is unclear whether this is an isolated individual or a representative of a new invasive population.
- You will be disappointed to learn that the horny females I referred to in the title are dung beetles. One usually associates the growth of horns and antlers with males, who use them to battle for dominance in a social hierarchy or for their pick of the choicest females. However, female dung beetles, Onthophagus sagittarius, are known to have much more impressive horns than their male counterparts. A new study suggests that these horns are used by the females to compete over reproductive resources (i.e. poop). Size-matched females with larger horns were found to achieve greater reproductive fitness, making horn size a positively selected female secondary sex characteristic in these beetles. (Via 80Beats)
- New research reports the development of synthetic superhydrophobic materials inspired by tiny, water-repellent hairs in insects. These hairs are found on the legs of water walkers and the backs of Stenocarid beetles, which use the hairs to channel water droplets to their mouths.
- The genomes of the malaria mosquito, Anopheles gambiae, and the yellow fever mosquito, Aedes aegypti, were published in 2002 and 2006, respectively. These sequencing efforts appear to be bearing a lot of fruit of late, as several genetic approaches to controlling the spread of mosquito-vectored diseases have been proposed. These include increasing the immunity of mosquitoes to the dengue fever virus, weakening mosquitoes by preventing waste secretion, and preventing female mosquitoes from developing functioning flight structures. Some of these ideas are pretty far from real-world application, unfortunately, and the buzz surrounding them seems to be the result of overly excitable university PR departments.
A process for proving mathematical statements involving members of an ordered set (possibly infinite). There are various formulations of the principle of induction. For example, by the principle of finite induction, to prove a statement P(i) is true for all integers i ≥ i0, it suffices to prove that
(a) P(i0) is true;
(b) for all k ≥ i0, the assumption that P(k) is true (the induction hypothesis) implies the truth of P(k+1).
(a) is called the basis of the proof, (b) is the induction step.
Generalizations are possible. Other forms of induction permit the induction step to assume the truth of P(k) and also that of
P(k-1), P(k-2), ..., P(k-i)
for suitable i. Statements of several variables can also be considered.
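As a worked example of the schema above, here is the standard induction proof that the first n positive integers sum to n(n+1)/2:

```latex
\textbf{Claim.} $P(n):\ \sum_{i=1}^{n} i = \frac{n(n+1)}{2}$ holds for all integers $n \ge 1$.

\textbf{Basis.} $P(1)$ is true, since $\sum_{i=1}^{1} i = 1 = \frac{1 \cdot 2}{2}$.

\textbf{Induction step.} Assume $P(k)$ is true for some $k \ge 1$ (the induction hypothesis). Then
\[
  \sum_{i=1}^{k+1} i \;=\; \sum_{i=1}^{k} i + (k+1)
  \;=\; \frac{k(k+1)}{2} + (k+1)
  \;=\; \frac{(k+1)(k+2)}{2},
\]
which is exactly $P(k+1)$. Hence, by the principle of finite induction (with $i_0 = 1$), $P(n)$ is true for all $n \ge 1$.
```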
How do fish separate oxygen from H20 & consume it? Do they break the water molecule and absorb the oxygen only?
The answer to this, I reckon, is that they don't.
They use molecular oxygen (O2) dissolved in the water for respiration, where it acts as a terminal electron acceptor, just as we use molecular oxygen in the air for respiration. We can speak of the water as being oxygenated.
It is water that is split in photosynthesis, where reducing equivalents from water are used to reduce NADP+ (giving NADPH).
One of the great discoveries of biology, IMO, is that the oxygen formed in green-plant photosynthesis comes from water, not CO2.
Tricarboxylic Acid Cycle (Krebs Cycle) Rant
Despite claims to the contrary, most infamously by Racker (1976, pp 28 - 29) and Wieser (1980), but also by Madeira (1988) and Mego (1986) for example, water is not split in the tricarboxylic acid cycle (Krebs Cycle). Banfalvi (1991) also sails pretty close to the wind on this one.
That is, reducing equivalents from water are not passed down the respiratory chain, or in any way used to make ATP, or are in any way a 'source' of free energy. Such claims, IMO, are nonsense.
The definitive answers to the Wieser (1980) paper are given by Atkinson (1981) and Herreros & Garcia-Sancho (1981). Both of these articles are models of clarity, and categorically refute the claims of Wieser (1980). Nevertheless, as shown by the references above, the controversy surfaces periodically.
The only source of reducing equivalents in the TCA cycle are carbon compounds, and the only electrons passed down the respiratory chain are those 'held' in C-H and C-C bonds (Herreros & Garcia-Sancho, 1981). An ionization is neither an oxidation nor a reduction (see Atkinson, 1981) and neither is a hydration. Adding water to (say) a double bond does not make the compound any more oxidized or reduced. As far as oxygen and electrons are concerned, and to generalize from a biological point of view, what is has it holds - except in photosynthesis.
As you may have guessed, the splitting of the water in the TCA cycle is a pet rant of mine. Thanks for the opportunity of airing my views!
Found this one when searching Google (Brière et al, 2006). In an invited review for the American Journal of Physiology (Cell Physiology) at that!
So now the TCA cycle is producing oxygen from water. Wonders will never cease!
In air-saturated buffer at 25 °C the concentration of oxygen (O2 molecules) is about 0.24 mM (0.24 µmoles/ml, or about 0.474 µg-atoms of oxygen per ml) [Chappell (1964)]. This figure decreases with increasing temperature.
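The two units quoted are consistent once you remember that each O2 molecule carries two oxygen atoms; a quick sanity check on the Chappell figures:

```python
# figures from the Chappell (1964) citation above
o2_mM = 0.24             # mmol O2 per litre, numerically µmol O2 per ml
ug_atoms_per_ml = 0.474  # quoted µg-atoms of oxygen per ml

# each O2 molecule contains two oxygen atoms, so the two figures
# should differ by a factor of two
implied_o2_mM = ug_atoms_per_ml / 2
print(round(implied_o2_mM, 3))  # -> 0.237, i.e. "about 0.24 mM"
```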
Great question, BTW.
(Apologies for the incomplete Atkinson and Herreros & Garcia-Sancho references. I have a photocopy of these papers but have been unable to trace the full source. They are both in the 'Letters to the Editor' section of the February 1981 edition of Trends in Biochemical Sciences. They do not appear to be in Pubmed, or anywhere else on-line. Has anyone ever seen these references quoted, or can provide me with a full source? I'll update if I find anything)
we cannot See the whole universe.
light unable to cross in age of universe
there are multiple reasons for this (maybe i’m missing some):
1. given the speed of light, in all likelihood (we can’t tell) the diameter of the universe is larger than the distance light can travel in the age of the universe; so that light has yet to reach us, and
2. given that the universe is expanding (so that all objects appear to be receding at a speed proportional to their distance when observed from any point in the universe), after a certain distance (called the hubble distance, which varies), the rate of expansion of distance is the same as or more than the speed of light: so that that light will NEVER reach us.
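as a rough back-of-the-envelope check (the hubble constant value below is an assumption, roughly 70 km/s per megaparsec, not a figure from this post), the hubble distance is just the speed of light divided by the hubble constant:

```python
# rough sketch; H0 is an assumed approximate value
c = 299_792.458        # speed of light, km/s
H0 = 70.0              # hubble constant, km/s per megaparsec (approximate)
LY_PER_MPC = 3.2616e6  # light-years per megaparsec

d_mpc = c / H0                    # distance where recession speed reaches c
d_gly = d_mpc * LY_PER_MPC / 1e9  # same distance in billions of light-years
print(round(d_gly, 1))            # -> 14.0
```

so light emitted today from anything much beyond ~14 billion light-years (for this choice of H0) will never reach us.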
accelerating a mass to the speed of light takes an Infinite Amount of Energy.
the graph of the amount of energy needed to accelerate from rest to a given speed (energy on Y-axis, speed on X-axis) is hyperbolic/asymptotic: the proportions are rather consistent until you get to relativistic speeds (speeds that are a significant fraction of the speed of light), where the required energy jumps to near infinity in only a small change in speed. because c (the speed of light) is the asymptote, the energy needed to reach it is infinite. on the other side, tachyons (which may or may not exist) share the same traits in reverse: a tachyon requires infinite energy to *DE*celerate to the speed of light, and accelerates as it loses energy.
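a quick sketch of that asymptotic curve, using the standard relativistic kinetic energy formula E = (gamma - 1) m c^2 (the formula is textbook physics, not something specific to this post):

```python
import math

def kinetic_energy_joules(mass_kg, beta):
    """Relativistic kinetic energy E = (gamma - 1) * m * c^2,
    where beta is speed as a fraction of c (0 <= beta < 1)."""
    c = 299_792_458.0  # speed of light, m/s
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * mass_kg * c**2

# energy needed to take 1 kg from rest to various fractions of c:
for beta in (0.1, 0.9, 0.99, 0.999, 0.999999):
    print(f"{beta}: {kinetic_energy_joules(1.0, beta):.3e} J")
# the curve blows up as beta -> 1; at beta = 1 exactly, gamma (and
# therefore the required energy) would be infinite
```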
relativist poetry time!:
there once was a woman named Bright,
who could travel faster than light.
she went out one day,
in a relative way,
and returned on the previous night.
there are more than 3 States of Matter.
we usually think of 3 forms of a material determined by pressure and temperature: solid, gas, and liquid (and plasma).
yet there are many more, and i read about ones i’ve never heard of all the time.
SOME of the major ones:
Superfluid: exists near absolute zero; flows, but has no viscosity, despite surface tension. will flow up and around containers to form a thin film.
Supersolid: exists near absolute zero; like superfluids, but non-flowing. exhibits no friction.
Bose-Einstein Condensate: exists near absolute zero; all component particles/atoms have the same quantum state and are thus indistinguishable in ALL ways.
Quark-Gluon Plasma: matter at such high temperatures that the component atomic particles split into barely associated quarks.
supercritical fluid: high temperature/pressure (but MUCH closer to home than any of the above); where it is impossible to distinguish liquid from gas.
http://en.wikipedia.org/wiki/State_of_matter (links to specific states in article)
blue shift or blueshift, in astronomy, the systematic displacement of individual lines in the spectrum of a celestial object toward the blue, or shorter wavelength, end of the visible spectrum. The amount of displacement is a function of the object's relative velocity toward the observer. Most observed blue shifts are the result of the Doppler effect. The blue shift is the opposite of the red shift. Blue shifted celestial bodies are quite rare. Of the billions of known galaxies, only about 100, including the Andromeda galaxy, are blue shifted.
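A small illustrative calculation (not part of the encyclopedia entry): for radial velocities much smaller than c, the Doppler shift is approximately Δλ/λ = v/c, with negative v (an approaching source) giving a blue shift. The 300 km/s figure below is an assumption for illustration, roughly the order of Andromeda's approach speed:

```python
def observed_wavelength(rest_nm, v_radial_km_s):
    # non-relativistic Doppler approximation: delta-lambda / lambda = v/c;
    # negative v (source approaching) shifts lines toward the blue
    c = 299_792.458  # speed of light, km/s
    return rest_nm * (1.0 + v_radial_km_s / c)

H_ALPHA = 656.28  # rest wavelength of the hydrogen-alpha line, nm

# a source approaching at 300 km/s (illustrative value)
print(round(observed_wavelength(H_ALPHA, -300.0), 2))  # -> 655.62
```

The observed line lands at a shorter wavelength than its rest value, which is exactly the displacement toward the blue end of the spectrum described above.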
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
The big chill of ocean warming
“I have very bad news for you. Are you man enough to take it?”
“God, no!” screamed Yossarian. “I’ll go right to pieces.”
—Joseph Heller, Catch-22
In 2001, Ian Walker, a 40-year-old associate professor of geography at the University of Victoria, began walking the desolate, kelp-strewn beach south of Rose Spit, the northeasternmost tip of Haida Gwaii. And each year that followed, he returned. An expert in coastal erosion, he’d look at 1990s Geological Survey of Canada air photos of the place and look at the modern shoreline bluffs and feel amazed.
In places, 30 metres of land had disappeared in a year. Could it be connected, he asked himself, to anecdotal reports from local Haida that North Pacific storms were getting worse? Or that the sea level was rising? The latest predictions were a rise of one to two metres this century. If these things were so, what did it mean for the more than 300,000 people who live below sea level and behind dikes in Richmond and Delta?
The news gets worse.
In 2010, Rob Saunders, long-time CEO of Qualicum Bay’s Island Scallops, set out 12 billion young scallop larvae to be nourished in the Strait of Georgia near Nanaimo. But as the weeks passed, 99.95 percent of them died. “It was catastrophic!” he says today. He suspected a biological cause: a toxin or disease. But chemical testing revealed that the ocean was far more acidic—and far more saline—than ever recorded. It was the water itself that was lethal.
Up and down the coast, others in the $38-million-a-year B.C. shellfish industry were seeing the same thing. Saunders didn’t fully appreciate then that ocean acidity is increasing exponentially worldwide. Or that this acidity most affects the sea’s smallest creatures—larvae, phytoplankton, zooplankton, and krill—the very animals that sustain the entire marine food chain.
The news gets worse.
The story you are about to hear is complex, and has mostly been submerged by the discussion about atmospheric carbon dioxide and global warming. It has only been in the past few years that scientists have begun to grasp how the world's dramatically warming and acidifying oceans—covering roughly seven-tenths of the planet—may have far more influence on the near future than previously understood. Virtually all local marine scientists say the latest data reveals something ominous happening. But the forces are many, the data thin. And research funding is limited. The metaphor I hear most frequently from experts is the proverbial "elephant in the room" image: something so big, so hard to measure, and so unpredictable that it's difficult to discern the outcome.
What, for example, does it mean for this region if—as scientists now report—the glaciers of B.C.’s Coast Mountains are melting at a faster rate than anytime since the end of the ice age 12,000 years ago? That in itself is cataclysmic. But how will this affect migrating, cold-water-loving B.C. salmon? Or the bears and eagles that depend on those salmon? Or the province’s 442 coastal eelgrass estuaries that act as nurseries for many fish? Or increased marine salinity and acidification caused, in part, by decreased freshwater runoff? Or B.C. fishers who depend on the sea’s bounty?
These are the kinds of questions that Victoria's Walker spends his time considering. He's the lead author for the B.C. chapter of From Impacts to Adaptation: Canada and the Changing Climate, a massive 2007 federal report that was buried by the then newly installed Harper government. Working with 30 other B.C. scientists, Walker tried to see the Big Picture. He tells me on the phone that at the current accelerating rate, the air temperature on the B.C. coast, already 2 °C warmer in the past few decades, will likely rise 5 °C more this century.
It will not only be warmer here, there will be eight percent more rain and a lot less snow. As a result, it's expected that 97 percent of the current coastal alpine habitat will vanish before 2100. This will be good for subarctic firs, which will, as the tree line ascends, gradually occupy today's alpine meadows. It will be bad for the wildflowers and the marmots and snowmelt. In fact, current projections say Coast Mountain glaciers will be entirely gone by the century's end. And rivers like the Fraser, already 2 °C warmer than a few decades ago, not only will be warmer (and lower) in the decades ahead as runoff slows in late summer but may well become increasingly inhospitable to autumn's annual salmon migration. And once this diminishing supply of fresh water reaches the B.C. coast, the cascade of consequences, experts say, only multiplies.
Barring a great subduction earthquake off the west coast of Vancouver Island, there are three forces that will most affect maritime B.C. in the coming decades. First, every scientist tells me that advancing ocean warming will reshuffle the deck as to which creatures remain here, which ones succumb, and which ones—like the marine mammals—simply retreat north to colder Alaskan waters. Second, sea-level rise will have a negligible effect on the province’s mostly rocky coastline but will have—in time—a calamitous effect on B.C.’s low-lying deltas and coastal estuaries, with their rich wildlife habitats, their farmlands, their industrial infrastructure, their port facilities, and their urban populations.
It is, however, the third force—ocean acidification—that experts suspect may be the trump card, both globally and locally, in humankind’s precarious future.
The details of atmospheric warming caused by the burning of carbon-rich fossil fuels are too familiar to reiterate. What’s less well known is this: since the start of the Industrial Revolution around 1760, more than 30 percent of all atmospheric CO2 has—in a complex chemical reaction—been absorbed by the world’s oceans. This is a huge benefit for the air. But this interaction alters the ocean’s pH, making surface waters more acidic. That’s because CO2 plus water equals carbonic acid. In fact, since 1760, ocean acidity worldwide has risen 30 percent. This toxicity becomes even more extreme—as Qualicum Bay shellfish farmer Saunders recently learned—in enclosed waters like those of B.C.’s archipelago-lined straits.
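To see what a "30 percent rise in acidity" means on the pH scale: pH is the negative base-10 logarithm of hydrogen-ion concentration, so a 30 percent rise in hydrogen ions lowers pH by log10(1.3), roughly 0.11 units. A minimal sketch in Python; the 8.2 pre-industrial surface value is an assumed round number, not a figure from this article:

```python
import math

pre_industrial_pH = 8.2   # assumed global surface-ocean mean, for illustration
increase = 0.30           # 30 % more hydrogen ions, as stated in the text

# A 30 % rise in [H+] lowers pH by log10(1.30) ~ 0.11 units.
delta_pH = math.log10(1 + increase)
print(round(delta_pH, 2))                      # ~0.11
print(round(pre_industrial_pH - delta_pH, 2))  # ~8.09
```

The small-looking 0.1-unit shift is why a logarithmic scale can hide a 30 percent change in chemistry.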
(This past summer, in a curious, related footnote, scientists announced they’d finally resolved the mystery of the great Permian-Triassic extinction of 251 million years ago, the single most catastrophic event in the planet’s history. The cause? Atmospheric CO2—probably the result of massive Siberian volcanic activity—produced ocean acidification so toxic that 96 percent of the planet’s marine species went extinct.)
Tritium has a half-life of 12.3 years. Pretend you started with 100 kilograms of tritium. After 12.3 years (one half-life), only half (50 kg) of the tritium would be left (yellow lines). After 24.6 years (two half-lives), only one-quarter (25 kg) of the original 100 kg of tritium would be left (blue lines).
Original artwork by Windows to the Universe staff (Randy Russell).
Some materials are radioactive. Their atoms give off radiation. When an atom gives off radiation, it turns into a different kind of atom. That is called radioactive decay. Some atoms decay very quickly, in seconds or minutes. Others take a long time to decay... sometimes millions of years! Scientists use the term "half-life" to describe how fast or slow the radioactive decay is.
Let's say you had 100 kilograms of tritium. Tritium is a radioactive form of hydrogen. The half-life of tritium is about 12 years. After 12 years, half of the tritium would be "gone". It would have given off radiation and decayed... and turned into helium. Only 50 kg of tritium would be left. After another 12 years (a second half-life), half of what was left would decay. There would only be 25 kg of tritium left after 24 years from the start. That's one-quarter (half of half) of the 100 kg we started with.
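The worked example above follows the exponential decay law N(t) = N₀ · (1/2)^(t / half-life), which is easy to sketch in Python:

```python
def remaining(initial_kg, half_life_years, elapsed_years):
    """Mass of a radioactive sample left after exponential decay."""
    return initial_kg * 0.5 ** (elapsed_years / half_life_years)

# Tritium: half-life about 12.3 years, starting from 100 kg.
print(remaining(100, 12.3, 12.3))  # one half-life  -> 50.0 kg
print(remaining(100, 12.3, 24.6))  # two half-lives -> 25.0 kg
```

The same function works for any time, not just whole numbers of half-lives.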
Different radioactive materials have different half-lives. Carbon-14 has a half-life of nearly 6,000 years. The half-life of uranium-235 is more than 700 million years! On the other hand, the half-life of nitrogen-13 is less than 10 minutes!
Scientists use radioactive materials with different half-lives in various ways. Carbon-14 dating is used to find out how old things that were once alive are. The more radioactive carbon-14 that is "missing" from a sample, the longer ago it must have died. Doctors use radioactive materials to treat some diseases. They use materials with short half-lives so the radiation doesn't hang around in the body too long. Old fuel from nuclear power plants can be a problem if it has a long half-life. It is hard to find a safe place to store radioactive materials with long half-lives.
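Carbon-14 dating inverts that relationship: from the fraction of carbon-14 still present you can solve for the elapsed time, t = half-life × log₂(1 / fraction). A small sketch, using the accepted 5,730-year half-life behind the text's "nearly 6,000 years":

```python
import math

C14_HALF_LIFE = 5730  # years

def radiocarbon_age(fraction_remaining):
    """Age implied by the fraction of carbon-14 still in a sample."""
    return C14_HALF_LIFE * math.log2(1 / fraction_remaining)

print(round(radiocarbon_age(0.5)))    # 5730  -- one half-life ago
print(round(radiocarbon_age(0.25)))   # 11460 -- two half-lives ago
```

The less carbon-14 remains, the larger the logarithm, and the older the sample.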
Winds blowing along the South American coast bring cold, deep ocean water to the surface. This is one of several ways that the ocean and atmosphere in the Southeast Pacific are connected.
Courtesy of NOAA
Ocean-Atmosphere Coupling in the Southeast Pacific
There are many connections between the ocean and the atmosphere in the Southeast Pacific Ocean.
Strong winds blow north along the coast of South America. These winds stir up the ocean. That brings cold water to the surface from the deep ocean. That water has lots of nutrients that living creatures need. There are many fish and other sea creatures in this area. The water at the surface is colder in the Southeast Pacific than in most other places at similar latitudes.
The strong winds carry dry air. The cold ocean water doesn't evaporate as easily as warmer water would. The dry air and the Andes Mountains combine to make the Atacama Desert in Chile. It is one of the driest places on Earth.
There are several kinds of particles in the air in this region. Plankton in the ocean make aerosols that have sulfur in them. High winds splash ocean spray filled with sea salt into the air. The winds also carry pollution out to sea from the land. All of these particles change the way that clouds form. There are lots of clouds most of the time in this area. The clouds shade the ocean, keeping it cool.
The connections between the ocean and the atmosphere don't just change the Southeast Pacific. They also make changes much further away. They change the flow of water in the whole Pacific Ocean. These changes help cause the famous El Niño and La Niña events.
White Dwarfs Eat Earth-Like Planets For Breakfast
A new discovery by the Hubble Space Telescope has allowed astronomers to find four white dwarf stars with stellar atmospheres containing oxygen, magnesium, iron, and silicon. From this we can deduce that planets once orbited these stars before being destroyed by extreme tidal gravitational forces and instability.
So, here’s one doomsday prediction to look forward to (and it may well happen): in a few billion years, when our sun runs out of fuel, it will bloat into a red giant, shed large quantities of mass, and then collapse into a white dwarf surrounded by a planetary nebula, into which the remains of the planets will rain down.
(Image credit: Mark A. Garlick/space-art.co.uk/University of Warwick)
Boris Horvat / AFP - Getty Images
A hardhat worker walks around the construction site for the ITER fusion experiment in Saint-Paul-les-Durance, France.
The standard joke about nuclear fusion is that it's the energy technology of the future, and always will be. Well, fusion is still an energy option for the future rather than the present, but small steps forward are being reported on several fronts. That even includes the long-ridiculed campaign for "cold fusion."
Efforts by the Italian-based Leonardo Corp. to harness low-energy nuclear reactions (the technology formerly known as cold fusion) have reawakened the dream of somehow producing surplus heat through unorthodox chemistry. Today, Pure Energy Systems News reported that Leonardo's Andrea Rossi signed an agreement with Texas-based National Instruments to build instrumentation for E-Cat cold-fusion reactors.
Will this venture actually pan out? The E-Cat reactors are so shrouded in secrecy and murky claims that it's hard to do a reality check, but most outside experts say that the concept just won't work.
Some observers are similarly pessimistic about the other avenues for fusion research. The basic physics of the reaction is well-accepted, of course. You can see the power generated when hydrogen atoms fuse into helium when you look at that big ball of gas in the sky, 93 million miles away, or when you watch footage of an H-bomb blast.
But no one has been able to achieve a self-sustaining, energy-producing fusion reaction in a controlled setting on Earth, even after more than a half-century of trying.
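The energy source itself is easy to check with a back-of-the-envelope mass-defect calculation: when four hydrogen atoms end up as one helium-4 atom, about 0.7 percent of the mass disappears and is released as energy (E = Δm·c², expressed here via the standard 931.494 MeV per atomic mass unit). A sketch using standard atomic masses:

```python
U_TO_MEV = 931.494   # energy equivalent of one atomic mass unit, MeV
M_H1 = 1.007825      # mass of a hydrogen-1 atom, u
M_HE4 = 4.002602     # mass of a helium-4 atom, u

delta_m = 4 * M_H1 - M_HE4          # mass lost fusing 4 H into He-4
energy_mev = delta_m * U_TO_MEV
print(round(energy_mev, 1))                   # ~26.7 MeV per helium nucleus
print(round(100 * delta_m / (4 * M_H1), 2))   # ~0.71 % of the input mass
```

That 26.7 MeV per reaction, multiplied over the Sun's enormous fusion rate, is the power source the paragraph alludes to.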
Researchers had hoped to reach that big milestone, known as ignition, at the $3.5 billion National Ignition Facility by the end of 2010. But in last week's issue of Science, Steven Koonin, the Energy Department's under secretary for science, was quoted as saying "ignition is proving more elusive than hoped" and added that "some science discovery may be required" to make it a reality. (Coincidentally, Energy Secretary Steven Chu announced this week that Koonin will be leaving his post.)
The big challenge is to tweak all the factors involved in NIF's super-laser-blaster system to maximize the energy directed on tiny pellets of fusion fuel, and minimize the loss of energy through tiny imperfections or interference. "We're at the end of the beginning," NIF's director, Edward Moses, told Science.
How much longer will it take? The new director of Lawrence Livermore National Laboratory, where NIF is headquartered, told the San Francisco Chronicle that he was convinced the facility would attain ignition "in this fiscal year" — that is, by next October.
If NIF hits that schedule, it'll be way ahead of the world's most expensive fusion experiment, the $20 billion ITER experimental project in France. ITER is taking the most conventional approach to creating a controlled fusion reaction, which involves magnetic containment of a super-hot plasma inside a doughnut-shaped device known as a tokamak. The European Union and six other nations, including the United States, have divvied up the work load with the aim of completing construction in 2017 and achieving "first plasma" in 2019.
Right now, Oak Ridge National Laboratory and US ITER are testing a fuel delivery system that would fire pellets of ultra-cold deuterium-tritium fuel into the plasma.
"When we send a frozen pellet into a high-temperature plasma, we sometimes call it a 'snowball in hell,'" Oak Ridge physicist David Rasmussen said in an ITER report on the tests at the DIII-D research tokamak in San Diego. "But temperature is really just the measure of the energy of the particles in the plasma. When the deuterium and tritium particles vaporize, ionize and are heated, they move very fast, colliding with enough energy to fuse."
The tricky part has to do with shaping the pellets just right to produce the desired reaction. When it comes to snowballs in hell, the devil is in the details.
The politics of ITER is just as tricky as the technology. Considering the economic problems that are afflicting the world, and Europe in particular, will there be funding to support the development timeline? Last month, one of the leaders of the European Parliament's Green bloc called ITER a "ticking budgetary time bomb."
Wiffle-Balls and other wonders
Smaller-scale fusion research efforts, meanwhile, are getting a lot of good press. For example, the Navy-funded experiments in inertial electrostatic confinement fusion, also called Polywell fusion, are continuing at EMC2 Fusion Development Corp. in New Mexico. The latest status report for the $7.9 million project says that the test reactor, known as a Wiffle-Ball because of its shape, "has generated over 500 high-power plasma shots."
"EMC2 is conducting tests on Wiffle-Ball plasma scaling law on plasma heating and confinement," the brief report reads.
The Polywell system is designed to accelerate positively charged ions inside a high-voltage cage, in such a way that they spark a fusion reaction. If enough of the ions fuse, the energy could exceed the amount put into the system.
In the past, leaders of the EMC2 team have told me that their aim is to build a 100-megawatt demonstration reactor. Nowadays, EMC2 is more close-mouthed about their progress, primarily because that's the way the Navy wants it. But the report about 500 high-energy plasma shots brought a positive response from the Talk-Polywell discussion board, which has been following EMC2's progress closely. "I'd be drunk by now if those were shots of whiskey," one commenter joked.
Privately backed efforts are moving ahead as well: Last month, Lawrenceville Plasma Physics reported reaching a record for neutron yield with its "Focus Fusion" direct-to-electric generator. And this week, Canada's General Fusion and its magnetized target fusion technology were featured in an NPR news package.
"I wouldn't say I'm 100 percent sure it's going to work," General Fusion's Michel Laberge told NPR. "That would be a lie. But I would put it at 60 percent chance that this is going to work. Now of course other people will give me a much smaller chance than that, but even at 10 percent chance of working, investors will still put money in, because this is big, man, this is making power for the whole planet. This is huge!"
Is it a huge opportunity, or a huge waste — especially considering that the energy technology of the future will have to compete with present-day technologies such as solar, wind, biofuel and nuclear fission? Feel free to weigh in with your comments below.
Update for 3:40 p.m. ET Nov. 11: Some commenters have rightly pointed out that there are many other nuclear fusion and high-energy plasma initiatives under way, including the Z Machine, a huge X-ray generator at Sandia National Laboratories in New Mexico. The journal Science quotes Sandia researchers as saying the machine could be used to start testing the feasibility of pinch-driven fusion, but conducting a definitive test would require a far more powerful machine.
Science also notes that some researchers suspect NIF's indirect approach to laser-driven fusion, in which fuel pellets are placed inside a pulse-shaping cylinder known as a hohlraum, may not be as efficient as it needs to be. Research groups are investigating direct-drive laser fusion at the Laboratory for Laser Energetics in Rochester, N.Y., and the Naval Research Laboratory in Washington.
More about fusion:
- Fusion goes forward from the fringe
- Levitating magnet coaxes nuclear fusion
- Out-of-this-world ideas win NASA funding
- Physics turns from fission to the future
Connect with the Cosmic Log community by "liking" the log's Facebook page, following @b0yle on Twitter or following the Cosmic Log Google+ page. You can also check out "The Case for Pluto," my book about the controversial dwarf planet and the search for new worlds.
2. Omitting parentheses around method arguments when not appropriate
It is better to use parentheses for argument lists unless you are sure they can be omitted.
For instance, a left parenthesis after a method name must always enclose the parameter list.
Omitting the parentheses may cause the compiler to assume a complete statement although it is meant to continue on the next line. It is not always as obvious as in the following example.
myVariable = "This is a very long statement continuing in the next line. Result is="
    + 42 // this line has no effect
4. Forgetting to write the second equals sign of the equals operator
As a result, a comparison expression turns into an assignment.
Gryllotalpa cultriger Uhler
Distribution and Damage
The main distribution is probably in Mexico, as it is uncommon along the US border. It is not considered a pest.
As with other mole crickets in the genus Gryllotalpa , the tibia of the western mole cricket has four dactyls. There is a row of five spines on the upper margin of the tibia of each hind leg. This characteristic is shared by the European and oriental mole crickets, but not by the prairie mole cricket.
In the western and oriental mole crickets, the ocelli are elliptical, and the ocellar-ocular distance is less than the ocellar length. In the western mole cricket the inter-ocellar distance is less than the ocellar length, whereas in the oriental mole cricket the inter-ocellar distance is greater than the ocellar length.
The life cycle and seasonality of this species have not been studied, as it is very uncommon in the USA.
Purple Sea Urchin
On a clear, sunny coastal day, the purple spines of the Purple Sea Urchin (Strongylocentrotus purpuratus) can be easy to spot in the inter-tidal zone.
Most of the time, sea urchins are stationary animals that attach themselves to a rock or other hard substrate in order to scrape off and consume the algae. They can also be found in deeper water, often around kelp beds, one of their favorite meals.
Sea Urchin bodies, formally called tests, are round and covered by a hard shell. Like other Echinoderms such as Sea Stars, their mouth is on the bottom of the body. The spines are used as a defense mechanism, and while some sea urchin species are known to be poisonous to humans, the purple sea urchin is not considered poisonous.
Both people and ocean animals such as sea otters and sea stars consider sea urchins a delicacy.
© 2010 Patricia A. Michaels
About this image
Impact craters can have a variety of floor features. Depending on the size of the meteorite and the material it is hitting the resultant crater can have a flat floor, a central peak or a central peak with a pit in it. The peak and peak/pit combination are formed by rebound of the surface material that has been melted and pulverized. Today's image is a central peak/pit combination crater in Terra Cimmeria.
Please see the THEMIS Data Citation Note for details on crediting THEMIS images.
The terminology for elliptic integrals and functions has changed during their investigation. What were originally called elliptic functions are now called elliptic integrals and the term elliptic functions reserved for a different idea. We will therefore use modern terminology throughout this article to avoid confusion.
It is important to understand how mathematicians thought differently at different periods. Early algebraists had to prove their formulas by geometry. Similarly early workers with integration considered their problems solved if they could relate an integral to a geometric object.
Many integrals arose from attempts to solve mechanical problems. For example the period of a simple pendulum was found to be related to an integral which expressed arc length but no form could be found in terms of 'simple' functions. The same was true for the deflection of a thin elastic bar.
The study of elliptical integrals can be said to start in 1655 when Wallis began to study the arc length of an ellipse. In fact he considered the arc lengths of various cycloids and related these arc lengths to that of the ellipse. Both Wallis and Newton published an infinite series expansion for the arc length of the ellipse.
At this point we should give a definition of an elliptic integral. It is one of the form
∫ r(x, √p(x) )dx
where r(x,y) is a rational function in two variables and p(x) is a polynomial of degree 3 or 4 with no repeated roots.
In 1679 Jacob Bernoulli attempted to find the arc length of a spiral and encountered an example of an elliptic integral.
Jacob Bernoulli, in 1694, made an important step in the theory of elliptic integrals. He examined the shape that an elastic rod will take if compressed at the ends. He showed that the curve satisfied
ds/dt = 1/√(1 - t⁴)
then introduced the lemniscate curve
(x² + y²)² = x² - y²
whose arc length is given by the integral from 0 to x of
dt/√(1 - t⁴)
This integral, which clearly satisfies the above definition and so is an elliptic integral, became known as the lemniscate integral.
This is a particularly simple case of an elliptic integral. Notice for example that it is similar in form to the function sin⁻¹(x) which is given by the integral from 0 to x of
dt/√(1 - t²)
The other good features of the lemniscate integral are the fact that it is general enough for many of its properties to be generalised to more general elliptic functions, yet the geometric intuition from the arc length of the lemniscate curve aids understanding.
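The analogy with sin⁻¹ can be checked numerically: both integrals can be evaluated with elementary quadrature, and only the circular one matches a known elementary function. A minimal Python sketch (the composite midpoint rule and step count are my own choices, not from the article):

```python
import math

def integrate(f, a, b, n=100_000):
    """Composite midpoint rule; adequate away from endpoint singularities."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

x = 0.5

# Circular case: the integral of dt/sqrt(1 - t^2) from 0 to x is arcsin(x).
circ = integrate(lambda t: 1 / math.sqrt(1 - t**2), 0, x)
print(circ, math.asin(x))  # both ~0.5236

# Lemniscate integral: dt/sqrt(1 - t^4) has no elementary closed form.
lemn = integrate(lambda t: 1 / math.sqrt(1 - t**4), 0, x)
print(lemn)  # ~0.503, between x and arcsin(x) since t^4 < t^2 on (0, 1)
```

The numerical value of the lemniscate integral lies between x and sin⁻¹(x), reflecting the smaller integrand.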
In the year 1694 Jacob Bernoulli considered another elliptic integral
∫ t² dt/√(1 - t⁴)
and conjectured that it could not be expressed in terms of 'known' functions, sin, exp, sin⁻¹.
Article by: J J O'Connor and E F Robertson
Inventor of the Week Archive
One of the most popular labs at MIT is the one directed by Toyoichi Tanaka, whose polymer gels have shown their potential to transform the technology of medicine, energy, and food production.

Born and raised in Japan, Tanaka received his higher education at the University of Tokyo, where he earned a B.S. (1968), M.S. (1970), and D.Sc. (1973) in Physics. In 1975, he joined the faculty of MIT, where he has risen to the rank of Professor of Physics as well as Morningstar Professor of Science.
Tanaka's field of expertise is gels. A gel is typically a mixture of a polymer "matrix," that is, a chain of individual molecules, and a fluid "solute," in a ratio of about 1:30. The obvious example is Jell-O, which has a matrix of gelatin in a solute of sugar water. However, synthetic gels can be made in which the polymers are very tightly bonded, with sometimes surprising results.
In the mid-1970s, Tanaka discovered that certain synthetic (polyacrylamide) gels had remarkable properties: for example, they responded to minute changes in their environment by drastically swelling up or changing color. Any substance will respond to its environment to some extent; but Tanaka learned to fine-tune his gels to undergo radical changes, or "phase transitions," when they encounter either a chemical or a change in conditions (temperature, light, electricity, magnetism, etc.).
At this stage, Tanaka's gels have valuable applications because they can expand and contract up to 1,000 times their original volume in response to predictable stimuli: for example, these gels could be used as artificial muscles, set in motion by a specific electrical pulse. More importantly, the polymers in the gels can capture or expel specific substances as they grow or shrink, so that the gels could be used, for example, as super-sponges to absorb and immobilize toxic waste, or as molecular filters of various sorts.
The more complex stage of Tanaka's research has been to develop "smart" gels which imitate proteins by recognizing conditions and responding to their environment. For example, smart gels can be fine-tuned to draw humidity from the air when it is over a given temperature, or even to release insulin when the glucose level around them drops below a given point.
By 1992, Tanaka had earned eight patents for his gels. In the same year, he and a partner founded Gel Sciences, Inc., in order to market SmartGel products. Their first effort was a liner for shoes and skates, which is pliant until it encounters the body heat of the foot; then it firms up to provide custom-molded support. More recently, the firm has focused on medical applications of the gels, such as long-lasting eyedrops and sunscreen; and numerous drug delivery systems are in production.
Meanwhile, Tanaka continues his research at MIT, while technologists in many disciplines monitor his progress carefully: in 1996, Tanaka won both the R&D 100 Award and Discover Magazine's Editor's Choice for Emerging Technology Award. Although Toyoichi Tanaka has been the world's leading expert on gels for almost 20 years, it is clear that his greatest success is still to come.
Do ghostly, imperceptible particles called "sterile neutrinos" wander the universe? The question has given physicists sleepless nights since evidence for the particles emerged a decade ago.
But now a new experiment has poured cold water on the idea, reassuring many scientists that their ideas are on the right track.
"Our results are the culmination of many years of very careful and thorough analysis - scientists everywhere have been eagerly waiting for our results," says Janet Conrad, a spokeswoman for the experiment at Fermilab, near Chicago in Illinois, US. She announced the result at a Fermilab meeting on Wednesday.
Neutrinos are lightweight particles that whiz around the universe, barely interacting with matter. They stream out from nuclear reactions in the Sun and continually flood straight through the Earth.
The particles come in three different types, or "flavours", dubbed electron, muon and tau. And several experiments have proved that neutrinos and their antiparticle counterparts can flip from one flavour to another, or "oscillate", as they travel.
One of these experiments was the Liquid Scintillator Neutrino Detector (LSND) at Los Alamos National Laboratory in New Mexico, US, which gathered data from 1993 to 1998. The experiment suggested some muon antineutrinos had flipped into electron antineutrinos after travelling about 30 metres.
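Appearance signals like LSND's are usually described with the standard two-flavour oscillation formula, P = sin²(2θ) · sin²(1.27 · Δm² · L / E), with Δm² in eV², baseline L in km, and energy E in GeV. A hedged sketch; the numbers plugged in below are illustrative orders of magnitude, not the experiments' fitted values:

```python
import math

def osc_prob(sin2_2theta, dm2_ev2, L_km, E_gev):
    """Two-flavour appearance probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E)."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_gev) ** 2

# LSND-like scale: ~30 m baseline, ~50 MeV antineutrinos, dm^2 ~ 1 eV^2,
# small mixing. All four inputs here are assumptions for illustration.
prob = osc_prob(0.003, 1.0, 0.03, 0.05)
print(prob)  # a fraction-of-a-percent appearance probability
```

The key point is the L/E dependence: a different baseline-to-energy ratio probes a different Δm², which is why MiniBooNE could test LSND's claim with a 500 m baseline and higher-energy neutrinos.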
However, the results of this experiment did not mesh with other experiments unless at least one extra, fourth neutrino existed, with roughly one-millionth of the mass of the electron. This fourth neutrino would be "sterile", meaning it would not interact with matter at all, except through gravity.
But sterile neutrinos had no place in the standard picture of particle physics, so they would force physicists to radically overhaul their theories. Sterile neutrinos of this mass also conflicted with cosmology, because they would have interfered with the growth of galaxies in the universe.
Butts on the line
"The implications were staggering," says Scott Dodelson at Fermilab. "Cosmologically, we decided there should not be a sterile neutrino, so to some extent, our butts were on the line."
Physicists were therefore keen to double-check the LSND result, so they dismantled the experiment and used the parts to build a more sensitive experiment at Fermilab called MiniBooNE, the first phase of a project called BooNE (Booster Neutrino Experiment).
Now, after analysing data from MiniBooNE gathered between 2002 and 2005, the team say they have resolved the issue, without the need for exotic sterile neutrinos.
MiniBooNE fired a beam of muon neutrinos into a detector 500 m away. None of them flipped into electron neutrinos. This result is consistent with other experiments and the standard three-neutrino picture.
"This kind of confirms what we were saying," says Dodelson. However, he adds that there might be some exotic, convoluted reason why both LSND and MiniBooNE are correct and can be reconciled with new physics - something physicists intend to explore.
Video: An experiment on Earth simulates the sudden release of water into a crater on Mars, reproducing the pattern of sediment deposit seen in a real crater on the Red Planet (Video courtesy of Erin Kraal et al/Nature)
Billions of years ago on Mars, a river suddenly burst to the surface from underground and flooded a large crater, only to disappear again within a few decades, according to a new study. Although the water was short-lived on the surface, it may have been present for longer underground, potentially creating conditions favourable to life.
Many ancient river channels are among the evidence that liquid water was once present on Mars, but in many cases it is difficult to know just how long that water was around.
Now, a new study says the water in at least one location on Mars flowed for just a few decades before disappearing again. The study, led by Erin Kraal of Virginia Polytechnic Institute and State University in Blacksburg, US, is based on the pattern of sediment left behind when an ancient river emerged from underground, flowed for 20 kilometres, then drained into a crater.
It was an abrupt and catastrophic event, Kraal says. "It would be like the Mississippi River suddenly bursting out of the ground and flowing for 10 years and then stopping," she told New Scientist.
The researchers created a mock crater on Earth about 2 metres across. Water flowed into it, carrying along sand and depositing a fan of sediment on the crater floor. Fluctuations in how much sand was eroded by the flow at any given time led to sediment being deposited in distinct steps, just like in the Martian crater analysed by the team, which is 128 km wide.
Water flowing into the Martian crater after the 'stepped' sediment was first laid down would have either buried the stepped pattern in new sediment, or cut a channel into it, the researchers say. This allowed them to deduce that the sediment was all deposited in a single, uninterrupted episode where water was flowing.
The researchers then calculated the rate and duration of the water flow using estimates of how much sediment was deposited in the crater and the size of the 20-km-long channel. They arrived at a flow rate of between 2200 and 800,000 cubic metres per second, with a maximum flow time of about 90 years. 800,000 cubic metres per second is about five times the flow rate of the Amazon River.
What could have triggered such a flood? Kraal thinks the water was probably trapped beneath the surface and was mostly or entirely frozen. It may have melted suddenly when it was heated by magma.
Even though the water did not last long on the surface at this location, it could have been present for much longer underground. It may have been kept liquid in places by the heat from magma, and potentially fostered life, Kraal says. Because of this possibility, the stepped sediment deposits would be good places to look for signs of past life on Mars, she says.
Similar stepped sediment deposits appear in only a few places on Mars, Kraal says. And it is much less clear how long water was present in other locations with evidence of past water.
Bethany Ehlmann of Brown University in Providence, Rhode Island, US, says that at other places on Mars, water appears to have been much longer-lived on the surface.
Two ancient river deltas called Eberswalde and Jezero appear to have been "built up over hundreds to thousands of years, in more quiescent lakes fed by large valley networks, in systems reminiscent of rivers on Earth", she told New Scientist.
She says it should be possible to test the catastrophic flooding scenario for the crater analysed by Kraal's team by using NASA's Mars Reconnaissance Orbiter's camera to look for large boulders in the sediment, which could only be transported by rapidly flowing water.
Journal reference: Nature (DOI: 10.1038/nature06615)
They Don't Look Quite Right.
Thu Feb 21 11:40:04 GMT 2008 by Tom Potts
I wonder if the desire for water on Mars makes some scientists see what they want to see.
Water would create a much more ragged appearance to the delta.
Perhaps more considerations should be given to fluid flow in aggregates - pyroclastic flow was a fantasy a few years ago but the lower gravity of Mars would make it more likely and a lot of the visual 'evidence' for water can be replicated in a dry sandpit.
Thu Feb 21 13:02:39 GMT 2008 by Karl Roenfanz ( Rosey )
If the water pooled then it would create a soft fan, then slowly seep back into the ground or evaporate.
Green House Gases Into Gas
Fri Feb 22 13:59:18 GMT 2008 by Jane Jones
Far out right on...will this mean the air will be clean?
Number Of Planets
Date: Prior to 1993
How many known planets are there, inside and outside the solar system? I have read in Astronomy magazine that at least 2 planets were discovered around a pulsar (I do not know where) and that perhaps a 10th planet lies past Pluto. Any more I do not know about?

I have not heard of any good hard evidence for any planets other than the usual nine.
Update - March 2011
As of 2011, there are officially eight planets in our Solar System. Pluto, commonly thought of as the ninth planet, is officially listed as a dwarf planet. There are several dwarf planets in our solar system: Ceres, Pluto, Haumea, Makemake, and Eris.
There have also been planets discovered outside of our solar system, though I do not have a count on them at the present time.
Update: June 2012 | <urn:uuid:2a7f4640-d224-408a-a0b8-149b26c2d458> | 2.84375 | 199 | Knowledge Article | Science & Tech. | 50.411974 |
Climate Change and Weather
Does climate change affect day-to-day weather?
Yes, it appears that rising global temperatures are already affecting weather by intensifying the extremes. Hot days are hotter, rainfall and snowfall are heavier, and dry spells are longer.
Is there a connection between global warming and extreme weather events?
Scientists believe there is. Climate change produces warmer ocean temperatures, a factor tied to stronger hurricanes. With rising sea level and heavier rainfall, the impacts of these powerful storms are only expected to increase. At the same time, rising temperatures and more severe droughts are likely to lead to more wildfires, while heavier precipitation will produce more devastating floods and debilitating winter storms. Find out more about the connections between global warming and extreme weather, and download National Wildlife Federation's special reports on these topics, here.
How can we reduce the risks associated with extreme weather events?
Reducing global warming pollution is the first step. But restoring and protecting habitats is another key strategy for reducing the negative impacts of climate change. For example, coastal wetlands and barrier islands play an important role in absorbing the destructive force of storms. Restoration and increased protection for coastal wetlands are essential as a first line of defense against hurricanes, and may also bring the added benefit of improving the ability to withstand some sea-level rise. Read about how students in Louisiana are getting hands-on with wetland-restoration solutions.
Resources for Teaching about Climate and Extreme Weather
Explore these websites and lesson plans for ideas to help you teach about the connections between climate change, weather, and extreme weather events:
Climate Literacy and Energy Awareness Network (CLEAN). Weather-related lessons from the collection. (various grades)
Beyond Weather and The Water Cycle. (K-2, 3-5)
Community Collaborative Rain, Hail and Snow Network (CoCoRaHS) A community-based network of volunteers measuring and mapping precipitation (rain, hail and snow) to provide high quality data for natural resource, education and research applications. (all ages)
Extreme weather information at Weather WizKids. (K-2, 3-5)
NOAA: Playtime for Kids. Hurricanes and other extreme weather. (various grades)
Discovery Education: On the Gulf: Coastlines In Danger (9-12)
Discovery Education: Weather (various grades)
NOAA Environmental Visualization Laboratory for visualizing extreme weather events. (various grades)
JetStream-Online School for Weather. (6-8, 9-12) | <urn:uuid:e8812296-8a23-486a-bdbf-27ed5a859d44> | 3.9375 | 520 | Knowledge Article | Science & Tech. | 25.913558 |
Really .= isn't an "append operator" it's just syntactic sugar for an assignment back to the same lvalue after using the concatenation operator, just like += is syntactic sugar for adding and assigning in one step. It's just that, unlike addition (or the other mathematical operators), concatenation isn't commutative.</pedant>
Not that it's really an answer as to why there's no prepend operator . . .
Definition: A conceptual method of open addressing for a hash table. A collision is resolved by putting the item in the next empty place given by a probe sequence which is independent of the sequences for all other keys.
See also collision resolution scheme, clustering free, double hashing, quadratic probing, linear probing, perfect hashing, simple uniform hashing.
Note: Since the probe sequences are independent, this is free of clustering. This is conceptual because real probe sequences are unlikely to be completely independent.
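Since true uniform hashing is only conceptual, implementations approximate it with double hashing, where each key's probe sequence is driven by two hash functions. A minimal sketch (the table size, hash functions, and non-negative integer keys are all illustrative choices, not part of the definition):

```java
import java.util.Arrays;

public class DoubleHashing {
    static final int M = 11;               // prime table size: every step size is coprime to M
    static final int EMPTY = -1;
    static final int[] table = new int[M];
    static { Arrays.fill(table, EMPTY); }

    static int h1(int k) { return k % M; }
    static int h2(int k) { return 1 + (k % (M - 1)); } // never 0, so probing always advances

    // The i-th slot in key k's probe sequence. Different keys tend to get
    // different sequences, which is what makes double hashing a practical
    // stand-in for truly independent (uniform) probe sequences.
    static int probe(int k, int i) { return (h1(k) + i * h2(k)) % M; }

    static boolean insert(int k) {
        for (int i = 0; i < M; i++) {
            int slot = probe(k, i);
            if (table[slot] == EMPTY || table[slot] == k) {
                table[slot] = k;           // first empty slot on the probe sequence
                return true;
            }
        }
        return false;                      // table full
    }

    static boolean contains(int k) {
        for (int i = 0; i < M; i++) {
            int slot = probe(k, i);
            if (table[slot] == k) return true;
            if (table[slot] == EMPTY) return false; // key would have been placed here
        }
        return false;
    }

    public static void main(String[] args) {
        insert(22);                        // h1(22) == 0
        insert(0);                         // also h1(0) == 0, so it must probe onward
        System.out.println(contains(22) + " " + contains(0) + " " + contains(5));
    }
}
```

Because gcd(h2(k), M) = 1 for a prime M, each probe sequence visits all M slots, so an insert fails only when the table is genuinely full.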
Entry modified 17 December 2004.
Cite this as:
Paul E. Black, "uniform hashing", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/uniformhashn.html | <urn:uuid:862ad2ff-5960-4b87-9608-bfae13d50175> | 3.03125 | 223 | Knowledge Article | Software Dev. | 52.480885 |
Some animals are poorly named. The flying lemur doesn’t fly and isn’t a lemur. The mantis shrimp isn’t a mantis or a shrimp. The killdeer couldn’t. But the giant bumphead parrotfish… it’s a giant fish with a beak like a parrot and a bump on its head. Nice one, biologists. You can have a point for that.
The giant bumphead parrotfish (Bolbometopon muricatum) is the biggest herbivorous fish in coral reefs. It can reach 1.5 metres in length and weigh over 75 kilograms, and it has a distinctively bulbous forehead. Why? There are rumours that it uses its head to ram corals, breaking them up into smaller and easier-to-eat chunks.
But Roldan Munoz from the National Oceanic and Atmospheric Administration has discovered one definite use for the bump: headbutting rivals. Check out the video below – it all kicks off at the ten-second mark, and I love the “Whoooah” that follows.
While watching the parrotfish at Wake Atoll, in the middle of the Pacific, Munoz’s team heard “loud jarring sounds”. They soon found that the males were smashing their heads together head-on, and then trying to bite each other in the flanks.
To the team’s knowledge, this behaviour has only ever been described once: on a dive blog, describing a sighting in the Red Sea. Bear in mind that this is one of the largest reef fishes, and swims in large groups. As Munoz asks, “How could this dramatic aspect of its social and reproductive behavior have gone unnoticed?”
There are two possible reasons. First, the parrotfish has been severely overfished. It swims in large groups making it easy to net by day, and sleeps in shallow water making it easy to spear by night. It also grows slowly and takes years to reproduce, so even a moderate amount of fishing seriously hurts the population. Second, and related to the first reason, the fish is now very wary of humans and tends to swim away if approached.
Wake Atoll is an exception. It’s a US Marine National Monument, and the fish are protected from nets and spears. There are plenty of them, and they’ve never learned to fear divers. As such, their dramatic contests could finally be filmed. Just think about how many cool behaviours we have yet to see because we have either exterminated or terrified the animals that perform them.
Reference: Munoz, Zgliczynski, Laughlin & Teer. 2012. Extraordinary Aggressive Behavior from the Giant Coral Reef Fish, Bolbometopon muricatum, in a Remote Marine Reserve. PLoS ONE http://dx.doi.org/10.1371/journal.pone.0038120 | <urn:uuid:c4e358f5-5236-487c-a93a-32f23a314bbe> | 2.90625 | 618 | Personal Blog | Science & Tech. | 63.477129 |
Turtle embryos find the sunny spot
Wei-Guo Du from the Chinese Academy of Sciences has found that the embryos of soft-shelled turtles “bask” inside their eggs. “People usually think reptilian embryos are immobile,” says Du. After all, their limbs are tiny stumps and they have few places to move to. But that doesn’t stop them. Du found that the embryos can not only move, but they can snuggle up to the warmest side of their eggs.
He collected 260 eggs from a local turtle farm, placed them in individual jars and warmed them with heat lamps. The eggs were about one degree Celsius warmer on the sides closest to the lamps, and the turtles could sense this. After a few days, they had pressed up against the warmer side. When Du moved the heat lamps around, the embryos followed.
“To our knowledge, no previous study has looked for this ability, presumably because embryos were thought incapable of such behaviour,” says Du.
Another good example of why scientists should provide images or video with their papers as often as possible. I would have skipped right over this if not for that great photo. | <urn:uuid:756c8368-c34a-4a05-af5a-5f9b21e4cc89> | 2.875 | 247 | Personal Blog | Science & Tech. | 61.922788 |
Permutation recursive help
Producing consecutive permutations.Need to develop a method that lists one by one all permutations of the numbers 1, 2, …, n (n is a positive integer).
(a) Recursive method . Given a verbal description of the algorithm listing all permutations one by one, you are supposed to develop a recursive method with the following header:
public static boolean nextPermutation(int[] array)
The method receives an integer array parameter which is a permutation of integers 1, 2, …, n. If there is “next” permutation to the permutation represented by the array, then the method returns true and the array is changed so that it represents the “next” permutation. If there is no “next” permutation, the method returns false and does not change the array.
Here is a verbal description of the recursive algorithm you need to implement:
1. The first permutation is the permutation represented by the sequence (1, 2, …, n).
2. The last permutation is the permutation represented by the sequence (n, …, 2, 1).
3. If (a_1, ..., a_n) is an arbitrary permutation, then the "next" permutation is produced by
the following procedure:
(i) If the maximal element of the array (which is n) is not in the first position of the array, say a_i = n, where i > 1, then just swap a_i and a_(i-1). This will give you the "next" permutation in this case.
(ii) If the maximal element of the array is in the first position, so a_1 = n, then to find
the "next" permutation to the permutation (a_1, ..., a_n), first find the "next"
permutation to (a_2, ..., a_n), and then add a_1 to the end of the thus obtained array of (n-1) elements.
(iii) Consecutively applying this algorithm to permutations starting from (1, 2, ..., n), you will eventually list all n! possible permutations. The last one will be (n, ..., 2, 1). For example, below is the sequence of permutations for n = 3, listed by the described algorithm:
(1 2 3) ; (1 3 2) ; (3 1 2) ; (2 1 3) ; (2 3 1) ; (3 2 1)
Please help... I have done the iterative one but am unable to figure out the recursive version.
Thank you in advance.
I have the same scenario for producing consecutive permutations. Do you have the answer?
Welcome to DevX
What have you got so far that isn't working for you?
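For reference, here is one possible recursive sketch in Java that follows the verbal description in the assignment. The boolean contract comes from the problem statement; the `from` offset helper and the in-place rotation are my own choices, not part of the assignment:

```java
import java.util.Arrays;

public class Permutations {
    public static boolean nextPermutation(int[] a) {
        return next(a, 0);
    }

    // Advances a[from..a.length-1] to its "next" permutation in place.
    // Returns false (leaving the array unchanged) if it is already the last one.
    private static boolean next(int[] a, int from) {
        if (a.length - from <= 1) return false;        // one element: no next permutation
        int maxPos = from;                              // locate the maximal element
        for (int i = from + 1; i < a.length; i++)
            if (a[i] > a[maxPos]) maxPos = i;
        if (maxPos > from) {                            // case (i): swap max with left neighbour
            int tmp = a[maxPos];
            a[maxPos] = a[maxPos - 1];
            a[maxPos - 1] = tmp;
            return true;
        }
        if (!next(a, from + 1)) return false;           // case (ii): recurse on the tail...
        int first = a[from];                            // ...then move the max to the end
        for (int i = from; i < a.length - 1; i++) a[i] = a[i + 1];
        a[a.length - 1] = first;
        return true;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};                            // the first permutation
        do {
            System.out.println(Arrays.toString(a));
        } while (nextPermutation(a));                   // stops after (3, 2, 1)
    }
}
```

For n = 3, the loop prints the six permutations in the order listed in the assignment: (1 2 3), (1 3 2), (3 1 2), (2 1 3), (2 3 1), (3 2 1).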
Typically when you spawn a new thread, you want to give it a name to facilitate debugging. For example:
//.. other stuff....
Thread thread = new Thread(new ThreadStart(DoSomething));
thread.Name = "DoingSomething";
thread.Start();
The code in the method DoSomething (not shown) will run on a thread named "DoingSomething."
Now suppose you're writing a socket server using the asynchronous programming model. You might write something that looks like the following:
static ManualResetEvent allDone = new ManualResetEvent(false);

public static void Main()
{
    Socket socket = new Socket(...); // you get the idea
    socket.BeginAccept(new AsyncCallback(OnSocketAccept), socket);
    allDone.WaitOne(); // block until the callback has run
}

// AsyncCallback delegates take an IAsyncResult parameter
public static void OnSocketAccept(IAsyncResult ar)
{
    Thread.CurrentThread.Name = "SocketAccepted";
    // Some socket operation.
    allDone.Set();
}
In the example above, we're setting up a... | <urn:uuid:d046ffb1-f421-4b1f-b596-25c3f76d51bd> | 3.109375 | 192 | Tutorial | Software Dev. | 38.205227 |
This just goes to show that the ancient Chinese saying, "If you paddle down the river long enough, you'll find the remains of a 1,200 pound, 40,000 year old bison," is true.
University of Alaska biologists Dan Mann and Pam Groves did just that. Because the skeleton was in permafrost, hair and gristle were still attached.
Learn how scientists are using bacteria to create different types of fuels.
How does fire pass from one burning stick to another unlit stick?
If you own a cat you know that they have a very distinctive blink. Learn the purpose behind this symbol.
Astronomers have found a planet with water known as HD 189733b.
We often hear about women experiencing postpartum depression but did you know that men can experience it too?
Men who marry younger women increase their life expectancy. However, do women who marry younger men live longer?
Is it true that pregnant women should avoid high heels?
This is just a little physics quiz to get you thinking!
When volcanoes erupt underwater, do the waves and ocean currents help calm the eruption?
Could a fire department stop a volcano? Learn about the town of Vestmannaeyjar and their brave firefighters who battled a volcano.
You've watched astronauts complete dangerous missions on space walks. Couldn't they use suction cups and suction themselves to the ship?
We've all played the penny stack game. Is there anyway to prevent the pennies from falling over?
Penguins are being harmed by the changes to the environment.
Why is the Middle East the place to find oil?
Raspberries, kiss squeaks, and grumphs are all noises made by orangutans. Learn how orangutans communicate and use gestures.
Have you heard about the great solar energy enthusiast, Auguste Mouchot?
Scientists are experimenting with rat whiskers. They have found that they can stop a stroke from occurring. Cool, right? But would it work on humans?
The three states of water are solid, liquid, and gas. But, what happens when water "sublimates?"
How does sublimation change the way ice cubes look?
What is the distinctive sound of the ocean hitting the sand: Woooosh!
It's time again for the Moment of Science Physics Quiz!
Mudslides may seem like total muddy chaos, but what is the cause?
Scientists have discovered million-year-old evidence that our ancestors may have been cannibals.
What were you like as a child? Chances are, your adult personality is very similar to your childhood personality.
Early bloomers have always been around. However, there is a noticeable trend of girls as young as 7 years old who are already hitting puberty.
A smarter electricity grid will help cut your energy bill and prevent blackouts in the summer heat.
Forensic scientists are now using dogs' DNA to help stop dogfighting and abuse.
Video games are not just for kids anymore! Researchers are experimenting with video games that help the elderly with their memory.
What do illegal drugs and baked goods have in common? Well, chemistry!
An unusual partnership is happening on the African plain. The whistling-thorn acacia tree is using ants to defend against elephant attacks.
So what's all this hype about "string theory"? Find out what scientists are saying about it now!
How well do you pay attention to your backyard buzz? Time to take out our magnifying glasses and get personal.
Computer scientists say that cell phone batteries are going to need a major recharge.
Scientists have conducted surveys on just about EVERYTHING (including surveys)!
How did the apple manage to survive what killed off the dinosaurs?
Find out what World Water Monitoring Day is all about and how you can get involved.
Scientists claim they may have found a new way to fight antibiotic-resistant superbugs...with a little help from frogs!
This is one pretty outstanding cow. Besides setting world records, this Wisconsin dairy cow is helping the environment too!
Why does my pet do that? A Moment of Science gathers some of our favorite pet-related podcasts.
Scientific research is awesome when chocolate is involved! Learn what scientists are doing to understand the inner workings of the Theobroma cacao tree.
The circulatory system isn't the only part of the body that keeps a beat. The brain needs rhythm too!
Have you ever been to the Science for Citizens' site? Find out what it's all about!
Scientists have found aerosol particles that would have dominated the Earth's pre-industrial atmosphere.
Brain stimulation and hand preference go, well, hand-in-hand. Researchers are investigating whether they can manipulate if you are a righty or lefty. | <urn:uuid:873efc16-3795-48fa-8f80-7eb631331150> | 2.78125 | 917 | Content Listing | Science & Tech. | 57.662539 |
Spent nuclear fuel pools
Spent nuclear fuel (SNF) refers to fuel after it has fueled a reactor. This fuel looks like new fuel in the sense that it is made of solid pellets contained in fuel rods. The only difference is that SNF contains fission products and actinides, such as plutonium, which are radioactive, meaning it needs to be shielded. Just as with the fuel rods in a shutdown reactor, the SNF produces decay heat because most of the decay radioactivity from the fission products and actinides is deposited in the fuel and converted into thermal energy (aka heat). As a result, the SNF also needs to be cooled, but at a much lower level than fuel in a recently (<12 hours) shutdown reactor as it produces only a fraction of the heat. In summary, the SNF is stored for a certain time to: 1) allow the fuel to cool as its decay heat decreases; and 2) shield the emitted radiation.
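To get a feel for the magnitudes of decay heat, here is a rough back-of-the-envelope sketch using a Way–Wigner-style textbook fit. The 0.066 coefficient is an approximation from standard reactor-physics texts, and the reactor power and operating time are illustrative assumptions, not plant data:

```java
public class DecayHeat {
    // Rough Way-Wigner style fit: fraction of full thermal power still being
    // produced as decay heat, t seconds after shutdown, following opTime
    // seconds of steady operation. Coefficients are approximate.
    static double decayFraction(double t, double opTime) {
        return 0.066 * (Math.pow(t, -0.2) - Math.pow(t + opTime, -0.2));
    }

    public static void main(String[] args) {
        double fullPowerMW = 3000.0;                 // assumed ~3 GW(thermal) reactor
        double opTime = 3.15e7;                      // assumed one year of operation, in seconds
        double[] times = {10, 3600, 86400, 3.15e7};  // 10 s, 1 h, 1 day, 1 year after shutdown
        for (double t : times) {
            double mw = fullPowerMW * decayFraction(t, opTime);
            System.out.printf("t = %.0f s: decay heat ~ %.2f MW%n", t, mw);
        }
    }
}
```

Under these assumptions the decay heat falls from roughly 4% of full power seconds after shutdown to a small fraction of a percent after a year, which is why spent fuel needs far less cooling than a freshly shutdown core but still cannot simply be left in air.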
To accomplish these goals, SNF is stored in water pools and large casks that use air to cool the fuel rods. The pools are often located near the reactor (in the upper floors of the containment structure for a BWR Mark-1 containment). These pools are very large, often 40 feet deep (or larger depending on the design). The pools are made of thick concrete, lined with stainless steel. SNF assemblies are placed in racks at the bottom of these pools, so almost 30 feet of water covers the top of the SNF assemblies. The assemblies are often separated by plates containing boron which ensure a neutron chain reaction cannot start. The likelihood of such an event is further reduced because the useful uranium in the fuel has been depleted when it was in the reactor, so it is no longer capable of sustaining a chain reaction. The water in the pool is sufficient to cool the SNF, and the heat is rejected through a heat exchanger in the pool so the pool should stay at fairly constant average temperature. The water depth also ensures the radiation emitted from the SNF is shielded to a level where people can safely work around the pools.
Under normal operating circumstances, spent fuel can be stored in the pools indefinitely. An active cooling system is in place to remove the residual decay heat, and the water also provides effective radiation shielding. The amount of fuel that can be stored in the pool varies with the capacity of the pool itself, but most spent fuel pools are designed to be able to store many reactor cores at once.
During the refueling operation the reactor is shut down, all the areas between the reactor and the spent fuel are flooded with water (to provide radiation shielding) and fuel elements are moved one by one from the reactor to the spent fuel pool where they are re-racked. Refueling can occur every 12-18 months and during a single refueling shut down, up to one third of the fuel elements of the core are replaced. All the operations are conducted remotely under water through cranes and special equipment to avoid radiation exposure to the workers.
The spent fuel is usually stored in the spent fuel pool for a number of years, depending on the spent fuel capacity and on regulations, and after that period they are usually dry stored in concrete casks located on the site outside the reactor buildings.
If there is a leak in the pool or the heat exchanger fails, the pool temperature will increase. If this happens for long enough, the water may start to boil. If the boiling persists, the water level in the pool may fall below the top of the SNF, exposing the rods. This can be a problem as the air is not capable of removing enough heat from the SNF so the rods will begin to heat up. If the rods get hot enough, the zirconium-based cladding will oxidize with the steam and air, releasing hydrogen which can then ignite. These events would likely cause the clad to fail, releasing radioactive fission products like iodine, cesium, and strontium. It is important to note that each of these occurrences (cooling system failure, pool water boiling, fuel rod overheating in air, zirconium oxidation reaction) would each have to last sufficiently long in order to cause an accident, making the total likelihood of a serious situation very low.
The most significant danger if such an event were to occur is that there is no robust containment structure (like the one housing the reactor,) surrounding the SNF pool. While SNF pools themselves are very robust structures, the roof above each pool is not as strong and may have been damaged, meaning the surface of the pool may be open to the environment. As long as the water covers the fuel, this does not pose a direct threat to the environment, however it does allow for a possible dispersion of these fission products if a fire were to occur. But if the water level stays above the fuel, the threat of a large dispersion event is low. | <urn:uuid:9fc4b192-21c7-47d9-9bce-cb9819e11bed> | 4 | 1,011 | Knowledge Article | Science & Tech. | 48.245209 |
The most common elements of basic PHP syntax are described below.
To create a PHP file, the developer saves a plain-text file with the .php extension. Such a file looks much like an HTML file.
Writing comments is a good habit because they document what you have done, so you will have no difficulty later when making changes or modifications. A well-coded script is full of balanced comments, which make the code easier to maintain. The symbols '//' and '/* */' mark comments: '//' introduces a single-line comment and is used at the beginning of each commented line, while '/* */' surrounds a comment spanning several lines, placed at its top and bottom. Text inside comment markers is not executed when the program runs, nor is it displayed on the screen for end users.
Let’s see the examples:
// This line will not execute when the program runs,
// and it is not displayed on the screen.
print ("Hello Buddy");

/*
These lines will also be ignored.
Use this type of comment symbol
if you need a larger comment.
*/
print ("How are you?");
3.1.2 Code Syntax
Start of Code
Each block of PHP code begins with '<?php' (or '<?', if the server supports it) and ends with '?>'. Every statement ends with a semicolon. For example:
print ( );
Here, print is a function, and the text written inside the parentheses is printed on the display screen. After the closing parenthesis, a semicolon terminates the command.
The same command can also be written without parentheses, as print "...";. The echo() construct works much like print().
To execute PHP code correctly, it is essential to embed it in PHP tags, for example:
<?php
print ("This is Rose India");
?>
<?php print ("This is Rose India"); ?>
Both chunks are correct and equally effective. It is advisable to use the first, multi-line form in bigger and more complex programs.
I have been reading about gravitational lensing. All the models I have seen discuss light going around the outside of some gravitational source, like a black hole or something. Would it act more like a traditional lens if passing through the middle a source that was more donut shaped, instead of around a ball type shape? If so, could somethig like this be done experementally?
- Michael French
First of all, gravitational lenses are not like the reading glasses you buy at WalMart. Ordinary optical lenses focus a point source into a point image. Gravitational lenses focus a point source into a circular line image. Sometimes you see only a small arc of the circle depending on the relative positions of the source and lens. As to whether or not a doughnut-shaped lens would work, I haven't worked it out in detail but I suspect the answer is no. Could one perform an experiment? In principle yes, but there are not too many doughnut-shaped galaxies out there.
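The angular size of that circular image (the Einstein ring) for a point-mass lens is thetaE = sqrt((4GM/c^2) * Dls / (Dl * Ds)). A quick numeric sketch, where the lens mass and the distances are made-up but galaxy-scale values, and distances are treated as simple additive lengths (ignoring cosmological distance subtleties):

```java
public class EinsteinRing {
    // Angular Einstein radius in radians for a point-mass lens:
    // thetaE = sqrt( (4*G*M/c^2) * Dls / (Dl * Ds) )
    // where Dl, Ds, Dls are observer-lens, observer-source, lens-source distances.
    static double einsteinRadius(double massKg, double dl, double ds, double dls) {
        final double G = 6.674e-11;        // m^3 kg^-1 s^-2
        final double C = 2.998e8;          // m/s
        return Math.sqrt(4 * G * massKg / (C * C) * dls / (dl * ds));
    }

    public static void main(String[] args) {
        double GPC = 3.086e25;             // one gigaparsec in metres
        double MSUN = 1.989e30;            // solar mass in kg
        double mass = 1e12 * MSUN;         // assumed galaxy-scale lens
        double dl = 1 * GPC, ds = 2 * GPC, dls = ds - dl;  // naive distance choice
        double arcsec = einsteinRadius(mass, dl, ds, dls) / 4.848e-6;
        System.out.printf("Einstein radius ~ %.1f arcseconds%n", arcsec);
    }
}
```

With these illustrative numbers the ring comes out around two arcseconds across, which is the scale of the arcs actually seen in galaxy-lensing images.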
I presume you have read the Wiki article: http://en.wikipedia.org/wiki/Gravitational_lens
(published on 08/07/2010)
Follow-up on this answer.
We’ve been to the moon. Mars is easy. But landing on Venus? That’s tough.
- By Sam Kean
- Air & Space magazine, September 2010
Answering the question requires lots of data, and the lander will have to gather its information without human supervision. The pictures and other data will arrive at Earth only after the lander has finished its work on the surface.
To ensure its survival on Venus, the SAGE lander will have to endure grueling trials in several test chambers, some new and some old. NASA’s Venus test chambers from the Pioneer days simulated the surface temperature just fine, but didn’t bother duplicating the carbon dioxide atmosphere, on the assumption that it posed no threat to spacecraft. No one is making that mistake this time around; the new chambers are toxic kilns.
So far, Smrekar’s team has tested mechanical parts and materials in chambers up to two feet in diameter, sometimes observing them through small windows. (Nothing has failed yet.) To simulate the spacecraft’s aerodynamic stability in the upper atmosphere of Venus, the engineers will test it in a wind tunnel. For simulating the lower atmosphere, they will place it in a water tank.
One item of vital concern is the communications antenna. The thick clouds around Venus muffle radio waves, and SAGE won’t have much lung power to begin with. Nor will it have orbiting satellites to communicate with, as the Mars rovers do. All the lander’s data will be beamed up to the spacecraft that dropped it off, and from there relayed to Earth. As with the rest of the SAGE hardware, the communication system has to work in terrific heat.
Unfortunately, beyond a certain temperature—about 250 degrees Fahrenheit—commercial silicon electronics crap out, and the temperatures on Venus are hundreds of degrees higher than that. Semiconductors made of silicon blended with carbon, or gallium blended with nitrogen, might be hardy enough.
Or, the engineers could revive a technology from the 1950s, says Sanjay Limaye, a University of Wisconsin planetary scientist. Vacuum tubes turned out to be impractical for computers for a number of reasons, one being that they blazed so hot that they eventually popped in air that was many degrees cooler. But that heat makes them perfect for Venus, with its higher ambient temperature.
“We used to know how to do high-temp electronics when we had vacuum tubes,” says Limaye. And even though some of that knowledge has been lost after decades of using silicon circuits, he thinks tubes could be adapted for Venus radios—provided they’re smaller than the ones used in 1955 Zenith TVs.
ANY VENUS LANDER launched in the near future will live on the surface five hours, at most. Whether that’s long enough “depends on what your perspective is, whether you’re a glass half-full or half-empty person,” says Limaye. Even three hours gives a spacecraft time to collect data, take pictures, and do a little drilling. But to really understand how the Venus system works over time—that requires longer missions and new technologies.
Insulation won’t be enough. Long-duration (weeks-long) landers will require “active” cooling—refrigeration—says planetary scientist Mark Bullock of the Southwest Research Institute in Boulder, Colorado, who heads the team designing SAGE’s camera. Future Venus landers would basically be Frigidaires, devoting 70 percent or more of their power to staying cool. They will more than likely need multi-stage cooling: fridges within fridges. The only way to achieve that, says Bullock, is with nuclear power.
Other scientists have speculated beyond rovers to Venus aircraft. To investigate how a planet that rotates so slowly can generate such powerful winds, some suggest penetrating the acid clouds with a Teflon-coated helium-filled balloon. Scientists like Geoffrey Landis at NASA’s Glenn Research Center in Ohio have proposed sending an autonomous airplane with the rover. Landis points out the advantages of this one-two combination: The airplane would fly in the cooler upper atmosphere, which is friendlier to electronics. If most of the computer brain power were placed on the airplane, it could direct the rover from above.
With these kinds of tools, scientists could really start to unravel the mysteries of Venus: Why the planet doesn’t have plate tectonics, what happened to its water, and the Big Question: Could the same runaway greenhouse effect happen on Earth? It’s still not clear which one of the twin planets is the anomaly, says Smrekar. “We have two end members of [the spectrum of] Earth-like planets, and it will be interesting to see which is more common.”
Pisces and Cetus - Downloadable article
A shipload of galaxies rides high above the watery constellations Pisces the Fish and Cetus the Whale.
March 3, 2009
This downloadable article is from an Astronomy magazine 45-article series called "Celestial Portraits." The collection highlights all 88 constellations in the sky and explains how to observe each constellation's deep-sky targets. The articles feature star charts, stunning pictures, and constellation mythology. We've put together 11 digital packages. Each one contains four Celestial Portraits articles for you to purchase and download.
"Pisces and Cetus" is one of four articles included in Celestial Portraits Package 3.
Just before winter's onslaught of storms, autumn typically brings the clearest skies. On the meridian we find a star-poor region of the sky that is often bypassed for the starry Milky Way fields to the north. But Pisces and Cetus do hold some telescopic treasures that include a resolvable galaxy, a nearby planetary nebula, and one of the sky's most studied objects.
Despite the faintness of its stars, the figure of Pisces the Fish is easily traced. Look for a large V-shape of 4th- and 5th-magnitude stars, 30° on a side. At the west end is the Circlet, a group of five stars that more resembles a pentagon. Pisces is drawn as two fish joined at their tails, with the knot at Alpha (α) Piscium. Cetus the Whale is another autumn constellation associated with water. Finding a whale among these 4th-magnitude stars is difficult, but there are recognizable patterns to guide your way through the constellation. Perhaps the easiest to spot is what could be dubbed a "False Circlet," a scaled-up version of the true one in Pisces, 50° to the west. To the southwest lie four stars that form a large version of the handle of the Teapot in Sagittarius. The constellation's brightest star is 2nd-magnitude Diphda (Beta [β] Ceti), which appears yellowish to the naked eye. To read the complete article, purchase and download Celestial Portraits Package 3.
Deep-sky objects in Pisces and Cetus
NGC 157, NGC 246, NGC 247, 65 Piscium, IC 1613, NGC 428, NGC 474, NGC 488, NGC 520, M74 (NGC 628), NGC 676, Mira, NGC 908, NGC 1052, NGC 1055, M77 (NGC 1068), NGC 1073
Coral Responses to Recurring Disturbances on Saint-Leu Reef
Scopelitis, J., Andrefouet, S., Phinn, S., Chabanet, P., Naim, O., Tourrand, C. and Done, T. 2009. Changes of coral communities over 35 years: Integrating in situ and remote-sensing data on Saint-Leu Reef (la Reunion, Indian Ocean). Estuarine, Coastal and Shelf Science 84: 342-352.
"Despite the multiple disturbance events," in the words of the six scientists, "the coral community distribution and composition in 2006 on Saint-Leu Reef did not display major differences compared to 1973." This pattern of recurrent recovery is truly remarkable, especially in light of the fact that "in the wake of cyclone Firinga, Saint-Leu Reef phase-shifted and became algae-dominated for a period of five years," and even more amazing when one is informed that following an unnamed cyclone of 27 January 1948, no corals survived.
Once again quoting the Australian and French researchers, their findings suggest "a high degree of coral resilience at the site, led by rapid recovery of compact branching corals," which demonstrates the amazing ability of earth's corals, in the words of the old Timex watch commercials, to take a licking and keep on ticking.
But maybe it's not amazing at all. Maybe that's the way all of earth's corals would behave, if they were not so burdened by the host of local assaults upon their watery environment that are produced by the local impacts of mankind's modern activities. Destructive cyclones and high temperature excursions have always been a part of the coral reef environment. The intensive activities of modern human societies have not. And it is these newer activities that likely provide the greatest threat to the health of earth's corals. Mitigate them significantly, and the world's coral reefs would likely successfully cope with the vagaries of nature.
Naim, O., Cuet, P. and Letourneur, Y. 1997. Experimental shift in benthic community structure. In: Final Proceedings of the 8th International Coral Reef Symposium. Panama, pp. 1873-1878.
Why the Number of Feet Matters
A recent feature on population in the Economist discusses economic challenges when a nation’s population falls. In this feature, “How to deal with a falling population”, 28 July 2007, the Economist tries to minimize concerns about world population by saying that we are “hardly near the point of [resource] exhaustion”.
The Economist received a flood of letters insisting that rising world population and resource depletion are indeed serious problems. So many comments were sent in that the Economist published a follow-up piece, Population and its discontents: Lighten the footprint but keep the feet. This too highlighted deep misunderstandings about the relationship between population and humanity’s demand on our biosphere, and in particular, confusing population growth rates and population size.
Quite simply, our ecological demand is the product of 1) the number of people times 2) per-capita demand (consumption) times 3) the efficiency of production. Each of these factors contributes to humanity’s Footprint, and each is important to address.
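This is the identity often abbreviated I = PAT (notation mine; the post's "efficiency of production" corresponds to the technology term):

```latex
I = P \times A \times T
```

where \(I\) is humanity's total ecological impact (the Footprint), \(P\) is population, \(A\) is affluence (per-capita consumption), and \(T\) is the resource intensity of production.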
The Economist believes that we only have to focus on the efficiency side of the equation, but as population increases, the amount of earth’s resources available per person goes down, making it more difficult for people to meet their needs.
Since 1961, global population has more than doubled (to over 200 percent of its former level), while average demand per person has increased by only 30 percent. Even if average global consumption rates stabilize or decrease, we will still go further into ecological overshoot if population increases as all models project.
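Because the factors multiply, even modest per-capita growth compounds with population growth. A toy calculation, using my rounding of the figures above (a doubling of population, a 30 percent rise in per-capita demand, efficiency held fixed):

```python
def total_demand_growth(population_factor, per_capita_factor, intensity_factor=1.0):
    """Combined growth in ecological demand when the three factors
    change independently (Footprint = P x A x T)."""
    return population_factor * per_capita_factor * intensity_factor

growth = total_demand_growth(2.0, 1.3)
print(growth)  # 2.6 -- total demand grows 2.6-fold even with flat efficiency
```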
Ultimately, at a global level, both population and consumption are primary reasons that humanity is exceeding the planet’s ecological limits. To get out of overshoot we must openly address all the factors that contribute to it - efficiency of production, rates of consumption, and population.
For all of humanity to live well and within the means of one planet, we must begin addressing the number of feet as well as the size of the Footprint.
Posted by Blake Alcott on 11/02/2007 at 11:27 AM
I fully agree that it is scientifically senseless to play the right-side factors of the I=PAT equation off against each other. Sure, we can try to measure each one’s contribution, but we should quit playing the game of ‘My Favorite Footprint Culprit’.
I believe we should give up hope of ‘tinkering’ and pussyfooting around on the right side of the equation. Footprint policy (reducing impact) should start on the LEFT side. If each political unit (country, usually) really stayed within its caps - whether fresh water, soil, carbon fuels [or emissions] - then each political unit would decide what combination of P, A, and T is desirable. If policy changes any of the three right-side factors, it changes the other two as well. I.e., at best complicated, at worst simply ineffective.
Posted by Sharon Ede on 11/02/2007 at 03:13 AM
R. Overby raises a critical issue - Earth is not just for one species, which of course it is absolutely not! The Footprint is highlighting that what humanity is doing is not working for humanity, let alone for the millions of other species and ecosystems on which humanity depends. The Footprint is deliberately constructed this way (setting aside all the other issues, the ethos that all life has an intrinsic value, etc.) to communicate to those operating from a hard-nosed, utilitarian view of nature: it's still not working!
Posted by Alan Coles on 10/29/2007 at 12:52 PM
While you identify 3 parts to our ecological demand - “1) the number of people times 2) per-capita demand (consumption) times 3) the efficiency of production” - you only provide data for the first 2. It would obviously be helpful in developing a credible view of things to see the data for all 3 areas.
It would also be interesting to see good data, if it exists, on the % of our production that is wasted, not through inefficiencies of production but through waste itself, or through lack of use, spoilage, etc.
As with most things, reducing waste by say 25% should, realistically, have greater than an equivalent reduction in initial demand. A 25% reduction in “end product” demand is something that I’ve recently started looking at personally as a current goal.
Posted by R. OVERBY on 10/26/2007 at 11:50 PM
I don’t see any mention of anything other than humans. On Earth there are also other lifeforms; some still truly wild, others increasingly not. To cram more people onto Earth, ‘developers’ will gladly get rid of all wildlife; putting a roof on the Grand Canyon, filling it with people, and producing food for them as in ‘Soylent Green.’
All life on Earth - BIOKIND - has footprints, requiring Ecological Hectares to exist.
Polar marine ecosystems respond to rapid climate change by selectively increasing some populations, shifting species' locations and mating grounds, and selectively decreasing others. The warming climate melts sea ice - vital to southern aquatic life - causing a decline in Antarctic life. To give an example, the Adelie penguins that depend on sea ice for mating, food, and survival have lost their previous sea-ice grounds and are running out of new sea ice to live on. In addition, since open-ocean habitat is expanding as the ice melts, some species now have more room to reproduce, live, and eat, which leads to selectively increased productivity rates. However, Antarctic krill - the far smaller life forms that are the key to ocean life, and whose food supply of ice algae and phytoplankton is tied to the sea ice - are declining, and this is disrupting the prior aquatic food chain. To sum up a very large problem in a small sentence, the major increase in sea temperature is creating an incredibly massive chain reaction of aquatic, climatic, and global problems.
Seth Martin (firstname.lastname@example.org)
IS IT possible that our Universe exists inside a single magnetic monopole produced in the first split-second of creation? According to Andrei Linde of Stanford University in California, this idea has a lot going for it. Monopoles are hypothetical particles which carry a north or south magnetic pole.
Linde is one of the founding fathers of the inflation model of cosmology. According to this highly speculative theory, the visible Universe went through a brief period of exponential growth very early in its life, doubling its size hundreds of times in much less than a second.
Paradoxically, one of the reasons why theorists invented the idea of inflation was to get rid of magnetic monopoles, which are predicted by many of the grand unified theories that attempt to bring together three of the four basic forces of physics.
Standard models of inflation solve the "monopole problem" by arguing that the seed ...
The West Mata volcano erupted nearly 4,000 feet underwater in the Pacific Ocean
This is how island chains are born. Scientists spotted the deepest erupting volcano almost 4,000 feet below the Pacific Ocean's surface, and recorded the underwater fury by using the remotely operated submarine Jason.
The West Mata Volcano sits in an ocean area between Fiji, Tonga, and Samoa, and spews out boninite lavas that are among the hottest recorded on Earth in modern times. Water from the volcano has proven as acidic as battery acid or human stomach acid, but that has not stopped shrimp from thriving near the volcanic vents.
This chapter provides general information about writing trigger functions. Trigger functions can be written in most of the available procedural languages, including PL/pgSQL (Chapter 36), PL/Tcl (Chapter 37), PL/Perl (Chapter 38), and PL/Python (Chapter 39). After reading this chapter, you should consult the chapter for your favorite procedural language to find out the language-specific details of writing a trigger in it.
It is also possible to write a trigger function in C, although most people find it easier to use one of the procedural languages. It is not currently possible to write a trigger function in the plain SQL function language.
A trigger is a specification that the database should automatically execute a particular function whenever a certain type of operation is performed. Triggers can be defined to execute either before or after any INSERT, UPDATE, or DELETE operation, either once per modified row, or once per SQL statement. If a trigger event occurs, the trigger's function is called at the appropriate time to handle the event.
The trigger function must be defined before the trigger itself can be created. The trigger function must be declared as a function taking no arguments and returning type trigger. (The trigger function receives its input through a specially-passed TriggerData structure, not in the form of ordinary function arguments.)
Once a suitable trigger function has been created, the trigger is established with CREATE TRIGGER. The same trigger function can be used for multiple triggers.
PostgreSQL offers both per-row triggers and per-statement triggers. With a per-row trigger, the trigger function is invoked once for each row that is affected by the statement that fired the trigger. In contrast, a per-statement trigger is invoked only once when an appropriate statement is executed, regardless of the number of rows affected by that statement. In particular, a statement that affects zero rows will still result in the execution of any applicable per-statement triggers. These two types of triggers are sometimes called row-level triggers and statement-level triggers, respectively.
Triggers are also classified as before triggers and after triggers. Statement-level before triggers naturally fire before the statement starts to do anything, while statement-level after triggers fire at the very end of the statement. Row-level before triggers fire immediately before a particular row is operated on, while row-level after triggers fire at the end of the statement (but before any statement-level after triggers).
Trigger functions invoked by per-statement triggers should always return NULL. Trigger functions invoked by per-row triggers can return a table row (a value of type HeapTuple) to the calling executor, if they choose. A row-level trigger fired before an operation has the following choices:
It can return NULL to skip the operation for the current row. This instructs the executor to not perform the row-level operation that invoked the trigger (the insertion or modification of a particular table row).
For row-level INSERT and UPDATE triggers only, the returned row becomes the row that will be inserted or will replace the row being updated. This allows the trigger function to modify the row being inserted or updated.
A row-level before trigger that does not intend to cause either of these behaviors must be careful to return as its result the same row that was passed in (that is, the NEW row for INSERT and UPDATE triggers, the OLD row for DELETE triggers).
The return value is ignored for row-level triggers fired after an operation, and so they may as well return NULL.
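A minimal sketch of the first choice - a row-level before trigger that returns NULL to suppress some rows (the table and column names here are invented for illustration, not taken from the manual):

```sql
CREATE FUNCTION skip_negative() RETURNS trigger AS $$
BEGIN
    IF NEW.amount < 0 THEN
        RETURN NULL;  -- skip the operation for this row
    END IF;
    RETURN NEW;       -- otherwise let the row through unchanged
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER check_amount BEFORE INSERT ON ledger
    FOR EACH ROW EXECUTE PROCEDURE skip_negative();
```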
If more than one trigger is defined for the same event on the same relation, the triggers will be fired in alphabetical order by trigger name. In the case of before triggers, the possibly-modified row returned by each trigger becomes the input to the next trigger. If any before trigger returns NULL, the operation is abandoned for that row and subsequent triggers are not fired.
Typically, row before triggers are used for checking or modifying the data that will be inserted or updated. For example, a before trigger might be used to insert the current time into a timestamp column, or to check that two elements of the row are consistent. Row after triggers are most sensibly used to propagate the updates to other tables, or make consistency checks against other tables. The reason for this division of labor is that an after trigger can be certain it is seeing the final value of the row, while a before trigger cannot; there might be other before triggers firing after it. If you have no specific reason to make a trigger before or after, the before case is more efficient, since the information about the operation doesn't have to be saved until end of statement.
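For instance, the timestamp case in the previous paragraph might be sketched as follows (assuming a table with an updated_at column; the names are illustrative):

```sql
CREATE FUNCTION stamp_row() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := CURRENT_TIMESTAMP;  -- modify the row before it is stored
    RETURN NEW;                           -- the returned row is what gets written
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_timestamp BEFORE INSERT OR UPDATE ON accounts
    FOR EACH ROW EXECUTE PROCEDURE stamp_row();
```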
If a trigger function executes SQL commands then these commands may fire triggers again. This is known as cascading triggers. There is no direct limitation on the number of cascade levels. It is possible for cascades to cause a recursive invocation of the same trigger; for example, an INSERT trigger might execute a command that inserts an additional row into the same table, causing the INSERT trigger to be fired again. It is the trigger programmer's responsibility to avoid infinite recursion in such scenarios.
When a trigger is being defined, arguments can be specified for it. The purpose of including arguments in the trigger definition is to allow different triggers with similar requirements to call the same function. As an example, there could be a generalized trigger function that takes as its arguments two column names and puts the current user in one and the current time stamp in the other. Properly written, this trigger function would be independent of the specific table it is triggering on. So the same function could be used for INSERT events on any table with suitable columns, to automatically track creation of records in a transaction table for example. It could also be used to track last-update events if defined as an UPDATE trigger.
Each programming language that supports triggers has its own method for making the trigger input data available to the trigger function. This input data includes the type of trigger event (e.g., INSERT or UPDATE) as well as any arguments that were listed in CREATE TRIGGER. For a row-level trigger, the input data also includes the NEW row for INSERT and UPDATE triggers, and/or the OLD row for UPDATE and DELETE triggers. Statement-level triggers do not currently have any way to examine the individual row(s) modified by the statement.
Suppose we have a table child that inherits from table parent.
One might expect that an insert or update on child would fire the insert/update trigger defined on parent.
That is not the case.
If you want such behaviour, you have to define the trigger on both tables.
"It is also possible to write a trigger function in C, although most people find it easier to use one of the procedural languages."
However, the only immediate example is a trigger based on a C function. Suggest that we add a simple example OR move the example under 36.10 to this chapter. Alternatively, adding a page that links to various examples would be pretty effective.
Consider this example, using PL/pgSQL: You want to monitor price changes, so you create a table to record the barcode, the new price, and when it changed:
CREATE TABLE price_change (
    apn CHARACTER(15) NOT NULL,
    effective TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    price NUMERIC,    -- NULL records a deletion
    UNIQUE (apn, effective)
);
Next you write a trigger function to insert records as required:
CREATE OR REPLACE FUNCTION insert_price_change() RETURNS trigger AS '
DECLARE
    changed BOOLEAN;
BEGIN
    IF tg_op = ''DELETE'' THEN
        INSERT INTO price_change(apn, effective, price)
            VALUES (old.barcode, CURRENT_TIMESTAMP, NULL);
        RETURN old;
    END IF;
    IF tg_op = ''INSERT'' THEN
        changed := TRUE;
    ELSE
        -- UPDATE: record only genuine price changes
        changed := (new.price IS NULL) != (old.price IS NULL) OR new.price != old.price;
    END IF;
    IF changed THEN
        INSERT INTO price_change(apn, effective, price)
            VALUES (new.barcode, CURRENT_TIMESTAMP, new.price);
    END IF;
    RETURN new;
END;
' LANGUAGE plpgsql;
Finally you create a trigger on the table or tables you wish to monitor:
CREATE TRIGGER insert_price_change AFTER INSERT OR DELETE OR UPDATE ON stock
FOR EACH ROW EXECUTE PROCEDURE insert_price_change();
Row-level before triggers fire immediately before a particular row is operated on, while row-level after triggers fire at the end of the statement (after all rows have been operated on but before any statement-level after triggers).
Advanced Laboratory and/or Demonstration Apparatus
Apparatus Title: Polarization of reflected light
Abstract: The apparatus consists of a ray box, an acrylic prism, a glass slide, and a polarizing sheet. The apparatus can demonstrate polarization of reflected light at the Brewster angle and variation of the level of polarization with angle of incidence.
Description: Polarization of Reflected Light
A ray box that gives parallel rays of light from one end and a diverging beam from the other end is used in this apparatus. As shown in Fig. 1, an acrylic prism is placed symmetrically in the parallel rays. The angle of incidence of the rays falling on the prism is 60°, very close to the Brewster angle of acrylic (57.5°), and therefore the reflected rays are almost completely polarized. The polarization can be demonstrated by placing the polarizer in front of the ray box and changing its orientation. When the polarizer is held horizontally the reflected light disappears, showing that the reflected light is polarized in the perpendicular direction (Fig. 1).
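Brewster's angle for light incident from air on a medium of refractive index n is θ_B = arctan(n); the 57.5° quoted above corresponds to n ≈ 1.57. A quick check (not part of the original apparatus description):

```python
import math

def brewster_angle_deg(n, n_incident=1.0):
    """Angle of incidence at which the reflected beam is completely polarized."""
    return math.degrees(math.atan(n / n_incident))

print(round(brewster_angle_deg(1.57), 1))  # 57.5 (degrees)
```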
Angular dependence of polarization of reflected light can be demonstrated with the help of the diverging rays. A glass reflector (microscope slide) is placed so that the angle of incidence of the rays falling on the glass is lower than, equal to, and higher than the Brewster angle. When the polarizer is held horizontally the fourth ray of light disappears (Fig.3) because the angle of incidence is equal to Brewster angle. The other rays with angles of incidence less than and greater than Brewster angle are visible because they are not completely polarized.
This is the cheapest, easiest, and quickest way of demonstrating the different aspects of polarization of reflected light.
The predicted effects of global warming are many and various, both for the environment and for human life.
There is some speculation that global warming could, via a shutdown or slowdown of the thermohaline circulation, trigger localised cooling in the North Atlantic and lead to cooling, or lesser warming, in that region.
A northwards branch of the gulf stream, the North Atlantic Drift, is part of the thermohaline circulation (THC), transporting warmth further north to the North Atlantic, where its effect in warming the atmosphere contributes to warming Europe.
For more information about the topic Effects of global warming, read the full article at Wikipedia.org.
Between the river and deep blue Gulf: The past and future of oysters in Florida's Big Bend
Oyster reefs offer diverse ecological and social services for people and natural environments; unfortunately, reefs are also highly sensitive to impairment from natural and human-induced disturbances. Florida's Big Bend coastline (the Gulf of Mexico coast from Crystal River to Apalachee Bay) supports large expanses of oyster reef habitat that have existed for thousands of years in a region that is one of the most pristine coastal zones in the continental US.

Using historical and current aerial imagery between 1982 and 2011, postdoctoral associate Jennifer Seavey, along with WEC faculty Peter Frederick and Bill Pine, found a 66% net loss of oyster reef area. Losses were concentrated on offshore reefs (88%), followed by nearshore (61%) and inshore reefs (50%). This very rapid loss is not typical of the local geological succession pattern. Multiple lines of evidence suggest that the primary mechanism for reef loss is related to strongly decreased freshwater inputs, leading to high salinity. High salinity leads both to high predation rates and to a high incidence and severity of disease, leaving reefs with mostly dead oysters. Without living oysters, the reef substrate becomes unconsolidated and the nucleation site is lost. Our field observations indicate that this collapse of reefs is irreversible, even when salinity conditions improve.

The decreased freshwater inputs are a product of land use and freshwater policy in north-central Florida, leading to decreased discharge of the Suwannee River. Our next step is to attempt a series of restoration experiments that will better elucidate the process of reef collapse and test the premise that permanent stable substrate will allow oysters to repeatedly recolonize reefs. This website gives more information about the project: http://floridarivers.ifas.ufl.edu/oyster.htm.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2001 June 3
Explanation: Last March, telescopic instruments on Earth and in space tracked a tremendous explosion that occurred across the universe. A nearly unprecedented symphony of international observations began abruptly on 2000 March 1 when Earth-orbiting RXTE, Sun-orbiting Ulysses, and asteroid-orbiting NEAR all detected a 10-second burst of high-frequency gamma radiation. Within 48 hours astronomers using the 2.5-meter Nordic Optical Telescope chimed in with the observation of a middle-frequency optical counterpart that was soon confirmed with the 3.5-meter Calar Alto Telescope in Spain. By the next day the explosion was picked up in low-frequency radio waves by the European IRAM 30-meter dish in Spain, and then by the VLA telescopes in the US. The Japanese 8-meter Subaru Telescope interrupted a maiden engineering test to trumpet in infrared observations. Major telescopes across the globe soon began playing along as GRB 000301C came into view, detailing unusual behavior. The Hubble Space Telescope captured the above image and was the first to obtain an accurate distance to the explosion, placing it near redshift 2, most of the way across the visible universe. The Keck II Telescope in Hawaii quickly confirmed and refined the redshift. Even today, no one is sure what type of explosion this was. Unusual features of the light curve are still being studied, and no host galaxy appears near the position of this explosion.
Authors & editors:
Jerry Bonnell (USRA)
NASA Technical Rep.: Jay Norris. Specific rights apply.
A service of: LHEA at NASA/ GSFC
& Michigan Tech. U.
In 1872, the HMS Challenger left Portsmouth on a daring mission, but it didn’t set sail as a military ship. It had been retrofitted, not to project power, but to humbly petition the ocean to give up some of its secrets. Over three and a half years, the Challenger and its crew of over 200 (at the start, that is) circumnavigated the globe, collecting every scrap of information they found along the way. The crew frequently measured the depth of the seafloor and the temperature profile of the water, and brought up sediment samples (sometimes including living organisms). Among other accomplishments, the expedition discovered the submarine mountains of the Mid-Atlantic Ridge, described more than 4,700 new species, and learned that the ocean was stratified by temperature.
There is still much we do not know about the ocean, but quite a lot has changed. Thanks to the Argo project, we’re now up to 3,500 automated buoys that continuously record data from the upper 2 kilometers of Earth’s oceans. Using that incredible data coverage, oceanographers were able to compare Challenger’s temperature measurements to today’s oceans.
For each of 273 Challenger temperature profiles from the Atlantic and Pacific Oceans, researchers interpolated Argo measurements from the same location, depth, and time of year. Modern surface ocean temperatures (averaged over 2004-2010) were higher at 211 of those points. On average, the surface of the Atlantic is about 1°C warmer—0.4°C for the Pacific. The authors write, “As the Challenger's sampling was more intensive in the Atlantic and the warming may be greater in that ocean, we estimate the global difference as the area-weighted mean of the Atlantic and Pacific values, 0.59° C ±0.12.”
As you’d expect, that difference diminishes with depth. At 366 meters (200 fathoms), the area-weighted “global” average today is 0.39° C ±0.18. At 914 meters (500 fathoms), it’s down to 0.12° C ±0.07, and the difference disappears by about 1,500 meters depth.
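The authors' area-weighted averaging can be sketched in a few lines. The basin warming figures come from the article, but the ocean areas below are rough reference values assumed here for illustration, so the result only approximates the quoted 0.59° C:

```python
# Sketch of an area-weighted mean of basin warming values.
# Warming figures are from the article; the basin areas are assumed
# reference values, so this only approximates the paper's 0.59 C.
basin_warming_c = {"Atlantic": 1.0, "Pacific": 0.4}          # deg C
basin_area_km2 = {"Atlantic": 106.5e6, "Pacific": 165.2e6}   # assumed areas

total_area = sum(basin_area_km2.values())
weighted = sum(
    basin_warming_c[b] * basin_area_km2[b] for b in basin_warming_c
) / total_area
print(f"area-weighted mean warming: {weighted:.2f} C")
```

Weighting by area keeps the better-sampled but smaller Atlantic from dominating the global estimate.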
These numbers may underestimate the warming for a number of reasons relating to the Challenger measurements. For example, the crew worked under the assumption that the line holding the thermometer extended downward perfectly perpendicular to the surface. In reality (as they knew), it was likely to trail behind the motion of the ship, which couldn’t be kept completely stationary. That means the thermometer would measure at a depth a bit shallower than intended, yielding a warmer temperature.
El Niño Southern Oscillation (ENSO) variability shouldn't have much impact on the analysis. Challenger's route cut quickly across the equatorial Pacific and avoided the eastern equatorial Pacific altogether, so most of the sampling was done outside the areas affected most. Also, averaging the Argo data over 2004-2010 makes for a minimal El Niño/La Niña bias.
The calculated differences are consistent with warming estimates from other reconstructions and records. Still, the additional data points provide some useful perspective. The researchers note that the sub-surface measurements are useful in calculating the past contribution of thermal expansion to sea level rise.
They conclude, “The Challenger data set was a landmark achievement in many respects. With regard to climate and climate change, Challenger not only described the basic temperature stratification of the oceans, but provided a valuable baseline of nineteenth-century ocean temperature that, along with the modern Argo data set, establishes a lower bound on centennial-scale global ocean warming.”
In this case, he's been playing around with a brand new Garmin eTrex H GPS receiver. Among other things it allows you to store a time sequence of position measurements, a so-called track, which you can then upload on Google maps to see which part of the forest you've been straying around in. This cute high-tech gadget is shown in the photo. On the display (click to enlarge) you can see a schematic map of the sky with the GPS satellites in view, and the strengths of the signals received from these satellites indicated in the bars at the bottom of the screen. Moreover, the display says that the accuracy of the position measurement is 5 meters.
Now, Stefan wanted to figure out the meaning of this accuracy, and if it can be improved by averaging over many repeated measurements. So, he put the GPS receiver on said patio table, and had the device measure the position every second over a duration of a bit less than 3 hours. Then he downloaded the measurement series with several thousand data points, plotted them, and computed the average value.
Remarkably, this simple experiment delivered clear evidence that spacetime is discrete! Shown in the figure below are the longitude and latitude of the data points, transformed to metric UTM coordinates, as blue crosses. The yellow dot is the average value, and the ellipse has half-axes of one standard deviation. Several thousand measurements correspond to just 16 different positions.
This 3-d figure shows the weights of the data points from the above figure:
Of course, the position measurements in the time series are not really statistically independent, so one has to be careful when interpreting the result. If one repeats a position measurement after the short time interval of just one second, one expects a very similar result since the signals used likely come from the same satellites which haven't changed their position much. Over the course of time, however, the satellites whose signal one receives are likely to change. To see this effect, Stefan computed the autocorrelation function of the measurement series, shown in the figure below:
The autocorrelation function, a function of the time delay τ between two measurements, tells you how long it takes till you can consider the measurements to be uncorrelated. The closer to zero the autocorrelation function, the less correlated the measurements. A positive value indicates the measurements are correlated on the same side of the average value, a negative value that they're correlated in opposite directions.
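A minimal sketch of such an autocorrelation estimate, applied here to a synthetic series rather than the actual GPS log:

```python
import numpy as np

# Normalized sample autocorrelation, demonstrated on a toy random walk
# (a stand-in for the GPS position series, which is not reproduced here).
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=2000))   # strongly correlated toy series
x = x - x.mean()                       # work with deviations from the mean

def autocorr(series, lag):
    """Sample autocorrelation at a given lag, normalized to 1 at lag 0."""
    n = len(series)
    return np.dot(series[: n - lag], series[lag:]) / np.dot(series, series)

print(autocorr(x, 0))    # exactly 1 by construction
print(autocorr(x, 10))   # near 1: neighboring samples are correlated
```

For a slowly drifting series like a GPS fix, nearby lags stay close to 1, and the lag at which the estimate first drops toward zero tells you how far apart in time two measurements must be before they can be treated as statistically independent.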
How do we interpret these results?
The origin of the discreteness of the measuring points is likely a result of rounding or some artificially imposed uncertainty. (The precision of commercial devices is usually limited so as to prevent their use for military purposes.) It remains somewhat unclear, though, whether the origin is in the device's algorithm or already in the signal received.
The initial drop of the autocorrelation functions in the figure means that after roughly half an hour, the position measurements are statistically independent. But why the autocorrelation function does not simply fall to zero, and instead indicates complete anticorrelation in the y coordinate (latitude) data for a delay of about 1.5 hours and seems to hint at a periodicity, is not entirely clear to us; more data and a more sophisticated analysis are clearly necessary.
Anyway, finally, to finish this little experiment, Stefan uploads the graphs to Blogger, and then asks his wife to weave a story around it. The biggest mystery in the universe...
Washington, February 24 (ANI): Researchers from Amherst College and The University of Texas at Austin have proposed a new technique to probe Earth's deep interior that relies on a "fifth force of nature".
Making a breakthrough in the field of particle physics, they have established new limits on what scientists call "long-range spin-spin interactions" between atomic particles. These interactions have been proposed by theoretical physicists but have not yet been seen.
Their observation would constitute the discovery of a "fifth force of nature" (in addition to the four known fundamental forces: gravity, weak, strong and electromagnetic) and would suggest the existence of new particles, beyond those presently described by the Standard Model of particle physics.
The new limits were established by considering the interaction between the spins of laboratory fermions (electrons, neutrons and protons) and the spins of the electrons within Earth.
To make this study possible, Professor of Physics Larry Hunter and colleagues at Amherst College and The University of Texas at Austin created the first comprehensive map of electron polarization within Earth induced by the planet's geomagnetic field.
The team combined a model of Earth's interior with a precise map of the planet's geomagnetic field to produce a map of the magnitude and direction of electron spins throughout Earth. Their model was based in part on insights gained from Lin's studies of spin transitions at the high temperatures and pressures of Earth's interior.
Every fundamental particle (every electron, neutron and proton, to be specific), explained Hunter, has the intrinsic atomic property of "spin."
Spin can be thought of as a vector: an arrow that points in a particular direction. Like all matter, Earth and its mantle (a thick geological layer sandwiched between the thin outer crust and the central core) are made of atoms. The atoms are themselves made up of electrons, neutrons and protons that have spin.
Earth's magnetic field causes some of the electrons in the mantle's minerals to become slightly spin-polarized, meaning the directions in which their spins point are no longer completely random, but have some net orientation.
Earlier experiments, including one in Hunter's laboratory, explored whether spins in the laboratory prefer to point in a particular direction.
"We know, for example, that a magnetic dipole has a lower energy when it is oriented parallel to the geomagnetic field and it lines up with this particular direction-that is how a compass works," he explained.
"Our experiments removed this magnetic interaction and looked to see if there might be some other interaction that would orient our experimental spins. One interpretation of this 'other' interaction is that it could be a long-range interaction between the spins in our apparatus, and the electron spins within the Earth, that have been aligned by the geomagnetic field. This is the long-range spin-spin interaction we are looking for," he stated.
So far, no experiment has been able to detect any such interaction. But in Hunter's paper, the researchers describe how they were able to infer that such so-called spin-spin forces, if they exist, must be incredibly weak: as much as a million times weaker than the gravitational attraction between the particles.
At this level, the experiments can constrain "torsion gravity", a proposed theoretical extension of Einstein's Theory of General Relativity. Given the high sensitivity of the technique Hunter and his team used, it may provide a useful path for future experiments that will refine the search for such a fifth force.
If a long-range spin-spin force is found, it not only would revolutionize particle physics but might eventually provide geophysicists with a new tool that would allow them to directly study the spin-polarized electrons within Earth.
"If the long-range spin-spin interactions are discovered in future experiments, geoscientists can eventually use such information to reliably understand the geochemistry and geophysics of the planet's interior," said Lin.
A paper about their work appeared in this week's issue of the prestigious journal Science. (ANI)
λ, k; sometimes shortened to: conductivity. A measure of the ability of a substance to conduct heat, determined by the rate of heat flow normally through an area in the substance divided by the area and by minus the component of the temperature gradient in the direction of flow; measured in watts per metre per kelvin.
A measure of the ability of a material to transfer heat. Given two surfaces on either side of a slab of the material with a temperature difference between them, the thermal conductivity is the heat energy transferred per unit time and per unit surface area, multiplied by the slab's thickness and divided by the temperature difference. It is measured in watts per metre per kelvin.
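As a worked example of this definition (Fourier's law for a flat slab; the material values below are assumed, illustrative numbers):

```python
# Fourier's law for steady conduction through a slab:
#   Q = k * A * dT / d, with k in W/(m*K).
# The conductivity and dimensions here are assumed example values.
k_glass = 1.0        # W/(m*K), typical single-pane window glass (assumed)
area = 2.0           # m^2, pane area
thickness = 0.004    # m, pane thickness
delta_t = 15.0       # K, temperature difference across the pane

q_watts = k_glass * area * delta_t / thickness
print(f"heat flow: {q_watts:.0f} W")   # 7500 W
```

Rearranging the same formula, measuring Q, A, d and the temperature difference lets you solve for k, which is how conductivity is determined in practice.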
By Paul Rincon
Science reporter, BBC News, Houston
Enough water is locked up at Mars' south pole to cover the planet in a liquid layer 11m (36ft) deep.
The Mars Express probe used its radar instrument to map the thickness of Mars' south polar layered deposits.
Analysis of the Marsis radar data shows that the polar deposits consist of almost pure water-ice.
The findings appear in the journal Science and were also presented this week at the Lunar and Planetary Science Conference in Houston, Texas.
It was known by the 1970s that the north and south polar regions of the Red Planet were blanketed by thick accumulations of layered material.
Based upon data from the Mariner and Viking projects, the polar layered deposits were considered to be accumulations of dust and ice.
Deep and wide
Today, polar layered deposits hold most of the known water on Mars, though other areas of the planet appear to have been very wet at times in the past. The south polar layered deposits alone are the size of the US state of Texas.
Understanding where the water went is considered crucial to knowing whether the Red Planet could once have supported life.
The Mars Advanced Radar for Subsurface and Ionospheric Sounding (Marsis) consists of two 20m-long (65ft) hollow fibreglass "dipole" booms to make a primary antenna.
It sends out pulses of radio waves from the antenna to the planet's surface and analyses the time delay and strength of the waves that return.
Analysis of those waves that penetrate the soil and bounce back will give information on transitions between materials with different electrical properties, such as rock and liquid water, beneath the Martian surface.
The instrument gathered data on the south polar region over the course of about 300 orbits of Mars Express.
It was able to reach through the icy layers to the lower boundary, which can be as deep as 3.7km (2.3 miles) below the surface.
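The depth conversion behind such radar sounding can be sketched as follows; the dielectric constant used for water ice is an assumed textbook value, not a figure from the article:

```python
# Hedged sketch: converting a sounding radar's echo delay into depth.
# In a medium of relative permittivity eps_r, the wave travels at
# c / sqrt(eps_r), and the signal covers the depth twice (down and back).
c = 299_792_458.0    # m/s, speed of light in vacuum
eps_r_ice = 3.1      # relative permittivity of water ice (assumed value)

def echo_depth(two_way_delay_s, eps_r):
    wave_speed = c / eps_r ** 0.5
    return wave_speed * two_way_delay_s / 2.0

# Delay that would correspond to ~3.7 km of ice:
delay = 2 * 3700.0 * eps_r_ice ** 0.5 / c
print(f"{echo_depth(delay, eps_r_ice):.0f} m")   # 3700 m
```

Because the conversion depends on the permittivity, the inferred thickness itself constrains the composition: a dustier mixture would have a different eps_r and yield a different depth for the same delay.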
The radar penetrated through the chaotic, lumpy deposits with very little attenuation (reduction in signal strength), suggesting they were almost 90% water-ice; the rest being dust.
The radar cannot tell whether there is carbon dioxide mixed in with the water-ice, but lead author Jeff Plaut told BBC News that the thickness of the ice also pointed to a composition of nearly pure frozen water.
Researchers traced the base of the south polar layered deposits and found a set of buried depressions within 300 km of the pole that may be ancient impact craters.
"We didn't really know where the bottom of the deposit was," Dr Plaut, from the US space agency's (Nasa) Jet Propulsion Laboratory in California, explained.
"We can see now that the crust has not been depressed by the weight of the ice as it would be on Earth.
"The crust and upper mantle of Mars are stiffer than the Earth's, probably because the interior of Mars is so much colder."
One area with an especially bright reflection from the base of the deposits posed a puzzle for the researchers. It resembled what a thin layer of liquid water might look like to radar, but the conditions are so cold that the presence of melted water was considered highly unlikely.
Marsis was developed jointly by the Italian Space Agency (Asi) and Nasa.
The radar was successfully deployed in June 2005, after a delay of more than a year amid concerns that the booms might swing back and damage the spacecraft.
Major Section: PROGRAMMING
(Reverse x) is the result of reversing the order of the elements of the list or string x.

The guard for reverse requires that its argument is a true list or a string.

Reverse is defined in Common Lisp. See any Common Lisp documentation for more information.
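A rough Python analogue of the documented behavior (not part of the ACL2 documentation itself):

```python
# Python sketch of reverse's contract: it accepts a proper list or a
# string and returns it reversed; anything else violates the "guard".
def reverse(x):
    if isinstance(x, (list, str)):
        return x[::-1]
    raise TypeError("reverse expects a true list or a string")

print(reverse([1, 2, 3]))   # [3, 2, 1]
print(reverse("abc"))       # cba
```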
Measurable (Extended) Real-Valued Functions
For a while, we’ll mostly be interested in real-valued functions with Lebesgue measure on the real line, and ultimately in using measure to give us a new and more general version of integration. When we couple this with our slightly weakened definition of a measurable space, this necessitates a slight tweak to our definition of a measurable function.
Given a measurable space $(X,\mathcal{S})$ and a function $f\colon X\to\mathbb{R}$, we define the set $N(f)$ as the set of points $x$ such that $f(x)\neq 0$. We will say that the real-valued function $f$ is measurable if $N(f)\cap f^{-1}(M)$ is a measurable subset of $X$ for every Borel set $M$ of the real line. We have to treat $0$ specially because when we deal with integration, $0$ is special: it's the additive identity of the real numbers.

The entire real line is a Borel set, and $N(f)=N(f)\cap f^{-1}(\mathbb{R})$. Thus we find that $N(f)$ must be a measurable subset of $X$. If $E$ is another measurable subset of $X$, then we observe

$$E\cap f^{-1}(M)=\left(E\cap N(f)\cap f^{-1}(M)\right)\cup\left(\left(E\setminus N(f)\right)\cap f^{-1}(M)\right)$$

The second term on the right is either empty (if $0\notin M$) or is equal to $E\setminus N(f)$ (if $0\in M$). And so it's clear that $E\cap f^{-1}(M)$ is measurable. We say that the function $f$ is "measurable on $E$" if $E\cap f^{-1}(M)$ is measurable for every Borel set $M$, and so we have shown that a measurable function is measurable on every measurable set.

In particular, if $X$ is itself measurable (as it often is), then a real-valued function $f$ is measurable if and only if $f^{-1}(M)$ is measurable for every Borel set $M$. And so in this (common) case, we get back our original definition of a measurable function.

The concept of measurability depends on the $\sigma$-ring $\mathcal{S}$, and we sometimes have more than one $\sigma$-ring floating around. In such a case, we say that a function is measurable with respect to $\mathcal{S}$. In particular, we will often be interested in the case $X=\mathbb{R}$, equipped with either the $\sigma$-algebra $\mathcal{B}$ of Borel sets or the $\sigma$-algebra $\overline{\mathcal{B}}$ of Lebesgue measurable sets. A measurable function $f\colon(\mathbb{R},\mathcal{B})\to\mathbb{R}$ will be called "Borel measurable", while a measurable function $f\colon(\mathbb{R},\overline{\mathcal{B}})\to\mathbb{R}$ will be called "Lebesgue measurable".

On the other hand, we should again emphasize that the definition of measurability does not depend on any particular measure $\mu$.

We will also sometimes want to talk about measurable functions taking values in the extended reals. We take the convention that the one-point sets $\{-\infty\}$ and $\{+\infty\}$ are Borel sets; we add the requirement that such a function $f$ also have $f^{-1}(\{-\infty\})$ and $f^{-1}(\{+\infty\})$ both be measurable to the condition for $f$ to be measurable. However, for this extended concept of Borel sets, we can no longer generate the class of Borel sets by semiclosed intervals.
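As a concrete check of the definition (a standard example, not from the original post), consider the characteristic function of a set:

```latex
% The characteristic function \chi_E of a subset E \subseteq X:
\chi_E(x) = \begin{cases} 1 & x \in E \\ 0 & x \notin E \end{cases}
% Here N(\chi_E) = E, and for any Borel set M,
%   N(\chi_E) \cap \chi_E^{-1}(M) = E  if 1 \in M,  and \emptyset otherwise,
% so \chi_E is a measurable function exactly when E is a measurable set.
```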
I have been unable to find this question answered and you guys have the most user friendly website I have found. My question is:
What materials would prevent two magnets from attracting to each other when placed between them?
Melbourne, Victoria, Australia
Magnetic fields don't penetrate type I superconductors, so a big sheet of one of them would work. Of course it would have to be kept cold, and if the fields are too big it just quits superconducting.

Highly magnetizable material (mu-metal) can also work, if it's arranged in the right geometry. Mu-metal does the opposite of the superconductor. It pulls magnetic field lines in, rather than expelling them. A loop of mu-metal from one pole to the other of one of the magnets would keep that magnet's field from extending out as much as it normally would. Of course, a strip of mu-metal from one magnet toward the other magnet can actually increase the attraction.
(published on 10/22/2007)
Planetary nebulae are hot glowing gas clouds ejected by dying low- to intermediate-mass stars. The nebulae glow because they are heated by energetic ultraviolet photons from the exposed stellar core. According to Kirchhoff's laws, the light produced by a planetary nebula should be an emission spectrum, with spikes of emission at specific wavelengths corresponding to the elements in the gas. A spectrum can be displayed as a picture showing stripes of color at the wavelength of each emission line, or as a graph, plotting the amount of light at each wavelength.
In this exercise, you will learn how to
The central star in a planetary nebula is the exposed core of the original star. The temperature of the central star in a planetary nebula can be quite high, sometimes exceeding 200,000 K. (Eventually, all central stars will cool and become white dwarfs, and the planetary nebulae will expand and fade from view.) Typically, central star temperatures range from about 30,000 K to 100,000 K. At these high temperatures, a star will emit a great deal of radiation energetic enough to ionize the atoms in the nebula; the amount of radiation at each wavelength depends on the temperature, according to the Planck Law, otherwise known as "blackbody radiation."
Of particular interest is the amount of ultraviolet radiation emitted; some ultraviolet photons have so much energy that they can ionize the atoms in the nebula, stripping off one or more electrons. The amount of energy required to produce the next higher level of ionization in an atom is called its ionization potential, usually expressed in electron volts (eV). In general, heavier atoms are more easily ionized for the first time than lighter atoms are. If an atom is already ionized, the remaining electrons are held more tightly, and it becomes even harder to remove the next electron to ionize the atom more highly. The overall degree of ionization of atoms in a planetary nebula depends on the temperature of the central star. For two stars with the same radius, a hotter star emits more photons at all energies than a cooler star does, and a greater proportion of those photons will be emitted at higher energies. Therefore a hotter star is capable of ionizing more atoms to higher ionization states than a cooler star is. So, by examining the spectrum of a planetary nebula to see what ionization states of the various elements are present, you can get an idea of the temperature of the central star.
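The link between central-star temperature and ionizing power can be illustrated with Wien's displacement law and the photon-energy relation E = hc/λ (standard physics, not specific to this exercise): a 30,000 K blackbody's peak photons carry roughly the 13.6 eV needed to ionize hydrogen, while a 100,000 K star's peak photons carry several times that energy.

```python
# Peak blackbody wavelength (Wien's law) and the corresponding photon
# energy, for the two ends of the central-star temperature range above.
h = 6.626e-34     # J*s, Planck constant
c = 2.998e8       # m/s, speed of light
ev = 1.602e-19    # J per electron volt

def wien_peak_nm(temp_k):
    """Peak emission wavelength in nm, from Wien's displacement law."""
    return 2.898e-3 / temp_k * 1e9

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c/lambda, expressed in eV."""
    return h * c / (wavelength_nm * 1e-9) / ev

for t in (30_000, 100_000):
    peak = wien_peak_nm(t)
    print(f"T={t} K: peak {peak:.0f} nm, photon energy {photon_energy_ev(peak):.0f} eV")
```

The hotter star's spectrum peaks deep in the ultraviolet, which is why its nebula shows more highly ionized species.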
All spectra in the database are listed on the Browse page. Clicking on the name of any planetary nebula takes you to the "Spectrum Display" page for that nebula. To expand any region of the graphed spectrum, hold the left mouse button down at one corner of the region you wish to enlarge, drag the mouse to the opposite corner of that region, and then release the mouse button. You can do this repeatedly to keep enlarging. To get back to the full plot, click on the "Zoom Out" radio button under the graph display. The horizontal axis of these graphs is the wavelength in Angstroms, and the vertical axis is the flux (in ergs cm^-2 s^-1 Angstrom^-1).
The Templates page contains a set of spectra labelled with the wavelengths of emission lines seen in planetary nebulae and identifying the ion producing each emission line. The name of the element is given using the standard chemical symbol from the periodic table (e.g., H=hydrogen, N=nitrogen, Ne=neon, etc.). The ionization state of the element is indicated by a Roman numeral suffix in the following way: neutral=I, singly ionized=II, doubly ionized=III (i.e., ionization state = Roman numeral minus 1). For example, O III means doubly ionized oxygen, O+2. Certain electron transitions involve energy levels that are said to be metastable; the resulting emission lines are called forbidden lines, which really only means that they are less likely to occur than emission lines from the ordinary kind of transitions. Conditions in planetary nebulae, as it turns out, are extremely conducive to the production of this kind of emission line, and in fact, most of the emission lines you will see in these spectra are forbidden lines, which are denoted by brackets around the ion designation (i.e., a forbidden line produced by doubly ionized oxygen would be written as [O III]).
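The notation rule above (ionization state = Roman numeral minus 1, brackets for forbidden lines) is simple enough to encode; the helper below is purely illustrative:

```python
# Illustrative helper for the spectroscopic notation described above.
ROMAN = ["I", "II", "III", "IV", "V"]

def ion_label(element, ionization_state, forbidden=False):
    """Build a label like 'O III' or '[O III]' from an ionization state."""
    label = f"{element} {ROMAN[ionization_state]}"
    return f"[{label}]" if forbidden else label

print(ion_label("O", 2, forbidden=True))   # [O III]: doubly ionized oxygen
print(ion_label("H", 0))                   # H I: neutral hydrogen
```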
You may find it helpful to print out this page of instructions.
Listed below are three planetary nebulae whose central stars have very different temperatures. You will examine the spectra of each nebula and, by noticing the presence or absence of certain emission lines, be able to rank them in order of the temperature of the central star.
Typically, the swirl of stormy weather obscures the cells at the heart of severe thunderstorms. This uncommonly clear view of an entire thunderstorm cell, with the top of the growing cumulonimbus tower topping out at 40,000 feet, reveals many interesting features, including “fall streaks” of what may be hail from the underside of the overhanging anvil portion of the cloud. Shortly after this photo was taken on May 22, 2011, near Madison, the storm pelted the Sun Prairie area with large, damaging hail.
The image depicts a cross-section of mouse skeletal muscle magnified 60 times. Fibers in the tissue are fluorescently stained for protein synthesis. The green stain outlines individual fibers. The bright pink/purple fibers are newly growing muscle fibers showing protein synthesis rates of these fiber types for the first time. Images like this allow researchers to [...]
To the human eye, Bidens ferufolia — a species in the sunflower family — has all-yellow petals (left). Bees see the same flower differently: with a bullseye, guiding them to land close to the nectar, held on the nectaries at the center. Humans can distinguish more colors than bees, but bees have a broader range [...]
An emperor penguin makes the 5-foot leap to solid ice at the Penguin Ranch in McMurdo Sound, Antarctica. A few emperor penguins are placed in a fenced-in area where they can be studied. A circular hole cut through the sea ice allows the penguins to dive for their own food, but since they have to [...]
Pannexin1, a tumor-suppressing protein, plays a vital role in binding tissue together. When cells expressing Pannexin1 touch, the protein initiates a response that includes developing tight networks of actin, a structural protein. A looser cell structure can ease the spread of cancer cells. Here, cancerous rat cells that have been altered to make Pannexin1 (black) [...]
Have you ever wondered what the ocean floor looks like? The National Oceanic and Atmospheric Administration (NOAA) did, too. So they compiled gobs of data and maps generated by ships and satellites to create this amazing animation. On your tour of the dynamic ocean floor, you’ll soar over undersea mountain ranges, deep rifts, shifting plates, [...]
The Skogafoss is one of the biggest waterfalls in Iceland, and is especially beautiful in this stunning image under the aurora borealis. The Northern Lights are shining against a great sea of stars, including the constellation Ursa major, or Great Bear, home to the asterism — a recognizable cluster of stars — the Big Dipper. [...]
Eyes aren’t the only human organ that can “see” light. It turns out that skin cells called melanocytes have a light-receptor molecule called rhodopsin that fluoresces as soon as it detects ultra-violet A light (UVA), the deeper penetrating, long-wavelength UV light, as shown here. Until now, researchers have only found rhodopsin in the eye, where [...]
This view, taken by the QuickBird satellite operated by DigitalGlobe, shows the Breidamerkurjökull Glacier in Iceland. The Quickbird is a sub-meter resolution satellite, which means that each pixel of the image represents less than a one square meter area. Breidamerkurjökull is the main glacier of Vatnajökull, the largest ice cap in Europe, which covers 8 [...]
If you saw something like this falling from the sky, you might think that the weather outside was indeed frightful. But this dumbbell shaped object is, in fact, a super-magnified snowflake — yes, a snowflake. Not so frightful after all. This particular snowflake is a capped column, one of many types of snowflakes. The fuzzy [...]
Buckle your tiny seatbelts. Scientists have created a car at the nano scale. Just how small is nano? One nanometer equals one billionth of a meter. To help you wrap your head around that, the average sheet of paper is about 100,000 nanometers thick. Measuring in at 4 nanometers by two nanometers, this car is [...]
You don’t have to be a birder or ornithologist (a.k.a. a bird scientist) to think this graphic is fascinating. This map shows where American Pipits, a small, sparrow-like bird, can be found throughout the year (click on it to watch the animation of their migration). The American Pipit likes the open country. During its breeding [...]
Caenorhabditis elegans is a one millimeter-long soil roundworm, as well as an insightful model organism for research in molecular and developmental biology, because it is simple, easy to grow and can be frozen. C. elegans has two sexes: a self-fertilizing hermaphrodite and a male. Hermaphrodites make both sperm and eggs. This picture of a hermaphrodite [...]
Teeny little video cameras called minirhizotrons snapped these photos of wetland plant roots. The cameras will help scientists anticipate how the plants might respond to climate change. Minirhizotrons give scientists at the Oak Ridge National Laboratory a technological boost by allowing them to study living roots, especially the really small ones, without harming the plants. [...]
One hurdle to treating neurodegenerative diseases is the inability of neurons in the central nervous system to regenerate axons after damage. In glaucoma, retinal ganglion cell (RGC) axons, which make up the optic nerve and serve as cables to pass information from our eyes to our brains, are damaged and thus unable to regenerate. Shown [...]
Educational - Cap
In order for thunderstorms to develop, the air near the surface must be buoyant, just like air bubbles in water. Since air is less dense than water, any air bubbles released in water float to the surface; they are buoyant.
So, think of thunderstorm clouds as being almost like those air bubbles. When the air near the surface becomes buoyant, it bubbles up through the atmosphere. What makes air buoyant? It has to be less dense than the atmosphere surrounding it. Air is less dense than water, so air is buoyant in water. Warm, moist air is less dense than cool air. So, the air near the surface needs to be warm and moist, and the air above needs to be cool and dry. As the air rises, the pressure of the atmosphere surrounding it decreases (there is less atmosphere above pressing down), and this causes our warm moist air to cool. So we have another concept to remember: when air moves into an area of lower pressure, it cools. When air is compressed (into an area of higher pressure), it warms. This is described by the Ideal Gas Law.
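The rising-and-cooling step can be quantified with the dry-adiabatic relation T2 = T1 * (p2/p1)**(R/cp), a standard meteorological result (not stated in the text):

```python
# Dry-adiabatic cooling of a rising parcel: T2 = T1 * (p2/p1)**(R/cp).
# R/cp is about 0.286 for dry air (standard value, assumed here).
R_OVER_CP = 0.286

def temp_after_ascent(t_surface_k, p_surface_hpa, p_aloft_hpa):
    """Parcel temperature after rising adiabatically to lower pressure."""
    return t_surface_k * (p_aloft_hpa / p_surface_hpa) ** R_OVER_CP

# A 300 K surface parcel lifted from 1000 hPa to 700 hPa (about 3 km):
t2 = temp_after_ascent(300.0, 1000.0, 700.0)
print(f"{t2:.1f} K")   # roughly 271 K, i.e. the parcel cooled by about 29 K
```

The same expression run in reverse shows why the descending desert air above the cap warms as it is compressed.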
Okay, back to the air rising from the surface. As the buoyant air rises, it cools. This cooling causes the water vapor (a gas) in the air to turn to clouds (the water vapor becomes small water droplets). These water droplets are what we see as a cloud.
That's part one. We understand that thunderstorms form when air near the surface becomes buoyant, rises, cools, and forms clouds.
The Gulf Coast area has a VERY special weather situation in Spring and Summer that creates the CAP you are speaking of. Often there is moist air flowing from the Gulf of Mexico over the whole region, and this air is relatively cool, especially if it's already cloudy. At the same time, air higher in the atmosphere is flowing from the West (from the desert) as part of a high pressure system. This dry desert air is being forced to flow downward toward the surface by the high pressure. As the warm air descends, it gets compressed. Remember from above, when air is compressed, it warms further.
This forms the "CAP". It's called a cap because it acts just like a bottle cap. The air near the surface is cool and moist, and this dry desert air above is warm. So, the air at the surface ISN'T buoyant. It's cool and dense, while the air above is warm and less dense.
So throughout a typical severe thunderstorm day, we start with cool, cloudy air near the surface and warm air aloft. The sun slowly warms the surface air and erodes the clouds. Our cool, denser surface air is becoming less and less dense as it warms. Eventually, there will be a spot, or several spots over the region, where this moist surface air suddenly becomes warm enough to be buoyant!
These spots of buoyant air explode into thunderstorms!
Since the air near the surface can only poke through these small holes where the air is buoyant, you tend to get big/severe storms. All of the energy of the warming sun is concentrated into a few places! I think that's really cool.
So, next time your area is forecast to have a cap, look outside from time to time. Watch for the places where the cap breaks first. You'll see thunderstorm clouds suddenly form.
ABOUT DIAMONDS: Diamond is a crystalline form of pure carbon that forms under intense heat and pressure. Conditions found in volcanic pipes or when meteors strike the earth can create shock zones of high pressure and temperature. Diamond is the hardest known naturally occurring material, which is why it is popular for cutting and grinding tools, such as diamond-tipped drill bits and saws.
CHEMICAL VAPOR DEPOSITION: Most methods used to create diamonds in a lab or factory mimic the high pressures deep below the Earth's surface that help form natural diamonds. The chemical vapor deposition (CVD) method grows single-crystal synthetic diamonds at low pressure. Essentially, it transforms gas molecules into solid molecules. The process allows scientists to grow crystals very rapidly and with few defects. After it grows, the diamond is treated at high heat by a microwave plasma. This eliminates the initial yellow-brown color, turning the diamonds colorless or light pink.
The Materials Research Society, the American Physical Society, AVS, the Science and Technology Society and the American Geophysical Union contributed to the information contained in the TV portion of this report.
This report has also been produced thanks to a generous grant from the Camille and Henry Dreyfus Foundation, Inc.
MPG or mpg is a three letter abbreviation with multiple meanings, including:
- miles per gallon, see fuel efficiency. Used alone, "mpg" is ambiguous, because US and imperial gallons are significantly different (quoted vehicle fuel efficiency is therefore significantly different for the same vehicle in the US and UK). Miles per gallon is the common measure used for fuel efficiency in the UK even though all road fuel is now sold there in litres. An unambiguous measure is kilometres per litre (km/l, though it may also be found written as kpl).
- MPEG – MPG is a common shortened file extension
Tag Archives: solar thermal
Scientists at Airlight Energy have joined IBM and the Swiss universities, ETH Zurich and Interstate University of Applied Sciences, to develop an affordable photovoltaic system that is capable of concentrating sunlight 2,000 times onto hundreds of one-centimetre-square PV cells – yielding high efficiency at low cost. The system uses a large parabolic dish made from a multitude of mirror facets. The dish is attached to a tracking system that determines the best angle based on the position of the sun. Once aligned, the sun’s rays reflect off the mirror onto triple-junction PV chips. On average, each chip can … Continue Reading
Mark Simpson and Ari Glezer at Georgia Institute of Technology in Atlanta have developed a new way of driving a wind turbine using the vortex effect which produces whirlwinds and tornados. When there is a temperature difference between hot air close to the ground and cooler air just above it, the hot air rises and cool air falls. This causes convection currents to form between these layers, leading to small whirlwinds. The researchers channelled these currents with an array of fixed blades into a vortex, which turns a turbine at the device's centre. As the warm air rises, more air … Continue Reading
Missouri inventors Matt Bellue and Ben Cooper have developed a way of converting an internal combustion engine to be powered by solar energy. The solar energy is used to heat oil which is injected into the cylinder with a little water. The water boils and the steam drives the cylinder. The oil and water are collected and re-used. The process is described in this video in which the inventors are seeking further funding.
Engineering researchers at the University of Arkansas have developed a thermal energy storage system that could dramatically increase annual energy production while significantly decreasing production costs of a concentrated solar power plant. Current storage methods use molten salts, oils or beds of packed rock to store heat inside thermal energy storage tanks. Although these methods do not lose much of the energy, they are either expensive or cause damage to the tanks. The use of packed rock, currently the most efficient and least expensive method, leads to thermal "ratcheting," which is the stress caused to tank walls because of the … Continue Reading
Zenman Energy, a non-profit company located in Virginia, is aiming to drastically reduce the installed cost per watt of solar thermal power by developing a low-cost solar steam engine generator and giving away detailed construction plans after the prototype is complete. They hope that this open source model will lead to further improvements in the design and reduce the cost of the units. The generator works by focusing a large surface area of sunlight onto a smaller area to produce heat. This heat is converted into mechanical energy by boiling water and turning a steam engine. The steam engine powers … Continue Reading
Monarch Power, a private research and development company in Arizona, is developing a lotus-shaped solar collector that is expected to produce up to 3 kilowatts of solar thermal power, as well as steam for heating. The company also expects that the solar concentrator could be used to produce up to 3 kilowatts of solar photovoltaic electricity. The Monarch Lotus has 18 petals which unfold to form a 4-metre diameter flower solar collector that can be opened and closed – aiding in transportation as well as protecting the concentrator from severe weather. Monarch says that, because it is easy … Continue Reading
A Californian company, Thermata, has developed a high-tech mirror which it says can cut the cost of sun-tracking mirrors, or heliostats, in half. The system uses small heliostats, each with a camera to detect the angle of the sun and the heliostat, together with a mesh network of microprocessors to position each mirror with the ideal tilt. Each heliostat is automatically identified, located and calibrated under continuous control of proprietary mirror detection and pointing technology, over a wireless mesh network. The smaller, self-configuring heliostats are more accurate than conventional larger mirrors. The heliostats are fitted onto pods that are powered by … Continue Reading
MIT researchers have found a compound, made from abundant and inexpensive materials, which can store and release solar thermal energy in a chemical form without degrading. The material could be used to make rechargeable thermal batteries which could store the energy for long periods without loss. Thermo-chemical storage of solar energy uses a molecule whose structure changes when exposed to sunlight and remains stable in that form indefinitely. Then, when nudged by a stimulus, such as a catalyst, a small temperature change or a flash of light, it can quickly snap back to its original form, releasing its stored energy … Continue Reading
The world’s first solar power plant to supply utility-scale "baseload" power has been launched near Seville in Spain. The 10 megawatt solar thermal Gemasolar plant has the capacity to store energy for up to 15 hours in molten salt batteries, enabling it to provide enough power for about 25,000 households, 24 hours a day.
A Stanford University research group says that it has found a way to more than double current solar power production efficiency. Most current technology either converts light into electricity at relatively low temperatures or converts heat into electricity at very high temperatures. The Stanford engineers have developed a "photon enhanced thermionic emission" technology which works best at higher temperatures. Photon enhanced thermionic emission would be used with solar concentrators to produce electricity from photovoltaic cells at high temperatures. The technology would be most effective when used in solar farms, where any waste heat which cannot be converted using photovoltaic … Continue Reading
What I am trying to do is discover servers on the same subnet.
I believe that multicast UDP is the standard sort of way for solving this problem. (At least that's what I've done before, and IIRC it's what JINI service discovery is based on). It's really simple (and fast). Your server just broadcasts a small id-string on a pre-determined multicast group on a periodic basis (e.g. once every 250 ms). Clients join the group and listen for a second or so to pick up any id-strings being broadcast. That's it.
Actually with Jini it's more the other way around. The client sends a multicast request for the next lookup service, with its IP address in the source address field. The lookup services (the servers) are listening for these requests and will send back a unicast answer.
The 255.255.255.255 isn't a multicast, but a global broadcast. You can send and receive broadcasts by setting the broadcast property on a DatagramSocket.
Using the global broadcast address usually works but is regarded as BadStyleTM.
The real broadcast address is the highest address of your subnet, so the actual value depends on your netmask. For a class C net with a netmask of 255.255.255.0, the broadcast address is x.x.x.255: the host part is all ones, i.e. the number of addresses in the subnet minus one (256 - 1 = 255).
A net with a netmask of 255.255.255.224 (only the last five bits of the IP address are host addresses) uses a broadcast address of (x.x.x.x & 255.255.255.224) + 31.
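That arithmetic is easy to check with a short script. The sketch below (Python rather than Java, purely for brevity; it is not from the original answers) computes a subnet's broadcast address from a host address and netmask using the standard library:

```python
import ipaddress

def broadcast_address(host_ip: str, netmask: str) -> str:
    """Return the directed broadcast address for the host's subnet."""
    # strict=False lets us pass a host address instead of the network address.
    net = ipaddress.IPv4Network(f"{host_ip}/{netmask}", strict=False)
    return str(net.broadcast_address)

# Class C style netmask: broadcast is x.x.x.255
print(broadcast_address("192.168.1.42", "255.255.255.0"))    # 192.168.1.255

# /27 netmask (255.255.255.224): broadcast is (ip & mask) + 31
print(broadcast_address("192.168.1.42", "255.255.255.224"))  # 192.168.1.63
```

The second call matches the formula above: 42 & 224 = 32, plus 31 gives host 63.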
The differences between broadcasts and multicasts are:
- Broadcasts will not be routed. They are restricted to one subnet. Multicasts can be routed if the router is configured to do so.
- Broadcasts are read by every network card. To receive a multicast packet you will have to join the corresponding multicast group beforehand.
If you are to program a server discovery algorithm in Java, take a look at Jini http://www.jini.org
It gives you all this and more. Don't be intimidated by the 'more'; the core is very simple to handle.
Example #1 (Sound Intensity): The relationship between the number of decibels B and the intensity of a sound I in watts per centimeter squared is given by:
Determine the intensity of a sound I if it registers 125 decibels on a decibel meter.
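The printed formula for example #1 did not survive extraction; a common textbook form relating decibels to intensity in watts per square centimetre is B = 10 log10(I / 10^-16). Under that assumption (the worksheet's exact formula may differ), the 125-decibel case can be worked numerically:

```python
import math

I0 = 1e-16  # assumed reference intensity in watts per square centimetre

def intensity_from_decibels(b):
    """Invert B = 10 * log10(I / I0) to recover the intensity I."""
    return I0 * 10 ** (b / 10)

def decibels_from_intensity(i):
    """Forward formula, used here as a round-trip check."""
    return 10 * math.log10(i / I0)

i_125 = intensity_from_decibels(125)
print(i_125)                        # about 3.16e-4 W/cm^2 (i.e. 10^-3.5)
print(decibels_from_intensity(i_125))  # 125.0
```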
In examples #2 and 3, compare the intensities of the two earthquakes.
Location | Date | Magnitude
San Francisco, CA | 4/18/1906 | 8.3
Formula: R = log10I, where I is the intensity of the shockwave and R is the magnitude of an earthquake.
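With R = log10 I, two magnitudes compare by the ratio I1/I2 = 10^(R1 - R2). The sketch below (added, not part of the worksheet) uses the 1906 San Francisco magnitude from the table; the second magnitude, 7.1, is an illustrative placeholder since the second table row did not survive extraction:

```python
def intensity_ratio(r1, r2):
    """How many times more intense a magnitude-r1 quake is than a magnitude-r2
    quake, using R = log10(I) so that I1 / I2 = 10 ** (r1 - r2)."""
    return 10 ** (r1 - r2)

# 1906 San Francisco (8.3) vs. a hypothetical magnitude-7.1 quake:
print(round(intensity_ratio(8.3, 7.1), 1))  # about 15.8 times more intense
```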
Example #4 (Human Memory Model): The average score A for a group of students who took a test t months after the completion of a course is given by the human memory model:
A = 80 - log10(t + 1)^12
How long after completing the course will the average score fall to A = 72?
Example #5 (Newton's Law of Cooling): You place a tray of water at 60°F in a freezer that is set at 0°F. The water cools according to Newton's Law of Cooling:

kt = ln[(T - S) / (T0 - S)]

where T is the temperature of the water (in °F) after t hours, T0 is its initial temperature, S is the freezer temperature, and k is a constant.
a) The water freezes in 4 hours. What is the constant k? (Hint: Water freezes at 32°F.)
b) You lower the temperature in the freezer to -10°F. At this temperature, how long will it take for the ice cubes to form?
c) The initial temperature of the water is 50°F. The freezer temperature is 0°F. How long will it take for the ice cubes to form?
Precipitation, evaporation, and transpiration are all terms that sound familiar, yet may not mean much to you. They are all part of the water cycle, a complex process that not only gives us water to drink, fish to eat, but also weather patterns that help grow our crops.
Water is an integral part of life on this planet. It is an odorless, tasteless substance that covers more than three-fourths of the Earth's surface. Most of the water on Earth, 97% to be exact, is salt water found in the oceans. We cannot drink salt water or use it for crops because of the salt content. We can remove salt from ocean water, but the process is very expensive.
Only about 3% of Earth's water is fresh. Two percent of the Earth's water (about 66% of all fresh water) is in solid form, found in ice caps and glaciers. Because it is frozen and so far away, the fresh water in ice caps is not available for use by people or plants. That leaves about 1% of all the Earth's water in a form useable to humans and land animals. This fresh water is found in lakes, rivers, streams, ponds, and in the ground. (A small amount of water is found as vapor in the atmosphere.)
Emilie Bigorgne of the Université Paul Verlaine – Metz and colleagues suggest that the increasing production of nanomaterials will in turn increase the release of nanosized by-products to the environment. Whether or not these particles will accumulate or be degraded and whether or not they pose an ecological risk depends on the chemical and physical properties of the individual types and classes of nanoparticles rather than “nano” representing any intrinsic hazard. With this in mind the team hoped to assess the behaviour, uptake and ecotoxicity of titania nanoparticles and by-products in the earthworm Eisenia fetida. Earthworms play a critical role in the activity of fertile soil and as such have been used extensively in ecotoxicity studies for heavy metals and organic pollutants. This ubiquitous species might thus act as a marker for risk on exposure to the common titania nanoparticles.
- Size isn’t everything, or is it? Nano or non-nano (sciencebase.com)
- Safety in the nano sphere (sciencebase.com)
- Scientist Utilizing Nanotechnology to Improve the Food Safety and Nutrition (newswise.com)
Climate Change Impacts, Adaptation and Vulnerability - Present and Future
A second report from the Intergovernmental Panel on Climate Change (IPCC) shares the current scientific understanding of how people and natural ecosystems are affected by climate change, and how they will be affected by future warming (in February, the IPCC released their first summary report on the physical science basis for climate change).
The report, Climate Change 2007: Climate Change Impacts, Adaptation and Vulnerability, was released on Friday April 6, 2007 by the IPCC, a large group of scientists from around the world brought together by the United Nations to assess our understanding of the Earth’s climate, global warming, and the impacts of climate change. Here are a few of the report’s main conclusions.
Many natural environments on all continents and in most of the oceans are already affected by the changing climate. The snow and ice of Earth's cryosphere is melting, causing unstable ground and changes in Arctic and Antarctic ecosystems. Rising water temperatures are the likely cause of changes in marine and freshwater ecosystems. Oceans have become more acidic as they take in more carbon dioxide from the atmosphere. On land, the plants, animals, and other living things of ecosystems have been affected by warming temperatures. Warmer temperatures have affected agriculture by changing planting dates and the impacts of fires and pests. Human health has been affected as warming increases dangerous heat waves and causes changes to the amount of pollen and the spread of diseases.
In the future, according to the report, we can expect that freshwater supplies will increase at high latitudes and in wet tropical areas. However freshwater supplies will become less available in areas where water is already in short supply. Areas that are affected by drought, such as southern and northern Africa, are expected to become more so. Flooding will likely become more common in areas that are already prone to flooding.
Within this century, climate change and other global changes such as change in land use and pollution will collectively be too much for many ecosystems to handle, and they will not be able to adapt. Twenty to thirty percent of plant and animal species will become extinct if global average temperatures increase 1.5 – 2.5°C, which is within the range estimated by computer models for the 21st century.
Coastal communities, especially in low-lying regions, will be increasingly vulnerable to flooding as sea level rises, especially where tropical storm events are common. Plus, the frequencies and intensities of extreme weather events are very likely to increase. Small islands are especially vulnerable.
Responding to climate change
The number of communities that are taking steps to adapt to current warming and prepare for future warming is limited. The report urges that more action needs to be taken as climate continues to warm in the future.
In probability theory, a conditional expectation (also known as conditional expected value or conditional mean) is the expected value of a real random variable with respect to a conditional probability distribution.
The concept of conditional expectation is extremely important in Kolmogorov's measure-theoretic definition of probability theory. In fact, the concept of conditional probability itself is actually defined in terms of conditional expectation.
Let X and Y be discrete random variables. Then the conditional expectation of X given the event Y = y is a function of y over the range of Y:

E(X \mid Y=y) \;=\; \sum_{x \in \mathcal{X}} x \, P(X=x \mid Y=y),

where \mathcal{X} is the range of X.
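As a concrete illustration of the discrete definition (an added example, not part of the original article): let X be the number of heads in two fair coin tosses and let Y indicate whether the first toss is heads.

```latex
% X = number of heads in two fair tosses, Y = 1 if the first toss is heads.
% Given Y = 1, X is 1 plus a fair Bernoulli variable, so
\[
E(X \mid Y = 1) = 1 \cdot \tfrac{1}{2} + 2 \cdot \tfrac{1}{2} = \tfrac{3}{2},
\qquad
E(X \mid Y = 0) = 0 \cdot \tfrac{1}{2} + 1 \cdot \tfrac{1}{2} = \tfrac{1}{2}.
\]
% Averaging over Y recovers the unconditional mean (law of total expectation):
\[
E\bigl(E(X \mid Y)\bigr)
  = \tfrac{1}{2} \cdot \tfrac{3}{2} + \tfrac{1}{2} \cdot \tfrac{1}{2}
  = 1 = E(X).
\]
```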
A problem arises when we attempt to extend this to the case where Y is a continuous random variable. In this case, the probability P(Y=y) = 0, and the Borel–Kolmogorov paradox demonstrates the ambiguity of attempting to define conditional probability along these lines.
However the above expression may be rearranged:

E(X \mid Y=y) \, P(Y=y) \;=\; \sum_{x \in \mathcal{X}} x \, P(X=x,\, Y=y),

and although this is trivial for individual values of y (since both sides are zero), it should hold for any measurable subset B of the domain of Y that:

\sum_{y \in B} E(X \mid Y=y) \, P(Y=y) \;=\; \sum_{y \in B} \sum_{x \in \mathcal{X}} x \, P(X=x,\, Y=y).
In fact, this is a sufficient condition to define both conditional expectation and conditional probability.
Formal definition

Let X be an integrable random variable on a probability space (Ω, M, P), and let Y be a random variable on the same space. A conditional expectation of X given Y is any function E(X \mid Y=\cdot) satisfying, for every measurable subset B of the domain of Y, the property displayed at the end of the Introduction section. Note that E(X \mid Y=\cdot) is simply the name of the conditional expectation function.
A couple of points worth noting about the definition:
- This is not a constructive definition; we are merely given the required property that a conditional expectation must satisfy.
- The required property has the same form as the last expression in the Introduction section.
- Existence of a conditional expectation function is determined by the Radon–Nikodym theorem, a sufficient condition is that the (unconditional) expected value for X exist.
- Uniqueness can be shown to be almost sure: that is, versions of the same conditional expectation will only differ on a set of probability zero.
- The σ-algebra controls the "granularity" of the conditioning. A conditional expectation over a finer-grained σ-algebra will allow us to condition on a wider variety of events.
- To condition freely on values of a random variable Y with state space (U, Σ), it suffices to define the conditional expectation using the pre-image of Σ with respect to Y:

\sigma(Y) \;=\; \{\, Y^{-1}(S) : S \in \Sigma \,\}.

- This suffices to ensure that the conditional expectation is σ(Y)-measurable. Although conditional expectation is defined to condition on events in the underlying probability space Ω, the requirement that it be σ(Y)-measurable allows us to condition on Y = y as in the introduction.
Definition of conditional probability
For any event A ∈ M, define the indicator function:

\mathbf{1}_A(\omega) \;=\; \begin{cases} 1 & \text{if } \omega \in A, \\ 0 & \text{otherwise,} \end{cases}

which is a random variable with respect to the Borel σ-algebra on (0,1). Note that the expectation of this random variable is equal to the probability of A itself:

E(\mathbf{1}_A) \;=\; P(A).

Then the conditional probability given N is a function P(A \mid N) such that P(A \mid N) is the conditional expectation of the indicator function for A:

P(A \mid N) \;=\; E(\mathbf{1}_A \mid N).

In other words, P(A \mid N) is an N-measurable function satisfying

\int_B P(A \mid N) \, dP \;=\; P(A \cap B) \quad \text{for every } B \in N.

A conditional probability is regular if P(\cdot \mid N)(\omega) is also a probability measure for all ω ∈ Ω.
- For the trivial sigma algebra {∅, Ω} the conditional probability is (almost surely) the constant function P(A \mid \{\emptyset, \Omega\}) = P(A).
- For A ∈ N, as outlined above, P(A \mid N) = \mathbf{1}_A almost surely.
Conditioning as factorization
In the definition of conditional expectation that we provided above, the fact that Y is a real random variable is irrelevant: Let U be a measurable space, that is, a set equipped with a σ-algebra of subsets. A U-valued random variable is a function Y: Ω → U such that Y^{-1}(B) ∈ M for any measurable subset B of U.
We consider the measure Q on U given as above: Q(B) = P(Y−1(B)) for every measurable subset B of U. Then Q is a probability measure on the measurable space U defined on its σ-algebra of measurable sets.
Theorem. If X is an integrable random variable on Ω then there is one and, up to equivalence a.e. relative to Q, only one integrable function g on U (which is written E(X \mid Y = \cdot)) such that for any measurable subset B of U:

\int_{Y^{-1}(B)} X \, dP \;=\; \int_B g \, dQ.
There are a number of ways of proving this; one as suggested above, is to note that the expression on the left hand side defines, as a function of the set B, a countably additive signed measure μ on the measurable subsets of U. Moreover, this measure μ is absolutely continuous relative to Q. Indeed Q(B) = 0 means exactly that Y−1(B) has probability 0. The integral of an integrable function on a set of probability 0 is itself 0. This proves absolute continuity. Then the Radon–Nikodym theorem provides the function g, equal to the density of μ with respect to Q.
The defining condition of conditional expectation then is the equation

\int_{Y^{-1}(B)} X \, dP \;=\; \int_B E(X \mid Y=y) \, dQ(y),

and it holds that

E(X \mid Y) \;=\; E(X \mid Y=\cdot) \circ Y \;=\; g \circ Y.

We can further interpret this equality by considering the abstract change of variables formula to transport the integral on the right hand side to an integral over Ω:

\int_{Y^{-1}(B)} X \, dP \;=\; \int_{Y^{-1}(B)} (g \circ Y) \, dP.
This equation can be interpreted to say that the following diagram is commutative in the average.
E(X|Y)= goY Ω ───────────────────────────> R Y g=E(X|Y= ·) Ω ──────────> R ───────────> R ω ──────────> Y(ω) ───────────> g(Y(ω)) = E(X|Y=Y(ω)) y ───────────> g( y ) = E(X|Y= y )
The equation means that the integrals of X and the composition over sets of the form Y−1(B), for B a measurable subset of U, are identical.
Conditioning relative to a subalgebra
There is another viewpoint for conditioning involving σ-subalgebras N of the σ-algebra M. This version is a trivial specialization of the preceding: we simply take U to be the space Ω with the σ-algebra N and Y the identity map. We state the result:
Theorem. If X is an integrable real random variable on Ω then there is one and, up to equivalence a.e. relative to P, only one integrable function g such that for any set B belonging to the subalgebra N

\int_B g \, dP \;=\; \int_B X \, dP,
where g is measurable with respect to N (a stricter condition than the measurability with respect to M required of X). This form of conditional expectation is usually written: E(X | N). This version is preferred by probabilists. One reason is that on the Hilbert space of square-integrable real random variables (in other words, real random variables with finite second moment) the mapping X → E(X | N) is self-adjoint

E\big(X \, E(Y \mid N)\big) \;=\; E\big(E(X \mid N) \, Y\big)

and a projection (i.e. idempotent)

E\big(E(X \mid N) \mid N\big) \;=\; E(X \mid N).
Basic properties
Let (Ω, M, P) be a probability space, and let N be a σ-subalgebra of M.
- Conditioning with respect to N is linear on the space of integrable real random variables.
- More generally, E(Y X \mid N) = Y \, E(X \mid N) for every integrable N–measurable random variable Y on Ω.
- \int_B E(X \mid N) \, dP = \int_B X \, dP for all B ∈ N and every integrable random variable X on Ω.
- Conditioning is a contractive projection L^s(\Omega, M, P) \to L^s(\Omega, N, P) for any s ≥ 1.
See also
- Law of total probability
- Law of total expectation
- Law of total variance
- Law of total cumulance (generalizes the other three)
- Conditioning (probability)
- Joint probability distribution
- Disintegration theorem
- Loève (1978), p. 7
- Kolmogorov, Andrey (1933). Grundbegriffe der Wahrscheinlichkeitsrechnung (in German). Berlin: Julius Springer.[page needed]
- Loève, Michel (1978). "Chapter 27. Concept of Conditioning". Probability Theory vol. II (4th ed.). Springer. ISBN 0-387-90262-7.[page needed]
- William Feller, An Introduction to Probability Theory and its Applications, vol 1, 1950[page needed]
- Paul A. Meyer, Probability and Potentials, Blaisdell Publishing Co., 1966[page needed]
- Grimmett, Geoffrey; Stirzaker, David (2001). Probability and Random Processes (3rd ed.). Oxford University Press. ISBN 0-19-857222-0, pages 67-69.
I want a brief intro to lexical scope
I understand them through examples :)
First, Lexical Scope (also called Static Scope), in C-like syntax:
Every inner level can access its outer levels.
There is another way, called Dynamic Scope, used by the first implementation of Lisp, again in C-like syntax:
will print 5
will print 10
The first one is called static because it can be deduced at compile time; the second is called dynamic because the outer scope is dynamic and depends on the call chain of the functions.

I find static scoping easier on the eye. Most languages eventually went this way, even Lisp (which can do both, right?). Dynamic scoping is like passing references to all variables to the called function.
As an example of why the compiler cannot deduce the outer dynamic scope of a function, consider our last example. If we write something like this:
The call chain depends on a run time condition. If it is true, then the call chain looks like:
If the condition is false:
The outer scope of fun in both cases is the caller plus the caller of the caller and so on.
Just to mention that the C language allows neither nested functions nor dynamic scoping.
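To make the contrast concrete in runnable form (an added sketch, not from the original answers; Python is used instead of C, since standard C cannot express dynamic scoping): the first half shows ordinary lexical lookup, and the second simulates dynamic scoping with an explicit stack of call frames searched from the most recent caller outward.

```python
x = 5  # outer binding

def fun():
    # Lexical scoping: x resolves to the module-level binding, fixed at
    # the point where fun is written, no matter who calls it.
    return x

def caller():
    x = 10  # local to caller; invisible to fun under lexical scoping
    return fun()

print(caller())  # 5

# Simulated dynamic scoping: variables live on a stack of call frames.
frames = [{"x": 5}]

def lookup(name):
    # Search from the most recent caller's frame outward.
    for frame in reversed(frames):
        if name in frame:
            return frame[name]
    raise NameError(name)

def dyn_fun():
    return lookup("x")

def dyn_caller():
    frames.append({"x": 10})  # dyn_caller's local x
    try:
        return dyn_fun()      # sees the caller's binding
    finally:
        frames.pop()

print(dyn_fun())     # 5  (only the outer frame is on the stack)
print(dyn_caller())  # 10 (the caller's x shadows the outer one)
```

The same function, called through a different chain, returns a different value under the simulated dynamic rules, which is exactly why the C-like dynamic-scope examples above print different numbers depending on the caller.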
Scope defines the area, where functions, variables and such are available. The availability of a variable for example is defined within its the context, let's say the function, file, or object, they are defined in. We usually call these local variables.
The lexical part means that you can derive the scope from reading the source code.
Lexical scope is also known as static scope.
Dynamic scope defines global variables that can be called or referenced from anywhere after being defined. Sometimes they are called global variables, even though global variables in most programming languages are of lexical scope. This means it can be derived from reading the code that the variable is available in this context. Maybe one has to follow a uses or includes clause to find the instantiation or definition, but the code/compiler knows about the variable in this place.
In dynamic scoping, by contrast, you search in the local function first, then you search in the function that called the local function, then you search in the function that called that function, and so on, up the call stack. "Dynamic" refers to change, in that the call stack can be different every time a given function is called, and so the function might hit different variables depending on where it is called from. (see here)
To see an interesting example for dynamic scope see here.
Some examples in Delphi/Object Pascal
Delphi has lexical scope.
The closest Delphi gets to dynamic scope is the RegisterClass()/GetClass() function pair. For its use see here.
Let's say that the time at which RegisterClass([TmyClass]) is called to register a certain class cannot be predicted by reading the code (it gets called in a button click method called by the user); code calling GetClass('TmyClass') may or may not get a result. The call to RegisterClass() does not have to be in the lexical scope of the unit using GetClass().
Another possibility for dynamic scope are anonymous methods (closures) in Delphi 2009, as they know the variables of their calling function. It does not follow the calling path from there recursively and therefore is not fully dynamic.
Lexical (AKA static) scoping refers to determining a variable's scope based solely on its position within the textual corpus of code. A variable always refers to its top-level environment. It's good to understand it in relation to dynamic scope.
Lets try the shortest possible definition:
Lexical Scoping (aka Closure) defines how variable names are resolved in nested functions: inner functions contain the scope of parent functions even if the parent function has returned.
The Spacecraft Bus
The Mars Express spacecraft has been designed to take a payload of seven state-of-the-art scientific instruments and one lander to the red planet and allow them to record data for at least one Martian year, or 687 Earth days. The spacecraft also carries a data relay system for communicating with Earth.
Exploded Diagram of Mars Express
The mission is a test case for new working methods to speed up spacecraft production and minimise mission costs. These new methods have had two major impacts on spacecraft design. Weight was kept to an absolute minimum: 116 kg was allowed for the seven instruments and 60 kg for the lander. And off-the-shelf technology, or technology developed for the Rosetta mission to a comet, was used wherever possible.
The instruments sit inside the spacecraft bus which is a honeycomb aluminium box just 1.5 m long by 1.8 m wide by 1.4 m high. The lander, Beagle 2, was attached to the outside of the bus. Payload, lander, spacecraft and on-board fuel weighed 1223 kg at launch.
Last Update: 17 May 2010
London, June 18 (ANI): A theory put forward by researchers working at two Spanish universities has challenged the popular adage that time speeds up as we age.
The radical theory by the academics suggests that time itself could be slowing down - and may eventually grind to a halt altogether in billions of years' time.
The latest mind-bending findings propose that we have all been fooled into thinking the universe is expanding.
According to scientists, the gradual loss of time is not noticeable to the human eye.
And they say, we'll all be long gone by the time time really does end.
"Everything will be frozen, like a snapshot of one instant, forever. Our planet will be long gone by then," the Daily Mail quoted Professor Senovilla as telling New Scientist magazine.
Scientists have previously measured the light from distant exploding stars to show that the universe is expanding at a rapid rate.
The accepted theory is based on the idea that a kind of anti-gravitational force - known as 'dark energy'- must be driving galaxies apart.
However, the scientists working on the latest theory say that we're looking at things backwards.
And Senovilla proposes that the current assumption has got it all wrong - with the appearance of acceleration instead caused by time gradually slowing. (ANI) | <urn:uuid:68b84031-076c-469c-9481-24ade622af08> | 2.90625 | 263 | Truncated | Science & Tech. | 43.342694 |
Let's say two comets crashed into each other. If it was 0.1 AU away from the Earth, would the collision cause mass destruction here, or not affect us at all?
It depends. There are collisions amongst asteroids that have been caught on film that had no effect on us whatsoever. In your scenario, they would have to have a resultant vector towards us in order to cause any problems. Then there is the question of how many resultant particles are big enough to cause any problems (that is, big enough to get through our atmosphere and cause damage, see NOTE 1).
Keep in mind that space is big! The cosmic billiards you are proposing are about as likely as a K-T type event. Not at all likely. And our species is actually getting to the point where we may be able to do something about it. I highly suggest you read "Death From the Skies" by Dr. Phil Plait (an astronomer that has researched all this sort of stuff). And keep in mind that even 0.1 AU is still 14.9 MILLION kilometers!
NOTE 1: Many factors will play into even this. Such as the composition of the particles, their size, their trajectories, etc. For instance, the Tunguska Incident was caused by a piece of debris that is estimated in the tens of meters size. However, its composition yielded an air-burst. Whereas, the Arizona Crater from 50,000 years ago was caused by a mostly metallic impactor, which was also only 50 meters across. The main difference here is that because it was more metallic, it made it to the surface mostly intact. Since comets are mostly made of ice and loose rock, the threshold for larger size resulting in air-bursts would probably be higher. Again, read Dr. Plait's book. | <urn:uuid:2a1d0c26-7431-4c7b-9166-223483cbccc2> | 3.296875 | 373 | Q&A Forum | Science & Tech. | 67.333015 |
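To put that distance in perspective, the unit conversion behind the "14.9 MILLION kilometers" figure is straightforward (a quick sketch; the helper function is our own):

```python
AU_IN_KM = 149_597_870.7  # one astronomical unit in kilometers (IAU 2012 value)

def au_to_km(au: float) -> float:
    """Convert a distance in astronomical units to kilometers."""
    return au * AU_IN_KM

print(f"{au_to_km(0.1):,.0f} km")  # about 14,959,787 km
```

That is roughly 39 times the Earth-Moon distance, so even "close" collisions in astronomical terms are extremely far away.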
Yesterday, the Space Shuttle Atlantis docked with the Hubble Space Telescope, and now the removal and replacement of WFPC2 has commenced.
As you probably know, I’m going to miss that camera. It’s been unveiling the secrets of the Universe for the last 16 years, and in a way that no other camera ever has before. So, you can check out parts one, two, and three in my series of ways that this camera has changed the Universe, and then look below for today’s edition of saying goodbye to Hubble’s grand old camera, WFPC2.
In theory, you’re supposed to get arcs of the lensed images that are magnified and either stretched or present in multiple images. In practice, this is very difficult to do, because of how faint these distant objects are and how susceptible they are to atmospheric distortion. Here’s what “gravitational lensing” looked like before the Hubble Space Telescope.
Not so impressive, is it? What’s astounding is that those four images are four separate images of the same, distant galaxy! It shows up as multiple images because of the fact that the light is bent into four separate paths by the intervening lens. Well, this object, known as an Einstein Cross, was imaged by Hubble before WFPC2 was installed. The results are hugely disappointing, and shown below.
So, what? Why would I show you this disappointment? Because I want you to appreciate what WFPC2 has done. Take a look at this image from 1996, taken of Cluster 0024+1654.
When you look at a cluster, sometimes you get lucky, and there are galaxies (or even other clusters) directly behind it. These background galaxies can show up as lensed images. You see those blue arcs that look like they trace out part of a circle? Those are the same few galaxies, stretched and shown multiple times. Because of the high resolution of Hubble with WFPC2, they were able to pull out which images were of the same galaxy, and reconstruct features covering less than one square arc-second, or 1/12,960,000 of a square degree!
(And click to enlarge.) Amazing! Simply amazing. What can you learn from this? Well, other than all sorts of things about the lensed galaxies, you can learn about dark matter! You see, gravitational lensing only cares about mass, and so we can figure out where — in a cluster like this — the mass is distributed. The results are breathtaking.
What this shows you is that yes, there are spikes where the individual galaxies are. But the cluster is dominated by this giant spherically-distributed mass that’s present everywhere, both where there are galaxies and where there aren’t. And that has got to be dark matter.
And just so you don’t think this is an isolated incident, here are a couple of other clusters imaged with WFPC2, for your lensing perusal.
So we’ve seen how WFPC2 has taught us about the large-scale Universe, about planets in our Solar System, about individual galaxies, and — as you’ve seen today — about clusters and dark matter. Is there anything missing? Is there anything left? And perhaps more importantly, is there anything this camera couldn’t do? Check back tomorrow, for part 5! | <urn:uuid:f86a6b8d-1eba-4156-9891-fb97dda8ace7> | 2.78125 | 713 | Personal Blog | Science & Tech. | 57.547051 |
Hash and Hash Iterator Object Language Elements

Applies to: Hash object

Syntax:
rc = object.EQUALS(HASH: 'object', RESULT: variable name);
rc
specifies whether the method succeeded or failed. A return code of zero indicates success; a nonzero value indicates failure. If you do not supply a return code variable for the method call and the method fails, then an appropriate error message is written to the log.

object
specifies the name of a hash object.

'object'
specifies the name of the second hash object that is compared to the first hash object.

variable name
specifies the name of a numeric variable to hold the result. If the hash objects are equal, the result variable is 1. Otherwise, the result variable is zero.
The following example compares H1 to H2 hash objects:
length eq k 8;
declare hash h1();
h1.defineKey('k');
h1.defineDone();
declare hash h2();
h2.defineKey('k');
h2.defineDone();
rc = h1.equals(hash: 'h2', result: eq);
if eq then put 'hash objects equal';
else put 'hash objects not equal';
The two hash objects are defined as equal when all of the following conditions occur:
Both hash objects are the same size--that is, the HASHEXP sizes are equal.
Both hash objects have the same number of items--that is, H1.NUM_ITEMS = H2.NUM_ITEMS.
Both hash objects have the same key and data structure.
In an unordered iteration over H1 and H2 hash objects, each successive record from H1 has the same key and data fields as the corresponding record in H2--that is, each record is in the same position in each hash object and each such record is identical to the corresponding record in the other hash object.
In the following example, the first call to EQUALS reports that the hash objects are equal (the result variable is nonzero) and the second call reports that they are not (the result variable is zero).
data x;
length k eq 8;
declare hash h1();
h1.defineKey('k');
h1.defineDone();
declare hash h2();
h2.defineKey('k');
h2.defineDone();
k = 99;
h1.add();
h2.add();
rc = h1.equals(hash: 'h2', result: eq);
put eq=;
k = 100;
h2.replace();
rc = h1.equals(hash: 'h2', result: eq);
put eq=;
run;
1,700 billion metric tonnes of carbon dioxide. That's twice the current record amount of atmospheric CO2. 1,700 billion metric tonnes is the estimated amount of CO2 held in the northern hemisphere's permafrost that is now at risk of being released.
A report of the U.N. Environment Programme released at the Doha climate summit warns that data from existing monitoring networks, "indicates that large-scale thawing of permafrost may have already started."
With Arctic temperatures warming twice as fast as the global average, scientists estimate thawing permafrost could release large amounts of carbon into the atmosphere through the end of the century, with significant climate impacts.

Thawing permafrost could emit 43 billion to 135 billion metric tons of carbon dioxide equivalent by 2100, and 246 billion to 415 billion metric tons of CO2 by 2200, the U.N. report says.

"Uncertainties are large, but emissions from thawing permafrost could start within the next few decades and continue for several centuries, influencing both short-term climate (before 2100) and long-term climate (after 2100)," it continues.

Despite that risk, current climate models do not include the risk of emissions from thawing permafrost, the UNEP analysis warned.

As a consequence, the projections of future climate change made in the IPCC's next major report, due next year, "are likely to be biased on the low side," the new report says.
Circle Sector, Segment, Chord and Arc Calculator
Lines AO and OB are called radii.
Lines AC, BC and AB are called chords.
The angle ACB is an inscribed angle.
The angle AOB is a central angle.
The curved blue line AB is called an arc.
Line OE is the apothem and is the height of triangle AOB.
Line ED is the segment height, also called the sagitta (a rarely-used term).

The length of an arc equals (Central Angle / 180°) π r.

The yellow area that resembles a "pizza slice" is a sector. The area of a sector equals (Angle AOB / 360°) π r².

The green area (Circle Two) is a segment. The area of a segment equals the area of sector AOB minus the area of triangle AOB.
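These formulas translate directly into code; here is a quick sketch (helper names are our own; angles are in degrees, and the segment formula assumes a central angle of at most 180°):

```python
import math

def arc_length(radius, central_angle_deg):
    """Arc length = (angle / 180) * pi * r."""
    return (central_angle_deg / 180.0) * math.pi * radius

def sector_area(radius, central_angle_deg):
    """Sector ("pizza slice") area = (angle / 360) * pi * r^2."""
    return (central_angle_deg / 360.0) * math.pi * radius ** 2

def segment_area(radius, central_angle_deg):
    """Segment area = sector area minus the area of triangle AOB."""
    theta = math.radians(central_angle_deg)
    triangle = 0.5 * radius ** 2 * math.sin(theta)
    return sector_area(radius, central_angle_deg) - triangle

# A 90-degree slice of a unit circle is a quarter circle:
print(sector_area(1, 90))   # pi/4, about 0.7854
print(arc_length(1, 90))    # pi/2, about 1.5708
print(segment_area(1, 90))  # pi/4 - 1/2, about 0.2854
```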
Copyright © 1999 - 1728 Software Systems
PLATE MOVEMENTS AND CLIMATE CHANGE
Karen L. Bice
Department of Geosciences
Pennsylvania State University
University Park, PA 16802
Level: Grades 7 and above
Estimated Time Required: 50 minutes
Anticipated Learning Outcomes
Climate is simply weather "averaged" over a time period of one year or more. In general terms, the climate in most of the United States and Canada is "temperate". Moving to the south, closer to the equator, the climate is "subtropical" and then "tropical". At the Earth's poles the climate is termed "polar". Each of these climate zones is characterized by a distinctive temperature range, rainfall amount, and type of vegetation.
In a very simplified sense, climate zones are oriented roughly parallel to lines of latitude about the Earth. However, according to the theory of plate tectonics, the continents "ride" on dynamic plates which make up the Earth's surface. Although the resulting movement of the continents is very slow, over millions of years it is enough to get a continent from one place to another, and that movement may take the landmass through several latitudes and climate zones.
Three maps are provided showing: 1) the present-day position of the continents around the Atlantic Ocean, 2) that same area 55 My ago, and 3) 180 My ago. Maps showing the distribution of continents and oceans in the past are known to geologists as paleogeographic reconstructions. The short line segments shown in Figures 1 and 2 are portions of the Atlantic sea floor which formed at the Mid-Atlantic Ridge 55 My ago. In Figure 2, the continents have been rotated or moved to their positions at 55 My ago. The line segments representing the 55 million year-old sea floor come together and indicate the position of the mid-ocean ridge at that time (for more on the concept of sea floor spreading, see the previous exercise in this book titled "The Distance Between Us and Them"). Figure 3 shows the configuration of the super continent Pangea (Pan-GEE-uh) just as it was beginning to break apart.
In small groups or as a class, consider one or more of the following climate concepts:
TROPICAL REGIONS - discuss with students the location of and conditions (rainfall, temperatures, vegetation) in the tropics. Have them locate the tropics of Cancer and Capricorn (23.5 degrees north and south, respectively) on each of the three maps. How might the climate in Georgia, for example, have changed from 180 My to 55 My ago? From 55 My ago to the present? Because plants may be preserved as fossils in sediments laid down where the plants grew, why might it be reasonable to find tropical plant fossils in Mesozoic-aged rocks exposed in the southwestern United States today?
DESERT REGIONS - perform the same thought exercise with deserts. Because of the Earth's general atmospheric circulation pattern, the most extensive deserts form along bands around 30 degrees north and south latitude (the "horse latitudes"). Is there reason to think that deserts may have covered more of southern Africa 180 million years ago than they do today?
COASTAL MOISTURE SOURCES - the oceans are the major source of moisture for the continents. Water evaporates from the sea surface to form clouds, many of which move over continents, rise, cool, and drop rain on the land. In general, the farther clouds move inland, the more moisture they lose in the form of rain. For this reason, coastal areas commonly receive more rainfall than the interior of large continents. Compare the size of the super continent that existed 180 My ago to the sizes of the continents that formed from the breakup of Pangea. Was Maryland, for example, a coastal area about 180 My ago? How might the size of the continent of Pangea have affected the climate on land that is now part of eastern North America?
INDIA - 180 million years ago, India was connected to Africa, Antarctica and Australia. Today the collision of the Indian plate with Asia is causing the uplift of the Himalayan mountain range. What type of climate might India have experienced 180 My ago? Discuss how the climate of India may have changed dramatically as the plate moved equator-ward during the last 180 million years.
Many world atlases contain maps indicating annual rainfall, temperature, desert and rainforest distribution. Junior and senior high school geography texts may also provide information concerning present-day climate zones. Periodicals such as National Geographic often publish excellent maps and photos of polar, desert, and tropical climate regions.
Figure 1. PRESENT DAY - positions of the continents bordering the Atlantic Ocean. The line segments shown in the oceans indicate the extent of strips of sea floor rock with an age of 55 million years.
Figure 2. 55 MILLION YEARS AGO - positions of the continents during the early Tertiary period. Note that with the plates rotated to their positions 55 My ago, the strips of sea floor rock which formed 55 My ago come together and meet at the mid-ocean ridge where they formed.
Figure 3. 180 MILLION YEARS AGO - configuration of the super-continent Pangea which had just begun to split apart.
|Return to Activity-Age Table| | <urn:uuid:31dbc2c9-4cbc-46a2-b7c5-b3bb8e8577ed> | 3.640625 | 1,102 | Tutorial | Science & Tech. | 43.547053 |
Links Between Hydrothermal Environments, Pyrophosphate, Na+, and Early Evolution
The discovery that photosynthetic bacterial membrane-bound inorganic pyrophosphatase (PPase) catalyzed light-induced phosphorylation of orthophosphate (Pi) to pyrophosphate (PPi) and the capability of PPi to drive energy requiring dark reactions supported PPi as a possible early alternative to ATP. Like the proton-pumping ATPase, the corresponding membrane-bound PPase also is a H+-pump, and like the Na+-pumping ATPase, it can be a Na+-pump, both in archaeal and bacterial membranes. We suggest that PPi and Na+ transport preceded ATP and H+ transport in association with geochemistry of the Earth at the time of the origin and early evolution of life. Life may have started in connection with early plate tectonic processes coupled to alkaline hydrothermal activity. A hydrothermal environment in which Na+ is abundant exists in sediment-starved subduction zones, like the Mariana forearc in the W Pacific Ocean. It is considered to mimic the Archean Earth. The forearc pore fluids have a pH up to 12.6, a Na+-concentration of 0.7 mol/kg seawater. PPi could have been formed during early subduction of oceanic lithosphere by dehydration of protonated orthophosphates. A key to PPi formation in these geological environments is a low local activity of water. | <urn:uuid:26026cf1-3015-4f1b-8993-36749132d52c> | 3.453125 | 320 | Academic Writing | Science & Tech. | 25.382516 |
• Family: Percidae (Perches and darters)
• Other Names: None
• Ohio Status: Threatened
• Adult Size: Typically 1-2 inches, can reach 2.5 inches.
• Typical Foods: May fly larvae, midge larvae, and other aquatic invertebrates.
The channel darter is a small, slender fish with 10 to 15 small oblong dark blotches along its side. This species has a continuous deep groove which separates the upper lip from the mouth. Channel darters are yellowish-olive with the scales outlined in brown. Channel darters differ from johnny darters in having solid dashes along the side rather than the "w"- or "x"-shaped marks of johnny darters.
Habitat and Habits
Channel darters are found on large coarse-sand or fine-gravel bars in large rivers or along the shore of Lake Erie. Up until the invasion of the round goby, large schools of channel darters could be observed on the bars around the Lake Erie islands. It is likely the Lake Erie population no longer exists. They are still found in the Ohio River and the lower portions of the Scioto, Muskingum, and Hocking rivers. There may also be a small remnant population in the lower Maumee and Sandusky rivers in the Lake Erie drainage.
Reproduction and Care of the Young
Channel darters spawn during the spring and summer. Males defend territories centered around at least one rock. The females select a mate and then deposit their eggs in the gravel on the downstream side of the selected male’s rock. This species remains in water that is deeper than three feet during the day and migrates into shallow water at night. | <urn:uuid:9747dbd6-d9fc-464c-9720-0f1a07a478c7> | 3.109375 | 352 | Knowledge Article | Science & Tech. | 52.883181 |
Scientists classify the severity of solar flares according to the amount of X-rays they emit, known as X-ray flux. The GOES satellite measures the X-rays coming from the sun at any given time; see the graph of live data below.

Looking at this graph, you can easily identify flares as sharp spikes: the higher the spike, the larger the flare. A spike that reaches up into the zone marked 10⁻⁵ or higher is likely to have effects here on Earth. Spikes that don't reach this high are generally inconsequential flares.

Use the horizontal scale (Universal Time) to figure out when the flare happened. The data on this graph are automatically updated every five minutes.
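The standard GOES flare classes (A, B, C, M, X) step by successive powers of ten of peak X-ray flux in watts per square meter, which is why the 10⁻⁵ line matters: that is where M-class flares begin. A quick sketch of that lookup (the class boundaries are well established; the function itself is just our illustration):

```python
def flare_class(flux_wm2: float) -> str:
    """Classify a solar flare from its peak GOES X-ray flux (W/m^2).

    Classes step by powers of ten:
    A < 1e-7, B < 1e-6, C < 1e-5, M < 1e-4, X >= 1e-4.
    """
    if flux_wm2 < 1e-7:
        return "A"
    elif flux_wm2 < 1e-6:
        return "B"
    elif flux_wm2 < 1e-5:
        return "C"
    elif flux_wm2 < 1e-4:
        return "M"  # M and X flares (the 1e-5 zone and above) can affect Earth
    return "X"

print(flare_class(5e-6))  # C: usually inconsequential
print(flare_class(2e-5))  # M: above the 1e-5 line, can have effects on Earth
```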
"Exploring Products - Computer Hard Drives" is a hands on activity in which visitors use floating ring magnets to store data. They learn that computer hard drives are one of the most common applications of nanotechnology.
"Exploring Materials - Memory Metal" is a hands on activity in which visitors compare the properties of a memory metal spring to an ordinary spring. They learn that the way a material behaves on the macroscale is affected by its structure on the nanoscale.
Visitors will engage in a variety of survey type questions focusing on different aspects of nanotechnology. For each question posed, they will be provided short descriptions about the possible options. They will then place their vote using a marble in the container labeled with their selection. Throughout the day the public will be able to visualize how others have answered the same question by looking at the quantity of marbles in each container. Museum staff can use the data to chart trends in public knowledge about nanotechnology.
"Exploring Nano & Society - You Decide!" is a hands-on activity in which visitors sort and prioritize cards with new nanotechnologies according to their own values and the values of others. Visitors explore how technologies and society influence each other and how people’s values shape how nanotechnologies are developed and adopted.
"Exploring Nano & Society - Space Elevator" is a open-ended conversational experience in which visitors imagine and draw what a space elevator might look like, what support systems would surround it, and what other technologies it might enable. Conversation around the space elevator lead visitors to explore how technologies and society influence each other and how people’s values shape the ways nanotechnologies are developed and adopted.
In the first part of the “Robots & People” program, visitors learn what robots and nanobots are, what they can do, and how they affect our lives. In the second part of the program, visitors imagine and draw a robot, designing it to do a particular task.
Scientist Speed Dating is a facilitated, yet informal and high-energy, social activity to encourage a large group of people to speak with one another, ask questions, and learn about specific areas of research and practice within the field of nanoscale science and engineering, as well as the related societal and ethical implications of work in this field.
Nano Around the World is a card game designed to get participants to reflect on the potential uses of nanotechnology across the globe. Players each receive three cards: a character card, a current technology card, and a future technology card. They are asked to assume the role of their character to find nanotechnologies that might benefit them. After game play there is a facilitated discussion to help players reflect on the choices they made, the difficulty in finding appropriate technologies for many of the characters, and the possible nanotechnologies that could benefit a wider array of people than current nanotechnologies do.
In this classroom activity, students learn about organic light-emitting diodes (OLEDs). During the activity students make OLEDs, learn how OLEDs work, and discover what devices currently use OLEDs. Students also learn about spin coating since a spin coater is used to create the OLEDs. | <urn:uuid:eb45645b-0bd6-48b6-85be-742accdf1c1b> | 3.390625 | 664 | Content Listing | Science & Tech. | 27.857638 |
CSS also allows for comments, but it uses a completely different syntax to accomplish this. CSS comments are very similar to C/C++ comments, in that they are surrounded by /* and */.
Comments can span multiple lines, just as in C++:
It's important to remember that CSS comments cannot be nested. So, for example, this would not be correct:
However, it's hardly ever desirable to nest comments, so this limitation is no big deal.
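To illustrate both points (a reconstruction in the spirit of the surrounding text, not the book's exact listings):

```css
/* A CSS comment: it can span
   multiple lines, with no per-line markers needed. */
h1 { color: maroon; }

body {
  background: silver; /* comments may share a line with markup */
  color: black;       /* ...as long as each one is closed */
}
```

Nesting fails because a comment always ends at the first */ encountered; anything after that point is treated as (invalid) style rules rather than comment text.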
If you wish to place comments on the same line as markup, then you need to be careful about how you place them. For example, this is the correct way to do it:
Given this example, if each line isn't marked off, then most of the style sheet will become part of the comment, and so will not work:
In this example, only the first rule (
Moving on with our example, we see some more CSS information actually found inside an HTML tag!
For cases where you want to simply assign a few styles to one individual element, without the need for embedded or external style sheets, you'll employ the HTML attribute style. The value of a style attribute is simply one or more CSS declarations, separated by semicolons.
In order to facilitate a return to structural HTML, something was needed to permit authors to specify how a document should be displayed. CSS fills that need very nicely, and far better than the various presentational HTML elements ever did (or probably could have done). For the first time in years, there is hope that web pages can become more structural, not less, and at the same time the promise that they can have a more sophisticated look than ever before.
In order to ensure that this transition goes as smoothly as possible, HTML introduces a number of ways to link HTML and CSS together while still keeping them distinct. This allows authors to simplify document appearance management and maximize their effectiveness, thereby making their jobs a little easier. The further benefits of improving accessibility and positioning documents for a switch to an XML world make CSS a compelling technology.
As for user agent support, the
In order to fully understand how CSS can do all of this, authors need a firm grasp of how CSS handles document structure, how one writes rules that behave as expected, and most of all, what the "Cascading" part of the name really means. | <urn:uuid:ae5534aa-87f2-4ed7-a04b-4d1cb9048a1f> | 2.6875 | 483 | Documentation | Software Dev. | 40.284889 |
PostgreSQL provides a large number of functions and operators for the built-in data types. Users can also define their own functions and operators, as described in Part V. The psql commands \df and \do can be used to show the list of all actually available functions and operators, respectively.
If you are concerned about portability then take note that most of the functions and operators described in this chapter, with the exception of the most trivial arithmetic and comparison operators and some explicitly marked functions, are not specified by the SQL standard. Some of the extended functionality is present in other SQL database management systems, and in many cases this functionality is compatible and consistent between the various implementations.
|a|b|a AND b|a OR b|
|TRUE|TRUE|TRUE|TRUE|
|TRUE|FALSE|FALSE|TRUE|
|TRUE|NULL|NULL|TRUE|
|FALSE|FALSE|FALSE|FALSE|
|FALSE|NULL|FALSE|NULL|
|NULL|NULL|NULL|NULL|
The operators AND and OR are commutative, that is, you can switch the left and right operand without affecting the result. But see Section 4.2.11 for more information about the order of evaluation of subexpressions. | <urn:uuid:2c2eb61c-bf0f-454f-94fb-a6237ae33bbe> | 2.75 | 200 | Documentation | Software Dev. | 40.048485 |
In a sense, solar energy is all about the future. Right now we can meet all of our energy needs from other, non-renewable sources like oil, natural gas and coal. But these won't last forever, so when they run out, or become so scarce that there isn't enough to go around anymore, we will either need to use a different sort of energy or else drastically change our way of life. Ideally, of course, we would be able to switch to a different sort of energy. Actually, that process has already started with renewable energy, and solar energy in particular. In the future, therefore, the uses of solar energy will include just about everything we currently use energy for, and more!
Solar Energy for Houses
We are already in a situation in which solar energy can be used to power everything in our houses. Whether that be the computer on which you are likely reading this blog posting, or the kettle you use to make a cup of tea, or the television in the living room. Anything that needs power in your home can get it from solar energy when you have solar panels installed on your roof.
At the moment, though, it is quite expensive to have solar cells installed, even though they are a good long-term investment. As soon as you have the solar cells, they start to save you money on your electricity bills. On top of that, the government is currently offering a feed-in tariff which pays you for all of the energy produced by the solar panels that is subsequently fed back into the national grid.
In terms of future uses of solar energy then, the only difference will have to be that solar panels become a good short term investment as well as being good in the long run. They have to become more affordable and efficient so that they become obtainable by the majority of people. These are two of the main goals of the research that is currently being done into solar energy around the world, how to make them cheaper and more efficient.
Mobile Solar Energy
Of course the energy we need for static locations is only one kind of the energy we need. We also need energy on the move, for example for transport and electrical devices which we carry with us. Presently solar energy can be used to power watches and calculators but little else. Future uses of solar energy however should include cars, trains and even planes, as well as mobile phones, laptops and any other electrical devices. This will be possible when solar cells are reduced in size and, again, made more affordable.
Research has already been done into nano sized solar cells which will be able to come in many forms, including a spray. This means that you will be able to literally coat devices in solar cells. Also, there is already an unmanned plane that is powered by solar energy, as well as prototype cars.
What this means is that we have much of the technology for the future uses of solar energy, all that is necessary now is for it to become cheap enough that it can become widespread. | <urn:uuid:5431028d-8015-4029-be7a-fb8f853e37ac> | 2.921875 | 626 | Knowledge Article | Science & Tech. | 43.181185 |
Climate change may spur the destruction of ozone in unexpected parts of the globe.
In a warming world, many scientists believe, severe weather will become more common. That could be a problem in part because powerful rainstorms have the potential to erode ozone above the United States, researchers report online July 27 in Science.
“For 30 years, we’ve studied the problems of ozone loss and climate change separately,” says team leader James Anderson, a Harvard atmospheric scientist. “Now it’s pretty clear that climate change appears to be linked directly to the loss of ozone.” High-altitude ozone acts as a protective shield, blocking ultraviolet rays that can cause skin cancer.
Anderson and his colleagues stumbled on the unexpected connection while studying strong summer storms fueled by rising heat. During missions from 2001 to 2007, NASA planes flying close to the edge of space spotted water spewed high into the sky by convective storms over the U.S. The goal was to gather useful measurements for figuring out how high-altitude clouds form and trap heat.
Read More… (Science News)
Two climate-change communications research centers, one at Yale, the other at George Mason University, found that more Americans have been giving themselves a break on taking actions that would limit climate change… | <urn:uuid:1c942576-44fd-41d5-beac-d5584bb48dff> | 3.71875 | 265 | Content Listing | Science & Tech. | 36.866336 |
A software design pattern is a three-part rule which expresses a relation between a certain context, a problem, and a solution. The well-known "GoF Book" describes 23 software design patterns. Its influence in the software engineering community has been dramatic. However, Peter Norvig notes that "16 of [these] 23 patterns are either invisible or simpler [...]" in Dylan or Lisp (Design Patterns in Dynamic Programming, Object World, 1996).
We claim that this is not a consequence of the notion of "pattern" itself, but rather of the way patterns are generally described; the GoF book being typical in this matter. Whereas patterns are supposed to be general and abstract, the GoF book is actually very much oriented towards mainstream object languages such as C++. As a result, most of its 23 "design patterns" are actually closer to "programming patterns", or "idioms", if you choose to adopt the terminology of the POSA Book.
In this talk, we would like to envision software design patterns from the point of view of dynamic languages and specifically from the angle of CLOS, the Common Lisp Object System. Taking the Visitor pattern as an illustration, we will show how a generally useful pattern can be blurred into the language, sometimes to the point of complete disappearance.
The lesson to be learned is that software design patterns should be used with care, and in particular, will never replace an in-depth knowledge of your preferred language (in our case, the mastering of first-class and generic functions, lexical closures and meta-object protocol). By using patterns blindly, you risk missing the obvious and, most of the time, simpler solution: the "Just Do It" pattern.
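The talk's point can be sketched outside CLOS as well. Below is a hedged illustration (in Python rather than the talk's Lisp, and not taken from the talk itself) using `functools.singledispatch`, a generic-function mechanism in the spirit of CLOS: with type-based dispatch built into the language, the Visitor pattern's accept/visit plumbing disappears. The `Circle` and `Square` classes are invented for the example.

```python
from functools import singledispatch

# Hypothetical node classes, invented for this illustration.
class Circle:
    def __init__(self, r):
        self.r = r

class Square:
    def __init__(self, s):
        self.s = s

@singledispatch
def area(shape):
    raise TypeError(f"no area() case for {type(shape).__name__}")

@area.register
def _(shape: Circle):
    return 3.14159 * shape.r ** 2

@area.register
def _(shape: Square):
    return shape.s ** 2

# No accept()/visit() plumbing: dispatch on the argument type does the work.
print(area(Square(3)))  # 9
```

Adding a new "visitable" type is just another `@area.register` case, which is the sense in which the pattern is "blurred into the language".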
Document location: http://www.lrde.epita.fr/~didier/research/publis.php#verna.09.accu
How can I create customized classes that have similar properties as 'str'?
hniksic at xemacs.org
Sun Nov 25 02:48:30 CET 2007
Steven D'Aprano <steve at REMOVE-THIS-cybersource.com.au> writes:
> On Sun, 25 Nov 2007 01:38:51 +0100, Hrvoje Niksic wrote:
>> samwyse <samwyse at gmail.com> writes:
>>> create a hash that maps your keys to themselves, then use the values of
>>> that hash as your keys.
>> The "atom" function you describe already exists under the name "intern".
> Not really. intern() works very differently, because it can tie itself to
> the Python internals.
The exact implementation mechanism is subtly different, but
functionally intern is equivalent to the "atom" function.
> In any case, I'm not sure that intern() actually will solve the OP's
> problem, even assuming it is a real and not imaginary
> problem. According to the docs, intern()'s purpose is to speed up
> dictionary lookups, not to save memory. I suspect that if it does
> save memory, it will be by accident.
It's not by accident, it follows from what interning does. Interning
speeds up comparisons by returning the same string object for the same
string contents. If the strings you're working with tend to repeat,
interning will save some memory simply by preventing storage of
multiple copies of the same string. Whether the savings would make
any difference for the OP is another question.
> From the docs:
> intern( string)
> Enter string in the table of ``interned'' strings and return the interned
> string - which is string itself or a copy. [...]
> Note the words "which is string itself or a copy". It would be ironic if
> the OP uses intern to avoid having copies of strings, and ends up with
> even more copies than if he didn't bother.
That's a frequently misunderstood sentence. It doesn't mean that
intern will make copies; it simply means that the string you get back
from intern can be either the string you passed it or another
(previously interned) string object that is guaranteed to have the
same contents as your string (which makes it technically a "copy" of
the string you passed to intern).
More information about the Python-list
The function cos(2x) has a period found from 2T = 2pi, giving T = pi.
The function y=(cos2x)^2 has a period of what? By looking at a graph of the function it is pi/2, but the answer in the book says pi.
Thanks for any help offered.
Thus we have to find the least T > 0 such that cos^2(2(x+T)) = cos^2(2x) for all x, i.e. cos^2(2x+2T) - cos^2(2x) = 0.
This means that:
<< difference of two squares: [cos(2x+2T) - cos(2x)][cos(2x+2T) + cos(2x)] = 0 >>
So either cos(2x+2T) = cos(2x), or cos(2x+2T) = -cos(2x).
From the first one, we get 2T = 2k*pi, i.e. T = k*pi (k is an integer); from the second one, we get 2T = pi + 2k'*pi, i.e. T = pi/2 + k'*pi (k' is an integer). The smallest positive value from either family is T = pi/2.
In fact, the period T is the least positive number such that f(x + T) = f(x) for all x.
The period of cos(x) is 2pi/1 = 2pi.
The graph starts at the maximum at x=0, goes down to zero at pi/2, goes down to the minimum at x=pi, goes up to zero at 3pi/2, and goes up to the maximum at 2pi. Then at this x-value, 2pi, the curve goes down again to repeat another cycle. That is why 2pi is the period of cos(x).
The period of cos(2x) is 2pi/2 = pi.
The curve starts at maximun at x=0, goes down to zero at pi/4, goes down to minimum at pi/2, goes up to zero at 3pi/4, and goes up again to maximum at pi. Then the curve goes down again to repeat another cycle. That is why the period of cos(2x) is pi.
What about cos^2(2x) or (cos(2x))^2?
Since it is a square, then the function value will always be positive. The curve will not go down below the horizontal axis.
So the curve will start at maximum at x=0, goes down to zero at pi/4, then instead of going down below the x-axis, the curve will go up to maximum at x=pi/2, goes down to zero at x = 3pi/4, and then goes up again to maximum at x = pi.
The curve has a complete cycle from x = pi/4 to x = 3pi/4. The shape is that of a "dome".
That is 3pi/4 - pi/4 = 2pi/4 = pi/2.
Therefore, the period of (cos(2x))^2 is pi/2. ----------answer.
Or, looking at the curve in another way, the curve has a complete cycle from x = 0 to x = pi/2. The shape is that of a "bird, flying."
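As a quick numerical sanity check (a sketch, not part of the thread), one can verify in Python that pi/2 is a period of (cos(2x))^2 while a smaller candidate such as pi/4 is not:

```python
import math

def f(x):
    return math.cos(2 * x) ** 2

xs = [0.1 * k for k in range(100)]
T = math.pi / 2

# Shifting by pi/2 maps every sample point back onto itself...
assert all(abs(f(x + T) - f(x)) < 1e-12 for x in xs)
# ...while shifting by pi/4 does not (e.g. f(0) = 1 but f(pi/4) = 0).
assert any(abs(f(x + math.pi / 4) - f(x)) > 1e-6 for x in xs)
print("period pi/2 confirmed numerically")
```

This agrees with the graph-based answer: the book's answer of pi is a period, but not the least one.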
Defines size, enumerators, and synchronization methods for all nongeneric collections.
Assembly: mscorlib (in mscorlib.dll)
The ICollection type exposes the following members.
- AsParallel: Enables parallelization of a query. (Defined by ParallelEnumerable.)
- AsQueryable: Converts an IEnumerable to an IQueryable. (Defined by Queryable.)
- Cast<TResult>: Casts the elements of an IEnumerable to the specified type. (Defined by Enumerable.)
- OfType<TResult>: Filters the elements of an IEnumerable based on a specified type. (Defined by Enumerable.)
The ICollection interface is the base interface for classes in the System.Collections namespace.
The ICollection interface extends IEnumerable; IDictionary and IList are more specialized interfaces that extend ICollection. An IDictionary implementation is a collection of key/value pairs, like the Hashtable class. An IList implementation is a collection of values whose members can be accessed by index, like the ArrayList class.
For the generic version of this interface, see System.Collections.Generic.ICollection<T>.
Windows 7, Windows Vista SP1 or later, Windows XP SP3, Windows XP SP2 x64 Edition, Windows Server 2008 (Server Core not supported), Windows Server 2008 R2 (Server Core supported with SP1 or later), Windows Server 2003 SP2
The .NET Framework does not support all versions of every platform. For a list of the supported versions, see .NET Framework System Requirements.
This post is a guest contribution by Colin Beale, a research fellow at the University of York who studies ecology in Tanzania. Colin writes about the living community of the savannah, from butterflies to wildebeests, with co-blogger Ethan Kinsey at Safari Ecology. If you have an idea for a post, and you’d like to contribute to Nothing in Biology Makes Sense, e-mail Jeremy to inquire.
Spinescent. Now there’s a word! It simply means having spines and one of the first things many visitors to the African savannah notice is that everything is covered in thorns. Or, in other words, Africa is spinescent. It’s not a wise idea to brush past a bush when you’re walking, and you certainly want to keep arms and legs inside a car through narrow tracks. These are thorns that puncture heavy-duty car tyres, let alone delicate skin. But why is the savanna so much thornier than many of the places visitors come from? Or even than other biomes within Africa, such as the forests?
At one level the answer is obvious—there are an awful lot of animals that like to eat bushes and trees in the savanna. Any tree that wants to avoid this would probably be well advised to grow thorns or have some other type of defence mechanism to protect itself. But then again, perhaps the answer isn’t so obvious: all those animals that like to eat bushes seem to be eating the bushes perfectly happily despite the thorns. So why bother having thorns in the first place? There’s certainly a serious cost to having thorns: plants that don’t need to grow them have been shown in experiments to produce more fruits. So if animals eat the plants with thorns anyway, why pay this cost?
Look a bit closer at an animal browsing a thorn tree and you’ll see it has one of two strategies – it bites the end of a branch off, wood, leaves, thorns and all (an activity we could call ‘pruning’), or it can nibble carefully among the thorns, picking the leaves out with care. And look closer still and you’ll see that most animals ‘pruning’ trees only eat the soft new tips, where the thorns haven’t hardened yet, which suggests that thorns do at least make it difficult for these animals, even if they still make their living from eating thorny bushes (I don’t think I’ve ever seen an impala eat anything that isn’t thorny!). So whilst thorns might slow animals down, it would still seem that thorny bushes are paying two prices – first they pay the cost of growing thorns in the first place, and secondly they still experience herbivory. Why, then, do they do it? The only way growing thorns makes sense (beyond the Genesis explanation that the land was cursed following the fall, of course), is in terms of an evolutionary arms race.
Just as in the Cold War ‘the West’ and ‘the East’ were busy building costly weapons stockpiles with no obvious benefits beyond “they did it, so we have to as well”, we can think of plants and herbivores as being in a constant war. Plants trying not to get eaten, herbivores trying to eat plants. For every adaptation that a plant evolves in defence, before long the herbivores are likely to evolve a way around it (behavioural or morphological). Grow prickles and you’ll be well defended for a while, but then some animal will learn that eating new shoots whilst the prickles are soft is easy, grow thorns and before long something will evolve a long tongue that can lick the leaves out between the thorns, or a very narrow muzzle to squeeze between them. Once the process has started, there’s little obvious way to stop – nuclear disarmament treaties require both sides to sign up and trust one another, not something that’s common in nature.
We can also see how it happens within the species: consider the first bush to evolve a few stiff stipules (the little bits where leaves join the plant stem) to make tiny thorns: there'd be one bush among many that was a bit prickly, and all the herbivores would avoid it, with many undefended plants to eat instead. So before long the genes that led to this mutation would spread and the whole population would have small prickles. Herbivores still need to eat, so with no choice now they'll evolve a way to eat the prickly plant, or they'll die. Then another bush has a mutation that makes those little prickles longer and now there's one bush with spines among many with prickles. Again, the herbivores will eat the ones with little prickles, and the longer one has an evolutionary advantage. It's easy to see how this process will rapidly run away until all plants have long, nasty thorns that cost them lots to grow, but still get eaten. This competition between the plants themselves can be seen as another evolutionary arms race, one we often call the Red Queen effect, after a character in Lewis Carroll's Through the Looking Glass, who said "it takes all the running you can do, to keep in the same place".
It follows that if some evil scientist went out with a pair of nail clippers and removed all the thorns from a thorn tree, it should suffer dramatically from being less well armed than its neighbours. And you might not be surprised to learn that someone's actually done the experiment (never let it be said that science is about proving the obvious…)! Wilson and colleagues removed spines from a whole lot of thorny African trees and bushes and watched what happened when goats and bushbuck came along. I doubt they were too surprised to discover that the animals took bigger bites and consequently fed faster on the branches they'd removed spines from than those they hadn't …
In fact, the defences provided by thorns are pretty sophisticated. As there's a cost to being spinescent, it's only sensible to grow thorns if the benefits outweigh the costs, and where that happens depends on a number of things above and beyond simply the density of herbivores: it's the actual cost of that herbivory that matters. Herbivory is more costly in places where you can't grow much to replace what's eaten, so it makes sense that in the driest environments thorns are more valuable than in wetter places where new growth can rapidly replace lost material. Exactly what has been found for Vachellia tortilis in experiments in Israel. Using the same logic, you might expect that if you give fertiliser to a growing tree it will also be able to grow faster so will invest less in thorns. But sadly, when that experiment was done on the same species, it didn't hold up—more fertiliser meant more thorns, which the authors of that study took to mean that the ability to grow thorns is nutrient limited. Now I suspect their side note that trees (even of the same species) growing on nutrient rich soils often have more thorns than those on poorer soils is more relevant here—if you're packed full of nutrients you're probably a much better target for herbivory than if you're not, so although fertiliser means you can grow faster, it also means you'll face higher herbivory rates. Which in turn means if you grow on rich soils you'll face higher costs and would be wise to invest more in defence. What's more, it makes sense for plants to be able to sense the amount of herbivory they're facing and only grow thorns when herbivory is high: again, exactly what's seen in experiments.
Similarly, if you’re going to pay the costs of being thorny, it’s worth making that immediately obvious to potential herbivores to ensure that they avoid you rather than taking a few bites before making the discovery. So instead of hiding your thorns, why not make them obviously white or even red as a warning? And just as well defended insects are often bright and obvious (we say they’re aposematic), many thorns are also aposematic. But where it gets really scary is recent work suggesting that thorns not only provide direct defence, but are actually used as needles to inject bacteria and fungi into whatever brushes against them. There’s evidence to suggest the plants have evolved such that the thorns are particularly good at making homes to some pretty nasty beasties: Clostriduim botulinum, Bacillus anthracis (I’m sure you can guess what those two give you) and many other nasties are reported to be happy living on thorns. What’s more, those nasties are happier and therefore in higher densities on the thorns than the photosynthetic green parts of the plants, suggesting the plants really have evolved thorns that are really hypodermic needles. Truly plant biological warfare! No wonder that tiny thorn scratch can go nasty on you.
So, to summarise, African savanna is thorny because of all the animals, just as we first thought, but hopefully we've learnt something interesting in the longer answer: even if it is only to pack the disinfectant when going on a walking safari!
Formation of a Reconnection Site
A numerical simulation based on a global MagnetoHydroDynamic (MHD) model was run to reproduce the 8 May 2004 event. This image shows the topology of the magnetic field produced by the global MHD model around 10:00 UT, viewed from the direction of the Sun. Earth is represented by the blue sphere, the location of the Double Star TC-1 spacecraft is indicated, the magnetic field lines are colour-coded: red=unconnected, yellow=open, blue=closed.
The model reveals the formation of a reconnection site extending about 7 to 10 Magnetic Local Time (MLT), in agreement with the data collected by both Double Star TC-1 and Cluster.
stoilis writes "A collaboration between Deborah Gordon, a Stanford ant biologist, and Balaji Prabhakar, a computer scientist, has revealed that the behavior of harvester ants, as they forage for food, mirrors the protocols that control traffic on the Internet. From the article: 'Prabhakar wrote an ant algorithm to predict foraging behavior depending on the amount of food – i.e., bandwidth – available. Gordon's experiments manipulate the rate of forager return. Working with Stanford student Katie Dektar, they found that the TCP-influenced algorithm almost exactly matched the ant behavior found in Gordon's experiments. "Ants have discovered an algorithm that we know well, and they've been doing it for millions of years," Prabhakar said.' The abstract is published in the Aug. 23 issue of PLoS Computational Biology."
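For readers unfamiliar with the TCP mechanism being compared here, the sketch below (not from the paper) shows additive-increase/multiplicative-decrease (AIMD), the feedback rule TCP uses to match its sending rate to available bandwidth: ramp up gently on success, cut back sharply on loss. The analogy in the study is that ants send out foragers faster when returns are frequent and throttle back when they are not.

```python
# Hedged one-function sketch of AIMD; the parameter values are illustrative.
def aimd(rate, acked, increase=1.0, decrease=0.5):
    """Raise the rate additively on success, cut it multiplicatively on loss."""
    return rate + increase if acked else rate * decrease

r = 1.0
for acked in [True, True, True, False, True]:
    r = aimd(r, acked)
print(r)  # 3.0  (1 -> 2 -> 3 -> 4 -> 2.0 -> 3.0)
```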
Science subject and location tags
Articles, documents and multimedia from ABC Science
Wednesday, 1 December 2010
StarStuff Podcast Have scientists discovered the universe before its creation? Plus: evidence reveals liquid universe moments after Big Bang; and super-tankers to detect secret nuclear facilities.
Friday, 26 November 2010
Saturn's second largest moon, Rhea, has a thin atmosphere of oxygen and carbon dioxide, according to a new study.
Wednesday, 24 November 2010
StarStuff Podcast Scientists spot extragalactic exoplanet being devoured by our galaxy. Plus: new Australian space engineering centre opens; and CERN scientists a step closer to understanding antimatter.
Thursday, 18 November 2010
StarStuff Podcast Thirty-year-old black hole seen in our cosmic neighbourhood. Plus: 50,000 light-year-long galactic monster found in Milky Way; and stereo telescopes help scientists understand Sun's mega-blasts.
Thursday, 11 November 2010
StarStuff Podcast Large Hadron Collider recreates the conditions of the Big Bang. Plus: EPOXI mission flies by comet Hartley 2; and earliest stars and galaxies found to have created known universe.
Friday, 5 November 2010
A spacecraft has successfully conducted a close fly-by of the comet Hartley 2, providing the most extensive observations of a comet in history.
Thursday, 4 November 2010
StarStuff Podcast Discovery of a neutron star twice the size of our Sun causes a rethink by astronomers. Plus: lost generation of stars found in 'stellar jewel box'; and scientists trace our solar system's orbit through the Milky Way.
Wednesday, 3 November 2010
Scientists may have a new tool to help them work out what sort of environment our solar system's been experiencing during its journey through our galaxy, the Milky Way.
Friday, 29 October 2010
Scientists have found a lost generation of stars in the galaxy's most densely populated stellar cities known as globular clusters.
Wednesday, 27 October 2010
StarStuff Podcast Astronomers detect over 13-billion-year-old galaxy. Plus: black hole formation theory goes bang; and more water than expected found on our Moon.
Friday, 22 October 2010
A head-on collision by a NASA spacecraft last year has confirmed the presence of significant quantities of water and frozen volatiles on the surface of the Moon, according to a series of new studies.
Thursday, 21 October 2010
StarStuff Podcast Asteroid passes 45,000 kilometres over Singapore. Plus: mysterious pulsar provides fresh clues about magnetic neutron stars; and galactic evolution revisited.
Friday, 15 October 2010
Astronomers may have to go back to the drawing board after the discovery of an unusual pulsar, which doesn't appear to be slowing down.
Wednesday, 13 October 2010
StarStuff Podcast Australian scientists discover galaxies from the distant past. Plus: Saturn's moon, Titan, holds ingredients of life; and NASA leads new mission to Mars.
Monday, 11 October 2010
The long-lost lunar rover Lunokhod 1 has been rediscovered by astronomers using laser pulses, thirty-six years after it disappeared.
Fire acts differently in space than on Earth. This artwork is comprised of multiple overlays of three separate microgravity flame images. Each image is of flame spread over cellulose paper in a spacecraft ventilation flow in microgravity. The different colors represent different chemical reactions within the flame. The blue areas are caused by chemiluminescence (light produced by a chemical reaction.) The white, yellow and orange regions are due to glowing soot within the flame zone.
This microgravity combustion research was performed at NASA's Glenn Research Center in Cleveland, Ohio. Understanding microgravity combustion provides insights into spacecraft fire safety.
Image Credit: NASA
This image was created by Sandra Olson, an aerospace engineer at NASA's Glenn Research Center. The image won first place in the 2011 Combustion Art Competition, held at the 7th U.S. National Combustion Meeting.
This is a light, fast, and simple-to-understand mathematical parser designed in one class, which receives a mathematical expression (System.String) as input and returns the output value (System.Double). For example, if your input string is "√(625)+25*(3/3)", the parser returns the double value 50.
The idea was to create a string calculator for educational purposes.
How it works
For details, look at the code; I tried to explain how the parser works in the comments, along with how it can be changed and extended.
Convert to RPN:
* Input operators are replaced by corresponding tokens (it's necessary to distinguish unary from binary operators).
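To illustrate the infix-to-RPN idea the parser is built on, here is a minimal, language-agnostic sketch (in Python, not the article's C# implementation) of a shunting-yard converter plus an RPN evaluator, covering only binary operators and parentheses; the token list and function names are invented for the example:

```python
import operator

PREC = {'+': 1, '-': 1, '*': 2, '/': 2, '^': 3}
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul,
       '/': operator.truediv, '^': operator.pow}

def to_rpn(tokens):
    """Shunting-yard: infix tokens -> reverse Polish notation."""
    out, stack = [], []
    for t in tokens:
        if t in PREC:
            # Pop operators of higher precedence (or equal, for the
            # left-associative operators; '^' is right-associative).
            while stack and stack[-1] in PREC and (
                    PREC[stack[-1]] > PREC[t] or
                    (PREC[stack[-1]] == PREC[t] and t != '^')):
                out.append(stack.pop())
            stack.append(t)
        elif t == '(':
            stack.append(t)
        elif t == ')':
            while stack[-1] != '(':
                out.append(stack.pop())
            stack.pop()  # discard the '('
        else:
            out.append(float(t))
    while stack:
        out.append(stack.pop())
    return out

def eval_rpn(rpn):
    """Evaluate an RPN token stream with a value stack."""
    st = []
    for t in rpn:
        if t in OPS:
            b, a = st.pop(), st.pop()
            st.append(OPS[t](a, b))
        else:
            st.append(t)
    return st[0]

tokens = ['2', '+', '3', '*', '(', '4', '-', '1', ')']
print(eval_rpn(to_rpn(tokens)))  # 11.0
```

The article's parser follows the same two-phase shape, with tokenization, unary operators, functions, and constants layered on top.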
Using the code
public static void Main()
{
    MathParser parser = new MathParser();
    string s1 = "pi+5*5+5*3-5*5-5*3+1E1";
    string s2 = "sin(cos(tg(sh(ch(th(100))))))";
    bool isRadians = false;

    double d1 = parser.Parse(s1, isRadians);
    double d2 = parser.Parse(s2, isRadians);

    Console.WriteLine(d1);
    Console.WriteLine(d2);
    Console.ReadKey(true);
}
Arithmetic operators:
- () — parenthesis;
- + — plus (a + b);
- - — minus (a - b);
- * — multiplication symbol (a * b);
- / — divide symbol (a / b);
- ^ — degree symbol (a ^ b);
- √(), sqrt() — square root ( √(a) ).
- tg(x); //tan(x)
- ctg(x). //cotan(x)
- exp(x) or e^x — exponential function;
- ln(x), (a)log(b) — (natural) logarithm;
- abs(x) — absolute of a number.
- pi — 3.14159265...;
- e — 2.71828183....
Arguments of the trigonometric functions can be expressed as radians or degrees
example (as radians):
Works with any decimal-separator character in real numbers (regional settings).
New operators, functions and constants can be easily added to the code.
This parser is very basic (special designed in 1 file such class), convenient for the further implementation, expansion and modification.
Points of Interest
I came to better understand parsers and learned about reverse Polish notation.
- 2012/05/09: released the source code (1_0);
- 2012/05/15: optimized and modified code (1_1);
- 2012/06/07: optimized parser (1_2);
- 2012/09/11: some refactoring in code, added unit tests (1_2a);
- 2012/12/02: improved code (1_2b).
- 2013/01/27: improved code (1_3)
Sub-tropical Ridge (STR)
The STR is a natural high pressure belt that sits across the southern parts of Australia.
Key things to know about STR
- The sub-tropical ridge moves north and south seasonally over Australia
- This can affect the passage of cold fronts across southern Australia. Fronts are a good source of moisture and potential rainfall.
- Typically in winter the STR moves north, allowing fronts to pass over southern Australia.
- In summer, the STR typically moves south, blocking the passage of fronts.
- This is part of the reason why Victoria experiences rain bearing cold fronts during winter.
- The strength (or intensity) of the high pressure systems also affect rainfall. Higher pressure means less rainfall.
- Farmers know that the seasons with stronger or more frequent blocking high pressure systems over southeast Australia don’t tend to produce the regular rainfall we would like.
Where to go for more information
For more information about the STR visit some of the pages below.
No longer is David Braaten constantly cocooned in his red super parka. He left the insta-freeze winds of the Antarctic interior in January.
But as cold as the trip was for the University of Kansas scientist, he recognizes what one discovery after the next has demonstrated this year: It's getting remarkably warm down there, and it's heating up incredibly fast.
"We're trying to find out what's happening to the ice," said Braaten, the deputy director of the KU-based Center for Remote Sensing of Ice Sheets.
Even as the changing climate brings more moisture, and ice, to Antarctica's center, on its edges the frozen continent is becoming less so. Melting skyscrapers of ice crash into the ocean at ever-faster rates.
That's raising sea levels, disrupting ocean food chains and reducing the region's ability to moderate the planet's climate.
Climate scientists once were befuddled about why Antarctica seemed to be cooling while the rest of the world got toastier. It turns out the bottom of the world has been warming after all.
"More is happening than we thought, and it's happening faster," said Douglas Martinson of Columbia University, who studies the impact of polar oceans on global climate.
Read the complete story at kansascity.com
The top 100 carbon dioxide-producing facilities in California generated 101,890,944 metric tons of CO2 in 2007, according to data recently released by the California Air Resources Board. We’ve mapped that data to show where the 100 largest polluters are located. Power plants and oil refineries appear to be the largest culprits. The data is self-reported to the air resources board.
The California Air Resources Board uses this data to identify major sources of pollution in the state, and to determine which businesses will be charged administrative fees that will be used to pay for the implementation of AB 32, the Solutions to Global Warming Act.
AB 32 is intended to reduce California's greenhouse gas emissions 25 percent from current levels by 2020 through diverse measures ranging from reducing landfill emissions to higher fuel standards to a cap-and-trade system for polluters.
The CO2 emission data also indicates which communities are most impacted by heavy polluters. Many of these companies have been at the forefront of efforts to slow the implementation of AB 32, and have lobbied to influence the air resources board's writing of the rules governing emission limits. Twenty two of the companies here are also members of the AB 32 Implementation Group.
In the fall, the air resources board will begin to collect roughly fifteen cents a ton in emission fees from these and other industries to help cover the operating costs of further regulation of greenhouse gas emissions.
DISCLAIMER FROM AIR RESOURCES BOARD: This is the first year of reporting, and these numbers are self-reported and have not been verified. The air resources board has accredited the first batch of third-party verifiers and we will begin that process in 2010. Thus, these numbers are subject to change and could contain errors. The measurements reported here are CO2E, "carbon dioxide equivalent," as some greenhouse gas emissions might be other gases, like methane, which have different "global warming potentials." Almost all emissions reported are CO2.
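The "CO2 equivalent" arithmetic mentioned in the disclaimer works by weighting each gas's mass by its global warming potential (GWP) and summing. The sketch below is illustrative only; the GWP values used (25 for methane, 298 for nitrous oxide, on a 100-year horizon) are assumptions taken from common IPCC figures, not from this article:

```python
# Illustrative 100-year global warming potentials (assumed values).
GWP = {'CO2': 1, 'CH4': 25, 'N2O': 298}

def co2e(tons_by_gas):
    """Convert per-gas metric tons into total CO2-equivalent tons."""
    return sum(tons * GWP[gas] for gas, tons in tons_by_gas.items())

# 1,000 t of CO2 plus 10 t of methane counts as 1,250 t CO2E.
print(co2e({'CO2': 1000, 'CH4': 10}))  # 1250
```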
CO2 conversions are from the EPA Greenhouse Gas Equivalencies Calculator.
Date: 2006-10-31 to 2006-12-26
Total Depth: 1284.85 m
ANDRILL McMurdo Ice Shelf (MIS) Project
The key aim of the MIS Project is to determine past ice shelf responses to climate forcing, including variability at a range of timescales. To achieve this aim ANDRILL will recover core from beneath the McMurdo Ice Shelf. The primary target for the MIS site is a 1200 meter-thick body of Plio-Pleistocene (0-5 million years ago) glacimarine, terrigenous, volcanic, and biogenic sediment that has accumulated in the Windless Bight region of a flexural moat basin surrounding Ross Island. A single ~1000 meter-deep drillcore will be recovered from approximately 900m of water.
Below is a graphical representation of all core recovered from this hole. Each column represents 10 meters of core. Bright blue areas denote sections of no core recovery. Clicking on the image will allow you to quickly jump to that specific depth in the hole.
Missing Link To Crocodile/Avian/Mammalian Past Rediscovered At Museum of Natural History
By Elaine Meinel Supkis
Researchers noticed what looked like a crocodile ankle poking out of a very old cast stored in the Museum's back rooms. Upon opening the cast, they found that the fossil, which hadn't been seen in many years, was an important missing link from right after the great Permian extinction.
From National Geographic News:
Ankle aside, Effigia had large eyes, a long tail, and a toothless beak—not unlike the ostrich dinosaurs.

Untangling the tree of life is a challenge. When we apply genetic tools, it is always surprising, and filling out the record by classifying fossils enables us to understand all the many odd things that happened in the past, for evolution is all about things happening.
Effigia also walked on two feet, unlike modern crocs.
These physical similarities suggest that Effigia and the ornithomimid dinosaurs evolved similarly during two different eras, the scientists say.
The fossil record shows that many different features have been reinvented time and again in different species. But the Effigia example is a bit surprising.
Modern crocodiles are but one remnant of what was once a far more diverse croc family.
"Today we think of crocodiles as looking basically the same," said Nesbitt, of the American Museum of Natural History and Columbia University's Lamont-Doherty Earth Observatory. "But in their history they took on a wide variety of different body plans."
"Some looked like reptilian armadillos or cats, and others looked like little dinosaurs," Nesbitt said.
The crocodilian family may have been at its peak during the Triassic period.
"Toward the end of the Triassic period you have this crazy diversification of these crocodile relatives, including this animal," Nesbitt said.
"It was really the heyday of the crocodile-like animals, but the only lineage to really make it out of the Triassic was the lineage that led to modern crocodiles."
Recent research now shows that birds, turtles and crocodiles all split off from the main trunk of the dinosaur family pretty quickly, each striking out on its own, increasingly divergent path. This splitting happened when many species were cut off from each other by the tremendous disaster at the end of the Permian, when not only did the planet become very hot, but the oxygen supply nearly disappeared as well (atmospheric oxygen being an artifact of plants releasing it as they process CO2). Each species that survived, and very few individuals survived this disaster, lived in small, restricted, protected areas. This allowed them to change rapidly without interference.
Prior to the disaster, proto-mammalian therapsids hunted proto-dinosaurs. It looked like the earth would be inherited by the mammal line, which could keep its young warm. The bulk of all our fossil fuels comes from this balmy, wet, fertile period. For 100 million years, thick forests of fern-type plants covered the landscape, and this mass of greenery, with its milling insects and animals, thrived happily. From UC Berkeley:
In addition to having the ideal conditions for the beginnings of coal, several major biological, geological, and climatic events occurred during this time. One of the greatest evolutionary innovations of the Carboniferous was the amniote egg, which allowed for the further exploitation of the land by certain tetrapods. The amniote egg allowed the ancestors of birds, mammals, and reptiles to reproduce on land by preventing the desiccation of the embryo inside. There was also a trend towards mild temperatures during the Carboniferous, as evidenced by the decrease in lycopods and large insects and an increase in the number of tree ferns.

All this got buried suddenly and totally, under rock, no less. That doesn't happen by itself. The only way to build up rock that seals in great masses of organic matter which couldn't easily rot is to suddenly turn the climate inside out: hot winds, blowing sand, and rains falling on bare rock high up sent mud flows pouring into deep valleys that became the graves of tremendous former forests, compressing it all over the eons into the rock called "coal" or the liquid called "oil".
Here is more on the debate about warming vs asteroid impact: From GSA:
Rapid end-Permian extinctions probably intensified conditions that were already developing on Earth, including: 1) extreme warmth, 2) deep-sea stratification and anoxia facilitated by warm, saline bottom water, 3) limitation of nutrient availability, and 4) reduction in atmospheric oxygen levels. All of these factors could have delayed Early Triassic biotic recovery. Decay of unburied biomass would release considerable carbon dioxide to the atmosphere. Destruction of most photosynthetic organisms (land plants and phytoplankton) would sustain warmth by sharply reducing Earth’s capability for CO2 drawdown. Water lost during forest destruction would facilitate desertification that would foster erosion, resulting in depletion of soil nutrients and release of CO2. Additional greenhouse gas probably entered Earth’s atmosphere from the Siberian Traps eruptions, gas hydrate release, and ocean overturn. Absence of active, low-latitude Late Permian orogenic belts had already reduced long-term silicate weathering and CO2 drawdown.

Like any terrible disaster, multiple forces coinciding amplify each other's effects. Because an ecosystem is a fairly complex system in which every part interacts with its neighbors, once it is knocked out of whack everything falls apart rapidly, which is why we are seeing so many extinctions today, for example.
It so happens that when all animals struggled to survive the great Permian extinction, and things got better over several million desperate years, the niche occupied by the dinosaurs proved to be bigger and more fertile than the one the mammals occupied. When the two lineages met again as oxygen levels rose, the mammals were much tinier than before and the little dinosaurs were much bigger than the poor mammals. So dominating the former dominators was easy, and the mammals spent the following 100 million years dodging the thundering footsteps of the dinosaurs.
Until, again, the ecology collapsed, the climate changed, and the dinosaurs lost their grip on the planet, becoming helpless fossils.
Assignment statements are used to (re)bind names to values and to modify attributes or items of mutable objects:
(See section 5.3 for the syntax definitions for the last three symbols.)
An assignment statement evaluates the expression list (remember that this can be a single expression or a comma-separated list, the latter yielding a tuple) and assigns the single resulting object to each of the target lists, from left to right.
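As a brief sketch of that rule (the names and values here are chosen for illustration), chained and tuple assignment behave like this:

```python
# Chained assignment: the expression list is evaluated once, then the
# single resulting object is bound to each target list, left to right.
a = b = [1, 2, 3]
assert a is b  # both names are bound to the same list object

# A comma-separated expression list yields a tuple, which is then
# unpacked into the comma-separated target list.
x, y = 1, 2
assert (x, y) == (1, 2)
```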
Assignment is defined recursively depending on the form of the target (list). When a target is part of a mutable object (an attribute reference, subscription or slicing), the mutable object must ultimately perform the assignment and decide about its validity, and may raise an exception if the assignment is unacceptable. The rules observed by various types and the exceptions raised are given with the definition of the object types (see section 3.2).
Assignment of an object to a target list is recursively defined as follows.
Assignment of an object to a single target is recursively defined as follows.
The name is rebound if it was already bound. This may cause the reference count for the object previously bound to the name to reach zero, causing the object to be deallocated and its destructor (if it has one) to be called.
If the primary is a mutable sequence object (e.g., a list), the subscript must yield a plain integer. If it is negative, the sequence's length is added to it. The resulting value must be a nonnegative integer less than the sequence's length, and the sequence is asked to assign the assigned object to its item with that index. If the index is out of range, IndexError is raised (assignment to a subscripted sequence cannot add new items to a list).
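A short sketch of these two rules (the list contents are illustrative):

```python
seq = [10, 20, 30]

# A negative subscript has the sequence's length added to it,
# so seq[-1] here means seq[3 + -1], i.e. seq[2].
seq[-1] = 99
assert seq == [10, 20, 99]

# Assignment to an out-of-range index raises IndexError; it cannot
# add new items to a list.
try:
    seq[3] = 0
except IndexError:
    pass
else:
    raise AssertionError("expected IndexError")
```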
If the primary is a mapping object (e.g., a dictionary), the subscript must have a type compatible with the mapping's key type, and the mapping is then asked to create a key/datum pair which maps the subscript to the assigned object. This can either replace an existing key/value pair with the same key value, or insert a new key/value pair (if no key with the same value existed).
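For example, with a dictionary (keys and values here are arbitrary):

```python
m = {'a': 1}
m['a'] = 2  # replaces the existing key/value pair with the same key
m['b'] = 3  # inserts a new key/value pair
assert m == {'a': 2, 'b': 3}
```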
(In the current implementation, the syntax for targets is taken to be the same as for expressions, and invalid syntax is rejected during the code generation phase, causing less detailed error messages.)
WARNING: Although the definition of assignment implies that overlaps between the left-hand side and the right-hand side are `safe' (e.g., "a, b = b, a" swaps two variables), overlaps within the collection of assigned-to variables are not safe! For instance, the following program prints "[0, 2]":
```python
x = [0, 1]
i = 0
i, x[i] = 1, 2   # i is rebound to 1 first, so x[1] is assigned 2
print x          # prints [0, 2]
```
Most western nations advance the clock ahead one hour during the summer months. This period is called daylight saving time. In most of the United States, Mexico, and Canada it lasts from the first Sunday in April to the last Sunday in October. The nations of the EU also observe daylight saving time, but they call it the summer time period. It begins a week earlier than its North American counterpart but ends at the same time.
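Under the rule described above (first Sunday in April through last Sunday in October), the boundary dates can be computed with Python's standard `calendar` module; the helper names below are invented for this sketch:

```python
import calendar

def first_sunday(year, month):
    # monthcalendar() returns the month as Monday-first weeks of day
    # numbers, with 0 for days outside the month; Sunday is the last column.
    for week in calendar.monthcalendar(year, month):
        if week[calendar.SUNDAY]:
            return week[calendar.SUNDAY]

def last_sunday(year, month):
    # Same idea, scanning the weeks from the end of the month backwards.
    for week in reversed(calendar.monthcalendar(year, month)):
        if week[calendar.SUNDAY]:
            return week[calendar.SUNDAY]

# For 2023, the period described would run April 2 through October 29.
assert first_sunday(2023, 4) == 2
assert last_sunday(2023, 10) == 29
```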
On Error Resume Next causes execution to continue with the statement immediately following the statement that caused the run-time error, or with the statement immediately following the most recent call out of the procedure containing the On Error Resume Next statement. This statement allows execution to continue despite a run-time error. You can place the error-handling routine where the error would occur rather than transferring control to another location within the procedure. An On Error Resume Next statement becomes inactive when another procedure is called, so you should execute an On Error Resume Next statement in each called routine if you want inline error handling within that routine.
```vb
Public Sub OnErrorDemo()
    On Error GoTo ErrorHandler   ' Enable error-handling routine.
    Dim x As Integer = 32
    Dim y As Integer = 0
    Dim z As Integer
    z = x / y                    ' Creates a divide by zero error
    On Error GoTo 0              ' Turn off error trapping.
    On Error Resume Next         ' Defer error trapping.
    z = x / y                    ' Creates a divide by zero error again
    If Err.Number = 6 Then       ' Tell user what happened. Then clear the Err object.
        Dim Msg As String
        Msg = "There was an error attempting to divide by zero!"
        MsgBox(Msg, , "Divide by zero error")
        Err.Clear()              ' Clear Err object fields.
    End If
    Exit Sub                     ' Exit to avoid handler.
ErrorHandler:                    ' Error-handling routine.
    Select Case Err.Number       ' Evaluate error number.
        Case 6                   ' Divide by zero error
            MsgBox("You attempted to divide by zero!")
            ' Insert code to handle this error
        Case Else
            ' Insert code to handle other situations here...
    End Select
    Resume Next                  ' Resume execution at the same line
                                 ' that caused the error.
End Sub
```
Macro security settings determine how permissive Excel should be about allowing macros to be run on your computer. There are four security levels: Very High, High, Medium, and Low. You control these in the Security dialog box (Tools menu, Options command, Security tab, Macro Security button), as shown in the following illustration.
IntelliSense is Microsoft’s implementation of autocompletion, best known for its use in the Microsoft Visual Studio integrated development environment. In addition to completing the symbol names the programmer is typing, IntelliSense serves as documentation and disambiguation for variable names, functions and methods using reflection.
Collections group related items together. They are heterogeneous in nature, meaning the members of a collection need not share the same data type. This is very handy, since we don't always need a collection of a single type.
VBA, or Visual Basic for Applications, is the simple programming language that can be used within Excel 2007 (and earlier versions, though there are a few changes that have been implemented with the Office 2007 release) to develop macros and complex programs. The advantages of which are:
Declares the name, parameters, and code that define a Sub procedure.
While you might visualize a Visual Studio project as a series of procedures that execute in a sequence, in reality, most programs are event driven—meaning the flow of execution is determined by external occurrences called events.
An event is a signal that informs an application that something important has occurred. For example, when a user clicks a control on a form, the form can raise a Click event and call a procedure that handles the event. Events also allow separate tasks to communicate. Say, for example, that your application performs a sort task separately from the main application. If a user cancels the sort, your application can send a cancel event instructing the sort process to stop.
This section describes the terms and concepts used with events in Visual Basic.
You declare events within classes, structures, modules, and interfaces using the Event keyword, as in the following example:
Event AnEvent(ByVal EventNumber As Integer)
An event is like a message announcing that something important has occurred. The act of broadcasting the message is called raising the event. In Visual Basic, you raise events with the RaiseEvent statement, as in the following example:
Events must be raised within the scope of the class, module, or structure where they are declared. For example, a derived class cannot raise events inherited from a base class.
Any object capable of raising an event is an event sender, also known as an event source. Forms, controls, and user-defined objects are examples of event senders.
Event handlers are procedures that are called when a corresponding event occurs. You can use any valid subroutine with a matching signature as an event handler. You cannot use a function as an event handler, however, because it cannot return a value to the event source.
Visual Basic uses a standard naming convention for event handlers that combines the name of the event sender, an underscore, and the name of the event. For example, the Click event of a button named button1 would be named Sub button1_Click.
We recommend that you use this naming convention when defining event handlers for your own events, but it is not required; you can use any valid subroutine name.
Before an event handler becomes usable, you must first associate it with an event by using either the Handles or AddHandler statement.
WithEvents and the Handles Clause
The WithEvents statement and Handles clause provide a declarative way of specifying event handlers. An event raised by an object declared with the WithEvents keyword can be handled by any procedure with a Handles statement for that event, as shown in the following example:
```vb
' Declare a WithEvents variable.
Dim WithEvents EClass As New EventClass

' Call the method that raises the object's events.
Sub TestEvents()
    EClass.RaiseEvents()
End Sub

' Declare an event handler that handles multiple events.
Sub EClass_EventHandler() Handles EClass.XEvent, EClass.YEvent
    MsgBox("Received Event.")
End Sub

Class EventClass
    Public Event XEvent()
    Public Event YEvent()

    ' RaiseEvents raises both events.
    Sub RaiseEvents()
        RaiseEvent XEvent()
        RaiseEvent YEvent()
    End Sub
End Class
```
The WithEvents statement and the Handles clause are often the best choice for event handlers because the declarative syntax they use makes event handling easier to code, read and debug. However, be aware of the following limitations on the use of WithEvents variables:
- You cannot use a WithEvents variable as an object variable. That is, you cannot declare it as Object—you must specify the class name when you declare the variable.
- Because shared events are not tied to class instances, you cannot use WithEvents to declaratively handle shared events. Similarly, you cannot use WithEvents or Handles to handle events from a Structure. In both cases, you can use the AddHandler statement to handle those events.
- You cannot create arrays of WithEvents variables.
WithEvents variables allow a single event handler to handle one or more kinds of events, or multiple event handlers to handle the same kind of event.
Although the Handles clause is the standard way of associating an event with an event handler, it is limited to associating events with event handlers at compile time.
In some cases, such as with events associated with forms or controls, Visual Basic automatically stubs out an empty event handler and associates it with an event. For example, when you double-click a command button on a form in design mode, Visual Basic creates an empty event handler and a WithEvents variable for the command button, as in the following code:
```vb
Friend WithEvents Button1 As System.Windows.Forms.Button

Protected Sub Button1_Click() Handles Button1.Click
End Sub
```
AddHandler and RemoveHandler
The AddHandler statement is similar to the Handles clause in that both allow you to specify an event handler. However, AddHandler, used with RemoveHandler, provides greater flexibility than the Handles clause, allowing you to dynamically add, remove, and change the event handler associated with an event. If you want to handle shared events or events from a structure, you must use AddHandler.
AddHandler takes two arguments: the name of an event from an event sender such as a control, and an expression that evaluates to a delegate. You do not need to explicitly specify the delegate class when using AddHandler, since the AddressOf statement always returns a reference to the delegate. The following example associates an event handler with an event raised by an object:
AddHandler Obj.XEvent, AddressOf Me.XEventHandler
RemoveHandler, which disconnects an event from an event handler, uses the same syntax as AddHandler. For example:
RemoveHandler Obj.XEvent, AddressOf Me.XEventHandler
In the following example, an event handler is associated with an event, and the event is raised. The event handler catches the event and displays a message.
Then the first event handler is removed and a different event handler is associated with the event. When the event is raised again, a different message is displayed.
Finally, the second event handler is removed and the event is raised for a third time. Because there is no longer an event handler associated with the event, no action is taken.
```vb
Module Module1
    Sub Main()
        Dim c1 As New Class1
        ' Associate an event handler with an event.
        AddHandler c1.AnEvent, AddressOf EventHandler1
        ' Call a method to raise the event.
        c1.CauseTheEvent()
        ' Stop handling the event.
        RemoveHandler c1.AnEvent, AddressOf EventHandler1
        ' Now associate a different event handler with the event.
        AddHandler c1.AnEvent, AddressOf EventHandler2
        ' Call a method to raise the event.
        c1.CauseTheEvent()
        ' Stop handling the event.
        RemoveHandler c1.AnEvent, AddressOf EventHandler2
        ' This event will not be handled.
        c1.CauseTheEvent()
    End Sub

    Sub EventHandler1()
        ' Handle the event.
        MsgBox("EventHandler1 caught event.")
    End Sub

    Sub EventHandler2()
        ' Handle the event.
        MsgBox("EventHandler2 caught event.")
    End Sub

    Public Class Class1
        ' Declare an event.
        Public Event AnEvent()

        Sub CauseTheEvent()
            ' Raise an event.
            RaiseEvent AnEvent()
        End Sub
    End Class
End Module
```
Declares and allocates storage space for one or more variables.
The access level of a declared element is the extent of the ability to access it, that is, what code has permission to read it or write to it. This is determined not only by how you declare the element itself, but also by the access level of the element’s container. Code that cannot access a containing element cannot access any of its contained elements, even those declared as Public. For example, a Public variable in a Private structure can be accessed from inside the class that contains the structure, but not from outside that class.
Now that you know what test-first means and how you can build a testable architecture, you are ready to start writing some real unit tests. This chapter will explain what kinds of assertions are available, how a good unit test is set up, and more.
Mocks vs. Stubs
Before I start explaining how to build a unit test, I will first briefly describe what mocks and stubs are. Mock objects and stub objects are both 'fake' objects. This means that we can 'mock' an object (our repository, for example), creating a fake implementation of that object that does nothing. This is useful because we no longer have to query the database; we just query our fake object, which says "all right, I received some data from the database, here you have it".
The difference between mocks and stubs is that mock objects can let your tests fail. A mock repository allows us to validate that our repository's 'Add' method was called, raising an exception if it wasn't called. Or we could validate that our methods were called in a specific order. Stub objects on the other hand are just empty objects that don't track your actions.
Some people will make this distinction, some people treat both mock objects and stub objects as 'fake' objects, not caring about their behaviour, which is OK in my opinion.
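To make the distinction concrete, here is a small sketch using Python's unittest.mock (the repository API here is invented for illustration; the chapter's own examples use .NET):

```python
from unittest.mock import Mock

# A stub just hands back canned data; it records nothing we care about
# and cannot, by itself, make the test fail.
stub_repo = Mock()
stub_repo.get_all.return_value = ['alice', 'bob']
assert stub_repo.get_all() == ['alice', 'bob']

# A mock is used to verify behaviour: the assertion below raises
# AssertionError (failing the test) if add() was never called.
mock_repo = Mock()
mock_repo.add('carol')
mock_repo.add.assert_called_once_with('carol')
```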
To start with, you'll need a unit testing framework that contains the assertions you will need and is often able to run your tests. Below is a small list of frameworks:
- .NET - NUnit
- .NET - MSTest (built in with Visual Studio)
- Java - JUnit
- PHP - PHPUnit
- Python - PyUnit
Again, you can find a more extensive list here.
When writing unit tests, you usually have at least one test class for each class you want to test, and at least one test method for each method that you want to test. You should put these test classes in a different project, usually named [Projectname].Tests. When you are writing a test class, you should let the framework know that it is a test class. How you do this depends on the framework: some frameworks rely on convention, while others require you to mark the class with an annotation. MSTest, for example, requires a [TestClass] attribute above the class and a [TestMethod] attribute above each test method, whereas PHPUnit 'requires' you to start the name of your test method with 'test' to indicate that it is a test method (PHPUnit has annotations as well, but it's easier to stick to the conventions).
Not only can you create test methods inside your test class, you can also create methods that are run before each individual test and methods that are run when an object of that class is created. There are also similar 'teardown' methods: one that is run after each individual test and one that is run just before the object is destroyed. These kinds of methods are useful for setting up test data that is used across all methods, or for cleaning up resources after using them, resulting in better performance.
Below is a small example of this:
```csharp
[TestClass]
public class CalculatorTests
{
    private Calculator _calculator;

    [TestInitialize]
    public void Initialize()
    {
        _calculator = new Calculator();
    }

    [TestMethod]
    public void Sum_Add1And2_ShouldReturn3()
    {
        const int expected = 3;
        var result = _calculator.Sum(1, 2);
        Assert.AreEqual(expected, result);
    }
}
```
There are a couple of interesting things about the example above; first we see the initialize method and a test method as discussed above. The second thing we notice is the strange name for the test method. It consists of: [MethodName]_[ActionToPerform]_[ExpectedResult]. So we're testing the Sum method and we expect it to return 3 if we input 1 + 2. We do this because it allows us to see directly what is wrong when our test fails. Having a name that describes the scenario does indeed result in very long names, but it gives you a lot of information, especially if you want to print reports about your unit test runs.
The third thing we can see is the AAA, or Arrange - Act - Assert pattern. It means that each test method has 3 parts: the part where you arrange the data and objects you need (arrange), the part where you perform the action (act) and the part where you validate the output (assert). This is a very useful pattern because it gives you a nice structure for your tests.
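For comparison, the same test can be sketched with Python's unittest (PyUnit, listed earlier); the Calculator class is a stand-in defined inline so the example is self-contained:

```python
import unittest

class Calculator:
    def sum(self, a, b):
        return a + b

class CalculatorTests(unittest.TestCase):
    def setUp(self):
        # Run before each individual test (the 'initialize' method).
        self.calculator = Calculator()

    def test_sum_add1and2_should_return3(self):
        # Arrange
        expected = 3
        # Act
        result = self.calculator.sum(1, 2)
        # Assert
        self.assertEqual(expected, result)
```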
The pieces of code that validate the result of your tests are usually the Assertions (or sometimes the Verify method of your mock objects). There are different kinds of assertions. The most common kinds (I will just describe them; the syntax varies slightly between different frameworks and languages) are:
- Assertions that verify that two variables (don't) have the same value
- Assertions that verify that two objects (don't) have the same reference
- Assertions that verify that a variable is (not) null
- Assertions that verify that a variable is true/false
- Assertions that verify that an object is (not) of a specific type
Some frameworks provide other kinds of assertions, but most of them have the assertions listed above.
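In Python's unittest, the kinds listed above map onto methods like these (a sketch; the method names follow the standard library):

```python
import unittest

class AssertionKinds(unittest.TestCase):
    def test_common_assertions(self):
        a = [1, 2]
        b = [1, 2]
        self.assertEqual(a, b)           # two variables have the same value
        self.assertIsNot(a, b)           # two objects are not the same reference
        self.assertIsNone(None)          # a variable is null (None)
        self.assertTrue(len(a) == 2)     # a variable is true
        self.assertIsInstance(a, list)   # an object is of a specific type
```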
Next: Step-by-step walkthrough