Short Summaries of Articles about Mathematics
in the Popular Press
"The impossible puzzle," by Michael Brooks. New Scientist, 5 April 2003, pages 34-35.
This article reports on a lecture presented by the renowned physicist Stephen Hawking at the Dirac centennial celebration at the University of Cambridge. In the lecture, entitled "Gödel and the end of physics," Hawking presented the startling idea that there may be a fundamental obstruction to a "theory of everything." The work of the mathematician Kurt Gödel showed that any mathematical system contains self-referential statements that are not provable within that system. Models of the physical universe might have this same kind of incompleteness, Hawking suggested. "We and our models are both part of the Universe we are describing," Hawking is quoted as having said. "We are not angels who view the Universe from outside." As Brooks explains, this means that physical theories can be self-referential, so "we shouldn't be surprised if they are inconsistent or incomplete."
--- Allyn Jackson
Poison: Concurrent Termination
All programs must come to an end. At some point, a program will need to terminate, either because it has finished its task, or because the user has told it to terminate — or perhaps because the program has encountered an unrecoverable error. Especially in the latter two cases (where the termination is unanticipated), the program may have files or sockets open and so we need to terminate it gracefully, tidying up all such externalities.
In Haskell, we differentiate between pure computations and side-effecting computations. Pure computations, used by many parallel mechanisms, can be terminated at any time without needing to tidy up — or rather, all that needs tidying up is immediately apparent to the run-time. Side-effecting computations, which are used with concurrency mechanisms such as MVars, STM and our own CHP, need extra code to deal with termination.
Sequential imperative programs often deal with this via exception handlers — the termination becomes an exception that unwinds the call stack, tidying up all current work (call-stack entries) until it reaches the top of the program. Haskell’s error monads can provide similar functionality.
Concurrency introduces two issues that can make termination much more difficult. Firstly, there is the issue of notifying all concurrent threads about the termination. The termination usually originates in one concurrent thread — the thread that handled the user request, the thread that encountered the error, or the thread that was collecting the results of the work being done. This thread needs to tell all the other concurrent threads in the system that it is time to terminate — bearing in mind that the other threads might be busy doing something else (e.g. computation, reading from a file, waiting to communicate with other threads, and so on). Secondly, there is the problem that the program no longer consists of a single call stack that can be unwound. It instead consists of lots of different call stacks.
Haskell now has asynchronous exceptions, which can be used for terminating a concurrent system. Asynchronous exceptions are quite a neat idea, but they do require thought over the use of block and unblock to get the right semantics, and it can be a little overwhelming that exceptions might occur at any time. Asynchronous exceptions also introduce a book-keeping problem: which thread is responsible for throwing to which thread? You could keep a registry of threads and make any thread wishing to terminate throw to all of them (a machine-gun approach to terminating the system), but then you may end up with race hazards if new threads are created by a still-running thread while you are attempting to kill off all the threads, and so on. Furthermore, some threads may only be able to tidy up once their children threads have tidied up — for example, a thread may have initialised an external library and spawned children to work with the library, but may need to call a finalising function in the external library once all the children have tidied up — and not before.
CHP introduces poison as a way to terminate concurrent systems. The idea is that the communication/synchronisation network in your program already provides a way to link your program together. Using the channels to send termination messages is an approach fraught with race hazards (see the classic paper on the matter, although poison improves on the solution offered there) — poison is not about sending messages. In CHP you can poison channels, barriers and clocks (I’ll talk about channels here, but it applies equally to barriers and clocks). This sets the channel into a poisoned state. Forever after, any attempt to use that channel will cause a poison exception to be thrown in the thread that made the attempt. If a process is waiting on a channel when the other end is poisoned, the waiting process is woken and the exception is thrown. Thus poison is really a system of synchronous exceptions that can occur only when you attempt to communicate on a channel. This notifies your immediate neighbours (i.e. those with whom you share a channel) that they should tidy up and terminate. The key is that on discovering the poison, a process should poison all its channels (repeated poisonings of the same channel are benign, and will not throw a poison exception), thus spreading the poison around the network.
An example is shown in a sequence of diagrams below. The network begins unpoisoned (1), with boxes for processes and arrows for channels connecting them. The mid-bottom process introduces poison into its channels and terminates (2) — poisoned channels are shown green, and terminated processes with a red cross. Any process that notices the poison on any of its channels terminates and poisons its other channels (3,4) until the whole network has terminated (5). The poison may not happen to spread in such a lock-step fashion as shown here, but the ordering of termination using poison does not matter.
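The spreading behaviour in the diagrams is essentially a graph traversal: each process that notices poison poisons its remaining channels, which wakes its neighbours in turn. Here is a toy, language-agnostic sketch of that idea in Python (the process names and network shape are made up for illustration; this is not CHP’s API):

```python
from collections import deque

# Processes and the channels (edges) connecting them — an illustrative network.
edges = {
    "gen":    ["filter"],
    "filter": ["gen", "end"],
    "end":    ["filter", "print"],
    "print":  ["end"],
}

def spread_poison(start):
    """Simulate poison spreading outward from one process."""
    poisoned, queue = set(), deque([start])
    while queue:
        p = queue.popleft()
        if p in poisoned:
            continue            # repeated poisonings of the same process are benign
        poisoned.add(p)         # the process poisons all its channels and terminates
        queue.extend(edges[p])  # neighbours notice the poison when they next communicate
    return poisoned

print(sorted(spread_poison("end")))  # ['end', 'filter', 'gen', 'print']
```

Whichever process introduces the poison, the whole network eventually terminates, and the order of arrival does not matter.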
Poison should not deadlock a process network, because the ordering of poison does not matter, and once all the channels are poisoned, there is nothing further that a process can wait on without being woken up. There are some tricky corner cases with poison, but I will discuss those in another post. To show how poison is used in the code, I will use my recent prime number sieve example. Here was the code before:
import Control.Concurrent.CHP
import Control.Concurrent.CHP.Utils
import Control.Monad

filterDiv :: Integer -> Chanin Integer -> Chanout Integer -> CHP ()
filterDiv n input output = forever $ do
    x <- readChannel input
    when (x `mod` n /= 0) $ writeChannel output x

end :: Chanin Integer -> Chanout Integer -> CHP ()
end input output = do
    x <- readChannel input
    writeChannel output x
    (filterDiv x |->| end) input output

genStream :: Chanout Integer -> CHP ()
genStream output = mapM_ (writeChannel output) [2..]

primes :: Chanout Integer -> CHP ()
primes = genStream ->| end

main :: IO ()
main = runCHP_ $ do
    c <- oneToOneChannel
    primes (writer c) <||> (forever $ readChannel (reader c) >>= (liftIO . putStrLn . show))
This is a complete example program that runs forever spitting out primes. Let’s imagine that we want to stop the program after a time — for example, we may want only the first 100 primes. First, we must add a poison handler to each process. The filterDiv process shows the typical poison handler:
filterDiv :: Integer -> Chanin Integer -> Chanout Integer -> CHP ()
filterDiv n input output =
    (forever $ do
        x <- readChannel input
        when (x `mod` n /= 0) $ writeChannel output x
    ) `onPoisonRethrow` (poison input >> poison output)
That’s usually all that is involved — tacking an onPoisonRethrow block on the end that poisons all the channels that the filterDiv process is aware of. Here, the onPoisonRethrow block can either be outside the forever (which is broken out of by the poison exception) or inside the forever (rethrowing the poison exception would similarly break out of the forever). Our end process is more interesting, as it contains a parallel composition (via the |->| operator). These are the rules for parallel composition:
A parallel composition returns once all of its children have terminated (successfully, or through poison). Once they are all terminated, if any of them terminated because of poison, a poison exception is also thrown in the parent process.
So, back to our end process. There are two ways we could add a poison handler. This is the simple way, tacking on a poison handler as before:
end :: Chanin Integer -> Chanout Integer -> CHP ()
end input output =
    (do x <- readChannel input
        writeChannel output x
        (filterDiv x |->| end) input output
    ) `onPoisonRethrow` (poison input >> poison output)
This will give the correct effect: if any poison is encountered during the input or output on the first line of the do block, the handler will poison both channels, and if any poison is discovered in the filterDiv or end sub-processes, the handler will similarly poison the channels. In the latter case the sub-processes will already have poisoned input and output themselves, but again, this multiple poisoning is harmless.
We do not need to add handlers to genStream or primes; genStream only has one channel, so if poison is encountered there, all of its (one) channels must already be poisoned, and primes is merely a parallel composition of two processes that will deal with any poison encountered. So our final change is to main, to shut the program down after 100 primes have come out:
main :: IO ()
main = runCHP_ $ do
    c <- oneToOneChannel
    primes (writer c) <||>
        (do replicateM_ 100 $ readChannel (reader c) >>= (liftIO . putStrLn . show)
            poison (reader c))
That poison command is what introduces poison into our pipeline. Thereafter, our poison handlers will take care of spreading the poison all the way along the pipeline and shutting down our whole process network without fear of deadlock. So primes exits from being poisoned, our prime-printing mini-process exits successfully, and thus the parallel composition (and hence runCHP_) also exits with poison. This is quite typical of a CHP program, and is not (necessarily) indicative of a failure.
Poison is a fitting way to terminate CHP programs, and poison handlers are fairly simple to write. It is a good idea to add them to all your CHP programs, but since they are very formulaic I will often leave them out of my blog examples so that they don’t get in the way of what I’m trying to explain.
Purpose: To show that excess charge resides totally on the outside of an insulated conductor.
Procedure: Charge the metal cylinder by touching it with the Teflon rod (charged with fur or silk) a few times. Show that there is a charge on the cylinder by bringing it close to the electroscope. Then lower the ball into the hole in the cylinder, making sure not to touch the edges. Extract the ball with the same care after banging it around inside the cylinder. Touch it to the electroscope and note the absence of charge. Now touch the outside of the cylinder with the ball and demonstrate with the electroscope that the ball is charged.
Hints: Be patient and careful. Ground the cage before the first trial, ground the ball on the rod between the sets, and be mindful not to touch the edges of the cylinder while extracting the ball from inside.
In case of wet weather one can use the following trick: charge the cylinder with a Wimshurst machine (lots of charge that doesn't leak off very fast) and use a pith ball instead of the electroscope. When the charge is taken from inside the cylinder the ball will stay immobile, while when taken from the outside the ball will deflect.
Boolean values are the two constant objects False and
True. They are used to represent truth values (although other
values can also be considered false or true). In numeric contexts
(for example when used as the argument to an arithmetic operator),
they behave like the integers 0 and 1, respectively. The built-in
function bool() can be used to cast any value to a Boolean,
if the value can be interpreted as a truth value (see section Truth
Value Testing above).
They are written as the constants False and True, respectively.
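The behaviour described above is easy to verify interactively; for example:

```python
# Booleans behave like the integers 0 and 1 in numeric contexts:
print(True + True)   # 2
print(False * 10)    # 0

# bool() casts any value to a Boolean via truth value testing:
print(bool(0), bool(3.5))             # False True
print(bool(""), bool("x"), bool([]))  # False True False

# They are written as False and True:
print(repr(True), repr(False))        # True False
```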
1. What is the area of a square whose diagonal is 1 unit longer than the length of a side?
2. If a 3-digit number is chosen at random from the set of all 3-digit numbers, what is the probability that all 3 digits will be prime?
3. For what value of "b" will the solution of the system consist of pairs of positive numbers?
4. Find the equation of a line perpendicular to 3x-5y=17 that goes through the point (4,3).
5. Ms. Sanders has Epogen 2200 units subcutaneous injection 3 times a week ordered for the anemia caused by chronic renal failure. Epogen 3000 units/mL is available. How many milliliters will the patient receive for each dose?
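A few of these can be checked numerically. The sketch below works questions 1, 4 and 5 under their most natural readings (the variable names are mine, and question 4 is expressed in slope-intercept form):

```python
import math

# Q1: square whose diagonal is 1 unit longer than its side:
# s*sqrt(2) = s + 1  =>  s = 1/(sqrt(2) - 1) = sqrt(2) + 1, and area = s**2.
s = 1 / (math.sqrt(2) - 1)
area = s ** 2                # = 3 + 2*sqrt(2), about 5.828 square units

# Q4: line perpendicular to 3x - 5y = 17 through (4, 3).
# The given line has slope 3/5, so the perpendicular slope is -5/3:
m = -5 / 3
b = 3 - m * 4                # y = (-5/3)x + 29/3

# Q5: Epogen dose: 2200 units per dose at a concentration of 3000 units/mL:
dose_ml = 2200 / 3000        # about 0.73 mL per dose

print(round(area, 3), round(b, 3), round(dose_ml, 2))  # 5.828 9.667 0.73
```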
Is it possible to reverse global warming? by Cathy Wurzer, Minnesota Public Radio
ST. PAUL, Minn. — July was a very hot month, the 10th straight month that the average temperature in Minnesota was above normal. Is this a preview of the type of weather that will be normal in the future?
There is a lot of evidence that the earth is warming. As we feel the consequences of the changing climate, more people are wondering if the warming trend can be reversed.
Peter Snyder thinks about these things. He's a climate scientist at the University of Minnesota. He talked about his research with Morning Edition host Cathy Wurzer.
- Morning Edition, 08/13/2012, 8:25 a.m.
Ocean Acidification: How Bad Can it Get?
Feely, R.A., Doney, S.C. and Cooley, S.R. 2009. Ocean acidification: Present conditions and future changes in a high-CO2 world. Oceanography 22: 36-47.
Tans, P. 2009. An accounting of the observed increase in oceanic and atmospheric CO2 and an outlook for the future. Oceanography 22: 26-35.
Getting right to the crux of the matter, the three researchers write in their abstract that "estimates based on the Intergovernmental Panel on Climate Change business-as-usual emission scenarios suggest that atmospheric CO2 levels could approach 800 ppm near the end of the century," and that "corresponding biogeochemical models for the ocean indicate that surface water pH will drop from a pre-industrial value of about 8.2 to about 7.8 in the IPCC A2 scenario by the end of this century." And, they warn that, as a result, "the skeletal growth rates of calcium-secreting organisms will be reduced," ending with the statement that "if anthropogenic CO2 emissions are not dramatically reduced in the coming decades, there is the potential for direct and profound impacts on our living marine ecosystems."
However, in the very same issue of Oceanography -- in the article that appears just before the Feely et al. paper, in fact -- NOAA's Pieter Tans presents a much different take on the subject.
Tans begins his analysis by indicating that the effect of CO2 on climate -- and, on its own concentration in the atmosphere -- "depends primarily on the total amount emitted, not on the rate of emissions," and that "unfortunately, the IPCC reports have not helped public understanding of this fact by choosing, somewhat arbitrarily, a rather short time horizon (100 years is most commonly used) for climate forcing by CO2." Thus, "instead of adopting the common economic point of view, which, through its emphasis on perpetual growth, implicitly assumes infinite earth resources," Tans notes that the cumulative extraction of fossil-fuel carbon currently stands at about 345 GtC, and that there appears to be another 640 or so GtC of proven reserves, yielding a total original reserve of about 1,000 GtC, from which he proceeds with his analysis.
The figure below shows much of the past and projected history of fossil-fuel carbon utilization, together with historical and projected atmospheric CO2 concentrations out to the year 2500, as calculated by Tans. As can be seen there, his analysis indicates that the air's CO2 concentration peaks well before 2100 and at only 500 ppm, as compared to the 800 ppm that Feely et al. take from the IPCC. In addition, by the time the year 2500 rolls around, the air's CO2 concentration actually drops back to about what it is today.
Figure 1. Past and projected trends of fossil-fuel carbon utilization and the atmosphere's CO2 concentration. Adapted from Tans (2009).
Based on his more modest projections of future atmospheric CO2 concentrations, Tans also finds the projected pH reduction of ocean waters in the year 2100 (as compared to preindustrial times) to be only one-half of the 0.4 value calculated by Feely et al., with a recovery to a reduction of only a tad over 0.1 pH unit by 2500, which is less than the range of pH values that are typical of today's oceans (8.231 in the Arctic Ocean minus 8.068 in the North Indian Ocean equals 0.163, according to Feely et al.).
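The rough proportionality between the pH figures and the CO2 scenarios can be reproduced with a deliberately crude approximation. The sketch below assumes surface-ocean [H+] scales roughly linearly with atmospheric pCO2 — a simplification of real carbonate chemistry, and my own illustration rather than either paper's method:

```python
import math

def approx_ph_drop(pco2_old, pco2_new):
    # Very rough first-order estimate: if [H+] is treated as roughly
    # proportional to pCO2, then the pH drop is log10 of the ratio.
    # Real carbonate chemistry (buffering, temperature, alkalinity)
    # is considerably more involved.
    return math.log10(pco2_new / pco2_old)

print(round(approx_ph_drop(280, 800), 2))  # ~0.46, near Feely et al.'s 0.4 drop
print(round(approx_ph_drop(280, 500), 2))  # ~0.25, near Tans's roughly halved value
```

Even this back-of-the-envelope version shows why halving the projected CO2 peak roughly halves the projected pH drop.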
Thus, things may not be quite as bad as the IPCC and other scientists make them out to be, especially when it comes to the potential effects of anthropogenic CO2 emissions on the air's CO2 content and oceanic pH values.
Species at Risk: Leatherback sea turtle
In 1982, scientists estimated that there were 115,000 adult female leatherback sea turtles worldwide. Recent estimates have placed the number between 20,000 and 30,000.
The Pacific leatherback sea turtle is in such severe decline that scientists believe they will become extinct in the Pacific Ocean within the next 30 years unless significant actions are taken to protect them very quickly.
Incidental catch in fishing gear, poaching of their eggs and ingestion of plastics have all contributed to the listing of the leatherback sea turtle as endangered. The World Conservation Union (IUCN) has concluded that most leatherback nesting populations in the Pacific have declined more than 80 percent.
In some other areas, leatherback sea turtle populations are still seriously reduced but doing better. For example, nesting on U.S. beaches along the Atlantic coast has been increasing in recent years.
In 2007, Oceana petitioned the federal government to designate critical habitat for leatherback sea turtles off the U.S. West Coast. In response, in January 2012, the National Marine Fisheries Service finalized protection of 41,914 square miles of critical ocean habitat off the shores of Washington, Oregon and California for the endangered Pacific leatherback sea turtle.
The final rule establishes critical habitat in areas where leatherbacks feed on jellyfish after swimming 6,000 miles across the ocean from nests in Indonesia. This is the first permanent safe haven for leatherbacks designated in continental U.S. waters and is the largest area set aside to protect sea turtle habitat in the United States or its territories.
Check out the map below to see the areas designated as Critical Habitat.
Climate change has become a hot topic in more ways than one, so instead of jumping straight into the issues surrounding sea level rise it’s worth discussing how climate change or global warming can sometimes play out in the news.
Let’s start with something cool, concrete, and agreeable: numbers. Most people would agree that .3 is a small number; it’s a fraction, three tenths, just a little bit more than zero.
And since .3 is a small number, it seems safe to say that .3 millimeters is a small amount. In fact, .3 mm is so little that it’s difficult to imagine .3 mm of anything making the news, unless it was in reference to a crime scene maybe... police find .3 mm of poison in the freezer. But climate change has become so political that sometimes things that aren’t news make headlines, obscuring the real news.
For example, in May scientists with the University of Colorado Sea Level Research Group made a .3 mm per year correction to the rate of sea level rise being measured by satellites. This type of modification happens all the time, which is explained on the UC Sea Level Research Group website: “Since 1993, measurements from the TOPEX and Jason series of satellite radar altimeters have allowed estimates of global mean sea level. These measurements are continuously calibrated against a network of tide gauges. When seasonal and other variations are subtracted, they allow estimation of the global mean sea level rate. As new data, models and corrections become available, we continuously revise these estimates (about every two months) to improve their quality.” This is a complicated way of saying that the way sea level rise is measured is constantly being improved and adjusted as more information becomes available.
Adding .3 mm per year to the rate of sea level rise was just another very tiny improvement in the ongoing effort to determine the average sea level rate across the globe, which is no small task by the way since it involves measuring the entire ocean. But this minor adjustment actually made headlines and was covered by Forbes and Fox News…two cases of .3 mm making the news. (Forbes published an op-ed online written by a senior fellow at the Heartland Institute questioning the validity of the .3 correction and accusing NASA-funded scientists of doctoring data, for the full “scoop” click here and then Fox News ran a story featuring the writer of the op-ed.)
According to Josh Willis, an oceanographer at JPL, the author of the Forbes piece is missing the point, “Sea levels are going up about 3 mm per year and this is .3 mm per year, it’s 10 times smaller. So 3 mm per year that’s about an inch per decade and this effect is an inch every 100 years, so it’s totally unimportant for sea level rise, it’s purely an academic thing,” said Willis. “The point is that researchers in the field are suggesting three to four feet of sea level rise is not unlikely, it’s a distinct possibility. So we’re looking at three or four feet in the next 100 years and this guy is arguing over one inch in the next 100 years.”
Focusing on an insignificant .3 mm per year correction draws attention away from the real news that sea levels are rising 3 mm per year (this time there’s no dot, period or point in front of the three). This type of non-news is frustrating to scientists like Willis, “I think a lot of people read Forbes, I don’t know how many people read their blog, but you know a lot of people read it and it’s read by a lot of business minded folks,” said Willis. “And that’s a big thing because one of the things we have to cope with in climate change and global warming is we have to get the business community on board to help think of solutions.”
Innovative business solutions that could help vulnerable coastal villages such as Shishmaref, Alaska relocate to higher ground without breaking the bank. A 2006 study by the U.S. Army Corps of Engineers estimates that Shishmaref has 10 to 15 years until the coast erodes allowing the sea to move in. With the clock ticking five years later, Shishmaref residents haven’t budged because the village doesn’t have the $100 to 200 million it costs to relocate. (For a detailed description of the challenges facing Shishmaref read this case study by the Climate Adaptation Knowledge Exchange.)
The story of Shishmaref, a community facing a bleak future from a perfect storm of global warming factors including sea level rise, is real news, not a .3 correction to the way sea level rise is measured. Stay tuned for real news on sea level rise in a future post.
Perl 5 is known to have very good Unicode support (starting from version 5.8, the later the better), but people still complain that it is hard to use. The most important reason for that is that the programmer needs to keep track of which strings have been decoded, and which are meant to be treated as binary strings. And there is no way to reliably introspect variables to find out if they are binary or text strings.
In Perl 6, this problem has been addressed by introducing separate types. Str holds text strings. String literals in Perl 6 are of type Str. Binary data is stored in Buf objects. There is no way to confuse the two. Converting back and forth is done with the decode and encode methods:
my $buf = Buf.new(0x6d, 0xc3, 0xb8, 0xc3, 0xbe, 0x0a);
$*OUT.write($buf);

my $str = $buf.decode('UTF-8');
print $str;
Both of those output operations have the same effect, and print møþ to the standard output stream, followed by a newline. Buf.new(...) takes a list of integers between 0 and 255, which are the byte values from which the new byte buffer is constructed. $*OUT.write($buf) writes the $buf buffer to standard output. $buf.decode('UTF-8') decodes the buffer, and returns a Str object (or dies if the buffer doesn’t constitute valid UTF-8). The reverse operation is Str.encode, which returns a Buf.
A Str can simply be printed with print; when written to an output stream, it is encoded as UTF-8 by default. The Perl 6 specification allows the user to change the default, but no compiler implements that yet.
For reading, you can use the .read($no-of-bytes) method to read a Buf, and .get for reading a line as a Str. The read and write methods are also present on sockets, not just on the ordinary file and stream handles.
One of the particularly nasty things you can accidentally do in Perl 5 is concatenating text and binary strings, or combining them in another way (like with join or string interpolation). The result of such an operation is a string that happens to be broken, but only if the binary string contains any bytes above 127 — which can be a nightmare to debug. In Perl 6, you get Cannot use a Buf as a string when you try that, avoiding that trap.
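The same text/binary split exists in Python 3 (str vs bytes), which makes the behaviour easy to try even without a Perl 6 compiler. This is an analogy, not Perl 6 code:

```python
# The same bytes as the Buf example above: UTF-8 for "møþ" plus a newline.
buf = bytes([0x6d, 0xc3, 0xb8, 0xc3, 0xbe, 0x0a])

s = buf.decode('UTF-8')            # decode: bytes -> text
assert s.encode('UTF-8') == buf    # encode is the reverse operation

try:
    s + buf                        # mixing text and binary is a type error
except TypeError as e:
    print(e)
```

As in Perl 6, the error is raised at the point of mixing, rather than silently producing a broken string.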
The existing Perl 6 compilers do not yet provide the same level of Unicode support as Perl 5 does, but the bits that are there are much harder to misuse.
To detect blood the smell first has to reach the shark.
From The Naked Scientist:
Water molecules in general are carried to the shark by water currents. If there are no water currents then it is molecular diffusion, the random movement of molecules, that disperses the odour away from the source. In general the travel time of odour depends entirely on the local currents. Near the water surface, water velocities in the ocean can range between a few centimetres per second on a very calm day and several metres per second in a strong current.
From the ReefQuest Center for Shark Research:
A shark's lateral line system enables it to detect subtle water movements. Therefore, when a shark's acute olfactory system detects an attractive chemical, all it needs to do is turn into the current. Sooner or later, this will bring a shark to the source of the odor.
From PBS - Jean-Michel Costeau Ocean Adventures:
The notion that the mighty great white can smell blood from a great distance has been central to modern shark mythology. They can detect some scents at concentrations as low as 1 part per 25 million, which translates to about a third of a mile away in the open ocean.
From an Interview with shark expert Samuel Gruber:
It comes down to the concentration of the stimulating chemical at the nose's receptor cell that determines if an animal will detect a smell. That level is parts per million in sharks.
From the American Museum of Natural History:
The lemon shark can detect tuna oil at one part per 25 million -- that's equivalent to about 10 drops in an average-sized home swimming pool. Other types of sharks can detect their prey at one part per 10 billion; that's one drop in an Olympic-sized swimming pool! Sharks can detect these low concentrations of chemicals at prodigious distances -- up to several hundred meters (the length of several football fields) -- depending on a number of factors, particularly the speed and direction of the water current.
From Sharks of the Atlantic Research and Conservation Coalition (ShARCC):
Some species are able to detect as little as one part per million of blood in seawater.
... however, it is suggested that sharks are not stimulated by mammal blood (e.g. from humans) in the same way as they are by fish blood.
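The swimming-pool figures quoted above can be sanity-checked with a little arithmetic. The pool volumes and drop size below are my own assumptions, so this only confirms the order of magnitude of the "10 drops" and "one drop" claims:

```python
# Hypothetical volumes (assumptions, not from the sources):
home_pool_l = 40_000        # an average home pool, in litres
olympic_pool_l = 2_500_000  # an Olympic pool holds about 2.5 million litres
drop_l = 0.05e-3            # one drop is roughly 0.05 mL

# 1 part per 25 million of a home pool, expressed in drops:
home_drops = home_pool_l / 25e6 / drop_l        # ~32 drops
# 1 part per 10 billion of an Olympic pool, expressed in drops:
olympic_drops = olympic_pool_l / 10e9 / drop_l  # ~5 drops

print(round(home_drops), round(olympic_drops))  # 32 5
```

Both come out within a small factor of the quoted figures, which is about as much agreement as such loosely stated thresholds allow.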
Forget the world, the Sun is not enough. Astronomers have found a planet outside our own Solar System, which has as many as four suns! Yes, this planet, thought to be a gas giant, orbits four different suns.
The planet was discovered by volunteers who were using the Planethunters.org website, which allows enthusiastic astronomy amateurs access to a lot of data from NASA’s Kepler space telescope. Kepler is built specifically for detecting exoplanets. The planet, named PH1 after the website, is just 5000 light years away, a stone’s throw on cosmological distance scales.
The technical ArXiv report is here: http://arxiv.org/abs/1210.3612
The planet is believed to be about the size of Neptune, or just a bit larger. The interesting thing about this planet is the number of suns: there are four of them. Now, maintaining a stable orbit around four different suns is a very difficult and delicate problem. While it is moderately difficult to calculate the stable orbit configurations around one star, it is a huge pain to do it around two stars, and all but impossible for four suns! There are virtually no stable points in the gravitational field in the region where the planet can reside for an indefinite period of time.
We all know a single star – our Sun is an example. Now, imagine another star around it. This forms a binary system of stars. These are very abundant in the Universe – two stars circling around each other – and these reveal a lot of information about star formation and their subsequent evolution. Now, add two more stars orbiting this central binary system of stars! This is a highly improbable configuration. These outer stars will have a very hard time following a stable orbit around the core binary.
Now, throw in a gas giant planet in the mix! And what you get is utter confusion, if you could somehow see the pattern in the gravitational field. No one knows for how long this planet has existed or for how long it will exist.
One can only imagine the spectacular sunrise and sunset on the planet, but then all four suns won’t be rising or setting at the same time.
Summary of 2010 Madison Plateau, Yellowstone Earthquake Swarm
Retrospective analysis shows that the 2010 Madison Plateau swarm began on January 15, 2010 with a few small earthquakes and picked up in intensity on the 17th of January. By the end of February 2010, earthquake activity at Yellowstone had returned to near-background levels, but activity has picked up somewhat in early April 2010. The swarm is located about 10 miles (16 km) northwest of the Old Faithful area on the northwestern edge of the Yellowstone Caldera. Swarms have occurred in this area several times over the past 30 years. Visual observation of landforms and geothermal features by Yellowstone National Park personnel did not show any changes that could be attributed to the earthquakes.
This swarm is now the second largest recorded swarm at Yellowstone. It was longer (in time) and included more earthquakes than last year's swarm beneath Yellowstone Lake (December '08/January '09). Calculations, by the University of Utah Seismology Research Group, of the total seismic energy released by all the swarm earthquakes corresponds to one earthquake with an approximate magnitude of 4.4. The largest recorded swarm at Yellowstone remains the Fall 1985 swarm, which was located in a similar location, in the NW corner of the Yellowstone Caldera.
As of April 6, 2010 a total of 2,347 earthquakes had been automatically located for the entire swarm, including 16 with a magnitude greater than 3.0; 141 with M2.0-2.9; 742 with M1.0-1.9; and 1,361 with M0.0-0.9. The largest events were a pair of earthquakes of magnitude 3.7 and 3.8 that occurred after 11 PM MST on January 20, 2010. Both events were felt throughout the park and in surrounding communities in Wyoming, Montana, and Idaho.
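As a rough illustration of how such counts combine into a single "equivalent" event, each magnitude band can be converted to energy with the standard Gutenberg-Richter energy relation, log10 E = 4.8 + 1.5 M (E in joules), and the energies summed. This is only a sketch: it assumes every event in a band sits at the band's midpoint magnitude, and it is not the University of Utah group's actual calculation.

```python
import math

# Gutenberg-Richter energy relation (E in joules): log10(E) = 4.8 + 1.5 * M
def energy_joules(m):
    return 10 ** (4.8 + 1.5 * m)

def equivalent_magnitude(counts):
    """counts: list of (representative magnitude, number of events)."""
    total = sum(n * energy_joules(m) for m, n in counts)
    return (math.log10(total) - 4.8) / 1.5

# Magnitude bands reported for the 2010 Madison Plateau swarm,
# represented here by their midpoints (an approximation).
swarm = [(3.45, 16), (2.45, 141), (1.45, 742), (0.45, 1361)]

m_eq = equivalent_magnitude(swarm)
print(f"equivalent magnitude ~ M{m_eq:.1f}")  # close to the reported M4.4
```

Because energy grows by a factor of about 32 per magnitude unit, the handful of M3+ events dominates the total; the 1,361 smallest events contribute almost nothing.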
See the University of Utah Seismograph Stations for the most recent earthquake data. Analysts continue to work through all the automatic earthquake locations, and are refining hypocenter locations, depths and magnitudes for inclusion in the earthquake catalog. As the events are refined, they are listed on the UUSS website and loaded into the ANSS catalog. Seismograph recordings are also available online by clicking on the station of interest on the Yellowstone seismograph network station map.
Swarms are common at Yellowstone
The number of earthquakes per day throughout the swarm was well above average at Yellowstone. Nevertheless, swarms are common at Yellowstone, with 100s to 1000s of events, some of which can reach magnitudes greater than 4.0. There were about 900 earthquakes during the December 2008 - January 2009 Yellowstone Lake swarm. The largest earthquake was a magnitude 3.9. The 1985 swarm, also on the northwest rim of the caldera but several miles from the current swarm, lasted for three months. During the 1985 swarm there were over 3000 total events recorded, with magnitudes ranging up to M4.9.
Although we give earthquake counts for previous swarms, it is not strictly correct to compare small differences in the number of earthquakes from one swarm to another. The number of earthquakes located depends on how close the earthquakes are to the monitoring equipment, the type and number of the seismometers in the network, and the software for analyzing the earthquakes. Our current monitoring capabilities allow for us to record many more earthquakes than we recorded in 1985, especially on the lower end of the magnitude scale. Even in the past year, the difference in the swarm locations and a change in software used to analyze the earthquakes makes it difficult to directly compare the earthquake count from last year's Yellowstone Lake Swarm to the current swarm. However, the earthquake count is still a useful number, especially for comparing the number of earthquakes from a swarm to other days during that year. To see the differences throughout the year in earthquake counts, please see Graphs of earthquake activity for the years 1994 to 2009.
Seismologists Continue to Review the Earthquakes
Earthquakes with magnitudes greater than 2.5 are automatically located and then automatically plotted on the University of Utah Map of Recent Earthquakes. The smaller events must be analyzed by a seismic analyst who determines which are the correct earthquakes from a specific area. Because the smaller events need to be individually located, they are added to the map later than those that are automatically located. The delay in reporting the smaller earthquakes is usually not very noticeable, except when there are large numbers of very small earthquakes. The smaller earthquakes can be viewed on the University of Utah Yellowstone seismic network helicorders. Please keep in mind that all of the earthquakes will be analyzed, but it will take time to get to the smaller ones. To learn more about why seismologists need to review earthquakes, see our Frequently Asked Question, How are Yellowstone earthquakes analyzed and mapped?
If you feel an earthquake, please report it.
Many of the larger (> M 2.5) earthquakes have been felt in the Park and in the surrounding areas. If you feel earthquakes, please fill out a form on the USGS "Did You Feel It?" web site. Information collected from the form is used for scientific research. Maps are generated by the form information for each felt earthquake. For more information about what others have felt, see the shake map created by responses after the M3.8 on Wednesday, January 20, 2010 at 23:16.
We Continue to Monitor Yellowstone Volcano
YVO staff from the USGS, University of Utah, and Yellowstone National Park continue to carefully review all data streams that are recorded in real-time. At this time, there is no reason to believe that magma has risen to a shallow level within the crust or that a volcanic eruption is likely. Yellowstone National Park is in a region of active seismicity associated with regional Basin and Range extension of the Western U.S., as well as volcanism of the Yellowstone volcanic field. Pressurization due to crustal magma bodies of the Yellowstone hotspot and associated shallow geothermal reservoirs can also contribute to earthquakes. Scientists continue to research the origin of these and other Yellowstone earthquakes.
The Yellowstone Volcano Observatory (YVO) is a partnership of the U.S. Geological Survey (USGS), Yellowstone National Park, and University of Utah to strengthen the long-term monitoring of volcanic and earthquake unrest in the Yellowstone National Park region. Yellowstone is the site of the largest and most diverse collection of natural thermal features in the world and the first National Park. YVO is one of the five USGS Volcano Observatories that monitor volcanoes within the United States for science and public safety.
Other items of interest
- Closest online helicorder: YMR. It is also interesting to view helicorders that record the seismic activity from further away, such as YTP. YTP is more than 60 km away and therefore filters out the smaller earthquakes.
- Earthquake Data: ANSS catalog search
- Jan 8, 2009: Yellowstone Lake Swarm Summary Page, compilation of swarm information for the Dec 2008 - Jan 2009 Yellowstone swarm.
- Oct 2004 Web Article: Earthquake Swarms at Yellowstone
- Yearly earthquake plots: Graphs of earthquake activity for the years 1994 to 2009
- The Old Faithful webcam provides views of Old Faithful along with weather information.
- Although the current swarm was not triggered by a large earthquake such as the January 12, 2010 M7.0 Haiti earthquake, in Nov 2002 there was a swarm of earthquakes at Yellowstone that were triggered by the Nov 3, 2002 Denali earthquake. See the University of Utah Press Release: Alaska Quake Seems to Trigger Yellowstone Jolts Small Tremors Rattle National Park After Big Quake 2,000 Miles Away. And the later release in May 2004: Quake in Alaska Changed Yellowstone Geysers
- Jan 2004 Web Article: Frequently asked questions about findings at Yellowstone Lake
- Nov. 2007 Web Article: Recent ups and downs of the Yellowstone Caldera
- Swarm paper by YVO scientists: Earthquake swarm and b-value characterization of the Yellowstone volcano-tectonic system
- March 2007: Preliminary Assessment of Volcanic and Hydrothermal Hazards in Yellowstone National Park and Vicinity.
- Nov. 2006: Volcano and Earthquake Monitoring Plan for the Yellowstone Volcano Observatory, 2006- 2015
- 2006 Paper on Supervolcanoes:
- 2006 Web Article: Satellite Technologies Detect Uplift in the Yellowstone Caldera
- NVEWS report May 2005: An Assessment of Volcanic Threat and Monitoring Capabilities in the United States: Framework for a National Volcano Early Warning System
- 2005 Article: Truth, fiction and everything in between at Yellowstone
- 2005 Fact Sheet: Steam Explosions, Earthquakes, and Volcanic Eruptions — What's in Yellowstone's Future?
- 2004 Fact Sheet: Tracking Changes in Yellowstone's Restless Volcanic System
- 2003 Web Article: Notable Changes in Thermal Activity at Norris Geyser Basin Provide Opportunity to Study Hydrothermal System | <urn:uuid:3fbb1cd0-9306-49b2-9e77-79c8f5248d75> | 2.984375 | 1,818 | Knowledge Article | Science & Tech. | 41.679541 |
Isotopes of nitrogen
Nitrogen has two stable isotopes, N-14 and N-15, both of which are used in various applications. N-15 is used for the production of the radioisotope O-15, which is used in PET. N-15 is also used to study the uptake of nitrogen in plants and the metabolism of proteins in the human body. N-14 is used for the production of the PET radioisotope C-11. It can also be used for the production of the PET radioisotopes N-13 and O-15.
Naturally occurring isotopes
This table shows information about naturally occurring isotopes: their atomic masses, their natural abundances, their nuclear spins, and their magnetic moments. Further data for radioisotopes (radioactive isotopes) of nitrogen are listed (including any which occur naturally) below.
| Isotope | Atomic mass (ma/u)  | Natural abundance (atom %) | Nuclear spin (I) | Magnetic moment (μ/μN) |
|---------|---------------------|----------------------------|------------------|------------------------|
| 14N     | 14.003 074 005 2(9) | 99.636                     | 1                | +0.403 761             |
| 15N     | 15.000 108 898 4(9) | 0.364                      | 1/2              | -0.283 189             |
In the above picture, the most intense ion is set to 100% since this corresponds best to the output from a mass spectrometer. This is not to be confused with the relative percentage isotope abundances which total 100% for all the naturally occurring isotopes.
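As a quick consistency check, the standard atomic weight of nitrogen (about 14.007) is the abundance-weighted mean of the two isotopic masses. A minimal sketch, using the commonly tabulated abundances (99.636% and 0.364%):

```python
# Abundance-weighted mean of the two stable isotope masses.
# Abundances are the commonly tabulated values (approximate).
masses = {"14N": 14.0030740052, "15N": 15.0001088984}
abundance = {"14N": 0.99636, "15N": 0.00364}

atomic_weight = sum(masses[iso] * abundance[iso] for iso in masses)
print(round(atomic_weight, 4))  # 14.0067, matching the standard value of ~14.007
```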
Further data for naturally occurring isotopes of nitrogen are listed above. This table gives information about some radioisotopes of nitrogen and their modes of decay.
| Isotope | Mode of decay            |
|---------|--------------------------|
| 12N     | EC to 12C; EC + 3α to n  |
| 13N     | EC to 13C                |
| 16N     | β- to 16O                |
| 17N     | β- to 17O; β- + n to 16O |
| 18N     | β- to 18O; β- + α to 14C |
| 19N     | β- to 19O                |
| 20N     | β- to 20O                |
- Naturally occurring isotope abundances: Commission on Atomic Weights and Isotopic Abundances report for the International Union of Pure and Applied Chemistry in Isotopic Compositions of the Elements 1989, Pure and Applied Chemistry, 1998, 70, 217. [Copyright 1998 IUPAC]
- For further information about radioisotopes see Jonghwa Chang's (Korea Atomic Energy Research Institute) Table of the Nuclides
- Masses, nuclear spins, and magnetic moments: I. Mills, T. Cvitas, K. Homann, N. Kallay, and K. Kuchitsu in Quantities, Units and Symbols in Physical Chemistry, Blackwell Scientific Publications, Oxford, UK, 1988. [Copyright 1988 IUPAC]
NMR Properties of nitrogen
Common reference compound: CH3NO2 /neat CDCl3.
- R.K. Harris in Encyclopedia of Nuclear Magnetic Resonance, D.M. Grant and R.K. Harris (eds.), vol. 5, John Wiley & Sons, Chichester, UK, 1996. I am grateful to Professor Robin Harris (University of Durham, UK), who provided much of the NMR data, which are copyright 1996 IUPAC, adapted from his contribution contained within this reference.
- J. Mason in Multinuclear NMR, Plenum Press, New York, USA, 1987. Where given, data for certain radioactive nuclei are from this reference.
- P. Pyykkö, Mol. Phys., 2008, 106, 1965-1974.
- P. Pyykkö, Mol. Phys., 2001, 99, 1617-1629.
- P. Pyykkö, Z. Naturforsch., 1992, 47a, 189. I am grateful to Professor Pekka Pyykkö (University of Helsinki, Finland) who provided the nuclear quadrupole moment data in this and the following two references.
- D.R. Lide (ed.), CRC Handbook of Chemistry and Physics 1999-2000: A Ready-Reference Book of Chemical and Physical Data, CRC Press, Boca Raton, Florida, USA, 79th edition, 1998.
- P. Pyykkö, personal communication, 1998, 2004, 2008, 2010.
- The isotopic abundances are extracted from the naturally occurring isotopes section within WebElements. | <urn:uuid:83f2da68-6e05-46a4-bb04-8ba0e0e8f7bc> | 3.609375 | 939 | Knowledge Article | Science & Tech. | 58.985376 |
Fire Ant Outcompetes Other Species, Even in its Native Land

July 2, 2009

Even in its native Argentina, the fire ant wins in head-to-head competition with other ant species more than three-quarters of the time, according to Agricultural Research Service (ARS) scientists.

ARS scientists at the South American Biological Control Laboratory (SABCL) in Hurlingham, Argentina, have been studying how different ant species fare against the fire ant as part of an effort to learn more about the behavior of this pest, an invasive species in its non-native United States.
Fire ants often attack in swarms, not only causing painful stings to humans but sometimes even killing small animals. Little has been known, however, about the fire ant's competitive nature or how it interacts with other ants.

SABCL biologist Luis Calcaterra, working closely with lab director Briano, has been studying interactions between the red imported fire ant, Solenopsis invicta, and other aboveground foraging ants in two habitats in northeastern Argentina, using a combination of pitfall traps and baits to study day-to-day activity in ant communities.

The pitfall trap is a 50-milliliter plastic tube buried in the ground and half-filled with soapy water. The bait is one gram of canned tuna placed on a plastic card measuring five centimeters in diameter. The traps and baits gave the scientists a way to determine ant populations at the sites, and showed the dominance of each species.
Some 28 ant species coexisted with S. invicta in an open area of forest growing along a watercourse, whereas only 10 species coexisted with S. invicta in the dry forest grassland. The researchers found that the fire ants had the highest numbers in the open forest area along the watercourse.
Prior to these studies, it was thought that the fire ant, now established throughout the Americas, was not dominant in its native land. But the studies showed that the fire ants were the most ecologically dominant, winning 78 percent of the interactions with other ants, mostly against their most frequent competitor, the South American big-headed ant, Pheidole obscurithorax, an ant of northern Argentina and Paraguay also introduced in the United States. And in battles with the invasive Argentine ant, Linepithema humile, the fire ants were even more dominant, winning out 80 percent of the time.
This study was published in Oecologia, a journal that deals with plant and animal ecology. Read more about the research in the July 2009 issue of Agricultural Research magazine.

ARS is the principal intramural scientific research agency of the U.S. Department of Agriculture.
2, 3, 5, 7, 11, 13, 17...
We've known that there are infinitely many primes since c. 300 BC when it was proven by Euclid of Alexandria. Euclid used a very simple and elegant proof by contradiction.
Euclid's proof of the infinitude of primes
First, assume that there are a finite number of primes, p1, p2, p3, ..., pn.
Q = (p1 * p2 * ... * pn) + 1
That is, Q is equal to all of the primes multiplied together plus one.
By the Fundamental Theorem of Arithmetic, Q is either prime or it can be written as the product of two or more primes. However, none of the primes in our list evenly divides Q. If any prime in our list did evenly divide Q, then that same prime would also evenly divide 1, since
Q - (p1 * p2 * ... * pn) = 1
This contradicts the assumption that we had listed all the primes. So no matter how many primes we start with in our list, there must always be more primes. (Note that this proof does not claim that Q itself is prime, just that there must be some prime not in the initial list.)
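The construction in Euclid's proof is easy to check numerically. A small sketch (in Python; the trial-division helper is ours) using the first six primes:

```python
def smallest_prime_factor(n):
    """Return the smallest prime dividing n (n >= 2), by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

primes = [2, 3, 5, 7, 11, 13]   # pretend this is a complete list of primes
q = 1
for p in primes:
    q *= p
q += 1                          # Q = (p1 * p2 * ... * pn) + 1 = 30031

# Q leaves remainder 1 on division by every prime in the list...
assert all(q % p == 1 for p in primes)

# ...so its smallest prime factor must be a prime NOT in the list.
f = smallest_prime_factor(q)
print(q, "=", f, "*", q // f)   # 30031 = 59 * 509
```

Note that Q = 30031 is itself composite (59 * 509), illustrating the proof's closing remark: the argument guarantees a new prime, not that Q is prime.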
Twin primes are those pairs of numbers that have a difference of two, and that are both prime.
Given the simplicity of Euclid's proof of the infinitude of primes, it's tempting to hope for an equally simple proof to the Twin prime conjecture. Needless to say, such a proof has not been found.
Other facts about twin primes:
Other than (3, 5), all twin primes have the form (6n - 1, 6n + 1).
In 1919 Viggo Brun showed that the sum of the reciprocals of the twin primes converges to a definite number, now known as Brun's constant (approximately 1.902160578).
In 1994, while in the process of estimating Brun's constant by calculating the twin primes up to 10^14, Thomas Nicely discovered the infamous Pentium bug.
The largest known twin primes (as of January 2010) are a pair of 100,355-digit primes with the values 65516468355 * 2^333333 ± 1.
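Brun's partial sums can be explored directly with a sieve; convergence is very slow, so the sum over any modest range stays well below the constant. A small sketch:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_up_to(100_000)
pset = set(ps)
twins = [(p, p + 2) for p in ps if p + 2 in pset]

# Other than (3, 5), every pair has the form (6n - 1, 6n + 1),
# i.e. the smaller member is congruent to 5 mod 6.
assert all(a % 6 == 5 for a, b in twins if a != 3)

# Partial sum of reciprocals: well short of Brun's constant 1.902160578...
brun_partial = sum(1 / a + 1 / b for a, b in twins)
print(len(twins), round(brun_partial, 6))
```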
sublimation
sublimation, in physics, conversion of a substance from the solid to the vapour state without its becoming liquid. An example is the vaporization of frozen carbon dioxide (dry ice) at ordinary atmospheric pressure and temperature. The phenomenon is the result of vapour pressure and temperature relationships. Freeze-drying of food to preserve it involves sublimation of water from the food in a frozen state under high vacuum. See also vaporization; phase diagram.
2 sound waves in the same direction have average power transmitted in the ratio 1:1
Also the ratio of their wavelengths lambda1/lambda2 = 1:2
Find ratio of their pressure amplitudes
P1/P2 = A1^2 f1^2 / (A2^2 f2^2)
1 = (A1^2/A2^2) (lambda2^2 / lambda1^2)
So (A1^2/A2^2) = 1/4 or A1/A2 = 0.5
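For reference, the average power transmitted by a sinusoidal sound wave of displacement amplitude s_m in a medium of density rho, wave speed v and angular frequency omega is P_avg = (1/2) rho v omega^2 s_m^2, so for a fixed medium P is proportional to A^2 f^2. A quick numeric check of the ratio worked out above (the medium and amplitude values are arbitrary illustrative numbers):

```python
import math

def avg_power(rho, v, amp, freq):
    """P_avg = 0.5 * rho * v * (omega * amplitude)^2 for a sinusoidal wave."""
    omega = 2 * math.pi * freq
    return 0.5 * rho * v * omega ** 2 * amp ** 2

rho, v = 1.2, 343.0   # air-like medium (arbitrary choice)
f1, f2 = 2.0, 1.0     # lambda1:lambda2 = 1:2  =>  f1:f2 = 2:1
a2 = 1e-6             # arbitrary amplitude for wave 2
a1 = 0.5 * a2         # the ratio derived above

ratio = avg_power(rho, v, a1, f1) / avg_power(rho, v, a2, f2)
print(ratio)  # 1.0: equal average powers, confirming A1/A2 = 0.5
```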
What is the formula for average power transmitted?
Dr. Oldfield analyzed the Bosnian pyramids with several cameras and the results proved to be very surprising and unexpected. A number of anomalies were registered, the existence of which could not be attributed to naturally-occurring phenomena.
If one records ordinary hills with this equipment, there are no high-energy fields nor the genesis of additional energy. Above such natural hills, one finds horizontal, fixed, and homogenous energy fields.
Dr. Oldfield first examined the north face of the Pyramid of the Sun from a distance of 1 kilometer. The results of this analysis are as follows: inside the pyramid, below and above the pine forest on the north face, there is a steady genesis of energy, which is in direct contrast to naturally occurring hills. In the past several decades, it has been proven that stone pyramids in particular do indeed generate specific types of energy.
Furthermore, video footage clearly shows that the newly-generated energy accumulates, and is discharged through the tip of the Pyramid of the Sun.
Additional proof of the pyramids’ existence comes from the area above the pyramid. Namely, the energy fields are vertical, as opposed to horizontal, which is the case with naturally occurring hills.
In contrast to natural phenomena where the energy fields are fixed, these electromagnetic fields are pulsating and non-homogenous.
In other words, the Bosnian Pyramid of the Sun is in fact acting like a giant energy accumulator which continually emits large quantities of energy. It is the proverbial perpetuum mobile, which got its start in the distant past and continues its activity without respite. | <urn:uuid:50dc4f23-2d59-42e4-a2fd-439792d355cd> | 2.984375 | 325 | Comment Section | Science & Tech. | 26.272127 |
Some puzzling land formations on Mars's equator could be huge glacier-like deposits of frozen water, new radar observations suggest. The material's radar properties might be explained by unusually porous rocky material instead, but if it is water it would represent a huge amount - as much as a polar ice cap contains, providing a potential water source for future human explorers.
Scientists have puzzled for decades over a group of mound-like structures at Mars's equator called the Medusa Fossae Formation. A variety of explanations have been offered, including that they are piles of volcanic ash, and that they are glacier-like structures made mostly of water ice.
Now, radar sounding has probed the material for the first time 2.5 kilometres below its surface. The way the radio waves interact with the material suggests that it must be either ice or an extremely porous rocky material.
Thomas Watters of the National Air and Space Museum in Washington, DC, US, led a team that probed the material with a ground-penetrating radar instrument called MARSIS (Mars Advanced Radar for Subsurface and Ionosphere Sounding) on Europe's Mars Express spacecraft.
In terms of its radar properties, the Medusae Fossae Formation material is indistinguishable from the Mars polar layered deposits, which are nearly pure water ice, Watters says. If they are water, the deposits would increase the amount of known water on Mars by 36%, an amount equal to all the water locked in Mars's south polar cap.
However, the possibility remains that the material could be made of a very fluffy, porous material, like volcanic ash. If it is some sort of porous material, the puzzle is how it would manage to avoid being compacted from its own weight in order to stay fluffy all the way down to about 2.5 kilometres below the surface, Watters says.
On the other hand, it is too warm at the equator for ice exposed on the surface to be stable - it should sublimate away. "Both of our interpretations - dry, fluffy, porous material or ice-rich material - make sense in certain respects and have problems in other respects," Watters told New Scientist.
But if the top layer of ice sublimated away and left behind a protective layer of dust several metres deep, the ice below might be preserved, he says: "I think you can make a pretty convincing argument that if you provide the correct depth of insulation, ice would be stable there."
How would the ice have gotten there? Some researchers have previously suggested that water ice migrates from time to time on Mars because its spin axis tilts by up to 40° on timescales of tens to hundreds of thousands of years.
That means that in the past, Mars's equator might have been much colder, allowing ice to be build up there, perhaps by freezing directly onto the surface from the air like frost, Watters says.
Brian Hynek of the University of Colorado in Boulder, US, says if the deposits are made of ice, they could be an important water source for future human explorers.
But he does not favour the ice interpretation. He is one of the researchers to previously suggest the deposits are made of volcanic ash. "These radar results are consistent with that hypothesis," he told New Scientist. "Additionally, it is hard to conceive of a model as to why there would be big ice deposits at the equator in this region and not elsewhere around the equator," he says.
Susan Sakimoto of Notre Dame University in Indiana, US, is also sceptical of the ice hypothesis. "It's a really cool idea, and it's entirely possible, but I don't think it's required," she told New Scientist. She says porous rocky materials called tuffs, which are made from volcanic ash and resist compacting, could also explain the radar signal. "They're used as building material, they don't compact very well," she says.
Journal reference: Science Express (DOI: 10.1126/science.1148112)
Have your say
A Postponement Of A Significant Trip To Mars Is An Insult To Our Generation.
Thu Nov 01 23:39:21 GMT 2007 by George Peart
It is during our time that most of the Martian investigations have occurred. Space enthusiasts within the age range of 40 to 60 years have come to see Mars as truly the sister planet that it will turn out to be. For us not to see this come to fruition is an insult.
A soil collection trip will not occur within 10-15 years. Man exploration is even farther into the future. Our generation will be dead by then.
Robert Zubrin and others have argued over and over again that we have the technology to go to Mars. The unwillingness of governmental bodies to arrange a significant voyage within our lifetime is a demonstration of the fat cowards they have become. Where is the spirit of Columbus, David Livingstone and others who were willing to go where no one had been before?
What is the difference between sending young men, trained with million of dollars of tax payers money to die in an unnecessary war, and the sending of trained astronauts to discover the glorious unknown when we have the technology and the know how?
Trips To Mars?
Fri Nov 02 09:23:48 GMT 2007 by Kevin Dollard
The below comment is not entirely true. There is no technology to protect humans from the massive radiation exposure involved in traveling to Mars, something that is rarely if ever pointed out.
Trips To Mars?
Fri Nov 02 15:53:09 GMT 2007 by Wargammer2005
Sorry, you are incorrect.

All it takes is the will and resources to put a large enough ship in orbit, with water as shielding and fuel.
Not the rubbish Orion that NASA has, a REAL space ship. Put up something the size of the Saturn 5 first stage, double walled.
So it takes more than one launch to get the thing into orbit, so what, make it big and make it sturdy and make it reusable.
DailyDirt: Life Abhors A Vacuum
from the urls-we-dig-up dept
Biologists continue to find signs of life in some of the most remote places on Earth. A variety of organisms seem to be able to thrive under harsh conditions that are similar to extra-terrestrial places elsewhere in our solar system. So finding these extremophiles could point us towards good places to find alien life forms on other planets or moons or asteroids... Here are just a few more examples of some really tough microorganisms.
- Evidence of life in a subglacial lake in Antarctica has been found, and it could mean that bacteria are much more widespread than we previously thought. Researchers still need to verify this discovery and make sure they're not looking at bacterial contamination from other sources. [url]
- Frost flowers are salty ice crystals that form on calm ocean surfaces, and arctic sea meadows of these flowers may become more common with climate change near the north/south poles. About a million bacteria live in the few milliliters of frozen saltwater of a frost flower, and studying these cells could teach us more about how hardy some extremophile organisms can be. [url]
- Bacteria living below the ocean and at the ocean surface have it easy compared to bacteria that live 6 miles above sea level in the troposphere. Microorganisms could play a role in cloud formation, and there is a lot we don't know about how life survives in different parts of the atmosphere. [url] | <urn:uuid:b74a4b81-4b3d-4031-8ffc-2c9b584cfd19> | 2.953125 | 310 | Listicle | Science & Tech. | 42.17 |
Working With Forms in PHP
Working With Forms
Forms are how your users talk to your scripts. To get the most out of PHP, you must master forms. The first thing you need to understand is that although PHP makes it easy to access form data, you must be careful of how you work with the data.
Security Measures: Forms Are Not Trustworthy
Your site's users can write their own forms in HTML to use against your server; users can also bypass the browser entirely and use automated tools to interact with web scripts. You should assume that people will tamper with parameters when you put a script on the Web, because they might be trying to discover an easier way to use your site (though they could be attempting something altogether less beneficial).
To ensure that your server is safe, you must verify all data that your scripts receive.
There are two approaches to checking form data: blacklisting and whitelisting.
Blacklisting is the process of trying to filter out all bad data by assuming that form submissions are valid and then explicitly seeking out bad data. In general, this technique is ineffective and inefficient. For example, let's say that you're trying to eliminate all "bad" characters from a string, such as quotes. You might search for and replace quotation marks, but the problem is that there will always be bad characters you didn't think of. In general, blacklisting assumes that most of the data you receive is friendly.
A better assumption to make about form data you're receiving is that it's inherently malicious; thus, you should filter your data in order to accept only valid data submissions. This technique is called whitelisting. For example, if a string should consist of only alphanumeric characters, then you can check it against a regular expression that matches only an entire string of A-Za-z0-9. Whitelisting may also include forcing data to a known range of values or changing the type of a value. Here is an overview of a few specific tactics:
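The core whitelist check described above is a one-line pattern match; in PHP it would typically use preg_match('/^[A-Za-z0-9]+$/', $value). The same ideas, sketched here in Python as a self-contained, language-neutral illustration (function names are ours):

```python
import re

ALNUM = re.compile(r"^[A-Za-z0-9]+$")

def whitelist_alnum(value):
    """Accept only strings made entirely of A-Z, a-z, 0-9; reject everything else."""
    return bool(ALNUM.match(value))

def whitelist_range(value, lo, hi):
    """Force a submitted value to a known integer range; return None on failure."""
    try:
        n = int(value)
    except ValueError:
        return None
    return n if lo <= n <= hi else None

# Suspicious input is rejected outright rather than "cleaned up".
print(whitelist_alnum("user42"))            # True
print(whitelist_alnum("user42'; DROP--"))   # False
print(whitelist_range("7", 1, 10))          # 7
print(whitelist_range("9999", 1, 10))       # None
```

The design point is the same as in the text: the validator describes what valid input looks like, so anything unanticipated fails closed instead of slipping past a blacklist.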
Using $_GET, $_POST, and $_FILES to Access Form Data

In Chapter 2, we showed you how to turn off the register_globals setting that automatically sets global variables based on form data. To shut down this dangerous setting, refer to "#14: Turning Off Registered Global Variables" on page 25. How do you use $_GET to retrieve form data? Read on.
This photograph shows many contrails in the sky near Sutherland, NE (November 2004).
Click on image for full size
Courtesy of Susan Gallagher
The white streaks you see coming off airplanes are called contrails. That is short for “condensation trail.” Contrails are actually clouds made by airplanes.
An airplane's exhaust has some water vapor in it, just like the air. Sometimes when the exhaust mixes with the air, it has so much water vapor that the air can't hold it all. The water vapor then turns into clouds.
There are three types of contrails: short-lived, persistent non-spreading, and persistent spreading.
Typical LSCMs take 3-D images of thick tissue samples by visualizing
thin slices within that tissue one layer at a time. Sometimes
scientists supplement these microscopes with spectrographs, which are
devices that measure the pattern of wavelengths, or "colors," in the
light reflected off of a piece of tissue.
This pattern of wavelengths acts like a fingerprint, which scientists
can use to identify a particular substance within the sample. But the
range of wavelengths used so far with these devices has been narrow,
limiting their uses. Not so with the new microscope developed by
physicists from the Consiglio Nazionale delle Ricerche (CNR) in Rome,
and described in a paper accepted to the AIP's new journal AIP Advances.
Unlike other combination "confocal microscope plus spectrograph"
devices, the new machine is able to gather the spectrographic
information from every point in a sample, at a wide range of
wavelengths, and in a single scan. To achieve this, the authors
illuminate the sample with multiple colors of laser light at once
– a sort of "laser rainbow" – that includes visible light
as well as infrared. This allows scientists to gather a full range of
information about the wavelengths of light reflected off of every point
within the sample.
Using this method, the researchers took high-resolution pictures of the
edge of a silicon wafer and of metallic letters painted onto a piece of
silicon less than half a millimeter wide. They also demonstrated that
it is possible to apply this technique to a tissue sample (in this
case, chicken skin) without destroying it. With further testing, the
researchers say the microscope could be used to detect early signs of
melanoma; until then, it may be useful for non-medical applications,
such as inspecting the surface of semiconductors.
"Supercontinuum ultra wide range confocal microscope for reflectance spectroscopy of living matter and material science surfaces" is published in AIP Advances.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2002 October 3
Explanation: A leading candidate for the most mysterious star found in recent times is variable star V838 Monocerotis. At a distance of about 8,000 light-years, V838 Mon was discovered to be in outburst in January of this year. Initially thought to be a familiar type of classical nova, astronomers quickly realized that instead, V838 Mon may be a totally new addition to the astronomical zoo. Observations indicate that the erupting star transformed itself over a period of months from a small under-luminous star a little hotter than the Sun, to a highly-luminous, cool supergiant star undergoing rapid and complex brightness changes. The transformation defies the conventional understanding of stellar life cycles. A most notable feature of V838 Mon is the "expanding" nebula which now appears to surround it. Seen above in two separate images from the South African Astronomical Observatory's 1 meter telescope, the nebula is probably a light echo from shells of formerly unseen material lost by the star during its previous evolution. Light-years in diameter, the shells progressively reflect the light from V838 Mon's outbursts, providing an opportunity to look back at the history of this remarkable star's behaviour.
Authors & editors:
Jerry Bonnell (USRA)
NASA Technical Rep.: Jay Norris. Specific rights apply.
A service of: LHEA at NASA/ GSFC
& Michigan Tech. U. | <urn:uuid:a2e78f85-d82d-4c08-b67b-b8d3158ced49> | 3.1875 | 335 | Knowledge Article | Science & Tech. | 40.199671 |
In Stricken Fuel-Cooling Pools, a Danger for the Longer Term
The pools, which sit on the top level of the reactor buildings and keep spent fuel submerged in water, have lost their cooling systems and the Japanese have been unable to take emergency steps because of the multiplying crises.
Experts now fear that the pool containing those rods from the fourth reactor has run dry, allowing the rods to overheat and catch fire. That could spread radioactive materials far and wide in dangerous clouds.
“It’s worse than a meltdown,” said David A. Lochbaum, a nuclear engineer at the Union of Concerned Scientists.
Certainly damage to these pools could be an issue at several other reactors. The article mentions that in 1997 Brookhaven National Laboratory did a study of a potential fire at one of these sites.
Severe Accidents in Spent Fuel Pools in Support of Generic Safety
Accidents leading to complete pool draining that might be initiated by loss of cooling water circulation capability, missiles, and pneumatic seal failure were found to have a very low likelihood. However, the frequency estimates for pool draining due to structural failure resulting from seismic events and heavy load drops were found to be quite uncertain. In the case of seismic events, the seismic hazard and structural fragilities both contribute to the uncertainty range. For heavy load drops, human error probabilities, structural damage potentials and recovery actions are the primary sources of
This is one of the oldest synchronization primitives in the history of computer science, invented by the early Dutch computer scientist Edsger W. Dijkstra (he used P() and V() instead of acquire() and release()).
A semaphore manages an internal counter which is decremented by each acquire() call and incremented by each release() call. The counter can never go below zero; when acquire() finds that it is zero, it blocks, waiting until some other thread calls release().
When invoked without arguments: if the internal counter is larger than zero on entry, decrement it by one and return immediately. If it is zero on entry, block, waiting until some other thread has called release() to make it larger than zero. This is done with proper interlocking so that if multiple acquire() calls are blocked, release() will wake exactly one of them up. The implementation may pick one at random, so the order in which blocked threads are awakened should not be relied on. There is no return value in this case.
When invoked with blocking set to True, do the same thing as when called without arguments, and return True.
When invoked with blocking set to False, do not block. If a call without an argument would block, return False immediately; otherwise, do the same thing as when called without arguments, and return True.
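The counter behaviour described above can be illustrated with Python's `threading.Semaphore` (a minimal sketch using only the standard-library API):

```python
import threading

# A semaphore whose internal counter starts at 2.
sem = threading.Semaphore(2)

sem.acquire()  # counter 2 -> 1
sem.acquire()  # counter 1 -> 0

# With blocking=False the call never blocks: the counter is zero,
# so acquire() returns False immediately.
assert sem.acquire(blocking=False) is False

sem.release()  # counter 0 -> 1; would wake one blocked thread, if any
assert sem.acquire(blocking=False) is True  # counter 1 -> 0
```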
Comprehensive Description
Description of Pelomyxa palustris
Pelomyxa palustris is usually reported as a large multinucleate amoeboid pelobiont, 100-5000 microns long, with small non-motile flagella, endosymbiotic bacteria, and refringent cytoplasmic granules (sand and food particles); movement usually takes place as a monopodial progression, with a central forward flow of cytoplasm along a single axis, and the cytoplasm spills outwards at the anterior-most tip. The posterior uroid is involved in the capture of food, which comprises detritus and algae. The cysts are about 100 microns in diameter. This species has been reported from freshwater and soils. It has many inactive flagella attached to single basal bodies. Cones of microtubules arise from the bases and sides of the basal bodies, sometimes enclosing a nucleus. The nuclei may also be surrounded by endosymbiotic bacteria instead of microtubules. The axonemes of the flagella may or may not have the standard 9+2 arrangement of microtubules.
An irrational number is a real number that cannot be expressed in the form a/b, where a and b are integers (b ≠ 0). In decimal form, it never terminates (ends) or repeats.
The first such equation to be studied was x² = 2. What number times itself equals 2?
√2 is about 1.414, because 1.414² = 1.999396, which is close to 2. But you'll never hit 2 exactly by squaring a fraction (or terminating decimal). The square root of 2 is an irrational number, meaning its decimal equivalent goes on forever, with no repeating pattern:
√2 = 1.41421356237309...
According to legend, the ancient Greek mathematician who proved that could NOT be written as a ratio of integers p/q made his colleagues so angry that they threw him off a boat and drowned him!
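The claim that no fraction squares exactly to 2 can be checked with exact rational arithmetic; this sketch uses Python's `Fraction` type on a few standard approximations to √2:

```python
from fractions import Fraction

# Successive rational approximations to the square root of 2
# (convergents of its continued fraction expansion).
approximations = [Fraction(1), Fraction(3, 2), Fraction(7, 5),
                  Fraction(17, 12), Fraction(41, 29)]

for a in approximations:
    # Exact rational arithmetic: each square creeps closer to 2
    # but never equals it, because sqrt(2) is irrational.
    assert a * a != 2
    print(a, "squared is", float(a * a))
```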
Other famous irrational numbers include the golden ratio, a number with great importance to biology:
φ = 1.61803398874989...
and the constants π and e:
π = 3.14159265358979...
e = 2.71828182845904...
Irrational numbers can be further subdivided into algebraic numbers, which are the solutions of some polynomial equation (like √2 and the golden ratio), and transcendental numbers, which are not the solutions of any polynomial equation. π and e are both transcendental.
The Venn diagram below shows the relationships of the various sets of numbers. | <urn:uuid:2af38f85-cbb0-4f70-aad1-b06b4a2bfc62> | 3.15625 | 280 | Knowledge Article | Science & Tech. | 65.513235 |
Various astronomical tables, including:
1. A letter from John Senex of London, 16 May 1715, requesting information on a comet and a meeting with Halley. A list of observations and calculations is on the reverse.
2. Table of the different declinations of the stars in the eclipse.
3. Logarithm tables.
4. Annual ephemeris for the Sun.
5. Table on the Sun's declination.
6. Table of angles of the stars from the ecliptic.
7. List of angles of the meridian from the horizontal.
8. Table of the altitude of the Sun, with equations and calculations.
9. Table of the nonagesimal at the Greenwich latitude 0°-360°.
10. Table of the nonagesimal at the latitude 55° 54', 0°-360°.
11. Table of the nonagesimal at the latitude of St Helena 0°-360°.
12. Table of the nonagesimal at the latitude of 54° 23', 0°-360°.
13. Observations and computations of the Sun based on Ptolemy's findings at Alexandria. | <urn:uuid:ac6b508f-97aa-4ede-be4e-aaf5a78e98d0> | 2.828125 | 246 | Content Listing | Science & Tech. | 71.853598 |
Cosmic infrared background (CIB) radiation is expected to arise from the cumulative emissions of pregalactic, protogalactic, and evolved galactic systems. It has long been recognized that measurement of the CIB will provide new insight into energetic processes associated with structure formation and chemical evolution following the decoupling of matter from the cosmic microwave background (CMB) radiation (Partridge & Peebles 1967; Harwit 1970; Bond, Carr, & Hogan 1986, 1991; Franceschini et al. 1991, 1994; Fall, Charlot, & Pei 1996). In this paper, the CIB is defined to be extragalactic radiation, exclusive of the CMB, in the wavelength range 1-1000 µm.
Sources of cosmic radiant energy include nuclear processes such as nucleosynthesis in stars; gravitational processes, such as accretion of matter onto black holes; and decaying unstable particles remaining from the Big Bang. Though the primary radiant energy from such processes may not emerge at infrared wavelengths, the combined effects of the cosmic redshift and absorption of some fraction of the primary radiations by dust with re-emission by the dust at long wavelengths will shift much of the energy into the infrared. Hence, the infrared background is expected to be uniquely informative about cosmic history.
Sky brightness measurements with instruments on NASA's Cosmic Background Explorer (COBE) mission have provided the first definitive detections of the CIB. In this paper I focus on the COBE measurements, which initially provided detections only at far infrared and submillimeter wavelengths. I will also describe recent results of analyses based upon COBE data that have provided likely detections in the near infrared and restrictive limits in the mid-infrared, since these results are not covered substantially elsewhere at this conference. I then briefly describe some of the implications of these results. A comprehensive review paper on the infrared background and its implications is in preparation (Hauser & Dwek 2001). Since energy distributions are of primary interest, sky brightness measurements are reported as νIν in units of nW m-2 sr-1, where Iν is the spectral intensity at frequency ν. Conversion to νIν can be accomplished using the relation νIν(nW m-2 sr-1) = [3000 / λ(µm)] Iν(MJy/sr).
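To make that unit conversion concrete, here is a small sketch (not part of the original paper; the function name and the 240 µm example band are illustrative):

```python
def nu_i_nu(i_nu_mjy_per_sr: float, wavelength_um: float) -> float:
    """Convert spectral intensity I_nu [MJy/sr] at wavelength lambda
    [micrometres] to nu*I_nu [nW m^-2 sr^-1], using the relation
    quoted in the text: nu*I_nu = (3000 / lambda) * I_nu."""
    return 3000.0 / wavelength_um * i_nu_mjy_per_sr

# At 240 um (a far-infrared band, used here only as an example),
# a brightness of 1 MJy/sr corresponds to 12.5 nW m^-2 sr^-1.
assert abs(nu_i_nu(1.0, 240.0) - 12.5) < 1e-9
```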
Two potential signatures of life on Saturn's moon Titan have been found by the Cassini spacecraft. But scientists are quick to point out that non-biological chemical reactions could also be behind the observations. Microbes on Titan could eke out an existence by breathing in hydrogen gas and eating the organic molecule acetylene, creating methane in the process. This would result in a lack of acetylene on Titan and a depletion of hydrogen close to the moon's surface, where the microbes would live. Measurements from the Cassini spacecraft have borne out these predictions, hinting that life may be present.
Infrared spectra of Titan's surface taken with the Visual and Infrared Mapping Spectrometer (VIMS) showed no sign of acetylene, even though ultraviolet sunlight should constantly trigger its production in the moon's thick atmosphere
More work must be done to rule out non-biological causes.
Ultimate proof would require sending a landing probe on Titan and actually imaging microbes.
Getting started with CLP(R)
The first thing to know about CLP(R) is that it
is a declarative programming language, rather than one
of the imperative languages like C++ or Java to which
you are used. This affects the way in which you program in two ways.
- In general your program specifies the solution you are looking for,
rather than the way in which you find the solution.
- Eventually you do need to understand how the CLP(R) engine works
so that you write sensible programs.
A nice declarative example
Consider the set of three linear equations:

3x + 4y - 2z = 8
x - 5y + z = 10
2x + 3y - z = 20

This is a specification for the three variables x, y, and z. It can be turned into a CLP(R) program (technically a goal) as:

3*X + 4*Y - 2*Z = 8,
X - 5*Y + Z = 10,
2*X + 3*Y - Z = 20.

Note the commas, which serve the role of and, and the period at the end. Note also that the variables are now in upper case. Given this goal CLP(R) responds:

Z = 35.75
Y = 8.25
X = 15.5
Thus, at least for linear equations we have a direct connection
(state the problem -- get the solution) no algorithm required.
Before defining a CLP(R) program we need some definitions:

- A constraint logic program is a sequence of rules.
- A user-defined constraint is of the form p(t1, ..., tn), where p is an n-ary predicate and each ti is an expression from the constraint domain.
- A literal is a primitive constraint or a user-defined constraint.
- A goal is a sequence of literals.
- A rule is of the form H :- B, where H (the head of the rule) is a user-defined constraint and B (the body) is a goal.

CLP(R) uses the standard Prolog syntax for lists:

- [] is the empty list
- [a,b,...,k] is the list whose elements are a, b, ..., k
- [A|B] is the list whose first element is A and whose tail is B. With this notation, if we unify [a,b,c,d,e] with [H|T] then H = a and T = [b,c,d,e].
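As a small illustration of the list notation combined with arithmetic constraints (a sketch in the spirit of the tutorial, not taken from it), here is a rule relating a list of numbers to its sum:

```prolog
% list_sum(List, S) holds when S is the sum of the numbers in List.
list_sum([], 0).
list_sum([H|T], S) :- S = H + S1, list_sum(T, S1).
```

The goal list_sum([1,2,3], S). should answer S = 6. Because = here is an arithmetic constraint rather than an assignment, the same rule can also run "backwards": a goal such as list_sum([X,2], 5). constrains X = 3.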
You can put a goal to be executed at startup into a CLP(R) program by inserting a directive of the form:

:- goal.

into your file. Note that if you put more than one such directive into your file you can't be sure of the order in which they are tried. Note also that you can put in goals preceded by :- or ?-. These affect when the goal is attempted. If you are using op/3 to define operators you almost certainly want to use ?- so that the operator is recognized both in the source file and at run time.
We can turn the above example into (a very dull) CLP(R) program as:

3*X + 4*Y - 2*Z = 8,
X - 5*Y + Z = 10,
2*X + 3*Y - Z = 20.
Note that the comma (,) plays the role of conjunction, the wedge (/\).
If you want to try this you can get the code here!
As an exercise you could write a program where the coefficients
can be chosen at run time. A suggestion for two
Running the programs
CLP(R) is installed on maxwell. There are two ways to load a program into the system:

- At startup time: if your program is in a file my_prog, just type clpr my_prog on the command line;
- Or you can start clpr by typing clpr on the command line and then at the ?- prompt type [my_prog]. -- The period is important; if you don't enter it the system will wait until you do!

If you type in the goal ls. you will see that the rules from your program have been entered.
In some cases you may want to see the effect of projecting the constraints onto some of the variables. dump/1 and dump/2 do this for you. dump/1 takes as argument a list of the target variables and prints the projection of the constraints onto these variables. dump/2 is more flexible: the first argument is the list of target variables and the second is a list of new names to be used in the output.
The Options example from the CLP(R) distribution
Dr. J.P.E. Hodgson
Department of Mathematics and Computer Science
Saint Joseph's University
5600 City Avenue
Philadelphia. PA 19131
Last Changed: 2000/01/25 | <urn:uuid:25fc83d1-f2b5-45f2-a305-9462d50b94dc> | 3.609375 | 1,091 | Tutorial | Software Dev. | 82.668173 |
Did you know the jawless fish are the earliest known vertebrates? Researchers have identified jawless fish as very ancient, around five hundred million years old. Jawless fish belong to the group Agnatha. Most of the jawless fish class is extinct. Two classes of jawless fish that are still living are the hagfish and the blood-sucking lamprey. Hagfish live in cold and temperate waters. The hagfish is said to have a braincase but no jaw, and hagfish also have very poorly developed eyes. Hagfish also lack a spleen and scales. The hagfish has a circular mouth with rows of teeth. Hagfish spend most of their time embedded in muddy bottoms. Hagfish are scavengers, eating through their victims and leaving only the skin and skeleton behind. Hagfish have glands on both sides of their body. These glands produce large amounts of mucoid material, which they use as a defense mechanism. Scientists say the hagfish have no larval stage. There are twenty species of hagfish. The Atlantic hagfish, called Myxine glutinosa, reaches a length of thirty inches. The Pacific hagfish, called Eptatretus stouti, has been used for physiological studies. The hagfish is in the phylum Chordata. The blood-sucking lampreys also lack a spleen and scales. The lamprey's circular mouth is also lined with rows of teeth, like the hagfish's. The lamprey's teeth attach to the prey, and it feeds as it is carried along. Lampreys have an anticoagulant in their saliva to keep their victims' blood fluid. Some of the freshwater lampreys eat flesh too. Lampreys look like eels, but researchers say they are not related. When lampreys aren't feeding they swim with an undulating movement. The larvae of the lampreys are called ammocoetes. The ammocoetes are about ¼ inch (about 6 mm) long. The larvae of the lampreys are transparent.
Some of the interesting facts I found out about the jawless fish are that hagfish have no larval stage and that lamprey larvae are transparent. I also thought it was interesting that hagfish spend most of their time in muddy bottoms. The lamprey just swims alongside its victim drinking its blood. I thought it was neat that lampreys have an anticoagulant in their saliva to keep their prey's blood fluid.
Author: Brooklyn, P
Date Published: April 2006
Sources:http://oceanexplorer.noaa.gov/explorations/lewis_clark01/logs/jul08/r609hagfish_220.jpg http://coris.noaa.gov/glossary/Agnatha_186.jpg http://www.nsf.gov/news/mmg/media/images/lamprey_h.jpg | <urn:uuid:d03e30e7-2bc4-4532-8ada-dfa8d4a369bc> | 3.796875 | 638 | Knowledge Article | Science & Tech. | 63.519416 |
Curious About Life: Interview with Felipe Gómez
Provided by the Spanish government, the Rover Environmental Monitoring System (REMS) will monitor the daily weather on Mars to help determine habitability at the planet's surface. Felipe Gómez, of the Centro de Astrobiologia in Spain, is one of the scientists working with REMS.
What kind of research do you generally do?
I'm a biochemist working with extreme environments, or extremophiles, and developing an automated tool for the identification of life. I'm working on habitability studies on Earth as a way to try to understand how to recognize life if we see it on another planet, specifically on Mars. I've been using extreme environments worldwide during the last few years to develop a "Habitability Index," which is being applied to the special case of Mars during the MSL mission. I've studied the limits of life on Earth to try to understand the particular physico-chemical process which is life, which is necessary if we want to recognize such a process on other planets.
There are three main components to developing a habitability index. They are energy inputs, water available in liquid state, and other important ingredients for life—carbon, hydrogen, oxygen, nitrogen, phosphorous and sulfur (known generally as CHONPS). All of these elements, integrated, make up the whole model of the habitability potential measurement. CHONPS are available in low quantities on Mars' surface but they are available. Energy input is calculated as potential metabolic energy which is also available in some surface components. The biggest problem is the water availability. This last point (water) is what I am more involved in with the MSL mission, because it is the most important and controversial part for this habitability potential.
The automated system for life detection in which I'm working on is not included in the rover. It is something I'm developing for future missions, to Mars or beyond. It is based on a metabolic detection system, focused in environmental parameter modification only promoted by life for life identification.
What do you do specifically for MSL?
My main scientific interest on the MSL mission is to adapt the "Habitability Index" to Mars using the MSL data. For example, I will use the REMS data, environmental information, and others instruments for implementing the habitability potential of the environment where Curiosity is located. Other instruments can help me to go further and to identify particular niches around Curiosity where life could be settled, special "hot spots" located by Curiosity but difficult for the rover to get to, which could be of high potential for life to exist on Mars. REMS has the ability to measure pressure, air and ground temperature, UV radiation at the Martian surface, water in the atmosphere, and wind speed and direction. All of these are very important. Inputs from other instruments are also necessary. For example, I need to know the mineralogy of Gale Crater and specifically of some of the layered materials on Mount Sharp in order to evaluate other important components of the model. I'm also very interested in the water cycle of Mars, other important questions to be approached in the MSL mission, and the direct relationship of the habitability potential.
How could your work help us to answer astrobiology questions?
My work can help us to answer one of the main scientific objectives of the MSL mission which is the "habitability" of Mars.
How did you feel when the rover touched down?
It was really an exciting moment for me. Since childhood, I have been passionate about NASA space missions, and I followed all of them with great emotion. Being part of one of them, especially a mission as big and challenging as Curiosity, is a dream come true for me. When the rover landed, dream became a reality. It was also a special moment because all of the hard work we did to prepare for the operations on the Mars surface was transformed in that moment into happiness and good emotions.
Have you received preliminary data from REMS, and, if so, how did you feel knowing that the instrument is working on the surface of Mars?
We are receiving data on a daily basis. Each Sol [Martian day], we have the weather data from Mars. Working with an instrument, with a rover, which is located on the surface of Mars is something very exciting and a scientific challenge. To be part of a science team for such a mission, working on Mars and sending data back to Earth from the surface of Mars, is really a challenge for me. | <urn:uuid:2a725bcc-28fb-4adf-969b-1038eef55e93> | 3.171875 | 936 | Audio Transcript | Science & Tech. | 36.924651 |
Bats are the only mammals that truly fly, rather than just gliding. The bats are very numerous, there being well over 900 recognised species. They are divided into two types - the megabats, which mainly eat fruit, and the microbats, which mainly eat insects.
Scientific name: Chiroptera
The shading illustrates the diversity of this group - the darker the colour the greater the number of species. Data provided by WWF's Wildfinder.
Bats are mammals of the order Chiroptera (pron.: /kaɪˈrɒptərə/; from the Greek χείρ - cheir, "hand" and πτερόν - pteron, "wing") whose forelimbs form webbed wings, making them the only mammals naturally capable of true and sustained flight. By contrast, other mammals said to fly, such as flying squirrels, gliding possums, and colugos, can only glide for short distances. Bats do not flap their entire forelimbs, as birds do, but instead flap their spread-out digits, which are very long and covered with a thin membrane or patagium.
Bats represent about 20% of all classified mammal species worldwide, with about 1,240 bat species divided into two suborders: the less specialized and largely fruit-eating megabats, or flying foxes, and the more highly specialized and echolocating microbats. About 70% of bats are insectivores. Most of the rest are frugivores, or fruit eaters. A few species, such as the fish-eating bat, feed from animals other than insects, with the vampire bats being hematophagous.
Bats are present throughout most of the world, performing vital ecological roles of pollinating flowers and dispersing fruit seeds. Many tropical plant species depend entirely on bats for the distribution of their seeds. Bats are important in eating insect pests, reducing the need for pesticides. The smallest bat is the Kitti's hog-nosed bat, measuring 29–34 mm (1.14–1.34 in) in length, 15 cm (5.91 in) across the wings and 2–2.6 g (0.07–0.09 oz) in mass. It is also arguably the smallest extant species of mammal, with the Etruscan shrew being the other contender. The largest species of bat are a few species of Pteropus and the giant golden-crowned flying fox with a weight up to 1.6 kg (4 lb) and wingspan up to 1.7 m (5 ft 7 in).
Real-time polymerase chain reaction, also called quantitative real time polymerase chain reaction (qPCR) or kinetic polymerase chain reaction, is a laboratory technique based on polymerase chain reaction, which is used to amplify and simultaneously quantify a targeted DNA molecule. It enables both detection and quantification (as absolute number of copies or relative amount when normalized to DNA input or additional normalizing genes) of a specific sequence in a DNA sample.
The procedure follows the general principle of polymerase chain reaction; its key feature is that the amplified DNA is quantified as it accumulates in the reaction in real time after each amplification cycle. Two common methods of quantification are the use of fluorescent dyes that intercalate with double-stranded DNA, and modified DNA oligonucleotide probes that fluoresce when hybridized with a complementary DNA.
Frequently, real-time polymerase chain reaction is combined with reverse transcription polymerase chain reaction to quantify low abundance messenger RNA (mRNA), enabling a researcher to quantify relative gene expression at a particular time, or in a particular cell or tissue type.
Although real-time quantitative polymerase chain reaction is often marketed as RT-PCR, it should not be confused with reverse transcription polymerase chain reaction, also known as RT-PCR.

Real-time PCR using double-stranded DNA dyes
A DNA-binding dye binds to all double-stranded (ds)DNA in a PCR reaction, causing fluorescence of the dye. An increase in DNA product during PCR therefore leads to an increase in fluorescence intensity and is measured at each cycle, thus allowing DNA concentrations to be quantified. However, dsDNA dyes such as SYBR Green will bind to all dsDNA PCR products, including nonspecific PCR products (such as "primer dimers"). This can potentially interfere with or prevent accurate quantification of the intended target sequence.
1. The reaction is prepared as usual, with the addition of fluorescent dsDNA dye.
2. The reaction is run in a thermocycler, and after each cycle, the levels of fluorescence are measured with a detector; the dye only fluoresces when bound to the dsDNA (i.e., the PCR product). With reference to a standard dilution, the dsDNA concentration in the PCR can be determined.
Like other real-time PCR methods, the values obtained do not have absolute units associated with them (i.e., mRNA copies/cell). As described above, a comparison of a measured DNA/RNA sample to a standard dilution will only give a fraction or ratio of the sample relative to the standard, allowing only relative comparisons between different tissues or experimental conditions. To ensure accuracy in the quantification, it is usually necessary to normalize expression of a target gene to a stably expressed gene. This can correct for possible differences in RNA quantity or quality across experimental samples.

Fluorescent reporter probe method

The use of fluorescent reporter probes is the most accurate and most reliable of the methods, but also the most expensive. It uses a sequence-specific RNA or DNA-based probe to quantify only the DNA containing the probe sequence; therefore, use of the reporter probe significantly increases specificity, and allows quantification even in the presence of some non-specific DNA amplification. This potentially allows for multiplexing - assaying for several genes in the same reaction by using specific probes with different-coloured labels, provided that all genes are amplified with similar efficiency.
It is commonly carried out with an RNA-based probe with a fluorescent reporter at one end and a quencher of fluorescence at the opposite end of the probe. The close proximity of the reporter to the quencher prevents detection of its fluorescence; cleavage of the probe by the 5' to 3' exonuclease activity of the Taq polymerase breaks the reporter-quencher proximity and thus allows unquenched emission of fluorescence, which can be detected. An increase in the product targeted by the reporter probe at each PCR cycle therefore causes a proportional increase in fluorescence due to the breakdown of the probe and release of the reporter.
- The PCR reaction is prepared as usual (see PCR), and the reporter probe is added.
- As the reaction commences, during the annealing stage of the PCR both probe and primers anneal to the DNA target.
- Polymerisation of a new DNA strand is initiated from the primers, and once the polymerase reaches the probe, its 5' to 3' exonuclease activity degrades the probe, physically separating the fluorescent reporter from the quencher, resulting in an increase in fluorescence.
- Fluorescence is detected and measured in the real-time PCR thermocycler, and its geometric increase corresponding to exponential increase of the product is used to determine the threshold cycle (CT) in each reaction.
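The threshold-cycle determination in the last step can be sketched as follows. The fluorescence values here are invented for illustration; real instruments also perform baseline subtraction before applying the threshold.

```python
# Hypothetical per-cycle fluorescence readings (index = cycle number).
fluorescence = [1.0, 1.0, 1.1, 1.2, 1.5, 2.1, 3.3, 5.7, 10.5, 20.1, 38.0]
threshold = 4.0

def threshold_cycle(readings, threshold):
    """Return the fractional cycle at which fluorescence first crosses
    the threshold, using linear interpolation between cycles."""
    for cycle in range(1, len(readings)):
        lo, hi = readings[cycle - 1], readings[cycle]
        if lo < threshold <= hi:
            return (cycle - 1) + (threshold - lo) / (hi - lo)
    return None  # threshold never crossed

ct = threshold_cycle(fluorescence, threshold)
print(ct)
```

Because product doubles (at best) each cycle during the exponential phase, a lower CT means more starting template.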
There are numerous applications for real-time polymerase chain reaction in the laboratory. It is commonly used for both diagnostic and research applications.
Diagnostically real-time PCR is applied to rapidly detect the presence of genes involved in infectious diseases, cancer and genetic abnormalities. In the research setting, real-time PCR is mainly used to provide highly sensitive quantitative measurements of gene transcription.
The technology may be used in determining how the genetic expression of a particular gene changes over time, such as in the response of tissue and cell cultures to an administration of a pharmacological agent, progression of cell differentiation, or in response to changes in environmental conditions.
Also, the technique is used in environmental microbiology, for example to quantify resistance genes in water samples.
When James Watson and Francis Crick elucidated the structure of DNA in 1953, they solved one mystery, but created another.
For almost a hundred years after the publication of On the Origin of Species by Charles Darwin in 1859, the science of biology rested secure in the knowledge that it had explained one of humankind’s most enduring enigmas. From ancient times, observers of living organisms had noted that living things display organized structures that give the appearance of having been deliberately arranged or designed for a purpose, for example, the elegant form and protective covering of the coiled nautilus, the interdependent parts of the eye, the interlocking bones, muscles, and feathers of a bird wing. For the most part, observers took these appearances of design as genuine. Observations of such structures led thinkers as diverse as Plato and Aristotle, Cicero and Maimonides, Boyle and Newton to conclude that behind the exquisite structures of the living world was a designing intelligence. As Newton wrote in his masterpiece The Opticks: “How came the Bodies of Animals to be contrived with so much Art, and for what ends were their several parts? Was the Eye contrived without Skill in Opticks, and the Ear without Knowledge of Sounds? . . . And these things being rightly dispatch’d, does it not appear from Phænomena that there is a Being incorporeal, living, intelligent . . . ?”
But with the advent of Darwin, modern science seemed able to explain this appearance of design as the product of a purely undirected process. In the Origin, Darwin argued that the striking appearance of design in living organisms—in particular, the way they are so well adapted to their environments—could be explained by natural selection working on random variations, a purely undirected process that nevertheless mimicked the powers of a designing intelligence. Since then the appearance of design in living things has been understood by most biologists to be an illusion—a powerfully suggestive illusion, but an illusion nonetheless. As Crick himself put it thirty-five years after he and Watson discerned the structure of DNA, biologists must “constantly keep in mind that what they see was not designed, but rather evolved.”
But due in large measure to Watson and Crick’s own discovery of the information-bearing properties of DNA, scientists have become increasingly and, in some quarters, acutely aware that there is at least one appearance of design in biology that may not yet have been adequately explained by natural selection or any other purely natural mechanism. Indeed, when Watson and Crick discovered the structure of DNA, they also discovered that DNA stores information using a four character chemical alphabet. Strings of precisely sequenced chemicals called nucleotide bases store and transmit the assembly instructions—the information—for building the crucial protein molecules and machines the cell needs to survive.
Crick later developed this idea in his famous “sequence hypothesis,” according to which the chemical parts of DNA (the nucleotide bases) function like letters in a written language or symbols in a computer code. Just as letters in an English sentence or digital characters in a computer program may convey information depending on their arrangement, so too do certain sequences of chemical bases along the spine of the DNA molecule convey precise instructions for building proteins. Like the precisely arranged zeros and ones in a computer program, the chemical bases in DNA convey information in virtue of their “specificity.” As Richard Dawkins notes, “The machine code of the genes is uncannily computer-like.” Software developer Bill Gates goes further: “DNA is like a computer program but far, far more advanced than any software ever created.”
But if this is true, how did the information in DNA arise? Is this striking appearance of design the product of actual design or of a natural process that can mimic the powers of a designing intelligence? As it turns out, this question is related to a long-standing mystery in biology—the question of the origin of the first life. Indeed, since Watson and Crick’s discovery, scientists have increasingly come to understand the centrality of information to even the simplest living systems. DNA stores the assembly instructions for building the many crucial proteins and protein machines that service and maintain even the most primitive one-celled organisms. It follows that building a living cell in the first place requires assembly instructions stored in DNA or some equivalent molecule. As origin-of-life researcher Bernd-Olaf Küppers explains, “The problem of the origin of life is clearly basically equivalent to the problem of the origin of biological information.”
Much has been discovered in molecular and cell biology since Watson and Crick’s revolutionary discovery more than fifty years ago, but these discoveries have deepened rather than mitigated the enigma of DNA. Indeed, the problem of the origin of life (and the origin of the information needed to produce it) remains so vexing that Harvard University recently announced a $100 million research program to address it. When Watson and Crick discovered the structure and information bearing properties of DNA, they did indeed solve one mystery, namely, the secret of how the cell stores and transmits hereditary information. But they uncovered another mystery that remains with us to this day. This is the DNA enigma—the mystery of the origin of the information needed to build the first living organism.
In one respect, of course, the growing awareness of the reality of information within living things makes life seem more comprehensible. We live in a technological culture familiar with the utility of information. We buy information; we sell it; and we send it down wires. We devise machines to store and retrieve it. We pay programmers and writers to create it. And we enact laws to protect the “intellectual property” of those who do. Our actions show that we not only value information, but that we regard it as a real entity, on par with matter and energy.
That living systems also contain information and depend on it for their existence makes it possible for us to understand the function of biological organisms by reference to our own familiar technology. Biologists have also come to understand the utility of information, in particular, for the operation of living systems. After the early 1960s advances in the field of molecular biology made clear that the digital information in DNA was only part of a complex information-processing system, an advanced form of nanotechnology that mirrors and exceeds our own in its complexity, storage density, and logic of design. Over the last fifty years, biology has advanced as scientists have come to understand more about how information in the cell is stored, transferred, edited, and used to construct sophisticated machines and circuits made of proteins.
Articles on the BreakPoint website are the responsibility of the authors and do not necessarily represent the opinions of Chuck Colson or Prison Fellowship. Outside links are for informational purposes and do not necessarily imply endorsement of their content. | <urn:uuid:0af2eee8-4bf6-488c-9e5a-e189d9a90f82> | 3.65625 | 1,415 | Nonfiction Writing | Science & Tech. | 27.582743 |
Pitch vs Frequency
Pitch and frequency are two concepts discussed in physics and music. Frequency is the number of repetitive occurrences per unit time whereas pitch is an intuitive concept associated with the frequency of a sound wave. These concepts are widely used in fields such as acoustics, music, waves and vibrations and various other fields. In this article, we are going to discuss what frequency and pitch are, their definitions, the similarities between pitch and frequency, the applications of pitch and frequency, and finally the difference between pitch and frequency.
Frequency is a concept discussed in periodic motions of objects. To understand the concept of frequency, a proper understanding of periodic motions is required.
A periodic motion can be considered as any motion that repeats itself in a fixed time period. A planet revolving around the sun is a periodic motion; a satellite orbiting the earth is a periodic motion; even the motion of a balance ball set is a periodic motion. Most of the periodic motions we encounter are circular, linear or semi-circular. A periodic motion has a frequency.
The frequency means how "frequent" the event is; for simplicity, we take frequency as the number of occurrences per second. Periodic motions can be either uniform or non-uniform; a uniform motion has a constant angular velocity. Signals such as amplitude-modulated waves can have double periods: they are periodic functions encapsulated in other periodic functions. The inverse of the frequency of a periodic motion gives the time for one period, so the frequency can also be obtained from the time difference between two similar occurrences. Simple harmonic motions and damped harmonic motions are also periodic motions. For small oscillations, the frequency of a simple pendulum depends only on the length of the pendulum and the gravitational acceleration.
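The pendulum claim above corresponds to the standard small-angle formula f = (1/2π)·√(g/L), which contains only the length and the gravitational acceleration:

```python
import math

def pendulum_frequency(length_m, g=9.81):
    """Small-oscillation frequency of a simple pendulum, in hertz."""
    return math.sqrt(g / length_m) / (2 * math.pi)

# A 1-metre pendulum swings at roughly 0.5 Hz (period about 2 s).
print(round(pendulum_frequency(1.0), 3))  # 0.498
```

Note that the bob's mass does not appear anywhere in the calculation.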
Frequency is also discussed in statistics. The absolute frequency is the number of times an event is repeated over the given time or over a unit time.
Pitch is a concept connected directly with frequency: the higher the pitch of a sound, the higher the frequency at which it oscillates.
Pitch is a property discussed only for sound waves, and it is not a well-defined quantity. Pitch is not a property of the sound wave itself; rather, it is the hearing sensation created by the sound wave. Pitch can be described only in relative terms such as "high pitched" or "low pitched"; there is no way of measuring an absolute amount of pitch, since it is not a well-defined quantity.
Some sound waves contain several pitches, as they are combinations of overtones.
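For a harmonic sound, those overtones sit at integer multiples of the fundamental frequency. A small sketch (the 220 Hz fundamental is an arbitrary example, not from the article):

```python
def harmonic_series(fundamental_hz, count=5):
    """First `count` partials of a harmonic tone: the fundamental
    followed by its overtones at integer multiples."""
    return [fundamental_hz * n for n in range(1, count + 1)]

print(harmonic_series(220.0))  # [220.0, 440.0, 660.0, 880.0, 1100.0]
```

A sound mixing several such series, each with its own fundamental, is heard as containing several pitches at once.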
Pitch vs Frequency
- Frequency is a very well-defined quantity whereas pitch is not a well-defined quantity.
- Pitch is a property observed only in sound waves. Frequency is a property discussed in all forms of waves including electromagnetic waves and mechanical waves.
- Frequency is also discussed in oscillations and vibrations. | <urn:uuid:268bf356-d5b1-41bd-8c79-9f92a6c10925> | 3.96875 | 587 | Knowledge Article | Science & Tech. | 38.0325 |
Okay, we are about to go into the details of SQL queries, but before that we should say one
last thing about SQL database structures. Specifically,
most databases store their data in terms of data types.
Defining data types allows the database to be more
efficient and helps to protect you against adding bad
data to your tables.
There are several standard data types, including:

- CHARACTER (CHAR): Contains a string of characters. Usually these fields will have a specified maximum length that is defined when the table is created.
- NUMERIC: Contains a number with a specified number of decimal digits and a scale (indicating a power to which the value should be multiplied), defined at table creation.
- DECIMAL: Similar to NUMERIC, except that it is more flexible: the actual precision may exceed the precision specified.
- INTEGER (INT): Only accepts integers.
- SMALLINT: Same as INTEGER, except that its precision must be smaller than INT precision in the same table.
- FLOAT: Contains floating point numbers.
- DOUBLE PRECISION: Like FLOAT but with greater precision.
It is important to note that not all databases implement the entire list, and some add their own data types, such as calendar or monetary types. Some fields may also allow a NULL value even though NULL is not, strictly speaking, a value of the field's declared type.
Okay, we will explain data types when we actually start using them,
so for now, let's go on to some real examples of doing things with SQL. Let's
log on to a database and start executing queries using SQL.
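As a small preview, here is roughly what declaring and using typed columns looks like. This sketch drives SQLite through Python's built-in sqlite3 module; note that SQLite's type handling is looser than that of most database servers, so the declarations act more as hints there.

```python
import sqlite3

# An in-memory database so the example is self-contained.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE products (
        name  CHAR(30),        -- character string, maximum length 30
        price DECIMAL(8, 2),   -- up to 8 digits, 2 after the decimal point
        stock INTEGER          -- whole numbers only
    )
""")
cur.execute("INSERT INTO products VALUES (?, ?, ?)", ("widget", 9.99, 42))

cur.execute("SELECT name, price, stock FROM products")
print(cur.fetchone())  # ('widget', 9.99, 42)
```

The table and column names here are invented for illustration; the point is only that each column is declared with one of the types listed above.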
int dup(int fildes);
int dup2(int fildes, int fildes2);
The dup() and dup2() functions provide an alternative interface to the service provided by fcntl() using the F_DUPFD command. The call:
fid = dup(fildes);
shall be equivalent to:
fid = fcntl(fildes, F_DUPFD, 0);
fid = dup2(fildes, fildes2);
shall be equivalent to:
close(fildes2); fid = fcntl(fildes, F_DUPFD, fildes2);
except for the following:

- If fildes2 is less than 0 or greater than or equal to {OPEN_MAX}, dup2() shall return -1 with errno set to [EBADF].
- If fildes is a valid file descriptor and is equal to fildes2, dup2() shall return fildes2 without closing it.
- If fildes is not a valid file descriptor, dup2() shall return -1 and shall not close fildes2.
- The value returned shall be equal to the value of fildes2 upon successful completion, or -1 upon failure.
Upon successful completion a non-negative integer, namely the file descriptor, shall be returned; otherwise, -1 shall be returned and errno set to indicate the error.
The dup() function shall fail if:

- [EBADF]: The fildes argument is not a valid open file descriptor.
- [EMFILE]: The number of file descriptors in use by the process would exceed {OPEN_MAX}.

The dup2() function shall fail if:

- [EBADF]: The fildes argument is not a valid open file descriptor, or the fildes2 argument is negative or greater than or equal to {OPEN_MAX}.
- [EINTR]: The dup2() function was interrupted by a signal.
The following sections are informative.
The following example closes standard output for the current processes, re-assigns standard output to go to the file referenced by pfd, and closes the original file descriptor to clean up.
#include <unistd.h>
...
int pfd;
...
close(1);
dup(pfd);
close(pfd);
...
The following example redirects messages from stderr to stdout.
#include <unistd.h>
...
dup2(1, 2);
...
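The same descriptor-duplication behaviour can be observed from Python, whose os module provides thin wrappers over these POSIX calls. This sketch redirects file descriptor 1 (standard output) into a pipe so the effect is visible, then restores it:

```python
import os

read_end, write_end = os.pipe()

saved_stdout = os.dup(1)    # keep a duplicate of the original stdout
os.dup2(write_end, 1)       # fd 1 now refers to the pipe's write end
os.write(1, b"hello via dup2\n")
os.dup2(saved_stdout, 1)    # restore the original stdout
os.close(saved_stdout)
os.close(write_end)

message = os.read(read_end, 100)
os.close(read_end)
print(message)  # b'hello via dup2\n'
```

As in the C examples, dup2() closes its second argument (if open) before reusing its number, which is what makes the save-and-restore pattern work.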
The dup() and dup2() functions are redundant. Their services are also provided by the fcntl() function. They have been included in this volume of IEEE Std 1003.1-2001 primarily for historical reasons, since many existing applications use them.
While the brief code segment shown is very similar in behavior to dup2(), a conforming implementation based on other functions defined in this volume of IEEE Std 1003.1-2001 is significantly more complex. Least obvious is the possible effect of a signal-catching function that could be invoked between steps and allocate or deallocate file descriptors. This could be avoided by blocking signals.
The dup2() function is not marked obsolescent because it presents a type-safe version of functionality provided in a type-unsafe version by fcntl(). It is used in the POSIX Ada binding.
The dup2() function is not intended for use in critical regions as a synchronization mechanism.
In the description of [EBADF], the case of fildes being out of range is covered by the given case of fildes not being valid. The descriptions for fildes and fildes2 are different because the only kind of invalidity that is relevant for fildes2 is whether it is out of range; that is, it does not matter whether fildes2 refers to an open file when the dup2() call is made.
close() , fcntl() , open() , the Base Definitions volume of IEEE Std 1003.1-2001, <unistd.h> | <urn:uuid:3e7a5a6a-ad84-45eb-8a8c-774946b71ef3> | 2.78125 | 649 | Documentation | Software Dev. | 58.176302 |
Current information on algal blooms
Algal Bloom Monitoring August 30, 2012: Cyanobacterial amounts are low at sea, no changes in lake observations
Amounts of cyanobacteria are quite low in the Finnish sea areas. Low amounts of cyanobacteria have been observed in the Gulf of Finland, the Archipelago Sea, and the Bothnian Sea. In lakes, the situation is better than is typical for this time of year. Cyanobacteria have been recorded at every fifth observation site.
The summer's weekly algal information ends this week. However, algal bloom monitoring will continue in the observation sites until the end of September. Information will be provided in case the algal bloom situation changes.
A nationwide algal monitoring network
Algal blooms are monitored at more than 300 permanent monitoring locations in the Baltic Sea and inland waters in all regions of Finland. These sites are chosen to be representative of different types of waters in terms of their depth, size, water quality and nutrient levels. Monitoring is carried out weekly from the beginning of June to mid September. The extent of blue-green algal blooms is first assessed by visual observation on a scale from 0 (no algae) to 3 (very abundant algal blooms). If algal blooms are clearly evident, samples are taken to allow the identification of the species involved.
Monitoring is mainly carried out by municipal environmental and health officials, but members of the public also help with this work. Observations from open marine waters are taken automatically by commercial ships. Additional information for open marine waters are obtained by remote sensing and visual observations submitted by the pilots of the Finnish Frontier Guard.
Monitoring harmful algae
Summer 2003. © Seppo Knuuttila
Finland’s environmental authorities closely monitor occurrences of blue-green algae, and have kept records of harmful blooms since the 1980s, with the help of samples submitted by officials and members of the public.
More systematic monitoring of harmful algal blooms began in 1998, and nowadays provides a comprehensive overview of the state of algal blooms throughout Finland's inland and coastal waters and the Baltic open sea area. Monitoring is organised jointly by the Centres for Economic Development, Transport and the Environment, municipal environmental officials and the Finnish Environment Institute (SYKE).
Researchers at the CNRS-AIST Joint Robotics Laboratory (a collaboration between France's Centre National de la Recherche Scientifique and Japan's National Institute of Advanced Industrial Science and Technology) are developing software that allows a person to drive a robot with their thoughts alone. The technology could one day give a paralyzed patient greater autonomy through a robotic agent or avatar.
The system requires that a patient concentrate their attention on a symbol displayed on a computer screen (such as a flashing arrow). An electroencephalography (EEG) cap outfitted with electrodes reads the electrical activity in their brain, which is interpreted by a signal processor. Finally, the desired command is sent to the robot to carry out.
The system does not provide direct fine-grain motor control: the robot is simply performing a preset action such as walking forward, turning right or left, and so on. The robot's artificial intelligence, developed over several years at the lab, allows it to perform more delicate tasks such as picking up an object from a table without needing human input. In this scenario, the robot's camera images are parsed by object recognition software, allowing the patient to choose one of the objects on a table by focusing their attention on it.
With training, the user can direct the robot's movements and pick up beverages or other objects in their surroundings. The system can be seen in use in the DigInfo video at the bottom of the page.
This is similar to but more sophisticated than previous projects, one involving Honda's ASIMO robot from 2006, and another at the University of Washington from 2007.
A different but more direct approach would be to track a patient's eye movements. Recent research conducted at the Université Pierre et Marie Curie-Paris enabled cursive writing on a computer screen through eye movement alone. The same technology could allow a patient to move a cursor and select from a multitude of action icons without having to go through the EEG middle-man. The hitch is that – in some circumstances – eye movement isn't possible or can't be tracked reliably due to eye conditions. In that case, brain implants may be the way to go.
No matter how you slice it, researchers aren't giving up, and with further progress robot avatars may cease being the stuff of science fiction. No doubt patients would feel empowered and liberated by this technology, but it will be a while before it can be implemented, and the robots being deployed will likely look more like Toyota's recently unveiled Human Support Robot than advanced bipedal robots.
Global Warming Kills Whitebark Pines, Threatening Mountain Ecosystems
Millions of dying whitebark pine trees could be disastrous for the delicate ecosystem that supports grizzlies, birds and Western water supplies.
Across the American West, whitebark pine, a linchpin of high-altitude ecosystems, is rapidly falling victim to the aggressive mountain pine beetle. Warming temperatures allow the native beetle to thrive in previously inhospitable high-elevation forests, where the insect bores into and kills whitebark pine trees. Swaths of dead whitebarks now stretch across the landscape, their telltale red needles bearing witness to the unprecedented impacts of climate change in this iconic ecosystem.
Whitebark Pine: A Keystone Species
Whitebark pine is the foundation species for alpine ecosystems of western North America, its range stretching from California and Nevada in the south, through the Northern Rocky Mountains and the Cascades, to British Columbia and Alberta in the north. Whitebark pine grows at high elevations, and provides food and shelter for animals where few other trees can even survive.
Dead and dying whitebark pines near Goodwin Lake, WY (Photo by Whitney Leonard). Click to view more.
Consider the Yellowstone grizzly bear, one of the most iconic species affected by the whitebark decline. In the fall, just before hibernation, grizzlies raid caches of whitebark pine cones stored by other animals. It’s an efficient way to get large, nutritious, whitebark pine seeds at a critical time of year. When these caches of high-quality food are not easily available, female grizzlies risk entering hibernation with fewer nutritional reserves, and give birth to fewer cubs.
In addition, if whitebark pine crops fail, grizzlies are driven to forage in lower-elevation areas where they risk encountering, and being killed by, humans. These well-documented threats to whitebark pine were a key factor in the September 2009 decision by a federal judge to put Yellowstone grizzlies back on the endangered species list.
Whitebark pine seeds are also an important food source for Clark’s nutcrackers, whose forgotten seed caches help plant new whitebark stands. Red squirrels and a host of other small mammals and birds also rely on whitebark pine seeds in high mountain environments where food can be scarce.
Moreover, whitebark pine forests stabilize and shade the snowpack, reducing avalanches and extending precious snowmelt flows into the summer months. This slow melting process not only keeps rivers cool for trout and other aquatic wildlife but also helps maintain sufficient water resources for the people living in the arid American West.
Threats to the Future of Whitebark Pine: Blister Rust, Global Warming and Mountain Pine Beetles
Although whitebark pine survives the harshest weather conditions in the western mountains – frigid temperatures, high winds and lightning strikes – it is no match for the relatively recent impacts humans have caused. White pine blister rust, a lethal disease accidentally brought to the continent on imported seedlings, has wiped out roughly 50 percent of the whitebark pine in the Rocky Mountains since its arrival in the early 20th century. In some areas such as Glacier National Park, it has killed 85 to 95 percent of the whitebark pine. Infected trees can take a long time to die, but the disease can also cause their cone production to drop significantly, affecting grizzlies and other wildlife.
Climate-Driven Mountain Pine Beetle Infestation in Whitebark
Exacerbating the effects of blister rust is a new threat: the mountain pine beetle, a recent arrival to the high-elevation ecosystems where the whitebark pine lives. This small insect bores into mature pine trees, killing them by eating critical tissue under the bark. When the beetles hatch in the summer, huge swarms attack a forest all at once. Cool year-round temperatures and freezing winters once kept this beetle confined to low-elevation forests, where native lodgepole pines evolved natural defenses against beetles. Global warming, however, has allowed the mountain pine beetle to expand its range into high-elevation forests, where the whitebark pine is virtually defenseless against this newcomer and its explosive attacks.
Beetles attack the mature whitebark pines and blister rust kills the smaller trees, creating a perfect storm; together, beetles and blister rust could wipe out whitebark pine as a functional component of high-elevation ecosystems. Entire hillsides have turned red with the dried needles of dead whitebark pines. As the needles drop, the hills turn gray. The term “evergreen” no longer applies to these once majestic forests.
In December of 2008, NRDC submitted a petition to the U.S. Fish and Wildlife Service, asking the agency to list the tree as endangered under the Endangered Species Act. In response to litigation by NRDC, USFWS finally released a positive 90-day finding on whitebark in July 2010, indicating that listing the species may be warranted. The agency is now conducting a year-long status review, to determine whether or not they will list whitebark, and – if it is listed – whether it should receive “threatened” or “endangered” status.
Endangered Species Act protections could help federal agencies focus their whitebark efforts and could bring increased resources for research, conservation, and restoration efforts.
last revised 12/7/2010
"Each bee on her return is followed by three or four companions . . . how they do it has not yet been observed" Aristotle, Historia Animalium, IX
"In the summer of 1944 a few very simple experiments led to a result that was just as unexpected as it was thrilling" Karl von Frisch on the discovery of the dance language of bees .
Soon after the end of World War II in war ravaged Germany, Frisch was observing the dance of bees and "reading" the language he himself had recently deciphered. In a way, he was feeling ecstatic: he could eavesdrop in the bee conversation and interpret their symbolic language. He understood the eight-shaped dance meant, for example, nectar 1.5 km away and at 30 degrees from the current position of the sun. On several occasions he had astonished neighbors by telling them that his bees were feeding from sources on their farms which he had not seen. The human and insect brain had never communicated in such a way before. But at the same time he was baffled.
How could the bee know the position of the sun? At that time he was studying the bee dance on a comb placed horizontally. Previous experiments had proven conclusively that bees used the sun as a compass. He could even rotate the dance at will by replacing the sun with a lamp. If the horizontal comb was covered and illuminated by diffuse light, the dances were disoriented. But somehow they became oriented again if the bee could see a small patch of blue sky. As hard as it was to believe at the time, Frisch concluded that the bee could see the polarization pattern of the sky! Later, other researchers discovered many other animals whose eyes are sensitive to polarized light, some of which can use it for navigation, as the bee does. But this capacity was discovered in honeybees first, because the bees gave away their secret through their dance language.
The Dance Floor
Bees returning to the beehive after finding a good supply of food will communicate to other bees by dancing at a particular region in the comb: the dance floor. The dance floor is generally close to the entrance but sometimes moves, e.g. goes further inside when it is cold or closer to the entrance when there is lots of activity. In Nature honey combs are vertical, so the dance is generally performed on a vertical plane. This is of great significance for the bee dance as the language must provide information of horizontal directions on a vertical plane. However, when the weather is very warm the dance floor may move outside the entrance to a horizontal flight board. It is also horizontal in some primitive bee species and can always be made horizontal by the human experimenter. Dances on oblique dancing floors can also happen, mainly on the obliquely rounded lower edge of a free-hanging comb or on the rounded swarm cluster bees form when looking for a new nesting place. Notice that in nature the vertical dancing floor is inside the hive and thus quite dark while the horizontal one is generally under the open sky.
The Bee Dancing Repertoire
When a foraging bee finds food close to the beehive, it performs its simplest dance, the Round Dance. This dance doesn't provide much information, it is more of an arousal signal. The forager bee runs in a small circle, leaving a single cell inside it. Every one or two circles it suddenly reverses orientation and this goes on from seconds to minutes. The bees recruited follow the dancer on the floor and then fly off by themselves looking for the food. If these bees haven't been feeding at a particular place before, they will look for food in every direction in the proximity of the beehive. However, the dancing bee also gives away odors that can be recognized by bees frequenting the same flowers, who will fly directly to them.
When the goal is further away, the bees need more sophisticated means of communication. If food is scarce, bees have been known to feed up to about 15 km (~10 miles) from the beehive. In relation to the small size of this animal these distances are outstanding. Although a bee flying to a known source of food uses as references conspicuous landmarks in addition to the sun compass, it can only communicate information about the latter to fellow bees. The Tail-Wagging dance tells the other bees very accurately at what distance and in which direction the food is, so they can look for it by themselves. Some European honeybees start to perform it when the source of food is more than 100 meters away. Other bee species will do them for closer sources, up to just a few meters away in the case of some Indian bees. For intermediate distances there is a gradual transition between the round dance and the tail-wagging dance.
In a typical tail-wagging dance the honeybee (Apis mellifera) runs straight ahead for a short distance, returns in a semicircle to the starting point, again runs through the straight stretch, describes a semicircle in the opposite direction, and so on in regular alternation. The straight part of the run is given particular emphasis by a vigorous wagging of the body (rapid rhythmic sidewise deflections). In addition, during the tail-wagging portion of the dance the bee emits a buzzing sound. Interestingly, the dance followers can make the dancer pause and give them a taste of the nectar by using a squeaking sound.
With increasing distance the number of circuits (8's) per unit time decreases and the length and duration of the individual circuits increase. For example, for a goal at 100 meters the bee makes 10 short circuits in 15 seconds, but at 3 km only 3 long circuits in the same time. The duration of the wagging part has the best correlation with distance. The distance is calculated based on the expenditure of energy on the flight towards the source (a head-wind increases it). Each recruited bee averages many dance circuits or even several dances from different bees to calculate the distance. For each bee species a distance-frequency curve can be plotted. It is remarkably precise, especially if the distance is not close to their foraging range limit.
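To make the distance-frequency idea concrete, here is a toy interpolation built only from the two figures quoted above (100 m at 10 circuits per 15 s, 3 km at 3 circuits per 15 s), assuming the tempo falls linearly with the logarithm of the distance. The real calibration curve is empirical and species-specific, so this is illustrative only:

```python
import math

# The two calibration points quoted in the text for the European honeybee:
# (distance to the food in meters, dance circuits per 15 seconds)
ANCHORS = [(100.0, 10.0), (3000.0, 3.0)]

def estimate_distance(circuits_per_15s):
    """Toy estimate of foraging distance from dance tempo, assuming the
    tempo falls off linearly with the logarithm of the distance."""
    (d1, r1), (d2, r2) = ANCHORS
    slope = (r1 - r2) / (math.log(d2) - math.log(d1))
    intercept = r1 + slope * math.log(d1)
    return math.exp((intercept - circuits_per_15s) / slope)
```

By construction the function reproduces the two quoted anchor points and gives intermediate distances for intermediate tempos.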
If the dance floor is horizontal (the least common case in Nature), the indication of direction is straight-forward: the wagging (straight) portion of the eight-figure dance points towards the food source (and in the same direction as the bee runs through it). But, what does the dancing bee use as compass to accurately point in the right direction? The bee reference is the direction of the sun. This can be demonstrated easily by covering the sky and using a lamp as an artificial sun: the direction of dancing will rotate, always maintaining the same angle with the lamp as the angle with the sun during direct flight towards the food.
If the dance floor is vertical the indication of direction requires a higher-level language that can communicate horizontal directions with an indirect, symbolic, representation. In a vertical plane the natural reference is gravity, so the dancer replaces the real reference, the sun, by the "UP" direction. For example, if the bee maintained the sun 70 degrees to her left when flying towards the nectar, the wagging portion of her dance will point 70 degrees in the clockwise direction from the upwards vertical direction. The bee transposes the solar angle into a gravitational angle! On an oblique comb the gravitational transposition works well up to an angle of about 10 degrees to the horizontal.
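The transposition rule itself is simple enough to state in code. This sketch (the function name and the compass-azimuth convention are my own) expresses the substitution of "up" for the sun on a vertical comb:

```python
def waggle_angle_cw_from_up(flight_azimuth_deg, sun_azimuth_deg):
    """Direction of the waggle run on a vertical comb, measured clockwise
    from straight up: the bee substitutes 'up' for the sun, so the dance
    angle equals the flight bearing measured clockwise from the sun's
    bearing."""
    return (flight_azimuth_deg - sun_azimuth_deg) % 360

# The example in the text: the sun was 70 degrees to the bee's left, i.e.
# the flight bearing is 70 degrees clockwise of the sun's bearing, so the
# waggle run points 70 degrees clockwise from vertical.
```

Food lying directly toward the sun gives a straight-up waggle run; food with the sun to the bee's right gives a counterclockwise (i.e. large clockwise) angle.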
Next Page: polarized navigation =>
"This provides the accuracy missing in previous calculations,
and it makes bigger systems possible."
Sometimes proteins are like the Wicked Witch. Add water, wait a nanosecond and you can almost hear that piteous whimper. "I'm mel-l-l-ting!" Well, not really. But there may be an audible groan from biochemists as they observe simulations of a large protein or DNA in water. For a few hundred picoseconds of simulation time it looks fine, then the molecule appears to unravel. It's been a troubling problem, but with a series of computations at the Pittsburgh Supercomputing Center, Tom Darden and Lee Pedersen seem to have it fixed.
The problem has been molecular dynamics (MD), computations that simulate and predict how molecular structure changes over time. As computing power has increased over the past 15 years, MD has evolved to become an important part of the molecular biology toolkit. The colorful protein structures in biochemistry textbooks represent molecules painstakingly removed from cells and crystallized so that X-ray crystallography can reveal their three-dimensional structure. These structures have been enormously important in advancing knowledge, but as representations of reality they are analogous to butterflies mounted in a museum showcase. MD makes it possible, in effect, for the butterflies to fly; the static molecular structures are starting data for simulations in a cell-like environment where scientists can observe their movements. The most realistic MD simulations include surrounding water molecules and ions, replicating many of the structural forces acting in the cell.
Nevertheless, trying to account for the interactions among the atoms of the molecule itself with each other and with thousands of water molecules is an extremely complicated computational task. As computing power has increased, it has become feasible and desirable to track how the structure changes over as long as several nanoseconds -- less than a fast eyeblink in human time, but as good as a lifetime in protein biochemistry. Such simulations can take hundreds of computing hours, and the outcome can be disappointing.
A few years ago, Pedersen and Darden encountered protein melting face-to-face. "The protein we were simulating would literally shake itself apart in a couple hundred picoseconds," says Darden, a biomathematician at the National Institute of Environmental Health Science. "This melting behavior is even more pronounced for DNA. Generally, the longer you run, the worse the situation."
With simulations at the Pittsburgh Supercomputing Center, the researchers diagnosed the malady -- the method of simulating electrostatics, the attraction-repulsion forces between atoms that aren't bonded to each other -- and Darden devised a cure. He came up with a new method -- "particle mesh Ewald" -- that is fast and therefore has the added advantage of making it feasible to study larger structures. "There's two significant outcomes," says Pedersen, a physical chemist at the University of North Carolina. "This provides the accuracy missing in previous calculations, and it makes bigger systems possible." To further exploit the advantages of this new method, a parallelized version is now implemented on the CRAY T3D at Pittsburgh.
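The core idea behind Ewald methods is a splitting of the Coulomb sum: a short-range part, screened by a complementary error function so it converges quickly inside a cutoff, plus a smooth long-range remainder that particle mesh Ewald evaluates on a grid with FFTs. The sketch below is a toy illustration of the short-range piece only (my own construction, in Gaussian units for a non-periodic cluster), not Darden's actual PME code:

```python
import math

def real_space_energy(positions, charges, beta, cutoff):
    """Short-range, erfc-screened part of the Ewald Coulomb energy for a
    non-periodic cluster of point charges.  The screening parameter beta
    controls how the full 1/r interaction is split; the smooth erf
    remainder is what particle mesh Ewald puts on a mesh and evaluates
    with fast Fourier transforms."""
    energy = 0.0
    for i in range(len(charges)):
        for j in range(i + 1, len(charges)):
            r = math.dist(positions[i], positions[j])
            if r < cutoff:
                energy += charges[i] * charges[j] * math.erfc(beta * r) / r
    return energy
```

Because erfc decays faster than exponentially, this sum is cheap and accurate with a modest cutoff, which is exactly what a plain cutoff on the bare 1/r interaction fails to be.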
This graphic depicts the molecular structure of a large protein, bovine pancreatic trypsin inhibitor, as determined in particle mesh Ewald simulations by Lee Pedersen and Tom Darden. The color coding indicates oxygen atoms (red), nitrogen (blue), hydrogen (cyan), carbon (white) and sulfur (yellow).
The "backbone" of bovine pancreatic trypsin inhibitor coded according to secondary structure: Random coil (wheat) is tube shaped; helices (aqua) are flat; and sheets (magenta) are arrows. The dotted structure shows the solvent accessible molecular surface.
Researchers: Tom Darden, National Institute of Environmental Health Science; Lee Pedersen, University of North Carolina.
Hardware: CRAY C90, CRAY T3D
Software: AMBER, Particle Mesh Ewald, PME
Keywords: Proteins, molecular dynamics, biochemistry, molecular structure, electrostatics, bonds, non-bonded interactions, particle mesh Ewald, Ewald summation, cutoff radius, biomolecules, counterions, X-ray crystallography, fast Fourier transforms, FFT, macromolecular Ewald summation, bovine pancreatic trypsin inhibitor, H-ras p21, DNA.
Related Material on the Web:
The PSC Biomedical Supercomputing Initiative.
Information on AMBER, including T3D implementation of PME.
Projects in Scientific Computing, PSC's annual research report.
References, Acknowledgements & Credits
I'm sorry for the pause in writing new posts about design patterns.
So, today I'm going to write a post about the most popular and well-known design pattern - Singleton.
The essence of Singleton is to provide:
- exactly one instance of a class across the system;
- simple access to it.
The implementation of Singleton is based on creating a class with a method (or property in .NET) that creates an instance of this class if one doesn't exist yet. The constructor of the class must be private to prevent other ways of initialization. Also, Singleton must be used carefully in multi-threaded applications, because at one point in time two threads may create two different instances (which violates the singleton pattern).
Let's start with a non-thread-safe implementation.
This is the basic implementation and it is not thread-safe: two different threads can evaluate the null check at the same time and each create its own instance.
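The post's original C# snippet was lost from this copy. As a stand-in, here is the same naive, non-thread-safe lazy singleton sketched in Python (class and method names are mine):

```python
class Singleton:
    """Naive lazy singleton -- NOT thread-safe."""
    _instance = None

    @classmethod
    def instance(cls):
        # Two threads can both observe _instance as None here and each
        # construct its own object, violating the pattern.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance
```

In single-threaded code every call to `Singleton.instance()` returns the same object; the race only appears under concurrency.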
A simple thread-safe implementation is shown below.
This implementation is thread-safe. Here I used a shared object (lockObject) to mark a statement block as a critical section via the lock keyword. But performance suffers, because the lock is acquired every time the instance is requested.
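Again the C# listing is missing from this copy; here is a Python sketch of the same lock-guarded approach, with `threading.Lock` standing in for C#'s `lock` keyword and the shared lockObject (names mine):

```python
import threading

class ThreadSafeSingleton:
    """Lazy singleton guarded by a shared lock."""
    _instance = None
    _lock = threading.Lock()   # the shared "lockObject"

    @classmethod
    def instance(cls):
        # Critical section: only one thread at a time may run the null
        # check and construction.  The cost is that the lock is taken on
        # every call, even long after the instance exists.
        with cls._lock:
            if cls._instance is None:
                cls._instance = cls()
        return cls._instance
```

Even when many threads race on the first call, only one instance is ever created.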
Let's look at a thread-safe implementation that doesn't use locks.
In C#, static constructors are executed (once per AppDomain) only when a static member is referenced or an instance of the class is created, so here you get a form of lazy instantiation for free.
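Python has no static constructors, but its import machinery offers a loose analogue: a module body is guaranteed to run only once per process, so an instance created there needs no locking afterwards. A hedged sketch (names mine, and the AppDomain analogy is only approximate):

```python
class Config:
    """Shared object created eagerly at module load time."""
    def __init__(self):
        self.settings = {}

# This statement runs exactly once, when the module is first imported --
# loosely analogous to the CLR running a static constructor once per
# AppDomain -- so no locks are needed on later accesses.
CONFIG = Config()
```

Every importer of the module sees the same `CONFIG` object.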
Let's move on to a real-world example.
I didn't want to show you an example with a logging system, as almost all books and blogs do. So I have decided to dive deeper into the restaurant business.
Imagine that you're going to a restaurant, so you call to make a reservation. You also want to reserve a table near the fountain and near a big window. The hostess checks all available tables that satisfy your wishes and proposes them to you.
So all hostesses must use ONLY ONE seating chart: a piece of paper illustrating all the tables in the restaurant and their statuses, or an application that does the same job.
Singleton is really suitable for this functionality.
Restaurant and Table classes:
And here's the implementation of the singleton pattern, the Hostess class:
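The original C# Restaurant, Table and Hostess classes are on the author's github but missing from this copy of the post. The sketch below is my Python approximation of the idea: one shared, lock-guarded seating chart that every hostess consults (table layout and method names are invented for illustration):

```python
import threading

class Table:
    def __init__(self, number, near_window=False, near_fountain=False):
        self.number = number
        self.near_window = near_window
        self.near_fountain = near_fountain
        self.reserved = False

class Hostess:
    """Singleton: the single shared seating chart."""
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def instance(cls):
        with cls._lock:
            if cls._instance is None:
                cls._instance = cls()
        return cls._instance

    def __init__(self):
        self.tables = [Table(1),
                       Table(2, near_window=True),
                       Table(3, near_window=True, near_fountain=True)]

    def find_free(self, near_window=False, near_fountain=False):
        # A table qualifies if it is unreserved and has every requested feature.
        return [t for t in self.tables
                if not t.reserved
                and (t.near_window or not near_window)
                and (t.near_fountain or not near_fountain)]

    def reserve(self, table):
        table.reserved = True
```

Because every hostess goes through `Hostess.instance()`, a reservation made by one is immediately visible to all the others.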
You can always find the source code of the Singleton pattern on github.
Have a nice day ;)
P.S. for more details about the Singleton pattern, go here.
To answer big questions, scientists need to think big about their observing tools.
May 22, 2006
Over the next decade, scientists will make observations to learn more about cosmology's biggest mystery: the nature of dark energy. A slew of telescopes are in the planning or construction stages to aid in this effort.
On Tuesday night, I headed down to the beautiful Bell House in the Gowanus section of Brooklyn for the latest Secret Science Club lecture, featuring astrophysicist Dr Jeremiah Ostriker, professor emeritus of Princeton University, currently of Columbia University. Dr Ostriker has enjoyed a long, storied career in astrophysics, and his latest book is the brand-spanking new Heart of Darkness: Unraveling the Mysteries of the Invisible Universe, co-authored with Simon Mitton.
Dr Ostriker's lecture was an overview of the study of astronomy and astrophysics, with a concentration on the developments in the field throughout the 20th century- a century of cosmological investigations which culminated in a paradigm that really works, but remains puzzling in many ways.
Dr Ostriker began with a brief survey of astronomical studies from the sixteenth to the nineteenth centuries, beginning with the work, from 1550-1650, of Brahe, Kepler, and Galileo, who challenged long-standing views of a static cosmos with the Earth at its center, and created the model of the heliocentric solar system. From 1650-1750, Halley and Euler posited universal laws for the cosmos and applied mathematical analyses to celestial matters. Thomas Wright described the shape of the Milky Way as "an optical effect due to our immersion in what locally approximates to a flat layer of stars." The sun is one star in a galaxy of stars. The majority of a galaxy's stars are near the center of the galaxy- our sun is far from the center of the Milky Way. He also speculated that the dim spiral nebulae were other galaxies; previously, such nebulae were not recognized as such. In 1924, Edwin Hubble confirmed that these spiral nebulae were indeed galaxies. Immanuel Kant elaborated on this theme, speaking of a universe composed of "islands" in the void. In the 1840's a great telescope named the Leviathan was built in the Irish town of Parsonstown specifically to explore the nature of nebulae.
In the 20th Century, Einstein, Hubble and Baade were instrumental in laying the foundations of modern cosmology- not only are there many galaxies, but the galaxies seem to be moving away from each other with a velocity proportional to their separation (an observation known as Hubble's Law). With the discovery that the universe is expanding, there was a question as to whether the universe would expand forever or whether gravity would cause the expansion to decelerate and the universe to collapse. When Einstein formulated his Theory of General Relativity, he believed that the universe was static, and postulated a cosmological constant. After seeing Hubble's evidence of an expanding universe, he realized that the universe is not static- the cosmological constant is characterized as Einstein's "greatest blunder".
Physical theories were put on the back burner through much of the 20th Century as astronomers were trying to puzzle out other astronomical questions, such as the size and the age of the visible universe. In the years 1958-1975, ever larger and more powerful telescopes were used to estimate these cosmological parameters. A major breakthrough took place at a Bell Labs radio facility in Holmdel, New Jersey when physicists Arno Penzias and Robert Wilson detected background radiation from the Big Bang.
Additional surveys of the sky showed a large scale structure in galaxy positions- galaxies tend to cluster, and there are filaments between various clusters. The question of the origin of this structure, and the formation of galaxies after the Big Bang became of paramount importance. Immediately after the Big Bang, the elements were "cooking together"-the most common elements are the lightest ones: hydrogen, helium, and lithium. While the Big Bang theory seemed correct by the end of the 1960's, the density of observable matter was deemed insufficient to slow the expansion of the universe. A paucity of matter would lead to an increasingly empty universe, so why would the observed clustering take place... something crucial was missing from the cosmological model.
From 1975-1995, dark matter came into its own. Dark matter was initially proposed as an explanation for discrepancies between the visible matter and estimated total mass in a distant galaxy cluster by astrophysicist Fritz Zwicky in the 1930s. Simply put, the observable matter was insufficient to account for Zwicky's observations of the Coma Cluster, a cluster of at least one thousand galaxies. Zwicky, who is largely unsung, theorized that clusters of galaxies are held together by the gravitational force of dark matter. Dark matter seems to be electromagnetically inert- it neither reflects nor emits light- the only observable indication that dark matter exists is its gravitational effect.
In 1977, Dr Ostriker observed that rotation curves showed that most of the mass of a galaxy is in the outer regions of the galaxy that have little light output. The mass goes up as one measures outward. Each galaxy has a vast dark halo. The total amount of matter in the universe is ten times what was originally thought when only visible matter was taken into account. At this point in the lecture, Dr Ostriker observed that Van Gogh was eerily prescient:
Perhaps the best evidence for dark matter was found around the year 2000, when observations of the Bullet Cluster, a pair of colliding galaxy clusters, showed that dark matter is not merely composed of baryonic dust and gases, but is something completely different. Dr Ostriker likened the Bullet Cluster to the "Rosetta Stone of Gravity". In his Theory of General Relativity, Einstein theorized that light could be bent by gravity, an effect known as "gravitational lensing". Gravitational forces, to a large extent resulting from dark matter, cause observed galaxies to form "arcs".
In 1991, the Cosmic Background Explorer satellite confirmed the basic prediction of the Big Bang- a universe filled with black body radiation. The "sky" is not uniform- it shows the seeds for structure, evidence for dark matter. Tiny early fluctuations grew with time as the universe "evolved".
In the 1990's, evidence suggesting an increase in the pace of the expansion of the universe led to the theory that dark energy permeates the universe.
While the basic model of cosmology seems to work, big questions remain unanswered- what is the origin of the perturbations which gave rise to the structure of the universe? What is dark matter? What is dark energy? In addition, there are Modified Newtonian Dynamics, or MOND, theories which posit alternatives to the dark matter/dark energy model, but these are not commonly accepted.
In the Q&A, Dr Ostriker discussed several characteristics of dark matter. Dark matter is not observable in the electromagnetic spectrum, merely by its gravitational effects. While dark matter may "collide", it is unknown whether gamma rays would result from such collisions- any ideas about the nature of dark matter are still theoretical. Closer to stars, baryonic matter is more common- dark matter is less dense. Dr Ostriker opined, "There may be one microgram of dark matter in this room". I looked, but the closest I came to finding it was a pint of Guinness.
At the end of the Q&A session, Dr Ostriker uttered one of the best lines I've ever heard in my life... upon answering the last question he said, "Shouldn't we stop this and start drinking?"
Way ahead of you, good doctor, way ahead of you!
All told, it was another top-notch lecture presented by the Secret Science Club.
Subcutaneous rock forms
Tadej Slabe - (28/2,1999)
Subcutaneous rock forms occur on karst surfaces covered with sediment or soil. They are the consequence of water running along the contact between rock and soil, the percolation of water through the soil, and the inflow of water to the surface of the soil surrounding the rock. Subcutaneous rock forms are often important traces of the development of karst surfaces. This article attempts to distinguish typical karst subcutaneous rock features and to propose a new standardization for them.
Marcellin Pierre Eugène Berthelot:
Marcellin Berthelot was a French chemist.
Born: October 27, 1827 in Paris, France
Died: March 18, 1907 in Paris, France
Claim to Fame:
Berthelot was a French chemist who believed all chemical reactions depended on the action of physical forces that could be measured. He was also partly responsible for the end of the vitalism theory of organic chemistry. It was generally believed that organic compounds could only be formed from other organic sources and required some 'vital spark'. He synthesized hydrocarbons, natural fats and sugars from inorganic sources to disprove this theory.
He was responsible for the Thomsen-Berthelot principle in thermochemistry, which postulated that chemical changes produce heat and that the change generating the most heat is the one that will occur. This theory was later modified by Helmholtz to consider not just the heat but the free energy of the reaction.
Sub-annually resolved ice core chemistry data from various sites on the Antarctic Ice Sheet were obtained from 1999 to 2008 during the US International Trans-Antarctic Scientific Expedition (US ITASE) deployments. Researchers conducted experiments approximately every 100-300 km looking for clues representing climatic conditions over the past 200-1000+ years. Ice cores, obtained for the ... glaciochemical component of the US ITASE research, were analyzed for soluble major ion content and in some cases trace elements. At each site, a ~3-inch diameter ice core was drilled to depths as great as 120 m. Surface snow samples were collected every ~10-40 km. High-resolution chemical analysis (up to ~75 measurements per meter) was used to define each core-chemistry year based on peaks in Na+, Ca2+, Mg2+, K+, NH4+, Cl-, NO3-, SO42-, CH3SO3- (methylsulfonate), and in some cases trace elements. Extreme events such as volcanic eruptions provide absolute age horizons within each core that are easily identified in chemical profiles. Our chemical analysis is also useful for quantifying anthropogenic impact, biogeochemical cycling, and for reconstructing past atmospheric circulation patterns.
Let us consider another strategy to deal with our diameter problem. Let us try to associate other graphs to our family of sets.
Recall that we consider a family of subsets of size of the set .
Let us now associate more general graphs to as follows: For an integer define as follows: The vertices of are simply the sets in . Two vertices and are adjacent if . Our original problem dealt with the case . Thus, . Barnette's proof presented in the previous part refers to and to paths in this graph.
As before for a subset let denote the subfamily of all subsets of which contain . Of course, the smaller is the more edges you have in . It is easy to see that assuming that is connected for every for which is not empty already implies our condition that is connected for every for which is not empty.
Let be the maximum diameter of in terms of and , for all families of -subsets of satisfying our connectivity relations.
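The post's inline notation was lost in extraction, but the objects themselves are easy to experiment with. The sketch below (my own; the threshold parameter `t` stands in for the stripped symbol) builds the graph on a family of sets, joining two sets when their intersection has size at least `t`, and computes the diameter by breadth-first search:

```python
from itertools import combinations
from collections import deque

def diameter(family, t):
    """Diameter of the graph whose vertices are the sets in `family`,
    with two sets adjacent when they share at least t elements.
    Returns infinity if the graph is disconnected."""
    fam = [frozenset(s) for s in family]
    adj = {i: [j for j in range(len(fam))
               if i != j and len(fam[i] & fam[j]) >= t]
           for i in range(len(fam))}
    best = 0
    for src in range(len(fam)):
        dist = {src: 0}
        q = deque([src])
        while q:                       # plain BFS from src
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        if len(dist) < len(fam):
            return float("inf")        # disconnected
        best = max(best, max(dist.values()))
    return best
```

For instance, on the family of all 2-subsets of a 4-element set, requiring one common element gives diameter 2 (disjoint pairs are joined through a common neighbor), while requiring two common elements disconnects the graph.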
Here is a simple claim:
Can you prove it? Can you use it?
A small meteorite named ALH84001 that was found in the mid 1980's in Antarctica was analyzed carefully years later; surprisingly chemical tests indicated evidence for possible past life on Mars. With such an announcement pending, the researchers had all they could do to prevent a gushing leak of the sensational news.
Two past missions to Mars had indicated there was no life. The 1965 Mariner Orbiter found that there were dry stream beds but no canals, thought to exist by earlier scientists. Canals suggested engineering works by a race of intelligent Martians. In 1976 the Viking Lander's search for life came up empty. The Vikings found positive chemical reactions that weren't life, but scientists didn't know what they were.
The Friday before Easter 1996, scientists sent their findings to Science. After reviewing the findings for three months, the editors at Science agreed to publish them. Daniel Goldin, an administrator for NASA, found himself in the Oval Office at 8:30 in the morning, briefing President Clinton on the findings. They tried to keep the news a secret from the press while they prepared a conference for scientists to attend. In early August, news began to leak, causing the conference to be moved up a week. The announcement stirred up a sensation all over the world. Bookies in London shortened the odds on life elsewhere in the universe from 500-1 down to 25-1. Scientists said meteorite ALH84001 could be considered to be like the Rosetta Stone, which revealed the mysteries of ancient cultures.
Photo. Two close-up views of ALH84001. Courtesy of NASA.
Ralph Harvey, a scientist who headed the meteorite missions in Antarctica, is somewhat of a skeptic. Harvey thinks there is a chance that the carbonates formed when there was too much heat for life to exist. Maybe we will find more evidence with the Pathfinder and Surveyor missions to Mars. Carl Sagan's comment on the possibility of life on Mars was quite succinct: "Extraordinary claims need extraordinary evidence."
Mission to Mars. An educational site created for the ThinkQuest contest.
In radio astronomy, the flux unit or jansky (symbol Jy) is a non-SI unit of spectral electromagnetic flux density equivalent to 10^-26 watts per square metre per hertz. The flux density or monochromatic flux, S_v, of a source is the integral of the spectral radiance, B_v, over the source solid angle. The unit is named after pioneering US radio astronomer Karl Guthe Jansky, and is defined as 1 Jy = 10^-26 W m^-2 Hz^-1. The flux density in Jy can be converted to a magnitude basis, for suitable assumptions about the spectrum.
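One common such conversion is to the AB magnitude system, whose zero point is 3631 Jy; the choice of the AB system here is my own illustration, since the text does not say which magnitude basis it means:

```python
import math

AB_ZERO_POINT_JY = 3631.0   # flux density of a zeroth-magnitude AB source

def ab_magnitude(flux_density_jy):
    """AB magnitude corresponding to a flux density in janskys."""
    return -2.5 * math.log10(flux_density_jy / AB_ZERO_POINT_JY)
```

A 1 Jy source comes out near magnitude 8.9, and brighter sources (more janskys) get numerically smaller magnitudes.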
If a multimeter has an internal resistance (1 M$\Omega$, say) when measuring voltages, how do I take that into account in my error?
By treating the multimeter as a resistor in parallel with the rest of the circuit, connected at the contact points. When you measure the potential difference across those points, you can compute the potential difference in the absence of the multimeter by comparing the resistance of the circuit to the resistance of the circuit-in-parallel-with-multimeter. Of course, if you have a constant voltage power supply, this will make no difference. If you have a constant current power supply, then this might be a worthwhile exercise. However, unless you know the effective resistance of the circuit very precisely, or the circuit has a high effective resistance (> 10k$\Omega$), the effect of the multimeter in parallel is likely to be much smaller than the uncertainty in the resistance of the circuit. If you are dealing with a circuit with a high effective resistance, then you should find a multimeter suited to dealing with these systems (one with an even larger resistance).
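The correction described above amounts to undoing a voltage divider. A sketch, modeling the measured network as a Thevenin source (the function and parameter names are my own):

```python
def unloaded_voltage(v_measured, r_source, r_meter=1e6):
    """Recover the open-circuit (unloaded) voltage from a reading taken
    with a meter of input resistance r_meter across a Thevenin source of
    internal resistance r_source.  The meter and source form a divider:
        v_measured = v_open * r_meter / (r_source + r_meter)
    so we invert that relation."""
    return v_measured * (r_source + r_meter) / r_meter
```

With a stiff (low-impedance) source the correction vanishes; when the source resistance rivals the meter's 1 M$\Omega$ input, the reading can be off by a factor of two.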
The monitor executes one thread at a time. Assuming you have threads T1-T10, nine are BLOCKED and one is RUNNABLE. Every once in a while, the monitor picks a new thread to run. When that happens, the chosen/current thread, say T1, goes from RUNNABLE to BLOCKED. Then another thread, say T2, goes from BLOCKED to RUNNABLE, becoming the current thread.
When one of the threads needs some information to be made available by another thread, you use wait(). In that case, the thread will be flagged as WAITING until it is notify()ed. So, a thread that is waiting will not be executed by the monitor until then. An example would be: wait until there are boxes to be unloaded; the guy loading boxes will notify me when that happens.
In other words, both BLOCKED and WAITING are statuses of inactive threads, but a WAITING thread cannot become RUNNABLE without going through BLOCKED first: it has to be notify()ed and then compete for the monitor again. WAITING threads "don't want" to become active, whereas BLOCKED threads "want" to, but can't, because it isn't their turn.
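The box-unloading example maps directly onto a condition variable. The same waiting-until-notified behavior can be sketched with Python's `threading.Condition` (the Java original would use `wait()`/`notify()` on the object's monitor; names below are mine):

```python
import threading
from collections import deque

boxes = deque()
cond = threading.Condition()   # one monitor: a lock plus wait/notify

def unload_box():
    with cond:
        while not boxes:       # WAITING: inactive until notified
            cond.wait()
        return boxes.popleft()

def load_box(item):
    with cond:
        boxes.append(item)
        cond.notify()          # wake one waiting unloader
```

An unloader that arrives before any boxes exist simply sleeps; the loader's `notify()` wakes it, and it re-checks the condition before proceeding.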
Now, these ideas have an obvious role to play in physics, but it's the same role they play in any science which uses parameter estimation. Knowing something about the limits of statistical inference is often useful --- constant readers may recall that the Fisher information matrix plays a leading role in Optimum Experimental Designs --- but this is, so to speak, external to the content of the science, whether that be physics, geology or mycology. What Frieden claims here, and in a series of related articles published in the physics journals, is that Fisher information is actually connected to physics in the most profound way, that you can derive physical laws by manipulating Fisher information.
In Frieden's vision, physics is entirely about measurements, and he supposes that the measurements must be like trying to estimate a parameter of a distribution. When we think we are measuring (say) the space-time coordinates of an electron, what we record is the true coordinates plus noise, and from this we try to get the true position, which is the underlying unknown parameter. (On this basis he replaces the conventional Fisher formula with <(dp(x)/dx 1/p(x))^2>.) The way Frieden initially states his program is to ask what dependence of the distribution on these parameters will maximize the Fisher information, subject to certain constraints. The solution to such a problem is generally given by a second-order differential equation --- and physical laws are generally second-order differential equations.
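As a concrete check on what this quantity is, here is a numerical sketch (my own illustration, not Frieden's) of the location-family Fisher information, the integral of (p'(x))^2 / p(x), for a standard normal density, where it should come out equal to 1/sigma^2 = 1:

```python
import math

def location_fisher_info(p, dp, xs):
    """Trapezoid-rule estimate of the Fisher information of a location
    family: the integral of (p'(x))^2 / p(x) over the grid xs."""
    vals = [dp(x) ** 2 / p(x) for x in xs]
    return sum(0.5 * (vals[i] + vals[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

# Standard normal density and its derivative.
p = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
dp = lambda x: -x * p(x)
grid = [i / 100.0 for i in range(-800, 801)]
```

Truncating the integral at +/-8 standard deviations loses only a negligible tail, so the numerical value lands very close to the exact answer of 1.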
I need to say something here about the orthodox ideas about physical dynamics --- "mechanics," in the jargon. In classical and quantum mechanics, equations of motion are derived from what is called the principle of least action. (The "least" is a bit misleading, as we'll see.) "Action" is a term of our art, meaning a quantity having dimensions of energy integrated over time. The energy in question here is what is called the "Lagrangian," generally equal to the kinetic energy of the system minus its potential energy. In the simplest case, of a single particle moving under the influence of external forces, we say that we know the particle will, at one time, be at a certain position and moving with a certain velocity, and at a second, fixed time will have another position and velocity. We then ask what trajectory, connecting these two points, will minimize the action, and claim the particle will follow that trajectory. If we have multiple particles, fields, and interactions between them, the math becomes more complicated, but the principle does not.
A number of semi-technical points ought to be brought out here. The first is that we don't really look for the trajectory of least action; instead we look for the one where slight changes in the trajectory have the least effect on the total action. The favored trajectory is the one of "stationary variation" in the action, whether this be a minimum, a maximum, an inflection point or a saddle (there are textbook cases of all of these). (Hence this is a "variational problem," and analyzed by the "calculus of variations"; hence also why "principle of least action" is a misleading name.) The solution to such a variational problem is given by an equation involving various derivatives of the Lagrangian. Second, we know (since Newton) that physically realistic trajectories are solutions to second-order differential equations. (In effect, this is what Newton's first two laws of motion tell us.) To ensure that the trajectory we get from the variational problem is also a second-order differential equation, we make sure that the Lagrangian includes the square of a first derivative --- which is to say, the kinetic energy. Third, the potential energy term in the Lagrangian, while essential to deriving any behavior more interesting than uniform motion along straight lines, is itself arrived at through tradition, analogy and guess-work. Finally, we require that the Lagrangian as a whole have certain symmetry properties --- that it be invariant under certain kinds of changes in our coordinate system or the physical system, which we figure shouldn't make any difference. Noether's theorem then connects the Lagrangian's symmetries to the physical conservation laws.
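The variational recipe can also be exercised numerically. This toy (entirely my own construction) discretizes the action for one particle on a line, fixes the endpoints, and gradient-descends the interior points; with zero potential the stationary trajectory should relax to uniform motion along a straight line:

```python
def minimize_action(x0, x1, n=20, m=1.0, dt=0.05,
                    dV=lambda x: 0.0, steps=5000, lr=0.01):
    """Gradient-descend the discretized action
        S = sum_i [ m/2 * ((x_{i+1} - x_i)/dt)^2 - V(x_i) ] * dt
    over the interior points of a trajectory with fixed endpoints.
    dV is the derivative of the potential; dV = 0 is a free particle."""
    xs = [x0 + (x1 - x0) * i / n for i in range(n + 1)]
    for i in range(1, n):              # kick the trajectory off the answer
        xs[i] += 0.5 if i % 2 else -0.5
    for _ in range(steps):
        for i in range(1, n):
            # dS/dx_i for the discrete action above
            g = -m * (xs[i + 1] - 2.0 * xs[i] + xs[i - 1]) / dt - dV(xs[i]) * dt
            xs[i] -= lr * g
    return xs
```

For the free particle this recovers the straight line between the endpoints, the simplest instance of the stationary-action trajectories described above (here the stationary point happens to be a minimum).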
We are now in a position to begin to see why Frieden's program is nowhere near as impressive as it first sounds. He doesn't really maximize Fisher information; he simply requires that its variation be stationary. Worse yet, he is admirably candid about the fact that simply doing this doesn't give us any very interesting equation of motion. To get that, he subtracts from the Fisher information a new quantity of his own devising, the "bound information," and requires that the difference between these two, which he calls the "physical information," have stationary variation. Now, while he might have plausibly argued that the "correct" physical variables are the most informative ones, I simply cannot see any reason why his physical information should be maximized. (Note however that unlike a Lagrangian, Fisher information is generally not invariant under change of coordinates, e.g. from Cartesian to spherical, so I'd have liked some reassurance on this point, which is not forthcoming; Frieden evidently believes that Nature thinks in Cartesian coordinates.) He tries to justify his "extremal physical information principle" (pp. 79--82) by saying that physicists are in a non-cooperative game with Nature, trying to seize as much data as we can from Her, and the upshot of this is that physical information should have stationary variation. I couldn't say why he thinks this should convince anyone not raised on the lumpenfeminist idea that modern science is a way of raping and torturing Nature.
In any case, adding bound information (or rather, subtracting it off) reduces the scheme to vacuity. Frieden pulls these terms from out of, to put it politely, the air, and they seem to have no independent significance whatsoever. They are simply whatever he needs to get the equation he wants at the end of the variational problem, subject only to the (really rather mild) constraint that they have the right symmetry properties.
In short, if there is any superiority to dealing with Frieden's physical information rather than with the action, he hasn't demonstrated it. Both get the necessary second-order differential equations by sticking in a squared derivative term --- the kinetic energy for Lagrangians, Fisher information for Frieden. Both involve a more or less ad hoc second term, respectively the potential energy and the bound information, to get the right sort of dynamics. Both do not actually guarantee an extremum, merely stationary variation. They may well be equivalent, in the sense that for every physically important Lagrangian, Frieden can come up with a bound information term which delivers the same equations of motion. On what basis, then, could we choose between the schemes?
The first point can, I dare say, be dismissed at once. The prospect of solipsism, like that of suicide, may help us over some of life's rough patches, but it's hardly something which can be established in a book on mathematical physics! But maybe we shouldn't hold this against Frieden's scheme, since his formalism doesn't employ observers, reality-creating or otherwise.
As to the second point, that physics should be a science of measurements, I have two objections. The first is that lots of physics deals with things which we don't measure (velocity at most points in a fluid, for instance), or even which we cannot measure (e.g., the interior of a star a billion years ago). Measurements give us our evidence about these matters, but they don't constitute them. The second objection is that what Frieden, following the rest of our profession, calls a "measurement" is, on the basis of our own best theories, really an immensely complicated process. If he really wants to look at data, at what's "given" to us, he shouldn't be thinking about the position of an electron at all, but about gauges, pointers, LED displays, and so on. Even that is being generous: transient colored blobs in his visual field is more like it.
Finally, I have no objection to the notion that dynamics are fundamentally stochastic. But most physicists have accepted this since the 1920s (if not earlier), so this is hardly a selling-point for Frieden's particular scheme. I do note that he has nothing to say about why really fundamental physical theory, in quantum mechanics, should contain a very odd sort of stochasticity, where instead of decent, real-valued probabilities, we have perverse, complex-valued probability amplitudes, whereas that is the sort of thing I'd expect a real unification with statistics would worry over.
To sum up: Frieden's scheme is at best mathematically equivalent to orthodoxy; it adds nothing empirical; places fundamental and useful concepts in doubt; does nothing to unify physics either internally or with statistics; and it is associated with some really bad metaphysics, though that last perhaps reflects more on Frieden than on the scheme itself. I see absolutely no reason to prefer this scheme to conventional mechanics, rather the reverse. This is at best an extended mathematical curiosity.
To follow this book you need to know the variational formulation of classical and quantum mechanics at, say, the level of Goldstein, and some prior acquaintance with estimation theory would help. Even then, your time would be better spent reading Greg Egan's science fiction, since he deals with many of the same themes, only more convincingly and with far greater sophistication. | <urn:uuid:de38ff4f-58a5-420b-bb15-dce2cdf881a2> | 2.859375 | 1,971 | Nonfiction Writing | Science & Tech. | 35.300605 |
In my new book, The Great Global Warming Blunder: How Mother Nature Fooled the World’s Top Climate Scientists, I show the results of experiments with a simple climate model that runs in an Excel spreadsheet. The model is meant to illustrate how natural monthly-to-yearly variability in global (a) cloud cover and (b) surface evaporation can affect our satellite observations of (1) temperature and (2) total radiative flux.
Those last two measurements are what are traditionally used to determine the temperature “sensitivity” of our climate system. By specifying that sensitivity (with a total feedback parameter) in the model, one can see how an analysis of simulated satellite data will yield observations that routinely suggest a more sensitive climate system (lower feedback parameter) than was actually specified in the model run.
And if our climate system generates the illusion that it is sensitive, climate modelers will develop models that are also sensitive, and the more sensitive the climate model, the more global warming it will predict from adding greenhouse gases to the atmosphere.
Here is the model to download. It is currently set up to do a 100 year simulation at monthly time resolution. Here are two example plots from the model, run with a 50 meter deep ocean and a feedback parameter of 3 Watts per sq. meter per deg. C…but the output of the model suggests a feedback of 2.08, rather than 3:
The 4 basic inputs to the model are in large blue font, all of which are adjustable. These include (in no particular order):
1) Bulk heat capacity of the system, specified as an equivalent ocean water depth (nominally 50 meters deep).
2) Net feedback parameter (controlling the model’s temperature sensitivity to energy imbalances)
3) Radiative forcing (e.g. from natural variations in cloud cover)
4) Non-radiative forcing (from fluctuations in convective heat transfer between the surface and atmosphere)
Those last 2 heat flux forcings are driven by a random number generator. The radiative forcing also has a low-pass filter applied to the monthly random numbers, which seems to mimic the satellite observations pretty well.
In addition to these 4 inputs, one can also “turn on” carbon dioxide forcing, which will lead to a long-term warming trend in the model at a rate that depends mainly upon the specified feedback parameter and ocean depth.
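A minimal sketch of a model of this kind (not the spreadsheet itself; the variable names, filter window, and regression diagnostic below are illustrative assumptions, not the original code):

```python
import numpy as np

def run_model(years=100, depth_m=50.0, lam=3.0,
              rad_sd=1.0, nonrad_sd=1.0, co2_forcing=0.0, seed=0):
    """Monthly energy-balance model: Cp * dT/dt = (forcing) - lam * T.

    lam is the specified net feedback parameter (W/m^2/K); depth_m sets the
    bulk heat capacity as an equivalent ocean depth.
    """
    rng = np.random.default_rng(seed)
    n = years * 12
    dt = 30 * 86400.0                 # seconds per model month (approximate)
    cp = 4.18e6 * depth_m             # heat capacity of the water column, J/(m^2 K)
    # Random radiative forcing (e.g. cloud variations), low-pass filtered:
    rad = np.convolve(rng.normal(0, rad_sd, n), np.ones(12) / 12, mode="same")
    # Random non-radiative forcing (surface-atmosphere heat exchange):
    nonrad = rng.normal(0, nonrad_sd, n)
    T = np.zeros(n)                   # temperature anomaly, K
    net_flux = np.zeros(n)            # radiative imbalance a satellite would see
    for i in range(1, n):
        net_flux[i] = rad[i] + co2_forcing - lam * T[i - 1]
        T[i] = T[i - 1] + (net_flux[i] + nonrad[i]) * dt / cp
    return T, net_flux

def apparent_feedback(T, net_flux):
    """Diagnose feedback the way satellite analyses do: regress the
    outgoing anomaly (-net_flux) against temperature."""
    return np.polyfit(T, -net_flux, 1)[0]

T, F = run_model(lam=3.0)
print(round(apparent_feedback(T, F), 2))  # typically below the specified 3.0
```

Because the random radiative forcing both drives temperature and appears in the measured flux, the regression slope tends to understate the specified feedback parameter, which is the effect described above.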
(1) After running the model many times, eventually the memory cache used by Excel gets filled up (I think), and garbage numbers start to appear. Just close out Excel and re-open it to fix this.
(2) A new model run is automatically made any time ANY entry in the spreadsheet is changed, including when you do a file “Save”. So if you want to show someone the results of a specific model run, you are going to have to copy and “special paste” the values somewhere else, and then make new graphs from those.
In this lesson, we will see some examples on the
perimeter of a parallelogram...
About This Lesson
In this lesson, we will:
See an example on finding the perimeter of a parallelogram.
See another example on finding the
side length of a parallelogram.
The study tips and math video
below will explain more.
A parallelogram has two pairs of parallel sides and its opposite sides are equal
in length. These properties are shown on the right.
Now, if the parallelogram has sides of length a and b, the perimeter
of the parallelogram, P, will be:
P = 2(a+b)
This formula is similar to the formula for the perimeter of a rectangle. Rather than repeating the same explanation, the math video below will show some examples on the perimeter of a parallelogram without using the formula: one example on finding the perimeter of a parallelogram, and another on finding the side length of a parallelogram.
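As a quick check of the formula (a sketch added here, not part of the video lesson):

```python
def parallelogram_perimeter(a, b):
    """Perimeter of a parallelogram with side lengths a and b: P = 2(a + b)."""
    return 2 * (a + b)

def other_side(p, a):
    """Given the perimeter p and one side a, recover the other side length."""
    return p / 2 - a

print(parallelogram_perimeter(3, 5))  # → 16
print(other_side(16, 3))              # → 5.0
```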
Site-Search and Q&A Library
Please feel free to visit the Q&A Library.
You can read the Q&As listed in any of the available categories such as Algebra,
Graphs, Exponents and more. Also, you can submit math question,
share or give comments there. | <urn:uuid:b93a791a-65ae-48f5-b399-8eaa8e40a3d2> | 4.15625 | 268 | Tutorial | Science & Tech. | 52.041887 |
10.23.12 - The sun emitted a significant solar flare on Oct. 22, 2012, peaking at 11:17 p.m. EDT.
10.22.12 - Newly named sunspot AR1598 has released an M5 class solar flare. This is the same region that released an M9 flare on Oct. 20, 2012.
10.20.12 - The sun emitted a significant solar flare, an M9, peaking at 2:14 p.m. EDT on Oct. 20, 2012. The associated radio blackout, an R2, has subsided, reports NOAA.
10.08.12 - The CME release on Oct. 4, 2012 has generated a G2-level geomagnetic storm on Earth resulting in aurora in upper latitudes.
10.01.12 - The CME launched by the sun on Sept. 27 resulted in aurora dipping into the continental U.S. as far south as Maryland and Ohio on Sept. 30, 2012.
09.04.12 - On September 1, 2012, a long, whip-like filament erupted on the sun. The eruption, called a coronal mass ejection, caused aurora near Earth on September 3.
08.18.12 - An active region, just beginning to rotate into view, released an M5.6 class solar flare last night at 9:01pm EDT.
08.13.12 - On July 23, 2012, a massive cloud of solar material erupted off the sun. NASA Goddard scientists clocked the giant cloud, known as a coronal mass ejection, or CME, at speeds between 1,800 and 2,200 miles per second.
08.10.12 - SDO sees a very long, whip-like solar filament extending over half a million miles in a long arc above the sun’s surface. Part of the filament seems to break away, but its basic length and shape seem to have remained mostly intact.
07.30.12 - The sun emitted a mid-level flare, peaking at 4:55 PM EDT on July 28, 2012. This flare is classified as a M6.2 flare. M-class flares are the weakest flares that can still cause some space weather effects near Earth.
07.19.12 - The sun emitted a mid-level solar flare (M7.7) on July 19, 2012, beginning at 1:13 AM EDT and peaking at 1:58 AM.
07.15.12 - The arrival of the CME associated with the July 12, 2012 X1.4 class flare, resulted in a geomagnetic storm that caused aurora to appear in lower latitudes than usual.
07.09.12 - As it turns away from Earth, AR1515 releases an M6.9 class solar flare.
07.07.12 - Anticipated for the past week, the sun finally releases an X1.1 class solar flare late on July 6.
07.05.12 - Today's M6.1 solar flare, originating from behemoth sunspot AR1515, is the twelfth M-class flare from that region in the last 3 days.
07.04.12 - Even the sun joins in America's Fourth of July celebration, with an M5.3 solar flare.
07.03.12 - On July 2, 2012, an M5.6 class solar flare erupted from the sun, peaking at 6:52 AM EDT. | <urn:uuid:734e9dc9-1a3d-430f-bfc2-8082af9435f3> | 3.265625 | 725 | Content Listing | Science & Tech. | 98.090034 |
Special cells having a hollow central cathode were immersed in liquid air for an extended period to insure that any gases, if present, were condensed on the outer alkali-metal-coated walls. The temperature of the cathode was controlled by a stream of evaporating liquid air, whereby all temperatures between +20 and −180°C could be attained, held constant, and measured. In these cells the variation of photoelectric current with temperature in sodium, potassium, and rubidium is continuous, without abrupt changes. The effect is relatively small for sodium, showing hardly at all for blue light or white light, but clearly for yellow light. The behavior of rubidium is similar to that previously reported for potassium.
In a second form of cell, potassium was collected in a deep pool. By slowly cooling the metal from the molten condition, smooth crystalline surfaces were obtained. With these annealed potassium surfaces, the variation of photoelectric current with temperature is represented by curves varying systematically in shape with the color of the light, and the effect is far greater than previously reported, amounting, for yellow light, to a variation of 10 to 15 times between room and liquid air temperature. When the surface is roughened curves of the previously reported type are obtained. Small pools give erratic effects, showing changes in opposite directions for different portions of the temperature range. It is concluded that the variation of photoelectric effect is intimately connected with the strains produced in the surface by expansion and contraction with temperature.
HERBERT E. IVES and A. L. JOHNSRUD, "THE INFLUENCE OF TEMPERATURE ON THE PHOTOELECTRIC EFFECT OF THE ALKALI METALS," J. Opt. Soc. Am. 11, 565-569 (1925) | <urn:uuid:40e5f193-b7b9-4b37-899d-db78cda5627f> | 2.90625 | 368 | Academic Writing | Science & Tech. | 38.579919 |
Top Speed: 107,000 mph
In 2003, the Galileo mission to study Jupiter and its moons had already been extended several years and the space probe was running out of propellant. Probes to distant planets use nuclear thermal generators, which keep them operational for decades, but eventually wear out, says Roger Launius, an expert on U.S. spaceflight. "Those are good for 30, 40 maybe 50 years, but they have a half-life," he says. "Over the course of a lengthy mission, you find you have less and less power to supply whatever systems are onboard." Instead of risking a crash into Europa, Jupiter's ice-covered moon, the Galileo probe took a controlled dive into the gas giant. Jupiter has a gravitational pull that is 2.5 times the gravitational pull of Earth, and as Galileo entered its atmosphere, it accelerated to a speed of 107,000 mph. At that speed, a trip from the Earth to the Moon would take 2.2 hours. | <urn:uuid:9cc0a7c0-577a-4315-8d8a-cfe159381201> | 3.46875 | 202 | Listicle | Science & Tech. | 65.479091 |
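The 2.2-hour figure checks out with simple arithmetic (the average Earth-Moon distance used below is our assumption, not from the article):

```python
speed_mph = 107_000           # Galileo's entry speed into Jupiter's atmosphere
earth_moon_miles = 238_855    # average Earth-Moon distance (assumed here)
print(round(earth_moon_miles / speed_mph, 1))  # → 2.2 (hours)
```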
How the MBDI Compares
Rank of drought severity
To compare drought severity for a specified time frame, the Moisture Balance Drought Index (MBDI) uses a ranking system based on where that time frame falls in the historical line-up since 1895.
The driest years will fall into the lowest ranks, such as the bottom quarter of values – the 25th percentile or below. The wettest years will fall within the upper ranks – the 75th percentile or above. Because the historic record contains more than 100 years, the driest years can have percentile ranks below 1.
The table below uses values for Payson, Arizona, for the months of June during the decade from 1997-2006 to provide a simplified illustration about how the MBDI ranking system works. In actuality, the record goes back to 1895, so even assessments of conditions from a single decade will be ranked against the full record.
Using real climate data from Payson, Arizona, this hypothetical example shows how the MBDI ranking system would work at the scale of a decade.
Credit: Table design by Jorge Arteaga
In this example, June of 1997 had the lowest value for P – PE in the record, while June of 1998 had the highest value. As a result, June of 1997 was ranked in the lowest percentile (10%) while June of 1998 was ranked in the highest percentile (100%). In practice, ranks can fall below 1 because there are more than 100 years in the record.
Rankings can be undertaken at a variety of scales, from one month – as in this case – to four years. This use of multiple time scales allows for comparisons of the multiple dimensions of drought. | <urn:uuid:aae167ab-fa1e-4574-a7d7-c955a8231b4b> | 3.0625 | 351 | Knowledge Article | Science & Tech. | 49.513669 |
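The ranking arithmetic in the table can be sketched in a few lines of Python (an illustration of the percentile rule only, not the actual MBDI code; ties are ignored for simplicity):

```python
def percentile_ranks(values):
    """Rank each value against the full record: the i-th smallest of n
    values (1 = driest) gets the percentile 100 * i / n."""
    order = sorted(values)
    n = len(values)
    return {v: 100.0 * (order.index(v) + 1) / n for v in values}

# Ten hypothetical June P - PE values, as in a one-decade record:
ranks = percentile_ranks([-9.1, 4.3, -2.0, 0.5, -6.7, 1.1, 3.0, -0.4, 2.2, -3.5])
print(ranks[-9.1], ranks[4.3])  # → 10.0 100.0 (driest and wettest Junes)
```

With a record of more than 100 years, the driest single year gets a rank of 100/n, which falls below 1.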
25. The main reason why certain lipids will preferentially form micelles rather than bilayers is that a. they are amphipathic. b. the diameter of their polar head groups is larger than that of their hydrophobic tails. c. their tails are longer than their heads. d. they are less soluble in an aqueous environment than lipids which do form bilayers. | <urn:uuid:8c0841e7-16fb-423a-8628-2d0a8e98023a> | 3.046875 | 83 | Q&A Forum | Science & Tech. | 73.99806 |
...This is a project to be proud of at so many levels. It will attract tourists. It will set a positive precedent, nationally and globally, on environmental policy and action.
-- Joy Lapseritis, Falmouth resident & marine biologist
See all stories in this topic...
Two new important studies on climate change
Monday, January 14, 2013
Reuters is reporting on two significant new reports on climate change. In 'Impact of climate change hitting home, U.S. report finds', Reuters reports how climate change is already significantly impacting every region of the United States, according to a draft 1,146-page report of the U.S. National Climate Assessment, commissioned by the U.S. Department of Commerce. Reuters' article 'Emissions limits could cut climate damage by two-thirds - study' reports on a new study published in the journal Nature Climate Change that is the first comprehensive assessment of the benefits of cutting greenhouse gas emissions, finding that up to two-thirds of the damage this century from climate change can be prevented if major steps are taken now to reduce emissions.
U.S. East Coast a "hot spot" for sea level rise: study
Tuesday, June 26, 2012
(Reuters) - Sea levels from Cape Hatteras to Cape Cod are rising at a faster pace than anywhere on Earth, making coastal cities and wetlands in this densely-populated U.S. corridor possibly more vulnerable to flooding and damage, researchers at the U.S. Geological Survey reported.
Note: Link to Reuters article
Report sees sharper sea rise from Arctic melt
Wednesday, May 04, 2011
The ice of Greenland and the rest of the Arctic is melting faster than expected and could help raise global sea levels by as much as 5 feet this century, dramatically higher than earlier projections, an authoritative international assessment says.
The findings "emphasize the need for greater urgency" in combating global warming, says the report of the Arctic Monitoring and Assessment Program (AMAP), the scientific arm of the eight-nation Arctic Council.
Click here to read this Associated Press article in the Cape Cod Times
Scientists Explore Impact Of Sea Level Rise On Falmouth’s Coast
Wednesday, September 30, 2009
Falmouth Enterprise article, run in its entirety, with permission.
Climate Change Seen as Threat to U.S. Security
Sunday, August 09, 2009
The changing global climate will pose profound strategic challenges to the United States in coming decades, raising the prospect of military intervention to deal with the effects of violent storms, drought, mass migration and pandemics, military and intelligence analysts say.
...If the United States does not lead the world in reducing fossil-fuel consumption and thus emissions of global warming gases, proponents of this view say, a series of global environmental, social, political and possibly military crises loom that the nation will urgently have to address.
Click here to read this article in the New York Times
Global Warming May Exceed Infections as Health Threat
Thursday, May 14, 2009
May 14 (Bloomberg) -- Global warming is the biggest public health threat of the 21st century, eclipsing infectious diseases, water shortages and poverty, a team of medical and climate-change researchers concluded.
The phenomenon will be felt first in the developing world, further burdening a population already in crisis from food shortages, said the report from University College London that was published today in The Lancet journal. The changing climate will also cause real and lasting damage to the Western world, affecting generations to come, said Anthony Costello, a pediatrician at University College London.
“Climate change is a health issue affecting billions of people, not just an environmental issue about polar bears and deforestation,” Costello said during a news conference. “We are setting up a world for our children and grandchildren that may be extremely frightening and turbulent.”
Click here to read this Bloomberg article
The seven eco-wonders of the world
Tuesday, April 14, 2009
The Seven Wonders of the World. The phrase, first used by ancient Greek historians like Herodotus, recalls a simpler time, when the human influence over nature was an awesome mystery. From the Hanging Gardens of Babylon to the Great Pyramid of Giza, the magnitude and mastery displayed by the ancient wonders were staggering to our ancestors. ...The editors decided we need a new list of wonders—one with an eco-enlightened perspective. So we searched the globe. We visited today’s most progressive, iconic structures. And we studied blueprints for projects now under construction that represent a better form of development for tomorrow. We insisted that these eco wonders connect our built and natural realms, cultivating hope for a brighter, greener, more innovative century. And lo and behold, Plenty’s Seven Eco Wonders of the World was born: present and future marvels (in no particular order) that prove our civilization can leave an eco-friendly imprint.
... 3. Nysted Havmøllepark, Denmark
The Eco Wonder: The Dutch are known for windmills, but it’s the Danish who now claim the world’s second-largest offshore wind farm, located in shallow but navigable waters 6 miles off the shore of the bucolic southern coastal town of Nysted. Gently rotating blades reach out more than 130 feet from their colossal 225-foot posts. Seen from the sky, the 72 sleek, marine-gray towers rise from the ocean in neat rows, marking out a parallelogram.
...Eco-touring Tips: Visitors can sail in the unrestricted waters around the Nysted wind farm using sailing directions found on the farm’s website. Frequent tours leave from Nysted, where sport fishing is another popular local pastime. On shore, the Rødsand area is well-liked for its dunes, seaside campsites, game reserves, and a European Union bird sanctuary—and don’t miss the Egholm Ulvecenter, a wolf park and museum.
Click here to read this Plenty Magazine article on Mother Nature Network
Warming trend seen depleting fishing stocks
Monday, September 22, 2008
Global warming may be hitting you right where it really hurts: the dinner plate.
New research done by a team of scientists at the Northeast Fisheries Science Center shows that rising temperatures in coastal waters along the East Coast could be lowering overall productivity in the North Atlantic food chain, slowing the growth of fish and shellfish. That means fewer fish in the ocean now than scientists anticipated, and lowered expectations for the size of fish populations once New England's severely depleted stocks are rebuilt to healthy levels in the future.
Note: Click here to read this article in the Cape Cod Times
Ocean Dead Zones Growing; May Be Linked to Warming
Friday, May 02, 2008
Smog exposure linked to premature deaths
Wednesday, April 23, 2008
Short-term exposure to smog, or ozone, is clearly linked to premature deaths that should be taken into account when measuring the health benefits of reducing air pollution, a National Academy of Sciences report concluded yesterday.
Click here to read this AP article in the Boston Globe | <urn:uuid:79ed5616-f198-4f89-8d1b-6692c8413f4a> | 2.9375 | 1,480 | Content Listing | Science & Tech. | 43.895507 |
As oceans absorb carbon dioxide, they become more acidic, diminishing the amount of available carbonate. Because carbonates are the building blocks that reefs and other marine species need to grow and maintain shells and body structures, ocean acidification poses a grave threat to these species.
Though often minuscule in size, shell-bearing, bottom-feeding creatures like pteropods serve as the base of the food chain for many economically important species such as salmon.
And acidification isn't just limited to the tropical areas. The Arctic's frigid waters are acidifying faster than anywhere else. Scientists estimate that by 2020, 10% of the Arctic is likely to reach corrosive levels. By the end of the century, the entire Arctic Ocean will be corrosively acidic.
Even if humans stop emitting all carbon dioxide today, the oceans will continue to acidify because we've already loaded so much into the atmosphere and oceans are a major carbon sink.
In the absence of a global agreement on reducing carbon emissions, many scientists are advocating for the use of existing laws to reduce environmental stressors and build up the ocean's resilience.
For more than a decade, Earthjustice has been spearheading efforts to use the law to increase the health and resilience of the oceans. (Learn how.)
Learn about four key environmental stressors battering the ocean ecosystem, and how Earthjustice is working hard to reverse course on an impending environmental catastrophe: Stemming The Tide: Ocean Stressors | <urn:uuid:307cf3f8-d8c4-47d8-8047-44dcb64ebf23> | 3.796875 | 297 | Knowledge Article | Science & Tech. | 24.524244 |
If you enjoy gardening, you know how much it can cost to keep your plants alive. You also know how frustrating it is when an unexpected frost destroys them.
Honeybees make propolis by collecting the secretions of trees and other plants where they live; thus the make-up of propolis varies depending on the surrounding plant life. Researchers have found the propolis of Brazilian honeybees to be particularly potent when it comes to protecting teeth.
There is a war going on between a certain tropical butterfly, Heliconius sara, and its only food source, the passion vine. This war involves chemical warfare. More precisely, the plant arms itself with cyanide bombs that are rather useful in getting rid of most insect pests.
Does an onion a day keep the doctor away? Find out on this Moment of Science. | <urn:uuid:bf3af3c5-c0e1-4b8e-a692-6767a5c92c15> | 3.09375 | 167 | Content Listing | Science & Tech. | 52.442839 |
1. Find the volume of the solid formed by rotating the region enclosed by
x=0, x=1, y=0, y=7+x^5 about the x-axis.
2. Find the volume formed by rotating the region enclosed by:
x=4y and y^3=x with y≥0 about the y-axis.
3. The region between the graphs of y=x^2 and y=4x is rotated around the line y=16.
The volume of the resulting solid is | <urn:uuid:d717dd9a-ee38-4189-aa6d-e8b2162633ec> | 3.484375 | 109 | Tutorial | Science & Tech. | 92.680965 |
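Problem 1 can be checked numerically with the disk method (this sketch is an addition, not part of the problem set):

```python
import math

def disk_volume(f, a, b, n=10_000):
    """Volume of revolution about the x-axis: V = pi * integral of f(x)^2,
    approximated with the midpoint rule."""
    h = (b - a) / n
    return math.pi * h * sum(f(a + (i + 0.5) * h) ** 2 for i in range(n))

# Problem 1: y = 7 + x^5 on [0, 1]; the exact value is 1697*pi/33.
V = disk_volume(lambda x: 7 + x**5, 0.0, 1.0)
print(round(V, 2))  # → 161.55
```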
Consider the equation 1/a + 1/b + 1/c = 1 where a, b and c are
natural numbers and 0 < a < b < c. Prove that there is
only one set of values which satisfy this equation.
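A brute-force search (an added computational check, not a proof) confirms that (2, 3, 6) is the only solution:

```python
from fractions import Fraction

def unit_fraction_triples(limit=60):
    """All (a, b, c) with 0 < a < b < c <= limit and 1/a + 1/b + 1/c = 1,
    checked with exact rational arithmetic; the limit is generous, since
    a < b < c forces a = 2 in any solution."""
    return [(a, b, c)
            for a in range(1, limit)
            for b in range(a + 1, limit)
            for c in range(b + 1, limit + 1)
            if Fraction(1, a) + Fraction(1, b) + Fraction(1, c) == 1]

print(unit_fraction_triples())  # → [(2, 3, 6)]
```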
Powers of numbers behave in surprising ways. Take a look at some of
these and try to explain why they are true.
I start with a red, a blue, a green and a yellow marble. I can
trade any of my marbles for three others, one of each colour. Can I
end up with exactly two marbles of each colour?
The picture illustrates the sum 1 + 2 + 3 + 4 = (4 x 5)/2. Prove the general formula for the sum of the first n natural numbers and the formula for the sum of the cubes of the first n natural. . . .
Choose a couple of the sequences. Try to picture how to make the next, and the next, and the next... Can you describe your reasoning?
If you know the sizes of the angles marked with coloured dots in
this diagram which angles can you find by calculation?
Three frogs hopped onto the table. A red frog on the left a green in the middle and a blue frog on the right. Then frogs started jumping randomly over any adjacent frog. Is it possible for them to. . . .
Liam's house has a staircase with 12 steps. He can go down the steps one at a time or two at time. In how many different ways can Liam go down the 12 steps?
Take any two digit number, for example 58. What do you have to do to reverse the order of the digits? Can you find a rule for reversing the order of digits for any two digit number?
The nth term of a sequence is given by the formula n^3 + 11n . Find
the first four terms of the sequence given by this formula and the
first term of the sequence which is bigger than one million. . . .
Take any whole number between 1 and 999, add the squares of the
digits to get a new number. Make some conjectures about what
happens in general.
Find some triples of whole numbers a, b and c such that a^2 + b^2 + c^2 is a multiple of 4. Is it necessarily the case that a, b and c must all be even? If so, can you explain why?
ABC is an equilateral triangle and P is a point in the interior of
the triangle. We know that AP = 3cm and BP = 4cm. Prove that CP
must be less than 10 cm.
Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make?
Carry out cyclic permutations of nine digit numbers containing the
digits from 1 to 9 (until you get back to the first number). Prove
that whatever number you choose, they will add to the same total.
How many pairs of numbers can you find that add up to a multiple of
11? Do you notice anything interesting about your results?
Can you fit Ls together to make larger versions of themselves?
You can work out the number someone else is thinking of as follows. Ask a friend to think of any natural number less than 100. Then ask them to tell you the remainders when this number is divided by. . . .
Make a set of numbers that use all the digits from 1 to 9, once and
once only. Add them up. The result is divisible by 9. Add each of
the digits in the new number. What is their sum? Now try some. . . .
Show that if three prime numbers, all greater than 3, form an
arithmetic progression then the common difference is divisible by
6. What if one of the terms is 3?
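The claim is easy to spot-check computationally before proving it (an added sketch using trial division over a small range):

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

primes = [p for p in range(5, 1000) if is_prime(p)]   # primes greater than 3
pset = set(primes)
# Every 3-term arithmetic progression of such primes has difference divisible by 6:
ok = all(d % 6 == 0
         for p in primes
         for d in range(1, 500)
         if p + d in pset and p + 2 * d in pset)
print(ok)  # → True
```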
Caroline and James pick sets of five numbers. Charlie chooses three of them that add together to make a multiple of three. Can they stop him?
In this third of five articles we prove that whatever whole number we start with for the Happy Number sequence we will always end up with some set of numbers being repeated over and over again.
This article extends the discussions in "Whole number dynamics I". Continuing the proof that, for all starting points, the Happy Number sequence goes into a loop or homes in on a fixed point.
The final of five articles which containe the proof of why the sequence introduced in article IV either reaches the fixed point 0 or the sequence enters a repeating cycle of four values.
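The fixed point and the repeating cycle described in these articles can be observed directly (an added sketch, not from the articles themselves):

```python
def next_term(n):
    """Sum of the squares of the decimal digits of n."""
    return sum(int(d) ** 2 for d in str(n))

def orbit_end(n):
    """Iterate until the sequence repeats; returns 1 for happy numbers,
    otherwise the first revisited member of the repeating cycle."""
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = next_term(n)
    return n

# Every start below 10,000 reaches 1 or the cycle 4, 16, 37, 58, 89, 145, 42, 20:
cycle = {4, 16, 37, 58, 89, 145, 42, 20}
assert all(orbit_end(n) == 1 or orbit_end(n) in cycle for n in range(1, 10_000))
print(orbit_end(7), orbit_end(8))  # → 1 89  (7 is happy, 8 is not)
```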
Explore the continued fraction: 2+3/(2+3/(2+3/2+...)) What do you
notice when successive terms are taken? What happens to the terms
if the fraction goes on indefinitely?
A huge wheel is rolling past your window. What do you see?
Is it true that any convex hexagon will tessellate if it has a pair
of opposite sides that are equal, and three adjacent angles that
add up to 360 degrees?
What happens to the perimeter of triangle ABC as the two smaller
circles change size and roll around inside the bigger circle?
Is it possible to rearrange the numbers 1,2......12 around a clock
face in such a way that every two numbers in adjacent positions
differ by any of 3, 4 or 5 hours?
Let a(n) be the number of ways of expressing the integer n as an
ordered sum of 1's and 2's. Let b(n) be the number of ways of
expressing n as an ordered sum of integers greater than 1. (i)
Calculate. . . .
Can you see how this picture illustrates the formula for the sum of
the first six cube numbers?
Did you know that factorial one hundred (written 100!) has 24 noughts when written in full, and that 1000! has 249 noughts? Convince yourself that the above is true. Perhaps your methodology will help you find the. . . .
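The counts of noughts quoted above can be checked by counting factors of 5, since every trailing zero of n! needs a factor 10 = 2 × 5 and factors of 2 are always more plentiful. A sketch (the helper name is ours):

```python
def trailing_zeros_of_factorial(n):
    """Count the trailing zeros (noughts) of n! by summing
    n // 5 + n // 25 + n // 125 + ... (multiples of each power of 5)."""
    count, power = 0, 5
    while power <= n:
        count += n // power
        power *= 5
    return count

# trailing_zeros_of_factorial(100) -> 24
# trailing_zeros_of_factorial(1000) -> 249
```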
The largest square which fits into a circle is ABCD and EFGH is a square with G and H on the line CD and E and F on the circumference of the circle. Show that AB = 5EF.
Similarly the largest. . . .
Find the largest integer which divides every member of the
following sequence: 1^5-1, 2^5-2, 3^5-3, ... n^5-n.
Use the numbers in the box below to make the base of a top-heavy
pyramid whose top number is 200.
Some puzzles requiring no knowledge of knot theory, just a careful
inspection of the patterns. A glimpse of the classification of
knots and a little about prime knots, crossing numbers and. . . .
Find the smallest positive integer N such that N/2 is a perfect
cube, N/3 is a perfect fifth power and N/5 is a perfect seventh power.
Can you cross each of the seven bridges that join the north and south of the river to the two islands, once and once only, without retracing your steps?
Can you find all the 4-ball shuffles?
Pick a square within a multiplication square and add the numbers on
each diagonal. What do you notice?
This article invites you to get familiar with a strategic game called "sprouts". The game is simple enough for younger children to understand, and has also provided experienced mathematicians with. . . .
Here are some examples of 'cons'; see if you can figure out where the trick is.
Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.
If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable.
Decide which of these diagrams are traversable.
Can you visualise whether these nets fold up into 3D shapes? Watch the videos each time to see if you were correct.
Can you rearrange the cards to make a series of correct
An introduction to how patterns can be deceiving, and what is and is not a proof.
Imagine we have four bags containing numbers from a sequence. What numbers can we make now?
Advent Calendar 2011 - a mathematical activity for each day during the run-up to Christmas.
What can you say about the angles on opposite vertices of any
cyclic quadrilateral? Working on the building blocks will give you
insights that may help you to explain what is special about them. | <urn:uuid:9f202c56-0395-4c44-8e26-c226df65bfb1> | 3.6875 | 1,809 | Content Listing | Science & Tech. | 77.676458 |
A podcast is an audio file published on the web. The files are usually downloaded onto computers or portable listening devices such as iPods or other players.
Read more about podcasting from webcontent.gov
HOST: Welcome to Diving Deeper where we interview National Ocean Service scientists on the ocean topics and information that are important to you! I’m your host Kate Nielsen.
Today’s question is….What are tides?
Tides are basically very long-period waves that move through the oceans in response to the forces exerted by the moon and the sun. Tides begin in the oceans and then move towards the coast where they appear as the regular rise and fall of the sea surface.
To help us dive a little deeper into this question, we will talk with Steve Gill on tides – what they are, what causes them, and the factors that affect them. Steve is the Senior Scientist with the Center for Operational Oceanographic Products and Services. Hi Steve, welcome to our show.
STEVE GILL: Hi Kate, thanks, it’s good to be here to talk about a topic that I have studied and worked on for over 33 years. Much of the practical application of tides is something that comes from on-the-job training and not learned in text books, so what the NOAA Tides and Currents program does is fairly unique.
HOST: Steve, first, what is the difference between a tide and a current?
STEVE GILL: Well, Kate, that’s a good question and typically the first thing I cover in many of my talks with students. The word “tides” is a general term used to define the alternating rise and fall in sea level with respect to the land. So, tides are characterized by water moving up and down during the day. Currents on the other hand move horizontally rather than vertically. Currents describe the horizontal motion of the water and are driven by several factors, one of those is tides; another is the wind. The horizontal movement of water that accompanies the rising and falling of the daily tides is called the tidal current.
HOST: Thanks Steve, so basically tides move up and down and currents move back and forth. What causes tides?
STEVE GILL: Gravity is one of the major forces that causes tides. Tides are caused by the gravitational pull of the moon and the sun. The gravitational forces are counterbalanced by the outward force of inertia from the moon revolving around the Earth and Earth revolving around the sun in their orbital paths. The combination of these two forces results in the tide-producing forces. So, ocean tides are a combination of lunar tides (lunar meaning the moon) and solar tides (solar meaning the sun).
HOST: So what does this mean exactly, the gravitational pull of the moon and the sun and the outward force of inertia?
STEVE GILL: Well, that can be a little confusing, but let me provide a little more background. In 1687, Sir Isaac Newton first found that ocean tides can be explained by the gravitational attraction of the sun and the moon on the oceans of the Earth. In simpler terms, Newton’s law of universal gravitation states that the greater the mass of the objects and the closer they are to each other, the greater the gravitational attraction between them. The outward inertial forces counterbalance gravity; that is why the moon doesn’t fall towards the Earth in its monthly orbit and why the Earth doesn’t fall towards the sun in its yearly orbit. Put together, these forces result in the distance between the two objects being much more critical than their masses in forming the tide-producing forces on the Earth.
HOST: Steve, can you break this down for us a little bit more?
STEVE GILL: Certainly. Our sun is about 27 million times more massive than our moon. Based on its mass, the sun’s gravitational attraction to the Earth is more than 177 times greater than that of the moon to the Earth. If tidal forces were based only on these masses, the sun should have a tide-generating force that is much greater than that of the moon. However, the sun is 390 times further from the Earth than the moon is, reducing its tide-generation force. Because of this, the sun’s tide-generation force is only about half that of the moon.
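As a quick back-of-the-envelope check of these figures: the tide-raising force scales as mass divided by distance cubed. The constants below are standard approximate values, not taken from the transcript (a Python sketch):

```python
# Tide-raising force scales as M / d**3, so the sun's enormous mass
# is outweighed by the cube of its much greater distance.
M_SUN  = 1.989e30   # kg
M_MOON = 7.342e22   # kg
D_SUN  = 1.496e11   # m, mean Earth-sun distance
D_MOON = 3.844e8    # m, mean Earth-moon distance

mass_ratio  = M_SUN / M_MOON             # ~2.7e7 ("27 million times")
dist_ratio  = D_SUN / D_MOON             # ~390 ("390 times further")
tidal_ratio = mass_ratio / dist_ratio**3 # ~0.46 ("about half that of the moon")
```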
HOST: OK, so mass and distance combined are needed to fully understand the gravitational pull of the moon and the sun. And the orbits of the Earth around the sun and the moon around the Earth are needed to understand inertia. Is there anything else we need to know about tide-producing forces?
STEVE GILL: Actually, yes. The tide-producing forces can be thought of in terms of a tide-generation force envelope surrounding the Earth. This tidal-force envelope has bulges; one facing the moon and one on the opposite side of the Earth from the moon. There is a similar tidal-force envelope for the sun. On the near side of the Earth, the gravitational forces are greater than the outward forces of inertia, resulting in a bulge in the envelope toward the moon, and similarly for the sun. On the far side of the Earth, the forces of inertia exceed the gravitational forces, resulting in an equal, but opposite-facing bulge. These forces are not actually strong enough to pull the ocean away from the surface of the Earth, however. The tides are caused by the oceans being moved back and forth in their basins as they rotate underneath the tide-generating force bulges. The back and forth motion results in a tide wave, not a tidal wave or tsunami that you sometimes hear about. This tide wave is set up in each of the ocean basins.
HOST: So, how is this back and forth motion of a tide wave different than a tsunami?
STEVE GILL: A tsunami is set up by an underwater seismic earthquake, and it’s actually a much higher frequency wave. Tsunamis happen with peaks and troughs every several minutes, as opposed to a tide wave, which repeats every 12 hours or every 24 hours. A tsunami is a very high-speed wave in the open ocean, and you really don’t see that wave until it reaches shore.
HOST: Steve, what is the difference between high tide and low tide?
STEVE GILL: When the highest part, or crest, of the tide wave reaches a particular location, high tide occurs; low tide is the lowest part of the tide wave or trough. The difference between high tide and low tide is called the tidal range. Most people experience this difference when they are walking along the beach and perhaps notice either more or less beach area for a place to stop, sit down, or rest. I know my children would have fun building a series of sand castles further and further up the beach throughout the day as the tide came in and washed them out. Tides on all coasts originate in the oceans and travel onto shore and up into the estuaries, bays, and rivers.
(FREQUENCY AND INTENSITY OF TIDES)
HOST: Thanks Steve, it sounds like there are a lot of factors that do affect tides. You just mentioned that these tidal bulges have a direct effect on tidal heights. Do these tidal bulges impact how often tides occur?
STEVE GILL: Kate, yes they do. Tidal bulges do play a role in the frequency of tides. Remember that there are lunar and solar tides because there are separate tidal-force bulges for both the moon and the sun. Most coastal areas experience two high tides and two low tides every lunar day. A lunar day is the time it takes for a specific site on Earth to rotate from an exact point under the moon to the same point under the moon the next day. Unlike a solar day which is 24 hours, a lunar day is 24 hours and 50 minutes because the moon revolves around the Earth in the same direction that the Earth rotates on its axis; so it takes the Earth an extra 50 minutes to catch up to the moon.
Because the Earth rotates through two opposite lunar tidal bulges every lunar day, most coastal areas experience these two highs and two low tides every 24 hours and 50 minutes. High tides occur approximately 12 hours and 25 minutes apart and it takes about six hours and 12 minutes for the water at the shore to go from a high to low, or from a low to high. The high and low tides tend to occur about 25 minutes later each calendar day because the lunar day is longer than our 24-hour clock day. However, there are a few areas at which the solar tide-producing forces dominate those of the moon. In those very few areas, the high and low tides tend to occur 12 hours apart and at around the same time each day.
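The 50-minute lag follows from simple arithmetic: in one solar day the moon advances about 1/29.5 of its orbit, so the Earth needs that extra fraction of a rotation to catch up. A sketch (the 29.53-day synodic month is a standard value, not from the transcript):

```python
SOLAR_DAY_H = 24.0
SYNODIC_MONTH_DAYS = 29.53   # days between successive new moons

# In one solar day the moon moves 1/29.53 of the way around its orbit,
# so a point on Earth needs that extra fraction of a rotation to return
# to the same position relative to the moon.
lunar_day_h = SOLAR_DAY_H / (1 - 1 / SYNODIC_MONTH_DAYS)

extra_min = (lunar_day_h - SOLAR_DAY_H) * 60   # ~50 minutes later each day
high_tide_spacing_h = lunar_day_h / 2          # ~12 h 25 min between high tides
```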
HOST: Steve, why does the difference between high and low tides vary more in some places than in others?
STEVE GILL: Well, Kate, if the Earth were a perfect sphere covered by one ocean without large continents, all areas on the planet would experience two equally proportioned high and low tides every day. However, these large areas of land divide the oceans into large ocean basins, each reacting to the tide-producing forces in their own way, depending upon their size and depth. This also brings up another important point on how tides can vary over time and by location. The lunar and solar tidal bulges are not always aligned with the plane of the Earth’s equator, and their alignment tracks the moon and the sun as they change their declinations. The declination can be thought of as the angle in the sky of the moon or sun above the horizon. The moon has a maximum and minimum declination each month because the plane of the moon’s orbit around the Earth is not the same as the plane of the equator. Similarly for the sun – the sun has a minimum and maximum declination each year because the plane of the Earth’s orbit around the sun is not the same as the plane of the equator. This solar declination is also what causes the seasons on Earth. The different responses to the tidal bulges and their declinations cause the ocean basins to have varying tidal patterns.
There are three basic tidal patterns that occur along the Earth’s major shorelines. Most areas have two high and two low tides each day. If an area has two highs and two low tides each day that are about the same height, the pattern is called a semi-daily or a semidiurnal tide. This is true for most of the East Coast of the United States. If the two high and two low tides each day differ in height, the pattern is called a mixed semidiurnal tide, like you may see along the West Coast of the United States. Some areas, such as the northern Gulf of Mexico, have only one high and one low each day. This is called a diurnal tide.
HOST: Thanks Steve for your explanations on the causes, frequency, and intensity of tides as well as the role of gravity with the position of the Earth, moon, and sun and how these cause tides. Are there other factors that affect tides?
STEVE GILL: Kate, yes there sure are. Again, remember that there are lunar tides and solar tides and the tides we observe on the coasts are a combination of the two. So each month there are full moons and new moons in which the sun and moon are aligned and their gravitational attraction on the Earth acts together to cause stronger tides than normal. These are called spring tides. And there are two times each month when the moon and the sun are at right angles to each other with respect to the Earth, causing weaker tides than normal; these occur at half moons and are called neap tides. The orbits of the moon around the Earth and the Earth around the sun are also not perfect circles, but they’re elliptical or oval in shape. This results in stronger lunar tides each month when the moon is closer to the Earth in its orbit, and stronger solar tides each year when the Earth is closest to the sun, which typically occurs in early January.
So, while the gravitational pull of the moon and the sun is the main factor, on a much smaller scale, the magnitude of local tides can be strongly influenced by the shape of the shoreline. When oceanic tides hit wide continental margins, the height of the tide can be magnified when compared to very small tides at ocean islands not near the continental margins. Also, the shapes of bays and estuaries can magnify the intensity of tides and some shallow bays, lagoons, and rivers can also lessen the intensity. The Bay of Fundy in Nova Scotia in Canada is a classic example of the magnification effect and has the highest tides in the world at more than 15 meters or approximately 49 feet. Cook Inlet in Alaska is a similar example with the highest tides in the United States.
And finally, local wind and weather patterns can also affect tides. Strong offshore winds can move water away from the coastline, exaggerating low-tide exposures. In many areas with very weak tides, such as the shallow Chesapeake Bay and areas of the Gulf of Mexico, changes in wind and barometric pressure can affect the water levels as much as or more than the tides.
(IMPORTANCE OF STUDYING TIDES)
HOST: So Steve, why do we study tides?
STEVE GILL: Well, we study tides for a variety of reasons. If we know the times, heights, and extents of both the inflow and outflow of the tidal waters we can better navigate through the intracoastal waterways and within the estuaries, bays, harbors; and we can work on harbor engineering projects such as the construction of bridges and docks; and we can collect data critical to fishing, boating, surfing, and many other water-related sports. We put in tide stations to measure the tides and analyze the data so that we can predict the tides and publish tide tables. And this is just to name a few of the ways that we use tidal data to help us in our daily lives.
HOST: How can the public access tidal data and information to plan upcoming recreational activities?
STEVE GILL: There are a few Web sites out there that provide local and regional tidal information. You can go to tidesonline.nos.noaa.gov to see current conditions, especially during storms or go to tidesandcurrents.noaa.gov to get historical data and much more information on tides and currents and sea levels. You can go to tidesandcurrents.noaa.gov/ports to see how real-time tides and currents measurements and forecast models are used by the maritime community.
HOST: Thanks Steve. We also have listeners from many different regions of the U.S. How are some of our non-coastal listeners impacted by tides?
STEVE GILL: Well, by measuring and analyzing the tides, we can produce very accurate tide and tidal current predictions and estuarine models that are used to make sure that our ports are used in the safest and most efficient manner. This affects the whole U.S. economy, as most of the U.S. trade and commerce comes into and out of the U.S. through the major coastal ports. So that affects everyone’s lives.
HOST: Steve, what is the role of the National Ocean Service in studying tides?
STEVE GILL: Kate, the Center for Operational Oceanographic Products and Services in the National Ocean Service is responsible for maintaining a national network of 205 long-term continuously operating water-level stations around the country, including the U.S. Great Lakes, which are non-tidal. We put in many more short-term water-level stations each year for various surveying, engineering, or habitat restoration projects. In addition, our office is also responsible for predicting and monitoring tides and tidal currents, computing tidal datums and sea level elevations, and for computing long-term relative sea level trends along our coasts.
So, the study, measurement, and analysis of tides by NOAA continue to be an important program for our nation so that all maritime users can enjoy use of our coastal resources.
HOST: Thanks Steve for joining us on today’s episode of Diving Deeper and exploring what tides are and what causes them. To learn more about tides and some of the products that Steve mentioned today, please visit the Center for Operational Oceanographic Products and Services Web site at tidesandcurrents.noaa.gov.
That’s all for this week’s show. Please tune in on April 22nd for our next episode on estuaries. | <urn:uuid:998bbf20-6164-4d91-a77c-09994cabd4cb> | 3.921875 | 3,466 | Audio Transcript | Science & Tech. | 53.754347 |
An interesting way to generate electricity from the sun without solar panels. Since a car heats up when the windows are up even on a cloudy day, due to the same greenhouse effect of glass, this should be able to use energy from the sun even on cloudy days.
The solar updraft tower is a renewable-energy power plant. It combines the chimney effect, the greenhouse effect and the wind turbine. Air is heated by sunshine and contained in a very large greenhouse-like structure around the base of a tall chimney, and the resulting convection causes air to rise up the updraft tower. This airflow drives turbines, which produce electricity.
Heat can be stored inside the collector area greenhouse to be used to warm the air later on. Water, with its relatively high specific heat capacity, can be stored in tubes placed under the collector, increasing the energy storage as needed.
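A rough sense of scale can be sketched with the standard chimney-efficiency approximation η ≈ gH/(c_p·T₀) used in analyses of such towers. Every size and efficiency figure below is an illustrative assumption, not a value from this text:

```python
# Back-of-envelope power estimate for a solar updraft tower. The tower
# (chimney) efficiency is approximated by eta_tower ~ g*H / (cp * T0),
# i.e. taller chimneys convert a larger fraction of collected heat.
g, cp, T0 = 9.81, 1005.0, 293.0        # m/s^2, J/(kg K), ambient temperature K
H = 1000.0                             # chimney height, m (assumed)
solar_flux = 1000.0                    # W/m^2 on the collector (assumed peak)
collector_area = 3.14159 * 2500.0**2   # 5 km diameter greenhouse, m^2 (assumed)
eta_collector, eta_turbine = 0.5, 0.8  # assumed component efficiencies

eta_tower = g * H / (cp * T0)          # ~3.3% for a 1 km tower
power_w = solar_flux * collector_area * eta_collector * eta_tower * eta_turbine
```

The low chimney efficiency is why proposed designs pair very tall towers with very large collector areas.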
“Big red” is the nickname that MBARI marine biologists gave to this startlingly large jellyfish, which grows over 1 m (3 ft) in diameter. After determining that it was an entirely new species of jelly, they named it Tiburonia granrojo after MBARI’s remotely operated vehicle Tiburon.
This giant scyphomedusa would be hard to miss, except it lives deep below the ocean’s surface, at depths of 650 to 1,500 m (2,000 to 4,800 ft). “Big red” has since been observed in deep waters off the west coast of North America, Baja California, Hawaii, and Japan. It uses its four to seven fleshy “feeding arms” instead of stinging tentacles to capture food.
Combined Release and Radiation Effects Satellite
Launch Date: July 25, 1990
Mission Project Home Page - http://nssdc.gsfc.nasa.gov/nmc/masterCatalog.do?sc=1990-065A
The Combined Release and Radiation Effects Satellite (CRRES) was launched into a geosynchronous transfer orbit (GTO) for a nominal three-year mission to investigate fields, plasmas, and energetic particles inside the Earth's magnetosphere.
As part of the CRRES program the SPACERAD (Space Radiation Effects) project, managed by Air Force Geophysics Laboratory, investigated the radiation environment of the inner and outer radiation belts and measured radiation effects on state-of-the-art microelectronics devices. Other magnetospheric, ionospheric, and cosmic ray experiments were included onboard CRRES and supported by NASA or the Office of Naval Research.
The chemical release project was managed by NASA/MSFC and utilized the release of chemicals from onboard canisters at low altitudes near dawn and dusk perigee times and at high altitudes near local midnight. The chemical releases were monitored with optical and radar instrumentation by ground-based observers to measure the bulk properties and movement of the expanding clouds of photo-ionized plasma along field lines after the releases occurred. In order to study the magnetosphere at different local times during the mission, the satellite orbit was designed to precess with respect to the Earth-Sun line such that the local time at apogee decreased by 2.5 minutes/day from 08:00 (LT) just after launch and returned to this position in nineteen-month cycles.
The CRRES spacecraft had the shape of an octagonal prism with solar arrays on the top side. The prism is 1 m high and 3 m between opposite faces. Four of the eight compartments were for the chemical canisters and the other four housed SPACERAD and other experiments. The spacecraft body was spun at 2.2 rpm about a spin axis in the ecliptic plane and kept pointed about 12 degrees ahead of the Sun's apparent motion in celestial coordinates. Pre-launch and in-flight operations were supported by the Space Test and Transportation Program Office of the U.S. Air Force Space Division. Contact with the CRRES spacecraft was lost on October 12, 1991 and was presumed to be due to onboard battery failure. | <urn:uuid:b2172df7-cb95-4cb1-bba6-24576e459682> | 3.40625 | 489 | Knowledge Article | Science & Tech. | 41.607203 |
A comparison of metallographic cooling rate methods used in meteorites
The primary objective of this study was to test the postulate that cooling rates acquired from metal grains in chondrites are consistent with those from iron meteorites. Both types of metal occur in some Group IAB meteorites, which are mixtures of massive metal with well-developed Widmanstätten structures and chondritic inclusions with dispersed metal grains. The grains have textures and compositions similar to chondritic metal, including negligible P. The meteorites studied show little or no sign of shock reheating and textural evidence indicates that silicates and metal were mixed before Widmanstätten patterns formed during cooling. Cooling rates were obtained by comparing measured to modeled taenite grain or lamellae dimensions and central Ni contents. Modeling entails solving diffusion equations using experimental diffusion coefficients, phase relations, and bulk or local Ni and P contents, taking into account geometry, undercooling, and impingement. There is one set of parameters for grains and another, quite different set for Widmanstätten lamellae, including a factor of 30 difference in diffusion coefficients. Yet cooling rates obtained from Widmanstätten structures and metal grains in chondritic inclusions of the same meteorite are consistent; uncertainties in the best data are ±10°/Ma, equivalent to a factor of 1 ± 0.25. This agreement implies that the data and models are correct or contain fortuitously offsetting errors, which is quite unlikely. Cooling rates range from 40°/Ma to 70°/Ma in IAB meteorites that contain both grains and Widmanstätten structures. Rates based on grains in Ni-poor and Ni-rich meteorites lacking Widmanstätten patterns expand the range from 30°/Ma to perhaps 200°/Ma. Cooling rates correlate with Ni content; Ni-poor meteorites have slower rates than Ni-rich ones. Evidently, IAB meteorites were radially distributed over >30 km in a body with a radius >50 km.
A comparison of the available Ar ages with cooling times inferred from the cooling rates suggests that the parent body cooled more slowly after the metallographic cooling rates were established.
Joseph I. Goldstein. "A comparison of metallographic cooling rate methods used in meteorites" Geochimica et Cosmochimica Acta 58.4 (1994): 1353-1365. | <urn:uuid:f112932c-d422-432e-86f5-139b89e875ad> | 3.21875 | 502 | Academic Writing | Science & Tech. | 26.849371 |
Johannes Kepler lived from 1571 to 1630. Kepler was a German astronomer who worked closely with Tycho Brahe and corresponded with Galileo. Kepler came up with the three laws of planetary motion, for which he is most famous. He also carried out mathematical calculations that remain well known to this day, gave one of the first demonstrations of how logarithms worked, and did a large amount of work with optics. The following is an interview with the famous astronomer.
WM Times: What type of lifestyle were you born into?
J. Kepler: I was born into a poor family, where there were problems. My parents had a very unhappy marriage and I, unfortunately, was subjected to their constant bickering. When I was five years old my father left and died, most likely in a war with the Netherlands.
WM Times: What was your first college or university?
J. Kepler: At the time period nobody called a school a college but the university which I did attend was the University of Tubingen where I got a scholarship from the Dukes of Wurttemberg.
WM Times: How did you get to know Tycho Brahe and what accomplishments did you share?
J. Kepler: In the year 1600 AD Tycho Brahe invited me to join his research team. After his death I was appointed to be his successor as the Imperial Mathematician.
WM Times: What were the books that you published and what were they on?
J. Kepler: My first book was Ad Vitellionem Paralipomena, Quibus Astronomiae Pars Optica Traditur (Supplement to Witelo, Concerning the Optical Part of Astronomy). Then in 1609 I wrote Astronomia Nova (New Astronomy). Dissertatio cum Nuncio Sidereo (Conversation with the Sidereal Messenger) was written in response to Galileo's reported discoveries. Ten years later I published a book, Harmonices Mundi (Harmonies of the World), on the harmonic law.
WM Times: What was your Harmonic Law?
J. Kepler: My Harmonic Law was the third law of planetary motion, which states: the squares of the sidereal periods of the planets are proportional to the cubes of the semi-major axes (mean radii) of their orbits.
WM Times: How would you explain this law?
J. Kepler: This can simply be stated as: the square of the time it takes a planet to revolve around the sun is proportional to the cube of the planet's mean distance from the sun.
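The harmonic law is easy to verify against modern rounded orbital data: with the period T in Earth years and the semi-major axis a in astronomical units, the ratio T²/a³ comes out close to 1 for every planet. A sketch (the data values are standard reference figures, not from the interview):

```python
# Orbital period T (years) and semi-major axis a (AU), rounded values.
planets = {
    "Mercury": (0.241, 0.387),
    "Venus":   (0.615, 0.723),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.86, 5.203),
}

# Harmonic law: T**2 / a**3 is (nearly) the same constant for all planets;
# in these units that constant is 1.
ratios = {name: T**2 / a**3 for name, (T, a) in planets.items()}
```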
WM Times: What was your first law?
J. Kepler: The path of a planet around the sun is an ellipse, with the sun at one focal point.
WM Times: How would you explain this law?
J. Kepler: My first law generally states that the path of every planet is an ellipse defined by two foci, with the sun at one of them. An ellipse is a closed curve that looks like a slightly squashed circle, having two focal points instead of a single center.
WM. Times: What is your second law?
J. Kepler: The radius vector to a planet sweeps over equal areas in equal intervals of time.
WM. Times: How would you explain this law?
J. Kepler: Angular momentum is constant over time for any central force, and conservation of angular momentum means that a planet moves faster when it is closer to the sun than when it is farther away. The stronger gravitational pull also accelerates the planet while it is closer to the sun.
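Equal areas in equal times makes this speed-up quantitative: at perihelion and aphelion the velocity is perpendicular to the radius, so the product r·v is the same at both points. A sketch with approximate modern values for the Earth (not figures from the interview):

```python
# Conservation of angular momentum per unit mass: r_peri * v_peri = r_aph * v_aph
# at the two points where the orbital velocity is perpendicular to the radius.
r_peri, v_peri = 1.471e11, 30.29e3   # m, m/s (Earth at perihelion, early January)
r_aph = 1.521e11                     # m (Earth at aphelion, early July)

v_aph = r_peri * v_peri / r_aph      # slower speed at the larger distance
```

The computed aphelion speed comes out a little over 29 km/s, below the perihelion speed, as the second law requires.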
Through this interview we can see how much just a bit of Kepler's work has helped society in the long run; if we had talked about everything that he did, we would run out of room and have a never-ending article. Kepler was also said to have been an amazing student from a very young age. He conveyed his knowledge through math and the sciences, with the laws of planetary motion and optics, and his great works.
Kepler, Johannes. Charles Glenn Wallis, trans. Epitome of Copernican Astronomy: 7 Harmonies of the World. Prometheus Books, 1995.
Strong primary sources for the paper but very difficult to understand for the ordinary mind.
Land, Barbara. Sam Wisnom, ill. The Quest of Johannes Kepler, Astronomer. Garden City: Doubleday & Company, Inc., 1963.
Provided excellent background info on him and his family and explained the three laws of planetary motion.
Mitton, Jacqueline. Astronomy: an Introduction for the Amateur Astronomer. The Berne Convention, 1978.
This work was decent but slightly difficult to understand in its mathematics; it is useful as well, as it is very similar to Introductory Astronomy and Astrophysics.
Moore, Patrick. The Amateur Astronomer: A completely new version of Moore's classic work. W.W. Norton & Company, Inc., 1990.
Fairly easy to understand, though this depends greatly on the reader's background in this area.
Zeilik, Michael and Elske v.P. Smith. Introductory Astronomy and Astrophysics, 2nd ed. CBS College Publishing, 1987.
This book is a little more challenging for the reader to understand than Astronomy: An Introduction for the Amateur Astronomer, which is also less factual than the other book.
Visual binaries are systems in which the individual stars can be seen through a telescope.
Spectroscopic binaries are systems in which the stars are so close together that they appear as a single star even in a telescope. The binary nature of the system is deduced from the periodic Doppler shifts of the wavelengths of lines seen in the spectrum, as the stars move through their orbits around the center of mass. In some instances, the spectrum shows the lines from both stars; this case is called a double-lined spectroscopic binary. In other cases, only one set of lines is seen, the other star being too faint, and we call the system a single-lined spectroscopic binary.
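The radial velocity inferred from those periodic shifts follows the non-relativistic Doppler relation v = c·Δλ/λ₀. A sketch with a made-up observed wavelength (the shift below is illustrative, not a real measurement):

```python
# Non-relativistic Doppler: radial velocity = c * (observed - rest) / rest.
# A positive result means the star is moving away from us at that instant.
C = 2.998e8                    # speed of light, m/s
lambda_rest = 656.28           # H-alpha rest wavelength, nm
lambda_obs  = 656.50           # hypothetical observed wavelength, nm

v_radial = C * (lambda_obs - lambda_rest) / lambda_rest   # m/s
```

A shift of a fraction of a nanometer in this hypothetical case corresponds to a radial velocity of roughly 100 km/s, which is why spectroscopy can reveal orbits that imaging cannot resolve.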
Eclipsing binaries are systems in which the orbital plane is oriented exactly edgewise to the plane of the sky so that the one star passes directly in front of the other, blocking out its light during the eclipse. Eclipsing binaries may also be visual or spectroscopic binaries. The variation in the brightness of the star is called its light curve.
Five to ten percent of the stars visible to us are visual binary stars. Careful spectroscopic studies of nearby solar-type stars show that about two thirds of them have stellar companions. We estimate that roughly half of all stars in the sky are indeed members of binaries.
One of the fundamental properties that we want to know about a star is its mass. The only way that we can determine the masses of stars is to study the orbital motions of binary stars. Application of the laws of celestial mechanics allows us to calculate the masses of the stars from measures of their orbital periods, sizes and velocities.
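Concretely, Newton's form of Kepler's third law ties the orbit directly to the mass: with the semi-major axis of the relative orbit in AU and the period in years, the total system mass in solar masses is M₁ + M₂ = a³/P². A sketch using approximate published values for the Sirius A/B system (the numbers are assumptions of this example, not from the text above):

```python
# Newton's form of Kepler's third law in convenient units:
# total mass (solar masses) = a**3 (AU) / P**2 (years).
def total_mass_solar(a_au, period_yr):
    return a_au**3 / period_yr**2

# Sirius A/B: relative orbit semi-major axis ~19.8 AU, period ~50.1 yr.
m_sirius = total_mass_solar(19.8, 50.1)   # roughly 3 solar masses in total
```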
Link here to the Astro 101 binary star simulations.
| <urn:uuid:44296c15-e885-4154-9737-e5e8ca951fdf> | 4.09375 | 389 | Knowledge Article | Science & Tech. | 48.138989 |
Neutron Stars Join The Black Hole Jet Set
This artist's illustration depicts the jet of relativistic particles blasting out of Circinus X-1, a system where a neutron star is in orbit with a star several times the mass of the Sun. The neutron star, an extremely dense remnant of an exploded star consisting of tightly packed neutrons, is seen as the sphere at the center of the disk. The powerful gravity of the neutron star pulls material from the companion star (shown as the blue star in the background) into a so-called accretion disk surrounding it. Through a process that is not fully understood, a jet of material moving at nearly the speed of light is generated. A high percentage of the energy available from material falling toward the neutron star is converted into powering this jet.
The image in the inset is Chandra's X-ray image of the neutron star in Circinus X-1. Low energy X-rays are shown in red, medium energy X-rays in green and high energies in blue. The jet itself is seen to the upper right corner and consists of two fingers of X-ray emission (shown in red) separated by about 30 degrees. These two fingers, located at least about 5 light years from the neutron star, may represent the outer walls of a wide jet. Alternatively, they may represent two separate, highly collimated jets produced at different times by a precessing neutron star. That is, the neutron star may wobble like a top as it spins and the jet fires at different angles at different times. The structures on the opposite side (red, to the lower left) may be evidence for counter jets. The rest of the colored areas surrounding the bright central source are instrumental artifacts and not representative of structures associated with Circinus X-1.
The jet in Circinus X-1 is helping astronomers better understand how neutron stars, and not just black holes, can generate these powerful beams. Many jets have been found originating near black holes (both the supermassive and stellar-mass variety), but the Circinus X-1 jet is the first extended X-ray jet associated with a neutron star in a binary system. This detection shows that the unusual properties of black holes -- such as presence of an event horizon and the lack of an actual surface -- may not be required to form powerful jets. The result also reveals how efficient neutron stars can be as cosmic power factories. | <urn:uuid:37d9e15b-927f-4db5-83c8-4dc16f8ba8e8> | 3.96875 | 490 | Knowledge Article | Science & Tech. | 48.298994 |
When we visualize software as a machine, it becomes clear just how unwise it is to invent too much in a new software system. Picture the overall software as a factory assembly line of robots, or a new kind of automobile. The major software modules are sections of the factory, or important pieces of the automobile. The software subroutines are parts making up the larger mechanical components. Individual lines of source code are single pieces of metal in a robot, or springs, or gears, or levers. Function parameters are rods or lasers reaching into another mechanical subassembly. When the assembly line or car is started for the first time, the parts may not work together correctly. They may rub or bang into each other, preventing the whole machine from working right. This might occur in hundreds of places. Some problems may not be seen until a certain sequence of actions is attempted at the same time.
In the same way, a large software project is an incredibly complex machine, with millions of possible interactions among overlapping parts, compounded by interactions of the interactions. The full behavior of many software systems is well beyond human understanding. This is why we cannot accurately predict bugs in complex software; we are trying to build machines we cannot comprehend.
Oversimplifying a bit, there are two common approaches to software projects.
- Design and build software in a conservative manner, using tried-and-true components, assembled by a stable team of engineers, who have successfully built similar systems. These projects usually can be estimated accurately, and completed on time and budget.
- Attempt to create software that is substantially new. These are really research projects, not engineering endeavors. They have uncertain outcomes and no reliable time/cost estimates.
An example of #1 is the creation of a new compiler by a software development company that has produced many compilers for dozens of languages and target machines. When this company takes on a new compiler project, for a variation of an existing source language, with a carefully specified target instruction set, by an experienced team of compiler engineers, then this project has a high likelihood of success. Techniques such as reusable class libraries and design patterns help software projects conform to this model.
An example of #2 was the FBI's Virtual Case File, previously mentioned. No one had ever created a software system to perform the functions envisioned for it. Creating it was like trying to construct a wholly new type of machine, from a new kind of metal, using a yet-to-be-invented welding technique.
Either of these two approaches to software is valid. The key problem is that we take on projects like #2, but pretend they are like #1. This is what ails the world of software development.
We fool ourselves about how well we understand the complex new software machines we are trying to build. Just because we plan to code a new project in a known programming language, say Java, and our engineers are good at Java, this does not mean we have answers to all the challenges that will arise in the project. Using the mechanical analogy, just because our inventors have put together many machines that use springs and gear and levers, does not mean we can correctly build any machine using these parts. We can't have it both ways. If we want an accurate budget and completion time, we cannot engage in significant research during a software project. Conversely, there is nothing wrong with research and trial-and-error, but we should not think we know when it will be finished.
But what is the solution in the real world? Everyone would like to make software engineering as predictable as traditional engineering. There are many important pending software projects with large unknowns. We cannot simply say, "Oh, this software poses some new challenges, so let's give up." The solution is to get over our hubris that software development is some special kind of animal, unlike other engineering endeavors, and that we programmers are so much smarter than our traditional engineering brethren. Software is just a machine, and people have been building machines for a very long time.
To wit, here is my prescription for improving the success rate and reputation of software developers....
Stop fooling ourselves about how much we know and how clever we are. Large software projects are impossible to understand fully. No one can grasp all of the overlapping effects of each component. Picturing software as a physical machine helps to illuminate just how complex these systems are.
In a large software project, there may be one person who fully understands each particular component, but that is a different person for each component. No one has a grasp of the whole system, and we have no way to meld isolated individual knowledge into a collective whole. This was precisely the problem with the embarrassing tale of the Metric-English measurement error on the Mars Climate Orbiter. (Wasting $300 million in taxpayer money.)
Incremental improvement to existing systems is good. Sometimes this means adding one additional feature to a working system. Sometimes it means combining two working systems with a new interface. And it is helpful if the interface method itself has been used elsewhere.
Iterative development is good. This applies the above principle, again and again, to one particular software system. The first release of the software does little more than say "Hello" to the user. The next release adds one basic feature. The next, one more, etc. The idea is that each software release only has one major problem to solve. If it does not work, there is one thing to fix. Each release is an incremental improvement to working software. In practice, of course, we may stretch a bit and include a few new features in each release, but we never attempt to create a huge, complex piece of software all at once. (See the Agile Manifesto.) The iterative approach also can be applied to estimating the size of software projects.
Research projects are great, but be honest about them. The Denver Airport software disaster, cited above, could have been avoided if it were handled in this way...
Admit that we don't know how to sort airline baggage automatically, but would like to solve this problem. Start such a research project in an empty warehouse, using a few conveyor belts, some bar-coded suitcases, and some sorting gates with embedded software. After working out the kinks, try the system at a small airport, for incoming flights from one other city. When that works, try it for all flights to this small airport. After success there, and further hardware/software refinements, create a similar system at a mid-size airport. Improve the hardware/software again. Install at several mid-size airports and fix any problems.
Then you are finally in a position to say, "Let's think about handling baggage at a large airport this way."
Software development need not be a mystical process, undertaken only by the most brilliant, with no hope of predicting the outcome. Software is a machine, and over many years we have learned the principles of good machine design. Unfortunately, because software is so new and is impossible to see or touch, we get clouds in our eyes when we think about software projects. We forget that we know how to plan, design, and construct high-quality machines -- by incremental improvement to previous machines, using proven materials and methods. | <urn:uuid:0f9aef31-f1b3-49ba-9619-6493b8e9d070> | 3.578125 | 1,483 | Personal Blog | Software Dev. | 42.43252 |
First the good news: North America's monarch butterfly (Danaus plexippus) has bounced back after its worst year ever. Now the bad: it is still the fourth worst year since records began in 1993.
WWF Mexico's latest survey of the butterfly's Mexican heartland shows that the insects wintering there since November colonised 4 hectares of forest, over double the area occupied last year. The area occupied is used as an indirect measure of butterfly numbers.
In 2009, the butterflies faced storms as they migrated from Canada and the US, which devastated their numbers. "These figures are encouraging, because they show a trend toward recovery after a record low," says Omar Vidal, director of WWF Mexico.
Vidal says that the illegal logging which threatened the monarch's habitat is now under control, but climate change and farming in the US could deplete the food the butterflies rely on en route.
| <urn:uuid:b4fc6861-704d-4a28-acbe-315405c47029> | 3.390625 | 318 | Truncated | Science & Tech. | 42.372965 |
The fictional world of Arthur C. Clarke's 2001: A Space Odyssey this week came a step closer to reality. NASA has begun testing a computer system designed 'to look over the astronaut's shoulder' and give advice on malfunctions and faults that occur during space shuttle flights.
A prototype of the system, which uses advanced artificial intelligence programming, has been running for the past two weeks at the Johnson Space Center in Houston, monitoring the in-flight actions of the six-member crew of Endeavour.
Among its duties has been to ensure that the astronauts properly isolate faulty valves and tank leaks in the shuttle's orbital manoeuvring equipment. But its designers say it could eventually automate many routine space operations, including the maintenance of life support systems, power and communications management and the servicing of satellites.
On average, about 25 malfunctions occur during each shuttle flight. These are diagnosed and fixed by the ...
| <urn:uuid:02a6e037-1b51-4ad6-8f8f-bae560f9077b> | 3.328125 | 209 | Truncated | Science & Tech. | 39.752917 |
Write a program that makes use of a class called Employee that will calculate an employee's weekly paycheque. Design your class based on the UML diagram to the right as well as the following notes:
- A static field, empCount, keeps track of the number of instantiated employees and can be retrieved using the static method getCount( )
- The constructor initializes the employee's name and employee number (a random number in the range of 1000 - 9999)
- The method setEmployeePay( ) is overloaded to accommodate different payment methods based on empType:
  - Type 1 - Salaried employees have a yearly salary, paid on a weekly basis
  - Type 2 - Hourly employees are paid an hourly rate for the number of hours worked (overtime is time and a half for hours over 40)
  - Type 3 - Piece employees are paid a base amount plus $24.00 for every piece completed
- The method calculatePay( ) will determine what the employee's weekly pay is, based on their payment type
- The methods getName( ) and getNumber( ) return the employee's name and employee number respectively
- All input & output should be displayed in the main class only
- The main program should test your class by instantiating four different employees:
  - Test a salaried employee
  - Test two hourly employees, one who has worked overtime and one who hasn't
  - Test a piecework employee
- The following data can be hard-coded into your program (no prompts to enter data other than the names):
  Employee 1: (22.50, 35.0)
  Employee 2: (45350.00)
  Employee 3: (500.00, 25)
  Employee 4: (14.75, 48.0)

Output:
  Enter an employee name: Bob
  Employee Count is: 1
  Enter an employee name: Ted
  Employee Count is: 2
  Enter an employee name: Carol
  Employee Count is: 3
  Enter an employee name: Alice
  Employee Count is: 4
  Employee 8826 Bob earned $787.50
  Employee 5454 Ted earned $872.12
  Employee 1083 Carol earned $1100.00
  Employee 8782 Alice earned $767.00
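The figures in the sample output can be checked against the three pay rules. This is a quick arithmetic sanity check in Python, not part of the required Java solution; the function name is made up:

```python
# Quick check of the three pay rules from the assignment.

def weekly_pay(emp_type, *args):
    if emp_type == 1:                 # salaried: yearly salary / 52 weeks
        (salary,) = args
        return salary / 52
    if emp_type == 2:                 # hourly: time and a half over 40 h
        rate, hours = args
        base = min(hours, 40) * rate
        overtime = max(hours - 40, 0) * rate * 1.5
        return base + overtime
    if emp_type == 3:                 # piecework: base + $24.00 per piece
        base, pieces = args
        return base + 24.0 * pieces
    raise ValueError("unknown employee type")

print(round(weekly_pay(2, 22.50, 35.0), 2))   # 787.5  (Bob)
print(round(weekly_pay(1, 45350.00), 2))      # 872.12 (Ted)
print(round(weekly_pay(3, 500.00, 25), 2))    # 1100.0 (Carol)
print(round(weekly_pay(2, 14.75, 48.0), 2))   # 767.0  (Alice)
```

All four values match the expected output, which confirms the intended interpretation of each overload of setEmployeePay( ).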
| <urn:uuid:838e400d-3f34-48a6-991d-2c580949aec5> | 2.9375 | 441 | Tutorial | Software Dev. | 54.073409 |
My next dive into attempting to write a program deals with a Fibonacci sequence. I have started an outline to portray the ideas of how I think the flow of code should go to produce this program and have come across some questions.
I want my program to query the user for four separate integers. The first two are the beginning numbers of the sequence and the last two will indicate the starting point and ending point of the sequence.
My question is: if the user decides to start farther down the line, instead of the beginning number, then how should I put this together? Should I write a function that will work through the computations ( beginning from the first ) which will keep count of each loop through the computations and then return when it has reached the proper beginning line or should I try to compute an algorithm that will do the same and begin printing at said starting point.
the user inputs 1,1 for the sequence
the user wants the starting point for output to be 3 and end at 9.
the output should read:
I am also going to include the ratio of the current number of the sequence to the previous, but that didn't seem necessarily relevant to my question.
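Either approach works; since each term depends only on the previous two, the simplest structure is to generate terms from the seeds while counting, and only output once the counter reaches the requested starting point. A rough Python sketch (the function name is made up):

```python
# Generate a Fibonacci-like sequence from two seed values and return
# the terms from positions start..end (terms are numbered from 1).

def fib_terms(a, b, start, end):
    seq = [a, b]
    while len(seq) < end:          # keep counting until we reach the end term
        seq.append(seq[-1] + seq[-2])
    return seq[start - 1:end]      # slice out the requested window

print(fib_terms(1, 1, 3, 9))   # [2, 3, 5, 8, 13, 21, 34]
```

The ratio of each term to its predecessor can be computed in the same loop, since the predecessor is always at hand.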
Sorry if this was poorly written. I am kind of scrambling through my thoughts here. | <urn:uuid:7687a6d5-f388-44ec-a906-75bbb44b8b74> | 2.796875 | 264 | Q&A Forum | Software Dev. | 60.033552 |
Scheme provides a very powerful control abstraction named call-with-current-continuation. Not only is the name intimidating, but many introductions to its semantics are as well. This introduction tries to be much gentler. C programmers can also read call-with-current-continuation-for-C-programmers.
To understand this text, you should be aware and secure in your use of higher order procedures like map and for-each, the use of lambda, passing procedures to other procedures, and returning procedures from procedures.
First of all, though, this introduction uses an abbreviation for the long-winded name: call/cc. This is somewhat less intimidating and a bit easier on the fingers. Some implementations provide an alias variable already; if not it is trivial to define for running the example code in this tutorial:
(define call/cc call-with-current-continuation)
One of the simplest uses of call/cc is to provide a simple escape mechanism out of any nested computation. In some languages, such as Java or C, this is known under names such as break, continue, and return. I now have a problem, though. Using normal recursion, most problems that mandate the use of any of those keywords in those languages (for example, iterating over a two-dimensional array) are trivial in the recursive style of Scheme. So you will have to bear with me when I take some far-fetched examples.
Let's take for example a simple search for an entry in a list. We can of course manually recurse over the list, and just not recurse anymore when we've found it. Yet, there's the procedure for-each for traversing a list. We somehow are forced to use it. It would be useful if we could just write the following:
;;; Return the first element in LIST for which WANTED? returns a true
;;; value.
;;; NOTE: This is not working Scheme code!
(define (search wanted? lst)
  (for-each (lambda (element)
              (if (wanted? element)
                  (return element)))
            lst)
  #f)
We use a procedure named return here which would, hopefully, return from search, and use the value of element which we pass as an argument as the return value of search. If for-each comes to an end, we haven't found what we were looking for, and can return #f.
Sadly, such a procedure does not exist in Scheme. But, call/cc to the rescue! Using it, we can get this procedure. Now, how do we use call/cc? The procedure gets as an argument another procedure, which is then called with a single argument: The return procedure we above were looking for! As soon as the return procedure is called, the call to call/cc returns. Specifically, call/cc returns the argument passed to the return procedure. In a concrete example, the code above can be written in actual working Scheme:
;;; Return the first element in LST for which WANTED? returns a true
;;; value.
(define (search wanted? lst)
  (call/cc
    (lambda (return)
      (for-each (lambda (element)
                  (if (wanted? element)
                      (return element)))
                lst)
      #f)))
Here we can see how the procedure return is introduced into the scope of the for-each expression where we are using it. As soon as we found the element we were looking for, we pass it to the return procedure. As soon as this happens, the call/cc expression returns this value. Nothing else is done. The whole expression terminates. You can picture this as a kind of goto - you just jump through the code and all the calls to the call/cc and return from it.
A note of warning is appropriate here; Dijkstra's famous paper GOTO Statement Considered Harmful applies to call/cc as well: it can make programs very confusing. Its use should be carefully limited and preferably abstracted away so that it cannot interfere with the normal understanding of the program.
Now that I issued this warning, let's continue. Is your brain twisted enough already? You haven't seen much of what call/cc is able to do.
As return is just a normal variable, we can of course pass it around. Again stretching examples a bit to be able to concentrate on the issue at hand, let's say we pass not a predicate wanted?, but a procedure treat that accepts two arguments: an element, and a procedure to call if it likes that element.
;;; Call LIKE-IT with a custom argument when we like ELEMENT.
(define (treat element like-it)
  (if (good-element? element)
      (like-it 'fnord)))

;;; Call TREAT with every element in the LIST and a procedure to call
;;; when TREAT likes this element.
(define (search treat lst)
  (call/cc
    (lambda (return)
      (for-each (lambda (element)
                  (treat element return))
                lst)
      #f)))
As you can see, we pass treat the return procedure we got from call/cc. Treat then proceeds to do some processing, and in the event that it likes it, calls the return procedure with a value it's liking. As we remember, as soon as we call the return procedure, the argument to it is returned from the call to call/cc that created the return procedure. We are effectively jumping from within the treat procedure to the invocation of call/cc in search.
This is no different from the use of return above, but as you can see, it works from anywhere in the program. That property gave this kind of operation a special name: non-local exit. We are not exiting locally at the location of the exit instruction, but at a completely different place, or locality.
So far we have seen a trivial use of call/cc only. All we did so far was escaping from a deeply nested call structure - many calls of procedures deep, we were able to use the return procedure to leave all these calls behind and go up, to exit. These are called exit continuations. Only now, the word continuation which is part of call/cc shows up again. Exit continuations are a special kind of continuation, and call/cc creates the general kind.
A continuation is something that is used to continue. The return we used above was a continuation we used to continue back up where we created it. We continued the computation at the location of call/cc.
What happens if the continuation created with call/cc escapes the body of the procedure passed to call/cc? Let's say, for example:
(define return #f)

(+ 1 (call/cc
       (lambda (cont)
         (set! return cont)
         1)))
The evaluator will print the value 2, because the body of the call/cc expression terminated normally, returning the value of 1. This is then added to 1, which yields 2. But! We have a variable named return now, which is bound to the continuation created with this call/cc there. What happens if we call it with a parameter of, say, 22?
> (return 22)
23
Now what happened here? The return procedure did the same thing it did above: the 22 passed to return is used as the return value of the call/cc call. This return value is then added to 1, which yields 23. And this is what we got. We never returned from the call to return, but returned from the addition way above there.
The difference here is that we re-entered the computation from outside, we did not just leave it. This is a big brain-twister. Be sure to understand it!
Let's assume we have a procedure which does a lengthy task. We don't want to block the system completely, so we separate the task in little steps. After we did one of those steps, we call a procedure which lets the system carry out any task it might want to do right now. Now comes the twist: This procedure is passed a procedure, specifically a continuation, which it should call to resume the lengthy computation:
(define (hefty-computation do-other-stuff)
  (let loop ((n 5))
    (display "Hefty computation: ")
    (display n)
    (newline)
    (set! do-other-stuff (call/cc do-other-stuff))
    (display "Hefty computation (b)")
    (newline)
    (set! do-other-stuff (call/cc do-other-stuff))
    (display "Hefty computation (c)")
    (newline)
    (set! do-other-stuff (call/cc do-other-stuff))
    (if (> n 0)
        (loop (- n 1)))))
When do-other-stuff calls the procedure it got passed by call/cc, the computation will resume right here, the loop will recurse, and the next step is done. Of course, besides such a hefty computation, every other computation is superfluous. Since it's superfluous, it should do the same as pointed, and let the system do other stuff when possible:
;; notionally displays a clock
(define (superfluous-computation do-other-stuff)
  (let loop ()
    (for-each (lambda (graphic)
                (display graphic)
                (newline)
                (set! do-other-stuff (call/cc do-other-stuff)))
              '("Straight up." "Quarter after." "Half past." "Quarter til."))
    (loop)))
What happens now if you enter the following input:
> (hefty-computation superfluous-computation)
Hefty computation: 5
Straight up.
Hefty computation (b)
Quarter after.
Hefty computation (c)
Half past.
Hefty computation: 4
Quarter til.
...
Now what is happening here? The hefty computation did one step, and passed the control to the superfluous computation. This in turn did one step and passed control back to the hefty computation. Now this of course did a step and passed control to the superfluous computation. And... You get the pattern. These two procedures are calling each other alternatingly, passing control between the two. Of course, this is not limited to two procedures, but could be done between any number of procedures. Procedures which pass control between them like this are called Coroutines, since they are effectively processed in parallel.
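For readers who know Python rather than Scheme, the same back-and-forth can be roughly imitated with generators. This is a much weaker mechanism than call/cc (it only suspends and resumes, it cannot jump arbitrarily), but it is enough to show the alternation; the names and the shortened loop count are made up:

```python
# Rough analogue of the hefty/superfluous coroutines using Python
# generators instead of call/cc (illustrative only).

def hefty(n=2):
    for i in range(n, 0, -1):
        print(f"Hefty computation: {i}")
        yield
        print("Hefty computation (b)")
        yield
        print("Hefty computation (c)")
        yield

def superfluous():
    while True:
        for graphic in ("Straight up.", "Quarter after.",
                        "Half past.", "Quarter til."):
            print(graphic)
            yield

def run_alternating(a, b):
    # drive the two generators in lockstep, like the mutual call/cc jumps;
    # stops as soon as the first generator is exhausted
    while True:
        try:
            next(a)
        except StopIteration:
            return
        next(b)

run_alternating(hefty(2), superfluous())
```

Each `yield` plays the role of `(set! do-other-stuff (call/cc do-other-stuff))`: it hands control to the other computation and marks the point where this one will resume.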
I hope you now have a basic understanding of continuations, and will be able to tell anyone who is scared about them that they're not scary at all. But let me repeat my earlier warning: Continuations can be easily abused. Don't overdo it. You don't have to use continuations everywhere, really!
See also call-with-current-continuation in the R5RS. | <urn:uuid:591e36f5-1578-4ea8-97ef-0ce3f450dbc3> | 3.046875 | 2,266 | Documentation | Software Dev. | 58.802681 |
The Database Environment
This chapter describes the database environment which is composed of a set of Mimer SQL system databanks, one or more idents authorized to connect to the database and the databanks created by the idents. It also describes database security and data integrity.
The objects which are created in the database (schemas, tables, views, domains, sequences, modules, procedures, synonyms and indexes) are described in the Mimer SQL User's Manual.
Upright Database Technology AB
Voice: +46 18 780 92 00
Fax: +46 18 780 92 40 | <urn:uuid:4426e582-ba6b-41bc-8d71-f6d1d3eb3353> | 2.859375 | 123 | Documentation | Software Dev. | 26.127576 |
1,000-year-old farming secrets could save the Amazon rainforest
by Alasdair Wilkins
io9.com
9 April 2012
On 9 April 2012 io9.com reported:
An international team of archaeologists have made an intriguing discovery -- the peoples who farmed the Amazon long before the arrival of Europeans did so without burning down trees to clear room for their fields. These indigenous farmers used raised-field farming. 'This ancient, time-tested, fire-free land use could pave the way for the modern implementation of raised-field agriculture in rural areas of Amazonia,' said University of Exeter researcher Dr Jose Iriarte.
Global Good News service views this news as a sign of rising positivity in the field of environment, documenting the growth of life-supporting, evolutionary trends.
| <urn:uuid:7b9e2058-8cd2-431e-8b36-658b99b65c72> | 3.078125 | 281 | Truncated | Science & Tech. | 32.181145 |
A charged particle moves along the line joining two other charged particles which are fixed in place. Assume the interactions between the moving charged particle and each of the fixed charges are repulsive and obey an inverse-square law. The differential equation
for some constants, models the trajectory of the moving particle.
1)Calculate the frequency of the small periodic oscillations near the equilibrium point.
2) Suppose that the initial condition is and and suppose . Find a formula for the velocity of the particle as it passes through the equilibrium position.
Attempt at solution:
I know that the equilibrium point is:
Also that the potential
I'm not really sure how to proceed for part 1 and for part 2, the only way I can imagine is to find an analytic solution to the given differential equation and plug in the initial conditions. I don't think that's the correct method to obtain the solution.
I was able to solve part 2. I only need assistance with part 1 at the moment.
Any hints or suggestions would be greatly appreciated! | <urn:uuid:85c60f0b-8b4f-4dde-af48-32cdfdc3ca53> | 2.96875 | 217 | Q&A Forum | Science & Tech. | 49.127111 |
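For part 1, the standard recipe is to expand the potential about the equilibrium point: the frequency of small oscillations is omega = sqrt(U''(x_eq)/m), equivalently omega**2 = -F'(x_eq) for unit mass. The original constants were lost from the post, so the sketch below assumes a symmetric equation of motion x'' = k1/x**2 - k2/(L - x)**2 with made-up values of k1, k2, L:

```python
# Small-oscillation frequency near equilibrium, numerically.
# ASSUMED form and constants (not from the post): x'' = k1/x**2 - k2/(L-x)**2
import math

k1, k2, L = 1.0, 1.0, 2.0

def accel(x):
    return k1 / x**2 - k2 / (L - x)**2

# find the equilibrium (accel = 0) by bisection; accel is decreasing
lo, hi = 1e-6, L - 1e-6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if accel(mid) > 0:
        lo = mid
    else:
        hi = mid
x_eq = 0.5 * (lo + hi)

# omega**2 = -dF/dx at equilibrium (unit mass), by central difference
h = 1e-5
omega2 = -(accel(x_eq + h) - accel(x_eq - h)) / (2 * h)
omega = math.sqrt(omega2)
print(x_eq, omega)   # x_eq = L/2 = 1.0 and omega = sqrt(32*k1/L**3) = 2.0 here
```

With the actual constants from the problem the same two steps apply: solve F(x_eq) = 0, then take omega = sqrt(-F'(x_eq)/m).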
Boosted carbon emissions from Amazon deforestation
Article first published online: 21 JUL 2009
Copyright 2009 by the American Geophysical Union.
Geophysical Research Letters
Volume 36, Issue 14, July 2009
How to Cite
(2009), Boosted carbon emissions from Amazon deforestation, Geophys. Res. Lett., 36, L14810, doi:10.1029/2009GL037526.
- Issue published online: 21 JUL 2009
- Manuscript Accepted: 22 JUN 2009
- Manuscript Received: 25 FEB 2009
Keywords: tropical deforestation; carbon emissions
Standing biomass is a major, often poorly quantified determinate of carbon losses from land clearing. We analyzed maps from the 2001–2007 PRODES deforestation time series with recent regional pre-deforestation aboveground biomass estimates to calculate carbon emission trends for the Brazilian Amazon basin. Although the annual rate of deforestation has not changed significantly since the 1990s (ANOVA, p = 0.3), the aboveground biomass lost per unit of forest cleared increased from 2001 to 2007 (183 to 201 Mg C ha−1; slope of regression significant: p < 0.01). Remaining unprotected forests harbor significantly higher aboveground biomass still, averaging 231 Mg C ha−1. This difference is large enough that, even if the annual area deforested remains unchanged, future clearing will increase regional emissions by ∼0.04 Pg C yr−1 – a ∼25% increase over 2001–2007 annual carbon emissions. These results suggest increased climate risk from future deforestation, but highlight opportunities through reductions in deforestation and forest degradation (REDD). | <urn:uuid:28728834-c196-4e48-aadc-e00d6ad9c14b> | 2.796875 | 346 | Academic Writing | Science & Tech. | 41.023294 |
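The abstract's figures can be sanity-checked with a line of arithmetic. All inputs below come from the abstract; the implied clearing area is our own back-of-envelope number, not a result from the paper:

```python
# Back-of-envelope check of the abstract's numbers.
MG_PER_PG = 1e9            # 1 Pg = 1e9 Mg

b_recent      = 201.0      # Mg C/ha lost per ha cleared in 2007
b_unprotected = 231.0      # Mg C/ha in remaining unprotected forest
extra_emissions_pg = 0.04  # Pg C/yr projected increase

delta = b_unprotected - b_recent                      # 30 Mg C/ha
implied_area_ha = extra_emissions_pg * MG_PER_PG / delta
print(implied_area_ha / 1e6)   # ~1.3 million ha cleared per year
```

An implied clearing rate on the order of a million hectares per year is broadly in line with reported Brazilian Amazon deforestation rates for that period, so the 0.04 Pg C/yr figure is internally consistent with the per-hectare biomass difference.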
Aug 19, 2009, 5:13 AM
Post #1 of 4
Hi! I'm trying to understand a perl script. There are some switches (I guess) used, and I can't seem to find their role/use.
-e and -s switches?
The code looks like:
my $filename ="doc";
my $filename1 = -s $filename;
while( -e $filename && $filename1 == -s $filename )
What is the role or use of -e and -s? | <urn:uuid:d6d7606b-4874-48b5-8254-4f0e8d2ec052> | 2.765625 | 113 | Comment Section | Software Dev. | 82.117582 |
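For reference: these are Perl's file test operators, not command-line switches. -e tests whether the file exists, and -s returns its size in bytes, so the while loop runs as long as the file still exists and its size has not changed. A rough Python equivalent of the loop condition, for comparison:

```python
# Python analogue of: while( -e $filename && $filename1 == -s $filename )
import os

def file_unchanged(path, size_at_start):
    # -e  -> os.path.exists;  -s  -> os.path.getsize
    return os.path.exists(path) and size_at_start == os.path.getsize(path)
```

A loop over this condition is a common idiom for waiting until a file stops growing (or disappears) before processing it.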
No one knows how a quantum Internet will work, as the idea is still in its infancy. The main difficulty is that quantum information is tied to a particular physical system. In classical communication, you can easily copy any signal, store it, and retransmit it later. In quantum communication, however, once you have measured the whole system, everything is gone. Coupling with the environment is also a serious problem for communication.
The purpose of a quantum Internet is to move quantum information around, which would enable tasks like distributing quantum keys and collaborating between different quantum computers (unlike classical computing, a quantum server can work on a job without ever learning what it is working on). There are two main approaches to quantum communication.
The first scheme uses quantum teleportation. Assume two parties share an unlimited supply of entangled pairs. To transmit quantum information, they only need to perform local operations and classical communication (LOCC). Transmitting one qubit requires a measurement involving one particle of an entangled pair, followed by sending two classical bits. This model requires a reliable source that generates a large number of entangled pairs and distributes them to both parties, and the two-qubit gate operations must be precise.
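The "one joint measurement plus two classical bits" bookkeeping can be checked in a few dozen lines. Below is a minimal sketch in plain NumPy (no quantum library; the qubit ordering, variable names, and random test state are my own choices): qubit 0 is the state to send, qubits 1 and 2 are the shared entangled pair, and after Alice's measurement and two bits of correction, Bob's qubit carries the original state.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def embed(gates, n=3):
    """Tensor a {qubit: gate} map into an n-qubit operator (qubit 0 = most significant)."""
    out = np.array([[1]], dtype=complex)
    for q in range(n):
        out = np.kron(out, gates.get(q, I))
    return out

def cnot(c, t, n=3):
    """CNOT built from control projectors: |0><0|_c ⊗ I + |1><1|_c ⊗ X_t."""
    P0 = np.diag([1, 0]).astype(complex)
    P1 = np.diag([0, 1]).astype(complex)
    return embed({c: P0}, n) + embed({c: P1, t: X}, n)

rng = np.random.default_rng(0)

# The unknown qubit Alice wants to send (qubit 0)
amp = rng.normal(size=2) + 1j * rng.normal(size=2)
psi_in = amp / np.linalg.norm(amp)

# Qubits 1 (Alice) and 2 (Bob) share the entangled pair (|00> + |11>)/sqrt(2)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi_in, bell)

# Alice's local operations: CNOT(0 -> 1), then Hadamard on qubit 0
state = embed({0: H}) @ (cnot(0, 1) @ state)

# Alice measures qubits 0 and 1; the two bits are the classical message
probs = np.abs(state) ** 2
outcome = rng.choice(8, p=probs / probs.sum())
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

# Collapse onto the measured branch and renormalise
keep = [i for i in range(8) if ((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1]
collapsed = np.zeros(8, dtype=complex)
collapsed[keep] = state[keep]
state = collapsed / np.linalg.norm(collapsed)

# Bob's corrections, conditioned on the two classical bits: X^m1, then Z^m0
if m1:
    state = embed({2: X}) @ state
if m0:
    state = embed({2: Z}) @ state

# Bob's qubit now carries the original state
psi_out = np.array([state[(m0 << 2) | (m1 << 1) | b] for b in (0, 1)])
fidelity = abs(np.vdot(psi_in, psi_out)) ** 2
```

For any input state the fidelity comes out as 1 up to floating-point error, which is the sense in which shared entanglement plus LOCC substitutes for sending the qubit itself.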
The second scheme is the direct transmission of qubits. Here the qubit is usually carried by a photon, for example in its polarization state. Generating the photon would likely involve transferring quantum information out of the internal state of a quantum computer and converting it back at the receiving end. This depends on how a quantum computer might actually work, which we don't know yet.
Both schemes require a "quantum router" to store and redirect qubits to the correct destination. One method is to slow light almost to a stop, so that it can be held in the quantum router for a while. Another approach is to transfer the qubit from the photon to an atomic state and convert it back later, which allows longer storage times.
As said before, it is unclear how the final quantum Internet will work; decoherence will probably be the main issue in deciding which schemes are practical. Perhaps one day we may even use neutrinos for interplanetary communication, since they interact very weakly with ordinary matter and so their quantum state could be preserved over light-years.
“Everything about it would be bad,” says Mark Hammergren, an astronomer at Adler Planetarium in Chicago, starting with the sad attempt to scoop some of that star candy up. Even though white dwarfs are fairly common throughout the universe, the nearest one is still 8.6 light-years away, roughly 81 trillion kilometers. So assuming you’ve got a light-speed spaceship, a bunch of books and videos to keep you amused for 8.6 years, and that the heat and radiation emanating from the star didn’t kill you on your approach, you might be able to get somewhere. “You’d have to get your sample—which would be very hard to carve out—without falling onto the star and getting flattened into a plasma,” Hammergren says. “And even then, the high pressure would cause the hydrogen atoms in your body to fuse into helium.” (This type of reaction, by the way, is what triggers a hydrogen bomb.)
Now that you have your super-volatile sample and have somehow removed it from the superdense, high-pressure star, you’ve got the problem of containing it in Earth’s low-pressure environment, which would cause it to explode if not encapsulated properly. Let’s just say it didn’t blow up or vaporize your entire being (the teaspoon-size sample’s temperature would be somewhere between about 5,538 °C and 55,538 °C) and you somehow got it to your kitchen table. Even then, it’d be pretty damn hard to feed yourself: a single teaspoon would weigh in excess of five tons!
“You’d pop it into your mouth and it would fall unimpeded through your body, carve a channel through your gut, come out through your nether regions, and burrow a hole toward the center of the Earth,” Hammergren says. “The good news is that it’s not quite dense enough to have a strong enough gravitational field to rip you apart from the inside out.” Ouch. If you observed your friend doing all of this and still wanted a taste yourself, but don’t want to travel the 8.6 light-years or die, you can always open your fridge, since it’s full of the stuff. Most of the elements that make up our bodies and everything we see around us were formed in the cores of stars. We fall in love with, play with, eat, and live on star poop.
How to yell across a solar system
The transceiver in the movie sends the data through space using electromagnetic energy. One kind of electromagnetic energy is the light we see with our eyes. But there are many other forms of electromagnetic energy. One form is radio waves. We can't see radio waves, but we know how to make them and we know how to detect them. We also know how to make them carry messages. It is radio waves that bring you radio and TV!
Changing the shape of the radio waves to make them carry information is called "modulating" the wave. A radio wave that has been modulated to carry information is called a "signal." Now the wave is more than electromagnetic energy. It is a message!
It is the transceiver's job to convert an ordinary radio wave into one that carries a message. | <urn:uuid:f2e27387-ceaf-4639-b1c6-b2b0b42d6e1e> | 3.296875 | 171 | Knowledge Article | Science & Tech. | 54.55227 |
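To make "modulating" concrete, here is a small sketch (plain NumPy; the frequencies and durations are arbitrary illustrative numbers, not from the article) of amplitude modulation, plus a crude envelope detector standing in for the receiving side of a transceiver:

```python
import numpy as np

fs = 48_000                          # samples per second
t = np.arange(0, 0.05, 1 / fs)       # 50 ms of signal

carrier = np.sin(2 * np.pi * 5_000 * t)          # the plain radio wave
message = 0.5 * np.sin(2 * np.pi * 200 * t)      # the information to send

# Modulation: reshape the carrier so its envelope follows the message
signal = (1 + message) * carrier

# A crude receiver: rectify, then smooth with a ~1 ms moving average
window = fs // 1000
recovered = np.convolve(np.abs(signal), np.ones(window) / window, mode="same")
```

The smoothed envelope `recovered` tracks `1 + message` (scaled by 2/π, the mean of a rectified sine), which is how a simple AM receiver turns the wave back into a message.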
The Latin name of all the major insect orders ends in “ptera”, which means “wing”, and for good reason! The majority of the thirty or so orders of insects are flighted arthropods*, and their wings help to define them both ecologically and taxonomically. It is even suggestive as to the course of their evolution!
For example, the most diverse insect orders are Coleoptera (beetles), Diptera (flies), Hymenoptera (bees, wasps, and ants), Lepidoptera (butterflies, skippers, and moths). These four orders are represented by about 600,000–795,000 species (according to Wikipedia). So I’ll briefly explain how wings help to identify these four orders, and I’ll throw in a fifth (because I think it is common and important), the Hemiptera (true bugs).
(NB: remember that the stereotypical insect anatomy includes four wings, a pair of forewings, and a pair of hindwings. It is important to know that before we start deviating from it)
Coleoptera The name of the beetle order means “sheathed wing”. It refers to the fact that the forewings on beetles have been modified into a hard carapace, or shell, which protects them. The front wings are now called “elytra”, and they have specialized hooks all along the edges that allow beetles to lock their wings together. When a beetle is threatened, it will close the elytra (folding its delicate membranous wings beneath), fold its legs under its body, tuck its antennae in, and hunker down until the danger is passed. A locked down beetle is like a tank; it is very hard to crack into.
Diptera The name of the fly order means “two wings” and refers to the fact that all flies have only two wings instead of four. Their hindwings have been modified into small, club-like appendages known as “halteres”. The halteres, aside from being a diagnostic feature of the order, give flies Supreme Flight Manoeuvrability, which allows them to flip upside down and land on the ceiling, or annoyingly dodge your swats.
Hymenoptera A little less helpfully, the name of the order of ants, bees, and wasps means “membrane wing”. Members of this order have four membranous wings, but that is not a diagnostic feature (given that other orders also have four membranous wings). Most hymenopterans have what is known as a “wasp waist”, a constriction between the thorax and the abdomen. The exception is the primitive hymenopterans known as sawflies (suborder Symphyta).
Lepidoptera The name of the order of butterflies, skippers, and moths means “scale wing”, referring to the often colourfully patterned scales that cover the membranous wings of these insects. The scales rub off like dust or powder if you touch the wings, which you may or may not have experience with from childhood. There are some notable exceptions (i.e. the clearwings).
Hemiptera The name of the order of true bugs means “half wing” and refers to those members of this order whose forewings are divided into both membranous and hard parts. The modified forewings are then known as “hemielytra” (partially hardened wings, by analogy with the elytra of the beetles). However, this order now includes members of what used to be a separate order, the Homoptera. These insects have four wholly membranous wings, and therefore do not fit the diagnostic feature of half wings. In other words, all insects with hemielytra are hemipterans, but not all hemipterans have hemielytra.
*save the springtails (Collembola)**, mantises (Mantodea), cockroaches (Blattodea), dragonflies and damselflies (Odonata), termites (Termitoidae), webspinners (Embiidina), bristletails (Microcoryphia), silverfish (Zygentoma), and chewing and sucking lice (Mallophaga and Anoplura), but we’ll save them for another post!
**some would argue that Collembola does not technically fall in the class insecta, but in the subphylum hexapoda. It is up for debate! | <urn:uuid:5ef58d38-8a50-4eb6-b536-e83d7bbc3836> | 4.03125 | 957 | Personal Blog | Science & Tech. | 42.405016 |
Introduction: XAML in WEB
A rich Internet application (RIA) is an entirely new kind of Web experience that is engaging, interactive, lightweight, and flexible. RIAs offer the flexibility and ease of use of an intelligent desktop application, and add the broad reach of traditional Web applications. This richer functionality may include anything that can be implemented in the client-side technology in use: drag and drop, sliders that change data, animations, graphics, and clients interacting asynchronously with the server. Some of the sites that employ RIA features are Gmail, Yahoo! Mail (beta), Flickr, and Popfly.
The advent of RIA technologies has introduced considerable additional complexity into Web applications. A traditional Web application built using only standard HTML, with its relatively simple software architecture and limited set of development options, is comparatively easy to design and manage. For the person or organization using RIA technologies to deliver a Web application, the additional complexity makes the application harder to design, test, measure, and support.
Aspects of the RIA architecture that complicate management processes are:
- Greater complexity makes development harder
- Asynchronous communication makes it harder to isolate performance problems
- Not all features are supported by a single technology, and there is a mismatch between the technologies used by designers and those used by developers (e.g., XUL, Java applets, Flash, Laszlo)
XAML provides an easy solution to all the above concerns. Microsoft Silverlight, which employs XAML, is a cross-browser, cross-platform plug-in for delivering the next generation of .NET-based media experiences and rich interactive applications for the Web.
What is XAML?
Extensible Application Markup Language (XAML, pronounced zammel) by Microsoft is a declarative language used to initialize structured values and objects. You can create visible user interface (UI) elements in the declarative XAML mark up. It is a case-sensitive language. XAML is used extensively in the .NET Framework 3.0 technologies, particularly in Windows Presentation Foundation (WPF), where it is used as a user interface mark up language to define UI elements, data binding, event handling and other features, and in Windows Workflow Foundation (WF), in which workflows themselves can be defined using XAML.
XAML is an XML-based language that is used to define graphical assets, user interfaces, behaviours, animations, and more. Basically XAML renders a rich UI. It was introduced by Microsoft as the mark up language used in Windows Presentation Foundation, a desktop-oriented technology. WPF allows for the definition of 2D and 3D objects, rotations, animations, and a variety of other effects and features rendered by the XAML file. XAML elements can map directly to Common Language Runtime (CLR) object instances whereas attributes can map to CLR properties and events on those objects. In typical usage, XAML files will be produced by visual design and developer tools, such as Microsoft Expression Blend, Microsoft Visual Studio, XAML Pad or the Windows Workflow Foundation (WF) visual designer.
Although XAML has been introduced as an integral part of WPF, the XAML standard itself is not specific to WPF (or even .NET). XAML can also be used to develop applications using any programming API and is in itself language independent. However, a key aspect of the technology is the reduced complexity needed for tools to process XAML, because it is simply XML. As XAML is simply based on XML, developers and designers are able to share and edit content freely amongst them without requiring compilation.
XAML for Web applications comes in the form of Silverlight. Microsoft Silverlight is a Web-based subset of WPF. During development it was named WPF/E, which stood for "Windows Presentation Foundation Everywhere". Silverlight 1.0 is based on XAML, JScript, and HTML; Silverlight 1.1 pairs XAML with .NET. The Silverlight subset enables Flash-like Web and mobile applications with the same code as Windows .NET applications. 3D features are not supported, but XPS, vector-based drawing, and hardware acceleration are included, bringing rich UI to Web sites. The XAML used by Silverlight differs from that used by WPF, in that the former is a Web-oriented subset of the full XAML available for the desktop.
From a user perspective, there will be a big jump in interface capability. Eventually users will find it difficult to recognize the difference between an online web application and a desktop application. A variety of media can now be incorporated seamlessly into the interface including audio, video, and 2D graphics.
XAML in Silverlight
With Silverlight one can create rich Internet applications with striking interfaces that integrate animations and video. Silverlight XAML works much like HTML: an HTML file is plain text containing information that tells the Web browser how to render the look and feel of a Web page, and a XAML file does the same thing, except that the Silverlight runtime, rather than the browser, interprets the instructions and does the rendering.
With 1.1, all the features and advantages that the .NET Framework has to offer are incorporated implicitly. The XAML layout mark up file (.xaml file) can be augmented by code-behind code, written in any .NET language, which contains the programming logic. It can be used to programmatically manipulate both the Silverlight application and the HTML page that hosts the Silverlight control. Silverlight ships with a lightweight class library which features, among other things, extensible controls, XML Web services, networking components, and LINQ APIs. This class library is a subset of, and is considerably smaller than, the .NET Framework's Base Class Library. Supported areas include string handling, regular expressions, input and output, reflection, collections, and globalization.
In XAML, you define elements using XML tags. At the root level of every Silverlight document is a Canvas tag, which defines the space on which UI elements will be drawn. A Canvas can have one or more children, including child Canvases that can create their own children. Children of a Canvas are positioned relative to their parent Canvas, not to the root Canvas.
Silverlight XAML supports a number of shapes that can be orchestrated into complex objects. The basic supported shapes are Rectangle, Ellipse, Line, Polyline, Polygon, and Path.
Brushes determine how objects are painted on the screen. An object's interior is painted using its Fill, and its outline is painted using its Stroke. There are solid-colour brushes, gradient brushes, and image brushes. Solid-colour brushes are implemented by setting a fixed colour on the Fill attribute. Gradient brushes are implemented by defining a gradient range and a number of gradient stops across a normalized space. Objects may also be painted using image brushes, and the image will be clipped or stretched as appropriate.
Text can be rendered in XAML using the TextBlock tag. This gives you control over aspects of the text such as content, font, size, wrapping, and more. In addition, Silverlight supports keyboard events that can be used to implement text input.
Transformations, Media, and Animations: XAML allows you to define a number of transformations on objects. RotateTransform rotates an element through a defined number of degrees, ScaleTransform can be used to stretch or shrink an object, SkewTransform skews it in a defined direction by a defined amount, TranslateTransform moves an object along a defined vector, and MatrixTransform can combine all of the above.
- RotateTransform: rotates an element by the specified Angle
- ScaleTransform: scales an element by the specified ScaleX and ScaleY
- SkewTransform: skews an element by the specified AngleX and AngleY
- TranslateTransform: moves (translates) an element by the specified X and Y amounts
Audio and video content is controlled using the MediaElement tag. This tag takes a Source attribute that points to the media to be played. An object defined using this tag provides many methods and events that control media playback:

<MediaElement AutoPlay="True" Width="400" Height="300"
    x:Name="Movie_wmv" Canvas.Left="80" Canvas.Top="40" Source="FullCut2.wmv" />
Animations in XAML are implemented by defining, along a timeline, how properties should change over time. Animation definitions are contained within a Storyboard. There are a number of different types of animation, including DoubleAnimation, which changes numeric properties; ColorAnimation, which changes colours and brushes; and PointAnimation, which changes two-dimensional values. These animations can be either linear or key-frame based. A linear animation changes smoothly along the defined timeline; a key-frame animation can move between discrete values along the way.
Why Use XAML?
XAML gives a rich UI in both cases, desktop application or Web application, and it bridges the gap between designers and developers. XAML on the Web is cross-platform by design.
It is a declarative language with flow-control support, meaning you can declare your UI controls and also control their flow. You can separate the UI definition from the run-time logic by using code-behind files, joined to the mark up through partial class definitions. In a declarative programming language, the developer (or designer) describes the behaviour and integration of components without the use of procedural programming. This allows someone with little or no traditional programming experience to create an entire working application with no programming. Although it is rare that an entire application will be built completely in XAML, the introduction of XAML allows application designers to contribute more effectively to the application development cycle. Using XAML to develop user interfaces also allows for separation of model and view, which is considered a good architectural principle. In XAML, elements and attributes map to classes and properties in the underlying APIs.
From an interface developer perspective, the most visible addition is a complete set of graphic widgets. From buttons, menus, and tree lists to panels, toolbars, and shape canvases, XAML includes every commonly used interface gizmo and widget known to developers. 2D vector graphics is a seamlessly integrated part of this larger tool bag. These widgets and graphic controls can be nested within an object tree. This provides a much more comprehensive interface capability than previously possible using combinations of HTML and SVG (Scalable Vector Graphics).
It enables you to create rich UI and animations, and to blend vector graphics with HTML to create compelling content experiences. Vector graphics is a mathematical means of representing pictures by drawing lines and shapes in relation to designated coordinates. The saved file contains instructions for drawing the image, which can be enlarged or reduced without losing quality. EPS, SVG, and DXF files are examples of vector graphics.
XAML in Silverlight makes it easy to build rich, interactive video-player experiences. You can blend its media capabilities with its vector-graphics support to create any type of media-playing experience you want. Silverlight includes the ability to "go full screen" to create a completely immersive experience, as well as to overlay menus, content, controls, and text directly on top of running video content (enabling DVD-like experiences). Silverlight also provides the ability to resize running video on the fly without requiring the video stream to be stopped or restarted. Silverlight embeds its own media playback, so it has no dependency on the media player installed at the client end.
XAML in Silverlight provides good separation of the code and the UI. XAML makes it easy to create, edit, and reuse graphical user interfaces for Web applications. One can generate XAML code from data on the server, and thus create a dynamic application.
XAML can render an interactive UI, since it includes a facility for attaching event handlers to objects in the mark up. Also, the XAML parser does not let you misspell an attribute: it emits an error message, unlike HTML, which offers no such debugging help.
Search engines, like Google, can scan XAML. They can't dive into compiled Flash applications, which offer almost the same UI features as XAML in Silverlight. Thus XAML makes Silverlight applications more findable.
Downsides of XAML in WEB
XAML requires a plug-in to be installed in your browser to render its contents. Without the plug-in, which includes the XAML parser, XAML is useless to the browser. There are also many missing features of XAML in Silverlight, since each added feature substantially increases the size of the plug-in that must be downloaded. Thus there is a trade-off between the features supported and the download size. XAML on the Web currently lacks control support (e.g., ListBox), styles and templates, and data binding, though the future Silverlight 1.1 release is said to include these.
What kind of applications will XAML enable in WEB?
XAML in Silverlight is perfect for the following Web application scenarios that encompass many real-world scenarios:
- Web media— Branded playback with events, video and marketing mix, dynamic videos with ads, audio playback, and so forth
- Rich islands on a page (mini apps)— Casual games and gadgets
- Web visualization elements— Navigation properties, data visualization, and ads
XAML in Silverlight is a breakthrough in delivering effective experiences to end users, enabling rich Internet applications that blend content, application logic and communications. As rich clients emerge to make the Internet more usable and enjoyable, XAML provides a solid architecture for developers embracing the future.
- 6th April 2008: Initial post
Instructional programs from prekindergarten through grade 12 should enable all students to recognize and use connections among mathematical ideas; understand how mathematical ideas interconnect and build on one another to produce a coherent whole; and recognize and apply mathematics in contexts outside of mathematics.
When students can see the connections across different mathematical content areas, they develop a view of mathematics as an integrated whole. As they build on their previous mathematical understandings while learning new concepts, students become increasingly aware of the connections among various mathematical topics. As students' knowledge of mathematics, their ability to use a wide range of mathematical representations, and their access to sophisticated technology and software increase, the connections they make with other academic disciplines, especially the sciences and social sciences, give them greater mathematical power.
Students in grades 9–12 should develop an increased capacity to link mathematical ideas and a deeper understanding of how more than one approach to the same problem can lead to equivalent results, even though the approaches might look quite different. (See, e.g., the "counting rectangles" problem in the "Problem Solving" section in this chapter.) Students can use insights gained in one context to prove or disprove conjectures generated in another, and by linking mathematical ideas, they can develop robust understandings of problems.
The following hypothetical example highlights the connections among what would appear to be very different representations of, and approaches to, a mathematical problem.
The students in Mr. Robinson's tenth-grade mathematics class suspect they are in for some interesting problem solving when he starts class with this story: "I have a dilemma. As you may know, I have a faithful dog and a yard shaped like a right triangle. When I go away for short periods of time, I want Fido to guard the yard. Because I don't want him to get loose, I want to put him on a leash and secure the leash somewhere on the lot. I want to use the shortest leash possible, but wherever I secure the leash, I need to make sure the dog can reach every corner of the lot. Where should I secure the leash?"

After Mr. Robinson responds to the usual array of questions and comments (such as "Do you really have a dog?" "Only a math teacher would have a triangle-shaped lot, or notice that the lot was triangular!" "What type of dog is it?"), he asks the students to work in groups of three. All their usual tools, including compass, straightedge, calculator, and computer with geometry software, are available. They are to come up with a plan to solve the problem.
Jennifer dives into the problem right away, saying, "Let's make a sketch using the computer." With her group's agreement, she produces the sketch in figure 7.36.
Before moving on to work with other groups, Mr. Robinson works with the members of Jennifer's group on clarifying their ideas, using more-standard mathematical language, and checking with one another for shared understanding. Jennifer clarifies her idea, and the group decides that it seems reasonable. They set a goal of finding the position for D that results in the line segments DA, DB, and DC all being the same length. When Mr. Robinson returns, the group has concluded that point D has to be the midpoint of the hypotenuse; otherwise, they say, it could not be equidistant from B and C. (Mr. Robinson notes to himself that the group's conclusion is not adequately justified, but he decides not to intervene at this point; the work they will do later in creating a proof will ensure that they examine this reasoning.)
Small-group conversations continue until several groups have made observations and conjectures similar to those made in Jennifer's group. Mr. Robinson pulls the class back together to discuss the problem. When the students converge on a conjecture, he writes it on the board as follows:
He then asks the students to return to their groups and work toward providing either a proof or a counterexample. The groups continue to work on the problem, settling on proofs and selecting group members to present them on the overhead projector. As always, Mr. Robinson emphasizes the fact that there might be a number of different ways to prove the conjecture.
Remembering Mr. Robinson's mantra about placing the coordinate system to "make things eeeasy," one group places the coordinates as shown in figure 7.37a, yielding a common distance of √(a² + b²). Alfonse, who is explaining this solution, proudly remarks that it reminds him of the Pythagorean theorem. Mr. Robinson builds on that observation, noting to the class that if the students drop a perpendicular from M to AC, each of the two right triangles that result has legs of length a and b; thus the lengths of the hypotenuses, MC and MA, are both √(a² + b²).
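The coordinate argument is easy to sanity-check numerically. This sketch (plain Python; the point names match the figure, and the random sampling is my own addition) verifies over many random right triangles that the midpoint of the hypotenuse is the same distance √(a² + b²) from all three vertices:

```python
import random

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

random.seed(1)
for _ in range(1000):
    a, b = random.uniform(0.1, 10), random.uniform(0.1, 10)
    A, B, C = (0.0, 0.0), (0.0, 2 * b), (2 * a, 0.0)   # right angle at A
    M = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)          # midpoint of hypotenuse BC
    r = (a * a + b * b) ** 0.5                          # the claimed common distance
    assert all(abs(dist(M, P) - r) < 1e-9 for P in (A, B, C))
```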
Jennifer's group returns to her earlier comment about the three points A, B, and C being on a circle. After lengthy conversations with, and questions from, Mr. Robinson, that group produces a second proof based on the properties of inscribed angles (fig. 7.37b). Pedro presents his group's solution showing how they constructed a rectangle that includes the three vertices of the right triangle (fig. 7.37c) and reasoned about the properties of the diagonals of a rectangle.
Anna presents a solution using transformational geometry (fig. 7.37d). Since M and M' are the midpoints of their respective segments, the triangle MAM' is similar to the triangle BAB', with each of the sides of the smaller triangle half the length of the corresponding side of the larger triangle. The same relationship holds for triangles BMC and BAB'. Using this fact, and the fact that BAB' is isosceles (one half reflects onto the other), Anna shows that triangle MAM' is congruent to triangle CMB, from which it follows that CM and MA are the same length.
Mr. Robinson congratulates the students on the quality of their work and on the variety of approaches they used. He points out that some basic mathematical ideas, such as congruence, were actually part of the mathematics in a number of their solutions, and that some of their thinking, such as Alfonse's comment about the Pythagorean theorem, highlighted connections to other mathematical ideas. Taking a step backward to reflect, the students begin to see how different approaches (using coordinate geometry, Euclidean geometry, and transformational geometry) are all connected. Mr. Robinson notes that it is good to have all these ways of thinking in their mathematical "tool kit." Any one of them might be the key to solving the next problem they encounter.
Although the students learned a great deal from working on the problem, the class was not yet finished with it. Mr. Robinson had selected this problem for the class to work on because it supports a number of interesting explorations and because the students would be exploring the properties of triangles and circles as they worked on it. And, indeed, as the students worked on the problem, they remarked that they were "seeing circles everywhere." (The following discussion is inspired by Goldenberg, Lewis, and O'Keefe.)

One group decides to look at the set of all the right triangles they can find, given a fixed hypotenuse. A group member starts by constructing a right triangle with the given hypotenuse and then dragging the right angle (fig. 7.38a). Another group decides to fix the position of the right angle and look at the set of right triangles whose hypotenuses are the given length (fig. 7.38b). They observe that the plot of the midpoints of the hypotenuses of the right triangles appears to trace out the arc of a circle. At first the students are ready to dismiss the circular pattern as a coincidence. But Mr. Robinson, seeing the potential for making a connection, asks questions such as, "Why do you think you get that pattern?" and "Does the circle in your pattern have anything to do with the circle in Jennifer's group's solution?" As the groups begin to understand Mr. Robinson's questions, they begin to see the connections among the circles in their new drawings, the definition of a circle, and the fact that their problem deals with points that are equally distant from a third point.
Mr. Robinson adds a final challenge for homework: can the students connect this problem (or problems related to it) to real-world situations or to other mathematics? The students create posters illustrating the mathematical connections they see. Most of the posters depict situations similar to the original problem, in which something, for some reason, needs to be positioned the same distance from the vertices of a right triangle. One group, however, creates an experiment that they demonstrate for the class in one of the dark, windowless rooms in the building. They put on the floor a large sheet of white chart paper with a right triangle drawn on it, place candles (all of the same height) at each vertex, and stand an object shorter than the candles inside the triangle. The class watches the shadows of the object change as one of the group members moves it around inside the triangle. The three shadows are of equal length only when the object is placed at the midpoint of the hypotenuse, a phenomenon that delights both Mr. Robinson and his students. This activity concludes the discussion of right triangles, but it is far from the end of the class's work. Mr. Robinson reminds the students of the problem that started their discussion and asks them how the problem might be extended. "After all," he says, "not all backyards have right angles or are triangular in shape." This comment sets the stage for abstracting and generalizing some of their work, and for making more connections.
The story of Mr. Robinson's classroom indicates many of the ways in which teachers can help students seek and make use of mathematical connections. Problem selection is especially important because students are unlikely to learn to make connections unless they are working on problems or situations that have the potential for suggesting such linkages. Teachers need to take special initiatives to find such integrative problems when instructional materials focus largely on content areas and when curricular arrangements separate the study of content areas such as geometry, algebra, and statistics. Even when curricula offer problems that cut across traditional content boundaries, teachers will need to develop expertise in making mathematical connections and in helping students develop their own capacity for doing so.
One essential aspect of helping students make connections is establishing a classroom climate that encourages students to pursue mathematical ideas in addition to solving the problem at hand. Mr. Robinson started with a problem that allowed for multiple approaches and solutions. While the students worked the problem, they were encouraged to pursue various leads. Incorrect statements weren't simply judged wrong and dismissed; Mr. Robinson helped the students find the kernels of correct ideas in what they had said, and those ideas sometimes led to new solutions and connections. The students were encouraged to reflect on and compare their solutions as a means of making connections. When they had done just about everything they were able to do with the given problem, they were encouraged to generalize what they had done. Rich problems, a climate that supports mathematical thinking, and access to a wide variety of mathematical tools all contribute to students' ability to see mathematics as a connected whole.
Copyright © 2000 by the National Council of Teachers of Mathematics. | <urn:uuid:339148d1-27cd-42b9-9ed4-a3d577200b99> | 3.96875 | 2,380 | Tutorial | Science & Tech. | 49.405815 |
The Ultimate AutoLisp Tutorial
What is a "car of a cdr"?
Beginner - You know how to spell AutoLisp, and that is about it. From opening Notepad and writing your first program to saving it, executing it, and checking the variables inside AutoCAD, I will cover it all, step by step.
Intermediate - Let's go over the basic AutoLisp functions. What is a "car of a cdr"?
Advanced - Loops and conditional statements.
Extreme - Answers to the tough questions, with a few examples. Work in progress...
Dialog Control Language - A tutorial for creating AutoLisp and DCL code.
All questions/complaints/suggestions should be sent to JefferyPSanders.com
Last Updated May 1st, 2013
Copyright 2002-2013 JefferyPSanders.com. All rights reserved. | <urn:uuid:3e9383e1-a433-4f0c-9bce-af5bad5bccf9> | 3.265625 | 198 | Tutorial | Software Dev. | 60.160763 |
In 1995-1996, John Goold was gathering data on the daily activity patterns of Odonata. He made collections at various times of the day and in different weather conditions to help determine under what conditions dragonflies fly. John also used data collected by Dirk Westfall and Chris Todd. He found strong correlations of dragonfly activity with light intensity and temperature (more dragonflies are active on warm or sunny days).
John Goold collecting on the Little Muskingum River at Lane's Farm.
You know there are no such things as cross-browser layers, because the
layer object only exists in a Netscape Navigator version 4 or later
browser. However, Internet Explorer is rich enough to present something
of its own that resembles layers. This article discusses problems with
writing code for layers in both browsers and presents some solutions.
A layer is just another page of HTML tags. Unlike the traditional HTML
page, however, a layer can be placed on top of another page and
positioned exactly at the coordinate of your choice in the browser
window. You can have any number of layers, and by changing the z-order
of those layers, you can determine which layer will be displayed. When
programming layers, there are several issues you need to address.
Layers are possible thanks to the Dynamic HTML specifications in version
4 of Netscape Navigator and Internet Explorer. Unfortunately, as usual,
Microsoft and Netscape worked faster than the standard body could
produce standards for DHMTL. As a result, the resulting DHTML object
models for both browsers could not be more different. In fact, when you
want something to mimic a cross-browser layer, you need two sets of
code, one for each browser. By detecting the type of browser, you can
force a browser to run the appropriate code.
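In practice, the branching was usually done with object detection rather than by parsing the user-agent string. The sketch below is a minimal illustration; the function name is invented, and the document object is passed in as a parameter so the logic can be exercised outside a browser:

```javascript
// Object detection: Navigator 4 exposes document.layers,
// Internet Explorer 4+ exposes document.all.
// Older browsers expose neither and get the layer-less code path.
function detectLayerModel(doc) {
  if (doc.layers) return "nn4";   // Netscape Navigator 4 layer model
  if (doc.all) return "ie4";      // Internet Explorer 4+ object model
  return "none";                  // serve the layer-less version
}
```

In a real page you would call `detectLayerModel(document)` once at load time and branch on the result.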
Layers only work in Netscape Navigator version 4 and above or Internet
Explorer version 4 and above. Users with older browsers should
still be able to surf the layer-less version of your site comfortably.
In fact, there shouldn't be any noticeable difference.
The issues in the second point disappear if you don't mind having three
versions of each page and you are using a processing engine such as ASP
or JSP that can detect the user browser type when the HTTP request
comes. But, you know, using ASP or JSP to output plain HTML is much
slower because the page will be processed by the ASP engine or JSP
container before being sent as an HTTP response. Moreover, having three
versions of each page (one for Netscape Navigator version 4 and above,
one for IE 4 and above, and one for older browsers) presents awful
maintenance issues. A minor modification to the page will require you to
work on three different files.
In parts one and two of this article, you'll see how to use layers to
make your web site more attractive. The most popular application of
layers is for creating submenus that show up when you move the mouse
pointer over a menu. However, before you start with the code, you should
familiarize yourself with the <div> tag, a tag that plays a critical role when you work with layers.
Before you start working with layers, you should be familiar with the
<div> tag. The <div> tag is used to define an
area of the page, or document division. Anything between the opening and
the closing tag is referred to as a single item.
Introduced in the HTML 3.2 standard, the <div> tag does not
assign any particular style or structure to the text; it just
allocates an area. The <div> tag is a block-level element:
it can be used to group other block-level elements, but it can't be used
within paragraph elements.
The <div> tag can have the following attributes: align, class, dir, id, lang, style and title. The id and style attributes are
of importance in creating the layer effect.
The id attribute gives a layer a unique identifier that is useful in the programming. The style attribute controls the position, visibility, and the z-order of a page division. The properties and possible values of
the style attribute are given in the table below. These properties are
part of the Cascading Style Sheet -- Positioning (CSS-P), an addition to
the CSS1 syntax for specifying an exact location on the page where an
HTML element is to appear.
The properties and values of the style attribute:
position -- absolute | relative
left -- pixels relative to the left of the containing element
top -- pixels relative to the top of the containing element
visibility -- visible | hidden | inherit
z-index -- layer position in stack (integer)
The visibility property is the most important for cross-browser layer
control because with it you can show and hide a layer. Note that Netscape
Navigator 4 and above uses a different value for visible: show.

To create a layer, just write normal HTML tags and put them inside
<div> tags that use the attributes described above.
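The difference in visibility values can be hidden behind a single helper. This is only a sketch: the helper name is invented, and the document object is taken as a parameter so the function can be tested outside a browser (a real page would pass the global document):

```javascript
// Show or hide a layer in either browser's object model.
// NN4 wants visibility = "show"/"hide"; IE4 wants "visible"/"hidden".
function setLayerVisibility(doc, id, visible) {
  if (doc.layers) {
    // Netscape Navigator 4: layers are reached through document.layers
    doc.layers[id].visibility = visible ? "show" : "hide";
  } else if (doc.all) {
    // Internet Explorer 4+: elements are reached through document.all
    doc.all[id].style.visibility = visible ? "visible" : "hidden";
  }
}
```

Older browsers fall through both branches, so the call is safely a no-op for them.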
Your layer does not need a <body> tag because a layer is
always in the <body>. The <div> tag could
appear at the top of the page, right after the <body> tag,
or near the end of the file before the closing </body>
tag. You can have as many layers as you want, as long as they all have
unique IDs. Inside each tag, you can put anything that is legal in an
HTML body, though you will probably want to use a table for alignment
and other formatting.
Browsers prior to version 4 don't know how to render
<div>. Fortunately, they simply ignore the tags without complaint
and display their contents normally. As you'll see later, this is
a boon for us.
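Putting the pieces together, a single layer might look like the markup below. The id, coordinates, and link targets are invented for illustration:

```html
<body>
  <!-- an absolutely positioned, initially hidden layer -->
  <div id="submenu1"
       style="position: absolute; left: 120px; top: 40px;
              visibility: hidden; z-index: 2">
    <table>
      <tr><td><a href="page1.html">Item one</a></td></tr>
      <tr><td><a href="page2.html">Item two</a></td></tr>
    </table>
  </div>
</body>
```

A version 3 browser ignores the <div> and its style attribute entirely and simply renders the table in the normal document flow.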
In part two next week, I'll show you how to display and hide these
layers. Until then, happy scripting! | <urn:uuid:0941f129-267a-4da1-a238-19fbdc603d6a> | 2.78125 | 1,200 | Tutorial | Software Dev. | 52.320821 |
This is a somewhat outdated term used to refer to a sub-interval of the Holocene period from 5000-7000 years ago during which it was once thought that the earth was warmer than today. We now know that conditions at this time were probably warmer than today, but only in summer and only in the extratropics of the Northern Hemisphere. This summer warming appears to have been due to astronomical factors that favoured warmer Northern summers, but colder Northern winters and colder tropics, than today (see Hewitt and Mitchell, 1998; Ganopolski et al, 1998). The best available evidence from recent peer-reviewed studies suggests that annual, global mean warmth was probably similar to pre-20th century warmth, but less than late 20th century warmth, at this time (see Kitoh and Murakami, 2002).
More information about the so-called “Mid-Holocene Optimum” can be found here.
Ganopolski, A., C. Kubatzki, M. Claussen, V. Brovkin, and V. Petoukhov, The Influence of Vegetation-Atmosphere-Ocean Interaction on Climate During the Mid-Holocene, Science, 280, 1916-1919, 1998.
Hewitt, C.D. and J.F.B. Mitchell, A Fully Coupled GCM Simulation of the Climate of the Mid-Holocene, Geophys. Res. Lett., 25, 361-364, 1998.
Kitoh, A., and S. Murakami, Tropical Pacific Climate at the mid-Holocene and the Last Glacial Maximum simulated by a coupled ocean-atmosphere general circulation model, Paleoceanography, 17, 1-13, 2002. | <urn:uuid:0a09f47a-4d63-481b-98e9-374f03c5899b> | 3.84375 | 363 | Knowledge Article | Science & Tech. | 59.30454 |
Planck’s cooling system composite
May 26, 2009
In order to achieve its scientific objectives, Planck's detectors have to operate at very low and stable temperatures. The spacecraft is therefore equipped with the means of cooling the detectors to levels close to absolute zero (-273.15 °C), ranging from about -253 °C to only a few tenths of a degree above absolute zero.