For any number "a" (except zero), a^0 = 1.
Now I have a homework problem:
Simplify the expression: mn^0
So, I put in 1 for my answer, but when it was graded the teacher said that it was the wrong answer. How could it be wrong if any number to the power of zero is 1? Did I misread my book, or did the teacher's answer key make a mistake?
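The likely resolution is operator precedence: in mn^0 the exponent binds only to n, so the expression simplifies to m·1 = m rather than to 1. A quick sketch (the values m = 7 and n = 3 are arbitrary):

```python
# Exponentiation binds tighter than the implied multiplication,
# so "mn^0" parses as m * (n ** 0), not (m * n) ** 0.
m, n = 7, 3

as_written = m * n ** 0    # m * (n ** 0) = m * 1 = m
misreading = (m * n) ** 0  # (mn)^0 = 1 -- the tempting wrong reading

print(as_written)  # 7  (the simplified form is m)
print(misreading)  # 1
```

So the book's rule is right, but it applies only to the factor carrying the exponent.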
A derecho is a widespread and long-lived, violent convectively induced windstorm that is associated with a fast-moving band of severe thunderstorms usually taking the form of a bow echo.
Derechos are usually not associated with a cold front, but a stationary front.
Web edition: March 10, 2004
Print edition: March 13, 2004; Vol.165 #11 (p. 175)
"Tapping sun's light and heat to make hydrogen" (SN: 1/17/04, p. 46: http://www.sciencenews.org/articles/20040117/note14.asp) seems to be delivering good news for the environment: "Clean" hydrogen can be produced from water using solar energy. This seems to me, however, to be even more horrifying than the burning of fossil fuels, which I believe we will be able to survive quite well without, once we consume them all. Will we shift our voracious consumption to the extremely finite water supply? Will we pump the very basis of life out of the ground to burn, nonrenewably, in our cars? This sounds like the worst plan yet.
The political pseudoscience press strikes again ("Warming climate may slam many species," SN: 1/24/04, p. 62: http://www.sciencenews.org/articles/20040124/note15.asp). Now, we are told that by 2050, as many as 31 percent of species will be wiped out by a temperature increase of 0.8 to 1.7°C. I find this impossible to believe, in that these organisms are all presently surviving with diurnal, seasonal, yearly, and cyclical average-temperature fluctuations that exceed these numbers. Moreover, they'll be able to move to more hospitable climes during this extremely gradual change, should it indeed occur.
The researchers considered both an organism's climate requirements and its capability to migrate to suitable habitat in their analysis. They found that conditions might change too quickly for slow-spreading organisms, such as plants, to travel to new habitat, if indeed it exists. - S. Perkins
Once again, we see evidence that supports what we knew all along ("Sleeper Effects: Slumber may fortify memory, stir insight," SN: 1/24/04, p. 53: http://www.sciencenews.org/articles/20040124/fob5.asp). As my mother told me growing up, "Just sleep on it."
In regards to the picture accompanying "Pumping Carbon: Researchers watch nanofibers grow" (SN: 1/31/04, p. 69: http://www.sciencenews.org/articles/20040131/fob5.asp), it might be helpful to remind readers that such colorful depictions of nanoscale materials are slightly fanciful. These structures exist in realms so small that visible light, and therefore color, has little meaning.
Portland State University
Pluto is a small, mysterious world, deep in the cold, dark recesses of the distant outer solar system. It was named a planet when it was discovered in 1930, but that designation has been in dispute for some years now. The issue was settled in August of 2006, when the International Astronomical Union reclassified Pluto as a dwarf planet.
The reclassification is a result of new and better information that, in turn, came from improved technology and observing methods. It’s a beautiful example of the scientific process in action. Science is not a static body of facts, but an active process subject to constant revision. Theories are tried and tested. If they fail, they are discarded. Old information sometimes turns out to be inadequate. New information can change our understanding of the world (and universe) around us. It helps to remember that the history of astronomy is a history of changing worldviews as a result of new and better data.
And we want the science process to continue. Refining the classification of solar system objects does not change the nature of Pluto. It is still a small, rock and ice body far away from the Sun. We don’t know a lot about Pluto. We want to lessen the mystery and learn more about this distant world. In the process we’ll learn more about the rest of the solar system and our history in it.
But it’s not easy to get there. Distance alone is not the whole problem. Pluto’s orbit lies at a 17-degree tilt (figure 1) from that of the eight major planets, making the trip more of a challenge. But just over a year ago, in January 2006, the New Horizons mission to Pluto was launched. And right about now, the largest of the Jovian planets is playing a key role in the flight of New Horizons.
On February 28 this year, Jupiter will give the New Horizons spacecraft a big gravitational push (see figure 2), speeding it on its way to a rendezvous with Pluto and its moon, Charon, in 2015. Jupiter will add another 9000 mph to the spacecraft’s speed, flinging it out toward its target at 52,000 mph. New Horizons is the fastest spacecraft ever sent into the solar system.
You can see where New Horizons is this month by finding Jupiter in the early morning. Look low in the southern dawn sky at about 6:00 a.m. That bright “morning star” in figure 3 is the planet Jupiter. The little space probe, invisibly tiny, is there too.
If you look at Jupiter with binoculars, you can see Jupiter’s four largest moons as tiny points of light. They are all bigger than Pluto. Now imagine trying to study something the size of those tiny moons but eight times farther away! That gives you an idea of how difficult it is to study objects in the outskirts of the solar system.
When New Horizons arrives at its destination in 2015, some of Pluto’s mysteries will be solved. As is often the case in science, answering one question leads to the formation of ten new questions. We have a lot of exciting new science to look forward to at Pluto!
Explains the natural and human-affected factors that determine the concentration of contaminants in groundwater, especially where the concentration is different at the surface than at depth, and where pumping varies with time.
We estimated in-place resources of 1.07 trillion short tons of coal in this area using a geology-based assessment methodology. Of that total, 162 billion tons was estimated to be recoverable, of which 25 billion tons was economically recoverable.
Using the Fischer assay measure of oil yield, we estimated a total of 1.44 trillion barrels of oil in three assessed units. There is currently no economic method to recover oil from this geologic unit.
The electric power generation potential from identified geothermal systems is 9,057 megawatts-electric (MWe), distributed over 13 states. Undiscovered resources are estimated to provide an additional 30,033 MWe.
Estimates of additional energy resources present within known oil and gas fields using statistical analysis that includes geology and engineering practices in addition to growth trends in production data.
|From: Martegan||17/06/00 2:07:14|
|Subject: How does mass warp space?||post id: 85991|
I've seen the diagrams of a bowling ball creating a depression on a rubber sheet as a description of mass warping space. Problem is, it would seem that the very thing causing this depression is gravity itself (turn the diagram up-side-down and the depression disappears pretty quickly, lol).
I understand that this is only meant as a reduced-dimensional illustration of what the boffins believe is actually happening, but it makes me wonder:
The force which causes the warping of the rubber sheet is gravity. What causes the warping of space?
How does mass warp space?
PS, I love this forum. Yez are all legends :-)
|From: Dr. Ed G (Avatar)||17/06/00 2:55:30|
|Subject: re: How does mass warp space?||post id: 85999|
|Gravity causes the warping of space...|
Think of an observer situated above the plane of the solar system watching an unpowered spaceship travelling from one side of the solar system to the other and travelling just close enough to the Sun to be dramatically influenced by it without going into orbit, falling into it, or being engulfed in solar flares emanating from it. As the spaceship passes close to the Sun the gravitational pull of the Sun will cause it to change direction - it causes the straight line of its trajectory to be bent from the point of view of our observer situated perpendicular to the plane of its path.
However, there is absolutely positively no measurement that an observer INSIDE the spaceship can make to determine whether or not the trajectory of the spaceship has been bent by the gravitational influence of the Sun. There is no means by which she might determine if the spaceship has been influenced by the gravity of the Sun, for example, because at all times it is in freefall towards the Sun. Furthermore, the observer INSIDE the spaceship is aware of Newton's Laws and that objects uninfluenced by external forces travel in straight lines. Therefore, since she cannot detect any external forces acting on her spaceship she must conclude that she has travelled in a straight line from one side of the solar system to the other.
So the external observer sees a path that is curved, and the internal observer sees a path that is straight. How do we resolve the two? Gravity curves the nature of space. Under the influence of gravity lines that are locally straight (i.e. the path followed by the unpowered spaceship) appear bent to the outside observer. Hence, gravity curves space.
Furthermore, the influence of gravity slows down time (as observed by an outside observer) also. So gravity also "curves" time.
Or, to put these two concepts together, gravity curves space-time.
Note, the ability or otherwise of an observer to determine the nature of the Universe and their own trajectory through it is crucial. What General Relativity achieves in introducing this concept that gravity curves space-time is to extend the idea that the laws of Physics should be entirely independent of the frame of reference of the observer, even in frames that are accelerating under the influence of gravity.
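Dr. Ed's fly-by scenario is easy to caricature in plain Newtonian terms. The sketch below uses made-up units (GM = 1, the craft starts 20 units out with impact parameter 1 and speed 2) and simply integrates an unpowered craft past a central mass, measuring how much its velocity direction turns as the outside observer would see it:

```python
import math

def flyby_deflection(GM=1.0, x=-20.0, y=1.0, vx=2.0, vy=0.0, dt=1e-3):
    """Integrate an unpowered craft past a point mass at the origin
    (semi-implicit Euler) and return the turn angle of its velocity."""
    v0 = math.atan2(vy, vx)              # initial heading
    while x < 20.0:                      # stop once well past the central mass
        r3 = (x * x + y * y) ** 1.5
        vx -= GM * x / r3 * dt           # acceleration points toward the origin
        vy -= GM * y / r3 * dt
        x += vx * dt
        y += vy * dt
    return abs(math.atan2(vy, vx) - v0)  # radians of bending, seen from outside

angle = flyby_deflection()
print(round(math.degrees(angle), 1))     # a noticeably bent trajectory
```

The craft never fires an engine, yet the external observer records a bent path; the onboard observer, in freefall throughout, measures nothing.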
|From: Martegan||17/06/00 3:05:20|
|Subject: re: How does mass warp space?||post id: 86001|
|Thanks Dr. Ed,|
My next question, then, would be, what is gravity?
My understanding is that 'gravity is a warping of space', but given your answer above, this becomes a little, em... circular.
|From: Dr. Ed G (Avatar)||17/06/00 3:23:03|
|Subject: re: How does mass warp space?||post id: 86004|
|Indeed, you beat me to the punch. Gravity is the influence of matter that warps space-time. Yes, it is circular, but when you get down to fundamentals things start to get circular by necessity. The thing is, we will never get to the bottom of everything because you can always ask the question, "Well, if this appears to be the bottom, what's below that?"|
At the end of the day, the best we can really do is to connect and group phenomena into various categories. So an electron is a particle that behaves like an electron, and that behaviour is typified by (a) ..., and (b) ..., and (c) ..., and (d) ..., and so on ...
That's about as good as I can do, I'm afraid. :-)
|From: Martegan||17/06/00 4:31:26|
|Subject: re: How does mass warp space?||post id: 86008|
I don't have time now to explore it fully, but I just visited your site, and I know I'm gonna love it. Big big fan of Winnie-ther-Pooh, and especially wise old Wol and his door-signs :-)
Re: your second post above... I've been lurking here for a while now, and you seem to be an "Avatar" who knows a thing or two about a thing or two ;). How satisfying did you find the above answer that you gave (ie "at the end of the day the best we can do...")?
I'm not having a go at you or anything, but "gravity causes warping of space-time" plus "gravity is a warping of space-time"... is this really where science is at in the 21st century?
|From: Dr. Ed G (Avatar)||17/06/00 5:04:57|
|Subject: re: How does mass warp space?||post id: 86009|
|I guess the immense size and complexity of the Universe can have a somewhat humbling effect on those of us that are (gladly) paid to explore it... I'm just happy to understand about it what I can. :-)|
However, like I said, fundamentally the problem is not one of the limitations of current knowledge but the limitations imposed by formal logic. No matter how deep you go, the question "Why?" demands that you try to go deeper. It is therefore logically impossible that we won't come up against facts that are just facts (like, potentially, the existence of electrons [I say "potentially" because although in over 100 years since the discovery of the electron we have not found anything more fundamental that makes an electron, that doesn't mean there isn't a deeper level of reality to them, just that we haven't found it yet]).
As for the situation with respect to gravity, I think it's fair to say that we don't really have as good an understanding of it as we do, say, electromagnetism or weak and strong nuclear forces, and that there is more to be uncovered. But let's take the example of electromagnetism. We could ask,
Q: Why do objects that are both negatively (or positively) charged repel?
A: Because negatively charged objects contain an excess of electrons and electrons repel electrons.
Q: Why do electrons repels each other?
A: Because they generate an electric field that causes that repulsion.
Q: Why does that electric field cause repulsion?
A: Because electrons emit virtual photons which cause a total quantum field in which the energy of the system of two electrons in free space is minimised by those electrons being as far apart as possible. The force is proportional to the gradient of the quantum field potential and because the flux of all particles radiating from an object (in this case virtual photons) will, by purely geometric arguments, decrease inversely as the square of the radius from the origin, the force will vary as the inverse square of the distance between the electrons.
Q: So how can you say that electrons emit virtual photons?
A: Because all experimental data which has been collected thus far is overwhelmingly consistent with that fact. (and like, REALLY overwhelmingly!)
Q: So why do electrons emit virtual photons?
A: Because the photon is the exchange boson which all electrostatically charged particles emit.
Q: So why do all electrostatically charged particles emit virtual photons?
A: Because they do.
Q: But why?
A: Because they do.
Q: But why?
A: Oh, sod this I'm going to the pub...
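The geometric step in the middle of that ladder - a fixed flux spread over ever-larger spheres giving an inverse-square force - can be checked numerically. A sketch (the emission rate N is arbitrary):

```python
import math

N = 1e6  # arbitrary number of (virtual) particles emitted per unit time

def intensity(r):
    """Flux per unit area through a sphere of radius r: everything the
    source emits crosses the sphere, whose area grows as r**2."""
    return N / (4 * math.pi * r ** 2)

# Doubling the distance quarters the intensity -- the inverse-square law.
print(round(intensity(2.0) / intensity(1.0), 6))  # 0.25
print(round(intensity(3.0) / intensity(1.0), 6))  # ~0.111
```

The same purely geometric argument is why both Coulomb's law and Newtonian gravity fall off as 1/r².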
|From: Dr. Ed G (Avatar)||17/06/00 5:41:06|
|Subject: re: How does mass warp space?||post id: 86010|
|p.s. ... oh and I thank you for the compliment! *blush* :-)|
p.p.s. ... I wouldn't bother getting too excited about the old homepage... it's mostly just a montage of links to other places on the web and doesn't have a whole lot of original content of its own :-/ ... someday, though, I do intend on jazzing it up somewhat (I'm actually hoping soon to construct an electronic version of a multimedia presentation on whales I did when I was an irritatingly precocious 11 year old... Dr. Eddie ... :-) hmmm ...)
|From: B.C. ®||17/06/00 8:32:49|
|Subject: re: How does mass warp space?||post id: 86018|
|Well, Dr Ed--you've just earned your money--excellent rundown and explanation!|
|From: James Richmond (Avatar)||17/06/00 11:07:15|
|Subject: re: How does mass warp space?||post id: 86041|
|Just to add a little to Dr Ed's gravity explanation:|
Fundamentally, matter and energy cause spacetime to curve, and this curvature is what we perceive as gravity. Einstein came up with the gravitational field equations in his General Theory of Relativity. The main equation has a bunch of terms on one side which describe the local spacetime geometry, and another bunch of terms on the other side describing the local matter/energy distribution.
The short, well worn summary of this is "Matter tells spacetime how to curve; spacetime tells matter how to move."
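For reference, the "main equation" mentioned above is usually written like this (standard notation; the assumption here is only the conventional form, with the Einstein tensor $G_{\mu\nu}$ packaging the spacetime-geometry terms on the left and the stress-energy tensor $T_{\mu\nu}$ describing the matter/energy distribution on the right):

```latex
G_{\mu\nu} \;=\; R_{\mu\nu} - \tfrac{1}{2}\,R\,g_{\mu\nu}
\;=\; \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

(This omits the cosmological-constant term $\Lambda g_{\mu\nu}$, which can be added to the left-hand side.)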
|From: David Jones||22/06/00 10:24:30|
|Subject: re: How does mass warp space?||post id: 88124|
|Isn't "matter" curved "space-time" (ie. aren't they the same thing in relativity theory)? And isn't the undetectable curved motion of the spacecraft due to the acceleration produced by the gravitational field of the Sun being equal and opposite to the inertial (straight-line) force of the craft (thus producing a state of balance)? And, finally, how is it, in relativity theory, that space-time appears from "nowhere" to expand the space-time between galaxies?|
|From: David Brennan||22/06/00 10:35:55|
|Subject: re: How does mass warp space?||post id: 88126|
|>>Isn't "matter" curved "space-time" (ie. aren't they the same thing in relativity theory)?<<|
AFAIK, relativity theory doesn't say anything about what matter *is*. But I have heard theories that matter is a "ultrafine" warp in spacetime, or some such stuff. I didn't understand what they meant and the person who told me couldn't explain it to me.
>> And isn't the undetectable curved motion of the spacecraft due to the acceleration produced by the gravitational field of the Sun being equal and opposite to the inertial (straight-line) force of the craft (thus producing a state of balance)?<<
Undetectable curved motion? So how do you know it's there if it's undetectable? Not sure what you mean with this question, are you talking about orbiting spacecraft?
>> And, finally, how is it , in relativity theory, that space-time appears from "nowhere" to expand the space-time between galaxies?<<
Pardon? I'm not clear about what you mean by spacetime appearing from nowhere.
|From: B.C. ®||22/06/00 10:51:33|
|Subject: re: How does mass warp space?||post id: 88138|
I'm only a layman, but I know that matter is a form of energy. Also, a little quote from a previous post in this thread by James: "matter tells space/time how to curve and space/time tells matter how to move". The only way I can explain space/time is that which emanated from the big bang. Matter came into existence out of fluctuations in the quantum foam, the same as those which caused the big bang. Granted, it is all hard to understand, but slowly but surely things are falling into place.
Sun’s energy could be used to zap dangerous asteroids
Described as a ‘directed energy orbital defense system’, DE-STAR is designed to harness some of the power of the sun and convert it into a massive phased array of laser beams that can destroy, or evaporate, asteroids posing a potential threat to Earth.
It can also be used to deflect an asteroid away from Earth, or into the sun – or to a convenient spot where it can be mined.
“This system is not some far-out idea from Star Trek,” says Gary B. Hughes of California Polytechnic State University. “All the components of this system pretty much exist today. Maybe not quite at the scale that we’d need – scaling up would be the challenge – but the basic elements are all there and ready to go.”
"The larger the system, the greater its capabilities. For example, a 100-meter version could start nudging comets or asteroids out of their orbits," Hughes said. But a 10-kilometer version could deliver 1.4 megatons of energy per day to its target, says Lubin, obliterating an asteroid 500 meters across in one year.
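As a sanity check, the "1.4 megatons per day" figure converts into an average beam power using the standard 4.184 × 10⁹ joules per ton of TNT (the DE-STAR sizing numbers are the article's; only the unit conversion below is mine):

```python
TON_TNT_J = 4.184e9          # joules per ton of TNT (standard convention)
SECONDS_PER_DAY = 86400

energy_per_day = 1.4e6 * TON_TNT_J          # 1.4 megatons, in joules
avg_power_watts = energy_per_day / SECONDS_PER_DAY

print(f"{energy_per_day:.2e} J/day")        # ~5.86e15 J/day
print(f"{avg_power_watts / 1e9:.0f} GW")    # roughly 68 GW of continuous output
```

That is on the order of the output of several dozen large power plants, which gives a feel for why "scaling up would be the challenge."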
A system this size could also boost the speed of interplanetary travel or power advanced ion drive systems for deep space travel.
Creating Objects is a very easy thing, if you understand the actual processing going on at the back end. Understanding how to create a new Object is essential for getting a good hold on the principles of OOPS and Java itself.
For creating a new Object in Java, we use the keyword “new”. Simple, isn’t it? For illustrating this concept, we create a class called BMW, and then make a new Object of this class in a new class called TestDrive. Here we will understand, step by step, what is involved in the creation of a new Object and its importance on the heap. Oh! I forgot to tell you, every Object in Java lives on the heap. Every time you create a new Object it takes some space on the heap, which must be reclaimed once the Object becomes unused or unreachable. For this we use Garbage Collection in Java.
Anyhow, here is your first class:

class BMW {
    String color;

    void showSpeed() {
        System.out.println("Speed is 100 miles/hr");
    }
}

class TestDrive {
    public static void main(String[] args) {
        BMW b = new BMW();
        b.color = "red";
        b.showSpeed();
    }
}
When the above class TestDrive runs, it prints the output “Speed is 100 miles/hr”. In the above program, we have created a class called BMW and in it defined its instance variables and methods. Then we have defined a separate class called TestDrive in which we make an Object of type BMW, we set the color of this BMW to red, and then we call the showSpeed() method, which displays the speed of the BMW.
Here the “.” operator in b.showSpeed() is used for calling the showSpeed function of the BMW. The main thing here is that we have created a new Object by using the keyword “new”.
When we say:-
BMW b = new BMW();
Here there are three steps involved, they are:-
1. Declaration of a reference variable
2. Creation of an Object
3. Assignment of the new Object to the reference variable.
I hope you got my point. Anyhow, I will explain it again. When we say:-

BMW b

we have just created a new reference variable of type BMW. And then we say:-

new BMW();

here we have just created a new Object of type BMW. And then we equate them by using the “=” operator.
So now, we have created a new Object and then assigned it a reference variable, and we have done this because now we can use this reference variable to perform operations on this Object.
Now I am pretty sure that you have got my point.
Black holes vibrate at a frequency whose pitch corresponds to the note of B flat. However, don’t be surprised if you can’t find any recordings of it; the note that black holes create is 57 octaves below any B flat that can be detected by human ears.
The above fact blows my mind for a number of reasons. First of all, it is astounding to think that the beauty of music can literally be found throughout the universe, existing in forms that we can’t even comprehend. Second of all, it unites the two seemingly disparate disciplines of music and science in a fascinating, seamless fashion.
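The "57 octaves below B flat" claim is easy to turn into numbers: each octave halves the frequency, so dividing concert B♭ (≈466 Hz) by 2⁵⁷ gives the black hole's pitch, and a period of roughly ten million years per oscillation. A sketch (the 466.16 Hz reference pitch and year length are standard values, not from the film):

```python
B_FLAT_HZ = 466.16          # B-flat above middle C, in hertz
OCTAVES_DOWN = 57
SECONDS_PER_YEAR = 3.156e7

freq_hz = B_FLAT_HZ / 2 ** OCTAVES_DOWN
period_years = (1 / freq_hz) / SECONDS_PER_YEAR

print(f"{freq_hz:.1e} Hz")          # ~3.2e-15 Hz
print(f"{period_years:.1e} years")  # on the order of 1e7 years per "note"
```

No wonder there are no recordings: one cycle of the waveform outlasts our species so far.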
I learned this bit of information about black holes while watching ‘The Music Instinct: Science and Song’ on Netflix over Thanksgiving break. The documentary, originally aired on PBS in 2009, attempts to explain the intensely subjective experiences of music listening and performance by pairing it with objective scientific fact. Far from losing its magic, music becomes even more mysterious after having watched the film.
If you think about it, music is really just a highly organized collection of sound waves. Furthermore, these sound waves are merely vibrations that sail through the air at a variety of frequencies. So how could something so simple as vibrations affect people in such vastly different ways? Also, how is it that it even affects you in the first place? These aren’t questions that are easily answered, as the many of the answers have yet to be discovered.
Science is not only able to explain the mysteries of music, but in doing so can also create novel and useful applications for it. For example, take your brainwaves. The electrical activity of your brain does not exist at a constant energy level; rather, the frequency of your brainwaves changes depending on your mental state. Making note of the connection between the frequency of brainwaves and the frequency of sound, some scientists have gone about creating binaural beats, music that is meant to mimic the neural activity of your brain during certain mental states. For instance, there are binaural beats that are meant to induce meditation, sleep, lucid dreaming, and even happiness. The idea that something so simple as music could change our mental state—and even our outlook—is phenomenal, and some early studies have given evidence for binaural beats’ anxiety-reducing properties.
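The acoustic arithmetic behind beats (binaural beats add a neural twist, but the frequencies work the same way) falls out of a trig identity: two tones at f₁ and f₂ sum to a carrier at the average frequency whose loudness swells at the difference frequency. The check below (tone frequencies and sample rate are arbitrary choices) verifies the identity numerically:

```python
import math

f1, f2 = 210.0, 200.0   # two nearby tones; the perceived beat is f1 - f2 = 10 Hz

for k in range(1000):
    t = k / 8000.0      # sample times at an 8 kHz sample rate (arbitrary)
    mixed = math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
    # sin(a) + sin(b) = 2 * sin((a+b)/2) * cos((a-b)/2):
    # a carrier at the mean frequency, amplitude-modulated at (f1 - f2) / 2.
    carrier = math.sin(2 * math.pi * (f1 + f2) / 2 * t)
    envelope = math.cos(2 * math.pi * (f1 - f2) / 2 * t)
    assert abs(mixed - 2 * envelope * carrier) < 1e-9

print("identity holds: beat frequency =", f1 - f2, "Hz")
```

So a 10 Hz "beat" is not a tone that exists in either ear's signal; it emerges from the interaction of the two, which is exactly the property binaural-beat audio tries to exploit.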
Other researchers continue to explore the practical medical uses of music in healing the brain. Researchers as close as those at the Music and Neuroimaging Laboratory at Beth Israel Deaconess Medical Center have begun to uncover some of music’s powerful properties. The lab, directed by neurologist Gottfried Schlaug, has conducted ample research on music’s ability to help patients with stroke-induced aphasia and to induce speaking in autistic children, among other things. The evidence is accumulating, and other current projects—such as research that looks into music’s effects on things like emotion and cognitive skill—show hope for future useful findings.
To bring up such research on music’s beneficial effects isn’t to say that music can mend all mental ailments. There are some mysteries of science and music that may never be solved – such as the infamously baffling Deep Ocean Bloop. But as our understanding of the links between the two grows—and as the technology allowing us to investigate these links improves—it will be interesting to see how music and science may come to explain and influence each other.
If you’re interested in the mysterious and endlessly fascinating connections between music and science, I highly suggest watching “The Music Instinct: Science and Song” (the video may be accessed in its entirety on Netflix), or reading This Is Your Brain On Music by Daniel J. Levitin (which, if you do not wish to purchase it, can be found at Mugar Library when I’m not busy renting it out).
As a parting gift: Have scientists really created the “most relaxing tune ever”? Judge for yourself.
That genetic differences account for a substantial part of biological variability is hardly in dispute, and the inclusion of genetics (and increasingly molecular genetics) was arguably the key contribution that created neo-Darwinism and led to the ‘modern synthesis’ of evolutionary theory.
Selective breeding programmes amply illustrate the contribution to the phenotype that can be effected by genetic variation. Thus in a very nice summary, Hill (2005) describes an experiment at the University of Illinois (Laurie et al. 2004) that has been running since 1896. In this experiment, scientists have selected (and bred) strains of maize (corn) that are either high or low in the content of oil in their kernels (a trait of considerable agronomic importance). Over the years, the initial 5% oil has changed to 20% in the high-oil lines, and has decreased almost to zero in the low-oil strains. (A similar experiment using protein as the trait of interest gave a similar result, save that the ‘low-protein’ lines retain about 5% of protein.) Genetic analysis (of the quantitative trait loci) showed that a great many parts of the genome contributed this variation in oil content, that the largest could account for a difference of only 0.3% in oil content, and most accounted for just 0.1 – 0.2%. Given this, it is possibly unsurprising that these (small) effects were seen as additive (i.e. independent); put another way, there was negligible epistasis observed in these populations in which all other genes were also segregating.
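The Illinois-style result - a large cumulative response built from many loci of tiny additive effect - is easy to caricature in a few lines. In the toy model below all numbers are invented (100 loci, each allele copy adding 0.1 percentage points of oil, truncation selection of the top fifth each generation); it only illustrates the qualitative point that the population mean climbs steadily even though no single locus matters much:

```python
import random

random.seed(1)
LOCI, POP, GENS = 100, 200, 25
EFFECT = 0.1            # percentage points of oil per allele copy (invented)
BASE = 5.0              # baseline oil content, %

def phenotype(geno):
    # Purely additive: each "high-oil" allele copy contributes EFFECT,
    # independently of every other locus (no epistasis).
    return BASE + EFFECT * sum(geno)

def offspring(p1, p2):
    # One allele from each parent per locus, transmitted at random
    # with probability equal to the parent's allele frequency at that locus.
    return [ (1 if random.random() < a / 2 else 0)
           + (1 if random.random() < b / 2 else 0)
             for a, b in zip(p1, p2) ]

pop = [[random.randint(0, 2) for _ in range(LOCI)] for _ in range(POP)]
start = sum(map(phenotype, pop)) / POP

for _ in range(GENS):
    pop.sort(key=phenotype, reverse=True)
    parents = pop[:POP // 5]                      # keep the top 20%
    pop = [offspring(random.choice(parents), random.choice(parents))
           for _ in range(POP)]

end = sum(map(phenotype, pop)) / POP
print(f"mean oil: {start:.1f}% -> {end:.1f}%")    # steady upward response
```

As in the real experiment, the response comes from shifting allele frequencies at very many loci at once, not from any single gene of large effect.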
Continue reading: When genetics meets the environment…the case of the missing heritability
The test package is meant for internal use by Python only. It is documented for the benefit of the core developers of Python. Any use of this package outside of Python’s standard library is discouraged as code mentioned here can change or be removed without notice between releases of Python.
The test package contains all regression tests for Python as well as the modules test.test_support and test.regrtest. test.test_support is used to enhance your tests while test.regrtest drives the testing suite.
Each module in the test package whose name starts with test_ is a testing suite for a specific module or feature. All new tests should be written using the unittest or doctest module. Some older tests are written using a “traditional” testing style that compares output printed to sys.stdout; this style of test is considered deprecated.
It is preferred that tests that use the unittest module follow a few guidelines. One is to name the test module by starting it with test_ and end it with the name of the module being tested. The test methods in the test module should start with test_ and end with a description of what the method is testing. This is needed so that the methods are recognized by the test driver as test methods. Also, no documentation string for the method should be included. A comment (such as # Tests function returns only True or False) should be used to provide documentation for test methods. This is done because documentation strings get printed out if they exist and thus what test is being run is not stated.
A basic boilerplate is often used:
import unittest
from test import test_support

class MyTestCase1(unittest.TestCase):

    # Only use setUp() and tearDown() if necessary

    def setUp(self):
        ... code to execute in preparation for tests ...

    def tearDown(self):
        ... code to execute to clean up after tests ...

    def test_feature_one(self):
        # Test feature one.
        ... testing code ...

    def test_feature_two(self):
        # Test feature two.
        ... testing code ...

    ... more test methods ...

class MyTestCase2(unittest.TestCase):
    ... same structure as MyTestCase1 ...

... more test classes ...

def test_main():
    test_support.run_unittest(MyTestCase1,
                              MyTestCase2,
                              ... list other tests ...
                             )

if __name__ == '__main__':
    test_main()
This boilerplate code allows the testing suite to be run by test.regrtest as well as on its own as a script.
The goal for regression testing is to try to break code. This leads to a few guidelines to be followed:
The testing suite should exercise all classes, functions, and constants. This includes not just the external API that is to be presented to the outside world but also “private” code.
Whitebox testing (examining the code being tested when the tests are being written) is preferred. Blackbox testing (testing only the published user interface) is not complete enough to make sure all boundary and edge cases are tested.
Make sure all possible values are tested including invalid ones. This makes sure that not only all valid values are acceptable but also that improper values are handled correctly.
Exhaust as many code paths as possible. Test where branching occurs, and tailor input to make sure as many different paths through the code as possible are taken.
Add an explicit test for any bugs discovered for the tested code. This will make sure that the error does not crop up again if the code is changed in the future.
Make sure to clean up after your tests (such as close and remove all temporary files).
If a test is dependent on a specific condition of the operating system then verify the condition already exists before attempting the test.
Import as few modules as possible and do it as soon as possible. This minimizes external dependencies of tests and also minimizes possible anomalous behavior from side-effects of importing a module.
Try to maximize code reuse. On occasion, tests will vary by something as small as what type of input is used. Minimize code duplication by subclassing a basic test class with a class that specifies the input:
class TestFuncAcceptsSequences(unittest.TestCase):

    func = mySuperWhammyFunction

    def test_func(self):
        self.func(self.arg)

class AcceptLists(TestFuncAcceptsSequences):
    arg = [1, 2, 3]

class AcceptStrings(TestFuncAcceptsSequences):
    arg = 'abc'

class AcceptTuples(TestFuncAcceptsSequences):
    arg = (1, 2, 3)
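The example above is schematic (mySuperWhammyFunction is a placeholder). A runnable sketch of the same pattern, using the built-in len as a stand-in function, might look like this; note the staticmethod wrapper, which keeps a function stored as a class attribute from being bound as a method:

```python
import unittest

class TestFuncAcceptsSequences(unittest.TestCase):
    # len stands in for the hypothetical mySuperWhammyFunction;
    # staticmethod() prevents it from binding as an instance method.
    func = staticmethod(len)
    arg = [1, 2, 3]

    def test_func(self):
        self.assertEqual(self.func(self.arg), 3)

class AcceptStrings(TestFuncAcceptsSequences):
    arg = 'abc'

class AcceptTuples(TestFuncAcceptsSequences):
    arg = (1, 2, 3)
```

Each subclass inherits test_func, so the same test body runs once per input type.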
test.regrtest can be used as a script to drive Python’s regression test suite. Running the script by itself automatically starts running all regression tests in the test package. It does this by finding all modules in the package whose name starts with test_, importing them, and executing the function test_main() if present. The names of tests to execute may also be passed to the script. Specifying a single regression test (python regrtest.py test_spam.py) will minimize output, printing only whether the test passed or failed.
Running test.regrtest directly allows you to set which resources are available for tests to use, via the -u command-line option. Run python regrtest.py -uall to turn on all possible resources. If all but one resource is desired (a more common case), a comma-separated list of resources that are not desired may be listed after all. The command python regrtest.py -uall,-audio,-largefile will run test.regrtest with all resources except the audio and largefile resources. For a list of all resources and more command-line options, run python regrtest.py -h.
Some other ways to execute the regression tests depend on what platform the tests are being executed on. On Unix, you can run make test at the top-level directory where Python was built. On Windows, executing rt.bat from your PCBuild directory will run all regression tests.
The test.test_support module has been renamed to test.support in Python 3.x.
The test.test_support module provides support for Python’s regression tests.
This module defines a number of exceptions (such as TestFailed and ResourceDenied) and constants (such as verbose and TESTFN).

The test.test_support module defines the following functions:
run_unittest(*classes)
Execute unittest.TestCase subclasses passed to the function. The function scans the classes for methods starting with the prefix test_ and executes the tests individually.
It is also legal to pass strings as parameters; these should be keys in sys.modules. Each associated module will be scanned by unittest.TestLoader.loadTestsFromModule(). This is usually seen in the following test_main() function:
def test_main():
    test_support.run_unittest(__name__)
This will run all tests defined in the named module.
A convenience wrapper for warnings.catch_warnings() that makes it easier to test that a warning was correctly raised with a single assertion. It is approximately equivalent to calling warnings.catch_warnings(record=True).
The main difference is that on entry to the context manager, a WarningRecorder instance is returned instead of a simple list. The underlying warnings list is available via the recorder object’s warnings attribute, while the attributes of the last raised warning are also accessible directly on the object. If no warning has been raised, then the latter attributes will all be None.
A reset() method is also provided on the recorder object. This method simply clears the warning list.
The context manager is used like this:
with check_warnings() as w:
    warnings.simplefilter("always")
    warnings.warn("foo")
    assert str(w.message) == "foo"
    warnings.warn("bar")
    assert str(w.message) == "bar"
    assert str(w.warnings[0].message) == "foo"
    assert str(w.warnings[1].message) == "bar"
    w.reset()
    assert len(w.warnings) == 0
New in version 2.6.
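The stdlib machinery that check_warnings() wraps can also be used directly. This sketch relies only on warnings.catch_warnings(record=True), so it runs without the test package:

```python
import warnings

# With record=True, raised warnings are collected into a list instead of
# being printed; simplefilter("always") disables duplicate suppression.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warnings.warn("foo")
    warnings.warn("bar")

messages = [str(w.message) for w in caught]
print(messages)  # -> ['foo', 'bar']
```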
captured_stdout() is a context manager that runs the with statement body using a StringIO.StringIO object in place of sys.stdout:

with captured_stdout() as s:
    print "hello"
assert s.getvalue() == "hello\n"
New in version 2.6.
The test.test_support module defines the following classes:
TransientResource(exc, **kwargs)
Instances are a context manager that raises ResourceDenied if the specified exception type is raised. Any keyword arguments are treated as attribute/value pairs to be compared against any exception raised within the with statement. Only if all pairs match properly against attributes on the exception is ResourceDenied raised.
New in version 2.6.
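To make that behavior concrete, here is a minimal reimplementation sketch. The class and exception names mirror test.test_support's TransientResource and ResourceDenied, but this is illustrative code, not the library's:

```python
import errno

class ResourceDenied(Exception):
    """Stand-in for test.test_support.ResourceDenied."""

class TransientResource(object):
    # All keyword arguments are attribute/value pairs that must match the
    # raised exception for it to be converted into ResourceDenied.
    def __init__(self, exc, **kwargs):
        self.exc = exc
        self.attrs = kwargs

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is not None and issubclass(exc_type, self.exc):
            if all(getattr(exc_value, name, None) == value
                   for name, value in self.attrs.items()):
                raise ResourceDenied("an optional resource is not available")
        return False  # non-matching exceptions propagate unchanged

# Usage sketch: a timeout inside the block surfaces as ResourceDenied.
try:
    with TransientResource(OSError, errno=errno.ETIMEDOUT):
        raise OSError(errno.ETIMEDOUT, "connection timed out")
except ResourceDenied as reason:
    print("skipped:", reason)
```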
EnvironmentVarGuard()
Class used to temporarily set or unset environment variables. Instances can be used as a context manager.
New in version 2.6. | <urn:uuid:41bc7ad7-ebb4-4e82-9553-9923503fcdfb> | 2.71875 | 1,888 | Documentation | Software Dev. | 52.42598 |
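A minimal sketch of how such a guard might work; the set/unset method names mirror the documented interface, but the real implementation lives in test.test_support:

```python
import os

class EnvironmentVarGuard(object):
    # Records the original value the first time a variable is touched,
    # then restores (or removes) it when the with-block exits.
    def __init__(self):
        self._saved = {}

    def set(self, envvar, value):
        self._saved.setdefault(envvar, os.environ.get(envvar))
        os.environ[envvar] = value

    def unset(self, envvar):
        self._saved.setdefault(envvar, os.environ.get(envvar))
        os.environ.pop(envvar, None)

    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        for envvar, old in self._saved.items():
            if old is None:
                os.environ.pop(envvar, None)
            else:
                os.environ[envvar] = old
        return False

with EnvironmentVarGuard() as env:
    env.set("MY_TEST_VAR", "temporary")   # hypothetical variable name
    assert os.environ["MY_TEST_VAR"] == "temporary"
assert "MY_TEST_VAR" not in os.environ    # restored on exit
```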
Earthquake Glossary - seismic moment
The seismic moment is a measure of the size of an earthquake based on the area of fault rupture, the average amount of slip, and the force that was required to overcome the friction sticking the rocks together that were offset by faulting. Seismic moment can also be calculated from the amplitude spectra of seismic waves.
Moment = µ A D
µ = shear modulus = 32 GPa in crust, 75 GPa in mantle
A = LW = area
D = average displacement during rupture | <urn:uuid:4c507c7c-32c5-4dc1-9540-151f2bdc460f> | 3.734375 | 113 | Structured Data | Science & Tech. | 26.090813 |
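Plugging illustrative numbers into the formula makes the scale concrete. The rupture dimensions below are hypothetical, and the moment-magnitude conversion is the standard Hanks-Kanamori relation, which is not part of the glossary entry above:

```python
import math

# Hypothetical crustal rupture (values are not from the glossary entry).
mu = 32e9      # shear modulus in the crust, Pa (32 GPa)
L = 100e3      # rupture length, m
W = 20e3       # rupture width, m
D = 2.0        # average displacement during rupture, m

A = L * W               # rupture area, m^2
moment = mu * A * D     # seismic moment, N*m
print("M0 = %.2e N*m" % moment)   # -> M0 = 1.28e+20 N*m

# Moment magnitude via the standard relation Mw = (2/3)(log10(M0) - 9.1),
# an addition for illustration, not part of the glossary definition.
Mw = (2.0 / 3.0) * (math.log10(moment) - 9.1)
print("Mw = %.2f" % Mw)           # -> Mw = 7.34
```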
ANSI Common Lisp 3 Evaluation and Compilation 3.3 Declarations
3.3.4 Declaration Scope

Declarations can be divided into two kinds: those that apply to the bindings of variables or functions; and those that do not apply to bindings.
A declaration that appears at the head of a binding form and applies to a variable or function binding made by that form is called a bound declaration; such a declaration affects both the binding and any references within the scope of the declaration.
A free declaration in a form F1 that applies to a binding for a name N established by some form F2 of which F1 is a subform affects only references to N within F1; it does not apply to other references to N outside of F1, nor does it affect the manner in which the binding of N by F2 is established.
The scope of a bound declaration is the same as the lexical scope of the binding to which it applies; for special variables, this means the scope that the binding would have had had it been a lexical binding.
Unless explicitly stated otherwise, the scope of a free declaration includes only the body subforms of the form at whose head it appears, and no other subforms. The scope of free declarations specifically does not include initialization forms for bindings established by the form containing the declarations.
Some iteration forms include step, end-test, or result subforms that are also included in the scope of declarations that appear in the iteration form. Specifically, the iteration forms and subforms involved are:

do, do*: step-forms, end-test-form, and result-forms.
dolist, dotimes: result-form.
do-all-symbols, do-external-symbols, do-symbols: result-form.
Introduction to Ice Science
Thousands of scientific papers have been written on ice science and many scientists have devoted their lives to the field. Many books have been written and several of them are still in print. Most of this work is related to polar ice, glacial ice, ice physics, etc.; however, there are a reasonable number of papers that are relevant to lake ice. This section of the website looks a bit deeper into why ice behaves the way it does. It tries to bridge the gap between the rigorous scientific world and what we see on the ice.
One of the best libraries anywhere for lake ice science is at the Cold Regions Research and Engineering Laboratory (CRREL), an Army Corps of Engineers laboratory located in Hanover, New Hampshire. CRREL scientists have produced a great deal of the scientific work that has been done on the ice we like to play on. Other hotbeds of ice research are, as will come as no surprise, Scandinavia, Russia and Canada.
Ref (1) River and Lake Ice Engineering, edited by G. Ashton, page 184, Water Resources Publications, LLC, 2010 printing.
Named after the astronomer Edwin Hubble, this long overdue space-based telescope was put into orbit…and it didn’t work, courtesy of a mirror ground to the wrong specification (the media often reported it as a lens, which was probably wrong: effective telescopes after Newton use mirrors, because chromatic aberration rendered lenses impractical). It served as fodder for late-night comedy for a while.
I was still in college and too involved with my girlfriend Carrie at the time to recall many details. Heck, I was surprised it was finally launched because I remember it being on the agenda since the long-delayed space shuttle Columbia finally got going in 1981. I think Hubble was scheduled for 1987 but the Challenger accident grounded everything for a while. ABC didn’t go into great detail; the focus was more on Americans returning to space for the first time since the Ford Administration.
NASA got it repaired in 1993, and then lots of great stuff was captured through it. I remember NPR doing a piece saying Hubble had put the universe at around 20 billion years old; this seems to coincide with Burbidge. With the successful operation of the telescope named after him, cosmological theories could be fine-tuned to see if he was right. Hubble turned out to be off by five billion then, and with COBE more like six and change (currently, it’s around 13-14 billion).
The Guardian has an OK piece they published a couple weeks ago here. It’s what reminded me to plug the anniversary.
As expected, Dr. Plait has a more informative and accurate series here. Click on the “next” link under the picture to proceed. His is excellent because Dr. Plait has actually worked with the telescope in his career. | <urn:uuid:675d9c9d-f307-48f3-b369-a426e4a61b1f> | 2.765625 | 353 | Personal Blog | Science & Tech. | 50.583511 |
3 The irreversible Universe
3.3 Statistical mechanics
There is a special branch of physics, called statistical mechanics, which attempts to bridge the gap between descriptions on the scale of molecules and thermodynamics. It recognizes that our knowledge of a complicated system, such as a glass of water, is inevitably incomplete so we are essentially reduced to making guesses. This may seem to be a terrible weakness, but statistical mechanics actually turns it into an advantage. It replaces precise knowledge of the motion of molecules by probabilities indicating how the molecules are likely to move, on average. It then goes on to estimate the probability of measuring a particular pressure, energy or entropy in the system as a whole.
This is rather like the trick pulled by opinion pollsters when they predict the result of a general election without knowing how every individual in the country intends to vote. Pollsters have a mixed reputation, but the calculations of statistical mechanics are much more clear cut. They turn out to provide predictions that are overwhelmingly likely to happen - so much so that they appear to be laws of Nature. The second law of thermodynamics is a case in point. From the viewpoint of statistical mechanics, the entropy of the Universe is not bound to increase, it is just overwhelmingly likely to do so. Perhaps 'heat death' will not be the end after all. After countless years of dull uniformity, a very unlikely (but possible) new fluctuation may occur with a lower than maximum entropy, and interesting things will start to happen again.
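A standard toy calculation (not from the text) makes "overwhelmingly likely" concrete: the probability that N independently moving gas molecules all happen to occupy the left half of a box at the same instant is (1/2)^N, which becomes astronomically small for even modest N:

```python
# Toy model: each molecule is equally likely to be in either half of the
# box, independently of the others.
def prob_all_left(n_molecules):
    return 0.5 ** n_molecules

for n in (10, 100, 1000):
    print(n, prob_all_left(n))
```

Already at a thousand molecules the probability is below 10^-300; a real gas has ~10^23 molecules, which is why a spontaneous entropy decrease is possible in principle but never observed.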
Focus Areas for Swift
Our galaxy, the Milky Way, is typical: it has hundreds of billions of stars, enough gas and dust to make billions more stars, and about six times as much dark matter as all the stars and gas put together. And it’s all held together by gravity. Like more than two-thirds of the known galaxies, the Milky Way has a spiral shape. At the center of the spiral, a great deal of energy is being generated, along with occasional vivid flares. Based on the immense gravity that would be required to explain the movement of stars and the energy expelled, astronomers conclude that at the center of the Milky Way is a supermassive black hole.
Name Meaning: Thick Headed Lizard
Distribution: North America (Montana, South Dakota, Wyoming)
Time Period: Late Cretaceous, 75 Ma - 65 Ma
Length: 15 ft (4.5 m)
Height: 17.5 ft (4.3 m)
Diet: Herbivore or Omnivore
Not much is known about Pachycephalosaurus because only a single nearly-complete skull has been uncovered, along with a few thick skull roofs. It would have been one of the last dinosaurs to live on the earth before the great extinction. Small remains have been found since the 1850s. In 1859 Ferdinand Vandiveer Hayden discovered a small bone fragment at the head of the Missouri River in Montana. It was described as armor from some reptile or armadillo-like creature. The species was named in 1931 by Charles Gilmore from a partial skull; P. wyomingensis is the only known species of the genus. In 1943 a nearly complete skull was discovered in the Hell Creek Formation in Montana, and the genus was officially defined by Barnum Brown and Erich Maren Schlaikjer.
The Pachycephalosaurus had a huge skull plate, providing cushion for its small brain. The plate was surrounded by small blunt spikes. It had a small muzzle with a pointed beak and tiny teeth. Its head was supported by either an S- or U-shaped neck. Its eyes were large and may have been capable of binocular vision, a great advantage. If it was like other pachycephalosaurids, it had long legs and small arms, was probably bipedal, and carried a long tail held rigid by tendons that slowly ossified over time. It would have been the largest known pachycephalosaur.
For a long time paleontologists thought that the thick skull was used for combat and competition: lowering the head and straightening the neck to become rigid, then headbutting others of its kind. Recent study has shown that this is probably not the case. First, no scars or other evidence of severe damage have been found in the skulls. Also, the head was probably supported by an S- or U-shaped neck, making it impossible to straighten the spine. The dome may instead have been used for display, or to flank a competitor and strike it from the side. With its small teeth it could not have eaten large, tough plant material and may have feasted on small leaves, fruits, or insects.
ScienceViews Writer: Jason Hamilton.
Copyright © 2005-2010 Calvin & Rosanna Hamilton. All rights reserved. | <urn:uuid:5e07622b-6d67-4afe-a62b-bf6205ce696b> | 3.65625 | 551 | Knowledge Article | Science & Tech. | 57.246658 |
User-mode Linux can be ported to other operating systems that have the necessary functionality. The ability to virtualize Linux system calls by intercepting and nullifying them in the host kernel is essential. An OS that can't do that can't run this port. Also needed is the ability to map pages into an address space in multiple places. Without this, virtual memory emulation can't be done. If those two items are available on a particular OS, then this port can probably run on it.
It would be convenient to have the equivalent of CLONE_FILES, which allows processes to share a file descriptor table without sharing address spaces. This is used to open a file descriptor in one process and have it be available automatically in all the other processes in the kernel. An example of this is the file descriptors that are used by the block device driver. They are opened when the corresponding block device is opened, usually in the context of a mount process. After that, any process that performs operations on that filesystem is going to need to be able to access the file descriptor. Without CLONE_FILES, there will need to be some mechanism to keep track of what file descriptors are open in what processes.
Beyond those, this port makes heavy use of standard Unix interfaces. So ports to other Unixes will be significantly easier than ports to non-Unixes. However, those interfaces have equivalents on most other operating systems, so non-Unix ports are possible. | <urn:uuid:a9d45f49-269d-4016-a5e1-bc19734f15b8> | 2.90625 | 297 | Documentation | Software Dev. | 40.244187 |
If all users of your instance of SQL Server speak the same language, you should pick the collation that supports that language. For example, if all users speak French, select a French collation. If the users of your instance of SQL Server speak different languages, you should pick a collation that best supports the requirements of the various languages. For example, if the users generally speak western European languages, choose Latin1_General collation.
When you support users who speak different languages, it is most important to use the Unicode data types nchar, nvarchar, and nvarchar(max) for all character data. Unicode avoids the code page conversion difficulties of the non-Unicode char, varchar, and text data types. Collation still makes a difference when you implement all columns using Unicode data types, because it defines the sort order for comparisons and the sorts of Unicode characters. Even when you store your character data using Unicode data types, you should pick a collation that supports most of the users in case a column or variable is implemented by using the non-Unicode data types.
SQL Server can support only code pages that are installed or supported by the underlying operating system. When you perform an action that depends on collations, the SQL Server collation used by the referenced object must use a code page either supported by or installed on the operating system running on the computer.
If the collation specified, or the collation used by the referenced object, uses a code page not supported by the Windows operating system, SQL Server issues an error. Your response to the error depends on the version of the Windows operating system installed on the computer. Windows 2000 and later versions support all the code pages that are used by SQL Server collations. Therefore, the error message will not occur. | <urn:uuid:d7ae83e0-dcf8-4aad-8729-6b4580e9fa72> | 2.71875 | 370 | Documentation | Software Dev. | 33.962105 |
|Image Credit: Markos Possel Mapos|
A few weeks ago word spread of the passing of F. Sherwood Rowland. The first notice was a press release on the UC Irvine website. He had an illustrious career as a chemist and won a Nobel Prize in Chemistry. I never had the pleasure of meeting him, but there are few in meteorology that didn’t know of him. Those who knew him can tell you more about his career and they can be found at Real Climate and Climate Progress. Rowland was a member of the National Academy of Sciences where there is a wonderful tribute to him.
Rowland, along with post-doctoral student Mario Molina, found that chlorofluorocarbons (CFCs), a man-made substance, could be highly destructive to ozone. One CFC molecule could destroy up to 100,000 ozone molecules. This could be damaging to the ozone layer even at concentrations on the order of parts per billion. The discovery led to their landmark paper published in Nature in 1974.
There were no observations at the time to confirm this, but it would not take long. The first sign of trouble was reported by British scientists making measurements in the Antarctic. Very low readings were being reported, but NASA could not confirm the observations from its satellite record. A software glitch was found to be preventing NASA from seeing the low readings. It turns out that the software was simply ignoring readings below 180 Dobson units, a measure of ozone concentration.
NASA was then able to confirm the British observations once the glitch was corrected and the data re-examined. What they found was a tremendous hole in the ozone layer over the southern polar region. The discovery sent shock waves through the scientific community and an international effort to study the phenomenon was organized.
The scientific team included Susan Solomon, an atmospheric chemist, who proposed that CFCs combined with an extremely cold stratosphere were in fact destroying the ozone layer. It wasn’t that the ozone was all gone, just severely depleted. This heightened concerns, and an effort to stop CFC production led to the Montreal Protocol in 1987. Ozone levels continued to decline even after CFC production came to a halt in 2000.
This led to the Nobel Prize for Rowland and his colleagues in 1995. It was an example of discovery in the laboratory applied to the real world. When the danger was recognized action saved the day.
The story above is a much simplified version of reality and reality is never simple or nice. Sherry Rowland endured much criticism for his findings often from those who either knew nothing about atmospheric chemistry or who belonged to the industry producing the chemical.
I hear some of these myths even today and this is where I became familiar with his work. One of the most frequent myths is that CFCs are too heavy to exist high in the atmosphere. Yes, CFC molecules are heavier than oxygen or nitrogen molecules. However, the atmosphere is not stratified by molecular weight. It is well mixed due to convection in the troposphere and any chemical released at the surface can make it high into the atmosphere. Molecules are no match for air currents. Much heavier substances like dust can make it into the stratosphere.
There are still a few scientists today that deny that CFCs cause ozone depletion. However, they have never substantiated their claim in peer-reviewed journals and are not taken seriously by scientists “in the know”.
The ninth lowest measurement for ozone over the Antarctic was observed in 2011. There was also an ozone hole observed over the Arctic region for the first time.
|Antarctic ozone hole in 2011. Image Credit: NASA|
|The Arctic ozone hole in 2011. Note the comparison with 2010, both taken on March 19. Image Credit: NASA|
I hear some of these same arguments today relating to carbon dioxide and climate change. The argument goes that CO2 is too heavy to be the cause of global warming in the troposphere. Again, the facts above dispel this myth. Or how about the recent statement that CO2 is not well mixed and cannot be the cause of global warming? There may be temporary local concentrations of molecules, especially near point sources. However, the atmosphere is well mixed through the troposphere. In fact, the concentrations are homogeneous up to the ozone layer.
Sherry Rowland had begun to study the effects of increasing greenhouse gases in recent decades. The news release from the National Academy of Sciences mentioned this. They went on to write:
Speaking to a 1997 White House roundtable on climate change, Rowland asked: "Is it enough for a scientist simply to publish a paper? Isn't it the responsibility of scientists, if you believe that you have found something that can affect the environment, isn't it your responsibility to actually do something about it, enough so that action actually takes place? …If not us, who? If not now, when?"
In 2008 he sat down with Andrew Revkin of Dot Earth and made the following comments:
During a break, I asked Dr. Rowland two quick questions. The first: Given the nature of the climate and energy challenges, what is his best guess for the peak concentration of carbon dioxide?
(Keep in mind that various experts and groups have said risks of centuries of ecological and economic disruption rise with every step toward and beyond 450 parts per million, with some scientists, most notably James Hansen of NASA, saying the long-term goal should be returning the atmospheric concentration to 350 parts per million, a level passed in 1988.)
His answer? “1,000 parts per million,” he said.
My second question was, what will that look like?
“I have no idea,” Dr. Rowland said. He was not smiling.
Joe Romm of Climate Progress has an idea. He points out that “readers of Climate Progress have an idea, since I have done my best to describe this grim future that scientists rarely model because they can’t believe humanity would be so self-destructive as to let it happen:
At 800 to 1000 ppm, the world faces multiple miseries, including:
- Sea level rise of 80 feet to 250 feet at a rate of 6 inches a decade (or more).
- Desertification of one third the planet and drought over half the planet, plus the loss of all inland glaciers.
- More than 70% of all species going extinct, plus extreme ocean acidification.”
F. Sherwood Rowland spent much of his career studying the chemistry of the atmosphere and raising the alarm about what the science said. Much like James Hansen and the multitude of climate scientists today trying to warn the world that the path we are on is unsustainable and destructive. Some call them alarmists, but their concern is backed up by the facts and the science. They are not only scientists, but also heroes. | <urn:uuid:0233281b-b2e1-4ae2-b9c1-61b732be06e3> | 3.171875 | 1,407 | Personal Blog | Science & Tech. | 52.497679 |
DNA Replication Rates in Lower Eukaryotes and Prokaryotes... why a difference...
dellaire at ATLAS.ODYSSEE.NET
Tue Jan 16 01:05:18 EST 1996
>for a biology paper,
>i need to solve the following question:
>WHY IS DNA REPLICATION IN PROKARYOTES NEARLY 100 TIMES FASTER THAN
>REPLICATION IN EUKARYOTES ?

The numbers I have quote:
E. coli at 500 nt/s and human fibroblast at 50 nt/s --
that would be 10X faster.

>i have wondered if it is because of (a) life span of pro vs eu
>karyotes, (b) because of existence of a nuclear membrane in eukaryotes,
>where one is not found in prokaryotes...
>if anyone can shed some light on this question, please reply by
>posting or send email to:
>k.venkiteswaran at odyssey.on.ca
Here are a few possibilities I could come up with.

One, in E. coli you are not worrying about mutations that much. In fact a high mutation rate is desirable if the bacterium is to adapt to changing environments. The result is you can replicate your DNA without worrying about a mutation occurring. In higher eukaryotes such as mammals this is a big problem... mutations can kill the organism if well placed (i.e. cancer). For yeast I don't think this is as strong a point, as they are unicellular and, much like in bacteria, high mutation rates are adaptive.

The error rate in E. coli is consequently ~10^-8 whereas in fibroblasts it is ~10^-10.
Two, in E. coli you have very little packing of DNA compared to eukaryotes. To start DNA replication you need to gain access to the DNA and expose the template for the polymerase. For instance you have the HU protein of bacteria compared to H1, H2A, H2B, H3 and H4 as well as the HMG proteins in mammals. The bacterial genome is much less packed, so by simple logic it is much easier to gain access to DNA in bacteria than in a mammal, and hence replication is faster in bacteria.
Further, as a point of interest: in E. coli you only have one origin of replication, whereas in human cells it is estimated there are between 10^4 and 10^5 possible origins. This may reflect both the difference in size and complexity of the two genomes as well as the speed at which DNA is replicated. Multiple initiations of replication could help decrease the time to replicate the genome if it is very large.

E. coli has 3 x 10^6 bp and in the human genome there are ~3 x 10^9, or 1000X more DNA to replicate.

1000 x 10 (the difference in speed of DNA replication) = 10,000, or 10^4.

hmmm, by a rough calculation it makes sense why you have ~10^4 origins in the human genome and only 1 in bacteria.
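The back-of-the-envelope arithmetic above can be sketched in a few lines. The figures come from the post; the two-replication-forks-per-origin factor is my assumption and is not stated in the post:

```python
# Time to copy a genome, assuming all origins fire at once and each
# origin sends out a given number of forks (assumed: 2 per origin).
def replication_time(genome_bp, rate_nt_per_s, origins, forks_per_origin=2):
    return genome_bp / float(rate_nt_per_s * origins * forks_per_origin)

ecoli = replication_time(3e6, 500, 1)    # seconds
human = replication_time(3e9, 50, 1e4)   # seconds
print(ecoli, human)  # -> 3000.0 3000.0
```

Under these assumptions both genomes copy in roughly the same wall-clock time, which is the post's point: ~10^4 origins compensate for a 1000X larger genome and a 10X slower polymerase.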
To your suggestion of the nuclear membrane as a possible reason for higher replication rates: I think the membrane provides a way of segregating transcription and translation, first of all, and may not really have much to do with replication on a surface level.
dellaire at odyssee.net
popen is used to read from or write to a unix pipe connected to another command.
This function is NOT included in 'C Programming Language' (ANSI) but can be found in 'The Standard C Library' book.
Library:   stdio.h

Prototype: FILE *popen(const char *command, const char *type);

Syntax:    FILE *fp;
           fp = popen("ls -l", "r");

Notes:     command - is the command to be issued.
           type    - r - read O/P from command.
                   - w - write data as I/P to command.
Atmospheric Gases Block UV <290 nm
The column of absorbing gas in the path of incoming radiation includes atomic and molecular nitrogen and oxygen and their products. These gases block all short wave ultraviolet radiation. Molecular oxygen in the stratosphere (10 to 48 km) absorbs short wave (up to 242.4 nm from the ground state and with very strong bands from 50-100 nm, and 140-175 nm) ultraviolet and photodissociates. The atomic oxygen produced leads to the production of ozone. Ozone strongly absorbs longer wave ultraviolet in the Hartley and Huggins bands from 200 - 360 nm (Fig. 1), and has additional weak absorption bands in the visible (Chappius bands from 450 - 750 nm) and infrared.Fig. 1 Transmittance of ozone layer.
Absorption in the Herzberg continuum of the abundant molecular oxygen blocks most ultraviolet up to ca. 250 nm. Rayleigh scattering by air molecules (Fig. 2) and the strong ozone absorption from 200 - 290 nm determine the terrestrial ultraviolet edge at around 290 nm. Ozone absorption is variable however, since the amount of ozone in the upper atmosphere depends in a complex manner on formation and circulation patterns of the ozone and on the long-lived catalytic chemicals that destroy ozone.Fig. 2 Rayleigh scattering; impact on transmittance.
Variable Ozone Level Means Variable Absorption Edge
The ozone level is quantified as the corresponding path length of the gas at standard temperature and pressure (STP) or in Dobson units (D.U.), the number of milliatmosphere centimeters of ozone at STP. Typical ozone levels vary from 2.4 mm (STP) or 240 D.U. at the equator increasing with latitude to 4.5 mm at the North Pole. Seasonal variation is highest at the poles. Prior to the report of the ozone hole, the ozone level at the North Pole was known to drop to ~2.6 mm (STP) in October. Antarctic levels as low as 1.1 mm (STP) have been reported and attributed to chemical destruction of ozone. The ASTM standard spectra use 3.4 mm ozone in the computation, as this is the expected average value for the U.S.Fig. 3 UV transmittance of normal and depleted ozone layer.Fig. 4 UV irradiance and ozone depletion.
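The unit relationship this paragraph uses — millimetres of ozone at STP versus Dobson units — is a simple scaling, sketched here with the column values quoted above:

```python
# 1 mm of ozone at STP corresponds to 100 Dobson units
# (1 D.U. is one milliatmosphere-centimeter, i.e. 0.01 mm at STP).
def mm_stp_to_dobson(mm_stp):
    return mm_stp * 100.0

for label, mm in [("equator", 2.4), ("North Pole", 4.5),
                  ("ASTM standard", 3.4), ("Antarctic low", 1.1)]:
    print("%-14s %3.1f mm (STP) = %3.0f D.U." % (label, mm, mm_stp_to_dobson(mm)))
```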
Since ozone is the principal absorber of solar radiation in the 250 - 300 nm region, with the strong slope shown in Fig. 3, ozone depletion leads to increased levels of UVB. We discuss some of the implications of this in our section on Photochemistry and Photobiology. Ironically, ground level ozone, a pollutant in heavily industrialized or populated areas, seems to reduce UV irradiation there. Fig. 4 shows how the calculated terrestrial ultraviolet at 50° latitude changes with ozone depletion. This figure is based on data taken from Vol. I of the UVB Handbook by Gerstl et al. of Los Alamos National Laboratory. | <urn:uuid:f0bf582e-10df-4c11-84aa-e65da6b13445> | 3.609375 | 617 | Academic Writing | Science & Tech. | 57.638354 |
rates of freezing
I should know the answer to this question.
So should the physics and chemistry teachers
at our school, but they disagree.
I have heard two, perfectly good explanations
with different "facts". Here goes.....
DOES hot water freeze faster than cold
water? (You were expecting something really
good were not you?) If so, why? If not, why?
Somebody's gonna have to eat crow. (I am glad
that I did not express my opinion - but I will
bet I am right!)
First, I think scientifically you should qualify your 'hot' or
'cold' description by temperature, in any scale of your choice
(i.e. Fahrenheit, Celsius, or Kelvin). Given that, however, I will
assume that your 'hot' water is higher in temperature than your
cold' water. Assuming no dissolved impurities which might otherwise
affect the freezing (melting) point of said water, I will
place my bet and say that since the 'hot' water, by definition
contains more energy and therefore more molecular motion, it would
freeze more slowly (i.e. would require more cooling to get down
to the freezing point) than the water you describe as 'cold'. The
'cold' water , containing less energy (by definition) and therefore
less molecular motion should require less cooling to get to the
freezing point. I would like to hear both sides of the opinions of
Yes, hot water in a freezer freezes faster than cold water, as every
competent houseperson (formerly, "housewife") knows. The reason is that
the surface of the freezer usually has a layer of ice on it. Ice is an
excellent insulator, and therefore limits the heat transfer to the freezing
surface. Putting hot water in your pan (or ice-cube tray) has the effect
of melting the ice-layer on the surface, providing for better heat transfer
to the surface.
There is also the fact that there is increased evaporation from the
warm water, which reduces the mass of water that must be cooled, as noted
in item 3.40 of Jearl Walker's book "The Flying Circus of Physics." That
book is the first place to look for answers to questions of the type that you ask.
Jack L. Uretsky
Jack L. Uretsky, You will recall that the question was does hot water
freeze faster than cold water. I don't recall it mentioning a freezer, nor a
coating of ice, nor allowing for escape of some of the targeted hot water to
freeze. I maintain my answer as correct, given the question that was
asked. Naturally if heat is allowed to escape from the system, either
thru melting of a supposed coating of ice or warming of some freezer
space, then your assertion might change the scenario. In a laboratory
with a controlled experiment where all the heat is accounted for,
I believe you will find my answer to be the correct one, i.e.
cold water will freeze faster than an equivalent quantity of hot water.
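As a minimal sketch of the "controlled laboratory" argument (assuming simple Newtonian cooling with a fixed rate constant, and ignoring the evaporation and ice-layer effects the other poster raises), hotter water simply takes longer to reach the freezing point:

```python
import math

# Idealized Newton's-law-of-cooling sketch: time for water starting at t0
# to cool to t_target in a freezer at t_env, with T(t) = t_env + (t0 - t_env) * exp(-k t).
# k is an arbitrary illustrative rate constant (per minute); no evaporation,
# no insulating ice layer, identical conditions for both samples.
def time_to_cool(t0, t_target, t_env, k):
    return math.log((t0 - t_env) / (t_target - t_env)) / k

k = 0.01
hot = time_to_cool(90.0, 0.0, -18.0, k)   # water starting at 90 C
cold = time_to_cool(20.0, 0.0, -18.0, k)  # water starting at 20 C
print(hot > cold)  # prints True: the hotter sample reaches 0 C later
```

This supports the "controlled experiment" answer; the houseperson's observation depends on the real-world effects this model deliberately leaves out.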
Update: June 2012 | <urn:uuid:ed7f1135-1d91-48bd-a94f-d5ea60d23aa9> | 2.703125 | 669 | Q&A Forum | Science & Tech. | 59.238978 |
Last year more than 1 million Americans had their eyes zapped with lasers to free themselves from eyeglasses. For the majority, the procedure was a success-but not for everyone. People who are farsighted, for instance, don't generally have great results. Many experience side effects such as seeing halos around objects. And laser surgery won't help people who need reading glasses because their vision continues to deteriorate as they age. So Mohsen Shahinpoor of the University of New Mexico decided to come up with an alternative.
Since most cases of poor vision are caused by deformities in the eyeball's shape, Shahinpoor and his colleagues' technique wraps a thin band of artificial muscles around the eyeball to squeeze it into place. The artificial muscles are made up of polymer strings with gold wires wound around them. Electricity is induced from a power source located behind the ear (see left); a person can direct the band to either expand or contract.
Compared with laser surgery, having the equivalent of a rubber band strapped around your eyeball may seem drastic. But Shahinpoor points out that the band could let people adjust their focus on both near and far objects. And unlike laser surgery, the procedure would be reversible. The device is now being tested on cadavers' eyes and it's likely to be at least two years before it is tested on people.
What is the average kinetic energy of a substance
It's its temperature! Thanks for using ChaCha!
More Answers to "What is the average kinetic energy of a substance"
- What is the measure of the average kinetic energy of the particle...?
- Temperature, which is the average kinetic energy of all the particles that make up a substance.
- What's the average kinetic energy of the particles of a substance...?
- The sum of each particle times its energy divided by the total number of particles. (I know this is not what you meant to ask. I'm just giving the technically correct answer to show you that you could get what you want easier and faster by ...
- What is the average kinetic energy of particles in an object??
- it is the half of energy that is considered between the phenomenal kinetic energy requirements
Related Questions Answered on Y!Answers
- Which temperature scale provides a direct measure of the average kinetic energy of a substance?
- Q: Which temperature scale provides a direct measure of the average kinetic energy of a substance? What is the name of the device used to measure gas pressure? Help? Thanks.
- A: You have to use the Kelvin scale for any energy calculations. The instrument used to measure pressure depends on the pressure. Very low pressures such as in vacuum operations use an ionisation gauge or a McLeod gauge. Near ambient pressures you can use a manometer. At very high pressures such as in a rocket engine you would need a pressure gauge or perhaps a strain gauge.
- If the average kinetic energy of a substance changes what happens?
- Q: A) its temperature changes B) it may change phase C) thermal energy has been transferred into or out of the substance D) all of these
- A: The answer should be A. Average kinetic energy of a substance is related to the speed of the particles moving in it and the temperature of the substance. As the particles in a substance move faster, the average kinetic energy of the substance increases. So does the temperature. When a substance changes phase, the temperature will remain constant. This means that the average kinetic energy is also constant during this process. Hence, B is wrong. When thermal energy is being transferred out of the substance, one could understand it as taking some particles out of the substance. So what is changed here? I would think that the total kinetic energy is being changed, since some particles have been taken away. But the taking away of the particles would not affect the speed of the remaining particles. The AVERAGE kinetic energy should remain unchanged. Hence, C is also wrong.
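The temperature-kinetic-energy link used in these answers can be made concrete with the standard kinetic-theory relation <KE> = (3/2) kB T (T in kelvin):

```python
# Average translational kinetic energy per gas particle, <KE> = (3/2) * k_B * T.
# Standard kinetic-theory relation; T must be an absolute (kelvin) temperature,
# which is why the Kelvin scale is the one that measures average KE directly.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def avg_kinetic_energy(temp_kelvin):
    return 1.5 * K_B * temp_kelvin

print(avg_kinetic_energy(300.0))  # ≈ 6.2e-21 J at room temperature
```

Note that doubling the kelvin temperature doubles the average kinetic energy, which is exactly the "direct measure" property asked about above.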
- In which phase of a substance do its particles have the greatest average kinetic energy?
- Q: In which phase of a substance do its particles have the greatest average kinetic energy? In which phase does the substance have the highest potential energy?
- A: I'll just throw out there that it seems likely a plasma would have the highest kinetic energy. I'm not sure if that is one of the choices, depending on the context of the original question.
Prev Question: What is the study of physical science | <urn:uuid:eeb8750d-3cb3-43e3-9799-df0b316e00a7> | 2.6875 | 644 | Q&A Forum | Science & Tech. | 46.46335 |
A photo of the SWICS instrument
Courtesy of the University of Maryland Space Physics Group
The SWICS (Solar Wind Ion Composition Spectrometer) instrument on Ulysses is designed to measure the frequency of occurrence, temperature, and average speed of all the major solar wind ions, from hydrogen through iron. Another main goal of the Ulysses mission is to study the solar wind coming from the poles of the Sun. These regions of the Sun are known to give off high-speed streams (HSSTs) of solar wind. Scientists are using the SWICS instrument to see if these streams are of constant speed and constant composition.
The instrument has already collected data at the north and south pole of the Sun. Scientists have already found evidence that speed and composition are steady at the poles of the Sun.
One interesting fact about the SWICS instrument is that it measures the composition, temperature, and speed of solar wind ions at speeds ranging from 175 km/s to 1280 km/s. Compare these to the velocity of Ulysses, the fastest space probe to date, travelling at 11.3 km/s!
Recently, the students have been complaining that physics has been putting too much pressure on them. But what is pressure?
In physics we can define pressure on a solid as a force distributed over the contact surfaces of the two objects.
P = F/A
where P is pressure, F is force, and A is the area between the two surfaces in contact.
A nice way to think about variations in pressure is the way a book can be laid on a surface.
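For instance (with made-up numbers for the book, purely as an illustration), the same weight produces very different pressures depending on which face touches the table:

```python
# Hypothetical numbers: a 1.0 kg book with a 0.20 m x 0.25 m cover and a
# 0.03 m thick spine. P = F / A, so the same weight gives different
# pressures on different faces.
g = 9.81          # m/s^2
weight = 1.0 * g  # N

flat = weight / (0.20 * 0.25)     # lying on its cover
on_edge = weight / (0.25 * 0.03)  # standing on its spine
print(flat, on_edge)  # the smaller contact area gives the larger pressure
```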
The common metric unit for pressure is the pascal (Pa), where 1 Pa = 1 N/m2.
However, when you inflate a bicycle tire, you will more likely see the units for pressure in lb/in2.
The conversion is 1 lb/in2 = 6895 Pa
We commonly also use the atmosphere as a unit of pressure, which is nearly the same size as the bar (1 bar = 1 x 10^5 Pa exactly).
One Atmosphere (atm) = 1.01 x 10^5 Pa ≈ 1 bar
Sometimes you will see millibar as a unit on vacuum pumps.
In weather reports, you will often see pressure presented in units of inches of Mercury (inHg), as these are the units that you would read on a barometer. This is the barometer from my grandfather's liquor store, Silk City Liquors.
When I was a pilot, I used to take the altimeter settings on my airplane in inHg. You can see that the small calibration window on this altimeter, between the 2 and 3, is set for 29.9 inHg.
1 atm = 29.92 inHg = 760 mmHg = 760 Torr
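Those equivalences are enough to convert between any of the units mentioned; a small sketch (using the standard value 1 atm = 101325 Pa together with the 1 lb/in2 = 6895 Pa figure quoted above):

```python
# Pressure-unit conversions built from the equivalences in the post:
# 1 atm = 101325 Pa = 29.92 inHg = 760 Torr, and 1 lb/in^2 = 6895 Pa.
PA_PER_ATM = 101325.0
PA_PER_PSI = 6895.0
TORR_PER_ATM = 760.0
INHG_PER_ATM = 29.92

def atm_to(value_atm, unit):
    factors = {"Pa": PA_PER_ATM,
               "psi": PA_PER_ATM / PA_PER_PSI,
               "Torr": TORR_PER_ATM,
               "inHg": INHG_PER_ATM}
    return value_atm * factors[unit]

print(atm_to(1.0, "psi"))  # ≈ 14.7 lb/in^2, the familiar sea-level value
```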
In metric units this would be millimeters of Mercury (Hg). Another name for the millimeter of Mercury is the Torr, in honor of Evangelista Torricelli.
Torricelli, one of Galileo's students, is known historically as taking over Galileo's academic posts. He did a lot of work with vacuum pumps and built an accurate barometer using Mercury (as opposed to those who had done it earlier with water), which solved many of the problems of using the barometer over a wide temperature range.
The Contribution of Global Transport on
- Ehlers,Susanna(2007).pdf (3.751Mb PDF)
- May 18, 2007
- Carbon Monoxide
- Carbon monoxide, CO, is produced by natural and anthropogenic processes including biomass burning and fossil fuel usage and affects atmospheric chemistry through its roles as a sink for the hydroxyl radical (OH) and as a precursor to ozone. As the primary atmospheric sink for OH, which is responsible for chemically destroying numerous air pollutants, CO concentrations impact the concentrations of other such pollutants. Here we use CO as a tracer for polluted air masses by examining the transport of CO both to and from North America. CO is an ideal tracer for atmospheric and climate modeling because it is well understood and well captured due to its simple chemistry and long lifespan. By employing MOZART, a numerical global tropospheric chemistry model, we seek to address the nature of air pollutant transport and establish the role of transport on regional North American CO concentrations. We find that the greatest intercontinental transport of CO occurs on days when the overall carbon monoxide concentrations are low to moderate. In addition, carbon monoxide concentrations increase eastward, reflecting the different regional impacts of emissions. We also define three main transport pathways of CO over North America and identify specific episodic flux events by comparing model results to INTEX-NA flight observations taken during the summer of 2004 in cooperation with NASA, NOAA, and the ICARn campaign. The main pathways of CO transport over North America are eastward, aloft import from Asia; northward, surface import from Africa, attributed to heavy biomass burning in the summer; and eastward export from North America, at the surface and aloft. Understanding these pathways for CO transport and the regional impacts is a key step towards understanding how polluted air masses evolve and can provide insight into the extent to which local air quality is influenced by intercontinental transport.
Plate-wide deformation before the Sumatra-Andaman earthquake
Crampin, Stuart; Gao, Yuan. 2012. Plate-wide deformation before the Sumatra-Andaman earthquake. Journal of Asian Earth Sciences, 46, 61-69. doi:10.1016/j.jseaes.2011.10.015
Rock is weak to shear-stress and the energy released by the 26th December, 2004, M ≈ 9.2 Sumatra–Andaman Earthquake, the largest earthquake for four decades, must have accumulated over enormous volumes of crust and mantle, certainly plate-wide, possibly world-wide. Here we report evidence for plate-wide stress accumulation. Changes in seismic shear-wave splitting monitor stress-induced changes in the geometry of the microcrack distributions in almost all rocks in the Earth’s crust. Such changes observed in Iceland show stress-accumulation beginning at least four years before the Sumatra–Andaman Earthquake. These changes were recognised as monitoring stress-accumulation before an impending large earthquake and 10 ‘stress-forecasts’ were emailed to Iceland Meteorological Office for some 29 months forecasting an impending large earthquake. The remarkable sensitivity of critical-systems of microcrack geometry to miniscule changes of stress had not been recognised at that time and the stress-accumulation was expected to lead to a M ⩾ 7 earthquake somewhere in Iceland. Only now is it recognised that the changes in shear-wave splitting were monitoring stress-accumulation which would eventually lead to the Sumatra–Andaman Earthquake at a distance of some 10,500 km on the opposite side of the Eurasian Plate. This extreme sensitivity confirms the critical nature of fluid-saturated stress-aligned microcracks in the Earth’s crust.
Programmes: BGS Programmes 2010 > Earth hazards and systems
Date made live: 10 May 2012 15:14
How did this collaboration with National Physical Laboratory come about for your project The Fundamental Units?
For six months I was having tests run all around the UK on different types of microscopes, such as scanning electron microscopes, at different institutions, universities and testing laboratories. The Curator of Modern Money at the British Museum suggested an idea which eventually led me to the National Physical Laboratory.
I ended up at the Advanced Engineered Materials Group which is part of the National Physical Laboratory, using an Alicona infinite focus 3D optical microscope.
They were really into experimenting and pushing the equipment. It took about a month of tests to get the results we see. The process involved Petra, the scientist in charge of the machine, writing programs to capture the data as a whole, as the machine is designed for looking in detail at one tiny part of an object. We crashed it several times working out the right solution. Each coin, which is generally around 18-20 mm in diameter, takes a whole night to capture. Then computers run for three days assembling the data into extremely high resolution photographic images. We are talking files too big for normal image editing software such as Adobe Photoshop. Each photographic print is from files with around 400 million pixels.
What did some of the earlier tests look like?
Many microscopes are not optical (they don't use light) and therefore produce results that are removed from what we generally expect to see. A scanning electron microscope, for example (attached), produces images in greyscale and the electric charge greatly emphasises dust and dirt. Clean images could be obtained through sonic cleaning and plating the coins in gold, but this started to become very removed from examining these low value tokens of exchange.
Could you explain ... | <urn:uuid:c4f224ed-a708-4930-91f4-7406500aa02b> | 2.9375 | 350 | Audio Transcript | Science & Tech. | 35.608419 |
The eye uses three methods to determine distance:
- The size a known object has on your retina - If you have knowledge of the size of an object from previous experience, then your brain can gauge the distance based on the size of the object on the retina.
- Moving parallax - When you move your head from side to side, objects that are close to you move rapidly across your retina. However, objects that are far away move very little. In this way, your brain can tell roughly how far something is from you.
- Stereo vision - Each eye receives a different image of an object on its retina because each eye is about 2 inches apart. This is especially true when an object is close to your eyes. This is less useful when objects are far away because the images on the retina become more identical the farther they are from your eyes. | <urn:uuid:c2c81882-01a3-4ec4-8332-ffa79472b805> | 4.25 | 175 | Knowledge Article | Science & Tech. | 48.235364 |
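As a rough sketch of the stereo-vision idea (the 0.064 m baseline is an assumed typical human eye separation, and the small-angle formula is an approximation, not a model of the brain's actual processing), the angular disparity between the two retinal images falls off with distance:

```python
# Small-angle stereo sketch: an object at depth Z seen by two eyes separated
# by baseline b produces an angular disparity of roughly b / Z radians, so
# depth can be recovered as Z = b / disparity.
def disparity_for_depth(baseline_m, depth_m):
    return baseline_m / depth_m

def depth_from_disparity(baseline_m, disparity_rad):
    return baseline_m / disparity_rad

near = disparity_for_depth(0.064, 0.5)   # object at 0.5 m
far = disparity_for_depth(0.064, 10.0)   # object at 10 m
print(near > far)  # prints True: nearby objects produce much larger disparity
```

This is why the text notes that stereo vision is most useful up close: at large distances the disparity shrinks toward zero and the two retinal images become nearly identical.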
© The scientific sentence. 2010
1. Moving source away from an observer at rest
The source S is moving to the right at the speed Vs in the frame of
the observer at rest. At the precise time T, which is the period of the wave,
the wave that was emitted from So reaches the observer. The source at
this time is no longer located at the position So, that is, at the distance Ls;
it has moved and is now located at the distance L's. For the
observer, it appears that the wave originated from the position S,
taking T' as the time to arrive at its position. If Vw is the speed of the
wave, we can write:
L's = Vw . T'
Ls = Vw . T
L's = Ls + Vs . T
Vw . T' = Vw . T + Vs . T
T' = T . (Vw + Vs)/Vw
With ƒ = 1/T, we have:
ƒ' = ƒ . Vw/(Vw + Vs)
ƒobs = ƒ . Vw/(Vw + Vs)
where ƒ' is the apparent frequency
for the observer, and ƒ is the real frequency of the wave.
If the source is stationary (Vs = 0), then ƒ' = ƒ and
T' = T; that is, the observer receives waves at every real period.
2. Moving source toward an observer at rest
Next, the observer is always at rest and the source
S is moving at a speed Vs towards this observer. The wave propagates
at the speed Vw in all directions, and towards the observer of course.
When this observer receives the wave, at this precise time, the
source is not located at the distance Ls but at L's. For the observer, the
period of the wave is T's. And at this precise time, the wave has
travelled Ls = Vw . T, where T is the real period.
T's is the apparent period.
We can write:
L's = Ls - Vs . T
Because Ls = Vw . T
and L's = Vw . T's,
Vw . T's = Vw . T - Vs . T = (Vw - Vs) . T
1/T's = [Vw/(Vw - Vs)] (1/T)
ƒ' = [Vw/(Vw - Vs)] ƒ
ƒobs = ƒ . Vw/(Vw - Vs)
ƒobs is the apparent frequency.
3. Moving source and observer
Now, let's consider that the observer is moving
at a speed Vo and the source is moving at a speed Vs,
approaching the observer, that is, in the opposite direction.
The wave is always moving at the velocity Vw.
This case is like the last one except that the observer
is moving. Then instead of L's = Vw . T', we have:
L's = (Vw + Vo) . T'
(Vw + Vo) . T' = (Vw - Vs) . T
ƒobs = ƒ . (Vw+Vo)/(Vw-Vs)
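The classical result above can be written as a one-line function (signs chosen, as in the derivation, so that positive Vs and Vo mean source and observer approaching each other):

```python
# Classical Doppler effect, as derived above:
# f_obs = f * (Vw + Vo) / (Vw - Vs),
# with Vw the wave speed, Vs the approach speed of the source, and Vo the
# approach speed of the observer (both relative to the medium).
def doppler_classical(f, vw, vs=0.0, vo=0.0):
    return f * (vw + vo) / (vw - vs)

# Example: a 440 Hz siren in air (sound speed 343 m/s), source approaching
# a stationary observer at 30 m/s.
print(doppler_classical(440.0, 343.0, vs=30.0))  # ≈ 482 Hz, pitched up
```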
4. Moving source and observer: Relativistic case
In the relativistic case, the time is stretched, that is, the time
T becomes γT, where
γ = 1/[1 - (Vs/Vw)^2]^(1/2).
The frequency (1/T) is changed into 1/γT. That is:
ƒ'(Relativistic) = ƒ'(Classic)/γ
For the latter case, we have:
ƒ'(Relativistic) = (Vw+Vo)/(Vw-Vs) ƒ/γ = (Vw+Vo)/(Vw-Vs) [1 - (Vs/Vw)^2]^(1/2) ƒ
= ƒ . [(Vw+Vs)/(Vw-Vs)]^(1/2) . (Vw+Vo)/Vw.
ƒ' = ƒ . [(Vw+Vs)/(Vw-Vs)]^(1/2) . (Vw+Vo)/Vw
ƒobs = ƒ . [(Vw+Vs)/(Vw-Vs)]^(1/2) . (Vw+Vo)/Vw
In the case where the wave is light, Vw = c (speed of light in vacuum = 3 x 10^8 m/s):
ƒ' = ƒ . [(c + Vs)/(c - Vs)]^(1/2) . (c + Vo)/c
In the case where the observer is at rest, Vo = 0, then:
ƒ' = ƒ . [(c + Vs)/(c - Vs)]^(1/2).
In the case where the source is at rest, Vs = 0, then:
ƒ' = ƒ . (1/c)(c + Vo)/[1 - (Vo/c)^2]^(1/2) ≈ ƒ(1 + Vo/c)
ƒobs = ƒ . (1/c)(c + Vo)/[1 - (Vo/c)^2]^(1/2) ≈ ƒ(1 + Vo/c)
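The relativistic result derived above (for a source and an observer approaching at Vs and Vo) can likewise be sketched in code; passing a negative Vs then gives the lowered frequency (redshift) of a receding source:

```python
import math

# The text's relativistic formula:
# f_obs = f * sqrt((c + Vs)/(c - Vs)) * (c + Vo)/c,
# with Vs and Vo the approach speeds of source and observer.
C = 3.0e8  # speed of light, m/s

def doppler_relativistic(f, vs=0.0, vo=0.0, c=C):
    return f * math.sqrt((c + vs) / (c - vs)) * (c + vo) / c

# A source receding at 0.1c (vs = -0.1c) and an observer at rest gives the
# familiar factor sqrt(0.9/1.1) ≈ 0.905, i.e. a lowered observed frequency.
print(doppler_relativistic(1.0, vs=-0.1 * C))
```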
©: The scientificsentence.net. 2007. | <urn:uuid:dcc8036e-903c-4f75-94a3-ff8b7f11eeee> | 4.03125 | 1,179 | Documentation | Science & Tech. | 114.148886 |
Coming to you from the Air Force Space and Missile Museum at Cape Canaveral Air Force Station, Florida — it’s a model of a Corona satellite:
Corona (originally known to the world via its cover name “Discoverer”) was the world’s first photo reconnaissance satellite program. It started in 1958 as one of a number of think-tank studies, then was rushed forward after the 1960 downing of Francis Gary Powers’ U-2 spy plane over the Soviet Union. Coming at a particularly stressful part of the Cold War, this incident made the proposal for controversy-free overhead reconnaissance photography irresistible.
So starting in early 1959, test launches of Corona (advertised publicly as a technology development program called Discoverer) began. The idea was to take images on film, then return the exposed film in a re-entry capsule (“bucket”) for processing and analysis. A returned bucket was recovered by aircraft in mid-air, while dangling from its parachute.
Many of the early missions failed, while the program’s engineers and management learned how to build a reliable spacecraft. The first successful return of images came on the Discoverer 13 mission in August of 1960 — this was also the first time any object had been successfully returned from orbit. Launches continued under the Discoverer banner until 1962, at which point Corona continued for another decade as a completely secret program.
Meanwhile, the original single return bucket was replaced by a two-bucket system, resolution improved from 20 feet (6 meters) to 6 feet (2 meters), and the film base was made thinner (courtesy of the invention of Mylar) allowing for more film to be launched. By the time the program wound down in 1972, Corona satellites had taken over 800,000 images on 2.1 million feet (640 km / 400 miles) of film.
The Corona program was declassified in 1995, so now all sorts of information is publicly available on the program — the U.S. National Reconnaissance Office even has devoted a section of its website to the program. | <urn:uuid:a8056b27-f77e-4fac-9782-06c77a8eca21> | 3.375 | 430 | Personal Blog | Science & Tech. | 39.042365 |
Are You Really Sleeping on Clean Bedsheets?
...use of indoor allergies. The other primary offenders cited in the arena of indoor allergens are pet dander and dust mold. Dust mites are microscopic arachnids
that feed on dead skin. They infest beds and carpets by the tens of millions. There can be as many as 1000 mites in one gram of dust. In an article...
arachnids in Medical Dictionary
... Tick is the common name for the small arachnids
in superfamily Ixodoidea that, along with other mites, constitute the Acari... Although ticks are commonly thought of as insects, they are actually arachnids
... Ticks are among the most efficient carriers of disease because they...
arachnids in Biological News
Scary ancient spiders revealed in 3-D models, thanks to new imaging technique
...es, called pedipalps, had tiny 'tarsal' claws attached at the end to help the creature to manipulate its prey. These claws are seen in rare modern-day arachnids
such as the Ricinulei. The researchers say that the existence of this common physical feature, shared by the Cryptomartus hindi and the Ricinulei, l...
Microscopic morphology adds to the scorpion family tree
...addy long legs. Consequently, the question of whether the book lung evolved once, at the base of the arachnid phylogenetic tree, or more than once, as arachnids
adapted to life out of water, is unresolved.
This study is the most complete review of scorpion breathing apparatus ever: 200 specimens from 100 g...
A rarity among arachnids, whip spiders have a sociable family life
...turity, and all mix in social groups. This is surprising behavior for these arachnids
long-thought to be purely aggressive and anti-social, according to a Cornel...d hide when faced with threat. While sharing prey is an advantage for other arachnids
with social tendencies, Rayor said she has yet to witness amblypygids inten...
arachnids in Biological Dictionary
... The Malpighian tubule system is a type of excretory and osmoregulatory system found in some Uniramia, arachnids
and tardigrades. Information for research on the Malpighian tubule (insect kidney) ... The Drosophila melanogaster Malpighian tubule provid...
... The cephalothorax is an anatomical term used in arachnids
and malacostracan crustaceans for the first (anterior) major body section. The remainder of the body is the abdomen (opisthosoma), which may also be...
... A carapace is a dorsal section of the exoskeleton or shell in a number of animal groups, including arthropods such as crustaceans and arachnids
as well as vertebrates such as chelonians, order Testudines, turtles and... carapace ( ) n. Zoology. A hard bony or chitinous outer covering,...
are a class ( Arachnida ) of joint-legged invert...ebrate animals in the subphylum Chelicerata. All arachnids
have eight legs, although in some species the fr...e great sub-phylum Arthropoda, according to ... arachnids
are in the class of Arachnida . ... Basic chara... | <urn:uuid:4236a70d-f544-4fe8-82e4-cd4a2e09b076> | 2.9375 | 753 | Content Listing | Science & Tech. | 45.973901 |
Researchers at Worcester Polytechnic Institute have just done a batch of research that they hope will help turn the world's roads into cheap collectors of solar power.
They started with the assumption that asphalt gets frakking hot when the sun shines on it, and then started making some serious leaps.
First, they decided to figure out what part of the asphalt gets hottest, which turns out to be about two centimeters below the surface. Then they tried to figure out how to make it even hotter. They painted an anti-reflective coating onto their test blocks, and then added highly thermally conductive quartzite to the mix.
The result is blacktop that gets even hotter and stays hotter for longer than regular asphalt. Of course, this left them with the problem of how to get the energy out of the road. By laying down a series of flexible and highly conductive copper pipes before pouring the asphalt they were able to pump water through the asphalt, picking up the heat, for use in power generation.
However, project leaders hoped to replace the copper pipes with a "highly efficient heat exchanger." Whether or not that would be water based, or exchange heat some other way, we don't know.
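As a back-of-envelope sketch (all numbers here are illustrative assumptions, not figures from the article), the heat picked up by the circulating water follows P = mdot * c * dT:

```python
# Thermal power collected by water pumped through hot asphalt:
# P = mdot * c * dT, where mdot is the mass flow rate, c the specific heat
# of water, and dT the temperature rise across the pipe loop.
# The flow rate and temperature rise below are made-up illustrative values.
C_WATER = 4186.0  # J/(kg*K), specific heat of water

def collected_power_watts(flow_kg_per_s, delta_t_kelvin):
    return flow_kg_per_s * C_WATER * delta_t_kelvin

# 0.05 kg/s of water warmed by 20 K while crossing the loop:
print(collected_power_watts(0.05, 20.0))  # ≈ 4.2 kW of thermal power
```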
The system has several large advantages over traditional photovoltaic power.
- It's really cheap
- They don't need to find extra land
- It's invisible to the average person
- Blacktop stays hot, and could produce power for hours after the sun goes down
- There are roads and parking lots everywhere power is needed.
There are already examples of similar technology in use around the world, but modifying the chemistry of the asphalt specifically to make it a good solar collector is a new move.
Found 0 - 10 results of 89 programs matching keyword "model of the atmoshpere on mars"
In February 2013, Curiosity drilled into a rock called "John Klein" and then analyzed the sample material with its on-board scientific instruments. On March 12, NASA announced that the analyses show conditions on Mars were once favorable for life! Join us to learn more about this breakthrough discovery.

On Mars, as on Earth, sometimes things can take on an unusual appearance. A case in point is a shiny-looking rock seen in a recent image from NASA's Curiosity Mars rover.

Curiosity has made a discovery! What could it be? Why are JPL scientists keeping this breaking news classified for now? Exploratorium host and Mars enthusiast Robyn Higdon and Ron Hipshman will give you a refresher on Curiosity's SAM instrument and will discuss the process that scientists at JPL must endure before releasing this ground breaking discovery to the public.

Exploratorium host and Mars enthusiast Robyn Higdon gives us a tour of the Mars Science Laboratory Mission thus far, what the Curiosity rover is doing now, and what to look forward to in the months to come.

In today's program, Exploratorium scientists will present examples of extremophiles - microorganisms that live in extreme conditions on Earth. Since Mars is an extreme environment, the question remains: could the red planet have supported some form of microbial life? Find out in our live webcast.

Join Exploratorium scientists and learn about the latest exploits of the Curiosity rover, which is roaming the planet Mars and investigating the possibility of hospitable conditions to sustain extraterrestrial life.

In today's program, Exploratorium host Ron Hipschman will give a quick update on Curiosity and then revisit the MER rovers, Spirit and Opportunity.

It's been a week since we did our last Mars webcast - join Exploratorium hosts Ron Hipschman and Linda Shore as they give us updates on all the latest images and findings, and share a little bit about time on Mars.
Curiosity has started roving! Join hosts Robyn and Ron as they bring the latest updates on Curiosity's progress, and then delve into investigating the nuclear power source on the rover. Join host Ron Hipschman and a very special guest, David M. Seidel, who is the Deputy Manager from the Education office at the Jet Propulsion Laboratory! They will go over new updates, and then share photos and stories from the Exploratorium's recent visit to JPL to learn more about the MSL mission. | <urn:uuid:e66f36ff-48eb-4175-bbbb-3bf68ef90f62> | 2.75 | 613 | Content Listing | Science & Tech. | 39.284315 |
International Great Lakes Datum (IGLD) is the reference system by which Great Lakes-St. Lawrence River Basin water levels are measured. It consists of benchmarks at various locations on the lakes and St. Lawrence River that roughly coincide with sea level. All water levels are measured in feet or meters above this point. Movements in the earth's crust necessitate updating this datum every 25-30 years. The first IGLD was based upon measurements and bench marks that centered on the year 1955, and it was called IGLD (1955). The most recently updated datum uses calculations that center on 1985, and it is called IGLD (1985). Measurements recorded in NGVD (1929) or IGLD (1955) need to be converted to IGLD (1985) measurements before they can be used in comparison situations. The table below displays the different readings for the OHWM in Indiana and how to convert NGVD (1929) and IGLD (1955) measurements to IGLD (1985) measurements.

Conversion Equations
Look up monthly U.S., Statewide, Divisional, and Regional Temperature, Precipitation, Degree Days, and Palmer (Drought) rankings for 1-12, 18, 24, 36, 48, 60-month, and Year-to-Date time periods. Data and statistics are available from January 1895 onward.
Please note, Degree Days are not available for Agricultural Belts
New Mexico Temperature Rankings, October 1988
More information on Climatological Rankings
(out of 119 years)
108th Coldest | 1970 | Coldest since: 1987
9th Warmest   | 1950 | Warmest since: 1979
What are tides, and why do they happen? How many high and low tides happen each day? Are all tides the same height?

This page links to an interdisciplinary learning module created by geologists, chemists, physicists, and mathematicians. The module will help students and teachers learn more about tides by using different tools and methods of these disciplines. (From the Nova Scotia Museum.)

If you were to track ocean water level over a period of a few days, you would see that it changes with a regular pattern. As a scientist, you can do more than just observe these tidal patterns. You can try to figure out why they happen!

In order to investigate tides, you'll need scientific data. Click here for a step-by-step guide to collecting and analyzing your own ocean water level data.

This activity is appropriate for classes from advanced middle school to university level. It takes some computer savvy--the main technical requirements are the ability to cut and paste data from a web page to MSWord and Excel, and to create graphs using Excel or similar software. It also takes a little patience to find a data station that will work for you. But be assured that they do exist, and with a few minutes of persistence, you will find one.

Would you like to use a data set that has already been downloaded for you, with a graph already made for a few days (or a year)? Click on one of these: City, CA; Kawaihae, Hawaii Island, HI; Rockport,

Be prepared to be amazed by the graphs that you will produce--you're going to get graphs that show much, much more than the "two highs & two lows per day" that you might initially expect, although you definitely will see that too! Many factors influence tidal levels. This activity doesn't explain why tides happen. But it will draw you and your students to try to explain the data trends that you discover. Maybe you will be able to figure out something new--and really interesting!
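To see why the graphs show more than a simple "two highs and two lows" pattern, here is a toy sketch. The constituent periods are the standard lunar/solar semidiurnal values, but the amplitudes are made up for illustration; this is synthetic data, not a real station record:

```python
import math

# Two illustrative semidiurnal constituents: (period in hours, amplitude in m).
M2 = (12.42, 1.0)   # principal lunar constituent
S2 = (12.00, 0.3)   # principal solar constituent

def water_level(t_hours):
    """Synthetic water level: the sum of two cosine constituents."""
    return sum(amp * math.cos(2 * math.pi * t_hours / period)
               for period, amp in (M2, S2))

# Near t = 0 the constituents peak together (spring tide, large range);
# about 7.4 days later they oppose each other (neap tide, small range).
spring_high = max(water_level(t) for t in range(0, 25))      # first day
neap_high   = max(water_level(t) for t in range(170, 195))   # around day 7-8
print(f"spring high ~ {spring_high:.2f} m, neap high ~ {neap_high:.2f} m")
```

Even this two-term model reproduces the familiar twice-daily cycle plus a fortnightly spring/neap modulation; real station data adds many more constituents on top.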
Are you looking for materials and activities to support your courses in Physics or Mathematics? Check out Links for Educators* (*and other people with insatiable curiosity).

Text of this page © 2003 The NASA/UNCF Project, Northeastern Illinois University. Last updated May 1, 2003.
The second lightest naturally occurring atomic element, and the most common in the universe after hydrogen. About one quarter of all conventional matter in the universe is helium, and it is plentiful in stars, the atmosphere of gas giants, and the interstellar gaseous medium, though rare on smaller rocky or icy planets and moons lacking the gravity to retain it. Aside from the quantities formed in the original nucleosynthesis of the Big Bang, helium is also created from hydrogen atoms in the process of nuclear fusion that occurs within stars. The most common isotope of helium has two protons, two neutrons, and two electrons. Rarer isotopes of helium are important in fusion reactor technology. Before conversion reactors became prevalent, helium-3 was an extremely valuable resource, as it still is in regions lacking monopole-based technologies. Helium is the lightest noble gas, and because it is chemically inert it is very popular as the lifting gas in dirigibles, blimps and airships on worlds or on large habs with a standard Terragen atmosphere. | <urn:uuid:ac73f3d5-4aff-4aa4-b9b6-c36adb8bb1c4> | 3.78125 | 213 | Knowledge Article | Science & Tech. | 27.450814 |
Global Climate Change and Energy
The Climate Challenge: Our Choices
The amount of carbon dioxide (CO2) in the atmosphere is increasing. The world is getting warmer. If this continues, the ecosystems and economies of the world will be dramatically altered. What can be done about this?
The Climate Challenge
Click on image to launch a simulation that lets you decide the future of the planet.
Click on the picture above to launch a simulation that lets you decide the future of the planet. We use water in a bathtub to represent CO2 in the atmosphere. Can you keep the bathtub from overflowing?
This simulation was designed in collaboration with The Sustainability Institute and the Society for Organizational Learning. The numbers that drive the graphs and the bathtub animation were calculated in a system dynamics model built by Dr. Thomas Fiddaman. His research can be viewed here.
Watch Drew Jones of the Sustainability Institute discuss the Climate Challenge simulation. Click on image to start the video.
Read the information below to find out more about CO2, climate change, and the way we designed this simulation.
CO2 and Climate Change
We know that carbon dioxide (CO2) is increasing in the atmosphere from human activities such as burning of fossil fuels and deforestation. This increase is one of the major factors in global warming. There is no longer any scientific debate about this. The most recent report by the Intergovernmental Panel on Climate Change has confirmed this.
In 2007 the concentration of CO2 in the atmosphere is approximately 380 parts per million (ppm). Every year human activities add to that. Some scientists and economists in the climate science world, such as Nicholas Stern and James Hansen, have identified a concentration of 450 ppm as a maximum goal for CO2 that may avoid the most significant damage to the Earth's ecosystems and economies. There is a great deal of uncertainty about the severity of the effects associated with this or any other target level for CO2. We have chosen to use it for this simulation, but we could have set it higher or lower. As you play with the simulation, consider how the three scenarios would play out if the bathtub overflowed at a level other than 450 ppm.
Already there is much more CO2 in the atmosphere than at any time in the past 425,000 years. Follow the green line to the right edge of this graph. This is where we are today.
We cannot change what has already happened—the gray parts of the graphs in our simulation show the past from 1950 to 2007. Now the challenge is to find a way to decrease the CO2 emissions into the atmosphere.
The question is, by how much? This simulation challenges you to choose a future for human emissions of CO2. The top of the bathtub is 450 ppm of CO2. The goal: To keep CO2 in the atmosphere below the level of 450 ppm. But what is the best way to do this? Should we keep letting emissions increase? Should we keep emissions at the current level? Or should we reduce emissions?
The principle at work here is stock and flow. A “stock” is something that accumulates, in this case CO2 in the atmosphere, represented by the water. The bathtub stands in for the Earth’s atmosphere. Water (CO2) enters the bathtub (atmosphere) from the spigot above and leaves the bathtub through the drain below. This is the “flow,” a representation of how much goes in and how much goes out.
For the past 425,000 years the amount of CO2 in the atmosphere has fluctuated between 175 ppm and 300 ppm. The inflow (the amount of CO2 going into the atmosphere) and outflow (the amount of CO2 removed from the atmosphere) were sufficiently in balance during this period of time to keep the CO2 level within that range. In the past few decades the inflow has increased dramatically. The flows are now out of balance. More and more CO2 is entering the atmosphere, but not nearly as much is being removed. Thus, CO2 increasingly accumulates in the atmosphere. The amount now stands at a concentration of 380 ppm. In our simulation, the bathtub can overflow if the amount of CO2 in the atmosphere increases to the point of significantly altering the climate.
So what do you think we should do? Play the game to see if you can figure out how to stop global warming. Here are your choices:
• Allow Increasing CO2 Emissions
One scenario is to allow human emissions of CO2 to increase at roughly current levels. This means that governments around the world would not regulate CO2 emissions, and businesses and individuals would not take any special action to reduce CO2 emissions. Everything would continue on as it has been going. This “business as usual” or “status quo” approach asks: What if we did nothing?
The numbers used in this scenario were the “business as usual” estimates of the Intergovernmental Panel on Climate Change (IPCC), the international group dedicated to studying this issue. In this scenario, removals increase naturally, but are never able to keep up with the increase. By the year 2045 the levels would reach 450 ppm. This amount would cause significant changes in the atmosphere, and global warming would cause dramatic changes to the environment.
In our climate simulation, the bathtub would overflow by 2045, and we would experience even more significant climate change. This future is what the IPCC scientists expect will happen if we make no major changes to avert climate change.
• Level Off CO2 Emissions
Another option is to gradually stop the increase of human-caused emissions of CO2 in the decades following 2007. This scenario is based loosely on the Kyoto Protocol, an international treaty to reduce CO2 emissions. The treaty was negotiated by the United Nations Framework Convention on Climate Change (UNFCCC) in 1997 and went into effect in 2005. More than 150 nations were involved in creating the Kyoto Protocol, and 84 countries signed the agreement. However, the agreement also needed to be ratified by each country, and not all who signed the protocol ratified or approved it at home. The leading industrialized countries that have ratified it include Russia, Japan, and the members of the European Union. Other countries have since joined the agreement, bringing the total to more than 165. The United States and Australia are among the industrialized countries that signed but did not ratify the Kyoto Protocol.
The countries that did agree to follow the protocol produce about 60% of the world's greenhouse gases. The agreement is for industrialized countries to reduce greenhouse gas emissions to 5.2% lower than 1990 levels by 2012. This would roughly level off CO2 emissions. But is stabilizing emissions enough to prevent CO2 levels from going above 450 ppm?
• Reduce CO2 Emissions
What if all the governments in the world agreed to significantly reduce CO2 emissions? A plan like this has been proposed by former U.S. vice president Al Gore. Climatologist David Stern has proposed something similar. This scenario calls for reducing emissions of CO2 by 58% of the 2007 level by 2070. What would happen to our bathtub? Would it still overflow?
Questions about CO2 and Climate Change
What are CO2 emissions?
Carbon dioxide (CO2) is a gas that makes up a tiny fraction of the Earth’s atmosphere. It occurs naturally, mostly as a result of breathing, of decay, from the burning of wood and the release of CO2 from the oceans. CO2 emissions also result from the burning of fossil fuels and other human activities. It is this human-generated CO2 that we are showing in our simulation.
What are CO2 removals?
Carbon sinks remove carbon from the atmosphere. The main carbon sinks responsible for removals are photosynthesis and absorption by the oceans.
The oceans are both a carbon sink and a source of CO2. There is an ongoing exchange of CO2 between the atmosphere and the oceans. The balance depends upon factors including water temperature and the concentrations of CO2 in both the oceans and the atmosphere.
For hundreds of thousands of years emissions and removals remained roughly in balance with the concentration of CO2 in the atmosphere varying between 180 and 300 parts per million (ppm). This was true until humans began to burn fossil fuels during the Industrial Revolution. These additional CO2 emissions are the problem. Currently much more CO2 is being released than can be taken up by plants or absorbed by the ocean. The concentration of CO2 in the atmosphere is now 380 ppm and rising.
Why do removals seem to follow emissions?
Carbon dioxide flows between the atmosphere, biosphere, and oceans in order to maintain a balanced distribution. When the concentration of CO2 in the atmosphere increases, two things happen:
- "CO2 fertilization" occurs. Plants use more CO2 for photosynthesis, growing more leaves and woody material.
- The surface ocean—mixed by wind-driven waves— quickly absorbs CO2, which then diffuses more gradually into the deep ocean.
Both processes have limits. The oceans can only absorb so much CO2 before releasing as much CO2 back to the atmosphere as was taken up. For plants, the limitations on growth from water and other nutrients become important. This is called “sink saturation.”
In the "Allow Increased Emissions" future, removals increase because the rapidly-growing concentration of CO2 in the atmosphere continues to drive uptake. Part of the excess CO2 is absorbed by plants and the oceans.
In the "Reduce CO2 Emissions" future, removals fall because the excess of CO2 in the atmosphere above that in the biosphere and oceans is not so great.
What’s the connection between CO2 and climate change?
We know that CO2 absorbs heat from the Sun and releases it into the atmosphere. Going back millions of years, when the concentration of CO2 was higher, the Earth was warmer. Eventually CO2 concentration dropped and the world became cooler. Since the 1740s CO2 concentration has increased significantly, and the average temperature on Earth has also increased.
Why does the CO2 level in the atmosphere continue to rise even when emissions are leveled off?
This scenario corresponds to clicking the middle button in our simulation: "LEVEL OFF CO2 EMISSIONS." After about 2045 emissions are no longer increasing. At that point removals are also level from year to year. But since emissions are greater than removals, each year more CO2 goes into the atmosphere than is removed. So the amount of CO2 in the atmosphere continues to rise.
It’s like a bus traveling through the city with people getting on and off. Let’s say that at one stop 5 people get on the bus and 3 get off. At the next stop the same thing happens: 5 people get on and 3 get off. If this pattern continues the bus will get very crowded. The number of people getting on the bus is level: 5 at each stop. But since only three people get off there is an increase of 2 people each time the bus stops. In order to keep the crowding from getting worse, the same number of people have to get off the bus as get on. And to reduce the crowding, more people have to get off than get on.
In order to keep the concentration of CO2 in the atmosphere at a given level, say 450 ppm, emissions and removals have to be equal. In order to reduce the concentration of CO2 in the atmosphere, removals have to be greater than emissions. | <urn:uuid:f1cfd9cc-b3bb-4d87-9523-e66d2c6327d3> | 3.875 | 2,403 | Knowledge Article | Science & Tech. | 52.23113 |
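To make the stock-and-flow idea concrete, here is a toy numerical sketch of the bathtub. Every coefficient below is an illustrative assumption, not a value from the system dynamics model that drives the actual simulation:

```python
# Toy stock-and-flow sketch of the CO2 "bathtub" described above.

def simulate(years, inflow, removal_fraction=0.01, start_ppm=380.0):
    """Integrate atmospheric CO2 (the stock) one year at a time.

    inflow: function year -> ppm of CO2 added that year (emissions)
    removal_fraction: share of the excess above a 280 ppm baseline
        removed each year -- a crude stand-in for the sinks
    """
    ppm, history = start_ppm, []
    for year in years:
        ppm += inflow(year) - removal_fraction * (ppm - 280.0)
        history.append((year, ppm))
    return history

years = range(2007, 2101)
scenarios = {
    "increase": lambda y: 2.0 + 0.03 * (y - 2007),           # business as usual
    "level":    lambda y: 2.0,                               # hold emissions flat
    "reduce":   lambda y: max(2.0 - 0.02 * (y - 2007), 0.4), # ramp emissions down
}

final = {name: simulate(years, f)[-1][1] for name, f in scenarios.items()}
for name, ppm in final.items():
    print(f"{name:8s} -> {ppm:6.1f} ppm in 2100")
```

Even with flat emissions the stock keeps rising for decades, because inflow still exceeds outflow; only when emissions fall toward the removal rate does the concentration level off. That is the same point the bus analogy makes.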
Radio Jet in 3C120
November 4, 2003
This pseudo color image shows the structure of the 1.66 GHz emission from the radio source 3C120 as measured by the VLBA in June 1994. The resolution is 0.015 by 0.007 arcseconds with the higher resolution in the east-west direction. The radio jet is seen extending from the bright core region, where the energy is presumably generated, to about 0.5 arcseconds, or almost 1000 light years. Other observations with both VLBI and the VLA have shown this source to have emission on scales from significantly less than a light year to well over a million light years. The inner few light years of the jet have long been known to show motions of components at speeds faster than that of light. This is the well known "superluminal motion" seen in many bright, compact radio sources. No violation of relativity is involved, just a projection effect for a source moving near the speed of light almost directly toward the observer.
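As a sanity check on the quoted numbers (the distance below is derived here, not stated in the source), the small-angle relation linear size = angle × distance gives:

```python
import math

ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)  # 1 arcsecond in radians

angular_size_arcsec = 0.5   # extent of the jet, from the text
linear_size_ly = 1000.0     # "almost 1000 light years", from the text

# Small-angle approximation: distance = size / angle (in radians)
distance_ly = linear_size_ly / (angular_size_arcsec * ARCSEC_TO_RAD)
print(f"implied distance ~ {distance_ly / 1e6:.0f} million light years")
```

This comes out to roughly 400 million light years, broadly consistent with 3C120 being a relatively nearby active galaxy.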
Topics: Interferometry, Astrophysics, Radio astronomy, energy, Technology Internet, Very Long Baseline Interferometry, Speed of light, Superluminal motion | <urn:uuid:9103d09a-758d-4812-af72-aa5e67315740> | 2.859375 | 243 | Truncated | Science & Tech. | 53.587727 |
In this section, you will learn to convert an integer into a double. The java.lang package provides the functionality for converting integer data into a double.

This program helps you convert integer data into a double. Define a class named "IntegerToDouble". The program takes an integer value at the console and converts it into a double. The toString() method of the Double class then returns a string representation of the double value for display; the reverse conversion, from a string to a double, is done with Double.parseDouble().
Here is the code of this program:
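The original code listing appears to be missing from this copy of the tutorial. The sketch below is a reconstruction consistent with the description; only the class name IntegerToDouble comes from the text, and the console-reading boilerplate is replaced here by a fixed sample value:

```java
public class IntegerToDouble {

    // Widening primitive conversion: every int value is exactly
    // representable as a double, so no explicit cast is required.
    static double toDouble(int value) {
        return value;
    }

    public static void main(String[] args) {
        int value = 5;  // stands in for the integer read at the console
        double converted = toDouble(value);
        System.out.println("Enter the integer value:");
        System.out.println(value);
        // Double.toString gives the string form of the double, e.g. "5.0"
        System.out.println(Double.toString(converted));
    }
}
```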
The output of this program is given below.
Enter the integer value:
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | <urn:uuid:15e4392d-5f00-4caf-8f81-24ebf176e9c5> | 3.5 | 178 | Tutorial | Software Dev. | 47.666827 |
The DOCTYPE describes to the Web browser displaying the page which set of rules to use when parsing (reading and displaying) the page.
So using one set of rules will display one way. Using a different set of rules means the page will display some other way.
More info here: http://www.w3schools.com/tags/tag_doctype.asp
Eye for Video www.cidigitalmedia.com
To put a finer point on this issue, the <!DOCTYPE> is particularly important to web designers because using a complete <!DOCTYPE> statement sets browsers to Standards Compliance Mode, in which they are all at least *trying* to render pages according to the W3C Standards. Using an incomplete <!DOCTYPE> or omitting it entirely leaves browsers in what's called "Quirks Mode," in which they render pages with their own methods - each of which is different to one degree or another. Simply put, if your page relies on anything more sophisticated than <table>s and <font> tags, a complete <!DOCTYPE> is vital for cross-browser compatibility.
Your example "<!doctype>" is invalid and does nothing. Browsers will use Quirks Mode if you use that. Use a complete <!DOCTYPE> that includes the document type and a URL so that the browser will use Standards Compliance Mode. For example:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
No, that's not true.
Only the most recent FX edition seems to have introduced a bug that makes it complain. But they seem to have forgotten that the default browser language (and the reason browsers exist) is, was, and always will be HTML code interpretation.

Anyway, until FX reverts to complying with the convention it has always followed, going with the <!DOCTYPE html> declaration will do you no harm.
It mostly needs tweaks for IE (surprise), but I think it's doable. You still need to learn where to put stuff, as some of the tags aren't as straightforward as you might think. <section>, <aside>, <header>, and <footer> are good examples of tags that have flexibility. I would definitely read up on it to help sort it out. I'm going through a book on it and making a mock-up from what I learn. There are also a number of technologies getting lumped into HTML 5 as well. The Missing Manual series has an HTML 5 book that covers the topics, and it's the one I'm going through.
Here are the basics of code to make it more cross-browser friendly. I might add that I could see giving the <video> and <audio> tags block-level display as well.

Put this into <head>, as IE doesn't even recognize the HTML 5 tags, and this counters it.
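The actual snippet seems to have been lost from this copy of the post. What the advice describes is the widely used pattern of that era (a sketch, not the poster's exact code): a CSS rule making the new elements block-level, plus a script so that old IE will recognize and style them at all:

```html
<style>
  /* Older browsers don't know these are block-level elements */
  article, aside, figure, footer, header, nav, section {
    display: block;
  }
</style>
<script>
  // Old IE ignores unknown elements unless they are created first
  // (the trick popularized by the "html5shiv" script).
  var tags = 'article aside figure footer header nav section'.split(' ');
  for (var i = 0; i < tags.length; i++) {
    document.createElement(tags[i]);
  }
</script>
```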
Water sampling devices range from a bucket dropped over the side of a small boat to large water bottles sent toward the deep ocean seafloor on a wire. Probably the most commonly used water sampler is known as a rosette. It is a framework with 12 to 36 sampling bottles (typically ranging from 1.2- to 30-liter capacity) clustered around a central cylinder, where a CTD or other sensor package can be attached.
- Relatively simple to use (rosette only)
- Heavy to transport/deploy
- May be very fragile
- Heavy-gauge winch wire
- High-capacity boat
Introductory text modified from: University-National Oceanographic Laboratory System (UNOLS), The Research Fleet, edited by Vicky Cullen; Moss Landing Marine Laboratories, Moss Landing, CA; 2000. | <urn:uuid:72df81ea-5d0b-49b3-819e-a1cf86dfc38c> | 3.03125 | 164 | Knowledge Article | Science & Tech. | 28.08 |
This species has a very wide range from Central America (recorded as far north as Costa Rica) south throughout tropical South America. Costa Rica: Atlantic slope from sea level to 800m.
This species inhabits wet forest habitats. It is most often encountered by looking under stones or by sifting leaf litter from the forest floor (Winkler samples). An alate queen was collected at La Selva Biological Station on 2 November 1994.
Found most commonly in these habitats: 131 times found in mature wet forest, 64 times found in tropical rainforest, 40 times found in montane wet forest, 34 times found in ridgetop cloud forest, 38 times found in tropical wet forest, 5 times found in 2º lowland rainforest, 8 times found in tropical moist forest, 5 times found in wet forest, 1 times found in Puesto18,135G. 10m., 5 times found in montane rainforest, ...
Collected most commonly using these methods or in the following microhabitats: 262 times MiniWinkler, 38 times MaxiWinkler, 11 times Berlese, 11 times Winkler, 14 times Mini Winkler, 9 times search, 1 times Search & Berles, 1 times Baiting, 1 times FIT, 1 times Fogging
Elevations: collected from 50 - 1150 meters, 471 meters average | <urn:uuid:b93f954d-9cc0-4b3c-8d49-a659ca328172> | 2.953125 | 281 | Knowledge Article | Science & Tech. | 53.134878 |
Mount Everest is often the site of impressive physical feats, as climbers brave brutal conditions to scale the tallest peak in the world. But the extreme altitude takes quite a toll on the body, causing hypoxia, muscle loss, sleep apnea, and other ill effects. Many of the same symptoms are more commonly found in elderly patients suffering from heart conditions or other chronic ailments—meaning Everest provides a natural laboratory for researchers to gain a better understanding of these diseases.
What’s the News: Keeping track of what’s happening inside the body often requires a great deal of equipment outside it: Just think of the tangle of sensors in any hospital room. Now, though, engineers have developed an ultra-thin electrical circuit that can be pasted onto the skin just like a temporary tattoo. Once it’s served its purpose, you can simply peel it off. These patches could be provide a simpler, less restrictive way to monitor a patient’s vital signs, or even let wearers command a computer with speech or other slight movements.
The silicon from which most electronics are built is a useful, durable material up to about 350 degrees Fahrenheit (but don’t go sticking your iPhone in the oven). Three hundred fifty isn’t bad, says engineer Alton Horsfall of Newcastle University in the U.K., but not nearly good enough for his mission: monitoring volcanoes. Horsfall and colleague Nick Wright say their research into a different material, silicon carbide (SiC), shows that it could work at temperatures in excess of 1,000 degrees F, and might be just what they need to keep watch on inhospitable places like the blazing-hot mouth of a volcano.
The silicon and carbon in silicon carbide bond very strongly, permitting them to survive extreme temperatures. But the material’s pricey and hard to work with for the same reason. So while organizations like NASA have done silicon carbide research, the material hasn’t spread to a multitude of applications. | <urn:uuid:11888fc0-4204-44b3-b0c5-7403211699dc> | 3.796875 | 411 | Truncated | Science & Tech. | 41.269568 |
Batman’s ability to fly is a falsehood. Or at least so says science. We didn’t know science was into disproving super-hero movies (that’s a deep well to drink from) but to each his own. But back in December the Journal of Physics Special Topics took on the subject with their scholarly paper entitled Trajectory of a Falling Batman. The equations presented in the two-page white paper may be above your head, but the concepts are not.
It’s not that Batman can’t fly in the way explained in the film. It’s that he can’t land without great bodily harm. By analyzing the cape in this frame of the film, researchers used Batman’s body height to establish wing span and area. The numbers aren’t good. Top speed will reach about 110 km/h with a sustained velocity of 80 km/h. That’s 80 mph at top speed and just under 50 mph when he comes in for a landing. | <urn:uuid:081f5372-8d7b-470b-a955-d71b7bed6ba4> | 3.375 | 212 | Personal Blog | Science & Tech. | 80.61014 |
Provided by: manpages-pt-dev_20040726-2_all
strcpy, strncpy - copy a string
char *strcpy(char *dest, const char *src);
char *strncpy(char *dest, const char *src, size_t n);
The strcpy() function copies the string pointed to by src (including
the terminating ‘\0’ character) to the array pointed to by dest. The
strings may not overlap, and the destination string dest must be large
enough to receive the copy.
The strncpy() function is similar, except that not more than n bytes of
src are copied. Thus, if there is no null byte among the first n bytes
of src, the result will not be null-terminated.
In the case where the length of src is less than that of n, the
remainder of dest will be padded with nulls.
The strcpy() and strncpy() functions return a pointer to the
destination string dest.
If the destination string of a strcpy() is not large enough (that is,
if the programmer was stupid/lazy, and failed to check the size before
copying) then anything might happen. Overflowing fixed length strings
is a favourite cracker technique.
SVID 3, POSIX, BSD 4.3, ISO 9899
bcopy(3), memccpy(3), memcpy(3), memmove(3) | <urn:uuid:8abcfb20-6e14-44d7-b0ae-d43c55661726> | 3.6875 | 326 | Knowledge Article | Software Dev. | 70.779535 |
Introduction to Special Relativity

In 1905, Albert Einstein published (among other things) a paper called "On the Electrodynamics of Moving Bodies" in the journal Annalen der Physik. The paper presented the theory of special relativity, based upon two postulates:
Actually, the paper presents a more formal, mathematical formulation of the postulates. The phrasing of the postulates are slightly different from textbook to textbook because of translation issues, from mathematical German to comprehensible English.
Einstein's Postulates

Principle of Relativity (First Postulate): The laws of physics are the same for all inertial reference frames.
Principle of Constancy of the Speed of Light (Second Postulate): Light always propagates through a vacuum (i.e. empty space or "free space") at a definite velocity, c, which is independent of the state of motion of the emitting body.
The second postulate is often mistakenly written to include that the speed of light in a vacuum is c in all frames of reference. This is actually a derived result of the two postulates, rather than part of the second postulate itself.
The first postulate is pretty much common sense. The second postulate, however, was the revolution. Einstein had already introduced the photon theory of light in his paper on the photoelectric effect (which rendered the ether unnecessary). The second postulate, therefore, was a consequence of massless photons moving at the velocity c in a vacuum. The ether no longer had a special role as an "absolute" inertial frame of reference, so it was not only unnecessary but qualitatively useless under special relativity.
As for the paper itself, the goal was to reconcile Maxwell's equations for electricity and magnetism with the motion of electrons near the speed of light. The result of Einstein's paper was to introduce new coordinate transformations, called Lorentz transformations, between inertial frames of reference. At slow speeds, these transformations were essentially identical to the classical model, but at high speeds, near the speed of light, they produced radically different results.
Effects of Special Relativity

Special relativity yields several consequences from applying Lorentz transformations at high velocities (near the speed of light). Among them are:
- Time dilation (including the popular "twin paradox")
- Length contraction
- Velocity transformation
- Relativistic velocity addition
- Relativistic doppler effect
- Simultaneity & clock synchronization
- Relativistic momentum
- Relativistic kinetic energy
- Relativistic mass
- Relativistic total energy
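All of these effects are governed by the Lorentz factor γ = 1/√(1 − v²/c²). A quick numerical sketch (the sampled speeds are arbitrary) shows how gently it departs from 1 at modest speeds and how it blows up near c:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2 / c^2); defined only for |v| < c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for fraction in (0.1, 0.5, 0.9, 0.99, 0.999):
    gamma = lorentz_factor(fraction * C)
    print(f"v = {fraction:5.3f} c  ->  gamma = {gamma:8.3f}")
```

Time dilation multiplies elapsed proper time by γ, length contraction divides lengths by it, and relativistic kinetic energy grows without bound as v approaches c, which is why a massive object cannot be accelerated to the speed of light.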
Mass-Energy Relationship

Einstein was able to show that mass and energy were related, through the famous formula E=mc2. This relationship was proven most dramatically to the world when nuclear bombs released the energy of mass in Hiroshima and Nagasaki at the end of World War II.
Speed of Light

No object with mass can accelerate to precisely the speed of light. A massless object, like a photon, can move at the speed of light. (A photon doesn't actually accelerate, though, since it always moves exactly at the speed of light.)
But for a physical object, the speed of light is a limit. The kinetic energy at the speed of light goes to infinity, so it can never be reached by acceleration.
Some have pointed out that an object could in theory move at greater than the speed of light, so long as it did not accelerate to reach that speed. So far no physical entities have ever displayed that property, however.
Adopting Special Relativity

In 1908, Max Planck applied the term "theory of relativity" to describe these concepts, because of the key role relativity played in them. At the time, of course, the term applied only to special relativity, because there was not yet any general relativity.
Einstein's relativity was not immediately embraced by physicists as a whole, because it seemed so theoretical and counterintuitive. When he received his 1921 Nobel Prize, it was specifically for his solution to the photoelectric effect and for his "contributions to Theoretical Physics." Relativity was still too controversial to be specifically referenced.
Over time, however, the predictions of special relativity have been shown to be true. For example, clocks flown around the world have been shown to slow down by the amount predicted by the theory.
Major Section: MISCELLANEOUS
:max                         ; the command most recently typed by the user
:x                           ; synonymous with :max
(:x -1)                      ; the command before the most recent one
(:x -2)                      ; the command before that
:x-2                         ; synonymous with (:x -2)
5                            ; the fifth command typed by the user
1                            ; the first command typed by the user
0                            ; the last command of the system initialization
-1                           ; the next-to-last initialization command
:min                         ; the first command of the initialization
:start                       ; the last command of the initial ACL2 logical world
fn                           ; the command that introduced the logical name fn
(:search (defmacro foo-bar)) ; the first command encountered in a search from :max
                             ; to 0 that either contains defmacro and foo-bar in the
                             ; command form or contains defmacro and foo-bar in some
                             ; event within its block.
The recorded history of your interactions with the top-level ACL2 command loop is marked by the commands you typed that changed the logical world. Each such command generated one or more events, since the only way for you to change the logical world is to execute an event function. See command and see events. We divide history into ``command blocks,'' grouping together each world changing command and its events. A ``command descriptor'' is an object that can be used to describe a particular command in the history of the ongoing session.
Each command is assigned a unique integer called its ``command number'' which indicates the command's position in the chronological ordering of all of the commands ever executed in this session (including those executed to initialize the system). We assign the number 1 to the first command you type to ACL2. We assign 2 to the second and so on. The non-positive integers are assigned to ``prehistoric'' commands, i.e., the commands used to initialize the ACL2 system: 0 is the last command of the initialization, -1 is the one before that, etc.
The legal command descriptors are described below. We use n to denote any integer, sym to denote any logical name (see logical-name), and cd to denote, recursively, any command descriptor. Each command descriptor is listed together with the command it describes:
:max -- the most recently executed command (i.e., the one with the largest command number)

:x -- synonymous with :max

:x-k -- synonymous with (:x -k), if k is an integer and k>0

:min -- the earliest command (i.e., the one with the smallest command number and hence the first command of the system initialization)

:start -- the last command when ACL2 starts up

n -- command number n (If n is not in the range :min<=n<=:max, n is replaced by the nearest of :min and :max.)

sym -- the command that introduced the logical name sym

(cd n) -- the command whose number is n plus the command number of the command described by cd

(:search pat cd1 cd2) -- In this command descriptor, pat must be either an atom or a true list of atoms, and cd1 and cd2 must be command descriptors. We search the interval from cd1 through cd2 for the first command that matches pat. Note that if cd1 occurs chronologically after cd2, the search is "backwards" through history, while if cd1 occurs chronologically before cd2, the search is "forwards". A backwards search will find the most recent match; a forward search will find the chronologically earliest match. A command matches pat if either the command form itself or one of the events in the block contains pat (or all of the atoms in pat if pat is a list).

(:search pat) -- the command found by (:search pat :max 0), i.e., the most recent command matching pat that was part of the user's session, not part of the system initialization.
In high energy physics, the most frequently encountered standard distributions governing frequencies (e.g. for events) are the Poisson distribution, the Gaussian distribution, and the binomial distribution. The statistical literature is replete with excellent discussions of these and other distributions, and a full explication will not be given here. (See the statistics books and statistics reference section of this site.) However, here a few rules of thumb and guidelines are in order:
One of the most frequently occurring problems in high-energy physics is to compare an observed distribution with a prediction, for example from a simulation. Indeed, an analysis might be designed to extract some physical parameter from the simulation or prediction which best fits the data. (See also the page on goodness of fit tests.) For data points with Gaussian errors, one can write a chi square function
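The chi square formula itself appears to have been lost in extraction (it was most likely an image). A standard reconstruction, consistent with the variable definitions in the following sentence, is:

```latex
\chi^2 \;=\; \sum_{i=1}^{N} \frac{\left(y_i - \tilde{y}_i\right)^2}{\sigma_i^2}
```

where σi is the Gaussian uncertainty on the i-th data point.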
Here the yi are the observed data, and the yi with ~'s are the predictions of the model at each point. The chi square expresses the deviation of the observed data from the fit, weighted inversely by the uncertainties in the individual points. The chi square can be either used to test how well a particular model describes the data or, if the prediction is a function of some parameters ak then the optimal values of the parameters can be found by minimizing the chi square.
Indeed, the main attractiveness of the chi square in high energy physics lies in extracting such parameters, particularly in the case where the function y(a1, a2, ...,am) is linear in the ak. In this case, one can find the parameters, and their Gaussian uncertainties, by inverting an m-by-m matrix to find the covariance matrix. This well-known technique is described in many statistical references.
The main pitfall here is that the purely Gaussian case is in fact rather rare, usually because the data points come from Poisson-distributed numbers of events which are not well approximated by Gaussian distributions. Using a standard chi square approach in such cases leads to biased estimates of both the parameters and their uncertainties. So pervasive is this problem that likelihood-based fits are generally preferred for event-counting data.

A likelihood function simply expresses how likely the observed distribution is, given some model. For the Gaussian case one can in fact write
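The Gaussian likelihood expression did not survive extraction; a standard reconstruction in the same notation (yi observed, ỹi predicted, σi the Gaussian uncertainty) is:

```latex
\mathcal{L} \;=\; \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_i}\,
\exp\!\left(-\frac{\left(y_i - \tilde{y}_i\right)^2}{2\sigma_i^2}\right),
\qquad
-2\ln\mathcal{L} \;=\; \chi^2 + \text{const},
```

which makes explicit why minimizing the chi square and maximizing the Gaussian likelihood are equivalent.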
In most analyses, the data samples are unique enough that no pre-packaged program will be able to do the final analysis, and the high-energy physicist has no alternative but to write a special program. At the core of this program will be some function which calculates the likelihood L of one's data xi given some parameters ai, and will in most cases also depend on some additional auxiliary (or "nuisance") parameters and/or uncertainties.
Most likelihood functions will express the product of the probabilities for observing the data based on the Poisson probability in each data bin (or each event in the unbinned case). In writing likelihoods incorporating systematic uncertainties it is important to keep in mind the nature of each uncertainty: whether it affects signal, background, or both, and whether it results in uncorrelated or correlated effects, bin-to-bin or source-to-source. These latter effects can be tricky and in some cases intractable. See the recommendations section on systematic uncertainties for a more detailed discussion of this issue.
The most commonly employed program in high energy physics to perform function minimization is MINUIT, by Fred James of CERN. MINUIT is embedded into ROOT and PAW, but can also be run by stand-alone programs. The documentation is quite extensive and useful, see also Joel Heinrich's tips and tricks for using MINUIT.
For hard-core types faced with functions of many parameters (> 100) one might find it useful to use the FUMILI package, a now-ancient but extremely fast gradient-descent program written by Soviet missile scientists in the early 1960's. FUMILI has been successfully used on problems on which MINUIT has failed to converge at all, or converged too slowly.
For least-squares minimization of functions which are linear in the unknown parameters, there are several packages to determine the parameters, errors, and covariance matrices given the input data. It is straightforward (and rewarding), however, to write one's own software to perform these procedures.
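As a hedged sketch of that do-it-yourself procedure (illustrative Python/NumPy, not code from MINUIT or FUMILI), here is a fit of a model linear in its parameters, obtaining the parameters and their covariance matrix by inverting the weighted normal-equations matrix:

```python
import numpy as np

def linear_chi2_fit(x, y, sigma):
    """Fit y = a0 + a1*x by chi-square minimization.

    Returns the best-fit parameters and their covariance matrix,
    obtained by inverting the weighted normal-equations matrix.
    """
    x, y, sigma = map(np.asarray, (x, y, sigma))
    A = np.column_stack([np.ones_like(x), x])   # design matrix (linear in the params)
    W = 1.0 / sigma**2                          # inverse-variance weights
    M = A.T @ (W[:, None] * A)                  # the m-by-m matrix to invert
    b = A.T @ (W * y)
    cov = np.linalg.inv(M)                      # parameter covariance matrix
    params = cov @ b                            # best-fit parameters
    return params, cov

# Example: points lying exactly on y = 1 + 2x, unit uncertainties.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x
params, cov = linear_chi2_fit(x, y, np.ones_like(x))
```

For more parameters the design matrix simply gains more columns; the same inversion yields the full covariance matrix, exactly as described in the standard references.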
Science Fair Project Encyclopedia
The phase of a wave relates the position of a feature, typically a peak or a trough of the waveform, to that same feature in another part of the waveform (or, which amounts to the same, on a second waveform). The phase may be measured as a time, distance, a fraction of the wavelength, or as an angle.
A phase shift is simply a difference or change in phase.
It is an essentially abstract and arbitrary notion with no absolute meaning. To get a grasp of it, consider the two waves A and B in this diagram:
It is apparent that the positions of the peaks (X), troughs (Y) and zero-crossing points (Z) of both waves all coincide. The phase difference of the waves is thus zero; the waves are said to be in phase.
If the two in-phase waves A and B are added together (for instance, if they are two light waves shining on the same spot), the result will be a third wave of the same wavelength as A and B, but with twice the amplitude. This is known as constructive interference.
Now consider waves A and C:
A and C are also of the same amplitude and wavelength. However, it can be seen that although the zero-crossing points (Z) are coincident between A and C, the positions of the peaks and troughs are reversed, that is an X on A becomes a Y on C, and vice versa. In this case, the two waves are said to be out of phase or in antiphase, or the phase difference of the two waves is π radians, or half the wavelength (λ/2).
Also consider waves A and D:
In this situation, a peak (X) on wave A becomes a zero-crossing point (Z) on D, a zero-point becomes a peak, and so on. The waves A and D can be said to be in quadrature, or exactly π/2, or λ/4 out of phase.
In nature, waveforms are often encountered as sine waves, because of the ubiquitous harmonic motion in physics. In this case the wave amplitude φ is given as a function of a variable, say x, by φ(x) = A sin(αx + φ0). In such an expression the constant φ0 is called the phase of the sine (the other constant A is the amplitude). If we plot this function, varying the value of φ0 translates the curve, i.e., amounts to taking a new relative "observational point". As it is easier to work with exponentials, the expression is more profitably written φ(x) = A exp(i(αx + φ0)), with i the square root of −1. Here the factor exp(iφ0) can be pulled out; it is called a pure phase, since it contains only phase information, and multiplying a function by such a complex exponential changes its phase only.
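These phase relationships are easy to verify numerically. The following short Python/NumPy sketch (an illustration added here, not part of the original article) reproduces the in-phase, antiphase, and quadrature cases:

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 1001)  # one full wavelength

A = np.sin(x)                  # reference wave
B = np.sin(x)                  # in phase with A (phase difference 0)
C = np.sin(x + np.pi)          # in antiphase (phase difference pi)
D = np.sin(x + np.pi / 2)      # in quadrature (phase difference pi/2)

constructive = A + B           # amplitude doubles
destructive = A + C            # the waves cancel everywhere
quadrature = A + D             # amplitude is sqrt(2), neither 0 nor 2
```

The sums confirm the text: in-phase waves interfere constructively (peak amplitude 2), antiphase waves cancel, and quadrature waves add to an intermediate amplitude of √2.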
Coherence is the quality of a wave to display a well-defined phase relationship in different regions of its domain of definition.
In physics, quantum mechanics ascribes waves to physical objects. The wave function is complex, and since its square modulus is associated with the probability of observing the object, the complex character of the wave function is associated with the phase. Since complex algebra is responsible for the striking interference effects of quantum mechanics, the phase of particles is therefore ultimately related to their quantum behavior.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Smoke signals are an early form of the optical telegraph, developed by Native North Americans and the Chinese (as in the signal towers of the Great Wall). By covering an open fire with a blanket and suddenly removing it for a short time, a puff of smoke is generated. With some training, the sizes and shapes of, and intervals between, these puffs can be controlled in a way that can be observed from a long distance and used to carry information.
Apparent to anyone within its visual range, the smoke signal is not a standardized code that can be easily translated. Like other forms of communication, the signals are often of a predetermined pattern discerned by sender and receiver. Still, the smoke signal can abide by other universal patterns of communication. For example, as in other distress calls, a pattern of three would indicate a call for help.
American solar satellite. One launch, 1997.08.25.
The primary purpose of ACE was to determine and compare the isotopic and elemental composition of several distinct samples of matter, including the solar corona, the interplanetary medium, the local interstellar medium, and Galactic matter.
ACE was conceived at a meeting on June 19, 1983 at the University of Maryland. The meeting was hosted by George Gloecker and Glen Mason. The participants were Drs. L. F. Burlaga, S. M. Krimigis, R. A. Mewaldt, and E. C. Stone. This meeting had been preceded by preliminary documentation from the Johns Hopkins University Applied Physics Laboratory (APL) and the University of Maryland under the proposal name of Cosmic Composition Explorer. An unsolicited proposal was put together and forwarded to the NASA Explorer Program Office later that year, but was not acted upon.
The proposal was resurrected at the instigation of Dr. Vernon Jones and officially resubmitted to NASA in 1986 as part of the Explorer Concept Study Program. In 1988, the ACE mission was selected for a one-year "Phase A" (concept) Study. This study was a collaborative effort between spacecraft design and science teams.
The ACE Mission officially began on 22 April 1991 when the contract between NASA/GSFC and the California Institute of Technology was signed. APL, designer and builder of the ACE spacecraft, was involved in planning for Phase B (definition). The early ACE Spacecraft effort (April to July 1991) was primarily for ACE mission support, spacecraft system specification and ACE instrument support and interface definition. Phase B of the ACE mission officially began in August 1992.
The Mission Preliminary Design Review was held in November 1993. Phase C/D (implementation) began shortly thereafter.
Mission and Spacecraft Characteristics
The spacecraft was 1.6 meters across and 1 meter high, not including the four solar arrays and the magnetometer booms attached to two of the solar panels. At launch, it weighed 785 kg, which included 189 kg of hydrazine fuel for orbit insertion and maintenance. The solar arrays generated about 500 watts of power. The spacecraft spun at 5 rpm, with the spin axis generally pointed along the Earth-sun line and most of the scientific instruments on the top (sunward) deck.
In order to get away from the effects of the Earth's magnetic field, the ACE spacecraft traveled almost 1.5 million km from the Earth to the Earth-sun libration point (L1). By orbiting the L1 point, ACE stayed in a relatively constant position with respect to the Earth as the Earth revolved around the sun.
The primary purpose of ACE was to determine and compare the isotopic and elemental composition of several distinct samples of matter, including the solar corona, the interplanetary medium, the local interstellar medium, and Galactic matter. The nine scientific instruments on ACE performed:
- Comprehensive and coordinated composition determinations
  - Ionic charge state
- Observations spanning broad dynamic range
  - Solar wind to galactic cosmic ray energies (~100 eV/nucleon to ~500 MeV/nucleon)
  - Hydrogen to Zinc (Z = 1 to 30)
  - Solar active and solar quiet periods
- Investigations of the origin and evolution of solar and galactic matter
  - Elemental and isotopic composition of matter
  - Origin of the elements and subsequent evolutionary processing
  - Particle acceleration and transport in nature
Electric System: 0.50 average kW.
AKA: Advanced Composition Explorer.
Gross mass: 785 kg (1,730 lb).
Unfuelled mass: 596 kg (1,313 lb).
Height: 1.00 m (3.28 ft).
First Launch: 1997.08.25.
Number: 1 .
Delta The Delta launch vehicle was America's longest-lived, most reliable, and lowest-cost space launch vehicle. Development began in 1955 and it continued in service in the 21st Century despite numerous candidate replacements.
Associated Launch Vehicles
Delta American orbital launch vehicle. The Delta launch vehicle was America's longest-lived, most reliable, and lowest-cost space launch vehicle. Delta began as Thor, a crash December 1955 program to produce an intermediate range ballistic missile using existing components, which flew thirteen months after go-ahead. Fifteen months after that, a space launch version flew, using an existing upper stage. The addition of solid rocket boosters allowed the Thor core and Able/Delta upper stages to be stretched. Costs were kept down by using first and second-stage rocket engines surplus to the Apollo program in the 1970's. Continuous introduction of new 'existing' technology over the years resulted in an incredible evolution - the payload into a geosynchronous transfer orbit increasing from 68 kg in 1962 to 3810 kg by 2002. Delta survived innumerable attempts to kill the program and replace it with 'more rationale' alternatives. By 2008 nearly 1,000 boosters had flown over a fifty-year career, and cancellation was again announced.
Delta 2 7000 American orbital launch vehicle. The Delta 7000 series used GEM-40 strap-ons with the Extra Extended Long Tank core, further upgraded with the RS-27A engine.
Delta 7920-8 American orbital launch vehicle. Three stage vehicle consisting of 9 x GEM-40 + 1 x EELT Thor/RS-27A + 1 x Delta K with 2.4 m (8 foot) diameter fairing.
Associated Manufacturers and Agencies
NASA American agency overseeing development of rockets and spacecraft. National Aeronautics and Space Administration, USA.
APL American manufacturer of rockets and spacecraft. Applied Physics Laboratory, Johns Hopkins University, Laurel, Maryland, USA.
McDowell, Jonathan, Jonathan's Space Home Page (launch records), Harvard University, 1997-present.

NASA Report, ACE Brochure.

NASA Report, ACE Mission Paper.

NASA Report, ACE Spacecraft.

NASA Report, Position and Orientation of subsystems on the ACE spacecraft.
Associated Launch Sites
Cape Canaveral America's largest launch center, used for all manned launches. Today only six of the 40 launch complexes built here remain in use. Located at or near Cape Canaveral are the Kennedy Space Center on Merritt Island, used by NASA for Saturn V and Space Shuttle launches; Patrick AFB on Cape Canaveral itself, operated the US Department of Defense and handling most other launches; the commercial Spaceport Florida; the air-launched launch vehicle and missile Drop Zone off Mayport, Florida, located at 29.00 N 79.00 W, and an offshore submarine-launched ballistic missile launch area. All of these take advantage of the extensive down-range tracking facilities that once extended from the Cape, through the Caribbean, South Atlantic, and to South Africa and the Indian Ocean.
Cape Canaveral LC17A Delta launch complex. Part of a dual launch pad complex built for the Thor ballistic missile program in 1956. Pad 17A supported Thor, Delta, and Delta II launches into the 21st Century.
1997 August 25 - 14:39 GMT. Launch Site: Cape Canaveral. Launch Complex: Cape Canaveral LC17A. LV Family: Delta. Launch Vehicle: Delta 7920-8. LV Configuration: Delta 7920-8 D247.

ACE - Payload: ACE. Nation: USA. Agency: NASA Greenbelt. Manufacturer: APL. Class: Earth. Type: Magnetosphere satellite. Spacecraft: ACE. USAF Sat Cat: 24912. COSPAR: 1997-045A. Apogee: 128,196 km (79,657 mi). Perigee: 176 km (109 mi). Period: 86,411.37 min. Summary: Earth-Sun L1 point.
How To Use A Hydrometer
A hydrometer is an instrument whose function is based on Archimedes principle. This principle states that a body (the hydrometer) immersed in a fluid is buoyed up by a force equal to the weight of the displaced fluid. The hydrometer measures the weight of the liquid displaced by the volume of the hydrometer.
Specific Gravity is a dimensionless unit defined as the ratio of density of the material to the density of water. If the density of the substance of interest and the reference substance (water) are known in the same units (e.g., both in g/cm3 or lb/ft3), then the specific gravity of the substance is equal to its density divided by that of the reference substance (water =1 g/cm3), hence
Specific Gravity = Density (g/cm3) / 1 (g/cm3)

Herein lies the numerical equality between specific gravity and density: the dimensions drop out!
The greater the density, the tighter or closer the molecules are packed inside the substance.
Therefore, the greater the density / specific gravity of a liquid the higher a hydrometer will be buoyed by it.
Fill your hydrometer jar about ¾ with the liquid you wish to test. Insert the hydrometer slowly. Do not drop it in! Now give it a spin with your thumb and index finger, this will dislodge any bubbles that may have formed. Once the hydrometer comes to a rest, observe the plane of the liquid surface. Your eye must be horizontal to this plane. The point at which this line cuts the hydrometer scale is your reading.
Food for Thought
(Using specific gravity to determine the concentration of a solution)
100% ethanol has a specific gravity of 0.785, which is lighter than water with a specific gravity of 1.0.
A 50/50 mixture of water and ethanol (100 proof / 50%) will have the following specific gravity.
(.5L x 1.0) + (.5 L x 0.785) = 0.8925
A 75/25 mixture of water and ethanol (50 proof / 25%) will have the following specific gravity
(.75 L x 1.0) + (.25 L x 0.785) = 0.9463
As you can see, the specific gravity of the mixture is inversely related to the alcohol concentration. As the alcohol concentration decreases the specific gravity increases, and the hydrometer floats higher in the solution.
The alcohol hydrometer is calibrated in two scales % alcohol and proof. (1% alcohol = 2 proof ) The manufacturer used the specific gravity of alcohol at various concentrations to calibrate the instrument.
To determine the concentration of a 1 liter solution of alcohol and water
using specific gravity.
1) Measure the specific gravity of the solution
Let X = unknown volume of water
Let (1-X) = unknown volume of alcohol
Then X + (1-X) = 1 Liter
Specific Gravity of water = 1.0
Specific Gravity of ethanol = 0.785
(X) (1.0) + (1-X) (0.785) = Sp. G of solution
Solve for X
(X) (100%) = the concentration of water
(1-X) (100%) = the concentration of alcohol
Example: Assume the measured specific gravity is 0.9463
X (1.0) + (1-X) (0.785) = 0.9463
X + (0.785 - 0.785X) = 0.9463
0.215X + 0.785 = 0.9463
0.215X = 0.1613
X =0.75 (0.75) (100%) = 75% water
1-X =0.25 (0.25) (100%) = 25% alcohol
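The algebra above generalizes to any measured specific gravity. Here is a small illustrative Python script (added as a sketch; it mirrors the document's linear-mixing assumption, which ignores the slight volume contraction of real ethanol-water mixtures):

```python
SG_WATER = 1.0
SG_ETHANOL = 0.785

def alcohol_fraction(sg_mixture: float) -> float:
    """Solve x*SG_WATER + (1-x)*SG_ETHANOL = sg_mixture for the
    water fraction x, then return the alcohol fraction 1 - x."""
    x_water = (sg_mixture - SG_ETHANOL) / (SG_WATER - SG_ETHANOL)
    if not 0.0 <= x_water <= 1.0:
        raise ValueError("specific gravity outside the water-ethanol range")
    return 1.0 - x_water

# The worked example above: SG 0.9463 -> about 25% alcohol (50 proof).
frac = alcohol_fraction(0.9463)
proof = 2 * 100 * frac   # 1% alcohol = 2 proof
```

Plugging in the two mixtures from the text recovers their stated concentrations: SG 0.9463 gives about 25% alcohol, and SG 0.8925 gives 50%.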
by Zen Gardner
Here’s another scientific “anomaly” for which we can’t expect a straight answer. While it is apparently “puzzling scientists,” we’re no doubt given hardly any information on what’s going on in the overt and covert space programs. So what do you expect us to think when we read science reports like this:
“Mysterious ‘night shining’ or noctilucent clouds are beautiful to behold, and this stunning image offers an unusual view of these clouds as seen by astronauts on board the International Space Station. Also called polar mesospheric clouds, these clouds are puzzling scientists with their recent dramatic changes. They used to be considered rare, but now the clouds are growing brighter, are seen more frequently, are visible at lower and lower latitudes than ever before, and sometimes they are even appearing during the day.
There is quite a bit of debate for the cause of noctilucent clouds. Dust from meteors, global warming, and rocket exhaust have all been tagged as contributors, but the latest research suggests that changes in atmospheric gas composition or temperature has caused the clouds to become brighter over time.” Source
Here’s the video from the International Space Station showing a time lapse shot of these “clouds”. Notice the entire atmosphere is incandescent due to the chemtrail bombardment over the years. Is it any wonder these particles would also make it to higher altitudes?.. never mind the possibility that the mad scientists are spraying at even higher altitudes?

Meet CARE - the Charged Aerosol Release Experiment(s)

Besides the ongoing toxic chemtrail program, we also know there have been experiments spraying metallic particles in the way upper atmosphere under the name of the Cloud of CARE, some type of experimental aerosol dispersal supposedly to test atmospheric conditions.
A NASA Black Brant XII rocket launches carrying the CARE experiment. Credit: NASA

Night Time Artificial Cloud Study Using NASA Sounding Rocket
“A rocket experiment that may shed light on the highest clouds in the Earth’s atmosphere was conducted from NASA’s Wallops Flight Facility in Virginia on September 19, 2009. The experiment was launched on a NASA Black Brant XII Sounding Rocket.
The Charged Aerosol Release Experiment (CARE) was conducted by the Naval Research Laboratory and the Department of Defense Space Test Program using a NASA four-stage Black Brant XII suborbital sounding rocket. Using ground based instruments and the STP/NRL STPSat-1 spacecraft, scientists will study an artificial noctilucent cloud formed by the exhaust particles of the rocket’s fourth stage at about 173 miles altitude.
Data collected during the experiment will provide insight into the formation, evolution, and properties of noctilucent clouds, which are typically observed naturally at high latitudes. In addition to the understanding of noctilucent clouds, scientists will use the experiment to validate and develop simulation models that predict the distribution of dust particles from rocket motors in the upper atmosphere.
Natural noctilucent clouds, also known as polar mesospheric clouds, are found in the upper atmosphere as spectacular displays that are most easily seen just after sunset. The clouds are the highest clouds in Earth’s atmosphere, located in the mesosphere around 50 miles altitude.
They are normally too faint to be seen with the naked eye and are visible only when illuminated by sunlight from below the horizon while the Earth’s surface is in darkness.” Source
Sound Fishy Enough to You?

I personally don’t put anything past these insane bastards. That they’d spray that far out to keep the sun’s healing rays from hitting earth under the all encompassing pretense of global warming, a lie that even fooled their own crowd, is no surprise at all.

At the least, the overdose of chemtrails around the world could certainly have this effect. Something is off and someone knows why.

I don’t know. I’m no scientist or super-duper insider. I just know how to smell a rat in this sci-fi cover up on the planet earth we’re living on. Conveniently, just a few are in charge. Especially in the ministry of information.

They lie.

See the world for what it is.

That’s all I can say. Find out for yourself, or be enveloped by the lie.
Satellite View Selection
09:30 UTC on Friday 24 May 2013 | Cloud/surface composite, Australia
Images from Japan Meteorological Agency satellite MTSAT via Bureau of Meteorology.
Australian Government Bureau of Meteorology
National Meteorological and Oceanographic Centre
Satellite Notes for the 0000UTC chart on 25 May 2013
Issued at 11:59 am EST Saturday on 25 May 2013
A complex low pressure system is moving away from the north coast of New South Wales into the Tasman Sea. Much of the associated low cloud and shower activity along the northern New South Wales and southeast Queensland coasts is now easing. However a significant cloud band associated with this low continues to extend from the Coral Sea southeast to the North Island of New Zealand, and this is bringing showers and thunderstorms.
Otherwise, a broad high pressure system centred over South Australia dominates the synoptic pattern over Australia. This is keeping skies mostly clear apart from some patchy low cloud over the Bight and the southeast corner of the mainland. Some patchy low cloud near the west coast of Western Australia is due to a weak surface trough offshore.
A cold front crossing Tasmania is bringing low cloud and isolated showers.
A broad area of convective cloud lies over tropical waters to the northwest of Western Australia.
Characteristics of SQL
Characteristics of SQL:
* SQL is an ANSI and ISO standard computer language for creating and manipulating databases.
* SQL allows the user to create, update, delete, and retrieve data from a database.
* SQL is very simple and easy to learn.
* SQL works with database programs like DB2, Oracle, MS Access, Sybase, MS SQL Server, etc.
Storm season ends: Are potent hurricanes linked to global warming?
Gray, for example, argues that Atlantic hurricane records show that storms are stronger and more frequent for several decades, then ease for several decades.
These changes correspond with what several scientists say are naturally occurring cycles in Atlantic sea-surface temperatures.
Yet in August and September, Dr. Emanuel and a team from the Georgia Institute of Technology published independent studies that pointed to an increase in tropical cyclone strength globally over the past 30 years. They noted that the increase coincides with rising average sea-surface temperatures in the tropics, which other researchers have linked to global warming.
One of the big challenges for everyone trying to sort out the issue is a paucity of good hurricane measurements before about 1950, Dr. Emanuel notes.
For an estimate of storms before then, researchers have to run a backward forecast, known as a hindcast. These hindcasts suggest that prior to the 1940s and '50s, the number of hurricanes eases through about 1900, then remains flat before 1900, Emanuel says.
Thus, he continues, at best the proponents of the natural-cycle notion have as few as two "peaks" and a "trough" to work with - not enough to firmly establish that it's a set of cycles at all.
Moreover, he adds, during much of the time that the oscillation was in cool phase, air pollution was high. This could mean that the cycle in sea-surface temperatures could be an artifact of air pollution, as it blows off North America toward Europe and cuts the amount of sunlight available to warm the sea surface.
"It's urgent to settle this debate," says Emanuel, who until publishing his study in August described himself as an agnostic on the question of global warming and tropical cyclones. "If it is a natural cycle, then we can expect to see a downturn" that will last for decades, he says.
It might also indicate that, at least for now, this natural cycle would overpower any signs of global warming on Atlantic tropical cyclones until roughly midcentury.
On the other hand, if recent Atlantic hurricane seasons are part of the broader global trends he and the Georgia Tech team say they've detected, "that's bad news. It means hurricane activity will keep going up."
• Dennis, then Emily, set records for the most intense hurricane before August.
• Katrina became the most destructive storm on record with an estimated $50 billion of insured damage, breaking the estimated $25 billion record (in 2005 dollars) set by Andrew in 1992.
• Wilma became the third Category 5 storm of the season - the first time three Category 5 storms have formed in one year.
• Alpha became the 22nd named storm of the 2005 season, breaking the record of 21 named storms in 1933.
• Beta became the 13th hurricane of the 2005 season, breaking the record of 12 hurricanes in 1969.
• Epsilon became the 26th named storm of the 2005 season, according to NOAA.
Sources: William Gray and Philip J. Klotzbach, Colorado State University and NOAA | <urn:uuid:6ad93273-f619-4a39-984b-4734f8cae092> | 2.953125 | 653 | Truncated | Science & Tech. | 45.90602 |
"Scientists have been trying to find the sources of high-energy cosmic rays since their discovery a century ago," said Elizabeth Hays, a member of the research team and Fermi deputy project scientist at NASA's Goddard Space Flight Center in Greenbelt, Md. "Now we have conclusive proof supernova remnants, long the prime suspects, really do accelerate cosmic rays to incredible speeds."
Cosmic rays are subatomic particles that move through space at almost the speed of light. About 90 percent of them are protons, with the remainder consisting of electrons and atomic nuclei. In their journey across the galaxy, the electrically charged particles are deflected by magnetic fields. This scrambles their paths and makes it impossible to trace their origins directly.
Through a variety of mechanisms, these speedy particles can lead to the emission of gamma rays, the most powerful form of light and a signal that travels to us directly from its sources.
Since its launch in 2008, Fermi's Large Area Telescope (LAT) has mapped million- to billion-electron-volt (MeV to GeV) gamma-rays from supernova remnants. For comparison, the energy of visible light is between 2 and 3 electron volts.
The Fermi results concern two particular supernova remnants, known as IC 443 and W44, which scientists studied to prove supernova remnants produce cosmic rays. IC 443 and W44 are expanding into cold, dense clouds of interstellar gas. These clouds emit gamma rays when struck by high-speed particles escaping the remnants.
Scientists previously could not determine which atomic particles are responsible for emissions from the interstellar gas clouds because cosmic ray protons and electrons give rise to gamma rays with similar energies. After analyzing four years of data, Fermi scientists see a distinguishable feature in the gamma-ray emission of both remnants. The feature is caused by a short-lived particle called a neutral pion, which is produced when cosmic ray protons smash into normal protons. The pion quickly decays into a pair of gamma rays, emission that exhibits a swift and characteristic decline at lower energies. The low-end cutoff acts as a fingerprint, providing clear proof that the culprits in IC 443 and W44 are protons.
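The energy of that "fingerprint" follows from simple kinematics: a neutral pion at rest decays into two gamma rays that share its rest energy equally. A back-of-the-envelope sketch (an illustration, not the Fermi team's analysis; the pion mass below is the standard published value, not a figure from this article):

```python
M_PION_MEV = 134.98  # neutral pion rest energy in MeV (standard value, assumed here)

# In the pion's rest frame, each of the two decay photons carries
# half of the pion's rest energy.
photon_energy_mev = M_PION_MEV / 2.0

# For scale: visible light is between 2 and 3 electron volts (from the article).
visible_ev = 2.5
ratio = photon_energy_mev * 1.0e6 / visible_ev

print(round(photon_energy_mev, 2))  # 67.49
```

Proton-produced gamma-ray spectra are symmetric (in log energy) about this roughly 67.5 MeV point, which is why the emission shows the swift, characteristic decline at lower energies described above.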
The findings will appear in Friday's issue of the journal Science.
"The discovery is the smoking gun that these two supernova remnants are producing accelerated protons," said lead researcher Stefan Funk, an astrophysicist with the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University in Calif. "Now we can work to better understand how they manage this feat and determine if the process is common to all remnants where we see gamma-ray emission."
In 1949, the Fermi telescope's namesake, physicist Enrico Fermi, suggested the highest-energy cosmic rays were accelerated in the magnetic fields of interstellar gas clouds. In the decades that followed, astronomers showed supernova remnants were the galaxy's best candidate sites for this process.
A charged particle trapped in a supernova remnant's magnetic field moves randomly throughout the field and occasionally crosses through the explosion's leading shock wave. Each round trip through the shock ramps up the particle's speed by about 1 percent. After many crossings, the particle obtains enough energy to break free and escape into the galaxy as a newborn cosmic ray.
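Because the roughly 1 percent gain compounds multiplicatively, the number of crossings needed grows only with the logarithm of the target energy. A rough sketch (the starting and target energies below are chosen purely for illustration, not taken from the article):

```python
import math

GAIN_PER_CROSSING = 0.01  # ~1% boost per shock crossing (from the text)

def crossings_needed(e_start, e_target, gain=GAIN_PER_CROSSING):
    """Crossings required for energy to compound from e_start to e_target."""
    return math.ceil(math.log(e_target / e_start) / math.log(1.0 + gain))

# Illustrative: boosting a proton by five orders of magnitude in energy
# (say, 1 GeV to 100 TeV) takes only on the order of a thousand crossings.
n = crossings_needed(1.0e9, 1.0e14)
print(n)
```

The logarithmic dependence is the point: even enormous energy gains require a number of crossings small enough to be plausible within a remnant's lifetime.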
The supernova remnant IC 443, popularly known as the Jellyfish Nebula, is located 5,000 light-years away toward the constellation Gemini and is thought to be about 10,000 years old. W44 lies about 9,500 light-years away toward the constellation Aquila and is estimated to be 20,000 years old. Each is the expanding shock wave and debris formed when a massive star exploded.
The Fermi discovery builds on a strong hint of neutral pion decay in W44 observed by the Italian Space Agency's AGILE gamma ray observatory and published in late 2011.
NASA's Fermi Gamma-ray Space Telescope is an astrophysics and particle physics partnership. Goddard manages Fermi. The telescope was developed in collaboration with the U.S. Department of Energy, with contributions from academic institutions and partners in the United States, France, Germany, Italy, Japan, and Sweden.
For images and a video related to this finding, please visit:
For more information about NASA's Fermi Gamma-ray Space Telescope and its mission, visit: http://www.nasa.gov/fermi
J.D. Harrington | Source: EurekAlert!
Further information: www.nasa.gov
| <urn:uuid:e03c1291-f37e-440e-b1c9-4236b1feece2> | 3.96875 | 1,647 | Content Listing | Science & Tech. | 47.418738 |
Experimental NASA research models, based on observations from the Solar Terrestrial Relations Observatory (STEREO) and the ESA/NASA mission the Solar and Heliospheric Observatory, show that the CME left the sun at speeds of 275 miles per second. This is a fairly typical speed for CMEs, though much slower than the fastest ones, which can be almost ten times that speed.
When Earth-directed, CMEs can cause a space weather phenomenon called a geomagnetic storm, which occurs when they successfully connect up with the outside of the Earth's magnetic envelope, the magnetosphere, for an extended period of time. In the past, CMEs of this speed have not caused substantial geomagnetic storms. They have caused auroras near the poles but are unlikely to affect electrical systems on Earth or interfere with GPS or satellite-based communications systems.
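At the quoted speed, a rough Sun-to-Earth travel time is easy to estimate. The distance figure below (about 93 million miles) is a standard mean value, not from the article, and the estimate ignores how real CMEs accelerate or decelerate in the solar wind:

```python
CME_SPEED_MILES_PER_S = 275.0  # CME speed from the article
SUN_EARTH_MILES = 93.0e6       # mean Sun-Earth distance (assumed round figure)
SECONDS_PER_DAY = 86400.0

travel_days = SUN_EARTH_MILES / CME_SPEED_MILES_PER_S / SECONDS_PER_DAY
print(round(travel_days, 1))  # 3.9
```

An arrival time of a few days is consistent with the typical lead time forecasters have for Earth-directed CMEs of this speed.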
Two active regions -- named AR 11652 and AR 11654 by the National Oceanic and Atmospheric Administration (NOAA) -- have produced four low-level M-class flares since Jan. 11. Solar flares are powerful bursts of light and radiation. Harmful radiation from a flare cannot pass through Earth's atmosphere to physically affect humans on the ground; however, when intense enough, flares can disturb the atmosphere in the layer where GPS and communications signals travel. M-class flares are the weakest flares that can still cause some space weather effects near Earth. The recent flares caused weak radio blackouts, and their effects have already subsided.
NOAA's Space Weather Prediction Center (http://swpc.noaa.gov) is the United States Government official source for space weather forecasts.
Updates will be provided if needed.
What is a CME?
For answers to this and other space weather questions, please visit the Spaceweather Frequently Asked Questions page.
Karen C. Fox
NASA's Goddard Space Flight Center, Greenbelt, Md.
Karen C. Fox | Source: EurekAlert!
Further information: www.nasa.gov
| <urn:uuid:32466d88-d9d8-4f7a-b5a8-3fb288250a4e> | 3.796875 | 1,135 | Knowledge Article | Science & Tech. | 50.353509 |
Topic review (newest first)
- 2005-12-21 21:25:37
Or it would be a straight line.
- 2005-12-21 21:24:35
True. I forgot about that bit. Yes, two sides of a triangle must always be greater than the other side, because otherwise they wouldn't be able to reach its two ends.
- John E. Franklin
- 2005-12-21 15:03:21
in mathsyperson's generalization, variable b must be closer to 1 than the golden ratio or its reciprocal, 1.618 and .618
So it appears. If b > 1, then a + ab > ab².
If b < 1, then a < ab + ab²
- 2005-12-16 03:41:13
It looks good to me. You've just scaled up by √2, so all the angles would be the same, and two sides match as well.
The general solution would be the first triangle having sides of a, ab and ab² and the second having sides of ab, ab² and ab³ (a>0, b ≠0 or 1) .
In ganesh's example, a = 1 and b = √2.
- 2005-12-15 15:10:19
In ΔABC, AB=1, BC=√2, AC=2.
In ΔDEF, DE=√2, EF=2, DF=2√2.
Since ΔABC and ΔDEF are similar, the three angles are equal.
And they have two sides equal.
Isn't this a solution?
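The construction in the post above can be checked mechanically. This short script (my own illustrative check, not part of the original thread) verifies that the two triangles are similar, share exactly two side lengths, satisfy the triangle inequality, and that the common ratio √2 lies inside the golden-ratio bound mentioned earlier in the thread:

```python
import math

t1 = (1.0, math.sqrt(2.0), 2.0)                    # triangle ABC
t2 = (math.sqrt(2.0), 2.0, 2.0 * math.sqrt(2.0))   # triangle DEF

def is_valid(t):
    a, b, c = sorted(t)
    return a + b > c  # triangle inequality

def is_similar(u, v):
    su, sv = sorted(u), sorted(v)
    r = sv[0] / su[0]
    return all(math.isclose(y / x, r) for x, y in zip(su, sv))

# Count side lengths the two triangles have in common.
shared = sum(1 for x in t1 if any(math.isclose(x, y) for y in t2))

phi = (1.0 + math.sqrt(5.0)) / 2.0  # golden ratio
b = math.sqrt(2.0)                  # common ratio of the sides

print(is_valid(t1) and is_valid(t2), is_similar(t1, t2), shared, 1.0 / phi < b < phi)
# True True 2 True
```

Three equal angles plus two equal (non-corresponding) sides gives the five matching parts of a 5-Con pair.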
- 2005-12-15 10:56:20
There is a 5 con triangle... there are things written about it, i just cant find the exact dimensions. The only thing i know about it is that there is only one, and it has some exception to the rule.
A 4 con triangle is very very possible... even in 30/60/90's... just try it.
The thing about the triangle is that the equal sides/angles dont have to be corresponding, which is why the SAS AAS etc will not prove it to be congruent...
- 2005-12-15 03:46:17
The easiest way to prove that none exist is to refer to the AAS rule. That is, if two triangles have 2 similar angles and one similar side, they are congruent.
A 5-con triangle would be a non-congruent triangle with 5 similar properties. For this to happen, it would need at least 2 similar angles and 2 similar sides, and this satisfies the AAS rule, so it would have to be congruent and so be a 6-con triangle.
In fact, I don't think you can even have 4-con triangles.
- 2005-12-14 18:20:55
Thats right! I realized the mistake after I posted and logged out.
In fact, a 30-60-90 degree may well be ruled out, as the sides would have to be in the ratio 1:√3:2.
I shall think about it and post, as of now, I ain't sure a solution exists!
- 2005-12-14 15:42:34
a 3 4 5 right triangle is not a 30 60 90 triangle, so it would not work for that...
- 2005-12-14 15:25:53
I guess a right angled triangle with angles 30, 60 and 90 degrees would be ideal to start with. Let the sides be 3, 4 and 5 units for the first traingle. The sides of the second triangle would then be 4, 5 and √(41).
- 2005-12-14 14:00:35
Im looking for the angles/sides of a 5-con triangles
What 5-Con triangles are, are any two triangles who have 5 (not necessarily corresponding) sides and angles equal.
Triangles obviously have 3 angles and 3 sides, so 5 of the 6 must be equal.
I know the 3 angles must be the same and 2 sides equal, because if all 3 sides are equal lengths, all 3 angles must be also.
So pretty much i need to find 2 triangles, who have all three angles equal, but only 2 sides equal. Any ideas?
(Ive been working with 30/60/90 triangles but cant seem to come up with it) | <urn:uuid:53e16762-df87-4500-bda6-c2e1ca0a4686> | 2.875 | 941 | Comment Section | Science & Tech. | 100.075414 |
by Staff Writers
Nashville TN (SPX) Nov 07, 2011
As a first line of defense, steel barrels buried deep underground are designed to keep dangerous plutonium waste from seeping into the soil and surrounding bedrock, and, eventually, contaminating the groundwater. But after several thousand years, those barrels will naturally begin to disintegrate due to corrosion. A team of scientists at Argonne National Lab (ANL) in Argonne, Ill., has determined what may happen to this toxic waste once its container disappears.
"We want to be sure that nuclides (like plutonium) stay where we put them," says Moritz Schmidt, an ANL post-doctoral researcher who will present his team's work at the AVS Symposium in Nashville, Tenn., held Oct. 30 - Nov. 4. Understanding how these radioactive molecules behave is "the only way we can make educated decisions about what is a sufficient nuclear waste repository and what is not," he adds.
Plutonium, with its half-life of 24 thousand years, is notoriously difficult to work with, and the result is that very little is known about the element's chemistry. Few labs around the world are equipped to handle its high radioactivity and toxicity, and its extremely complicated behavior around water makes modeling plutonium systems a formidable task.
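That 24-thousand-year half-life also frames the timescales in the article: by the time the barrels have corroded away, only a modest fraction of the plutonium has decayed. A quick calculation (the time spans below are chosen for illustration; only the half-life figure comes from the text):

```python
HALF_LIFE_YEARS = 24000.0  # plutonium half-life, as quoted in the article

def fraction_remaining(years, half_life=HALF_LIFE_YEARS):
    """Fraction of the original plutonium still present after `years` of decay."""
    return 0.5 ** (years / half_life)

after_barrels_fail = fraction_remaining(5000.0)     # "several thousand years"
after_ten_half_lives = fraction_remaining(240000.0)

print(round(after_barrels_fail, 2))  # 0.87 -> most of it is still there
```

In other words, when the first line of defense fails, nearly 90 percent of the waste remains, which is why the behavior of plutonium on bare mineral surfaces matters.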
Plutonium's extraordinary chemistry in water also means scientists cannot directly equate it with similar elements to tell them how plutonium will behave in the environment. Other ions tend to stick to the surface of clay as individual atoms. Plutonium, on the other hand, bunches into nanometer-sized clusters in water, and almost nothing is known about how these clusters interact with clay surfaces.
To better understand how this toxic substance might respond to its environment, the Argonne team examined the interactions between plutonium ions dissolved in water and a mineral called muscovite. This mineral is structurally similar to clay, which is often considered for use in waste repository sites around the world due to its strong affinity for plutonium. Using a range of X-ray scattering techniques, the scientists reconstructed images of thin layers of plutonium molecules sitting on the surface of a slab of muscovite.
What they found was "very interesting," Schmidt says. The Argonne scientists discovered that plutonium clusters adhere much more strongly to mineral surfaces than individual plutonium ions would be expected to. The result of this strong adherence is that plutonium tends to become trapped on the surface of the clay, a process which could help contain the spread of plutonium into the environment.
"In this respect, it's a rather positive effect" that his group has observed, Schmidt says; but, he adds, "it's hard to make a very general statement" about whether this would alter the rate of plutonium leaking out of its repository thousands of years from now.
Schmidt cautions that these are fundamental studies and probably will not have an immediate impact on the design of plutonium-containing structures; however, he stresses that this work shows the importance of studying plutonium's surface reactivity at a molecular level, with potential future benefits for nuclear waste containment strategies.
"This is a field that is only just emerging," Schmidt says.
AVS 58th International Symposium and Exhibition
| <urn:uuid:41ce3dc3-9a24-4967-bb61-20b2cb115b55> | 3.984375 | 913 | Truncated | Science & Tech. | 36.364328 |
A Mars meteorite is a meteorite that has landed on Earth and originated from Mars. This could have been the result of an impact of a celestial body on Mars, sending material from Mars into space. Of the many thousand meteorites that have been found on Earth, only 34 have been identified as originating from Mars, most of which have been found since 2000.
Note that this does not refer to meteorites actually found on Mars, such as Heat Shield Rock.
In 1983 it was suggested by Smith et al that meteorites in the so-called SNC group (Shergottites, Nakhlites, Chassignites) originated from Mars, based on evidence from an instrumental and radiochemical neutron activation analysis of the meteorites. They found that the SNC meteorites possess chemical, isotopic, and petrologic features consistent with data available from Mars at the time, findings further confirmed by Treiman et al a few years later, by similar methods. Then in late 1983, Bogard et al showed that the isotopic concentrations of various noble gases of some of the shergottites were consistent with the observations of the atmosphere of Mars made by the Viking spacecraft in the mid-to-late 1970s.
In 2000, an article by Treiman, Gleason and Bogard gave a survey of all the arguments used to conclude the SNC meteorites (of which 14 had been found at the time) were from Mars. They wrote, "There seems little likelihood that the SNCs are not from Mars. If they were from another planetary body, it would have to be substantially identical to Mars as it now is understood."
The 34 Mars meteorites are divided into three rare groups of achondritic meteorites: shergottites (25), nakhlites (7), and chassignites (2). Consequently, Mars meteorites as a whole are sometimes referred to as the SNC group. They have isotope ratios that are said to be consistent with each other and inconsistent with the Earth. The names derive from the location where the first meteorite of their type was discovered.
The first shergottite, the Shergotty meteorite, was found in Sherghati, India in 1865.
The most famous shergottite, known as ALH84001, received a lot of attention after an electron microscope revealed structures that were considered to be the fossilized remains of bacteria-like lifeforms. As of 2005, however, most experts agree that the microfossils are not indicative of life, but of contamination by earthly biofilms. It has not yet conclusively been shown how they formed. ALH84001 is much older than the others in the SNC group — dating back to the original formation of Mars about 4.5 billion years ago. In this respect, it resembles a typical meteorite rather than the other SNC's, which all appear to have formed less than 1.3 billion years ago.
All the meteorites are igneous rocks. Lherzolitic shergottites (one from Antarctica, 2 from California) are identified by their Deuterium/Hydrogen ratios. The crystals appear to be 154-187 million years old and they appear, from cosmic ray analysis, to have spent 2.5 to 3.6 million years in space. There are also basaltic shergottites, some of which appear (from the presence of hydrated carbonates and sulfates) to have been exposed to liquid water prior to ejection into space.
There are 7 known nakhlites, the first of which, the Nakhla meteorite, fell in El-Nakhla in 1911 and had an estimated weight of 10 kg. The most recent nakhlite was found in Antarctica on December 15.
Nakhlites are igneous rocks that are rich in augite and were formed from basaltic magma at about 1.3 Ga. They contain augite and olivine crystals. Their crystallization ages, compared to a crater count chronology of different regions on Mars, suggest the nakhlites formed on the large volcanic construct of either Tharsis, Elysium, or Syrtis Major.
It has been shown that the nakhlites were suffused with liquid water around 620 Ma, and that they were ejected from Mars around 10.75 Ma by an asteroid impact, and fell to Earth within the last 10,000 years.
The first chassignite, the Chassigny meteorite, was found in Chassigny in 1815. There has been only one other chassignite found, known as Diderot, or NWA 2737, so designated since it was found in North-West Africa. NWA 2737 was found by meteorite hunters B. Fectay and C. Bidaut of "The Earth's Memory" in August 2000, and it was shown by Beck et al that its "mineralogy, major and trace element chemistry as well as oxygen isotopes revealed an unambiguous Martian origin and strong affinities with Chassigny."
In March 2004 it was suggested that the unique Kaidun meteorite, which landed in Yemen on March 12, 1980, may have originated on the Martian moon of Phobos.
The majority of SNC meteorites are quite young by geologic standards and seem to imply that volcanic activity was present on Mars only a few hundred million years ago. Cosmic ray traces in the meteorites indicate relatively short stays (3 to 3.5 million years) in space. It has been asserted that there are no large young craters on Mars that are candidates as sources for the SNC meteorites, but recent research claims to have a likely source for ALH84001 and a possible source for other shergottites.
Possible evidence of life
Possible evidence of life has been hypothesized in three meteorites.
- A 1.3 billion-year-old meteorite from near El-Nakhla, Egypt. Small structures that look vaguely like Earth bacteria. More like bacteria than those in the better-known Allan Hills meteorite.
- A 165-million-year-old meteorite from Sherghati, India. Still to be analyzed.
- A 4.5 billion-year-old meteorite found in the Allan Hills of Antarctica (ALH84001). Ejection from Mars seems to have taken place about 16 million years ago. Arrival on Earth was about 13,000 years ago. Cracks in the rock appear to have filled with carbonate materials between 4 and 3.6 billion years ago. Evidence of polycyclic aromatic hydrocarbons (PAHs) has been identified, with the levels increasing away from the surface. Other Antarctica meteorites do not contain PAHs. Earthly contamination should presumably be highest at the surface. Several minerals in the crack fill are deposited in phases, specifically, iron deposited as magnetite, that are claimed to be typical of biodepositation on Earth. There are also small ovoid and tubular structures that might possibly be nanobacteria fossils in carbonate material in crack fills (investigators McKay, Gibson, Thomas-Keprta, Zare). Micropaleontologist Schopf, who described several important terrestrial bacterial assemblages, examined ALH 84001 and opined that the structures are too small to be Earthly bacteria and don't look especially like lifeforms to him. The size of the objects is consistent with Earthly "nanobacteria", but the existence of nanobacteria itself is controversial.
In August 2002, a NASA team led by Thomas-Keprta published a study indicating that 25% of the magnetite in ALH 84001 occurs as small, uniform-sized crystals in a crystal form that, on Earth, is associated only with biologic activity. The remainder of the material appears to be normal inorganic magnetite. The extraction technique did not permit determination as to whether the possibly biologic magnetite was organized into chains as would be expected. | <urn:uuid:ae75d6ad-434f-41a3-bc41-fa8a2f7d1de5> | 3.984375 | 1,675 | Knowledge Article | Science & Tech. | 44.348293 |
A major theme of IonE's work is the management of freshwater resources. When the population growth of the next 40 years is coupled with the climate change likely to occur in that time, the availability of clean, fresh water will be a major issue. The water sources for intensely populated parts of the world, often fed by global warming-sensitive glaciers and snowmelt, may dwindle even as those population centers grow.
As important as the availability of drinking water is, it is only a small part of the looming water crisis. Approximately 15% of global water consumption is for household use. An equal amount is consumed for industrial uses—for the production of electricity and for refining and manufacturing processes. Agriculture, however, is the thirstiest consumer of freshwater. About 69% of worldwide water use is for the irrigation of crops, much of that coming from sources that will not replenish themselves.
As population grows, so will the need for electricity, goods and food. But freshwater only composes 3% of the Earth's water, and more than two-thirds of that is frozen in glaciers and polar icecaps. IonE and its collaborating programs are researching the impacts of agriculture and industry on the freshwater supply, and developing strategies that will allow the planet’s limited supply of water to meet the needs of its future population, without sacrificing our health or that of the environment.
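Taken together, the figures quoted in this section make the squeeze explicit: only about 1% of all water on Earth is liquid freshwater, and agriculture uses more of it than households and industry combined. A quick illustration using only the numbers above:

```python
# Figures quoted in the text
FRESH_SHARE = 0.03        # freshwater is 3% of Earth's water
FROZEN_SHARE = 2.0 / 3.0  # "more than two thirds" of freshwater is frozen

HOUSEHOLD, INDUSTRY, AGRICULTURE = 0.15, 0.15, 0.69  # shares of consumption

# Liquid freshwater as a share of all water on Earth: at most about 1%
liquid_fresh = FRESH_SHARE * (1.0 - FROZEN_SHARE)

# Agriculture's use relative to household and industry combined
ag_vs_rest = AGRICULTURE / (HOUSEHOLD + INDUSTRY)

print(round(liquid_fresh * 100.0, 1))  # 1.0 (percent of all water on Earth)
print(round(ag_vs_rest, 1))            # 2.3
```

Since "more than two-thirds" is frozen, the 1% figure is an upper bound; the usable share is smaller still.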
IonE's River Life program applies global water resource management studies to the Mississippi River, running right through the University of Minnesota's campus. The River Life program hopes to contribute to the development of a sustainable urban waterfront, creating an example of a healthy community balanced with a healthy river. | <urn:uuid:062a65c0-c0cd-444d-ba53-8ec9c2164941> | 3.265625 | 335 | Knowledge Article | Science & Tech. | 32.006235 |
Some of the costs and benefits of reactive nitrogen in the environment can be seen clearly in nitrogen-deficient habitats like the Bornean dwarf forests known as “kerangas.” Kerangas grow in soils that are acidic, sandy and podzolized. Essential elements enter the soil from decaying leaf litter, but most of these--magnesium, carbon, calcium and nitrogen, in particular--leach away very quickly, and are only available at meaningful concentrations in the top few inches. Phosphorus seems to leach away more slowly. Continual deposition of leaf litter is critical to the system, and disease, fire and logging or clearing for agriculture will convert kerangas to a barren habitat known as padang, dominated by grasses and sedges.

The kerangas-adapted she-oak Gymnostoma nobile bears nitrogen-fixing root nodules. Orchids obtain carbon via mycorrhiza; to what degree nitrogen is transferred this way is the subject of some controversy, but among plant families in the kerangas, orchids show the greatest species diversity, possibly supporting suspicions that mycorrhizal nitrogen transfer plays an important role in orchid nutrition. Pitcher plants (Nepenthes spp.) trap insects in modified water-bearing leaves. At least one Bornean species, N. rajah, secretes a nectar that attracts tree shrews, whose droppings are captured in the pitcher to nourish the plant. In perennially wet padang habitat, bladderworts (Utricularia spp.) and sundews (Drosera spp.) also trap small arthropods. Epiphytic ant plants (Hydnophytum spp.) form symbiotic relationships with ants, providing them shelter while receiving protection from the colony and nitrogen and other nutrients from their wastes.

Kerangas provide a good lens through which to see the many avenues of nitrogen transfer in nitrogen-deficient habitats, and suggest the complexity of monitoring and managing human-caused changes of reactive nitrogen in the environment in general.
| <urn:uuid:98491007-061b-4586-8e7f-22e1be2d1e36> | 4.0625 | 426 | Academic Writing | Science & Tech. | 30.040208 |
Introduction to Multituberculates
The Lost Tribe of Mammals
Multituberculates are the only major branch of mammals to have become completely extinct, and have no living descendants. Although not known to many people, they have a 100 million-year fossil history, the longest of any mammalian lineage. These rodent-like mammals were distributed throughout the world, but seem to have eventually been outcompeted by true rodents.
Multituberculates first appeared in the Late Jurassic, and went extinct in the early Oligocene, with the appearance of true rodents. Over 200 species are known, some as small as the tiniest of mice, the largest the size of beavers. Some, such as Lambdopsalis from China, lived in burrows like prairie dogs, while others, such as the North American Ptilodus, climbed trees as squirrels do today. The narrow shape of their pelvis suggests that, like marsupials, multituberculates gave birth to tiny, undeveloped pups that were dependent on their mother for a long time before they matured.
Pictured is the reconstructed lower jaw of Meniscoessus robustus, a squirrel-sized multituberculate from the Upper Cretaceous. This specimen was collected from the Hell Creek Formation of Montana, USA, and is now part of the UCMP collection.
Multituberculates get their name from their teeth, which have many cusps, or tubercles arranged in rows. Although there are some spectacular multituberculate specimens from Mongolia, many of these unique teeth have been found in North America, and UCMP houses a large collection. | <urn:uuid:33984f49-97b5-45bd-8bb9-69ba0846c46a> | 3.828125 | 341 | Knowledge Article | Science & Tech. | 24.785165 |
Image: Devil's Coach-horse Beetle
Devil's Coach-horse Beetle Creophilus erythrocephalus.
- Richard Major
- © Australian Museum
Rove beetles - Family Staphylinidae
Staphylinids are usually elongate beetles with small elytra (wing covers) and large jaws. Like other beetles inhabiting carrion, they have fast larval development with only three larval stages.
Devil's Coach-horse Beetle, Creophilus erythrocephalus, is a common predator of carrion, and with its bright red head, is a very visible component of the fauna of corpses in Australia.
Adults are early visitors to a corpse and they feed on larvae of all species of fly, including predatory fly larvae. They lay their eggs in the corpse, and the emerging larvae are also predators. Creophilus erythrocephalus has a long development time in the egg, so it is common during the later stages of decomposition. As well as consuming maggots, they can also tear open the pupal cases of flies, so there is sufficient food to sustain them at a corpse for long periods.
Another rove beetle, Aleochara haemorrhoidalis, feeds on eggs as well as young blowfly larvae.
dnsfilter reads a series of lines from stdin, converts an IP address to a host name at the beginning of each line, and prints the results to stdout.
If a line does not begin with an IP address, dnsfilter leaves the line alone. If an IP address does not have a host name listed in DNS, dnsfilter leaves the line alone. If an IP address has a host name listed in DNS, dnsfilter inserts an equals sign and the host name before the first space or tab in the line. If a DNS lookup fails temporarily, dnsfilter inserts a colon and a dash-separated error message before the first space or tab in the line.
While dnsfilter is looking up an address in DNS, it reads ahead in the input and looks for more addresses to look up in parallel.
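The per-line rewriting rule can be sketched in a few lines of Python. This is an illustration only, not the real dnsfilter: the lookup table is invented, and the temporary-failure branch (which inserts a colon and a dash-separated error message) is omitted.

```python
import re

def rewrite_line(line, lookup):
    # Match an IPv4 address at the start of the line, followed by
    # a space, a tab, or end-of-line.
    m = re.match(r'(\d+\.\d+\.\d+\.\d+)(?=[ \t]|$)', line)
    if m is None:
        return line            # no leading IP address: leave the line alone
    ip = m.group(1)
    name = lookup(ip)
    if name is None:
        return line            # no host name listed: leave the line alone
    # Insert "=hostname" between the address and the rest of the line.
    return ip + '=' + name + line[len(ip):]

table = {'1.2.3.4': 'host.example.com'}   # invented stand-in for live DNS
print(rewrite_line('1.2.3.4 GET /index.html', table.get))  # 1.2.3.4=host.example.com GET /index.html
print(rewrite_line('hello world', table.get))              # hello world
```

The real tool additionally pipelines many lookups at once, which the sketch ignores.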
opts is a series of getopt-style options: | <urn:uuid:52547f5d-a25f-4d8f-bf6c-3e4366603548> | 2.734375 | 184 | Documentation | Software Dev. | 58.899685 |
qc is a testing tool that lets you write properties that you expect to hold true, and let the computer generate randomized test cases to check that these properties actually hold. For example, if you have written compress and decompress functions for some data compression program, an obvious property to test is that compressing and decompressing a string gives back the original string. Here's how you could express that:
"""Test that compressing and decompressing returns the original data."""
data = qc.str() # An arbitrary string. Values are randomized.
self.assertEqual(data, decompress(compress(data)), repr(data))
That's an ordinary test with Python's built-in unittest framework (which is why there's so much boilerplate). Alternately, you could do the exact same thing with a different testing framework, like the minimally verbose, quite pleasant nose. The @qc.property decorator runs the decorated function several times, and each time the values returned by functions like qc.str() are different. In other words, QuickCheck is compatible with pretty much every unit test framework out there; it's not particularly demanding.
Functions like qc.str(), qc.int(), and so on, generate arbitrary values of a certain type. In the example above, we're asserting that the property holds true for all strings. When you run the tests, QuickCheck will generate randomized strings for testing.
You'll notice that I said "randomized", not "random". This is intentional. The distribution of values is tweaked to include interesting values, like empty strings, or strings with NUL characters in the middle, or strings containing English text. In general, QuickCheck tries to give a good mix of clever tricky values and randomness. This is essentially what you would do, if you had to write really thorough test cases by hand, except that you don't have to do it. In practice, the computer has fewer preconceptions about what constitutes sane data, so it will often find bugs that would never have occurred to you to write test cases for. It doesn't know how to subconsciously avoid the bugs.
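The behavior described above, running a property many times over a mix of deliberately tricky and purely random values, can be sketched without qc itself. This is a toy stand-in for illustration, not qc's actual implementation; all names here are invented.

```python
import random

def tiny_property(trials=100):
    """Toy version of the @qc.property idea: run the test body many times."""
    def decorator(test):
        for _ in range(trials):
            test()
        return test
    return decorator

# "Randomized", not "random": favor tricky edge cases half the time.
TRICKY_INTS = (0, 1, -1, 2**31 - 1, -2**31)

def arbitrary_int():
    if random.random() < 0.5:
        return random.choice(TRICKY_INTS)
    return random.randint(-10**6, 10**6)

@tiny_property()
def test_abs_is_nonnegative():
    n = arbitrary_int()
    assert abs(n) >= 0, n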
You're not limited to the built-in arbitrary value functions. You can use them as building blocks to generate your own. For example:
    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = float(x), float(y)

    def point():
        """Get an arbitrary point."""
        x = qc.int(-20, 20)
        y = qc.int(-34, 50)
        return Point(x, y)
You can then use this to generate arbitrary point values in properties. Here's a nose-style test:
    @qc.property
    def test_triangle_inequality():
        pt = point()
        assert abs(pt.x) + abs(pt.y) >= math.sqrt(pt.x**2 + pt.y**2), (pt.x, pt.y)
When you run this, something magical happens: QuickCheck will try to generate tricky values for both the x and y variables in the Point class, together, so you'll see points like (0, 0), (1, 1), (0, 1), (385904, 0), as well as totally random ones like (584, -35809648). In other words, rather than just drawing x and y values from a stream of random numbers with some tricky values in it, QuickCheck will actually try to generate tricky combinations of x and y coordinates.
Functions for getting arbitrary data
- int(low, high) gives ints, between the optional bounds low and high.
- long(low, high) gives longs, between the optional bounds low and high.
- float(low, high) gives floats, between the optional bounds low and high. No Infinities or NaN values.
- str(length=None, maxlen=None) gives strings, of type str. The encoding is UTF-8. If length is given, the strings will be exactly that long. If maxlen is given, the string length will be at most maxlen characters.
- unicode(length=None, maxlen=None) gives unicode strings, of type unicode. If length is given, the strings will be exactly that long. If maxlen is given, the string length will be at most maxlen characters.
- name() gives names, in Unicode. These range from the prosaic, like "John Smith", to the exotic -- names containing non-breaking spaces, or email addresses, or Unicode characters outside the Basic Multilingual Plane. This is, if anything, less perverse than the names you will see in a sufficiently large set of Internet data.
- nameUtf8() is the same as name().encode('utf8').
- fromList(items) returns random items from a list. This is mostly useful for creating your own arbitrary data generator functions.
- randstr(length=None, maxlen=sys.maxint) gives strings of random bytes. If length is given, the strings will be exactly that long. If maxlen is given, the string length will be at most maxlen bytes.
The strings produced by str and unicode are randomized, but some effort has been put into making them sufficiently perverse as to reveal bugs in a whole lot of string processing code. The name list is loosely based on horrible memories of seeing name processing code crash on real-world data, over and over and over again, as it became ever more clear that the world is mad, and we are truly doomed. (This feeling passes once you get enough test coverage and things finally stop crashing. There is hope!)
The name and string example data in qc.arbitrary may be interesting as a source of more deterministic test case data. Feel free to borrow any of it. The internals are magic, but of the magical internal parts, the most interesting ones are in qc.arbitrary and qc.
Date: Dec 15, 2011 11:53 AM
Author: Milo Gardner
Subject: Egyptian fraction math only used quotient and remainder statements
Franz, and list members:
Franz offers a distraction quoted by:
"I don't know what problem number 31 of the Rhind Mathematical Papyrus has to do with Mayan astronomy, but if I must I can also shed light on this problem. In my opinion, the Rhind Mathematical Papyrus offers problems that can be solved on several level. On the first level, beginners learn how to handle unit fraction series. On the advanced level they are asked to solve more demanding problems, and on the highest level they are being told about theoretical insights. RMP 31 on the advanced level is about a geometrical problem, it offers a fine example of Egyptian wit, plus a theoretical insight:
RMP 31 - a granary on a ring
33 divided by 1 "3 '2 '7 equals 14 '4 '56 '97 '194 '388 '679 '776
Since 1879 it has been clear that
RMP 31 offered a simple algebra problem
x + (2/3 + 1/2 + 1/7)x = 33
(1 + (28 + 21 + 6)/42)x = 33
x = 14 + 28/97
It has been clear since 2006 that 20th century scholars, Gillings, Peet, et al, failed to solve the algebra problem by following Ahmes' shorthand conversion hints:
28/97 solved by considering: 2/97 + 26/97
with 2/97 solved in the RMP 2/n table manner that scaled 2/97 by 56/56 and 26/97 by 4/4, scribal steps that Franz totally ignores by jumping from a false granary problem to a correct quotient and remainder answer.
Please transliterate each of Ahmes' problems as scholars have been doing since 1879. Franz's granary ring is silly.
Please, everyone correct scholarly 20th century translation errors that followed false additive patterns ... that Franz and others oddly 'advocate' thereby 'throwing out the scribal solutions to 2/97 and 26/97 ...
Additional proof is provided by RMP 36, an actual hekat (granary) problem that solved a hekat unity
(53/53) as:
2/53 + 3/53 + 5/53 + 15/53 + 28/53
2/53 scaled by 30/30
3/53 scaled by 20/20
5/53 scaled by 12/12
15/53 scaled by 4/4
28/53 scaled by 2/2
with non-additive red auxiliary numbers required to complete each set of remainders recorded as exact unit fraction series. | <urn:uuid:106b2625-dee4-4685-924d-a39a282f902d> | 2.890625 | 575 | Comment Section | Science & Tech. | 68.278 |
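For what it's worth, the basic arithmetic claims above are easy to verify mechanically with Python's fractions module. This is an editorial check, not part of the original post:

```python
from fractions import Fraction as F

# RMP 31: divide 33 by 1 + 2/3 + 1/2 + 1/7
x = F(33) / (1 + F(2, 3) + F(1, 2) + F(1, 7))
assert x == 14 + F(28, 97)        # quotient 14, remainder 28/97

# The shorthand split of the remainder: 28/97 = 2/97 + 26/97
assert F(2, 97) + F(26, 97) == F(28, 97)

# RMP 36: the hekat unity 53/53 split into five parts
parts = [F(n, 53) for n in (2, 3, 5, 15, 28)]
assert sum(parts) == 1

print(x)   # 1386/97
```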
This problem is about investigating whether it is possible to start at one vertex of a platonic solid and visit every other vertex once only returning to the vertex you started at.
A Hamiltonian circuit is a continuous path in a graph that passes through each of the vertices exactly once and returns to the start.
How many Hamiltonian circuits can you find in these graphs?
If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable.
Decide which of these diagrams are traversable.
Thank you for all your solutions. I think it is quite difficult to explain the solution to this problem, but many of you noticed the pattern of the nodes in the diagram and the importance of the fact that there are four nodes on the diagram, each of which has an odd number of routes from it.

The model of the town, where each island and each side of the river has an odd number of bridges leading to/from it, was included so that you could relate it to the diagrams in the hints. To be able to move from each of the four regions you must have two areas with an even number of bridges so that you can arrive and leave.

Solutions received from Andrei of School 205 Bucharest, Alex and Joanna of Woodfall Junior School, Katherine, Phoebe and Katharine of The Mount School and Gowri, Hannah and Sophie of Caistor Grammar School used this idea.
The hints were there to help you to identify the importance of the number of odd nodes. It is worth going back to these and trying to generalise your findings from this problem to other similar problems.
Andrei observed that the city is symmetrical with respect to the river:
The journey is impossible.
Suppose the starting point of the walk is on a side of the river (it could be north or south, as specified the city is symmetrical); there are three bridges. The people must first go on the first bridge, return on the second, and then go away by the third, so they cannot come back! There must be an even number of bridges on each part so that they can return home.
After solving the problem this way, I discovered that this is a famous problem in the history of mathematics, being at the basis of the topology of networks, first developed by Euler in 1735. He constructed a diagram (essentially the same as yours), and associated the land with the vertices, and the bridges with the possible ways of connecting the vertices - arcs.

Using the converse of Euler's theorem, which says "if a network has two or fewer odd vertices, it has at least an Euler path", it is easy to see that the proposed problem does not admit an Euler path (i.e. a continuous path that passes through every arc once and only once).
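Euler's parity criterion is easy to check by machine. The sketch below uses the classic Königsberg layout (two banks, two islands, seven bridges), of which the problem above is a variant:

```python
from collections import Counter

# Land areas: north bank N, south bank S, islands A and B.
# Each pair below is one bridge.
bridges = [("N", "A"), ("N", "A"), ("N", "B"),
           ("S", "A"), ("S", "A"), ("S", "B"),
           ("A", "B")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = sorted(node for node, d in degree.items() if d % 2 == 1)
# An Euler path needs at most two odd vertices; here all four are odd.
print(odd)              # ['A', 'B', 'N', 'S']
print(len(odd) <= 2)    # False: no Euler path exists
```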
This is consistent with there not being enough space (20 Å) for two purines to fit within the helix and too much space for two pyrimidines to get close enough to each other to form hydrogen bonds between them.
But why not A with C and G with T?
The answer: only with A & T and with C & G are there opportunities to establish hydrogen bonds (shown here as dotted lines) between them (two between A & T; three between C & G). These relationships are often called the rules of Watson-Crick base pairing, named after the two scientists who discovered their structural basis.
The rules of base pairing tell us that if we can "read" the sequence of nucleotides on one strand of DNA, we can immediately deduce the complementary sequence on the other strand.
The rules of base pairing explain the phenomenon that whatever the amount of adenine (A) in the DNA of an organism, the amount of thymine (T) is the same (called Chargaff's rule). Similarly, whatever the amount of guanine (G), the amount of cytosine (C) is the same.
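Both ideas, deducing the complementary strand and Chargaff's equalities, are mechanical enough to state as code. This is an editorial illustration; the example sequence is made up:

```python
from collections import Counter

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}   # Watson-Crick pairing

def complement(strand):
    """Deduce the base sequence of the complementary strand."""
    return "".join(PAIR[base] for base in strand)

strand = "ATGCCG"                 # made-up example sequence
other = complement(strand)
print(other)                      # TACGGC

# Chargaff's rule: across the double helix, the amount of A equals T
# and the amount of G equals C.
counts = Counter(strand + other)
print(counts["A"] == counts["T"] and counts["G"] == counts["C"])   # True
```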
[Table omitted: Relative Proportions (%) of Bases in DNA]
Do a binary search for a given value within a character string array, accompanied by an order vector. Return the index of the matching array entry, or -1 if the key value is not found.
VARIABLE  I/O  DESCRIPTION
--------  ---  --------------------------------------------------
value      I   Key value to be found in array.
ndim       I   Dimension of array.
lenvals    I   String length.
array      I   Character string array to search.
order      I   Order vector.

The function returns the index of the first matching array element or -1 if the value is not found.
value is the key value to be found in the array. Trailing blanks in this key are not significant: string matches found by this routine do not require trailing blanks in value to match those in the corresponding element of array.

ndim is the dimension of the array.

lenvals is the declared length of the strings in the input string array, including null terminators. The input array should be declared with dimension [ndim][lenvals].

array is the array of character strings to be searched. Trailing blanks in the strings in this array are not significant.

order is an order vector which can be used to access the elements of array in order. The contents of order are a permutation of the sequence of integers ranging from zero to ndim-1.
The function returns the index of the specified value in the input array. Indices range from zero to ndim-1. If the input array does not contain the specified value, the function returns -1. If the input array contains more than one occurrence of the specified value, the returned index may point to any of the occurrences.
A binary search is performed on the input array, whose order is given by an associated order vector. If an element of the array is found to match the input value, the index of that element is returned. If no matching element is found, -1 is returned.
Let the input arguments array and order contain the following elements:

   array         order
   "FEYNMAN"       1
   "BOHR"          2
   "EINSTEIN"      0
   "NEWTON"        4
   "GALILEO"       3

Then

   bschoc_c ( "NEWTON",   5, lenvals, array, order ) ==  3
   bschoc_c ( "EINSTEIN", 5, lenvals, array, order ) ==  2
   bschoc_c ( "GALILEO",  5, lenvals, array, order ) ==  4
   bschoc_c ( "Galileo",  5, lenvals, array, order ) == -1
   bschoc_c ( "BETHE",    5, lenvals, array, order ) == -1
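The same search is easy to restate in Python for illustration. This is a sketch of the idea, not the CSPICE source; it reuses the example data from the Examples section:

```python
def bschoc(value, array, order):
    """Binary search `array` through the order vector `order`, which lists
    indices in increasing order of element value.  Return the index into
    `array` of a match, or -1.  Trailing blanks are stripped, mimicking
    the routine's Fortran-style comparisons."""
    key = value.rstrip()
    lo, hi = 0, len(array) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        probe = array[order[mid]].rstrip()
        if probe == key:
            return order[mid]
        elif probe < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

array = ["FEYNMAN", "BOHR", "EINSTEIN", "NEWTON", "GALILEO"]
order = [1, 2, 0, 4, 3]
print(bschoc("NEWTON", array, order))     # 3
print(bschoc("Galileo", array, order))    # -1 (comparison is case-sensitive)
```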
1) The input array is assumed to be sorted in increasing order. If this condition is not met, the results of bschoc_c are unpredictable.

2) String comparisons performed by this routine are Fortran-style: trailing blanks in the input array or key value are ignored. This gives consistent behavior with CSPICE code generated by the f2c translator, as well as with the Fortran SPICE Toolkit. Note that this behavior is not identical to that of the ANSI C library functions strcmp and strncmp.
1) If ndim < 1 the function value is -1. This is not considered an error.

2) If the input key value pointer is null, the error SPICE(NULLPOINTER) will be signaled. The function returns -1.

3) The input key value may have length zero. This case is not considered an error.

4) If the input array pointer is null, the error SPICE(NULLPOINTER) will be signaled. The function returns -1.

5) If the input array string's length is less than 2, the error SPICE(STRINGTOOSHORT) will be signaled. The function returns -1.

6) If memory cannot be allocated to create a Fortran-style version of the input order vector, the error SPICE(MALLOCFAILED) is signaled. The function returns -1 in this case.
N.J. Bachman (JPL) W.L. Taber (JPL) I.M. Underwood (JPL)
-CSPICE Version 1.0.0, 26-AUG-2002 (NJB) (WLT) (IMU)
search in a character array | <urn:uuid:bc02b943-9765-4d6a-ae3d-0373ed6146c9> | 3.390625 | 911 | Documentation | Software Dev. | 66.813673 |
In order to fully understand long-term climate change, analyses of observed and model-simulated climate changes, especially aspects related to the global hydrological cycle, should be given due attention. Such aspects include: changes in atmospheric and near-surface water vapor, humidity, cloudiness, precipitation amount, frequency, intensity and diurnal cycle, evaporation, soil moisture, drought, runoff, streamflow, and continental freshwater discharge.
The hydrological cycle. Estimates of the main water reservoirs, given in plain font in 10^3 km^3, and the flow of moisture through the system, given in slant font (10^3 km^3 yr^-1, equivalent to Eg (10^18 g) yr^-1). - From Trenberth et al., 2007
Adapted time series of 20°N to 20°S ERBS non-scanner wide-field-of-view broadband shortwave, longwave, and net radiation anomalies from 1985 to 1999 [Wielicki et al., 2002a, 2002b], where the anomalies are defined with respect to the 1985 to 1989 period with Edition 3_Rev 1 data [Wong et al., 2006]. - From Trenberth and Dai, 2007.
Time series of the annual (water year, Oct. to Sep.; note slight offset of points plotted vs. tick marks indicating January) continental freshwater discharge and land precipitation (from Figure 1) for the 1985 to 1999 period. The period clearly influenced by the Mount Pinatubo eruption is indicated by grey shading. - From Trenberth and Dai, 2007
Basin-averaged trends in the water and energy budget components for the Mississippi River basin: M is the long-term (1948-2004) annual (water year) mean (in mm for water components and W m^-2 for energy components) and b is the annual linear trend during 1948-2004 (in mm century^-1 for water components and W m^-2 century^-1 for energy components, proportional to arrow shaft width). Note that the downward arrow means that the flux increases the trend of dW/dt or G. - From Qian et al., 2007.
Linear trends from 1948-2004 in annual (water year) runoff inferred from the discharge. - Adapted from Dai et al., 2009.
Map and trends of PDSI. Adapted from Dai et al., 2004 and 2007 IPCC.
- Dai, A., J. Wang, P.W. Thorne, D.E. Parker, L. Haimberger, and X.L. Wang, 2010: A new approach to homogenize daily radiosonde humidity data. J. Climate, submitted.
- Dai, A., 2010: Drought under global warming: A review. Wiley Interdisciplinary Reviews: Climate Change, in revision.
- Trenberth, K. E., 2010: Changes in precipitation with climate change. Climate Research, submitted.
- Sun, Y., Y. Ding, and A. Dai, 2010: Changing links between South Asian summer monsoon circulation and tropospheric land-sea thermal contrasts under a warming scenario. Geophys. Res. Lett., 37, L02704, doi:10.1029/2009GL041662.
- Li, H., A. Dai, T. Zhou, and J. Lu, 2010: Response of East Asian summer monsoon to historical SST and atmospheric forcing during 1950-2000. Climate Dynamics, 34, 501-514. doi:10.1007/s00382-008-0482-7.
- Dai, A., T. Qian, K. E. Trenberth, and J. D Milliman, 2009: Changes in continental freshwater discharge from 1948-2004. J. Climate, 22, 2773–2791.
- Qian, T., A. Dai, and K. E. Trenberth, 2007: Hydroclimatic trends in the Mississippi river basin from 1948 to 2004. J. Climate, 20, 4599-4614.
- Sun, Y., S. Solomon, A. Dai, and R. Portmann, 2007: How often will it rain? J. Climate, 20, 4801-4818.
- Trenberth, K. E. and A. Dai, 2007: Effects of Mount Pinatubo volcanic eruption on the hydrological cycle as an analog of geoengineering. Geophys. Res. Lett., 34, L15702, doi:10.1029/2007GL030524.
- Trenberth, K. E., L. Smith, T. Qian, A. Dai and J. Fasullo, 2007: Estimates of the global water budget and its annual cycle using observational and model data. J. Hydrometeor, 8, 758-769.
- Qian, T., A. Dai, K. E. Trenberth, and K. W. Oleson, 2006: Simulation of global land surface conditions from 1948-2004. Part I: Forcing data and evaluation. J. Hydrometeorology, 7, 953-975.
- Dai, A., 2006: Recent climatology, variability and trends in global surface humidity. J. Climate, 19, 3589–3606.
- Dai, A., T. R. Karl, B. Sun, and K. E. Trenberth, 2006: Recent trends in cloudiness over the United States: A tale of monitoring inadequacies. Bull. Am. Met. Soc., 87, 597-606.
- Su, F., J. C. Adam, K. E. Trenberth, D. P. Lettenmaier, 2006: Evaluation of surface water fluxes of the pan-Arctic land region with a land surface model and ERA-40 reanalysis. J. Geophys. Res., 111, D05 110, doi:10.1029/2005JD006387.
- Dai, A., K. E. Trenberth, and T. Qian, 2004: A global data set of Palmer Drought Severity Index for 1870-2002: Relationship with soil moisture and effects of surface warming. J. Hydrometeorology, 5, 1117-1130.
- Trenberth, K. E., A. Dai, R. M. Rasmussen, and D. B. Parsons, 2003: The changing character of precipitation. Bull. Am. Met. Soc., 84, 1205-1217.
- Dai, A. and K.E. Trenberth, 2002: Estimates of Freshwater Discharge from Continents: Latitudinal and Seasonal Variations. J. Hydro., 3, 660-687.
- Dai, A., T.M.L. Wigley, B.A. Boville, J.T. Kiehl, and L.E. Buja, 2001: Climates of the 20th and 21st centuries simulated by the NCAR Climate System Model. J. Climate, 14, 485-519
- Dai, A., 2001: Global precipitation and thunderstorm frequencies. Part I: Seasonal and interannual variations. J. Climate, 14, 1092-1111.
- Dai, A., 2001: Global precipitation and thunderstorm frequencies. Part II: Diurnal variations. J. Climate, 14, 1112-1128.
|Ivars Peterson's MathTrek|
October 27, 1997
Recreational mathematics furnishes a vast playing field for amateur and professional mathematicians alike. It combines a sense of play with the joy of discovery. Sometimes the results are mathematically trivial; occasionally they lead to new mathematical insights.
Whole numbers or integers are often the subject of such pursuits. Once someone discovers an interesting pattern or type of behavior, those particular numbers are likely to earn a collective name. So we have perfect numbers, amicable numbers, lucky numbers, Mersenne numbers, Fermat numbers, Fibonacci numbers, Keith numbers, Niven numbers, Carmichael numbers, Stirling numbers, Catalan numbers, Ruth-Aaron numbers, Rhonda numbers, and so on. The list keeps growing!
One such curiosity came about in 1982 as the result of a telephone call. When phoning his brother-in-law, mathematician Albert Wilansky of Lehigh University in Bethlehem, Penn., noticed that the telephone number had a striking property. The number, 493-7775 (4,937,775), is composite, meaning that it can be expressed as the product of prime numbers: 3 x 5 x 5 x 65,837. Interestingly, when the digits of the original number are added together, the result (42) equals the sum of the digits of the prime factors (3 + 5 + 5 + 6 + 5 + 8 + 3 + 7 = 42). This discovery marked the birth of Smith numbers, named for Wilansky's brother-in-law.
The smallest Smith number is 4 because the number's factors, 2 x 2, when added together also equal 4. The next one is 22; then comes 27. Overall, there are 376 Smith numbers among the first 10,000 positive integers. The integer 6,036, for instance, has the prime factorization 2 x 2 x 3 x 503, and 6 + 0 + 3 + 6 = 2 + 2 + 3 + 5 + 0 + 3. About 3,300 Smith numbers lie between zero and 100,000, and slightly fewer fall between 100,000 and 200,000.
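The defining property is easy to test by machine. Here is a short, unoptimized sketch using trial division (an editorial illustration, not from the article):

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def prime_factors(n):
    """Prime factors of n with multiplicity, by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_smith(n):
    factors = prime_factors(n)
    if len(factors) < 2:      # n must be composite; primes match trivially
        return False
    return digit_sum(n) == sum(digit_sum(p) for p in factors)

print([n for n in range(2, 100) if is_smith(n)])   # [4, 22, 27, 58, 85, 94]
print(is_smith(4937775), is_smith(6036))           # True True
```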
Investigations by both amateur and professional mathematicians have revealed that special patterns of digits automatically produce Smith numbers. For example, if p is a prime whose digits are all 1s, then 3304p is a Smith number. However, no one has yet found a general-purpose formula for automatically generating every possible Smith number.
In 1985, Wayne McDaniel of the University of Missouri at St. Louis managed to prove there are infinitely many Smith numbers. Others later identified palindromic Smith numbers, such as 12,345,554,321, and investigated Smith brothers (consecutive Smith numbers, such as 728 and 729).

Is this serious math? To some mathematicians, anything having to do with the decimal digits of a number can't be counted as worthy of attention.
Those digits are merely a consequence of having chosen base 10 for expressing the numbers. "Thus, those who investigate Smith numbers are not trying to penetrate deep into the secrets of integers," comments Underwood Dudley of DePauw University in Greencastle, Ind. "They are instead observing mere accidents of their representation in an arbitrary system."
At the same, however, such studies can sometimes prove useful in various applications of mathematics -- particularly because we use base 10 numbers extensively in our everyday lives, from measuring distances to adding up the cost of groceries bought at the supermarket.
There's also a chance that the study of numbers expressed in a certain base highlights something unusual in the realm of mathematics -- something that distinguishes numbers expressed in one base from those expressed in another. Whether there is more to Smith numbers than mere happenstance remains to be seen.

Copyright 1997 by Ivars Peterson.
Dudley, Underwood. 1994. Smith numbers. Mathematics Magazine 67(February):62-65.
Guy, Richard K., and Robert E. Woodrow, eds. 1994. The Lighter Side of Mathematics: Proceedings of the Eugene Strens Memorial Conference on Recreational Mathematics and Its History. Washington, D.C.: Mathematical Association of America.
Peterson, Ivars. 1990. Islands of Truth: A Mathematical Mystery Cruise. New York: W.H. Freeman.
An annotated index of websites devoted to various types of numbers can be found in the Math Forum Internet Resource Collection at http://forum.swarthmore.edu/~steve/steve/numbers.desc.html. | <urn:uuid:1cda513c-4fb1-402f-810c-0beaf8a4fb23> | 2.78125 | 932 | Knowledge Article | Science & Tech. | 50.534415 |
Groundwater exists in an underground stratum of porous rock (aquifer). Groundwater and surface water are closely linked; water pumped onto uplands from a surface source will ultimately enter groundwater. Overexploitation of groundwater can lead to saline intrusions and land subsidence. Successfully managing groundwater mainly requires dealing with human needs and perceptions, balancing both the timing and quantity of required water removals.
August 01st, 2011
Take a closer look at the world's oldest fossils of carrion beetles and experience a unique view of these 165-million-year-old fossils.
July 18th, 2011
In this episode, speak with Dr. Lance Grande about his research on the fossils of the Green River Formation – a 52 million year old fossil lake bed in southwestern Wyoming.
July 14th, 2011
In this episode, we speak with Dr. Ian Glasspool about the environment in Illinois 300 million years ago and his research on fossil charcoal.
July 11th, 2011
In this episode, we speak with Dr. Melanie Hopkins about her research on the evolution of trilobites – an extinct group of marine invertebrates.
June 27th, 2011
We continue our discussion with Sabine Huhndorf by exploring the different stages of decomposition and the role of ascomycete fungi in this process.
June 23rd, 2011
We continue our discussion with Sabine Huhndorf by discussing how ascomycete fungi can reproduce both sexually and asexually.
June 13th, 2011
How are birds collected for museums? What types of information can be gathered from bird specimens? In this episode, we speak with Jason Weckstein to discover the many methods used to collect information from bird specimens.
June 06th, 2011
In this video we explore the Emerging Pathogens project, a unique research program to understand the evolution of parasites and pathogens that result in diseases such as malaria and AIDS. This project will provide a database of pathogens, which could lead to important insights for how humans can combat emerging epidemics.
May 30th, 2011
We continue our discussion with Matt von Konrat by exploring the unique structures called oil bodies that are found inside the cells of liverworts.
May 26th, 2011
We continue our discussion with Matt von Konrat by exploring the biological and environmental significance of early land plants. | <urn:uuid:b2fcfd15-6974-4cef-8666-2d5e113eb5a7> | 3.015625 | 400 | Content Listing | Science & Tech. | 46.33423 |
Twelve months ago, this was one of our images of 2011.
It’s a Flame Shell, a ‘cryptic’ marine species with bright orange feeding tentacles. It is found only in a a very few west coast locations. This one was photographed in 2011 [© SNH] in flame shell beds in Loch Linnhe in Argyll, during one of 15 marine surveys carried out by Scottish Natural Heritage in that year.
The surveys covered 2,000 square miles of Scotland’s seabed – a stunning achievement revealing major Flame Shell beds in Loch Fyne as well as Loch Linnhe and other locations.
Now, a year later, the Flame Shell is back in the news, with the discovery of a massive 75 hectare Flame Shell reef in Loch Alsh, estimated to contain over 100 million examples of the species.
This discovery comes from a survey by Heriot-Watt University commissioned by Marine Scotland; the reef will now be subject to special protection.
Note: The image above of a Flame Shell is © SNH and may not be reproduced without permission. | <urn:uuid:9aadad69-cf2d-48a7-a385-c716b7c0823f> | 3.09375 | 225 | Personal Blog | Science & Tech. | 53.828978 |
Credit: NASA/CXC/U.Manitoba/H.Matheson & S.Safi-Harb
Finding a Crab's Shell
Sometimes it's what's on the outside that counts. This seems to be the case with G21.5-0.9, a supernova remnant identified 30 years ago by radio astronomers. G21.5-0.9 is (or was) a mysterious member of the class of so-called "Crab-like" (or plerionic) supernova remnants, one (like the class namesake, the Crab Nebula itself) lacking an outer "shell" where the supernova blast wave shocks the interstellar medium. The lack of a shell is a surprise and not well understood. But a deep image of G21.5-0.9 (shown above) by the Chandra X-ray Observatory has helped resolve the mystery, at least in this one particular case. The Chandra observations show that G21.5-0.9 is indeed surrounded by a shell of X-ray emitting plasma. The Chandra data helps pinpoint the total amount of energy in the original explosion, as well as helping to determine when the star blew up.
Each week the HEASARC
brings you new, exciting and beautiful images from X-ray and Gamma ray
astronomy. Check back each week and be sure to check out the HEAPOW archive!
Page Author: Dr. Michael F. Corcoran
Last modified Friday, 20-Apr-2012 15:24:07 EDT | <urn:uuid:6b8f123f-431e-4cdf-9e0a-30f7c792d182> | 3.4375 | 334 | Knowledge Article | Science & Tech. | 74.105375 |
Global Warming: Global warming is defined as the increase of the average temperature on Earth. As the Earth is getting hotter, disasters like hurricanes, droughts and floods are getting more frequent.
Cause of global warming: Almost 100% of the observed temperature increase over the last 50 years has been due to the increase in the atmosphere of greenhouse gas concentrations like water vapor, carbon dioxide (CO2), methane and ozone. Greenhouse gases are those gases that contribute to the greenhouse effect (see below). The largest contributing source of greenhouse gas is the burning of fossil fuels, leading to the emission of carbon dioxide.
The greenhouse effect: When sunlight reaches Earth’s surface some is absorbed and warms the earth and most of the rest is radiated back to the atmosphere at a longer wavelength than the sun light. Some of these longer wavelengths are absorbed by greenhouse gases in the atmosphere before they are lost to space. The absorption of this long wave radiant energy warms the atmosphere. These greenhouse gases act like a mirror and reflect back to the Earth some of the heat energy which would otherwise be lost to space. The reflecting back of heat energy by the atmosphere is called the “greenhouse effect”.
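The size of the effect can be put in numbers with the Stefan-Boltzmann law. A rough back-of-the-envelope sketch (the solar constant, albedo, and observed mean temperature used below are standard textbook values, not figures from this article):

```python
# Estimate Earth's temperature WITHOUT a greenhouse effect by balancing
# absorbed sunlight against emitted thermal radiation (Stefan-Boltzmann law).
S = 1361.0        # solar constant, W/m^2
ALBEDO = 0.30     # fraction of sunlight reflected straight back to space
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

# Energy balance: S * (1 - ALBEDO) / 4 = SIGMA * T^4
T_no_greenhouse = (S * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25

print(round(T_no_greenhouse))        # ~255 K, i.e. about -18 degrees C
print(round(288 - T_no_greenhouse))  # observed mean is ~288 K: the greenhouse
                                     # effect supplies roughly 33 K of warming
```

The roughly 33 K gap between the airless estimate and the observed mean surface temperature is the natural greenhouse effect described above.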
The major natural greenhouse gases are water vapor, which causes about 36-70% of the greenhouse effect on Earth (not including clouds); carbon dioxide CO2, which causes 9-26%; methane, which causes 4-9%, and ozone, which causes 3-7%. It is not possible to state that a certain gas causes a certain percentage of the greenhouse effect, because the influences of the various gases are not additive. Other greenhouse gases include, but are not limited to, nitrous oxide, sulfur hexafluoride, hydro fluorocarbons, per fluorocarbons and chlorofluorocarbons.
Global warming caused by the greenhouse effect: Greenhouse gases in the atmosphere (see above) act like a mirror and reflect back to the Earth a part of the heat radiation, which would otherwise be lost to space. The higher the concentration of greenhouse gases like carbon dioxide in the atmosphere, the more heat energy is being reflected back to the Earth. The emission of carbon dioxide into the environment, mainly from burning of fossil fuels (oil, gas, petrol, kerosene, etc.), has increased dramatically over the past 50 years.
Effects of global warming: There are two major effects of global warming:
* Increase of temperature on the earth by about 3° to 5° C (5.4° to 9° Fahrenheit) by the year 2100.
* Rise of sea levels, projected at up to about 1 meter (3 feet) by the year 2100, with much larger rises possible over the following centuries.
More details about the effects of global warming: Increasing global temperatures are causing a broad range of changes. Sea levels are rising due to thermal expansion of the ocean, in addition to melting of land ice. Amounts and patterns of precipitation are changing. The total annual power of hurricanes has already increased markedly since 1975 because their average intensity and average duration have increased (in addition, there has been a high correlation of hurricane power with tropical sea-surface temperature).
Changes in temperature and precipitation patterns increase the frequency, duration, and intensity of other extreme weather events, such as floods, droughts, heat waves, and tornadoes. Other effects of global warming include higher or lower agricultural yields, further glacial retreat, reduced summer stream flows, species extinctions. As a further effect of global warming, diseases like malaria are returning into areas where they have been extinguished earlier.
Although global warming is affecting the number and magnitude of these events, it is difficult to connect specific events to global warming. Although most studies focus on the period up to 2100, warming is expected to continue past then because carbon dioxide (chemical formula CO2) has an estimated atmospheric lifetime of 50 to 200 years.
We are all personally responsible for releasing carbon dioxide into the atmosphere by burning fossil fuels for transportation (driving and flying) and home energy (electricity, heating, and cooling). This leads to global warming, which is destroying Earth’s biodiversity and native ecosystems.
Saving Bangladesh from Global Warming: When it comes to climate change, Bangladesh–with 140 million mostly poor residents and low-lying coastal geography–is among the most vulnerable nations on Earth. As part of the country’s effort to prepare and adapt, Bangladesh government agencies are attempting to take global projections of climate change and turn them into highly local predictions.
Typically, global climate models show projected temperature and precipitation changes at a very coarse scale that isn’t very helpful at a local level. Climate models used by the Intergovernmental Panel on Climate Change have a grid scale–think of it as pixel-size–of 200 kilometers by 200 kilometers.
Already, some preliminary maps have been made that show areas where droughts, for example, are expected to get worse in Bangladesh. Even a marginal increase in sea level, hurricane strength, storm surge height, or drought extent could have a staggering human toll.
Bangladesh is one of 48 countries on a United Nations list of least developed countries. The UN provided $200,000 for each of these nations to, first, perform a high-level assessment of how climate change will affect them, and then draw up a list of priority projects. Now the nations are competing for about $115 million to implement adaptation measures over the next three to five years.
Some work is already under way in Bangladesh. In certain areas, coasts are being planted with mangrove trees and other species that could help stanch erosion and provide a bulwark against storms.
1. Reduce your use of fossil fuels
2. Protect native forests as `carbon storehouses’
3. Help plant native trees in urban and deforested areas | <urn:uuid:c4018019-311d-414c-9ace-cb161deeba80> | 4.0625 | 1,205 | Knowledge Article | Science & Tech. | 35.897767 |
Quarterdeck Volume 4, Number 1, Spring 1996
World Ocean Circulation Experiment
Studying the ocean's role in climate change
Carri T. Hill
Climate variability affects our daily lives. Economic and social impacts of climate anomalies such as the recent cold weather in the northeast United States or the flooding in Oregon can be enormous and far ranging. While climate variability occurs naturally, it also may be driven by human activities like greenhouse gas emissions, deforestation, and urban development.
Understanding and predicting climate changes is the goal of a broad range of scientists. The realization that human activity can and does impact global climate has led to renewed interest in studies of clouds, ocean circulation, land-surface processes, volcanic activity, and atmospheric chemistry. As part of the process, oceanographers worldwide have embraced the challenge of understanding the ocean's role in climate change.
How does the ocean affect climate?
The upper layer of the ocean contains as much heat as the whole atmosphere. Interplay between the two impacts us directly through changes in weather, sea level, and more. The ocean also absorbs trace gases implicated in global warming (particularly carbon dioxide), mitigating their immediate effects. More importantly, however, the ocean mixes and moves water away from the surface and redistributes it in deeper layers around the globe as part of large-scale ocean circulation. Thus, the ocean acts as a buffer to reduce some of the potential climatic shifts.
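The first sentence can be checked with round reference numbers (the masses and specific heats below are standard values, not from the article); the heat capacity of the entire atmosphere is matched by just the top few meters of ocean:

```python
# How deep a layer of ocean holds as much heat capacity as the whole atmosphere?
M_ATM = 5.1e18       # mass of the atmosphere, kg
CP_AIR = 1004.0      # specific heat of air, J/(kg K)

OCEAN_AREA = 3.6e14  # ocean surface area, m^2
RHO_SEA = 1025.0     # seawater density, kg/m^3
CP_SEA = 3990.0      # specific heat of seawater, J/(kg K)

atm_heat_capacity = M_ATM * CP_AIR   # ~5.1e21 J/K

# Solve OCEAN_AREA * RHO_SEA * CP_SEA * depth = atm_heat_capacity for depth:
depth = atm_heat_capacity / (OCEAN_AREA * RHO_SEA * CP_SEA)
print(round(depth, 1))   # ~3.5 m of seawater rivals the entire atmosphere
```

So a layer only a few meters thick already matches the atmosphere, which is why even modest changes in ocean circulation matter so much for surface climate.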
Unfortunately, we cannot be too sanguine. Oceanographers speculate, for example, that circulation in the Nordic Seas could change over as short a period as a few years. Resulting alterations in the North Atlantic may weaken and slow the Gulf Stream, which normally delivers warm water to the shores of northwest Europe. The ultimate result could be dramatically colder weather in this region. Therefore, we must learn more about the global ocean and its circulation to understand and predict its impact on Earth's climate.
Global research program has a home at Texas A&M
The World Ocean Circulation Experiment (WOCE) is a cooperative effort by scientists from more than 30 nations to study large-scale circulation of the ocean. The knowledge we gain during this unprecedented program will help unravel the role of ocean circulation in long-term climate change and help develop models for predicting such fluctuations.
Dr. Worth D. Nowlin, Jr., Distinguished Professor at Texas A&M University, leads the U.S. contribution to WOCE. He has been instrumental in planning and successfully implementing the U.S. WOCE program as well as establishing a U.S. WOCE program office located at Texas A&M.
The office coordinates diverse activities such as producing implementation plans, obtaining clearances for ships to work in the exclusive economic zones of various coastal states, arranging travel to planning meetings for the many U.S. scientists involved in WOCE, or working with scientists throughout the world to ensure WOCE data are collected and archived properly.
WOCE scientists conduct research using many tools that operate at different scales. For example, satellites like ERS-1 and TOPEX/POSEIDON provide global coverage of ocean surface topography, surface winds, and sea-surface temperature. High-quality data from the TOPEX/POSEIDON satellite (see back cover) allow us to track changes in seasonally varying currents, including the development of the "Great Whirl," a large clockwise eddy that appears in the Somali Current during the southwest monsoon.1
Sea-level gauges and temperature probes deployed by voluntary observing ships also provide global coverage and a means of verifying the satellite data. Repeated measurements of temperature within the top 1000 meters of the ocean allow us to estimate changes in the total heat stored, for example, in the North Atlantic.2
Most observations, however, are made at the scale of a single ocean basin or smaller. Fleets of surface drifters, free-floating instrument packages, record and transmit data about surface currents. In some cases these drifters also report high-quality temperature and atmospheric pressure data, which are important for operational weather forecasters.
Below the surface, neutrally buoyant floats track ocean flow at a depth of 1000 meters to provide both a statistical representation of where currents at that depth flow and a reference point against which current velocities at other depths can be calibrated. The floats deployed in the Pacific Ocean, for example, suggest that most flow away from the basin margin at 1000 meters is zonal (east-west rather than north-south).3
Most oceanographers are familiar with moored current meters and hydrographic data obtained by research vessels. WOCE also relies on these sampling systems. Scientists participating in the hydrographic program (the largest single component of WOCE) collect data along a series of lines extending coast-to-coast across all the major ocean basins.
The goal is to establish a database that describes the distribution of density and chemical tracers in the oceans during the 1990s. These distributions can be used to highlight the sources of the ocean's water masses, patterns of movement, and time scales for water renewal. For example, measurements of 14C in seawater show how the upper layers of the ocean have been affected by the atmosphere, or ventilated, during the past 30 years.4
Moored current meters provide data on short-term variability in flow at particular choke points in the ocean's circulation. The data also provide statistics that describe the size and number of eddies close to the choke points. Data from a mooring near Abaco in the Bahamas, for example, provide insight into the flow of the deep western boundary current in the North Atlantic.5
WOCE also supports research to improve our ocean modeling capability. Only by incorporating field data into models can we increase our ability to predict ocean behavior. Presently this is limited by a lack of understanding of certain critical ocean processes and by the lack of computing power needed to cope with a fine-scale model of the global ocean. Progress is being made on both fronts, however, as well as in the assimilation of data into models. We now have several models that can resolve ocean eddies and provide reasonably realistic views of known ocean features.6
To date, U.S. WOCE has concentrated on field work in the Pacific Ocean and, most recently, the Indian Ocean. The Indian Ocean expedition began in early December 1994 with a cruise across the Antarctic Circumpolar Current (ACC) and concluded in late January 1996-some 50,296 miles and 1,244 hydrographic stations later. During this period the U.S. and other nations collected data using the sampling platforms mentioned above. These data constitute an unprecedented set of observations of the Indian Ocean.
Contributions from U.S. WOCE are already producing new and exciting insights into the nature of ocean circulation. These include multi-year measurements of flow in major ocean current systems, global tracer data sets that provide information on mixing rates and the ocean's capacity for absorbing excess carbon dioxide, changes over decades in how the ocean transports heat, interactions between the upper ocean and the atmosphere, and many more.
Evidence for long-term variations and changes in the ocean has been discovered, and its climatic importance is continually being assessed (IPCC, 1990; IPCC, 1992). However, the full impact and scope of knowledge gained from WOCE is not likely to be realized before the early 2000s.
With the continued work of WOCE and similar research programs, we eventually should be better able to make long-term climate forecasts and apply our knowledge to predict the economic impacts of such changes.
IPCC, 1990: Climate Change: The IPCC Scientific Assessment. (J. T. Houghton, G. J. Jenkins and J. J. Ephraums, eds.) Cambridge University Press, Cambridge, U.K., 365 pp.
IPCC, 1992: Climate Change: The Supplementary Report to the IPCC Scientific Assessment. (J. T. Houghton, B. A. Callendar and S. K. Varney, eds.) Cambridge University Press, Cambridge, U.K., 200 pp.
Last updated February 24, 1997 | <urn:uuid:81c6a610-a65c-4a7e-820d-2401c40020ca> | 3.71875 | 1,710 | Academic Writing | Science & Tech. | 39.856804 |
After an accelerator has pumped enough energy into its particles, they collide either with a target or each other. Each of these collisions is called an event. The physicist's goal is to isolate each event, collect data from it, and check whether the particle processes of that event agree with the theory they are testing.

Each event is very complicated since lots of particles are produced. Most of these particles have lifetimes so short that they go an extremely short distance before decaying into other particles, and therefore leave no detectable tracks. How can a physicist determine what happened if she can never record the presence of several key particles?
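That question has a standard practical answer, which goes one step beyond the text: reconstruct the invisible parent from its detectable decay products. Energy and momentum are conserved, so the invariant mass of the combined daughters, m^2 = E^2 - |p|^2 in natural units, equals the parent's mass. A sketch:

```python
import math

def invariant_mass(particles):
    """particles: iterable of (E, px, py, pz) four-vectors, natural units (c = 1)."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Two back-to-back photons of 0.0675 GeV each, as from a neutral pion decay:
photons = [(0.0675, 0.0, 0.0, 0.0675),
           (0.0675, 0.0, 0.0, -0.0675)]
print(round(invariant_mass(photons), 3))   # 0.135 -- the pi0 mass in GeV
```

A peak at a known mass in a histogram of such combinations is how a short-lived particle announces itself, even though it never left a track.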
Is the enum type signed or unsigned? Does the signedness of enums differ among C, C99, ANSI C, C++, C++0x, GNU C, and GNU C99?
An enum is guaranteed to be represented by an integer, but the actual type (and its signedness) is implementation-dependent.
You can force an enumeration to be represented by a signed type by giving one of the enumerators a negative value.
In C++0x, the underlying type of an enumeration can be explicitly specified.
(C++0x also adds support for scoped enumerations)
For completeness, I'll add that in The C Programming Language, 2nd ed., enumerators are specified as having type int.
SETI@home
Are we alone in the Universe? This is the question which has baffled and fascinated mankind for centuries. One only has to look at the plethora of popular Science Fiction TV shows and films to see that we are fascinated by the idea of other intelligent life "out there". We still have no conclusive proof that we are alone, or that we are not, for that matter. Discounting the thousands of unsubstantiated UFO reports, as far as we know, E.T. has not dropped in, and Mr Spock has not popped by to see if we are living long and prospering.
But now, mankind has the technology to search the heavens, if on a somewhat limited basis; but then, there is rather a lot of ground to cover. This search has a name: SETI, or the Search for Extraterrestrial Intelligence, which is a scientific effort aiming to determine if there is intelligent life out in the universe. There are many methods that SETI scientific teams use to search for extraterrestrial intelligence. Many of these search billions of radio frequencies that flood the universe, looking for another civilization that might be transmitting a radio signal. Other SETI teams search by looking for signals in pulses of light emanating from the stars.
And now anyone with a humble desktop computer can take part in the search, thanks to a project called "SETI@home". This project, which is based at UC Berkeley in the USA, uses the world's largest single-dish radio telescope (above) at Arecibo in Puerto Rico to search for any possible radio signal from another world. The telescope is 305 m (1000 feet) in diameter, 167 feet deep, and covers an area of about twenty acres. But there is so much data recorded by the telescope; how could anyone possibly analyse it all? The answer: break it up into small chunks and distribute it to as many computers as possible...
The UC Berkeley SETI team has discovered that there are already thousands of computers that might be available for use. Most of these computers sit around most of the time with screensavers accomplishing absolutely nothing and wasting electricity to boot. This is where SETI@home (and you!) come into the picture. The SETI@home project hopes to convince you to allow them to borrow your computer when you aren't using it and to help them "...search out new life and new civilizations." This is accomplished with a screen saver (pictured below) that can go get a chunk of data from the SETI team over the internet, analyze that data, and then report the results back to them. When you need your computer back, the screen saver instantly gets out of the way and only continues it's analysis when you are finished with your work.
If you should be fortunate enough to be the first to discover a signal from another world, then no doubt instant fame and possibly fortune will follow.
No conclusively extraterrestrial signals have yet been discovered, but who knows, it could be you! Be sure to visit the SETI@home website to download the screensaver and for any other related info.
The SETI Institute Online
Arecibo Radio Telescope website | <urn:uuid:8614451e-eb7a-4f81-802d-e9339ffee95b> | 3.15625 | 646 | Knowledge Article | Science & Tech. | 49.904825 |
Read what our writers around the world are saying about climate change.
March 2012 set records for warm temperatures that promoted early leafing and flowering across large areas of the United States. A team of scientists at the USA National Phenology Network, which is sponsored by the U.S. Geological Survey, have published a study which shows that 2012 was the earliest ... keep reading
To protect New York City from the increasing number of flooding events expected in the next century—similar to Hurricane Sandy—the city needs to consider a costly and invasive option: permanent evacuation of communities in the lowest-lying areas and massive barriers around the city that will cost billions of ... keep reading
Climate change could destroy more than half of the habitats of most plants and a third of animals by 2080 unless we take steps to limit greenhouse gases. The study was released in the journal Nature Climate Change. The study’s authors looked at 50,000 common species. They found that ... keep reading
It seems that the weather may be getting through to Americans in a way that scientists have not managed to in the past. A study at George Mason University shows that a majority of Americans say global warming is affecting weather in the United States. About six in ten ... keep reading
NASA has produced this amazing and very short animation that depicts how temperatures around the globe have warmed over the past years. The data come from NASA's Goddard Institute for Space Studies in New York (GISS), which monitors global surface temperatures. NASA notes, “All 10 of the warmest years ... keep reading
The World Meteorological Organization’s Statement on the Status of the Global Climate, which ranked 2012 in ninth place among the warmest years on record, joining the ten previous years, made a number of other significant observations. Above-average temperatures were observed during 2012 across most of the globe’s ... keep reading
When Superstorm Sandy hit New York City last fall, the publisher Farrar, Straus and Giroux, like most everything else, totally shut down. It was a week before power returned to FSG, according to Brian Gittis, a senior publicist. When he got back to his office, he began sorting through galleys ... keep reading
Only 39% of mining companies believe the climate is changing; 13% have made plans to adapt (CSIRO). Recent research suggests only a minority of mining companies are preparing for the biophysical impacts of climate change. Those that are preparing are going it alone: there is little collaboration on planning between ... keep reading
Continuing a decade-long increase, global food prices rose 2.7 percent in 2012, reaching levels not seen since the 1960s and 1970s but still well below the price spike of 1974. Between 2000 and 2012, the World Bank global food price index increased 104.5 percent, at an average annual ... keep reading
A new survey finds that an overwhelming majority of Americans want to prepare in order to minimize the damage likely to be caused by global warming-induced sea-level rise and storms. A majority also want people whose properties and businesses are located in hazard areas – not the government – to foot the ... keep reading
Here is a good site that has some very cool info about XAML: XAML.NET. They have a very nicely worded definition on their main page, which I will include here. I hope that they don't mind my posting their content directly. I definitely recommend their site for finding good XAML info. I strongly recommend that folks get up to speed on it, since it will have a tremendous impact on our future software development.
What is XAML? (Extensible Application Markup Language; pronounced "zammel")
XAML is a declarative XML-based language that defines objects and their properties in XML. XAML syntax focuses upon defining the UI (user interface) for the Windows Presentation Foundation (WPF) and is therefore separate from the application code behind it.
Although XAML is presently for use on the Windows platform, the WPF/E (Windows Presentation Foundation/Everywhere) initiative will eventually bring XAML to other platforms and devices.
XAML syntax describes objects, properties and their relationships to one another. Generic XAML syntax defines the relationship between objects and children. Properties can be set as attributes or by using 'period notation' to specify the object as a property of its parent.
<parent property="u" property2="v">
<child property="x"/>
</parent>
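To make that concrete, here is a small WPF-flavored sketch (the element and property names come from WPF, not from the quoted definition), showing the same property set first as an attribute and then with period notation:

```xml
<!-- Background set as a plain attribute: -->
<Button Content="Click me" Background="Blue"/>

<!-- The same property set with period (property-element) notation,
     which allows a full object as the value: -->
<Button Content="Click me">
    <Button.Background>
        <SolidColorBrush Color="Blue"/>
    </Button.Background>
</Button>
```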
Discovered on January 23, 1779 by Charles Messier.
[From: Memoir on the Comet of 1779, Mem. Acad. for 1779, p. 318-372 + Pl. XIV. Discovery announcement of M56, p. 320]
On the 19th [of January, 1779], when observing the Comet [C/1779 A1 Bode, Messier's 17th comet], I saw at little distance of it, & on its parallel, a very faint nebula, which one cannot perceive without a refractor; the dusk then prevented me from determining its position: In the morning of the 23rd, I have compared it directly with the second star of Cygnus of the fifth magnitude; I have reported it in the Chart, & here is its position.
Nebula near the head of Cygnus      Right Ascension    Northern Declination    [p. 352]
January 23, 1779                    287d 0' 1"         29d 48' 14"
[PT 1818, p. 444-445, reprinted in:
Scientific Papers, Vol. 2, p. 599]
The 56th of the Connoissance. [M 56 = NGC 6779]
"1783, 7 feet telescope. A strong suspicion of its being stars."
"1783, 1799, 10 feet telescope. 120 will not resolve it; 240 wants light: 350 however shows the stars, but they are so exceedingly close and small that they cannot be counted."
"1784, 1807, 20 feet telescope. A globular cluster of very compressed small stars about 4 or 5 minutes in diameter."
"1805, 1807, large 10 feet teelscope. With 171 it is 3' 36" in diameter."
The profundity of this cluster, by the observation of the 10 feet telescope, must be of the 344th order. It is near the preceding branch of the milky way.
Sweep 197 (July 31, 1829)
RA 19h 9m 52.1s::, NPD 60d 6' 37s (1830.0) [Right Ascension and North Polar Distance]
Fine; v compressed; m b M; stars 11m; a * 9 m precedes. Clouds interferred.
Fine; very compressed; much brighter toward the middle; stars of 11m; a star of 9 m precedes. Clouds interferred.
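The sweep positions above use North Polar Distance (NPD) rather than declination; the two are related simply by Dec = 90 deg - NPD. A quick sketch of the conversion (the function name is ours):

```python
def npd_to_dec(deg, arcmin, arcsec=0.0):
    """Convert a North Polar Distance to a declination: Dec = 90 - NPD."""
    npd = deg + arcmin / 60.0 + arcsec / 3600.0
    return 90.0 - npd

# John Herschel's sweep 197 records NPD 60d 6' 37" (epoch 1830):
print(round(npd_to_dec(60, 6, 37), 2))   # 29.89 degrees, close to Messier's
                                         # 29d 48' of 1779 (the epochs differ)
```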
Sweep 159 (July 6, 1828)
RA 19h 9m 55.8s, NPD 60d 7' 8s (1830.0) [Right Ascension and North Polar Distance]
p rich; S; irreg R; g b M but not to a nucleus; 2 1/2' to 3 diam; stars 13 and 14m, well seen in full illumination of field. A few scattered stars.
Pretty rich; small; irregularly round; gradually brighter toward the middle but not to a nucleus; 2 1/2' to 3' diameter; stars of 13m and 14m, well seen in full illumination of field. A few scattered stars.
Sweep 7 (September 4, 1825)
RA 19h 9m 56.6s, NPD 60d 7' 10s (1830.0) [Right Ascension and North Polar Distance]
L; R; v g b M. I see the stars which are v S and of different sizes. It fades gradually away to the borders.
Large; round; very gradually brighter toward the middle. I see the stars which are very small [faint] and of different sizes [magnitudes]. It fades gradually away to the borders.
Sweep 199 (August 5, 1829)
RA 19h 9m 57.1s, NPD 60d 6' 50s (1830.0) [Right Ascension and North Polar Distance]
Sweep 198 (August 1, 1829)
RA 19h 9m 58.3s, NPD 60d 6' 49s (1830.0) [Right Ascension and North Polar Distance]
Fine comp cluster; R, inclining to triangular form; b M; stars 12...14m. A fine object, diam 3'.
Fine compressed cluster; round, inclining to triangular form; brighter toward the middle; stars of 12th to 14th magnitude. A fine object, diameter 3'.
Last Modification: March 30, 2005 | <urn:uuid:f781df7a-823b-453f-b841-9079aebeb986> | 3.234375 | 924 | Knowledge Article | Science & Tech. | 97.480399 |
Since pointers are variables themselves, they can be stored in arrays just as other variables can. This is just one aspect of the generality of C's data types, which we'll be seeing in the next few sections.
We've used a recursive definition of ``expression'': a constant or variable is an expression, an expression in parentheses is an expression, an expression plus an expression is an expression, etc. There are obviously an infinite number of expressions, of arbitrary complexity. In exactly the same way, there are an infinite number of data types in C. We've already seen the basic data types: int, char, double, etc. But then we have the derived data types such as array-of-char and pointer-to-int and function-returning-double. So we can say that for any type, array-of-type is another type, and pointer-to-type is another type, and function-returning-type is another type. Once we've said that, we can see that there is also the possibility of arrays of pointers, and arrays of arrays, and functions returning pointers, and even (in section 5.11, though this is a deeper topic) pointers to functions. (The only possibilities that C doesn't support are functions returning arrays, and arrays of functions, and functions returning functions.)
Make sure you understand why an integer is something that can be ``compared or moved in a single operation,'' but that a string (that is, an array of char) is not. Then, realize that a pointer is also something that can be ``compared or moved in a single operation.'' (Actually, though, the string comparisons we'll be doing are not single operations.) From time to time you'll hear me caution you not to worry too much about certain aspects of efficiency. Here, it's true that the overhead of copying entire strings from one place to another, a character at a time (which is the overhead we'll be getting around by manipulating pointers instead) can be significant, but that's not the only concern: once we're comfortable with the idea, manipulating pointers will be somewhat easier on us, too. (Copying lots of characters around is a nuisance, and it can also be dangerous, if the destination isn't big enough or isn't in the right place.)
Don't worry about the ``one long character array'' that the ``lines to be sorted are stored end-to-end in.'' Instead, look at the picture at the bottom of page 107, which shows the pointers that might be set up after reading the lines
    defghi
    jklmnopqrst
    abc
On the left are the pointers before sorting, and on the right are the pointers after sorting. The three strings have not been moved, but by reshuffling the pointers, the three pointers in order now point to the lines
    abc
    defghi
    jklmnopqrst
Once again, we see a nice simple decomposition of the problem, which might seem deceptively simple except that when problems are decomposed in simple ways like this, and then implemented faithfully, they really can be this simple. Deferring the sorting step is an excellent idea, especially if we didn't quite follow the details of the sorting functions in the previous chapter. (Actually, in practice, we can usually defer the sorting step forever, since there's often a general-purpose sort routine provided for us somewhere. C is no exception: a qsort function is a required part of its standard library. For the most part, the only people who have to write sort routines are programming students and the few people who get stuck implementing system functions.)
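As a taste of that standard-library route, here is a minimal comparison function one could hand to the library's qsort to sort an array of character pointers (a sketch of my own, not code from the book). qsort passes our function pointers to the array elements; since each element is itself a `char *`, we receive a pointer to a pointer, hence the cast and extra level of indirection.

```c
#include <stdlib.h>
#include <string.h>

/* Comparison function suitable for qsort on an array of
 * char *: p1 and p2 point AT array elements, so they are
 * effectively (char **). */
int pstrcmp(const void *p1, const void *p2)
{
    return strcmp(*(const char * const *)p1,
                  *(const char * const *)p2);
}
```

A call would then look like `qsort(lineptr, nlines, sizeof lineptr[0], pstrcmp);`.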
The main program at the bottom of page 108 looks a bit more elaborate than the pseudocode at the top of the page, but the essence of the program is the three calls to readlines, qsort, and writelines. Everything else is declarations, plus an error message which is printed if readlines is for some reason not able to read the input. Eventually, you should be able to understand why all of the various declarations are required, but you can skim over them at first.
The readlines function first calls our old friend getline to read each line into a local array, line. On page 29 in section 1.9, we saw a program for finding the longest line in the input: it read each line into a local array line, and kept a copy of the longest line in a second array longest. In that program, it didn't matter that the input array line was continually overwritten with each new input line, and that most lines (except the longest one) were lost and forgotten. Here, however, we do need to save all of the input lines somewhere, so that we can sort them and print them later.
The lines are saved by calling alloc, a function which we wrote in section 5.4 but may have skimmed over. alloc allocates n bytes of new memory for something which we need to save. Each time we read another line, we call alloc to allocate some new memory to store it, then call strcpy to copy the line from the line array to the newly allocated memory. This way, it's okay that the next line is read into the same line array; we save each line, as it's read, into its own little alloc'ed piece of memory.
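That allocate-and-copy step can be sketched as a standalone function. Here I use the standard malloc in place of the book's alloc; the name save_string is illustrative, not the book's (modern libraries call this operation strdup).

```c
#include <stdlib.h>
#include <string.h>

/* Return a copy of s in freshly allocated memory, or NULL
 * if no memory is available. The +1 leaves room for the
 * terminating '\0'. */
char *save_string(const char *s)
{
    char *p = malloc(strlen(s) + 1);
    if (p != NULL)
        strcpy(p, s);
    return p;
}
```

Because the copy lives in its own allocated memory, the caller is free to reuse (or overwrite) the original buffer, just as readlines reuses its line array.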
Note that memory allocated with a routine such as alloc persists, just as global and static variables do; it does not disappear when the function that allocated it returns.
Hopefully you're getting used to reading compressed condition statements by now, because here's another doozy:
    if (nlines >= maxlines || (p = alloc(len)) == NULL)
This line checks to make sure we have enough room to store the new line we just read. We need two things: (1) a slot in the lineptr array to store the pointer, and (2) space allocated by alloc to store the line itself. If we don't have either of these things, we return -1, indicating that we ran out of memory. We don't have a slot in the lineptr array if we've already read maxlines lines, and we don't have room to store the line itself if alloc returns NULL. The subexpression (p = alloc(len)) == NULL is equivalent in form to other assign-and-test combinations we've been using involving getchar and getline: it assigns alloc's return value to p, then compares it to NULL.
Normally, we might be suspicious of the call alloc(len). Why? Remember that strings are always terminated by '\0', so the space required to store a string is always one more than the number of characters in it. Normally, we'll call things like alloc(len + 1), and accidentally calling alloc(len) is usually a bug. Here, it happens to be okay, because before we copy the line to the newly-allocated memory, we strip the newline '\n' from the end of it, by overwriting it with '\0', hence making the string one shorter than len. (Why is the last character in line, namely the '\n', at line[len-1], and not line[len]?)
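The newline-stripping trick might be written as a little function of its own (a sketch; strip_newline is my name for it, not the book's):

```c
#include <string.h>

/* Overwrite a trailing '\n' with '\0'. Afterwards the
 * string is one character shorter than len, which is why
 * allocating len (rather than len + 1) bytes happens to
 * be enough room in readlines. */
void strip_newline(char *line, size_t len)
{
    if (len > 0 && line[len - 1] == '\n')
        line[len - 1] = '\0';
}
```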
The lines
    if (nlines >= maxlines ...
and
    lineptr[nlines++] = p;
deserve some attention. These represent a common way of filling in an array in C. nlines always holds the number of lines we've read so far (it's another invariant). It starts out as 0 (we haven't read any lines yet) and it ends up as the total number of lines we've read. Each time we read a new line, we store the line (more precisely, a pointer to it) in lineptr[nlines++]. By using postfix ++, we store the pointer in the slot indexed by the previous value of nlines, which is what we want, because arrays are 0-based in C. The first time through the loop, nlines is 0, so we store a pointer to the first line in lineptr, and then increment nlines to 1. If nlines ever becomes equal to maxlines, we've filled in all the slots of the array, and we can't use any more (even though, at that point, the highest-filled cell in the array is lineptr[maxlines-1], which is the last cell in the array, again because arrays are 0-based). We test for this condition by checking nlines >= maxlines, as a little measure of paranoia. The test nlines == maxlines would also work, but if we ever accidentally introduce a bug into the program such that we fill past the last slot without noticing it, we wouldn't want to keep on filling farther and farther past the end.
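The fill-then-increment idiom can be shown in isolation, with ints standing in for the line pointers (store_item and MAXITEMS are illustrative names, not from the book):

```c
#define MAXITEMS 4

/* Store value in the next free slot of items[], where
 * nitems is the count of slots used so far. Returns the
 * new count, or -1 when the array is full. */
int store_item(int items[], int nitems, int value)
{
    if (nitems >= MAXITEMS)
        return -1;              /* no room left */
    items[nitems++] = value;    /* fill slot, then bump count */
    return nitems;
}
```

As in readlines, the postfix ++ stores at the slot indexed by the old count, so the first item lands in slot 0.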
...lineptr is an array of MAXLINES elements, each element of which is a pointer to a char. That is, lineptr[i] is a character pointer... We can see that lineptr[i] has to be a character pointer, by looking at two things: in the function readlines, the line
    lineptr[nlines++] = p;
has a character pointer on the right-hand side, and the only thing we can assign a character pointer to is another character pointer. Also, in the function writelines, in the line
    printf("%s\n", lineptr[i]);
printf's %s format expects a pointer to a character, so that's what lineptr[i] had better be.
Note that writelines prints a newline after each line, since newlines were stripped out of the input lines by readlines.
Don't worry too much about the discussion at the bottom of page 109. We saw in section 5.3 that due to the ``strong relationship'' between pointers and arrays, it is always possible to manipulate an array using pointer-like notation, and to manipulate a pointer using array-like notation. Since lineptr is an array, it is possible to manipulate it using pointer-like notation, but since what it's an array of is other pointers, it can start to get a bit confusing. Though many programmers do write things like
    printf("%s\n", *lineptr++);
and though this is correct code, and though one should probably understand it to have a 100% complete understanding of C, I've decided that code like that is just a bit too hard to follow, and I'd always write (perhaps more pedestrian and mundane) things like
printf("%s\n", lineptr[i]); or printf("%s\n", lineptr[i++]);
Since I didn't ask you to follow the qsort example in section 4.10 in complete detail, I won't ask you to work through this one completely, either. But if you compare the code here to the code on pages 87-88, you will see that the only significant differences are that the variables and arrays containing the things being sorted have been changed from int to char * (pointer-to-char), and the comparison
    if (v[i] < v[left])
has been changed to
if (strcmp(v[i], v[left]) < 0)
This page by Steve Summit // Copyright 1995, 1996 // mail feedback | <urn:uuid:7ca4eeb0-cbd3-49ab-b4c9-c05ba089ee64> | 3.515625 | 2,347 | Documentation | Software Dev. | 54.195975 |
Hydroxide is a functional group consisting of oxygen and hydrogen (OH-). It has a charge (oxidation number) of -1 unit.
The term hydroxyl group is used when the functional group -OH is counted as a substituent of an organic compound. Organic molecules containing a hydroxyl group are known as alcohols.
Hydroxide is one of the simplest of the polyatomic ions.
Hydroxide is also a general term for any salt containing stoichiometric amounts of this polyatomic ion.
Choose one of the following Minerals for more details:
Crystal formations such as this grow out of mineral-rich solutions in spacious rock cavities. When the solution becomes saturated, crystals begin to form on rock walls, on other crystals, or even on particles of dust. They grow quickly or slowly--or stop--depending on changes in temperature and the concentration of the solution. Marcasite forms from iron and sulfur to make crystals with a chemical composition of FeS2. Note how the size of the forming crystals grew larger with each round of formation in this specimen. | <urn:uuid:944b4013-8d7c-49f6-819c-8e35de60ca23> | 2.90625 | 115 | Knowledge Article | Science & Tech. | 43.70325 |
Acid Rain Lesson Plan
Activity 1 – the pH Scale
Time: 1 Hour
At the end of this lesson the student will be able to:
- Describe the pH scale and its components
- Explain why a pH measurement must be accurate and why small changes in pH are important.
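For reference, the pH scale is logarithmic, so each one-unit step corresponds to a tenfold change in hydrogen-ion concentration, which is why small changes in pH are important:

```latex
\mathrm{pH} = -\log_{10}[\mathrm{H^{+}}]
\qquad\Rightarrow\qquad
\frac{[\mathrm{H^{+}}]_{\text{lemon (pH 2)}}}{[\mathrm{H^{+}}]_{\text{grapefruit (pH 3)}}}
= \frac{10^{-2}}{10^{-3}} = 10
```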
You will need enough of the following for each student:
- Molasses
- Dried apricots
- Grapefruit pieces
- Lemon pieces
- Writing paper, pencils
- Four paper cups for each child
Instructions to Teacher
- Write on blackboard:
- Molasses - pH 5
- Dried Apricots - pH 4
- Grapefruit - pH 3
- Lemon- pH 2
(Other common substances' pH can be found in Table 1)
Instructions to Students
- Sample each item. You may try any one of the four samples first.
- Record which sample tasted the least bitter, the most bitter. Rank them in that order. Save these answers for the discussion later.
Questions to Students
- How did you rank the four samples, least bitter to most bitter?
- Why did you rank them this way?
- Where do you think the following fruits and vegetables would be placed on the pH scale: Apples? Carrots? Spinach? Jams? (pH 3, 5, 5, 4, respectively) | <urn:uuid:b6145d4a-3770-48c2-a132-f1b174f4d24e> | 3.921875 | 284 | Tutorial | Science & Tech. | 72.035029 |
Monitoring and Observing Systems
Maintained by IOC
Global Observing and Monitoring Systems
The vastness of the global oceans, the amount of information they contain and their relevance to society qualify them as an international research priority. Furthermore, current issues such as global climate change and sea-level rise, marine ecosystem degradation (including the collapse of several fisheries around the world) and pollution, and the occurrence of extreme events such as tsunamis and El Niño/Southern Oscillation require not only a scientific understanding of the global oceans and their systems, but also a knowledge of and familiarity with their patterns over time. To meet this need there are numerous international efforts to promote programmes for various monitoring and observing systems, with a global scope, for the world's oceans and marine ecology. These systems range from autonomous drifting buoys to human observers aboard fishing vessels around the world.
Photo title: Image of recent seismic activity in the Pacific.
Photo credit: NOAA
Argo: a component of the Integrated Ocean Observing System
Argo floats are deployed to measure sea temperature and salinity and compose part of GCOS, GOOS and GODAE. Argo is an international project to collect information on the upper part of the world's oceans. The 2007 goal of having 3,000 ocean-traveling float instruments in operation, producing 100,000 temperature and salinity profiles per year, was met. By the end of 2008 there were about double that number of platforms gathering data, of which 3,150 were floats. Applications include: ocean heat storage and climate change; ocean salinity changes due to rainfall; ocean-driven events such as El Niño; impacts of ocean temperature on fisheries and regional ecosystems; interactions between the ocean and monsoons; and how the oceans drive hurricanes and typhoons.
Fisheries Observer and Monitoring Programs
Fisheries observer programs provide for the collection of biological, environmental and socio-economic data for science, fisheries management, and compliance monitoring. Observer data also provide a means for verifying other independent sources of data such as logbooks and landing reports.
Observing in many countries began with monitoring foreign fishing vessels in the new EEZs during the 1970s and shifted to domestic coverage in the 1980s.
Most fisheries observer programs have developed independently in each global region to meet regional needs, but each has common issues. The issues include health, injury, liability insurance to protect observers and the vessels they observe, relationships between observers and crew, duties of observers on common crew tasks such as helping in the galley, standing watch, or on deck, and objectivity of collected data.
As an example of the importance of observer programs, the USA's National Marine Fisheries Service (part of NOAA) first put observers aboard multiple fishing vessels during the height of the dolphin/tuna affair in the 1970s, when hundreds of dolphins were being killed in each tuna set involving the herding of dolphins to catch the tuna swimming with them. Before there was a regulatory role, NMFS placed observers on tuna seiners to learn which species of dolphins and how many were involved and what was happening. In the days before laptops, observers filled out 22 page data forms for each set, detailing dolphin, tuna and crew actions, including the various ways crew worked to prevent mortalities, as well as information such as location, time lines, and oceanography, necessary for the analysts back at the lab to understand what was going on. Dozens of observers brought back data, diagrams, and ideas that formed the basis for a suite of gear and procedures. This protocol reduced dolphin mortalities by a factor of over 100, made fishing more efficient, and allowed the US industry to stay in business. Most observers now have a regulatory capacity, monitoring quota progress or compliance, but they also continue collecting information that allows improvements in stock assessments, bycatch reduction, and knowledge about the species themselves.
Observers are the eyes of the scientists as well as the regulators. Their data, combined with that from research vessels and from landings statistics, directly support both resource science and management.
Funding sources vary for each observer program. The fishing industry pays for observer coverage in some programs and countries while national funds, or perhaps a fisheries commission pays in others. In large countries, there may be many types of observer programs and many ways to pay for them.
Photo title: Pelagic Pair Trawler Fishing Vessel DOÑA MARTITA at dock, New Bedford, Massachusetts, USA
Photo credit: www.OceansArt.US
International Fisheries Observer and Monitoring Conferences
Photo title: Banner of the International Fisheries Observer and Monitoring Conference
Photo credit: IFOMC
These biennial conferences are the premier international fora for fisheries monitoring and observer program issues. The upcoming conference is expected to attract over 300 delegates from over 40 countries. Attendees will include organizers and participants of fishery monitoring programs, fishing industry groups, and end users of fishery-dependent data collection systems. The conference format includes presented papers, panel discussion sessions, a poster session, a trade show, and social events.
The Conference mission is to improve fishery monitoring programs worldwide through sharing of practices and development of new methods of data collection and analysis. To provide a forum for dialog between those responsible for monitoring fisheries and those who rely upon the data they collect.
A link to the conference and the proceedings from prior sessions is here
- Improve the quality of fishery monitoring data through sharing of best practices for collection and analysis of information.
- Improve the use of fishery monitoring data to support sustainable resource management.
- Promote the international exchange of ideas and best practices from fishery monitoring programs throughout the world.
- Improve accessibility to fishery monitoring data.
- Support the development of new innovative data collection methods.
- Improve the training and safety of at-sea fisheries observers.
- Advance the development of the observer profession.
The Conferences benefit all interested in fisheries. Whether a fishing firm or its support base, a government or scientific body, or a concerned citizen, we all have a stake in improving observer programs while reducing costs. As an example, electronics and automated systems for monitoring will play a key role in these discussions.
The oceans community is invited to consider financial support to enable observers and program managers from around the world to share their ideas and experiences. Information about the Conference and about becoming a sponsor or advertiser is available at the conference website. Participation provides exposure on the web, at the conference and in over 40 countries.
FAO Committee on World Food Security 2009
Date: 14 October 2009 - 17 October 2009
Information: The Committee on World Food Security will meet to consider reforms that will enable it to play a much more effective role in the global governance of food security.
We found 13 results on physics.org and 205 results in our database of sites (204 are Websites, 0 are Videos, and 1 is an Experiment).
Search results on physics.org
Search results from our links database
Perfectly elastic collisions are those in which no kinetic energy is lost in the collision. Macroscopic collisions are generally inelastic and do not conserve kinetic energy, though of course the ...
The site gives details of the different forms of energy (Light, Heat, Sound, Chemical, Kinetic, Potential, Nuclear) and their sources.
Notes about different forms of energy.
How waste can be transformed into energy.
An explanation of the different ways in which we can use the sun's energy.
Introduction to the concept of potential energy and a potential energy function.
Gravitational potential energy is energy an object possesses because of its position in a gravitational field.
Information about E=hf and discussion about the energy required by photons to promote electrons to higher levels.
A basic introduction to forms of energy with a brief animated movie and quiz. This site allows you to watch two movies a day before subscribing.
An introduction to the main types of renewable energy, focusing on wind, solar, wave, tidal and waste power.
Showing 31 - 40 of 205 | <urn:uuid:35e6b282-3fb5-44fb-a11a-77ae47737617> | 3.109375 | 316 | Content Listing | Science & Tech. | 47.436772 |
Mechanics: Vectors and Projectiles
Vectors and Projectiles: Audio Guided Solution
The takeoff speed of a military aircraft from an aircraft carrier is approximately 170 mi/hr relative to the air. They acquire this speed through a combination of a catapult system present on the aircraft carrier and the aircraft's jet propulsion system. A common strategy is to head the carrier and the plane into the wind. If a plane is taking off from an aircraft carrier which is moving at 40 mi/hr into a 20 mi/hr headwind, then what speed relative to the deck of the aircraft carrier must it obtain to takeoff?
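One way to set up the relative velocities (a sketch of one consistent sign convention: take the takeoff direction, into the wind, as positive, with subscripts p, a, c, g for plane, air, carrier, ground):

```latex
\begin{aligned}
v_{p/g} &= v_{p/a} + v_{a/g} = 170\ \text{mi/hr} + (-20\ \text{mi/hr}) = 150\ \text{mi/hr}\\
v_{p/c} &= v_{p/g} - v_{c/g} = 150\ \text{mi/hr} - 40\ \text{mi/hr} = 110\ \text{mi/hr}
\end{aligned}
```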
Audio Guided Solution
Habits of an Effective Problem Solver
- Read the problem carefully and develop a mental picture of the physical situation. If necessary, sketch a simple diagram of the physical situation to help you visualize it.
- Identify the known and unknown quantities in an organized manner. Equate given values to the symbols used to represent the corresponding quantity - e.g., vox = 12.4 m/s, voy = 0.0 m/s, dx = 32.7 m, dy = ???.
- Use physics formulas and conceptual reasoning to plot a strategy for solving for the unknown quantity.
- Identify the appropriate formula(s) to use.
- Perform substitutions and algebraic manipulations in order to solve for the unknown quantity.
Read About It!
Get more information on the topic of Vectors and Projectiles at The Physics Classroom Tutorial.
Return to Problem Set
Return to Overview | <urn:uuid:a40e2bac-0a11-4a01-827d-d8f1a339e0ed> | 3.296875 | 333 | Tutorial | Science & Tech. | 56.556909 |
Jan. 21, 2010 Dunes of sand-sized materials have been trapped on the floors of many Martian craters. This view shows dunes inside a crater in Noachis Terra, west of the giant Hellas impact basin in Mars' southern hemisphere.
The High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter captured this view on Dec. 28, 2009. The orbiter resumed making observations in mid-December following a three-month hiatus. A set of new images from the HiRISE camera is on the camera team's site, at http://hirise.lpl.arizona.edu/nea.php.
The dunes here are linear, thought to be due to shifting wind directions. In places, each dune is remarkably similar to adjacent dunes, including a reddish (or dust-colored) band on northeast-facing slopes. Large angular boulders litter the floor between dunes.
The most extensive linear dune fields known in the solar system are on Saturn's large moon Titan. Titan has a very different environment and composition, so at meter-scale resolution they probably are very different from Martian dunes.
The University of Arizona, Tucson, operates the HiRISE camera, which was built by Ball Aerospace & Technologies Corp., Boulder, Colo. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, manages the Mars Reconnaissance Orbiter for the NASA Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft.
First, fusion's only by-product is helium, which is not radioactive (its nucleus is small and stable) - however, after billions of dollars of research, we've failed to produce 1 kW of net power from the thing.
The risk from nuclear power plants isn't too bad if we take necessary precautions (like using deionized H2O instead of molten Na metal for the cooling process), and it's likely to be used more extensively in the future (the earth's supply of uranium is nearly limitless - not to mention the tons of radioactive materials in space).
Biomass is using alternative fuels that come from plants (mostly corn).
Solar power is clean, but not very efficient (or cheap). The most expensive part of solar energy generators, are the actual panels (or mirrors, depending on the process). Instead, I propose that it is actually possible to use plants to produce harnessable energy through photosynthesis.
The science checks out - 6H2O + 6CO2 + solar energy => C6H12O6 + 6O2 (photosynthesis). C6H12O6 + 6O2 => 6CO2 + 6H2O + free electrons (aerobic respiration).
It is possible to harvest the glucose produced by plants, and break it back down into CO2 and H2O via microorganisms. Then, the free electrons can be picked up via carbon electrodes, and voila!
But wait!!! This produces CO2!!! CNN told me that CO2 is bad!!! WTF??? - No problem. The CO2 and H2O are sent right back to the plants for more photosynthesis - thus providing a closed system with no by-products, which (according to CNN) is good. :p
"We can categorically state that we have not released man-eating badgers into the area."
-UK military spokesman Major Mike Shearer | <urn:uuid:e9b89589-a7d6-4beb-9f02-fe4786d03156> | 3.09375 | 384 | Comment Section | Science & Tech. | 58.474466 |