text
large_stringlengths
148
17k
id
large_stringlengths
47
47
score
float64
2.69
5.31
tokens
int64
36
7.79k
format
large_stringclasses
13 values
topic
large_stringclasses
2 values
fr_ease
float64
20
157
Want to stay on top of all the space news? Follow @universetoday on TwitterWe keep sending missions to Mars with the key objective to search for past or present life. But what if a huge impact early in the Red Planet’s history hindered any future possibility for life to thrive? Recent studies into the Martian “crustal dichotomy” indicate the planet was struck by a very large object, possibly a massive asteroid. Now researchers believe that this same impact may have scrubbed any chance for life on Mars, effectively making the planet sterile. This asteroid may have penetrated the Martian crust so deep that it damaged the internal structure irreparably, preventing a strong magnetic field from enveloping the planet. The lack of a Mars magnetosphere thereby ended any chance for a nurturing atmosphere… Mars looks odd. Early astronomers noticed it, and today’s observatories see it every time they look at the red globe. Mars has two faces. One face (the northern hemisphere) is composed of barren plains and smooth sand dunes; the other face (the southern hemisphere) is a chaotic, jagged terrain of mountains and valleys. It would appear the crustal dichotomy formed after a massive impact early in the development of Mars, leaving the planet geologically scarred for eternity. But say if this impact went beyond pure aesthetics? What if this planet-wide impact zone represents something a lot deeper? To understand what might have happened to Mars, we have to first look at the Earth. Our planet has a powerful magnetic field that is generated near the core. Molten iron convects, dragging free electrons with it, setting up a huge dynamo outputting the strong dipolar magnetic field. As the magnetic field threads through the planet, it projects from the surface and reaches thousands of miles into space, forming a vast bubble. This bubble is known as the magnetosphere, protecting us from the damaging solar wind and prevents our atmosphere from eroding into space. Life thrives on this blue planet because Earth has a powerful magnetic solar wind defence. Although Mars is smaller than Earth, scientists have often been at a loss to explain why there is no Martian magnetosphere. But according to the growing armada of orbiting satellites, measurements suggest that Mars did have a global magnetic field in the past. It has been the general consensus for some time that Mars’ magnetic field disappeared when the smaller planet’s interior cooled quickly and lost its ability to keep its inner iron in a convective state. With no convection comes a loss of the dynamo effect and therefore the magnetic field (and any magnetosphere) is lost. This is often cited as the reason why Mars does not have a thick atmosphere; any atmospheric gases have been eroded into space by the solar wind. However, there may be a better explanation as to why Mars lost its magnetism. “The evidence suggests that a giant impact early in the planet’s history could have disrupted the molten core, changing the circulation and affecting the magnetic field,” said Sabine Stanley, assistant professor of physics at the University of Toronto, one of the scientists involved in this research. “We know Mars had a magnetic field which disappeared about 4 billion years ago and that this happened around the same time that the crustal dichotomy appeared, which is a possible link to an asteroid impact.” During Mars’ evolution before 4 billion years ago, things may have looked a lot more promising. With a strong magnetic field, Mars had a thick atmosphere, protected from the ravages of the solar wind within its own magnetosphere. But, in an instant, a huge asteroid impact could have changed the course of Martian history forever. “Mars once had a much thicker atmosphere along with standing water and a magnetic field, so it would have been a very different place to the dry barren planet we see today.” – Monica Grady, professor of planetary and space sciences at the Open University. Losing its magnetic field after the deep asteroid impact catastrophically damaged the internal workings of the planet, Mars quickly shed its atmosphere, thereby blocking its ability to sustain life in the 4 billion years since. What a sad story… Original source: Times Online (UK)
<urn:uuid:b9afece9-df12-4944-b4a5-e5a50a7a2548>
4.1875
864
Truncated
Science & Tech.
36.890406
Portrait of Niels Bohr The Niels Bohr Archive, Copenhagen Niels Bohr was a Danish physicist who lived between 1885-1962. He studied the structure of atoms and developed a new theory about how the electrons in an atom were arranged. After helping build the first nuclear bomb, Bohr spent the later years of his life encouraging peaceful uses of atomic energy. Shop Windows to the Universe Science Store! Learn about Earth and space science, and have fun while doing it! The games section of our online store includes a climate change card game and the Traveling Nitrogen game You might also be interested in: How did life evolve on Earth? The answer to this question can help us understand our past and prepare for our future. Although evolution provides credible and reliable answers, polls show that many people turn away from science, seeking other explanations with which they are more comfortable....more Florence Bascom, who lived from 1862 until 1945, was one of the most important geologists in the United States. She studied mineral crystals by looking at them very closely with a microscope. She also...more Niels Bohr was a Danish physicist who lived between 1885-1962. He studied the structure of atoms and developed a new theory about how the electrons in an atom were arranged. After helping build the first...more Marie Curie was a physicist and chemist who lived between 1867-1934. She studied radioactivity and the effects of x-rays. She was born Maria Skłodowska in Warsaw, Poland. Women could not study then...more Albert Einstein was a German physicist who lived between 1879-1955. He is probably the most well-known scientist in recent history. Have you heard of Einstein's famous theory? It is called the theory...more Robert Goddard was an American physicist who lived between 1882-1945. He studied rockets and showed how they could be used to travel into outer space and to the Moon. Goddard experimented with different...more Werner Heisenberg was a German physicist who lived between 1901-1976. Heisenberg is most famous for his "uncertainty principle", which explains the impossibility of knowing exactly where something is...more Edwin Hubble was an American astronomer who lived between 1889-1953. He spent a lot of time looking at groups of stars and planets, called galaxies, and trying to explain their motion. He found that all...more
<urn:uuid:c9fb30af-2444-4f53-9d2e-7de1ae8d8bbe>
3.34375
507
Content Listing
Science & Tech.
59.534537
February 12, 2001 Winter Weather Event and the "Seeder - Feeder Mechanism" Background information on the February 12, 2001 Winter Weather Event. On February 12, an ice storm, consisting primarily of sleet, was expected over the RAH CWFA. The heaviest sleet was expected during the morning between 12Z and 18Z. In retrospect, the precipitation was considerably lighter and more of a mixed variety than expected due to poor model performance prior to the event. Easily lost in the larger scale, however, was a mesoscale snow outbreak, which occurred overnight prior to the main precipitation event. Upon examination, it was determined that the snow resulted from a very pronounced seeder-feeder mechanism in which low level supercooled clouds were seeded with ice crystals from above. This seeding modified the mid and low level temperature profiles, glaciated the low level clouds, and produced light snow for several hours in the foothills and northwest piedmont. Snowfall amounts were not large in RAH's CWFA, ranging from ½ to 1 inch in the northwest piedmont, and were overshadowed by the larger scale icing event which evolved shortly thereafter. It is clear, however, that the seeder-feeder mechanism alone could easily produce an "advisory-scale" event, consisting of either frozen or freezing precipitation. The purpose of this paper is to explain how the seeder-feeder mechanism might be foreseen by providing a real-life and close-to-home example for future reference. To review very briefly, the 'seeder-feeder' mechanism is the introduction of ice from above into a lower level liquid or supercooled liquid cloud. This introduction of ice provides condensation nuclei, thus initiating precipitation from the low level cloud layer. The low cloud layer may consist of 1) liquid droplets, 2) supercooled liquid droplets, or 3) if the temperature is cold enough (maximum low layer temperature of < 1C - figure 1) the cloud may glaciate, and produce snow. The resulting precipitation type is, of course, dependent upon the thermal profile from the cloud to the surface (as well as temperatures of exposed surfaces in the case of freezing rain). The icing event is not the focus of this paper, thus we will not provide an analysis of synoptic scale features leading up to the event. The seeder-feeder mechanism, being mesoscale in size, would likely not be readily discernable beyond 24 hours. We will thus narrow our analysis to a time frame beginning 18 hours prior to the onset of precipitation at 12Z on the 11th, and concentrate mainly on forecast soundings and observed data. Initial Mesoscale Conditions at 00Z - 2/12/01 From the attached zone forecast excerpt (figure 2), we were expecting the precipitation (sleet) to begin towards morning as late as the evening update on Sunday night, February 11. At sunset, western NC was blanketed by a uniform deck of stratocumulus based from 5 to 6 thousand feet, while an altocumulus deck around 10 thousand feet was approaching from the southwest. (figure 3). The 00Z Greensboro sounding (figure 4) was quite dry, and a significant portion of the sounding was warmer than 0C below 10 thousand feet. Of particular note is a warm (> 0) 2 to 3 thousand foot thick layer at the mid-levels (790-710 mb, ~7 to 10 thousand feet). The 00Z radar composite (figure 5) showed a narrow band of light rain over western SC, where Anderson was the only METAR site reporting rain. The rain was evidently in response to a small vorticity max, depicted on the 00Z Mesoeta over the Greenville-Spartanburg area, which was forecast to move quickly northeast across western NC. Temperatures at 04Z across the area were in the 40 to 45 degree range (figure 6) and, given significant precipitation amounts, would fall rapidly due to diabatic cooling as dewpoints were in the teens. This would produce wet bulb temperatures in the 30 to 35 degree range. What Occurred Overnight The light rain moved into the southern piedmont near Charlotte shortly before midnight. This initial surge of precipitation was not expected to be significant, as evidenced by the lack of its inclusion in the evening update. The precipitation's early arrival would not appear to pose a problem for the forecast of sleet, as diabatic cooling would quickly lower surface temperatures to freezing or below as the rain moved north of Charlotte into the foothills and northwest piedmont. Meanwhile, warm air advection, as evidenced by the veering wind profile at GSO (figure 4) and temperature profiles from downstream sites at Nashville and Atlanta (figure 7), would reinforce and even strengthen the warm air already in place at the mid-levels. We could then expect a changeover to primarily sleet as forecast. Catching us somewhat by surprise, the precipitation began to fall as snow (figure 8). Observations at Hickory and Statesville showed the precipitation beginning at 05Z as rain, then changing to snow by 06Z and remaining snow until around 09Z. At Greensboro, the precipitation began as snow at 08Z and continued until around 1030Z when it changed over to freezing rain, which lasted through the afternoon. The snow accumulated up to an inch in the northwest piedmont and foothills. Diagnosing the 'Seeder-Feeder' Mechanism Responsible for the Snow The GSO sounding at 00Z shown in figure 4 points towards a sleet/freezing rain scenario, as it exhibits an 80mb deep warm layer aloft (790 - 710 mb) as well as the potential for significant diabatic cooling at the surface. If we go beyond a cursory glance, however, there are a couple of subtle details to note as well: 1) The warm layer, while deep, is only 1 to 2 degrees above freezing and very dry - thus we should expect diabatic cooling here as well; 2) we see the initial hint of mid-level moisture at 560 mb, just above the -10C isotherm (snow formation likely); 3) the maximum warm layer temperature (note - the warm layer is the lower cloud layer, not the entire boundary layer) is around -2C at Upstream soundings at Nashville and Atlanta in figure 7 show that the mid- level moisture which will be arriving in the CWFA has a base around 750 mb (~ 8 kft) and is extremely deep. Notice also that the -10C isotherm is around 550 mb, so little thermal advection will occur at this level downstream at GSO. 06Z GSO Sounding At the 06Z sounding (figure 9), the mid-level moisture has deepened and now has a base below 600 mb, with the vast majority of the layer colder than -10C. The mid-level melting layer has cooled to less than 0C as well, and only the surface layer remains At this point, the low level cloud has glaciated and snow began to fall in GSO shortly before 07Z. METAR observations of ceilings indicate that the mid-level altocumulus deck had progressed to GSO by 06Z. A few hours prior, rain had begun in the Charlotte area around 04Z, shortly after the arrival of the mid-level cloudiness. It is apparent that the mid-level cloudiness was responsible for seeding the clouds below, thus causing the unexpected surge of precipitation prior to the principal, more widespread precipitation. Approximately one hour separated the arrival of the mid-level cloud deck and the onset of snow at GSO. Comparison of Observed Soundings and ETA Forecast Soundings Forecast soundings from the ETA model's 12Z run on February 11 and 00Z run on February 12 were analyzed and compared to the actual soundings that were taken during the event. The presence of a warm nose, any above freezing layers, low level winds, and dry layers were noted. The 12Z model sounding initialized well with a warm nose between 850 and 700mb and little moisture in the sounding (not shown). At 18Z (figure 10), the model sounding correctly showed increasing moisture, especially above 400mb, and continued to show a deep above-freezing layer. The model sounding did not show the stratocumulus deck at Figure 11 compares the ETA 12 hour forecast sounding valid at 00Z with the observed sounding at GSO. The observed sounding showed that the upper level moisture (above 400mb) continued to increase, with a thin moist layer evident at 560mb. This layer was the leading edge of the mid-level cloud deck that was approaching from the southwest. The model sounding underestimated the moisture above 400mb, and did not pick up on the approach of mid-level cloud deck from the southwest at 560mb. The observed sounding showed the presence of a stratocumulus deck at 800mb, while the model had a 15 degree dewpoint depression at this level. A dry 2°C warm nose remained between 800 and 700mb in the forecast sounding. This compared well to the actual 00Z sounding, although the model greatly underestimated the amount and depth of the dry air from 800 to 600mb. These model inaccuracies had large implications on its p-type forecast. The observed cloud layer that was approaching from the southwest near 560mb, which was not depicted in the model sounding, began to precipitate into the dry warm nose between 00Z and 06Z. This caused evaporative cooling in that layer, and reduced the temperature of the warm nose to near freezing isothermal. Because the model underestimated the amount of dry air that the precipitation was falling into (between 800 and 600mb), the model underestimated the potential for evaporative cooling in that layer. The precipitation from the mid-level clouds proceeded to seed the stratocumulus deck at 810mb with ice, which later produced snow. Since the model did not account for the stratocumulus deck, the mid-level cloud layer, or the very dry air between 800 and 600mb, it did not correctly portray the seeder-feeder process. The 6 hour forecast sounding valid at 06Z is shown in figure 12. At this time, the model did pick up on the stratocumulus deck near 810mb and the precipitating mid-level cloud layer. At 06Z, the actual sounding (figure 9) showed an isothermal layer at 0°C between 750 and 700mb with the rest of the sounding below freezing, except at the surface. However, the model showed that the layer was not near-freezing isothermal; it remained near 2°C. Because the model forecast the layer to be too warm (underestimated the amount of evaporative cooling), the model p-type forecast of sleet and freezing rain at 06Z was incorrect. A warm nose between 1°C to 3°C warm nose will produce a snow/sleet mix if ice is introduced, and freezing rain if ice is not introduced (see figure 1, which was taken from the VISIT training session "P-Type Forecasting - The Top-Down Approach"). Since the ETA did not account for the mid-level cloud layer that introduced ice into the stratocumulus deck between 00Z and 06Z, the model did not account for ice seeding into the lower cloud layer early enough, and freezing rain was the predominant p-type that was forecast by the model. Evaluation of partial thickness scheme and TRENDS The 1000-850mb thicknesses and 850-700mb thicknesses were computed from the actual soundings and compared to the ETA model forecast thicknesses. The ETA forecast thicknesses were taken from the model run closest to that time. The partial thicknesses from the observed soundings and the forecast soundings were plotted on a partial thickness nomogram. The observed thickness nomogram correctly portrayed snow changing to mainly freezing rain and a little sleet. However, the model forecast thicknesses painted a different p-type scenario - one that was dominated by only freezing rain and sleet. Very slight differences in the observed and forecast soundings led to the inaccurate model forecasts. The following chart compares the actual partial thicknesses and their corresponding p-type based on the nomogram with the model forecast thicknesses and ||Observed p-type from ||ETA forecast p-type from ||Mostly FZRA, trace PL ||Mostly FZRA, trace PL| ||Mostly FZRA, trace PL ||Mostly FZRA, trace PL| ||Snow if isothermal near ||Mostly FZRA, trace PL| ||Mostly FZRA, trace PL ||Measurable PL w/ FZRA| Figure 13 shows the low level thicknesses from the observed soundings plotted on a nomogram. The corresponding ETA model thicknesses were plotted on a separate nomogram, which is shown in figure14. Note that at 18Z, the ETA overestimated the thickness of the above-freezing layer and underestimated the low level thickness. Although the model correctly lowered the thickness in the above-freezing layer at the initialization time of 00Z, it was still too strong compared to reality. From 00Z to 06Z, the observed 850-700mb thickness decreased slightly, from 1553m to 1549m, placing the sounding in the "snowy nose" category of the nomogram. In this area of the nomogram, the p-type will be snow if a near-freezing isothermal layer is present; otherwise freezing rain and sleet will be the dominant p-type. In the observed sounding at 06Z, a near-freezing isothermal layer was present, and snow was able to reach the ground. At the same time, the model forecast 850-700mb thickness did not change, while the 1000-850mb thickness dropped by 11m. A near-freezing isothermal layer was not present in the 06Z forecast sounding. Instead, the ETA forecast sounding showed a 2°warm nose between 800 and 700mb. This overestimation of the above-freezing layer led to an incorrect model p-type forecast of mostly freezing rain with a trace of sleet. The model overestimated the amount of low-level cold advection that was occurring and overestimated the temperature of the above-freezing layer, leading to a freezing rain and sleet forecast. Between 06Z and 12Z, the model forecast 850-700mb thickness dropped 5m, while the observed drop in 850-700mb thickness was between 00Z and 06Z. The model may have been too slow to forecast evaporative cooling in this layer, which was due to the absence of the mid-level clouds that approached GSO at 00Z and precipitated into the above- freezing layer. The partial thickness nomogram performed very well in predicting p-type, even though the thickness changes were very subtle. While the ETA forecast thicknesses were close to the observed thicknesses, they were off just enough to cause an incorrect p-type forecast. The model underestimated the effects of evaporative cooling in the warm nose layer because it did not account for the presence of the mid-level clouds, which precipitated into the layer and seeded the stratocumulus deck. The feeder-seeder mechanism can be diagnosed by carefully examining the entire depth of the soundings. Very small changes in the vertical temperature profile can profoundly affect the p-type, and models often cannot resolve these details. Therefore, observed data must be analyzed intensively to improve the forecast. Comparing the differences between the observed sounding and the forecast sounding is critically important to determine when the model is deficient or accurate. The presence of any cloud layers or dry layers that the model is not accounting for must be carefully considered when making a p-type forecast. This sort of event is not particularly difficult to diagnose beforehand - IF one is familiar with the mechanism responsible and thoroughly analyzes the observed and forecast soundings accordingly. This case study was drafted for such a purpose - to show that the seeder-feeder mechanism does exist and can cause potentially significant precipitation type problems, and that it can be forecast, at least in the short term.
<urn:uuid:212faeb0-a9cf-4ab3-8a40-5eb59806f057>
2.984375
3,596
Academic Writing
Science & Tech.
42.183361
Previous Article: Doubly Linked List Next Article: Deletion of a Node from Doubly Linked List. Insertion of a NodeBefore we discuss about how to insert a NODE let us discuss few rules to follow at the time of insertion. - Check the location into which the user want to insert a new NODE. The possible locations where an user can insert a new node is in the range of 1 <= loc <= (length of list)+1. Let us say the length of the list is 10 & the user want to insert at location 12 (sounds stupid). - As we know we can traverse Bi-Directional in case of Doubly Linked Lists so we have to take care of PREV and NEXT variables in the NODE structure. We should also update the neighboring Nodes which are affected by this operation. If not we might break up the List somewhere or the other by creating a BROKEN LIST. We have following scenarios in the case of insertion of a NODE. - Adding a Node at the start of the Empty List |Figure 1: Empty List and the newNode we want to add| - In HEAD - FIRST variable points to newNode (head->FIRST = newNode). - In newNode - NEXT and PREV points to NULL as we don't have any other Nodes in the List. - Increment the LENGTH variable in HEAD once insertion is successful to maintain the count of number of Nodes in the List. HEAD->FIRST = newNode newNode->PREV = NULL newNode->NEXT = NULL increment(HEAD->LENGTH) |Figure 2: After adding newNode in Empty List. (Changes in BLUE)|
<urn:uuid:ddd92b1b-5fe8-43df-9ae5-975b5d164813>
3.421875
359
Tutorial
Software Dev.
58.896475
See also the Dr. Math FAQ: 0.9999 = 1 0 to 0 power n to 0 power 0! = 1 dividing by 0 Browse High School Number Theory Stars indicate particularly interesting answers or good places to begin browsing. Selected answers to common questions: Infinite number of primes? Testing for primality. What is 'mod'? - Prime Number Theorems [01/03/1999] Can you explain the prime number theorem, Mersenne primes, the Lucas- Lehmer test, and the Riemann Hypothesis? - Prime Proofs [10/08/2002] If a^(n) - 1 is prime, show that a=2 and that n is a prime. If a^(n) + 1 is a prime, show that a is even and that n is a power of 2. - Primes and Repeating Unit Numbers [12/09/1998] How do you prove this statement: For every prime number there exists a repeated unit number that is a multiple of that prime. - Primes and Squares [05/03/2001] For what values of prime number p is (2^(p-1)-1)/p a perfect square? - Primes Containing but Not Ending in 123456789 [02/26/2003] Are there infinitely many primes that contain but do not end in the block of digits 123456789 ? - Primes Greater Than/Less Than Multiples of Six [01/18/2002] Has the postulate stating that every prime number is either one more or one less than a multiple of six, excluding 2 and 3, been proven? - Primes in the Form n^2 + 1 [04/06/2003] Let n be a positive integer with n not equal to 1. Prove that if n^2 + 1 is a prime, then n^2 + 1 is expressible in the form 4k + 1 with k in the - Primes of the Form 4n+3 [11/07/1999] Prove that there are infinitely many primes of the form 4n+3 where n is an element of the natural numbers. - Primes: p+1 a Multiple of 6? [10/06/2002] Prove that if p and p+2 are both prime, then p+1 is divisible by 6. Completely stuck on what to do. - Primes that are Sums of Primes [06/22/1999] Is there an nth prime number, p, (other than 5, 17 and 41) that is equal to the sum of the prime numbers up to n? For example, the 7th prime is - Primes That Are the Sum of 2 Squares [09/17/1999] How can I prove that every prime of the form 4m + 1 can be expressed as a sum of two squares? - Prime Triplet [12/07/2001] The consecutive odd numbers 3,5,7 are all primes. Are there infinitely many such 'prime triplets'? - Primitive Elements vs. Generators [05/24/2002] Prove that x is a primitive element modulo 97 where x is not congruent to 0 if and only if x^32 and x^48 are not congruent to 1 (mod 97). - Primitive Pythagorean Triples [02/23/1998] Given a triple of numbers (a, b, c) so that a, b, and c have no common factors and satisfy a^2+b^2 = c^2, make a guess about when a, b, or c is a multiple of 5. - Primorials [10/15/2003] We know that p_1 * p_2 * ... * p_n + 1 is either prime or divisible by a prime not included in the list. But is the second condition necessary? Is the result ever not prime? - Probability of Divisibility [06/18/2002] What is the probability that a randomly selected three-digit number is divisible by 5? - Probability of Random Numbers Being Coprime [08/12/1997] I have heard that the probability of two randomly selected integers being coprime is 6/(pi^2). How do you show this is true? - Problem Posed by Fermat [05/04/2001] Find a right triangle such that the hypotenuse is a square and the sum of the two perpendiculars, or indeed of all three sides, is also a square... - Product Always an Even Number? [03/17/2002] The letters a1, a2, a3, a4, a5, a6, a7 represent seven positive whole numbers; b1, b2, b3, b4, b5, b6, b7 represent the same numbers but in a different order. Will the value of the product (a1-b1)(a2-b2)(a3- b3)(a4- b4)(a5-b5)(a6-b6)(a7-b7) always be an even number? - Product and Sum of Digits = Number [10/24/2001] How many two-digit numbers exist such that when the products of their digits are added to the sums of their digits, the result is equal to the original two-digit number? - Product of Primes [02/27/2002] Can you provide me with the proof that every non-zero positive integer can be written as a product of primes? - Product of Two Primes [10/27/1999] How many positive integers less than 100 can be written as the product of the first power of two different primes? - Products of Integers (Even or Odd) [03/23/2002] How can I prove that the product of two even integers is an even integer and the product of two odd integers is an odd integer? - Programs to Find Prime Numbers [11/27/1996] Can a program be written in BASIC to compute the number of prime numbers smaller than n? - Program to Convert Number Bases [07/12/1999] Is there an easier method for converting bases than dividing and collecting the remainders? I want to write a computer program to do this. - Proof by Contraposition [03/06/2002] How can I prove that n^6 + 2n^5 - n^2 - 2n is divisible by 120? - Proof by Induction [05/24/2002] Prove by induction that (n^7 - n) is divisible by 42. - Proof by Mathematical Induction [09/24/1999] Prove the following statement by mathematical induction: for any integer n greater than or equal to 1, x^n - y^n is divisible by x-y where x and y are any integers with x not equal to y. - Proof Involving Legendre Symbol [02/03/2003] If p, q are both prime odd numbers such that they are not factors of a, and p=q(mod 4a), prove that (a/p)=(a/q). - Proof Involving mod 5 [10/27/2002] Prove n^2 mod 5 = 1 or 4 when n is an integer not divisible by 5. - Proof of Lagrange's Theorem [11/23/2000] I am looking for a proof of Lagrange's Theorem, which states that any positive integer can be expressed as the sum of 4 square numbers. - Proof of the Infinite Series That Calculates 'e' [02/04/2004] Is there a proof about this infinite series that gives the value of e: 1 + 1/1! + 1/2! + 1/3! + 1/4! + . . . + 1/n! where n goes to infinity? - Proof of the Rational Root Theorem [11/13/2000] How can I prove the Rational Root theorem? - Proof Regarding LCM [12/05/2001] Is there a proof of the equation: given integers a and b, a*b = GCF(a,b) - Proof That 0/0 = 1 Based on x^0 Equaling 1? [02/02/2006] I know the reason x^0 = 1 is because 1 = (x^3)/(x^3) = x^(3-3) = x^0. I also know that 0/0 doesn't make sense, but following a similar argument for 0^0 gives 1 = (0^3)/(0^3) = 0^(3-3) = 0^0. But since (0^3) = 0, haven't I just shown that 0/0 = 1? - Proof That 2 Does Equal 1! [03/24/1997] I came up with a proof that 1 = 2. Where does my math go wrong? - Proof that an Even Number Squared is Even [06/02/1999] How do you prove that any even number squared is even and any odd number squared is odd? - Proof That Equation Has No Integer Roots [05/09/2000] How can I prove that if p is a prime number, then the equation x^5 - px^4 + (p^2-p)x^3 + px^2 - (p^3+p^2)x - p^2 = 0 has no integer roots? - Proof That Product is Irrational [03/28/2001] How can I prove that the product of a non-zero rational number and an irrational number is irrational without using specific examples? - Proof That sin(5) is Irrational [04/24/2001] How do you prove that sin(5) is an irrational number?
<urn:uuid:e41d70b9-2e69-40da-a91f-b6e9400d15a1>
2.953125
2,195
Q&A Forum
Science & Tech.
86.450387
Grouping constructs allow you to capture groups of subexpressions and to increase the efficiency of regular expressions with noncapturing lookahead and lookbehind modifiers. The following table describes the Regular Expression Grouping Constructs. |( )||Captures the matched substring (or noncapturing group; for more information, see the ExplicitCapture option in Regular Expression Options). Captures using () are numbered automatically based on the order of the opening parenthesis, starting from one. The first capture, capture element number zero, is the text matched by the whole regular expression pattern.| |(?<name> )||Captures the matched substring into a group name or number name. The string used for name must not contain any punctuation and it cannot begin with a number. You can use single quotes instead of angle brackets; for example, | |(?<name1-name2> )||Balancing group definition. Deletes the definition of the previously defined group name2 and stores in group name1 the interval between the previously defined name2 group and the current group. If no group name2 is defined, the match backtracks. Because deleting the last definition of name2 reveals the previous definition of name2, this construct allows the stack of captures for group name2 to be used as a counter for keeping track of nested constructs such as parentheses. In this construct, name1 is optional. You can use single quotes instead of angle brackets; for example, | |(?: )||Noncapturing group.| |(?imnsx-imnsx: )||Applies or disables the specified options within the subexpression. For example, | |(?= )||Zero-width positive lookahead assertion. Continues match only if the subexpression matches at this position on the right. For example, | |(?! )||Zero-width negative lookahead assertion. Continues match only if the subexpression does not match at this position on the right. For example, | |(?<= )||Zero-width positive lookbehind assertion. Continues match only if the subexpression matches at this position on the left. For example, | |(?<! )||Zero-width negative lookbehind assertion. Continues match only if the subexpression does not match at the position on the left.| |(?> )||Nonbacktracking subexpression (also known as a "greedy" subexpression). The subexpression is fully matched once, and then does not participate piecemeal in backtracking. (That is, the subexpression matches only strings that would be matched by the subexpression alone.)| Named captures are numbered sequentially, based on the left-to-right order of the opening parenthesis (like unnamed captures), but numbering of named captures starts after all unnamed captures have been counted. For instance, the pattern ((?<One>abc)/d+)?(?<Two>xyz)(.*) produces the following capturing groups by number and name. (The first capture (number 0) always refers to the entire pattern). |0||0 (default name)|| | |1||1 (default name)|| | |2||2 (default name)|| |
<urn:uuid:9b30ec7c-bd9b-475c-abf3-a52752c405b4>
2.96875
676
Documentation
Software Dev.
39.546553
The basic results of the recent efforts concerning observations of gas deficiency in cluster spirals can be summarized as follows: After 15 years of effort, the cluster samples are significant but not terribly large, and statistical studies are still plagued by small numbers, especially when one is trying to investigate individual variables for which subsamples must be compared. It is also important to keep in mind that specific objects - whether single galaxies within a cluster or individual clusters - may not be representative of the universe at large. There are several additional points that best illustrate our uncertainties: The future offers us a number of opportunities to follow the study of gas deficiency. The study of the HI content of cluster spirals and especially of early type objects will be greatly enhanced by the increases in sensitivity gained by the Arecibo gregorian feed upgrade and by the construction of the new Green Bank telescope. Aperture synthesis observations of both HI and CO in additional galaxies in Virgo, in Hydra and in other nearby clusters are vital to our understanding both of the sweeping mechanism and its effect on the conversion of gas to stars in galaxies. Further clues to the star formation process will be sought through careful studies of indicators such as H emission with the spatial dimension included. Possible environmental influences on the dark matter distribution are critical both to our understanding of galaxy formation and to application of the Tully-Fisher relation to obtain the distance scale. The Hubble Space Telescope will contribute enormously to our ability to study galaxies in clusters at higher redshifts when we know galaxies were not all like the objects we see in clusters today. Of particular relevance will be the continued study of the blue cluster objects seen at z > 0.3. Are they spirals falling into the core and suffering HI depletion both by induced star formation and sweeping? Based on the indirect evidence of the observed HI deficiency in cluster spirals, it now seems well established that selected spirals that pass through the cores of rich clusters lose significant portions of their cool interstellar gas. We are led to return to the long-standing debate over whether stripped spirals are responsible for the S0 class. Based on the usual arguments about the occurrence of S0's in the field and the fundamental differences in the dominance of disk and bulge components (Dressler 1980), it does not seem likely that all S0's are stripped spirals. The loss of nearly all of the HI gas, despite the retention of the molecular component, must affect the galaxy's future evolutionary path. It still remains difficult to see how one could turn an Sc into an S0, but the possibility that the early type spirals may preferentially evolve towards the S0 class because they follow radial orbits (Dressler 1986) is intriguing. de Freitas et al. (1985) have noted already the tendency for cluster S0's to have flatter axial ratios than field ones, implying a contribution to the S0 population of stripped spirals. In comparing the morphology-density relation in clusters with high and low X-ray luminosity separately, GH85 have noted a decrease in the population of spirals and a corresponding increase in the population of S0's, for the same galaxy density, in the clusters with high X-ray luminosity. While we can recognize candidates for stripping and plausible galaxy-ICM interactions that could result in adequate gas removal, the same stripping mechanism(s) cannot be responsible for the S0's seen in less dense regions. Hence, we conclude that there are effective mechanisms for environmentally-driven galaxy evolution in operation in cluster cores containing a hot, healthy ICM, but the regimes of density within which such mechanisms could be of significance in enhancing the morphological segregation represent only a small fraction of the volume of the universe. I thank R. Giovanelli, T. L. Herter and M. S. Roberts for many discussions on the continuing questions about stripping. My talk was greatly aided by the contribution of unpublished data by L. Cayatte, C. Balkowski and J. van Gorkom and V. Rubin. This work has been supported in part by NASA-JPL contract no. 957289. The study of Sa and Sc galaxies has been conducted in collaboration with C. Magri as part of his dissertation research at Cornell University.
<urn:uuid:6db94681-7f15-45fc-b71f-d7cbdd6cd952>
3.203125
864
Academic Writing
Science & Tech.
37.967625
The Perseid meteor shower happens every year around mid-August. This year's peak nights will be during the early morning hours of August 12th and 13th. During the shower you'll be able to see up to 60 meteors per hour. What is a meteor shower? A meteor is a tiny dust particle that a comet sheds. These dust particles remain along the orbit of the comet for many years. When the Earth comes close to - or plows through - the comet dust debris, dust particles fall into the Earth's atmosphere and burn up completely. We see the break up of the dust particles in streaks of light that we call meteors, which are also commonly referred to as "shooting stars" -- although this term is technically inaccurate. Comet dust particles are typically smaller than the size of a pea and burn up in the Earth's atmosphere about 70 km above the surface. Why is the meteor shower called Perseid? If you watch the meteor streaks carefully, it appears as though you could trace them back to a point in the sky where they originate. In reality this is an optical illusion, because all meteors travel in a parallel direction in space. The meteor shower is called "Perseid" because its apparent point of origin, or the radiant point, is in the constellation Perseus. What's the best time to view the 2012 Perseid meteor shower? The best time to view the meteor shower is when its radiant point appears to be high in the sky, during the early morning hours. The shower is forecasted to peak on August 12th, but you'll be able to catch the show on August 13th as well. You'll also be able to see a few Perseid meteor showers a few days before and after this time. How can I view the meteor showers? You won't need binoculars or a telescope to see the meteors. In fact, your eyes will be the best viewing instruments! Try to choose a viewing location with little direct light in the surrounding. Make yourself comfortable and look towards the sky, directly opposite from the the full moon. Summer nights can get cool, so bring along an extra blanket. And don't forget your insect repellent!
<urn:uuid:74aa5029-f443-498f-bf6c-1636d5f82678>
3.53125
466
Knowledge Article
Science & Tech.
67.366353
Mapping The Genomes Of Crocodiles And Alligators - It's Not For The Faint Of Heart! David Ray never turns his back on his research, and with good reason! Ray and his team study alligators, crocodiles, and bats, among other creatures. With support from the National Science Foundation (NSF), this multidisciplinary team from several universities is mapping crocodile and alligator genomes. Reptiles resembling these have existed for around 80 million years and they are among the first reptiles to have their DNA sequenced. The research could expand our knowledge well beyond crocodilians to other reptiles, birds, and even dinosaurs. When they’re not fishing for ‘crocs’ and ‘gators,’ Ray’s team might be tracking down bats for their research on transposable elements or so-called ‘jumping genes.’ These genes can copy themselves and literally jump around in a DNA sequence. Better understanding of them could lead to improved genetic therapies. Provided by the National Science Foundation More Science Nation videos
<urn:uuid:ef7b2b34-87b1-4020-a58d-ff7a0211e785>
3.40625
221
Truncated
Science & Tech.
39.922201
Browse Other Climate Resources Here are additional resources relating to aspects of Climate Change from the SERC catalog. These resources have not been reviewed by the Climate Change Collection review team. : Greenhouse Effect [link browse_local.html?search_text=carbon cycle&search 'Carbon cycle']| [link browse_local.html?search_text=sun solar cycle 'Solar cycle'] Refine the Results Results 41 - 50 of 1665 matches Paleoclimatology Education and Outreach part of SERC Web Resource Collection This NOAA website offers a collection of links to paleoclimate (past climate) information and data. Links are organized by topic, which include: highlights from the paleoclimatology program; Ocean ... Grade Level: High School (9-12), Middle (6-8), General Public, Graduate/Professional, College Lower (13-14), College Upper (15-16), Informal Bering Land Bridge Virtual Visitor Center part of SERC Web Resource Collection This resource contains information about the location, geology, flora, and fauna of the Bering Land Bridge National Preserve, located on the Seward Peninsula in northwest Alaska. Global Warming Facts and Our Future part of SERC Web Resource Collection This virtual museum website provides easily understood scientific information that helps both policy makers and the public answer important questions about the changing global climate in order to ... Resource Type: Audio:Sound Grade Level: Intermediate (3-5), High School (9-12), Middle (6-8) Energy and the Environment part of SERC Web Resource Collection This website examines the issues related to balancing America's energy needs with concerns about safety and the environment. This site features articles and teaching materials from the New York Times ... Listening to the 2004 Indian Ocean tsunami quake part of SERC Web Resource Collection This resource is an abstract. This study tracks the movement of the rupture that caused the December 26, 2004 Indian Ocean tsunami by comparing recordings of sound waves from five sensors located ... How volcanic eruptions cause tsunamis part of SERC Web Resource Collection This study investigates the effect of pyroclastic flows on tsunami generation. The authors analyzed several possible mechanisms that occur when the particle rich flows encounter water and conclude ... Using GPS for earthquake imaging part of SERC Web Resource Collection This resource provides an abstract. The authors used a dense array of Global Positioning System (GPS) stations to model how the Earth slipped during the 2003 8.0-magnitude Tokachi-Oki earthquake near ... Australia's Biodiversity- Extinction part of SERC Web Resource Collection This website from the Australian Museum is devoted to understanding the current crisis in biodiversity in Australia, and what can be done to slow the tide of extinction. Several pages address both ... The Permo-Triassic Extinction part of SERC Web Resource Collection This website contains a collection of pages concerning different aspects of the Permo-Triassic extinction, the largest mass extinction of all time. The pages address different scenarios for the ...
<urn:uuid:6e708047-7b01-4526-9ac0-7bb8687ad4db>
2.828125
639
Content Listing
Science & Tech.
36.995012
Since its discovery in 1801, Ceres (right) has been classified as a planet, an asteroid and a dwarf planet. It is the nearest dwarf planet to our sun. Dawn traveled 2.8 billion kilometers (1.8 billion miles) to get from Earth to Vesta. It will travel another 1.6 billion kilometers (990 million miles) to get to Ceres. Dawn used Mars' gravity to give it a boost to the main asteroid belt. NASA's Dawn spacecraft on July 16 became the first probe ever to enter orbit around an object in the main asteroid belt between Mars and Jupiter.
<urn:uuid:edbfc9e1-9c68-4177-ab98-ec818838c284>
3.421875
125
Knowledge Article
Science & Tech.
72.887222
Josephson received the 1973 Nobel Prize in Physics for discovery of the Josephson effect, which occurs in two superconducting layers separated by an insulating oxide. Under certain conditions current can pass through the insulator through tunneling of Cooper pairs of electrons. The effect has been used to design superconducting quantum interference devices (SQUID) because switching is very fast, in the order of picoseconds. Tunneling in the Josephson junctions is very sensitive to magnetic fields and can therefore be used to measure extremely small magnetic fields with thresholds as low as Tesla. Josephson junctions are also used for other precision measurements. The standard volt is now defined as the voltage required to produce a frequency of 483,597.9 GHz in a Josephson junction oscillator. A schematic diagram of a circuit with a Josephson junction is shown below. The quantum effects can be modeled by the Schrodinger equation, but it turns out that the circuit can also be modeled as a system with lumped parameters. Let be the flux that is the integral of the voltage across the device, It follows from quantum theory [Feynman, 1970] that the current through the device is a function of the flux : where is a device parameter, and the Josephson parameter is given by The circuit in the figure has two storage elements: the capacitor and the Josephson junction. We choose the states as the voltage across the capacitor and the flux of the Josephson junction. Let , and be the currents through the resistor, the capacitor and the Josephson junction. We have and a current balance gives which can be rewritten as Combining this equation with equation for gives the following state equation for the circuit Notice that apart from parameter values, this equation is identical to the equation for the inverted pendulum.
<urn:uuid:a3d380a3-f81a-4a21-92ba-249ff5aa55ff>
3.84375
370
Knowledge Article
Science & Tech.
33.388882
The Power Behind The Throne Peter J. D'Adamo, ND, MIFHI Although just about everyone knows something about DNA, I’d like to take a few moments to introduce you to RNA, the real power behind the throne. Protein represents what biologists call phenotype – the living, breathing, metabolizing part of life. DNA is information. Other than acting as a blueprint and occasionally remembering to replicate itself, it doesn’t have a single real world obligation. It is RNA that acts as the bridge between DNA and protein, translating the message of DNA into the reality of proteins. All the basic functions of the cell require RNA. Copies of the desired DNA gene message are first copied onto one type of RNA, which is then read by a machine composed in part by some more RNA to create proteins by linking amino acids which are delivered by another type of RNA. Let’s start the second part of our story with the sweet, if short life of Messenger RNA, or mRNA. At a certain point in its life, the cell may get an urge to make some sort of protein or enzyme. Let’s say that you have developed an untidy habit, like smoking cigars. As anyone who has ever tried one can tell you, the first experience with nicotine is usually far from pleasant, with dizziness and nausea the usual end result. This reaction occurs because the new smoker has yet to habituate himself to the poisons in the cigar and has not yet developed a way to detoxify and break them down. Over time the continued smoking of cigars sends an environmental message to cells of the liver telling them that they need to make higher levels of the enzymes used to detoxify tobacco toxins. This message (“hey, he’s trying to kill us out there!”) travels to the cell nucleus, where special machinery locates the section along the DNA that contains the gene to produce these detoxifying enzymes, snips it open and unravels that part of the DNA to expose the blueprint. At that point an enzyme called RNA polymerase comes along, reads the DNA code and makes an RNA copy by linking together similar building blocks (a stretch of RNA is similar to DNA except that RNA is almost always single-stranded and uses the nucleotide Uracil instead of Thymine). This is called “transcription” and just like a court stenographer transcribes the court proceedings, so RNA transcripts the proceeding necessary to make a protein. The RNA strand, called Messenger RNA, (mRNA) is then extensively primped and tweaked to clean it up and get it just right. From here it is about to embark on the ride of its life. Once everything is set to go, the mRNA is shot through the one of the many pores which act as gates between the cell body and the nucleus. Once out into the cell proper it is carried to the real workhorses of protein synthesis, the ribosomes. Using a railroad analogy, you can think of a ribosome as a dispatcher in the rail yard, whose job it is to assemble an entire freight train. Each time the phone rings the dispatcher gets his next order: “Fetch the Baltimore and Ohio flatbed with the Honda Hybrids on it. Attach it to the Union Pacific 3985 locomotive.” “Next, locate and attach the milk tanker from Happy Cow Farms.” And on and on, until you have one of those interminably long freight trains that take twenty minutes to pass by the railroad crossing as you desperately try to get to the airport. Just like our rail dispatcher, ribosomes get the information from messenger RNA, by zipping along the code like an old fashioned ticker-tape, reading the code called 'codon triplets' to determine which amino acid to fetch, then linking that amino acid to the prior one, and fetching the next instruction, etc. until it gets a stop message. In this job the ribosome is assisted by a different type of RNA called Transfer RNA which acts like a crusty old rail yard worker, bringing the appropriate amino acid to the ribosome. At some point the protein is finished up and released, and the messenger RNA decomposes back to the basic building blocks of DNA and RNA, called nucleotides, and ready to do it all over again. From there the sky is the limit. Proteins are interesting in a lot of ways but perhaps most interesting in their folding tendencies, a molecular origami if you will. Depending on the amino acid sequence and length proteins will fold into a myriad number of complex three dimensional shapes, and it is these shapes that give them their unique powers over the environment. For example a protein of a certain shape may function as an enzyme, taking sugar molecules and attaching them together, turning single sugars onto cellulose, an important dietary fiber. The protein that results from our string of amino acids might be an insulin molecule, helping to control the owner’s blood sugar, or even a protein that helps DNA do its job, perhaps even part of another ribosome! As I said, the sky is the limit. The RNA Queen is so basic to life that many scientists think that perhaps life originated with it, and not with DNA: That DNA came along later as a way to 'memorialize' the work of RNA.
<urn:uuid:e3d75442-1189-4c32-b265-7a5d0a75c29f>
2.78125
1,109
Personal Blog
Science & Tech.
45.649526
Scientists are building a DNA library of the entire planet, a vast index of everything that's alive (including some things that might soon not be). But a recent study cautions researchers to be careful, as a scanning mistake here could have more severe consequences than getting overcharged at the checkout. The International Barcode of Life is a proposal to index every singe organism utterly and unequivocally (they've got nearly half a million already) - instead of depending on ornithologist's arguments about exactly what constitutes a certain type of beak, every lifeform would be indexed according to a specified section of DNA. Just as you don't need to read an entire book to tell it apart from others, these DNA barcodes would be shorter than (but just as unique to the lifeform as) the full DNA sequence. Plus you don't need the whole animal to do it, which is an ecological plus - a small skin, spit or stalk sample will be enough for identification. Killing endangered species to record them would kind of miss the point. With an established global effort, this ID-process would be cheaper and available to far more researchers worldwide, enabling entirely new investigations in the ecosphere. But a study at the Brigham Young University warns that when you're typing in the Index of Everything, you want to be careful with that keypad. Existing techniques which target agreed on sections of mitochondrial DNA can be misled by broken sections of nucleus material, leading to misidentification of new species. With the flaw highlighted (and over a hundred and fifty million dollars earmarked for the project), it should soon be fixed. Good - we'd hate to think that excitement over entirely new lifeforms was due to an input error. Posted by Luke McKinney. Related Galaxy posts: Is Growth of Cities Accelerating the Planet's Bioversity Crisis? -A Galaxy Classic The Earth's 6th Great Mass Extinction is Occurring as You Read This Bigger Threat Than Global Warming: Mass Species Extinction Urban Life -An Organism "Beyond the Bounds of Biology" Dr Strangelove Two? -Cambridge Astrophysicist Gives Earthlings a 50/50 Chance of Making it Through the Century A Post-Human Future: Are Humans the Limit of Evolutionary Complexity? A Galaxy Classic Homo Urbanus - For the 1st Time in Human History the City Dominates TrackBack URL for this entry: Listed below are links to weblogs that reference Scientists Building a DNA Library of Entire Planet -a Vast Index of Everything Alive:
<urn:uuid:91a04259-d636-4ad3-a02f-077109cac7a6>
2.984375
528
Personal Blog
Science & Tech.
31.97743
New Emperor Penguin Colonies Found in Antarctica While about 2500 chicks of emperor penguins were raised this year at the colony close to the French Dumont d'Urville Station, two new colonies totalling 6000 chicks have just been observed about 250 km away, near Mertz Glacier by the scientists Dr André Ancel and Dr Yvon Ancel, from the Institut Pluridisciplinaire Hubert Curien in Strasbourg (CNRS and Université de Strasbourg). Since a pair of emperor penguins may only successfully raise one chick a year, the population of breeding emperor penguins in this area of the Antarctic can therefore be estimated to more than about 8500 pairs, about three fold that previously thought. The two new colonies have been revealed on 1st and 2nd November, during the late winter season trip of the MSS Astrolabe1 towards Dumont d'Urville. They are located on the winter sea ice. This ice surrounds the remains of the Mertz Glacier, from which a large ice wall, 80 km long, 40 km wide and 300-400 m thick has separated. These may be two sub-populations originating from the initial Mertz colony which, following the Mertz Glacier break, are attempting to settle again on favorable surroundings. One accounts for about 2000 chicks and the second for about 4000 chicks. Dr André Ancel had suspected the existence of an emperor penguin colony near the Mertz Glacier since 1999, when with Dr Barbara Wienecke (Australian Antarctic Division), they observed thousands of emperor penguins going back and forth in the Mertz glacier area. Dr Peter Fretwell and Dr Phil Trathan of the British Antarctic Survey localised this colony in 2009 based on the images from space of emperor penguin nitrogen dejections on the sea ice. However, the break of the Mertz glacier in 2010 questioned the fate of this colony. New satellite images obtained since then suggested that the birds might attempt breeding on different sites. Over the last 13 years all French attempts to find the birds had failed, due to the harsh winter conditions and the summer disappearance of the sea ice where the Emperors breed. Emperor Penguin Colony via Shutterstock. Read more at ScienceDaily.
<urn:uuid:075117c4-5cbd-4d35-84d5-e7cf8b699fbf>
2.953125
461
Truncated
Science & Tech.
44.455021
1 gigajoule = 277.78 kilowatt-hour. (GJ) - One billion Joules or a thousand Mega Joules.) a measure of heat or energy A unit of energy equaling 943,213.3 Btu. One billion joules. Annual gas usage of residential and commercial customers measured in gigajoule. A joule is an international unit of energy defined as the energy produced from one watt flowing for one second. Power - 3-6-mil joules in a kilowatt-hour. Gas - one Gigajoule (GJ) = 0.96 Mcf under standard temperature and pressure conditions. One Gigajoule (GJ) = 1 Mcf. 1 billion joules, approximately equal to 948,000 British thermal units. Unit of energy equal to 1 billion joules. oule - Unit of energy equal to the amount of heat needed to increase the temperature of one gram of water by one degree Celsius (ºC) at a standard pressure 101.325 kPa and standard temperature (15ºC). ilovolt (kV) - One thousand volts. ilovolt amperes (kVA) - A unit of power. ilowatt (kW) - One thousand watts. Equals the Square Root of [ (KW * KW) + (KVAR * KVAR) ] or kWh * # Intervals Per Hour ilowatt-hour (kWh) - Energy used by 1000 watts in one hour. Also equivalent to the actual usable power transfer. A gigajoule is a billion (1,000,000,000) joules approximately equivalent to the energy contained in a small car's full petrol tank. A gigajoule (GJ) is 1,000,000,000 joules. It is a unit of energy.
<urn:uuid:ab620dac-e4f8-46c2-8a89-3be4886c58d2>
3.0625
402
Structured Data
Science & Tech.
62.616458
OpenGL Shading Language This document describes a programming language that is a companion to OpenGL 2.0 and higher, called The OpenGL Shading Language. The OpenGL Shading Language is part of the core OpenGL 4.3 specification. - GLSL Reference Pages - OpenGL 3.3 & GLSL - OpenGL 4.1 & GLSL Quick Reference Guide - OpenGL 4.2 & GLSL Quick Reference Guide - OpenGL 4.3 & GLSL Quick Reference Guide The recent trend in graphics hardware has been to replace fixed functionality with programmability in areas that have grown exceedingly complex (e.g., vertex processing and fragment processing). The OpenGL Shading Language has been designed to allow application programmers to express the processing that occurs at those programmable points of the OpenGL pipeline. Independently compilable units that are written in this language are called shaders. A program is a set of shaders that are compiled and linked together. The aim of this document is to thoroughly specify the programming language. The OpenGL entry points that are used to manipulate and communicate with programs and shaders are defined separately from this language specification. The OpenGL Shading Language is based on ANSI C and many of the features have been retained except when they conflict with performance or ease of implementation. C has been extended with vector and matrix types (with hardware based qualifiers) to make it more concise for the typical operations carried out in 3D graphics. Some mechanisms from C++ have also been borrowed, such as overloading functions based on argument types, and ability to declare variables where they are first needed instead of at the beginning of blocks.
<urn:uuid:dcf64032-804f-48c0-8b69-d7425ff082f3>
3.296875
337
Documentation
Software Dev.
40.195478
In 9847 a Sophic long range probeship, exploring along a line to anti-spinward and Rimward of the Periphery discovered an anomalous light signature while travelling through interstellar space. Upon altering course and performing a high-speed flyby of the phenomenon, the probe was able to determine that the light signature was reflected star light. The probe had detected the reflection of a gigantic mirror in the middle of interstellar space. Thinking at first that they had found evidence of either alien technology or terragens activity, the probe and eir crew initiated deceleration and rendezvous manoeuvres that placed them in close proximity to the array some 18 weeks after the initial detection event. What the probe found was indeed a product of alien manufacture. But not exactly a product of alien technology. Drifting between the stars, light-years from any sun, the probe discovered a huge array of thin film mirrors some thirty thousand kilometers across. The mirrors acted to focus sunlight on a point nearby in space. Floating at this point was the first deepwood ever discovered. A deepwood is, in outward appearance, very similar to the more familiar orwoods of terragens space. However, there are a number of crucial differences. While terragens orwoods can grow to hundreds of kilometers across, the largest deepwood so far discovered is only 3 kilometers in diameter (after some initial study, the probeship signalled back to its base station and then travelled to the nearest star system where it nanofactured telescopes and several million short range probes to explore space in the area of the first deepwood for others of its kind. To date some 11,000 deepwoods have been identified across a volume some 10 light-years across with no indication that their numbers are thinning.). Deepwoods have much darker leaves then standard orwoods (the better to absorb reflected starlight) and a much slower, more efficient metabolism, allowing them to consume the resources of their host comet very gradually. And, of course, there is the mirror array. The mirror array of a deepwood is a bionano produced construct made from the metals available in the host comet. Each array is made up of many smaller mirrors all acting in coordination to reflect starlight onto the central tree. The output of the mirrors is sufficient to closely approximate the light level of a near-solar environment and provides the energy for the deepwood to metabolize the resources of its comet as well as maintaining a low level electric charge across its entire structure. While the exact purpose of this charge is not entirely clear, it is theorized that the deepwood uses this effect to gradually direct its course through space by reacting against the magnetic field lines of the galaxy. While the exact origins of the deepwood have yet to be determined, the most popular theory to date is that deepwoods are the product of an alien biotechnology. It is hypothesized that at some point in the far past an alien species engineered the deepwoods to serve as deep space habitats for their population, much as orwoods are used as habitats by terragens. At some point, some of the deepwoods 'escaped into the wild' perhaps due to the extinction of their creators, and have been gradually spreading through deep space ever since. A second, less generally accepted, hypothesis is that terragens civilization has encountered the outer edges of an expanding alien civilization (perhaps the builders of the Whisper sonic virtchcosm and alien Biogeocomputing worlds) and that it is only a matter of time until the deepwood's creators are encountered along with their creations. - Dyson Trees - Orwoods - Text by M. Alan Kazlev Dyson tree forest ecosystem, especially one that constitutes a stable or evolutionary space-based biota, with or without symbiotic sentient (human, neogen, etc.) interaction. Text by Todd Drashner Initially published on 19 April 2004. Page uploaded 19 April 2004; last modified 1 June 2007
<urn:uuid:05f755db-4973-4e22-b12a-1a133920bca3>
3.9375
811
Knowledge Article
Science & Tech.
32.490243
The STORM model provides an estimate of the expected change in the ionosphere during periods of increased geomagnetic activity. The model estimates the departure from normal of the F-region critical frequency (foF2) every hour of the day for the current and previous day. Values are given in six separate geomagnetic latitude bands, 20° wide, from 20° geomagnetic latitude to the North and South magnetic poles. Storm-time corrections within 20° of the magnetic equator are not made. The corrections are given in terms of a scaling factor, which can be used to adjust the climatological mean. As the ionosphere departs further from normal the color of the trace changes from green, to yellow, to red, where green represents changes within 10% of normal, and red indicates departures in excess of 25%. An estimate of the error in the prediction is also shown based on an average of the geophysical variability and the standard error of the mean. A detailed description of the estimated errors can be found at Estimated Errors. The empirical model provides a useful, yet simple tool for estimating the changes to ionosphere in response to geomagnetic activity. The STORM model was developed from ionospheric observations during many storms that were analyzed as a function of season and latitude. Within each season and latitude sector, the magnitude of the ionospheric response was determined as a function of an index parameterizing the magnitude of the storm. The storm magnitude index depends on the previous 33 hours of ap, weighted by an appropriate filter. The optimum length and shape of the filter was obtained by a singular value decomposition method. The real-time model uses the hourly values of the 3-hour running ap, provided by the USAF Hourly Magnetometer Analysis Reports. The blue line in the lower boxes shows the hourly value of the integrated ap, which is the index used to drive the model. The storm-time correction of the F-region critical frequency is primarily of benefit for high frequency (HF; 3-30 MHz) communication users. During a geomagnetic storm the F-region ionosphere can be either depleted or enhanced. When the ionosphere is enhanced, higher communication frequencies can be used, enabling a reduction in absorption and an increase in received signal strength. If the ionosphere is depleted, the maximum usable communication frequencies must be reduced to ensure reflection of the radio signal by the ionosphere to the receiver. The real-time web page can be used to access the results of STORM for a number of past geomagnetic storms by clicking on the Significant Storms link. The response for any day of interest, for at least the last 365 days, may also be obtained by simply inserting a given date in the appropriate box on the page. A comprehensive validation has been performed by comparing the output of the correction model with data obtained from all available ionosonde stations during all the geomagnetic storms in the year 2000. The validation shows that the model captures more than half of the increase in the storm-induced variability. Follow the links beyond Significant Storms to see graphical examples of the validation. The references below provide more detailed information regarding the development of the model; simply click on the papers. Please contact Tim.Fuller-Rowell@noaa.gov, Eduardo Araujo-Pradere Eduardo.Araujo@noaa.gov or Mihail.Codrescu@noaa.gov for further information. Araujo-Pradere, E.A., T.J. Fuller-Rowell, and M.V. Codrescu; STORM: An empirical storm-time ionospheric correction model. I, Model Description. Radio Science, 37, 10.1029/2001RS002467, 2002. Fuller-Rowell T.J., M.V. Codrescu, and E.A. Araujo-Pradere; Capturing the Storm-Time Ionospheric Response in an Empirical Model. AGU Space Weather Geophysical Monograph, 125, 393-402, 2001. Araujo-Pradere, E.A., T.J. Fuller-Rowell, and M.V. Codrescu; STORM: An empirical storm-time ionospheric correction model. II, Validation. Radio Science, 37, 10.1029/2002RS002620, 2002. Araujo-Pradere, E.A., T.J. Fuller-Rowell, and D. Bilitza. Validation of the STORM response in IRI2000, J. Geophys. Res., 108(A3), 1120, doi:10.1029/2002JA009720, 6-1 – 6-10, 2003 NOAA/ Space Weather Prediction Center or STORM Home Page
<urn:uuid:fc799baf-337a-4795-93a9-a638fa553587>
2.9375
1,000
Knowledge Article
Science & Tech.
49.179207
Can you imagine living without the vertebrae in your neck? Surely no animal on earth has a backbone that doesn't connect with its skull. Think again ... Larval (baby) fishes have skeletons made of cartilage. As the fish grows the cartilage ossifies (changes to bone). In the vast majority of bony fishes most of the flexible 'spinal support' (the notochord) is totally replaced by bone. It has been known for many years that the stomiid fishes (bearded dragonfishes) have a region of 'spine' behind the head that lacks vertebrae. This region, called the occipito-vertebral gap, is clearly visible as a blue strip* between the skull and the red coloured vertebrae in the top image. In the early 20th century it was proposed that this gap allowed the fish to bend its head backwards to an extraordinary degree, allowing it to efficiently swallow large prey. Nalani Schnell (University of Tuebingen), Dave Johnson (USNM, Washington) and Ralf Britz (NHM, London) were awarded the Reinhard Rieger Award for excellence in research in zoomorphology for their 2010 paper (see below). They investigated the development of the occipito-vertebral gap by staining specimens to show the bones (red), cartilage (blue) and nerves in different colours. For most bony fishes ossificaton of the backbone starts at the front of the fish and proceeds towards the tail. Surprisingly, in the stomiid fishes, ossificaton proceeds in the opposite direction. Schnell, Johnson and Britz' research confirmed that vertebrae behind the skull fail to form in two stomiid genera (Chauliodus and Eustomias), plus Leptostomias gladiator. Interestingly the bones above and below the notochord (red triangles in the top image) develop but the vertebral centra (the circular 'body' of each vertebra) do not. The remaining 24 stomiid genera also have an occipito-vertebral gap but do have a full complement of vertebrae. These fishes have an extended portion of the notochord, a condition that is highly unusual for an adult fish. * Despite being stained blue, the notochord is not made of cartilage. For those of you who really want to know ... it is composed of cells derived from the mesoderm and is surrounded by several layers of connective tissue. Schnell, N. Britz, R., & G.D. Johnson. 2010. New Insights into the Complex Structure and Ontogeny of the Occipito-Vertebral Gap in Barbeled Dragonfishes (Stomiidae, Teleostei). Journal of Morphology. 271(8): 1006-1022. View the full paper.
<urn:uuid:4209c8be-758c-427c-9e94-7eec89d66a04>
3.59375
601
Knowledge Article
Science & Tech.
49.099599
“…A Cal Tech seismologist was aware of the natural forces that are working slowly but inexorably under the earth, out of sight, unnoticed, setting off reactions and counterreactions in fragile earth faults deep in the bowels of the earth. There was no reliable way, despite all the scientific research, to predict them — except for the behavior of animals trying to escape the doomed area. Nobody but the horses, dogs, birds, fish, rats and cockroaches, fleeing their forest or city homes, seem to know with certainty when a temblor is about to strike, and then, alas, only a few hours before the event. In Yellowstone National Park in Wyoming, the day before a major quake the forest became silent, and visitors wondered at the absence of the chirping birds. They didn’t wonder long. Twenty-four hours later a devastating life-threatening quake shook the wild nature preserve, uprooting trees and buildings and causing widespread damage. Showing some primal connection between animal life — closer to nature than man — and the natural forces brimming under the surface of the earth. And in its heavens.
<urn:uuid:89130d41-d4c2-4b0a-b0c3-d3e75e224d6f>
2.859375
233
Personal Blog
Science & Tech.
41.039354
Most forecasting is easier and more reliable in the short run than over the long haul. Think of weather prediction. (And history is full of failed long-term forecasts of everything from oil prices to human population trends.) But for scientists studying the fate of the vast ice sheets of Greenland and West Antarctica, the situation seems reversed. Their views of sea trends through this century still vary widely, while they agree, almost to a person, that centuries of eroding ice and rising seas are nearly a sure thing in a warming world. The great shifts of sea level and temperature through cycles of ice ages and warm intervals make that clear. I wrote about that consensus last year in covering the reports released by the Intergovernmental Panel on Climate Change, but also wrote about scientists’ frustrations over trying to convey the importance of a slow-motion disaster. Many researchers are working hard to try to clarify whether more melting, both on the ice surface and along the coasts, could greatly speed things. I wrote about some of that work for Science Times this week. This post offers a bit more depth than could fit on the printed page. And it offers a more vivid view of the work in this video report, which takes you on a ride into the depths of the Greenland ice: Last summer, scientists from the University of Colorado, NASA, and elsewhere tried to probe Greenland’s internal plumbing, which can carry water from the melting surface down to the base, potentially lubricating where ice grinds over rock and speeding its movement toward the sea. The melting and gushing is dynamic and startling, and has produced a flood of media coverage in recent months. But many scientists doubt, for all the drama, that this process will end up moving meaningful quantities of ice into the sea. The same goes for the snouts of “outlet” glaciers, where ice from the interior funnels through gaps in coastal mountain ranges, and where warming seawater has broken up clots of ice that can hold things up, like a logjam in a river. Some scientists assessing the recent acceleration of ice flows propose that the rates of increase can’t be sustained long enough to get a truly disastrous rise in seas by 2100 from a warming Greenland. Tad Pfeffer, of the University of Colorado, and Joel T. Harper of the University of Montana laid out their argument for caution at the American Geophysical Union meeting in San Francisco in December, and quite a few glaciologists seem to agree with them. Of course, there is another wild card in the deck, called the West Antarctic Ice Sheet. Seasoned experts differ sharply on which storehouse of ice poses the bigger threat to coasts. My emails and calls to more than a dozen experienced ice scientists produced about a 50/50 split on whether Greenland or Antarctica was the biggest short-term risk. But there was little disagreement that playing what amounts to two games of high-stakes poker at the same time by driving up greenhouse-gas concentrations is a bad idea, particularly as ever more people concentrate on coastlines in both rich and poor countries. James E. Hansen, the prominent NASA climatologist who has become an outspoken advocate for sharp cuts in greenhouse gases, complained last year about the “reticence” of many of his peers when considering the risk of runaway ice loss within the lives of today’s children. He has co-written several papers recently positing how sustained warming could lead to coastal calamity by 2100. While the breakup and slipping of ice sheets is a small part of sea rise now, he wrote last year, it could easily accelerate under the heating from a “business as usual” path for emissions. “The broader picture gives a strong indication that ice sheets will, and are already beginning to, respond in a nonlinear fashion to global warming,” he wrote last May in the online journal Environmental Research Letters, adding there was “near certainty” that unabated emissions “would lead to a disastrous multi-meter sea level rise on the century timescale.” Many experts on polar and climate science push back, saying there is scant evidence to support that level of certainty. Waleed Abdalati, a NASA scientist focused on the ice sheets, said, “Ice sheets are continually responding to their changing boundary conditions in ways that might mitigate these changes.” There is a real risk of bigger ice losses and sea-level shifts, but much more work would be needed to clarify the odds, Dr. Abdalati said. “I think that close to a meter is a real possibility in the coming century, and the adverse effects of that should be enough to get people’s attention.” He concluded by saying scientists were in a real bind in trying to figure out how to discuss climate-related threats of this sort without causing the public and policymakers to glaze over. In an email, he said: “It is always a challenge to convey scientific uncertainty (and there is a lot in this case) to the general public. People want ‘the answer,’ and when you start to explain why ‘the answer’ is not as obvious as they would like, it is easy to lose them. Plus, there is so much hype made of uncertainty by skeptics, that it gets spun into the idea that scientists don’t really know what they are talking about and don’t have the answers. “At the end of the day, you can be 90% confident of something, and all people will hear is that you aren’t certain about what you are saying. This is why the debate is often cast in extremes, rather than an honest consideration of the data. It is really too bad, because an honest consideration of the data is still quite compelling.”
<urn:uuid:bb64bd05-e772-4c49-a22a-19149e00c0ec>
3.1875
1,194
Nonfiction Writing
Science & Tech.
43.950756
|s-block in the periodic table| ||This article is in a list format that may be better presented using prose. (March 2013)| The s-block is a block in the periodic table that consists of the first two groups, namely the alkali metals and the alkaline earth metals. The elements in the s-block generally exhibit well-defined trends in their physical and chemical properties, changing steadily moving down the groups. Their properties can be most readily explained in terms of their electron configuration, with their valence electrons occupying s-orbitals. By this definition, hydrogen and helium are sometimes also considered to be part of the s-block. The modern periodic law states that an element's chemical and physical properties is a periodic function of its atomic number. The long form of the periodic table is based on modern periodic law. The long form is divided into four blocks, s, p, d, and f. In an atom of an s-block element,the last electron enters the s-orbital of the outermost electron shell: - Group 1: - Group 2: - Beryllium (Z=4), Magnesium (Z=12), Calcium (Z=20), Strontium (Z=38), Barium (Z=56), Radium (Z=88). - Anomalous properties of Lithium: Diagonal relationship ||This section may stray from the topic of the article into the topic of another article, Diagonal relationship. (march 2013)| 1.) Lithium is in group 1 and period 2. 2) magnesium is in 2 group and 3 period. 3.) Lithium resembles magnesium which is diagonally placed. ¤ Cause of diagonal relationship- lithium and magnesium have similar ionic size and polarizing power hence they show diagonal relationship. ¤ Similarities between lithium and magnesium- Due to similar ionic size and polarizing power. ' Li ' and ' mg ' shows similar properties. 1) Both Li and mg are hard metals. 2) Licl and Mgcl2 are deliquecent and crystallize as hydrates. Licl.2H2o and Mgcl.2H2O. 3) both Li and mg combine with nitrogen to form nitrides. 4) hydroxides of both Li and mg are weak bases. 5) carbonates of both Li and mg decompose on heating. 6) hydrogen carbonates of both Li and mg do not exist in solid states. Anomalous behaviour of lithium Lithium - the first element of group 1 - differs from the rest of this group in many respects. This anomalous behaviour of lithium is due to the following reasons: - small size of lithium atom and its ion. - higher polarization power of Li+ (i.e. charge size ratio) resulting in increased covalent character of its compounds which is responsible for their solubility in organic solvents - comparatively high ionisation enthalpy and low electropositive character of lithium as compared to other alkali metals - strong intermetallic bonding Some of the properties in which lithium differs from other members of its group illustrating its anomalous behaviour are as follows: - Lithium is harder than sodium and potassium which are so soft that they can be cut by a knife. - The melting and boiling points of lithium are comparatively high. - Lithium forms monoxide with oxygen, other alkali form peroxide and superoxide. - Lithium combines with nitrogen to form nitrides, while other alkali metals do not. - Lithium Chloride is deliquescent and crystallizes as a hydrate LiCl.2H2O, whereas other alkali metal chlorides do not form hydrates. - hydroxide of alkali metals don't decompose but LiOH decompose in Li2O+H2O - lithium carbonate decompose on heating into Li2O+CO2 but rest of all decompose as MOH+H2O+CO2 - alkali nitrate make nitrites and O2 on heating but Li2NO3 decompose into Li2O and gives brown colour gas NO2 and O2 See also
<urn:uuid:cdb91b30-f53c-4d5b-98c4-0ee4c5be7fc0>
3.578125
881
Knowledge Article
Science & Tech.
48.409832
To begin with it's a well known result in gravitation that the sphere has geometry such that for gravitational calculations all it's mass can be considered to be concentrated at its centre for objects outside of its radius. Second there is another result that states that if inside a spherical shell then there is no gravitational influence i.e. net gravitational force on an obejct there is zero. This can be easily shown using geometry since the force on either side is proportional to and the area subtended on either side is proportional to . Hence the forces cancel and so within the cavity the net contribution is zero. Now consider a solid sphere of radius with a 'tunnel' of negligible width (compared to the size of the sphere) going into the sphere along a diameter. At any point at radius , in this tunnel the gravitational influence can be considered to be the sum of that of a spherical shell with inner radius and a solid sphere of radius r. As already explained the shell then has zero contribution and the influence of the inner sphere on a mass is then . (Gravitational force inside a sphere) Ok, so now we know this how do we use this result for this punctured sphere? Well, this is the equivalent to whatever the net gravitational effect of the large sphere (radius ) minus the net gravitational effect of the small sphere (radius I'll call the big solid sphere A and the small one B so then the net graviational force will be: We know the net force will be towards A so let us define the force in the direction of BA as the positive direction and let x be the distance of our mass m along AB from point B. Hence from the above result we have that This means this is a constant acceleration problem and since the sphere is fixed we can simply use to find that the velocity at impact at A is:
<urn:uuid:8a834973-80a1-4520-8176-9df1974a935c>
3.8125
386
Q&A Forum
Science & Tech.
41.321248
This 3 part problem deals with functions from the set P of cell phones in use in the US to the set of natural Numbers N. a) Write one interesting function f:P N that is injective. Define the function by giving a rule for it, i.e., (f(x)=...) b) write a function f:P N that is not injective. Again, give a rule c) Explain why there is no function from P to N that is surjective
<urn:uuid:79b3bb3f-c423-498c-9dce-8bdf99d33884>
2.890625
103
Q&A Forum
Science & Tech.
90.794153
John Matese and Daniel Whitmire, from the University of Louisiana at Lafayette, are claiming that data from NASA’s Wide-field Infrared Survey Explorer already suggests that there is a large planet in the outer solar system. This hypothetical planet, which they have nicknamed Tyche, orbits the sun at 15,000 AU’s and weighs in at four times the mass of Jupiter. (Apparently Matese suggested this theory as early as 1999 based on perceived statistical fluke in the orbit of comets.) When I read this I wondered at first whether it was even conceivable, and in particular would 15000 AU’s even still be considered in our solar system? I looked it up, and it is thought that the sun’s gravitational field dominates that of other stars out to about two light-years, or 125,000 AU’s. The Oort cloud, a hypothetical cloud of a trillion comets, which Freeman Dyson has speculated to be a possible long-term home for our distant descendants, is thought to be between 50,000 and 100,000 AU’s from the sun. It seems that the Tyche hypothesis is not widely accepted in the astronomy community, and NASA has demurred, suggesting that we will know more in coming months or years. I, for one, welcome our new giant planet overlord. Thanks to Dr. Heiser for the link.
<urn:uuid:ee27cb3d-7482-44a0-9837-ae0c9240f9b7>
3.3125
290
Personal Blog
Science & Tech.
55.124319
White Dwarf Star Spirals Photograph courtesy NASA/Tod Strohmayer (GSFC)/Dana Berry (Chandra X-Ray Observatory) About 1,600 light-years away, two dense white dwarfs in the J0806 binary star system orbit each other once every 321 seconds. When they reach the end of their long evolutions, smaller stars typically become white dwarfs. Sirius and Sirius B Photograph courtesy NASA/ESA/H. Bond (STScl)/M. Barstow (University of Leicester) The brightest star in the nighttime sky, Sirius, or the Dog Star, greatly outshines its white dwarf companion, Sirius B. At 8.6 light-years away, Sirius B is the nearest known white dwarf star to Earth. Photograph courtesy HubbleSite Ancient white dwarf stars shine in the Milky Way galaxy. Stars like our sun fuse hydrogen in their cores into helium. White dwarfs are stars that have burned up all of the hydrogen they once used as nuclear fuel. More Photos of the Universe Explore More From Nat Geo National Geographic Magazine The space-weather forecast for the next few years: solar storms, with a chance of catastrophic blackouts on Earth. Are we prepared? Archaeologists and artists, armed with the latest tools and techniques, are bringing the life-size army of painted clay soldiers back to life. Each month, National Geographic magazine features breathtaking photographs in Visions of Earth. Browse through visions of the world as seen through a photographer's eye.
<urn:uuid:03384b78-655b-40d9-b92a-595c1f446a16>
3.5
317
Content Listing
Science & Tech.
53.661333
|Recognized since antiquity and depicted on the shield of Achilles according to Homer, stars of the form the head of the constellation Taurus the Bull. Their general V-shape is anchored by the eye of the Bull and by far the constellation's brightest star. Yellowish in appearance, red giant Aldebaran is not a Hyades cluster Modern astronomy puts Hyades cluster 151 light-years away making it the nearest established open star cluster, while Aldebaran lies at less than half that distance, along the same with colorful Hyades stars, this stellar holiday portrait locates Aldebaran just below center, as well as another star cluster in Taurus, NGC 1647 at the left, some 2,000 light-years or more in the background. Just slide your cursor over the image to identify the stars. The central Hyades stars are spread out over about 15 light-years. Formed some 800 million years ago, the Hyades star cluster may share a common origin (Praesepe), a naked-eye open star cluster in Cancer, based on M44's motion through space and remarkably similar age. (Catching the Light)
<urn:uuid:e54cc1f2-425b-487c-abcc-9e4b24195e83>
3
257
Knowledge Article
Science & Tech.
41.244469
Included here are more in depth explanations of some of the terms and processes in the Nuclear Fuel Cycle. Nuclear Fuel is produced utilizing various elements and compounds in reactions that yied the product desired at the various stages. Initially uranium is the prime constituent looked for in its natural setting during the exploration phase. The ore body, the rock, that holds the uranium also contains many other elements and compounds. While these remain undisturbed, very little of these components enter the surrounding environment. But once mining is initiated, all of the components are moved throughout the environment and left, in many cases, to contaminate the ground and water systems. While most of the uranium is taken from the mine location, almost all of the other components are left in piles of waste rock and tailings either at the mine or at the mill. Below are the various components found in uranium ore bodies. Full descriptions can be found on Wikepedia. Uranium - (pronounced /jʊˈreɪniəm/) is a silvery-gray metallic chemical element in the actinide series of the periodic table that has the symbol U and atomic number 92. It has 92 protons and 92 electrons, 6 of them valence electrons. It can have between 141 and 146 neutrons, with 146 (U-238) and 143 in its most common isotopes. Uranium has the highest atomic weight of the naturally occurring elements. Uranium is approximately 70% denser than lead, but not as dense as gold or tungsten. It is weakly radioactive. It occurs naturally in low concentrations (a few parts per million) in soil, rock and water, and is commercially extracted from uranium-bearing minerals such as uraninite (see uranium mining). A person can be exposed to uranium (or its radioactive daughters such as radon) by inhaling dust in air ,or by ingesting contaminated water and food. Absorbed uranium tends to bioaccumulate and stay for many years in bone tissue because of uranium's affinity for phosphates. Normal functioning of the kidney, brain, liver, heart, and numerous other systems can be affected by uranium exposure, because in addition to being weakly radioactive, uranium is a toxic metal. Uranium is also a reproductive toxicant. In nature, uranium atoms exist as uranium-238 (99.284%), uranium-235 (0.711%), and a very small amount of uranium-234 (0.0058%). Uranium decays slowly by emitting an alpha particle. The half-life of uranium-238 is about 4.47 billion years and that of uranium-235 is 704 million years, making them useful in dating the age of the Earth (see uranium-thorium dating, uranium-lead dating and uranium-uranium dating). Many contemporary uses of uranium exploit its unique nuclear properties. Uranium-235 has the distinction of being the only naturally occurring fissile isotope. Uranium-238 is both fissionable by fast neutrons, and fertile (capable of being transmuted to fissile plutonium-239 in a nuclear reactor). An artificial fissile isotope, uranium-233, can be produced from natural thorium and is also important in nuclear technology. While uranium-238 has a small probability to fission spontaneously or when bombarded with fast neutrons, the much higher probability of uranium-235 and to a lesser degree uranium-233 to fission when bombarded with slow neutrons generates the heat in nuclear reactors used as a source of power, and provides the fissile material for nuclear weapons. Both uses rely on the ability of uranium to produce a sustained nuclear chain reaction. Depleted uranium (uranium-238) is used in kinetic energy penetrators and armor plating. To read a more in depth description please visit the Wikipedia link: URANIUM Thorium - (pronounced /ˈθɔəriəm/) is a chemical element with the symbol Th and atomic number 90. As a naturally occurring, slightly radioactive metal, it has been considered as an alternative nuclear fuel to uranium. When pure, thorium is a silvery-white metal that retains its luster for several months. However, when it is exposed to oxygen, thorium slowly tarnishes in air, becoming grey and eventually black. Thorium dioxide (ThO2), also called thoria, has the highest melting point of any oxide (3300°C). Exposure to an aerosol of thorium can lead to increased risk of cancers of the lung, pancreas and blood. Exposure to thorium internally leads to increased risk of liver diseases. To read a more in depth description please visit the Wikipedia link: THORIUM Protactinium - (pronounced /ˌproʊtækˈtɪniəm/) is a chemical element with the symbol Pa and atomic number 91. Its longest-lived isotope has a half-life of 32,760 years. Due to its scarcity, high radioactivity, and toxicity, there are currently no uses for protactinium outside of basic scientific research. Protactinium occurs in pitchblende to the extent of about 1 part 231Pa per 10 million parts of ore (i.e., 0.1 ppm). Protactinium is both toxic and highly radioactive. It requires precautions similar to those used when handling plutonium. To read a more in depth description please visit the Wikipedia link: PROTACTINIUM Actinium - (pronounced /ækˈtɪniəm/) is a radioactive chemical element with the symbol Ac and atomic number 89, which was discovered in 1899. Actinium is a silvery, radioactive, metallic element. Due to its intense radioactivity, actinium glows in the dark with a pale blue light. The chemical behavior of actinium is similar to that of the rare earth element lanthanum. 227Ac is extremely radioactive, and in terms of its potential for radiation induced health effects 227Ac is even more dangerous than plutonium. Ingesting even small amounts of 227Ac would be fatal. To read a more in depth description please visit the Wikipedia link: ACTINIUM Radium - (pronounced /ˈreɪdiəm/) is a radioactive chemical element which has the symbol Ra and atomic number 88. Its appearance is almost pure white, but it readily oxidizes on exposure to air, turning black. Radium is an alkaline earth metal that is found in trace amounts in uranium ores. It is extremely radioactive. Radium is a decay product of uranium and is therefore found in all uranium-bearing ores. Radium is highly radioactive and its decay product, radon gas, is also radioactive. Since radium is chemically similar to calcium, it has the potential to cause great harm by replacing it in bones. Inhalation, injection, ingestion or body exposure to radium can cause cancer and other disorders. To read a more in depth description please visit the Wikipedia link: RADIUM Radon - (pronounced /ˈreɪdɒn/) is a chemical element with symbol Rn and atomic number 86. Radon is a colorless, odorless, naturally occurring, radioactive noble gas that is formed from the decay of radium. It is one of the heaviest substances that remains a gas under normal conditions and is considered to be a health hazard. Radon is a significant contaminant that affects indoor air quality worldwide. Radon gas from natural sources can accumulate in buildings, especially in confined areas such as the basement. Radon can be found in some spring waters and hot springs. According to the United States Environmental Protection Agency, radon is reportedly the second most frequent cause of lung cancer, after cigarette smoking; and radon-induced lung cancer the 6th leading cause of cancer death overall. According to the same sources, radon reportedly causes 21,000 lung cancer deaths per year in the United States. To read a more in depth description please visit the Wikipedia link: RADON Polonium - pronounced /pəˈloʊniəm/) is a chemical element with the symbol Po and atomic number 84, discovered in 1898 by Marie and Pierre Curie. A rare and highly radioactive metalloid, polonium is chemically similar to bismuth and tellurium, and it occurs in uranium ores. By mass, polonium-210 is around 250,000 times more toxic than hydrogen cyanide (the actual LD50 for 210Po is about 1 microgram for an 80 kg person (see below) compared with about 250 milligrams for hydrogen cyanide). It has been estimated that a median lethal dose of 210Po is 0.015 GBq (0.4 millicuries), or 0.089 micrograms, still an extremely small amount. To read a more in depth description please visit the Wikipedia link: POLONIUM Lead - (pronounced /ˈlɛd/) is a main-group element with symbol Pb (Latin: plumbum) and atomic number 82. Lead is a soft, malleable poor metal, also considered to be one of the heavy metals. Lead has a bluish-white color when freshly cut, but tarnishes to a dull grayish color when exposed to air. It has a shiny chrome-silver luster when melted into a liquid. Lead has the highest atomic number of all stable elements, although the next element, bismuth, has a half-life so long (longer than the estimated age of the universe) it can be considered stable. Like mercury, another heavy metal, lead is a potent neurotoxin that accumulates in soft tissues and bone over time. Lead is a poisonous metal that can damage nervous connections (especially in young children) and cause blood and brain disorders. To read a more in depth description please visit the Wikipedia link: LEAD Molybdenum - (pronounced /məˈlɪbdənəm/, from the Greek word for the metal "lead"), is a Group 6 chemical element with the symbol Mo and atomic number 42. It has the eighth-highest melting point of any element. It readily forms hard, stable carbides, and for this reason it is often used in high-strength steel alloys. Molybdenum is found in trace amounts in plants and animals, although excess molybdenum can be toxic in some animals.The ability of molybdenum to withstand extreme temperatures without significantly expanding or softening makes it useful in applications that involve intense heat, including the manufacture of aircraft parts, electrical contacts, industrial motors, and filaments. Molybdenum dusts and fumes, as can be generated by mining or metalworking, can be toxic, especially if ingested (including dust trapped in the sinuses and later swallowed). Low levels of prolonged exposure can cause irritation to the eyes and skin. The direct inhalation or ingestion of molybdenum and its oxides should also be avoided. Chronic exposure to 60 to 600 mg Mo/m³ can cause symptoms including fatigue, headaches, and joint pains. To read a more in depth description please visit the Wikipedia link: MOLYBDENUM Vanadium - (IPA: /vəˈneɪdiəm/) is the chemical element with the symbol V and atomic number 23. It is a soft, ductile, silver-grey metal. Most vanadium is used as ferrovanadium as an additive to improve steels. All vanadium compounds should be considered to be toxic. The Occupational Safety and Health Administration (OSHA) has set an exposure limit of 0.05 mg/m3 for vanadium pentoxide dust and 0.1 mg/m3 for vanadium pentoxide fumes in workplace air for an 8-hour workday, 40-hour work week. The National Institute for Occupational Safety and Health (NIOSH) has recommended that 35 mg/m3 of vanadium be considered immediately dangerous to life and health. This is the exposure level of a chemical that is likely to cause permanent health problems or death. To read a more in depth description please visit the Wikipedia link: VANADIUM In nuclear science, the decay chain refers to the radioactive decay of different discrete radioactive decay products as a chained series of transformations. Most radioactive elements do not decay directly to a stable state, but rather undergo a series of decays until eventually a stable isotope is reached. Decay stages are referred to by their relationship to previous or subsequent stages. A parent isotope is one that undergoes decay to form a daughter isotope. The daughter isotope may be stable or it may decay to form a daughter isotope of its own. The daughter of a daughter isotope is sometimes called a granddaughter isotope. The four most common modes of radioactive decay are: alpha decay, beta minus decay, beta plus decay (considered as both positron emission and electron capture), and isomeric transition. Of these decay processes, alpha decay changes the atomic mass number (A) of the nucleus, and always decreases it by four. Because of this, almost any decay will result in a nucleus whose atomic mass number has the same residue mod 4, dividing all nuclides into four classes. The members of any possible decay chain must be drawn entirely from one of these classes. All four chains also produce helium, from alpha particles. Three main decay chains (or families) are observed in nature, commonly called the thorium series, the radium series (not uranium series), and the actinium series, representing three of these four classes, and ending in three different, stable isotopes of lead. The mass number of every isotope in these chains can be represented as A=4n, A=4n+2, and A=4n+3, respectively. The long-lived starting isotopes 232Th, 238U, and 235U, respectively, of these three have existed since the formation of the earth. The plutonium isotopes Pu-244 and Pu-239 have also been found in trace amounts on earth. To read more about decay chains please visit the Wikipedia link: DECAY CHAIN Radium Series - The Uranium238 Decay Chain Beginning with naturally occurring uranium-238, this series includes the following elements: astatine, bismuth, lead, polonium, protactinium, radium, radon, thallium, and thorium. All are present, at least transiently, in any uranium-containing sample, whether metal, compound, or mineral. Actinium Series - The Uranium235 Decay Chain Beginning with naturally occurring uranium-235, this series includes the following elements: Actinium, astatine, bismuth, francium, lead, polonium, protactinium, radium, radon, thallium, and thorium. All are present, at least transiently, in any uranium-containing sample, whether metal, compound, ore, or mineral. Thorium Series - The Thorium232 Decay Chain Begining with naturally occurring thorium-232, this series includes the following elements: Actinium, bismuth, lead, polonium, radium, and radon. All are present, at least transiently, in any natural thorium-containing sample, whether metal, compound, or mineral.
<urn:uuid:bb07d3a7-3da0-4ffa-842e-8015a1f3b547>
4.125
3,194
Knowledge Article
Science & Tech.
38.11429
The Message widget is a variant of the Label, designed to display multiline messages. The message widget can wrap text, and adjust its width to maintain a given aspect ratio. When to use the Message Widget To create a message, all you have to do is to pass in a text string. The widget will automatically break the lines, if necessary. from Tkinter import * master = Tk() w = Message(master, text="this is a message") w.pack() mainloop() If you don’t specify anything else, the widget attepts to format the text to keep a given aspect ratio. If you don’t want that behaviour, you can specify a width: w = Message(master, text="this is a relatively long message", width=50) w.pack() - Message(master=None, **options) (class) [#] A multi-line text message. - Parent widget. - Widget options. See the description of the config method for a list of available options. - config(**options) [#] Modifies one or more widget options. If no options are given, the method returns a dictionary containing all current option values. - Widget options. - Where in the message widget the text should be placed. Use one of N, NE, E, SE, S, SW, W, NW, or CENTER. Default is CENTER. (the database name is anchor, the class is Anchor) - Aspect ratio, given as the width/height relation in percent. The default is 150, which means that the message will be 50% wider than it is high. Note that if the width is explicitly set, this option is ignored. (aspect/Aspect) - Message background color. The default value is system specific. (background/Background) - Same as background. - Border width. Default value is 2. (borderWidth/BorderWidth) - Same as borderwidth. - What cursor to show when the mouse is moved over the message widget. The default is to use the standard cursor. (cursor/Cursor) - Message font. The default value is system specific. (font/Font) - Text color. The default value is system specific. (foreground/Foreground) - Same as foreground. - Together with highlightcolor and highlightthickness, this option controls how to draw the highlight region. (highlightBackground/HighlightBackground) - See highlightbackground. (highlightColor/HighlightColor) - See highlightbackground. (highlightThickness/HighlightThickness) - Defines how to align multiple lines of text. Use LEFT, RIGHT, or CENTER. Note that to position the text inside the widget, use the anchor option. Default is LEFT. (justify/Justify) - Horizontal padding. Default is -1 (no padding). (padX/Pad) - Vertical padding. Default is -1 (no padding). (padY/Pad) - Border decoration. The default is FLAT. Other possible values are SUNKEN, RAISED, GROOVE, and RIDGE. (relief/Relief) - If true, the widget accepts input focus. The default is false. (takeFocus/TakeFocus) - Message text. The widget inserts line breaks if necessary to get the requested aspect ratio. (text/Text) - Associates a Tkinter variable (usually a StringVar) with the message. If the variable is changed, the message text is updated. (textVariable/Variable) - Widget width, in character units. If omitted, the widget picks a suitable width based on the aspect setting. (width/Width)
<urn:uuid:89a05ea1-f000-4d50-8c76-05ebbedd7802>
2.984375
803
Documentation
Software Dev.
54.922008
Describing Motion with Diagrams Visit The Physics Classroom's Flickr Galleries and take a visual overview of 1D Kinematics. Introduction to Diagrams Throughout the course, there will be a persistent appeal to your ability to represent physical concepts in a visual manner. You will quickly notice that this effort to provide visual representation of physical concepts permeates much of the discussion in The Physics Classroom Tutorial. The world that we study in physics is a physical world - a world that we can see. And if we can see it, we certainly ought to visualize it. And if we seek to understand it, then that understanding ought to involve visual representations. So as you continue your pursuit of physics understanding, always be mindful of your ability (or lack of ability) to visually represent it. Monitor your study and learning habits, asking if your knowledge has become abstracted to a series of vocabulary words that have (at least in your own mind) no relation to the physical world which it seeks to describe. Your understanding of physics should be intimately tied to the physical world as demonstrated by your visual images. Like the study of all of physics, our study of 1-dimensional kinematics will be concerned with the multiple means by which the motion of objects can be represented. Such means include the use of words, the use of graphs, the use of numbers, the use of equations, and the use of diagrams. Lesson 2 focuses on the use of diagrams to describe motion. The two most commonly used types of diagrams used to describe the motion of objects are: Begin cultivating your visualization skills early in the course. Spend some time on the rest of Lesson 2, seeking to connect the visuals and graphics with the words and the physical reality. And as you proceed through the remainder of the unit 1 lessons, continue to make these same connections.
<urn:uuid:87dcf928-0fad-4198-b047-730821e5f217>
4.21875
375
Tutorial
Science & Tech.
43.895728
Mechanics: Newton's Laws of Motion Newton's Laws of Motion: Audio Guided Solution Skydiving tunnels have become popular attractions, appealing in part to those who would like a taste of the skydiving experience but are too overwhelmed by the fear of jumping out of a plane at several thousand feet. Skydiving tunnels are vertical wind tunnels through which air is blown at high speeds, allowing visitors to experience bodyflight. On Natalya's first adventure inside the tunnel, she changes her orientation and for an instant, her 46.8-kg body momentarily experiences an upward force of air resistance of 521 N. Determine Natalya's acceleration during this moment in time. Audio Guided Solution Click to show or hide the answer! 1.3 m/s/s, up (rounded from 1.33 m/s/s) Habits of an Effective Problem Solver - Read the problem carefully and develop a mental picture of the physical situation. If necessary, sketch a simple diagram of the physical situation to help you visualize it. - Identify the known and unknown quantities in an organized manner. Equate given values to the symbols used to represent the corresponding quantity - e.g., vo = 0 m/s; a = 4.2 m/s/s; vf = 22.9 m/s; d = ???. - Use physics formulas and conceptual reasoning to plot a strategy for solving for the unknown quantity. - Identify the appropriate formula(s) to use. - Perform substitutions and algebraic manipulations in order to solve for the unknown quantity. Read About It! Get more information on the topic of Newton's Laws of Motion at The Physics Classroom Tutorial. Return to Problem Set Return to Overview
<urn:uuid:2025d2ed-0d97-43f6-9bfc-e5957318fe85>
3.546875
371
Tutorial
Science & Tech.
56.767316
|Mar26-09, 09:10 AM||#1| Magnetic force on a wire 1. The problem statement, all variables and given/known data A wire is oriented along the x-axis. It is connected to two batteries, and a conventional current of 2.3 A runs through the wire, in the +x direction. Along 0.27 m of the length of the wire there is a magnetic field of 0.82 tesla in the +y direction, due to a large magnet nearby. At other locations in the circuit, the magnetic field due to external sources is negligible. What is direction and magnitude of the magnetic force on the wire? 2. Relevant equations 3. The attempt at a solution |Mar26-09, 09:12 AM||#2| There is an equation that relates the force on a wire of length L to the current I flowing and the magnetic field strength B. See if you can find it. |Mar26-09, 09:21 AM||#3| |Similar Threads for: Magnetic force on a wire| |Force of wire in a magnetic field||Introductory Physics Homework||2| |Magnetic force on a wire||Introductory Physics Homework||6| |Magnetic Force on segment of wire||Introductory Physics Homework||1|
<urn:uuid:cd84d5f4-05b4-4f62-8e72-9994ec1df141>
3.5
284
Q&A Forum
Science & Tech.
69.062983
All eyes will be on the new Mars rover Curiosity when it lands in just over two weeks, but lest we forget, NASA’s indefatigable Mars rover Opportunity is still rolling along, too. The rover has driven about 22 miles, which prompted some Olympic-minded NASA people to realize the rover is nearing marathon distance. It will be the first interplanetary marathon. This is made all the more impressive by the fact that Opportunity and its late twin, Spirit, were designed to drive about one-third of a mile in total. And the fact that Opportunity drives about 160 to 330 feet a day. Granted, it flew a long way to even get to the starting line: “This particular marathoner had to fly about 283 million miles across space before being unceremoniously drop-bounced on the Martian surface,” said Ray Arvidson, the mission’s deputy principal investigator, told NASA Science News.Its main mission has been to look for water, and both rovers have found slam-dunk evidence that the Red Planet used to be a wet planet. Opportunity first found evidence of water at a site called Eagle Crater, and then spent the next few years driving around deeper and larger craters nearby. Since August of last year, it’s been exploring Endeavour Crater, after traversing tricky Martian terrain “with no aid stations anywhere,” as NASA Science cheekily puts it. It even had to drive backwards for a while after a wheel injury. At Endeavour, Opportunity found some of its best evidence yet, including fractured rock filled with gypsum. Gypsum forms in the presence of water, and likely in more pH-neutral (and life-hospitable) conditions. Just a few weeks ago, the rover awoke from a winter slumber and left its winter resting place, Greeley Haven, to do some more exploring. There’s still plenty of work to do, so a 26.2 mile total is certainly within the realm of possibility. Learn more about Opportunity's journey in this video. Five amazing, clean technologies that will set us free, in this month's energy-focused issue. Also: how to build a better bomb detector, the robotic toys that are raising your children, a human catapult, the world's smallest arcade, and much more.
<urn:uuid:91199055-8c30-4720-9c1c-6196d220347a>
2.9375
483
Truncated
Science & Tech.
46.493967
Studying the effects of rainfall on civet behaviour Itís pretty easy to get lost when you venture deep into the Jungle of Lambusango on the Isle of Buton, just off the South East coast of Sulawesi in Indonesia - a fact that I discovered more than once and to the amusement of the local guides with whom I worked during my summer on the Island. The occasional paths all look the same, and most of them are blocked by vegetation somewhere along their length. I clearly recall the first time I crawled out of a spiky rattan plant after having stumbled head over heels into it, only to see an Indonesian man running through the thick muddy undergrowth barefoot Ė swinging himself around trees and jumping over rocks. But soon enough you learn how to find your way and how to stay (relatively) safe, making it easier to focus on why you are actually there. The purpose of my trip was to investigate the ranging behaviour of a small carnivore called a Malay civet. This species (and in particular the population I was studying) makes for a very good study model as they are the largest mammalian predator on Buton Island. As a result they have become abundant as they have little or no predation costs and much lower competition for mates and resources than their counterparts on other Islands. By investigating this population we can get a better idea of how other populations operate and what can be done to conserve other species that arenít so populous. Individuals from this species of civet are about the same size as a domestic cat but have a more elongated nose, similar to that of a possum (although they are from the Viverridae family and so arenít particularly closely related to cats or possums). They are very shy and timid, which poses a difficult problem when trying to observe their behaviour and scope out their territories. Since they canít be observed freely they need to be captured and then tracked using radio collars. This method is much easier to carry out on Buton than it would be in other locations due to the large population of Malay civets that live there. By tracking their movements we can begin to understand their population dynamics which in turn could lead to the development of investigations that may strengthen our understanding of their relationships with other individuals, their feeding strategies, and perhaps even their mating strategies. Operation Wallacea researchers (an organisation set up in the name of Alfred Wallace to conduct research for conservation purposes) have come up with a way to capture and study these amazing animals in their own environment. Twenty-seven large cage wire traps were placed on dry level ground and evenly distributed over their population area. They were baited with salt fish which give off a strong smell which attracts civets. The cages were left over night and checked the following morning - on average we had between one and two civets captured a day, with the highest being five and the lowest being zero. Sometimes we caught animals that werenít civets at all - over three very memorable days we caught two monitor lizards and a turtle! When civets were captured the initial plan was to put radio collars on them so that they could be tracked after their release. However, upon my arrival in Indonesia, I discovered that the equipment that I would need to track these animals was broken. As a result I had to resort to a capture-mark-recapture scheme, which is where you put ear tags on the individuals that you capture before you release them, rather than a radio collar. Once they have been released there is a chance that they will be recaptured and if so they might be caught in the same or a different trap, thereby giving information about their range of movement. This method is much less precise as it only gives a very rough estimation of an individualís home range and only provides limited information (especially if the individual is captured only once, or recaptured in the same trap it was captured in previously) As a result my investigation was entirely restructured and I set out to test the variation in recapture frequencies of individuals from different age groups. This new plan would allow me to find suggestions about whether or not civets are more curious at a particular age and whether they learn from previous trapping experiences to either like or dislike a trap. On analysis of the results it appeared that there was a significant difference in the recapture rates of individuals, but rather than this difference being connected to age it was connected to the sexual maturity (or lack of it) in an individual. It was shown that individuals who were sexually active were captured significantly more than those who were sexually inactive (see the graph). This variation is likely to be the result of a combination of different factors. Sexually inactive individualsí home ranges have to be large enough only to provide enough food, whereas sexually active individuals are also out searching for mates. This means that sexually active individuals are more likely to be roaming farther than inactive individuals. This may hold true particularly with active males searching for females. Sexually inactive individuals may also still be experiencing some parental care. Mothers who are still caring for young (that may still be learning to hunt for themselves) will have to find more food so as to feed themselves and their offspring. This also holds true if the mother is still lactating as she will have to feed herself more so as to afford the higher energy costs of lactation. Immature individuals are unlikely to roam as much as fully grown adults as they have not yet learnt to hunt efficiently for themselves. Young may also be at risk from predation from other animals. While this problem is much less of an issue on Buton than it would be on larger islands (such as Borneo, where there are far more possible predator and competitors), young civets may still be at risk from large monitor lizards or wild pigs. Alternatively, decreased juvenile activity as a response to predation risk may be an evolutionary trait left over from when predation was a much more serious threat. While trekking through the jungle day after day, covered in mud, I realised that it was much too wet for a usual dry season. This was a discovery that I made first thing one morning while emptying several litres of water out of my hammock. Sure enough this year was one of the rare occasions (occurring every five years or so) when a weather phenomenon known as El NiŮo took place. El NiŮo involves an increase in the surface temperature across the Pacific Ocean, leading to greater evaporation rates which make the air more humid and cause more rainfall. This gave me a rare opportunity to expand my project and explore the influence of rainfall on civet movements. I set out to test whether or not the surprisingly low numbers of captures that we were making was linked to the larger than usual amount of rainfall. To do this I had to gather two other sets of data. The first was the amount of rainfall that occurred in July in the area and equivalent data sets for the month of June in 2007 and 2008. This data was easy to collect as the rainfall was measured and recorded at the nearest city. The second data set I needed was the rainfall that was occurring day to day within my trapping grid. In order to analyse my data as I planned I had to get the capture frequency data from 2007 and 2008. This was possible because Operation Wallacea is a long-standing conservation group and its researchers have been conducting their work in the same locations for years. The monitoring of the civet population on Buton Island in Indonesia has been ongoing since 2001. This backlog of data allows different scientists to build on each otherís work and provides a standardised method for setting up new investigations. When I analysed my data, I discovered that high rainfall drastically decreases the capture rates of the civets, which suggests that during periods of intense rainfall the civets are less active. The most likely explanation is that during a rain event the efficiency of hunting decreases as prey become less active. A similar scenario was seen in Mississippi State at the Tallahala Wildlife Management Area (TWMA) with Bobcats (Lynx rufus). The Bobcats were also observed to reduce movement and activity during periods of rainfall and it was suggested that decreased hunting efficiency was the main cause of this behaviour. Not only are prey likely to be less active but they are also likely to be harder to find as rain has been shown to interfere with smell, sound and sight which are likely to be essential tools when hunting, especially when that hunting occurs nocturnally in the dark undergrowth of a rainforest. There may also be an increased risk of injury to the civets as this quantity of rainfall may often cause flooding and the bedrock can become unstable. Mudslides and knee deep water occurred frequently in the jungle during the trapping season due to the increased rainfall. Since the terrain is not flat the water was often flowing at high speeds down rock faces which were sharp and unstable. The water was brown owing to flowing mud, which meant there was limited, if any, visibility of the ground beneath the water. Finally there may be other more specific reasons for civets to decrease their movement patterns during periods of high rainfall that are still unclear due to lack of research (with the Malay civet there are large gaps in the current knowledge about their behaviour and lifestyle). This result has large implications for civet behaviour, as being deterred from venturing out on rainy nights will decrease their opportunities for both hunting and mating. This can be problematic especially for young civets in their first year as they are still vulnerable, and can also have a large impact on breeding rates which may influence population size. Luckily this species of civet is very populous on the Island of Buton and so the impact of a slightly reduced population size may not be difficult to recover from. However, a closely related species of civet called the Sulawesi palm civet is much rarer, having been out competed by the Malay civet in much of its native habitat. The Sulawesi palm civet is endemic to the island of Sulawesi, and remains one of the least studied mammals resulting in its actual population size being unknown. Due to estimations of population decline over the last 17 years (based on habitat destruction) the species is currently listed as Vulnerable on the IUCN red list for threatened species. If the weather has the same affect on the Sulawesi palm civet as it does on the Malay civet, the former could suffer drastic population loss. So, all in all, my adventure in the Indonesian tropics proved both educational and fun! I got lost several times, most of my clothes got ruined thanks to the mud and spiky plants, and while the work was definitely tiring and at times frustrating it was most certainly worth it! The data I collected show that individuals belonging to this species of civet have evolved behavioural traits which make them more active when they are sexually mature and when it isnít raining. On the other hand, the collection of scratches and scars I gathered during my trip show that, much as I love the jungle, I definitely didnít evolve to live there Ė rain or not!
<urn:uuid:66c46207-2104-41e9-ae66-4642f6fa70a1>
3.03125
2,298
Personal Blog
Science & Tech.
34.609654
The moon has never had all that much. It doesn't have atmosphere, it doesn't have water and it sure doesn't have life. What it does have, though, is dirt lots and lots of dirt and it's some of the coolest stuff you ever saw. Now it's even cooler, thanks to the discovery this week of a wholly unexpected ingredient stirred into the lunar mix. Even before astronauts landed on the moon, they knew the soil would be something special. With no atmosphere to intercept incoming meteorites and micrometeorites, the lunar regolith or surface covering would have been subjected to a 4.5 billion year bombardment that would have produced a layer of dust far finer than confectioner's sugar. That dust, the Apollo crewmen found when they went out to play in it, did some strange things: it rose above the surface when disturbed and hung there far longer than could be explained by the moon's weak gravity; it crept deep into the weave and cracks of virtually anything it touched and clung there as if adhesively attached. What's more, it was filled with exquisitely fine green and orange glass beads products of the superheated melting and cooling that followed impacts. When the astronauts brought their samples home, geologists in Houston discovered even more. The soil was unusually chemically reactive not something that was expected from a scrap of a world that was supposed to be largely inert. And it did a lousy job of conducting heat. The surface of the moon on the sunlit side might be close to the boiling point of water, but just a few feet down it would be far below freezing. For 40 years, geologists struggled to understand just what gave lunar soil its pixie-dust properties. Geologist Marek Zbik of Queensland University of Technology in Brisbane, Australia, may finally have cracked it. The answer: nanoparticles vanishingly tiny flecks of mass, some no bigger than molecules, that have all the odd qualities of moondust and more. Zbik made his discovery thanks to an instrument known as a synchrotron-based nano tomograph a hunk of hardware that didn't remotely exist when the Apollo crews splashed down. Nano tomographs work by bombarding nanoparticles with X-rays to produce 3-D images of structures that otherwise would be far too tiny to see or at least to see well. When Zbik got some lunar soil and a nano tomograph in the same room together, he knew that the first thing he wanted to look at were the infinitesimal glass bubbles in the lunar material. The bubbles are formed the same way the larger glass beads are formed in the fiery heat of meteorite collisions but their exotic origins notwithstanding, they still ought to be built like any other bubble. That means they ought to be filled with some kind of gas. That, however, wasn't the case. "Instead of gas or vapor," says Zbik, "the lunar bubbles were filled with a highly porous network of alien-looking glassy particles that span the bubbles' interior." Alien-looking maybe, but Zbik quickly recognized them as nanoparticles and that would explain a lot. Nanoparticles can become electrostatically charged, which would impart the same property to the soil, perfectly accounting for its tendency to float. They have low thermal conductivity, explaining why the lunar subsoil can get so cold so close to the surface. They are chemically active, and they are also electrically sticky, meaning that when the soil got on an astronaut's pressure suit or into the joints of his lunar tools, it would be all but impossible to brush away. What was not immediately evident was why the nanoparticles had a chance to interact with the soil at all. The ones that were spotted by the tomograph, after all, were sealed inside the bubbles like a figurine in a snow globe. Something would have to be breaking those globes, and Zbik reckons it was the same thing that created them in the first place: collisions. "It appears that the nanoparticles are formed inside bubbles of molten rocks when meteorites hit the lunar surface," he says. "Then they are released when the glass bubbles are pulverized by the consequent bombardment of [more] meteorites. This continuous pulverizing ... and constant mixing develop a type of soil which is unknown on Earth." There's more than just abstruse soil science in all this. Nanoparticles have long been the It material for engineers working on new computer hardware, medical equipment, drug-delivery systems, even fabric. The better we understand their origins and properties, the better we can manipulate them. What's more, if we ever hope to establish a long-term human presence on the moon, the tendency of the soil to cling to surfaces and, ultimately, to wear them away is a problem that will have to be addressed. Studying the dust now can provide solutions for later. That, however, is for another time. For now it's enough just to appreciate the elegance of both the new discovery and the moon itself. Four decades after we last dropped by for a visit, our little satellite is still surprising us.
<urn:uuid:65e71147-f739-4215-a0d7-44fdeb1cb89f>
3.65625
1,063
Nonfiction Writing
Science & Tech.
48.625853
(Note: If you're already familiar with chemical potentials, you may be interested in this alternative thermodynamic explanation.) Two things happen when ice and water are placed in contact: Molecules on the surface of the ice escape into the water molecules of water are captured on the surface of the ice (freezing). When the rate of freezing is the same as the rate of melting, the amount of ice and the amount of water won't change on average (although there are short-term fluctuations at the surface of the ice). The ice and water are said to be in dynamic equilibrium with each other. The balance between freezing and melting can be maintained at 0°C, the melting point of water, unless conditions change in a way that favors one of the processes over the other. If you don't see the animation above, a nonanimated version is available; or you can download the free Flash plugin from Macromedia. The balance between freezing and melting processes can easily be upset. If the ice/water mixture is cooled, the molecules move slower. The slower-moving molecules are more easily captured by the ice, and freezing occurs at a greater rate than melting. You can see a demonstration of this by clicking on the temperature in the animation and setting it to a lower value (say, -10). Conversely, heating the mixture makes the molecules move faster on average, and melting is favored. Reset the animation and then enter a higher value for the temperature (say 10) and watch what happens. Adding salt to the system will also disrupt the equilibrium. Consider replacing some of the water molecules with molecules of some other substance. The foreign molecules dissolve in the water, but do not pack easily into the array of molecules in the solid. Try hitting the "Add Solute" button in the animation above. Notice that there are fewer water molecules on the liquid side because the some of the water has been replaced by salt. The total number of waters captured by the ice per second goes down, so the rate of freezing goes down. The rate of melting is unchanged by the presence of the foreign material, so melting occurs faster than That's why salt melts ice. To re-establish equilibrium, you must cool the ice-saltwater mixture to below the usual melting point of water. For example, the freezing point of a 1 M NaCl solution is roughly -3.4°C. Solutions will always have such a freezing point depression. The higher the concentration of salt, the greater the freezing point depression . But won't any foreign substance cause a freezing point depression, according to this model? Yes! For every mole of foreign particles dissolved in a kilogram of water, the freezing point goes down by roughly 1.7-1.9°C. Sugar, alcohol, or other salts will also lower the freezing point and melt the ice. Salt is used on roads and walkways because it is inexpensive and readily available. It is important to realize that freezing point depression occurs because the concentration of water molecules in a solution is less than the concentration in pure water. The nature of the solute doesn't matter. One might expect from the diagram above that solutes with large molecules are better at blocking water molecules travelling towards the surface of the ice. The hypothesis that solutes with large molecules cause a larger freezing point depression than those with smaller molecules is not in accord with experimental data! The misconception arises because the diagram can't be drawn to scale; the size of the molecules is very small compared to the distance between them. Phase map for salt water. Drawn from a diagram by R. E. Dickerson (Note 3) As ice begins to freeze out of the salt water, the fraction of water in the solution becomes lower and the freezing point drops further. This does not continue indefinitely, because eventually the solution will become saturated with salt. The lowest temperature possible for liquid salt solution is -21.1°C. At that temperature, the salt begins to crystallize out of solution (as NaCl·2 H2O), along with the ice, until the solution completely freezes. The frozen solution is a mixture of separate NaCl·2H2O crystals and ice crystals, not a homogeneous mixture of salt and water. This heterogeneous mixture is called a eutectic mixture. References and Notes Notice that when melting is complete, it can take a while for ice to begin to form again, even if the temperature is quite low. A a "seed crystal" of ice must form by chance collisions before crystal growth really begins. Real liquids can exist for some time below their normal melting points.
<urn:uuid:82456d7a-489d-4b94-8475-0c74c32781b4>
3.984375
964
Tutorial
Science & Tech.
45.151474
An Ancient Universe: How Astronomers Know the Vast Scale of Cosmic Time THE ANCIENT UNIVERSE b) The Age of the Oldest Stars Other stars may have different lifetimes. Stars smaller (less massive) than the Sun have longer lives because they fuse their hydrogen fuel so much more slowly. Similarly, a sub-compact car may have a smaller gas tank than a large SUV, but it may be able to drive much longer on a full tank of gas, because it uses its fuel much more slowly. When a star has used up the available hydrogen fuel in its center, it expands and becomes a "red giant". Once we have found such a giant star, we know that it has used up all its hydrogen. If we can estimate its initial mass, and hence its initial power, we can estimate its lifetime, and we therefore know its age. This is equivalent to saying that, if we see a car that has just run out of gas, and if we know its horsepower, fuel efficiency, and fuel capacity, we can figure out how long it had been driving since the last fill-up before it ran out of gas. In this way, we can measure the ages of certain stars. When we apply this method to the oldest stars we can find, we obtain ages of 10 - 15 billion years. << previous page | next page >> An Ancient Universe - Table of Contents Home | Introduction | The Universe: An Overview | The Process of Science | The Ancient Universe - The Age of the Expanding Universe - The Age of the Oldest Stars - The Age of Light From Distant Galaxies - The Age of the Chemical Elements | The Changing Universe - Changes in the Solar System - Changes in Stars - Changes in the Universe | Science and Religion | Resource Guide | Activities © Copyright 2001, American Astronomical Society. Permission to reproduce in its entirety for any non-profit, educational purpose is hereby granted. For all other uses contact the publisher: Astronomical Society of the Pacific, 390 Ashton Ave., San Francisco, CA 94112. back to Teachers' Newsletter Main Page
<urn:uuid:15e8cc19-578b-4f33-b726-1bf680aec84b>
3.75
433
Knowledge Article
Science & Tech.
41.11164
This September, Larry Crumpler, a research colleague at the New Mexico Museum of Natural History and Science, and I were able to fly in the back seats of two weight-shifting ultralight aircraft during a two-hour flight over the McCartys lava flow in central New Mexico. This flow is 3,000 years old and over 47 km (29 miles) long, one of the longest fresh lava flows in the continental United States. It has been the subject of on-going research by Larry, other colleagues, and me as part of my research grant funded by NASA through the Planetary Geology and Geophysics program. Larry made contact with the ultralight pilots through his museum in Albuquerque, and following some field work on the McCartys flow this past April, Larry and I were able to make the first ultralight flight over the lava flow. Pilots Jeff Gilkey and Paul Dressendorfer are very experienced ultralight pilots, both having flown hundreds of times over the many natural wonders that abound in New Mexico and neighboring states. The April flight convinced both Larry and I that ultralights could represent a wonderful platform from which to obtain low-altitude stereo photographs, which should show much more detail than could be obtained from either commercial aerial photographs or satellite images. For the September flight, I attached a Canon Eos Rebel digital camera to a monopole, with a remote trigger taped to the pole, plus two separate safety lines that attached the pole to me in a way that still allowed for easy movement. As we flew over the lava flow, the camera was held out from the side of the two-person open cockpit, oriented to point straight down. I was able to collect over 1,800 vertical photographs, including ones taken while following several GPS-specified lines to provide aerial coverage of places that we have investigated extensively on the ground. Meanwhile, Larry took photos from the second ultralight (for safety reasons, the pilots prefer to fly in pairs), providing context information of the mapping ultralight. A quick check of the vertical photos has confirmed the great scientific value contained within low-altitude, low-speed aerial photographs. The stereo photographs should provide many new insights about the McCartys lava flow during the coming months, and they will also be included in future proposals to support research of lava flows in the New Mexico area. Jim Zimbelman is a geologist in the Center for Earth and Planetary Studies at the National Air and Space Museum.
<urn:uuid:4e7f5492-bac5-458d-b994-b06c56311a08>
2.703125
508
Personal Blog
Science & Tech.
32.178526
Continuous flow - A method of analysis where sample material is moved through a series of conversion and purification steps within a continuously flowing stream of carrier gas, typically helium. In most cases, this method allows for a single measurement of a sample. Continuous flow interface - The carrier gas flow rate from a continous flow system is typically an order of magnitude higher than an isotope ratio mass spectrometer (IRMS) can handle. A reduction in flow immediately upstream the IRMS is achieved with a plumbing interface. Dual inlet - A method of analysis where a sample gas and and reference gas are alternately put into the isotope ratio mass spectrometer (IRMS). This method typically allows for many measurements of a single sample and is generally considered the most precise way to use an IRMS. Elemental analyzer - An instrument used to combust solid or liquid material in a controlled excess oxygen reaction column. A carrier gas (typically helium) is used to move the combustion products through a series of oxidation or reduction as well as purification steps. Light Stable Isotopes - Isotopes are elements with varying numbers of neutrons but identical numbers of protons and electrons. Some isotopes are unstable, or radioactive, with increasing numbers of neutrons. Elements with a relatively low mass are considered light. For example, carbon has a mass of 12 and is considered to be light. Lead, however, has a mass of 207 and would be considered heavy. In the environmental stable isotope community, light isotopes are generally considered to be hydrogen, carbon, nitrogen, oxygen, and sulfur. Mass Dependence - Multiple isotopes of the same species following a physical fractionation that is dependent on mass. For example, 17O is empirically different from 18O by about half. Symbols δ and Δ - These symbols are the lower case and capital greek letter delta. We use the lower case delta (δ) to indicate the ratio of heavy to light isotope relative to the same ratio of a standard. For example, the 13C to 12C ratio of some sample material relative to the 13C to 12C ratio of some internationally recongnized standard. The capital delta (Δ) is used to express the difference of one δ with that of another (for example Δ17O of an oxygen containing species expresses the difference between δ17O and δ18O).
<urn:uuid:c3b8a093-8f35-40ec-ada6-88dec392b4ac>
3.359375
495
Knowledge Article
Science & Tech.
28.445605
String objects have one unique built-in operation: the % operator (modulo) with a string left argument interprets this string as a C sprintf() format string to be applied to the right argument, and returns the string resulting from this formatting operation. The right argument should be a tuple with one item for each argument required by the format string; if the string requires a single argument, the right argument may also be a single non-tuple object. 2.4 The following format characters are understood: %, c, s, i, d, u, o, x, X, e, E, f, g, G. Width and precision may be a * to specify that an integer argument specifies the actual width or precision. The flag characters -, +, blank, # and 0 are understood. The size specifiers h, l or L may be present but are ignored. The %s conversion takes any Python object and converts it to a string using str() before formatting it. The ANSI features %p and %n are not supported. Since Python strings have an explicit length, %s conversions don't assume that '\0' is the end of the string. For safety reasons, floating point precisions are clipped to 50; %f conversions for numbers whose absolute value is over 1e25 are replaced by %g conversions. 2.5 All other errors raise exceptions. If the right argument is a dictionary (or any kind of mapping), then the formats in the string must have a parenthesized key into that dictionary inserted immediately after the "%" character, and each format formats the corresponding entry from the mapping. For example: >>> count = 2 >>> language = 'Python' >>> print'%(language)s has %(count)03d quote types.' % vars() Python has 002 quote types. In this case no * specifiers may occur in a format (since they require a sequential parameter list). Additional string operations are defined in standard module string and in built-in module re.
<urn:uuid:31b05556-69da-49a6-8e6d-41a9e851a9e1>
4.28125
419
Documentation
Software Dev.
56.076853
This is a book report Rachel wrote for Ms Moore’s 3rd grade class this year. Marie Curie changed the world through science. Marie and her husband discovered two new elements, polonium and radium. Marie and two other scientists won the Nobel Prize in physics. In 1910, she isolated radium in the form of a metal. Marie won a second Nobel Prize, this time in chemistry. Some of her main struggles were that her husband was killed in an accident right after they won the prize. Marie’s main struggle was that people treated her differently because she was a woman. Her accomplishments inspire me to work harder in science. She proves that women can be as good as men in science. One of my favorite quotes of hers is: “You cannot hope to build a better world without improving and, at the same time, share a general responsibility for all humanity, our particular duty being to aid those to whom we think can be most useful.” -Marie Curie Her research into radiation helped others discover the structure of the atom. Even though radiation is very dangerous, it helps save lives even today through X rays, cancer treatments, and creating electricity.
<urn:uuid:c74b54f3-6918-4b20-b1c8-5c78fbe4c75a>
3.84375
245
Personal Blog
Science & Tech.
55.61318
Managing plant populations in fragmented landscapes: restoration or gardening? (Review) Hobbs, R.J. (2007) Managing plant populations in fragmented landscapes: restoration or gardening? (Review). Australian Journal of Botany, 55 (3). pp. 371-374. *Subscription may be required Ecosystem fragmentation results in major changes in several environmental and biotic parameters that affect the ability of plant populations to persist. All stages of the plant life cycle may be influenced in either negative or positive ways by the changed biophysical settings caused by fragmentation and associated changes in the surrounding landscape. This may result in plant populations being lost or significantly reduced from patches of native vegetation, leading to the need for active management intervention. This intervention may include management of threatening processes, reversal of ecosystem degradation, or the reintroduction of plants of species that have been lost from an area. These management actions range from preventative management through to active restoration. In the present paper I explore the question of whether there is a limit to the degree of intervention that is desirable in conservation terms, beyond which we are no longer conserving but rather cultivating and gardening, i.e. creating an artificial and potentially unsustainable system. I discuss this question in relation to management of remnant vegetation in urban and agricultural settings and suggest that a careful mix of species-based and process-based management is required for us to succeed in the goal of biodiversity conservation in fragmented landscapes. |Publication Type:||Journal Article| |Murdoch Affiliation:||School of Environmental Science| |Copyright:||© CSIRO 2007.| |Item Control Page|
<urn:uuid:37a8a71b-57e4-4949-8e94-e124f671a671>
2.90625
331
Academic Writing
Science & Tech.
22.538507
UPDATE 11/2: At the Washington Post @bradplumer follows up this post with his own, providing a nice summary of the issue. He concludes: "aggressive steps to cut emissions could reduce the amount of sea-level rise by somewhere between 6 and 20 inches in 2100, compared with our current trajectory" -- which is just about exactly where I came out in the dicussions with him and several others (thanks JG), 10 inches +/- 10 inches. UPDATE: Via Twitter @bradplumer points me to a newer paper that suggests perhaps 7 inches is the difference in seal level rise to 2100 between the highest and lowest RCP scenarios. It is not apples to apples with the number presented below, but still a very small number. And another paper here, with perhaps 10 inches between RCP scenarios, a number lower than the projection uncertainties. One of the more reasonable discussion points to emerge from efforts to link Hurricane Sandy to the need to reduce carbon dioxide emissions focuses on the role that future sea level rise will have on making storm impacts worse. Logically, it would seem that if we can "halt the rise of the seas" then this would reduce future impacts from extreme events like Sandy. The science of sea level rise, however, tells us that to 2100 (at least) our ability to halt he rise of the seas is extremely limited, even under an (unrealistically) aggressive scenario of emissions reduction. Several years ago, in a GRL paper titled "How much climate change can be avoided by mitigation?" Warren Washington and colleagues asked how much impact aggressive mitigation would have on the climate system. Specifically, they looked at a set of climate model runs assuming stabilization of carbon dioxide at 450 ppm. Here is what they concluded for sea level rise: [A]bout 8 cm of the sea level rise that would otherwise occur without mitigation would be averted. However, by the end of the century the sea level rise continues to increase and does not stabilize in both scenarios due to climate change commitment involving the thermal inertia of the oceans ...Eight cm is about three inches. Three inches. Then sea level rise continues for centuries. Though it seems logical to call for emissions reductions as a way to arrest sea level rise to reduce the impacts of hurricanes, recent research suggests that our ability to halt the rise of the seas is extremely limited. With respect to hurricanes, we have little option but to adapt, and improved adaptation makes good sense. Efforts to use future hurricane damages to justify emissions reductions just don't make much sense. Fortunately, there are far better reasons to focus on emissions reductions than hurricanes. Postscript: This post was inspired by Michael Levi's discussion here. Thanks!
<urn:uuid:a1167fc8-3d7d-43c0-a9c9-a15c1abc67d7>
2.90625
553
Personal Blog
Science & Tech.
42.633912
Authors: Charles B. Leffert ABSTRACT The success of quantum theory shows that the Universe is much more complicated than most have supposed. How did our universe get started? What is energy? What is gravity? The late Richard Feynman in Volume 1 of his “Lectures on Physics” said that no one had come up with the machinery of either energy or gravity. However the machinery of both has been presented by the author in recent issues of this viXar archive and other publications. For the machinery of the expansion of the universe, a complete spatial condensation theory, with no free parameters, has been under development for the past 25 years. In that development, one important contact has been made with quantum theory; the expansion theory predicts exactly the same value of vacuum energy as quantum theory, a factor of 10123 greater than Einstein’s mass energy, Mc2. The new concepts, such as a fourth spatial dimension, and our ordinary space of three spatial dimensions as the surface of a four-dimensional ball, indicate that there are even more complexities needed to accomplish unification with quantum theory. Present physics uses a symmetric time and yet we know that from our subjective concepts of past-present-future that there is also an “Arrow” of time. This conundrum was solved by two different productions of space operating under two different times. Our universe started under the first radiation-dominated era with the arrow of time producing four-dimensional space. Then as radiation cooled and four-dimensional space continued, geometric production of our three-dimensional space with symmetric time increased in the matter-dominated era. Some additional plots of important parameters are presented as well as a new agreement with measurements of passive separation of galaxies. But in general the aim of this paper is to alert the reader to the background vision of a greater epi-universe as the source of quantum interaction with the present mass of matter and the vision of how our universe came to be. Comments: 17 Pages, 9 Figures [v1] 2012-06-26 15:32:20 Unique-IP document downloads: 35 times Add your own feedback and questions here:
<urn:uuid:3b92e652-494c-460a-ba14-d21992da079a>
2.984375
444
Academic Writing
Science & Tech.
36.4175
With the DOM, you can access every node in an XML document. Access a node using its index number in a node list This example uses the getElementsByTagname() method to get the third <title> element in "books.xml" Loop through nodes using the length property This example uses the length property to loop through all <title> elements in "books.xml" See the node type of an element This example uses the nodeType property to get node type of the root element in "books.xml". Loop through element nodes This example uses the nodeType property to only process element nodes in "books.xml". element nodes using node relationships This example uses the nodeType property and the nextSibling property to process element nodes in "books.xml". You can access a node in three ways: 1. By using the getElementsByTagName() method 2. By looping through (traversing) the nodes tree. 3. By navigating the node tree, using the node relationships. getElementsByTagName() returns all elements with a specified tag name. The following example returns all <title> elements under the x element: Note that the example above only returns <title> elements under the x node. To return all <title> elements in the XML document use: where xmlDoc is the document itself (document node). The getElementsByTagName() method returns a node list. A node list is an array of nodes. The <title> elements in x can be accessed by index number. To access the third <title> you can write:: Note: The index starts at 0. You will learn more about node lists in a later chapter of this tutorial. The length property defines the length of a node list (the number of nodes). You can loop through a node list by using the length property: The documentElement property of the XML document is the root node. The nodeName property of a node is the name of the node. The nodeType property of a node is the type of the node. You will learn more about the node properties in the next chapter of this tutorial. The following code loops through the child nodes, that are also element nodes, of the root node: The following code navigates the node tree using the node relationships: Your message has been sent to W3Schools.
<urn:uuid:3c8fb642-cdb0-4f9c-957a-8609040f31f1>
3.46875
508
Documentation
Software Dev.
50.985484
The most used definition of electronegativity is that an element's electronegativity is the power of an atom when in a molecule to attract electron density to itself. The electronegativity depends upon a number of factors and in particuler as the other atoms in the molecule. The first scale of electronegativity was developed by Linus Pauling and on his scale iodine has a value of 2.66 on a scale running from from about 0.7 (an estimate for francium) to 2.20 (for hydrogen) to 3.98 (fluorine). Electronegativity has no units but "Pauling units" are often used when indicating values mapped on to the Pauling scale. On the interactive plot below you may find the "Ball chart" and "Shaded table" styles most useful. There are a number of ways to produce a set of numbers representing electronegativity and five are given in the table above. The Pauling scale is perhaps the most famous and suffices for many purposes. WebElements now has a WebElements shop at which you can buy periodic table posters, mugs, T-shirts, games, molecular models, and more.
<urn:uuid:9b47acc3-a1ff-4811-98d9-c53686607c66>
3.90625
248
Knowledge Article
Science & Tech.
39.439734
Science Fair Project Encyclopedia Pleocyemata is a sub-order of decapod crustaceans, erected by Martin Burkenroad in 1963. Burkenroad's classification replaced the earlier sub-orders of Natantia and Reptantia with the monophyletic groups Dendrobranchiata (prawns) and Pleocyemata. Pleocyemata contains all the members of the Reptantia (which is still used, but at a lower rank), as well as the Stenopodidea (which contains the so-called "boxer shrimp" or "barber-pole shrimp"), and Caridea, which contains all the true shrimp. These taxa are united by a number of features, the most important of which is that the eggs are incubated by the female, and remain stuck to the pleopods (swimming legs) until they are ready to hatch. It is this characteristic that gives the group its name. Reference - BURKENROAD, M. D. (1963): The evolution of the Eucarida (Crustacea,Eumalacostraca), in relation to the fossil record. Tulane Studies in Geology, 2 (1): 1-17. The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details
<urn:uuid:132685e8-cf44-452f-9d2c-e2f2208b7e97>
3.40625
290
Knowledge Article
Science & Tech.
44.707991
Analytical chemistry is concerned with the measurement of the chemical compostion of unknown substances using exisitng instrumental techniques, and the development or application of new techniques and instruments. Anytime a measurement is made with an instrument, there is an error, a deviation from the true value, ineherent in that measurement. Instrumental analysis is very important in all areas of analytical chemistry. Modern analytical chemistry is a quantitative science, meaning that the desired result is almost always numeric. We need to know there is 55 μg of mercury in a sample of water, or 20 mM glucose in a blood sample. Quantitative results for analytical chemistry are obtained using devices or instruments that allow us to determine the concentration of a chemical in a sample from an observable signal. One of the most important techniques is analytical chemistry is the preparation of the calibration curve, which is an equation relating a signal measured from an instrument to the concentration of a substance in the sample that is being tested. You determine the calibration curve by measuring samples with known concentrations of the substance, to see how the instrument behaves. Then, you use a statistical technique called Linear Regression to generate the curve and determine its uncertainty. Another important part of statistical analysis in chemistry is the comparison of sets of data, to make conclusions about the validity of our measurements and the confidence with which we can make these conclusions. These are called tests of siginificance, and can give tell us the degree of uncertainty of a measurement. The three tests we cover in this tutorial are the Q-test for rejecting outliers, the t-test for comparing means, and the F-test for comparing precisions. This section presents concepts in statistics that form the basis for the Linear Regression and Data Comparison learned in future sections. We begin by covering measures that form the basis of any statistical analysis mean, variance and standard deviation, and we talk about the differences between population and sample mean, variance and standard deviations. Then, we discuss error and residuals, which are other important measures to describe data. The second half of this section introduces some important statistical concepts, such as Probability Distributions, Confidence Levels, and Degrees of Freedom. These are not essential for this section, but will become important when we discuss Linear Regression and Data Comparison. Finally, we present a brief introduction to statitsical hypotheses and Type I and Type II errors.
<urn:uuid:d3288014-8756-472d-87d8-73378cb85123>
3.75
483
Tutorial
Science & Tech.
20.570477
NASA’s Dawn spacecraft obtained this image with its framing camera on August 12, 2011. This image was taken through the framing camera’s clear filter. The image has a resolution of about 260 meters per pixel. The Dawn mission to Vesta and Ceres is managed by the Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Science Mission Directorate, Washington, D.C. It is a project of the Discovery Program managed by NASA's Marshall Space Flight Center, Huntsville, Ala. UCLA is responsible for overall Dawn mission science. Orbital Sciences Corporation of Dulles, Va., designed and built the Dawn spacecraft. The framing cameras were developed and built under the leadership of the Max Planck Institute for Solar System Research, Katlenburg-Lindau, Germany, with significant contributions by the German Aerospace Center (DLR) Institute of Planetary Research, Berlin, and in coordination with the Institute of Computer and Communication Network Engineering, Braunschweig. The framing camera project is funded by NASA, the Max Planck Society and DLR. JPL is a division of the California Institute of Technology in Pasadena. More information about Dawn is online at http://www.nasa.gov/dawn and http://dawn.jpl.nasa.gov. Image credit: NASA/JPL-Caltech/UCLA/M PS/DLR/IDA
<urn:uuid:0cf44b33-0b32-4fa6-b283-f045e3964d4b>
3.140625
281
Knowledge Article
Science & Tech.
45.544221
By yamalaris on Tuesday, July 22, 2008 - 02:52 pm: Edit Post I understand that northern lights are the result of activity from the surface of our Sun, is there a time of the year this activity is more prevalent? Is there a way to predict or forecast northern lights activity? Thank you in advance for your answer. By admin on Tuesday, July 22, 2008 - 06:19 pm: Edit Post To the best of my knowledge there is no time of the year when the aurora activity is more active. However, if you plan to view them from the Keweenaw, then summer is the season. Less clouds. By frnash on Tuesday, July 22, 2008 - 10:16 pm: Edit Post This site might be of interest: Michigan Tech's Aurora Page (includes links to Aurora forecasts.) By yamalaris on Wednesday, July 23, 2008 - 10:50 am: Edit Post I wasn't sure if you had any knowledge in this area or not. frnash I appreciate the link isn't the internet amazing. Have a great day gentlemen. By myq on Thursday, July 24, 2008 - 05:40 pm: Edit Post Scientists learn what makes Northern Lights flare WASHINGTON, July 24 (Reuters) - The multicolored aurora borealis and aurora australis -- the Northern Lights and Southern Lights -- represent some of Earth's most dazzling Now scientists using data from five NASA satellites have learned what causes frequent auroral flare-ups that make this green, red and purple lightshow that shimmers above Earth's northernmost and southernmost regions even more spectacular. Writing in the journal Science, the scientists said on Thursday that explosions of magnetic energy occurring a third of the way between Earth and the moon drive the sudden brightening of the Northern Lights and Southern Lights. There had been debate among scientists dating back decades about what triggers these auroral flare-ups. The findings from the THEMIS satellites and a network of 20 ground observatories in Canada and Alaska confirmed that it is due to a process called "magnetic reconnection." THEMIS stands for Time History of Events and Macroscale Interactions during Auroral displays are associated with the solar wind -- electrically charged particles continuously spewing outward from the sun. Earth's magnetic field lines reach far out into space as they store energy from the solar wind. The researchers said that as two magnetic field lines come close together due to the storage of energy from the sun, a critical limit is reached and the lines reconnect, causing magnetic energy to be turned into kinetic energy and heat. The release of this energy sparks the auroral flare-ups. "We showed that the process begins far from Earth first and propagates Earthward later," said Vassilis Angelopoulos of the University of California at Los Angeles, who led the research. The moon is located about 240,000 miles (385,000 km) from Earth, and this process is occurring roughly 80,000 miles (128,000 km) from Earth. The same mechanism causing the auroral brightening also can cause problems for satellites, power grids and communications systems on Earth and could endanger astronauts in space, the By frnash on Thursday, July 24, 2008 - 05:49 pm: Edit Post Nice catch, thanks for posting that article!
<urn:uuid:85e61ffc-e571-4029-b042-1d05e7d29559>
3.09375
729
Comment Section
Science & Tech.
47.356106
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ ° You are not logged in. Post a reply Topic review (newest first) Kaboobly doo! Oh wait, I liked A too. Brilliant answer then. It turned out to be .217 as a correct answer Yes, you are right. If shape A is transformed into B using a scale factor the correct way to compute the sf is to calculate B/A. What did they say the answer was? It really should be A. me 2 but I got D as incorrect I like D. 41.4 divided by 9 gives be 4.6 and when I check it it is correct why do they give us such weird decimal confusing numbers it confuses me and it gets annoying I answered 5-D 8-D 9-E 10-B and they were all incorrect So I revised it and here is what I got wondering if it is correct 28.7 x 0.244= 7.0028 so I ignore the 0028? right A = 7, 7, 21, 23 B = 28.7, 28.7, 86.1, 94.3 It just depends on whether you do (A divided by B) or (B divided by A). You went backwards on that calculation with the calculator? Find the scale factor of each if the polygons are proportional
<urn:uuid:49dd6808-3ec2-4601-8db5-4d27db347d23>
2.953125
334
Comment Section
Science & Tech.
98.055277
MessageToEagle.com - A vast structure of satellite galaxies and clusters of stars surrounding our Galaxy, stretching out across a million light years has been discovered by astronomers from the University of Bonn in Germany . The work challenges the existence of dark matter, part of the standard model for the evolution of the universe. PhD student and lead author Marcel Pawlowski reports the team’s findings in a paper in the journal Monthly Notices of the Royal Astronomical Society. The Milky Way, the galaxy we live in, consists of around three hundred thousand million stars as well as large amounts of gas and dust arranged with arms in a flat disk that wind out from a central bar. The diameter of the main part of the Milky Way is about 100,000 light years, meaning that a beam of light takes 100,000 years to travel across it. A number of smaller satellite galaxies and spherical clusters of stars (so-called globular clusters) orbit at various distances from the main Galaxy. Conventional models for the origin and evolution of the universe (cosmology) are based on the presence of ‘dark matter’, invisible material thought to make up about 23% of the content of the cosmos that has never been detected directly. In this model, the Milky Way is predicted to have far more satellite galaxies than are actually seen. In their effort to understand exactly what surrounds our Galaxy, the scientists used a range of sources from twentieth century photographic plates to images from the robotic telescope of the Sloan Deep Sky Survey. Using all these data they assembled a picture that includes bright ‘classical’ satellite galaxies, more recently detected fainter satellites and the younger globular clusters. “Once we had completed our analysis, a new picture of our cosmic neighbourhood emerged”, says Pawlowski. The astronomers found that all the different objects are distributed in a plane at right angles to the galactic disk. The newly-discovered structure is huge, extending from as close as 33,000 light years to as far away as one million light years from the centre of the Galaxy. VV 340 - Two perpendicular galaxies going to collide. Image credit: NASA, ESA, Hubble Team member Pavel Kroupa, professor for astronomy at the University of Bonn, adds “We were baffled by how well the distributions of the different types of objects agreed with each other”. As the different companions move around the Milky Way, they lose material, stars and sometimes gas, which forms long streams along their paths. The new results show that this lost material is aligned with the plane of galaxies and clusters too. “This illustrates that the objects are not only situated within this plane right now, but that they move within it”, says Pawlowski. “The structure is stable.” The various dark matter models struggle to explain this arrangement. “In the standard theories, the satellite galaxies would have formed as individual objects before being captured by the Milky Way”, explains Kroupa. “As they would have come from many directions, it is next to impossible for them to end up distributed in such a thin plane structure.” Postdoctoral researcher and team member Jan Pflamm-Altenburg suggests an alternative explanation. “The satellite galaxies and clusters must have formed together in one major event, a collision of two galaxies.” Such collisions are relatively common and lead to large chunks of galaxies being torn out due to gravitational and tidal forces acting on the stars, gas and dust they contain, forming tails that are the birthplaces of new objects like star clusters and dwarf galaxies. Arp 87 - A galaxy gets torn apart in a collision. Image credit: NASA, ESA, Hubble Kroupa concludes by highlighting the wider significance of the new work. “Our model appears to rule out the presence of dark matter in the universe, threatening a central pillar of current cosmological theory. We see this as the beginning of a paradigm shift, one that will ultimately lead us to a new understanding of the universe we inhabit.” @ MessageToEagle.com via RAS Mysteries Of The Sun Explained In Video Are you curious about the Sun? You now have an excellent chance to learn everything you ever wanted, and even more about our Sun and all its mysteries. Five new videos called "Mysteries of the Sun" have been just released by NASA. The videos describe the science of the sun and its effects on the solar system and Earth. Latest Spectacular Solar Flare Will Hit STEREO-B Spacecraft, Spitzer And Curiosity The Sun continues to show its more violent side. A spectacular solar flare erupted from the Sun's northeastern limb yesterday, sending an beautiful arcing jet of super-heated plasma blasting off into space. The explosion, captured by Nasa's Solar Dynamics Observatory at about 5.45pm yesterday evening, was one of the most beautiful seen in years. Something New Spotted On The Sun One day in the fall of 2011, Neil Sheeley, a solar scientist at the Naval Research Laboratory in Washington, D.C., did what he always does – look through the daily images of the sun from NASA's Solar Dynamics Observatory (SDO). But on this day he saw something he'd never noticed before: a pattern of cells with bright centers and dark boundaries occurring in the sun's atmosphere, the corona... Hidden Misshapen Celestial "Wonder" It is one of the brightest and strangest objects in the Milky Way - the corpse of a star that exploded around 1000 years ago. Only a handful of such young supernova remnants are known. The object named G350.1-0.3 is also incredibly small (only eight light years across) and young in astronomical terms. Mercury Surprises Scientists On March 17, MESSENGER (MErcury Surface, Space Environment, GEochemistry, and Ranging) completed its one-year primary mission, orbiting Mercury, capturing nearly 100,000 images, and recording data that reveals new information about the planet's core, topography, and the mysterious radar bright material in the permanently shadowed areas near the poles. Living Earth Simulator - Supercomputer Predicting The Future In Douglas Adams book the Hitchhikers Guide to the Galaxy we encounter a machine called Deep Thought. It is the most powerful computer ever built. Deep Thought is capable of answering questions concerning life, the Universe, and simply everything. Now scientists are planning to create a similar machine. It is called the Living Earth Simulator (LES).
<urn:uuid:f1a1789d-f464-4a0b-ba87-7220382dba11>
3.65625
1,394
Content Listing
Science & Tech.
42.685142
Designing a JSF Page The first thing you want to do after getting the page open in the designer is to expand the palette on the right hand side of the design by clicking the left arrow on the top of it: The palette will read the tag libraries out of your build path and load them so you can utilize the Drag and Drop interactions with building your web pages. Some of the more important tag groups to be aware of are as There are the most common sets of tags you will likely be using while designing your pages. Let's use these items to Drag and Drop a login form onto our new page. First we drag a JSF HTML form onto the page and update the page text a bit: Now we want to layout a typical login form with a username, password and login buttons. To lay these items out nicely, it sounds like it would be a 3x2 table, but if we want to include space for login error messages, we would likely want a 3x3 table. In JSF there is a component that will layout it's contents in a table automatically for us, it's called a panelGrid. Let's go ahead and drag a panelGrid into our form and be sure to set the columns value to 3: The first thing to notice is that when the panelGrid is added, the designer automatically adds 4 sample components to it, to get an idea of how output will work. For this tutorial, we are going to place the following components in the panelGrid in the given order: - outputText: "Username:" label - inputText: username text field (ID=username) - message: display username error messages (FOR=username) - outputText: "Password:" label - inputSecret: password text field (ID=password) - message: display password error messages (FOR=password) So we haven't added the buttons yet, but at this point our form is done and properly laid out and looks like this: It's important to note that the message components will only be rendered when they have messages to display. Now let's add our buttons. Given the layout of our login form, we may want to left-align the buttons under the input boxes to make the form look nice. If we simply place a single button in each cell (1 under Password, 1 under inputSecret) they are going to be unevenly spaced. However, the way the panelGrid works is to take components added directly to it, and lay them out in a table, cell-by-cell. To be able to group our two buttons together, and place them under the input fields, we will need to use a panelGroup. The first thing we need to do is add an empty component (we used an empty outputText) to the panelGrid, so it places it under the Password label. Second, we need to add a panelGroup, so it places it under the input fields. Then inside the panelGroup, we will add our two buttons. The result looks like this: Once that panelGroup is added, we don't have to add another component to be placed under the existing message components. The panelGrid will keep everything laid out correctly. Now the design portion of our page is done and has given you a good idea of how the designer works. Of course, if you were building a real JSF application, you would need to step back into the page, and using the designer, assign action handlers to the buttons, and value bindings to the input fields to make sure your managed bean was correctly backing the values on this page. These steps are outside the realm of this tutorial, which was simply intended to introduce you to the new Visual JSF Designer. If you'd like to see how to design a real JSF application, please check out the JSF Tutorial in the MyEclipse help documents or available from the MyEclipse site.
<urn:uuid:cdb624d1-7490-4771-b36d-70df08f4b5a9>
2.6875
820
Tutorial
Software Dev.
48.862918
Same Genes, Different Doses Distant DNA controls gene activity Context: Even in cases where two people share the same gene, they can produce widely differing amounts of the protein the gene codes for. This can lead to differences in physical characteristics, and it can also mean the difference between sickness and health. Segments of DNA called regulatory elements are one factor controlling how much of a particular protein the body produces. While researchers today can use algorithms to pick out genes from sequences of DNA, they have previously been unable to accurately distinguish regulatory elements from other non-coding DNA, let alone match those elements with the genes that they regulate. Researchers at the University of Pennsylvania, led by Vivian Cheung, have found a way to do just that. Methods and Results: Using white blood cells from 94 people, the researchers identified more than 3,500 genes whose expression was similar among relatives but varied widely among people who were unrelated. These patterns of expression were then correlated with patterns of known genetic markers across the genome. Hundreds of genes’ expression was linked to particular genetic markers – far more than the number predicted by chance. About four-fifths of these markers were located more than 5,000 base pairs from the genes that they regulated; many were even on other chromosomes. Researchers found that some “hot spot” regions apparently influence the expression of more than 30 genes. In addition, many genes seem to be regulated by more than one region. Why it matters: Researchers can finally study the genetic differences governing gene expression. The hot spots, which Cheung’s team calls “master regulators,” will help to tease out some of the mysteries that surround gene expression. More immediately, the techniques may allow researchers to use variation within genes and within regulatory elements to understand and treat disease. For years, geneticists have scoured the human genome for genes that contribute to complex traits, like susceptibility to depression or heart disease. Finding factors that control the genes is just as important but much more difficult. Now scientists should be better equipped to find the genetic variations that make a difference in matters of life and death. Source: Morley, M. et al. (2004) Genetic analysis of genome-wide variation in human gene expression. Nature 430:743-7.
<urn:uuid:6ee3ad16-fe67-47e1-b25b-f2f9101d87a0>
4.0625
465
Truncated
Science & Tech.
32.734375
(diff) ←Older revision | Current revision | Newer revision→ (diff) Dragon Breed Data Origin Of Name? There are several Australian butterflies with "Xenica" in their common names from the parent genera Geitoneura, Oreixenica, and Nesoxenica. The first named variety was the Ringed Xenica in 1805, with marbled orange/black wings marked with white-centered black eyespots. In Temeraire's world, perhaps the Australian butterfly was named for a similar appearance to the British dragon. Conversely, perhaps Novik named the dragon after the butterfly.
<urn:uuid:42c80edc-ca34-4282-85c2-0ebea99f1d2b>
2.9375
130
Knowledge Article
Science & Tech.
24.325333
Are humans the only animals that keep livestock? If the best guess of biologists proves toe be true, the answer is a surprising ‘no.’ We already know that ants practice a primitive form of agriculture - collecting leaf fragments to grow tasty fungus - and even cultivate aphids in order to ‘milk’ them of their honeydew, as seen in the above picture. However, an amazing discovery could mean that ants raise other insects for meat in a manner directly analogous to humans raising cattle. Melissotarsus ants share their colonies with ‘scale insects’ that neither secrete milk nor have an edible outer covering. Therefore, scientists suggest that the ants raise the scale insects explicitly in order to eat them, potentially the best example of true domestication outside of humans and crops. The ants are highly secretive, so the carnivorous activity hasn’t been directly observed yet. Even still, this finding offers a tantalizing example of the amazing spectrum of nature’s animal behavior. This is totally happening on a tree in my backyard. I have ant farmers!
<urn:uuid:2434062e-7daf-4ff6-a35c-19c60d2fa609>
3.140625
223
Personal Blog
Science & Tech.
36.991241
C# String Theory—String intern pool |Visual C# Tutorials| String intern pool The string intern pool is a table that contains a single reference to each unique literal string declared or created programmatically in your application. The Common Language Runtime (CLR) uses the intern pool to minimize string storage requirements. As a result, an instance of a literal string with a particular value only exists once in the system. For example, if you assign the same literal string to several different variables, at runtime, the CLR retrieves the unique reference to that literal string from the intern pool and assigns it to each variable. String.Intern (string) method searches the intern pool for a string equal to the specified value. If such a string exists, its reference in the intern pool is returned. Otherwise, a reference to the specified string is added to the intern pool and that reference is returned. In the following example, the string, declared with a value of "Intern pool" is interned; because, it is a string literal. The string built is a new string object with the same value as declared but generated by the System.Text.StringBuilder class. The Intern method searches for a string with the same value as built. Since the string already exists in the intern pool, the method returns the same reference that is assigned to declared and assigns that reference to built compare unequal because they refer to different objects, while references interned compare equal because they refer to the same string. String declared = "Intern pool"; String built = new StringBuilder().Append("Intern ") .Append("pool").ToString(); String interned = String.Intern(built); Console.WriteLine ((Object)built==(Object)declared); // different references Console.WriteLine ((Object)interned==(Object)declared); // same reference When trying to reduce the total memory allocated by your application, remember that interning has two unfortunate side effects. Firstly, the memory allocated for interned String objects is unlikely to be released until the CLR terminates: the CLR's references to interned String objects may persist after your application or application domain terminates. Secondly, to intern a string, a string must first be created. Thus, despite the fact that the memory will eventually be garbage collected, the memory used by the String object will still be allocated.
<urn:uuid:4626dbfd-8284-45cb-aff4-fb72a20610ba>
3.046875
498
Documentation
Software Dev.
39.009361
Photograph by Gabi Moisa/Shutterstock Washing machines are second only to toilets as the largest water users in the home, accounting for 14 percent of household water use. Household water consumption has a significant impact on aquatic life, especially when water supplies come from freshwater lakes and streams. The Rio Grande, recently named one of the World Wildlife Fund's Top 10 Rivers at Risk, has been so overextracted that saltwater from the Gulf of Mexico has begun moving upstream and endangering native species. So far, 32 of the river's 121 native species have been displaced as a result of increased salinity. Just like the Rio Grande, city water supplies are succumbing to saltwater intrusion, which occurs when increased pumping of groundwater allows saltwater pools to infiltrate freshwater supplies, making water unfit for human use. In response, cities are installing energy-intensive desalination plants, which require more fossil-fuel-derived power that, in turn, contributes to global warming. To date, desalination plants can be found in a few states and several countries. Keeping washing machines running also requires a great deal of fossil-fuel-supplied energy that, in turn, emits about 160 pounds of the greenhouse gas carbon dioxide (CO₂) per year per machine. Just supplying the water for washing machines consumes a considerable amount of energy. In total, water supply and treatment facilities use about 50 billion kilowatt-hours per year. If 1 out of every 100 U.S. homes switched to water-efficient appliances, the energy savings could reach 100 million kWh per year and reduce greenhouse gas emissions by 75,000 tons. According to recent U.S. Environmental Protection Agency statistics, at least 36 states are anticipating local, regional or statewide water shortages by 2013. Out West, water conflicts have raged for decades, mainly among farmers, who need water for their crops, and city water consumers. Cities are gradually taking more water, which could mean a long-term struggle for small farmers. More Buying Guides Green Living Video What's Your Water Footprint? Photographer David Doubilet photographs stingrays, sharks, and more. The Great Energy Challenge An initiative to help you understand our current energy situation. See how you measure up against others, and how changes at home could do tons to protect the planet. Special Ad Section The World's Water NG's new Change the Course campaign launches. When individuals pledge to use less water in their own lives, our partners carry out restoration work in the Colorado River Basin. A special series on how grabbing water from poor people and future generations threatens global food security, environmental sustainability, and local cultures.
<urn:uuid:ec5c6bfa-b3b8-4e35-bcb6-0f3b23a5f8c7>
3.078125
548
Knowledge Article
Science & Tech.
39.44566
When the debugger is entered, it displays the previously selected buffer in one window and a buffer named ‘*Backtrace*’ in another window. The backtrace buffer contains one line for each level of Lisp function execution currently going on. At the beginning of this buffer is a message describing the reason that the debugger was invoked (such as the error message and associated data, if it was invoked due to an error). The backtrace buffer is read-only and uses a special major mode, Debugger mode, in which letters are defined as debugger commands. The usual Emacs editing commands are available; thus, you can switch windows to examine the buffer that was being edited at the time of the error, switch buffers, visit files, or do any other sort of editing. However, the debugger is a recursive editing level (see Recursive Editing) and it is wise to go back to the backtrace buffer and exit the debugger (with the q command) when you are finished with it. Exiting the debugger gets out of the recursive edit and kills the backtrace buffer. The backtrace buffer shows you the functions that are executing and their argument values. It also allows you to specify a stack frame by moving point to the line describing that frame. (A stack frame is the place where the Lisp interpreter records information about a particular invocation of a function.) The frame whose line point is on is considered the current frame. Some of the debugger commands operate on the current frame. If a line starts with a star, that means that exiting that frame will call the debugger again. This is useful for examining the return value of a function. If a function name is underlined, that means the debugger knows where its source code is located. You can click Mouse-2 on that name, or move to it and type <RET>, to visit the source code. The debugger itself must be run byte-compiled, since it makes assumptions about how many stack frames are used for the debugger itself. These assumptions are false if the debugger is running interpreted.blog comments powered by Disqus
<urn:uuid:32df05f2-ddfc-4b78-a4e8-7d7e0feb4c39>
2.96875
422
Documentation
Software Dev.
41.675595
Bending the Light fantastic Lasers make a bright beam of light. Shine the beam through a large page-magnifier fesnel lens to show how the lens bends light to create images. A small low power laser such as a laser pointer. A page magnifier fresnel lens cardboard with a white side to use as a screen steel support, such as a dead lantern battery, Straight edge or straight sided piece of wood. Mount the fresnel lens on a table, clip two large binder clips to the bottom of the lens, then place the lens so that it is perpendicular to the table. Mount the laser to a steel object with binder clips and magnets. See Magnetic Optical Mount the laser so that its beam is parallel to the floor and shines through the center of the fresnel lens. Lasermagnetically attached to steel block with beam passing through the center of the fresnel lens Tape down a straightedge or a piece of wood to the table parallel to the fresnel lens. To Do and Notice / What's Going On? Parallel beams shining into the lens come together at the focal point. Slide the block of wood with the laser along the straight edge so that the laser goes through the fresnel lens. The point where the laser hits the lens will slide back and forth along a line parallel to the floor. Use a white screen to observe the behavior of the laser beam on the opposite side of the lens. Place the white screen very close to the lens. Slide the laser left and right. Notice that the spot of light on the screen also moves left and right. The farther from the lens the screen is placed the less the beam moves side to side until, at one distance, the beam moves very little. This point is the focal point of the lens. Where parallel beams of light hitting the lens at different positions are bent so that they all come together at one point. Mark this point on a table using masking tape. Beyond the focal point the beam moves opposite the motion of the laser. Slide the laser to the side. The beam always bends toward the center of the lens. Put the laser beam into the center of the lens. Locate the beam on the far side of the lens. The beam though the center goes in a straight line, it is not bent. The line perpendicular to the lens and through its center is the axis of the lens. Move the laser to one side of center. Notice that the laser beam bends toward the axis of the lens. Move the laser further to the side and it bends more. In fact, to make a beam that comes together as a focal point, the bending of the beam must be proportional to the distance from the center. (This proportionality is true for small bending angles otherwise the angle is the arctangent of the displacement divided by the focal length.) Remove the straightedge. Beams radiating from the focal point exit the lens parallel to each other. Mount the block with the laser on it so that the center of the laser rotates about the focal point of the lens. (For example, pivot the laser block on a magnet stuck on the top of a steel battery case turned sideways.) Notice that as the laser is rotated, the beam comes out from a point just as light does when it radiates from a normal bulb. However the laser allows us to examine one ray of light at a time. On the far side of the lens the beam moves back and forth the same amount independent of the distance from the lens to the screen. The rays coming out of the lens are parallel. This will only be true if the laser pivots about the focal point of the lens. Beams radiating from other points come together at an image point. Rotate the laser about points other than the focal point. Rotate it about a point further from the lens than the focal point and the light comes back together not at the focal mpoint but at another point called the image point. Rotate the laser about a point closer to the lens than the focal point and the light never comes back together again. Place the front of the laser so that it touches one point on the lens. Rotate the laser about the point at which it touches the lens.The laser beam hits the same point on the lens at different angles. Notice that when the laser hits the lens at the same spot, the beam always bends toward the axis by the same amount. A point on the fresnel lens will always deflect initial direction of the laser beam (dashed line) toward the axis of the lens by the same angle, regardless of the angle with which the light hits the lens. One way to think about lenses is that they turn the position at which a beam hits them into an angle of deflection, and that they turn the angle with which a beam hits the lens into a position in the focal plane. Scientific Explorations with Paul Doherty 21 Feb 99
<urn:uuid:bf1f5433-dc6e-4080-96cc-12286e6ec445>
4.0625
1,040
Tutorial
Science & Tech.
67.420365
Miconia is one of the most destructive invaders in insular tropical rain forest habitats. It is a serious threat to ecosystems in the Pacific because of its ability to invade intact native forests. Miconia has earned itself the descriptions such as the “green cancer” of Tahiti and the “purple plague” of Hawaii. Once miconia is established at a certain place it drastically changes the ecosystem and biodiversity of that environment. Physical disturbance: Invasion by miconia has eliminated native forest understorey vegetation, increasing rapid runoff and potential for soil erosion and landslides on steep slopes. Modification to Hydrology: Dense stands of miconia may damage watershed functions; there may be a significant change in the water balance, with an increase in runoff and a potential reduction in groundwater recharge, but this plausible result has yet to be fully investigated and documented (Burnett et al. 2006). Economic/Livelihoods: Potential (as yet hypothetical) losses from an invasion of miconia on Oahu to groundwater recharge may conceivably be as high as $137 million per year (Kaiser and Roumasset 2002, in Burnett et al. 2006). Increased sedimentation could likely incur surface water quality damages; potential costs for Oahu have been estimated to be almost $5 million per year (Kaiser and Roumasset 2000, in Burnett et al. 2006). Comparable damage is possible on other Hawaiian islands, though the greatest economic impact is likely to be on Oahu, where 85% of Hawaii’s population is located. Agricultural: Control programs underway since about 1995 have prevented significant agricultural impacts in the Hawaiian Islands. Invading miconia in ranchland near Hana, Maui in 1995-2000 was successfully removed. Theoretically, runoff from miconia stands could trigger erosion and loss of agricultural soil fertility (Chan-Halbrendt et al. 2007), but this has not yet happened or at least has not been documented. Competition: When compared with a large group of native species M. calvescens appears to be better suited to capture and use light, which is consistent with its rapid spread in Hawaiian environments (Baruch Pattison & Goldstein 2000). Invasive characteristics of the species include rapid growth, fairly early maturity (after four years or more), production of large quantities of fruits and seeds, and effective seed dispersal by birds. Threat to Endangered Species: In Tahiti, 70-100 native plant species, including 35-45 species endemic to French Polynesia, are directly threatened with extirpation by invasion of miconia into native forests (Meyer and Florence 1996). Hawaii is home to a great number of rare and endemic plant, bird and invertebrate species at risk of global extinction, including over 350 federally endangered species. Upper Kipahulu Valley of Haleakala National Park on Maui, Hawaii, is a prime stronghold of Hawaiian biodiversity, containing stands of ohia (Metrosideros polymorpha) and koa (Acacia koa) that provide the primary habitat for rare native Hawaiian plants, birds and insects. Proactive response of Haleakala National Park personnel originally triggered a community-wide response to the miconia invasion in Hawaii about 30 years after M. calvescens had first been introduced to the State.
<urn:uuid:a65dc8cb-b7d0-4555-8fd0-2fca37c11bbb>
3.921875
686
Knowledge Article
Science & Tech.
23.991178
NASA In Deep Water To the uninitiated, it’s not immediately obvious why NASA would be sponsoring an expedition into the deepest known sinkhole on Earth. On the other hand, the involvement of Environmental Science and Engineering Professor John Spear is a little more apparent—he’s a microbiologist and in this more than 1,000-feet deep, warm, water-filled and mineral-rich cave known as El Zacatón, in Mexico's Yucatan Peninsula, the microbial life is fairly unusual. Below 30 meters there is no light or oxygen, yet life abounds. “The walls are lined with spongy red and purple microbe mats,” says Spear. Jim Bowden, a deep water diver, dove to 82 meters and brought back samples in which 27 divisions of bacteria were identified, including six new divisions. “The diversity is astounding. I think that if we get down further, there will be even more,” Spear says. There may even be whole ecosystems in the depths of El Zacatón that are entirely independent of photosynthetic energy, instead metabolizing sulphides from volcanic plumes. However, Bowden won’t be diving any deeper to find out—82 meters is way past the limit of most recreational divers. Instead, starting in mid-May, the expedition will be using a highly sophisticated autonomous robot called the Deep Phreatic Thermal Explorer (DEPTHX). Loaded with a total of 30 computers and able to use sonar, temperature, pressure and light to self-navigate as it searches for environments likely to support life, DEPTHX might be the most sophisticated robot ever designed for autonomous exploration. Of course this is where NASA’s interest lies—DEPTHX was designed as a prototype for a vehicle that might someday go looking for life in the ice-covered oceans of Jupiter’s moon, Europa. Imaging the Earth’s Core Luis Tenorio, associate professor in the Department of Mathematical and Computer Sciences, is participating in an NSF-funded project to image the earth’s core mantle boundary (CMB). The collaboration includes geophysicists, mathematicians and statisticians from MIT, Purdue and the University of Illinois. Their results, published in the Journal of Geophysical Research and in Science, clearly show structures at two depths close to the CMB, and the existence of new phase transitions in the mantle. The team’s methodology involves a rich mix of physics, mathematics and statistics to extract information from seismic wave data through “inverse scattering.” Whereas in the past, existing knowledge of geophysical structures was used to interpret scattering patterns, this method allows researchers to take scattered wave data and reconstruct an image of the subsurface without relying on existing knowledge. Combined with considerably better data coverage, this advance in imaging is leading to a rapid expansion in our knowledge of the subsurface and the inner workings of the planet. Materials and Metallurgical Engineering professor, Ivar Reimanis recently discovered a unique material behavior in which particles are ejected from the surface of an indented ceramic over periods lasting up to a few minutes. Because many of the ejected particles are submicron in size, it looks to the unaided eye like the ceramic is smoking. The key ingredient in the ceramic is a lithium aluminum silicate called β-eucryptite, a strange material that has a negative coefficient of thermal expansion. It is thought that a high compressive stress, such as that experienced under an indenter, stimulates a transformation to a denser ceramic phase. Upon release, a reverse transformation leads to a popcorn-like effect where particles ranging from submicron to 50 microns are ejected violently from the material. There is no known report of this phenomenon in any other material. With assistance from undergraduate students Chris Seick and Kyle Fitzpatrick, Reimanis is exploring whether this discovery can inform development of a toughened ceramic composite—the phase transformation may be able to shut cracks before they can propagate through a composite. In fact, this latter idea has already been submitted for a United States patent. To better understand the phenomenon, the Mines researchers have involved collaborators at the National Institute for Standards and Technology in Gaithersburg, the Los Alamos National Laboratory and the Indian Institute of Science. The work is being supported by the U.S. Department of Energy, Office of Basic Energy Sciences. Enhanced Imaging of the Subsurface Paul Sava, who joined the Department of Geophysics faculty in the fall of 2006, is working on increasing the accuracy of seismic imaging. When computers generate a visual representation of the subsurface from a seismic dataset, finer details are often obscured by background “noise.” While imaging may provide a coherent overall picture, it emerges from a fuzzy background, much as a badly oriented TV antenna produces a poor image. The noise is created by random sound waves that are inevitably recorded during seismic surveying. This random data makes finer details of the subsurface indistinguishable—only “louder” signals emerge from buzz. Sava’s work aims to cancel out background noise and bring more subtle features into focus. Instead of imaging data received at individual locations, he takes information recorded at multiple nearby sites and mathematically compares them. Data that bears no relation across sites is filtered out, leaving only spatially coherent information. Essentially, he is mathematically cross-checking the data received at multiple nearby sites and building an image from only the information that is corroborated. The result is greatly enhanced imaging, as the accompanying illustrations demonstrate. Project Enters Phase XII Geophysics Professor Tom Davis presented results of Phase XI of the Reservoir Characterization Project (RCP) to a packed house of sponsors April 12-13. The Phase XI project focused on nine-component, full wavefield seismic data collected at Rulison Field, in Western Colorado’s Piceance Basin. Three multi-component seismic surveys were acquired in 2003, 2004 and 2006 across the same area, enabling interpretation of the efficacy of time-lapse data. Additionally, a downhole test that measured in-situ pore pressure was carried out on a field well within the study area, and multi-component microseismic data were recorded during a four-stage hydraulic fracture treatment on a nearby well. To date, RCP’s Phase XI graduate students have concluded that shear waves are the most valuable wave mode for characterizing and monitoring Rulison’s Williams Fork and Iles tight-gas sands. The RCP project also validated the use of nine-component seismic data for detecting faults and fractures, detecting and predicting lithology and pressures, monitoring reservoir connectivity and depletion, and locating prime well locations. Pressure-test results were able to show a correlation with depletion zones that were predicted from time-lapse shear-wave data. Furthermore, the data showed coincidence with depleted areas in the Cameo Coal interval. High-resolution dynamic reservoir characterization appears to be a key technology for tight gas, said Davis, as it gives operators the potential to improve recovery efficiencies. Going forward, RCP is circulating research proposals for its Phase XII project. Under consideration is dynamic reservoir characterization on Postle Field, in Texas County, OK. Postle’s Morrow reservoir is undergoing enhanced oil recovery using CO2, a very germane subject in today’s oilfield. Courtesy of Oil and Gas Investor
<urn:uuid:bf8e6725-a1fc-4599-b608-42f3e65a7227>
3.546875
1,542
Content Listing
Science & Tech.
22.598386
Core Temperature of Sun How do scientist know the sun's core temperature? This is a good question, after all, we have no direct access to the sun's core, but if we apply some knowledge of physics, we can get to a reasonable approximation quite readily. First, we have to understand that the sun is mostly gas-like particles. If the sun did not have gravity the gases would spread away since nothing would hold them in place. However, if all the sun had was gravity, then that would compress the gas into a smaller volume. Since the sun is giving off energy, then the gases spread out and the sun is bigger than it would be if it was not giving off energy. So there are two opposing forces: (1) the gravity of the sun which pulls the gas-like particles inward, and (2) the energy output of the sun which pushes the gas-like particles outward. The sun's size is the result of the balancing of these two forces. Next, we need to find the mass of the sun. We can measure the gravity of the sun, by observing how the planets are moving around the sun. Since we know that mass is related to gravity, we now have a value for the mass of the BUT we know how much gravity this amount of mass should have. We also know the effect this gravity would have on the size of the sun - IF it were not giving off energy. Therefore, we can know just how much energy the sun is giving off in order to maintain its bigger size. So, we measure the volume of the sun. That's also easy to do. We simply look at the sun (under a filter so that we do not burn out our measuring instruments) and measure its diameter. So, knowing the mass of the sun, and the size of the sun, can tell us how much energy the sun is giving off. But we can relate energy to temperature. And this is how we know the temperature of the core of the sun. Greg (Roberto Gregorius) Click here to return to the Astronomy Archives Update: June 2012
<urn:uuid:8de2ad13-db9e-4c77-8420-3bb72129700e>
3.71875
457
Knowledge Article
Science & Tech.
62.912994
Terms such as ‘red tide’ and ‘global warming’ are catchy but lead to misconceptions. Words matter. Take the term “red tide,” which is the popularized way of talking about blooms of harmful marine algae. This common terminology is a misnomer because the blooms are not always red and their movement is largely unrelated to tides. Also, many species of algae that cause red discoloration are not harmful. A relatively new issue catching public attention is “ocean acidification.” “Ocean acidification” is a term used to describe changes in seawater chemistry due to increasing amounts of CO2 being taken up by the ocean. When CO2 from the atmosphere dissolves into seawater, a series of chemical reactions occurs that effectively lower seawater pH. But while ocean pH is definitely decreasing, the ocean is not actually becoming acidic — just less basic. The world’s oceans are not predicted to drop below a pH of 7.0 (neutral on the pH scale). That doesn’t mean we should downplay the severity of changing ocean chemistry. Extensive research has shown that even a slight reduction in pH, down to 7.8 from the current average of about 8.1, could have devastating impacts on marine ecosystems. Many types of organisms, especially those with calcium-carbonate shells like corals and shellfish, will have trouble surviving in lower pH waters. Although it is important that the crucial nature of the issue be translated to the public, we must be careful with terminology. People don’t like to hear bad news. They’d prefer that the oceans were healthy and that rapid shifts in climate were not occurring. That’s why scientists and the media must avoid hyperbolic language when describing crucial environmental issues. The use of more colorful terms may make for catchier headlines, but the terms can also invite disbelief. There is a need, of course, to make complex scientific issues understandable to nonscientists. But in trying to do so, we must also be careful to be absolutely accurate in our descriptions. Elizabeth Tobin, Los Angeles Times, Opinion. 2 April 2012. Article.
<urn:uuid:3a073533-60c1-40f7-8e62-b2f76510fb1c>
3.265625
453
Personal Blog
Science & Tech.
46.481513
One boring Monday morning in the lab a group of us did the experiment, and to our surprise we found that the hot water (in sealed containers) did freeze faster. On closer examination we discovered that the shelves in our freezer were covered in frost, like I imagine most freezers, and the hot water was melting the frost and creating a good thermal contact between the beaker of water and the shelf. That turned out to be why the hot water froze faster. When we thoroughly cleaned the freezer shelf the effect went away and the hot water took longer to freeze. I think the rumours about hot water freezing faster illustrate the dangers of improperly controlled experiments. As Ron mentions, evaporation could also be a factor and it would be easy for a home experimenter to get the wrong conclusion. Add to that the fact we'd secretly all be delighted if we could prove hot water really does freeze faster, and you can see how the rumour has spread.
<urn:uuid:93796662-37d4-4b1e-926d-9a102bb9e035>
2.75
193
Q&A Forum
Science & Tech.
50.44
Have you ever wondered why all mammals have a tail, but man does not? All species of animals, including man, of course, have evolved very gradually over millions of years. In this way, each animal strengthens and improves those parts of its body that are particularly useful, and the various parts that are not so useful diminish, or are adapted to a changed environment. This evolution happens very slowly, and changes can only be seen over numerous generations. In four legged animals, tails are very important to maintain balance. In many cases, tails are used almost as another limb, like when monkeys swing through the trees. However, Men and apes, have no useful need for a tail. As a species that lives on land, Man doesn’t need tails for hanging on tree brances, and since we don’t fly or leap, we don’t need then for balance either. As a matter of fact, a smalll, very rudimentary ‘tail’ can be founded at the base of our spines. It’s called the coccyx, or tailbone, and it’s all that left of the human tail. It’s just too small to p oke out behind us, so we don’t know it’s there, and it does not help us at all with balace or movement.
<urn:uuid:2c03b666-e138-4115-816c-b848976930df>
3.734375
281
Personal Blog
Science & Tech.
60.886473
An initial report on the 24-hour count that began midnight Monday and ended midnight Tuesday included 233 different species — a drop of 11 from last year when 244 were counted on Mad Island. While the area likely still has one of the United States' most diverse bird populations, the species that were missing raise questions. Similar changes in bird behavior could be seen this year in the Midwest and parts of the South, areas that have been gripped by a massive drought that covered two-thirds of the nation at its height. The drought's severity is unusual, but scientists warn that such weather could become more common with global warming. Thank you Ruth for this important article on how bird species numbers are declining due to climate change.
<urn:uuid:7c35fe75-b7ce-4892-8c81-315a0bb0ae6b>
3.203125
144
Comment Section
Science & Tech.
50.941923
seed germination and fertilizer bae at oci.utoronto.ca Wed Feb 1 18:07:08 EST 1995 In article <3gm9hv$777 at newsbf02.news.aol.com> cheesenips at aol.com (CheeseNips) writes: >hello! I am trying to help a student determine if there is any >difference in seed germination when fertilizer is used. If anyone can >help, it would be greatly appreciated. It is science project time. Pepper (capsicum) seeds are alleged to have faster and better germination if soaked briefly in a potassium nitrate solution. One idea is that wild peppers are often eaten by birds and the seeds deposited in a bit of guano, and the plants are adapted to that. Your student might want to compare several unrelated species to see which benefit from a KNO3 soak, and whether this correlates with whether the seed is borne in a bird-attracting fruit. Toronto, Ontario Canada More information about the Plantbio
<urn:uuid:5642edd6-9926-46cc-b0c8-ff54ef19962e>
3.0625
230
Comment Section
Science & Tech.
54.995113
Common loons defend breeding territories on fresh water lakes in the northern US and Canada. While a great deal is known in general about their breeding biology it was the advent of banding that enabled identification of individual loons. Cornell professor of neurobiology and behavior Charles Walcott and colleagues Walter Piper and Jay Mager have been studying a banded population of loons near Rhinelander, WI for the past 18 years. They have found that loons are quite faithful to the lakes on which they breed returning for an average of 5 years. Loons looking for a breeding site will either pick a vacant lake, replace a missing breeder or actively displace a pair member. Female fights are relatively benign with the winner taking over the territory and the resident male, the loser moving to another lake in the vicinity. For males, fights are more serious; in 30% of such fights a male is killed. And if a male is killed it is always the resident, never the intruder.
<urn:uuid:a321549f-fb8c-49ef-8cf2-fae1ce3d3c65>
3.5625
200
Truncated
Science & Tech.
42.002143
New Native Languages, May 08, 2012 D, Go, Vala, and Rust: A new generation of native languages. D is the brain-child of Dr. Dobb's blogger Walter Bright. Like all the other languages discussed here, it's fundamentally an OO language with numerous features that push it into different areas of programming. Originally conceived in response to perceived flaws in C++, D has grown to embrace a wide range of features — optional memory management (garbage collection), robust safety features (via bounds checking and design by contract). While the language has a low-level feel (inline assembly language is supported), it has high-level constructs such as closures, metaprogramming capabilities, and other features associated with functional programming. Because it first appeared in 2001, D has benefited from considerable refinement and optimization. As a result, programs written in D generally show performance close to their C++ counterparts.
<urn:uuid:ee6563a7-d89c-4f97-8c65-cc5026c115ec>
2.859375
185
Personal Blog
Software Dev.
32.438647
Electromagnetic radiation is a combination of oscillating electric and magnetic fields propagating through space and carrying energy from one place to another. Light is a form of electromagnetic radiation. The theoretical study of electromagnetic radiation is called electrodynamics, a subfield of electromagnetism. Any electric charge which accelerates radiates electromagnetic radiation. When any wire (or other conducting object such as an antenna) conducts alternating current, electromagnetic radiation is propagated at the same frequency as the electric current. Depending on the circumstances, it may behave as waves or as particles. As a wave, it is characterized by a velocity (the velocity of light), wavelength, and frequency. When considered as particles, they are known as photons, and each has an energy related to the frequency of the wave given by Planck's relation E = hv, where E is the energy of the photon, h is Planck's constant - 6.626 × 10-34 J·s - and v is the frequency of the wave. Einstein later updated this formula to Ephoton = hv. Generally, electromagnetic radiation is classified by wavelength into radio, microwave, infrared light, visible light, ultraviolet light, X-rays and gamma rays. The details of this classification are contained in the article on the electromagnetic spectrum. The effect of radiation depends on the amount of energy per quantum it carries. High energies correspond to high frequencies and short wavelengths, and vice versa. One rule is always obeyed, regardless of the circumstances. Radiation in vacuum always travels at the speed of light, relative to the observer, regardless of the observer's velocity. (This observation led to Albert Einstein's development of the theory of special relativity). Much information about the physical properties of an object can be obtained from its electromagnetic spectrum; this can be either the spectrum of light emitted from, or transmitted through the object. This involves spectroscopy and is widely used in astrophysics. For example; many hydrogen atoms emit radio waves which have a wavelength of 21.12 cm. When electromagnetic radiation passes through a conductor it induces an electric current flow in the conductor. This effect is used in antennas. Electromagnetic radiation may also cause certain molecules to oscillate and thus to heat up; this is exploited in microwave ovens.
<urn:uuid:bf0af722-50ea-48a9-b02d-58a0d49fce3a>
4.09375
462
Knowledge Article
Science & Tech.
29.595229
||Topex was JPL's follow-on to the Seasat-A mission of 1978. Unlike Seasat, Topex had just one scientific instrument, a radar altimeter designed to measure sea surface height. Surface height is directly related to temperature, tides and currents, all of which are important to oceanographers and meteorologists. But in the poor fiscal atmosphere of the early 1980s, NASA could not afford the project. Similarly, the French space agency, Centre National d'Etudes Spatiales, had a nearly identical "Poseidon" mission in its plans that it could not afford. So, NASA associate administrator Burt Edelson arranged a merger with his French counterpart, Jean-Louis Fellouf, with NASA providing the satellite, France providing an Ariane launch vehicle, and both providing radar altimeters. Arianespace launched Topex/Poseidon on Oct. 10, 1992, from its facility in Korou, French Guyana. Over its life, the mission proved the existence of deep-ocean waves previously known only from theory, watched the complete evolution of the largest El Nino in the 20th century during 1997 and 1998, showed seasonal cycles in the world ocean, and measured the slow thermal expansion of the oceans as they warm. Far exceeding its expected five-year life, Topex/Poseidon was finally shut down in January 2006. The mission had been so valuable to Earth science, it had already been replaced by the Jason-1 mission (also a joint U.S./France effort). The two spacecraft shared the same orbit for several years, while scientists compared their data.
<urn:uuid:79ae505e-6e77-4a01-8e62-5e2a944f0424>
3.40625
332
Knowledge Article
Science & Tech.
41.465854
Space history was made on February 12, 2001 when NASA's NEAR spacecraft became the first craft to land on an asteroid. What makes this landing even more exceptional is that NEAR, managed by the Applied Physics Lab at Johns Hopkins University, was not built to withstand a landing; its mission was to orbit asteroid Eros and study the slow-moving rock from a distance. However, with its main mission successfully completed, scientists thought they could attempt an asteroid landing. It fell to NEAR's navigation team, based at JPL, to bring the craft in for a semi-smooth landing. As the rendezvous drew closer, the JPL team had to quickly crunch numbers to calculate the craft's path as it plunged toward Eros and relay commands back to NEAR to give it the best landing possible. Bobby Williams, the head of the navigation team, talks about the pressure on that eventful Monday morning. Q: What was the asteroid landing like? A: The days leading up to it were pretty chaotic. On Monday we got a really early start at 2 am. Al Hewitt, the network operations person for the Deep Space Network (the telecommunications system which talks to spacecraft), called to say that the predictions for NEAR's arrival at Eros we had sent weren't working. So we had an immediate panic attack. Pete Antreasian and Steve Chesley were waiting at their keyboards when the numbers were made available, they then started processing. We were on a very tight time schedule to get the number "how much earlier or later are we?" We did that with a couple minutes to spare. We found that it was 17 seconds late. We bumped the spacecraft's clock back to correct for the change and the spacecraft got to live 17 seconds over again. We believe that made the difference - that if we hadn't adjusted it, it may have mapped a much bigger error on the ground. So that was the big push-up for the morning. We had to have pictures taken that were immediately downlinked which not only required us to be on our toes, the spacecraft had to be officially set up to do that. So it all worked -- the pictures got down and a few moments after they were taken, downlinked. Q: How did the team feel? A: The day before we touched down there was a lot of fatigue: we'd been working pretty hard for the past month. Monday morning, all the fatigue drained away. Everybody was pretty excited. It was the culmination of all that hard work. It was like going in for your final exam, and you know you'll get an A and you feel really good when you come out. Q: What makes a good team, especially in the face of doing an unprecedented maneuver like the landing? A: We didn't over-train; we didn't have a lot of blow-by blow simulation. That makes everybody tired. My approach is always: lay it all out, simulate little parts of it so that everybody knows what they have to do. They're smart people! We rely on their own innate abilities and their training, and I think people respond to that. Q: What will the first small body landing on an asteroid teach us about future landings? A: The fact that NEAR was able to land with no landing apparatus on the spacecraft means that now they don't have to over-design any kind of landing apparatus on one that's actually designed to land. We were extremely lucky not to hit a rock or boulder and knock a solar array off. You wouldn't want that on a planned landing, where you have to take off again, or drop off a rover. But now we know that we can survive an impact. In that sense we've set the boundary; we know what the design constraints would be for a real lander on an asteroid or comet. Q: From the navigation point of view, what are the problems of landing on a small body and how do you solve them? A: For asteroids we know now that the key to landing is the models, like the gravity fields and the solar pressure on the spacecraft. Because we had those models fairly well-estimated, landing was a matter of planning and using those models. We found you can't just arrive and land immediately, like we do at Mars. For a small body that's impractical, because you need to know the gravity, you need to know the mass, and you can't estimate those things until you get close. Q: What is the NEAR navigation team's future? A: One important element is that we do navigation for many different missions. Almost all of my group has only worked part-time on NEAR. We're used to working more than one mission. We have a couple people going to other Discovery missions for the Applied Physics Lab, the CONTOUR mission, which flies by at least two comets, and the Messenger mission, which goes to Mercury.
<urn:uuid:00bf9a63-85b1-4f8e-bfc8-a56898e19be1>
3.484375
1,015
Audio Transcript
Science & Tech.
63.665302
will calculate simple statistical measures between two minc files or more by comparing all subsequent files to the first. The results for each subseqent file are then returned in order. By default all statistics are calculated. If specifitc statistics are requested via a command-line option, then only the requested statistics are printed. A very useful feature of this program is the ability to restrict the set of voxels included in the statistic calculation, either by restricting the range of included values (-floor, -ceil or -range), or by using a mask file (-mask) with a restricted range. The comparison statistics available in minccmp are given below. Note that two of these (-xcorr and -zscore) are a very close approximation to what is used Note that options can be specified in abbreviated form (as long as they are unique) and can be given anywhere on the command line. Overwrite an existing file. Don't overwrite an existing file (default). Dump a lot of extra information (for when things go haywire). Print out extra information (more than the default). Print out only the requested numbers Specify the maximum size of the internal buffers (in kbytes). Default is 4 MB. Check that all input files have matching sampling in world Ignore any differences in world dimensions sampling for input files . Volume range options A lower bound for ranges of data to include in statistic calculations. An upper bound for ranges of data to include in statistic calculations. A lower and upper bound for the ranges of data to include in statistics. Name of file to be used for masking data included in statistic Compute all statistical measures. This is the default. Print the Sum Squared Difference between two input files SSQ = Sum( (A-B)^2 ) Print the Root Mean Squared Error between two input files RMSE = sqrt( 1/n * Sum((A-B)^2)) Print the Cross Correlation between two input files XCORR = Sum((A*B)^2) / (sqrt(Sum(A^2)) * sqrt(Sum(B^2)) Print the z-score difference between two input files ZSCORE = Sum( |((A - mean(A)) / stdev(A)) - ((B - mean(B)) / stdev(B))| ) / n
<urn:uuid:fa75a467-5dc0-4da7-9882-8c3c977122b1>
2.734375
521
Documentation
Software Dev.
49.453856
Present climate is determined from analysis of meteorological observations. Climates of the past may be deduced by studying proxy data such as ice-cores, tree-rings, pollen, etc. Future climate changes may be estimated by means of computer simulations using programs known as General Circulation Models or Climate Models. People have built physical models having some properties in common with the atmosphere. These 'rotating dish-pans' do contain wave motions and instabilities similar in some respects to those observed. But no-one has constructed a mechanical model which behaves in detail like the atmosphere. The difficulties with such a task are obvious: for example, we cannot replicate the radial gravitational force of the spherical earth in the laboratory. So, we design instead a mathematical model in which the geometry of the atmosphere can be precisely represented and a variety of detailed physical processes incorporated. The computer program to solve or integrate this system of mathematical equations is our climate model. The essence of modelling is to use the known laws of physics to design a computer program capable of simulating the atmospheric flow. With realistic initial observational data the program can produce a short-range weather forecast, typically for up to a week ahead. If the solution is carried forward for an extended period - months or years of simulated flow - the forecast will diverge rapidly from the true evolution of the atmosphere. However, its statistical characteristics - mean values, frequency of extremes, etc. - may resemble those of the real climate provided the model includes a representation of all the processes which determine climate. The model has as its basis the fundamental principles of physics - conservation of mass and energy and Newton's laws of motion. These determine the overall behaviour of the atmosphere. Many physical processes must also be allowed for: the phase changes of water, incoming solar radiation, frictional drag at the earth's surface, sub-grid-scale turbulence and so on. The details of some of these micro-physical processes are poorly understood with the consequence that there are inherent inaccuracies and uncertainties in all climate models. In studying climate change, we must also include chemistry in the model. Small changes in the concentration of some species may dramatically alter the radiation balance and profoundly influence the climate. Details of the interaction of solar and terrestrial radiation with compounds such as methane and ozone must be taken into account. A vast complex of chemical reactions occurs in the atmosphere. Current models have relatively simple treatment of chemical processes. Future models may need to consider the concentrations of hundreds of compounds, their reaction rates, and so on. The most advanced climate models are capable of simulating the observed climate with a good level of accuracy. The zonal mean winds and temperatures in each season are generally well simulated, and the statistics of extremes are realistic. Thus, a reasonable picture of the climate of a given geographical location emerges from the models. However, it must be acknowledged that there are still serious shortcomings. In a recent comparative study of 14 climate models, it was found that the temperature was on average too cold, particularly in the polar upper troposphere and tropical lower troposphere. It was conjectured that all the models are misrepresenting or even omitting some mechanism, resulting in this deficiency. So, we can simulate the current climate fairly faithfully. But why bother? Why spend large resources finding out what we already know? Well, our fond hope is that a climate model can do more than simulate the status quo. Like a laboratory model, the climate model can be used in experiments to study how the climate may change in the future. For example, we may study the sensitivity of model climate to a doubling of carbon dioxide simply by changing a single number (representing CO2 concentration) and re-running the model. This may give us an indication of what is likely to happen in the real world under such circumstances. Indeed, just such experiments are the basis of current predictions of global warming resulting from the burning of fossil fuels. However, model results have been found to vary widely when details of the physical parameterisations are adjusted: in recent experiments with the Hadley Centre model, the predicted increase in global mean surface temperature ranged from 1.9C to 5.2C using different cloud schemes. The following quotation is from a report on these experiments: The guidance provided by model sensitivity studies is about the best we have for now. But there is some danger in placing too much reliance on it. The equations governing the atmosphere are non-linear, their solutions are hyper-sensitive to small perturbations and minor changes can have major implications. Thus, seemingly negligible deficiencies in a climate model may render its output useless or misleading. Moreover, there are certainly physical and chemical processes which are insignificant in present conditions and are therefore ignored in current models, but which may be of critical importance in altered circumstances. As an example, the detailed micro-physics of ice clouds is involved in the depletion of Antarctic ozone. This was not foreseen by the modellers, with the result that we were all caught on the hop. I fear there may be more unpleasant surprises to come. In conclusion, the overall success of climate models in simulating the present climate of the atmosphere is impressive. Although there are shortcomings in all models, they give a generally accurate picture of reality. They provide a valuable means for estimating the likely climatic consequences of changes induced by mankind's activities. At present, we must interpret their guidance with some caution; in particular, the details of geographical variations in climate impact may be unreliable. We can be confident that the dependability of model guidance will grow with their increasing sophistication, so that detailed regional climate impact predictions may be reliable in the future, perhaps shortly after the turn of the millennium. But in an uncertain world we can never completely rule out those 'nasty surprises'. (1) Basic Text: An Introduction to Three-dimensional Climate Modelling. Warren M Washington and Claire L Parkinson, University Science Books, 1986. (2) Recent CO2-Sensitivity Study: The Hadley Centre Transient Climate Change Experiment. Hadley Ceentre Publication, August, 1992. (3) Recent Model Intercomparison Study: An Intercomparison of Climates Simulated by 14 Atmospheric General Circulation Models. G J Boer et al., July, 1991, CAS/JSC WGNE, WMO T.D.-No.425.
<urn:uuid:8956a4ec-f79d-40f5-8f76-428843b0fd38>
3.90625
1,289
Academic Writing
Science & Tech.
27.726165
8 images of impact craters in space Tue, Nov 06 2012 at 7:05 PM An estimated half million asteroids are flying around our solar system, ranging from the size of baby planets to particles of dust. With so many objects winging about in space, it’s no wonder that the planets and moons of our solar system are pitted with craters. Here are eight images of amazing impact craters in our solar system, each telling its own tale of mysterious destruction.
<urn:uuid:52808130-3417-4ac4-b62f-a8edd7961b80>
2.8125
98
Truncated
Science & Tech.
51.159959
What are the special conditions necessary for fossils to form? Usually, if something dies, it either rots away from bacteria and fungus, or is eaten by animals. (Scavengers are animals that live off of dead flesh.) So, a major factor in forming a fossil is rapid burial. This protects the organism from being eaten or exposed to bacteria.
<urn:uuid:2a44f2b9-99f9-4f3c-9b92-395168421927>
3.578125
71
Knowledge Article
Science & Tech.
52.116172
Created by Roger Edwards, Storm Prediction Center In this enhanced water vapor image, color progressions from gray thru bright blue to deep blue indicated more moisture; while black through deep red means drying. Often, areas of strong drying correspond to sinking motion in the upper levels; and areas of high moisture content signal rising motion. Such is cearly the case with this image of Bertha, where tremendous amounts of moisture are being pumped high into the atmosphere near the center of the hurricane. The bright blue dot is the eye, which contains much less moisture than the surrounding CDO (Central Dense Overcast). Note the broad arc of banded, gray-and-blue enhanced material (cirrus clouds) arcing northward out of the hurricane. This is an upper-level outflow channel, marked by the cirrus arc. This cirrus arc was spiraling outward, away from the storm. As large and intense low pressure areas, hurricanes in the northern hemisphere rotate cyclonically (counterclockwise) from the surface up through most of their depth. However, the very top portion is an anticyclonically (clockwise) rotating high! This is the most efficient way for a hurricane to ventilate itself, releasing air through its top in an outward-spiralling (divergent) anticyclone. In the southern hemisphere, the sense of rotation would be the opposite.
<urn:uuid:1dcbab73-2697-401f-81a5-1d0673c3d3e5>
3.515625
284
Knowledge Article
Science & Tech.
39.428796
NOTE: Click on the images to view them at their highest resolution. This is a six image sequence showing the collision of fragment H of Comet Shoemaker-Levy 9 with Jupiter. The frames were taken over a three hour period on the 18th of July, 1994, using the MAGIC infrared camera on the 3.5 meter telescope on Calar Alto. The star-like object next to Jupiter is the largest Galilean moon, Ganymede. The polar caps and pre-existing impact sites appear bright at the wavelength of observation, 2.3 microns, which was selected to maximize contrast between the impacts and the jovian cloud deck. The following paragraphs give further information on each image. Jupiter and Ganymede approximately four minutes prior to the impact of fragment H. The bright feature near the southern polar cap is the impact site of the D and G fragments, which struck the planet 32 and 12 hours earlier, respectively. Note the crescent shaped ejecta blanket to the southwest (lower right) of the D/G site. The second and brighter of two impact precursors observed at Calar Alto. The first precursor appeared approximately two minutes earlier, and the main peak of the H impact began roughly four minutes after this exposure. The main peak of the H impact approaches maximum light. The explosion brightened for another 4 minutes, during which time the MAGIC camera acquired near infrared spectra of the event. These spectra give strong indications of hot (>2000 K) molecular gas. The fragment H impact site rotates into view. The scar on Jupiter's atmosphere is now larger than the planet Earth. Spectra taken just before this image show that the site has cooled considerably. Appearance of the ejecta blanket. Forty five minutes after impact, the H collision site shows a second feature to the southwest. This is the crescent-shaped ejecta blanket similar to that seen in the pre-existing D/G site. Ganymede is just beginning its transit across the face of Jupiter. The H impact site joins its brethren. More than three hours after impact, Ganymede has completed its transit, and the H impact site has rotated past the meridian and lies just to the southwest (lower right) of the Great Red Spot. This image was taken with the telescope pointed near the horizon, and the greater thickness of Earth's atmosphere renders the picture somewhat fuzzy. Nevertheless, the crescent-shaped ejecta blanket is obviously similar to that of the D/G impact (see adjacent frames). The sites visible on Jupiter are from left to right: A, E/F, H, and D/G (on the western limb). Tom Herbst, Max-Planck-Institut fuer Astronomie, Heidelberg, Doug Hamilton, Max-Planck-Institut fuer Kernphysik, Heidelberg, Hermann Boehnhardt, Universitaets-Sternewarte, Muenchen, and Jose Luis Ortiz Moreno, Instituto de Astrofisica de Andalucia, Granada. Images, Images, Images
<urn:uuid:a3b7f02e-144c-4116-b719-56512800ef6d>
3.046875
640
Knowledge Article
Science & Tech.
48.272293
Petford, N. and Koenders, M.A., 2001. Consolidation phenomena in sheared granitic magma: effects of grain size and tortuosity. Physics and Chemistry of the Earth, Part A: Solid Earth and Geodesy, 26 (4-5), pp. 281-286. Full text not available from this repository. Official URL: http://www.sciencedirect.com/science?_ob=ArticleUR... Granitic (and other) magmas with crystal contents between 50 and ca.70% are expected to show dilatant behavior during deformation. The grain size at which the magma has been crystallised is shown to be relevant to the development of excess pore pressure at continued shearing. The reigning pressure regime is compared to the stresses required for fracturing of the skeletal elements. At rates of loading in excess of average tectonic rates (≥ 10−14 s−1), shear-induced dilation in granitic magmas with high solidosities (crystal contents>50%), can lead to fracture. The available excess skeletal pressure at a given strain rate is a function of two coupled parameters, grain size and tortuosity, with higher skeletal pressures favoured by smaller mean particle size. Our analysis suggests that the common occurrence of brittle-like features thought to have formed in the magmatic state during pluton crystallisation can only be achieved where strain rates (emplacement loading) are at least of the order 10−13 s−1 or greater, consistent with similar estimates of strain rates during pluton emplacement based on field studies. |Subjects:||Science > Earth Sciences| |Group:||University Executive Team| |Deposited By:||Ms MJ Bowden| |Deposited On:||02 Jan 2008| |Last Modified:||07 Mar 2013 14:42| |Repository Staff Only -| |BU Staff Only -| |Help Guide -||Editing Your Items in BURO|
<urn:uuid:4b73956b-9978-4a56-8a95-f19e6f71657f>
2.8125
418
Academic Writing
Science & Tech.
48.085
If a is the radius of the axle, b the radius of each ball-bearing, and c the radius of the hub, why does the number of ball bearings n determine the ratio c/a? Find a formula for c/a in terms of n. Which is larger cos(sin x) or sin(cos x) ? Does this depend on x ? Find the exact values of some trig. ratios from this rectangle in which a cyclic quadrilateral cuts off four right angled triangles.
<urn:uuid:33e31ba8-be09-4f75-a5a1-3a49a8e0c1eb>
3.046875
104
Q&A Forum
Science & Tech.
88.760894
Last July, a chartered fishing boat strewed 100 tonnes of iron sulphate into the ocean, off the western coast of Canada. The goal was to supercharge the marine ecosystem. The iron was supposed to fertilize plankton, boost salmon population and sequester carbon. Currently, it’s still unclear whether the ocean responded as hoped, but this project has angered scientists, embarrassed a village of indigenous people and enraged opponents of geoengineering. Read more @ SciTechDaily
<urn:uuid:17da8785-9909-4085-9965-f6a672e596cf>
2.921875
98
Truncated
Science & Tech.
27.223579
I was lucky enough to see a talk by Barbara Liskov, the grande dame of computer science. The talk was titled “The Power of Abstraction,” and it covered Liskov’s work on programming languages in the 1970s and 1980s, primarily a language called CLU. Update 1/14/2010: Video of the same talk is available here: OOPSLA Keynote: The Power Of Abstraction CLU had a number of interesting features that were ahead of its time — heap-based garbage collection, typed exceptions, and iterators. Many of these features made their way into object-oriented languages such as Java. But CLU itself is not object-oriented. Object-oriented languages, Liskov said, tend to conflate the concrete representation of a type with the interface used to access it. Think of a classic Java class in an introductory OOP text. The class contains both instance fields and methods to manipulate them. Even though the fields are private, the interface is tied to a specific implementation. You can’t substitute a different implementation, not even by subclassing. CLU provides separate structures for fields and methods. Fields are defined in types, which are more or less like C structs. Methods are defined in clusters, from which the name CLU derives. A cluster is a named set of method implementations, associated with one particular type. Users only work with clusters, not types. A cluster may be substituted by a another cluster that implements the same methods. Why is this interesting now? Because we’re just catching up to where Liskov was in the seventies. Modern Java designs often favor interface-based APIs with no concrete inheritance and no public constructors. This is even more interesting to me, because my favorite programming language will soon have features very similar to CLU’s types and clusters. The “new” branch of Clojure defines two new abstractions: datatypes and protocols. A protocol is a set of function signatures, with no implementation. Conceptually, it’s similar to a Java interface. You could use a protocol to define an API to model some real-world object, such as Employee, Department, etc. A datatype is a set of named fields, with optional type declarations. Conceptually, it’s similar to a C struct. However, a datatype can also declare support for any number of protocols, and supply methods to implement those protocols. For example, Clojure will probably have a Countable protocol with a single method count. Clojure datatypes like Lists and Vectors can provide their own implementations of count. At that level, the datatype is like a concrete class implementing several interfaces. What’s really cool is that you can extend protocols for existing types, even Java classes. So, for example, we could implement Countable for java.lang.String by writing a count method that calls String.length(). This means you can create new protocols for Java classes that you do not control. This is like interface injection, a proposed but as-yet unimplemented feature for Java. Protocol method calls are dispatched dynamically based on the type of their first argument, very similar to (and at the same speed as) Java method calls.
<urn:uuid:239fa43c-d99b-4c93-848a-d8e1bfeecb8e>
2.71875
688
Personal Blog
Software Dev.
42.25416
Near the center of this sharp cosmic portrait, at the heart of the Orion Nebula, are four hot, massive stars Gathered within a region about 1.5 light-years in radius, they dominate the core of the dense Orion Nebula Star Cluster. Ultraviolet ionizing radiation from the Trapezium stars, mostly from the brightest star powers the complex star forming region's entire visible glow. About three million years old, the Orion Nebula Cluster was even more compact in its younger years and a recent dynamical study runaway stellar collisions at an earlier age may have formed a black hole with more than 100 times the mass of the Sun. The presence of a black hole within the cluster could explain the observed high velocities of the Trapezium stars, The Orion Nebula's distance of some 1500 light-years would make it the closest known black hole to planet Earth. Image Data - Hubble Legacy Archive,
<urn:uuid:9149c5b7-e4f8-4692-93ed-4972a87a7f32>
3.8125
201
Knowledge Article
Science & Tech.
38.784688
The topic of this article is focused around zooming in a perspective OpenGL view. I wrote this article because I found some examples of zooming in a OpenGL view using ortho projections. But the most advanced example that I found using perspective projections, don't implement a true zooming but simulate the zooming effect changing the view point or camera angle. The technique that I suggest implement a true zooming and not change camera angle or perspective parameters (view point, reference point clipping planes). I achieved this result using the function glFrustum instead of standard gluPerspective and manipulating (see function COGL::SetPerspective(const CRect &rcClient) in the class COGL) the scaling and translating parameters of projection matrix. Two normalized rectangles are maintained by the COGL class: one for the full size image, and one for the current zooming rectangle. The image presented by OpenGL is scaled by the ratio of the rectangles, adjusted according to the viewport aspect and translated to the center of current zooming rectangle. Finally sorry for my poor English.
<urn:uuid:1ea7819c-5d4f-48ac-9bfe-9b160391ee2b>
2.875
228
Documentation
Software Dev.
31.823197
Have some satellite data showing 2010 to be -- by a significant margin -- the hottest year in the satellite record. Just look at the very top line for 2010, with the rectangle at the end showing the most recent reading. You will want to check all the boxes at the bottom of the graph and then select "redraw" to show the previous 10 years compared to 2010. Have an entire press release showing how the Arctic sea ice is vanishing due to rising temperatures. My apologies for the huge graphics. It was either large ones or tiny thumbnails. I went with the large ones because they're at least easy to read. In May, Arctic air temperatures remained above average, and sea ice extent declined at a rapid pace. At the end of the month, extent fell near the level recorded in 2006, the lowest in the satellite record for the end of May. Analysis from scientists at the University of Washington suggests that ice volume has continued to decline compared to recent years. However, it is too soon to say whether Arctic ice extent will reach another record low this summer—that will depend on the weather and wind conditions over the next few months. Overview of conditions Arctic sea ice extent averaged 13.10 million square kilometers (5.06 million square miles) for the month of May, 500,000 square kilometers (193,000 square miles) below the 1979 to 2000 average. The rate of ice extent decline for the month was -68,000 kilometers (-26,000 square miles) per day, almost 50% more than the average rate of -46,000 kilometers (18,000 square miles) per day. This rate of loss is the highest for the month of May during the satellite record. Ice extent remained slightly above average in the Bering Sea, and below average in the Barents Sea north of Scandinavia, and in Baffin Bay. Figure 1. Arctic sea ice extent for May 2010 was 13.10 million square kilometers (5.06 million square miles). The magenta line shows the 1979 to 2000 median extent for that month. The black cross indicates the geographic North Pole. Conditions in context As we noted in our May post, several regions of the Arctic experienced a late-season spurt in ice growth. As a result, ice extent reached its seasonal maximum much later than average, and in turn the melt season began almost a month later than average. As ice began to decline in April, the rate was close to the average for that time of year. In sharp contrast, ice extent declined rapidly during the month of May. Much of the ice loss occurred in the Bering Sea and the Sea of Okhotsk, indicating that the ice in these areas was thin and susceptible to melt. Many polynyas, areas of open water in the ice pack, opened up in the regions north of Alaska, in the Canadian Arctic Islands, and in the Kara and Barents and Laptev seas. The polynyas are clearly visible in high-resolution passive microwave images from the Advanced Microwave Sounding Radiometer (AMSR-E) aboard NASA’s Aqua satellite. What do current ice conditions mean for the minimum ice extent this fall? It is still too soon to say: although ice extent at present is relatively low, the amount of ice that survives the summer melt season will be largely determined by the wind and weather conditions over the next few months. Figure 2. The graph above shows daily sea ice extent as of June 7, 2010. The solid light blue line indicates 2010; dashed green shows 2007; solid pink shows 2006, and solid gray indicates average extent from 1979 to 2000. The gray area around the average line shows the two standard deviation range of the data. May 2010 compared to past years Average ice extent for May 2010 was 480,000 square kilometers (185,000 square miles) greater than the record low for May, observed in 2006, and 500,000 square kilometers (193,000 square miles) below the average extent for the month. The linear rate of decline for May over the 1979 to 2010 period is now -2.41% per decade. The rate of decline through the month of May was the fastest in the satellite record; the previous year with the fastest daily rate of decline in May was 1980. By the end of the month, extent fell near the level recorded in 2006, the lowest in the satellite record for the end of May. Despite the rapid decline through May, average ice extent for the month was only the ninth lowest in the satellite record. Figure 3. Monthly May ice extent for 1979 to 2010 shows a decline of 2.4% per decade. Persistent warmth in the Arctic Arctic air temperatures averaged for May were above normal, continuing the temperature trend that has persisted since last winter. Temperatures were 2 to 5 degrees Celsius (4 to 9 degrees Fahrenheit) above average across much of the Arctic Ocean. A strong anticyclone centered over the Beaufort Sea produced southerly winds along the shores of Siberia (in the Laptev and East Siberian seas), resulting in warmer-than-average temperatures in this area. The Canadian Arctic Islands were an exception to the general trend, with temperatures slightly cooler than average over much of the region. Figure 4. This map of air temperature anomalies for May 2010, at the 925 millibar level (roughly 1,000 meters or 3,000 feet above the surface), shows warmer-than-usual conditions over much of the Arctic Ocean, especially along coastal Siberia. Areas in orange and red correspond to positive (warm) anomalies. Areas in blue and purple correspond to negative (cool) anomalies. Models indicate low ice volume Ice extent measurements provide a long-term view of the state of Arctic sea ice, but they only show the ice surface. Total ice volume is critical to the complete picture of sea ice decline. Numerous studies indicate that sea ice thickness and volume have declined along with ice extent; unfortunately, there are no continuous, Arctic-wide measurements of sea ice volume. To fill that gap, scientists at the University of Washington have developed regularly updated estimates of ice volume, using a model called the Pan Arctic Ice Ocean Modeling and Assimilation System (PIOMAS). PIOMAS uses observations and numerical models to make ongoing estimates of changes in sea ice volume. According to PIOMAS, the average Arctic sea ice volume for May 2010 was 19,000 cubic kilometers (4,600 cubic miles), the lowest May volume over the 1979 to 2010 period. May 2010 volume was 42% below the 1979 maximum, and 32% below the 1979 to 2009 May average. The May 2010 ice volume is also 2.5 standard deviations below the 1979 to 2010 linear trend for May (–3,400 cubic kilometers, or -816 cubic miles, per decade). PIOMAS blends satellite-observed sea ice concentrations into model calculations to estimate sea ice thickness and volume. Comparison with submarine, mooring, and satellite observations help increase the confidence of the model results. More information on the validation methods and results is available on the PIOMAS ice volume Web site. Figure 5. The chart above, from the University of Washington Pan-Arctic Ice Ocean Modeling and Assimilation System, shows anomalies in ice volume by month. Ice volume is expressed in units of 1000 cubic kilometers (240 cubic miles), and is computed relative to averages for the period 1979 to 2009. The SEARCH Sea Ice Outlook, an international, community-wide discussion of the upcoming September Arctic sea ice minimum, is slated to be published in June 2010. Gridded ICESat data are now available from the NASA Jet Propulsion Laboratory. This data provide an estimate of sea ice thickness based on elevation measurements, from 2004 to 2008. Schweiger, Axel , Jinlun Zhang, Mike Steele, et al. Pan Arctic Ice Ocean Modeling and Assimilation System (PIOMAS). http://psc.apl.washington.edu/ArcticSea ... e/IceVol.. For previous analyses, please see the drop-down menu under Archives in the right navigation at the top of this page.http://nsidc.org/arcticseaicenews/index.html But then, didn't it snow a whole lot somewhere last winter? And then there's those horrible, horrible emails which were in fact much ado about nothing. And isn't "Lord" Christopher Monckton out on tour somewhere? Now if you'll excuse me, I need to go laugh my ass off to drop a few more pounds.
<urn:uuid:c4a9a6fa-a88a-44af-9b8b-136dd9a03745>
3.328125
1,763
Comment Section
Science & Tech.
50.951932
This article has been reviewed by the following Topic Editor: C Michael Hogan Atlantic Water (AW) is a water mass traditionally defined as any water with salinity greater than 35.0 entering the Arctic domain from the Atlantic domain. This article is written at a definitional level only. Authors wishing to expand this entry are inivited to expand the present treatment, which additions will be peer reviewed prior to publication of any expansion. AW first entering the Iceland and Norwegian Seas typically has temperatures of six to eight degrees Celsius (C) and a salinity range of about 35.1 to 35.3, although the property ranges of other waters obviously connected with AW have prompted some to expand the definition to include all waters warmer than three degrees C and more saline than 34.9. Estimates of the total influx of AW range as high as nine Sverdrups. Arctic Ocean Circulation. #6 indicates Atlantic Water Warmer, more salty surface waters from the Atlantic penetrate the Arctic Ocean and are cooled as they move through the Greenland Sea and the Norwegian Sea. As they get colder, they sink beneath the cold, less salty waters to depths reaching several hundred meters. Eventually, they exit through the Fram Strait, the only “gateway” that allows deeper water to flow through. Source: Woods Hole Oceanographic Institution Physical Oceanography Index James H. Swift. The arctic waters. In Burton G. Hurdle, editor, The Nordic Seas, pages 129–153. Springer-Verlag, 1986. Steve Baum (Contributing Author);C Michael Hogan (Topic Editor) "Atlantic Water". In: Encyclopedia of Earth. Eds. Cutler J. Cleveland (Washington, D.C.: Environmental Information Coalition, National Council for Science and the Environment). [First published in the Encyclopedia of Earth March 29, 2010; Last revised Date November 6, 2011; Retrieved May 24, 2013 <http://www.eoearth.org/article/Atlantic_Water?topic=49523>
<urn:uuid:61d2478c-fda2-489a-b207-9d908b1db994>
3.265625
412
Knowledge Article
Science & Tech.
50.093026
The Karner Blue Butterfly Lycaedes melissa samuelis Walk into one of nature’s unique ecosystems in the Northeast, a pine barren, on a still, hot July day. You'll smell the aroma of pine. You'll sense the dryness of the air, and if you are fortunate, you'll see the fluttering of small iridescent blue wings against a backdrop of low-growing vegetation. You're in one of the very few places where you can see the rare Karner blue butterfly -- in one of the Northeastern pine barrens of New York and New Hampshire. What is it? The Karner blue butterfly (Lycaedes melissa samuelis) was first described more than a century ago in Karner, New York. It is a small butterfly, with a wingspan of about one inch. The male's wings are distinctively marked with a silvery or dark blue color. Karner blues are found in the northern range of wild lupine habitat. Wild lupine(Lupinus perennis) is a small, often attractively flowered plant that occurs in pine barrens and oak savannas in New Hampshire, New York, Michigan, Wisconsin, Indiana and Minnesota. The Karner blue's habitat is likely to be a patchwork of pitch pine and scrub oak scattered among open grassy areas. Historically, a network of these openings among the trees was maintained by wildfire, and at one time the butterfly was found in this habitat in a nearly continuous narrow band across 10 states and one province. Today it has been eliminated from at least five of these states. In the Northeast today, suitable habitat for the Karner blue is found in the Albany Pine Bush of New York and the Concord Pine Barrens of New Hampshire. Why are they so rare? Habitat throughout the range of the Karner blue has been lost through human activity to suppress wildfire, cultivate forests and develop communities. The remaining habitat has been divided into small, separated segments. This fragmentation of remaining habitat prevents the Karner blue from moving and spreading, resulting in small populations that are isolated from each other. The Karner blue butterfly’s habitat needs are very specific and it is unable to adapt to the human-caused changes in its environment. Habitat fragmentation and loss, combined with the extremely small size of the remaining population, are the greatest threats to the Karner blue butterfly’s continued existence in the Northeast and elsewhere in the country. New York's Albany Pine Bush, which once covered as much as 40,000 contiguous acres, has been reduced to 2,000 acres. These acres are dissected by obstacles to butterfly movement, such as roads and buildings, and are subject to disturbance by off-road vehicles and horseback riding. Elsewhere across the region, pine barrens have largely been destroyed by industrial, commercial and residential development; road and airport construction; and gravel and sand mining. Remaining habitat is threatened by encroachment of adjacent forests, conversion of barrens to pine plantations and other land management practices. Why should we be concerned? Since the landing of the Pilgrims in 1620, more than 500 species, subspecies and varieties of our nation’s plants and animals are known to have become extinct. In contrast, during the Pleistocene ice age, all of North America lost only about three species every 100 years. This recent, catastrophic loss of biological diversity is continuing at an unprecedented rate. Each and every species has a valuable ecological role in the balance of nature and each loss destabilizes that fragile balance. Once a species is extinct, it is lost forever. Experience has proven that many plants and animals have properties that will prove beneficial to humans as sources of food and medicine. With the loss of each species, we lose a potential resource for improving the quality of life for all humanity. In addition, some species of plants and animals may indicate to us whether or not their environment is healthy. The Karner blue butterfly’s disappearance from fragile pine barren habitat tells us that something is wrong. Protecting pine barrens will affect not only the fate of the Karner blue butterfly, but also that of many other specialized plants and animals. What you can do to help Learn more about the Karner blue butterfly and other rare and endangered plants and animals. The U.S. Fish and Wildlife Service, state wildlife agencies and private conservation organizations are working on programs for protection and management of the Karner blue butterfly. Contact them and learn more. All images, USFWS by (1) Ann B. Swengel, (2) Joel Trick, (3) Ann B. Swengel, (4) Ann B. Swengel
<urn:uuid:a62df826-9174-401f-a00f-311009dea1fc>
3.71875
970
Knowledge Article
Science & Tech.
42.687299
Faster-than-light communication AUG 18 2008 On the basis of their measurements, the team concluded that if the photons had communicated, they must have done so at least 100,000 times faster than the speed of light -- something nearly all physicists thought would be impossible. In other words, these photons cannot know about each other through any sort of normal exchange of information. Update: Hrm, the link above scampered behind Nature's paywall. Here's a post on the Scientific American blog instead.
<urn:uuid:60335297-1f88-4df1-920c-919cb3554f28>
3.6875
104
Personal Blog
Science & Tech.
49.555714
Unit 1: Many Planets, One Earth // Section 10: Further Reading University of California Museum of Paleontology, Web Geological Time Machine, http://www.ucmp.berkeley.edu/help/timeform.html. An era-by-era guide through geologic time using stratigraphic and fossil records. Science Education Resource Center, Carleton College, "Microbial Life in Extreme Environments," http://serc.carleton.edu/microbelife/extreme/index.html. An online compendium of information about extreme environments and the microbes that live in them. James Shreve, "Human Journey, Human Origins," National Geographic, March 2006, http://www7.nationalgeographic.com/ngm/0603/feature2/index.html? fs=www3.nationalgeographic.com&fs=plasma.nationalgeographic.com. An overview of what DNA evidence tells us about human migration out of Africa, with additional online resources.
<urn:uuid:ffc30444-14b5-4c80-ba05-0f9b595091e8>
3
208
Content Listing
Science & Tech.
32.54
> Larger images and animation available from NASA's Earth Observatory Crack in the Petermann Glacier Jesse Allen, using data provided courtesy of NASA/GSFC/METI/ERSDAC/JAROS, and U.S./Japan ASTER Science Team Michon Scott, NASA's Earth Observatory Covering some 1,295 square kilometers (500 square miles) along the northwestern coast of Greenland, Petermann Glacier’s floating ice tongue is the Northern Hemisphere’s largest, and it has occasionally calved large icebergs. Between 2000 and 2001, the glacier lost nearly 87 square kilometers (34 square miles). Between July 10 and July 24, 2008, the glacier lost another 29 square kilometers (11 square miles). Researchers at the Byrd Polar Research Center at Ohio State University, however, expressed greater concern at the presence of a rift farther upstream. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA’s Terra satellite captured this image of the rift on the Petermann Glacier on September 7, 2008. The rift, which appeared by 2001, is filled with thin ice and covered with snow in the close-up image (top). A thin fracture near the edge of the rift, however, indicates that it has continued to widen. After its initial formation, the rift on Petermann Glacier advanced toward the glacier front, widening as it moved. Satellite images from the 1990s show that rifts have developed in this region on the Petermann more than once, but previous rifts evolved differently than this one, which grew wider and longer. Byrd Polar Research Center scientists stated that if this rift extended completely across the glacier, the glacier could lose another 160 square kilometers (60 square miles)—one third of its current length. The larger view (bottom) shows areas of open water along the glacier’s margins, and a profusion of ice fragments beyond the tip of the glacier tongue. As a glacier squeezes past the fjord walls, the interaction of the ice and rock produces backstress that keeps the ice relatively compressed. But as pieces of ice break away from the glacier, the backstress is reduced, and the glacier begins to stretch. The rift on this glacier is evidence of the glacier’s stretching and thinning over time. NASA's Earth Observatory
<urn:uuid:1a2f180a-5d01-4723-928f-fb22ffa9afc0>
4.125
482
Knowledge Article
Science & Tech.
41.923432
Putting Together the Arctic Puzzle The Arctic is arguably one of the most perplexing places on Earth – so puzzling that an entire campaign has been dedicated to learning more about it. Nicola Blake, an associate researcher from the University of California, Irvine, is working on NASA's Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign and tracking one trace gas at a time to uncover the mysteries of the Arctic atmosphere. From Fairbanks, Alaska, Blake boards the NASA DC-8 plane and prepares her equipment for a flight to the Arctic Circle. Blake's responsibility on this campaign is centered on detecting gases that are present in the cold Arctic air. Using a special pump and more than 160 stainless steel canisters, Blake is able to take samples of the outside air every three to four minutes. After the flight, these canisters are shipped back to the University of California, Irvine, for chemical analysis, using a process known as gas chromatography. This analysis can detect about 50 different trace chemicals in each sample, including hydrocarbons from car exhaust, wildfires, and oil and gas extractions. Within the range of gases that Blake measures, she is able to pinpoint even the smallest concentrations of gases to see whether the air mass it came from was clean or polluted. If polluted, she can narrow down which activities, such as industrial pollution or wildfires, contributed to that pollution. Blake is also responsible for looking at the data that have been measured with the instrument in the context of results from other flights. According to Blake, being part of both the collection and analysis phase for these campaigns is advantageous. "Being physically present on a campaign and flying in the aircraft is a huge asset with respect to appreciating what went on during the campaign and helps enormously when it comes to interpreting and reporting our findings." Interpreting and reporting findings are not new to Blake, because ARCTAS is not her first Arctic adventure. She has been taking part in campaigns since she was a graduate student in England. During one such campaign, Blake flew over the Atlantic to the north of Ireland in a twin propeller Jetstream once per month to study air that came from the Arctic. She was also part of a collaborative team based at a research camp at the summit of the Greenland ice sheet. "We all slept in tents pitched on the ice. By comparison, the facilities at the Sophie Station hotel in Fairbanks are luxurious indeed!" Using knowledge from similar campaigns helps to understand and improve experiments. All the measurements and analyses that Blake has taken over the years serve as "parts of the puzzle" that have contributed to an evolving understanding of the effects that humans have on the atmosphere. The Arctic is a special place to study these effects. "It's a fascinating part of the planet from a scientific point of view," says Blake. "I am excited about contributing information as part of the larger effort to better understand Arctic climate change." NASA's Langley Research Center
<urn:uuid:f524053d-fe4c-48c6-9dfc-7d8fd038c7b0>
3.828125
608
Nonfiction Writing
Science & Tech.
37.966941
A NEW form of liquid crystal which behaves like rubber could one day be used to make components and to manipulate light in a new generation of optoelectronic circuits, which use light instead of electricity to communicate. At a conference held by the American Physical Society in St Louis last week, a West German researcher unveiled a new material that could be as important to the development of optoelectronics as silicon was to the growth of electronics., Telecommunications companies currently use components known as waveguides to direct and switch light in optical circuits. However, waveguides are expensive because they need to be made to very precise standards. Heino Finkelmann, a researcher at the University of Freiburg in West Germany, has devised a way to make waveguides as simply and as cheaply as record companies press records. Waveguides channel light in integrated circuits in the same way that a pipe directs ... To continue reading this article, subscribe to receive access to all of newscientist.com, including 20 years of archive content.
<urn:uuid:3c22b700-1195-481b-95f8-564cdc1500c7>
3.828125
210
Truncated
Science & Tech.
30.332941
Bring up a chimpanzee from birth as if it were a human and it will learn many unsimian behaviours, like wearing clothes and even eating with a knife and fork. But one thing it will not do is talk. In fact, it would be physically impossible for a chimp to talk just like us, thanks to differences in our voice boxes and nasal cavities. There are neurological differences too, some of which are the result of changes to what has been dubbed the "language gene". This story began with a British family that had 16 members over three generations with severe speech difficulties. Usually speech problems are part of a broad spectrum of learning difficulties, but the "KE" family, as they came to be known, seemed to have deficits that were more specific. Their speech was unintelligible and they had a hard time understanding others' speech, particularly when it involved ... To continue reading this article, subscribe to receive access to all of newscientist.com, including 20 years of archive content.
<urn:uuid:99341bbc-2370-44f7-aea9-f6f066a3c699>
3.53125
205
Truncated
Science & Tech.
49.816872
The first scientist to alert Americans to the prospect that human-caused climate change and global warming was already upon us was NASA climatologist James Hansen. In a sweltering Senate hall during the hot, dry summer of 1988, Hansen announced that "it is time to stop waffling ... The evidence is pretty strong that the [human-amplified] greenhouse effect is here." At the time, many scientists felt his announcement to be premature. I was among them. I was a young graduate student researching the importance of natural -- rather than human-caused -- variations in temperature, and I felt that the "signal" of human-caused climate change had not yet emerged from the "noise" of natural, long-term climate variation. As I discuss in my book, The Hockey Stick and the Climate Wars, scientists by their very nature tend to be conservative, even reticent, when it comes to discussing findings and observations that lie at the forefront of our understanding and that aren't yet part of the "accepted" body of scientific knowledge. Hansen, it turns out, was right, and the critics were wrong. Rather than being reckless, as some of his critics charged, his announcement to the world proved to be prescient -- and his critics were proven overly cautious. Given the prescience of Hansen's science, we would be unwise to ignore his latest, more dire warning. In a paper published today in the prestigious journal Proceedings of the National Academy of Sciences, Hansen and two colleagues argue convincingly that climate change is now not only upon us, but in fact we are fully immersed in it. Much of the extreme weather we have witnessed in recent years almost certainly contains a human-induced component. Hansen, in his latest paper, shows that the increase in probability of hot summers due to global warming is such that what was once considered an unusually hot summer has now become typical, and what was once considered typical will soon become a thing of the past -- a summer too improbably cool to anymore expect. We need to view this summer's extreme weather in this wider context. It is not simply a set of random events occurring in isolation, but part of a broader emerging pattern. We are seeing, in much of the extreme weather we are experiencing, the "loading of the weather dice." Over the past decade, records for daily maximum high temperatures in the United States have been broken at twice the rate we would expect from chance alone. Think of this as rolling double sixes twice as often as you'd expect -- something you would readily notice in a high stakes game of dice. Thus far this year, that ratio is close to 10 to 1. That's double sixes coming up ten times as often as you expect. So the record-breaking heat this summer over so much of the United States, where records that have stood since the Dust Bowl years of the 1930s are now dropping like flies, isn't just a fluke of nature; it is the loading of the weather dice playing out in real time. The record heat -- and the dry soils associated with it -- played a role in the unprecedented forest fires that wrought death and destruction in Colorado and New Mexico. It played a role in the hot and bone-dry conditions over the nation's breadbasket that has decimated U.S. agricultural yields. It played a role in the unprecedented 50 percent of the United States finding itself in extreme drought. Climate change is also threatening us in other ways of course, subjecting our coastal cities to increased erosion and inundation from rising sea level, and massive flooding events associated with an atmosphere that has warmed by nearly 2˚F, holding roughly 4 percent more water vapor than it used to -- water vapor that is available to feed flooding rains when atmospheric conditions are right. The state of Oklahoma became the hottest state ever with last summer's record heat. It is sadly ironic that the state's senior senator, Republican James Inhofe, has dismissed human-caused climate change as the "greatest hoax ever perpetrated on the American people." Just last week he insisted that concern over the impacts of climate change has "completely collapsed." This as Oklahoma City has just seen 18 days in a row over 100˚F (with more predicted to follow), Tulsa saw 112˚F Sunday, and 11 separate wildfires are burning in the state, with historic Route 66 and other state highways and interstates all closed. The time for debate about the reality of human-caused climate change has now passed. We can have a good faith debate about how to deal with the problem -- how to reduce future climate change and adapt to what is already upon us to reduce the risks that climate change poses to society. But we can no longer simply bury our heads in the sand. This story originally appeared at The Daily Climate. Image: Doug Wheller
<urn:uuid:e2b8aaa2-85ec-4e2a-9151-d7e851069ed4>
2.921875
997
Nonfiction Writing
Science & Tech.
44.033244
Now I don't have to delete all those important ^#$#*&# PMs ... yet... Thank you Magister!!! Helium II is a superfluid, a quantum mechanical state of matter with strange properties . The thermal conductivity of helium II is greater than that of any other known substance, a million times that of helium I and hundred of times that of copper. This is because heat conduction occurs via a quantum mechanism. Second sound is a quantum mechanical phenomenon in which heat transfer occurs by wave-like motion, rather than by the usual mechanism of diffusion. Heat takes the place of pressure in normal sound waves. This leads to very high thermal conductivity. It's known as "second sound" because the wave motion of heat is similar to the propagation of sound in air. Sound waves are fluctuations in the density of molecules in a substance; second sound waves are fluctuations in the density of phonons. Second sound can be observed in any system in which most phonon-phonon collisions conserve momentum. This occurs in superfluids and in dielectric crystals when Umklapp scattering is small.
<urn:uuid:5b1809c0-0e3b-406b-9cc0-e20dc53d2fe3>
2.84375
229
Comment Section
Science & Tech.
54.607778
Sometimes Earth's magnetic poles switch places. That is called a magnetic reversal. These graphs show how often magnetic reversals happen. Black stripes show times when Earth's magnetic field was "normal" (like it is today). White stripes show times when the field was reversed. The graph on the left shows the past 160 million years. The graph on the right shows a close-up of the past 5 million years. Images courtesy of the USGS.
<urn:uuid:7e043e91-1f0c-4ba5-9240-6c52cb23f0c4>
3.421875
91
Knowledge Article
Science & Tech.
74.745132
Silicon–oxygen and aluminum–oxygen compounds exhibit significant XPS Auger and photoelectron chemical shifts that are accurately measurable. Chemical state plots of KLL Auger kinetic energy versus 2p photoelectron energy permit identification of chemical species from the locations of their points on the plots. The KLL Auger electrons of Al and Si were generated by the bremsstrahlung component of the radiation, with conventional instrumentation. The location of points on the plots can be understood on the basis of polarizability of the environment (on the Auger parameter grid of lines, slope +1) and on the basis of the factors contributing to the energy of the final state ion in the Auger transition (a grid of line, slope −1). Tetrahedral aluminum has a significantly smaller Auger parameter than octahedral aluminum, and this difference is repeated, but with reduced magnitude on the similar plots for silicon and oxygen lines for the same compounds. Otherwise, the Auger parameters for this class of compounds are remarkably uniform. The Auger parameter values for oxygen and sodium in these compounds, using the 1s and KLL lines, are relatively small compared to those of other compounds of oxygen and sodium. For compounds of similar Auger parameter, differences in Auger final state ion energy are interpretable on the basis of electron density on aluminum and silicon atoms in the initial state, due to extent of bonding to oxygen, or to amount of negative formal charge on the silicate structure. Inclusion of tetrahedral aluminum enhances the negative charge and decreases the final state ion energy in high alumina zeolites. The difference between the energies of the O1s and Si2p lines in the inorganic silicon compounds is almost invariant, 429.0 to 429.6 eV. The three silicon polymers examined have a significantly larger line difference, 429.8 to 430.1 eV, making possible a differentiation between silicones and silicates. The oxygen KVV lines, with Auger transition final vacancies in valence levels, have shapes characteristic of chemical structure. The uncharged Si–O–Si structure exhibits a well‐defined shoulder; in Al–O–Si the shoulder is so close in energy it merely gives rise to asymmetry in the peak; Al–O–Al and charged Si–O–Si give oxygen KVV lines as single sharp peaks.
<urn:uuid:cb0f144c-d68f-4b2b-bb1c-639376647b8b>
3.140625
488
Academic Writing
Science & Tech.
23.448316
Space Science and Engineering Center website by University of Wisconsin-Madison's Graduate School shows a real time animation of the cloud cover over the North America continent. It gets the satellite images and shows animation by stitching those images together. With the help of it you can get an idea of what weather is going to be like in coming hours by taking a look at the forming clouds, storms near your city area. You can see weather for the hours passed and at the end of day you see what it looked throughout the day. We all know its difficult to predict the weather in our present day world but this really shows how the cloud cover is changing continuously over North America.
<urn:uuid:8a72b0cd-1d73-4e96-9193-1292e2027ff5>
3.046875
133
Personal Blog
Science & Tech.
50.1608