Watch it live tonight: An asteroid that NASA astronomers compared to a city block in size, dubbed 2012 LZ1, is roughly 502 metres wide and will pass within 14 "lunar distances" — multiples of the distance from the Earth to the moon — of our planet. Astronomers first noticed the giant asteroid's flight path only a few days ago. The broadcast of the flyby will be available on Slooh's website starting at 8 p.m. ET, with a live online feed from the high-powered Slooh Space Camera in the Canary Islands, though the asteroid won't be detectable by the naked eye. While 2012 LZ1 will likely come within 5.4 million kilometers of Earth (14 lunar distances × roughly 384,400 km), we are not in danger of an impact. If the past is prelude, there's bound to be a massive collision event from a rogue asteroid at some point in the "near" future unless we successfully intervene. The "Impact Map of the World" above shows most of the 160 impact craters that have been identified since 1950. The bulk of the terrestrial impact craters that ever formed, however, have been obliterated by eons of geological processes. The NASA image below shows the near-Earth asteroid Eros in 2000. The Daily Galaxy via http://neo.jpl.nasa.gov
Emeka Okafor writes "PhysOrg comments on a breakthrough on the path toward DNA computing, with implications for the field of molecular construction methods: '…Caltech assistant professor Erik Winfree and his colleagues show that DNA "tiles" can be programmed to assemble themselves into a crystal bearing a pattern of progressively smaller "triangles within triangles," known as a Sierpinski triangle. This fractal pattern is more complex than patterns found in natural crystals because it never repeats…'" A key feature of the Caltech team's approach is that the DNA tiles assemble into a crystal spontaneously. Comprising a knot of four DNA strands, each DNA tile has four loose ends known as "sticky ends." These sticky ends are what bind one DNA tile to another. A sticky end with a particular DNA sequence can be thought of as a special type of glue, one that only binds to a sticky end with a complementary DNA sequence, a special "anti-glue." In fact, the work is the first experimental demonstration of a theoretical concept that Winfree has been developing since 1995: his proposal that any algorithm can be embedded in the growth of a crystal. This concept, according to Winfree's coauthor and Caltech research fellow Paul W. K. Rothemund, has inspired an entirely new research field, "algorithmic self-assembly," in which scientists study the implications of embedding computation into crystal growth.
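The Sierpinski pattern grows from a simple local rule: each new tile binds to two tiles in the previous layer and encodes their exclusive-or (XOR). Here is a minimal sketch of that XOR rule in Java — an illustration of the abstract algorithm, not the team's actual tile chemistry:

```java
public class SierpinskiXor {
    public static void main(String[] args) {
        int rows = 16;
        int[] prev = new int[2 * rows + 1];
        prev[rows] = 1; // seed "tile" in the middle of the first row

        for (int r = 0; r < rows; r++) {
            StringBuilder line = new StringBuilder();
            for (int c = 0; c < prev.length; c++) {
                line.append(prev[c] == 1 ? '#' : ' ');
            }
            System.out.println(line);

            // Each new cell is the XOR of its two upper neighbours --
            // the local rule the DNA tiles implement via their sticky ends.
            int[] next = new int[prev.length];
            for (int c = 1; c < prev.length - 1; c++) {
                next[c] = prev[c - 1] ^ prev[c + 1];
            }
            prev = next;
        }
    }
}
```

Run for enough rows, this prints the never-repeating triangles-within-triangles fractal described above.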
Summary: Math 1550 Fall 2005 Section 31 P. Achar Exam 2 Solutions October 4, 2005 Total points: 50 Time limit: 1 hour No calculators, books, notes, or other aids are permitted. You must show your work and justify your steps to receive full credit. 1. (4 points) Short answer. Warning: At least one of the questions below is a trick question. If you think a question has no meaningful answer, write the words "TRICK QUESTION" as your answer. (a) State both limit definitions of f′(a). $$f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h} = \lim_{x \to a} \frac{f(x) - f(a)}{x - a}$$ (b) Suppose the graph of h(x) passes through the point (5, 3), and the tangent line to the graph at this point is given by y = -3x + 18. What is h′(5)?
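Part (b) follows from the fact that the derivative at a point is the slope of the tangent line there; a brief worked step (the solution itself is cut off in this excerpt):

$$h'(5) = \text{slope of the tangent line at } x = 5 = -3, \qquad \text{check: } -3(5) + 18 = 3 = h(5).$$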
Mechanics: Momentum and Collisions Momentum and Collisions: Problem Set Overview This set of 32 problems targets your ability to use the momentum equation and the impulse-momentum change theorem in order to analyze physical situations involving collisions and impulses, to use momentum conservation principles to analyze a collision or an explosion, to combine a momentum analysis with other forms of analysis (Newton's laws, kinematics, etc.) to solve for an unknown quantity, and to analyze two-dimensional collisions. The more difficult problems are color-coded as blue problems. An object which is moving has momentum. The amount of momentum (p) possessed by the moving object is the product of mass (m) and velocity (v). In equation form: p = m • v An equation such as the one above can be treated as a sort of recipe for problem-solving. Knowing the numerical values of all but one of the quantities in the equation allows one to calculate the final quantity. An equation can also be treated as a statement which describes qualitatively how one variable depends upon another. Two quantities in an equation could be thought of as being either directly proportional or inversely proportional. Momentum is directly proportional to both mass and velocity. A two-fold or three-fold increase in the mass (with the velocity held constant) will result in a two-fold or a three-fold increase in the amount of momentum possessed by the object. Similarly, a two-fold or three-fold increase in the velocity (with the mass held constant) will result in a two-fold or a three-fold increase in the amount of momentum possessed by the object. Thinking and reasoning proportionally about quantities allows you to predict how an alteration in one variable would affect another variable. Impulse-Momentum Change Equation In a collision, a force acts upon an object for a given amount of time to change the object's velocity. The product of force and time is known as impulse. The product of mass and velocity change is known as momentum change. In a collision the impulse encountered by an object is equal to the momentum change it experiences. Impulse = Momentum Change F • t = m • Δv Several problems in this set test your understanding of the above relationship. In many of these problems, a piece of extraneous information is provided. Without an understanding of the above relationships, you will be tempted to force such information into your calculations. Physics is about conceptual ideas and relationships, and these problems test your mathematical understanding of those relationships. If you treat this problem set as a mere exercise in the algebraic manipulation of physics equations, then you are likely to become frustrated quickly. As you proceed through this problem set, be concepts-minded. Do not strip physics of its conceptual meaning. Several of the problems in this set demand that you be able to calculate the velocity change of an object. This calculation becomes particularly challenging when the collision involves a rebounding effect - that is, the object is moving in one direction before the collision and in the opposite direction after the collision. Velocity is a vector and is distinguished from speed in that it has a direction associated with it. This direction is often expressed in mathematics as a + or - sign. In a collision, the velocity change is always computed by subtracting the initial velocity value from the final velocity value.
If an object is moving in one direction before a collision and rebounds or somehow changes direction, then its velocity after the collision is opposite in direction to its velocity before. Mathematically, the before-collision velocity would be + and the after-collision velocity would be - in sign. Ignoring this principle will result in great difficulty when analyzing any collision involving the rebounding of an object. The Momentum Conservation Principle In a collision between two objects, each object is interacting with the other object. The interaction involves a force acting between the objects for some amount of time. This force and time constitutes an impulse and the impulse changes the momentum of each object. Such a collision is governed by Newton's laws of motion; and as such, the laws of motion can be applied to the analysis of the collision (or explosion) situation. So with confidence it can be stated that ... In a collision between object 1 and object 2, the force exerted on object 1 (F1) is equal in magnitude and opposite in direction to the force exerted on object 2 (F2). In equation form: F1 = - F2 The above statement is simply an application of Newton's third law of motion to the collision between objects 1 and 2. Now in any given interaction, the forces which are exerted upon an object act for the same amount of time. You can't contact another object and not be contacted yourself (by that object). And the duration of time during which you contact the object is the same as the duration of time during which that object contacts you. Touch a wall for 2.0 seconds, and the wall touches you for 2.0 seconds. Such a contact interaction is mutual; you touch the wall and the wall touches you. It's a two-way interaction - a mutual interaction; not a one-way interaction. Thus, it is simply logical to state that in a collision between object 1 and object 2, the time during which the force acts upon object 1 (t1) is equal to the time during which the force acts upon object 2 (t2). In equation form: t1 = t2 The basis for the above statement is simply logic. Now we have two equations which relate the forces exerted upon individual objects involved in a collision and the times over which these forces occur. It is accepted mathematical logic to state the following: If A = - B and C = D then A • C = - B • D The above logic is fundamental to mathematics and can be used here to analyze our collision. If F1 = - F2 and t1 = t2 then F1 • t1 = - F2 • t2 The above equation states that in a collision between object 1 and object 2, the impulse experienced by object 1 (F1 • t1) is equal in magnitude and opposite in direction to the impulse experienced by object 2 (F2 • t2). Objects encountering impulses in collisions will experience a momentum change. The momentum change is equal to the impulse. Thus, if the impulse encountered by object 1 is equal in magnitude and opposite in direction to the impulse experienced by object 2, then the same can be said of the two objects' momentum changes. The momentum change experienced by object 1 (m1 • Δv1) is equal in magnitude and opposite in direction to the momentum change experienced by object 2 (m2 • Δv2). This statement could be written in equation form as m1 • Δv1 = - m2 • Δv2 This equation claims that in a collision, one object gains momentum and the other object loses momentum. The amount of momentum gained by one object is equal to the amount of momentum lost by the other object.
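A minimal numerical sketch of these two ideas — the sign convention for a rebound, and the equal-and-opposite momentum changes — using made-up masses and velocities (not values from this problem set):

```java
public class CollisionDemo {
    public static void main(String[] args) {
        // Made-up example: a 0.5 kg ball rebounds off a 2.0 kg cart.
        double m1 = 0.5, v1Before = 4.0, v1After = -2.0;  // rebound: the sign flips
        double m2 = 2.0, v2Before = 0.0;

        // Velocity change is always final minus initial.
        double deltaV1 = v1After - v1Before;               // -6.0 m/s
        double deltaP1 = m1 * deltaV1;                     // -3.0 kg·m/s

        // Conservation: object 2's momentum change is equal and opposite.
        double deltaP2 = -deltaP1;                         // +3.0 kg·m/s
        double v2After = v2Before + deltaP2 / m2;          // +1.5 m/s

        double pBefore = m1 * v1Before + m2 * v2Before;    // 2.0 kg·m/s
        double pAfter  = m1 * v1After  + m2 * v2After;     // 2.0 kg·m/s
        System.out.println("p before = " + pBefore + ", p after = " + pAfter);
    }
}
```

The total momentum printed before and after is identical, which is the conservation principle developed next.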
The total amount of momentum possessed by the two objects does not change. Momentum is simply transferred from one object to the other object. Put another way, it could be said that when a collision occurs between two objects in an isolated system, the sum of the momentum of the two objects before the collision is equal to the sum of the momentum of the two objects after the collision. If the system is indeed isolated from external forces, then the only forces contributing to the momentum change of the objects are the interaction forces between the objects. As such, the momentum lost by one object is gained by the other object and the total system momentum is conserved. And so the sum of the momentum of object 1 and the momentum of object 2 before the collision is equal to the sum of the momentum of object 1 and the momentum of object 2 after the collision. The following mathematical equation is often used to express the above principle. m1 • v1 + m2 • v2 = m1 • v1' + m2 • v2' The symbols m1 and m2 in the above equation represent the mass of objects 1 and 2. The symbols v1 and v2 in the above equation represent the velocities of objects 1 and 2 before the collision. And the symbols v1' and v2' in the above equation represent the velocities of objects 1 and 2 after the collision. (Note that a ' symbol is used to indicate after the collision.) Momentum is a vector quantity; it is fully described by both a magnitude (numerical value) and a direction. The direction of the momentum vector is always in the same direction as the velocity vector. Because momentum is a vector, the addition of two momentum vectors is conducted in the same manner by which any two vectors are added. For situations in which the two vectors are in opposite directions, one vector is considered negative and the other positive. Successful solutions to many of the problems in this set demand that attention be given to the vector nature of momentum. Two-Dimensional Collision Problems A two-dimensional collision is a collision in which the two objects are not originally moving along the same line of motion. They could be initially moving at right angles to one another or at least at some angle (other than 0 degrees and 180 degrees) relative to one another. In such cases, vector principles must be combined with momentum conservation principles in order to analyze the collision. The underlying principle of such collisions is that both the "x" and the "y" momentum are conserved in the collision. The analysis involves determining pre-collision momentum for both the x- and the y- directions. If the collision is perfectly inelastic, then the total amount of system momentum before the collision (and after) can be determined by using the Pythagorean theorem. Since the two colliding objects travel together in the same direction after the collision, the total momentum is simply the total mass of the objects multiplied by their velocity. Momentum Plus Problems A momentum plus problem is a problem type in which the analysis and solution includes a combination of momentum conservation principles and other principles of mechanics. Such a problem typically involves two analyses which must be conducted separately. One of the analyses is a collision analysis to determine the speed of one of the colliding objects before or after the collision. The second analysis typically involves Newton's laws and/or kinematics.
These two models (Newton's laws and kinematics) allows a student to make a prediction about how far an object will slide or how high it will roll after the collision with the other object. When solving momentum plus problems, it is important to take the time to identify the known and the unknown quantities. It is helpful to organize such known quantities in two columns - a column for information pertaining to the collision analysis and a column for information pertaining to the Newton's law and/or kinematic analysis. Habits of an Effective Problem-Solver An effective problem solver by habit approaches a physics problem in a manner that reflects a collection of disciplined habits. While not every effective problem solver employs the same approach, they all have habits which they share in common. These habits are described briefly here. An effective problem-solver... - ...reads the problem carefully and develops a mental picture of the physical situation. If needed, they sketch a simple diagram of the physical situation to help visualize it. - ...identifies the known and unknown quantities in an organized manner, often times recording them on the diagram itself. They equate given values to the symbols used to represent the corresponding quantity (e.g., m = 1.50 kg, vi = 2.68 m/s, F = 4.98 N, t = 0.133 s, vf = ???). - ...plots a strategy for solving for the unknown quantity; the strategy will typically center around the use of physics equations be heavily dependent upon an understaning of physics principles. - ...identifies the appropriate formula(s) to use, often times writing them down. Where needed, they perform the needed conversion of quantities into the proper unit. - ...performs substitutions and algebraic manipulations in order to solve for the unknown quantity. Additional Readings/Study Aids: The following pages from The Physics Classroom tutorial may serve to be useful in assisting you in the understanding of the concepts and mathematics associated with these problems. - Impulse-Momentum Change Equation - Real World Applications - Momentum Conservation Principle - Isolated Systems - Collision Analysis - Explosion Analysis - Vector Addition - Newton's Second Law - Kinematic Equations
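As an illustration, the sample knowns quoted in the habits list above can be run through the impulse-momentum change equation (F • t = m • Δv). This is a minimal sketch using exactly those example values, assuming the force acts along the direction of motion:

```java
public class ImpulseMomentum {
    public static void main(String[] args) {
        // Sample knowns from the list above.
        double m  = 1.50;   // kg
        double vi = 2.68;   // m/s
        double F  = 4.98;   // N (assumed to act in the direction of motion)
        double t  = 0.133;  // s

        // F • t = m • Δv  =>  Δv = F • t / m,  and  vf = vi + Δv
        double deltaV = F * t / m;
        double vf = vi + deltaV;
        System.out.printf("Δv = %.3f m/s, vf = %.2f m/s%n", deltaV, vf);
        // Prints: Δv = 0.442 m/s, vf = 3.12 m/s
    }
}
```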
CASSINI AND HUYGENS, THE PIGGYBACK PROBE For seven years and two billion miles, the Cassini spacecraft carried the Huygens lander up to Saturn. Last December, six months after Cassini began its four-year study of Saturn, it released Huygens—a probe the size of a Volkswagen Beetle—on a trajectory toward Saturn's largest moon, Titan. When Huygens entered Titan's atmosphere 21 days later, it relayed pictures and data back to Earth through Cassini. Better Than Evian
In this UV image of the rings, blue regions are water ice; red, empty space. Scientists can't explain how the ice stays 99 percent pure though bombarded by meteorites.
Scientists think that this storm, which flares up and recedes every few months, creates electrical eruptions that may cause powerful radio bursts.
Apr. 26, 2007 Recent research published in the journal Biotropica addresses commercial hunting in the tropics, including its direct impacts on vertebrates and indirect impacts on plants. Many of the birds and mammals found in tropical ecosystems are frugivores, animals that disperse seeds rather than eating and killing them. These same animals are hunted at unsustainable rates virtually throughout the tropics. Researchers Richard Corlett, and Carlos Peres and Erwin Palacios, review the consequences this has for tropical Asia and the Amazon, respectively, and consider the pervasive consequences for plants. In tropical Asia, commercial hunting for large-scale regional trade in wild animals has replaced traditional subsistence hunting. Most species are being hunted illegally at unsustainable levels, and enforcement is weak in many areas. Reductions in the current rates of deforestation and logging will not be enough to save many of the region's animals from extinction. Ending the trade in wild animals and their parts should be the number one conservation priority in tropical Asia. Using more than 100 forest sites scattered across the Amazon, the authors show that most large game birds and mammals have been severely reduced to a small fraction of their original population densities, often just 1–5 percent of the densities of the same species in similar protected forests. For plant species with large seeds encased in fleshy fruits, seed dispersal depends entirely on vertebrates. Thus, hunting invariably alters relative seed dispersal distances among different plant species. Hunting is already changing the plant species composition of tropical forests worldwide. As the composition of plant species changes, the forests may no longer provide the fruits and seeds necessary to sustain populations of frugivorous and granivorous vertebrates.
In a 2006 article in JGR, Aslak Grinsted, John Moore, Veijo Pohjola, Tonu Martma and Elisabeth Isaksson study several climate indicators from the Lomonosovfonna ice field in Svalbard, shown below with their caption: Figure 5. Fifteen-year moving averages of Lomonosovfonna ice core data. (a) Oxygen isotopes, (b) continentality proxy (A), (c) stratigraphic melt indices (SMI), and (d) washout indices (solid line is W_NaMg, and dashed line is W_ClK). In the oldest part of the core (1130-1200), the washout indices are more than 4 times as high as those seen during the last century, indicating a high degree of runoff. Since 1997 we have performed regular snow pit studies [Virkkunen, 2004], and the very warm 2001 summer resulted in similar loss of ions and washout ratios as the earliest part of the core. This suggests that the Medieval Warm Period [Jones and Mann, 2004] in Svalbard summer conditions were as warm (or warmer) as present-day, consistent with the Northern Hemisphere temperature reconstruction of Moberg et al. Although the Svalbard ice core record extends back to 1130, a 2009 paper in Climate Dynamics, by Grinsted and 3 of the same authors plus M. Macias Fauria, S. Helama, M. Timonen, and M. Eronen, utilizes the same ice core record to infer winter sea ice extent, yet omits the distinctively "warm" first 7 decades of the record. It concludes, "The twentieth century sustained the lowest sea ice extent values since A.D. 1200." My question for Dr. Grinsted and any of his co-authors who might drop in is, why did the first 7 decades of the core disappear between 2006 and 2009? Is it because they contradict the IPCC/AIT line that there was no MWP to speak of? Dr. Grinsted does occasionally visit CA, and contributed several helpful comments clarifying his smoothing algorithm on the 7/3 thread The Secret of the Rahmstorf 'Non-Linear Trend Line'. BTW, has the Lomonosovfonna core data ever been archived? I gathered from Steve's post that it has not. I might add that Craig Loehle and I (see Loehle 2007, Loehle and McCulloch 2008) have reconfirmed the existence of a MWP, using twice as many proxies as Moberg et al. Craig selected the proxies and did the smoothing, while I contributed standard errors to the 2008 correction, showing that the MWP and LIA were both significant relative to the bimillennial average. We did not use Lomonosovfonna, but it could be a useful addition to future such studies, if calibrated to temperature and archived. Update: I would like to thank Dr. Grinsted for responding at length in comment #25 below, as follows: It is curious that we find that it was warm in Svalbard during the MWP but we do not see a low sea ice extent in our sea ice reconstruction. I would have expected it to be lower even though it does not extend quite as far back. When I was interviewed by the Danish press, I pointed this out as the most surprising result. But I do not see a conflict between those two observations. We had several considerations that led us to restrict the sea ice reconstruction to 1200. We knew that the oldest data 1100-1200 was influenced by melt to such a degree that ions were being flushed from the ice (Grinsted et al. 2006 and the figure shown on this blog). That made us cautious of whether the isotope data might be influenced by post-depositional processes as well. 1200 seemed a natural choice for the cut-off (see above fig). The dating model is expected to perform more poorly near the bed.
We believe the Lomonosovfonna dating to be quite accurate around 1200, since we have identified a sulphuric peak that we believe to be the 1259 eruption (Moore et al., JGR 2006). However, it is very important for the reconstruction procedure that the dating is correct to within 5 years. Otherwise we might try to reconstruct the past ice extent using a lag between ice core and tree ring data that is inconsistent with the one used in the calibration period. The primary reason to do the 5-year smoothing was to make the reconstruction more robust against small dating errors. The dating could still be good prior to 1200 AD; however, we did not have confidence that the errors would be in an acceptable range for the treatment we were planning and therefore excluded this data. Note that 1200 AD is only 2 m above the max depth of the ice core. The layers get compressed near the bed and the temporal resolution decreases back in time. For the reconstruction we needed at least 5-year resolution, because that is what we chose in the calibration period. At 1200 AD the d18O temporal resolution is 3-4 years per sample. That is OK, but not very good when we want to resample to 5-year averages. @Hu (4): it is also asked why I only showed post-1400 d18O in my JGR 2006 paper. The reason is that E. Isaksson wanted to publish this herself before anybody else could get access to it. That simple. This is also where I will redirect all requests for the isotope data. He also thoughtfully replies to several questions posed by readers in comments #25, 27, 30, 49, and 59 below. Update 2: In a subsequent paper with K. Virkkunen et al., Dr. Grinsted and co-authors report on the washout factor from two pits at the summit of Lomonosovfonna that update the original core, which was drilled in 1997. An announcement, entitled "Present day summers in Svalbard are as warm as those during the medieval warm period", is on Dr. Grinsted's website, with a link to the full paper. Note that since the horizontal scale is depth in meters rather than inferred calendar date, the present is at the left, while the 12th century is at the right and highly compressed. It is not clear that this is the same as either of the washout measures shown in Figure 5 from Grinsted 2006 above, however, since neither of those has conspicuous up-spikes corresponding to the ones this update shows at around 25 and 32 meters.
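For readers curious about the mechanics, both the 15-year moving averages in Figure 5 and the 5-year resampling described in the comment are windowed means over an annual series. The following is a minimal illustrative sketch of a centered moving average; it is not the authors' actual smoothing code (Dr. Grinsted's own algorithm is discussed on the thread linked above):

```java
public class MovingAverage {
    // Centered moving average with the given odd window (e.g., 15 years),
    // computed only where the full window fits inside the series.
    static double[] smooth(double[] series, int window) {
        int half = window / 2;
        double[] out = new double[series.length];
        for (int i = half; i < series.length - half; i++) {
            double sum = 0;
            for (int j = i - half; j <= i + half; j++) sum += series[j];
            out[i] = sum / window;
        }
        return out;
    }

    public static void main(String[] args) {
        // Made-up annual values, purely to exercise the routine.
        double[] annual = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17};
        System.out.println(java.util.Arrays.toString(smooth(annual, 15)));
    }
}
```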
OGNL stands for Object-Graph Navigation Language; it is an expression language for getting and setting properties of Java objects, plus other extras such as list projection and selection and lambda expressions. You use the same expression for both getting and setting the value of a property. The Ognl class contains convenience methods for evaluating OGNL expressions. You can do this in two stages, parsing an expression into an internal form and then using that internal form to either set or get the value of a property; or you can do it in a single stage, and get or set a property using the String form of the expression directly. OGNL started out as a way to set up associations between UI components and controllers using property names. As the desire for more complicated associations grew, Drew Davidson created what he called KVCL, for Key-Value Coding Language, egged on by Luke Blanshard. Luke then reimplemented the language using ANTLR, came up with the new name, and, egged on by Drew, filled it out to its current state. Later on, Luke reimplemented the language once more using JavaCC. Further maintenance on all the code is done by Drew (with spiritual guidance from Luke). We pronounce OGNL as a word, like the last syllables of a drunken pronunciation of "orthogonal". Many people have asked exactly what OGNL is good for; beyond simple property access, most of what you can do in Java is possible in OGNL, plus the extras mentioned above, such as list projection, selection, and lambda expressions.
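A minimal sketch of the two styles of evaluation described above, assuming the ognl library is on the classpath and that its String-based and parsed-expression convenience overloads are available; the Person bean is a hypothetical stand-in for the object graph:

```java
import ognl.Ognl;
import ognl.OgnlException;

public class OgnlDemo {
    // Hypothetical bean used as the root of the object graph.
    public static class Person {
        private String name = "Ada";
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) throws OgnlException {
        Person root = new Person();

        // Single stage: evaluate the String form of the expression directly.
        System.out.println(Ognl.getValue("name", root));   // Ada

        // Two stages: parse once into an internal form, then reuse that form
        // for both setting and getting -- the same expression does both.
        Object tree = Ognl.parseExpression("name");
        Ognl.setValue(tree, root, "Grace");
        System.out.println(Ognl.getValue(tree, root));     // Grace
    }
}
```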
Stars can be classified by their surface temperatures as determined from Wien's Displacement Law, but this poses practical difficulties for distant stars. Spectral characteristics offer a way to classify stars which gives information about temperature in a different way - particular absorption lines can be observed only for a certain range of temperatures because only in that range are the involved atomic energy levels populated. The standard classes, with approximate surface temperatures, are O (above about 30,000 K), B (about 10,000-30,000 K), A (about 7,500-10,000 K), F (about 6,000-7,500 K), G (about 5,200-6,000 K), K (about 3,700-5,200 K) and M (about 2,400-3,700 K). A popular mnemonic for remembering the order is "Oh Be A Fine Girl (Guy), Kiss Me". There are many variants of this mnemonic. The reason for the odd arrangement of letters is historical. When people first started taking spectra of stars, they noticed that stars had very different hydrogen spectral line strengths, and so they classified stars based on the strength of the hydrogen Balmer series lines from A (strongest) to Q (weakest). Other lines of neutral and ionized species then came into play (the H and K lines of calcium, the sodium D lines, etc.). Later it was found that some of the classes were actually duplicates and those classes were removed. It was only much later that it was discovered that the strength of the hydrogen line was connected with the surface temperature of the star. The basic work was done by the "girls" of Harvard College Observatory, primarily Annie J. Cannon and Antonia Maury, based on the work of Williamina Fleming. These classes are further subdivided by Arabic numerals (0-9). A0 denotes the hottest stars in the A class and A9 denotes the coolest ones. More recently, the classification was extended into O B A F G K M L T, where L and T are extremely cool stars or brown dwarves. Class O stars are very hot and very luminous, being strongly blue in colour. Naos (in Puppis) shines with a power close to a million times solar. These stars have prominent ionized and neutral helium lines and only weak hydrogen lines. Class O stars emit most of their radiation in ultraviolet. B stars are again extremely luminous; Rigel (in Orion) is a prominent B class blue supergiant. Their spectra have neutral helium and moderate hydrogen lines. As O and B stars are so powerful, they live for a very short time. They do not stray far from the area in which they were formed, as they don't have the time. They therefore tend to cluster together in what we call OB associations, which are associated with giant molecular clouds. The Orion OB1 association is an entire spiral arm of our Galaxy (brighter stars make the spiral arms look brighter; there aren't more stars there) and contains all of the constellation of Orion. Class A stars are amongst the more common naked eye stars. Deneb in Cygnus is another star of formidable power, while Sirius is also an A class star, but not nearly as powerful. As with all class A stars, they are white. Many white dwarves are also A. They have strong hydrogen lines and also ionized metals. F stars are still quite powerful but they tend to be main sequence stars, such as Fomalhaut in Piscis Austrinus. Their spectra are characterized by weaker hydrogen lines and ionized metals; their colour is white with a slight tinge of yellow. Class G stars are probably the best known, if only because our Sun is of this class. They have even weaker hydrogen lines than F but, along with the ionized metals, they have neutral metals. G is host to the "Yellow Evolutionary Void". Supergiant stars often swing between O or B (blue) and K or M (red).
While they do this, they do not stay for long in the G classification, as this is an extremely unstable place for a supergiant to be. Class K stars are slightly cooler than our Sun; they're orange stars. Some K stars are giants and supergiants, such as Arcturus, while others like Alpha Centauri B are main sequence stars. They have extremely weak hydrogen lines, if present at all, and mostly neutral metals. Class M is by far the most common class if we go by the number of stars. All our red dwarves go in here and they are plentiful; more than 90% of stars are red dwarves, such as Proxima Centauri. M is also host to most giants and some supergiants such as Antares and Betelgeuse, as well as Mira variables. The spectrum of an M star shows lines belonging to molecules and neutral metals, but hydrogen is usually absent. Titanium oxide can be strong in M stars. Right at the bottom of the scale is T. These are stars barely big enough to be stars, and others that are substellar, being of the brown dwarf variety. They are black, emitting little or no visible light but being strongest in infrared. Their surface temperature is a stark contrast to the fifty thousand degrees or more for O stars, being a cool 700 degrees Celsius. Complex molecules can form, evidenced by the strong methane lines in their spectra. T and L could be more common than all the other classes combined, if recent research is accurate. From studying the number of proplyds (clumps of gas in nebulae from which stars are formed), the number of stars in the galaxy should be several orders of magnitude higher than what we know about. It's theorised that these proplyds are in a race with each other. The first one to form will become a proto-star; proto-stars are very violent objects and will disrupt other proplyds in the vicinity, stripping them of their gas. The victim proplyds will then probably go on to become main sequence stars or brown dwarf stars of the L and T classes, but quite invisible to us. Since they live so long (no star below 0.8 solar masses has ever died in the history of the galaxy), these smaller stars will accumulate over time. Also occasionally used are the stellar classifications R, N and S. R and N stars are carbon stars (that is, giants) which run parallel to the normal classification system from roughly mid G to late M. These have more recently been remapped into a unified carbon classifier C, with N0 starting at roughly C6. S stars have ZrO lines rather than TiO, and are in between the M stars and the carbon stars. In S stars, the carbon and oxygen abundances are almost exactly equal, and both elements are locked up almost entirely in CO molecules. For stars cool enough for CO to form, that molecule tends to "eat up" all of whichever element is less abundant, resulting in "leftover oxygen" on the normal main sequence, "leftover carbon" on the C sequence, and "leftover nothing" on the S sequence. In reality the relation between these stars and the traditional main sequence suggests a rather large continuum of carbon abundance and, if fully explored, would add another dimension to the stellar classification system. The Yerkes spectral classification, also called the MKK system, is a system of stellar spectral classification introduced in 1943 by William W. Morgan, Philip C. Keenan and Edith Kellman of Yerkes Observatory.
This classification is based on spectral lines sensitive to stellar surface gravity, which is related to luminosity, as opposed to the Harvard classification, which is based on surface temperature. Since the radius of a giant star is much larger than that of a dwarf star while their masses are roughly comparable, the gravity, and thus the gas density and pressure, on the surface of a giant star are much lower than for a dwarf. These differences manifest themselves in the form of luminosity effects which affect both the width and the intensity of spectral lines, which can then be measured. Six different luminosity classes are distinguished: Ia (most luminous supergiants), Ib (less luminous supergiants), II (bright giants), III (normal giants), IV (subgiants) and V (main sequence stars, or dwarves).
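As the article's opening notes, surface temperature can be read off a star's thermal spectrum via Wien's Displacement Law, λ_max = b/T. A small sketch with illustrative temperatures (the class boundaries above are approximate):

```java
public class WienPeak {
    // Wien's displacement constant, in metre-kelvins.
    static final double B = 2.897771955e-3;

    public static void main(String[] args) {
        // Representative surface temperatures (K) for a hot O star,
        // the Sun (class G), and a cool M dwarf -- illustrative values only.
        double[] temps = {40000, 5800, 3000};
        for (double t : temps) {
            double lambdaMax = B / t;   // peak wavelength in metres
            System.out.printf("T = %6.0f K -> peak wavelength = %4.0f nm%n",
                              t, lambdaMax * 1e9);
        }
    }
}
```

The O-star peak (~72 nm) lands deep in the ultraviolet, the Sun's near 500 nm, and the M dwarf's in the infrared, matching the colour descriptions above.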
Asymmetric Ashes (artist's impression) Artist's impression of how Type Ia supernovae may look, as revealed by spectro-polarimetry observations. The outer regions of the blast cloud are asymmetric, with different materials found in 'clumps', while the inner regions are smooth. Using observations of 17 supernovae made over more than 10 years with ESO's Very Large Telescope and the McDonald Observatory's Otto Struve Telescope, astronomers inferred the shape and structure of the debris cloud thrown out from Type Ia supernovae. Such supernovae are thought to be the result of the explosion of a small and dense star — a white dwarf — inside a binary system. As its companion continuously spills matter onto the white dwarf, the white dwarf reaches a critical mass, leading to a fatal instability and the supernova. But what sparks the initial explosion, and how the blast travels through the star, have long been thorny issues. The study shows that the outer regions of the blast cloud are asymmetric, with different materials found in 'clumps', while the inner regions are smooth. About the Image Release date: 30 November 2006 Size: 2427 x 1686 px About the Object Type: Star : Evolutionary Stage : Supernova
Brought to you by the Organic Reactions Wiki, the online collection of organic reactions Electrophilic fluorination is the combination of a carbon-centered nucleophile with an electrophilic source of fluorine to afford organofluorine compounds. Although elemental fluorine and reagents incorporating an oxygen-fluorine bond can be used for this purpose, they have largely been replaced by reagents containing a nitrogen-fluorine bond. Electrophilic fluorination offers an alternative to nucleophilic fluorination methods employing alkali or ammonium fluorides and methods employing sulfur fluorides for the preparation of organofluorine compounds. Development of electrophilic fluorination reagents has always focused on removing electron density from the atom attached to fluorine; however, compounds containing nitrogen-fluorine bonds have proven to be the most economical, stable, and safe electrophilic fluorinating agents. Electrophilic N-F reagents are either neutral or cationic and may possess either sp2- or sp3-hybridized nitrogen. Although the precise mechanism of electrophilic fluorination is currently unclear, highly efficient and stereoselective methods have been developed.(1) The most common fluorinating agents used for organic synthesis are N-fluoro-o-benzenedisulfonimide (NFOBS), N-fluorobenzenesulfonimide (NFSI), and Selectfluor.(2) Mechanism and Stereochemistry The mechanism of electrophilic fluorination remains controversial. At issue is whether the reaction proceeds via an SN2 or single-electron transfer (SET) process. In support of the SN2 mechanism, aryl Grignard reagents and aryllithiums give similar yields of fluorobenzene in combination with N-fluoro-o-benzenedisulfonimide (NFOBS), even though the tendencies of these reagents to participate in SET processes differ substantially. Additionally, radical probe experiments with 5-hexenyl and cyclopropyl enol ethers did not give any rearranged products.(3) On the other hand, the lifetime of radicals in the SET process is predicted to be four orders of magnitude shorter than the detection limit of even the most sensitive of radical probes. It has been postulated that after electron transfer, immediate recombination of the fluorine radical with the alkyl radical takes place.(4) Stereoselective fluorinations may be either diastereoselective or enantioselective. Diastereoselective methods have focused on the use of chiral auxiliaries on the nucleophilic substrate. For fluorinations of carbonyl compounds, chiral oxazolidinones have been used with success.(5) Tandem conjugate addition incorporating a chiral nucleophile has been used to synthesize β-amino α-fluoro esters in chiral, non-racemic form.(6) Enantioselective methods employ stoichiometric amounts of chiral fluorinating agents. N-fluoroammonium salts of cinchona alkaloids represent the state of the art for reactions of this type. In addition, these reagents are easily synthesized from Selectfluor and the parent alkaloids.(7) Scope and Limitations Electrophilic N-F fluorinating reagents incorporate electron-withdrawing groups attached to nitrogen to decrease the electron density on fluorine. Although N-fluorosulfonamides are fairly weak fluorinating reagents, N-fluorosulfonimides, such as N-fluorobenzenesulfonimide (NFSI), are very effective and in common use. 
N-fluoro-o-benzenedisulfonimide (NFOBS) is synthesized from the disulfonic acid.(8) The use of salts of cationic nitrogen increases the rates and yields of electrophilic fluorination, because the cationic nitrogen removes electron density from fluorine. N-fluoropyridinium ions and iminium ions can also be used as electrophilic fluorinating reagents. The counteranions of these salts, although they are not directly involved in the transfer of fluorine to the substrate, influence reactivity in subtle ways and may be adjusted using a variety of methods.(9) The most synthetically useful ammonium salts are the substituted DABCO bis(ammonium) ions, including Selectfluor. These can be easily synthesized by alkylation followed by fluorination. The difluoro version, which might at first seem more useful, delivers only a single fluorine atom.(10) More specialized electrophilic fluorinating reagents, such as neutral heterocycles containing N–F bonds, are useful for the fluorination of a limited range of substrates. Simple fluorinations of alkenes often produce complex mixtures of products. However, cofluorination in the presence of a nucleophile proceeds cleanly to give vicinal alkoxyfluorides. Alkynes are not fluorinated with N-F reagents. A surfactant was used to facilitate contact between aqueous Selectfluor and the alkene.(11) Fluorination of electron-rich aromatic compounds gives aryl fluorides. The two most common problems in this class of reactions are low ortho/para selectivities and dearomatization (the latter is a particularly significant problem for phenols).(12) Enol ethers and glycals are nucleophilic enough to be fluorinated by Selectfluor. Similar to other alkenes, cohalogenation can be accomplished either by isolation of the intermediate adduct and reaction with a nucleophile or by direct displacement of DABCO in situ. Enols can be fluorinated enantioselectively (see above) in the presence of a chiral fluorinating agent.(13) Metal enolates are compatible with many fluorinating reagents, including NFSI, NFOBS, and sulfonamides. However, the specialized reagent 2-fluoro-3,3-dimethyl-2,3-dihydrobenzo[d]isothiazole 1,1-dioxide consistently affords better yields of monofluorinated carbonyl compounds in reactions with lithium enolates. Other metal enolates afforded large amounts of difluorinated products.(14) Comparison with Other Methods Although the use of molecular fluorine as an electrophilic fluorine source is often the cheapest and most direct method, F2 often forms radicals and reacts with C-H bonds without selectivity. Proton sources or Lewis acids are required to suppress radical formation, and even when these reagents are present, only certain substrates react with high selectivity. Handling toxic, gaseous F2 requires special apparatus and great care to avoid contact with air or water.(16) Reagents containing O-F bonds, such as CF3OF, tend to be more selective for monofluorination than N-F reagents. However, difficulties associated with handling and their extreme oxidizing power have led to their replacement with N-F reagents.(17) Xenon di-, tetra-, and hexafluoride are selective monofluorinating reagents. However, their instability and high cost have made them less popular than organic fluorinating agents.(18) Experimental Conditions and Procedure Although fluorinations employing N-F reagents do not use molecular fluorine directly, the reagents themselves are almost universally prepared from F2. Proper handling of F2 requires great care and special apparatus.
Poly(tetrafluoroethylene) (PTFE, also known as Teflon) reaction vessels are used in preference to stainless steel or glass for reactions involving molecular fluorine. F2 blends with N2 or He are commercially available and help control the speed of delivery of fluorine. Temperatures should be kept low, and introduction of fluorine slow, to prevent free radical reactions. 2-Acetylamino-3-(4-hydroxyphenyl)propionic acid methyl ester (400 mg, 1.68 mmol) and 3,5-Cl2FP-OTf (632 mg, 2 mmol) were stirred under nitrogen in 10 mL of dry CH2Cl2/MeCN (9:1). After 8 hours, the starting material and reagent were consumed, as verified by a KI paper test and TLC. The reaction mixture was poured into 10 mL of water, neutralized with sodium bicarbonate, and the layers were separated. The organic layer was washed with 10 mL of H2O, dried (Na2SO4), and concentrated under reduced pressure. The crude product was purified by silica gel chromatography with 60% EtOAc in petroleum ether as eluent. The title product was obtained in 65% yield (280 mg); 1H NMR (CD3OD) δ 1.91 (s, 3H), 2.8–3.0 (m, 2H), 3.67 (s, 3H), 4.59 (dd, J = 8.8, 5.7 Hz, 1H), 6.77–6.84 (m, 2H), 6.89 (dd, J = 12.9, 1.7 Hz, 1H); 19F NMR (CD3OD) δ –140.3 (t, J = 12 Hz).
1. Baudoux, J.; Cahard, D. Org. React. 2007, 69, 347. doi:10.1002/0471264180.or069.02
2. Davis, F. A.; Han, W.; Murphy, C. K. J. Org. Chem. 1995, 60, 4730.
3. Differding, E.; Rüegg, G. M. Tetrahedron Lett. 1991, 32, 3815.
4. Piana, S.; Devillers, I.; Togni, A.; Rothlisberger, U. Angew. Chem., Int. Ed. Engl. 2002, 41, 979.
5. Davis, F. A.; Kasu, P. V. N. Tetrahedron Lett. 1998, 39, 6135.
6. Shibata, N.; Suzuki, E.; Asahi, T.; Shiro, M. J. Am. Chem. Soc. 2001, 123, 7001.
7. Umemoto, T.; Harasawa, K.; Tomizawa, G.; Kawada, K.; Tomita, K. Bull. Chem. Soc. Jpn. 1991, 64, 1081.
8. Stavber, S.; Zupan, M.; Poss, A. J.; Shia, G. A. Tetrahedron Lett. 1995, 36, 6769.
9. Laali, K. K.; Tanaka, M.; Forohar, F.; Cheng, M.; Fetzer, J. C. J. Fluorine Chem. 1998, 91, 185.
10. Lal, G. S. J. Org. Chem. 1993, 58, 2791.
11. Zupan, M.; Iskra, J.; Stavber, S. Bull. Chem. Soc. Jpn. 1995, 68, 1655.
12. Albert, M.; Dax, K.; Ortner, J. Tetrahedron 1998, 54, 4839.
13. Differding, E.; Lang, R. W. Helv. Chim. Acta 1989, 72, 1248.
14. Chambers, R. D.; Hutchinson, J.; Sandford, G. J. Fluorine Chem. 1999, 100, 63.
15. Rozen, S. Chem. Rev. 1996, 96, 1717.
16. Ramsden, C. A.; Smith, R. G. J. Am. Chem. Soc. 1998, 120, 6842.
17. Umemoto, T.; Nagayoshi, M. Bull. Chem. Soc. Jpn. 1996, 69, 2287.
18. Hebel, D.; Kirk, K. L. J. Fluorine Chem. 1990, 47, 179.
The answer is in the same Wikipedia article, but I feel the need to mention anisotropies: Anisotropy /ˌænaɪˈsɒtrəpi/ is the property of being directionally dependent, as opposed to isotropy. An example of anisotropy is the light coming through a polarizer. An example of an anisotropic material is wood, which is easier to split along its grain than across it. In general, anisotropies give us an idea about density fluctuations in the early universe, which form the basis (or seeds) of dense matter clusters (galaxies, etc.) in the universe. How big are these anisotropies (fluctuations)? That measurement is related to the parameters of the universe. The answer is in the paragraph above the paragraph you mentioned. This: The structure of the cosmic microwave background anisotropies is principally determined by two effects: acoustic oscillations and diffusion damping (also called collisionless damping or Silk damping). The acoustic oscillations arise because of a conflict in the photon–baryon plasma in the early universe. The pressure of the photons tends to erase anisotropies, whereas the gravitational attraction of the baryons—moving at speeds much slower than light—makes them tend to collapse to form dense haloes. These two effects compete to create acoustic oscillations which give the microwave background its characteristic peak structure. The peaks correspond, roughly, to resonances in which the photons decouple when a particular mode is at its peak amplitude. Here are the necessary details. Some information about the primordial density fluctuations from Wikipedia: The statistical properties of the primordial fluctuations can be inferred from observations of anisotropies in the cosmic microwave background and from measurements of the distribution of matter, e.g., galaxy redshift surveys. Since the fluctuations are believed to arise from inflation, such measurements can also set constraints on parameters within inflationary theory. Note that it says these measurements set constraints on the parameters, i.e., they must be less than or greater than some value. An exact estimate is obtained by applying different techniques and looking for the common areas in their results. If there's something that bothers you or that I was unable to explain well, please mention it. :)
Anastasia Kravtsova, left, and Q'eqchi' excavators working on burials. (Photo: Takeshi Inomata) Takeshi Inomata and Daniela Triadan of the University of Arizona excavated the Maya site of Ceibal in Guatemala. In this post, they answer readers' questions about their recent expedition. I was particularly drawn to Professor Inomata's hypothesis that building the large pyramids helped the Mayans build their society. We can see a parallel here in the U.S. with Kennedy's goal to place an American on the moon. Then, we did not have the technology to get it done, but aspiring for this goal helped plump our creativity, and coordinate our intellectual and fiscal resources. Once the U.S. got to the moon, the commercialization of the related technologies (video cameras, to Mylar sheets, to materials and wireless technology) helped power the U.S. to an unprecedented boom in development and economic prosperity. Rightly, we did not reach the moon because we had the technology, but aspiring for the moon helped us develop the resources to reach the moon and helped society. Certainly this has happened all over the world. Important question for the professor: Who or what was their Kennedy? And, what big pyramid should we build next? — Arun Shanbhag, Boston It is interesting to me to contrast the dramaturgy used by elites to rally support around themselves and the public projects, whether circuses or buildings, that seem to be a simultaneous exhibition of that power/support and a reinforcement of it. Just how were vast segments of those ancient societies, and their resources, motivated to build pyramids, conduct circuses, etc., when probably that use of resources might not have been optimal in terms of people's objective interests? — Benjamin R. Stockton, California Douglas Stotz and Juan Díaz recorded nearly 400 bird species during a short survey of the Yaguas and Cotuhé Rivers in northeastern Peru. (Photo: Alvaro del Campo) Douglas Stotz, near left, of the Field Museum in Chicago and Nigel Pitman of Duke University took a biological inventory of a vast roadless area in Peru's northern Amazon. Now that we're out of the field we can draw on a dream team at the Field Museum and elsewhere to help us answer some of the harder questions in the reader comment section. For the answers below we received help from Corine Vriesendorp, director of the Field Museum's rapid inventory program; Robin Foster, conservation ecologist at the Field Museum; Jim Louderman, the Field Museum's insect collection manager; and Günter Gerlach, a curator at the Munich Botanical Garden. You say you "counted" 860 canary-winged parakeets "this morning." Please explain your technique. — P. Desenex, Tokyo The parakeets typically fly in groups of five to 100 birds. I hear the birds calling as they approach, and count them as they go by. For groups under 20 or so, I can count each individual; for larger groups I count in subsets of five or 10 birds. After counting a group, I write down the number in my notebook and await the next group's arrival. The general movement is from south to north, so I only count the birds as they are arriving from the south. Afterward, I add up the birds and round the total to the nearest 10, since some of the counting is no more precise than that. Chris Filardi of the American Museum of Natural History studied evolution and conservation in the Solomon Islands. It has been over three months since I descended out of the clouds on Kolombangara.
Tucked into the heart of this snowy winter that so many in North America are experiencing, the Solomon Islands seem far away, dreamlike. It is not the steamy weather there, the palms and strange whoop and cackle of the forest; it is more the raw distance in time and space. The islands are oceans away. It is remarkable — magic — that I was there. The months that have passed have been full of the ordinary and mundane that characterize much of a biologist's working life: report and proposal writing, preparing manuscripts for publication, revising rejected manuscripts, resubmitting, revising, budgets and accounting, administration, planning, justifying. Yet the extraordinary is always there. Genetic results coming back from materials collected in the field act as a window back in time, redefining our sense of when and where island species evolved and even what they are. And then there are the bears. A web of cracks in meltwater ice near the edge of Byrd Glacier. (Photo: John Goodge) John Goodge, left, a professor of geological sciences at the University of Minnesota-Duluth, and Jeff Vervoort, an isotope geochemist from Washington State University, recently returned from their research expedition in Antarctica. Sunday, Jan. 16 Back at home, it is nice to be enjoying a sunny winter's day. Despite my earlier posts about weather and delays, our work in Antarctica went relatively smoothly, and I owe a debt of gratitude to my wife and partner, Vicki, who took care of family, house, meals, dogs and all manner of things while working full time herself — much more challenging and unrelenting than what I faced! Jeff and I were unable to see our posts until we got back to McMurdo, and it has been very enjoyable to read the responses of readers who found our blog. It is encouraging to hear all the support and positive feedback. We wanted to respond to some of the many excellent questions posted by readers, so here goes: Hey Dr. Goodge! I'm curious what it's like to be down there now as compared to your first few times. Has the "feel" of things changed? What are your instincts like? Do you feel like you have an edge because of your experiences in the past? We love the reports you guys are sending out. You guys are like astronauts to the rest of us pleebs. — David Biddle, Philadelphia, Pa. Eleanor Sterling of the American Museum of Natural History studied sea turtles at Palmyra Atoll this summer. Why would you only tag males with sat tags? Do you think them more "wayward"? Maybe the females "stray more"? I just don't understand. — ehoort, Michigan The cost of the tags themselves and the additional expenses related to subsequent management and transmission of the data from satellites limit the total number of tags we can afford. With a small number of tags, we want to focus in on one subset of the population to best start to build a picture of what turtles of different ages and sexes are doing. The satellite tags are most useful in telling us about large-scale movements — like migrations across the Pacific — rather than small-scale movements such as within the atoll. We hypothesize that the younger turtles may be staying at the atoll for a while and that therefore it is better to use the satellite tags for adults that may be more likely to travel long distances to reach breeding areas. With sea turtles in general, we know a reasonable amount about adult females because they regularly come to the shores for nesting, and scientists can then capture them and put tags on them.
We know quite a bit less about males, and therefore we chose them as the focus of our work. Melanie Stiassny, an ichthyologist at the American Museum of Natural History, surveyed and collected freshwater fish during an expedition to the Malebo region of the Congo and Kasai Rivers. With all of our fish samples well preserved, carefully packed into leakproof barrels and clearly labeled with the necessary permits, Jake and I flew back to New York, reported in with United States Fish and Wildlife agents at the J.F.K. airport customs office, and made it to the museum with no problems. Our first task is to unpack and database all of those thousands of specimens. Each fish needs to be identified, to species where possible (it is already clear that some are new to science and are in need of a formal scientific description and name), and all must have their associated data recorded in the museum's collection database. This includes where and when each specimen was collected, by what method, the water and habitat conditions the fish was found in, whether DNA samples were taken, and whether the fish was photographed prior to preservation. This is going to take a while, and the urge to "cherry pick" is almost irresistible — opening each package is exciting, but opening a package containing specimens of a species you've never seen before and can't put a name to is an enduringly addictive thrill. But each and every specimen is an important record, and one that will enhance our understanding of the extent and array of fish diversity in the Congo basin, so I am holding off on sneaking the "good stuff" into my lab until we have documented the whole collection. And only once this has been done can the work of carefully distinguishing, describing and naming the new species begin. We will also soon be starting the molecular analyses that will hopefully answer some of our questions regarding the origin of the species of the lower Congo River. To help me with all of this, some of my Congolese colleagues and students will be visiting the museum in the coming months. Together we will start that work. After a smooth cruise into San Diego, where Atlantis would be embarking on her next expedition, the science teams went their separate ways, cars brimming with cooler-packed samples. We've had a couple of weeks to sort things out (a process which involved many brushes with frostbite, as samples were carefully arranged in freezers set at minus 80 degrees Celsius), and we are continuing to design and conduct experiments to tease out the secrets of the Hydrate Ridge ecosystem. Back at the lab, Stephanie Connon processing samples. (Photo: Jeffrey Marlow) For my part, it's been an honor to participate in this exciting expedition and share my experiences with you. I appreciate all of your encouraging comments and insightful questions, and hopefully the strange, fascinating world of deep-ocean biology has sparked a Google search or two. Understanding our oceans is critical in this age of global environmental change, and we're really just beginning that journey. And with that, I'd like to address a few of the science-based questions that came up in the comments. Fascinating life-forms, these creatures that do not need sun. Apparently, quite a few have been discovered during the last decades and their adaptability to various sources of "life energy" is remarkable. Do they have something in common? That is, the lowest part of the chain that actually creates organic material (proteins?) from inorganic sources. How did they evolve?
Do they have relatives, close or distant, in the world outside the darkness? — Ladislav Nemec, Big Bear, Calif.

Terry Gosliner: Dendrodoris atromaculata.

After returning from the Philippines, I was thrust back into the realities of life at the California Academy of Sciences. It was nonstop catching up on e-mail, then dropping everything for several weeks to write a grant proposal to the National Science Foundation to support an even more extensive survey of the Philippines to determine whether the Verde Island Passage is the richest marine area for many different groups of marine plants and animals. Meanwhile, we were also processing the specimens that we collected in May. Our curatorial assistant carefully cataloged all the material, organized the photographs we had taken and got things ready to allow us to take the first look at some of the specimens. Each summer at the Academy, we have a great opportunity to invite 8 to 10 undergraduate students to participate in the Summer Systematics Institute, where each student works with an individual researcher on a research project that almost always results in publication in a scientific journal. Kelly Laughlin, from the University of Georgia, has been working with me on dissecting some of the Philippine specimens to determine whether they are new species. She is also sequencing their DNA so that we have molecular evidence to compare with the anatomical studies. We have determined that we have found two new species of Thordisa from the Philippines and one species from Japan that has not been studied since it was named more than 50 years ago. We also have established that two additional species found in the Marshall Islands by a colleague are new. So we have four new species in this group of nudibranchs that we had studied just a few years ago and already named six species from that work. We also discovered that one of the specimens I collected in May is not a member of this family but belongs in a completely different group.

Christopher J. Raxworthy: Three of the many chameleons found by Dr. Raxworthy’s team in Madagascar: Brookesia griveaudi, Brookesia betschi and Calumma malthe.

Monday, June 28
The rain forests of Marojejy, Madagascar, now seem like another world compared to my other world in New York City, back at the American Museum of Natural History. Time that was previously spent looking for animals and exploring new trails has now been largely replaced by time staring at a computer screen, in an office and lab. Of course, to be a successful field biologist, you need to live in both worlds – be productive in the field, and then turn field data into results and scientific publications back at your home institution. It sounds obvious and easy, but in reality this is one of the hardest balancing acts to maintain. Although I was in the field in Madagascar for only three weeks, I came back to around 800 e-mail messages, three papers waiting for review, a large collection of requests for help and a massive National Science Foundation grant proposal due June 8. Writing grants is another important aspect of a field biologist’s life — without grant funding, most fieldwork simply could not happen. My fieldwork on the chameleons at Marojejy was funded by the National Science Foundation, which supports the majority of research in the biological sciences, as well as many other scientific disciplines. So grant writing is something else you have to add to the balancing act.
Getting that grant proposal submitted swallowed up my life for three weeks — and making the deadline was still touch and go at the end. The trouble was that during the final week, two collaborators were moving into new apartments (not so easy in New York), one was in Australia for a scientific meeting, and one was adopting a newborn. On the last night, I ended up drinking four cans of heavily caffeinated soda, sleeping on my office floor for four hours and uploading the final document seven minutes before the deadline at 5 p.m. The sleep deprivation made it feel like being in the field again.

Friday, June 18
Today the survey for doucs in central Vietnam drew to a close. We got up early, even though we only had a short walk out, crisscrossing back down the muddy Khe Dien River to the boat landing. We waited about half an hour for the boat to arrive, during which time the guys pulled out playing cards and set up an impromptu game to pass the time. We were joined by several buffalo looking for breakfast in the same area. It only took two boat trips across the reservoir on the way back as we no longer have food stores to port. When we arrived at the dam entrance we waited some more for the cars to arrive. Field research involves a lot of waiting. We ferried people and bags via multiple trips until all were reassembled and ready for the drive back. Like Christopher J. Raxworthy, the American Museum of Natural History herpetologist who blogged last month about his expedition to Madagascar, the first things on our minds are hot showers and cold drinks. We take care of these in reverse order. It is customary in Vietnam to share a meal after an expedition to celebrate its success, so we stop on the way home. Over lunch we laugh and recount stories of our experience in the forest. It is hard to say goodbye, because we have become friends as well as colleagues. We promise to work together again in the near future.
<urn:uuid:978d924e-effb-41a6-9b38-4ceb5d009b0b>
3.171875
3,429
Content Listing
Science & Tech.
46.536177
It is easy to understand that the inertial coefficients appearing in the kinetic energy T must depend on the densities of solid and fluid constituents $\rho_s$ and $\rho_f$, and also on the volume fractions $v^{(1)}$, $v^{(2)}$ and porosities $\phi^{(1)}$, $\phi^{(2)}$ of the matrix material and fractures, respectively. The total porosity is given by $\phi = v^{(1)}\phi^{(1)} + v^{(2)}\phi^{(2)}$, and the volume fraction occupied by the solid material is therefore $1-\phi$. For a single porosity material, there are only three inertial coefficients and the kinetic energy can be written as

$$2T = \begin{pmatrix} \dot{\mathbf{u}} & \dot{\mathbf{U}} \end{pmatrix} \begin{pmatrix} \bar{\rho}_{11} & \bar{\rho}_{12} \\ \bar{\rho}_{12} & \bar{\rho}_{22} \end{pmatrix} \begin{pmatrix} \dot{\mathbf{u}} \\ \dot{\mathbf{U}} \end{pmatrix}, \qquad \text{(singleinertia)}$$

where $\dot{\mathbf{U}}$ is the velocity of the only fluid present. Then, it is easy to see that, if $\dot{\mathbf{u}} = \dot{\mathbf{U}}$, the total inertia $\bar{\rho}_{11} + 2\bar{\rho}_{12} + \bar{\rho}_{22}$ must equal the total inertia present in the system, $\rho = (1-\phi)\rho_s + \phi\rho_f$. Furthermore, Biot (1956) has shown that $\bar{\rho}_{11} + \bar{\rho}_{12} = (1-\phi)\rho_s$ and that $\bar{\rho}_{12} + \bar{\rho}_{22} = \phi\rho_f$. These three equations are not linearly independent and therefore do not determine the three coefficients. So we make the additional assumption that $\bar{\rho}_{22} = \tau\phi\rho_f$, where $\tau$ (Note: This $\tau$ without subscripts should not be confused with the stress tensor introduced earlier in the paper.) was termed the structure factor by Biot (1956), but has more recently been termed the electrical tortuosity (Brown, 1980; Johnson et al., 1982), since $\tau = F\phi$, where F is the electrical formation factor. Berryman (1980) has shown that

$$\tau = 1 + r\left(\frac{1}{\phi} - 1\right) \qquad \text{(tau)}$$

follows from interpreting the coefficient $\bar{\rho}_{11}$ as resulting from the solid density plus the induced mass due to the oscillation of the solid in the surrounding fluid. Here r is a factor dependent on microgeometry that is expected to lie in the range $0 \le r \le 1$, with $r = \tfrac{1}{2}$ for spherical grains. For example, if $\phi = 0.2$ and r = 0.5, equation (tau) implies $\tau = 3$, which is a typical value for tortuosity of sandstones.

For double porosity, the kinetic energy may be written as

$$2T = \begin{pmatrix} \dot{\mathbf{u}} & \dot{\mathbf{U}}^{(1)} & \dot{\mathbf{U}}^{(2)} \end{pmatrix} \begin{pmatrix} \rho_{11} & \rho_{12} & \rho_{13} \\ \rho_{12} & \rho_{22} & \rho_{23} \\ \rho_{13} & \rho_{23} & \rho_{33} \end{pmatrix} \begin{pmatrix} \dot{\mathbf{u}} \\ \dot{\mathbf{U}}^{(1)} \\ \dot{\mathbf{U}}^{(2)} \end{pmatrix}.$$

We now consider some limiting cases. First, suppose that all the solid and fluid material moves in unison. Then, in complete analogy to the single porosity case, we have the result that $\rho_{11} + \rho_{22} + \rho_{33} + 2(\rho_{12} + \rho_{13} + \rho_{23})$ must equal the total inertia of the system, $\rho = (1-\phi)\rho_s + \phi\rho_f$. Next, if we suppose that the two fluids can be made to move in unison, but independently of the solid, then we can take $\dot{\mathbf{U}}^{(1)} = \dot{\mathbf{U}}^{(2)} = \dot{\mathbf{U}}$, and telescope the expression for the kinetic energy to

$$2T = \begin{pmatrix} \dot{\mathbf{u}} & \dot{\mathbf{U}} \end{pmatrix} \begin{pmatrix} \rho_{11} & \rho_{12}+\rho_{13} \\ \rho_{12}+\rho_{13} & \rho_{22}+2\rho_{23}+\rho_{33} \end{pmatrix} \begin{pmatrix} \dot{\mathbf{u}} \\ \dot{\mathbf{U}} \end{pmatrix}. \qquad \text{(allfluidcase)}$$

We can now relate the matrix elements in (allfluidcase) directly to the barred matrix elements appearing in (singleinertia), which then gives us three equations for our six unknowns. Again these three equations are not linearly independent, so we still need four more equations.

Next we consider the possibility that the fracture fluid can oscillate independently of the solid and the matrix fluid, and furthermore that the matrix fluid velocity is locked to that of the solid so that $\dot{\mathbf{U}}^{(1)} = \dot{\mathbf{u}}$. For this case, the kinetic energy telescopes in a different way to

$$2T = \begin{pmatrix} \dot{\mathbf{u}} & \dot{\mathbf{U}}^{(2)} \end{pmatrix} \begin{pmatrix} \rho_{11}+2\rho_{12}+\rho_{22} & \rho_{13}+\rho_{23} \\ \rho_{13}+\rho_{23} & \rho_{33} \end{pmatrix} \begin{pmatrix} \dot{\mathbf{u}} \\ \dot{\mathbf{U}}^{(2)} \end{pmatrix}.$$

This equation is also of the form (singleinertia), but we must be careful to account properly for the parts of the system included in the matrix elements. Now we treat the solid and matrix fluid as a single unit, so

$$\rho_{11}+2\rho_{12}+\rho_{22} = (1-\phi)\rho_s + (1-v^{(2)})\phi^{(1)}\rho_f + (\tau^{(2)}-1)v^{(2)}\phi^{(2)}\rho_f, \qquad \text{(firstfrac)}$$
$$\rho_{13}+\rho_{23} = -(\tau^{(2)}-1)v^{(2)}\phi^{(2)}\rho_f, \qquad \text{(secondfrac)}$$
and
$$\rho_{33} = \tau^{(2)}v^{(2)}\phi^{(2)}\rho_f, \qquad \text{(thirdfrac)}$$

where $\tau^{(2)}$ is the tortuosity of the fracture porosity alone and $v^{(2)}$ is the volume fraction of the fractures in the system.
Finally, we consider the possibility that the matrix fluid can oscillate independently of the solid and the fracture fluid, and furthermore that the fracture fluid velocity is locked to that of the solid so that $\dot{\mathbf{U}}^{(2)} = \dot{\mathbf{u}}$. The kinetic energy telescopes in a very similar way to the previous case, with the result

$$2T = \begin{pmatrix} \dot{\mathbf{u}} & \dot{\mathbf{U}}^{(1)} \end{pmatrix} \begin{pmatrix} \rho_{11}+2\rho_{13}+\rho_{33} & \rho_{12}+\rho_{23} \\ \rho_{12}+\rho_{23} & \rho_{22} \end{pmatrix} \begin{pmatrix} \dot{\mathbf{u}} \\ \dot{\mathbf{U}}^{(1)} \end{pmatrix}.$$

We imagine that this thought experiment amounts to analyzing the matrix material alone without fractures being present. The equations resulting from this identification are completely analogous to those in (firstfrac)-(thirdfrac), so we will not show them explicitly here. We now have nine equations in the six unknowns, and six of these are linearly independent, so the system can be solved. The result of this analysis is that the off-diagonal terms are given by

$$2\rho_{12}/\rho_f = (\tau^{(2)}-1)v^{(2)}\phi^{(2)} - (\tau^{(1)}-1)(1-v^{(2)})\phi^{(1)} - (\tau-1)\phi,$$
$$2\rho_{13}/\rho_f = (\tau^{(1)}-1)(1-v^{(2)})\phi^{(1)} - (\tau^{(2)}-1)v^{(2)}\phi^{(2)} - (\tau-1)\phi,$$
and
$$2\rho_{23}/\rho_f = (\tau-1)\phi - (\tau^{(1)}-1)(1-v^{(2)})\phi^{(1)} - (\tau^{(2)}-1)v^{(2)}\phi^{(2)}.$$

The diagonal terms are given by

$$\rho_{11} = (1-\phi)\rho_s + (\tau-1)\phi\rho_f,$$
$$\rho_{22} = \tau^{(1)}(1-v^{(2)})\phi^{(1)}\rho_f,$$

and $\rho_{33}$ is given by (thirdfrac). Estimates of the three tortuosities $\tau$, $\tau^{(1)}$, and $\tau^{(2)}$ may be obtained using (tau), or direct measurements may be made using electrical methods as advocated by Brown (1980) and Johnson et al. (1982). Appendix A explains one method of estimating $\tau$ for the whole medium when the constituent tortuosities and volume fractions are known.
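As a consistency check on these expressions, the following short Python sketch (not from the original paper; all input values are illustrative assumptions, and all three tortuosities are estimated from equation (tau) with r = 0.5) evaluates the six inertial coefficients and verifies that, when the solid and both fluids move in unison, they reproduce the total inertia $\rho = (1-\phi)\rho_s + \phi\rho_f$:

    # Numerical sanity check of the double-porosity inertial coefficients.
    # Input values are illustrative assumptions, not from the original paper.

    def tortuosity(phi, r=0.5):
        """Equation (tau): tau = 1 + r*(1/phi - 1); r = 1/2 for spherical grains."""
        return 1.0 + r * (1.0 / phi - 1.0)

    rho_s, rho_f = 2650.0, 1000.0        # solid and fluid densities (kg/m^3)
    phi1, phi2 = 0.20, 0.90              # matrix and fracture porosities
    v2 = 0.02                            # fracture volume fraction; v1 = 1 - v2
    phi = (1 - v2) * phi1 + v2 * phi2    # total porosity

    tau, tau1, tau2 = tortuosity(phi), tortuosity(phi1), tortuosity(phi2)

    A = (tau2 - 1) * v2 * phi2           # fracture-fluid induced-mass term
    B = (tau1 - 1) * (1 - v2) * phi1     # matrix-fluid induced-mass term
    C = (tau - 1) * phi                  # whole-medium induced-mass term

    rho11 = (1 - phi) * rho_s + C * rho_f
    rho22 = tau1 * (1 - v2) * phi1 * rho_f
    rho33 = tau2 * v2 * phi2 * rho_f
    rho12 = 0.5 * (A - B - C) * rho_f
    rho13 = 0.5 * (B - A - C) * rho_f
    rho23 = 0.5 * (C - B - A) * rho_f

    # With solid and both fluids moving in unison, total inertia must equal
    # rho = (1 - phi)*rho_s + phi*rho_f.
    total = rho11 + rho22 + rho33 + 2 * (rho12 + rho13 + rho23)
    assert abs(total - ((1 - phi) * rho_s + phi * rho_f)) < 1e-6
    print(total)

The assertion passes because the three off-diagonal expressions sum to minus the combined induced-mass terms, which then cancel against the corresponding terms in the diagonal coefficients.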
<urn:uuid:0ce5d46f-36ea-44f2-932f-8efe16bbd019>
2.828125
1,395
Academic Writing
Science & Tech.
62.474321
Science Fair Project Encyclopedia

Human spaceflight is space exploration with a human crew, and possibly passengers (in contrast to unmanned space missions, which are remotely-controlled or robotic space probes). Traditionally, these endeavours have been referred to as manned space missions, although today some prefer to use the term crewed or piloted space missions because they consider manned to be sexist, though it only denotes gender in one of several definitions of the word. The term manned is, however, accurate in terms of gender when speaking of all U.S. spaceflight programs before the Space Shuttle program and Soviet spaceflights before Vostok 6. NASA uses the term human spaceflight to refer to its programme of launching people into space. As of 2004 such flights have been carried out by the Soviet Union (later Russia), the United States (both government, NASA, and civilian, Scaled Composites, a California-based company), and the People's Republic of China.

- International Space Station (has a Soyuz TMA as emergency lander; normal crew transport is with the following two)
- Soyuz TMA with Soyuz launch vehicle - Baikonur Cosmodrome
- Space Shuttle - John F. Kennedy Space Center
- Shenzhou spacecraft with Long March rocket - Jiuquan Satellite Launch Center
- Scaled Composites SpaceShipOne with Scaled Composites White Knight (the latter does not enter space itself) - Mojave Spaceport

Human spaceflight missions beyond Earth orbit have been carried out by the United States only: to the Moon in the late 1960s. NASA's Apollo program landed twelve men on the Moon and returned them to Earth. The first mission beyond Earth orbit was Apollo 8, in which the crew orbited the Moon; the next was Apollo 10, which tested the lunar landing craft in lunar orbit without actually landing. The missions that landed were Apollo 11-17, except 13, hence together six missions, with each time three astronauts of which two landed on the Moon. With regard to Earth orbits, perhaps the highest was that of the Gemini 11 in 1966: 1374 km. Other rather high orbits have been those of the Space Shuttle on the missions to launch and service the Hubble Space Telescope, at an altitude of ca. 600 km.

On occasion, passengers of other species — dogs (Laika), chimpanzees (Ham and Enos the chimp), and monkeys — have ridden aboard spacecraft. In fact, dogs were the first large mammals launched from Earth, not humans. Some died in space or on landing; others were returned to earth alive.

Besides the US, Russia, and China, Europe, India, and Japan have active space programs. Indian Parliament recently sanctioned funds to the Indian Space Research Organization for a human spaceflight by 2008 (although the programme has now been scaled down to start with an unmanned orbiting satellite for surveying, see Chandrayan). Japan has announced a program to place a person on the moon by 2025.

In an attempt to win the $10 million X-Prize, numerous private companies attempted to build their own manned spacecraft capable of repeated sub-orbital flights. The first private spaceflight took place on June 21 2004, when SpaceShipOne conducted a sub-orbital flight. SpaceShipOne captured the prize on October 4, 2004 with its second flight in one week.
- List of human spaceflights
- List of human spaceflights chronologically
- List of human spaceflights by program
- List of spacewalks
- X-15 program
- List of astronauts by name
- Timeline of astronauts by nationality
- List of space disasters
- Human adaptation to space
- Space colonization
- Space and survival
- Spaceflight records
- Interplanetary travel
- Monkeys in space

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:c3b20c28-34bf-46d1-bc84-60364360c872>
3.734375
806
Knowledge Article
Science & Tech.
37.975622
The Hobby-Eberly Telescope Dark Energy Experiment — HETDEX, at The University of Texas at Austin McDonald Observatory will be the first major experiment to probe dark energy. Its observations will narrow the list of possible explanations for dark energy, and may even provide the final answer. HETDEX will be the first major experiment to search for dark energy. It will map the three-dimensional positions of one million galaxies and tell us what makes up almost three-quarters of all the matter and energy in the universe. Continue reading "Unlocking the Secret of Dark Energy: Search for 'Sound Waves' from the First 400,000 Years of the Universe" »

A preon star is a proposed type of compact star made of preons, a group of hypothetical subatomic particles that could originate from supernova explosions or the Big Bang. Preons were originally proposed as quark constituents over three decades ago, but in 2005, Fredrik Sandin and Johan Hansson of the Luleå University of Technology in Sweden came up with the concept of preon "stars" or "nuggets" in space. These objects would be somewhere between the size of a pea and a football, with a mass comparable to the Moon and a density in the range between a neutron star--the densest ordinary form of matter--and a black hole. Continue reading "The Odd Case of Preon Stars" »

Scientists have seen surges in antimatter particles sweeping through space, and some believe the cause could be collapsing cosmic strings. As opposed to Ming the Merciless. Note that cosmic strings are entirely different from the strings of string theory - blame any confusion on the fact that there are far more cool things happening in space than we have words for. Continue reading "Realworld "Angels & Demons": Surges of Antimatter Observed in Space" »

At the end of the nineteenth century scientists thought they had all the answers. They were spectacularly wrong, as demonstrated by "The Ultraviolet Catastrophe": a light experiment which simply couldn't be explained by the science of the day. This led to quantum mechanics, the particle-wave duality of light, and an entire new mode of science - which we've just broken again with a massive laser! Continue reading "The FLASH XASER: Massive Laser Zaps Einstein" »

Hundreds of rogue black holes should be traveling the Milky Way's outskirts, each containing the mass of 1,000 to 100,000 suns. - Avi Loeb, Harvard-Smithsonian Center for Astrophysics

New calculations by Ryan O'Leary and Avi Loeb of the Harvard-Smithsonian Center for Astrophysics suggest that hundreds of massive rogue black holes, left over from the galaxy-building days of the early universe, may wander the Milky Way. Continue reading "Harvard-Smithsonian Center Reports Massive Black Holes Roaming the Milky Way (VIDEO)" »

Did dark matter destroy the universe? You might be looking around at the way things "exist" and thinking "No", but we're talking about ancient history. Three hundred million years after the start of the universe, things had finally cooled down enough to form hydrogen atoms out of all the protons and electrons that were zipping around - only to have them all ripped up again around the one billion year mark. Why? Continue reading "Did Dark Matter Destroy Universe 1.0? (VIDEO)" »

Horoscope enthusiasts will be happy to hear that a grand cosmic force does indeed seem to be responsible for controlling the direction of all life on Earth. However, this grand cosmic cycle has more to do with extinction than finding a tall, handsome stranger.
Continue reading "Does the Milky Way Influence Earth's Biodiversity? Research Says "Yes"" » Only four percent of the universe is made of materials we sort of understand. So what about that remaining 96%? For the most part we’ve labeled it under two names, dark matter and dark energy. We have no clear idea what these materials are. But now astronomers at the University of St Andrews are attempting to “simplify the dark side of the universe”. They say the two most mysterious constituents in the universe are actually the same thing. (Image is the future Supernova Acceleration Probe which may help solve of the dark matter/dark energy mystery). Continue reading "Is Dark Matter & Dark Energy the Same Thing?" » Dark energy is the deus ex machina of cosmology, able to save even the most inflation-prone calculations from destruction or - worse - being provably wrong. But while we've been busy watching the X-energy apparently accelerating all of creation while hiding in plain sight, some believe it's responsible for much more than that. It didn't just save the universe - no, no, that's far too small scale - it saved INFINITE universes. Continue reading "Will Dark Energy Save the Universe?" » There simply isn't a bigger question: wrapping up "Why are we here?", "Why is everything the way it is?" and "What if I don't believe a gigantic invisible skybeard did it?" -it's a Holy Grail of science. The theoreticians want to explain it, the experimenters want to detect it and - unlike 99% of all research - the public will actually care about the answer for a few minutes. We report on five ways scientists have have studied the beginning of everything and, in mockery of all you might think possible, made the question even cooler. Continue reading "New Views of the Big Bang -A Holy Grail of Science" »
<urn:uuid:a5d95261-88fd-4845-9b79-a8c02db1b95d>
2.9375
1,173
Content Listing
Science & Tech.
43.756846
Before an atmospheric discharge, a thundercloud - a cumulonimbus - emits a series of ions known as down tracers or leaders. The negative polarity leaders fall towards the ground in successive fifty meter drops at speeds of 0.15 to 1 m per microsecond (µs), whereas the positive leaders fall more steadily.

As they approach the ground, the tips of the highly ionised down leaders generate a powerful electric field that can reach several hundred kilovolts per meter. This powerful disturbance triggers the creation of opposite polarity up leaders, primarily at high and prominent spots along the ground.

As the down and up leaders join, an ionised channel connects the cloud to the ground. A return arc, known as the atmospheric discharge, travels back through this ionised channel. It consists of several arcs between the ground and the cloud.
<urn:uuid:217c7fd1-cc92-4442-a750-3ad5bcd2f99c>
3.234375
186
Knowledge Article
Science & Tech.
45.633333
Shaffer, P.W., C.A. Cole, M.E. Kentula, and R.P. Brooks. 2000. Effects of measurement frequency on water-level summary statistics. Wetlands 20(1):148-161.

Wetland scientists and managers recognize the need to characterize hydrology for understanding wetland ecosystems. Hydrologic data, however, are not routinely collected in wetlands, in part because of a lack of knowledge about how to effectively measure hydrologic attributes and how frequently to measure water levels. To determine how measurement interval affects interpretation of water-level data, we analyzed data from seven wetlands in Oregon and Pennsylvania. We created subsets of daily data for each wetland, with measurement intervals of 2 to 28 days, then compared those subsets to the daily data for annual water-level summary statistics, monthly mean water levels, and occurrence/duration of threshold conditions (e.g., water in the root zone). Our primary goal was to determine if sampling at low frequencies can provide representative water-level distributions. Even small data sets from 28-day measurement intervals provided summary data (e.g., median, quartiles, range) comparable to the 1-day reference data. For measurement intervals of seven days or less, average errors in estimates of stage (minimum, 25th, 50th, and 75th percentiles) were ≤ 0.03 m; for a 28-day interval, average errors were > 0.05 m. Errors in estimates of maximum stage were considerably larger (0.11 m and 0.21 m for 7- and 28-day intervals, respectively) but can be circumvented using crest gauges. Errors in estimates of monthly mean stage varied greatly with measurement frequency (1-4% error for 7-day intervals, 4-15% error using one measurement per month), among wetlands, and from month to month. Water-level durations above threshold values were problematic; for measurement intervals of 2 days and longer, 14-day exceedances of water in the root zone were frequently missed, or spurious exceedance periods were identified. Overall, results show that sampling at monthly intervals, supplemented with crest gauges, provides a representative description of annual water-level distributions for use in classifying and comparing wetlands. More frequent sampling is required to characterize water levels for shorter (e.g., monthly) time periods and to reliably identify exceedance periods for water above threshold levels. More generally, the results remind us that the frequency and duration of sampling in hydrologic studies must be designed to ensure that data will support planned analyses.
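The subsetting procedure the abstract describes is straightforward to mimic. A minimal Python sketch, using synthetic daily stage data rather than the study's gauge records, of how subsampled summary statistics drift from the daily reference as the measurement interval grows:

    import numpy as np

    rng = np.random.default_rng(1)
    daily = np.cumsum(rng.normal(0.0, 0.01, 365))  # synthetic daily water levels (m)

    def summarize(stage):
        # Annual summary statistics of a water-level record
        return {"p25": np.percentile(stage, 25),
                "p50": np.percentile(stage, 50),
                "p75": np.percentile(stage, 75),
                "max": stage.max()}

    ref = summarize(daily)
    for interval in (2, 7, 14, 28):
        sub = daily[::interval]                    # keep every Nth daily reading
        errs = {k: abs(v - ref[k]) for k, v in summarize(sub).items()}
        print(interval, {k: round(e, 3) for k, e in errs.items()})

As in the study, the quartiles are robust to sparse sampling while the maximum degrades fastest, which is why the authors recommend supplementing monthly readings with crest gauges.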
<urn:uuid:4e264cc6-e638-46ac-b06f-55a5b6823d41>
2.8125
519
Academic Writing
Science & Tech.
39.760287
4.1 Moist atmospheric convection

Understanding moist atmospheric convection and its influence on climate variability is central to empirical and process studies at CDC. It links small-scale turbulent motions to global circulations through cloud formation and precipitation. It connects the slowly varying conditions at the Earth's surface to the faster atmospheric responses, providing much of the long-term predictability of atmospheric circulations. Furthermore, it is the proximate cause of many climate impacts on humanity (e.g., drought, flood, and severe weather) and contributes significantly to systematic errors in climate and forecast models. In light of the complexity of convective processes and the wide range of space and time scales involved, a diverse and opportunistic research strategy has been adopted. This research draws on a wide range of models and observations, from cloud-resolving to planet-spanning, from highly idealized to highly realistic.

CDC scientists have developed a new analysis technique called cylindrical binning that facilitates statistical studies using large Doppler radar data sets. Making effective use of existing data, the relationship between rain rate and horizontal wind divergence (Fig. 4.1), which is fundamental to the interaction of mesoscale convection and large-scale circulations, is explored. Here, wind divergence (the line integral of Doppler velocity around a circle centered on the radar) at every level in the atmosphere is regressed onto hourly area-averaged surface rain rate (estimated from reflectivity). Color indicates the size of the circle over which averages are considered. All the profiles exhibit the expected low-level convergence and upper-level divergence. Other interesting features are present that indicate, among other things, the spatial scale of the convective systems. However, the statistical significance and physical interpretation of those features remain uncertain and are a subject of further study.

A more detailed and quantitative, if synthetic, source of data about convection is cloud-resolving models. With increasing computer power, ambitious computations are being performed around the world, and some of the resulting data sets are being analyzed at CDC. For example, Fig. 4.2 shows a vertical velocity field at 1500 m altitude in a 1064x32 doubly-periodic cloud-resolving model. Convective updrafts and downdrafts (white patches) are clustered in certain preferred areas, all embedded in the context of a complex gravity wave field. Complete quantitative data about thermodynamics and microphysics as well as motion fields are available - far beyond the capabilities of observations. The simplified context of statistically steady states, periodic domains, and known governing equations allows parameterization hypotheses to be developed, tested, and refined in a tractable context.

More realistic model studies of convection are being pursued with a nested-grid strategy in a mesoscale model (the MM5). The finest grid can resolve convection, while coarser grids require a cumulus parameterization scheme, so this project spans the interface between resolved and parameterized convection. The information flow in the model is very complex, rendering interpretation challenging, but the influence of complex, realistic lower boundary conditions on convection can be studied in detail. Figure 4.3 shows simulated 3-hour rain accumulations over western Colombia and the adjacent eastern Pacific ocean on 3-4 September, 1998.
This region is particularly interesting because it has both steep topography and a strong sea surface temperature gradient offshore, north of the equatorial cold tongue. In the afternoon/evening, convection occurs over land, fueled by the accumulated moisture from many hours of strong surface fluxes and locally forced by a well-defined sea breeze front near longitude -76.5. In the late night and morning, by contrast, convection erupts in a mesoscale region offshore. Runs without topography suggest that this is due to a mountain-lowland breeze, not a thermal land breeze per se. The larger, coarser domains of the same model illustrate the interaction of parameterized convection, atmospheric dynamics, and a state-of-the-art land surface model of the Amazon basin at a 72 km grid spacing. Figure 4.4 shows time-longitude sections of rainfall (color) over the Amazon basin (8S-Equator) during 28 August-7 September, 1998, from the MM5 (left) and satellite observations (right). Rainbands sweep across the basin, from eastern Brazil to the Andes, at 10 degrees per day. While the surface fluxes are spatially coherent across the basin (driven by the solar input), the convection occurs in traveling bands, exhibiting some enhancement in the afternoon. The result of this is that the long-term climatology of convection has a striped structure. If we think of the ultimate driver of convection as being the heating of the atmosphere from below, Fig. 4.4 suggests that convection may be out of equilibrium with this driving by at least half a day, even over a hot moist continent. These findings suggest that equilibrium parameterizations of convection, as are used in many prediction models, may be inadequate. To explore the consequences of cumulus parameterization assumptions, it is helpful to first consider more idealized large-scale models with parameterized convection. Figure 4.5 shows rainfall patterns from one run of an idealized model on an earth-sized planet. A warm sea-surface temperature anomaly is specified in the middle of the picture, and the time-mean rainfall enhancement depends on the dynamics of the propagating transient activity, which in turn depends on aspects of the cumulus parameterization. The broad perspective on convective variability afforded by the multi-model approach outlined above is being used to develop convective parameterization schemes whose behavior in models and whose spatial correlation statistics more closely mimic the observations. Models on the various spatial scales can then be used as testbeds for different cumulus parametrization schemes and hypotheses.
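Returning to the cylindrical-binning analysis described at the start of this section, here is a schematic Python sketch (not the CDC code; the array shapes and averaging radius are illustrative assumptions) of how ring-averaged Doppler velocities yield divergence profiles that can be regressed onto rain rate:

    import numpy as np

    # vr[t, z, a]: Doppler (radial) velocity sampled at times t, heights z,
    #              and azimuths a on a circle of radius R around the radar.
    # rain[t]:     hourly area-averaged surface rain rate from reflectivity.

    def divergence_profile(vr, R):
        """Mean horizontal divergence inside the circle, per time and height.

        By the divergence theorem, the area-mean divergence equals the line
        integral of outward radial velocity around the circle divided by the
        enclosed area: (2*pi*R*vr_mean) / (pi*R**2) = 2*vr_mean/R.
        """
        return 2.0 * vr.mean(axis=2) / R

    def regress_on_rain(div, rain):
        """Least-squares slope of divergence onto rain rate at each height."""
        r = rain - rain.mean()
        d = div - div.mean(axis=0)
        return (d * r[:, None]).sum(axis=0) / (r ** 2).sum()

Repeating the regression for several radii R would reproduce the colored family of profiles in Fig. 4.1, with low-level convergence and upper-level divergence appearing as slopes of opposite sign.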
<urn:uuid:1ae59779-6e41-46ca-860c-253fae11a701>
3.578125
1,205
Academic Writing
Science & Tech.
22.444956
words breaks a string up into a list of words, which were delimited by white space.

words breaks a ByteString up into a list of words, which were delimited by Chars representing white space. And

> tokens isSpace = words

This is a simple wrapper around getting a list of words that works in a way common across multiple platforms.

A word search solver library and executable.

This utility is useful for finding out if some old, misplaced version of a file (say from your old laptop) has any new text in it that never got checked in, synced, or copied over to your newest version of the file. The basic unix diff tool is sometimes incredibly unsatisfactory for this purpose, for example when text has been moved around, or when there are widespread whitespace differences.

unwords is an inverse operation to words. It joins words with separating spaces.
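A quick GHCi session (my own illustration, not part of the original listing) showing the behavior described above, and why unwords is only a one-sided inverse of words:

> ghci> words "  deep  blue   sea"
> ["deep","blue","sea"]
> ghci> unwords (words "  deep  blue   sea")
> "deep blue sea"
> ghci> words (unwords ["deep","blue","sea"])
> ["deep","blue","sea"]

Runs of white space collapse on the round trip through words, so unwords . words normalizes spacing rather than restoring the original string; words . unwords, by contrast, returns the original list as long as no element itself contains white space.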
<urn:uuid:3da711e0-f529-4df9-a803-1ed374f239e0>
3.328125
206
Documentation
Software Dev.
44.683598
Animated View of the AIM Mission

The Aeronomy of Ice in the Mesosphere (AIM) mission will provide the first detailed exploration of Earth's unique and elusive noctilucent or night shining clouds that are found literally on the "edge of space." These polar mesospheric clouds sit near the top of the Earth's mesosphere (the region just above the stratosphere), and very little is known about how they form or why they vary. Image credit: NASA
<urn:uuid:4beb11ac-34fa-488b-9243-458376e54b6f>
2.765625
102
Truncated
Science & Tech.
21.328158
Creating an Electric Field

Name: Brian S.

I wish to create a lab with a small pith ball and an E-field. I would like to suspend a pith ball in an E-field and watch it deflect like a pendulum and measure the angle. My problem is: how can I create an E-field? Any ideas?

Take an aluminum pie pan and tape an insulating handle (wood or plastic) to the inside. Charge up a latex balloon with fur. Transfer the charge from the balloon to the pan. That should work. If you have an electrophorus instead of a balloon, that will work, too.

Update: June 2012
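To turn the measured angle into a field estimate, the standard force balance for a charged pendulum bob applies (my addition, not part of the original exchange). If the field E is roughly uniform where the ball hangs, a ball of mass m carrying charge q deflects until the horizontal electric force balances the restoring component of the string tension:

    \[
      qE \;=\; mg\,\tan\theta
      \qquad\Longrightarrow\qquad
      E \;=\; \frac{mg\,\tan\theta}{q}.
    \]

In practice the field of a charged pie pan is far from uniform, so treat this as an order-of-magnitude estimate; a pair of oppositely charged parallel plates would give a much more uniform field between them.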
<urn:uuid:dce1f54e-dcef-492f-b7fd-28453136e667>
2.953125
156
Q&A Forum
Science & Tech.
74.805
Scientists work to make ‘exploding’ lakes safe

Can a lake kill? The “exploding” lakes phenomenon is real, with more and more scientific evidence supporting the notion that lakes have the power to kill. So where are these deadly lakes? And what are scientists doing to prevent more deaths?

On Aug. 21, 1986, Cameroon’s Lake Nyos unexpectedly “exploded,” releasing a toxic cloud of carbon dioxide that suffocated 1,700 people in the surrounding area. A similar disaster happened in August 1984 at another Cameroon lake — Lake Monoun — killing 37 people.

“Exploding” lakes are crater lakes formed by volcanic eruptions. They are caused by a buildup of CO2 gas in the lake’s bottom waters, says Bill Evans, a chemist with the U.S. Geological Survey, who has been studying the two West African lakes for decades. Over time, CO2 gas seeps into the lakes from magma below. Magmas are known to release gas for thousands of years after a volcanic eruption, and Cameroon is a volcanically active country.

When an event such as a landslide occurs, the CO2 buildup at the bottom of the lake is disturbed, triggering the mixture of bottom water and gas to rise toward the surface, says Evans. Once the gas depressurizes, bubbles form and that decreases the density of the water, creating a “self-sustaining degassing process that gets bigger and more violent as time goes on,” he says.

There are only three known “exploding” lakes in the world: the two in Cameroon and East Africa’s Lake Kivu, which borders Rwanda and the Democratic Republic of Congo. Unlike Cameroon’s lakes, Kivu has not exploded in historical times.

Evans has been monitoring the Cameroon lakes since the mid-1980s, when he was sent to West Africa to investigate the disasters as part of an international group. He visited Lake Nyos 10 days after the 1986 explosion. The scientists collected data — interviewing doctors who handled the dead, collecting water samples, measuring the lake’s temperature and monitoring vegetation — to rule out the possibility that the Nyos disaster was due to a volcanic eruption.

“I think the real wake-up call for us came when we got a boat on the lake,” Evans says. “It looked like Campbell’s tomato soup.” The lake had turned red because of the oxidized iron in the water. It took eight months for Nyos to return to its normal colour.

The CO2 cloud that formed over the lake within several hours after the “explosion” eventually drifted downslope, killing people in the surrounding river valleys. “People can lose consciousness after just two breaths of CO2, and that is likely what happened at Nyos. People just fell in the middle of their evening activities,” Evans says. “One woman hanging laundry was found still clutching the corners of the sheet she was about to hang.”

In 2001 a French engineering team installed pipes in Lake Nyos to degas the lake. The pipes allow the bottom water to rise up at a controlled rate and release the CO2 slowly and safely into the air. Similar pipes were installed in Lake Monoun in 2003. By 2010 the small lake had been degassed to safe levels. Other scientists are active at Lake Kivu, which has CO2 and methane gas, to look at ways to monitor and extract its gas.

Evans predicts Lake Nyos — which is roughly 210 metres deep and one kilometre in diameter — will reach safe levels by 2022. The fear of another explosion happening there, however, is still real. Between 1986 and 2001 scientists watched the pressure of CO2 gas in the very deepest part of Lake Nyos double, says Evans.
He estimates it would take about 100 years to saturate Lake Nyos with CO2 gas if pipes weren’t put in place. “At Nyos I think we’ve got a ways to go before we would want to call it safe,” Evans says. “Monoun currently is safe. Of course the problem is that once these pipes are pulled out of the lake . . . that process of (CO2) buildup starts again. “Technically, maybe it takes 100 years, but the lake will become dangerous again.”
<urn:uuid:a4183bed-2424-491c-9419-8e612ce6dd8c>
3.734375
1,061
Truncated
Science & Tech.
51.032322
Left, "before" image, from the Sloan Digital Sky Survey. Right, "after" image, from Swift's Ultraviolet/Optical Telescope. The pinpoint of light in the centre is the GRB, which outshines the entire host galaxy. Click image for the high resolution image [8.7MB tiff]. NASA's Swift telescope has detected a gamma-ray burst (GRB), a usual harbinger of a supernova, very close to our galaxy, in the constellation Aries. The GRB was detected on Feb. 18th, at 440 million light-years away, lasting 33-minutes -- quite the departure for GRBs, which are usually detected billions of light-years away, and lasting only seconds at most. Speculation is that the GRB may be a result of a very massive star collapsing into a black hole, then exploding. NASA animation showing the collapsing star scenario that is the leading contender to explain gamma-ray bursts.
<urn:uuid:951c9d5b-f31b-49d0-9599-929e9d60d909>
3.4375
199
Personal Blog
Science & Tech.
54.744
5.1 Maximum Radius of Space Colonies

The maximum radius of such an O'Neill style colony is limited by the hoop stress of the spinning structure, and the tensile strength to density ratio of the material. The formula is

R = HoopStress / (g G)

where R is the radius, g is the acceleration of pseudo-gravity at the rim, and G is the density. MNT offers a 5 x 10^10 Pa tensile strength. Using the design rule of 50% safety factors for O'Neill style colonies, a 3.3 x 10^10 Pa design tensile strength is reasonable. The associated material density is 3.51 x 10^3 kg/m^3. One goal of the architecture is for g to equal 9.8 m/s^2. This all gives a possible space station radius of 9.6 x 10^5 m, or nearly 1000 km. For comparison, the corresponding feasible radius for titanium is 14 km, and even at its ultimate tensile strength with no safety factor, the titanium limit would be 23 km.

At the 9.6 x 10^5 m radius, the entire available strength (at the safety factor) of the MNT-based material is being used to prevent the rotating structure from bursting, and there is no strength left over to hold the space station's contents, including an atmosphere. To do so, a lower radius must be set.
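A quick numerical check of the hoop-stress formula (the titanium strength and density are my assumed values, chosen to reproduce the 23 km figure quoted above):

    # R = HoopStress / (g * G): radius at which the spinning hull uses all
    # of its allowable strength just holding itself together.
    g = 9.8  # m/s^2, pseudo-gravity at the rim

    def max_radius(hoop_stress_pa, density_kg_m3):
        return hoop_stress_pa / (g * density_kg_m3)

    # MNT material at the 50% safety factor quoted in the text
    print(max_radius(3.3e10, 3.51e3))   # ~9.6e5 m, i.e. nearly 1000 km

    # Titanium at an assumed ~1 GPa ultimate strength, ~4.5e3 kg/m^3 density
    print(max_radius(1.0e9, 4.5e3))     # ~2.3e4 m, i.e. about 23 km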
<urn:uuid:27085444-b875-43d2-a7ac-71fc1d5aaef4>
2.859375
284
Comment Section
Science & Tech.
74.61674
3.1 Evaluation
3.1.2 The Evaluation Model
3.1.2.1 Form Evaluation
3.1.2.1.2 Conses as Forms
3.1.2.1.2.3 Function Forms

If the operator is a symbol naming a function, the form represents a function form, and the cdr of the list contains the forms which when evaluated will supply the arguments passed to the function. A function form is evaluated as follows: The subforms in the cdr of the original form are evaluated in left-to-right order in the current lexical and dynamic environments. The primary value of each such evaluation becomes an argument to the named function; any additional values returned by the subforms are discarded.

Although the order of evaluation of the argument subforms themselves is strictly left-to-right, it is not specified whether the definition of the operator in a function form is looked up before the evaluation of the argument subforms, after the evaluation of the argument subforms, or between the evaluation of any two argument subforms if there is more than one such argument subform. For example, the following might return 23 or 24.

 (defun foo (x) (+ x 3))
 (defun bar () (setf (symbol-function 'foo)
                     #'(lambda (x) (+ x 4))))
 (foo (progn (bar) 20))

A binding for a function name can be established in one of several ways. A binding for a function name in the global environment can be established by defun, setf of fdefinition, setf of symbol-function, ensure-generic-function, defmethod (implicitly, due to ensure-generic-function), or defgeneric. A binding for a function name in the lexical environment can be established by flet or labels.
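The left-to-right rule itself is easy to observe directly; a small illustration (mine, not part of the specification text):

 ;; Argument subforms are always evaluated left to right, so this
 ;; prints 1 and then 2 before + receives 10 and 20:
 (+ (progn (print 1) 10)
    (progn (print 2) 20))
 ;; prints 1, then 2; returns 30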
<urn:uuid:6978dc03-5598-415a-87e9-40a641566a69>
2.75
379
Documentation
Software Dev.
60.569676
Search Mathematical Communication: Types of proof & proof-writing strategies

Topic Teaching Tip(s): General principles of mathematical communication | Types of proof & proof-writing strategies

This webpage includes examples of various types of proof, including proof by contradiction and proof by induction. Addresses when to use proof by contradiction and contains links to recitations about proof writing. Attachments include sample proofs and a proof-writing assignment. Most materials on this page are from real analysis, but the page also contains a list of books on proof-writing appropriate for a broader context.

MathDL Mathematical Communication. This review was published on April 01, 2011.
<urn:uuid:43b5c7cc-fb28-4bf6-8bab-14f81c612eaa>
3.234375
152
Content Listing
Science & Tech.
20.598276
Jet Stream Revisited

Name: Peter E.

I have read your archival info re "Jet Streams" and in my studies have realized that this complex 3-D system (according to some authors) BEGINS with the temp/press gradients at a low level, influencing mid level winds moving either cyclonically or anticyclonically; add the intrusion of high level winds and the Coriolis Effect, plus the effect of the pressure gradient at the 300 mb level and the position of the Polar Front == THE PATTERN for the Jet Stream.
1] In this simplification have I left out an important "factor"?
2] Does the system BEGIN at the temp/press gradient at a LOW level?
3] Or perhaps, is there a causality which begins with the jet stream and works its way down to the surface?
I appreciate your time and effort in this section and would be very grateful if you could enlighten me on the above questions. Sincerely, Peter E.

The strength of the polar jet stream is largely determined by the strength of the temperature and pressure gradients at the surface. That is why the polar front is so important. Strong temperature gradients (and pressure gradients secondarily) at the surface are amplified to stronger gradients with increasing altitude, thereby resulting in large horizontal wind shear at about 300-500 mb (depending on the time of year and latitude of the polar front) and strong winds (the jet stream) at that level. The fastest winds of the jet stream are restricted to just below the Tropopause, the boundary between the Troposphere and Stratosphere. The stable stratification of the Stratosphere prevents intrusion of the jet stream into it. The high level winds and pressure gradient at 300 mb are more affected by the polar jet than the jet is by the high level winds and pressure gradient at 300 mb. This system is complex!

A good place to see the complex structure of the polar front, polar jet, subtropical front, and subtropical jet (the latter two existing only during the warm half of the year) is a diagram (with a description of the jet stream before it) at the Univ. of Oregon site at

Another interesting site is one from Lyndon State College showing the Northern Hemisphere jets. Pick "Northern Hemisphere Jet Stream with MSL Pressure" at

David R. Cook
Atmospheric Research Section
Environmental Research Division
Argonne National Laboratory

Update: June 2012
<urn:uuid:5e5ab67a-2fd6-4ff7-8f44-aba1b14daf68>
2.96875
563
Q&A Forum
Science & Tech.
46.414018
Jul 5, 2000, 10:08 PM Post #2 of 7

I agree with SixKiller (is it fair if I play, too?)

@new = map /o/ ? $& : (), qw!one two three four five!;

It's basically saying that if o matches, as it does in (one, two, and four), toss only the precise text that matched (stored in $&), which is o, into @new. If it doesn't match, it contributes nothing, i.e., the empty list ().

However, if Cure had used:

@new = map /o/ ? $_ : (), qw!one two three four five!;

then it would have yielded the results that mckhendry posted.

Performance note: Because Perl needs to know if the exact matched data will be needed, it looks for $& at compile time. If it finds $& anywhere in the program (and even in libraries and modules that the program uses), it will take the time to store the matched data for each regex in the program, even in regexes that do not use $&. This will slow down all regexes, so it's a good rule of thumb not to use $& unless you really, really have to, especially in a library or module.

This is fun
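By the way, one common way to get the matched text without paying the global $& tax is to capture it explicitly; a quick sketch of the same one-liner with a capture group (my variation, not from the thread):

    # $1 only costs something in patterns that actually capture,
    # unlike $&, which taxes every regex in the whole program.
    my @new = map /(o)/ ? $1 : (), qw!one two three four five!;
    # @new is ('o', 'o', 'o') -- one per matching word, same as before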
<urn:uuid:aa4820ca-78f1-4410-9b9d-5bc670728fac>
2.9375
271
Comment Section
Software Dev.
77.392823
Science@NASA Headline News

You may have noticed that the "look and feel" of Science@NASA stories has changed. There's no cause for alarm. Our core product, simply- and clearly-told stories about NASA science, remains the same. The changes are a sign of progress. Recently, the Science@NASA team joined forces with the Science Mission Directorate at NASA headquarters. Working together, we'll be able to cover a broader range of NASA discoveries and develop "citizen science" opportunities for our readers, while still producing old favorites such as Apollo Chronicles and "looking up" stories about backyard astronomy events. The sky's the limit.

May 18, 2012 It won't happen again until December 2117: On June 5th, 2012, Venus will transit the face of the sun. The best places to watch are in the south Pacific, but travel is not required. The event is widely visible around the world, including at sunset from the USA.

May 16, 2012 NASA has just released a new count of asteroids that come close to the orbit of Earth and could survive entry through our planet's atmosphere. The data, gathered by an infrared space telescope named WISE, reveal important new information about the origin and make-up of these potentially hazardous space rocks.

May 15, 2012 On Sunday, May 20th, the Moon will pass in front of the Sun, producing an annular solar eclipse visible across the Pacific side of Earth from China to the United States.

May 8, 2012 NASA's Spitzer Space Telescope has detected light emanating from a "super-Earth" beyond our solar system for the first time.

May 2, 2012 Another "super-Moon" is in the offing. The perigee full Moon of May 5-6 will be as much as 14% bigger and 30% brighter than other full moons of 2012.

April 19, 2012 In an unusual twist on space science, students in California have launched a rubber chicken to the edge of space to sample a solar storm.

April 18, 2012 Astronomers and astronauts are joining forces for an unusual astrophotography experiment during the peak of the Lyrid meteor shower on April 21st.

April 13, 2012 One year after the historic tornado outbreak of April 27-28, 2011, researchers say they've learned a few things about deadly twisters. Today's story from Science@NASA presents some of the scientific findings that emerged from the swath of destruction.

April 2, 2012 This week, Venus and the Pleiades star cluster will meet in the sunset sky for a rare and beautiful conjunction.

March 29, 2012 With NASA's Kepler spacecraft discovering alien worlds at a record pace, it seems to be just a matter of time before an Earth-sized planet is found in the "Goldilocks zone"--that is, in an orbit sized just right for liquid water and life. In today's story from Science@NASA, researchers discuss how they'll explore a cousin of Earth many light years away.
<urn:uuid:5388d851-1fc5-4bc3-9690-57577392aed1>
3.03125
621
Content Listing
Science & Tech.
55.051974
February of 1936 was the coldest in the US since the start of the twentieth century.

13 Feb 1936 – RECORD COLD IN NORTH AMERICA New York, Wednesday.

That month had the most record minimums. It also had the coldest average temperature, at -4C. By contrast, July of 1936 was incredibly hot. That month blew away all other Julys for record maximum temperatures. NOAA claims that July of 2012 was the hottest in US history, but as you can see – this year isn't even in the top ten. July 1936 had the second hottest average min/max temperature after 1901.

The data presents two huge problems for TOBS (time of observation bias). The NOAA theory behind TOBS is that the very stupid observers in the past used to reset their thermometers later in the day when it was warmer, and now the very stupid observers reset their thermometers earlier in the day when it is colder. This would cause average temperatures in the past to be biased upwards, and recent temperatures to be biased downwards. So how did 1936 manage to blow away the numbers of both record maximums and record minimums? TOBS is not a very plausible theory for explaining unusual numbers of record temperatures in either direction, even at the theoretical level – much less a year which had both record numbers of record minimums and record maximums. NOAA depends on these bogus adjustments to keep their fraudulent US warming story alive.
<urn:uuid:3599914c-d1fc-4c3e-a002-48a706cc5198>
2.953125
317
Personal Blog
Science & Tech.
53.619681
Billie Jean Plexus (by Colin Rozee) New materials remove carbon dioxide from... → Scientists are reporting discovery of an improved way to remove carbon dioxide — the major greenhouse gas that contributes to global warming — from smokestacks and other sources, including the atmosphere. Their report on the process, which achieves some of the highest carbon dioxide removal capacity ever reported for real-world conditions where the air contains moisture, appears in the... The Man with the Beautiful Eyes (by Jonathan Hodgson) Using Kinect to build real world Google Analytics (by Administrator Agile Route)
<urn:uuid:6414dc2d-a5ab-42ae-a187-3423f47dea0d>
2.765625
119
Content Listing
Science & Tech.
30.947692
Sea lamprey (Petromyzon marinus)

Sea lamprey fact file

Sea lamprey description
Lampreys are some of the most primitive vertebrates alive today; they are known as cyclostomes, which means 'round mouths' and refers to the fact that they are jawless, having instead a round sucker-like mouth. A further primitive characteristic is that the skeleton consists of cartilage and not bone (2). Lampreys are similar in shape to eels, and have a series of uncovered round gill openings (known as gill pores) on the sides of the head and a single nostril on the upper surface of the head (2). The sea lamprey is the largest cyclostome in Europe. It can be distinguished from the other lampreys by its larger size, the marbling of the greyish-green back, and the two dorsal fins, which are widely separated (4). An alternative common name is 'stone sucker' (5), which may have arisen from the habit of males during spawning, when they create a depression in the river bed by wriggling and removing stones with the mouth (4).
- Head-body length at spawning: over 45 cm (2)

Sea lamprey biology
Adults of this anadromous species migrate up rivers in March and April, but spawning actually takes place the following year between May and July (4). Mating occurs in pairs, unlike the other lampreys in which a female is mated by a succession of males (4). The female lays up to 300,000 eggs into a depression in the river bed created by the male. After hatching, the larvae, known as ammocoetes, burrow into the sediment where they live for three to five years, feeding by filtering organic particles from the water (4). During metamorphosis, the eyes and the sucker-like mouth develop and the adults then migrate to the sea where they adopt a parasitic lifestyle, feeding by attaching to the bodies of large fish with the mouth and rasping away at the flesh. They remain in the sea for a few years and then return to freshwater in order to spawn. They do not feed during this return trip because the digestive organs degenerate, and shortly after spawning they die (4). Roman, Viking and Medieval Britons regarded river and sea lampreys as delicacies (2).

Sea lamprey range
The sea lamprey is fairly widespread in UK rivers, but it has declined to extinction in some areas. It is absent north of the Great Glen, Scotland, possibly as it prefers warm water (6). Current strongholds are the rivers Wye and Severn (2). Outside of the UK it is known from most of the Atlantic coastal areas of western and northern Europe between Norway and the Mediterranean. It is also found in eastern parts of North America (6).

Sea lamprey habitat
Sea lamprey status
Sea lamprey threats

Sea lamprey conservation
A number of UK sites that support sea lampreys have been designated as candidate Special Areas of Conservation (SACs). Although this will be a good foundation for conserving the species, further action will be required. To this end, a draft Action Plan has been produced to guide future conservation efforts (7). Furthermore, the Life in UK Rivers Project is helping to conserve this species (8).

Find out more
For more on the Life in UK Rivers Project see:
- English Nature:

Information authenticated by the Environment Agency.

Glossary
- Anadromous: In fish, those species that spend most of their lives at sea but migrate to fresh water to spawn.
- Dorsal fin: The unpaired fin found on the back of the body of fish, or the raised structure on the back of most cetaceans.
- Larvae: Stage in an animal’s lifecycle after it hatches from the egg. Larvae are typically very different in appearance to adults; they are able to feed and move around but usually are unable to reproduce.
- Spawning: The production or depositing of large quantities of eggs in water.
- Vertebrates: Animals with a backbone.

References
1. IUCN Red List (November, 2008)
2. Environment Agency. (1998) Species Awareness Leaflet Number 5: Lamprey. Environment Agency, Bristol.
3. Conserving Natura 2000 Rivers: River, Brook and Sea Lamprey (September, 2008)
4. Fishbase (January, 2002)
5. Cihar, J. (1991) A Field Guide in Colour to Freshwater Fish. Aventium Publishing, Prague.
6. Davies, C., Shelley, J., Harding, P., McLean, I., Gardiner, R. and Peirson, G. (2004) Freshwater Fishes in Britain – The Species and their Distribution. Harley Books, Colchester.
7. JNCC (September, 2008)
8. Life in UK Rivers Project (October, 2002)
<urn:uuid:d789fa78-bef5-4a10-839c-2100c3659a77>
3.96875
1,617
Knowledge Article
Science & Tech.
40.725763
Biology Plug N' Play
When your hard drive fails, you order a new one online and then swap it out. Why can't we do that for biological parts as well? From DNA robots and "organs-on-a-chip" to nanobristles that grab and release drugs, this slideshow explores the two major goals of synthetic biology: to build new biological systems and re-engineer existing ones from nonbiological components.
Image: Here actin filaments are nucleated in circle shapes (20–40 µm in diameter) using micropatterning (see next slide for details) and then imaged with epifluorescence microscopy.
What regulates the actin architecture in a cell? Recently, Thery and colleagues demonstrated that the pattern of actin nucleators is all that is needed to organize F-actin filaments (yellow) into parallel bundles, as found inside the cell—no crosslinking or bundling proteins required.
Image: The locations of actin nucleators are micropatterned onto a circle using deep-UV lithography on a glass coverslip (see Reymann et al. 2010). Actin polymerization is then induced by applying actin monomers, profilin, and the Arp2/3 complex. A dense and branched meshwork of filaments assembles on the circle (bright yellow), while non-branched filaments grow out of the circle and form parallel bundles. 7% of actin monomers are labelled with Alexa568, which allows the filaments to be imaged with classical epifluorescence microscopy (upright BX61 Olympus microscope; dry 40x objective).
Micropatterning can also control a cell's size and shape. Here Thery and colleagues apply adhesive molecules (e.g., fibronectin) to glass slides in various shapes—a "T" (top right) or an "H" (bottom right). When one or two cells are plated onto the micropattern, the cells adopt a convex "envelope" shape around the whole micropattern: a single cell becomes triangular on a T pattern, and a cell doublet forms a square on the H. If the researchers "draw" a micropattern near a fixed cell on the plate (left), the cell progressively spreads onto this new bar and assembles stress fibers attached to its extremity.
Left: A RPE1 cell expresses LifeAct-GFP, which labels the actin network in living cells. After the micropattern is drawn near the cell, images were acquired every 20 minutes with an inverted TE2000 Nikon microscope (100x oil objective). Colors arbitrarily designate each stage of reprogramming.
Right: Single RPE1 cells on the T (top) and an MCF10A cell doublet on the H (bottom) were permeabilized and fixed with paraformaldehyde after being plated on the micropatterned glass slide. The actin network and focal adhesions are labeled green (phalloidin-FITC) and red (antibodies against vinculin or paxillin), respectively, while intercellular junctions are labeled white with antibodies against β-catenin. Images acquired with a Leica DMRA microscope (100x oil objective).
One major goal of synthetic biology is to use the building blocks of life—DNA, RNA, proteins, and lipids—to construct tools and devices that don't already exist in nature. For instance, in DNA "origami," long single-stranded DNA molecules of >1,000 base pairs are folded into custom shapes by interactions with smaller "staple strands."
Image: Douglas and colleagues recently used this DNA origami approach to build a barrel-shaped nanorobot (35 nm x 35 nm x 45 nm) that can be filled with drugs, antibody fragments (pink) and other nanoparticles.
A DNA aptamer (green) locks the barrel closed, but then pops it open when the barrel contacts the antigen for the aptamer, say, for example, on the surface of a cell. The nanorobot was designed with Molecular Maya and cadnano.
Towards a Minimal Cell
One of the most ambitious endeavors of synthetic biology is creating "minimal cells" that fully recapitulate the functions of a natural cell—they capture energy, maintain ion gradients, store information, and mutate. Although such technologies are still far on the horizon, researchers have made great progress in creating "semi-synthetic cells" that can mimic specific cellular tasks, such as protein production and synthesis of lipid membranes. Many of these "artificial cells" reside inside liposomes, artificial vesicles composed of lipid bilayers.
Image: Each photomicrograph shows a giant liposome ~20–50 µm in diameter, composed of fats and proteins from the surface of the mammalian lung alveoli without any chemical treatment. The liposomes are isolated directly from a lung lavage. Each photomicrograph was acquired at a different temperature or with a different composition of the native fats and proteins of pulmonary surfactant. Images obtained with an inverted laser-scanning confocal microscope (Zeiss LSM 510; water-immersion objective with 40x magnification), with either conventional fluorescent excitation or two-photon excitation.
Another major goal of synthetic biology is to engineer unnatural molecules and compounds into systems and tools that mimic those found in biology. For instance, Joanna Aizenberg and her laboratory have pioneered the use of self-assembling synthetic nanofibers to generate capture-and-release devices that look strikingly like tiny fingers or tentacles.
Image: Scanning electron image of nanoscale bristles holding onto a sphere. These bristles are made of epoxy resin and then immersed in a liquid. As the bristles dry, they grab whatever is nearby, such as a drug or small nanoparticles. The bristles store energy and thus can be made to release the item. Each bristle here is ~1/1000th the width of a human hair.
The self-assembling nanofibers can also be used to generate nanostructures with unique helical patterns and hierarchical order, which are frequently observed in biology. An ordered array of nanofibers is dipped into a liquid, and as the liquid evaporates, it creates a bending force that shapes the fibers into helical bundles and bundles of bundles, similar to how curly hair clumps and coils together when it's wet—except that these bundles are ~1,000 times smaller. The shape and size of the nanobundles depend on the spacing of the nanofibers and their intrinsic properties, such as elasticity and surface composition.
Image: Scanning electron images of hierarchically assembling nanoscale bristles. These bristles are made of epoxy resin and then immersed in a liquid. As the bristles dry, they form hierarchical structures by self-assembly. Each bristle here is ~1/1000th the width of a human hair.
Lung on a Chip
Another emerging trend in synthetic biology is simulating the functions and activities of living organs in microdevices that are fabricated like a microchip and lined with living human cells. Recently, the Ingber lab used this strategy to create a "lung-on-a-chip," which contains hollow channels separated by a flexible porous membrane lined on one side by human air-sac epithelial cells and on the other by lung capillary blood vessel cells.
By applying cyclic deformation to the tissue–tissue interface, they could mimic normal breathing motions. This simple "organ-on-a-chip" recapitulates the human lung's responses to infection, inflammation, and environmental toxins. Such devices offer a new approach for testing drugs and assessing the toxicity of pollutants.
Although not typically grouped together in the same category, stem cell technologies share a major goal with synthetic biology: the fabrication of new organs. Early last year, Sasai and colleagues generated a retina in a 3D culture of embryonic stem cells (ESCs), and now they've "grown" a portion of a pituitary gland in a dish. The key to constructing a hormone-producing gland? Assemble two adjacent layers of epithelial sheets (i.e., ectoderm and neuroectoderm), and a pituitary primordium, called Rathke's pouch, forms at their interface.
Image: (Left) The natural organ: sagittal section of the developing Rathke's pouch (red) in the mouse embryo on E12. The pituitary primordium (i.e., the Rathke's pouch) is labeled red with antibodies to Pitx1, while the hypothalamus is green via Rx antibodies. (Right) The engineered organ: Rathke's pouches (green and white) self-formed in an ESC aggregate on culture day 13. Green, white, and red are derived from antibodies for Lim3, Pitx1, and Tuj1, respectively. DAPI stains nuclei blue in both images.
Last but certainly not least, what about synthesizing complex biological behaviors? One of the most often engineered behaviors is the cooperative flight of bees and other insects, known as "swarming." Recently, Vijay Kumar and his team made impressive strides toward replicating "swarming" in flying drones. Known as "quadrocopters," these robots are fully autonomous (i.e., there's no remote control!) and work together to maneuver around obstacles, fly in formation, and assemble small structures.
Image: An SEM image of a circulating tumor cell trapped on a microchip; magnification of 430x at 10 kV power.
<urn:uuid:f21ca05e-17c3-41ea-8be0-3719ddd58907>
3.25
2,050
Knowledge Article
Science & Tech.
33.73127
Below you'll find a little introduction to two-photon physics - probably more than you ever wanted to know about it.
A photon can, within the bounds of the uncertainty principle, fluctuate into a charged fermion/anti-fermion pair, to either of which the other photon can couple. This fermion pair can be leptons or quarks. In the latter case, we distinguish several cases:
While the cross-section of annihilation falls with the center-of-mass energy, the cross-section of the photon scattering rises logarithmically (green curve).
When such a scattering process takes place, the electrons that emitted the photons get scattered out of their trajectory. If this scattering angle is large enough, the electron is "seen" inside the detector, and is called a tag. The momentum transfer, or the virtuality of the probing photon, is expressed as Q². The scaling variable x tells us what fraction of the photon momentum was carried by the struck fermion inside the photon. P² is the virtuality of the target photon and is very small. W is the invariant mass of the hadrons coming from the interaction.
So, this is what a typical event looks like in the OPAL detector: a tagged electron in the Forward Detector (right side) and some hadrons from the gamma-gamma collision.
The cross section of the process can be written in terms of the photon structure functions:
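The equation itself did not survive the conversion of this page to text. In the standard notation for single-tagged two-photon events (a reconstruction from the definitions above, not necessarily the exact expression the original page displayed), the deep-inelastic electron-photon cross section is commonly written as

$$\frac{d^2\sigma(e\gamma \to eX)}{dx\,dQ^2} = \frac{2\pi\alpha^2}{x\,Q^4}\left[\left(1+(1-y)^2\right)F_2^{\gamma}(x,Q^2) - y^2\,F_L^{\gamma}(x,Q^2)\right]$$

where F₂^γ and F_L^γ are the photon structure functions and y is the usual inelasticity variable; at the small y typical of tagged events, the F_L^γ term is negligible.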
<urn:uuid:1dc41c5a-d588-4e6a-9e20-f690c25209d2>
3.46875
304
Tutorial
Science & Tech.
45.555217
Hurricanes form through an exchange of warm, humid air and cold, unstable air between the upper and lower atmosphere. (Diagram from NASA)
Hurricanes begin when areas of low atmospheric pressure move off Africa and into the Atlantic, where they grow and intensify in the moisture-laden air above the warm tropical ocean. Air moves toward these atmospheric lows from all directions and curves to the right under the influence of the Coriolis effect, thereby initiating rotation in the converging wind fields. When these hot, moist air masses meet, they rise up into the atmosphere above the low-pressure area, potentially establishing a self-reinforcing feedback system that produces weather systems known to meteorologists as tropical disturbances, tropical depressions, tropical storms, and hurricanes. Fortunately, fewer than 10 percent of disturbances grow into hurricanes.
Development of a full-fledged hurricane requires a rare combination of atmospheric events. First, the tropical disturbance must produce converging air masses. Second, the converging air must rise - but not in an area where there are either strong winds or descending air masses aloft. Hurricane development requires both an organized pattern of convection that is not torn apart by upper-atmosphere winds, and unstable air masses in the upper atmosphere that can carry rising surface air away from the upper end of the developing storm. If these three phenomena occur together, a self-sustaining circulation develops in which moist surface air rises and its moisture condenses, releasing latent heat that warms the upper atmosphere. The heated atmosphere creates lift that extends the low-pressure area upward and further reduces its already low pressure. As winds in the upper atmosphere carry moist air away from this growing cylinder of low pressure, dry warm air from above can enter the center of the cylinder, ultimately reaching the sea surface and forming the cloud-free area known as the eye of the hurricane. A system of this type will continue to intensify as long as the upper-level outflow of air exceeds the low-level inflow.
The relationship between inflow and outflow is controlled by the heat content of the ocean water and the latent heat contained in the moisture of the rising air. In other words, once formed, hurricane circulation will continue as long as the storm is over warm water, has access to moist air, and doesn't drift into areas where upper-level winds can tear it apart.
<urn:uuid:e709062d-6a4f-4a83-9ece-62583629f6ab>
4.21875
480
Knowledge Article
Science & Tech.
28.233955
Strong evidence for the existence of a giant black hole in the core of a galaxy has come from NASA's Hubble Space Telescope. Images of M32, a dwarf elliptical galaxy near our own, show that stars become clustered much more closely together near its centre, which is what should happen if the galaxy contains a black hole. Astronomers turned Hubble's cameras on M32 because observations from the ground had shown that stars in the galaxy's core were packed very close together and that they were orbiting rapidly about an unseen object. The density of stars near the centre is a hundred million times as great as it is in the neighbourhood of the Sun, says Ted Lauer of the National Optical Astronomy Observatories at Kitt Peak, Arizona. If there were as many stars in our part of the Universe, 'you could read a newspaper by starlight', he says.
<urn:uuid:b293eb7f-7ac8-4822-8461-e978b4d5f4e1>
4.25
202
Truncated
Science & Tech.
43.190349
Nov16-08, 09:38 PM | #1
How much power is radiated by the human body?
The power per unit area radiated by an ideal blackbody radiator is P/A = σT⁴, where P = power, A = surface area, the Stefan-Boltzmann constant is σ = 5.67 × 10⁻⁸ W m⁻² K⁻⁴, and T = temperature (on the absolute kelvin scale). How much power is radiated by the human body? Calculate the total power radiated by a blackbody cylinder of height 1.22 m and radius 0.15 m at human body temperature. (Ignore radiation from the ends of the cylinder.) (The result is considerably more than the power radiated by a human body, because skin is not a good radiator in the infrared.)
Nov17-08, 01:53 PM | #2
Please show your attempted solution, per the Forum rules.
Nov17-08, 01:56 PM | #3
Welcome to PF. Is this homework? If so, you need to show your own attempt at solving it before getting hints and help, according to our forum policies. For details, see the section on "Homework Help" here:
Also, in future please post homework questions here:
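For reference, here is the calculation the problem is asking for: a worked sketch added to this excerpt (not part of the original thread), assuming a body temperature of T ≈ 310 K and using only the lateral surface of the cylinder, as instructed:

$$A = 2\pi r h = 2\pi (0.15\,\mathrm{m})(1.22\,\mathrm{m}) \approx 1.15\,\mathrm{m}^2$$
$$P = \sigma T^4 A \approx (5.67\times10^{-8}\,\mathrm{W\,m^{-2}\,K^{-4}})(310\,\mathrm{K})^4(1.15\,\mathrm{m}^2) \approx 6.0\times10^{2}\,\mathrm{W}$$

That is, the idealized blackbody cylinder radiates roughly 600 W; a real body's net loss is far smaller, both for the reason the problem gives and because the surroundings radiate energy back.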
<urn:uuid:e18c1107-66fb-485b-8f4e-97151d0b0b91>
3.0625
388
Comment Section
Science & Tech.
65.807233
Group: Actinides | Melting point: 1135 °C, 2075 °F, 1408.15 K
Period: 7 | Boiling point: 4131 °C, 7467.8 °F, 4404.15 K
Block: f | Density (kg m⁻³): 19050
Atomic number: 92 | Relative atomic mass: 238.029
State at room temperature: Solid | Key isotopes: ²³⁴U, ²³⁵U, ²³⁸U
Electron configuration: [Rn] 5f³ 6d¹ 7s² | CAS number: 7440-61-1
ChemSpider ID: 22425 (ChemSpider is a free chemical structure database)
Common oxidation states: 6, 5, 4, 3
Isotopes (atomic mass; natural abundance, %; half-life; mode of decay):
- ²³³U: 233.04; –; 1.590 × 10⁵ y (α); > 2.7 × 10¹⁷ y (sf)
- ²³⁴U: 234.041; 0.005; 2.453 × 10⁵ y (α); 1.5 × 10¹⁶ y (sf)
- ²³⁵U: 235.044; 0.72; 7.03 × 10⁸ y (α); 1.0 × 10¹⁹ y (sf)
- ²³⁶U: 236.046; –; 2.342 × 10⁷ y (α); 2.5 × 10¹⁶ y (sf)
- ²³⁸U: 238.051; 99.274; 4.47 × 10⁹ y (α); 8.2 × 10¹⁵ y (sf)
Molar heat capacity (J mol⁻¹ K⁻¹): 27.665 | Young's modulus (GPa): Unknown
Shear modulus (GPa): Unknown | Bulk modulus (GPa): Unknown
In the Middle Ages, the mineral pitchblende (uranium oxide, U₃O₈) sometimes turned up in silver mines, and in 1789 Martin Heinrich Klaproth of Berlin investigated it. He dissolved it in nitric acid and precipitated a yellow compound when the solution was neutralised. He realised it was the oxide of a new element and tried to produce the metal itself by heating the precipitate with charcoal, but failed. It fell to Eugène Peligot in Paris to isolate the first sample of uranium metal, which he did in 1841 by heating uranium tetrachloride with potassium. The discovery that uranium was radioactive came only in 1896, when Henri Becquerel in Paris left a sample of uranium on top of an unexposed photographic plate. It caused this to become cloudy, and he deduced that uranium was giving off invisible rays. Radioactivity had been discovered.
Listen to the Uranium Podcast
Chemistry in Its Element - Uranium
You're listening to Chemistry in its Element, brought to you by Chemistry World, the magazine of the Royal Society of Chemistry.
For Chemistry in its Element this week, can you guess what connects boat keels, armour-piercing weaponry, beautiful coloured glass that you can track down with a Geiger counter, and more oxidation states than a chemist can shake a glass rod at? If not, here's Polly Arnold with the answer.
Uranium is certainly one of the most famous, or perhaps I should say infamous, elements. It is the heaviest naturally occurring element. It is actually more abundant in the earth's crust than silver. It is one of eight elements named in honour of celestial objects, but you might not think that uranium deserves to be named after the planet Uranus. The lustrous black powder that the chemist Klaproth isolated from the mineral pitchblende in 1789 - just eight years after Uranus was discovered - was in fact an oxide of uranium. Not until fifty-two years later did Eugène Melchior Peligot reduce uranium tetrachloride with potassium and, from these harsher conditions, obtain the pure silvery-white metal at last. Samples of the metal tarnish rapidly in air, but if the metal is finely divided, it will burst into flames. Uranium sits amongst the actinides, the second shell of metals to fill their f-orbitals with valence electrons, making them large and weighty. Chemically, uranium is fascinating. Its nucleus is so full of protons and neutrons that it draws its core electron shells in close. This means relativistic effects come into play that affect the electron orbital energies.
The inner core s-electrons move faster, and are drawn in to the heavy nucleus, shielding it better. So the outer valence orbitals are more shielded and expanded, and can form hybrid molecular orbitals that generated arguments over the precise ordering of bonding energies in the uranyl ion until as recently as this century. This means that a variety of orbitals can now be combined to make bonds, and from this, some very interesting compounds. In the absence of air, uranium can display a wide range of oxidation states, unlike the lanthanides just above it, and it forms many deeply coloured complexes in its lower oxidation states. The uranium tetrachloride that Peligot reduced is a beautiful grass-green colour, while the triiodide is midnight-blue. Because of this, some regard it as a 'big transition metal'. Most of these compounds are hard to make and characterise as they react so quickly with air and water, but there is still scope for big breakthroughs in this area of chemistry. The ramifications of relativistic effects on the energies of the bonding electrons have generated much excitement for us synthetic chemists, but unfortunately many headaches for experimental and computational chemists who are trying to understand how better to deal with our nuclear waste legacy. In the environment, uranium invariably exists as a dioxide salt called the uranyl ion, in which it is tightly sandwiched between two oxygen atoms, in its highest oxidation state. Uranyl salts are notoriously unreactive at the oxygen atoms, and about half of all known uranium compounds contain this dioxo motif. One of the most interesting facets of this area of uranium chemistry has emerged in the last couple of years: a few research groups have found ways to stabilise the singly reduced uranyl ion, a fragment which was traditionally regarded as too unstable to isolate. This ion is now beginning to show reactivity at its oxygen atoms, and may be able to teach us much about uranium's more radioactive and more reactive man-made sisters, neptunium and plutonium - these are also present in nuclear waste, but difficult to work with in greater than milligram quantities. Outside the chemistry lab, uranium is best known for its role as a nuclear fuel. It has been at the forefront of many chemists' consciousness over recent months due to the international debate on the role that nuclear power can play in a future as a low-carbon energy source, and whether our new generations of safer and more efficient power stations are human-proof. To make the fuel that is used to power reactors to generate electricity, naturally occurring uranium, which is almost all U-238, is enriched with the isotope U-235, which is normally only present at about 0.7%. The leftovers, called depleted uranium, or DU, have a much-reduced U-235 content of only about 0.2%. This is 40% less radioactive than natural uranium, and is the material that we use to make compounds from in the lab. Because it is so dense, DU is also used in shielding, in the keels of boats and, more controversially, in the noses of armour-piercing weapons. The metal has the desirable ability to self-sharpen as it pierces a target, rather than mushrooming upon impact the way conventional tungsten carbide-tipped weapons do. Critics of DU weaponry claim it can accumulate around battlefields. Because uranium is primarily an alpha-emitter, its radioactivity only really becomes a problem if it gets inside the body, where it can accumulate in the kidneys, causing damage.
However, uranium is also a heavy metal, and its chemical toxicity is of greater importance - it is approximately as toxic as lead or mercury. But uranium doesn't deserve its image as one of the periodic table's nasties. Much of the internal heat of the earth is considered to be due to the decay of natural uranium and thorium deposits. Perhaps those looking to improve the public image of nuclear power should demand the relabelling of geothermal ground-source heat pumps as nuclear? The reputation of this element would also be significantly better if only uranium glass were the element's most publicly known face. In the same way that lead salts are added to glass to make sparkling crystal glassware, uranyl salts give a very beautiful and translucent yellow-green colour to glass, although glassmakers have experimented to produce a wide range of gem-like colours. An archaeological dig near Naples in 1912 unearthed a small green mosaic tile dated to 79 AD, which was reported to contain uranium, but these claims have not been verified. However, in the 19th and early 20th centuries it was used widely in containers and wine glasses. If you think that you own a piece, you can check with a Geiger counter, or by looking for the characteristic green fluorescence of the uranium when held under a UV lamp. Pieces are generally regarded as safe to drink from, but you are advised not to drill holes in them, or wear them. Fair enough. Or inadvertently eating it, too, presumably. That was Edinburgh University chemist Polly Arnold, explaining the softer side of the armour-piercing element uranium. Next week, Andrea Sella will be introducing us to some crystals with intriguing properties. "It's amazing stuff. You HAVE to see this." He pulled out of his pocket a sample vial containing some stunning pink crystals that glinted alluringly. "Wow!" I said - you can always impress a chemist with nice crystalline products. "It gets better," he said mysteriously. He beckoned me into a hallway. "Look," he said. As the crystals caught the light from the new fluorescent lights hanging from the ceiling, the pink colour seemed to deepen and brighten up. "Wow!" I said again. We moved the crystals back into the sunlight and the colour faded again, and as we moved the crystals back and forth they glowed and dimmed in magical fashion. But what did they contain? Well, the answer's erbium, and you can hear all about it in next week's Chemistry in its Element. I'm Chris Smith, thank you for listening and goodbye. Chemistry in its Element is brought to you by the Royal Society of Chemistry and produced by thenakedscientists dot com. There's more information and other episodes of Chemistry in its Element on our website at chemistryworld dot org forward slash elements.
Mining and sourcing data: British Geological Survey – Natural Environment Research Council.
Text: John Emsley, Nature's Building Blocks: An A–Z Guide to the Elements, Oxford University Press, 2nd Edition, 2011. Additional information for platinum, gold, neodymium and dysprosium obtained from Material Value Consultancy Ltd, www.matvalue.com
Data: CRC Handbook of Chemistry and Physics, CRC Press, 92nd Edition, 2011; G. W. C. Kaye and T. H. Laby, Tables of Physical and Chemical Constants, Longman, 16th Edition, 1995.
Members of the RSC can access these books through our library.
<urn:uuid:db453e73-7a44-46d5-a276-1457cbee662c>
2.875
2,426
Knowledge Article
Science & Tech.
53.011152
Web edition: August 23, 2012 On August 5, after a journey lasting more than 8 months, a carlike rover carefully settled down onto the surface of Mars. The vehicle is basically a science lab. Its mission: to search for evidence that the Red Planet might once have hosted life — even if the organisms were only one-celled microbes. The first stage of this mission — the landing — is “an amazing achievement,” observes Charles Bolden. He runs the National Aeronautics and Space Administration, or NASA, which built and delivered the vehicle to Mars. Several years ago, NASA scientists began considering the best place for Curiosity to conduct its experiments. Last year, the researchers chose Gale Crater. So that’s where the rover landed. A towering peak — Mount Sharp — rises from the center of this basin 150 kilometers (93 miles) wide. Curiosity will spend two years motoring around and exploring the crater floor. But probing the mountain will be the vehicle’s primary focus. As Curiosity moves by, it will shoot out a laser beam at the mountain and then direct onboard chemical samplers to “taste” the vaporized rock. Another onboard device can drill into rock, pulverizing it into a fine powder for the rover’s chemical samplers to taste.
<urn:uuid:8033a6cf-cf90-44d4-ac7b-e25e5779687a>
3.953125
273
Truncated
Science & Tech.
47.220903
NASA's 23-year-old Hubble Space Telescope is still going strong, and agency officials said Tuesday (Jan. 8) they plan to operate it until its instruments finally give out, potentially through 2018. [Read the Full Story]
This artist's illustration shows the atmosphere of a brown dwarf called 2MASSJ22282889-431026, which was observed simultaneously by NASA's Spitzer and Hubble space telescopes. The telescopes' observations indicate this brown dwarf is marked by wind-driven, planet-size clouds. [Read the Full Story]
This artist's concept shows the brown dwarf 2MASSJ22282889-431026, which has a turbulent atmosphere somewhat similar to the giant planet Jupiter's. [Read the Full Story]
This false-color composite image, taken with the Hubble Space Telescope, reveals the orbital motion of the planet Fomalhaut b. Based on these observations, astronomers calculated that the planet is in a 2,000-year-long, highly elliptical orbit. Image released Jan. 8, 2013. [Read the Full Story]
This diagram shows the orbit of the exoplanet Fomalhaut b as calculated from recent Hubble Space Telescope observations. The planet follows a highly elliptical orbit that carries it across a wide belt of debris encircling the bright star Fomalhaut. Image released Jan. 8, 2013. [Read the Full Story]
This image is an expanded view of the alien planet Fomalhaut b around the star Fomalhaut, about 25 light-years from Earth. The planet is a giant world nearly three times the mass of Jupiter. [Read the Full Story]
This artist's concept illustrates an asteroid belt around the bright star Vega. [Read the Full Story]
Astronomers have discovered what appears to be a large asteroid belt around the bright star Vega, as illustrated here at left in brown. [Read the Full Story]
NASA's planet-hunting Kepler space observatory has discovered 461 new potential alien planets, boosting its total to 2,740 potential extrasolar worlds. [Full Story]
Attendees of the American Astronomical Society Meeting in Long Beach, CA, were treated to a colorful sunset on Jan. 7, 2013.
This NASA graphic depicts the changes in alien planet discoveries, arranged by planet size, as seen by NASA's Kepler spacecraft. As of Jan. 7, 2013, there are 2,740 potential alien planets. [Full Story]
SPACE.com infographic makes an American Astronomical Society Meeting appearance at the Orbital Sciences booth, January 2013.
A massive outburst erupts from the giant black hole at the center of the distant galaxy NGC 660, which is 44 million light-years from Earth, in this view captured by ground-based telescopes. Image released Jan. 7, 2013. [Full Story]
An artist's illustration of a comet storm around a nearby star. [Full Story]
This artist's illustration represents the variety of planets being detected by NASA's Kepler spacecraft. Scientists now say that one in six stars hosts an Earth-size planet. [Full Story]
Infographic: Practically all sun-like stars have planets, and one in six has a planet the size of Earth, a new study finds. [Full Story and larger image]
This chart depicts the frequencies of planets based on findings from NASA's Kepler space observatory. The results show that one in six stars has an Earth-sized planet in a tight orbit.
[Full Story] The widest binaries and triple systems have very elongated orbits, so the stars spend most of their time far apart. But once in every orbital revolution they are at their closest approach. They may pose a danger to any planets orbiting them.
<urn:uuid:a76e18b0-b753-4e71-9c93-ca0f6fbf1b70>
3.015625
852
Content Listing
Science & Tech.
57.50739
During nuclear fission, great amounts of energy are produced from
A. very small amounts of mass.
B. tremendous amounts of mass.
C. a series of chemical reactions.
D. particle accelerators.
Answer: A. In fission, a very small fraction of the fuel's mass is converted into energy; a given mass of nuclear fuel therefore releases far more energy than the same mass of chemical fuel such as gasoline, making nuclear fission a very dense source of energy. (See www.answers.com/topic/nuclear-fission)
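To get a feel for the scale, a worked illustration (added here; the one-gram figure is just an example): converting a single gram of mass entirely into energy yields

$$E = mc^2 = (1\times10^{-3}\,\mathrm{kg})(3.0\times10^{8}\,\mathrm{m\,s^{-1}})^2 = 9\times10^{13}\,\mathrm{J}$$

roughly the energy released by a 20-kiloton nuclear explosion, which is why fissioning kilograms of uranium can power a city while burning kilograms of gasoline cannot.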
<urn:uuid:4c5d2d7f-8bae-493a-9466-5be8800e9847>
3.296875
122
Q&A Forum
Science & Tech.
52.499022
Introduction To Android Programming
Android relies on the Dalvik Virtual Machine, which uses Dalvik bytecode (named after the fishing village of Dalvík in Eyjafjörður, Iceland, where some of the creators' ancestors lived, according to Wikipedia). Developers write their programs in Java, and the Android SDK converts them into Dalvik machine code. The main reasons for using Java as an intermediate language (and why it does not run as Java bytecode on the device) are obvious: developers do not have to learn yet another programming language, Android requires no permission from Sun Microsystems (or Oracle), and a lot of the bloat in the Java VM can be removed. Dalvik is also based on registers rather than a stack, as the Java VM is.
.dex is the file extension of a Dalvik Executable; this would normally sit inside a .apk file (Android PacKage). APK files typically include other details such as the AndroidManifest.xml, which lists how the program appears on the launcher and what permissions the program requires. If a program tries to call a method which needs a permission not listed in the manifest, it will generate a SecurityException, which, if not correctly handled, will lead to a "Force Quit".
Acquiring the APK
I am currently aware of 5 methods of acquiring APK files.
- Running an Android emulator on your computer with a ROM that contains the Android Market Place.
- Plugging a wireless router into a wireless Windows computer and using Internet Connection Sharing while running Wireshark on your computer, capturing traffic on the ethernet port.
- ARP spoofing your router using Cain & Abel and then capturing the traffic using Wireshark.
- Downloading it on your phone and then using the ASTRO File Manager to save the files to the SD Card, then copying them to the PC.
- Writing a program to pretend to be a phone and connecting to the market place directly; an article such as "REVERSING ANDROID MARKET PROTOCOL" may provide further help, as well as a Wordpress plugin for displaying application details and its source.
Decompilation to Dalvik bytecode
To extract the Dalvik bytecode from the DEX file, one can use the disassembler that is built into the SDK, but this ultimately does not produce output that is easy to read. There is also Baksmali; with its companion assembler, Smali, you could even convert the output back into a Dalvik Executable. This produces output like this. I have seen examples of people doing this to extend the functionality of existing applications, such as "Howto enable Gmail notifications for all new mail with smali/baksmali". Another alternative is DeDexer, though this does not have an assembler. There is also a tool, called APKTool, to decode the manifest and other resources contained within the APK, such as XMLs and PNGs.
From Google Groups: An APK carries its code as a "classes.dex" inside. The classes.dex is optimized by the package manager on first use, and ends up in /data/dalvik-cache/. "System" apps have the DEX optimization performed ahead of time. The resulting ".odex" file is stored next to the APK, the classes.dex is removed from the APK, and the whole thing works without having to put more stuff in your /data partition. The optimized DEX files cannot easily be converted back to unoptimized DEX, and I'm not sure there's any benefit in doing so. Both kinds of DEX files can be examined with "dexdump". More detail can be found in dalvik/docs/dexopt.html in the source tree, or on the web at:
Both DeDexer and baksmali have limited support for ODEX files.
Decompilation to Java bytecode
Given that Java code converts to Dalvik machine code, it did not take very long for someone to write a Dalvik machine code to Java bytecode converter. In fact there are two: UNDX (lack of download link, developer notified) and Dex2Jar. The UNDX decompiler appeared to be missing support for some of the Dalvik op(eration) codes, so there is an unofficial fork at GitHub.
Decompilation to Java source code
Once you have Java bytecode, you can convert it to Java source by using a Java decompiler such as JD-GUI.
Finally, applications can be decompiled to a variety of formats, right up to the original Java source code. Generally, the less processing done on the file, the more success you will have in trying to decompile it (i.e. it is easier to go to Dalvik bytecode than to go back to source code, as you are relying on fewer processes going wrong).
Sources & Further Reading:
<urn:uuid:5e7c38ca-3be0-458b-9527-03460311f843>
3.359375
1,016
Documentation
Software Dev.
44.767381
Atomic Number: 27
Atomic Weight: 58.9332
Discovery: Georg Brandt, circa 1735 (possibly 1739) (Sweden)
Electron Configuration: [Ar] 4s² 3d⁷
Word Origin: German Kobold: evil spirit or goblin; Greek cobalos: mine
Isotopes: Twenty-six isotopes of cobalt, ranging from Co-50 to Co-75. Co-59 is the only stable isotope.
Properties: Cobalt has a melting point of 1495°C, a boiling point of 2870°C, a specific gravity of 8.9 (20°C), and a valence of 2 or 3. Cobalt is a hard, brittle metal. It is similar in appearance to iron and nickel. Cobalt has a magnetic permeability around 2/3 that of iron. Cobalt is found as a mixture of two allotropes over a wide temperature range: the β-form is dominant at temperatures under 400°C, while the α-form predominates at higher temperatures.
Uses: Cobalt forms many useful alloys. It is alloyed with iron, nickel, and other metals to form Alnico, an alloy with exceptional magnetic strength. Cobalt, chromium, and tungsten may be alloyed to form Stellite, which is used for high-temperature, high-speed cutting tools and dies. Cobalt is used in magnet steels and stainless steels. It is used in electroplating because of its hardness and resistance to oxidation. Cobalt salts are used to impart permanent brilliant blue colors to glass, pottery, enamels, tiles, and porcelain. Cobalt is used to make Sèvres blue and Thénard's blue. A cobalt chloride solution is used to make a sympathetic (invisible) ink. Cobalt is essential for nutrition in many animals. Cobalt-60 is an important gamma source, tracer, and radiotherapeutic agent.
Sources: Cobalt is found in the minerals cobaltite, erythrite, and smaltite. It is commonly associated with ores of iron, nickel, silver, lead, and copper. Cobalt is also found in meteorites.
Element Classification: Transition Metal
Density (g/cc): 8.9
Melting Point (K): 1768
Boiling Point (K): 3143
Appearance: Hard, ductile, lustrous bluish-gray metal
Atomic Radius (pm): 125
Atomic Volume (cc/mol): 6.7
Covalent Radius (pm): 116
Ionic Radius: 63 (+3e), 72 (+2e)
Specific Heat (@20°C, J/g mol): 0.456
Fusion Heat (kJ/mol): 15.48
Evaporation Heat (kJ/mol): 389.1
Debye Temperature (K): 385.00
Pauling Negativity Number: 1.88
First Ionizing Energy (kJ/mol): 758.1
Oxidation States: 3, 2, 0, -1
Lattice Structure: Hexagonal
Lattice Constant (Å): 2.510
CAS Registry Number: 7440-48-4
- Cobalt derived its name from German miners. They named cobalt ore after mischievous spirits called kobolds. Cobalt ores commonly contain the useful metals copper and nickel. The problem with cobalt ore is that it usually contains arsenic as well. Attempts to smelt the copper and nickel typically failed and would often produce toxic arsenic oxide gases.
- The brilliant blue color cobalt gives to glass was originally attributed to bismuth, which is often found with cobalt. Cobalt was isolated by the Swedish chemist Georg Brandt, who proved the coloring was due to cobalt.
- The isotope Co-60 is a strong gamma radiation source. It is used to sterilize food and medical supplies, as well as in radiation therapy for the treatment of cancer.
- Cobalt is a central atom in vitamin B-12.
- Cobalt is ferromagnetic. Cobalt magnets stay magnetic up to a higher temperature than those of any other magnetic element.
- Cobalt has six oxidation states: 0, +1, +2, +3, +4, and +5. The most common oxidation states are +2 and +3.
- The oldest cobalt-colored glass was found in Egypt, dated between 1550-1292 B.C.
- Cobalt has an abundance of 25 mg/kg (or parts per million) in the Earth's crust.
- Cobalt has an abundance of 2 × 10⁻⁵ mg/L in sea water.
- Cobalt is used in alloys to increase temperature stability and decrease corrosion.
References: Los Alamos National Laboratory (2001); Crescent Chemical Company (2001); Lange's Handbook of Chemistry (1952); CRC Handbook of Chemistry & Physics (18th Ed.); International Atomic Energy Agency ENSDF database (Oct 2010)
<urn:uuid:bd611369-0429-4eb9-9a77-35359b909546>
3.28125
1,046
Knowledge Article
Science & Tech.
58.707116
int mvgetch(y, x)
int mvwgetch(win, y, x)
getch() will read input from the terminal in a manner depending on whether delay mode is set or not. If delay is on, getch() will wait until a key is pressed; otherwise it will return the key in the input buffer, or ERR if this buffer is empty. mvgetch(...) and mvwgetch(...) will move the cursor to position y,x first. The w functions read input from the terminal related to the window win; getch() and mvgetch() read from the terminal related to stdscr.
With keypad(...) enabled, getch() will return a code defined in curses.h as a KEY_* macro when a function key is pressed. When ESCAPE is pressed (which can be the beginning of a function key sequence), ncurses will start a one-second timer. If the remainder of the keystroke is not finished within this second, the key is returned. Otherwise, the function key value is returned. (If necessary, use notimeout() to disable the one-second timer.)
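A minimal usage sketch, not part of the original manual page (assuming a standard ncurses installation; compile with -lncurses):

#include <curses.h>

int main(void)
{
    int ch;

    initscr();             /* start curses mode */
    cbreak();              /* deliver keys immediately, without waiting for Enter */
    noecho();              /* don't echo typed characters */
    keypad(stdscr, TRUE);  /* let getch() return KEY_* codes for function keys */

    mvprintw(0, 0, "Press keys (F1 quits)...");
    refresh();

    /* Delay mode is on by default, so getch() blocks until a key arrives.
     * With nodelay(stdscr, TRUE) it would instead return ERR when the
     * input buffer is empty. */
    while ((ch = getch()) != KEY_F(1)) {
        mvprintw(1, 0, "getch() returned code %d   ", ch);
        refresh();
    }

    endwin();              /* restore the terminal */
    return 0;
}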
<urn:uuid:df05d44a-e998-4aaa-b06c-ebf108cb5ded>
2.796875
237
Documentation
Software Dev.
71.57034
Provides humanity with rich resources And reveals secrets of the climate. Aside from the national parks, There is another group of guardians over marine Resources—a group of scientists. They try to decode the messages from the ocean, And unlock the mysteries of climate change.
Global warming is by far the hottest environmental issue in recent years. From the U.S. documentary An Inconvenient Truth (2006) to the locally produced ±2°C (2010), the world's growing anxiety over it is unmistakable. While there are still debates over the science behind these films, no one can deny that the weather is indeed getting hotter. The enormous impacts caused by extreme climates are seen and felt everywhere.
The Challenge of Rising Sea Levels
Record-breaking droughts, floods, snow storms and hurricanes have recently occurred in different parts of the planet. Yet an even greater disaster is still looming—global warming-induced sea level rise. Land, already representing less than 30% of the Earth's surface, may be further diminished by the oceans' expansion. Simply put, it is the Green versus the Blue, the land's defense against the invading oceans. It is a battle Taiwan cannot afford to lose.
Since 1991, Prof. Kuang-lung Fan of NTU's Institute of Oceanography has been monitoring tidal levels in coastal cities like Keelung, Yilan, Taitung and Kaohsiung. He found that, over that decade, Taiwan's sea level rose at an annual rate of 0.32 cm, exceeding the global average. At this speed, it will rise by at least 0.3 m in just a century's time.
Two factors contribute to current sea level rise, and both are caused by global warming. The melting of continental ice sheets adds water to the oceans, and thermal expansion increases the volume of seawater. The combined effects of these factors threaten the coastal ecosystems first.
The ocean plays a vital role in regulating and stabilizing global climates, making a sustainable environment possible for all living beings. Crucial natural processes like water circulation and the exchange of energy take place in the ocean and the atmosphere. Monsoons and typhoons expedite atmospheric heat exchange between Earth's poles, and ocean currents help maintain energy balance for the planet by bringing warm seawater from the vicinity of the Equator to high-latitude areas.
<urn:uuid:5c84e203-6195-421b-a390-ac3a3f70bd23>
3.46875
501
Knowledge Article
Science & Tech.
46.894133
The role of glial cells – or cells that "glue" the neurons together – has traditionally been that of a housekeeper, cooking up and serving food, cleaning up waste products, and holding everything in place. In recent years the role of glial cells has been expanded somewhat, which leads us to Einstein's brain:
In 1985, scientists at the University of California in Berkeley published anatomical studies of slivers of Einstein's brain after counting the different cells in the organ. They found the only difference between his brain and those of deceased doctors was a greater ratio of glial cells to neurons.
"We know from animal studies that as you go from invertebrates to other animals and primates, as intelligence increases, so does the ratio of glial cells to neurons," said Professor Volterra, whose study appears in the journal Nature Neuroscience.
So what are the glial cells doing? The scientists said the cells provide energy for neural circuits and help build connections, leading to a more complex brain structure.
Read the Guardian article here.
Ohh… yeah… this really is Einstein's brain.
<urn:uuid:7474977f-5636-4256-abf3-1a1577c08edb>
3.140625
236
Personal Blog
Science & Tech.
47.547047
In Newton’s theory, gravitational effects are simultaneous with their causes: the Sun attracts the Earth towards the Sun’s present position. This is often seen as the reason why Newton was in no position to “frame hypotheses” (about the mechanism or natural process by which gravity acts). Electromagnetic effects, on the other hand, are retarded. The earliest time at which a solar flare can affect us is about eight minutes later — the distance between the Sun and the Earth divided by the so-called speed of light (c). According to a widely held belief, the retardation of electromagnetic effects made it possible to explain how — by what mechanism or natural process — electric charges act on electric charges. Although we have previously (here and here) disposed of such an “explanation” as a mere sleight-of-hand, it is worth taking a look at what changed and what did not change when Einstein realized that the invariant speed was finite (namely, c) rather than infinite, as Newton had held. (Reminder: anything that “travels” with this speed in one inertial frame, does so in every other inertial frame.) The existence of an invariant speed implies a special kind of spatiotemporal relation between events: either the relation of being simultaneous, which is absolute (that is, independent of the inertial frame used) in Newton’s (non-relativistic) theory, or the relation of being situated on each other’s light cone, which is absolute in Einstein’s (relativistic) theory. Suppose that an event e1 at (x1,t1) is the cause of an event e2 at (x2,t2). The fact that e2 happens at t2, rather than at any other time, has two possible explanations. If the action of e1 on e2 is mediated, t2 is determined by the speed of mediation. This could be the speed of a material object traveling from (x1,t1) to (x2,t2), the speed of a signal propagating in an elastic medium, or what have you. But if the action of e1 on e2 is unmediated, t2 is determined by the special kind of spatiotemporal relation that the existence of an invariant speed implies. In Newton’s theory, in which simultaneity is absolute, t2 is equal to t1, whereas in the relativistic theory, t2 is retarded by |x2–x1|/c. So if an effect e2 at x2 happens a time span Δt = |x2–x1|/c after its cause e1 at x1, it means that the effect that e1 has on e2 is unmediated; by no means does it follow that e2 is brought about through the mediation of something that travels from e1 to e2 with the invariant speed c. Spacetime coordinates, we said, are human inventions. To make the laws of physics as simple as possible, we introduce them in such a way that equal coordinate intervals are physically equivalent. This means, among other things, that freely moving classical particles travel equal space intervals Δx,Δy,Δz in equal time intervals Δt; the ratios formed of Δx, Δy, Δz, and Δt are constants. The physical equivalence of equal time intervals implies a conserved physical quantity — energy; and the physical equivalence of equal intervals of the space coordinates implies another conserved physical quantity — momentum. These results have been generalized by Noether’s theorem. Suppose that we have a theory that is defined by a Lagrangian L, and suppose that L is invariant under some continuous transformation of the fields on which it depends. Noether’s theorem then implies a locally conserved quantity Q. This means that for any region R of space, the total amount of Q inside R increases (or decreases) by the amount of Q that flows into R (or out of R) through the boundary of R. 
If, for instance, L is invariant under translations in space (which it can only be if equal space intervals are physically equivalent), then the theorem implies the local conservation of momentum, and if L is invariant under time translations (which it can only be if equal time intervals are physically equivalent), then the theorem implies the local conservation of energy. More compactly, if L is invariant under spacetime translations, Noether’s theorem implies the local conservation of energy-momentum. A gauge transformation is another continuous transformation of the fields on which L depends. If L is invariant under such a transformation, the locally conserved quantity implied by this invariance is charge — electric charge in the case of this gauge transformation, or the weak (or flavor) charge associated with particles interacting via the weak force, or the strong (or color) charge associated with particles interacting via the strong force. So are we now to imagine that energy, momentum, and charge are kinds of stuff that continuously “slosh around” in space or spacetime? Of course not. The local conservation laws, like the Lagrangians that imply them, are calculational tools. They ensure that, for every scattering event and for every inertial frame, the energies, momenta, and charges of the incoming particles equal the energies, momenta, and charges (respectively) of the outgoing particles. (If some energy–momentum escapes undetected, then it also warrants the following conditional: if the escaped energy–momentum were detected, it would agree with the local conservation law for energy-momentum.)
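To make "locally conserved" concrete, the statement can be written compactly in standard notation (a textbook restatement, not the author's own equations): Noether's theorem yields a current j^μ satisfying a continuity equation, and integrating over a region R and applying the divergence theorem gives exactly the balance law described above:

$$\partial_\mu j^\mu = 0, \qquad Q_R(t) = \int_R j^0\,d^3x, \qquad \frac{dQ_R}{dt} = -\oint_{\partial R} \vec{j}\cdot d\vec{A}$$

In words: the charge inside R changes only by the flux of the current through the boundary of R.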
<urn:uuid:925716e8-b383-461e-a97f-ddd402a99fe1>
3.59375
1,187
Academic Writing
Science & Tech.
35.905894
In this page from The Design and Implementation of the 4.4BSD Operating System, it is said that: A major difference between pipes and sockets is that pipes require a common parent process to set ...
I am trying to write a simple client-server program using fork(), pipe() and select(). The parent process is a server, which selects the pipe holding data from a client using FD_ISSET and writes the data to the other clients. And ...
#!/bin/ksh
# start_service: start the service
my_server_executable 2>&1 | my_pipe_following_shell_script &
exit 0
After I run the above start_service script from the command line, it is ...
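Since the questions circle the same pattern, here is a minimal sketch of it: a parent "server" that fork()s two child "clients", each connected by a pipe(), with the parent multiplexing the read ends via select() and FD_ISSET. (Illustrative only: the names are invented, error handling is trimmed, and a real server would also relay data back to the clients over a second set of pipes rather than printing to stdout.)

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/types.h>

#define NCLIENTS 2

int main(void)
{
    int fds[NCLIENTS];              /* parent's read ends, one per client */
    int maxfd = -1;

    for (int i = 0; i < NCLIENTS; i++) {
        int p[2];
        if (pipe(p) == -1) { perror("pipe"); exit(1); }

        pid_t pid = fork();
        if (pid == -1) { perror("fork"); exit(1); }

        if (pid == 0) {             /* child: acts as a "client" */
            for (int j = 0; j < i; j++)
                close(fds[j]);      /* drop read ends inherited from earlier pipes */
            close(p[0]);            /* this child only writes */
            char msg[64];
            int len = snprintf(msg, sizeof msg, "hello from client %d\n", i);
            write(p[1], msg, len);
            close(p[1]);
            _exit(0);
        }
        close(p[1]);                /* parent only reads */
        fds[i] = p[0];
        if (p[0] > maxfd) maxfd = p[0];
    }

    int open_fds = NCLIENTS;
    while (open_fds > 0) {
        fd_set rset;
        FD_ZERO(&rset);
        for (int i = 0; i < NCLIENTS; i++)
            if (fds[i] != -1)
                FD_SET(fds[i], &rset);      /* watch every live pipe */

        if (select(maxfd + 1, &rset, NULL, NULL, NULL) == -1) {
            perror("select"); exit(1);
        }

        for (int i = 0; i < NCLIENTS; i++) {
            if (fds[i] == -1 || !FD_ISSET(fds[i], &rset))
                continue;
            char buf[256];
            ssize_t n = read(fds[i], buf, sizeof buf - 1);
            if (n <= 0) {           /* EOF: the client closed its write end */
                close(fds[i]);
                fds[i] = -1;
                open_fds--;
            } else {
                buf[n] = '\0';
                printf("server got: %s", buf);  /* stand-in for relaying to other clients */
            }
        }
    }
    return 0;
}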
<urn:uuid:87942027-4b3c-47cb-89d3-f7adc49fe824>
2.890625
139
Q&A Forum
Software Dev.
58.573846
By Randolph E. Schmid, Associated Press
WASHINGTON - For a change, there's some good news from the world of the environment. Several rare and vulnerable birds are rebounding in Europe. Conservation efforts in Peru are reducing damage to the Amazon rain forest. And black-footed ferrets are making a comeback in Wyoming. The three positive trends are reported Thursday in a series of papers in the journal Science.
Researchers led by Paul F. Donald, of Britain's Royal Society for the Protection of Birds, report that European Union policies designed to protect vulnerable species and their habitat seem to be working. In 15 European countries studied, there was a significant increase in population trends for protected birds between 1990 and 2000, compared to 1970-1990, the team found. They said the protected birds also showed an increase compared to birds not on the list. Species doing particularly well include the barnacle goose, white stork, spoonbill, little egret, Slavonian grebe and white-tailed eagle.
On this side of the Atlantic Ocean, the once endangered black-footed ferret is repopulating its Wyoming homeland, according to researchers led by Martin B. Grenier of the University of Wyoming. The last seven known ferrets of this type were removed from the wild in 1987 and placed in a captive breeding program. They have produced 4,800 juveniles, many of which were returned to the wild. At first they continued to suffer losses and extinction seemed likely when they were down to five animals, but by 2003 the wild population had grown to 52 ferrets, and researchers estimate the current wild total at more than 200.
And in South America, satellite monitoring indicates that the rate of deforestation is declining in the Peruvian rain forest. Researchers led by Paulo J. Oliveira of the Carnegie Institution report that while deforestation is continuing, it is occurring mostly in designated logging areas and not in protected regions set aside by the government. They concluded that the government's program intended to set aside land for indigenous people is also having an effect in protecting the forest.
The European bird research was supported by the European Bird Census Council and the European Union. The ferret study was funded by the Wyoming Game and Fish Department and the U.S. Fish and Wildlife Service. The Peruvian forest analysis was funded by the John D. and Catherine T. MacArthur Foundation and the Gordon and Betty Moore Foundation.
<urn:uuid:63229de0-09f1-42df-b84d-23f5882f758b>
2.921875
555
Truncated
Science & Tech.
44.01744
Water: Monitoring & Assessment
5.0 Wetland Algae
Impacts on Quality of Inland Wetlands of the United States: A Survey of Indicators, Techniques, and Applications of Community Level Biomonitoring Data
Excerpts from Report #EPA/600/3-90/073 (now out of print)
This discussion concerns wetland communities containing phytoplankton, metaphyton, benthic algae, periphyton, and epiphytic algae. Wetlands may contain algal communities that differ from those of other surface waters, or that indirectly influence community composition of algae in receiving waters. For example, acidic wetland waters commonly are rich in desmid species and acid-tolerant diatoms, such as Eunotia, Frustulia, and Pinnularia (Flensburg and Spalding 1973, Graffius 1958, Patrick 1977). Marshes may become dominated by Nostoc pruniforme, Microcoleus paludosus, Vaucheria sessilis, and sometimes Aphanothece stagnina (Prescott 1968). In a study of the effect on periphyton in a river above and below a marsh, Perdue et al. (1981) found some species of Navicula were common upriver of a marsh but almost non-existent below the marsh; several Nitzschia spp. and Fragilaria spp. were common below but rare above the marsh. Fragilaria construens was abundant in both areas.
5.1 Use as Indicators
As with microbial communities, algal communities in wetlands have most often been measured indirectly, in the pursuit of estimates of photosynthesis, respiration, and productivity. Few studies have quantified algal community structure in wetlands, or identified particular wetland algal species as indicators of wetland ecological condition. However, paleoecological studies of several peatlands have been undertaken. These use diatoms and pollen from peat cores as indicators of ancient environmental conditions (e.g., Agbeti and Dickman 1989, Battarbee and Charles 1987). Following are discussions of algal community responses to various stressors.
Enrichment/Eutrophication and Organic Loading. Algal blooms are synonymous with eutrophication, so algae (particularly blue-green forms) are obvious indicators of trophic state, at least in lakes (Hecky and Kilham 1988). As concentrations of phosphorus in flowing water begin to exceed 0.020 mg/L, or 0.015 mg/L (and frequently less) in standing water, significant changes in algal communities can begin to occur (e.g., Traaen 1978), particularly if flow-adjusted loads are greater than 0.22 g/m³ (Craig and Day 1977). Florida regulations for discharge of treated wastewater to forested wetlands specify that, on an annual average basis, waters entering the wetland contain less than 3 mg/L nitrogen and less than 1 mg/L phosphorus; the monthly average for total ammonia must be less than 2.0 mg/L.
Enriched conditions can be associated with either increased (e.g., Morgan 1987) or decreased (e.g., Hooper 1982, Schindler and Turner 1982) species richness of algal communities, depending on whether algae are mostly epiphytic or benthic, the pH, the water regime, the original state of the system, and other factors. Few studies have used algal community composition to classify the trophic state of wetlands. In other shallow surface waters, taxa such as the following (for example) have become dominant in response to fertilization (Mulligan et al. 1976, Patrick 1977, Prescott 1968):
In New Jersey streams exposed to residential and agricultural runoff, Morgan (1987) reported a shift from species characteristic of the region to species that had been geographically peripheral to the region.
Algal community structure in some cases might be capable of reflecting the form of enrichment; based on experiments in a Michigan bog, chlorophytean species responded particularly to ammonium, whereas blue-green (cyanobacteria) species dominated when phosphate was added (Hooper 1982). Euglenophytes (one-celled, mobile algae) in particular respond to increases in ammonium and Kjeldahl nitrogen (rather than to nitrate alone), as well as to other substances associated with decomposing organic matter (Hutchinson 1975). Near a wastewater-disposal pipeline in a Michigan bog, several algal species bloomed--Cladophora glomerata, Microspora, Euglena, and Spirogyra (Richardson and Schwegler 1986); algal growth rates were faster at the outfall site than at the control and at various distances away from the outfall.

Contaminant Toxicity. Numerous studies have demonstrated adverse effects of heavy metals (Whitton 1971), herbicides, synthetic organics, and/or oil on freshwater algae. Most such studies have been conducted in laboratories or non-wetland mesocosms, and/or have generally not examined community structure. Several (e.g., Hurlbert et al. 1972) report major algal blooms occurring after insecticide application due to temporary suppression of grazing by aquatic invertebrates. Herbicides have been shown to cause a shift in community composition from large filamentous chlorophytes (green algae) to smaller diatom species and blue-green algal species, particularly those of the order Chamaesiphonales (Goldsborough and Robinson 1986, Gurney and Robinson 1989, Hamilton et al. 1987, Herman et al. 1986). Following application of phenol to a shallow pond mesocosm, Giddings et al. (1984, 1985) found an indirectly-caused increase in the dominance of the taxa Euglena, Phacus, Gonium, Coleochaeta, and Scenedesmus. Oil was predicted by Werner et al. (1985) to shift community composition from algae to heterotrophic microbes. In other studies, tolerance to high arsenic levels was demonstrated by Chlorella vulgaris (Maeda et al. 1983), and in a lake contaminated with copper, lead, and zinc, Rhizosolenia eriensis bloomed while other species declined (Deniseger et al. 1990). Algal assays using highway runoff have demonstrated chronic toxicity in several cases, probably due to combined effects of heavy metals, road salt, and sediment (FHWA 1988).

Acidification. Algal responses to acidification in lakes are summarized by Stokes (1981, 1984). Algal species richness can decline in acidified lakes, particularly in the presence of heavy metals (Dillon et al. 1979). Filamentous algae typically show a proportionate increase, and the genus Mougeotia has been reported to be a useful indicator of acidification. Nonetheless, algal production can be relatively high in some naturally acidic wetlands (e.g., Bricker and Gannon 1976).

Thermal Alteration. From knowledge of algal responses in other surface waters (e.g., Squires et al. 1979), it appears likely that algae in wetlands would respond dramatically to thermal effluents, and that suitable assemblages of "most-sensitive species" could eventually be identified.

Dehydration/Inundation. Drawdown of wetland water levels often concentrates nutrients and mobilizes nutrients locked up in exposed peat. This can cause algal blooms in remaining surface water (Schlosser and Karr 1981, Schoenberg and Oliver 1988).
Inundation may have the opposite effect, diluting nutrients, reducing nutrient mobilization via oxidation, increasing algal competition with vascular plants, and thus reducing biomass of some algal taxa. However, inundation typically increases the leaf surface area available for colonization by algae, and provides increased opportunities for dispersal of some algal taxa into and out of a wetland. In some Prairie pothole wetlands, metaphyton (unattached, filamentous algae that float in a visible mat) and periphyton (attached algae) increase, while phytoplankton decreases, as higher water levels reduce the density of vascular plants and increase light penetration (Hosseini 1986).

Other Human Disturbance. In other surface waters, species suggestive of "clean" water include Melosira islandica and Cyclotella ocellata. Algal or microbial species that can indicate "contaminated" water include Chlamydomonas, Euglena viridis, Nitzschia palea, Microcystis aeruginosa, Oscillatoria tenuis, O. limosa, Stigeoclonium tenue, and Aphanizomenon flos-aquae (Prescott 1968, APHA 1980).

Salinization; Sedimentation/Burial; Vegetation Removal; Fragmentation of Habitat. We found no explicit information on algal indicators or algal community response to these stressors in wetlands. From knowledge of algal responses in other surface waters (e.g., Dickman and Gochnauer 1978), it appears likely that algae in wetlands would respond dramatically to many of these stressors, and that suitable assemblages of "most-sensitive species" could be identified.

5.2 Sampling Equipment and Methods

Factors that could be important to standardize (if possible) among collections of algal communities include:
- age of wetland (successional status)
- light penetration (water depth, turbidity, shade)
- hydraulic residence time
- conductivity and baseline chemistry of waters
- current velocity
- leaf surface area and stand density of associated vascular plants
- density of grazing aquatic invertebrates
- typical duration and frequency of wetland inundation
- time elapsed since last runoff or inundation event.

Standard protocols for algal monitoring are available, although uncertainty exists concerning their applicability to wetlands. One is presented by the manual of Britton and Greeson (1988). Replication requirements in wetland algal studies are significant, due to large spatial and temporal variability. Some investigators have recommended that samples that will be assumed to come from the same time period should be sampled within a time period less than the hydraulic residence time of the wetland. Rapid succession in dominant flagellate species was typical of shallow, eutrophic ponds where conditions fluctuate quickly (Estep and Remsen 1985). Sampling can occur at any season, but algal biomass is often greatest during the mid to late growing season (e.g., Crumpton 1989, Hooper 1978, Hooper-Reid and Robinson 1978a, b). In deeper waters, it may be advisable to sample phytoplankton at mid-day, due to vertical movements at other times (Estep and Remsen 1985). The pigment chlorophyll-a is sometimes sampled from the water column as an indicator of algal biomass, but yields little information on community structure. Rabe and Gibson (1984) found greater phytoplankton density in a shallow vegetated pond than at nonvegetated sites, but species composition was similar. In contrast, Seelbach and McDiffett (1983) found that a pond with submerged vegetation had more taxa but lower population density than an open-water pond.
Algal communities in wetlands are generally collected from sediment samples, water column samples, artificial substrates, or natural organic substrates. Methods are described as follows.

Sediment sampling. Algae can be sampled from sediment surfaces in all types of wetlands. Piston corers, plastic syringes, or other suction devices are typically used.

Water column sampling. Any wetland types that have surface water permanently or seasonally can be sampled. Samples from surface waters commonly involve use of volumetric containers or fine-mesh nets. Vertically-integrating, automated samplers can be used (e.g., Schoenberg and Oliver 1988). Surface microlayers (top 250-440 micrometers) can be sampled using fine nets or screens mounted on a frame (e.g., Estep and Remsen 1985). In flowing-water wetlands, fine nets can be mounted to intercept algae carried by currents.

Artificial substrates. Artificial substrates (initially sterile materials placed in a wetland and subjected to natural colonization) may integrate algal assemblages from a large variety of microhabitats. As with microbial communities, algal communities can be monitored by installing plexiglass plates or similar inert, sterile surfaces in any wetlands that have surface water permanently or seasonally, and allowing them to be colonized by attached algae over a period of several weeks. Substrates are then retrieved and community structure is analyzed (e.g., Hooper-Reid 1978).

Natural substrates. Natural organic substrates, particularly those in shallow water, may contain a great biomass of algae. Epiphytic and epibenthic algae are often sampled using a quadrat approach, in which a frame is placed over a standard-sized area of bottom or a standard volume of the water column is enclosed. Frame sizes of 10 x 10 cm (Atchue et al. 1983) and 1-2 m² (Schoenberg and Oliver 1988) have been used. If algal density is to be estimated accurately, the surface area of substrate must be quantified. This can be a daunting task in the case of epiphytic algae, where plant surface areas need to be measured. Some investigators have approached this by measuring surface areas of a random sample of plants, sometimes with the use of a digital scanner, then measuring their volumes (by displacement) or dry weights and developing area-volume or area-weight calibration curves. The curves can be used to estimate plant surface area from future, simpler measurements of the volume or weight of other plants of the same species.

5.3 Spatial and Temporal Variability, Data Gaps

In no region of the country, and in no wetland type, have data on algal community structure been uniformly collected from a series of statistically representative wetlands. Thus, it is currently impossible to state what are "normal" levels for parameters such as seasonal density, species richness, and their temporal and spatial variability. Studies that have compared algal community structure among wetlands (spatial variation) apparently include only Hern et al. (1978), who studied the Atchafalaya system in Louisiana, and Sykora (1984), who reported a range of 9 to 21 phytoplankton taxa per ml (mean=9, S.D.=2.3) from a series of six West Virginia wetlands. Phytoplankton density (cells per ml) ranged from 19 to 2581 (mean=203, S.D.=126). Atchue et al. (1982) found 56 taxa of phytoplankton in 8 springtime collections from a one-hectare temporary swamp pool in Virginia. We encountered no journal papers that quantified measurement errors or year-to-year variation in algal community structure in U.S.
inland wetlands. Even qualitatively, lists of "expected" wetland algal taxa appear not to have been compiled for any region or wetland type. Limited qualitative information may be available by wetland type from the "community profile" publication series of the U.S. Fish and Wildlife Service (USFWS) (Appendix C).
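The area-weight calibration approach described in Section 5.2 can be sketched numerically. Below is a minimal illustration, not part of the EPA report: all numbers are made-up stand-ins, and the linear model is just one plausible choice of calibration curve.

```python
import numpy as np

# Hypothetical calibration data for one plant species:
# dry weight (g) vs. measured leaf surface area (cm^2).
dry_weight = np.array([0.5, 1.0, 1.8, 2.4, 3.1])
surface_area = np.array([62.0, 118.0, 205.0, 276.0, 349.0])

# Fit the calibration curve: area ~ slope * weight + intercept.
slope, intercept = np.polyfit(dry_weight, surface_area, 1)

# Estimate surface area for new plants from their dry weights alone,
# giving the substrate area available to epiphytic algae.
new_weights = np.array([0.8, 2.0])
print(slope * new_weights + intercept)
```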
<urn:uuid:db0bf6b9-6f3d-4b2d-be39-9752e778a0f0>
3.203125
3,232
Academic Writing
Science & Tech.
27.498489
Science Fair Project Encyclopedia

Amateur astronomy, often called backyard astronomy, is a hobby whose participants enjoy observing celestial objects. It is usually associated with viewing the night sky when most celestial objects and events are visible, but amateur astronomers sometimes also observe during the day, for events such as sunspots and solar eclipses.

Amateur astronomy and scientific research

Unlike professional astronomy, scientific research is not typically the main goal for most amateur astronomers. Work of scientific merit is certainly possible, however, and many amateurs contribute very successfully to the knowledge base of professional astronomers. Astronomy is often promoted as one of the few remaining sciences for which amateurs can still contribute useful data. In particular, amateur astronomers often contribute toward activities such as monitoring the changes in brightness of variable stars, helping to track asteroids, and observing occultations to determine both the shape of asteroids and the shape of the terrain on the apparent edge of the Moon as seen from Earth. In the past and present, amateur astronomers have also played a major role in discovering new comets. Recently, however, funding of projects such as the Lincoln Near-Earth Asteroid Research and Near Earth Asteroid Tracking projects has meant that most comets are now discovered by automated systems, long before it is possible for amateurs to see them.

Societies for amateur astronomy

There are a large number of amateur astronomical societies around the world that serve as a meeting point for those interested in amateur astronomy, whether they be people who are actively interested in observing or "armchair astronomers" who may be simply interested in the topic. Societies range widely in their goals, depending on a variety of factors such as geographic spread, local circumstances, size and membership. For instance, a local society in the middle of a large city may have regular meetings with speakers, focusing less on observing the night sky if the membership is less able to observe due to factors such as light pollution. It is common for local societies to hold regular meetings, which may include activities such as star parties. Other activities could include amateur telescope making, which was pioneered in America by Russell W. Porter, who later played a major role in design and construction of the Hale Telescope.

Approaches to using amateur telescopes

Amateur telescopes come in many shapes and sizes, both commercial and home-built. The preferences of people who use them often differ. Some amateur astronomers prefer to learn the sky as accurately as they can, using maps to find their way between the stars. In this case a common approach is to use binoculars or a manually driven telescope, combined with star maps, to locate items of interest in the sky. The normal technique for doing this, by locating landmark stars and "hopping" between them, is called star hopping. More recently, as technology has improved and prices have come down, automated "GOTO" telescopes have also become a popular choice. With these computer-driven telescopes, the user typically enters the name of the item they wish to look at, and the telescope finds it in the sky automatically, with comparatively little further effort required by the user. The main advantage of a "GOTO" telescope for an experienced amateur astronomer is the reduction of "wasted" time that may have otherwise been used in trying to find a particular object.
This time can therefore be used more effectively for studying the object. There is significant (though usually light-hearted) debate within the hobby about which method is better. Promoters of the star hopping approach for finding items in the sky usually argue that they know the sky much better as a result. The manual method also tends to require simpler equipment with less calibration and setup time, and is therefore more versatile. Promoters of "GOTO" telescopes often argue that they are more interested in studying objects, and the reward of finding them or learning exactly where they are is not as important to them.

Additional tools and activities

In addition to optical equipment, amateur astronomers use a variety of other tools such as warm clothes, maps, and computers loaded with specialised software. There is a range of astronomy software available, with the most widely appreciated being software that generates maps of the sky. Some amateur astronomers also keep an observing log, in which they record details about what they have looked at and their impressions.

Beginning in amateur astronomy

There are many ways for people to become involved in amateur astronomy and study the night sky. One option is to join a local astronomical society, the members of which will often be very happy to help a newcomer take a more active part. Some people also prefer to simply teach themselves, in which case a large number of books are likely to be available in the local library. Common objects that are observed early are the Moon and planets. Another thing that most newcomers to amateur astronomy become acquainted with is the more prominent constellations in the night sky. When reading maps and interpreting instructions for future star hopping, constellations are good starting points for identifying locations in the night sky. They are frequently referred to by amateur astronomers when discussing the location of items of interest when looked at with binoculars and telescopes.

Beginning with a GOTO telescope

A relatively new type of beginning amateur astronomer, brought about by the increased affordability of powerful "GOTO" telescopes, is one who begins with such a telescope. It is possible for an inexperienced person to immediately look at a large number of deep sky objects in the night sky without necessarily having any prior experience or training. There is currently some debate among amateur astronomers about the merits of this approach to becoming involved in the hobby, and the effects that low-priced GOTO telescopes may be having. Amateur astronomy is exposed to more people, as an individual is less likely to be discouraged by the need to learn how to locate objects in the night sky before being able to see them. Some are concerned, however, that newcomers may become bored very quickly. A GOTO telescope does not distinguish between objects that are easy and hard to see, and newcomers may therefore begin with objects that require large amounts of experience or understanding to properly appreciate.

Becoming acquainted with the night sky

Most tutors agree that it is very important to know one's way around the sky by means of the constellations. This ability forms a platform from which deeper explorations of the sky are then possible. A planisphere can be used to find and identify the constellations. These devices show the location of the constellations for any time of the night or time of the year. An observer will also need a red flashlight to read star charts or the planisphere.
Use of a red light helps preserve the dark adaptation of the eyes. Having learned the main constellations, a beginner may want to extend their hobby and buy a pair of binoculars or a telescope. With binoculars it is possible to see many deep sky objects (DSOs), albeit not terribly well. Holding the binoculars can produce a shaky image. One way to improve the view is with the aid of a sturdy tripod mount to steady the view through the binoculars. Binoculars are still limited in range, although most of the Messier catalogue should be visible, as well as a great many NGC objects, especially near the Milky Way. An advantage of binoculars is that they allow more complete wide-field views of the larger open clusters such as the Pleiades, the Hyades, the Coma Berenices cluster and Praesepe, for example, of which only portions are usually observable in one field of view at higher magnifications.

Using a telescope

With a telescope, the sky really comes alive, especially one that has an aperture of six inches or more. Some amateur telescopes are built by their owners from scratch, but many good quality telescopes can be purchased from reputable companies. Thousands of DSOs are visible in a telescope, and the determined amateur with a large (about 41 cm) telescope can push this to tens of thousands or more. Another type of telescope to consider, especially if the amateur is observing with children, is a wide-field telescope, such as Edmund Scientific's f/4 Astroscan compact reflector. This type of telescope is typically a short tube reflector and has an aperture of only 80 to 120 mm (3 1/4 to 4 3/4 inches), but it makes it easier to target an object, since it offers a much wider field of view. With the aid of high-power lenses (i.e., eyepieces), the amateur can zoom in on planets and some of the closer DSOs. It blends a telescope's long-range light-gathering ability with something closer to a binocular's wide field of view. With any telescope, though, the mount is the most important feature. A tripod that doesn't shake every time one uses it is a must. Too many amateur astronomers give up because they have a hard time targeting an object. If the mounting tripod is rock solid, the amateur can enjoy their time observing the heavens instead of fighting with the telescope.

The next step in an amateur astronomer's quest for more space adventure comes with the purchase of a good camera for astrophotography. Starting out with a good 35 mm camera with a 50 mm lens mounted on a tripod and using a cable release and 400 or faster speed film, the amateur can capture some nice pictures of the planets and some larger nebulae, like the Orion Nebula. Some of the larger comets and prolific meteor showers can be photographed this way as well. As one progresses, cameras can be mounted directly onto telescopes, capturing on film many DSOs. Special films and even the technique of hypering the film have been employed by amateurs. Many publications accept these astrophotos in their magazines, e.g., Astronomy Magazine and Sky & Telescope.

Some good books for amateur astronomers to start with are:
- The Stars: A New Way to See Them, by Hans Augusto Rey, ISBN 0-395-081211
- NightWatch: An Equinox Guide to Viewing the Universe, by Terence Dickinson, ISBN 0-920-656897
- The Backyard Astronomer's Guide, by Terence Dickinson and Alan Dyer, ISBN 0-921-820119
- Turn Left at Orion, by Guy Consolmagno, ISBN 0-521-34090-X
- Skywatching, by David H.
Levy and John O'Byrne, ISBN 0-707-8354751-X
- Seeing in the Dark: How Backyard Stargazers Are Probing Deep Space and Guarding Earth from Interplanetary Peril, by Timothy Ferris, ISBN 0-684-865793
- The Complete Manual Of Amateur Astronomy, by P. Clay Sherrod

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
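The pointing step a "GOTO" mount automates reduces to a standard equatorial-to-horizontal coordinate conversion. A minimal sketch follows; the latitude, declination, and hour angle are illustrative values, not figures from the article.

```python
import math

lat = math.radians(40.0)   # observer latitude (assumed)
dec = math.radians(16.5)   # target declination (assumed)
ha = math.radians(30.0)    # target hour angle (assumed)

# Altitude of the target above the horizon.
sin_alt = (math.sin(dec) * math.sin(lat)
           + math.cos(dec) * math.cos(lat) * math.cos(ha))
alt = math.asin(sin_alt)

# Azimuth measured from north; the clamp guards against rounding error.
cos_az = (math.sin(dec) - math.sin(alt) * math.sin(lat)) / (math.cos(alt) * math.cos(lat))
az = math.acos(max(-1.0, min(1.0, cos_az)))
# For targets west of the meridian (positive hour angle), use 360 deg minus az.

print(math.degrees(alt), math.degrees(az))  # where the mount should slew
```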
<urn:uuid:e97128a2-3e59-4abc-a597-640f2e8bae21>
3.671875
2,233
Knowledge Article
Science & Tech.
33.95804
Science Fair Project Encyclopedia

Generally each species of bird has a distinctive style of nest. Nests can be found in many different habitats. Some birds will build them in trees, some (such as eagles, and many seabirds like kittiwakes) will build them on rocky ledges, and some will build them on the ground.

Common nest types:
- Ground nests
- Platform nests
- Cavity nests
- Cupped nests

See also bird's nest soup

In functional analysis, a nest is a chain of subspaces of a vector space, totally ordered by inclusion and closed under intersection and closed linear span. The algebra of those operators leaving invariant every subspace in a nest is called the nest algebra associated with the nest. In particular, if the nest is finite and the vector space is finite-dimensional, the corresponding nest algebra is just an algebra of block upper-triangular matrices.

NEST is also an acronym for the Nuclear Emergency Search Team (the official name of the Department of Energy's Safeguard Division), a US government group that responds to malevolent radiological incidents, i.e., reports involving nuclear weapons, including dirty bombs.

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
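The finite-dimensional statement above can be checked directly. A small sketch with an assumed example nest, the standard flag of C^3 (0 ⊂ span{e1} ⊂ span{e1, e2} ⊂ C^3), for which the invariant operators are exactly the upper-triangular matrices:

```python
import numpy as np

# Two random upper-triangular matrices: each leaves every subspace of the
# nest invariant, since column i only mixes coordinates 1..i.
a = np.triu(np.random.rand(3, 3))
b = np.triu(np.random.rand(3, 3))

# Their product is again upper-triangular, illustrating that these
# operators form an algebra (the nest algebra of this nest).
product = a @ b
print(np.allclose(product, np.triu(product)))  # True
```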
<urn:uuid:012cc212-c8b1-48db-ae21-9a72a7da6b0f>
3.640625
263
Knowledge Article
Science & Tech.
42.135519
SCIENCE IN THE NEWS DAILY

New Wyoming Supercomputer Expected to Boost Atmospheric Science
from the Los Angeles Times (Registration Required)

CHEYENNE, Wyo. -- Here in the shortgrass prairie, where being stuck in the ways of the Old West is a point of civic pride, scientists are building a machine that will, in effect, look into the future. This month, on a barren Wyoming landscape dotted with gopher holes and hay bales, the federal government is assembling a supercomputer 10 years in the making, one of the fastest computers ever built and the largest ever devoted to the study of atmospheric science. The National Center for Atmospheric Research's supercomputer has been dubbed Yellowstone, after the nearby national park, but it could have been named Nerdvana. The machine will have 100 racks of servers and 72,000 core processors, so many parts that they must be delivered in the back of a 747. Yellowstone will be capable of performing 1.5 quadrillion calculations--a quadrillion is a 1 followed by 15 zeros--every second. That's nearly a quarter of a million calculations, each second, for every person on Earth. In a little more than an hour, Yellowstone can do as many calculations as there are grains of sand on every beach in the world.
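The article's per-person figure is easy to check. A quick sketch; the world population value is an assumption of roughly 7 billion people (circa 2012), not a number from the story.

```python
calculations_per_second = 1.5e15   # Yellowstone's claimed rate
world_population = 7.0e9           # assumed, approximate 2012 population

per_person = calculations_per_second / world_population
print(per_person)  # ~2.1e5: "nearly a quarter of a million" per person per second
```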
<urn:uuid:da7c332e-571d-43f5-af86-b38a4aa14bdf>
3.21875
384
Truncated
Science & Tech.
46.364526
Lace coral (Stylaster californicus)

Lace coral description

Lace corals form ornate tree-like structures, with all the fine, tapered branches growing in one plane. These delicate fan-like corals are remarkable for their bright colours (2). The colour is deposited within the limestone skeleton and remains even after the animal tissue is gone, unlike reef-building corals, which have white skeletons and whose only colour is found in the living tissue (3). Clusters of small pores, gastropores and dactylopores, can be seen on alternating sides of the branches (2).

Lace coral biology

Unlike many coral species, lace corals do not have the symbiotic algae zooxanthellae living within the coral tissue; they are azooxanthellate (2). They are therefore not dependent on light and thus can live where the reef-building corals, dependent on photosynthetic algae, can not. Lace corals are hydrozoans, and thus have different types of polyps, with different functions, than anthozoan corals. The polyps of hydrozoans are of near microscopic size and are mostly imbedded in the skeleton, connected by a network of minute canals. All that is visible on the smooth surface are pores of two sizes: gastropores surrounded by dactylopores. Dactylopores house long fine hairs that protrude from the skeleton. The hairs possess clusters of stinging cells (nematocysts) that can inflict stings on human skin. These hairs capture prey, which is engulfed by gastrozooids, or feeding polyps, situated within the gastropores (2). Reproduction in lace corals is more complex than in reef-building corals. The polyps reproduce asexually, producing jellyfish-like medusae, which are released into the water from special cup-like structures known as ampullae. The medusae contain the reproductive organs, which release eggs and sperm into the water. Fertilised eggs develop into free-swimming larvae that will eventually settle on the substrate and form new colonies. Lace corals can also reproduce asexually by fragmentation (4) (5).

Lace coral range

Stylaster species occur throughout the Indo-Pacific and Atlantic Oceans (2).

Lace coral habitat

Lace corals are widely distributed in temperate as well as tropical latitudes, and also occur at abyssal depths. They are commonly found in caverns, where they may occur as clumps, and under overhangs in shallow reef environments (2).

Lace coral status

Stylaster californicus is listed on Appendix II of CITES (1).

Lace coral threats

Lace corals face the many threats that are impacting coral reefs globally. It is estimated that 20 percent of the world’s coral reefs have already been effectively destroyed and show no immediate prospects of recovery, and 24 percent of the world’s reefs are under imminent risk of collapse due to human pressures. These human impacts include poor land management practices that are releasing more sediment, nutrients and pollutants into the oceans and stressing the fragile reef ecosystem. Over-fishing has ‘knock-on’ effects that result in the increase of macro-algae that can out-compete and smother corals, and fishing using destructive methods physically devastates the reef. A further potential threat is the increase of coral bleaching events, as a result of global climate change (6).
Lace corals are also potentially threatened by the global coral trade, for use in aquariums, or for jewellery and ornaments; however, the amount in trade is significantly smaller compared to many other coral genera (7).

Lace coral conservation

Lace corals are listed on Appendix II of the Convention on International Trade in Endangered Species (CITES), which means that trade in this species should be carefully regulated (1). Indonesia and Fiji both have quota systems for corals, including lace corals, monitored through CITES (1). Lace corals will form part of the marine community in many marine protected areas (MPAs), which offer coral reefs a degree of protection, and there are many calls from non-governmental organisations for larger MPAs to ensure the persistence of these unique and fascinating ecosystems (6).

Find out more

For further information on this species see Veron, J.E.N. (2000) Corals of the World. Australian Institute of Marine Science, Townsville, Australia.

Glossary
- Relating to asexual reproduction: reproduction that does not involve the formation of sex cells, such as sperm and eggs. Asexual reproduction only involves one parent, and all the offspring produced by asexual reproduction are identical to one another.
- Relating to corals: corals composed of numerous genetically identical individuals (also referred to as zooids or polyps), which are produced by budding and remain physiologically connected.
- Fragmentation is a form of asexual reproduction where a new organism grows from a fragment of the parent. Each fragment develops into a mature, fully grown individual.
- Relating to corals: the stages of development before settlement on the reef. Larvae are typically very different in appearance to adults; they are able to feed and move around but usually are unable to reproduce.
- Plants that carry out a metabolic process in which carbon dioxide is broken down, using energy from sunlight absorbed by the green pigment chlorophyll. Organic compounds are produced and oxygen is given off as a by-product.
- Typically sedentary soft-bodied component of Cnidaria (corals, sea pens etc), which comprises a trunk that is fixed at the base; the mouth is placed at the opposite end of the trunk, and is surrounded by tentacles.
- Describing a close relationship between two organisms. This term usually refers to a relationship that benefits both organisms.

References
- CITES (October, 2009)
- Veron, J.E.N. (2000) Corals of the World. Vol. 3. Australian Institute of Marine Science, Townsville, Australia.
- Waikïkï Aquarium Education Department (July, 2007)
- Borneman, E.H. (2001) Aquarium Corals: Selection, Husbandry and Natural History. T.F.H. Publications, New Jersey, USA.
- Wood, E.M. (1983) Reef Corals of the World: Biology and Field Guide. T.F.H. Publications, New Jersey, USA.
- Wilkinson, C. (2004) Status of Coral Reefs of the World. Australian Institute of Marine Science, Townsville, Australia.
- Green, E. and Shirley, F. (1999) The Global Trade in Corals. World Conservation Press, Cambridge, UK.
<urn:uuid:49b4b0e1-8782-484b-8ca6-a01f9f89404b>
3.359375
1,559
Knowledge Article
Science & Tech.
36.34254
Spotlight on Venice: Climate change a wash in the City of Water

Ahh, Venice – the City of Water. Built on a lagoon along the Adriatic Sea, there must be a looming disaster in store for this lovely, sinking city in the face of climate change. Right? That was the harsh verdict of the Intergovernmental Panel on Climate Change (IPCC). But new research is questioning that conclusion. The frequency of storm surges – known by Venetians as “Acqua Alta” – is expected to drop 30 percent by the end of this century. That’s according to research led by Alberto Troccoli from the Commonwealth Scientific and Industrial Research Organisation in Canberra, Australia, and published in the most recent edition of the journal Nature Climate Change. Under climate change, weather patterns in the Mediterranean buffer the Northern Adriatic from the ill effects of extreme tides. Weather data on storm surges from 1958-1997 back this up. A decrease in the persistence and intensity of weather conditions that trigger dramatic storm events in Venice occurred over that time period. Using climate and weather models, the researchers concluded that those patterns are expected to persist. That would offset an expected 17 cm rise in sea levels over the next 90 years. In the end, there may be little change in the amount of flooding that drowns Venice. It’s an interesting example of how global climate change has very specific regional implications. And sometimes, those impacts are not what you’d expect.
<urn:uuid:e04b0f7a-32ea-47b0-b99b-16147ae7337f>
3.578125
305
Personal Blog
Science & Tech.
45.406073
CAMEL Climate Change Education
A free, comprehensive, interdisciplinary, multimedia resource for educators

EDUCATOR TOOLS > Continuing Conversations (180 Blog sites)
American Indian & Indigenous People
Climate & Agriculture
Climate & Food Security
Climate Change & Disasters
Climate Change & Security
Sea Level Rise/Coastal Adaptation
TED Talks Climate Series
Misconceptions & Skeptics
Climate Change FAQ's
How Do We Know?

CONTENT BY PARTNERS >
Livermore National Laboratory
Public Broadcasting System PBS
UCAR – COMET
Will Steger Foundation

This video shows how geothermal heat pumps generate clean, reliable, and renewable energy using the heat contained within the earth.

This case study describes potential renewable energy sources that are being used, and that could be used in the future, on tribal lands.

Video length: 2:47 min. Selected for the CLEAN Collection. This introductory video describes the basic principles of residential geothermal heat...

Included in the CLEAN Collection. This animation illustrates how heat energy from deep in Earth can be utilized to generate electricity at a large scale.

Estimated Lecture Hours: 3. Lab Hours: Discussion and group activity prompts within the presentation may account for an hour or so of the...

This complete curriculum consists of 15 teaching modules based on facts and practical usage of energy and cost saving techniques on farms. You will not find biased...
<urn:uuid:01e724cd-05ee-4eae-8f96-4d89c48320f1>
3.359375
415
Content Listing
Science & Tech.
41.172579
NASA has made its choices, and TESS is not one of them. The Transiting Exoplanet Survey Satellite would have used six telescopes to observe the brightest stars in the sky, a remarkable 2.5 million of them, hoping to find more than 1,000 transiting planets ranging in size from Jupiter-mass down to rocky worlds like our own. An entrant in the agency’s Small Explorer program, TESS could have accelerated the time-frame for discovering another habitable world, assuming all went well. Not that we don’t have Kepler at work on 100,000 distant stars, looking for transits that can give us some solid statistical knowledge of how often terrestrial (and other) planets occur. And, of course, the CoRoT mission is actively in the hunt. But TESS would have complemented both, looking at a wide variety of stars, many of which would have been M-dwarfs. Not long ago I referred to a Greg Laughlin post that noted a 98 percent probability that TESS would locate a potentially habitable transiting planet orbiting a red dwarf within 50 parsecs of the Earth. Were that the case, the results could have been handed over to the James Webb Space Telescope, scheduled for launch near the end of the putative TESS mission, for further investigation. JWST, so the thinking goes, could then take a spectrum and tell us something about conditions in that planet’s atmosphere. Retrieving data from the atmospheres of such planets is crucial to astrobiology and we’ll get it done one day, but perhaps not as soon as we hoped. Getting a mission into space is no easy matter in the best of times (see Alan Boss’ The Crowded Universe for vivid proof of this). Consider that the two Small Explorer (SMEX) finalists were chosen from an original 32 submitted in January of 2008. The SMEX missions are capped at $105 million each, excluding the launch vehicle. That cost would depend on the vehicle — the last time I looked, an Atlas V would command $130 million. We’re talking relatively small investment for a solid scientific return, even if that return doesn’t include exoplanetary results on this round. One of the two proposals now to be developed into full missions is the Interface Region Imaging Spectrograph, which will use a solar telescope and spectrograph to look at the Sun’s chromosphere. The other is the Gravity and Extreme Magnetism SMEX mission, which will measure the polarization of X-rays emitted by neutron stars and stellar-mass black holes, as well as the massive black holes found at the centers of galaxies. Given that one of NASA’s stated aims with the SMEX program is “…to raise public awareness of NASA’s space science missions through educational and public outreach activities” (see this news release), the agency may have missed an opportunity with TESS. We’re close to the detection, through radial velocity or transit studies, of a terrestrial planet around another star. That’s going to put the study of that planet’s atmosphere for life signs high on everyone’s agenda, including the public’s. From the PR perspective, TESS was a gold-plated winner.
<urn:uuid:2781a4c9-4068-460a-8e8e-132ce45223c9>
3.28125
680
Personal Blog
Science & Tech.
46.275574
- See also: Classical central-force problem

In celestial mechanics, the specific relative angular momentum (h) of two orbiting bodies is the vector product of the relative position and the relative velocity. Equivalently, it is the total angular momentum divided by the reduced mass. Specific relative angular momentum plays a pivotal role in the analysis of the two-body problem.

$$\vec{h} = \vec{r} \times \vec{v} = \frac{\vec{L}}{\mu}$$

where
- $\vec{r}$ is the relative orbital position vector
- $\vec{v}$ is the relative orbital velocity vector
- $\vec{L}$ is the total angular momentum of the system
- $\mu$ is the reduced mass

The units of $\vec{h}$ are $\mathrm{m^2\,s^{-1}}$. As usual in physics, the magnitude of the vector quantity is denoted by $h$:

$$h = \left\| \vec{h} \right\|$$

Elliptical orbit

In an elliptical orbit, the specific relative angular momentum is twice the area per unit time swept out by a chord from the primary to the secondary: this area is referred to by Kepler's second law of planetary motion. Since the area of the entire orbital ellipse is swept out in one orbital period, $h$ is equal to twice the area of the ellipse divided by the orbital period, as represented by the equation

$$h = \frac{2\pi a b}{T} = \sqrt{G(m_1 + m_2)\, p}$$

where
- $a$ is the semi-major axis
- $b$ is the semi-minor axis
- $p$ is the semi-latus rectum
- $G$ is the gravitational constant
- $m_1$, $m_2$ are the two masses.

See also
- Pandian, Jagadheep D. "Eclipse". Curious about Astronomy?. Cornell University.
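A small numerical check of the conservation behind Kepler's second law. This is an illustrative sketch, not from the article: the gravitational parameter and the initial state are assumed Earth-like values, and a simple leapfrog integrator stands in for a real orbit propagator.

```python
import numpy as np

mu = 3.986e14  # G*(m1 + m2) in m^3/s^2 (assumed, roughly Earth plus a small body)

r = np.array([7.0e6, 0.0, 0.0])    # relative position in m (assumed)
v = np.array([0.0, 8.0e3, 1.0e3])  # relative velocity in m/s (assumed)

def accel(r):
    return -mu * r / np.linalg.norm(r) ** 3

h0 = np.cross(r, v)  # specific relative angular momentum, m^2/s

dt = 1.0
for _ in range(5000):  # leapfrog (kick-drift-kick) steps
    v = v + 0.5 * dt * accel(r)
    r = r + dt * v
    v = v + 0.5 * dt * accel(r)

h1 = np.cross(r, v)
print(np.linalg.norm(h0), np.linalg.norm(h1))  # magnitudes agree closely
```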
<urn:uuid:a6d46f14-22ce-4f59-a347-400fb373cde6>
3.609375
310
Knowledge Article
Science & Tech.
26.87575
Irving Rabin is a Senior Consultant for Code Integrity Solutions, a consultancy for static source code analysis professional services. Irving is a former architect at Vitria Technologies, Principal Software Engineer at Openwave, and Research and Development Manager at Platinum Technologies.

C and C++ rely heavily on usage of pointers, i.e., variables containing memory addresses. While pointer types are conceptually different from integer types, the physical address is always an integer number. Developers heavily use integer types instead of pointer types in C/C++, even in system headers that come with compilers. For the last few decades most computer processors used 32-bit memory addresses, so development environments have been built around the assumption that addresses are 32 bits long. Implicit and explicit casting between pointers and integers, and overlapping memory structures through pointers or unions, became commonplace. Of course, 32-bit architectures have their limitations; specifically, they limit computer memory to 4GB. Application requirements have grown, and migration to longer addresses was imminent. While the first 64-bit computers were introduced in the 1970s, only in the mid-1990s did major corporations start releasing 64-bit processors to the market on a larger scale. With the new architecture came the arrival of new operating systems. And the new architecture brought new meanings and values to the well-known, frequently used legacy types. The most dramatic change was the move to 64-bit pointers. The need to adjust to the new sizes of integer types was apparent.

The Same Rules No Longer Apply

Some of the sizes of the base types changed. Some of them remained the same. However, a multitude of implicit assumptions used all over C and C++ code bases suddenly was no longer valid. Code written and tested on a 32-bit system was no longer valid after migration to 64-bit computers. Fortunately, most of the areas where code may be vulnerable to migration from 32-bit machines to 64-bit machines have been identified and classified. Some compilers use aggressive checking techniques to catch possible vulnerabilities expected during migration. Vendors of static analysis tools -- Coverity and Klocwork come to mind -- provide mechanisms that further assist smooth code migration from 32-bit to 64-bit machines. While these tools do not directly address code migration, they provide convenient extensibility and APIs to create customized checkers relevant to 32-bit to 64-bit migration. Augmenting these tools can help even further to uncover and efficiently fix these compatibility problems.

The goal of this article is to help you make your code base architecture-independent, so the same codebase can be built on either a 32-bit machine or a 64-bit machine and produce workable code for each machine. Furthermore, if 32-bit programs communicate with 64-bit programs through binary data exchange (through files or sockets), there are ways to make sure that binary structures are immune to migration between the two architecture types. This article contains detailed analysis and recommendations for dealing with a multitude of issues related to 32-bit to 64-bit migration.
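A tiny demonstration of the core hazard, sketched in Python rather than C for brevity: ctypes exposes the platform pointer size, and masking shows what happens when a 64-bit address is squeezed through a 32-bit integer. The address value below is hypothetical.

```python
import ctypes

# 8 on a 64-bit build, 4 on a 32-bit build: the assumption that broke.
print(ctypes.sizeof(ctypes.c_void_p))

address = 0x12_3456_789A           # hypothetical 64-bit address above 4 GB
truncated = address & 0xFFFF_FFFF  # what a 32-bit integer would retain
print(hex(address), hex(truncated))  # the high bits, and the address, are lost
```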
<urn:uuid:4ba96543-253d-4cdb-ac34-db29d3f6ed7d>
2.953125
618
Knowledge Article
Software Dev.
30.865849
Climate Change Responses Need Not be All or Nothing

The dialog about climate change, man's role in causing it, and possible responses to limit it or even reverse it, takes on a crisis tone for many. Is this the best way to look at it, and is it the best way to achieve results? For some, this sort of dialog hardens positions and limits our collective ability to do anything. Is there an explanation for why this seems to be happening? An Ohio State University statistician says that the natural human difficulty with grasping probabilities is preventing Americans from dealing with climate change. In a panel discussion at the American Association for the Advancement of Science meeting on Feb. 15, Mark Berliner said that an aversion to statistical thinking and probability is a significant reason that we haven't enacted strategies to deal with climate change right now. Berliner, professor and chair of statistics at Ohio State, is the former co-chair of the American Statistical Association's Advisory Committee on Climate Change Policy, and as such, he spent two years talking with U.S. Congressional staffers about climate change. As a result, he's come to the conclusion that Americans need to understand that climate change is a range of possible events that are more or less likely. However, the negative impacts of climate change can be reduced by taking some moderate actions today, he said. "The general public has an understanding of tipping points, the moment beyond which things become inevitable. But as soon as you start thinking of climate change as inevitable, it's easy to throw up your hands and say, 'it's too late, so why bother to do anything?'" Berliner said. "It's like a two-pack-a-day smoker deciding not to cut back on the cigarettes, because he's as good as gone." Read more at Ohio State University.
<urn:uuid:714c0482-15b1-4de2-8ea2-99977a39cc2f>
2.6875
386
Truncated
Science & Tech.
42.732225
Figure 9. Global methane reservoirs, fluxes, and turnover times. Major reservoirs are underlined; pool sizes and fluxes are given in Tg (10^12 g) CH4 and Tg CH4 yr^-1. Turnover times (reservoir divided by largest flux to or from the reservoir) are in parentheses. To convert Tg CH4 to moles C, multiply by 6.25 x 10^10. The methane budget is less than 1% of the Earth's carbon budget. Methane is present in quantity in only three reservoirs on Earth: as natural gas associated with fossil fuel reservoirs, as hydrates or clathrates (a cage-like structure of water ice that contains methane), and in the atmosphere, which is the smallest reservoir. Methane in the atmosphere is photochemically oxidized, and the recently observed increase in atmospheric concentrations is a result of an imbalance between sources and the major sink, photochemical oxidation. Research on methane, an important greenhouse gas, has focused on fluxes influencing the atmosphere.
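The caption's conversion factor follows from the molar mass of methane; a quick check (a sketch using standard constants, not figures from the source):

```python
grams_per_Tg = 1e12     # 1 teragram = 10^12 grams
molar_mass_ch4 = 16.0   # g/mol: 12 for carbon plus 4 x 1 for hydrogen

# Each mole of CH4 contains one mole of carbon atoms.
moles_c_per_Tg = grams_per_Tg / molar_mass_ch4
print(moles_c_per_Tg)   # 6.25e10, matching the stated factor
```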
<urn:uuid:e02f7f8d-8439-4202-b6bd-341b78cb30f5>
3.734375
224
Knowledge Article
Science & Tech.
43.188952
Name: seth h katz
Date: 1993 - 1999

Why is the sky sometimes red at night, and is it similar to a blue sky during the day?

No, the red sky you see at dusk is the result of dust and smoke particles in the air, which scatter blue light out of your line of sight but allow red light to pass through. Well, I guess in a way it's similar, since the blue sky you see during the day is the result of the molecules in the air scattering blue light toward you much more strongly than red light. Sorry about that.

Update: June 2012
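The wavelength dependence behind both parts of the answer can be made concrete with a sketch. Assuming simple Rayleigh scattering, whose strength scales as 1/wavelength^4, and nominal wavelengths for blue and red light:

```python
blue_nm = 450.0  # nominal blue wavelength (assumed)
red_nm = 650.0   # nominal red wavelength (assumed)

# Rayleigh scattering intensity goes as wavelength**-4, so the ratio below
# is how much more strongly blue is scattered than red.
ratio = (red_nm / blue_nm) ** 4
print(ratio)  # ~4.4
```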
<urn:uuid:5033117c-b3c4-4c00-a837-6817cfa79d9c>
3.0625
135
Q&A Forum
Science & Tech.
80.465
Facts, Identification & Control

Blow flies are often metallic in appearance, with feathery hairs on the terminal antennal segments of the males. Adult blow flies have sponge-like mouth parts, while maggots have hook-like mouth parts.

Behavior, Diet & Habits

Blow flies belong to the Family Calliphoridae of flies under the Order Diptera. To date, there are approximately 80 species of blow flies in North America. Blow flies are attracted to decaying meat and are typically the first organisms to come into contact with dead animals. The meat of dead animals is essential for larval survival and growth. They are also attracted to plants that give off the smell of rotting meat and as such, can be a pollinator for those plants. Female blow flies typically lay their eggs on decaying meat, where maggots hatch within a few hours to a few days depending on species. These maggots undergo three stages within several days, after which they leave their food source and pupate in soil. Within a few days, the pupation will be complete, at which point they emerge as adults.

Signs of a Blow Fly Infestation

The most common signs of blow flies are either the adults themselves or their larvae. The adults may be seen resting on surfaces or buzzing around potential food or odor sources. The larvae may be observed when they crawl out of the breeding material to pupate.
<urn:uuid:e12e46fc-38d2-47e4-881d-5ef459dd210c>
3.75
292
Knowledge Article
Science & Tech.
50.516844
The papers Einstein wrote in 1905 covered a broad swath—special relativity, electrodynamics, Brownian motion, light quanta. Churned out in less than a year, these ideas had lasting impact: scientists today still devote their lives to evaluating Einstein's work on gravity, space and time. Einstein isn't the only scientist, however, to pull off such compacted productivity. Newton, Galileo and others had their own superproductive 12-month stretches—but as far as we can tell, no post-Einstein scientist has managed one. Why? Read on.

Galileo Galilei: 1609-1610
In mid-1609, after hearing of an innovative new Dutch telescope, Galileo built a two-lens prototype that provided twice the magnification of existing field telescopes. Four months later he assembled the world's most precise astronomical telescope: it offered 20x magnification. In another four months, thanks to his new device, he had observed craters on our moon and provided the first evidence of moons elsewhere in the solar system (with the discovery of four of Jupiter's moons). These findings, which overturned the prevailing view that the sun orbited the Earth, are often considered the start of modern science.

Isaac Newton: 1666
After fleeing to the countryside during the Great Plague, Newton formulated the fundamental theories of calculus, optics, and physics, including the laws of motion and gravity. Although he didn't publish these theories until years later, historians regard his accomplishments as the closest parallel to Einstein's 1905. Almost all complex mathematical problems solved in the world today rely on Newton's early studies of mathematics. Some real-world applications include the accurate dropping of bombs in war and calculating volumes of irregular shapes for engineering purposes.

Thomas Edison: 1878
Thomas Edison invented the phonograph and the long-distance telephone system in a single year. In 1876, Alexander Graham Bell had introduced the telephone, but the system's range extended only 10 miles. With Edison's improvements, a New Yorker could call someone in Philadelphia, nearly 100 miles away. The upgraded long-distance system increased the efficiency of business communications, ushering in the Gilded Age.

Pierre and Marie Curie: 1898
The married chemists discovered two elements over six months in 1898, an impressive feat considering that 80 elements of the periodic table had already been found. The Curies announced polonium in July and radium in December, and in the meantime determined that beta rays (now known to be electrons) were negatively charged particles. These discoveries greatly contributed to the development of medical treatments such as radiation to destroy cancerous tumors.

Over the past hundred years, though, no scientist—not Hawking, Gödel, Bohr or Feynman—has achieved another annus mirabilis. Why? For one thing, historians say, the field is more crowded. "There are exponentially larger numbers of scientists now than at the time of Einstein," says Jim Gates, a physics professor at the University of Maryland. With so many people in the same disciplines asking the same questions, most modern scientists work in teams rather than solo. In addition, science is getting more specialized, and investigators now typically concern themselves with narrower fields of inquiry. "The research front is done in tiny wavelengths," says James McClellan, co-author of Science and Technology in World History.
Einstein tackled and answered many of the greatest physics questions, both big and small, before the individual thinker became obsolete. The next big scientific breakthroughs—solving mysteries of the brain, how life erupted from lifeless matter and how to colonize habitable planets—will most certainly come from teams of researchers who devote their lifetimes to a single cause. We may never again encounter the likes of Einstein—a 26-year-old patent examiner who changed the world in a single year by contemplating physics on the weekends for fun.
<urn:uuid:6fc49503-d2c6-44d1-9d7c-b91d92977aa1>
3.8125
854
Listicle
Science & Tech.
35.900334
Abert’s Squirrel, Sciurus aberti

Abert’s squirrel (Sciurus aberti) is a tree squirrel that is native to North America. It is also known as the tassel-eared squirrel. Its range extends from the Rocky Mountains all the way into Mexico, with large populations appearing in Colorado, Arizona, New Mexico, and the Grand Canyon. Its range is slightly fragmented, with most populations being isolated to the Rocky Mountains, but the introduced populations in the Graham and Santa Catalina mountains of Arizona are stable. There have also been confirmed reports of this squirrel in Spanish Peaks State Wildlife Area, by Mellott and Choate, possibly extending its range by 43 miles. Abert’s squirrel derives its common name from the American naturalist John James Abert, who was also the leading military officer in the Corps of Topographical Engineers. This squirrel has nine subspecies, all of which were previously recognized as separate species.

Abert’s squirrel can reach an average body length of 1.9 feet, with a tail length of up to 9.8 inches. The most distinguishing feature of this squirrel is its ear tufts, which can reach a length of up to 1.8 inches. Its long fur is typically grey in color, with pale or white fur on the underbelly and a visible red stripe running down the back. Individuals that reside in the rocky foothills of Colorado bear black fur all over. If not for these colorations, Abert’s squirrel would look very much like the Eurasian red squirrel.

Abert’s squirrel prefers a habitat within ponderosa pine forests, in arid and cool areas of these trees. The squirrels prefer mature trees that can produce more pinecones, which are a main source of food. The average home range of each squirrel varies depending upon its location and the seasons, but studies show that nest habitats have ponderosa pines with a diameter of twenty inches. Abert’s squirrels depend heavily on the ponderosa pine for food, shelter, and nests. Summer nests can also be built within Gambel oak and occasionally cottonwood trees. Instead of building nests within the pine trees, these squirrels will build their nests on branches, because they are too large to fit inside the tree. Most nests, although they vary in size depending upon location, are located in the upper third region of the tree crown. These can be found against trunks, or in dips or boles on top of the branches, as far up as 90 feet. Oftentimes, the crowns in which the nests are placed are supported by “witches’ brooms,” infestations of dwarf mistletoe. During the winter, a mother and her subadult baby will share these nests. The nests are built by female Abert’s squirrels, using pine twigs that can reach a diameter of .5 inches and a length of up to 2 inches. These shelters are typically used year round.

Abert’s squirrels are diurnal, but they may be active right before sunrise. The typical mating season may vary by location, but in central Arizona, mating typically occurs from May 1 to June 1. A study conducted with eight litters showed an average litter size of three to five hairless young. Between three to six weeks, the mother will transport her young to a larger nest. By seven weeks of age, the tail fur has grown in and the ears are held erect. These babies are weaned at ten weeks of age and are fully grown by 16 weeks.

The typical diet of Abert’s squirrel consists of plant materials from the ponderosa pine, including the seeds, buds, cones, and bark. It will also consume soft fungi, bones, carrion, and antlers.
These squirrels will choose to eat the seeds of the Mexican pinyon over those of the ponderosa pine if they are available. They have been known to eat the acorns of Gambel oaks as well. These squirrels consume most of their water from the pine materials they eat, but they will drink from standing water like stock ponds or rain puddles. Because the ponderosa pine only produces cones every three to four years, Abert’s squirrels will begin eating the pine seeds once they start to grow. Each squirrel can consume up to 75 cones per day when available. Between the months of October and November, the seeds will be separated from the cones and stray seeds will be eaten off the ground. During the winter, the inner bark of twigs makes up the majority of their diet, and they can eat up to 45 twigs per day.

It is thought that the northern goshawk may consume so many Abert’s squirrels that their populations will not grow, as suggested by Reynolds. Other possible common predators include grey foxes, hawks, coyotes, and bobcats, although there are no confirmed reports of this. The mortality rate of these squirrels is thought to be high due to injuries, like broken teeth, and food shortages. Abert’s squirrel appears on the IUCN Red List with a conservation status of “Least Concern”.

Image Caption: Abert’s Squirrel (Sciurus aberti). Credit: NPS/Wikipedia
<urn:uuid:0aab1e20-dd50-42e5-911f-6113d1340541>
3.453125
1,106
Knowledge Article
Science & Tech.
54.409562
Best known for its role in crafting and commanding spacecraft such as Curiosity, JPL is also home to decades’ worth of accumulated oddities. (p. 32) Found in: Science & Society NASA’s rover looks for life-friendly environments. (p. 18) Planet discovered in Alpha Centauri, just a few light-years away. (p. 23) Eventual collision with Andromeda to shake up the solar system. (p. 26) With new efforts aimed at the stars, China seeks to revive its astronomical reputation. (p. 20) Found in: Astronomy Ancient photons leave their mark in high-energy radiation from powerful galaxies. (p. 8) Found in: Atom & Cosmos A simulation suggests that giant collisions created Titan and the planet’s many smaller satellites. Found in: Astronomy and Atom & Cosmos NASA’s newest rover, Curiosity, wasn’t alone on Mars for long. Two hours after Curiosity landed in Gale Crater on August 6, her cranky alter-ego plopped down with a huff on the Red Planet. That is, a virtual alter ego named Sarcastic Rover appeared on Twitter and began updating followers about her exploits. “Oh sure,” she tweeted early on. “I can't think of anything I'd rather be doing than driving around a wasteland looking at dirt for the rest of my life.” Sarcastic Rover (@SarcasticRover) tweets about the desolate Martian wilderness, her silent, rocky companions, and the dru... Found in: Atom & Cosmos and Science & Society
<urn:uuid:63e94ce9-698d-4d96-ae63-991fc6fe96fa>
2.953125
341
Content Listing
Science & Tech.
57.233636
1 MILLION YEARS: IS THE UNIVERSE LOPSIDED?
Glenn Starkman, physicist at Case Western Reserve University

The heat of the big bang left behind radiation that has permeated the universe ever since. Space probes have mapped this cosmic microwave background, or CMB, over the entire sky and found it to be extraordinarily uniform save for small, random fluctuations, just as big bang theory had predicted. Such smoothness implies that the early universe was itself uniform. Yet some analyses, including those by my collaborators and me, saw an excess of symmetry between opposite sides of the sky and other anomalies, including a lack of the largest fluctuations, those that should span more than 60 degrees in the firmament. To find out if these are real features or statistical flukes, we just need to keep observing.

The CMB picture we see today is an accident of our place in space and time. The CMB has traveled to us from all directions for 13.7 billion years. Surveying it thus means mapping a spherical surface that surrounds us and has a radius of 13.7 billion light-years—the distance light has traveled in this time. If we wait long enough, the sphere will get bigger and bigger and thus cross new regions of the early universe. The anomalies are so large that it may take a billion years for the CMB sphere to get past them—when the sphere's radius would reach 14.7 billion light-years. If we could wait “just” one million years, most of the anomalies should still be there but slightly changed. By then, we would be able to see if they were on their way to disappearing—suggesting that they are flukes—or if their persistence reveals the presence of larger cosmic structures.

Will our heads get bigger? Katerina Harvati, paleoanthropologist at the University of Tübingen in Germany

How will giving birth at later ages change our biology? Marcus Feldman, mathematical biologist at Stanford University

1 MILLION YEARS: ARE PROTONS FOREVER?
Sean M. Carroll, theoretical physicist at the California Institute of Technology

The universe's ordinary matter consists, for the most part, of protons—particles that have been around since the big bang. Whereas other subatomic particles, including neutrons, can spontaneously decay, protons appear to be exceptionally stable. Yet some grand unified theories, or GUTs—attempts to reinterpret all of particle physics as different facets of a single force—predict that protons should break down, too, with average life spans of up to 10^36 years.

To see the proton decay, all you have to do is fill a large underground tank with water and monitor it for little flashes of light that would go off as the protons in the water's atoms finally died. The more protons you monitor, the higher the chance that you will see one decay. Studies done with existing detectors show that protons last at least 10^34 years.

This article was originally published with the title "Questions for the Next Million Years."
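The "more protons you monitor" arithmetic above is easy to make concrete. Below is a minimal Python sketch of the expected event rate; the tank size and the assumed lifetime are illustrative round numbers, not figures from the article:

AVOGADRO = 6.022e23          # molecules per mole
WATER_MOLAR_MASS_G = 18.0    # grams per mole of H2O
PROTONS_PER_MOLECULE = 10    # 2 hydrogen nuclei + 8 protons in the oxygen nucleus

def expected_decays_per_year(water_kilotons, lifetime_years):
    # For N protons with mean lifetime tau, the mean decay rate is N / tau.
    mass_g = water_kilotons * 1e9            # 1 kiloton = 1e9 grams
    molecules = mass_g / WATER_MOLAR_MASS_G * AVOGADRO
    protons = molecules * PROTONS_PER_MOLECULE
    return protons / lifetime_years

# A hypothetical 50-kiloton tank, assuming a mean proton lifetime of 1e34 years:
print(expected_decays_per_year(50, 1e34))    # about 1.7 decays per year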
<urn:uuid:f3052fd3-736f-4432-8e72-f66827491961>
3.28125
625
Content Listing
Science & Tech.
48.504152
Half of BC pines dead from fossil fuel pollution. Is it over?

Our earth is overheating at a rate unprecedented in geologic history. BC is overheating twice as fast. The decimation of our pine forests is one of the many eco-collapses emerging from our overheating landscape. Half our pine trees have been eaten alive in just the last nine years. An area five times the size of Vancouver Island is being attacked by a killing plague of billions of native pine beetles. Nothing like this has ever been witnessed. A study published in the journal Nature concluded that "the current outbreak in British Columbia, Canada, is an order of magnitude larger in area and severity than all previous recorded outbreaks." One analyst calls the devastation "probably the biggest landscape-level change since the ice age."

The force that unleashed this wholesale collapse is simple -- humans chose dirty and deadly fossil fuels instead of cleaner, sustainable energy sources. Fortunately we have easy ways in BC to quickly switch much of our dirty energy to cleaner, hopeful alternatives that will never run out. But if we don't switch soon then such eco-collapses will broaden and accelerate, threatening our way of life, our economy and our security. A recent government-NGO study of BC biodiversity bluntly states that if we continue on the path we are on, "the rate of climate change will exceed the ability of most species to migrate and adjust."

For millennia, pine forests flourished in BC and across western North America in areas with deeply frozen winter nights. But in the last decade, our fossil fuelled warming passed a tipping point in the pine forests, preventing the long hard frosts of -40C that kill most of the native beetles. Every corner of BC is heating up. Our winter nights are heating fastest of all. While the planet has warmed 0.6C in the last century, BC winters have overheated +1.7C along the southern coast to +4.5C in the north. In the heart of our pine strongholds winter heating is now rising +1.0C per decade. The very coldest points of the year just aren't as cold as they used to be. For an ever growing chunk of our forests, the life-sustaining cold snaps are too rare to stop the ravaging hordes. Our fossil fuelled beetle mania has clear-cut more BC timber volume in the last seven years than human loggers.

Throughout the 1990s, BC forests removed an average of 30 megatonnes of CO2 from the air each year - a seven tonne CO2 reduction per BC resident per year. Despite a robust industrial logging industry, BC forests still sucked up more CO2 than they gave off. Our forest's carbon account was in the black. Then in 2000, our winter nights warmed past a tipping point where pine beetles suddenly started surviving in much greater numbers. They also started breeding faster and maturing quicker in the longer growing seasons. The pine slaughter began. By 2003, for the first time recorded, BC forests had shifted to net emitters of CO2. By 2007 the beetle kill was on such an epic scale that our BC forests were hemorrhaging 50 megatonnes of CO2 - a CO2 increase of 19 tonnes per British Columbian per year, just from our forests. Humans are no longer the only force dumping megatonnes of climate-destabilizing CO2 into the air. We've inadvertently cooked up a new climate for BC in which a single species of beetle can kill 675 million cubic meters of mature forests and release an average of 70 megatonnes of CO2 year after year.
This is the kind of dangerous climate warming feedback loop that our earth has excelled at in the past. If enough feedback loops get started again, we humans will lose control of both the rate and the magnitude of climate destabilization.

The BC pine death toll is now up to 150 telephone poles of beetle-killed pine per BC resident. Imagine a pile of 600 dead trees this size stacked in front of every family-of-four home. How big do we want this pile to get before we take the simple steps to cut our climate-damaging pollution? The collective pile for Vancouver-area residents is a 150,000,000-tree heap. A logging truck would have to dump a new load every minute, 24/7, to keep up.

Fortunately there are easy ways to quickly switch our lives away from much of our fossil fuel pollution. We can make choices that replace climate-damaging pollution with local, climate-sustainable energy choices like efficiency, conservation, hydro, wind, solar, geothermal and tidal.

When will the pine forest collapse end? Experts expect the pine beetle slaughter to continue at a slowing rate until, by 2020, over two-thirds of our mature pines will be dead. That, they say, will be the end. Of the first wave. By 2030, today's pine saplings will be mature enough to be pine beetle dinner. The projections then show a second wave of pine slaughter even more brutal than the current one. And then another. And another. Ad nauseam.

"Large-scale outbreaks of pests, such as mountain pine beetle and spruce bark beetle, are expected to persist and expand with continued warming. These pose an increasing threat. Future ecological changes will be complex and potentially rapid." - Natural Resources Canada

The heartless tragedy of this tale is that our fossil fuel pollution is not only killing most of today's pine trees, it is also destroying their sanctuary and future. By melting the icy fortress that protects the pine generations to come from the devouring marauders, we make the landscape unfit for pine forests. We have started to cook several large-scale BC ecosystems into collapse and we know much worse is coming until we stop burning fossil fuels. If we keep doing just the half measures we've tried so far, then today's kids can expect to see warming five to twenty times greater. They will witness most of BC's ecosystems getting their bio-climate zone yanked from under them. Existing ecosystems will struggle and many will collapse.

Fortunately, in Vancouver there are some very easy, common actions we can take to dramatically switch our energy use from dirty and deadly fossil fuels to locally produced, climate-safe energy sources that never run out. Our city's number one source of climate damage comes from burning natural gas in our buildings. Think of natural gas heaters as chainsaws. Heating instead with much cleaner, local BC Hydro (e.g. a heat pump) will cut the climate damage by more than 80%. You will be throwing a lifeline to our forests and ecosystems.

It may be hard to imagine that the fate of our ecosystems, and even our economic, food and water security, hinges on the type of energy each of us chooses to heat, cook and get around with. But it does. Our climate will continue to shift faster than living things can adapt to until we stop burning fossil fuels. Clean BC electricity is already being used to heat and cook in about half our homes and buildings. Not only that, but in the last 20 years, our growing Vancouverite population has chosen climate-sustainable BC electricity to supply all the new energy demand in buildings.
The total use of climate-damaging natural gas has fallen 3%. We have started the race to a climate stable future. Now we need to focus on the finish line and accelerate our switch away from dirty fossil fuels. If you still burn fossil fuels in your home or business, consider joining your many neighbours and local businesses that are already using only climate-safe BC electricity in their buildings. Nobody would consider burning coal in their buildings any more, and for good reason. It is now time to make the switch away from all the other dirty and deadly fossil fuels we burn as well.

- BC government 2010 Current Outbreak Assessment
- BC government Pine Beetle FAQ
- 2007 beetle epidemic map
- Report on climate threats to biodiversity in BC
- Canada in a Changing Climate by Natural Resources Canada
- Nature article on climate feedback of beetle kill releasing more carbon than fires.
<urn:uuid:f36ebacc-86a3-454f-abdb-f18c26fd0bf8>
2.78125
1,652
Nonfiction Writing
Science & Tech.
52.081961
The PrivateKey property is used when you want to authenticate with the server using your private/public key pair, instead of using Password. This is a feature that should be supported by all SSH servers. The idea of using keys is this: you own a private key (and no one else knows this value). You supply the server with the public key that corresponds to your private key. Once you initiate a connection, wodSSH will request publickey authentication. The server will check its internal list of public keys (usually stored in ~/.ssh/authorized_keys2 or ~/.ssh2/authorization files). If a match is found, it will send a request to wodSSH to prove you own the private key. Internally, wodSSH will sign some data using the key you provided, and the server will check the signature. If they match, it will allow you to login. Some servers will also require you to enter a password, in which case this makes the server even more secure.

To generate a PrivateKey that you can use with the server, use the Keys object (included in the setup package) like this (VB code):

Dim key As New WODSSHKeyLib.Keys
key.Generate RSAkey ' 1024 bits is the default
key.Save RSAkey, "C:\my_rsa_key.txt", "My secret password"

The above sample will generate your private key and store it to a file on disk, protected with a password. You can immediately continue your code like this:

Ssh1.Login = "johndoe"
Ssh1.PrivateKey = key ' or also Ssh1.PrivateKey = key.PrivateKey(RSAkey)
Ssh1.Authentication = authPubkey

Since generation of keys may be a lengthy process (for large bit numbers it can take a few seconds), you shouldn't generate a key every time you need to use it. Rather, since it was saved, you should try to load it from disk. A typical scenario would be:

Dim key As New WODSSHKeyLib.Keys
On Error Resume Next
' try to load previously saved key
key.Load "C:\my_rsa_key.txt", "My secret password"
If Err <> 0 Then 'key was not saved, so generate and save it now
    key.Generate RSAkey ' 1024 bits is the default
    key.Save RSAkey, "C:\my_rsa_key.txt", "My secret password"
End If
' next time you run this code it will be able to load the key from the disk,
' so the expensive Generate will not be called

Now that you have your PrivateKey created, you should let the server know about it. You should do this by pasting the public key into the appropriate files on the server. For SSH servers (version 2), add a line of text 'Key somefile.pub' to the authorization file, and put your public key, as returned by the Keys.PublicKeySSH property, in a separate file, ~/.ssh2/somefile.pub. For OpenSSH servers, you should paste the public key as a new line in the ~/.ssh/authorized_keys2 file.

For VC users, you can prepare a returned key (loaded from a file, for example) by converting it to a SAFEARRAY like this (Buffer holds the key data, and Bufsize holds the key length):

SAFEARRAY *psa = SafeArrayCreateVector(VT_UI1, 0, Bufsize);
char HUGEP *data;
SafeArrayAccessData(psa, (void HUGEP**)&data);
memcpy(data, Buffer, Bufsize);
SafeArrayUnaccessData(psa);

VARIANT var;
var.vt = VT_ARRAY | VT_UI1;
var.parray = psa;

and now you can pass this VARIANT to the PrivateKey property. Or, you can pass the LPDISPATCH from the IKeys object directly in the same way:

var.vt = VT_DISPATCH;
var.pdispVal = (LPDISPATCH)your_keys_object_instance;

and it will work too.
<urn:uuid:4b048a8a-0f81-41ce-ac5f-af2f91104c00>
2.75
858
Documentation
Software Dev.
62.765095
If the Earth wobble is caused by Earth's magnetic N Pole being pushed violently away each day when it emerges to face the Sun and the approaching Planet X, then this violent push must be affecting the tides along the east and west coasts of the N American continent. The land gets pushed, and bounces back, but water has independence and would not necessarily move with the land. This would cause the tides to rush north and then south, creating pressure in inland bays and sudden swirls to ease this pressure. Are there signs such water movement is occurring? Yes indeed! Three anomalies during July, 2009 point to this wobble push as their cause - a great blob of algae in the Arctic, a swirling whirlpool off the coast of La Jolla, and high tides along the entire East Coast of the US. - Arctic Mystery: Identifying the Great Blob of Alaska July 18, 2009 - A group of hunters aboard a small boat out of the tiny Alaska village of Wainwright were the first to spot what would eventually be called "the blob." It was a dark, floating mass stretching for miles through the Chukchi Sea, a frigid and relatively shallow expanse of Arctic Ocean water between Alaska's northwest coast and the Russian Far East. The goo was fibrous, hairy. When it touched floating ice, it looked almost black. Test results released Thursday showed the blob wasn't oil, but a plant - a massive bloom of algae. While that may seem less dangerous, a lot of people are still uneasy. It's something the mostly Inupiat Eskimo residents along Alaska's northern coast say they could never remember seeing before. - Weird Rip Currents Spook La Jolla Divers July 2, 2009 - A strange current pulled divers off the La Jolla shore into a tornado-like swirl. Divers said the underwater currents were pushing down and to the south about 30 feet underwater. Even experienced divers said they had to fight to get to the surface and that it was unlike anything they had ever experienced. Lifeguards and experts at the Scripps Institution of Oceanography were unable to explain the unusual current. - Experts Struggle to Explain High Tides Jul. 25, 2009 - Since June, tides have been running from 6 inches to 2 feet above what would normally be expected, even considering seasonal and lunar fluctuations. While local tidal changes are not uncommon, researchers for the National Oceanic and Atmospheric Administration aren't sure they have ever recorded an event like this one, which is showing up all the way from Maine to Florida. - Scientists Don't Know What's Causing Freak Tides July 27, 2009 - Marine scientists say, they're baffled by several weeks of unusually high tides that coastal residents have noticed from Maine to Florida. Unusually high tides are not uncommon, but their causes are usually easily identified. Since mid-June, however, scientists have found no credible reason why tides are running a half to two-feet above normal up and down the Such phenomena have long been predicted by the Zetas, who knew the Earth wobble would commence when Planet X came into the inner solar system, as it did in 2003. ZetaTalk Prediction 4/15/1999: The oceans will continue, as will the weather, the ocean of air, to become more erratic. Winds sweeping in without notice, sudden storms, deluges, tides that are greater than expected, especially along the Pacific coastlines. Warm oceans will occur in places where they should be cool, and the fisheries will suffer from this because they can't predict where the schools of fish will be. 
ZetaTalk Prediction 1/12/2000: We also predict, as we did last year, that there will be high tides. Not tsunamis, following earthquakes, but unusual high tides. We also predict that there will begin to be reports of whirlpools in the oceans that will startle those who have never seen such a thing in the oceans. When asked specifically about the Arctic blob, the Zetas had the only explanation that made sense. Human scientists were still scratching their heads. ZetaTalk Explanation 7/18/2009: Since algae is not native to the Arctic, what drew these algae masses to the Arctic? Clearly, the tides have changed. The jet stream over the N American continent often appears to be vertical, then at other times horizontal, unlike the familiar lazy swoop. Whirlpools have developed off the coast of La Jolla in California. Tides along the East Coast are extraordinary also. The Earth wobble, which pushes the magnetic N Pole of Earth away, violently, when it appears to face the Sun, and the rogue Planet X, forces the Earth under its oceans and under the atmosphere, thus moving water and air masses in unusual directions. The algae mass was pushed, repeatedly, up into the Arctic, the push up more violent than the drift back, and thus the algae arrived in the Arctic. We warned that the wobble would become more violent, and it has! Is the wobble affecting tides elsewhere around the world? On the other side of the globe from N America stands India, which indeed is experiencing its own tide anomalies. - Mumbai Braces for Highest Tide in 100 Years July 24, 2009 - Nearly 200 people have been evacuated from coastal areas, warnings have been sent out to those in low-lying regions and schools have advised students to stay at home as India's financial capital braces for a massive 5.5 metre high tidal wave, billed as the highest in 100 years, to lash it Friday afternoon. The Earth wobble also affects the temperature, as the land is pushed under chilly air masses, creating high pressure areas which trap heat. Per a prediction by the Zetas last January 31, 2009, the entire northern hemisphere will experience cooler temperatures until the time of the pole shift, due to the magnetic N Pole of Earth being pushed away from the Sun and the approaching Planet X. This prediction has proved true! ZetaTalk Prediction 1/31/2009: At present the N Pole is tilted too far away from the Sun due to the violent push away of the magnetic N Pole when it turns to face Planet X. This is countered, however, by a violent bounce back during the wobble, forcing some parts of the globe under more tropical air. The hot and cold regions are like bands, vertical bands, showing the wobble to be a jerking back and forth of the N Pole. Hot on the West Coast, cold in the Great Lakes and New England states. Cold in Europe and too warm in western/central Russia. We predicted the wobble would get worse, and it has, but the worst is yet to come! We explained recently that the winter of 2007-2008 was excessively cold in the northern hemisphere because the N Pole of Earth was being pushed away by the N Pole of Planet X. This winter of 2008-2009 is likewise excessively cold in many parts of the northern hemisphere for the same reason, though the more violent wobble has interlaced parts of the northern hemisphere with record heat. Unfortunately for the southern hemisphere, which had record heat during the summer of 2007-2008, their heat spells will continue during their summer of 2008-2009, at least in certain locations. 
We explained that the northern hemisphere will experience more cold until the lean to the left into 3 days of darkness starts. This unfortunately means that portions of the southern hemisphere will continue to experience their record heat also until that time arrives. Per a NOAA chart, worldwide temps for July were colder than normal for most of the northern hemisphere. The southern hemisphere, particularly Antarctica, was warmer than normal, just as the Zetas predicted.

One blog computes that some 3,000 records have been broken this summer in the US, and news reports from Nashville and Baltimore reflect this fact.

- 3,000 Low Temp Records Set This July! July 26, 2009 - 1,044 daily record low temperatures have been broken this month nationwide according to NCDC -- count record "low highs" and the number increases to 2,925, surely to pass 3,000 before the end of the month. The period of July 17-20 was the worst, with over 1,600 stations breaking records. It's worth noting that these stats include all records across the US.
- Coolest July 21 Recorded in Nashville as Cool Wave Continues in Tenn. July 21, 2009 - Cool weather has broken a previous low temperature for July 21 in Nashville that was set when Rutherford B. Hayes was president. When the temperature at the National Weather Service station dipped to 58 degrees at 5:30 a.m. on Tuesday, it wiped out the previous record low for the date of 60 degrees, which was set in 1877.
- Record Low Temperature Tied this Morning, Another on the Way July 14, 8:48 AM - This past Sunday was the first time this summer (since June 1) we had hit 90F. So this morning's temperature at BWI reinforced the fact that something special is happening.

On January 21, 2009 a magnetic blast washed over the Earth, source unknown. The Sun was quiet that week, no sunspots, the solar wind was quiescent, and no CMEs were present. On May 19-20, 2009 another similar blast occurred (see Magnetosphere Anomalies in the May 23, 2009 Newsletter). The Zetas stated that both blasts were proof of the presence of Planet X nearby, stationed between the Earth and the Sun. On July 21-22, 2009 yet another incident occurred. The magnetosphere was normal at 18:05 UTC on July 21, 2009, but became deformed under the influence of the blast by 6:05 UTC on July 22, 2009. Once again, the Sun was virtually asleep, as the NOAA chart for July 21, 2009 shows. No blasts from the Sun were present, nor were they anticipated in the forecast. The Zetas continue to assert, as they did after the January 21, 2009 blast, that this is coming from the bully Planet X as it points its magnetic N Pole toward Earth.

Question: On GLP the magnetosphere has been discussed several times this past week. What is going on with the magnetosphere at the current time? Is what is going on related to Planet X or is there another cause yet unknown? [and from another] Can you give an answer to why the magnetic field has been going bonkers lately?

ZetaTalk Explanation 2/7/2009: Planet X is continually approaching Earth, who cannot escape her orbit, is locked in her orbit location, and thus the distance between these two celestial bodies continually closes. Planet X is increasingly pointing its N Pole toward Earth, which means that the hose of magnetic particles is creating an onslaught against the magnetic field of Earth. Recently scientific articles have reported that there seems to be a hole in the magnetic field of Earth on the Sun's side, reportedly four times the size of Earth. Well, of course Planet X is four times the diameter of Earth.
Man's understanding of magnetic particles is limited, as electro-magnetic particles number in the hundreds. Thus, the disturbances to compasses or electronic devices which are sensitive to nearby magnets or electronic flow will be many, in the time between the present and the last weeks before the pole shift. About this we cannot be more specific.
<urn:uuid:77a22495-fd0f-4cd8-b2fb-3f73c2696f4a>
3.421875
2,480
Comment Section
Science & Tech.
54.455059
Seasons occur because of Earth's changing distance from the Sun This myth sounds right, but science says otherwise. Earth experiences seasons because our planet tilts 23.5° with respect to its orbital plane. This statement just means the reason it's summer in the Northern Hemisphere is because Earth's North Pole tilts toward the Sun at that time. These four images of Earth show how our planet’s tilt affects its appearance at the start of each season. Photo by NASA/JPL/Johns Hopkins University At the same time, however, the South Pole tilts away from the Sun. That means winter is beginning for inhabitants of the Southern Hemisphere. And, regarding distances, Earth is approximately 3 million miles (5 million km) closer to the Sun in early January than it is in early July. That works out to a bit more than a 3 percent swing from Earth's nearest approach to the Sun to its farthest. Although small, 3 percent is not insignificant. The different distances mean the Southern Hemisphere receives more solar energy during its summer than the Northern Hemisphere does in its summer. Summer and winter occur on dates called the solstices, which mark the highest and lowest points the Sun reaches in our sky. In the Northern Hemisphere, the Sun stands 47° (our planet's 23.5° tilt times two) higher in the sky June 21 than it does December 21. So, around June 21 of each year, summer begins north of the equator, and winter begins south of that line. For this reason, it's incorrect to call June 21 the "summer" solstice. Summer begins on that date only in the Northern Hemisphere. Here at the magazine, we use the terms June solstice and December solstice to signify these dates.
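Because sunlight spreads out with the square of distance, the 3 percent swing in distance discussed above becomes roughly a 7 percent swing in received energy. A quick Python check of that inverse-square arithmetic, using commonly quoted approximate distances consistent with the 3-million-mile figure:

# Solar flux scales as 1/r^2.
perihelion_km = 147.1e6   # early January (approximate)
aphelion_km = 152.1e6     # early July (approximate)

flux_ratio = (aphelion_km / perihelion_km) ** 2
print(round(flux_ratio, 3))   # ~1.069: about 7% more sunlight in January,
                              # during the Southern Hemisphere's summer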
<urn:uuid:37b42a15-f6a5-400a-9816-349b1df1b215>
3.875
362
Knowledge Article
Science & Tech.
61.335985
Climatology (from Greek κλίμα, klima, "place, zone"; and -λογία, -logia) is the study of climate, scientifically defined as weather conditions averaged over a period of time, and is a branch of the atmospheric sciences. Basic knowledge of climate can be used within shorter term weather forecasting using analog techniques such as the El Niño – Southern Oscillation (ENSO), the Madden-Julian Oscillation (MJO), the North Atlantic Oscillation (NAO), the Northern Annular Mode (NAM), the Arctic oscillation (AO), the Northern Pacific (NP) Index, the Pacific Decadal Oscillation (PDO), and the Interdecadal Pacific Oscillation (IPO). Climate models are used for a variety of purposes, from study of the dynamics of the weather and climate system to projections of future climate.

The earliest person to hypothesize climate change may have been the medieval Chinese scientist Shen Kuo (1031–1095). Shen Kuo theorized that climates naturally shifted over an enormous span of time, after observing petrified bamboos found underground near Yanzhou (modern day Yan'an, Shaanxi province), a dry-climate area unsuitable for the growth of bamboo. Early climate researchers include Edmund Halley, who published a map of the trade winds in 1686 after a voyage to the southern hemisphere. Benjamin Franklin (1706-1790) first mapped the course of the Gulf Stream for use in sending mail from the United States to Europe. Francis Galton (1822-1911) invented the term anticyclone. Helmut Landsberg (1906-1985) fostered the use of statistical analysis in climatology, which led to its evolution into a physical science.

Climatology is approached in a variety of ways. Paleoclimatology seeks to reconstruct past climates by examining records such as ice cores and tree rings (dendroclimatology). Paleotempestology uses these same records to help determine hurricane frequency over millennia. The study of contemporary climates incorporates meteorological data accumulated over many years, such as records of rainfall, temperature and atmospheric composition. Knowledge of the atmosphere and its dynamics is also embodied in models, either statistical or mathematical, which help by integrating different observations and testing how they fit together. Modeling is used for understanding past, present and potential future climates. Historical climatology is the study of climate as related to human history and thus focuses only on the last few thousand years.

Climate research is made difficult by the large scale, long time periods, and complex processes which govern climate. Climate is governed by physical laws which can be expressed as differential equations. These equations are coupled and nonlinear, so that approximate solutions are obtained by using numerical methods to create global climate models. Climate is sometimes modeled as a stochastic process, but this is generally accepted as an approximation to processes that are otherwise too complicated to analyze.

Scientists use climate indices based on several climate patterns (known as modes of variability) in their attempt to characterize and understand the various climate mechanisms that culminate in our daily weather.
Much in the way the Dow Jones Industrial Average, which is based on the stock prices of 30 companies, is used to represent the fluctuations in the stock market as a whole, climate indices are used to represent the essential elements of climate. Climate indices are generally devised with the twin objectives of simplicity and completeness, and each index typically represents the status and timing of the climate factor it represents. By their very nature, indices are simple, and combine many details into a generalized, overall description of the atmosphere or ocean which can be used to characterize the factors which impact the global climate system.

El Niño-Southern Oscillation (ENSO) is a global coupled ocean-atmosphere phenomenon. The Pacific ocean signatures, El Niño and La Niña, are important temperature fluctuations in surface waters of the tropical Eastern Pacific Ocean. The name El Niño, from the Spanish for "the little boy", refers to the Christ child, because the phenomenon is usually noticed around Christmas time in the Pacific Ocean off the west coast of South America. La Niña means "the little girl". Their effect on climate in the subtropics and the tropics is profound. The atmospheric signature, the Southern Oscillation (SO), reflects the monthly or seasonal fluctuations in the air pressure difference between Tahiti and Darwin. The most recent occurrence of El Niño started in September 2006 and lasted until early 2007.

ENSO is a set of interacting parts of a single global system of coupled ocean-atmosphere climate fluctuations that come about as a consequence of oceanic and atmospheric circulation. ENSO is the most prominent known source of inter-annual variability in weather and climate around the world. The cycle occurs every two to seven years, with El Niño lasting nine months to two years within the longer term cycle, though not all areas globally are affected. ENSO has signatures in the Pacific, Atlantic and Indian Oceans. In the Pacific, during major warm events, El Niño warming extends over much of the tropical Pacific and becomes clearly linked to the SO intensity. While ENSO events are basically in phase between the Pacific and Indian Oceans, ENSO events in the Atlantic Ocean lag behind those in the Pacific by 12–18 months. Many of the countries most affected by ENSO events are developing countries within tropical sections of continents, with economies that are largely dependent upon their agricultural and fishery sectors as a major source of food supply, employment, and foreign exchange. New capabilities to predict the onset of ENSO events in the three oceans can have global socio-economic impacts. While ENSO is a global and natural part of the Earth's climate, whether its intensity or frequency may change as a result of global warming is an important concern. Low-frequency variability has also been identified: the quasi-decadal oscillation (QDO). Inter-decadal (ID) modulation of ENSO (from the PDO or IPO) might exist. This could explain the so-called protracted ENSO of the early 1990s.
This pattern of tropical rainfall then generally becomes very nondescript as it moves over the cooler ocean waters of the eastern Pacific but reappears over the tropical Atlantic and Indian Oceans. The wet phase of enhanced convection and precipitation is followed by a dry phase where convection is suppressed. Each cycle lasts approximately 30–60 days. The MJO is also known as the 30–60 day oscillation, 30–60 day wave, or intraseasonal oscillation.

Indices of the NAO are based on the difference of normalized sea level pressure (SLP) between Ponta Delgada, Azores and Stykkisholmur/Reykjavik, Iceland. The SLP anomalies at each station are normalized by division of each seasonal mean pressure by the long-term (1865–1984) standard deviation. Normalization is done to avoid the series being dominated by the greater variability of the northern of the two stations. Positive values of the index indicate stronger-than-average westerlies over the middle latitudes.

The NAM, or AO, is defined as the first EOF of northern hemisphere winter SLP data from the tropics and subtropics. It explains 23% of the average winter (December–March) variance, and it is dominated by the NAO structure in the Atlantic. Although there are some subtle differences from the regional pattern over the Atlantic and Arctic, the main difference is larger amplitude anomalies over the North Pacific of the same sign as those over the Atlantic. This feature gives the NAM a more annular (or zonally symmetric) structure.

The NP Index is the area-weighted sea level pressure over the region 30N–65N, 160E–140W.

The PDO is a pattern of Pacific climate variability that shifts phases on at least an inter-decadal time scale, usually about 20 to 30 years. The PDO is detected as warm or cool surface waters in the Pacific Ocean, north of 20° N. During a "warm", or "positive", phase, the west Pacific becomes cool and part of the eastern ocean warms; during a "cool" or "negative" phase, the opposite pattern occurs. The mechanism by which the pattern lasts over several years has not been identified; one suggestion is that a thin layer of warm water during summer may shield deeper cold waters. A PDO signal has been reconstructed to 1661 through tree-ring chronologies in the Baja California area.

The Interdecadal Pacific Oscillation (IPO or ID) displays similar sea surface temperature (SST) and sea level pressure patterns to the PDO, with a cycle of 15–30 years, but affects both the north and south Pacific. In the tropical Pacific, maximum SST anomalies are found away from the equator. This is quite different from the quasi-decadal oscillation (QDO), with a period of 8–12 years and maximum SST anomalies straddling the equator, thus resembling ENSO.
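The station-based NAO index described above amounts to a few lines of arithmetic: form each station's normalized anomaly, then take the difference. A minimal Python sketch; the pressure values and the long-term statistics here are made-up placeholders (a real index uses the 1865–1984 base period mentioned above):

import numpy as np

def normalized_anomaly(slp_hpa, base_mean, base_std):
    # Seasonal SLP anomaly divided by the long-term standard deviation.
    return (np.asarray(slp_hpa) - base_mean) / base_std

azores = [1024.1, 1019.8, 1022.5]    # Ponta Delgada winter means (illustrative)
iceland = [1002.3, 1009.6, 1005.1]   # Stykkisholmur/Reykjavik (illustrative)

nao = normalized_anomaly(azores, 1021.0, 2.5) - normalized_anomaly(iceland, 1005.0, 4.0)
print(nao)   # positive values indicate stronger-than-average westerlies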
These models predict an upward trend in the surface temperature record, as well as a more rapid increase in temperature at higher latitudes. Models can range from relatively simple to quite complex.

In contrast to meteorology, which focuses on short term weather systems lasting up to a few weeks, climatology studies the frequency and trends of those systems. It studies the periodicity of weather events over years to millennia, as well as changes in long-term average weather patterns, in relation to atmospheric conditions. Climatologists, those who practice climatology, study both the nature of climates – local, regional or global – and the natural or human-induced factors that cause climates to change. Climatology considers the past and can help predict future climate change. Phenomena of climatological interest include the atmospheric boundary layer, circulation patterns, heat transfer (radiative, convective and latent), interactions between the atmosphere and the oceans and land surface (particularly vegetation, land use and topography), and the chemical and physical composition of the atmosphere.

A more complicated way of making a forecast, the analog technique requires remembering a previous weather event which is expected to be mimicked by an upcoming event. What makes it a difficult technique to use is that there is rarely a perfect analog for an event in the future. Some call this type of forecasting pattern recognition, which remains a useful method of observing rainfall over data voids such as oceans, with knowledge of how satellite imagery relates to precipitation rates over land, as well as the forecasting of precipitation amounts and distribution in the future. A variation on this theme is used in medium range forecasting, known as teleconnections, where systems in other locations are used to help pin down the location of another system within the surrounding regime. One method of using teleconnections is by using climate indices such as ENSO-related phenomena.
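The balance between incoming shortwave and outgoing longwave radiation described above is often introduced through a zero-dimensional model: absorbed sunlight equals blackbody emission at a single effective temperature. A short Python sketch with standard textbook constants (a toy illustration, not one of the models discussed in the article):

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.3       # planetary albedo

# Incoming: S0/4 per unit area once spread over the whole sphere,
# reduced by the fraction reflected away.
absorbed = S0 / 4 * (1 - ALBEDO)     # about 238 W m^-2

# Outgoing: sigma * T^4; solve for the effective temperature T.
T_effective = (absorbed / SIGMA) ** 0.25
print(round(T_effective, 1))         # about 255 K; the warmer real surface
                                     # reflects the greenhouse effect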
<urn:uuid:8d930fea-e49f-44c8-b6f5-9227ab21050b>
3.125
3,237
Knowledge Article
Science & Tech.
26.779553
Hummingbirds are incredible flyers, with the ruby-throated hummingbird beating its wings 80 times every second, an ability that inspired this blog’s name. These tiny birds can fly forwards, hover, and are the only known birds to fly backwards as well. But although zooming backwards is the rarest of the hummingbird’s flying tricks, a paper in the Journal of Experimental Biology reveals that it takes no more energy than moving forwards. In the top video, an Anna’s hummingbird hovers in the still air of an inactive wind tunnel and sips sucrose from a syringe. When researchers turned on the air, the bird had to push backwards in order to reach its snack, as seen in the following video. To make the bird fly forwards, the researchers just reversed the position of the syringe. Although it looks similar to the other two videos, in the clip below, the hummingbird is flapping forwards. Clearly, it’s hard to tell these modes of flight apart. To quantify the differences, researchers took high speed videos like these, to measure the birds’ posture and wing motion, and monitored oxygen intake, to see how much energy flying took. They found that hummingbirds breathe just as heavily during backwards and forwards flight. In fact, flight in either direction was actually more energy-efficient for the hummingbird than hovering in still air. Their ability to dart and dodge makes hummingbirds nectar-gathering pros…even if they’re not so great at sitting still.
<urn:uuid:b5ea0d44-759b-4f01-9648-3b388720eb0f>
3.625
316
Personal Blog
Science & Tech.
51.112366
*pend will point to the first character in str which follows the representation of the number. If base is 0, the radix will be determined based on the leading characters of str: if str starts with '0X', radix 16 will be used; if str starts with '0', radix 8 will be used; otherwise radix 10 will be used. If base is not 0, it must be between 2 and 36, inclusive. Leading spaces are ignored. If there are no digits, ValueError will be raised.
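The base-0 radix rule is simple to mirror in code; the helper below is hypothetical (not part of the documented interface) and just restates the rule in Python:

def detect_radix(s):
    # '0X' or '0x' prefix -> 16; a bare leading '0' -> 8; otherwise 10.
    # Leading spaces are ignored, as in the documentation above; handling
    # of a sign character is an assumption added here for convenience.
    s = s.lstrip()
    if s[:1] in '+-':
        s = s[1:]
    if s[:2].upper() == '0X':
        return 16
    if s[:1] == '0':
        return 8
    return 10

assert detect_radix('  0x1A') == 16
assert detect_radix('0755') == 8
assert detect_radix('42') == 10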
<urn:uuid:cf618d08-906f-44df-a108-7994732d964f>
2.78125
122
Documentation
Software Dev.
74.485
This is a book about mathematical beauty. Not the facile surface beauty of Lissajous figures or fractals, but a beauty that is visible only to the inner eye. We are in the world of concepts that are almost too simple to understand, whose visible manifestations (polyhedra, wallpaper patterns) are only consequences of the underlying reality and not pictures of the reality itself. It is a lot like theology: for God cannot be seen, is too simple for human comprehension, and his visible manifestations are nothing more than shadows and signposts to a reality that they cannot circumscribe or define.

Most people know about complex numbers: they can be written as x+iy, where x and y are real numbers and i has no meaning but obeys the rule that i²=-1, and you can do almost everything with them that you can with real numbers. In fact, a lot of mathematics is much easier with complex numbers than it is without them. A bored intelligent schoolchild can imagine that there will be something further, with three numbers rather than two. It took Sir William Hamilton eight years to discover that there is no such thing, but that if you use four numbers, x+iy+jz+kw, with i²=-1, j²=-1, k²=-1, and ijk=-1, there is. Quaternions turn out to be a good way of representing transformations in 3-dimensional and 4-dimensional space. The writers of 3-D computer games use them when computing the effect of multiple rotations. You can sort of guess what octonions might be.

It begins to seem that we’re getting more and more Victorian. Victorian mathematics is a bit like the Albert Memorial: it delights in intricacy and abundance of detail. A Victorian likes nothing better than manipulating equations with many variables and dozens of terms, rearranging them for page after page: overtime for typographers. Twentieth and twenty-first century mathematics is spare and bony. So bony that (like the comic-book characters who are so strong that “even their muscles have muscles”) even its bones have bones. No sooner has one mathematician abstracted everything numeric from numbers – and left a beautiful skeleton behind, that fits into all manner of hitherto unrelated bodies and gives them shape and motion – than the next mathematician abstracts most of the content from his predecessor’s abstraction until its symbols become like those shadows that fill your sight when you have stared at a neon sign too long: shadows that float in the vision and dodge away when you try to look at them. If the Victorian ideal was decoration at every scale, the late-twentieth-century ideal is a theorem that can be expressed in only three symbols and proved in six lines (which it will take you a year to understand fully). It can be confidently predicted that the twenty-first-century ideal will be a theorem that is expressed by a blank piece of paper and cannot be understood even after a lifetime of study.

“On Quaternions and Octonions” is a book about bones. It categorizes real numbers (R), complex numbers (C), quaternions (Q) and octonions (O) as “algebras” (a term that has only a passing relation to what one means by “algebra” at school). It looks at them geometrically – showing, for instance, that whether you can have the equivalent of unique factorisation of integers in these algebras depends on what you define “integers” to be and (with stunning simplicity) on how much space there is between them; thus abolishing pages and pages of number-theory texts with a simple picture.
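Those four rules (i² = j² = k² = -1 and ijk = -1) determine the whole multiplication table. A short Python sketch of the resulting Hamilton product, offered as a generic illustration rather than anything from the book:

def qmul(p, q):
    # Quaternions as (w, x, y, z) tuples, meaning w + xi + yj + zk.
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j))           # (0, 0, 0, 1), i.e. ij = k
print(qmul(qmul(i, j), k))  # (-1, 0, 0, 0), i.e. ijk = -1

Composing two rotations is then a single such multiplication, which is why the game programmers mentioned above like them.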
A rotation in 3 dimensions can be represented by a single quaternion; rotations in 4 dimensions can be represented by a pair of them. And so Conway & Smith look at 3-dimensional rotations geometrically. They show (this is classic stuff but I have never seen it so well presented) that the regular polyhedra (cube, tetrahedron, icosahedron) can be characterised by the way you can rotate them and get back to what you started with. For instance, you can rotate a cube a quarter-turn round the centre of a face and you’ll get back to the same cube. You can rotate it a half-turn round the mid-point of one of its sides: same result. You can rotate it a third of a revolution round one of its diagonals: same result (I always need to pick up a real cube to see this one). Those numbers, 4, 2, 3, define the symmetry of the cube (and its dual, the octahedron).

But then, by relating these rotations to reflections (think of how the two angled mirrors of a kaleidoscope generate a whole group of rotations), the question “what regular polyhedra are there?” turns into a question about what spherical triangles you can have whose angles obey certain rules. So by enumerating a few possible combinations of numbers you can find all the regular polyhedra that exist and prove that you can’t have missed any out.

In the spirit of “the only ones possible”, Conway and Smith also give Hurwitz’s Theorem, which shows that each of R, C, Q and O comes from “doubling” the one before it in the series, that the series has to stop after O, and that no other algebras of this kind can exist. They go on to explore 4-dimensional geometry, the 7- and 8-dimensional geometry of O, and to prove some new results on factorisation in O.

“On Quaternions and Octonions” is not a textbook. It assumes that you already know what groups, rings and fields are. When an idea is presented, it is presented once: not two or three times with additional exercises to ram the point home. Some peripheral concepts, such as the orbifold notation, are used without elaborate proofs: you can spend some enjoyable time working out for yourself how they work and why they are the way they are. The result is not only a bony book but a chewy one: you can come back to the same page day after day and understand a little more of it each time.

Anyone who has at least a first-year undergraduate grounding in algebra will find this book rewarding and enjoyable: something to come back to at intervals and get a little more out of each time.
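One standard form of the enumeration mentioned above fits in a few lines: a regular polyhedron with p-sided faces, q of them meeting at each vertex, exists exactly when 1/p + 1/q > 1/2, equivalently (p-2)(q-2) < 4, which is the spherical-triangle angle condition in disguise. A Python sketch of the search:

# Enumerate Schlafli symbols {p, q} satisfying (p - 2) * (q - 2) < 4.
solids = [(p, q) for p in range(3, 7) for q in range(3, 7)
          if (p - 2) * (q - 2) < 4]
print(solids)
# [(3, 3), (3, 4), (3, 5), (4, 3), (5, 3)]:
# tetrahedron, octahedron, icosahedron, cube, dodecahedron - and no others.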
<urn:uuid:261bc68f-49d2-436f-ac1a-49866f733083>
2.6875
1,369
Personal Blog
Science & Tech.
43.835963
New Look at the Infant Universe

From ESA’s HubbleCast. In early 2009, a team of astronauts visited Hubble to repair the wear and tear of twenty years of operating in a hostile environment — and to install two new instruments, the Cosmic Origins Spectrograph, and Wide Field Camera 3 — better known as WFC3.

Hubble has become famous for its striking visible-light pictures of huge clouds of interstellar dust and gas. But sometimes scientists want to know what’s happening behind, or inside, the cloud of dust. Making infrared observations pulls away the veil and reveals the hidden stars. Until now, infrared imaging was challenging with Hubble. The Near Infrared Camera and Multi-object Spectrometer, or NICMOS, did allow astronomers to study objects in infrared light in ways not possible from the ground, but it forced them to make a difficult choice. Because its images were small — only about 65 000 pixels in total, similar to a mobile phone screen — NICMOS could produce the sharpest images only if it concentrated on a very narrow field of view. Taking in a wider view came at the cost of losing much of the detail. These improvements mean Hubble is now far better at observing large areas of sky as well as very faint and very distant objects. These are key for the science of cosmology, the study of the origins and development of the Universe.

Because the Universe is expanding, light waves coming from distant objects are stretched as they travel through space, and the waves become longer. The further away an object is, the more its light is stretched on its journey to us, and the redder the light appears. Hence the effect is known as redshift. For really distant objects, the ultraviolet and visible light is redshifted so much it goes infrared — literally, “below red” — and that is the reason that infrared imaging is so important for spotting these very distant galaxies.

This is the Hubble Ultra Deep Field, a visible light image taken in 2003 and 2004 with Hubble’s Advanced Camera for Surveys.
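The stretching described above is multiplicative: an emitted wavelength grows by a factor of (1 + z), where z is the redshift. A tiny Python illustration with example values (the z = 7 galaxy is hypothetical):

def observed_wavelength_nm(emitted_nm, z):
    # lambda_observed = lambda_emitted * (1 + z)
    return emitted_nm * (1 + z)

# Ultraviolet Lyman-alpha emission at 121.6 nm, seen from a galaxy at z = 7:
print(observed_wavelength_nm(121.6, 7))   # 972.8 nm, in the near-infrared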
<urn:uuid:8c42506b-01fe-4f2d-b545-e999dee1c30b>
3.59375
425
Truncated
Science & Tech.
46.156019
Geophysics is the study of the physical properties of the Earth. These include three which we can readily measure from a moving aircraft:
- magnetic field
- electrical conductivity
- radioactivity

Geophysicists observe how these properties and their effects vary and interpret the results in terms of underlying geological structures. Geophysical methods are particularly useful for mapping regions, such as Northern Ireland, where a layer of glacial cover or peat obscures the solid geology. The aircraft flies along a network of parallel lines taking regular readings at intervals of between one tenth of a second and one second. The aircraft's position is recorded simultaneously with a global positioning system and the height measured accurately with a radar altimeter. The data are recorded digitally and removed to the processing centre at the end of each flight.

Previous surveys in Northern Ireland
The last major geophysical survey of Northern Ireland was a magnetic survey made in 1959, and the results show the broad structural elements at a regional scale. The new survey, flown at a lower altitude along more closely spaced lines and with significantly more sensitive equipment, has provided a wealth of new detail.

The Tellus airborne survey
The Tellus airborne geophysical survey of Northern Ireland is part of the HiRes geophysical mapping programme of BGS. The survey was flown by the Joint Airborne-geoscience Capability (JAC), a partnership of BGS and the Geological Survey of Finland (GTK). The aircraft was equipped with:
- two magnetometer sensors, each with a sensitivity 100 times better than that of the 1959 survey, which measure the magnetic field;
- an electromagnetic system, which measures the electrical conductivity of the ground;
- a gamma-ray spectrometer, which measures radioactivity.
The aircraft was a De Havilland Twin Otter, originally modified for this work by the GTK. The aircraft was manned by two pilots, a navigator and an engineer. Survey lines were spaced 200 m apart and orientated approximately north-north-west or south-south-east. The survey was flown at a nominal height of 56 m above the ground over rural areas and 250 m over villages and urban areas.

We can detect and map the magnetic field of the Earth with a sensitivity of about one part in five million. Most rocks are slightly magnetic, and differences in the measured magnetic field indicate variations in the type of rock and soil beneath the aircraft. The pattern of the magnetic map shows both major geological structures deep within the Earth and the shallower effects of magnetic rocks nearer the surface. Prominent magnetic anomalies include those of the Antrim Lava Group, swarms of Palaeocene dykes, and the Palaeocene intrusions of the Mourne Mountains Complex.

The electrical conductivity of rocks and soils varies largely according to porosity, salinity, saturation and clay content. We use the variation in conductivity to help map rock and soil types and conducting structures such as faults. We may also be able to detect contaminants (for example drainage from an industrial site) that typically raise ground conductivity. The electrical conductivity map of Northern Ireland shows variations between the principal formations, areas of increased salinity, prominent expressions of major fault zones and certain industrial effects.

All rocks and soils are very slightly radioactive. Typically the radioactive content is only a few parts per million by volume, but we can detect ground radiation with sensitive detectors in the aircraft.
Most terrestrial radiation is from isotopes of uranium, thorium and potassium, and the proportions of these vary among different rock types. Mapping natural radioactivity is therefore another useful means of differentiating rock and soil types. The radioactivity map provides a standard against which to measure any change in ground radioactivity in the future. Prominent anomalies include those of the intrusive rocks of the Mourne Mountains Complex and parts of the ancient metamorphic rocks in Counties Tyrone and Derry. The Antrim Lava Group and areas covered by peat have much reduced activity.
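To put the sampling figures in context, here is a minimal sketch relating the reading interval to the along-line sample spacing. The aircraft ground speed is an assumed, illustrative value; the survey text above does not state it.

# Sketch: along-line sample spacing for an airborne survey.
# ASSUMPTION: a ground speed of ~60 m/s is illustrative only;
# the Tellus survey's actual speed is not given in the text above.

def sample_spacing_m(ground_speed_ms, interval_s):
    """Distance flown between successive readings."""
    return ground_speed_ms * interval_s

GROUND_SPEED = 60.0           # m/s (assumed)
for interval in (0.1, 1.0):   # reading intervals quoted in the text
    spacing = sample_spacing_m(GROUND_SPEED, interval)
    print(f"{interval} s interval -> {spacing:.0f} m between readings")

# With survey lines 200 m apart, a 0.1 s interval gives along-line
# samples (~6 m) far denser than the across-line spacing, which is
# typical of airborne geophysical surveys.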
<urn:uuid:0ba8f6a2-cdbe-4c35-b6e5-d1e3511ac968>
4.03125
804
Knowledge Article
Science & Tech.
27.382237
The rotifers make up a phylum of microscopic and near-microscopic pseudocoelomate animals. They were first described by John Harris in 1696 (Hudson and Gosse, 1886). Leeuwenhoek is often mistakenly credited with being the first to describe rotifers, but Harris's account of 1696 predates Leeuwenhoek's work of the early 1700s. Most rotifers are around 0.1-0.5 mm long and are common in freshwater throughout the world, with a few saltwater species. Some rotifers are free swimming and truly planktonic, others move by inchworming along the substrate, while some are sessile, living inside tubes or gelatinous holdfasts. About 25 species are colonial (e.g. Sinantherina semibullata), either sessile or planktonic.

Structure and form
Rotifers get their name (derived from Latin and meaning "wheel-bearer"; they have also been called wheel animalcules) from the corona, which is composed of several ciliated tufts around the mouth that in motion resemble a wheel. These create a current that sweeps food into the mouth, where it is chewed up by a characteristic pharynx (called the mastax) containing a tiny, calcified, jaw-like structure called the trophi. The cilia also pull the animal, when unattached, through the water. Most free-living forms have pairs of posterior toes to anchor themselves while feeding. Rotifers have bilateral symmetry and a variety of different shapes. There is a well-developed cuticle, which may be thick and rigid, giving the animal a box-like shape, or flexible, giving the animal a worm-like shape; such rotifers are respectively called loricate and illoricate. Like many other microscopic animals, adult rotifers frequently exhibit eutely - they have a fixed number of cells within a species, usually on the order of one thousand.

Males in the class Monogononta may be either present or absent depending on the species and environmental conditions. In the absence of males, reproduction is by parthenogenesis and results in clonal offspring that are genetically identical to the parent. Individuals of some species form two distinct types of parthenogenetic eggs; one type develops into a normal parthenogenetic female, while the other occurs in response to a changed environment and develops into a degenerate male that lacks a digestive system but does have a complete male reproductive system that is used to inseminate females, thereby producing fertilized 'resting eggs'. Resting eggs develop into zygotes that are able to survive extreme environmental conditions, such as may occur during winter or when the pond dries up. These eggs resume development and produce a new female generation when conditions improve again. The life span of monogonont females varies from a couple of days to about three weeks.

Bdelloid rotifers are unable to produce resting eggs, but many can survive prolonged periods of adverse conditions after desiccation. This facility is termed anhydrobiosis, and organisms with these capabilities are termed anhydrobionts. Under drought conditions, bdelloid rotifers contract into an inert form and lose almost all body water; when rehydrated, however, they resume activity within a few hours. Bdelloids can survive the dry state for prolonged periods, with the longest well-documented dormancy being nine years. While in other anhydrobionts, such as the brine shrimp, this desiccation tolerance is thought to be linked to the production of trehalose, a non-reducing disaccharide (sugar), bdelloids apparently lack the ability to synthesise trehalose.
Bdelloid rotifer genomes contain two or more divergent copies of each gene, suggesting a long-term asexual evolutionary history. Four copies of hsp82, for example, are found. Each is different and found on a different chromosome, excluding the possibility of homozygous sexual reproduction.

There are about 2000 species of rotifers, divided into three classes: Monogononta, Bdelloidea and Seisonidea. The parasitic Acanthocephala is closely related to these groups as well. Currently these four taxa are placed within the superphylum Platyzoa. Monogononta is the largest group, with around 1500 different species. Bdelloidea is of particular note because of the absence of males and the ability of an individual to survive adverse conditions by drying out (known as cryptobiosis). Bdelloids can then become active again when conditions are right.

Rotifers of all types are relatively easy to find. Many live in ponds, moist soil, or any stagnant water. Rotifers can be free swimming or sessile. Rotifers are mostly omnivorous, and some have been observed to be cannibalistic. They normally eat algae or decomposing organic material.
<urn:uuid:d2960bad-0352-4648-bb5d-d2ef37a014a8>
3.859375
1,038
Knowledge Article
Science & Tech.
24.642943
Public Function SetBitInStringFast( _
    ByVal vString As String _
    , ByRef vBit As Long _
    , ByRef vValue As Boolean _
    ) As String

Set a bit within a binary string. Bits range from 1 (one is the least-significant bit) to the number of bits in the value.

Examples:
SetBitInStringFast(Chr$(255) + Chr$(255), 3, 0) = Chr$(251) + Chr$(255)
SetBitInStringFast(Chr$(0) + Chr$(0), 3, 1) = Chr$(4) + Chr$(0)

See also: SetBit Function

Note: This function is different from GetBit and SetBit because this one considers the first string character to represent the LEAST-significant byte instead of the MOST-significant byte.

vString: String in which one of the bits is to be set to either True or False.

vBit: The number of the bit whose value within string vString is to be changed. Note: This function assumes that vBit represents one of the bits within the string vString, and its behavior will be unpredictable if that is not true.

vValue: The value to which the specified bit will be set.

Note: Function has restrictive argument types to avoid argument fix-up overhead.

Copyright 1996-1999 Entisoft. Entisoft Tools is a trademark of Entisoft.
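The documented behavior can be sketched in a few lines of Python. This is an illustration of the semantics described above (byte 0 is the least-significant byte; bit numbering starts at 1), not Entisoft's actual implementation:

def set_bit_in_string_fast(s: bytes, bit: int, value: bool) -> bytes:
    """Set bit `bit` (1 = least-significant bit of the FIRST byte) in a
    little-endian byte string, mirroring the documented behavior."""
    byte_index, bit_index = divmod(bit - 1, 8)  # first byte = least significant
    out = bytearray(s)
    if value:
        out[byte_index] |= 1 << bit_index
    else:
        out[byte_index] &= ~(1 << bit_index)
    return bytes(out)

# The two documented examples:
assert set_bit_in_string_fast(b"\xFF\xFF", 3, False) == b"\xFB\xFF"  # Chr$(251)+Chr$(255)
assert set_bit_in_string_fast(b"\x00\x00", 3, True) == b"\x04\x00"   # Chr$(4)+Chr$(0)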
<urn:uuid:53045294-0da6-4f1e-ac15-0e5baf596a4a>
2.921875
312
Documentation
Software Dev.
64.524847
Sather - A Language Tutorial
Chapter 15. Statement and Expression Catalogue

We describe below a few special expressions used in Sather - void, void() and the short-circuit boolean operations 'or' and 'and'.

A void expression returns a value whose type is determined from context. void is the value that a variable of the type receives when it is declared but not explicitly initialized. The value of void for objects (except for immutable objects) is a special value that indicates the absence of an object - it is essentially the NULL pointer. Immutable objects are described in their own chapter, but for the sake of reference:

Class  Initial Value
--------------------
INT    0
CHAR   '\0'
FLT    0.0
FLTD   0.0d
BOOL   false

For other immutable types the void value is determined by recursively setting each attribute and array element to void. For numerical types, this results in the appropriate version of 'zero'. The other built-in basic types are defined as arrays of BOOL and all have their values set to void by this rule.

void expressions may appear:
- as the initializer for a constant or shared attribute. In fact, for most built-in classes, the only legal constant value is the void value, e.g. const a: POINT := void;
- as the right hand side of an assignment statement
- as the return value in a return or yield statement
- as the value of one of the expressions in a case statement
- as the exception object in a raise statement (see the chapter on Exceptions)
- as an argument value in a method call
- in a creation expression. In this last case, the argument is ignored in resolving overloading.

void expressions may not appear as the left argument of the dot '.' operator:

a: POINT := #POINT(3,3);
a.void   -- ILLEGAL (and silly)

It is a fatal error to access object attributes of a void variable of reference type or to make any calls on a void variable of abstract type. Calls on a void variable of an immutable type are, however, quite legal (otherwise you would not be able to dot into a false boolean or a zero valued integer!).

Void test expressions evaluate their argument and return a boolean value which is true if the value is void.

p: POINT;
#OUT + void(p);   -- Prints out true
p := #POINT(3,5);
#OUT + void(p);   -- Prints out false
p := void;
#OUT + void(p);   -- Prints out true

b: BOOL;
#OUT + void(b);   -- Prints out true
b := false;
#OUT + void(b);   -- Prints out true!
-- Even though b has been assigned, it has the void value

if (3>a and b>6) or (c = "Goo") then #OUT + "Success!" end;

and expressions compute the conjunction of two boolean expressions and return boolean values. The first expression is evaluated and, if false, false is immediately returned as the result. Otherwise, the second expression is evaluated and its value returned.

or expressions compute the disjunction of two boolean expressions and return boolean values. The first expression is evaluated and, if true, true is immediately returned as the result. Otherwise, the second expression is evaluated and its value returned. For an expression description see unnamedlink.

protect
  ... some code
when STR then
  #OUT + exception.str;
when ... else ...
end;

exception expressions may only appear within the statements of the when and else clauses in protect statements. They return the exception object that caused the when branch to be taken in the most tightly enclosing protect statement. The return type is the type specified in the corresponding when clause (see unnamedlink). In an else clause the return type is '$OB'.
<urn:uuid:b65fd95d-3d47-45f3-bd6d-a1b15efa315c>
3.9375
804
Documentation
Software Dev.
57.653231
Climate Impacts of Waxman-Markey (the IPCC-based arithmetic of no gain)

Editor's note: Using mainstream models and assumptions, Mr. Knappenberger finds that in the year 2050, with an 83% emissions reduction (the aspirational goal of Waxman-Markey, the beginning steps of which are under vigorous debate), the temperature reduction is nine hundredths of one degree Fahrenheit, or two years of avoided warming by 2050. A more realistic climate bill would achieve a fraction of this amount. The author will respond to technical questions on methodology and results and invites input on alternative scenarios and analyses.

"A full implementation and adherence to the long-run emissions restrictions provisions described by the Waxman-Markey Climate Bill would result only in setting back the projected rise in global temperatures by a few years—a scientifically meaningless prospect." (from below)

The economics and the regulatory burdens of climate change bills are forever being analyzed, but the bills' primary function—mitigating future climate change—is generally ignored. Perhaps that's because it is simply assumed. After all, we are barraged daily with the horrors of what the climate will become if we don't stop emitting greenhouse gases into the atmosphere (the primary focus being on emissions from the combustion of fossil fuels). So doing something as drastic as that proposed by Waxman-Markey—a more than 80% reduction of greenhouse gas emissions from the United States by the year 2050—must surely lessen the chances of climate catastrophe. Mustn't it?

But if that were the case, why aren't the climate impacts being touted? Why aren't Representatives Waxman and Markey waving around the projected climate success of their bill? Why aren't they saying: "Economics and regulations be damned. Look how our bill is going to save the earth from human-caused climate apocalypse"?

The reason is that it won't. And they know it. That is why they, and everyone else who supports such measures, are mum about the outcome. The one thing, above all others, that they don't want you to know is this: No matter how the economic and regulatory issues shake out, the bill will have virtually no impact on the future course of the earth's climate. And this is even in its current "pure" form, without the inevitable watering down to come. So discussion of the bill, instead of focusing on climate impacts, is shrouded in economics and climate alarm.

Getting a good handle on the future climate impact of the proposed Waxman-Markey legislation is not that difficult. In fact, there are several ways to get at it. But perhaps the most versatile is the aptly named MAGICC: Model for the Assessment of Greenhouse-gas Induced Climate Change. MAGICC is sort of a climate-model simulator that you can run from your desktop (available here). It was developed by scientists at the National Center for Atmospheric Research (primarily by Dr. Tom Wigley) under funding by the U.S. Environmental Protection Agency and other organizations. MAGICC is itself a collection of simple gas-cycle, climate, and ice-melt models that is designed to produce an output that emulates the output one gets from much more complex climate models. MAGICC can produce in seconds, on your own computer, results that complex climate models take weeks to produce running on the world's fastest supercomputers.
Of course, MAGICC doesn't provide the same level of detail, but it does produce projections for the things that we most often hear about and care about—for instance, the global average temperature change. Moreover, MAGICC was developed to be used for exactly the purpose that we use it here—the purpose for which Representatives Waxman and Markey and everybody else who wants a say in this issue should be using it. That purpose is, according to MAGICC's website, "to compare the global-mean temperature and sea level implications of two different emissions scenarios"—for example, scenarios both with and without the proposed legislative emissions reductions.

So that is what we'll do. We'll first use MAGICC to produce a projection of global average temperature change through the 21st century under two of the Intergovernmental Panel on Climate Change's future emissions scenarios (which assume no explicit policy implementation). The two are: a mid-range emissions scenario (SRES A1B for those interested in the details) and a high-end emissions scenario (SRES A1FI). Then, we'll modify these IPCC scenarios by entering in the emissions reductions that would occur if the provisions outlined in the Waxman-Markey Climate Bill were fully met (leaving aside whether or not that could be done). Basically, Waxman-Markey calls for U.S. emissions to be reduced to 20% below the 2005 emissions level by 2020, 42% below 2005 levels by 2030, and 83% below 2005 levels by 2050. We'll assume that U.S. emissions remain constant at that reduced value for the rest of the century. We'll then use MAGICC to produce temperature projections using these modified scenarios and compare them with the original projections.*

And here is what we get, all rolled into one simple figure. The solid lines are the projections of the change in global average temperature across the 21st century from the original IPCC A1FI (red) and A1B (blue) high and mid-range emissions scenarios, respectively (assuming a climate sensitivity of 3ºC). The dotted lines (of the same color) indicate the projected change in global average surface temperature when the emissions reductions prescribed by Waxman-Markey are factored in.

By the year 2050, the Waxman-Markey Climate Bill would result in a global temperature "savings" of about 0.05ºC regardless of the IPCC scenario used—this is equivalent to about 2 years' worth of warming. By the year 2100, the emissions pathways become clearly distinguishable, and so too do the impacts of Waxman-Markey. Assuming the IPCC mid-range scenario (A1B), Waxman-Markey would result in a projected temperature rise of 2.847ºC, instead of a 2.959ºC rise—a mere 0.112ºC temperature "savings." Under the IPCC's high-emissions scenario, instead of a projected rise of 4.414ºC, Waxman-Markey limits the rise to 4.219ºC—a "savings" of 0.195ºC. In either case, this works out to about 5 years' worth of warming.

In other words, a full implementation of and adherence to the emissions restrictions provisions described by the Waxman-Markey Climate Bill would result only in setting back the projected rise in global temperatures by a few years—a scientifically meaningless prospect. (Note: I present the results to three significant digits, not because they are that precise when it comes to the real world, but just so that you can tell the results apart.)

Now, various aspects of the MAGICC model parameters can be tweaked, different climate models can be emulated, and different scenarios can be chosen. And different answers will be obtained.
That is the whole purpose of MAGICC—to be able to examine the sensitivity of the output to these types of changes. But if you take the time to download MAGICC yourself and run your own experiments, one thing that you will soon find out is this: No matter what you try, altering only U.S. emissions will produce unsatisfying results if you seek to save the world by altering its climate.

We have calculated only the climate impact of the United States acting alone. There is no successor treaty to the Kyoto Protocol to bind other countries to greenhouse gas emissions reductions. But, truth be told, the only countries of any real concern are China and India. The total increase in China's emissions since the year 2000 is 50 percent greater than the total increase from the rest of the world combined and is growing by leaps and bounds. And consider that India's carbon dioxide emissions haven't started to dramatically increase yet. But it is poised to do so, and an Indian official recently stated that "It is morally wrong for us to agree to reduce [carbon dioxide emissions] when 40 percent of Indians do not have access to electricity."

Without a large reduction in the carbon dioxide emissions from both China and India—not just a commitment but an actual reduction—there will be nothing climatologically gained from any restrictions on U.S. emissions, regardless of whether they come about from the Waxman-Markey bill (or other cap-and-trade proposals), from a direct carbon tax, or through some EPA regulations. This is something that should be common knowledge. But it is kept carefully guarded.

The bottom line is that a reduction of U.S. greenhouse gas emissions of greater than 80%, as envisioned in the Waxman-Markey climate bill, will only produce a global temperature "savings" during the next 50 years of about 0.05ºC. Calculating this isn't all that difficult or costly. All it takes is a little MAGICC.

[Note: Be sure not to miss Part II of this analysis, where I take a look at what happens if the rest of the world were to play along.]

* Assumptions Used in Running MAGICC

There are many parameters that can be altered when running MAGICC, including the climate sensitivity (how much warming the model produces from a doubling of CO2 concentration) and the size of the effect produced by aerosols. In all cases, we've chosen to use the MAGICC default settings, which represent middle-of-the-road estimates for these parameter values.

Also, we've had to make some assumptions about the U.S. emissions pathways as prescribed by the original IPCC scenarios in order to obtain the baseline U.S. emissions (unique to each scenario) to which we could apply the Waxman-Markey emissions reduction schedule. The most common IPCC definition of its scenarios describes the future emissions, not from individual countries, but from country groupings. Therefore, we needed to back out the U.S. emissions. To do so, we identified the country group the U.S. belongs to (the OECD90 group) and then determined the current percentage of the total group emissions contributed by the United States—which turned out to be ~50%. We then assumed that this percentage was constant over time—in other words, that the U.S. contributed 50% of the OECD90 emissions in 2000 as well as in every year between 2000 and 2100. Thus, we were able to develop the future emissions pathway of the U.S. from the group pathway defined by the IPCC for each scenario (in this case, the A1B and the A1FI scenarios).
The Waxman-Markey reductions were then applied to the projected U.S. emissions pathways, and the new U.S. emissions were then recombined into the OECD90 pathway and into the global emissions total over time. It is the total global emissions that are entered into MAGICC to produce global temperature projections—both the original emissions and the emissions modified to account for the U.S. emissions under Waxman-Markey.
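For readers who want to reproduce the bookkeeping described in these assumptions, here is a minimal sketch of the pathway adjustment. The baseline numbers below are placeholders rather than actual SRES values; only the reduction schedule (20/42/83% below 2005 levels) and the constant 50%-of-OECD90 share come from the text.

import numpy as np

# Placeholder baseline: OECD90 emissions (GtC/yr), 2000-2100 at decade steps.
# Illustrative numbers only - NOT actual SRES A1B/A1FI values.
years = np.arange(2000, 2101, 10)
oecd90 = np.linspace(3.8, 4.5, years.size)

US_SHARE = 0.50                  # text's assumption: U.S. = ~50% of OECD90
us_baseline = US_SHARE * oecd90
us_2005 = np.interp(2005, years, us_baseline)

# Waxman-Markey schedule: 20%/42%/83% below 2005 by 2020/2030/2050, flat after.
cap_years = np.array([2005, 2020, 2030, 2050, 2100])
cap_fracs = np.array([1.00, 0.80, 0.58, 0.17, 0.17])
us_capped = np.interp(years, cap_years, cap_fracs) * us_2005
us_capped = np.minimum(us_capped, us_baseline)  # the cap never raises emissions

# Rebuild the global total: swap the capped U.S. pathway into the group total.
rest_of_world = 1.0  # placeholder for the non-OECD90 global total (GtC/yr)
global_baseline = oecd90 + rest_of_world
global_capped = (oecd90 - us_baseline + us_capped) + rest_of_world

for y, b, c in zip(years, global_baseline, global_capped):
    print(f"{y}: baseline {b:.2f} GtC/yr -> with W-M {c:.2f} GtC/yr")

The resulting global emissions series is what would be fed to MAGICC in place of the unmodified scenario.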
<urn:uuid:e24bd5ab-c843-4d81-96cf-d6fc0dab32a8>
2.765625
2,390
Academic Writing
Science & Tech.
46.884527
I have overheard much confusion about local weather and global climate change. According to the experts at NASA, the difference between weather and climate is a measure of time. Weather consists of the short-term minute- to month-long changes in the atmosphere. Climate is how the atmosphere behaves over relatively long periods of time—the average weather over time and space. Some scientists define climate as the average pattern of weather in a region over 30 years.

For example, after looking at rain gauge data, you can tell if an area was drier than average during the summer. If it continues to be drier than normal over the course of many summers, then it would likely indicate a change in the climate. To add to the confusion, there are shorter-term climate variations related to El Niño, La Niña, volcanic eruptions and other changes in Earth's complicated systems. An easy way to remember the difference is that climate is what you expect—like a warm summer—and weather is what you can get—like a hot, muggy day with thunderstorms.

Research and the memories of old folks seem to indicate that the climate is changing. When you kids hear stories from your grandparents about trudging to school through waist-deep snow, they may not just be berating you for needing to be driven everywhere. You may have never experienced the extreme conditions your grandparents suffered, because changes in recent winter snows indicate that the climate has changed since those ancient folks were your age. OK, so it never snows here in Kīhei, but if summers seem hotter and drier lately, then the recent climate may have changed.

Although global warming refers to an average planetary temperature increase of a degree or so, that doesn't mean the thermometer in our back yard is going to read a degree higher. That's why "climate change" rather than "global warming" may be an easier concept for us on a daily basis. I know it's a challenge. We don't like change, because then we have to change. And we especially don't like climate change, because our economy, our homes and our wardrobes are already set up for the status quo. So, just as one day of cold does not an ice age make, neither does it relegate the term "global warming" to the status of processed luncheon meat.
<urn:uuid:80015505-2c32-4601-9503-7d3cd42e0d0e>
3.078125
499
Nonfiction Writing
Science & Tech.
45.453579
How Do Air Molecules Cause Friction?

Name: Rebecca G.
How do molecules in the air cause air resistance?

Basically, the air molecules just get in the way. Imagine walking through a crowded room. You have to push people aside or wait for them to move out of your way. You are forced to move very slowly. Now imagine moving through an empty room. You can just walk straight where you want to go. When an airplane or car or ball moves through the air, the air molecules get in the way and have to be moved aside—like moving through a crowded room. Just like it takes some energy (some force) to move the people aside, the air molecules also take a force to be moved aside. This is the air resistance.
I hope this helps,

For an object to pass through air in any direction (running, falling, flying...), the object must push the air molecules out of the way. There is always some push from the air, called air pressure. This is what astronauts have to worry about when in space. This is why they have pressure suits. When moving air molecules out of the way, the object pushes harder than usual. When the object pushes harder on the molecules, the molecules push back harder. This extra push from the air molecules is the air resistance. The faster the object moves, the faster it must push air molecules out of the way. This is why air resistance gets stronger when an object moves faster through the air.
Dr. Ken Mellendorf
Illinois Central College

Dear Rebecca G.,
An object, say a baseball, travelling through the air will, of course, collide with all the molecules in its path. Each molecule will be pushed forward by the collision and so will exert a backward force on the baseball. Since each molecule is so tiny and so light, each collision exerts a very small force on the baseball, but since there are so many molecules the combined effect can cause an appreciable amount of air resistance.
Best, Dick Plano, Professor of Physics emeritus, Rutgers University

Air is a mixture, mostly of nitrogen and oxygen molecules. These molecules "get in the way" of an object trying to move through the air. To move through the air, the object has to "push" the air out of the way. The molecules "push back". You can think of this like a person trying to "push" through a crowd. In order to move forward, the person has to move the "crowd" out of the way. The more people in the crowd, the more resistance there is for the person to move through the crowd. While not an exact example, it does describe the basic idea of the resistance to the movement of an object trying to move through the air.
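The answers above are qualitative; the standard drag equation, F = (1/2) * rho * v^2 * Cd * A, makes the speed dependence explicit. The sketch below uses illustrative values (the sphere size, drag coefficient and air density are textbook-style assumptions, not figures from the answers):

import math

def drag_force(speed_ms, area_m2, cd=0.47, air_density=1.225):
    """Drag force F = (1/2) * rho * v^2 * Cd * A (quadratic in speed).
    Cd = 0.47 is the textbook value for a smooth sphere; rho = 1.225 kg/m^3
    is sea-level air. Both are illustrative assumptions here."""
    return 0.5 * air_density * speed_ms**2 * cd * area_m2

# A baseball-sized sphere (~7.4 cm diameter) at two speeds:
area = math.pi * 0.037**2
for v in (10.0, 20.0):
    print(f"v = {v:4.1f} m/s -> drag ~ {drag_force(v, area):.3f} N")

# Doubling the speed quadruples the drag: the molecules are hit both
# more often and harder, just as the answers above describe.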
<urn:uuid:df4a2f46-6991-4431-a4e5-26c9d414fe17>
3.671875
616
Q&A Forum
Science & Tech.
60.19875
15 August 2012

The Burmese python is one of the six largest snakes in the world and is native to both tropical and subtropical areas of Southern and Southeast Asia. This species was first observed in the Everglades National Park in 1979. Importation of the Burmese python has led to some rather serious problems in Florida. When people no longer wish to care for or are unable to manage the size of their pythons, they release them into the wild. These actions have caused the Burmese python to become an invasive species in the Everglades. An invasive species is one that is non-native and generally disrupts its habitat or region by dominating it. Burmese pythons have even been known to swallow animals as large as deer and alligators – yikes!

Roaming freely, invasive species can disrupt the natural order by eating native animals that often have no natural defenses or adaptations against these new predators. Likewise, they generally don't have natural predators in their new habitat.

Researchers are hoping to learn more about the python's diet and reproductive status through examination, which will hopefully give them insight into how to manage other wild Burmese pythons in the future. Following scientific investigation, the snake will be mounted for exhibition at the Florida Museum of Natural History and then returned and put on display at the Everglades National Park.
<urn:uuid:0a63af41-6914-4b13-9de6-1574cfa732d5>
3.671875
291
Knowledge Article
Science & Tech.
29.990909
Making babies requires a male and a female, a sperm and an egg, right? Well, the wild world of animals is often more creative than the lot of us humans when it comes to making whoopee. In fact, some animals don't have sex at all, thank you very much. Just this month, bug biologists found the first all-female ant species, Mycocepurus smithii. The queen ant clones herself by making eggs that develop into adult females without fertilization. Some of those females will then become queens themselves. Apparently the species has been sexless for enough generations that the ants might not be able to mate even if they wanted to. Dissections showed that a key female sex part that normally interlocks with a male organ during mating had shrunken to a ghost of its former self.
<urn:uuid:475f3828-08c4-457a-b4a7-c0b099612f2c>
2.8125
167
Truncated
Science & Tech.
54.834147
As you saw in the last section, the derivative of a function measures the function's rate of change, or its slope. To give you a better idea of what a derivative is, imagine that Bob The Crash Test Dummy is driving a car. Bob's car is on fire, which is why his driving is somewhat erratic. The function x(t) = (3/2)*t^2 + 20*t models the number of miles driven after t hours have elapsed. After one hour has passed, Bob looks at his melting odometer to see that he has driven 21.5 miles. According to his speedometer, Bob is traveling at a measly 23 miles per hour, which spurs him to wonder audibly how much he'll be paid for this "invigorating yet harmless driving stunt" that Stunt Dummies International signed him up for. Five hours later (t = 6 hours), Bob finds that he has covered a total of 174 miles and is currently moving at 38 miles per hour. Since 174 miles was all Bob had to drive, he slams on the brakes, gets out of the car and telephones mission control on his tungsten cellphone.

In this example, the speed that Bob was traveling at any point in time could be described as the rate of change of his position at that point -- also known as the derivative of his position function. The change in the speed of Bob's car over time was his acceleration, or the rate of change of his rate of change throughout the journey. His acceleration can also be described as the second derivative of his position function, though we will mostly be concerned with the first derivative for now. Do you see now how derivatives relate to motion and position? This is very much the same as with the functions of the last section; you can think of the slope of a tangent line as the function's speed at that point. Just as a speedometer gives a vehicle's instantaneous speed, the derivative gives a function's instantaneous rate of change.

Since finding derivatives via the limit process of the last section can be rather tedious, though, it is time to introduce a much faster method. Differentiation is the process of finding derivatives, a process that becomes much faster once you have mastered the upcoming rules! Most calculus books have a chart of such rules on the inside front or back cover for easy viewing, though this page should also serve as a faithful reference. I'll start with just a few of the rules here, and explain them as I go. The function f'(x) (pronounced 'f prime of x') signifies the first derivative of f(x).

To explain the Constant Rule, think of a function that is equal to a constant, perhaps the number 3, the square root of 5, the number e, or just a constant 'a'. The graph of such a function will necessarily be flat, and thus have a slope of zero. It is natural, therefore, that the derivative of any constant is zero: if f(x) = a, then f'(x) = 0.

In order to save space, I won't use 'lim' in the other examples. The variable 'h' is assumed to be approaching zero. The Constant Multiple Rule lets you pull constants out of derivatives: if f(x) = a*g(x), then f'(x) = a*g'(x). I used 'a' again instead of an actual number just to show that it works for a general case. The Power Rule, which says that the derivative of x^n is n*x^(n-1), is more difficult to prove for a general x^n, though, so I'll use an actual number in tandem with the Constant Multiple Rule. Keep in mind that if the variable's power is a negative number, you will have to multiply through the negative sign. So if f(x) = 4*x^-3, then f'(x) = -12*x^-4. Negative exponents also 'get bigger', since the Power Rule dictates that you must subtract 1 from the exponent's power.

Going back to the example about Bob in his car, let's take the derivative of his position function to see how fast he was moving at any point in time.
Applying the power rule to x(t) = (3/2)*t^2 + 20*t gives x'(t) = 3*t + 20, which we will call v(t). The function v(t) is a better representation for instantaneous velocity than x'(t), which explains the renaming. Based on the newly-found formula for Bob's velocity, we can confirm his observations that v(1) = 23 and that v(6) = 38. If we take the derivative of the velocity function now, we get a(t) = v'(t) = 3: Bob was accelerating at a rate of 3 miles per hour per hour, which explains why he was moving more quickly toward the end of his journey than at the start. Compare the position, velocity, and acceleration functions in the following graph:

var('t')
plot(3*t^2/2+20*t, t, 0, 6)+plot(3*t+20, t, 0, 6, rgbcolor='red')+line([(0, 3), (6, 3)], rgbcolor='green')

1) Initialize t.
2) Plot Bob's position, velocity, and acceleration for his six-hour drive.

You can see that as Bob's velocity gradually increased (the red line), his distance traveled (the blue line) began to rise more quickly. His acceleration remained constant (the green line), which caused his velocity to increase linearly.

The next two differentiation rules are not as easy to apply as the first three, so pay attention. What makes them less straightforward is that both usually involve a fair amount of work. The Product Rule says that the derivative of f(x)*g(x) is f'(x)*g(x) + f(x)*g'(x). As an example of the product rule, think of two expressions, (x^2+1) and (4*x^3-2*x+3), that are multiplied together. To take the derivative of their combination, one could either multiply through (which would be somewhat of a hassle) or apply the product rule, which is the much better alternative. I wouldn't recommend simplifying the result of the product rule unless you have to; it's much safer just to leave it as it is, especially if you are going for the second derivative as well.

The Quotient Rule says that the derivative of f(x)/g(x) is (f'(x)*g(x) - f(x)*g'(x)) / g(x)^2. The quotient rule is another case where you don't always want to simplify, since generally leaving the denominator as it is (without expanding the square) looks much cleaner and is still a valid answer. In this case, though, simplifying turns out to be a valid option.

This is also the last lesson for now; sorry! Check back in a couple of weeks.
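Postscript: if you want to convince yourself that the product rule really works, here is a quick numerical check (plain Python rather than the Sage used above; the step size h is just an arbitrary small number):

def f(x): return x**2 + 1
def g(x): return 4*x**3 - 2*x + 3

def fp(x): return 2*x             # f'(x) by the power rule
def gp(x): return 12*x**2 - 2     # g'(x) by the power rule

def numeric_deriv(func, x, h=1e-6):
    """Symmetric difference quotient: (func(x+h) - func(x-h)) / (2h)."""
    return (func(x + h) - func(x - h)) / (2 * h)

x = 1.5
rule_value = fp(x)*g(x) + f(x)*gp(x)                   # product rule
numeric_value = numeric_deriv(lambda t: f(t)*g(t), x)  # limit-style estimate
print(rule_value, numeric_value)  # both print ~121.75; they agree closely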
<urn:uuid:b5ca3c5d-8e45-4cf6-aefc-aebc7e611c3b>
3.578125
1,277
Tutorial
Science & Tech.
59.222986
from the urls-we-dig-up dept

Civilizations were previously categorized by the materials they used: copper, bronze, iron, steel, plastic, etc. The advancements in materials science haven't quite had as much of an impact on society as they used to. Still, there are plenty of really cool materials now that didn't exist just a few decades ago. Here are just a few examples.
- A metallic lattice is the current record-holder for the lightest solid material -- beating out aerogels and low-density foams. This material is made up of hollow struts (up to 500 microns wide, made from a nickel-phosphorus alloy) that form a 3D lattice that looks like tiny scaffolding. [url]
- Aerogels used to be the lowest-density solid material for many decades, and they have several practical applications. Aerogels generally have very low thermal conductivities, so they can be useful for anything from cryogenic insulation to insulating shoe insoles. [url]
- Common metals, like steel, contain metal grains that can move along grain boundaries, but these boundaries can be made immobile by adding defects such as small particles. Nanometals are made with really small grain-boundary defects -- which can create super-strong materials that are also lightweight. Materials scientists are constantly working on making metal alloys and composites to further understand how these nanostructures can be created. [url]
- To discover more interesting science-related stuff, check out what's currently floating around the StumbleUpon universe. [url]
<urn:uuid:9fed636f-eda4-4d1f-9241-5414d6f2b867>
3.234375
331
Listicle
Science & Tech.
34.295585
As climate changes, so must the tools to model it

And including those small components is vital. "We just don't have a clear understanding of how very fine-scale processes affect the large-scale climate," says Ben Kirtman, professor of meteorology and physical oceanography at the University of Miami. Take, for example, hurricanes. "When we do climate-change research, we use models that don't produce hurricanes," Kirtman says. "They produce stuff kind of like hurricanes but not really." Part of the difference is that hurricanes transport heat upward, and today's climate-change models "don't simulate that heat motion correctly."

Ocean features also could stand some improvement. At present, ocean models "don't capture how eddies work because the models don't resolve them," Kirtman says. This shortcoming creates errors that require large numbers of repetitious simulations to resolve. With exascale computing, Kirtman says, climate-change models can start to resolve physical properties like ocean eddies – and potentially resolve hurricanes. He and colleagues are studying eddies and how they transport heat from the tropics to the United States. These eddies maintain the Gulf Stream. "Until we get exascale," he says, "we need lots of years of simulations" to model how these eddies affect heat transfer.

Even with exascale power, climate-change models will not replace weather models for simulating individual hurricanes. Still, exascale computing could help researchers forecast an active hurricane period, explain the relationship between sea ice and hurricane intensity, or predict how carbon dioxide levels in 50 years could affect hurricane patterns.

James J. Hack, director of the National Center for Computational Sciences and the Oak Ridge Climate Change Science Institute, says researchers also need a better understanding of the entire climate system. A couple degrees of global average temperature increase in 2100 might not seem like much to most people, but it could trigger 10-15 degrees of change in some areas. "That may move storm tracks," Hack says. "For example, it could trigger large stationary wave patterns in the atmosphere that could set up and sit there for a long time, leading to heat waves."
<urn:uuid:438af00b-cdda-4d0b-9292-35d83c04ec3a>
3.265625
495
Knowledge Article
Science & Tech.
35.727551
An irregular galaxy is the catchall name given to any galaxy that does not neatly fit into one of the categories of the Hubble classification scheme. They have no defined shape or structure and may have formed from collisions, close encounters with other galaxies or violent internal activity. They contain both old and young stars, significant amounts of gas, and usually exhibit bright knots of star formation. Due to the diversity of objects that fall into this category, it is difficult to constrain sizes, masses and luminosities. Dwarf irregulars can be as small as 3 kiloparsecs and contain as little as 10^8 solar masses of material. At the other end of the scale, the larger irregulars can be up to 10 kiloparsecs across and contain 10^10 solar masses of material. Their luminosities range from 10^7 to 10^9 solar, making them generally fainter than spiral galaxies. The best known examples of irregular galaxies are the Small and Large Magellanic Clouds. These are companion galaxies to our own Milky Way and can be easily seen at dark sites in the Southern Hemisphere.

The Large (left) and Small (right) Magellanic Clouds are prime examples of irregular galaxies.
<urn:uuid:b7b1af53-ac2f-4c9a-9886-3b6cf8edd72c>
3.890625
242
Knowledge Article
Science & Tech.
38.439876
May 22, 2013 9:34 am

Heinrich Rohrer, winner of the 1986 Nobel Prize in Physics, passed away last week at the age of 79. Rohrer is widely regarded as one of the founding scientists of the nanotechnology field. He shared the 1986 prize with his colleague Gerd Binnig, the two being honored for their design of the scanning tunneling microscope; the other half of that year's prize went to Ernst Ruska, cited for his fundamental work in electron optics and for the design of the first electron microscope. The electron microscope is what let scientists see viruses; the scanning tunneling microscope is what let IBM make this little animation.

Here's Physics World on how the Scanning Tunneling Microscope (STM) works:

An STM creates an image of the surface of a sample by scanning an atomically sharp tip over its surface. The tip is held less than one nanometre from the surface and a voltage is applied so that electrons can undergo quantum-mechanical tunnelling between tip and surface. The tunnelling current is strongly dependent on the tip–surface separation and this is used in a feedback loop to keep the tip the same distance from the surface. An image is obtained by scanning the tip across the surface to create a topographical map in which individual atoms can be seen.

The scientists' colleagues at I.B.M. were skeptical of the project. As Dr. Rohrer recalled, "They all said, 'You are completely crazy — but if it works you'll get the Nobel Prize.'"

For inventing the STM, Rohrer didn't just get the Nobel Prize. He was also awarded the German Physics Prize, the Otto Klung Prize, the Hewlett Packard Europhysics Prize, the King Faisal Prize and the Cresson Medal. His invention also got him inducted into the U.S. National Inventors Hall of Fame. That's because the STM allows scientists to look at the arrangement of the atoms on a surface and move atoms around. Seeing this atomic level and being able to study and manipulate it allowed scientists to develop modern forms of nanotechnology.

Rohrer was born in Buchs, Switzerland, on June 6th, 1933, half an hour after his twin sister. Rohrer wasn't planning on going into physics, he writes in his autobiography:

My finding to physics was rather accidental. My natural bent was towards classical languages and natural sciences, and only when I had to register at the ETH (Swiss Federal Institute of Technology) in autumn 1951, did I decide in favor of physics.

May 10, 2013 1:49 pm

Have you ever noticed that almost every barn you have ever seen is red? There's a reason for that, and it has to do with the chemistry of dying stars. Seriously.

Yonatan Zunger is a Google employee who decided to explain this phenomenon on Google+ recently. The simple answer to why barns are painted red is because red paint is cheap. The cheapest paint there is, in fact. But the reason it's so cheap? Well, that's the interesting part.

Red ochre—Fe2O3—is a simple compound of iron and oxygen that absorbs yellow, green and blue light and appears red. It's what makes red paint red. It's really cheap because it's really plentiful. And it's really plentiful because of nuclear fusion in dying stars. Zunger explains:

The only thing holding the star up was the energy of the fusion reactions, so as power levels go down, the star starts to shrink. And as it shrinks, the pressure goes up, and the temperature goes up, until suddenly it hits a temperature where a new reaction can get started. These new reactions give it a big burst of energy, but start to form heavier elements still, and so the cycle gradually repeats, with the star reacting further and further up the periodic table, producing more and more heavy elements as it goes.
Until it hits 56. At that point, the reactions simply stop producing energy at all; the star shuts down and collapses without stopping.

As soon as the star hits the 56-nucleon (total number of protons and neutrons in the nucleus) cutoff, it falls apart. It doesn't make anything heavier than 56. What does this have to do with red paint? Because the star stops at 56, it winds up making a ton of things with 56 nucleons. It makes more 56-nucleon material than anything else (aside from the super-light stuff in the star that is too light to fuse). The element whose stable nucleus holds 56 protons and neutrons in total? Iron. The stuff that makes red paint. And that, Zunger explains, is how the death of a star determines what color barns are painted.

May 9, 2013 12:55 pm

You would think that we'd know how thunder and lightning work by now. But researchers still puzzle over what, exactly, causes those bright flashes of electrostatic discharge. Lightning electrifies the sky about 100 times per second in various locations around the world, yet the electric fields within thunderclouds seem to have only about a tenth of the strength required for producing a lightning bolt, LiveScience reports.

As it turns out, lightning may have extraterrestrial origins. This idea is not new: more than 20 years ago, physicist Alex Gurevich at the Russian Academy of Sciences in Moscow suggested lightning might be initiated by cosmic rays from outer space. These particles strike Earth with gargantuan amounts of energy, surpassing anything the most powerful atom smashers on the planet are capable of. Cosmic rays slamming into air molecules may split those molecules into many electrons, which collide in turn with additional molecules, snowballing into more and more electrons zipping around. Gurevich called this "a runaway breakdown," LiveScience writes.

In a new paper, Gurevich and colleagues analyzed radio pulses from around 3,800 lightning strikes. They hypothesize that thunderclouds' highly electrically charged water droplets and ice nuggets allow even the least energetic cosmic rays to spark a bolt of lightning on contact with such a cloud. Researchers know that cosmic rays hit the planet about as frequently as lightning strikes, LiveScience writes, so the theory at least makes sense. Unfortunately, Gurevich and a number of other scientific groups are still in the process of taking simultaneous measurements of cosmic rays' energetic particles and the radio pulses lightning produces, which should help determine whether or not the two phenomena are indeed linked. At least for now, Gurevich's idea—long ignored by science—is being given the attention needed to prove once and for all whether lightning does have extraterrestrial origins.

May 7, 2013 1:54 pm

A star being ripped to shreds in a violent supernova is one of the most powerful explosions in the universe. The largest supernovae can produce gamma-ray bursts: tightly concentrated lances of light that stream out into space. Gamma-ray bursts, says NASA, "are the most luminous and mysterious explosions in the universe." The blasts emit surges of gamma rays — the most powerful form of light — as well as X-rays, and they produce afterglows that can be observed at optical and radio energies. Two weeks ago, says NASA, astronomers saw the longest and brightest gamma-ray burst ever detected.
It was the biggest shot of energy we've ever seen, streaming from the universe's most powerful class of explosions. NASA:

"We have waited a long time for a gamma-ray burst this shockingly, eye-wateringly bright," said Julie McEnery, project scientist for the Fermi Gamma-ray Space Telescope at NASA's Goddard Space Flight Center in Greenbelt, Md.

"The event, labeled GRB 130427A, was the most energetic gamma-ray burst yet seen, and also had the longest duration," says Matthew Francis for Ars Technica. "The output from GRB 130427A was visible in gamma ray light for nearly half a day, while typical GRBs fade within a matter of minutes or hours."

There are a few different classes of gamma-ray bursts. Astrophysicists think that some—short gamma-ray bursts—form when two neutron stars merge and emit a pulse of energy. Huge ones like the one just detected are known as long gamma-ray bursts, and they form when huge stars collapse, often leading to the formation of a black hole. Gamma-ray bursts focus their energy in a tightly concentrated spire of energy. A few years ago, says Wired, researchers calculated what would happen if a gamma-ray burst went off nearby and was pointed at the Earth:

Steve Thorsett of Princeton University has calculated the consequences if such a merger were to take place within 3,500 light-years of Earth, with its energy aimed at the solar system. The blast would bathe Earth in the equivalent of 300,000 megatons of TNT, 30 times the world's nuclear weaponry, with the gamma-ray and X-ray radiation stripping Earth of its ozone layer. While scientists cannot yet predict with any precision which nearby stars will go supernova, the merger of neutron star binaries is as predictable as any solar eclipse. Three such binary systems have been discovered, and one, PSR B1534+12, presently sits about 3,500 light-years away and will coalesce in a billion years.

May 1, 2013 10:59 am

In November 1989, Don Eigler proved that man had truly mastered the atom: not by way of a devastating explosion or constrained reaction, but with art. The physicist, working for IBM, spelled out the company's name using 35 individual atoms of the element xenon using a scanning tunneling microscope. Now, scientists use scanning tunneling microscopes "for more than just imaging surfaces. Physicists and chemists are able to use the probe to move molecules, and even individual atoms, around in a controlled way," says physicist Jim Al-Khalili in a 2004 book. Twenty-four years ago, Don Eigler was the first person to do so, an achievement that helped to open the door on the then-nascent field of nanotechnology. Now IBM is back, and with twenty-four more years of playing with these techniques, scientists have moved from precisely positioning individual atoms to making them dance. In a new short stop-motion film, A Boy and His Atom, scientists manipulated thousands of individual atoms to make the "world's smallest movie." The movie exists on a plane 100,000,000 times smaller than the world as we know and experience it. The boy and his ball are made from molecules of carbon monoxide, and yet give an image reminiscent of the video games of the early 1980s.
"Though the technology that the team discusses isn't new," says the Verge, "they were able to use it in a new way: the black-and-white images and playful music form a strong artistic style that's reminiscent of early film, but at an entirely different scale." For more information about how the movie was made, IBM has released a behind-the-scenes video to accompany their animation.
<urn:uuid:df3d298d-b9c2-4630-8bb9-040d37c6c2cc>
3.59375
2,412
Content Listing
Science & Tech.
53.508783
Variables must be declared before they are used. Unique names are used to identify variables. A declaration defines a variable's name and its type; a declaration is not an operator.

The basic types are:
- int - integers;
- bool - boolean values of true and false;
- string - character strings;
- double - double-precision numbers with floating point.

Additional types:
- color is an integer representing an RGB color;
- datetime is date and time, an unsigned integer containing the number of seconds that have passed since 0.00 a.m. on 1 January 1970.

The additional data types make sense only in the declaration of input parameters, for their more convenient representation in the properties window.

datetime tBegin_Data = D'2004.01.01 00:00';
color cModify_Color = C'0x44,0xB9,0xE6';

An array is an indexed sequence of identical-type data.

int a[50];       // A one-dimensional array of 50 integers.
double m[7][50]; // Two-dimensional array of seven arrays,
                 // each of them consisting of 50 integers.

Only an integer can be an array index. No more than four-dimensional arrays are allowed. Numbering of array elements starts with 0. The last element of a one-dimensional array has the number which is 1 less than the array size. This means that a call for the last element of an array consisting of 50 integers will appear as a[49]. The same concerns multidimensional arrays: a dimension is indexed from 0 to the dimension size - 1. The last element of the two-dimensional array from the example will appear as m[6][49].

If there is an attempt to access outside the array range, the executing subsystem will generate an error named ERR_ARRAY_INDEX_OUT_OF_RANGE, which can be retrieved using the GetLastError() function.
<urn:uuid:2d95f7e8-9080-40b6-a0b0-2545328237ce>
3.375
403
Documentation
Software Dev.
48.750652
Chennubhotla, V S Krishnamurthy and Kaliaperumal, N and Kalimuthu, S (1987) Economically important seaweeds. CMFRI Bulletin, 41. pp. 3-18.

The plants in the sea other than seagrasses—what we call seaweeds—belong to the simplest group of plants: the marine algae. With few exceptions, these plants are so simple that they have no distinguishable roots, stems or leaves. The algae vary in size from microscopic single-celled forms (e.g. diatoms) to the giant macrophytes of temperate waters (Macrocystis, Nereocystis, etc.).

Uncontrolled Keywords: Economically important seaweeds
Subjects: Algae > Seaweed
Divisions: CMFRI-Cochin > Mariculture
Deposited By: Dr. V Mohan
Deposited On: 24 Aug 2010 11:23
Last Modified: 24 Aug 2010 11:23
<urn:uuid:cf635424-e352-4da8-81f6-a23b5e74ad94>
2.75
243
Academic Writing
Science & Tech.
47.986544
Wednesday, February 22, 2012 - 06:00 in Earth & Climate A year on, modellers continue to provide daily forecasts of the likely spread of floating debris washed out into the Pacific by the Japanese Tohoku megatsunami. - Where will the debris from Japan's tsunami drift in the ocean?Wed, 6 Apr 2011, 10:09:18 EDT - Floating dock from Japan carries potential invasive speciesThu, 7 Jun 2012, 21:33:29 EDT - For disaster debris arriving from Japan, radiation least of the concernsWed, 22 Feb 2012, 16:34:38 EST - Scientists find that debris on certain Himalayan glaciers may prevent meltingMon, 24 Jan 2011, 16:09:24 EST - Raft or bridge: How did iguanas reach tiny Pacific islands?Mon, 11 Jan 2010, 15:50:23 EST
<urn:uuid:7ab44815-da0f-4959-9c3d-631003d71abf>
3.34375
180
Content Listing
Science & Tech.
50.262312
GIS analysis, biological samples (soil microorganisms, invertebrates and plants), automatic weather station data, and vegetation and invertebrate surveys to determine the terrestrial biocomplexity of the McMurdo Dry Valleys

The McMurdo Dry Valleys are the largest area of snow-free land in Antarctica. Managers' ability to promote and protect these areas would benefit if we knew the biodiversity present and what controls its distribution. The research therefore focused on describing and predicting the biodiversity of terrestrial habitats in the Ross Dependency, Antarctica. The aim is to produce a GIS/biodiversity database ... that links biodiversity with environmental factors, such as geology and soil moisture content, to produce a model that is easily understood and usable by non-specialists and end users.

Samples of soil, invertebrates and mosses were collected from the Miers, Marshall, and Garwood Valleys for geochemistry and biological analysis. Over 450 sampling sites were visited, although roughly 15 were inaccessible due to terrain or snow cover. A total of 435 vegetation and invertebrate surveys were made and over 450 soil samples collected. At each location, up-to-date molecular techniques were used to describe the biota, from visible lichens, mosses and invertebrates to the hidden microbes. The soil samples were subsampled for analyses of soil geochemistry, soil respiration, microinvertebrate content (e.g. nematodes, rotifers, tardigrades), and microbiological assays. Samples were collected and split in the field using aseptic techniques for DNA analysis. New genomic approaches that examine microbial communities as a whole (i.e., metagenomics) or even their entire functional aspects (i.e., metatranscriptomics) were used to provide a comprehensive picture of systematic and functional biodiversity, which will help resolve the drivers of biodiversity in the environment.

The samples are part of a major landscape-scale study to determine the primary drivers of biodiversity and the distribution of flora and fauna in the Dry Valleys. In addition, the soil organic matter (SOM) and other nutrient status, including the form of subsidy, were determined; this information will be placed in the database together with site-specific variables such as aspect, slope, water, snow and stability. The use of GIS is central to the success of this project, and there has been considerable success in collating, analysing and preparing information for the GIS analysis.

Two automatic weather stations were installed, together with various trap systems to measure transfer of material within the Miers and Garwood Valleys, in the 2007-2008 field season; in the 2008-2009 field season another was installed in the Marshall Valley. In 2010-2011 the Hidden Valley research area was divided into individual tiles based on geographical and geological attributes, using both remote sensing data and on-the-ground surveys. Teams of specialists then visited sampling sites identified within representative tiles to collect soil samples and conduct surveys of local flora and fauna. Soil collection (for molecular genetic analyses) and surveys of micro- and macro-invertebrates were carried out. A total of 160 soil samples were collected for speciation and molecular analyses. The collected soils are analysed in our laboratories for their geochemical properties and resident microbiota using molecular genetic techniques. A survey of lichens, mosses, hypolithic, and endolithic communities was carried out in the Miers and Hidden Valleys.
A total of 63 lichen samples and 10 hypolith samples were collected for speciation and molecular analyses. We also deployed instruments in the Wright Valley to facilitate our fieldwork in the 11/12 field season. In 2012 we deployed for the first time a SODAR system (Peyman Zawar-Reza) in the Miers Valley, in an effort to begin to understand the influence of wind and wind patterns on the distribution of organisms in the valley. SODAR (Sonic Detection And Ranging) is a meteorological instrument used as a wind profiler to measure the scattering of sound waves by atmospheric turbulence. SODAR systems are used to measure wind speed at various heights above the ground, and the thermodynamic structure of the lower layer of the atmosphere. This unit has already provided compelling information for our model. We plan to deploy the same unit in the Wright Valley next year for more complete coverage and modelling. Further sampling was carried out in the Wright Valley in January 2012. A total of 59 soil samples were collected for speciation and molecular analyses. A survey of hypolithic and endolithic communities was carried out in the Miers, McKelvey, and Victoria Valleys in January 2012. A total of 64 hypolith and 13 endolith samples were collected for speciation and molecular analyses. In addition, 36 moss samples and 1 lichen sample were collected in the Miers Valley. Instruments were deployed in the Victoria and McKelvey Valleys to facilitate our fieldwork in the 12/13 field season.
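The project's stated aim is a model linking biodiversity to environmental covariates. A minimal sketch of that kind of analysis, not the project's actual pipeline: the covariate names, the fabricated data, and the choice of a random forest are all illustrative assumptions.

```python
# Minimal sketch (not the project's actual pipeline): relating a toy
# biodiversity measure to environmental covariates such as soil moisture,
# slope, and aspect. All data below are fabricated for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 450  # roughly the number of sampling sites mentioned above

X = np.column_stack([
    rng.uniform(0, 20, n),    # hypothetical soil moisture (%)
    rng.uniform(0, 30, n),    # hypothetical slope (degrees)
    rng.uniform(0, 360, n),   # hypothetical aspect (degrees)
])

# Fabricated "species richness" that increases with soil moisture
y = 2.0 * X[:, 0] + rng.normal(0, 3, n)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(dict(zip(["moisture", "slope", "aspect"], model.feature_importances_)))
```

On the fabricated data the fitted importances single out moisture, which is the shape of output such a GIS-linked model would hand to non-specialist end users.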
<urn:uuid:6d36840d-8926-428e-af9a-303e43031d4c>
2.96875
1,231
Academic Writing
Science & Tech.
39.344531
The Problems for the IYPT 2012 as published on IYPT.org:

1. A sequence of identical steel balls includes a strong magnet and lies in a nonmagnetic channel. Another steel ball is rolled towards them and collides with the end ball. The ball at the opposite end of the sequence is ejected at a surprisingly high velocity. Optimize the magnet's position for the greatest effect.
2. When a piece of thread (e.g., nylon) is whirled around with a small mass attached to its free end, a distinct noise is emitted. Study the origin of this noise and the relevant parameters.
3. A long string of beads is released from a beaker by pulling a sufficiently long part of the chain over the edge of the beaker. Due to gravity the speed of the string increases. At a certain moment the string no longer touches the edge of the beaker (see picture). Investigate and explain the phenomenon.
4. If a high voltage is applied to a fluid (e.g. deionized water) in two beakers, which are in contact, a fluid bridge may be formed. Investigate the phenomenon. (High voltages must only be used under appropriate supervision – check local rules.)
5. Illuminate a water tank. When there are waves on the water surface, you can see bright and dark patterns on the bottom of the tank. Study the relation between the waves and the pattern.
6. A woodpecker toy (see picture) exhibits an oscillatory motion. Investigate and explain the motion of the toy.
7. A drawing pin (thumbtack) floating on the surface of water near another floating object is subject to an attractive force. Investigate and explain the phenomenon. Is it possible to achieve a repulsive force by a similar mechanism?
8. Is it possible to float on water when there are a large number of bubbles present? Study how the buoyancy of an object depends on the presence of bubbles.
9. Place a coin vertically on a magnet. Incline the coin relative to the magnet and then release it. The coin may fall down onto the magnet or revert to its vertical position. Study and explain the coin's motion.
10. Fill a bottle with some liquid. Lay it down on a horizontal surface and give it a push. The bottle may first move forward and then oscillate before it comes to rest. Investigate the bottle's motion.
11. Fill a thin gap between two large transparent horizontal parallel plates with a liquid and make a little hole in the centre of one of the plates. Investigate the flow in such a cell, if a different liquid is injected through the hole.
12. Paper lanterns float using a candle. Design and make a lantern powered by a single tea-light that takes the shortest time (from lighting the candle) to float up a vertical height of 2.5 m. Investigate the influence of the relevant parameters. (Please take care not to create a risk of fire!)
13. Breathe on a cold glass surface so that water vapour condenses on it. Look at a white lamp through the misted glass and you will see coloured rings appear outside a central fuzzy white spot. Explain the phenomenon.
14. If a steel ball is dropped onto a bed of dry sand, a "splash" will be observed that may be followed by the ejection of a vertical column of sand. Reproduce and explain this phenomenon.
15. It often happens that a golf ball escapes from the hole an instant after it has been putted into it. Explain this phenomenon and investigate the conditions under which it can be observed.
16. A vertical tube is filled with a viscous fluid. On the bottom of the tube, there is a large air bubble. Study the bubble rising from the bottom to the surface.
17. A small, light ball is placed inside soap foam. The size of the ball should be comparable to the size of the foam bubbles. Investigate the ball's motion as a function of the relevant parameters.

Picture “Woodpecker toy” © Mike Willshaw – http://www.flickr.com/photos/freakdog/308938937/
Picture “String of beads” © Hans Jordens and Leonid Markovich
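Problem 3 invites a quantitative first pass. A minimal numerical sketch under one common idealization, that only the moving part of the chain (length x, speed v) carries momentum, so momentum balance gives d(xv)/dt = xg; the time step, starting length, and the model itself are assumptions, not part of the official problem statement.

```python
# Rough sketch of the "string of beads" problem under the idealization
# d(x*v)/dt = x*g, i.e. a = g - v^2/x (linear mass density divides out).
# Starting length and step size are arbitrary illustrative choices.
g, dt = 9.81, 1e-5
x, v = 0.01, 0.0                     # assume 1 cm of chain already over the edge
for _ in range(int(1.0 / dt)):       # integrate for 1 second
    a = g - v * v / x                # from d(x*v)/dt = x*g
    v += a * dt
    x += v * dt
print(f"acceleration after 1 s: {g - v*v/x:.3f} m/s^2  (model limit is g/3)")
```

In this idealization the acceleration settles at g/3, which the integration reproduces; the lift-off from the beaker's edge that the problem actually asks about needs a more detailed model.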
<urn:uuid:001dd622-ed0e-4f5f-a30c-290f6e5363b1>
3.4375
868
Content Listing
Science & Tech.
58.586814
NASA has released a stunning simulation video that shows how a single disk galaxy such as the Milky Way develops over a span of 13.5 billion years, beginning with the Big Bang and leading up to today. A disk galaxy, described as a flattened circular volume of stars, is color-coded in this captivating clip to show its history and development. For example, older stars are highlighted in red, while younger stars are shown in white and bright blue; the pale blue color represents the distribution of gas density. To put the size of this in perspective, NASA reveals the view is about 300,000 light-years across (a light-year is the distance light travels in a year). Although this video may just be a computer-generated model of the real thing, it's worth all 2:17 minutes of your time and reminds you just how small you are in this giant universe.
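For a sense of that scale, the unit conversion is quick to check; the 300,000 figure is from the text above and the constants are standard values.

```python
# Back-of-envelope scale check for the 300,000 light-year field of view.
c = 299_792_458            # speed of light, m/s
year = 365.25 * 24 * 3600  # seconds in a Julian year
ly = c * year              # one light-year in metres (~9.46e15 m)
print(f"1 light-year = {ly:.3e} m")
print(f"view width   = {300_000 * ly:.3e} m")
```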
<urn:uuid:1b22715b-aedf-4b03-90d1-12b8e47f520d>
4
186
Truncated
Science & Tech.
56.735
Implements a data structure for describing a property as a path below another property, or below an owning type. Property paths are used in data binding to objects, and in storyboards and timelines for animations.
Assembly: PresentationFramework (in PresentationFramework.dll)
XMLNS for XAML: http://schemas.microsoft.com/winfx/2006/xaml/presentation, http://schemas.microsoft.com/netfx/2007/xaml/presentation
PropertyPath supports two modes of behavior:
Source mode describes a path to a property that is used as a source for some other operation. This mode is used by the Binding class to support data binding.
Target mode describes a path to a property that will be set as a target property. This mode is used by animation in support of storyboard and timeline setters.
For instance, Background.Opacity is a two-step path. This path implies: first, find the Background property of an object and get the object that property is set to; then get the value of the Opacity property on that object.
Platforms: Windows 7, Windows Vista, Windows XP SP2, Windows Server 2008 R2, Windows Server 2008, Windows Server 2003
The .NET Framework and .NET Compact Framework do not support all versions of every platform. For a list of the supported versions, see .NET Framework System Requirements.
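As a language-neutral illustration of the path concept only (this is Python, not the .NET API): resolving a source-mode path is a left-to-right walk over the dotted steps, each step looking up one property on the value produced by the previous step. The class and property names below are invented to mirror the Background.Opacity example.

```python
# Conceptual sketch only -- not the .NET PropertyPath implementation.
from functools import reduce

def resolve_path(obj, path: str):
    """Walk a dotted property path: each step reads one attribute
    from the object produced by the previous step."""
    return reduce(getattr, path.split("."), obj)

class Brush:                     # invented stand-in for a WPF brush
    def __init__(self, opacity):
        self.Opacity = opacity

class Control:                   # invented stand-in for a WPF element
    def __init__(self):
        self.Background = Brush(opacity=0.5)

print(resolve_path(Control(), "Background.Opacity"))  # 0.5
```

Target mode differs only in the last step: instead of reading the final property, the resolved location is written to, which is what storyboard setters do.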
<urn:uuid:23af114c-a8c0-4df0-91d2-9684ffcf5e80>
3.21875
297
Documentation
Software Dev.
54.213722
Solar Alignment of an Earth Orbiting Satellite
Name: Lewis S.
Can a rotating object in earth orbit maintain axial alignment with the sun? For example: a rotating parabolic photonic collector, a space station, a solar observing satellite.

Sort of, and for a while. The forces on an object in orbit are normally pretty weak, but there are forces. Any drag created by whatever atmosphere is left at the altitude of the object could cause the object to rotate if its cross section is nonuniform. If the object is electrically conducting, there may be some current induced by motion through the Earth's magnetic field, and this current will produce a weak magnetic field that will tend to reorient the object. But the biggest force is probably tidal. Imagine that the object is made of sand grains glued together, and then imagine that the glue disappears. Each grain of sand is now actually in its own orbit: those closer to Earth are in slightly tighter, faster orbits than the orbit of the object's center of mass, so they will not stay in place forever. The force that would have caused this motion is still in play, even though the object is *not* made of unglued sand. The fact that the object is rotating makes the situation more complicated, but does not remove the forces tending to reorient the object.

Yes; there is no first-order effect trying to move the axis of rotation, so if it is pointed at the sun it will not take much maintenance energy to keep it pointed at the sun. But there are always drifts and weak higher-order effects, and the Earth's annual orbit around the sun changes the apparent direction of the sun by about 1 degree per day. So it will take some feedback mechanism to determine the right direction and occasionally provide modest amounts of torque in that direction. Of course, this means that the satellite does not get to keep any one face pointed steadily at the earth below. This can give rise to satellites that are part-rotating, part-stationary. Some of our tiny pico-satellites try to stay in a "sun-synchronous orbit", always over the day/night line on the Earth below them. That way they are always in steady sunlight, and have an easier time maintaining a steady temperature. But I think orbit corrections would be required to stay that way for very long. The higher the orbit, the more margin you have for staying in sunlight.
Update: June 2012
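To put rough numbers on the second answer's "1 degree per day": tracking the sun means precessing the spin axis at Earth's orbital rate, and for a spinner the steady torque needed is the spin angular momentum times that rate. A back-of-envelope sketch; the moment of inertia and spin rate below are invented, not taken from the answers.

```python
# Rough numbers behind the "1 degree per day" remark. The satellite
# parameters (I, spin rate) are hypothetical, chosen only for illustration.
import math

omega_track = 2 * math.pi / (365.25 * 86400)   # rad/s, Earth's orbital rate
print(f"required slew rate: {math.degrees(omega_track) * 86400:.3f} deg/day")

I, spin = 50.0, 10 * 2 * math.pi / 60          # kg*m^2, and 10 rpm in rad/s
L = I * spin                                   # stored spin angular momentum
tau = L * omega_track                          # torque to precess L at omega_track
print(f"steady torque needed: {tau:.2e} N*m")  # tiny, but never zero
```

The result (about 1e-5 N·m for these made-up numbers) is minute, which matches the answer's point: alignment is cheap to maintain, but some feedback torque is always required.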
<urn:uuid:8c005fe4-12d7-4ee4-81f3-633bc50a06b5>
3.421875
551
Knowledge Article
Science & Tech.
46.445937
That is, why is the potential energy lower with the orbitals overlapping than with the hydrogen atoms 'independent'? Similarly, why is a noble gas configuration more stable than the configuration left after an electron is removed or added? Is this because pairs of electrons are more stable than single electrons? If so, why?

Pairs of electrons are more stable if you see the pair as a filled orbital (which can hold a maximum of two electrons of opposite spin, via the Pauli exclusion principle). I think the simplest way of explaining the lower energy of a two-nucleus orbital is that the size of the electron's 'playground' doubles, and as the region over which the electron may soar increases, its kinetic energy decreases.
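The "bigger playground" argument can be made quantitative with the particle-in-a-box formula E_1 = h^2 / (8 m L^2): doubling the box roughly quarters the ground-state kinetic energy. A minimal sketch; the box model is only a caricature of a real H2 bond, and the 1-angstrom box length is an assumed atomic-scale value, used only to show the trend.

```python
# Particle-in-a-box trend behind the kinetic-energy argument: E_1 ~ 1/L^2,
# so an electron spread over two atoms (L -> 2L) has ~4x less kinetic energy.
# This is a toy model, not a calculation for actual molecular hydrogen.
h = 6.62607015e-34      # Planck constant, J*s
m = 9.1093837015e-31    # electron mass, kg
eV = 1.602176634e-19    # joules per electronvolt

def E1(L):              # ground-state energy E_1 = h^2 / (8 m L^2)
    return h**2 / (8 * m * L**2)

L = 1e-10               # assumed ~1 angstrom "box" for a single atom
print(f"E1(L)  = {E1(L)/eV:.1f} eV")
print(f"E1(2L) = {E1(2*L)/eV:.1f} eV  (a factor of 4 smaller)")
```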
<urn:uuid:5ac92620-c463-4f83-bb51-c237e206a7c8>
2.96875
149
Q&A Forum
Science & Tech.
45.715305