Dataset columns:
text: large_string, lengths 148 to 17k
id: large_string, length 47
score: float64, range 2.69 to 5.31
tokens: int64, range 36 to 7.79k
format: large_string, 13 classes
topic: large_string, 2 classes
fr_ease: float64, range 20 to 157
gets is dangerous

This tip was submitted by Carl Johnson on 2006-07-21. If we use this code:

char string[ 100 ];
printf("ENTER SENTENCE: ");
gets(string);

we can introduce a bug or security vulnerability into our code! The problem is that gets allows someone to enter more text than the buffer can hold and thereby overflow it. There is no way for gets to know how big the string is supposed to be, so it will just read data until the user hits enter, even if it's way more than 100 characters. You can use fgets instead, which takes a maximum size along with the buffer, ensuring there is no buffer overflow.
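A minimal sketch of the fgets replacement described in the tip (our code, reusing the buffer size and prompt from the snippet above; the newline-stripping step is our addition):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char string[100];

    printf("ENTER SENTENCE: ");
    /* fgets reads at most sizeof string - 1 characters and always
       NUL-terminates, so the buffer cannot overflow */
    if (fgets(string, sizeof string, stdin) != NULL) {
        /* fgets keeps the trailing newline; remove it if present */
        string[strcspn(string, "\n")] = '\0';
        printf("You entered: %s\n", string);
    }
    return 0;
}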
<urn:uuid:3b82dc65-cb62-4083-952d-7d9bb405a30f>
3.21875
205
Tutorial
Software Dev.
69.545551
Caitlin Stier, video intern

A clip of giant crabs traipsing across the Antarctic sea floor takes second place in our countdown. These monstrous crabs are swarming an Antarctic abyss, endangering an ecosystem crafted by 14 million years of evolution. Millions of the crustaceans, with bodies over a metre wide, have infiltrated a hollow within the continental shelf after being displaced due to warming ocean waters. In this clip, you can watch them pillage the seabed's delicate sediments. In 2008, researchers predicted that the crabs would invade the Antarctic within 100 years due to climate change. This submarine footage shows that the siege may already be underway. For more about the invasion, read the original article: Giant red crabs invade the Antarctic abyss. If you enjoyed this video, watch a snail devour a crab coated in mucus.
<urn:uuid:f91dd32b-e8c4-4327-8823-f5f52e3364da>
3.5
174
Listicle
Science & Tech.
45.571373
See 2 video demonstrations of diffusion in action. The first video shows a drop of food coloring diffusing through water; it is time-lapsed at 1 frame per 15 seconds. The second video shows a Ziploc bag of iodine placed into a beaker of starch water; it is time-lapsed at 1 frame per 30 seconds. The iodine diffuses out of the bag and into the starch water. Where the iodine is diffusing, the starch water turns a dark blue. Signs of a chemical reaction include color change, gas production, temperature change, precipitation, and other changes in properties, including density, taste, texture, smell, melting point, boiling point, etc. This demonstration will really make them think about it. Place a large plastic container on the counter. Invert a large jar or beaker into the container. Set a 3/4-full liter of fresh cola on top. Carefully but quickly pour a cupful of sugar into the soda and stand back. The cola will burst out of the bottle. Most students will assume it is gas production, since they can see all of the foam. Have the students taste the leftover soda. It is very sweet and very flat, indicating the sugar is still there and the carbon dioxide left. It is a physical change and not a chemical change. The sugar pushes out all of the dissolved carbon dioxide gas.
<urn:uuid:e41a202a-b926-4bc4-8bf0-d86aa728bba6>
3.765625
294
Truncated
Science & Tech.
63.236447
Everybody's favorite adorably-monikered, microscopic invertebrate continues to prove that it's also one tough little "bear". Water bears*—long recognized as the hardiest animals on Earth—can also, apparently, survive in the vacuum of space, according to a European Space Agency experiment published in the journal Current Biology. But, before offering the inevitable welcome speeches to our water bear overlords, it's worth noting a couple of caveats. First, these water bears weren't just hanging out in open space, wriggling around. Instead, they were in a dehydrated state—a sort of mega-hibernation that allows water bears to go without water, and appear dead, for years, before being revived. In the video above, you can see a water bear drying out into a little nub, called a tun. But he revives after water floods the petri dish. It was tuns that went to space, not active water bears. Second, the creatures didn't hold up nearly as well against the Sun as they did against space itself. New Scientist explains: Ultraviolet radiation, which can damage cellular material and DNA, did take its toll. In one of the two species tested, 68% of specimens that were shielded from higher-energy radiation from the Sun were revived within 30 minutes of being rehydrated. Many of these tardigrades went on to lay eggs that successfully hatched. But only a handful of animals survived full exposure to the Sun's UV light, which is more than 1000 times stronger in space than on the Earth's surface. Dried out, the bears can survive a cold vacuum just fine. But only a particularly feisty few made it past the UV exposure. Both pieces of information could prove useful, in the coming Water Bear Imperium. *Also known by the equally darling nickname "Moss Piglets", and by the technically correct, but boring, title of "Tardigrades". Maggie Koerth-Baker is the science editor at BoingBoing.net. She writes a monthly column for The New York Times Magazine and is the author of Before the Lights Go Out, a book about electricity, infrastructure, and the future of energy. You can find Maggie on Twitter and Facebook.
<urn:uuid:e5b04439-23de-45ce-a3eb-ec1485cda861>
3.1875
470
Personal Blog
Science & Tech.
46.32608
Photolysis

What happens when light is absorbed by a molecule? Let's look at the absorption of a photon of light by a hydrogen molecule. If the photon of light contains just enough energy to promote an electron from the lower energy level to the upper one, the bond order decreases from 1 to 0. The bond is broken and two hydrogen atoms are formed. If the photon is more energetic, the hydrogen atoms will have excess kinetic energy. Light breaks chemical bonds when the energy of the photon is greater than the bond dissociation energy. What is the wavelength required to break the H-H bond? What about molecular oxygen? The O=O bond is stronger than the H-H bond. By the same analysis, it would require light of 240 nm to break it. However, this doesn't usually happen in the troposphere. The 240 nm light is very energetic. Very little light in this frequency range reaches the troposphere because most of it is absorbed in the stratosphere. What frequency of light would be needed to cleave the N2 molecule? The bond dissociation energy is 941 kJ/mol. Is this photolytic reaction likely to occur in the troposphere? (See the worked numbers at the end of this piece.)

Bond Dissociation Energy

The energy required to break a particular bond in a specified molecule is the bond dissociation energy. There is a table of average bond energies below. Be careful! Bond dissociation energy and bond energy are not the same. It takes 493 kJ/mol of bond dissociation energy to break the first O-H bond in water and 424 kJ/mol to cleave the remaining O-H bond. The average bond energy of the O-H bonds in water is 459 kJ/mol. The O-H bond energy will vary a little from molecule to molecule. The average is about 464 kJ/mol. Methane has 4 C-H bonds and the bond dissociation energies are 435 kJ/mol for D(CH3-H), 444 kJ/mol for D(CH2-H), 444 kJ/mol for D(CH-H) and 339 kJ/mol for D(C-H). The average bond energy is 414 kJ/mol. The C-H bond energy changes depending on the structure of the molecule. The average for many molecules is shown in the table.

Photolysis of Molecules in the Troposphere

There are molecules in the troposphere that are easier to cleave than H2, O2, or N2. For example, ozone has weaker O-O bonds and can be cleaved by light of 330 nm. Reaction of O with water, abundant in the troposphere, forms 2 hydroxyl radicals that are more stable than oxygen atoms. Molecules with O-O single bonds, such as hydrogen peroxide, are even easier to cleave. Here are some other common photolysis reactions of the troposphere. Note that the aldehyde C-H bond is much weaker (~368 kJ/mol) than other C-H bonds in organic molecules.

Hydroxyl Radicals

Hydroxyl radicals are the most important oxidizing agents in the troposphere and are produced in a small, steady-state concentration. Radicals are atoms or molecules with unpaired electrons. They are always electron deficient and highly reactive. In the Lewis structure of the OH radical, for example, there are only 7 electrons around the electronegative oxygen atom. Radicals can react with other molecules in several ways. The reactions lead to more stable radicals or to non-radical molecules.
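A worked answer to the wavelength questions posed above (our arithmetic, taking the commonly tabulated H-H bond dissociation energy of 436 kJ/mol): a photon can break a bond when its wavelength satisfies

\lambda \le \frac{h c N_A}{E_\text{bond}}, \qquad h c N_A \approx 1.196 \times 10^{5}\ \text{kJ nm/mol}.

For H-H this gives \lambda \approx 1.196 \times 10^5 / 436 \approx 274 nm; for N2 (941 kJ/mol) it gives \lambda \approx 127 nm. Light of 127 nm is absorbed far above the troposphere (even the 240 nm needed for O2 barely reaches it), so tropospheric photolysis of N2 is not expected.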
<urn:uuid:1359e402-0fb7-4b17-8d44-46ca7ebfb4d3>
3.28125
749
Knowledge Article
Science & Tech.
55.988837
A eutectic system is a mixture of chemical compounds or elements that has a single chemical composition that solidifies at a lower temperature than any other composition made up of the same ingredients. This composition is known as the eutectic composition and the temperature at which it solidifies is known as the eutectic temperature. On a phase diagram the intersection of the eutectic temperature and the eutectic composition gives the eutectic point. Non-eutectic mixtures will display solidification of one component of the mixture before the other. Not all binary alloys have a eutectic point; for example, in the silver-gold system the melt temperature (liquidus) and freeze temperature (solidus) both increase monotonically as the mix changes from pure silver to pure gold. The eutectic reaction is defined as follows:

\text{Liquid} \;\xrightarrow{\text{cooling}}\; \alpha \text{ solid solution} + \beta \text{ solid solution}

This type of reaction is an invariant reaction, because it is in thermal equilibrium; another way to define this is that the change in Gibbs free energy equals zero. Tangibly, this means the liquid and two solid solutions all coexist at the same time and are in chemical equilibrium. There is also a thermal arrest for the duration of the change of phase during which the temperature of the system does not change. The resulting solid macrostructure from a eutectic reaction depends on a few factors. The most important factor is how the two solid solutions nucleate and grow. The most common structure is a lamellar structure, but other possible structures include rodlike, globular, and acicular. Compositions of eutectic systems that are not the eutectic composition are commonly defined to be hypoeutectic or hypereutectic. Hypoeutectic compositions are compositions to the left of the eutectic composition and hypereutectic compositions are compositions to the right. As the temperature of a non-eutectic composition is lowered the liquid mixture will precipitate one component of the mixture before the other. Eutectic alloys have two or more materials and have a eutectic composition. When a non-eutectic alloy solidifies, its components solidify at different temperatures, exhibiting a plastic melting range. A eutectic alloy solidifies at a single, sharp temperature. Conversely, when a well mixed, eutectic alloy melts it does so at a single temperature. The various phase transformations that occur during the solidification of a particular alloy composition can be understood by drawing a vertical line from the liquid phase to the solid phase on the phase diagram for that alloy. Some uses include:
- eutectic alloys for soldering, composed of tin (Sn), lead (Pb) and sometimes silver (Ag) or gold (Au)
- casting alloys, such as aluminium-silicon and cast iron (at the composition of 4.3% carbon in iron producing an austenite-cementite eutectic)
- silicon chips are bonded to gold-plated substrates through a silicon-gold eutectic by the application of ultrasonic energy to the chip. See eutectic bonding.
- brazing, where diffusion can remove alloying elements from the joint, so that eutectic melting is only possible early in the brazing process
- temperature response, e.g. Wood's metal and Field's metal for fire sprinklers
- non-toxic mercury replacements, such as galinstan
- experimental glassy metals, with extremely high strength and corrosion resistance
- eutectic alloys of sodium and potassium (NaK) that are liquid at room temperature and used as coolant in experimental fast neutron nuclear reactors.
Sodium chloride and water form a eutectic mixture. It has a eutectic point of −21.2 °C and 23.3% salt by mass.
The eutectic nature of salt and water is exploited when salt is spread on roads to aid snow removal, or mixed with ice to produce low temperatures (for example, in traditional ice cream making). 'Solar salt', 60% NaNO3 and 40% KNO3, forms a eutectic molten salt mixture which is used for thermal energy storage in concentrated solar power plants. To reduce the eutectic melting point in the solar molten salts, calcium nitrate is used in this proportion: 42% Ca(NO3)2, 43% KNO3 and 15% NaNO3. Menthol and camphor, both solids at room temperature, form a eutectic that is a liquid at room temperature in proportions 8:2, 7:3, 6:4 and 5:5. Both substances are common ingredients in pharmacy extemporaneous preparations.

Other critical points

When the solution above the transformation point is solid, rather than liquid, an analogous eutectoid transformation can occur. For instance, in the iron-carbon system, the austenite phase can undergo a eutectoid transformation to produce ferrite and cementite, often in lamellar structures such as pearlite and bainite. This eutectoid point occurs at 727 °C (1,341 °F) and about 0.76% carbon. A peritectoid transformation is a type of isothermal reversible reaction in which two solid phases react with each other upon cooling of a binary, ternary, ..., alloy to create a completely different and single solid phase. The reaction plays a key role in the order and decomposition of quasicrystalline phases in several alloy types. Peritectic transformations are also similar to eutectic reactions. Here, a liquid and solid phase of fixed proportions react at a fixed temperature to yield a single solid phase. Since the solid product forms at the interface between the two reactants, it can form a diffusion barrier and generally causes such reactions to proceed much more slowly than eutectic or eutectoid transformations. Because of this, when a peritectic composition solidifies it does not show the lamellar structure that is found with eutectic solidification. Such a transformation exists in the iron-carbon system, as seen near the upper-left corner of the figure. It resembles an inverted eutectic, with the δ phase combining with the liquid to produce pure austenite at 1,495 °C (2,723 °F) and 0.17% carbon. Peritectic decomposition. Up to this point in the discussion transformations have been addressed from the point of view of cooling. They also can be discussed noting the changes that occur to some solid chemical compounds as they are heated. Rather than melting, at the peritectic decomposition temperature, the compound decomposes into another solid compound and a liquid. The proportion of each is determined by the lever rule. The vocabulary changes slightly. Just as the cooling of water, which leads to ice, is termed freezing, the warming of ice leads to melting. In the Al-Au phase diagram, for example, it can be seen that only two of the phases melt congruently, AuAl2 and Au2Al. The rest peritectically decompose.

Eutectic calculation

The composition and temperature of a eutectic can be calculated from the enthalpy and entropy of fusion of each component. The Gibbs free energy G depends on its own differential:

G = H - TS \;\Rightarrow\; \left(\frac{\partial G}{\partial T}\right)_P = -S \;\Rightarrow\; G = H + T\left(\frac{\partial G}{\partial T}\right)_P .

Thus, the derivative of G/T at constant pressure is given by

\frac{\partial (G/T)}{\partial T} = \frac{1}{T}\left(\frac{\partial G}{\partial T}\right)_P - \frac{G}{T^2} = -\frac{H}{T^2} .

The chemical potential \mu_i is calculated by assuming that the activity a_i is equal to the concentration x_i:

\mu_i = \mu_i^\circ + RT \ln a_i \approx \mu_i^\circ + RT \ln x_i .

At the equilibrium \mu_i = 0, thus \mu_i^\circ = -RT \ln x_i. Applying the G/T relation above to \mu_i^\circ/T and integrating gives

\ln x_i = -\frac{h_i^\circ}{RT} + K ,

where h_i^\circ is the enthalpy of fusion of component i.
The integration constant K may be determined for a pure component, for which x_i = 1 at its melting temperature T_i^\circ with enthalpy of fusion h_i^\circ:

K = \frac{h_i^\circ}{R T_i^\circ} .

We obtain a relation that determines the molar fraction as a function of the temperature for each component:

\ln x_i = \frac{h_i^\circ}{R}\left(\frac{1}{T_i^\circ} - \frac{1}{T}\right) .

The mixture of n components is described by this system of n equations together with the constraint \sum_i x_i = 1, which can be solved numerically (a small illustrative solver is sketched after the reference list below).

- Smith & Hashemi 2006, pp. 326–327.
- Smith & Hashemi 2006, p. 327.
- Smith & Hashemi 2006, pp. 332–333.
- Muldrew, Ken; Locksley E. McGann (1997). "Phase Diagrams". Cryobiology—A Short Course. University of Calgary. Retrieved 2006-04-29.
- Senese, Fred (1999). "Does salt water expand as much as fresh water does when it freezes?". Solutions: Frequently asked questions. Department of Chemistry, Frostburg State University. Retrieved 2006-04-29.
- "Molten salts properties". Archimede Solar Plant Specs.
- Fichter, Lynn S. (2000). "Igneous Phase Diagrams". Igneous Rocks. James Madison University. Retrieved 2006-04-29.
- Davies, Nicholas A.; Beatrice M. Nicholas (1992). "Eutectic compositions for hot melt jet inks". US Patent & Trademark Office, Patent Full Text and Image Database. United States Patent and Trademark Office. Retrieved 2006-04-29.
- Iron-Iron Carbide Phase Diagram Example.
- IUPAC Compendium of Chemical Terminology, electronic version. "Peritectoid Reaction". Retrieved 2007-05-22.
- Numerical Model of Peritectoid Transformation. Retrieved 2007-05-22.
- International Journal of Modern Physics C 15(5) (2004), pp. 675–687.
- Smith, William F.; Hashemi, Javad (2006). Foundations of Materials Science and Engineering (4th ed.). McGraw-Hill. ISBN 0-07-295358-6.
- Askeland, Donald R.; Pradeep P. Phule (2005). The Science and Engineering of Materials. Thomson-Engineering. ISBN 0-534-55396-6.
- Easterling, Edward (1992). Phase Transformations in Metals and Alloys. CRC. ISBN 0-7487-5741-4.
- Mortimer, Robert G. (2000). Physical Chemistry. Academic Press. ISBN 0-12-508345-9.
- Reed-Hill, R.E.; Reza Abbaschian (1992). Physical Metallurgy Principles. Thomson-Engineering. ISBN 0-534-92173-6.
- Sadoway, Donald (2004). "Phase Equilibria and Phase Diagrams" (PDF). 3.091 Introduction to Solid State Chemistry, Fall 2004. MIT OpenCourseWare. Archived from the original on 2005-10-20. Retrieved 2006-04-12.
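To make the final step concrete, here is a minimal numerical sketch (ours, not from the article) that locates the eutectic point of an ideal binary mixture by bisection on x_A(T) + x_B(T) = 1, using the relation derived above; the fusion enthalpies and melting points are hypothetical placeholders, not data for any real system.

#include <stdio.h>
#include <math.h>

#define R 8.314 /* gas constant, J/(mol K) */

/* ideal-solution liquidus: mole fraction of component i that the
   liquid can hold at temperature T, from the relation derived above */
static double liquidus_x(double dH, double Tm, double T)
{
    return exp((dH / R) * (1.0 / Tm - 1.0 / T));
}

int main(void)
{
    /* hypothetical components: enthalpy of fusion (J/mol), melting point (K) */
    const double dH_A = 19000.0, Tm_A = 353.0;
    const double dH_B = 18500.0, Tm_B = 342.0;

    /* f(T) = x_A + x_B - 1 is negative well below both melting points
       and positive at the lower melting point, so bisect in between */
    double lo = 200.0, hi = fmin(Tm_A, Tm_B);
    for (int i = 0; i < 60; i++) {
        double mid = 0.5 * (lo + hi);
        double f = liquidus_x(dH_A, Tm_A, mid)
                 + liquidus_x(dH_B, Tm_B, mid) - 1.0;
        if (f > 0.0) hi = mid; else lo = mid;
    }
    double Te = 0.5 * (lo + hi);
    printf("eutectic: T = %.1f K, x_A = %.3f\n",
           Te, liquidus_x(dH_A, Tm_A, Te));
    return 0;
}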
<urn:uuid:8c06d95f-c5ff-467d-aadb-b1e0f59acce1>
4.21875
2,272
Knowledge Article
Science & Tech.
45.427746
What Is Climate and Climate Change? Our weather is always changing and now scientists are discovering that our climate does not stay the same either. Climate, the average weather over a period of many years, differs in regions of the world that receive different amounts of sunlight and have different geographic factors, such as proximity to oceans and altitude. Climates will change if the factors that influence them fluctuate. To change climate on a global scale, either the amount of heat that is let into the system changes, or the amount of heat that is let out of the system changes. For instance, a warming climate is due either to an increase in the amount of heat let into the Earth system or to a decrease in the amount of heat let out of the atmosphere. The heat that enters the Earth system comes from the Sun. Sunlight travels through space and our atmosphere, heating up the land surface and the oceans. The warmed Earth then releases heat back into the atmosphere. However, the amount of sunlight let into the system is not always the same. Changes in Earth’s orbit over thousands of years and changes in the Sun’s intensity affect the amount of solar energy that reaches the Earth. Heat exits the Earth system as the Earth’s surface, warmed by solar energy, radiates heat away. However, certain gases in our atmosphere, called greenhouse gases, allow the lower atmosphere to absorb the heat radiated from the Earth’s surface, trapping heat within the Earth system. Greenhouse gases, such as water vapor, carbon dioxide, methane and nitrous oxide, are an important part of our atmosphere because they keep Earth from becoming an icy sphere with surface temperatures of about 0°F. However, over the past century or so the amounts of greenhouse gases within our atmosphere have been increasing rapidly, mainly due to the burning of fossil fuels, which releases carbon dioxide into the atmosphere. Consequently, in the past one hundred years global temperatures have been increasing more rapidly than the historic record shows. Scientists believe this accelerated heating of the atmosphere occurs because increasing amounts of these greenhouse gases trap more and more heat.
<urn:uuid:030e5a4f-db68-4f49-b6ce-e511eb551a87>
3.90625
422
Knowledge Article
Science & Tech.
40.092741
Scope of this Manual

C is a flexible language that leaves many programming decisions up to you. In keeping with this philosophy, C imposes few restrictions in matters such as type conversion. Although this characteristic of the language can make your programming job easier, you must know the language well to understand how programs will behave. This book provides information on the C language components and the features of the Microsoft implementation. The syntax for the C language is from ANSI X3.159-1989, American National Standard for Information Systems – Programming Language – C (hereinafter called the ANSI C standard); Microsoft-specific syntax is also described, although it is not part of the ANSI C standard. C Language Syntax Summary provides the syntax and a description of how to read and use the syntax definitions. This book does not discuss programming with C++; see the C++ documentation for information about the C++ language.
<urn:uuid:bf36777e-cb46-43e2-afc1-6dbeb5da071b>
3.59375
172
Documentation
Software Dev.
47.83464
definition
  scalar: a quantity that has magnitude only
  vector: a quantity that has both magnitude and direction

examples
  scalar: proper time (relativistic)
  vector: electric, magnetic, gravitational fields

directions
  up, down, left, right
  north, east, west, south
  positive, negative (in a coordinate system)
  angle of inclination, depression
  angle with the vertical, horizontal
  right ascension, declination

operations
  vector addition, vector subtraction
  resultant (Σ), change (Δ)
  dot product (·), cross product (×)

answers
  scalar: a number with a unit
  vector: a number with a unit and a direction angle, a number with a unit along each coordinate axis, or an arrow drawn to scale in a specific direction

45°, 45°, 90° triangle, sides 1 : 1 : √2. An isosceles right triangle. The two legs are equal in size. If we assume each leg has a length of one, according to Pythagoras' theorem the hypotenuse has a length equal to the square root of two.

30°, 60°, 90° triangle, sides 1 : √3 : 2. Half an equilateral triangle. Assume the sides of the full triangle have a length of two. Split it in half and the side opposite the 30° angle has a length of one. Using Pythagoras' theorem, the remaining leg has a length equal to the square root of three.

37°, 53°, 90° triangle, sides 3 : 4 : 5. The simplest Pythagorean triple. The sizes of the angles are best determined with a calculator. Standardized tests love this triangle. (Secret reason: it simplifies grading.) Some people memorize these angles to help them spot 3-4-5 triangles on tests.
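A quick worked example (ours, not from the table): a displacement of 3 m east followed by 4 m north has a resultant of magnitude √(3² + 4²) = 5 m, pointing about 53° north of east: the 3-4-5 triangle in action.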
<urn:uuid:7fec0735-9d7b-40b2-991b-e12dcaecb531>
3.828125
429
Structured Data
Science & Tech.
55.496591
I was alerted to this by a post on the Bad Science forum. There one of the posters had used the position of Barnard’s Star to estimate the date a Google Sky image had been taken. Barnard’s Star has the highest proper motion of any star known. This means it moves across the sky compared to background stars. Looking at the image I wondered why Barnard’s Star appeared only once as a single, very blue image. These colour images are made by combining images of the sky taken in different colour filters. These images are often taken years apart, so for an object such as Barnard’s Star, which moves very fast across the sky, the positions in each filter will be different. Hence you would expect to see one blue, one green and one red image in three different positions; this doesn’t appear to happen. I decided to check out a few other high proper motion stars to see what they looked like. Proxima Cen is a bit weird: I can’t seem to identify it on the image. Perhaps it is the blue thing on top of a background star; there is certainly a bit of noise around the 1970s position from the UKST I plate (Google Sky uses a combination of data from the UKST and POSS telescopes and the Sloan Digital Sky Survey, plus a few other sources of more detailed images). Kapteyn’s Star is a better example. Notice the bright, very blue object in the upper right: that is the blue image from 1975, while the noisy thing in the middle is around the position of the red image from 1998. You can see a better subtraction for Luyten’s star. Frankly I’m not sure about the finer points of how Google Sky makes their colour images. This is clearly an artifact of the way in which they combine the images. Anybody able to use this to work out why this is happening? At the end of last year a couple of papers appeared with some very promising looking direct images of extrasolar planet candidates. Until now the bulk of extrasolar planets (i.e. planets outside our solar system) have been found either by the radial velocity method, where the motion of the parent star being pulled around by the planet is detected, or the transit method, where the planet obscures a portion of the parent star, blocks some of the light that would otherwise reach us here on Earth and makes the star appear a bit dimmer. One of the candidate direct images was of a planet around the nearby young star Beta Pictoris; the discovery paper by Lagrange and collaborators is here and here and the press release is here. These direct images are very difficult to acquire as the star is much, much brighter than the planet (in the case of Beta Pic about 1500 times brighter) and the atmosphere and telescope optics smear out the star’s light, covering the spot on the sky where the planet is. The group led by Lagrange used the Very Large Telescope in Chile along with the NaCo instrument (which both blocks out most of the light from the parent star and corrects for some of the atmospheric smearing). This has allowed them to image what looks like a planet near the star. Of course it could just be another, fainter, unrelated star behind Beta Pic; in these cases you need to come back a few years later to check the planet is moving through space along with the parent star to make sure. However the chance of this just being coincidence is pretty small. So why am I writing about this now? Well a paper has appeared that may indicate this planet was detected before, in 1981. Back then Beta Pic was seen to dim briefly, as if a planet passed in front of it.
This of course begs the question “was the imaged planetary candidate responsible for the transit?” This is the question the authors try to answer. A planet will only transit if you are looking at the system edge-on, and we have a clue that the Beta Pic system is very close to edge-on. Like many young stars, Beta Pic has a disk of material around it that is thought to form planets. We know that the disk around Beta Pic is pretty close to edge-on, and you’d expect the planets in any system to orbit roughly in the plane of the disk. Hence it is possible the planetary candidate could transit in front of the star. The authors then go on to try to work out (assuming the planetary candidate and the transiting body are the same thing) when a transit would happen again and what the planet’s orbit is. They find the most likely solution is a planet orbiting Beta Pic at a distance of eight times the Earth-Sun distance every 16-19 years (see the quick check below). Both the direct detection and the transit suggest the planet is a gas giant. The idea that the transit of a planet across its parent star could have been detected in 1981 sort of shows that astronomy is a passive science. In most research you design an experiment, have complete control over it, carry it out and note down the result. In astronomy you can’t grab two bottles of chemicals off the shelf and mix them; you can only look. If there is a planet around Beta Pic in a 16-19 year orbit then it was also there in 1981; it was also transiting at the end of the last century and in the mid-60s, just nobody was looking. Almost everything we can study, measure and analyse in astronomy is already out there, we just haven’t looked hard enough yet.
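A quick consistency check on those numbers (our arithmetic, assuming a mass of roughly 1.75 solar masses for Beta Pic, a young A-type star): Kepler's third law, with P in years, a in AU and M in solar masses, gives

P = \sqrt{a^3 / M} = \sqrt{8^3 / 1.75} \approx 17\ \text{yr},

squarely inside the quoted 16-19 year range.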
<urn:uuid:09f7242f-edc6-4314-9a26-ea46f46cd9c8>
3.125
1,139
Personal Blog
Science & Tech.
55.418389
Because our forum is being polluted with bad information. Myth #1: When you hash something, you get a unique result that no other file or string or password can have. Wroonnnnggggg. Let's attack this one with simple logic. Let's say your hash is 32 characters long. Now let's say you hash every possible 33-character string there is. You will have strings with matching hashes, or "collisions". It's simple logic -- there are far more combinations of 33-character strings than there are of 32-character strings, because for every 32-character string that exists, you can tack on every possible character to the end and make a bunch of 33-character strings. So, just making up some example numbers, if there are 90,000 33-character strings and 20,000 32-character strings, some of those 33'ers MUST have exactly the same 32-character hash. The goal of hashing algorithms is to make collisions as rare as possible, but it is impossible to write a hashing algorithm that has no collisions. Myth #2: MD5 is insecure. Wroonnnnggggg. MD5 is a less sophisticated (and therefore much faster) hashing algorithm than, say, SHA-256, but it is not insecure. An insecure hash would mean that the hash could be reversed -- or rather, that you could take a hash, and, using that and having no other information, produce a string that has the same hash. You cannot do that with MD5. In fact, the closest anyone has gotten to this is changing an existing, large file in a way that doesn't change the hash it already has. No one in the history of humankind has been able to produce a "reverse" MD5. This myth comes from the fact that MD5 is a common target of password-cracking attacks, which leads to our next myth... Myth #3: MD5 is less secure for password hashing than other algorithms like SHA. Wroonnnnggggg. There are exactly three attacks that can be used to find out someone's password if you have the hash of that password: - Hash database lookup. You go to a super-large online database of short words and their hashes, plug in the hash, and see if a short word with that hash has ever been submitted before. A common prevention for this is to salt your passwords. - Brute force. You run through all the words of a dictionary, hashing each one, to see if the hash matches what you have on hand. If that doesn't work, you just start hashing every possible combination of 5, 6, 7, or 8-character words to find a match. This takes for effing ever and rarely produces a result, and can easily be prevented with a salt. - Rainbow tables. This is a method of hashing and re-hashing the data you have to find similarities in the hashes of other words, which can eventually lead to finding the password. Salts have minimal effect on these attacks. Myth #4: Sophisticated, super-long hashes like SHA-256 are harder to attack because they're longer. Wroonnnnggggg. Let's face it: the biggest threat to hash cracking attacks is the rainbow tables method. It is by far the most efficient tradeoff between processing time and storage space, and often times can find a password in under a minute if you have enough chains. But rainbow tables are not magical. That's the impression most people have because it's easier to believe that than to learn how they work, but seriously, the rainbow table attack isn't hard to understand. I suggest reading up on it if you have a spare 15 minutes.
So if you know how rainbow tables work, you know that the biggest weakness they have is hash collisions -- which, you'll remember from above, means more than one string that produces the same hash. So password hashing security is a tradeoff: You want an algorithm that has reasonably few collisions so that no two passwords are likely to generate the same hash, but you also want one that produces enough collisions to potentially send a rainbow table into an endless loop that can't be cracked. SHA-256 is horrible for this, because it's too good a hashing algorithm. You're extremely unlikely to get any collisions at all when using SHA-256 for a password (even a salted one), so using that makes your passwords -- salted or not -- easier to crack. SHA-256 is awesome for hashing large files. Not so awesome for passwords. At the same time, though, you don't want to use a measly 32 or 48-bit hash, because collisions are extremely likely with those. You want to find a happy medium, which is around 128 bits. What's a fast 128-bit hashing algorithm? MD5. Myth #5: SHA-1 is still better to use than MD5 because MD5 is more likely to be cracked. Wroonnnnggggg. Again, you're being lulled into a false sense of security. Not only is SHA-1 160 bits (so you'll still get collisions, but fewer than MD5), hash databases and rainbow tables are just as easy and well-developed for SHA-1 as they are for MD5. This myth is one that is spread far and wide among developers who have never taken the time to research the facts. The fact that SHA produces a longer hash has little-to-no impact on the ability to store them in a database or produce rainbow chains with them. All it takes is a tiny bit more storage space, and that's true for any hashing algorithm in the world, whether it's 8 bits or 8 thousand bits. The length only helps for reducing collisions, which, past around 128 bits, doesn't help at all when you're hashing passwords rather than large files. Myth #6: Hashing with multiple algorithms is more secure! Wroonnnnggggg. No. Just no. Come on, now you're just pulling stuff out of your ass. I've seen this before: sha1(md5("Password")). That is ridiculous. You're feeding 128 bits into 160 bits, which is an easy easy crack. You can't make hashes more-hashy. You might add one more step to the process for someone cracking it, but it's going to end up with the same result. Don't guess at what's more secure, know what's more secure. Myth #7: Using a global salt AND a user salt is more secure than using just a user salt. There was a thread recently in which folks supported the idea of using a global salt in addition to a user salt, and I dismissed it as pointless. I was met with some fierce opposition, claiming that it makes passwords more secure if a hacker should be able to steal a copy of your database but not your source code. This is true, however the amount of extra security this offers is minimal at best. Here's why: Out of our list of attacks on hashes that I wrote out earlier, there is only one attack that's made more difficult to crack by using a global salt, and that's the brute force attack -- the one that's already almost impossible to use successfully to begin with. Adding a global salt only makes this more-impossible-than-nearly-impossible. So I'll admit there is SOME benefit, but with the most common and most successful attack being rainbow tables, folks would be crazy to try a brute force attack anyway. 
They'd just rainbow table it, and the global salt won't make your password any more secure than a properly long user salt would. And hey, let's face it: If someone's able to steal a copy of your database, chances are that they can nab a copy of your source without too much trouble too. If you're looking to protect against the case where someone gets a copy of your database, a far, far, far more effective solution would be to use symmetric encryption on your hashes. Choose a fast, simple symmetric encryption algorithm (RC4, Blowfish... your call), generate a key and store it somewhere in a configuration file. Use that to encrypt all your salted password hashes in the database. Then, when you pull them from the database, just decrypt them with that key when you read them. Now you have a hash that can't even be attacked with a rainbow table if someone happens to gain unauthorized access to either your database or your account on the server, and this is still effective if you distribute your software to the public. MUCH more effective than using a global salt. So what are the best practices for password hashing? The only two good options for password hashing are MD5 and SHA-1, and you should let the language you're programming in dictate which one you use. For PHP (and most languages), MD5 is faster than SHA-1, so it's the better option. The key to making it secure is using a salt of the appropriate length. The sweet spot for securely storing a password that isn't susceptible to dictionary or brute force attacks AND is relatively safe from rainbow table attacks is to feed twice the number of bits of a hash into a new hash. So, for example, using MD5, you'd want to hash two MD5s. You can generate a 128-bit salt for this (which, for a lot of users, can take up some significant storage space), or you can generate a salt that's just a handful of characters long and hash that. My favorite method is this: $finalHash = md5(md5($salt) . md5($password)) (see the C sketch below). Whether you want to put the symmetric encryption layer on top of that is entirely up to you. So please, for the love of all that is holy, stop teaching these myths to other people! The world's programmers thank you :D
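For reference, here is that favored construction transcribed into C (a sketch, not the poster's code; it uses OpenSSL's legacy MD5 API, and the salt and password inputs are illustrative):

#include <openssl/md5.h>
#include <stdio.h>
#include <string.h>

/* hex-encode the MD5 of a NUL-terminated string into out[33] */
static void md5_hex(const char *s, char out[33])
{
    unsigned char digest[MD5_DIGEST_LENGTH];
    MD5((const unsigned char *)s, strlen(s), digest);
    for (int i = 0; i < MD5_DIGEST_LENGTH; i++)
        sprintf(out + 2 * i, "%02x", digest[i]);
}

/* md5(md5(salt) . md5(password)) -- the construction from the post */
static void final_hash(const char *salt, const char *password, char out[33])
{
    char hs[33], hp[33], buf[65];
    md5_hex(salt, hs);
    md5_hex(password, hp);
    snprintf(buf, sizeof buf, "%s%s", hs, hp);
    md5_hex(buf, out);
}

int main(void)
{
    char h[33];
    final_hash("somesalt", "Password", h); /* illustrative inputs */
    printf("%s\n", h);
    return 0;
}

Compile with something like: cc hash.c -lcrypto.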
<urn:uuid:782fe768-83be-4055-a750-5cd7407f3c58>
3.1875
2,047
Comment Section
Software Dev.
65.461072
Large collisions between asteroids transferred carbonaceous material in the inner solar system. NASA's space probe Dawn is scheduled to leave asteroid Vesta on Wednesday, September 5th, and head for its next destination, the dwarf planet Ceres. New results prove that Dawn’s target asteroid Vesta is a relict from an early phase of planetary evolution. First false-color maps of the asteroid show unique surface variations. NASA’s Dawn spacecraft is now in its first science-collecting orbit at Vesta. New images from the camera system on board NASA's Dawn spacecraft give first hints of an eventful past. NASA's Dawn spacecraft has returned the first close-up image after beginning its orbit around the giant asteroid Vesta. On Friday, July 15, Dawn became the first probe to enter orbit around an object in the main asteroid belt between Mars and Jupiter. On July 15, NASA's Dawn spacecraft will become the first spacecraft to begin a prolonged encounter with the asteroid Vesta. New images of the asteroid show the first surface structures and give a preview of the Dawn mission's coming months. New images taken by NASA's Dawn spacecraft show a dark spot on the asteroid's equatorial region. The targets of the Dawn mission could not be more different: while Vesta once had a hot, molten interior that produced lava flows, Ceres has always been a cold body under whose surface frozen water may possibly be found. In addition, both bodies allow for a look back into an early phase of our solar system. Both asteroids are among the largest survivors from this early phase of planet formation. Dawn is a NASA mission managed by the Jet Propulsion Laboratory (JPL) that will reach the asteroids Vesta and Ceres within the next few years. The space probe will encounter its first destination, the asteroid Vesta, in the summer of 2011. Presumably at the end of July, Dawn will start orbiting Vesta and deliver its first high-resolution images of the surface. The mission's success crucially depends on the two cameras, Dawn's eyes. The cameras were developed and built under the leadership of the Max Planck Institute for Solar System Research with significant contributions by the Institute for Planetary Research of the German Aerospace Center (DLR) and in coordination with the Institute of Computer and Communication Network Engineering of the Technical University Braunschweig.
<urn:uuid:21efea99-d648-44fd-9f91-0eb9cfa51abb>
3.484375
507
Content Listing
Science & Tech.
45.646438
Flora and Fauna
Ostriches, cassowaries, emus, kiwis, and rheas.
Information about kiwi birds -- biology and conservation.
Struthio camelus, the ostrich, flightless bird native to Africa.
Business: Agriculture and Forestry: Livestock: Ratites
Shopping: Food: Meat: Exotic: Ostrich, Emu, and Rhea
<urn:uuid:badc7e8e-f898-4726-8fd2-52ce9f65ad48>
2.71875
115
Content Listing
Science & Tech.
30.876918
Praying mantis (insect)

A praying mantis, or praying mantid, is an insect of the family Mantidae (order Dictyoptera), named for its "prayer-like" stance. (The word mantis in Greek means prophet.) There are approximately 2,000 species world-wide; most are tropical or subtropical. The most common species is Mantis religiosa. Mantids are notable for their large size and nimble reflexes. Their diet, which consists exclusively of living insects, includes flies and aphids, which are caught and held securely with the grasping forelegs. Mantids make use of protective coloration to blend in with the foliage, both to avoid predators themselves and to better snare their victims.
<urn:uuid:3ceead7c-69fe-4853-aa3a-f197d9c4082c>
3.296875
163
Knowledge Article
Science & Tech.
42.341667
Elasmobranchii (sharks and rays) > Carcharhiniformes (Ground sharks) > Carcharhinidae
Etymology: Carcharhinus: Greek, karcharos = sharpen + Greek, rhinos = nose (Ref. 45335).
Environment / Climate / Range: Marine; reef-associated; depth range ? - 170 m (Ref. 6871). Tropical; 34°N - 25°S.
Length at first maturity / Size / Weight / Age: Maturity: Lm ?, range 70 - 75 cm. Max length: 120 cm TL male/unsexed (Ref. 4883).
Distribution: Indo-West Pacific: Persian Gulf and Arabian Sea between Gulf of Oman and Pakistan to Java, Indonesia and the Arafura Sea (Ref. 9819), north to Japan, south to Australia (Ref. 6871).
A common but little-known shark found on the continental and insular inshore areas (Ref. 9997). Feeds mainly on fishes but also on cephalopods and crustaceans (Ref. 6871). Viviparous (Ref. 50449), with a yolk-sac placenta; gives birth to litters of 1-4 (usually 2) pups (Ref. 58048). Taken in artisanal and small-scale commercial fisheries and marketed for human consumption (Ref. 244). Fins also utilized (Ref. 6871).
Compagno, L.J.V., 1984. FAO Species Catalogue. Vol. 4. Sharks of the world. An annotated and illustrated catalogue of shark species known to date. Part 2 - Carcharhiniformes. FAO Fish. Synop. 125(4/2):251-655. Rome: FAO.
IUCN Red List Status (Ref. 90363)
Estimates of some properties based on empirical models:
Phylogenetic diversity index (Ref. 82805) = 0.5000 [Uniqueness, from 0.5 = low to 2.0 = high].
Bayesian length-weight: a=0.00214 (-0.19144 - 0.19572), b=3.22932 (3.13085 - 3.32780), based on LWR estimates for this genus (BS) (Ref. 93245).
Trophic Level (Ref. 69278): 3.9 ±0.6 se; based on diet studies.
Resilience (Ref. 69278): Very Low, minimum population doubling time more than 14 years (Fec=2).
Vulnerability (Ref. 59153): Moderate to high vulnerability (46 of 100).
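As an illustrative application of the Bayesian length-weight coefficients above (our arithmetic, not a FishBase estimate): the standard length-weight relation W = a L^b gives, at the maximum recorded length of 120 cm TL, W = 0.00214 × 120^3.22932 ≈ 11,000 g, i.e. roughly 11 kg.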
<urn:uuid:3beaa427-85e3-4c05-9023-8fa381a1fce8>
3.28125
624
Knowledge Article
Science & Tech.
63.964498
Proportion of species first reported on each foray The fraction of species new to each foray is shown. In the second foray more than 40% of the species were new records. This is partially due to a strong seasonal effect. The genus Boletus for example is much more diverse in the December forays than the January forays, and Hygrocybe was just the reverse. On the last two forays the percentage of new records dropped to 31% and 18%, respectively. This may show that we are starting to saturate the species that we are currently able to identify, but it also may be caused by drier weather in the second season.
<urn:uuid:4c57409c-0971-4938-ba0f-5017b7adc2e1>
2.765625
133
Structured Data
Science & Tech.
54.404848
Joined: 03 Oct 2005
Posted: Fri Dec 23, 2005 10:30 am
Post subject: Scientists Develop New Process for Tracking Nanomaterials

Scientists at Northwestern University Develop a New, Non-Invasive Process for Tracking Nanostructured Materials Inside the Body

Researchers at Northwestern University have been developing a toolbox of synthetic amino acids (related to building blocks of proteins) that assemble themselves into complex structures that may prove useful in drug delivery and tissue engineering applications. Now, that same research team has devised a non-invasive method of imaging these nanostructured materials within the body, providing a way of tracking the fate of these materials in a living organism. Samuel Stupp, Ph.D., and his colleagues have been creating complex, self-assembled, nanoscale materials that can serve as scaffolds for tissue regeneration following surgery or injury, and as targeted, multifunctional drug delivery devices. Once these materials have served their purpose, the body would degrade them slowly and gradually eliminate them, but tracking such a process would be difficult because of the similarity of these materials to those found in the body. To provide a handle on how the body handles these materials, Dr. Stupp and his collaborators teamed with Thomas Meade, Ph.D., also at Northwestern, to create another synthetic amino acid that can bind strongly to gadolinium ions. Other compounds containing gadolinium ions are employed by radiologists today to enhance images obtained using magnetic resonance imaging (MRI). When these gadolinium-binding amino acids were incorporated into a variety of different self-assembling nanostructures, they were readily visible in images obtained using MRI. By studying various nanostructures, the investigators were able to determine how to maximize the MRI signal with a minimum amount of gadolinium, which can be toxic in large amounts. Dr. Stupp and his team are now using this gadolinium-containing amino acid to study degradation and migration of their self-assembled nanostructures in vivo. This work is detailed in a paper titled “Magnetic resonance imaging of self-assembled biomaterial scaffolds,” which appeared in the journal Bioconjugate Chemistry. An abstract of this paper is available through PubMed. Source: NCI Alliance for Nanotechnology in Cancer. This story was posted on 21 December 2005.
<urn:uuid:4b8a737a-9c0a-42bd-95b5-28ba99e1a405>
3.296875
483
Comment Section
Science & Tech.
21.100999
Joined: 16 Mar 2004
Posted: Mon Feb 04, 2008 12:35 pm
Post subject: Golden Glue

A single gold atom might be able to serve as a versatile glue to bind together different kinds of monomers into completely unknown structures. Pekka Pyykkö and colleagues at the University of Helsinki, Finland, have used theoretical calculations to predict a new family of structures. The compounds are made of aromatic rings glued together by gold atoms to form infinite one-dimensional nanostrips. [Figure: gold atoms glue aromatic rings together.] Depending on the geometry and chemical composition, the nanostrips can behave as insulators, semiconductors or metals. 'Due to the abundance of possibilities, one is able to experiment with various starting materials in search of successful syntheses of these new systems,' said Pyykkö. The repeating molecular units in these systems are relatively small, explained Pyykkö, which allows highly accurate quantum-chemical predictions. This means that simulations can be used to tailor new systems to achieve desired electrical properties. Pyykkö hopes that this work will inspire both experimentalists and theoreticians to explore these new species. 'The first and foremost challenge is to find a suitable synthesis pathway to experimentally make these structures,' he said. In the meantime, Pyykkö's team is extending the research by attempting to bend the strips into rings. Story posted: 31st May 2007
<urn:uuid:83368837-fd07-4e79-9110-10d2e0dfbe90>
3.4375
293
Comment Section
Science & Tech.
27.995
TO TRY to understand neutron stars, researchers have abandoned telescopes for a new tool a billion billion times smaller: the nucleus of a lead atom. Neutron stars, like the one at the heart of the Crab Nebula, have a radius of only a dozen kilometres or so but weigh more than the Sun. Despite often pumping out X-rays or radio signals, it is hard to determine the structure of a neutron star just from its radiation. Researchers think that a neutron star is solid on the outside with a liquid centre. They want to know how thick the solid neutron crust is, as this affects many of the star's properties, from how fast it cools to how well it emits gravitational waves. Charles Horowitz of Indiana University in Bloomington and Jorge Piekarewicz of Florida State University in Tallahassee believe that they can get an idea of the crust's thickness by measuring the ...
<urn:uuid:0a35fcf8-7c62-453f-8808-e041cc05e2c9>
4.375
210
Truncated
Science & Tech.
50.394575
THE over-harvesting of small fish to feed farmed salmon is threatening marine ecosystems worldwide, according to a report commissioned by respected environmental organisations. Nonsense, say industry experts, the fisheries are well managed. So who is right? Feed for farmed fish consists of up to 80 per cent fishmeal and oil, because predatory fish such as salmon need high levels of certain oils and proteins for rapid growth and the right taste. The report, done for the conservation group WWF, the Scottish Wildlife Trust and the UK's Royal Society for the Protection of Birds (RSPB), claims many of the feed fisheries are not sustainable. It focuses on the fishmeal used by Scottish salmon farms, but many fish farmers worldwide rely on the same sources. "Aquaculture can't just keep on expanding without any measures of sustainability," says Rebecca Boyd of WWF. "We'll just end up plundering the ocean, and that means losing ...
<urn:uuid:95d62df4-46f6-4467-9cca-8716942fc256>
3
218
Truncated
Science & Tech.
51.1175
Color and Vision

Visit The Physics Classroom's Flickr Galleries and enjoy a photo overview of the topic of light and color.
The Physics of Coloured Fireworks: Learn about the colors associated with firework displays.
PhET Simulation: Neon Lights & Other Discharge Lamps: Learn how discharge lamps (e.g., neon lights) produce their light with this interactive PhET simulation.
How the Eyes See Color: This short video features an optometrist explaining the science of color.
Need ideas? Need help? Explore The Physics Front's treasure box of catalogued resources on visible light and color.
The Physics of Coloured Fireworks: Deepen your understanding and be prepared for those pesky student questions on colored fireworks.
The Science of Color: This downloadable, 100-plus page book discusses various aspects of light production in the visible spectrum and color addition and subtraction.
There's More to Light Than Meets the Eye: Created by the Astronomical Society of the Pacific, this collection of pages starts with the EM spectrum and ends with a classroom project on rainbow formation.
General Atomics Sciences: It's a Colorful Life: Deepen your understanding of color with this free, downloadable book on color; contains theory and ideas for labs.

Visible Light and the Eye's Response

As mentioned in the first section of Lesson 2, our eyes are sensitive to a very narrow band of frequencies within the enormous range of frequencies of the electromagnetic spectrum. This narrow band of frequencies is referred to as the visible light spectrum. Visible light - that which is detectable by the human eye - consists of wavelengths ranging from approximately 780 nanometers (7.80 x 10^-7 m) down to 390 nanometers (3.90 x 10^-7 m). Specific wavelengths within the spectrum correspond to a specific color based upon how humans typically perceive light of that wavelength. The long wavelength end of the spectrum corresponds to light that is perceived by humans to be red and the short wavelength end of the spectrum corresponds to light that is perceived to be violet. Other colors within the spectrum include orange, yellow, green and blue. The graphic below depicts the approximate range of wavelengths that are associated with the various perceived colors within the spectrum. Color can be thought of as a psychological and physiological response to light waves of a specific frequency or set of frequencies impinging upon the eye. An understanding of the human response to color demands that one understand the biology of the eye. Light that enters the eye through the pupil ultimately strikes the inside surface of the eye known as the retina. The retina is lined with a variety of light sensing cells known as rods and cones. While the rods on the retina are sensitive to the intensity of light, they cannot distinguish between lights of different wavelengths. On the other hand, the cones are the color-sensing cells of the retina. When light of a given wavelength enters the eye and strikes the cones of the retina, a chemical reaction is activated that results in an electrical impulse being sent along nerves to the brain. It is believed that there are three kinds of cones, each sensitive to its own range of wavelengths within the visible light spectrum. These three kinds of cones are referred to as red cones, green cones, and blue cones because of their respective sensitivity to the wavelengths of light that are associated with red, green and blue.
Since the red cone is sensitive to a range of wavelengths, it is not only activated by wavelengths of red light, but also (to a lesser extent) by wavelengths of orange light, yellow light and even green light. In the same manner, the green cone is most sensitive to wavelengths of light associated with the color green. Yet the green cone can also be activated by wavelengths of light associated with the colors yellow and blue. The graphic below is a sensitivity curve that depicts the range of wavelengths and the sensitivity level for the three kinds of cones. The cone sensitivity curve shown above helps us to better understand our response to the light that is incident upon the retina. While the response is activated by the physics of light waves, the response itself is both physiological and psychological. Suppose that white light - i.e., light consisting of the full range of wavelengths within the visible light spectrum - is incident upon the retina. Upon striking the retina, the physiological occurs: photochemical reactions occur within the cones to produce electrical impulses that are sent along nerves to the brain. The cones respond to the incident light by sending a message forward to the brain, saying, "Light is hitting me." Upon reaching the brain, the psychological occurs: the brain detects the electrical messages being sent by the cones and interprets the meaning of the messages. The brain responds by saying "it is white." For the case of white light entering the eye and striking the retina, each of the three kinds of cones would be activated into sending the electrical messages along to the brain. And the brain recognizes that the messages are being sent by all three cones and somehow interprets this to mean that white light has entered the eye. Now suppose that light in the yellow range of wavelengths (approximately 577 nm to 597 nm) enters the eye and strikes the retina. Light with these wavelengths would activate both the green and the red cones of the retina. Upon striking the retina, the physiological occurs: electrical messages are sent by both the red and the green cones to the brain. Once received by the brain, the psychological occurs: the brain recognizes that the light has activated both the red and the green cones and somehow interprets this to mean that the object is yellow. In this sense, the yellow appearance of objects is simply the result of yellow light from the object entering our eye and stimulating the red and the green cones simultaneously. If the appearance of yellow is perceived of an object when it activates the red and the green cones simultaneously, then what appearance would result if two overlapping red and green spotlights entered our eye? Using the same three-cone theory, we could make some predictions of the result. Red light entering our eye would mostly activate the red color cone; and green light entering our eye would mostly activate the green color cone. Each cone would send their usual electrical messages to the brain. If the brain has been psychologically trained to interpret these two signals to mean "yellow", then the brain would perceive the overlapping red and green spotlights to appear as yellow. To the eye-brain system, there is no difference in the physiological and psychological response to yellow light and a mixing of red and green light. The brain has no means of distinguishing between the two physical situations. In a technical sense, it is really not appropriate to refer to light as being colored.
Light is simply a wave with a specific wavelength or a mixture of wavelengths; it has no color in and of itself. An object that is emitting or reflecting light to our eye appears to have a specific color as the result of the eye-brain response to the wavelength. So technically, there is really no such thing as yellow light. Rather, there is light with a wavelength of about 590 nm that appears yellow. And there is also light with a mixture of wavelengths of about 700 nm and 530 nm that together appears yellow. The yellow appearance of these two clearly different light sources can be traced to the physiological and psychological response of the eye-brain system, and not to the light itself. So to be technically appropriate, a person would refer to "yellow light" as "light that creates a yellow appearance." Yet, to maintain a larger collection of friendships, a person would refer to "yellow light" as "yellow light." In the next several sections of Lesson 2, we will explore these concepts further by introducing three primary colors of light and generating some simple rules for predicting the color appearance of objects in terms of the three primary colors.
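The claim that 590 nm light and a red-plus-green mixture look identical can be checked with a toy calculation. The sketch below is a minimal numerical model: it treats each cone's sensitivity as a Gaussian bump, with peak wavelengths of roughly 564 nm, 534 nm and 420 nm for the red, green and blue cones (approximate textbook values, not figures from this lesson) and an invented 50 nm width.

#include <stdio.h>
#include <math.h>

/* Toy Gaussian model of cone sensitivity; the peaks are rough
   textbook values and the width is an illustrative assumption. */
static double cone(double lambda_nm, double peak_nm) {
    double d = (lambda_nm - peak_nm) / 50.0;
    return exp(-d * d);
}

/* Sum each cone's response over a set of weighted wavelengths. */
static void report(const char *label, const double *lam,
                   const double *w, int n) {
    double r = 0.0, g = 0.0, b = 0.0;
    for (int i = 0; i < n; i++) {
        r += w[i] * cone(lam[i], 564.0);  /* "red" cone   */
        g += w[i] * cone(lam[i], 534.0);  /* "green" cone */
        b += w[i] * cone(lam[i], 420.0);  /* "blue" cone  */
    }
    printf("%-22s R=%.2f  G=%.2f  B=%.2f\n", label, r, g, b);
}

int main(void) {
    double pure[] = { 590.0 };
    double w1[]   = { 1.0 };
    double mix[]  = { 700.0, 530.0 };
    double w2[]   = { 0.5, 0.5 };

    report("590 nm alone:", pure, w1, 1);
    report("700 nm + 530 nm mix:", mix, w2, 2);
    return 0;
}

In both cases the red and green cones respond appreciably while the blue cone stays essentially silent, which is the pattern the brain has learned to call yellow; with real cone curves and suitably chosen mixing weights the two response patterns match even more closely, and the eye-brain system cannot tell the stimuli apart.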
<urn:uuid:4e9abace-b7c8-409b-869a-4d2da8406916>
4.25
1,570
Content Listing
Science & Tech.
45.631245
The central part of the U.S. gets many tornadoes, particularly strong and violent ones, because of the unique geography of North America. The combination of the Gulf of Mexico to the south and the Rocky Mountains to the west provides ideal conditions for tornadoes to develop more often than any other place on earth. The central U.S. experienced a record-breaking week from May 4 through May 10 this year, when close to 300 tornadoes occurred in 19 states, causing 42 deaths, according to NOAA's National Weather Service. Storms that produce tornadoes start with warm, moist air near the ground. Dry air is aloft (between altitudes of about three to 10 kilometers). Some mechanism, such as a boundary between the two air masses, acts to lift the warm, moist air upward. The boundary can be a front, dryline or outflow from another storm - essentially any kind of difference in the physical properties of two air masses. "Kinks" in the boundary are locations where rotation could occur. An updraft (air going up) traveling over the kink will "stretch" and intensify the rotation, just like an ice skater pulling in her arms.
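In formula form, the ice-skater analogy is conservation of angular momentum (standard physics, stated here for clarity rather than taken from the article):

L = I\,\omega = \text{constant} \quad\Rightarrow\quad \omega_{\text{new}} = \omega_{\text{old}} \, \frac{I_{\text{old}}}{I_{\text{new}}}

Stretching the rotating column concentrates its mass closer to the spin axis, so the moment of inertia I drops and the spin rate omega must rise to keep L fixed, exactly what the skater does by pulling in her arms.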
<urn:uuid:e3847505-41c8-4ead-a8b6-8fed2b7d6e9e>
4.1875
266
Truncated
Science & Tech.
56.690275
Write a lexical analyser for the C programming language using the grammar for the language given in the book "The C Programming Language", 2e, by B Kernighan and D Ritchie. Use flex for creating the lexical analyser.

Implement a desk calculator using operator precedence parsing.

Imagine the syntax of a programming language construct such as a while-loop -- while ( condition ) begin statement end -- where while, begin, end are keywords; condition can be a single comparison expression (such as x == 20, etc.); and statement is the assignment to a location of the result of a single arithmetic operation (e.g., a = 10 * b). Write a program that verifies whether the input follows the above syntax (see the flex sketch after this assignment list). Use flex to create the lexical analyser module and write a C program that performs the required task using that lexical analyser.

Write a C program for implementing shift-reduce parsing using a given grammar. First, define the data structures for representing the given CFG in BNF, the stack for the parsing, and the parse tree to be created.

Write a C program for creating the precedence table for a given grammar. The table should be created in an appropriate data structure that is compatible with the requirements of Assignment 2. Hint: Use a distinct number to represent each grammar symbol - terminal and non-terminal.

Write an LR parser program in C. Define the data structure for the parsing table in such a way that it can be initialised easily (manually) for a given grammar. Take a simple grammar, e.g., an expression grammar, compute the parsing table entries by hand using the steps discussed in the class, and initialise the table in your program with these values. Try to parse input expressions scanned by a lexical analyser (which can be easily created using flex). The output of the parser should be SUCCESS or FAILURE depending on the input. In case of FAILURE the parser should indicate the incorrect token in the input.

Modify your LR parser program of the preceding assignment such that along with the reduce actions, the parser invokes routines associated with the particular grammar rule. For example, for a reduction by the rule E -> E + T of the expression grammar, the parser computes the sum of the numbers corresponding to the symbols E and T on the RHS, and associates the sum with the symbol E on the LHS. Hint: Observe that it will be necessary to associate values with different symbols in the stack. Whenever a reduce action is taken, some symbols of the stack are to be replaced by a non-terminal symbol. In this step the value to be associated with the non-terminal is to be computed using the values associated with the symbols that are being replaced.

Take the C grammar from the book - The C Programming Language (by Kernighan and Ritchie) - and try to generate a parser for the language using YACC. The notation for the grammar in the book is not strictly BNF (e.g. use of subscript "opt" with some symbols, use of "one of", etc.), so some rewriting shall have to be done due to that. Apart from that there are some LALR conflicts which you shall have to resolve by any appropriate means. Note: This exercise may require more time than available. So start the work, realise the issues and learn the tricks, so that completion of the remaining part should depend not on learning time, but only on adequate coding time.

A Simple example using YACC

Take a common programming language construct of an HLL, such as the for-loop construct of the C language. Use LEX and YACC to create a translator that would translate input into three-address intermediate code.
The output of the translator should finally be in a file. Assume a simple structure for "statements" that may appear inside the construct, and make necessary assumptions for the intermediate code format.
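For the while-loop checker above, a minimal flex specification might look like the following sketch. It is one possible starting point, not a prescribed solution: the token codes and names and the tiny token set are invented for illustration, and the full C lexer of the first assignment would extend the same pattern with the complete K&R token set.

%{
#include <stdio.h>
/* Token codes handed to the parser; the values are arbitrary
   but must not collide with literal character codes. */
enum { KW_WHILE = 256, KW_BEGIN, KW_END, IDENT, NUMBER, RELOP, ASSIGN };
%}

%option noyywrap

DIGIT   [0-9]
LETTER  [A-Za-z_]

%%
[ \t\n]+                     ;  /* skip whitespace */
"while"                      { return KW_WHILE; }
"begin"                      { return KW_BEGIN; }
"end"                        { return KW_END; }
"=="|"!="|"<="|">="|"<"|">"  { return RELOP; }
"="                          { return ASSIGN; }
{LETTER}({LETTER}|{DIGIT})*  { return IDENT; }
{DIGIT}+                     { return NUMBER; }
.                            { return yytext[0]; }  /* '(', ')', '*', ... */
%%

/* Stand-alone driver: prints the token stream. The checker of the
   assignment would instead match the stream against the loop shape. */
int main(void) {
    int tok;
    while ((tok = yylex()) != 0)
        printf("%d\t%s\n", tok, yytext);
    return 0;
}

Built with "flex check.l && cc lex.yy.c -o check", this emits one token per line; the companion C program of the assignment would call yylex() directly and report SUCCESS only when the tokens arrive in the order while ( IDENT RELOP operand ) begin IDENT = operand op operand end, where operand is an IDENT or NUMBER.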
<urn:uuid:1e4228ef-6f8d-48d0-9f50-fc04b001f44f>
3.6875
815
Tutorial
Software Dev.
46.993769
SAN FRANCISCO, Dec. 3 (UPI) -- Russia's Far East holds seismic and volcanic hazards that could trigger tsunamis and pose a risk to the rest of the Pacific Basin, a U.S. researcher says. Studies in the last 20 years have revealed the dangerous side of the Kamchatka Peninsula and the Kuril Islands, sources of powerful earthquakes and volcanic activity long kept hidden from outsiders by Russia, Jody Bourgeois, a University of Washington professor of Earth and space sciences, says. A magnitude-9 earthquake in that region in 1952 caused significant damage elsewhere on the Pacific Rim, she said, and even less-powerful quakes have had effects throughout the Pacific Basin. "There's not a large population in the Russian Far East, but it's obviously important to the people who live there," she said in a university release. "Thousands of people were killed in tsunamis because of the earthquake in 1952. And tsunamis don't stay home." A tsunami caused by a smaller 2006 earthquake crossed the Pacific and did more than $10 million in damage at Crescent City, Calif. The historic record for earthquakes, tsunamis and volcanic eruptions in Kamchatka and the Kurils is relatively short, Bourgeois said, and because the region was closed off from much of the world for decades information has started becoming available only recently. Learning more about the region is important to many people over a broad area, she said. "Let's say you decide to build a nuclear power plant in Crescent City. You have to consider local events, but you also have to consider non-local events, worst-case scenarios, which includes tsunamis coming across the Pacific." Bourgeois talked about the seismic and volcanic threats in the Kamchatka-Kurils region Monday at a meeting of the American Geophysical Union in San Francisco.
<urn:uuid:9281134b-6dd0-498f-9f26-663973c57e29>
3.109375
385
Truncated
Science & Tech.
41.438762
Material and methods

Most of the type material was collected using ROV manipulators and is preserved in 75% ethanol. One specimen (CAS 175885) was originally fixed in 10% seawater formalin and later transferred to 75% ethanol. In addition to the specimens collected by ROVs, approximately twelve specimens representing one lot were recently found in the invertebrate zoology collection of the California Academy of Sciences (CAS). These specimens were collected by benthic trawl off the Oregon coast in 1973 and were preserved in 75% ethanol. We have found this material to be conspecific with the new taxon. Microscopes used in this study include a Nikon SMZ-10 dissecting microscope, an Olympus CH-2 compound microscope, and a Leo 1400 series scanning electron microscope. In situ observations of living colonies were made using high-resolution video (Ikegami HDL-40 and Panasonic WV-E550) and digital still (Nikon Coolpix 990) cameras (figs. 2, 3). Over 250 hours of video recordings from San Juan, Rodriguez, Davidson and Pioneer Seamounts, Monterey Canyon, and the continental slope off northern California were reviewed using MBARI’s Video Annotation Reference System, VARS (Schlining & Jacobsen Stout, 2006). Living Gersemia colonies and associated habitats were identified in video and added to the searchable VARS database. Within VARS, these video observations were merged with ancillary data (latitude, longitude, depth, temperature, and oxygen concentration) that were collected by the ROV at the time of deployment. The VARS query was used to export video observation data for analysis and the data were mapped using ArcGIS 9.3. In addition, six video transects were collected to estimate organism density (Monterey Canyon n=5; Pioneer Seamount n=1). Two parallel red lasers (640 nm) positioned 29 cm apart were used to estimate transect width. Transect length was calculated in ArcView® 3.2 using the Animal Movement Analysis Extension, Version 2, which was used to calculate successive distance between the start and end points of each transect (Hooge & Eichenlaub, 1997).
<urn:uuid:bdd25e88-2864-47a1-b3b7-8da40f8df65b>
2.75
465
Academic Writing
Science & Tech.
30.136378
Date: ca. 1924-1928
Country of Origin: United States of America
3-D Test: 26 x 10.2cm (10 1/4 x 4 in.)
Overall, glass; wound string around part of tube over a thin cardboard underlay; each smaller tube extension with thin wire strands, apparently copper, imbedded inside each; broken off metal spring inside larger, main tube.

American rocket pioneer Robert Goddard (1882-1945) used this device between 1924 and 1928 in his experiments to determine the feasibility of ion propulsion for space travel. Ion engines, in which electrically charged particles of atoms are discharged, produce extremely high exhaust velocities. Experiments in space with ion propulsion first took place in 1964. According to a 1964 note written by Russell B. Hastings, one of Goddard's graduate students at the time of the ion experiments, "the tube looks like an early attempt to either singly deflect electrons by a magnetic field or possibly to measure the ratio of charge to mass.... If so this might be a prize piece." Mrs. Goddard gave this artifact to the Smithsonian in 1965 as part of a set of laboratory glassware from her husband's pioneering ion-propulsion experiments.

Gift of Mrs. Robert Goddard
<urn:uuid:c380689b-64e6-4089-a7b8-adcbc9aa260d>
3.515625
256
Knowledge Article
Science & Tech.
57.9415
The body of Communicator source code at Netscape was called ``Mozilla.'' Mozilla was a term initially created by Jamie Zawinski and company during the development of Navigator. The team was working at a similarly frantic pace to create a beast vastly more powerful than Mosaic, and the word became the official code name for Navigator. Later the big green dinosaur became an inside joke, then a company mascot, and finally a public symbol. Now the name came into use as the generic term referring to the open-source web browsers derived from the source code of Netscape Navigator. The move was on to ``Free the Lizard.''

There was an amazing amount to be done to make the code ready for prime time. As issues surfaced, they separated themselves into categories and were claimed. The next three months were devoted to resolving issues at the fanatical pace that Netscapers knew well. One of the largest issues was the disposition of the third-party modules included in the browser. Communicator contained over seventy-five third-party modules in its source, and all of the code owners needed to be approached. Teams of engineers and evangelists were organized to visit and sell each company on the concept of joining Netscape on the road to Open Source. All of them had heard Netscape's Open Source announcement, and now each company had a choice to make: their code could be removed or replaced, shipped as binary (kept in its compiled state), or shipped as source code along with Communicator. To complicate matters, many of the third-party contracts were unique and ran for different lengths of time. No one scenario would be appropriate as a solution for all situations.

Making the deadline for Project Source 331 was considered essential. And that required tough choices. This was surely the case when it came to the participation of the third-party developers. The rule was either you're in by February 24th, or your element will have to be scrubbed from the source. Those kinds of deadlines are not hard to set early on, but they became brutal when we hit the wall. When the time came, some code had to be removed. Java was a proprietary language, so it had to be removed. Three engineers were assigned to perform a ``Java-ectomy.'' The browser had to build, compile, and run -- without Java. Since the overall code was so tightly integrated with Java, this was no small feat. The goal was to have the source code ready by March 15th so that the final two weeks could be devoted to testing. Engineers had to disentangle all Java code from the browser in an inconceivably short time.

Cleansing the code was a huge project. Early on, many felt it just couldn't be done in time for the deadline. But as steam gathered at meetings, strategies formed. The wheels began to turn. The Product Team dropped their entire workload (most were developing the next generation of the browser) and everyone got down to the business of surgery. Not only did the inclusion (or excision) of each third-party participant have to be resolved, all comments had to be edited from the code. Responsibility for each module was assigned to a team and they went in to scrub. One of the great innovations that happened early on was the decision to use the Intranet bug-reporting system as a task manager. ``Bugsplat'' was the name for Scopus, a bug-reporting program fronted with an HTML interface. It was ideal as a workflow management system.
New jobs were reported to the system as they came up, input in a simple HTML form. Just as with a bug that has been reported to the system, priorities were set, relevant participants were determined, and mailing lists grew up around each task. When the task (or bug) was resolved, all of the mailing lists and prioritization collapsed and disappeared from view. Engineers were able to track the progress of their modules and watch the project unfold by logging on to the Intranet. The removal of the cryptographic modules was another tremendous task for the engineering team. Not only did the government insist that all cryptographic support had to be removed, but every hook that called it had to be redacted. One team's sole job was to keep in constant contact with the NSA and manage compliance issues.
<urn:uuid:fd3740b7-ab55-41f0-b5b4-634f258712a6>
2.96875
896
Nonfiction Writing
Software Dev.
47.86369
Melvin Prueitt of Los Alamos National Laboratory in New Mexico received patents last January for an air purifying tower for large smog- filled cities. At the top of the 650-foot tower, which would be made of metal beams covered with a fiberglass shell, a spray of fine, electrostatically charged mist would humidify the air. It would make the air cooler and cause it to sink, thus creating a downdraft that would suck more air into the tower. Since pollutants would cling to the charged droplets, they would be washed away when the mist condenses at the bottom of the tower. Clean air, humidified by the remaining water vapor, would waft out of the bottom. Prueitt figures that a mere 190 towers could scrub the smog out of a city like Los Angeles without inflicting noticeable aesthetic damage to the skyline.
<urn:uuid:10c0bc7c-0088-457e-81c3-9b3910ec9f45>
3.0625
175
Knowledge Article
Science & Tech.
47.3825
Raptors are birds of prey. That means they eat meat, and many of them like MICE. Many raptors hunt for their meat but some look for food that is already dead! Some raptors hunt during the day (diurnal) and others prefer the night time (nocturnal). Raptors come in all sizes and colors and can be found around the world in many types of habitats. Raptors have three special characteristics that make them a raptor. They have sharp talons on their feet that are great for grabbing and carrying their prey, a hooked beak great for tearing the meat apart, and excellent vision! The vision test we take in school would be a piece of cake for a raptor! Many of them can see their meal a couple of miles away. Depending on which raptor you are talking about, they also have other special talents. Some raptors are extremely strong and kill their prey with their grasp, like the eagles, and other raptors have a sense of hearing that is unbelievable, like the owls. Other raptors can fly so fast through the sky, like the falcons. Raptors are very special birds.

Birds are in a class called Aves. Scientists use scientific classification to divide 'Aves' into groups of birds that are alike in some way. This means that ALL birds in the world have been separated into sections based on what is alike about them. Then the scientists divide each group of birds into a smaller group - still based on things that are alike. An example of how this works is to imagine that you are in school. Your teacher wants to take the whole class and divide you into groups based on what is alike about you. She tells all of the blonde children to go in one corner and brown-haired in another, and red-haired in still another. She then takes each group and separates them even more by eye color. So, the blonde 'class' might have two groups inside of it: blue-eyed children and brown-eyed children. These two groups might be split again into children who are right handed or left handed.

Falconiformes and Strigiformes are two orders in the class of Aves. This is where the raptors come in. However, scientists were still having trouble deciding which bird goes in which order, so the Falconiformes were divided again into families. They needed to do this because they were not only working with likenesses you can see but also the changes in molecules and chromosomes in the birds over thousands of years. Falconiforme families are:
<urn:uuid:35e7e7ca-afd2-4c86-9b44-04850471193d>
4.03125
566
Knowledge Article
Science & Tech.
57.31469
Predict future weather using the probability that tomorrow is wet given today is wet and the probability that tomorrow is wet given that today is dry. If the score is 8-8 do I have more chance of winning if the winner is the first to reach 9 points or the first to reach 10 points? It is believed that weaker snooker players have a better chance of winning matches over eleven frames (i.e. first to win 6 frames) than they do over fifteen frames. Is this true?
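The wet/dry question above is a two-state Markov chain. Here is a small illustrative sketch in C; the transition probabilities are made-up example values (the actual problem supplies its own), and the closed-form long-run answer b/(1 - a + b) is standard Markov chain bookwork.

#include <stdio.h>
#include <stdlib.h>

/* Two-state weather chain.
   a = P(wet tomorrow | wet today), b = P(wet tomorrow | dry today).
   The values 0.7 and 0.3 are illustrative assumptions. */
int main(void) {
    const double a = 0.7, b = 0.3;
    long wet_days = 0, n = 1000000;
    int wet = 1;                      /* start on a wet day */

    srand(42);
    for (long i = 0; i < n; i++) {
        double u = (double)rand() / RAND_MAX;
        wet = wet ? (u < a) : (u < b);
        wet_days += wet;
    }
    printf("simulated long-run P(wet) = %.4f\n", (double)wet_days / n);
    printf("analytic  long-run P(wet) = %.4f\n", b / (1.0 - a + b));
    return 0;
}

With these numbers both lines print approximately 0.5000. For a forecast k days ahead rather than the long-run limit, raise the 2x2 transition matrix to the k-th power and read off the relevant entry.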
<urn:uuid:5cab8824-4109-4de4-89e5-d8c1df520c04>
2.875
108
Q&A Forum
Science & Tech.
70.107257
Four vehicles travelled on a road with constant velocities. The car overtook the scooter at 12 o'clock, then met the bike at 14.00 and the motorcycle at 16.00. The motorcycle met the scooter at 17.00, then it overtook the bike at 18.00. At what time did the bike and the scooter meet?

Brian swims at twice the speed that a river is flowing, downstream from one moored boat to another and back again, taking 12 minutes altogether. How long would it have taken him in still water?

At Holborn underground station there is a very long escalator. Two people are in a hurry and so climb the escalator as it is moving upwards, thus adding their speed to that of the moving steps. ... How many steps are there on the escalator?

The person was moving quickest at the point where the graph's slope or gradient, either up or down, is steepest. Between 9 seconds and 12 seconds they walked 4 metres, so a reasonable estimate for the speed can be found by dividing the distance by the time: 4 metres in 3 seconds gives an average of around 1.3 m/s. Average speeds for other parts of the motion can be estimated in a similar way to give approximate speeds of 0.6 m/s, 0.5 m/s, then the 1.3 m/s (fastest) and finally 0.25 m/s. If the person switches direction, say they were going forwards and then go backwards, or the other way round, then there was an instant when they stopped. In this question that happened three times: at around 9 seconds, 12 seconds and 16 seconds, the person switched between motion forwards and motion backwards.
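The distance-time estimates in the last answer are simple enough to script. The sketch below uses invented sample points chosen to reproduce the quoted estimates (the actual graph is not available here); speeds are magnitudes of displacement over time, and a negative displacement marks one of the direction switches mentioned above.

#include <stdio.h>
#include <math.h>

/* (time, position) samples invented to mirror the worked answer;
   they are NOT read off the real graph. */
int main(void) {
    double t[] = { 0.0, 5.0, 9.0, 12.0, 16.0 };  /* seconds */
    double x[] = { 0.0, 3.0, 5.0,  1.0,  2.0 };  /* metres  */
    int n = sizeof t / sizeof t[0];

    for (int i = 1; i < n; i++) {
        double disp  = x[i] - x[i-1];
        double speed = fabs(disp) / (t[i] - t[i-1]);
        printf("t=%2.0fs to t=%2.0fs: %4.2f m/s%s\n",
               t[i-1], t[i], speed,
               disp < 0 ? "  (moving backwards)" : "");
    }
    return 0;
}

Run, this prints 0.60, 0.50, 1.33 and 0.25 m/s for the four intervals, matching the estimates in the answer, with the 9 s to 12 s leg flagged as backwards motion.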
<urn:uuid:525d1cbd-ccb0-4584-b9b7-c181d9b8e31a>
2.921875
383
Q&A Forum
Science & Tech.
80.12169
Copyright © University of Cambridge. All rights reserved.

We had just a few solutions sent in from pupils trying to give some kind of proof for this challenge. Jasmine from Salcombe Prep. School in London sent in this picture and wrote: Every time the numbers will make a rectangle.

From Michelle at the International School in the Seychelles we were sent the following: When you multiply an even and an odd together it will always be an even number, and when you times even and even together it will also be an even number, and how I did this was with cubes and I showed my teacher and he agreed with me.

Chris from St. Mary's Catholic Primary, England had a good way of showing it: An odd number multiplied by an even number will make an even number. Key: different colour = different number.

Finally from Christopher and Rei at Seoul Foreign School (SFS), South Korea (Republic of Korea) we have the following: Answer for Odd Times Even: 4*3=12; that's the same as 4+4+4, and 4 is even. Every time an even is added to an even the answer is always an even number. We hope that we convinced you that an odd number times an even number is always an even number.

Thank you for all the contributions. It obviously made you think - and thinking is good!
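For readers who want the general argument the pupils were circling, here is the standard one-line algebraic proof (bookwork, not one of the submitted solutions). Any even number can be written as 2m and any odd number as 2n + 1, so

(2m) \times (2n + 1) = 4mn + 2m = 2(2mn + m),

which is 2 times a whole number and therefore even, whatever m and n are - exactly the rectangle with an even side that the pictures show.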
<urn:uuid:11ba6a28-dfd7-4f6c-a689-4be3ca9da20f>
3.3125
286
Comment Section
Science & Tech.
59.947011
This is the rare moment a beautiful rainbow and a violent tornado came together, creating a gorgeous scene high in the sky. As the monster storm began to take control of the air over Colorado, USA, most people would have been locking their windows and doors. But as the threat declined, one fortunate group of storm chasers saw the rare sight of a rainbow and a tornado apparently just a few kilometres apart. The spectacular image was captured by dedicated Dutch storm chaser Chris Gude, 47, who couldn't believe his eyes as he watched the rare scene.

With the month of March approaching, United States storm chasers are already watching the South as a nasty weather system builds with the potential to spin off a cluster of tornadoes. But if conditions come together in the coming days, as some forecasters believe, they won't be the first. This tornado season got an early and dangerous start in late January, when two people were killed by separate twisters in Alabama. Preliminary reports recorded 95 tornadoes last month, compared with 16 in January during a particularly stormy 2011.

The season usually starts in March and then ramps up for the next few months, but forecasting tornadoes is even trickier than forecasting hurricanes. Tornadoes are too small and too short-lived for experts to make seasonal forecasts. They don't develop like winter storms and hurricanes, which are easier to project. They pop in and pop out. The thunderstorms that give them birth may last only a few hours.

Hurricanes and winter storms are lumbering monsters that spend days moving across the weather charts. When a hurricane approaches, coastlines get days of warning to leave. With a tornado, if the weather service can let people know 20 minutes in advance, it's considered a success.

'The Joplin (Missouri) tornado (that killed 158 people last May) wasn't violent until just about the time it got to the hospital,' said Harold Brooks, a research scientist at the National Oceanic and Atmospheric Administration's National Severe Storms Laboratory in Norman, Oklahoma. 'Even when you're in the field, there are still times when you're surprised by the intensity of the event and how quickly it develops.'

If a prediction for a hurricane or blizzard is off by a kilometre, it still rains or snows. But a kilometre's difference can mean no damage at all from a tornado, Brooks said: 'It's so much smaller in time and space scales of the weather, it does make it a harder problem.' It takes a speck of dust only a few seconds to fly around an entire tornado; it takes far longer to circle a hurricane. But tornadoes, though smaller, can have more powerful winds. Since 1950, there have been 58 tornadoes in the U.S. with winds in excess of 200 mph; six last year alone. Only three hurricanes have made U.S. landfall with winds of more than 155 mph.
<urn:uuid:990acde8-b0cf-405e-9caf-a0dad37e3f3c>
2.84375
649
Truncated
Science & Tech.
48.885781
Ocean Circulation and Upwelling Visualizations

Find images and animations about ocean surface currents, upwellings and circulation models. Browse the complete set of Visualization Collections. If you have a visualization that would be helpful for teaching about earth science, please contribute to our continually expanding collections.

California Upwelling: This Earth Science Picture of the Day shows a SeaWIFS color-coded image of a cold water upwelling along the California coast. The annotated image also explains the physics of upwellings and how they contribute to nutrient cycling and phytoplankton growth.

Ekman Transport and Coastal Upwelling: This SeaWiFS image shows the Agulhas and Benguela currents off of the southern tip of Africa. The upwelling of nutrient rich water is seen here in false color as orange and yellow, representing high concentrations of phytoplankton.

Observe How Upwelling Occurs: This Flash animation delineates, in cross section and plan view, the steps necessary to produce upwelling, a phenomenon important in influencing climate and oceanic productivity. Winds moving along the coast are influenced by the Coriolis effect and push surface water offshore. In response, cold, plankton rich water rises to the surface to replace displaced water. The animation can be paused and replayed to stress important points.
<urn:uuid:dc4b9edf-c893-4e03-975c-3408cc3197c0>
3.375
290
Content Listing
Science & Tech.
21.090896
Mission Type: Orbiter
Launch Vehicle: Thor-Delta E-1 (no. 50 / Thor no. 488 / DSV-3E)
Launch Site: Eastern Test Range / launch complex 17B, Cape Canaveral, USA
NASA Center: Goddard Space Flight Center
Spacecraft Mass: 104.3 kg at launch
Spacecraft Instruments: 1) magnetometers; 2) thermal ion detector; 3) ion chambers and Geiger tubes; 4) Geiger tubes and p-on-n junction; 5) micrometeoroid detector; and 6) Faraday cup

References:
Deep Space Chronicle: A Chronology of Deep Space and Planetary Probes 1958-2000, Monographs in Aerospace History No. 24, by Asif A. Siddiqi
National Space Science Data Center, http://nssdc.gsfc.nasa.gov/
Solar System Log by Andrew Wilson, published 1987 by Jane's Publishing Co. Ltd.

Explorer 35 was designed to study interplanetary space phenomena, particularly the solar wind, the interplanetary magnetic field, dust distribution near the Moon, the lunar gravitational field, the weak lunar ionosphere, and the radiation environment. The spacecraft left Earth on a direct ascent trajectory and entered lunar orbit on 21 July 1967. Initial orbital parameters were 800 x 7,692 kilometers at 147° inclination. The spacecraft, similar to Explorer 33, also in lunar orbit, found that the Moon has no magnetosphere, that solar wind particles impact directly against the surface, and that the Moon creates a cavity in the solar wind stream. After six years of successful operation, the satellite was turned off on 24 June 1973. Explorer 35 was launched by the 50th Thor-Delta booster, of which only three had failed, giving the booster a success rating of 94 percent.
<urn:uuid:f2ff6996-4e59-42fc-8a43-6f492a286d26>
3.46875
372
Knowledge Article
Science & Tech.
51.94557
The Blue Morpho Butterfly is a beautiful brown — yes, brown — butterfly. The microscopic scales on this rainforest butterfly manipulate light, reflecting back an intense blue light that makes them appear blue.

Professor Brian Cox explains how Monarch Butterflies navigate by "monitoring the position of the sun, and compensating for its location in the sky using their internal timekeeping mechanism… even when it's cloudy." This is an episode 5 preview of the BBC's Wonders of Life. Full screen this.

…we human beings, who have been trying to make things for only the blink of an evolutionary eye, have a lot to learn from the long processes of natural selection, whether it's how to make a wing more aerodynamic or a city more resilient or an electronic display more vibrant… one of the most often-cited examples is Velcro, which the Swiss engineer Georges de Mestral patented in 1955 after studying how burs stuck to his clothes… More than a decade ago, an MIT grad named Mark Miles was dabbling in the field of micro-electromechanical and materials processing. As he paged through a science magazine, he was stopped by an article on how butterflies generate color in their wings. The brilliant iridescent blue of the various Morpho species, for example, comes not from pigment, but from "structural color." Those wings harbor a nanoscale assemblage of shingled plates, whose shape and distance from one another are arranged in a precise pattern that disrupts reflective light wavelengths to produce the brilliant blue. To create that same blue out of pigment would require much more energy, energy better used for flying, feeding and reproducing. Miles wondered if this capability could be exploited in some way. Where else might you want incredibly vivid color in a thin package? From Smithsonian Mag.

The Hidden Beauty of Pollination. You've seen this video before. It was a part of Louie Schwartzberg's TED Talk in 2011, but frankly, it's so amazing that it's worth watching and posting again on its own! This video was shown at the TED conference in 2011, with scenes from "Wings of Life," a film about the threat to essential pollinators that produce over a third of the food we eat. The seductive love dance between flowers and pollinators sustains the fabric of life and is the mystical keystone event where the animal and plant worlds intersect that make the world go round. via Boing Boing.

Handling wild Silver-Spotted Skippers (Epargyreus clarus). Footage of technique and them playing games, landing on my head, etc. Another from YouTube user Precarious333's treasure trove of insect videos.
<urn:uuid:a53aec48-0262-4d48-bc1e-0e10aa33152f>
2.90625
588
Content Listing
Science & Tech.
45.651218
One of the great things about Bach’s organ music is how changes of a single note in a whole pattern can have rather dramatic effects on the sound. A unique and potentially very important similar phenomenon has been discovered recently in the area of GPCR research. The understanding of the basic process by which GPCRs transmit signals from the cell exterior to the interior has seen remarkable advances in the last three decades, but much still remains to be deciphered. Our knowledge of signaling responses until now hinged on the action of agonists and antagonists. Central to this knowledge was the concept of ‘intrinsic efficacy’; according to this concept, there was no difference between two full agonists for instance, and both of them would produce the same response irrespective of the situation. But this understanding failed to explain some observations. For instance, a full agonist would function as a partial agonist and even as an inverse agonist under different circumstances. Several such observations, especially in the context of GPCRs involved in neurotransmission, have forced a re-evaluation of the concept of intrinsic efficacy and led to an integrated formulation of a fascinating concept called ‘functional selectivity’. So what is functional selectivity? It is the phenomenon by which the same kind of ligand (agonist, antagonist etc.) can modulate different signaling pathways activated by a single GPCR, leading to different physiological responses. Functional selectivity thus opens up a whole new method of modifying GPCR signaling in complex ways. It comprises a new layer of complexity and control that biological systems enforce at the molecular level to engage in complex signaling and homeostasis. Functional selectivity can allow the ‘tuning’ of ligands on a continuum scale of properties, from agonism to inverse agonism. In addition it can tightly regulate the strength of the particular property. It is what allows GPCRs to function as rheostats rather than as binary switches and allows them to exercise a fine layer of biological control and discrimination. Functional selectivity is not just of academic interest. It can have clinical significance. Probably most tantalizingly, it may be one of the holy grails of pharmacology that allows us to separate the beneficial and harmful effects of a drug, leading to Paul Ehrlich’s ‘magic bullet’. Until now, side-effects have been predominantly thought to result from the lack of subtype-specificity of drugs. For instance, morphine’s side effects are thought to result from its activation of the μ-opioid receptor. But functional selectivity could provide a totally new avenue for explaining and possibly mitigating side-effects of drugs. For instance, consider the dopamine receptor agonist ropinirole, used in the treatment of Parkinson’s disease. There are several D-receptor agonists and just like them ropinirole interacts with several receptor subtypes. But unlike many of these, ropinirole does not demonstrate the dangerous side-effect named valvulopathy, a weakening of the heart valves that makes them stiff and inflamed. This can be a potentially life-threatening condition that seems to be facilitated by several dopamine agonists, but not ropinirole. The cause seems to be becoming clear only now; ropinirole is a functionally selective ligand that activates a certain pattern of second messenger pathways that is different from those activated by other agonists. Somehow this pattern of pathways is responsible for reduced valvulopathy. 
Let's go back to the organ/piano analogy to gauge the significance of such control. The sound produced by a piano depends on two variables: the exact identities of the keys pressed, and the intensity (how hard or softly you press them). The second variable can be as important as the first, since pressing a key particularly hard can drown out other notes and influence the very nature of the sound. The analogy to functional selectivity would be in looking at the keys themselves as different signaling pathways and the intensity of the notes as the strength of the pathways. Now, if one ligand binding to a single GPCR is able to activate a specific combination of these pathways, each with its own strength, think of the permutations and combinations you could get from a set of even a dozen pathways - an astonishing number. Thus, functional selectivity could be the key that unlocks the puzzle of how one ligand can put into motion such a complex set of signaling events and physiological responses. One ligand - one receptor - several pathways with differing strengths. An added variable is the concentration of certain second messengers in a particular environment or cell type, which could add even more combinations. This picture could go a long way toward explaining how we can get such complex signaling in the brain from just a few ligands like dopamine, serotonin and histamine. And as described above, it also provides a fascinating direction - along with control of subtype selectivity (a much more well known and accepted cause) - for developing therapies that demonstrate all the good stuff without the bad stuff.

The basic foundation of functional selectivity is just as tantalizing. Whatever the reasons for the phenomenon, the proximal cause for it has to concern the stabilization of different protein conformations by the same kind of ligands. Unravel these protein conformations and you would make significant inroads into unraveling functional selectivity. If you come to think of it, this principle is not too different from the current model of conformational selection used in explaining the action of agonists and antagonists in general, which involves the stabilization of certain conformations by specific molecules.

Nature never ceases to amaze. As we plumb its mysteries further, it reveals deeper, more subtle and finer layers of control and discrimination that allow it to generate profound complexity starting from some relatively simple events, like the binding of a disarmingly simple molecule like adrenaline to a protein. And combined with the action of several proteins, the concerto turns into a symphony. We have been privileged to be in the audience.

Mailman, R., & Murthy, V. (2010). Ligand functional selectivity advances our understanding of drug mechanisms and drug discovery. Neuropsychopharmacology, 35(1), 345-346. DOI: 10.1038/npp.2009.117
Kelly, E., Bailey, C., & Henderson, G. (2009). Agonist-selective mechanisms of GPCR desensitization. British Journal of Pharmacology, 153(S1). DOI: 10.1038/sj.bjp.0707604
<urn:uuid:87023f86-2836-41b9-875f-b7e1a49bbf26>
3.046875
1,347
Nonfiction Writing
Science & Tech.
30.834403
Task Summary Reports This project began by digitizing all hand-written meteorological observing records from the pre-1948 era yielding a database totaling 370 stations with some records dating back to the 1850's. This database was merged with the post-1948 observations and stored using the Summary of the Day (SOD) format as defined by the National Climate Data Center (NCDC). A summary of the task and listing of available stations in the database can be viewed here. For Task 2 of this project, the snowfall, precipitation, temperature and wind records for Minnesota were analyzed generating the following statistics: snow accumulation season, mean seasonal snowfall, percentile rankings and probability of exceedence, snow water equivalence and finally wind frequency distributions. This report summarizes the techniques used for the analysis and provides a link to each product. Following the guidelines of Dr. Ronald Tabler (1994) and utilizing the climatological products developed in task 2, snow transport models were run for Minnesota. Also a field study was conducted to investigate some of the agricultural implications of living snow fences. Visit this link to learn more about the impact of living snow fences. Included in this report is a brief summary of the Web Site hosting the climatological products and instructions on how to retrieve climate data from the Minnesota State Climatology Office Web Site.
<urn:uuid:92d3fd06-2f59-44da-97a5-7b6e13795eb9>
2.6875
292
Knowledge Article
Science & Tech.
23.866865
tmpfs, formerly known as shmfs, is a filesystem keeping all files in virtual memory. Everything in tmpfs is temporary in the sense that no files will be created on any device. If you unmount a tmpfs instance, everything stored therein is lost.

tmpfs puts everything into the kernel internal caches and grows and shrinks to accommodate the files it contains and is able to swap unneeded pages out to swap space. It has maximum size limits which can be adjusted on the fly via 'mount -o remount ...'. If you compare it to ramfs (which was the template used to create tmpfs) you gain swapping and limit checking. Another similar thing is the RAM disk (/dev/ram*), which simulates a fixed size hard disk in physical RAM, where you have to create an ordinary filesystem on top. Ramdisks cannot swap and you do not have the possibility to resize them.

tmpfs has a couple of mount options:

size: The limit of allocated bytes for this tmpfs instance. The default is half of your physical RAM without swap. If you oversize your tmpfs instances the machine will deadlock since the OOM handler will not be able to free that memory.
nr_blocks: The same as size, but in blocks of PAGECACHE_SIZE.
nr_inodes: The maximum number of inodes for this instance. The default is half of the number of your physical RAM pages.

These parameters accept a suffix k, m or g for kilo, mega and giga and can be changed on remount. To specify the initial root directory you can use the following mount options:

mode: The permissions as an octal number
uid: The user id
gid: The group id

These options do not have any effect on remount. You can change these on a mounted instance with chmod, chown and chgrp.

So the following mount command will give you a tmpfs instance on /mytmpfs which can allocate 12MB of RAM/SWAP and is only accessible by root:

mount -t tmpfs -o size=12M,mode=700 tmpfs /mytmpfs

In order to use tmpfs, the CONFIG_TMPFS option has to be enabled in your kernel configuration. It can be found in the file systems configuration group. You can simply check if a running kernel supports tmpfs by searching the contents of /proc/filesystems:

bash# grep tmpfs /proc/filesystems

In embedded systems tmpfs is very well suited to provide read and write space (e.g. /tmp) for a read-only root file system such as described in section 9.1.4, Compressed ROM Filesystem. One way to achieve this is to use symbolic links. The following code could be part of the startup file /etc/rc.sh of the read-only root:

# Won't work on read-only root: mkdir /tmpfs
mount -t tmpfs tmpfs /tmpfs
mkdir /tmpfs/tmp /tmpfs/var
# Won't work on read-only root: ln -sf /tmpfs/tmp /tmp ; ln -sf /tmpfs/var /var

The commented out sections will of course fail on a read-only root filesystem, so you have to create the mount-point /tmpfs and the symbolic links /tmp and /var in your root filesystem beforehand in order to successfully use this setup.
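A concrete example of the on-the-fly resizing mentioned above, using the instance created in the text (the new size of 32M is an illustrative value):

bash# mount -o remount,size=32M /mytmpfs

This grows or shrinks the live instance without unmounting it; files already stored in the instance are preserved as long as they still fit within the new limit.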
<urn:uuid:85c2105f-16d1-448f-97b9-a0e8fb057083>
2.921875
702
Documentation
Software Dev.
63.256209
The SRES emissions scenarios also have different emissions for other GHGs and chemically active species such as CO, NOx, and volatile organic compounds. The uncertainties that surround the emissions sources of these gases, and the more complex set of driving forces behind them, are considerable and unresolved. Hence, model projections of these gases are particularly uncertain and the scenarios presented here are no exception. Improved inventories and studies linking driving forces to changing emissions in order to improve the representation of these gases in global and regional emission models remain an important future research task. The emissions of other gases follow dynamic patterns much like those shown in Figure TS-7 for carbon dioxide emissions. Further details about GHG emissions are given in Chapter 5.

Some of the six SRES models do not provide a comprehensive description of NOx emissions or include only specific sectors (e.g., energy-related sources) and have adopted other source categories from corresponding model runs derived from other models. Even with a simplified model representation, future NOx emission levels are mainly determined by two sets of variables: levels of fossil energy use (see Chapter 4), and the level and timing of emission controls, inspired by local air quality concerns. As a result the spread of NOx emissions is largest within the A1 scenario family (28 to 151 MtN/yr by 2100), almost as large as the range across all 40 SRES scenarios (see Table TS-4). Only in the highest emission scenarios (the fossil fuel intensive scenarios within the A1 scenario family and the high population, coal intensive A2 scenario family) do emissions rise continuously throughout the 21st century. In the A1B ("balanced") scenario group and in the B2 scenario family, NOx emission levels rise less. NOx emissions tend to increase up to 2050 and stabilize thereafter, the result of a gradual substitution of fossil fuels by alternatives as well as of the increasing diffusion of NOx control technologies. Low emission futures are described by various B1 family scenarios, as well as in the A1T scenario group, that describe futures in which NOx emissions are controlled because of either local air quality concerns or rapid technological change away from conventional fossil technologies. Overall, the SRES scenarios describe a similar upper range of NOx emissions as the previous IS92 scenarios (151 MtN versus 134 MtN, respectively, by 2100), but extend the IS92 uncertainty range toward lower emission levels (16 versus 54 MtN by 2100 in the SRES and IS92 scenarios, respectively).

NMVOCs arise from fossil fuel combustion (as with NOx, wide ranges of emission factors are typical for internal combustion engines), and also from industrial processes, fuel storage (fugitive emissions), use of solvents (e.g., in paint and cleaners), and a variety of other activities. In this report NMVOCs are discussed as one group. As for NOx emissions, not all models include the NMVOCs emissions category or all of its sources. A relatively robust trend across all 40 scenarios (see Chapter 5) is a gradual increase in NMVOC emissions up to about 2050, with a range between 190 and 260 Mt. Beyond 2050, uncertainties increase with respect to both emission levels and trends. By 2100, the range is between 58 and 552 Mt, which extends the IS92 scenario range of 136 to 403 Mt by 2100 toward both higher and lower emissions (see Table TS-4).
As for NOx emissions, the upper bounds of NMVOC emissions are formed by the fossil fuel intensive scenarios within the A1 scenario family, and the lower bounds by the scenarios within the B1 scenario family. Characteristic ranges are between 60 and 90 Mt NMVOC by 2100 in the low emissions cluster and between 370 and 550 Mt NMVOC in the high emissions cluster. All other scenario families and individual scenarios fall between these two emissions clusters; the B2 marker scenario (B2-MESSAGE) closely tracks the median of global NMVOC emissions from all the SRES scenarios (see Chapter 5).

The same caveats as stated above for NOx and NMVOC emissions also apply to CO emissions - the number of models that represent all the emission source categories is limited and modeling and data uncertainties, such as emission factors, are considerable. As a result, CO emission estimates across scenarios are highly model specific and future emission levels overlap considerably between the four SRES scenario families (see Table TS-4). Generally, emissions are highest in the high growth fossil fuel intensive scenarios within the A1 scenario family. Lowest emission levels are generally associated with the B1 and B2 scenario families. By 2100, emissions range between 363 and 3766 Mt CO, a considerably larger uncertainty range, particularly toward higher emissions, than in IS92, for which the 2100 emission range was between 450 and 929 Mt CO (see Table TS-4).

Table TS-4 (see later) summarizes the emissions of GHGs, sulfur dioxide and other radiatively active species by 2100 for the four markers and the ranges for the other 36 scenarios. Combined with Tables TS-2 and TS-3, the tables provide a concise summary of the new SRES scenarios. Data are given for both the harmonized and all scenarios.
<urn:uuid:d0231547-3bde-4a43-be57-f4a6d842d65a>
3.171875
1,067
Academic Writing
Science & Tech.
35.178948
<!DOCTYPE> is a special tag which is declared at the very beginning of a web page, before the <html> tag. It sends an instruction to the web browser about the version of HTML. <!DOCTYPE> is not an HTML tag, and most modern browsers support this declaration. As HTML5 is not based on SGML, no DTD reference is required in the <!DOCTYPE> declaration. This tag is also very useful for SEO. A typical HTML5 doctype declaration is shown below:

<!DOCTYPE html>

For example, below we have given a simple HTML5 code snippet for creating a simple web page with the doctype declaration:

<!DOCTYPE html>
<html>
<head>
<title>HTML5 page title</title>
</head>
<body>
HTML5 page content
</body>
</html>
<urn:uuid:1a653711-2335-4566-b79d-7d42c9a3b973>
3.4375
181
Documentation
Software Dev.
64.737134
IF YOU bury it deep enough, high-level radioactive waste will seal itself into the rock and can be safely forgotten. So claims one geochemist, who believes he has the answer to a growing global problem. The world's nuclear plants have accumulated vast stocks of highly radioactive waste. By 2010, Britain alone could have 2000 cubic metres of the waste emitting 100 million terabecquerels of radioactivity. Worldwide, high-level waste is currently stored above ground, and no government has a clear policy on its eventual disposal. While most experts believe that burying the waste is the safest bet in the long term, the problem is finding sites that everyone can agree are geologically stable. Decaying radioactive isotopes release heat. As a result, high-level waste must be constantly cooled otherwise it becomes dangerously hot. Many experts want to store waste above ground until it has decayed and is cool enough to be stored ...
<urn:uuid:4655fd01-98c7-4a71-8e8a-e5ea6c12fd52>
3.78125
214
Truncated
Science & Tech.
43.984098
How much do scientists know about our solar system? Out of 8 planets, is ours really the only one to sustain life? How can meteorites tell us about other planets, and even how the solar system first began? Explore our universe with the help of the Museum, and discover the role the Museum plays in space exploration and research. As far as we know, there is only life on Earth. Will we ever discover otherwise? How much do we really know about our universe? Whether or not our solar system is the only planetary system in the universe has intrigued scientists and philosophers for hundreds of years. So is there any evidence for extrasolar planets? Meteorites vary in size from a fraction of a millimetre to bigger than a football pitch and they fall to Earth all the time. Explore the fascinating facts about meteorites including how to find, identify and study them. What and where is the asteroid belt? And what makes an asteroid shoot away from the belt and crash to Earth? Comets, with their long tails, appear like ghostly apparitions that glide across the night sky. Discover more about these objects and what causes meteor showers on Earth. Meteors are often confused with meteorites. Do you know the difference? And what causes spectacular meteor storms, which can be seen at certain times of the year? Planets and their moons have collided with asteroids and comets frequently throughout our solar system's 4.5-billion-year history. How often is Earth hit and where is the biggest impact crater?
<urn:uuid:68ab5b5b-4034-4464-ab09-aca6a139dc32>
3.75
315
Content Listing
Science & Tech.
57.109655
Future Fuels: Sparks? Soy? By KRISTEN STERNBERG | NIE Educational Consultant The Earth's population continues to grow, and more people mean more development, more food production, more industry and more travel, for a start. All these activities-and many others-require fuel. What if the world ran out of fuel? Fuel isn't only the stuff we use to heat our homes or drive our cars. It powers trains, planes, ships and rockets. It runs machinery on farms and in factories, and produces the electricity we take for granted. For heat, transportation and electricity, people most often use "fossil fuels" such as oil, coal and natural gas. They are called fossil fuels because they were formed during the time of the dinosaurs, hundreds of millions of years ago. Fossil fuels are a growing concern for many reasons. For example, when burned they release toxins that harm our environment. Also, they are a non-renewable resource. Non-renewable means there is a limited supply-new fossil fuels can't be produced as quickly as they're being used up. It takes millions of years for them to form under the earth's crust. Throughout history, many materials have been used as fuel. Some sources were exploited and are no longer available (for example, whales, which were once hunted primarily for their blubber to be used in lamps, became almost extinct due to overhunting). Wood, used for both building and fuel, is also in shorter supply today than it once was. Nomadic (wandering) tribes, traveling in open areas without many trees, fueled their cooking and heating fires with "chips," that is, animal dung. Droppings from animals such as buffalo, camels, reindeer and llamas are sometimes still used today in some areas of the world. To offset the threat of using up the Earth's fossil fuels, scientists are researching other natural products as alternatives. While there are still major pockets of fossil fuels to be discovered, drilling to find them is expensive. In addition, drilling and mining can hurt the environment and disrupt wildlife. There's also some danger when transporting fuel from one place to another-pollution from oil spills is one example of the risks involved. Recent experiments with producing fuel from vegetable material look promising. For instance, fuel from soybeans and vegetable by-products (such as cooking oil) have been used in place of petroleum products (gas, oil, diesel fuel, etc.) in cars and to heat buildings. Check out this News-Journal article about a soy-burning Harley-Davidson. Developers of vegetable (called "biodiesel") fuels proudly claim that using biodiesel fuel can not only reduce our dependence on fossil fuels but also help keep energy costs down and cause less pollution to the environment. Scientists are working to find other alternatives for petroleum products used for transportation. Ethanol is a fuel made from grains such as corn or certain types of grasses. Cars have been developed that can run on ethanol instead of gas. There are also cars that run on electricity. Advantages of electric cars are that they're cheaper to run, quieter and better for the environment than traditional cars, but so far they're expensive to produce and they can't go long distances because they need their batteries charged very often. You will probably hear more about biodiesel and electric cars in the near future, as developments continue to make them less expensive and more available. Electricity, though, is routinely produced using coal and other petroleum products. 
Increasing our use of electricity means we will still need fossil fuels to produce it. Fortunately, experts around the world are studying alternate ways to produce electricity. Many are looking to resources like the wind, the water and the sun. In other words, they're searching for viable ways to harness these important natural resources for power because they have the potential to provide renewable, relatively inexpensive, non-polluting fuel. Windmills have been used for thousands of years. In the Netherlands, energy harnessed by the wind is used to grind wheat into flour, for example. Water wheels are another ancient device for harnessing energy. (You can see examples of water wheels all around Central Florida, at some of our State Parks and Historic Places.) Some modern waterpower is generated by hydro-electric dams (hydro is a Greek word for "water"), and it's now common to see solar panels converting the sun's energy to heat homes and water, run electrical appliances and power batteries. Steam is another important resource, but to generate steam you have to have a way to heat the water-unless we tap into the Earth's resources and figure out a way to harness heat from volcanoes, etc. Did you know people in industrialized countries routinely use up to 20 times more fuel every year than people in less-developed nations? Even if the perfect alternative fuel became available tomorrow, we would still need to conserve our resources. Many of our foods and everyday household objects are also made from petroleum by-products, and some companies have made major changes in their efforts to use fewer fossil fuels. For example, did you know that The Daytona Beach News-Journal is printed with ink made from soybeans rather than petroleum products? What can you do to reduce consumption of fossil fuels? Think about it as you check out the newspaper activities and web links provided below. Try these interesting activities using The Daytona Beach News-Journal 1. Use The News-Journal's Classified section to find three automobiles for sale. (Choose ads that provide information on how many miles per gallon of fuel the car offers.) Next, find a newspaper dateline of a place in the United States you would like to visit. Use a map to determine how far the location you chose is from your hometown. Using the current price of a gallon of gasoline, compute the cost of taking such a trip. Do this for each of the three cars you chose. Finally, determine which car would provide the most economical trip. Draw a picture to show your results and share your findings with friends and family. (Sunshine State Standards MA.A.1.2.1, MA.A.1.2.2, MA.A.1.2.3, MA.A.3.2.2, MA.A.3.2.3, MA.B.1.2.1, MA.B.1.2.2, MA.B.2.2.1, MA.B.3.2.1, MA.B.4.2.1, MA.D.2.2.2 SC.B.1.2.2, SS.B.1.2.1) 2. How are scientists thinking ahead to the future? Science and technology continue to play a part in preserving our environment. Skim The News-Journal for display ads, pictures and articles that show examples of inventions, products and discoveries that are helping to clean the air and water and prevent the destruction of the Earth. Arrange your findings in a visual display entitled Science and Technology Help Preserve the Earth. Share your display with friends or classmates, or post it in a public place for others to learn from. (Sunshine State Standards SC.B.2.2.2, SC.B.2.2.3, SC.D.2.2.1, SC.H.1.2.3) 3. How does an increase in oil prices affect you? Find three stories in The News-Journal about world events that might impact the price of oil. 
Then, brainstorm ways your daily life might be affected. Continue checking The News-Journal to see how accurate you were. From time to time, share your findings to educate others around you. (Sunshine State Standards LA.A.1.2.4, LA.A.2.2.5, LA.A.2.2.8, MA.A.4.2.1, MA.D.1.2.2, MA.E.2.2.2, MA.E.3.2.2, SC.B.2.2.2, SC.B.2.2.3, SC.G.1.2.1, SS.A.1.2.1, SS.B.1.2.4, SS.B.2.2.3, SS.B.2.2.4)

4. People require energy not only for cars, airplanes, trucks and other kinds of transportation, but also for heating and lighting homes. How can you make your home more energy efficient? Use The News-Journal to clip articles and ads about products and methods that are available to help you use energy more efficiently. Draw a picture, or make a model, of your home showing how you might use what you have learned. Share with your family. (Sunshine State Standards SC.B.1.2.2, SC.B.1.2.3, SC.B.1.2.4, SC.B.1.2.5, SC.B.1.2.6, SC.B.2.2.2, SC.B.2.2.3, SC.C.2.2.1, SC.D.2.2.2, SC.18.104.22.168)

5. Skim News-Journal articles for keywords that relate to energy use, fuel, pollution, conservation, etc. (Make a list of the words as you find them. You'll need from about 10 to 20 words for this activity.) Then, create a word search puzzle using all the words from your list. Duplicate your puzzle and hand out to classmates or family members for them to solve. (Sunshine State Standards SC.B.1.2.2, SC.B.2.2.2, SC.D.2.2.1)

A copy of Florida's Sunshine State Standards can be found at intech2000.miamisci.org.

Follow these links to learn more

Check out the great graphics and get some of your questions answered about hydro (water) power at this site called How Hydropower Works. www.wvic.com/hydro-works.htm

Find some great photos and fun facts about windmills as you learn about their importance not only in history but also to the future. www.looklearnanddo.com/documents/history_windmills.html

Unlock the secrets of a diesel engine so you can better understand the role fuel plays in transportation. www.howstuffworks.com/diesel.htm

Peat is rotting vegetation formed in bogs and other wetlands. Peat is still cut, dried and used as fuel in many countries of the world. You can see lots of pictures and learn interesting facts when you take this virtual tour of a peat bog in Ireland. www.geocities.com/mptadams/Ireland_2001/Bog_Tour/Bog_Tour.html

The Big Chalk Library offers a site called the Energy Story, where you can see examples of all kinds of energy, including ocean, wind and nuclear.

Plan to spend time at this Energy Education site from the California Energy Commission. You can read a scary story, find ways to save energy, find puzzles and games, check out vehicles of the future and lots more. www.energy.ca.gov/education/

Learn about different types of energy, from wind to solar to geothermal (using heat from inside the Earth) at this kid-friendly U.S. Department of Energy site. www.eren.doe.gov/kids/

Take a journey down the "Energy Trail," where you'll find special activities for your age group. www.dti.gov.uk/renewable/ed_pack/trail.html

Here's a neat camping or picnic activity (be sure you have an adult's permission first): Make your own solar oven-and then, use it to make cookies, pizza and more. www.solarnow.org/pizzabx.htm

In addition to exploring soy as fuel (called soy biodiesel), scientists are working to develop an amazing number of other products from soybeans-birthday candles you can eat, ink and car wax are just a few. Find out more at this on-line Museum of Soy.
www.thesoydailyclub.come/members/TheMuseumofSoy/museum_of_soy%20lobby.asp The Newspaper Association of America's web site contains links to many newspapers in the U.S. and around the world. Visit the site and check some of them out, to see if they have recently published any articles about the future of fuels. To access the newspapers at the site, select a state. Click on the "Internationals" button to view choices from other countries. The Daytona Beach News-Journal NIE Program, published January 7, 2002
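Here is the worked example of the trip-cost arithmetic from newspaper activity 1 above, sketched in Python. All of the figures (gas price, mileage ratings, trip distance) are made up for illustration; substitute the numbers you find in the newspaper.

    # Round-trip cost = (distance / miles-per-gallon) * price per gallon
    gas_price = 1.40        # hypothetical dollars per gallon
    distance = 2 * 850      # hypothetical 850-mile trip, there and back
    cars = {"Car A": 18, "Car B": 25, "Car C": 31}   # hypothetical mpg ratings

    for name, mpg in cars.items():
        gallons = distance / mpg
        print(name, round(gallons * gas_price, 2))   # Car C is cheapest

The car with the highest miles-per-gallon rating always gives the most economical trip, since the cost falls as the mpg rises.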
<urn:uuid:b7fd7531-868a-4bbe-8239-f96fb4ea5bed>
3.75
2,699
Knowledge Article
Science & Tech.
74.301413
Herd in a Hot Spot

Alaska's Porcupine caribou face an uncertain future as global warming alters their fragile Arctic habitat

Lisa W. Drew

FOR THE PAST five years, no one has been able to count the members of the most well-known caribou herd on Earth. Every time scientists have flown over Alaska's Arctic National Wildlife Refuge to photograph the Porcupine herd for a census, something has gone wrong. Most years the researchers have not been able to see the herd--because of clouds, fog or thick wildfire smoke that has blown over the refuge. In 2006 the animals simply did not aggregate in large enough numbers to be counted.

This spring the researchers will try again. If they get lucky this time, a half dozen scientists will spend several days in the fall hunched over piles of 9-inch-by-9-inch images, counting as many as 30,000 caribou in a single print. Among the many people who eagerly await their results are legislators who champion opening the refuge to oil drilling, wildlife conservationists, and Gwich'in Indians who depend on caribou for their sustenance.

But even without the new count, researchers know the herd has not been faring well in recent years. Census results from 1989 to 2001 revealed that its numbers dropped steadily from 178,000 to 123,000 animals. Since then, annual studies of radio-collared cows and their calves suggest the herd size may be as low as 110,000. During the same time, the planet has been warming, and scientists suspect that resulting changes in the herd's habitat have taken a toll on the animals. "What we see is a long-term downward trend that we can't explain by other types of disturbances," says Alaska Department of Fish and Game biologist Steve Arthur.

The warming has been impressive. Global temperatures have risen by 1 degree F over the past 50 years, but in Alaska, they have gone up an average of 3.4 degrees. The biggest shift has been in the winter; in Alaska's Arctic, where the Porcupine caribou roam, winters have warmed by 5.7 degrees.

For animals that live in a very cold place, that might seem to be a good thing. After all, when temperatures drop during the endless Arctic night, and caribou must dig through snow to dine on lichens, the animals use up more energy than they take in. But when the sun returns in late winter--just as the animals have burned up almost all their fat reserves--warming has brought more thaw-and-freeze days in recent years. Snow that starts to thaw can turn to heavy sludge, or it can ice over with a crust. Such conditions can make walking difficult and make it easier for grizzlies, wolves and golden eagles to get at caribou, which often outrun their predators under other circumstances.

Ice also can seal off the caribou's winter food supply of lichen, which may itself become scarce as warming brings about long-term changes in vegetation. Scientists already have documented a northward migration of shrubs, which yield lower-quality forage, into tundra. "We may see an increase in moose habitat and a decrease in caribou habitat," says Arthur.

The combined impact on caribou is "a pressure" that is probably intensified by a host of others, says ecologist Brad Griffith of the Alaska Cooperative Fish and Wildlife Research Unit. Those pressures are taking a toll on a herd that is finely tuned to specific, unforgiving conditions. "It would only take a 4 percent reduction in the survival of adults to have caused the turnaround from an increase to a decrease in the population," he says.

The warming appears to have a sunny side.
Once the snow melts, vegetation is greening up sooner--by as many as 10 to 20 days. In the short Arctic warm season, that's a huge difference. Almost every year, the Porcupine herd migrates from Canada to the Arctic Refuge to calve on the coastal plain, where cottongrass and other plants are plentiful, and wind helps keep the Far North's legendary mosquitoes and other insects at bay. "The earlier it greens up, as evidenced by satellite imagery, the higher the calf survival in summertime," says Griffith. "There's more food for moms, and it puts them out on the coastal plain, where there's more protection from predators and insects."

But there's a catch. Because the plants' lifespans have not changed, they not only green up sooner, they turn brown earlier at the end of summer. If the caribou retreat from this browning vegetation, moving from the coastal plain into greener hills and mountains to the south, they encounter more insects and predators. The mosquitoes are almost unimaginably intense. In one week, they can draw almost half a gallon of blood from a single caribou. For bigger predators, the caribou are a feast.

Perhaps even worse, the herd as a whole likely is not as well nourished as it could be for winter. "You give them less to eat and lower-quality food, and things go bad," says Griffith. "The most important thing you can do going into the winter is to go in fat." Without fat reserves, the number of pregnancies is likely to be lower, and the death rates higher.

Elsewhere on the coastal plain, which stretches beyond the refuge across the whole of the state's northern coast, the region's other three caribou herds do not seem to be troubled by warming. From east to west, the Central Arctic herd has been growing for about two decades and now numbers about 31,000; the Teshekpuk herd in the National Petroleum Reserve-Alaska has grown from a few thousand to about 45,000; and the enormous Western Arctic herd, which has been fluctuating, is at about 490,000.

Why the difference? One reason may be simple geography. In the Arctic Refuge, the coastal plain is quite narrow, closely bordered by the foothills and mountains of the Brooks Range. Beyond the refuge, the plain broadens dramatically. "So you have a much wider amount of habitat, and the caribou have more options," says Arthur. The Porcupine herd in the refuge, he adds, "pretty much fills up the available habitat."

East of Alaska, however, the numbers of caribou in Canada's Cape Bathurst and Bluenose herds have been plummeting over the past six years. The reasons are unknown, but scientists speculate that oil and gas exploration as well as over-hunting may play roles. They've also noticed low calving rates, cows in poor condition and over-grazing of lichens on parts of the animals' range. As these caribou numbers fall, the region's Inuvialuit subsistence hunters have begun eyeing the Porcupine herd as an alternative food source. Last year, the Gwich'in imposed a voluntary six-week hunting ban to help out the herd, but its effectiveness is not known. If the other herds continue to dwindle, 2007 will bring difficult choices for people whose lifestyles depend on a shrinking supply of caribou.

Ironically, the warming that pressures the Porcupine herd in Alaska results from burning fossil fuels, which have been at the center of a 27-year-old controversy over opening parts of the coastal plain to oil drilling. Now that the herd is clearly stressed even without drilling, there's "cause for concern," as Arthur carefully puts it.
"In all likelihood," he adds, "the long-term consequences of climate change are going to be far more significant than the construction of an oil field would be." Lisa W. Drew teaches journalism at Ithaca College in New York and frequently writes about Alaska. What's in a Name? Scientific reports can be deadly dull. So it is with some surprise that this writer looks forward to results of the annual calving survey of the Porcupine herd the way one might look forward to a soap opera. For the report, biologists locate 70-plus radio-collared cows and monitor calving success by flying over the animals in small airplanes. Last year's results led biologist Steve Arthur to conclude that the herd may now number as few as 110,000 caribou. But it's when he notes that "Arnaq, Claudia, Cocoa, Helen and Tundra had calves that survived through late June," and that "Bertha, Catherine, Daphey and Donner were judged to be barren," that his report comes alive for nonscientists. Alas, we may never learn the fate of Blixen, whose transmitter was not working. --Lisa W. Drew NWF Priorities: The Arctic Refuge and Global Warming For more than two decades, NWF has been fighting attempts to open up the coastal plain of Alaska's Arctic National Wildlife Refuge to oil and gas development, working both through Congress and with grassroots organizations (see www.nwf.org/arcticrefuge). NWF also is combating the effects of global warming on the refuge and in other wildlife habitats throughout the country. Among other activities, it is backing congressional legislation to reduce greenhouse gases, publishing reports on warming's impact on wildlife and collaborating with its state affiliates on local projects (see www.nwf.org/globalwarming). Thousands of Scientists Launch Polar Warming Studies More than 50,000 researchers from various scientific fields have banded together to launch a broad series of studies on the effects of global warming on polar regions and how those effects will impact human society and natural systems. The scientists announced the International Polar Year in Paris, France, in early March, calling it the biggest polar research project in half a century. The announcement comes on the heels of a report in February by the Intergovernmental Panel on Climate Change which said that global warming is unquestionably real, very likely caused by human activities and will take centuries to abate. The polar regions are hit especially hard by global warming. The rate of warming in the Arctic is occurring at nearly twice the rate of the rest of the planet. Average winter temperatures in Alaska and western Canada have risen by as much as 7 degrees F during the past 60 years. Average temperatures in the Antarctic have increased by as much as 4.5 degrees F since the 1940s, among the fastest rates of change in the world. Some climate scientists predict that the Arctic will be ice free during summer within the next century, a change that will have devastating effects on Arctic wildlife and people. The scientists participating in the International Polar Year will be trying to determine just what those effects will be at both poles. 
Among other things, they will try to quantify the amount of freshwater entering the sea from melting Antarctic ice sheets; study creatures living on the floor of the Antarctic's Southern Ocean, a region long obscured by thick sheets of ice that are now vanishing; install an Arctic Ocean monitoring system that will provide early-warning data on global warming; study Antarctic lakes and mountains; and research the cultures of the 4 million people living in the Arctic in an attempt to understand how global warming will impact them.

The project is sponsored by the United Nations World Meteorological Organization and the International Council for Science. The $1.5 billion likely to be spent on the project comes mostly from existing research budgets. The aggressive thrust of the project fits a comment made by Prince Albert II of Monaco at the launch. Global warming, he said, is "the most important challenge we face in this century. The hour is no longer for skepticism. It is time to act, and act urgently."--Roger Di Silvestro

Canadian Tundra Shrinking

While scientists have long predicted that the area of the world's tundra will decrease as global climate warms, results of a new study suggest that the habitat--home to caribou and other wildlife--may be disappearing even more rapidly than expected. Using tree ring data to date the establishment and death of spruce trees in southwestern Yukon, University of Alberta biologist Ryan Danby and his colleagues reconstructed changes in the location and density of treeline forests over the past three centuries. At all of their study sites, the scientists discovered rapid changes in vegetation during the early to mid-twentieth century. On some warm, south-facing slopes, tree line rose as much as 278 feet in elevation, overtaking what historically had been frozen tundra. The researchers published their results last month in the Journal of Ecology. "The conventional thinking on tree line dynamics has been that advances are very slow because conditions are so harsh at these high latitudes and altitudes," says Danby. "But our data indicate that there was an upslope surge of trees in response to warmer temperatures. It's like the tree line waited until conditions were just right, and then it decided to get up and run, not just walk."--Laura Tangley
<urn:uuid:64bc465d-f05d-4bf4-837f-80c8f7764b3b>
3.578125
2,658
Content Listing
Science & Tech.
48.635958
Acid Test: The Global Challenge of Ocean Acidification (video)

Ocean acidification: Connecting science, industry, policy and public (video)

About this video

Not just another pretty demonstration. Dr Andrea Sella of University College London demonstrates how one of his favourite molecules, carbon dioxide, reacts with water. It's not a complicated experiment: when more and more CO2 dissolves, the indicator in the flask changes colour as the water becomes more acidic. Whilst this makes our agua con gas taste great, Sella goes on to explain why the process could also pose an immense threat to ocean biodiversity. As rising levels of CO2 dissolve in our seas, many species - especially those in coral ecosystems - are unable to adapt to increasingly acidic waters. You can find out more about what has become known as "the other carbon problem" in the footnotes below the video or the additional resources to the right. There's also a really useful video on universal indicator if you're wondering how and why the solution changes colour as it becomes more acidic.

- Professor Andrea Sella
- London, UK
- Filmed in: The Theatre

Transcript

That's the happy sound of one of my favourite molecules, carbon dioxide, being released from captivity inside a bottle of sparkling water. Now, the reason why sparkling water tastes so fantastic is because the carbon dioxide actually dissolves, giving a slightly acidic carbonic acid. But what does this carbon dioxide look like? Well, I have a little bit of it here. And we've cooled it down so that it's a solid. And it's a very interesting one, because it goes from the solid to the gas without passing through the liquid. So it actually sublimes.

On my left here, I have a big tube of water which has a little bit of indicator in it. Watch what happens when I drop a bit of carbon dioxide into the water. Of course, the carbon dioxide starts to sublime immediately. And it forms loads and loads of bubbles. But as those bubbles rise through the liquid, a little bit of the carbon dioxide actually dissolves and gradually the pH of the solution begins to change. And so the indicator changes colour. And you can see it's now gone from purple to blue. And slowly it's changing from blue to green. And eventually, it will become yellow. What this is telling us is that the water is becoming more acidic.

Now, you might think that this is just another pretty demonstration. It's not. It's actually really serious. As we put more and more carbon dioxide into the atmosphere, one of the things that is happening is that the CO2 is dissolving in the oceans. And as that happens, the oceans are becoming more and more acidic. And that really spells disaster for all kinds of life forms that can only live in a very narrow range of pH. And so while we have here a beautiful demonstration, in reality it's an alarm signal for what we're doing to our world.
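For reference, the chemistry driving the colour change can be written out. These are the standard equilibria for carbon dioxide dissolving in water; they are implied but not stated in the video:

$$\mathrm{CO_2(g) \rightleftharpoons CO_2(aq)}$$
$$\mathrm{CO_2(aq) + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-}$$

The extra H+ ions are what lower the pH and push the universal indicator from purple toward yellow.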
<urn:uuid:28366f1d-93b0-46c7-bb3c-10e8238d73ac>
3.328125
641
Truncated
Science & Tech.
46.146582
In this study, classical and modern signal processing methods are used to extract the dominant frequencies of Masjed Soleiman dam, the highest embankment dam in Iran. The signals were recorded in the gallery, at mid-height and at the crest of the dam during local earthquakes. Since the amplitude and frequency content of earthquake acceleration time histories vary with time, classical signal processing techniques are limited in their ability to extract the exact characteristics of the signal. Time-frequency distribution and wavelet analysis were used in this study to overcome this limitation. The proposed modal frequencies of the dam body were evaluated using both the classical and new techniques and the results compared. Differences between the two sets of methods are described and the benefits of the modern signal processing methods are discussed. It is shown that, in non-stationary signals such as earthquake records, higher frequencies are extracted by modern methods that cannot be obtained using classical methods. Besides, the spectral variations of the scalograms clearly indicate that lower frequency contents become more dominant as the excitation amplitude decreases. The lower mode shapes of the dam body are excited during the weak part of an earthquake, whereas during the stronger part, all the high and low modes are excited.

M. Davoodi, M.A. Sakhi and M.K. Jafari, 2009. Comparing Classical and Modern Signal Processing Techniques in Evaluating Modal Frequencies of Masjed Soleiman Embankment Dam during Earthquakes. Asian Journal of Applied Sciences, 2: 36-49.
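A minimal sketch of the kind of scalogram-based analysis the abstract describes, using Python with the PyWavelets library. The synthetic two-tone signal, sampling rate, and scale range are all invented for illustration; the paper's actual records and processing chain are not reproduced here.

    import numpy as np
    import pywt

    fs = 200.0                      # hypothetical sampling rate, Hz
    t = np.arange(0, 20, 1 / fs)

    # Synthetic non-stationary "record": a low mode present throughout,
    # plus a higher mode excited only during the strong middle portion.
    x = np.sin(2 * np.pi * 1.8 * t)
    x[1000:3000] += 0.8 * np.sin(2 * np.pi * 6.5 * t[1000:3000])

    # Continuous wavelet transform with a Morlet wavelet -> scalogram.
    coef, freqs = pywt.cwt(x, np.arange(1, 128), "morl", sampling_period=1 / fs)

    # Time-averaged scalogram energy; its peaks mark dominant frequencies.
    energy = np.abs(coef).mean(axis=1)
    print("dominant frequency ~ %.2f Hz" % freqs[np.argmax(energy)])

Restricting the averaging window to the weak or strong part of the record is one simple way to see the shift in dominant frequency content that the abstract reports.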
<urn:uuid:ecb05483-be37-4362-9c11-3bf7a79b1877>
2.875
327
Academic Writing
Science & Tech.
31.765588
How do human CO2 emissions compare to natural CO2 emissions?

What the science says: The natural cycle adds and removes CO2 to keep a balance; humans add extra CO2 without removing any.

Before the industrial revolution, the CO2 content in the air remained quite steady for thousands of years. Natural CO2 is not static, however. It is generated by natural processes, and absorbed by others. As you can see in Figure 1, natural land and ocean carbon remains roughly in balance and has done so for a long time - and we know this because we can measure historic levels of CO2 in the atmosphere both directly (in ice cores) and indirectly (through proxies).

Figure 1: Global carbon cycle. Numbers represent flux of carbon dioxide in gigatons (Source: Figure 7.3, IPCC AR4).

But consider what happens when more CO2 is released from outside of the natural carbon cycle - by burning fossil fuels. Although our output of 29 gigatons of CO2 is tiny compared to the 750 gigatons moving through the carbon cycle each year, it adds up because the land and ocean cannot absorb all of the extra CO2. About 40% of this additional CO2 is absorbed. The rest remains in the atmosphere, and as a consequence, atmospheric CO2 is at its highest level in 15 to 20 million years (Tripati 2009). (A natural change of 100ppm normally takes 5,000 to 20,000 years. The recent increase of 100ppm has taken just 120 years).

Human CO2 emissions upset the natural balance of the carbon cycle. Man-made CO2 in the atmosphere has increased by a third since the pre-industrial era, creating an artificial forcing of global temperatures which is warming the planet. While fossil-fuel derived CO2 is a very small component of the global carbon cycle, the extra CO2 is cumulative because the natural carbon exchange cannot absorb all the additional CO2. The level of atmospheric CO2 is building up, the additional CO2 is being produced by burning fossil fuels, and that build-up is accelerating.

Last updated on 29 August 2010 by gpwayne.
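A back-of-the-envelope check of the numbers above, sketched in Python. The 29-gigaton output and the roughly 40% uptake come from the article; the conversion factor of about 7.8 gigatons of CO2 per ppm of atmospheric concentration is an outside rule of thumb, not from the article.

    emissions = 29.0          # Gt CO2 per year, from the article
    absorbed_fraction = 0.40  # land + ocean uptake, from the article

    airborne = emissions * (1 - absorbed_fraction)    # ~17.4 Gt CO2 stays airborne
    gt_per_ppm = 7.8          # rough conversion factor (assumption)
    print(round(airborne / gt_per_ppm, 1), "ppm added per year")   # ~2.2 ppm/yr

That rate of roughly 2 ppm per year is what makes the recent 100 ppm rise in about a century so fast compared with the natural 5,000-to-20,000-year changes the article cites.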
<urn:uuid:f6417edc-0672-4da9-957c-128cdba9f48f>
3.796875
453
Truncated
Science & Tech.
53.695734
An experimental investigation is conducted to determine the feasibility of laboratory simulation of the interaction of the shock wave of a hypersonic body with a separate moving shock wave. The experimental technique uses a blunted cone mounted in a hypersonic wind tunnel to generate the body shock wave. Shock tubes are used to generate the moving shock wave. The hypersonic tunnel is operated at Mach 7.3. The shock tube generates moving shock wave velocities from 3600 to 13,500 feet per second. The tubes are installed at angles of 30, 60, 90, and 120 degrees with respect to the tunnel centerline. Pressure measurements and Schlieren photographs are used to define the characteristics and the shock interaction. The Schlieren photographs show that the moving shock wave will move across the hypersonic stream and intersect the body. Transient pressures associated with arrival of the moving shock wave increase with increasing shock velocity. It is concluded that the technique investigated provides a feasible method for laboratory simulation of the interactions between a moving shock wave and a hypersonic body. (Author)
<urn:uuid:a2b4dc92-09f0-44b0-9dec-d986a3a9e65d>
2.796875
306
Truncated
Science & Tech.
38.710202
In this chapter we'll discuss the technology involved in developing mobile applications with the Flash Platform. We'll explore ActionScript, the AIR runtime, and the Flex framework, as well as various best practices for optimizing applications that are to be used on smartphones and tablets.

Developing for the Flash Platform on Mobile

In this lesson, we'll discuss why you might want to develop for mobile devices using Flash Platform technologies and what their main benefits are.

There are various best practices that are good to keep in mind when you get started with mobile application development. In this section we'll look at working with different screen sizes and optimizing performance and user input for touch-based devices.

Best Practices for Mobile

In this lesson, we'll discuss different screen sizes and pixel densities on mobile devices and how to appropriately deal with them in your applications.

This chapter covers the basics of working with mobile in Flash Professional CS5.5. You'll see how to create a new mobile project, work with mobile templates, use the Projects panel, and perform intelligent scaling.

Setting Up Your Project in Flash Professional CS5.5

With Flash Professional CS5.5, you now have the ability to target iOS and Android directly. In this lesson, we'll go over how to create a new project in Flash Professional that targets Android or iOS.

All AIR projects have an application descriptor file, which provides the compiler with certain important project settings. This lesson will explain the importance of the application descriptor file and demonstrate how to access various settings from the Flash GUI.

Mobile devices come in a wide range of screen resolutions and pixel densities. In this lesson, we'll have a look at enhancements made to the Project Panel to allow for shared assets and multiple FLA files targeting specific device resolutions.

Using Flash Professional CS5.5, we can easily adapt objects on our Stage for various mobile device screen resolutions. In this lesson, we'll look at how to modify project resolution in such a way that assets already established upon the stage are able to scale appropriately.

Debugging your mobile application is an essential step in the mobile development process. In this chapter we'll look at what debugging tools are available in Flash Builder 4.5 and how you can do on-device debugging over USB or a wireless network.

Debugging Your Application

In this lesson, we'll look at features that help you debug your application on the desktop, including trace statements, the debugger, and Flash Builder Profiler.

In this chapter we'll cover setting up mobile projects with Flash Builder 4.5 using either ActionScript or Flex 4.5 on mobile. We'll discuss the various project templates available, as well as working with mobile views and data persistence in your application.

Flex Mobile Projects

In this lesson, we'll walk through the process of setting up an ActionScript-based mobile project in Flash Builder 4.5.

Flash Builder lets you set up Flex mobile projects using a number of templates. In this section we'll discuss creating applications that start with a blank, view-based, or tabbed application template. We'll build on that to discuss specific user interface elements available to each of those templates.

Views are the way in which almost all mobile applications are set up. In this chapter you'll see how to set up views in a Flex Mobile View-based application and how to navigate and pass data between them.
Working with Views

In this lesson, we'll look at working with mobile views and how to push a new view in your Flex Mobile view-based applications.

The number of mobile and touch-specific APIs now present in ActionScript 3 is truly staggering. This chapter will introduce you to gestures, touch events, and a variety of hardware APIs and sensors.

Using Mobile API Features

It is possible to enable auto-orientation in an application and respond to device orientation changes through StageOrientationEvent.ORIENTATION_CHANGE. In this lesson, you'll see how to adapt a layout in response to device orientation changes.

Most mobile devices contain an accelerometer sensor that detects movement of the device in space. In this lesson, you'll learn how to adjust the X and Y positions of a display object based on accelerometer sensor data.

The geolocation sensors on a mobile device can be used to gather data and feed that to a variety of external services. In this lesson, we will employ the Geolocation ActionScript API to retrieve dynamic location data that will be fed into the embedded Google Maps API for Flash.

Both Android and iOS have local storage set aside for storing captured photographs. In this lesson, you'll see how to use the CameraRoll API to select an image from device storage and then display the image along with basic metadata within the AIR application.

Many Android devices provide hardware that can be used as a direction control for games or a selection control for applications. This often is part of the device's physical keyboard or can take the form of something like a trackball. In this lesson, you'll learn how to access the D-pad or trackball on an Android device.

Android devices always come with a dedicated set of hardware keys built into the device. These include Back, Menu, Home, and Search. In this lesson, you'll see how to access these keys and override their default behavior.

The rotate gesture involves placing two fingers on the screen and moving one around the other in a clockwise or counterclockwise motion depending on the direction in which one wishes to rotate an object. This lesson looks at using the multitouch rotate gesture.

The Android long-press gesture is a timed touch event that is used heavily on that platform. However, this is not one of the predefined gestures available to us in ActionScript. In this lesson, you'll learn how to simulate the long-press interaction found on Android devices.

When using AIR for Android and iOS, you have a choice of using predefined gestures for your applications or employing raw touch data. In this lesson, we'll look at the data returned when using raw touch interactions in ActionScript.

It is possible to use traditional startDrag and stopDrag methods on mobile as well. In this lesson, you'll see how to use touch points to enable drag-and-drop functionality on display objects in an application.

You can also use AIR for mobile to invoke the native text messenger application on a device. In this lesson, you'll learn how to invoke the native text messaging application (assuming the device is SMS-capable) from within an AIR project.

AIR for mobile enables you to invoke Google Maps through the default web browser or Maps application. In this lesson, you'll see how to invoke the default map application and have it display specific coordinates.

Need to direct a user to email directly from within an AIR application?
This lesson will show you how to invoke the native email application from within an AIR project and even pass along some basic information.

On Android, you can link directly to the Android Market from within an AIR application. In this lesson, you'll see how to invoke the Android Market and perform a custom search according to specific parameters.

Working with data is something practically every mobile application does. In this chapter we'll discuss working with the filesystem to read, write, and delete files. We'll also cover setting up and interacting with a local SQLite database as well as interacting and authenticating with online data sources.

Working with Data

Interacting with the filesystem lets you read, write, and delete files on the mobile device you're developing for. In this section you'll see how you can take advantage of these features in your mobile applications.

Interacting with the Filesystem

In this lesson, you'll see how to read in files from the filesystem on mobile devices.

In situations where you have structured data that you want to be able to filter at runtime, local SQLite database support comes in very handy. In this section we'll look at creating a connection to a local database on the mobile device, as well as running queries and getting results back to display in an application.

Working with a Local SQLite Database

This lesson introduces the SQLite database engine and shows how to create and connect to a local database file.

Working with online data sources is very common in mobile applications. In this section you'll see how to do network detection so you can download and cache the latest information as well as work with data returned from an API service call that returns in an XML or JSON format.

Working with an Online Data Source

In this lesson, we'll look at online/offline network detection and caching an online resource on the device for use when no network connection is available.

Open authentication (or OAuth) is an increasingly popular way of authenticating the user in your application with a third-party service. It lets users give applications access to their information without having to share their login and password credentials. In this section we'll discuss how OAuth works and how you can implement it in your projects.

This lesson introduces the basics of open authentication (OAuth), discusses the authentication workflow, and explains what its benefits are.

This chapter will explore working with video, audio, and the StageWebView object on mobile devices. While there is a lot of new stuff to cover with Flash on Android and iOS, it's important to remember that this is still Flash we are working with! In this section, you'll see how to record, encode, save, load, and play back audio on a mobile device using Adobe AIR.

Working with Audio

In this lesson, you'll learn how to load, play, pause, and resume MP3 playback on a device. This technique is useful for loading full tracks into an application to enable basic playback or for providing background audio to a game.

StageWebView is a totally new object that can be used to render HTML content within an AIR application. In this section, we will look at some of the potential uses for this killer feature.

Working with HTML Using StageWebView

Using AIR for mobile, it is possible to render HTML content in a special object called StageWebView. In this lesson, you'll learn how to load a website into an application using the StageWebView class.
While the StageWebView object is not part of the traditional Flash display list, you can still access the bitmap data of rendered HTML content by using methods specific to this class. In this lesson, you'll see how to capture bitmap data from a StageWebView object by using the drawViewPortToBitmapData method available in ActionScript.

Loading advertisements within a mobile application is a great way to collect a little revenue, even if you're offering an application for free. In this lesson, you'll learn how to employ a StageWebView object to load ads into a mobile application through an ad service.

The final step in making your application publicly available for Apple iOS devices is submitting it to the App Store for distribution. In this chapter we'll discuss the steps involved in doing so, including working with provisioning profiles and exporting a signed release build.

Distributing Applications on iOS

In this lesson, you'll learn how to use the iOS Provisioning Portal to set up certificates, devices, and the App ID for your application.

In this chapter, we will examine how to prepare your AIR for Android application for distribution through the worldwide Android Market. Everything from asset preparation to signing and final publication is covered.

Distributing Applications on the Android Market

The most popular way of distributing Android applications is through the Android Market. In this lesson, you'll see how to register as an Android developer and gain access to the Android Market.

The Android Market requires certain icon files to be compiled along with an .apk to form a valid submission. This lesson demonstrates how to prepare and assign icons for applications that are to be distributed via the Android Market.

Users often appreciate being able to move an application to the device's SD card to save memory. In this lesson, you'll learn how to enable an AIR for Android application to be installed onto the device SD card.

To form a valid submission, the Android Market requires an .apk to be signed with a digital code signing certificate that has certain precise properties. In this lesson, you'll see how to do this correctly and avoid trouble with your submission.

The Android Market requires a number of textual descriptions and image assets to be entered as part of a valid submission. In this lesson, you'll see how to prepare the various assets and data that must accompany your Android application when you submit to the Android Market.

This chapter will examine some of the things that you can do to monitor your application's CPU and memory usage and make sure it behaves properly when installed on a user's device. We'll also look at using an analytics package to track user views and interactions.

Optimizing Your Applications

To be a good steward of system resources, our applications should always free up system memory when not in use. In this lesson, we'll demonstrate how to properly exit an application so as not to take system resources unnecessarily.

When an AIR application is actually run on a device, you can gather information from the device itself to inform certain application runtime decisions. In this lesson, you'll see how to detect screen DPI, resolution, and pixel aspect ratio by reading from the Capabilities class at runtime.

By using the System class in ActionScript, you can read a number of useful device resources and their consumption at runtime. In this lesson, you'll learn how to monitor CPU and memory usage in a mobile application.
The Android Market keeps its own set of statistics for submitted applications, but if you want customized data reporting, an additional analytics package may be required. In this lesson, you'll see how to use an analytics package - specifically, the Google Analytics service using the GAforFlash ActionScript library - to track certain user views and interactions from within a mobile application.
<urn:uuid:1353b35c-d5e1-43eb-935e-8c7874da07bf>
2.90625
2,854
Tutorial
Software Dev.
42.969098
The copy() function copies a file. This function returns TRUE on success and FALSE on failure.

Syntax: copy(file, to_file)

file - Required. Specifies the file to copy
to_file - Required. Specifies the file to copy to

Note: If the destination file already exists, it will be overwritten. For example, a call such as copy("source.txt", "target.txt") copies source.txt into target.txt and returns TRUE if the copy succeeded (the file names here are placeholders).
<urn:uuid:bede87cc-ceb3-4fb3-a2f9-81a88aa89393>
2.71875
87
Documentation
Software Dev.
50.255658
Hydrogen is an almost perfect energy source. It combines with oxygen to produce energy and water with no greenhouse gases. The problem is, hydrogen rarely occurs by itself in nature. There are basically two ways to produce usable hydrogen - through steam reforming or electrolysis. The chemical equation for steam reforming is CH4 + H2O -> CO + 3 H2. Practically, it means you mix a hydrocarbon such as methane with steam to produce hydrogen and carbon monoxide. Electrolysis is greener, but costs at least three times as much as steam reforming.

In 2003, the Bush administration announced amid great fanfare that the nation would collaborate with Europe to eliminate dependence upon petroleum and decrease the production of greenhouse gases. The missing piece, though, was how to generate the hydrogen. The next-generation nuclear reactor was envisioned to provide a more efficient electrolysis. Unfortunately, that has not panned out.

But 'Inexhaustible' Source of Hydrogen May Be Unlocked by Salt Water, Engineers Say tells that a combination of chemistry and biology may be the missing piece to the hydrogen economy. Reverse electrodialysis (RED) produces a voltage by forcing seawater and freshwater through a stack of membranes. Previously it required too many membranes to produce sufficient voltage for electrolysis - and thus for producing hydrogen from mixing freshwater and seawater. The Science Daily article explains that researchers at Penn State have used exoelectrogenic bacteria to increase the voltage produced by RED and thus enable a new method of producing hydrogen.
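For comparison, the two production routes the post mentions can be written side by side; this is standard textbook stoichiometry, not taken from the post itself:

$$\text{Steam reforming:}\quad \mathrm{CH_4 + H_2O \rightarrow CO + 3\,H_2}$$
$$\text{Electrolysis:}\quad \mathrm{2\,H_2O \rightarrow 2\,H_2 + O_2}$$

Steam reforming starts from a fossil hydrocarbon and yields carbon monoxide as a by-product, which is why electrolysis - splitting water directly - is the greener of the two routes despite its higher cost.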
<urn:uuid:3f36e176-8423-4929-b5f6-3c0b07f672c9>
3.96875
304
Personal Blog
Science & Tech.
23.433557
Drought Tolerance in Two Perennial Bunchgrasses Used for Restoration in the Intermountain West, USA An ideal restoration species for the semi-arid Intermountain West, USA would be one that grows rapidly when resources are abundant in the spring, yet tolerates summer’s drought. We compared two perennial C3, Triticeae Intermountain-native bunchgrasses, the widely occurring Pseudoroegneria spicata and the much less widespread Elymus wawawaiensis, commonly used as a restoration surrogate for P. spicata. Specifically, we evaluated seedlings of multiple populations of each species for biomass production, water use, and morphological and physiological traits that might relate to drought tolerance under three watering frequencies (WFs) in a greenhouse. Shoot biomass of E. wawawaiensis exceeded that of P. spicata regardless of WF. At low WF, E. wawawaiensis displayed 38% greater shoot biomass, 80% greater specific leaf area (SLA), and 32% greater precipitation use efficiency (PUE). One E. wawawaiensis population, E-46, displayed particularly high root biomass and water consumption at high WF. We suggest that such a plant material could be especially effective for restoration of Intermountain rangelands by preempting early-season weeds for spring moisture and also achieving high PUE. Our data explain how E. wawawaiensis has been so successful as a restoration surrogate for P. spicata and highlight the importance of measuring functional traits such as PUE and SLA when characterizing restoration plant materials. Mukherjee, J. R., Jones, T. A., Adler, P. B., & Monaco, T. A. (2010). Drought tolerance in two perennial bunchgrasses used for restoration in the Intermountain West, USA. Plant Ecology, 212(3), 461-470. doi:10.1007/s11258-010-9837-3
<urn:uuid:d9776f4a-9709-4be1-84c2-b236c617d731>
2.921875
421
Academic Writing
Science & Tech.
43.733283
In a counter-intuitive finding, these snakes have been found, in certain specific ecosystems, to suffer no apparent detriment despite having lost eyes to attacks from Silver Gulls (Larus novaehollandiae).

- Bonnet, X., Bradshaw, D., Shine, R., & Pearson, D. (1999). Why do snakes have eyes? The (non-)effect of blindness in island tiger snakes (Notechis scutatus). Behavioral Ecology and Sociobiology, 46(4), 267-272. doi:10.1007/s002650050619
<urn:uuid:ca6e6ac1-2eaa-4828-98cd-12781b535da9>
3.484375
127
Knowledge Article
Science & Tech.
60.531143
Russell McMahon

> Not to be contrary, Russell,

Contraryness is always welcome :-) But I can be happy with non-contraryness too.

> but the data seems to be very compelling for some and hardly impressive
> for others within the community of scientists. This is not the case for
> gravity, or electric field, etc. How can it be a sound principle if there
> is so much controversy, and there is clear definition of its nature?

Second things first.

2. The definition is far from clear. This is outlined nicely in the summary of an ebook in one of the references I gave and in various papers. People are agreed that the climate is varying. Some say we are driving it towards a warming cycle. Others say we are pushing it towards triggering an ice age. Some, wisely, say that A MAY cause B. Some say that's just the way it goes and we are not having a significant effect. The most alarming of the true scientific positions is that we MAY be pushing the climate into a toggling mode where it changes almost instantly into a completely different way of working - past history seems to indicate you MAY be able to swing mean global temperature by over 15C in around 50 years. Now THAT would be a time to live through (if you were lucky :-) ).

Note that Northern Summer 2003 was the hottest ever recorded. (I was there at the peak - it was very interesting). Note El Niño weather patterns of a decade or two back - never *known* to have been seen before. And changes since. Some say we are doing this. Others say it just happens.

1. We don't know enough, the reality is vastly complex, our computers and our knowledge cannot model the system well enough to be certain that our assumptions are good. Some people just decide that they know their assumptions are the correct ones, and away we go. They MAY be right. They may not. God knows who is right. Just as well ;-)

While gravity and electric fields are at core a pure mystery (even though we may pretend otherwise), we can model their effects very very very well indeed. In fact our models work better than we can measure reality and we have cause to believe that our models are far better than our measurements. eg Gravity seems to work by inverse law to power 2.000000000000000000000000000000..... . Our measurements run out long before the 0's do. Fundamental laws generally allow of good prediction and modelling, even if we don't understand how they work. Even Quantum mechanics, which we completely don't understand, is superb for predictions (most of the time until we find another wrinkle).

But atmosphere is not based on simple single fundamental laws but on vast interactions (obviously enough). The only computer we have that will model it accurately is an analog one and it's in constant use for other purposes and only gives results in real time. Maybe we could ask the mice to fund a second one?

Did I pass? ;-)
<urn:uuid:89442a3e-4e7f-46dc-8c52-a9dad54a06dd>
2.875
728
Comment Section
Science & Tech.
62.977959
The chain graph shown above (a path on seven vertices, $a - b - c - d - e - f - g$) has only one centre, $d$. The largest distance from $d$ to any other vertex is $3$, from $d$ to $a$ and from $d$ to $g$. The largest distance from $c$ to any other vertex is $4$, to vertex $g$, which is greater than $3$, so $c$ is not a centre. Similarly, the greatest distance from $e$ to any other vertex is $4$, to $a$, and you can check that the other vertices are even worse.

The tree above has two centres; can you find them?

HINT for the proof: Let $T$ be a tree. If $u$ and $v$ are any vertices of $T$, there is a unique path from $u$ to $v$ in $T$. (You've probably proved this already; if not, you'll want to do so now.) Call the length of this path the distance between $u$ and $v$ in $T$. Among all pairs of vertices of $T$ pick two, $u$ and $v$, with the largest possible distance between them. If that distance is even, there's a vertex smack in the middle of that path; prove that it's the unique centre of $T$. If the distance between $u$ and $v$ is odd, the path looks, for example, like this:

$$u - \cdots - c - d - \cdots - v$$

Now there is no vertex right at the centre of the path, but there are two, $c$ and $d$, that are closest to the centre; prove that those two vertices are the centres of $T$.
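If you want to experiment before proving, here is a small sketch using Python with the networkx library; the vertex labels match the chain-graph example above, and the second graph is just an even-length path chosen to show the two-centre case:

    import networkx as nx

    # The chain graph a-b-c-d-e-f-g from the example above.
    G = nx.path_graph(["a", "b", "c", "d", "e", "f", "g"])
    print(nx.center(G))        # ['d'] - the unique centre

    # A path with an even number of vertices has two centres.
    H = nx.path_graph(["u", "x", "c", "d", "y", "v"])
    print(nx.center(H))        # ['c', 'd']

Here nx.center returns the vertices whose greatest distance to any other vertex (the eccentricity) is as small as possible, which is exactly the definition being used in the proof hint.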
<urn:uuid:f5a22802-d22f-4a94-93bc-9206e6090779>
3.140625
350
Q&A Forum
Science & Tech.
79.747917
Every insect looks prettier when it lands on a tower of jewels (Echiium wildpretti). When in full bloom, the 9-to-10-foot-high plant, native to the Canary Islands, blazes with firecracker-red flowers. It's a showstopper. Syrphid flies, aka flower flies or hover flies, battle with honey bees to sip the sweet nectar. The flower flies flit in and out of the blossoms, barely visible. However, these insects suffer from an identity crisis. Their wasp-like coloring wards off predators. That same coloring confuses people, too. The average person on the street--or in a flower bed--thinks they're bees. They're not. They're flies. UC Davis-trained entomologist Robert Bugg wrote an excellent pamphlet on flower flies that's downloadable free from the UC Division of Agriculture and Natural Resources. Titled Flower Flies (Syrphidae) and Other Biological Control Agents (Publication 8285, May 2008), it will help you identify flower flies. Not bees. Not wasps. Flies.

Cosmos flowers are somewhat like Libras. They balance. In fact, the word, "cosmos," means "harmony" or "ordered universe" in Greek. Plant cosmos and you'll soon be enjoying colorful flowers that belong to the Asteraceae family, which also includes sunflowers, daisies and asters. Plant a variety of colors--white, pink, orange, yellow and scarlet--and you'll see why the Spanish missions in Mexico favored cosmos. They're beautiful and easy to grow. An added benefit: they attract syrphids, also known as flower flies and hover flies. Plant cosmos. Attract syrphids. Capture an image of a syrphid on a cosmos.

Caught on the cosmos. That's what it takes to capture images of syrphids, aka flower or hover flies. They are oh, so tiny and they move oh, so quickly. As the morning dawns, you wait, camera poised, near their preferred blossoms. You'll need a keen eye and a quick trigger finger--not to mention a good macro lens and a high shutter speed to freeze a moment in time and space. If you're stealthy and don't startle or shadow them, you can observe them nectaring just inches away from you. This is big game hunting, but with little insects. And, another frozen moment in time and space.

It's often mistaken for a honey bee. It's not a honey bee. It's a hover fly or flower fly. And this one, hovering around the plants last Saturday in the Storer Gardens at the University of California, Davis, looked like a Syrphus opinator to me. So I asked UC Davis entomologist Robert "Bob" Bugg, who specializes in flower flies (Syrphidae), what it is. "If I have to be an opinator, I'd opine that you're right," he quipped. Bugg, who received his doctorate in entomology at UC Davis, does research on the biological control of insect pests, cover crops, and restoration ecology. If you want to learn more about flower flies, read Dr. Bugg's "Flower Flies (Syrphidae) and Other Biological Control Agents for Aphids in Vegetable Crops" (Publication 8285, May 2008, University of California, Division of Agriculture and Natural Resources.)

To bee or not to bee. Not to bee. The flying insect hovering over the blossoms is no bee. It's commonly known as a hover fly, drone fly, flower fly, syrphid fly or "syrphid," says Robbin Thorp, emeritus professor of entomology at UC Davis who researches native pollinators from his headquarters in the Harry H. Laidlaw Jr. Honey Bee Research Facility on Bee Biology Road. "These are good honey-bee mimics," he said, "but note the short stubby antennae and bulging face." Also note the large eyes! (Reminiscent of the eyes of the male honey bee, the drone).
The hover fly moves like a helicopter, holding perfectly still for a moment or two, and then darting upward, downward and backward in flight. Unlike bees and wasps, syrphids have two wings, not four. Also a syrphid-notable: black and yellow stripes on their abdomen. The coloring helps fool would-be predators. In their larval stages, syrphids dine on plant-sucking pests like tasty aphids, thrips, mealybugs and scales, or munch on decaying matter in the soil or in ponds and streams. They're the good guys. And girls. These beneficial insects are like the ladybird beetles (aka ladybugs) and lacewings of the garden. In their larval stages, they prey on pests, and in their adult stages, they pollinate flowers. Prey 'n pollinate, that's what they do best.
<urn:uuid:8a37598f-7d7c-4752-8aa5-ea15ea127b61>
2.703125
1,085
Personal Blog
Science & Tech.
60.343183
...becomes a new double helix with a new complementary strand to replace the original one. Because adenine always falls in place opposite thymine and guanine opposite cytosine, the process is called a template replication—one strand serves as the mold for the other. It should be added that the steps involving the duplication of DNA do not occur spontaneously; they require catalysts in the...

polymerase chain reaction

The PCR technique is based on the natural processes a cell uses to replicate a new DNA strand. Only a few biological ingredients are needed for PCR. The integral component is the template DNA— i.e., the DNA that contains the region to be copied, such as a gene. As little as one DNA molecule can serve as a template. The only information needed for this fragment to be replicated is...

work of Mullis

...a specific sequence of DNA in quantities sufficient for study were difficult, time-consuming, and expensive. PCR uses four ingredients: the double-stranded DNA segment to be copied, called the template DNA; two oligonucleotide primers (short segments of single-stranded DNA, each of which is complementary to a short sequence on one of the strands of the template DNA); nucleotides, the...
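The base-pairing rule described above (adenine opposite thymine, guanine opposite cytosine) is mechanical enough to sketch in a few lines of Python; the sample sequence here is made up:

    # Template replication: each strand determines its complement.
    PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def complement(strand):
        return "".join(PAIR[base] for base in strand)

    template = "ATGCGTA"              # hypothetical template strand
    print(complement(template))       # TACGCAT - the new complementary strand

This is exactly the sense in which one strand "serves as the mold for the other": given either strand, the other is fully determined.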
<urn:uuid:12954f49-77f0-4043-9e43-b5cb720077a3>
4.0625
313
Truncated
Science & Tech.
49.370946
We will define a random-access machine as follows:

Data types. The only data type we will support is the natural numbers 0, 1, 2, 3, .... (However, numbers may be arbitrarily large.)

Variables. We will allow an arbitrary number of variables, each capable of holding a single natural number. All variables will be initialized to 0.

Tests. We will allow the following test:

    <variable> = 0

Statements. Our language will have the following types of statements:

    if <test> then <statement> else <statement>;
    while <test> do <statement>;
    <variable> := <variable> + 1;    (increment)
    <variable> := <variable> - 1;    (decrement)

(Note: Decrementing a variable whose value is already zero has no effect.)

In addition, we will permit statements to be executed in sequence (<statement>; <statement>; ...), and we will use parentheses to group a sequence of statements into a single statement.

This begins to look like a "real" programming language, albeit a very weak one. Here's the point: this language is equivalent in power to a Turing machine. (You can prove this by using the language to implement a Turing machine, then using a Turing machine to emulate the language.) In other words: this language is powerful enough to compute anything that can be computed in any programming language.
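As a small illustration (not from the original definition), here is a program in this language that adds y into x. It uses a flag variable f to work around the fact that the only test available is equality with zero; all three variables start at 0, and we assume y has somehow been given a value:

    while f = 0 do
        if y = 0 then f := f + 1
        else ( x := x + 1; y := y - 1 );

Each pass through the loop moves one unit from y to x. When y reaches 0, the flag f becomes 1, the loop's test fails, and the program halts with x holding the original x + y.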
<urn:uuid:e0d0c713-08bc-4dfc-a0f3-e343b884cf05>
3.9375
301
Documentation
Software Dev.
43.298351
So, without further ado, I present to you digital root extraction.

Step 1: What is a Digital Root?

A better demonstration would be to take a number (such as 179) and work through it. So, the digital root of 179 is something like this:

179 -> 1+7+9 = 17 -> 1+7 = 8

So, the digital root of 179 is 8. Another, more interesting way to look at a digital root is that it is essentially a modulo 9 operator. In other words, if you were to divide the number by 9, the integer remainder and the digital root are the same thing - with one wrinkle: for a nonzero multiple of 9 the remainder is 0, but the digital root is 9. So, digital roots can (almost) be rewritten as mod9(Number). Now that we know what digital roots are, let's go ahead and find them.
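A minimal sketch of both approaches in Python - the repeated digit-sum loop, and the mod-9 shortcut with the multiple-of-9 wrinkle handled:

    def digital_root(n):
        # Repeatedly sum decimal digits until one digit remains.
        while n >= 10:
            n = sum(int(d) for d in str(n))
        return n

    def digital_root_mod9(n):
        # The shortcut: 0 maps to 0, everything else to 1..9.
        return 0 if n == 0 else 1 + (n - 1) % 9

    print(digital_root(179), digital_root_mod9(179))   # 8 8
    print(digital_root(18), digital_root_mod9(18))     # 9 9, not 0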
<urn:uuid:a0811838-f114-49a5-9d4c-2b24d6bd5816>
2.75
160
Tutorial
Science & Tech.
75.678155
If you leave the IP address for your new site as All Unassigned, your new site will be the default web site for your computer, which is the web site the server returns when a browser tries to access an IP address not currently assigned to another site. For example, if a computer has three IP addresses--172.16.12.50, 172.16.12.51, and 172.16.12.52--and only the first address has been assigned to a site, then opening the URL http://172.16.12.51 or http://172.16.12.52 will return the default web site. It's a good idea to have a default web site configured with general contact information about your company on a server that will be hosting many sites. Note that if there is already a web site that has All Unassigned for its IP address (such as the Default Web Site created when IIS is installed), then if you assign All Unassigned to another site you won't be able to start that site.

Host headers are a feature of the HTTP/1.1 specification and allow IIS to host multiple web sites that have the same IP address and port number but different DNS identities. You can't use host headers for sites that use SSL, however, and to use host headers you must have DNS name resolution working on your network. Also, don't assign any host header names to the Default Web Site. One good side of host headers is that when you have thousands of web sites hosted on a single IIS computer, using host headers to identify them incurs a smaller performance hit than using individual IP addresses.

The one tricky thing about this code is setting up the ServerBindings array. For whatever reason, instead of making the web site IP address, port, and host header part of the parameters to the CreateNewSite method, they must be concatenated together in an array element and separated by a colon, in the form IP:Port:Hostname (for example, "172.16.12.50:80:www.rallencorp.com").

See Also: Recipe 12.4, Recipe 12.17, MS KB 304187 (IIS: Home Directory Cannot Point to Mapped Drives), and MS KB 816568 (HOW TO: Manage Web Sites and Web Virtual Directories by Using Command-Line Scripts in IIS 6.0)

Mailbox-Enabling a User

You want to create a mailbox for a user. This is also known as mailbox-enabling a user.

Using a Graphical User Interface

1. Open the ADUC snap-in. TIP: This needs to be run on a workstation or server that has the Exchange Management Tools loaded (see Recipe 17.6).
2. If you need to change domains, right-click on Active Directory Users and Computers in the left pane, select Connect to Domain, enter the domain name, and click OK.
3. In the left pane, browse to the parent container of the user, right-click on the user, and select Exchange Tasks.
4. On the Welcome screen, click Next.
5. Select Create Mailbox and click Next.
6. Verify the mail alias is what you want, select the server you want the mailbox on, select the store where you want the mailbox, and click Next.
7. On the Completion screen, click Finish.

Using a Command-Line Interface

> exchmbx -b "<UserDN>" -cr "<server>:<storage group>:<mail store>"

Or alternatively, run the following command:

> exchmbx -b <UserDN> -cr "<Home MDB URL>"

To mailbox-enable user joe with a mailbox on Exchange Server SRV1, Storage group SG1, and mailbox store DB1, execute the following command:

> exchmbx -b "cn=joe,cn=users,dc=rallencorp,dc=com" -cr "srv1:sg1:db1"

TIP: I highly recommend that you keep your storage group and mailbox store names short, simple, and "space" free. Spaces are troublesome to deal with at the command prompt and have caused many administrators unneeded grief. If you do not use spaces and other special characters, you can dispense with the quotes in all of the command-line examples.
Replace <UserDN> with the user's distinguished name, <server> with the Exchange server name, <storage group> with the storage group, <mail store> with the mail store, and <Home MDB URL> with the full homeMDB URL for the desired mailbox store.

Using VBScript

' This code creates a mailbox for a user.
' ------ SCRIPT CONFIGURATION ------
strUserDN = "<UserDN>" ' e.g., cn=jsmith,cn=Users,dc=rallencorp,dc=com
strHomeMDB = "<Home MDB DN>"
' e.g., CN=Mailbox Store (SERVER),CN=First Storage Group,CN=InformationStore,
' CN=SERVER,CN=Servers,CN=First Administrative Group,CN=Administrative Groups,
' CN=RALLENCORPMAIL,CN=Microsoft Exchange,CN=Services,
' CN=Configuration,DC=rallencorp,DC=com
' ------ END CONFIGURATION ---------
set objUser = GetObject("LDAP://" & strUserDN)
objUser.CreateMailBox strHomeMDB
objUser.SetInfo
Wscript.Echo "Successfully mailbox-enabled user."
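To confirm the change took effect, you can read the homeMDB attribute back off the user object with ADSI. Here is a minimal sketch (my addition, not part of the recipe), reusing the strUserDN value from the configuration section above:

' Read back the homeMDB attribute to verify the mailbox store
' was stamped on the user object.
set objUser = GetObject("LDAP://" & strUserDN)
Wscript.Echo "homeMDB: " & objUser.Get("homeMDB")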
<urn:uuid:faa9133b-693b-47da-87cb-ef5465b58e48>
2.703125
1,152
Documentation
Software Dev.
62.745567
This section explains some of the rationale and technical details behind the overall build method. It is not essential to immediately understand everything in this section. Most of this information will be clearer after performing an actual build. This section can be referred to at any time during the process.

The overall goal of Chapter 5 is to produce a temporary area that contains a known-good set of tools that can be isolated from the host system. By using chroot, the commands in the remaining chapters will be contained within that environment, ensuring a clean, trouble-free build of the target LFS system. The build process has been designed to minimize the risks for new readers and to provide the most educational value at the same time.

Before continuing, be aware of the name of the working platform, often referred to as the target triplet. A simple way to determine the name of the target triplet is to run the config.guess script that comes with the source for many packages. Unpack the Binutils sources and run the script ./config.guess and note the output. For example, for a modern 32-bit Intel processor the output will likely be i686-pc-linux-gnu.

Also be aware of the name of the platform's dynamic linker, often referred to as the dynamic loader (not to be confused with the standard linker ld that is part of Binutils). The dynamic linker provided by Glibc finds and loads the shared libraries needed by a program, prepares the program to run, and then runs it. The name of the dynamic linker for a 32-bit Intel machine will be ld-linux.so.2. A sure-fire way to determine the name of the dynamic linker is to inspect a random binary from the host system by running readelf -l <name of binary> | grep interpreter and noting the output. The authoritative reference covering all platforms is in the shlib-versions file in the root of the Glibc source tree.

Some key technical points of how the Chapter 5 build method works:

Slightly adjusting the name of the working platform, by changing the "vendor" field of the target triplet by way of the LFS_TGT variable, ensures that the first build of Binutils and GCC produces a compatible cross-linker and cross-compiler. Instead of producing binaries for another architecture, the cross-linker and cross-compiler will produce binaries compatible with the current hardware.

The temporary libraries are cross-compiled. Because a cross-compiler by its nature cannot rely on anything from its host system, this method removes potential contamination of the target system by lessening the chance of headers or libraries from the host being incorporated into the new tools. Cross-compilation also allows for the possibility of building both 32-bit and 64-bit libraries on 64-bit capable hardware.

Careful manipulation of the GCC source tells the compiler which target dynamic linker will be used.

Binutils is installed first because the configure runs of both GCC and Glibc perform various feature tests on the assembler and linker to determine which software features to enable or disable. This is more important than one might first realize. An incorrectly configured GCC or Glibc can result in a subtly broken toolchain, where the impact of such breakage might not show up until near the end of the build of an entire distribution. A test suite failure will usually highlight this error before too much additional work is performed.

Binutils installs its assembler and linker in two locations, /tools/bin and /tools/$LFS_TGT/bin. The tools in one location are hard linked to the other.
An important facet of the linker is its library search order. Detailed information can be obtained from ld by passing it the --verbose flag. For example, ld --verbose | grep SEARCH will illustrate the current search paths and their order. You can see which files are actually linked by ld by compiling a dummy program and passing the --verbose switch to the linker. For example, gcc dummy.c -Wl,--verbose 2>&1 | grep succeeded will show all the files successfully opened during the linking.

The next package installed is GCC. An example of what can be seen during its run of configure is:

checking what assembler to use... /tools/i686-lfs-linux-gnu/bin/as
checking what linker to use... /tools/i686-lfs-linux-gnu/bin/ld

This is important for the reasons mentioned above. It also demonstrates that GCC's configure script does not search the PATH directories to find which tools to use. However, during the actual operation of gcc itself, the same search paths are not necessarily used. To find out which standard linker gcc will use, run gcc -print-prog-name=ld. Detailed information can be obtained from gcc by passing it the -v command line option while compiling a dummy program. For example, gcc -v dummy.c will show detailed information about the preprocessor, compilation, and assembly stages, including gcc's included search paths and their order.

Next installed are sanitized Linux API headers. These allow the standard C library (Glibc) to interface with features that the Linux kernel will provide.

The next package installed is Glibc. The most important considerations for building Glibc are the compiler, binary tools, and kernel headers. The compiler is generally not an issue since Glibc will always use the compiler relating to the --host parameter passed to its configure script, e.g. in our case, i686-lfs-linux-gnu-gcc. The binary tools and kernel headers can be a bit more complicated. Therefore, take no risks and use the available configure switches to enforce the correct selections. After the run of configure, check the contents of the config.make file in the glibc-build directory for all important details. Note the use of CC="i686-lfs-linux-gnu-gcc" to control which binary tools are used and the use of the -isystem flags to control the compiler's include search path. These items highlight an important aspect of the Glibc package—it is very self-sufficient in terms of its build machinery and generally does not rely on toolchain defaults.

During the second pass of Binutils, we are able to utilize the --with-lib-path configure switch to control ld's library search path. For the second pass of GCC, its sources also need to be modified to tell GCC to use the new dynamic linker. Failure to do so will result in the GCC programs themselves having the name of the dynamic linker from the host system's /lib directory embedded into them, which would defeat the goal of getting away from the host. From this point onwards, the core toolchain is self-contained and self-hosted. The remainder of the Chapter 5 packages all build against the new Glibc in /tools.

Upon entering the chroot environment in Chapter 6, the first major package to be installed is Glibc, due to its self-sufficient nature mentioned above. Once this Glibc is installed into /usr, we will perform a quick changeover of the toolchain defaults, and then proceed in building the rest of the target LFS system.
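Pulling the inspection commands from this section together, a quick sanity-check session might look like the following (a sketch of mine; the sample outputs in the comments are illustrative and will differ from host to host):

./config.guess                                    # e.g. i686-pc-linux-gnu
readelf -l /bin/bash | grep interpreter           # reveals the host's dynamic linker
gcc -print-prog-name=ld                           # which ld gcc will invoke
echo 'int main(){return 0;}' > dummy.c
gcc dummy.c -Wl,--verbose 2>&1 | grep succeeded   # files opened during linking
gcc -v dummy.c 2>&1 | head                        # search paths for each stage
rm -f dummy.c a.out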
<urn:uuid:62d09ee4-9376-4476-b34c-750d7c7278e8>
3.625
1,566
Documentation
Software Dev.
53.08112
Planning your app is the most important phase of developing a great app. This could make or break your project. If you are going to be a serious iPhone developer and play with the big kids, you have to have some kind of strategy for building your mobile app. In the old days of programming, a systematic approach was used to create software programs: it started with research and planning (writing out the plan), followed by design, then coding, testing, and finally production. While it can be a lot of fun to make your own app, don't take lightly the time needed to prepare for a successful app. The following video is a good example of the development process beginning with planning. It's a high-speed video capturing an iPhone project being developed from beginning to end. In the free 5-day mini-course, we go a little deeper into the planning stages, and in the e-book and full online course, we cover step-by-step how to plan your app development project so that you have an easier time developing your app to completion.
<urn:uuid:374b9e20-d6b3-43f6-9398-c7880551f833>
3.015625
218
Tutorial
Software Dev.
55.835643
Compared to other species, little is known of M. javanica reproduction. They usually give birth to one young, occasionally two. The scales are soft at birth but harden on the second day (Nowak, 1999). Births are thought to occur during February to March or August to October (Banks, 1949). Young cling to the base of the mother's tail, using their claws to hook under its scales. Female pangolins roll up into a ball with their offspring enclosed in the centre in order to protect them from danger (Banks, 1949).
<urn:uuid:93cb02d2-5434-4bf9-af94-45386d23328c>
3.40625
120
Knowledge Article
Science & Tech.
67.825
Whether you are interested in insects, birds, reptiles or amphibians, find out how you can help the UK's experts to map the biodiversity of the UK. As well as the surveys that the Museum runs, there are many other national wildlife surveys happening all over the UK. Here are some examples that you can take part in. We will update this list as new ones launch. Find out about the threats facing our trees’ health and provide scientists with vital information about pests and diseases affecting oak, ash and horse chestnut trees. Invertebrates play a vital role pollinating plants, recycling nutrients and providing food for birds. But which habitats do they prefer and how does the built environment affect them? Help scientists find out. Join in the OPAL biodiversity survey and discover the diverse range of wildlife that hedges support. Take part in the OPAL water survey and help scientists learn more about how polluted our lakes and ponds are - something we know surprisingly little about. Find out how much pollution there is in your local area by looking for lichens, which are natural indicators of air pollution. Earthworms play a vital role in recycling plant nutrients and aerating the soil, but scientists still have a lot to learn about them. Help them find out more by joining the earthworm hunt in your local park or garden. Our conker trees (horse chestnuts) are under attack by invading moths. Help scientists in their mission to find out how far these alien invaders have spread and how well birds and other insects are controlling them. Oil beetles are under threat in the UK and have been identified as priorities for conservation action. Help conservation efforts by recording your sightings, and learn about the extraordinary life cycle of these beetles. The Royal Horticultural Society wants you to help track 4 invasive insect species that are pests of British plants. The dragonfly fauna of Britain is changing. Help the British Dragonfly Society gather up-to-date information about the distribution of British dragonflies and damselflies by recording species in your area. Britain’s only native dormouse species, the hazel dormouse, is now rare in the UK. By searching for nibbled hazel nuts you could help identify woods where they still occur. The harlequin ladybird is the most invasive ladybird species on Earth and it arrived in Britain in 2004. By taking part in this survey you can help to monitor its spread across Britain. Find out how you can help fill gaps in scientists' knowledge about birds in the UK. You can record information online about your bird sightings, from common garden birds to rare migrants. Help scientists find out how healthy the countryside is, by monitoring common wildflowers. You will not only be helping plants but also the insects and other invertebrates that depend on them. What are ancient trees and why are they such perfect homes for other plants, animals and fungi? Find out the answers from the Woodland Trust and help them map the distribution of ancient trees in Britain. The National Amphibian and Reptile Recording Scheme runs a number of surveys. Seek out rare visitors to your local area, like the adder and great-crested newt, or spot more regular ones like the common frog. The stag beetle is a protected species in the UK, following its extinction in many European countries. Help find out if its numbers are stable in the UK by joining the Great Stag Hunt. Hedgehog populations are declining in some parts of the UK. 
Record your sightings in the Hogwatch survey to help track populations, and join in the Hedgehog Street project to find out what positive actions you can take. Help record the distribution of non-native animals and plants that have been brought into the UK and have a negative impact on native wildlife. Numbers of swifts have dropped dramatically in the past 10 years and they are now officially a Conservation Concern. Help scientists help them by telling the RSPB about any swifts you see. Scientists at the Museum's Centre for UK Biodiversity and the Biological Records Centre have produced a practical guide to setting up citizen science projects to study biodiversity and the environment. Guide to citizen science PDF (1.9 MB)
<urn:uuid:81ff6dfa-3f3a-4c11-8342-18bfbcf69782>
3.640625
863
Content Listing
Science & Tech.
51.4271
Next presentation offering: to be determined

In December 2011 the North Museum of Natural History & Science began presenting its newest demonstration attraction. It's a circular track of magnets above which a razor-thin disc amazingly levitates, seeming to defy the laws of physics. Purchased for about $7,000 from Tel-Aviv University, the Levitator is believed to be the only one of its kind in the United States. The program includes a variety of demonstrations related to magnets, levitation and liquid nitrogen, culminating in the demonstration of the Quantum Levitator. Free with museum admission.

Watch a short demonstration:

How it works: The key to the levitator is the disc, which is made of superconducting material above layers of gold and sapphire crystal. A piece of foam is placed on top and held in place with household plastic wrap. The disc is then dipped into a brew of liquid nitrogen (temperature: minus 300 degrees Fahrenheit). This creates a superconductor — an object that conducts electricity without resistance and with no energy loss. When placed atop a powerful magnet, the disc appears to float or be trapped by the magnetic field. The combination of magnetism and superconductivity creates the levitation. The disc doesn't have to remain flat but can be tilted and will maintain the same angle as long as it's above the magnet. You can even flip the magnet over without the disc falling off.
<urn:uuid:8a07efac-fcb3-4ea7-8319-cf8630efd879>
3.828125
298
Knowledge Article
Science & Tech.
41.606923
Concurrency: An Introduction

Thus far, we have seen the development of the basic abstractions that the OS performs. We have seen how to take a single physical CPU and turn it into multiple virtual CPUs, thus enabling the illusion of multiple programs running at the same time. We have also seen how to create the illusion of a large, private virtual memory for each process; this abstraction of the address space enables each program to behave as if it has its own memory when indeed the OS is secretly multiplexing address spaces across physical memory (and sometimes, disk).

In this note, we introduce a new abstraction for a single running process: that of a thread. Instead of our classic view of a single point of execution within a program (i.e., a single PC where instructions are being fetched from and executed), a multi-threaded program has more than one point of execution (i.e., multiple PCs, each of which is being fetched and executed from). Perhaps another way to think of this is that each thread is very much like a separate process, except for one difference: they share the same address space and thus can access the same data.
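To make the shared-address-space point concrete, here is a minimal sketch of mine (not from the text) in C with POSIX threads: two threads update one global counter, which both can reach because they live in a single address space.

#include <pthread.h>
#include <stdio.h>

static volatile int counter = 0;        /* shared data: one address space */

static void *worker(void *arg) {
    int i;
    for (i = 0; i < 1000000; i++)
        counter++;                      /* unsynchronized update: a data race */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;                   /* two points of execution, two PCs */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);  /* often prints less than 2000000 */
    return 0;
}

Compile with gcc -pthread. The final count frequently falls short of 2,000,000 precisely because the two threads share, and race on, the same memory.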
<urn:uuid:d262e6c3-c1c8-4d0a-9f71-053be0949a72>
3.5
262
Academic Writing
Software Dev.
49.016768
By Geoff Brumfiel of Nature magazine Without fanfare, astronomers have redefined one of the most important distances in the Solar System. The astronomical unit (au) — the rough distance from the Earth to the Sun — has been transformed from a confusing calculation into a single number. The new standard, adopted in August by unanimous vote at the International Astronomical Union's meeting in Beijing, China, is now 149,597,870,700 meters — no more, no less. The effect on our planet’s inhabitants will be limited. The Earth will continue to twirl around the Sun, and in the Northern Hemisphere, autumn will soon arrive. But for astronomers, the change means more precise measurements and fewer headaches from explaining the au to their students. The distance between the Earth and the Sun is one of the most long-standing values in astronomy. The first precise measurement was made in 1672 by the famed astronomer Giovanni Cassini, who observed Mars from Paris, France, while his colleague Jean Richer observed the planet from French Guiana in South America. Taking the parallax, or angular difference, between the two observations, the astronomers calculated the distance from Earth to Mars and used that to find the distance from the Earth to the Sun. Their answer was 140 million kilometers — not far off from today’s value. Until the last half of the twentieth century, such parallax measurements were the only reliable way to derive distances in the Solar System, and so the au continued to be expressed as a combination of fundamental constants that could transform angular measurements into distance. Most recently, the au was defined as (take a deep breath): “the radius of an unperturbed circular Newtonian orbit about the Sun of a particle having infinitesimal mass, moving with a mean motion of 0.01720209895 radians per day (known as the Gaussian constant)”. The definition cheered fans of German mathematician Carl Friedrich Gauss, whose constant sits at the heart of the whole affair, but it caused trouble for astronomers. For one thing, it left introductory astronomy students completely baffled, says Sergei Klioner, an astronomer at the Technical University of Dresden in Germany. But, more importantly, the old definition clashed with Einstein’s general theory of relativity. As its name implies, general relativity makes space-time relative, depending on where an observer is located. The au, as formerly defined, changed as well. It shifted by a thousand meters or more between Earth’s reference frame and that of Jupiter’s, according to Klioner. That shift did not affect spacecraft, which measure distance directly, but it has been a pain for planetary scientists working on Solar System models. The Sun posed another problem. The Gaussian constant is based on Solar mass, so the au was inextricably tied to the mass of the Sun. But the Sun is losing mass as it radiates energy, and this was causing the au to change slowly as well. The revised definition wipes away the problems of the old au. A fixed distance has nothing to do with the Sun’s mass, and the meter is defined as the distance traveled by light in a vacuum in 1 / 299,792,458 of a second. Because the speed of light is constant in all reference frames, the au will no longer change depending on an observer’s location in the Solar System. Redefining the au has been possible for decades — modern astronomers can use spacecraft, radars and lasers to make direct measurements of distance. 
But “some of them thought it was a little bit dangerous to change something,” says Nicole Capitaine, an astronomer at the Paris Observatory in France. Some feared the change might disrupt their computer programs, others held a sentimental attachment to the old standard. But after years of lobbying by Capitaine, Klioner and others, the revised unit has finally been adopted.
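As a quick arithmetic check (mine, not part of the original article), dividing the newly fixed au by the defined speed of light gives the familiar light-travel time from the Sun to the Earth:

$$\frac{149{,}597{,}870{,}700\ \mathrm{m}}{299{,}792{,}458\ \mathrm{m/s}} \approx 499.005\ \mathrm{s} \approx 8.32\ \text{light-minutes}$$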
<urn:uuid:0dcb01fd-492d-4d07-bc20-1aa7351e9405>
3.78125
808
Truncated
Science & Tech.
39.907604
In the February issue of Scientific American two astrophysicists offer a close-up look at a telescope they are developing for NASA. The Nuclear Spectroscopic Telescope Array (NuSTAR) is the space agency's first mission capable of focusing high-energy x-rays. Among other goals, NuSTAR is expected to capture the highest quality hard x-ray images to date of black holes, neutron stars and other extreme phenomena. The photo feature in the magazine showed just the optics; this slide show reveals NuSTAR's other components as they are assembled in preparation for the telescope's launch next year.
<urn:uuid:16a39d3a-93a0-4aa8-a1c0-aa64a73e2e7e>
3.171875
130
Truncated
Science & Tech.
27.105536
Date sent: Tue, 21 Jul 1998
John Shaw <John.Shaw@ualberta.ca>

Here are my responses to your comments.

How did melting occur under an ice sheet?
Melting at a glacier bed is a result of several energy sources: i) Geothermal heat conducted from the Earth's interior; ii) Frictional heat generated by sliding at the ice bed; iii) Advected heat from surface water or groundwater; and iv) Frictional heat produced by viscous dissipation in flowing water.

What prevents a sheet flow from being concentrated?
Nothing. That's exactly what the field evidence tells us. The sheet flows are transient and, after the formation of drumlin or Rogen fields or hummocky terrain, sheet flow is concentrated in tunnel channels.

Convergent vs divergent flow patterns?
Patterns of drumlins do converge into tunnel channels near the former ice margins (e.g., the Finger Lakes, N.Y.). Elsewhere, patterns diverged as water flowed radially outwards towards distant ice margins (see Prest et al. 1968, Glacial Map of Canada). In some locations flow diverged over high ground; e.g., the Livingstone Lake drumlin field (Shaw and Kvill).

Shifts in flow direction?
Flow direction in a pressurized system is determined by gradients in the hydraulic head (i.e., the pressure in subglacial meltwater). If the pressurized system bursts, flow will be towards ruptures, for example breaks in a seal around the periphery of the Laurentide ice sheet. Initially flow will occur through all outlets. Later, when the hydraulic head falls, the higher outlets will be resealed and flow diverted towards the lowest outlets. Hence drumlins show cross-cutting patterns related to the changing availability of outlets as outburst events evolved.

Flow from out of the sea?
If the ice were thickest over the sea then the hydraulic head would be highest there and flow would be towards what is now land. For example, the Laurentide ice sheet flowed uphill from Hudson Bay to the Milk River Ridge in Alberta.

Size of vortices forming drumlins and Rogen moraine?
In the meltwater hypothesis, vortices forming cavity fill drumlins are presumed to be on the scale of the landforms. Horseshoe vortices forming erosional drumlins, Beverleys, may be of much smaller diameter than the height of the obstacle (drumlin).

Contemporaneous erosion of lake basins and formation?
Yes, this appears to have been the case, since lake basins, for example the Lake Ontario Basin, contain only thin or no surficial sediment on bedrock. Areas adjacent to the present lake have thick deposits, for example the Oak Ridges Moraine, Ontario.

Drumlin fields are about the same age?
Yes, this seems to be the case for the fields related to the maximum extent of the Laurentide ice sheet. I have inferred from this that there was some external effect, such as a rapid climate change, on the ice sheet that caused build-up of meltwater storage on, within, or beneath the ice.

What about the imprint of the ice sheet?
This is a fascinating question. Airphotos, satellite images and Digital Elevation Models (DEMs) illustrate regional-scale bedforms with very little subsequent reworking. Consequently, if the meltwater hypothesis is correct, the ice sheet must have been let down gently on the bed as pressure fell in the meltwater system. At that point, because the ice had been floating, there would have been insufficient slope on the surface to drive ice flow. Thick ice must have stagnated and melted away. Hence recess.

Where did the water come from?
C. Warren Hunt's comments are out of date; I discuss the origins of the meltwater (Shaw 1996), including references to Shoemaker's papers on the storage and release of subglacial meltwater. Furthermore, it is quite clear that we realize that the ice sheet could not be floating at the margin. It is for this reason that I show tunnel channels in the marginal area of the ice sheet (Shaw 1996, Fig 7.45). As I have written before, the similarity of our conclusions regarding the mechanisms of formation of drumlins and Rogen moraine is remarkable, given that we worked completely independently of each other. I appreciate your respectful questioning of the meltwater hypothesis and hope that you find my responses equally respectful.
<urn:uuid:34963e68-9704-4d88-8790-a0ad2fa5a591>
4.15625
973
Comment Section
Science & Tech.
48.251775
by Staff Writers
Pasadena CA (JPL) Jan 19, 2012

La Nina, "the diva of drought," is peaking, increasing the odds that the Pacific Northwest will have more stormy weather this winter and spring, while the southwestern and southern United States will be dry. Sea surface height data from NASA's Jason-1 and -2 satellites show that the milder repeat of last year's strong La Nina has recently intensified, as seen in the latest Jason-2 image of the Pacific Ocean, available here. The image is based on the average of 10 days of data centered on Jan. 8, 2012. It depicts places where the Pacific sea surface height is higher than normal (due to warm water) as yellow and red, while places where the sea surface is lower than normal (due to cool water) are shown in blues and purples. Green indicates near-normal conditions. The height of the sea surface over a given area is an indicator of ocean temperature and other factors that influence climate. This is the second consecutive year that the Jason altimetric satellites have measured lower-than-normal sea surface heights in the equatorial Pacific and unusually high sea surface heights in the western Pacific. "Conditions are ripe for a stormy, wet winter in the Pacific Northwest and a dry, relatively rainless winter in Southern California, the Southwest and the southern tier of the United States," says climatologist Bill Patzert of JPL. "After more than a decade of mostly dry years on the Colorado River watershed and in the American Southwest, and only two normal rain years in the past six years in Southern California, low water supplies are lurking. This La Nina could deepen the drought in the already parched Southwest and could also worsen conditions that have fueled recent deadly wildfires." NASA will continue to monitor this latest La Nina to see whether it has reached its expected winter peak or continues to strengthen. A repeat of La Nina ocean conditions from one year to the next is not uncommon: repeating La Ninas occurred most recently in 1973-74-75, 1998-99-2000 and in 2007-08-09. Repeating La Ninas most often follow an El Nino episode and are essentially the opposite of El Nino conditions. During a La Nina episode, trade winds are stronger than normal, and the cold water that normally exists along the coast of South America extends to the central equatorial Pacific. La Nina episodes change global weather patterns and are associated with less moisture in the air over cooler ocean waters. This results in less rain along the coasts of North and South America and along the equator, and more rain in the far Western Pacific. The comings and goings of El Nino and La Nina are part of a long-term, evolving state of global climate, for which measurements of sea surface height are a key indicator.
<urn:uuid:402e5cb0-df74-4c1e-abf2-f092e39daef8>
3.109375
835
Truncated
Science & Tech.
38.120739
It's rare for a month to go by without some aspect of DNA sequencing making the headlines. Species after species has seen its genome completed, and the human genome, whether it's from healthy individuals or cancer cells, has received special attention. A dozen or more companies are attempting to bring new sequencing technology to market that could eventually drop the cost of sequencing down to the neighborhood of a new laptop. Arguably, it's one of the hottest high-tech fields on the planet. But, although these methods can differ, sometimes radically, in how they obtain the sequence of DNA, they're all fundamentally constrained by the chemistry of DNA itself, which is remarkably simple: a long chain of alternating sugars and phosphates, with each sugar linked to one of four bases. Because the chemistry of DNA is so simple, the process of sequencing it is straightforward enough that anyone with a basic understanding of biology can probably understand the fundamentals. The new sequencing hardware may be very complex, but all the complexity is generally there to just sequence lots of molecules in parallel; the actual process remains pretty simple. In a series of articles, we'll start with the very basics of DNA sequencing, and build our way up to the techniques that were used to complete the human genome. From there, we'll spend time on the current crop of "next-generation" sequencing hardware, before going on to examine some of the more exotic things that may be coming down the pipeline within the next few years.

The basics of copying DNA

Anyone who's made it through biology knows a bit about the structure of the double helix. Half of one is shown above, to illustrate its three components: its backbone is made up of alternating sugars (blue) and phosphates (red), and each sugar is linked to one of four bases (green). In this case, all of the bases shown are adenine (A), although they could potentially be guanine (G), cytosine (C), or thymine (T). In the double helix, the bases undergo base pairing to partners on the opposite strand: A with T, C with G. When a cell divides and DNA needs to be replicated, the double helix is split, and enzymes called polymerases use each of the two halves as a template for a new opposing strand; the base pairing rules ensure that the copying is exact, except for rare errors. Historically, DNA sequencing has relied on the exact same process of copying DNA—in fact, the enzymes that make copies of DNA within a cell are so efficient that biologists have used a modified polymerase to perform sequencing. In the animation shown at right, a string of T's is base paired with a partial complement of A's on an opposing strand. The DNA polymerase, which isn't shown, is able to add additional nucleotides (a sugar + base combination) under two conditions: they're in the "triphosphate" form, with three phosphate groups in a row, and they base pair successfully with the complementary strand. As the red highlight indicates, the polymerase causes the hydroxyl group (OH) at the end of the existing strand to react with the triphosphate, linking the two together as part of the growing chain. When that reaction is done, there's a new hydroxyl group ready to react, allowing the cycle to continue. By moving down the strand and repeating this reaction, a new molecule of DNA with a specific sequence is created.

From copying to sequencing

From a sequencing perspective, having a new copy of DNA isn't especially helpful.
What we want to know is what the order of the bases along the strand is. Sequencing works because we can get the process to stop in specific places and identify the base where it stops. The simplest way to do this is to mess with the chemistry. Instead of supplying the DNA with a normal nucleotide, it's possible to synthesize one without the hydroxyl group that the polymerase uses to add the next base. As the animation here shows, the base can be added to the growing strand normally, but, once in place, the process comes to a crashing halt. We've now stopped the process of DNA replication. Of course, if you supply the polymerase with nothing but terminating bases, it will never get very far. So, for a sequencing reaction, researchers use a mix of nucleotides where the majority are normal but a small fraction lack the hydroxyl group. Now, most of the time, the polymerase adds a normal nucleotide, and the reaction continues. But, at a certain probability, a terminator will be put in place, and the reaction stops. If you perform this reaction with lots of identical DNA molecules, you'll wind up with a distribution of lengths that slowly tails off as fewer and fewer unterminated molecules are left. The point at which this tailing off takes place is dictated by the fraction of terminator nucleotides in the reaction mix. Now we just need to know what base is present when the reaction stops. This is possible by making sure that only one of the four nucleotides given to the polymerase can terminate the reaction. If all the C's, T's and G's are normal, but some fraction of the A's are terminators, then that reaction will produce a population of DNA molecules that all end at A. By setting up four reactions, one for each base, it's possible to identify the base at every position. There are only two more secrets to DNA sequencing. First, you need to make sure every polymerase starts copying in the same place, otherwise you'll have a collection of molecules with two randomly located ends. This part is easy, since DNA polymerases can only add nucleotides to an existing strand. So, researchers can "prime" the polymerase by seeding the reaction with a short DNA molecule that base pairs with a known sequence that's next to the one you want to determine. The other trick is that you need to figure out how long each DNA molecule is in the large mix of reaction products that you're left with. The negative charge on phosphates makes this easy, since it ensures that DNA molecules will move when placed in an electric field. So, if you start the reaction mix on one side of an aqueous polymer mesh (called a gel) and run a current through the solution, the DNA will worm its way through the mesh. Shorter molecules move faster, longer ones slower, allowing the population of molecules to be separated based on their sizes. By running the four reactions down neighboring lanes on a gel, you'll get a pattern that looks like the one below, which can be read off to determine the sequence of the DNA molecule.

Going high(er) throughput

We're now at the state of the art from when I was a graduate student back in the early 1990s and, trust me, it was anything but artful. The presence of the DNA, marked by those dark bands, came from a short-lived radioisotope incorporated into the nucleotides. That meant you had to collect everything involved in the process and pay someone to store it until it decayed to background.
The gels were flexible enough that they would shift or bend at the slightest provocation, making the order of bases difficult to read. But not so flexible that they wouldn't tear if suitably disturbed. All told, it took a full day to create something from which, if you were lucky, you could read two hundred bases down each lane, making each gel good for about a kilobase of sequence. The human genome is about 3 Gigabases—clearly, this wasn't going to cut it, and people were beginning to discuss all manners of exotic approaches, like reading single molecules with a scanning-tunneling microscope. Fortunately, a couple of changes breathed new life into the old approach. For starters, people got rid of the radioactivity by replacing it with a fluorescent tag. Not only was this a whole lot more convenient, but it enabled a simple four-fold improvement in throughput. Go to any outdoor event, and the glow sticks should indicate that it's possible to craft molecules that fluoresce in a variety of different colors. By picking four fluorescent molecules that are decently spread out—blue for G's, green for A's, yellow for T's and red for C's, for example—and linking them to a specific terminating nucleotide, it's possible to link the termination position with the identity of the base there. What once required four separate reactions could now be run at once in a single solution. The next trick was to get rid of most of the gel. As we noted above, molecules work their way through the gel based on their size, but you needed a long gel if you wanted to image a lot of them at once. The solution, it turned out, was not to image them at once—something that, before the switch from radioactivity to fluorescence, wasn't really possible. All you really need is just enough gel to separate things out slightly. You can put a gate at the end of the gel and image the fluorescent activity there. One by one, based on their size, the different molecules will pass through the gate, and glow a specific color based on the base at that position. Instead of a couple hundred bases, it was now possible to get about 700 bases of sequence from a single reaction. Thanks to digital imaging, the data, an example of which is shown below, was easy to interpret. Sequences came as a computer file, ready to be plugged into various analysis programs. With all of these in place, DNA sequencing was ready for the same sorts of processes that revolutionized many areas of technology: automation and miniaturization. Instead of a grad student or technician painstakingly adding everything that was needed into individual tubes, a robot could dispense all the reaction ingredients into a small plastic plate that could hold about 100 individual samples. A second robot could then pull the samples and deposit them into a machine that read out the sequencing information. Large gels were replaced by narrow capillaries. The new sequencing machines could do all of this for many samples in parallel, and the larger sequencing centers had dozens of these machines. As the bottlenecks were opened wider, the human genome project shot past its planned schedule, and a flood of genomes followed. But with the increased progress came increased expectations. Ultimately, researchers didn't just want to have a human genome, but the ability to sequence any human genome, from an individual with a genetic disease to the genome of a cancer cell, in order to personalize medicine. That, once again, has set off a race for new and exotic sequencing technology.
We'll discuss the first wave of these so-called "next generation" sequencers in a future installment.
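As an illustration of the chain-termination logic described earlier (a toy model of my own, not part of the article), a few lines of Python can simulate the four dideoxy reactions: each simulated strand grows until it happens to incorporate a terminator, and reading the pooled fragment lengths from shortest to longest recovers the complementary sequence.

import random

def sanger_reaction(template, terminator, p_term=0.05, strands=10000):
    """Simulate one dideoxy reaction: strands stop where 'terminator' pairs."""
    pair = {"A": "T", "T": "A", "C": "G", "G": "C"}
    lengths = set()
    for _ in range(strands):
        for pos, base in enumerate(template, start=1):
            if pair[base] == terminator and random.random() < p_term:
                lengths.add(pos)   # this strand terminated here
                break
    return lengths

template = "ATGCGTAC"
stops = {b: sanger_reaction(template, b) for b in "ACGT"}
# Read the "gel" from the shortest fragment up; each length maps to one base.
read = "".join(b for i in range(1, len(template) + 1)
                 for b in "ACGT" if i in stops[b])
print(read)   # with enough strands, prints TACGCATG, the complementary strand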
<urn:uuid:8556733b-8d37-439b-8492-dea89a3c5451>
4.0625
2,224
Truncated
Science & Tech.
44.709854
There is currently a buzz of excitement around citizen science. At its best citizen science can allow excellent engagement of people with science and nature, and it can be real science. Last week, scientists at the NERC Centre for Ecology & Hydrology (CEH) and Natural History Museum published a review of citizen science [PDF], which looks at the current state of citizen science, looks back on its history, and looks forward to the future potential of citizen science, including the use of technology. Knowing all about citizen science is one thing, but knowing how to do it is another – so we also wrote a "how to" Guide to Citizen Science [PDF] which includes really practical advice based on our experience and all the evidence we collected during the review. Both publications were commissioned by the United Kingdom Environmental Observation Framework (UKEOF) and can also be downloaded from their website. Of course, the things that today we call "citizen science" are not brand new. In Britain, the recording of animals and plants by volunteers has been going on for centuries, especially since the time of people like John Ray (the "father of natural history"). This year is the 50th anniversary of the publication of the Atlas of the British Flora – with the monumental, magnificent effort of botanists that provided the data. Over the past 35 years, the Biological Records Centre at CEH has been publishing atlases of animal and plant groups at the rate of three per year. Many of those atlases are now repeat surveys, so we can chart the changing distributions of animals and plants. Clearly, volunteer data collection is much older than the term "citizen science". However, the future of citizen science is looking exciting. There are lots of technological innovations (which especially appeal to the geek in us: how about plugging a pollution sensor into your smartphone, do-it-yourself remote sensing with a camera attached to a kite, or harvesting Twitter messages to track the spread of tree diseases?). There is a huge diversity of projects that allow mass participation; simply download an app and you can use your smartphone as a handheld data recorder to contribute to real science! But there is also lots of fantastic face-to-face engagement, which provides a depth of engagement almost impossible via the internet. Volunteers can provide data, but they can also get involved with analysing data and interpreting results. And citizen science is not limited to professionals asking volunteers to do something – more and more volunteers are working collaboratively with scientists, so that communities get answers to the questions that are important to them. Finally, the data collected by volunteers (with the appropriate checks and balances for quality control) is becoming increasingly trusted and used by scientists and policy-makers. I'm passionate about citizen science. It provides us, as society, with the data we need to address important questions around environmental change. It also provides a way in which anyone can get involved in science and, in many cases, to get engaged with their natural world. Citizen science, at its best, is real science and potentially life-changing engagement (oh, and it is great fun!). We hope that our enthusiasm for citizen science shines through the Report and the Guide! Dr Michael Pocock Dr Michael Pocock is an ecologist at the Centre for Ecology & Hydrology.
He was one of the team of authors of the two new citizen science publications mentioned in the article, which were commissioned by UKEOF and written by CEH and the Natural History Museum. The Understanding Citizen Science & Environmental Monitoring project was led by Dr Helen Roy of CEH.
<urn:uuid:de5d6764-b069-4166-a0aa-dfe98c02b73e>
2.921875
729
Personal Blog
Science & Tech.
32.790324
Charm++ is a portable adaptive runtime system for parallel applications. Application developers create an object-based decomposition of the problem of interest, and the runtime system manages issues of communication, mapping, load balancing, fault tolerance, and more. Sequential code implementing the methods of these parallel objects is written in C++. Calls to libraries in C++, C, and Fortran are common and straightforward. Charm++ is portable across individual workstations, clusters, accelerators (Cell SPEs and GPUs), and supercomputers such as those sold by IBM (Blue Gene, POWER) and Cray (XT3/4/5/6). Applications based on Charm++ are used on at least 5 of the 20 most powerful computers in the world. GNU parallel is a shell tool for executing jobs in parallel locally or using remote computers. A job is typically a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. If you use xargs today you will find GNU parallel very easy to use, as GNU parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. If you use ppss or pexec you will find GNU parallel will often make the command easier to read. GNU parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU parallel as input for other programs.
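For instance (an illustration of mine, not from the package description), compressing every log file in a directory in parallel is a one-liner, and the xargs-style pipe form works the same way:

parallel gzip ::: *.log
ls *.log | parallel gzip

Both commands run one gzip job per file, defaulting to one job slot per CPU core.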
<urn:uuid:258ad0f4-e52a-4cf2-94c5-2f69f7f09b25>
2.8125
352
Knowledge Article
Software Dev.
52.005
Even if you don't know the story or significance behind this simple animation, you have probably seen it before. The legend goes that during the 1870s, English photographer Eadweard Muybridge was commissioned by Leland Stanford, a former governor of California and horse enthusiast, to prove that in the course of a horse's gallop there is a moment at which the animal is completely airborne. Little did Muybridge know that his "proof" would become the most famous 3-second movie in history. Of course, Muybridge's movie isn't the quality of your typical Hollywood fare. His primitive motion picture device (the Zoopraxiscope) ran at 8 frames per second (FPS), whereas most movies nowadays are screened at 24 to 30 FPS. Even better are top-of-the-line high definition (HD) televisions that can show about 60 FPS and emerging "ultra" HDTVs that are capable of up to 120 FPS. However, typical audiences don't need or really even want a life-like viewing experience when they tune into their weekly sitcoms. Nevertheless, the question arises: how "life-like" are we physically capable of getting, and what are the ramifications of ultra-fast motion capture? The astonishing answer is that scientists are now able to record movement down to the scale at which molecules vibrate and even faster than the speed of light. First, let's get a handle on how fast a camera would actually have to be in order to register molecular vibrations. It can be argued that our physical world is defined by chemical reactions. After all, the act of molecules smashing into each other is what determines everything that goes on inside our bodies as well as all of our interactions with the outside world. Nevertheless, for years, actually observing molecules in the process of reacting was at best a chemist's pipe dream. To explain why, we must go back to one of the first precepts of molecular motion. Namely, as you increase temperature, the movement of a molecule becomes more frenetic. In fact, reactions cannot occur until a molecule is imbued with just the right amount of energy (high temperature) to "kick" it over its activation barrier. This is similar to a rocket having to overcome the force of gravity. The experimental problem is that a chemical reaction has so much built-up energy that it occurs in a matter of femtoseconds*, or a quadrillionth (0.000000000000001) of a second. [*To truly wrap your brain around how short a femtosecond lasts is impossible. According to the Committee for the Nobel Prize, in one second light travels from the earth to the moon, while in one femtosecond it travels a fraction of the width of a human hair. Alternatively, the ratio between a femtosecond and a second is equivalent to the ratio between a second and about 32 million years.] The first person to formalize this idea of the rate of a chemical reaction was Swedish physical chemist Svante Arrhenius, who won the Nobel Prize in Chemistry in 1903. He derived mathematically that a high activation barrier can be overcome by high temperatures, resulting in a very fast chemical reaction. From then on, it was a race to see who could get the most information out of the smallest slices of time. In 1923, a process for observing reactions in intervals of a thousandth of a second was developed by mixing two solutions from separate tubes and forcing the resultant mixture through an outlet tube of glass at high velocity.
Then, in 1967, three scientists were awarded the Nobel Prize after achieving a resolution of microseconds using flash lamps as well as electrical, pressure, and heat shocks. This was based on the emergent field of photochemistry, in which light was found to be able to initiate chemical reactions. The Swinging Sixties also saw the birth of the laser to replace the flash lamp, finally pushing the resolution past the nano- and picosecond timescales. One of the most significant discoveries throughout this process was that of "intermediate" substances between the original reactants and the final products of the reaction. With each successive refinement in resolution, these intermediates would become increasingly unstable and ephemeral. It is important to note that this "transition-state theory" was hypothesized as early as the 1930s, but nobody would ever have thought that actually seeing the intermediate molecules could be possible. You can look at transition states like a slide on the playground. Molecules are activated, causing them to occupy a higher "transition" state (the top of the slide). Once over this crest, the molecule proceeds to fall to the lowest possible energy state (the bottom of the slide). It wasn't until Ahmed Zewail in the late 20th century that femtochemistry was born, culminating a century-long voyeuristic desire to catch molecules "in the act". Since then, we have been able to penetrate the strange world of molecular "slow motion", from observing how electrons are transferred between ions to how plants process photons into usable energy. Femtochemistry promises to lead us into the future with improved materials to manufacture tomorrow's electronics, new medical procedures to diagnose cancer and other diseases, and even artificial photosynthesis. It may be that humans are close to unlocking nature's deepest secrets. For more, I recommend watching this TED talk by MIT's Ramesh Raskar talking about commercial uses for femtosecond "cameras". You can also read a very comprehensive review of the field of femtochemistry by Dr. Zewail himself.
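Two quick checks of the numbers above (my arithmetic, not the author's): the rate law Arrhenius formalized is $k = A\,e^{-E_a/RT}$, which is why raising the temperature $T$ speeds a reaction exponentially; and since one second contains $10^{15}$ femtoseconds,

$$10^{15}\ \mathrm{s} \div \left(3.15 \times 10^{7}\ \mathrm{s/yr}\right) \approx 3.2 \times 10^{7}\ \mathrm{yr},$$

about 32 million years, just as the footnote claims.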
<urn:uuid:ab1d4c79-2f18-4385-9091-835b4cfcbe1d>
3.34375
1,186
Personal Blog
Science & Tech.
38.022971
One of the last considerations in the development of the 386BSD specification is deciding how we can most easily bootstrap load the BSD kernel from hard or floppy disk. We know that ISA machines have BIOS ROMs that select the device to be booted (typically the floppy first, followed by the hard disk), load the very first block into RAM at location 0x7c00, and finally execute it in real mode. From this point on, we had to create some tight code to run within that 512-byte block to read in our kernel from an executable file in the UNIX file system.

Traditional Berkeley UNIX undergoes a four-step bootstrap process to load in the kernel. First, the initial block bootstrap is brought in from disk by the hardware (in this case, the BIOS). The primary purpose of this assembly language bootstrap is to load in the second 7.5-Kbyte bootstrap located immediately after the initial boot on disk. This larger program, written in C, is much more elaborate in that it can decipher the UNIX file system, extract the UNIX file /boot, and load it as the next stage in the bootstrap. /boot, the most complex of the three bootstraps, evaluates the boot event and finally passes configuration parameters to the kernel as it is loading /vmunix, also located in the file system.

At first we intended to write the initial block bootstrap in MASM, Microsoft's MS-DOS assembler, and use calls to the BIOS to accomplish the boot process. This proved to be unsatisfactory, as it still left us tied to MS-DOS. So, we decided to use the UNIX protected mode assembler. This allowed us to "cut the cord" with MS-DOS and permitted the system alone to support all code. We also chose to create drivers for the hardware directly, from the initial boot block on up, to break away from the BIOS as well. As a result, 386BSD can now be easily retargeted to new buses that might not rely on either MS-DOS or the BIOS.

Both the second and third bootstraps are actually separate incarnations of the same source code (drivers and all). The only difference is that the second bootstrap is a functional subset of the third bootstrap, so that it could fit within the small confines required. All of the bootstraps reference a special data structure called the disklabel that knows the layout and geometry of the disk drive booted. In this way thousands of different disk drives can be supported independently of MS-DOS and the BIOS.

Copyright © 1994 William & Lynne Jolitz
<urn:uuid:0323873f-b781-496f-8635-20303c85bb98>
3.03125
550
Documentation
Software Dev.
51.016357
Birds and Rattle Snake

Name: Ann T.
Date: Tuesday, August 06, 2002

Hello, I live in the foothills of the Sierra Nevada. This area is basically dry fields in the summer. I have had many rattlesnakes this year. I have had two experiences with the local songbirds warning me, in a sense, that there was a rattlesnake about. I have reason to believe that the last rattlesnake had been killed by some birds that look like very small crows. They nest around here during the spring and summer. As I approached the snake, I noticed that it was encircled by several (maybe eight) of these birds. Is this common? Could these birds really have killed the snake? I would really appreciate your response.

I am not familiar with wildlife of the Sierra Nevada, but I think it very unlikely small birds killed a rattlesnake. They were probably scavenging on an already dead snake. Larger raptors (hawks or owls) might kill a snake.
<urn:uuid:8f8cee02-365a-4dad-9ab6-f594764223de>
2.6875
244
Q&A Forum
Science & Tech.
59.54069
Above, today's NEXSAT satellite image over New York City from the Navy satellite. So, there's this obscuration, plus the CARE rocket. Here is more information on today's experiment. HAARP and other antenna arrays are being used; I see Norway's antenna array will be in operation. This information was forwarded to me from California Skywatch. Check out the News With Views article by Rosalind Peterson. In her article, Prof. Wayne A. Scales is quoted: "…CARE will release its (aluminum oxide) (4), dust particles a bit higher than that, then let them settle back down to a lower altitude. 'What the CARE experiment hopes to do is to create an artificial dust layer,' Professor Scales told SPACE.com. 'Hopefully it's a creation in a controlled sense, which will allow scientists to study different aspects of it, the turbulence generated on the inside, the distribution of dust particles and such.' CARE is a project of the Naval Research Laboratory and the Department of Defense Space Test Program. The spacecraft will launch aboard a NASA four-stage Black Brant XII suborbital sounding rocket…Researchers will track the CARE dust cloud for days or even months to study its behavior and development over time…If CARE cannot launch Tuesday, the team can try again between Sept. 16 and Sept. 20, 2009…"

Here is the PDF document from Prof. Wayne A. Scales' notes (maybe a PowerPoint document) with the Bradley Department of Electrical and Computer Engineering. Here are some notes from the PDF. This is a chemical alteration of the atmosphere that will affect climate.

How is the space environment perturbed?
- Injection of charged particle beams (heavy ions or electron beams)
- Release of chemicals that photoionize (barium)
- Release of chemicals that attach electrons (nickel carbonyl, sulfur hexafluoride, trifluoromethyl bromide)
- Release of aerosol particles (space shuttle exhaust)
- Injection of high power radio waves from space or the ground (HAARP, Arecibo, EISCAT Tromso)

Why is this an important area of research?
- Allows study of basic physics of the near earth environment
- Allows for control of some physical processes in the space environment
- Allows for possible denial of adversary communication/navigation systems (military)
- Allows for possible new communication system techniques (military)

- Artificial Perturbation of Natural Dust Clouds in the Space Environment (Sponsored by NSF)
- Creation of Artificial Dust Clouds in the Space Environment (Sponsored by NRL)
- Creation of Artificial Plasma Clouds in Space for Remediation of Radioactive Particles after High Altitude Thermonuclear Detonation (HAND) (Sponsored by ONR and NRL)
<urn:uuid:c351b9c8-2e41-4e77-a394-0336a28fa08b>
3.109375
614
Personal Blog
Science & Tech.
31.683726
I’ve been learning Python for about a week now using Learn Python the Hard Way. It isn’t that hard for me yet since I have a strong SysAdmin background, so it comes somewhat naturally to me. LPTHW is not the best resource for a beginner, but it’s a great start nonetheless. Previously, I hadn’t written any code beyond Hello World, and today I was able to write a program that determines your age. While it’s nothing complicated, I do find it very encouraging to create something from scratch.

# Function to find current age.
def find_age(current_year, birth_year):
    print "Subtracting %d - %d" % (current_year, birth_year)
    return current_year - birth_year

# Get input from user for the current calendar year.
print "What year is it now?"
current_year = int(raw_input("> "))

# Get input from user on the year they were born.
print "What year were you born?"
birth_year = int(raw_input("> "))

# Create a variable of the user's age.
age = find_age(current_year, birth_year)

# Display the user's age after calculation.
print "You are %d years old." % age

This is the result of the code:

What year is it now?
> 2013
What year were you born?
> 1990
Subtracting 2013 - 1990
You are 23 years old.

The next step for me is to continue the exercises from LPTHW and then move on to another learning resource.
<urn:uuid:d24c293a-5d64-4353-88a1-c26293ba003e>
2.9375
339
Personal Blog
Software Dev.
69.836717
Many deep sea animals have the amazing ability to glow in the dark by using bioluminescence. Learn how bioluminescence works, and how animals use this adaptation.

How Bioluminescence Works

Bioluminescence is mainly found in marine organisms living in the ocean’s deepest regions. It is rare in freshwater species or in those that live on land, aside from the firefly and certain types of fungus such as foxfire. Bioluminescence is created by a chemical reaction. It usually takes place within special light-producing cells called photocytes, which are located inside light organs called photophores. The basic chemical reaction that produces bioluminescence requires three things: a type of molecule called a luciferin, an enzyme called luciferase, and oxygen. The luciferase enzyme catalyzes, or speeds up, the reaction in which luciferin combines with oxygen, producing an oxidized form of luciferin called oxyluciferin. The oxyluciferin emits light, usually blue or blue-green. Blue light travels farthest under water, and most marine organisms are only capable of seeing blue light. One exception to this is the Black Dragonfish, a member of the Loosejaw family of fishes, which produce red light in addition to blue light, and have eyes sensitive to both colors of light.

Photostomias guernei, a bioluminescent fish. Photo credit: Wikimedia Commons

How Animals Use Bioluminescence

Marine animals use bioluminescence for a variety of reasons. Some species use the light to attract prey. In the black depths of the ocean, the prey is not visible, so instead of hunting, predators such as the anglerfish can simply remain in one place and wait for the prey to be attracted to the light they emit. On the other hand, some marine animals use bioluminescence to avoid being caught by predators. Flashlight fish are able to turn the lights on their cheeks on and off to confuse predators. Firefly squid can release luminous ink as a smokescreen while they make a quick getaway. Other animals use light to communicate with each other, warn of approaching enemies, or attract mates.

Ocean, American Museum of Natural History, 2006.
<urn:uuid:e98380c7-4ab5-4f0a-b265-be59f57e6a0f>
4.25
478
Knowledge Article
Science & Tech.
29.586471
April 25, 2011

Dust in an astronomical context refers to small grains of solid material present in space. These grains form in the winds of evolved stars and in supernova explosions. Eta Carinae shed some 2 to 3 solar masses of gas and dust nearly 160 years ago to create the lobed nebula surrounding the massive star. This object showcases one way that dust forms in the cosmos.

NASA/ESA/The Hubble SM4 ERO Team
<urn:uuid:1307d8f3-f1cc-41f9-8084-9620b0a3e7ba>
3.421875
256
Truncated
Science & Tech.
46.6275
Trends in Plant Science, Volume 5, Issue 11, 1 November 2000, Pages 482-488
Robert B. Jackson, John S. Sperry and Todd E. Dawson
Plant water loss, regulated by stomata and driven by atmospheric demand, cannot exceed the maximum steady-state supply through roots. Just as an electric circuit breaks when carrying excess current, the soil–plant continuum breaks if forced to transport water beyond its capacity. Exciting new molecular, biophysical and ecological research suggests that roots are the weakest link along this hydraulic flow path. We attempt here to predict rooting depth and water uptake using the hydraulic properties of plants and the soil, and also to suggest how new physiological tools might contribute to larger-scale studies of hydraulic lift, the water balance and biosphere–atmosphere interactions.
Abstract | Full Text | PDF (1089 kb)

Trends in Ecology & Evolution, Volume 13, Issue 6, 1 June 1998, Pages 232-235
Jonathan L. Horton and Stephen C. Hart
Hydraulic lift is the process by which some deep-rooted plants take in water from lower soil layers and exude that water into upper, drier soil layers. Hydraulic lift is beneficial to the plant transporting the water, and may be an important water source for neighboring plants. Recent evidence shows that hydraulically lifted water can promote greater plant growth, and could have important implications for net primary productivity, as well as ecosystem nutrient cycling and water balance.
Abstract | Full Text | PDF (96 kb)

Trends in Plant Science, Volume 17, Issue 12, 1 December 2012, Pages 693-700
William R.L. Anderegg, Joseph A. Berry and Christopher B. Field
Tree death from drought and heat stress is a critical and uncertain component in forest ecosystem responses to a changing climate. Recent research has illuminated how tree mortality is a complex cascade of changes involving interconnected plant systems over multiple timescales. Explicit consideration of the definitions, dynamics, and temporal and biological scales of tree mortality research can guide experimental and modeling approaches. In this review, we draw on the medical literature concerning human death to propose a water resource-based approach to tree mortality that considers the tree as a complex organism with a distinct growth strategy. This approach provides insight into mortality mechanisms at the tree and landscape scales and presents promising avenues into modeling tree death from drought and temperature stress.
Abstract | Full Text | PDF (815 kb)

Copyright © 1967 The Biophysical Society. All rights reserved.
Biophysical Journal, Volume 7, Issue 1, 25-36, 1 January 1967
Albert L. Kunz and Norman A. Coulter
Sinusoidal oscillatory flow of blood and of aqueous glycerol solutions was produced in rigid cylindrical tubes. For aqueous glycerol, the amplitude of the measured pressure gradient wave form conformed closely to that predicted by Womersley's theory of oscillatory flow, up to Reynolds numbers approaching 2000. Blood differed significantly from aqueous glycerol solutions of comparable viscosity, especially at low frequencies and high hematocrits. As frequency increased, the hydraulic impedance of blood decreased to a minimum at a frequency of about 1–2 CPS, increasing monotonically at higher frequencies. The dynamic apparent viscosity of blood, calculated from Womersley's theory, decreased with increasing flow amplitude. The reactive component of the hydraulic impedance increased with frequency as predicted by theory; the resistive component decreased with increasing frequency, differing from the resistance of a Newtonian fluid, which increased with frequency.
<urn:uuid:d67e2a9a-e466-4ecb-b5a7-e375cc32ff8b>
2.6875
730
Content Listing
Science & Tech.
34.369568
Hyperphysics: Torque
Feature Summary - Physics in Your World

Think about the forces on this sailboat: the force of the wind on the sail (perpendicular to the fabric of the sail) tends to rotate the boat. A force that can rotate an object is called a torque. In this case, if the torque of the wind isn't balanced, it will tip the boat over. The weight of the sailor, and also the weight of the hull that's out of the water, both create torques in the opposite sense, to balance the torque of the wind. For more on torques, see Hyperphysics: Torque, and also this other Hyperphysics page.

Image credit: Jupiter405; image source; larger image
Image URL:
May 1, 2012
May 16, 2012
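To make the balance concrete, here is a minimal sketch of the torque bookkeeping. Every number below (forces, lever arms) is a made-up illustration, not data from the page; torque is force times perpendicular lever arm about the boat's roll axis.

import math

# Hypothetical numbers for illustration only.
wind_force = 900.0            # N, perpendicular to the sail
sail_lever = 3.0              # m, height of the sail's center of effort above the pivot

sailor_weight = 80.0 * 9.81   # N, an 80 kg sailor hiking out
sailor_lever = 2.0            # m, horizontal distance from the pivot

hull_weight = 150.0 * 9.81    # N, weight of the hull section out of the water
hull_lever = 0.9              # m

heeling = wind_force * sail_lever                                 # torque tipping the boat
righting = sailor_weight * sailor_lever + hull_weight * hull_lever

print("heeling torque:  %.0f N*m" % heeling)
print("righting torque: %.0f N*m" % righting)
print("boat capsizes" if heeling > righting else "boat stays balanced")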
<urn:uuid:fdfcbcaa-1947-4b1a-993f-b46b190b8084>
3.375
173
Knowledge Article
Science & Tech.
54.763
Hawaii Researchers Explore Previously Unseen Coral
Source: The Associated Press
September 8, 2009

Honolulu - Scientists over the past month explored coral reefs in the remote Northwestern Hawaiian Islands that until recently were considered too deep for scuba divers to reach. Divers swam among previously unseen reefs as deep as 250 feet during a monthlong research trip to the islands by the National Oceanic and Atmospheric Administration vessel Hiialakai. They unexpectedly found nursery grounds for juvenile reef fish like parrotfish and butterflyfish. They also were able to collect specimens that may help them identify new species.
<urn:uuid:ba653ac4-6f35-430e-bc3a-44251ef56076>
2.890625
134
Truncated
Science & Tech.
31.628571
Prescribed Fire in Alabama

Prescribed fire is a safe way to apply a natural process that benefits various habitats and ensures ecosystem health. Wildlife habitat and animals such as deer, turkeys, and quail flourish in areas that are maintained with prescribed fire. Some rare animals, such as the red-cockaded woodpecker and the gopher tortoise, require fire-adapted habitats to survive. Prescribed burning is also an effective tool to reduce the risk of wildfire, which can be disastrous to both humans and wildlife.

In Alabama, prescribed burns can be safely conducted throughout the year. Cool-season burns are used to reduce forest litter and to help prevent forest fires. Growing-season burns, often used to control the choking underbrush in a stand of mature trees, are conducted from early spring to late summer.

One of the primary uses for prescribed burning is the maintenance of wildlife habitat. Controlled burning helps to rejuvenate high-quality natural food sources for many species, including white-tailed deer and Eastern wild turkey. The burning of undergrowth can release nutrients into the soil, which stimulates the growth of high-quality native grasses, forbs and legumes. Unlike most supplemental wildlife plantings, controlled burning can provide year-round protective cover and food for wildlife on managed land.

Prescribed fire resources:
- ADCNR Land Management Articles
- Coalition of Prescribed Fire Councils
- Auburn University Prescribed Fire Site
<urn:uuid:a283e003-9876-4415-b540-29d23cce4520>
3.515625
287
Knowledge Article
Science & Tech.
23.770148
The Venus Transit 2004 ... Extended InfoSheet D7

The clouds of Venus - a description of the planet's atmosphere

One of the reasons why Venus looks like a jewel in the sky is that this sister planet of the Earth is covered by a dense cloudy atmosphere which causes its high visual albedo of 0.65. But this cloudy shell also completely frustrates any attempt to view the planet's surface features through optical telescopes. In the early years of telescopic observation, scientists believed that the clouds were made of water like clouds in the Earth's atmosphere, and so they came to the conclusion that the conditions on Venus' surface were similar to the conditions which prevailed on Earth 250-65 million years ago in the Triassic, Jurassic and Cretaceous geological periods.

With the help of spectroscopy we have known since 1932 that Venus' atmosphere contains carbon dioxide, but until fairly recently we did not know how much. This situation changed in 1962, when interplanetary probes undertook missions to the second planet of the solar system. Today we know that the atmosphere of Venus differs remarkably from ours. Unlike the Earth's atmosphere, which is mainly composed of nitrogen (78%) and oxygen (21%), Venus' atmosphere contains about 96% carbon dioxide, 3% nitrogen, some argon and traces of water vapour (varying from 0.1 to 0.4%), oxygen, hydrogen chloride, hydrogen fluoride, hydrogen sulphide, sulphur dioxide, and carbon monoxide. The surface pressure of Venus' atmosphere is about 90 times greater than the Earth's, and its surface temperature is about 500°C, exceeding that of Mercury and hot enough to melt metals. Calculations indicate that such high temperatures require a mechanism in the Venusian atmosphere that traps solar radiation very effectively.

The "Greenhouse Effect"

This mechanism is called the "Greenhouse Effect". That means that the carbon dioxide in the atmosphere is transparent to the light and heat coming from the Sun but is opaque to the long-wavelength infrared radiation coming from the hot planet. Since less than half the infrared radiation is released back to space, the result is to raise the temperature of the planet by a massive 500°C. Compare this with the Earth, where the greenhouse effect raises the temperature by only 30°C. On Venus the temperature difference between the equator and the poles is only a few degrees.

The effect occurs in all planetary atmospheres containing greenhouse gases. In the case of our planet Earth these are water vapour and carbon dioxide. The natural greenhouse effect is responsible for these planets being warmer than would be the case otherwise. However, under certain conditions that we find on Venus, we believe the greenhouse effect can "run away". It will happen if the temperature rises near to the boiling point of water, because the oceans would then begin to change to water vapour, which would increase the effectiveness of trapping heat and accelerate the greenhouse effect. When the oceans are gone, the atmosphere would finally stabilize at a much higher temperature and much higher density. Another runaway effect occurs when high-temperature chemical reactions begin to drive carbon dioxide from the rocks into the atmosphere. It would also accelerate the heating.
In the case of Venus we believe the initial solar heating prevented oceans from forming, or prevented them from remaining if they did form, and the subsequent lack of rainfall and the failure of plant life to evolve kept the carbon dioxide in the atmosphere rather than binding it in rocks as is the case for the Earth. So on Venus we see the greenhouse effect out of control, and it is called the "Runaway Greenhouse Effect".

Venus is covered with clouds of sulphuric acid, rather than the water vapour clouds found on the Earth. At ultraviolet wavelengths cloud patterns become distinctive. In particular, a horizontal V-shaped cloud feature is visible near the equator. Ringing this latitude of the planet, the clouds whiz around at roughly 100 m/s, fast enough to orbit the planet in only four days. Winds also blow from the equator to the poles in large cyclones 100 to 500 km across. They culminate in two giant vortices that cap the polar regions. Atmospheric winds on Venus blow from east to west and from the equator to the polar regions. The wind flow carries heat. Along with very effective atmospheric insulation, this flow helps to keep the temperatures fairly constant over Venus' surface, so that they vary about 10°C or less from the day to the night side and do not cool off much at night.

The atmosphere appears to be relatively clear below the cloud deck that is located at about 45 km altitude above the surface. The yellowish-white clouds conceal what is below, and their tops reach about 70 km above the surface - for comparison, the highest clouds above the Earth reach only about 16 km. Although the atmosphere and clouds of Venus contain some water vapour, it does not amount to very much compared with the total amount of water on the Earth. If all the Earth's water (in both the atmosphere and oceans) were spread in a uniform layer over our planet's surface, that sheet would be 3 km thick. All the water in the Venusian atmosphere (none exists on the surface now because it is so hot) would amount to a layer only 30 cm thick. The Earth's surface and atmosphere have about 10,000 times as much water as Venus.

Like the Earth a long time ago?

During its early history, Venus had relatively Earth-like conditions and substantial amounts of water. Venus' original atmosphere about 4 billion years ago is believed to have been much like its present atmosphere, which is many times denser than Earth's and consists mostly of carbon dioxide. The "wet greenhouse" theory suggests that this enormous primordial atmosphere was reduced to a small part of its original mass by ocean-planet interactions. This would have left Venus' atmosphere about the same size as Earth's. The thin Earth-like atmosphere then lasted several hundred million years, and eventual loss of most of its water would have stripped the planet dry, leaving it as bone dry as Venus is today.
<urn:uuid:b4f49cb4-f8eb-4d5f-944a-b32cd562fa67>
4.03125
1,259
Knowledge Article
Science & Tech.
45.172553
O'Reilly Book Excerpts: Programming Visual Basic .NET

ADO.NET, Part 3

This is the third installment from the Programming Visual Basic .NET chapter on ADO.NET, focusing on the relations between DataTables in a DataSet, and the DataSet's XML capabilities.

Relations Between DataTables in a DataSet

The DataSet class provides a mechanism for specifying relations between tables in a DataSet. The DataSet class's Relations property contains a DataRelationCollection object, which maintains a collection of DataRelation objects. Each DataRelation object represents a parent/child relationship between two tables in the DataSet. For example, there is conceptually a parent/child relationship between a Customers table and an Orders table, because each order must belong to some customer. Modeling this relationship in the DataSet has these benefits:
- The DataSet can enforce relational integrity.
- The DataSet can propagate key updates and row deletions.
- Data-bound controls can provide a visual representation of the relation.

Example 8-4 loads a Customers table and an Orders table from the Northwind database and then creates a relation between them. The statement that actually creates the relation is shown in bold.

Example 8-4: Creating a DataRelation between DataTables in a DataSet

' Open a database connection.
Dim strConnection As String = _
    "Data Source=localhost;Initial Catalog=Northwind;" _
    & "Integrated Security=True"
Dim cn As SqlConnection = New SqlConnection(strConnection)
cn.Open( )

' Set up a data adapter object.
Dim strSql As String = "SELECT * FROM Customers" _
    & " WHERE City = 'Buenos Aires' AND Country = 'Argentina'"
Dim da As SqlDataAdapter = New SqlDataAdapter(strSql, cn)

' Load a data set.
Dim ds As DataSet = New DataSet( )
da.Fill(ds, "Customers")

' Set up a new data adapter object.
strSql = "SELECT Orders.*" _
    & " FROM Customers, Orders" _
    & " WHERE (Customers.CustomerID = Orders.CustomerID)" _
    & " AND (Customers.City = 'Buenos Aires')" _
    & " AND (Customers.Country = 'Argentina')"
da = New SqlDataAdapter(strSql, cn)

' Load the data set.
da.Fill(ds, "Orders")

' Close the database connection.
cn.Close( )

' Create a relation.
ds.Relations.Add("CustomerOrders", _
    ds.Tables("Customers").Columns("CustomerID"), _
    ds.Tables("Orders").Columns("CustomerID"))

The Add method overload used here has this signature:

Public Overloads Overridable Function Add( _
    ByVal name As String, _
    ByVal parentColumn As System.Data.DataColumn, _
    ByVal childColumn As System.Data.DataColumn _
) As System.Data.DataRelation

The parameters are:
- The name to give to the new relation. This name can be used later as an index to the DataRelationCollection object.
- The DataColumn object representing the parent column.
- The DataColumn object representing the child column.

The return value is the newly created DataRelation object. Example 8-4 ignores the return value.
<urn:uuid:c96ed465-af9e-4b01-8cd5-744f7a5a8b2a>
2.6875
726
Documentation
Software Dev.
42.343686
More and more people around the world are becoming aware of the environmental issues surrounding plastic bags. Considering their somewhat placid appearance, the impact of plastic bags on the environment can be devastating. Here are some facts about the environmental impact of plastic bags:
- Plastic bags cause over 100,000 sea turtle and other marine animal deaths every year when animals mistake them for food
- The manufacture of plastic bags adds tonnes of carbon emissions into the air annually
- In the UK, banning plastic bags would be the equivalent of taking 18,000 cars off the roads each year
- Between 500 billion and 1 trillion plastic bags are used worldwide each year
- Approximately 60-100 million barrels of oil are required to make the world’s plastic bags each year
- Most plastic bags take over 400 years to biodegrade. Some figures indicate that plastic bags could take over 1000 years to break down. (I guess nobody will live long enough to find out!) This means not one plastic bag has ever naturally biodegraded.
- China uses around 3 billion plastic bags each day!
- In the UK, each person uses around 220 plastic bags each year
- Around 500,000 plastic bags are collected during Clean Up Australia Day each year. Clean Up Australia Day is a nationwide initiative to get as many members of the public as possible to get out and pick up litter from their local areas. Unfortunately, each year in Australia approximately 50 million plastic bags end up as litter.

Fortunately, some governments around the world are taking the initiative to deal with the environmental impact of plastic bags by either banning plastic bags or discouraging their usage.
<urn:uuid:22a517ac-920e-4d73-9112-d88f85f67c31>
3.390625
326
Personal Blog
Science & Tech.
42.609027
DELICATE quantum bits have been stored in single atoms, a feat that could make accessing memory in quantum computers more convenient. Unlike classical bits, which can store only a 0 or 1, qubits can be in a superposition of the two states at once. Two or more can also be "entangled" and remain linked across great distances. Both properties vastly enhance the power of quantum computers compared with normal ones. But while a qubit can be encoded in the polarisation of a photon, which can transport qubits efficiently, a good method for storing qubits and reading them out later has eluded us. Patterns created in ensembles of atoms can do the trick, but the information from one qubit is spread over many atoms, so accessing the qubit is cumbersome. So Gerhard Rempe of the ...
<urn:uuid:c68892b1-0b05-43fa-b24e-3280018dd4ce>
3.953125
196
Truncated
Science & Tech.
46.740802
Coding Languages, Compilers Category

With a little luck, you will find the right tools to help any beginner quickly get used to the basics of programming, or find simple-to-use programming languages powerful enough to let you write useful programs quickly. Learn the purpose each programming language serves, discern between C++, Java and PHP, and get convinced that there is no reason to restrict yourself to a single language, as doing so might be counterproductive. Take the chance to learn different languages and different kinds of language to keep yourself sharp and flexible in problem handling. Discover the real ease of implementation and the indispensable features of different coding languages. Compare the languages that stood at the very beginning of your programming experience with the latest mature and elegant languages, and get ready to try new languages that would suit every kind of project you might have in mind. The more technologies you know, the better!
<urn:uuid:cdede2b8-7a44-4e4a-b590-23a3b40cd98c>
2.703125
180
Content Listing
Software Dev.
27.214073
It is important that the vegetation surrounding each trap is recorded in a manner which allows distance-weighted abundance to be calculated for each plant taxon. It is recommended that this is done in two stages, one in the field and the second using digitised maps, air photos or forest inventories.

Pollen deposited in the traps has its origin in sources which may be at a whole range of distances. However, modelling has demonstrated that the pollen source area can be divided into two: the relevant source area (within which different mosaics of vegetation communities are separately reflected in the pollen assemblage) and the area beyond this, from which the pollen signal is homogeneous (whatever the mosaic of vegetation communities). This latter can be classed as background pollen (Sugita 1994, Davis 2000). For this reason it is necessary to map vegetation out to several hundred metres (even kilometres) from the trap, but the degree of detail of the mapping needs to be highest closest to the trap.

We suggest three scale categories:
- Within 10.5 m of the trap. Within this area it will be primarily the herbaceous vegetation which is recorded, and the plants will be identified to species. For this mapping we recommend the walking-in-circles method.
- For the area between 10.5 and 500 m from the trap, the main focus will be on the abundance and distribution of the trees, while the herbaceous and shrub vegetation will be mapped as vegetation units for which, as far as possible, the average species composition and abundance is known. Depending upon availability of data and the nature of the trapping locality, these vegetation data may be obtained from existing forest inventories or remote sensing of air photos. In some instances it may be most practical to map this intermediate area using the Bitterlich method described below.
- For the area from 500 m out to 1-2 km, it is significant to know the patchiness of the vegetation, the relative distribution of forested and unforested land and, where possible, the species composition of distinguishable vegetation units. Such information may be obtained from forest inventories, remote sensing of air photos or basic topographical maps, depending upon availability.

Recording vegetation in the field by walking in circles

The mapping method described here has been developed primarily by the POLLANDCAL (Pollen Land-use Calibration) group and we are grateful to them for inspiring discussions about pollen dispersal and for permission to adapt their mapping strategy for PMP. A series of concentric rings are considered around each pollen trap. The innermost of these is a circle, centred on the trap, with a radius of 50 cm. Beyond this circle, rings of 1 m width are considered out to a distance of 6.5 m from the trap. Beyond this the width of the ring is increased to 2 m, out to a distance of 10.5 m. This gives an inner circle and 8 surrounding concentric rings (Figure 1). These are located in the field using a series of wooden stakes and a rope, marked off at 0.5 m and then in 1 m intervals. The rope is anchored at the trap and the person making the analysis literally walks in circles with it around the trap, recording the percentage cover of each species in each individual concentric ring, on a pre-prepared form (Figure 2).
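For later distance weighting it helps to have the area of each recording ring. The sketch below is a minimal illustration of the geometry implied by the boundaries above; the 1/distance weighting at the end is a hypothetical placeholder for demonstration, not part of the PMP protocol:

import math

# Ring boundaries (m): inner circle to 0.5 m, then 1 m wide rings to 6.5 m,
# then 2 m wide rings out to 10.5 m -- an inner circle plus 8 rings.
boundaries = [0.0, 0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 8.5, 10.5]

for inner, outer in zip(boundaries, boundaries[1:]):
    area = math.pi * (outer**2 - inner**2)   # annulus area in m^2
    midpoint = (inner + outer) / 2.0          # representative distance from the trap
    weight = 1.0 / max(midpoint, 0.25)        # hypothetical 1/d weighting
    print("ring %4.1f-%4.1f m: area %7.2f m^2, weight %.2f"
          % (inner, outer, area, weight))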
Figure 1: Plan for vegetation analyses to give distance-weighted plant abundance.

The innermost circle may be used to facilitate estimation of plant coverage during a vegetation survey using the walking-in-circles approach (Figure 3). Make a ring or hoop in a light material, e.g. plastic or metal, with a radius of 0.5 m. Use coloured tape to mark percentage sectors around the circle. When walking in circles, wind the walking rope around the centre pole (cp). Let one end of the rope remain fixed, aligned to the north, out to 10 m, and the other end moveable around the circle. The two rope ends allow you to divide your circle into cake slices of various percentage sizes. This enables you to train your perception of percentage coverage in a circle before starting the actual survey. This is particularly important each time you move out to a circle with a larger radius.

Figure 2: Vegetation record form. Bearing in mind the pollen taxonomic resolution, grasses and sedges may be recorded as Gramineae and Cyperaceae respectively and not identified to species. Note that with overlapping vegetation the total coverage in one ring may exceed 100%. Mosses/rocks/bare ground should also be given a percentage cover value. The position (compass orientation and distance) of individual trees occurring within the concentric rings is also marked on a paper copy, cf. Figure 1. This field method has been developed by Anna Broström in consultation with Shinya Sugita. See also Broström 2002.

Figure 3: Vegetation analysis of the innermost circle when using the walking-in-circles approach.

The Bitterlich method of estimating tree abundance

The Bitterlich method provides an easy, fast, simple and inexpensive way of estimating tree abundance as basal area per hectare. With the use of an angle gauge (Bitterlich stick, see Figure 4) all trees that are larger in diameter than a specified angle are counted in a circle from a central sampling point. The angle is set by the configuration of the Bitterlich stick. For simplicity, a stick of 1 m length with a 2 cm wide crosspiece made of cardboard, plastic or metal is suggested. The crosspiece is mounted at one end of the stick, while a notch or peephole can be fixed to the other end. When using a round stick the notch may not be necessary. The trees should be measured at breast height, and those that appear exactly as wide as the crosspiece may be counted as half. If a Bitterlich stick with the above-suggested configuration is used, the number of trees counted from the sample point is a direct estimate of abundance as basal area per hectare (m2 * ha-1). Because this way of estimating tree abundance is relatively quick, it is possible to survey larger areas around the pollen trap. In order to later distance-weight the abundance of different trees, it is important to record the positions at which the abundance was estimated. Two ways are suggested here, but different approaches may be followed.

Figure 4: Bitterlich stick for estimating tree abundance. The stick is held to the eye and pointed horizontally with the crosspiece end to each tree surrounding the sample point. If the tree appears wider than the crosspiece it is counted; otherwise it is excluded.

Figure 5: Suggested format for a table containing the results of estimating tree abundance using a hand-held GPS. UTM coordinates should be given to 1 metre; the UTM zone needs to be indicated.

In both cases the exact sample design should be adapted to the specific situation. If the pollen trap site is in a forest opening, the first ring of sampling points should be arranged near the forest edge. If the pollen trap is positioned more or less inside the forest, the first estimate should be made at the position of the pollen trap.
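Why the count equals basal area per hectare directly: a tree of diameter D subtends the gauge angle out to a distance of D times (stick length / crosspiece width), and the resulting basal-area factor works out to exactly 1 m2/ha per counted tree for the 1 m / 2 cm configuration. A short sketch (stick dimensions from the text above; the example counts are made up):

import math

stick_length = 1.00      # m, as suggested in the text
crosspiece_width = 0.02  # m

# A tree of diameter D is "in" out to a distance R = D * (L/w). Its
# per-hectare basal-area contribution is (pi*D^2/4) / (pi*R^2) * 10000,
# which is independent of D: this is the basal-area factor (BAF).
ratio = stick_length / crosspiece_width          # 50
baf = 10000.0 / (4.0 * ratio**2)                 # m^2/ha per counted tree
print("basal-area factor: %.2f m^2/ha per tree" % baf)   # -> 1.00

# Hypothetical counts at one sampling point (borderline trees count as half):
counts = {"Pinus": 7, "Betula": 3.5, "Quercus": 2}
for taxon, n in counts.items():
    print("%s: %.1f m^2/ha" % (taxon, n * baf))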
The size of the opening should be estimated and recorded separately.

- A hand-held GPS can be used to record the positions at which the count was conducted. This way the surveyor can move around freely and a large number of points can be used. Sampling points should be more frequent closer to the pollen trap, and a very homogeneous forest may be covered with fewer points than a patchy forest. The coordinates can later be saved in a table as UTM coordinates and combined with the abundance of the different trees in m2 * ha-1 (Figure 5).
- If no hand-held GPS is available, the position of sampling points can be recorded with a compass and some means of estimating distance (e.g. measuring tape, counting steps, topographical map). Along the compass directions sampling points can be set up at increasing intervals (Figure 6). At or near these points the tree abundance can be estimated a few times and averaged. The results should be presented in a table with the direction and distance from the trap as coordinates (Figure 7).

Figure 6: Suggested sampling design for estimating tree abundance around a pollen trap in a forest opening of about 100 metres in diameter.

The description of the Bitterlich method of estimating tree abundance is largely based on the relevant chapters in the textbook Aims and Methods of Vegetation Ecology by Mueller-Dombois and Ellenberg (1974). Jackson and Kearsley (1998) make use of the method and present another sampling setup that enables the presentation of the sampling points in a table format.

Figure 7: Suggested format for a table containing the results of estimating tree abundance when the sampling points are described by direction and distance measurements.

Vegetation mapping by remote sensing of digitised air photographs (ground resolution of 1 m)

Precise details of this will be provided at a later date.

References

Broström, A. 2002. Estimating source area of pollen and pollen productivity in the cultural landscapes of southern Sweden - developing a palynological tool for quantifying past plant cover. Doctoral Thesis, Lund University, Lund.
Davis, M.B. 2000. Palynology after Y2K - understanding the source area of pollen in sediments. Annual Review of Earth and Planetary Sciences 28:1-18.
Sugita, S. 1994. Pollen representation of vegetation in Quaternary sediments: Theory and method in patchy vegetation. Journal of Ecology 82:881-897.
Mueller-Dombois, D. and Ellenberg, H. 1974. Aims and Methods of Vegetation Ecology. J. Wiley and Sons.
Jackson, S.T. and Kearsley, J.B. 1998. Quantitative representation of local forest composition in forest-floor pollen assemblages. Journal of Ecology.
<urn:uuid:036a314d-3c14-46ac-832c-48938a4785c3>
3.796875
2,172
Tutorial
Science & Tech.
48.204799
Author: This chapter originated as part of Enhancement of the ANSI SQL Implementation of PostgreSQL, Stefan Simkovics' Master's Thesis prepared at Vienna University of Technology under the direction of O.Univ.Prof.Dr. Georg Gottlob and Univ.Ass. Mag. Katrin Seyr. This chapter gives an overview of the internal structure of the backend of PostgreSQL. After having read the following sections you should have an idea of how a query is processed. This chapter does not aim to provide a detailed description of the internal operation of PostgreSQL, as such a document would be very extensive. Rather, this chapter is intended to help the reader understand the general sequence of operations that occur within the backend from the point at which a query is received, to the point at which the results are returned to the client.
<urn:uuid:7d68951a-d352-4d04-bd00-f139b343ae3f>
2.796875
169
Truncated
Software Dev.
51.097283
It has now become all too common. Peculiar weather precipitates immediate blame on global warming by some, and equally immediate pronouncements by others (curiously, quite often the National Oceanic and Atmospheric Administration in recent years) that global warming can’t possibly be to blame. The reality, as we’ve often remarked here before, is that absolute statements of neither sort are scientifically defensible. Meteorological anomalies cannot be purely attributed to deterministic factors, let alone any one specific such factor (e.g. either global warming or a hypothetical long-term climate oscillation).

Let’s consider the latest such example. In an odd repeat of last year (the ‘groundhog day’ analogy growing ever more appropriate), we find ourselves well into the meteorological Northern Hemisphere winter (Dec-Feb) with little evidence over large parts of the country (most notably the eastern and central U.S.) that it ever really began. Unsurprisingly, numerous news stories have popped up asking whether global warming might be to blame. Almost as if on cue, representatives from NOAA’s National Weather Service have been dispatched to tell us that the event e.g. “has absolutely nothing to do with global warming”, but instead is entirely due to the impact of the current El Nino event.

So what’s really going on? The pattern so far this winter (admittedly after only 1 month) looks (figure on the immediate right) like a stronger version of what was observed last winter (figure to the far right; note that these anomalies reflect differences relative to a relatively warm 1971-2000 base period, which tends to decrease the amplitude of positive anomalies relative to the more commonly used, cooler 1961-1990 base period). This poses the first obvious conundrum for the pure “El Nino” attribution of the current warmth: since we were actually in a (weak) La Nina (i.e., the opposite of ‘El Nino’) last winter, how is it that we can explain away the anomalous winter U.S. warmth so far this winter by ‘El Nino’ when anomalous winter warmth last year occurred in its absence?

The second conundrum with this explanation is that, while El Nino typically does perturb the winter Northern Hemisphere jet stream in a way that favors anomalous warmth over much of the northern half of the U.S., the typical amplitude of the warming (see Figure below right) is about 1°C (i.e., about 2°F). The current anomaly is roughly five times as large as this. One therefore cannot sensibly argue that the current U.S. winter temperature anomalies are attributable entirely to the current moderate El Nino event. Indeed, though the current pattern of winter U.S. warmth looks much more like the pattern predicted by climate models as a response to anthropogenic forcing (see Figure below left) than the typical ‘El Nino’ pattern, neither can one attribute this warmth to anthropogenic forcing. As we are fond of reminding our readers, one cannot attribute a specific meteorological event, an anomalous season, or even (as seems may be the case here, depending on the next 2 months) two anomalous seasons in a row, to climate change. Moreover, not even the most extreme scenario for the next century predicts temperature changes over North America as large as the anomalies witnessed this past month. But one can argue that the pattern of anomalous winter warmth seen last year, and so far this year, is in the direction of what the models predict.
In reality, the individual roles of deterministic factors such as El Nino, anthropogenic climate change, and of purely random factors (i.e. “weather”) in the pattern observed thus far this winter cannot even in principle be ascertained. What we do know, however, is that both anthropogenic climate change and El Nino favor, in a statistical sense, warmer winters over large parts of the U.S. When these factors act constructively, as is the case this winter, warmer temperatures are certainly more likely. Both factors also favor warmer global mean surface temperatures (the warming is one or two tenths of a degree C for a moderate to strong El Nino). It is precisely for this reason that some scientists are already concluding, with some justification, that 2007 stands a good chance of being the warmest year on record for the globe.

A few other issues are worthy of comment in the context of this discussion. A canard that has already been trotted out by climate change contrarians (and unfortunately parroted uncritically in some media reports) holds that weather in certain parts of the U.S. (e.g. blizzards and avalanches in Colorado) negates the observation of anomalous winter warmth. This argument is disingenuous at best. As clearly evident from the figure shown above, temperatures for the first month of this winter have been above normal across the United States (with the only exceptions being a couple of small cold patches along the U.S./Mexico border). The large snowfall events in Boulder were not associated with cold temperatures, but instead with especially moisture-laden air masses passing through the region. If temperatures are at or below freezing (which is true even during this warmer-than-average winter in Colorado), that moisture will precipitate as snow, not rain. Indeed, snowfall is often predicted to increase in many regions in response to anthropogenic climate change, since warmer air, all other things being equal, holds more moisture and therefore has the potential for greater amounts of precipitation, whatever form that precipitation takes.

Another issue here involves the precise role of El Nino in climate change. El Nino has a profound influence on disparate regional weather phenomena. Witness for example the dramatic decrease in Atlantic tropical cyclones this most recent season relative to the previous one. This decrease can be attributed to the El Nino that developed over the crucial autumn season, which favored a strengthening of the upper-level westerlies over the tropical North Atlantic, increased tropical Atlantic wind shear, and a consequently less favorable environment for tropical cyclogenesis. If a particular seasonal anomaly appears to be related to El Nino, can we conclude that climate change played no role at all? Obviously not. It is possible, in fact probable, that climate change is actually influencing El Nino (e.g. favoring more frequent and larger El Nino events), although just how much is still very much an issue of active scientific debate. One of the key remaining puzzles in the science of climate change therefore involves figuring out just how El Nino itself might change in the future, a topic we’re certain to discuss here again in the future.
<urn:uuid:341940a1-09bf-4e10-8ce6-3d50dfbc392d>
2.9375
1,383
Nonfiction Writing
Science & Tech.
36.700943
Flora Fact: Multiplying Mangroves Rising warmth and salinity increase black mangrove numbers. By Sheryl Smith-Rodgers Whenever he’s out collecting data, Eric Madrid often hears old-timers describe how favorite fishing spots along the Texas Gulf Coast have changed. “They tell me that black mangrove are everywhere now, and they didn’t used to be nearly as common,” says Madrid, a botanist with Texas A&M University who’s studied the species since 2009. What’s up? “Warmer temperatures are a primary reason,” he explains. “We hypothesize that changes in water salinity brought about by the construction of the Intracoastal Waterway in the ’40s have also played a role in the expansion of black mangrove populations in Texas. Today, our state’s largest population of these shrubby trees grows in northern Corpus Christi Bay near Aransas Pass.” Madrid, who’s part of an international team monitoring the species in the Gulf of Mexico, can’t yet predict how larger populations of mangroves will affect Texas coastal ecosystems. But they are important. “Mangroves are part of the base of the food chain in Texas wetlands, and they also help to create habitat for fish, crabs, insects, small invertebrates and birds,” Madrid says. Black mangroves — named for the flaky, black bark — occur in wet soils dampened by high tides. To survive occasional submersions, mangrove roots send up hordes of pencil-like structures (called pneumatophores) that emerge from the ground and absorb oxygen. Another survival trick: Seeds sprout into seedlings (propagules) while still on the tree! After falling off, they can float up to a year before rooting.
<urn:uuid:1012c601-fa75-4d8e-acb4-d8a15ef498c6>
3.84375
401
Truncated
Science & Tech.
43.475564
run-parts runs a number of scripts or programs found in a single directory, directory. Filenames should consist entirely of upper- and lower-case letters, digits, underscores, and hyphens. Subdirectories of directory and files with other names will be silently ignored. Scripts must follow the #!/bin/interpretername convention in order to be executed. They will not automatically be executed by /bin/sh. The files found will be run in the lexical sort order of the filenames.

--test
    Print the names of the scripts which would be run, but don't actually run them.

--verbose
    Print the name of each script to stderr before running.

--report
    Similar to --verbose, but only prints the name of scripts which produce output. The script's name is printed to whichever of stdout or stderr the script first produces output on.

--umask=umask
    Sets the umask to umask before running the scripts. umask should be specified in octal. By default the umask is set to 022.

--arg=argument
    Pass argument to the scripts. Use --arg once for each argument you want passed.

--
    Specifies that this is the end of the options. Any filename after -- will not be interpreted as an option even if it starts with a hyphen.

--help
    Display usage information and exit.

Copyright (C) 1994 Ian Jackson. Copyright (C) 1996 Jeff Noxon. Copyright (C) 1996,1997,1998 Guy Maor
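A typical invocation might be run-parts --test /etc/cron.daily, which lists the scripts a daily cron run would execute without running them. (The path shown is the Debian convention; adjust it for your system.)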
<urn:uuid:e38cc8df-0295-47f7-baee-8c60e37323a5>
2.765625
321
Documentation
Software Dev.
64.013004
Given my blog title, how have I gone this long without discussing triangle geometry? I will rectify this gross negligence in the next few weeks. Let’s begin with a seemingly impossible fact due to Frank Morley.

Start with a triangle ABC, with any shape you wish. Now cut its angles into three equal parts, and extend these angle trisectors until they meet in pairs as illustrated below, forming triangle PQR. Morley’s amazing theorem says that this Morley triangle, PQR, will always be equilateral!

Why would this be true? Well, Morley’s theorem tells us that this diagram has three nice 60-degree angles in the middle, but we may suspect that, in fact, all of the angles are nice! This key insight lets us piece together the following argument, where we build up the diagram backwards from its constituent pieces. Draw seven separate “puzzle piece” triangles with angles and side-lengths as shown below, where the original triangle ABC has angles \(3\alpha,3\beta,3\gamma\) respectively. (Be sure to check that puzzle pieces with these specifications actually exist! Hint: use the Law of Sines.) Now, fit the pieces together: all matching edge-lengths are equal (by design), and the angles around vertex P add up to \(60+(\gamma+60)+(\alpha+120)+(\beta+60)=360\) and similarly for Q and R, so the puzzle fits together into a triangle similar to our original triangle ABC. But now these pieces must make up the Morley configuration, and since we started with an equilateral in the middle, we’re done with the proof!

But there’s more to the Morley story. Let’s push A and C to make them swap positions, dragging the trisectors and Morley triangle along for the ride (a process known as extraversion). We end up using some external angle trisectors instead of only internal ones, but the Morley triangle remains equilateral throughout. This gives us a new equilateral Morley triangle for our original ABC! In fact, each vertex of ABC has six angle trisectors (three pairs of two), and if you continue applying extraversions you’ll soon uncover that our original triangle ABC has 18 equilateral Morley triangles arranged in a stunning configuration!

Here’s a challenge: the diagram above has 27 equilateral triangles, but only 18 of them are Morley triangles (i.e., are made from a pair of trisectors from each vertex). Which ones are they?

- This is a slight modification of a proof by Conway and Doyle. [↩]
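Morley's theorem is also easy to sanity-check numerically. The sketch below is a self-contained illustration (not part of Conway and Doyle's proof): it builds the trisectors of an arbitrary triangle, intersects the pair nearest each side, and confirms the three intersection points are equidistant.

import math

def signed_angle(u, v):
    # Signed angle from vector u to vector v.
    return math.atan2(u[0]*v[1] - u[1]*v[0], u[0]*v[0] + u[1]*v[1])

def rotate(u, theta):
    c, s = math.cos(theta), math.sin(theta)
    return (c*u[0] - s*u[1], s*u[0] + c*u[1])

def trisector(V, U, W):
    # Direction of the trisector at vertex V adjacent to side VU:
    # the edge direction V->U rotated one third of the way toward V->W.
    vu = (U[0]-V[0], U[1]-V[1])
    vw = (W[0]-V[0], W[1]-V[1])
    return rotate(vu, signed_angle(vu, vw) / 3.0)

def intersect(P1, d1, P2, d2):
    # Intersection of lines P1 + t*d1 and P2 + s*d2 (2x2 Cramer's rule).
    det = d1[0]*(-d2[1]) - (-d2[0])*d1[1]
    t = ((P2[0]-P1[0])*(-d2[1]) - (-d2[0])*(P2[1]-P1[1])) / det
    return (P1[0] + t*d1[0], P1[1] + t*d1[1])

# Any triangle works; try a deliberately lopsided one.
A, B, C = (0.0, 0.0), (5.0, 0.0), (1.0, 3.0)

# Each Morley vertex is where the two trisectors nearest one side meet.
R = intersect(A, trisector(A, B, C), B, trisector(B, A, C))  # near side AB
P = intersect(B, trisector(B, C, A), C, trisector(C, B, A))  # near side BC
Q = intersect(C, trisector(C, A, B), A, trisector(A, C, B))  # near side CA

dist = lambda X, Y: math.hypot(X[0]-Y[0], X[1]-Y[1])
print(dist(P, Q), dist(Q, R), dist(R, P))  # three (nearly) equal lengths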
<urn:uuid:b4903fd2-71c9-4c47-8f14-653c6f0cdde1>
2.796875
578
Personal Blog
Science & Tech.
57.041571
We all know that the area of a circle is \(\pi r^2\) and the volume of a sphere is \(\frac{4}{3}\pi r^3\), but what about the volumes (or hypervolumes) of balls of higher dimension? For a fun exercise I had my multivariable calculus class compute the volumes of various balls using multiple integrals. The surprising results inspired this post.

First some terminology. An \(n\)-dimensional hypersphere (or \(n\)-sphere) of radius \(r\) is the set of points in \(\mathbf{R}^{n+1}\) satisfying \(x_1^2+\cdots+x_{n+1}^2=r^2\) (I’ll place the center at the origin for simplicity). For example, a 0-sphere is the two-point set \(\{-r,r\}\) on the real number line, a 1-sphere is a circle of radius \(r\) in the plane, and a 2-sphere is a spherical shell of radius \(r\) in 3-dimensional space. An \(n\)-dimensional ball (or \(n\)-ball) is the region enclosed by an \((n-1)\)-sphere: the set of points in \(\mathbf{R}^n\) satisfying \(x_1^2+\cdots+x_n^2\le r^2\). For example, a 1-ball is the interval \([-r,r]\), a 2-ball is a disk in the plane, and a 3-ball is a solid ball in 3-dimensional space.

It is possible to define “volume” in \(\mathbf{R}^n\): in \(\mathbf{R}^1\) it is length, in \(\mathbf{R}^2\) it is area, in \(\mathbf{R}^3\) it is ordinary volume, and in higher dimensions it is hypervolume. Let \(V_n(r)\) denote the volume of the \(n\)-ball of radius \(r\). It turns out that the volumes of \(n\)-balls satisfy the following remarkable recursion relation. (I’ll prove this relation at the end of the post.)

\(V_n(r)=\dfrac{2\pi r^2}{n}\,V_{n-2}(r)\)

It is not difficult to use this recurrence relation to obtain a formula for \(V_n(r)\). In particular, when \(n\) is even \(V_n(r)=\dfrac{\pi^{n/2}r^n}{(n/2)!}\) and when \(n\) is odd \(V_n(r)=\dfrac{2^{(n+1)/2}\pi^{(n-1)/2}r^n}{n!!}\), where \(n!!=1\cdot 3\cdot 5\cdots n\). (If you know what the gamma function is, you can express this as a single function, \(V_n(r)=\dfrac{\pi^{n/2}r^n}{\Gamma(n/2+1)}\).)

The volumes of the unit \(n\)-balls in the first 15 dimensions are given in the following table.

n  : 1     2      3      4      5      6      7      8      9      10     11     12     13     14     15
V_n(1): 2  3.1416 4.1888 4.9348 5.2638 5.1677 4.7248 4.0587 3.2985 2.5502 1.8841 1.3353 0.9106 0.5993 0.3814

If you look at the volumes of the unit balls you’ll see they increase at first, reaching a maximum in dimension 5. Then they decrease and tend to zero as the dimension goes to infinity. Strange!

First, what is special about dimension 5? Why is the maximum achieved in this dimension? It turns out that there is nothing special about dimension 5. Below is a GeoGebra applet that allows you to adjust the radii of the balls. As we can see, the maximum volume is not always attained by the ball in dimension 5. Indeed, as the radius increases, the maximum volume occurs in higher dimensions. As John Moeller points out, the powers of \(\pi\) and \(r\) in the numerator try to make \(V_n\) an increasing function; however, the factorials in the denominator always dominate in the end.

Second, what is the intuition behind this limit of zero? One way to see this is to observe that to be on the boundary of the unit \(n\)-ball, we must have \(x_1^2+\cdots+x_n^2=1\), but for this to happen when \(n\) is large, most of the \(x_i\)’s must be very close to zero. For example, the diagonal line \(x_1=x_2=\cdots=x_n\) intersects the unit sphere at \((\pm 1/\sqrt{n},\ldots,\pm 1/\sqrt{n})\). On the other hand, the corresponding corners of the hypercube that circumscribes the sphere are at \((\pm 1,\ldots,\pm 1)\), \(\sqrt{n}\) units from the origin. Thus the sphere fills up less and less of the hypercube that contains it. (Notice that the circumscribed hypercube has volume \(2^n\), while the inscribed hypercube has volume \((2/\sqrt{n})^n\).)

My colleague informed me that this zero limit is related to the curse of dimensionality in statistics. Volume increases rapidly as dimension increases, so it requires many more data points to get a good estimate. As Wikipedia points out, “100 evenly-spaced sample points suffice to sample a unit interval with no more than 0.01 distance between points; an equivalent sampling of a 10-dimensional unit hypercube with a lattice with a spacing of 0.01 between adjacent points would require \(10^{20}\) sample points.”

Now I will prove the recurrence relation that I gave above. One can check the relation directly for \(n=2\) and \(n=3\) (taking \(V_0(r)=1\) and \(V_1(r)=2r\)). Suppose \(n>3\).
First recall that if a solid in \(n\)-dimensional space is scaled by a factor of \(k\), then its volume increases by a factor of \(k^n\). In particular, this implies that \(V_n(kr)=k^n\,V_n(r)\).

Observe that the intersection of the \(x_1x_2\)-plane with the \(n\)-ball is a disk of radius \(r\) centered at the origin (see image below). Use polar coordinates \((s,\theta)\) to describe points in this disk. Then the perpendicular cross section of the \(n\)-ball at the point \((s,\theta)\) is an \((n-2)\)-ball of radius \(\sqrt{r^2-s^2}\). Thus we can compute \(V_n(r)\) by integrating the volumes of these cross sections over the disk. We do so using polar coordinates:

\(V_n(r)=\int_0^{2\pi}\!\!\int_0^r V_{n-2}\big(\sqrt{r^2-s^2}\big)\,s\,ds\,d\theta\)

By the scaling property, \(V_{n-2}\big(\sqrt{r^2-s^2}\big)=\Big(\tfrac{\sqrt{r^2-s^2}}{r}\Big)^{n-2}V_{n-2}(r)\), and \(\int_0^r (r^2-s^2)^{(n-2)/2}\,s\,ds=\Big[-\tfrac{(r^2-s^2)^{n/2}}{n}\Big]_0^r=\tfrac{r^n}{n}\). Hence \(V_n(r)=V_{n-2}(r)\cdot\tfrac{1}{r^{n-2}}\cdot 2\pi\cdot\tfrac{r^n}{n}=\tfrac{2\pi r^2}{n}\,V_{n-2}(r)\), as claimed.
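The recurrence also makes the table above easy to regenerate. A short sketch (the \(V_0=1\) convention is the standard one used in the check above):

import math

def ball_volumes(r, n_max):
    # Volumes V_1(r)..V_n_max(r) via V_n = (2*pi*r**2 / n) * V_(n-2).
    v = {0: 1.0, 1: 2.0 * r}   # V_0 = 1 (a point), V_1 = 2r (an interval)
    for n in range(2, n_max + 1):
        v[n] = (2.0 * math.pi * r**2 / n) * v[n - 2]
    return v

v = ball_volumes(1.0, 15)
for n in range(1, 16):
    print("n = %2d  V_n(1) = %.4f" % (n, v[n]))   # maximum at n = 5

# The dimension of maximum volume drifts upward as the radius grows:
for r in (1.0, 1.2, 1.5):
    v = ball_volumes(r, 50)
    best = max(range(1, 51), key=lambda n: v[n])
    print("r = %.1f -> max volume in dimension %d" % (r, best))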
<urn:uuid:63aa6e84-ff0e-418f-b01d-02750fc070fd>
3.5
901
Personal Blog
Science & Tech.
55.606465
Alongside Planck's constant and the speed of light in a vacuum, the gravitational constant, G, is one of the most important and fundamental constants in our universe. However, since Isaac Newton introduced it in 1686, the value of G has always been a little controversial. Despite centuries of physicists working on finding ever more accurate values (after the speed of light, the gravitational constant was the first to be measured scientifically), G is still the least accurately known physical constant. As an example, by the late 90s, Planck's constant was known to an accuracy 10,000 times greater than G!

To make things worse, new experiments started to produce results wildly different from this "accepted" value, in some cases up to 1% away from previous results. Some experiments showed G had a space and time variation of over 0.5% - this was fundamentally opposed to accepted theory. Was Newton wrong? Not in this case; it's just that G is so hard to determine. Compared to other forces, gravity is incredibly weak, meaning accurately measuring its effects is much harder than is the case with other forces.

Almost every experiment done to measure G, including the original one performed by Henry Cavendish, is based around a bar on the end of a thin fibre being placed near some objects of known mass. The attraction between the bar and the masses causes the fibre to twist, and by measuring that twist, it is possible to ascertain the forces involved, and hence G:

From the side:

           |  <--- fibre
+------+   |   +------+
|      |   |   |      |
| MASS | +----------------+ | MASS |
|  1   | |      BAR       | |  2   |
|      | +----------------+ |      |
|      |                    |      |

From the top:

+------+
| MASS |  +----------------+
|  2   |  |      BAR       |  +------+
+------+  +----------------+  | MASS |
                              |  1   |
                              +------+

There are a number of problems with this experiment that bring in serious systematic errors. One of these is that internal friction in the fibre can cause an unaccounted-for inaccuracy when the amount of fibre twist is measured. Also, the dimensions and mass of the bar had to be known to an incredible accuracy that challenged engineering techniques to the limit.

In 2000, a team from the University of Washington successfully addressed these issues to produce a new value for G. The first thing this team did (as other more recent attempts had) was to suspend the fibre from a rotating disc, and instead of just measuring a single twisting displacement, the fibre was continually rotated between the masses, so that perturbations in the rotation could be measured and averaged over time. Secondly, the bar, which was usually a thin shaft with dumbbells on either end, was replaced by a thin rectangle of metal, hung from the side. It turns out that this completely removes the need to know the characteristics of the pendulum at all! Also, to prevent the fibre from twisting at all, the speed of the rotating disc was controlled by feedback from the pendulum, so that the rotating disc's speed was perturbed, not the pendulum's. As well as making the perturbations easier to measure, the effects of internal fibre friction are completely eliminated.

The end result is that we have a figure for G which is now only 1000 times less accurate than Planck's constant: a relative standard uncertainty of 1.5 x 10-4. Considering the accuracy had only been improving by about a factor of 10 per century, a ten-fold improvement in a few years is quite extraordinary.
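To see why G is so hard to pin down, it helps to look at the size of the forces involved. A quick sketch (the masses and separation below are illustrative choices, not the Washington group's actual apparatus):

G = 6.674e-11  # m^3 kg^-1 s^-2

m1, m2 = 10.0, 10.0   # kg, two hypothetical lab masses
r = 0.10              # m, separation between their centers

F = G * m1 * m2 / r**2          # Newton's law of gravitation
print("gravitational attraction: %.2e N" % F)   # ~6.7e-7 N

# For comparison, the weight of a ~1 mg grain of sand:
print("weight of 1 mg: %.2e N" % (1e-6 * 9.81))

The attraction between two 10 kg masses a hand-span apart is smaller than the weight of a grain of sand, which is why fibre friction and the bar's mass distribution dominate the error budget.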
<urn:uuid:1a90866d-99b2-4ee9-99ff-77da654bb569>
4.15625
734
Knowledge Article
Science & Tech.
45.152153
There are 1760 yards in a mile. A certain wheel makes 17,600 revolutions in 40 miles. What is the number of yards in the radius of the wheel? Give your answer as a decimal to the nearest tenth.

the wheel travels a distance of \(\frac{40 \times 1760}{17600} = 4\) yards in one revolution, so that would mean the wheel's circumference is 4 yds, correct?

Oh, thank you so much! So you got the radius from that?

Well if the circumference is 4 yards, then you can divide it by 2 to get the radius.

Originally Posted by Espionage: Well if the circumference is 4 yards, then you can divide it by \(2\pi\) to get the radius.
correction ...
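A quick numeric check of the thread's answer (plain arithmetic, nothing assumed beyond the problem statement):

import math

yards_per_mile = 1760
distance_yd = 40 * yards_per_mile        # 70,400 yd in 40 miles
revolutions = 17600

circumference = distance_yd / revolutions    # 4.0 yd per revolution
radius = circumference / (2 * math.pi)       # r = C / (2*pi)
print("radius = %.1f yd" % radius)           # -> 0.6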
<urn:uuid:c524557c-c33e-4ada-b104-92a6a86d322a>
2.8125
153
Q&A Forum
Science & Tech.
78.006071
Science Sunday: The Reason for the Season

As I’m writing this it’s late evening, and I have been spending a few days at home. Funny thing is, even though I’m going to bed soon, the Sun isn’t setting. Actually, it won’t set for at least another month. During that time, it will circle the sky, dip a bit up and down, but never go under the horizon. So, how does that work?

The axial tilt, or obliquity, of the Earth is defined as the angle between the rotational axis and the orbital axis. Presently, the axial tilt is at about 23.5 degrees, a number that varies slightly over a 41,000-year cycle. Other astronomical bodies have different tilts, where some vary a lot over time. The stability of the Earth’s axial tilt is largely because of the presence of our Moon. Without it, the Earth would most likely tip drunkenly back and forth between different degrees of obliquity. This would not happen overnight, but it has been theorized that a greatly varying tilt would not be positive for the evolution of higher species. This is because a great change in tilt would cause great climate changes.

Subsequently, it is also this tilt that gives the Earth its seasons. If the Earth had been completely upright, then the length of day and night would never change, and different times of the year would look suspiciously alike. There would only be different climate zones depending on latitude. Instead most places have varied seasons, disregarding a thin belt around the equator.

A common misunderstanding is that the seasons are influenced heavily by the distance between the Earth and the Sun. In reality, the Earth is closest to the Sun in January, at 147 million kilometres, and furthest away in June, at 153 million kilometres. This causes a small difference in the heat received from the Sun, but the effect is much smaller than that caused by axial tilt.

To put it simply, the difference between seasons is mainly the average daytime temperature. The average daytime temperature is affected by how long the Sun is over the horizon, and also by how long it is at its peak elevation. In other words, winters become cold because the Sun doesn’t get high enough to heat the surface efficiently*.

As mentioned, if the Earth was completely upright, then every day would have the same length, and the same average temperature. This is because every point of the Earth – still depending on the latitude – would receive the same amount of sunlight, and for the same duration, every day of the year. Tilting the Earth changes this status quo, so that different places receive a different amount of sunlight during the year. The North and South Pole experience the most extreme effects of this, as the Sun is continuously over the horizon for one half of the year, and under the horizon for the other half. These are of course not the same halves, as the poles have summer and winter at opposite times of the year. The North Pole is in essence pointing away from the Sun for the six months that it is completely dark. This effect wanes as you get closer to the equator. Still, at the southernmost point of the Arctic Circle – at about 66.5 degrees latitude – there will be one day where the Sun doesn’t set, and one day with complete darkness. The effect is still strong enough over most of the Earth to be noticeable, with winter typically being darker than summer. For that, axial tilt is the reason for the season.
*This also explains why evenings are colder than noon; during the evening, the angle between the sun and the horizon is smaller, so that the surface is heated less.
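The day length behind all of this can be sketched with the standard sunrise-equation approximation (a simplified model that ignores atmospheric refraction and the Sun's disc size):

import math

def day_length_hours(latitude_deg, declination_deg):
    # Sunrise equation: cos(H) = -tan(latitude) * tan(declination),
    # where H is the hour angle at sunset; the Sun moves 15 degrees per hour.
    x = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(declination_deg))
    if x <= -1.0:
        return 24.0   # polar day: the Sun never sets
    if x >= 1.0:
        return 0.0    # polar night: the Sun never rises
    H = math.degrees(math.acos(x))
    return 2.0 * H / 15.0

# Solar declination is +23.5 degrees at the June solstice, -23.5 in December.
for lat in (0, 45, 66.5, 78, 90):
    print("lat %5.1f: June %5.1f h, December %5.1f h"
          % (lat, day_length_hours(lat, 23.5), day_length_hours(lat, -23.5)))

At the equator the two solstices both give about 12 hours; at 66.5 degrees (the Arctic Circle) the June value hits 24 hours, matching the one day of midnight sun described above.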
<urn:uuid:7126d38a-e966-4c5c-addd-71470d72ad14>
3.421875
774
Personal Blog
Science & Tech.
55.823125