text
large_stringlengths
148
17k
id
large_stringlengths
47
47
score
float64
2.69
5.31
tokens
int64
36
7.79k
format
large_stringclasses
13 values
topic
large_stringclasses
2 values
fr_ease
float64
20
157
Atmospheric Infrared Sounder The Atmospheric Infrared Sounder is an instrument onboard NASA's Aqua satellite, part of the space agency's Earth Observing System. The sounding system makes highly accurate measurements of air temperature, humidity, clouds and surface temperature. Its data will be used to better understand weather and climate, and by the National Weather Service and the National Oceanic and Atmospheric Administration to improve the accuracy of their weather and climate models. The instrument was designed and built by Lockheed Martin Infrared Imaging Systems (since acquired by BAE Systems) under contract with JPL. The Aqua satellite mission is managed by NASA's Goddard Space Flight Center. Instrument home page | Press Kit (PDF) | Aqua home page. Instrument to study Earth's atmosphere. Launch: May 4, 2002
<urn:uuid:994359c1-ec4f-4742-99bb-5554cc455037>
3.0625
165
Knowledge Article
Science & Tech.
20.608229
Reinventing Measures Talent As Much As Inventing Wed Aug 22 17:09:10 BST 2012 by alex What is significant is that a bonobo can learn and employ this technique. The matter of whether they could devise it would need a lot more exploration, but there is no reason why not, given time and the need. We know they can devise the use of a rock, and could learn from accidents that split rocks might be of more use. What this does do is make a good contribution to the issue of the point at which humans needed language to advance technically. It suggests that early human tool use did not require language. Perhaps it contributes to the idea that one major difference between Homo sapiens and Homo erectus (earlier ones especially) would be the degree and complexity of vocal communication, such that the inherent capacity for full language provided a major advantage to the emergent Homo sapiens.
<urn:uuid:9c5c4a34-cf15-4c08-b0c4-4cd15d26f500>
3.21875
198
Comment Section
Science & Tech.
40.876185
Relative Speed and Frame of Reference Name: Jane K. When you are driving side by side with another car, why does the other car appear to be not moving, even though it is moving at the same speed as your car? It appears to be standing still because, with respect to your frame of reference (your car), it is not moving. This assumes that your car and the other one are not moving at, or near, the speed of light. Update: June 2012
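The answer above amounts to subtracting velocities. A minimal sketch in Python, with hypothetical speeds chosen for illustration:

```python
# Two cars travelling in the same direction at the same speed
# (illustrative values, not from the text).
your_speed = 100.0   # km/h, measured from the road
other_speed = 100.0  # km/h, measured from the road

# Velocity of the other car as seen from your own frame of reference:
relative_speed = other_speed - your_speed
print(relative_speed)  # → 0.0, so the other car appears to stand still
```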
<urn:uuid:d1aef03b-7d08-469b-8630-2f941e39ab66>
2.890625
111
Knowledge Article
Science & Tech.
59.167926
In programming, a repetition within a program. Whenever a process must be repeated, a loop is set up to handle it. A program has a main loop and a series of minor loops, which are nested within the main loop. Learning how to set up loops is what programming technique is all about. The following example prints an invoice. The main loop reads the order record and prints the invoice until there are no more orders to read. After printing the date, name and address, the program prints a variable number of line items. The code that prints the line items is contained in a loop and repeated as many times as required. Loops are accomplished by various programming structures that have a beginning, a body and an end. The beginning generally tests the condition that keeps the loop going. The body comprises the repeating statements, and the end is a GOTO that points back to the beginning. In assembly language, the programmer writes the GOTO, as in the following example that counts to 10.

     MOVE "0" TO COUNTER
LOOP ADD "1" TO COUNTER
     COMPARE COUNTER TO "10"
     GOTO LOOP IF UNEQUAL

In high-level languages, the GOTO is generated by the interpreter or compiler; for example, the same routine as above using a WHILE loop.

COUNTER = 0
DO WHILE COUNTER <> 10
   COUNTER = COUNTER + 1
END DO

For a more detailed look at a loop, see event loop.
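The same count-to-ten routine can also be sketched in a modern high-level language (an illustrative translation, not part of the original definition); the loop keyword hides the generated backward branch:

```python
# Count to 10 with a while loop: the test at the top is the loop's
# "beginning", the statement inside is its "body", and the implicit
# jump back to the test replaces the hand-written GOTO.
counter = 0
while counter != 10:
    counter += 1
print(counter)  # → 10
```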
<urn:uuid:bb886ab7-755a-4989-a70f-40df7f3b0d41>
4.375
307
Knowledge Article
Software Dev.
55.739231
Aug. 31, 2010 The ozone layer, which protects humans, plants, and animals from potentially damaging ultraviolet (UV) light from the Sun, develops a hole above Antarctica in September that typically lasts until early December. However, in November 2009, that hole shifted its position, leaving the southern tip of South America exposed to UV light at levels much greater than normal. To characterize this event and to evaluate satellite monitoring capabilities, de Laat et al. analyze satellite and ground-based measurements of ozone levels and the UV index (UVI). They find that the ozone column over southern South America was especially thin from 11 to 30 November 2009, and significantly higher UVI values were measured. Such abnormally low ozone levels sustained during a continuous period of three weeks had not been observed above southern South America at any time in the past 30 years, the researchers say. The high UVI values occurred over populated regions, meaning that humans had been exposed to increased levels of UV light. The scientists also note that the satellite-based measurements agreed well with the ground-based measurements, suggesting that satellite measurements can be valuable for monitoring ozone and UV radiation levels. Authors of the study include: A. T. J. de Laat, R. J. van der A, M. A. F. Allaart, and M. van Weele: Royal Netherlands Meteorological Institute, de Bilt, Netherlands; G. C. Benitez: Observatorio Central de Buenos Aires, Servicio Meteorológico Nacional, Buenos Aires, Argentina and PEPACG, Pontificia Universidad Católica Argentina, Buenos Aires, Argentina; C. Casiccia: Laboratorio de Monitoreo de Ozono y Radiación Ultravioleta, University of Magallanes, Magallanes, Chile; N. M. Paes Leme: Ozone Laboratory, National Institute for Space Research, São José dos Campos, Brazil; E. Quel, J. Salvador, and E. Wolfram: División Lidar, CEILAP, CITEFA, CONICET, Río Gallegos, Argentina.
- A. T. J. de Laat, R. J. van der A, M. A. F. Allaart, M. van Weele, G. C. Benitez, C. Casiccia, N. M. Paes Leme, E. Quel, J. Salvador, E. Wolfram. Extreme sunbathing: Three weeks of small total O3 columns and high UV radiation over the southern tip of South America during the 2009 Antarctic O3 hole season. Geophysical Research Letters, 2010; 37 (14): L14805 DOI: 10.1029/2010GL043699
<urn:uuid:c7ae7159-e48a-4374-ac18-7c25ba24ea6b>
4.15625
627
Truncated
Science & Tech.
47.4925
Legendre Transformation The Legendre transformation is an operation which makes it possible to pass from a state function of a system, described by one set of variables, to another state function describing the same system using a different set of variables. In effect, this transformation makes it possible to create, starting from a state function ill suited to the particular conditions of the system, another state function better adapted to the system's actual configuration. The principle is the following. Let E be a state function of the variables B, D, ..., G whose differential, in the neighbourhood of a given state, can be written formally as dE = ... + F dG, where F is the variable conjugate to G. If F is non-zero around the point considered then, the other variables being fixed, the restriction of E to the variable G is locally invertible. One can then create a new state function E', depending this time on the set of variables in which G is replaced by its conjugate F, by setting E' = E - FG. (Had the differential contained the term -F dG, one would instead set E' = E + FG.) The differential of E' is then dE' = dE - F dG - G dF, which simplifies to dE' = ... - G dF. We see that the new state function E' has the same number of independent variables as E; the difference lies in the fact that E depended on the variable G whereas E' depends on the variable F. This is particularly useful when one has the practical means to impose on the system, from the outside, the value of F, which is not an original variable of the system. One thus creates a state function in which F becomes a variable of the system that one can then fix. For example, in thermodynamics a rigid container imposes the volume of the gas it contains independently of its pressure (if the container is sufficiently solid!), and the state function "internal energy", whose natural variables include the volume, suits that situation. The following lines show how one creates, starting from the state function "internal energy", a state function "enthalpy" that handles the opposite case, where the pressure rather than the volume is imposed.
If, on the contrary, the container is not a rigid bottle but, for example, a deflated balloon, it is no longer the volume that is imposed from outside but rather the pressure, and the state function internal energy is then ill adapted to the problem. On the whole, the Legendre transformation makes it possible to change the set of independent variables for a set better adapted to the problem considered. Example: the thermodynamic functions Let us consider, as an example, the internal energy U of a system, one of whose principal differentials can be written dU = T dS - p dV, which is well adapted to a context where one controls the volume V and the entropy S as independent variables. - Taking into account the presence of the term -p dV, we can make a Legendre transformation by adding pV to the state function U. - Taking into account the presence of the term T dS, we can make a Legendre transformation by subtracting TS from the state function U. By adding pV If pV is added, one obtains a new state function H = U + pV, which one calls the enthalpy. Its differential is then dH = T dS + V dp, and we see that one has obtained a state function H well adapted to a context where one controls the pressure p and the entropy S as independent variables. By subtracting TS If TS is subtracted, one obtains a new state function F = U - TS, which one calls the free energy. Its differential is then dF = -S dT - p dV, and we see that one has obtained a state function F well adapted to a context where one controls the volume V and the temperature T as independent variables, as in chemical reactions at constant volume and constant temperature. This is the case if one carries out a chemical reaction between several gases in a bomb calorimeter with gaseous products and brings the temperature of the products back to the temperature the reagents had before the reaction. By adding pV and subtracting TS One can, of course, carry out the two operations simultaneously.
This Legendre transformation enables us to obtain another state function G = U + pV - TS, which one calls the free enthalpy. Its differential is then dG = -S dT + V dp, and we see that one has obtained a state function G well adapted to a context where one controls the pressure p and the temperature T as independent variables, as in chemical reactions at constant pressure and constant temperature. This is the case if one carries out a chemical reaction in the open air, i.e. at atmospheric pressure, and brings the temperature of the products back to the temperature the reagents had before the reaction. The case of the chemical potential The exact differential of the internal energy in its original variables is dU = T dS - p dV + μ dN, where N is the number of particles of the system, which one may vary, and μ is the chemical potential of the species considered. The result of the Legendre transformation with respect to all the natural extensive variables is the grand potential, defined by Ω = U - TS - μN. One can define, in exactly the same way, generalized free enthalpies relating to electric properties (dielectricity, ferroelectricity, ...), to magnetic properties (diamagnetism, paramagnetism, ferromagnetism, ...), and to chemical potentials generalized to several different chemical species. The case of the Hamiltonian formalism in classical mechanics The relationship between the Lagrangian formalism and the Hamiltonian formalism in classical mechanics immediately evokes the Legendre transform. Let us start from the Lagrangian L(q, v), where v = dq/dt is the generalized velocity, and define the conjugate momentum p = ∂L/∂v, which is the variable conjugate to v. From this one defines the Hamiltonian H = pv - L, which is the opposite of the Legendre transformation of the Lagrangian.
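As a concrete check on the general principle above, the Legendre transform of a simple convex function can be computed numerically. This is a toy example, not from the text: f(x) = x**2 is an assumed illustration.

```python
# Toy numerical check of the Legendre transform.
def f(x):
    return x * x

def g(p):
    # Invert p = f'(x) = 2x to get x, then apply g(p) = p*x - f(x).
    x = p / 2.0
    return p * x - f(x)   # equals p**2 / 4

# The transform swaps the roles of the conjugate variables:
# the derivative of g with respect to p recovers x, just as the
# differential dE' above trades G dF for F dG (up to sign).
p = 3.0
h = 1e-6
dg_dp = (g(p + h) - g(p - h)) / (2 * h)
print(round(dg_dp, 6))  # → 1.5, i.e. x = p/2
```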
<urn:uuid:654ccdd6-f363-4988-a836-ae0e32063f1c>
3.03125
1,255
Tutorial
Science & Tech.
27.328387
This video from the USA is called Hurricane Sandy Hits Jamaica on Path Toward Cuba, Florida. The eastern US is bracing for Hurricane Sandy, which is expected to make landfall next week. The storm looks set to be a record-breaker, and could cause severe damage. New Scientist breaks down what we know about the storm, and what it is likely to do. Hurricane Sandy’s Five-Fold Flood Threat, with Local Maps: here. Hurricane Sandy and birds: here. Sandy and Haiti: here.
<urn:uuid:36c81dab-dc59-4d03-9169-9ed3318cfc0f>
2.734375
216
Personal Blog
Science & Tech.
49.119755
The South American Network on Antarctic Marine Biodiversity (BioMAntar) is a multinational project that was built through the Latin American Census of Antarctic Marine Life (LA CAML), but financed nationally. BioMAntar intends to foster interaction among scientists and administrators of the participating countries in order to increase efforts towards the understanding of Antarctic marine biodiversity. Argentina, Brazil, Chile, Ecuador, Peru, Uruguay, and more recently Venezuela have undertaken marine biological research in Antarctica, mainly around the Antarctic Peninsula and Drake Passage, and also in their relation to South America. This work aims to aggregate georeferenced Antarctic marine biodiversity data published by South American scientists, including information in the grey literature (mainly MSc dissertations and PhD theses found at different regional libraries). So far, the occurrence of 337 species from twenty different phyla has been recorded from the literature. These are mainly from King George Island and other islands of the South Shetlands. The majority of manuscripts relate to Chordata (33.26%), and of these 57% relate to fish, 25% to birds and 7.5% to marine mammals. Other articles report mainly on the following: Annelida (12.8%); Arthropoda (9.7%); Mollusca (9%).
<urn:uuid:8a4ed75b-baeb-4f9e-a61c-8173889ae673>
2.921875
267
Knowledge Article
Science & Tech.
26.098864
You know a match helps to light a fire, but did you know phosphorus is the key element used in making matchsticks? This video explains the chemical characteristics of phosphorus and why it is such a hot element. Plants are living beings of nature, just like humans and animals. Just as we humans need extra vitamins, calcium and other supplements beyond our normal nutrition to stay healthy, plants need fertilizers to grow and stay healthy. In order to grow, plants need a number of chemical elements. Let us find out how fertilizers work. Man invented fire ages ago, and to this day fire is an indispensable aspect of our everyday life. From cooking to lighting, we use fire directly or indirectly. We use matchsticks for lighting lamps and for many other things. Matchsticks contain phosphorus, so let us now find out more about phosphorus... We know that too much carbon dioxide in our atmosphere is causing climate change. If we increased forest cover on land, we could reabsorb some of the CO2. But did you know the best solution is actually in the oceans?
<urn:uuid:4072531f-a27e-44cc-ba52-82d82e294182>
3.484375
232
Content Listing
Science & Tech.
57.705642
What is a quantum dot? What is a nanowire? What is a nanotube? Why are they interesting and what are their potential applications? How are they made? This presentation is intended to begin to answer these questions while introducing some fundamental concepts such as wave-particle duality, quantum confinement, the electronic structure of solids, and the relationship between size and properties in nanomaterials. A more recent version of this talk titled "Nanotubes and Nanowires: One-dimensional Materials" is available. Researchers should cite this work as follows: Timothy D. Sands (2005), "Nanomaterials: Quantum Dots, Nanowires and Nanotubes," https://nanohub.org/resources/376. Purdue University, West Lafayette, IN
<urn:uuid:f1475433-a2ef-408c-b96d-7af6091c0d1e>
2.796875
168
Knowledge Article
Science & Tech.
26.876
Is it possible to remove ten unit cubes from a 3 by 3 by 3 cube made from 27 unit cubes so that the surface area of the remaining solid is the same as the surface area of the original 3 by 3 by 3 cube? 60 pieces and a challenge. What can you make, and how many of the pieces can you use, creating skeleton polyhedra? A very mathematical light - what can you see? Each of these solids is made up with 3 squares and a triangle around each vertex. Each has a total of 18 square faces and 8 faces that are equilateral triangles. Each has a band of 8 squares around the 'equator' and two square faces at the top and bottom (parallel to the equator) containing the 'north and south poles' at their centres. Draw the net for making each of the shapes and make the models for yourself, either with card or a plastic construction kit. How many faces, edges and vertices does each solid have? How many planes of symmetry and how many axes of rotational symmetry? The solid on the left is one of the classical semi-regular or Archimedean solids, but the one on the right was almost entirely ignored until it was made known by JCP Miller in the 1930s. Perhaps people thought the two were the same - can you describe the differences?
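The first question can be explored computationally. Here is a minimal helper (the coordinate representation is an assumption for illustration, and no solution is given away) that counts the exposed unit faces of a solid built from unit cubes:

```python
# Surface area of a solid made of unit cubes at integer coordinates:
# a face is exposed when no neighbouring cube covers it.
def surface_area(cubes):
    cubes = set(cubes)
    faces = 0
    for (x, y, z) in cubes:
        for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            if (x + dx, y + dy, z + dz) not in cubes:
                faces += 1
    return faces

# The intact 3 by 3 by 3 cube, as a baseline to compare removals against.
full = [(x, y, z) for x in range(3) for y in range(3) for z in range(3)]
print(surface_area(full))  # → 54 unit faces
```

Removing ten cubes and re-running `surface_area` on the remainder lets you test candidate answers directly.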
<urn:uuid:3634d9aa-2728-476c-bf83-2f85b70c2b59>
3.28125
292
Q&A Forum
Science & Tech.
59.947011
Do you know how to find the area of a triangle? You can count the squares. What happens if we turn the triangle on end? Press the button and see. Try counting the number of units in the triangle now. . . . Make an eight by eight square, the layout is the same as a chessboard. You can print out and use the square below. What is the area of the square? Divide the square in the way shown by the red dashed. . . . Show that among the interior angles of a convex polygon there cannot be more than three acute angles. Blue Flibbins are so jealous of their red partners that they will not leave them on their own with any other blue Flibbin. What is the quickest way of getting the five pairs of Flibbins safely to. . . . Is it true that any convex hexagon will tessellate if it has a pair of opposite sides that are equal, and three adjacent angles that add up to 360 degrees? Points A, B and C are the centres of three circles, each one of which touches the other two. Prove that the perimeter of the triangle ABC is equal to the diameter of the largest circle. Can you cross each of the seven bridges that join the north and south of the river to the two islands, once and once only, without retracing your steps? What can you say about the angles on opposite vertices of any cyclic quadrilateral? Working on the building blocks will give you insights that may help you to explain what is special about them. If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable. Decide which of these diagrams are traversable. ABC is an equilateral triangle and P is a point in the interior of the triangle. We know that AP = 3cm and BP = 4cm. Prove that CP must be less than 10 cm. Find the area of the annulus in terms of the length of the chord which is tangent to the inner circle. A standard die has the numbers 1, 2 and 3 opposite 6, 5 and 4 respectively, so that opposite faces add to 7.
If you make standard dice by writing 1, 2, 3, 4, 5, 6 on blank cubes you will find. . . . Semicircles are drawn on the sides of a rectangle ABCD. A circle passing through points ABCD carves out four crescent-shaped regions. Prove that the sum of the areas of the four crescents is equal to. . . . Is it possible to rearrange the numbers 1, 2, ..., 12 around a clock face in such a way that every two numbers in adjacent positions differ by any of 3, 4 or 5 hours? A huge wheel is rolling past your window. What do you see? This article introduces the idea of generic proof for younger children and illustrates how one example can offer a proof of a general result through unpacking its underlying structure. What happens to the perimeter of triangle ABC as the two smaller circles change size and roll around inside the bigger circle? There are four children in a family, two girls, Kate and Sally, and two boys, Tom and Ben. How old are the children? A paradox is a statement that seems to be both untrue and true at the same time. This article looks at a few examples and challenges you to investigate them for yourself. Consider the equation 1/a + 1/b + 1/c = 1 where a, b and c are natural numbers and 0 < a < b < c. Prove that there is only one set of values which satisfy this equation. Toni Beardon has chosen this article introducing a rich area for practical exploration and discovery in 3D geometry. Some puzzles requiring no knowledge of knot theory, just a careful inspection of the patterns. A glimpse of the classification of knots and a little about prime knots, crossing numbers and. . . . Take any whole number between 1 and 999, add the squares of the digits to get a new number. Make some conjectures about what happens in general. Patterns that repeat in a line are strangely interesting. How many types are there and how do you tell one type from another? In this 7-sandwich: 7 1 3 1 6 4 3 5 7 2 4 6 2 5 there are 7 numbers between the 7s, 6 between the 6s etc.
The article shows which values of n can make n-sandwiches and which cannot. Can you discover whether this is a fair game? What does logic mean to us and is that different to mathematical logic? We will explore these questions in this article. In how many distinct ways can six islands be joined by bridges so that each island can be reached from every other island... Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges. Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make? Can you arrange the numbers 1 to 17 in a row so that each adjacent pair adds up to a square number? Powers of numbers behave in surprising ways. Take a look at some of these and try to explain why they are true. Advent Calendar 2011 - a mathematical activity for each day during the run-up to Christmas. What happens when you add three numbers together? Will your answer be odd or even? How do you know? Can you fit Ls together to make larger versions of themselves? Look at three 'next door neighbours' amongst the counting numbers. Add them together. What do you notice? Look at what happens when you take a number, square it and subtract your answer. What kind of number do you get? Can you prove it? How many pairs of numbers can you find that add up to a multiple of 11? Do you notice anything interesting about your results? Can you visualise whether these nets fold up into 3D shapes? Watch the videos each time to see if you were correct. Here are some examples of 'cons', and see if you can figure out where the trick is. I start with a red, a blue, a green and a yellow marble. I can trade any of my marbles for three others, one of each colour. Can I end up with exactly two marbles of each colour? 
Spotting patterns can be an important first step - explaining why it is appropriate to generalise is the next step, and often the most interesting and important. This article invites you to get familiar with a strategic game called "sprouts". The game is simple enough for younger children to understand, and has also provided experienced mathematicians with. . . . Choose a couple of the sequences. Try to picture how to make the next, and the next, and the next... Can you describe your reasoning? Pick a square within a multiplication square and add the numbers on each diagonal. What do you notice? Can you find all the 4-ball shuffles? A game for 2 players that can be played online. Players take it in turns to select a word from the 9 words given. The aim is to select all the occurrences of the same letter. If you know the sizes of the angles marked with coloured dots in this diagram which angles can you find by calculation? Your partner chooses two beads and places them side by side behind a screen. What is the minimum number of guesses you would need to be sure of guessing the two beads and their positions? Find some triples of whole numbers a, b and c such that a^2 + b^2 + c^2 is a multiple of 4. Is it necessarily the case that a, b and c must all be even? If so, can you explain why?
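Several of the investigations listed above are easy to explore with a short program. For instance, repeatedly replacing a number by the sum of the squares of its digits (the starting value 145 is chosen here purely for illustration) quickly reveals a repeating pattern:

```python
# Iterate "sum of the squares of the digits" and watch for a cycle.
def digit_square_sum(n):
    return sum(int(d) ** 2 for d in str(n))

n = 145
seen = []
for _ in range(10):
    seen.append(n)
    n = digit_square_sum(n)
print(seen)  # → [145, 42, 20, 4, 16, 37, 58, 89, 145, 42]
```

Trying other starting values is a good way to form the conjectures the listing asks for.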
<urn:uuid:39ffa6c6-7333-4006-935c-aa45fc98c78a>
3.53125
1,708
Content Listing
Science & Tech.
75.209915
Ocean conditions surrounding the OOI RSN are bracketed on the pelagic or open-ocean end by the OOI global site at Ocean Station Papa, and on the coastal end by the Endurance Array. The OOI RSN resides in a complex system of currents, where wind- and tide-forced motions lead to turbulent mixing that aids transport of chemical and biological species. The Pacific Northwest is one of the most biologically productive regions of the world, yet hypoxia, ocean acidification, and harmful algal blooms are observed with increasing frequency. These complex physical, biological and chemical processes are all intertwined, and respond to forcing on a wide range of spatial and temporal scales. The water column moorings at Hydrate Ridge and Axial Seamount are well suited to resolve these processes, and the system’s response to changing forcing conditions resulting from climate change. The unprecedented power (375V) and bandwidth (Gb/s) capabilities of these moorings allow for a broad suite of sensors that include real-time digital imaging and acquisition of high bandwidth sonar and hydrophone data for biological applications. Though both moorings are in about 3000 m of water, they have very different oceanographic foci. The mooring at Hydrate Ridge is situated adjacent to the coastal continental slope at the end of the Endurance Oregon Line, and in concert with the northern Endurance Washington Line, provides a unique opportunity for investigating a variety of interdisciplinary coastal studies. The coastal region of the Pacific Northwest is a classic wind-driven upwelling system. However, the presence of the Columbia River plume and the range of trajectories with which it can impinge on the ocean, and the strong variability of the width of the continental shelf, all play strong roles in setting the system’s response and behavior.
In addition, the aforementioned large-scale systems affect the coastal region by modulating the pycnocline, nutricline and oxycline depths and offshore pressure gradients, which in turn affect the onshore transport of physical, biological and chemical quantities. The presence of internal waves driven by waves and tides, their interaction with the larger-scale currents, and their eventual breakdown into turbulence, are also vital to setting properties in the coastal region. All of these are expected to change strongly over time, but will be well resolved by the measurements at Hydrate Ridge, the Endurance Array, and supporting shipboard work. In contrast to the margin setting of Hydrate Ridge, Axial Seamount is far from the continental shelf and hence represents an open-ocean or pelagic site in the continuum of observing scales represented in the OOI’s cabled system. Here, large-scale currents including the North Pacific Current, the subpolar gyre and the northern end of the California Current interact. These currents transport heat, salt, oxygen, and biota, all of which are crucial to the region’s ecosystem. However, their variability arises from forcing as varied as tides and wind (0.5--5 day timescales) to interannual (El Niño) to decadal (Pacific Decadal Oscillation) timescales. Examples of relevant science questions represented in the OOI Science Requirements include 1) Internal tides are ubiquitous vertical motions formed by tidal currents flowing past bottom features such as Axial Seamount. How, and how strongly, do they break down into turbulence, and what are the feedbacks on the large scale current system? 2) What is the impact of long- and short-term forcing changes on the structure and transports of the large-scale current system – and what are their effects on the ecosystem? Together with the mooring at Ocean Station Papa, these processes can be studied with observing platforms in the water column at these two sites.
<urn:uuid:1f35ae6e-723f-4a20-9b04-f07671b7610b>
2.71875
776
Academic Writing
Science & Tech.
23.633002
A system that runs on hydrogen generates almost no pollution, making it far preferable to fossil fuels. Hydrogen is abundant on Earth and is found combined with other elements in compounds such as water (H2O). By harnessing the energy of hydrogen you can produce energy that is renewable, and also clean for everyone and for the planet in general. This technology is something NASA has been using for years, but has also investigated for the greater good of daily life. Hydrogen energy is stored using fuel cells, and fuel cells are what power spacecraft during lift-off. Because hydrogen is not found by itself, the first step in building a fuel cell is to separate the hydrogen from another material. You may not know this, but hydrogen occurs in many things other than water, such as natural gas, a material often used as a hydrogen source. One approach uses heat to separate hydrogen from natural gas; when water is used, a similar process, known as electrolysis, separates the hydrogen and oxygen with an electric current. The result of each of these processes is more usable energy, delivered by the hydrogen, than the process itself consumes. This type of energy has been investigated because it is considered clean energy. In the space shuttle, for example, the by-product of burning hydrogen to power lift-off is purified water, which the astronauts can then drink. Hydrogen energy is therefore not only clean but also useful. You can compare fuel cells to batteries, since they work in a similar way; however, a fuel cell never loses its charge, and will keep working until its hydrogen supply is cut off.
Inside the cell, oxygen combines with hydrogen, and the chemical reaction that forms water gives off heat and electricity. The water can then be split back into hydrogen and oxygen, and the process begins all over again. As for the future, some experts have predicted that these fuel cells will come into widespread use. Today, researchers are still working on making the method practical; once the process is judged affordable, hydrogen may become the remedy to many of the world's energy problems. This process would be both renewable and clean for the planet.
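The energy figures involved can be sketched with textbook constants. This back-of-the-envelope calculation uses approximate values I am assuming, not numbers taken from the article:

```python
# Rough energy content of hydrogen fuel, from its heat of combustion.
HHV_H2 = 286_000.0      # J/mol, higher heating value of H2 (approx.)
MOLAR_MASS_H2 = 2.016   # g/mol

grams = 1000.0                       # 1 kg of hydrogen
moles = grams / MOLAR_MASS_H2
energy_megajoules = moles * HHV_H2 / 1e6
print(round(energy_megajoules, 1))   # → roughly 142 MJ per kg of H2
```

That figure, about three times the energy per kilogram of gasoline, is why hydrogen is attractive as a fuel despite the storage and production difficulties the article alludes to.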
<urn:uuid:21648f6e-6053-45fd-a8eb-78f67247da68>
3.140625
523
Knowledge Article
Science & Tech.
33.785233
Many spiral galaxies have bars across their centers. Even our own Milky Way Galaxy is thought to have a modest central bar. The prominently barred spiral galaxy NGC 1672, pictured above, was captured in spectacular detail in an image taken by the orbiting Hubble Space Telescope. Visible are dark filamentary dust lanes, young clusters of bright blue stars, red emission nebulas of glowing hydrogen gas, a long bright bar of stars across the center, and a bright active nucleus that likely houses a supermassive black hole. Light takes about 60 million years to reach us from NGC 1672, which spans about 75,000 light years across. NGC 1672, which appears toward the constellation of the Dolphinfish (Dorado), is being studied to find out how a spiral bar contributes to star formation in a galaxy's central regions.
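Taking the article's own figures (about 60 million light-years away and 75,000 light-years across), a small-angle estimate gives the galaxy's apparent size on the sky; the only machinery involved is the radian-to-arcminute conversion.

```python
import math

distance_ly = 60e6   # light-travel distance quoted above
diameter_ly = 75e3   # quoted diameter

theta_rad = diameter_ly / distance_ly            # small-angle approximation
theta_arcmin = math.degrees(theta_rad) * 60.0
print(f"apparent size ~= {theta_arcmin:.1f} arcminutes")  # roughly 4.3'
```

A few arcminutes is comfortably resolvable by Hubble, which is why the dust lanes and star clusters stand out in such detail.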
<urn:uuid:b24acc15-5926-426a-8d7d-d2a79d949835>
3.890625
181
Knowledge Article
Science & Tech.
42.686275
John Baez has written up a short history of some of Earth’s disasters. These include the Big Splat, also known as the formation of the Moon, the Heavy Bombardment, the Oxygen Catastrophe, and Snowball Earth. In 2004, the astrophysicist Robin Canup, at the Southwest Research Institute in Texas, published some remarkable computer simulations of the Big Splat. To get a moon like ours to form — instead of one too rich in iron, or too small, or wrong in other respects — she had to choose the right initial conditions. She found it best to assume Theia is slightly more massive than Mars: between 10% and 15% of the Earth’s mass. It should also start out moving slowly towards the Earth, and strike the Earth at a glancing angle. The result is a very bad day. Theia hits the Earth and shears off a large chunk, forming a trail of shattered, molten or vaporized rock that arcs off into space. Within an hour, half the Earth’s surface is red-hot, and the trail of debris stretches almost 4 Earth radii into space. After 3 to 5 hours, the iron core of Theia and most of the debris come crashing back down. The Earth’s entire crust and outer mantle melt. At this point, a quarter of Theia has actually vaporized! Someone needs to send this to Bill O’Reilly. Mars has two moons. Both are probably captured asteroids. Saturn has 62 moons, the last one being discovered in 2009. Science – It Works.
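The quoted Theia range is easy to sanity-check against Mars. Using standard mass values (assumed here; they are not from the post), Mars comes out near the bottom of Canup's window of between 10% and 15% of Earth's mass:

```python
# Cross-check of the quoted mass range (standard values, assumed).
M_EARTH_KG = 5.972e24
M_MARS_KG = 6.417e23

ratio = M_MARS_KG / M_EARTH_KG
print(f"Mars is {ratio:.1%} of Earth's mass")  # about 10.7%
```

So "slightly more massive than Mars" and "between 10% and 15% of the Earth's mass" are indeed the same statement.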
<urn:uuid:827fb8f9-3789-4dcf-8702-ea6839e7a542>
3.5625
332
Personal Blog
Science & Tech.
70.012416
National Wetlands Update, Issue No. 21, September 2012. Rewetting - response of arid floodplain wetlands following extensive drought. Cherie Campbell, Murray-Darling Freshwater Research Centre, Mildura. Abundant aquatic macrophytes following environmental watering at Scottie's Billabong. Management intervention, in the form of environmental watering during the drought, may have enabled wetlands to respond more favourably to the 2010–11 flood. However, other processes associated with the flooding, such as sediment deposition, may have inhibited the development of submerged macrophyte communities. This article reports on monitoring of vegetation communities in the lower Murray-Darling Basin, specifically at two of The Living Murray (TLM) icon sites, Hattah Lakes and Lindsay-Mulcra-Wallpolla Islands (LMW), as well as wetlands in New South Wales (NSW) downstream of the confluence of the Murray and Darling Rivers. Prior to the flooding in 2010–11, the last overbank flow in this region of the Murray River was in 2000–01, which inundated most wetlands and low-lying parts of the floodplain. During the drought, environmental water was delivered to a number of wetlands to maintain ecological values, with individual wetlands receiving between one and 10 watering events from 2004 until spring 2010. Following inundation in 2010–11 there was mass germination of wetland species as the floodwaters receded. Positive signs of resilience and recovery potential include the establishment of swamp lily (Ottelia ovalifolia ssp. ovalifolia) at a wetland site dry for 10 years; high plant species diversity, including a relatively large number of flow-dependent plant species listed as vulnerable in Victoria, such as lagoon nightshade (Solanum lacunarium) and jerry-jerry (Ammannia multiflora); and a large number of river red gum and black box seedlings. Germination of river red gum seedlings and a diversity of wetland plants following flood recession at Mulcra Island flood plain.
(C.Campbell) Preliminary results indicate wetlands that received environmental water during the drought typically responded to the 2010–11 flood with greater abundance and diversity of wetland plants than wetlands without management intervention. However, a paucity of submerged macrophytes was observed following flooding, including at wetlands known to have developed very abundant and diverse macrophyte communities following environmental watering. One potential explanation is sediment deposition during the recent flood. Dense mats of nardoo (Marsilea spp.) rhizomes could be felt underneath about 50 centimetres of sediment at Scottie's Billabong on Lindsay Island. It is hoped that ongoing monitoring will help identify how management intervention during the drought may be influencing the response of successional wetland vegetation communities as the sites continue to draw down and dry. LMW and Hattah Lakes monitoring is funded by The Living Murray program, which is a joint initiative funded by the NSW, Victorian, South Australian, Australian Capital Territory and Australian governments, coordinated by the Murray-Darling Basin Authority. The provision of environmental water and associated monitoring of wetlands in the NSW Lower Murray-Darling Catchment has been funded by Murray-Darling Wetlands Ltd. (formerly the Murray Wetlands Working Group) and the NSW Office of Environment and Heritage. For further information contact: Cherie Campbell, Murray-Darling Freshwater Research Centre, Mildura, email@example.com. Jerry-jerry (Ammannia multiflora), vulnerable in Victoria, was frequently observed post-flooding at Mulcra Island flood plain. (C.Campbell)
<urn:uuid:bba43f7c-ff79-4d0e-9a69-efea31a5e762>
2.9375
770
Knowledge Article
Science & Tech.
21.132318
Facts about Lutetium - Element included on the Periodic Table

Facts about the Definition of the Element Lutetium
The Element Lutetium is defined as... A silvery-white rare-earth element that is exceptionally difficult to separate from the other rare-earth elements, used in nuclear technology.

Interesting Facts about the Origin and Meaning of the element name Lutetium
What are the origins of the word Lutetium? The name originates from the Latin word Lutetia, meaning Paris.

Facts about the Classification of the Element Lutetium
Lutetium is classified as an element in the Lanthanide series, one of the "Rare Earth Elements", which can be located in Group 3 of the Periodic Table and in the 6th and 7th periods. The Rare Earth Elements are divided into the Lanthanide and Actinide series. The elements in the Lanthanide series closely resemble lanthanum, and one another, in their chemical and physical properties. Their compounds are used as catalysts in the production of petroleum and synthetic products.

Brief Facts about the Discovery and History of the Element Lutetium
Lutetium was discovered by French scientist Georges Urbain and Austrian mineralogist Baron Carl Auer von Welsbach in 1907.

Occurrence of the element Lutetium
Obtained from gadolinite and xenotime.

Common Uses of Lutetium
No known uses.

The Properties of the Element Lutetium
Name of Element: Lutetium
Symbol of Element: Lu
Atomic Number: 71
Atomic Mass: 174.967 amu
Melting Point: 1656.0 °C (1929.15 K)
Boiling Point: 3315.0 °C (3588.15 K)
Number of Protons/Electrons: 71
Number of Neutrons: 104
Crystal Structure: Hexagonal
Density @ 293 K: 9.85 g/cm3

The element Lutetium and the Periodic Table
Find out more facts about Lutetium on the Periodic Table, which arranges every chemical element according to its atomic number, as based on the periodic law, so that chemical elements with similar properties are in the same column.
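As a quick sanity check, the Celsius and Kelvin figures in the properties list are mutually consistent; the 273.15 offset is the standard conversion and the only assumption here.

```python
def celsius_to_kelvin(celsius):
    """Standard Celsius -> Kelvin conversion."""
    return celsius + 273.15

# Values from the properties list above
assert abs(celsius_to_kelvin(1656.0) - 1929.15) < 1e-9   # melting point
assert abs(celsius_to_kelvin(3315.0) - 3588.15) < 1e-9   # boiling point
print("melting/boiling point conversions check out")
```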
Our Periodic Table is simple to use - just click on the symbol for Lutetium for additional facts and info and for an instant comparison of the Atomic Weight, Melting Point, Boiling Point and Density (g/cc) of Lutetium with any other element. An invaluable source for more interesting facts and information about the Lutetium element and as a Chemistry reference guide. Facts and Info about the element Lutetium - IUPAC and the Modern Standardised Periodic Table. The Standardised Periodic Table in use today was agreed by the International Union of Pure and Applied Chemistry, IUPAC, in 1985, and includes the Lutetium element. The famous Russian scientist, Dimitri Mendeleev, devised the classification method of "the periodic table" for the 65 elements which were known in his time. Lutetium was discovered by French scientist Georges Urbain and Austrian mineralogist Baron Carl Auer von Welsbach in 1907. The Standardised Periodic Table now recognises more periods and elements than Dimitri Mendeleev knew in his day, but all still fitting into his concept of the "Periodic Table", in which Lutetium is just one element that can be found.
<urn:uuid:f6ed9f11-cead-4125-85cc-e3f43eb52abd>
3.59375
777
Knowledge Article
Science & Tech.
30.481217
Hydroelectric schemes usually generate a barrage of criticism from conservationists. But the flooding of a Venezuelan valley 20 years ago has provided ecologists with the ideal outdoor laboratory to answer one of ecology's oldest and thorniest questions: why is the world green? Reporting their results in the March issue of the British Ecological Society's Journal of Ecology, a team led by Professor John Terborgh of Duke University says that the role of predators is the key to keeping the world green, because they keep the numbers of plant-eating herbivores under control. Their results support the so-called “green world hypothesis” first proposed in 1960 by Hairston, Smith and Slobodkin, and seem to lay to rest the competing theory that plants protect themselves from being eaten through the physical and chemical defences they have developed. Despite being nearly 50 years old, the green world hypothesis has been almost impossible to test until now. According to Terborgh: “Since the landmark paper by Hairston et al, ecologists have been debating whether herbivores are limited by plant defences or by predators. The matter is trivially simple in principle, but in practice the challenge of experimentally creating predator-free environments in which herbivores can increase without constraint has proven almost insurmountable.” Along with colleagues from Harvard and Wake Forest University, Terborgh realised that the hypothesis could be tested on a vast hydroelectric scheme in Venezuela's Caroni Valley, where in 1986 an area of 4,300 square kilometres was flooded to create a lake (Lago Guri) containing hundreds of land-bridge islands that were formerly fragments of a continuous landscape. Terborgh and his team monitored the vegetation at 14 sites of differing size. Nine of the sites were on predator-free islands, while the others were on the mainland or on islands with a complete or nearly complete suite of predators.
They found that by 1997, small sapling densities on small islands were only 37% of those on large land masses, and by 2002 this had fallen to just 25%. Most of the vertebrates present in the regional dry forest ecosystem had disappeared from small islands, including fruit eaters and predators of vertebrates, leaving a hyperabundance of generalist herbivores such as iguanas, howler monkeys and leaf-cutter ants. “Mere numbers do not do justice to the bizarre condition of herbivore-impacted islets. The understory is almost free of foliage, so that a person standing in the interior sees light streaming in from the edge around the entire perimeter. There is almost no leaf litter, and the ground is bright red from the subsoil brought to the surface by leaf-cutter ants. Dead twigs, branches and vine stems from canopy dieback litter the ground, and in places lie in heaps. But in striking contrast with this scenario of destruction, the medium islands presented a relatively normal appearance,” Terborgh says. As well as supporting the green world hypothesis, Terborgh's results have important implications for the debate raging in many countries over the reintroduction of top predators such as wolves. “The take-home message is clear: the presence of a viable carnivore guild is fundamental to maintaining biodiversity,” he says.
<urn:uuid:129a1d7a-f936-424f-9942-02f3ff408cf9>
4
1,239
Content Listing
Science & Tech.
43.570499
A laser infrastructure of a power unique in the world for extreme physics. This new scientific infrastructure will house the most powerful laser ever built. It will be dedicated to laser research. The preparatory phase of the project, run jointly by the Applied Optics Laboratory (LOA), ENSTA ParisTech, the École Polytechnique and the CNRS, is approaching completion. Gérard Mourou, director of the Institute of Extreme Light (ILE), is the instigator of the project. ELI (Extreme Light Infrastructure) will concentrate a large quantity of light energy in the shortest possible timescale (femtoseconds, 10^-15 of a second) and in the smallest possible space (roughly one micron = 10^-6 m) to obtain a light power never previously obtained on Earth (200 PW, equivalent to 100,000 times the power produced by all the Earth's electricity-generating installations). It will be an outstanding aid for the fundamental study of laser-matter interaction at a level of intensity that has never before been equalled. The intensity levels reached will be enough to boil the vacuum and create fundamental particles. They could recreate the conditions that were prevalent just a few milliseconds after the Big Bang. ELI could enable us to analyse ultra-rapid phenomena occurring at the attosecond-zeptosecond level. It will also aim to promote relativistic engineering leading to the development of very compact accelerators delivering very high energy particles and photons. This interaction between light and matter will enable us to study fields that are as yet unexplored. ELI applications will also affect materials science and more basic applications such as the study of the vacuum structure. These new technologies will have a significant social impact in medicine with imaging and cancer treatment, in materials science with the potential to understand and slow down the ageing mechanism in nuclear reactors, and in the environment by offering new methods for treating nuclear waste.
ELI is a partnership between 13 European countries.
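The headline numbers imply staggering focused intensities. A sketch, assuming the full 200 PW is focused onto a one-square-micron spot (the spot size mentioned above; the assumption that all the power reaches that spot is illustrative):

```python
power_w = 200e15                      # 200 PW, as quoted
spot_side_cm = 1e-4                   # 1 micron expressed in centimetres
spot_area_cm2 = spot_side_cm ** 2     # 1e-8 cm^2

intensity_w_cm2 = power_w / spot_area_cm2
print(f"focused intensity ~ {intensity_w_cm2:.0e} W/cm^2")  # ~2e25
```

Intensities of order 10^25 W/cm^2 are indeed the regime where relativistic engineering and vacuum-physics experiments of the kind described become conceivable.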
<urn:uuid:7659ab0c-ac23-4d9a-ada1-00822b479430>
2.96875
405
Knowledge Article
Science & Tech.
26.527717
It's one thing to hop around on another world for two or three days at a time, as the Apollo astronauts did. But it's something else to live and work on Mars for weeks on end, as the first human explorers to the Red Planet will do. For one thing, those astronauts will need a suit that can stand up to the abuse, and MIT professor of Aeronautics and Astronautics Dava Newman thinks she has a better spacesuit for the job, allowing full mobility along with life support. In this design, displayed in a new prototype form for the first time at AMNH, only the helmet needs to provide the air for a person to breathe. The rest of the suit squeezes astronauts to keep their bodies under Earth-normal pressure, rather than inflating as conventional designs do.
<urn:uuid:1b2e3c30-9b37-43f4-a6b7-e07579f9b008>
2.6875
163
Truncated
Science & Tech.
51.515
Mike Fay and Enric Sala are exploring the waters off the coast of Gabon with an eye toward protecting its ecosystem. This picture was taken when their remotely operated vehicle (ROV) selected a shell from the sea floor. When we picked up the shell from the ROV’s arm, to our surprise, a small octopus came out of the shell. It was a female that had laid her eggs inside the shell. We put the shell and octopus in a tank with seawater, and after one minute thousands of octopus larvae started to stream out of the shell. The octopus eggs were hatching! That was the first time we had observed such a magnificent show. The larvae were changing coloration from transparent with dark spots to brown, and swimming like squid – although on a millimeter scale. (Image credit: Enric Sala)
<urn:uuid:c55d7835-7c58-4cdd-bfad-61b02ce6d3b0>
3
175
Truncated
Science & Tech.
56.908859
Damselflies are brightly coloured insects which, like dragonflies, are acrobatic masters of the air as they hunt for their prey. Damselflies are delicate and very thin, and fold their wings back over their bodies at rest. You can sometimes spot clouds of them flitting over the water surface and amongst vegetation on sunny days. They feed mainly on mosquitoes, midges and larger insects. The reed fringes of many of our canals and rivers provide excellent breeding sites and hunting grounds for damselflies - and the Canal & River Trust's maintenance programme involves the creation and improvement of canal banks with damselflies in mind. In the past, any work on canal banks would have involved steel sheet piles. Today, with our greater emphasis on habitat creation, soft banks are created either using coir rolls or hazel faggots. This allows the growth of reed fringes, ideal habitats for many species of insects, particularly damselflies. The Canal & River Trust, formerly British Waterways, has been a supporter of the British Dragonfly Society for over ten years, during which time it has been a member of its steering group.
<urn:uuid:f371a904-22e9-4a2b-b1a3-773bc65e0de9>
3.359375
236
Knowledge Article
Science & Tech.
44.272077
Cellular Automata Coupled by Overlap or Common Boundary Cellular automata (CA) are often treated as isolated systems with simple cyclic or Dirichlet boundary conditions. Realistic systems, in contrast, interact with the environment through a boundary. Boundaries can be as simple as solid body surfaces, as complex as walls of living cells, or even non-geometric boundaries of social systems. A boundary, having an intricate structure and being a coupling link to the environment, can strongly influence the system dynamics. Here two elementary CA colored red and blue interact via a boundary consisting of black shared cells. The configuration of coupling is schematically shown on the image at the lower-left corner of the graphic. Even the single cell boundary can significantly alter the dynamics of CA, as shown on the fourth row of snapshots. To see clearly whether the dynamics was altered, switch a few times between the given and zero values of the control "overlap" for comparison. If the overlap is made significantly large, it can be considered as a 2-color range-2 CA interacting with two elementary CA. The boundary in this case is the line separating CA of different color. The CA exchange information via the bordering cells. A few examples of cases when the dynamics are significantly altered by the CA coupling are shown on the third row of snapshots. The Demonstration also shows how larger neighborhood CA patterns arise from the multiple action of smaller neighborhood CA. Simply compare the red and blue patterns to the black ones, which should be done with a substantial number of black cells (with the control "overlap" much larger than 5). A few illustrative examples are shown in the first and second rows of snapshots. Browse the bookmarks; for more information, see the Details section. This Demonstration shows the evolution of two elementary CA sharing several black cells. Unshared cells are red and blue to distinguish between the different CA. 
The number of shared cells is set by the control "overlap". When the "overlap" is zero, no cells are shared and the red and blue CA evolve independently. In this zero-overlap case evolution takes place on isolated red and blue loops due to cyclic boundary conditions. Increasing the control "overlap" does NOT change the number of cells in the red and blue loops, but makes some of the cells common to both CA. This is schematically shown on the image at the lower-left corner of the graphic. In some way the "overlap" control acts as a "zipper" fusing the red and blue loops together. With equal numbers of red and blue cells it is possible to completely "zip up" the loops. At a single step in the evolution, the "red rule" is applied to the red loop including the black shared cells, and then the "blue rule" is applied to the blue loop including the same black cells once again. Thus, during a single step of the evolution, black shared cells are acted upon twice by two generally different 2-color range-1 CA rules. Such double action is equivalent to a single action of a 2-color range-2 CA rule. All three CA (red, blue, and black) interact through the bordering cells exchanging information.
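The update scheme described above (red rule over the red loop, then blue rule over the blue loop, with the shared black cells updated twice per step) can be sketched directly. The loop geometry below, with the shared cells sitting at the seam between the two loops, is an assumption made for illustration; rule numbering follows the standard Wolfram convention.

```python
def step_eca(cells, rule):
    """One synchronous step of an elementary CA on a cyclic loop."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

def step_coupled(state, n_red, overlap, rule_red, rule_blue):
    """One step of two coupled elementary CAs.

    `state` lays out the red loop first (its last `overlap` cells are the
    shared black cells), followed by the blue-only cells.  The red rule is
    applied to the whole red loop, then the blue rule to the blue loop
    (shared cells plus blue-only cells), so shared cells are acted on twice.
    """
    red = step_eca(state[:n_red], rule_red)
    state = red + state[n_red:]
    blue = step_eca(state[n_red - overlap:], rule_blue)
    return state[:n_red - overlap] + blue

# With zero overlap the two CAs evolve independently on their own loops.
red0, blue0 = [0] * 10, [0] * 10
red0[5] = blue0[3] = 1
state = red0 + blue0
for _ in range(8):
    state = step_coupled(state, 10, 0, 110, 30)
```

Raising `overlap` above zero makes the shared cells feel both rules each step, which is exactly the double action that is equivalent to a single 2-color range-2 rule.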
<urn:uuid:fad036a2-ea09-4ec8-a94e-b417b7d1c1ed>
3.015625
655
Documentation
Science & Tech.
46.151667
Knott, Edward Joseph (2007) The effect of elephants (Loxodonta africana, Blumenbach, 1797) on Xeric Succulent Thicket. Masters thesis, Rhodes University. This study looks at the impact of elephant feeding on the Xeric Succulent Thicket component of Eastern Cape Subtropical Thicket (ECST) in Addo Elephant National Park (AENP). Observations of elephant feeding were carried out and vegetation transects were surveyed for impact of elephant feeding. The results indicated that the Nyati elephants spent the majority of their time grazing (nearly 90%), particularly the cow-young herds, and especially when the herd gathered in larger numbers. Browsing events were concentrated on Acacia karroo (81%) and there was no significant difference between the sexes in their preference for this species. Despite being subjected to most of the browsing, the majority of A. karroo trees were undamaged and the effect of elephants was generally light. It appears unlikely that, three years after re-introduction to Nyati, the elephants have had an effect on community structure of the vegetation. Surveys were conducted on stands of the alien invasive weed prickly pear Opuntia ficus-indica, and it was recorded that elephants in Nyati have had a dramatic effect on prickly pear, utilising all adult plants assessed and destroying 70% of them. This level of destruction in such a short period of time suggests that prickly pear is a highly favoured species. The results from the present study suggest that elephants can play a role in the control of prickly pear. Results are discussed in terms of elephants as both megaherbivores and keystone species, and as agents of intermediate disturbance.
Item Type: Thesis (Masters)
Uncontrolled Keywords: African elephant, Addo Elephant National Park, South Africa, Eastern Cape, Elephants, Succulent plants, Woody plants, Elephant behaviour, Elephant ecology, Impact of elephant feeding
Subjects: Q Science > QL Zoology > Chordates. Vertebrates > Mammals; Q Science > QL Zoology > Animal behaviour
Divisions: Faculty > Faculty of Science > Zoology & Entomology
Deposited By: Ms Chantel Clack
Deposited On: 19 Apr 2012 12:31
Last Modified: 19 Apr 2012 12:31
<urn:uuid:be140da5-eee6-472b-b701-0c1ccddde781>
2.796875
541
Academic Writing
Science & Tech.
26.748103
Purpose of future metagenomic (DNA), metaproteomic (protein) and metatranscriptomic (RNA) analysis: For each sample, two drums (~200L each) of seawater were collected. Samples were taken from CTD sites, and surface samples (2m depth) taken at each of these sites. At most of these CTD sites, a deeper sample was taken according to the location of the DCM at that site. The 200L seawater is pumped ... through a 20 micron mesh to remove the largest particles, then the biomass is collected on three consecutive filters corresponding to decreasing pore size (3.0 microns, 0.8 microns, 0.1 microns). This is repeated for each sample using the second 200L of seawater to generate duplicates for each sample. The overall aim is to determine the identity of microbes present in the Southern Ocean, and what microbial metabolic processes are in operation. In other words: who is there, and what they are doing. Special emphasis was placed on the SR3 transect. Samples were collected as below. For each sample, a total of six filters were obtained (3x pore sizes, 2x replicates). Each filter is stored in a storage buffer in a 50mL tube, and placed at -80 degrees C for the remainder of the voyage.
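The filtration cascade determines which size fraction each organism ends up in. As a sketch, a particle is kept by the first stage whose nominal pore size it cannot pass; real filter capture is messier than nominal pore size, so this mapping is purely illustrative.

```python
def capture_stage(size_um):
    """Nominal fate of a particle in the 20 um mesh -> 3.0/0.8/0.1 um cascade."""
    if size_um > 20.0:
        return "20 um pre-mesh (removed)"
    if size_um >= 3.0:
        return "3.0 um filter"
    if size_um >= 0.8:
        return "0.8 um filter"
    if size_um >= 0.1:
        return "0.1 um filter"
    return "filtrate (passes all filters)"

for size in (50, 5, 1, 0.2, 0.05):
    print(f"{size:>5} um -> {capture_stage(size)}")
```

Roughly speaking, the three filters separate large eukaryotes and particle-attached cells, free-living bacteria, and the smallest cells and viruses respectively.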
<urn:uuid:eb0af40d-9eca-493c-8798-053dc726d407>
2.984375
281
Documentation
Science & Tech.
56.985303
NAME
fgetc, getc, getchar, getc_unlocked, getchar_unlocked, getw - get next character or word from input stream
LIBRARY
Standard C Library (libc, -lc)
DESCRIPTION
The fgetc() function obtains the next input character (if present) from the stream pointed at by stream, or the next character pushed back on the stream via ungetc(3). The getc() function acts essentially identically to fgetc(), but is a macro that expands in-line. The getchar() function is equivalent to getc() with the argument stdin. The getc_unlocked() and getchar_unlocked() functions provide functionality identical to that of getc() and getchar(), respectively, but do not perform implicit locking of the streams they operate on. In multithreaded programs they may be used only within a scope in which the stream has been successfully locked by the calling thread using either flockfile(3) or ftrylockfile(3), and may later be released using funlockfile(3). The getw() function obtains the next int (if present) from the stream pointed at by stream.
RETURN VALUES
If successful, these routines return the next requested object from the stream. If the stream is at end-of-file or a read error occurs, the routines return EOF. The routines feof(3) and ferror(3) must be used to distinguish between end-of-file and error. If an error occurs, the global variable errno is set to indicate the error. The end-of-file condition is remembered, even on a terminal, and all subsequent attempts to read will return EOF until the condition is cleared with clearerr(3).
SEE ALSO
ferror(3), fopen(3), fread(3), putc(3), ungetc(3)
STANDARDS
The fgetc(), getc() and getchar() functions conform to ANSI X3.159-1989 (``ANSI C''). The getc_unlocked() and getchar_unlocked() functions conform to ISO/IEC 9945-1:1996 (``POSIX.1'').
BUGS
Since EOF is a valid integer value, feof(3) and ferror(3) must be used to check for failure after calling getw(). The size and byte order of an int varies from one machine to another, and getw() is not recommended for portable applications.
BSD April 25, 2001 BSD
<urn:uuid:e50f83cc-b72d-4076-a67e-86af3109c17c>
3.734375
548
Documentation
Software Dev.
56.772667
Fresnel's Rhomb was developed in 1817 by Augustin Jean Fresnel (1788-1827) to produce circularly polarized light. In this form, there are two electric field vectors associated with the ray of light that are out of phase with each other by 90°. The rhomb has outside angles of 54° and 126°. The light coming into the top of the rhomb was polarized at an angle of 45° to the plane ABCD. Upon total internal reflection at R, one of the components of the linearly polarized light is retarded by 45° relative to the other, and the same thing occurs at F. Thus, one of the components of the light is retarded by 90° in total, the condition for circularly polarized light. The double rhomb at the left below is at Glasgow University and was made by Soleil of Paris before 1850. The one at the right is at the United States Military Academy. This rather beautiful example of Fresnel's Rhomb is unmarked. It is on display at the Magic Lantern Museum in San Antonio, Texas, which represents many years of dedicated work by its curator and owner, Jack. This is a single rhomb, and the only one I have ever seen.
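The 45° retardation per reflection follows from Fresnel's equations for total internal reflection. The sketch below uses the standard formula for the s-p phase difference; the glass index of 1.51 is an assumed value, since the text does not give one.

```python
import math

def tir_phase_diff_deg(theta_deg, n_glass=1.51):
    """s-p phase difference (degrees) for total internal reflection, glass -> air."""
    t = math.radians(theta_deg)
    n = 1.0 / n_glass                      # relative index, air over glass
    s2 = math.sin(t) ** 2                  # valid only above the critical angle
    return 2.0 * math.degrees(math.atan(math.cos(t) * math.sqrt(s2 - n * n) / s2))

delta = tir_phase_diff_deg(54.0)
print(f"~{delta:.1f} deg per reflection; two reflections give ~{2 * delta:.0f} deg")
```

At the rhomb's 54° internal angle this comes out very close to 45°, so the two successive reflections supply the quarter-wave (90°) retardation needed for circular polarization.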
<urn:uuid:976a5d11-4d01-4b29-8f20-2bbea77136f6>
3.984375
260
Knowledge Article
Science & Tech.
60.509913
Thursday, August 5, 2010 Rate of Solution: Sugar Cubes I did this with 4th grade students who were learning about solutions. It's simple, but they have a good time and learn a little something in the process! For each pair of students, you'll need: -4 Sugar Cubes For the whole class: -Room Temperature Water -A means of crushing sugar cubes Provide each pair of students with a cup (clear is better - it's hard to see a white sugar cube in a white cup) of room temperature water. Have them drop a whole sugar cube into the water and time how long it takes for the cube to dissolve (no stirring). This is their baseline measurement. They'll now test several variables, one at a time. With a fresh cup of room temperature water, drop in a whole sugar cube and time how long it takes for it to dissolve when you STIR it. With another fresh cup of room temperature water, drop in a CRUSHED sugar cube and time how long it takes to dissolve (no stirring). Get a cup of HOT WATER, drop in a whole sugar cube and time how long it takes to dissolve (no stirring). There are several ways to conclude this experiment. Try one or more... 1 - Have students create bar graphs of the data: -Room Temperature water vs. hot water -Stirring vs. not -Crushed cube vs. whole cube 2 - After students have analyzed their data, have them race to see who can dissolve their sugar cube the fastest. They've got three choices to make: hot or room temperature water, will they stir or not, and will they use a crushed cube or a whole cube. 3 - Along the same lines as #2, have a contest where students try to prevent a sugar cube from dissolving for as long as possible.
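For the graphing step, even a quick text-mode bar chart works. The timings below are made-up placeholders standing in for a class's actual measurements:

```python
# Hypothetical example times (seconds) -- replace with the class's data.
trials = {
    "whole cube, room temp (baseline)": 480,
    "whole cube, stirred": 150,
    "crushed cube, not stirred": 210,
    "whole cube, hot water": 120,
}

width = max(len(name) for name in trials)
for name, seconds in trials.items():
    bar = "#" * (seconds // 15)          # one '#' per 15 seconds
    print(f"{name:<{width}}  {bar} ({seconds} s)")
```

Seeing the baseline bar dwarf the others makes the effect of each variable obvious at a glance, which sets up the "fastest dissolve" contest nicely.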
In 1989 Stanley Pons and Martin Fleischmann made a sensational claim that would have changed the world—had it been true. They said they had achieved nuclear fusion at room temperature using a simple tabletop device, thus creating a revolutionary clean energy source they called “cold fusion.” Unfortunately for the University of Utah chemists, multiple attempts to replicate their experiment over ensuing months failed. Cold fusion was considered debunked, and it has lived beyond the fringe of mainstream science ever since. Yet quietly, more than 20 years later, two of the world’s largest mainstream scientific institutions—NASA and the European physics research center CERN—have revisited the controversial energy-generating experiment.
Very small variations in the initial values of the 12 variables he used in his computer to model the weather would, he discovered, result in very divergent weather patterns. The variables were numerical rules or equations expressing the relationships between temperature and pressure, between pressure and wind speed and so on. One day at the end of 1961, Lorenz wanted to re-examine a run of results he had obtained on his ancient Royal McBee computer but, instead of starting the whole run from the beginning, he decided to take a shortcut and started midway through. To give the computer the initial conditions he typed in the numbers from the earlier printout. The programme had not been changed and so the new run should have duplicated the old one. When he compared the new printout with the other one he discovered to his amazement that there was no resemblance. It was, in the words of James Gleick, the author of the book Chaos, as if he had chosen "two random weathers out of a hat". The great difference between Lorenz's printouts arose because the numbers he had entered for the rerun were very slightly different from the numbers stored in the computer for the first run. In the computer memory, six decimal places were stored but on the printout, to save space, just three decimal places appeared. The difference -- one part in a thousand -- was, Lorenz assumed, of no consequence. From this accidental discovery, a scientific revolution was launched and a new science -- chaos theory -- born. Lorenz published his conclusions in 1963 in a paper described by another scientist as "a masterpiece of clarity of exposition about why weather is unpredictable". In a talk he gave to the American Association for the Advancement of Science in 1972 entitled: Predictability: Does the Flap of a Butterfly's Wings in Brazil Set off a Tornado in Texas? 
he coined the brilliant term "butterfly effect" to describe elegantly how a very small disturbance, such as the movement of a butterfly's wings, in one place can give rise to a series of events that induce enormous consequences in another, far distant, place. Put simply, small deviations in a system can result in large and often unsuspected results. Lorenz investigated the basic mathematics behind the phenomenon and published his conclusions in a famous paper entitled Deterministic Nonperiodic Flow. This describes a relatively simple set of equations that resulted in a pattern of infinite complexity called the Lorenz attractor. Chaos theory has had a profound effect not only in the field of mathematics but in almost every area of science, biological, physical and social, bringing "about one of the most dramatic changes in mankind's view of nature since Sir Isaac Newton". In meteorology, chaos theory implies that it may be fundamentally impossible to forecast weather for more than two or three weeks with a reasonable degree of accuracy. Chaos theory has been ranked with relativity and quantum mechanics as the third scientific revolution of the 20th century. Edward Norton Lorenz was born in 1917 in West Hartford, Connecticut. He received his bachelors degree in mathematics from Dartmouth College, New Hampshire, in 1938 and his masters degree in mathematics from Harvard in 1940. After the war, during which he served as a weather forecaster for the US Army Air Corps, he studied meteorology at the Massachusetts Institute of Technology (MIT), earning his doctorate in 1948. His interest in meteorology was longstanding. In an autobiography he wrote: "As a boy I was always interested in doing things with numbers, and was also fascinated by changes in the weather." In 1948 Lorenz was appointed a member of the staff of what was then MIT's Department of Meteorology. In 1955 he was appointed an assistant professor and in 1962 he was promoted to professor. 
Between 1977 and 1981 he was head of the Department of Earth, Atmospheric and Planetary Sciences. In 1987 he became emeritus professor. He kept up his academic work for most of the rest of his life, publishing his last scientific paper shortly before his death. While on leaves of absence from MIT, he held research or teaching positions at the Lowell Observatory in Flagstaff, Arizona, and visiting professorships at the Department of Meteorology at the University of California, Los Angeles; the Norske Meteorologiske Institutt in Oslo; and the National Center for Atmospheric Research, Boulder, Colorado. Lorenz received many honours, awards and honorary degrees. In 1975 he was elected Fellow of the US National Academy of Sciences. In 1983 he and Henry M. Stommel were jointly awarded the $50,000 Crafoord Prize by the Royal Swedish Academy of Sciences, a prize set up to recognize fields not eligible for Nobel prizes. In 1991 he was awarded the respected Kyoto Prize for basic sciences in the field of earth and planetary sciences for establishing "the theoretical basis of weather and climate predictability, as well as the basis for computer-aided atmospheric physics and meteorology". In 1969 he received the Carl Gustaf Rossby Research Medal from the American Meteorological Society; in 1973 the Royal Meteorological Society awarded him the Symons Memorial Gold Medal; and in 2004 he received the Buys Ballot medal. In 1981 he became a member of the Norwegian Academy of Science and Letters and in 1984 he was made an honorary member of the Royal Meteorological Society. A man known for his gentlemanliness and humility, Lorenz was a very keen outdoorsman who enjoyed hiking and cross-country skiing until well into old age. Lorenz's wife died in 2001, and he is survived by his two daughters and son. Professor Edward Lorenz, meteorologist and mathematician, was born on May 23, 1917. He died on April 16, 2008, aged 90.
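The sensitivity Lorenz stumbled on with the Royal McBee printout is easy to reproduce. The sketch below integrates his classic system with its standard parameters (σ = 10, ρ = 28, β = 8/3) from two starting points differing by one part in a million, a stand-in for the three-decimal rounding of his 1961 rerun:

```python
# Two trajectories of the Lorenz system from nearly identical starting points,
# illustrating sensitive dependence on initial conditions (the butterfly effect).
# Simple forward-Euler integration is enough to show the qualitative behaviour.

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-6, 1.0, 1.0)     # perturbed copy of the initial condition
for _ in range(2500):           # integrate to t = 25 with dt = 0.01
    a = lorenz_step(a, 0.01)
    b = lorenz_step(b, 0.01)

separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(separation)  # grows from 1e-6 to the scale of the attractor itself
```

Both trajectories stay bounded on the attractor, yet their separation grows by many orders of magnitude, which is exactly why small rounding differences produced "two random weathers out of a hat".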
The luminescent displays produced by these organisms are veritable light shows above coral reefs at night and as such are gaining attention among the tourism industry as one of the real wonders of a coral reef. There are no direct economic models that have been done on these organisms.

Impact statement issue: Bioluminescence is ubiquitous in the sea, yet little is known as to why it exists and how it functions. Understanding of the ecological and behavioral underpinnings of marine systems is crucial to the long-term wellbeing of our planet. Everyone has the potential to be indirectly affected by the health of our oceans.

Impact statement response: We have described about 25 percent of the known species that use light for courtship, and we have begun to understand their population and community dynamics and the potential impacts these creatures have on the activities of coral reefs. The primary target audience is the society of scholars who work on coral reef systems.

Impact statement summary: We are studying the unique, bioluminescent signals produced by tiny crustaceans in shallow coral reefs of the Caribbean. These animals are abundant and diverse, and they are excellent indicators of light pollution. The displays are complicated and provide insights into (1) how visual patterns are recognized by both males and females for reproductive purposes, (2) how sexual selection has driven speciation in this group, and (3) how rapid species evolution can occur as a result of minor signal changes. We are also describing numerous new species and genera in the group and deciphering their evolutionary relationships.
Two tiny conducting balls of identical mass m and identical charge q hang from nonconducting threads of length L, separated by a distance x. Assume that θ is so small that tan θ can be replaced by its approximate equal, sin θ. If L = 170 cm, m = 13 g, and x = 6.6 cm, what is the magnitude of q, in nanocoulombs?
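This is a standard electrostatics equilibrium problem. Each ball satisfies T cos θ = mg and T sin θ = kq²/x²; with the small-angle substitution sin θ ≈ (x/2)/L, this gives q = √(mgx³/(2kL)). A worked numeric solution (assuming g ≈ 9.8 m/s²):

```python
import math

# Equilibrium of each ball: T*cos(theta) = m*g and T*sin(theta) = k*q^2/x^2.
# With sin(theta) ≈ (x/2)/L this yields q = sqrt(m*g*x^3 / (2*k*L)).
k = 8.99e9        # Coulomb constant, N m^2 / C^2
g = 9.8           # m/s^2 (assumed)
L = 1.70          # m
m = 13e-3         # kg
x = 6.6e-2        # m  (separation between the balls)

q = math.sqrt(m * g * x ** 3 / (2 * k * L))
print(q * 1e9, "nC")   # roughly 35 nC
```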
Melting Arctic ice heralds new polar hybrids: Pizzlies and more

A pizzlie is a cross between a polar bear and a grizzly bear, and this new hybrid animal may foreshadow as many as 34 hybrids to come as Arctic ice melts, say scientists. An odd-looking white bear with patches of brown fur was shot by hunters in 2006 and found to be a cross between a polar bear and a grizzly bear. Apparently, grizzlies were moving north into polar bear territory. Since then, several hybrid animals have appeared in and around the Arctic, including narwhal-beluga whales and mixed porpoises. The culprit may be melting Arctic sea ice, which is causing barriers that once separated marine mammals to disappear, while the warming planet is making habitats once too cold for some animals just right. The resulting hybrid creatures are threatening the survival of rare polar animals, according to a comment published Wednesday (Dec. 15) in the journal Nature. [Real or Fake? 8 Bizarre Hybrid Animals] A team led by ecologist Brendan Kelly of the National Marine Mammal Laboratory counted 34 possible hybridizations between distinct populations or species of Arctic marine mammals, many of which are endangered or threatened. "The greatest concern is species that are already imperiled," said Kelly, first author of the Nature comment. "Interbreeding might be the final straw."

Pizzlies and Narlugas

When hunters encountered a hybrid of a polar bear and a grizzly in 2006, Kelly's colleagues remarked that the incident was just a fluke. But as Kelly delved into the issue, he found more evidence of similar anomalies. In 2009, a cross between a bowhead and a right whale was spotted in the Bering Sea, between Alaska and Russia.
And a museum specimen in Alaska attests to breeding between spotted seals (Phoca largha) and ribbon seals (Histriophoca fasciata), which belong to different genera, a scientific classification of organisms that is broader than the species level. Evidence suggests at least five other types of hybrids that may arise from animals of distinct genera, Kelly's team reported. These include:
- Narwhal (Monodon monoceros) and beluga whale (Delphinapterus leucas)
- Ringed seal (Phoca hispida) and ribbon seal (Histriophoca fasciata)
- Bowhead whale (Balaena mysticetus) and right whale (Eubalaena spp.)
- Harp seal (Phoca groenlandica) and hooded seal (Cystophora cristata)
- Harbour porpoise (Phocoena phocoena) and Dall's porpoise (Phocoenoides dalli)
Breeding between these marine mammals near the North Pole is likely to result in fertile offspring, because many of these animals have the same number of chromosomes, said comment co-author Andrew Whiteley, a conservation geneticist at the University of Massachusetts, Amherst.
When considering scalable system design, it helps to decouple functionality and think about each part of the system as its own service with a clearly defined interface. In practice, systems designed in this way are said to have a Service-Oriented Architecture (SOA). For these types of systems, each service has its own distinct functional context, and interaction with anything outside of that context takes place through an abstract interface, typically the public-facing API of another service. Deconstructing a system into a set of complementary services decouples the operation of those pieces from one another. This abstraction helps establish clear relationships between the service, its underlying environment, and the consumers of that service. Creating these clear delineations can help isolate problems, but also allows each piece to scale independently of the others. This sort of service-oriented design for systems is very similar to object-oriented design for programming. In our example, all requests to upload and retrieve images are processed by the same server; however, as the system needs to scale it makes sense to break out these two functions into their own services. Fast-forward and assume that the service is in heavy use; such a scenario makes it easy to see how longer writes will impact the time it takes to read the images (since the two functions will be competing for shared resources). Depending on the architecture this effect can be substantial. Even if the upload and download speeds are the same (which is not true of most IP networks, since most are designed for at least a 3:1 download-speed:upload-speed ratio), files will typically be read from cache, and writes will have to go to disk eventually (and perhaps be written several times in eventually consistent situations). Even if everything is in memory or read from disks (like SSDs), database writes will almost always be slower than reads.
(See Pole Position, an open-source tool for DB benchmarking, and its results.) Another potential problem with this design is that a Web server like Apache or lighttpd typically has an upper limit on the number of simultaneous connections it can maintain (defaults are around 500, but can go much higher) and in high traffic, writes can quickly consume all of those. Since reads can be asynchronous, or take advantage of other performance optimizations like gzip compression or chunked transfer encoding, the Web server can serve reads faster, switching between clients quickly and serving many more requests per second than the maximum number of connections (with Apache and max connections set to 500, it is not uncommon to serve several thousand read requests per second). Writes, on the other hand, tend to maintain an open connection for the duration of the upload, so uploading a 1MB file could take more than 1 second on most home networks, meaning that Web server could only handle 500 such simultaneous writes. Figure 2: Splitting out reads and writes. Planning for this sort of bottleneck makes a good case to split out reads and writes of images into their own services, shown in Figure 2. This allows us to scale each of them independently (since it is likely we will always do more reading than writing), but also helps clarify what is going on at each point. Finally, this separates future concerns, which would make it easier to troubleshoot and scale a problem like slow reads. The advantage of this approach is that we are able to solve problems independently of one another — we don't have to worry about writing and retrieving new images in the same context. Both of these services still leverage the global corpus of images, but they are free to optimize their own performance with service-appropriate methods (for example, queuing up requests, or caching popular images — more on this below).
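The split shown in Figure 2 can be sketched as two services sharing one image store. All class and method names below are invented for illustration, not drawn from any particular framework; in production each service would be its own deployable process with its own scaling policy:

```python
# Illustrative sketch of decoupling image reads and writes into separate
# services behind a shared store.

class ImageStore:
    """Stand-in for a blob store (S3, filesystem, ...)."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs.get(key)

class ImageWriteService:
    """Handles slow, connection-holding uploads; scaled for write load."""
    def __init__(self, store):
        self._store = store
    def upload(self, key, data):
        self._store.put(key, data)
        return key

class ImageReadService:
    """Handles fast, cacheable downloads; scaled for read load."""
    def __init__(self, store):
        self._store = store
        self._cache = {}            # popular images served from memory
    def download(self, key):
        if key not in self._cache:
            self._cache[key] = self._store.get(key)
        return self._cache[key]

store = ImageStore()
writer = ImageWriteService(store)
reader = ImageReadService(store)
writer.upload("cat.jpg", b"\xff\xd8...")
print(reader.download("cat.jpg"))
```

Because the two services only share the store's interface, each can be tuned (caching for reads, queuing for writes) without touching the other.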
And from a maintenance and cost perspective each service can scale independently as needed, which is great because if they were combined and intermingled, one could inadvertently impact the performance of the other as in the scenario discussed above. Of course, the above example can work well when you have two different endpoints (in fact this is very similar to several cloud storage providers' implementations and Content Delivery Networks). There are lots of ways to address these types of bottlenecks though, and each has different tradeoffs. For example, Flickr solves this read/write issue by distributing users across different shards such that each shard can only handle a set number of users, and as users increase more shards are added to the cluster (see the presentation on Flickr's scaling). In the first example it is easier to scale hardware based on actual usage (the number of reads and writes across the whole system), whereas Flickr scales with their user base (but forces the assumption of equal usage across users so there can be extra capacity). In the former an outage or issue with one of the services brings down functionality across the whole system (no-one can write files, for example), whereas an outage with one of Flickr's shards will only affect those users. In the first example it is easier to perform operations across the whole dataset — for example, updating the write service to include new metadata or searching across all image metadata — whereas with the Flickr architecture each shard would need to be updated or searched (or a search service would need to be created to collate that metadata — which is in fact what they do). 
When it comes to these systems there is no right answer, but it helps to go back to the principles at the start of this article, determine the system needs (heavy reads or writes or both, level of concurrency, queries across the data set, ranges, sorts, etc.), benchmark different alternatives, understand how the system will fail, and have a solid plan for when failure happens. In order to handle failure gracefully a Web architecture must have redundancy of its services and data. For example, if there is only one copy of a file stored on a single server, then losing that server means losing that file. Losing data is seldom a good thing, and a common way of handling it is to create multiple, or redundant, copies. This same principle also applies to services. If there is a core piece of functionality for an application, ensuring that multiple copies or versions are running simultaneously can secure against the failure of a single node. Creating redundancy in a system can remove single points of failure and provide a backup or spare functionality if needed in a crisis. For example, if there are two instances of the same service running in production, and one fails or degrades, the system can failover to the healthy copy. Failover can happen automatically or require manual intervention. Another key part of service redundancy is creating a shared-nothing architecture. With this architecture, each node is able to operate independently of one another and there is no central "brain" managing state or coordinating activities for the other nodes. This helps a lot with scalability since new nodes can be added without special conditions or knowledge. However, and most importantly, there is no single point of failure in these systems, so they are much more resilient to failure. 
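The failover idea above can be reduced to a few lines: try each redundant copy of a service in turn and return the first healthy response. The replica functions here are hypothetical stand-ins for real nodes; real systems put this logic behind a load balancer or a health-checking client library:

```python
# Minimal failover sketch: iterate over redundant replicas, fall back on error.

def call_with_failover(replicas, request):
    last_error = None
    for replica in replicas:
        try:
            return replica(request)
        except ConnectionError as err:   # treat as "node down", try the next
            last_error = err
    raise RuntimeError("all replicas failed") from last_error

def healthy(request):
    return f"served: {request}"

def broken(request):
    raise ConnectionError("node down")

print(call_with_failover([broken, healthy], "GET /image/42"))
```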
For example, in our image server application, all images would have redundant copies on another piece of hardware somewhere (ideally in a different geographic location in the event of a catastrophe like an earthquake or fire in the data center), and the services to access the images would be redundant, all potentially servicing requests. (See Figure 3.) (Load balancers are a great way to make this possible, but there is more on that below). Figure 3: Image hosting application with redundancy There may be very large data sets that are unable to fit on a single server. It may also be the case that an operation requires too many computing resources, diminishing performance and making it necessary to add capacity. In either case you have two choices: scale vertically or horizontally. Scaling vertically means adding more resources to an individual server. So for a very large data set, this might mean adding more (or bigger) hard drives so a single server can contain the entire data set. In the case of the compute operation, this could mean moving the computation to a bigger server with a faster CPU or more memory. In each case, vertical scaling is accomplished by making the individual resource capable of handling more on its own. To scale horizontally, on the other hand, is to add more nodes. In the case of the large data set, this might be a second server to store parts of the data set, and for the computing resource it would mean splitting the operation or load across some additional nodes. To take full advantage of horizontal scaling, it should be included as an intrinsic design principle of the system architecture, otherwise it can be quite cumbersome to modify and separate out the context to make this possible. When it comes to horizontal scaling, one of the more common techniques is to break up your services into partitions, or shards. 
The partitions can be distributed such that each logical set of functionality is separate; this could be done by geographic boundaries, or by another criteria like non-paying versus paying users. The advantage of these schemes is that they provide a service or data store with added capacity.
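A simple (if naive) version of shard routing hashes the partition key to pick a node. Plain modulo remaps most keys when shards are added, which is why production systems often prefer consistent hashing; modulo keeps this sketch short:

```python
# Route a partition key to one of num_shards nodes via a stable hash.
import hashlib

def shard_for(key, num_shards):
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# The same key always routes to the same shard:
print(shard_for("user:1001", 4), shard_for("user:1001", 4))
```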
Scientists are hoping to restore the decimated mollusk population by releasing millions of microscopic scallop larvae into the waters off southwest Florida. There have not been enough scallops to allow recreational collecting, or commercial harvesting, since the 1960s. This weekend volunteers helped scientists place four million baby scallops in the water. They hope at least one percent survive. The Sarasota Herald-Tribune reports the larvae were collected from Anclote Key and taken to a hatchery at the Bay Shellfish Company where they were monitored for weeks. An annual count found 93 adult scallops in the bay earlier this year. FOX 13 / WTVT-TV
Joined: 16 Mar 2004
Posted: Fri Aug 24, 2007 9:46 am
Post subject: Nanotechnology For Sustainability

Nanotechnology will play a large part in the transportation sector. The primary impact of nanotechnologies will be in more efficient use of existing resources rather than the creation of new supplies from solar and hydrogen based technologies, according to a report entitled "Nanotechnologies for the Energy Markets" from London-based Cientifica. "By 2014, 75% of the applications of nanotechnology surveyed will be in the transportation sector, with the major benefits being reduced emissions and greater drive train efficiency," said Cientifica CEO Tim Harper. Highlights from the report, which used an economic model based on primary research that quantifies the impact and diffusion of nanotechnologies over time, include:
- The most immediate opportunities lie in saving energy through the use of advanced materials. Currently this market is at $1.6 billion, and is predicted to rise to $51 billion by 2014.
- Despite advances in battery technology, hydrogen storage and fuel cells, energy saving technologies will exhibit faster growth, accounting for 75% of the market for nanotechnologies in 2014, up from 62% in 2007.
- Solid state lighting, nanocomposite materials, aerogels and fuel borne catalysts will have the greatest impact between now and 2014.
- Compound annual growth rates (CAGR) are 64% for energy saving technologies and 90% for energy generation, while energy storage applications show a comparatively lowly 30%.
- Applications in transportation will increase to $50 billion by 2014 with a CAGR of 72%.
"The use of nanomaterials to increase energy efficiency represents a major global opportunity. While nanotechnologies will definitely play a role in the creation of renewable energy resources, the smart money today is on nanomaterials," said Cientifica research director Hailing Yu.
Story first posted: 23rd February 2007.
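The report's growth figures are internally consistent; for example, the advanced-materials numbers imply the quoted CAGR (assuming the window runs 2007 to 2014, i.e. seven years):

```python
# Sanity check: a market growing from $1.6bn (2007) to $51bn (2014) implies
#   CAGR = (end / start) ** (1 / years) - 1
start, end, years = 1.6, 51.0, 7
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.0%}")   # about 64%, matching the quoted figure
```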
Three astronauts got an unpleasant taste of what early spacefarers had to cope with this week, when their spacecraft executed a fiery nosedive through Earth's atmosphere. On 19 April, a Russian Soyuz TMA-11 capsule unexpectedly switched to "ballistic re-entry mode" on its journey home from the International Space Station. It was carrying South Korea's first astronaut, Yi So-Yeon, Russia's Yuri Malenchenko and NASA's Peggy Whitson. Ballistic re-entry relies solely on atmospheric drag to slow a spacecraft and can expose crew members to gravitational forces 10 times those on Earth. It was standard for the early US Mercury and Soviet Vostok spacecraft, but is a last resort for modern spacecraft. The Soyuz capsule design usually allows for some aerodynamic lift during re-entry, which provides a slower, more shallow descent. "Obviously, we'd rather they had a normal re-entry," says John Petty of NASA's Johnson Space Center in Houston, Texas, but ballistic re-entry ...
A SPECIES of starfish has confounded climate change doom-mongers by thriving as sea temperatures and acidity increase - a scenario that is likely as the world gets warmer. Most studies have concluded that sea animals with calcified shells or skeletons, such as starfish, will suffer as carbon dioxide from burning fossil fuels dissolves in the sea, making the water more acidic and destroying the calcium carbonate on which the creatures depend. But the sea star Pisaster ochraceus may ride out the climate storm. Rebecca Gooding and colleagues at the University of British Columbia in Vancouver, Canada, exposed sea stars to rising temperatures and water acidity. They thrived in temperatures of up to 21 °C and atmospheric CO2 concentrations of up to 780 parts per million - beyond predicted rises for the next century (Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.0811143106). The sea star seems ...
Phosphate mine west of Senegal.
Human activities, by increasing carbon (C) and nitrogen (N) emissions, are enhancing the imbalance of C and N with phosphorus (P) in Earth's systems. Researchers of the Global Ecology Unit CREAF-CSIC-UAB have conducted a study to estimate the past, present and future N:P ratios of the emissions from human activities and compared them with the N:P ratios of the main terrestrial ecosystems. The analysis of the data confers on P an increasingly limiting role in the Earth's life system, by affecting carbon-sequestration potential and by affecting the structure, function and evolution of the Earth's ecosystems through increasing N:P ratios. Humans are continuously raising atmospheric CO2 concentrations by increasing fossil fuel combustion. This increase of atmospheric CO2 concentration, together with the warming effect, can stimulate plant production capacity, mitigating the rise of atmospheric CO2 in a negative feedback. But this capacity to mitigate the rise of atmospheric CO2 is strongly dependent on nutrient availability. Researchers of the Global Ecology Unit CREAF-CSIC have now conducted a study of the nutrient inputs to ecosystems and of their impacts on N and P cycles as a result of human activities (Peñuelas et al., 2012). The study has evaluated the possible current and future imbalance between N and P and its consequences for Earth's capacity to fix C and for ecosystem structure and function. N, the main macronutrient, could become a limiting factor in plants' primary production capacity to buffer the atmospheric CO2 increase. But this possible N limitation may be overcome by the continuous rise of N from human activities: fossil fuel burning, crop fertilization and anthropogenic N2 fixation (legume and rice crops).
Currently the global anthropogenic N input to the biosphere can be estimated at 175-259 Tg N year-1, an amount close to the Earth's total natural N2 fixation capacity, including terrestrial and aquatic ecosystems. As a result of the continuous increase of these human N emissions since the beginning of the industrial revolution, reactive N atmospheric deposition has increased from 32 Tg N year-1 in 1860 to ~112 Tg N year-1 nowadays, and furthermore, most models project a twofold enhancement of N deposition by 2050. In spite of this continuous increase of N and CO2 emissions, several studies report no significant increases in global tree growth or global carbon sinks. These results suggest that other factors have overridden the potential growth benefits of a CO2-rich world. Among the factors expected to limit net primary world productivity, P appears to have an outstanding role. P is present in the DNA structure, in cell membranes, in many enzymes, in molecules storing and supplying energy, and in bones. Despite these crucial roles, it is scarce in the environment. Human activities also fertilize the biosphere with P through crop fertilization, but this human-induced P fertilization occurs at a smaller rate than the increases of C and N. This results in an increase of biosphere N:P ratios. There are two main reasons why P fertilization is less than N fertilization. First, N inputs originate from sources that do not seem to have an immediate limit, mainly industrial fertilizers obtained from the Haber-Bosch reaction between atmospheric nitrogen and hydrogen at elevated pressure, and anthropogenic N2 fixation, while P fertilizers come mainly from phosphate rock, whose occurrence is limited. Second, while N is highly mobile and globally widespread, P compounds are much less mobile and typically confined to specific areas.
Whereas the N:P ratio in photosynthetic organisms is quite variable (it tends to reach, on average, molar ratios of 14-16 in plankton and 22-30 in terrestrial and freshwater plants), fossil fuels, fertilizers and N2-fixation products have much higher N:P ratios. Fossil fuels are disproportionately richer in N (1000-20000 mg L-1) than in P (at the level of a few mg L-1). Thus, whereas annual emissions of N from fossil fuels reached 33 Tg year-1 in 2000, there is no global evidence of a significant P flux from fossil fuels to the atmosphere. Regarding fertilizers, the P input to the biosphere is ~17 Tg year-1 and has remained more or less constant since 1989. In turn, N inputs from fertilizers are currently considerably higher, 100-136 Tg year-1, and are continuously rising. Thus, whereas the global anthropogenic N input is ~175-259 Tg year-1, the anthropogenic P input is ~14-17 Tg year-1, so that the N:P ratio of human emissions on a molar basis is currently ~22.8-44.6. This ratio is nearly twice the average N:P ratio of plankton and, on average, 5-100% greater than the observed optimum soil N:P ratios (~16-22) for the growth of most terrestrial plants. This draws a scenario of increasing P limitation, predominantly in marine ecosystems and also in terrestrial nonagricultural ecosystems. Nonetheless, P may end up becoming limiting even in agricultural lands. The main source of P fertilizers is mined phosphate rock, whose demand is continuously increasing, and there is an emerging concern about the sustainability of such an increase if global food security is to be guaranteed. The problem may be further worsened by the fact that only five countries hold 90% of the world's P reserves, making several very populated regions such as Europe, India or Indonesia completely dependent on P imports. Apart from affecting C-storage capacity, an enhancement of the N:P ratio can affect other structural and functional traits of ecosystems.
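The quoted ~22.8-44.6 molar range can be reproduced from the mass fluxes given in the text, as a back-of-the-envelope check (the exact high end depends on which flux estimates are paired, so only the low end matches closely):

```python
# Convert mass fluxes (Tg/yr) into a molar N:P ratio using atomic masses
# of ~14 (N) and ~31 (P).
N_MASS, P_MASS = 14.007, 30.974

def np_molar_ratio(n_tg, p_tg):
    return (n_tg / N_MASS) / (p_tg / P_MASS)

low = np_molar_ratio(175, 17)    # smallest N paired with largest P
high = np_molar_ratio(259, 14)   # largest N paired with smallest P
print(round(low, 1), round(high, 1))
```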
High growth rates demand elevated investment in P-rich ribosomal RNA. Under P limitation, reduced allocation to ribosomal RNA is possible, favoring species with lower growth rates to the detriment of species with higher growth rates. Moreover, some recent studies show that the N and P availability of the medium can change the composition of DNA. Thus, the human-induced modification of the N:P ratio of several ecosystems at the world scale might be acting as an evolutionary driver. DL B.11870-2012 ISSN 2014-6388
<urn:uuid:fcbe7187-7de2-4eee-8587-0d39c427e522>
2.75
1,314
Knowledge Article
Science & Tech.
39.438065
Relevance of red fluorescence in coral reef fishes This is our official "red fluorescence in reef fish" site. The colourfulness of marine fish can only be seen in shallow water, because water absorbs very short (UV) and very long (red) wavelengths within a distance of a few meters. This leaves only intermediate wavelengths (blue-green), which are scattered but not absorbed as much. This explains why deep water appears blue-green and why everything looks pale blue-greenish when diving below 15 m. This fact has contributed to the assumption that red does not play an important role in marine fish behaviour and ecology. Red is, however, abundant in deeper water, where red fluorescent members of all major phyla contribute to a red glow on the reef. More surprisingly, we also found a number of marine fish that fluoresce in red. Moreover, they seem to do so in patterns suggestive of a function in communication both between and within species. Here, we want to elucidate the function and mechanisms behind red fluorescence in fish. The key question is: can they see it, and do they use it? In other words: is there an adaptive benefit associated with the production of red fluorescent tags? This research program is supported by a generous Reinhard Koselleck grant from the German Science Foundation (DFG) and will run for 5 years. Starting August 2009, this site will be regularly updated with news, pictures, useful links and practical tips. Both laboratory and field work are conducted to elucidate these questions. We cooperate with marine research stations at the Red Sea (Egypt) and in the Great Barrier Reef (Australia). We also work in Tübingen using our own experimental marine aquarium systems. The fishes we work on include gobies, triplefins, wrasses and pipefishes. DFG Reinhard Koselleck project to Nico K. Michiels for 5 years (2009-2014)
<urn:uuid:094c9950-b3ab-4700-a1a3-715c6cbdafb0>
3.25
400
Knowledge Article
Science & Tech.
47.565
As a science teacher I can tell you that people find science scary. Perhaps you already knew that. They think it is something they “can’t do” or “don’t get.” They may say they do not have a scientific brain. I can also tell you that these are entirely untrue. Real world, personally accessible examples, instead of theoretical situations, help make the connection to the here and now. For instance, the title of this post “Inefficient Devices and the Laws of Thermodynamics” would probably be enough to scare off most readers. However, by thinking about your own home, this scientific concept comes to life. In a recent post I alluded to the inefficiencies of converting electricity to heat when toasting bread. Both are types of energy (electrical and thermal). Since energy cannot be created or destroyed – the first law of thermodynamics, aka The Law of Conservation of Energy – you have to use some form of energy to generate heat for your toaster. Electricity is referred to as a secondary source of energy. It is not directly accessible in nature. In other words it has to be generated by another form of energy, say chemical energy in the form of coal. How does coal become electricity? One of the first posts from 2nd Green Revolution – Coal-Fired Electricity Generation – discussed many of the problems with using coal to generate electricity. Namely, only a third of the coal’s energy becomes electricity. The other two-thirds is lost as ambient heat. The process is the same for most electricity generation, except photovoltaic cells. As with nuclear power, natural gas, and oil-fired generators, thermal energy (heat) released from combustion, or nuclear reaction in the case of nuclear energy, boils water. The steam spins a turbine, which turns a generator. In turn, the generator spins magnets around a coil of metal, causing electrons to flow. Following the path, we see that chemical energy – natural gas, coal, or oil – transforms into thermal energy. 
From there the thermal energy is converted into mechanical energy – the movement of the turbine and generator, which causes the moving electrons to flow. Finally, the electricity – yet another type of energy – can be sent to homes so that it can be transformed into radiant or thermal energy. Here’s the problem . . . well, one of the problems. Every time there is a conversion or transformation of energy, there is a decrease in available energy – the 2nd Law of Thermodynamics. In other words, not all of the coal’s energy is converted into usable heat to toast your bread. In fact, a majority of it is unavailable. With each transformation, some of the coal’s original energy dissipates. How many steps did the preceding scenario take? Is there a better way? One of the spectacular benefits of photovoltaic or wind power is that both reduce the numerous steps of energy conversion, thereby making more of the original energy available to the end-user. Using wind to spin the turbine directly cuts out the heated water and burning of nonrenewable resources. Furthermore, there is no byproduct or massive water requirement. Photovoltaic cells are similar in that they convert the sun’s radiant energy directly into electricity, without the need for turbines. Even better is solar thermal technology, which has been used for thousands of years. The heat from the sun’s rays can be transferred into water and used to heat one’s home, as water retains heat better than most any material. Consider a pot of water used to cook pasta. It stays warm for hours whereas the pot used to heat up sauce cools off quickly. Keep an eye out for a follow up post on how these inefficiencies can impact your bottom line. - Eric Wilson
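To make the "every conversion loses energy" point concrete, here is a toy calculation: the overall efficiency of a chain of conversions is the product of the step efficiencies. The three step values are round illustrative numbers, not measurements for any real plant:

```python
# Compound the efficiency of each energy conversion step.
def overall_efficiency(steps):
    eff = 1.0
    for e in steps:
        eff *= e
    return eff

# Illustrative, made-up round numbers for the coal-to-toast chain:
coal_to_toast = [0.35,  # chemical -> electrical at the power plant
                 0.92,  # transmission and distribution losses
                 0.90]  # electrical -> useful heat in the toaster

print(overall_efficiency(coal_to_toast))  # ~0.29
```

Even with generous numbers for the later steps, less than a third of the coal's original energy ends up toasting the bread, which is exactly why cutting out conversion steps (as wind and photovoltaics do) matters.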
<urn:uuid:3d5cb4a0-cc70-4468-8acc-01834fec2dd2>
3.234375
788
Personal Blog
Science & Tech.
45.628184
Leaf venation is a pervasive example of a complex biological network, endowing leaves with a transport system and mechanical resilience. Transport networks optimized for efficiency have been shown to be trees, i.e., loopless. However, dicotyledon leaf venation has a large number of closed loops, which are functional and able to transport fluid in the event of damage to any vein, including the primary veins. Inspired by leaf venation, we study two possible reasons for the existence of a high density of loops in transport networks: resilience to damage and fluctuations in load. In the first case, we seek the optimal transport network in the presence of random damage by averaging over damage to each link. In the second case, we seek the network that optimizes transport when the load is sparsely distributed: at any given time most sinks are closed. We find that both criteria lead to the presence of loops in the optimum state.

TreeHugger published a review of the research at Awesome Biomimicry: Leaf Veins Inspire New Model for Water and Electricity Distribution Networks. While distribution networks based on simple tree-like branching are efficient, they are not resilient in that failure at a point will block access to the parts of the network past that point. The paper above (subscription required) explores various reasons for loops found in the veins of dicotyledon leaves, including resilience and load fluctuations.
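The resilience argument can be illustrated in a few lines of Python: on the same four nodes, a loopless tree is disconnected by cutting any single link, while a single closed loop survives every single-link cut. These graphs are toy examples, not the paper's model:

```python
# Compare single-link-failure resilience of a tree vs. a loop.
def connected(nodes, edges):
    """True if every node is reachable from every other (simple BFS/DFS)."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - seen)
    return seen == set(nodes)

nodes = {0, 1, 2, 3}
tree  = [(0, 1), (0, 2), (0, 3)]           # loopless (a star)
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]   # one closed loop

# Does the network stay connected after removing any one link?
tree_survives  = all(connected(nodes, [e for e in tree if e != cut]) for cut in tree)
cycle_survives = all(connected(nodes, [e for e in cycle if e != cut]) for cut in cycle)
# tree_survives is False; cycle_survives is True
```

The loop costs one extra link but removes every single point of failure, which is the trade-off the paper quantifies.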
<urn:uuid:b332edce-f093-4166-bb00-d1fd5c5cf05b>
3.03125
287
Academic Writing
Science & Tech.
35.15046
Using Lasers To Find Submarines Check out this story about how a scientist from the Naval Research Laboratory is investigating the use of lasers for finding or communicating with submerged submarines: The shock wave created by either method can travel several miles and can be used for several purposes. One would be for one-way communication with underwater vessels. Triggering pressure waves in a specific order could allow a plane to communicate with underwater vessels via basic Morse code, or, more likely, says Jones, with a complex, encoded pattern of pulses.Every once in a while, you read some story about how lasers are going to make submarines obsolete by making the ocean "transparent" and easily finding submarines. Somehow, these systems never end up working out. The reason, of course, is that you would get huge rates of false "positives" for any such system. One thing about ASW exercises that's always bugged me is how skimmers get a false sense of how good they are because they get cued to where the submarine is to start with -- otherwise, of course, it would turn such exercises into a waste of time because they'd never find the sub. Still, in the real world, they're not going to know where the subs are at to start their search, or even if one is there. When I was on the Carrier Group staff, during workups I saw many "positive submarine" detections called that weren't anywhere close to where the submarine actually was; in wartime, each of these would have likely resulted in wasted ordnance. Ships only carry so many ASW weapons. I think that to make skimmers aware of this, we should occasionally do ASW exercises where no submarine is present. That could be a valuable teaching lesson that could save ordnance for when it's actually needed during wartime. Another use for laser-induced sound waves would be for mapping the ocean floor. When they hit a submerged object, the pressure waves bounce back. 
A nearby submarine or buoy could detect the pattern of those waves and create a map of the ocean floor, or the location of other submarines in the area.
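The false-positive worry is really a base-rate effect, and a quick Bayes'-rule sketch shows how bad it gets. All three probabilities below are invented for illustration, not real sensor figures:

```python
# Bayes'-rule arithmetic for the false-positive problem.
def posterior(prior, hit_rate, false_alarm_rate):
    """P(sub present | detection called) via Bayes' rule."""
    p_detect = hit_rate * prior + false_alarm_rate * (1 - prior)
    return hit_rate * prior / p_detect

# Suppose a sub is actually present in 1% of searched cells, the sensor
# flags a real sub 90% of the time, and false-alarms on 5% of empty cells.
p = posterior(prior=0.01, hit_rate=0.90, false_alarm_rate=0.05)
# p ~ 0.15: roughly five of every six "positive submarine" calls are false
```

Because subs are rare in the search area, even a sensor that sounds impressive on paper produces mostly false calls, which is the "wasted ordnance" problem described above.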
<urn:uuid:41320ac5-c828-49c2-aaaa-eaab02acf06e>
2.8125
429
Personal Blog
Science & Tech.
37.873015
Popper extends JUnit to allow you to specify theories. Theories are assertions about your code's behavior that may be true over (potentially) infinite sets of input values. You might find it useful to pose theories about your Groovy code too. Let's consider how we might test the following class (example taken from the Popper web site): With traditional JUnit code, we might test it as follows: This tests the method for one amount value and one m value. Next steps might be to triangulate so that additional values are also tested. In general though, it might be difficult to know when you have done enough values (when to stop) and also what invariants of your class may hold if you simply keep adding more tests without sufficient refactoring. With these factors in mind, Popper provides facilities to make invariants and preconditions of your classes obvious as well as providing an extensible framework for adding new test values. Here is how you might use Popper to test the above class. First, we have avoided using Hamcrest style assertions in our Groovy code. Groovy's built-in assert method usually allows such assertions to be expressed very elegantly without any additional framework. We'll create a small helper class to allow Groovy-style assertions to be used for method pre-conditions: Now, our test becomes: We have added an additional log variable to this example to explain how Popper works. By default, Popper will use any public fields in our test as test data values VAL4 in our example. It will determine all combinations of the available variables and call the multiplyIsInverseOfDivide() for each combination. This is a very crude way to select test instance values but works for simple tests like this one. You should also note the assume statement. In our example, we haven't catered for m being 0 which would result in a divide by zero error. The assume statement allows this method precondition to be made explicit. 
When Popper calls the test method, it will silently ignore any test data combinations which fail the method preconditions. This keeps the preconditions obvious and simplifies creating test data sets. Here is the output from running this test: We wouldn't normally recommend sending this kind of information to standard out when running your test, but here it is very illustrative. Note that all four test values have been used for the amount variable but only three values have been used for m. This is exactly what we want here. Popper supports an extensible framework for specifying more elaborate algorithms for selecting test data. Instead of the public variables, we can define our own parameter supplier. Here is one which supplies data between a first value and a last value. First the annotation definition (coded in Java): And the backing supplier (coded in Groovy): Now our Groovy test example could become: When run, this yields: The supplied test values for the test method are (-4, -2), (-4, -1), (-4, 0), ..., (2, 5). The data where m is equal to 0 will be skipped as soon as the assume statement is reached. We can also use Groovy to make the bowling example a little more succinct:
<urn:uuid:ed659862-a4a3-4759-b310-72d958a7b9f8>
3.0625
691
Documentation
Software Dev.
49.489139
Project: NEBO (Northeastern Bentho-pelagic Observatory) The goal of NEBO is to make repeated surveys of designated "sentinel sites" seasonally over the course of three years. These sites were selected for their diverse physical and biotic characteristics, importance to fisheries, and potential to teach us about ecosystem functioning. The six sentinel sites of NEBO range from the Mid-Atlantic Bight to Georges Bank to the Gulf of Maine. Monitoring Didemnum vexillum, an invasive species. This invasive tunicate (sea-squirt) has been found in several locations on Georges Bank not previously known to have D. vexillum. It provides a unique opportunity to study drastic change in the ecosystems it inhabits, sometimes overgrowing the bottom and other organisms for kilometers at a time. Continuous Sampling Design Because the HabCam system provides continuous data rather than point-sample data, we have to explore different survey methods to find the most efficient sampling design. Simulated scallop populations are surveyed with a variety of sampling designs including grids, zig-zags, spirals, and square spirals.
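A toy version of that simulated-survey question can be sketched in a few lines of Python: scatter a simulated scallop population on a square seabed, survey it along evenly spaced grid transects, and compare the density estimate to the truth. The population size, transect count, and swath width here are arbitrary assumptions, not HabCam parameters:

```python
# Toy grid-transect survey of a simulated scallop population.
import random

random.seed(1)
SIDE = 100.0          # seabed is SIDE x SIDE (arbitrary units)
N_SCALLOPS = 5000
scallops = [(random.uniform(0, SIDE), random.uniform(0, SIDE))
            for _ in range(N_SCALLOPS)]

def grid_transect_estimate(points, n_lines, swath):
    """Estimate density from evenly spaced horizontal transects of width `swath`."""
    lines = [SIDE * (i + 0.5) / n_lines for i in range(n_lines)]
    seen = sum(1 for (x, y) in points
               if any(abs(y - line) <= swath / 2 for line in lines))
    surveyed_area = n_lines * swath * SIDE
    return seen / surveyed_area

true_density = N_SCALLOPS / SIDE ** 2   # 0.5 scallops per unit area
est = grid_transect_estimate(scallops, n_lines=10, swath=2.0)
# est lands close to true_density while surveying only 20% of the seabed
```

Comparing the estimate and its variance across grid, zig-zag, or spiral track layouts on the same simulated population is exactly the kind of efficiency comparison the paragraph describes.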
<urn:uuid:5b8641fb-fc53-4c82-b13b-1d74c9908e6b>
3.21875
247
Knowledge Article
Science & Tech.
20.702619
I need to name this polynomial. y=(x+4)(x+1)(x-3) Would it be a cubic polynomial because it's all multiplying, or would it be a linear? Please explain it to me.. The terms "cubic" and "linear" refer to the form of a polynomial that is a sum of monomials. A monomial is a coefficient multiplied by some power of x. Therefore, to decide if this polynomial is cubic or linear, you have to multiply through and to represent this polynomial as a sum of powers of x with some coefficients. Note, however, that whether it is cubic or linear depends only on the highest power of any monomial, so you don't need to find the coefficients.
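One way to see the answer is to actually multiply the factors out. A short sketch, representing polynomials as coefficient lists (highest power first), where multiplying two polynomials is a convolution of their coefficient lists:

```python
# Multiply polynomial factors to find the degree of the product.
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists, highest power first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

factors = [[1, 4], [1, 1], [1, -3]]   # (x+4), (x+1), (x-3)
coeffs = [1]
for f in factors:
    coeffs = poly_mul(coeffs, f)
# coeffs == [1, 2, -11, -12], i.e. x^3 + 2x^2 - 11x - 12
```

The expanded form is x³ + 2x² − 11x − 12: the highest power is 3, so the polynomial is cubic (and note you could read the degree off without expanding, since three degree-1 factors multiply to degree 3).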
<urn:uuid:c2ac0d6e-85a1-48bf-8317-2c117bc19fec>
2.6875
165
Q&A Forum
Science & Tech.
66.331059
Ian Musgrave has just posted an excellent article on the poor design of the vertebrate eye compared to the cephalopod eye; it’s very thorough, and explains how the clumsy organization of the eye clearly indicates that it is the product of an evolutionary process rather than of any kind of intelligent design. A while back, Russ Fernald of Stanford University published a fine review of eye evolution that summarizes another part of the evolution argument: it’s not just that the eye has awkward ‘design’ features that are best explained by contingent and developmental processes, but that the diversity of eyes found in the animal kingdom share deep elements that link them together as the product of common descent. If all we had to go on was suboptimal design, one could argue for an Incompetent Designer who slapped together various eyes in different ways as an exercise in whimsy (strangely enough, though, this is not the kind of designer IDists want to propose)…but the diversity we do see reveals a notable historical pattern of constraint. How different are animal eyes? In the metazoans, about a third of all phyla don’t have any eyes at all, although light sensitive molecules are ubiquitous (for example, in sea urchin tube feet), and can be found in bacteria and, obviously, plants. Another third of the metazoan phyla have light sensitive organs, specialized epithelial patches that respond to changes in light levels. They may have some morphological specializations, but they don’t focus an image in any way, and can’t resolve patterns of light. The rest have distinct, image-forming eyes that focus light on an array of light-sensitive cells, and can detect patterns of light and shadow that are used to perceive a picture of the world around them. For us motile animals, eyes seem to be part of the recipe for success—the six animal phyla (cnidaria, mollusca, annelida, onychophora, arthropoda, and chordata) that contain 96% of all animal species also primitively evolved eyes. 
These animal eyes fall into two major categories, with further subdivisions. Chambered eyes have a single optical element—a slit, a lens, a mirror—and focus an image on a 2-dimensional array of photoreceptors, the retina. Compound eyes use multiple optical elements. Eyes can be further categorized as rhabdomeric or ciliary by the nature of the cellular elements that make up the photoreceptors, by the kind of opsin molecule used to transduce the light signal, and by the signaling pathway used to convert a conformation change of the opsin molecule into a change in the electrical potential across the cell membrane. Whoa…the differences are all over the place. Eyes look different, function differently, develop differently, and use different molecules, so where are the signs of common descent? The differences tell us that eyes have arisen in evolutionary history multiple times, but there are still deep homologies; in particular, look at those opsins, specifically the Type 2 opsins. It’s one big happy gene family, with members all related to one another. The major family members are the r-opsins, used in the rhabdomeric eyes of invertebrates, and the c-opsins, used in the ciliary eyes of vertebrates, but note that there is considerable overlap. We vertebrates also have an r-opsin: melanopsin is a visual pigment molecule expressed in ganglion cells (not classically considered photoreceptors) in our eyes, and are involved in detecting general light levels to reset our circadian clocks. Some invertebrates have both rhabdomeric and ciliary eyes and use both r-opsin and c-opsin in vision. Note also that some of these opsins have unknown functions—neuropsin, for instance, is expressed in human testes, a curious pattern that makes me wonder if there’s some analogy that could be made with the tube feet of sea urchins. 
You might argue that the relationships are all spurious—maybe there is only one chemistry possible for transducing photons into a chemical signal (which is absurd on the face of it, but let’s be thorough and make the suggestion anyway). That’s easily countered: those are Type 2 opsins, what about Type 1? Type 1 opsins are found in the Archaea and in eukaryotic microbes, and while both Type 1 and Type 2 interact with retinal, Type 1 opsins have a different molecular size, a different structure, and a different function—they couple photoreception to transmembrane ion pumping, rather than to activation of a G protein signal transduction cascade. The similarities of these various phototransduction molecules are not necessary outcomes of their function, but instead reflect a contingent historical connection between them. We can put together a good explanation for these relationships with evolutionary theory. Taken together, these data show that at least two kinds of photoreception existed in the Urbilateria, before the split into three Bilateria branches at the Cambrian. Moreover, each branch of the family tree still carries versions of both of these photoreceptor types, along with other opsin-dependent photodetection systems yet to be fully described. In the course of evolution, vertebrate vision favored ciliary photodetection for the pathway that delivers images, whereas invertebrates favored rhabdomeric photodetection for their main eyes, although why this might be remains unknown. Along both evolutionary paths, secondary photodetection systems remained to give additional information about light, possibly to instruct circadian rhythms, phototaxis, or other light-dependent behaviors. But, if vertebrates are an example, these two photodetection systems functioned together, rather than remaining separate. 
Although the remaining five families of opsins have not been fully characterized, it seems probable that they also respond to light, and organisms use the information they provide. One other thing that I would note from these examples is illustrated by the c-opsin and r-opsin pathways. These are “molecular machines” or “biochemical pathways” of the sort that Intelligent Design creationists like Behe talk about, but they don’t dig into the specifics of these because they undercut their point. I look at pathways like the c-opsin → phosphodiesterase (PDE) vs. r-opsin → phosphatidylinositol (PIP) pathway, and what I see are two common signal transduction pathways (PDE and PIP show up in lots of other places, too) that have been coupled to slightly different sensors. What we find in molecular biology is flexibility and modularity, attributes that lend themselves well to combinatorial changes that can easily increase complexity—a complexity that is a hallmark of unguided evolutionary change, not design. Fernald RD (2006) Casting a genetic light on the evolution of eyes. Science 313:1914-1918.
<urn:uuid:e4709eef-7ae8-497d-8486-168e102873da>
3.296875
1,484
Personal Blog
Science & Tech.
22.071308
Diatoms can be found living in a wide variety of extreme environments, including ancient Antarctic ice. Some believe they may even exist on Europa and in interstellar dust. The above diatom, Surirella, was collected from the alkaline and hypersaline Mono Lake. Originally uploaded in Microbial Life. Image 2705 is a 205 by 300 pixel JPEG. Provenance: The image was taken by David Patterson and provided courtesy of the microscope web site. Reuse: No information about limits on reusing this item has been recorded. You will need to contact the original creator for permission in cases that exceed fair use (see http://fairuse.stanford.edu/).
<urn:uuid:7cf69801-eb56-4136-b0c9-ae82dce2cefc>
3.21875
140
Truncated
Science & Tech.
33.850476
The mysterious blue reflection nebula found in catalogs as VdB 152 or Ced 201 really is very faint. It lies at the tip of the long dark nebula Barnard 175 in a dusty complex. The cosmic apparitions are nearly 1,400 light-years away along the northern Milky Way in the royal constellation Cepheus. Near the edge of a large molecular cloud, pockets of interstellar dust in the region block light from background stars or scatter light from the embedded bright star, giving the nebula its characteristic blue color. Ultraviolet light from the star is also thought to cause a dim reddish luminescence in the nebular dust. Though stars do form in molecular clouds, this star seems to have only accidentally wandered into the area, as its measured velocity through space is very different from the cloud's velocity. This deep telescopic image of the region spans about 7 light-years. Credit & Copyright: (Catching the Light)
<urn:uuid:7eedaef2-9f0f-44e1-88dc-10ad7f2d004a>
3.328125
219
Knowledge Article
Science & Tech.
47.316793
You've probably heard of Carbon-14, also written C14. C14 is a radioactive form of carbon produced in the outer atmosphere when cosmic rays transform atoms of regular nitrogen. Being radioactive and thus unstable, C14 "decays" at a certain predictable rate. Its decay rate is such that half of any given quantity of it will convert back to nitrogen in about 5,568 years (this is the conventional "Libby" half-life used in the calculations here; the modern measured value is closer to 5,730 years). Then half of the remainder will be gone in the next 5,568 years, etc. The fancy way of saying this is that C14's half-life is 5,568 years. There's a good site on the Web for finding more detailed information on C14 dating. C14's remarkable behavior is very useful to us because of two further fabulous facts: First, despite its strange manner of having a half-life, it behaves just like regular carbon as it cycles through the ecosystem, sometimes as a constituent of the gas carbon dioxide, or CO2, sometimes as a carbohydrate inside a plant's body, or whatever. The second fabulous fact about C14 is that since it both appears and disappears at given rates, and surely has been doing so ever since Earth had an atmosphere with nitrogen in it, it can be assumed that the ratio between regular carbon and C14 in the atmosphere has long remained the same. Therefore, imagine this: A typical green plant spends its day "photosynthesizing" -- using energy from sunlight to convert water and carbon dioxide into its "food," which later becomes part of the plant's own body. The vast majority of the carbon comprising the carbon dioxide used during photosynthesis is regular carbon, but a certain small percentage is C14. This means that, ultimately, the carbon in the plant's body itself will be composed of a certain predictable quantity of C14. Now imagine that the plant dies, decays into a kind of compost, and a snail eats the compost. The result will be that the snail's body will ultimately be composed of essentially the same ratio of C14 to regular carbon as the plant. Then the snail dies and is buried beneath... 
loess, let's say. Now, after a certain time we dig up the fossil snail's shell, analyze the shell's C14 content, and find that it is exactly half of the C14 found in a living snail's body. Therefore... Since C14's half-life is 5,568 years, we can estimate that our fossil snail lived approximately that many years ago. If the C14 content had been exactly one-quarter of that found in a living snail, we'd estimate an age of about 11,100 years. Unfortunately, after about 30,000 years the concentration of C14 in fossil specimens becomes so small that it becomes chancy to estimate times from the small concentrations. However, for something that lived 20,000 years ago, C14 dating works great! It's good to have this confidence in C14 dating up to the 20,000-year mark because during the mid 1960s our much-mentioned team of investigators from Millsaps College in Jackson, Mississippi, headed by J.O. Snowden, Jr. and Richard R. Priddy, pulled up to a considerable-size road cut on the U.S. Highway 61 Bypass at Vicksburg, Mississippi -- a road cut through unmistakably classic loess -- scraped away part of the road cut's weathered face, and found some white, thumbnail-size fossil snails, just like the ones that once caused B. Shimek's heart to flutter in our "Snails were telling us" section. In this road-cut loess at Vicksburg, fossil snail shells lay embedded at various levels, some at the top, some at the bottom, and some in between. The investigators gingerly pried shells from various levels, and lovingly stored them out of harm's way. Eventually the shells were checked for their C14 content both at the U.S. Department of Agriculture Sedimentation Laboratory in Oxford, Mississippi, and by a company called Isotopes, Inc., of Westwood, New Jersey. When the reports on estimated dates-of-snail-death came back, the numbers harmonized with previous estimates of the loess's age, they confirmed aspects of a magnificent story, and they were simply glorious to see. 
The snail-shell samples had been taken at five different levels. The topmost sample was judged to be 17,850 years old, give or take 380 years. The middle sample was placed at 19,250 years old, give or take 350 years. The lowest sample registered at 25,300 years old, give or take 1,000 years. In short, the snails tell us that our loess was deposited during a long period between about 25,000 and 18,000 years ago, which is toward the end of the last Ice Age.
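The dating arithmetic described above reduces to one formula: if a fraction f of the original C14 remains, the age is log2(1/f) half-lives. A minimal sketch, using the 5,568-year figure from the text:

```python
# Radiocarbon age from the fraction of C14 remaining.
import math

HALF_LIFE = 5568.0  # years (the conventional "Libby" half-life used in the text)

def c14_age(fraction_remaining):
    """Age in years given the fraction of the original C14 still present."""
    return HALF_LIFE * math.log2(1.0 / fraction_remaining)

c14_age(0.5)    # 5568.0  -> one half-life
c14_age(0.25)   # 11136.0 -> the ~11,100-year figure quoted above
```

The same formula also shows why the method gets chancy beyond ~30,000 years: by then only about 1/40 of the original C14 survives, so small measurement errors translate into large age errors.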
<urn:uuid:78dbd99a-6b82-4cac-83cb-0da5ef4a6622>
3.59375
1,020
Knowledge Article
Science & Tech.
61.348792
Ecological Footprint (EF) measures how fast we consume resources and generate wastes compared to how fast nature can absorb our waste and generate new resources. In this way the EF indicates what we need to do in order to live in balance with natural systems. It can be calculated for an activity, an individual, a family, a city, or a nation. This allows us to compare the difference in environmental impacts between individual lifestyles, between nations, or between biking versus driving. The Ecological Footprint is very valuable for creating resource use goals and making decisions about how to reduce our negative impacts in terms of waste and pollution. The EF also offers a larger perspective in which we can see our own consumption compared to others, and indicates what our fair share of global resources is. (We once asked a group of about 10 sustainability consultants what framework they used for making personal decisions and every one of them said the Ecological Footprint.)
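As a toy example of the demand-versus-supply comparison the EF makes, here is the arithmetic in Python. The two per-person hectare figures are illustrative placeholders, not official Global Footprint Network statistics:

```python
# Compare resource demand (footprint) with nature's supply (biocapacity).
footprint_gha_per_person   = 2.7   # demand: global hectares used (placeholder)
biocapacity_gha_per_person = 1.6   # supply: global hectares available (placeholder)

overshoot = footprint_gha_per_person / biocapacity_gha_per_person
# overshoot ~ 1.69: sustaining this lifestyle for everyone would take
# roughly 1.7 Earths; the "fair share" is the biocapacity figure.
```

An overshoot factor above 1 means wastes accumulate and resources deplete faster than nature regenerates them, which is the balance condition the EF is designed to track.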
<urn:uuid:ad6ab5ad-85ec-41f5-a154-dcd9dc5a6b54>
3.5625
187
Knowledge Article
Science & Tech.
25.418405
Details about Fermilab's accelerators To create the world's most powerful particle beams, Fermilab uses a series of accelerators. The diagram and photos show the paths taken by protons and antiprotons. The Cockcroft-Walton pre-accelerator provides the first stage of acceleration. Inside this device, hydrogen gas is ionized to create negative ions, each consisting of two electrons and one proton. The ions are accelerated by a positive voltage and reach an energy of 750,000 electron volts (750 keV). This is about 30 times the energy of the electron beam in a television's picture tube. Next, the negative hydrogen ions enter a linear accelerator, approximately 500 feet long. Oscillating electric fields accelerate the negative hydrogen ions to 400 million electron volts (400 MeV). Before entering the third stage, the ions pass through a carbon foil, which removes the electrons, leaving only the positively charged protons. The third stage, the Booster, is located about 20 feet below ground. The Booster is a circular accelerator that uses magnets to bend the beam of protons in a circular path. The protons travel around the Booster about 20,000 times so that they repeatedly experience electric fields. With each revolution the protons pick up more energy, leaving the Booster with 8 billion electron volts (8 GeV). The Main Injector, completed in 1999, accelerates particles and transfers beams. It has four functions: (1) It accelerates protons from 8 GeV to 150 GeV. (2) It produces 120 GeV protons, which are used for antiproton production (see picture and text at bottom). (3) It receives antiprotons from the Antiproton Source and increases their energy to 150 GeV. (4) It injects protons and antiprotons into the Tevatron. 
Inside the Main Injector tunnel, physicists have also installed an Antiproton Recycler (green ring). It stores antiprotons that return from a trip through the Tevatron, waiting to be re-injected. The Tevatron receives 150 GeV protons and antiprotons from the Main Injector and accelerates them to almost 1000 GeV, or one tera electron volt (1 TeV). Traveling only 200 miles per hour slower than the speed of light, the protons and antiprotons circle the Tevatron in opposite directions. The beams cross each other at the centers of the 5000-ton CDF and DZero detectors located inside the Tevatron tunnel, creating bursts of new particles. To produce antiprotons, the Main Injector sends 120 GeV protons to the Antiproton Source, where the protons collide with a nickel target. The collisions produce a wide range of secondary particles including many antiprotons. The antiprotons are collected, focused and then stored in the Accumulator ring. When a sufficient number of antiprotons has been produced, they are sent to the Main Injector for acceleration and injection into the Tevatron. Last modified 1/15/2002.
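The "only slightly slower than light" claim is a standard special-relativity calculation. A sketch with round numbers (1 TeV total proton energy) puts the gap at a few hundred miles per hour, the same order of magnitude as the figure quoted above:

```python
# How far below light speed is a ~1 TeV proton?
import math

C = 299_792_458.0              # speed of light, m/s
PROTON_REST_ENERGY = 0.938272  # GeV

gamma = 1000.0 / PROTON_REST_ENERGY        # Lorentz factor = E_total / (m c^2)
beta = math.sqrt(1.0 - 1.0 / gamma ** 2)   # v / c
gap_ms = C * (1.0 - beta)                  # ~132 m/s below light speed
gap_mph = gap_ms * 2.23694                 # ~295 mph
```

Because gamma is so large, 1 − β ≈ 1/(2γ²), so doubling the beam energy quarters the remaining speed gap; the exact mph figure depends on which beam energy you plug in.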
<urn:uuid:fe92a6ae-3bba-4c5f-8f43-fa51fae837bf>
3.890625
694
Knowledge Article
Science & Tech.
37.643179
A team of astronomers at the Harvard-Smithsonian Center for Astrophysics discovered a new planet just 40 light-years from Earth that is completely covered in water. Though I immediately imagine a web-toed Kevin Costner wandering around looking for dirt to trade, Zachory Berta, who was part of the team that made the discovery, says that the planet isn't just covered by an ocean, but that there's also a "dense atmosphere of water vapor," which could make living conditions rough for humans. Dreaming of new worlds? Learn more about GJ1214b below.
- It's so hot right now — GJ1214b orbits its red-dwarf star every 38 hours at a distance of 1.3 million miles. This means that the planet has a temperature of 450 degrees Fahrenheit. Talk about a steam bath! According to researchers, "the high temperatures and high pressures would form exotic materials like 'hot ice' or 'superfluid water,' substances that are completely alien to our everyday experience."
- Size matters — The water planet is bigger than Earth but smaller than Uranus, coming in at about 2.7 times Earth's diameter and weighing almost seven times as much.
- It's well-traveled — Theorists believe that GJ1214b formed "farther out from its star," then traveled inward over the course of its history, landing in the system's "habitable zone."
<urn:uuid:f35a6ab4-253d-474d-b13e-cac666714811>
3.859375
301
Listicle
Science & Tech.
53.57175
Scientists recently made an eye-opening discovery at Golden Gate Highlands National Park in South Africa. Several nests of Massospondylus, a 20 ft (6 m) long prosauropod, including fossilized eggs and hatchling footprints, are increasing scientists' knowledge of the nesting, breeding and mothering habits of dinosaurs. The dinosaur nesting site is believed to be 190 million years old, some 100 million years older than any dinosaur nest previously found. At least 10 nests were uncovered at several different rock levels. Each contained up to 34 round eggs in tightly clustered clutches. The distribution of the nests indicated that dinosaurs returned repeatedly to the…
<urn:uuid:559fb1cd-032a-4c24-a84a-0b586c488ddd>
3.03125
197
Content Listing
Science & Tech.
41.409159
Posted by y912f on Friday, January 16, 2009 at 8:39pm.

you are told that tan s = +6.86874, so by the CAST rule, the angle must be in the I or III quadrant. but our domain is from 0 to pi/2 which is only quadrants I and II. and since they are using radians to describe the domain, we should also answer in radians. set your calculator to radians, inv tan or 2nd function tan. I got s = 1.42622 radians

ya see this somehow doesnt work on my calculator so you have to enter inv tan 1??

let's try it with degrees first. set your calc to degrees, you should get 1. now work it backwards (the inverse). at the top left you should have either an "INV" key or 2ndF key. you should get 45 degrees. If that doesn't work it could be that on your calc you have to enter the number first. let me know if it worked. If it worked, set your machine to radians and repeat the steps in the same way as noted above.

when i do tan 45 i get 1.6197.. do you know any site where i can use a calculator...

y912f, you have your calculator set to radians. yes, tan 45 radians = 1.6197. look for a key that says DRG, it will toggle your setting between degrees, radians and gradians

ya Damon already helped me understand the problem. i set it to radians, entered 1, atan and it equaled .785. then entered +, pi = 3.926 radians, in other words 5pi/4. thanks for all your help tooo!!!!
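The calculator steps in this thread can be checked with Python's math module, which works in radians by default (a sketch added for illustration, not part of the original thread):

```python
import math

s = math.atan(1)              # inverse tan of 1, in radians: pi/4, about 0.785
third_quadrant = s + math.pi  # the quadrant-III solution: 5*pi/4, about 3.927

# Evaluating tan(45) without converting first treats 45 as radians --
# exactly the degrees/radians mix-up discussed in the thread.
wrong = math.tan(45)                # ~1.6197, because 45 is read as radians
right = math.tan(math.radians(45))  # 1.0, the expected tan of 45 degrees
```

This also reproduces the poster's mysterious 1.6197: it is simply tan of 45 radians.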
<urn:uuid:942bed59-c8d1-435c-81ab-50d6fcb3faf8>
2.890625
586
Q&A Forum
Science & Tech.
89.181357
Topic review (newest first)

Thank you very much. It was very helpful and clear. And easy, which is the most important :-)

Say your lines look like this: [diagram missing]

I need to make line B parallel to line A. I have the start and end coordinates of line A and the start coordinate of line B. Can I calculate the end coordinate of line B to make it parallel to line A? I also have the end X coordinate, so all I actually need is the end Y.
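The calculation being asked for reduces to one line of algebra: a line parallel to A must have the same slope, so B's missing end Y is B's start Y plus A's slope times B's horizontal run. A sketch in Python (function and variable names invented for illustration; assumes line A is not vertical):

```python
def parallel_end_y(a_start, a_end, b_start, b_end_x):
    """Given line A's endpoints and line B's start point and end X,
    return the end Y that makes B parallel to A."""
    (ax1, ay1), (ax2, ay2) = a_start, a_end
    bx1, by1 = b_start
    slope = (ay2 - ay1) / (ax2 - ax1)      # assumes ax2 != ax1 (A not vertical)
    return by1 + slope * (b_end_x - bx1)   # same slope => parallel

# Line A runs from (0, 0) to (4, 2); line B starts at (1, 5) and ends at x = 7
end_y = parallel_end_y((0, 0), (4, 2), (1, 5), 7)   # 5 + 0.5 * 6 = 8.0
```

A vertical line A would need a separate case: then B is parallel only if it is also vertical, and the end Y is unconstrained by the slope.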
<urn:uuid:59ace767-c48b-434e-9ca1-52af392e4383>
2.984375
167
Comment Section
Science & Tech.
78.721667
Why can't one of our space telescopes, capable of seeing galaxies many light years away, be pointed at the site of the moon landings where one can assume there are some remnants from the visits. Would this definitively prove to any sceptics that humans landed on the moon? It would be a nice way to celebrate the 40th anniversary of the first landing.

Liza Brooks, Shrivenham, Wiltshire, UK
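There is a quantitative answer to this question: a telescope's angular resolution is diffraction-limited to roughly θ ≈ 1.22 λ/D. Plugging in numbers for a Hubble-class mirror (a sketch; the 2.4 m aperture, 550 nm wavelength, and Moon distance are standard reference values, not from the letter):

```python
WAVELENGTH_M = 550e-9        # visible light, ~550 nm
APERTURE_M = 2.4             # Hubble-class primary mirror diameter
MOON_DISTANCE_M = 3.844e8    # average Earth-Moon distance, ~384,400 km

theta = 1.22 * WAVELENGTH_M / APERTURE_M          # Rayleigh criterion, radians
smallest_feature_m = theta * MOON_DISTANCE_M      # ~100 m at the Moon's distance
```

Since the Apollo descent stages are only about 4 m across, even a telescope that resolves distant galaxies (large but far away) cannot resolve hardware on the Moon: what matters is angular size, not distance.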
<urn:uuid:e000bb13-3b0f-44af-b884-aee057a0d01a>
3.25
114
Truncated
Science & Tech.
46.18373
Can we do that here since we don't have tons of black sand? Here is a little excerpt from another article...http://www.stuff.co.nz/stuff/thepress/4017784a11.html New solar cells developed by Massey University don't need direct sunlight to operate and use a patented range of dyes that can be impregnated in roofs, window glass and eventually even clothing to produce power. This means teenagers could one day be wearing jackets that will recharge their equivalents of cellphones, iPods and other battery-driven devices. Dr Campbell said that unlike silicon-based solar cells, the dye-based cells are still able to operate in low-light conditions, making them ideal for cloudy climates. They are also more environmentally friendly because they are made from titanium dioxide - an abundant and non-toxic, white mineral available from New Zealand's black sand.
<urn:uuid:42b3821d-4b32-462b-bd2d-bec0175ac03e>
2.703125
186
Comment Section
Science & Tech.
51.305
CSIRO: warming up to five degrees by 2070
State of the climate report casts gloomy predictions

Even a La Niña event and cool weather in Australia in 2010 and 2011 haven't reversed the overall long-term trend to a warmer globe, according to Australia's latest State of the Climate report. The report was assembled by Australia's peak science body, the Commonwealth Scientific and Industrial Research Organisation, and the Bureau of Meteorology. It predicts ongoing global temperature increases, with Australia likely to experience between 1°C and 5°C increases by 2070. The La Niña event – which has soaked the country for two successive summers and brought flooding in Queensland, New South Wales and Victoria – damped the trend for 2010 and 2011. Temperatures in those two years in Australia were kept below the long-term average by 0.24°C, the report states; however, 2010 was the warmest on record during a La Niña event and still managed to be the 11th warmest year, the report says. "The highest temperatures on record are occurring with greater frequency and over greater areas of Australia", the Bureau of Meteorology's Dr Karl Braganza told ABC Radio. That view is supported by CSIRO CEO Dr Megan Clark: "We've seen changes in CO2 over very long geological history, but never this fast." Other key points in the report are: global CO2 emissions from 2009 to 2010 grew by 5.9 percent (reversing a small decline in the global financial crisis years of 2008-2009); sea surface temperatures globally are rising, and the rise is faster around Australia; and this country's sea level rise since 1993 is at least equal to, and often greater than, the global average. The global temperature increase, the report states, "continues the trend since the 1950s of each decade being warmer than the previous", something masked but not reversed by the current La Niña event.
Daily maximum temperatures in Australia, the report finds, have increased by 0.75°C since 1910, the annual average daily mean temperatures are up by 0.9°C over the same period, and annual average overnight minimum temperatures are up by 0.9°C. In addition, the report states, "there has been an increase in the frequency of warm weather and a decrease in the frequency of cold weather … the frequency of extreme (record) hot days has been more than double the frequency of extreme cold days during the past two years." Changes in rainfall also show some worrying trends, in spite of the southern oscillation (the cycle between El Niño / La Niña periods) bringing heavy rainfall: in the North, monsoonal rains are rising, while across the south, the autumn and winter rains that drive agricultural activity are decreasing. "Recent drying trends across southern Australia in autumn and winter have been linked to circulation changes. The causes of these changes are an area of active research," the report says. South-western Australia is also experiencing a decreasing rainfall trend. The study repeats predictions that over the long term, more droughts are likely, interspersed with intense rainfall events. "The fundamental physical and chemical processes leading to climate change are well understood," the report concludes.

Re: AC @ 10:31 > Guess what? They already do this. This is called hindcasting and it is easy to make the models match the past. Too warm in 1978? Then increase the sulphates released. Too cold? Then decrease the sulphates. That is just one of many parameters they can manipulate to get the models to match the past. Give me the last 20 years of UK horse racing results and I'll write a model that would have made you a profit over those 20 years. It will be pure luck if it makes a profit over the next 20 years, but don't worry, because if it doesn't I'll simply change some of the model parameters so that it would have.

Re: burb @09:19 No.
I'm saying that even a perfect model, with as few as 3 parameters, that exactly simulates the physics, might have no predictive power. It is also impossible to determine which models do and do not have predictive power. Add more parameters and it is less likely to have predictive power. Climate models are so parameterised that they can be used to say whatever you want them to say. For example, in 2007 all but one of the models used by the IPCC claimed that the northern hemisphere would have less winter snow cover. At that time there was a small decrease in snow cover in the northern hemisphere. After several harsh winters reversed the decrease, those same models, with different parameters, are now saying that the northern hemisphere will have increased snow cover. Whether it's flood, drought, cold, warmth, snow or lack of snow, in any area, the models can predict it by running the model with suitable parameters.

Re: Oh, look "It's a computer model." What is your point here? I keep seeing this sort of statement - on a geek site of all places! Is it that it seems like a radical idea to model physical phenomena mathematically and to use computers to solve such models? Believe it or not, people have been doing this sort of thing for decades now for all sorts of applications.
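The horse-racing analogy in the comments above is a description of overfitting: a model with enough free parameters can match any historical record without gaining any predictive power. A toy illustration in Python (the coin-flip "race results" are made up for the example):

```python
import random

random.seed(0)

# 20 past "race results" (1 = favourite won, 0 = lost) -- pure noise
history = [random.choice([0, 1]) for _ in range(20)]

# A "model" with 20 free parameters: one per past race, i.e. it
# simply memorises the historical outcomes
model = dict(enumerate(history))

# Hindcast: the model reproduces the past perfectly...
hindcast = sum(model[i] == history[i] for i in range(20)) / 20

# ...but on genuinely new data it performs no better than chance
future = [random.choice([0, 1]) for _ in range(20)]
forecast = sum(model[i] == future[i] for i in range(20)) / 20
```

A perfect hindcast here tells you nothing about forecast skill, which is exactly the objection the commenter raises about tuning model parameters to match the past.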
<urn:uuid:d845465e-f018-47fd-800e-c9dfd99c504f>
3.0625
1,121
Comment Section
Science & Tech.
51.776912
If it wasn't carbonated water, here are two possible explanations:

If it wasn't carbonated, it's probably the supercooling effect. This is much the same as superheating, where water doesn't boil even though it's at 100°C. If the insides of the bottles were smooth enough and the water pretty pure (mineral water should do just fine for that), there's no place for the crystallisation to start, and thus the water stays liquid. By opening the bottle, dust and such enters the bottle, and the water turns to ice.

We suspect that what's happening here is that the water in the bottles which did not freeze overnight was "supercooled." Water normally freezes when it is cooled below 0 degrees Celsius, forming ice crystals. Ice crystals form more easily when they grow on existing ice crystals -- the water molecules like to pack themselves in place on a crystal that's already gotten started. It doesn't take much to start the crystallization process going -- a little piece of dust or other impurity in the water, or even a scratch on the bottle are sometimes all it takes to get ice crystals growing. The process of starting off a crystal is called "nucleation." In the absence of impurities in the water and imperfections in the bottle, the water can get "stuck" in its liquid state as it cools off, even below its freezing point. We say this supercooled state is "metastable." The water will stay liquid until something comes along to nucleate crystal growth. A speck of dust, or a flake of frost from the screw-cap falling into the bottle are enough to get the freezing going, and the crystals will build on each other and spread through the water in the bottle.

Water releases 80 calories per gram when turning from a liquid to a solid. We suspect your freezer is only a few degrees Celsius below zero (perhaps ten or fifteen?), and the specific heat of water is one calorie per degree per gram.
This means that your water, as it freezes, warms up the rest of the water until the process stops at 0 degrees Celsius, freezing perhaps ten or twenty percent of the water. This ice may be distributed throughout the bottle, though, as the crystallization process happens very quickly and heat flows slowly. We suspect you have slush in your bottle rather than hard ice when this is done. You can compare with another bottle which froze hard in your freezer overnight how hard it is to squeeze the bottle and how long it takes to melt. The ice will also take up more room than the water it used to be, and some water may spill out the top. There can also be some small effects of pressure and of dissolved gases on the freezing temperature. Is your water under pressure?
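The ten-to-twenty-percent estimate follows directly from the two numbers quoted above: the latent heat of fusion (80 cal/g) and the specific heat of water (1 cal/g·°C). A quick check in Python (the freezer temperature is the same guess the text makes):

```python
LATENT_HEAT = 80.0      # calories released per gram of water that freezes
SPECIFIC_HEAT = 1.0     # calories per gram per degree Celsius

def fraction_frozen(supercooled_by_deg_c):
    """Fraction of supercooled water that flash-freezes before the
    released latent heat warms the remainder back up to 0 deg C."""
    return SPECIFIC_HEAT * supercooled_by_deg_c / LATENT_HEAT

# Water supercooled to -15 C (a typical home freezer, per the text's guess)
frozen = fraction_frozen(15)   # 0.1875 -- slush rather than hard ice
```

So a bottle supercooled by 15°C flash-freezes only about 19% of its water, which is why the result is slush rather than a solid block.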
<urn:uuid:175d1f5e-56d6-4bfd-8b4e-418c6b8a50c5>
2.96875
570
Comment Section
Science & Tech.
56.509716
Short for Microsoft Foundation Classes, a large library of C++ classes developed by Microsoft. For Windows-based applications written in C++, MFC provides an enormous head start. One of the hardest parts of developing C++ programs is designing a logical hierarchy of classes. With MFC, this work has already been done. MFC is bundled with several C++ compilers and is also available as part of the Microsoft Developer Network (MSDN).
<urn:uuid:73207a99-e260-403f-a14a-ddf7a2798bea>
2.890625
91
Knowledge Article
Software Dev.
51.222143
A general servlet.

A servlet is a key portion of a server-based application that implements the semantics of a particular request by providing a response. This abstract class defines servlets at a very high level. Most often, developers will subclass HTTPServlet or even Page.

Servlets can be created once, then used and destroyed, or they may be reused several times over (it's up to the server). Therefore, servlet developers should take the proper actions in awake() and sleep() so that reuse can occur.

Objects that participate in a transaction include:

The awake(), respond() and sleep() methods form a message sandwich. Each is passed an instance of Transaction which gives further access to all the objects involved.

Methods defined here:
- awake(self, trans) - Send the awake message. This message is sent to all objects that participate in the request-response cycle in a top-down fashion, prior to respond(). Subclasses must invoke super.
- log(self, message) - Log a message. This can be invoked to print messages concerning the servlet. This is often used by self to relay important information back.
- Return the name, which is simply the name of the class. Subclasses should *not* override this method. It is used for logging and debugging.
- respond(self, trans) - Respond to a request.
- runMethodForTransaction(self, trans, method, *args, **kw)
- serverSidePath(self, path=None) - Return the filesystem path of the page on the server.
- setFactory(self, factory)
- sleep(self, trans) - Send the sleep message. Subclasses must invoke super.

Static methods defined here:
- Returns whether a single servlet instance can be reused. The default is True, but subclasses can override to return False. Keep in mind that performance may be seriously degraded if instances can't be reused. Also, there are no known good reasons not to reuse an instance. Remember that the awake() and sleep() methods are invoked for every transaction. But just in case, your servlet can refuse to be reused.
- Return whether the servlet can be multithreaded. This value should not change during the lifetime of the object. The default implementation returns False. Note: This is not currently used.

Data descriptors defined here:
- dictionary for instance variables (if defined)
- list of weak references to the object (if defined)
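The awake/respond/sleep "message sandwich" described above can be illustrated with a minimal, self-contained Python sketch. This is a mock of the pattern, not Webware's actual code; the Transaction class and the run_transaction driver are invented stand-ins.

```python
class Transaction:
    """Minimal stand-in for the object passed through the request cycle."""
    def __init__(self, request):
        self.request = request
        self.response = None

class Servlet:
    calls = None  # per-transaction state, (re)created in awake()

    def awake(self, trans):
        # Set up per-transaction state so the instance can be safely reused.
        self.calls = ['awake']

    def respond(self, trans):
        self.calls.append('respond')
        trans.response = 'OK: ' + trans.request

    def sleep(self, trans):
        # Tear down per-transaction state; the instance may now be reused.
        self.calls.append('sleep')

def run_transaction(servlet, trans):
    # The server drives the message sandwich around respond().
    servlet.awake(trans)
    servlet.respond(trans)
    servlet.sleep(trans)
    return trans.response

servlet = Servlet()
result = run_transaction(servlet, Transaction('GET /'))
# Because awake() re-initialises state, the same instance can serve again:
result2 = run_transaction(servlet, Transaction('GET /about'))
```

Running two transactions through one instance shows why reuse is safe only if awake() and sleep() properly create and release per-transaction state.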
<urn:uuid:fab80a78-0723-4bb2-b94c-9b8a9be38bf1>
3.484375
553
Documentation
Software Dev.
46.195278
Family: Lycaenidae, Gossamer-wing Butterflies

Description: The Brown Elfin (Callophrys augustinus) is a butterfly of the Lycaenidae family. It is found from Newfoundland north and west through the northern United States and the prairie provinces to Alaska. To the south it ranges in the Appalachian Mountains to northern Georgia and northern … Subspecies iroides is known as the Western Elfin. The wingspan is 22–29 mm. Adults are on wing from early May to early June in one generation. They feed on flower nectar from various species, including Vaccinium, Sanicula arctopoides, Lindera, Salix, Barbarea and Prunus americana. The larvae feed on Ericaceae species, including Vaccinium vacillans and Ledum groenlandicum in the east. They feed on a wide variety of plants in the west, including Arbutus and Cuscuta species. They feed on the flowers and fruits of their host plant. Pupation takes place in the litter at the base of the host plant. Hibernation takes place in the pupal stage.

Dimensions: 3/4-1 1/8" (19-28 mm).

Habitat: Freshwater swamps, marshes & bogs, Cities, suburbs & towns, Scrub, shrub & brushlands, Deserts, Grasslands & prairies, Forests & woodlands.

Range: Mid-Atlantic, California, Rocky Mountains, Plains, Alaska, Northwest, Southwest, New England, Eastern Canada, Southeast, Western Canada, Great Lakes.
<urn:uuid:8f44d01c-c5e8-41a5-a08f-f27b381a41f5>
2.6875
340
Knowledge Article
Science & Tech.
46.108738
Mon March 18, 2013

Let's take a roadtrip to Mars

What would it take to get humans to Mars? For the last seven months, NASA's rover 'Curiosity' has crawled all over the planet's dusty red Gale Crater. As it explores, the rover has sent back all sorts of information to Earth for further investigation. Most recently, a report of a rock sample collected by Curiosity shows that, yes, ancient Mars could have supported living microbes. But let's go one step further. What would it take for human beings to get to Mars? Ben Longmier is an Assistant Professor of Aerospace Engineering at the University of Michigan College of Engineering and researches electric propulsion, spacecraft design and basic plasma physics. Michigan Radio's Cynthia Canty spoke with Longmier about the challenges and possibilities of getting humans on Mars. Click the link above to hear the full interview.
<urn:uuid:4227e4f4-08fe-4b74-a699-224db53394f0>
3.0625
202
Truncated
Science & Tech.
50.336364
[Numpy-discussion] Truth value of an array
Fri Apr 18 06:11:37 CDT 2008

In mathematics, if I compare two functions, it means that I compare them on all their "coordinates". If I say "f < g" I mean "f(x) < g(x) for all x". The same holds for a vector: if I write "v == w" I mean "v[i] == w[i] for all i". How come this doesn't work in numpy? And why the message about the truth value of an array being ambiguous? What is ambiguous? In my opinion any boolean array should be cast with all() automatically so that people can simply write:

if v == w:

Or is there a good reason why this is not possible?
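The behaviour the poster is asking about can be seen directly (a minimal sketch using NumPy): `v == w` is element-wise, so the result is a boolean array, and Python cannot coerce a multi-element array to a single truth value — hence the "ambiguous" error. The explicit reductions `all()` and `any()` resolve the ambiguity either way.

```python
import numpy as np

v = np.array([1, 2, 3])
w = np.array([1, 2, 3])

elementwise = v == w          # array([ True,  True,  True]) -- not a scalar

# bool() on a multi-element array raises ValueError: it could mean either
# "all elements equal" or "any element equal", and NumPy refuses to guess.
try:
    bool(elementwise)
    raised = False
except ValueError:
    raised = True

all_equal = elementwise.all()   # the "for all i" semantics the poster wants
any_equal = elementwise.any()   # the other possible reading
```

So `if (v == w).all():` spells out the quantifier that mathematical notation leaves implicit, which is exactly why NumPy declines to pick one automatically.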
<urn:uuid:a4ef3610-dc3b-456f-85b2-cba15f8d45c3>
2.859375
185
Comment Section
Software Dev.
76.748571
The two species P. pemaquidensis and P. aestuarina have a surface coat that consists of tiny discrete bodies termed "glycostyles". The glycostyles are organic (not mineralized), cover the entire cell surface, and appear to be formed in the Golgi. They may sometimes appear to be hexagonal in outline, and are seldom taller than broad. Some but not all strains of P. pemaquidensis and P. aestuarina also have hairlike filaments that arise from the cell membrane.

No surface coat has been found in the parasitic species P. invadens and P. perniciosa.

PARASOME. The parasome is bounded by two membranes. Within these membranes there are three distinct structures that are also bounded by two membranes: two peripheral bodies and one central body. These are the structures within the parasome that show the positive reaction to DNA-binding dyes and fluorochromes. There is a limited amount of cytoplasm within the parasome that is not contained within either the peripheral or the central bodies.

OTHER ORGANELLES. As far as is known, all species of Paramoeba have one nucleus per cell. The nucleus has a single, centrally located, essentially … Each cell has several sausage-shaped, unbranched mitochondria. The cristae are tubular. The finger-like pseudopodia ("dactylopodia") may have a central core of fine fibrils. Such fibrils are not commonly observed in other amoebae. The composition and function of these fibrils is not known. Some species, P. eilhardi especially, harbor endosymbiotic bacteria.
<urn:uuid:c28f992a-cfb5-40b1-a856-d6b856ee4f1e>
3.5625
422
Knowledge Article
Science & Tech.
25.060395
As much as any biome or global ecoregion is a challenge to group, differentiate or otherwise generalize, the chaparral or Mediterranean woodlands (scrubland/heathland/grassland) biome may be the best example of such classification difficulties. There's perhaps more general agreement regarding the features of this biome, even if the name tends to change from author to author. Many texts will not even include this biome in their list of major regions, instead making a small reference to it in the section regarding deserts. However, these areas, considering their combined territory, contain about 20 percent of the world's species of plants, many of them endemic gems found nowhere else. On the flipside, due to the often environmentally heterogeneous nature of this biome, organisms that are prominent, integral members of other biome classifications are found in the chaparral as well. For the sake of consistency in this post, I'll continue to refer to this biome as chaparral, as incomplete a descriptive designation as that may be. Specifically, chaparral biomes exist in five major regions: South Africa, South/Southwest Australia, Southwestern California/Mexico, Central Chile and in patches wrapped around the Mediterranean Sea, including Southern Europe and Northern Africa. These regions are unified by their hot, dry summers and mild winters, referred to as an archetypal Mediterranean climate at approximately 40 degrees north and south. The vast majority of rainfall usually comes with the cold fronts of winter. Annually, chaparral can experience anywhere from 250 mm of rain all the way up to 3000 mm in isolated subregions like the west portion of Fynbos in South Africa. Plants in chaparral areas tend to be sclerophyllous (Greek: "hard-leaved"), meaning the leaves are evergreen, tough and waxy.
This adaptation allows plants to conserve water in an area where rainfall is discontinuous, but probably evolved to compensate for the low levels of phosphorus in ancient weathered soils, particularly in Australia where there have been relatively few volcanic events to reestablish nutrients over millions of years. Obviously, these plants also happen to do very well during the xeric summers of the chaparral where drought is always a threat. Because of the aridity and heat, the chaparral plant communities are adapted to and often strategically dependent on fire. Evolutionary succession scenarios constructed by scientists typically point to fire as one of the major factors that created much of the chaparral areas in Australia and South Africa from Gondwanaland rainforest. (Fire ecology really deserves at least a post of its own, which I'd like to discuss given the time in the future.) Some of the regions in the chaparral are exceptional. In South Africa, the area known as the Fynbos constitutes its own floristic region (phytochorion) among phytogeographers, the Cape Floristic Region. While it is the smallest of these floral kingdoms, it contains some 8500 species of vascular plants, 70 percent of which are endemic. The marsh rose (Orothamnus zeyheri) is one of the standout specimens of the group, as is the national flower of South Africa, the King protea (Protea cynaroides). P. cynaroides is a "resprouter" in its fire-prone habitat, growing from embedded buds in a subterranean, burl-like structure. Another endemic species, the Cape sugarbird, is shown feeding on a King protea below. There is one unique threat to the chaparral: anthropogenic fire. In the past, if nature had not provided a fire to burn back the accumulated brush in these areas, often the native peoples would do so, and generally speaking, the fires seemed to be controlled and effective.
But increased frequency of fires due to negligence or downed power lines can potentially cause catastrophic, unrecoverable fire. Only so much tolerance to such a destructive force can be built by evolutionary processes.
<urn:uuid:25710142-9e32-420e-b000-9d4875de38e6>
3.9375
834
Personal Blog
Science & Tech.
26.414638
Accessing an element in a document

Every HTML document is actually present in layers. According to the hierarchy, we can reach any of them as needed. Beginning with some concepts, we will eventually move to a simple example to illustrate those points.

document.getElementById() helps us access any particular ID present on the page directly, without any reference to its parents or any other relatives. However, in many situations, if we want to access a div that is anonymous, or to do so through some automated code, we need to use other hierarchy relationships.

One of the most powerful and broad ways of collecting all the tags present on the page is document.getElementsByTagName(). A tag refers to everything on the page that starts with "<" and ends with ">", e.g. html, body, a, img, etc. This method returns an array of all the elements on the page with the given tag name. Often, using a filter is advisable, depending on the use.

Suppose we want to impose some condition on the links on our website. For this we need to collect all the elements on our page with the a (link) tag:

document.getElementsByTagName('a')

As a result we have a list of all the links in our document. Now we can impose any condition on them, as we wish.

Let's begin HTML DOM traversing with a simple example:

<body>
<ol id='ol1'>
<li id='li1'> <span id='span1'> This is Text 1 </span> </li>
<li id='li2'> <span id='span2'> This is Text 2 </span> </li>
<li id='li3'> <span id='span3'> This is Text 3 </span> </li>
</ol>
</body>

Suppose we want to refer to the id ol1 in the above example; it can be done in a number of ways:

document.getElementById("ol1")
document.getElementById("li1").parentNode
document.getElementsByTagName("ol").item(0)
document.getElementsByTagName("li").item(0).parentNode
document.getElementsByTagName("li").item(1).parentNode
document.getElementsByTagName("li").item(2).parentNode
document.getElementsByTagName("span").item(2).parentNode.parentNode
document.getElementsByTagName("body").item(0).childNodes.item(0)
document.body.childNodes.item(0)

(Note: the last two lines assume there is no whitespace text node before the <ol>; in real documents, whitespace between tags also creates child nodes.)

Basically, DOM traversing can be very useful both for the user and the admin of a website. A user can use this to access data from some other site by looking into the source code of the website, while the admin of the website can use it to update or place contents on any particular page of the website. In fact, Ajax uses this to place the updated contents at the correct place.

For example, suppose we want to read the contents of span3 in the above example. It can be done by:

We will discuss node properties and methods in the next chapter. For some more methods of the DOM, let's move forward.
<urn:uuid:61ccc462-f8a0-4d17-93cd-79059831a30b>
3.671875
669
Documentation
Software Dev.
52.274378
The user-mode kernel is a port of the Linux kernel to the Linux system call interface rather than to a hardware interface. The code that implements this is under the arch interface, which is the internal kernel interface which separates architecture-independent code from architecture-dependent code. This kernel is a full Linux kernel, lacking only hardware-specific code such as drivers. It runs the same user space as the native kernel. Processes run natively until they need to enter the kernel. There is no emulation of user space code. Processes running inside it see a self-contained environment. They have no access to any host resources other than those explicitly provided to the virtual machine.
<urn:uuid:ad107903-e5b9-4abe-8e9b-3c40caa5d0c2>
2.84375
137
Documentation
Software Dev.
32.190625
In 1985 it was discovered that Pluto has an atmosphere, albeit a very tenuous one. Pluto's atmosphere arises only when it approaches closer to the Sun during its highly eccentric orbit, which takes 248 Earth years. The atmosphere likely consists of nitrogen, methane, and carbon monoxide, which sublimate directly from Pluto's frozen surface. As Pluto's orbit moves it away from the Sun, these gases are believed to slowly precipitate back onto Pluto's surface.

Copyright © Walter Myers. All rights reserved.
<urn:uuid:58c8ed2a-9fb8-4aa0-9e4a-1dc26bb693dc>
3.4375
118
Knowledge Article
Science & Tech.
41.76391
But I have a confession to make: I was too optimistic. My projections about increasing global temperature have been proved true. But I failed to fully explore how quickly that average rise would drive an increase in extreme weather. In a new analysis of the past six decades of global temperatures, which will be published Monday, my colleagues and I have revealed a stunning increase in the frequency of extremely hot summers, with deeply troubling ramifications for not only our future but also for our present. This is not a climate model or a prediction but actual observations of weather events and temperatures that have happened. Our analysis shows that it is no longer enough to say that global warming will increase the likelihood of extreme weather and to repeat the caveat that no individual weather event can be directly linked to climate change. To the contrary, our analysis shows that, for the extreme hot weather of the recent past, there is virtually no explanation other than climate change. The deadly European heat wave of 2003, the fiery Russian heat wave of 2010 and catastrophic droughts in Texas and Oklahoma last year can each be attributed to climate change. And once the data are gathered in a few weeks' time, it's likely that the same will be true for the extremely hot summer the United States is suffering through right now. These weather events are not simply an example of what climate change could bring. They are caused by climate change. The odds that natural variability created these extremes are minuscule, vanishingly small. To count on those odds would be like quitting your job and playing the lottery every morning to pay the bills. Twenty-four years ago, I introduced the concept of "climate dice" to help distinguish the long-term trend of climate change from the natural variability of day-to-day weather. Some summers are hot, some cool. Some winters brutal, some mild. That's natural variability.
But as the climate warms, natural variability is altered, too. In a normal climate without global warming, two sides of the die would represent cooler-than-normal weather, two sides would be normal weather, and two sides would be warmer-than-normal weather. Rolling the die again and again, or season after season, you would get an equal variation of weather over time. But loading the die with a warming climate changes the odds. You end up with only one side cooler than normal, one side average, and four sides warmer than normal. Even with climate change, you will occasionally see cooler-than-normal summers or a typically cold winter. Don’t let that fool you. Our new peer-reviewed study, published by the National Academy of Sciences, makes clear that while average global temperature has been steadily rising due to a warming climate (up about 1.5 degrees Fahrenheit in the past century), the extremes are actually becoming much more frequent and more intense worldwide. When we plotted the world’s changing temperatures on a bell curve, the extremes of unusually cool and, even more, the extremes of unusually hot are being altered so they are becoming both more common and more severe. The change is so dramatic that one face of the die must now represent extreme weather to illustrate the greater frequency of extremely hot weather events. Such events used to be exceedingly rare. Extremely hot temperatures covered about 0.1 percent to 0.2 percent of the globe in the base period of our study, from 1951 to 1980. In the last three decades, while the average temperature has slowly risen, the extremes have soared and now cover about 10 percent of the globe. This is the world we have changed, and now we have to live in it — the world that caused the 2003 heat wave in Europe that killed more than 50,000 people and the 2011 drought in Texas that caused more than $5 billion in damage.
Such events, our data show, will become even more frequent and more severe. There is still time to act and avoid a worsening climate, but we are wasting precious time. We can solve the challenge of climate change with a gradually rising fee on carbon collected from fossil-fuel companies, with 100 percent of the money rebated to all legal residents on a per capita basis. This would stimulate innovations and create a robust clean-energy economy with millions of new jobs. It is a simple, honest and effective solution. The future is now. And it is hot.
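The “loaded dice” picture above can be illustrated with toy numbers (my own illustrative figures, not the study’s actual analysis, which also involves a widening of the distribution): shifting a bell curve warmer by one standard deviation multiplies the area in the far hot tail many times over.

```python
import math

def upper_tail(threshold_sigma, mean_shift=0.0):
    # P(X > threshold) for a unit-variance normal whose mean has been
    # shifted by mean_shift (everything measured in units of sigma)
    z = threshold_sigma - mean_shift
    return 0.5 * math.erfc(z / math.sqrt(2.0))

base = upper_tail(3.0)                     # ~0.13% beyond +3 sigma: the old rarity of "extremely hot"
shifted = upper_tail(3.0, mean_shift=1.0)  # ~2.3% after a one-sigma warm shift
```

A one-sigma shift alone inflates the +3σ tail roughly seventeen-fold; reaching the ~10 percent coverage reported for recent decades also requires the distribution to broaden, as the study describes.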
<urn:uuid:bb008525-b4ef-4677-aba8-6d0941b32a9f>
3.109375
910
Nonfiction Writing
Science & Tech.
45.5905
Re: Energy and global warming Reply to Paul Ward Many thanks again for helping to get my ideas sorted out. The anthropogenic energy, after use by mankind, enters the Earth's atmospheric system as heat, i.e. as increased kinetic energy of the air molecules. This, of course, does not cause internal molecular excitation, and so no radiation can occur, even from the GHG components. Since the Earth is isolated in space, no energy can escape into space by conduction or convection, and so the anthropogenic energy is retained within the Earth's system, and builds up over time and so causes global warming. (See below). This extra energy in the atmosphere is circulated by the normal currents towards the poles in the usual way, where it causes extra ice to melt, as calculated. Naturally, some ice reforms during the winter, so releasing its latent heat into the system again, but a greater amount is melted during the following summer because the latent heat just liberated is still available in the system and yet more anthropogenic energy has also been injected during the intervening months. If no energy entered the actual surface of the Earth, land or sea, after leaving the polar region in the Northern hemisphere in the usual way, there would be sufficient remaining energy to raise the temperature of the atmosphere in the Northern hemisphere by 1.8 degC. However, let us assume that enough energy enters the surface to make the actual temperature rise of the atmosphere only 0.6 degC, in line with the practical observation over the last 150 years. Then the amount that enters the surface can indeed be radiated away (apart from the effect of the 'pre-industrial' GHG effect), but the increase in temperature of the surface required to achieve this is less than 0.1 degC owing to the fourth power temperature dependency of the Stefan-Boltzmann law. Such a small rise would be virtually impossible to detect with present techniques.
The factor of 2 figure I gave for melting of the ice comes from the latest energy information I could find and applies to 2003. This amount can melt over 1300 Gt of ice in one year, whereas the best practical figure I have is less than 600 Gt, where 1 Gt is 1 thousand million metric tons, which leaves a lot of energy over to warm the atmosphere. However, the comparison I made in my paper for the Arctic sea ice was for a 25 year period from 1978 to 2003, for which energy production data was available and for which practical observations happened to have been made. Aubrey E Banner, Sale, Cheshire, UK
<urn:uuid:eb02ae4f-a569-4de4-86df-c61661a0b5c0>
2.6875
531
Comment Section
Science & Tech.
44.860723
public member function
void push ( const T& x );

Adds a new element at the end of the queue, after its current last element. The content of this new element is initialized to a copy of x. This member function effectively calls the member function push_back of the underlying container object.

x - Value to be copied to the new element. T is the first template parameter (the type of the elements stored in the queue).

Example:

#include <iostream>
#include <queue>
using namespace std;

int main ()
{
  queue<int> myqueue;
  int myint;

  cout << "Please enter some integers (enter 0 to end):\n";

  do {
    cin >> myint;
    myqueue.push (myint);
  } while (myint);

  cout << "myqueue contains: ";
  while (!myqueue.empty())
  {
    cout << " " << myqueue.front();
    myqueue.pop();
  }
  cout << '\n';

  return 0;
}

The example uses push to add new elements to the queue, which are then popped out in the same order.

See also:
pop - Delete next element (public member function)
size - Return size (public member function)
<urn:uuid:ea786362-8f17-4750-a59a-3a274c1e3b32>
3.3125
199
Documentation
Software Dev.
53.387949
Hendrik Antoon Lorentz Lorentz, Hendrik Antoon (hĕnˈdrək änˈtōn lōˈrĕnts), 1853–1928, Dutch physicist, a pioneer in formulating the relations between electricity, magnetism, and light. He was one of the first to postulate the existence of electrons. On this he based his explanation of the Zeeman effect (a change in spectrum lines in a magnetic field), for which he shared with Pieter Zeeman the 1902 Nobel Prize in Physics. He extended the hypothesis of G. F. Fitzgerald, an Irish physicist, that the length of a body contracts as its speed increases (see Lorentz contraction), and he formulated the Lorentz transformation, by which space and time coordinates of one moving system can be correlated with the known space and time coordinates of any other system. This work influenced, and was confirmed by, Einstein's special theory of relativity. Lorentz also discovered (1880), simultaneously with L. V. Lorenz of the Univ. of Copenhagen, the relations (known as Lorentz-Lorenz relations) between the refraction of light and the density of a translucent body. He was professor (1878–1912) at the Univ. of Leiden and director from 1912 of the Teyler laboratory, Haarlem. His works in English include The Theory of Electrons (1909) and Problems of Modern Physics (1927). See his collected papers (9 vol., 1934–39); study ed. by G. L. de Haas-Lorentz (tr. 1957). The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
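The Lorentz transformation mentioned above can be written in modern notation (for two frames in relative motion with speed v along the x-axis):

```latex
x' = \gamma\,(x - vt), \qquad
t' = \gamma\left(t - \frac{vx}{c^{2}}\right), \qquad
y' = y, \qquad z' = z,
\qquad \text{where } \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
```

The same factor γ gives the Lorentz contraction: a rod of rest length L, moving at speed v along its length, measures L/γ.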
<urn:uuid:31d2ea58-a7cb-4929-8c06-5c288aa79d06>
3.640625
380
Knowledge Article
Science & Tech.
56.685207
Posted Monday, January 14, 2008 There are some 5,700 species of amphibians on the planet, and over the last twenty years, about a third of those species have experienced some serious decline. We're talking about extinctions and rapid declines too large to be explained away by natural cycles. Scientists say these population problems may point to a similar future for other animals--even humans. We'll talk about this cold and slimy world Monday morning at 9 a.m. Photo courtesy Cleveland Metroparks Zoo Environment, Community/Human Interest
<urn:uuid:2e0deab1-2df4-4fce-b7ca-ed901c153ecc>
2.8125
122
Truncated
Science & Tech.
40.176333
IMAGINE weighing yourself with a watch. In theory, that's now possible thanks to a clock with a tick that depends on the mass of a single atom. More practically, the clock could help efforts to redefine the kilogram in terms of fundamental constants. The most accurate clocks are atomic clocks, which measure how often electrons in an atom such as caesium jump between two energy levels. Roughly 9 billion of these transitions equal 1 second. But there is another way to count time using an atom. Quantum theory says that all matter exists as a wave, as well as a particle. This means each particle has a frequency, which can act as the tick of a clock. As the frequency depends on the atom's mass, the clock could in principle be used to weigh things, but the frequency itself is too high to count. So instead a team led by Holger Müller at the University of California, Berkeley, split the wave of a caesium atom in two, keeping one half stationary and the other moving. Einstein's theory of special relativity means time passes slower for the moving wave, so when the two were re-combined, they were out of phase. Müller's team were able to measure this phase difference, which, like the atomic frequency, also depends on the mass of the atom. The result was the tick of the first mass clock (Science, doi.org/j7j). This new kind of clock might make an expected, more precise definition of the kilogram easier to use. The scientific standard for mass is currently defined by a lump of metal, but its mass is drifting so the plan is to replace this with a definition based on fundamental constants. A device is still needed to measure physical masses, though. The current favourite is called a watt balance, but this is limited to macroscopic objects. The new mass clock provides a way to measure microscopic masses, too.
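The "too high to count" frequency is the Compton frequency f = mc²/h, a standard quantum-mechanical relation not spelled out in the article. A quick estimate for a caesium atom (rounded constants) shows why no electronics could ever count it directly:

```python
# Back-of-envelope: the raw "tick" of a matter-wave clock is the
# Compton frequency f = m * c^2 / h, far too fast to count.
h = 6.62607e-34      # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
m_cs = 2.20695e-25   # mass of a caesium-133 atom, kg

f_compton = m_cs * c**2 / h
print(f"{f_compton:.2e} Hz")  # roughly 3e25 Hz
```

Compare that with the ~9.2 GHz caesium hyperfine transition of an ordinary atomic clock: the Compton tick is about fifteen orders of magnitude faster, which is why Müller's team had to measure a phase difference between split waves instead of counting cycles.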
<urn:uuid:e0971673-5364-4414-b9bb-37866e2965a9>
3.71875
525
Truncated
Science & Tech.
52.028804
The PostgreSQL source code is stored and managed using the Git version control system. A public mirror of the master repository is available; it is updated within a minute of any change to the master repository. Our wiki, http://wiki.postgresql.org/wiki/Working_with_Git, has some discussion on working with Git. Note that building PostgreSQL from the source repository requires reasonably up-to-date versions of bison, flex, and Perl. These tools are not needed to build from a distribution tarball since the files they are used to build are included in the tarball. Other tool requirements are the same as shown in Chapter 14. With Git you will make a copy of the entire code repository on your local machine, so you will have access to all history and branches offline. This is the fastest and most flexible way to develop or test patches. You will need an installed version of Git, which you can get from http://git-scm.com. Many systems already have a recent version of Git installed by default, or available in their package distribution system. To begin using the Git repository, make a clone of the official mirror: git clone git://git.postgresql.org/git/postgresql.git This will copy the full repository to your local machine, so it may take a while to complete, especially if you have a slow Internet connection. The files will be placed in a new subdirectory postgresql of your current directory. The Git mirror can also be reached via the HTTP protocol, if for example a firewall is blocking access to the Git protocol. Just change the URL prefix to http, as in: git clone http://git.postgresql.org/git/postgresql.git The HTTP protocol is less efficient than the Git protocol, so it will be slower to use. Whenever you want to get the latest updates in the system, cd into the repository, and run: git pull Git can do a lot more things than just fetch the source. For more information, consult the Git man pages, or see the website at http://git-scm.com.
<urn:uuid:f4c58a25-62c1-464d-9be3-8b98fc8b1504>
2.78125
447
Tutorial
Software Dev.
57.733611
Systematics of Acacia Our aim is to discover the phylogenetic relationships within the Australian Acacia clade in order to produce a predictive, phylogeny-based classification. Acacia in the strict sense (formerly Acacia subgenus Phyllodineae) is the second largest genus of legumes (after Astragalus) and the largest genus in subfamily Mimosoideae (syn. Mimosaceae). This is even after the segregation of Acacia in the broad sense into five genera based on recent phylogenetic research. Phylogenetic analyses of molecular and morphological data show that Acacia in the strict sense is a single evolutionary lineage (monophyletic), almost completely confined to Australia. Few plant clades dominate a continent as extensively as Acacia. Recent estimates of species numbers range from 986 species to about 1,045 species, when undescribed taxa are included. Acacia is widespread within Australia and forms a dominant component of many vegetation classes, particularly in Australia's large arid and semi-arid zone. Acacia is commonly divided into seven sections, characterised by combinations of easily recognised macro-morphological characters. Results of molecular phylogenetic studies have demonstrated that the traditional sections are largely non-monophyletic and, as such, there is an acute need for an infrageneric classification of Acacia based on phylogenetic relationships. Our studies have recently identified four main clades in Acacia based on nuclear ribosomal DNA: - A clade, resolved as sister to all other taxa in Acacia, including species related to A. victoriae and A. pyrifolia - The Pulchelloidea clade, which includes members of sections Pulchellae, Alatae, Phyllodineae and Lycopodiifoliae. Gillian Brown is currently analysing a large sample of taxa, which are putatively placed in this clade, as part of an Australian Research Council (ARC) funded Linkage grant. Sampling has been undertaken in consultation with morphological information provided by Bruce Maslin.
- A third clade comprises taxa in the A. murrayana species group. These taxa occur predominantly in semi-arid and arid regions. - A fourth clade consists of a diverse assemblage of phyllodinous Acacia species. We have dubbed this the 'p.u.b. clade', to include all those taxa with 'plurinerved' phyllodes (in sections Juliflorae and Plurinerves), some uninerved phyllodinous taxa (sect. Phyllodineae) and eastern Australian bipinnate taxa (sect. Botrycephalae). Recently, Honours student James Kidman analysed the phylogeny of the phyllodinous Acacia species that occur outside Australia. He discovered that all extra-Australian taxa are nested within Acacia, although not as a separate lineage, but in four clades. These findings, among others, suggest a connection between taxa in the Indian Ocean (A. heterophylla) and Hawaii (A. koa and relatives). This research is currently being prepared for publication. World Wide Wattle [http://www.worldwidewattle.com] is an excellent source of further information on Acacia. Our research group is also currently carrying out studies on the Mimosoid legume tribe Ingeae. Recent results indicate that the closest relative to Australian Acacia in the strict sense is Paraserianthes lophantha, a member of the tribe Ingeae and a widespread weed, with a disjunct natural distribution in Western Australia and Indonesia. - Daniel Murphy (Royal Botanic Gardens Melbourne) - Gillian Brown (The University of Melbourne) - Pauline Ladiges (The University of Melbourne) - Bruce Maslin (Department of Environment and Conservation, Western Australia) - Joe Miller (CSIRO Plant Industry, Canberra) - Australian Research Council - Australian Plants Society Maroondah Group Brown, G.K., Murphy, D.J., Kidman, J. and Ladiges, P.Y. (2012). Phylogenetic connections of phyllodinous species of Acacia outside Australia are explained by geological history and human-mediated dispersal. Australian Systematic Botany 25, 390–403.
Gibson, M.R., Richardson, D.M., Marchante, E., Marchante, H., Rodger, J.G., Stone, G.N., Byrne, M., Fuentes-Ramírez, A., George, N., Harris, C., Johnson, S.D., Roux, J.J.L., Miller, J.T., Murphy, D.J., Pauw, A., Prescott, M.N., Wandrag, E.M. and Wilson, J.R.U. (2011). Reproductive biology of Australian acacias: important mediator of invasiveness? Diversity and Distributions 17, 911–933. Miller, J.T., Murphy, D.J., Brown, G.K., Richardson, D.M. and González-Orozco, C.E. (2011). The evolution and phylogenetic placement of invasive Australian Acacia species. Diversity and Distributions 17, 848–860. Wilson, J.R.U., Gairifo, C., Gibson, M.R., Arianoutsou, M., Bakar, B.B., Baret, S., Celesti-Grapow, L., DiTomaso, J.M., Dufour-Dror, J.-M., Kueffer, C., Kull, C.A., Hoffmann, J.H., Impson, F.A.C., Loope, L.L., Marchante, E., Marchante, H., Moore, J.L., Murphy, D.J., Tassin, J., Witt, A., Zenni, R.D. and Richardson, D.M. (2011) Risk assessment, eradication, and biological control: global efforts to limit Australian acacia invasions. Diversity and Distributions 17, 1030–1046. Murphy, D.J., Brown, G.K., Miller, J.T. and Ladiges, P.Y. (2010). Molecular phylogeny of Acacia s.s. (Mimosoideae: Leguminosae) – evidence for major clades and informal classification. Taxon 59, 7–19. Maslin, B.R. and Murphy, D.J. (2009). A taxonomic revision of Acacia verniciflua and A. leprosa (Leguminosae: Mimosoideae) in Australia. Muelleria 27, 183–223. Brown, G.K. (2008). Systematics of the tribe Ingeae (Leguminosae-Mimosiodeae) over the last 25 years. Muelleria 26, 27–42. Brown, G.K., Murphy, D.J., Miller, J.T. and Ladiges, P.Y. (2008). Acacia s.s. and its relationship amongst tropical legumes, tribe Ingeae. Systematic Botany 33, 739–751. Murphy, D.J. (2008). A review of the classification of Acacia (Leguminosae, Mimosoideae). Muelleria 26, 10–26. Reid, J.C. and Murphy, D.J. (2008). Some case studies of Acacia as weeds and implications for herbaria. Muelleria 26, 57–66. 
Acacia inaequilatera racemes showing reddish-coloured raceme axes and buds Acacia spondylophylla in the Pilbara region of Western Australia Acacia colei var. colei, a member of section Juliflorae, flower spike and plurinerved phyllode Last updated 13 Feb 2013
<urn:uuid:5cde77d3-582a-433b-b697-4790c78cd9d4>
3.359375
1,709
Academic Writing
Science & Tech.
46.079683
There was a minor kerfuffle in recent days over claims by Tim Flannery (author of “The Weather Makers”) that new information from the upcoming IPCC synthesis report will show that we have reached 455 ppmv CO2_equivalent 10 years ahead of schedule, with predictable implications. This is confused and incorrect, but the definitions of CO2_e, why one would use it and what the relevant level is, are all highly uncertain in many peoples’ minds. So here is a quick rundown. Definition: The CO2_equivalent level is the amount of CO2 that would be required to give the same global mean radiative forcing as the sum of a basket of other forcings. This is a way to include the effects of CH4 and N2O etc. in a simple way, particularly for people doing future impacts or cost-benefit analysis. The equivalent amount is calculated using the IPCC formula for CO2 forcing: Total Forcing = 5.35 ln(CO2_e/CO2_orig), where CO2_orig is the 1750 concentration (278 ppmv) and ln is the natural logarithm. Usage: There are two main ways it is used. Firstly, it is often used to group together all the forcings from the Kyoto greenhouse gases (CO2, CH4, N2O and CFCs), and secondly to group together all forcings (including ozone, sulphate aerosols, black carbon etc.). The first is simply a convenience, but the second is what matters to the planet. Many stabilisation scenarios, such as are being discussed in UNFCCC negotiations, are based on stabilising total CO2_e at 450, 550 or 750 ppmv. Magnitude: The values of CO2_e (Kyoto) and CO2_e (Total) can be calculated from Figure 2.21 and Table 2.12 in the IPCC WG1 Chapter 2. The forcing for CO2, CH4 (including indirect effects), N2O and CFCs is 1.66+0.48+0.07+0.16+0.34=2.71 W/m2 (with around 0.3 W/m2 uncertainty). Using the formula above, that gives CO2_e (Kyoto) = 460 ppmv. However, including all the forcings (some of which are negative), you get a net forcing of around 1.6 W/m2, and a CO2_e (Total) of 375 ppmv with quite a wide error bar.
This is, coincidentally, close to the actual CO2 level. Implications: The important number is CO2_e (Total), which is around 375 ppmv. Stabilisation scenarios of 450 ppmv or 550 ppmv are therefore still within reach. Claims that we have passed the first target are simply incorrect; however, that is not to say the targets are easily achievable. It is even more of a stretch to state that we have all of a sudden gone past the ‘dangerous’ level. It is still not clear what that level is, but if you take a conventional 450 ppmv CO2_e value (which will lead to a net equilibrium warming of ~ 2 deg C above pre-industrial levels), we are still a number of years from that, and we have (probably) not yet committed ourselves to reaching it. Finally, the IPCC synthesis report is simply a concise summary of the three separate reports that have already come out. It therefore can’t be significantly different from what is already available. But this is another example where people are quoting from draft reports that they have neither properly read nor understood and for which better informed opinion is not immediately available. I wish journalists and editors would resist the temptation to jump on leaks like this (though I know it’s hard). The situation is confusing enough without adding to it unintentionally.
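The arithmetic above can be checked directly by inverting the forcing formula (note that it uses the natural logarithm):

```python
import math

F0 = 278.0  # pre-industrial (1750) CO2 concentration, ppmv

def forcing(co2_e, co2_orig=F0):
    # IPCC simplified expression: F = 5.35 * ln(C / C0), in W/m2
    return 5.35 * math.log(co2_e / co2_orig)

def co2_equivalent(total_forcing, co2_orig=F0):
    # inverse: CO2_e = C0 * exp(F / 5.35)
    return co2_orig * math.exp(total_forcing / 5.35)

kyoto = co2_equivalent(2.71)  # Kyoto-gas forcing  -> ~460 ppmv
total = co2_equivalent(1.6)   # net total forcing  -> ~375 ppmv
```

Running the numbers reproduces the post's figures: a 2.71 W/m2 Kyoto-gas forcing corresponds to roughly 460 ppmv CO2_e, while the net total forcing of about 1.6 W/m2 gives roughly 375 ppmv.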
<urn:uuid:ca39979d-051f-48f3-9272-62b94fdab467>
3.015625
797
Comment Section
Science & Tech.
60.367418
A different approach to solar thermal electricity generation is the solar pond, which uses a large salty lake as a kind of flat plate collector. If the lake has the right gradient of salt concentration (salty water at the bottom and fresh water at the top) and the water is clear enough, solar energy is absorbed at the bottom of the pond. The hot, salty water cannot rise, because it is heavier than the fresh water at the top. The upper layers of the water act as an insulating blanket, and the temperature at the bottom of the pond can reach 90 degrees C. This is a high enough temperature to run an organic Rankine cycle (ORC) engine or Stirling engine. However, the thermodynamic limitations of the relatively low temperatures mean low solar-to-electricity conversion efficiencies, typically less than 2%. Even so, megawatt-scale electrical output is achievable from a lake of around 20 hectares. The largest operating solar pond for electricity generation was the Beit HaArava pond built in Israel and operated up until 1988. It had an area of 210,000 m² and gave an electrical output of 5 MW. The first solar pond in India (6000 sq. metres) was built at Bhuj. The project was sanctioned under the National Solar Pond Programme by the Ministry of Non-conventional Energy Sources in 1987 and completed in 1993 after a sustained collaborative effort by TERI, the Gujarat Energy Development Agency, and the GDDC (Gujarat Dairy Development Corporation Ltd). The solar pond successfully demonstrated the expediency of the technology by supplying 80,000 litres of hot water daily to the plant. The Energy and Resources Institute provided all technical inputs and took up the complete execution of research, development, and demonstration. TERI operated and maintained this facility until 1996 before handing it over to the GDDC. The solar pond functioned effortlessly till the year 2000, when severe financial losses crippled GDDC.
Subsequently, the Bhuj earthquake left the Kutch Dairy non-functional. Learn more about the Bhuj India Solar Pond. The 0.8 acre solar pond powering 20% of Bruce Foods Corporation’s operations in El Paso, Texas is the second largest in the U.S. It is also the first ever salt-gradient solar pond in the U.S. A natural example of these effects in a saline water body is Solar Lake, Sinai, Israel. The energy obtained is in the form of low-grade heat of 70 to 80 °C compared to an assumed 20 °C ambient temperature. According to the second law of thermodynamics (see Carnot cycle), the maximum theoretical efficiency of a heat engine supplied with heat from the pond is: 1-(273+20)/(273+80)=17%. By comparison, a power plant’s heat engine delivering high-grade heat at 800 °C would have a maximum theoretical limit of 73% for converting heat into useful work (and thus would be forced to divest as little as 27% in waste heat to the cold temperature reservoir at 20 °C). The low efficiency of solar ponds is usually justified with the argument that the ‘collector’, being just a plastic-lined pond, might potentially result in a large-scale system that is of lower overall levelised energy cost than a solar concentrating system.

Advantages:
- The large thermal mass of the system acts as a heat store, and electricity generation can proceed 24 hours a day.
- The best use of solar ponds may be to generate heat for desalinization plants, creating enough fresh water to maintain themselves and provide a supply of drinking water.

Disadvantages:
- Large amounts of fresh water are required to maintain the salt gradient.
- Large bodies of water in desert areas are few in number, so locations are difficult to find.
- Solar ponds are not viable at higher latitudes, since the solar collection surface area (the bottom of the pond) needs to be flat, not tilted.
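The two Carnot figures quoted above follow from the standard efficiency limit, η = 1 − T_cold/T_hot (temperatures in kelvin); a minimal sketch:

```python
def carnot_efficiency(t_hot_c, t_cold_c=20.0):
    # Maximum theoretical heat-engine efficiency between two reservoirs,
    # with temperatures given in degrees Celsius and converted to kelvin.
    t_hot = t_hot_c + 273.0
    t_cold = t_cold_c + 273.0
    return 1.0 - t_cold / t_hot

pond = carnot_efficiency(80.0)    # solar pond at 80 C  -> ~17%
plant = carnot_efficiency(800.0)  # power plant at 800 C -> ~73%
```

Real engines fall well short of these limits, which is why actual solar ponds convert less than 2% of incident sunlight to electricity.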
<urn:uuid:1f7122d9-1492-4e65-aaa7-59d08943e341>
3.828125
809
Knowledge Article
Science & Tech.
46.646091
Stenoplax heathiana Berry, 1946
Stenoplax heathiana, San Simeon, CA. Head and plate 1 are to the left. (Photo by: Dave Cowles, 1997)
How to Distinguish from Similar Species: The wide plate 8 distinguishes it from most. Stenoplax chitons are narrower than most chitons for their length.
Geographical Range: Mendocino County, Northern CA to Puerto Santo Tomas, Baja CA. Not likely to be found north of California.
Depth Range: Middle and low intertidal
Habitat: Lives under rocks on open coast, especially rocks partly embedded in sand or gravel.
Biology/Natural History: Mostly nocturnal. Remains buried in sand on the undersides of rocks during the day. May still be exposed at dawn. Feeds on drift algae that lodge at the bases of the rocks. Named for Stanford professor Harold Heath, who made a detailed study of the development of this species in 1899.
Morris et al., 1980
<urn:uuid:f6449502-c1a5-4c48-b837-24277634f24d>
2.90625
241
Knowledge Article
Science & Tech.
50.074837
The Moving Earth
Even though we cannot sense it, the Earth is constantly in motion. In fact, the Earth isn't just moving, it's moving really fast! The fact that we do not feel as if we are moving led people to believe that the Earth was the stationary center of the universe for centuries. Eventually, astronomers discovered that the only way to explain all of the motions they observed in the sky was if the Earth were really moving. From their observations of the sky, they realized that there are two components to the Earth's motion: the Earth is orbiting the Sun, and the Earth is spinning like a top.
<urn:uuid:5e629255-cddf-4f49-ac12-ad91c636ad49>
3.171875
546
Content Listing
Science & Tech.
71.905916
If we suppose Saturn is a perfect sphere, the shadows are the intersections of a sphere and a set of concentric elliptical cylinders formed by parallel lines passing through the rings. (The lines converge to the Sun.) These can be found by solving the equations, resulting in space curves with parametric form being the declination of Saturn. Since we take the orbit of Saturn around the Sun to be circular and 26.75° as its axial tilt, the declination at time is equal to The ring radii are approximated by a middle third Cantor set with three iterations between the radii 1.12 and 2.3. These correspond to the radii of the D and F rings (approx. 67,000 km and 140,000 km) if we set Saturn's equatorial radius (approx. 60,000 km) equal to 1. Other interesting images and information can be found in "The Seasons of Saturn" at "Astronomy Picture of the Day" from July 2, 2001. Snapshot 1: the ring shadows at northern vernal equinox (last occurrence August 2009) Snapshot 2: the ring shadows at northern summer solstice (next occurrence May 2017) Snapshot 3: the ring shadows at northern autumnal equinox (next occurrence in 2024) Snapshot 4: the ring shadows at northern winter solstice (next occurrence in 2039) As of the end of 2011, the "today" button will show only a slight shift of the shadows toward the south. It is only some 820 days past vernal equinox (August 10, 2009) within a Saturn year of 10,759 days.
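The middle-third Cantor construction of the ring radii can be sketched as follows (my own illustrative code, not the Demonstration's source): each iteration removes the middle third of every interval, so three iterations between 1.12 and 2.3 leave eight thin annuli standing in for the rings.

```python
def cantor_intervals(lo, hi, depth):
    # Intervals remaining after `depth` iterations of the
    # middle-third Cantor construction on [lo, hi].
    intervals = [(lo, hi)]
    for _ in range(depth):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3.0
            nxt.append((a, a + third))      # keep left third
            nxt.append((b - third, b))      # keep right third
        intervals = nxt
    return intervals

rings = cantor_intervals(1.12, 2.3, 3)  # 8 annuli in units of Saturn's equatorial radius
```

Each resulting annulus has radial width (2.3 − 1.12)/27 ≈ 0.044 Saturn radii.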
<urn:uuid:d746ee41-b180-471f-86ad-d89efa851fad>
3.765625
350
Knowledge Article
Science & Tech.
65.167738
K-type main-sequence star A K-type main-sequence star (K V), also referred to as an orange dwarf or K dwarf, is a main-sequence (hydrogen-burning) star of spectral type K and luminosity class V. These stars are intermediate in size between red M-type main-sequence stars and yellow G-type main-sequence stars. They have masses from 0.6 to 0.9 times the mass of the Sun and surface temperatures between 3,900 and 5,200 K (Habets & Heintze 1981, Tables VII, VIII). Better known examples include Alpha Centauri B (K1 V) and Epsilon Indi. These stars are of particular interest in the search for extraterrestrial life because they are stable on the main sequence for a very long time (15 to 30 billion years, compared to 10 billion for the Sun). This may create an opportunity for life to evolve on terrestrial planets orbiting such stars. Orange dwarfs are about three to four times as abundant as sun-like stars, making planet searches easier. Spectral Standard Stars The revised Yerkes Atlas system (Johnson & Morgan 1953) listed 12 K-type dwarf spectral standard stars; however, not all of these have survived to this day as standards. The "anchor points" of the MK classification system among the K-type main-sequence dwarf stars, i.e. those standard stars that have remained unchanged over the years, are Sigma Draconis (K0 V), Epsilon Eridani (K2 V), and 61 Cygni A (K5 V). Other primary MK standard stars include 107 Piscium (K1 V), HD 219134 (K3 V), TW Piscis Austrini (K4 V), HD 120467 (K6 V), and 61 Cygni B (K7 V). There are not yet any generally agreed upon K8 or K9 dwarf standard stars. Based on the example set in some references (e.g. Johnson & Morgan 1953, Keenan & McNeil 1989), many authors consider the step between K7 V and M0 V to be a single subdivision, and one rarely encounters K8 or K9 classifications in the literature.
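As a rough illustration of why K dwarfs live so long (this scaling is a textbook approximation, not taken from this article): main-sequence lifetime goes as fuel over luminosity, t ∝ M/L, and with L ∝ M^3.5 this gives t ≈ t_Sun (M/M_Sun)^−2.5, which lands in the same ballpark as the 15 to 30 billion year figure quoted above.

```python
def ms_lifetime_gyr(mass_solar, t_sun_gyr=10.0):
    # Crude main-sequence lifetime from t ~ M/L with L ~ M^3.5,
    # normalised to ~10 Gyr for the Sun. Illustrative only.
    return t_sun_gyr * mass_solar ** -2.5

low = ms_lifetime_gyr(0.9)   # upper end of the K-dwarf mass range -> ~13 Gyr
high = ms_lifetime_gyr(0.6)  # lower end of the K-dwarf mass range -> ~36 Gyr
```

The power law is only approximate (the mass-luminosity exponent varies across the main sequence), but it captures why even modestly less massive stars burn for far longer than the Sun.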
See also
- Solar twin
- Red dwarf
- Stellar classification, Class K
- Star count, survey of stars
- Habitability of orange dwarf stars
References
- A Modern Mean Stellar Color and Effective Temperature (Teff) Sequence for O9V-Y0V Dwarf Stars, E. Mamajek, 2011, website
- Empirical bolometric corrections for the main-sequence, G. M. H. J. Habets and J. R. W. Heintze, Astronomy and Astrophysics Supplement 46 (November 1981), pp. 193–237.
- SIMBAD, entries for Alpha Centauri B and Epsilon Indi, accessed on line June 19, 2007.
- Fundamental stellar photometry for standards of spectral type on the revised system of the Yerkes spectral atlas, H. L. Johnson & W. W. Morgan, 1953, Astrophysical Journal, 117, 313.
- MK Anchor Points, Robert F. Garrison.
- The Perkins Catalog of Revised MK Types for the Cooler Stars, P. C. Keenan & R. C. McNeil, Astrophysical Journal Supplement Series 71 (October 1989), pp. 245–266.
- Bejan, A., The process of melting by rolling contact, Int. J. Heat Mass Transf. (UK), vol. 31, no. 11, pp. 2273-83 [0017-9310(88)90159-7]. (last updated on 2007/04/08)
Describes the fundamentals of the heat transfer melting process occurring in the narrow rolling-contact region between a body of phase-change material and a solid body that acts as a heater. The theory is based on the following assumptions: (i) the phase-change material is at the melting-point temperature, (ii) the surface of the solid is isothermal, (iii) the effect of frictional heating in the liquid gap is negligible, (iv) the peripheral length of the liquid region is much smaller than the radius of the roller, (v) the liquid-gap region is slender, and (vi) the effect of surface tension is negligible. The general solution constructed in this manner relates the mechanical loading of the roller (normal force, tangential force, applied torque) to the angular speed of the roller, the temperature difference between the heater and the phase-change material, and the thermophysical properties of the liquid phase. Simpler calculation procedures are developed for two special applications: (a) the melting of a cylinder mounted freely on its axle, and (b) the melting of a turning cylinder whose axle is stationary relative to the heater surface.
What is a modular build? - A modular build consists of a set of modules, - each of which can be built separately. - Each module's build produces some output (a directory tree). - A module may depend on the outputs of other modules, but it can't reach inside the others' build trees. - There is a common interface that each module provides for building itself. - The build tool can be replaced with another. The description of the module set is separate from the modules themselves. What is a non-modular, monolithic build? - A monolithic build consists of one big build tree. - Any part of the build can reference any other part via relative filenames. - It might consist of multiple checkouts from version control, but they have to be checked out to specific directory tree locations (as in the Chromium build). Some examples of modular builds: - Build systems/tools: - Debian packages (and presumably RPMs too) - Module interfaces: - GNU autotools (./configure && make && make install) - Python distutils (setup.py) - Software collections: - Xorg (7.0 onwards) - XFree86 (and Xorg 6.9): Before Xorg was modularised, there was a big makefile that built everything, from Xlib to the X server to example X clients. - Chromium web browser: This uses a tool called "gyp" to generate a big makefile which compiles individual source files from several libraries, including WebKit, V8 and the Native Client IPC library. It ignores WebKit's own build system. - Native Client: One SCons build builds the core code as well as the NPAPI browser plugin and example code; it needs to know how to cross-compile NaCl code as well as compile host system code. Another makefile builds the compiler toolchain from tarballs and patch files that are checked into SVN. - CPython: The standard library builds many Python C extensions. Modular build systems offer a number of advantages: - You can download and build only the parts you need. 
This can be a big help if some modules are huge but seldom change while the modules you work on are small and fast to build. - Some systems (such as Debian packages) give you binary packages so you don't need to build the dependencies of the modules that you want to work on. JHBuild doesn't provide this but it could be achieved with a little work. - Dependencies are clearer. - External interfaces are clearer too. - It is possible to change one module's version independently of other modules (to the extent that differing versions are compatible). - They are relatively easy to use in a decentralised way. It is easy to create a new version of a module set which adds or removes modules. - You don't have to check huge dependencies into your version control system. Some projects check in monster tarballs or source trees, which dwarf the project's own code. If you avoid this practice you will make it easier for distributions to package your software. The two categories can coexist: Each module may internally be a monolithic build which can be arbitrarily complex. Autotools is an example of that. This is not too bad because at least we have contained the complexity within the module. The layer on top, which connects modules together, can be relatively simple. Despite its faults, autotools is very amenable to being part of a modular build: - The build tree does not need to be kept around after doing "make install". - Output can be directed using "--prefix=foo" and "make install DESTDIR=foo". - Inputs can be specified via --prefix and PATH and other environment variables. - The build tree can be separate from the source tree. It's easy to have multiple build trees with different build options. The systems I listed as modular all have their own problems. The main problem with Debian packages is that they are installed system-wide, which requires root access and makes it difficult to install multiple versions of a package. 
It is possible to work around this problem using chroots. JHBuild, Zero-Install and Nix avoid this problem. JHBuild and Zero-Install are not so good at capturing immutable snapshots of package sets. Nix is good at capturing snapshots, but Nix makes it difficult to change a library without rebuilding everything that uses it. Despite these problems, these systems have a nice property: they are layered. It is possible to mix and match modules and replace the build layer. Hence it is possible to build Xorg and GNOME either with JHBuild or as Debian packages. In turn, there is a choice of tools for building Debian source packages. There is even a tool for making sets of Debian packages from JHBuild module descriptions. These systems do not interoperate perfectly, but they do work and scale. There are some arguments for having a monolithic system. In some situations it is difficult to split pieces of software into separately-built modules. For example, Plash-glibc is currently built by symlinking the source for the Plash IPC library into the glibc source tree, so that glibc builds it with the correct compiler flags and with the glibc internal header files. Ideally the IPC library would be built as a separate module, but for now it is better not to. Still, if you can find good module boundaries, it is a good idea to take advantage of them.
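Since the description of the module set is separate from the modules themselves, a build driver only needs the dependency graph to schedule builds. A minimal sketch of that scheduling step (the module names echo the Xorg example above; the code is illustrative, not JHBuild's actual implementation):

```python
from graphlib import TopologicalSorter

# Hypothetical module-set description: module -> modules it depends on.
# Mirrors the Xorg example: Xlib must be built before the X server,
# and the example clients depend on both.
module_set = {
    "xlib": set(),
    "xserver": {"xlib"},
    "clients": {"xlib", "xserver"},
}

# A driver builds modules in dependency order; each module's build only
# sees the *outputs* of its dependencies, never their build trees.
build_order = list(TopologicalSorter(module_set).static_order())
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which is exactly the error a modular build tool should report instead of looping.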
The Solar System consists of the Sun and those celestial objects bound to it by gravity: the eight planets and five dwarf planets, their 173 known moons, and billions of small bodies. The small bodies include asteroids, icy Kuiper belt objects, comets, meteoroids, and interplanetary dust. The charted regions of the Solar System comprise the Sun, four terrestrial inner planets, the asteroid belt, four gas giant outer planets, and finally the Kuiper belt and the scattered disc. The hypothetical Oort cloud may also exist at a distance roughly a thousand times beyond these regions. The solar wind, a flow of plasma from the Sun, permeates the Solar System, creating a bubble in the interstellar medium known as the heliosphere, which extends out to the middle of the scattered disc. In order of their distances from the Sun, the eight planets are: - Mercury (57,900,000 km) - Venus (108,000,000 km) - Earth (150,000,000 km) - Mars (228,000,000 km) - Jupiter (779,000,000 km) - Saturn (1,430,000,000 km) - Uranus (2,880,000,000 km) - Neptune (4,500,000,000 km) As of mid-2008, five smaller objects are classified as dwarf planets, all but the first of which orbit beyond Neptune. These are: - Ceres (415,000,000 km, in the asteroid belt; formerly classed as the fifth planet) - Pluto (5,906,000,000 km, formerly classified as the ninth planet) - Haumea (6,450,000,000 km) - Makemake (6,850,000,000 km) - Eris (10,100,000,000 km) Six of the planets and three of the dwarf planets are orbited by natural satellites, usually termed "moons" after Earth's Moon. Each of the outer planets is encircled by planetary rings of dust and other particles.
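For scale, the distances listed above convert readily to astronomical units. The AU value below is the standard constant (1 AU = 149,597,871 km); the rounded kilometre figures are the ones from the list:

```python
AU_KM = 149_597_871  # one astronomical unit in kilometres

distances_km = {
    "Mercury": 57_900_000, "Venus": 108_000_000, "Earth": 150_000_000,
    "Mars": 228_000_000, "Jupiter": 779_000_000, "Saturn": 1_430_000_000,
    "Uranus": 2_880_000_000, "Neptune": 4_500_000_000,
}

# Express each planetary distance in AU:
distances_au = {name: km / AU_KM for name, km in distances_km.items()}
# Earth comes out at ~1.0 AU and Neptune at ~30 AU, as expected.
```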
MPG/ESO 2.2-m Telescope, La Silla Observatory, ESO
Larger false-color image of the ACO or Abell 3627 supercluster of galaxies, which lies near the core of the Great Attractor.
A huge volume of space that includes the Milky Way and superclusters of galaxies is no longer thought to be flowing towards a largely unseen mass called the Great Attractor, but rather past it, towards the Shapley Supercluster behind it (more). In late 2005, a team of astronomers engaged in an X-ray survey called the Clusters in the Zone of Avoidance (CIZA) project revealed that the Milky Way is being drawn not towards a concentration of mass called the Great Attractor but towards an even more massive region behind it called the Shapley Supercluster, which lies around 500 million light-years away, or four times the distance to the Great Attractor. Over the two decades since the discovery of the Great Attractor, subsequent observations at infrared wavelengths indicated that the Milky Way is not, in fact, being drawn towards the Great Attractor. Indeed, the CIZA team reported that the Great Attractor actually has only about a tenth of the mass that was originally estimated (IFA press release; Maggie McKee, New Scientist, December 15, 2005; Kocevski and Ebeling, 2005; and Kocevski et al, 2005). The Shapley Supercluster lies behind the Great Attractor and is much more massive, with the equivalent of nearly 10,000 Milky Ways, or four times the currently observed mass of the Great Attractor (Kocevski et al, 2005a and 2005b) (more).
Region around the Mostly Unseen Mass
In the 1980s, a group of astronomers known as the "Seven Samurai" (David Burstein, Roger Davies, Alan Dressler, Sandra Faber, Donald Lynden-Bell, Roberto J. Terlevich, and Gary Wegner) found that galaxies are very unevenly distributed in space, with galactic superclusters separated by incredibly huge voids of visible ordinary matter.
The Great Attractor is one such structure, a diffuse concentration of matter some 400 million light-years in size located around 250 million light-years (ly) away in the direction of the southern constellation Centaurus, about seven degrees off the plane of the Milky Way, at a redshift-distance of 4,350 kilometers (around 2,700 miles) per second. It lies in the so-called Zone of Avoidance, where the dust and stars of the Milky Way's disk obscure as much as a quarter of Earth's visible sky.
C. Kraan-Korteweg and Ofer Lahav, Scientific American (October 1998): pp. 50-57 (Permission being sought)
The Great Attractor is located in a region of the universe that is obscured from observers in the Solar System by the dust of the Milky Way's disk (more).
The Great Attractor is apparently pulling in millions of galaxies in a region of the universe that includes the Milky Way, the surrounding Local Group of 15 to 16 nearby galaxies, the larger Virgo Supercluster, and the nearby Hydra-Centaurus Supercluster, at velocities ranging from around 600 kilometers per second (for the Local Group) to thousands of kilometers per second (Lynden-Bell et al, 1988; and Dressler et al, 1987). Based on the observed galactic velocities, the unseen mass inhabiting the voids between the galaxies and clusters of galaxies is estimated to total around 10 times more than the visible matter in this region of the universe, and so must be composed mostly of dark matter. Estimates of the mass of the Great Attractor, originally set at around 5.4 × 10^16 solar masses, have been revised sharply downward due to subsequent infrared and X-ray studies. Galaxies located on the other side of the Great Attractor are no longer thought to be pulled in its direction (Kocevski and Ebeling, 2005; Kocevski et al, 2005; and Renée C. Kraan-Korteweg, 2000).
[More discussion of how the sheets, filaments, and stars of the early universe developed from quantum fluctuations and the gravitational condensation of dark and ordinary matter can be found in First Stars.]
C. Kraan-Korteweg, Scientific American (October 1998): pp. 50-57 (Permission being sought)
Millions of galaxies may be moving towards the Great Attractor, including the Virgo and the Hydra-Centaurus superclusters of galaxies.
Core of the Great Attractor
The core of the Great Attractor lies within the so-called "Centaurus Wall" of galaxies. From the perspective of observers in the Solar System, this Great Wall-like structure is viewed edge-on (Woudt and Kraan-Korteweg, 2000). The intersection of the Centaurus Wall and the Great Attractor includes the Norma Cluster or Supercluster -- ACO 3627, Abell 3627, or A3627 (Woudt et al, 2000, 1999a, and 1999b). Indeed, the Milky Way, the Local Group, and the surrounding Virgo Supercluster, as well as the Hydra-Centaurus Supercluster, appear to be part (or at least "appendages") of the sheet of ordinary and dark matter that forms the Centaurus Wall. [More discussion and color images of the Virgo Supercluster and the Centaurus Wall are available from Professor Anthony P. Fairall's lecture on "Large-Scale Structures in the Universe."]
© Anthony P. Fairall, An Atlas of Nearby Large-scale Structures, from Large-scale Structures in the Universe (Used with permission)
The core of the Great Attractor ("A3627" at left) lies within the "Centaurus Wall" of galaxies (more maps and discussion in Large-scale Structures in the Universe).
The Great Attractor's core region appears to be dominated by the Norma Supercluster, a highly obscured, nearby, and massive group of galaxies close to the plane of the Milky Way (Woudt et al, 1999a and 1999b; Kraan-Korteweg et al, 1996; and Patrick Alan Woudt, 1998 PhD thesis).
In the absence of the obscuring effects of the Milky Way, the Norma Supercluster would appear as prominent as the well-known Coma Cluster or Supercluster, but nearer in redshift-space. Indeed, spectroscopic observations support the idea that the Norma Supercluster is the dominant component of a "Great Wall"-type structure, comparable in size, richness, and mass to Coma in the northern part of the Great Wall (Woudt et al, 2000; and 1997). [The Norma Supercluster is also associated with a "Finger of God" effect in plots of galactic redshift velocities when viewed from the perspective of the Solar System.]
and Patricia A. Henning, 1997 (PASA, 14:1) (Permission being sought)
Larger image with x-ray contours.
The core of the Great Attractor appears to include the Norma Supercluster of galaxies (more).
The overdensity of galaxies in the region of the Norma Supercluster was first detected in the 1980s. Although astronomers have since observed a large excess of galaxies with optical and infrared telescopes in this region, no dominant cluster or central peak has been identified. This strongly suggests that a significant fraction of the Great Attractor's overdensity could still be obscured by the Milky Way, possibly in another rich cluster of galaxies around the strong radio source PKS 1343-601 (Woudt and Kraan-Korteweg, 2000). Up-to-date technical summaries on the Great Attractor may be available at: NASA's ADS Abstract Service for the Astrophysics Data System; the SIMBAD Astronomical Database mirrored from CDS, which may require an account to access; and the NSF-funded arXiv.org physics e-print archive's search interface.
© 2003-2009 Sol Company. All Rights Reserved.
Explosions at units 1 and 3 occurred due to similar causes. When an incident occurs in a nuclear power plant such as a loss of coolant accident or when power is lost, usually the first response is to depressurize the reactor. This is done by opening pressure relief valves on the reactor vessel. The water/steam mixture will then flow down into the suppression pool, which for this design of reactor is in the shape of a torus (the technical term for a donut shape). By blowing the hot steam into the suppression pool, some of the steam is condensed to the liquid phase, which helps keep the pressure low in the containment. The pressure in the reactor vessel is reduced by venting the water/steam mixture. It is much easier to pump water into the vessel when it is at a reduced pressure, thus making it easier to keep the fuel cooled. This procedure was well underway after the earthquake. Unfortunately, because of the enormous magnitude of the earthquake, an equally large tsunami was created. This tsunami disabled the onsite diesel generators as well as the electrical switchyard. Without power to run pumps and remove heat, the temperature of the water in the reactor vessel began to rise. With the water temperature rising in the core, some of the water began to vaporize and eventually uncovered some of the fuel rods. The fuel rods have a layer of cladding material made of a zirconium alloy. If zirconium is hot enough and is in the presence of oxygen (the steam provides the oxygen), then it can undergo a reaction that produces hydrogen gas. Hydrogen at concentrations above 4% is highly flammable when mixed with oxygen; however, not when it is also in the presence of excessive steam. As time went on, the pressure in the containment rose to a much higher level than usual. The containment represents the largest barrier to the release of radioactive elements to the environment and should not be allowed to fail at any cost.
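The hydrogen source described above is the zirconium–steam reaction, Zr + 2 H2O → ZrO2 + 2 H2. A rough stoichiometric sketch of the gas volumes involved, assuming an idealized complete reaction and ideal-gas volume at 0 °C (the quantities here are illustrative, not figures from the original report):

```python
M_ZR = 91.224          # g/mol, molar mass of zirconium
MOLAR_VOLUME_L = 22.414  # L/mol for an ideal gas at 0 °C, 1 atm

def hydrogen_from_zirconium(kg_zr):
    """Moles of H2 produced if kg_zr of cladding fully reacts with steam:
    Zr + 2 H2O -> ZrO2 + 2 H2, i.e. two moles of H2 per mole of Zr."""
    mol_zr = 1000.0 * kg_zr / M_ZR
    return 2.0 * mol_zr

mol_h2 = hydrogen_from_zirconium(1.0)    # per kilogram of cladding
litres_h2 = mol_h2 * MOLAR_VOLUME_L      # ~490 L of H2 at STP per kg of Zr
```

Even this idealized estimate shows why a partially uncovered core can release enough hydrogen to form an explosive mixture once the diluting steam condenses.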
The planned response to an event like this is to vent some of the steam to the atmosphere, just to keep the pressure under control. Exactly what happened next is not verified; however, the following is very likely the general explanation for the explosion. It was decided to vent the steam through some piping that led to a space above and outside containment, but inside the reactor building. At this point, the steam and hydrogen gas were mixed with the air in the top of the reactor building. This was still not an explosive mixture because large amounts of steam were mixed with the hydrogen and oxygen (from the air). However, the top of this building is significantly colder than inside the containment due to the weather outside. This situation would lead to some of the steam condensing to water, thereby concentrating the hydrogen and air mixture. This likely went on for an extended period of time, and at some point an ignition source (such as a spark from powered equipment) set off the explosion that was seen in units 1 and 3. The top of the reactor building was severely damaged; however, the containment structure showed no signs of damage. Right after the explosions there were spikes in the radiation levels detected, because there were some radioactive materials in the steam. When the zirconium alloy cladding reacted to make hydrogen, it released some fission products. The vast majority of the radioactive materials in the fuel will remain in the fuel. However, some of the fission products are noble gases (xenon, Xe and krypton, Kr) and will immediately leave the fuel rods when the cladding integrity is compromised. Fortunately, Xe and Kr are not a serious radiological hazard because they are chemically inert and will not react with humans or plants. Additionally, small quantities of iodine (I) and cesium (Cs) can be entrained with the steam. 
When the steam was vented to the reactor building, the Xe and Kr would have followed, as well as some small amounts of I and Cs. Thus, when the roof of the reactor building was damaged, these radionuclides that were in the reactor building would have also been released. This is the reason a sudden spike was seen in radiation levels. These heightened radiation levels quickly decreased. This is because there was no damage to the containment, which would have increased the quantities of radionuclides released, and because the radionuclides released during the explosion quickly decayed away or dispersed. Unit 2 explosion Recent information indicates that unit 2 may have suffered a containment breach. Pressure relief of unit 2 was complicated by a faulty pressure relief valve, which hampered the injection of sea water and the venting of the steam and hydrogen. It is reported that the fuel rods were completely exposed twice. More details to follow. Unit 4 fire A fire was reported at unit 4, which was in a shutdown state during the earthquake and tsunami for a planned outage. Latest reports indicate that the fire was put out. More details to come.
A new technique measures each unit of charge that accumulates on a submerged plastic bead, unprecedented resolution for a liquid-solid interface and an experiment that may benefit a variety of commercial devices and processes.
An improved version of a technique for folding tiny objects from a thin membrane uses a magnetic field to affect the shape. The membrane wraps around a droplet of fluid that distorts in response to the field.
Phys. Rev. Focus 28, 3 (2011) – Published July 18, 2011
The structures within a pile of soil or grain that allow it to bear weight depend only on the average number of neighbors for each particle, not on any details of the types of particles or even on the presence of gravity.
Phys. Rev. Focus 26, 11 (2010) – Published September 10, 2010
Simulated soils made of glass beads and various pastes dry at different rates, depending on the properties of their smallest particles. The work suggests new ways to study an aspect of soil that is critical for agriculture.
Phys. Rev. Focus 25, 20 (2010) – Published May 28, 2010
Drops of water striking a bed of grains can leave a wide range of crater shapes and sometimes a bigger impression at low and high impact speeds than at medium speeds. The work may help geoscientists identify ancient formations.
The U.S. central plains region -- nicknamed Tornado Alley -- suffers the highest frequency of tornadoes in the world [source: Tarbuck]. Many of these twisters leave death, injury and destruction in their wake, but one stands in a class by itself. Sweeping out from southeastern Missouri on March 18, 1925, the Tri-State Tornado careened across the southern tip of Illinois before dissipating in lower Indiana. Remarkably, these three locales lie 219 miles (352 kilometers) apart, and the tornado traveled this distance in just three and a half hours [source: SEMP]. Typical tornadoes measure 500 to 2,000 feet (150 to 600 meters) wide and move at a speed of about 30 mph (45 kph). Generous estimates suggest they travel an average of 6 miles (10 kilometers) before petering out [source: Tarbuck]. The Tri-State Tornado achieved an average speed of 62 mph (100 kph) and topped out at 73 mph (117 kph). It covered more than 36 times as much ground as an average tornado. Some eyewitnesses reported its path as nearly a mile wide [source: NOAA]. Scientists today wonder if the Tri-State Tornado instead might have been a family of tornadoes spawned by a massive supercell storm, which would account for both its extremity and for the remarkably straight path it followed for 183 of its 219 miles [source: NOAA]. All told, the EF5 storm killed 695 people, 234 of whom lived in the town of Murphysboro, Ill., thereby setting the grim record for the most fatalities incurred by a tornado in a single U.S. city. In total, 2,027 people sustained injuries from the tornado's passage, and 15,000 homes were destroyed. Entire towns were obliterated [source: SEMP]. Next, let's look at a more recent storm that the world won't soon forget.
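The figures quoted above are internally consistent, as a quick check of the arithmetic shows (the inputs are the article's own numbers):

```python
distance_mi = 219   # path length of the Tri-State Tornado
duration_h = 3.5    # three and a half hours on the ground

avg_speed_mph = distance_mi / duration_h
# ≈ 62.6 mph, matching the reported 62 mph average

path_ratio = distance_mi / 6
# ≈ 36.5, matching the claim of ~36 times the average ~6-mile path
```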
WHAT IS PANTHER? The PANTHER device uses immune cells altered to act as detectors of dangerous biological agents. The device takes in air and runs it past the cells, which are gathered into groups, each engineered to respond to a specific pathogen. When the cells detect their target, they release photons of light, and photodetectors register this release to identify the pathogen that was found. Based on the wavelengths of light released, the device outputs a list of the dangerous pathogens detected, about three minutes after the test begins. This report has also been produced thanks to a generous grant from the Camille and Henry Dreyfus Foundation, Inc.
OLR - Outgoing Longwave Radiation
ISR - Incoming Shortwave Radiation
TOA - Top Of the Atmosphere
GCC - Global Cloud Cover
GHG - Greenhouse Gas
CO2 - Carbon Dioxide
PDO - Pacific Decadal Oscillation
AMO - Atlantic Multidecadal Oscillation
IOD - Indian Ocean Dipole
QBO - Quasi-Biennial Oscillation
You have probably heard every single part of the upcoming catastrophe from the media- that it is a fact, proven and proven again, that humans are driving the current climate, and that natural factors are very small compared to the human forcing. But this is not true. Let's start off with an indirect proof. If Carbon Dioxide were driving the Climate, then we would see a reduction in overall OLR, since Greenhouse Gases are poised to create a reduction in OLR- if they are the drivers of the Climate. However, what we can see from the observational data is that OLR has actually increased during the time we were warming. What we can see is that OLR has increased by 11 w/m^2 since the beginning of the satellite era. According to the Climate Models, we should have seen a reduction in OLR at the TOA, due to GHGs trapping more and more of the OLR. But we haven't. Not at all. This was shown in Lindzen and Choi's 2009 and 2010 papers. (LINK
Here, we can see, as I explained earlier, that climate Models forecasted a downward trend in OLR at the TOA due to increased GHGs trapping OLR. We can see that reality shows that OLR has increased with temperature. What does this all tell us? It tells us that the warming is occurring through an increase in ISR, since if ISR were not increasing while OLR was going up, we would experience cooling, since the energy leaving Earth would hypothetically be greater than the energy reaching Earth. The only possible factor that could cause an increase in ISR and an increase in OLR is decreasing Cloud Cover. Decreasing Cloud Cover allows for more ISR to reach the Earth's Surface, but it also allows for more OLR to escape into space.
However, since Cloud Cover overall reflects more ISR than it traps OLR, if all clouds were to be removed, an extra 17 w/m^2 would be added to Earth's Energy Budget. From Climate4you.com: "The overall reflectance (albedo) of planet Earth is about 30 percent, meaning that about 30 percent of the incoming shortwave solar radiation is radiated back to space. If all clouds were removed, the global albedo would decrease to about 15 percent, and the amount of shortwave energy available for warming the planet surface would increase from 239 W/m2 to 288 W/m2 (Hartmann 1994). However, the longwave radiation would also be affected, with 266 W/m2 being emitted to space, compared to the present 234 W/m2 (Hartmann 1994). The net effect of removing all clouds would therefore still be an increase in net radiation of about 17 W/m2. So the global cloud cover has a clear overall cooling effect on the planet, even though the net effects of high and low clouds are opposite (see figure above). This is not a purely theoretical consideration, but is demonstrated by observations (see diagram below)." So we now know that it is impossible for CO2 to be driving the Climate, for the reasons expressed above. But how do the CAGW Proponents reach the conclusions that they do? Well, often they will show this graph, which depicts a model of the Anthropogenic vs. Natural Forcings. Note that according to the model, natural factors could not possibly explain the temperature increase, because natural factors significantly diverge from observed data in 1979. But as already shown above, the model got the OLR vs. Temperature component completely wrong, which shows that the models are misinterpreting something. But what is it? A paper was published in February 2010 that shows that Climate Models may be underestimating Clouds' role as a negative feedback by a factor of 4.
(LINK
The implication of this optical depth bias that owes its source to biases in both the LWP and particle sizes is that the solar radiation reflected by low clouds is significantly enhanced in models compared to real clouds. This reflected sunlight bias has significant implications for the cloud-climate feedback problem. The consequence is that this bias artificially suppresses the low cloud optical depth feedback in models by almost a factor of four and thus its potential role as a negative feedback.
The models, which all catastrophic statements are based on, have gotten the Cloud Feedback completely mixed up. We know that they have gotten it mixed up because they got the Temperature vs. OLR component completely wrong. But is CO2 causing a very small portion of the warming? Without any feedbacks, CO2 does cause warming. However, in these next calculations, we will see how much CO2 would have contributed to the current Global Warming without any climatic feedbacks. Two Solar Scientists found that Clouds have contributed an extra 7 w/m^2 of energy to Earth's Energy Budget over a 21-year timeframe. (LINK
According to the IPCC, Carbon Dioxide causes 1.4 w/m^2 of energy to be added to Earth's Energy Budget over a 104-year timespan. To get the effect that CO2 has had over this 21-year timeframe, you multiply the 1.4 w/m^2 by .2, since that is the value of 21 divided by 104. You get .28 w/m^2. Divide that by 7 w/m^2 to get the percentage that CO2 has contributed to the current Global Warming. You get 4%. Assuming that CO2 and Clouds are, hypothetically, the only drivers of the Climate, CO2 contributed only .014 Degrees C to the .35 Degree C warming since 1979. Factor in Feedbacks, the PDO, AMO, IOD, QBO, Ozone Depletion due to Volcanism, and the Solar AA index, and you can see how small a role CO2 plays in the overall Climate System. Its effects are not even measurable. With these effects factored in, the effect of CO2 would be significantly less than 4%.
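The post's arithmetic can be reproduced directly. The inputs below are the blog's own cited numbers and attribution assumptions, reproduced for transparency rather than independently verified:

```python
co2_forcing_104yr = 1.4    # w/m^2 over 104 years (IPCC figure cited above)
cloud_forcing_21yr = 7.0   # w/m^2 over 21 years (cited cloud-cover figure)

co2_forcing_21yr = co2_forcing_104yr * (21 / 104)   # ≈ 0.28 w/m^2
co2_share = co2_forcing_21yr / cloud_forcing_21yr   # ≈ 0.04, i.e. 4%
co2_warming = co2_share * 0.35                      # ≈ 0.014 °C of 0.35 °C
```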
So in conclusion, natural drivers are dominating the current climate change and will continue to do so in the future.
<urn:uuid:6d9efad2-e27d-45ce-ba0c-b2d0f6cfd3d9>
2.828125
1,389
Comment Section
Science & Tech.
57.083464
For educational purposes only; do not review, quote or abstract:
Information on the basics of Entomology
An Introduction To The Study of Entomology
Kingdom: Animalia, Phylum: Arthropoda, Subphylum: Hexapoda, Class: Insecta, Order: Mallophaga

All species are apterous, although it is believed that they lost their wings in evolution, as evidenced by the thoracic sclerites. There is a gradual metamorphosis. The various families of biting lice are confined to definite groups of birds, indicating that the evolution of the parasites has proceeded along with that of their bird hosts. The common hen-louse, Menopon pallidum, is an example. The head is semicircular in form and articulates with a prothorax that is freely movable on the rest of the body. A tagma is formed by the fusion of the meso- and metathorax with the abdomen. The mouth is situated ventrally on the head and surrounded by biting mandibles and the less prominent 1st and 2nd maxillae.

All stages occur on the host and reproduction is continuous. Although birds are the primary hosts, Mallophaga are also found on mammals occasionally. Birds that have become infested often exhibit the habit of "dusting", which cuts down on the number of lice. High infestations will cause a loss of weight and a lowering of egg production in fowl, whereas small birds are often killed. Humans are never attacked. When Mallophaga occur on birds they possess two claws, while on mammals only one claw is present. Eggs are laid separately on feathers or hairs and the life cycle is completed in about a month, the young instars resembling the adult in form and habit. They are spread very rapidly through bodily contact. They crawl on the ground during the day and return to their host at night. Mallophaga used to be controlled by dusting their poultry hosts with insecticides.
Restrictions on such practices for public health reasons have made it exceedingly difficult to control these insects. When poultry are raised for the production of eggs and meat it is best to corral them in open fenced yards on the ground. In this way the birds are able to dust themselves with soil and thereby reduce louse infestations significantly. Such operations are not always economically practical, however, because of the additional space required and the difficulty of harvesting their eggs.
<urn:uuid:e4d246b1-8bfd-4975-ab28-b20a607b3c90>
3.828125
539
Knowledge Article
Science & Tech.
36.142028
Chatham / Challenger Project
Zoning the ocean into areas that reflect biodiversity for management purposes is a complex task. To date, progress has been made using physical oceanic data, but information about sea-bed ecology is largely missing. The Chatham / Challenger project will map and compare habitats and the diversity of sea-bed communities at fishable depths at key locations across the Chatham Rise and the Challenger Plateau. These two areas have been chosen for the project because they provide a strong contrast in terms of plankton productivity, and their sea-bed communities are likely to mirror this. This is a joint project between MFish, DOC, NIWA and LINZ. The $4.7 million project contributes to New Zealand's Biodiversity Strategy. To balance protection of the marine environment with the use of its resources, it has become clear that we need more information about the biological systems that operate in the ocean. We already know about the distribution of fisheries and commercial fish stocks, and we know that deep-water seamounts have fragile ecosystems vulnerable to fishing. But we still know very little about the sea-bed communities found in soft sediments between 200 and 1200 metres deep - the depths and sediment types where our largest offshore fisheries are found (e.g. hoki, hake, ling and silver warehou). We need to know more about the role of soft-sediment communities in sustaining marine ecosystems and biodiversity, as well as their role in sustaining our fish resources. We also need to know how sensitive sea-bed communities are to disturbance. Mapping and characterising soft-sediment communities at fishable depths is a step towards understanding this. Information from the Chatham / Challenger project will:
Provide an important step towards mapping sea-bed habitats and communities on the Chatham Rise and Challenger Plateau.
Provide new information on the effects of trawling on soft-sediment sea-beds.
Be used to improve decision-making around the development of offshore Marine Protected Areas.
The first voyage, in 2006, acoustically mapped sea-bed habitats at key locations on the Chatham Rise and Challenger Plateau. The second voyage, in April 2007, explored the Chatham Rise locations using deep-sea cameras and sampled sea-bed communities using sea-bed sleds. The third voyage, in June 2007, is exploring the Challenger Plateau locations using deep-sea cameras and sampling sea-bed communities using sea-bed sleds. Preliminary maps of biodiversity and habitat types from this project will be available by mid-2008.
<urn:uuid:60203483-eeb4-4b06-a199-a3b19a3b9c2e>
3.140625
527
Knowledge Article
Science & Tech.
36.302906
by Lindsey's channel. Tags: chlorine atoms, hydrogen atoms, metal atoms

A covalent bond is a strong bond between two non-metal atoms. It consists of a shared pair of electrons. A covalent bond can be represented by a straight line, or by dots and crosses in a dot-and-cross diagram. Hydrogen and chlorine can each form one covalent bond, oxygen two bonds, nitrogen three, while carbon can form four bonds.

You'll need to understand what covalent bonding is - sharing electrons - and to remember some of the properties of molecules that are formed in this way. A covalent bond forms when two non-metal atoms share a pair of electrons. The electrons involved are in the highest occupied energy level, or outer shell, of the atoms, and each atom shares one of its electrons to complete its highest occupied energy level. Covalent bonds are strong and a lot of energy is needed to break them. Substances with covalent bonds often form molecules with low melting and boiling points, such as hydrogen and water.

For example, when covalent bonds form between hydrogen atoms and chlorine atoms, they make hydrogen chloride. After bonding, each atom has a share in the bonding pair of electrons.
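The valences mentioned in the clip (hydrogen and chlorine one bond each, oxygen two, nitrogen three, carbon four) can be used to sanity-check simple molecules. This is just an illustrative sketch; the `valence` table and the `shared_pairs` helper are not part of the video, and the count is only meaningful for simple molecules where every valence is satisfied by bonding.

```python
# Number of covalent bonds each atom typically forms, as stated above.
valence = {"H": 1, "Cl": 1, "O": 2, "N": 3, "C": 4}

def shared_pairs(atoms):
    """Total shared electron pairs if every atom's valence is satisfied.

    Each bond uses one valence from each of two atoms, so the number of
    bonds is half the summed valences (an integer for a valid molecule).
    """
    total = sum(valence[a] for a in atoms)
    if total % 2:
        raise ValueError("unpaired valence - not a simple covalent molecule")
    return total // 2

print(shared_pairs(["H", "Cl"]))        # 1 pair: hydrogen chloride, H-Cl
print(shared_pairs(["H", "H", "O"]))    # 2 pairs: water, H-O-H
print(shared_pairs(["H"] * 4 + ["C"]))  # 4 pairs: methane
```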
<urn:uuid:fdd17b56-9452-4b75-97da-1512c4b52fce>
3.671875
274
Truncated
Science & Tech.
61.284953
|Scientific Name:||Sousa chinensis (eastern Taiwan Strait subpopulation)| The eastern Taiwan Strait (ETS) subpopulation of Indo-Pacific humpbacked dolphins was only recently discovered (Wang et al. 2004a). Dolphins from this subpopulation have pigmentation that differs consistently from that of nearby subpopulations along the coast of mainland China (specifically those of western Taiwan Strait/Jiulong River Estuary (= Xiamen/Chinmen) and the Pearl River Estuary (=Hong Kong/Guangdong)) (Wang et al., in review, but also see Jefferson 2000, Jefferson and Hung 2004, Wang et al. 2007b). |Red List Category & Criteria:||Critically Endangered C2a(ii) ver 3.1| |Assessor/s:||Reeves, R.R., Dalebout, M.L., Jefferson, T.A., Karczmarski, L., Laidre, K., O’Corry-Crowe, G., Rojas-Bracho, L., Secchi, E.R., Slooten, E., Smith, B.D., Wang, J.Y. & Zhou, K.| |Reviewer/s:||Brownell Jr., R.L. & Cooke, J. (Cetacean Red List Authority)| The total population (all ages) was estimated at about 100 individuals in the mid-2000s and the extent of occurrence is only a small stretch of coastal waters off western Taiwan (estimated to be ca. 515 km2). Given the number of development projects that are underway or proposed, and the fact that only minimal or no conservation measures are in place to reduce the probable impacts of the various threats (e.g., bycatch in net fisheries, severe reduction of freshwater flow to estuaries, land reclamation), a continuing decline in the subpopulation is projected. Although there is no prospect of obtaining a long enough time series of data to show a decline over the last three generations (about 60 years; see Taylor et al. 2007), a decline almost certainly has occurred (at least since the beginning of Taiwan’s rapid industrialization about 30 years ago) and there is no reason to believe that the causes have stopped, or even slowed. 
Therefore, it is reasonable to project a continuing decline and this subpopulation meets criterion C2a(ii) for Critically Endangered (total of fewer than 250 mature individuals, projected continuing decline, and at least 90% of mature individuals in a single subpopulation). This subpopulation also may meet criterion D for CR because the total number of mature individuals may be close to (or fewer than) 50 (depending partly on the value used to estimate percent mature – 60% from Jefferson (2000) or 50% from Taylor et al. (2007)). The primary range of this subpopulation consists of coastal western Taiwan from the estuaries of the Houlong and Jhonggang rivers (Miaoli County) in the north to Waishanding Zhou (a large sandbar off Chiayi County) in the south (see Figure 1 in the attached PDF). However, one sighting of about 20 dolphins has been confirmed from the inshore waters of Tainan County and a dolphin, almost certainly a “stray,” was observed at the mouth of Fugang Harbour (Taitung County) where adjacent waters are deep and oceanic (i.e., clearly not the preferred habitat of this species). All sightings have been within 3 km off shore with the exception of the mud flats/littoral zone in the Changhua County, the central part of the distribution, where extensive oyster mariculture structures and associated activities likely exclude dolphins physically (Wang et al. 2007b). The distribution is linear, i.e. similar to that of a riverine species. Most of the dolphins in this subpopulation have been sighted in and around the two main estuaries of western Taiwan (Dadu and Joushuei rivers of Taichung, Changhua and Yunlin counties) (Wang et al. 2007a, b). Native:Taiwan, Province of China |FAO Marine Fishing Areas:|| Pacific – northwest |Range Map:||Click here to open the map viewer and explore range.| The subpopulation was estimated to number 99 individuals (CV=51.6%) in the mid-2000s (Wang et al. 2007a). By analogy with the Pearl River Estuary subpopulation of S. 
chinensis, mature individuals constitute about 60% of this subpopulation (Jefferson 2000), or about 60. Using a default value of 50 percent for this species (Taylor et al. 2007), however, would suggest only about 50 mature individuals. Almost all individually recognizable dolphins were novel in 2002 but by 2004, most had been photographed in previous years (Wang et al. 2007a); the catalogue of recognizable dolphins numbered fewer than 30 at that time (J.Y. Wang pers. comm., December 2007). |Habitat and Ecology:|| The ETS dolphins appear to be year-round residents of the coastal waters of central western Taiwan where dedicated surveys have resulted in sightings from April to August (Wang et al. 2007a). Opportunistic sightings have been made in other months; as of December 2007, the only months with no confirmed sightings were January, February and March, when conditions and opportunities for observations are poor (J.Y. Wang pers. comm., 13 December 2007). In late winter and early spring, grey mullet (Mugil cephalus) fishermen report seeing humpback dolphins near their nets (trammel and gill nets that are commonly used as encircling nets as well). Recreational shore fishermen report that the dolphins are seen most commonly in the winter months in the Dadu River estuary. Although reports by fishermen need to be viewed skeptically because of the possibility of misidentification, other species of dolphins are generally not present in the near-shore waters of western Taiwan so the chances of confusion are relatively small in this instance (Wang et al. 2007b). All sightings have been in waters less than 25 m deep, most in less than 15 m and within 3 km of shore. The few measurements of sea surface temperatures at sightings have varied from about 24 to 30°C (Wang et al. 2007a). Schools of dolphins often patrol parallel to the coastline just off the surf zone and large sandbars. Estuaries are likely where most of the foraging occurs.
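The mature-population figures quoted earlier in this assessment follow from simple proportions. A minimal sketch, using the abundance estimate of 99 (Wang et al. 2007a) and the two percent-mature values cited (60% from Jefferson 2000, 50% from Taylor et al. 2007):

```python
# Mature individuals implied by the two percent-mature values cited above.
abundance = 99  # total population estimate, mid-2000s (Wang et al. 2007a)

for source, pct_mature in [("Jefferson 2000", 0.60),
                           ("Taylor et al. 2007", 0.50)]:
    mature = abundance * pct_mature
    print(f"{source}: ~{round(mature)} mature individuals")
    # Both values fall far below the 250-mature threshold of criterion
    # C2a(ii); the 50% figure approaches the ~50 threshold of criterion D.
    assert mature < 250
```

This is why the text gives "about 60" and "about 50" mature individuals for the two assumptions.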
Feeding behind active trawlers (as in Hong Kong and Australia) has not been observed but dolphins move along the length of set trammel or gill nets, possibly searching for injured or net-entangled fish. In general, they appear to be indifferent towards boats (at least the research vessels that have been used to study them – a fishing boat and a large raft made of plastic tubing). Indo-Pacific humpback dolphins appear to be opportunistic feeders. They take a wide variety of nearshore, estuarine, and reef fishes. They also eat cephalopods in some areas, although crustaceans appear to be rare in the diet (Jefferson and Karczmarski 2001, Ross 2002, Ross et al. 1994). Little is known about the specific feeding habits of the ETS subpopulation but these dolphins have been observed feeding on croakers (Sciaenidae), mullets (Mugilidae), threadfins (Polynemidae) and herring (Clupeidae) (Wang et al. 2007b).
This population is not known to be hunted presently but is likely to have been hunted at least opportunistically in the past. Entanglements of humpback dolphins in gillnets have been recorded in coastal waters of the [...]
Habitat Degradation and Reduction
Reduction of freshwater flow and other kinds of degradation of estuaries and adjacent coastal waters (e.g. land reclamation) are almost certainly having an impact on this dolphin population, and there are continuing proposals for large-scale industrial development projects involving land reclamation (e.g., offshore wind farms, steel factory of the Formosa Plastic Group, Chinese Petroleum Company's petrochemical factory within the animals' restricted habitat) (Wang et al. 2004b, 2007b). Besides the physical removal of habitat, activities associated with land reclamation, such as pile-driving, can cause disturbance or even direct harm to the dolphins.
Pollution (industrial, agricultural and residential discharge with minimal to no treatment) poses a risk to humpback dolphins via the consumption of marine prey species (Clarke et al. 2000, Parsons 2004). Spills of oil and other toxic substances by commercial ships could be catastrophic for a population so small and limited in its distribution. Parsons (1997) estimated that a humpback dolphin in [...]
Sousa spp. are listed in Appendix I of CITES. Efforts are being made to characterize this dolphin population and the threats it faces, and to integrate relevant information into [...]
Clarke, S.C., Jackson, A.P. and Neff, J. 2000. Development of a risk assessment methodology for evaluating potential impacts associated with contaminated mud disposal in the marine environment. Chemosphere 41: 69-76.
IUCN. 2008. 2008 IUCN Red List of Threatened Species. Available at: http://www.iucnredlist.org. (Accessed: 5 October 2008).
Jefferson, T.A. 2000. Population biology of the Indo-Pacific Hump-backed Dolphin in Hong Kong waters. Wildlife Monographs 144: 1-65.
Jefferson, T.A. and Hung, S.K. 2004. A review of the status of the Indo-Pacific humpback dolphin (Sousa chinensis) in Chinese waters. Aquatic Mammals 30: 149-158.
Jefferson, T.A. and Karczmarski, L. 2001. Sousa chinensis. Mammalian Species (American Society of Mammalogists) 655: 9 pp.
Parsons, E.C.M. 1997. Sewage pollution in Hong Kong: implications for the health and conservation of local cetaceans. Final report to Friends of the Earth, Wan Chai, Hong Kong.
Parsons, E.C.M. 2004. The potential impacts of pollution on humpback dolphins – with a case study on the Hong Kong population. Aquatic Mammals 30: 18-37.
Ross, G.J.B. 2002. Humpback dolphins Sousa chinensis, S. plumbea and S. teuszii. In: W.F. Perrin, B. Würsig and J.G.M. Thewissen (eds), Encyclopedia of Marine Mammals, pp. 585-589. Academic Press.
Ross, G.J.B., Heinsohn, G.E. and Cockcroft, V.G. 1994. Humpback Dolphins Sousa chinensis (Osbeck, 1765), Sousa plumbea (G. Cuvier, 1829) and Sousa teuszii (Kukenthal, 1892). In: S.H. Ridgway and R. Harrison (eds), Handbook of Marine Mammals. Volume 5: The First Book of Dolphins, pp. 23-42. Academic Press, London.
Taylor, B.L., Chivers, S.J., Larese, J. and Perrin, W. 2007. Generation Length and Percent Mature Estimates for IUCN Assessments of Cetaceans. Administrative report LJ-07-01, Southwest Fisheries Science Center, National Marine Fisheries Service, 8604 La Jolla Shores Dr., La Jolla, CA 92038, USA.
Wang, J.Y., Hung, S.K. and Yang, S.-C. 2004a. Records of Indo-Pacific humpback dolphins, Sousa chinensis (Osbeck, 1765), from the waters of western Taiwan. Aquatic Mammals 30: 189-196.
Wang, J.Y., Hung, S.K., Yang, S.C., Jefferson, T.A. and Secchi, E.R. Submitted. Population differences in the pigmentation of Indo-Pacific humpback dolphins, Sousa chinensis, in Chinese waters. Mammalia.
Wang, J.Y., Yang, S.-C. and Reeves, R.R. 2004b. Report of the first workshop on conservation and research needs of Indo-Pacific humpback dolphins, Sousa chinensis, in the waters of Taiwan. National Museum of Marine Biology and Aquarium, Checheng, Pingtung County, Taiwan. 25-27 February 2004, Wuchi, Taiwan. 43 pp (English) + 37 pp (Chinese).
Wang, J.Y., Yang, S.C. and Reeves, R.R. 2007b. Report of the Second International Workshop on Conservation and Research Needs of the Eastern Taiwan Strait Population of Indo-Pacific Humpback Dolphins, Sousa chinensis. 4-7 September 2007, Changhua City, Taiwan. National Museum of Marine Biology and Aquarium, Checheng, Pingtung County, Taiwan. 62 pp (English) + 54 pp (Chinese).
Wang, J.Y., Yang, S.C., Hung, S.K. and Jefferson, T.A. 2007a. Distribution, abundance and conservation status of the eastern Taiwan Strait population of Indo-Pacific humpback dolphins, Sousa chinensis. Mammalia 71: 157-165.
|Citation:||Reeves, R.R., Dalebout, M.L., Jefferson, T.A., Karczmarski, L., Laidre, K., O’Corry-Crowe, G., Rojas-Bracho, L., Secchi, E.R., Slooten, E., Smith, B.D., Wang, J.Y. & Zhou, K. 2008. Sousa chinensis (eastern Taiwan Strait subpopulation). In: IUCN 2012. IUCN Red List of Threatened Species. Version 2012.2. <www.iucnredlist.org>. Downloaded on 22 May 2013.| |Feedback:||If you see any errors or have any questions or suggestions on what is shown on this page, please fill in the feedback form so that we can correct or extend the information provided|
<urn:uuid:bd2a853f-92db-469b-8bec-aa0d1f68ab08>
2.9375
3,071
Knowledge Article
Science & Tech.
62.42101
When you crumple up your gift-wrapping paper this year, you'll create a shape so complex that it has defeated the most sophisticated computers WHEN you throw out your Christmas wrapping paper this year, don't tell Narayanan Menon and Anne Dominique Cambou. You'll be throwing away examples of their painstaking research. That's because they study the physics of crumpled balls of paper, which contain deeper mysteries than you might expect. Take a sheet of A4, scrunch it up and throw it at a colleague. You'll notice that even though paper is flimsy, it becomes sturdier in the form of a ball. How can a sheet of paper become an unaccountably tough projectile simply by the act of crushing? The answer might seem simple, but it turned out that finding a sound explanation required complex instruments and a lot of brain power. Now, though, Cambou and Menon, physicists at the University of Massachusetts in Amherst, have come up with some unexpected answers. There is something of a niche research field in paper folding. One of its original defining experiments was testing the assertion that it is only possible to fold a sheet of paper seven times. This was shown to be false on the Discovery Channel programme Mythbusters (episode 72, first aired in 2007). The actual number turned out to be 11, though getting to that required a steamroller and a very thin grade of paper akin to parachute material. The dimensions of the folding material were also on a huge scale - the size of a football pitch. In the context of the office, however, the seven folds assertion stands, unbusted. Another aspect of paper folding is that it is highly unpredictable. If you place a sheet of paper over a coffee cup and poke it down into a cone, it can fold in myriad ways. Researchers were eventually able to mathematically predict how a given sheet of paper would fold in this situation (Nature, vol 401, p 46). One property of crumpled paper remained, though, resisting all forms of analysis. 
No matter how tightly you crumple paper into a ball, you'll be hard-pressed to come up with a structure composed of less than about 90 per cent air. "It's technically possible to compress them further," says Cambou, "but that will take a lot more force because the crumpled sheet increasingly opposes the external force as it's crushed." Menon and Cambou wanted to know why. Despite their insubstantial constitution, wadded paper balls are capable of feats of considerable strength. They are the ultimate packing material, for instance, able to support and cushion objects far heavier than themselves. That's unexpected, given their lack of internal buttressing. A house, by contrast, has supporting structures such as beams built into the architecture to explain why it is so rigid. "This is not stiffness you have designed into the ball," says Menon. "You've just crushed it." Considering that lack of uniform structure, a ball's stiffness is also surprisingly consistent throughout, even though no two are likely to have the same configuration of folds inside. Each crumpled ball may even be unique, though researchers have not yet examined them in sufficient numbers to determine whether they can be compared on the lines of snowflakes, fingerprints and dust particles (see "A library that's deliberately gathering dust"). Furthermore, numbers aren't the only barrier to understanding paper balls. It's a wrap Despite technological advances, it is still extremely difficult to peer inside a simple scrunched-up paper ball with any detail. Computer science hasn't been much help. It has been impossible to pinpoint the physics involved because even the most sophisticated hardware and software fail when trying to recreate the sheer complexity involved. There are simply too many variables. Neither is it possible to make a paper ball and then reverse engineer the structure from reading the patterned wrinkles in the unfurled paper. 
Various groups have analysed such patterns and been driven to frustration. "You can ask very simple questions that have surprisingly complex answers," says Menon. He had hoped some kind of 3D imager would do the trick. For example, an X-ray tomography machine - a piece of kit normally used to hunt for tumours or to look inside delicate artefacts of archaeological digs - bounces X-rays off the internal surfaces of an object to create thousands of 2D cross-sections that can be reassembled into a 3D image. There was just one tricky problem, which is that X-rays sail right through paper. Menon and Cambou realised that they could get what they wanted with a different material that comes in sheets: aluminium foil. Their plan worked, and they created the world's first image of the internal geometry of a crumpled-up sheet. The image yielded answers immediately. The first thing the researchers noticed were the ridges throughout the insides of the ball. They are the paper's strongest points, and what fortifies them is a quality that you might not expect from paper. It might rip easily, but it is very robust in one particular way, says Tom Witten, a solid-state physicist at the University of Chicago. To demonstrate, he picks up a flat piece of paper and tries to stretch it until it rips. It is really difficult to do.
Have your say
New World Record
Fri Dec 23 12:25:32 GMT 2011 by Eric Kvaalen
I just folded a piece of paper 12 times! I could have done even more folds. (The trick is not to let a fold cross too many other folds.) I think what was meant was that you can't fold a piece of paper in half more than 7 times.

Just So You Know
Thu Jan 05 13:45:36 GMT 2012 by Jamie
Mythbusters weren't the first to fold paper in half that many times. See: Gallivan, B. C. "How to Fold Paper in Half Twelve Times: An 'Impossible Challenge' Solved and Explained." Pomona, CA: Historical Society of Pomona Valley, 2002.
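Gallivan's analysis (cited in the comment above) gives a loss function for folding a sheet repeatedly in the same direction: a sheet of thickness t needs length at least L = (pi*t/6)(2^n + 4)(2^n - 1) to be folded n times. A quick sketch under that assumption - the 0.1 mm thickness is an assumed round figure for ordinary paper - shows why an A4 sheet stalls around seven folds while twelve folds needed a kilometre-scale strip:

```python
import math

def min_length(t, n):
    """Minimum sheet length (same units as t) for n same-direction folds,
    using Gallivan's loss function L = (pi*t/6) * (2**n + 4) * (2**n - 1)."""
    return (math.pi * t / 6) * (2**n + 4) * (2**n - 1)

t = 1e-4  # 0.1 mm: an assumed thickness for ordinary office paper
for n in (7, 11, 12):
    print(f"{n} folds need a sheet about {min_length(t, n):,.0f} m long")
```

Seven folds already demand roughly a metre of paper, far more than an A4 sheet's 0.3 m, and twelve folds push the requirement into the hundreds of metres.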
<urn:uuid:cae3b82b-f7e3-4fd3-9877-3bbd305eb51c>
3.875
1,434
Comment Section
Science & Tech.
52.06313
Session 27 - Gravitational Lensing & Dark Matter. Oral session, Monday, January 13

Microlensing events have so far been detected in photometric surveys, from the variation in the total magnification of the two images of the source star as the lensing object passes by. These events may also be detected astrometrically, by measuring the deflection directly. The field of high-precision interferometric astrometry is rapidly advancing: interferometers on the ground should now achieve accuracies of 50 μas, and various proposals for interferometric satellites call for accuracies better than 10 μas on large numbers of stars. This motivates the present study of the applications of detecting microlensing events with astrometry. In contrast to the photometric method, microlensing events can be detected with astrometry for impact parameters much larger than the Einstein radius, increasing the number of detectable events by a large factor. Three different applications will be discussed. The first is to measure the masses of bright, nearby stars from the deflection they induce on a background star. This can be done with ground-based interferometers, and the observations required are the same as those needed to discover planets around nearby stars. We show that there should be several bright stars across the sky with an adequate background star close enough to them to undergo a microlensing event detectable over the next 10 years. Second, the masses of brown dwarfs orbiting nearby bright stars could be measured with the same technique. Third, distant stars can be monitored to detect microlensing events by any lens near the line of sight. This requires an astrometric satellite able to monitor large numbers of stars astrometrically, making 10 to 20 observations of each star over a period of several years.
The proposed satellite GAIA satisfies these characteristics, and we show that it should detect thousands of microlensing events from known stars, as well as compact objects that might account for dark matter in the halo or in the disk.
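The scales in this abstract can be sketched from the angular Einstein radius, theta_E = sqrt((4GM/c^2)(1/D_L - 1/D_S)), together with the fact that for impact parameters u >> 1 (in Einstein radii) the astrometric centroid shift falls off only as roughly theta_E/u. The lens mass and distances below are illustrative values chosen for this sketch, not numbers from the abstract:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
PC = 3.0857e16       # m
RAD_TO_MAS = 206265.0 * 1e3  # radians -> milliarcseconds

def einstein_radius_mas(m_lens_msun, d_lens_pc, d_source_pc):
    """Angular Einstein radius theta_E = sqrt((4GM/c^2)(1/D_L - 1/D_S))."""
    r = 4 * G * m_lens_msun * M_SUN / C**2          # 2x Schwarzschild radius
    inv_d = 1 / (d_lens_pc * PC) - 1 / (d_source_pc * PC)
    return math.sqrt(r * inv_d) * RAD_TO_MAS

# Illustrative case: a 0.5 M_sun star 10 pc away lensing a source at 1 kpc.
theta_e = einstein_radius_mas(0.5, 10, 1000)
print(f"theta_E ~ {theta_e:.1f} mas")   # roughly 20 mas

# Even at an impact parameter of u = 100 Einstein radii, the centroid
# shift ~ theta_E / u is still a few hundred microarcseconds -- well above
# the ~10 microarcsecond accuracy quoted for proposed satellites.
u = 100
print(f"shift at u=100: ~{1e3 * theta_e / u:.0f} microarcsec")
```

This illustrates why astrometry gains events at impact parameters far beyond the Einstein radius, where the photometric magnification is negligible.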
<urn:uuid:22b8d8ae-ff30-4b12-9da1-728a8d4664d7>
2.859375
418
Academic Writing
Science & Tech.
30.211577