MATHEMATICS is often thought to be universally and unassailably true. I have even heard it argued that God, omnipotent though He may be, could not make math false even if He were impulsive enough to try. Can mathematicians actually prove that math is true? If they cannot, does the fact that math is so useful in solving real-world problems provide evidence of its truth? And if mathematics is not true, does that imply that conclusions drawn from it are faulty or suspect? These are some of the questions that I will try to address.

The first attempt we might make to prove that mathematics is true is to consider real-world situations where mathematical equations seem to appear. Some examples are:

• If I have three red balls in a bag and add two more, the bag will then contain five red balls.
• If I am on a train traveling at three miles per hour and throw a ball at two miles per hour (measured with respect to the train), then the ball will be traveling at five miles per hour with respect to the ground.
• If I had three dollars worth of goods yesterday and borrow two dollars worth of goods from you today, then I have five dollars worth of goods in my possession.

Each of these three situations seems to imply the equation 3+2=5, but do they actually prove that the equation 3+2=5 is true? One problem with drawing conclusions about mathematics from these examples is that the number '3' is not the same as 'three balls' or 'three miles per hour' or 'three dollars', and the operator '+' is not the same as grouping together balls or combining velocities or aggregating wealth. It is true that 3+2=5 is typically an excellent model for each of these situations, but the equation is nonetheless not precisely equivalent to them. It is also true that when we group balls together (by, in this case, placing them in a bag), the procedure generally behaves as though we are performing addition.
But now suppose that the objects we are grouping together are made of packed sand, or some other delicate substance. In this case, when we add new objects to our bag they will sometimes fracture and split into multiple objects, and occasionally multiple objects will even fuse together into a single object. The addition operator '+' no longer models this situation well, because placing a new object in the bag does not always increase the number of objects in the bag by one. It is not terribly difficult to annihilate the relationship between the equation 3+2=5 and the other real-world situations given above. For example, Einstein's theory of relativity tells us (in contradiction to the more intuitive but less accurate equations of Newtonian mechanics) that when a person on a train moving three miles per hour (with respect to the ground) throws a ball at two miles per hour (with respect to the train), the speed of the ball with respect to the ground is actually very slightly less than five miles per hour, not equal to five miles per hour. What's more, if I had three dollars worth of goods yesterday and then borrow two dollars worth of goods from you today, the total value of the goods in my possession will not necessarily be five dollars if the value of my original goods changed between yesterday and today (as can happen in real economic markets). What these examples show us is that the only reason to say that grouping balls or combining velocities or aggregating wealth encapsulates the idea of mathematical addition is that, most of the time, the addition operator '+' provides a good model for these scenarios. We can no more conclude that 3+2=5 is a true statement simply because putting two balls into a bag that already has three balls generally produces a bag with five balls, than we can conclude that 3+2=5 is false simply because velocities have been proven not to add.
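To see just how slight the relativistic deviation is, here is a brief sketch of Einstein's velocity-addition formula. The constant and the helper name are my own, chosen for illustration; the numbers mirror the train example above.

```python
# Einstein's velocity-addition formula: w = (u + v) / (1 + u*v / c^2).
# A sketch of the train example from the text.

C_MPH = 670_616_629.0  # speed of light in miles per hour (approximate)

def add_velocities(u, v, c=C_MPH):
    """Combine two velocities relativistically (u, v in the same units as c)."""
    return (u + v) / (1.0 + (u * v) / (c * c))

# At everyday speeds the correction is real but so small it falls below
# double-precision floating point, so the result looks Newtonian:
print(add_velocities(3.0, 2.0))

# At half the speed of light (working in units where c = 1) the deviation
# from the Newtonian sum of 1.0 is dramatic:
print(add_velocities(0.5, 0.5, c=1.0))  # → 0.8
```

The point is not that '+' is wrong, only that it is a model whose accuracy depends on the regime in which it is applied.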
In other words, while real-world situations can motivate the equations of mathematics and provide justifications for applying them, they cannot prove that those equations are actually true. We have stared at equations like 3+2=5 so many times in our lives that it can be difficult to consider them with fresh eyes and ask ourselves what they are really saying. Clearly '3', '+', '2', '=' and '5' are not objects in the physical universe. You can go to the zoo and see three bears, or see the numeral '3' printed on a sign, or perform arithmetic on paper using the symbol '3', but nowhere in the universe can you find the actual (metaphysical) number '3'. This is hardly surprising, since '3' is a concept or idea, not a physical thing. But this line of thought implies that 3+2=5 is a statement about the relationship among the concepts '3', '2', and '5', and not a statement about physical entities that actually exist. But how do we define the word "true" when it comes to relations among abstract concepts? One possible approach is to say that statements about abstract concepts are true if they follow as logical consequences of the definitions of the concepts themselves. This leads us to ask whether 3+2=5 and all other mathematical statements are simply true by definition, as a consequence of our chosen definitions for '3', '+', '2', '=', '5' and the other mathematical objects. Unfortunately, this question cannot be answered without further qualification. To begin with, what do we mean by "mathematical objects", and how do we choose to define concepts such as '3'? Various authors have attempted to define mathematics by developing lists of axioms (which are simply assumed to be true) and then proving that the basic mathematical objects (e.g. integers) and theorems (e.g. a+b = b+a) follow from these axioms. Unfortunately, there are a variety of different ways that math can be axiomatized (i.e. built up from basic axioms).
Some approaches use sets as the most basic objects (as in what is probably the most popular axiomatization, Zermelo-Fraenkel set theory), while others use category theory to provide the basic building blocks, and still other theories attempt to axiomatize only small portions of math, such as Euclid's axioms of planar geometry, Hilbert's axiomatization of Euclidean geometry, and the Peano axioms for arithmetic. What is even worse (when it comes to deciding what is true) than having so many conflicting viewpoints for constructing math is that the axioms of these viewpoints are themselves not provably true. If you are willing to assume the axioms of math are "true" (whatever that means), then all of the theorems that can be derived from those axioms are also true, but the axioms themselves must simply be accepted without proof in order for this process to work. As a matter of fact, if we could prove that the axioms were true then they would be called "theorems" and not "axioms"! As convoluted as this discussion has become, matters get still murkier. Even those mathematicians who agree to rely on a single basic axiomatization (such as Zermelo-Fraenkel set theory) sometimes cannot agree on whether certain extra axioms (such as the continuum hypothesis, which concerns the existence of sets of certain infinite sizes, or the axiom of choice, which pertains to being able to select one element from each member of a collection of sets) should be added or left out. And to top that off, mathematics (as defined by whichever axiomatization you like) has not even been proven to be consistent, meaning that no one has been able to mathematically demonstrate that the axioms of any single axiomatization do not contradict each other. In fact, Gödel's second incompleteness theorem shows that if mathematics is in fact consistent, then it will not be possible to use math to prove that no inconsistencies exist!
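The idea that a statement like 3+2=5 follows from chosen definitions can be made concrete with a toy, Peano-style construction. This is a sketch for illustration only, not a faithful rendering of any of the formal axiomatizations above: numerals are iterated successors of zero, and addition is defined recursively.

```python
# A toy Peano-style construction: numerals are nested tuples S(S(...S(Z)...)),
# and addition is defined recursively from the two axioms
#   a + 0    = a
#   a + S(b) = S(a + b)

Z = ()  # zero

def S(n):
    """Successor: the numeral one greater than n."""
    return (n,)

def add(a, b):
    """Addition defined purely by the recursive axioms above."""
    if b == Z:
        return a
    return S(add(a, b[0]))

def to_int(n):
    """Decode a Peano numeral into an ordinary int, for display only."""
    k = 0
    while n != Z:
        n, k = n[0], k + 1
    return k

three = S(S(S(Z)))
two = S(S(Z))
print(to_int(add(three, two)))  # → 5
```

Here 3+2=5 is "true" only in the sense that it follows mechanically from the definitions of Z, S, and add; the construction says nothing about balls, trains, or dollars.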
In conclusion: numbers and other mathematical objects are simply concepts, not things that are actually observable in the universe, so we cannot say that statements like 3+2=5 are true in the same way that we can say that the statement "massive objects exert forces on other massive objects" is true. We might like to think that mathematical statements are true by definition, but this idea is complicated by the fact that there is more than one way to axiomatize mathematics, and therefore more than one definition that we might choose in order to define numbers, operators and other mathematical objects. But even if there were truly only one way to axiomatize math, the axioms themselves would still not be provably true (they would only be assumed to be true), and hence it would hardly seem fair to conclude that mathematical theorems are "true" in some objective and universal sense. These problems are compounded by the fact that we cannot prove that our commonly accepted mathematical axioms do not contradict each other, leading to a still deeper level on which to question the truth of mathematical statements. In the end, while it hardly seems fair to say that math is false, it also does not seem fair to conclude that math is true. Math is probably neither "true" nor "false" in the usual sense of those words, though it does undeniably provide extraordinarily useful models for making predictions about what will happen in our physical universe. This will perhaps seem less surprising if we remember that mathematics was not originally developed from the ground up using axioms, but rather piece by piece, in order to find solutions to problems that appear in the real world (like those related to calculating the size of plots of land, counting money, measuring roads, tracking the movements of the stars, understanding heat flow in cannons, etc.).
Hence, mathematical definitions were chosen by humans to model physical reality so that we could make useful predictions, not to encapsulate metaphysical truth, so really, why should we expect math to be true?
A chemical reaction between iron-containing minerals and water may produce enough hydrogen "food" to sustain microbial communities living in pores and cracks within the enormous volume of rock below the ocean floor and parts of the continents, according to a new study led by the University of Colorado Boulder.

University of Colorado Boulder Assistant Professor Nikolaus Correll likes to think in multiples. If one robot can accomplish a singular task, think how much more could be accomplished if you had hundreds of them. Correll and his computer science research team recently created a swarm of 20 robots, each the size of a ping-pong ball, which they call "droplets." When the droplets swarm together, Correll said, they form a "liquid that thinks."

In 1977, Jimmy Carter was sworn in as president, Elvis died, Virginia park ranger Roy Sullivan was hit by lightning a record seventh time and two NASA space probes destined to turn planetary science on its head launched from Florida.

When the space shuttle Atlantis lifted off for its journey to the International Space Station in 2009, it had on board two butterfly habitats, which were part of an experiment conducted by CU-Boulder and K–12 students across the country.

Corn and potato crops may soon provide information to farmers about when the plants need water and how much should be delivered, thanks to a CU-Boulder invention. A tiny sensor clipped to plant leaves charts their moisture content, a key measure of water deficiency and accompanying stress. Data from the leaves is sent wirelessly over the Internet to computers linked to irrigation equipment, ensuring timely watering, reducing excessive water and energy use, and potentially saving farmers millions of dollars a year.
Brazil has the potential to be a world leader in solar energy. So far, however, the country hasn't been doing much: solar technology is still in its infancy here. But with help from Germany's International Climate Initiative (IKI), the country's largest solar plant is slowly taking shape. It's meant to generate one megawatt of power. So far, all of Brazil has just 2.5 megawatts of solar capacity. In comparison, that's less than 0.01 percent of what the solar industry produces in Germany. The pilot project in Brazil is meant to be an impetus for further investors to put their money into solar technology. Santa Catarina, where the solar plant is being built, is one of Brazil's darkest states. And still, solar irradiation there is 40 percent higher than in Germany's sunniest places.
Australia has one of the world's largest ecological footprints per capita, requiring 6.6 global hectares per person. Over 50% of Australia's footprint is due to greenhouse gas emissions, with the average household emitting around 14 tonnes of greenhouse gases each year. Measure your ecological footprint to see how the way you live is impacting the planet and what you can do to reduce it.
- Ecological footprint calculator
In 2003, Global Footprint Network, a 501(c)(3) nonprofit organization, was established to enable a sustainable future where all people have the opportunity to live satisfying lives within the means of one planet. Global Footprint Network is an international think tank working to advance sustainability through use of the Ecological Footprint, a resource accounting tool that measures how much nature we have, how much we use and who uses what. By making ecological limits central to decision-making, we are working to end overshoot and create a society where all people can live well, within the means of one planet.
Time to give credit where credit is due: the 2007 Presidential Green Chemistry Challenge Awards are in, recognizing some very hard-working EcoGeeks who are doing their part to make our world a better place. Winners this year have discovered a nanotechnology-based catalyst capable of producing hydrogen peroxide from renewable feedstocks, a formaldehyde-free adhesive for making wood composites, a process for synthesizing an ingredient for polyurethane foam (used in bedding and furniture) without petroleum oil, a green technique for prepping donor tissue for transplant, and a new class of chemical reactions using hydrogen and metal catalysts which minimize waste in industrial applications. "The EPA estimates that over the past 12 years, the winners' work has led to the elimination of over 940 million pounds of hazardous chemicals and solvents, [the use of] over 600 million gallons of water and more than 340 million pounds of carbon dioxide." Kudos to Professors Michael Krische and Kaichang Li, and the innovative chemists at NovaSterilis, Columbia Forest Products, Hercules Inc., Headwaters Technology Innovation, and Cargill Inc. via GreenBiz News
Ecologists point to forests as important sinks for atmospheric carbon. But a new report suggests that climate change could induce environmental stresses that would change the role of forests into a net carbon source. The report, titled "Adaptation of Forests and People to Climate Change – A Global Assessment," was coordinated by the International Union of Forest Research Organizations (IUFRO) and the Collaborative Partnership on Forests (CPF). The findings came from an analysis of how different forest ecosystems worldwide would be affected under specific climate change scenarios developed for the IPCC report. The report brings together 35 international forest scientists, some of whom contributed to the IPCC. The study reports that higher temperatures would usher in the probability of prolonged droughts, more intense pest invasions, and a host of other environmental stresses, which would lead to forest destruction and degradation. Climate change could thus create a dangerous feedback loop in which damage to forests significantly increases global carbon emissions, which then exacerbates the greenhouse effect. This scenario is likely to occur if the world warms more than 4.5 degrees Fahrenheit. "Even if adaptation measures are fully implemented, unmitigated climate change would, during the course of the current century, exceed the adaptive capacity of many forests. The fact remains that the only way to ensure that forests do not suffer unprecedented harm is to achieve large reductions in greenhouse gas emissions." The report will be formally presented at the United Nations Forum on Forests (UNFF) session taking place April 20–May 9 at the UN Headquarters in New York City.
Water from the 'tap' [like our greenhouse gas, or GHG, emissions] flowing into the 'bath' [like the global atmosphere] raises the level of the bath-water [like the rate of atmospheric GHG accumulation/concentration], but the 'bath' is also drained by the 'plug-hole' [like the natural 'sinks' for GHG, which affect and slow the rate of atmospheric GHG accumulation]. To stop the bath over-flowing, the tap must be turned off, in the knowledge that the bath level will continue to rise while the tap is being turned off. This is true for emissions, once the need for UNFCCC-compliance in the form of safe and stable future GHG concentrations in the atmosphere is accepted. An assessment of 'Contraction & Concentrations' and 'Contraction & Convergence' and the C&C targets and modelling behind 'sink-efficiency' in the UK Government's Climate Act. The '50:50' odds the UK Government gave for avoiding a global temperature rise of more than two degrees with their emissions scenario are in this context. They are linked to the Government's wholly unsubstantiated claim that atmospheric concentrations will fall after 2050, even though we are projected as only halfway through a 100-year emissions 'contraction-event'. A letter of 8th June 2011 from many eminent persons, sent to the Secretary of State for Energy and Climate Change about these matters, is here. Working draft of 'CBAT' - the Carbon Budget Analysis Tool [see here]. C&C in the context of COP-15 Copenhagen [12/2009], with a view on what went wrong and what it takes to get it right. Presentation/Animation - also available for download as an swf file for internet browsers or a self-executing [virus-free] Flash file for PCs. Presentation/Animation - C&C in the context of IPCC AR4 and the carbon-cycle feedbacks reported quantitatively for the first time since IPCC FAR 1990.
Essentially, due to 'positive feedback' effects in the carbon cycle, where rising temperature amplifies the rate at which atmospheric GHG concentration increases, accelerated rates of carbon emissions contraction are needed to meet a given concentration outcome. This is the increasingly crucial issue of changing rates of 'sink-efficiency'. In-depth analysis of this in relation to the UK Climate Act is here in this Evidence to the UK Environmental Audit Committee. The rates for Contraction:Concentrations and Contraction:Convergence are compared in this Animation as Acceptable [C1], Dangerous [C2] and Impossible [C3] rates of C&C, at four different theoretical rates of sink-failure. Presentation/Animation that relates the arithmetic of emissions contraction to issues of science, geo-technology, oil and gas depletion, growth and damages, clean energy and implementation. The arithmetic of emissions contraction relating to: Globalisation of Consciousness; Climate Science, Rising Risks; Trends of 'Expansion and Divergence'; 'Contraction & Convergence'; 'Syntax for Global Climate Policies'; Presentation/Animation and Notes. About future 'growth', you should ask an economist 'how long is a piece of string'. He may tell you a witty Woody Allen one-liner about infinity being a really long time, 'especially towards the end' [in other words, he'll probably try to avoid the question]. If you ask a string-player how long is a piece of string, s/he'll give you a different answer: exactly twice half its length [giving the perfect octave], or exactly three times a third of its length [giving the perfect twelfth], as in the audio-visually animated image above. This Pythagorean 'stringularity' is true because it has 'ontological structure'.
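The 'tap and bath' analogy, together with declining sink-efficiency, can be sketched as a minimal stock-flow model. The numbers and function below are invented for illustration; they are not taken from CBAT, the Climate Act modelling, or any published C&C scenario.

```python
# A minimal stock-flow sketch of the bathtub analogy: emissions are the tap,
# natural sinks are the plug-hole, and the atmospheric stock is the water level.
# All numbers are purely illustrative.

def simulate(years, emissions_per_year, sink_fraction, sink_decay=0.0, stock=0.0):
    """Return the stock trajectory; sink efficiency may decay each year."""
    levels = []
    for _ in range(years):
        stock += emissions_per_year          # tap: constant inflow
        stock -= sink_fraction * stock       # plug-hole: outflow proportional to stock
        sink_fraction *= (1.0 - sink_decay)  # optional sink-efficiency failure
        levels.append(stock)
    return levels

healthy = simulate(50, emissions_per_year=10.0, sink_fraction=0.02)
failing = simulate(50, emissions_per_year=10.0, sink_fraction=0.02, sink_decay=0.05)

# With failing sinks, identical emissions leave a higher final stock, which is
# why declining sink-efficiency demands faster emissions contraction.
print(healthy[-1], failing[-1])
```

The qualitative behaviour, not the numbers, is the point: the level keeps rising while the tap is still running, and it rises faster as the plug-hole clogs.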
UK Germination Toolbox - about the database

The Millennium Seed Bank Partnership has successfully collected and stored seed samples from around 90% of the United Kingdom's native seed plant species, as a hedge against extinction and as a conservation resource. The 'missing' species produce either no seeds at all, or seeds that cannot be stored conventionally, or are too rare or fruit too infrequently for seed collections to have been made without threatening their survival. As well as further collecting to increase bio-geographic and genetic coverage of those species already in the bank, efforts are also continuing to locate and bank collectable samples from those last few, elusive species. In this database, the naming and definition of UK native species follows the PLANTATT database (Hill, Preston and Roy, 2004), and includes species designated there as native (N), native endemic (NE) or archaeophyte (AR; introduced before 1500 AD) - 1,442 species. At present casuals, aliens and more recent introductions are not included. Germination tests are central to the routine management of seeds collected for the purpose of their conservation. As well as being the most useful means of monitoring seed viability over time in storage, they also provide essential information towards the propagation of new plants. Ultimately, seeds in a bank, even though they may be perfectly viable, will be of little use if we do not know how to grow new plants from them. The MSBP aims to promote conservation by enabling the sustainable use of seeds in the bank, not least for the re-introduction of native species and restoration of degraded natural and semi-natural vegetation. Consequently, this database is intended as a resource for all those who need to propagate UK native species from seed: researchers; conservationists attempting to restore native species and vegetation; and horticulturalists, including those commercial nurseries specialising in growing and supplying UK native species.
It will also be useful for researchers in comparative and evolutionary ecology seeking germination trait data. The 'toolbox' comprises information on germination from up to three available sources.

1. MSB germination tests
The main purpose of this database is to share the MSB's germination data on UK species with potential users. So, wherever they are available, a search returns a summary of the conditions applied in successful MSB germination tests. By and large, the conditions (mostly temperatures) returned by a search for a particular species will be those that resulted in at least 75% germination (i.e. the MSB viability standard is passed). The tests 'accepted' by the MSB are usually those that are easiest to apply and repeat, wherever possible avoiding complicated temperature regimes or the application of dormancy-breaking chemicals, for example. Please note that the successful germination conditions presented almost always DO NOT result from designed experiments with controls. Thus, they do not exclude other potentially equally successful conditions that have not been tried; nor are unsuccessful conditions reported at present. In a few cases the successful germination conditions result from tests on collections of species native to the UK, but originating elsewhere, usually in Europe.

2. Information from published literature
Despite the MSB's high coverage of UK native species, successful germination conditions are not yet available for some of them. This is sometimes due to as yet intractable dormancy problems in certain species - and research continues on these. More frequently, the collections currently held of those species are too small (<500 seeds) to commit any of them to germination testing without jeopardising the value of the conservation collection. In such cases, where there is published information available, the database will return a summary of the conditions found to be successful in other laboratories.
This part of the database currently relies heavily on the extensive compilation and analysis of published literature by Baskin & Baskin (1998), with further updates to 2001 kindly provided by the authors (cf. Baskin & Baskin, 2003a). Updates beyond that date are from the MSB's own literature searches, which are ongoing, and will be added to the database in due course. Some published germination treatments for UK native species are for material not collected in the UK.

3. Predicting likely successful temperature regimes
Worldwide, around one third of all wild species studied are not exacting in their germination requirements, and this is probably also true of UK species. So long as they have sufficient moisture and a broadly favourable temperature, they are relatively easy to germinate fully. The remainder possess varying degrees of several different kinds of dormancy, presumed to result from evolution to ensure that seedlings emerge when they are most likely to survive, and often also to ensure that emergence is spread over time ('bet hedging'). Synchronous germination to a high percentage is often quite difficult to achieve for these species. However, optimal germination temperature and dormancy-breaking conditions are often related to local climatic conditions, and these can suggest likely successful germination conditions. For example, seeds of tropical dry-land species, shed at the beginning of the dry season, often require an extended period of relatively high temperature in the dry state before germination occurs in the subsequent rainy season. This requirement appears to be an adaptation to avoid germination in response to sporadic, unreliable rainfall during the dry season, when emerging seedlings would probably be killed by drought.
Similarly, cool temperate species shed in autumn may delay germination until temperatures begin to rise in the early spring, by having a requirement for an extended period at low temperature ('cold stratification') before germination can occur, mimicking the passage of winter and the risk of frost damage to sensitive seedlings. Application of temperature regimes related to seasonal climate cycles forms the basis of 'move-along' experiments (e.g. Baskin & Baskin, 2003b), in which seeds, imbibed on a moist substrate, are transferred between a succession of incubators running at temperatures that approximate to local conditions at the source of the seeds. The start point in the temperature regime is set at the conditions pertaining when seeds are shed naturally (≈ collected) in the field. To help users predict likely successful temperature sequences, they are able to enter the latitude and longitude of the source of their seed collection (if known; input is restricted to decimal degrees at present), as well as the month of collection. The system will return the monthly mean minimum and maximum as well as corresponding median temperatures. This facility is mainly to allow users to make predictions of likely germination conditions in the absence of information from MSB germination tests. However, the values can also be used in conjunction with MSB records and published data, where they exist. The temperature values are provided by 'WORLDCLIM' (Hijmans et al., 2005), which uses an algorithm to compute interpolated, or modelled, temperature and rainfall data from real data compiled from weather station records worldwide, at high (1 km) spatial resolution. Properties of the algorithm and the uneven distribution of climate stations mean that uncertainty is highest for small islands and mountainous regions.
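As a sketch of how a 'move-along' temperature sequence might be derived programmatically: starting from the collection (shedding) month, step through the site's monthly temperatures in calendar order. The monthly values and the function name below are invented placeholders for illustration, not WORLDCLIM output or any MSB tool.

```python
# Sketch: derive a 'move-along' incubation temperature sequence starting at the
# month of seed collection. The monthly median temperatures are hypothetical
# placeholders; a real workflow would query interpolated climate data (e.g.
# WORLDCLIM) for the collection site's latitude and longitude.

MONTHLY_MEDIANS_C = [3, 4, 6, 9, 12, 15, 17, 17, 14, 10, 6, 4]  # Jan..Dec (invented)

def move_along_sequence(collection_month, months=12, temps=MONTHLY_MEDIANS_C):
    """Temperatures for successive incubator steps, beginning at shedding time.

    collection_month is 1-12 (January = 1)."""
    start = collection_month - 1
    return [temps[(start + i) % 12] for i in range(months)]

# Seeds collected in October: the sequence passes through the cold winter
# months first, supplying a cold-stratification period before spring warmth.
print(move_along_sequence(10))
```

This mirrors the experimental logic described above: the regime begins at field-shedding conditions and then tracks the local seasonal cycle.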
The database currently does not return interpolated monthly rainfall amounts for UK locations, as rainfall appears to have very limited value in predicting germination period in the UK. In strongly seasonal climates, rainfall amounts can give an indication of relatively dry periods (when germination would be less likely) or relatively moist periods, when seedling emergence is likely to take place in the field. Rainfall, however, is more evenly distributed throughout the year in the UK; and though observed mostly in spring and autumn, newly emerged seedlings of some species can be seen at any time of the year, even during mild weather in winter. Worldwide, observations of species' seedling emergence timing are scarce or mostly non-existent, whereas such timing is often quite well documented for UK species (e.g. ECOFLORA).
References: CLtL p. 160

Edit History: V1, 9 Sep 1988, David Gray (initial version)
V2, 19 Sep 1988, David Gray (delete first alternative)

Problem Description: The OPTIMIZE declaration provides a way to tell the compiler how important various qualities are, in order to guide which optimizations are done. There is another quality, however, that is not mentioned but is an important consideration for the compiler: how much information should be included in the object code to facilitate debugging. This includes both annotation added to enable a debugger to display more information to the user, and also suppression of optimizations that would confuse debugging by making it harder to connect the object code with the source.

Proposal: In the description of the OPTIMIZE declaration, add an additional quality named DEBUG, described as "ease of debugging". Since ease of debugging is an issue that the user will be concerned with, and is an issue that the compiler needs to consider, this provides a clear way for the user to control the amount of debugging information placed in the object module, with DEBUG=0 meaning none and DEBUG=3 meaning "as much as possible".

Current Practice: No current implementation of this is known.

Cost to Implementors: All would have to update their handling of OPTIMIZE declarations to accept the new quality.

Cost to Users: One more little feature to learn. Some problems may result from the addition of the symbol DEBUG to the LISP package.

Benefits: Provides users a standard way to control the interaction between the compiler and debugger, and saves implementors from having to invent their own.

Costs of Non-Adoption: Continued confusion about how debug information should be controlled.

Discussion: Concern has been raised that there is already a problem with the non-orthogonality of SPEED, SAFETY, and SPACE that would be made even worse with DEBUG added, since users tend to be perplexed by the interactions of these qualities.
Sometimes after an accept method has read some input from the user, it may be necessary to insert a modified version of that input back into the input buffer. The following two functions can be used to modify the input buffer:

replace-input
Arguments: stream new-input start end buffer-start rescan
Summary: Replaces the part of the input editing stream stream's input buffer that extends from buffer-start to its scan pointer with the string new-input. buffer-start defaults to the current input position of stream. start and end can be supplied to specify a subsequence of new-input; start defaults to 0 and end defaults to the length of new-input. replace-input queues a rescan by calling queue-rescan if the new input does not match the old output, or if rescan is t. The returned value is the position in the input buffer.

presentation-replace-input
Arguments: stream object type view buffer-start rescan query-identifier for-context-type
Summary: Like replace-input, except that the new input to insert into the input buffer is obtained by presenting the object object with the presentation type type and view view. buffer-start and rescan are as for replace-input, query-identifier is as for accept, and for-context-type is as for present. If the object does not have a readable representation (in the Lisp sense), presentation-replace-input may create an "accept result" to represent the object and insert it into the input buffer. For the purposes of input editing, "accept results" must be treated as a single input gesture.

The following two functions are used to read or write a token (that is, a string):

read-token
Arguments: stream input-wait-handler pointer-button-press-handler click-only
Summary: Reads characters from the interactive stream stream until it encounters a delimiter, activation, or pointer gesture. Returns the accumulated string that was delimited by the delimiter or activation gesture, leaving the delimiter unread.
If the first character of typed input is a quotation mark ("), read-token will ignore delimiter gestures until another quotation mark is seen. When the closing quotation mark is seen, read-token will proceed as discussed previously. If the boolean click-only is t, then no keyboard input is allowed. In that case, read-token will simply ignore any typed characters. input-wait-handler and pointer-button-press-handler are as for stream-read-gesture. Refer to 15.2.1, The Extended Input Stream Protocol, for details.

write-token
Arguments: token stream acceptably
Summary: write-token is the opposite of read-token; given the string token, it writes it to the interactive stream stream. If acceptably is t and there are any characters in the token that are delimiter gestures, write-token will surround the token with quotation marks ("). Typically, present methods will use write-token instead of write-string.
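A minimal sketch of how these functions combine in practice. The presentation type color-name and the helper complete-color-name are hypothetical; read-token, replace-input, stream-scan-pointer, and define-presentation-method are the CLIM operators discussed above:

```lisp
;; Hypothetical accept method: read a token, then use REPLACE-INPUT to
;; splice the canonical spelling over whatever the user actually typed,
;; so the input buffer displays the full name.
(define-presentation-method accept
    ((type color-name) stream (view textual-view) &key)
  (let* ((start (stream-scan-pointer stream)) ; input position before reading
         (token (read-token stream))
         (full  (complete-color-name token))) ; assumed helper: "bl" -> "blue"
    (unless (string= token full)
      (replace-input stream full :buffer-start start))
    full))
```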
Ancient Lizard Missing Front Limbs

Remains from a 95-million-year-old marine creature with nubs for legs are clarifying how some lizards shed their limbs as they crept through evolutionary time and morphed into slinky snakes. Described in the current issue of the Journal of Vertebrate Paleontology, the snake-like lizard had a small head and willowy body. Extending 10 to 12 inches from snout to tail, the aquatic creature also sported a lengthy neck and relatively large rear limbs. Missing were all the bones of its forearms, including the hands and digits found in modern lizards. The oddball creature, Adriosaurus microbrachis, is a member of a lineage of lizards thought to be snakes' closest relatives. “It adds to the picture we have of what was happening 100 million years ago,” said lead researcher Michael Caldwell, a paleontologist at the University of Alberta, in Canada. “We now know that losing limbs isn't a new thing and that lizards were doing it much earlier than we originally thought.” The new fossil reveals the earliest record of this limb-shedding in a lizard and gives scientists a rare glimpse back to the time when terrestrial lizards evolved to be limbless and returned to their watery origins. In fact, the ancestors of all animals lived in aquatic and marine environments.

Steps to limb loss

Body parts once used in an animal’s evolutionary past but tossed aside or morphed via natural selection to provide another function are called vestigial limbs. “It has been clear for centuries that snakes are tetrapods (four-legged vertebrates) that lost their limbs,” Caldwell told LiveScience. “The process and pattern of this limb-loss has remained a mystery for a long time.” Fossils of lizards in transitional states—as the four-legged critters begin to evolve into snakes—have been rare. “What we have not had to this point is a fossil record of vestigial limbs in lizards,” Caldwell said.
“This is the first.” Scientists initially collected the fossil during the 19th century from a limestone quarry in Slovenia. For nearly 100 years, the little lizard remained in a collection bin at the Natural History Museum in Trieste, Italy, before Caldwell and a colleague found it in 1996 during a visit to Europe. The scientists were surprised to find the lizard’s forelimbs were too small to be useful for walking, while its hind limbs appeared to be functional. "For some oddball reason, the forelimbs were lost before the rear limbs, when you would think it would be the opposite," Caldwell said. "The front limbs would be useful for holding onto dinner or digging a hole, but it must be developmentally easier to get rid of the forelimbs." Though the lizard find does not make for a “missing link,” Caldwell suggests it suffices as a critical data point for helping scientists understand the aquatic process of limb loss.
Fermilab scientists find evidence for significant matter-antimatter asymmetry

Batavia, Ill.—Scientists of the DZero collaboration at the Department of Energy’s Fermi National Accelerator Laboratory announced Friday, May 14, that they have found evidence for significant violation of matter-antimatter symmetry in the behavior of particles containing bottom quarks beyond what is expected in the current theory, the Standard Model of particle physics. The new result, submitted for publication in Physical Review D by the DZero collaboration, an international team of 500 physicists, indicates a one percent difference between the production of pairs of muons and pairs of antimuons in the decay of B mesons produced in high-energy collisions at Fermilab’s Tevatron particle collider. "This exciting new result provides evidence of deviations from the present theory in the decays of B mesons, in agreement with earlier hints," said Dmitri Denisov, co-spokesperson of the DZero experiment, one of two collider experiments at the Tevatron collider. Last year, physicists at both Tevatron experiments, DZero and CDF, observed such hints in studying particles made of a bottom quark and a strange quark. When matter and anti-matter particles collide in high-energy collisions, they turn into energy and produce new particles and antiparticles. At the Fermilab proton-antiproton collider, scientists observe hundreds of millions of such collisions every day. Similar processes occurring at the beginning of the universe should have left us with a universe with equal amounts of matter and anti-matter. But the world around us is made of matter only, and antiparticles can only be produced at colliders, in nuclear reactions or cosmic rays. “What happened to the antimatter?” is one of the central questions of 21st-century particle physics. To obtain the new result, the DZero physicists performed the data analysis "blind," to avoid any bias based on what they observe.
Only after a long period of verification of the analysis tools did the DZero physicists look at the full data set. Experimenters reversed the polarity of their detector’s magnetic field during data collection to cancel instrumental effects. “Many of us felt goose bumps when we saw the result,” said Stefan Soldner-Rembold, co-spokesperson of DZero. “We knew we were seeing something beyond what we have seen before and beyond what current theories can explain.” The precision of the DZero measurements is still limited by the number of collisions recorded so far by the experiment. Both CDF and DZero therefore continue to collect data and refine analyses to address this and many other fundamental questions. “The Tevatron collider is operating extremely well, providing Fermilab scientists with unprecedented levels of data from high energy collisions to probe nature’s deepest secrets. This interesting result underlines the importance and scientific potential of the Tevatron program,” said Dennis Kovar, Associate Director for High Energy Physics in DOE’s Office of Science. The DZero result is based on data collected over the last eight years by the DZero experiment: over 6 inverse femtobarns in total integrated luminosity, corresponding to hundreds of trillions of collisions between protons and antiprotons in the Tevatron collider. “Tevatron collider experiments study high energy collisions in every detail, from searches for the Higgs boson, to precision measurement of particle properties, to searches for new and yet unknown laws of nature. I am delighted to see yet another exciting result from the Tevatron,” said Fermilab Director Pier Oddone. DZero is an international experiment of about 500 physicists from 86 institutions in 19 countries. It is supported by the U.S. Department of Energy, the National Science Foundation and a number of international funding agencies. Fermilab is a national laboratory funded by the Office of Science of the U.S.
Department of Energy, operated under contract by Fermi Research Alliance, LLC.
In July of 2004 we deployed DORISS/PUP in the Sea Cliff Hydrothermal Field on Gorda Ridge. High temperature fluids (~300 deg C) were exiting from a number of vents along the Sea Cliff Hydrothermal Field (above left). We collected spectra from the exiting fluid using both a stand-off optic behind a dome window (not shown) and an immersion optic in an open-bottomed cube (above right). Spectra were obtained from minerals (anhydrite and barite) and bacterial mats using the precision underwater positioner (PUP) (above).
Nanotechnology holds key to longer life: Research
December 12th, 2007

American scientists seem to have found the secret to extending the lifespan of brain cells, thus spawning hope for a longer life. A molecular biologist and a nanoscientist at the University of Central Florida have found that nanomaterials developed for industry have an unexpected and potentially revolutionary side effect: They can triple or quadruple the life of brain cells. The result is people could live longer and with fewer age-related health problems. Beverly Rzigalinski, assistant professor in the Department of Molecular Biology and Microbiology and at the Biomolecular Sciences Center, and Sudipta Seal, associate engineering professor at the Advanced Materials Processing and Analysis Center and the Department of Mechanical, Materials and Aerospace Engineering, will receive 1.4 million dollars from the National Institutes of Health, National Institute on Aging to study the reasons behind the reaction and possible future applications.
There are two important classes of algebraic groups whose intersection is trivial (the identity group): linear algebraic groups and abelian varieties. Any algebraic group contains a unique normal linear algebraic subgroup such that the quotient is an abelian variety. An algebraic group is linear iff it is affine; an algebraic group scheme is affine if the underlying scheme is affine. Another important class are the commutative algebraic k-groups whose underlying variety is projective, namely the abelian varieties; in dimension one these are precisely the elliptic curves. If k is a perfect field and G an algebraic k-group, the theorem of Chevalley says that there is a unique linear subgroup L of G such that G/L is an abelian variety. An abelian variety of dimension one is called an elliptic curve.

Some of the definitions of the following classes exist more generally for group schemes. (See also, more generally, unipotent group scheme.) An element g of an affine algebraic group G is called unipotent if its associated right translation operator on the affine coordinate ring of G is locally unipotent as an element of the ring of linear endomorphisms of that coordinate ring, where "locally unipotent" means that its restriction to any finite-dimensional stable subspace of the coordinate ring is unipotent. Among group schemes are 'the infinite-dimensional algebraic groups' of Shafarevich. The affine line without its origin comes canonically with the structure of a group under multiplication: the multiplicative group.

The standard references are:
- M. Demazure, P. Gabriel, Groupes algébriques, tome 1 (later volumes never appeared), Masson et Cie, Paris 1970
- M. Artin, J. E. Bertin, M. Demazure, P. Gabriel, A. Grothendieck, M. Raynaud, J.-P. Serre, Schémas en groupes (SGA III-1, III-2, III-3)
- A. Borel, Linear algebraic groups, Springer (2nd edition, much expanded)
- W. Waterhouse, Introduction to affine group schemes, GTM 66, Springer 1979
- S. Lang, Abelian varieties, Springer 1983
- D. Mumford, Abelian varieties, 1970, 1985
- J. C. Jantzen, Representations of algebraic groups, Academic Press 1987 (Pure and Applied Mathematics vol. 131); 2nd edition, AMS Mathematical Surveys and Monographs 107 (2003; reprinted 2007)
- T. Springer, Linear algebraic groups, Progress in Mathematics 9, Birkhäuser Boston (2nd ed. 1998, reprinted 2008)
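The Chevalley decomposition discussed above is conveniently summarized as a short exact sequence (standard notation; the symbol names are a choice, not taken from the text):

```latex
% Chevalley: for a smooth connected algebraic group $G$ over a perfect
% field $k$, there is a unique normal linear (affine) algebraic subgroup
% $L$ of $G$ whose quotient $A$ is an abelian variety:
\[
  1 \longrightarrow L \longrightarrow G \longrightarrow A \longrightarrow 1,
  \qquad L \text{ linear algebraic}, \quad A \text{ an abelian variety}.
\]
```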
Princeton-led team finds secret ingredient for the health of tropical rainforests Posted December 9, 2008; 09:00 a.m. A team of researchers led by Princeton University scientists has found for the first time that tropical rainforests, a vital part of the Earth's ecosystem, rely on the rare trace element molybdenum to capture the nitrogen fertilizer needed to support their wildly productive growth. Most of the nitrogen that supports the rapid, lush growth of rainforests comes from tiny bacteria that can turn nitrogen in the air into fertilizer in the soil. Until now, scientists had thought that phosphorus was the key element supporting the prodigious expansion of rainforests, according to Lars Hedin, a professor of ecology and evolutionary biology at Princeton University who led the research. But an experiment testing the effects of various elements on test plots in lowland rainforests on the Gigante Peninsula in the Barro Colorado Nature Monument in Panama showed that areas treated with molybdenum withdrew more nitrogen from the atmosphere than other elements. "We were surprised," said Hedin, who is also a professor in the Princeton Environmental Institute. "It's not what we were expecting." The report, detailed in the Dec. 7 online edition of Nature Geoscience, will be the journal's cover story in its print edition. Molybdenum, the team found, is essential for controlling the biological conversion of nitrogen in the atmosphere into natural soil nitrogen fertilizer, which in turn spurs plant growth. "Just like trace amounts of vitamins are essential for human health, this exceedingly rare trace metal is indispensable for the vital function of tropical rainforests in the larger Earth system," Hedin said. Molybdenum is 10,000 times less abundant than phosphorus and other major nutrients in these ecosystems. The discovery has implications for global climate change policy, the scientists said. 
Previously, researchers knew little about rainforests' capacity to absorb the greenhouse gas carbon dioxide. If molybdenum is central to the biochemical processes involved in the uptake of carbon dioxide, then there may be limits to how much carbon tropical rainforests can absorb. The biological enzyme nitrogenase, which converts atmospheric nitrogen into soil fertilizer, feeds on molybdenum, the researchers found. "Nitrogenase without molybdenum is like a car engine without spark plugs," said Alexander Barron, the lead author on the paper, who was a graduate student in Hedin's laboratory and earned his Ph.D. in ecology and evolutionary biology from Princeton in 2007 and who now is working on climate legislation in Congress. Other authors on the paper from Princeton include: Anne Kraepiel, an associate research scholar in the Department of Chemistry; Nina Wurzburger, a research associate in the Department of Ecology and Evolutionary Biology; and Jean Philippe Bellenger, an associate research scholar in the Princeton Environmental Institute. S. Joseph Wright, who earned his bachelor's degree in biology from Princeton in 1974 and now is a staff scientist at the Smithsonian Tropical Research Institute in Panama, is also a contributing author. Molybdenum, a lustrous, silvery metal, is found in soil, rock and sea water and in a range of enzymes vital to human health. Traces of the element have been found in Japanese swords dating back to the 14th century. In modern times, its high strength, good electrical conductivity and anticorrosive properties have made molybdenum desirable as an element of rocket engines, radiation shields, light bulb filaments and circuit boards. The research was conducted with support from the National Science Foundation, the Andrew W. Mellon Foundation, the Smithsonian Scholarly Studies program, the Smithsonian Tropical Research Institute student fellowship program and the Environmental Protection Agency student fellowship program.
Cooperating Comes Easy To Elephants

In a series of tests in Thailand, researchers learned that elephants can cooperate to solve a problem, as reported in Monday’s edition of Proceedings of the National Academy of Sciences. “Elephants are socially complex,” lead researcher Joshua M. Plotnik of the University of Cambridge explained to AP. “They help others in distress.” “They seem in some ways emotionally attached to each other, so you would expect there would be some level of cooperation.” However, he added, “I was surprised how quickly they learned.” Six pairs of elephants were tested 40 times over two days and every pair figured it out, succeeding on at least eight of the last 10 trials. The tests involved food rewards placed on a platform on the ground connected to a rope with the elephants behind a fence. The elephants, to get the food closer to them, had to pull the two ends of the rope at the same time to drag the platform under the fence. Pulling one end resulted in nothing but rope. In another experiment, the researchers left only one end of the rope within reach of the elephants, with the other end coiled on the table. The elephants didn’t bother to pull the rope, seeming to recognize that it wouldn’t work if their partner couldn’t pull the other end. It is hard to draw a line between learning and understanding, the researchers concluded, but the elephants did engage in cooperative behavior and paid attention to their partner. Adam Stone, elephant program manager at Zoo Atlanta, told AP it was significant that the elephants learned quickly. “We’re learning about the amazing mind of the elephant,” he said. It was long thought that learning and cooperation were limited to primates, and “it’s interesting to see that these other species are on the ball,” Stone explained.
Associate director of animal care science at the Smithsonian’s National Zoo, Don Moore, explains that observations of elephants have suggested that they cooperate, but it hadn’t been experimentally tested before. “Elephants are big, they’re social, they live long lives and they’re really, really smart,” he said. The youngest elephant in the study quickly learned that she did not have to do any pulling to get a treat. “She could just put her foot on the rope, so her partner had to do all the work,” said Dr. Plotnik. Many scientists, photographers and film-makers have documented remarkable behavior by wild elephants, including “targeted helping” of other elephants that become stuck in mud. There have even been reports of elephants appearing to mourn their dead. “As humans, we like to show that we’re unique,” said Plotnik, “but we’re repeatedly shot down. One thing that remains is our language. But amazingly complex behaviors – culture, tool use, social interaction – we see all of this in the animal kingdom.”
- The report describes a strategy for monitoring, modeling, and research activities to support management decisions to improve water-quality conditions in the Mississippi River Basin, reduce hypoxia in the northern Gulf of Mexico, and improve conditions for
- Vanadium and boron were detected at high and moderate concentrations in this area. High concentrations for these constituents were detected almost exclusively in samples collected in the Temecula Valley study area.
- Overview of Klamath ecological research and links to USGS Klamath studies on ground water, nutrients, sediment oxygen demand, and fish response to water quality, sucker ecology, publications, bibliographies, and data.
- A brief definition and explanation of hypoxia with special reference to the Gulf of Mexico hypoxic zone along the Louisiana-Texas coast as well as extensive links to USGS and other related information resources.
- Information about the causes and impact of hypoxia with links to USGS and other Federal agency information and activities related to nutrients in the Mississippi River Basin and hypoxia in the Gulf of Mexico.
- Water from this reservoir will be used more extensively by the city, so we are developing methods of assessing the water quality in real time by measuring characteristics of stream flow that correlate with important water quality data.
- Site on the Chlorofluorocarbon Laboratory and its analytical services for CFCs, sulfur hexafluoride, dissolved gases including nitrogen, argon, methane, carbon dioxide, oxygen, and helium, and tritium/helium-3 dating.
The genus Isoetes is easy to recognize, but the distinctions between species are more challenging. Fortunately we have only two species in Wisconsin, of the 24 species reported for North America. The most reliable means of identifying the species of Isoetes requires observation of the megaspores with a microscope. Megaspores of I. echinospora are covered with numerous spines, and are easily distinguished from the curving ridges of I. lacustris megaspores. I. lacustris grows on lake beds, usually completely submersed in the water and often overlooked. The water must be reasonably clear to allow Isoetes to grow on the lake bed, and most known locations are in oligotrophic lakes (i.e., lakes of low productivity) with slightly acid water.
- Carbohydrate composition
- Stereochemistry of sugar residues
- Polysaccharide linkage analysis

GC is the method of choice for glycosyl-residue identification (i.e., identifying the sugar components that make up polysaccharides and oligosaccharides). Two basic derivatization techniques allow for the necessary volatilization and ultimate separation of each glycosyl residue that is released from the polymers by the initial acid hydrolysis. The standard alditol acetate derivatization (or acetylation) technique is highly reproducible and enables the identification of neutral sugar components. A TMS-methyl glucoside technique expands the analysis to the acidic components, such as the galacturonic acid residues that constitute common pectin. A variation of this second technique is used to determine the D or L configuration of each sugar component. In addition to glycosyl-residue identification, the methylation of polysaccharides prior to acid hydrolysis and acetylation facilitates the identification of the glycosyl-residue linkage points. This is essential to understanding the nature and identity of the polysaccharides.
Terra/MODIS Color Image of Copahue Eruption Plume Across South America

For the first time since 2000, Copahue is erupting, sending an ash plume across southern South America. So far, the eruption is following the same patterns as the activity that ran from July to October 2000. That activity started with phreatic (water-driven) explosions, so it will be interesting to see if this eruption has new juvenile magma involved. Earlier this year, a study of the summit crater lake suggested new magma was intruding under Copahue, and the SERNAGEOMIN report mentioned that seismicity was rising before today’s eruption. I grabbed the brand new Terra/MODIS imagery for South America and the plume from Copahue was glorious – stretching over 350 km across Argentina to the east of the volcano. For a sense of scale on the image, the distance between Copahue and the Embalse los Barreales is ~225 km. The plume itself has been reported to be over 9.5 km / 30,000 feet tall. UPDATE 12/22 5 PM EST: Eruptions reader Kirby pointed me to the SERNAGEOMIN webcam pointed at Copahue — check out the eruption live! UPDATE 12/22 7 PM EST: ONEMI has not called for any evacuations on the Chilean side of Copahue — this article also has a nice gallery of pictures from the eruption as well. Check out the original post with more details. Erik Klemetti is an assistant professor of Geosciences at Denison University. His passion in geology is volcanoes, and he has studied them all over the world. You can follow Erik on Twitter, where you'll get volcano news and the occasional baseball comment. Follow @eruptionsblog on Twitter.
In the heart of the Large Magellanic Cloud (one of the Milky Way’s many satellite galaxies), there lies a vast complex of gas called 30 Doradus. And inside that sprawling volume of space is the Tarantula Nebula, a star-forming region so huge it dwarfs even our own Orion Nebula. Thousands of stars are churning away in there, going through the process of being born. And as they do, the hottest and brightest of them carve huge cavities in the nebula, heating the tenuous gas therein to millions of degrees. The result? This: [Click to embiggen.] I love this image! It’s a combination of observations from the Chandra X-Ray Observatory (in blue, showing the incredibly hot gas) and from Spitzer Space Telescope (in red, showing cooler gas). Those bubbles of hot, X-ray emitting gas are constrained by the cooler gas around them, but it’s likely the hot gas is expanding, driving the overall expansion of the nebula itself. However, it’s also possible the sheer flood of high-energy radiation from the nascent stars is behind the gas’s expansion… or it’s a combination of both. Astronomers are still arguing over this, and observations like this one will help figure out who’s right. … but you know me. I love pareidolia, and there’s no way you can look at this image and not see a really angry screaming face, shrieking at that blue blob hovering in its way. That’s so cool! And c’mon, NASA: you release this image two weeks after Halloween? Oh well, I’ll add it to my scary astronomy gallery anyway, which is after the jump below. Image credit: X-ray: NASA/CXC/PSU/L.Townsley et al.; Infrared: NASA/JPL/PSU/L.Townsley et al. I believe without reservation that this may be the greatest instance of pareidolia of all time: an ultrasound of a man experiencing epididymo-orchitis, or pain and swelling of a testicle: Having suffered through a similar (if less traumatic) version of this, may I add that the expression on the man’s, um, "face" is exquisitely accurate. 
Tip o’ the codpiece to my Hive Overmind co-blogger Ed Yong on Google+. Original image: Elsevier, Inc. I know I’ve posted a lot about the Sun lately, and I know I just posted a funny picture by astrophotographer Alan Friedman. And maybe I should’ve waited for Caturday to post this. But c’mon. How could I not post this as soon as I saw it? [Click to concatenate.] It’s a SOL cat! I love how it looks like it’s rubbing its head on the Sun. If you want the technical description of what you’re seeing, it’s a solar prominence, a long stream of ionized gas belched out by the Sun, flowing along its magnetic field lines. Think of it as an 80,000 kilometer-long cosmic hairball the Sun hacked up. I will from now on. And if you liked that picture by Alan, this one will make your hair stand on end! [UPDATE: Alan calculated the size of this prominence as 80,000 km, and that looks about right to me. So just for comparison, I added the Earth roughly to scale in the picture here. That's a pretty big cat. Its head is bigger than our whole planet! Imagine the litter box that would take...] I love the images of the Sun taken by astrophotographer Alan Friedman. I love pareidolia. And I love cryptozoology. So of course I love love love this: [Click to sasquatchenate.] Pareidolia is the trait of seeing recognizable objects in random patterns (usually, but not always, faces). Cryptozoology is the study of fabled creatures like Nessie, or the chupacabra, or… I don’t know, for a totally random example, let’s say Bigfoot. Still not sure what I mean? Maybe this’ll help: OK, I’ll be a pedantic dork for just a sec, and say that this is actually just a prominence, an eruption of ionized gas off the surface of the Sun, guided by the twisting and churning solar magnetic field. Prominences can take all sorts of shapes — even angels and dragons — as they launch upward and fall back down to the Sun’s surface.
Alan Apeman — urp, sorry, I mean Friedman — takes simply amazing pictures of the Sun which I feature here all the time; see the Related posts section below for many more. And you should keep an eye on his pictures. Who knows what you’ll find in them? Image credit: Alan Friedman Hey, I haven’t posted a fun pareidolia (patterns that look like faces or figures) news article in a while, and this is a good one: a man in Finland found this interesting image on his wall: [Here's the Google translation into English.] Of course, the article claims it looks like the Virgin Mary. Now look: I know that the standard depiction of Mary is usually with her head bent, covered in a cowl, with a robe of some sort. That kind of figure lends itself to pareidolia — it’s an easy shape to make, from oil stains to an MRI. But this is a pretty far cry from even that! Unless Mary’s head is a perfect sphere. It looks very much like this is a simple reflection off a window or other shiny object. The way the light plays on the wall makes that clear. Of course, I cannot rule out a supernatural influence… so if it’s not Mary, who is it? When you build and launch a high-resolution solar observatory that stares at the Sun 24 hours a day, you’re bound to catch some pretty cool stuff. As proof, check out this video of a stunning prominence erupting from the Sun’s surface on July 12, 2011, as seen by NASA’s Solar Dynamics Observatory: [Make sure you set the resolution to at least 720p.] That’s really graceful, especially considering that tower reached the staggering height of about 150,000 km (90,000 miles) above the Sun in just a few minutes! The gas on the Sun is ionized, which means it’s had one or more electrons ripped away from its atoms. Technically called a plasma, this makes it sensitive to the Sun’s strong magnetic forces. 
That becomes really obvious after it starts to collapse; it doesn’t follow a ballistic trajectory like you’d expect (the path a ball thrown up in the air would follow), but instead flows along the Sun’s magnetic field lines. This video is in the ultraviolet, where such a plasma glows brightly. For a moment there, just at its peak, it coincidentally looks like a classic angel with wings spread. Of course, once the angel dissolves it forms more of an arc… so I guess this makes it an archangel. I’m glad no one heard a trumpet playing when this happened. That could’ve been awkward. I glanced out my office window the other day and saw what is clearly a sign that the weather is ticked off about something: Go cloud! Punch that sky! I was thinking at first the cloud was the result of a big convective updraft; warm air screaming upwards and forming a puffy column. A couple of weeks ago I saw this happen in a ginormous cumulonimbus storm cloud. There were several rapidly rising columns of air moving up so quickly they were forming pilei, which are caps of water vapor that look like little shock waves at the top of the cloud. However, when I was looking at this fist cloud just a few minutes later as it blew east toward my house, I saw this was just a perspective effect, and it was just a normal puffy cloud. Too bad. I was getting into it. Give it to the man! Fight the stratus quo! This is a pretty nifty illusion: as you look at a spot between two rapidly changing images of faces, your brain distorts the images, making them look really weird: I could do without the title they chose for the video, but the paper on which it’s based is called "Flashed face distortion effect: Grotesque faces from relative spaces", which may not explain much, either. What it means, basically, is that as the faces flash, certain features get distorted by your brain, and the amount of distortion depends on how much that feature deviates from the rest in the set. 
In other words, someone with slightly larger eyes gets perceived by you as having huge eyes. Go ahead and pause the video and click through it; the faces are pretty much normal faces, so the distortion really is an illusion.

I think that’s pretty neat; I’m fascinated by how our brains perceive faces in particular, since people see them everywhere. I’d love to see some variations on this, like showing men’s faces, or a man on one side and a woman on the other. Would it work for animal faces too? Hmmm.

I’ll note that some people have a hard time seeing this illusion; my friend Richard Wiseman — who knows a thing or two on how the brain can be fooled! — doesn’t see it well. Do you?

Tip o’ the Necker cube to Gizmodo and my old friend Bill Dalton.

It’s very common to see familiar things in random patterns. We see faces in clouds, Jesus in a tortilla, and smiley faces everywhere. It’s so ubiquitous there’s a term for it: pareidolia.

So when I saw on reddit that people were talking about seeing an epic dragon fight in the Orion Nebula, I smiled. But then I saw the image, and that smile turned to pure amazement. Why? Because here’s the image: [Click to ensmaugenate.]

Do you see the dragon on the left, wings outstretched, breathing fire, blasting it at the man on the right? He has a face, and I see his shoulder, back, and outstretched arm as well, as if he’s battling the dragon.

Let me be clear: this picture is real! Well, the dragon and face aren’t real — they’re more pareidolia — but the images in the nebula are actually there. You might see them more easily in this contrast-enhanced version, too. Let me explain…

Pareidolia is the psychological term for seeing patterns in random or near-random distributions of things. The Face on Mars, the Man in the Moon, Jesus in a taco shell, and so on… most of the time it manifests as faces, since our brains are geared to recognize them as easily as possible. But sometimes you get other patterns too.
I don’t know about you, but I agree with astronomer Yurii Pidopryhora: this is a dolphin:

It’s actually a cold molecular gas cloud about 25,000 light years away in our galaxy, seen in the radio part of the spectrum. I don’t have much to say, except 1) If that dolphin’s swimming, it must be in liquid helium and not water — note the temperature scale on the right; and b) Too bad this is in the constellation of Scutum the shield; it should really be in Delphinus.

Image credit: Yurii Pidopryhora (JIVE)
thr_suspend(3T) immediately suspends the execution of the thread specified by target_thread. On successful return from thr_suspend(), the suspended thread is no longer executing. Once a thread is suspended, subsequent calls to thr_suspend() have no effect. Signals cannot awaken the suspended thread; they remain pending until the thread resumes execution.

#include <thread.h>

int thr_suspend(thread_t tid);

In the following synopsis, pthread_t tid as defined in pthreads is the same as thread_t tid in Solaris threads. tid values can be used interchangeably either by assignment or through the use of casts.

thread_t tid; /* tid from thr_create() */

/* pthreads equivalent of Solaris tid from thread created */
/* with pthread_create() */
pthread_t ptid;

int ret;

ret = thr_suspend(tid);

/* using pthreads ID variable with a cast */
ret = thr_suspend((thread_t) ptid);

thr_suspend() returns zero after completing successfully. Any other returned value indicates that an error occurred. When the following condition occurs, thr_suspend() fails and returns the corresponding value.
The tkinter.scrolledtext module provides a class of the same name which implements a basic text widget which has a vertical scroll bar configured to do the “right thing.” Using the ScrolledText class is a lot easier than setting up a text widget and scroll bar directly. The constructor is the same as that of the tkinter.Text class.

The text widget and scrollbar are packed together in a Frame, and the methods of the Grid and Pack geometry managers are acquired from the Frame object. This allows the ScrolledText widget to be used directly to achieve most normal geometry management behavior.

Should more specific control be necessary, the following attributes are available:

frame: The frame which surrounds the text and scroll bar widgets.

vbar: The scroll bar widget.
An SMTP instance has the following methods:

If the hostname ends with a colon (":") followed by a number, that suffix will be stripped off and the number interpreted as the port number to use. Note: This method is automatically invoked by the constructor if a host is specified during instantiation.

This returns a 2-tuple composed of a numeric response code and the actual response line (multiline responses are joined into one long line.) In normal operation it should not be necessary to call this method explicitly. It is used to implement other methods and may be useful for testing private extensions. If the connection to the server is lost while waiting for the reply, SMTPServerDisconnected will be raised.

In normal operation it should not be necessary to call this method explicitly. It will be implicitly called by sendmail() when necessary.

Unless you wish to use has_option() before sending mail, it should not be necessary to call this method explicitly. It will be implicitly called by sendmail() when necessary. has_option() returns 1 if name is in the set of SMTP service extensions returned by the server, 0 otherwise. Case is ignored.

Note: many sites disable SMTP "VRFY" in order to foil spammers.

Note: The from_addr and to_addrs parameters are used to construct the message envelope used by the transport agents. The SMTP does not modify the message headers in any way.

If there has been no previous "EHLO" or "HELO" command this session, this method tries ESMTP "EHLO" first. If the server does ESMTP, message size and each of the specified options will be passed to it (if the option is in the feature set the server advertises). If "EHLO" fails, "HELO" will be tried and ESMTP options suppressed. This method will return normally if the mail is accepted for at least one recipient. Otherwise it will throw an exception. That is, if this method does not throw an exception, then someone should get your mail.
If this method does not throw an exception, it returns a dictionary, with one entry for each recipient that was refused. Each entry contains a tuple of the SMTP error code and the accompanying error message sent by the server. This method may raise the following exceptions: Unless otherwise noted, the connection will be open even after an exception is raised. Low-level methods corresponding to the standard SMTP/ESMTP commands "HELP", "RSET", "NOOP", "MAIL", "RCPT", and "DATA" are also supported. Normally these do not need to be called directly, so they are not documented here. For details, consult the module code. See About this document... for information on suggesting changes.
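Because sendmail() signals refused recipients through its return value and delivery failures through exceptions, callers usually wrap it in a try block. A minimal sketch of that pattern (the send_note helper, host, port, and addresses below are illustrative placeholders, not part of the module):

```python
import smtplib

def send_note(body, from_addr, to_addrs, host="localhost", port=25):
    """Send a short plain-text note. Returns the dict of refused
    recipients ({} if everyone was accepted), or None when the
    server could not be reached at all."""
    # sendmail() takes the envelope addresses separately; these
    # headers only affect how the message displays, not routing.
    msg = ("From: %s\r\nTo: %s\r\nSubject: note\r\n\r\n%s"
           % (from_addr, ", ".join(to_addrs), body))
    try:
        server = smtplib.SMTP(host, port, timeout=10)
        try:
            return server.sendmail(from_addr, to_addrs, msg)
        finally:
            server.quit()
    except (smtplib.SMTPException, OSError):
        return None
```

With a reachable server, a return of {} means every recipient was accepted, while a non-empty dict maps each refused address to its (code, message) tuple, mirroring the behavior described above.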
The three major operating systems used today are Microsoft Windows, Apple's Macintosh OS, and the various Unix derivatives. A minor irritation of cross-platform work is that these three platforms all use different characters to mark the ends of lines in text files. Unix uses the linefeed (ASCII character 10), MacOS uses the carriage return (ASCII character 13), and Windows uses a two-character sequence of a carriage return plus a newline.

Python's file objects can now support end of line conventions other than the one followed by the platform on which Python is running. Opening a file with the mode 'rU' will open a file for reading in universal newline mode. All three line ending conventions will be translated to a "\n" in the strings returned by the various file methods such as read() and readline().

Universal newline support is also used when importing modules and when executing a file with the execfile() function. This means that Python modules can be shared between all three operating systems without needing to convert the line-endings.

This feature can be disabled when compiling Python by specifying the --without-universal-newlines switch when running Python's configure script.
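The 'rU' mode above is from the Python 2.3 era; in modern Python 3 the same translation is simply the default for text-mode files (newline=None), and the old 'U' flag has since been removed. A quick sketch of the behavior:

```python
import os
import tempfile

# Write one file that mixes all three historical line endings.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"unix\nmac\rwindows\r\nend")

# In Python 3, text mode with newline=None (the default) gives the
# universal-newline behavior: every convention is read back as "\n".
with open(path, "r", newline=None) as f:
    text = f.read()
os.remove(path)

print(text.split("\n"))  # ['unix', 'mac', 'windows', 'end']
```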
This module implements an interface to the crypt(3) routine, which is a one-way hash function based upon a modified DES algorithm; see the Unix man page for further details. Possible uses include allowing Python scripts to accept typed passwords from the user, or attempting to crack Unix passwords with a dictionary.

Notice that the behavior of this module depends on the actual implementation of the crypt(3) routine in the running system. Therefore, any extensions available on the current implementation will also be available on this module.

word will usually be a user’s password as typed at a prompt or in a graphical interface. salt is usually a random two-character string which will be used to perturb the DES algorithm in one of 4096 ways. The characters in salt must be in the set [./a-zA-Z0-9]. Returns the hashed password as a string, which will be composed of characters from the same alphabet as the salt (the first two characters represent the salt itself).

Since a few crypt(3) extensions allow different values, with different sizes in the salt, it is recommended to use the full crypted password as salt when checking for a password.

A simple example illustrating typical use:

import crypt, getpass, pwd

def login():
    username = input('Python login: ')
    # pw_passwd is the crypted-password field of the passwd entry
    cryptedpasswd = pwd.getpwnam(username).pw_passwd
    if cryptedpasswd:
        if cryptedpasswd == 'x' or cryptedpasswd == '*':
            raise ValueError('no support for shadow passwords')
        cleartext = getpass.getpass()
        return crypt.crypt(cleartext, cryptedpasswd) == cryptedpasswd
    else:
        return True  # empty password field: accept
A scientist could hardly be expected to be happy about finding a mistake in his work after he published it. But if you have to watch your research go down in flames, it may help to regard it as an offering on the sacrificial fire of scientific progress. In the case of “ocean cooling,” Willis has plenty of reasons to consider the sacrifice worth it.

The first payoff for finding and fixing the XBT errors was that it allowed scientists to reconcile a stubborn and puzzling mismatch between climate model simulations of ocean warming for the past half century and observations. The second was that it helped explain why sea level rise between 1961-2003 was larger than scientists had previously been able to account for.

Much of what scientists know about how ocean heat content has changed over the past half century comes from the work of Sydney Levitus, the director of NOAA’s Ocean Climate Laboratory in Silver Spring, Maryland, and his colleagues. In the early 1990s, the United Nations Educational, Scientific and Cultural Organization (UNESCO) asked Levitus to undertake a scientific rescue mission. The group wanted Levitus to locate historical ocean data sitting around in dusty library stacks, moldy basements, and forgotten filing cabinets around the world before they were lost to natural disaster or neglect. The project became known as the Global Oceanographic Data Archeology and Rescue Project (GODAR).

“Since 1993 or so, we have added several million historical temperature profiles. This collection allowed us for the first time to estimate the change in ocean heat content from 1955 on. When we first published these results in 2000, they received a great deal of media, congressional, and scientific attention, because the warming that we saw was consistent with what would have been expected due to the increased greenhouse gases in the atmosphere,” recalls Levitus.

What wasn’t consistent was several large bumps in the graph of heat content over time.
“We saw an overall linear [warming] trend that was consistent,” says Levitus, “but we also saw some very large interdecadal variability. In particular, toward the late 1970s, heat content increased substantially and then around 1980, it decreased substantially.”

“Those bumps gave everyone heartburn,” says Willis. There was no established physical explanation for them, and climate models didn’t reproduce them. The science community wasn’t sure whether the discrepancy cast doubt on the models or the observations, but fingers got pointed in both directions.

Smoothing the Bumps

In mid-2008, however, a team of scientists led by Catia Domingues and John Church from Australia’s CSIRO, and Peter Gleckler, from Lawrence Livermore National Laboratory in California, revised long-term estimates of ocean warming based on the corrected XBT data. Since the revision, says Willis, the bumps in the graph have largely disappeared, which means the observations and the models are in much better agreement. “That makes everyone happier,” Willis says.

Levitus agrees that the interdecadal variability is substantially decreased, but it isn’t totally gone. He argues that before anyone assumes that the observations must be wrong, they should remember that the amount of variability they are talking about is probably less than the amount of heat gained and lost during the intense El Niño in 1997-98. “Climate models don’t reproduce El Niño events very well either,” he says, but no one doubts they are real.

Although he has “caused a stir” among his colleagues in the past by criticizing models’ inability to simulate how ocean heat storage varies on short-term time scales, he stresses, “I have said from the beginning that the fact that the long-term trends in models and observations do agree so well is what is most important.”

“My point is just that we need to remain open-minded because it may be that it is possible for the ocean to gain heat and lose it more rapidly than we think.
There may be other phenomena [similar to El Niño] operating on different time scales that can explain interdecadal increases and decreases,” says Levitus. Even if these ups and downs don’t change the long-term destination of global warming, they could reveal more detail about what kind of ride we can expect.
This map, based on data from the Moderate Resolution Imaging Spectroradiometer (MODIS), shows average aerosol amounts around the world for March 2012. An optical thickness of less than 0.1 (palest yellow) indicates crystal clear sky with maximum visibility, whereas a value of 1 (reddish brown) indicates very hazy conditions.

This week’s indicator: 76. No, that is not a reference to the return time of Halley’s Comet (76 years) or the atomic number of the world’s densest natural element (the metal osmium). In this case, 76 is a percentage. And it’s a particular percentage that represents how much of the variability in North Atlantic sea temperatures new climate simulations attribute to small airborne particles called aerosols. British scientists at the Met Office Hadley Centre ran the simulations, and Nature published the number in a recent issue.

North Atlantic sea temperatures have gone through warm and cool phases over the last 150 years (a phenomenon called the Atlantic Multidecadal Oscillation, or AMO). The sea was cool, for example, during the 1900s–1920s and 1960s–1990s, while a warm phase occurred in the 1930s–1950s (see graph below). Since the mid-1990s, the North Atlantic has been in a warm phase. The difference between average ocean surface temperatures over the North Atlantic and those over the global oceans has oscillated between cool and warm phases. Figure from the April 4, 2012 edition of Nature.

That may sound like arcane trivia, but the cycling of North Atlantic sea temperatures matters. Earlier research has linked its phase (warm or cool) to high-stakes weather events, such as the frequency of Atlantic hurricanes and drought in the Amazon Basin and the Sahel. Cool phases, for example, have coincided with decreased rainfall in the Amazon, more Atlantic hurricanes, and increased rain in the Sahel.

Conventional wisdom has held that the cycling of North Atlantic sea temperature is a natural phenomenon driven by ocean currents.
The new climate simulations suggest that aerosols are the real culprit. The British team considered a number of aerosol types in their analysis but the most important was one called sulfates, which come from volcanic eruptions and from humans burning fossil fuels. The researchers used a state-of-the-art climate model to see if they could reproduce the changes in North Atlantic sea temperatures seen over the last 150 years. This wasn’t the first time scientists have tried this, but it was the first time any group did it so accurately. And the key to their success, the British team concluded, was that they incorporated better estimates of how aerosols affect clouds—something that most previous models omitted or only partially included. How do aerosols (for the sake of simplicity, let’s just call it pollution for the moment) affect clouds and how does that affect sea surface temperature? In short, pollution tends to brighten clouds (see illustration above) causing the clouds to reflect more light back to space and cool the sea. Clouds in clean air are composed of a relatively small number of large droplets (left). As a consequence, the clouds are somewhat dark and translucent. In air with high concentrations of aerosols, water can easily condense on the particles, creating a large number of small droplets (right). These clouds are dense, very reflective, and bright white. This influence of aerosols on clouds is called the “indirect effect.” NASA image by Rob Simmon. After taking this indirect aerosol effect into account, the British team’s simulations suggested that the majority (the number they came up with was, of course, 76 percent) of the observed variability in sea temperatures seen since 1860 was due to cooling caused by sulfates from volcanic eruptions and from the buildup of industrial pollution. 
The results of the simulation also imply that sea temperatures have risen in recent decades because clean air regulations passed in the United States and Europe in the 1960s and 1970s have reduced levels of air pollution. If the new simulation is right, it would be a big deal. It would not only mean, as a Nature News & Views article pointed out, that humans are the key factor driving changes in sea surface temperatures (and that cleaning up the air could be fueling hurricanes); it would also mean that the Atlantic Multidecadal Oscillation (AMO) doesn’t really exist. Before you start mourning its death, however, realize there are some indications that this latest simulation may not be right. Understanding how aerosols affect clouds remains a young science, and the British team may have made some incorrect assumptions about how aerosols affect particular types of clouds. Plus, the model didn’t reproduce changes in the frequency of outbreaks of African dust storms, something that can affect the temperature of the tropical Atlantic. On top of all that, a number of other studies have come to very different conclusions. Bottom line: stay tuned. Things can get messy at the cutting-edge of science, but we’ll be keeping our eye out to see if and how long the 76 percent number holds. Science is full of numbers and here at the Earth Observatory we know they can sometimes be contradictory and confusing. In our new Earth Indicator column, we’ll pick a number from the many floating around in the science or popular press, unpack where it came from, and explain what it means. Also, a tip of the hat to NPR’s Planet Money team. They have a Planet Money indicator on their podcast that we like so much we decided to steal the name.
KickSat to launch sprites into space It’ll look like hundreds of postage stamps fluttering toward Earth — each an independent satellite transmitting a signal unique to the person who helped send it to space. A Cornell-based project called KickSat is set to launch more than 200 of these tiny satellites, nicknamed “sprites,” into low-Earth orbit as part of a routine NASA-administered mission in 2013 to the International Space Station. And unlike traditional, big government space exploration, KickSat is truly a launch by the people. Several years ago…Zac Manchester…now a graduate student in aerospace engineering, dreamt up the idea of crowd-sourced, personal space exploration. He and Ryan Zhou…and Justin Atchison…designed and built a prototype spacecraft that fits in the palm of the hand and costs just a few hundred dollars to make. The sprites are a type of micro-satellite called a “ChipSat…” Manchester’s goal, he says in his blog about the mission, “is to bring down the huge cost of spaceflight, allowing anyone from a curious high school student or basement tinkerer to a professional scientist to explore what has until now been the exclusive realm of governments and large companies. By shrinking the spacecraft, we can fit more into a single launch slot and split the costs many ways. I want to make it easy enough and affordable enough for anyone to explore space.” Sprites are the size of a cracker but are outfitted with solar cells, a radio transceiver and a microcontroller (tiny computer). KickSat, which is the name of the sprites’ launching unit, is a CubeSat, a standardized cubic satellite the size of a loaf of bread, frequently used in space research. Using Kickstarter.com to find sponsors for the mission, Manchester raised nearly $75,000 as more than 300 people sponsored a sprite that will transmit an identifying signal, such as the initials of the donor. In 2013, about 250 sprites will be sent into space. 
One person, who donated $10,000, Manchester added, will get to “push the big red button” on the day of the launch. A delightful dedication to citizen science. A special tradition centuries-old.
Astronomers have observed in unprecedented detail the processes giving rise to stars and planets in nascent solar systems. The team was able to peer deeply into protoplanetary disks—swirling clouds of gas and dust that feed the growing star in its center and eventually coalesce into planets and asteroids to form a solar system. The big challenge was to obtain the extremely fine resolution necessary to observe the processes that happen at the boundary between the star and its surrounding disk, 500 light years from Earth. It’s like standing on a rooftop in Tucson trying to observe an ant nibbling on a grain of rice in New York’s Central Park.

“The angular resolution you can achieve with the Hubble Space Telescope is about 100 times too coarse to be able to see what is going on just outside of a nascent star not much bigger than our sun,” explains University of Arizona astronomer Joshua Eisner. In other words, even a protoplanetary disk close enough to be considered in the neighborhood of our solar system would appear as a featureless blob.

Full story at Futurity. Photo credit: NASA/JPL-Caltech
Pub. date: 2008 | Online Pub. Date: April 25, 2008 | DOI: 10.4135/9781412963893 | Print ISBN: 9781412958783 | Online ISBN: 9781412963893 | Publisher: SAGE Publications, Inc.

Gordon P. Rands & Pamela Rands

ENVIRONMENTAL DEFENSE (ED), an environmental advocacy group headquartered in New York, began when a group of scientists teamed up with a lawyer, went to court, won a battle to ban the pesticide DDT, and incorporated as the Environmental Defense Fund in 1967. Environmental Defense now prefers to work creatively, without confrontation, for solutions to environmental challenges, the most serious of which it views as global warming, “through partnership with powerful market leaders.” ED prides itself on having on staff “more Ph.D. scientists and economists than any similar group” and is noted for its “rigorous scientific approach.” It seeks not only to oppose policies that it deems detrimental to the environment, but also to propose workable, innovative alternatives. In 2007, Environmental Defense was a founder and organizer of the U.S. Climate Action Partnership (USCAP), a coalition of environmental organizations and corporations advocating legislative action to address global warming. Most of ...
The big question mark hanging over Washington's efforts to deal with ocean acidification is money: How much will be needed, and where will it come from?

A state panel, the first of its kind in the nation, discussed a wide range of draft recommendations Friday (July 20) at the University of Washington. Gov. Chris Gregoire appointed the panel — a collection of scientists, shellfish industry officials, and federal and state government representatives — to recommend how Washington can tackle ocean acidification along its coasts.

Because of the rising levels of acidity, tiny oyster shells in Washington's Dabob Bay and in Oregon's Netarts Bay are crumbling faster than they can grow back. A drop in the water's pH is being pinpointed as the culprit for endangering the Northwest's $270 million shellfish industry.

pH measures the acidity or alkalinity of a fluid on a 14-point scale. The lower the number, the more acidic the liquid is. Distilled water is considered "neutral"; sea water is normally 8.1 to 8.2, which is on the alkaline side. Orange juice's pH is 3; battery acid's pH is close to 1. Shellfish survive in a narrow pH range. At 100 feet deep, some Dabob Bay water has been measured at a pH of 7.5.

Gregoire's panel is scheduled to present its fix-it recommendations to her on Oct. 1. Those recommendations will be general ones with plenty of details that will have to be hashed out later. That includes beginning to get a handle on the funding needs, but probably not in detail, said Jay Manning and Bill Ruckelshaus, the panel's chairmen. Manning is Gregoire's former chief of staff. Ruckelshaus was the federal Environmental Protection Agency's first chief in 1970 and is advisory board chairman of The William D. Ruckelshaus Center at the University of Washington and Washington State University. (Disclosure: Ruckelshaus is also a member of Crosscut's board.)
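Because pH is logarithmic, the drop from normal seawater to the readings seen in Dabob Bay is larger than it looks: each full pH unit is a ten-fold change in hydrogen-ion concentration. A quick sketch of the arithmetic (the helper name is ours; pH = -log10 of the hydrogen-ion concentration is the standard definition):

```python
def relative_acidity(ph_before, ph_after):
    """How many times the hydrogen-ion concentration grows when pH
    falls from ph_before to ph_after (pH = -log10 of [H+])."""
    return 10 ** (ph_before - ph_after)

# Seawater dropping from a normal 8.1 to the 7.5 measured in Dabob Bay:
factor = relative_acidity(8.1, 7.5)
print(round(factor, 1))  # roughly 4: about four times more acidic
```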
"We'll probably have qualitative discussions about our best guesses to the costs," Manning said. Panelist Peter Goldmark, Washington's commissioner of public lands, said, "We also want an analysis of what no action would cost."

The panel discussed potential remediation measures such as recycling old shells to provide an underwater substrate for shellfish larvae to grow in; experimenting with algae to improve water at hatcheries; and experimenting with eelgrass and seaweed in the field to help remove carbon dioxide from the water. To varying degrees, these measures have worked or have been studied; more field work is needed to gather better evidence on their effectiveness.

Other potential measures include expanding the number of pH monitors in Washington's waters; studying whether certain shellfish species perform better in specific bays and inlets; improving how water is treated as it goes into hatcheries; and collecting water and biological data in a more long-term, systematic way.

Also, the panel is looking at lining up government programs, agencies, and funds that can coordinate and tackle the problem. This includes examining sewage treatment plants. To limit the discharge of nutrients into the water, where they increase acidity, setting up a system of business credits for discharges has been mentioned. Nutrient credits could be traded like carbon credits. "We know nutrients are stressors of the system, even though we can't quantify it," said panelist Ted Sturdevant, director of Washington's Department of Ecology.

Another proposition before the panel is to forbid commercial and recreational vessels from discharging sewage into Puget Sound. Some panelists wondered how far new regulations will go toward fixing the overall problem; should the carrot or the stick be stressed? "Regulation has not gotten us as far as we should," said Ron Sims, representing the Puget Sound Partnership on the panel.

Before Oct.
1, the panel wants to set up ways its recommendations will be addressed by the state and federal governments, environmental groups, industry and private citizens. That includes setting up written agreements among agencies, and identifying a state entity with the coordination responsibilities. "We need an institutional approach to make sure these recommendations are listened to," Ruckelshaus said.

Also, panelists said the state's efforts need to be coordinated with worldwide ventures on studying ocean acidity. Ruckelshaus said, "It makes sense to make our results as relevant to the rest of the world as possible."
A student (m = 63 kg) falls freely from rest and strikes the ground. During the collision with the ground, he comes to rest in a time of 0.0200 s. The average force exerted on him by the ground is +18500 N, where the upward direction is taken to be the positive direction. From what height did the student fall? Assume that the only force acting on him during the collision is that due to the ground.

Is this a problem solved using a kinematic equation?
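To the question above: yes. The impulse-momentum theorem gives the impact speed from the collision data, and one kinematic equation then converts that speed into a fall height. A sketch of the computation (assuming g = 9.8 m/s^2):

```python
# Impulse-momentum during the collision, then free-fall kinematics.
# The problem says the ground force is the only force acting during
# the 0.0200 s collision, so the impulse removes all of his momentum.
m = 63.0       # kg
t = 0.0200     # s, collision time
F = 18500.0    # N, average force from the ground (upward positive)

# F * t = m * v  =>  speed just before impact
v = F * t / m                  # about 5.87 m/s

# Free fall from rest: v**2 = 2 * g * h  =>  h = v**2 / (2 * g)
g = 9.8
h = v ** 2 / (2 * g)
print(round(h, 2))             # about 1.76 m
```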
Environment & Science | Tue May 8, 2012

Feds say they'll act quicker to release study on keeping carp out of Great Lakes

The federal government says it will speed up a decision on how to protect the Great Lakes from invasive species in the Mississippi River basin. The Obama administration announced the new timetable Tuesday.

In the past the U.S. Army Corps of Engineers said it would take until 2015 to recommend a way to keep invasives like Asian carp from migrating into the Great Lakes. Under a new plan, the study would be complete by the end of 2013. It would present a number of options and how much each costs. Then lawmakers and the public could weigh in on the best option. Congress will have the authority to make a final choice.

Michigan and other Great Lakes states have sued the federal government, calling for a permanent split between the two watersheds. Michigan's Attorney General said he’d be willing to drop the case if the Army Corps of Engineers' study got done quicker. His office says the new timetable is a step in the right direction but doesn't satisfy his concerns.

Scientists differ about how widely the carp would spread in the Great Lakes, but under worst-case scenarios they could severely damage the region's seven-billion-dollar fishing industry. Scientists have found traces of carp DNA in Lake Michigan but no actual fish.

Michigan and other states want to permanently close the locks that separate the Great Lakes from the Mississippi River. But that would cut off shipping between the lakes and Chicago.
Contrary to Hollywood's portrayal of gigantic man-eating sharks, the three largest species of shark spend their time peacefully roaming the ocean's surface munching on the ocean's smallest creatures. Basking Sharks, the second largest species of shark, cruise the seas in search of plankton, filtering up to 2,000 tons of water across their gills per hour. Reaching lengths of thirty-five feet, these sharks exist worldwide, yet very little is known about how they live or where they go. To discover more information about this vulnerable species, scientists from the Pacific Shark Research Center (PSRC) and the National Marine Fisheries Service (NMFS) have begun a new type of shark hunt. Unlike the crazed and frantic scenes from the JAWS movie, this shark hunt only requires a boat, camera and telephone! The Spot a Basking Shark Project enlists the help of local sea-farers to uncover the demographics and distribution of the California Basking Shark. Once common along the California coast, these gentle giants are now a rare sight. In the past, these social creatures were seen in schools of hundreds or thousands; however, since 1993 no more than three basking sharks have been spotted together. Fishing and eradication efforts by fishermen who believed them to be 'man-eaters' contributed heavily to their population decline. Despite the fishery closure in the late 1950s, Basking Shark numbers have remained low, mostly due to human impacts like vessel strikes, fisheries bycatch and illegal shark finning. Based on the decline of Basking Shark numbers and lack of species information, the International Union for Conservation of Nature (IUCN) has listed this species as endangered. If you see a Basking Shark, the PSRC and NMFS want to know! These sharks can be identified by their large size, pointed snouts, and large gill slits that encircle the head.
Basking sharks have dorsal fins up to three feet tall that are visible as they slowly swim along the surface with mouths wide open catching plankton. If you see a Basking Shark, call or email the PSRC with your location, date and time of the sighting and any photos or videos. Your information helps the PSRC document and understand these majestic and peaceful creatures.
Whereas people use their eyes and ears to get information, cells rely on proteins that span their outer membranes to scan for chemical signals from the outside world. Now a biotech start-up plans to launch an international consortium to determine the three-dimensional crystal structures of 100 such membrane proteins, many of which represent promising drug targets. Several "structural genomics" efforts have been launched recently to automate the atomic mapping of proteins, but this is the first to concentrate on membrane proteins. The subjects are a class of proteins called G protein-coupled receptors, which are sensitive to stimuli as varied as hormones and photon-altered pigments. Once these proteins detect a specific signal outside the cell, they let loose a cascade of biochemical messengers that alters the cell's chemistry or gene expression. Scientists would love to know more details, but the receptors are notoriously difficult to work with. Removing them from the cell membrane destroys their normal 3D shape and any hope of understanding what they look like in atomic detail. The consortium, led by start-up Bio-Xtal in Roubaix, France, plans to orchestrate a concerted effort to find new ways to express, crystallize, and image the proteins. If all goes as planned, starting in April the company will collaborate with four academic labs in France, Germany, and the Netherlands. Bio-Xtal has applied to the European Union for half the estimated 10 million Euro ($9.3 million) cost of the 3-year project, and it expects to raise the rest from pharmaceutical sponsors. Seventeen companies, including Roche, Merck, and Astra Zeneca, have already offered support, says Etienne L'Hermite, Bio-Xtal's manager. The effort to extend structural genomics to membrane proteins is "an excellent idea," says Aled Edwards, a structural biologist at the University of Toronto.
But because membrane proteins are difficult to express and crystallize--two necessary steps in determining their structure--the project is certain to face slow going, he says. "Calling something 'genomics' implies automation and high throughput," says Edwards. In this case, "it is a bit of a stretch." Bio-Xtal's Web site
Astronomers are reaching ever further back in time, seeking events from the earliest days of the universe. Now, the discovery of the farthest (and thus oldest) supernova ever seen is raising hopes that astronomers will soon detect the explosive deaths of the first stars to form after the universe's birth. These stars forged the first heavy elements, which helped create smaller and longer-lived stars like our own sun. The earliest stars looked different from modern stars. The big bang produced only three light elements—hydrogen, helium, and a little lithium—but today, stars form in gas clouds that also contain heavier elements such as carbon and oxygen. These elements radiate away enough energy to eventually cool the clouds. When the clouds cool, they fragment into smaller clumps that collapse to spawn a plethora of mostly small stars. But such fragmentation wasn't easy early in the universe's life, when stars formed from carbon- and oxygen-free gas clouds that remained warm. Because of their warmth, more gravity was needed to overwhelm the higher gas pressure—so when a cloud collapsed, it produced massive stars rather than small stars. Astronomer Jeff Cooke of Swinburne University of Technology near Melbourne, Australia, and colleagues have been searching for the most distant, ancient supernovae by examining images from the Canada-France-Hawaii Telescope atop Mauna Kea in Hawaii. To discern even the faintest specks of light, the astronomers combine, or "stack," hundreds of images. In one image, taken in 2006 of a galaxy in Sextans (a faint constellation south of Leo), they spotted a very distant supernova indeed. To find out just how far away it was, Cooke observed the galaxy's spectrum—the combined light emitted from its stars, arranged by wavelength—at the Keck I telescope, also atop Mauna Kea. "It was quite exciting," he says.
"As the spectrum was reading out, I could see the emission line for one of the features, and when I did a quick back-of-the-envelope calculation for the redshift, I saw how high it was." The redshift is a measure of the supernova's distance. As the universe expands, it stretches the light waves traveling to us from a distant galaxy, shifting the galaxy's spectral lines to redder wavelengths; the farther the galaxy is and the more expanding space its light has traveled through, the greater its redshift. And as Cooke's team reports online today in Nature, the supernova's redshift is 3.90, which means it is 12.1 billion light-years from Earth—and it exploded just 1.6 billion years after the big bang. That makes it more than a billion light-years farther than the previous record holder. Moreover, the supernova is anything but normal. It marked the death of a star that was more than 100 times as massive as the sun. During its brief life, such a star supports its great weight by generating so much light that the pressure of that outward radiation balances the inward pull of gravity. Unfortunately for the star, high-energy gamma rays supply much of this outward pressure, and when two gamma rays meet, they can convert their energy into a pair of particles, an electron and a positron. This "pair production" robs the star of the support that the gamma rays' pressure had been providing. As a result, Cooke says, "The whole star collapses in on itself. It's one giant thermonuclear bomb, and it's incredibly bright." A pair-instability supernova emits about 10 times as much light as the brightest normal supernovae, which occur when white dwarf stars explode. Pair-instability supernovae are so rare that observers have previously seen only one good candidate—and that was in a fairly nearby galaxy. Astronomer Abraham Loeb of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, calls the discovery of the distant pair-instability supernova a breakthrough. 
"It's the first demonstration that such events do take place at early cosmic times, and I think we will find many more of them in the future," he says. Astronomer Volker Bromm of the University of Texas, Austin, says: "This is a very, very promising sign for what we can expect in the coming years." Cooke's team has also detected another pair-instability supernova 10.4 billion light-years from Earth. Neither supernova arose from a star that formed from pristine gas, so neither represents the very first generation of stars to form after the big bang. But the two explosions suggest that pair-instability supernovae—and thus very massive stars—were more common during the first few billion years of the universe's life than they are today.
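The redshift-stretch relation the article describes is simple: observed wavelength = (1 + z) × emitted wavelength. A small illustration at the reported redshift (the Lyman-alpha line is my example here, not necessarily the feature Cooke identified):

```python
# Wavelength stretch at the reported redshift z = 3.90.
z = 3.90
lyman_alpha_nm = 121.567          # rest-frame Lyman-alpha line (assumed example)

observed_nm = (1 + z) * lyman_alpha_nm
print(f"observed at ~{observed_nm:.0f} nm")   # far-ultraviolet light stretched into the visible
```

So light emitted deep in the ultraviolet arrives at the telescope as visible light, which is why a redshift can be read off directly from where familiar spectral lines land.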
A year on the melting planet takes just 1.4 Earth days. While astronomers continue to rack up a list of exciting new exoplanet discoveries, some are simply more awesome than others. The crown jewel of the search for other planets would be an orb similar to Earth, perhaps even similar enough to support living beings. A newly found world called UCF-1.01 is most definitely one of the more interesting recent finds, but not for its ability to foster life — in fact it's quite the opposite. While studying a red dwarf star named GJ 436 using NASA's Spitzer Space Telescope, astronomers noticed something odd: a slight fluctuation in the amount of infrared light the star was giving off. After further study, the researchers were able to identify the cause — a small planet, two-thirds the size of Earth, orbiting the star at a remarkably close distance. The scientists believe that UCF-1.01 is so close to its star that it not only would be completely lifeless, but may also be absolutely covered in molten rock. With a surface temperature of over 1,000 degrees Fahrenheit, UCF-1.01 would be an absolutely miserable place to take a vacation. But there's still some good news! Because the planet orbits its star so closely, a year on UCF-1.01 takes less than one and a half Earth days. That means you'd be able to celebrate your birthday every 36 hours or so. Just don't bring an ice cream cake; it probably wouldn't last. [Image credit: NASA]
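The "slight fluctuation" is a transit dip, whose depth scales as the square of the planet-to-star radius ratio. A rough sketch with assumed radii (the 0.46-solar-radius figure for GJ 436 is my illustrative assumption, not from the article):

```python
# Approximate transit depth for a 2/3-Earth-size planet crossing a red dwarf.
R_SUN_KM = 695_700.0
R_EARTH_KM = 6_371.0

r_star = 0.46 * R_SUN_KM          # assumed radius of GJ 436
r_planet = (2.0 / 3.0) * R_EARTH_KM

depth = (r_planet / r_star) ** 2  # fraction of starlight blocked during transit
print(f"fraction of starlight blocked: {depth:.2e}")
```

A dip of a few hundredths of a percent is why the detection needed careful photometry from Spitzer rather than a casual glance at the star's brightness.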
- A 3-digit number is multiplied by a 2-digit number and the calculation is written out as shown, with a digit in place of each of the *'s. Complete the whole multiplication sum.
- When the number x 1 x x x is multiplied by 417 this gives the answer 9 x x x 0 5 7. Find the missing digits, each of which is represented by an "x".
- The number 10112359550561797752808988764044943820224719 is called a 'slippy number' because, when the last digit 9 is moved to the front, the new number produced is the slippy number multiplied by…
- This challenge is to make up YOUR OWN alphanumeric. Each letter represents a digit, and where the same letter appears more than once it must represent the same digit each time.
- Amazing as it may seem, the three fives remaining in the following 'skeleton' are sufficient to reconstruct the entire long division.
- Watch our videos of multiplication methods that you may not have met before. Can you make sense of them?
- Some 4-digit numbers can be written as the product of a 3-digit number and a 2-digit number using the digits 1 to 9 each once and only once. The number 4396 can be written as just such a product. Can…
- Countries from across the world competed in a sports tournament. Can you devise an efficient strategy to work out the order in which they finished?
- What day of the week were you born on? Do you know? Here's a way to…
- Find the numbers in this sum.
- However did we manage before calculators? Is there an efficient way to do a square root if you have to do the work yourself?
- This addition sum uses all ten digits 0, 1, 2...9 exactly once. Find the sum and show that the one you give is the only…
- Choose any 4 whole numbers and take the difference between consecutive numbers, ending with the difference between the first and the last numbers. What happens when you repeat this process over and…
- Read this article to find out the mathematical method for working out what day of the week each particular date fell on, back as far as 1700.
- Start with any triangle T1 and its inscribed circle. Draw the triangle T2 which has its vertices at the points of contact between the triangle T1 and its incircle. Now keep repeating this…
- Vedic Sutra is one of many ancient Indian sutras which involves a cross-subtraction method. Can you give a good explanation of WHY it…
- How would you judge a competition to draw a freehand square?
- Scheduling games is a little more challenging than one might desire. Here are some tournament formats that sport schedulers use.
- It's like 'Peaches Today, Peaches Tomorrow' but interestingly…
- A geometry lab crafted in a functional programming language. Ported to Flash from the original Java at web.comlab.ox.ac.uk/geomlab
- Imagine a strip with a mark somewhere along it. Fold it in the middle so that the bottom reaches back to the top. Stretch it out to match the original length. Now where's the mark?
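Two of the items above can be checked mechanically. Below is a short brute-force sketch for the "× 417" puzzle, plus a verification of the slippy-number rotation (this is my own checking code, not part of the original listing):

```python
# Puzzle 1: find x1xxx such that x1xxx * 417 = 9xxx057.
solutions = []
for n in range(10000, 100000):
    if str(n)[1] != "1":                  # second digit must be 1
        continue
    p = str(n * 417)
    if len(p) == 7 and p[0] == "9" and p.endswith("057"):
        solutions.append((n, n * 417))
print(solutions)                          # the unique answer

# Puzzle 2: what does moving the trailing 9 to the front multiply the slippy number by?
slippy = 10112359550561797752808988764044943820224719
rotated = int(str(slippy)[-1] + str(slippy)[:-1])
print(rotated // slippy, rotated % slippy)   # multiplier and remainder
```

The search confirms a single solution, 21921 × 417 = 9141057, and the rotated slippy number divides evenly by the original.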
An enormous triangular hole in the Sun's corona was captured earlier today by NASA's Solar Dynamics Observatory, seen above from the AIA 211 imaging assembly. This gap in the Sun's atmosphere is allowing more charged solar particles to stream out into the Solar System, and toward Earth as well. Normally, loops of magnetic energy keep much of the Sun's outward flow of gas contained. Coronal holes are regions, sometimes very large regions such as the one witnessed today, where the magnetic fields don't loop back onto the Sun but instead stream outwards, creating channels for solar material to escape. The material constantly flowing outward is called the solar wind, which typically blows at around 250 miles (400 km) per second. When a coronal hole is present, though, the wind speed can double to nearly 500 miles (800 km) per second. Increased geomagnetic activity and even geomagnetic storms may occur once the gustier solar wind reaches Earth, possibly within two to three days. The holes appear dark in SDO images because they are cooler than the rest of the corona, which is extremely hot, around 1,000,000 C (1,800,000 F)! Here's another image, this one in another AIA channel (193): Keep up with the Sun's latest activity and see more images on NASA's SDO site here.
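The "two to three days" figure can be sanity-checked by dividing the Sun-Earth distance by the wind speed (a straight-line, constant-speed approximation):

```python
# Travel time of the solar wind to Earth at typical vs. coronal-hole speeds.
AU_KM = 1.496e8                      # mean Sun-Earth distance, km

for speed_km_s in (400.0, 800.0):
    days = AU_KM / speed_km_s / 86400.0
    print(f"{speed_km_s:.0f} km/s -> {days:.1f} days")
```

The doubled ~800 km/s stream arrives in a little over two days, consistent with the article's estimate, while the ordinary wind takes more than four.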
Those working in science are accustomed to receiving emails starting with "dear sir/madam, please look at the attached file where I'm proving einstein theory wrong". This time it's a tad more serious, because the message comes from a genuine scientific collaboration... As everyone knows by now, the OPERA collaboration announced that muon neutrinos produced at CERN arrive at a detector 700 kilometers away in Gran Sasso about 60 nanoseconds earlier than expected if they traveled at the speed of light (incidentally, trains traveling the same route always arrive late). The paper is available on arXiv, and the video from the CERN seminar is here. OPERA is an experiment that has had some bad luck in the past. Its original goal was to study neutrino oscillations by detecting the appearance of tau neutrinos in a beam of muon neutrinos. However, due to construction delays, its results arrived too late to have any impact on measuring the neutrino masses and mixing; other experiments have in the meantime achieved a much better sensitivity to these parameters. Moreover, the "atmospheric" neutrino mass difference, which enters the probability of a muon neutrino oscillating into a tau one, turned out to be at the lower end of the window allowed when OPERA was being planned. As a consequence, a fairly small number of oscillation events is predicted to occur on the way to Italy, leading to the expectation of about 1-2 tau events being recorded during the experiment's lifetime (they were lucky to already get 1). However, they will not walk off the stage quietly. What was meant to be a little side analysis returned the result that neutrinos travel faster than light, confounding the physics community and wreaking havoc in the mainstream media. I'm not very original in thinking that the result is almost certainly wrong. The main experimental reason, already discussed on blogs, is the observation of neutrinos from the supernova SN1987A.
Back in 1987, three different experiments detected a burst of neutrinos, all arriving within 15 seconds and 2-3 hours before the visible light (which agrees with models of supernova explosion). On the other hand, if neutrinos traveled as fast as OPERA claims, they should have arrived years earlier. Note that the argument that OPERA is dealing with muon neutrinos while supernovae produce electron ones is not valid: electron neutrinos have enough time to oscillate to other flavors on the way from the Large Magellanic Cloud. One way to reconcile OPERA with SN1987A would be to invoke a strong energy dependence of the neutrino speed (it should be steeper than Energy^2), since the detected supernova neutrinos are in the 5-40 MeV range, while the energy of the CERN-to-Gran-Sasso beam is 20 GeV on average. However, OPERA does not observe any significant energy dependence of the neutrino speed, so that is not a likely explanation either. From the point of view of theory, the chances of the OPERA result being true are no better, as there is no sensible model of tachyonic neutrinos. At the same time, we've been observing neutrinos in numerous experiments and in various different settings, for example in beta decay, from terrestrial nuclear reactors, from the Sun, in colliders as missing energy, etc. Each time they seem to behave like ordinary fermions obeying all rules of the local Lorentz invariant quantum field theory. We should weigh this evidence against the analysis of OPERA, which does not appear rock solid. Recall that OPERA was conceived to observe tau neutrino appearance, not to measure the neutrino speed, and indeed there are certain aspects of the experimental set-up that call for caution. The most worrying is the fact that OPERA has no way to know the precise production time of a neutrino it detects, as it could be produced anytime during a 10 microsecond long proton pulse that creates the neutrinos at CERN.
To get around this problem they need a statistical approach. Namely, they measure the time delay of the neutrino arrival in Gran Sasso with respect to the start of the proton pulse at CERN. Then they fit the time distribution to templates based on the measured shape of the proton pulse, assuming various hypotheses about the neutrino travel time. In this manner they find that the best fit is a travel time 60 nanoseconds shorter than what one would expect if the neutrinos traveled at the speed of light. However, one could easily imagine that the systematic errors of this procedure have been underestimated; for example, the shape of the rise and the fall-off of the proton pulse may have been inaccurately measured. OPERA does a very good job arguing that the distance from CERN to Gran Sasso can be determined to 20 cm precision, or that synchronizing the clocks in these two labs is possible to 1 nanosecond precision, but the systematic uncertainties on the shape of the proton pulse are not carefully addressed (and, during the seminar at CERN, the questions concerning this issue were the ones that confounded the speaker the most). So what's next? Fortunately OPERA appears to be open for discussion and scrutiny, so the issue of systematic uncertainties should be resolved in the near future. Simultaneously, the MINOS collaboration should be able to repeat the measurement with similar if not better precision, and I'm sure they're already sharpening their screwdrivers. On a longer timescale, OPERA could try to optimize the experimental setting for the velocity measurement. For example, they might install a near detector on the CERN site (where there should be no effect if the current observation is due to neutrinos traveling faster than light, or there should be a similar effect if there is an unaccounted for systematic error in the production time).
Or they could use shorter proton pulses, so that the neutrino production time can be determined without statistical gymnastics (it appears feasible - the LHC currently works with 5 ns bunches). I bet, my private level of confidence being 6 sigma, that future checks will demonstrate that neutrinos are not superluminal... in the end the character from the original book turned out to be 100% human. But, of course, the ultimate verdict belongs not to our preconceptions but to experiment.
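The size of the claimed effect is easy to reconstruct from the numbers in the post (the ~730 km CERN-Gran Sasso baseline and 168,000-light-year distance to the Large Magellanic Cloud are assumed round figures):

```python
# Back-of-the-envelope numbers implied by the OPERA claim.
c = 299792.458            # km/s
baseline_km = 730.0       # CERN -> Gran Sasso, approximate
early_s = 60e-9           # reported early arrival

flight_s = baseline_km / c
excess = early_s / flight_s          # fractional speed excess (v - c)/c
print(f"(v-c)/c ~ {excess:.1e}")

# SN1987A consistency check: at this excess, neutrinos covering ~168,000
# light-years would outrun the photons by years, not hours.
lead_years = 168_000 * excess
print(f"expected lead over photons: ~{lead_years:.1f} years")
```

A fractional excess of a few times 10⁻⁵ translates into a roughly four-year head start over the supernova's light, which is the core of the SN1987A argument against the result.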
Hugh Pickens writes "BBC recently asked physicist and Cambridge University professor Dave Ansell to draw up a balance sheet of the mass that's coming in to the Earth and the mass going out, to find out if the Earth is gaining or losing mass. By far the biggest contributor to the world's mass is the 40,000 tonnes of dust that is falling from space to Earth every year. 'The Earth is acting like a giant vacuum cleaner powered by gravity in space, pulling in particles of dust,' says Dr. Chris Smith. Another factor increasing the Earth's mass is global warming, which adds about 160 tonnes a year: as the temperature of the Earth goes up, energy is added to the system, so the mass must go up. On the minus side, at the very center of the Earth, within the inner core, there exists a sphere of uranium five miles in diameter which acts as a natural nuclear reactor, and these nuclear reactions cause a loss of mass of about 16 tonnes per year." (Read more, below.) Pickens continues: "What about launching rockets and satellites into space, like Phobos-Grunt? Smith discounts this, as the mass is negligible and most of it will fall back down to Earth again anyway. But by far the biggest factor in the Earth's weight loss is the 95,000 tonnes of hydrogen that escape from the atmosphere every year. 'The other very light gas this is happening to is helium and there is much less of that around, so it's about 1,600 tonnes a year of helium that we lose.' Taking all the factors into account, Smith reckons the Earth is getting about 50,000 tonnes lighter a year, which is just less than half the gross weight of the Costa Concordia, the Italian cruise liner that recently ran aground."
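The 160-tonnes-a-year figure is just E = mc² applied to the planet's energy gain. A rough cross-check against an assumed radiative imbalance of ~0.9 W/m² (my illustrative value, not from the article):

```python
# Sanity check on the "global warming adds ~160 tonnes/year" figure via E = m c^2.
c = 2.998e8                    # speed of light, m/s
added_mass_kg = 160_000.0      # 160 tonnes

energy_j = added_mass_kg * c**2
print(f"energy equivalent of the added mass: ~{energy_j:.2e} J/year")

# Assumed comparison: ~0.9 W/m^2 imbalance over Earth's ~5.1e14 m^2 surface,
# integrated over one year.
imbalance_j = 0.9 * 5.1e14 * 365.25 * 86400
print(f"energy retained per year at 0.9 W/m^2: ~{imbalance_j:.2e} J")
```

Both come out around 10²² joules per year, so the quoted mass gain is at least internally consistent with plausible energy-imbalance figures.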
Stability of martian water "Liquid water can be stable against freezing and stable against boiling, but unstable with respect to evaporation. The situation is analogous to Earth's oceans. Liquid water on the surface does not freeze because temperatures are higher than the melting point, and it does not boil because the vapor pressure corresponding to the temperature of the water is less than the surface pressure. Yet the water does evaporate because the atmosphere is not saturated. A similar situation can occur on Mars, though the odds are much more likely for boiling." "But whether it boils or evaporates, either way the water will cool because of the heat loss and thus it will eventually freeze."
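The boiling claim can be made quantitative: water boils where its vapor pressure equals the ambient pressure, roughly 600 Pa at the martian surface. A sketch using the Antoine equation for water (the constants are the commonly tabulated 1-100 °C set with pressure in mmHg; both the constants and the 600 Pa figure are assumptions here):

```python
import math

# Where does water boil at Mars-like surface pressure (~600 Pa)?
A, B, C = 8.07131, 1730.63, 233.426   # assumed Antoine constants for water (mmHg, deg C)
p_mars_pa = 600.0
p_mmhg = p_mars_pa / 133.322          # convert pascals to mmHg

# Antoine equation: log10(P) = A - B / (C + T), solved for T.
t_boil_c = B / (A - math.log10(p_mmhg)) - C
print(f"boiling point at ~600 Pa: ~{t_boil_c:.1f} C")
```

The boiling point comes out near 0 °C, essentially on top of the melting point, which is why liquid water at the martian surface tends to boil or evaporate rather than persist.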
The answer depends on the infrastructure that you are using. Generally, the best thing is to do nothing. I know this sounds weird, so let me explain. When the OS is talking to a NIC, it generally has at least one pair of RX/TX ring-buffers and, in the case of commodity hardware, is likely talking to the device over a PCIe bus. On top of the PCIe bus there is a DMA engine that makes it possible for a NIC to read and write from/to host memory without using a CPU. In other words, while the NIC is active, it will always read and write packets on its own, with minimal CPU intervention. There are, of course, a lot of details, but you can generally think that this is what is going on at the driver level — reads and writes are always performed by the NIC using DMA, no matter whether your application reads/writes anything or not. Now, on top of it there is an OS infrastructure that allows user-space applications to send and receive data to/from the NIC. When you open a socket, the OS will determine what kind of data your application is interested in and add an entry into a list of applications talking to a network interface. When that happens, the application starts receiving data that is placed in some sort of per-application queue in the kernel. It doesn't matter whether you are calling read or not; the data is placed there. Once the data is placed, the application gets notified. The notification mechanisms in the kernel vary, but they all share a similar idea — let the application know that data is available, so it can call read(). Once the data is in that "queue", the application can pick it up by calling read(). The difference between a blocking and a non-blocking read is simple — if the read is blocking, the kernel will simply suspend the execution of the application until the data arrives. In the case of a non-blocking read, control is returned to the application in any case — either with data or without it.
If the latter happens, the application can either keep trying (aka spin on the socket), or wait for a notification from the kernel saying that data is available, and then proceed to reading it. Now let's get back to "doing nothing". What it means is that the socket is registered for notifications only once. Once registered, the application doesn't have to do anything but receive a notification saying "the data is there". So what the application should do is listen for that notification and perform the read only when the data is there. Once enough data is received, the app can start processing it somehow. Knowing all that, let's see which of the three approaches is better... Post another overlapped read on the socket, this time with the size of the packet so it receives it in the next completion? This is a good approach. Ideally, you wouldn't have to "post" anything, but this depends on how good the OS interface is. If you cannot "register" your application once and then keep receiving notifications every time new data is available and call read() when it is, then posting an asynchronous read request is the next best thing. Read inside the routine the whole packet using blocking sockets and then post another overlapped recv with 9 bytes? This is a good approach if your application has absolutely nothing else to do and you have only one socket to read from. In other words — it is an easy way of doing it, very easy to program, the OS takes care of completions itself, etc. Keep in mind though that once you have more than one socket to read from, you will have to either do a very stupid thing like having a thread per socket (terrible!), or re-write your application using the first approach. Read in chunks (decide the size) say - 4096 and have a counter to keep reading on each overlapped completion until all the data is read (say it would complete 12 times till the whole packet was read). This is the way to go!
In fact, this is almost the same as approach #1, with a nice optimization: perform as few round-trips to the kernel as possible, and read as much as possible in one go. First I wanted to correct the first approach with these details, but then I noticed you've done it yourself. Hope it helps. Good luck!
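Translated out of the Windows/overlapped-I/O vocabulary, approach #3 looks like this in a readiness-based API (a Python sketch using `selectors`; `MSG_LEN` and `CHUNK` are placeholders for your protocol's values):

```python
import selectors
import socket

# Register the socket once, get readiness notifications, and drain up to CHUNK
# bytes per recv() until a fixed-size message has been accumulated.
CHUNK = 4096
MSG_LEN = 9   # assumed message size, matching the 9-byte example in the question

def read_message(sock: socket.socket) -> bytes:
    sock.setblocking(False)
    sel = selectors.DefaultSelector()
    sel.register(sock, selectors.EVENT_READ)
    buf = b""
    while len(buf) < MSG_LEN:
        sel.select()                                  # block until "data is there"
        data = sock.recv(min(CHUNK, MSG_LEN - len(buf)))
        if not data:                                  # peer closed the connection
            break
        buf += data
    sel.close()
    return buf
```

With `sel.register` called once, the loop only touches the kernel when data is actually available, which is the "do nothing until notified" pattern described above; scaling to many sockets just means registering more of them with the same selector.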
Working with Data Types in Expressions (Reporting Services) Data types represent different kinds of data so that it can be stored and processed efficiently. Typical data types include text (also known as strings), numbers with and without decimal places, dates and times, and images. Data can be stored using one data type for efficiency but formatted according to your preference when the data is displayed in the report. For example, a field that represents currency can be stored as a floating point number, but can be displayed in a variety of formats depending on the format property you choose. For more information about display formats, see Formatting Reports and Report Items. It is important to understand data types when you write expressions to compare or combine values, for example, when you define group or filter expressions, or calculate aggregates. Comparisons and calculations are valid only between items of the same data type. If the data types do not match, you must explicitly convert the data type in the report item by using an expression. The following list describes cases when you may need to convert data to a different data type: Comparing the value of a report parameter of one data type to a dataset field of a different data type. Writing filter expressions that compare values of different data types. Writing sort expressions that combine fields of different data types. Writing group expressions that combine fields of different data types. Converting a value retrieved from the data source from one data type to a different data type. To determine the data type of a report item, you can write an expression that returns its data type. For example, to show the data type for the field MyField, add the following expression to a table cell: =Fields!MyField.Value.GetType().ToString(). The result displays the CLR data type used to represent MyField, for example, System.String or System.DateTime.
You can also convert dataset fields before you use them in a report. The following list describes ways that you can convert an existing dataset field: Modify the dataset query to add a new query field with the converted data. For relational or multidimensional data sources, this uses data source resources to perform the conversion. Create a calculated field based on an existing report dataset field by writing an expression that converts all the data in one result set column to a new column with a different data type. For example, the following expression converts the field Year from an integer value to a string value: =CStr(Fields!Year.Value). For more information, see How to: Add, Edit, or Delete a Field in the Report Data Pane. Check whether the data processing extension you are using includes metadata for retrieving preformatted data. For example, a SQL Server Analysis Services MDX query includes a FORMATTED_VALUE extended property for cube values that have already been formatted when processing the cube. For more information, see Using Extended Field Properties for an Analysis Services Dataset. Report parameters must be one of five data types: Boolean, DateTime, Integer, Float, or Text (also known as String). When a dataset query includes query parameters, report parameters are automatically created and linked to the query parameters. The default data type for a report parameter is String. To change the default data type of a report parameter, select the correct value from the Data type drop-down list on the General page of the Report Parameter Properties dialog box. Report parameters that are DateTime data types do not support milliseconds. Although you can create a parameter based on values that include milliseconds, you cannot select a value from an available values drop-down list that includes Date or Time values that include milliseconds. 
When you combine text and dataset fields using the concatenation operator (&), the common language runtime (CLR) generally provides default formats. When you need to explicitly convert a dataset field or parameter to a specific data type, you must use a CLR method or a Visual Basic runtime library function to convert the data. The following list shows representative examples of converting data types:
- DateTime to String: =CStr(Fields!MyDate.Value)
- String to DateTime: =CDate(Fields!MyString.Value)
- String to DateTimeOffset: =DateTimeOffset.Parse(Fields!MyString.Value)
- Extracting the Year: =Fields!MyDate.Value.Year -- or -- =Year(Fields!MyDate.Value)
- Boolean to Integer, where -1 is True and 0 is False: =CInt(Fields!MyBoolean.Value)
- Boolean to Integer, where 1 is True and 0 is False: =Math.Abs(CInt(Fields!MyBoolean.Value))
- Just the DateTime part of a DateTimeOffset value: =Fields!MyDateTimeOffset.Value.DateTime
- Just the Offset part of a DateTimeOffset value: =Fields!MyDateTimeOffset.Value.Offset
You can also use the Format function to control the display format for a value. For more information, see Functions (Visual Basic). When you connect to a data source with a data provider that does not provide conversion support for all the data types on the data source, the default data type for unsupported data source types is String. The following examples provide solutions to specific data types that are returned as a string. Concatenating a String and a CLR DateTimeOffset Data Type For most data types, the CLR provides default conversions so that you can concatenate values that are different data types into one string by using the & operator. For example, the following expression concatenates the text "The date and time are: " with a dataset field StartDate, which is a System.DateTime value: ="The date and time are: " & Fields!StartDate.Value. For some data types, you may need to include the ToString function. For example, the following expression shows the same example using the CLR data type System.DateTimeOffset, which includes the date, the time, and a time-zone offset relative to the UTC time zone: ="The time is: " & Fields!StartDate.Value.ToString().
Converting a String Data Type to a CLR DateTime Data Type If a data processing extension does not support all data types defined on a data source, the data may be retrieved as text. For example, a datetimeoffset(7) data type value may be retrieved as a String data type. In Perth, Australia, the string value for July 1, 2008, at 6:05:07.9999999 A.M. would resemble: 2008-07-01 06:05:07.9999999 +08:00 This example shows the date (July 1, 2008), followed by the time to a 7-digit precision (6:05:07.9999999 A.M.), followed by a UTC time zone offset in hours and minutes (plus 8 hours, 0 minutes). For the following examples, this value has been placed in a String field called MyDateTime.Value. You can use one of the following strategies to convert this data to one or more CLR values: In a text box, use an expression to extract parts of the string. For example: The following expression extracts just the hour part of the UTC time zone offset and converts it to minutes: =CInt(Fields!MyDateTime.Value.Substring(Fields!MyDateTime.Value.Length-5,2)) * 60 The result is 480. The following expression converts the string to a date and time value: =DateTime.Parse(Fields!MyDateTime.Value) If the MyDateTime.Value string has a UTC offset, the DateTime.Parse function first adjusts for the UTC offset (6:05 A.M. minus [+08:00] gives the UTC time of 10:05 P.M. the night before). The DateTime.Parse function then applies the local report server UTC offset and, if necessary, adjusts the time again for Daylight Saving Time. For example, in Redmond, Washington, the local time offset adjusted for Daylight Saving Time is [-07:00], or 7 hours earlier than the UTC time. The result is the following DateTime value: 2008-06-30 03:05:07 PM (June 30, 2008 at 3:05 P.M.). Add a new calculated field to the report dataset that uses an expression to extract parts of the string. For more information, see How to: Add, Edit, or Delete a Field in the Report Data Pane.
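The same extraction logic can be checked in any language. This Python sketch (an analogy to the report expressions above, not report syntax) parses the sample string, converts the offset hours to minutes the same way the Substring expression does, and derives the UTC instant:

```python
from datetime import datetime, timedelta

my_date_time = "2008-07-01 06:05:07.9999999 +08:00"

# Equivalent of =CInt(...Substring(Length-5, 2)) * 60:
# the two digits of the offset hour, converted to minutes.
offset_minutes = int(my_date_time[len(my_date_time) - 5:
                                  len(my_date_time) - 3]) * 60
# offset_minutes == 480

# Parse the date/time part, truncated to 6 fractional digits,
# since strptime supports at most microsecond precision.
local = datetime.strptime(my_date_time[:26], "%Y-%m-%d %H:%M:%S.%f")

# Subtracting the +08:00 offset yields the UTC instant:
# June 30, 2008 at 10:05:07 P.M.
utc = local - timedelta(minutes=offset_minutes)
```

Any further adjustment to a report server's local time would then subtract that server's own UTC offset from `utc`.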
Change the report dataset query to use Transact-SQL functions to extract the date and time values independently to create separate columns. The following example shows how to use the function DatePart to add a column for the year and a column for the UTC time zone converted to minutes: DATEPART(year, MyDateTime) AS Year, DATEPART(tz, MyDateTime) AS OffsetinMinutes The result set has three columns. The first column is the date and time, the second column is the year, and the third column is the UTC offset in minutes. The following row shows example data: 2008-07-01 06:05:07 2008 480 For more information about SQL Server database types, see Data Types (Database Engine) and Date and Time Data Types and Functions (Transact-SQL).
A revised view of continental tectonics is emerging from the research of an MIT professor who has made the first statistical evaluation in the West of long-secret gravity-field data for a large section of the former Soviet Union. Dr. Marcia K. McNutt of the Department of Earth, Atmospheric and Planetary Sciences played a leading role, in collaboration with Mikhail G. Kogan of the Institute of Physics of the Earth, Moscow, in making the Russian data available to colleagues throughout the world. They published their findings in a recent issue of Science Magazine. Gravity-field data help geologists understand the nature of the various layers deep below the Earth's surface, giving clues to the influence those layers have on the formation of mountain ranges and other surface features, like the Tibetan Plateau. The data also have important economic and strategic value, which leads many nations to keep information for their territories under wraps. The information is of vital importance in finding and extracting mineral resources, oil and gas, especially for terrain where surface features are obscured by forests. Information on gravity-field data can also be used to measure subsurface stress. Some nations believe that a knowledge of gravity data can aid the targeting of intercontinental ballistic missiles. This was the original reason for the classification of the Soviet data, but now a more important Russian imperative has led to the release of the data to help stimulate the economy. The basic geologic question Professor McNutt sought to answer in reviewing the Russian data was: When continental blocks such as Italy, Arabia and India collided with the southern border of Eurasia, why were narrow mountain belts created in the west and very broad ones in the east? Her conclusion: Differences in the lateral strength of the upper mantle control intracontinental deformation.
The difference between western and eastern Eurasia, she found, can be explained by the presence of a low-viscosity zone in the uppermost mantle beneath eastern Eurasia that is absent in the west. Professor McNutt's statistical analysis of the Russian data shows that the location of the change in viscosity corresponds with the geologic boundary between the older shields and platforms of the Baltics, Russia and Siberia and the younger, geologically active mountain belts of eastern Asia. "Our conclusions from dynamic modeling of gravity and seismic velocity anomalies from northern Eurasia point to a revised view of continental tectonics according to which the physical properties of the upper mantle to depths as great as 400 km (248 miles) are affected by the thermal structure and stress history of the overlying continents, and that lateral variations in these physical properties in turn dictate patterns of intracontinental deformation," Professor McNutt and her Russian colleague wrote in their paper, "Gravity Field over Northern Eurasia and Variations in the Strength of the Upper Mantle," (Science, January 22). Professor McNutt and Dr. Kogan have been working together for about 10 years to gain access to the data that the former Soviet Union spent 30 years and the equivalent of $2 billion acquiring. Their success means a 20-percent increase in the land area available for gravitational analysis at wavelengths less than 2,500 km (1,550 miles). The information is of great interest to scientists because data derived from satellites "currently predict correctly only 50 percent of the total gravity spectrum over northern Eurasia at wavelengths greater than 1,000 km (620 miles) and only 75 percent at wavelengths greater than 3,000 km (1,860 miles)," she said.
The Soviet data, an estimated 10 million point measurements, were derived by the Topographic Service of the Armed Forces of Russia at a resolution of 10 km (6.2 miles) using airplanes and helicopters, Professor McNutt said. Despite the value of the data, it was nearly lost. The original paper records were never entered into computers and were beginning to crumble, Professor McNutt said. It was the breakup of the Soviet Union that finally led to success for Dr. Kogan in his long quest to make the data available to his scientific colleagues in the West. His first step, after gaining permission, was to arrange a consortium of western companies interested in oil and mineral exploration of Russia. Through the consortium, computer equipment was made available and the data were recorded electronically. Professor McNutt praised her Russian colleague, whom she met 10 years ago in France at a scientific conference, not only for his perseverance in pressing his request for access to the data, but for his courage. Dr. Kogan, a Jew, was excluded from membership in the Communist Party, a virtual requirement for advancement, influence and, most important, security in the former Soviet Union, Professor McNutt said. Yet, he kept up his efforts to make the data available. "This was a very good outcome of political change," Professor McNutt said. But she also sees a tragic side for science in the dissolution of the former USSR. "The Soviet system of science was very powerful," she said. "It was held in high esteem, and while researchers didn't always have the best equipment, in some areas they made unsurpassed theoretical contributions. But now, there is not enough money to support scientists because the infrastructure has fallen apart. Many scientific institutes have shut down. The average pay a scientist can command might provide bread for a month. Students are being told to pick potatoes.
"Those scientists with foreign contacts have had some success securing funding from abroad, but there is no way that I can see that the Russians will be able to maintain their output of science. The National Science Foundation has helped out to some extent with certain projects of the former Soviet Union, but these are difficult economic times in this country as well and we probably can't support much Russian science from the US. "Russia is encouraging its scientists to get involved in projects that will make money. I hate to see this emphasis on applied science. The Soviets had such a strong commitment to pure research. Once they start looking only at the bottom line, they won't be doing much research." A version of this article appeared in the March 31, 1993 issue of MIT Tech Talk (Volume 37, Number 27).
Tree of Life This is a tree of life--a diagram that shows how different types of living things, or species, are related. If you follow the lines connecting any two species on the tree, you'll get an idea of how closely related they are. The longer the path is, the more distant the relationship. The 479 species listed on this tree represent only a tiny fraction of the more than 1.7 million species scientists have identified. Many millions more species are believed to exist. Our species, Homo sapiens, is labeled in green in the top left part of the tree. How was it made? Generations of scientists have created tree-of-life diagrams by studying and comparing the physical features of different species. But this tree of life was made by comparing DNA sequences, with physical features playing a supporting role. All living things have some DNA sequences in common because they evolved from a single ancestral species. Closely related species have more DNA in common than distantly related species do, so they are positioned closer to each other on the tree.
News Story - Climate Cycles and Million Year Old Ice Date: 28 May 2009. Dr Eric Wolff talks to SciencePoles - the scientific website of the International Polar Foundation. Dr. Eric Wolff is the 2009 recipient of the prestigious Louis Agassiz Medal awarded by the European Geosciences Union (EGU). A veteran of six Antarctic and two Greenland seasons, Dr. Wolff has been working for the British Antarctic Survey (BAS) for over twenty years, and has played a central role in the extremely important European Project for Ice Coring in Antarctica (EPICA). A leading expert in the study of the chemical composition of snow cover and ice cores and their use in the determination of past climates, pollution and atmospheric chemistry, Dr. Wolff has published some 130 peer-reviewed journal articles. He is one of the most cited scientists in the climate sciences. Read More at http://www.sciencepoles.org/index.php?/articles_interviews/dr_eric_wolff_climate_cycles_and_million_year_old_ice/&uid=1481
Once you start to optimize your code, read about patterns, and so on, you realize that it is an interesting idea to use static data members to keep a single copy of something you may use in all the instances of your class (instead of having a copy in each object). Sometimes those static data members are very simple, like constants, but when that information gets more and more complex, you run into the limitations of the C++ language: - Static data members must be initialized outside the class body, generally in *.cpp files, except const integral-type static data members. This is a great inconvenience for inline classes and templates defined in one *.h file. - C++ doesn't have static constructors, as Java or C# does, so you usually have to initialize the static data members one by one (independently). This is a limitation because you may want to initialize several static data members in the same loop or algorithm, for example. I decided to work around both limitations, and here is the result. I previously discussed the first limitation in this topic at Stack Overflow: static constructors in C++? need to initialize private static objects. About the second limitation, it is interesting to read this other topic: What is the rationale for not having static constructors in C++?. Here is my point of view on this: - Although C++ was not intended to have static constructors in the beginning, it is still interesting to have the possibility to extend the language to work in the same way as Java and C#, as it is the free will of the programmer to use that or not. - C++ is an older language than Java and C#, static constructors were invented later, but there is no technical reason why they cannot be implemented now. - There are ways to call functions to initialize static data members one by one, but it is not the same as initializing several static data members at the same time in the same algorithm (in a static constructor for example).
That can also be done in a static Init() function member... just name it StaticConstructor() and create a mechanism to call it automatically at startup and you got it! - Java and C# are managed languages, so they do not need static destructors. But native C++ is different. As a non-managed language, it should have a destructor for each constructor, as there is no garbage collector in native C++. If the programmer calls new in the constructor, we should be able to call delete in the destructor. That is the reason I also implemented the static destructor. Using the Code The way of using the code is very simple: - Include the provided StaticConstructor.h. - Declare StaticConstructor and StaticDestructor as static function members of your class: static void StaticConstructor() static void StaticDestructor() - Invoke them with the macro outside of the class body. You may also use the macros STATIC_CONSTRUCTOR() and STATIC_DESTRUCTOR() as aliases for static void StaticConstructor() and static void StaticDestructor() respectively, but this only works for inline classes (usually declared only in .h files) or the header declaration. To declare the implementation in a .cpp file, you should use the full expressions static void MyClass::StaticConstructor() and static void MyClass::StaticDestructor(). The idea is also to invoke the static constructor in only one .cpp file (whenever possible) to avoid several invokes of it. You may download the source and examples of all these. Apart from that, I also implemented macros for a fast static start-up code (without the need to declare a static constructor in a dummy class): std::cout << "Starting up..." << std::endl; std::cout << "Finishing up..." 
<< std::endl; Using this kind of static constructor in templates is possible, but it is important to invoke separately the static constructor of each template instance in this way: typedef MyTemplate<int> MyTemplateInt; typedef MyTemplate<double> MyTemplateDouble; This is usually done in a different .cpp file from the template definition, which is usually in a .h file. If you also need to initialize some data members for the template, you may do it in the same .cpp file, also for each template instance... but this could be a great inconvenience. You may prefer to encapsulate the initialization inside the template code, because: - You keep together the template code and its initialization. - You do not need to repeat the initialization code for each instance of the template. - You may not want to put that code in a .cpp file. But this cannot be done in C++, so I tried a workaround. Instead of using a data member, I used a function member to store data. I called this a "data function member" (DF member), whose implementation is: static TypeName& DFMemberName() { static TypeName value(InitValue); return value; } In the examples, I use the macro STATIC_DF_MEMBER(TypeName, DFMemberName, InitValue) to declare them in an easier way. You can declare the DF members in the class or template header declaration, and access them in the static constructor and the static destructor. You may use them as a reference to your data members (in fact, they are function members that return references to your data), so this kind of code is valid: DFMemberName() = Value2;. The DF members are initialized the first time you access them, not when you declare them (they are not normal data members), so you may want to access them in the static constructor to ensure they are initialized at the beginning of the execution.
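The "construct on first access" behaviour of a DF member is not specific to C++. The following Python sketch (with hypothetical names, not the article's macros) imitates the pattern: the stored value is built the first time the accessor is called, and every later call hands back the same object, so callers can read and modify it through the accessor:

```python
def df_member(factory):
    """Return an accessor that builds its value on first call,
    like a C++ function-local static, then returns the same
    object (a mutable reference) on every later call."""
    state = []

    def accessor():
        if not state:
            state.append(factory())  # constructed on first access only
        return state[0]

    return accessor

# Rough analogue of STATIC_DF_MEMBER(SomeMapType, Table, ...):
Table = df_member(dict)

Table()["answer"] = 42            # usable like DFMemberName() = Value2;
assert Table()["answer"] == 42    # same underlying object every call
```

As in the C++ version, initialization order problems disappear because the value cannot be observed before it is constructed.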
Points of Interest Implementation: I used a trick in the macro that invokes the static constructor - I declare a global variable (that would be constructed at startup) to put some code in its default constructor (a call to the static constructor of your class). The destructor of that class would call the static destructor of your class. Order problem: The compiler will process the calls to the data member initialization and the static constructors in a particular order that could be compiler-dependent. Be very careful with that, otherwise the data members may not be initialized the way you want. In Microsoft Visual Studio, the order seems to be the order of the lines of code in the file, so make sure you put the call INVOKE_STATIC_CONSTRUCTOR after the data member initialization. - v1.2 - 2011/11/27: First release with some improvements and a full example. B.Sc. Mathematics and Computer Science. Programming in C++ since 2003.
This article explains a simple way of implementing the digest protocol in C#. A sample application is provided which shows how it is calculated in a step by step manner. In the HTTP protocol, we use different types of authentication: basic, digest and Kerberos. 1. Basic Authentication This is the most unsecured because it uses plain text transfer of both userid and password to the server. 2. Digest Authentication This method provides safety up to a certain level. The password is not passed by the client; instead, the server and client hash the password together with generated parameters using a defined algorithm, producing a 32-character hexadecimal (128-bit MD5) value. The communication consists of attribute-value strings, and lots of parameters are optional. Due to this, it is vulnerable to attackers in the middle who can intercept the exchange and alter it to basic authentication or remove some of the optional digest values. 3. Kerberos Authentication This is considered one of the most secure ways. Authentication is not done in one or two steps. The challenge and response is a process of a few steps with tickets for each stage. If the communication breaks for some reason, it has to start from the first stage. Due to this, vulnerability is less. But the process is a long one. In this article, we talk about the digest protocol and how we implement it using .NET Framework 3.5. Here we talk about server side handling of the protocol only. First the request is sent by the server with these parameters:
Realm = Name of the realm
Nonce = Generated every time, a 32-character hexadecimal string
Stale = true/false (is it a repeated call or a first-time call)
QOP = auth (another method is auth-int)
The client receives the information and it will prompt the user for userid and password. The user will give her/his user id and password. Then the user will press login. When the user presses login, the application will do hashing with the given and some additional parameters. It will send the hashed information and parameters back to the server.
The password will not be sent back by the client; instead it will MD5 hash the password with the given parameters and the generated parameters. Now the server has to use the data sent by the client. In addition to that, we have to get the password for the userid from the SQL database. It is quite simple to get the password from the database using the userid. The method name is implementation specific. Now you have the client-given parameters, including the userid, and we have retrieved the password from the database. Now we have to apply the algorithm. Separate the parameters sent by the client and store them into named variables. In the sample, we are doing that with: private void SplitResponse(String strResponse, out String strUserName, out String StrSplResponse, out String strRealm, out String strURI, out String strNonce, out String strCnonce, out String strNonceCount, out String strQop) Have a hashing function that follows MD5 hashing: private String GetHash(String strIn) Now we do the algorithm implementation. Format the strings one by one. To get A1: UserName + ":" + Realm + ":" + Password A1Hash = MD5 hash of the A1 value. To get A2: CommandName + ":" + URI A2Hash = MD5 hash of the A2 value. Now calculate the response: A1Hash + ":" + Nonce + ":" + NonceCount + ":" + CNonce + ":" + QOP + ":" + A2Hash (this is the ordering defined by RFC 2617 for qop=auth). Now hash this response value and check it against the response returned by the client; if they are equal, the user has entered the proper password and we can allow a token. Otherwise authentication is denied. Sample Data Sent by the Client Server code will retrieve (using SQL Server / any database): - Password: testpass - Method: DESCRIBE The resulting response must be equal to "47aa3643329845a954a2d091422eb35f". I have attached a sample program which demonstrates how to implement MD5 hashing and digest authentication. The sample solution can be used as a sample calculator when you want to implement it in another language or another technology. We can use this article as a step by step checking tool.
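The hashing chain can be checked step by step in any language with an MD5 routine. This Python sketch (an analogy, not the article's C# code) follows the RFC 2617 ordering for qop=auth; since the article's full sample parameters are not reproduced above, the values here are the worked example from RFC 2617 itself:

```python
import hashlib

def md5_hex(text):
    return hashlib.md5(text.encode("utf-8")).hexdigest()

def digest_response(user, realm, password, method, uri,
                    nonce, nc, cnonce, qop):
    ha1 = md5_hex(f"{user}:{realm}:{password}")   # hash of A1
    ha2 = md5_hex(f"{method}:{uri}")              # hash of A2
    # response = MD5(HA1:nonce:nc:cnonce:qop:HA2) per RFC 2617
    return md5_hex(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")

# RFC 2617's worked example, useful as a known-good checkpoint:
resp = digest_response("Mufasa", "testrealm@host.com", "Circle Of Life",
                       "GET", "/dir/index.html",
                       "dcd98b7102dd2f0e8b11d0f600bfb0c093",
                       "00000001", "0a4f113b", "auth")
assert resp == "6629fae49393a05397450978507c4ef1"
```

A server implementation recomputes this value from the stored password and compares it to the response field the client sent.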
- 24th October, 2008: Initial post
Resistant spores of bacillus subtilis have spent 22 months in the 'EXPOSE-R' test container outside the International Space Station (ISS). For the first time during a long-duration mission, they were mixed with artificial meteorite dust and exposed to the harsh conditions of outer space. Scientists at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) are now determining precisely how many of these spores have survived their stay in space. If it turns out that the meteorite dust was able to shield the spores from the hostile space environment, microorganisms may be capable of surviving in meteorites for long periods of time and travelling from one planet to another. EXPOSE-R with microorganisms outside the ISS After disassembly – EXPOSE-R on board the space station Over the next few months, Gerda Horneck, director of the Spores in artificial meteorites (SPORES) experiment, and her colleagues from the DLR Institute of Aerospace Medicine will examine almost 300 samples containing these microorganisms. Since this experiment began back in March 2009, the samples have been subjected to harsh conditions. They have been exposed to ultraviolet and ionising radiation, vacuum and temperature variations from minus 20 to plus 40 degrees Celsius in the ESA EXPOSE-R facility, as well as microgravity and a complete absence of any type of nutrients. The spores of bacillus subtilis have proven to be true survivors, employing an effective strategy; they enter a kind of hibernation, waiting for conditions to become more favourable, and then germinate again and restart their metabolism. The scientists now want to trigger this reaction themselves. "First, we try to bring the spores back to life by feeding them nutrients," explains astrobiologist Corinna Panitz, one of the scientists involved in SPORES. 
"By doing this we can examine how many spores have survived their extended stay in space, the extent of damage to their DNA and the precise nature of that damage." Resistant to vacuum, radiation and temperature extremes Bacillus subtilis is a thoroughly researched microorganism that is widespread in land, water and air. Its ability to withstand vacuum, radiation and temperature extremes makes it a good candidate for potential travel through space in a meteorite. Researchers have been testing its ability to survive under a wide range of conditions in the EXPOSE-R test facility. "Using optical filters and various artificial meteorite materials, we have created different environments for these microorganisms," states Panitz. Opening up the sample containers In the experiment carriers, some of the samples were exposed to an inert gas atmosphere, while others were exposed to vacuum conditions. Some of the carriers, each of which contained ten million spores, were exposed to ultraviolet radiation through eight-millimetre-thick highly ultraviolet-transparent glass. Other samples received a reduced radiation dose through optical filter wheels. The microorganisms placed on the lower two of the three stacked trays remained completely protected from extraterrestrial UV radiation. "The spores have been subjected to various radiation environments; those that were exposed to the entire spectrum of radiation will probably have died because their cells are unable to repair the damage incurred," explains the astrobiologist. "At lower doses, more of the microorganisms will probably have survived." The researchers also simulated different scenarios with the meteorite dust – some microorganisms were covered with it, while others were mixed together with it. Protection for a journey through space The same experiment that was being carried out on the ISS was also being conducted on the ground by DLR researchers at the Planetary and Space Simulation Facility.
In the vacuum test facilities at Cologne, 300 samples of bacillus subtilis in meteorite dust were exposed to virtually the same conditions as the microorganisms in space. The temperature, vacuum and radiation parameters were reported by the ISS, and the conditions were then simulated in the laboratory to create a similar environment for the samples. "We have a set of comparison samples here on Earth," states Corinna Panitz. "But we can't replicate ionising radiation and zero gravity as they only really exist on this scale in space." In addition to the DLR scientists' 300 samples, an additional 800 from EXPOSE-R returned to Earth on board Discovery. Gerda Horneck is the coordinator of the Response of Organisms to Space Environment (ROSE) consortium; this is why the DLR team not only prepared the entire test facility at the start of this mission, but is also in charge of assigning and physically sending the samples to the other scientists involved from around the world. This is when the real work starts; DLR researchers estimate that it will take about one year to investigate and evaluate all of these samples. "During the evaluation stage of this experiment, we analyse precisely how much protection the meteorite material is able to offer the microorganisms." The answer to this question could shed light on whether or not organisms in a meteorite might be able to travel to a nearby planet. "These samples from the ISS will help us to better understand the origin, development and possible dissemination of life in the Universe. An unprotected cell would never be able to survive the conditions of a long journey in space – but it may be able to do so when it is inside a meteorite."
Water Evaporated from Trees Cools Global Climate, Researchers Find ScienceDaily (Sep. 14, 2011) — Scientists have long debated about the impact on global climate of water evaporated from vegetation. New research from Carnegie's Global Ecology department concludes that evaporated water helps cool Earth as a whole, not just the local area of evaporation, demonstrating that evaporation of water from trees and lakes could have a cooling effect on the entire atmosphere. These findings, published Sept. 14 in Environmental Research Letters, have major implications for land-use decision making. Evaporative cooling is the process by which a local area is cooled by the energy used in the evaporation process, energy that would have otherwise heated the area's surface. It is well known that the paving over of urban areas and the clearing of forests can contribute to local warming by decreasing local evaporative cooling, but it was not understood whether this decreased evaporation would also contribute to global warming. Earth has been getting warmer over at least the past several decades, primarily as a result of the emissions of carbon dioxide from the burning of coal, oil, and gas, as well as the clearing of forests. But because water vapor plays so many roles in the climate system, the global climate effects of changes in evaporation were not well understood. The researchers even thought it was possible that evaporation could have a warming effect on global climate, because water vapor acts as a greenhouse gas in the atmosphere. Also, the energy taken up in evaporating water is released back into the environment when the water vapor condenses and returns to earth, mostly as rain. Globally, this cycle of evaporation and condensation moves energy around, but cannot create or destroy energy. So, evaporation cannot directly affect the global balance of energy on our planet. Article continues: http://www.sciencedaily.com/releases/2011/09/110914161729.htm
23rd July 1994, 11.55 UTC - received and processed at ESOC (Darmstadt) MetOp is a series of three meteorological operational polar orbiting satellites, the first of which, MetOp-1, is the prototype. The instruments on MetOp will produce high-resolution images, vertical temperature and humidity profiles, and temperatures of the land and ocean surface on a global basis. Also on board the satellites will be instruments for monitoring ozone and wind flow over the oceans. Those instruments will be of significant value to meteorologists and other scientists, particularly those studying the global climate. The first launch is planned for 2006 as part of an international joint system in cooperation with the USA. Credits: ESA-Silicon World
Search for extraterrestrials with your desktop computer

Despite all the reports of the rapidly shrinking globe—with modern technology like the Internet, wireless handheld computers, and satellite global positioning systems—the universe seems to get lonelier every day. The search for intelligent life in the universe continues, although nothing has been found…yet. That "yet" is what keeps people looking and actually, more specifically, listening. In fact, thanks to recent developments the search will soon be stepped up considerably. The Search for Extraterrestrial Intelligence (SETI) program, derided in some scientific circles as a waste of time, received a good measure of publicity with the success of Cal Berkeley's groundbreaking SETI@Home project.

Too Much Data

The Power of the Internet

The project really got rolling when the website was launched in May 1999. It was originally slated to end in two years, but thanks to its wildly surprising popularity and additional funding, it has been prolonged indefinitely. By now, more than 3 million volunteers in over 220 countries have gone to the SETI@Home website and downloaded the software in order to use their computers for the cause. The power and cost efficiency of the giant network of personal computers is boggling. Consider this: The most powerful computer, IBM's ASCI White, is rated at 12 teraflops and costs $110 million. The computational power of SETI@Home runs at about 15 teraflops and so far has cost just $500,000.

High Costs Lead to Innovations

The Project Phoenix observations are currently being made using the 1,000-foot radio telescope at Arecibo, Puerto Rico, and the scientists have gone through a large portion of the stars on the Phoenix hit list, listening to two billion channels for each star targeted. But the dish at Arecibo is used for SETI-related purposes only a fraction of the time. The SETI project will soon have its own SETI-dedicated telescope to broaden the search.
The idea for an all-SETI telescope is not new, but earlier efforts always failed due to the high costs. The new Allen Telescope Array (ATA), to be built in northern California, will differ in practice, appearance, and cost from optical and radio telescopes currently in use. It will be constructed using hundreds of mass-produced small dishes. The telescope will use new technologies along with the large amounts of affordable computer processing that the SETI@Home project has unearthed. By doing so, it will be possible for the Allen Telescope Array to examine up to a dozen SETI target stars simultaneously and be sensitive to signals over a very wide range of frequencies.

New Observatory in Northern California

Under current plans, the Allen Telescope Array will be developed in two phases. The first phase began last year with the development of the prototype unveiled in April, and will culminate with a second, larger prototype in early 2003, one that will actually get started on SETI and radio astronomy research. At that point, with all the new technologies proven, a second-stage technical and funding review will occur. The ATA's five-meter antennas are mass-produced and extremely cost-effective. Named for benefactor and Microsoft co-founder Paul Allen, the array will be located at the Hat Creek Observatory, 290 miles northeast of San Francisco on a site operated by UC Berkeley's Radio Astronomy Laboratory. The Hat Creek Observatory is located in an area that is "radio quiet," thereby reducing the level of interfering signals from man-made sources.

Fact Monster™ Database, © 2007 Pearson Education, Inc. All rights reserved.
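The cost-efficiency claim in the article above can be made concrete with a little arithmetic. This is only a back-of-the-envelope sketch using the figures quoted: IBM's ASCI White at 12 teraflops for $110 million versus SETI@Home at about 15 teraflops for roughly $500,000.

```python
# Dollars per teraflop for each system, using the article's figures
asci_white_cost_per_tf = 110_000_000 / 12   # IBM ASCI White: $110M for 12 teraflops
seti_cost_per_tf = 500_000 / 15             # SETI@Home: ~$500,000 for ~15 teraflops

print(round(asci_white_cost_per_tf))   # roughly $9.2 million per teraflop
print(round(seti_cost_per_tf))         # roughly $33,000 per teraflop

# How many times cheaper per teraflop is the volunteer network?
print(round(asci_white_cost_per_tf / seti_cost_per_tf))  # ~275x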
New Zealand's capital city lies within the earthquake-generating collision zone between two of the Earth's great tectonic plates, and sits on top of one of the zone's most active geological faults - the Wellington Fault. The Wellington Fault forms distinctive landscape features running right through the central city. Intensive research has been done to understand the nature of the fault and the best ways to reduce possible earthquake damage and loss.
- Wellington's Shaky Foundations
- How often do earthquakes occur along the fault?
- How much do the Wellington fault lines move?
- What would a major Wellington earthquake be like?
- How do we know which fault is most likely to rupture next in Wellington?
Tropical Waves Characteristics
- Interval/period of 3-4 days between waves
- Lasting from one week to several weeks
- Propagating at 10-15 kt
- Wavelength of 2,000-2,500 km
- Extending vertically from the surface (SFC) to 5 km

A westward traveling tropical wave manifests quite well in the lower atmosphere. In the absence of satellite imagery, RAOBs, ship synoptic and surface observations are the best tools for finding these perturbations. Knowledge of climatology across the region is key for tropical wave detection, as shifts in the prevailing wind flow will be the first clue of an approaching tropical wave. Over the eastern Caribbean, the prevailing easterlies will take a more NE component as a tropical wave approaches. As the wave axis moves over the area the easterlies will return, but as it passes the winds will take an ESE component. Over the southeastern Caribbean the tropical waves are harder to find, as their circulation tends to be masked by the ITCZ anchoring low over the Gulf of Panama. Over Panama-Costa Rica the flow during the wet season has a NE component, except when a tropical wave moves west across the region inducing an ESE rotation of the mean flow. Strong waves can then draw the ITCZ north across Panama/Costa Rica into the southern Caribbean.
Nereid was discovered in 1949 through Earth-based telescopes. Little is known about Nereid, which is slightly smaller than Proteus, having a diameter of 211 mi (340 km). The satellite's surface reflects about 14% of the sunlight that strikes it. Nereid's orbit is the most eccentric in the solar system, ranging from about 841,100 mi (1,353,600 km) to 5,980,200 mi (9,623,700 km). Information Please® Database, © 2007 Pearson Education, Inc. All rights reserved.
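As a quick consistency check of the quoted figures, the orbital eccentricity can be computed from the nearest and farthest distances. This is a sketch using the standard two-body relation e = (r_max - r_min) / (r_max + r_min), with the distances taken from the article.

```python
# Nereid's nearest and farthest orbital distances, in km (from the article)
r_min = 1_353_600
r_max = 9_623_700

# Standard two-body eccentricity relation
eccentricity = (r_max - r_min) / (r_max + r_min)
print(round(eccentricity, 2))  # 0.75 -- an extremely elongated orbit
```

A value of about 0.75 is indeed far higher than that of any planet or major moon, consistent with the article's claim.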
1. Whales are the largest animals that have ever lived on earth and are the largest animals that live in the ocean. Whales are even bigger than the largest dinosaur. It is believed that millions of years ago, whales probably walked upon land. Their back legs disappeared and their front legs became flippers. Blue whales can weigh over 150 tons and be over 100 feet in length. Humpback whales are also big, weighing up to 45 tons. Flippers of the humpback whale can be as long as 15 feet.

2. Whales are mammals, so they feed milk to their babies and breathe air. Since whales are not fish they do not have gills, so they cannot breathe under water. They must come up to the surface of the water to get air. The air is breathed in and out through their "blowhole," which is on the top of their head.

3. Whales live in large groups called "herds." A baby whale is called a "calf."

4. Blue whales and killer whales can be found in every ocean around the world. Whales "migrate" further than any other animal. They eat during the summer months, building up layers of blubber. When the water begins to cool, the whales begin their migration to warmer waters. They do not eat during their migration. All they do is swim and rest for short periods of time. Sometimes, when whales are migrating, they swim very close to the shore and can be seen "blowing" and jumping out of the water. This jumping is called "breaching."

5. When whales sleep, they stay at the top of the water, with their blowhole above the surface. Sometimes, a whale will swim up to the surface of the water and quickly blow air out of its blowhole, making a fountain of watery mist, called a "blow."

6. There are two different kinds of whales, the baleen and the toothed whale. Baleen whales are also called "toothless" whales. Instead of teeth, they have plates made of baleen in their jaws. Baleen is a very hard and strong substance and can be compared with the substance that makes up the horns on some animals. Baleen is also called "whalebone." Sea water passes through the baleen and krill (a kind of plankton) gets caught. Whales can eat as much as two tons of krill a day. The fin, gray, humpback, blue, bowhead, Bryde's, right, minke, and sei are baleen whales. Baleen whales have two nostrils, or blowholes. Killer whales and dolphins are both members of the toothed whale group. Toothed whales have teeth instead of baleen. These whales include the beluga or white, bottlenose, narwhal, pilot and sperm whales. Toothed whales eat fish and plants. They have one nostril, or blowhole.

7. Scientists have determined that killer whales can live a maximum of 35 years. They can tell the age of a whale by looking at a cross-section of a killer whale's tooth. Killer whales, like other marine mammals, produce a periodic growth layer on the teeth. By counting these layers, scientists can estimate the animal's age.

8. You can tell an adult male from an adult female by the shape of their dorsal fin. A male's fin is very tall (up to 6 feet) and triangular in shape. A female's fin is shorter (3 feet) and curved.

9. Whales swim by moving their tails up and down and using their flippers, which also help them to turn. Some whales, such as the sei, can swim more than 30 miles per hour.

10. All whales are very noisy. They squeak, moan, groan, and sigh to talk to each other. These underwater sounds can travel great distances. The sounds they make are called "Whale Song." Whales are the loudest animals in the world.
An object the size of asteroid 2012 DA14 appears to hit Earth about once every 1,200 years, Yeomans said. "There really hasn't been a close approach that we know about for an object of this size," he added. On its close approach to Earth, it was predicted the asteroid would be traveling at 7.8 kilometers per second, roughly eight times the speed of a bullet from a high-speed rifle, he said. If it had hit our planet -- which was impossible -- it would have done so with the energy of 2.4 megatons of TNT, Yeomans said. This is comparable to the event in Tunguska, Russia, in 1908. That asteroid entered the atmosphere and exploded, leveling trees over an area of 820 square miles -- about two-thirds the size of Rhode Island. Like that rock, 2012 DA14 would likely not have left a crater. What else is out there? So, we knew that this particular asteroid wasn't going to hit us, but how about all of those other giant rocks floating nearby beyond our atmosphere? NASA says 9,697 objects have been classified as near-Earth objects, or NEOs, as of February 12. Near-Earth objects are comets or asteroids in orbits that allow them to enter Earth's neighborhood. There's an important distinction between these two types of objects: Comets are mostly water, ice and dust, while asteroids are mostly rock or metal. Both comets and asteroids have hit Earth in the past. More than 1,300 near-Earth objects have been classified as potentially hazardous to Earth, meaning that someday they may come close or hit our home planet. NASA is monitoring these objects and updating their locations as new information comes in. Right now, scientists aren't warning of any imminent threats. Yeomans and colleagues are using telescopes on the ground and in space to nail down the precise orbit of objects that might threaten Earth and predict whether the planet could be hit.
Like Alaska's mighty Yukon, a broad river once flowed across Antarctica, following a gentle valley shaped by tectonic forces at a time before the continent became encased in ice. Understanding what happened when rivers of ice later filled the valley could solve certain climate and geologic puzzles about the southernmost continent. The valley is Lambert Graben in East Antarctica, now home to the world's largest glacier. Trapped beneath the ice, the graben (which is German for ditch or trench) is a stunning, deep gorge. But before Antarctica's deep freeze 34 million years ago, the valley was relatively flat and filled by a lazy river, leaving a riddle for geologists to decode: How did Lambert Graben get so steep, and when was it carved? [Full Story: What Antarctica Looked Like Before the Ice] Last year, Yosemite National Park's famed "firefall" was more of a "firedrizzle" due to lack of snow. But this year, the "firefall" is burning bright. Yosemite's Horsetail Fall flows like lava under a clear sky and favorable lighting. It's a small waterfall that makes big news whenever it glows orange during sunset in mid- to late February. This time of year, the sun is setting at just the right angle and the western sky is just clear enough to create the "firefall" effect. When that happens, the waterfall will glow orange for about 10 minutes. [Full Story: Wilderness 'Paparazzi' Flock to Yosemite's 'Firefall' ] A menacing swarm of locusts that entered southern Israel earlier this week has been largely smitten, according to the Israeli government and local reports. But some of the insects' ilk may be back later this week. Officials sprayed the flying insects with pesticide early this morning (March 6), greatly reducing the number of living, flying insects, according to a statement from the Ministry of Agriculture and Rural Development. 
[Full Story: Israel Escapes Locust Plague — For Now] A new photo taken from the International Space Station shows an ecologically diverse area of Panama in a new light. The picture is the first taken by a new Earth-observing tool recently installed on the orbiting science laboratory, and shows the San Pablo River emptying into the Gulf of Montijo, reported NASA's Earth Observatory. [Full Story: New Space Station Camera Snaps First Image of Earth ] Emperor penguins “wear” an invisible shield of cold air that helps to prevent body heat loss, allowing the flightless birds to survive the sub-zero temps of Antarctica, a new study finds. The report, published in the journal Biology Letters, demonstrates just how hardy the birds are. [Full Story: Penguins Wear a Shield of Cold Air in Winter ] Scientists are unveiling a rare octopus that has never been on public display before. And unlike other octopuses, where females have a nasty habit of eating their partners during sex, Larger Pacific Striped Octopuses mate by pressing their beaks and suckers against each other in an intimate embrace. [Full Story: Rare Kissing Octopus Unveiled For the First Time ] The huge ocean sloshing beneath the icy shell of Jupiter's moon Europa likely makes its way to the surface in some places, suggesting astronomers may not need to drill down deep to investigate it, a new study reports. Scientists have detected chemicals on Europa's frozen surface that could only come from the global liquid-water ocean beneath, implying the two are in contact and potentially opening a window into an environment that may be capable of supporting life as we know it. [Full Story: On Jupiter's Moon Europa, Underground Ocean Bubbles Up to Surface ] The latest in a series of late-season snowstorms is barreling toward the East Coast, dumping nearly a foot of snow on some locales as it passes. 
The National Weather Service predicts 8 to 12 inches (20 to 30 centimeters) of snow could fall in the Mid-Atlantic states tonight (March 5), with up to 18 inches (45 cm) in West Virginia. Tomorrow (March 6), traffic snarls are expected along Interstate 95 as the system collides with warm air over the East Coast, pummeling northern Virginia, Washington, D.C., Maryland, N.Y.'s Long Island and southern Connecticut with heavy, wet snow. [Full Story: Snowstorm Threatening East Coast Seen from Space ] Camels are the poster animals for the desert, but researchers now have evidence that these shaggy beasts once lived in the Canadian High Arctic. The fossil remains of a 3.5-million-year-old camel were found on Ellesmere Island in Canada's northernmost territory, Nunavut. The camel was about 30 percent bigger than modern camels and was identified using a technique called collagen fingerprinting. The finding, detailed today (March 5) in the journal Nature Communications, suggests that modern camels stemmed from giant relatives that lived in a forested Arctic that was somewhat warmer than today. [Full Story: Giant Camels Roamed the Arctic 3.5 Million Years Ago ] In the second century, an ethnically Greek Roman named Galen became doctor to the gladiators. His glimpses into the human body via these warriors' wounds, combined with much more systematic dissections of animals, became the basis of Islamic and European medicine for centuries. Galen's texts wouldn't be challenged for anatomical supremacy until the Renaissance, when human dissections — often in public — surged in popularity. But doctors in medieval Europe weren't as idle as it may seem, as a new analysis of the oldest-known preserved human dissection in Europe reveals. [Full Story: Grotesque Mummy Head Reveals Advanced Medieval Science ] The European Union has launched a new program to tackle the threat of space junk, which litters the corridors of Earth orbit. 
Space junk is man-made debris — spent rocket stages, dead satellites and even lost spacewalker tools — orbiting Earth. These bits of detritus pose a risk to orbiting satellites, which even a small piece of space trash could damage or destroy. [Full Story: Europe Takes Aim at Space Junk Menace ]
by Yan Zhang, Princeton University

The 2008 Olympic Games in Beijing have focused attention on the problems of air quality in urban environments and will serve as an important platform for developing and testing new technologies and procedures for analysis and management of air quality problems. Regional decisions concerning industrial development, agricultural practice and urban policy can play important roles in air quality problems linked to fine particulate matter. The Olympic Games will provide an important research venue for addressing these issues and unique opportunities for advancing novel environmental sensor systems and atmospheric models.

In our work, we will deploy two environmental sensor systems at the Institute of Atmospheric Physics, Chinese Academy of Sciences near the Olympic Stadium in Beijing from June to August 2008 for continuous monitoring of trace gases before, during, and after the Olympic Games. Data from these sensors will be incorporated into analyses using the Weather Research and Forecasting model, a state-of-the-art meteorological model which is coupled with an atmospheric chemistry (WRF-Chem) module. These analyses will be used to examine air quality problems in the Beijing metropolitan region and regional climatology problems linked to trends of decreasing precipitation in the Beijing metropolitan region associated with increased aerosol loadings.

The environmental sensor systems deployed in Beijing use Quantum Cascade Lasers (QCLs) as the core technology for measuring trace gases with "remote sensing" and "point" sensors. QCLs are tiny, tunable mid-infrared (mid-IR) semiconductor laser sources that have extremely broad wavelength coverage (3-20 μm), which includes the wavelength range where trace gases have their strongest absorption features. The lasers are designed to emit at a particular wavelength; thus, by knowing where a gas absorbs best, a laser can be designed for detection of that specific gas.
As a result of new developments in QCLs, laser absorption spectroscopy is becoming a viable alternative to other analytical methods for trace gas sensing. QCLOPS (Quantum Cascade Laser Open Path System) is an "open path" remote sensing system that uses two QCLs for monitoring multiple trace gases. The principal target gases for QCLOPS are ozone, ammonia, and carbon dioxide. Elevated ozone levels in urban regions around the world present one of the greatest air quality and public health challenges associated with industrial and automobile emissions. Ammonia plays an important and complex role in aerosol chemistry in urban environments, and development of sensor systems for ammonia has proven especially challenging. Carbon dioxide is broadly recognized as an important greenhouse gas, and its measurement in urban environments is an important goal of QCLOPS. The laser radiation is transmitted through the air and reflected back by a retro-reflector to a detector. The detector is connected to a data acquisition system and a computer. The computer runs a custom algorithm to calculate concentrations.

NO and NO2 are important ozone precursors and their presence in urban environments is strongly connected to automobile emissions. Detection of NO and NO2 is of great interest for air quality problems linked to elevated ozone concentrations. Fast and sensitive detection of NO can be realized by Faraday rotation spectroscopy. The best NO detection limit (sub-ppbV; parts per billion by volume) can be obtained at approximately 5.3 μm. An "external cavity" (EC) QCL source that precisely coincides with this optimum absorption wavelength was developed, along with a Faraday rotation spectrometer based on the EC-QCL for detection of NO. The measurement technique will allow for sensitive and selective measurements of NO even in the presence of strongly interfering gases (especially water vapor).
A fully automatic and autonomous EC-QCL Faraday rotation spectroscopic sensor system will be deployed at the Beijing test site for continuous atmospheric NO monitoring.

The Weather Research and Forecasting model, coupled with the WRF-Chem atmospheric chemistry module (WRF-Chem), provides a powerful platform for meteorological and air quality forecasting, as well as regional analyses of the impact of anthropogenic emissions on air quality and regional climate. WRF-Chem has been used at Princeton for analyses of aerosol impacts on regional precipitation climatology in the Baltimore and New York City metropolitan region. With the collaboration of the Nansen-Zhu International Center of the Institute of Atmospheric Physics (IAP), the Chinese Academy of Sciences, WRF-Chem will be implemented as a forecasting tool for the Beijing Olympics. An important element of the forecasting system will be integrating observations from sensor systems like QCLOPS into the forecasting process. The Princeton group will also work closely with IAP in studying and understanding how urban aerosols influence local weather and public health through coupled modeling and monitoring analyses.
Carbon nanotubes can be broken down biologically

A team of Swedish and American scientists has shown that carbon nanotubes can be broken down by an enzyme - myeloperoxidase (MPO) - found in white blood cells.

- Previous studies have shown that carbon nanotubes could be used for introducing drugs or other substances into human cells. The problem has been not knowing how to control the breakdown of the nanotubes, which can cause unwanted toxicity and tissue damage. Our study now shows how they can be broken down biologically into harmless components, says Bengt Fadeel, associate professor at the Swedish medical university Karolinska Institutet, in a press release.

Research has shown that laboratory animals exposed to carbon nanotubes via inhalation or through injection into the abdominal cavity develop severe inflammation. This and the tissue changes (fibrosis) that exposure causes lead to impaired lung function and perhaps even to cancer. For example, a year or two ago, alarming reports by other scientists suggested that carbon nanotubes are very similar to asbestos fibres, which are themselves biopersistent and which can cause lung cancer (mesothelioma) in humans a considerable time after exposure.

This current study shows that endogenous MPO can break down carbon nanotubes. This enzyme is expressed in certain types of white blood cell (neutrophils), which use it to neutralise harmful bacteria. Now, however, the researchers have found that the enzyme also works on carbon nanotubes, breaking them down into water and carbon dioxide. The researchers also showed that carbon nanotubes that have been broken down by MPO no longer give rise to inflammation in mice.

- This means that there might be a way to render carbon nanotubes harmless, for example in the event of an accident at a production plant. But the findings are also relevant to the future use of carbon nanotubes for medical purposes, says Fadeel.
The study was led by researchers at Karolinska Institutet, the University of Pittsburgh and the National Institute for Occupational Safety and Health (NIOSH). The findings were presented in Nature Nanotechnology.
Sending a submarine to the bottom of the ocean on Jupiter's icy moon Europa is the most exciting potential mission in planetary science, according to one prominent researcher. Europa's seafloor may well be capable of supporting life as we know it today, said Cornell University's Steve Squyres, lead scientist for NASA's Opportunity Mars rover, which is currently roaming the Red Planet. So a Europa robotic submarine mission is at the top of his wish list, though it likely won't happen anytime soon. "This is fantastic stuff," Squyres said Wednesday at a conference called Nuclear and Emerging Technologies for Space in The Woodlands, Texas. "This is the holy grail of planetary exploration right here." A habitable environment? Many planetary scientists regard Europa, which is slightly smaller than Earth's moon, as the solar system's best bet for harboring life beyond Earth. That's chiefly because Europa appears to have a huge ocean of liquid water sloshing around beneath its icy shell. [ Photos: Europa, Mysterious Icy Moon of Jupiter ] Here on Earth, all life needs to gain a foothold is liquid water and an energy source. Europa likely boasts both, with hydrothermal vents gushing from the seafloor as they do on our planet, researchers say. And Earth's deep-sea vent systems host vibrant ecosystems. "You do the calculations for Europa, and what you find is that there ought to be hydrothermal activity; there ought to be volcanic activity at Europa's seafloor," said Squyres, who recently chaired the U.S. National Research Council's Planetary Science Decadal Survey, which lays out the scientific community's goals for planetary science over the next 10 years. "This is a chance to search for a potentially habitable environment today on another world," he added. A tough mission A Europa submarine mission didn't make the Decadal Survey's list; it's just not feasible at the moment. 
If scientists want to tackle it, they'll have to overcome some serious technical and engineering challenges — such as how to get through Europa's icy crust. "This is one of the hardest missions you can imagine," Squyres said. "You need a power system that will enable you to get onto the surface. You then have to some way find your way down through what might be 10 kilometers of ice. And then you have to release some kind of free-swimming vehicle that is able to go down to the bottom of that ocean and find out what's down there." The Decadal Survey did value a mission called the Jupiter Europa Orbiter (JEO) highly, ranking it the No. 2 priority among multibillion-dollar "flagship" possibilities (No. 1 was a Mars sample-return effort). JEO would study the icy moon from above; it, or something like it, could help pave the way for an eventual submarine mission, researchers say, by identifying thin patches in the moon's icy shell. But with an estimated price tag of $4.7 billion, JEO is not likely to get off the ground in the near future, either. In his 2013 budget request, which was released last month, President Barack Obama allocated just $1.2 billion to NASA's planetary science efforts. That's a 20 percent cut from the current allotment of $1.5 billion, and further reductions are expected over the next several years. As a result, NASA has temporarily shelved its plans for future flagships to other planets and planetary systems, saying it just doesn't have enough money to make them work.
The space agency will continue actively planning cheaper missions, and it will be ready to restart flagships if the funding situation improves, officials have said. © 2013 Space.com. All rights reserved.
The data types you have seen so far are all concrete, in the sense that we have completely specified how they are implemented. For example, the Card class represents a card using two integers. As we discussed at the time, that is not the only way to represent a card; there are many alternative implementations.

An abstract data type, or ADT, specifies a set of operations (or methods) and the semantics of the operations (what they do), but it does not specify the implementation of the operations. That's what makes it abstract.

Why is that useful? When we talk about ADTs, we often distinguish the code that uses the ADT, called the client code, from the code that implements the ADT, called the provider code.

In this chapter, we will look at one common ADT, the stack. A stack is a collection, meaning that it is a data structure that contains multiple elements. Other collections we have seen include dictionaries and lists.

An ADT is defined by the operations that can be performed on it, which is called an interface. The interface for a stack consists of these operations:

__init__: Initialize a new empty stack.
push: Add a new item to the stack.
pop: Remove and return an item from the stack. The item that is returned is always the last one that was added.
is_empty: Check whether the stack is empty.

A stack is sometimes called a "last in, first out" or LIFO data structure, because the last item added is the first to be removed.

The list operations that Python provides are similar to the operations that define a stack. The interface isn't exactly what it is supposed to be, but we can write code to translate from the Stack ADT to the built-in operations. This code is called an implementation of the Stack ADT. In general, an implementation is a set of methods that satisfy the syntactic and semantic requirements of an interface.

Here is an implementation of the Stack ADT that uses a Python list:

class Stack:
    def __init__(self):
        self.items = []

    def push(self, item):
        self.items.append(item)

    def pop(self):
        return self.items.pop()

    def is_empty(self):
        return (self.items == [])

A Stack object contains an attribute named items that is a list of items in the stack.
The initialization method sets items to the empty list. To push a new item onto the stack, push appends it onto items. To pop an item off the stack, pop uses the homonymous (same-named) list method to remove and return the last item on the list. Finally, to check if the stack is empty, is_empty compares items to the empty list. An implementation like this, in which the methods consist of simple invocations of existing methods, is called a veneer. In real life, veneer is a thin coating of good quality wood used in furniture-making to hide lower quality wood underneath. Computer scientists use this metaphor to describe a small piece of code that hides the details of an implementation and provides a simpler, or more standard, interface. A stack is a generic data structure, which means that we can add any type of item to it. The following example pushes two integers and a string onto the stack:

>>> s = Stack()
>>> s.push(54)
>>> s.push(45)
>>> s.push("+")

We can use is_empty and pop to remove and print all of the items on the stack:

while not s.is_empty():
    print s.pop(),

The output is + 45 54. In other words, we just used a stack to print the items backward! Granted, it's not the standard format for printing a list, but by using a stack, it was remarkably easy to do. You should compare this bit of code to the implementation of print_backward in the last chapter. There is a natural parallel between the recursive version of print_backward and the stack algorithm here. The difference is that print_backward uses the runtime stack to keep track of the nodes while it traverses the list, and then prints them on the way back from the recursion. The stack algorithm does the same thing, except that it uses a Stack object instead of the runtime stack. In most programming languages, mathematical expressions are written with the operator between the two operands, as in 1 + 2. This format is called infix. An alternative used by some calculators is called postfix.
In postfix, the operator follows the operands, as in 1 2 +. The reason postfix is sometimes useful is that there is a natural way to evaluate a postfix expression using a stack: scan the expression from left to right; when you encounter an operand, push it onto the stack; when you encounter an operator, pop two operands off the stack, apply the operator to them, and push the result back on. When you reach the end of the expression, the single value left on the stack is the value of the expression. To implement the previous algorithm, we need to be able to traverse a string and break it into operands and operators. This process is an example of parsing, and the results (the individual chunks of the string) are called tokens. You might remember these words from Chapter 1. Python provides a split method in both the string and re (regular expression) modules. The function string.split splits a string into a list using a single character as a delimiter. For example:

>>> import string
>>> string.split("Now is the time", " ")
['Now', 'is', 'the', 'time']

In this case, the delimiter is the space character, so the string is split at each space. The function re.split is more powerful, allowing us to provide a regular expression instead of a delimiter. A regular expression is a way of specifying a set of strings. For example, [A-Za-z] is the set of all letters and [0-9] is the set of all digits. The ^ operator negates a set, so [^0-9] is the set of everything that is not a digit, which is exactly the set we want to use to split up postfix expressions:

>>> import re
>>> re.split("([^0-9])", "123+456*/")
['123', '+', '456', '*', '', '/', '']

Notice that the order of the arguments is different from string.split; here the delimiter pattern comes before the string. The resulting list includes the operands 123 and 456 and the operators +, * and /. It also includes two empty strings, which re.split inserts wherever two delimiters are adjacent or a delimiter falls at the end of the string. To evaluate a postfix expression, we will use the parser from the previous section and the algorithm from the section before that.
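The split shown above is easy to verify in a modern interpreter (in Python 3 the string module's split function is gone, replaced by the str.split method, but re.split works exactly as described):

```python
import re

# Split on any single non-digit character; the delimiters are kept
# in the result because the pattern is in a capturing group.
tokens = re.split(r"([^0-9])", "123+456*/")
print(tokens)  # ['123', '+', '456', '*', '', '/', '']

# Dropping the empty strings leaves just the operands and operators.
print([t for t in tokens if t])  # ['123', '+', '456', '*', '/']
```

Filtering out the empty strings, as the evaluator below does, is the usual way to cope with adjacent delimiters.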
To keep things simple, we'll start with an evaluator that only implements the operators + and *:

def eval_postfix(expr):
    import re
    token_list = re.split("([^0-9])", expr)
    stack = Stack()
    for token in token_list:
        if token == '' or token == ' ':
            continue
        if token == '+':
            sum = stack.pop() + stack.pop()
            stack.push(sum)
        elif token == '*':
            product = stack.pop() * stack.pop()
            stack.push(product)
        else:
            stack.push(int(token))
    return stack.pop()

The first condition takes care of spaces and empty strings. The next two conditions handle operators. We assume, for now, that anything else must be an operand. Of course, it would be better to check for erroneous input and report an error message, but we'll get to that later. Let's test it by evaluating the postfix form of (56+47)*2:

>>> print eval_postfix("56 47 + 2 *")
206

That's close enough. One of the fundamental goals of an ADT is to separate the interests of the provider, who writes the code that implements the ADT, and the client, who uses the ADT. The provider only has to worry about whether the implementation is correct (in accord with the specification of the ADT) and not how it will be used. Conversely, the client assumes that the implementation of the ADT is correct and doesn't worry about the details. When you are using one of Python's built-in types, you have the luxury of thinking exclusively as a client. Of course, when you implement an ADT, you also have to write client code to test it. In that case, you play both roles, which can be confusing. You should make some effort to keep track of which role you are playing at any moment.
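The text defers input checking until later. As a sketch of what that might look like, here is a self-contained Python 3 version (eval_postfix_checked is a name invented for this example, not from the text) that reports malformed input instead of failing obscurely:

```python
import re

class Stack:
    """List-backed stack, as in the text."""
    def __init__(self):
        self.items = []
    def push(self, item):
        self.items.append(item)
    def pop(self):
        return self.items.pop()
    def is_empty(self):
        return self.items == []

def eval_postfix_checked(expr):
    """Evaluate a postfix expression of non-negative integers with
    + and *, raising ValueError on malformed input."""
    stack = Stack()
    for token in re.split(r"([^0-9])", expr):
        if token in ("", " "):
            continue
        if token in ("+", "*"):
            # An operator needs two operands already on the stack.
            if stack.is_empty():
                raise ValueError("operator %r lacks operands" % token)
            right = stack.pop()
            if stack.is_empty():
                raise ValueError("operator %r lacks a second operand" % token)
            left = stack.pop()
            stack.push(left + right if token == "+" else left * right)
        elif token.isdigit():
            stack.push(int(token))
        else:
            raise ValueError("unrecognized token: %r" % token)
    if stack.is_empty():
        raise ValueError("empty expression: %r" % expr)
    result = stack.pop()
    if not stack.is_empty():
        raise ValueError("leftover operands in: %r" % expr)
    return result

print(eval_postfix_checked("56 47 + 2 *"))  # 206
```

The final check catches expressions like "1 2" that leave extra operands behind, a class of error the simple version silently ignores.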
Media Contact: Emma MacMillan | Communications and External Relations In Alaska's backyard OAK RIDGE, Tenn., Nov. 23, 2011 Making well-informed computational models of an ever-changing, vast Alaskan landscape presents challenges that Oak Ridge National Laboratory researchers are working to overcome. Modeling the ever-changing Alaskan landscape is the focus of the Next Generation Ecosystem Experiments, a collaborative research project coordinated by ORNL. The Next Generation Ecosystem Experiments (NGEE) Arctic project, a collaboration between ORNL and other national laboratories and universities, seeks to investigate how permafrost degradation and associated effects on hydrology, landscape evolution and vegetation dynamics will affect climate. Researchers hope to contribute this information to improving climate models of the Arctic. Today's models do not include all the physical and biological processes of the landscape, and there are processes operating at small scales where understanding is inadequate. "Current models can be improved by identifying areas where climate predictions are sensitive to biogeochemical processes," said Peter Thornton, climate modeler. "The processes that are going on in the Arctic are dependent on details in the landscape on scales of feet to miles." Given its unique model-driven nature, NGEE will likely span 10 to 12 years, but with informed predictive abilities, it could have an impact that extends across centuries. "There's not much modeling of the highly coupled landscape processes in Alaska and the Arctic, so we want to study processes that are important to climate modeling to contribute to current and future models," said Stan Wullschleger, NGEE project coordinator and ORNL plant physiologist. As researchers from across the country come together to work in one location, they will be striving to answer a circular question: How will a changing climate affect the Arctic, and how will this in turn impact the planet's climate?
"For millennia, carbon has been locked into the frozen soil of Alaska," said Rich Norby, a member of the NGEE team and ORNL physiological ecologist. "With warming, permafrost could thaw and release this stored up carbon. It's a real concern for the climate, but our understanding is incomplete." In preparation for the project, ORNL researchers are working to translate what they've done in the past to what they are now learning about the Alaska tundra. Alaskan systems are complex and interconnected: a cascading effect means that one change leads to another, and then to another. One piece of this enormous puzzle is getting a lay of the land and talking with community leaders to figure out what field study sites are most representative of Alaska and the Arctic. "It makes a tremendous difference to be able to go to the sites in Barrow, Nome and elsewhere in Alaska to get an idea of where we'll be conducting research," said Norby. Thornton agrees. "It's also important for modelers to be on the field site to have an understanding of the dynamics; otherwise, the models put too much faith in or ignore important aspects of the system," Thornton said. Beyond the magnitude and implications of the science behind this project, researchers find themselves struck by the Alaskan landscape. Norby admires the awe-inspiring vastness. Wullschleger values the diversity of the people and enjoys listening to the stories they tell about how ecosystems have changed over their lifetimes. ORNL is in charge of the overall coordination of the project and will also ensure that plant and microbiology processes are incorporated into climate models. Partners at Los Alamos, Lawrence Berkeley, and Brookhaven National Laboratory and the University of Alaska Fairbanks make unique contributions in other areas, such as geomorphology, geophysics, hydrology, landscape evolution, and modeling. "We'll be working at the microscale, macroscale, global scale and just about everywhere in between," Norby said.
Almost all we know about the universe is derived from the observation of photons. As astronomers have developed instruments to exploit new regions of the electromagnetic spectrum, from radio waves to gamma rays, fascinating new objects have been revealed. At very high energies, however, the universe itself becomes opaque to photons, making astronomy difficult. Many observed phenomena, such as Gamma Ray Bursts, are poorly understood. Other objects which must exist, like the sources of the high energy cosmic rays, have not yet been identified. Neutrinos offer a useful alternative to photons as astronomical messenger particles. Over the last two decades, neutrinos from astrophysical sources have proved useful for both astrophysics and particle physics. The IceCube neutrino telescope, under construction at the South Pole, will give us a new window on the universe at TeV energies and above. It will also permit us to address topics in particle physics such as extra dimensions, supersymmetry, dark matter, neutrino oscillations, and magnetic monopoles.
A large Cold War supply of helium-3 has begun to rapidly run out, due to heavy demand from U.S. scientists who need the gas for neutron detectors and cryogenic experiments. Almost 60,000 liters of helium-3 were used in 2007 and 2008, compared to just 10,000 liters used annually about 10 years ago. A House subcommittee has been convened to search for a solution this week, New Scientist reports.
Mar. 11, 2010 Distant galaxy clusters mysteriously stream at a million miles per hour along a path roughly centered on the southern constellations Centaurus and Hydra. A new study led by Alexander Kashlinsky at NASA's Goddard Space Flight Center in Greenbelt, Md., tracks this collective motion -- dubbed the "dark flow" -- to twice the distance originally reported. "This is not something we set out to find, but we cannot make it go away," Kashlinsky said. "Now we see that it persists to much greater distances -- as far as 2.5 billion light-years away." The new study appears in the March 20 issue of The Astrophysical Journal Letters. The clusters appear to be moving along a line extending from our solar system toward Centaurus/Hydra, but the direction of this motion is less certain. Evidence indicates that the clusters are headed outward along this path, away from Earth, but the team cannot yet rule out the opposite flow. "We detect motion along this axis, but right now our data cannot state as strongly as we'd like whether the clusters are coming or going," Kashlinsky said. The dark flow is controversial because the distribution of matter in the observed universe cannot account for it. Its existence suggests that some structure beyond the visible universe -- outside our "horizon" -- is pulling on matter in our vicinity. Cosmologists regard the microwave background -- a flash of light emitted 380,000 years after the universe formed -- as the ultimate cosmic reference frame. Relative to it, all large-scale motion should show no preferred direction. The hot X-ray-emitting gas within a galaxy cluster scatters photons from the cosmic microwave background (CMB). Because galaxy clusters don't precisely follow the expansion of space, the wavelengths of scattered photons change in a way that reflects each cluster's individual motion. This results in a minute shift of the microwave background's temperature in the cluster's direction. 
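The magnitude of this temperature shift can be written down explicitly. The following is the standard first-order expression from the Sunyaev-Zel'dovich literature (not quoted in the article; the numbers below are rough, illustrative values):

```latex
\frac{\Delta T}{T_{\mathrm{CMB}}} = -\,\tau\,\frac{v_r}{c},
\qquad
\tau = \sigma_T \int n_e \,\mathrm{d}l ,
```

where \(v_r\) is the cluster's radial peculiar velocity, \(c\) the speed of light, and \(\tau\) the optical depth of the cluster's hot gas to Thomson scattering (\(\sigma_T\) is the Thomson cross-section and \(n_e\) the electron density along the line of sight). For a massive cluster with \(\tau \sim 5\times10^{-3}\) moving at the quoted million miles per hour (\(v_r \approx 450\) km/s, so \(v_r/c \approx 1.5\times10^{-3}\)), the shift is at most of order \(2.7\,\mathrm{K} \times 7.5\times10^{-6} \approx 20\,\mu\mathrm{K}\) toward the cluster center, far below the noise for any single cluster, which is why the signal must be stacked over hundreds of clusters.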
The change, which astronomers call the kinematic Sunyaev-Zel'dovich (KSZ) effect, is so small that it has never been observed in a single galaxy cluster. But in 2000, Kashlinsky, working with Fernando Atrio-Barandela at the University of Salamanca, Spain, demonstrated that it was possible to tease the subtle signal out of the measurement noise by studying large numbers of clusters. In 2008, armed with a catalog of 700 clusters assembled by Harald Ebeling at the University of Hawaii and Dale Kocevski, now at the University of California, Santa Cruz, the researchers applied the technique to the three-year WMAP data release. That's when the mystery motion first came to light. The new study builds on the previous one by using the five-year results from WMAP and by doubling the number of galaxy clusters. "It takes, on average, about an hour of telescope time to measure the distance to each cluster we work with, not to mention the years required to find these systems in the first place," Ebeling said. "This is a project requiring considerable follow-through." According to Atrio-Barandela, who has focused on understanding the possible errors in the team's analysis, the new study provides much stronger evidence that the dark flow is real. For example, the brightest clusters at X-ray wavelengths hold the greatest amount of hot gas to distort CMB photons. "When processed, these same clusters also display the strongest KSZ signature -- unlikely if the dark flow were merely a statistical fluke," he said. In addition, the team, which now also includes Alastair Edge at the University of Durham, England, sorted the cluster catalog into four "slices" representing different distance ranges. They then examined the preferred flow direction for the clusters within each slice. While the size and exact position of this direction display some variation, the overall trends among the slices exhibit remarkable agreement.
The researchers are currently working to expand their cluster catalog in order to track the dark flow to about twice the current distance. Improved modeling of hot gas within the galaxy clusters will help refine the speed, axis, and direction of motion. Future plans call for testing the findings against newer data released from the WMAP project and the European Space Agency's Planck mission, which is also currently mapping the microwave background. Reference: A. Kashlinsky, F. Atrio-Barandela, H. Ebeling, A. Edge, and D. Kocevski. A New Measurement of the Bulk Flow of X-Ray Luminous Clusters of Galaxies. The Astrophysical Journal, 2010; 712 (1): L81. DOI: 10.1088/2041-8205/712/1/L81
Many schoolchildren use electricity- and magnetism-related science projects and experiments in their school science fairs. Children are always curious to know what a magnet is, or how electricity works. You can make electricity and magnetism science fair projects using batteries, balloons and other simple materials; these physics projects are interesting and hair-raising. Electricity, magnetism and electromagnetic waves are the topics most often used in these school physics projects. Here we have a video to show more about these electricity and magnetism physics experiments.
More In This Article Across two decades and thousands of pages of reports, the world's most authoritative voice on climate science has consistently understated the rate and intensity of climate change and the danger those impacts represent, say a growing number of studies on the topic. This conservative bias, say some scientists, could have significant political implications, as reports from the group – the U.N. Intergovernmental Panel on Climate Change – influence policy and planning decisions worldwide, from national governments down to local town councils. As the latest round of United Nations climate talks in Doha wrap up this week, climate experts warn that the IPCC's failure to adequately project the threats that rising global carbon emissions represent has serious consequences: The IPCC’s overly conservative reading of the science, they say, means governments and the public could be blindsided by the rapid onset of the flooding, extreme storms, drought, and other impacts associated with catastrophic global warming. "We're underestimating the fact that climate change is rearing its head," said Kevin Trenberth, head of the climate analysis section at the National Center for Atmospheric Research and a lead author of key sections of the 2001 and 2007 IPCC reports. "And we're underestimating the role of humans, and this means we're underestimating what it means for the future and what we should be planning for." Underplaying the intensity A comparison of past IPCC predictions against 22 years of weather data and the latest climate science find that the IPCC has consistently underplayed the intensity of global warming in each of its four major reports released since 1990. The drastic decline of summer Arctic sea ice is one recent example: In the 2007 report, the IPCC concluded the Arctic would not lose its summer ice before 2070 at the earliest. 
But the ice pack has shrunk far faster than any scenario scientists felt policymakers should consider; now researchers say the region could see ice-free summers within 20 years. Sea-level rise is another. In its 2001 report, the IPCC predicted an annual sea-level rise of less than 2 millimeters per year. But from 1993 through 2006, the oceans actually rose 3.3 millimeters per year, more than 50 percent above that projection. Some climate researchers also worry that recent institutional changes could accentuate the organization's conservative bias in the fifth IPCC assessment, to be released in parts starting in September 2013. The tendency to underplay climate impacts needs to be recognized, conclude the authors of a recent paper exploring this bias. Failure to do so, they wrote in their study published last month in the journal Global Environmental Change, "could prevent the full recognition, articulation and acknowledgement of dramatic natural phenomena that may in fact be occurring." The conservative bias stems from several sources, scientists say. Part can be attributed to science's aversion to drama and dramatic conclusions: So-called outlier events – results at far ends of the spectrum – are often pruned. Such controversial findings require years of painstaking, independent verification. Yet some events in nature are dramatic, conclude University of California, San Diego, history and science professor Naomi Oreskes and Princeton University geosciences professor Michael Oppenheimer, co-authors of the study looking at the IPCC's bias. "If the drama arises primarily from social, political or economic impacts," they wrote, "then it is crucial that the associated risk be understood fully, and not discounted.”
The journal Science published a paper this week about the increase and spread of dead zones in the world's oceans. These dead zones are created when large amounts of nutrients decompose and use up all of the oxygen in the water, causing mass deaths (literally suffocation) of marine species. There are various causes for this, but changes in wind, temperature and current regimes are increasingly thought to play an important role. Interestingly, the paper concludes that these dead zones may be a symptom of global warming. The description of thousands of dead crabs and other crustaceans found at the bottom of the ocean in one of these dead zones brings to mind the severe coral bleaching event which occurred in the Seychelles and elsewhere following the one-month extreme warming of parts of the Indian Ocean in 1998. What was left behind was a mass grave of dead corals and loss of livelihood to islanders and coastal people. In fact, in a statement by the US Department of State in 1999 the following important conclusion was made: "These events (i.e. the coral bleaching of 1998) cannot be accounted for by localized stressors or natural variability alone. Nor can El Niño by itself explain the patterns observed worldwide. Rather, the impact of these factors was likely accentuated by an underlying global cause. Thus the geographic extent, increasing frequency, and regional severity of mass bleaching events are likely a consequence of a steadily rising baseline of marine temperatures, driven by anthropogenic global warming." The number of dead zones has effectively doubled each decade since scientists started to document them. Notable areas include parts of the US coastline, the South African and Namibian coastlines, and other parts of the world. These dead zones further threaten the livelihood of people who depend upon the coast to survive. We once believed that the oceans could take all of the world's waste; we were soon proved wrong.
We are also wrong about the atmosphere: it cannot take all of our waste. We need to reduce emissions, and we need to promote renewables so that there can be more research, prices can go down, and both China and India will be able to afford energy technologies. That is the message I have for the skeptics: the signal for immediate action on climate change can be found in the oceans. Islanders have learnt to live with, respect and protect the oceans; the continental world needs to understand how important the ocean is to them as well.
This species of dinoflagellate is a rare visitor to the Rhode River area. The salinity of the mid-Chesapeake Bay is usually too low to support D. accuminata. It has been known to cause toxic shellfish poisoning. The cells average 48 µm in length and 16 µm in width, with an average total volume of 3.2×10³ cubic microns. During late winter and early spring of 2002, D. accuminata grew to bloom proportions in the Potomac River, causing oyster beds to be closed. A few wandering cells were found in the main bay and near the mouth of the Rhode River.
Air Pollution as Seen From the Skies From Mt. Etna to China to the Sahara, these striking satellite images of air pollution are from both natural and man-made causes - By Sarah Zielinski - Smithsonian.com, April 20, 2010 Mount Etna, on the Italian island of Sicily, is Europe's most active volcano, having erupted half a dozen times in the past decade alone. During an eruption, a volcano spews gases that had been dissolved in molten rock. One of those gases is sulfur dioxide, which turns into sulfuric acid in the atmosphere and then condenses into sulfate aerosols. Those aerosols can linger for months in the upper atmosphere, where they block sunlight and destroy atmospheric ozone.
<urn:uuid:16399f08-cd5e-464e-b7dd-94873f60b037>
3.140625
153
Truncated
Science & Tech.
37.703046
1,673
Total Hits: 3254 Total Votes: 37 votes Category: Java Script/Cookies and Sessions Submitted on: 2008-03-13 11:22:08 Submitted By: Devesh Khanna Description: A cookie is a way you can store some information about a user visiting your site. The information is stored on the individual's computer, and thus you do not need any extra server space to customize a page for any number of users you may have.
The getcontext() function accesses a different Context object for each thread. Having separate thread contexts means that threads may make changes (such as getcontext().prec=10) without interfering with other threads. Likewise, the setcontext() function automatically assigns its target to the current thread. If setcontext() has not been called before getcontext(), then getcontext() will automatically create a new context for use in the current thread. The new context is copied from a prototype context called DefaultContext. To control the defaults so that each thread will use the same values throughout the application, directly modify the DefaultContext object. This should be done before any threads are started so that there won't be a race condition between threads calling getcontext(). For example:

# Set applicationwide defaults for all threads about to be launched
DefaultContext.prec = 12
DefaultContext.rounding = ROUND_DOWN
DefaultContext.traps = ExtendedContext.traps.copy()
DefaultContext.traps[InvalidOperation] = 1
setcontext(DefaultContext)

# Afterwards, the threads can be started
t1.start()
t2.start()
t3.start()
. . .
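A minimal runnable sketch of this pattern (the worker function, thread arguments, and results dict are invented for this example, not part of the module's API) shows that each thread's changes to its own context leave the other threads' arithmetic untouched:

```python
import threading
from decimal import Decimal, DefaultContext, getcontext, ROUND_DOWN

# Applicationwide defaults, set before any threads are started
# (avoiding the race condition described above).
DefaultContext.prec = 12
DefaultContext.rounding = ROUND_DOWN

results = {}

def worker(name, prec):
    # getcontext() creates this thread's context, copied from
    # DefaultContext; changing prec here affects only this thread.
    getcontext().prec = prec
    results[name] = str(Decimal(1) / Decimal(7))

threads = [threading.Thread(target=worker, args=("low", 4)),
           threading.Thread(target=worker, args=("high", 8))]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results["low"])   # 0.1428
print(results["high"])  # 0.14285714
```

Both threads inherit ROUND_DOWN from DefaultContext, but each truncates 1/7 at its own precision, confirming that the contexts are independent.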
<urn:uuid:dd836b42-b587-4802-a366-4dc7592ec3bd>
2.6875
253
Documentation
Software Dev.
43.943062
1,675
A century of comet data suggests a dark, Jupiter-sized object is lurking at the solar system's outer edge and hurling chunks of ice and dust toward Earth. "We've accumulated 10 years' more data, double the comets we viewed to test this hypothesis," said planetary scientist John Matese of the University of Louisiana. "Only now should we be able to falsify or verify that you could have a Jupiter-mass object out there." In 1999, Matese and colleague Daniel Whitmire suggested the sun has a hidden companion that boots icy bodies from the Oort Cloud, a spherical haze of comets at the solar system's fringes, into the inner solar system where we can see them. In a new analysis of observations dating back to 1898, Matese and Whitmire confirm their original idea: About 20 percent of the comets visible from Earth were sent by a dark, distant planet. This idea was a reaction to an earlier notion that a dim brown-dwarf or red-dwarf star, ominously dubbed Nemesis, has pummelled the Earth with deadly comet showers every 30 million years or so. Later research suggested that mass extinctions on Earth don't line up with the Nemesis predictions, so many astronomers now think that object doesn't exist. "But we began to ask, what kind of an object could you hope to infer from the present data that we are seeing?" Matese said. "What could possibly tickle [comets'] orbits and make them come very close to the sun so we could see them?" Rather than a malevolent death star, a smaller and more benign companion called Tyche (Nemesis' good sister in Greek mythology) could send comets streaming from the Oort Cloud toward Earth. The cosmic snowballs that form the hearts of comets generally hang out in the Oort Cloud until their orbits are nudged by some outside force. This push could come from one of three things, Matese said. The constant gravitational pull of the Milky Way's disk can drag comets out of their icy homes and into the inner solar system. 
A passing star can shake comets loose from the Oort Cloud as it zips by. Or a large companion like Nemesis or Tyche can pull comets out of their comfort zones. Computational models show that comets in each of these scenarios, when their apparent origins are mapped in space, make a characteristic pattern in the sky. "We looked at the patterns and asked, 'Is there additional evidence of a pattern that might be associated with a passing star or with a bound object?'" Matese said.
The name Aculeata is used to refer to a monophyletic lineage of Hymenoptera. The word "Aculeata" is a reference to the defining feature of the group, which is the modification of the ovipositor into a stinger (thus, the group could be called stinging wasps). In other words, the structure that was originally used to lay eggs is modified instead to deliver venom. Not all members of the group can sting; in fact, a great many cannot, either because the ovipositor is modified in a different manner (such as for laying eggs in crevices), or because it is lost altogether. This group includes the bees and ants and all of the eusocial Hymenopterans; it is, in fact, commonly believed that the possession of a venomous sting was one of the important features promoting the evolution of social behavior, as it confers a level of anti-predator defense rarely approached by other invertebrates. The use of the name Aculeata has a long history at the rank of infraorder or division, and it is only with the advent of modern phylogenetics that the higher classifications of insects (and other organisms) have come to reject artificial (paraphyletic) grouping categories. While the Aculeata is a good natural group, containing all the descendants of a single common ancestor, the supposed "other infraorder" of the Apocrita - the "Parasitica" or "Terebrantia" - is not a natural group, just as the "sawflies", the basal lineages of Hymenoptera, are not. The Aculeata are therefore maintained as a taxon, either at infraorder or division rank or as an unranked clade. However, the "Parasitica" must be considered a paraphyletic assemblage; the taxon "Parasitica" is discarded and their interrelationships are the subject of further study. Provisionally, they all can be treated as superfamilies incertae sedis in the Apocrita, without being placed in an infraorder.
It is highly likely that at least some of these parasitic wasps - for example the Stephanoidea - are as closely related to the Aculeata as to other "Parasitica". On the other hand, among the parasitic wasps the Ichneumonoidea seem particularly closely related to the Aculeata. If taxonomic ranks are used, it may therefore be best to treat the latter as a division and divide the Apocrita into some 6 infraorders representing lineages of about equal standing, one of which would unite the Aculeata and the Ichneumonoidea. Note that having the same taxonomic rank does not imply equal evolutionary standing, whereas placement in the same higher-ranked taxon ideally does, or at least implies that regardless of what specific rank they have, the lower-ranked taxa are all part of the same evolutionary radiation. Therefore, were the Aculeata and the Ichneumonoidea placed together in an infraorder, the former would still be considered a division and the latter a superfamily. Despite having different ranks, they would be members of the same taxon and sister lineages.

- Tree of Life Web Project: Aculeata

The Series Aculeata is further organized into finer groupings including:
- Superfamily (3): Apoidea · Chrysidoidea · Vespoidea
- Family (35): Ampulicidae · Andrenidae · Angarosphecidae · Apidae · Armaniidae · Baissodidae · Bethylidae · Bethylonymidae · Bradynobaenidae · Chrysididae · Colletidae · Crabronidae · Dryinidae · Embolemidae · Falsiformicidae · Formicidae · Halictidae · Heterogynaidae · Limnetidae · Megachilidae · Melittidae · Mutillidae · Paleomelittidae · Plumariidae · Pompilidae · Rhopalosomatidae · Sapygidae · Sclerogibbidae · Scolebythidae · Scoliidae · Sierolomorphidae · Sphecidae · Sphecomyrmidae · Tiphiidae · Vespidae

The Ampulicidae, or Cockroach wasps, is a small (approx. 200 species), primarily tropical group of sphecoid wasps, all of which use various cockroaches as prey items for their larvae.
They tend to have elongated jaws, a pronounced neck-like constriction behind the head, a strongly petiolate abdomen, and deep grooves on the thorax. Many are quite ant-like in appearance, though some are brilliant metallic blue or green.

The family Andrenidae is a large, nearly cosmopolitan (absent in Australia) non-parasitic bee family, with most of the diversity in temperate and/or arid areas (warm temperate xeric), including some truly enormous genera (e.g., Andrena with over 1300 species, and Perdita with nearly 800). One of the subfamilies, Oxaeinae, is so different in appearance that it was typically accorded family status, but careful phylogenetic analysis reveals it to be an offshoot within the Andrenidae, very close to the Andreninae.

The Apidae are a large family of bees, comprising the common honey bees, stingless bees (which are also cultured for honey), carpenter bees, orchid bees, cuckoo bees, bumblebees, and various other less well-known groups. The family Apidae presently includes all the genera that were previously classified in the families Anthophoridae and Ctenoplectridae, and most of these are solitary species, though a few are also cleptoparasites. The four groups that were subfamilies in the old family Apidae are presently ranked as tribes within the subfamily Apinae. This trend has been taken to its extreme in a few recent classifications that place all the existing bee families together under the name "Apidae" (or, alternatively, the non-Linnaean clade "Anthophila"), but this is not a widely-accepted practice.

Bradynobaenidae is a family of wasps similar to the Mutillidae. These species are often found in arid regions.
Commonly known as cuckoo wasps, the Hymenopteran family Chrysididae is a very large cosmopolitan group (over 3000 described species) of parasitoid or cleptoparasitic wasps, often highly sculptured, with brilliantly colored metallic-like bodies (thus the common names jewel wasp, gold wasp, or emerald wasp are sometimes used). They are most diverse in desert regions of the world, as they are typically associated with solitary bee and wasp species, which are also most diverse in such areas.

Colletidae is a family of bees whose members are often referred to collectively as plasterer bees or polyester bees, due to the method of smoothing the walls of their nest cells with secretions applied with their mouthparts; these secretions dry into a cellophane-like lining. There are 5 subfamilies, 54 genera, and over 2000 species, all of them evidently solitary, though many nest in aggregations. Two of the subfamilies, among them Hylaeinae, lack the external pollen-carrying apparatus (the scopa) that otherwise characterizes most bees, and instead carry the pollen in their crop. These groups, and in fact most genera in this family, have liquid or semi-liquid pollen masses on which the larvae develop.

Crabronidae is a large family of wasps that includes nearly all of the species formerly comprising the now-defunct superfamily Sphecoidea. It collectively includes well over 200 genera, containing well over 9000 species. Crabronids were originally a part of Sphecidae, but the latter name is now restricted to a separate family based on what was once the subfamily Sphecinae. As this change is very recent, it seems likely that the subfamilies of Crabronidae will each eventually be treated as families in their own right, as they have been by many authorities in the past.

Dryinidae is a family of hymenopteran insects with about 1,400 described species found worldwide.
These are solitary wasps whose larvae are parasitoids on other insects. The only known hosts are Hemiptera, especially leafhoppers.

Ants are social insects of the family Formicidae and, along with the related wasps and bees, belong to the order Hymenoptera. Ants evolved from wasp-like ancestors in the mid-Cretaceous period between 110 and 130 million years ago and diversified after the rise of flowering plants. More than 12,500 out of an estimated total of 22,000 species have been classified. They are easily identified by their elbowed antennae and a distinctive node-like structure that forms a slender waist.

Halictidae is a cosmopolitan family of the order Hymenoptera consisting of small (> 4 mm) to midsize (> 8 mm) bees which are usually dark-colored and often metallic in appearance. Several species are all or partly green and a few are red; a number of them have yellow markings, especially the males, which commonly possess yellow faces, a pattern widespread among the various families of bees. They are commonly referred to as sweat bees (especially the smaller species), as they are often attracted to perspiration; when pinched, females can give a minor sting.

The Megachilidae are a cosmopolitan family of (mostly) solitary bees whose pollen-carrying structure (called a scopa) is restricted to the ventral surface of the abdomen (rather than mostly or exclusively on the hind legs as in other bee families). Megachilid genera are most commonly known as mason bees and leafcutter bees, reflecting the materials they build their nest cells from (soil or leaves, respectively); a few collect plant or animal hairs and fibers, and are called carder bees. All species feed on nectar and pollen, but a few are cleptoparasites (informally called "cuckoo bees"), feeding on pollen collected by other megachilid bees. Parasitic species do not possess a scopa.
The brightly colored scopa leads to a colloquial name used occasionally in North America - "Jelly-belly bees." Megachilid bees are among the world's most efficient pollinators because of their energetic swimming-like motion in the reproductive structures of flowers, which moves pollen, as needed for pollination. One of the reasons they are efficient pollinators is their frequency of visits to plants, but this is because they are extremely inefficient at gathering pollen; compared to all other bee families, megachilids require on average nearly ten times as many trips to flowers to gather sufficient resources to provision a single brood cell.

The family Melittidae is a small bee family, with some 60 species in 4 genera, restricted to Africa and the northern temperate zone. Historically, the family has included the Dasypodaidae and Meganomiidae as subfamilies, but recent molecular studies indicate that Melittidae (sensu lato) was paraphyletic, so each of the three historical subfamilies is now accorded family status, with Dasypodaidae as the basal group of bees, followed by Meganomiids and Melittids, which are sister taxa.

Mutillidae are a family of more than 3,000 species of wasp whose wingless females resemble ants. Their common name velvet ant refers to their dense pile of hair which most often is bright scarlet or orange but may also be black, white, silver, or gold. Their bright colors serve as aposematic signals. They are known for their extremely painful sting, facetiously said to be strong enough to kill a cow, hence the common name cow killer or cow ant is applied to some species. Unlike a real ant, they do not have drones, workers, and queens. However, velvet ants do exhibit haplodiploid sex determination similar to other members of Vespoidea (JH Hunt 1999).
Wasps in the family Pompilidae are commonly called spider wasps (in South America, species may be referred to colloquially as marabunta or marimbondo, though these names can be generally applied to any very large stinging wasps). The family is cosmopolitan, with some 5,000 species in 6 subfamilies. All species are solitary, and most capture and paralyze prey, though members of the subfamily Ceropalinae are cleptoparasites of other pompilids, or ectoparasitoids of living spiders.

Rhopalosomatidae is a family of Hymenoptera. It contains about 68 extant species in four genera that are found worldwide. Three fossil genera are known.

Scoliidae, the scoliid wasps, is a small family represented by 6 genera and about 20 species in North America, but they occur worldwide, with a total of around 300 species. They tend to be black, often marked with yellow or orange, and their wing tips are distinctively corrugated. Males are more slender and elongate than females, with longer antennae, but the sexual dimorphism is not as extreme as is common in the Tiphiidae, a closely related family.

Sphecidae (Latreille, 1802) is a cosmopolitan family of wasps that includes digger wasps, mud daubers and other familiar types that all fall under the category of thread-waisted wasps. Both of the traditional definitions of the Sphecidae (the conservative one, where all the sphecoid wasps other than ampulicids and heterogynaids were in a single large family, and the more refined one, where the 7 large sphecid subfamilies were each elevated to family rank) have recently been shown to be paraphyletic, and the most recent classification is closer to the conservative scheme; the families Heterogynaidae and Ampulicidae are the sister taxa to what are now two families (instead of one), the Sphecidae and Crabronidae.
Thus, the bulk of the sphecoid wasps are now placed in Crabronidae, and Sphecidae per se is a much more restricted concept, equivalent to what used to be the subfamily Sphecinae.

The Vespidae are a large (nearly 5,000 species), diverse, cosmopolitan family of wasps, including nearly all the known eusocial wasps and many solitary wasps. Each social wasp colony includes a queen and a number of female workers with varying degrees of sterility relative to the queen. In temperate social species, colonies usually only last one year, dying at the onset of winter. New queens and males (drones) are produced towards the end of the summer, and after mating, the queens hibernate over winter in cracks or other sheltered locations. The nests of most species are constructed out of mud, but polistines and vespines use plant fibers, chewed to form a sort of paper (also true of some stenogastrines). Many species are pollen vectors contributing to the pollination of several plants, being potential or even effective pollinators.

At least 4,081 species and subspecies belong to the Family Vespidae.

- The text on this page is licensed under the GNU Free Documentation License. It includes material from Wikipedia retrieved Wednesday, April 25, 2012.
The flash provides a way to pass temporary objects between actions. Anything you place in the flash will be exposed to the very next action and then cleared out. This is a great way of doing notices and alerts, such as a create action that sets flash[:notice] = "Post successfully created" before redirecting to a display action that can then expose the flash to its template. Actually, that exposure is automatically done. Example:

  class PostsController < ActionController::Base
    def create
      # save post
      flash[:notice] = "Post successfully created"
      redirect_to posts_path(@post)
    end

    def show
      # doesn't need to assign the flash notice to the template, that's done automatically
    end
  end

show.html.erb:

  <% if flash[:notice] %>
    <div class="notice"><%= flash[:notice] %></div>
  <% end %>

Since the notice and alert keys are a common idiom, convenience accessors are available:

  flash.alert = "You must be logged in"
  flash.notice = "Post successfully created"

This example just places a string in the flash, but you can put any object in there. And of course, you can put as many as you like at a time too. Just remember: They’ll be gone by the time the next action has been performed.

See docs on the FlashHash class for more details about the flash.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. 2006 September 23 Explanation: Today, the Sun rises due east at the Equinox, a geocentric astronomical event that occurs twice a year. To celebrate, consider this view of the rising Sun and a lovely set of ice halos recorded on a cold winter morning near Green Bay, Wisconsin, USA, planet Earth. Produced by sunlight shining through common atmospheric ice crystals with hexagonal cross-sections, such halos can actually be seen more often than rainbows. The remarkable sunrise picture captures a beautiful assortment of the types most frequently seen, including a sun pillar (center) just above the rising Sun surrounded by a 22 degree halo arc. Completing a triple sunrise illusion, sundogs appear at the far left and far right edges of the 22 degree arc. An upper tangent arc is also just visible at the very top of the view. Authors & editors: NASA Web Site Statements, Warnings, and Disclaimers NASA Official: Jay Norris. Specific rights apply. A service of: EUD at NASA / GSFC & Michigan Tech. U.
Quicksand and other non-Newtonian fluids share properties with both liquids and solids. Non-Newtonian fluids consist of tiny grains suspended in liquid, with the appearance of a solid or gel. Stand on quicksand and you will sink (though not as rapidly as movies and cartoons suggest). But strike it quickly and it will briefly harden. Previous explanations of quicksand behavior relied on the presence of containment walls and effects like grain dilation under stress. However, a new experimental study challenges prior assumptions, showing that new concepts may be needed to explain non-Newtonian fluids. Scott R. Waitukaitis and Heinrich M. Jaeger at the University of Chicago created a quicksand-like substance called "oobleck" out of cornflour and water, which they then struck with an aluminum rod. By measuring the position, speed, and acceleration of the rod as it interacted with the oobleck, they determined that its solidification arises from compression that propagates away from the impact point. By using a huge amount of fluid (25 liters), the researchers showed the bizarre non-Newtonian effects were independent of the size of the container, so the presence of confining walls is irrelevant. Through X-ray imaging, they discovered a nearly cylindrical solid region forms directly below the impact point. The detailed analysis led the authors to develop a simple model for the impact, which bears striking similarity to models for objects falling into liquids, but produces very different effects. This may have been the most carefully monitored bowl of starch ever devised. In the experiment, the researchers mounted the aluminum rod using guide rails to make sure it impacted along a single axis. For different trials, they either dropped the rod (free fall) or used a slingshot to drive it more quickly downward. The rod was fixed with an accelerometer, and the whole process was recorded on high-speed video to measure the instantaneous position, speed, and acceleration. 
The grains of the cornflour in the oobleck are irregularly shaped and range in size from 5 to 20 microns (0.005 to 0.02 millimeters), which is typical of quicksand and other non-Newtonian fluids. Additionally, the suspension contained tracer particles that could be imaged with X-rays; motion within the oobleck could be tracked with the tracers. The authors positioned a force sensor directly below the rod at the bottom of the container to examine how the impact distributed itself through the fluid. They also used a laser line across the surface to determine how its shape changed. To measure the effect of container size, the researchers tested fluid containers ranging from 8.5 cm to 20.5 cm in depth. They found that the rod experienced a rapid deceleration upon impact with the surface at the same point in time, regardless of the container depth. However, shallower containers experienced a rebound effect: after a time, the rod began accelerating upward again. Additionally, X-ray images showed the tracer particles didn't spread much to the sides in the region immediately below the rod. Instead, they moved as a nearly cylindrical unit, acting almost like a second rod within the suspension. This plug of material was surrounded by a conical region where the suspension flowed outward and upward in response, lifting the surface slightly beyond the impact zone (as shown in the image above). After some time, the plug "melted," restoring the suspension to its usual quasi-liquid state. Combining their data, the researchers constructed a simple model for the suspension, including the size of the solid-like plug and the conical displaced mass. The equation bore some similarities to ordinary fluid displacement models, again demonstrating the hybrid nature of suspensions. It also contrasts greatly with the usual approach to non-Newtonian fluids, where the walls of the container play a role and, instead of generating a nearly cylindrical plug, the force distributes itself along angles.
The physical picture of the process is clear: momentum from the impact was carried directly downward, and rebounded when it hit the bottom of the container (if it had sufficient time to do so before melting). While the study used cornflour for simplicity and cost-effectiveness, the authors argued the similarity in grain size and shape should make their model applicable to other suspensions.
Early last week, I wrote about parallax and distance measurements. This is a follow-up post to that one. Stellar parallax is very small, and thus correspondingly difficult to measure. The closest star has a parallax of 0.772 arc-seconds (that is nearly 1/4700 of a degree). That is a very tiny angle to measure, and so it is no wonder that it took so long for astronomical technology to advance to where the measurements could be made. The capability to make such small measurements finally came in the early Nineteenth Century. One problem for astronomers trying to measure parallax is that the stars are vast distances away. The farther a star is from us, the smaller the parallax, and the harder that parallax is to measure. Even today, most stars are simply too far away to reliably measure parallax. In the early Nineteenth Century, it was worse. The technology was such that only a handful of the nearest stars had big enough parallaxes to measure. But, there are a lot of stars in the sky. Which ones would be the best candidates to study and to attempt to measure? The measurements would be time consuming, and an astronomer would not be able to measure many stars, so he’d have to pick a star and stick with it. But, if the selected star were too far away, then he’d never be able to measure parallax. At first, astronomers had thought all stars to be similar, so the brighter stars were presumed to be the nearer ones. But, that idea had begun to fall by the wayside by the Eighteenth Century. Astronomers realized that brightness may not correlate at all with nearness (and it largely doesn’t). Eager attempts to measure parallax inevitably resulted in failure. The stars were simply far more distant than anyone had been prepared to imagine. But, along the way, there were a lot of interesting discoveries. For example, the search for parallax led to the discovery of the aberration of starlight.
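For scale, the arithmetic behind that "nearly 1/4700 of a degree" figure can be sketched in a few lines of Python. The distance conversion uses the standard relation d[parsecs] = 1/p[arcseconds], which the post doesn't state explicitly; the function and constant names here are my own:

```python
# Convert a stellar parallax in arcseconds to a fraction of a degree
# and to a distance, via the standard relation d[parsecs] = 1/p[arcsec].
PC_TO_LY = 3.2616  # light-years per parsec

def parallax_summary(p_arcsec):
    fraction_of_degree = p_arcsec / 3600.0  # 3600 arcseconds per degree
    parsecs = 1.0 / p_arcsec
    return fraction_of_degree, parsecs, parsecs * PC_TO_LY

frac, pc, ly = parallax_summary(0.772)  # Alpha Centauri's parallax, as quoted
print(round(1 / frac))  # 4663, i.e. "nearly 1/4700 of a degree"
print(round(ly, 1))     # 4.2 light-years
```

Running the same conversion on the other parallaxes quoted later in the post (0.285″ for 61 Cygni, 0.129″ for Vega) gives roughly 11.4 and 25.3 light-years.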
But, there was another important factor, besides brightness, that astronomers looked to when trying to decide on a target star: its proper motion. Edmund Halley discovered that some stars had apparently shifted position over historic times. The stars are not fixed in space relative to one another. This apparent shift of the stars, as seen from Earth, is their proper motion. Assuming that most stars are moving at similar speeds, then the nearer stars might appear to have higher proper motion than the more distant ones. You can see this same effect by looking out the window of a car as you drive down the highway. The nearer objects seem to be going past the window far more quickly than the more distant objects. But, that only really holds true when you are looking at stationary objects. Other cars driving in the same direction as you may be far closer than cattle or trees alongside the road, but they will not appear to be moving very quickly with respect to your window because they have very nearly the same speed as your car. Likewise, stars quite near the Sun might not be seen to have a high proper motion if they share the Sun’s motion through the galaxy. Still, this seemed to be a far more promising correlation with distance than the brightness measure. So, the hunt was on. Numerous astronomers were eagerly working to make the first parallax measurements. Among these were Thomas Henderson, working in South Africa, Friedrich Wilhelm Struve, and Friedrich Wilhelm Bessel (both in Europe). Henderson actually got the jump on the others, making measurements of Alpha Centauri. In 1833, he packed up and went back to England, along with his data. He was in no hurry to reduce his data, so it languished for years. When he finally did get to looking at his measurements, he found that there did seem to be what may have been a parallax shift in Alpha Centauri, but he did not trust his data. He had only 19 measurements, far too few to be certain or conclusive in his findings.
Furthermore, the instrument that he had been using had been damaged in shipping to South Africa. He had painstakingly applied corrections to the measurements, but he realized that other astronomers would cast doubt on his findings. He decided to wait for better measurements made with another instrument by his successor at the far southern observatory. Alpha Centauri is indeed the nearest star (actually it is a triple star system, and the closest of the three, Proxima Centauri, is the nearest star other than the Sun), and it really did have a large enough parallax to measure. And, as it turns out, Henderson’s corrections to his data were approximately correct. However, he didn’t know all of that, so he held off publishing his findings until he had more data sent to him. In the meantime, Bessel had acquired a spectacular and very precise instrument ideally suited for the task. It had originally been designed for measuring the sizes of features on the Sun, but he expertly adapted it to measure the distances between stars. Giuseppe Piazzi had shown the star 61 Cygni to have a particularly high proper motion. In fact, it was dubbed the “flying star” and at the time held the record as the star with the highest proper motion (a record that it was to eventually lose to Groombridge 1830, and then to Barnard’s Star). This made it an excellent target star. However, after only a few months, Bessel gave up the endeavor because he found the comparison star that he’d selected to be too dim to follow in poor sky conditions. Other concerns took him away from the task for a number of years. Then, in 1837, Struve announced that he’d measured the parallax of the star Vega. The number that he gave was 0.125″. Bessel pored over Struve’s data, but was not convinced that it was really believable. He feverishly resumed his measurements of 61 Cygni. For the next year, any clear night that he could observe the star, he did, often a dozen times per night, making measurements.
After a year, he had hundreds of positions determined using thousands of individual measurements. His data showed no doubt that 61 Cygni moved back and forth as the Earth moved around the Sun. He had found clear evidence of parallax. Bessel computed the parallax of 61 Cygni to be 0.314″, and he published his results in late 1838. Soon afterwards, Struve revised his parallax measurement of Vega to a value nearly double what he had originally found. That huge change cast serious doubt as to the reliability of his measurements. So most astronomers, including Struve himself, ceded the first parallax measurement to Bessel. With Bessel and Struve’s measurements available, Henderson finally published his own findings for Alpha Centauri. Interestingly enough, despite Struve’s uncertainty in his own measurements, his original value for Vega’s parallax is amazingly close to the modern accepted value of 0.129″. Bessel’s parallax for 61 Cygni is also not far off of today’s accepted value of 0.285″. It really is hard to say who should get credit for the first parallax measurements. All three, Struve, Bessel, and Henderson, were working at about the same time. Henderson didn’t believe his measurements, so he didn’t publish them right away, and thus is seldom given credit. Struve published his measurements a year before Bessel, but his measurements were deemed somewhat uncertain, a fact that he most clearly stated himself. Struve, himself, gave Bessel credit for the first unambiguous measurement of stellar parallax. But, I think that all three deserve some mention. If you want to read more about this episode in the history of astronomy, an excellent resource is Alan Hirshfeld’s book Parallax: The Race to Measure the Cosmos. Finder chart for 61 Cygni created using Starry Night Pro software.
I’ve read that July of 1916 was the sunniest month ever recorded in Chicago, logging 95 percent of possible sunshine. Did the sunshine recorders in use then follow today’s standards? --George Ballas, Morton Grove

Retired Chicago weather observer Paul Kubecka found that the instrument for measuring solar irradiance in 1916, the Maring-Marvin sunshine recorder (in use in Chicago until 1953), was less sensitive than today’s version. It required .37 langleys to initiate the recording of sunshine, compared with the current World Meteorological Organization standard of .17 langleys. It also had a slower response time, taking up to 10 minutes before beginning to log sunshine at sunrise and sunset and during passing cloudy intervals. If weather conditions were identical, today’s instrument would have recorded a higher sunshine percentage than the 1916 version.
December 11, 2012 | 2 If you have seen any of Peter Jackson’s movies, such as this week’s release of The Hobbit: An Unexpected Journey, then you have probably noticed the logo for the special effects company Weta Workshop, which works on most of the director’s New Zealand–based projects. The workshop is named after a bunch of endemic New Zealand insects that look, at first glance, like crickets or grasshoppers on steroids. Weta consist of about 70 species of the largest and heaviest flying insects in the world. Some giant weta species — “very cool, prickly little monsters,” as Weta Workshop puts it — weigh in at up to 30 grams and boast bodily lengths of up to 10 centimeters. A newly discovered member of the group—the Denniston white-faced cave weta—isn’t quite that big or monstrous. In fact, the scientists who found and tentatively named the species (it hasn’t been given an official taxonomic name yet) don’t know how big the species grows, because only juvenile insects were found. But they do know that its only habitat could soon disappear. This newest weta was discovered on the Denniston Plateau on the sparsely populated west coast of New Zealand’s South Island. The plateau receives an amazing six meters of annual rainfall, creating unique rock formations that are home to many rare and endangered species. The 190-hectare area is slated to be converted into an open-cast coal mine that could increase the country’s coal exports by 63 percent but which conservationists say would destroy the habitat and its unique denizens. The Denniston white-faced cave weta was found during a four-day “Bioblitz” in March that identified more than 500 confirmed species on the plateau and another 219 unconfirmed species. Among the hundreds of species was the new weta, which bears a mostly black body, a distinctive white band behind its head and leg spines unlike other weta species. “It just stood out,” Massey University (M.U.) 
associate professor Steve Trewick said in a prepared statement. “We haven’t seen anything with that appearance and coloration.” Trewick, who led the expedition along with fellow associate professor Mary Morgan-Richards, also tested the new weta’s DNA, which proved to differ from other known species. “This weta might occur elsewhere as well as Denniston, but what it highlights is that destroying distinctive habitat is likely to destroy biodiversity even before we know it is there,” Trewick said. “If we’re destroying biodiversity before we’ve even identified it, we’re clearly following the wrong strategy.” The Bioblitz was supported by the conservation organization Forest & Bird, which seeks to protect the area from the planned coal mine. M.U. is also currently undertaking a project—called Beta Weta Geta—to classify the taxonomy and biodiversity of all of New Zealand’s cave weta species. Photo: Denniston white-faced cave weta courtesy of Massey University
Model output file format:

Year DOY Hour Min Sec Lat[deg] Lon[deg] NmF2[1/cm3]

Here DOY is day of year, and NmF2 [1/cm3] is the maximum electron density from either LEO satellites or ISRs. Lat [deg] and Lon [deg] for NmF2 from ISRs are the geographic latitude and longitude of the ISRs. For NmF2 from LEO radio occultation measurements, Hour, Min, Sec, Lat [deg] and Lon [deg] are the universal time and geographic location at which the maximum electron density occurs, which is not on the satellite track.
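Assuming the eight columns are whitespace-delimited in the order given (the spec above does not state the delimiter), a minimal reader for one record might look like this; the sample line and its values are invented for illustration:

```python
def parse_nmf2_line(line):
    """Parse one record: Year DOY Hour Min Sec Lat[deg] Lon[deg] NmF2[1/cm3].

    Assumes whitespace-delimited fields, as the column listing suggests.
    """
    year, doy, hour, minute, sec, lat, lon, nmf2 = line.split()
    return {
        "year": int(year),
        "doy": int(doy),
        "hour": int(hour),
        "minute": int(minute),
        "second": float(sec),
        "lat_deg": float(lat),
        "lon_deg": float(lon),
        "nmf2_cm3": float(nmf2),
    }

# Hypothetical record: 2008, day 123, 14:35:12 UT, 42.6 N, 288.5 E, NmF2 = 5.2e5 cm^-3
rec = parse_nmf2_line("2008 123 14 35 12.0 42.6 288.5 5.2e5")
```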
Scientists on the lookout for utility-scale, high efficiency batteries are developing new “flow” systems that store energy more effectively than lead-acid or lithium-ion batteries, but there’s a catch. The flow batteries in operation now are about the size of a house and they cost more than the equivalent in lithium-ion batteries. The race is on to find smaller, cheaper alternatives and researchers at Sandia National Laboratories believe that they are on to the solution, which is, in fact, a solution of liquid salts called MetILs.

The limits of lithium-ion for wind and solar

Lithium-ion batteries have been the gold standard of energy storage solutions for a long time, but they fall short when it comes to the utility-scale systems needed to keep up with new high efficiency wind turbines and advanced solar technology. The cost of lithium-ion batteries is one factor. Another is their relatively short lifespan, compared to flow batteries. According to Sandia chemist Travis Anderson, a flow battery can withstand about 14,000 cycles, which adds up to about 20 years of energy storage.

Flow battery basics

Flow batteries work by converting chemical energy into electricity. Stephanie Hobby of Sandia explains it thusly: “A flow battery pumps a solution of free-floating charged metal ions, dissolved in an electrolyte — substance with free-floating ions that conducts electricity — from an external tank through an electrochemical cell to convert chemical energy into electricity.” Flow batteries charge and discharge rapidly, and they have a long lifespan, but all is not perfect in flow battery land. The most promising systems so far use zinc bromine and vanadium, both of which are “moderately toxic” according to Hobby. In addition, the price of vanadium can spike wildly on the open market.
The Sandia “American-made energy” solution

In keeping with President Obama’s theme of developing American-made energy, the Sandia team focused on low cost, non-toxic substances that can be dug up out of American soil, including iron, copper and manganese. Working from this foundation, the team designed a new family of liquids, the aforementioned MetILs, which stands for Metal-based Ionic Liquids. By using metal based liquids, the team was able to eliminate the use of water-based solutions that are the foundation of conventional flow batteries (water limits the energy density of the battery, and makes it more susceptible to fluctuations in outside temperatures). With a few additional tweaks, the result is a flow battery that could be far smaller.

About those tweaks…

As Hobby points out, so far the research has focused primarily on materials for the cathode. There is still the anode to deal with, so the new battery won’t be ready for the market any time soon. In the meantime, the Sandia crew had better act fast – researchers at MIT are also working on a new high tech, low cost battery of their own.

Follow Tina Casey on Twitter: @TinaMCasey. Tina Casey specializes in military and corporate sustainability, advanced technology, emerging materials, biofuels, and water and wastewater issues. Tina’s articles are reposted frequently on Reuters, Scientific American, and many other sites. You can also follow her on Google+.
This section discusses internal locking; that is, locking performed within the MySQL server itself to manage contention for table contents by multiple sessions. This type of locking is internal because it is performed entirely by the server and involves no other programs. External locking occurs when the server and other programs lock MyISAM table files to coordinate among themselves which program can access the tables at which time. See Section 8.7.4, “External Locking”.

MySQL uses row-level locking for InnoDB tables to support simultaneous write access by multiple sessions, making them suitable for multi-user, highly concurrent, and OLTP applications. MySQL uses table-level locking for MyISAM, MEMORY, and MERGE tables, allowing only one session to update those tables at a time, making them more suitable for read-only, read-mostly, or single-user applications.

Table locking in MySQL is deadlock-free for storage engines that use table-level locking. Deadlock avoidance is managed by always requesting all needed locks at once at the beginning of a query and always locking the tables in the same order.

MySQL grants table write locks as follows:

1. If there are no locks on the table, put a write lock on it.
2. Otherwise, put the lock request in the write lock queue.

MySQL grants table read locks as follows:

1. If there are no write locks on the table, put a read lock on it.
2. Otherwise, put the lock request in the read lock queue.

Table updates are given higher priority than table retrievals. Therefore, when a lock is released, the lock is made available to the requests in the write lock queue and then to the requests in the read lock queue. This ensures that updates to a table are not “starved” even if there is heavy SELECT activity for the table. However, if you have many updates for a table, SELECT statements wait until there are no more updates. For information on altering the priority of reads and writes, see Section 8.7.2, “Table Locking Issues”.
You can analyze the table lock contention on your system by checking the Table_locks_immediate and Table_locks_waited status variables, which indicate the number of times that requests for table locks could be granted immediately and the number that had to wait, respectively:

mysql> SHOW STATUS LIKE 'Table%';
+-----------------------+---------+
| Variable_name         | Value   |
+-----------------------+---------+
| Table_locks_immediate | 1151552 |
| Table_locks_waited    | 15324   |
+-----------------------+---------+

The MyISAM storage engine supports concurrent inserts to reduce contention between readers and writers for a given table: If a MyISAM table has no free blocks in the middle of the data file, rows are always inserted at the end of the data file. In this case, you can freely mix concurrent INSERT and SELECT statements for a MyISAM table without locks. That is, you can insert rows into a MyISAM table at the same time other clients are reading from it. Holes can result from rows having been deleted from or updated in the middle of the table. If there are holes, concurrent inserts are disabled but are enabled again automatically when all holes have been filled with new data. This behavior is altered by the concurrent_insert system variable. See Section 8.7.3, “Concurrent Inserts”.

If you acquire a table lock explicitly with LOCK TABLES, you can request a READ LOCAL lock rather than a READ lock to enable other sessions to perform concurrent inserts while you have the table locked.

To perform many INSERT and SELECT operations on a table real_table when concurrent inserts are not possible, you can insert rows into a temporary table temp_table and update the real table with the rows from the temporary table periodically. This can be done with the following code:

mysql> LOCK TABLES real_table WRITE, temp_table WRITE;
mysql> INSERT INTO real_table SELECT * FROM temp_table;
mysql> DELETE FROM temp_table;
mysql> UNLOCK TABLES;

InnoDB uses row locks. Deadlocks are possible with InnoDB because it automatically acquires locks during the processing of SQL statements, not at the start of the transaction.
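For example, a session that needs a stable read view while still allowing concurrent inserts at the end of the table might use a READ LOCAL lock like this (the table name t is hypothetical):

mysql> LOCK TABLES t READ LOCAL;
mysql> SELECT COUNT(*) FROM t;
mysql> UNLOCK TABLES;

While the READ LOCAL lock is held, other sessions may still append rows to t through the concurrent-insert mechanism, but they cannot update or delete existing rows.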
Advantages of row-level locking:

- Fewer lock conflicts when different sessions access different rows
- Fewer changes for rollbacks
- Possible to lock a single row for a long time

Disadvantages of row-level locking:

- Requires more memory than table-level locks
- Slower than table-level locks when used on a large part of the table because you must acquire many more locks
- Slower than other locks if you often do GROUP BY operations on a large part of the data or if you must scan the entire table frequently

Generally, table locks are superior to row-level locks in the following cases:

- Most statements for the table are reads
- Statements for the table are a mix of reads and writes, where writes are updates or deletes for a single row that can be fetched with one key read:

  UPDATE tbl_name SET column=value WHERE unique_key_col=key_value;
  DELETE FROM tbl_name WHERE unique_key_col=key_value;

- Many scans or GROUP BY operations on the entire table without any writers

With higher-level locks, you can more easily tune applications by supporting locks of different types, because the lock overhead is less than for row-level locks.

Options other than row-level locking:

- Versioning (such as that used in MySQL for concurrent inserts) where it is possible to have one writer at the same time as many readers. This means that the database or table supports different views for the data depending on when access begins. Other common terms for this are “time travel,” “copy on write,” or “copy on demand.” Copy on demand is in many cases superior to row-level locking. However, in the worst case, it can use much more memory than using normal locks.
- Instead of using row-level locks, you can employ application-level locks, such as those provided by GET_LOCK() and RELEASE_LOCK() in MySQL. These are advisory locks, so they work only with applications that cooperate with each other. See Section 12.15, “Miscellaneous Functions”.
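The application-level locks just mentioned can be sketched as follows (the lock name 'app_lock' and the 10-second timeout are arbitrary choices for illustration):

mysql> SELECT GET_LOCK('app_lock', 10);    -- returns 1 if acquired, 0 on timeout
mysql> -- ... perform work that cooperating sessions must serialize ...
mysql> SELECT RELEASE_LOCK('app_lock');    -- returns 1 if this session held the lock

Because these locks are advisory, they protect a resource only if every cooperating session requests the same lock name before accessing it.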
Urban-suburban Canada geese (Branta canadensis) create nuisance problems at their foraging sites by littering them with feces. An ecological approach to the problem involves inducing the geese to use alternate foraging sites by reducing the attractiveness of problem sites. This can be accomplished by reducing the forage quality at the nuisance site by not fertilizing and infrequently mowing the lawn, or by replacing the lawn with a less palatable grass species or other ground cover. Further, sites can be made less attractive to geese if they are surrounded by tall trees, which make it harder for geese to land or take off, and by planting bushes and hedges to reduce a goose's ability to watch for approaching predators. Another approach involves relocating roosting areas to more remote sites so that geese have to expend greater time and energy to reach the problem site.
SkySails has invented a method that could cut down on ‘fuel consumption, costs and carbon footprints’ for commercial ships by developing giant kites. The Raw Story reports: The blue-hulled vessel would slip by unnoticed on most seas if not for the white kite, high above her prow, towing her to what its creators hope will be a bright, wind-efficient future. The enormous kite, which looks like a paraglider, works in tandem with the ship’s engines, cutting back on fuel consumption, costs, and carbon footprint. “Using kites you can harness more energy than with any other type of wind-powered equipment,” said German inventor Stephan Wrage, whose company SkySails is looking for lift-off on the back of worldwide efforts to boost renewable energy. The 160-square-metre (524-square-foot) kite, tethered to a yellow rope, can sail 500 metres into the skies where winds are both stronger and more stable, according to the 38-year-old Wrage. The secret to the kite’s efficiency lies in its speed and computer-controlled flight pattern. [Continues at The Raw Story]
The efficiency of any application depends on how well memory and garbage collection are managed. The following sections provide information on optimizing memory and allocation functions:

- Tracing Garbage Collection
- Other Garbage Collector Settings
- Tuning the Java Heap
- Re-basing DLLs on Windows

It is necessary to monitor garbage collection (GC) activity on the development server, and to tune JVM and GC settings accordingly, before deploying the server into production. The GC settings vary depending on the application you are running. Garbage collection reclaims the heap space previously allocated to objects no longer needed. The process of locating and removing the dead objects can stall any application and consume as much as 25 percent of throughput.

Almost all Java Runtime Environments come with a generational object memory system and sophisticated GC algorithms. A generational memory system divides the heap into a few carefully sized partitions called generations. The efficiency of a generational memory system is based on the observation that most objects are short lived. As these objects accumulate, a low memory condition occurs, forcing GC to take place. The heap space is divided into the old and new generations. The new generation includes the new object space (eden) and two survivor spaces. The JVM allocates new objects in the eden space, and moves longer lived objects from the new generation to the old generation. Keep the heap size low, so that customers can increase the heap size depending on their needs. To increase the heap size, see http://www.devx.com/tips/Tip/5578

The young generation uses a fast copying garbage collector that employs two semi-spaces (survivor spaces) in the eden, copying surviving objects from one survivor space to the second. Objects that survive multiple young space collections are tenured, meaning they are copied to the tenured generation. The tenured generation is larger and fills up less quickly.
Garbage is collected less frequently, and each collection takes longer than a young-space-only collection. Collecting the tenured space is also referred to as doing a full generation collection. The frequent young space collections are quick, lasting only a few milliseconds, while the full generation collection takes longer: tens of milliseconds to a few seconds, depending upon the heap size. Other GC algorithms, such as the Concurrent Mark Sweep (CMS) algorithm, are incremental. They divide the full GC into several incremental pieces. This provides a high probability of small pauses. This process comes with an overhead and is not required for enterprise web applications.

When the new generation fills up, it triggers a minor collection, in which the surviving objects are moved to the old generation. When the old generation fills up, it triggers a major collection, which involves the entire object heap. Both HotSpot and Solaris JDK use thread-local object allocation pools for lock-free, fast, and scalable object allocation, so custom object pooling is not often required. Consider pooling only if object construction cost is high and significantly affects execution profiles.

The -Xms and -Xmx parameters define the minimum and maximum heap size. As collections occur when generations fill up, throughput is inversely proportional to the available memory. By default, the JVM grows or shrinks the heap at each collection. This helps maintain the proportion of free space to living objects at each collection within a specific range. The range is set as a percentage by the parameters -XX:MinHeapFreeRatio=<minimum> and -XX:MaxHeapFreeRatio=<maximum>, and the total size is bounded by -Xms and -Xmx. The JVM heap setting for Web Server should be based on the available memory on the system and on the frequency and duration of garbage collection. You can use the -verbose:gc JVM option or the J2SE 5.0 monitoring tools to determine the frequency of garbage collection.
For more information on J2SE 5.0 monitoring tools, see J2SE 5.0 Monitoring Tools. The maximum heap size should be determined based on the process data model (32-bit or 64-bit) and availability of virtual and physical memory on the system. Excessive use of physical memory for Java heap may cause paging of virtual memory to disk during garbage collection, resulting in poor performance. For more information on Java tuning, see http://java.sun.com/performance/reference/whitepapers/tuning.html.
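Putting these options together, a sketch of a server start command might look like this (the class name MyServer and all sizes are placeholder assumptions, not recommendations):

java -Xms1024m -Xmx1024m -XX:MinHeapFreeRatio=40 -XX:MaxHeapFreeRatio=70 -verbose:gc MyServer

Setting -Xms equal to -Xmx pins the heap size so the JVM does not spend time growing or shrinking it at each collection; -verbose:gc then prints one line per collection, from which the frequency and duration of minor and full collections can be read.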
Some of Palawan’s reefs are sad reflections of warming ocean temperatures. White skeletons are all that remain of previously colorful and varied coral reefs around the island. The phenomenon is known as ‘coral bleaching’, and it is caused by overly warm ocean temperatures. Scientists cited in the article below hold out hope for these damaged reefs. Apparently, some corals can adapt to warming temperatures, and even thrive in them. Studies are being done in Kiribati, an island nation in the central Pacific very close to the equator, where ocean temperatures are the hottest. An international team of scientists, including lead researchers from Canada and Australia, published an article on March 30 in the journal PLoS ONE. Click on the link below to read the article from ScienceDaily.com. An excerpt from the article says the study: . . . paves the way towards an important road map on the impacts of ocean warming, and will help scientists identify the habitats and locations where coral reefs are more likely to adapt to climate change. “We’re starting to identify the types of reef environments where corals are more likely to persist in the future,” says study co-author Simon Donner, an assistant professor in UBC’s Department of Geography and organizer of the field expedition. “The new data is critical for predicting the future for coral reefs, and for planning how society will cope in that future.” When water temperatures get too hot, the tiny algae that provides coral with its colour and major food source is expelled. This phenomenon, called coral bleaching, can lead to the death of corals. The researchers say coral reefs may be better able to withstand the expected rise in temperature in locations where heat stress is naturally more common. This will benefit the millions of people worldwide who rely on coral reefs for sustenance and livelihoods, they say.
“Until recently, it was widely assumed that coral would bleach and die off worldwide as the oceans warm due to climate change,” says lead author Jessica Carilli, a post-doctoral fellow in Australian Nuclear Science and Technology Organisation’s (ANSTO) Institute for Environmental Research. “This would have very serious consequences, as loss of live coral — already observed in parts of the world — directly reduces fish habitats and the shoreline protection reefs provide from storms.” This is very good news for Palawan. DonnaOnPalawan wishes these scientists and their studies continuing success. My novel’s plot revolves around Palawan’s coral reefs and fish life, as I am very concerned about this issue. Palawan’s coral reefs are a precious resource. We hope the damage will be halted, and the reefs will thrive on into the future.
Palm Springs, Calif.— The U.S. Fish and Wildlife Service issued a final rule Wednesday protecting Casey’s June beetle as an endangered species and designating 587 acres of critical habitat for it in Riverside County, Calif. Known only from the Palm Canyon area of Palm Springs, the beetle is critically endangered by urban development. The Center for Biological Diversity, entomologist David Wright and the Sierra Club petitioned to protect the beetle in 2004. Wednesday’s final rule is the result of a landmark legal settlement between the Center and the Service that will expedite protection for 757 imperiled species across the country. “We’re excited that this unique California beetle now has the Endangered Species Act protection it needs to survive,” said Tierra Curry, a biologist with the Center. “Endangered Species Act protection with critical habitat designation is the most effective tool we have for saving species from extinction.” Once thought to occur from Palm Springs to Indian Wells in the Coachella Valley, the species now survives in only two populations in a small area in the southern part of Palm Springs. Remaining habitat consists of just 800 acres scattered in nine isolated fragments, primarily on private lands, and is actively shrinking due to the rapid pace of development. “Clearly, habitat protection is the most important conservation measure for the Casey’s June beetle,” said Curry. “This announcement recognizes the dire straits of this scarab beetle, which is found nowhere else in the world, and gives it the habitat protection it needs to continue to exist.”

Background on the Species

Casey’s June beetles are medium-sized June beetles (June beetles are named after their tendency to fly in late spring evenings), about one to two inches long, and dusty brown or whitish with longitudinal stripes.
Their reddish-brown antennae are clubbed, as is common in scarab beetles, with ends consisting of a series of leaf-like plates that can be held together or fanned out to sense scents. Adults emerge from holes in the ground to mate in late March through June. Females have rarely been found, and always on the ground rather than in flight. Males fly swiftly over the ground from about one hour before dusk to shortly after dark to look for females.

Contact Info: Tierra Curry, (928) 522-3681
Website: Center for Biological Diversity
Although terahertz (THz) radiation was first observed about 100 years ago, this portion of the electromagnetic spectrum at the boundary between the microwaves and the infrared has been, for a long time, rather poorly explored. This situation changed with the rapid development of coherent THz sources such as solid-state oscillators, quantum cascade lasers, optically pumped solid-state devices, and novel coherent radiator devices. These in turn have stimulated a wide variety of applications from material science to telecommunications, from biology to biomedicine. Recently, two related compact coherent radiation devices have been invented that are able to produce up to megawatts of peak THz power by inducing a ballistic bunching effect on the electron beam, forcing the beam to radiate coherently. An introduction to the two systems and the corresponding output photon beam characteristics will be provided. Date of Publication: Aug. 2007
Researchers have created a synthetic "tree" - a centimeter-sized hydrogel with nanopores that can pull water just like real trees pull moisture up their tall trunks.

Optical micrograph image of a synthetic tree. Credit: Tobias Wheeler.

The scientists, led by Abraham Stroock of Cornell University, explained that this process is called "transpiration." Getting water to overcome the force of gravity and move upward to the top of a tree requires a lot of energy. It's also a large factor in determining the maximum height of a tree - after it reaches a certain height, it just can't pull water any higher. By simulating transpiration in the lab, the researchers show that the process is purely physical, and requires no biological energy. Understanding how the process works could allow researchers to use it for several human applications. In nature, real trees have long strands of transport tissue in their wood called xylem. When water evaporates from the upper leaves of the tree into the atmosphere, the xylem experiences a negative pressure, and this tension pulls water from the roots to the upper parts of the tree. In a sense the xylem is like a long straw, through which water is sucked to the top. The Cornell researchers mimicked this behavior by using laboratory materials. They fabricated leaf and root membranes from hydrogel, similar to the material used in soft contact lenses. Then they created tiny xylem capillaries in nanopores in the hydrogel using microfabrication techniques, through which water could be pulled. The researchers are investigating several applications for the water-pulling technique. One idea is to develop a new kind of passive heat-transfer technology to heat buildings. For instance, a solar collector on the roof could heat a fluid, which could be distributed by gravity to lower parts of the building. Then the fluid could be recycled back up to the roof using the transpiration technique.
The method could also be used for cooling laptops and de-contaminating the soil, where contaminated fluid could simply be sucked out of the soil. Another idea is using the method to draw water out of deep soil, doing away with the need to dig wells. via: Cornell University
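The pulling power of nanopores can be estimated with the Young-Laplace relation, ΔP = 2γ/r. The sketch below is illustrative only; the pore radius and the surface-tension value are assumed figures, not numbers reported by the Cornell group:

```python
SURFACE_TENSION_WATER = 0.072  # N/m, approximate value near room temperature

def capillary_tension_pa(pore_radius_m: float) -> float:
    """Young-Laplace estimate of the maximum tension (Pa) a water-filled
    cylindrical pore of the given radius can sustain before the meniscus fails."""
    return 2.0 * SURFACE_TENSION_WATER / pore_radius_m

# An assumed ~100 nm hydrogel pore sustains roughly 1.4 MPa of tension,
# far more than the ~0.1 MPa needed to lift water about 10 m against gravity.
tension = capillary_tension_pa(100e-9)
```

The inverse dependence on radius is why nanoscale pores, rather than millimeter capillaries, are needed to mimic the large tensions found in tall trees.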
Around the Nation, Wed August 29, 2012
Isaac's Size, Speed Help It Pack A Heavyweight Punch
Originally published on Fri October 26, 2012 11:28 am

Isaac might not be in the same league as Hurricane Katrina seven years ago, but the latest storm to batter Louisiana's Gulf Coast is punching above its weight class in more ways than one, scientists say. The 2005 Hurricane Katrina, which devastated Louisiana and parts of Mississippi and Alabama, was a Category 3 storm (sustained winds of 125 mph) moving at about 15 mph when it made landfall on the Gulf Coast. By comparison, Isaac was a weak Category 1 storm as measured on the Saffir-Simpson scale, with sustained winds of 74-95 mph. By Wednesday afternoon, Isaac had been downgraded to a tropical storm, although it still was close to the Gulf Coast and continued to dump torrential rain. While Isaac is considerably less intense than Katrina, it is large and slow — a dangerous combination — and it's moving west of the Mississippi River, a track that intensifies storm surge, says Timothy Schott, a meteorologist at the National Weather Service's Tropical Cyclone Program. "This storm has sustained tropical storm force winds currently extending out to about 175 miles from the center and the hurricane force about 45 miles from the center," Schott told NPR at about noon ET Wednesday. Measuring Isaac on three criteria — storm surge, rainfall and wind — Schott would rate the storm "high impact" on all of them. Even though the winds are Category 1, the slow movement of the storm increases their effect, he says. "There's also a connection between the size of the storm and the storm surge," Schott says. "We're seeing the storm surge inundation values coming in at 8, 9, 10 feet in those southeast Louisiana parishes."
When it comes to predicting storm surge, a lot of factors come into play, says Brian McNoldy, a senior research assistant at the Rosenstiel School of Marine and Atmospheric Science at the University of Miami. Among the questions: How large is the storm? How far do the tropical-force winds extend? How fast is it moving and how long has it been moving in the same way? How deep is the ocean offshore? "Then the onshore land makes a difference too," McNoldy says. "If the land is really, really flat, like New Orleans, the storm surge can go a lot farther inland."

Putting 'The Cork' In Place

And, with Isaac moving overland at less than 10 mph, Isaac will have plenty of time — perhaps 20 hours in some areas — to bottle up storm surge in the Mississippi, effectively placing a cork in the bottle as it continues to add more water in the form of torrential rainfall. It's not uncommon for hurricanes in the Gulf of Mexico to be relatively slow movers, and Isaac had been forecast to be just that, Schott says. "Not every storm moves quickly and sometimes along the Gulf Coast and at these lower latitudes, the storms can slow down as this one has and as it was expected to do," he says. McNoldy calls Isaac's stalling out over one of the most vulnerable flood plains in the country just "bad timing," but says it could have been far worse. "We just got really lucky that this didn't strengthen more as it was moving across the Gulf of Mexico these last few days," he says. Better forecasting has also been a boon. It turns out that models are getting quite good at predicting a storm's path, its speed and even the elusive storm surge. But predicting a hurricane's intensity has proved more difficult, McNoldy says. "For intensity, in some cases we don't even know if we're putting the right data into the models," he says.
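The Saffir-Simpson wind thresholds the article relies on (Category 1 at 74-95 mph sustained winds, Katrina's ~125 mph landing in Category 3) can be sketched as a simple lookup; the function name and the convention of returning 0 for sub-hurricane winds are my own choices:

```python
def saffir_simpson_category(sustained_wind_mph: float) -> int:
    """Map sustained wind speed (mph) to a Saffir-Simpson hurricane category.
    Returns 0 for winds below hurricane strength (tropical storm/depression)."""
    if sustained_wind_mph < 74:
        return 0          # below hurricane strength
    if sustained_wind_mph <= 95:
        return 1          # Isaac at landfall: 74-95 mph
    if sustained_wind_mph <= 110:
        return 2
    if sustained_wind_mph <= 129:
        return 3          # Katrina at landfall: ~125 mph
    if sustained_wind_mph <= 156:
        return 4
    return 5
```

As the article stresses, the category captures only wind intensity; size and forward speed, which drive surge and rainfall, are not part of the scale.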
This article appears in the January 23, 2004 issue of Executive Intelligence Review. The Mars Exploration Rovers begin the intensive study of the planet Mars, which can lay the basis for its human exploration in the future. Marsha Freeman reports.

Spirit Rover Gets Ready To Follow the Water on Mars

A highly complex and extraordinary representative of man's intelligence is sitting now in Gusev Crater on the surface of Mars, preparing to begin a geological study of the history of water on the red planet. From its full stand-up height of nearly five feet, Spirit is transmitting back to Earth full-color, three-dimensional photographs of Mars, which are comparable in resolution to what you would see with your own eyes, were you standing there. Its journey will be the first step in a planned multi-decade NASA Mars exploration program. The success of the Spirit landing and its initial operation have placed on the agenda, once again, Lyndon LaRouche's 40-year program, developed in the 1980s, for the establishment of a science city on Mars. (See below.) The primary task of the Mars Exploration Rovers, named Spirit and Opportunity, is to deploy a suite of scientific instruments, in order to peer into the past, and help answer the question: At some time in its history, did Mars have a climate and environment that would have supported life? The rovers will not be searching for fossils or direct evidence of life. Experiments aboard the two Viking landers in 1976 attempted to do that, returning ambiguous results that are still being debated in the scientific community. The complex soil chemistry on the Martian surface convinced scientists that the level of robotic technology available today would best be deployed to search for indirect evidence of possible life on Mars.
Since the 1970s Viking missions, field research in extreme environments on Earth has revealed not only that liquid water is a prerequisite for life, but that everywhere there is water, and some source of energy, there is life. Such environments have included hot steam vents under the ocean, lakes under the ice of Antarctica, and the inside of radioactive nuclear power plants. Where there is water and energy, there is life. There is no evidence of liquid water currently on the surface of Mars. There is frozen water at the poles, and, as recently discovered from orbital measurements, significant caches of ice co-mingled with soil under the surface of much of the planet. There is gaseous water vapor in the atmosphere, whose concentration waxes and wanes with the change of seasons on Mars, when polar ice sublimes into the atmosphere, or freezes onto its surface. But could there be liquid water under the surface, as ice locked in the soil is heated by interior activity on the planet? Intriguing features captured by NASA's orbiting Mars Global Surveyor and Odyssey spacecraft, such as gullies carved into the sides of craters by what appear to be geologically-recent flows of water, have led geologists to wonder. Confirming there is a substantial amount of subsurface water or ice at mid-latitudes would also be an enabling factor in establishing a permanent human presence on Mars, for the life we will bring there. NASA is carrying out an intensive study of the current and past history of water on Mars, over the next decade, through a series of unmanned scientific missions, to be launched every 26 months. The Mars Exploration Rovers will provide the first in this series of breakthroughs in our understanding of this remarkable world.

Hitting the 'Sweet Spot'

Choosing the landing sites for the two Mars rovers was a long and arduous task.
More than one hundred sites were considered over a period of two years by more than 100 scientists and engineers, using orbital imaging and other data from Mars Odyssey and Global Surveyor. While the site had to be scientifically interesting, the most important criteria, as science team member Dr. Matt Golembek stated, were "safety, safety, and safety." Gusev Crater was the chosen target for Spirit because there is evidence that this depression, the size of the State of Connecticut, once was home to a lake, or some standing body of water. An asteroid or comet impact created Gusev Crater as long ago as 4 billion years, and on its 95-mile diameter floor, there are younger impact craters. There is a branching valley, called Ma'adim Vallis, probably carved by flowing water, which leads directly into Gusev Crater, through a breach in its southern rim. Water flowing down the valley could have pooled in Gusev Crater, leaving behind sediments from the highlands from whence it came, and from the river's trip into Gusev, before it exited through a gap in the crater's northern rim. The surface of Gusev Crater, which appeared relatively smooth from orbit, and confirmed to be so by photographs taken by Spirit on the ground, may be covered with wind-blown dust deposits, or material from volcanic eruptions. The samples of layered sedimentary rocks that would tell the history of the site, therefore, may be found in the material from the bottom of the original crater that was ejected to the surface, when secondary impacts took place. Placing Spirit in a portion of Gusev Crater that was not too rocky, relatively flat, and not too dusty, required the most intricate trajectory planning, and the analysis of real-time data of the changing conditions in the Martian atmosphere, in order to allow last-minute adjustments. The parameters for entry into Mars' atmosphere, descent, and landing were carefully calculated using models based on imaging and thermal data from the two orbiters.
But even the best models can be bested by the red planet. In early December, a dust storm was observed on the opposite side of Mars from the Spirit landing site. Scientists were aware that the increased dust in the atmosphere would increase its temperature, but thought that the effect would be limited to the vicinity of the storm. They discovered shortly before Spirit landed that the effects were global, and that higher-than-expected temperatures would have an impact on Spirit's landing. Small adjustments were made in the last few minutes before descent and landing, to account for the change in the weather. Images and data, including the dust deposits on the rover's solar arrays, led scientists to conclude that visibility at Gusev Crater now is similar to a smoggy day in a big city on Earth.

Mission managers warned before landing that even a large gust of wind could end the mission. To increase the likelihood of success, a set of systems was placed on the lander to help guide its descent. These included downward-looking cameras on the lander to take three images on the way down; three small rockets to compensate for any wind gusts that might give the lander a horizontal velocity; radar on the lander to send pulses toward the ground to measure its altitude; and the ability for Spirit to communicate with Mars Odyssey during its entire descent to the surface, in order to record each step of the process.

The result was that the engineers succeeded in placing Spirit in "the sweet spot," as scientist Dr. Steve Squyres described it. From the first black-and-white images the rover transmitted to Earth, three hours after it landed, it was clear that its neighborhood in Gusev Crater is made to order.

A Different Mars

Unlike the 1970s Viking and 1997 Pathfinder landing sites, this site has only 3% of its surface covered with rocks, versus 20%. There are no large boulders visible; nothing so tall that the rover will have to drive around it.
Nature has saved the scientists time, Dr. Squyres has noted, by scouring the surfaces of many of the rocks through the periodic transit of dust devils swirling through the windy crater. The site contains the diversity of rocks and soil the scientists had hoped for. There are rounded and angular rocks, dark-surfaced and brighter rocks, soil that appears dusty, soil that appears compacted, and insets of small craters, where ejecta may expose primary material from the past of Gusev Crater.

While the egress of the rover from its landing platform was delayed by a couple of days, in order to turn it in place so it could roll off in the safest direction, scientists have been studying the data collected by Spirit to make their short- and long-term exploration plans. At a briefing at NASA's Jet Propulsion Laboratory on Jan. 13, Dr. Squyres outlined the primary objectives of each of the working science teams.

The atmospheric science team, he reported, is studying the observations of the sky taken by the rover's thermal emission spectrometer, to refine their understanding of Mars' dynamic atmosphere and weather. The team is aiming for high-fidelity temperature profiles of the atmosphere on Mars. These data will be important in fine-tuning the landing of the Opportunity rover on Jan. 25, as well as in generally improving weather forecasting on Mars.

"There is almost an embarrassment of rocks" to study, Squyres said, in regard to the work of the geology and long-term planning science team. The first order of business will be to study the different rock and soil types in the immediate vicinity of the rover, and the team will be choosing the best targets. Visual images, as well as infrared measurements, already indicate the variety the scientists had hoped for.

The group interested in studying the physical properties of the rocks and soil is most immediately anxious to gain access to the images of the tracks that the rover's wheels will leave in the soil as it exits the lander ramp.
They will study the soil's compressibility, and do little wheel maneuvers, such as holding five wheels fixed and rotating the sixth, to provide a look at material at some depth. Later, the rover will do trenching, digging deeper down into the soil. Following that, an intriguing piece of real estate near the lander, now dubbed the "magic carpet," where Spirit's airbags apparently dragged against the soil, will be a point of interest. In the visual images, the darker, subsurface soil that was uncovered looks like mud. Scientists caution that it is unlikely the soil is actually wet, but they are anxious to discern its differences from the surrounding area.

The mineralogy and geochemistry group is deciding in which direction the rover should go after it studies its immediate surroundings. The first order of business is to provide a map of the diversity of the site. Their job is to use the rover's suite of spectrographic instruments, which will provide compositional data on the rocks, to do a thorough characterization of the neighborhood.

By Jan. 13, the scientists had assembled the entire 360° color 3-D panorama of photographs from the rover, seeing details that were invisible in the first, black-and-white, lower-resolution navigation images. On the horizon, in an easterly direction from Spirit, is a cluster of eight rolling hills. The nearest, however, is almost two miles from the landing spot, or about five times the distance the rover was designed to travel. Dr. Squyres stressed that even if the rover could not make the traverse all the way to the hills, the view and detail of the hills will "get better and better" as the rover is sent closer and closer. One "extremely attractive target," according to Dr. Squyres, is a small crater only about 800 feet in the distance. It appears to be an impact crater that has excavated subsurface material. Once the rover is on the move, scientists and engineers will decide if it should drive over and peek over the ridge of the crater's rim.
Meet the Field Geologists

The Mars Exploration Rovers are the most complex robotic devices for planetary study ever deployed. Each is designed to wander the red planet for at least 90 Mars days, or sols (equivalent to about 92 Earth days, since a sol is roughly 40 minutes longer than an Earth day), and cover a distance of up to 300 feet per day. Unlike the diminutive 22-pound Sojourner rover, which depended upon a lander for communications, the 384-pound, golf-cart-sized Spirit and Opportunity rovers communicate directly with the two overhead Mars orbiters, and with the Earth. Thus, there is no limit on the distance they can travel from the landing site. The amount of data, including images, that Spirit can send back in a day, using all three communication links, is more than ten times what was retrieved from Sojourner in 1997.

How far each rover will travel will depend upon how long it is operational. As Mars goes from summer to fall in its southern hemisphere over the next three months, and the days shorten and temperatures decline, the rovers will have to use more energy to keep their instruments and electronics warm. At the same time, there will be less solar energy available for their panels to convert into electricity. So for this mission, time is of the essence.

During the time they are functioning on the surface of Mars, the rovers have their prime objectives. These were chosen to carry out the studies that would indicate whether or not water was persistent on Mars. For Spirit, this means a thorough characterization of the diversity of the rocks and soil; the search for minerals that could have been deposited by water flow or precipitation; the search for minerals created in the presence of water; and the extraction of clues from its geologic investigation that relate to the environmental conditions when liquid water was present on the surface, such as erosion or rock fracturing.

To meet these objectives, Spirit carries a scientific payload, called Athena. It includes two instruments that survey the general site.
The first is a pair of high-resolution color stereo cameras, whose photographs have already produced images with a clarity never before seen. The second is a miniature Thermal Emission Spectrometer, or Mini-TES, which sees objects in the infrared. From afar, Mini-TES is determining the mineral composition of Martian features, peering through the dust that coats some of the rocks to see their spectral signature. It has already identified higher-than-expected concentrations of carbonates, which form in the presence of water on Earth. Mini-TES also measures the gross heat emitted by objects, and will help characterize the texture of the soil (fluffy or compacted) by obtaining a profile of its absorption of heat during the day, and its release at night.

The rover has an arm (and hand, and fingers), which can reach out and deploy three instruments for in situ measurements. These are the Microscopic Imager, a combined microscope and camera, which will produce extremely close-up views of rocks and soils; the Mossbauer Spectrometer, to determine the composition and abundance of iron-bearing minerals, and the magnetic properties of surface materials; and the Alpha Particle X-Ray Spectrometer, to determine the individual elements that make up the rocks and soil. To clear the way for looking behind the surface and into the interior, the Rock Abrasion Tool will grind away the top layer of rocks, and expose fresh material underneath for the arm's instruments to investigate up close.

By the end of its mission, scientists hope that Spirit will provide them with data of the quantity and quality needed to come to a definitive answer to the question of whether there was a lake of some sort at Gusev Crater, and, if so, how long the water persisted there.

Next Rover About To Arrive

Opportunity is scheduled to land on the opposite side of Mars, at Meridiani Planum, on Jan. 25, Eastern Standard Time; late the previous night at the Jet Propulsion Laboratory in California.
Like Gusev Crater, Meridiani is near the Martian equator, but halfway around the planet. The site is one of the smoothest, flattest plains on Mars, and is of particular interest due to its mineral composition. From orbit, the Mars Global Surveyor Thermal Emission Spectrometer has observed that Meridiani Planum is rich in an iron oxide mineral called gray hematite, which on Earth usually forms in the presence of water. Opportunity will take a closer look.

These two complementary rover missions are taking important steps in NASA's effort to "follow the water" on Mars. Throughout the rest of this decade, future missions will extend the scope and depth of this intensive exploration of Mars, and put in place the infrastructure for the decade to follow. In 2005, NASA plans to launch the Mars Reconnaissance Orbiter, to carry out a remote sensing study of the planet comparable to what is carried out continuously to study the Earth. It is designed to combine the big-picture perspective of an orbiter with the level of local detail previously obtainable only by landing a spacecraft on the surface. The Phoenix Mars Scout, scheduled for the next launch opportunity in 2007, will send a spacecraft, for the first time, to a non-equatorial landing spot, in the icy northern, arctic part of the planet. After the Mars Reconnaissance Orbiter conducts its high-resolution examination of thousands of Mars locales, the nuclear-powered, precision-landed Mars Science Laboratory will be deployed in 2009, to intensively study the surface for a full Martian year or longer; it will be able to cover a distance on the ground an order of magnitude larger than the current set of rovers can. During the same 2009 launch window, the Mars Telecommunications Orbiter will be sent to Mars. It will be the first interplanetary spacecraft whose primary mission will be to provide a communications link for other missions.
Its first task will be to provide the capability to dramatically increase the amount of data that the Science Laboratory can send back to Earth.
Given the twin demands of controlling climate change and ensuring the world's future energy needs are met, "the first question to ask is not 'how do we reduce emissions?' " says Roger Pielke Jr., a science-policy specialist at the University of Colorado at Boulder, the author of the critique. Instead, he says, the question should be: "In a world that needs vast amounts of more energy, how can we provide that energy in ways that do not lead to the accumulation of carbon in the atmosphere?" Technologies that are already at hand or likely to go commercial over the next decade may not be climate friendly enough to stabilize atmospheric greenhouse-gas concentrations so that global warming is held to about 3.6 degrees Fahrenheit by century's end. At this stage, he says, people should focus more on policies that directly address what many analysts see as a yawning technology gap, rather than on regulatory approaches that deal with the gap less directly. The critique by Dr. Pielke and colleagues at the National Center for Atmospheric Research in Boulder, Colo., and McGill University in Montreal, has touched off a small firestorm among the scientific community – in no small part because it appeared in the pages of Nature, one of the most high-profile science journals on the planet. Some of the reaction to the critique focuses on the nuts and bolts of the argument, which implies that when the IPCC lays out emissions projections, it might do better to assume that technologies don't get much better over time. That would give a clearer sense of the challenge ahead than assuming – as they argue the IPCC does now – that anywhere from half to virtually all of the technology gap could close in the course of ordinary economic evolution.
Nov 10, 2009, 6:09 AM Post #3 of 7 Re: [DivyaG] 'constant' in perl [In reply to]

Can we define any variable as constant in Perl (just like const in C and Java), so that its value cannot be modified throughout the program?

There are two techniques to get unmodifiable values in Perl: `use constant;` and `use Readonly;`

The `constant` pragma is part of the standard Perl installation and is available to any script. It creates a sub that returns the value, and that sub can be used anywhere a sub can:

perl -e 'use constant PI=>atan2(0,-1); print PI, "\n"'

Running the deparser on it, you can see the sub. You can also see that the compile phase replaces the sub with the value for faster execution:

perl -MO=Deparse -e 'use constant PI=>atan2(0,-1); print PI, "\n"'

Readonly is a module available from CPAN: http://search.cpan.org/~roode/Readonly-1.03/Readonly.pm

What it does is tie into the variable and disable its ability to change. This adds some execution overhead every time the variable is used. Its biggest drawback is that Readonly must be installed on every system where the script runs.

I love Perl; it's the only language where you can bless your thingy. Perl documentation is available at perldoc.perl.org. The list of standard modules and pragmatics is available in perlmodlib.
A successful NASA flight test Monday demonstrated how a spacecraft returning to Earth can use an inflatable heat shield to slow and protect itself as it enters the atmosphere at hypersonic speeds.

The Inflatable Re-entry Vehicle Experiment, or IRVE, was vacuum-packed into a 15-inch-diameter payload "shroud" and launched on a small sounding rocket from NASA's Wallops Flight Facility on Wallops Island, Va., at 8:52 a.m. EDT. The 10-foot-diameter heat shield, made of several layers of silicone-coated industrial fabric, inflated with nitrogen to a mushroom shape in space several minutes after liftoff. The Black Brant 9 rocket took approximately four minutes to lift the experiment to an altitude of 131 miles. Less than a minute later, the experiment was released from its cover and started inflating on schedule at 124 miles up. The inflation of the shield took less than 90 seconds.

"Our inflation system, which is essentially a glorified scuba tank, worked flawlessly, and so did the flexible aeroshell," said Neil Cheatwood, IRVE principal investigator and chief scientist for the Hypersonics Project at NASA's Langley Research Center in Hampton, Va. "We're really excited today because this is the first time anyone has successfully flown an inflatable reentry vehicle."

According to the cameras and sensors on board, the heat shield expanded to its full size and went into a high-speed free fall. The key focus of the research came about six and a half minutes into the flight, at an altitude of about 50 miles, when the aeroshell re-entered Earth's atmosphere and experienced its peak heating and pressure measurements for a period of about 30 seconds. An onboard telemetry system captured data from instruments during the test and broadcast the information to engineers on the ground in real time. The technology demonstrator splashed down and sank in the Atlantic Ocean about 90 miles east of Virginia's Wallops Island.
"This was a small-scale demonstrator," said Mary Beth Wusk, IRVE project manager, based at Langley. "Now that we've proven the concept, we'd like to build more advanced aeroshells capable of handling higher heat rates." Inflatable heat shields hold promise for future planetary missions, according to researchers. To land more mass on Mars at higher surface elevations, for instance, mission planners need to maximize the drag area of the entry system. The larger the diameter of the aeroshell, the bigger the payload can be. Provided by JPL/NASA (news : web) Explore further: Collisions of coronal mass ejections can be super-elastic
Chaos, like next year's weather, is anything but predictable. So results appearing in the 8 October print issue of PRL may seem paradoxical: researchers claim that for the first time, two lasers have been synchronized so that one can anticipate the chaotic fluctuations in the other. Some experts are skeptical of the result, but it illustrates some of the challenges in the field of chaos research, where physicists are attempting to understand and make use of nature's seeming disorder.

The weather is a classic example of chaos. In a perfect world, a computer–given a complete set of atmospheric data and equations–should be able to crunch out forecasts for years to come. But in practice, tiny uncertainties in the initial data set eventually lead to completely inaccurate predictions. A laser also becomes very unpredictable if you reflect a small portion of the laser's output back into the device. This feedback makes the laser's future output–like next year's weather–strongly dependent on its past.

Alan Shore and his colleagues at the University of Wales in Bangor, UK, created fluctuations in their first laser–the "transmitter"–using this standard feedback technique. They also fed some of the transmitter's light into a second, "receiver" laser, which went into fits of intensity fluctuations nearly identical with those of the transmitter. The team arranged the mirrors in a way that gave the transmitter's feedback signal a long round trip, so the receiver got the signal earlier and anticipated the transmitter's fluctuations by a fixed amount of time. Surprisingly, that fixed anticipation time did not depend upon the round-trip time for light to re-enter the transmitter.

Several researchers have pointed out that even the tiniest amount of accidental light traveling from the receiver back to the transmitter could induce unintended fluctuations in the transmitter.
They suggest that the receiver might not be anticipating fluctuations in the transmitter, but rather causing them. Shore says that there is a small amount of feedback from the receiver to the transmitter, but that his team has shown that this feedback is not causing the fluctuations in the transmitter–a result they will soon publish.

Although many experts want to consider this result more carefully, they are optimistic about the potential usefulness of chaos synchronization. Dan Gauthier of Duke University in Durham, NC, suggests one possible application for anticipating chaos: researchers could maintain an extremely steady laser intensity by using the advanced signal to counter fluctuations quickly. Ingo Fischer, of the University of Darmstadt in Germany, still sees open questions in the interpretation of the new results, but is fascinated by the synchronization of chaotic lasers. He says researchers are still dreaming up applications. Unfortunately, one critical phenomenon seems to be missing from his list: predicting the weather.