Several workers were evacuated on Sunday morning from the Fukushima nuclear plant after the level of radioactivity exceeded 10 million times normal. The water used to cool reactor number two was found to contain extremely high amounts of radioactive iodine particles. Samples of water found in the basement of the turbine hall behind the reactor measured 1,000 millisieverts per hour, said a spokesman for Tokyo Electric Power Company (TEPCO). "This figure is ten million times greater than the radioactivity found in the water of a reactor in normal condition," he explained. According to him, the fuel in the reactor core probably suffered damage during the onset of a meltdown that occurred immediately after the earthquake and tsunami of 11 March. The intervention teams have begun to use fresh water to cool the reactors, because the sea water used until now accelerates corrosion and could pose a threat. The government in Tokyo announced that the efforts of Japanese workers at the Fukushima Daiichi nuclear power plant to cool the reactors suffered a new blow on Friday, after a senior official said the vessel of reactor 3 was damaged. Hidehiko Nishiyama, deputy director of the Japanese Agency for Nuclear and Industrial Safety, explained that radiation emanating from the reactor fuel -- a mixture of uranium and plutonium -- could be released.
(CNN) -- Human concepts of beauty are shaping conservation efforts, protecting good-looking plants and animals over ugly ones, a study suggests. The report, "The new Noah's Ark: beautiful and useful species only," has been published in the 2012 edition of the scientific journal, Biodiversity. It describes how vulnerable species that overtly display characteristics human beings respect or find desirable -- such as beauty, strength, power or cuddliness -- are more likely to be the focus of concerted conservation programs than animals or plants that are less appealing to the eye. "People have biases towards species that are glamorous," said Dr. Ernie Small, author of the study and taxonomist for Agriculture Canada. "Animals that are beautiful, entertaining or that command respect due to their size or power are almost always given greater forms of conservation protection." The study highlights charismatic mega-fauna such as whales, tigers and polar bears as animals more likely to be the focus of successful conservation programs, protective legislation and public funding drives. As a result, the plight of less glamorous -- but no less ecologically important -- organisms, such as snakes, spiders and frogs, is often ignored. Small argues that this focus on large, spectacular species could have profound consequences for a wide variety of finely balanced ecosystems and food chains. "When you concentrate on the preservation of selective species ... you do an inadequate job of protecting biodiversity as a whole," he said. He adds that by employing such selective methods human beings could also be manufacturing nature to reflect their own image or the characteristics they admire. "We find attractive in animals the same qualities that we find attractive largely in our own species. These are not always the most ecologically important species, however," he added. For those working on the front line of conservation, the concerns raised by Small and his study are very real.
According to Dr. Sybille Klenzendorf, director of World Wildlife Fund's species program, there is already a wide body of evidence that suggests people are most interested in vulnerable animals that most closely resemble human beings -- usually large mammals with forward-facing eyes. But Klenzendorf argues that focusing conservation efforts on these vulnerable species can lead to the best conservation programs. "These large, charismatic species are ... the ones that require the largest amount of wild habitat, and by preserving them we save the less impressive species too," said Klenzendorf. "In order to ensure the survival of wild tigers, we not only have to protect vast amounts of natural forest, but we also need to ensure that the animals on which they prey, and the plants on which those animals depend, are all protected. The same is true for polar bears and elephants." "By protecting those animals that we are the most attracted to, we are also influencing and supporting the survival of other species, as well as protecting entire landscapes," she explained. But while recognizing that these methods are "not entirely without benefit," Small believes that more must be done to protect less appealing wildlife. "Aesthetic standards have become one of the primary determinants of which species are deemed worthy for conservation and this has to be looked at," he said.
Foreign type specifiers

Here is a list of valid foreign type specifiers for use in accessing external objects.

[type] void
Specifies an undefined return value. Not allowed as an argument type.

[type] bool
As an argument: any value (#f is false (zero), anything else is true (non-zero)). As a result: anything different from 0 and the NULL pointer is #t. This type maps to int in both C and C++.

[type] char
[type] unsigned-char
A signed or unsigned character. As an argument, the input Scheme character is cast to C char or unsigned char, resulting in an 8-bit value. A Scheme character with an integer value outside 0-127 (signed) or 0-255 (unsigned) will be silently truncated to fit; in other words, don't feed it UTF-8 data. As a return type, it accepts any valid Unicode code point; the return value is treated as a C int and converted to a Scheme character.

[type] byte
[type] unsigned-byte
An 8-bit integer value in the range -128 to 127 (byte) or 0 to 255 (unsigned byte). Values are cast to and from the C char or unsigned char type, so values outside this 8-bit range will be unceremoniously truncated.

[type] short
[type] unsigned-short
A short integer number in 16-bit range. Maps to C short or unsigned short.

[type] int
[type] unsigned-int
[type] int32
An integer number in fixnum range (-1073741824 to 1073741823, i.e. 31-bit signed). unsigned-int further restricts this range to 30-bit unsigned (0 to 1073741823). int maps to the C type int and int32 maps to int32_t. As argument types, these expect a fixnum value, and as return types they return a fixnum. Values outside the ranges prescribed above are silently truncated; use e.g. integer if you need the full 32-bit range. Note: int32 is not recognized as an argument type prior to Chicken 4.7.2.

Notes for 64-bit architectures:
- C's int is 32 bits on most 64-bit systems (LP64), so int and int32 are functionally (if not semantically) equivalent.
- The fixnum type is larger than 32 bits, and consequently the entire signed or unsigned 32-bit range is available for this type on 64-bit systems. However, for compatibility with 32-bit systems it is probably unwise to rely on this.
- If you need a 32-bit range, you should use (unsigned) integer or integer32.

[type] integer
[type] unsigned-integer
[type] integer32
[type] unsigned-integer32
A fixnum or integral flonum, mapping to int or int32_t or their unsigned variants. When outside of fixnum range the value will overflow into a flonum. C's int is 32 bits on most 64-bit systems (LP64), so integer and integer32 are functionally (if not semantically) equivalent.

[type] integer64
[type] unsigned-integer64
A fixnum or integral flonum, mapping to int64_t or uint64_t. When outside of fixnum range the value will overflow into a flonum. On a 32-bit system, the effective precision of this type is 52 bits plus the sign bit, as it is stored in a double flonum. (In other words, numbers between 2^52 and 2^64-1 can be represented, but there are gaps in the sequence; the same goes for their negative counterparts.) On a 64-bit system the range is 62 bits plus the sign bit, the maximum range of a fixnum. (Numbers between 2^62 and 2^64-1 have gaps.) unsigned-integer64 is not valid as a return type until Chicken 4.6.4.

[type] long
[type] unsigned-long
Either a fixnum or a flonum in the range of an (unsigned) machine long. Similar to integer32 on 32-bit systems or integer64 on 64-bit.

[type] size_t
A direct mapping to C's size_t.

[type] float
[type] double
A floating-point number. If an exact integer is passed as an argument, it is automatically converted to a float.

[type] number
A floating-point number. Similar to double, but when used as a result type, either an exact integer or a floating-point number is returned, depending on whether the result fits into an exact integer or not.

[type] c-string
[type] nonnull-c-string
A zero-terminated C string. The argument value #f is allowed and is passed as a NULL pointer; similarly, a NULL pointer is returned as #f. Note that the string contents are copied into (automatically managed) temporary storage with a zero byte appended when passed as an argument. Also, a return value of this type is copied into garbage-collected memory using strcpy(3).
For the nonnull- variant, passing #f will raise an exception, and returning a NULL pointer will result in undefined behavior (e.g. a segfault).

[type] c-string*
Similar to c-string and nonnull-c-string, but if used as a result type, the pointer returned by the foreign code will be freed (using the C library's free(3)) after copying. This type specifier is not valid as a result type for callbacks defined with define-external.

[type] unsigned-c-string
Same as c-string, nonnull-c-string, etc., but mapping to C's unsigned char * type.

[type] c-string-list
Takes a pointer to an array of C strings terminated by a NULL pointer and returns a list of strings. The starred version c-string-list* also releases the storage of each string and the pointer array afterward using free(3). Only valid as a result type, and can only be used with non-callback functions.

[type] symbol
A symbol, which will be passed to foreign code as a zero-terminated string. When declared as the result of foreign code, the result should be a string, and a symbol with the same name will be interned in the symbol table (and returned to the caller). Attempting to return a NULL string will raise an exception.

[type] blob
[type] nonnull-blob
A blob object, passed as a pointer to its contents. Permitted only as an argument type, not a return type. Arguments of type blob may optionally be #f, which is passed as a NULL pointer. For the nonnull- variant, passing a #f value will raise an exception.

[type] u8vector
A SRFI-4 number-vector object, passed as a pointer to its contents. These are allowed only as argument types, not as return types. The value #f is also allowed and is passed to C as a NULL pointer. For the nonnull- variants, passing #f will raise an exception.

[type] c-pointer
[type] (c-pointer TYPE)
[type] nonnull-c-pointer
[type] (nonnull-c-pointer TYPE)
An operating-system pointer or a locative. c-pointer is untyped, whereas (c-pointer TYPE) points to an object of foreign type TYPE. The value #f is allowed and is passed to C as a NULL pointer; similarly, NULL is returned as #f.
For the two nonnull- variants, passing #f will raise an exception, and returning NULL will result in a null pointer object. (Note: it is still possible to deliberately pass a null pointer through a nonnull-c-pointer by manually creating a null pointer object, e.g. via (address->pointer 0).)

[type] pointer-vector
A vector of foreign pointer objects; see Pointer vectors. Permitted only as an argument type, not as a return type. This type was introduced in Chicken 4.6.3. A pointer vector contains a C array of void pointers, and the argument is passed as a void ** pointer to these contents. Just as for bytevector types, you must somehow communicate the length of this array to the callee; there is no sentinel or NULL terminator. #f is allowed and passed as a NULL pointer. For the nonnull- variant, passing a #f value will raise an exception.

[type] (ref TYPE)
A C++ reference type. Reference types are handled the same way as pointers inside Scheme code.

[type] (function RESULTTYPE (ARGUMENTTYPE1 ... [...]) [CALLCONV])
A function pointer. CALLCONV specifies an optional calling convention and should be a string; the meaning of this string is entirely platform dependent. The value #f is also allowed and is passed as a NULL pointer.

Scheme objects

[type] scheme-object
An arbitrary, raw Scheme data object (immediate or non-immediate). A scheme-object is passed or returned as a C_word, the internal Chicken type for objects. Typically, this consists of an object header and tag bits. It is up to you to build or take apart such objects using the core library routines in chicken.h and runtime.c. More information on object structure can be found in Data representation.

[type] scheme-pointer
[type] nonnull-scheme-pointer
An untyped pointer to the contents of a non-immediate Scheme object; for example, the raw byte contents of a string. Only allowed as an argument type, not a return type. The value #f is also allowed and is passed as a NULL pointer. For the nonnull- variant, passing #f will raise an exception.
Don't confuse this type with (c-pointer ...), which means something different (a machine-pointer object). scheme-pointer is typically used to get a pointer to the raw byte content of strings and blobs. But if you pass in a SRFI-4 vector, you will get a pointer to the blob object's header (not the blob's contents), which is almost certainly wrong. Instead, convert to a blob beforehand, or use a SRFI-4 specific type.

User-defined C types

[type] (struct NAME)
A struct of the name NAME, which should be a string. Structs cannot be directly passed as arguments to foreign functions, nor can they be result values. However, pointers to structs are allowed.

[type] (union NAME)
A union of the name NAME, which should be a string. Unions cannot be directly passed as arguments to foreign functions, nor can they be result values. However, pointers to unions are allowed.

[type] (enum NAME)
An enumeration type. Handled internally as an integer.

C++ types

[type] (instance CNAME SCHEMECLASS)
A pointer to a C++ class instance wrapped in a Scheme object instance. CNAME should designate the name of the C++ class, and SCHEMECLASS should be the class that wraps the instance pointer. Instances can be constructed and their pointer extracted like this:

(make SCHEMECLASS 'this POINTER)
(slot-ref INSTANCE 'this)

[type] (instance-ref CNAME SCHEMECLASS)
A reference to a C++ class instance.

[type] (template TYPE ARGTYPE ...)
A C++ template type. For example, vector<int> would be specified as (template "vector" int). Template types cannot be directly passed as arguments or returned as results. However, pointers to template types are allowed.

Type qualifiers

[type] (const TYPE)
The foreign type TYPE with an additional const qualifier.
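As a brief illustration of how several of the specifiers above fit together, here is a minimal sketch. The libc functions and the struct point declaration are illustrative choices, not part of this reference, and foreign-lambda forms only work in code compiled with csc:

```scheme
;; Sketch: binding a few libc functions with the foreign types above.
;; Compile with csc; foreign-lambda is unavailable in the interpreter.

;; int: fixnum argument, fixnum result (abs(3)).
(define c-abs (foreign-lambda int "abs" int))

;; c-string / size_t: the Scheme string is copied to temporary
;; storage with a trailing zero byte before the call (strlen(3)).
(define c-strlen (foreign-lambda size_t "strlen" c-string))

;; (c-pointer (struct ...)): structs can only be passed by pointer,
;; never by value. "struct point" is a hypothetical example type.
(foreign-declare "struct point { double x, y; };")
(define point-x
  (foreign-lambda* double (((c-pointer (struct "point")) p))
    "C_return(p->x);"))

(print (c-abs -42))        ; => 42
(print (c-strlen "hello")) ; => 5
```

Note how foreign-lambda* embeds a C body directly, with each argument declared as a (TYPE NAME) pair using the same type specifiers.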
Map of foreign types to C types

|Foreign type||C type|
|[nonnull-]blob||unsigned char *|
|[nonnull-]u8vector||unsigned char *|
|[nonnull-]u16vector||unsigned short *|
|[nonnull-]unsigned-c-string||unsigned char *|
|([nonnull-]c-pointer TYPE)||TYPE *|
|(enum NAME)||enum NAME|
|(struct NAME)||struct NAME|
|(ref TYPE)||TYPE &|
|(template T1 T2 ...)||T1<T2, ...>|
|(union NAME)||union NAME|
|(function RTYPE (ATYPE ...) [CALLCONV])||[CALLCONV] RTYPE (*)(ATYPE, ...)|
|(instance CNAME SNAME)||CNAME *|
|(instance-ref CNAME SNAME)||CNAME &|

Previous: Accessing external objects
AP Science Writer NEW YORK (AP) -- To millions of people, the Christmas tree is a cheerful sight. To scientists who decipher the DNA codes of plants and animals, it's a monster. We're talking about the conifer, the umbrella term for cone-bearing trees like the spruce, fir, pine, cypress and cedar. Apart from their Yuletide popularity, they play big roles in the lumber industry and in healthy forest ecosystems. Scientists would love to identify the billions of building blocks that make up the DNA of a conifer. That's called sequencing its genome. Such analysis is a standard tool of biology, and doing it for conifers could reveal genetic secrets useful for basic science, breeding and forest management. But the conifer genome is dauntingly huge. And like a big price tag on a wished-for present, that has put it out of reach. Now, as Christmas approaches, it appears the conifer's role as a genetic Grinch may be ending. In recent months, scientific teams in the United States and Canada have released preliminary, patchy descriptions of conifer genomes. And a Swedish team plans to follow suit soon in its quest for the Norway spruce. "The world changed for conifer genetics," said David Neale of the University of California, Davis. It's "entering the modern era." What happened? Credit the same recent technological advances that have some doctors predicting that someday, people will have their genomes sequenced routinely as part of medical care. The technology for that has gotten faster and much cheaper. "Until just a few years ago, the idea of sequencing even a single conifer genome seemed impossible," said John MacKay of the University of Laval in Quebec City, who co-directs a multi-institution Canadian project that's tackling the white spruce. The new technologies changed that, he said. How big is a conifer genome? Consider the 80-foot Christmas tree at Rockefeller Center in New York. 
It's a Norway spruce, so its genome is six times bigger than that of anybody skating below it. Other conifer genomes are even larger. Nobody expects a perfect, finished conifer genome anytime soon. MacKay and others say that reaching that goal would probably require some advances in technology. But even partial versions can help tree breeders and basic scientists, researchers say. Why bother doing this? For breeders, "genomes can really help you speed up the process and simply do a better job of selecting trees, if you understand the genetic architecture of the traits you want to breed for," MacKay said. The prospect of climate change brings another dimension. As forest managers select trees to plant after a fire or tree harvesting, genetic information might help them pick varieties that can adapt to climate trends in coming decades, Neale said. It's all about "giving them a tree that will be healthy into the future," he said. To sequence a genome, scientists start by chopping DNA into small bits, and let their machines sequence each bit. That's the part that has become much faster and cheaper in recent years. But then comes the task of re-assembling these bits back into the long DNA chains found in trees. And that is a huge challenge with conifers, because their DNA chains contain many repeated sequences that make the assembly a lot harder. As a result, conifers present "these large regions I think we will never be able to piece together" with current technologies, said Par Ingvarsson of Umea University in Sweden, who is leading the Norway spruce project. Will scientists develop new technologies to overcome that problem? "You should never say never in this game," Ingvarsson said. This past summer, Neale's group presented partial results for the genome sequence of loblolly pine, based on DNA extracted from a single pine nut. It includes about a million disconnected chunks of DNA, and altogether it covers well over half the tree's genome. 
Neale figures it will take his team until 2016 to complete genomes of the loblolly, Douglas-fir and sugar pine. The project is financed by the U.S. Department of Agriculture. MacKay's group recently released its early results on DNA taken from a single white spruce. As for the Swedish project on Norway spruce, Ingvarsson said its results will be made public early next year. The 2 million DNA pieces have captured most of the estimated 35,000 to 40,000 genes in the tree, even if researchers don't know just where those genes go in the overall genome sequence, he said. People have about 23,000 genes, not much different from a conifer. The tree's genome is so much bigger because it also contains an abundance of non-gene DNA with no obvious function, Ingvarsson said. He said his chief reason for tackling conifer genomes was to fill a conspicuous vacancy in the list of sequenced plants. "It was like the one missing piece," he said. "We just need this final piece to say something about how all the plant kingdom has evolved over the last billion years or so." Canadian project: http://bit.ly/UQdTPd U.S. project: http://pinegenome.org/pinerefseq/ Swedish project: http://www.congenie.org/ Malcolm Ritter can be followed at http://twitter.com/malcolmritter Copyright 2012 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
Science Fair Project Encyclopedia Knot theory is a branch of topology that was inspired by observations, as the name suggests, of knots. But progress in the field no longer depends on experiments with twine. Knot theory concerns itself with abstract properties of theoretical knots — the spatial arrangements that in principle could be assumed by a loop of string. In mathematical jargon, knots are embeddings of the closed circle in three-dimensional space. An ordinary knot is converted to a mathematical knot by splicing its ends together. The topological theory of knots asks whether two such knots can be rearranged to match, without opening the splice. The question of untying an ordinary knot has to do with unwedging tangles of rope pulled tight. A knot can be untied in the topological theory of knots if and only if it is equivalent to the unknot, a circle in 3-space. Knot theory originated in an idea of Lord Kelvin's (1867), that atoms were knots of swirling vortices in the æther (also known as 'ether'). He believed that an understanding and classification of all possible knots would explain why atoms absorb and emit light at only the discrete wavelengths that they do (i.e. explain what we now understand to depend on quantum energy levels). Scottish physicist Peter Tait spent many years listing unique knots under the belief that he was creating a table of elements. When ether was discredited through the Michelson-Morley experiment, vortex theory became completely obsolete, and knot theory fell out of scientific interest. Only in the past 100 years, with the rise of topology, have knots become a popular field of study. Today, knot theory is inextricably linked to particle physics, DNA replication and recombination, and to areas of statistical mechanics. An introduction to knot theory Creating a knot is easy. Begin with a one-dimensional line segment, wrap it around itself arbitrarily, and then fuse its two free ends together to form a closed loop. 
One of the biggest unresolved problems in knot theory is to describe the different ways in which this may be done, or conversely to decide whether two such embeddings are different or the same. Before we can do this, we must decide what it means for embeddings to be "the same". We consider two embeddings of a loop to be the same if we can get from one to the other by a series of slides and distortions of the string which do not tear it, and do not pass one segment of string through another. If no such sequence of moves exists, the embeddings are different knots. A useful way to visualise knots and the allowed moves on them is to project the knot onto a plane: think of the knot casting a shadow on the wall. Now we can draw and manipulate pictures, instead of having to think in 3D. However, there is one more thing we must do: at each crossing we must indicate which section is "over" and which is "under". This is to prevent us from pushing one piece of string through another, which is against the rules. To avoid ambiguity, we must avoid having three arcs cross at the same crossing and also having two arcs meet without actually crossing (we would say that the knot is in general position with respect to the plane). Fortunately, a small perturbation in either the original knot or the position of the plane is all that is needed to ensure this. In 1927, working with this diagrammatic form of knots, J.W. Alexander and G.B. Briggs, and independently Kurt Reidemeister, demonstrated that two knot diagrams belonging to the same knot can be related by a sequence of three kinds of moves on the diagram. These operations, now called the Reidemeister moves, are: - Twist and untwist in either direction. - Move one loop completely over another. - Move a string completely over or under a crossing.
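The Alexander-Briggs-Reidemeister result can be stated compactly in symbols (the notation here is ours, not from the original article):

```latex
% Reidemeister's theorem, as described above: two diagrams D_1, D_2
% represent the same knot if and only if they differ by planar
% isotopy together with a finite sequence of the three moves
% R1 (twist/untwist), R2 (move a loop over another), R3 (move a
% string over or under a crossing).
\[
  D_1 \sim D_2
  \quad\Longleftrightarrow\quad
  D_1 \xrightarrow{\;(R1 \mid R2 \mid R3)^{*}\;} D_2
\]
```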
Knot invariants can be defined by demonstrating a property of a knot diagram which is not changed when we apply any of the Reidemeister moves. Some very important invariants can be defined in this way, including the Jones polynomial. You can unknot any circle in four dimensions. There are two steps to this. First, "push" the circle into a 3-dimensional subspace; this is the hard, technical part, which we will skip. Now imagine temperature to be a fourth dimension of the 3-dimensional space. Then you could make one section of a line cross through the other by simply warming it with your fingers. Two knots can be added by breaking the circles and connecting the pairs of ends. Knots in 3-space form a commutative monoid with prime factorization; the trefoil knots are the simplest prime knots. Higher-dimensional knots can be added by splicing the spheres. While you cannot form the unknot in three dimensions by adding two non-trivial knots, you can in higher dimensions.
- The Knot Book: An Elementary Introduction to the Mathematical Theory of Knots, Colin Adams, 2001, ISBN 0716742195
- Knots: Mathematics With a Twist, Alexei Sossinsky, 2002, ISBN 0674009444
- Knot Theory, Vassily Manturov, 2004, ISBN 0415310016
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
If a scientist tells you sunny days are ahead – duck and cover! He may be talking about space weather and the giant solar flares that fry satellites and national electrical grids! Just in case you haven’t had enough stormy weather here on Earth, astronomers are warning us to expect some major solar storms over the next few years. Coming our way—more solar flares. Source: NASA Solar Dynamics Observatory. Here’s what they are talking about. The sun is a spinning ball of hot magnetized gas and plasma. As it spins, the magnetic force lines in the sun become tangled. After a few years, the twisted magnetic fields are so tangled that they stick out in giant loops through the sun’s surface – creating dark sunspots. When the sun is covered with giant storms and sunspots, something has to give. For the three years following the solar peak of activity, something does. Like a rubber band stretched beyond endurance, the magnetic fields snap and break. This splatters large hunks of solar gas and plasma out into space. We call the snapping “solar flares”. Imagine heating a pot of grease. Eventually, when it gets hot enough, the surface of the grease is covered with bubbles. When the bubbles burst, the grease spatters and anyone nearby can get burned. Now imagine the splatter bigger than Jupiter. Fortunately, the Earth's magnetic field is a shield. However, some of the ionized solar particles leak through the North and South Magnetic Poles. When you look up in the sky, you can see the crackling energy as colorful “Northern Lights” or aurora. Scientists warn that these solar “splatters” can do enormous damage to technology. The plasma is hot (269,540˚F) and generates a strong electrical current. The splatters are particularly dangerous to satellites, where they can fry computer chips or burn out equipment, and to electrical power grids, where they can create surges that burn out equipment and cause blackouts.
The last time we had storms big enough to cause this damage was 1989, when a solar storm caused a major Canadian blackout in Quebec and hit several satellites. A National Science Foundation study shows a map of US electrical damage if there were another storm the size of the May 1921 super storm. Yikes! Parts of the US electrical grid at risk from a May 1921-sized solar storm. Source: NASA Science News - John Kappenmann, Severe Space Weather Events—Understanding Societal and Economic Impacts, National Academy of Sciences, 2008. NASA is monitoring the sun and can warn utilities if a really big storm is coming, in time for the companies to take some protective precautions. Still, it is strange. Big solar storms can eliminate GPS, silence cell phones and cause blackouts. Without electricity, pipelines can’t deliver gas or water. If the sun has a major tantrum, we couldn’t even flush our toilets! Compared to that, earthbound snow flurries are enjoyable inconveniences. Speaking of snow flurries—are you one of the people being hit by this week’s storm or one of the lucky people enjoying the safe type of “sunny” weather? Evelyn Browning Garriss, historical climatologist, blogger, writer for The Old Farmer's Almanac, and editor of The Browning Newsletter, has advised farmers, businesses, and investors worldwide on upcoming climate events and their economic and social impact for the past 21 years.
These colorful images are of thin slices of meteorites viewed through a polarizing microscope. Part of the group classified as HED meteorites for their mineral content (Howardite, Eucrite, Diogenite), they likely were delivered to Earth from 4 Vesta, the main-belt asteroid currently being explored by NASA's Dawn spacecraft. Why are they thought to be from Vesta? Because the HED meteorites have visible and infrared spectra that match the spectrum of that asteroid. The hypothesis of their origin on Vesta is also consistent with data from Dawn's ongoing observations. Ejected by impacts, the diogenites shown here would have originated deep within the crust of Vesta. Similar rocks are also found in the lower crust of planet Earth. A sample scale is indicated by the white bars, each 2 millimeters long. Credit: Hap McSween (Univ. Tennessee), A. Beck and T. McCoy (Smithsonian Inst.)
Image: Intestinal Bacteria - D. Colgan - © Australian Museum There are many forms of bacteria, which gain their energy in a variety of ways. Some bacteria are autotrophic, making their own food in a similar way to plants by splitting carbon dioxide using energy from the sun, or through the oxidation of elements such as nitrogen and sulphur. Bacteria involved in the decomposition of animal bodies are heterotrophic, breaking down complex molecules into their constituent elements through respiration or fermentation (depending on whether they are aerobic or anaerobic bacteria). Bacteria are largely responsible for the recycling of carbon, nitrogen and sulphur into forms where they can be taken up by plants. For example, heterotrophic bacteria like Bacillus decompose proteins, releasing ammonia, which is oxidised by other bacteria into nitrite, and eventually into nitrate. Nitrate can be assimilated by plants as a source of nitrogen.
A research group at Rice University (Houston; www.rice.edu) has developed a method for vaporizing water into steam using sunlight-illuminated nanoparticles, with only a small fraction of the energy heating the fluid. Sub-wavelength metal or carbon particles...
<urn:uuid:bb9bd42f-ea66-4798-ae90-07b5afa1c944>
2.578125
118
Truncated
Science & Tech.
47.005519
908
What is Dynamic Combinatorial Chemistry? DCC methodology utilizes cyclic structures which interchange via reversible covalent bond formation to create a dynamic library of potential receptors. When this thermodynamically controlled mixture is incubated with an analyte of interest, the library responds by shifting the equilibrium towards the receptor that best binds the analyte, i.e. the best receptor is amplified relative to the non-templated state. This is best visualized with the following simple graphic: The experiment begins with a library of “monomers”, each of which has two reactive groups on it; in the example above, the reactive groups are thiols. Prior to adding the analyte, the dithiols are oxidized and equilibrated to the complex mixture of disulfides; three are shown but statistically many hundreds are possible. An analyte is then added under conditions where the library is in equilibrium, such that the library constituents can respond to the analyte by shifting towards the best host-guest pair to establish a new equilibrium. In this competitive binding situation, the best receptor is identified by determining which compound(s) was amplified. This differs from a traditional (static) combinatorial library because the method simultaneously generates the library and dynamically amplifies/identifies the winner. Funding for the CDCC comes from the Defense Threat Reduction Agency Basic Research program administered by the Army Research Office (W911NF04D0004)
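Because the library is under thermodynamic control, the amplification step can be caricatured with Boltzmann weights: the macrocycle with the most favourable binding free energy comes to dominate the analyte-bound population. A toy sketch of that idea follows; the member names and ΔG values are invented for illustration and are not CDCC data.

```python
# Toy Boltzmann-weighted picture of amplification in a dynamic combinatorial
# library under thermodynamic control. The member names and binding free
# energies (kJ/mol) are invented for illustration; they are not CDCC data.
import math

R_KJ, T_K = 8.314e-3, 298.0  # gas constant in kJ/(mol*K), room temperature

binding_dG = {"dimer": -10.0, "trimer": -25.0, "tetramer": -15.0}

# Relative equilibrium population of analyte-bound receptors ~ exp(-dG/RT)
weights = {m: math.exp(-dG / (R_KJ * T_K)) for m, dG in binding_dG.items()}
total = sum(weights.values())
fractions = {m: w / total for m, w in weights.items()}

best = max(fractions, key=fractions.get)
print(best)  # the strongest binder dominates the bound population
```

With these made-up numbers the 15 kJ/mol advantage of the "trimer" translates into it holding well over 90% of the bound population, which is the amplification-and-identification step in miniature.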
<urn:uuid:6e56a4a3-6b0d-4a30-b131-b11068516a66>
2.734375
299
Knowledge Article
Science & Tech.
13.672278
909
The recognized authority for satellite observations of Amazon deforestation is the Brazilian Space Research Institute (Portuguese acronym, INPE). This organization has been monitoring Amazon deforestation since 1988. Currently, INPE publishes monthly reports (for example, see the August 2009 [PDF] report in Portuguese) describing the latest satellite data and deforestation rates estimated from them. The area shown here is in the southern part of Mato Grosso, which had the highest deforestation rate among all Brazilian states between 2001 and 2005 (it was subsequently surpassed by Pará state [PDF]). The images come from INPE, but were provided to Climate Central by Dr. Ruth DeFries (Columbia University) and Dr. Douglas Morton (University of Maryland). Close to 20% of total human-caused emissions of carbon dioxide come from deforestation. Trees, other vegetation and soil return carbon to the atmosphere when forests are cut or burned down. There are many causes of deforestation. Analysis by Tim Searchinger and colleagues found that biofuels crop production in countries like the U.S. may be one contributing factor because of the pressure it may generate to increase the amount of land cultivated for food production abroad.
<urn:uuid:285048c7-93aa-4420-917d-d409dcd86a84>
3.5625
233
Knowledge Article
Science & Tech.
26.751402
910
This tutorial, developed for high school physics students, uses multiple graphs and animations to study the relationship between the motion of an object and its graph of Velocity vs. Time. Users explore the relationship between position and velocity, positive and negative velocities, slope and shape of graphs, and acceleration. Interactive self-evaluations are included. See Related Materials for an accompanying lab by the same author. This item is part of The Physics Classroom, a comprehensive set of tutorials and multimedia resources for high school physics.
Editor's Note: Education research indicates that many students have difficulty differentiating velocity and acceleration, and often plot velocity graphs as the path of an object. See Related Materials for a free research-based diagnostic tool to probe misconceptions related to velocity.
- 6-8: 4F/M3b. If a force acts towards a single center, the object's path may curve into an orbit around the center.
- 9-12: 4F/H1. The change in motion (direction or speed) of an object is proportional to the applied force and inversely proportional to the mass.
- 9-12: 4F/H8. Any object maintains a constant speed and direction of motion unless an unbalanced outside force acts on it.
9. The Mathematical World
9B. Symbolic Relationships
- 6-8: 9B/M3. Graphs can show a variety of possible relationships between two variables. As one variable increases uniformly, the other may do one of the following: increase or decrease steadily, increase or decrease faster and faster, get closer and closer to some limiting value, reach some intermediate maximum or minimum, alternately increase and decrease, increase or decrease in steps, or do something different from any of these.
- 9-12: 9B/H4. Tables, graphs, and symbols are alternative ways of representing data and relationships that can be translated from one to another.
- 9-12: 9C/H3c.
A graph represents all the values that satisfy an equation, and if two equations have to be satisfied at the same time, the values that satisfy them both will be found where the graphs intersect.
Common Core State Standards for Mathematics Alignments
Expressions and Equations (6-8): Represent and analyze quantitative relationships between dependent and independent variables.
- (6) 6.EE.9 Use variables to represent two quantities in a real-world problem that change in relationship to one another; write an equation to express one quantity, thought of as the dependent variable, in terms of the other quantity, thought of as the independent variable. Analyze the relationship between the dependent and independent variables using graphs and tables, and relate these to the equation.
Understand the connections between proportional relationships, lines, and linear equations.
- (8) 8.EE.5 Graph proportional relationships, interpreting the unit rate as the slope of the graph. Compare two different proportional relationships represented in different ways.
Use functions to model relationships between quantities.
- (8) 8.F.5 Describe qualitatively the functional relationship between two quantities by analyzing a graph (e.g., where the function is increasing or decreasing, linear or nonlinear). Sketch a graph that exhibits the qualitative features of a function that has been described verbally.
High School — Functions (9-12)
Interpreting Functions (9-12)
- F-IF.4 For a function that models a relationship between two quantities, interpret key features of graphs and tables in terms of the quantities, and sketch graphs showing key features given a verbal description of the relationship.
Linear, Quadratic, and Exponential Models (9-12)
- F-LE.1.b Recognize situations in which one quantity changes at a constant rate per unit interval relative to another.
- F-LE.1.c Recognize situations in which a quantity grows or decays by a constant percent rate per unit interval relative to another.
- F-LE.2 Construct linear and exponential functions, including arithmetic and geometric sequences, given a graph, a description of a relationship, or two input-output pairs (include reading these from a table).
Common Core State Reading Standards for Literacy in Science and Technical Subjects 6—12
Craft and Structure (6-12)
- RST.9-10.4 Determine the meaning of symbols, key terms, and other domain-specific words and phrases as they are used in a specific scientific or technical context relevant to grades 9—10 texts and topics.
Range of Reading and Level of Text Complexity (6-12)
- RST.9-10.10 By the end of grade 10, read and comprehend science/technical texts in the grades 9—10 text complexity band independently and proficiently.
This resource is part of a Physics Front Topical Unit. Topic: Kinematics: The Physics of Motion. Unit Title: Graphing.
A companion to the resource above, this online tutorial explores the importance of the slope of v-t graphs as a representation of an object's acceleration. Self-guided evaluations help students overcome common misconceptions.
Resource URL: http://www.physicsclassroom.com/Class/1DKin/U1L4a.cfm
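The tutorial's two central facts about v-t graphs, that the slope equals the acceleration and the area under the curve equals the displacement, are easy to verify numerically. A minimal sketch with arbitrarily chosen values (a = 2 m/s², v0 = 5 m/s, 10 s of motion):

```python
# Numerical check of the tutorial's two key facts about v-t graphs:
# slope = acceleration, area under the curve = displacement.
# The values (a = 2 m/s^2, v0 = 5 m/s, 10 s of motion) are arbitrary.
a, v0, dt = 2.0, 5.0, 0.1
times = [i * dt for i in range(101)]                 # 0 .. 10 s
v = [v0 + a * t for t in times]                      # a straight v-t line

slope = (v[-1] - v[0]) / (times[-1] - times[0])      # rise over run
area = sum(0.5 * (v[i] + v[i + 1]) * dt for i in range(100))  # trapezoids

print(round(slope, 6))   # ~2.0, the acceleration
print(round(area, 6))    # ~150.0 = v0*t + a*t^2/2, the displacement
```

The trapezoid sum recovers the kinematic result v0·t + a·t²/2 = 50 + 100 = 150 m, which is exactly the area of the trapezoid under the straight v-t line.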
<urn:uuid:9c52facc-6efa-4158-bbd2-7f8905b6f5a6>
4.3125
1,174
Content Listing
Science & Tech.
46.876969
911
This material has 2 associated documents. Select a document title to view a document's information. The Einstein Cannon model computes and displays the trajectory of cannonballs (particles) shot from a cannon in the vicinity of a black hole. It was created for the study of Einstein's theory of general relativity and the Schwarzschild metric. The main window displays a map of space in the vicinity of the black hole using Schwarzschild coordinates and a cannon located a distance r0 from the black hole's center. The position and firing angle of the cannon can be adjusted by dragging a marker, and the number of cannonballs and their initial speed can be changed using input fields. The maximum speed of the cannonball is the speed of light c=1 in accordance with Einstein's theory. Newton suggested that a cannonball fired from a high mountain could fall to Earth, orbit the Earth, or fly away depending on how it was fired. The same is true in general relativity but there are many important differences. This model demonstrates these differences. The Einstein Cannon model is a supplemental simulation for the article "When action is not least for orbits in general relativity" by C. G. Gray and Eric Poisson in the American Journal of Physics 79(1), 43-55 (2011) and has been approved by the authors and the American Journal of Physics (AJP) editor. The simulation was developed using the Easy Java Simulations (EJS) modeling tool and is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_gr_EinsteinCannon.jar file will run the program if Java is installed. Last Modified June 12, 2013 This file has previous versions.
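The EJS source is not reproduced here, but the physics it implements can be sketched. For a massive test particle in the Schwarzschild geometry (G = c = 1), radial motion follows the effective potential V(r) = (1 − 2M/r)(1 + L²/r²), giving d²r/dτ² = L²/r³ − M/r² − 3ML²/r⁴, where the last term is the general-relativistic correction absent from Newton's cannonball. A minimal integrator along these lines (illustrative only, not the article's model; r0 and L are arbitrary):

```python
# Illustrative-only sketch of the physics behind the Einstein Cannon model:
# timelike geodesics in the Schwarzschild geometry, geometric units G = c = 1.
# L is the conserved angular momentum per unit mass; the numbers below
# (r0, L) are arbitrary, not taken from the article.

def simulate(r0, L, vr0=0.0, M=1.0, dtau=1e-3, steps=20000):
    """Integrate r(tau), phi(tau) with semi-implicit (symplectic) Euler."""
    r, vr, phi = r0, vr0, 0.0
    traj = []
    for _ in range(steps):
        # d2r/dtau2 = -(1/2) dV/dr with V(r) = (1 - 2M/r)(1 + L^2/r^2):
        acc = L**2 / r**3 - M / r**2 - 3.0 * M * L**2 / r**4
        vr += acc * dtau        # update velocity first (symplectic Euler)
        r += vr * dtau
        phi += (L / r**2) * dtau
        if r <= 2.0 * M:        # cannonball crossed the event horizon
            break
        traj.append((r, phi))
    return traj

# A bound "cannonball": launched tangentially at r0 = 10M with L = 4M it
# rattles between turning points of V(r) instead of plunging or escaping.
orbit = simulate(10.0, 4.0)
```

For these parameters the effective potential has a stable minimum near r = 12M, so the particle oscillates between turning points at roughly 10M and 15M, the relativistic analogue of Newton's orbiting cannonball.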
<urn:uuid:30473464-14c9-4b83-96a9-e55a7faabbb7>
3.515625
346
Knowledge Article
Science & Tech.
50.896718
912
A new tool to identify the calls of bat species could help conservation efforts. Because bats are nocturnal and difficult to observe or catch, the most effective way to study them is to monitor their echolocation calls. These sounds are emitted in order to hear the echo bouncing back from surfaces around the bats, allowing them to navigate, hunt and communicate. Many different measurements can be taken from each call, such as its minimum and maximum frequency, or how quickly the frequency changes during the call, and these measurements are used to help identify the species of bat. However, a paper by an international team of researchers, published in the Journal of Applied Ecology, asserts that poor standardisation of acoustic monitoring limits scientists’ ability to collate data. Kate Jones, chairwoman of the UK-based Bat Conservation Trust told the BBC that “without using the same identification methods everywhere, we cannot form reliable conclusions about how bat populations are doing and whether their distribution is changing. "Because many bats migrate between different European countries, we need to monitor bats at a European - as well as country - scale.” The team selected 1,350 calls from 34 different European bat species from EchoBank, a global echolocation library containing more than 200,000 bat call recordings. This raw data has allowed them to develop the identification tool, iBatsID , which can identify 34 out of 45 species of bats. This free online tool works anywhere in Europe, and its creators claim can identify most species correctly more than 80% of the time. There are 18 species of bat residing in the UK, including the common pipistrelle and greater horseshoe bat. Monitoring bats is vital not just to this species, but also to the whole ecosystem. Bats are extremely sensitive to changes in their environment, so if bat populations are declining, it can be an indication that other species might be affected in the future.
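As a caricature of how such call measurements can feed species identification (this is not the iBatsID method; the species names are real but the frequencies and the nearest-centroid approach below are purely illustrative):

```python
# Purely illustrative nearest-centroid classifier on two call measurements
# (minimum and maximum frequency, kHz). This is NOT the iBatsID method nor
# real EchoBank data; the species names are real, the numbers are invented.
import math

REFERENCE = {
    "common pipistrelle": [(45, 70), (46, 75), (44, 68)],
    "greater horseshoe bat": [(79, 83), (80, 84), (81, 83)],
}

def centroid(points):
    """Mean of a list of 2-D feature points."""
    n = len(points)
    return tuple(sum(p[k] for p in points) / n for k in range(2))

def classify(call):
    """Assign a call to the species with the nearest feature centroid."""
    cents = {sp: centroid(pts) for sp, pts in REFERENCE.items()}
    return min(cents, key=lambda sp: math.dist(call, cents[sp]))

print(classify((45, 72)))  # lands in the pipistrelle cluster
```

Real tools use many more measurements per call and far richer statistical models, which is exactly why the standardisation the paper calls for matters: classifiers trained on differently measured calls cannot be compared across countries.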
<urn:uuid:293a002d-f152-4885-9293-13b158f7cec0>
4.46875
398
News Article
Science & Tech.
29.966706
913
Previous DailyTech stories have detailed recent cooling experienced by the planet, and highlighted some of the scientists currently predicting extended global cooling. Even the UN IPCC has stated that world temperatures may continue to decline, if only briefly. Now, an expert in geophysics at the National Autonomous University of Mexico has added his voice to the fray. Victor Manuel Velasco Herrera, a researcher at UNAM's Institute of Geophysics, has predicted an imminent period of cooling intense enough to be called a small ice age. Speaking to a crowd at a conference at the Center for Applied Sciences and Technological Development, Herrera says the sun can both cool and warm the planet. Variations in solar activity, he says, are causing changes in the Earth's climate. "So that in two years or so, there will be a small ice age that lasts from 60 to 80 years", he said. "The most immediate result will be drought." Herrera says satellite temperature data indicates this cooling may have already begun. Recent increases in glacier mass in the Andes, Patagonia, and Canada were given as further evidence of an upcoming cold spell. Herrera also described the predictions of the Intergovernmental Panel on Climate Change (IPCC) as "erroneous". According to Herrera, their forecasts "are incorrect because [they] are only based on mathematical models which do not include [factors such as] solar activity". Herrera pointed to the so-called "Little Ice Age", which peaked in the 17th century, as a previous cooling event caused by solar fluctuations. Herrera made his remarks at UNAM, which, located in Mexico City, is the oldest university on the North American continent.
<urn:uuid:b541eb1c-8210-4607-99fd-7ed44c1918fe>
3.59375
343
News Article
Science & Tech.
31.846667
914
India's space program was rocked by a setback on Christmas Day. An unmanned Indian rocket lifted off from the Satish Dhawan Space Center on Saturday and blew up on live television shortly after launch because of a malfunction. The Geosynchronous Satellite Launch Vehicle (GSLV) was carrying an advanced GSAT-5P communication satellite into orbit when it veered off its intended path and exploded moments after take-off. The Indian Space Research Organization (ISRO) is citing electronic failure as the cause. "The performance of the (rocket) was normal up to about 50 seconds. Soon after that the vehicle developed large altitude error leading to breaking up of the vehicle," ISRO Chairman K. Radhakrishnan said. He added that data indicates commands from onboard computers ceased to reach circuits of the first-stage engines, but what caused the interruption needs to be studied, and the agency hopes to get an assessment of what triggered it. According to some reports, the rocket was deliberately blown up by mission control following the malfunction. Originally scheduled for December 20, the launch was postponed after engineers found a leak in one of the Russian-made cryogenic engines of the GSLV. The failed launch is the second this year for the space agency; the first rocket plunged into the Bay of Bengal during a developmental flight in April. India has scheduled its first manned space flight for the year 2016.
<urn:uuid:8f85f615-485c-4f6e-890f-802a5e5bc99f>
2.65625
298
News Article
Science & Tech.
40.971369
915
The Spitzer Space Telescope prior to launch
Organization: NASA / JPL / Caltech
Major contractors: Lockheed Martin
Launch date: 2003-08-25, 05:35:00 UTC
Launched from: Cape Canaveral, Florida
Launch vehicle: Delta II 7920H ELV
Mission length: 2.5 to 5+ years (9 years, 9 months, and 25 days elapsed)
Mass: 950 kg (2,100 lb)
Type of orbit: Heliocentric
Orbit period: 1 year
Location: Orbiting the Sun
Wavelength: 3 to 180 micrometers
Diameter: 0.85 m (2 ft 9 in)
Focal length: 10.2 m
MIPS: far infrared detector arrays
The Spitzer Space Telescope (SST), formerly the Space Infrared Telescope Facility (SIRTF), is an infrared space observatory launched in 2003. It is the fourth and final of the NASA Great Observatories program. The planned mission period was to be 2.5 years with a pre-launch expectation that the mission could extend to five or slightly more years until the onboard liquid helium supply was exhausted. This occurred on 15 May 2009. Without liquid helium to cool the telescope to the very cold temperatures needed to operate, most of the instruments are no longer usable. However, the two shortest wavelength modules of the IRAC camera are still operable with the same sensitivity as before the cryogen was exhausted, and will continue to be used in the Spitzer Warm Mission. In keeping with NASA tradition, the telescope was renamed after its successful demonstration of operation, on December 18, 2003. Unlike most telescopes which are named after famous deceased astronomers by a board of scientists, the new name for SIRTF was obtained from a contest open to the general public. The contest led to the telescope being named in honor of Lyman Spitzer, one of the 20th century's great scientists.
Though he was not the first to propose the idea of the space telescope (Hermann Oberth being the first, in Wege zur Raumschiffahrt, 1929, and also in Die Rakete zu den Planetenräumen, 1923), Spitzer wrote a 1946 report for RAND describing the advantages of an extraterrestrial observatory and how it could be realized with available (or upcoming) technology. He has been cited for his pioneering contributions to rocketry and astronomy, as well as "his vision and leadership in articulating the advantages and benefits to be realized from the Space Telescope Program." It follows a rather unusual orbit, heliocentric instead of geocentric, trailing and drifting away from Earth's orbit at approximately 0.1 astronomical unit per year (a so-called "earth-trailing" orbit). The primary mirror is 85 centimetres (33 in) in diameter, f/12 and made of beryllium and was cooled to 5.5 K (−449.77 °F). The satellite contains three instruments that allowed it to perform astronomical imaging and photometry from 3 to 180 micrometers, spectroscopy from 5 to 40 micrometers, and spectrophotometry from 5 to 100 micrometers. By the early 1970s, astronomers began to consider the possibility of placing an infrared telescope above the obscuring effects of Earth's atmosphere. In 1979, a report from the National Research Council of the National Academy of Sciences, A Strategy for Space Astronomy and Astrophysics for the 1980s, identified a Space Infrared Telescope Facility (SIRTF) as "one of two major astrophysics facilities [to be developed] for Spacelab", a Shuttle-borne platform. Anticipating the major results from an upcoming Explorer satellite and from the Shuttle mission, the report also favored the "study and development of ... long-duration spaceflights of infrared telescopes cooled to cryogenic temperatures." 
The launch in January 1983 of the Infrared Astronomical Satellite, jointly developed by the United States, the Netherlands, and the United Kingdom, to conduct the first infrared survey of the sky, whetted the appetites of scientists worldwide for follow-up space missions capitalizing on the rapid improvements in infrared detector technology. Earlier infrared observations had been made by both space-based and ground-based observatories. Ground-based observatories have the drawback that at infrared wavelengths or frequencies, both the Earth's atmosphere and the telescope itself will radiate (glow) strongly. Additionally, the atmosphere is opaque at most infrared wavelengths. This necessitates lengthy exposure times and greatly decreases the ability to detect faint objects. It could be compared to trying to observe the stars at noon. Previous space-based satellites (such as IRAS, the Infrared Astronomical Satellite, and ISO, the Infrared Space Observatory) were operational during the 1980s and 1990s and great advances in astronomical technology have been made since then. Most of the early concepts envisioned repeated flights aboard the NASA Space Shuttle. This approach was developed in an era when the Shuttle program was expected to support weekly flights of up to 30 days duration. A May 1983 NASA proposal described SIRTF as a Shuttle-attached mission, with an evolving scientific instrument payload. Several flights were anticipated with a probable transition into a more extended mode of operation, possibly in association with a future space platform or space station. SIRTF would be a 1-meter class, cryogenically cooled, multi-user facility consisting of a telescope and associated focal plane instruments. It would be launched on the Space Shuttle and remain attached to the Shuttle as a Spacelab payload during astronomical observations, after which it would be returned to Earth for refurbishment prior to re-flight. 
The first flight was expected to occur about 1990, with the succeeding flights anticipated beginning approximately one year later. However, the Spacelab-2 flight aboard STS-51-F showed that the Shuttle environment was poorly suited to an onboard infrared telescope due to contamination from the relatively "dirty" vacuum associated with the orbiters. By September 1983 NASA was considering the "possibility of a long duration [free-flyer] SIRTF mission". Spitzer is the only one of the Great Observatories not launched by the Space Shuttle, which had been originally intended. However after the 1986 Challenger disaster, the Centaur LH2/LOX upper stage, which would have been required to place it in its final orbit, was banned from Shuttle use. The mission underwent a series of redesigns during the 1990s, primarily due to budget considerations. This resulted in a much smaller but still fully capable mission which could use the smaller Delta II expendable launch vehicle. One of the most important advances of this redesign was an Earth-trailing orbit. Cryogenic satellites that require liquid helium (LHe, T ≈ 4 K) temperatures in near-Earth orbit are typically exposed to a large heat load from the Earth, and consequently entail large usage of LHe coolant, which then tends to dominate the total payload mass and limits mission life. Placing the satellite in solar orbit far from Earth allowed innovative passive cooling such as the sun shield, against the single remaining major heat source to drastically reduce the total mass of helium needed, resulting in an overall smaller lighter payload, with major cost savings. This orbit also simplifies telescope pointing, but does require the Deep Space Network for communications. The primary instrument package (telescope and cryogenic chamber) was developed by Ball Aerospace & Technologies Corp., in Boulder, CO. 
The individual instruments were developed jointly by industrial, academic, and government institutions, the principals being Cornell, the University of Arizona, the Smithsonian Astrophysical Observatory, Ball Aerospace, and Goddard Spaceflight Center. The infrared detectors were developed by Raytheon in Goleta, California. Raytheon used indium antimonide and a doped silicon detector in the creation of the infrared detectors. It is stated that these detectors are 100 times more sensitive than what was once available in the beginning of the project during the 1980s. The spacecraft was built by Lockheed Martin. The mission is operated and managed by the Jet Propulsion Laboratory and the Spitzer Science Center, located on the Caltech campus in Pasadena, California. - IRAC (Infrared Array Camera), an infrared camera which operates simultaneously on four wavelengths (3.6 µm, 4.5 µm, 5.8 µm and 8 µm). Each module uses a 256×256-pixel detector—the short wavelength pair use indium antimonide technology, the long wavelength pair use arsenic-doped silicon impurity band conduction technology. The two shorter wavelength bands (3.6 µm & 4.5 µm) for this instrument remain productive after LHe depletion in the spring of 2009, at the telescope equilibrium temperature of around 30 K, so IRAC continues to operate as the "Spitzer Warm Mission". The principal investigator is Giovanni Fazio of Harvard University; the flight hardware was built by NASA Goddard Space Flight Center. - IRS (Infrared Spectrograph), an infrared spectrometer with four sub-modules which operate at the wavelengths 5.3–14 µm (low resolution), 10–19.5 µm (high resolution), 14–40 µm (low resolution), and 19–37 µm (high resolution). Each module uses a 128×128-pixel detector—the short wavelength pair use arsenic-doped silicon blocked impurity band technology, the long wavelength pair use antimony-doped silicon blocked impurity band technology. The principal investigator is James R. 
Houck of Cornell University; the flight hardware was built by Ball Aerospace. - MIPS (Multiband Imaging Photometer for Spitzer), three detector arrays in the far infrared (128 × 128 pixels at 24 µm, 32 × 32 pixels at 70 µm, 2 × 20 pixels at 160 µm). The 24 µm detector is identical to one of the IRS short wavelength modules. The 70 µm detector uses gallium-doped germanium technology, and the 160 µm detector also uses gallium-doped germanium, but with mechanical stress added to each pixel to lower the bandgap and extend sensitivity to this long wavelength. The principal investigator is George H. Rieke of the University of Arizona; the flight hardware was built by Ball Aerospace. As an example of data from the different instruments, the nebula Henize 206 was imaged in 2004, allowing comparison of images from each device. The first images taken by SST were designed to show off the abilities of the telescope and showed a glowing stellar nursery; a big swirling, dusty galaxy; a disc of planet-forming debris; and organic material in the distant universe. Since then, many monthly press releases have highlighted Spitzer's capabilities, as the NASA and ESA images do for the Hubble Space Telescope. As one of its most noteworthy observations, in 2005, SST became the first telescope to directly capture the light from extrasolar planets, namely the "hot Jupiters" HD 209458b and TrES-1. (It did not resolve that light into actual images though.) This was the first time extrasolar planets had actually been visually seen; earlier observations had been indirectly made by drawing conclusions from behaviors of the stars the planets were orbiting. The telescope also discovered in April 2005 that Cohen-kuhi Tau/4 had a planetary disk that was vastly younger and contained less mass than previously theorized, leading to new understandings of how planets are formed. 
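Wien's displacement law (λ_peak ≈ 2898 µm·K / T) gives a rough sense of what the long MIPS wavelengths correspond to physically: each band is most sensitive to dust whose thermal emission peaks there. The band list comes from the text above; the "dust thermometer" framing of this sketch is an illustration, not something the mission documentation is being quoted on.

```python
# Wien's displacement law: a blackbody at temperature T (kelvin) emits most
# strongly near lambda_peak = b / T, with b ~ 2897.8 um*K. Used here to show
# roughly what dust temperatures the three MIPS bands are sensitive to.
WIEN_B_UM_K = 2897.8  # Wien displacement constant, micrometer-kelvin

def peak_wavelength_um(temp_k):
    """Peak emission wavelength in micrometers for temperature in kelvin."""
    return WIEN_B_UM_K / temp_k

for band_um in (24, 70, 160):
    print(f"{band_um} um band: dust near {WIEN_B_UM_K / band_um:.0f} K")
```

The same relation also shows one reason the optics had to be so cold: at the 5.5 K operating temperature, the mirror's own thermal emission peaks near 527 µm, far longward of every science band.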
While some time on the telescope is reserved for participating institutions and crucial projects, astronomers around the world also have the opportunity to submit proposals for observing time. Important targets include forming stars (young stellar objects, or YSOs), planets, and other galaxies. Images are freely available for educational and journalistic purposes. In 2004, it was reported that Spitzer had spotted a faintly glowing body that may be the youngest star ever seen. The telescope was trained on a core of gas and dust known as L1014 which had previously appeared completely dark to ground-based observatories and to ISO (Infrared Space Observatory), a predecessor to Spitzer. The advanced technology of Spitzer revealed a bright red hot spot in the middle of L1014. Scientists from the University of Texas at Austin, who discovered the object, believe the hot spot to be an example of early star development, with the young star collecting gas and dust from the cloud around it. Early speculation about the hot spot was that it might have been the faint light of another core that lies 10 times further from Earth but along the same line of sight as L1014. Follow-up observation from ground-based near-infrared observatories detected a faint fan-shaped glow in the same location as the object found by Spitzer. That glow is too feeble to have come from the more distant core, leading to the conclusion that the object is located within L1014. (Young et al., 2004) In 2005, astronomers from the University of Wisconsin at Madison and Whitewater determined, on the basis of 400 hours of observation on the Spitzer Space Telescope, that the Milky Way Galaxy has a more substantial bar structure across its core than previously recognized. Also in 2005, astronomers Alexander Kashlinsky and John Mather of NASA's Goddard Space Flight Center reported that one of Spitzer's earliest images may have captured the light of the first stars in the universe. 
An image of a quasar in the Draco constellation, intended only to help calibrate the telescope, was found to contain an infrared glow after the light of known objects was removed. Kashlinsky and Mather are convinced that the numerous blobs in this glow are the light of stars that formed as early as 100 million years after the big bang, red shifted by cosmic expansion. In March 2006, astronomers reported an 80-light-year-long nebula near the center of the Milky Way Galaxy, the Double Helix Nebula, which is, as the name implies, twisted into a double spiral shape. This is thought to be evidence of massive magnetic fields generated by the gas disc orbiting the supermassive black hole at the galaxy's center, 300 light years from the nebula and 25,000 light years from Earth. This nebula was discovered by the Spitzer Space Telescope, and published in the magazine Nature on March 16, 2006. In May 2007, astronomers successfully mapped the atmospheric temperature of HD 189733 b, thus obtaining the first map of some kind of an extrasolar planet. Since September 2006 the telescope participates in a series of surveys called the Gould Belt Survey, observing the Gould's Belt region in multiple wavelengths. The first set of observations by the Spitzer Space Telescope were completed from September 21, 2006 through September 27. Resulting from these observations, the team of astronomers led by Dr. Robert Gutermuth, of the Harvard-Smithsonian Center for Astrophysics reported the discovery of Serpens South, a cluster of 50 young stars in the Serpens constellation. Scientists have long wondered how tiny silicate crystals, which need high temperatures to form, have found their way into frozen comets, born in the very cold environment of the Solar System's outer edges. The crystals would have begun as non-crystallized, amorphous silicate particles, part of the mix of gas and dust from which the Solar System developed. 
This mystery has deepened with the results of the Stardust sample-return mission, which captured particles from Comet Wild 2. Many of the Stardust particles were found to have formed at temperatures in excess of 1000 K. In May 2009, Spitzer researchers from Germany, Hungary and the Netherlands found that amorphous silicate appears to have been transformed into crystalline form by an outburst from a star. They detected the infrared signature of forsterite silicate crystals on the disk of dust and gas surrounding the star EX Lupi during one of its frequent flare-ups, or outbursts, seen by Spitzer in April 2008. These crystals were not present in Spitzer's previous observations of the star's disk during one of its quiet periods. These crystals appear to have formed by radiative heating of the dust within 0.5 AU of EX Lupi. In August 2009, the telescope found evidence of a high-speed collision between two burgeoning planets orbiting a young star. In October 2009, astronomers Anne J. Verbiscer, Michael F. Skrutskie, and Douglas P. Hamilton published findings of the "Phoebe ring" of Saturn, which was found with the telescope; the ring is a huge, tenuous disc of material extending from 128 to 207 times the radius of Saturn. Spitzer observations, announced in May 2011, indicate that tiny forsterite crystals might be falling down like rain on to the protostar HOPS-68. The discovery of the forsterite crystals in the outer collapsing cloud of the proto-star is surprising, because the crystals form at lava-like high temperatures, yet they are found in the molecular cloud where the temperatures are about minus 170 degrees Celsius. This led the team of astronomers to speculate that the bipolar outflow from the young star may be transporting the forsterite crystals from near the star's surface to the chilly outer cloud.
In January 2012, it was reported that further analysis of the Spitzer observations of EX Lupi can be understood if the forsterite crystalline dust was moving away from the protostar at a remarkable average speed of 38 kilometres per second. It would appear that such high speeds can only arise if the dust grains had been ejected by a bipolar outflow close to the star. Such observations are consistent with an astrophysical theory, developed in the early 1990s, which suggested that bipolar outflows garden, or transform, the disks of gas and dust that surround protostars by continually ejecting reprocessed, highly heated material from the inner disk, adjacent to the protostar, to regions of the accretion disk further away from the protostar.

GLIMPSE and MIPSGAL surveys

GLIMPSE, the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire, is a survey spanning 300° of the inner Milky Way galaxy. It consists of approximately 444,000 images taken at four separate wavelengths using the Infrared Array Camera. MIPSGAL is a similar survey covering 278° of the galactic disk at longer wavelengths. On June 3, 2008, scientists unveiled the largest, most detailed infrared portrait of the Milky Way, created by stitching together more than 800,000 snapshots, at the 212th meeting of the American Astronomical Society in St. Louis, Missouri. This composite survey is now viewable with the GLIMPSE/MIPSGAL Viewer.

[Image: Artificial color image of the Double Helix Nebula, thought to be generated at the galactic center by magnetic torsion 1,000 times greater than the sun's.]
[Image: A cluster of new stars forming in the Serpens South cloud - Spitzer Space Telescope (2008).]
<urn:uuid:93ae5210-f5df-4b6d-9f49-16f0d8cc451d>
3.625
5,111
Knowledge Article
Science & Tech.
47.510557
916
[Photos: The sun sets over Welsh mountains in a December 2008 file photo. Sunspot group 1024, which finally developed over the 4th of July weekend. A large flare shoots out from the sun.]
After one of the longest sunspot droughts in modern times, solar activity picked up quickly over the weekend. A new group of sunspots developed, and while not dramatic by historic standards, the spots were the most significant in many months. "This is the best sunspot I've seen in two years," observer Michael Buxton of Ocean Beach, Calif., said on Spaceweather.com. Solar activity goes in a roughly 11-year cycle. Sunspots are the visible signs of that activity, and they are the sites from which massive solar storms lift off. The past two years have marked the lowest low in the cycle since 1913, and for a while scientists were wondering if activity would ever pick back up. During 2009 so far, the sun has been completely free of spots about 77 percent of the time. NASA researchers last month said quiet jet streams inside the sun were responsible, and that activity would soon return to normal. The new set of spots, named 1024, is kicking up modest solar flares. Sunspots are cool regions on the sun where magnetic energy builds up. They serve as a cap on material welling up from below. Often, that material is released in spectacular light shows called solar flares and discharges of charged particles known as coronal mass ejections. The ejections can travel as space storms to Earth within a day or so, and major storms can knock out satellites and trip power grids on the surface. Prior to the low-activity period, astronomers had been predicting that the next peak in solar activity, expected in 2013, might be one of the most active in many decades. That forecast was recently revised, however, and scientists now expect the next peak to be modest. 
All this matters because, as laid out in a report earlier this year by the National Academy of Sciences, a major solar storm nowadays could cause up to $2 trillion in initial damages by crippling communications on Earth and fueling chaos among residents and even governments in a scenario that would require four to 10 years for recovery. Such a storm struck in 1859, knocking out telegraph communications and causing those lines to erupt in flames. The world then was not so dependent on electronic communication systems, however. Copyright © 2009 Imaginova Corp. All Rights Reserved. This material may not be published, broadcast, rewritten or redistributed.
<urn:uuid:ef7dd6c7-d30d-4ae9-ace0-13398394a006>
3.15625
525
News Article
Science & Tech.
53.85457
917
You can speed up access to the nodes of a large Info file by giving it a tags table. Unlike the tags table for a program, the tags table for an Info file lives inside the file itself and is used automatically whenever Info reads in the file. To make a tags table, go to a node in the file using Emacs Info mode and type M-x Info-tagify. Then you must use C-x C-s to save the file. Info files produced by the makeinfo command that is part of the Texinfo package always have tags tables to begin with. Once the Info file has a tags table, you must make certain it is up to date. If you edit an Info file directly (as opposed to editing its Texinfo source) and, as a result of deletion of text, any node moves back more than a thousand characters in the file from the position recorded in the tags table, Info will no longer be able to find that node. To update the tags table, run M-x Info-tagify again and save the file.

An Info file tags table appears at the end of the file and looks like this:

    ^_^L
    Tag Table:
    File: info, Node: Cross-refs^?21419
    File: info, Node: Tags^?22145
    ^_
    End Tag Table

Note that it contains one line per node, and this line contains the beginning of the node’s header (ending just after the node name), a ‘DEL’ character, and the character position in the file of the beginning of the node.
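As an aside (this is an illustration, not part of the Info tools themselves), the layout above is simple enough to parse mechanically. Here is a minimal Python sketch that extracts node names and byte positions from a tags table, assuming each entry uses the DEL character (0x7F) to separate the node header from the position, as described above:

```python
def parse_tag_table(text):
    """Parse an Info tags table into {node_name: byte_position}.

    Each entry looks like 'File: info, Node: Tags\x7f22145', where the
    DEL character (0x7F) separates the node header from the position.
    """
    table = {}
    for line in text.splitlines():
        if "\x7f" not in line:
            continue  # skip the 'Tag Table:' / 'End Tag Table' markers
        header, _, pos = line.partition("\x7f")
        # The node name is whatever follows the 'Node: ' label.
        node = header.split("Node: ")[-1].strip()
        table[node] = int(pos)
    return table

sample = (
    "Tag Table:\n"
    "File: info, Node: Cross-refs\x7f21419\n"
    "File: info, Node: Tags\x7f22145\n"
    "End Tag Table\n"
)
print(parse_tag_table(sample))  # {'Cross-refs': 21419, 'Tags': 22145}
```

This mirrors what Info itself does when it jumps to a node: look the name up in the table and seek straight to the recorded position instead of scanning the whole file.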
<urn:uuid:f1c967b6-09da-4c74-be97-39a54162e550>
2.609375
324
Tutorial
Software Dev.
53.554853
918
Altruistic Aphids, an Evolutionary Anomaly
by Brian Thomas, M.S. *
Certain aphids manipulate plant tissues to form a hollow gall in which they then reside. But aphids will also help heal plant tissue that they’ve damaged. This behavior serves as a vital self-defense mechanism, because when the gall’s walls are eaten by caterpillars, the tender aphids inside become easy prey for other insect predators. Evolutionary biologist Takema Fukatsu of the National Institute of Advanced Industrial Science and Technology in Tsukuba, Japan, found that specialized aphid guards extrude their body contents to fill holes in the plant wall, “kneading their own gooey blood into a big scab.”[1] Many of these types of aphids, named Nipponaphis monzeni, die in the process. In a recently published study, the research team examined gall walls in various stages of repair and found that the aphid-made scab played the same role as animal-skin scabs, serving as templates for body tissue to grow over the area and repair the wound.[2] Moreover, the aphids were responsible for manipulating the repair of the plant tissue, “because it healed only if live aphids were still in the gall.”[1] Interestingly, scientists have labored for decades to manipulate plant tissue growth like these aphids do. This process entails many technical problems, since some kind of chemical must provide precise communication with the particular plant’s biochemical networks to signal specific dormant genes to activate. Natural processes alone would not produce what is observed here: organisms sacrificing their lives for the greater good of the remaining individuals. Nor do they adequately account for the origin of tiny creatures that can precisely manipulate intricate biochemical pathways for the purpose of healing a plant wound. Natural processes alone cannot explain it, but creation does. 
And now the Creator’s intricate and complex handiwork is providing a blueprint for scientists seeking to find “novel compounds that could prove useful for manipulating plant cell and tissue cultures.”1 - Youngsteadt, E. Aphids Play Doctor. ScienceNOW Daily News. Posted on sciencenow.sciencemag.org February 25, 2009. - Kutsukake, M. Scab formation and wound healing of plant tissue by soldier aphid. Proceedings of the Royal Society B: Biological Sciences. Published online before print February 25, 2009. * Mr. Thomas is Science Writer at the Institute for Creation Research. Article posted on March 17, 2009.
<urn:uuid:168551a6-9ad6-44d0-90bd-e9024b19fa8d>
3.453125
532
News Article
Science & Tech.
40.200577
919
ICMAKE Part 2 Icmake source files are written according to a well-defined syntax, closely resembling the syntax of the C programming language. This is no coincidence. Since the C programming language is so central in the Unix operating system, we assumed that many people using the Unix operating system are familiar with this language. Providing a new tool which is founded on this familiar programming language relieves everybody of the burden of learning yet another dialect, thus simplifying the use of the new system and allowing its new users to concentrate on its possibilities rather than on its grammatical form. Considering icmake's specific function, we have incorporated a lot of familiar constructs from C into icmake: most C operators were implemented in icmake, as were some of the standard C runtime functions. In this respect icmake's grammar is a subset of the C programming language. However, we have taken the liberty of defining two datatypes not normally found in C. There is a datatype `string' (yes, its variables contain strings) and a datatype `list', containing lists of strings. We believe these extensions to the C programming language are so minor that just this paragraph would probably suffice for their definition. However, they will be described in somewhat greater detail in the following sections. Also, some elements of C++ are found in icmake's grammar: some icmake-functions have been overloaded; they do different but comparable tasks depending on the types of arguments they are called with. Again, we believe this to be a minor departure from the `pure C' grammar, and think this practice is very much in line with C++'s philosophy. One of the tasks of the preprocessor is to strip the makefile of comment. Icmake recognizes two types of comment: standard C-like comment and end-of-line comment, which is also recognized by the Gnu C compiler and by Microsoft's C compiler. Standard comment must be preceded by /* and must be closed by */. 
This type of comment may stretch over more than one line. End-of-line comment is preceded by // and ends when a new line starts. Lines which start with #! are skipped by the preprocessor. This feature is included to allow the use of executable makefiles. Apart from the #! directive, icmake recognizes two more preprocessor directives: #include and #define. All preprocessor directives start with a `#'-character which must be located at the first column of a line in the makefile. The #include directive must obey the following syntax:

    #include "filename"
    #include <filename>

When the preprocessor icm-pp encounters this directive, `filename' is read. The filename may include a path specification. When the filename is surrounded by double quotes, icm-pp attempts to access this file exactly as stated. When the filename is enclosed by < and >, icm-pp attempts to access this file relative to the directory pointed to by the environment variable IM. Using the #include directive, large icmake scripts may be modularized, or a set of standard icmake source scripts may be used to realize a particular icmake script. The #define directive is a means of incorporating constants in a makefile. The directive follows this syntax:

    #define identifier redefinition-of-identifier

The defined name (the name of the defined constant) must be an identifier according to the C programming language: the first character must be an underscore or a character of the alphabet; subsequent characters may be underscores or alphanumerics. The redefinition part of the #define directive consists of spaces, numbers, or whatever is appropriate. The preprocessor simply replaces all occurrences of the defined constant following the #define directive by the redefinition part. Note that redefinitions are not further expanded; an already defined name which occurs in the redefinition part is not processed but is left as-is. Also note that icm-pp considers the redefinition part to be all characters found on a line beyond the defined constant. 
This would also include comment, if found on the line. Consequently, it is normally not a good idea to use comment-to-end-of-line on lines containing #define directives.
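As an illustrative aside (this is not icm-pp's actual code), the #define behavior described above — every later occurrence of a defined constant is substituted, in a single pass, with no re-expansion of the redefinition part — can be sketched in a few lines of Python. Real icm-pp matches whole identifiers rather than raw substrings, and this sketch assumes every #define line has a redefinition part:

```python
def preprocess(lines):
    """Sketch of icm-pp's #define handling: single-pass, non-recursive."""
    defines = {}  # defined constant -> redefinition text
    out = []
    for line in lines:
        if line.startswith("#define"):
            # Everything beyond the identifier is the redefinition part.
            _, name, redef = line.split(None, 2)
            defines[name] = redef.rstrip("\n")
            continue
        for name, redef in defines.items():
            # Simplification: plain substring replacement, one pass only,
            # so substituted text is never re-expanded.
            line = line.replace(name, redef)
        out.append(line)
    return out

src = ['#define VERSION 2.0\n',
       'printf("icmake VERSION");\n']
print(preprocess(src))  # ['printf("icmake 2.0");\n']
```

The single replacement pass is the point: as the text notes, a defined name occurring inside another redefinition is left as-is rather than expanded again.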
<urn:uuid:031f7ec1-ba10-4fc6-af49-782fc65e4df5>
3.109375
1,292
Documentation
Software Dev.
31.781229
920
The Java Virtual Machine (JVM) is a name you hear dropped constantly when you’re programming in Java. Contrary to what the name indicates, the Java Virtual Machine can be encountered in connection with other programming languages as well. In general, it’s not necessary to know what the Java Virtual Machine is, or even what it does, to be able to program in Java. On the other hand, familiarizing yourself with the inner workings of a machine does help to increase your understanding and overall insight. This article brushes over the idea of the Java Virtual Machine, what it does for you, and some of the most important pros and cons. Although I’ve tried to keep it simple, and there is definitely more advanced literature on the subject, a rudimentary understanding of Java and programming is expected. The semantics of a programming language are designed to be close to our natural language, while staying concise and easy to interpret for a machine. As you probably know, a programming language is wholly different from machine code, the set of instructions a computer uses to construct and run a program. These we call high-level languages: one or more levels of abstraction above machine code. Before it is able to run, high-level code must first be translated. Many toolchains compile a program in advance for a specific platform, so-called ahead-of-time (AOT) compilation. This makes the program more efficient at runtime, but far less portable across platforms. Java, in contrast, works with an intermediate language called Java bytecode and the Java Virtual Machine. JVM & Java Bytecode When your Java project builds, it translates the source code (contained in *.java source files) to Java bytecode (most often contained in *.class files). This takes your high-level code one step closer to machine code, but not quite there yet. This bytecode is a collection of compact instructions; easier for a machine to interpret, but less readable. 
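The idea of compact, machine-friendly VM instructions is easiest to see in action. Java's own class files need a disassembler such as javap, but as a purely illustrative analogy (this is Python's virtual machine, not the JVM), any bytecode-based language shows the same concept; Python's standard dis module disassembles a function into its bytecode instructions:

```python
import dis

def add(a, b):
    return a + b

# Each instruction name below is one compact VM opcode -- the analogue
# of the bytecode instructions a Java compiler stores in *.class files.
ops = [ins.opname for ins in dis.Bytecode(add)]
print(ops)  # exact instruction names vary by interpreter version
```

On CPython 3.12 this prints something like ['RESUME', 'LOAD_FAST', 'LOAD_FAST', 'BINARY_OP', 'RETURN_VALUE']: a handful of terse instructions standing in for the readable source, which is exactly the "one step closer to machine code, but not quite there yet" stage described above.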
When you run a Java application on your computer, cellphone, or any other Java-enabled platform, you essentially pass this Java bytecode to the Java Virtual Machine. The Java Virtual Machine interprets the bytecode and compiles frequently executed portions to native machine code at runtime, following the principles of so-called just-in-time (JIT) compilation. This makes for the typical, albeit often slight, delay when opening a Java application, but generally enhances program performance compared to pure interpretation. The main advantage of this system is the increased compatibility. Since your applications run in a virtual machine instead of directly on your hardware, the developer can program and build their application once, and it can then be executed on every device with an implementation of the Java Virtual Machine. This principle has given birth to the Java slogan: “Write once, run anywhere.” Pro: Compatibility & Increased Security Apart from code compatibility, the Java Virtual Machine comes with other benefits. One of the most important of these is the relative security of Java programs as a result of the Java Virtual Machine. Security, meaning that a program running in a virtual machine is far less likely to disrupt the user’s operating system, or corrupt data files, if errors occur. Con: Different JVM Implementations & Debugging One of the main criticisms voiced against the code compatibility and the Java Virtual Machine is due to the many different implementations of the latter. You see, the Java Virtual Machine is not one piece of software. Oracle, the owners of Java, have their implementation of the Java Virtual Machine, but other people can make theirs if it satisfies various practical and contractual claims. These different implementations mean that your code may run smoothly on one Java Virtual Machine, but crash and burn on another. 
Although, in practice, you can write your code once and run it everywhere, more complex code sometimes still has to be debugged in different Java Virtual Machine implementations to ensure correct operation. Do you have any experience of working with the Java Virtual Machine? If so, is there anything I missed out here that should be mentioned?
<urn:uuid:60d68a8a-bf71-4bc8-b0b4-1b91b8178359>
3.5625
817
Personal Blog
Software Dev.
29.171088
921
When you have 400 earthquakes on top of one of the largest supervolcanoes on Earth, people pay attention. And since the day after Christmas, that's what has happened at Yellowstone National Park. Scientists are seeing what they call a "swarm" of low intensity earthquakes -- the largest since the 1980s. The biggest quake had a magnitude of 3.9, below the level that can cause damage. But the earthquakes have made worldwide news because the park lies on a giant caldera, the crater of a volcano that scientists say could one day explode and destroy most of North America and freeze the rest of the world under a shroud of ash for up to two years. Still, the latest earthquakes are nothing to fear, said park geologist Hank Heasler. Read the full story at idahostatesman.com.
<urn:uuid:47cbb496-cca8-47cb-8acd-5f6b47f47afd>
2.9375
171
Truncated
Science & Tech.
61.486928
922
Auroras Invade the US
Earth's magnetic field is still reverberating from a CME strike on March 10, 2011, which resulted in a G1-class geomagnetic storm. Northern Lights have been rippling over the US-Canadian border into states such as Wisconsin, Minnesota, and Michigan. Solar wind conditions favor more geomagnetic storming in the hours ahead. Sky watchers, including those in the continental United States, should remain alert for auroras.
03.10.11 - Another X-Class Solar Flare and a CME
March 9th ended with a powerful solar flare. Earth-orbiting satellites detected an X1.5-class explosion from behemoth sunspot 1166 around 2323 UT. A movie from NASA's Solar Dynamics Observatory (above) shows a bright flash of UV radiation plus some material being hurled away from the blast site. Coronagraph data from the Solar and Heliospheric Observatory show no bright coronal mass ejection (CME) emerging from this eruption. Some material was surely hurled in our direction, but probably not enough for significant Earth-effects. Updates will be provided as more information becomes available. In addition, on March 10, 2011, around 0630 UT, a CME did strike a glancing blow to Earth's magnetic field. This was a result of an M3 flare that occurred late on March 7, 2011. At 2,200 km/sec, this was the fastest CME since September 2005. Below is an impact image provided by a sky watcher in Canada. Visit www.spaceweather.com for links to more great aurora imagery.
[Image: This aurora image was taken just west of Edmonton, Alberta, Canada by a sky watcher. Credit: Zoltan Kenwell]
Solar activity will continue to increase as the solar cycle progresses toward solar maximum, expected in the 2013 time frame. What is a solar flare and what does X-class mean? A solar flare is an intense burst of radiation coming from the release of magnetic energy associated with sunspots. Flares are our solar system’s largest explosive events. They are seen as bright areas on the sun and they can last from minutes to hours. We typically see a solar flare by the photons (or light) it releases, at almost every wavelength of the spectrum. The primary ways we monitor flares are in x-rays and optical light. Flares are also sites where particles (electrons, protons, and heavier particles) are accelerated. Scientists classify solar flares according to their brightness in the x-ray wavelengths. There are three categories: C, M, X, with each successive class representing approximately 10x more power. The number following the letter indicates another factor applied to the basic classification scheme, from 1 to 9. At the high end, the X class can go higher than 9 because there is no higher letter classification. What is a coronal mass ejection (CME)? The outer solar atmosphere, the corona, is structured by strong magnetic fields. Where these fields are closed, often above sunspot groups, the confined solar atmosphere can suddenly and violently release bubbles of gas and magnetic fields called coronal mass ejections. A large CME can contain a billion tons of matter that can be accelerated to several million miles per hour in a spectacular explosion. Solar material streams out through the interplanetary medium, impacting any planet or spacecraft in its path. CMEs are sometimes associated with flares but can occur independently. Tony Phillips/Holly Zell NASA's Goddard Space Flight Center
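The letter-and-number scheme described above maps directly onto peak X-ray flux. As a hedged sketch (the flux values below are the standard GOES convention of 10^-6, 10^-5 and 10^-4 W/m^2 for C, M and X flares, not figures stated in the article; the full GOES scheme also includes the weaker A and B classes):

```python
# GOES soft X-ray flare classes: each letter marks a decade of peak flux
# in W/m^2. The article discusses C, M and X; A and B are the weaker
# classes in the same scheme.
CLASS_FLUX = {"A": 1e-8, "B": 1e-7, "C": 1e-6, "M": 1e-5, "X": 1e-4}

def peak_flux(flare_class):
    """Convert a flare class string like 'X1.5' to peak flux in W/m^2."""
    letter, multiplier = flare_class[0].upper(), float(flare_class[1:])
    return CLASS_FLUX[letter] * multiplier

# The March 9 event, X1.5, peaks near 1.5e-4 W/m^2 -- ten times the
# power of an M1.5 flare, matching the "10x per class" rule above.
print(peak_flux("X1.5"))
print(peak_flux("X1.5") / peak_flux("M1.5"))
```

Under this convention the trailing number is simply a linear multiplier within the class, which is also why an "X10" or higher makes sense: there is no letter above X, so the multiplier just keeps growing.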
<urn:uuid:4d7eeb62-d140-4302-86f2-fa20f353c81d>
3.0625
820
News (Org.)
Science & Tech.
49.752099
923
WHILE telling the world that they have stopped producing plutonium for nuclear weapons, Britain and the US are planning to carry on making tritium for H-bombs. The US government is proposing to bring a major new tritium production plant into operation by 2010, while British Nuclear Fuels (BNFL) is continuing to manufacture tritium for Trident missiles at the Chapelcross nuclear plant in Scotland. During the 178-nation conference on the Nuclear Non-Proliferation Treaty (NPT), which finished in New York last week, the US and Britain were criticised by non-nuclear weapons states for failing to make enough progress towards nuclear disarmament. In response, both the US Vice-President Al Gore and the British Foreign Secretary Douglas Hurd stressed that they had stopped producing plutonium for weapons. But they failed to explain that this was because they have large plutonium stockpiles. Nor did they mention their plans for tritium production. Tritium is a naturally ...
<urn:uuid:adde4be9-d1c0-4354-94b3-87ebcd846c91>
2.796875
225
Truncated
Science & Tech.
46.625558
924
Inside RelativeLayout
by James Elliott, coauthor of Java Swing, 2nd Edition
As promised in my first article, "RelativeLayout: A Constraint-Based Layout Manager," here's a look inside the RelativeLayout package. This article explains how the layout manager works, and discusses how to extend it to support new kinds of constraints. Readers should be familiar with the original article, which introduces RelativeLayout and explains how to use it as a tool. Once you download and expand the source archive, you'll find the following items inside of it (Figure 1 shows everything it will contain once you're ready to build and run).
Figure 1: RelativeLayout Source
This is a build file for the Ant tool from Apache's Jakarta project. It is used to compile and test RelativeLayout. Once you have installed Ant on your system (which you have likely done already, since it has rapidly and deservedly become the build tool of choice for Java projects) you can compile RelativeLayout simply by moving to the top-level source directory and typing ant compile (after you've set up the lib directory as described below). Other interesting build targets you can run include:
ant ex1: runs the first example program discussed in the first article. Similarly, the targets ex2 and ex3 run the second and third examples.
ant doc: builds the JavaDoc for RelativeLayout. You may want to refer to this documentation from time to time as you read the overview of how the classes work, below.
ant dist: builds the distribution file RelativeLayout.jar so you can easily use RelativeLayout with other projects.
ant clean: cleans up any generated files and removes the
These files are used by the XML-based examples in the first article. They contain the layout constraints used by the second and third example programs.
Contains libraries used by RelativeLayout. It's empty when you first download and expand the source archive, because these libraries are available from separate organizations.
In order to compile and use RelativeLayout, you'll need the JDOM library and (if you're using a Java SDK earlier than version 1.4) an XML parser such as Apache Xerces, as discussed in the first article. Once you've downloaded any libraries you need (which you likely did in order to run the examples when reading Part 1), copy their library jars (e.g. xerces.jar) into the lib directory, and RelativeLayout will compile and run properly. I used this file along with a test program while I was developing RelativeLayout. It's not too useful now, unless you want to study and play with that test program. Note that the current configuration of the program (invoked through ant test) and this file are inconsistent and cause an over-constraint error to be reported. If you're into that sort of thing, debugging and fixing the problem could be an interesting exercise. The rest of the source is organized under the src directory, so let's move in there and see what we find. - The files: These are the three example programs discussed in Part 1. This is the test program that works with test.xml as described above. It's no longer of much interest except for software archaeology, in that it provides a little insight into the development of the package. This package overview document is used by JavaDoc to provide introductory information on the starting page. The Java source for RelativeLayout itself is grouped under this directory. To be precise, it's in the nested directory src/com/brunchboy/util/swing/relativelayout, corresponding to the package in which the classes themselves are organized, com.brunchboy.util.swing.relativelayout. The classes that make up RelativeLayout are explained in the next few sections. You'll best understand how everything works if you can examine the source itself while you read the descriptions below, perhaps by printing one or the other. 
The relativelayout directory also contains the file package.html, used by JavaDoc to provide an introductory explanation for the classes in the directory, and constraint-set.dtd, the XML document type definition (described below), used by XmlConstraintBuilder to parse constraint specifications expressed as XML.
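To give a feel for what XmlConstraintBuilder consumes, here is a sketch of the general shape of an XML constraint specification. The element and attribute names below are illustrative assumptions only; the authoritative grammar is the constraint-set.dtd file shipped in this directory.

```xml
<!-- Hypothetical constraint-set fragment; consult constraint-set.dtd for the real grammar. -->
<constraint-set>
  <!-- Anchor the left edge of the "label" component 10 pixels
       from the left edge of the container. -->
  <component id="label">
    <constraint attribute="left" anchor="_container"
                anchorAttribute="left" offset="10"/>
  </component>
</constraint-set>
```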
Heat loss and hydrothermal circulation due to sea-floor spreading.

Abstract (Summary): Lithospheric cooling along the Galapagos Spreading Center at 86°W longitude, as determined by surface heat-flow measurements, appears dominated by hydrothermal circulation. This same phenomenon apparently exists on the Mid-Atlantic Ridge at 36°N and presumably, in some form, on all active oceanic ridges. It is responsible for removing the majority of the heat (> 80%) lost through young (a few m.y. old) oceanic crust. This component of heat has been ignored in previous calculations of the total rate of heat loss by the Earth. A theoretical expression is used to estimate the heat released by sea-floor spreading, since current technology does not provide any means for direct measurement. The revised value of 10.2 x 10^12 cal/sec (±15%) represents a 32% increase over previous estimates. More than 20% of this heat apparently escapes through hydrothermal vents near sea-floor spreading centers. The previously accepted equality of oceanic and continental heat flux is invalid. The revised analysis indicates the oceanic heat flux is 2.2 x 10^-6 cal/cm^2-sec (HFU) versus 1.5 HFU for the continents. The average for the Earth is then approximately 2.0 HFU. The horizontal wavelength of inferred hydrothermal convection at the Galapagos Spreading Center, in the one dimension measured, is 6 ±1 km. The systematic modulation suggests cellular convection. If the system is dominated by cellular convection, the depth of penetration, based on laboratory modeling experiments, should be 3 to 4 kilometers. The data from the Galapagos Spreading Center and laboratory experiments both suggest that the position of the cells in a cellular convection system can be a strong function of the local topography, the rising limbs of flow being located beneath topographic highs and the descending limbs beneath topographic lows. The addition of topography enhances the heat transfer efficiency of a convection system.

Lateral variation in permeability or the system's bottom boundary condition will also influence the position of cells. Even if the circulation system were strongly influenced by some combination of variations in the strength of the heat source, topography, or discrete zones of high permeability, it would probably still be cellular in nature, and similar deep penetration is indicated. If the Galapagos Spreading Center is typical, there are presumably numerous hydrothermal springs and fissures in each square kilometer of near-ridge sea floor, and sediment thicknesses of at least 50 meters are apparently penetrable to the flow of water. As the sea floor ages, the surface of the hydrothermal system becomes less permeable, and eventually both the surface and the deep system are completely clogged and sealed. The age at which this occurs varies from ridge to ridge, but there is evidence suggesting it may not be complete until the crust is at least 8 m.y. old, and possibly as much as 40-50 m.y. old. Most of the surface is apparently sealed long before hydrothermal circulation stops, although some vents do persist. This behavior of the hydrothermal system has a dramatic effect on conductive heat-flow measurements and is largely responsible for the variations observed in conductive heat flow near active spreading ridges. The results of this study show the difficulties in resolving systematic patterns in the heat-flow distribution on spreading ridges. Numerous, closely-spaced measurements with precise navigation, combined with a relatively uniform sediment cover, appear to be necessary ingredients for recognition of the heat-flow pattern near active sea-floor spreading centers.

Thesis Supervisor: Dr. Richard P. Von Herzen
Title: Senior Scientist
School Location: USA - Massachusetts
Source Type: Master's Thesis
Date of Publication:
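The quoted global mean can be checked with a simple area-weighted average of the two fluxes. The ocean and continent area fractions used below (roughly 60% and 40% of the Earth's surface) are assumptions for illustration, not values taken from the thesis:

```latex
\bar{q} \;\approx\; f_{\mathrm{oc}}\, q_{\mathrm{oc}} + f_{\mathrm{cont}}\, q_{\mathrm{cont}}
\;=\; (0.6)(2.2\ \mathrm{HFU}) + (0.4)(1.5\ \mathrm{HFU})
\;=\; 1.92\ \mathrm{HFU} \;\approx\; 2.0\ \mathrm{HFU}
```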
Spinach and Silicon for Solar Power
Category: Science & Technology Posted: September 4, 2012 05:31PM

When thinking about solar power, some may think of it as a new technology developed only in recent times as a clean energy source. The truth, though, is that solar power has been in development for millions and even billions of years. Nature has been harnessing the power of sunlight for longer than man has existed, and researchers know that and are trying to take advantage of it. Photosystem 1 (PS1) is a protein involved in photosynthesis that can convert sunlight to electrical energy with almost 100% efficiency; almost three times higher than the maximum efficiency possible with typical semiconductor solar cells. This protein can also continue to function after it has been harvested from plants like spinach, but not always for long, and it is difficult to integrate with our technology. Researchers at Vanderbilt University overcame this, though, by doping the silicon substrate PS1 was placed on. This prevents the protein from pulling electrons from the silicon to fill holes formed by the usable electric current. The biohybrid solar cell the researchers created was able to generate 850 microamps per square centimeter at 0.3 V. That may not sound like much, but it represents nearly two and a half times better performance than previous biohybrid solar cells. With further work the researchers are confident they can improve the cells' performance even further and give new meaning to the phrase 'green energy.'
Murder At The Cellular Level
Mortal Chemical Combat Typifies the World of Bacteria

ScienceDaily (Nov. 18, 2010) — Like all organisms, bacteria must compete for resources to survive, even if it means a fight to the death. New research led by scientists from the University of North Carolina at Chapel Hill School of Medicine and the University of California, Santa Barbara, describes new complexities in the close chemical combat waged among bacteria. And the findings from this microscopic war zone may have implications for human health and survival. "It has been known for a long time that bacteria can produce toxins that they release into their surroundings that can kill other bacteria, sort of like throwing hand grenades at enemies," said Peggy A. Cotter, PhD, associate professor in the microbiology and immunology department at UNC. "Our data suggests that the situation is far more complex than we thought." Cotter points out that it was in David A. Low's lab at U.C. Santa Barbara that the discovery was made that bacteria can also produce proteins on their surface that inhibit the growth and end the life of other bacteria upon contact. "So it appears that some bacteria participate in 'man to man' (or 'bacteria to bacteria') combat using poison-tipped swords," Cotter said. "What we have discovered is that each bacterium can have a different poison at the tip of their sword. For each poison, there is a specific protective (immunity) protein that the bacteria also make so that they don't kill themselves and are not killed by other members of their same 'family'." The new research by senior co-authors Cotter and Low and others appears online November 18, 2010, in the journal Nature. As to "swords," the metaphor lives close to reality.
Bacteria use proteins to interact with a host; these include disease-causing bacteria such as Bordetella pertussis, the cause of whooping cough, and Burkholderia pseudomallei, found in soil throughout Southeast Asia and the cause of a frequently fatal tropical disease. In these and other gram-negative bacteria, large proteins appear as rods on the surface of cells. "In the soil or in humans, different bacteria bump into each other all the time and bump into their own 'family,' too. They have to touch each other and recognize each other, and then one can inhibit the growth of the other, non-family, bacteria," Cotter said. According to the UNC scientist, this system may represent a primitive form of kin selection, whereby organisms kill organisms that are genetically different but not those that are closely related. "As an additional twist, we have found that some bacteria can have two or three (or possibly more) systems. Our data suggest that these bacteria will be protected from killing by bacteria that produce any of three types of poison swords and they will be able to kill other bacteria that lack at least one of those types of immunity proteins." Moreover, there's evidence here that these bacteria acquire these additional systems by horizontal gene transfer from other bacteria. "In other words, it seems that they may be able to kill their enemy and then steal the poison-tipped sword and protective (immunity) protein from the dead enemy, increasing their own repertoire of weapons." By teasing out the genetics of these bacterial close-combat mysteries, it may someday be possible to "engineer an organism, a non-pathogenic variant, and by putting it out in the environment, such as soil, you can potentially get rid of other pathogens," Cotter said. "Or you could decontaminate an area, if the new knowledge is applied to biodefense."
Monkeys Understand Basic Counting Skills A team of researchers studying Old World monkeys have found that the primates have better numerical skills than previously believed, BBC News reports. They found, using a basic numeracy test, that long-tail macaques were able to determine which of two plates had more raisins. However, in strange fashion, the macaques only excelled in the basic test if they were not allowed to eat the raisins used in the experiment. The results of the experiments show that the animals have the ability to understand the concept of relative quantities. The researchers, from the German Primate Center in Goettingen, Germany, first tested the macaques by showing them two different amounts of raisins. The primates were then fed the raisins that they pointed to. But the researchers noted that in this test, the monkeys usually got it wrong — choosing the smaller pile of raisins. Vanessa Schmidt, lead researcher on the study, said that instead of thinking about the quantities, the monkeys were thinking more about how much they wanted to eat the raisins. “This impulsiveness impaired their judgment,” Schmidt told BBC News. “But when we repeated the test, this time showing them two plates of inedible objects – pebbles – they did much better.” To find out if the monkeys could actually distinguish quantities, the team decided to try another experiment. “We wanted to know if they could simultaneously maintain two mental representations of the food items, first as choice, and second as food reward,” said Schmidt. In the new experiment, which was a little more complex than the original, the macaques were shown plates of raisins, but the reward for pointing to the correct plate was to be fed raisins that were actually hidden underneath. “They perform as well in this task as they do when choosing the pebbles,” said Schmidt. 
“This seems to show that they see the raisins as signifiers – representations of the food rewards they’re going to receive.” Professor Julia Fischer, the study’s co-researcher, said that young children display the same difficulty in suppressing their impulses. “There’s a well-known experiment called the reverse reward paradigm,” she said. “You have two heaps of candies – one big, and one small. The child obviously points at the big heap – which is then given to another child, while the [first] child itself gets the small heap,” Fischer explained. “Young children have trouble comprehending that they should point at the small heap to get the big one, but if you replace the candies with numerals or other symbols, they can do it,” she added. Other studies of primates in the past that have used food to test numeracy skills may have had inconclusive results because of this effect, and therefore did not capture the real abilities of these animals. The study is published in the journal Nature Communications.
- Assignment operator in Java: This tutorial will help you understand the assignment operator in Java.
- Conditional operator in Java: Conditional operators return either a true or false value based on an expression.
- Java Set example: How to use the Java Set interface, with an example that displays the contents of a set collection.
- Converting Boolean to String: In this tutorial we convert a Boolean to a String.
- Serialization in Java: Serialization means writing the state of an object to a stream. This section shows how to serialize and deserialize an object.
- Iterator in Java: Iterator is an interface in Java that helps you traverse the elements of a collection.
- Java array declaration: How to declare an array in Java.
- Creating multiple threads: A step-by-step explanation of how to create multiple threads in a Java program.
- The JDK directory structure: Explains the directory structure of the JDK.
- Compiling and interpreting applications in Java: Learn how to compile and interpret your Java application.
- How to sort an ArrayList in Java: Demonstrates how to sort a Java ArrayList.
- String intern(): The intern() method returns the canonical representation of a string object.
- First Java program: A video tutorial for creating your first Java program.
- Matrix addition in Java: How to find the sum of two matrices.
- Fibonacci series in Java: The Fibonacci number program in Java.
- Java error "cannot find symbol": Occurs when the compiler does not have enough information about the code it is trying to compile.
- Add two numbers in Java: Explains how to add two integers.
- Switch case in Java: The switch statement is a control statement that allows multiple selection by passing control to one of the case statements in its body.
- Instance variables in Java: Variables declared in a class but outside any method or constructor.
- Type casting in Java: Converting one type into another; for example, converting the string representation of a number into an int. This tutorial explains type casting with an example program.
- Java count vowels: A program that counts the number of vowels in a String.
- NumberFormatException: A RuntimeException generated when a programmer tries to convert an invalid String into an integer.
- Queue in Java: Queue is an interface in the java.util package.
- Java tutorial for beginners: Complete information, syntax, and example programs for beginners, covering how to write and compile Java programs and how to install and configure Java.
- How to get Java: A video tutorial on getting the Java Development Kit for Windows and installing it.
- Java video tutorial - What is Java?: Introduces the Java programming language, used for developing desktop, web, mobile, and embedded-device applications.
- Java programming video tutorials for beginners: Step-by-step video tutorials that explain and demonstrate programming with simple examples.
- Searching an array: How to check whether an element is present in an array.
- Continue statement in Java: continue is a branching statement used in most programming languages, such as C, C++, and Java.
- Finally in Java: A finally block always executes when the try block exits; it is a block of code that runs after try/catch.
- Transient keyword in Java: The transient keyword prevents a variable from being serialized.
- for loop in Java: The for loop is a loop control statement: first initialize the variable, then check the condition; if it is true, the body executes, and when it becomes false, the loop terminates.
- JComboBox insert edited value into table: How to make a JComboBox editable and insert the new edited value into a table.
- How to create internal frames in Java: How to create a frame within a frame.
- TreeSet: TreeSet implements the Set interface and stores elements in sorted order; the example stores date strings and Integer values.
- Comparing two dates in Java: How to compare two dates.
- Prime number program in Java: How to write a program that generates and checks prime numbers.
- Exception handling in Java: Errors during program execution are handled by generating exceptions with try and catch blocks.
- Factorial program: How to write a program that computes the factorial of any given number.
- Final methods in Java: About the final keyword applied to methods.
- BufferedReader in Java: Java provides the java.io package for reading files; BufferedReader is the class java.io.BufferedReader, shown with an example.
- Converting an Object to a String: Sometimes necessary because you need to pass the value to a method that accepts only a String.
- Daemon threads: Any thread can be a daemon thread.
- Dynamic method dispatch: The process of selecting which method to call at run time; a call to an overridden method is resolved at run time rather than at compile time.
- Converting a String to an Integer: A basic task in Java, since these two types are widely used.
- Synchronization in Java, with an example.
- JTable display data from a MySQL database: Shows how to display the data of a database table in a JTable, including how to create a table in Swing, add column headers, and show data in the table.
- String replaceAll() in Java: Replaces each matching substring with the given replacement and returns the resulting string.
- Split in Java: split is used to split a string according to a given pattern.
- Converting a String into a Date: SimpleDateFormat is a concrete class for formatting dates, in the java.text package, which can convert a string into a Date.
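As a minimal sketch of that last entry (the date pattern and input string below are arbitrary examples, not from a specific tutorial):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class StringToDate {
    public static void main(String[] args) throws ParseException {
        // SimpleDateFormat lives in java.text; the pattern must match the input string.
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd");
        Date date = format.parse("2012-09-04");
        // Formatting the parsed Date back confirms the round trip.
        System.out.println(format.format(date)); // prints 2012-09-04
    }
}
```

parse() throws a checked ParseException when the string does not match the pattern, which is why the signature declares it.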
1. Fifty years of manned spaceflight. April marked the 50th anniversary of the first manned spaceflight by Yuri Gagarin aboard Vostok 1 in 1961. Russia continued regular launches, with its Soyuz becoming the only way to carry astronauts to the International Space Station (ISS), and made its first unmanned launches from Europe's spaceport in French Guiana. But the crash of a Progress cargo ship in August led to questions about relying too much on Russia. In February, the European Space Agency (ESA) successfully launched its second unmanned Automated Transfer Vehicle (ATV), Johannes Kepler, to the ISS.

2. End of the road for the Space Shuttle. Atlantis touched down in July, marking the final spaceflight and the end of the Space Shuttle programme. It also meant that construction of the ISS was essentially complete. Meanwhile, as NASA's own manned space launches were temporarily abandoned, China's began to accelerate, including the launch of its first space station module, Tiangong-1, in September.

3. Big strides by commercial space companies. The growing interest in the US and beyond in turning space exploration over to private enterprise got a boost in April when NASA awarded $269 million to companies including SpaceX, Sierra Nevada, Boeing and Blue Origin. Meanwhile, seven years after the first successful suborbital flights by the prototype SpaceShipOne, Virgin Galactic is steadily preparing to carry its first paying tourists to the edge of space.

4. New missions to deep space. NASA continued to pioneer exploration of the Solar System. Probes were launched to Jupiter in August (Juno) and to Mars in November (Mars Science Laboratory, or Curiosity). In March, its Messenger probe went into orbit around Mercury, and in July, Dawn began circling the asteroid Vesta. Twin Grail probes to investigate the interior of the Moon are arriving this weekend following a September launch.
Russia's bid to fly to Mars failed when its Phobos-Grunt craft became stranded in Earth orbit.

5. Advances in astronomy. The number of planets discovered around other stars climbed above 700 as the year drew to a close, including the first two Earth-sized worlds discovered by NASA's Kepler space mission and others found in the so-called habitable zones of their host stars. Another boost for astronomy came with fresh support for the successor to Hubble, the James Webb Space Telescope, after a battle in Congress over its budget. In other astronomical news, the Sun roared back into activity with many sunspots and eruptions, and the closest flypast by a giant asteroid ever witnessed was observed in November, when 2005 YU55 came well inside the orbit of the Moon.
Interpreting the CHARM Page:

For a detailed description of the CHARM algorithm, please see the methods paper: Krista and Gallagher (2009).

The EIT 195 Å disk images are shown as cylindrical Lambert equal-area projection maps. The projection is limited to 80 degrees due to limb-extrapolation effects. The white corners shift over the year due to the change in the B angle, for which the projection is corrected. After the detection of low intensity regions in the EIT 195 Å images, corresponding MDI magnetograms are used to determine the flux imbalance in the detected regions. Depending on the flux imbalance, low intensity regions are classed as coronal holes (CHs). This is based on the knowledge that CHs are dominated by a single polarity. Please note that the error in the MDI magnetic field measurements increases considerably towards the solar limb, and hence the flux imbalance might not be detected in certain polar CHs. For this reason polar holes are occasionally unidentified in the observations. We are currently working on a reliable resolution to this issue. The identified CHs are grouped based on neighbouring distances. Members of a CH group appear contoured with the same colour and numbered with the same group number.

- Group ID: the overall CH group number.
- Location: the location of the CH group centroid (or geometric center).
- E/W-most points: the east-most and west-most points of a CH group boundary.
- Area: overall area of a CH group in Mm2.
- Bz: the average magnetic field of a CH group in Gauss units.
- Phi: the average magnetic flux of a CH group in Maxwell units.

If you would like to use CHARM meta data for any publications, please contact the author: Larisza D. Krista, NOAA/SWPC, University of Colorado, and cite the methods paper: Krista and Gallagher, 2009, Solar Physics, 256, 87-100
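The flux-imbalance test at the heart of the classification can be sketched as below. This is an illustrative reconstruction only: the dominance threshold is an assumed placeholder, not a value from the methods paper, and the real pipeline works on magnetogram pixels rather than a plain array.

```java
// Sketch of a single-polarity dominance test for a candidate coronal hole.
public class FluxImbalance {
    // Fraction by which one polarity must dominate for a CH classification.
    // This threshold is an assumption for illustration, not CHARM's actual value.
    static final double THRESHOLD = 0.6;

    // bz holds the line-of-sight field values (Gauss) of the candidate region's pixels.
    static boolean isCoronalHole(double[] bz) {
        double pos = 0, neg = 0;
        for (double b : bz) {
            if (b > 0) pos += b; else neg += -b;
        }
        double total = pos + neg;
        if (total == 0) return false;
        // A region dominated by one polarity has a large dominant fraction.
        double dominant = Math.max(pos, neg) / total;
        return dominant >= THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(isCoronalHole(new double[]{2.0, 1.5, -0.2})); // prints true
        System.out.println(isCoronalHole(new double[]{1.0, -1.0, 0.5, -0.5})); // prints false
    }
}
```

A balanced region (equal positive and negative flux) stays unclassified, matching the idea that true coronal holes are dominated by a single polarity.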
PASADENA, Calif. -- Light-colored mounds of a mineral deposited on a volcanic cone more than three billion years ago may preserve evidence of one of the most recent habitable microenvironments on Mars. Observations by NASA's Mars Reconnaissance Orbiter enabled researchers to identify the mineral as hydrated silica and to see its volcanic context. The mounds' composition and their location on the flanks of a volcanic cone provide the best evidence yet found on Mars for an intact deposit from a hydrothermal environment -- a steam fumarole, or hot spring. Such environments may have provided habitats for some of Earth's earliest life forms. "The heat and water required to create this deposit probably made this a habitable zone," said J.R. Skok of Brown University, Providence, R.I., lead author of a paper about these findings published online today by Nature Geoscience. "If life did exist there, this would be a promising type of deposit to entomb evidence of it -- a microbial mortuary." No studies have yet determined whether Mars has ever supported life. The new results add to accumulating evidence that, at some times and in some places, Mars has had favorable environments for microbial life. This specific place would have been habitable when most of Mars was already dry and cold. Concentrations of hydrated silica have been identified on Mars previously, including a nearly pure patch found by NASA's Mars Exploration Rover Spirit in 2007. However, none of those earlier findings were in such an intact setting as this one, and the setting adds evidence about the origin. Skok said, "You have spectacular context for this deposit. It's right on the flank of a volcano. The setting remains essentially the same as it was when the silica was deposited." The small cone rises about 100 meters (100 yards) from the floor of a shallow bowl named Nili Patera. 
The patera, which is the floor of a volcanic caldera, spans about 50 kilometers (30 miles) in the Syrtis Major volcanic region of equatorial Mars. Before the cone formed, free-flowing lava blanketed nearby plains. The collapse of an underground magma chamber from which lava had emanated created the bowl. Subsequent lava flows, still with a runny texture, coated the floor of Nili Patera. The cone grew from even later flows, apparently after evolution of the underground magma had thickened its texture so that the erupted lava would mound up. "We can read a series of chapters in this history book and know that the cone grew from the last gasp of a giant volcanic system," said John Mustard, Skok's thesis advisor at Brown and a co-author of the paper. "The cooling and solidification of most of the magma concentrated its silica and water content." Observations by cameras on the Mars Reconnaissance Orbiter revealed patches of bright deposits near the summit of the cone, fanning down its flank, and on flatter ground in the vicinity. The Brown researchers partnered with Scott Murchie of Johns Hopkins University Applied Physics Laboratory, Laurel, Md., to analyze the bright exposures with the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) instrument on the orbiter. Silica can be dissolved, transported and concentrated by hot water or steam. Hydrated silica identified by the spectrometer in uphill locations -- confirmed by stereo imaging -- indicates that hot springs or fumaroles fed by underground heating created these deposits. Silica deposits around hydrothermal vents in Iceland are among the best parallels on Earth. Murchie said, "The habitable zone would have been within and alongside the conduits carrying the heated water." 
The volcanic activity that built the cone in Nili Patera appears to have happened more recently than the 3.7-billion-year or greater age of Mars' potentially habitable early wet environments recorded in clay minerals identified from orbit. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, manages the Mars Reconnaissance Orbiter for NASA. Johns Hopkins University Applied Physics Laboratory provided and operates CRISM, one of six instruments on the orbiter.
Physics 405/406: Introduction to Astronomy

Welcome to "Introduction to Astronomy"! Course runs Mo, We, Fr, 2-3 pm, in the DeMeritt Hall Lecture Room, DeM. Take a look at the sky yourself! This is part of what astronomy is about: taking in the wonders of the night sky. Prof. Möbius is teaching in the Fall.

Some recent events:
- Mars Rover Curiosity landed safely on Mars on August 5, 2012!
- IBEX launched on Sunday, October 19, 2008. The satellite and sensors (partially built at UNH) are working great. In summer 2009 we displayed the first sky map taken with neutral atoms. IBEX has caught the interstellar wind through our Solar System; see the UNH Press Release from January 2012. As a consequence of IBEX results, there is no Bow Shock in front of the Heliosphere.

Check your Class and Assignment Schedule! Reading is assigned for each class (as for Fall semester)!

If you are interested in further discussions on Cosmology and Beyond, join the class "Cosmology and Our View of the World", INCO 796, always taught during the Spring Semester, coming year again with Prof. E. Möbius (Physics), Prof. T. Davis (Genetics), & Prof. W. DeVries.

Here is a list of the music pieces played during the walk-in period at the beginning of class. Here is the article by John Gianforte, our local amateur astronomer expert, on Galileo Galilei that I pointed to in class. Important class material and your Grade Updates can be found on Blackboard.

- Jupiter lost one of its major cloud belts in 2010.
- Last year, 2009, was the International Year of Astronomy (IYA). 400 years after Galileo used a telescope for sky observations for the very first time, we celebrated advances in Astronomy.
- An unexpected flare-up of a normally inconspicuous comet occurred in October 2007. Comet 17P/Holmes became prominently visible in the constellation Perseus for a few weeks. By the way, the "P" stands for periodic. This comet is on a known orbit about the Sun at distances between 3 and 5 AU. See also the gallery for comet 17P/Holmes.
We already have had a few really nice comets over the past 5 years. Here are pictures of comet Ikeya-Zhang from last year, and here you will find information on 1998.
- Watch a science-fiction-like "eclipse"! To calibrate the UV camera on STEREO, a transit of the Moon in front of the Sun was used to provide cover.
- In January 2007 we enjoyed the brightest comet in about 30 years. Comet McNaught passed the Sun so closely that its activity was magnificent, producing a spectacular tail. Enjoy the McNaught photo gallery on the web.
- June 8, 2004 was the day of the Venus Transit in front of the Sun's disk. Such events were used in the past (1874 and 1882) to determine distances in the solar system. See information by the European Southern Observatory.
- The Leonid Meteor Shower was strong over a few years at the end of the previous millennium. This only happens once every 33 years (potentially for a few years in a row), around the time when comet Tempel-Tuttle comes to its closest approach to the Sun. This happened in 1998. On November 17, 1999, the Leonids produced a decent show, to the delight of some nightly onlookers (with good weather). The shower was also considered potentially dangerous for the fleet of satellites and spacecraft out there. However, the satellites were spared. The last two years we enjoyed a relatively good showing on the east coast of the US, but this year is likely to be more spectacular. You can get the latest updates on the shower on the NASA Leonid website. The European Space Agency (ESA) is running a special Leonid observation program down under. We have issued Press Releases on observations of the Leonids in this area. The Leonids are a good target of opportunity every year. However, spectacular showings are not regularly expected until about 2033. Stay tuned!
- Weren't able to get to Europe for the August 11 eclipse in 1999? Find pictures and movies here.
- Auroral activity may be seen even in New Hampshire while the Sun is still relatively active. Find information on this so-called "Space Weather" on a special website or directly from the NOAA Space Environment Center.
- More and more Near Earth Objects (NEOs), asteroids that can come close to Earth, are being found. A recently tracked one may have a chance of hitting Earth in about 900 years. See how this information is garnered and what could be done, if the threat is confirmed.
- Check this site regularly for the Astronomy Picture of the Day, home of some of the most gorgeous images of the sky!

Check out the collected News Items from the Hubble Space Telescope!

Current Events in Spaceflight:
- The Interstellar Boundary Explorer (IBEX) was successfully launched in October 2008. IBEX has now taken the first global images of the boundary of our heliosphere with the neighboring interstellar medium, using neutral atom cameras. You can sign up for monthly updates via E-Mail on the IBEX website. A link with multimedia material on the IBEX Mission is available at the Southwest Research Institute. A lot of cool stuff on IBEX is available through the
- During the month of September 2009 the MESSENGER spacecraft flew past Mercury for the fourth time. Watch the flyby through a visualization or follow the podcast.
- On February 7, 2007, the Ulysses probe passed one more time over the South Pole of the Sun, thus getting a unique view in the Heliosphere.
- First evidence for lakes found outside Earth! Cassini/Huygens found evidence for lakes on Saturn's moon Titan. They most likely consist of liquid methane or ethane.
- The year 2007/8 is the International Heliophysical Year (IHY).
50 years after the International Geophysical Year (IGY) in 1957/8, when we "stuck our head above the Earth's atmosphere" for the first time at the dawn of the space age, we are now "sticking our head out of the Heliosphere", with spacecraft at the outskirts of the Solar System and the Interstellar Boundary Explorer (IBEX), to be launched on July 12, 2008. As pointed out above, Ulysses is charting the regions above the Sun's South Pole right now, and it will pass over the North Pole later this year.
- The NASA Mars Rovers made it successfully to Mars' surface. Follow Opportunity's hunt for signs of flowing water in Mars' past. It has revealed the most compelling evidence yet. The European Mars probe Mars Express reached Mars at the end of 2003. Touchdown of the lander Beagle-2 apparently was not successful.
- The "Stardust" spacecraft has flown through the dust cloud of a comet and will bring comet dust samples back to Earth.
- The Wilkinson Microwave Anisotropy Probe (WMAP) is providing the most detailed pictures of the "Baby Universe" thus far. Learn about this journey to the beginning of our universe!
- After the terrible tragedy on Saturday, Feb. 1, 2003, NASA is investigating the root cause of the catastrophic failure. They keep the public informed on these actions and provide extensive material about the shuttle mission on a special website. Follow also another view on space.com.

Astronomy Education Resources: If you have trouble understanding Astronomy the way it is taught here or in the book, check out the websites from other Astronomy courses listed here.

The Cosmic and Heliospheric Learning Center, brought to you by the people at ACE, is designed to increase your interest in cosmic and heliospheric science. (The heliosphere is the HUGE area in space affected by the Sun.) It's an exciting subject to learn about, and science is constantly moving forward in understanding it.
(ACE -- the Advanced Composition Explorer -- is one of the many satellite projects with which UNH has been involved, and promises to answer some of the more exciting questions about the formation of the solar system and our

Touching the Limits of Science: One reason you are probably studying astronomy is that you are interested in the philosophy behind science and are asking yourself where everything comes from. We will get to part of the story but, as I make the point over and over, this is an endless enterprise. If you want to know more about this, you can either join us (Prof. Thomas M. Davis (Genetics), Prof. Willem DeVries (Philosophy) and Prof. Eberhard Möbius (Physics)) in the seminar "Limits of Knowledge: Cosmology and the View of our World" and/or you may start by browsing the website for the seminar.
Harvard Physicist Sets Record Straight on Internet Carbon Study A Harvard researcher spent much of Monday setting the record straight about his research and how it relates to Google's energy consumption. A Sunday Times of London story reported that conducting two Google searches generates as much carbon dioxide as boiling water, though the researcher denies singling out Google. A story in the Sunday Times of London sent Google's public relations machine into an advanced search for answers. The Times reporters wrote about a new Harvard study that examines the energy impact of Web searches. The story's lead paragraph: "Performing two Google searches from a desktop computer can generate about the same amount of carbon dioxide as boiling a kettle for a cup of tea, according to new research." One problem: the study's author, Harvard University physicist Alex Wissner-Gross, says he never mentions Google in the study. "For some reason, in their story on the study, the Times had an ax to grind with Google," Wissner-Gross told TechNewsWorld. "Our work has nothing to do with Google. Our focus was exclusively on the Web overall, and we found that it takes on average about 20 milligrams of CO2 per second to visit a Web site." And the example involving tea kettles? "They did that. I have no idea where they got those statistics," Wissner-Gross said. Was Google Burned by Energy Story? The Times story is giving Google a chance to talk about the company's green initiatives and its efforts to pursue cleaner energy technologies on several fronts, Google spokesperson Jamie Yood said. "This comes from the top, from (cofounders) Larry (Page) and Sergey (Brin), who are really dedicated to this. There's an acknowledgment that Google is using energy and on the business front it makes sense to get this energy cost as low as possible," Yood told TechNewsWorld. "And on the environmental front, they are passionate about climate change and are really involved. 
They recognize that if we're going to use energy, let's try to figure out how to do this as minimally as possible." That includes the use of biodiesel shuttles and electric cars to and from its Mountain View, Calif., campus, offering bikes for employees to ride from building to building on that campus, and using recyclable materials throughout those buildings. And when it comes to its server farms, "we do believe we have the most energy efficient data centers." Google takes exception on its Official Google blog to the statistics quoted in the Times story regarding the energy used to Web search vs. boiling a kettle of water. A speedy search uses less energy, the company claims; about the same amount of energy as the human body uses in about 10 seconds. Google has asked to see a copy of the study, and Wissner-Gross says he is more than happy to send them one. One of the Times article's authors had interviewed a Google engineer "whose job is to look at data centers to make sure they're more energy efficient, and he didn't really use any of his material," Yood said. Google's Side of Things Greenpeace doesn't really focus on the energy efficiencies used by Google or Web companies in general, said spokesperson Daniel Kessler. It is more focused on electronics products, the toxic materials used and company recycling initiatives. However, Google gets high marks for its green efforts in Washington D.C., Kessler said. "I commend Google for its lobbying and the legislative work they're doing when it comes to clean energy," Kessler told TechNewsWorld. "In the whole tech sector, they're really on the forefront on taking action regarding the climate." Google's data centers burn through a lot of energy in the course of providing answers to search queries around the world, and the cheapest form of that energy right now is coal, said Roger Kay, principal at Endpoint Technologies Associates, who keeps a close eye on the environmental policies at IT companies. 
"It's taking that electricity bill they've got and kind of making it a proportion of the total expenditure of the generation of electricity, and then allocating that as a cost to Google and saying that's their responsibility, their piece of it," Kay told TechNewsWorld. "It's just modeling, a modeling exercise that may not necessarily be a reflection of reality." The location of the information needed in a Web search may also play a part, Kay said. "If you're looking for the latest on Brad Pitt, then that's likely to be stored in multiple servers towards the edge of the network, where it will be an easy search. Google through its traffic management knows a lot of people are interested in that. But if you want to read Cicero's works, which haven't been read for a while, you may have to go deep into the network." The Researcher's Take Wissner-Gross, who manages the Web site CO2stats.com to help educate people about energy efficiencies on the Internet, has been inundated with press requests since the Times story was published. The Times quoted him correctly in the story as saying, "A Google search has a definite environmental impact" and "Google operates huge data centers around the world that consume a great deal of power," he confirmed. "I don't think anybody would disagree with those statements," Wissner-Gross said. "Everything online has a definite environmental impact. I think everybody can agree on that, including Google." There's a difference between regular servers and those used in advanced data centers, Wissner-Gross said, and he acknowledges that Google would have a financial interest in maintaining an energy-efficient infrastructure. "Energy consumption may be a higher fraction of infrastructure costs for large companies like Google than the hardware itself." In between answering reporters' e-mails and appearing on CNBC, Wissner-Gross has had a lot of time to think about why the Sunday Times focused on Google in its story. 
"The short answer is, it's a really easy way to sell papers. Google is a very successful company and it's a very easy way to get readership by making grandiose claims about them."
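The competing figures in this dispute are easy to sanity-check with a few lines of arithmetic. In the sketch below, only the 20 mg of CO2 per second of browsing comes from the study as quoted in the article; the kettle and per-search numbers are illustrative assumptions, not measurements from either side:

```python
# Back-of-the-envelope check of the figures in the dispute above.
# Only the 20 mg/s browsing figure is from the study as quoted;
# the kettle and per-search numbers are illustrative assumptions.

BROWSING_CO2_MG_PER_S = 20   # study figure quoted in the article
KETTLE_CO2_G = 15            # assumed CO2 to boil one kettle of water
SEARCH_CO2_G = 0.2           # assumed CO2 per single web search

# How many searches "cost" as much CO2 as boiling one kettle?
searches_per_kettle = KETTLE_CO2_G / SEARCH_CO2_G

# How many seconds of ordinary browsing match one kettle boil?
seconds = KETTLE_CO2_G * 1000 / BROWSING_CO2_MG_PER_S

print(f"searches per kettle boil: {searches_per_kettle:.0f}")
print(f"browsing time per kettle boil: {seconds:.0f} s")
```

With these assumed numbers, two searches emit well under a gram of CO2, far short of a kettle boil, which is why the choice of per-search figure decides who "wins" the comparison.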
Hydrogen fuel cells are an appealing source of clean energy because they have the potential to power anything that uses electricity—from computers and cell phones to cars and ships—without toxic emissions. Thayumanavan, who is an authority on charge transport and molecular design, was recently chosen as the campus’s first Spotlight Scholar in recognition of his research and innovation in clean energy science. Thayumanavan co-directs the Massachusetts Center for Renewable Energy Science and Technology (MassCREST). With colleagues Ryan Hayward, polymer science, and Mark Tuominen, physics, he discovered a new material that improves charge transport—a key energy-generating process for efficient and affordable hydrogen fuel cell design. Using a polymer nanostructure that provides an excellent conduit for transporting protons from one side of a fuel cell membrane to another, they demonstrated how to improve proton conductivity under very low humidity conditions, where fuel cells prefer to operate but where few materials perform well. The discovery could lead to commercial development of fuel cell membranes that stay chemically and mechanically stable much longer than current materials allow. The results are so promising that Thayumanavan received $40,000 from the Massachusetts Clean Energy Center to help demonstrate the technology’s viability. “Our work should lead to a lighter, more efficient and sustainable source of clean power,” says Thayumanavan. Thayumanavan, who came to UMass Amherst in 2003, earned high praise from Spotlight Scholar nominators for his multi-faceted work, noting that his research in molecular design is also relevant to the life sciences. He’s created a nanoscopic gel that can effectively encapsulate and then release drug molecules inside cells.
Such a feature is useful in selectively delivering chemotherapeutic drug molecules to cancer cells. The campus’s technology transfer office and Thayumanavan are pursuing commercial venture opportunities for bringing this technology to clinical trial.
Science & Technology – Wed March 20, 2013
Scientists: 'No Options' To Stop Massive Asteroids On Collision Course
Originally published on Wed March 20, 2013 4:40 pm

Without "a few years" warning, humans currently have no capacity to stop an asteroid on a collision course with the planet, scientists told a Senate panel Wednesday. "Right now we have no options," said former astronaut Ed Lu. "If you don't know where they are, there's nothing you can do." Scientists are calling for continued funding and support for NASA satellites and observation programs that look for "near Earth objects." The scenario from the Hollywood blockbuster Armageddon is on the minds of lawmakers after two hulking rocks exploded in the air over Russia in February. More than 1,000 people were injured, bringing the risks of future incidents — and measures to prevent them — into clearer focus. "I was disappointed that Bruce Willis was not available to be a fifth witness on the panel," joked Ted Cruz, R-Texas, during the hearing. While scientists put the odds of asteroids one kilometer in diameter or larger colliding with the Earth as a "once every few thousand years" event, they said cuts in space funding to monitor and detect space rocks could have devastating consequences. "What [the film Armageddon] did was basically convince the American people that if anything bad happened, people would get in a shuttle and fix it," said Joan Johnson-Freese, a professor at the U.S. Naval War College. "That is myth. That is not reality."
Scientists were simply sharing a grim reality NPR and others have written about in recent weeks — that the rules of physics mean there's almost no way to stop asteroids and debris from hurtling toward earth. It didn't stop the doomsday-scenario questioning from Sen. Bill Nelson, D-Florida: "What would an asteroid that is a kilometer in diameter, what would it do if it hit the earth?" Nelson asked. "That is likely to end human civilization," said Lu, who is now CEO of the B612 Foundation, which aims to hunt devastating asteroids. Decades of lead time is the only way to prevent that level of destruction, said scientists. With decades of advance notice, Lu said, American astronauts currently do have the capacity to destroy or make small changes to the trajectory of flying space objects to keep them from hitting earth. But detection requires investment, they said. "It's important to know what we're up against, and this decade in particular is great for us to do the research necessary that will contribute to potential mitigation concepts," said James Green, NASA's planetary science director. Lu estimates there's a 30 percent chance this century of relatively smaller asteroids hitting a "random location" on the earth to create a five megaton impact. Casualties would depend on the population of the area of impact. If detected early enough, a mission to prevent a hit would cost at least a billion dollars, Lu said. "But ... you'd have to compare that against the losses of a massive, megaton impact."
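Lu's "five megaton impact" figure can be put in context with the standard kinetic-energy estimate E = ½mv². In the sketch below, the density and entry speed are assumed typical values for a stony asteroid, not figures from the hearing:

```python
import math

# Rough impact-energy estimate, E = 1/2 m v^2, for a spherical impactor.
# Density and entry speed are assumed typical values, not from the article.

def impact_energy_megatons(diameter_m, density_kg_m3=3000.0, speed_m_s=20_000.0):
    """Kinetic energy of a spherical impactor, expressed in megatons of TNT."""
    radius = diameter_m / 2
    mass = density_kg_m3 * (4 / 3) * math.pi * radius ** 3
    energy_j = 0.5 * mass * speed_m_s ** 2
    return energy_j / 4.184e15   # 1 megaton TNT = 4.184e15 J

# A few tens of meters already reaches the multi-megaton range Lu describes,
# while a kilometer-class body is tens of thousands of megatons:
for d in (20, 50, 1000):
    print(f"{d:>5} m -> {impact_energy_megatons(d):,.1f} Mt")
```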
Some of South Mississippi's tiniest creatures are getting a lot of help surviving in the wild. Sixth graders at one Pascagoula school are working on a project to monitor and protect a bayou that's right in their own backyard. Just a few blocks from Trent Lott Academy is Grant Bayou, a vital habitat for baby shrimp, crabs, and fish. "This is the nursery of the world. This is where our food starts. This is probably the most important wetlands ecosystem that there is," said Michael Henderson, Mississippi Power Maintenance Specialist. Twice a year, the sixth graders at the school walk over to the bayou to track the health of the fragile watershed. "This is our backyard. I don't want a dump in my backyard. The only way it's going to be cleaned up, the only way it's going to be maintained in a good healthy way, is if we do it," said Henderson. With guidance from environmental and compliance specialists at Mississippi Power, the students waded in the water to collect samples. They measured the water's pH and dissolved oxygen levels. They also tested to see if the water was clear and recorded its temperature. "This is actually good useful data, and it teaches them the basics of the importance of math, science, and environmental stewardship," said Henderson. The school has been keeping a close watch over Grant Bayou since 2002. "To help save the fish and keep them alive," said sixth grader Anna Barlow. "For the environment and help the world be clean," said sixth grader Geo Garnica. Some students have taken a personal role in this project. "I usually come out here and help clean up this area back here, because there's a bunch of debris. And I like to feed the fish and the pelicans," said sixth grader Aston Smith. They are learning early to take responsibility in protecting their backyard bayou. "This is a healthy water system. It may not be the prettiest in the world, but it is healthy. Our monitoring will help us make sure that it stays that way," said Henderson. 
The project is part of "World Water Monitoring Day". The data collected will be entered into a national database. The students will use the information to track any changes to the bayou's health over the years.
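As a sketch of how field measurements like the students' (pH, dissolved oxygen, clarity, temperature) might be recorded and sanity-checked before going into a database, here is a minimal Python example. The "healthy" thresholds are illustrative assumptions, not the project's actual criteria:

```python
# Minimal record-and-check sketch for water-quality field samples.
# The healthy ranges below are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class WaterSample:
    ph: float                 # acidity, 0-14 scale
    dissolved_oxygen: float   # mg/L
    temperature_c: float      # degrees Celsius
    turbidity_clear: bool     # True if the water tested clear

    def flags(self):
        """Return a list of readings outside the assumed healthy ranges."""
        problems = []
        if not 6.5 <= self.ph <= 8.5:
            problems.append("pH out of range")
        if self.dissolved_oxygen < 5.0:   # below ~5 mg/L stresses fish
            problems.append("low dissolved oxygen")
        if not self.turbidity_clear:
            problems.append("turbid water")
        return problems

sample = WaterSample(ph=7.8, dissolved_oxygen=6.2, temperature_c=24.0, turbidity_clear=True)
print(sample.flags() or "healthy")
```

Entering samples twice a year, as the school does, would build exactly the kind of time series needed to spot gradual changes in the bayou's health.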
An icy interior for Ceres?
Observations indicate the largest main-belt asteroid may have an icy mantle beneath its surface.
September 12, 2005

The largest asteroid, Texas-size Ceres, may be a mini-planet with a water-rich mantle. The Hubble Space Telescope imaged Ceres in December 2003 and January 2004, revealing never-before-seen detail. Top: This contrast-enhanced false-color composite of Ceres is made from Hubble Space Telescope visible and ultraviolet images. Bottom: Surface features change position as Ceres rotates in this sequence of visible images. Hubble tracked Ceres through multiple 9-hour rotations. Photo by NASA/ESA/STScI/J. Parker, P. Thomas, and L. McFadden; composite: Francis Reddy

Located near the middle of the main asteroid belt between Mars and Jupiter, Ceres is the largest such body at about 592 miles (952 kilometers) across. Despite its size, this asteroid has a rather bland reputation among astronomers: Its low density, low reflectivity (albedo), and relatively featureless spectrum have led scientists to conclude the asteroid's interior is uniform in composition, with little structure. However, a paper published in the journal Nature provides strong evidence that Ceres may be far more complex. These animations show Ceres through one full rotation as seen by the Hubble Space Telescope in mid- and near-ultraviolet (top and center) and visible wavelengths (bottom). J. Parker, SWRI

A team of astronomers led by Peter Thomas of Cornell University used the Hubble Space Telescope to obtain images of Ceres in December 2003 and January 2004. Measurements of the asteroid's shape revealed Ceres is rotationally symmetric — the distance from the asteroid's surface to its center is the same regardless of longitude. This indicates Ceres' shape is determined by hydrostatic equilibrium, in which the weight of the overlying material determines the pressure at any point within the body.
Stars like the Sun, as well as planets — both gas giants like Jupiter and rocky bodies like Earth — are in global hydrostatic equilibrium. Otherwise, their sizes would not remain constant. "This is the first time we have seen Ceres in such detail and can even say something about its interior," says team member Joel Parker of the Southwest Research Institute in San Antonio, Texas. "You can watch it rotate in our observations, and you get the feeling of it being a whole new world, not just a bit of rocky debris." Some astronomers, notably team member Alan Stern, also of SWRI, think a shape determined by global hydrostatic equilibrium is part of what distinguishes planets from asteroids. If we accept this argument, Ceres is a planet. Measurements of the asteroid's shape also provide information about its interior. The equatorial region of any body that spins fast enough bulges outwards. Instead of looking like a perfect sphere, the star or planet (or asteroid, in the case of Ceres) looks "flattened." By measuring the distances from the body's center to its equator and poles, scientists can determine the amount of flattening, which constrains the interior's possible structure. Ceres has a mean density of about 2.077 grams per cubic centimeter (roughly twice that of water), and a uniform body of this density should have a polar radius about 23.8 miles (39.7 km) smaller than the equatorial radius. Ceres' polar radius is only 19.6 miles (32.6 km) smaller. This is strong evidence its interior consists of different layers. In fact, the smaller amount of flattening indicates the presence of a mantle and a core. The team developed computer models of Ceres' interior using available data on the asteroid. Their findings: Ceres has a rocky core and could have an icy mantle as thick as 77 miles (124 km), amounting to about one quarter of its mass. 
The lack of a water signature in the asteroid's spectrum does not present a problem, as any water ice on Ceres' surface would be unstable and soon lost to space. The astronomers make a bold prediction in their Nature paper: They assert NASA's Dawn spacecraft, upon its arrival at Ceres, will find a "globally relaxed and differentiated object, but which should retain a visible cratering record, and possible tectonic features." We will have to wait until 2015 to find out if they are right.

Bill Cooke is an astronomer with NASA's Marshall Space Flight Center in Huntsville, Alabama.
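The flattening argument can be checked directly from the numbers in the article. In the sketch below, only the equatorial radius is an assumption (chosen to match the quoted ~952 km diameter); the two polar-equatorial radius differences are from the text:

```python
# Reproducing the flattening comparison from the article.
# equatorial_radius_km is an assumed value consistent with the quoted
# ~952 km diameter; the radius differences are from the article itself.

equatorial_radius_km = 487.3   # assumption (~952 km mean diameter)
uniform_diff_km = 39.7         # predicted polar-equatorial difference, uniform interior
observed_diff_km = 32.6        # measured by Hubble

# Flattening f = (equatorial radius - polar radius) / equatorial radius
f_uniform = uniform_diff_km / equatorial_radius_km
f_observed = observed_diff_km / equatorial_radius_km

print(f"flattening, uniform body: {f_uniform:.4f}")
print(f"flattening, observed:     {f_observed:.4f}")
```

The observed flattening comes out smaller than a uniform body would show, which is the article's evidence that denser material is concentrated toward a core.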
The Gulf of Mexico is like a giant washing machine, the Christian Science Monitor says. Will the Gulf’s washing-machine-like nature be enough to counteract the BP leak? The Gulf is warm, filled with salty water and oil-eating bacteria, and is being sloshed around by tides and winds. So, it basically cleans itself. But just how fast it’ll be able to get rid of all the oil and dispersants from BP’s spill and restore order to the Gulf remains to be seen. In localized places such as marshes and beaches, the oil and dispersants could stretch the ability of the Gulf’s natural restorative powers to correct what one Gulf biologist calls man’s “insult” to the ecosystem. Researchers, for example, have spotted fluorescent clouds in the deep Gulf, likely a byproduct of benzene in the water — a new phenomenon. “It’s very possible that five years from now this will just be a nightmare memory that we have, but we also don’t know yet if we’ve exceeded the resilience of the system with this spill,” says Richard Snyder, director of the Center for Environmental Diagnostics and Bioremediation at the University of West Florida in Pensacola. A report produced in 1981 by the Coordinated Program of Ecological Studies said that natural conditions broke down much of the oil from the Ixtoc spill in 1979. Two years after the spill, shrimp stock in the Gulf of Mexico was back to pre-spill levels, the CS Monitor said. But the BP spill is a different story. Plumes of oil have penetrated the deeper parts of the ocean and oil-eating bacteria are out-eating phytoplankton, which is essential to the Gulf food chain. “It shifts the whole food web,” says Snyder. “What a lot of us are concerned about is the impact of a spill of this magnitude where things may recover or seem to recover, but we get this subtle degradation of the environment that’s hard to quantify from repeated impacts,” says Snyder. “There is a point of no return where we exceed the resilience of the environment.
Have we done that [with the Gulf oil spill]? We don’t know.”
David Biello is the associate editor for environment and energy at Scientific American. Follow on Twitter The world is waiting for a clean revolution, a shift away from the greenhouse gas-emitting, mountain-leveling, air-polluting, fossil-fuel burning way of life. The world may have to wait a long time if past energy transitions are anything to go by, according to environmental scientist Vaclav Smil of the University of Manitoba—especially since fossil fuel energy is so cheap. "Energy is dirt cheap. Oil is cheaper than any mineral you can buy," Smil noted. "The percent of disposable income devoted to energy is about 10 percent." Smil spoke at the recent Equinox Summit at the Perimeter Institute in Waterloo, Ontario, which was specifically charged with devising a new energy scenario for 2030, one that would cut greenhouse gas emissions while extending modern energy to the billions of people who lack it today. The summit called for a range of options, from power plants that harvest energy from hot rocks to solar-battery combos for rural electrification. The only problem: all of those resources require fossil fuels to build in the first place. Steel and cement—the essential substrate of energy equipment and cities—require coal (or, even worse, charcoal) to be burned. Cheap plastic photovoltaics require polymers made from oil. The fertilizer that feeds a global population of seven billion requires the conversion of natural gas to more than 140 million tons of ammonia per year. Even advanced nuclear reactors would need large, oil-burning machines to mine the uranium or thorium fuel. "A wind turbine is a pure embodiment of power from fossil fuels," Smil noted. "We are fundamentally a fossil fuel civilization. Everything around us we have fossil fuels to thank for." Nor is the world in danger of finishing off the supply of fossil fuels anytime soon. "Instead of running out of gas, we ran into gas in the shale," Smil said.
"We’re not running out of anything on a human scale." That may be a good thing since the alternatives currently on offer—such as biofuels to substitute for oil-derived fuels—can do more harm than good. "It’s insane. It’s taking food from the mouths of babies," Smil said. "It’s a make work project for farmers." Plus it took three decades, tens of billions of dollars in subsidies and a dead zone in the Gulf of Mexico (a result of fertilizer run-off) to allow ethanol from corn—the most productive per hectare crop on the planet—to supply 10 percent of U.S. car fuel. And that’s relatively fast; liquefied natural gas took more than 150 years from conceptual discovery to actual shipments, a timespan similar to the shift from wood to coal, for example. "We should focus our resources and attention on what has the best chance to succeed," Smil said. "That’s not biofuels, that’s not wind. It is PV," or photovoltaic modules for converting light energy to electricity. And what has an even better chance of success—and immediate impact—is reforming the current energy system, whether through better building codes that require more insulation and triple-pane windows or making the most efficient use of fossil fuels. After all, if all of Canada switched to more than 90 percent efficient natural gas furnaces, the country would produce 40 percent less CO2. "There is no renewable energy that will get you 40 percent less carbon on a scale like that," Smil said. "Changing furnaces is an energy transition."
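Smil's furnace example boils down to simple efficiency arithmetic. In the sketch below, the 60% baseline is an assumption (only the ">90% efficient" figure appears in the text); with it, the per-furnace fuel and CO2 saving comes out in the same range as the figure he cites:

```python
# Efficiency arithmetic behind the furnace example.
# old_efficiency is an assumed baseline; only the ">90%" figure
# for condensing furnaces comes from the text.

old_efficiency = 0.60   # assumed typical older natural gas furnace
new_efficiency = 0.92   # condensing furnace, per the ">90%" in the text

# Delivering the same heat with a better furnace burns less gas,
# and CO2 emissions scale with the gas burned:
fuel_ratio = old_efficiency / new_efficiency
reduction = 1 - fuel_ratio

print(f"Fuel (and CO2) reduction from the furnace swap: {reduction:.0%}")
```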
There’s a third more carbon dioxide in the air than at the start of the Industrial Revolution. The carbon acts like insulation in the atmosphere, or like glass in a greenhouse — that’s why it’s called a greenhouse gas – and it is warming the air, which warms the seas. The current carbon dioxide concentration is higher than it has been for several million years and rising 100 times faster than any time in the past 650,000 years. Warmer ocean water is already having dramatic effects. Some corals bleach and die when water gets too warm for too long. Bleaching means corals eject algae cells that live inside them and provide them with food and often color. What happens to reefs will affect the hundreds of millions of people worldwide who depend on reefs for food and income. Ocean warming and higher air temperatures also melt polar ice. As polar ice melts, animals that need ice suffer. Polar bears, some seals and many penguins require ice to live. At the base of the Antarctic food web, shrimp-like krill require ice and they are vital food sources for many Antarctic whales, seals, seabirds, and fishes. Melting land-ice, such as glaciers, raises sea levels. (Sea ice is already displacing all the water it will displace, and like ice cubes in a drink, sea ice does not raise sea level when it melts.) Meanwhile, as seawater warms it expands a little, also raising sea levels. Global sea level rise threatens coastal habitat–both marine environments and human settlements. Because such a large proportion of people live within 50 miles of a coast, it’s estimated that over 600 million people—roughly one in ten people on Earth, will be directly affected by sea level rise. Entire island nations in the South Pacific may disappear beneath the waves as the ocean envelops them. Rising sea levels threaten habitats such as coral reefs and coastal mangroves, as well as low islands relied upon by many millions of breeding seabirds. 
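The "seawater expands as it warms" effect mentioned above can be sketched with a one-line model, Δh ≈ β·h·ΔT. All three numbers below are assumed illustrative values, not figures from the text:

```python
# One-line model of sea level rise from thermal expansion alone.
# All three inputs are assumed illustrative values.

thermal_expansion_per_K = 2.0e-4   # typical for warm surface seawater
layer_depth_m = 700                # assumed depth of the warming upper-ocean layer
warming_K = 1.0                    # assumed temperature rise of that layer

rise_m = thermal_expansion_per_K * layer_depth_m * warming_K
print(f"Sea level rise from thermal expansion alone: {rise_m * 100:.0f} cm")
```

Even a modest warming of the upper ocean adds roughly a tenth of a meter under these assumptions, before any contribution from melting land ice.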
The solution: We need an energy economy based on renewable energy, especially energy sources that do not have to be burned, such as the power of the sun, wind, tides, and the heat of the Earth — the power that drives the whole planet.

3 things you can do to curb ocean warming:
1. Conserve energy at home and at work.
2. Switch to renewable energy whenever possible.
3. Change your driving habits to conserve fuel – walk, ride a bike or carpool.

Other great ways you can make a difference.

LINKS & VIDEOS
- Warming 101 – Carl’s Blog
- Baked Alaska – Carl’s Blog
- Global Warming – National Wildlife Foundation
- Coral Reef Bleaching Affects Fish Communities – Science Daily
- Coral Reef Bleaching Impacts – Coral Reef Resilience
- Global Climate Change & Krill – Antarctic Krill Conservation Project
- Polar Bear Habitat
- 2010 Warmest Year on Record – USA Today
- Aquarius Ocean Circulation, NASA: Until now, researchers did not have a full set of data on ocean salinity and how it impacts climate change.
- Climate Change Affects Everything, State of the World’s Oceans: Climate change affects everything. All the organisms that live in the ocean are used to being bathed in it, are used to its temperature, are used to where the ocean currents flow and all those things change with global climate change.
- Coral Bleaching Firsthand, Penn State Research: Iliana Baums, an assistant professor of biology at Penn State, dons scuba gear for work. She studies coral reef ecosystems, the “forests of the oceans,” diverse habitats that are vital to many species of ocean life. Warming ocean temperatures disrupt that ecosystem and cause episodes of coral bleaching,
<urn:uuid:701d3eb8-eec0-4799-91ca-06d038d4829d>
3.609375
796
Knowledge Article
Science & Tech.
46.412199
942
Scientific name: Epione vespertaria

July - August. Aberdeenshire, Moray and Yorkshire.

This small moth is either yellow or orange, with brown bordered wings. Found in open woodland or on grassland. Similar to the Bordered Beauty. The female tends to be a lighter yellow than the male; it also has a deeper indentation in the dark border along the edge of the wings. The shape of the dark border helps to distinguish this species from the Bordered Beauty, which can also be slightly larger. The male flies during the day, especially just after sunrise, and both sexes can be disturbed from the foodplants in the afternoon. Also flies from dusk and at dawn.

Size and Family
- Family – Thorns, Beauties and allies (Ennomines)
- Small Sized
- UK BAP: Priority Species
- Rare (Red Data Book 3)

Particular Caterpillar Food Plants
Aspen in Scotland, and Creeping Willow in Yorkshire.

- Countries – England, Scotland
- Restricted to a very few sites in Scotland, in Aberdeenshire and the Moray area. Restricted to one site in England, in Yorkshire. Individual records at other localities indicate that it may occur elsewhere.

Prefers open and damp scrubby and heathy grassland, usually near tall trees.
<urn:uuid:38288833-445f-4b79-ab0e-0862d266d631>
3.078125
281
Knowledge Article
Science & Tech.
46.377029
943
Plugins are special Modules that are exposed to the user through the Workbench GUI. This is typically done using the main menu, or the context-sensitive menu. Much of the MySQL Workbench functionality is implemented using plugins; for example, table, view, and routine editors are native C++ plugins, as are the forward and reverse engineering wizards. The Administrator facility in MySQL Workbench is implemented entirely as a plugin in Python.

A plugin can be a simple function that performs some action on an input, and ends without further interaction with the user. Examples of this include auto-arranging a diagram, or making batch changes to objects. To create a simple plugin, the function must be located in a module and declared as a plugin using the plugin decorator of the ModuleInfo object.

Plugins can have an indefinite runtime, such as when they are driven by the user through a graphical user interface. This is the case for the object editors and wizards within MySQL Workbench. Although the wizard type of plugin must be declared in the usual way, only the entry point of the plugin will need to be executed in the plugin function, as most of the additional functionality will be invoked as a result of the user interacting with the GUI.

Reloading a plugin requires MySQL Workbench to be restarted.

Declare a plugin using this syntax:

@ModuleInfo.plugin(plugin_name, caption, [input], [groups], [pluginMenu])

These parameters are defined as follows:

- plugin_name: A unique name for the plugin. It may contain only alphanumeric characters, dots, and underscores.
- caption: A caption to use for the plugin in menus.
- input: An optional list of input arguments.
- groups: Optional list of groups the plugin belongs to. Recognized values are:
  - Overview/Utility: Context menu in the Model Overview.
  - Model/Utility: The menu for diagram objects.
  - Menu/<category>: Plugins menu in the main menu.
- pluginMenu: Optional name of a submenu in the Plugins menu where the plugin should appear. For example, Catalog, Utilities. This is equivalent to Menu/<category> in the groups list.
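To make the declaration syntax concrete, here is a toy re-implementation of the registration pattern that can run outside MySQL Workbench. Everything in it is illustrative: the `_PluginRegistry` class is a stand-in for the real `ModuleInfo` object (which a genuine plugin obtains from Workbench's own scripting module), and the plugin name `"wb.util.sample"` is hypothetical.

```python
# Toy sketch of the @ModuleInfo.plugin registration pattern. Illustrative only:
# in a real plugin, ModuleInfo comes from MySQL Workbench itself; this stand-in
# merely shows how a decorator can record a function as a plugin entry point.

class _PluginRegistry:
    def __init__(self):
        self.plugins = {}  # plugin_name -> metadata dict

    def plugin(self, plugin_name, caption, input=None, groups=None, pluginMenu=None):
        """Return a decorator that registers `func` under plugin_name."""
        def register(func):
            self.plugins[plugin_name] = {
                "caption": caption,
                "input": list(input or []),
                "groups": list(groups or []),
                "pluginMenu": pluginMenu,
                "entry_point": func,
            }
            return func  # the function itself is returned unchanged
        return register

ModuleInfo = _PluginRegistry()  # stand-in for Workbench's ModuleInfo object

@ModuleInfo.plugin("wb.util.sample", caption="Sample Plugin",
                   groups=["Menu/Utilities"], pluginMenu="Utilities")
def sample_plugin():
    # A "simple" plugin: acts on its input and ends without user interaction.
    return "done"

print(ModuleInfo.plugins["wb.util.sample"]["caption"])  # Sample Plugin
```

Note that the decorator returns the function unchanged, so `sample_plugin()` remains directly callable; Workbench would instead look up the registered entry point when the user selects the menu item named by `caption`.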
<urn:uuid:735a36eb-7d81-4c4a-a9b6-85061d2c943e>
2.640625
444
Documentation
Software Dev.
36.109679
944
That’s the conclusion of a new study by astronomers at the California Institute of Technology (Caltech) that provides yet more evidence that planetary systems are the cosmic norm. The team made their estimate while analyzing planets orbiting a star called Kepler-32—planets that are representative, they say, of the vast majority in the galaxy and thus serve as a perfect case study for understanding how most planets form.

“There’s at least 100 billion planets in the galaxy—just our galaxy,” says John Johnson, assistant professor of planetary astronomy at Caltech and coauthor of the study, which was recently accepted for publication in the Astrophysical Journal. “That’s mind-boggling.” “It’s a staggering number, if you think about it,” adds Jonathan Swift, a postdoc at Caltech and lead author of the paper. “Basically there’s one of these planets per star.”

The planetary system in question, which was detected by the Kepler space telescope, contains five planets. The existence of two of those planets has already been confirmed by other astronomers. The Caltech team confirmed the remaining three, then analyzed the five-planet system and compared it to other systems found by the Kepler mission.

The planets orbit a star that is an M dwarf—a type that accounts for about three-quarters of all stars in the Milky Way. The five planets, which are similar in size to Earth and orbit close to their star, are also typical of the class of planets that the telescope has discovered orbiting other M dwarfs, Swift says. Therefore, the majority of planets in the galaxy probably have characteristics comparable to those of the five planets. While this particular system may not be unique, what does set it apart is its coincidental orientation: the orbits of the planets lie in a plane that’s positioned such that Kepler views the system edge-on. Due to this rare orientation, each planet blocks Kepler-32’s starlight as it passes between the star and the Kepler telescope.
By analyzing changes in the star’s brightness, the astronomers were able to determine the planets’ characteristics, such as their sizes and orbital periods. This orientation therefore provides an opportunity to study the system in great detail—and because the planets represent the vast majority of planets that are thought to populate the galaxy, the team says, the system also can help astronomers better understand planet formation in general. “I usually try not to call things ‘Rosetta stones,’ but this is as close to a Rosetta stone as anything I’ve seen,” Johnson says. “It’s like unlocking a language that we’re trying to understand—the language of planet formation.” One of the fundamental questions regarding the origin of planets is how many of them there are. Like the Caltech group, other teams of astronomers have estimated that there is roughly one planet per star, but this is the first time researchers have made such an estimate by studying M-dwarf systems, the most numerous population of planets known. To do that calculation, the Caltech team determined the probability that an M-dwarf system would provide Kepler-32’s edge-on orientation. Combining that probability with the number of planetary systems Kepler is able to detect, the astronomers calculated that there is, on average, one planet for every one of the approximately 100 billion stars in the galaxy. But their analysis only considers planets that are in close orbits around M dwarfs—not the outer planets of an M-dwarf system, or those orbiting other kinds of stars. As a result, they say, their estimate is conservative. In fact, says Swift, a more accurate estimate that includes data from other analyses could lead to an average of two planets per star. M-dwarf systems like Kepler-32’s are quite different from our own solar system. For one, M dwarfs are cooler and much smaller than the sun. Kepler-32, for example, has half the mass of the sun and half its radius.
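The geometric step in that calculation lends itself to a back-of-the-envelope check. For a circular orbit of radius a around a star of radius R, a randomly oriented system shows transits with probability roughly R/a. The numbers below are rough illustrative assumptions (half a solar radius, a close-in orbit, a hypothetical detection count), not values from the Caltech analysis:

```python
# Back-of-the-envelope version of the "edge-on probability" argument.
# All specific numbers here are illustrative assumptions, not values from
# the paper: Kepler-32 has about half the Sun's radius, and its planets
# orbit within roughly a tenth of an astronomical unit.

R_SUN_KM = 695_700           # solar radius in km
AU_KM = 1.496e8              # astronomical unit in km

star_radius = 0.5 * R_SUN_KM  # Kepler-32: about half the Sun's radius
orbit = 0.05 * AU_KM          # a representative close-in orbit (assumed)

# Chance that a randomly oriented orbit happens to be seen edge-on:
p_transit = star_radius / orbit
print(f"transit probability ~ {p_transit:.1%}")   # ~4.7% with these numbers

# If Kepler detects N transiting systems, roughly N / p_transit similar
# systems should exist in total -- the scaling behind "one planet per star".
observed = 100                # hypothetical count of detections
implied_total = observed / p_transit
print(f"{observed} detections imply ~{implied_total:.0f} such systems")
```

So only about one such system in twenty is oriented favorably, which is why each detected system stands in for many unseen ones.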
The radii of its five planets range from 0.8 to 2.7 times that of Earth, and those planets orbit extremely close to their star. The whole system fits within just over a tenth of an astronomical unit (the average distance between Earth and the sun)—a distance that is about a third of the radius of Mercury’s orbit around the sun. The fact that M-dwarf systems vastly outnumber other kinds of systems carries a profound implication, according to Johnson, which is that our solar system is extremely rare. “It’s just a weirdo,” he says. The fact that the planets in M-dwarf systems are so close to their stars doesn’t necessarily mean that they’re fiery, hellish worlds unsuitable for life, the astronomers say. Indeed, because M dwarfs are small and cool, their temperate zone—also known as the “habitable zone,” the region where liquid water might exist—is also further inward. Even though only the outermost of Kepler-32’s five planets lies in its temperate zone, many other M-dwarf systems have more planets that sit right in their temperate zones. As for how the Kepler-32 system formed, no one knows yet. But the team says its analysis places constraints on possible mechanisms. For example, the results suggest that the planets all formed farther away from the star than they are now, and migrated inward over time. Like all planets, the ones around Kepler-32 formed from a proto-planetary disk—a disk of dust and gas that clumped up into planets around the star. The astronomers estimated that the mass of the disk within the region of the five planets was about as much as that of three Jupiters. But other studies of proto-planetary disks have shown that three Jupiter masses can’t be squeezed into such a tiny area so close to a star, suggesting to the Caltech team that the planets around Kepler-32 initially formed farther out. Another line of evidence relates to the fact that M dwarfs shine brighter and hotter when they are young, when planets would be forming.
Kepler-32 would have been too hot for dust—a key planet-building ingredient—to even exist in such close proximity to the star. Previously, other astronomers had determined that the third and fourth planets from the star are not very dense, meaning that they are likely made of volatile compounds such as carbon dioxide, methane, or other ices and gases, the Caltech team says. However, those volatile compounds could not have existed in the hotter zones close to the star. Finally, the Caltech astronomers discovered that three of the planets have orbits that are related to one another in a very specific way. One planet’s orbital period lasts twice as long as another’s, and the third planet’s lasts three times as long as the latter’s. Planets don’t fall into this kind of arrangement immediately upon forming, Johnson says. Instead, the planets must have started their orbits farther away from the star before moving inward over time and settling into their current configuration. “You look in detail at the architecture of this very special planetary system, and you’re forced into saying these planets formed farther out and moved in,” Johnson explains. The implications of a galaxy chock full of planets are far-reaching, the researchers say. “It’s really fundamental from an origins standpoint,” says Swift, who notes that because M dwarfs shine mainly in infrared light, the stars are invisible to the naked eye. “Kepler has enabled us to look up at the sky and know that there are more planets out there than stars we can see.”
<urn:uuid:d02c5cf1-b681-4e78-8671-126fa188b67d>
4.0625
1,629
News Article
Science & Tech.
44.103569
945
Newly Deciphered Ant Genomes Offer Clues on Ant Social Life, Pest Control

An international team of scientists has decoded the genome of a persistent household pest -- the Argentine ant, an invasive species that is threatening native insects across the world. These findings could provide new insights on how embryos with the same genetic code develop into either queens or worker ants and may advance our understanding of invasion biology and pest control.

Similar to bees, ants have sophisticated social structures. Queen ants typically have larger bodies, wings and fertile ovaries, and are responsible for reproduction in the colony. Worker ants are smaller, wingless and infertile, and are tasked with foraging for food and caring for the queen's offspring. A better understanding of how larvae develop into queens or workers could support the development of new control methods that use more benign chemicals to limit the number of queens born in a colony, effectively sterilizing the population.

Source: Science News
<urn:uuid:03078bd8-1ed3-4dae-a43e-b1a7f96f7e62>
3.453125
200
News Article
Science & Tech.
19.177033
946
C++ concepts: MoveConstructible (since C++11)

Specifies that an instance of the type can be move-constructed (moved). This means that the type has move semantics: that is, it can transfer its internal state to a new instance of the same type, potentially minimizing the overhead.

The type must meet CopyConstructible requirements and/or implement the following functions:

Type::Type( Type&& other );
Type::Type( const Type&& other );

(One of the variants is sufficient.)

Move constructor: constructs an instance of the type with the contents of other. The internal state of other is unspecified after the move. However, it must still be valid; that is, no invariants of the type are broken.

The following expressions must have the specified effects:

|Type a = rv;| the value of a is equivalent to the value of rv before the construction |
|Type(rv);| a temporary object of type Type is equivalent to the value of rv before the construction |

See also

|is_move_constructible| checks if a type has a move constructor |
<urn:uuid:52e364f0-04d0-4c3c-a64e-1cb5ef1e5b1a>
3.0625
205
Documentation
Software Dev.
30.81962
947
See also the Dr. Math FAQ:
Browse High School Triangles and Other Polygons

Stars indicate particularly interesting answers or good places to begin browsing.

Selected answers to common questions:
- Area of an irregular shape.
- Pythagorean theorem proofs.

- Euler Line and Nagel Point [07/20/1998] Can you provide more information on the Euler line and the Nagel point,
- Euler Line Proof [11/13/2001] Prove that if the Euler line of a triangle passes through a vertex, then the triangle is either right or isosceles.
- Euler's Line Theorem [04/08/2001] Prove that the circumcenter, orthocenter, and centroid of any triangle lie on the same line using analytical geometry.
- Euler's Nine-point Circle [02/21/1999] What is the "nine-point circle" problem?
- Evaluating a Trigonometric Expression [07/31/1999] Can you help me evaluate tan (arcsin (3/5))?
- Existence of the Brocard Point [06/06/2002] Demonstrate that the Brocard point exists in any triangle.
- Explanation and Informal Proof of Pick's Theorem [04/27/2004] We just learned Pick's Theorem, A = b/2 + I - 1, where b is the boundary pegs, I is the interior pegs, and A is the area. I don't get why it works. Why do you divide the boundary by 2 and subtract 1?
- Explanation and Test Case for Pick's Theorem [06/13/2006] I don't really understand Pick's Theorem and its formula. Can you explain the formula and show how it works for a polygon?
- Exploring the Distance from (0,0) to (1,1) with Limits [10/15/2006] Any route traveling from (0,0) to (1,1) going only north and east will cover a total distance of 2 units. But the straight line distance from (0,0) to (1,1) is sqrt(2) units. It seems that if I think of a staircase connecting the two points and let the stairs become infinitely small, the limit of the north/east route distance should converge to sqrt(2). But it doesn't! What's going on here?
- Exterior Angles in Triangles [09/11/1999] How can you prove that in any triangle, each exterior angle is equal to the sum of the two nonadjacent interior angles?
- Familiar Triangles [01/19/1999] How do you get the lengths of the sides of a 45-45-90 triangle and a 30-
- A Fibonacci Jigsaw Puzzle [04/29/1999] Why is the area of our rectangle, formed from a square, 65 when the square's area was 64?
- Fibonacci Riddle [11/21/2001] We can cut an 8x8 square with an area of 64 into four pieces and reassemble to get a 5x13 rectangle with an area of 65. Where does the extra 1x1 square come from?
- The Figure of Maximum Area and Given Perimeter [06/02/1998] Can you help me show, with and without calculus, that the geometric figure of a maximum area and given perimeter is a circle? What are the dimensions of a triangle with perimeter p that encloses the maximum area?
- Filling a Garden with Topsoil [2/5/1996] I have a garden that is 10' x 10' (100 square feet). I want to add 6" of topsoil to my garden. Topsoil is sold by the cubic yard. How many cubic yards of topsoil will I need for my project?
- Find Angle ACB [04/20/2002] Let A', B' and C' be points on triangle ABC such that AA' BB' CC' are angle bisectors. Suppose angle A'C'B' = 90 degrees. Find angle ACB.
- Find angle DEB [5/27/1996] Given an isosceles triangle ABC...
- Find Angles, Area, Perimeter of a Parallelogram [03/23/2001] I can't understand how to find indicated measures when I am given little information to begin with.
- Find a Point 3/8 along the Line [12/04/2002] Find a point 3/8 from A to B. Given: two endpoints X,Y coordinates. Point A (-2,7) point B (6,-5).
- Finding a Missing Angle [01/23/2002] Using trigonometry, calculate the measure of angles ABC and ACB.
- Finding Angles without Using Trigonometry [08/27/2001] Given the lengths of three sides of a triangle, determine the measures of the three angles using only geometry and algebra.
- Finding Area and Volume [04/12/2001] When working with area and volume of triangular shapes, how do I know when to divide the base by 2 and when to divide it by 3?
- Finding Areas of Different Polygons [09/02/1997] Could you please tell me how to work out the area for an equilateral heptagon, octagon, nonagon, decagon, unedecagon, and dodecagon?
- Finding Polygon Areas [03/20/1997] How do I find the area of polygons?
- Finding Rhombi in a Rhombus [9/25/1995] How can I work out a formula for finding how many rhombi there are in a rhombus? (say 2cm*2cm or 3cm*3cm and so on, etc.)
- Finding Side Lengths of a Scalene Triangle [6/2/1996] Two observers on points A and B of a national park see a beginning fire on point C. Knowing that the angles CAB = 45 degrees, ABC = 105 degrees and that the distance between points A and B is of 15 kilometers, determine the distances between B and C, and between A and C.
- Finding the Area of an Irregular Polygon [02/23/2008] What is the formula for finding the area of an irregular polygon?
- Finding the Area of an Irregular Shape [01/03/2007] I need the area of a parcel of land with 5 sides. I know the lengths of the sides and the angles at the corners, but am not sure how to calculate the area.
- Finding the Area of a Regular Pentagon [04/15/1998] How can you find the area of a regular pentagon given only the length of
- Finding the Base of Parts of a Triangle [05/22/2000] Can you derive an expression for L1 in terms of L2 and L3 such that the area of a triangle with base A1 and the area of a triangle with base A2 are each 10% of the total area?
- Finding the Center of the Research Triangle [9/5/1995] We live in an area known as the Research Triangle, with the triangle's points at the University of North Carolina, North Carolina State University and Duke University. We are interested in finding the center point of our triangle home and whether there is a unique term (or several terms) for the center point of a triangle.
- Finding the Coordinates of a Triangle Vertex [10/26/1999] How can I find the coordinates of the point A of triangle ABC if B lies on the line 3y = 4x, C lies on the line y = 0, the line BC passes through (2/3,2/3) and AOBC forms a rhombus (where O is the origin)?
- Finding the Dimensions of a Box [10/21/2001] You want to construct a cardboard box from a cardboard strip that is 8 inches wide. The dimensions of the box are 8"x8"x4". How long does the strip need to be?
- Finding the Incenter of a Triangle [10/08/2006] What is the equation or method to find the incenter of a triangle? I'm having trouble with the Cartesian coordinates.
- Finding Total Area of Several Rectangles [07/08/2005] I need to find the total square footage of a lot of rectangular lawns. Do I have to find the area of each lawn and add up all the areas, or can I just add all the lengths and all the widths and make one area calculation based on those two totals?
- Finding Triangle Vertices from Midpoints [09/18/1999] If you know the coordinates of the midpoints of the sides of a triangle, how can you find the coordinates of its vertices?
- Find Lengths of Sides of Triangle [04/20/2002] Let ABC be a right-angled triangle with angle C = 90 degrees. Let the bisectors of angle A and angle B intersect BC and CA at D and E respectively. Given that CD = 9 and CE = 8, find the lengths of the sides of ABC.
- Find the Area of the Quadrilateral [05/20/2003] Find the area of an irregular quadrilateral formed by the given intersection of two squares.
- Find the Diagonal of a Rectangle [07/26/1997] We use a tape measure to square different things on the job site by measuring opposite corners...
- Find the Fourth Side [12/03/2001] The successive sides of a quadrilateral are 2, 6, 9, and x. If the diagonals of the quadrilateral are perpendicular, compute x.
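Two of the entries above ask why Pick's Theorem works; whatever the proof, the formula itself (A = I + B/2 - 1, with I interior and B boundary lattice points) is easy to sanity-check numerically. A small sketch:

```python
# Numeric sanity check of Pick's Theorem: for a polygon whose vertices lie
# on lattice points, area A = I + B/2 - 1, where I counts interior lattice
# points and B counts lattice points on the boundary.

def pick_area(interior: int, boundary: int) -> float:
    return interior + boundary / 2 - 1

# A 3x2 axis-aligned rectangle: B = 2*(3+2) = 10 boundary points and
# I = (3-1)*(2-1) = 2 interior points, so Pick gives 2 + 10/2 - 1 = 6,
# matching width * height.
assert pick_area(interior=2, boundary=10) == 6.0

# The unit square: B = 4, I = 0, and 0 + 4/2 - 1 = 1, as expected.
assert pick_area(interior=0, boundary=4) == 1.0
```

The check also hints at why the "- 1" is needed: with B = 4 and I = 0, the unit square would otherwise be assigned area 2 instead of 1.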
<urn:uuid:a193c291-7e90-4842-aca6-2279883576d6>
3.21875
2,164
Q&A Forum
Science & Tech.
73.305013
948
Date: 01/30/97 at 20:21:52
From: M.Quinn
Subject: proof problems

For the following statement, give a proof if the statement is true, or a counterexample (with explanation) if the statement is false:

If r is any nonzero rational number, and s is any irrational number, then r/s is irrational.

I think this is true, but I can't prove it. I know s must be an integer and an integer isn't irrational. Am I going the right way?

Date: 01/31/97 at 11:15:50
From: Doctor Wilkinson
Subject: Re: proof problems

Well, so far so good. You're correct that the statement is true. Let's try to figure out a proof.

"Irrational" is a negative concept. That is, a number is irrational if it's NOT the quotient of two integers, so you typically have to use an "indirect" proof. That means, assume the number is rational and show that that assumption leads you to something you know is false.

So suppose r/s is rational. That means r/s = m/n, where m and n are integers. Let's multiply by ns to get rid of the fractions. That gives us rn = ms. But now what we're really interested in is s. So let's divide both sides by m. (We know we can do this because if m were zero, r would be zero: that's what that extra hypothesis was for!) This gives us:

s = rn/m

r is rational, n and m are integers, so that makes s rational. But we know it isn't. Contradiction! So our original assumption was wrong, and r/s is irrational.

Do you see how this works? This is a typical indirect proof. I hope this helps a little. You seem to be on the right track.

-Doctor Wilkinson, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
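Doctor Wilkinson's indirect argument can also be condensed into one line of symbols; this LaTeX restatement adds nothing beyond the prose, just a compact view of the same steps:

```latex
% Assume r \in \mathbb{Q}, r \neq 0, and s \notin \mathbb{Q}.
% Suppose, for contradiction, that r/s is rational:
\[
\frac{r}{s} = \frac{m}{n}, \quad m, n \in \mathbb{Z}
\;\Longrightarrow\; rn = ms
\;\Longrightarrow\; s = \frac{rn}{m} \in \mathbb{Q}
\quad (m \neq 0 \text{ since } r \neq 0),
\]
% which contradicts the irrationality of s; hence r/s is irrational.
```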
<urn:uuid:a7300688-7cab-4ccd-a9cc-39081bab45f5>
3.203125
439
Comment Section
Science & Tech.
88.033019
949
The Midwestern region of the United States experienced its second-coldest December in the 106-year record of observations. The December 2000 average temperature was 14.3°F, just missing the 1983 record of 13.9°F. A number of first-order stations broke their all-time cold records for December, including South Bend, IN; Chicago-Midway, Rockford, and Moline, IL; and Louisville and Paducah, KY. The center of unusually cold conditions was in Iowa, Missouri, and Illinois, where temperatures averaged 12-15°F below normal over most of the three states (Figure 1). Even in relatively warmer locations such as northern Michigan, temperatures were still more than 5°F below normal.

The temperature departure pattern in December 2000 was similar to that in December 1983, in the sense that the largest negative anomalies were in the western half of the region (Figure 2). The causes were similar, too. Both cold months occurred during a somewhat neutral to slightly La Niña-oriented season, with central equatorial sea surface temperatures slightly cooler than normal. The resulting upper air patterns indicate the preference for a strong trough to develop over the central and eastern United States in both seasons. During December 2000, a strong ridge dominated the western coast of North America, helping to accentuate the north-south delivery of very cold air in the Midwest trough (Figure 3). December 1983 was similar; while the amplitude of the western upper air ridge was less than in 2000, the eastern trough was deeper (Figure 4).

While the amount of liquid water in the precipitation that fell during December 2000 was only slightly above normal overall in the Midwest, some more active regions can be seen in Iowa and the northern tier states (Figure 5). It is unusual for a cold month during winter to also be a month with normal or above-normal precipitation.
The remarkable aspect of the precipitation in December, though, was that almost all of it was delivered in the form of snow, leading to a widespread deep snow pack that helped to maintain cold temperatures. The snow totals reported in real time in December 2000 vary widely depending on the reliability of reporting, but can be seen to include many very large values, especially in the north-central latitudes of the Midwest (Figure 6). A subset of stations with good climate records and available real-time data show that the snow fall totals were typically more than 10 inches above normal in most of the Midwest (Figure 7). Snow fall exceeded 300% of normal in the central and southern Midwest (Figure 8). While there were several major snowstorms that traversed the region during December, the sheer number of smaller "clipper" systems originating in the northern Rocky Mountains contributed greatly to the overall snow fall totals. At least 15 first-order stations broke all-time records for December snowfall, and five of these set their all-time record snowfall for any month of the year (Table 1). Most of these stations are located in a belt from central Iowa and southern Minnesota eastward through the Great Lakes region (Figure 9). As might be expected, December 1983 was similarly snowy, with the axis of heaviest snow perhaps shifted somewhat to the northern part of the Midwest (Figure 10). Overall, conditions during December 2000 were quite extraordinary in the Midwest. Illinois experienced its single coldest December in 106 years, while Iowa experienced its largest December snow fall state-wide (Harry Hillaker, Iowa State Climatologist). The December 2000 temperature rankings for each of the nine states in the Midwestern region are available in Table 2, and the temperature and snow fall rankings for a list of major Midwestern cities is given in Table 3. 
The Climate Prediction Center in Washington, D.C., has indicated an appreciable likelihood for cold weather to continue through the upcoming winter months, especially in the northern Great Lakes region. However, it should be noted that following the record cold December 1983, January-February 1984 was the 25th warmest on record for the Midwest.
<urn:uuid:7b95f622-3145-4801-a016-2adda5d7e638>
3.421875
802
Academic Writing
Science & Tech.
39.362833
950
The BeachCOMBERS are gathering data that can be averaged over several years, and will serve to provide "normal," or background, rates of mortality. Then, in the case of an oil spill or other catastrophic event, differences in mortality will help elucidate the amount of damage caused. Volunteers have been trained in animal identification and record-keeping skills, to ensure the most accurate data possible. For each animal found, volunteers note the location, species, age, sex, presence of oil on or near the carcass, probable cause of death, and degree of decomposition. "It's an excellent way for volunteers to participate with the Sanctuary," says Scott Benson, Volunteer Coordinator and Data Manager of BeachCOMBERS. "The Sanctuary can only be as good as local people make it; this gives them a chance to be involved directly with the science of surveying their marine environment." The BeachCOMBERS program has already provided useful information for several recent incidents. "It's been amazing to me that in our first few months we've been able to help so much," says Benson. In the summer of 1997 BeachCOMBER volunteers began to detect a much larger than normal number of dead common murres on local beaches. Without the program's regular monitoring, the incident would never have been noticed, according to Andrew DeVogelaere, Sanctuary Research Coordinator and Senior Scientist (and BeachCOMBER volunteer). Sanctuary and Department of Fish and Game officials began investigating possible causes, and have concluded by reviewing biotoxin data and National Marine Fisheries Service (NMFS) data that the die-off was caused either by a red tide event or by an increase in gill-net fishing in Monterey Bay. "Without this monitoring program, no one would have actually pulled the NMFS data out and studied it, and we wouldn't have been aware of the increase in gill-net fishing," explains DeVogelaere. 
The BeachCOMBERS program also proved extremely useful in the recent "Monterey Bay Bird Incident," in which a mysterious oil was spilled in the Monterey Bay and coated hundreds of seabirds. (See related story on page 7.) The program was able to provide a background team to do all the field surveys for the spill event because it had the volunteers, data sheets, and beach survey segments already in place to step right into action. "In addition to helping out with that spill event, we now know the other players involved, and they know us," explains Benson. "It builds the basis for cooperative work in the future." The program is largely based on an ongoing effort the Gulf of the Farallones NMS is using very effectively on beaches north of Año Nuevo. "Both offices are looking forward to sharing our data so they can be made available on a Geographic Information System (GIS) and easily interpreted," explains DeVogelaere. BeachCOMBERS is an excellent example of local and state institutions working cooperatively. It was started with a CUEREC (California Urban Environmental Research and Education Center) grant to Dr. Jim Harvey, and then received matching resources from Moss Landing Marine Laboratories. The Sanctuary also donated funds, and California's office of Oil Spill Prevention and Response (OSPR) donated cameras and sampling equipment. The Pacific Grove Natural History Museum has helped with training and identification of birds. The program also has standing cooperative arrangements with county marine mammal stranding networks, SPCAs, Native Animal Rescue, the California Dept. of Parks and Recreation, and others. Having volunteers out walking beaches has proven to be more valuable than just surveying the natural environment. BeachCOMBER Scott Benson and a fellow volunteer, while surveying Zmudowski State Beach, heard screams and saw a woman flailing in the water, being pulled away from shore by a rip tide. 
Benson, a certified California lifeguard, swam out to the woman, calmed her down, and brought her back to shore. Paramedics who arrived later are reported to have said that Benson saved the woman's life.

William J. Douros has been named MBNMS Superintendent. Sanctuary staff are excited to begin working with Douros, who previously served as Deputy Director of the Santa Barbara County Energy Division, and has an extensive background in marine regulatory issues, setting policy, and ocean research. He assumed his new position in January 1998. Look for a detailed article in the next issue.
4. CHECKLIST FOR THE NEXT DECADE

As I have been careful to stress, the basic tenets of Inflation + Cold Dark Matter have not yet been confirmed definitively. However, a flood of high-quality cosmological data is coming, and could make the case in the next decade. Here is my version of how "maybe" becomes "yes."

- Map of the Universe at 300,000 yrs. COBE mapped the CMB with an angular resolution of around 10°; two new satellite missions, NASA's MAP (launch 2000) and ESA's Planck Surveyor (launch 2007), will map the CMB with 100 times better resolution (0.1°). From these maps of the Universe as it existed at a simpler time, long before the first stars and galaxies, will come a gold mine of information: among other things, a determination of the Hubble constant to a precision of better than 5%; a characterization of the primeval lumpiness; and possible detection of the relic gravity waves from inflation. The precision maps of the CMB that will be made are crucial to establishing Inflation + Cold Dark Matter.
- Map of the Universe today. Our knowledge of the structure of the Universe is based upon maps constructed from the positions of some 30,000 galaxies in our own backyard. The Sloan Digital Sky Survey will produce a map of a representative portion of the Universe, based upon the positions of a million galaxies. The Anglo-Australian 2-degree Field survey will determine the positions of several hundred thousand galaxies. These surveys will define precisely the large-scale structure that exists today, answering questions such as, "What are the largest structures that exist?" Used together with the CMB maps, this will definitively test the Cold Dark Matter theory of structure formation, and much more.
- Present expansion rate H0. Direct measurements of the expansion rate using standard candles, gravitational time delay, SZ imaging and the CMB maps will pin down the elusive Hubble constant once and for all. It is the fundamental parameter that sets the size - in time and space - of the observable Universe. Its value is critical to testing the self-consistency of Cold Dark Matter.
- Cold dark matter. A key element of the theory is the cold dark matter particles that hold the Universe together; until we actually detect cold dark matter particles, it will be difficult to argue that cosmology is solved. Experiments designed to detect the dark matter that holds our own galaxy together are now operating with sufficient sensitivity to detect both neutralinos and axions. In addition, experiments at particle accelerators (Fermilab and CERN) will be hunting for the neutralino and its other supersymmetric partners.
- Nature of the dark energy. If the Universe is indeed accelerating, then most of the critical density exists in the form of dark energy. This component is poorly understood. Vacuum energy is only the simplest possibility for the smooth dark component; there are other possibilities: frustrated topological defects or an evolving scalar field (see e.g., Caldwell et al., 1998; Turner & White, 1997). Independent evidence for the existence of this dark energy, e.g., by CMB anisotropy, the SDSS and 2dF surveys, or gravitational lensing, is crucial for verifying the accounting of matter and energy in the Universe I have advocated. Additional measurements of SNe Ia could help shed light on the precise nature of the dark energy. The dark energy problem is not only of great importance for cosmology, but for fundamental physics as well. Whether it is vacuum energy or quintessence, it is a puzzle for fundamental physics and possibly a clue about the unification of the forces and particles.
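To see why pinning down the Hubble constant to better than 5% matters, note that 1/H0 sets the rough age scale of the Universe, so a fractional error in H0 propagates directly into the inferred age. A back-of-envelope sketch (the value 70 km/s/Mpc is illustrative, not a figure from the text):

```python
# Rough Hubble-time estimate: t_H = 1/H0, converted to gigayears.
KM_PER_MPC = 3.0857e19    # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

def hubble_time_gyr(h0_km_s_mpc):
    """Hubble time 1/H0 in gigayears."""
    h0_per_sec = h0_km_s_mpc / KM_PER_MPC    # convert H0 to 1/s
    return 1.0 / h0_per_sec / SECONDS_PER_YEAR / 1e9

h0 = 70.0  # illustrative value, km/s/Mpc
t = hubble_time_gyr(h0)
# A 5% error in H0 gives roughly a 5% error in the inferred age scale.
print(f"t_H = {t:.1f} Gyr, so a 5% error in H0 is about {0.05 * t:.1f} Gyr")
```

The Hubble time is not the exact age (that depends on the matter and energy content), but the proportionality makes the precision target concrete.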
Illustration courtesy L. Calçada, ESO Published May 16, 2012 A NASA spacecraft has witnessed hundreds of "superflares" coming from sunlike stars—and the observations suggest that the trigger for such massive outbursts remains a mystery. On our sun, solar flares aimed at Earth can send huge amounts of energy colliding with our planet. If a flare is strong enough, it can cause a solar storm that might cripple satellites and even knock out the power grid. Astronomers have also recorded similar but much more powerful flares coming from a variety of stars. These superflares can be millions or even a billion times more powerful than an average sun eruption, packing enough energy to roast nearby planets whole. It's not likely, but if a superflare were to engulf Earth, for example, our protective ozone layer would be instantly obliterated, said Brad Schaefer, an astrophysicist at Louisiana State University (LSU) who wasn't involved in the new study. With no more atmospheric ozone, dangerous ultraviolet rays from the sun would flood the planet, and all life would get fried to a crisp. "If you got rid of the ozone layer on Earth, you would sunburn in one second," Schaefer said. The new study is the first to take a detailed look at superflares on sunlike stars, few of which have been seen until now. Using data from the Kepler space telescope, Hiroyuki Maehara and colleagues at Kyoto University in Japan found 365 superflares coming from 148 sunlike stars. A popular theory proposed by Schaefer and colleagues suggested that superflares are linked to the presence of hot Jupiters—large gas giant planets that orbit very close to their host stars. (Related: "Distant Planet Mapped for First Time, 'Hot Jupiter' Features Fierce Winds.") Surprisingly, though, "none have been discovered around the stars that we have studied, indicating that hot Jupiters associated with superflares are rare," the study authors write in their paper. 
A Strike Against Hot Jupiters

Scientists think that solar flares are driven by the sun's magnetic field lines—invisible, tightly wound loops of energy connected to magnetically active regions known as sunspots. When a field line snaps, it whips an arc of light and charged particles away from the sun and into space. (Related: "Solar Flare Sparks Biggest Eruption Ever Seen on Sun.") According to the widely accepted theory, a similar mechanism likely powers superflares, except that instead of being anchored to two star spots, a magnetic field line rises from the star and connects to a nearby hot Jupiter. Only by anchoring one end of the field line to another object can a star create a magnetic loop large enough to generate a truly immense flare when it snaps, Schaefer explained. "This was a very reasonable idea," he said. "You can't get enough energy unless you have something else nearby. ... So this hot Jupiter idea has been taken by everyone as the default model." Based on this theory, "we predicted that hot Jupiters should be detectable around about 10 percent of the superflare stars," Schaefer said. But with the new observations from Maehara and colleagues, this prediction failed, Schaefer said, and "that's a fairly substantial strike against the hot Jupiter model."

Earth Safe From Superflares

Even with the new data, Schaefer thinks the model linking planets to superflares can work—it just needs a few tweaks. For example, instead of using a hot Jupiter as an anchor, superflare stars may be using closely orbiting rocky planets—hot Earths or hot super-Earths—to ground their magnetic field lines. "That would work just as well," Schaefer said. "The basic idea is not dead." Testing this version of the theory will be a bit more challenging. Kepler looks for the tiny dip in starlight as a planet transits—or crosses in front of—its host star as seen from Earth. The craft can't yet make out the signal from less massive, rocky planets in tight orbits around sunlike stars.
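The detection limit Kepler faces here is easy to quantify: the transit dip scales as the square of the planet-to-star radius ratio, so a rocky planet blocks far less light than a hot Jupiter. The radii below are rough textbook values, not numbers from the article.

```python
# Transit depth = (R_planet / R_star)^2, expressed in parts per million.
R_SUN = 696_000.0      # km, approximate solar radius
R_JUPITER = 71_492.0   # km, approximate Jupiter radius
R_EARTH = 6_371.0      # km, approximate Earth radius

def transit_depth_ppm(r_planet_km, r_star_km=R_SUN):
    """Fraction of starlight blocked during transit, in ppm."""
    return (r_planet_km / r_star_km) ** 2 * 1e6

print(f"hot Jupiter: {transit_depth_ppm(R_JUPITER):.0f} ppm")
print(f"hot Earth:   {transit_depth_ppm(R_EARTH):.0f} ppm")
```

A hot Jupiter dims a sunlike star by roughly one percent, while a hot Earth produces a dip more than a hundred times shallower, which is why the rocky-planet version of the theory is harder to test.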
But in either scenario, Earth is safe from superflares, Schaefer added: Mercury, the innermost planet in our solar system, isn't close enough to our star and its magnetic field is too weak for the body to anchor a superflare-generating field line. The new superflare study was published online this week by the journal Nature.
light in space

1. According to the books, the speed of light in a vacuum is 300,000 km per second. If you send out a sudden pulse of light in space, does it have to accelerate to that speed?

2. If you could make a _very_ long tube in space, with a mirror at both ends (perfect mirrors, and perhaps long enough to reach from earth to Venus), could you open one end, shine a bright light in for a few seconds, then slide the mirror back in place, trapping the beam of light in there? Would the light beam keep bouncing back and forth?

1. No, light starts out at the speed of light - it does not have to accelerate. What does happen is that the amplitude (of the electric and magnetic fields) gradually increases, so that at the start of the pulse the amplitude is small; it then rises to a peak, and then falls back down.

2. Yes, you sure could do that. In fact, that is essentially the way some experiments on fiber optics work (and somewhat related to the way lasers work). Apparently a recent experiment by some Japanese researchers has sent light pulses round and round a fiber optic cable for some 180 million miles - that is getting into astronomical distances right here on earth!

But why does light not "have to accelerate"? In truth it does, but the acceleration to light speed is instantaneous. That is because light is made up of massless photons. The force that creates the photons gives them infinite acceleration, so they reach the speed of light immediately.

Update: June 2012
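The trapped-beam thought experiment in question 2 is easy to put numbers on. Taking the 300,000 km/s figure from the text and a tube roughly the length of the Earth-Venus closest approach (about 40 million km, an assumed round figure not given in the original):

```python
C_KM_S = 300_000.0     # speed of light used in the text, km/s
TUBE_LENGTH_KM = 40e6  # ~Earth-Venus closest approach (assumed round figure)

one_way_s = TUBE_LENGTH_KM / C_KM_S    # time for one end-to-end pass
bounces_per_hour = 3600.0 / one_way_s  # mirror-to-mirror passes per hour

print(f"one-way trip: {one_way_s:.0f} s (about {one_way_s / 60:.1f} minutes)")
print(f"passes per hour: {bounces_per_hour:.0f}")
```

So the trapped beam would complete an end-to-end pass only every couple of minutes; the same arithmetic applied to the 180-million-mile fiber experiment mentioned above gives a total travel time of roughly a quarter of an hour.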
Herman Melville’s Moby Dick may paint a picture of the sperm whale as a terrifying, ferocious creature that destroys ships and attacks the sailors on them, but modern research shows that sperm whales are compassionate and social creatures, dangerous only to the fish and squid that the giant whale feasts on for dinner, or to the orca whales that prey on sperm whale calves. A heartwarming and unusual recent discovery does even more to distinguish the sperm whale from its deadly reputation, as a group of sperm whales were observed “adopting” a bottlenose dolphin with a spinal malformation. Behavioral ecologists Alexander Wilson and Jens Krause discovered this unique phenomenon when they set out to observe sperm whales off the island of Pico in the Azores in 2011. Upon arriving there, they discovered a group of adult sperm whales, several whale calves, and an adult male bottlenose dolphin. Over the next eight days, the pair observed the dolphin with the whales six more times, socializing and even nuzzling and rubbing members of the group. At times, the sperm whales seemed merely to tolerate the dolphin’s affection, while at others, they reciprocated. "It really looked like they had accepted the dolphin for whatever reason," Wilson reports to ScienceNOW. "They were being very sociable." Exploring the oceans from one of these animals’ points of view would be an exciting (and eye-opening) experience. So what marine animal would you be if you had the chance to be any creature in the ocean? We posed this question to our Ocean Heroes finalists, and here’s what they had to say. See if you can match their responses to the pictures above (answers at the bottom of this post)!
Michele Hunter – Harbor seal Hardy Jones – Sperm whale Kristofor Lofgren – Mako shark Dave Rauschkolb – Porpoise Richard Steiner – Polar bear (I like the odds and the challenge they face) Donald Voss – Humpback whale Sara Brenes – Tiger Shark Calvineers – Blue whale Sam Harris – Tiger Shark James Hemphill – Hawksbill Sea Turtle (I have always been amazed at all the colors on its shell and how gracefully and peacefully it swims) Teakahla WhiteCloud – Dolphin Make sure to vote for your favorite Ocean Heroes, open from now until July 11th. Stay tuned to learn more about our finalists! Photo Credits (clockwise from top left): Sperm Whale: Oceana/Juan Cuentos, Tiger Shark: Albert Kok, Harbor Seal: NOAA, Hawksbill Turtle: NOAA/Caroline Rogers, Porpoise: NOAA, Tiger Shark: Austin Gallagher, Humpback Whale: NOAA, Dolphin: Oceana/Eduardo Sorenson, Mako Shark: NOAA, Polar bear: NOAA, Blue Whale: NOAA (middle) There’s no shortage of blame to go around when it comes to climate change. Individuals are responsible for poor consumer choices; we drive the wrong cars, use the wrong light bulbs, even wash our laundry on the wrong setting. Even the poor dairy cow shares the blame for having the nerve to burp methane emissions. But Bessie isn’t the only creature catching a bad rap. Sperm whales have been criticized for breathing. Yes, breathing. Apparently the carbon dioxide emitted from the roughly 210,000 sperm whales in the Southern Ocean is contributing to global warming, producing in the ballpark of 17 million tons of carbon a year. But new research suggests that we’re missing a very big factor in the calculation. It’s not just what the whales put out, but also what they take in.
Surface Ocean Currents In the Northern Hemisphere, warm air around the equator rises and flows north toward the pole. As the air moves away from the equator, the Coriolis effect deflects it toward the right. It cools and descends near 30 degrees North latitude. The descending air blows from the northeast to the southwest, back toward the equator (Ross, 1995). A similar wind pattern occurs in the Southern Hemisphere; these winds blow from the southeast toward the northwest and descend near 30 degrees South latitude. These prevailing winds, known as the trade winds, meet at the Intertropical Convergence Zone (also called the doldrums) between 5 degrees North and 5 degrees South latitude, where the winds are calm. The remaining air (air that does not descend at 30 degrees North or South latitude) continues toward the poles and is known as the westerly winds, or westerlies. The trade winds are so named because ships have historically taken advantage of them to aid their journeys between Europe and the Americas (Bowditch, 1995).
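The latitude dependence of the Coriolis deflection described above can be made concrete with the Coriolis parameter f = 2Ω sin(φ), which vanishes at the equator and is strongest at the poles. This is a standard textbook formula, not one given in the passage.

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def coriolis_parameter(lat_deg):
    """f = 2 * Omega * sin(latitude); the sign gives the deflection
    direction (rightward in the Northern Hemisphere, leftward in the
    Southern)."""
    return 2 * OMEGA * math.sin(math.radians(lat_deg))

for lat in (0, 30, 90):
    print(f"{lat:>3} deg: f = {coriolis_parameter(lat):.2e} 1/s")
```

At 30 degrees latitude, where the trade-wind circulation's descending air sits, f is already half its polar value.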
Quantum Diaries has an interesting introduction to the higgs. It makes it seem like the way that the higgs field gives mass to particles is via all of the interactions with virtual higgs particles. My question is, how can interaction with the field give rise to mass? It seems almost like Flip Tanedo is saying that, due to the interaction with the higgs field, the particle (electron, say) is experiencing a large number of changes in direction, and thus it takes a path much longer than the one that you measured. It seems to imply that, when going from A to B, the electron 'bounces' off of a number of virtual higgs particles and takes a much longer path than the straight line from A to B, travelling at light speed all the while. Is this a good way to look at the creation of 'mass' (i.e. a particle appears to move slower than light speed because it takes a path longer than the shortest one)? Or is there some other way to understand how the higgs mechanism imparts mass?
7 February 2006 Most cosmologists believe that the universe is dominated by “dark energy” — a mysterious form of energy that could explain why the universe is expanding and accelerating at the same time. Now, however, theoretical physicists have studied a new model of gravity that can, they claim, account for the acceleration of the universe without any need for dark energy. Their model relies instead on modifications to the way that gravity behaves at ultra-large cosmological distances (Phys. Rev. Lett. 96 041103).
Key words : Google, Jane Goodall, forests and the cloud 28 Dec, 2009 10:59 am Not long ago, the only people who could access and analyze satellite images of the earth were government officials, the military, well-equipped scientists and oil, gas and mining companies. Today, anyone with a computer and Internet connection can access Google Earth. Since its introduction in 2005, Google Earth has become a powerful tool for scientists, activists and ordinary citizens who want to better understand, monitor and communicate about the environment. On my way home from Copenhagen, I learned about these new developments from Lilian Pintea, who is the director of conservation science at the Jane Goodall Institute, which is best known for its pioneering research on chimpanzee behavior. We met when we missed a connection in Geneva, so we arranged to have dinner during the layover. You could say that Lilian, who is 38, has already lived through two democratic revolutions. A native of Moldova, he was studying ecology in Moscow when the Soviet Union collapsed in 1991. He subsequently came to the U.S. as a Fulbright scholar and earned a PhD in conservation biology from the University of Minnesota. As a specialist in geographic information systems (GIS) and remote sensing, he has watched technology that was once reserved for elites in the developed world spread to the rest of the world, including remote villages in the global south. “As a biologist, I was always frustrated that I was in the middle of a lake or forest and I would collect my data, and I didn’t know what was happening a few kilometers away,” Pintea says.
Now, satellite images reveal landscape patterns that simply aren’t visible from the ground—evidence of illegal logging or gradients in deforestation. “You can then look for political, social, economic and ecological factors that explain the pattern,” Pintea says. Lilian, who lives in Maryland and works in the northern Virginia suburbs of Washington, travels frequently to Tanzania and Uganda. “One problem which we often face in our project areas is the lack of capacity,” he told me. “Every trip to Africa, I do training…We want to empower local communities and governments to take charge and manage their lands.” In places where traditional land tenure systems are breaking down, the technology has helped settle boundary disputes. “Sometimes people don’t agree on where their village begins and ends,” he said. Not surprisingly, geospatial technologies are also used to better understand the relationship between chimpanzees and their habitats. If all goes well, the Google mapping tools announced in Copenhagen will enable communities to generate accurate and timely information about their forests. That’s crucial to a financing mechanism known as REDD (Reducing Emissions from Deforestation and Degradation), which is designed to prevent deforestation. Regular readers of this blog know that emissions from tropical deforestation account for about 17% of global warming pollutants, more than all of the world’s cars, trucks, trains, boats and planes. The government of Norway, a major backer of REDD, has given the Goodall Institute a $2.7 million grant to equip and train villagers in western Tanzania and their institutions to prepare for REDD.
Google programmers including Rebecca Moore, an evangelist for Google Earth Outreach, visited the region last fall to train Goodall Institute staff, village forest monitors, local government officials, university staff and others to gather data, take pictures and upload their findings to “the cloud”–meaning the Internet, where powerful software and data are stored. “It’s still a work in progress but we are already doing it,” Lilian told me. “They’re mapping the forests and monitoring the threats.” The number of scientists, NGOs and companies working on GIS and forestry issues is impressive: Students and professors at the University of Washington have created free software called ODK (Open Data Kit) that makes it easy to collect survey data and upload it from Android phones. Digital Globe, an imagery and information firm based in Longmont, Colorado, gathers more detailed satellite images than those available on Google. ESRI, a software firm based in Redlands, Ca., is the world leader in GIS, providing tools to build geospatial infrastructure. For its part, Google collaborated with Greg Asner of Carnegie Institution for Science, and Carlos Souza of Imazon to build its newest platform, and it got support from the Gordon and Betty Moore Foundation. In my job, I hear a lot of blah-blah-blah about the importance of public-private partnerships. Usually that means a nonprofit wants a business to write a check. Here you have businesses, NGOs and communities combining their brainpower, passion and knowledge to do vital work. That’s exciting. Originally published at Marc Gunther.com
<urn:uuid:da2c6b4e-7bb2-42ea-b7ac-df844778de28>
3.125
1,063
News Article
Science & Tech.
35.787467
959
Comet 32 Runtime The RETURN statement transfers program control to the statement immediately following the most recently invoked subroutine call. All subroutine calls in Internet Basic place a "return address" in a subroutine stack. The RETURN statement simply transfers program control to the address on the top of this stack. It also removes or "pops" the address from the top of the stack so that a subsequent RETURN statement will use the next address it finds on the top of the stack. Executing a RETURN statement with no return address in the subroutines stack causes a "GOSUB STACK UNDERFLOW" exception condition. Processing continues in the subroutine until the program encounters the RETURN statement. At this point, control is transferred back to the main section of the program (to the statement immediately after the GOSUB 1000 statement). The STOP statement is included in the main section of the program to prevent program flow from entering the subroutine without using the GOSUB statement.
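The return-address mechanism described above can be modeled in a few lines of Python (an illustrative sketch of the stack behavior, not Internet Basic itself): each GOSUB pushes a return address, each RETURN pops the most recent one, and a RETURN on an empty stack raises the underflow condition.

```python
class GosubStackUnderflow(Exception):
    """Models the "GOSUB STACK UNDERFLOW" exception condition."""

class SubroutineStack:
    def __init__(self):
        self._stack = []

    def gosub(self, return_address):
        # GOSUB pushes the address of the statement after the call.
        self._stack.append(return_address)

    def ret(self):
        # RETURN pops the most recent return address off the stack;
        # an empty stack means there is nowhere to return to.
        if not self._stack:
            raise GosubStackUnderflow()
        return self._stack.pop()

calls = SubroutineStack()
calls.gosub(110)    # outer GOSUB: resume at line 110 afterwards
calls.gosub(1010)   # nested GOSUB inside the first subroutine
print(calls.ret())  # -> 1010, back into the outer subroutine
print(calls.ret())  # -> 110, back to the main program
```

Nested calls unwind in last-in, first-out order, exactly as the description of the subroutine stack requires.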
Hundreds of Auroras Detected on Mars 13 Dec 2005 (Source: University of California at Berkeley) Auroras similar to Earth's Northern Lights appear to be common on Mars, according to physicists at the University of California, Berkeley, who have analyzed six years' worth of data from the Mars Global Surveyor. The discovery of hundreds of auroras over the past six years comes as a surprise, since Mars does not have the global magnetic field that on Earth is the source of the aurora borealis and the antipodal aurora australis. According to the physicists, the auroras on Mars aren't due to a planet-wide magnetic field, but instead are associated with patches of strong magnetic field in the crust, primarily in the southern hemisphere. And they probably aren't as colorful either, the researchers say: The energetic electrons that interact with molecules in the atmosphere to produce the glow probably generate only ultraviolet light - not the reds, greens and blues of Earth. "The fact that we see auroras as often as we do is amazing," said UC Berkeley physicist David A. Brain, the lead author of a paper on the discovery recently accepted by the journal Geophysical Research Letters. "The discovery of auroras on Mars teaches us something about how and why they happen elsewhere in the solar system, including on Jupiter, Saturn, Uranus and Neptune." Brain and Jasper S. Halekas, both assistant research physicists at UC Berkeley's Space Sciences Laboratory, along with their colleagues from UC Berkeley, the University of Michigan, NASA's Goddard Space Flight Center and the University of Toulouse in France, also reported their findings in a poster presented Friday, Dec. 9, at the American Geophysical Union meeting in San Francisco. Last year, the European spacecraft Mars Express first detected a flash of ultraviolet light on the night side of Mars and an international team of astronomers identified it as an auroral flash in the June 9, 2005, issue of Nature. 
Upon hearing of the discovery, UC Berkeley researchers turned to data from the Mars Global Surveyor to see if an on-board UC Berkeley instrument package - a magnetometer-electron reflectometer - had detected other evidence of auroras. The spacecraft has been orbiting Mars since September 1997 and since 1999 has been mapping from an altitude of 400 kilometers (250 miles) the Martian surface and Mars' magnetic fields. It sits in a polar orbit that keeps it always at 2 a.m. when on the night side of the planet. Within an hour of first delving into the data, Brain and Halekas discovered evidence of an auroral flash - a peak in the electron energy spectrum identical to the peaks seen in spectra of Earth's atmosphere during an aurora. Since then, they have reviewed more than 6 million recordings by the electron reflectometer and found amid the data some 13,000 signals with an electron peak indicative of an aurora. According to Brain, this may represent hundreds of nightside auroral events like the flash seen by the Mars Express. When the two physicists pinpointed the position of each observation, the auroras coincided precisely with the margins of the magnetized areas on the Martian surface. The same team, led by co-authors Mario H. Acuña of NASA's Goddard Space Flight Center and Robert Lin, UC Berkeley professor of physics and director of the Space Sciences Laboratory, has extensively mapped these surface magnetic fields using the magnetometer/reflectometer aboard the Mars Global Surveyor. Just as Earth's auroras occur where the magnetic field lines dive into the surface at the north and south poles, Mars' auroras occur at the borders of magnetized areas where the field lines arc vertically into the crust. Of the 13,000 auroral observations so far, the largest seem to coincide with increased solar wind activity. "The flash seen by Mars Express seems to be at the bright end of energies that are possible," Halekas said.
"Just as on Earth, space weather and solar storms tend to make the auroras brighter and stronger." Earth's auroras are caused when charged particles from the sun slam into the planet's protective magnetic field and, instead of penetrating to the ground, are diverted along field lines to the pole, where they funnel down and collide with atoms in the atmosphere to create an oval of light around each pole. Electrons are a big proportion of the charged particles, and auroral activity is associated with a physical process still not understood that accelerates electrons, producing a telltale peak in the spectrum of electron energies. The process on Mars is probably similar, Lin said, in that solar wind particles are funneled around to the night side of Mars where they interact with crustal field lines. The ultraviolet light is produced when the particles hit carbon dioxide molecules. "The observations suggest some acceleration process occurs like on Earth," he said. "Something has taken the electrons and given them a kick." What that "something" is remains a mystery, though Lin and his UC Berkeley colleagues lean towards a process called magnetic reconnection, where the magnetic field traveling with the solar wind particles breaks and reconnects with the crustal field. The reconnecting field lines could be what flings the particles to higher energies. The surface magnetic fields, Brain said, are produced by highly magnetized rock that occurs in patches up to 1,000 kilometers wide and 10 kilometers deep. These patches probably retain magnetism left from when Mars had a global field in a way similar to what occurs when a needle is stroked with a magnet, inducing magnetization that remains even after the magnet is withdrawn. When Mars' global field died out billions of years ago, the solar wind was able to strip the atmosphere away. Only the strong crustal fields are still around to protect portions of the surface. 
"We call them mini-magnetospheres, because they are strong enough to stand off the solar wind," Lin said, noting that the fields extend up to 1,300 kilometers above the surface. Nevertheless, the strongest Martian magnetic field is 50 times weaker than the field at the Earth's surface. It's hard to explain how these fields are able to funnel and accelerate the solar wind efficiently enough to generate an aurora, he said. Brain, Halekas, Lin and their colleagues hope to mine the Mars Global Surveyor data for more information on the auroras and perhaps join with the European team operating the Mars Express to get complementary data on the flashes that could solve the mystery of their origin. "Mars Global Surveyor was designed for a lifetime of 685 days, but it has been very valuable for more than six years now, and we are still getting great results," Lin observed. The work was supported by NASA. Coauthors with Brain, Halekas, Lin and Acu?a are Laura M. Peticolas, Janet G. Luhmann, David L. Mitchell and Greg T. Delory of UC Berkeley's Space Sciences Laboratory; Steve W. Bougher of the University of Michigan; and Henri R?me of the Centre d'Etude Spatiale des Rayonnements in Toulouse.
<urn:uuid:7afe1857-be38-48b1-8763-ea2b23ec4fad>
3.71875
1,438
News (Org.)
Science & Tech.
40.161343
961
Phenology: Changes in Ecological Lifecycles

By Zack Guido | The University of Arizona | September 12, 2008

Lilac flowers bloom with cues from the weather. Caribou give birth at the peak of plant abundance so that their newborns have plenty to eat. In the Southwest, as well as all other parts of the world, variations in the climate trigger life cycle events in plants and animals. Studying these events and their relation to climate is known as phenology. The information obtained is vital for understanding the impact climate change has on humans and ecosystems.

[Photo caption: A Gila woodpecker feeding on the flowers of the giant saguaro cactus. The timing of blooming may shift in a changing climate. Credit: ©Frank Leung, istockphoto.com]

Phenology includes the timing of flower blooms, agricultural crop stages, insect activity, and animal migration. All of these events are changing as a result of climate change and these changes impact humans. The date flowers bloom, for example, controls the timing of allergens and infectious diseases—impacting human health—and alters when tourists visit regions to enjoy wildflowers, which impacts economies. Variations in crop phases affect agriculture by influencing the timing of planting, harvesting, and pest activity. Quantitative assessments of the impact of phenological changes on humans in the Southwest are scant primarily because phenology is a relatively recent scientific endeavor in the Southwest. However, increasing concern about climate change has amplified efforts in the following areas:

- Documenting observed phenological changes
- Projecting phenological changes from climate change
- Establishing a national phenological network

Phenology in the Southwest is relatively young and there are only a few observational records more than 20 years old.
Nonetheless, records less than 20 years are sufficient to observe trends in phenological changes, and experts believe that changes in life cycle events in the Southwest will be similar to those documented in other parts of the world where longer records exist. Two of the more important and well-documented effects of climate change on phenology are changes in the date of flowering and food-chain disruptions. Changes in flower blooms Studies indicate an advance in the date that flowers bloom in the West. Important conclusions include the following: - Shrub specimens collected in the Sonoran Desert of the southwestern U.S. and northwestern Mexico and biological models suggest that the spring bloom of shrubs may have advanced by 20 to 41 days between 1894 and 2004 [1] - A study published in 2001 concluded that the average date of bloom for lilacs in the western U.S. advanced by 7.5 days between 1957 and 1994, while the average bloom date of honeysuckle advanced by 10 days between 1968 and 1994 [2] - A 20-year record of the timing of flower blooms for hundreds of plant species across 4,000 vertical feet in the Santa Catalina Mountains near Tucson, Arizona, suggests more than 15 percent of the surveyed species bloom at elevations as much as 1,000 feet higher than they did in the past [3] - The same 20-year record showed the average total number of species in bloom per year increased over the 20-year period by nearly three species per year at the highest elevations—this increase was associated with increasing summer temperatures [4] Food chain disruption Important life cycle events in plants and animals are often triggered by each other. When the timing of life cycle events changes in one species, it can disrupt symbiotic relationships and affect other species. For example, in the northeastern U.S., nectar-producing trees currently bloom 25 days earlier than in the past.
As a result, honey bees have switched their source of nectar from the tulip poplar tree to the black locust tree, impacting the pollination of tulip poplars and causing their numbers to crash [5]. In the Arctic, the peak in plant abundance and caribou births no longer coincide, causing a 400 percent jump in offspring mortality. Future phenological changes will be localized, depending on the specific plant and animal species and the magnitude of climate change. Some species may profit, while others suffer. In general, flowers will likely bloom earlier and food-chain disruptions will likely be more frequent. Several changes are likely in the Southwest: - Because the date and abundance of flower blooms are highly correlated with winter snowpack, projected declines in snowpack will decrease flower abundance and advance the date of flowering [6] - Global warming may have a disproportionate effect on montane plant communities. Some mountain species may not be able to respond to changes in temperature by migrating north or south. In addition, an upward shift in the altitudinal range of species to encounter cooler temperatures will decrease habitat area [2] - Earlier flower blooms could have substantial impacts on plant and animal communities in the Sonoran Desert, especially on shrubs and migratory hummingbirds [1] In addition, climate change will cause plant species to move in response to changes in temperature and precipitation. This may be most evident on mountains, where changes in elevation help create specific habitat zones within small areas. In the Santa Catalina Mountains near Tucson, Arizona, for example, the habitat of many species has expanded upslope, and to a lesser extent downslope. The USA National Phenology Network (NPN) is headquartered in Tucson, Arizona. Its mission is to facilitate collection and dissemination of phenological data from the United States.
NPN primarily supports scientific research concerning interactions among plants, animals, and the lower atmosphere, especially the long-term impacts of climate change. NPN encourages involvement in phenological research and provides opportunities for interested people to contribute to science. Scholars, students of all grades, and citizens record the timing of life cycle events in key plant and animal species and submit their observations on-line. In this manner, a detailed database is growing. Currently, 800 people in the U.S. participate in NPN. Among them, amateur scientists in the Southwest have provided some of the more valuable and longer observational data. - Bowers, J. E. 2007. Has climatic warming altered spring flowering date of Sonoran desert shrubs? The Southwestern Naturalist, 52(3):347-355. - Cayan, D. R., et al. 2001. Changes in the onset of spring in the western United States. Bulletin of the American Meteorological Society, 82(3):399-415. - Personal communication with Dave Bertelsen, August 4, 2008. - Crimmins, T. H., M. A. Crimmins, D. Bertelsen and J. Balmat. 2008. Relationships between alpha diversity of plant species in bloom and climatic variables across an elevation gradient. International Journal of Biometeorology, 52:353-366. - Personal communication with Jake Weltzin, July 21, 2008. - Inouye, D. W., M. A. Morales and G. J. Dodge. 2002. Variation in timing and abundance of flowering by Delphinium barbeyi Huth (Ranunculaceae): The roles of snowpack, frost, and La Niña, in the context of climate change. Oecologia, 130:543–550.
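The bloom-date advances reported above (for example, lilacs advancing 7.5 days between 1957 and 1994) come down to fitting a straight line to first-bloom observations against year. A minimal sketch of that calculation; the observations below are hypothetical, not from the studies cited:

```python
# Ordinary least-squares slope of first-bloom day-of-year vs. year.
# The data points here are made up for illustration.
years = [1990, 1995, 2000, 2005, 2010]
bloom_doy = [130, 128, 125, 123, 120]  # day of year of first bloom

n = len(years)
mean_x = sum(years) / n
mean_y = sum(bloom_doy) / n
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(years, bloom_doy)) / sum((x - mean_x) ** 2 for x in years)

# A negative slope means blooming is getting earlier.
print(f"trend: {slope:.2f} days/year ({slope * 10:.1f} days per decade)")
```

With these made-up numbers the fit gives -0.50 days/year, i.e. blooms advancing five days per decade; real phenology studies apply the same regression to decades of observer records.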
<urn:uuid:c4515ae2-0ac7-44ec-a9dc-42ec8a5adf76>
3.609375
1,489
Knowledge Article
Science & Tech.
42.405384
962
This tutorial shows how to send modifications of code in the right way: by using patches. The word developer is used here for someone having a KDE SVN account. We suppose that you have modified some code in KDE and that you are ready to share it. First a few important points: Now you have the modification as a source file. Sending the source file will not be helpful, as probably someone else has done other modifications to the original file in the meantime, so your modified file could not replace it. That is why patches exist. Patches list the modifications, the line numbers and some other useful information needed to put the changes back into the existing code. (This process is called "patching" or "applying a patch.") The main tool for creating patches is a tool called diff, which computes the difference between two files. This tool has a mode called unified diff, which KDE developers use. Unified diffs contain not just the differences between the files but also a few lines of context around each change. That allows a patch to be applied even if the line numbers have shifted in the meantime. The most simple patch is created between the modified file (here called source.cpp) and the non-modified version of the file (here called source.cpp.orig): diff -u -p source.cpp.orig source.cpp That lists the difference between the two files in the unified diff format (and with function name information where possible). However it only displays it on screen, which is of course not the goal. So you need to redirect the output: diff -u -p source.cpp.orig source.cpp > ~/patch.diff ~/patch.diff is here an example and you can create the file where you prefer, with the name that you prefer. (You will soon find out that it is probably not a good idea to create a patch where the source is.) Normally, though, you do not change just one file, and you do not keep the original version around to be able to make the difference later. But here too, there is a solution.
The program svn, which is used on the command line to interact with the SVN server, has a diff function too: svn diff. You can run it like this and it will give you the difference of the current directory and all sub-directories below it. Of course, here too, you want to redirect the output: svn diff > ~/patch.diff There are useful variants too (shown here without redirection). Note: even if svn can make the difference of another directory (svn diff mydirectory), it is not recommended for a patch that should be applied again. (The problem is that the person applying the patch will have to be more careful about how to apply it.) Note: for simple diffs, like those shown in the examples above, svn diff can be used offline, therefore without an active connection to the KDE SVN server. This is possible because svn keeps a copy of the original files locally. (This feature is part of the design of SVN.) By default, svn diff does not have a feature like the -p parameter of diff. But svn allows an external diff program to be called, so you can call diff: svn diff --diff-cmd diff --extensions "-u -p" The procedures described above work very well with text files, for example C++ source code. However they do not work with binary files, as diff is not made to handle them. And even if SVN can internally store binary differences, svn diff is not prepared to do anything similar yet, mainly because it currently uses the unified diff format only, which is not meant for binary data. Therefore, unfortunately, there is little choice but to attach binary files separately from the patch, in the same email. First, you need to make svn aware of files you have added: svn add path/to/new/file /path/to/another/new/file Then run svn diff as before. Note that if you do svn revert, for example, the files you created will NOT be deleted by svn - but svn will no longer care about them (so they won't show up when you do svn diff, for example).
You will have to rm them manually. (TODO: are there any other issues with adding new files if you don't have commit access?) Now you are ready to share the patch. If your patch fixes a bug from KDE Bugs, then the easiest way is to attach it there, see next section. The main way of sharing a patch is to email it to a mailing list. But be careful not to send big patches to a mailing list; a few tens of kilobytes is the limit. If you find that the patch is too big to send to a mailing list, the best option is to create a bug report in KDE Bugs and attach the patch there. Another possibility, however seldom used, is to post the patch on a public Web server (be it by HTTP or FTP) and to send an email to the mailing list, announcing that the patch is waiting there. Another variant is to ask on the mailing list which developer is ready to receive a big patch. (Try to give its size and ask if you should send it compressed, for example by bzip2.) A last variant, if you know exactly which developer will process the patch and you know (or suppose) that he currently has time, is to send the patch to that developer directly. (But here too, be careful if your patch is big. Some KDE developers still have analog modems.) In this section we assume that you have chosen to add your patch to an existing KDE bug or that you have created a bug report just for your patch. Even though this tutorial is more about sending patches to a mailing list, most of it also applies to adding a patch to KDE Bugs. You have two ways to do it: To send an email to a bug report, you can use an email address of the form 12345@bugs.kde.org, where 12345 is the bug number. Please be sure to attach your patch and not to have it inlined in your text. (If it is inlined, it will be corrupted by KDE Bugs, as HTML does not respect spaces.) Note: if you send an email to KDE Bugs, be careful to use as sender the same email address as your login email address in KDE Bugs.
Otherwise KDE Bugs will reject your email. Note: if you create a new bug report just for your patch, be aware that you cannot attach a patch directly while creating a new bug. However, as soon as the new bug is created, you can attach files one by one, and therefore also patches. Warning: sometimes your patch will be forgotten because the developers do not always closely monitor the bug database. In this case, try sending your patch by email as described below. If that also does not help, you can always talk to the developers on IRC. Assuming that you have chosen to send the patch to a mailing list, you might ask yourself: to which one? The best destination for patches is the corresponding developer mailing list. In case of doubt, you can send any patch for KDE to the kde-devel mailing list (however with an increased risk of missing the right developer). Of course, if you know exactly which developer will process the patch and you know (or suppose) that he currently has time, then you can send the patch to him directly. Now that you have the patch redirected into a file (for this example called patch.diff), you are ready to send it by email. But the first question: where? Once you have entered an email address, a good practice is to attach the patch file to your email before writing anything else, so you will not forget to attach it. A little note here: yes, in KDE (unlike for the Linux kernel, for example), we prefer to have patches sent as attachments. Now you are ready to write the rest of the email. Please think of a title that matches your patch. (Think of having to find it again in the archives in a few months or even years.) A good habit is to precede the title with [PATCH]. So for example a title could be [PATCH] Fix backup files. As for the body of the email, please state which file or directory your patch applies to.
For example for a file: The attached patch applies to the file koffice/kword/kwdoc.cpp or for a directory: The attached patch applies to the directory koffice/kword. This helps the developers to have an overview of which code has been modified. Also state which branch it is meant for, for example trunk. Then explain what your patch does. If it fixes a bug, then please give the bug number too. If the bug was not registered in KDE Bugs, then please describe instead the bug that is fixed. Similarly, if you know that the patch fixes a bug introduced in a precise SVN revision, please add the revision number. Also mention anything that could be useful to the developers, for example if you could not completely test the patch (and why), if you need help to finish fixing the code, or if it is a quick&dirty solution that should be fixed better in the long term. Now check the email again to see if you have not forgotten anything (especially to attach the patch) and you can send the email. One popular way of submitting patches is KDE's reviewboard. A big advantage over using the bugtracker of KDE is that patches are less likely to be forgotten here. Also, the reviewboard allows inline review of diffs and other gimmicks. First you need to check if the project you've created the patch for is actually using reviewboard. For this, go to the groups section and see if the project's group is listed there. If it is listed there, you should use the reviewboard, otherwise send the patch by other means. For sending a patch, you first need to register. Then simply click New Review Request and fill out the form. The most important parts of the form are: After you complete the form, a notification mail will be sent to the developers and they will answer you. Now you have to wait for a developer to react to your patch. (If you are not subscribed to the mailing lists where you have sent the patch, then monitor the mailing list archives for such a message.)
The reaction is normally one of the following: The first case is when nobody has answered. That perhaps means that you have chosen the wrong mailing list, that you have not explained clearly what the patch fixes, or that you have given a title that is not precise enough; the developer might then have overlooked the patch. Perhaps the developer that should have answered has no time currently. (That too happens, unfortunately.) The best is to work a little more on the patch, write a better description, and try a second time, perhaps on another mailing list or using KDE Bugs instead. If the developer tells you that your patch conflicts with changes that he is currently doing, there is probably not much you can do about it. Maybe you can discuss with him how you can effectively work together on this piece of code. If your patch was not accepted, you can work further on it. Probably you should discuss the problem on the mailing list to know in which direction you should work further. If a developer wants a few changes, then work on the code to make the changes according to the critique. If you need help because you do not understand how to make the needed change, then ask on the mailing list. If your patch was accepted, congratulations! :)
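The diff/patch round trip described at the beginning of this tutorial can be sketched end to end. The file names and contents here are placeholders:

```shell
# Work in a scratch directory with an "original" and a "modified" file.
mkdir -p /tmp/patch-demo && cd /tmp/patch-demo
printf 'int main() { return 0; }\n' > source.cpp.orig
printf 'int main() { return 1; }\n' > source.cpp

# Create a unified diff; note diff exits with status 1 when files differ.
diff -u -p source.cpp.orig source.cpp > patch.diff || true

# The recipient applies the patch to their pristine copy with patch(1).
cp source.cpp.orig theirs.cpp
patch theirs.cpp < patch.diff

# Both modified files should now be identical.
cmp source.cpp theirs.cpp && echo "patch applied cleanly"
```

The same patch.diff would normally be attached to an email or bug report, and the developer on the other end applies it against their checkout.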
<urn:uuid:b1579d04-7a6b-420c-9fe2-a0b676d91ec3>
3.0625
2,482
Customer Support
Software Dev.
64.655329
963
Direct execution is the most basic way to execute a statement. An application builds a character string containing a Transact-SQL statement and submits it for execution using the SQLExecDirect function. When the statement reaches the server, SQL Server compiles it into an execution plan and then immediately runs the execution plan. Direct execution is commonly used by applications that build and execute statements at run time and is the most efficient method for statements that will be executed a single time. Its drawback with many databases is that the SQL statement must be parsed and compiled each time it is executed, which adds overhead if the statement is executed multiple times. SQL Server significantly improves the performance of direct execution of commonly executed statements in multiuser environments and using SQLExecDirect with parameter markers for commonly executed SQL statements can approach the efficiency of prepared execution. When connected to an instance of SQL Server, the SQL Server Native Client ODBC driver uses sp_executesql to transmit the SQL statement or batch specified on SQLExecDirect. SQL Server has logic to quickly determine if an SQL statement or batch executed with sp_executesql matches the statement or batch that generated an execution plan that already exists in memory. If a match is made, SQL Server simply reuses the existing plan rather than compile a new plan. This means that commonly executed SQL statements executed with SQLExecDirect in a system with many users will benefit from many of the plan-reuse benefits that were only available to stored procedures in earlier versions of SQL Server. This benefit of reusing execution plans only works when several users are executing the same SQL statement or batch. 
Follow these coding conventions to increase the probability that the SQL statements executed by different clients are similar enough to reuse execution plans:
- Do not include data constants in the SQL statements; instead use parameter markers bound to program variables. For more information, see Using Statement Parameters.
- Use fully qualified object names. Execution plans are not reused if object names are not qualified.
- Have application connections use, as far as possible, a common set of connection and statement options. Execution plans generated for a connection with one set of options (such as ANSI_NULLS) are not reused for a connection having another set of options. The SQL Server Native Client ODBC driver and the SQL Server Native Client OLE DB provider both have the same default settings for these options.
If all statements executed with SQLExecDirect are coded using these conventions, SQL Server can reuse execution plans when the opportunity arises.
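Concretely, when an application executes a parameterized statement through SQLExecDirect, the driver wraps it in sp_executesql along roughly the following lines. The table, column, and parameter names here are illustrative, not taken from the text:

```sql
-- For the ODBC statement:  SELECT ... WHERE CustomerID = ?
-- the driver transmits approximately:
EXEC sp_executesql
    N'SELECT OrderID, OrderDate
        FROM dbo.Orders
       WHERE CustomerID = @P1',   -- statement text is identical on every call
    N'@P1 int',                   -- parameter declaration
    @P1 = 42;                     -- only the bound value varies

-- Embedding the literal instead (WHERE CustomerID = 42) produces a
-- distinct statement text for every value, which defeats plan reuse.
```

Because the statement text and parameter declaration are byte-for-byte identical across calls, SQL Server can match them against a cached plan regardless of which customer ID each client supplies.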
<urn:uuid:f82767ed-532d-4540-b80b-cef97bc9291c>
3.234375
512
Knowledge Article
Software Dev.
24.501157
964
Some animals, like earthworms, snails, and spiders, have nerves but not actual brains. A one-year-old child's brain weighs about 2 pounds (950 g), ten times the weight of a dog's brain. The primate cortex is so large that it has to be folded to fit inside the skull. That's why the surface of the brain is wrinkly. The nervous system runs on electricity, but the levels are low. Brain signals involve less than one-tenth the voltage of an ordinary flashlight battery. AT THE MUSEUM See how good you are at recognizing emotions. How fast can you catch a falling ruler? Measure your reaction time. Take the jellybean test to see how your sense of smell enhances taste. Use Braille to create a message for a friend. Explore your nerves by creating a life-sized drawing. Try these trippy experiments to fool your brain. Scientist Rob DeSalle answers kids' questions about the brain! Test your sight and memory with these brain games.
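The falling-ruler activity mentioned above works because free-fall distance fixes elapsed time: d = ½gt², so t = √(2d/g). A quick sketch of the conversion (the 20 cm reading is just an example):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def reaction_time(distance_cm: float) -> float:
    """Seconds elapsed while a ruler free-falls distance_cm centimeters."""
    d = distance_cm / 100.0  # convert to meters
    return math.sqrt(2.0 * d / G)

# Catching the ruler after it falls 20 cm corresponds to about 0.2 s.
print(f"{reaction_time(20):.3f} s")
```

So marks on the ruler can be relabeled directly in milliseconds: a catch at 20 cm means roughly a 200 ms reaction time.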
<urn:uuid:3bd155a1-25f8-411c-9c34-1d130efec406>
3.53125
219
Knowledge Article
Science & Tech.
69.394547
965
What's a Mangrove? And How Does It Work? If you've ever spent time by the sea in a tropical place, you've probably noticed distinctive trees that rise from a tangle of roots wriggling out of the mud. These are mangroves—shrub and tree species that live along shores, rivers, and estuaries in the tropics and subtropics. Mangroves are remarkably tough. Most live on muddy soil, but some also grow on sand, peat, and coral rock. They live in water up to 100 times saltier than most other plants can tolerate. They thrive despite twice-daily flooding by ocean tides; even if this water were fresh, the flooding alone would drown most trees. Growing where land and water meet, mangroves bear the brunt of ocean-borne storms and hurricanes. There are 80 described species of mangroves, 60 of which live exclusively on coasts between the high- and low-tide lines. Mangroves once covered three-quarters of the world's tropical coastlines, with Southeast Asia hosting the greatest diversity. Only 12 species live in the Americas. Mangroves range in size from small bushes to the 60-meter giants found in Ecuador. Within a given mangrove forest, different species occupy distinct niches. Those that can handle tidal soakings grow in the open sea, in sheltered bays, and on fringe islands. Trees adapted to drier, saltier soil can be found farther from the shoreline. Some mangroves flourish along riverbanks far inland, as long as the freshwater current is met by ocean tides. One Ingenious Plant How do mangroves survive under such hostile conditions? A remarkable set of evolutionary adaptations makes it possible. These amazing trees and shrubs: - cope with salt: Saltwater can kill plants, so mangroves must extract freshwater from the seawater that surrounds them. Many mangrove species survive by filtering out as much as 90 percent of the salt found in seawater as it enters their roots. Some species excrete salt through glands in their leaves. 
These leaves, which are covered with dried salt crystals, taste salty if you lick them. A third strategy used by some mangrove species is to concentrate salt in older leaves or bark. When the leaves drop or the bark sheds, the stored salt goes with them. - hoard fresh water: Like desert plants, mangroves store fresh water in thick succulent leaves. A waxy coating on the leaves of some mangrove species seals in water and minimizes evaporation. Small hairs on the leaves of other species deflect wind and sunlight, which reduces water loss through the tiny openings where gases enter and exit during photosynthesis. On some mangroves species, these tiny openings are below the leaf's surface, away from the drying wind and sun. - breathe in a variety of ways: Some mangroves grow pencil-like roots that stick up out of the dense, wet ground like snorkels. These breathing tubes, called pneumatophores, allow mangroves to cope with daily flooding by the tides. Pneumatophores take in oxygen from the air unless they're clogged or submerged for too long. Roots That Multitask Root systems that arch high over the water are a distinctive feature of many mangrove species. These aerial roots take several forms. Some are stilt roots that branch and loop off the trunk and lower branches. Others are wide, wavy plank roots that extend away from the trunk. Aerial roots broaden the base of the tree and, like flying buttresses on medieval cathedrals, stabilize the shallow root system in the soft, loose soil. In addition to providing structural support, aerial roots play an important part in providing oxygen for respiration. Oxygen enters a mangrove through lenticels, thousands of cell-sized breathing pores in the bark and roots. Lenticels close tightly during high tide, thus preventing mangroves from drowning. The mangroves' niche between land and sea has led to unique methods of reproduction. Seed pods germinate while on the tree, so they are ready to take root when they drop. 
If a seed falls in the water during high tide, it can float and take root once it finds solid ground. If a sprout falls during low tide, it can quickly establish itself in the soft soil of tidal mudflats before the next tide comes in. A vigorous seed may grow up to two feet (about 0.6 m) in its first year. Roots arch from the seedling to anchor it in the mud. Some tree species form long, spear-shaped stems and roots while still attached to the parent plant. After being nourished by the parent tree for one to three years, these sprouts may break off. Some take root nearby while others fall into the water and are carried away to distant shores. A World Traveler Botanists believe that mangroves originated in Southeast Asia, but ocean currents have since dispersed them to India, Africa, Australia, and the Americas. As Alfredo Quarto, the head of the Mangrove Action Project, puts it, “Over the millions of years since they've been in existence, mangroves have essentially set up shop around the world.” The fruits, seeds, and seedlings of all mangrove plants can float, and they have been known to bob along for more than a year before taking root. In buoyant seawater, a seedling lies flat and floats fast. But when it approaches fresher, brackish water—ideal conditions for mangroves—the seedling turns vertical so its roots point downward. After lodging in the mud, the seedling quickly sends additional roots into the soil. Within 10 years, as those roots spread and sprout, a single seedling can give rise to an entire thicket. It's not just trees but the land itself that increases. Mud collects around the tangled mangrove roots, and shallow mudflats build up. From the journey of a single seed a rich ecosystem may be born. More About This Resource... Our innovative Science Bulletins are an online and exhibition program that offers the public a window into the excitement of scientific discovery. 
This essay was published in May 2004 as part of the Mangroves: The Roots of the Sea Bio Feature. - It begins by explaining that these remarkably tough shrub and tree species can live in water up to 100 times saltier than most other plants can tolerate and thrive despite twice-daily flooding by ocean tides. - It then details the remarkable set of evolutionary adaptations that allow mangroves to survive under such hostile conditions. - The essay concludes with a note about how botanists believe that mangroves originated in Southeast Asia, but ocean currents have since dispersed them to India, Africa, Australia, and the Americas. Supplement a study of biology with a classroom activity drawn from this Science Bulletin essay. - Have students read the essay (either online or a printed copy). - Working individually or in small groups, have them investigate the Explore a Mangrove Forest interactive.
<urn:uuid:0f198350-c837-4dc6-bdfe-ac3b5bfad431>
4.0625
1,472
Knowledge Article
Science & Tech.
51.604172
966
Web Programming is the process of creating Internet Applications. Any application that uses the Internet in some way can be considered an Internet application. They can be classified into four common categories:- Web Applications - Applications based on the Client/Server architecture over the Internet. The Client/Server architecture is composed of a server, which is responsible for providing services to the other computer systems - Clients. Typically, there is a single server which handles requests from multiple clients and responds to these requests by providing the client with the appropriate information. In a Web Application, the server is the machine where the web page is stored and the clients employ web browsers to view the application. Such a server is called a Web Server. Web Services - Web Services are components that expose processing services from a server to other applications over the Internet. The services themselves are executed remotely on the server hosting them. Internet Enabled Applications - Any stand-alone application that uses the Internet falls into this category. Such an application uses the Internet for online Registration/Activation, Help, Updates, etc. Peer-to-Peer Applications - These are stand-alone applications that use the Internet to communicate with other users running their own instances of the application. They use a decentralized network architecture where there is no central server, rather individual nodes. Examples of such applications include the famous BitTorrent client. Note:- In this tutorial, we would be involved with Web Applications only. Working of the Web Applications As mentioned before, the client side of the Web Application includes a web browser, which interprets Hypertext Markup Language (HTML) transferred by the server and displays the user interface.
The server itself runs the web applications under Microsoft Internet Information Services (IIS), which is responsible for managing the application, passing requests from clients to the application and returning the application's responses to the client. The intricate communication involved in this process is done by using a standard set of rules (protocols) known as the Hypertext Transfer Protocol (HTTP). The responses generated by the Web application are made from the resources (executable code running on the server, Web Forms, HTML pages, images and other media files) found on the server. These responses are similar to traditional Web sites with HTML pages, except that they are dynamically generated. Consider a university's web site which releases the exam results. Do they take the pain of creating a different HTML page for each student's mark sheet? No, they use web applications to retrieve data (marks, subjects, student name, roll no, etc) from a database and dynamically generate the HTML output which is then sent to the client's browser. The executable portion of the Web application is responsible for overcoming the limitations of static web pages. It can be used to:- - Collect information from the user and store the information on the server (in a database). - Perform tasks for the user such as placing an order for a product, performing complex calculations, or retrieving information from a database. - Identify a user and present a customized user interface.
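The university-results example above can be pictured as a single dynamically generated page. The following ASPX sketch is hypothetical (the page name, query-string field, and hard-coded marks are illustrative; a real application would look the marks up in a database):

```aspx
<%-- Results.aspx - hypothetical sketch of a dynamically generated page --%>
<%@ Page Language="C#" %>
<html>
<body>
  <h1>Exam Results</h1>
  <%-- The roll number arrives with the client's HTTP request... --%>
  <p>Roll No: <%= Server.HtmlEncode(Request.QueryString["roll"]) %></p>
  <%-- ...and in a real application the marks would be fetched from a
       database keyed on it; hard-coded here for brevity. --%>
  <p>Total Marks: <%= 87 %></p>
</body>
</html>
```

One page template thus serves every student: the HTML that reaches each browser differs only in the data merged into it on the server.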
It forms a part of the Microsoft .NET suite of products and is composed of:- - Visual Studio .NET Web Development Tools - These are Graphical User Interface (GUI) based tools to facilitate easy designing of Web pages using What You See Is What You Get (WYSIWYG) editors, project management and deployment tools. - The System.Web namespaces - These form a part of the .NET Framework Base class libraries and include the programming classes that deal with Web specific items such as HTTP requests and responses, browsers and e-mail. - Server and HTML Controls - The user interface components such as Text Box, label, Button, ListBox, etc. that are used to gather information from and provide to users. - Microsoft ADO.NET database classes and tools - Database access is one of the key components of modern Web applications. These tools provide methods to access and use Microsoft SQL Server and ODBC databases. - Microsoft Application Center Test (ACT) - Testing environment for Web applications. Why Choose ASP .NET? The following are the advantages that ASP .NET has over other platforms:- - Faster execution - Executable portions of Web applications are compiled to facilitate faster performance. - On the fly updates of deployed Web applications thus preventing the need to restart the server. - The amount of code to be written is greatly reduced because of the access to .NET Framework base class libraries which includes classes and methods to perform common operations. - Language independent - Developers have the choice to write codes in the friendly Visual Basic programming language or the type safe C# language. Other third party .NET compliant languages can also be used. - Automatic state management for controls on a web page (server controls) makes the controls behave more like the Windows controls. - New controls can be created and existing controls can be extended. - Built in security through the Windows server or through other authentication/authorization methods. 
- Integration with ADO .NET to provide database access and database design tools from within Visual Studio .NET.
- Full support for Extensible Markup Language (XML) and Cascading Style Sheets (CSS).
- Automatic intelligent caching of frequently requested Web pages, localizing content for specific languages and cultures, and detecting browser capabilities.

Previous Tutorial - The .NET Framework & CLR : Basic Introduction

Edited by turbopowerdmaxsteel, 06 April 2007 - 09:35 AM.
Plants flower faster than climate change models predict

Scientific models are failing to accurately predict the impact of global warming on plants, says a new report. Researchers found in long-term studies that some are flowering up to eight times faster than models anticipate. The authors say that poor study design and a lack of investment in experiments partly account for the difference. They suggest that spring flowering and leafing will continue to advance at the rate of 5 to 6 days per year for every degree Celsius of warming. The results are published in the journal Nature.

For more than 20 years, scientists have been carrying out experiments to mimic the impacts of rising temperatures on the first leafing and flowering of plant species around the world.

"The bottom line is that the impacts might be bigger than we have believed until now" - This Rutishauser, Oeschger Centre for Climate Change Research

Researchers had assumed that plants would respond in essentially the same way to experimental warming with lamps and open-top chambers as they would to changes in temperatures in the real world. Very little had been done to test the assumption until this study led by Dr Elizabeth Wolkovich, who is now at the University of British Columbia in Vancouver. With her colleagues she studied the timing of the flowering and leafing of plants in observational studies and warming experiments spanning four continents and 1,634 plant species.

According to Dr Wolkovich, the results were a surprise. "What we found is that the experiments don't line up with the long-term data, and in fact they greatly underestimate how much plants change their leafing and flowering with warming," she said.
"So for models based on experimental data, then we would expect that plants are leafing four times faster and flowering eight times faster in the long-term historical record than what we're using in some of the models."

'Consistent message'

Observational data have been gathered by scientific bodies for many years. In the UK, the systematic recording of flowering times dates back to 1875, when the Royal Meteorological Society established a national network of observers. Since then, data has also been recorded by full-time biologists and part-time enthusiasts, and in recent years there have been mass-participation projects such as BBC Springwatch.

This new research suggests that these observations of flowering and leafing carried out in many different parts of the world over the past thirty years are remarkably similar, according to Dr Wolkovich. "In terms of long-term observations, the records are very coherent and very consistent, and they suggest for every degree Celsius of warming we get we are going to get a five- to six-day change in how plants leaf and flower."

She argues that the difficulties in mimicking the impacts of nature in an artificial setting are much greater than many scientists estimate. The team found that in some cases the use of warming chambers to artificially raise temperatures can sometimes have the opposite effect. "In the real world, we don't just see changes in temperature - we see changes in precipitation and cloud patterns and other factors - so certainly when you think about replicating changes in clouds, we are very, very far away from being able to do that. I guess we will never get to perfectly match nature, but I am hopeful as scientists we can do much, much better, given funding resources." The team found that the greater the investment in the design and monitoring of experiments, the more accurate the result.
"We have a very consistent message from the long-term historical records about how plants are changing, but we need to think more critically about how we fund and invest in and really design experiments," said Dr Wolkovich. "We do need them in the future, they are the best way going forward to project how species are changing, but right now what we're doing isn't working as well as I think it could."

Other researchers were equally surprised by the results. Dr This Rutishauser is at the Oeschger Centre for Climate Change Research at the University of Bern in Switzerland. He says that in light of this work scientists will have to rethink the impacts of global warming. "The bottom line is that the impacts might be bigger than we have believed until now. That's going to provoke a lot of work to probably revise modelling results for estimations of what's going to happen in the future for food production especially."

Dr Wolkovich agrees that if the models are so significantly underestimating the real-world observations, there could also be impacts on water the world over. "If a whole plant community starts growing a week earlier than we expect according to these experiments, it's going to take up a lot more water over the growing season, and if you add to that many years of the model projections, you are going to see big changes in the water supply."

She appeals to people to get involved in citizen science projects and help gather data on flowering and leafing, especially in remote areas. The National Phenology Network in the US logged its millionth observation this week, and similar programmes are underway in the UK, Sweden, Switzerland and the Netherlands, and a pan-European database is under development. "We have very few monitoring networks. We need many, many people out there observing this because it is changing faster and across more habitats than we are currently measuring - we need more help!"
If you are sorting content into an order, one of the simplest techniques that exists is the bubble sort technique. In essence you start at one end of the list, move one by one to the other end of the list, and if you ever reach a situation where two items are out of order, you swap them. This is one of the simplest sort techniques that exists, and it is taught in any basic programming course. Let's say you have an array of Grades(5). You want to sort them so that the highest grade is at the beginning of the list, and the lowest grade is at the end of the list. Note that this is NOT REAL CODE. This is an example of the concept, that you can apply to any language. So you would fill Grades(5) with the values. Then you would say

- for ctr = 1 to 4
.for ctr2 = ctr + 1 to 5
..if Grades(ctr) < Grades(ctr2) then
...Temp = Grades(ctr)
...Grades(ctr) = Grades(ctr2)
...Grades(ctr2) = Temp

So in essence you have the outer loop stepping through each item but the very last one. The inner loop steps through every untried item from wherever you are in the outer loop, going forward. The two are compared and if the higher number is not "on top", they are swapped.

Let's say your array is 90 70 80 100 60. On the first time through the loop, you begin with 90 (value 1) and compare it with the others, in order. Is 90 < 70? No. Nothing happens. Is 90 < 80? No. Is 90 < 100? Yes. The 100 takes spot 1, and the 90 takes spot 4. Is 100 < 60? No. Now we have a guarantee that spot #1 is definitely the largest number in the entire array.

Now we work on the second largest number. We move on to spot 2. Is 70 < 80? YES, they swap spots. Is 80 < 90? YES, they swap spots. Is 90 < 60? No, so the 90 stays in spot 2. And so it goes, until the entire array is settled in proper order. You can of course arrange the array in ascending or descending order just by switching the < to a > !
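The pseudocode above translates almost line for line into runnable Python (the names mirror the Grades example; Python lists are 0-indexed, so the loop bounds shift by one):

```python
def sort_descending(grades):
    """Sort grades highest-first using the tutorial's compare-and-swap
    scheme: compare each position with every later position."""
    n = len(grades)
    for ctr in range(n - 1):              # for ctr = 1 to 4
        for ctr2 in range(ctr + 1, n):    # for ctr2 = ctr + 1 to 5
            if grades[ctr] < grades[ctr2]:
                # out of order: swap (no Temp variable needed in Python)
                grades[ctr], grades[ctr2] = grades[ctr2], grades[ctr]
    return grades

print(sort_descending([90, 70, 80, 100, 60]))  # [100, 90, 80, 70, 60]
```

As the text says, flipping the < to a > sorts the list in ascending order instead.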
Two long straight wires cross each other at right angles; the horizontal wire carries 2I (capital i) amperes and the vertical wire carries I amperes. Assume A, B, C and D are each at the same distance L from both the vertical and horizontal wires. Calculate the B field, in terms of I, L and other fundamental constants, at each of the points A, B, C and D due to the two wires. (For some reason I am having trouble uploading the figure. If you need the figure I can probably try to describe it. Just comment.) So the top left of the square is "A"; bottom left is "B"; bottom right is "C" and top right is "D". On the horizontal, there should be an arrow pointing to the right which has a value of 2I, and on the vertical axis there should be an arrow pointing upward that has a value of I. L would be the distance from the vertical and horizontal wires. (So in each quadrant, the horizontal and vertical lines of the partial square would have a label "L".)
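Not a substitute for the symbolic answer the question asks for, but here is a small numerical check of the superposition (plain Python; it assumes the geometry described above, with the horizontal wire along x carrying 2I in +x, the vertical wire along y carrying I in +y, and +z taken as out of the page):

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space (T*m/A)

def bz_infinite_wire(current, t_hat, point):
    """z-component of B from an infinite straight wire through the origin.
    current: signed current (A); t_hat: (x, y) unit vector along the current;
    point: (x, y) location where the field is evaluated."""
    # perpendicular displacement from the wire to the point
    along = point[0] * t_hat[0] + point[1] * t_hat[1]
    px, py = point[0] - along * t_hat[0], point[1] - along * t_hat[1]
    d = math.hypot(px, py)
    # B = (mu0 I / 2 pi d) (t_hat x r_hat); keep only the z-component
    cross_z = t_hat[0] * (py / d) - t_hat[1] * (px / d)
    return MU0 * current / (2 * math.pi * d) * cross_z

I, L = 1.0, 1.0
unit = MU0 * I / (2 * math.pi * L)  # natural scale mu0*I/(2*pi*L)
points = {"A": (-L, L), "B": (-L, -L), "C": (L, -L), "D": (L, L)}

results = {}
for name, p in points.items():
    bz = (bz_infinite_wire(2 * I, (1, 0), p)   # horizontal wire, 2I in +x
          + bz_infinite_wire(I, (0, 1), p))    # vertical wire, I in +y
    results[name] = round(bz / unit, 6)
print(results)
```

With this geometry the two contributions add at A and C and oppose at B and D: the sketch gives +3, -1, -3 and +1 in units of mu0*I/(2*pi*L) for A, B, C and D respectively, i.e. magnitude 3*mu0*I/(2*pi*L) out of the page at A and into the page at C, and magnitude mu0*I/(2*pi*L) into the page at B and out of the page at D.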
A 25-Year-Old Prediction of Water Scarcity in the Southwest Holds True, Study Finds

In 1986, environmental journalist Marc Reisner published Cadillac Desert: The American West and Its Disappearing Water, a landmark book surveying water use in the American Southwest. Having interviewed hundreds of people about the Southwest and learned the history of the region's water infrastructure, Reisner concluded that more water was being pulled out of the West's waterways than could be naturally replenished. He said the Southwest was due to run short on water, soon.

Nearly 25 years later, a group of researchers has put Reisner's assertion to the test, checking to see if there is any scientific truth behind it. Armed with modern data from across the Southwest, the group, led by ecologist John Sabo from Arizona State University, found that many of Reisner's claims were legitimate, and still hold true today. "We asked, is it really as bad as [Reisner] said it is in the book, and are we still where we were in 1986?" explains Sabo, who assembled a group of experts to assess water, dams, fish, soil and crops across the Southwest using modern techniques. "Now we know the answer to both those questions: yes." The findings from the new study have been published online in the Proceedings of the National Academy of Sciences (PNAS).

Revisiting the Cadillac Desert

Water levels in Lake Mead, pictured here in November 2008, reached a record low in October 2010. Credit: flickr/wenzday01.

In his book, Reisner claimed that humans were consuming most of the water from southwestern streams and rivers. The new review of watersheds shows that water users in the Southwest are already drawing on 76 percent of the available surface water to support more than 50 million people living in the region. Moreover, says Sabo, water usage could climb to as high as 86 percent if the population doubles in the Southwest.
Like all cities, those in the West and Southwest import water for people to use in their homes and businesses, as well as for industrial purposes. But this only accounts for a small portion of the total. Farming in arid states like California, Arizona, Colorado, and New Mexico requires a remarkably large amount of water compared to farms in eastern states. According to the study’s findings, it is this water-intensive agriculture that makes Los Angeles, Las Vegas and Phoenix the three largest water-consuming cities in the country. After surveying the great western water reservoirs, such as Lake Mead and Lake Powell, Reisner also claimed that the build-up of sediments would eventually ruin the lakes for water storage and perhaps even electricity generation. The new study, however, found that sediment isn’t accumulating fast enough in these reservoirs to fill them completely in the next 100 years, although sediment has already reduced the ability of these lakes to deliver water to cities and farms. Sabo and his collaborators also found that, true to Reisner’s original conclusions, the buildup of salt in the soil is particularly damaging to crops in the Southwest. Though salt accumulation is a potential threat to agriculture in many parts of the world, the study found that the Southwest is more vulnerable than other parts of the U.S. Looking at a broader scale, the study estimates that farming revenue losses are ten times higher in the American West compared to the East — on the order of about $2.5 billion each year. The already limited water resources in the Southwest have been further stressed over the past decade, during which a persistent drought has affected the region. In October 2010, Lake Mead in Nevada reached a record low level and is currently only about eight feet higher than the designated level of a critical water shortage. 
With the region so prone to drought, and potentially even more dry weather in the coming decades as the climate continues to change, Sabo says it's important to find a way to cut back on the amount of water the Southwest is using. Many computer models project further drying in the Southwest as a consequence of global climate change. "The message is that this is a regional problem," says Sabo, "and that leaders from six U.S. states need to work together to make sure we keep more water running in the rivers."

Reclaiming Sustainable Water in the Southwest

The original lesson from Cadillac Desert is familiar to southwesterners, who have heard warnings about water scarcity for years. These new findings lend further credibility to the idea that the region's population is living beyond its water limits. But according to Peter Gleick, president of the Pacific Institute and a long-time researcher of global water resources, identifying the problems in the Southwest hasn't led to the implementation of enough solutions over the past two decades. In a companion paper published in the same issue of PNAS, Gleick writes: "Psychologically and socially, it is hard for millions of people who love this region to admit that it is fundamentally dry and that the rules for building, living, and working there must be different from those in the wet regions where most of these same people were born and raised."

Gleick says his research points to four strategies the Southwest should adopt to preserve enough water for the millions of people already living there. "Firstly, we have to fundamentally rethink what we mean by water supplies," he says. In the 20th century, dams and aqueducts were built to harness water in the rivers and transport it to cities. "But that isn't going to be enough anymore. We're already at the limits," he says. Gleick says it's possible to tap water resources that are typically ignored, which could involve using treated wastewater or saltwater.
Seasonal drought outlook issued by the Climate Prediction Center, showing the development of drought conditions in much of the Southwest this winter. Credit: NOAA/Climate Prediction Center.

Next, he says, the Southwest has to rethink its water demands, and water should be used more efficiently. "This doesn't mean we will have to take shorter showers or brown the land around us," he explains. "I mean doing everything we want to do, but with less water." The trick, he says, is improving water efficiency everywhere it is affordable to do so, including improving irrigation systems for agriculture and switching to low-flow faucets in homes.

Gleick says another important strategy is developing a more coordinated approach to managing water in the Southwest. Currently, different cities, counties and states control their own water with unique bylaws and regulations. "But in the Southwest watersheds define the landscape," he says, "so, we should manage water at the regional level." He recommends that different states and cities should work together as they develop policies for water usage.

Finally, Gleick says, "The important point is that we can't look at the situation without climate change." Because further climate change is inevitable, he says, so are the impacts on water availability around the world. "And it seems these changes will especially compound the problems in the Southwest rather than make them better." Already there are some authorities in the Southwest that are including climate change predictions in their water planning. For example, California's Department of Water Resources released a five-year plan that calls for climate change considerations to be incorporated into all future water planning. "The good news is, we are beginning to make these changes, in all areas," says Gleick. People are rethinking water supply and demand, are cooperating more and are thinking about how climate change will affect water in the Southwest, he says.
On the other hand, Gleick warns, the water problems in the Southwest are bound to get worse in the coming years rather than better. “I’m not sure we’re moving fast enough to avoid more serious disruptions in the future.”
Definitions for ultra-gaseous matter

The Standard Electrical Dictionary

Gas so rarefied that its molecules do not collide, or do so only very rarely. Experiments of very striking nature have been devised by Crookes and others to illustrate the peculiar phenomena that this matter presents. The general lines of this work are similar to the methods used in Geissler tube experiments, except that the vacua used are very much higher. When the vacuum is increased so that but one-millionth of the original gas is left, the radiant state is reached. The molecules in their kinetic movements beat back and forth in straight lines without colliding, or with very rare collisions. Their motions can be guided and rendered visible by electrification.

A tube or small glass bulb with platinum electrodes sealed in it is exhausted to the requisite degree and is hermetically sealed by melting the glass. The electrodes are connected to the terminals of an induction coil or other source of high-tension electrification. The molecules which come in contact with a negatively electrified pole are repelled from it in directions normal to its surface. They produce different phosphorescent or luminous effects in their mutual collisions. Thus if they are made to impinge upon glass, diamond or ruby, intense phosphorescence is produced. A piece of platinum subjected to molecular bombardment is brought to white heat. A movable body can be made to move under their effects. Two streams proceeding from one negative pole repel each other. The stream of molecules can be drawn out of their course by a magnet. The experiments are all done on a small scale in tubes and bulbs, resembling to a certain extent Geissler tubes.

[Transcriber's note: These effects are caused by plasma - ionized gas and electrons.]

"ultra-gaseous matter." Definitions.net. STANDS4 LLC, 2013. Web. 19 May 2013. <http://www.definitions.net/definition/ultra-gaseous matter>.
Researchers at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) have been instrumental in the preparation of a report by the World Meteorological Organization (WMO) regarding the development of the ozone layer in the stratosphere. Based on estimates, by about the mid 21st century the ozone layer will have the same thickness as it had in the early eighties. The latest evaluations of space-based measurements acquired by the DLR Remote Sensing Technology Institute, combined with model computations from the DLR Institute of Atmospheric Physics, support the statement that 'the regeneration of the ozone layer continues'.

"Measurements show that the ozone hole above Antarctica in 2012 is one of the smallest in recent years," reports Martin Dameris from the Institute of Atmospheric Physics. Both the area of expansion and the reduction in ozone concentrations are small this year in comparison to the values observed in past years. "This is a clear indication that the ozone layer is staging a good recovery," states Dameris.

Based on climate chemistry models

So far, climate chemistry model computations performed by the Institute of Atmospheric Physics have been in line with observations. The models demonstrate that, if this trend continues, the ozone hole will close up and the ozone layer will regenerate itself. These forecasts are based upon computational models that simulate the physical, dynamic and chemical processes in the atmosphere. The Institute of Atmospheric Physics collaborated on the production of these climate chemistry models. To investigate the ozone layer, long-term simulations, starting, for example, in 1960 and extending beyond the simulation date, were conducted at the DLR Institute. Computational results for the past are compared against observational data, in part to assess the quality of the results from the model.
It is only on the basis of well-evaluated models that it is possible to produce reliable estimates of future developments, such as that of the ozone layer. To understand atmospheric processes, atmospheric researchers use data from the DLR Remote Sensing Data Center. The scientists at this Institute are primarily involved in the provision of data products derived from satellite measurements. These satellite data products are compared against other independent data to achieve the highest possible level of precision.

The reduction in chlorofluorocarbon consumption is having a considerable and positive effect

Since the early eighties, the ozone hole has been observed at the start of the Antarctic spring - from mid-September to mid-October. It is the consequence of high chlorine levels in the stratosphere, and is caused by the emission of chlorofluorocarbons (CFCs). The drastic reduction of CFC levels in the atmosphere has had a positive impact on the ozone layer. The production and use of CFCs was regulated by the Montreal Protocol in 1987 and in subsequent agreements; since the mid-nineties, the use of CFCs has been almost totally banned. As a result, a decline in the chlorine content of the stratosphere has been observed since the beginning of this century.

Due to meteorological factors, such as the temperature-dependent nature of the ozone-depleting chemical reactions, the ozone layer does not regenerate steadily. This means that the ozone hole shows year-to-year variations, fluctuating between large and small ozone losses, but nonetheless exhibiting a positive trend towards higher ozone levels, and therefore to the restoration of normal levels. The observations carried out this year support this overall positive trend.
Climate Change Resources

Alaska Perspectives on Earth and Climate
This collection of lesson plans and student activities from Teacher's Domain compares and contrasts the traditional knowledge of native people and ongoing scientific research and shows how the two can complement each other in looking for solutions to climate change.

Alliance for Climate Education
The Alliance for Climate Education (ACE) is a national leader in high school climate education, providing a free, award-winning assembly on climate science and solutions. After the assembly, students are invited to start an Action Team to launch carbon-reducing projects at school. ACE supports teams through free, in-person trainings for students and teachers to help build project management and leadership skills.

"Antarctica Melting" is a Centers for Ocean Sciences Education Excellence (COSEE) Networked Ocean World four-act story. Each act is accompanied by a slide show and a classroom activity. The four acts include "A changing continent" narrated by Dr. Oscar Schofield, "A small world after all" narrated by Dr. Debbie Steinberg, "An Adelie exit" narrated by Dr. Bill Fraser and "A robotic armada" narrated by Dr. Oscar Schofield.

Bering Sea Ecosystem Collection
The Bering Sea Ecosystem Collection from PolarTREC is a body of educational resources focused on understanding the impacts of climate change and dynamic sea ice over the eastern Bering Sea ecosystem. The collection includes individual activities, lesson plans, videos and presentations that will help to educate the next generation about this complex ecosystem.

Beyond Penguins and Polar Bears
"Beyond Penguins and Polar Bears" is an online magazine for K-5 teachers, integrating science, literacy and the Polar Regions. Lesson plans provided align with National Science Education Standards while exploring the Arctic and Antarctica. Beyond the lesson plans, this online magazine broadcasts free webinars and podcasts and provides electronic books for grades K-5.
Climate Change: The Threat to Life and a New Energy Future
A companion piece to the exhibit at the American Museum of Natural History in New York, this website contains accurate information on the history and science behind climate change, as well as solutions to help combat its effects. The website also includes a climate change blog and resources for both educators and kids.

Climate Change and Water: Perspectives from the Forest Service
Climate Change and Water: Perspectives from the Forest Service is a summary of a forthcoming report by the Forest Service and U.S. Department of Agriculture which will detail the likely impacts of climate change on the Nation's forested watersheds and highlight the importance of managing forests to provide clean, abundant water.

Climate Change Wildlife and Wildlands: A Toolkit for Formal and Informal Educators
The U.S. Environmental Protection Agency, in partnership with six other federal agencies, developed this kit to aid educators in teaching how climate change is affecting our nation's wildlife and public lands, and how everyone can become "climate stewards." The kit features case studies of 11 eco-regions in the United States, highlighting regional impacts to habitats and wildlife, and information on what kids can do to help. It also contains classroom activities, video, links and other materials. The eco-regions can be explored online and all of the kit materials are available for download from the website.

Climate Kids, a NASA website aimed at students in grades 4-6, is a multimedia-rich companion to NASA's acclaimed Global Climate Change site. This kid-friendly guide de-mystifies one of the most important science topics of our time using an interactive Climate Time Machine, a section on Green Careers, educational games and more.

The National Oceanic and Atmospheric Administration's Ocean Service Education program offers this page of climate change resources, including fact sheets, lesson plans, case studies and links.
The site also provides information on the Climate Change Educator Conferences with archived videos. The site was developed in partnership with the National Science Teachers Association.

Cool School Challenge
Designed for grades 7-12, the Cool School Challenge is an online toolkit that engages students and teachers in practical strategies to reduce carbon dioxide and other greenhouse gas emissions schoolwide. Through improved energy efficiency, reduced consumption, increased recycling and changes in transportation behaviors, participants learn how simple actions, taken together, can create a climate of change.

Cool the Earth
Cool The Earth is a program that educates K-8 students and their families about climate change and inspires them to take simple actions to reduce carbon emissions. The five components of the program include a kick-off assembly; action coupons that reward students for energy-saving actions; Action of the Month, a school-wide energy-saving activity; assembling an action team of parents and/or teachers; and measuring success by tallying all of the action coupons that students turn in.

Earth Gauge® is a free environmental information service for broadcast meteorologists based on the 3-5 day forecast. The service is designed to make it easy to talk about the links between weather and the environment with simple facts and viewer action tips. The Climate Resource Library includes tips, fact sheets and news stories regarding climate change that are science-based and appropriate for use in the classroom.

Earth: The Operators' Manual
Earth: The Operators' Manual (ETOM) is a new PBS climate change program hosted by Richard Alley. The program presents an objective assessment of climate change as it takes viewers around the globe to investigate sustainable energy projects. The ETOM website for educators streams clips from each episode for use in the classroom.
ETOM provides teacher tips, hands-on activities, an annotated script and a glossary to accompany each clip. The website also lists external resources in multiple formats including DVDs, books and useful links.

Ecological Impacts of Climate Change
This booklet is based on Ecological Impacts of Climate Change (2009), a report by an independent panel of experts convened by the National Research Council. It explains general themes about the ecological consequences of climate change and identifies examples of ecological changes across the U.S. The booklet can be downloaded as a PDF and printed.

Encyclopedia of Earth: Climate Change
The Climate, Adaptation, Mitigation, E-Learning (CAMEL) project from the National Council for Science and the Environment assists the climate change section of the Encyclopedia of Earth website. CAMEL encourages educators to submit resources that are then featured on the Encyclopedia of Earth Climate Change website for public use. Resources include images, articles, videos, data sets, presentations, classroom projects and lectures. The featured resource on the website is the Climate Literacy & Energy Awareness Network, which is a digital library of reviewed and annotated online resources relating to key climate and energy concepts.

FOCUS: Forests, Oceans, Climate and Us
FOCUS is a nationwide campaign in partnership with the Forest Service, the National Oceanic and Atmospheric Administration (NOAA) and the Wyland Foundation, which uses art and science to make kids aware of the shared relationship between the health of each ecosystem and the health of the planet. The FOCUS program features mural painting events in communities across the nation.

Forest Service Resources on Climate Change
The Forest Service has created a webpage of climate change resources as part of their Conservation Education efforts.
The resources include information the Forest Service has compiled on the effects climate change is having and will have on wildlife, wildlands, forests and other natural resources.

Global Climate Change Education
NASA's Global Climate Change Education initiative seeks to improve climate literacy by improving teaching and learning about climate change, increasing the use of NASA Earth observation data system models to investigate and analyze climate change topics and increasing the number of students prepared for employment in fields relevant to climate change. Visit the website for more information on grants and educator resources.

Greenhouse Gases, Climate Change, and Energy
This brochure, created by the Energy Information Administration, breaks down the science behind greenhouse gas emissions and their effect on climate change. The brochure can be downloaded as a PDF and printed.

Journey North enables students in thousands of schools to track the seasons on a real-time basis. Students monitor migration patterns, plant budding, seasonal changes in sunlight, temperature patterns and other natural events. They share their local observations with classmates across North America and analyze current and long-term data from other classrooms and professional scientists. As they do so, participants are better prepared to recognize indicators of climate change and consider its implications.

Kid's Crossing: Living in the Greenhouse
Operated by the University Corporation for Atmospheric Research, Living in the Greenhouse provides a wealth of information about the global climate. Students can explore how Earth's cycles affect climate, the greenhouse effect and greenhouse gases, ancient climate changes and climate events and news.
NCSE-NASA Interdisciplinary Climate Change Education The NCSE-NASA Interdisciplinary Climate Change Education Team is developing a curricular package on climate change based on a University of California Davis course taught by Professor Arnold Bloom. The curriculum includes modules that cover a wide range of topics relevant to climate change. Data produced by NASA is used to create data-driven modules focusing on ice core and recent climate change observations. Other modules include exercises examining climate change impacts on the Colorado River water supply, exploring seasonality from the perspective of satellite maps and introducing remote sensing metrics. NOAA Climate Services: Education NOAA Climate Services provides information and data desigend to help citizens understand climate science. The education section of the website provides teaching resources, professional development and multimedia that assist classroom teachers in understanding and teaching about climate. NOVA Online: Warnings from the Ice Explore how Antarctica's ice has preserved the past - from Chernobyl to the Little Ice Age - going back hundreds of thousands of years, and then see how the world's coastlines would recede if some or all of this ice were to melt. This site for kids also includes a guide and resources for educators. Oceans Effect on Climate and Weather: Global Circulation Patterns This brief lesson plan explores ocean circulation patterns and the effect oceans have on climate. Learning outcomes include explaining how the oceans might influence and affect local weather and climate; describing the cause of hurricanes and frequency of hurricanes; explaining how changes in ocean temperatures influence weather patterns; and listing the major variables that affect the transfer of energy throughout the ocean. Available for free via the National Science Teachers Association online bookstore. Correlates with three Earth Science National Learning Standards. 
Plant for the Planet: Billion Tree Campaign Created by the United Nations Environment Programme, Plant for the Planet encourages people, communities, organizations, business and industry, civil society and governments to plant trees and enter their tree planting pledges on this web site. The objective is to plant at least one billion trees worldwide each year. Science Education Resource Center: Climate Change and Global Warming This Science Education Resource Center (SERC) Site Guide offers a general collection of climate change resources for educators while highlighting relevant resources from projects within websites hosted by SERC. Resources are arranged by categories, including websites and data sets, teaching activities, visualizations, courses, workshops and upcoming opportunities for educators. Take Aim at Climate Change This environmentally-minded music video, featuring the artists Rhythm, Rhyme, Results, Tommy Boots and Jené, imparts climate change information with a beat, making sure to empower its viewers at the end with simple ways they can make a difference. The video was developed with the support of NASA, the National Science Foundation and Passport to Knowledge. Young Voices on Climate Change Young Voices on Climate Change is a film series featuring young people who are making a difference by shrinking the carbon footprint of their homes, schools and communities. Watch the inspiring videos online.
The OSV Bold supports a variety of monitoring and educational tasks. The ship carries high-tech instruments to collect data from the water column, sediments, and even marine life. The Bold's onboard equipment includes underwater video, side-scan sonar, and sampling instruments such as corers, dredges, and trawls. Onboard laboratories allow scientists to process, analyze, and store samples while they are out at sea. The sturdy A-frame on the back deck of the Bold helps the scientists deploy the equipment for sampling and monitoring. A bottom grab does exactly what it sounds like! This piece of equipment catches muddy sediment, and it can collect down to about two feet into the bottom. The grab is lowered to the bottom by a cable, and water is able to flow through it as it is lowered down. When it hits the bottom, it releases a catch that allows the two doors to close, capturing the mud. Scientists use this grabbing technique to measure the concentrations of pollutants in the mud or to look at the small marine invertebrates, such as worms (polychaetes), crustaceans (such as amphipods), or mollusks (such as small clams), that may live in the surface of the sediments. This type of sample is incredibly important. Scientists can tell if the study area is a healthy environment or polluted depending on the types of species of organisms they find in the mud grabs. Next to the onboard dry lab, there is a computer room. In here, scientists can use remote control equipment to steer the side-scan tow fish, tell the CTD how deep to go, and watch underwater video of the area they are studying! A CTD is the primary tool for understanding the physical properties of sea water that are essential for supporting marine life. C stands for "Conductivity," T stands for "Temperature," and D stands for "Depth".
A CTD gives scientists an accurate and comprehensive charting of the distribution and change in water temperature, salinity, and density for the water column they are studying. All of these are important for understanding how healthy an area of water is for supporting marine life. How does a CTD work? The CTD is made up of a set of small high-tech probes attached to the large metal rosette water sampler. The rosette is lowered on a cable down to the depths that the scientists want to evaluate, sometimes all the way to the seafloor. While the CTD is still underwater, it reports electronic messages through a cable back to the onboard computer lab. While the CTD is gathering data underwater, computers on the ship are constantly reading that data and creating charts and line graphs. This helps the scientists understand right away the changes in the water column as the CTD goes deeper and deeper. A typical CTD drop, or hydro-cast as the scientists like to call it, can take 5 to 15 minutes depending on how deep the scientists want to go. For the work that EPA does, generally within depths of 300 feet, gathering a complete set of CTD data can take less than 20 minutes. True or False? Salinity and conductivity both refer to the amount of dissolved salt in a body of water. Up in the dry lab on board the ship, scientists can look at organisms under microscopes. These organisms can be collected from the mud or water, and by looking at the species, the scientists can tell if the environment is healthy or polluted, or even being taken over by invasive species that don't naturally belong there. Believe it or not! Some oceanic organisms like pollution, and their presence in a mud sample can tell a scientist a lot about that underwater environment. The otter trawl is a specialized net for catching fish on the bottom of the ocean in sandy, silty seabeds. Contrary to the name, it is not used for collecting otters!
When scientists are trying to determine the health of an ocean-bottom environment, it is sometimes helpful to collect real fish for the study. If the ocean bottom is too muddy, or has too many rocks or boulders, the otter trawl doesn't work very well. When it is slowly dragged along the bottom (at about 2 knots, or 2 mph), the scientists do not want it to get snagged on a boulder that could tear it! In 2007, this type of trawl helped scientists on the OSV Bold check on some close-to-shore habitats for winter flounder in Rhode Island Sound. The population of winter flounder has decreased dramatically off the coast of Rhode Island in the past 25 years. To try to better understand why this has happened, the scientists wanted to identify the most important nursery zones for flounder. The Bold helped scientists collect data from offshore adult flounder to compare with younger, juvenile flounder still living in the near-shore nursery habitats. Using a trace-chemical "fingerprint" technique, the scientists could tell which nursery zone the adult flounder had come from. Once EPA and the state of Rhode Island have a better idea where most of the flounder are coming from, everyone can work to better protect those important habitats. A rocking chair dredge is like the trawl, as it is slowly pulled behind the boat. This type of dredge is used to collect or sample shellfish, such as clams or scallops, in the bottom sediments; it works well in sandy bottom environments. The dredge rocks up and down in the sediment, collecting the shellfish, which are contained in its mesh bag. While EPA scientists usually collect water samples within 300 feet deep, this equipment has the ability to go down thousands of meters. How many feet are in 1 meter? Read about the latest dive mission in Puerto Rico! Every day, I commute to my EPA office in downtown New York. However, twice a year, I'm assigned to work on EPA's Ocean Survey Vessel BOLD.
I am currently on assignment in Puerto Rico to monitor coral reefs. Coral Condition Survey Continues: We spent the first eight days of BOLD operations deploying dive teams to 60 locations spread across the entire southern coast of Puerto Rico to collect data on the corals. Side-scan sonar is a type of sonar system that is used to understand what lies at the bottom of the sea floor. On the Bold, the side-scan sonar "tow fish" can create an image of the sea floor so that the scientists can understand the hills, valleys, reefs and debris that are in the study area. This tool is used for mapping the seabed for a wide variety of purposes, including creation of nautical charts and detection and identification of underwater objects and bathymetric features. Bathymetry is the study of underwater depth. Check out this map! Purple areas are the deepest; yellow shows areas of land above the surface of the water. Side-scan sonar can be used to conduct surveys for maritime archaeology; along with seafloor samples, the sonar can help scientists understand the different materials and textures of the seabed. The pictures that the sonar tow fish sends back to the ship oftentimes show debris items left from human activities. Check out our gallery of side scan images! What else can side scan sonar help with? On the Bold, the side-scan sonar is called the "tow fish" because it is pulled behind the ship underwater. Slowly and carefully, the ship's crew guides the ship in a set path to gather an image of the ocean floor beneath. To make the image, the sonar "tow fish" sends out a fan-shaped series of pulses (sound frequencies) down toward the seafloor. The intensity of the acoustic reflections from the seafloor of this fan-shaped beam is recorded in a series of cross-track slices. When stitched together by a computer, these slices can form an image of the sea bottom within the swath (coverage width) of the beam.
One of the inventors of side-scan sonar was German scientist Dr. Julius Hagemann, who worked for the US Navy Mine Defense Laboratory in Florida after WWII. His work is documented in US Patent 4,197,591, which remained classified by the US Navy until it was issued in 1980. In 1963, Dr. Harold Edgerton, Edward Curley, and John Yules used side-scan sonar to find the sunken Vineyard Lightship in Buzzards Bay, Massachusetts. A team led by Martin Klein developed the first successful towed, commercial (non-military) side-scan sonar system from 1963 to 1966. In 1967, Klein's sonar helped find King Henry VIII's flagship Mary Rose. That same year, side-scan sonar also helped the archaeologist George Bass find a 2,000-year-old ship off the coast of Turkey. In 1968, Klein founded Klein Associates, Inc., the company that designed the side-scan sonar that is used on the Bold. Why do YOU think this area is called the WET LAB? This room is right on the deck where scientists on the Bold deploy the sampling equipment. This way, a mud grab can be put directly into the wet lab to be studied. Sometimes the scientists hose down the sediment to see what organisms are in it, and it can get a little messy! The wet lab is equipped with a sieve station (sieving tables and trays), a wash station with hot and cold freshwater and salt water, an ice machine for sample preservation, a refrigerator, and an electronic navigation chart that displays the ship's location and navigation information.
Number of stars
The best estimated count of the total number of stars in a galaxy. It is primarily an estimate because of the inherent difficulty of ascertaining the exact total body of a galaxy, and because portions of a galaxy may be obscured by the galaxy itself or by intervening celestial objects between the observer and the observed galaxy. The Milky Way is particularly difficult, as we can only see our local portions of the galaxy, with part of it not visible or measurable.
Properties: the estimated number of stars in the celestial body/region; the estimate error range (+/- value).
Included Types: this type doesn't have any included types.
Science & Technology - Posted by Eric Gershon-Yale on Thursday, October 11, 2012 12:50 Diamond planet is twice the size of Earth YALE (US) — A rocky planet twice the size of Earth that is orbiting a nearby star appears to be made largely out of diamond, new research suggests. The planet—called 55 Cancri e—has a radius twice Earth's, and a mass eight times greater, making it a "super-Earth." It is one of five planets orbiting a sun-like star, 55 Cancri, that is located 40 light years from Earth yet still visible to the naked eye in the constellation of Cancer. It orbits at hyper speed—its year lasts just 18 hours, in contrast to Earth's 365 days. It is also blazingly hot, with a temperature of about 3,900 degrees Fahrenheit, a far cry from a habitable world, researchers say. The star map shows the planet-hosting star 55 Cancri in the constellation of Cancer. The star is visible to the naked eye, though better through binoculars. (Credit: Nikku Madhusudhan; created using Sky Map Online) "This is our first glimpse of a rocky world with a fundamentally different chemistry from Earth," says lead researcher Nikku Madhusudhan, a postdoctoral researcher in physics and astronomy at Yale University. "The surface of this planet is likely covered in graphite and diamond rather than water and granite." The paper reporting the findings has been accepted for publication in the journal Astrophysical Journal Letters. The planet was first observed transiting its star last year, allowing astronomers to measure its radius for the first time. The new information, combined with the most recent estimate of its mass, allowed Madhusudhan and colleagues to infer its chemical composition using models of its interior and by computing all possible combinations of elements and compounds that would yield those specific characteristics.
Astronomers had previously reported that the host star has more carbon than oxygen, and the new study confirms that substantial amounts of carbon and silicon carbide, and a negligible amount of water ice, were available during the planet’s formation. Astronomers also thought 55 Cancri e contained a substantial amount of super-heated water, based on the assumption that its chemical makeup was similar to Earth’s, Madhusudhan says. But the new research suggests the planet has no water at all, and appears to be composed primarily of carbon (as graphite and diamond), iron, silicon carbide, and, possibly, some silicates. The study estimates that at least a third of the planet’s mass—the equivalent of about three Earth masses—could be diamond. “By contrast, Earth’s interior is rich in oxygen, but extremely poor in carbon—less than a part in thousand by mass,” says co-author and Yale geophysicist Kanani Lee. The identification of a carbon-rich super-Earth means that distant rocky planets can no longer be assumed to have chemical constituents, interiors, atmospheres, or biologies similar to those of Earth, Madhusudhan says. The discovery also opens new avenues for the study of geochemistry and geophysical processes in Earth-sized alien planets. A carbon-rich composition could influence the planet’s thermal evolution and plate tectonics, for example, with implications for volcanism, seismic activity, and mountain formation. “Stars are simple—given a star’s mass and age, you know its basic structure and history,” says David Spergel, professor of astronomy and chair of astrophysical sciences at Princeton University, who is not a co-author of the study. “Planets are much more complex. 
This ‘diamond-rich super-Earth’ is likely just one example of the rich sets of discoveries that await us as we begin to explore planets around nearby stars.” In 2011, Madhusudhan led the first discovery of a carbon-rich atmosphere in a distant gas giant planet, opening the possibility of long-theorized carbon-rich rocky planets (or “diamond planets”). The new research represents the first time that astronomers have identified a likely diamond planet around a sun-like star and specified its chemical make-up. Follow-up observations of the planet’s atmosphere and additional estimates of the stellar composition would strengthen the findings about the planet’s chemical composition. The authors of the paper are Madhusudhan, Lee, and Olivier Mousis, a planetary scientist at the Institut de Recherche en Astrophysique et Planétologie in Toulouse, France. The research was supported by the Yale Center for Astronomy and Astrophysics (YCAA) in the Yale Department of Physics. Source: Yale University
As a developer, it is sometimes necessary to have multiple JDKs installed on the same development machine, in order to test the latest JDK 8 or to use a proprietary JDK like IBM's. But even if several are installed, when you type java, the first java found on the PATH will be used. On Linux, a tool exists for this: Alternatives. It creates a symbolic link that points to the JAVA_HOME you specified. When you want to change JDKs, Alternatives updates this link. While Alternatives is very useful for managing JDKs on a server, it does not answer the needs of a development workstation. To fit those needs, one way is to manage the JAVA_HOME and PATH environment variables manually. To simplify the switching, there are some scripts that may help, like this one. However, these scripts help with the switch but do not automate it: you always have to type "setjdk" when you change projects. If you look at the Ruby world, there is a very handy tool: rbenv. It lets you manage different versions of Ruby and choose, per project, which version you want to use. Once you set it up, you only have to type ruby, and rbenv will use the selected version automatically and transparently. After some research, I found no equivalent tool for Java, so I took the initiative to create one: jenv. For now, the installation is manual. You simply clone the GitHub project into the $HOME/.jenv directory. Next, you activate it by inserting the activation commands into your shell init script (.bash_profile, .zshrc). Once installed, you set up one or more JDKs using the jenv add command, which takes one parameter: the path to a valid JAVA_HOME directory. With the jenv versions command, you can list the recognized JDKs. The asterisk * denotes the JDK currently used. With no specific configuration, the "system" JDK is used, meaning the first one found on the PATH. It is possible to change the default version with jenv global <alias name>.
You can hit "TAB" after jenv global to get autocompletion for the alias name. jenv versions now shows that oracle64-18.104.22.168 will be used. To configure a JDK per directory/project, you can use jenv local <alias name>. To make this work, jenv stores the Java version to be used in a .java-version file in the directory. If no .java-version file is found, jenv looks in the parent directory, and so on. If none is found, jenv falls back to the default JDK (set up by jenv global). Jenv also has plugins to use the correct version of Java with other tools:
* Maven
* Ant
* Gradle
* Groovy
* Scala
* SBT
* Play
In addition to JDK management, jenv can also configure JVM parameters per project. These are exported via the right environment variables to be used by Maven (MAVEN_OPTS), Gradle (GRADLE_OPTS) or Ant (ANT_OPTS).
Jenv is young, and will be improved
Jenv is very recent, and provides only the simple functionalities that I needed for the moment. It is not yet complete, and has to be improved. The project is on GitHub, so feel free to fork and contribute ideas (by pull requests or issues).
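The per-directory lookup described above (nearest .java-version wins, otherwise fall back to the global default) can be sketched in plain shell. This is a simplified illustration, not jenv's actual implementation; the find_java_version function name and the $HOME/.jenv/version fallback path are assumptions based on the behavior described in this post.

```shell
# Sketch of jenv-style version resolution: walk up from a starting
# directory until a .java-version file is found; otherwise fall back
# to a global default (assumed here to live in $HOME/.jenv/version).
find_java_version() {
  dir="$1"
  while [ -n "$dir" ] && [ "$dir" != "/" ]; do
    if [ -f "$dir/.java-version" ]; then
      cat "$dir/.java-version"    # nearest project setting wins
      return 0
    fi
    dir=$(dirname "$dir")         # keep climbing toward /
  done
  # No project-level file anywhere above: use the global default.
  if [ -f "$HOME/.jenv/version" ]; then
    cat "$HOME/.jenv/version"
  else
    echo "system"                 # last resort: whatever java is on PATH
  fi
}
```

Running find_java_version from any subdirectory of a project whose root contains a .java-version file prints the version recorded in that file, which matches the lookup order described above.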
This chapter evaluates the suitability of models (in particular coupled atmosphere-ocean general circulation models) for use in climate change projection and in detection and attribution studies. We concentrate on the variables and time-scales that are important for this task. Models are evaluated against observations and differences between models are explored using information from a number of systematic model intercomparisons. Even if a model is assessed as performing credibly when simulating the present climate, this does not necessarily guarantee that the response to a perturbation remains credible. Therefore, we also assess the performance of the models in simulating the climate over the 20th century and for selected palaeoclimates. Incremental improvements in the performance of coupled models have occurred since the IPCC WGI Second Assessment Report (IPCC, 1996) (hereafter SAR), resulting from advances in the modelling of the atmosphere, ocean, sea ice and land surface as well as improvements in the coupling of these components.
- Coupled models can provide credible simulations of both the present annual mean climate and the climatological seasonal cycle over broad continental scales for most variables of interest for climate change. Clouds and humidity remain sources of significant uncertainty but there have been incremental improvements in simulations of these quantities.
- Confidence in model projections is increased by the improved performance of several models that do not use flux adjustment. These models now maintain stable, multi-century simulations of surface climate that are considered to be of sufficient quality to allow their use for climate change projections.
- There is no systematic difference between flux adjusted and non-flux adjusted models in the simulation of internal climate variability. This supports the use of both types of model in detection and attribution of climate change.
- Confidence in the ability of models to project future climates is increased by the ability of several models to reproduce the warming trend in 20th century surface air temperature when driven by radiative forcing due to increasing greenhouse gases and sulphate aerosols. However, only idealised scenarios of only sulphate aerosols have been used.
- Some modelling studies suggest that inclusion of additional forcings such as solar variability and volcanic aerosols may improve some aspects of the simulated climate variability of the 20th century.
- Confidence in simulating future climates has been enhanced following a systematic evaluation of models under a limited number of past climates.
- The performance of coupled models in simulating the El Niño-Southern Oscillation (ENSO) has improved; however, the region of maximum sea surface temperature variability associated with El Niño events is displaced westward and its strength is generally underestimated. When suitably initialised with an ocean data assimilation system, some coupled models have had a degree of success in predicting El Niño events.
- Other phenomena previously not well simulated in coupled models are now handled reasonably well, including monsoons and the North Atlantic Oscillation.
- Some palaeoclimate modelling studies, and some land-surface experiments (including deforestation, desertification and land cover change), have revealed the importance of vegetation feedbacks at sub-continental scales. Whether or not vegetation changes are important for future climate projections should be investigated.
In general, they provide credible simulations of climate, at least down to sub-continental scales and over temporal scales from seasonal to decadal. The varying sets of strengths and weaknesses that models display lead us to conclude that no single model can be considered "best" and it is important to utilise results from a range of coupled models. We consider coupled models, as a class, to be suitable tools to provide useful projections of future climates.
A String is a list of characters. String constants in Haskell are values of type String.

This library provides support for strict state threads, as described in the PLDI '94 paper by John Launchbury and Simon Peyton Jones, Lazy Functional State Threads.

Mutable references in the (strict) ST monad.

Mutable references in the (strict) ST monad (re-export of Data.STRef).

The String type and associated operations.

Utilities for primitive marshalling of C strings. The marshalling converts each Haskell character, representing a Unicode code point, to one or more bytes in a manner that, by default, is determined by the current locale. As a consequence, no guarantees can be made about the relative length of a Haskell string and its corresponding C string, and therefore all the marshalling routines include memory allocation. The translation between Unicode and the encoding of the current locale may be lossy. This module is part of the Foreign Function Interface (FFI) and will usually be imported via the module Foreign.

The module Foreign.Storable provides most elementary support for marshalling and is part of the language-independent portion of the Foreign Function Interface (FFI), and will normally be imported via the Foreign module.

The lazy state-transformer monad. A computation of type ST s a transforms an internal state indexed by s, and returns a value of type a. The s parameter is either
* an uninstantiated type variable (inside invocations of runST), or
* RealWorld (inside invocations of stToIO).
It serves to keep the internal states of different invocations of runST separate from each other and from invocations of stToIO. The >>= and >> operations are not strict in the state. For example,
> runST (writeSTRef _|_ v >>= readSTRef _|_ >> return 2) = 2

The strict state-transformer monad. A computation of type ST s a transforms an internal state indexed by s, and returns a value of type a.
The s parameter is either
* an uninstantiated type variable (inside invocations of runST), or
* RealWorld (inside invocations of Control.Monad.ST.stToIO).
It serves to keep the internal states of different invocations of runST separate from each other and from invocations of Control.Monad.ST.stToIO. The >>= and >> operations are strict in the state (though not in values stored in the state). For example,
> runST (writeSTRef _|_ v >>= f) = _|_

An abstract name for an object that supports equality and hashing. Stable names have the following property:
* If sn1 :: StableName and sn2 :: StableName and sn1 == sn2 then sn1 and sn2 were created by calls to makeStableName on the same object.
The reverse is not necessarily true: if two stable names are not equal, then the objects they name may still be equal. Note in particular that makeStableName may return a different StableName after an object is evaluated. Stable Names are similar to Stable Pointers (Foreign.StablePtr), but differ in the following ways:
* There is no freeStableName operation, unlike Foreign.StablePtrs. Stable names are reclaimed by the runtime system when they are no longer needed.
* There is no deRefStableName operation. You can't get back from a stable name to the original Haskell object. The reason for this is that the existence of a stable name for an object does not guarantee the existence of the object itself; it can still be garbage collected.

A stable pointer is a reference to a Haskell expression that is guaranteed not to be affected by garbage collection, i.e., it will neither be deallocated nor will the value of the stable pointer itself change during garbage collection (ordinary references may be relocated during garbage collection). Consequently, stable pointers can be passed to foreign code, which can treat it as an opaque reference to a Haskell value. A value of type StablePtr a is a stable pointer to a Haskell expression of type a.

The current thread's stack exceeded its limit.
Since an exception has been raised, the thread's stack will certainly be below its limit again, but the programmer should take remedial action immediately.

A handle managing output to the Haskell program's standard error channel.

A handle managing input from the Haskell program's standard input channel.

A handle managing output to the Haskell program's standard output channel.

Increases the precedence context by one.

The member functions of this class facilitate writing values of primitive types to raw memory (which may have been allocated with the above mentioned routines) and reading values from blocks of raw memory. The class, furthermore, includes support for computing the storage requirements and alignment restrictions of storable types. Memory addresses are represented as values of type Ptr a, for some a which is an instance of class Storable. The type argument to Ptr helps provide some valuable type safety in FFI code (you can't mix pointers of different types without an explicit cast), while helping the Haskell type system figure out which marshalling method is needed for a given pointer. All marshalling between Haskell and a foreign language ultimately boils down to translating Haskell data structures into the binary representation of a corresponding data structure of the foreign language and vice versa. To code this marshalling in Haskell, it is necessary to manipulate primitive data types stored in unstructured memory blocks. The class Storable facilitates this manipulation on all types for which it is instantiated, which are the standard basic types of Haskell, the fixed size Int types (Int8, Int16, Int32, Int64), the fixed size Word types (Word8, Word16, Word32, Word64), StablePtr, all types from Foreign.C.Types, as well as Ptr. Minimal complete definition: sizeOf, alignment, one of peek, peekElemOff and peekByteOff, and one of poke, pokeElemOff and pokeByteOff.
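As a small illustration of the strict ST monad and STRef API documented above, the following sketch sums a list with a mutable accumulator; runST's phantom s type guarantees the mutable reference cannot escape the computation. The function name sumST is our own choice for the example; everything else is the standard Control.Monad.ST and Data.STRef API.

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, readSTRef, modifySTRef')

-- Sum a list using in-place mutation that is invisible from outside:
-- the result of runST is an ordinary pure value.
sumST :: Num a => [a] -> a
sumST xs = runST $ do
  acc <- newSTRef 0                        -- fresh mutable cell
  mapM_ (\x -> modifySTRef' acc (+ x)) xs  -- strict in-place updates
  readSTRef acc                            -- final value leaves the ST world
```

Because the strict ST monad (and the strict modifySTRef') forces each update, the accumulator never builds up a chain of unevaluated thunks, unlike a naive lazy fold.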
NASA scientists have spotted the longest extraterrestrial river system ever - on Saturn's moon Titan - and it appears to be a miniature version of Earth's Nile river. The river valley on Titan stretches more than 400 kilometres from its "headwaters" to a large sea, a NASA Jet Propulsion Laboratory statement said. In comparison, the Nile river on Earth stretches about 6,700 kilometres. Images by NASA's Cassini mission have revealed for the first time a river system this vast and in such high resolution anywhere other than Earth. Titan is known to have vast seas - the only other body in the solar system, apart from Earth, to possess a cycle of liquids on its surface. However, the thick Titan atmosphere is a frigid one, meaning liquid water couldn't possibly flow. The liquids on Titan are therefore composed of hydrocarbons such as methane and ethane, Discovery News reported.
<urn:uuid:5d50269e-8258-4950-9c3c-42dd4c2454ce>
3.78125
186
News Article
Science & Tech.
38.173624
981
Environment - current issues: increased solar ultraviolet radiation resulting from the Antarctic ozone hole in recent years, reducing marine primary productivity (phytoplankton) by as much as 15% and damaging the DNA of some fish; illegal, unreported, and unregulated fishing in recent years, especially the landing of an estimated five to six times more Patagonian toothfish than the regulated fishery, which is likely to affect the sustainability of the stock; large amount of incidental mortality of seabirds resulting from long-line fishing for toothfish note: the now-protected fur seal population is making a strong comeback after severe overexploitation in the 18th and 19th centuries Definition: This entry lists the most pressing and important environmental problems. The following terms and abbreviations are used throughout the entry: Acidification - the lowering of soil and water pH due to acid precipitation and deposition usually through precipitation; this process disrupts ecosystem nutrient flows and may kill freshwater fish and plants dependent on more neutral or alkaline conditions (see acid rain). Acid rain - characterized as containing harmful levels of sulfur dioxide or nitrogen oxide; acid rain is damaging and potentially deadly to the earth's fragile ecosystems; acidity is measured using the pH scale where 7 is neutral, values greater than 7 are considered alkaline, and values below 5.6 are considered acid precipitation; note - a pH of 2.4 (the acidity of vinegar) has been measured in rainfall in New England. Aerosol - a collection of airborne particles dispersed in a gas, smoke, or fog. Afforestation - converting a bare or agricultural space by planting trees and plants; reforestation involves replanting trees on areas that have been cut or destroyed by fire. Asbestos - a naturally occurring soft fibrous mineral commonly used in fireproofing materials and considered to be highly carcinogenic in particulate form. 
Biodiversity - also biological diversity; the relative number of species, diverse in form and function, at the genetic, organism, community, and ecosystem level; loss of biodiversity reduces an ecosystem's ability to recover from natural or man-induced disruption. Bio-indicators - a plant or animal species whose presence, abundance, and health reveal the general condition of its habitat. Biomass - the total weight or volume of living matter in a given area or volume. Carbon cycle - the term used to describe the exchange of carbon (in various forms, e.g., as carbon dioxide) between the atmosphere, ocean, terrestrial biosphere, and geological deposits. Catchments - assemblages used to capture and retain rainwater and runoff; an important water management technique in areas with limited freshwater resources, such as Gibraltar. DDT (dichloro-diphenyl-trichloro-ethane) - a colorless, odorless insecticide that has toxic effects on most animals; the use of DDT was banned in the US in 1972. Defoliants - chemicals which cause plants to lose their leaves artificially; often used in agricultural practices for weed control, and may have detrimental impacts on human and ecosystem health. Deforestation - the destruction of vast areas of forest (e.g., unsustainable forestry practices, agricultural and range land clearing, and the over exploitation of wood products for use as fuel) without planting new growth. Desertification - the spread of desert-like conditions in arid or semi-arid areas, due to overgrazing, loss of agriculturally productive soils, or climate change. Dredging - the practice of deepening an existing waterway; also, a technique used for collecting bottom-dwelling marine organisms (e.g., shellfish) or harvesting coral, often causing significant destruction of reef and ocean-floor ecosystems. 
Drift-net fishing - done with a net, miles in extent, that is generally anchored to a boat and left to float with the tide; often results in the overharvesting and waste of large populations of non-commercial marine species (by-catch) by its effect of "sweeping the ocean clean." Ecosystems - ecological units composed of complex communities of organisms and their specific environments. Effluents - waste materials, such as smoke, sewage, or industrial waste which are released into the environment, subsequently polluting it. Endangered species - a species that is threatened with extinction either by direct hunting or habitat destruction. Freshwater - water with very low soluble mineral content; sources include lakes, streams, rivers, glaciers, and underground aquifers. Greenhouse gas - a gas that "traps" infrared radiation in the lower atmosphere causing surface warming; water vapor, carbon dioxide, nitrous oxide, methane, hydrofluorocarbons, and ozone are the primary greenhouse gases in the Earth's atmosphere. Groundwater - water sources found below the surface of the earth often in naturally occurring reservoirs in permeable rock strata; the source for wells and natural springs. Highlands Water Project - a series of dams constructed jointly by Lesotho and South Africa to redirect Lesotho's abundant water supply into a rapidly growing area in South Africa; while it is the largest infrastructure project in southern Africa, it is also the most costly and controversial; objections to the project include claims that it forces people from their homes, submerges farmlands, and squanders economic resources. Inuit Circumpolar Conference (ICC) - represents the 145,000 Inuits of Russia, Alaska, Canada, and Greenland in international environmental issues; a General Assembly convenes every three years to determine the focus of the ICC; the most current concerns are long-range transport of pollutants, sustainable development, and climate change.
Metallurgical plants - industries which specialize in the science, technology, and processing of metals; these plants produce highly concentrated and toxic wastes which can contribute to pollution of ground water and air when not properly disposed of. Noxious substances - injurious, very harmful to living beings. Overgrazing - the grazing of animals on plant material faster than it can naturally regrow, leading to the permanent loss of plant cover; a common effect of too many animals grazing limited range land. Ozone shield - a layer of the atmosphere composed of ozone gas (O3) that resides approximately 25 miles above the Earth's surface and absorbs solar ultraviolet radiation that can be harmful to living organisms. Poaching - the illegal killing of animals or fish, a great concern with respect to endangered or threatened species. Pollution - the contamination of a healthy environment by man-made waste. Potable water - water that is drinkable, safe to be consumed. Salination - the process through which fresh (drinkable) water becomes salt (undrinkable) water; hence, desalination is the reverse process; also involves the accumulation of salts in topsoil caused by evaporation of excessive irrigation water, a process that can eventually render soil incapable of supporting crops. Siltation - occurs when water channels and reservoirs become clotted with silt and mud, a side effect of deforestation and soil erosion. Slash-and-burn agriculture - a rotating cultivation technique in which trees are cut down and burned in order to clear land for temporary agriculture; the land is used until its productivity declines at which point a new plot is selected and the process repeats; this practice is sustainable while population levels are low and time is permitted for regrowth of natural vegetation; conversely, where these conditions do not exist, the practice can have disastrous consequences for the environment.
Soil degradation - damage to the land's productive capacity because of poor agricultural practices such as the excessive use of pesticides or fertilizers, soil compaction from heavy equipment, or erosion of topsoil, eventually resulting in reduced ability to produce agricultural products. Soil erosion - the removal of soil by the action of water or wind, compounded by poor agricultural practices, deforestation, overgrazing, and desertification. Ultraviolet (UV) radiation - a portion of the electromagnetic energy emitted by the sun and naturally filtered in the upper atmosphere by the ozone layer; UV radiation can be harmful to living organisms and has been linked to increasing rates of skin cancer in humans. Water-borne diseases - those in which bacteria survive in, and are transmitted through, water; always a serious threat in areas with an untreated water supply. Source: CIA World Factbook - Unless otherwise noted, information in this page is accurate as of February 21, 2013
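Because pH is a base-10 logarithmic scale (each whole unit is a tenfold change in hydrogen-ion concentration), the acid-rain figures in the glossary can be compared directly. A short sketch contrasting the pH 2.4 New England measurement with the pH 5.6 acid-precipitation threshold:

```python
# pH is -log10 of hydrogen-ion concentration, so the concentration
# ratio between two samples is 10 raised to the pH difference.
ph_threshold = 5.6  # below this, precipitation counts as acid rain
ph_measured = 2.4   # vinegar-strength acidity recorded in New England rain

ratio = 10 ** (ph_threshold - ph_measured)
print(round(ratio))  # 1585: the measured rain is ~1,600x more acidic
```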
<urn:uuid:76017011-9346-48e4-a857-c96af9af03c0>
3.25
1,751
Structured Data
Science & Tech.
12.16919
982
There’s a course at Yale University in which undergraduates travel to the Amazon rain forest to collect fungi. The fungus samples are often unlike anything you’ve encountered before. One of them, however, which will be featured in a paper accepted by a scientific journal, might solve the problem of polyurethane building up in our landfills. The fungus basically eats the plastic and breaks it down into carbon. That’s just one discovery being studied in the Rainforest Expedition and Laboratory course taught by professor Scott A. Strobel. “We take 15 undergraduates into the Ecuadorean rain forest and collect plant samples,” said Kaury Kucera, co-instructor of the course and a postdoctoral researcher in the department of molecular biophysics and biochemistry. The fungus they’re looking for “grows in the inner tissues of plant samples that is symbiotic with the plant and often produces natural compounds that are interesting to medicine,” Kucera said.
<urn:uuid:2d6808b0-5c98-46c6-849f-49b15d671b18>
3.078125
221
Truncated
Science & Tech.
36.633347
983
Mars has vast glaciers hidden under aprons of rocky debris near mid-latitude mountains, a new study confirms, pointing to a new and large potential reservoir of life-supporting water on the planet. These mounds of ice exist at much lower latitudes than any ice previously found on the red planet. "Altogether, these glaciers almost certainly represent the largest reservoir of water ice on Mars that's not in the polar caps," said John Holt of the University of Texas at Austin and the main author of the study. "Just one of the features we examined is three times larger than the city of Los Angeles and up to one-half-mile thick, and there are many more." The gently sloping mid-latitude debris flows have puzzled scientists since they were revealed by NASA's Viking orbiters in the 1970s — they looked very different than the fans and cones of debris found near mountains and cliffs in Mars' equatorial regions. Since their discovery, scientists have been debating how the features formed, with some proposing they were debris flows lubricated by ice that had since evaporated away. But more recent observations suggested that the features "might be more ice than rock," Holt said. In other words, they could be Martian glaciers. Holt and his colleagues used radar observations of the features, taken by NASA's Mars Reconnaissance Orbiter, to peer inside them. The findings, detailed in the Nov. 21 issue of the journal Science, suggest that the glacier theory is correct. Finding huge deposits of ice at the Martian mid-latitudes is a boon both to the study of past Martian habitability and to future human exploration. Once melted, glaciers are huge reservoirs of water, which is key to all life as we know it. The team used MRO's Shallow Radar instrument to penetrate the rocky debris flows that lie in the Hellas Basin region of Mars' southern hemisphere. They examined the radar echoes to see what lay beneath the surface. All signs pointed to ice, and lots of it.
The radar echoes received back by MRO indicated that radio waves passed through the overlying debris material and reflected off a deeper surface below without losing much strength — the expected signal for thick ice covered by a thin layer of debris. The radar echoes also showed no signs of significant rock debris within the glaciers, suggesting that they are relatively pure water ice. "These results are the smoking gun pointing to the presence of large amounts of water ice at these latitudes," said Ali Safaeinili, a Shallow Radar team member at NASA's Jet Propulsion Laboratory in Pasadena, Calif. The sheer amount of ice present in the flows studied was surprising; extrapolating from the Hellas Basin feature to the many others present in both Martian hemispheres, there seems to be a lot of ice hiding under the Martian surface. The researchers estimate that the amount of ice in these mid-latitude glaciers is about 1 percent of the ice that's in Mars' polar caps — roughly equivalent to the ratio of Earth's non-polar glaciers to its polar ice, Holt told SPACE.com. The glaciers could hold as much as 10 percent of the ice in the polar caps, similar to comparing Greenland's ice sheets to Antarctica, Holt added. But just how the ice got there is still a mystery. "You shouldn't have ice of this quantity at these latitudes," Holt said. The theory is that the ice formed when Mars' orbital tilt was much different than it is now (the axis the planet spins on has considerable "wobble," meaning its angle changes over time) and the planet was much colder, allowing ice to form on the surface. Ice on the surface of Mars today would immediately sublimate (or change directly into the gas phase). The rocky debris covering the ice is likely what keeps it in place today and has allowed it to survive below the surface for millions of years.
Scientists aren't exactly sure during which past ice age the glaciers may have formed, but by counting the number of impact craters in the overlying debris, they estimate them to be about 100 million years old, said study team member Jim Head of Brown University in Providence, R.I. These ancient glaciers could hold clues that would shed more light on Mars' past, particularly whether or not it ever harbored life. "On Earth," Head said, "such buried glacial ice in Antarctica preserves the record of traces of ancient organisms and past climate history." Ancient ice layers in glaciers on Earth preserve the signature of the current atmosphere at the time that they formed. Head thinks the same could be true of the Martian glaciers. In particular, small bubbles that form as the ice layers are deposited could have "samples of the atmosphere at that time," he said. A lander capable of drilling down several meters could be able to sample the ice in the glaciers. "These are quite accessible to landers," Holt said. They could also be a source of water for any future manned Mars expeditions. (When the researchers travel to Antarctica, for instance, they simply knock off chunks of ice and melt them instead of lugging water with them.) "It's a lot of ice," Holt said. "You could support a base for a long time." © 2013 Space.com. All rights reserved. More from Space.com.
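Holt's size quote can be turned into a rough upper-bound volume for that single feature. The area of the city of Los Angeles (taken here as about 1,200 square kilometers) is an assumption supplied for illustration, not a figure from the article, and the half-mile thickness is treated as a maximum:

```python
LA_AREA_KM2 = 1200      # ASSUMED area of the city of Los Angeles, km^2
MILE_KM = 1.609344      # kilometres per mile

area_km2 = 3 * LA_AREA_KM2        # "three times larger than... Los Angeles"
max_thickness_km = 0.5 * MILE_KM  # "up to one-half-mile thick"

max_volume_km3 = area_km2 * max_thickness_km
print(f"{max_volume_km3:.0f} km^3")  # roughly 3,000 km^3 for one feature
```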
<urn:uuid:a323ec61-5452-47ac-98b6-0ed51bc51b6f>
3.671875
1,103
News Article
Science & Tech.
47.991595
984
Source Newsroom: University of Alabama Huntsville

Newswise — Dr. Michael Briggs, a member of NASA’s Fermi Gamma-ray Burst Monitor (GBM) team at The University of Alabama in Huntsville, today announced that the GBM telescope has detected beams of antimatter produced above thunderstorms on Earth by energetic processes similar to those found in particle accelerators. "These signals are the first direct evidence that thunderstorms make antimatter particle beams," said Michael Briggs, a university researcher whose team, located at UAHuntsville, includes scientists from NASA Marshall Space Flight Center, the University of Alabama in Huntsville, Max-Planck Institute in Garching, Germany, and from around the world. He presented the findings during a news briefing at the American Astronomical Society meeting in Seattle. Scientists think the antimatter particles are formed in a terrestrial gamma-ray flash (TGF), a brief burst produced inside thunderstorms that has a relationship to lightning that is not fully understood. As many as 500 TGFs may occur daily worldwide, but most go undetected. The spacecraft, known as Fermi, is designed to observe gamma-ray sources in space, emitters of the highest energy form of light. Fermi’s GBM constantly monitors the entire celestial sky, with sensors observing in all directions, including some toward the Earth, thereby providing valuable insight into this strange phenomenon. When the antimatter produced in a terrestrial thunderstorm collides with normal matter, such as the spacecraft itself, both the matter and antimatter particles immediately are annihilated and transformed into gamma-rays observed by the GBM sensors. The detection of gamma-rays at a particular energy -- 511,000 electron volts -- is the smoking gun, indicating that the source of the observed gamma-rays in these events is the annihilation of an electron with its antimatter counterpart, a positron, produced in the TGF.
Since the spacecraft’s launch in 2008, the GBM team has identified 130 TGFs, which are usually accompanied by thunderstorms located directly below the spacecraft at the time of detection. However, in four cases, storms were a far distance from Fermi. Lightning-generated radio signals, detected by a global monitoring network, indicated the only lightning at the time of these events was hundreds or more miles away. During one TGF, which occurred on December 14, 2009, Fermi was located over Egypt. However, the active storm was in Zambia, some 2,800 miles to the south. The distant storm was below Fermi’s horizon, so any gamma-rays it produced could not have been detected directly. Although Fermi could not see the storm from its position in orbit, it was still connected to it through sharing of a common magnetic field line of the Earth, which could be followed by the high-speed electrons and positrons produced by the TGF. These particles travelled up along the Earth’s magnetic field lines and struck the spacecraft. The beam continued past Fermi along the magnetic field, to a location known as a mirror point, where its motion was reversed, and then 23 milliseconds later, hit the spacecraft again. Each time, positrons in the beam collided with electrons in the spacecraft, annihilating each other, and emitting gamma-rays detected by Fermi’s GBM. NASA's Fermi Gamma-ray Space Telescope is an astrophysics and particle physics partnership. The spacecraft is managed by NASA's Goddard Space Flight Center in Greenbelt, Md. The GBM instrument is a collaboration between scientists at NASA's Marshall Space Flight Center, the University of Alabama in Huntsville, and the Max-Planck Institute in Garching, Germany. The Fermi mission was developed in collaboration with the U.S. Department of Energy, with important contributions from academic institutions and partners in France, Germany, Italy, Japan, Sweden and the United States.
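The 511,000-electron-volt "smoking gun" is simply the rest-mass energy of the electron (and of the positron), which is why gamma-rays at exactly this energy pin the signal to electron-positron annihilation. A quick check of that number from E = mc^2, using standard CODATA constants:

```python
# Electron rest-mass energy, E = m_e * c^2, expressed in electron volts.
m_e = 9.1093837015e-31  # electron mass, kg
c = 2.99792458e8        # speed of light, m/s
e = 1.602176634e-19     # joules per electron volt

energy_ev = m_e * c**2 / e
print(round(energy_ev))  # 510999, i.e. the 511 keV annihilation line
```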
<urn:uuid:989a7303-e260-45ab-8210-a42bb5fa3437>
3.359375
813
News Article
Science & Tech.
30.959029
985
In 1963, NRL astronomers made the first positive identification of discrete sources of stellar X rays. A new NRL-developed X-ray detector system was flown on an Aerobee rocket, and the result was the discovery of two X-ray sources - Scorpius X-1 and the Crab Nebula. These findings suggested the possibility that the source of the X rays was a neutron star, a densely packed body of neutrons formed from the collapse of a star. NRL scientists wanted to prove this hypothesis, and in 1964 NRL conducted an experiment on an Aerobee flight during the occultation of the Crab Nebula by the moon. NRL's data did not confirm the neutron star theory, which in turn spurred more intensive investigations. As a result, between 1964 and 1973, 125 discrete sources were discovered, including supernova remnants, pulsars, radio galaxies, and quasars. Specific NRL contributions included:
- the first X-ray detection of a pulsar in the Crab Nebula in 1969;
- the detection of X-ray galaxies during Aerobee flights in 1967 and 1968;
- the compilation of the first comprehensive galactic X-ray sources map;
- the discovery of a distinctive difference in time behavior between soft and hard X rays in 1971; and
- the discovery of the variability of Cygnus X-1, a possible black hole in the Cygnus constellation.
The rapid development of X-ray astronomy, combined with developments in infrared, ultraviolet, and cosmic-ray investigations, led in the 1970s to the utilization of satellites for high-energy astronomy research. In 1972, NASA initiated the High Energy Astronomy Observatory (HEAO) program to study cosmic-ray, X-ray, and gamma-ray sources in deep space. NRL was selected to develop one of the four instrument packages to be flown on the HEAO I, which was launched in August 1977.
The NRL package, the Large Area X-Ray Survey Array, was the largest space instrument ever to be flown on any satellite. Consisting of seven modules of large-area proportional counters, the instrument mapped the entire sky for high-energy sources, which included radio pulsars, binary pulsars, black holes, quasars, and extragalactic X-ray sources, resulting in a new map of nearly 1000 discrete X-ray sources.
<urn:uuid:5f919188-7fb7-4254-9d04-ac9fb7e7cece>
3.734375
507
About (Org.)
Science & Tech.
34.052844
986
NOAA RELEASES EAST PACIFIC HURRICANE SEASON OUTLOOK
Below Normal Seasonal Activity Expected in 2005

May 16, 2005 - NOAA, the National Oceanic and Atmospheric Administration, today released its 2005 East Pacific Hurricane Season Outlook. The outlook calls for a high likelihood of below normal activity. NOAA scientists are expecting 11-15 tropical storms. Six to eight of these are expected to become hurricanes, including two to four major hurricanes. “There tends to be a seesaw effect between the East Pacific and North Atlantic hurricane seasons,” said Jim Laver, director, NOAA’s Climate Prediction Center in Camp Springs, Md. “When there is above normal seasonal activity in the Atlantic there tends to be below normal seasonal activity in the Pacific. This has been especially true since 1995. Six of the last ten East Pacific hurricane seasons have been below normal, and NOAA scientists are expecting lower levels of activity again this season.” The seesaw effect between the East Pacific and North Atlantic hurricane seasons occurs because the two dominant climate factors that control much of the activity in both regions often act to suppress activity in one region while enhancing it in the other. Like the Atlantic hurricane season, the El Niño/La Niña cycle is a dominant climate factor influencing the East Pacific hurricane season. “However, this hurricane season we are most likely to be in a neutral pattern in regards to El Niño/La Niña,” said Vernon Kousky, NOAA’s El Niño/La Niña expert. While the thought of a hurricane is a sobering image to many people, there are some positive aspects in regards to the East Pacific hurricane season.
In contrast to its sibling - the North Atlantic hurricane season, which can cause deadly storms in the southern and eastern United States - “the East Pacific hurricane season can bring much needed precipitation to the usually dry southwestern United States during the summer months,” said Muthuvel Chelliah, NOAA’s Climate Prediction Center’s lead coordinator for the East and Central Pacific Hurricane Season Outlooks. “Most East Pacific tropical storms trek westward over open waters, sometimes reaching Hawaii and beyond. Yet, during any given season, one or two tropical storms can either head northward or re-curve toward western Mexico,” said Chelliah. After two years of successful experimental outlooks issued by NOAA in 2003 and 2004, the East Pacific Hurricane Season Outlook becomes an operational product this year. Unlike the North Atlantic, the East Pacific Hurricane Season Outlook does not have a scheduled mid-season update at this time. The East Pacific hurricane season runs from May 15 through November 30, with peak activity occurring during July through September. In a normal season, the East Pacific would expect 15 or 16 tropical storms. Nine of these would become hurricanes, of which four or five would be major hurricanes. The East Pacific Hurricane Season Outlook is a product of NOAA’s Climate Prediction Center, Hurricane Research Division and National Hurricane Center. The National Hurricane Center has forecasting responsibilities for the East Pacific region. NOAA, an agency of the U.S. Commerce Department, is dedicated to enhancing economic security and national safety through the prediction and research of weather and climate-related events and providing environmental stewardship of the nation’s coastal and marine resources. Carmeyia Gillis, NOAA Climate Prediction Center, (301) 763-8000, Ext.
7163

Related Web sites:
Background on the Eastern Pacific Hurricane Season: http://www.cpc.ncep.noaa.gov/products/Epac_hurr/background_information.html
NOAA’s Climate Prediction Center: http://www.cpc.ncep.noaa.gov
<urn:uuid:778f7dd8-ed0b-4d7b-a63e-9870edae0f11>
2.734375
772
News (Org.)
Science & Tech.
36.165247
987
The IDLgrTessellator class is a helper class that converts a simple concave polygon (or a simple polygon with holes) into a number of simple convex polygons (general triangles). A polygon is simple if it includes no duplicate vertices, if the edges intersect only at vertices, and exactly two edges meet at any vertex.

Tessellation is useful because the IDLgrPolygon object accepts only convex polygons. Using the IDLgrTessellator object, you can convert a concave polygon into a group of convex polygons.

The IDLgrTessellator::Init method takes no arguments. Use the following statement to create a tessellator object:

myTess = OBJ_NEW('IDLgrTessellator')

See IDLgrTessellator for details on creating tessellator objects.

The procedure file obj_tess.pro, located in the examples/visual subdirectory of the IDL distribution, provides an example of the use of the IDLgrTessellator object. To run the example, enter OBJ_TESS at the IDL prompt. The procedure creates a concave polygon, attempts to draw it, and then tessellates the polygon and re-draws. Finally, the procedure demonstrates adding a hole to a polygon. (You will be prompted to press Return after each step is displayed.) You can also inspect the source code in the obj_tess.pro file for hints on using the tessellator object.
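The conversion IDLgrTessellator performs can be sketched outside IDL with the classic ear-clipping algorithm: repeatedly cut off a convex "ear" triangle containing no other vertex until only one triangle remains. The sketch below is illustrative Python, not IDL's implementation, and handles only simple counter-clockwise polygons without holes:

```python
# Illustrative ear-clipping tessellation of a simple concave polygon
# (counter-clockwise vertex order, no holes). Not IDL's implementation.

def cross(o, a, b):
    """Z-component of (a - o) x (b - o); positive for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_triangle(p, a, b, c):
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)  # boundary points count as inside

def ear_clip(poly):
    """Split a simple CCW polygon into triangles (convex pieces)."""
    verts = list(poly)
    triangles = []
    while len(verts) > 3:
        n = len(verts)
        for i in range(n):
            a, b, c = verts[i - 1], verts[i], verts[(i + 1) % n]
            if cross(a, b, c) <= 0:
                continue  # reflex corner: not an ear
            others = (p for p in verts if p not in (a, b, c))
            if any(point_in_triangle(p, a, b, c) for p in others):
                continue  # another vertex blocks this ear
            triangles.append((a, b, c))  # clip the ear off
            del verts[i]
            break
        else:
            raise ValueError("polygon is not simple and counter-clockwise")
    triangles.append(tuple(verts))
    return triangles

# A concave "arrow" polygon, in the spirit of the obj_tess.pro demo:
arrow = [(0, 0), (4, 0), (4, 4), (2, 2), (0, 4)]
triangles = ear_clip(arrow)
print(len(triangles))  # 3 (an n-vertex simple polygon yields n - 2 triangles)
```

Drawing each returned triangle reproduces the original concave shape, which is exactly what IDLgrPolygon needs since it accepts only convex input.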
<urn:uuid:55ea103e-d05b-4d17-afe3-3d22471f1685>
2.71875
329
Documentation
Software Dev.
30.638619
988
Refraction at a Boundary

Refraction and Sight

In Unit 13 of The Physics Classroom Tutorial, it was emphasized that we are able to see because light from an object can travel to our eyes. Every object that can be seen is seen only because light from that object travels to our eyes. As you look at Mary in class, you are able to see Mary because she is illuminated with light and that light reflects off of her and travels to your eye. In the process of viewing Mary, you are directing your sight along a line in the direction of Mary. If you wish to view the top of Mary's head, then you direct your sight along a line towards the top of her head. If you wish to view Mary's feet, then you direct your sight along a line towards Mary's feet. And if you wish to view the image of Mary in a mirror, then you must direct your sight along a line towards the location of Mary's image. This directing of our sight in a specific direction is sometimes referred to as the line of sight.
As light travels through a given medium, it travels in a straight line. However, when light passes from one medium into a second medium, the light path bends. Refraction takes place. The refraction occurs only at the boundary. Once the light has crossed the boundary between the two media, it continues to travel in a straight line. Only now, the direction of that line is different than it was in the former medium. If when sighting at an object, light from that object changes media on the way to your eye, a visual distortion is likely to occur. This visual distortion is witnessed if you look at a pencil submerged in a glass half-filled with water. As you sight through the side of the glass at the portion of the pencil located above the water's surface, light travels directly from the pencil to your eye. Since this light does not change medium, it will not refract. (Actually, there is a change of medium from air to glass and back into air. Because the glass is so thin and because the light starts and finished in air, the refraction into and out of the glass causes little deviation in the light's original direction.) As you sight at the portion of the pencil that was submerged in the water, light travels from water to air (or from water to glass to air). This light ray changes medium and subsequently undergoes refraction. As a result, the image of the pencil appears to be broken. Furthermore, the portion of the pencil that is submerged in water appears to be wider than the portion of the pencil that is not submerged. These visual distortions are explained by the refraction of light. In this case, the light rays that undergo a deviation from their original path are those that travel from the submerged portion of the pencil, through the water, across the boundary, into the air, and ultimately to the eye. At the boundary, this ray refracts. The eye-brain interaction cannot account for the refraction of light. 
As was emphasized in Unit 13, the brain judges the image location to be the location where light rays appear to originate from. This image location is the location where either reflected or refracted rays intersect. The eye and brain assume that light travels in a straight line and then extends all incoming rays of light backwards until they intersect. Light rays from the submerged portion of the pencil will intersect in a different location than light rays from the portion of the pencil that extends above the surface of the water. For this reason, the submerged portion of the pencil appears to be in a different location than the portion of the pencil that extends above the water. The diagram at the right shows a God's-eye view of the light path from the submerged portion of the pencil to each of your two eyes. Only the left and right extremities (edges) of the pencil are considered. The blue lines depict the path of light to your right eye and the red lines depict the path of light to your left eye. Observe that the light path has bent at the boundary. Dashed lines represent the extensions of the lines of sight backwards into the water. Observe that these extension lines intersect at a given point; the point represents the image of the left and the right edge of the pencil. Finally, observe that the image of the pencil is wider than the actual pencil. A ray model of light that considers the refraction of light at boundaries adequately explains the broken pencil observations.

The broken pencil phenomenon occurs during your everyday spearfishing outing. Fortunately for the fish, light refracts as it travels from the fish in the water to the eyes of the hunter. The refraction occurs at the water-air boundary. Due to this bending of the path of light, a fish appears to be at a location where it isn't. A visual distortion occurs. Subsequently, the hunter launches the spear at the location where the fish is thought to be and misses the fish.
Of course, the fish are never concerned about such hunters; they know that light refracts at the boundary and that the location where the hunter is sighting is not the same location as the actual fish. How did the fish get so smart and learn all this? They live in schools. Now any fish that has done his/her physics homework knows that the amount of refraction that occurs is dependent upon the angle at which the light approaches the boundary. We will investigate this aspect of refraction in great detail in Lesson 2. For now, it is sufficient to say that as the hunter with the spear sights more perpendicular to the water, the amount of refraction decreases. The most successful hunters are those who sight perpendicular to the water. And the smartest fish are those who head for the deep when they spot hunters who sight in this direction. Since refraction of light occurs when it crosses the boundary, visual distortions often occur. These distortions occur when light changes medium as it travels from the object to our eyes.
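The claim that refraction shrinks as the sight line approaches the perpendicular can be checked numerically with Snell's law (n₁ sin θ₁ = n₂ sin θ₂), which Lesson 2 treats in detail. The sketch below is illustrative only; the function name and the indices of refraction (1.33 for water, 1.00 for air) are our own assumptions, not values given in this tutorial.

```d
import std.math : asin, sin, PI;
import std.stdio;

// Angle of refraction (degrees) for light crossing from a medium with
// index n1 into a medium with index n2, via Snell's law.
double refractionAngle(double incidenceDeg, double n1, double n2)
{
    double rad = incidenceDeg * PI / 180.0;
    return asin(n1 / n2 * sin(rad)) * 180.0 / PI;
}

void main()
{
    // Light leaving water (n = 1.33) into air (n = 1.00) bends away
    // from the normal; the bending shrinks near the perpendicular.
    foreach (angle; [5.0, 20.0, 40.0])
        writefln("%s deg in water -> %.1f deg in air",
                 angle, refractionAngle(angle, 1.33, 1.00));
}
```

A 5° sight line in water is deviated by well under 2°, while a 40° line is deviated by almost 19°, which is why the most successful hunters sight perpendicular to the water.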
<urn:uuid:f573244e-5ac3-4fed-b8f9-567bac22c4bf>
4.09375
1,417
Tutorial
Science & Tech.
58.77676
989
"Many environmentalists believe that wind and solar power can be scaled to meet the rising demand [of billions emerging from poverty], especially if coupled with aggressive efforts to cut waste," reports Justin Gillis. "But a lot of energy analysts have crunched the numbers and concluded that today’s renewables, important as they are, cannot get us even halfway there." Gillis discusses the most promising innovations in nuclear power, which many technologists see as the most viable option for providing a reliable source of electricity without carbon emissions. These include "a practicable type of nuclear fusion", "a fission reactor that could run on today’s nuclear waste", and "a safer reactor based on an abundant element called thorium." "Beyond the question of whether they will work," he adds, "these ambitious schemes pose a larger issue: How much faith should we, as a society, put in the idea of a big technological fix to save the world from climate change?" And as is appropriate for a nuclear-related news item that appeared on the two-year anniversary of the Tohoku earthquake, we offer a reminder of the twelve different nuclear power "near miss" events that occurred in the United States in 2012.
<urn:uuid:11349972-17b0-4f34-b408-cfc39341347b>
3.203125
248
Truncated
Science & Tech.
24.951667
990
Copyright: This document has been placed in the Public Domain. Many thanks to Bill Baxter, Jarrett Billingsley, Anders F Björklund, Lutger Blijdestijn, Thomas Kuehne, Pierre Rouleau and Max Samuha for their input, and to Walter Bright for making such a great language. One of the great features of D is its fantastic support for text. However, many people new to D have trouble understanding why things are the way they are. People coming from a C or C++ background are quickly confused by the fact that char does not appear to work the way they expect it to, whilst people coming from a Java, C# or interpreted language background wonder why D has three different character types, and no string class. This article will hopefully address these questions, and help explain the how and why of text in D. But first, some background. Back when C was created, the dominant character encoding in use was ASCII. ASCII was cool because it could encode every letter of the western alphabet, numbers, and a whole bunch of punctuation. If you needed more characters, then by golly you could just stick them in the upper 128 fields as an extension to ASCII. This led to the rather unfortunate mess that is character encodings. They arose out of the impossibility of fitting every language's symbols into just 128 characters. Things became worse with multi-byte character sets like Shift_JIS, where you couldn't even count on each 8-bit code being an actual symbol. You also had to carry around a description of which code page you were using. It only got worse if you wanted to use multiple character encodings in a single text document: you usually can't. In the end, this led to the creation of Unicode; a character encoding to replace all other character encodings. Unicode significantly differs from most other character encodings in that it encodes every one of its symbols using a unique integer identifier called a code point.
For example, the N-ary summation symbol “∑” is identified in Unicode as code point 0x2211. By contrast, this symbol is not defined in most character encodings, usually because there simply isn’t room. However, Unicode by itself does not specify how to actually store these code points; it merely defines what they mean. This is where the Unicode Transformation Formats come into play. UTF-32 is the easiest to understand. Every Unicode code point is stored literally as a 32-bit unsigned integer. The obvious disadvantage to this is that it requires a large amount of space to store even the simplest of text. UTF-16 is somewhat more complex. As the name suggests, it is based around 16-bit unsigned integers. However, since you cannot represent every Unicode code point with only 16-bits, it uses variable length encoding to make sure you can store any code point you please. Most normal code points will only use a single 16-bit value, with more uncommon code points taking up two. Each of these 16-bit values is called a “code unit.” UTF-8 can be thought of as an “extension” of UTF-16 in that it uses a similar variable length encoding scheme based on 8-bit integers. Code points that fall into the traditional ASCII range remain exactly the same (meaning ASCII is effectively a subset of UTF-8), with other code points taking somewhere between 2 and 4 bytes (aka: code units) to store. So, by now, you’re probably thinking “what a complete and total mess!” To a degree it is, but it’s important to realise that this is a huge simplification of how things used to be. What’s important to take from all this is that there are three distinct ways of representing Unicode text, and all three are supported directly in D. Unlike C which says nothing on, for example, how to store Japanese text, D is designed to use Unicode internally for all text storage.
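The size difference between the three formats can be seen directly in D. A quick sketch (using the D2 aliases string, wstring and dstring for the char[], wchar[] and dchar[] types discussed in this article):

```d
import std.stdio;

void main()
{
    // U+2211 is the N-ary summation sign "∑" mentioned above.
    string  u8  = "\u2211";   // UTF-8:  stored as 3 code units (3 bytes)
    wstring u16 = "\u2211"w;  // UTF-16: stored as 1 code unit  (2 bytes)
    dstring u32 = "\u2211"d;  // UTF-32: stored as 1 code unit  (4 bytes)

    // length counts code units, so the same symbol reports
    // different lengths in each encoding.
    writefln("UTF-8: %s, UTF-16: %s, UTF-32: %s",
             u8.length, u16.length, u32.length);
    // UTF-8: 3, UTF-16: 1, UTF-32: 1
}
```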
This means that instead of having to support multiple character encodings in your programs, you only need to support one, possibly using a library to convert to and from Unicode as necessary. Specifically, here is how the various encodings translate to D types:
- char is a UTF-8 code unit,
- wchar is a UTF-16 code unit,
- dchar is a UTF-32 code point,
- char[] is a UTF-8 string,
- wchar[] is a UTF-16 string and
- dchar[] is a UTF-32 string.
The first thing that trips up people new to D is that the following program works: But this one doesn’t: It simply crashes out with an error saying something about “invalid UTF sequence.” Many people see this and wonder what’s going on. The answer is something like this: remember how UTF-8 encodes code points using somewhere between one and four individual code units? Well, in D, a char is only a single UTF-8 code unit, so it cannot contain all possible code points. The problem is that “є” requires two code units to represent; it is actually stored as "\xD1\x94". So when the program comes to print out the second “character,” the standard library throws up the red flag saying “wait a second, 0xD1 isn’t a valid UTF-8 sequence; you can’t print that!” You’re basically trying to write out half a code point, which really doesn’t make any sense. Is the standard library at fault? Not really; you don’t exactly want to be outputting incomplete code points, otherwise other programs could choke on your output. You certainly wouldn’t appreciate being fed garbage text. The way to fix this is to realise that you’re using the wrong type for the job. Remember, a single char cannot possibly hold all valid code points. What you need to do is use a type which can: The above code works perfectly, since the foreach loop is smart enough to decode a single complete code point at a time. The second problem comes up when programmers discover the power of D’s arrays.
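The original code samples were lost in transcription, but a working program of the kind described might look like this:

```d
import std.stdio;

void main()
{
    string s = "єllѲ";

    // Iterating with char would hand us lone UTF-8 code units such as
    // 0xD1 -- half of "є" -- and printing one triggers the
    // "invalid UTF sequence" error described above. Asking foreach
    // for dchar makes it decode one complete code point per pass.
    foreach (dchar c; s)
        writefln("%s", c);
}
```

This prints each of the four characters on its own line, including the multi-byte “є” and “Ѳ”.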
They see things like the built-in length property and slicing and think “cool; I can use those on strings!” When their code fails miserably on international text, they wonder just what’s gone wrong. The problem is, once again, that UTF-8 and UTF-16 don’t necessarily store a single code point in a single code unit. For example, if we are using UTF-8, does not give you “є”. It gives you "\xD1" which isn’t what you really wanted. Similarly, gives you “єll” and not “єllѲ” as you would expect (since the “є” actually takes up two chars.) The reason for this is that decoding a UTF-8 or UTF-16 stream is all well and good, but trying to decode a slice in the middle is difficult to do efficiently. Similarly, the length property of a UTF-8 or UTF-16 string can be misleading; it is counting the number of code units, not the number of actual code points. The simplest way to deal with this is to stick to UTF-32 strings (aka: dchar[]) if you’re going to be doing a lot of indexing or slicing. This is because they do not suffer from these variable length encoding problems. Another possible way to do this is to use a foreach loop to convert your string into individual code points, and manually extract the slice you want as you go. The std.utf module provides many functions which you might find useful:
- std.utf.toUTF8(s) – converts s from any UTF encoding to UTF-8, and returns the result.
- std.utf.toUTF16(s) – as above, but for UTF-16.
- std.utf.toUTF32(s) – as above, but for UTF-32.
Another trick to keep in mind is that when using foreach, you can also ask it to give you the index of each code point within the string: The above code produces the following output (assuming your terminal can display UTF-8): Note that the index is that of the first code unit for that code point. These indices can be used in slicing operations to ensure you get a valid UTF sequence. This is an area of active discussion.
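The index-yielding foreach described above (whose listing was lost in transcription) might be sketched like this:

```d
import std.stdio;

void main()
{
    string s = "єllѲ";

    // For character arrays, foreach can supply the index of the first
    // code unit of each code point alongside the decoded dchar; those
    // indices are always safe slice boundaries.
    foreach (size_t i, dchar c; s)
        writefln("%s: %s", i, c);
    // 0: є
    // 2: l
    // 3: l
    // 4: Ѳ
}
```

Note how the index jumps from 0 to 2: “є” occupies two UTF-8 code units, so slicing at index 1 would split it, while slicing at 0, 2, 3 or 4 is safe.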
Many people assert that D needs a string class, whilst others say that it is unnecessary. Instead of trying to convince you either way, I’ll just explain why D doesn’t have a string class, and show what you can do without one. An important thing to remember is that C++ grew a string class because C’s string handling was so incredibly painful. Java has a string class because Java is object-oriented to the extreme, and it makes sense to have one. On the other hand, D does many of the things that C++ needed the string class for quite nicely by itself:
- Since all strings are arrays, all strings have a length property, meaning you don’t need a function to go looking for the end of a string.
- Strings can also be trivially concatenated together using the concatenation operator ~.
- Slicing works as expected for UTF-32 strings, and in UTF-8 and UTF-16 strings as long as you slice on known code point positions.
can be rewritten as: Which means that although you don’t have a string class, you can “fake” it, by simply writing functions that take strings as their first argument; you aren’t even limited to what comes in the standard library, unlike in C++ and Java! For a full list of what string manipulation functions come with D, take a look at http://www.digitalmars.com/d/phobos/std_string.html. If you really, really can’t live without the warm comforting embrace of a string class, you can find a good one at http://www.dprogramming.com/dstring.php. By now you should understand the problems that arise because of D’s use of UTF encodings. However, there is another problem that comes about because of how D represents arrays. Back before D had the std.stdio.writefln method, most examples used the old C function printf. This worked fine until you tried to output a string: Statements like the above are very likely to print out garbage, which leaves many people scratching their heads. The reason is that C uses NUL-terminated strings, whereas D uses true arrays.
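The "fake method" rewrite described above works because a free function whose first parameter is an array can be called with method syntax. A minimal sketch using std.string.toUpper:

```d
import std.stdio;
import std.string;

void main()
{
    string s = "hello, world";

    // Conventional function call:
    writefln("%s", toUpper(s));

    // The same function called with method syntax, as if string had
    // a toUpper member -- no string class required.
    writefln("%s", s.toUpper());
    // both lines print: HELLO, WORLD
}
```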
In other words:
- Strings in C are a pointer to the first character. A string ends at the first NUL character.
- Strings in D are a pointer to the first code unit, followed by a length. There is no terminator.
Thankfully, there is an easy solution: The std.string.toStringz function converts any char[] string to a C-compatible char* string by ensuring that there is a terminating NUL. So you’ve been clever and added some nifty symbols into your source file using Unicode, only to have the compiler barf on them. “What's wrong?” I hear you ask; “I thought D supported Unicode source!” In fact, it does. There are two problems you might run into:
- The editor you used may support Unicode, but didn't end up saving in it. Go back and double-check that the file really is Unicode. How you do this depends on your editor, but there's usually an option lying around somewhere to set a file's character encoding.
- The other is a bit obscure: if you save your source file in Unicode without a Byte Order Mark and the first character is outside the ASCII character range, D won't be able to read it properly. Use an editor that properly supports UTF. Seriously, even Windows Notepad does it correctly!
Yes, it can. D source files support four character encodings: ASCII, UTF-8, UTF-16 and UTF-32. Provided your source file is saved in one of these encodings, you can include any character you like. Of course, this requires that you use an editor that properly supports UTF; as stated above, using an editor that incorrectly writes out UTF files can cause the D compiler to choke on your source files. There are two ways to do this:
- Enter the characters you want directly, and save the source file in one of the UTF encodings.
- Find out what the code point for the symbol you want to use is, and then manually enter it into the string literal using \uXXXX for code points 0xFFFF and below, or \UXXXXXXXX if they don't fit in the first form. Remember, each X is a hexadecimal digit.
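The toStringz fix for the printf problem described above looks like this in practice:

```d
import core.stdc.stdio : printf;
import std.string : toStringz;

void main()
{
    string s = "Hello from D";

    // printf expects a NUL-terminated C string; toStringz guarantees
    // the terminator is present, so no garbage is printed.
    printf("%s\n", toStringz(s));
}
```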
You can store ASCII text directly using char[] strings. Remember, ASCII is a subset of UTF-8, which means that all ASCII strings are valid UTF-8 strings. You can use pretty much any character allowed in C99. This boils down to any of the following:
- underscore (_),
- code points greater than or equal to \u00A0 and less than \uD800 and
- code points greater than \uDFFF.
C99 also makes special exceptions for:
- \u0024 ($),
- \u0040 (@) and
- \u0060 (`).
For that, you will need to use a ubyte array. You should not use char for this purpose, since char is supposed to contain UTF-8 strings, and other encodings more than likely aren't valid UTF-8 strings. To convert between Unicode and your chosen code page, you will want to use a library designed to do this: iconv < http://www.gnu.org/software/libiconv/ > is a popular open source library for code page conversions. On windows, you can look in std.windows.charset, functions toMBSz() and fromMBSz() for converting to/from Win-ANSI/Oem encodings. Not directly. You can either roll your own system, or use an existing library like gettext < http://www.gnu.org/software/gettext/ > to do this. This one’s tricky to answer. For most cases, char[] is more than sufficient. It’s also usually the most succinct encoding for Unicode text. Problems only really arise when you need to look at a string's length or do indexing/slicing on fixed locations. The first problem (getting the length) can be solved by using a function like the following: This will give you the correct answer. The second problem’s a little trickier. First of all, it’s important to realise that you can slice a UTF-8 or UTF-16 string: you just need to make sure you're not slicing in the middle of a code sequence. For example: Works just fine since the find function returns the code unit index, and not the code point index. What you need to be careful of is code like this: This doesn't work because ‘ö’ requires two UTF-8 code units to encode.
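The length-counting helper the text refers to was lost in transcription. Here is a sketch of what it might have looked like, together with an nth-code-point helper in the same decode-and-walk style (both names are our own, not from the original):

```d
import std.stdio;

// Number of code points (characters), not code units.
size_t charLength(in char[] s)
{
    size_t n = 0;
    foreach (dchar c; s)   // foreach decodes one code point per pass
        ++n;
    return n;
}

// The nth code point of a UTF-8 string.
dchar charAt(in char[] s, size_t n)
{
    size_t i = 0;
    foreach (dchar c; s)
    {
        if (i == n)
            return c;
        ++i;
    }
    throw new Exception("charAt: index out of range");
}

void main()
{
    string s = "tschüß";
    writefln("code units: %s, code points: %s", s.length, charLength(s));
    // code units: 8, code points: 6
    writefln("%s", charAt(s, 4));  // prints: ü
}
```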
Currently, there is no function in the standard library for extracting the nth character from a string, however you can use something like this: Once you take care of those two problems, aside from things like the system API or what kind of text you're storing, it doesn't really matter which encoding you use. It is a bit, actually. Here's a fast version written by Derek Parnell and Frits van Bommel that supports any given string type passed to it (not just char[].) See the newsgroup thread starting at http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D.learn&article_id=7444. Here's the long and short of it:
- Windows: ASCII for old Win9x APIs, UTF-16 for WinNT APIs. You can tell the difference because ASCII APIs have a trailing 'A' on their name, whilst UTF-16 APIs have a trailing 'W'. For example: GetCommandLineA and GetCommandLineW.
- Linux: Depends on what you're calling, how it was compiled, system (locale) settings, etc. Best to read the documentation.
- Mac OSX: Usually UTF-8, some old-old-old functions may expect MacRoman (yuck!). Be careful with filenames though, because they allow only a specific normalized subset of UTF-8 (you can read them as UTF-8, but you can't use any UTF-8 as a filename unless you normalize it). http://developer.apple.com/qa/qa2001/qa1173.html
So here’s the short and sweet on text in D:
- char is a UTF-8 code unit, and may not be a complete code point.
- wchar is a UTF-16 code unit, and may not be a complete code point.
- dchar is a UTF-32 code unit, which is guaranteed to be a complete code point.
- char[] is a UTF-8 string, and uses one to four bytes per code point.
- wchar[] is a UTF-16 string, and uses two to four bytes per code point.
- dchar[] is a UTF-32 string and uses four bytes per code point.
- Outputting an incomplete UTF-8 or UTF-16 sequence will result in an error.
- You cannot reliably index or slice a UTF-8 or UTF-16 string due to variable-length encoding.
- The length property of a char[] or wchar[] array is the number of code units, not code points.
- Strings destined for a C function that expects NUL-terminated strings need to be passed through std.string.toStringz first (or manually make sure the NUL-terminator exists).
This version was manually transcribed from the original, and so there may be a few formatting errors. If you update this document, please inform the original author.
<urn:uuid:8c26b6c4-f47b-44dd-adb5-553017cb9d24>
3.4375
3,926
Documentation
Software Dev.
65.643465
991
May 2, 2011 A research team in Lund, Sweden has discovered primary biological matter in a fossil of an extinct varanoid lizard (a mosasaur) that inhabited marine environments during Late Cretaceous times. Using state-of-the-art technology, the scientists have been able to link proteinaceous molecules to bone matrix fibres isolated from a 70-million-year-old fossil -- that is, they have found genuine remains of an extinct animal entombed in stone. Mosasaurs are a group of extinct varanoid lizards that inhabited marine environments during the Late Cretaceous (approximately 100-65 million year ago). With their discovery, the scientists Johan Lindgren, Per Uvdal, Anders Engdahl, and colleagues have demonstrated that remains of type I collagen, a structural protein, are retained in a mosasaur fossil. Collagen is the dominating protein in bone. The scientists have applied a broad spectrum of sophisticated techniques to achieve their results. The scientists have used synchrotron radiation-based infrared microspectroscopy at MAX-lab in Lund, southern Sweden, to show that amino acid containing matter remains in fibrous tissues obtained from a mosasaur bone. In addition to synchrotron radiation-based infrared microspectroscopy, mass spectrometry and amino acid analysis have been performed. Previously, other research teams have identified collagen-derived peptides in dinosaur fossils based on, for example, mass spectrometric analyses of whole bone extracts. The present study provides compelling evidence to suggest that the biomolecules recovered are primary and not contaminants from recent bacterial biofilms or collagen-like proteins. Moreover, the discovery demonstrates that the preservation of primary soft tissues and endogenous biomolecules is not limited to large-sized bones buried in fluvial sandstone environments, but also occurs in relatively small-sized skeletal elements deposited in marine sediments. 
- Johan Lindgren, Per Uvdal, Anders Engdahl, Andrew H. Lee, Carl Alwmark, Karl-Erik Bergquist, Einar Nilsson, Peter Ekström, Magnus Rasmussen, Desirée A. Douglas, Michael J. Polcyn, Louis L. Jacobs. Microspectroscopic Evidence of Cretaceous Bone Proteins. PLoS ONE, 2011; 6 (4): e19445 DOI: 10.1371/journal.pone.0019445
<urn:uuid:c7687392-c384-4450-9611-c903b395a18a>
3.265625
522
News (Org.)
Science & Tech.
22.867736
992
- Solid-state lasers can produce light in the red and blue parts of the spectrum but not the green.
- Recent research suggests that this "green gap" could be plugged as early as this year.
- The advance will allow for laser-based video displays that are small enough to fit in a cell phone.
On a rainy Saturday morning in January 2007, Henry Yang, chancellor of the University of California, Santa Barbara, took an urgent phone call. He excused himself abruptly from a meeting, grabbed his coat and umbrella, and rushed across the windswept U.C.S.B. campus to the Solid State Lighting and Display Center. The research group there included one of us (Nakamura), who had just received the Millennium Technology Prize for creating the first light-emitting diodes (LEDs) that emit bright blue light. Since that breakthrough over a decade earlier, Nakamura had continued his pioneering research on solid-state (semiconductor) lighting, developing green LEDs and the blue laser diodes that are now at the core of modern Blu-ray disc players. As Yang reached the center about 10 minutes later, people were milling about a small test lab. "Shuji had just arrived and was standing there in his leather jacket asking questions," he recalled. Nakamura's colleagues Steven DenBaars and James C. Speck were speaking with a few graduate students and postdoctoral researchers as they took turns looking into a microscope. They parted for Yang, who peered into the eyepiece to witness a brilliant blue-violet flash emanating from a glassy chip of gallium nitride (GaN).
<urn:uuid:66f7e189-e267-45c9-9d40-931f571e8675>
3.34375
339
News Article
Science & Tech.
52.888857
993
MAKING GRAPHENE NANORIBBONS: The process for tailoring of the silicon carbide crystal for selective graphene growth and device fabrication is illustrated, starting with the top left figure. (A) A nanometer-scale step is etched into the silicon carbide crystal by a fluorine-based reactive ion etch (RIE). (B) The crystal is heated to about 1200-1300 degrees Celsius (at low vacuum), inducing step flow and relaxation to the etching. (C) When the crystal is further heated to about 1450 degrees Celsius, a graphene nanoribbon forms. (D) From there the source and drain contacts, graphene nanoribbon channel, aluminum oxide gate dielectric and metal top gate are added. Image: COURTESY OF WALTER DE HEER For years researchers have held out hope that graphene would be the material to pick up the mantle in the electronics industry when silicon hits its limits as the material of choice for making devices smaller, faster and cheaper. Yet, turning graphene's promise into a reality has been difficult to say the least, in part because of the inherent difficulty of working with a substance one atom thick. Methods of cutting graphene into useable pieces tend to leave frayed edges that mitigate the material's effectiveness as a conductor. Now, a team of researchers at Georgia Institute of Technology led by Walter de Heer claims to have made a significant advance in that area by developing a technique for creating nanometer-scale graphene ribbons without rough edges. (A nanometer is one billionth of a meter.) Graphene has, of course, made headlines throughout the scientific world this week, thanks to the awarding of the Nobel Prize in Physics to two researchers at the University of Manchester in England who in 2004 pioneered a way of isolating graphene by repeatedly cleaving graphite with adhesive tape. The Nobel Prize committee recognized Andre Geim and Konstantin Novoselov "for groundbreaking experiments regarding the two-dimensional material graphene." 
Unlike the approach taken by Geim and Novoselov, de Heer and his team in the past have created graphene sheets by heating a silicon carbide surface to 1,500 degrees Celsius until a layer of graphene formed. The graphene was then cut to a particular size and shape using an electron beam. "This was a serious problem because cutting graphene leaves rough edges that destroy a lot of graphene's good properties, making it less conductive," says de Heer, regents' professor in Georgia Tech's School of Physics. De Heer's new approach, described October 3 in Nature Nanotechnology, is to etch patterns into the silicon carbide and then heat that surface until graphene forms within the etched patterns. (Scientific American is part of Nature Publishing Group.) In this way graphene forms in specific shapes and sizes without the need for cutting. "The whole philosophy has changed," he says. "We're not starting with an infinite sheet of graphene; we're growing it where we want to grow it." The researchers claim to have used the technique to fabricate a densely packed array of 10,000 top-gated graphene transistors on a 0.24-square-centimeter chip, a step toward their ultimate goal of creating graphene components that can be integrated with silicon for new generations of electronics. Such a consolidation would be a key milestone towards making microprocessors able to operate at terahertz speeds, 1,000 times faster than today's chips (whose speeds are clocked at billions of hertz). Another goal is to reduce heat generation as an increasing number of transistors are packed onto each chip. Such advances would continue to validate Moore's law even as silicon circuits reach their miniaturization limit. "In principle, graphene can overcome silicon's limitation," de Heer says. "If we completely succeed [only] time will tell." Graphene and silicon will be able to coexist much the same way that airplanes and freight ships are used for transporting cargo. 
"They move at different speeds, but both are important because they have different costs," de Heer says. "I think a similar thing will happen in electronics." De Heer is also quick to acknowledge that, although the study of graphene dates back to the 1970s, the field still has a long way to go. He and his team are now investigating how the ribbons they created will perform over time and to what degree their new approach improves on cutting pieces of graphene out of larger sheets. With so many open questions about graphene's viability, de Heer says he was surprised that the Nobel selection committee recognized graphene at this time. The technology has tremendous potential but only a fraction of that potential has been realized to date. "It's a little early," he says. "If you ask me the bottom line—What has graphene accomplished?—it's still trying to find its way."
<urn:uuid:6708de11-4253-4cbc-926a-ece70408faa5>
4.1875
997
News Article
Science & Tech.
38.968131
994
Saint Michael's Professor's discovery named a top 10 breakthrough for 2011 by Physics World
Physics World announced its top 10 breakthroughs for 2011 on December 16th. Coming in at number 10 is Saint Michael's College Professor John O'Meara, with his colleagues Michele Fumagalli and Xavier Prochaska of the University of California, Santa Cruz, for their discovery of clouds of pristine gas from the very early universe - a triumph of Big Bang cosmology. The team was lauded by Physics World for being "the first to catch sight of clouds of gas that are pure relics of the Big Bang. Unlike other clouds in the distant universe - which appear to contain elements created by stars - these clouds contain just the hydrogen, helium and lithium created by the Big Bang. As well as confirming predictions of the Big Bang theory, the clouds provide a unique insight into the materials from which the first stars and galaxies were born."
"We are grateful and delighted to have been named a top 10 breakthrough in astrophysics, but there is plenty of work still to be done," Professor O'Meara said.
Criteria for selecting the top 10 breakthroughs award
The top 10 breakthroughs list has been compiled by the Physics World team, who reviewed over 350 news articles about breakthroughs in the physical sciences published on physicsworld.com in 2011. The criteria for judging included:
- fundamental importance of research
- significant advance in knowledge
- strong connection between theory and experiment
- general interest to all physicists
A story of the astronomical breakthrough discovery appeared in Science, the premier science journal in the U.S., November 10, 2011. Using the giant 10-meter Keck I telescope in Hawaii, the three astronomers discovered two giant clouds of intergalactic gas whose chemical composition has been unaltered since the dawn of time.
The clouds, located over 11 billion light years from Earth, offer direct supporting evidence for the Big Bang model of cosmology. O'Meara explained that in the Big Bang model only the very lightest elements such as Hydrogen and Helium were created during the first few minutes of the history of the universe. As cosmic time progressed over billions of years to the present, gas containing these few elements form stars and galaxies. As part of the life cycle of stars, the remaining elements, such as carbon, nitrogen and oxygen, are produced and recycled into the gas within and outside of galaxies. Until now, astronomers have always detected these heavy element remnants wherever they've looked. Gas with no trace of heavy elements was the break-through discovery. "These clouds are exciting for both what they do and don't have," Saint Michael's Professor O'Meara said. "Specifically, they represent the first detection of pristine gas: gas with no trace whatsoever of heavy element absorption. What the gas does contain, however, is hydrogen and its isotope deuterium in the levels predicted by Big Bang models." Although the discovery is a triumph for the Big Bang cosmology, O'Meara points out that it raises new questions. "A good overall model of cosmology, but plenty of work to do," O'Meara said. "These clouds have been uncontaminated by heavy elements for over two billion years since the Big Bang. This means that our understanding of how galaxies return heavy elements to their environments is incomplete. Although we've provided great evidence that our overall model of cosmology is a good one, we still have plenty of work left to do."
<urn:uuid:b9cc29f1-de6d-4069-8cd8-2e630df5f053>
2.53125
746
News (Org.)
Science & Tech.
42.607494
995
Australian scientists discover deep sea corals
SYDNEY (AFP) - Australian scientists mapping the Great Barrier Reef have discovered corals at depths never before thought possible, with a deep-sea robot finding specimens in waters nearly as dark as night. A team from the University of Queensland's Seaview Survey announced the unprecedented discovery 125m below the surface at Ribbon Reef, near the Torres Strait and at the edge of the Australian continental shelf. Dr Ove Hoegh-Guldberg, chief scientist on the project, told AFP on Thursday that coral had previously only been shown to exist to depths of 70m and the finding could bring new understanding about how reefs spawn and grow. "What's really cool is that these corals still have photosynthetic symbionts that supposedly still harvest the light," Dr Hoegh-Guldberg told AFP.
Martian ice swaps poles every 25,000 years (give or take)

Water-ice on Mars swaps poles over a cycle that spans 51,000 years or so, in step with the way the planet precesses, or wobbles around on its axis. Researchers investigating the different types of ice at the Martian poles plugged new data from the Mars Express mission into a model of the planet's climate. Then, adding in details of the planet's slow precession, they ran the clock back 21,500 years to a time when the northern summer was closest to the sun, the exact opposite of the situation today.

[Image: Martian water now, and 21,500 years ago.]

As time passed, the model showed water accumulation rates shifting across the globe. Water at the north pole became unstable and vaporised easily, moving to the southern hemisphere where it recondensed and froze on the surface. Here, over the course of 10,000 years, it formed an ice cap up to six metres thick. Run the clock forward towards today, and the opposite starts to happen: the ice at the south vaporises and shifts on the winds to the north. The process was interrupted about 1,000 years ago, the researchers say, when for some reason a layer of carbon dioxide ice formed a protective layer over the ice, preventing further erosion. The model helps explain newly discovered deposits of ice at the southern pole, spotted by the OMEGA instrument early on in the Mars Express mission. The European Space Agency says the "perennial deposits of water-ice" have built up on top of million-year old layered terrain, and argues that their presence is strong evidence for recent glacial activity. ®
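The cycle the researchers describe can be caricatured with a simple cosine phase: northern summer coincides with perihelion at the reference epoch 21,500 years ago and drifts toward the opposite alignment half a precession cycle later. This is an illustrative toy using only the article's round numbers (51,000-year cycle, 21,500-year reference epoch), not a real Mars ephemeris or the researchers' climate model:

```python
import math

CYCLE_YEARS = 51_000    # full precession cycle quoted in the article
REF_YEARS_AGO = 21_500  # epoch when northern summer was closest to the Sun

def north_summer_forcing(years_ago: float) -> float:
    """Cosine proxy for how strongly northern summer coincides with
    perihelion: +1 at the reference epoch, -1 half a cycle later."""
    elapsed = REF_YEARS_AGO - years_ago  # years since the reference epoch
    return math.cos(2 * math.pi * elapsed / CYCLE_YEARS)

# 21,500 years ago the north was favoured; today the sign has flipped,
# matching the article's "exact opposite of the situation today".
print(north_summer_forcing(21_500))  # 1.0
print(north_summer_forcing(0))       # negative, about -0.88
```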
Dr. James McClintock, a renowned University of Alabama-Birmingham marine biologist who has conducted research in Antarctica for more than 25 years, told me the following story. "You work in a scientific lab in the quietest place on Earth - Antarctica. "There's a crack! Boom! "You rush to the window of your remote lab with a number of your fellow scientists, and you witness a glacier 'calving' a chunk of ice the size of a house into the water. Adrenaline permeates the room. "Ten years ago, that exciting and incredible sight would happen about once a week. It was an event. Something rare. "Today, at that same lab in Antarctica, the calving glacial ice, the explosive sounds, are a daily occurrence. "The scientists are almost 'ho-hum' about it, barely lifting their heads to recognize the melting ice.'' Such is life in a warming world. McClintock has spent most of his life searching the ends of the earth for a cure for cancer and other human diseases. In fact, his research team has discovered marine species in the Antarctic that produce compounds active against skin cancer and influenza. McClintock is not an alarmist. He does not have a political agenda. But he knows firsthand the earth is warming and he understands some of the consequences. Mid-winter temperatures on the Antarctic Peninsula where he works are 10 degrees Fahrenheit warmer than they were 60 years ago. That may not seem like a big difference to us non-scientists, but it's devastating to a delicate polar ecosystem (and other ecosystems). In fact, this spring, McClintock and his research associates documented an invasion of king crabs that are likely to endanger fragile Antarctic clams, snails and brittlestars, or perhaps even the sea squirts that he and his colleagues study that could unlock a cure for skin cancer. This new predator, with its crushing claws, is moving in because of the rapidly warming seas. 
Once they make their way up onto the Antarctic shelf, an archaic marine ecosystem that has been without crushing predators for millennia will find itself largely defenseless. King crabs could very well destroy McClintock's living lab. For McClintock, it's like discovering someone is about to burn down your home and your life's work and possessions. I have always believed the National Academy of Sciences and the National Research Council's motto, "Where the nation turns for independent and expert advice,'' accurately portrays that most venerable institution. As a nation, we have been seeking their advice since President Lincoln established this scientific body in 1863. Last month, without much fanfare, and little to no attention from the national media, the National Academies released their latest congressionally requested report on climate change. The report, "America's Choices,'' does not pull any punches. It reaffirms that climate change is occurring now and that the most effective strategy to combat it would be to begin cutting greenhouse gas emissions immediately. What makes this report more shocking is the fact that it is not new. As far back as 2005, the National Academies of the U.S., France, Canada, the United Kingdom, India, Italy, Japan, Germany, Brazil and China have jointly called upon policymakers throughout the world to address climate change. The message from the National Academies six years ago was virtually identical to the one in 2011. Climate change is real. We need to drastically reduce greenhouse emissions. We need to aggressively seek technological and scientific solutions. Delaying will only make matters worse. And now, more than ever, the signs of climate change are becoming starker. 
The extreme weather and floods in the Midwest and South this spring, historical droughts and fires in Texas and Arizona, permafrost disappearing in Russia and Siberia, floods in Pakistan, massive drought followed by flooding in Australia and whole villages in Alaska disappearing because of sea level rise are just a few recent examples. The climate is changing so rapidly that the Arbor Day Foundation has changed its recommendations for when and where you should plant your trees. Are we going to follow the National Academy of Sciences and countless scientists' advice on climate change? Are we going to listen to Dr. James McClintock and try to save a place that can lead to cures for cancer? Or are we going to barely lift our heads and refuse to recognize the climate changing around us? Byington is publisher of Bama Environmental News. He is a longtime environmental advocate from Birmingham, Ala., who has served on numerous state and national environmental boards.
“The rover works perfectly,” she said. At a JPL press conference, Curiosity project scientist John Grotzinger compared one of the new images sent from Mars to the Mojave Desert. “It’s quite an experience to be looking at a place that feels really comfortable” and familiar, he said. “What’s going to be interesting is finding out all the ways that it’s different.”

Curiosity landed in Gale Crater, which offers opportunities for research that hasn’t been possible on Mars before. Scientists know that the crater was covered with water in the past, and the rover itself may well be sitting on the edge of what was once a river delta. Three-mile-high Mount Sharp also sits in the midst of the 100-mile-diameter crater and will be a major focus of the mission.

High-resolution close-up images released Wednesday also show what appear to be pebbles and gravel over a layer of what scientists believe is bedrock. One set of images also shows a small nearby indentation with exposed rock. “You can see a harder, rocky surface under gravel and pebbles,” Grotzinger said, indicating that the site could become the rover’s first destination.

The Curiosity team expects to spend one to two weeks checking out the basic systems of the rover - the most complex ever sent to another planet - seeing if the 10 science instruments on board are in working order, and switching to a different software system. The ability of the rover to move may be tested during this time, Grotzinger said, but no firm decisions have been made as the vehicle and its environment are checked out.

While some of the new Mars images are striking, the lead of the Curiosity film and photography team, Michael Malin, said that far more precise and dramatic images will come when the more powerful cameras are deployed.