Let’s go back and consider a linear map $T: V \to W$. Remember that we defined its rank to be the dimension of its image. Let’s consider this a little more closely. Any vector in the image of $T$ can be written as $T(v)$ for some vector $v \in V$. If we pick a basis $\{e_1, \dots, e_n\}$ of $V$, then we can write $T(v) = \sum_i v^i T(e_i)$. Thus the vectors $T(e_i)$ span the image of $T$. And thus they contain a basis for the image. More specifically, we can get a basis for the image by throwing out some of these vectors until those that remain are linearly independent. The number that remain must be the dimension of the image — the rank — and so must be independent of which vectors we throw out. Looking back at the maximality property of a basis, we can state a new characterization of the rank: it is the cardinality of the largest linearly independent subset of $\{T(e_1), \dots, T(e_n)\}$. Now let’s consider in particular a linear transformation $T: \mathbb{F}^n \to \mathbb{F}^m$. Remember that these spaces of column vectors come with built-in bases $\{e_i\}$ and $\{f_j\}$ (respectively), and we have a matrix with entries $t^j_i$. For each index $i$, then, we have the column vector $T(e_i)$ appearing as a column in the matrix of $T$. So what is the rank of $T$? It’s the maximum number of linearly independent columns in the matrix of $T$. This quantity we will call the “column rank” of the matrix.
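As a concrete illustration (an example added here, not part of the original post), take the real matrix

```latex
A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 1 \\ 1 & 3 & 4 \end{pmatrix},
\qquad
c_1 = \begin{pmatrix}1\\0\\1\end{pmatrix},\;
c_2 = \begin{pmatrix}2\\1\\3\end{pmatrix},\;
c_3 = \begin{pmatrix}3\\1\\4\end{pmatrix}.
```

Here $c_3 = c_1 + c_2$, while $c_1$ and $c_2$ are not proportional and hence linearly independent. The largest linearly independent subset of the columns therefore has two elements, and the column rank of $A$ is 2.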
Profound doubts were the frequent response when MIT biophysicist Alexander Rich announced that two single-strand ribonucleic acid (RNA) molecules could spontaneously align themselves to form a double helix, just like those of their famous cousin, DNA. Many biologists thought it impossible; the rest considered it unlikely. Today, 50 years later, it is abundantly clear that Rich--who made the discovery with David R. Davies while both were working at the National Institute of Mental Health--was onto something big. In fact, it generated a paradigm shift in the science of biology. The discovery changed how research is done at the molecular level and helped spawn what has become the global biotechnology revolution. To mark the anniversary, Rich was invited to write an article about the work in the December issue of The Scientist. Likewise, Professor Alexander Varshavsky of Caltech wrote an article for Cell that appeared in the Dec. 29 issue. In 1956 Rich and Davies announced in the Journal of the American Chemical Society that single strands of RNA can "hybridize," joining together to form a double-stranded molecule. As a result of the discovery and the work that followed, scientists now routinely identify, isolate, manipulate and replace the genes in living things. Such work led to the Human Genome Project and is pushing science toward a fundamental understanding of how life works. "This was a founding technology of the biotechnology business," explained Rich, the William Sedgwick Thompson Professor of Biophysics. "The discovery was absolutely remarkable because no one, myself included, thought such a thing was possible and could work." The seminal discovery of double-stranded RNA by Rich and Davies came only three years after James Watson and Francis Crick stunned the scientific world by describing DNA's structure as a double helix. Watson and Crick not only described a structure, but also suggested how inherited information--genetic information--is safely stored and can be passed from one generation to the next. It was a major milestone in the biosciences. In 1953, Rich--working with famed chemist Linus Pauling at Caltech--was using X-ray crystallography to try to discover the structure of RNA, hoping to learn more about its role in life. One nagging question was whether RNA, like DNA, could exist in a double-stranded helical molecule. The X-ray images weren't helping much; they were fuzzy, inconclusive shadows of the gooey, glassy fibers that were pulled from a glob of RNA. At Caltech, and later at the NIH, Rich and his colleagues "talked a lot about RNA," he told a reporter from Chemical and Engineering News last month. "But nobody--including myself--suggested, 'Why don't you mix together PolyA and Poly U,' the two differing strands of RNA. It wasn't at all obvious that could work," he said, in part because everyone felt an enzyme would be needed to stitch them together. "People had no idea that hybridization could occur by itself." Ultimately Rich did try mixing the two strands, resulting in the discovery of double-stranded RNA, but to this day, he said, he can't recall what prompted him to do so. "I've asked my colleagues and searched through my memory, and I don't actually know." But it did work, and that has made all the difference.
Dynein Motor Domain Shows Ring-Shaped Motor, Buttress

Movement is fundamental to life. It takes place even at the cellular level, where cargo is continually being transported by motor proteins. These tiny machines convert the energy gained from hydrolysing ATP into a series of small conformational changes that allow them to literally "walk" along microscopic tracks. Motor proteins (in the kinesin and myosin families) have been extensively studied by X-ray crystallography, but until recently there was little molecular structural information for dyneins, another type of motor protein. A group from the University of California, San Francisco, working at ALS Beamline 8.3.1 has reported the 6-Å-resolution structure of the motor domain of dynein in yeast. It reveals details of the ring-shaped motor as well as a new, unanticipated feature called the buttress that may play an important role in dynein's mechanical cycle. Like other motor proteins, dynein produces movement by coupling ATP binding and hydrolysis with changes in shape. ATP binds to dynein's motor domain, causing it to release from the microtubule track to which it's bound and resetting a mobile part of the structure called the linker domain into a different position. Then dynein rebinds the microtubule, triggering the hydrolysis of ATP and a conformational change in the position of the linker domain, generating force. Finally, ADP is released from the motor domain, allowing another ATP molecule to bind. Dynein differs from other motor proteins in two ways. First, instead of having a single ATP-binding domain, dynein comprises a ring of six AAA+ domains (ATPases Associated with diverse cellular Activities). At least two of these domains are sites of hydrolysis, which raises the question of how many ATP molecules dynein actually uses. Second, the microtubule-track binding site is in a separate domain from the ATP binding site, found at the end of a long alpha-helical projection called the stalk. The structure of dynein's microtubule-binding domain was previously determined at ALS Beamline 8.3.1 (see ALS Science Highlight How Dynein Binds to Microtubules). Now, the X-ray crystal structure for its motor domain has been solved at the same beamline using phases determined from a polytungstate cluster (W12) heavy-atom derivative. Because of the conserved structure of AAA domains and the high alpha-helix content of dynein, researchers were able to build a model and assign a position to all the secondary structure elements. This model shows the mobile linker domain arching over the ring of AAA domains. It also shows that the linker is on the same face of the ring as a set of highly conserved inserts in the AAA domains, suggesting that as the linker moves it may contact different AAA domains. The ring itself was remarkably asymmetric, with some AAA domains packed close together and others gaping open. Notably, dynein's main ATP binding site, between AAA1 and AAA2, was in an open conformation, consistent with the fact that the structure was solved in the absence of ATP and ADP. An intriguing part of the newly modeled structure is the buttress: a coiled-coil hairpin that extends out of AAA5 to contact the microtubule-binding stalk. The presence of the buttress was unanticipated even though, in hindsight, both electron microscopy and coiled-coil prediction software previously hinted at its existence.
It is perfectly placed to link ATP-driven rearrangements of the AAA ring to conformational changes that propagate along the stalk to change the affinity of the microtubule-binding domain for its track. The buttress therefore seems to be a key element in the coupling of ATP hydrolysis to dynein's movement along microtubules. Research conducted by A.P. Carter (UC San Francisco and Medical Research Council Laboratory of Molecular Biology) and C. Cho, L. Jin, and R.D. Vale (UC San Francisco). Research funding: U.S. Department of Energy (DOE), Office of Basic Energy Sciences (BES). Operation of the ALS is supported by DOE BES. Publication about this research: A.P. Carter, C. Cho, L. Jin, and R.D. Vale, "Crystal structure of the dynein motor domain," Science 331, 6021 (2011). ALS Science Highlight #238
Lichens in Australia The first published report of an Australian lichen appeared in 1806 and since then numerous species have been reported for Australia. Currently over 3000 species are said to occur in Australia and a little over a third are not known elsewhere. Amongst those 3000 plus there are still undoubtedly some invalid records. Such invalid records can arise from a variety of causes and the following paragraph will give some examples. Specimens collected in Australia have been the basis for the descriptions of many species. At times what has later been shown to be the one species has been described as new several times under a variety of names. Here is an example. In 1876 James Stirton of Glasgow described the new species Graphis mucronata (photo below). In 1882 Jean Müller of Geneva described four new species (Phaeographis australiensis, Phaeographis cinerascens, Phaeographis inscripta, Phaeographis subcompulsa) and in the same year Charles Knight of New Zealand described the new species Graphis aulacothecia. All these species were based on material collected in New South Wales. Recent study has shown that all these species are identical and Stirton's name is the one used for it. The species Letrouitia subvulpina, originally described from Cuba, was once thought to occur in Queensland. This was based on the assumed equivalence of that species with Letrouitia sayeri, the original description of which was based on material collected from Queensland and was published after that of Letrouitia subvulpina. Chemical and microscopic analysis has shown the two are best considered as separate, though superficially similar, species. The species Trypethelium exiguellum was described in 1899, based on a specimen collected on Thursday Island in Queensland, and for a century was not recorded from anywhere else. In 1992 this seemingly Australian endemic lichen was shown to be a non-lichenized fungus. Lepraria incana, widespread in the world, had been thought to occur in Tasmania, Victoria and Western Australia but a recent study found no evidence of the species in Australia. Australian specimens named as Lepraria incana were found to be misidentifications of Lepraria lobificans or Lepraria yunnaniana. Lecanora subpiniperda and Lecanora subpurpurea were originally described in 1882 and 1899 respectively, based on material collected in New South Wales and Queensland, respectively. The original species descriptions were very brief and the original (or type) specimens can no longer be found. It is therefore impossible to re-examine the type specimens with modern methods to see if or how those two species differ from others in the genus. Those two Lecanora species are therefore doubtful species. The previous paragraph gave examples of some of the erroneous Australian species records that have been detected and removed. It is easy to see that erroneous records have biogeographic implications. There are still various lichen groups in need of critical study in Australia and future studies of such groups will undoubtedly reveal more dubious species records. Nevertheless, even allowing for such undetected errors, there are still a great many species well studied and validly recorded for Australia, so making some biogeographical analysis possible. With such a wide range of lichens found in Australia it's not surprising that they are geographically distributed in a variety of ways. The DISTRIBUTION PATTERNS page gave examples of distribution patterns, a number of which included Australia. 
The following two pages give some more biogeographical information about the lichens found in Australia: AUSTRALIA AND ELSEWHERE – This page gives some examples of lichens found in Australia and other parts of the world. ENDEMIC TO AUSTRALIA – Deals with lichens known only from Australia. The remainder of this page will be devoted to a couple of examples, the first involving a single species and the second the summary of a single field trip. The first shows that it can take a considerable time to build up a good picture of a particular species' distribution while the second shows that Australia still offers great scope for lichen exploration, even in non-remote areas. Both show that there are still likely to be many changes in ideas about the biogeography of Australian lichens. Buellia levieri is a good example of a species that, for many years, appears to be confined to a very small area of Australia but is then shown to be far more widespread. This species was described in 1911 by Antonio Jatta, of Italy, based on material collected near Geeveston in Tasmania by William Weymouth. There is no record of habitat information with the specimen, but it seems likely that it was collected in a cool temperate rainforest. A second collection, from cool temperate rainforest in a different area, was made in 1983 by the Tasmanian lichenologist Gintaras Kantvilas. The authors of a paper published in 1994 noted that the species was then still known only from those two collections and wrote: "That only two collections of this species are known, despite extensive recent collecting activity in Tasmania, particularly in wet forests, suggests it is extremely rare." In 2007 it was reported from Western Australia and by the end of 2009 it had been reported also from New South Wales, Queensland, Victoria and distant South America. Despite now being known from widespread parts of Australia it is still rare and known from very few locations, as you can see from the accompanying map. However, the currently known distribution suggests that the species is likely to be found at other places as well. Moreover, it appears to tolerate a variety of habitats since it was collected in Western Australia from a dead Acacia in remnant Acacia-Eucalypt woodland along a seasonal creek, quite different to the cool temperate rainforest of Tasmania.

A western fortnight

There are still many areas of Australia that are largely unexplored from a lichenological perspective. There are diverse habitats (and micro-habitats) in the many areas that are accessible only by rough roads, especially when you look at the large part of Australia that is more than say a hundred or so kilometres from the nearest coast. However, it is not necessary to go to remote areas to make interesting finds. During April-May of 2004 the Australian lichenologist Jack Elix and two fellow cryptogamists spent a fortnight collecting lichens, bryophytes and fungi in south-west Western Australia, predominantly in non-coastal areas, and the red dots on the accompanying map show the collecting sites. The blue cross to the lower left indicates Perth. The blue dots indicate Geraldton (north of Perth) and Kalgoorlie (to the east). The area covered is not remote from major towns or cities, most of the sites were accessible by all-weather roads and the fortnight produced many interesting collections.
With regard to the lichens, many collections were of species already known to occur in Western Australia, but of these a good proportion were collected from areas where they had not been recorded previously. There were also specimens of lichens already known from other parts of Australia (and perhaps overseas as well) but not yet from Western Australia. The following is a list of those species as well as the other places where the species had been found, first the Australian states or territories and then, after a dash, any overseas countries or regions. Where a species is known only from Australia I give, in brackets, the year in which the first description of the species was published. The fieldwork also yielded specimens of two species, previously known from outside Australia, but not yet found in Australia - Diploschistes conceptionis (known from Chile and Uruguay) and Xanthoparmelia applicata (known from South Africa). Finally, the following new species (and one new subspecies) have been described, based on specimens collected during that fortnight: Buellia psoromica, Diploschistes elixii, Buellia xanthonica, Hypocenomyce isidiosa, Maronina hesperia, Parmeliopsis chlorolecanorica, Pertusaria subarida, Thysanothecium hookeri subsp. xanthonicum and Xanthoparmelia baeomycesica. The distributional information given for any species listed above reflects the state of knowledge on the eve of the publication of that species' discovery in Western Australia. For example, the Western Australian find of Buellia substellulans was published in 2006 and until then the species had been known only from Queensland and New South Wales, and in both those states since 1886. It took 120 years to expand its known distribution - and then considerably. The paper that published the Western Australian find also reported collections from the ACT, the Northern Territory and Tasmania, so giving this species a wide, though still patchy, distribution in Australia, as shown in the accompanying map. There are other species listed above which are also now known from other places but I haven't given the currently known distribution for each. The aim of the above is simply to show that a fortnight's concentrated searching in a rather small, non-remote area of Australia is capable of yielding an impressive addition to knowledge of the country's lichens. Such discoveries are not confined to Western Australia and, since 2004, comparable results have resulted from concentrated field work over similar periods in various other non-remote areas of Australia. Lichen biogeography pages on this website
A Change of Seasonality of the Upper Arctic Ocean in Response to Atmospheric and Sea Ice Forcing

The PI proposes to investigate the change of seasonality of the Arctic Ocean Ekman transport and upwelling/downwelling, and their relationships to key components of the arctic system. A systematic investigation of the Arctic Ocean upwelling and its change of seasonality is needed in order to understand complex interactions among components of a rapidly changing arctic system. Upwelling is arguably the single most important dynamical variable in oceanography. It forces oceanic gyres through the Sverdrup balance. Upwelling replenishes nutrients in the surface layer and helps to sustain phytoplankton production. It is a conduit for heat flux from the warm Atlantic Water layer of the Arctic Ocean to the mixed layer, where it influences the sea ice thermodynamics and the atmospheric circulation. Upwelling and Ekman transport affect the surface salinity and thus play roles in the oceanic thermohaline circulation. The most intense upwelling occurs along the Arctic coast, so a change of upwelling seasonality can influence land processes, such as the permafrost in nearshore sediments. Daily Ekman transport and upwelling from 1979 to 2006 have been estimated using satellite and buoy observations. This dataset will be updated, validated and made available to the community in the first year of this project. The PI will quantify the changes in seasonality of the Ekman layer, and examine how they are related to the decline in sea-ice cover and to oscillatory climate modes (such as the Arctic Oscillation). The impacts of the changes in seasonality of upwelling on the chlorophyll concentration, mixed-layer temperature and salinity will be examined.

Project Duration: 1 July 2009 - 30 June 2012
Supplemental Project File(s):
Programs: Arctic System Science Program
Funding Agency: National Science Foundation
Funding Solicitation/Announcement: Changing Seasonality in the Arctic System (CSAS): NSF 08-567
Unique Project Identifier (Grant #, Project #, Other): 0902090
Grant/Project Funding Amount: $497,366

Yang, J. (2009), "Seasonal and interannual variability of downwelling in the Beaufort Sea", J. Geophys. Res., 114, C00A14, doi:10.1029/2008JC005084.
Pickart, R.S., G.W.K. Moore, D.J. Torres, P.S. Fratantoni, R.A. Goldsmith, and J. Yang (2009), "Upwelling on the continental slope of the Alaskan Beaufort Sea: Storms, ice, and oceanographic response", J. Geophys. Res., 114, C00A13, doi:10.1029/2008JC005009.
"A Change of Seasonality of the Upper Arctic Ocean in Response to Atmospheric and Sea Ice Forcing" Download PDF (3.14 MB)
Yang, J., "Interannual Variability of the Arctic Ocean Ekman Transport and Upwelling" Download PDF (944 KB).
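For reference, the standard relations behind the Ekman transport and upwelling estimates mentioned above (textbook formulas added here, not taken from the project description): for surface wind stress components (τx, τy), water density ρ, and Coriolis parameter f,

```latex
% Ekman volume transport per unit width (Northern Hemisphere):
U_E = \frac{\tau_y}{\rho f}, \qquad V_E = -\frac{\tau_x}{\rho f}
% Ekman pumping (upwelling) velocity at the base of the Ekman layer:
w_E = \frac{1}{\rho}\,\hat{\mathbf{z}}\cdot\nabla\times\!\left(\frac{\boldsymbol{\tau}}{f}\right)
```

Positive w_E corresponds to upwelling; the transport is directed 90 degrees to the right of the wind stress in the Northern Hemisphere.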
Interview with Matt Golombek
Pasadena, Spirit Mission Sol 2

Now that Spirit has landed safely on Mars, the mission science team has begun to think about where they'd like to send the rover and what scientific experiments they'd like to do. Astrobiology Magazine's managing editor Henry Bortman caught up with geologist Matt Golombek on the second day of the mission to get his initial impressions of the Spirit landing site and the possibilities for scientific discovery. Golombek was the chief scientist for the Pathfinder mission and is a landing site scientist for the MER missions.

Astrobiology Magazine (AM): There have been a couple of comments indicating that from the images that Spirit has sent so far, the science team can already tell that Gusev Crater is, as predicted, a dried-up lakebed. I look at these pictures and just see a bunch of rocks. What makes it a lakebed?

Pathfinder chief scientist, Dr. Matthew Golombek, of the Jet Propulsion Laboratory. The Pathfinder lander, formally named the Carl Sagan Memorial Station following its successful touchdown, landed on July 4, 1997 with its rover, called Sojourner. Pathfinder returned 2.6 gigabits (2.6 billion bits) of data and 16,000 tantalizing pictures of the Martian landscape, including 550 close-up images from the rover. Sojourner performed 15 chemical analyses of rocks.

Matt Golombek (MG): Well, you can't tell at this point. We don't know. But in the [site] selection [process], we evaluated every piece of data that was available to be evaluated, and we made a series of predictions [about the site]. At the largest scale, we said it would be safe for landing, it'd be safe for driving, and we said it would have less rocks than the VL [Viking Lander] 1 and 2 sites. And that's exactly what we see. Now, you can actually tell an awful lot just by looking at the geomorphology [landforms, rocks and dust] of the site. What you see is a reasonably flat surface. You see very shallow craters - what I would interpret as craters - we don't know that for sure, but I'm pretty sure that's what they are. What you don't see are a lot of drift or dunes. You see some dune material. In some ways those dunes look like Mermaid Dune at the Pathfinder site. My preliminary impression - not just mine, but that of many of the other geomorphologists on the team - is that this is a site that's had aeolian [wind-blown] activity, that dunes have marched across the site and taken most of the fine-grained stuff with it. So you don't see a lot of dune material. We didn't see any dunes, really, just a few small ones, like [at the] Pathfinder [site]. We saw what looks like a lag deposit, like you've taken away all the fines and what you're left with are the rocks that are too big [for the wind] to move. And a duricrust surface, which is a more heavily cemented surface, particularly in the center of a bowl-shaped crater [visible in some images]. So in a lot of ways, [the site is] pretty much similar to what we expected and predicted.

AM: Okay, but people have said, "It looks just like a lakebed." What is it about the terrain that makes people say that?

MG: Just the flatness.

AM: But you can find a lot of flat places that aren't lakebeds.

MG: That's right. There's nothing unique at this point. I think we really need to go out and start looking at the mineralogy and start looking at the deposits, and that's really what's going to tell us.
But it looks like [just what] you'd expect [a lakebed to look like]: a flat surface like that. Part of that is its location inside a crater. But we could be totally wrong about that, and we won't know till we get out and start looking around.

AM: So would it be accurate to say that you haven't found anything that's inconsistent with the theory that Gusev is an ancient lake bed, rather than that you have seen clear evidence that it is?

MG: Yeah, that's right. There's nothing that we have yet that would compellingly indicate that we're on the bottom of a lakebed, certainly the primary surface of a lakebed.

AM: And will you be able to tell that from the images that will be coming back from Mars in the next few days?

MG: We'll get clues. If we see certain deposits [in the images from] the Mini-TES, the thermal emission spectrometer, that indicate standing-body-of-water deposits - anhydrites, carbonates, things like that - that would be fairly compelling to suggest that we're sitting in a place where water was.

White [dry-ice] patches of frost on the ground are visible behind the Viking 2 Lander. Top banner image is another Viking landscape in false color, particularly showing dust-covered rocks and apparent discoloration from oxidation or fresher, wind-blown soil. Credit: NASA.

In the absence of that, we're probably going to need to go and look at some of those rocks up close, in detail, with the Mössbauer [spectrometer] and the APXS [Alpha Proton X-ray Spectrometer].

AM: Steve Squyres mentioned that in some of the images you can see the lip of what might be a crater.

MG: Yeah. My interpretation is that that is a shallow crater. And it looks like it's been filled in. And that lip looks like it would be a piece of the crater rim. It looks like there's a series of boulders along that rim. So that's one of our first targets, to traverse across that shallow crater and get to that rock on that rim. And then at a broader scale, we're looking at where we've actually come to rest. We have the DIMES images - [a series of 3 images taken during the lander's descent] - now, so we know roughly where we are. We don't know precisely. We're still lining up the azimuths. But there is an exposure, that etched terrain, just to the southeast. Those are those irregular cliffs or mountains in the distance. They look to be 1 to 2 kilometers away. And given how flat this surface is, that might be a place that we decide is important enough that that's where we want to be heading towards. But all of this is just the very initial ideas that we have.

AM: Are they actually close enough to get to?

MG: The estimates for the [total] drive distance that we set for the mission depended upon what the rock abundance was and how flat the terrain was. Well, we have a very smooth site that is just about ideal for long-range driving. And if we don't see a lot of variability in the [nearby] rocks, it's very likely that [after checking] a couple of those out, maybe at the edge of that crater lip, [we'd] start heading off to look at something different as well. And this etched terrain really looks like it's a different [geological] unit. It's been mapped by everyone as a different unit than the stuff that we landed on.

AM: Is it different mineralogically as well?

Top, Death Valley, Calif.; bottom, Sojourner rover image of Mars. Earth analogs for Gusev Crater have been offered as the cold Lake Vanda, Antarctica and the African crater lake, Lake Bosumtwi, Ghana.
In area, the crater at Gusev is large enough to 'swallow' the state of Connecticut.

MG: We have no indications of any mineralogy at this site that are unique, so we can't say from that yet. But there's a gazillion reasons why you wouldn't pick that up from orbital data. We have a much better chance, now that we're on the surface, of starting to see those distinctions. And that's really what we'll be looking for. So in the next week, we're going to try to pinpoint where the spacecraft is, and we'll come up with fairly specific plans for the near-long-term, maybe the first couple of weeks: getting to the rim of that small crater, perhaps. And I think we'll try to [also develop] a long-long-range goal for the mission that would include a target that we'd like to get to by the end of the 90 sols.

AM: How far could Spirit go, if everything went well?

MG: The tests that were done were done in Viking 1 and 2 terrain, which has 20 percent rock abundance, and a much more variegated surface. And this surface has, it looks like, 5 to 10 percent rocks. That's pretty much exactly what the remote sensing [orbital] data said. I can't tell you how thrilled I am that this site is just what we said it was going to be.

Spirit preparing to investigate a rock. Credit: NASA/JPL

You saw how flat it is. So avoiding the rocks is going to be a piece of cake. There just aren't very many big ones that you need to worry about. And you could easily provide X-Y waypoints [interim destinations] along the way. I don't see what the problem would be; you could easily go 50 meters a sol if you wanted to. And you've got 90 sols, so you could go quite a distance.

AM: And there's nothing that restricts the total distance the rover can go other than its energy budget. As long as its batteries get recharged every day, it can just keep going?

MG: Just the energy budget, that's right. And typically you'd be trading driving for remote sensing or other [science] activities. You want some mix of those, obviously. It really depends upon the variability of the materials at the site. If we look with the Mini-TES and we see a dozen different kinds of materials, well chances are there's going to be a lot of interest in figuring out what those are first, before you start trundling across [the terrain]. But if it looks like one or two rock types, and we figure out what those are, and we look at the sediments and those are all consistent, then there's every indication that we should try to get to something that, mapped out, looks like a different thing; and this etched terrain looks completely different - from the geomorphology, anyway. It's had a different geologic history [than the nearby material]. So that would be a great place to go.
Magny et al., utilizing advanced scientific techniques, reconstructed 1,000 years of past summer (July) temperatures for the Swiss Jura Mountains region. Unequivocally, they found that July temperatures during the Medieval Period were significantly warmer than modern summer temperatures. "Working at Lake Joux in the Swiss Jura Mountains...employed a multi-proxy approach with pollen and lake-level data to develop a 1000-year history of the mean temperature of the warmest month of the year (MTWA), which was July...based on the Modern Analogue Technique. This work revealed what they describe as an "MWP between ca. AD 1100 and 1320," during which time the MTWA at Joux Lake exceeded that of the 1961-1990 reference period by fully 2.0°C....Thus, it would appear that the peak warmth of the MWP at Lake Joux exceeded that of the CWP at that location by something on the order of 0.4-1.0°C." [Michel Magny, Odile Peyron, Emilie Gauthier, Boris Vannière, Laurent Millet, Bruno Vermot-Desroches 2011: Quaternary Research]
Place a sheet of paper on a flat desk. Assume that the magnetic field is coming perpendicularly out of the paper. Three particles are traveling diagonally from the left bottom edge to the right top edge. Particle 1 is deflected upwards, Particle 2's motion is not affected and Particle 3 is deflected downwards. What can you say about the charges on the particles?
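A worked sketch of the underlying physics (added here for clarity; it is not part of the original question). Take x to the right, y up the page, and z out of the page, and apply the Lorentz force law:

```latex
\mathbf{F} = q\,\mathbf{v}\times\mathbf{B}, \qquad
\mathbf{v} = v_x\hat{\mathbf{x}} + v_y\hat{\mathbf{y}}, \quad
\mathbf{B} = B\hat{\mathbf{z}}
\;\;\Longrightarrow\;\;
\mathbf{v}\times\mathbf{B} = v_y B\,\hat{\mathbf{x}} - v_x B\,\hat{\mathbf{y}}
```

For motion toward the upper right (v_x, v_y > 0), the vector v × B points toward the lower right, so a positive charge is pushed downward across its path, a negative charge upward, and a neutral particle is undeflected. On that reasoning, Particle 1 is negatively charged, Particle 2 is uncharged, and Particle 3 is positively charged.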
Groundhog Day in a Year Without a Winter

The groundhog Punxsutawney Phil may have seen his shadow today, but the prospect of six more weeks of the mild winter of 2011/12 doesn't seem so terrible. In fact, now that we're past the typical coldest period of the year, the days are already getting longer, and the typical average temperatures are warming up day by day across the country. In many areas, this tame winter has been unusual but not unheard of. For example, in the Northeast, the winter has been one of the warmest and least snowy on record, but it has been warmer during past winters. (The Weather Channel has a nice comparison between snow cover charts from February 2011 vs. 2012.) While winter temperatures have been increasing, on average, due to global warming, the mild winter this year is likely mainly due to natural climate variability, including a La Niña event in the Pacific Ocean and the orientation of the upper air jet stream. Temperatures in the Northeast have averaged at least 5°F above average since December, with very little snow cover, according to Art DeGaetano, a Cornell University climatologist and the director of the Northeast Regional Climate Center. "Although December 2011 and January 2012 have been warm, you do not have to go back too far to find a warmer period. The early winter of 2001-02 was the warmest at many Northeast U.S. stations. Over a longer time frame, the early winter of 1931-32 stands out as the warmest at the majority of Northeast U.S. sites," DeGaetano said in a press release. The same is true in other parts of the country, although in select locations this winter may rank among the top 10 warmest on record, depending on how mild February turns out to be. In normally frigid Minneapolis-St. Paul, for example, January featured temperatures that were 8°F above average. And across the U.S., January snow cover was the 3rd lowest on average, according to NOAA (H/T Paul Douglas). Scientists say the mild winter will reverberate throughout ecosystems during spring, with bigger deer populations, thanks to more accessible vegetation for them to feed on throughout the winter. David W. Wolfe, a Cornell professor of plant and soil ecology, said the lack of severe cold "will benefit some insect pests and invasive weeds like kudzu." "On the positive side, if you are a farmer or gardener experimenting with crops or ornamentals that sometimes can't survive a severe winter, this will be a good year for you," he said. Last week the U.S. Department of Agriculture released new maps of plant hardiness zones, indicating a northward shift has taken place, which reflects a warmer climate. Although the Agriculture Department did not characterize the shift as being due to climate change, the movement of the hardiness zones is consistent with climate change projections for the U.S. The mild winter may also benefit insects such as mosquitoes and ticks. Jody Gangloff-Kaufmann, a professor of entomology and a specialist with the New York State Integrated Pest Management Program, said: "This year, lots and lots of hungry ticks will emerge even on warm winter days. I anticipate the mosquito problems we normally see to be much more intense and begin earlier than usual if the weather continues to be mild. Even the fleas have had a boost so far this winter and many people are complaining about flea problems right now, in the middle of winter." Mild temperatures during the past week have set records from the Central Plains to the East Coast. New York's John F.
Kennedy Airport, for example, reached 64°F on Feb. 1, breaking the old record of 62°F set in 1989. So far this winter, Alaska has been the only U.S. state that has seen consistently severe cold weather.
In another class I used:

CoffeeCup original = new CoffeeCup();
original.add(75); // original now contains 75 ml of coffee
CoffeeCup copy = (CoffeeCup) original.clone();

Now I get an exact copy of the CoffeeCup. My question is: how is the application creating a subclass object (CoffeeCup) when I call super.clone()? As per my understanding, calling super.clone() means calling Object.clone() -- if that is the case, how does it create the object of the subclass? Is there any underlying implementation that the Object class uses for creating the subclass object?

All you need to know is that Object.clone() will simply create a new object of the exact same type (so in your case CoffeeCup) and copy all fields by using simple assignments. For example, Line 11 could be replaced with a manual field-by-field copy like the one sketched at the end of this answer. There are three big differences though:
1) when you use Object.clone(), a subclass can call super.clone() and it will not return an instance of Test but an instance of that subclass;
2) Object.clone() does not call a constructor, so no constructor code is executed;
3) Object.clone() can also copy final fields.
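A minimal, hypothetical sketch of what the answer describes -- the Test class and its field are my illustration, not the original forum snippet (CoffeeCup would behave the same way):

```java
// Hypothetical sketch: a cloneable class with one field.
public class Test implements Cloneable {
    private final int amount; // note: clone() can copy final fields

    public Test(int amount) {
        this.amount = amount;
    }

    @Override
    public Test clone() {
        try {
            // Object.clone() allocates an object of the *runtime* class --
            // in a subclass, super.clone() therefore yields that subclass,
            // not Test -- and copies every field by plain assignment,
            // without running any constructor.
            return (Test) super.clone();
        } catch (CloneNotSupportedException e) {
            // Cannot happen: this class implements Cloneable.
            throw new AssertionError(e);
        }
    }

    // The manual alternative the answer alludes to: same field values,
    // but this runs a constructor and is fixed to the class named here.
    public Test manualCopy() {
        return new Test(this.amount);
    }
}
```

With this in place, new Test(75).clone() yields a Test whose amount is 75, and in a subclass of Test the same super.clone() chain returns an instance of that subclass rather than of Test.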
Creeping climate change in the Southwest appears to be having a negative effect on pinyon pine reproduction, a finding with implications for wildlife species sharing the same woodland ecosystems, says a University of Colorado Boulder-led study.

According to Einstein, whenever massive objects interact, they produce gravitational waves -- distortions in the very fabric of space and time -- that ripple outward across the universe at the speed of light. While astronomers have found indirect evidence of these disturbances, the waves have so far eluded direct detection. Ground-based observatories designed to find them are on the verge of achieving greater sensitivities, and many scientists think that this discovery is just a few years away.

Scientists have disagreed for many years over the precise cause for a period of cooling global temperatures that began after the Middle Ages and lasted into the late 19th century, commonly known as the Little Ice Age.

Physicists at JILA on the CU-Boulder campus have for the first time observed chemical reactions near absolute zero, demonstrating that chemistry is possible at ultralow temperatures and that reaction rates can be controlled using quantum mechanics, the peculiar rules of submicroscopic physics.

An increase in inhibitions could reduce anxiety in individuals suffering from anxiety and, as a result, help improve their decision making. A new CU-Boulder study shed light on the brain mechanisms that allow people to make choices and could be helpful in improving treatments for the millions suffering from the effects of anxiety disorders. In the study, psychology professor Yuko Munakata and her research colleagues found that “neural inhibition,” a process that occurs when one nerve cell suppresses activity in another, is a critical aspect in an individual’s ability to make choices.

They’re called cowboys, but you won’t find them astride a horse rounding up stray cattle. They are scientists—dubbed disease cowboys—who search for the cause when unknown diseases break out in remote locales. Ian Buller, a CU-Boulder senior majoring in ecology and evolutionary biology, has his sights set on being one of these daring “disease cowboys” and to specialize in disease ecology, specifically identifying and studying disease emergence and designing control programs.
Think before you code
This tip was submitted by Timofei Gerasimov on 2004-12-29.

Have the structure of the program very clear in your mind before you ever write a single line of code. If making diagrams or outlines using paper and pencil helps you to do this, go right ahead. This is especially critical if you are writing non-trivial programs, as in more than 200-300 lines. If you have a solid plan before you start to code, you are much less likely to write useless code. It's not very much fun to get halfway through your program and then realize that there's a major structural flaw.
Driven to extinction? Researcher believes invasive species might have caused biodiversity disaster 370 million years ago

The Dunkleosteus, a huge armored fish without teeth, disappeared after the Devonian Period.

When Ohio biologists study invasive species, they typically focus on the here and now — the emerald ash borers killing millions of trees statewide or the zebra mussels choking Lake Erie. Not Alycia Stigall. She looks back — way back. Stigall, an Ohio University paleobiologist, says a much older set of invaders could explain a huge biodiversity crisis more than 370 million years ago. And, she said, that crisis could possibly offer a cautionary tale for our future. For her research, Stigall looked at specific ocean-dwelling brachiopods, bivalves and crustaceans that lived in the North American region. A study that Stigall authored, published last month in the journal PLoS ONE, concludes that the evolution of most of those brachiopods, bivalves and crustaceans was undermined by invasive species, which played an integral role in the Devonian event. Some species became extinct. Stigall said these invaders evolved in large, shallow saltwater basins that were cut off from the rest of the world’s oceans. They include several species of crustaceans, fish and shelled animals similar to clams and mussels. When sea levels rose — a combination of climate change and the gradual collision of continents caused it — these species were able to spread to new areas where they could compete with other animals for food and living space. One such invader was the brachiopod Pseudatrypa, which moved from the Appalachian region basin to the Iowa basin and ultimately to an area now known as New Mexico. She said she found that as invaders spread and began to dominate entire regions, they reduced the ability of all ocean organisms to evolve into new species. This might have contributed to a collapse of Earth’s reef systems during the Devonian Period, 378 million to 375 million years ago. Before the collapse, what is now Ohio was covered by a warm, shallow sea teeming with algae, plankton, fishes and sharks. “The organisms that were the dominant reef builders were really hit by that crisis,” Stigall said. “Reef ecosystems were absent from Earth’s oceans for almost 100 million years.” Scientists know of five “mass extinctions.” The most well-known is the worldwide extinction of the dinosaurs, which occurred roughly 65 million years ago. One of the most popular theories for that extinction was the impact of a large meteor or comet, which sent a huge plume of dust into the atmosphere, blocking sunlight and chilling the planet. The Devonian extinction is more mysterious, according to Stigall and other paleobiologists. Bruce Lieberman, a University of Kansas researcher who studies trilobites — Devonian-era cousins of the horseshoe crab — said the rate of the creation of new species dropped dramatically.
Lieberman was a co-author on similar paleobiological papers with Stigall. That meant as species became extinct at a more or less normal rate over thousands of years, few new species emerged to replace them. As oceans rose and the number of these invasion events increased, the average number of ocean-dwelling species declined. An estimated 15 invasive events that occurred 378 million years ago were followed by an 80 percent drop in species diversity roughly 3 million years later. Using a database of fossil records, Stigall found that the rate at which three classes of common mollusk, Floweria, Leiopteria and Schizophoria, evolved was low. In fact, she writes that the number of new species dropped 83 percent from 385 million years ago to 370 million years ago. Floweria became extinct in the later Devonian. The rate at which Archaeostraca, a class of crustacean species, evolved was less than half the rate at which similar modern-day animals evolve. Stigall said new species, which need space and few competitors to establish themselves, didn’t have a chance to develop in an environment dominated by invaders. “When these invaders come in, they are broadly adapted and can eat a lot of food sources,” she said. “Any new species has a problem.” Invaders can take over fast, said Jeff Reutter, director of the Ohio Sea Grant and Stone Lab at Ohio State University. He said zebra mussels and quagga mussels spread across Lake Erie in a matter of a few years in the late 1980s. “Lake Erie used to have lots of clams,” Reutter said. “We still have clams in Lake Erie, but they are in refuges, isolated areas. It’s reasonably hard to find clams in the open waters.” While Stigall said the same thing could have happened during the Devonian, she is quick to point out that no one is sure what happened all those millions of years ago. “When we talk about the Devonian, there are a lot of weird things going wrong. The oceans are going through an overturn; there is climate change going on,” she said. Her studies, she said, offer a clue. “It’s a presence of invaders that’s unique.”
Pex is a testing tool developed by Microsoft Research. It provides three very useful capabilities to support testing. First, it can explore your code and suggest which tests you should have. Second, if you have parameterized tests, Pex can figure out which combinations of parameters need to be tested in order to give you full coverage of possible scenarios. And finally, if you use Code Contracts, then Pex will employ that information to fine-tune the unit tests it suggests or generates for you. I'll review the three aspects in more detail after a quick look at installation and setup. You can download Pex from Microsoft. If you're curious about the underlying technology, you can also access a few PDF documents explaining the internal workings of the tool from the site. Pex is a standard add-in for Visual Studio and should be installed like any other.

Suggesting Unit Tests

Now let's look at what Pex can do. Suppose you are not a TDD proponent; at some point, you may happen to have a C# class and no unit tests for it. With Pex, all you need do is right-click on the class code in Visual Studio and click the Pex|Create Parameterized Unit Tests menu item. The effect is that a new test project is created and added to the selected solution; moreover, the test project can target MSTest, the default Visual Studio testing framework, as well as other frameworks, including NUnit. Open up the project and you'll find good parameterized tests are there for you to complete and run. However, auto-generated tests don't really do much except invoke a method with parameters. At the very minimum, you'll want to complete them with some assertions. A parameterized unit test is simply a test that accepts parameters; parameters can be provided in a number of ways depending on the framework. Typically, you pass parameters from a data source, such as a database or file. In Visual Studio MSTest, you need to write a wrapper parameter-less test configured to know about the data source. This wrapper test will be invoked iteratively; it reads values from the current row and passes those values to the parameterized test. The challenge is choosing meaningful input values for the tests. This is where the second key feature of Pex comes to the rescue.

Choosing Meaningful Input Values

Open the previously created unit test file and right-click on it. This time, click on the Run Pex Explorations menu item. At this point, Pex analyzes the code of the method under test and figures out which values it would make sense to test for. Pex uses an implementation of the dynamic symbolic execution technique that basically processes the code and calculates the results of the method execution. Every time a new possible result is found, Pex determines which combination of values leads to that result and adds the values to the list. In this way, it iteratively discovers all possible outcomes of a method and, at the same time, it also determines a way to produce them. The result of a Pex exploration is a list of auto-generated classic unit tests, which call the method with any meaningful combination of inputs previously determined. You can just copy these tests into a test project and integrate them in the build process with the guarantee of high coverage and, more importantly, strong insurance against corner cases. Generating parameterized unit tests is often a long and boring process; Pex automates that process and reduces it to just a click or two.
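To make the workflow concrete, here is a minimal sketch of what a Pex parameterized unit test might look like. The StringUtils class and every name in it are illustrative assumptions, not code from the article:

```csharp
// A minimal sketch of a Pex parameterized unit test (illustrative names).
using Microsoft.Pex.Framework;                      // [PexClass], [PexMethod], PexAssume
using Microsoft.VisualStudio.TestTools.UnitTesting; // MSTest, the default target

public static class StringUtils
{
    // Hypothetical code under test: it blindly indexes into s, so a Pex
    // exploration should discover the s == null and s == "" corner cases.
    public static string Capitalize(string s)
    {
        if (s.Length == 0) return s;
        return char.ToUpper(s[0]) + s.Substring(1);
    }
}

[TestClass]
[PexClass(typeof(StringUtils))]
public partial class StringUtilsTest
{
    // Pex explores this parameterized test and generates classic unit
    // tests that call it with concrete, branch-covering inputs.
    [PexMethod]
    public void Capitalize(string s)
    {
        PexAssume.IsNotNull(s); // rule out inputs we consider invalid

        string result = StringUtils.Capitalize(s);

        // Auto-generated tests only invoke the method; assertions like
        // these are the part you add by hand.
        Assert.AreEqual(s.Length, result.Length);
        if (s.Length > 0)
            Assert.AreEqual(char.ToUpper(s[0]), result[0]);
    }
}
```

Running Pex explorations on a test like this is what produces the list of concrete-input unit tests described above.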
Code Contracts and Pex

Fully integrated into .NET Framework 4.0, Code Contracts are Microsoft's implementation of the tried-and-true idea of software contracts: most notably, preconditions, postconditions, and invariants. In particular, Code Contracts are used to annotate methods with conditions that must be preliminarily verified for the code to run. As an example of Code Contracts and Pex integration, consider the following scenario. Suppose you have a method that does some work on a string parameter. Suppose also that your implementation blindly assumes the received string is a non-null string. You are therefore potentially exposed to a null reference exception. A quick exploration by Pex reveals the issue, in the form of a notification, which allows you to add a test. A warning about a possible defect is a great help, but Pex can do more. If Code Contracts are enabled for the project, then Pex will offer to add a proper precondition to your code so that you can catch any invalid data and take control of the situation. Pex runs an analysis of the code and builds a base of knowledge about it; Code Contracts provide additional information that contributes to an even more accurate analysis from Pex.

In summary, Pex is an innovative white-box testing tool that can be used both as an aid to generate nontrivial unit tests and as a peer reviewer to quickly look at your code and find holes and omissions in it. Pex is not dependent on Code Contracts, but it can use Code Contracts to enhance your tests if they are already employed in your code. One limitation I should mention is that it works only with managed code. Pex represented a deeper commitment to unit testing than Microsoft had previously exhibited. Since it was released, Microsoft has continued bringing out tools that facilitate unit testing. In Visual Studio 2012, for example, it shipped a framework called Microsoft Fakes. Fakes facilitates the construction of unit tests with stubs and mocks, plus a runtime redirection of code in what the company calls "shims." These technologies will be discussed in upcoming articles. Meanwhile, if you need help generating humdrum unit tests that exercise code thoroughly, download Pex and have it do most of the heavy lifting. [For a hands-on look at using Pex, see "Working with Microsoft PEX Framework" --Ed.]

Dino Esposito is a frequent writer on Microsoft developer technologies.
September 23 marked the first day of autumn north of the Equator and the first day of spring to the south. Day and night were approximately of equal length worldwide at that time, and will be again next March 20. The animation of Earth’s seasons to the right takes us, one day at a time, from September 19, 2010 to September 19, 2011 with images from Europe’s Meteosat-9 satellite. Africa comprises most of the right part of the image, with Europe dimly visible in the upper right. Each image was taken at 6 a.m. GMT, meaning it was near sunrise at those times over western Europe and about midnight in the eastern United States. On March 20 and September 23, the terminator, or sunrise in this instance, is a straight north-south line with the sun shining directly above the equator. On December 21, the sun resides directly over the Tropic of Capricorn when viewed from the ground, and sunlight spreads more widely over the Southern Hemisphere. On June 21, the sun sits above the Tropic of Cancer, spreading more sunlight in the north. Of course, it is not the sun that is moving north or south through the seasons, but a change in the orientation and angles between the Earth and the sun. The axis of the Earth is tilted 23.5 degrees from the perpendicular to the plane of its orbit around the sun. The axis is tilted away from the Sun at the December solstice and toward the Sun at the June solstice, spreading more light on one hemisphere and less on the other. At the equinoxes, the tilt is at a right angle to the sun and the light is spread evenly. (A rough formula for this seasonal march of the sun is sketched below.) Full story and images: NASA
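The sketch referred to above (my addition, a standard approximation rather than anything from the NASA article): the sun's declination δ, the latitude at which the noon sun is directly overhead, can be approximated by

```latex
% N = day of year (N = 80 near the March equinox)
\delta \approx 23.44^{\circ} \times \sin\!\left(\frac{360^{\circ}}{365}\,(N - 80)\right)
```

This gives δ ≈ 0° at the equinoxes, about +23.44° (Tropic of Cancer) near June 21, and about −23.44° (Tropic of Capricorn) near December 21.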
Superlattice to Nanoelectronics provides a historical overview of the early work performed by Tsu and Esaki, to orient those who want to enter into this nanoscience. It describes the fundamental concepts and goes on to answer many questions about today's 'Nanoelectronics'. It covers the applications and types of devices which have been produced, many of which are still in use today. This historical perspective is important as a guide to what and how technology and new fundamental ideas are introduced and developed. The author communicates a basic understanding of the physics involved from first principles, whilst adding new depth, using simple mathematics and explanation of the background essentials. Topics covered include:
* Introductory materials
* Superlattice, Bloch oscillations and transport
* Tunneling in QWs to QDs
* Optical properties: optical transitions, size dependent dielectric constant, capacitance and doping
* Quantum devices: new approaches without doping and heterojunctions - quantum confinement via geometry and multipole electrodes; issues of robustness, redundancy and I/O
Researchers, course students and research establishments should read this book, written by the leading expert in nanoelectronics and superlattices.
* The Author is one of the founders of the field of superlattices
* The FIRST historical overview of the field
* Provides a basic understanding of the physics involved from first principles, whilst adding new depth, using simple mathematics and explanation of the background essentials
We do not deliver the extra material sometimes included in printed books (CDs or DVDs).
Helping Streams Help Themselves, Naturally The sights and sounds of a stream running through your backyard or your favorite neighborhood park can be a soothing antidote to the busy pace of modern life. But when U.S. Environmental Protection Agency scientists Paul Mayer and Elise Striz look at a creek bed, they notice something that nature hadn’t intended. They observe the negative effects of water running off the paved surfaces of the urban landscape. “Water runoff is changing the flow of many streams,” says Paul Mayer, Ecologist, U.S. EPA Groundwater and Ecosystem Restoration Division, or GWERD. “Rapid runoff of rain from pavement causes stream banks to erode and forces streams to move laterally in ways that are impacting people’s homes, exposing water and sewer lines, and negatively impacting both ground and surface water quality.” The shifting streams are a problem for municipalities like the Baltimore County Department of Environmental Protection and Resource Management, which is working collaboratively with Mayer and his colleagues at GWERD. Not only are the streams encroaching on private property and jeopardizing the urban infrastructure, the water runoff functions as a delivery system for excess nitrogen into the stream bed. A little bit of nitrogen is necessary for the growth of all living things, but too much nitrogen can be bad for humans and the environment. Too much nitrogen in drinking water can negatively affect human health and too much nitrogen in streams can impact the ecological health of nearby watersheds and estuaries. “The nitrogen is coming from many sources,” says Elise Striz, Hydrologist, also at GWERD. She says those sources include “excess fertilizer runoff, animal wastes, sewer lines, and even the byproduct of fossil fuel combustion from your automobile’s exhaust.” The solution to excess nitrogen may well lie in the same method used to protect the streams from erosion – stream restoration. “It is an engineered approach to land management that redirects the flow of the stream,” says Mayer. “By reconstructing the natural twists, turns and bumps of the stream, and re-establishing plant communities along the stream banks, stream restoration may not only address the land management problems but also improve water quality at the same time.” Mayer and Striz have been testing this theory in a real-world experiment in an urban stream named Minebank Run in Towson, Maryland, outside of Baltimore. Minebank Run had become degraded in recent years, suffering from erosion, sediment buildup, and the loss of plants and trees along the stream banks. The belief is that by restoring the stream, scientists will be able to simultaneously recreate the conditions necessary for natural nitrogen removal from the stream. The result would be a cost-effective, sustainable method for keeping streams vibrant, which in turn aids the health of their plant, animal and human neighbors as well as downstream waterways, such as the Chesapeake Bay. “Naturally existing bacteria actually transform the nitrogen and can reduce the nitrogen level in the water in a dramatic way,” says Mayer. “So, the goal is to create a healthy stream environment within which these natural bacteria can thrive.” In order to test the effectiveness of this natural nitrogen removal, Mayer has been comparing nitrogen levels in Minebank Run from before and after its restoration. The early results have been impressive. Mayer says the natural features of the stream bed have been greatly improved since restoration. 
There is far less erosion of the stream, more plant life growing on the stream banks, and, as anticipated, excess nitrogen is naturally being removed from the stream and the ground water beneath it. In short: by restoring the stream bed to a more natural state, the stream itself can provide cleaner water for the plants, animals, and humans that rely on it. More study is certainly needed. But you can be sure that Mayer, Striz and their colleagues are doing just that, keeping their eyes and ears open.
<urn:uuid:216863e1-5d0e-48c7-b592-9334ce3ad29e>
3.734375
818
Knowledge Article
Science & Tech.
38.667686
2,216
So far we've looked at how weather is observed near the ground, but the atmosphere is like a layer cake. We must examine all the layers before we can determine a complete picture. The lowest layer is important because it's where we live, but what happens at ground level is really a result of the integrated behavior at all the different levels. So before we can put together a good forecast, we must figure out what is going on above the ground. In the early days of upper air observations, kites were sent upward with instruments attached. In one of the earliest attempts to record high-level readings, eighteenth-century physician John Jeffries went up in a balloon and took weather instruments along with him. On November 30, 1784, he made the balloon voyage, which lasted an hour and 21 minutes. He took numerous readings of the pressure and temperature. Thomas Jefferson wrote about the meteorological utility of balloons in April 1784, when he said that balloons would be useful in "throwing new lights on the thermometer, barometer, hygrometer, rain, snow, hail, wind, and other phenomena of which the atmosphere is the theatre." Balloons are still used daily, but today they are self-contained, carrying remote-sensing instruments. The simplest balloon is called a pilot balloon and is filled with gas. After being released, it's tracked with a telescope-like device called a theodolite. At equal intervals, such as once a minute, the balloon's position is noted in terms of its vertical and horizontal angles. These can be put into a formula to determine wind speed and direction (a worked example is sketched below). John Jeffries is considered to be America's first weather-person. In his honor, his birthday, February 5, is called Weather-Person's Day. He kept a weather diary during the colonial period. As a physician, he served the British during the Revolutionary War. Other balloons carry a special instrument package called a radiosonde, which measures the pressure, temperature, and humidity at different heights. The balloon is tracked, often with radar, and the wind can be determined, just as it is with a pilot balloon. At the same time, the data is transmitted back to the tracking station at given intervals. For example, every few millibars of ascent, the switch goes on, and data is sent. The balloon's position is known, and its pressure given. The strength of the returning signal is proportional to the temperature and humidity. A radiosonde is a balloon-borne instrument that measures and transmits meteorological data of temperature, pressure, and humidity. A theodolite is an instrument used to track a radiosonde. Even during the era of space-age technology, these balloon observations remain the mainstay of upper-air weather observations. They are taken twice each day, at 12-hour intervals. The stations across land are spaced from 200 to 500 miles apart. Although there are more than 1,000 radiosonde launch sites globally, a dense collection of upper-air observations is not routinely available. Most of the sites are in populated areas. The balloons provide data through the troposphere, up to about 19 miles, where they normally pop. The instrument package falls to the ground and is used again if it's returned to the National Weather Service. The package contains a message asking that it be returned if found. Above 19 miles, radar and rockets are used to determine weather conditions. The rocket drops an instrument package, and it's tracked by radar.
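Returning to the pilot-balloon readings above: the reduction from angles to wind is simple trigonometry once a rise rate is assumed. The C++ sketch below is purely illustrative; the constant ascent rate, the one-minute interval, and the sample angles are all invented values, and operational reductions use calibrated ascent-rate tables or a second theodolite.

// Estimate wind from two single-theodolite readings of a pilot balloon.
// Assumes the balloon rises at a known, constant rate (an idealization).
#include <cmath>
#include <cstdio>

const double kPi         = 3.14159265358979323846;
const double kAscentRate = 3.0;   // assumed rise rate, metres per second
const double kInterval   = 60.0;  // seconds between readings

struct Position { double x, y; }; // metres east (x) and north (y) of the station

// Convert one (azimuth, elevation) reading into a horizontal position,
// using the assumed ascent rate to estimate the balloon's height.
Position locate(double azimuth_deg, double elevation_deg, double t_seconds) {
    double height = kAscentRate * t_seconds;
    double range  = height / std::tan(elevation_deg * kPi / 180.0);
    double az     = azimuth_deg * kPi / 180.0;
    return { range * std::sin(az), range * std::cos(az) };
}

int main() {
    Position p1 = locate(40.0, 30.0, 60.0);   // reading after one minute
    Position p2 = locate(45.0, 25.0, 120.0);  // reading after two minutes
    double dx = p2.x - p1.x, dy = p2.y - p1.y;
    double speed = std::sqrt(dx * dx + dy * dy) / kInterval;  // metres per second
    // Meteorological convention: report the direction the wind blows from.
    double from_deg = std::fmod(std::atan2(dx, dy) * 180.0 / kPi + 180.0, 360.0);
    std::printf("wind: %.1f m/s from %.0f degrees\n", speed, from_deg);
    return 0;
}

The wind vector is just the balloon's horizontal displacement between successive fixes divided by the time interval.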
Infrared sensors are also being used to examine the temperature as well as the motion of the atmosphere. These are called radiometers. They can detect sharp changes in temperature that also correspond to sharp changes in wind. Water vapor is also a good emitter of infrared radiation, and its variation can be measured with these radiometers. That variation can often be linked to turbulence. Such instruments are helpful in aviation, helping pilots determine when they are moving into rough air. Satellites are now being used to profile the various atmospheric variables all the way down to the lower troposphere. Microwave sounding units are used to measure the global temperature. Satellites have the advantage of monitoring more than 95 percent of the globe, and each satellite measures the temperature above most points every 12 hours. (In the next section, we'll take a look at some of the advances in remote sensing from radar and satellites.) Excerpted from The Complete Idiot's Guide to Weather © 2002 by Mel Goldstein, Ph.D. All rights reserved including the right of reproduction in whole or in part in any form. Used by arrangement with Alpha Books, a member of Penguin Group (USA) Inc.
<urn:uuid:b99b8bf6-d018-47ae-a780-84fbd8f3d8ca>
4.03125
951
Truncated
Science & Tech.
44.153036
2,217
The Basic Linear Algebra Subprograms (blas) define a set of fundamental operations on vectors and matrices which can be used to create optimized higher-level linear algebra functionality. The library provides a low-level layer which corresponds directly to the C-language blas standard, referred to here as "cblas", and a higher-level interface for operations on GSL vectors and matrices. Users who are interested in simple operations on GSL vector and matrix objects should use the high-level layer described in this chapter. The functions are declared in the file gsl_blas.h and should satisfy the needs of most users. Note that GSL matrices are implemented using dense storage, so the interface only includes the corresponding dense-storage blas functions. The full blas functionality for band-format and packed-format matrices is available through the low-level cblas interface. Similarly, GSL vectors are restricted to positive strides, whereas the low-level cblas interface supports negative strides as specified in the blas standard.[1] The interface for the gsl_cblas layer is specified in the file gsl_cblas.h. This interface corresponds to the blas Technical Forum's standard for the C interface to legacy blas implementations. Users who have access to other conforming cblas implementations can use these in place of the version provided by the library. Note that users who have only a Fortran blas library can use a cblas conformant wrapper to convert it into a cblas library. A reference cblas wrapper for legacy Fortran implementations exists as part of the cblas standard and can be obtained from Netlib. The complete set of cblas functions is listed in an appendix (see GSL CBLAS Library).
There are three levels of blas operations:
- Level 1: vector operations, e.g. y = alpha x + y
- Level 2: matrix-vector operations, e.g. y = alpha A x + beta y
- Level 3: matrix-matrix operations, e.g. C = alpha A B + C
Each routine has a name which specifies the operation, the type of matrices involved and their precisions. Some of the most common operations and their names are given below:
- DOT: scalar product, x^T y
- AXPY: vector sum, alpha x + y
- MV: matrix-vector product, A x
- SV: matrix-vector solve, inv(A) x
- MM: matrix-matrix product, A B
- SM: matrix-matrix solve, inv(A) B
The types of matrices are:
- GE: general
- GB: general band
- SY: symmetric
- SB: symmetric band
- SP: symmetric packed
- HE: hermitian
- HB: hermitian band
- HP: hermitian packed
- TR: triangular
- TB: triangular band
- TP: triangular packed
Each operation is defined for four precisions:
- S: single real
- D: double real
- C: single complex
- Z: double complex
Thus, for example, the name sgemm stands for "single-precision general matrix-matrix multiply" and zgemm stands for "double-precision complex matrix-matrix multiply". Note that the vector and matrix arguments to BLAS functions must not be aliased, as the results are undefined when the underlying arrays overlap (see Aliasing of arrays).
[1] In the low-level cblas interface, a negative stride accesses the vector elements in reverse order, i.e. the i-th element is given by (N-i)*|incx| for incx < 0.
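As a concrete starting point, the short program below uses the high-level layer to form the double-precision general matrix-matrix product C = A B with gsl_blas_dgemm. It is a minimal sketch in the spirit of the manual's examples; the matrix values are arbitrary sample data:

/* Compute C = A B using the high-level GSL BLAS interface. */
#include <stdio.h>
#include <gsl/gsl_blas.h>

int main(void) {
    double a[] = { 0.11, 0.12, 0.13,
                   0.21, 0.22, 0.23 };   /* 2x3 matrix A */
    double b[] = { 1011, 1012,
                   1021, 1022,
                   1031, 1032 };         /* 3x2 matrix B */
    double c[] = { 0.00, 0.00,
                   0.00, 0.00 };         /* 2x2 result C */

    gsl_matrix_view A = gsl_matrix_view_array(a, 2, 3);
    gsl_matrix_view B = gsl_matrix_view_array(b, 3, 2);
    gsl_matrix_view C = gsl_matrix_view_array(c, 2, 2);

    /* C = 1.0 * A * B + 0.0 * C, i.e. a dgemm call with no transposes */
    gsl_blas_dgemm(CblasNoTrans, CblasNoTrans,
                   1.0, &A.matrix, &B.matrix, 0.0, &C.matrix);

    printf("[ %g, %g\n  %g, %g ]\n", c[0], c[1], c[2], c[3]);
    return 0;
}

Link against the library and its bundled cblas, e.g. cc example.c -lgsl -lgslcblas -lm (the exact flags depend on the installation).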
<urn:uuid:e363db3a-776d-4a6f-9d52-27799dc33e1f>
2.84375
583
Documentation
Software Dev.
37.18211
2,218
R is a language and environment for statistical computing and graphics. It is a GNU project which is similar to the S language and environment which was developed at Bell Laboratories (formerly AT&T, now Lucent Technologies) by John Chambers and colleagues. R can be considered as a different implementation of S. There are some important differences, but much code written for S runs unaltered under R. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, ...) and graphical techniques, and is highly extensible. The S language is often the vehicle of choice for research in statistical methodology, and R provides an Open Source route to participation in that activity. One of R's strengths is the ease with which well-designed publication-quality plots can be produced, including mathematical symbols and formulae where needed. Great care has been taken over the defaults for the minor design choices in graphics, but the user retains full control. R is an integrated suite of software facilities for data manipulation, calculation and graphical display. It includes:
- an effective data handling and storage facility,
- a suite of operators for calculations on arrays, in particular matrices,
- a large, coherent, integrated collection of intermediate tools for data analysis,
- graphical facilities for data analysis and display, either on-screen or in hardcopy, and
- a well-developed, simple and effective programming language which includes conditionals, loops, user-defined recursive functions, and input and output facilities.
The term "environment" is intended to characterize it as a fully planned and coherent system, rather than an incremental accretion of very specific and inflexible tools, as is frequently the case with other data analysis software. R, like S, is designed around a true computer language, and it allows users to add additional functionality by defining new functions. Much of the system is itself written in the R dialect of S, which makes it easy for users to follow the algorithmic choices made. For computationally-intensive tasks, C, C++ and Fortran code can be linked and called at run time. Advanced users can write C code to manipulate R objects directly. Many users think of R as a statistics system. We prefer to think of it as an environment within which statistical techniques are implemented. R can be extended (easily) via packages. There are about eight packages supplied with the R distribution and many more are available through the CRAN family of Internet sites covering a very wide range of modern statistics. R has its own LaTeX-like documentation format, which is used to supply comprehensive documentation, both on-line in a number of formats and in hardcopy. R can be found on http://cran.r-project.org, the master site of the Comprehensive R Archive Network, or at one of its mirrors. It is available as source code and as binary distributions for a number of GNU/Linux and Unix platforms, and for versions of Microsoft Windows that support long file names. R comes with an FAQ list, and an introduction to the language and how to use R for doing statistical analysis and graphics in PDF and other formats. Further manuals and documentation (including in French and Spanish) are discussed in the FAQ and in the documentation section on CRAN.
<urn:uuid:3184b625-02c2-48dd-9765-6b76ec53fbb7>
3.5
711
Knowledge Article
Software Dev.
42.657449
2,219
CNIDARIA : ACTINIARIA : Edwardsiidae (SEA ANEMONES AND HYDROIDS)
Description: A tiny worm-like anemone up to 40mm long and 2mm diameter, usually about 15mm x 1mm. Up to 16 very long (relative to disc size) tentacles. Transparent, more or less patterned with white as in the illustration.
Habitat: Similar to that of Edwardsia ivelli; Nematostella vectensis may be very abundant if present.
Distribution: Known from several southern counties: Dorset, Hampshire, Sussex, Suffolk and Norfolk, although it has become extinct in several localities as a result of loss of habitat and pollution. Also known from North America (both coasts) and from Canada.
Similar Species: Compare with Edwardsia ivelli.
Key Identification Features: Distribution Map from BioMar data for Ireland - Google Earth map: download this placemark. Distribution Map from NBN: JNCC MNCR Seasearch data - Grid map : Interactive map : National Biodiversity Network mapping facility, data for UK.
Picton, B.E. & Morrow, C.C., 2010. [In] Encyclopedia of Marine Life of Britain and Ireland.
Copyright © National Museums of Northern Ireland, 2002-2012
<urn:uuid:0c50637f-062d-4fc3-8411-3b533f31eaba>
2.90625
287
Knowledge Article
Science & Tech.
35.394334
2,220
There are currently over 700 known exoplanets; however, the Kepler spacecraft has reportedly found another 1,000, although many have not yet been confirmed by ground-based observations. Jupiter-sized exoplanets are reasonably easy to detect, but Earth-sized ones are extremely difficult, though Kepler may now have found some. In collaboration with the University of Portsmouth we are helping final-year students find stars that have exoplanets and measure the change in their magnitudes as the stars are transited by one of their planets. Generally the dip in magnitude is a very small percentage of the star's brightness. Although Kepler, in space, can measure dips of just fractions of a percent, that is almost impossible from Earth due to our varying weather conditions and atmospheric effects. After recording many transits it is possible to determine the planet's orbital period and indeed to identify whether that period is decreasing, as it appears to be in several cases where exoplanets orbit very close to their parent star. The students gain experience of using the telescopes and CCD cameras and teasing out the results obtained over several weeks of observing.
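The detection gap between Jupiter-sized and Earth-sized planets follows directly from geometry: for a central transit, the fractional dip in brightness is roughly the square of the planet-to-star radius ratio, ignoring limb darkening. A minimal sketch, assuming for illustration a Sun-like host star and standard round values for the radii:

// Rough transit depths: depth ~ (R_planet / R_star)^2 for a central transit.
#include <cstdio>

int main() {
    const double r_sun     = 6.96e8; // metres
    const double r_jupiter = 7.15e7; // metres
    const double r_earth   = 6.37e6; // metres

    double depth_jupiter = (r_jupiter / r_sun) * (r_jupiter / r_sun);
    double depth_earth   = (r_earth / r_sun) * (r_earth / r_sun);

    std::printf("Jupiter-sized planet: %.2f%% dip\n", 100.0 * depth_jupiter); // about 1%
    std::printf("Earth-sized planet:   %.4f%% dip\n", 100.0 * depth_earth);   // about 0.008%
    return 0;
}

A dip of about one percent is within reach of careful ground-based CCD photometry; a dip below one hundredth of a percent is why Earth-sized detections need a space telescope such as Kepler.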
<urn:uuid:82ec5866-658a-4d83-abb4-97d2d5aa9ab7>
3.65625
225
Knowledge Article
Science & Tech.
23.458878
2,221
Scientists are claiming they have discovered a new species of monkey living in the remote forests of the Democratic Republic of Congo -- an animal well-known to local hunters but, until now, unknown to the outside world. In a paper published Wednesday in the open-access journal Plos One, the scientists describe the new species, which they call Cercopithecus lomamiensis, known locally as the Lesula, whose home is deep in central DR Congo's Lomami forest basin. The scientists say it is only the second discovery of a monkey species in 28 years. In an age where so much of the earth's surface has been photographed, digitized, and placed on a searchable map on the web, discoveries like this one by a group of American scientists seem a throwback to another time. "We never expected to find a new species there," says John Hart, the lead scientist of the project, "but the Lomami basin is a very large block that has had very little exploration by biologists." Hart says that the rigorous scientific process to determine the new species started with a piece of luck, strong field teams, and an unlikely field sighting in a small forest town. "Our Congolese field teams were on a routine stop in Opala. It is the closest settlement of any kind to the area of forest we were working in," says Hart. The team came across a strange-looking monkey tethered to a post. It was the pet of Georgette, the daughter of the local school director. She adopted the young monkey when its mother was killed by a hunter in the forest. Her father said it was a Lesula, well-known to hunters in that part of the forest. The field team took pictures and showed them to Hart. "Right away I saw that this was something different. It looked a bit like a monkey from much further east, but the coloring was so different and the range was so different," said Hart. The monkey to the east is the semi-terrestrial owl-faced monkey. Based on the photos, Hart believed that their shape and size could be similar, but their morphology or outward appearance was very distinct.
<urn:uuid:2001c486-8925-4dea-bc1f-3c965615dc2e>
3.578125
445
News Article
Science & Tech.
52.605775
2,222
Were that process to continue or accelerate, many scientists say, the anticipated rise in sea levels over the next few decades may have to be revised upwards. The National Oceanic and Atmospheric Administration, in its annual Arctic Report Card, published this week, said dramatic melting of the Greenland ice sheet had occurred in July, "covering about 97 percent of the ice sheet on a single day." Martin Jeffries, co-author of the report, said on the NOAA website: "As the sea ice and snow cover retreat, we're losing bright, highly reflective surfaces, and increasing the area of darker surfaces -- both land and ocean -- exposed to sunlight. This increases the capacity to store heat within the Arctic system, which enables more melting -- a self-reinforcing cycle." All the evidence says that what in effect is the world's source of air conditioning is getting weaker, with consequences that will be felt far below the 48th parallel.
<urn:uuid:e472be56-7a04-444c-b495-38313d406976>
3.109375
191
News Article
Science & Tech.
31.997769
2,223
. "8 Resistance, Resilience, and Redundancy in Microbial Communities--STEVEN D. ALLISON and JENNIFER B. H. MARTINY." In the Light of Evolution, Volume II: Biodiversity and Extinction. Washington, DC: The National Academies Press, 2008. The following HTML text is provided to enhance online readability. Many aspects of typography translate only awkwardly to HTML. Please use the page image as the authoritative form to ensure accuracy. In the Light of Evolution: Volume II—Biodiversity and Extinction These studies did not suggest that broad taxonomic groups are more or less sensitive to disturbances than narrow taxonomic groups. This pattern suggests that taxonomic breadth is not related to whether a compositional shift was detected. Perhaps more surprisingly, there are no patterns suggesting that methodology influences whether a compositional change was detected. In addition, we were not able to discern whether particular taxonomic or functional groups are more or less sensitive to particular disturbance types. Overall, the low number of studies observing a resistant microbial composition hinders our ability to recognize any patterns among these studies. However, we can conclude that microbial composition is generally sensitive to disturbance. RESILIENCE OF MICROBIAL COMPOSITION Even if microbial composition is sensitive to a disturbance, the community might still be resilient and quickly return to its predisturbance composition. A number of features of microorganisms, and in particular Bacteria and Archaea, suggest that resilience could be common. First, many microorganisms have fast growth rates; thus, if their abundance is suppressed by a disturbance, they have the potential to recover quickly. Second, many microbes have a high degree of physiological flexibility. This is famously the case for the purple nonsulfur bacteria, which can be phototrophs under anoxic conditions and heterotrophs under aerobic conditions. Thus, even if the relative abundance of some taxa decreased initially, these taxa might physiologically acclimate to the new abiotic conditions over time and return to their original abundance. Finally, if physiological adaptation is not possible, then the rapid evolution (through mutations or horizontal gene exchange) could allow microbial taxa to adapt to new environmental conditions and recover from disturbance. All of these arguments assume that abundance is reduced by a disturbance, but some microbial taxa may benefit from the new conditions and increase in abundance. Thus, in order for some taxa to recover in abundance, those that responded positively to the disturbance would also need to decrease in abundance to return the community to its original composition. Few studies explicitly focus on the time course of microbial composition after a disturbance; instead, most focus solely on the sensitivity of composition. Consequently, we recorded the length of time between the application of the disturbance and when microbial composition was assessed for the studies in our sample. If composition is highly resilient, then one should be less likely to detect a compositional change as time from disturbance increases. We compared the time from initial disturbance for those studies that found composition to be sensitive versus resistant. Generally, the tim-
<urn:uuid:cc5f5508-9667-47eb-bc35-87162c8790e0>
3.09375
631
Academic Writing
Science & Tech.
11.852347
2,224
The Global Climate at a Glance (GCAG) web application can be used to retrieve monthly and annual global temperature anomaly maps that date back to 1880. Users can also create timeseries for locations around the globe by selecting a point on the map. The interactive interface allows users to adjust the vertical and horizontal axes of the timeseries plots to view a selected range of months or years of data or to view the entire period of record. The maps are created using land surface temperature anomalies from the Global Historical Climatology Network (GHCN) data and sea surface temperature anomalies from the International Comprehensive Ocean-Atmosphere Data Set (ICOADS). These two datasets are blended into a single product to produce the combined global land and ocean temperature anomalies. The temperature anomalies are calculated with respect to the 1971-2000 base period and are averaged over 5 degree by 5 degree grid boxes. These gridded temperature anomalies are mapped by GCAG. Additional information on the dataset can be found here: Global Temperature Anomalies FAQ Page. The data files used by the GCAG web application can be found here. *Adobe Flash Player is needed to launch GCAG.
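For readers unfamiliar with the term, an anomaly is simply the departure of an observation from the mean over the base period for the same grid box and calendar month. The sketch below is purely illustrative, not GCAG's actual processing code; the function name and sample values are invented:

// Anomaly for one grid box: observed value minus the 1971-2000 base-period mean.
#include <cstdio>
#include <numeric>
#include <vector>

// Mean of the base-period values (e.g. all Januaries, 1971-2000).
double base_mean(const std::vector<double>& base_values) {
    return std::accumulate(base_values.begin(), base_values.end(), 0.0)
           / base_values.size();
}

int main() {
    // Hypothetical January means for one 5x5-degree grid box (30 values in practice).
    std::vector<double> januaries_1971_2000 = { 1.2, 0.8, 1.5 };
    double observed_january = 2.4; // a later January's mean temperature, deg C

    double anomaly = observed_january - base_mean(januaries_1971_2000);
    std::printf("January anomaly: %+.2f deg C\n", anomaly);
    return 0;
}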
<urn:uuid:8a4ea970-b54c-4f1a-86d1-7849cda21dd1>
3.125
243
Knowledge Article
Science & Tech.
24.251802
2,225
Update: On 16 October, NASA released an image of a faint ejecta plume observed by the LCROSS shepherding spacecraft. See Elusive lunar plume caught on camera after all
In the final minutes of its plunge toward the moon, NASA's LCROSS spacecraft spotted the brief infrared flash of a rocket booster hitting the lunar surface just ahead of it – and it even saw heat from the crater formed by the impact. But scientists remain puzzled about why the event did not seem to generate a visible plume of debris as expected. As hundreds of telescopes and observers watched, the highly publicised NASA mission to search for water on the moon reached its grand finale at 0431 PDT (1131 GMT) with a pair of high-speed crashes into a lunar crater named Cabeus. During the crucial moments at NASA's Ames Research Center in Moffett Field, California, scientists and engineers with LCROSS (Lunar Crater Observation and Sensing Satellite) peered in silent concentration as successive images of the crater grew larger on their screens. Nearby, some 500 bleary-eyed visitors who had gathered overnight outside mission control were watching the same pictures on a giant outdoor screen. Yet, immediately after the scheduled impact time, there was no obvious sign of the spectacular explosion that many were expecting. "Impacting into the moon is an unpredictable business at best," Anthony Colaprete, principal investigator for LCROSS, said in a post-impact briefing. Colaprete did not offer definitive word as to why the visual camera apparently did not detect the event but added there were interesting changes in spectroscopic data taken by the spacecraft that might have been produced by a debris cloud. "I'm not convinced that the ejecta is not in the data yet," he said. A worst-case scenario would have occurred if the rocket hit bedrock rather than loose, gravelly soil. In that case, the debris plume might not have reached the minimum 1.5-kilometre altitude needed to catch the sunlight and be seen by LCROSS. Because of the angle of the crater, the plume would have needed to rise to 2.5 to 3 km in order to be seen by telescopes on Earth. A 10-km-high plume was expected. The impact was monitored by the Hubble Space Telescope, which has not yet delivered its data. Several major observatories were also watching for signs of impact, including the Keck and Canada-France-Hawaii telescopes on Mauna Kea, neither of which saw a plume. One positive report came from Kitt Peak Observatory in Arizona, where a flash of visible light revealing the presence of sodium was recorded during the impact. "I think we're all a little bit disappointed that we didn't see anything," David Morrison, director of NASA's Lunar Science Institute, told New Scientist.
Neither here nor there
Regardless of its ultimate scientific return, today's outcome will likely go down as one of the more bemusing episodes in NASA's long history of lunar missions. While the spacecraft appeared to be working as expected and in contact with mission controllers, it clearly did not deliver the views that scientists and spectators were hoping for. Unlike a catastrophic failure, such as Mars Polar Lander in 1999, or a euphoric success, such as the spectacular 2005 collision of the Deep Impact mission with Comet Tempel 1, the non-detection seemed to leave officials unsure of how to react. The big question that planetary scientists hope will be answered is: are there significant quantities of water ice on the moon?
Last month, water was discovered in the lunar soil, but the amounts detected were relatively small. A long-standing mystery is whether dark craters such as Cabeus could act as cold traps, capturing water molecules that are liberated when comets strike the moon. Data from the Lunar Prospector mission, which flew in the late 1990s, indicate high concentrations of hydrogen in Cabeus. The hydrogen could belong to water ice mixed in with the rock and soil in the crater's depths. LCROSS was designed to look for the signature of water and other molecules as it flew into the debris plume of the rocket impact. It should also have executed a sideways turn one minute prior to its own impact to see the molecular constituents of the impact backlit by the sun. Without a plume to study, scientists will have less of a handle on the question, but Colaprete says the spectroscopic data may be enough to spy the constituents of water. "It will probably take two weeks to get a yes or no answer on water," said Michael Bicay, director of science at Ames.
Have your say
Fri Oct 09 13:48:40 BST 2009 by Dave
Commentary I heard indicated that it would take several days to calibrate the data returned. So we may have something, we just do not know yet.
Sat Oct 10 09:22:31 BST 2009 by Think Again
Would this be like the invisible moon landings? Anyway, maybe they just hit the side of the crater, and so the impact chucked the debris downwards, not upward where the sensors on the orbiter could pick it up. Wish they had had the brains to stick a flashlight and camera on the impactor so we could see how it impacted. Anyway, now that the moon does not show signs of huge amounts of water, we can concentrate our efforts on landing on asteroids which is really important too.
Sat Oct 10 17:13:19 BST 2009 by Agent420
They did have a flashlight on it but did not use the bunny's batteries and they went dead on the way there.
Mon Oct 12 07:27:14 BST 2009 by Dave
And for the really great challenge, try Ceres and the water located there. Gravitationally, the delta-v is a big factor. But it has the potential to be the gas station for the Solar System. Robert Farquhat reminds us that gravity wells are a cul-de-sac to be avoided: (long URL - click here)
Tue Oct 13 06:53:47 BST 2009 by Slobodan
That is the main reason why we need human missions in space, despite many opinions that robots are more suitable and cheaper. Robotic missions simply are not smart enough yet to handle various unknown circumstances and environments. An astronaut landing on the Moon's south pole with proper equipment would offer an answer about water presence there within the minutes or hours, without any doubt. Robots, if faced with something unpredicted, or unknown simply fail the mission goals, leaving a bitter taste in the mouths of mission control and scientists waiting for the results. Never send a robot to do the man's job!
Tue Oct 13 07:19:46 BST 2009 by danielle grandhomme
I suppose there is no oxygen on the moon...am I wrong? Well as I know when there is no oxygen, there is no ignition...
so, how did the module landed on moon in 1969 re-start from there to earth, as the gas could not be lighted? Why did we need, now, to crash a module to get few datas of particles of SOIL as there have been several moon missions Substantially...have we really been on the moon? I guess not. Everest has been thousands times climbed by anyone, and thus littered with plastic bags... why not the moon? There is a great wave of reactions to this crash, because now we are able to really watch what it is going on, live, and with no possible fake, for anyone can make false images and videos, and thus anyone can recognize fake! NASA was under exam. And showed its weird face to all the world, now aware. There is deception, as I see, and anger or resignation I have no resignation.
Tue Oct 13 10:03:43 BST 2009 by Toby
Yay! Another conspiracy nutjob. You're absolutely right, but we can go further, let's say man has never been to France either. I know there's evidence, but let's just ignore it. I also don't believe in the existence of chickens. And it's time we exposed the lie started by the CIA that humans need oxygen. Seriously danielle, if you have such a distrust of science, why spend your time visiting a science website?
Am I Wrong, Or. . .
Fri Oct 09 13:54:16 BST 2009 by Jamie Jones
Would a very small 'plume' indicate the soil is being held together by something? I mean... unsettle dry dust, and you get a lot of that dust rising. Unsettle moist dust and you don't get a lot of anything.
Am I Wrong, Or. . .
Fri Oct 09 15:23:16 BST 2009 by Will
Mmmm, you're wrong. This would be some kind of lunar "mud". To be moist, it would have to be bound with liquid water. The temperature in these craters is as near to absolute zero as makes no difference for this purpose. The other stories referring to water in the soil don't literally mean liquid water. They're referring to individual hydroxyl or water molecules.
Am I Wrong, Or. . .
Fri Oct 09 18:11:38 BST 2009 by nicholasjh
True, but what effect would hard ice have? Maybe the concentrations are higher then they even thought.
Am I Wrong, Or. . .
Fri Oct 09 21:16:55 BST 2009 by Chris W
I did come accross something a while back where they were finding lunar soil to be stickier than expected and some have postulated that it has a static charge. I wonder if this could have anything to do with it.
Fri Oct 09 14:02:58 BST 2009 by Stricka012
Maybe the craters are deeper than they thought, or the material found there is denser than they were expecting.
Fri Oct 09 14:38:00 BST 2009 by Jamie Jones
Maybe it's stuck at the bottom of a well.
Fri Oct 09 17:41:25 BST 2009 by noses
You just nade my day. Literally! Its so bad that it's actually insanely good
Fri Oct 09 22:23:00 BST 2009 by jo
if there is no gravity on the moon, there would be no equal and opposite force to the rocket crashing into it. So there would be no upward explosion.
<urn:uuid:3d55acfc-9c00-4b24-af9a-7713e180542f>
3.546875
2,307
Comment Section
Science & Tech.
62.924365
2,226
Huntington is the science director for Pew's Arctic Program. For the oil and gas industry, the Arctic Ocean is the final frontier. Beneath the ocean floor lies an estimated 90 billion barrels of recoverable oil - about 13 per cent of the global total. As the sea ice retreats and traditional sources of hydrocarbons dwindle, the pressure to drill is becoming irresistible. It now seems inevitable that this harsh environment will be opened up to oil and gas production, which poses a big question: how much scientific research is "enough" to ensure safe drilling in the Arctic Ocean? It is true that hundreds of millions of dollars have been spent on marine science in US Arctic waters. But that doesn't mean the right questions have been asked, or that we have the results necessary to inform responsible management. Unfortunately it turns out that we simply don't know enough about Arctic Ocean ecosystems to ensure our actions won't inadvertently stress species to the point of affecting animal populations and the indigenous peoples who depend on them. ...Read the full piece on the New Scientist website.
<urn:uuid:8a5094ac-8385-4274-8624-963dc433cfa1>
3.53125
218
Truncated
Science & Tech.
50.771927
2,227
[Photo captions: Kelly at ORNL, with the tandem in the background; Kelly (with Geiger counter) being interviewed by an Albuquerque news agency during one of the open days at the Trinity Site; Kelly picking apples from Newton's Apple Tree in the courtyard of the Physics building in York, UK]
What if only a few people could understand you? What if your phrases were littered with words like subatomic, gravitational assist, or H-theorem? It would probably be difficult to get your point across, let alone order a cheeseburger. Nuclear physicist Kelly Chipps (AKA Nuclear Kelly) understands just how difficult it is for some people to understand physicists, and with her diverse background she is striving to make physics accessible to everyone. Being capable of communicating "is important in order to convey the information across the barrier," says Chipps. Chipps, whose father was a chemical engineer and mother was an elementary school teacher, is a talented writer and nuclear physicist breaking barriers and stereotypes of what it means to talk like a physicist. Chipps not only works for the Colorado School of Mines as a postdoctoral fellow but is also an accomplished musician, a published writer and a blogger. Her research is out-of-this-world. She searches for ways to measure the reactions that take place in stars. "When you can sufficiently know what is happening inside stars and supernova, you can figure out what's in us. We measure those reactions to be able to understand how much of any given thing is in the universe and that should tell us how much is in us," said Chipps. But, why is that important? "It is important because we are all stardust. It's true; everything on Earth is made from the elements that are generated in the scenarios in astrophysics," she said. She also recognizes that her research has more direct implications to nuclear energy and nuclear medicine. Chipps currently spends a lot of time at the Oak Ridge National Laboratory (ORNL) in Tennessee where she works in a facility that creates radioactive nuclei and sends them through the world's largest tandem accelerator, resembling a six-story-high Van de Graaff generator, the giant metal ball that uses static electricity to make people's hair stand on end in science museums and physics classrooms. In her research, the beams of radioactive nuclei are sent through the tandem accelerator and "then we can smash them into targets and measure what comes out the other side. This helps us understand what makes up these particles and how they interact," said Chipps. Although the tandem accelerator is inside a gigantic chamber of inert gas, it will sometimes spark, sounding like intense thunder, which takes many of the scientists by surprise and inevitably causes people to shudder. Chipps wasn't always smashing nuclei and playing with large-scale, mad-scientist-like equipment. She describes herself as a normal child, playing on the swing set, digging in the dirt, catching frogs and eating "bad" food like pizza rolls, but Chipps also really enjoyed school and learning about new things. When she was in high school she wanted to be a zoologist. She said, "I wanted to go out into the middle of the woods and study wolves." She was so intent on taking this path that she rearranged her schedule so that she could take biology her senior year and be better prepared for her upcoming college classes. So, her junior year she took physics. Her physics teacher, Stefan Kern, was from Germany and she remembers him as being "so interested in what he was doing.
You could tell he loved physics and I wanted to know why that was." She decided to drop biology from her senior schedule and instead take the advanced physics class taught by Mr. Kern. Through her studies she discovered why she liked physics. She said, "Here's something on a fundamental level that explains why things are," a concept she continues to explore in her research. In high school Chipps was also a member of the band, but she hasn't outgrown her affection for music. "I was in band all my life and I've played with several orchestras," she said. She would love to continue to play in orchestras, but mostly she plays for personal enjoyment, since moving around from place to place as a postdoc makes establishing herself in a local orchestra difficult. Chipps also recognizes a connection between music and physics. "At a fundamental level they stem from the same thing in that they are both very mathematical," she said. However, the mathematical connection does not hinder her ability to be creative. "As much as I enjoy science and figuring out the fundamental technical questions, at the same time I have to have a creative outlet," she said. Besides music, Chipps uses writing as a creative channel. "I'm a published author outside of science," she said. Chipps has written and published short stories and poetry as well as a nonfiction book exploring the clash between science and religion. Chipps also keeps a blog, Miss Atomic Bomb: The life and times of a female researcher in nuclear physics, where she explores what it means to be a physicist, how science is important in society, and continues to advocate for scientific funding and research and communication. Chipps is also a regular contributor on PhysicsCentral's Ask-A-Physicist.
[Photo caption: Kelly (center) with fiancé Steve (left) and friend Baharak (right) hiking in the Scottish Highlands]
Read some of Chipps' contributions to Ask-A-Physicist:
- If a helicopter hovers in a fixed position for 24 hours will the earth rotate around it?
- Are airport whole body x-ray scanners safe for frequent travelers?
<urn:uuid:114ef840-4c0b-4e63-8a31-13998e924249>
3.3125
1,222
About (Pers.)
Science & Tech.
48.276644
2,228
It might be a completely new species--a very tricky new species. Learning skills that will be invaluable in later foxhood In the future, the great Pixel Wolves Of The Sky will look down below on the mutated fish. The wolves will be hungry but also weirded out. Cliff the beagle can sniff out a dangerous bacterium just by smelling patients--no stool sample or long lab analysis necessary. Researchers discover an adorable (yet scary!) species of slow loris. Find out how these arachnids avoid getting trapped in their goo. Research on how the deadly fungus affects immune systems may help HIV research. Following the shooting of a tagged Yellowstone grey wolf just outside the park's borders in Wyoming--the eighth such wolf shot this season--the state of Montana has banned wolf hunting in areas adjacent to the park. The NYTimes quotes a Montana Fish, Wildlife, and Park commissioner who cites the "time and money and effort" that goes into the tagging and research of these wolves, as well as a Yellowstone biologist who still seems to be smarting from the loss, saying this is a "moderate" decision that addresses "some of the issues as far as the science." [NYTimes] Wyoming's anti-scientific laws have allowed the most famous wolf in Yellowstone to be shot. Shooting wolves isn't only senseless--it actively harms the environment. The benefits of living with an engineer Aerial surveillance, radio tagging and ranger patrols aim to fight poaching in Asia and Africa. The "spidernaut" Nefertiti has died. It was 10 months old. A "Johnson Jumper" spider, it was sent on board the International Space Station in July as part of an experiment; researchers watched to see if the spider would adapt its feeding behavior to weightlessness (it did). Nefertiti was returned to Earth after a 100-day stay, and the Smithsonian Institution's National Museum of Natural History then placed the spider in its insect zoo. The display opened to the public on November 29, but the spider died of natural causes yesterday morning. Rest in peace, spidernaut. [SPACE.com] The elephant, Duchess, goes under the knife, with doctors using custom tools for the rare surgery. By tracking the cows' diets, and thus their methane production, researchers can help slow global warming. Scientists in the UK injected dogs with cells grown from the lining of their noses, which continually regenerates.
<urn:uuid:c9089bd5-0d7f-494b-8f49-de75bd415051>
2.6875
506
Content Listing
Science & Tech.
51.831468
2,229
William F. Jasper
The New American
Dec 19, 2012
GIGO, for garbage-in, garbage-out, is a basic principle of computing and/or decision-making which holds that the validity or integrity of the input will determine the validity or integrity of the output, which is why first-year computer students are taught to check and recheck their input data and assumptions. It is not unreasonable, therefore, to expect the same of seasoned scientists with multiple letters after their names, utilizing some of the most sophisticated and expensive computers and operating out of prestigious universities and laboratories. Especially when taxpayers are underwriting their work and the studies produced by their computer models are the basis for far-reaching public policies that will dramatically impact those taxpayers, as well as all of society. However, when it comes to the theory of anthropogenic (human-caused) global warming, or AGW, the GIGO principle appears to be the norm. The so-called mainstream media (MSM) never seem to tire of headlining scary scenarios of climate catastrophe brought on by AGW, based on the latest projections generated by computer modeling of atmospheric temperatures, ocean temperatures, sea levels, glaciers, rainfall, extreme storms, etc. The same media organs, however, rarely report on the many scientific studies that regularly debunk the schlocky — and often outright fraudulent — computer models. The Hockey Schtick blogspot reported on December 10 that a new paper published in the Journal of Climate finds there has been "little to no improvement" in simulating clouds by state-of-the-art climate models. The authors note the "poor performance of current global climate models in simulating realistic [clouds]," and that the models show "quite large biases … as well as a remarkable degree of variation," with the differences between models remaining "large." This is no small matter, as leading climate scientists have for years been pointing out that failure to account for cloud mediation in the complex interplay of climatic factors is a major flaw in climate models. (See here and here.) As Dr. Roy Spencer points out in his new book, The Great Global Warming Blunder: How Mother Nature Fooled the World's Top Climate Scientists:
The most obvious way for warming to be caused naturally is for small, natural fluctuations in the circulation patterns of the atmosphere and ocean to result in a 1% or 2% decrease in global cloud cover. Clouds are the Earth's sunshade, and if cloud cover changes for any reason, you have global warming — or global cooling.
Dr. Spencer, a principal research scientist at the University of Alabama in Huntsville, is himself one of the world's top climate scientists. A former senior scientist for Climate Studies at NASA, he is co-developer of the original satellite method for precise monitoring of global temperatures from Earth-orbiting satellites. He has provided congressional testimony several times on the subject of global warming and authored the 2008 New York Times bestseller Climate Confusion.
Hockey Schtick points out that the latest Journal of Climate paper "is one of many that demonstrate current climate models do not even approach the level of accuracy [within one to two percent] or 'consensus' required to properly model global cloud cover, and therefore cannot be used as 'proof' of anthropogenic global warming, nor relied upon for future projections."
GWGIGWGO: Global-warming Garbage In, Global-warming Garbage Out
Hockey Schtick on December 10 also reported:
A paper published today in Geophysical Research Letters examines surface air temperature trends in the Eurasian Arctic region and finds "only 17 out of the 109 considered stations have trends which cannot be explained as arising from intrinsic [natural] climate fluctuations" and that "Out of those 17, only one station exhibits a warming trend which is significant against all three null models [models of natural climate change without human forcing]."
Climate alarmists claim that the Arctic is "the canary in the coal mine" and should show the strongest evidence of a human fingerprint on climate change, yet these observations in the Arctic show that only 1 out of 109 weather stations showed a warming trend that was not explained by the natural variations in the 3 null climate models. Additional studies demonstrating the failures and false predictions of climate computer models can be found on the Hockey Schtick blogspot here. Meanwhile, in a December 7 post on his WattsUpWithThat (WUWT) climate blog, Anthony Watts reported on a new study that shows climate models still struggle with medium-term climate forecasts. He asked: "How cold will a winter be in two years?" And "How well are the most important climate models able to predict the weather conditions for the coming year or even the next decade?" Very fair and important questions, obviously, if we are depending on these models to project global temperatures several decades into the future and guide global policies that will impact all humanity. He noted that German scientists Dr. Dörthe Handorf and Prof. Dr. Klaus Dethloff from the Alfred Wegener Institute for Polar and Marine Research in the Helmholtz Association (AWI) have evaluated 23 climate models and published their results in the current issue of the international scientific journal Tellus A. Watts summarized their conclusions:
There is still a long way to go before reliable regional predictions can be made on seasonal to decadal time scales. None of the models evaluated is able today to forecast the weather-determining patterns of high and low pressure areas such that the probability of a cold winter or a dry summer can be reliably predicted. None of the models was able to reliably reproduce how strong or weak the Icelandic Low, Azores High, and other meteorological centres of action were at a particular time over the last 50 years.
As many skeptical scientists have pointed out, for all the sophistication of computer models, they cannot account for many of the complex inputs that impact our climate. Dr. Handorf, one of the report's co-authors, acknowledges this limitation, noting that "it will not be enough to increase the pure computer power." "We must continue to work on understanding the basic processes and interactions in this complicated system called 'atmosphere,'" said Dr. Handorf.
“Even a high power computer reaches its limits if the mathematical equations of a climate model do not describe the real processes accurately enough.” Rising Tide of Facts Debunks Computer-generated Sea-rise Jo Nova, Australia’s climate-science dynamo, recently demolished the outlandish projections by climate alarmists that the city of Perth is in danger of being swamped by rising sea levels due to AGW. The actual data from the tide gauges, which was relatively simple to obtain, directly contradicts the alarmists computer models. And that seems to be the story for the sea-level climate bugaboo worldwide, according to a study by Professor Nils-Axel Mörner, one of the world’s top experts on sea levels. In a report issued December 7 with the unequivocal title, “Sea level is not rising,” published by the Science & Public Policy Institute (SPPI), Dr. Mörner states, “We are facing a very grave, unethical ‘sea-level-gate.’” Professor Mörner makes some stunning charges, including: • At most, global average sea level is rising at a rate equivalent to 2-3 inches per century. It is probably not rising at all. • Sea level is measured by both tide gauges and, since 1992, satellite altimetry. One of the keepers of the satellite record told Professor Mörner that the record had been interfered with to show sea level rising, because the raw data from the satellites showed no increase in global sea level at all. • The raw data from the TOPEX/POSEIDON sea-level satellites, which operated from 1993-2000, shows a slight uptrend in sea level. However, after exclusion of the distorting effects of the Great El Niño Southern Oscillation of 1997/1998, a naturally-occurring event, the sea-level trend is zero. • The GRACE gravitational-anomaly satellites are able to measure ocean mass, from which sea-level change can be directly calculated. The GRACE data show that sea level fell slightly from 2002-2007. • These two distinct satellite systems, using very different measurement methods, produced raw data reaching identical conclusions: sea level is barely rising, if at all. • Sea level is not rising at all in the Maldives, the Laccadives, Tuvalu, India, Bangladesh, French Guyana, Venice, Cuxhaven, Korsør, Saint Paul Island, Qatar, etc. • In the Maldives, a group of Australian environmental scientists uprooted a 50-year-old tree by the shoreline, aiming to conceal the fact that its location indicated that sea level had not been rising. This is a further indication of political tampering with scientific evidence about sea level. • Modelling is not a suitable method of determining global sea-level changes, since a proper evaluation depends upon detailed research in multiple locations with widely-differing characteristics. The true facts are to be found in nature itself. • Since sea level is not rising, the chief ground of concern at the potential effects of anthropogenic “global warming” — that millions of shore-dwellers the world over may be displaced as the oceans expand — is baseless. The results of Dr. 
The results of Dr. Mörner's research are especially relevant to assessing the claims of climate modelers that the survival of island nations such as Maldives and Tuvalu, and low-lying coastal areas in developing nations, such as India and Bangladesh, is being threatened by rising sea levels due to AGW from emissions of the "rich countries." The phony climate models projecting catastrophic sea-level rises are then used at UN climate summits, such as at Copenhagen, Cancun, Durban, Rio, and the recently concluded Doha summit, to call for carbon taxes and "loss and damages" payments to the "threatened" nations, in the interest of "climate justice." As Prof. Mörner charges, "sea-level gate" is indeed a grave scandal, showing widespread unethical practices and serious perversion of science. However, "sea-level gate" is just one of a multitude of scandals, collectively known as Climategate (see here, here, and here), nearly all of which employ computer modeling chicanery to craft wild scenarios (which invariably are contradicted by real-world observations and verifiable historical data) to promote an agenda of empowering governments at local, national, and international levels to deal with the fabricated "crises." In a July 10, 2012 op-ed column for the Australian journal Quadrant, Professor Cliff Ollier of the School of Earth and Environment at the University of Western Australia took aim at the dangerous practice of allowing unvetted and unreviewed computer models to determine policies in the name of "science." "Many think political decisions concerning climate are based on scientific predictions," noted Prof. Ollier. But, he continued, "This is not the case: what the politicians get are projections based on models. What is the difference, and why is it never made clear?" Models depend on what you put in (data), the program, and conclusions drawn from the output. The UN's Intergovernmental Panel on Climate Change uses adjusted data for the input, mostly from the discredited UK East Anglia Climate Research Unit, and their computer models and codes remain secret — not a scientific procedure. They do not give predictions of the future, but only computer projections. Furthermore they do not take responsibility for the alarm they generate.
FACT: No Warming For 16 Years — Computer Models Failed
Finally, Prof. Ollier, like many other scientists, points out that the real test of climate computer models is now in the public record: Despite the non-stop hyperventilation by the MSM talking heads about global warming, the fact is there has been no observable, measurable upward trend in global temperatures for the past 16 years. This was acknowledged in October of this year by the U.K.'s Met Office, which has been one of the major promoters of global-warming alarmism. Professor Phil Jones, director of the Climate Research Unit at the University of East Anglia, and one of the leading alarmists at the center of the Climategate e-mail scandal, stated that a period of 15 years without measurable warming would be required to invalidate the projections of the computer models. In 2009, when it was already becoming apparent that the Al Gore narrative based on the computer fables was in trouble, Jones sent an e-mail to one of his alarmist colleagues who was getting nervous: "Bottom line: the 'no upward trend' has to continue for a total of 15 years before we get worried." Done: the drastic global temperature rises predicted by all the modelers of doom have not occurred for nearly 16 years — according to all the real measurements.
The climate modelers have feet of clay. Professor Judith Curry, chair of the School of Earth and Atmospheric Science at Georgia Tech, says the lack of warming over the past 16 years makes it clear that the computer models used to predict future warming are “deeply flawed.” “Climate models are very complex, but they are imperfect and incomplete,” she notes. “Natural variability has been shown over the past two decades to have a magnitude that dominates the greenhouse warming effect.” “It is becoming increasingly apparent,” says Prof. Curry, “that our attribution of warming since 1980 and future projections of climate change needs to consider natural internal variability as a factor of fundamental importance.” This article was posted: Wednesday, December 19, 2012 at 11:44 am
<urn:uuid:83c4d6fb-06c8-408a-8f20-0aaa96e2d8e6>
2.515625
2,948
Nonfiction Writing
Science & Tech.
31.407265
2,230
Airdate: Sep 14, 2004
Scientist: Professor Michael Wells
A little caterpillar + a big appetite and a lot of help = the world's finest fabric.
ambience: Silkworms munching mulberry leaves
We're listening to the sounds of caterpillars munching mulberry leaves -- not just any caterpillars, mind you. If you're wearing a silk shirt or blouse, it could have had its origins right here. I'm Jim Metzner, and this is the Pulse of the Planet. The only way to manufacture silk is to harvest silkworms, feed them and encourage them to spin cocoons from which silk thread is made. The process originated in China thousands of years ago. "The one thing to realize about silkworms is that they have been bred specifically to make silk." Dr. Michael Wells is a biochemist at the University of Arizona. "So these insects actually can't fly. So they tend to put them just in big plastic barrels and let them mate, and lay them on pieces of paper. And then these are put into trays with freshly harvested mulberry leaves. The labor-intensive part is that about every 6 to 8 hours they have to replace them with a fresh batch of mulberry leaves." A silkworm grows from about the size of a pinhead to over 3" in length. Along the way it chews a prodigious amount of mulberry leaves, increasing its body mass about 10,000 times. "So you need a large number of mulberry trees, and a lot of people going out and picking leaves and chopping them up. And it takes about 3 to 4 weeks to grow from eggs to the stage where they spin the cocoons. They are put in, now, into plastic devices that look kind of like egg crates where they spin their cocoons - that keeps them all in one place, and then it makes it easy to harvest the cocoons. And then the cocoons are simply put into big vats of an alkaline solution - silk threads are drawn out from that vat with big machines." Pulse of the Planet is presented by DuPont, bringing you the miracles of science, with additional support provided by the National Science Foundation.
<urn:uuid:65d8097b-aa57-4ff5-add7-ff254ff7df80>
2.609375
472
Audio Transcript
Science & Tech.
61.056
2,231
Courtesy jimmowatt "101010 (base two (binary)) equals 42 (base ten). Oddly enough, this is evenly divisible by the number of days in a week (7 (lucky)); and equally oddly, is also evenly divisible by the number 6 (which is generally designated as being unlucky). Both a Yin and Yang situation seem to be incorporated into this date." HubPages.com

10 (base ten) = 1010 (base two)
(base ten): 10 x 10 = 100
(base two): 10 x 10 = 100 (that is, 2 x 2 = 4 in decimal)

In Hitchhiker's Guide to the Galaxy, the "Answer to Life, the Universe, and Everything" was 42.
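A quick way to verify this arithmetic is with any language that parses integers in arbitrary bases; the sketch below uses Python's built-in int() for the check.

```python
# Check the claims above with Python's built-in base conversion.
n = int("101010", 2)   # parse a base-two string
print(n)               # 42
print(n % 7, n % 6)    # 0 0 -> evenly divisible by both 7 and 6
# 10 x 10 = 100 holds in base two as well (2 x 2 = 4 in decimal):
print(int("10", 2) * int("10", 2) == int("100", 2))  # True
```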
<urn:uuid:eecc3fc9-5ba5-466d-ba38-2e209851b057>
2.65625
143
Knowledge Article
Science & Tech.
60.853
2,232
Anyone who has ever seen a streaky line of vapor, known as a contrail, behind a high-flying aircraft knows that airplanes can produce their own clouds. But in rarer cases, aircraft can also punch round holes, such as the one over Antarctica pictured here, or carve long channels through existing, natural clouds. Those formations arise from the strong cooling effects of airflow over a plane’s propeller blades or a jetliner’s wing. A study published recently in the journal Science reports that cooling can spontaneously freeze water droplets in the cloud and stimulate precipitation. The phenomenon requires a specific set of cloud conditions and is thus unlikely to have significant large-scale effects, but it could affect regional weather near airports.
<urn:uuid:f059d02f-96af-45bf-b069-b425bc15721c>
3.859375
147
Knowledge Article
Science & Tech.
34.675756
2,233
Workbook for BaBar Offline Users - Writing Code

Using an Object Oriented design approach to make proper use of C++.

Object Oriented programming offers a powerful model for writing computer software. Objects are "black boxes" which send and receive messages. This approach speeds the development of new programs and, if properly used, improves the maintenance, reusability, and modifiability of software by limiting the dependences among the various objects which are coded. OO Analysis and Design brings a new approach to modeling compared with traditional (procedural) methods, so understanding how OO differs from SA/SD analysis and design is important. The C++ language offers an easier transition via C, but it still requires an OO design approach in order to make proper use of this technology. This can often be a major problem for experienced C programmers. Object Orientation relies heavily upon concepts and terminology that make up the boundaries of the paradigm. Principles and definitions cannot be ignored in order to program within the OO paradigm boundaries. A rudimentary discussion of Object Orientation may be found in the Object Orientation section of this workbook. You may want to look at the FAQ to get quick answers to the basic ideas behind OO.

As an introduction to Object Oriented Programming, Bob Jacobsen and Dave Quarrie presented a series of lectures at SLAC in 1995. Like the C++ course, these lectures were recorded, and the videos are available for loan and for copy from SLAC and in Europe. The transparencies from the talks are available on the web.

Some useful books on OO design are:
- Robert C. Martin, "Designing Object-Oriented C++ Applications Using the Booch Method", Prentice-Hall Inc, 1995, ISBN 0-13-203837-4.
- E. Gamma, R. Helm, R. Johnson and J. Vlissides, "Design Patterns", Addison-Wesley, ISBN 0-201-63361-2.
- I. White, "Using the Booch Method: a Rational Approach", Benjamin Cummings, ISBN 0-8053-0614-5.
- G. Booch, "Object Oriented Analysis and Design with Applications (2nd ed)", Benjamin Cummings, ISBN 0-8053-5340-2.

OO methods are the way to develop software following a controlled and repeatable process; they rely on the parallel production of documentation (mostly OO diagrams) and code. Examples are the Booch method and the other approaches covered in the books above.

The C++ Language

You may or may not do OO programming with C++ (not even so-called pure-OO languages force you to go OO). If ever, these books are gold-plated: if you don't have them, buy them (and read them). They cover what you should and should not do with OO/C++.

- Meyers, Effective C++, Second Edition, 224 pgs, Addison-Wesley, 1998, ISBN 0-201-92488-9. Covers 50 topics in a short essay format.
- Meyers, More Effective C++, 336 pgs, Addison-Wesley, 1996, ISBN 0-201-63371-X. Covers 35 topics in a short essay format.
- Gamma et al., Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, ISBN 0-201-63361-2. Patterns and what OO is all about: the introduction should be carved into each developer's brain.

While Meyers' works constitute the do's and don'ts, the following two works cover most if not all aspects of legal C++:

- Lippman and Lajoie, C++ Primer, Third Edition, 1237 pgs, Addison-Wesley, 1998, ISBN 0-201-82470-1. Very readable/approachable.
- Stroustrup, The C++ Programming Language, Third Edition, 646 pgs, Addison-Wesley, 1998, ISBN 0-201-53992-6. Covers a lot of ground.

A series of 8 lectures introducing users to C++ programming was given by Paul Kunz during 1995. These lectures were all recorded on video, and the videos are available for loan and copy from SLAC and in Europe. The full transparencies for the course are available on the WWW (copies of these will be essential when viewing the videos!) at /BFROOT/www/Computing/Programming/ProgC++class.html. The Paul Kunz course closely follows Lippman's C++ Primer, listed above.

Another book, aimed at teaching the basics of C++ to Scientists and Engineers (i.e. FORTRAN programmers), is J. Barton and Lee R. Nackman, "Scientific and Engineering C++: An Introduction with Advanced Techniques and Examples", published by Addison-Wesley (ISBN 0-201-53393). This book contains many code examples, all of which are available from the Web, via links on the above pages.

Other useful texts for learning C++ are:
- Stanley B. Lippman, "C++ Primer", published by Addison Wesley. (ISBN 0-201-54848-8)
- Ira Pohl, "Object Oriented Programming Using C++ - 2nd Edition", published by Addison Wesley (ISBN 0-201-89550-1)
- Scott Meyers, "Effective C++", published by Addison Wesley. (ISBN 0-201-56364-X)
- Scott Meyers, "More Effective C++", published by Addison Wesley. (ISBN 0-201-63371-X)

In general, each file in a package should represent a class, and each class should contain methods (functions) to do just one task. The classes are defined in the header files (*.hh), which represent the class interface: there you state what the class is called and declare the variables and functions within that class. The .cc file is just the implementation of the functions. The standard include-guard convention should be adhered to for header files, to avoid the header file being "#include"'d more than once at compilation stage:

    #ifndef CLASSNAME_HH
    #define CLASSNAME_HH
    <class declaration info>
    #endif

Classes involving the Objectivity database are different - they contain a .cc file and a .ddl file (from which the compiler automatically creates a *_ddl.hh in a tmp directory when it's processing). These files are beyond the scope of this workbook chapter, and will be dealt with in the WorkBook chapter Writing Persistent Classes.

Error messages should be written in a way that allows program recovery, allows logging, and allows the user to control verbosity.

General Related Documents:

Last modification: 5 August 2004
Last significant update: 9 October 2002
<urn:uuid:7219644c-1d7e-486c-9561-60d1fb985e48>
3.296875
1,508
Tutorial
Software Dev.
59.950687
2,234
Oracle® OLAP DML Reference, 10g Release 1 (10.1), Part Number B10339-02

The RTRIM function removes characters from the right of a text expression: all of the rightmost characters that appear in another text expression are removed. The function begins scanning the base text expression from its last character, removes all characters that appear in the trim expression until reaching a character that is not in the trim expression, and then returns the result.

Return value: TEXT or NTEXT, based on the data type of the first argument.

Syntax: RTRIM(text-exp [, trim-exp])

text-exp: A text expression that you want trimmed.
trim-exp: A text expression giving the characters to trim. The default value of trim-exp is a single blank.

The following example trims all of the rightmost 'x' and 'y' characters from a string.

SHOW RTRIM('Last Wordxxyxy','xy')
Last Word
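For readers more familiar with general-purpose languages than with OLAP DML, Python's str.rstrip() happens to have the same semantics: it strips from the right any characters found in a given set, stopping at the first character not in the set. The analogy below is ours, not Oracle's.

```python
# Python analogue of the OLAP DML example above:
# rstrip() removes trailing characters drawn from the given set.
result = "Last Wordxxyxy".rstrip("xy")
print(result)   # Last Word
```

One caveat: with no argument, rstrip() removes all trailing whitespace, which is slightly broader than RTRIM's default trim set of a single blank.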
<urn:uuid:cd520809-c4ad-45d4-9fa2-72200ab700b1>
3.40625
191
Documentation
Software Dev.
66.864123
2,235
Pollution-gobbling molecules in global warming SMACKDOWN Boffins: Newly observed particles scrub our filthy air

Elusive pollution-busting molecules are scrubbing our planet's atmosphere at a much faster rate than first imagined, according to gas-bothering boffins. Reactions by the cleaning agents, known as Criegee intermediates, are also emitting a by-product that forms solar-radiation-reflecting clouds that could help cool Earth and reduce the effects of global warming. The Criegee biradicals were first hypothesised in the 1950s by German chemist Rudolf Criegee, but only now have they been recreated in a lab and directly measured for the first time. Specifically, the scientists took formaldehyde oxide – a species of Criegee intermediate – and observed it reacting with sulphur dioxide and nitrogen dioxide. These dioxides are said to initiate climate change in our atmosphere, yet it's now understood they are removed from the troposphere by helpful Criegee biradicals – described as pivotal atmospheric reactants. The reaction also spews sulphate and nitrate into the atmosphere, creating aerosol droplets that seed planet-cooling clouds. The rate of compound conversion is much higher than the boffins expected, leading them to conclude that the biradicals may have a greater impact on our climate than previously thought. The production of Earth's short-lived Criegee biradical stocks is fuelled by the combination of ozone and chemicals released naturally by plants. The intermediates, otherwise known as carbonyl oxide biradicals, were spotted by researchers from Sandia's Combustion Research Facility, the University of Manchester and Bristol University. The eggheads used a particle accelerator at the Lawrence Berkeley National Laboratory in the US to observe the process using photoionisation mass spectrometry - there's a video describing their work here.

Criegee intermediates react in lab conditions

"We have been able to quantify how fast Criegee radicals react for the first time. Our results will have a significant impact on our understanding of the oxidising capacity of the atmosphere and have wide-ranging implications for pollution and climate change," said project chief Dr Carl Percival of the University of Manchester. The University of Bristol's Professor Dudley Shallcross, who co-wrote the paper, added: "Natural ecosystems could be playing a significant role in off-setting global warming." The scientists' paper, Direct Kinetic Measurements of Criegee Intermediate (CH2OO) Formed by Reaction of CH2I with O2, was published in the latest issue of Science. ®
<urn:uuid:5f5ad4a1-f970-4c99-a3d2-53feaf94b094>
3.3125
540
News Article
Science & Tech.
15.257857
2,236
Orbits of twin moonlets around 87 Sylvia. Image credit: ESO

One of the thousands of minor planets orbiting the Sun has been found to have its own mini planetary system. Astronomer Franck Marchis (University of California, Berkeley, USA) and his colleagues at the Observatoire de Paris (France) have discovered the first triple asteroid system – two small asteroids orbiting a larger one known since 1866 as 87 Sylvia. “Since double asteroids seem to be common, people have been looking for multiple asteroid systems for a long time,” said Marchis. “I couldn’t believe we found one.” The discovery was made with Yepun, one of ESO’s 8.2-m telescopes of the Very Large Telescope Array at Cerro Paranal (Chile), using the outstanding image sharpness provided by the adaptive optics NACO instrument. Via the observatory’s proven “Service Observing Mode”, Marchis and his colleagues were able to obtain sky images of many asteroids over a six-month period without actually having to travel to Chile. One of these asteroids was 87 Sylvia, which had been known to be double since 2001, from observations made by Mike Brown and Jean-Luc Margot with the Keck telescope. The astronomers used NACO to observe Sylvia on 27 occasions, over a two-month period. On each of the images, the known small companion was seen, allowing Marchis and his colleagues to precisely compute its orbit. But on 12 of the images, the astronomers also found a closer and smaller companion. 87 Sylvia is thus not double but triple! Because 87 Sylvia was named after Rhea Sylvia, the mythical mother of the founders of Rome, Marchis proposed naming the twin moons after those founders: Romulus and Remus. The International Astronomical Union approved the names. Sylvia’s moons are considerably smaller, orbiting in nearly circular orbits and in the same plane and direction. The closest and newly discovered moonlet, orbiting about 710 km from Sylvia, is Remus, a body only 7 km across and circling Sylvia every 33 hours. The second, Romulus, orbits at about 1360 km in 87.6 hours and measures about 18 km across. The asteroid 87 Sylvia is one of the largest known from the asteroid main belt, and is located about 3.5 times further away from the Sun than the Earth, between the orbits of Mars and Jupiter. The wealth of detail provided by the NACO images shows that 87 Sylvia is shaped like a lumpy potato, measuring 380 x 260 x 230 km. It is spinning at a rapid rate, once every 5 hours and 11 minutes. The observations of the moonlets’ orbits allow the astronomers to precisely calculate the mass and density of Sylvia. With a density only 20% higher than the density of water, it is likely composed of water ice and rubble from a primordial asteroid. “It could be up to 60 percent empty space,” said co-discoverer Daniel Hestroffer (Observatoire de Paris, France). “It is most probably a ‘rubble-pile’ asteroid,” Marchis added. These asteroids are loose aggregations of rock, presumably the result of a collision. Two asteroids smacked into each other and got disrupted. The new rubble-pile asteroid formed later by accumulation of large fragments while the moonlets are probably debris left over from the collision that were captured by the newly formed asteroid and eventually settled into orbits around it.
“Because of the way they form, we expect to see more multiple asteroid systems like this.” Marchis and his colleagues will report their discovery in the August 11 issue of the journal Nature, simultaneously with an announcement that day at the Asteroid Comet Meteor conference in Armação dos Búzios, Rio de Janeiro state, Brazil. Original Source: ESO News Release
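The article does not show the calculation, but the mass follows from Kepler's third law applied to a moonlet's orbit, and the density from the quoted ellipsoid dimensions. A back-of-the-envelope sketch (our own check using the article's numbers, not the authors' analysis):

```python
import math

G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2

# Romulus: orbital radius ~1360 km, period ~87.6 hours (from the article)
a = 1360e3                       # m
T = 87.6 * 3600                  # s

# Kepler's third law: M = 4 * pi^2 * a^3 / (G * T^2)
mass = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"Sylvia's mass ~ {mass:.2e} kg")            # ~1.5e19 kg

# Approximate Sylvia as a triaxial ellipsoid, 380 x 260 x 230 km
vol = 4 / 3 * math.pi * (190e3 * 130e3 * 115e3)    # semi-axes in metres
print(f"density ~ {mass / vol:.0f} kg/m^3")        # ~1250 kg/m^3, ~20% denser than water
```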
<urn:uuid:51e12761-a4e3-4c84-8a2e-c5691d857427>
3.203125
839
News Article
Science & Tech.
45.379911
2,237
2008.02.20 > For Immediate Release contact: Wendy Townley - University Relations phone: 402.554.2762 - email: email@example.com

New Cell Behaviors Discovered by UNO Research Team

Omaha - The Mathematical Biology Research Group at the University of Nebraska at Omaha (UNO) has found new evidence that individual cells have chemical circuits that allow them to process information from their environment. The finding, published in the Feb. 12 issue of Proceedings of the National Academy of Sciences of the United States of America, shows that the information processing capacity results in decision-making ability at the cellular level -- a molecular “brain” for cells. The results were obtained from the mathematical analysis of a new, large-scale model of the chemical pathways. The model, created by the research group, is unique in that it is based completely on the logic of the chemical interactions that make up the system. Much like humans encounter a flood of sensory information from their environment that must be processed to allow rational decisions to be made, cells receive large amounts of information from their environments in the form of chemical cues. These cues are detected by cells through specific chemical receptors on their surfaces, and the numerous types of receptors form the equivalent of a cellular sensory system. It has recently been speculated that the astonishingly complex chemical networks inside the cell that are associated with these cell surface receptors might be involved in some sort of information processing, but until now the only evidence came from mostly descriptive studies of the complex structure of these networks. The UNO group moved beyond description of the structure and was able to determine the complete logic of the system. In order to assess how the logic of the interactions resulted in cellular action, a novel cellular simulation computer program was developed by the group that puts the cellular logic into motion. As a result, the group was able to observe and analyze how the chemical networks reacted to tens of thousands of possible cellular environments. The mathematical analysis of the results revealed that cells are able to classify different environments based on their similarities and make rational decisions as to cellular actions required in those environments. Full understanding of the complex chemical networks in cells is critical as a number of human diseases, most notably cancer, result from malfunctions in the activities of individual chemicals that make up the networks. The Mathematical Biology Research Group at UNO is a highly collaborative group of mathematicians and cell biologists. The members of the group that made the new discovery include John Konvalina and Jack Heidel, both mathematicians, and Jim Rogers, a cell biologist. The computer software was developed by Tomas Helikar, a graduate student in Bioinformatics at the University of Nebraska Medical Center and a member of the group. The project was funded by a three-year, nearly $600,000 grant to the research group from the National Institutes of Health. For more information, call (402) 554-2762. © 2013 University Communications. voice: 402.554.2129, fax: 402.554.3541, firstname.lastname@example.org
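The release does not include the model itself, but the style of simulation it describes, putting the logic of chemical interactions "into motion", is commonly implemented as a Boolean network: each chemical species is on or off, and update rules encode the interaction logic. The toy sketch below illustrates only the general technique; the species names and rules are made up for illustration and are not from the UNO model.

```python
# Toy synchronous Boolean network: each node is True/False, and each rule
# computes a node's next state from the current state of the whole network.
# Species and rules here are illustrative, not the published model.
rules = {
    "receptor":  lambda s: s["signal"],                       # active iff the cue is present
    "kinase":    lambda s: s["receptor"] and not s["inhibitor"],
    "inhibitor": lambda s: s["kinase"],                       # negative feedback loop
    "response":  lambda s: s["kinase"],
}

def step(state):
    # Synchronous update: every rule reads the same current state.
    new = dict(state)
    for node, rule in rules.items():
        new[node] = rule(state)
    return new

state = {"signal": True, "receptor": False, "kinase": False,
         "inhibitor": False, "response": False}
for t in range(6):
    print(t, state)
    state = step(state)
```

Running this shows the response switching on and then oscillating under the feedback, the kind of environment-dependent behavior such models are used to classify.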
<urn:uuid:2ce76f63-f04c-464d-b960-fa411bd3f610>
2.734375
642
News (Org.)
Science & Tech.
32.043974
2,238
Highlights research studies involving genetics and genomics, includes a glossary of terms on the subject, a directory of scientists involved in this type of work, and description of research facilities through which USGS carries out genetic studies Overview of chemical analyses, tracer studies, gas geochemistry, stable isotopes analyses, organic chemistry, and thermometry capabilities at major USGS laboratories with links to technique, equipment, and contacts for each procedure. Services available at the Geological Survey TRIGA Reactor (GSTR) site in Denver with information on irradiation, neutron activation analysis, fission track radiography, and geochronology and tours of the facility. Provides hydrologic instruments, equipment, and supplies for USGS, other Federal agencies, and cooperators. Also tests, evaluates, repairs, calibrates, and develops hydrologic equipment and instruments. Homepage of the Land Processes DAAC, which processes, archives, and distributes land-related data collected by Earth Observing System (EOS) sensors. Links to data products including ETM+ Landsat data, ASTER data, and MODIS land products.
<urn:uuid:93903e43-b3b2-4efe-a513-4468f8ea0b88>
2.53125
234
Content Listing
Science & Tech.
-0.047769
2,239
I'm Dave Thurlow for the Mount Washington Observatory and this is The Weather Notebook. Around the beginning of this century, there were few scientists who had much of an idea about how rain, hail, sleet, and snow form. It was, and still is, easy to get the general process of water vapor turning to liquid water or ice, but what happens at the instant when cloud droplets floating in the air become a raindrop or a snowflake? In 1885, a farmer from Jericho, Vermont, asked the same question. Forty years later, this man, with no high school education, received the first cash research prize ever awarded by the American Meteorological Society and was known around the world as Wilson Bentley, the Snowflake Man. Bentley spent his life taking thousands of detailed and beautiful photographs of snowflakes, recording in detail the nature of the storm that produced them. By 1905, he had theories about the role of ice crystals in the formation of snow and rain, and had established basic temperature profiles of the sky above by simply examining what falls from it. To this day, after years of high-tech study, his findings have proven largely to be true. But in the early 1900's nobody in the scientific community paid Mr. Bentley any attention at all. Because he wrote about his work eloquently and emotionally, scientists thought he was a nut. But while many of his critics' names are buried away in the footnotes of weather research history, Snowflake Bentley's name lives on. The Weather Notebook is produced by the Mount Washington Observatory...funded by The National Science Foundation and underwritten by Subaru, maker of the All-Weather Legacy. Subaru -- the beauty of All-Wheel Drive.
<urn:uuid:c68ca766-d280-4f15-9e5e-e03d41a035f6>
3.3125
347
Audio Transcript
Science & Tech.
50.238824
2,240
The Atacama Desert is a good place for astronomy. There usually aren't clouds, and there is almost no light pollution. This picture shows the La Silla Observatory, with a number of domes holding telescopes. Image courtesy of the European Southern Observatory, photograph by C. Madsen.

Life in the Atacama Desert

The Atacama Desert in Chile is one of the driest places on Earth. Some plants, animals, and microbes manage to survive there, though. People live and work in the Atacama Desert, too. Places like the Atacama, where life struggles to get by, are called extreme environments. How can anything live in such a dry desert? In some places, living things get moisture from fog that rolls in off the Pacific Ocean. Some types of algae, lichens, and cacti get water this way. Some types of microbes live under rocks or even in tiny spaces within rocks. These "homes" protect the microbes from the heat and dryness.

Why would people live or work in a desert like the Atacama? There has been a lot of mining over the years in the Atacama. People mined silver there in the 16th-18th centuries. Today there are some very large copper mines in the desert. For many years, people mined a chemical called sodium nitrate in the Atacama. It was used to make fertilizers and explosives. Bolivia, Peru, and Chile even fought a war (called the War of the Pacific) in the late 1800s over the valuable nitrate deposits! There are several cities along the Pacific Ocean near the Atacama Desert. They are ports that ship the mining products to other places. Some of the people who live in the Atacama have a strange way of getting water. There is a village in northern Chile called Chungungo. People who live there use nets to "harvest" water from thick fog banks that roll in off the nearby Pacific Ocean!

It may seem strange, but people do astronomy and space exploration work in the Atacama Desert. There are usually few clouds and almost no light pollution in the Atacama. That makes it good for astronomy. The European Southern Observatory has several large telescopes in Chile. The extreme dryness of the Atacama is similar to the surface of the planet Mars. Scientists sometimes test robots and sensors in this desert before sending them to Mars. Since even microbes are rare in the Atacama, it is a good place to test instruments that will be used to search for life on other worlds. Finally, many meteorites have been found in this desert.
<urn:uuid:a935c0a2-a5f2-4bce-ad3a-a493d0ac6366>
3.4375
943
Knowledge Article
Science & Tech.
55.247328
2,241
This chart is a comprehensive view of global, anthropogenic greenhouse gas (GHG) emissions. The chart is an updated version of the original chart, which appeared in Navigating the Numbers: Greenhouse Gas Data and International Climate Policy. One of the greatest challenges relating to global warming is that greenhouse gases result—directly or indirectly—from almost every major human industry and activity. This chart shows these industries and activities, and the type and volume of greenhouse gases that result from them. It includes emissions estimates from a range of international data providers, in an attempt to account for all significant GHG emissions sources. In 2005, total GHGs were estimated at 44,153 MtCO2 equivalent (million metric tons). CO2 equivalents are based on 100-year global warming potential (GWP) estimates produced by the IPCC. 2005 is the most recent year for which comprehensive emissions data are available for every major gas and sector.

Comparison to 2000

If you are familiar with the original version of this chart, you may be interested to know what has changed in the update, and why. Total global emissions grew 12.7% between 2000 and 2005, an average of 2.4% a year. However, individual sectors grew at rates between near zero and 40%, and there are substantial differences in sectoral growth rates between developed and developing countries. The other major difference in this version of the chart concerns the Land Use Change sector, which comprised 18.2% of GHG emissions in the previous version of the chart and only 12.2% of emissions in this version. The apparent decrease is entirely due to revised methodologies used to calculate deforestation in the underlying FRA data, and not to any actual decrease in deforestation rates. Read the working paper for this chart for more information.
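The chart's common currency, MtCO2 equivalent, is obtained by multiplying each gas's emitted mass by its 100-year global warming potential. A minimal sketch of that conversion follows; the GWP values are the IPCC AR4 100-year figures (CO2 = 1, CH4 = 25, N2O = 298), and the chart's exact factors, taken from an earlier IPCC report, may differ slightly. The emissions figures in the example are hypothetical.

```python
# Convert emissions of several gases to a single CO2-equivalent total.
# GWP-100 values follow IPCC AR4; the chart may use different factors.
GWP100 = {"CO2": 1, "CH4": 25, "N2O": 298}

def co2_equivalent(emissions_mt):
    """emissions_mt: dict mapping gas name -> million metric tons emitted."""
    return sum(mass * GWP100[gas] for gas, mass in emissions_mt.items())

# Hypothetical example: 30000 Mt CO2, 300 Mt CH4, 10 Mt N2O
print(co2_equivalent({"CO2": 30000, "CH4": 300, "N2O": 10}), "MtCO2e")
```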
<urn:uuid:950ac813-d5e0-46c8-a195-4f8acf42bc02>
3.53125
364
Knowledge Article
Science & Tech.
40.826894
2,242
This pie chart shows the relative likelihood of observing particular other species commonly observed near Allophylus edulis. These species are those which most commonly occur in our observation database near Allophylus edulis. Observations favor some phyla over others. Typically Bacteria, Fungi, Protozoa, and Arthropods are more common in the field than in our records. In the sections below, we make some habitat inferences based on the known habitat preferences of those species most commonly associated with Allophylus edulis.

Vegetation and land cover: cultivated areas, deciduous woods and forests, desert, disturbed sites, fields, forests, grasslands, hammocks, meadows, open forests, pasture, pine forests, rain forest, steppes, thickets.

Landforms: dry slopes, flood plains, roadsides, rock outcrops, sand dunes, streamsides.

Soils: clay, limestone, loam, sandy areas, sandy soil, thin soil.

Wetlands and water bodies: bogs, brackish water, ditches, dry areas, flood plains, lagoon, lakes, marshes, ponds, rivers, shores, stream banks, streams, swamps.
<urn:uuid:b7d5eee8-3ff2-43e3-b9ea-e55a5250661d>
3.140625
240
Structured Data
Science & Tech.
21.488161
2,243
The Moon's far side, although not lacking for light, remained dark in the sense of hidden or obscured until the space race between the US and USSR took aim at the Moon. The Soviets' Luna 3 probe returned the first images of the far side in 1959, and the results were a bit of a surprise. The near side is covered with large, dark, basaltic flows that are called maria; these are rare on the far side, which is dominated by the rugged lunar highlands. A number of explanations have been offered for this difference, but today's issue of Nature contains what is certainly the most dramatic one yet: it suggests that the highlands are the remains of the Earth's missing moon, plastered across the far side of the one remaining Moon. A consensus has formed around the theory that the Moon originated from a collision early in the history of the solar system, when a near-Mars-sized body smacked into the Earth. The resulting debris coalesced into two bodies. Models of this process nicely account for some of the difference between the Earth and the Moon, including Earth's large, iron-rich core. (Robin Canup, who does some of this modeling, has placed videos of the process on her website.) Frequently, these simulations produce a three-body system: the Earth, the Moon, and a smaller companion. In most of these cases, the smaller companion is quickly swallowed up by the Moon while it is still primarily molten, erasing all traces of it. But the authors suggest a possible alternative: a small moon could end up in one of the Trojan points, where the combined gravity of the Earth and Moon can hold a body in step with the Moon's orbit, providing a semi-stable home. In this situation, the small companion would be stable for up to 70 million years before a resonance with the gravity of the Sun would pry it from the Trojan point. That would be enough time for the Moon to develop a crust, and for the smaller body (we'll call it Moon II) to solidify entirely. The authors went on to model what would happen if Moon II, once pried out of the Trojan site, were to end up having its own collision with the Moon. Moon II was estimated to be about a third the size of the existing Moon, with a similar composition, except that it would have an entirely solid crust and core, since its small size would allow it to cool faster. The Moon itself was estimated to still have some molten material (a 50 km deep magma ocean), with a 20 km deep solid crust floating on top of it. The whole system was modeled as a set of blocks 5 km on a side. All that computation was apparently quite expensive, as the authors only tested two different collisions, one head-on, the other at a 45-degree angle. Both of these runs assumed relatively low velocity, just over the two-body escape velocity: 2.4 km/s. Because of the difference in masses involved, the authors estimate that the Moon/Moon II impact would carry only 2.5 percent of the kinetic energy that the Moon-forming impact did. It's also below the speed of sound in silicates, one of the primary components of the two bodies involved. And these factors, the authors say, are enough to make it a qualitatively different collision. "Our primary finding," they note, "is that a companion moon, 1/3 the diameter of the Moon, striking at subsonic velocity, does not form a crater." The volume of the impacting body ends up exceeding the volume the impact could possibly excavate. "The impact produces an accretionary pile rather than a crater." But that doesn't mean that it has no effect on the Moon.
For starters, their model suggests that the majority of the magma ocean would get pushed to the opposite side of the Moon, which would explain the preponderance of maria on that side. In addition, most of the material from Moon II would stay near the point of impact, "pasting on a thickened crust and forming a mountainous region comparable in extent to the far side highlands," they conclude. In short, their model produces something that looks a lot like the actual Moon. The problem is that, since Moon II probably looked a lot like the Moon in terms of its composition, there's no obvious way of telling which rocks came from which. Crustal rocks originating on the Moon have a wide spread of ages (about 200 million years), which is consistent with multiple origins, but could also be consistent with uneven cooling. And, as noted above, this isn't the first model proposed for the differences between the near and far sides of the Moon (alternatives include things like uneven tidal heating and a large impact near the Moon's South Pole). Fortunately, a mission that may help resolve this (or at least eliminate the impact model) is already in progress. NASA's GRAIL (Gravity Recovery and Interior Laboratory) will produce the same sort of gravity maps that the GRACE mission is making for the Earth. GRAIL is scheduled to launch next month. If the authors are right, the magma that was pushed off the far side should have left some indications of its shift behind, and these should show up in the gravity analysis.
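The quoted 2.4 km/s can be sanity-checked against the mutual escape velocity of the two bodies. The sketch below assumes Moon II had one third the Moon's diameter and the same density, and so 1/27 of its mass; those are our assumptions for the check, consistent with the article's description but not stated by the authors in these terms.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_moon = 7.35e22     # mass of the Moon, kg
R_moon = 1.737e6     # radius of the Moon, m

# Moon II: assumed 1/3 the diameter and the same density => 1/27 the mass
R2 = R_moon / 3
M2 = M_moon / 27

# Two-body mutual escape velocity at contact:
# v_esc = sqrt(2 G (M1 + M2) / (R1 + R2))
v_esc = math.sqrt(2 * G * (M_moon + M2) / (R_moon + R2))
print(f"{v_esc / 1000:.1f} km/s")   # ~2.1 km/s, so 2.4 km/s is indeed just above escape
```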
<urn:uuid:fdd43574-a116-43b0-859b-789dadeb76b6>
4.3125
1,072
News Article
Science & Tech.
49.624754
2,244
A privately funded spacecraft launched from a Russian submarine and intended to deploy a solar sail into Earth's orbit was lost to controllers shortly after takeoff, but late Tuesday engineers tracking Cosmos 1 said the craft might have been found. Engineers in Moscow and Pasadena poring through reams of tracking data said eight hours after the launch, which took place in the Barents Sea, that they might have detected faint signals from the craft indicating that it was in space, but not in its intended orbit. Data from four separate tracking stations -- buried under a large amount of background noise -- "appears to indicate a spacecraft signal," the Planetary Society said in a late-night statement released in Pasadena, where the society is based. "It seems like it is in orbit," said David Betts, director of projects for the society. "The most consistent story is that it made its orbit and is transmitting. That's great news." Cosmos 1 -- a $4-million spacecraft powered only by the sun's rays -- is regarded by its makers as the first practical attempt to engineer a class of space vehicle that could reach other planets and other stars using rocket power only to attain Earth orbit. Powered by photons emitted by the sun, it could theoretically attain speeds far greater than those of the space shuttle. The apparent reacquisition of the spacecraft was the first bit of good news for the assembled members of the society, who had grown increasingly despondent as the day wore on without any sign of the craft from the tracking stations. A search by U.S. Strategic Command government radar failed to find any trace of the craft. But they were looking for it in the expected orbit, and that's probably why they didn't find it, Betts said. "It's probably in a lower orbit, which is why the signals are so weak," he added. The tracking stations in Petropavlovsk and the Kamchatka Peninsula in Russia, Majuro in the Marshall Islands and Panska Ves in the Czech Republic are part of the Russian space network. The mission is being controlled in Moscow, with a secondary control center in a remodeled carriage house near the society's headquarters in Pasadena. Theorists and science-fiction writers have long imagined a type of spacecraft that would be swept through the cosmos by the sun's rays as they were reflected on broad panels or wings. The spaceship would steadily accelerate as it journeyed deep into space, venturing into the outer solar system or possibly other stars. Among the strongest proponents for such a spacecraft was the late astronomer Carl Sagan. Cosmos 1 is intended to test the theory that photons, or packets of light emitted by the sun, could propel the spacecraft to a higher and higher orbit before it eventually fell back to earth. After being lofted into its initial orbit by a converted intercontinental ballistic missile, Cosmos 1 was designed to unfurl a delicate array of 49-foot sails that would spread like blades on a windmill. The converted intercontinental ballistic missile carrying the craft launched safely at 12:46 p.m. Pacific Daylight Time from the Russian submarine Borisoglebsk. The first part of the launch went according to plan, Planetary Society President Louis D. Friedman said from Moscow, but controllers subsequently observed a great deal of "noise" in the signal during the last part of the launch phase. Cosmos 1 apparently encountered trouble after its main booster rocket had expended all its fuel. 
The spacecraft was to have been boosted into its target orbit by a second small rocket, but ground controllers never received data indicating that the second rocket had fired. The situation grew even more grave as ground-based tracking stations detected no evidence that the spacecraft had reached its predicted orbit. The society describes itself as the largest space advocacy group on Earth, with more than 80,000 members. Engineers had originally planned to unfold the spacecraft's petal-like sail on Thursday, but it now seemed likely that event would be delayed for several days -- if it could take place at all. If the satellite is, in fact, in a lower orbit, the sail might encounter too much air resistance if it is deployed. The original plan called for a 30-day stay in space before the satellite was brought back to the ground, but that plan might also be changed. The long-term goal would be to use a larger sail to propel a craft to other planets in the solar system, using only enough fuel to launch the craft into an initial Earth orbit. Friedman, former Jet Propulsion Lab director Bruce Murray and Sagan began working on such an idea in the 1970s, but Tuesday's launch marked the first realistic test of the concept. Cosmos Studios in Ithaca, N.Y. -- a science entertainment company founded by Sagan's widow, Ann Druyan -- provided much of the $4-million launch cost. "Whatever we discover from this mission, if it's not a success, we'll still learn from it," Druyan said. "The way to the stars is hard." Times staff writer Brad Wible and Associated Press contributed to this report.
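For scale, the photon thrust on a sail like this is tiny but continuous. The sketch below assumes a perfectly reflective sail of roughly 600 square meters (a total area widely reported for Cosmos 1 but not given in this article) and a spacecraft mass of about 100 kg, also an assumption, at Earth's distance from the Sun.

```python
# Radiation-pressure thrust on a flat, perfectly reflective sail facing the Sun.
# F = 2 * S * A / c   (factor 2: reflected photons transfer twice their momentum)
S = 1361.0    # solar constant near Earth, W/m^2
c = 3.0e8     # speed of light, m/s
A = 600.0     # assumed total sail area, m^2 (not stated in the article)

F = 2 * S * A / c
print(f"thrust ~ {F * 1000:.1f} mN")        # a few millinewtons

m = 100.0     # assumed spacecraft mass, kg
print(f"acceleration ~ {F / m:.1e} m/s^2")  # tiny, but it never runs out of propellant
```

That continuous acceleration, integrated over weeks and months, is why sail advocates argue such craft could eventually outrun chemically propelled spacecraft.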
<urn:uuid:3ce479ac-237d-413d-8305-0bfef3690921>
2.765625
1,039
News Article
Science & Tech.
50.487659
2,245
CORVALLIS, Ore. – Ocean acidification is a complex global problem because of increasing atmospheric carbon dioxide, but there also are a number of local acidification “hotspots” plaguing coastal communities that don’t require international attention – and which can be addressed now. A regulatory framework already is in place to begin mitigating these local hotspots, according to a team of scientists who outline their case in a forum article in the journal Science. “Certainly, ocean acidification on a global level continues to be a challenge, but for local, non-fossil fuel-related events, community leaders don’t have to sit back and wait for a solution,” said George Waldbusser, an Oregon State University ecologist and co-author of the paper. “Many of these local contributions to acidity can be addressed through existing regulations.” A number of existing federal environmental laws – including the Clean Air Act, the Clean Water Act, and the Coastal Zone Management Act – provide different layers of protection for local marine waters and offer officials avenues for mitigating the causes of local acidity. “The localized events might be nutrient-loading or eutrophication issues that can be addressed,” said Waldbusser, an assistant professor in OSU’s College of Oceanic and Atmospheric Sciences. “Communities don’t have to wait for a global solution.” The commentary article in Science, “Mitigating Local Causes of Ocean Acidification with Existing Laws,” was inspired in part by some of Waldbusser’s work in Chesapeake Bay, which highlighted how increasing acidity in sections of the Chesapeake was exceeding rates that could be explained by increasing carbon dioxide from fossil fuel emission. Lead authors on the Science forum paper were Ryan Kelly and Melissa Foley of the Stanford University Center for Ocean Solutions. The scientists point to a recent lawsuit that resulted in a U.S. Environmental Protection Agency memorandum outlining the responsibility of individual states to apply federal environmental laws to combat acidification in state waters. As a result, EPA now encourages states to list “pH-impaired” coastal waters where such data exist. One such example, Waldbusser says, is in Puget Sound, where nutrient-loading from sewage treatment plants has created large plankton blooms that eventually die and contribute to greater acidification. “When these blooms die and sink to the bottom, they suck the oxygen out of the water,” Waldbusser said. “Low oxygen is the flip side of high CO2. People in the Northwest are starting to become aware of hypoxia and its impacts, but there hasn’t been the same awareness of ocean acidification on a local level.” Awareness of acidification may be growing. Waldbusser points to work at Whiskey Creek Shellfish Hatchery in Oregon’s Netarts Bay, which monitors ocean water daily for acidification. The northwest oyster industry has been plagued by larval die-offs, and ocean acidification may be to blame. The hatchery now takes water from the bay only at certain times of the day when acidification levels are lowest. The OSU ecologist is also studying naturally occurring counter-balances to acidification, including the role of oyster and clam shells. Commercial oyster shells are typically removed from the water and native oyster populations have plummeted, so there may be fewer shells in Oregon estuaries than ever before. “Calcium carbonate shells help neutralize the effects of acidification,” Waldbusser said.
“In essence, they are akin to giving the estuary a dose of Tums. We’re trying to determine how much of an impact shells may have and when conditions are corrosive enough to release the alkalinity from those shells back into the water.” This release is also available at: http://bit.ly/j19Hh0
<urn:uuid:c95638e3-ebe8-4804-85b1-086f825a42e6>
3.109375
837
News (Org.)
Science & Tech.
24.162851
2,246
Carbon dioxide from burning fossil fuels is changing the oceans’ chemistry. This is ocean acidification. The head of the National Oceanic and Atmospheric Administration calls ocean acidification global warming’s equally evil twin. The oceans are absorbing up to a million tons of carbon dioxide every hour. The good news: less carbon dioxide in the air means the atmosphere will warm up more slowly. You guessed there’s bad news? Right again! The bad news is that as the oceans absorb large quantities of carbon dioxide, they become more acidic. When a molecule of carbon dioxide (CO2) enters the ocean, it reacts with water to form carbonic acid; the hydrogen ions that acid releases then bind to, and lock up, carbonate molecules. Corals, clams, various plankton, crusting coralline algae, and other creatures that make skeletons and shells of calcium carbonate need those same carbonate molecules that carbon dioxide steals. Carbonate scarcity slows their growth, making them more fragile, and sometimes fatally deformed. Carbonate concentrations in the upper few hundred feet (tens of meters) of the ocean have already declined about 10 percent compared to seawater just before steam-engine times. And at the base of the food chain, some of the most important ocean drifters use calcium carbonate. They include organisms like single-celled foraminifera and coccolithophorids, which drift the ocean in uncountable trillions, plus certain pteropods (silent ‘p’; they’re related to snails). Trouble for these organisms means trouble for everything that eats them. Most people have not heard of pteropods, but they are well known to hungry young mackerel, pollock, cod, haddock, and salmon. Trouble for corals, meanwhile, is trouble for everything that lives in or on them. Ocean acidity has increased 30% since the industrial revolution. Scientists predict that the oceans will become progressively more acidic over the next century. The solution: We need an energy economy based on renewable energy, especially energy sources that do not have to be burned, such as the power of the sun, wind, tides, and the heat of the Earth—the power that drives the whole planet. What will happen to coral reefs? It’s “hotly” debated.

3 things you can do to fight ocean acidification:
1. Support clean energy businesses
2. Drive an electric or highly efficient vehicle
3. Switch to renewable fuels whenever possible.

Other great ways you can make a difference.

LINKS, VIDEOS & SOURCES
The End of Reefs, Okeanos
Northwest Oysters Die Off, Reader Supported News
Coral Bleaching, Carl’s Blog
Marine Sponges Bore Faster Due to Effects of Climate Change, Blue Ocean Institute
Ocean Acidification in the Weddell Sea, Climate Reality Blog
Ocean Acidification, National Geographic
Carbon Burden on the World’s Oceans, Yale Environment
How Acidification Threatens Oceans from the Inside Out, Scientific American
Carbon Dioxide and Our Ocean Legacy, NOAA
The Ocean in a High CO2 World Symposium, Ocean Acidification Net
Further Reading on Ocean Acidification: Carbon’s Burden on the World’s Oceans, Yale 360

This groundbreaking NRDC documentary explores the startling phenomenon of ocean acidification, which may soon challenge marine life on a scale not seen for tens of millions of years. The film, featuring Sigourney Weaver, originally aired on Discovery Planet Green. Natural CO2 vents on the floor of the ocean near Italy cause differing levels of acidity and show how marine life is impacted. This is a look into the future of climate change’s impact on the oceans’ basic chemistry.
Ocean Acidification is a global-scale change in the basic chemistry of oceans that is under way now, as a direct result of the increased carbon dioxide in the atmosphere. We are just beginning to understand the impacts of Ocean Acidification on life in the ocean.
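The "30% more acidic" figure and the pH scale are related logarithmically: a drop of roughly 0.1 pH unit corresponds to about a 26-30% rise in hydrogen-ion concentration. A quick check, using the commonly cited pre-industrial surface pH of about 8.2 falling to about 8.1 (exact values vary by source):

```python
# pH is -log10 of hydrogen-ion concentration, so a small pH drop
# means a disproportionately large rise in acidity.
pH_preindustrial = 8.2   # commonly cited pre-industrial surface-ocean value
pH_today = 8.1           # commonly cited modern value

increase = 10 ** (pH_preindustrial - pH_today) - 1
print(f"H+ concentration up ~{increase:.0%}")   # ~26%, in line with the ~30% figure
```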
<urn:uuid:ddaffca7-1156-4563-9b12-3b2834675bd1>
3.4375
855
Knowledge Article
Science & Tech.
33.405447
2,247
A fun new technology that harvests power from a small generator embedded in the sole of your shoe has been developed by Dr. Ville Kaajakari at Louisiana Tech University (LTU). The technology cannot power your house (yet), but it can be used for a range of useful purposes. “This technology could benefit, for example, hikers that need emergency location devices or beacons,” said Kaajakari. “For more general use, you can use it to power portable devices without wasteful batteries.” Kaajakari’s breakthrough technology uses a low-cost polymer transducer with metalized surfaces for electrical contact. Conventionally, ceramic transducers would be used, but given that they might not be comfortable or durable in the sole of your shoe, Kaajakari went with this soft and robust alternative that matches the properties of regular shoe fillings. Rather than a conventional heel shock absorber, the transducer is built into the sole and supposedly creates the same user experience (in other words, you wouldn’t notice the difference). “Kaajakari’s innovative technology, developed at Louisiana Tech’s Institute for Micromanufacturing (IfM), is based on new voltage regulation circuits that efficiently convert a piezoelectric charge into usable voltage for charging batteries or for directly powering electronics,” LTU reports. Currently, the technology cannot generate enough power for very energy-intensive equipment, but eventually, in addition to being able to power sensors, GPS units or portable devices that don’t require a large amount of energy, Kaajakari hopes the technology will be able to create enough energy to power or charge common portable devices like cell phones. If you keep up with clean tech news like this, you’ve probably seen this sort of “piezoelectric energy generation” thing before. So, why aren’t such technologies on the market yet? Well, piezoelectric energy generation doesn’t seem to be worth what it takes to make it happen in many cases. Dr. Kaajakari’s breakthrough technology is being featured in MEMS Investor Journal, a national online industry publication that informs investment professionals about the latest developments in the micro electro mechanical systems (MEMS) industry, so perhaps some investors will see it and help it to move into a more prolific place. Image Credit: Louisiana Tech University
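The article quotes no power figures, but a rough upper bound on what walking makes available at the heel is easy to estimate. In the sketch below, every input (walker's mass, heel deflection, step rate, conversion efficiency) is our own illustrative assumption, not a figure from LTU, and real piezoelectric harvesters capture only a small fraction of the mechanical budget.

```python
# Upper bound on mechanical power available at the heel while walking.
# All inputs are illustrative assumptions, not figures from the article.
m = 70.0      # walker's mass, kg
g = 9.81      # gravitational acceleration, m/s^2
d = 0.005     # heel compression per step, m (5 mm)
f = 1.0       # steps per second on that foot, Hz

P_mech = m * g * d * f
print(f"mechanical power ~ {P_mech:.1f} W")   # ~3.4 W

efficiency = 0.05   # assumed harvester conversion efficiency
print(f"harvested ~ {P_mech * efficiency * 1000:.0f} mW")  # ~170 mW with these assumptions
```

Numbers on this order, milliwatts to a few hundred milliwatts, explain why beacons and sensors are plausible loads while phones remain an aspiration.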
<urn:uuid:22caffe0-e113-4897-af10-f3551a8bb8ff>
2.875
640
News Article
Science & Tech.
25.272462
2,248
Global hurricane activity has decreased to the lowest level in 30 years. Very important: global hurricane activity includes the 80-90 tropical cyclones that develop around the world during a given calendar year, including the 12-15 that occur in the North Atlantic (Gulf of Mexico and Caribbean included). The heightened activity in the North Atlantic since 1995 is included in the data used to create this figure.

As previously reported here and here at Climate Audit, and chronicled at my Florida State Global Hurricane Update page, both Northern Hemisphere and overall global hurricane activity have continued to sink to levels not seen since the 1970s. Even more astounding, when the Southern Hemisphere hurricane data is analyzed to create a global value, we see that global hurricane energy has sunk to 30-year lows, at the least. Since hurricane intensity and detection data are problematic as one goes back in time, when reporting and observing practices were different from today's, it is possible that we underestimated global hurricane energy during the 1970s. See notes at bottom to avoid terminology discombobulation.

Using a well-accepted metric called the Accumulated Cyclone Energy index, or ACE for short (Bell and Chelliah 2006), which has been used by Klotzbach (2006) and Emanuel (2005) (PDI is analogous to ACE), and most recently by myself in Maue (2009), simple analysis shows that 24-month running sums of global ACE or hurricane energy have plummeted to levels not seen in 30 years.

Why use 24-month running sums instead of simply yearly values? Because a primary driver of the Earth's climate from year to year, the El Nino Southern Oscillation (ENSO), acts on time scales on the order of 2-7 years, and because the bulk of the Southern Hemisphere hurricane season occurs from October - March, a reasonable interpretation of global hurricane activity requires a better metric than simple calendar-year totals. The 24-month running sum is analogous to the idea of "what have you done for me lately". During the past 6 months, extending back to October of 2008 when the Southern Hemisphere tropical season was gearing up, global ACE had crashed due to two consecutive years of well-below-average Northern Hemisphere hurricane activity. To avoid confusion, I am not specifically addressing the North Atlantic, which was above normal in 2008 (in terms of ACE), but the hemisphere (and/or globe) as a whole. The North Atlantic only represents 1/10 to 1/8 of global hurricane energy output on average but deservedly demands disproportionate media attention due to the devastating societal impacts of recent major hurricane landfalls.

Why the record low ACE? During the past 2+ years, the Earth's climate has cooled under the effects of a dramatic La Nina episode. The Pacific Ocean basin typically sees much weaker hurricanes that have shorter lifecycles and therefore less ACE. Conversely, due to well-researched upper-atmospheric flow (e.g. vertical shear) configurations favorable to Atlantic hurricane development and intensification, La Nina falls tend to favor very active seasons in the Atlantic (word of warning for 2009). This offsetting relationship, high in the Atlantic and low in the Pacific, is a topic of discussion in my GRL paper, which will be a separate topic in a future posting. Thus, Western North Pacific (typhoon) tropical activity was well below normal in 2007 and 2008 (see table). Same for the Eastern North Pacific.
The Southern Hemisphere, which includes the southern Indian Ocean from the coast of Mozambique across Madagascar to the coast of Australia, into the South Pacific and Coral Sea, saw below-normal activity as well in 2008. Through March 12, 2009, the Southern Hemisphere ACE is about half of what's expected in a normal year, with a multitude of very weak, short-lived hurricanes. All of these numbers tell a very simple story: just as there are active periods of hurricane activity around the globe, there are inactive periods, and we are currently experiencing one of the most impressive inactive periods, now for almost 3 years.

Under global warming scenarios, hurricane intensity is expected to increase (on the order of a few percent), but MANY questions remain as to how much, where, and when. This science is very far from settled. Indeed, Al Gore has dropped the related slide in his PowerPoint (btw, is he addicted to the Teleprompter as well?). Many papers have suggested that these changes are already occurring, especially in the strongest of hurricanes, e.g. this and that and here, due to warming sea-surface temperatures (the methodology and data issues with each of these papers have been discussed here at CA, and will be even more in the coming months). The notion that the overall global hurricane energy or ACE has collapsed does not contradict the above papers but provides an additional, perhaps less publicized piece of the puzzle. Indeed, the very strong interannual variability of global hurricane ACE (energy), highly correlated to ENSO, suggests that the role of tropical cyclones in climate is modulated very strongly by the big movers and shakers in large-scale, global climate. The perceptible (and perhaps measurable) impact of global warming on hurricanes in today's climate is arguably a pittance compared to the reorganization and modulation of hurricane formation locations and preferred tracks/intensification corridors dominated by ENSO (and other natural climate factors). Moreover, our understanding of the complicated role of hurricanes in climate is nebulous, to be charitable. We must increase our understanding of the current climate's hurricane activity.

During the summer and fall of 2007, as the Atlantic hurricane season failed to live up to the hyperbolic prognostications of the seasonal hurricane forecasters, I noticed that the rest of the Northern Hemisphere hurricane basins, which include the Western/Central/Eastern Pacific and Northern Indian Oceans, was on pace to produce the lowest Accumulated Cyclone Energy or ACE since 1977. ACE is the convolution or combination of a storm's intensity and longevity. Put simply, a long-lived, very powerful Category 3 hurricane may have more than 100 times the ACE of a weaker tropical storm that lasts for less than a day. Over a season or calendar year, all individual storm ACE is added up to produce the overall seasonal or yearly ACE. Detailed tables of previous monthly and yearly ACE are on my Florida State website.

Previous Basin Activity: Hurricane ACE

| BASIN | 2005 ACE | 2006 ACE | 2007 ACE | 2008 ACE | 1982-2008 AVERAGE |

* Southern Hemisphere peak TC activity occurs between October and April. Thus, 2008 values represent the period October 2007 - April 2008. The table does not include the Northern Indian Ocean, which can be deduced as the portion of the Northern Hemisphere total not included in the three major basins. Nevertheless, 2007 saw the lowest ACE since 1977. 2008 continued the dramatic downturn in hurricane energy or ACE.
The following stacked bar chart demonstrates the highly variable year-to-year behavior of Northern Hemisphere (NH) ACE. The smaller inset line graph plots the raw data and trend (or lack thereof). Thus, during the past 60 years, with the data at hand, Northern Hemisphere ACE undergoes significant interannual variability but exhibits no significant statistical trend.

So what to expect in 2009? Well, the last Northern Hemisphere storm was Typhoon Dolphin in middle December of 2008, and no ACE has been recorded so far. The Southern Hemisphere is below normal by just about any definition of storm activity (unless you have access to the Elias sports bureau statistic creativity department), and the season is quickly running out. With La Nina-like conditions in the Pacific, a persistence forecast of below-average global cyclone activity seems like a very good bet. Now if only the Dow Jones index didn't correlate so well with the Global ACE lately…

Hurricane is the term for tropical cyclone specific to the North Atlantic, Gulf of Mexico, Caribbean Sea, and the Pacific Ocean from Hawaii eastward to the Mexican coast. Other names around the world include Typhoon, Cyclone, and Willy-Willy (Oz), but hurricane is used here generically to avoid confusion.

Accumulated Cyclone Energy or ACE is easily calculated from best-track hurricane datasets: the one-minute maximum sustained wind is squared and summed over the tropical lifecycle of a tropical storm or hurricane.
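Given that definition, ACE is straightforward to compute from a best-track record. The sketch below follows the standard convention (6-hourly one-minute maximum sustained winds in knots, counted while the system is at or above tropical-storm strength, scaled by 10^-4); the wind list is made up for illustration.

```python
def ace(winds_kt, threshold_kt=35):
    """Accumulated Cyclone Energy from 6-hourly max sustained winds (knots).

    Sums the square of each 6-hourly one-minute maximum sustained wind
    while the system is at tropical-storm strength or above, scaled by
    1e-4 (units: 10^4 kt^2), per the standard convention.
    """
    return 1e-4 * sum(v ** 2 for v in winds_kt if v >= threshold_kt)

# Illustrative storm: spins up to hurricane strength, then decays.
storm = [25, 30, 35, 45, 60, 80, 95, 90, 70, 50, 35, 25]
print(f"ACE = {ace(storm):.2f} x 10^4 kt^2")   # 3.90

# Seasonal or yearly ACE is just the sum over all storms' individual ACE.
```

The example makes the intensity-longevity convolution concrete: doubling the wind quadruples each term, and every extra 6-hour period at strength adds another term.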
<urn:uuid:6aefc5be-c86d-4b69-9deb-35a718b379e0>
2.609375
1,729
Academic Writing
Science & Tech.
28.649356
2,249
Ocean acidification: Coming soon to an ocean near you

04 November 2010 | International news release

Manmade ocean acidification will have profound impacts on marine life, even without a further increase of CO₂ emissions. The latest evidence shows that sea water chemistry is already changing, and only rapid and deep reductions of fossil fuel use and deforestation can help restore the ocean's health, according to IUCN.

A new guide, Ocean Acidification: Questions Answered, states that ocean acidification is now happening ten times faster than the acidification that preceded the extinction of many marine species 55 million years ago. If the current rate of acidification continues, fragile ecosystems such as coral reefs, hosting a wealth of marine life, will be seriously damaged by 2050. The guide provides the latest science on the speed and scale of impact that CO₂ emissions will have on the ocean and on humanity.

"Climate change may be all over the headlines, but it has an evil twin, caused by the same invisible gas carbon dioxide, with more measurable, rapid and seemingly unstoppable effects," says Dan Laffoley, Marine Vice Chair of IUCN's World Commission on Protected Areas and lead editor of the guide. "By answering the main questions people have about ocean acidification, we intend to break through the ignorance and confusion that exist, so everyone is clearer on what is happening and why this is a matter of the highest global priority."

Ocean acidification, like climate change, is happening everywhere, but some parts of the world will be more rapidly and severely affected than others. The Arctic Ocean will be the quickest to become acidified and hostile to a wide range of ocean life, particularly creatures with shells, according to the report. The chemistry of one half of the Arctic Ocean will be changed by 2050 if CO₂ levels continue to rise at current rates.

"Society shouldn't have to wait any more for its ocean acidification wakeup call," says Carl Gustaf Lundin, Head of IUCN's Global Marine and Polar Programme. "An acidified ocean poses a real and major threat to our existence. Now is the time to act to minimise the impacts on our life support system while we still have time."

Compiled by the Ocean Acidification Reference User Group (RUG) and drawing on the expertise of over 30 of the world's leading marine scientists, the guide is being launched by Prince Albert II of Monaco at a meeting co-hosted by IUCN.

"As new scientific data are generated at an increasing pace due to the growing number of major research projects, it becomes even more critical that these findings are disseminated to end-users, including policymakers and the general public, and this is what we are doing today," says Jean-Pierre Gattuso, Scientific Coordinator of the European Project on Ocean Acidification.

• Dr Dan Laffoley, Marine Vice-Chair, IUCN's World Commission on Protected Areas
• Carl Gustaf Lundin, Head, IUCN Global Marine Programme, e email@example.com

For more information or to set up interviews in English, French and Spanish please contact:
• Borjana Pervan, IUCN Media Relations Officer, t +41 22 999 0115, +41 798574072, e firstname.lastname@example.org

Ocean Acidification: Questions Answered is a product of the Ocean Acidification Reference User Group, an initiative of the European Project on Ocean Acidification (EPOCA).
To read the guide in English, French, Spanish, Chinese and Arabic visit:

Notes to editors

IUCN, International Union for Conservation of Nature, helps the world find pragmatic solutions to our most pressing environment and development challenges. IUCN works on biodiversity, climate change, energy, human livelihoods and greening the world economy by supporting scientific research, managing field projects all over the world, and bringing governments, NGOs, the UN and companies together to develop policy, laws and best practice.

About the Ocean Acidification Reference User Group (RUG)

The RUG is a specially created neutral forum of end users and leading scientists involved in ocean acidification. It provides the framework to discuss and understand the latest evidence on ocean acidification, and to develop and devise new outreach mechanisms to bring this issue to wide attention using state-of-the-art scientific knowledge. The RUG supports ocean acidification research in Europe (EPOCA), Germany, the United Kingdom and the Mediterranean.

The EU FP7 large-scale integrating project EPOCA (European Project on OCean Acidification) was launched in May 2008 with the overall goal of filling the numerous gaps in our understanding of ocean acidification and its consequences. The EPOCA consortium brings together more than 100 researchers from 31 institutes and 10 European countries. The research of this four-year project is partly funded by the European Commission and coordinated by CNRS.

BIOACID (Biological Impacts of Ocean ACIDification) is a coordinated project involving over 60 PIs from 14 research institutes and SMEs throughout Germany. Launched in 2009, the project is funded by the German Federal Ministry of Education and Research (BMBF) for an initial three-year period. http://www.bioacid.de

About the UK Ocean Acidification Research Programme

The 5-year UK Ocean Acidification Research Programme is the UK's response to growing concerns over ocean acidification and is jointly funded by the Department for Environment, Food and Rural Affairs (Defra), the Natural Environment Research Council (NERC) and the Department of Energy and Climate Change (DECC). It brings together about 105 expert scientists from 22 institutions across the UK. www.oceanacidification.org.uk
<urn:uuid:0d82683a-9ba1-425d-b256-e4c720bd0436>
3.125
1,202
News (Org.)
Science & Tech.
25.77212
2,250
Challenges ahead

Data availability and accessibility

Improved understanding of the effect of climate change in high mountains on downstream communities and water resources will require significant efforts to develop databases and analytical capacities. Currently most projections of climate change are based on general circulation models (GCMs), which attempt to mathematically model interactions between oceans, atmosphere and large land areas, typically on the order of hundreds to thousands of square kilometres. The models present two significant challenges. Firstly, the models have a low resolution and have a very limited capability to provide detailed predictions for smaller areas. Large mountain regions like the Himalayas and Andes are extremely diverse and complex in terms of geography, ecology and socio-cultural conditions, and there is a great need for higher-resolution models with better predictive capacity at smaller scales. Secondly, the current models are in themselves uncertain, as our understanding of climate change is limited, i.e. we have limited knowledge of the key natural processes and of which choices we will make in the future with regards to energy consumption and greenhouse gas emission.

Extensive coordination and cooperation among mountain nations and institutions is required in order to fill data gaps and develop regional assessment models. In some cases relevant data sets exist on a national level, but are not accessible for regional analysis and cooperation due to strategic or other reasons. In any case, current databases and capacity are limited, and several factors play a part: the number of years of observation (often short term), quality and availability of data, distribution of observation networks, capacity to analyze and compute data, financial constraints, and lack of time to achieve required results. Currently the paucity of data in many areas, the lack of institutional capacity to analyze, correct and verify data, and the short duration of data records limit the validity of current models.

Modelling water flow is complicated

Mountain regions contribute a substantial proportion of the global river runoff (Viviroli et al., 2003; Viviroli and Weingartner, 2004), but modeling this runoff and its future variability in time and space due to climate change is highly complicated. The processes that determine the change from precipitation into runoff are many and complex. As mentioned earlier in this report, runoff from melting ice is often a relatively small component of the total runoff regime, but still highly important as a long-term, relatively stable supply of water.

As noted above, the broad picture indicates that in the coming decades many large glaciers will retreat and a high number of small ones will disappear completely. This could mean that the supply of water will be favorable to agriculture and livelihoods in the short term, with increasing amounts of water, but it could also contribute to excess water levels in some areas. In the long run, however, i.e. after a few decades when water levels may be drastically reduced due to the diminishment of high mountain glacier systems, impacts on downstream communities could become dramatic in some of the arid areas (ICIMOD, 2009a,b; UNEP, 2009; 2010a,b).
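To illustrate one reason the uncertainty is so large: even the simplest widely used melt model, the degree-day approach, hides all of the physics in a single tuning factor. Below is a toy sketch (all parameter and temperature values are hypothetical), assuming melt scales linearly with positive air temperature:

    def daily_melt_mm(mean_temp_c, degree_day_factor=6.0):
        """Toy degree-day model: daily melt (mm water equivalent)."""
        return max(mean_temp_c, 0.0) * degree_day_factor

    # One synthetic month of daily mean temperatures on a glacier tongue:
    temps = [-2.0, 0.5, 1.8, 3.2, 4.0, 2.5, -1.0] * 4
    month_melt_mm = sum(daily_melt_mm(t) for t in temps)

The degree-day factor itself varies several-fold between glaciers and seasons, which is one reason downstream runoff projections spread so widely even before precipitation changes are considered.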
However, climate change does not only indicate higher temperatures, but also changes in overall precipitation, evapotranspiration, and changes in the balance between rain and snow, which has great implications for runoff rates and storage of water. Intensity, amount and distribution of precipitation over time are all factors of importance for modeling runoff and impacts to ecosystems and human populations. Seasonality is another factor that will affect mountain regions around the world differently. Most mountain areas have seasonal patterns to annual runoff regimes. In areas with monsoons, runoff from glacial melting is particularly important in the shoulder seasons. There are already many signs of
<urn:uuid:e780b98d-0a02-4866-9deb-2984d6af6a6a>
2.90625
778
Knowledge Article
Science & Tech.
19.388009
2,251
States are the basic units of state machines. In UML 2.0, states can have substates. Execution of the diagram begins with the Initial node and finishes with a Final or Terminate node (or nodes). Please refer to the UML 2.0 Specification for more information about these elements.

State Machine diagrams describe the logical behavior of the system, a part of the system, or its usage protocol. On these diagrams you show the possible states of the objects and the transitions that cause a change in state. State Machine diagrams in UML 2.0 differ in many aspects from Statechart diagrams in UML 1.5.

Copyright(C) 2008 CodeGear(TM). All Rights Reserved.
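To ground the notation, here is a minimal, hypothetical sketch (not produced by the tool) of how the states and transitions of a simple diagram often map onto code:

    #include <iostream>

    // States: Initial -> Active -> Final, as they might appear on a diagram.
    enum State { Initial, Active, Final };
    enum Event { Start, Stop };

    // Each transition changes the state; other events leave it unchanged.
    State transition(State current, Event e)
    {
        switch (current)
        {
        case Initial: return (e == Start) ? Active : current;
        case Active:  return (e == Stop)  ? Final  : current;
        default:      return current;  // Final has no outgoing transitions
        }
    }

    int main()
    {
        State s = Initial;
        s = transition(s, Start);
        s = transition(s, Stop);
        std::cout << (s == Final ? "reached Final" : "still running") << std::endl;
        return 0;
    }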
<urn:uuid:ea692ff2-4976-4580-b298-dbf1d0cd0664>
3.046875
162
Documentation
Software Dev.
60.666143
2,252
New in version 2.1.

The inspect module provides several useful functions to help get information about live objects such as modules, classes, methods, functions, tracebacks, frame objects, and code objects. For example, it can help you examine the contents of a class, retrieve the source code of a method, extract and format the argument list for a function, or get all the information you need to display a detailed traceback.

There are four main kinds of services provided by this module: type checking, getting source code, inspecting classes and functions, and examining the interpreter stack.

The getmembers() function retrieves the members of an object such as a class or module. The sixteen functions whose names begin with "is" are mainly provided as convenient choices for the second argument to getmembers(). They also help you determine when you can expect to find the following special attributes:

| Type | Attribute | Description | Notes |
|---|---|---|---|
| module | __file__ | filename (missing for built-in modules) | |
| class | __module__ | name of module in which this class was defined | |
| method | __name__ | name with which this method was defined | |
| | im_class | class object that asked for this method | (1) |
| | im_func or __func__ | function object containing implementation of method | |
| | im_self or __self__ | instance to which this method is bound, or None | |
| function | __name__ | name with which this function was defined | |
| | func_code | code object containing compiled function bytecode | |
| | func_defaults | tuple of any default values for arguments | |
| | func_doc | (same as __doc__) | |
| | func_globals | global namespace in which this function was defined | |
| | func_name | (same as __name__) | |
| generator | __iter__ | defined to support iteration over container | |
| | close | raises new GeneratorExit exception inside the generator to terminate the iteration | |
| | gi_frame | frame object or possibly None once the generator has been exhausted | |
| | gi_running | set to 1 when generator is executing, 0 otherwise | |
| | next | return the next item from the container | |
| | send | resumes the generator and "sends" a value that becomes the result of the current yield-expression | |
| | throw | used to raise an exception inside the generator | |
| traceback | tb_frame | frame object at this level | |
| | tb_lasti | index of last attempted instruction in bytecode | |
| | tb_lineno | current line number in Python source code | |
| | tb_next | next inner traceback object (called by this level) | |
| frame | f_back | next outer frame object (this frame's caller) | |
| | f_builtins | built-in namespace seen by this frame | |
| | f_code | code object being executed in this frame | |
| | f_exc_traceback | traceback if raised in this frame, or None | |
| | f_exc_type | exception type if raised in this frame, or None | |
| | f_exc_value | exception value if raised in this frame, or None | |
| | f_globals | global namespace seen by this frame | |
| | f_lasti | index of last attempted instruction in bytecode | |
| | f_lineno | current line number in Python source code | |
| | f_locals | local namespace seen by this frame | |
| | f_restricted | 0 or 1 if frame is in restricted execution mode | |
| | f_trace | tracing function for this frame, or None | |
| code | co_argcount | number of arguments (not including * or ** args) | |
| | co_code | string of raw compiled bytecode | |
| | co_consts | tuple of constants used in the bytecode | |
| | co_filename | name of file in which this code object was created | |
| | co_firstlineno | number of first line in Python source code | |
| | co_flags | bitmap: 1=optimized, 2=newlocals, 4=*arg, 8=**arg | |
| | co_lnotab | encoded mapping of line numbers to bytecode indices | |
| | co_name | name with which this code object was defined | |
| | co_names | tuple of names of local variables | |
| | co_nlocals | number of local variables | |
| | co_stacksize | virtual machine stack space required | |
| | co_varnames | tuple of names of arguments and local variables | |
| builtin | __name__ | original name of this function or method | |
| | __self__ | instance to which a method is bound, or None | |

(1) Changed in version 2.2: im_class used to refer to the class that defined the method.

inspect.getmembers(object[, predicate])
Return all the members of an object in a list of (name, value) pairs sorted by name. If the optional predicate argument is supplied, only members for which the predicate returns a true value are included.

inspect.getmoduleinfo(path)
Return a tuple of values that describe how Python will interpret the file identified by path if it is a module, or None if it would not be identified as a module. The return tuple is (name, suffix, mode, mtype), where name is the name of the module without the name of any enclosing package, suffix is the trailing part of the file name (which may not be a dot-delimited extension), mode is the open() mode that would be used ('r' or 'rb'), and mtype is an integer giving the type of the module. mtype will have a value which can be compared to the constants defined in the imp module; see the documentation for that module for more information on module types. Changed in version 2.6: Returns a named tuple ModuleInfo(name, suffix, mode, module_type).

inspect.isgeneratorfunction(object)
Return true if the object is a Python generator function. New in version 2.6.

inspect.isgenerator(object)
Return true if the object is a generator. New in version 2.6.

inspect.isabstract(object)
Return true if the object is an abstract base class. New in version 2.6.

inspect.ismethoddescriptor(object)
Return true if the object is a method descriptor. This is new as of Python 2.2, and, for example, is true of int.__add__. An object passing this test has a __get__ attribute but not a __set__ attribute, but beyond that the set of attributes varies. __name__ is usually sensible, and __doc__ often is. Methods implemented via descriptors that also pass one of the other tests return false from the ismethoddescriptor() test, simply because the other tests promise more -- you can, e.g., count on having the im_func attribute (etc) when an object passes ismethod().

inspect.isdatadescriptor(object)
Return true if the object is a data descriptor. Data descriptors have both a __get__ and a __set__ attribute. Examples are properties (defined in Python), getsets, and members. The latter two are defined in C and there are more specific tests available for those types, which is robust across Python implementations. Typically, data descriptors will also have __name__ and __doc__ attributes (properties, getsets, and members have both of these attributes), but this is not guaranteed. New in version 2.3.

inspect.isgetsetdescriptor(object)
Return true if the object is a getset descriptor. getsets are attributes defined in extension modules via PyGetSetDef structures. For Python implementations without such types, this method will always return False. New in version 2.5.

inspect.ismemberdescriptor(object)
Return true if the object is a member descriptor. Member descriptors are attributes defined in extension modules via PyMemberDef structures. For Python implementations without such types, this method will always return False. New in version 2.5.

inspect.cleandoc(doc)
Clean up indentation from docstrings that are indented to line up with blocks of code. Any whitespace that can be uniformly removed from the second line onwards is removed. Also, all tabs are expanded to spaces. New in version 2.6.

inspect.getargspec(func)
Get the names and default values of a function's arguments. A tuple of four things is returned: (args, varargs, varkw, defaults). args is a list of the argument names (it may contain nested lists). varargs and varkw are the names of the * and ** arguments or None.
defaults is a tuple of default argument values or None if there are no default arguments; if this tuple has n elements, they correspond to the last n elements listed in args. Changed in version 2.6: Returns a named tuple ArgSpec(args, varargs, keywords, defaults).

inspect.getargvalues(frame)
Get information about arguments passed into a particular frame. A tuple of four things is returned: (args, varargs, varkw, locals). args is a list of the argument names (it may contain nested lists). varargs and varkw are the names of the * and ** arguments or None. locals is the locals dictionary of the given frame. Changed in version 2.6: Returns a named tuple ArgInfo(args, varargs, keywords, locals).

When the following functions return "frame records," each record is a tuple of six items: the frame object, the filename, the line number of the current line, the function name, a list of lines of context from the source code, and the index of the current line within that list.

Keeping references to frame objects, as found in the first element of the frame records these functions return, can cause your program to create reference cycles. Once a reference cycle has been created, the lifespan of all objects which can be accessed from the objects which form the cycle can become much longer even if Python's optional cycle detector is enabled. If such cycles must be created, it is important to ensure they are explicitly broken to avoid the delayed destruction of objects and increased memory consumption which occurs. Though the cycle detector will catch these, destruction of the frames (and local variables) can be made deterministic by removing the cycle in a finally clause. This is also important if the cycle detector was disabled when Python was compiled or when using gc.disable(). For example:

    import inspect

    def handle_stackframe_without_leak():
        frame = inspect.currentframe()
        try:
            # do something with the frame
            pass
        finally:
            del frame

The optional context argument supported by most of these functions specifies the number of lines of context to return, which are centered around the current line.

inspect.getframeinfo(frame[, context])
Get information about a frame or traceback object. A 5-tuple is returned, the last five elements of the frame's frame record. Changed in version 2.6: Returns a named tuple Traceback(filename, lineno, function, code_context, index).
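A short usage sketch (hypothetical function, Python 2 print syntax) tying getargspec() and getmembers() together:

    import inspect
    import os.path

    def greet(name, greeting='hello'):
        return '%s, %s!' % (greeting, name)

    spec = inspect.getargspec(greet)
    print spec.args        # ['name', 'greeting']
    print spec.defaults    # ('hello',)

    # getmembers() with an "is" predicate lists, e.g., a module's functions:
    funcs = inspect.getmembers(os.path, inspect.isfunction)
    print [name for name, fn in funcs][:5]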
<urn:uuid:1243e50d-ecd7-4ea6-b13f-efb27d1010a4>
2.734375
2,249
Documentation
Software Dev.
41.128717
2,253
A lightning storm started this cluster of wildfires in the Cascade Mountains of Oregon on August 24, 2011. Burning through forest, grass, and brush, the fires had covered 90,436 acres as of September 1. They are 25 percent contained.

The Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Aqua satellite acquired these images on September 1. The fires, collectively called the High Cascades Complex, are outlined in red. The top image shows the region in natural color, similar to what the human eye sees. In this image, smoke hangs over the Cascade Mountains. Newly burned land is dark brown, similar in tone to the natural land cover in the desert east of the dark green mountains.

The lower image combines infrared and visible light in a false-color image that reveals the extent of the recently burned area. Here, freshly burned land is red, while plant-covered land is green, and bare or sparsely vegetated land is tan. In this scene, the Razorback and Hancock Complex fires are more obvious. The Razorback Fire is the largest fire in the High Cascades Complex, having burned 51,943 acres as of September 1. The Hancock Complex burned 57,597 acres of grass and is entirely contained. All of the fires started in the same lightning storm on August 24.

- InciWeb. (2011, September 2). Hancock Complex. Accessed September 2, 2011.
- InciWeb. (2011, September 1). High Cascades Complex. Accessed September 2, 2011.
- Aqua - MODIS
<urn:uuid:f1a7838b-bc2f-4c4b-8683-60e5ebe8cfb3>
3.171875
319
Knowledge Article
Science & Tech.
56.705348
2,254
Two Types of Photovoltaic Solar Cells

Photovoltaics, which directly convert sunlight into electricity, include both traditional, polysilicon-based solar cell technologies and new thin-film technologies. Thin-film manufacturing involves depositing extremely thin layers of photosensitive materials on glass, metal, or plastics. While the most common material currently used is amorphous silicon, the newest technologies use non-silicon-based materials such as cadmium telluride. A key force driving the advancement of thin-film technologies is a polysilicon shortage that began in April 2004. In 2006, for the first time, more than half of polysilicon production went into PVs instead of computer chips. While thin films are not as efficient at converting sunlight to electricity, they currently cost less, and their physical flexibility makes them more versatile than traditional solar cells.

China Poised to Become Leading Producer of Solar Cells in 2008

The top five PV-producing countries are Japan, China, Germany, Taiwan, and the United States. Recent growth in China is most astonishing: after almost tripling its PV production in 2006, it is believed to have more than doubled output in 2007. Having eclipsed Germany in 2007 to take the number two spot, China is now on track to become the number one PV producer in 2008. (See additional data from the Earth Policy Institute.) Strong domestic production is not always a good indicator of domestic installations, however. For example, despite China's impressive production, PV prices are still too high for the average Chinese consumer. But large PV projects are expected to increase domestic installations.

Residential Use of Solar Cells Increasing Worldwide

Japan, the United States, and Spain round out the top four markets with 350, 141, and 70 megawatts installed in 2006, respectively. Thanks to a residential PV incentive program, Japan now has over 250,000 homes with PV systems. But the country is currently experiencing a decrease in the growth rate of PV installations resulting from the phase-out of the incentive program in 2005 and a limited domestic PV supply due to the polysilicon shortage. In contrast, the growth in installations in the United States increased from 20 percent in 2005 to 31 percent in 2006, primarily driven by California and New Jersey. Initial estimates for the United States as a whole indicate that PV incentives, including a tax credit of up to $2,000 available under the U.S. Energy Policy Act of 2005 to offset PV system costs, helped to achieve an incredible 83 percent growth in installations in 2007.

Public Policies Drive Nonresidential Use of Solar Cells

Spain tripled its PV installations in 2006 to 70 megawatts. A building code that went into force in March 2007 requires all new nonresidential buildings to generate a portion of their electricity with PV. In September 2007, a 20-megawatt PV power plant, currently the largest in the world, came online in the Spanish town of Beneixama and is producing enough electricity to supply 12,000 homes.

Falling Prices are Making Solar Power Competitive with Coal

The average price for a PV module, excluding installation and other system costs, has dropped from almost $100 per watt in 1975 to less than $4 per watt at the end of 2006. With expanding polysilicon supplies, average PV prices are projected to drop to $2 per watt in 2010. For thin-film PV alone, production costs are expected to reach $1 per watt in 2010, at which point solar PV will become competitive with coal-fired electricity.
<urn:uuid:d1dbd0cf-3b53-4aaf-ac59-a0fd0e5e3b4f>
3.46875
715
Knowledge Article
Science & Tech.
38.196721
2,255
    int *myPtr = new int;

The right-hand side of the above line dynamically allocates a single integer on the heap and returns a pointer of type int*. The left-hand side of the above line implicitly copy constructs a pointer on the stack using the pointer returned from the new expression. It is the equivalent of writing:

    int *myPtr(new int);

The result is that myPtr now points to a dynamically allocated piece of memory. Incidentally, this needs to be released using delete when you have finished with it, else you will have a memory leak.

    delete myPtr;

The above line deletes the allocation that myPtr points to; any attempt to access the address myPtr currently still points to will yield undefined behaviour. Now you might see the problem with the code you posted...

Originally Posted by g.eckert
I previously tried something similar to this and the app kept crashing

    cin >> *( myPtr ); // Store input into the value myPtr points to

The problem is that you have constructed an integer pointer myPtr but the pointer has not been initialised or assigned to point to a valid piece of memory. In effect, the pointer could be pointing anywhere. What you need to do is point it to a piece of memory that holds a valid integer. You can do this dynamically, or you can allocate a stack integer variable as follows.

    int var = 0;
    int *myPtr = &var; // now myPtr points somewhere valid.
                       // You can use myPtr from now on to access var

This aside, why on earth do you want to purposely use pointers to access everything? If you cannot access a single object directly, then the next best thing is accessing it by reference, but if that is not possible (through design constraints), then only as a last resort should you access by pointer. They should NOT be used when there is no need.
Last edited by PredicateNormative; April 17th, 2009 at 06:13 AM.

Hey, thanks for clearing that up. Explained very well. This program is for a CPS class and should demonstrate our knowledge of pointers. The directions say, 'write a function to sort an array using a bubble sort algorithm and a program to test it. Use pointer notation for the program'. I assumed this meant use pointers for everything. Now thinking about it more, it might mean use pointer notation for the sorting function. I don't know, I'll have to check about that, but I agree with you it doesn't make sense to use pointers for size variables and such; I thought I needed to because of the directions.

I think your original interpretation is correct; it looks like you are meant to use pointer notation for as much as possible. Although this isn't sensible for real-world programming, I can understand it as an exercise as long as your lecturer goes on to point out that pointers should only be used when you have to use them (traversing an array is an example).

Nope, I think "in-memory" sorting is a good example for the use of pointers. You don't move the objects in an array (of whatever sort), you just move pointers to the objects. I.e., you have an array of pointers to objects, sort it, and the pointers just point to different objects than before.
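For the assignment itself, here is a minimal sketch of what "pointer notation" usually means for a bubble sort (one possible reading of the directions, not an official solution):

    #include <iostream>

    // Bubble sort that walks the array with pointers instead of indices.
    void bubbleSort(int *arr, int size)
    {
        for (int *end = arr + size - 1; end > arr; --end)
        {
            for (int *p = arr; p < end; ++p)
            {
                if (*p > *(p + 1))      // compare neighbours via dereference
                {
                    int tmp = *p;       // swap the pointed-to values
                    *p = *(p + 1);
                    *(p + 1) = tmp;
                }
            }
        }
    }

    int main()
    {
        int data[] = { 5, 2, 9, 1, 7 };
        const int n = sizeof(data) / sizeof(data[0]);

        bubbleSort(data, n);

        for (int *p = data; p < data + n; ++p)
            std::cout << *p << ' ';
        std::cout << std::endl;
        return 0;
    }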
<urn:uuid:91c813a8-5ade-4a3c-93a7-7f630c62b67a>
2.921875
665
Comment Section
Software Dev.
58.503189
2,256
We've discovered evidence of just about every gas imaginable out in space, but one we'd never seen was molecular oxygen, the stuff we breathe every day. Now, thanks to powerful infrared telescopes, we've found the very first traces of space oxygen.

While individual oxygen atoms are found throughout space, that's not what we breathe. Instead, we inhale O2, which is a molecule composed of two oxygen atoms bonded together. This particular gas is very common on Earth - it accounts for about 20% of the air around us - but we had never found molecular oxygen in space... until now.

The key to the gas's discovery was the Herschel Space Observatory, which is tasked with exploring the infrared wavelengths of the universe. This allows it to see the coldest and dustiest parts of the cosmos. In the case of molecular oxygen, that's a very good thing, as researchers believe the gas was locked up in the frozen water ice that surrounds tiny dust grains. The telescope then detected the oxygen gas in the vicinity of the Orion Nebula, where it's thought that starlight was able to first warm up the ice and then break it into its constituent atoms. The now-free oxygen atoms were then combined into molecular gas.

NASA researchers still believe that this molecular oxygen is abundant in the universe, even if the evidence for it remains strangely lacking. Now that we have some idea where to look, though, it may be easier to find more. Herschel project scientist Paul Goldsmith explains:

"Oxygen gas was discovered in the 1770s, but it's taken us more than 230 years to finally say with certainty that this very simple molecule exists in space. This explains where some of the oxygen might be hiding," said Goldsmith. "But we didn't find large amounts of it, and still don't understand what is so special about the spots where we find it. The universe still holds many secrets."
<urn:uuid:3e84f654-063a-4709-b68c-938435cdf8b7>
3.9375
391
News Article
Science & Tech.
51.134505
2,257
Pub. date: 2011 | Online Pub. Date: May 04, 2010 | DOI: 10.4135/9781412973816 | Print ISBN: 9781412996822 | Online ISBN: 9781412973816 | Publisher: SAGE Publications, Inc.

Anthony R. S. Chiaviello

Bluebelts are undeveloped areas retained by cities to provide stormwater catchment and wetland-based wastewater management to supplement or replace conventional constructed sewage and wastewater treatment systems. With the widespread growth of suburban populations, many standard storm- and wastewater management systems require updating and expansion. Bluebelts are another, environmentally friendly way to sanitize water and ensure proper drainage.

In many areas, suburban and rural expansion has outpaced the municipality's ability to create infrastructure to handle wastewater and surface runoff drainage. In some cases, the installation of conventional water treatment piping would destroy protected wetlands. Compared with conventional drainage systems, bluebelts are known for being ecologically sound and extremely cost-effective. In addition, because bluebelts are a positive alternative to building newly constructed, chemical-based systems, many towns and cities are turning to creating them. Bluebelt systems can update water management systems without completely replacing or expanding existing structures. They use an area's natural ...
<urn:uuid:756bd5b1-805d-49db-8a51-6981e18613d8>
3.5625
268
Truncated
Science & Tech.
17.432932
2,258
Manual Section... (3) - page: getwd

NAME
getcwd, getwd, get_current_dir_name - Get current working directory

SYNOPSIS
#include <unistd.h>

char *getcwd(char *buf, size_t size);
char *getwd(char *buf);
char *get_current_dir_name(void);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

DESCRIPTION
These functions return a null-terminated string containing an absolute pathname that is the current working directory of the calling process. The pathname is returned as the function result and via the argument buf, if present.

The getcwd() function copies an absolute pathname of the current working directory to the array pointed to by buf, which is of length size. If the length of the absolute pathname of the current working directory, including the terminating null byte, exceeds size bytes, NULL is returned, and errno is set to ERANGE; an application should check for this error, and allocate a larger buffer if necessary.

As an extension to the POSIX.1-2001 standard, Linux (libc4, libc5, glibc) getcwd() allocates the buffer dynamically using malloc(3) if buf is NULL. In this case, the allocated buffer has the length size unless size is zero, when buf is allocated as big as necessary. The caller should free(3) the returned buffer.

get_current_dir_name() will malloc(3) an array big enough to hold the absolute pathname of the current working directory. If the environment variable PWD is set, and its value is correct, then that value will be returned. The caller should free(3) the returned buffer.

getwd() does not malloc(3) any memory. The buf argument should be a pointer to an array at least PATH_MAX bytes long. If the length of the absolute pathname of the current working directory, including the terminating null byte, exceeds PATH_MAX bytes, NULL is returned, and errno is set to ENAMETOOLONG. (Note that on some systems, PATH_MAX may not be a compile-time constant; furthermore, its value may depend on the file system, see pathconf(3).) For portability and security reasons, use of getwd() is deprecated.

RETURN VALUE
On success, these functions return a pointer to a string containing the pathname of the current working directory. In the case of getcwd() and getwd() this is the same value as buf. On failure, these functions return NULL, and errno is set to indicate the error.

ERRORS
- EACCES: Permission to read or search a component of the filename was denied.
- EFAULT: buf points to a bad address.
- EINVAL: The size argument is zero and buf is not a null pointer.
- EINVAL: getwd(): buf is NULL.
- ENAMETOOLONG: getwd(): The size of the null-terminated absolute pathname string exceeds PATH_MAX bytes.
- ENOENT: The current working directory has been unlinked.
- ERANGE: The size argument is less than the length of the absolute pathname of the working directory, including the terminating null byte. You need to allocate a bigger array and try again.

CONFORMING TO
getcwd() conforms to POSIX.1-2001. Note however that POSIX.1-2001 leaves the behavior of getcwd() unspecified if buf is NULL. getwd() is present in POSIX.1-2001, but marked LEGACY. POSIX.1-2008 removes the specification of getwd(). Use getcwd() instead. POSIX.1-2001 does not define any errors for getwd().

NOTES
Under Linux, the function getcwd() is a system call (since 2.1.92). On older systems it would query /proc/self/cwd. If both the system call and the proc file system are missing, a generic implementation is called. Only in that case can these calls fail under Linux with EACCES.

These functions are often used to save the location of the current working directory for the purpose of returning to it later.
Opening the current directory (".") and calling fchdir(2) to return is usually a faster and more reliable alternative when sufficiently many file descriptors are available, especially on platforms other than Linux.

SEE ALSO
chdir(2), fchdir(2), open(2), unlink(2), free(3), malloc(3)

COLOPHON
This page is part of release 3.24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at http://www.kernel.org/doc/man-pages/.
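A minimal usage sketch (fixed-size buffer; code relying on the glibc extension could instead pass NULL and free the result):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];

        /* getcwd() fills buf with the absolute path of the working directory */
        if (getcwd(buf, sizeof(buf)) == NULL) {
            perror("getcwd");   /* e.g. ERANGE if buf were too small */
            exit(EXIT_FAILURE);
        }
        printf("current directory: %s\n", buf);
        return 0;
    }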
<urn:uuid:6871ddd3-5be6-4053-9363-cc29ce01215b>
2.953125
1,013
Documentation
Software Dev.
55.413855
2,259
Tricks for Solving Math Problems

Date: 9/27/95 at 23:50:19
From: Anonymous
Subject: Mathematical Tricks

I recently learned a trick for multiplying 2-digit numbers by 11. Add the 2 digits of the number and insert the sum between the original 2 digits. If the sum is 10 or greater, add the digit in the tens place to the original first digit.

Ex. 23x11 -> 2+3=5 => 253
45x11 -> 4+5=9 => 495
87x11 -> 8+7=15 -> 8+1=9 => 957

The theory shows up quickly when you calculate it out in longhand. My question is, do you know of any other tricks like this one? I have seen TV ads for audio/video tapes promising to teach you tricks for multiplication, division & exponents.

Date: 10/10/95 at 21:41:58
From: Doctor Ethan
Subject: Re: Mathematical Tricks

Hey, The most amazing book I have ever read regarding this is Mathemagics by Benjamin and Shermer. It is great. Also, you can get on a mailing list that will send you a new math trick every week: BEATCALC@aol.com. You'll find an archive of its tips and other math tips and tricks at http://mathforum.org/k12/mathtips/ Hope this helps.

-Doctor Ethan, The Geometry Forum

Ask Dr. MathTM © 1994-2013 The Math Forum
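If you like checking a trick by brute force, here is a quick sketch (my own, not from the tapes) that verifies the rule for every two-digit number:

    def times_eleven(n):
        """Multiply a two-digit number by 11 using the digit trick."""
        tens, ones = divmod(n, 10)
        middle = tens + ones
        if middle >= 10:          # carry into the leading digit
            tens += 1
            middle -= 10
        return tens * 100 + middle * 10 + ones

    assert all(times_eleven(n) == 11 * n for n in range(10, 100))

The carry step is exactly why 87x11 turns the leading 8 into a 9.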
<urn:uuid:0a40342e-7faa-46a9-be83-6017af08c9d1>
3.03125
323
Comment Section
Science & Tech.
85.578
2,260
A Parabola Proof

Trig/Calc, difficulty level 3. Prove that the area of a parabolic segment is 2/3 the product of its width and height.

Please Note: Use of the following materials requires membership. Please see the Problem of the Week membership page for more information.

© 1994-2012 Drexel University. All rights reserved. The Math Forum is a research and educational enterprise of the Drexel University School of Education.
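For orientation, one standard calculus route (a sketch under the usual reading of the problem, modeling the segment as $y = h\,(1 - 4x^2/w^2)$ for $-w/2 \le x \le w/2$, and not the Forum's posted solution):

$$A=\int_{-w/2}^{w/2} h\left(1-\frac{4x^{2}}{w^{2}}\right)dx
 =h\left[x-\frac{4x^{3}}{3w^{2}}\right]_{-w/2}^{w/2}
 =h\left(\frac{w}{3}+\frac{w}{3}\right)=\frac{2}{3}\,wh,$$

which is the claimed two-thirds of the bounding $w \times h$ rectangle; Archimedes' quadrature of the parabola reaches the same result without calculus.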
<urn:uuid:cad73f63-5df0-4e53-98ea-ac71c85e8ea4>
2.703125
133
Customer Support
Science & Tech.
40.188902
2,261
A football place kicker is about to attempt a field goal from the 30-yard line. He is lined up directly in front of the left goal post; but since there is a wind coming from the right side of the field, the kicker aims for the right goal post. This means that, from above, the ball will be kicked at a 79º angle to the 30-yard line. A.) The player kicks the ball at this angle with enough force to make the ball go 47 miles per hour, and the wind is blowing at 10 miles per hour. Use a vector sum of these two forces to show that, under these circumstances, the kicker will miss the field goal. B.) To make the field goal, the resultant force of the kick and the wind actually should have a magnitude of 50 miles per hour and a direction of 85º. To counteract the 10-mph wind, with what speed--and at what direction--should the football player kick the ball? I am needing help figuring out the answers to this question. I would appreciate if someone could give great detail in explaining how to do this step by step to finding answers so I will know how to do it.
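Not a full write-up, but here is a numeric sketch of the vector bookkeeping (it assumes angles are measured from the 30-yard line and that the wind blows straight along that line toward the left goal post -- check those conventions against your course notes):

    import math

    deg = math.radians

    # Part A: kick of 47 mph at 79 degrees, wind of 10 mph across the field
    kick = (47 * math.cos(deg(79)), 47 * math.sin(deg(79)))
    wind = (-10.0, 0.0)
    resultant = (kick[0] + wind[0], kick[1] + wind[1])
    print(math.degrees(math.atan2(resultant[1], resultant[0])))  # ~91.3 deg
    # The resultant points past 90 degrees, i.e. outside the left upright.

    # Part B: required resultant is 50 mph at 85 degrees; subtract the wind
    need = (50 * math.cos(deg(85)), 50 * math.sin(deg(85)))
    kick_b = (need[0] - wind[0], need[1] - wind[1])
    print(math.hypot(kick_b[0], kick_b[1]))                      # ~51.8 mph
    print(math.degrees(math.atan2(kick_b[1], kick_b[0])))        # ~73.9 deg

The idea in both parts is the same: add (or subtract) the wind vector component by component, then convert the result back to a magnitude and direction.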
<urn:uuid:a528b95a-939e-4632-b6d8-e66086c20c3f>
3.296875
241
Q&A Forum
Science & Tech.
78.708583
2,262
One of the most basic questions in mathematics is finding solutions to equations. In this post I want to give a short overview of the ways to solve some of the common forms of equations, and I also want to discuss the history of how these solutions were found. This is only the first post about this subject, so it is mostly introductory. For the simplest equations, the solutions have been known for a long time, so I will only mention since what period the solutions were known, and will not discuss the ways they were developed.

The most simple equation is a linear equation, of the form:

$$ax + b = 0$$

The linear equations we know how to solve. So, what other types of equations do we have? The next in line is the equation of the form:

$$x^2 = b$$

If we take $b = -1$ we have an even harder problem. The equation becomes then $x^2 = -1$. Until Gauss in the 18th century, such equations were considered unsolvable.

There are other, slightly more difficult to solve, second degree polynomials. The most general form is:

$$ax^2 + bx + c = 0$$

We can divide the whole equation by $a$ and solve, getting the equation:

$$x = \frac{-b \pm \sqrt{b^2 - 4c}}{2}$$

It looks very similar to the equation we all learned in school, but an $a$ is missing. However, our $b$ is in fact the original $b$ divided by $a$, and so is $c$. If we write $b/a$ and $c/a$ and move them around a bit we will get:

$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$

Any second degree polynomial can be solved using this formula. In ancient Babylon, such problems were solved using this formula, but the solution process went without any symbols. It was done completely verbally. The Greeks also knew how to solve such problems, but they used geometry. I will write about how they did it, and the reasons for using geometry for such tasks, in another post.

Now we know how to find the roots of a second degree polynomial. But what about the third degree? Solutions for simple cases were known to the Babylonians. But they didn't know a formula for a general third degree polynomial. The same is true about the Greeks, and the Muslims. I once heard that when Archimedes was killed, his last words were a curse on those who would try to find the general solution for the third degree polynomial: "Cubics you shall not solve". I don't know if this is a true story. Probably it is just a legend.

However, it took a long time to find the solution to this problem. The solution was finally published by Cardano, an Italian mathematician, in the 16th century. He was the first one to publish it, but he himself got the solution from another mathematician - Tartaglia. In the 16th century the competition between mathematicians was very strong, so Tartaglia, who was the first to find the solution, didn't want to publish it, but preferred to keep it to himself. When Cardano discovered that Tartaglia knew the solution, he put a lot of pressure on him to make him tell it. Finally Tartaglia agreed, but asked Cardano to swear an oath that he would not publish the solution before he did. A few years passed, and Cardano found out that the solution Tartaglia told him had been found earlier by another mathematician - Scipione del Ferro - and had gone unnoticed (the communication wasn't very good then). Upon discovering this, Cardano published the solution. The solution is anything but simple. Since it is significantly longer than for the second degree, I will not write it in this post.

The next step is obviously a fourth degree polynomial. This one was also solved in the 16th century, by Ferrari, a student of Cardano. Again, the solution is too long to be written in this post. If I have time, I will write another post in which I will fully solve both of these questions.
There are two important facts about both of these solutions - they both are solutions by radicals, and they both effectively turn the problem into finding the roots of a polynomial of lower degree: second (for the cubic) and third (for the fourth degree polynomial).

And finally we get to the fifth degree - the quintic. The quintic is a polynomial of the form:

$$ax^5 + bx^4 + cx^3 + dx^2 + ex + f = 0$$

After seeing the solutions of the previous problems, everyone was sure that this problem would be solved as well. But no solution was found for a long period, until Abel came onto the scene. He had an interesting idea - he started to question the generally accepted thought that there was a solution. He wrote a remarkably short (only six pages) proof that showed that it isn't possible to find a general solution by radicals for any degree larger than 4. This proof wasn't well received. He sent it to Gauss, but didn't get an answer. When he tried to get an official response from the Academy, the response was that "they don't find it useful to look into his proof". Just before he died he received a letter from Cauchy, in which Cauchy wrote that he believed his proof was right. It was found out later that while there is indeed no general formula for a degree larger than 4, there was a mistake in his proof. He skipped one step, because he thought it was obvious - but it wasn't so. Anyway, it is now a generally accepted fact that there is no general formula.

There is also another interesting result that this discovery brought. You probably recognize the formula for the second degree polynomial, but I doubt you know the formula for the third degree or for the fourth. They are no longer studied and are of no practical importance. It is possible to get a B.S. in math and not know these formulas. It turned out that it is more practical to be able to solve specific examples than to solve the general case. And for a specific problem we can use a computer that knows the formula. However, we still need to be able to solve quintics, as well as other polynomials. In the next post I will describe some tricks that are used for this purpose, and the general methods for finding solutions.
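As a preview of those general methods: when no formula by radicals exists, numerical iteration takes over. A minimal sketch of Newton's method (my example, using a quintic with no radical solution):

    def newton(f, df, x0, tol=1e-12, max_iter=100):
        """Find a root of f near x0 by Newton's method."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / df(x)
            x -= step
            if abs(step) < tol:
                return x
        raise RuntimeError("no convergence")

    f = lambda x: x**5 - x - 1
    df = lambda x: 5 * x**4 - 1
    print(newton(f, df, 1.0))   # ~1.1673, the real root of x^5 - x - 1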
<urn:uuid:56fb8810-c673-46a7-95ce-8944ae2686d6>
3.328125
1,248
Personal Blog
Science & Tech.
58.763916
2,263
Giant squid – also known by their scientific name Architeuthis – have been the stuff of both legend and science for hundreds of years. Stories of great tentacled Kraken in Scandinavia and in the scientific writings of Pliny the Elder are some of the earliest indications that such monsters were thought to exist. Giant squid have also left evidence of their existence tangled up in fishing nets and washed ashore all over the world. Despite their massive size (adults can grow up to about 40 feet from tip to tentacle and weigh up to 610 pounds), searching for them has been a needle-in-a-haystack endeavor.

The first video footage of a giant squid in its natural habitat will air this Sunday on the Discovery Channel in Monster Squid: The Giant is Real. The show is the culmination of years of searching, and a successful six-week expedition 550 miles south of Tokyo in June of 2012.

"It's not that they've been evading us," explains Craig McClain, Assistant Director of Science at the National Evolutionary Synthesis Center and founder of Deep Sea News. "It's more that our daily activities don't overlap with their daily activities." Only around five percent of the oceans on Earth have been explored, and everything about the behavior of the elusive giant squid has been inferred from chance sightings at sea and dissecting beached carcasses. Unlike its relative the Humboldt squid, which hunts in schools, the giant squid is thought to be a solitary creature. Also, until about twenty years ago, the best submersibles were made of opaque metals, and no camera could withstand the pressure and cold of the deepest waters.

Discovery and Japanese television company NHK began plotting the ultimate giant squid mission in 2006, but after watching repeated and fruitless attempts by National Geographic and others, there was hesitation about investing resources. But premier squid scientists Tsunemi Kubodera of Japan's National Museum of Nature and Science, American oceanographer Edie Widder, and Kiwi renegade marine biologist Steve O'Shea convinced producers that they could find the squid. But they still needed the equipment.

Enter Ray Dalio, billionaire manager of the world's largest hedge fund, who just happened to own a fully equipped research vessel. Dalio made his 56-meter motor yacht, the Alucia, available for NHK and Discovery to charter for the expedition, along with three submersible vessels, one of which is "the sexiest, most contemporary deep submersible that money can buy," according to the manufacturers. "He really just wanted to have an opportunity to go under water, just he and his family," said Bruce Jones, CEO of Triton Subs, who manufactured the craft used to capture the footage. "Then he decided that since he had these assets, he might as well use them for some scientific progress."

The team knew this was likely to be the last opportunity they had to search for the mythic creature, and on June 22nd, 2012, they boarded the Alucia and set off for six weeks afloat in the vast, blue sea. Kubodera had captured still pictures of giant squid near the Ogasawara Islands, so the team used that as a starting point, setting sail from Sagami Bay. Patrick Lahey, President of Triton Subs, joined the team on the Alucia to train the pilots and crew members in operating the submersibles. Three people would be in a submersible on every dive: a scientist, a photographer, and a pilot. During the six weeks they kept an around-the-clock schedule of missions.
Each of the 55 jaunts below the surface lasted eight to 10 hours, and they took full advantage of the sub's capabilities, often reaching its max depth of 1000 meters. “You are down there and you are absolutely lost in time and space,” O’Shea said. Lahey says, “we all have to be a little bit crazy to do this,” because these expeditions are often emotionally and physically draining on crew members. Even the world's foremost giant squid researchers know virtually nothing about the way the giant squid behaves in its natural habitat, so they were forced to guess at how to lure it in front of a camera. Each of the researchers took a different approach, with success hinging on one main unknown: do giant squid prefer the lights on or off? Widder, who has a PhD in Neurobiology and specializes in bioluminescence, sunk to the depths in the dark, extending a glass orb with flashing LEDs as bait. Her goal was to mimic the light display of a deep-sea jellyfish called atolla, which release a glowing chemical while being attacked. She'd observed that smaller squid were attracted to this jellyfish, but had never found any evidence that squid eat them. She concluded that squid were using the jellyfish as a “bioluminescent burglar alarm,” eating whatever was eating the jellyfish. "You've got this small thing lighting up because this medium sized thing is munching on it, and the goal of the small thing is to get away from what's eating it," she explained. Widder didn't capture any video footage of Architeuthis while in the submersible, but she did capture five different recordings of giant squids by dangling a “Medusa” — her bioluminescent lures and a camera system — from a buoy on the surface. O'Shea took a drastically different approach. He armed himself with a mixture of chemicals extracted from the mantles, arms and gonads of fully mature male and female giant squids, which he predicted would act as a pheromone to attract adults, and descended into the abyss "lights blazing, singing Neil Diamond, making as much noise as possible, squirting all sorts of chemicals into the water.” Why, if a major hypothesis of his respected colleague was that the giant squids have an aversion to white light? “Because I firmly believe that these squid don't give a damn about light or sound." When it comes to speculation on the mental prowess of the giant squid, O’Shea is dismissive. “I think it's one of the most stupid animals in the ocean. The only thing going through that 20 gram brain is eating and breeding.” In his dives, O’Shea had lots of creatures attack the bait, and even the sub — on a 500 meter dive they once felt a thump from below and found themselves shrouded in a massive ink cloud. O’Shea saw more squid than anyone else during his dives with the lights on, but none were of the giant variety. In the end, the successful approach was Kubodera’s, who descended like a deep-sea ninja, as quietly and invisibly as possible. Like Widder, he made use of the infrared lighting system and turned off everything electronic in the sub, including the temperature control system. He thought giant squids may be especially sensitive to sound vibrations. He sat staring out into the black abyss for eight hours at a time, cameras aimed at a diamond squid as bait. And finally, on one lucky occasion, Architeuthis approached. What followed was an inter-species staring competition. The squid explored the bait suspended in front of the sub, “sitting there for the most of 18 minutes looking beautiful,” as O’Shea put it. 
O'Shea and Kubodera have held opposing hypotheses about the giant squid's hunting behavior for as long as they've known each other, and were hoping that finally seeing it in motion would settle the bet once and for all. Kubodera thought the animal would be an aggressive hunter darting around and quickly projecting its tentacles out to pull prey into its mouth. "I always thought that it was a dopey, giant thing that was floating at a 45 degree angle through the water column, dangling the two long tentacles down," O'Shea said. When they watched the video footage, each declared their own hypothesis confirmed.

O'Shea says he shed a single tear when he saw the giant squid on video for the first time. "All I felt was overjoyed. It had now been done. We can now relax. We can now move on."

Back on land, producers at Discovery are ecstatic for the giant squid's 15 minutes, and grateful their gamble paid off. "Had we not succeeded I'm not sure anybody would have tried again," said Christina Weber, VP for Production and Development for Specials at Discovery.

To celebrate the discovery, Dalio flew famous biologist and atheist Richard Dawkins out to meet the research team on the yacht, which Dawkins later blogged about. While the scientists no doubt delighted in breaking new ground, their true bounty was the potential for the giant squid to act as an emblem of the deep, as a symbol with the power to convince the average television watcher of the necessity of preserving the earth's oceans.

The question remains as to whether viewers will see the squid as majestic and beautiful, or as a monster like the show's title asserts. "We are in the entertainment business. We don't always want to preach to the choir," Weber said, explaining that putting "monster" in the title was a ploy to lure in an audience beyond the scientific types who are already inclined to tune in. "We're driven to find all this weird and wonderful stuff on film for you guys, but at the same time what's driving us is conservation. We use these charismatic megafauna as our hook to lure you into far more important matters," O'Shea said. "People are going to see this on television and start giving a damn about the marine environment."

The giant squid may have been the holy grail, but it wasn't quite the final frontier. There is evidence of a squid even bigger than the giant squid out there called the colossal squid. Now that the elements for successful deep sea exploration voyages have been established, it's only a matter of time before someone attempts to capture the colossal squid in Antarctica.
<urn:uuid:1e4ab405-69e6-4bbd-ad48-fd8fdec64ba8>
2.921875
2,161
News Article
Science & Tech.
48.774846
2,264
Markdown Syntax Guide

This is an overview of Markdown's syntax. For more information, visit the Markdown web site.

Italics and Bold

This is italicized, and so is this. This is bold, and so is this. You can use italics and bold together if you have to.

Links

There are three ways to write links. Each is easier to read than the last: The link definitions can appear anywhere in the document -- before or after the place where you use them. The link definition names (Yahoo!) can be any unique string, and are case-insensitive; [Yahoo!] is the same as [yahoo!].

Advanced links: Title attributes

You can also add a title attribute to a link, which will show up when the user holds the mouse pointer over it. Title attributes are helpful if your link text is not descriptive enough to tell users where they're going. (In reference links, you can optionally use parentheses for the link title instead of quotation marks.)

    : http://www.w3.org/QA/Tips/noClickHere  (Advice against the phrase "click here")

Advanced links: Bare URLs

You can write bare URLs by enclosing them in angle brackets: My web site is at <http://attacklab.net>. If you use this format for email addresses, Showdown will encode the address to make it harder for spammers to harvest. Try it and look in the HTML Output pane to see the results: Humans can read this, but most spam harvesting robots can't: <firstname.lastname@example.org>

Headers

There are two ways to do headers in Markdown. (In these examples, Header 1 is the biggest, and Header 6 is the smallest.) You can underline text to make the two top-level headers: The number of - signs doesn't matter; you can get away with just one. But using enough to underline the text makes your titles look better in plain text.

You can also use hash marks for all six levels of HTML headers: The closing # characters are optional.

Horizontal Rules

You can insert a horizontal rule by putting three or more hyphens, asterisks, or underscores on a line by themselves: You can also use spaces between the characters: All of these examples produce the same output.

Lists

A bulleted list:

- You can use a minus sign for a bullet
- Or plus sign
- Or an asterisk

A numbered list:

1. Numbered lists are easy
2. Markdown keeps track of the numbers for you
3. So this will be item 3.

A double-spaced list: This list gets wrapped in <p> tags, so there will be extra space between items.

Advanced lists: Nesting

You can put other Markdown blocks in a list; just indent four spaces for each nesting level. So:

Lists in a list item:

- Indented four spaces.
  - Indented eight spaces.
- Four spaces again.

Multiple paragraphs in a list item:

It's best to indent the paragraphs four spaces. You can get away with three, but it can get confusing when you nest other things. Stick to four.

We indented the first line an extra space to align it with these paragraphs. In real use, we might do that to the entire list so that all items line up.

This paragraph is still part of the list item, but it looks messy to humans. So it's a good idea to wrap your nested paragraphs manually, as we did with the first two.

Blockquotes in a list item: Skip a line and indent the >'s four spaces.

Preformatted text in a list item: Skip a line and indent eight spaces. That's four spaces for the list and four to trigger the code block.

Blockquotes

Blockquotes are indented: The syntax is based on the way email programs usually do quotations. You don't need to hard-wrap the paragraphs in your blockquotes, but it looks much nicer if you do. Depends how lazy you feel.
Advanced blockquotes: Nesting You can put other Markdown blocks in a blockquote; just add a > followed by a space: Parragraph breaks in a blockquote: The > on the blank lines is optional. Include it or don't; Markdown doesn't care. But your plain text looks better to humans if you include the extra Blockquotes within a blockquote: A standard blockquote is indented A nested blockquote is indented more You can nest to any depth. Lists in a blockquote: - A list in a blockquote - With a > and space in front of it - A sublist Preformatted text in a blockquote: Indent five spaces total. The first one is part of the blockquote designator. Images are exactly like links, but they have an exclamation point in front of them: ![Valid XHTML] (http://w3.org/Icons/valid-xhtml10). The word in square brackets is the alt text, which gets displayed if the browser can't show the image. Be sure to include meaningful alt text for blind users' screen-reader software. Just like links, images work with reference syntax and titles: This page is . Markdown does not currently support the shortest reference syntax for images: Here's a broken !checkmark. But you can use a slightly more verbose version of implicit reference names: The reference name ( valid icon) is also used as the alt text. If you need to do something that Markdown can't handle, you can always just use HTML: Strikethrough humor is Markdown is smart enough not to mangle your span-level HTML: Markdown works fine in here. Block-level HTML elments have a few restrictions: - They must be separated from surrounding text by blank lines. - The begin and end tags of the outermost block element must not be indented. - You can't use Markdown within HTML blocks. You can include preformatted text in a Markdown document. To make a code block, indent four spaces: printf("goodbye world!"); /* his suicide note was in C */ The text will be wrapped in ` tags, and the browser will display it in a monospaced typeface. The first four spaces will be stripped off, but all other whitespace will be preserved. You cannot use Markdown or HTML within a code block, which makes them a convenient way to show samples of Markdown or HTML syntax: <blink> You would hate this if it weren't wrapped in a code block. </blink> You can make inline <code> tags by using code spans. Use backticks to make a code span: <Tab> key, then type a (The backtick key is in the upper left corner of most keyboards.) Like code blocks, code spans will be displayed in a monospaced typeface. Markdown and HTML will not work within them: Markdown italicizes things like this: I *love* it. Don't use the <font> tag; use CSS instead. Showing changes from previous revision.
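Because the rendered examples above were flattened when this guide was converted to plain text, the quickest way to see what any construct actually produces is to run it through a converter yourself. The sketch below uses the third-party Python-Markdown package (installed with "pip install markdown") as a stand-in; the guide itself describes Showdown, a JavaScript converter, so the exact HTML output may differ slightly between the two.

```python
# Minimal sketch for checking how Markdown constructs render.
# Assumes the Python-Markdown package ("pip install markdown"); this is an
# analogous converter, not the Showdown converter the guide describes.
import markdown

samples = [
    "*italics* and **bold**",
    "[Yahoo!](http://www.yahoo.com/ \"Yahoo's home page\")",  # inline link with a title
    "> A blockquote\n> over two lines",
    "1. Numbered lists are easy\n1. Markdown keeps track of the numbers",
    "    printf(\"goodbye world!\");",  # four leading spaces trigger a code block
]

for src in samples:
    print(src)
    print("  ->", markdown.markdown(src).replace("\n", " "))
    print()
```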
<urn:uuid:fe93ce98-04f8-4316-bbdf-95864213b77d>
3.09375
1,542
Documentation
Software Dev.
63.835254
2,265
Computers today are more mobile than ever. From small laptops to Tablet PCs, many computers can go wherever the user wants to go. Programs that take advantage of the computer's mobility can add significant value to people's lives. For example, a program that can find nearby restaurants and provide driving directions would seem to be a natural fit for a portable computer. But while the technology to determine the user's current location is common and affordable, building solutions on this technology can be a daunting task. To create a location-aware program, you might need to overcome a variety of issues, including:
- Global positioning system (GPS) devices that use virtual COM ports, which provide access for only one program at a time.
- Understanding and programming for protocols, such as the National Marine Electronics Association (NMEA) specification, as well as proprietary vendor extensions.
- Being confined to programming for known, vertical hardware solutions.
- Implementing logic to handle transitions between various location providers, such as GPS receivers, connected networks, cellular telephone networks, the Internet, and user settings.
This documentation describes the Windows Location application programming interface (API). The Location API helps to simplify location-aware programming by providing a standard way to retrieve data about user location and standardizing formats for location data reports. The Location API automatically handles transitions between location data providers and always chooses the most accurate provider for the current situation.
Note: Because information about a person's location can be sensitive data, Windows helps protect user privacy. For information about privacy protection, see Privacy and Security in the Sensor and Location Platform.
- Getting Started
- About the Location API
- Location API Programming Guide
- Location API C++ Programming Reference
- Location API Object Model Reference
Build date: 10/27/2012
<urn:uuid:9628c06a-b907-4826-b78b-6aa97aa80f3e>
3.234375
362
Documentation
Software Dev.
15.885548
2,266
Gets or sets the source for the image.

Assembly: System.Windows (in System.Windows.dll)

Dependency property identifier field: SourceProperty

You can set the Source property by specifying an absolute URL (e.g. http://contoso.com/myPicture.jpg) or a URL relative to the XAP file of your application. You can set this property in XAML, but in this case you are setting the property as a URI. The XAML behavior relies on underlying type conversion that processes the string as a URI, and calls the BitmapImage(Uri) constructor. This in turn potentially requests a stream from that URI and returns the image source object. The ImageFailed event can occur if the initial attribute value in XAML does not specify a valid source.

The following example shows how to create an image.

Image myImage = new Image();
myImage.Source = new BitmapImage(new Uri("myPicture.jpg", UriKind.RelativeOrAbsolute));
LayoutRoot.Children.Add(myImage);

In this example, the Source property is used to specify the location of the image you want to display. You can set the Source property by specifying an absolute URL (e.g. http://contoso.com/myPicture.jpg) or a URL relative to the XAP file of your application. So for the previous example, you would need to have myPicture.jpg in the same folder as the XAP file.

For a list of the operating systems and browsers that are supported by Silverlight, see Supported Operating Systems and Browsers.
<urn:uuid:ccdba305-121f-4a31-8fbc-20de02fc4c9d>
2.65625
338
Documentation
Software Dev.
49.63625
2,267
Installs your own termination routine to be called by terminate. The set_terminate function installs termFunction as the function called by terminate. set_terminate is used with C++ exception handling and may be called at any point in your program before the exception is thrown. terminate calls abort by default. You can change this default by writing your own termination function and calling set_terminate with the name of your function as its argument. terminate calls the last function given as an argument to set_terminate. After performing any desired cleanup tasks, termFunction should exit the program. If it does not exit (if it returns to its caller), abort is called. In a multithreaded environment, terminate functions are maintained separately for each thread. Each new thread needs to install its own terminate function. Thus, each thread is in charge of its own termination handling. The terminate_function type is defined in EH.H as a pointer to a user-defined termination function, termFunction that returns void. Your custom function termFunction can take no arguments and should not return to its caller. If it does, abort is called. An exception may not be thrown from within termFunction. typedef void ( *terminate_function )( ); The set_terminate function only works outside the debugger. There is a single set_terminate handler for all dynamically linked DLLs or EXEs; even if you call set_terminate your handler may be replaced by another, or you may be replacing a handler set by another DLL or EXE. This function is not supported under /clr:pure. For additional compatibility information, see Compatibility in the Introduction. Not applicable. To call the standard C function, use PInvoke. For more information, see Platform Invoke Examples.
<urn:uuid:484bc00f-85f1-4321-aed6-32ff47102ee4>
3.21875
374
Documentation
Software Dev.
39.437421
2,268
Brain Has "Face Place" for Recognition, Monkey Study Confirms
for National Geographic News
February 3, 2006

The brain of the macaque monkey has a distinct area dedicated to recognizing faces, according to a new study. This brain region is the first one in any animal, including humans, found to have nearly all of its nerve cells focused on a specific visual form. The finding adds weight to the theory that the brain works like a Swiss Army knife, with separate modules set to different tasks.

"When we put an electrode in, it was clear from the very first day that every single cell just responded to faces," said study co-leader Doris Tsao, a neuroscientist at the Harvard Medical School in Boston. The study is reported in today's issue of the journal Science.

Like humans, monkeys are social animals. They benefit from recognizing other individuals in their group and from deciphering their peers' facial expressions. (See "Babies Recognize Faces Better Than Adults, Study Says.")

Scientists already know that humans have areas of the brain that are adept at face processing. For example, some stroke victims lose the ability to identify faces yet can still recognize everyday objects. Functional magnetic resonance imaging (fMRI) experiments on humans have demonstrated that blood flow to these regions increases during face-recognition tasks, just as it does in an area of the macaque brain known as the middle face patch. But to find out exactly how many of the nerve cells in the region are involved, researchers needed to record the cells' activity directly using an electrode.

While two macaques looked at a succession of pictures, some of which depicted faces, Tsao and colleagues logged the activity of more than 400 neurons in the monkeys' middle face patches. Ninety-seven percent of the cells in this brain region responded when a monkey saw a picture of a face. "It doesn't matter if it's a monkey, human, or even a cartoon face," said Tsao, who is planning several follow-up experiments. "[The nerve cells will] respond more to some faces, less to others, but they will fire some response to almost every face." Furthermore, most of the remaining 3 percent of the cells reacted to a type of facial image not included in the original experiment, such as the back of a head or a head tilted upward.

The discovery casts light on a much debated field of neuroscience. Though some experts promote the aforementioned Swiss-Army-knife view of the brain, others say that mental processes are performed in a widely distributed way. They argue that the regions involved in face recognition are really used for identifying all sorts of objects. Margaret Livingstone, a neuroscientist in the same laboratory as Tsao, says that the brain's expertise at recognizing a range of objects may follow from this ability to recognize faces. "The fact that we found that virtually all the cells are responsive to faces says that it can't just be general expertise," Livingstone said. "There's no machinery left to be expert at [identifying] these other things," such as birds and cars and so on. However, people might use the face selectivity of these cells to recognize objects that in some way resemble a face, Livingstone says.

Knives, Apples, and Clocks

Some of the monkey brain cells responded, though weakly, to round shapes such as clocks and apples. This finding, according to Tsao, suggests that cells in the macaque's middle face patch are involved in some intermediate level of face coding.

"[The cells are] not yet coding a particular identity, but they are coding the basic structure and measurement of a face," Tsao said. There are three face patches in each side of the macaque brain. The human brain has a similar distribution. According to Winrich Freiwald, a former Harvard postdoctoral student who co-led the study with Tsao, it remains a mystery why the brain has multiple face-recognition areas, rather than just one. "Ultimately the answer, I think, will depend on what the areas surrounding these patches do and which are the areas they're connected to," said Freiwald, now with the Brain Research Institute at the University of Bremen in Germany.
<urn:uuid:cd33c6a5-f0b8-42c1-bda5-3ea855a6d2cd>
3.796875
928
News Article
Science & Tech.
40.941783
2,269
When a scientist can't give you an answer, ask a garden gnome. Researchers have long hypothesized that objects weigh less at Earth's equator because the planet's spin and shape lessen gravity's pull there versus at the poles. (Imagine Earth as a spinning disc. A bean sitting in the center would feel nothing, whereas a bean at the edge would fly off.) Satellite accelerometers have confirmed this, but a digital scale manufacturer decided to test things the old-fashioned way. Enter the Kern garden gnome. When placed on a scale at the South Pole (pictured on the right; San Francisco and Mexico City are left and center, respectively), the intrepid ornament weighed 309.82 grams versus 307.86 grams at the equator, a difference of 0.6%. The gnome's next stop will be the CERN laboratory near Geneva, Switzerland, according to Kern Precision Scales, the manufacturer of the digital scale and the sponsor of the gnome's travels. CERN is currently conducting a search for the Higgs boson, the particle suspected of endowing quarks and electrons with mass, making it a particularly apt place to test a theory related to gravity.
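The gnome's 0.6% weight change can be checked against the standard normal-gravity model. The sketch below uses the WGS84 Somigliana formula; it ignores altitude and local gravity anomalies, so the predicted pole-to-equator difference of about 0.53% comes out slightly below the measured value.

```python
import math

def normal_gravity(lat_deg):
    """WGS84 Somigliana formula: normal gravity (m/s^2) on the reference
    ellipsoid at a given geodetic latitude. Ignores altitude and anomalies."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return 9.7803253359 * (1 + 0.00193185265241 * s2) / math.sqrt(1 - 0.00669437999013 * s2)

g_eq, g_pole = normal_gravity(0.0), normal_gravity(90.0)
print(f"equator: {g_eq:.5f} m/s^2, pole: {g_pole:.5f} m/s^2")
print(f"relative difference: {(g_pole - g_eq) / g_eq:.2%}")  # ~0.53%

# A scale calibrated at the equator would read the 307.86 g gnome as roughly:
print(f"gnome at the pole: {307.86 * g_pole / g_eq:.2f} g")  # ~309.5 g
```

The model predicts roughly 309.5 grams at the pole against the measured 309.82; the small remainder is plausibly down to the scale's calibration and the South Pole's high altitude, which the ellipsoid formula does not capture.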
<urn:uuid:17c79f46-1779-42aa-a4a7-fb1d9893f0a5>
2.984375
243
News Article
Science & Tech.
53.409474
2,270
UNL physicist discusses high-order harmonic generation at AAAS
Released on 02/18/2013, at 2:00 AM
Office of University Communications
University of Nebraska–Lincoln

One-billionth of a billionth of a second. That's the scale -- an attosecond -- at which scientists seek to image and control electronic motion in matter. The principle of attosecond science was the focus of a Feb. 17 symposium during the annual meeting of the American Association for the Advancement of Science. University of Nebraska-Lincoln physicist Anthony Starace was among the speakers, presenting "High-Order Harmonic Generation, Attosecond Science and Control of Electron Motion." Starace, a George Holmes University Professor of Physics at UNL, reviewed current theoretical understanding of the "new frontier" of high-order harmonic generation and discussed the prospects for achieving the goals of attosecond science.

"Because electrons move on a scale of Angstroms (one 10-billionth of a meter), light pulses used to illuminate this motion must have high energies so that their de Broglie wavelength is sufficiently small to be able to resolve, or image, the electron motion," Starace said. "Also, because electrons move so fast, light pulses must have durations that are shorter than the typical time scale for electron motion." De Broglie's relation, a cornerstone of quantum mechanics, states that a particle's wavelength is inversely proportional to its momentum.

Attosecond pulses are becoming the preferred tools for imaging, visualizing and even controlling electrons in matter in their natural time scale. Attosecond research could eventually open new applications in a wide range of fields, including nanotechnology and life sciences, based on the ultimate visualization and control of the quantum nature of the electron. Attosecond science evolved from advances in modern laser technology that allow generation of ultra-short light pulses, or high-order harmonic generation -- Starace's area of expertise. Starace joined seven other scientists to discuss "Attosecond Science in Chemical, Molecular Imaging, Spintronics and Energy Science."

The AAAS annual meeting was Feb. 14-18 in Boston. At this convention, thousands of leading scientists, engineers, educators and policymakers interact in more than 150 sessions and seminars. Starace is a fellow of the American Physical Society and AAAS. He earned his bachelor's degree at Columbia College, his master's and doctorate at the University of Chicago, and did postgraduate work at Imperial College, London. In 2010, his research on four-dimensional imaging was featured in Physical Review Letters and he earned the University of Nebraska's highest honor for research, the Outstanding Research and Creative Activity Award, in 2005.

Writer: Kelly Bartling
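Starace's two requirements -- short wavelength and short duration -- can be made concrete with a back-of-envelope calculation. The sketch below computes the photon energy whose wavelength matches a one-angstrom orbital scale and, for comparison, the de Broglie wavelength of an electron at a modest kinetic energy; the 100 eV figure is purely an illustrative choice.

```python
import math

h = 6.62607015e-34     # Planck constant, J*s
c = 2.99792458e8       # speed of light, m/s
me = 9.1093837015e-31  # electron rest mass, kg
eV = 1.602176634e-19   # joules per electronvolt

# Photon energy whose wavelength equals 1 angstrom (typical orbital scale):
lam = 1e-10
print(f"photon energy for 1 A: {h * c / lam / eV / 1e3:.1f} keV")  # ~12.4 keV (X-ray)

# De Broglie wavelength of a 100 eV electron (non-relativistic, lambda = h/p):
p = math.sqrt(2 * me * 100 * eV)
print(f"electron de Broglie wavelength at 100 eV: {h / p * 1e10:.2f} A")  # ~1.2 A
```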
<urn:uuid:d1ecc7ae-e9bb-4b10-9a04-7c0d10e7ebba>
2.6875
562
News (Org.)
Science & Tech.
19.708464
2,271
Roscosmos will soon consider a project to prevent a large asteroid from colliding with Earth after 2030, the head of Russia's space agency said on Wednesday. "A scientist recently told me an interesting thing about the path [of an asteroid] constantly nearing Earth... He has calculated that it will surely collide with Earth in the 2030s," Anatoly Perminov said during an interview with the Voice of Russia radio. He referred to Apophis, an asteroid that he said was almost three times as large as the Tunguska meteorite.

On June 30, 1908, an explosion equivalent to between 5 and 30 megatons of TNT occurred near the Podkamennaya Tunguska River in a remote region of Russia's Siberia. The Tunguska blast flattened 80 million trees, destroying an area of around 2,150 sq km (830 sq miles).

Perminov said Russia was not planning to destroy the asteroid. "No nuclear explosions [will be carried out], everything [will be done] on the basis of the laws of physics," he said. The Russian space official also said after having considered the project, Russia could invite experts from Europe, the United States and China to join it. "People's lives are at stake. We should pay several hundred million dollars and design a system that would prevent a collision, rather than sit and wait for it to happen and kill hundreds of thousands of people," Perminov said.

Though Apophis is currently considered the largest threat to our planet, NASA scientists published in October an update of its orbit indicating "a significantly reduced likelihood of a hazardous encounter with Earth in 2036." In October, NASA dropped the odds of it hitting Earth in 2036 from 1-in-45,000 to 1-in-250,000. It said another close encounter in 2068 will involve a 1-in-330,000 chance of impact.

Wikipedia on 99942 Apophis

99942 Apophis is a near-Earth asteroid that caused a brief period of concern in December 2004 because initial observations indicated a small probability (up to 2.7%) that it would strike the Earth in 2029. Additional observations provided improved predictions that eliminated the possibility of an impact on Earth or the Moon in 2029. However, a possibility remains that during the 2029 close encounter with Earth, Apophis will pass through a gravitational keyhole, a precise region in space no more than about 600 meters across, that would set up a future impact on April 13, 2036. This possibility kept the asteroid at Level 1 on the Torino impact hazard scale until August 2006. It broke the record for the highest level on the Torino Scale, being, for only a short time, a level 4, before it was lowered.

NASA initially estimated the energy that Apophis would have released if it struck Earth as the equivalent of 1,480 megatons of TNT. A later, more refined NASA estimate was 880 megatons. The impacts which created the Barringer Crater or the Tunguska event are estimated to be in the 3–10 megaton range. The B612 Foundation made estimates of Apophis' path if a 2036 Earth impact were to occur as part of an effort to develop viable deflection strategies. The result is a narrow corridor a few miles wide, called the path of risk, and it includes most of southern Russia, across the north Pacific (relatively close to the coastlines of California and Mexico), then right between Nicaragua and Costa Rica, crossing northern Colombia and Venezuela, ending in the Atlantic, just before reaching Africa.
Using the computer simulation tool NEOSim, it was estimated that the hypothetical impact of Apophis in countries such as Colombia and Venezuela, which are in the path of risk, would have caused more than 10 million casualties. An impact several thousand miles off the West Coast of the US would produce a devastating tsunami. The 1883 eruption of Krakatoa was the equivalent of roughly 200 megatons.

Path of risk where 99942 Apophis may impact Earth in 2036. The exact effects of any impact would vary based on the asteroid's composition, and the location and angle of impact. Any impact would be extremely detrimental to an area of thousands of square kilometres.
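The megaton figures quoted above can be sanity-checked with a simple kinetic-energy estimate. The diameter, density, and encounter speed below are illustrative assumptions (Apophis' true size and composition remain uncertain), chosen only to land in the neighborhood of the published values.

```python
import math

# Illustrative inputs, not measured values for Apophis.
diameter_m = 325.0   # assumed diameter, m
density = 3200.0     # assumed bulk density of a stony asteroid, kg/m^3
v_impact = 12.6e3    # assumed Earth-encounter speed, m/s
MT_TNT = 4.184e15    # joules per megaton of TNT

radius = diameter_m / 2
mass = density * (4 / 3) * math.pi * radius ** 3        # spherical-body mass
energy_mt = 0.5 * mass * v_impact ** 2 / MT_TNT         # kinetic energy in Mt
print(f"mass ~ {mass:.2e} kg, impact energy ~ {energy_mt:.0f} Mt TNT")
```

With these inputs the estimate comes out near 1,000 megatons, the same order of magnitude as NASA's refined 880-megaton figure.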
<urn:uuid:6449bdd7-174e-42e8-afc1-516699dbf64f>
3.296875
939
News Article
Science & Tech.
45.496797
2,272
Design your own scoring system and play Trumps with these Olympic Sport cards. There is a long tradition of creating mazes throughout history and across the world. This article gives details of mazes you can visit and those that you can tackle on paper. This activity challenges you to decide on the 'best' number to use in each statement. You may need to do some estimating, some calculating and some research. Can you put these times on the clocks in order? You might like to arrange them in a circle. One day five small animals in my garden were going to have a sports day. They decided to have a swimming race, a running race, a high jump and a long jump. Can you imagine where I could have walked for my path to look like Look at different ways of dividing things. What do they mean? How might you show them in a picture, with things, with numbers and symbols? Can you spot circles, spirals and other types of curves in these photos? Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information. This task looks at the different turns involved in different Olympic sports as a way of exploring the mathematics of turns and angles. What is the same and what is different about these tiling patterns and how do they contribute to the floor as a whole? This is a collection of mathematical activities linked to the Football World Cup 2006. These activities can easily be updated for another football event or could be the inspiration for. . . . Noticing the regular movement of the Sun and the stars has led to a desire to measure time. This article for teachers and learners looks at the history of man's need to measure things. How can people be divided into groups fairly for events in the Paralympics, for school sports days, or for subject sets? In this article, Alan Parr shares his experiences of the motivating effect sport can have on the learning of mathematics. Jenny Murray describes the mathematical processes behind making patchwork in this article for students. How does the time of dawn and dusk vary? What about the Moon, how does that change from night to night? Is the Sun always the same? Gather data to help you explore these questions.
<urn:uuid:6609c9c8-0830-44df-827f-97cb81e123c4>
3.34375
490
Content Listing
Science & Tech.
60.717795
2,273
NASA's Solar Dynamics Observatory (SDO), best known for cutting-edge images of the sun, has made a discovery right here on Earth. "It's a new form of ice halo," says atmospheric optics expert Les Cowley of England. "We saw it for the first time at the launch of SDO -- and it is teaching us new things about how shock waves interact with clouds."

Ice halos are rings and arcs of light that appear in the sky when sunlight shines through ice crystals in the air. A familiar example is the sundog, a rainbow-colored splash often seen to the left or right of the morning sun. Sundogs are formed by plate-shaped ice crystals drifting down from the sky like leaves fluttering from trees.

Last year, SDO destroyed a sundog, and that's how the new halo was discovered. SDO lifted off from Cape Canaveral on Feb. 11, 2010, one year ago today. It was a beautiful morning with only a handful of wispy cirrus clouds crisscrossing the wintry-blue sky. As the countdown timer ticked to zero, a sundog formed over the launch pad. "When the rocket penetrated the cirrus, shock waves rippled through the cloud and destroyed the alignment of the ice crystals," explains Cowley. "This extinguished the sundog." The sundog's destruction was understood. The events that followed, however, were not. "A luminous column of white light appeared next to the Atlas V and followed the rocket up into the sky," says Cowley. "We'd never seen anything like it."

Cowley and colleague Robert Greenler set to work figuring out what the mystery column was. Somehow, shock waves from the rocket must have scrambled the ice crystals to produce the 'rocket halo.' But how? Computer models of sunlight shining through ice crystals tilted in every possible direction failed to explain the SDO event.

Then came the epiphany: The crystals weren't randomly scrambled, Cowley and Greenler realized. On the contrary, the plate-shaped hexagons were organized by the shock waves as a dancing army of microscopic spinning tops. Cowley explains their successful model: "The crystals are tilted between 8 and 12 degrees. Then they gyrate so that the main crystal axis describes a conical motion. Toy tops and gyroscopes do it. The earth does it once every 26,000 years. The motion is ordered and precise."

Bottom line: Blasting a rocket through a cirrus cloud can produce a surprising degree of order. "This could be the start of a new research field: halo dynamics," he adds.

The simulations show that the white column beside SDO was only a fraction of a larger oval that would have appeared if the crystals and shock waves had been more wide-ranging. A picture of the hypothetical complete halo may be found here. "We'd love to see it again and more completely," says Cowley. "If you ever get a once-in-a-lifetime opportunity to be at a rocket launch," he advises with a laugh, "forget about the rocket! Look out instead for halos."
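The sundog geometry that the shock waves disturbed is itself a one-line calculation: light passing through the effective 60-degree prism formed by alternate side faces of a hexagonal plate crystal is deviated by at least about 22 degrees, which is why sundogs sit roughly 22 degrees to either side of the sun. A minimal sketch, assuming the usual visible-light refractive index for ice:

```python
import math

n = 1.31                 # refractive index of ice at visible wavelengths
A = math.radians(60.0)   # effective prism angle of a hexagonal plate crystal

# Minimum deviation through a prism: D = 2*asin(n*sin(A/2)) - A
D = 2 * math.asin(n * math.sin(A / 2)) - A
print(f"minimum deviation: {math.degrees(D):.1f} deg")  # ~21.8 deg, the sundog radius
```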
<urn:uuid:3bcc352d-bda5-4b8f-bfdb-184f447e4076>
3.328125
675
News Article
Science & Tech.
57.059754
2,274
The US space agency has narrowed down its prediction of when a defunct six-ton satellite will crash back to Earth, saying on Wednesday that it is expected to land on September 23, US time. "The time reference does not mean that the satellite is expected to re-enter over the United States. It is simply a time reference," NASA said on its website. "Although it is still too early to predict the time and location of re-entry, predictions of the time period are becoming more refined." NASA had previously said the satellite could hit Earth as early as Thursday, September 22 or as late as Saturday, September 24. All but 26 pieces of the Upper Atmosphere Research Satellite (UARS) are expected to burn up on re-entry into Earth's atmosphere, but where exactly they will land remains a mystery. Orbital debris scientists say the pieces will fall somewhere between 57 north latitude and 57 south latitude, which covers most of the populated world. The debris footprint is expected to span 500 miles (800 kilometers). UARS is the biggest NASA spacecraft to come back in three decades, after Skylab fell in western Australia in 1979. The risk to human life and property from UARS is "extremely small," NASA said, adding that in 50 years of space exploration no one has ever been confirmed hurt by falling space junk. More frequent updates are scheduled for 12, six and two hours before it lands. But even at two hours out, debris trackers will not be able to predict landing with an accuracy greater than 25 minutes of impact, or within a potential span of 7,500 miles (12,000 kilometers), NASA said. "Part of the reason it is so uncertain is the spacecraft itself is rather unwieldy looking and it tumbles and we can't predict exactly how it is going to be tumbling," said Mark Matney, an orbital debris expert at NASA. "Even as it tumbles that could change exactly where it is going to land."
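The quoted 57-degree latitude band sounds restrictive but is not: for a sphere, the fraction of surface area lying between latitudes plus and minus phi is simply sin(phi), a one-line check.

```python
import math

phi = 57.0  # UARS' orbital inclination bounds the possible re-entry latitudes
frac = math.sin(math.radians(phi))  # area fraction of a sphere between +/- phi
print(f"fraction of Earth's surface between +/-{phi} deg latitude: {frac:.1%}")  # ~83.9%
```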
<urn:uuid:a2f53c26-02ee-4d4c-ad9a-45e44d343dee>
2.890625
421
News Article
Science & Tech.
48.072889
2,275
Laser physicists are good at producing and manipulating single photons, but as with good comedy, the timing is important. Even the best experiments in quantum cryptography and computing–applications that make use of single photon properties–use sources that emit photons at random times. In the 4 October PRL a French team demonstrates a system that emits single photons on a dependable schedule at a frequency of 3 MHz. One other “triggered” photon source which operates on completely different principles was reported earlier this year. With these new techniques, researchers know exactly when and where a single photon will be found, and they are a step closer to quantum applications, such as cryptography that allows the receiver of information to deduce whether a message has been intercepted. Imagine a photon counter in front of a very weak laser beam. The number of photons reaching the detector in a given period of time (the laser power) may be precisely known, but the photons will arrive at random times. Even a pulsed laser can’t be rigged to produce single photons on a schedule. To generate photons at specific times, Michel Orrit of the French National Center for Scientific Research (CNRS) in Talence and his colleagues used the fact that a dye molecule will dependably emit a single photon within a matter of nanoseconds every time it’s raised to the right energy level, and they excited such a molecule in a controlled and repeatable way. From previous work, the team had learned that with a dilute solution of a dye chilled to 4 K they could target a single molecule using a well-focused laser beam. For their latest experiments, Orrit and his colleagues simultaneously applied a variable electric field across the frozen sample, which allowed them to slightly alter the frequency of light needed to excite the dye molecules. With the sample continuously illuminated by an excitation laser, the team applied an oscillating electric field at a frequency of 3 MHz, so that the dye molecule was excited twice per cycle–at the moments in time when the laser frequency matched the molecule’s resonant frequency. Ideally, this clock-like process would cause the molecule to flash a fluorescence photon with each excitation, but the system was not perfectly efficient. To collect the photons, the tiny sample was surrounded by a small, dish-shaped (paraboloid) reflector. The team could not dependably detect every photon, so they verified the timing by measuring eight minutes worth of the signal and showing that the light arrived with the expected distribution in time–the majority arriving within a few nanoseconds of the schedule. The researchers also used a beam splitter and a pair of photon detectors to show that 74% of the time exactly one photon was emitted. Orrit says that more than 95% efficiency should be possible with more expensive equipment, but his goal was to show that this relatively simple set-up is “a handy way of delivering single photons,” which quantum cryptographers will need in the future. “It’s very convincing,” says Alain Aspect of the CNRS in Orsay. He is impressed with the team’s control of single photons and with the cleverness of the experiment. He says this new system–as well as the other method for triggering single photons, reported earlier this year–will be important steps in the development of quantum cryptography.
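The beam-splitter test mentioned in the last paragraph can be illustrated with a toy Monte Carlo. The emission probabilities below are made-up parameters, not values fitted to the experiment; the point is only that a true triggered single-photon source produces essentially no two-detector coincidences, while any admixture of photon pairs does.

```python
import random

random.seed(1)

def hbt_coincidences(n_cycles, p_single, p_double):
    """Toy Hanbury Brown-Twiss test: each excitation cycle emits 0, 1 or 2
    photons; each photon independently exits one port of a 50/50 splitter.
    Returns (cycles with one detector firing, cycles with both firing).
    Probabilities here are illustrative, not experimental values."""
    singles = coincidences = 0
    for _ in range(n_cycles):
        r = random.random()
        n_photons = 2 if r < p_double else 1 if r < p_double + p_single else 0
        clicks = {random.choice("AB") for _ in range(n_photons)}
        if len(clicks) == 1:
            singles += 1
        elif len(clicks) == 2:
            coincidences += 1
    return singles, coincidences

# An ideal triggered source (never two photons) gives zero coincidences:
print(hbt_coincidences(100_000, p_single=0.74, p_double=0.0))
# A faulty source that occasionally emits pairs shows coincidences:
print(hbt_coincidences(100_000, p_single=0.70, p_double=0.04))
```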
<urn:uuid:48329b8f-7938-49fb-9a1c-b4f98dc75ba9>
3.625
692
News Article
Science & Tech.
31.720896
2,276
Space is three-dimensional ... or is it? When we spoke to theoretical physicist David Berman in October this year we found out that in fact, we are all used to living in a curved, multidimensional universe. And a mathematical argument might just explain how those higher dimensions are hidden from view. Kaluza, Klein and their story of a fifth dimension — David Berman explains the concept of dimension and how a mathematical idea suggests that we might well live in five of them. The ten dimensions of string theory — String theory has one very unique consequence that no other theory of physics before has had: it predicts the number of dimensions of space-time. David Berman explains where these other dimensions might be hiding and how we might observe them. How many dimensions are there? – the podcast — You can listen to an interview with David Berman as he tells us how Kaluza, Klein and their fifth dimension might help us understand the ten dimensions of string theory.
<urn:uuid:80438df5-2719-4ec1-ba89-551d1176c8f5>
2.640625
195
Content Listing
Science & Tech.
50.347848
2,277
The Solar Polar Orbiter (SPO) Technology Reference Study (TRS) examines the feasibility of a mission to obtain true solar polar orbit at an altitude of less than 0.5 AU to perform remote sensing of the Sun and in situ measurements of the surrounding environment.

The Solar Polar Orbiter

The Solar Polar Orbiter has the following scientific objectives:

The Solar Polar Orbiter consists of a single spacecraft, launched on a Soyuz Fregat 2B from Kourou, French Guiana. The spacecraft will utilize a solar sail to lower its orbit to less than 0.5 AU before raising its inclination. After about 4 years the SPO spacecraft will achieve an inclination of approximately 83 degrees in the ecliptic coordinate frame. At this point the sail will be detached in order to perform undisturbed scientific measurements. The preliminary concept for SPO employs a square solar sail with a total area of approximately 25 000 m². The characteristics of the sail are given in the table below. The preliminary mass budget is given in the table below, outlining the spacecraft and payload masses.

- Solar Sail Material: A lightweight material needs to be developed with the required optical properties to reduce system mass and requirements. The optical properties of the sail must also be preserved during the sail phase.
- Solar Sail Deployment: The development of a lightweight deployment structure for a very large (~30 000 m²) sail is required.
- Lightweight booms: Developments of lightweight booms with a length approaching 100 m are required. Such booms should have a specific mass of less than 100 g/m.
- Solar Sail Attitude Control: There are several options for performing attitude control. The options currently under consideration include a gimbaled boom between the sail and spacecraft, moving masses along the boom structure, and tip vanes or thrusters on the booms.
- Solar Sail Jettison Mechanism: The sail must be jettisoned upon reaching the final orbit in order to prevent interference with the instruments. This separation must take place with a minimum risk of collision.

This study was completed in 2004. It was carried out by SRE-PAM with the assistance of the University of Glasgow. For further information about this study please contact the study manager: Dr. Peter Falkner
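A rough feel for the sail's performance follows from the radiation-pressure relation a = (1 + r)PA/m. The sail area is taken from the study, but since the mass-budget table referenced above did not survive extraction, the total mass below is an assumed placeholder, and the reflectivity is likewise a guess.

```python
# Characteristic acceleration of a flat solar sail at 1 AU (back of envelope).
P_1AU = 4.56e-6     # solar radiation pressure at 1 AU for full absorption, N/m^2
reflectivity = 0.9  # assumed sail efficiency (perfect mirror would be 1.0)
area = 25_000.0     # m^2, from the study
mass_total = 450.0  # kg, assumed placeholder for spacecraft plus sail

a_c = (1 + reflectivity) * P_1AU * area / mass_total
print(f"characteristic acceleration: {a_c * 1e3:.3f} mm/s^2")  # ~0.5 mm/s^2
```

Accelerations of a few tenths of a millimetre per second squared are typical of sail mission concepts of this era, consistent with the multi-year spiral-down and inclination-cranking profile described above.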
<urn:uuid:310c42cb-21fa-4e88-a253-713ce71f63a9>
3.28125
467
Knowledge Article
Science & Tech.
41.244828
2,278
Amongst all the excitement over the first results from Herschel, it's easy to forget about its comparatively tiny American cousin Spitzer. Launched in 2003 with its 3 instruments IRAC, IRS and MIPS, Spitzer covers the infrared wavelengths from around 3 to 150 microns – a region that from Earth is either totally inaccessible or severely hampered by atmospheric absorption. With its 85-cm diameter primary mirror, it's easy to dismiss Spitzer as belonging to a former era. But new science is coming out of Spitzer data every day, and vast quantities of data remain unpublished in the archives. The big legacy surveys in particular, such as c2d (Cores to Disks) and the galactic plane surveys GLIMPSE and MIPSGAL, have released a wealth of data into the public domain, throwing light on old problems and unveiling new mysteries to solve.

One interesting phenomenon witnessed on the images from the GLIMPSE survey was a curious population of extended green objects (EGOs). Catalogued by Cyganowski et al in 2008, these "green fuzzies" appear to be associated with regions of massive star formation – many of them lie in or very near to infrared dark clouds, known to harbour the earliest forms of massive star birth, or are associated with methanol masers, strong radio emission caused by excitation of methanol molecules by infrared radiation from dust. Their green colour is in a sense incidental, arising from the way we construct 3-colour images from the Spitzer camera IRAC. IRAC takes images in 4 channels, at 3.6, 4.5, 5.8 and 8 microns, and typically a red-green-blue image uses the 8, 4.5 and 3.6 micron data, respectively. In this picture, "green" indicates that the object has an unusually high flux in the 4.5 micron band. This characteristic in a spatially extended object instantly raises a flag among star formation aficionados, as this band, stretching from 4 to 5 microns, contains some prominent spectral lines from molecular hydrogen (H2) and carbon monoxide (CO), commonly seen in emission in outflow regions. Young protostars that are growing by accreting material from their surrounding cloud often have strong streams of outflowing material. Where the outflow collides with the surrounding interstellar medium, the resulting shocks give rise to this strong emission. So this fuzzy greenness could indicate the presence of a young massive star growing deep inside a dense molecular cloud, even though we can't yet see the actual young star itself.

Even though relatively few massive stars are produced compared with "regular" low mass stars, and their lifetimes are much shorter, their immense output of energy, particularly in heavy elements that they alone can produce and eventually blast into the interstellar medium as supernovae, has galactic-scale influence. But their rapid evolution and long embedded formation stage makes them very elusive objects to study, and astronomers have to rely on indirect signposts of massive stellar birthplaces to get an insight into the process. The prospect of these green fuzzies as an additional telltale sign of young massive protostars is therefore well worth exploring. But IRAC's broad band imaging alone can't reveal the true nature of EGOs – for that we need spectroscopy.
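For readers who want to reproduce the colour mapping described above, here is a minimal numpy sketch of how such a three-colour composite is assembled from registered IRAC images. The percentile stretch is an arbitrary display choice; the one detail that matters for spotting green fuzzies is that the stretch is shared across channels, so an excess at 4.5 microns really does show up green rather than being normalized away.

```python
import numpy as np

def irac_composite(img_8um, img_45um, img_36um, pmin=1.0, pmax=99.5):
    """Stack three registered IRAC images into an RGB array using the
    conventional mapping: 8.0 um -> red, 4.5 um -> green, 3.6 um -> blue.
    A single stretch shared by all channels preserves colour information."""
    stack = np.dstack([img_8um, img_45um, img_36um])
    lo, hi = np.percentile(stack, [pmin, pmax])
    return np.clip((stack - lo) / (hi - lo), 0.0, 1.0)

# Demo on synthetic data: a source with excess 4.5-um flux comes out green.
yy, xx = np.mgrid[-32:32, -32:32]
blob = np.exp(-(xx**2 + yy**2) / 50.0)
rgb = irac_composite(0.3 * blob, 1.0 * blob, 0.3 * blob)
print(rgb.shape, rgb[32, 32])  # centre pixel is dominated by the green channel
```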
This week, De Buizer & Vacca of NASA Ames Research Center posted a paper to astro-ph claiming the first spectroscopic identification of green fuzzies over the IRAC wavelength range of 4-5 microns, allowing them to examine directly the source of the emission in these objects, precisely at the wavelengths that they appear strong in. They used the near-infrared instrument NIRI on the 8-m ground-based Gemini North telescope to observe two of the objects from Cyganowski's catalogue. And interestingly, the objects appeared to be quite different in nature, suggesting that they're not a clear-cut signpost of anything.

The first fuzzy, shown on the left above, looks like some sort of object-with-outflow, and the spectrum does suggest that that is the case. The central source's spectrum is consistent with a deeply embedded massive forming star, with all of its radiation below 3.5 microns or so being absorbed by a thick shell of dust. The green knotty areas surrounding it show only strong emission in molecular hydrogen lines with no underlying continuum emission, probably emanating from shocked gas in an outflow from the central young star.

The second fuzzy, however, has a very different shape – it looks a bit comet-shaped and there's no clear axis along which you might expect to see an outflow. The spectrum is also very different: it doesn't show any particular emission features in the IRAC 4.5 micron filter region, although its spectrum does suggest the presence of a deeply embedded young massive star at the brightest location. But there's no evidence of an outflow. The authors suggest that in this case the object is not especially bright at 4.5 microns; rather, it's unusually faint in the red and blue channels (8 and 3.6 microns, respectively) – and this produces the same green appearance. At 3.6 microns, the radiation is likely just being absorbed by dust, while the 8 micron flux is lowered by a nearby silicate absorption feature that is strong in embedded young stars.

This is not a groundbreakingly new result. Two objects don't exactly make "a sample", so we don't learn anything conclusive about the nature of the Spitzer green fuzzies. And in a way, it's not unexpected that there are several mechanisms responsible for the enhanced 4.5 micron flux: the galactic plane is a chaotic cauldron of gas and dust, and these objects we're seeing in the Spitzer images are all at different distances and depths with different amounts and compositions of intervening material. But the spectra are nice, and it's a good example of how we're slowly chipping away at new questions coming out of groundbreaking facilities, even many years after their launch. Hundreds of these green fuzzies have been identified from Spitzer images, and other authors have defined differing methods of identifying them, and all are excellent follow-up fodder for current large ground-based telescopes and new observatories and instruments coming online soon. For a phenomenon as important as the formation of massive stars, it's worth exploring anything that can give us a glimpse into the heart of the formation process.

James M. De Buizer, & William D. Vacca (2010). Direct Spectroscopic Identification of the Origin of 'Green Fuzzy' Emission in Star Forming Regions. Accepted in ApJ. arXiv: 1005.2209v1

C. J. Cyganowski, B. A. Whitney, E. Holden, E. Braden, C. L. Brogan, E. Churchwell, R. Indebetouw, D. F. Watson, B. L. Babler, R. Benjamin, M. Gomez, M. R. Meade, M. S. Povich, T. P. Robitaille, & C. Watson (2008).
A Catalog of Extended Green Objects (EGOs) in the GLIMPSE Survey: A new sample of massive young stellar object outflow candidates Astronomical Journal, 136 (6), 2391-2412 arXiv: 0810.0530v1
<urn:uuid:23f28de9-ac0f-4d53-b670-96ba643bc7ab>
3.25
1,575
Personal Blog
Science & Tech.
51.490963
2,279
Iddo Genuth writes "Pratt & Whitney Rocketdyne of West Palm Beach, Florida has successfully completed the third round of its Common Extensible Cryogenic Engine (CECE) testing for the National Aeronautics and Space Administration (NASA). CECE is a new deep throttling engine designed to reduce thrust and allow a spacecraft to land gently on the moon, Mars, or some other non-terrestrial surface." NASA is also set to launch a new satellite on Tuesday — the Orbiting Carbon Observatory — that will monitor the level of carbon dioxide in the atmosphere. On the research front, NASA has announced this year's Centennial Challenges. $2 million in prizes are available for a major breakthrough in tether strength (one of the major obstacles for developing a space elevator), and another $2 million is being offered to competitors who are able to beam power to a device climbing a cable at a height of up to one kilometer.
<urn:uuid:92f217a6-e8fb-4d92-9f97-3c97acc2593a>
2.671875
191
Comment Section
Science & Tech.
24.910477
2,280
What would happen if... I have a round magnet with a hole in it. Then I put a magnet shaped like a nail into the center of the other magnet? Would it float? If I spun the magnet around the inner magnet, would it spin the inner magnet? Thank you very much.
- Matt K. Leiner (age 40) South Lyon, MI

The inner magnet would definitely not float. A mathematical theorem due to Earnshaw proves that there's no way to get a magnet to float with any scheme like this. The nail will pull over to the other magnet and get stuck on it.

For your second question, let's assume the nail-magnet was tied on a string or something to keep it in the center, but free to rotate. What it does when the round magnet spins will then depend on how the round magnet's poles are configured. (I'm assuming the poles of the nail are at its ends.) If the round magnet has, say, the south poles on its inside ring and the north poles outside, spinning it won't change the magnetic field and won't make the nail spin. If the ring's south pole is on one side (say left) and the north pole on the other, spinning it will make the fields spin with it. That will make the nail spin with it, at least for slow spin rates.

(published on 05/16/2013)
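The dragging effect in the second answer is just the torque on a magnetic dipole, tau = m x B: the nail feels a torque whenever its moment lags the rotating field, and none once it aligns. A toy calculation with made-up magnitudes, not measured values for a real nail magnet:

```python
import numpy as np

m = np.array([0.1, 0.0, 0.0])  # nail's dipole moment, A*m^2, pointing along x
lag = np.radians(30.0)         # ring's field rotated 30 degrees ahead of the nail
B = 0.01 * np.array([np.cos(lag), np.sin(lag), 0.0])  # field magnitude 0.01 T

tau = np.cross(m, B)  # torque on a dipole: tau = m x B
print(f"restoring torque: {np.linalg.norm(tau):.2e} N*m about the z axis")
# The torque vanishes when the nail aligns with the field, which is why a
# slowly rotating ring field drags the nail around with it.
```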
<urn:uuid:76b08934-3dc6-4aeb-b035-ec81ac1950d9>
3.125
309
Q&A Forum
Science & Tech.
83.20671
2,281
Phylogeny of the Invertebrates

The tree below was redrawn from the information and cladograms of the Phylogeny Wing of the University of California Museum of Paleontology. The animal phyla represented have been selected for familiarity and to provide context for phyla often exhibited in aquaria and that might be seen at the James R. Record Aquarium of the Fort Worth Zoo. My selection of which "minor phyla" to include both on the tree and in the list below is frankly idiosyncratic. Common names have been used to identify each of the phyla when one was available. Below you will find a listing of the scientific names of each phylum, and examples of its more common members. Links to representative animals for phyla that can be seen at the zoo have been provided, both from the phylum list and from the tree. Contrary to my usual practice at WhoZoo, I have also included links from the list to discussions of less familiar phyla at other web sites; these are marked with an asterisk (*). Sources for information on these animals are included in the source list below the tree. Only animals that can be seen at the James R. Record Aquarium have been linked from the tree itself. Note: with the closing of the Aquarium, these animals can no longer be seen at the Fort Worth Zoo.

- Phylum Porifera: the *sponges
- Phylum Cnidaria: anemones, jellyfish and corals
- Phylum Platyhelminthes: the flatworms
- Phylum Echinodermata: sea stars, sea urchins, sand dollars, sea lilies
- Phylum Chordata, subphylum Urochordata: the *sea squirts
- Phylum Chordata, subphylum Cephalochordata: the *lancelets
- Phylum Chordata, subphylum Vertebrata: the vertebrates
- Phylum Mollusca: bivalves, snails, *octopuses and squid
- Phylum Pogonophora: this relatively unfamiliar phylum has been included because the Vestimentiferan worms -- the giant red tube-building worms that colonize the waters around *deep sea volcanic vents -- are closely related to them.
- Phylum Annelida: earthworms and fanworms
- Phylum Rotifera: the *rotifers
- Phylum Nematoda: the *nematodes
- Phylum Onychophora: the *velvet worms
- Phylum Tardigrada: the *water bears -- charming, stumpy-legged segmented animals
- Phylum Arthropoda: because the ranks of the major taxa within this large and successful group are variously defined, I have included the groups without a rank assignment below:
  - Myriapods: centipedes and millipedes
  - Insects: beetles, flies, butterflies, roaches, ants and bees, bugs, grasshoppers, phasmids (AKA stick insects) and other insects.
  - Crustaceans: crabs, shrimp and lobsters, and the giant isopods that can be seen in the James R. Record Aquarium.
  - Merostomata: the horseshoe crabs
  - Arachnida: spiders, scorpions, ticks and mites.
  - Trilobita: the *trilobites -- extinct arthropods commonly found as fossils.

Additional notes: Deuterostomes are animals whose embryos develop a mouth as a secondary embryonic structure opposite to the blastopore, which opens into the primitive gut. Protostomes are animals that develop their mouths from or very near to the blastopore. Ecdysozoans are animals that molt their cuticles or exoskeletons as they grow. Lophotrochozoans are a group of animals that have either a trochophore (toplike) larva or a feeding organ (lophophore) composed of a ring of ciliated tentacles.

Sources and links to offsite information about invertebrates:
- Allen G. Collins, Brian R. Speer and Ben Waggoner. The Metazoa. University of California Museum of Paleontology.
- Allen G. Collins and Ben Waggoner. Introduction to the Porifera. University of California Museum of Paleontology.
- Ben Waggoner and Brian R. Speer. Introduction to the Flatworms. University of California Museum of Paleontology.
- James Wood. The Cephalopod Page.
- Wim van Egmond and Jan Parmentier. Sea Squirts, Our Distant Cousins.
- Ben Waggoner. Introduction to the Lancelets. University of California Museum of Paleontology.
- Hot Vents. Marine Biology at State University of New York.
- Roy Winsby. Rotifers and How to Find Them.
- Ben Waggoner and Brian R. Speer. Introduction to the Nematoda. University of California Museum of Paleontology.
- Ben Waggoner and Allen G. Collins. Introduction to the Onychophora. University of California Museum of Paleontology.
- Phil Greaves. Tardigrades.
- Sam Gon III. A Guide to the Orders of Trilobites.
- University of California Museum of Paleontology. Introduction to the Lophotrochozoa.
- University of California Museum of Paleontology. Introduction to the Ecdysozoa.
<urn:uuid:0f15a9b0-1a04-4b08-9255-f9f7b26d89ac>
2.890625
1,166
Structured Data
Science & Tech.
28.209193
2,282
ScienceDaily (Aug. 5, 2012) -- It's a longstanding question in biology: How do cells know when to divide? In simple organisms such as yeast, cells divide once they reach a specific size. However, testing whether this holds true for mammalian cells has been difficult, in part because there has been no good way to measure mammalian cell growth over time.

Now, a team of MIT and Harvard Medical School (HMS) researchers has precisely measured the growth rates of single cells, allowing them to answer that fundamental question. In the Aug. 5 online edition of Nature Methods, the researchers report that mammalian cells divide not when they reach a critical size, but when their growth rate hits a specific threshold. This first-ever observation of the threshold was made possible by a technique developed by MIT professor Scott Manalis and his students in 2007 to measure the mass of single cells. In the new study, Manalis and his colleagues were able to track cell growth and relate it to the timing of cell division by measuring cells' mass every 60 seconds throughout their lifespans.

The finding offers a possible explanation for how cells determine when to start dividing, says Sungmin Son, a grad student in Manalis' lab and lead author of the paper. "It's easier for cells to measure their growth rate, since they can do that by measuring how fast something in the cell is produced or degraded, whereas measuring size precisely is hard for cells," Son says.

Manalis, a professor of biological engineering and member of the David H. Koch Institute for Integrative Cancer Research at MIT, is senior author of the paper. Other authors are former MIT grad student Yaochung Weng; Amit Tzur, a former research associate at HMS; Paul Jorgensen, a former HMS postdoc; Jisoo Kim, a former undergraduate student at MIT; and Marc Kirschner, a professor of systems biology at HMS.

Tracking cells over time

Manalis' original cell-weighing system, known as the suspended microchannel resonator, pumps cells (in fluid) through a microchannel that runs across a tiny silicon cantilever. That cantilever vibrates within a vacuum. When a cell flows through the channel, the frequency of the cantilever's vibration changes, and the cell's buoyant mass can be calculated from that change in frequency.

For the new study, the researchers redesigned their system so that they could trap cells over a much longer period of time. The original system offered limited control over the motion of cells in the channel; cells could be lost or become unviable due to accrued shear stress from frequent passages through the microchannel. Consequently, growth could be monitored for less than 30 minutes. To avoid this problem, the researchers developed a way to precisely control the flow in the system so that a cell could be stopped anywhere in a bypass channel. They also configured the flow to constantly feed nutrients and remove waste. Now a cell passes through only every 60 seconds and stays viable for several generations.

The new system also measures fluorescent signals from the cell in addition to the mass. Cells are engineered to express fluorescent proteins at various points in the cell cycle, allowing the researchers to link cell-cycle information to growth. A cell devotes itself to growth in a phase called G1. A critical transition occurs when the cell enters the S phase, during which DNA is replicated in preparation for division.
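The resonator readout described a few paragraphs back is ordinary harmonic-oscillator physics: f = (1/2 pi) * sqrt(k/m), so a small added mass shifts the frequency by roughly delta_f = -f * delta_m / (2m). The spring constant and effective mass below are illustrative placeholders, not the actual parameters of the MIT device.

```python
import math

# Toy model of a suspended microchannel resonator readout.
k = 10.0        # assumed effective spring constant, N/m
m_eff = 1e-10   # assumed effective resonator mass, kg (100 nanograms)

def freq(m):
    """Resonant frequency of a harmonic oscillator: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m) / (2 * math.pi)

f0 = freq(m_eff)
delta_m = 1e-13  # a ~100 picogram buoyant mass entering the channel
print(f"f0 = {f0:.1f} Hz, exact shift = {f0 - freq(m_eff + delta_m):.3f} Hz")
# Small-shift approximation used to convert a frequency change back to mass:
print(f"approx shift = {f0 * delta_m / (2 * m_eff):.3f} Hz")
```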
The researchers found that growth rate increases rapidly during the G1 phase. This rate varies a great deal from cell to cell during G1, but converges as cells approach the S phase. Once cells complete the transition into S phase, growth rates diverge again.

Building on a feature of the new system that precisely controls the environmental conditions inside the channel, the researchers can also change those conditions very rapidly, allowing them to monitor how cells respond to such disturbances. "We are now measuring the cell's response on short timescales to various perturbations, such as depleting a particular nutrient or adding a drug," Manalis says. "We believe this could offer new kinds of information that could not be obtained from conventional proliferation assays."

- Sungmin Son, Amit Tzur, Yaochung Weng, Paul Jorgensen, Jisoo Kim, Marc W. Kirschner, Scott R. Manalis. Direct observation of mammalian cell growth and size regulation. Nature Methods, 2012; DOI: 10.1038/nmeth.2133
<urn:uuid:caac93b9-91ce-4452-ba0b-9d4f85dd098c>
3.109375
1,140
News Article
Science & Tech.
39.268458
2,283
Oil dispersant effects remain murky As oil continues to churn from the bottom of the Gulf of Mexico, many questions remain about where the oil will go and how it will affect marine life. Making those questions even tougher to answer are the controversial chemical dispersants that BP has used to break the oil into smaller droplets and keep it away from shorelines and the surface. Although a panel of 50 experts on 27 May came out in support of continued use of dispersants given the tradeoffs at the time, the long-term effects of dispersant use remain uncertain. "We don't know whether it's affecting wildlife or not," says Ed Overton of Louisiana State University. "We're right in the middle of this. We really won't know for a while yet." "It was the consensus that the use of dispersants was, at that point, environmentally beneficial in the big picture," says panel member Carys Mitchelmore, of the University of Maryland Center for Environmental Science. "What concerns me are just the huge uncertainties and data gaps, particularly when you're talking about long-term, continued use." In a blog about her research on the Gulf spill, Samantha Joye, a researcher at the University of Georgia and the leader of multiple research cruises in the Gulf to track underwater oil plumes, writes: "Dispersants are a complicated topic. No one that I have spoken to about this has a full understanding of what the full range of dispersant effects might be. How do dispersants influence microorganisms and microbially-mediated processes? I don't know. How do they impact fish, larvae, phytoplankton, shrimp? I don't know the answer to that either. I do know that the dispersants seem to be doing a good job of breaking up the oil into smaller particles and that keeps the oil off the beaches but I am not convinced this is a good thing because there are so many potential unknown effects of dispersants." The dispersants act to break the oil into small droplets that remain suspended in water, rather than coming to the surface where they can coat birds and shorelines. But by keeping the oil in the water instead of at the surface, other organisms suffer. In evaluating the trade-off, the panel members assumed a worst-case scenario that everything from the surface to 10 metres below dies, says Mitchelmore. "When you add dispersant, organisms are exposed to oil that wouldn't have been," she says. "Dissolved oil can go directly across organisms' membranes. Some organisms think oil droplets are food and they eat them. In other cases, it can stick to gills." "The base of the food chain is probably more sensitive than the larger fish," says George Crozier of Dauphin Island Sea Lab in Alabama. "A lot of the organisms that can swim are probably saying this doesn't smell good or taste good and leaving, but the plankton that forms the base of the food chain doesn't have that option." Research by Andy Nyman at Louisiana State University supports this. He studied the effect of oil with and without COREXIT 9500, the main dispersant used in the ongoing spill, on fish, one type of plankton, a tiny worm that lives in the sediment, and sediment microorganisms. "We found that working with South Louisiana crude and COREXIT 9500, the dispersed oil was more toxic than the undispersed oil initially and even six months later," he says. The plankton and the worm - the major food source for shrimp - were the most sensitive. Nyman acknowledges that his lab experiments were a worst-case scenario where the dispersed oil couldn't float away or be diluted. 
Organisms in the Gulf are probably exposed to lower concentrations. "I don't know how low the concentration has to be before you stop seeing the increase in toxicity," he says. His study looked at shallow-water shoreline organisms, which theoretically will be spared by the use of dispersants offshore, near the source of oil. But deep-water organisms will probably be affected similarly. "I would expect to see some of the same patterns over the sediments in the deep Gulf of Mexico," says Nyman. "I would expect the dispersed oil to be more toxic and for the effects to last longer unless I saw data otherwise."
Starved of oxygen
Another concern with dispersants is that by keeping oil in the water column where microbes can degrade it, oxygen levels in the water can drop to potentially dangerous levels as the microbes feed on the oil and consume oxygen. "It looks like oxygen is down 15% or 20% or 30%," Overton said. "That's pretty close to what we would expect." So far, this level is not too concerning, he says. However, Crozier and colleague Monty Graham at the Dauphin Island Sea Lab have identified a zone of low oxygen emerging off the Alabama shore that is likely due to oil. Sampling from earlier this week showed reduced numbers and types of animals in the area, suggesting mobile animals are leaving the area. Plankton in the low oxygen zone appeared dead, says Graham. So far, nearly 4.9 million litres of dispersants have been applied to the Gulf surface and underwater near the wellhead. Although this is the largest US use of dispersants, the 1979 Ixtoc I oil spill off the coast of Mexico used dispersants for over five months.
Difficult to see
This is the first time the dispersants have been used underwater in a spill response. EPA approved this use on 15 May, 25 days after the Deepwater Horizon rig exploded, starting the spill. "I think that was terribly ill advised," says Crozier. "It's keeping the oil unseen and very difficult to find and impossible, ultimately, to clean up." Even without the dispersants below the surface, tracking the oil would be complex, since it is shooting out at high pressure mixed with methane gas deep underwater. "You've got natural dispersion from the oil outgassing - just fizzing from the effervescence of the gas mixed with oil," says Overton. "Dispersant use has always been full of uncertainties. A lot of these were identified in (a report in) 1989," says Mitchelmore. "What is the point of doing these reports and finding these data gaps if no one ever looks at them?"
<urn:uuid:ad5360a1-74d9-44eb-8bf1-56604a1c9e14>
2.953125
1,329
News Article
Science & Tech.
51.08358
2,284
The Spectrum of Riemannium
Erbium and Eigenvalium
Among the spectra in Figure 1 is a series of 100 energy levels of an atomic nucleus, measured 30 years ago with great finesse by H. I. Liou and James Rainwater and their colleagues at Columbia University. The nucleus in question is that of the rare-earth element erbium-166. A glance at the spectrum reveals no obvious patterns; nevertheless, the texture is quite different from that of a purely random distribution. In particular, the erbium spectrum has fewer closely spaced levels than a random sequence would. It's as if the nuclear energy levels come equipped with springs to keep them apart. This phenomenon of "level repulsion" is characteristic of all heavy nuclei. What kind of mathematical structure could account for such a spectrum? This is where those eigenvalues of random Hermitian matrices enter the picture. They were proposed for this purpose in the 1950s by the physicist Eugene P. Wigner. As it happens, Wigner was another Princetonian, who could therefore make an appearance in our movie. Let him be the kindly professor who explains things to a dull student, while the audience nods knowingly. The dialogue might go like this:
Wigner: Come, we'll make ourselves a random Hermitian matrix. We start with a square array, like a chessboard, and in each little square we put a random number....
Student: What kind of number? Real? Complex?
Wigner: It works with either, but real is easier.
Student: And what kind of random? Do we take them from a uniform distribution, a Gaussian...?
Wigner: Customarily Gaussian with mean 0 and variance 1, but this is not critical. What is critical is that the matrix be Hermitian. A Hermitian matrix—it's named for the French mathematician Charles Hermite—has a special symmetry. The main diagonal, running from the upper left to the lower right, acts as a kind of mirror, so that all the elements in the upper triangle are reflected in the lower triangle.
Student: Then the matrix isn't really random, is it?
Wigner: If you insist, we'll call it half-random. We fill the upper half with random numbers, and then we copy them into the lower half. So now we have our random Hermitian matrix M, and when we calculate its eigenvalues....
Student: But how do I do that?
Wigner: You start up Matlab and you type "eig(M)"!
Eigenvalues go by many names, all of them equally opaque: characteristic values, latent roots, the spectrum of a matrix. Definitions, too, are more numerous than helpful. For present purposes it seems best to say that every N-by-N matrix is associated with an Nth-degree polynomial equation, and the eigenvalues are the roots of this equation. There are N of them. In general, the eigenvalues can be complex numbers, even when the elements of the matrix are real, but the symmetry of a Hermitian matrix ensures that all the eigenvalues will be real. Hence they can be sorted from smallest to largest and arranged along a line, like energy levels. In this configuration they look a lot like the spectrum of a heavy nucleus. Of course the eigenvalues do not match any particular nuclear spectrum level-for-level, but statistically the resemblance is strong. When I first heard of the random-matrix conjecture in nuclear physics, what surprised me most was not that it might be true but that anyone would ever have stumbled on it. But Wigner's idea was not just a wild guess.
In Werner Heisenberg's formulation of quantum mechanics, the internal state of an atom or a nucleus is represented by a Hermitian matrix whose eigenvalues are the energy levels of the spectrum. If we knew the entries in all the columns and rows of this matrix, we could calculate the spectrum exactly. Of course we don't have that knowledge, but Wigner's conjecture suggests that the statistics of the spectrum are not terribly sensitive to the specific matrix elements. Thus if we just choose a typical matrix—a large one with elements selected according to a certain statistical rule—the predictions should be approximately correct. The predictions of the model were later worked out more precisely by Dyson and others.
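To make Wigner's recipe concrete, here is a minimal sketch (my own, not from the article) of the experiment in Python with NumPy rather than the Matlab call quoted above. It skips the careful "unfolding" of the spectrum that a real analysis would perform, so the spacing statistics are only indicative; the point is the level repulsion, visible as a shortage of very small gaps.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# Fill a square array with Gaussian entries, then mirror the upper
# triangle into the lower one so the matrix is real symmetric (the
# real case of Hermitian), guaranteeing real eigenvalues.
A = rng.normal(0.0, 1.0, size=(N, N))
M = np.triu(A) + np.triu(A, 1).T

eigenvalues = np.linalg.eigvalsh(M)   # returned sorted, like energy levels
spacings = np.diff(eigenvalues)
spacings /= spacings.mean()           # normalize the mean gap to 1

# Level repulsion: closely spaced levels are rare compared with a
# purely random (Poisson) sequence, where small gaps are the most common.
print("fraction of gaps below 0.1:", np.mean(spacings < 0.1))
```

For a Poisson sequence roughly 10% of normalized gaps fall below 0.1; for the random symmetric matrix the fraction is far smaller, which is exactly the "springs between levels" effect seen in the erbium spectrum.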
<urn:uuid:5e260d0c-d6c0-467b-8a76-1127e34a7cc4>
3.578125
919
Nonfiction Writing
Science & Tech.
55.035935
2,285
The redshift (Z) and Early Universe Spectrometer (ZEUS) is a direct detection, submillimeter echelle grating spectrometer designed to study star formation in the Universe from about 2 billion years after the Big Bang to the present through spectroscopy of distant star forming galaxies. The sensitivity of ZEUS enables spectroscopic studies in many of the most important gas cooling lines. From local (z < 0.1) systems, these lines include the CO(J=8-7, 7-6, and 6-5) and 13CO(6-5) rotational lines, and the 370 and 610 um [CI] fine-structure lines which probe the neutral gas and far-UV radiation fields. From distant (z > 1) galaxies, we are using the redshifted fine-structure lines of [CII] 158 um, [OIII] 88 um, [OI] 63 um, and [NII] 122 and 205 um which probe stellar radiation fields and their effects on both the neutral and ionized gas components of the interstellar medium. ZEUS is diffraction limited, attains sensitivity very close to the background limit on large submillimeter telescopes, and has a resolving power ~ 1000. It therefore has unsurpassed sensitivity (corresponding to single-side-band receiver temperatures < 40 K) for detecting broad lines from faint, point sources, enabling exciting new science, portions of which we detail below. ZEUS is designed for use in the 350, 450, and 610 um telluric windows on large submillimeter telescopes such as the 10.4 m Caltech Submillimeter Observatory (CSO), the 15 m James Clerk Maxwell Telescope (JCMT), and the 12 m Atacama Pathfinder EXperiment (APEX) telescope. The ZEUS grating is a 35 cm long echelle with a 5th order blaze wavelength of 355 um operated in near Littrow mode. By tilting the grating we access the spectral range between 333 and 381 um, and 416 to 477 um, in 5th and 4th orders of the grating respectively, providing near complete coverage of the 350 and 450 micron telluric windows. It is straightforward to access the 610 um telluric window (555 to 636 um coverage in 3rd order of the echelle), but we have not done this to date. Distant galaxies are essentially point sources compared with the 7-10" diffraction limited beams delivered by 10 m class submillimeter telescopes in the 350 and 450 micron windows. For instance, at the distance of the nearest ULIRG galaxy (Arp 220, d = 72 Mpc) a 7" beam corresponds to 2.4 kpc linear extent. To maximize sensitivity for point source detection given background limited operation, we operate ZEUS with a near diffraction limited slit width. On the CSO we have used 8.7" and 10.8" slits, which correspond to 1.1 and 1.37 lambda/D at the middle of our wavelength coverage (400 um). ZEUS achieves resolving powers between 565 and 1600 depending on the wavelength and entrance slit width over the 350 and 450 um bands. This resolving power is well matched to extragalactic line widths (~ 300 km/sec), optimizing sensitivity for detection of weak lines. The resolving power is ~ 1200 at 372 microns, sufficient to well resolve the astrophysically important CO(7-6) and 370 um [CI] lines that are spaced by only 1000 km/s. The current ZEUS detector array is a 1 x 32 pixel neutron-transmutation doped silicon bolometer array kept at 220 mK with a dual stage 3He refrigerator. The array was manufactured by S.H. Moseley's group at Goddard Space Flight Center. The 1 mm square pixels are well matched to the slit width so that most of the line flux from a monochromatic point source will fall on a single pixel.
The 1 x 32 pixel format yields an instantaneous spectrum of 32 spectral elements (instantaneous coverage up to 3% bandwidth) on a single beam on the sky. At present, we have our final bandpass filters located directly in front of the array, so that the 32 element spectrum is split -- one 16 pixel half operates in 4th order, while the other half operates in 5th order of the grating. This eliminates the complexity of a milli-Kelvin filter wheel in the system, and enables simultaneous observations in both telluric windows. A line coincidence allows 12CO(6-5) in the 450 um window to be observed simultaneously with 13CO(8-7) in the 350 um window. A more detailed design description is found elsewhere on this page (see link) and in our publication list (see link). ZEUS can accommodate a much larger format array -- about 54 pixels spectrally, and 20 spatially, so that a 54 spectral element spectrum can be delivered for 20 beams on the sky along a long slit simultaneously. We are creating a multi-color, multi-beam version of ZEUS, ZEUS-2 (see link). Since late 2006, we have enjoyed five very successful runs with ZEUS on the 10.4 m CSO. We detected the CO(6-5), CO(7-6), and [CI] 370 um lines from about two dozen nearby star-forming galaxies and ultraluminous galaxies. We also detected the CO(8-7) line from five ULIRGs, 13CO(6-5) from two galaxies, and the redshifted [CII] line emission from six galaxies at redshifts between 1.1 and 1.8. These lines trace the excitation of the neutral interstellar medium and are important coolants, enabling cloud collapse to form stars.
First Detection of 13CO(6-5) from an External Galaxy
Our detection of the 13CO(6-5) line from the nucleus of NGC 253 is the first detection of the 13CO(6-5) line from an external galaxy, and the first detection of any 13CO transition greater than J=3-2 from a source beyond the Magellanic Clouds. We detected the line from the nuclear regions of the starburst galaxy NGC 253, where, at a distance of 2.5 Mpc, our 11" beam subtends 275 pc. We observed the 13CO(6-5), the 12CO(6-5), and [CI] 370 um lines from NGC 253, mapping the last two (right). The 13CO(6-5) line is bright, at ~ 7% of the line flux in the 12CO(6-5) line, indicating optically thick emission in the latter. We model the observed run of CO and 13CO line emission with J using a large velocity gradient (LVG) method, and find that 35 to 60% of the molecular gas mass in the nuclear regions is both warm (T ~ 110 K) and dense (~ 10^4 cm^-3), in support of our prior work based on the detection of the 12CO(7-6) line (Bradford et al. 2003). We conclude that the gas is heated by either the cosmic rays from the nuclear starburst, or by the decay of turbulence within molecular clouds, presumably also driven by the formation of stars. The heating of the molecular ISM by the starburst therefore is inhibiting further episodes of star formation, so that for NGC 253 the starburst is self-limiting (Hailey-Dunsheath et al.).
First Detections of the [CII] 158 um line from the Epoch of Enhanced Star Formation in the Universe
Detailed studies of the infrared to submillimeter continuum over the past decade have shown that the rate of star formation per unit co-moving volume peaked when the Universe was only 15-45% of its present age (redshifts 1 to 3) at values 30 times the present rate. We have begun a program to study star formation in this epoch using the bright far-IR fine-structure lines as they are redshifted into the submm windows as probes.
These lines are important coolants for most of the important phases of the interstellar medium, so that their measure yields important information on the star formation process on galactic scales. The brightest of the far-IR lines (and indeed, it can be the brightest single line from a star forming galaxy) is the 158 um [CII] line, which cools the cold neutral medium, the warm neutral medium, diffuse ionized gas and the photo-dissociation regions (PDRs) formed on the surfaces of molecular clouds exposed to the far-UV starlight from nearby OB stars and/or the general interstellar radiation field. Most of the [CII] line arises in PDRs (see link) where the gas is predominantly heated through the photo-electric ejection of electrons from grains, and cooled through its [CII] line radiation. About 1% of the far-UV energy flux heats the gas in this way, while most of the remainder heats the dust, which cools via its far-IR continuum radiation. The [CII] line to far-IR continuum luminosity ratio, R, is therefore a measure of the efficiency of the gas heating via the photo-electric effect. This efficiency is a strong function of the strength of the ambient interstellar radiation field, so to measure R is to measure those fields, i.e. the concentration of the starburst. We have made the first detections of the [CII] 158 um line from star forming galaxies at redshifts between 1 and 2 - so that we can probe the physical parameters of the newly formed stars and their environs in the epoch of enhanced star formation in the Universe. At present, we have strongly detected emission from five systems, including two submillimeter galaxies (SMGs) and two quasars. Many of the submillimeter galaxies are quite massive, and appear to be forming more than 1000 stars/year, so it could be these are the progenitors of modern day giant elliptical galaxies. We find the [CII] line is very luminous in all systems, and exceptionally bright in three of the galaxies, where the line flux amounts to more than 0.1% of the total far-IR continuum luminosity. CO rotational line emission has been detected from two of the three exceptionally bright [CII] line systems, so that we can build a reasonably constrained model for the line emission regions. In a PDR scenario, the combination of [CII] and CO lines, together with the far-IR continuum, constrain both the strength of the ambient interstellar radiation fields and the gas density. For MIPS J142824.0+352619 we find the far-UV radiation fields to be ~ 1000 times the local interstellar radiation field, and gas densities ~ 10^4 cm^-3, very similar to the conditions in the nearby starburst galaxy M82 (Hailey-Dunsheath et al. 2009). The starburst in MIPS J142824 has similar physical conditions to that of M82, but is more than ~ 1000 times the luminosity! Combining our measure of the stellar radiation field strength with the total luminosity of the systems, we can estimate the physical size of the starbursting regions. We find the starburst in MIPS J142824 is greatly extended, occupying a 4 kpc diameter region. This is in contrast to the local, lower luminosity ULIRG galaxies, where the star forming regions are often spatially confined to regions ~ few hundred pc on a side. Something stimulates galaxy-wide starbursts in the epoch of maximum star formation for the Universe. For the two quasars in our sample, PKS0215+015 and PG1206+495, the [CII] line is relatively weak at only ~ 0.05% of the far-IR continuum luminosity.
This lowered ratio is expected, since a significant fraction of the far-IR continuum may arise from regions near the active nucleus where UV radiation fields are exceptionally strong. The [CII] to far-IR continuum ratio is inversely proportional to the strength of the ambient interstellar radiation field, since strong fields will charge grains, resulting in lessened efficiency for photo-electric heating of the gas. Higher UV fields mean less efficient gas heating, hence relatively lower [CII] cooling. The far-IR continuum luminosity is not affected by the strength of the UV fields (essentially all of the far-UV goes into heating the dust), so that the net effect is a smaller [CII] to far-IR continuum luminosity ratio.
First Detections of the CO(7-6), [CI] and CO(8-7) Line Emission from ULIRG Galaxies
Ultraluminous infrared galaxies (ULIRGs) are galaxies with luminosities in excess of 10^12 Lsolar, whose luminosity is dominated by the far-IR emission from dust. Since their discovery, the source of their prodigious luminosity has been hotly debated. Are they powered by super-starbursts (> 100 Msolar/yr) or an active galactic nucleus (AGN)? We detect very bright mid-J CO line emission from ULIRG galaxies. Strong mid-J CO line emission is only produced in starbursts, so our results support starburst-powered scenarios. Furthermore, we find a negative correlation between the luminosity of the CO(6-5) line and the far-IR continuum, in the sense that ULIRG galaxies have smaller mid-J CO line to far-IR continuum ratios than starbursters. The observed fall-off can be explained through increased far-UV field strengths and cloud densities in ULIRG galaxies. Therefore, we find that star formation is both much more compact and much more vigorous in ULIRG galaxies than in lower luminosity systems. For more information about the Caltech Submillimeter Observatory, visit the CSO Homepage.
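The angular-to-linear conversion quoted earlier (a 7" beam subtending 2.4 kpc at Arp 220's 72 Mpc) is just the small-angle approximation; a short Python sketch, not project code, makes the arithmetic explicit:

```python
import math

ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)  # 1 arcsecond in radians

def beam_extent_pc(beam_arcsec: float, distance_mpc: float) -> float:
    """Linear size subtended by an angular beam (small-angle approximation)."""
    return beam_arcsec * ARCSEC_TO_RAD * distance_mpc * 1.0e6  # parsecs

print(beam_extent_pc(7.0, 72.0))  # ~2400 pc, i.e. ~2.4 kpc, as quoted
```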
<urn:uuid:a58549cc-9565-4bbb-a305-a5b4cf240eae>
2.6875
2,954
Knowledge Article
Science & Tech.
51.9464
2,286
Galaxies, Stars, and Dust
Spiky stars and spooky shapes abound in this deep cosmic skyscape. Its well-composed field of view covers about 2 Full Moons on the sky toward the constellation Pegasus. Of course the brighter stars show diffraction spikes, the commonly seen effect of internal supports in reflecting telescopes, and lie well within our own Milky Way galaxy. The faint but pervasive clouds of interstellar dust ride above the galactic plane and dimly reflect the Milky Way's combined starlight. Known as high latitude cirrus or integrated flux nebulae, they are associated with molecular clouds. In this case, the diffuse cloud, cataloged as less than a thousand light-years distant, fills the scene. Other galaxies far beyond the Milky Way are visible through the ghostly apparitions, including the striking spiral galaxy NGC 7497 some 60 million light-years away. Seen almost edge-on near the center of the field, NGC 7497's own spiral arms and dust lanes echo the colors of the Milky Way's stars and dust.
Image credit: Ignacio de la Cueva Torregrosa
<urn:uuid:553c8415-342d-4a60-930a-2156b4472a48>
3.140625
237
Content Listing
Science & Tech.
46.169053
2,287
Divergent evolution occurs when related species develop unique traits due to different environments or selective pressures. A classic example of divergent evolution is the Galapagos finches: Darwin discovered that in different environments, the finches' beaks adapted differently. The individual Galapagos finches looked so different from one another that he was surprised when he found that they were all related. One of the common trends in evolution is something called divergent evolution, and that's when two related species diverge and wind up looking very different. They'll have different traits even though they may be pretty closely related. This happens because the two new species, or the two related species, are under the influence of different environments or different natural selective pressures. A good example of this might be something like bats and humans. Now bats and us humans are both mammals. We share a lot of common characteristics, but we also share what are called homologous structures. Our upper arm and a bat's upper limb share the same bones, even though they're used for different traits, different purposes. If we take a look here at a bat skeleton, you can actually see some of the common features that we share with them. We've got a rib cage, they've got a rib cage. The number of bones in the arm is the same, and their wing is merely a grossly elongated version of our fingers, our phalanges. And so that is an example of homologous structures demonstrating how we and bats illustrate divergent evolution.
<urn:uuid:c5fbfb9a-3494-4d3f-becd-890591878bd1>
3.84375
313
Knowledge Article
Science & Tech.
37.31182
2,288
Air-conditioned ants: The secret behind their vast underground cities... ventilation
They are known for their industrious nature. But the humble ant's sophisticated home-making skills have left some of the brightest scientific minds in awe of the tiny creatures. Researchers have discovered the vast underground colonies where up to seven million of the insects live have their very own in-built ventilation shafts. It is thought the 'air-conditioning' helps ants tend to a delicately-balanced fungal garden that feeds their young.
Hard workers: Underground ant colonies are so complex, they even have ventilation shafts
For years scientists have been at a loss as to how the industrious ants were able to keep their nests at just the right temperature to allow the fungus to grow... until now. New research has shown that the insects make specially-constructed turrets which ventilate the nests for optimum growth. According to a study published in the Journal of Insect Behaviour, the ants carefully create the turrets with highly porous walls which allow air to flow through the chambers. It was already known that ants constructed the turrets, but this study is the first to reveal how they do it, it has been reported.
Sophisticated: Scientists could not work out how the ants built nests that stayed at the right temperature... until now
A team of researchers took a colony of grass-cutting ants into the lab to test their nest-building techniques with a range of different materials. The ants were given clay, coarse sand and fine sand, with scientists regularly changing the quantity of the material and pouring water over them to simulate rain, according to the BBC. Leading the study, Dr Marcela Cosarinsky, from Buenos Aires' Argentinian Museum of Natural Science, told the BBC: 'When [the ants] finished a turret, we analysed the arrangement of the building materials [under] the microscope. The ants construct the turrets by stacking sand grains and little balls of clay that they mould with their [jaws].' When pores collapsed under water, the walls would compact - and immediately the worker ants removed the materials and re-worked the turret wall. The scientists said the research confirmed that ventilation turrets were 'built structures' as opposed to passive deposition of excavated soil, a technique used by other ant species.
<urn:uuid:0c1694c8-6d04-4dae-bc56-f75f8f08481d>
2.71875
484
Truncated
Science & Tech.
50.846248
2,289
The origin of species?
A recently discovered class of gene may help regulate embryonic development, control the differences between body tissues and even drive animal evolution
THE old saying that where there's muck, there's brass has never proved more true than in genetics. Once, and not so long ago, received wisdom was that most of the human genome—perhaps as much as 99% of it—was "junk". If this junk had a role, it was just to space out the remaining 1%, the genes in which instructions about how to make proteins are encoded, in a useful way in the cell nucleus. That, it now seems, was about as far from the truth as it is possible to be. The decade or so since the completion of the Human Genome Project has shown that lots of the junk must indeed have a function. The culmination of that demonstration was the publication, in September, of the results of the ENCODE project. This suggested that almost two-thirds of human DNA, rather than just 1% of it, is being copied into molecules of RNA, the chemical that carries protein-making instructions to the sub-cellular factories which turn those proteins out, and that as a consequence, rather than there being just 23,000 genes (namely, the bits of DNA that encode proteins), there may be millions of them. The task now is to work out what all these extra genes are up to. And a study just published in Genome Biology, by David Kelley and John Rinn of Harvard University, helps do that for one new genetic class, a type known as lincRNAs. In doing so, moreover, Dr Kelley and Dr Rinn show just how complicated the modern science of genetics has become, and hint also at how animal species split from one another.
Lincs in the chain
Molecules of lincRNA are similar to the messenger-RNA molecules which carry protein blueprints. They do not, however, encode proteins. More than 9,000 sorts are known, and most of those whose job has been tracked down are involved in the regulation of other genes, for example by attaching themselves to the DNA switches that control those genes. LincRNA is rather odd, though. It often contains members of a second class of weird genetic object. These are called transposable elements (or, colloquially, "jumping genes", because their DNA can hop from one place to another within the genome). Transposable elements come in several varieties, but one group of particular interest are known as endogenous retroviruses. These are the descendants of ancient infections that have managed to hide away in the genome and get themselves passed from generation to generation along with the rest of the genes. Dr Kelley and Dr Rinn realised that the movement within the genome of transposable elements is a sort of mutation, and wondered if it has evolutionary consequences. Their conclusion is that it does, for when they looked at the relation between such elements and lincRNA genes, they found some intriguing patterns. In the first place, lincRNAs are much more likely to contain transposable elements than protein-coding genes are. More than 83% do so, in contrast to only 6% of protein-coding genes. Second, those transposable elements are particularly likely to be endogenous retroviruses, rather than any of the other sorts of element. Third, the interlopers are usually found in the bit of the gene where the process of copying RNA from the DNA template begins, suggesting they are involved in switching genes on or off.
And fourth, lincRNAs containing one particular type of endogenous retrovirus are especially active in pluripotent stem cells, the embryonic cells that are the precursors of all other cell types. That indicates these lincRNAs have a role in the early development of the embryo. Previous work suggests lincRNAs are also involved in creating the differences between various sorts of tissue, since many lincRNA genes are active in only one or a few cell types. Given that their principal job is regulating the activities of other genes, this makes sense. Even more intriguingly, studies of lincRNA genes from species as diverse as people, fruit flies and nematode worms, have found they differ far more from one species to another than do protein-coding genes. They are, in other words, more species specific. And that suggests they may be more important than protein-coding genes in determining the differences between those species. What seems to be happening is that endogenous retroviruses are jumping around in an arbitrary way within the genome. Mostly, that will—in evolutionary terms—be either harmless or bad. Occasionally, though, a retrovirus lands in a place where it can change the regulation of a lincRNA gene in a way beneficial to the organism. Such variations are then spread by natural selection in the way that any beneficial mutation would be. But because they affect developmental pathways and tissue types, and thus a creature’s form, rather than just its biochemistry, that could encourage the formation of a new species. This is a long chain of speculation, but it looks a fruitful one. For it is still the case that, more than a century and a half after Charles Darwin published “On the Origin of Species”, biologists do not fully understand how species actually do originate. Work like this suggests one reason for this ignorance may be that they have been looking in the wrong place. For decades, they have concentrated their attention on the glittering, brassy protein-coding genes while ignoring the muck in which the answer really lies.
<urn:uuid:cb16aeed-400c-49af-94d4-b885904beaa3>
3.609375
1,171
News Article
Science & Tech.
39.952839
2,290
The University of Tennessee, Knoxville, in partnership with Oak Ridge National Laboratory (ORNL) will play a key role in finding solutions to what the National Science Foundation (NSF) has deemed the "most important, demanding and urgent global problems of our time." The two will team up to participate in NEON (National Ecological Observatory Network), the nation's first continental-scale ecological observatory, to answer questions that deal with the effects of global change: climate change, the spread of invasive species and changes in biodiversity. "The major challenges that are facing humanity are ecological," said Nate Sanders, associate professor of ecology and evolutionary biology, who is involved in the project. "These challenges include dealing with global change and providing food for an ever-increasing population that is being increasingly affected by climate change. Each of these components will be dealt with by NEON." Funded by the NSF, NEON is an organization which will receive $434 million for construction of the observatory network, which consists of 20 eco-climate core sites scattered around the U.S. Each core site will have an array of sensory equipment for monitoring climate, soils, water, biodiversity and the atmosphere. The core site in our region will be located at ORNL with future observation sites in the Great Smoky Mountains National Park and also at a site in southwest Virginia. The core site's location is closer to a university than any of the other planned core sites, creating ample research opportunities. "The best way to learn about science is to do science," said Sanders. "Students will be able to tackle these most pressing ecological problems. They will be able to hop in a van and go to the satellite site in the Smokies for field-based research and then head to the core site at ORNL to process the data and compare their results to those at other sites to see if what we are experiencing is happening elsewhere in the country." The goal of NEON is to detect and enable forecasting of ecological change at a continental scale. It is hoped that the project's science-based discovery will inform policy to preserve national resources, economic vitality, health, quality of life and national security. According to the NSF, recent assessments indicate that U.S. ecosystems will experience abrupt and unpredictable changes due to human-caused global change in the near future. "The global community of scientists is 99 percent certain climate change is happening and will happen," said Sanders. "Our goal is to tell the rest of the world the consequences of it." Using cutting-edge technology such as airborne observation, mobile and fixed data collection sites, and trained field crews, scientists will be able to calibrate, store and publish information into a cyber-information structure—a collection of linked computers. All NEON data and information products will be made available in near real-time to scientists, educators, students, decision makers and the public. Construction on the first NEON sites in Colorado and New England is slated to begin this fall. Construction in Tennessee is slated to begin soon thereafter. NEON plans to begin full operations at some of the completed sites in late 2012. Sanders has collaborated with Colleen Iversen and Pat Mulholland, staff scientists at ORNL in environmental sciences, on the project. Others from the Great Smoky Mountains National Park and area agencies, universities and colleges will also be heavily involved.
<urn:uuid:51c4af01-64c0-4913-9f05-c3a146081915>
3.28125
733
News (Org.)
Science & Tech.
32.258919
2,291
Andromeda Galaxy, cataloged as M31 and NGC 224, the closest large galaxy to the Milky Way and the only one visible to the naked eye in the Northern Hemisphere. It is also known as the Great Nebula in Andromeda. It is 2.2 million light-years away and is part of the Local Group of several galaxies that includes the Milky Way, which it largely resembles in shape and composition, although the Milky Way is a barred spiral galaxy and Andromeda is a spiral galaxy. It has a diameter of about 165,000 light-years and contains at least 200 billion stars. Its two brightest companion galaxies are M32 and M110. The light arriving at earth from the Andromeda Galaxy is shifted toward the blue end of the spectrum, whereas the light from most other galaxies exhibits red shift.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
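As a rough illustration of what that blueshift means quantitatively (the entry gives no approach speed; ~300 km/s is a commonly quoted heliocentric value for M31 and is taken here purely as an assumption), the non-relativistic Doppler formula gives the fractional wavelength shift:

```python
c = 299_792.458   # speed of light, km/s
v = -300.0        # assumed approach velocity, km/s (negative = toward us)

z = v / c         # non-relativistic Doppler shift; z < 0 means blueshift
h_alpha_rest_nm = 656.28   # H-alpha rest wavelength, for concreteness
print(f"z = {z:.5f}; H-alpha observed near {h_alpha_rest_nm * (1 + z):.2f} nm")
```

The shift is only about a tenth of a percent, far too small to change the galaxy's visible color, but easily measured spectroscopically.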
<urn:uuid:8a5c0509-b46a-4cad-a8c5-3f71fe47017a>
3.609375
203
Knowledge Article
Science & Tech.
49.286571
2,292
New Mexico's Gila National Forest is a good natural laboratory for studying the effects of wildfire.
by Neil LaRubbio, Nov 14, 2012
Controversial new studies question the conventional wisdom on Western ponderosa forests and the severity of their historic wildfires.
by Emily Guerin, Sep 26, 2012
The backstory to Emily Guerin's report on the scientific debate over how "normal" severe fire is, and a travelogue from the Gila Wilderness in the wake of this year's massive blaze.
by Cally Carswell, Emily Guerin, Neil LaRubbio, Sep 25, 2012
It's hard for journalists to talk about climate change, but they need to keep telling the story, especially when writing about natural disasters.
by Allen Best, Jul 25, 2012
A New Mexican watches Whitewater-Baldy fire burn the Gila National Forest, and even as it changes a place she loves, her ecologist self cheers it on.
by Martha Schumann Cooper, Jun 14, 2012
President Bush says the Healthy Forests Restoration Act and Initiative were needed to fight wildfire, but several years into the new rules, critics question whether the changes they brought were helpful or even necessary.
by Kathie Durbin, Apr 17, 2006
This season's wildfires are caused by three things: climate change-induced drought, bureaucratic blindness and old-fashioned human folly.
by William deBuys, Jun 30, 2011
The aftermath of Boulder's destructive Fourmile Canyon fire.
by Cally Carswell, Sep 26, 2010
Pepper Trail, a wildlife biologist in Oregon, says that this is not the time to log our way out of wildfire threats.
by Pepper Trail, Aug 11, 2003
Public officials – and even homeowners – are beginning to accept the inevitability of wildfires in the Golden State.
by Peter Friederici, Jun 05, 2009
<urn:uuid:aedd7621-9cad-4728-a9b6-827020b11f21>
2.71875
409
Content Listing
Science & Tech.
31.0575
2,293
1. What is ASP.NET AJAX?
ASP.NET AJAX, mostly called AJAX, is a set of extensions of ASP.NET. It is developed by Microsoft to implement AJAX functionalities in Web applications. ASP.NET AJAX provides a set of components that enable developers to build applications that can update only a specified portion of data without refreshing the entire page. ASP.NET AJAX works with the AJAX Library, which uses object-oriented programming (OOP) to develop rich Web applications that communicate with the server using asynchronous postback.
2. What is the difference between synchronous postback and asynchronous postback?
The difference between synchronous and asynchronous postback is as follows:
3. What technologies are being used in AJAX?
AJAX uses four technologies, which are as follows:
4. Why do we use the XMLHttpRequest object in AJAX?
5. How can we get the state of the requested process?
The XMLHttpRequest object gets the current state of the request operation by using the readyState property. This property checks the state of the object to determine if any action should be taken. The readyState property uses numeric values to represent the state.
6. What are the different controls of ASP.NET AJAX?
ASP.NET AJAX includes the following controls:
<urn:uuid:e5aa2be3-f5c5-43c6-87a4-a92df97818de>
2.953125
279
Q&A Forum
Software Dev.
48.747624
2,294
Since 2011, University of Nevada, Reno biologists are consistently finding large goldfish inhabiting Lake Tahoe, a species introduction biologists will investigate in May 2013. (Photo: courtesy of University of Nevada, Reno) Emerson Marcus, Reno Gazette-Journal RENO - Biologists are worried that Lake Tahoe's pristine blue water may be affected by a "giant" visitor. Goldfish have inhabited the water of the Tahoe Keys since the 1990s, said Sudeep Chandra, a biologist at the University of Nevada, Reno. But it wasn't until 2011 that biologists found a 14.2 inch, 3.4 pound goldfish in the lake. More "giant" goldfish have been found since, he said. It is not entirely known how the goldfish are being introduced to Lake Tahoe. Chandra said he thinks people who have goldfish as pets are disposing of them in the lake. According to the U.S. Forest Service's Lake Tahoe Basin Management Unit, warm-water fish, which include goldfish, have been seen in the lake over the last decade. The invasion of these warm species can be detrimental to the ecology within the lake. Goldfish introduction is common in the Midwest where many people are populating the Great Lakes through aquarium trade and water gardens, Chandra said. "What we forget is this introduction can have a large impact in the region," Chandra said. Chandra said the waste from the goldfish creates a certain near-shore algae that can reduce the clarity of the lake. The introduction of goldfish also creates more competition for other species, he said. According to the Lake Tahoe Basin Management Unit site, goldfish "strip waters of oxygen-producing plants which increases water temperatures and destroys habitat for native juvenile fish." Lake Tahoe is not the only area affected by non-native species. The Florida Everglades has pythons and the Mississippi River basin has been affected by Asian carp. Biologists at the University of Nevada, Reno, will conduct research in May to understand the extent of the goldfish population in Tahoe, the university said Thursday.
<urn:uuid:1583662a-a197-49a7-b489-8102f42f9ee2>
3.109375
441
News Article
Science & Tech.
40.131095
2,295
The Gather Procedure takes an input vector that is distributed across all the processors and gathers it into another distributed vector or array according to an indirect index vector or array. No collisions are possible, since this call is effectively pulling values out of a distributed variable, and there is a different location for each pulled value. This is the opposite of the Scatter procedure.
Calling interface:
  call Gather (Output, Input, Index, Trace)
Input arguments:
- Index: An optional integer vector or array of indirect references to positions in the Input vector. This must be included on the first call to this procedure with a given data structure, but may be omitted on subsequent calls if the Trace variable is present. [Optional]
- Input: A real, integer or logical vector that is distributed across all the processors.
- Trace: An optional structure that stores the setup from a previous Gather/Scatter call using the same Index variable and Input vector length. If Trace is present and uninitialized, it is set by this procedure. If Trace is present, it is used regardless of whether Index is present.
Output arguments:
- Output: The gathered version of the Input vector, distributed across the processors.
- Trace: If present, Trace is set to the setup information for this Gather/Scatter. [Optional]
The Gather code listing contains additional documentation.
Michael L. Hall
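As a rough serial model of these semantics (the actual routine operates on vectors distributed across processors, and the Trace caching of communication setup has no analogue here), the gather is just indexed pulling, assuming the Output(i) = Input(Index(i)) convention implied above:

```python
import numpy as np

def gather(inp: np.ndarray, index: np.ndarray) -> np.ndarray:
    """output[i] = inp[index[i]]: values are pulled, so each output
    position is written exactly once and no collisions can occur,
    even when the same input position is referenced repeatedly."""
    return inp[index]

inp = np.array([10.0, 20.0, 30.0, 40.0])
index = np.array([3, 0, 0, 2])      # duplicate references are fine
print(gather(inp, index))           # [40. 10. 10. 30.]
```

A Scatter, by contrast, pushes values into Output[Index[i]], which is why that direction, and not this one, has to worry about collisions.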
<urn:uuid:34b94f2c-3aee-4c6c-8ddf-632c954bbfdb>
3.09375
283
Documentation
Software Dev.
34.478718
2,296
XQuery: An XML Query Language
Resources & References
XQuery 1.0: An XML Query Language
XML is a versatile markup language, capable of labeling the information content of diverse data sources including structured and semi-structured documents, relational databases, and object repositories. A query language that uses the structure of XML intelligently can express queries across all these kinds of data, whether physically stored in XML or viewed as XML via middleware. This specification describes a query language called XQuery, which is designed to be broadly applicable across many types of XML data sources.
The XML Query 1.0 Requirements states that The XML Query Language MAY have more than one syntax binding. One query language syntax MUST be convenient for humans to read and write. One query language syntax MUST be expressed in XML in a way that reflects the underlying structure of the query. XQueryX is an XML representation of an XQuery. It was created by mapping the productions of the XQuery grammar into XML productions. The result is not particularly convenient for humans to read and write, but it is easy for programs to parse, and because XQueryX is represented in XML, standard XML tools can be used to create, interpret, or modify queries.
Note: Because the two syntaxes are merely different grammars that express the same query semantics, they share all aspects of an XQuery processing system except for the component that recognizes and translates the source representation of a query (that is, the parser). The aspects that are shared include both the static context and the dynamic context that are defined in XQuery 1.0: An XML Query Language.
There are several environments in which XQueryX may be useful:
- Parser Reuse. In heterogeneous environments, a variety of systems may be used to execute a query. One parser can generate XQueryX for all of these systems.
- Queries on Queries. Because XQueryX is represented in XML, queries can be queried and can be transformed into new queries. For instance, a query can be performed against a set of XQueryX queries to determine which queries use FLWOR expressions to range over a set of invoices.
- Generating Queries. In some XML-oriented programming environments, it may be more convenient to build a query in its XQueryX representation than in the corresponding XQuery representation, since ordinary XML tools can be used.
- Embedding Queries in XML. XQueryX can be embedded directly in an XML document.
The most recent versions of the XQueryX XML Schema and the XQueryX XSLT stylesheet are available at http://www.w3.org/2005/XQueryX/xqueryx.xsd and http://www.w3.org/2005/XQueryX/xqueryx.xsl, respectively.
This document defines the W3C XQuery 1.0 and XPath 2.0 Data Model (XDM), which is the data model of XPath 2.0, XSLT 2.0, and XQuery, and any other specifications that reference it. This data model is based on the XPath 1.0 data model and earlier work on an XML Query Data Model. This document is the result of joint work by the XSL Working Group and the XML Query Working Group.
This document defines constructor functions, operators and functions on the datatypes defined in XML Schema Part 2: Datatypes Second Edition and the datatypes defined in XQuery 1.0 and XPath 2.0 Data Model. It also discusses functions and operators on nodes and node sequences as defined in the XQuery 1.0 and XPath 2.0 Data Model.
These functions and operators are defined for use in XML Path Language (XPath) 2.0, XQuery 1.0: An XML Query Language and XSL Transformations (XSLT) Version 2.0 and other related XML standards. The signatures and summaries of functions defined in this document are available at: http://www.w3.org/2005/xpath-functions. This document defines the syntax and semantics of an extension to XQuery 1.0 called the XQuery Update Facility 1.0. This language extension is designed to meet the requirements for updating instances of the XQuery/XPath Data Model (XDM), as defined in XQuery Update Facility Requirements. The XQuery Update Facility 1.0 provides facilities to perform any or all of the following operations on an XDM instance: - Insertion of a node. - Deletion of a node. - Modification of a node by changing some of its properties while preserving its node identity. - Creation of a modified copy of a node with a new node identity. Additionally, this document defines an XML syntax for the XQuery Update Facility 1.0. The most recent versions of the two XQueryX XML Schemas and the XQueryX XSLT stylesheet for the XQuery Update Facility 1.0 are available at http://www.w3.org/2007/xquery-update-10/xquery-update-10-xqueryx.xsd, http://www.w3.org/2007/xquery-update-10/xquery-update-10-xqueryx-redef.xsd, and http://www.w3.org/2007/xquery-update-10/xquery-update-10-xqueryx.xsl, respectively. This document defines the formal semantics of XQuery 1.0 and XPath 2.0. The present document is part of a set of documents that together define the XQuery 1.0 and XPath 2.0 languages: - XQuery 1.0: An XML Query Language introduces the XQuery 1.0 language, defines its capabilities from a user-centric view, and defines the language syntax. - XML Path Language (XPath) 2.0 introduces the XPath 2.0 language, defines its capabilities from a user-centric view, and defines the language syntax. - Functions and Operators lists the functions and operators defined for the XPath/XQuery language and specifies the required types of their parameters and return value. - Data Model formally specifies the data model used by XPath/XQuery to represent the content of XML documents. The XPath/XQuery language is formally defined by operations on this data model. - Data Model Serialization specifies how XPath/XQuery data model values are serialized into XML. The scope and goals for the XPath/XQuery language are discussed in the charter of the W3C XSL/XML Query Working Group and in the XPath/XQuery requirements XML Query 1.0 Requirements. This document defines the semantics of XPath/XQuery by giving a precise formal meaning to each of the expressions of the XPath/XQuery specification in terms of the XPath/XQuery data model. This document assumes that the reader is already familiar with the XPath/XQuery language. This document defines the formal semantics for XPath 2.0 only when the XPath 1.0 backward compatibility rules are not in effect. Two important design aspects of XPath/XQuery are that it is functional and that it is typed. These two aspects play an important role in the XPath/XQuery Formal Semantics. XPath/XQuery is a functional language. XPath/XQuery is built from expressions, rather than statements. Every construct in the language (except for the XQuery query prolog) is an expression and expressions can be composed arbitrarily. 
The result of one expression can be used as the input to any other expression, as long as the type of the result of the former expression is compatible with the input type of the latter expression with which it is composed. Another characteristic of a functional language is that variables are always passed by value, and a variable's value cannot be modified through side effects. XPath/XQuery is a typed language. Types can be imported from one or more XML Schemas that describe the input documents and the output document, and the XPath/XQuery language can then perform operations based on these types. In addition, XPath/XQuery supports static type analysis. Static type analysis infers the output type of an expression based on the type of its input expressions. In addition to inferring the type of an expression for the user, static typing allows early detection of type errors, and can be used as the basis for certain classes of optimization. The XPath/XQuery type system captures most of the features of Schema Part 1, including global and local element and attribute declarations, complex and simple type definitions, named and anonymous types, derivation by restriction, extension, list and union, substitution groups, and wildcard types. It does not model uniqueness constraints and facet constraints on simple types. This document is organized as follows. 2 Preliminaries introduces the notations used to define the XPath/XQuery Formal Semantics. These include the formal notations for values in the XPath/XQuery data model and for types in XML Schema. The next three sections: 3 Basics, 4 Expressions, and 5 Modules and Prologs have the same structure as the corresponding sections in the XQuery 1.0: An XML Query Language and XML Path Language (XPath) 2.0 documents. This allows the reader to quickly find the formal definition of a particular language construct. 3 Basics defines the semantics for basic XPath/XQuery concepts, and 4 Expressions defines the dynamic and static semantics of each XPath/XQuery expression. 5 Modules and Prologs defines the semantics of the XPath/XQuery prolog. 7 Additional Semantics of Functions defines the static semantics of several functions in Functions and Operators and gives the dynamic and static semantics of several supporting functions used in this document. The remaining sections, 8 Auxiliary Judgments and D Importing Schemas, contain material that supports the formal semantics of XPath/XQuery. 8 Auxiliary Judgments defines formal judgments that relate data model values to types, that relate types to types, and that support the formal definition of validation. These judgments are used in the definition of expressions in 4 Expressions. Lastly, D Importing Schemas, specifies how XML Schema documents are imported into the XPath/XQuery type system and relates XML Schema types to the XPath/XQuery type system. This specification defines an extension to XQuery 1.0 and XQuery Update Facility. Expressions can be evaluated in a specific order, with later expressions seeing the effects of the expressions that came before them. This specification introduces the concept of a block with local variable declarations, as well as several new kinds of expressions, including assignment, while, continue, break, and exit expressions. 
This document defines serialization of the W3C XQuery 1.0 and XPath 2.0 Data Model (XDM), which is the data model of at least XML Path Language (XPath) 2.0, XSL Transformations (XSLT) Version 2.0, and XQuery 1.0: An XML Query Language, and any other specifications that reference it. Serialization is the process of converting an instance of the XQuery 1.0 and XPath 2.0 Data Model into a sequence of octets. Serialization is well-defined for most data model instances. XML Schema Part 2: Datatypes defines a number of primitive and derived datatypes, collectively known as built-in datatypes. This document defines operations on these datatypes as well as the two datatypes defined in 1.3 xdt:anyAtomicType and xdt:untypedAtomic and the two totally ordered subtypes of xs:duration defined in 9.2 Two Totally Ordered Subtypes of Duration, for use in XQuery, XPath, XSLT and related XML standards. This document also discusses operators and functions on nodes and node sequences as defined in the XQuery 1.0 and XPath 2.0 Data Model for use in XQuery, XPath, XSLT and other related XML standards. This document describes possible strategies for tokenizing the XML Path Language (XPath) 2.0 and XQuery 1.0: An XML Query Language languages, and is provided as a helpful guide to those who are designing an implementation for these languages, and as background material for the normative EBNF found in the language specifications. In the future this document may be expanded to cover more general parsing strategies.
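For a flavor of the path-based, composable querying these specifications formalize, here is a small sketch using Python's lxml. Note that lxml implements XPath 1.0 only, not the XPath 2.0/XQuery semantics defined above, so treat it as an illustration of the general idea rather than of these languages:

```python
from lxml import etree

doc = etree.fromstring(
    b"""<invoices>
          <invoice id="1"><total>120</total></invoice>
          <invoice id="2"><total>80</total></invoice>
        </invoices>"""
)

# Select the ids of invoices whose total exceeds 100. The predicate
# composes an arithmetic comparison inside a path expression, the kind
# of expression composability the XQuery/XPath documents make precise.
print(doc.xpath("//invoice[number(total) > 100]/@id"))   # ['1']
```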
<urn:uuid:dad5d85d-f00c-4d02-9a2c-a365110a2175>
2.734375
2,619
Documentation
Software Dev.
50.878589
2,297
Discussion about math, puzzles, games and fun.
Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °
Topic review (newest first)
hypotenuse is 20 inches, one of the legs is 16 inches, what is the measure of the other leg? i got 12 but i dont think its right, is there anyone who can help me like asap
Take the square root of both sides, so AB = 1.4422205101 cm. Do you know how to round that to the nearest hundredth?
I don't know if I can explain my question properly but if someone understands please help!! Ok so the directions say: Find the length of AB to the nearest hundredth centimeter. All measurements are in centimeters, but figures may be drawn to different scales. Explain your reasoning.
okay so i have a triangle with a side of 230m and 150m, what is the missing length? help me, thanks
[Incorrect, and rude, comment removed by moderator]
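Both answers in the thread check out numerically; a few lines of Python (added here for verification, not from the thread) confirm them:

```python
import math

# Right triangle: hypotenuse 20 in, one leg 16 in -- the other leg.
print(math.sqrt(20**2 - 16**2))   # 12.0, so "i got 12" is correct

# Working backwards from AB = 1.4422205101, the squared value must have
# been about 2.08; rounding the square root to the nearest hundredth:
print(round(math.sqrt(2.08), 2))  # 1.44
```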
<urn:uuid:54a5acf3-13c3-4238-a873-d55ffaea6881>
3.015625
252
Comment Section
Science & Tech.
76.086938
2,298
I was thinking about the 90° angle between the x, y, and z-axis. It is all so perfect. So I said you need two independent real numbers (coordinates) to describe a point in a plane. Then I thought about putting the x and y axis at a 10° angle instead of 90°, and noticed you could still get to all the points on the plane. Next I attempted to find a way to describe all the points on a plane with only 1 number. Is it possible? So at first I played around with the idea of a spiral that was so close together at each pass around that it would take infinite times to make up some area, but I soon decided this idea was dumb and incomprehensible. Then I came up with a plausible idea, if you don't mind a jumbled mess in place of two nice x and y axes. So this is only a first try at this and I hope to come up with refined examples that use negative numbers as well as decimals later, but for now I am only using whole numbers. I started at (-1,1) and called this 1. Then (0,1) would be 2. Here is a chart of (x,y) against the 1-D number. At this point we expand the grid larger by a factor of ten and make the intervals smaller by a factor of ten. So we continue. Some duplicate points will exist as we go over the previous ones. This continues to the right until we hit (10,10) and then the rows continue down to form a square grid. Then we will be at approximately the number 201 * 201 + 9. Next we again expand the grid ten times larger and ten times more intricate. etc. etc. You get the picture. Does anybody think it is interesting?? Do you think by going to infinity, you will pass through both a large surface area and work toward getting more intricate as well? Obviously it is a jumbled mess, but if you ignore the mess, it is pretty cool, hence in the "This is Cool" category.
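Here is a sketch of the first stage of the idea, before any refinement of the spacing, assuming the reading-order walk the post describes (the helper name is mine): each lattice point of a growing square grid gets one counting number. Whether the limit of such stages describes all the points on the plane is exactly the question the post raises; an enumeration like this can reach every point with finite-decimal coordinates, but a counting process can never exhaust all real points, although tricks like interleaving decimal digits do encode any plane point in a single real number.

```python
def enumerate_grid(side: int):
    """Yield (n, (x, y)) in reading order, starting near (-1, 1) as in
    the post: left to right along each row, then down to the next row."""
    n = 1
    for row in range(side):
        for col in range(side):
            yield n, (col - 1, 1 - row)
            n += 1

for n, point in enumerate_grid(3):
    print(n, point)   # 1 (-1, 1), 2 (0, 1), 3 (1, 1), 4 (-1, 0), ...
```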
<urn:uuid:d15ae821-475c-4685-ae49-06475635037e>
3.046875
451
Comment Section
Science & Tech.
75.749203
2,299