does the resistance double as you double the length of a piece of wire? Is this a trick question? Seems like it might be... Yes, the resistance is proportional to the length. Therefore if the length is doubled, the resistance is doubled. Interestingly, the relationship of resistance to diameter is not so simple, and the frequency of the current affects the resistance (actually, the impedance... have you met this term?). Yes, the resistance of a wire is directly proportional to the length and inversely proportional to the area. Hence doubling the length of a wire increases the resistance by a factor of two, while doubling the area would cut the resistance in half. The proportionality constant relating resistance to the length and area of a wire is the resistivity of the wire: Resistance = Resistivity x Length / Area. The resistance of a wire, under nearly constant temperature conditions, is directly proportional to length and inversely proportional to the cross-sectional area: R = (rho)L/A, where R is the resistance; rho is a characteristic of the material, and is a measured quantity (copper has a different rho than, say, carbon); L is the length of the wire; and A is the effective cross-sectional area of the wire. Most introductory college physics texts have more information about this. ---Nathan A. Unterman Update: June 2012
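A short numeric sketch of the formula (the resistivity value for copper is a standard room-temperature figure; the variable names are ours, not from the answer above):

```python
# Sketch of R = rho * L / A for a copper wire.
RHO_COPPER = 1.68e-8  # resistivity of copper, ohm-metres, ~20 degrees C

def resistance(length_m, area_m2, rho=RHO_COPPER):
    """Resistance of a uniform wire: directly proportional to length,
    inversely proportional to cross-sectional area."""
    return rho * length_m / area_m2

r1 = resistance(1.0, 1e-6)   # 1 m of wire with a 1 mm^2 cross-section
r2 = resistance(2.0, 1e-6)   # doubling the length...
print(r2 / r1)               # ...doubles the resistance: 2.0
```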
Adaptive optics is a technology that can remove the blurring effects of turbulence in the Earth’s atmosphere, so that telescopes on the ground can “see” as clearly as if they were in space. I will describe the basic principles of adaptive optics, and illustrate why lasers are needed to increase the fraction of the sky where one can apply adaptive optics corrections. As one example of astronomical observations that have benefited strongly from adaptive optics correction, I will describe our recent detections of dual supermassive black holes in colliding galaxies. I will conclude with a look forward to the adaptive optics systems of the future.
Twisting Space & Time Credit: Joe Bergeron of Sky & Telescope magazine An artist's impression of space and time twisting around a spinning black hole. Astronomers might have already observed the effects of gravitomagnetism. Some black holes and neutron stars shoot bright jets of matter into space at nearly light speed. These jets come in pairs, oppositely directed, as if they emerge from the poles of a rotating object. Theorists think the jets could be powered and collimated by gravitomagnetism. In addition, black holes are surrounded by disks of infalling matter called "accretion disks," so hot they glow in the x-ray region of the electromagnetic spectrum. There's mounting evidence, gathered by X-ray telescopes such as NASA's Chandra X-ray Observatory, that these disks wobble, much like the gyroscopes on Gravity Probe B are expected to do. Gravitomagnetism again? Perhaps. Here in our solar system, gravitomagnetism is, at best, feeble. This raises the question: what do we do with gravitomagnetism once we've found it? The same question was posed, many times, in the 19th century when Maxwell, Faraday and others were exploring electromagnetism. What use could it be? Today we're surrounded by the benefits of their research. Light bulbs. Computers. Washing machines. The Internet. The list goes on and on. What will gravitomagnetism be good for? Is it just "another milestone on the path of our natural quest to understand nature?" wonders Will. Or something unimaginably practical? Time will tell. more on gravitomagnetism by Dr Tony Phillips @ first science For more information visit A Review of Gravity Probe B - from the National Research Council Any sufficiently advanced technology is indistinguishable from magic. Arthur C. Clarke
A very useful concept in our study of root systems will be that of a Weyl chamber. As we showed at the beginning of last time, the hyperplanes $P_\alpha$ for $\alpha\in\Phi$ cannot fill up all of $E$. What’s left over they chop into a bunch of connected components, which we call Weyl chambers. Thus every regular vector $\gamma$ belongs to exactly one of these Weyl chambers, denoted $\mathfrak{C}(\gamma)$. Saying that two vectors share a Weyl chamber — that $\mathfrak{C}(\gamma)=\mathfrak{C}(\gamma')$ — tells us that $\gamma$ and $\gamma'$ lie on the same side of each and every hyperplane $P_\alpha$ for $\alpha\in\Phi$. That is, $\langle\gamma,\alpha\rangle$ and $\langle\gamma',\alpha\rangle$ are either both positive or both negative. So this means that $\Phi^+(\gamma)=\Phi^+(\gamma')$, and thus the induced bases are equal: $\Delta(\gamma)=\Delta(\gamma')$. We see, then, that we have a natural bijection between the Weyl chambers of a root system and the bases for $\Phi$. We write $\mathfrak{C}(\Delta)$ for $\mathfrak{C}(\gamma)$ and call this the fundamental Weyl chamber relative to $\Delta$. Geometrically, $\mathfrak{C}(\Delta)$ is the open convex set consisting of the intersection of all the half-spaces $\{\gamma\in E\mid\langle\gamma,\alpha\rangle>0\}$ for $\alpha\in\Delta$. The Weyl group of $\Phi$ shuffles Weyl chambers around. Specifically, if $\sigma\in\mathcal{W}$ and $\gamma$ is regular, then $\sigma(\mathfrak{C}(\gamma))=\mathfrak{C}(\sigma(\gamma))$. On the other hand, the Weyl group also sends bases of $\Phi$ to each other. If $\Delta$ is a base, then $\sigma(\Delta)$ is another base. Indeed, since $\sigma$ is invertible $\sigma(\Delta)$ will still be a basis for $E$. Further, for any $\beta\in\Phi$ we can write $\beta=\sigma(\alpha)$ for some $\alpha\in\Phi$, and then use the base property of $\Delta$ to write $\alpha$ as a nonnegative or nonpositive integral combination of $\Delta$. Hitting everything with $\sigma$ makes $\beta$ a nonnegative or nonpositive integral combination of $\sigma(\Delta)$, and so this is indeed a base. And, just as we’d hope, these two actions of the Weyl group are equivalent by the bijection above. We have $\langle\sigma(\gamma),\sigma(\alpha)\rangle=\langle\gamma,\alpha\rangle$ because $\sigma$ preserves the inner product, and so $\Phi^+(\sigma(\gamma))=\sigma(\Phi^+(\gamma))$. Thus we write $\Delta=\Delta(\gamma)$ for some regular $\gamma$ and find that $\sigma(\Delta)=\sigma(\Delta(\gamma))=\Delta(\sigma(\gamma))$.
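As a quick numerical illustration (not part of the original argument; the A2 root system and its three positive roots are assumed here), one can count the Weyl chambers by sampling regular vectors and recording which side of each hyperplane they fall on:

```python
import math, random

# Positive roots of the A2 root system; the three hyperplanes
# perpendicular to them chop the plane into Weyl chambers.
pos_roots = [(1.0, 0.0), (-0.5, math.sqrt(3)/2), (0.5, math.sqrt(3)/2)]

def chamber(v):
    # Sign of <v, alpha> for each positive root: the sign pattern
    # identifies which chamber the regular vector v lies in.
    return tuple(1 if v[0]*a + v[1]*b > 0 else -1 for a, b in pos_roots)

random.seed(0)
patterns = {chamber((random.uniform(-1, 1), random.uniform(-1, 1)))
            for _ in range(10000)}
print(len(patterns))  # 6 chambers, matching the 6 bases and |W(A2)| = 6
```

Only 6 of the 8 conceivable sign patterns occur, because a vector positive on two positive roots is automatically positive on their sum.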
Articles, documents and multimedia from ABC Science Wednesday, 15 May 2013 StarStuff Podcast Record-setting gamma-ray burst shocks astronomers. Also; study suggests water on Earth and Moon came from the same source, and Earth's inner-core out of sync with the rest of the planet. Wednesday, 8 May 2013 StarStuff Podcast The Earth is unlikely to be sizzled by a super flare despite the Sun having reached solar max. Also; Apollo rocks reveal Moon's magnetic dynamo lasted longer than thought, and does antimatter fall down or up? Wednesday, 1 May 2013 StarStuff Podcast A bizarre stellar binary system puts Einstein's theory of general relativity to its most extreme test yet. Also; the weird subatomic particles of antimatter research, and how meteoroids make Saturn's ring clouds. Wednesday, 24 April 2013 StarStuff Podcast Kepler uncovers the most Earth-like planet yet discovered. Also; distant blazar mystifies astronomers, and physicists claim they have first concrete hint of the particle behind dark matter. Wednesday, 17 April 2013 StarStuff Podcast New type of gamma-ray burst may have been caused by death of supergiant star. Also; Saturn's rings raining down onto the planet's atmosphere, and is Australia's first space policy too little too late? Wednesday, 10 April 2013 StarStuff Podcast Cosmic ray anti-matter detection could be signs of dark matter. Also; astronomers amazed by the birth of a new star next to a black hole, and the discovery of a new type of supernova. Thursday, 4 April 2013 StarStuff Podcast Questions continue to baffle scientists investigating the biggest nearby supernova explosion of modern times. Also; astrophysicists watch black hole rip apart a planet, and secret to massive stars revealed. Wednesday, 27 March 2013 StarStuff Podcast The universe is a hundred million years older than thought. Also; how planetary migration caused a massive solar system meteor storm; and has Voyager left the solar system?
Wednesday, 20 March 2013 StarStuff Podcast Rock samples show Mars was capable of sustaining life as we know it. Also; scientists discover third closest stellar system to the sun; and more certainty, but less magic, about the 'Higgs' particle. Wednesday, 13 March 2013 StarStuff Podcast New evidence the Milky Way galaxy is cannibalising a smaller dwarf galaxy. Also; is a comet about to crash into Mars? And how comets could have seeded Earth with the precursors for life. Wednesday, 6 March 2013 StarStuff Podcast Monster flows from the Milky Way could hold a clue to solving the mystery of dark matter. Also; scientists spot the birth of a new planet; and two comets streak across southern skies. Wednesday, 27 February 2013 StarStuff Podcast Signs of Saturn's birthplace raise new questions about planetary migration theory. Also; smallest planet yet detected, and Higgs boson calculations predict the end of the universe. Wednesday, 20 February 2013 StarStuff Podcast Over 1,000 people injured and millions in damage as a meteor airbursts over Russia. Also; could an asteroid impact site in outback Australia be linked to the late Devonian mass extinction event? And discovery of a source of cosmic rays. Wednesday, 13 February 2013 StarStuff Podcast Get ready for an asteroid half the size of a football field to skim past Earth. Also; asteroid collision and dinosaur extinction much closer than previously thought; and have scientists discovered the last warning signs of a supernova? Wednesday, 6 February 2013 StarStuff Podcast Time to rewrite the textbooks on planetary evolution. Also; new CSIRO observations support Big Bang theory, and South Korea launches its first rocket into space.
In order to deliver production-ready code at the end of every iteration, the iterative quality assurance approach relies on robust definitions of acceptance criteria for each story card and automated testing to unearth and fix bugs early in the process. Each story card should describe both unit tests and the specific portion of the overall test plan to which it pertains. Adequate testing coverage is important. It is useful to maintain a re-usable test plan throughout the project development life cycle, regardless of the project management methodology employed. With ITS Iterative, story cards are the natural input to the test plan. In other words, unit tests of specific objects or features can and should be included in the story card description. These smaller unit tests can also be used as inputs to the overarching reusable test plan. The test plan will thus reflect a change whenever a new feature or piece of functionality is introduced into or removed from the codebase. A story card needs to detail, as much as possible, the specific portion of the test plan to which it pertains. It is not necessary, however, to refer back to individual story cards from within the test plan. The content of QA tools and techniques is essentially the same for both ITS Iterative and waterfall development environments. The difference, as noted above, is at what point in the project life cycle they are used. It is still useful, however, to follow the ITS Software Development Life Cycle (SDLC) methodology when developing test scripts or creating test data in either environment.
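For illustration (the feature, test-case names, and acceptance criterion below are invented, not from any actual ITS story card), an acceptance criterion written on a story card translates naturally into a unit test that can later feed the reusable test plan:

```python
import unittest

def normalize_email(email: str) -> str:
    # Hypothetical feature under test, from an invented story card:
    # "a registered user's email address is stored trimmed and in lowercase".
    return email.strip().lower()

class TestEmailStoryCard(unittest.TestCase):
    """Unit tests written on the story card itself; they roll up
    into the project's overarching reusable test plan."""

    def test_lowercases_address(self):
        self.assertEqual(normalize_email("Alice@Example.COM"),
                         "alice@example.com")

    def test_strips_whitespace(self):
        self.assertEqual(normalize_email("  bob@example.com "),
                         "bob@example.com")

# Run the story card's tests programmatically
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestEmailStoryCard)
res = unittest.TextTestRunner(verbosity=0).run(suite)
print(res.wasSuccessful())  # True
```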
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ ° You are not logged in. Post a reply Topic review (newest first) Scientific notation, also known as standard form or exponential notation, is a way of writing numbers that are too large or too small to be conveniently written in ordinary decimal notation. It was developed to represent such numbers easily; sometimes, especially when you use a calculator, you can end up with values across a very wide range. Nope, no trips planned. So you had logs and logarithms. An extra week, sounds great. It was great, until I was literally handing my passport over to the baggage handler when a woman leant over and whispered in her ear. She then said that all flights from Corsica to UK were cancelled for at least a week. There was a huge panic and a run on the airline desk. Luckily we have a house over there, so we just went home and had an extended holiday. How was Corsica! Cool, happy now s is -2 coz its meter per second squared (unit of acceleration). As you can see, it appears to be a ratio but is rather a "Rate of Change w.r.t Time"! Hi Bobby and ZHero How is it that s is -2 though kg is -1. Does it just denote that m/s is a ratio and kg is a definite unit?
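As a quick aside (not from the thread), Python's "e" format string shows the idea of scientific notation directly:

```python
# Scientific notation writes a number as (mantissa) x 10^(exponent);
# the "e" presentation type produces exactly that form.
avogadro = 602214076000000000000000
print(f"{avogadro:.4e}")       # 6.0221e+23
print(f"{0.00000000016:.2e}")  # 1.60e-10
```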
A Science Strategy for the Exploration of Europa drainage channels within the glacier. The glacier itself may be several hundreds of meters thick; its structure may be further complicated by the presence of medial moraines, composed of rocky debris that snake across many valley glaciers. The moraine patterns are good indicators of past dynamical instabilities. Extensive geophysical studies have been conducted on temperate glaciers in the Pacific Northwest such as the Blue and Columbia Glaciers. The Bering and Malaspina Glaciers, located along the Gulf of Alaska, are examples of surging glaciers with highly crevassed surfaces, complex subglacial hydrology, and surface and internal moraines. Each has been intensively studied with surface, airborne, and spaceborne remote-sensing techniques. Polar Ice Sheets Greenland and Antarctica are blanketed by the last of Earth's great ice sheets. Continental in size, the ice sheets are characterized by complex dynamics driven in part by external climate forcing and by spatial and temporal variations at the glacier bed and at internal boundaries. The dynamical processes manifest themselves on the ice sheet surface by the presence of exotic structures such as ice streams. These are rivers of ice within the ice sheet, hundreds of kilometers long, that discharge ice from the interior ice sheet toward the floating ice shelves and eventually to sea. The margins of ice streams are heavily crevassed and are strong targets for microwave radar, and they can effectively attenuate signals from high-frequency radar (Figure 4.1). Antarctic ice streams ride over a bed lubricated by subglacial water.
The nature of the bed enables the ice streams to move at speeds of several hundreds of meters per year, whereas nearby ice frozen to the bed may move at speeds of only tens of meters per year. Greenland ice streams (such as the Jacobshavn Glacier) apparently flow via the deformation of a basal layer of relatively warm ice — the combination of warm basal ice and the presence of extensive surface crevassing makes Jacobshavn one of the last important glaciers to resist detailed sounding of the glacier bed. Ice sheets preserve an important stratigraphic record of past changes in climate and dynamics. The record takes the form of vertical and horizontal gradients in density, temperature, crystal size, crystalline fabric, impurity content, and deformation rate. Local vertical variations in these properties can lead to stratigraphic horizons that are detectable and apparently continuous for more than hundreds of kilometers (Figure 4.2). Polar Ice Shelves Ice shelves are enormous slabs of floating ice that are fed by a combination of ice flow from the interior ice sheet and accumulation on the ice sheet surface. The largest ice shelves are found in Antarctica. Both the Ross and Filchner-Ronne Ice Shelves are about the size of Texas. Ice thickness ranges from about 800 m near the grounding line to about 250 m near the calving margin. Water-layer thickness beneath the ice shelves varies from a few meters near the grounding line to hundreds of meters. A few ice-shelf-like environments have been identified recently in northern Greenland. For example, Peterman Glacier occupies a long fjord. Much of the length of Peterman Glacier is floating on ocean water that fills the fjord. Pockets of water upstream of the grounding line are also believed to exist based on the strength of radar returns from deep subglacial valleys (Figure 4.3). The interior structure of ice shelves can be complex. 
Moraine material deposited on the surface of East Antarctica's outlet glaciers, for example, is carried downstream and buried, only to show up as a strong scattering layer in radio-echo sounding data. Rifts through the ice shelf can form near grounding lines or around ice rises. Upwelling brine is forced horizontally through lower-density firn (i.e., granular ice formed by the recrystallization of snow) near the surface, forming a layer nearly opaque to radar. These brine layers are carried downstream and can completely obscure the ice bottom from radar. Bottom and surface crevasses can tear through a significant thickness of ice. Once identified, the crevasses can be useful indicators of the stress regime within the ice shelf (Figure 4.4). Ice-thickness gradients of the ice shelf and currents within the subglacial ocean can plate large thicknesses of sea ice onto the base of the ice shelf. Direct measurement has shown a 6-m-thick layer of briny sea ice on the bottom of the southeastern Ross Ice Shelf. Several hundred meters of sea ice are believed to be accreted onto the
Three years ago, Pinon Middle School science teacher Rochelle Silvers asked her seventh-grade students if anyone would like to do a science project for the state science fair. The only student to raise his hand was Garrett Yazzie. Garrett, a Navajo Indian, volunteered because no one else did, and because he was excited about the possibility of a trip off the Indian reservation where he lived. For his project, Garrett made a solar-powered water heater out of aluminum cans and a car radiator from an old Pontiac, which he had found at a junk yard. The result was not just an experiment for the science fair but an invention used in Garrett's home. At the time, 13-year-old Garrett was living with his mother, two sisters and two nephews in a trailer on the Navajo Indian Reservation in Arizona. The trailer, like many homes on the reservation, did not have running water or electricity. Garrett used his heater to heat well water for bathing and to heat his home. According to Garrett's data, his invention can heat water to 200 degrees Fahrenheit and raise the air temperature by 45 degrees Fahrenheit. The environmentally friendly heater helped the Yazzie family in another way. Before the heater, Garrett's family was heating the home by burning coal or wood. The smoke from burning those materials was causing Garrett's younger sister to have respiratory problems, sometimes requiring treatment at a hospital. "I wanted to find a way to heat the house without affecting her health," Garrett said. The solar-powered heater meant Garrett's family no longer had to burn coal or wood to heat their home. The impact of the project was just beginning. The "junk yard genius," as he's known on the reservation, won first place in the engineering category at the 2005 Arizona American Indian Science and Engineering Fair and earned an invitation to the Discovery Channel Young Science Challenge. 
Out of more than 7,500 nominations, Garrett was selected as one of 40 finalists to attend the challenge in Washington, D.C., in October 2005. Garrett's solar-powered water heater placed seventh in the Discovery Channel Challenge. But the people Garrett and his mom, Georgia, met at the Discovery Channel event have changed his life. He is now attending a preparatory school in Michigan and has a scholarship to Arizona State University waiting for him when he graduates. In 2007, the whole family received a new home through the ABC television network's "Extreme Makeover: Home Edition." Yazzie, now 16, lives with the Pierz family in Clarkston, Mich., and is a sophomore at Orchard Lake St. Mary's Preparatory School, a Catholic college-preparatory high school for boys. The Yazzies met Michael and Kathleen Pierz at the Discovery Channel Challenge, where their daughter was also a finalist. Kathleen Pierz talked with Garrett's mom about her desire to move Garrett off of the reservation to attend school. Pierz said she felt compelled to invite Garrett to live with her family and attend St. Mary's. Two weeks later, Garrett moved to Michigan. Garrett has always enjoyed math, but as a result of the project, he now enjoys science and seeing how things work. "After getting through my science fair project and working with different engineers and scientists, I like science more than any other subject," he said. He has done several other projects, including building a water wheel. The invention uses an industrial-sized cable spool connected to a 10-speed bicycle. When the spool is inserted into a stream of water, the spool spins and turns the bicycle axle. An alternator on the bicycle collects the energy that is created when the axle rotates. The water wheel provided enough electricity to run a refrigerator and the lights in a mountain home. 
This project won first place at the 2006 Arizona American Indian Science and Engineering Fair and placed Garrett as a semi-finalist once again in the Discovery Channel Young Scientist Challenge. Garrett said he's still shocked that a piece of junk like a 45-year-old radiator could impact his life so much. "It's been a big enjoyment to my life," he said of all the good things that have happened to him since the initial science project. Garrett plans to study engineering in college and maybe start a business to help the people on the Navajo reservation. One business idea is to specialize in environmentally friendly, or green, construction, and offer affordable wind generators and solar panels to the people on the reservation. The business would also create jobs on the reservation. Men on the reservation often need to drive more than two hours to find work, and some of Garrett's cousins drive five hours to construction jobs in Phoenix. He also has his sights set on the moon. During a visit to the Henry Ford Museum, Garrett toured the "History of Invention," which is an exhibit of famous inventors' homes, such as the home where George Washington Carver was born. Garrett told Kathleen Pierz the museum would need to make room for a hogan (a six-sided traditional Navajo home) to represent him, because he is going to be the first Navajo on the moon. "I know I will have to work very hard at that," he said. Garrett credits his mom, a special education teacher, with impressing upon him the importance of education. "Without it (education) you basically have nothing. That's why I want to get a better education. ... That's why I moved off of the reservation, to get a better education." American Indian Science and Engineering Society → John Herrington, first Native American Astronaut → Beat the Heat → Staying Cool on the International Space Station → NASA Education Web Site → Heather R. Smith/NASA Educational Technology Services
The failure of ghostly subatomic messengers called neutrinos to show up at an Antarctic telescope has knocked down a major astrophysical theory involving some of the most dramatic explosions in the universe. "I would have preferred to have seen neutrinos," says the IceCube telescope's principal investigator Francis Halzen at the University of Wisconsin, Madison. "Null results are usually not very interesting, but in this case, it is." Neutrinos are emitted by a range of cosmic processes. Most stream through matter without being deflected or changed, making them ideal long-distance messengers from distant galaxies. The IceCube telescope monitors a cubic kilometre of ice beneath the South Pole for neutrinos of various types, including the cosmic variety. Vertical strings of detectors frozen into the ice watch for flashes of blue light emitted when neutrinos strike. The energy of a neutrino points to the kind of source that produced it. One source of neutrinos was thought to be explosions known as gamma-ray bursts (GRBs), via mysterious entities called ultra-high-energy cosmic rays (UHECRs). UHECRs, very high energy protons and charged nuclei, occasionally arrive on Earth, where they are detected by cosmic ray detectors such as the Pierre Auger Observatory in Argentina. UHECRs are known to come from outside our galaxy, but because they get deflected by magnetic fields en route, it's impossible to retrace their path and determine their source. GRBs, thought to occur when massive stars collapse to form black holes, could spew out such particles. If they do, the UHECRs should interact with the photons also streaming out of the explosion to form neutrinos with energies in the hundreds of tera-electronvolts. These should then arrive on Earth along with the photons. With this chain of events in mind, IceCube has been looking for neutrinos occurring at the same time as GRBs. From May 2009 to May 2010, gamma-ray satellite observatories saw 190 GRBs.
Theory predicts that IceCube should have seen a handful of neutrinos at the same time, from the same region of the sky. But today IceCube reports that it saw absolutely nothing – a serious blow to a cascade of processes astrophysicists thought they understood. Supermassive black holes Most importantly, the result removes a leading explanation for UHECRs. "That GRBs are the source of the cosmic rays is basically ruled out," says Halzen. "We have put half the theorists out of business." Theorist Dan Hooper of Fermilab in Batavia, Illinois, agrees: "Given that GRBs were a leading candidate for the origin of these, this is an important result," he says. Attention will now shift to active galactic nuclei (AGN), which are powered by supermassive black holes. AGNs could also produce UHECRs, and because the mechanism for their production would be different, this is not ruled out by today's IceCube result. "We could observe neutrinos from AGNs any day and prove that they are the source of the cosmic rays," says Halzen. Journal reference: Nature, DOI:10.1038/nature11068 Sun Apr 22 19:41:21 BST 2012 by Dirk Pons Of course the logical other option is that the neutrino-genesis model may be off. Gamma ray bursts are expected to produce cosmic rays, and separately neutrinos, through different but linked mechanisms. IceCube saw no neutrinos. But it might be a bit hasty to claim that gamma ray bursts are 'ruled out' as the source of cosmic rays. After all, there are other logical alternative explanations, even if they are less likely, e.g. that fewer pions are produced than expected. It may simply be that we don't fully understand the physics in the GRBs http://arxiv.org/pdf/1112.1076v2.pdf. Neutrinos have surprised before. They may yet again. Our own work rather radically gives a novel take on their production in beta decay http://vixra.org/abs/1111.0022 and even suggests they have a key role in the asymmetry of baryogenesis http://vixra.org/abs/1111.0035 Mon Apr 23 10:10:30 BST 2012 by Mark Bridger It's true that the physics of GRBs are not yet understood, even if black holes or rather, EGOs (extreme gravity objects) are the likely candidates. In my explanation, GRBs would not be associated with UHECRs or neutrinos. Though I guess, very speculatively, that UHECRs could come from a similar mechanism in some EGOs that is more extreme than GRBs.
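As an aside, the time-and-position coincidence search described in the article can be sketched numerically. Everything below (the GRB catalogue, the candidate event list, the time window, and the angular radius) is invented for illustration only:

```python
import math

# Toy coincidence search: count neutrino events landing within a time
# window and angular radius of any catalogued GRB.
grbs = [(100.0, (10.0, 20.0)), (5000.0, (200.0, -30.0))]   # (t_sec, (RA, Dec) deg)
events = [(102.0, (10.3, 20.2)), (9000.0, (50.0, 0.0))]    # candidate neutrinos

def ang_sep_deg(a, b):
    # Great-circle separation between two (RA, Dec) points, in degrees
    ra1, d1, ra2, d2 = map(math.radians, (a[0], a[1], b[0], b[1]))
    c = (math.sin(d1) * math.sin(d2) +
         math.cos(d1) * math.cos(d2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def coincidences(grbs, events, t_window=500.0, radius=5.0):
    # An event counts if it is close to some GRB in both time and direction
    return sum(1 for te, pe in events
               for tg, pg in grbs
               if abs(te - tg) <= t_window and ang_sep_deg(pe, pg) <= radius)

print(coincidences(grbs, events))  # 1
```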
JSON is a simple language for representing data on the Web. Linked Data is a technique for creating a graph of interlinked data across different documents or Web sites. In Linked Data, “things” are identified using IRIs, which are typically dereferenceable and thus may be used to find more information about a particular “thing”, creating a Web of Knowledge. JSON-LD is intended to be a simple publishing method for expressing not only Linked Data in JSON, but also for adding semantics to existing JSON. The syntax does not necessarily require applications to change their JSON, but allows one to easily add meaning by adding context in a way that is either in-band or out-of-band. The syntax is designed not to disturb already deployed systems running on JSON, but to provide a smooth upgrade path from JSON to JSON with added semantics. Finally, the format is intended to be easy to parse, efficient to generate, compatible with both stream-based and document-based processing, and to require a very small memory footprint in order to operate. Data is messy and disconnected. JSON-LD organizes and connects it, letting your creativity bloom.
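A minimal sketch of the idea (the context terms below assume the schema.org vocabulary; the person and URLs are made up): adding an "@context" maps ordinary JSON keys onto unambiguous IRIs without disturbing the rest of the document.

```python
import json

# A plain JSON object upgraded to Linked Data by adding an "@context"
# that maps its keys onto schema.org terms.
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"},
    },
    "@id": "http://example.com/people/alice",  # the "thing" this data describes
    "name": "Alice",
    "homepage": "http://example.com/alice/",
}
print(json.dumps(doc, indent=2))
```

A JSON consumer that knows nothing about Linked Data can simply ignore the "@context" key and read the document as ordinary JSON, which is the in-band upgrade path the text describes.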
We constantly hear that the warmest years on record have all occurred in the most recent decades, and of course, we are led to believe this must be a result of the ongoing buildup of greenhouse gases. In most places, we have approximately 100 years of reliable temperature records, and we wonder if the warmth of the most recent decades is unusual, part of some cyclical behavior of the climate system, or a warm-up on the heels of a cold period at the beginning of the record. A recent article in Geophysical Research Letters has an intriguing title suggesting a 2,000-year temperature record now exists for China – we definitely wanted to see the results of this one. The article was authored by six scientists with the Chinese Academy of Sciences in Beijing, the State University of New York at Albany, and Germany’s Justus-Liebig University in Giessen; the research was funded by the Chinese Academy of Sciences, National Natural Science Foundation of China, and the United States Department of Energy. In their abstract, Ge et al. tell us “The analysis also indicates that the warming during the 10–14th centuries in some regions might be comparable in magnitude to the warming of the last few decades of the 20th century.” From the outset, we knew we would welcome the results from any long-term reconstruction of regional temperatures. The authors begin noting that “The knowledge of past climate can improve our understanding of natural climate variability and also help address the question of whether modern climate change is unprecedented in a long-term context.” We agree! Ge et al.
explain that “Over the recent past, regional proxy temperature series with lengths of 500–2000 years from China have been reconstructed using tree rings with 1–3 year temporal resolution, annually resolved stalagmites, decadally resolved ice-core information, historical documents with temporal resolution of 10–30 years, and lake sediments resolving decadal to century time scales.” However, the authors caution “these published proxy-based reconstructions are subject to uncertainties mainly due to dating, proxy interpretation to climatic parameters, spatial representation, calibration of proxy data during the reconstruction procedure, and available sample numbers.” Ge et al. used a series of multivariate statistical techniques to combine information from the various proxy methods, and the results included the reconstruction of regional temperatures and an estimate of uncertainty for any given year. They also analyzed temperature records from throughout China over the 1961 to 2007 period and established five major climate divisions in the country (Figure 1). Figure 1. Types, lengths, and locations of proxy temperature series and observations used in the Ge et al. study. The five climate regions were based on a “factor analysis” of the 1961–2007 instrumental measurements. Grey shading indicates elevation (from Ge et al., 2010). The bottom line for this one can be found in our Figure 2 that shows the centennially-smoothed temperature reconstruction for the five regions of China. With respect to the Northeast, Ge et al. comment “During the last 500 years, apparent climate fluctuations were experienced, including two cold phases from the 1470s to the 1710s and the 1790s to the 1860s, two warm phases from the 1720s to the 1780s, and after the 1870s.
The temperature variations prior to the 1500s show two anomalous warm peaks, around 300 and between approximately 1100 and 1200, that exceed the warm level of the last decades of the 20th century." The plot for the Northeast shows warming in the 20th century, but it appears largely to be somewhat of a recovery from an unusually cold period from 1800 to 1870. Furthermore, the plot shows that the recent warming is less than warming that has occurred in the past. Figure 2. Five regionally coherent temperature reconstructions with 100-year resolution; the dashed line is the part with fewer series used; and the solid line is the mean value. The shaded areas are the two coldest periods, during the 1620s–1710s and 1800s–1860s (from Ge et al., 2010). The Central East region also has a 2,000-year reconstruction, and Ge et al. state "The 500-year regional coherent temperature series shows temperature amplitude between the coldest and warmest decade of 1.8°C. Three extended warm periods were prevalent in 1470s–1610s, 1700s–1780s, and after 1900s. It is evident that the late 20th century warming stands out during the past 500 years. Considering the past 2000 years, the winter half-year temperature series indicate that the three warm peaks (690s–710s, 1080s–1100s and 1230s–1250s), have comparable high temperatures to the last decades of the 20th century." No kidding – the plot for the Central East region shows that the warmth of the late 20th century was exceeded several times in the past. Commenting on the Tibet reconstruction, Ge et al. state "The warming period of twenty decadal time steps between the 600s and 800s is comparable to the late 20th century." In the Northwest, they note "Comparable warm conditions in the late of 20th century are also found around the decade 1100s." Unfortunately, no long-term reconstruction was possible for the Southeast region. In summarizing their work, Ge et al. report: From Figure 3 [our Figure 2 –eds.]
, the warming level in the last decades of the 20th century is unprecedented compared with the recent 500 years. However, comparing with the temperature variation over the past 2000 years, the warming during the last decades of the 20th century is only apparent in the TB region, where no other comparable warming peak occurred. For the regions of NE and CE, the warming peaks during 900s–1300s are higher than that of the late 20th century, though connected with relatively large uncertainties. We get the message – the recent warming in at least several regions in China has likely been exceeded in the past millennium or two, the rate of recent warming was not unusual, and the observed warming of the 20th century comes after an exceptionally cold period in the 1800s. Declaring that anthropogenic greenhouse gas emissions have pushed modern temperature beyond their historical counterparts disregards the lessons of 2,000 years of Chinese temperatures. Ge, Q.-S., J.Y. Zheng, Z.-X. Hao, X.-M. Shao, W.-C. Wang, and J. Luterbacher. 2010. Temperature variation through 2000 years in China: An uncertainty analysis of reconstruction and regional difference. Geophysical Research Letters, 37, L03703, doi:10.1029/2009GL041281.
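The centennially-smoothed curves in Figure 2 are, at heart, just centered moving averages of the annual reconstructions. A minimal sketch of the idea in Python (the helper `centennial_smooth` and its input are hypothetical illustrations, not the Ge et al. series):

```python
# Centered moving average of the kind used to produce smoothed curves
# like those in Figure 2 (hypothetical helper; not the authors' code).
def centennial_smooth(series, window=100):
    """Centered moving average; endpoints average whatever data is available."""
    half = window // 2
    smoothed = []
    for i in range(len(series)):
        chunk = series[max(0, i - half): i + half]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# Sanity check: a constant anomaly series is unchanged by smoothing.
flat = [0.5] * 300
assert all(abs(v - 0.5) < 1e-9 for v in centennial_smooth(flat))
```

The point of the smoothing is only to suppress year-to-year noise so that century-scale warm and cold phases, like those discussed above, stand out.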
<urn:uuid:415235c7-95da-4917-ac74-eb14d3bd8d18>
3.375
1,400
Knowledge Article
Science & Tech.
49.179624
The picture to the right is an aerial photograph of Fermi National Accelerator Laboratory illustrating the heterogeneous landscape. Seven different habitats were sampled for two years to compare prey availability across the landscape. A log-linear analysis was then used to relate spatio-temporal changes in prey to the diets and activity patterns of the predators. The results showed that the numerically common predators, red fox, coyote, Red-tailed Hawk, and Great Horned Owl, feed heavily on rabbits and voles. Squirrels and field mice suffer comparatively low predation rates. We then examined how predation affects the biology of small prairie mammals, such as mice and voles, by excluding the predators. There were four plots, each 1.5 acres in area, that had the coyotes and red foxes fenced out. Hawks and owls were excluded by suspending netting above the plots. Small mammals on the four exclusion plots and four control plots were trapped at monthly intervals to monitor long-term demographic patterns. The two most common small mammals occurring at the site were white-footed mice (Peromyscus leucopus) and meadow voles (Microtus pennsylvanicus). Following peak densities in late summer, Microtus numbers were as much as three times greater on exclusion plots relative to controls. This was due to preferential selection of Microtus by vertebrate predators, as reflected in their diets. Vertebrate predator exclusion had no detectable effect on Peromyscus numbers; this may have been attributed to an abundance of alternative prey, particularly eastern cottontails. We also burned and mowed a portion of the field site. This was intended to test how predators affect prey populations, how different prairie management techniques affect prey populations, and how these different trophic levels ultimately interact in a tallgrass prairie restoration. 
Plant sampling involved the use of sixty 0.5-m quadrats distributed between mowed, burned, and control plots in the summer through fall months. Vegetation profiles were also taken to measure litter depth and vegetation structure. Plant data will be coupled with the small mammal censuses to look at how different prairie restoration techniques determine the species composition and abundance of plants and, in turn, vertebrate prey within the tallgrass prairie ecosystem. The predator exclosures were also used to examine the effects of reduced predation risk on the foraging behavior of Peromyscus. Seeds were placed both in foraging towers (seen in the figure to the right), which resulted in elevated feeding sites, and on the ground. The results indicated that in the absence of predators, Peromyscus increased their level of foraging activity at the more exposed, elevated site. This may have been attributed to reduced rustling of leaf litter and greater visual acuity of the prey. Also, there was a reduction in foraging during a full moon as compared to a new moon; presumably the higher lunar light levels resulted in increased predation risk. These experiments have been expanded to include foraging in shrubland and woodland. In the shrubland, we are examining how Peromyscus respond to visual, auditory, and olfactory cues at two spatial scales. In the oak woodland, we're using a similar suite of vertebrate cues, but measure variation in Peromyscus foraging in the open, on logs (run-ways), at the base of trees, and at elevated sites in tree trunks.
Foraging in the Tropics
This work is being conducted at Las Cuevas Field Station in Chiquibul Preserve, Belize. In numerous studies throughout North America, small mammals consistently used downed logs as foraging corridors. Many North American predators, such as foxes and owls, hunt via auditory cues, and rustling leaves are a main attractant.
This past summer, we compared how two disturbances – logging and hurricane – influence the structure of the rainforest and in turn small mammal foraging behavior. Although the sample size is still limited, we were very surprised to find that the small mammals completely avoided all downed logs, contrary to previous work in North America. Multiple encounters with tropical jumping pitvipers may be the explanation. Unlike mammalian or avian predators, pitvipers hunt via thermal and tactile cues. They commonly place their lower mandible on a log to sense vibrations. During future trips to Chiquibul, I hope to increase the sample size of tracked rodents and quantitatively analyze their movements in relation to downed woody material and vegetation structure. Recent studies have focused on the effects of urbanization on vertebrates. This has included a limited amount of work with anurans and birds. Our work with a variety of mammalian groups, including carnivores, small mammals, and bats, has taken a multifaceted approach. Population structure, community composition, and ecotoxicology are topics encompassed by current urbanization investigations; future collaborative work will incorporate population genetics. Gradient analysis has been used to address questions on how urban sprawl affects mammalian species distribution and abundance. The gradient, shown in the figure at left, starts near the "Lake Shore" of downtown Chicago and ends 100 km away at Midewin National Tallgrass Prairie, a 10,000 ha macrosite. Landcover along the gradient includes dense residential areas (up to 3,200 people per square km), heavy suburban subdivisions, extensive agriculture, and natural areas. From a systems perspective, I also have interests in wetland restorations. Projects have ranged from the distribution and abundance of frogs among a diversity of wetland types, to nested subset patterns of birds and a GIS model for predicting distribution of pre-settlement wetlands.
We have also initiated work at a large-scale nutrient farming project. One question will focus on experimental changes in hydrological regimes and the distribution and abundance of muskrats. A second question will examine bioaccumulation of pesticides and herbicides resulting from the increased flow of agricultural run-off. Funds are also pending for a wet prairie restoration. This project will examine different moisture regimes along a moisture gradient and how these abiotic parameters interact with herbivory.
<urn:uuid:3756f025-03db-46c3-b1f1-16104e9fdf9c>
3.671875
1,279
Academic Writing
Science & Tech.
25.026287
Microtubules are a structure in the cytoskeleton: rope-like polymers that grow to a length of about 25 micrometers (25,000 nm) and have an outer diameter of around 25 nm. For comparison, the mean spacing between atoms is on the order of 0.1 to 0.2 nm; so the microtubule really is micro: about 200 atoms across. In terms of quantum effects, though, this is pretty big but not unreasonable. Researchers commonly use quantum dots to play with quantum effects, and these are typically spheres on the order of 10 to 50 atoms in diameter. Note that we don't know how to couple 5000 quantum dots in one coherent chain (how many you would need to get the length of a microtubule). So microtubules are small, but they are common! Microtubules are found in all dividing eukaryotic cells and in most differentiated cell types. In other words, that mosquito you just smacked, and that philosophers always give as an example of something non-conscious, is full of microtubules. This should raise some red flags, but we don't need to go into more detail on microtubules to discredit Penrose and Hameroff. However, if you love cell biology, take a look at Desai & Mitchison (1997). So microtubules are probably a bad basis, but why did Penrose want quantum effects in the brain? In The Emperor's New Mind, Penrose argues consciousness is non-algorithmic and suggests that a magical quantum computer could do these non-algorithmic tasks. The reason I use 'magical' is because a real quantum computer is Turing-complete: if a classical computer cannot solve a problem, then neither can a quantum one (of course, if a classical computer can solve a problem, then a quantum one can as well and might be able to do it qualitatively faster). For a nice computer science debunking of this part of Penrose's argument, take a look at Scott Aaronson's lecture notes. Why did Hameroff want quantum-ness? To avoid dualism in explaining consciousness.
However, he has gone so far down the reductionist rabbit-hole that he popped out on the other side. He arrived at the same 'magic' we feared in dualism, except now he called it 'quantum mechanics'. The biggest irony of this approach is that Penrose was inspired in many ways by Schrödinger's beautiful take on life. Although Schrödinger does bring in quantum mechanics (both as a useful reduction and as an analogy), he uses completely different parts of it (he uses the discretization of energy levels, and specifically avoids the issues of the uncertainty principle and superposition of states that made him famous). Schrödinger would completely disagree with Penrose and Hameroff: [I]f we were organisms so sensitive that a single atom, or even a few atoms, could make a perceptible impression on our senses -- Heavens, what would life be like! To stress one point: an organism of that kind would most certainly not be capable of developing the kind of orderly thought which, after passing through a long sequence of earlier stages, ultimately results in forming, among many other ideas, the idea of an atom. This response can be made precise through quantum decoherence (Tegmark, 2000), and there is little support for the physical importance of quantum mechanics in the brain (Litt et al., 2006), although Hameroff (2007) still defends it.
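The size scales quoted above are easy to sanity-check. A quick sketch (the 0.125 nm atomic spacing and the ~5 nm quantum-dot size are my own assumed round numbers, chosen to match the ranges given in the text):

```python
# Rough scale check for the microtubule numbers quoted in the post.
diameter_nm = 25.0        # outer diameter of a microtubule
length_nm = 25_000.0      # ~25 micrometres long
atom_spacing_nm = 0.125   # assumed mean interatomic spacing (text says 0.1-0.2 nm)
quantum_dot_nm = 5.0      # assumed dot size, giving the ~5000-dot coherent chain

atoms_across = diameter_nm / atom_spacing_nm
dots_along = length_nm / quantum_dot_nm

assert round(atoms_across) == 200    # "about 200 atoms across"
assert round(dots_along) == 5000     # the 5000 coupled quantum dots mentioned
```

The arithmetic only confirms the post's point: a microtubule is tiny by everyday standards but enormous compared with the systems in which coherent quantum effects are actually demonstrated.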
<urn:uuid:fa12721a-e508-4aa8-a132-20839baf5f55>
3.1875
721
Q&A Forum
Science & Tech.
43.721676
We are here one more time for another edition of our 3081: Program Design and Development writing assignment, this time getting into the topic called: Testing. Testing? ...what do I mean by testing in terms of computer science? Well, testing is all about producing failures in our own code just to be sure that it is doing what it is supposed to do. Testing is the most effective technique for building up confidence that small pieces of code work as expected. As developers, we spend time debugging, printing variables' values throughout the code and verifying that they are the ones we expect. However, we also know that this is not enough for testing purposes, so the next question is: How to test a method, a function, a class, a whole project? I found the answer in Regression Testing, which states that any failed execution must yield a test case that remains part of the project's test suite ("Seven principles of software testing", 2008). There are two kinds of test cases: manual and automatic. Automatic tests are derived from the specifications of the project, while manual tests were not originally intended as test runs, which is the case for the tests from Lab 5. Recalling from Lab 5, there are three tests in the readInput method. The first test was set up to not provide any file name, by changing the first argument from 2 to 1, in order to prove the correctness of this case. For the second test, we provide a non-existing file to the function makeArgs (const char *a0, const char *a1), which returns an expected null pointer. The last test just confirmed that when given a proper existing file name the pointer returned is not null. Manual tests like those previously mentioned are good for understanding the method's functionality and its arguments. Based on the premise of finding all bugs in our code, not just one, we should consider an empirical assessment strategy called random testing.
Random testing states that successive bugs might be of different natures; hence, we should try as many tests as our creativity and knowledge allow, to uncover as many failures as possible as a function of time. In other words, we should not underestimate a seemingly dumb testing strategy before we try it. Testing is far easier in the long run if we put into practice the "pay-as-you-go" model. It means writing a simple test for each small routine as we go along with the project, and spending some time at the end of each day testing our code in a consistent manner, rather than having to rush at the last minute fixing too many bugs and reworking many lines of code. In conclusion, testing is not a waste of time and it should not be something that happens toward the end of a project.
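To make the pay-as-you-go and random-testing ideas concrete, here is a small sketch in Python. Note that `make_args` below is a hypothetical stand-in for Lab 5's `makeArgs`, invented for illustration; only its null-for-missing-file behavior is taken from the description above:

```python
import random

def make_args(a0, a1):
    """Hypothetical stand-in for Lab 5's makeArgs: returns None when no
    file name is supplied, otherwise the pair of arguments."""
    if not a1:
        return None
    return (a0, a1)

# Pay-as-you-go: small directed tests written alongside the routine.
assert make_args("prog", "") is None              # no file name given
assert make_args("prog", "input.txt") == ("prog", "input.txt")

# Random testing: many generated inputs, to uncover bugs of different natures.
random.seed(0)
for _ in range(1000):
    name = "".join(random.choice("ab.txt") for _ in range(random.randint(0, 8)))
    result = make_args("prog", name)
    if name:
        assert result == ("prog", name)
    else:
        assert result is None
```

Each assertion that ever fails would, following the regression-testing principle above, be frozen into a named test case that stays in the suite.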
<urn:uuid:8124ba0b-2721-4f55-9fcd-b1c269fbe84b>
3.65625
561
Personal Blog
Software Dev.
54.170796
Control is what attracts many OS developers to assembly, and often is what leads to or stems from assembly hacking. Note that any system that allows self-development could qualify as an "OS", though it can run "on top" of an underlying system (much like Linux over Mach or OpenGenera over Unix). Hence, for easier debugging purposes, you might like to develop your "OS" first as a process running on top of Linux (despite the slowness), then use the Flux OS kit (which grants use of Linux and BSD drivers in your own OS) to make it stand-alone. When your OS is stable, it is time to write your own hardware drivers, if you really love that. This HOWTO will not cover topics such as bootloader code, getting into 32-bit mode, handling interrupts, the basics about Intel protected mode or V86/R86 braindeadness, defining your object format, and calling conventions. The main place to find reliable information about all of that is the source code of existing OSes and bootloaders. Lots of pointers are on the following webpage: http://www.tunes.org/Review/OSes.html
<urn:uuid:a647ecaa-8fe3-4ecf-bb8e-3a704643b16c>
2.703125
246
Tutorial
Software Dev.
52.590591
NERVA is an acronym for Nuclear Engine for Rocket Vehicle Application, a joint program of the U.S. Atomic Energy Commission and NASA managed by the Space Nuclear Propulsion Office (SNPO) until both the program and the office ended at the end of 1972. NERVA demonstrated that nuclear thermal rocket engines were a feasible and reliable tool for space exploration, and at the end of 1968 SNPO certified that the latest NERVA engine, the NRX/XE, met the requirements for a manned Mars mission. Although NERVA engines were built and tested as much as possible with flight-certified components and the engine was deemed ready for integration into a spacecraft, much of the U.S. space program was cancelled by the Nixon Administration before a manned visit to Mars could take place. NERVA was considered by the AEC, SNPO and NASA to be a highly successful program; it met or exceeded its program goals. Its principal objective was to "establish a technology base for nuclear rocket engine systems to be utilized in the design and development of propulsion systems for space mission application". Virtually all space mission plans that use nuclear thermal rockets use derivative designs from the NERVA NRX or Pewee.
Project Rover
Los Alamos Scientific Laboratory began researching nuclear rockets in 1952, accelerating into Project Rover in 1955 when the deputy director of Lawrence Livermore National Laboratory, Herbert York, postulated a way to shrink reactor weights considerably. By 1961, after unexpectedly fast-paced progress on the part of Project Rover, NASA's Marshall Space Flight Center began to use nuclear thermal rockets in their mission plans. Marshall planned to use a nuclear-powered rocket from Los Alamos to power a RIFT (Reactor-In-Flight-Test) nuclear stage to be launched as early as 1964, and the need for planning and oversight led to the formation of the Space Nuclear Propulsion Office. SNPO was formed so that the AEC and NASA could work together, and H. B.
"Harry" Finger was selected as its first director. Finger made a decision to delay RIFT, and he defined strict objectives for nuclear rocket engines to achieve before RIFT would be allowed. NERVA engine development Finger then immediately selected Aerojet and Westinghouse to develop the NERVA engine. SNPO would depend on what was then known as Los Alamos Scientific Laboratory to supply technology for NERVA rocket engines as part of Project Rover. SNPO chose the original 825 second 75,000 pound-thrust KIWI-B4 nuclear thermal rocket design, named after the Kiwi, a flightless bird native to New Zealand, as the baseline for the 52-inch (22 feet from thrust structure to nozzle bottom) NERVA NRX (Nuclear Rocket Experimental). Phase 2 of Project Rover became called Phoebus, and Phase 3 was known as Pewee, demonstrating much higher power (4000 MW), power density and long-lived fuels, but these programs did not make their way to NERVA. Working NERVA designs (termed NERVA NRX) were based on KIWI; by the time Pewee started testing the Apollo program had largely been defunded by the Nixon administration. Plans to send humans to the Moon and Mars had been indefinitely delayed. Almost all of the NERVA research, design and fabrication was done at Los Alamos Scientific Laboratory. Testing was done at a large installation specially built by SNPO on the Nevada Test Site. Although Los Alamos tested several KIWI and Phoebus engines during the 1960s, testing of NASA's NERVA NRX/EST (Engine System Test) contractor engine didn't begin until February 1966. The objectives were: - Demonstrate the feasibility of starting and restarting the engine without an external power source. - Evaluate the control system characteristics (stability and control mode) during startup, shutdown, cooldown and restart for a variety of initial conditions. - Investigate the system stability over a broad operating range. 
- Investigate the endurance capability of the engine components, especially the reactor, during transient and steady state operation with multiple restarts.
All test objectives were successfully accomplished, and the first NERVA NRX operated for nearly 2 hours, including 28 minutes at full power. It exceeded the operating time of previous KIWI reactors by nearly a factor of two.
NERVA XE
The second NERVA engine, the NERVA XE, was designed to come as close as possible to a complete flight system, even to the point of using a flight-design turbopump. Components that would not affect system performance were allowed to be selected from what was available at Jackass Flats, Nevada to save money and time, and a radiation shield was added to protect external components. The engine was reoriented to fire downward into a reduced-pressure compartment to partially simulate firing in a vacuum. The NERVA NRX/EST engine test objectives now included:
- Demonstrating engine system operational feasibility
- Showing that no enabling technology issues remained as a barrier to flight engine development.
- Demonstrating completely automatic engine startup.
The objectives also included testing the use of the new facility at Jackass Flats for flight engine qualification and acceptance. Total run time was 115 minutes, including 28 starts. NASA and SNPO felt that the test "confirmed that a nuclear rocket engine was suitable for space flight application and was able to operate at a specific impulse twice that of chemical rocket system [sic]." The engine was deemed adequate for Mars missions being planned by NASA. The facility was also deemed adequate for flight qualification and acceptance of rocket engines from the two contractors.
Loss of political support and cancellation
The Rover/NERVA program accumulated 17 hours of operating time with 6 hours above 2000 K.
Although the engine, turbine and liquid hydrogen tank were never physically assembled together, the NERVA was deemed ready to design into a working vehicle by NASA, creating a small political crisis in Congress because of the danger a Mars exploration program presented to the national budget. Clinton P. Anderson, the New Mexico senator who had protected the program, had become severely ill. Lyndon B. Johnson, another powerful advocate of human space exploration, had decided not to run for a second term and was considerably weakened. NASA program funding was somewhat reduced by Congress for the 1969 budget, and the incoming Nixon administration reduced it still further for 1970, shutting down the Saturn rocket production line and cancelling Apollo missions after Apollo 17. Without the Saturn S-N rocket to carry the NERVA to orbit, Los Alamos continued the Rover Program for a few more years with Pewee and the Nuclear Furnace, but it was disbanded by 1972. The most serious injury during testing was a hydrogen explosion in which two employees sustained foot and ear drum injuries. At one point in 1965 the liquid hydrogen storage at Test Cell #2 during a Los Alamos Scientific Laboratory test was accidentally allowed to run dry; the core overheated and ejected onto the floor of the Nevada desert. Test Site personnel waited 3 weeks and then walked out and collected the pieces without mishap. The nuclear waste from the damaged core was spread across the desert and was collected by an Army group as a decontamination exercise. An engine of this type is on outdoor display on the grounds of the NASA Marshall Space Flight Center in Huntsville, Alabama.
In the space program
NASA plans for NERVA included a visit to Mars by 1978 and a permanent lunar base by 1981.
NERVA rockets would be used for nuclear "tugs" designed to take payloads from Low Earth Orbit to larger orbits as a component of the later-named Space Transportation System, resupply several space stations in various orbits around the Earth and Moon, and support a permanent lunar base. The NERVA rocket would also be a nuclear-powered upper stage for the Saturn rocket (the Saturn S-N), which would allow the upgraded Saturn to launch much larger payloads of up to 340,000 pounds to Low Earth Orbit. NERVA rockets had progressed rapidly to the point where they could run for hours, limited in run time by the size of the liquid hydrogen propellant tanks at the Jackass Flats test site. They also climbed in power density. The larger NERVA I rocket gradually gave way to the smaller NERVA II rocket in mission plans as efficiency increased and thrust-to-weight ratios grew, and the KIWI gradually gave way at Los Alamos to the smaller Pewee and Pewee 2 as funding was cut to lower and lower levels by Congress and the Nixon administration. The RIFT vehicle consisted of a Saturn S-IC first stage, an S-II stage and an S-N (Saturn-Nuclear) third stage. The Space Nuclear Propulsion Office planned to build ten RIFT vehicles, six for ground tests and four for flight tests, but RIFT was delayed after 1966 as NERVA became a political proxy in the debate over a Mars mission. The nuclear Saturn C-5 would carry two to three times more payload into space than the chemical version, enough to easily loft 340,000 pound space stations and replenish orbital propellant depots. Wernher von Braun also proposed a manned Mars mission using NERVA and a spinning donut-shaped spacecraft to simulate gravity. Many of the NASA plans for Mars in the 1960s and early 1970s used the NERVA rocket specifically; see the list of manned Mars mission plans in the 20th century. The Mars mission became NERVA's downfall.
Members of Congress in both political parties judged that a manned mission to Mars would be a tacit commitment for the United States to decades more of the expensive Space Race. Manned Mars missions were enabled by nuclear rockets; therefore, if NERVA could be discontinued the Space Race might wind down and the budget would be saved. Each year the RIFT was delayed and the goals for NERVA were set higher. Ultimately, RIFT was never authorized, and although NERVA had many successful tests and powerful Congressional backing, it never left the ground.
In fiction and pop culture
- In the 1968 short story "Wait It Out", by Larry Niven, an ill-fated exploration mission to Pluto uses a landing craft with a NERVA engine.
- The 1970 novel The Throne of Saturn by Allen Drury describes a fictional Project Argosy, a mission to Mars consisting of three Saturn V vehicles, each with a NERVA upper stage. The NERVA stages and living modules are docked together for the trip.
- In the 1979 novel Encounter Three, by Martin Caidin, the NERVA program is briefly discussed. It is described as "pushing more thrust through a smaller hole", and is dismissed as useless in the long-term.
- In the 1985 film Lifeforce, the NERVA is the propulsion for the fictional space shuttle Churchill.
- In Stephen Baxter's 1996 alternate timeline novel Voyage the NERVA project is not canceled but development goes on throughout the 70s, producing a test article Apollo-N in 1980. A disaster occurs, the NERVA technology is abandoned as unsafe, and a mission to Mars is launched using chemical rocket engines with a slingshot gravity assist via Venus, allowing an expedition to arrive on Mars in 1986.
- In Chris Berman's 2008 novel, The Hive, the discovery of an alien device between the orbits of Jupiter and Saturn in the year 2019 creates an emergency situation.
This leads to a crash program with Russia and the United States partnering to re-engine a partly complete manned Mars spacecraft with a NERVA rocket motor to send a team to inspect the device.
- In Boundary by Eric Flint & Ryk Spoor (Baen Books), it makes an appearance, first in Chapter 7.
- The NERVA engine is featured in the sandbox-style space flight simulator game Kerbal Space Program, developed by the indie-game company Squad. It is called the "LV-N Atomic Rocket Motor." It has a much lower thrust of 60 kN, uses oxidizer as well as its liquid fuel, and doesn't yet use any nuclear fuel (this may be fixed in an upcoming update to the game), and is said to have radioactive thrust. The thrust was likely lowered to prevent it from being an "ultimate" rocket motor.
NERVA rocket stage specifications
- Diameter: 10.55 metres (34.6 ft)
- Length: 43.69 metres (143.3 ft)
- Mass empty: 34,019 kilograms (75,000 lb)
- Mass full: 178,321 kilograms (393,130 lb)
- Thrust (vacuum): 333.6 kN (75,000 lbf)
- ISP (vacuum): 850 s (8.34 kN·s/kg)
- ISP (sea level): 380 s (3.73 kN·s/kg)
- Burn Time: 1,200 s
- Propellants: LH2
- Engines: 1 Nerva-2
See also
- Nuclear thermal rocket
- Project Orion (nuclear propulsion), a nuclear pulse drive system
- Project Prometheus
- Project Rover
- RD-0410, the Soviet nuclear thermal rocket engine
References
- Robbins, W.H. and Finger, H.B., "An Historical Perspective of the NERVA Nuclear Rocket Engine Technology Program", NASA Contractor Report 187154/AIAA-91-3451, NASA Lewis Research Center, NASA, July 1991.
- Dewar, James (2008). To The End Of The Solar System: The Story Of The Nuclear Rocket (2nd ed.). Apogee. ISBN 978-1-894959-68-1.
External links
- NASA's Nuclear Frontier: The Plum Brook Reactor Facility - 188 page monograph
- NERVA in David Darling's Internet Encyclopedia of Science
- Nerva entry at Encyclopedia Astronautica
- Spacecraft: Project Nerva
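As a quick check on the stage specifications listed above, the ideal delta-v of the bare stage follows from the Tsiolkovsky rocket equation. This is only a sketch: it uses the full and empty stage masses as given, and ignores payload, gravity losses and drag.

```python
import math

# Figures taken from the NERVA rocket stage specification list above.
g0 = 9.80665           # standard gravity, m/s^2
isp_vac = 850.0        # vacuum specific impulse, s
m_full = 178_321.0     # fully fuelled stage mass, kg
m_empty = 34_019.0     # empty stage mass, kg

v_exhaust = isp_vac * g0                       # effective exhaust velocity, m/s
delta_v = v_exhaust * math.log(m_full / m_empty)   # Tsiolkovsky equation

assert 8_300 < v_exhaust < 8_400     # ~8.34 km/s, i.e. 8.34 kN*s/kg per the specs
assert 13_000 < delta_v < 14_500     # roughly 13.8 km/s for the stage alone
```

The ~8.3 km/s exhaust velocity, about twice that of the best chemical engines, is exactly the advantage claimed for NERVA in the test summaries above.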
<urn:uuid:27b6749d-0c7d-4066-8f69-b9da366aa71f>
3.796875
2,900
Knowledge Article
Science & Tech.
42.774612
Earth Shadow on Moon
June 07, 2003
The above image shows the Earth's shadow covering a portion of the Moon. The photo was taken over Quebec, Canada during a total lunar eclipse on January 20, 2000. As was mentioned in the caption of yesterday's Earth Science Picture of the Day, the position of the umbra (deepest part of the Earth's shadow) largely determines how the Moon looks during a total lunar eclipse. Whether or not our atmosphere is clear affects the Moon's appearance as well. If the sky is hazy during an eclipse, the Moon will likely completely fade from view. Only during an eclipse does the position of the Earth control how much of the Moon is illuminated. The phases of the Moon result from our perspective in relation to the position of the Moon and the Sun -- as the Moon orbits us, our perspective changes each day.
<urn:uuid:470344fb-e60a-4c19-9516-53db3ed0a838>
3.921875
175
Personal Blog
Science & Tech.
55.407597
I know what sonic.blade means, though I'm not sure how well I'll go with explaining it - I haven't done maths of any real sort in years, and multi-dimensional stuff in even longer. Let's first of all think about a unit sphere. The sphere exists for values of x, y, and z between -1 and 1, and does not exist for all other values (outside the function's domain, as it were). This is a simple concept to grasp, as we deal with three dimensions every day. If you want to easily comprehend a 4th dimensional unit sphere, the best way is to arbitrarily assign some value to the 4th dimension. Most commonly, people assign time to the 4th dimension. You can then imagine that at time t=0, a 4th dimensional unit sphere would look identical to a 3d unit sphere. But if you went back to t=-0.5 (keeping x, y and z at 1, for simplicity's sake), you would see a smaller sphere. Well, if you think about a 3d sphere, a 2d cross-section taken at z=-0.5 is a smaller circle than at z=0. So for a 4d sphere, the "cross-section" at t=-0.5 will be a smaller sphere than that taken at t=0. You can progress this across the entire domain of t (-1 to 1). If you remember that t is time, you can therefore imagine that a 4d sphere would appear as nothing before t=-1, at which point it would become an infinitesimal sphere. This sphere would grow until it reached unit dimensions at t=0, and would then shrink back to nothingness. Of course, this is just for visualising a 4d shape. The elegance of this method means that you can assign practically any continuous property to a dimension. You can even use non-continuous properties, provided you limit the domain appropriately. For example, you can use colour. Colour is actually a value on an electromagnetic frequency spectrum. So if you assign colour to the 5th dimension, and set "red" to c=-1, and "blue" to c=1, with a smooth spectrum in between, you can imagine a 5th dimensional sphere.
This would be a growing/shrinking sphere, as for the 4th dimensional sphere, but the sphere would have an infinite number of spheres inside it, of differing colours. In the center of the 5-hypersphere would be two infinitesimal spheres, one red, and one blue. The outside of the sphere when it was at its maximum size and colour variation (x, y, z, t and c all equal 0) would be green. There would be a smooth gradient of colour within the sphere, in BOTH directions (green->red and green->blue). Sort of like an everlasting gobstopper...

You can extend this however you like. Assign pitch, yaw and roll to dimensions - that brings us up to 8-d. Assign other properties - luminescence, transparency, roughness... brings us up to 11-d. You can even use non-visual properties (as these properties are simply there to help us understand the shape) such as volume, frequency of a generated noise, etc.

Maybe I've explained this poorly... I don't know... it's been too long.

Eighty-three percent of all statistical quotes are made up on the spot.
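The growing-and-shrinking picture can be checked with a little algebra: slicing the unit 4-sphere x² + y² + z² + t² = 1 at a fixed time t leaves an ordinary sphere of radius √(1 − t²). A quick sketch (Python, purely illustrative — the function name is mine):

```python
import math

def cross_section_radius(t):
    """Radius of the 3-d sphere you 'see' at time t when slicing a unit 4-sphere."""
    if abs(t) > 1:
        return 0.0          # outside the domain: nothing to see at all
    return math.sqrt(1 - t * t)

# The sphere appears at t=-1, peaks at t=0, and vanishes again at t=1:
for t in (-1, -0.5, 0, 0.5, 1):
    print(t, round(cross_section_radius(t), 3))
```

The same slicing rule explains the 2-d case in the post: a circle cut from a 3-d sphere at z=-0.5 has radius √(1 − 0.25) ≈ 0.866, smaller than the unit circle at z=0.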
<urn:uuid:a035ee28-fcca-4a8f-9e2b-ca95dde1d9c4>
2.890625
729
Comment Section
Science & Tech.
74.785468
"Even if such relatively simple molecules, such as amino acids (the building blocks of proteins), could populate the hypothetical primitive oceans, the formation of biologically active proteins could never occur. Under these conditions, the sequence of the amino acids would be purely by chance. There are 20 different amino acids in proteins. The average protein has 400 amino acids, but even 100 of these amino acids can be arranged in 20^100, or 10^130 different ways. The probability of just one such molecule arising by chance is thus equal to the number one followed by 130 zeros. This is essentially impossible, but to get life started would require billions of tons each of several hundred different proteins and equal quantities of even more complex DNA and RNA molecules." --- Reverend Duane Gish

The following are various organic / biochemical reactions that may have occurred on primitive earth. The reactions are taken directly from the text Biochemistry by Geoffrey Zubay, the second edition, 1988. To be honest, I thought this text was more comprehensive than it appears to be.

In order to address abiogenesis, one first must decide what would be required for a primitive "living" system. Based on the studies of Thomas Cech, Norman Pace, Sidney Altman, and Alan Weiner, I would suggest that a membrane-encapsulated system containing RNA or an RNA-like molecule would be sufficient. This is based upon experiments which have demonstrated that RNA can:
1) act as a polymerase and direct template-specific synthesis of RNA
2) act as a site-specific nuclease to cleave RNA
3) act as a polymerase and direct template-independent synthesis of RNA
The result of these reactions is a molecule that under different ionic conditions can replicate, and release the products of replication via cleavage.
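The arithmetic in the quote — 100 positions with 20 choices each gives 20^100, which is roughly 10^130 — is easy to verify:

```python
import math

arrangements = 20 ** 100            # 100 amino-acid positions, 20 choices each
digits = math.log10(arrangements)   # exponent when written as a power of 10
print(round(digits, 1))             # about 130.1, i.e. roughly 10^130
```

(Whether that combinatorial count is the right model for abiogenesis is exactly what the rest of this thread disputes; the exponent itself, at least, is correct.)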
To my way of thinking, in order to optimize the concentrations, and allow for somewhat adequate conditions for a self-replicating system, it should be self-contained; thus a membrane would be important if not required for our first "living" organism. It is quite possible that the earliest life forms performed these required reactions by nucleating in pockets of salt water saturated clays. Eventually, however, a membrane is required. You should note from the above discussion that proteins are not required for this most primitive of scenarios.

Beyond this, there is circumstantial evidence that would support RNA's role in primitive life. First of all, it is completely ubiquitous and absolutely required for life of all known systems. No known biological systems can survive without RNA. DNA viruses have to go through an RNA intermediate. Not all RNA viruses require a DNA intermediate. This is an important distinction. Secondly, increasing evidence has demonstrated that it is the RNA in ribosomes that is critical for protein synthesis, not the proteins. It appears that the proteins are more of a scaffolding, while the RNA performs the catalytic function. Thus we have evidence of yet another role for RNA - that of polypeptide synthesis. Furthermore, RNA has been implicated in maintenance of telomeres, which is important to prevent loss of genetic information in each round of replication. Other groups have also implicated RNA as a catalyst involved in carbohydrate metabolism. From these examples it is clear that no other molecule is nearly as wide reaching in its biological implications as RNA.

Now, what is required to form an RNA molecule, and is it reasonable to expect that these molecules may have formed spontaneously on primitive earth? To answer the first part, you need bases, a sugar and phosphates. To answer the second part, the answer is yes, and no.
Although the arguments are certainly not definitive, they are currently the best ones that I am aware of, although it is entirely possible that I have missed important research in this area in the last few years. The next message(s) will detail these reactions and my comments on them. Much to my regret, the text that I have does not supply the reactions for lipid synthesis or sugar synthesis. The lipid reactions I have completely forgotten and will have to ignore. The sugar reactions, I remember a bit more of, and will try to recount what I can.

First, I will discuss the biochemistry required for synthesis of the purine bases adenine and guanine. Under conditions postulated to have occurred on primitive earth, all of these reactions have been shown to occur, and the resulting end products are major products of the precursors.

[ASCII structure diagram: HCN condenses to diaminomaleonitrile, a relatively simple product, easily synthesized from hydrogen cyanide.]

Now add a little ionizing radiation and another molecule of HCN and we get: a mess. Organic molecules do not lend themselves well to this medium. Seriously though, you get 5-aminoimidazole-4-carbonitrile, which is a direct precursor of adenine. Just add HCN:

[ASCII structure diagram: 5-aminoimidazole-4-carbonitrile plus HCN gives adenine.]

By adding H2O to 5-aminoimidazole-4-carbonitrile you get a precursor of guanine.

[ASCII structure diagram: the hydrated intermediate. Is it my imagination or are my drawings getting better?]

Anyways, now just add a little cyanogen and voila!

[ASCII structure diagram: guanine.]

Here is guanine. So the purines seem easy enough to make. Let's try some pyrimidines now. Fortunately at least one pathway for pyrimidine synthesis is a bit less complicated than for the purines. For the sake of brevity I will post it here; if you are genuinely curious, you can find all of this in the text cited in the first message.
[ASCII reaction scheme: a cyanate (NCO-) reaction yields cytosine, which hydrolyzes (H2O) to uracil.]

So now we have four bases. The next step is the sugar. To me, this is the biggest problem of the whole thing. Not because sugars would not form spontaneously under these circumstances, but because of the exponential nature of stereoisomers that can form with each additional carbon atom. The number of separate 5-carbon sugars is high enough to make the selection of ribose seem prohibitive. Some researchers think that glycerol or another similar sugar may have evolved first, simulating the structure that would later be achieved through ribose.

[ASCII structure diagrams comparing a glycerol-based nucleoside with a ribose nucleoside; asterisks (*) denote the carbons involved in forming nucleotide polymers, and double asterisks (**) denote the hydroxyl groups required for RNA catalytic activity.]

As can be seen in the above diagrams, glycerol supplies the critical catalytic hydroxyl, but lacks the carbons required for polymerization. To me, this is critical, and needs to be resolved, but until such a time it is the most current thinking. As for the phosphates, suffice it to say that they are added fairly easily. I will look for the lipid reactions, and if I can find them, I will post them along with the phosphate reactions. I hope everyone has found this interesting and informative. --- Jeff Otto

Read about the Urey-Miller experiments at the University of Chicago in the 1950's and then follow it up with study of the more recent work of Dr. Sidney Fox at the University of Miami. Urey-Miller created amino acids by discharging electricity through an atmospheric soup of chemicals. Much as lightning passing through a primordial Earth's atmosphere would have done.
Sidney Fox at the University of Miami took those amino acids (created in the same way) and then, by heating them (to less than 150 degrees F) in conjunction with aspartic and glutamic acids (also created through simulation experiments), was able to polymerize them into proteinoid microspheres. Under a microscope, the microspheres look like primitive cells. In fact, artificially fossilized microspheres are indistinguishable from the earliest known microfossils, which date back to about 3.5 BYA. Although hesitant to claim that these were alive, Dr. Fox stated that they were undeniably "protoalive". This is not an evasive answer. As Tim M. Berra says in "Evolution and the Myth of Creationism" (pg. 75):

"For centuries, science knew nothing intermediate between non-living and living things, but today the distinction is not at all clear. Since life evolved from non-living matter, at some point we must arbitrarily draw a line and say that everything beyond that point is alive. Viruses, for example, appear to be alive when they infect a host, but seem to be non-living when outside a host."

Since a single cell would appear to be the smallest unit that can be said to be alive, proteinoid microspheres may quite justifiably be called protocells, or, life. These are just the early stages of these types of experiments. There is every likelihood that within the next couple hundred years man will be able to create self-replicating life of varying forms from purely chemical and natural elements under laboratory conditions. --- Simon Ewins

See also the Abiogenesis FAQ.
<urn:uuid:cd5a6eff-5982-45b7-b028-357892a1a9c3>
3.640625
2,021
Comment Section
Science & Tech.
43.33008
Plants in Moonlight

Can photosynthesis happen in a bright full moon?

No. According to the following site, moonlight is not strong enough for photosynthesis: "moonlight is 1/50,000 the intensity of sunlight" and "moonlight is too weak to support photosynthesis".

Anthony Brach Ph.D.

Update: June 2012
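The 1/50,000 ratio can be put in context against the light level at which photosynthesis breaks even. This quick check uses typical textbook figures (full sun of ~2,000 µmol photons m⁻² s⁻¹ and a compensation point of ~10 µmol), which are assumptions, not values from the answer above:

```python
full_sun = 2000                  # photosynthetic photon flux in full sun, umol m^-2 s^-1 (assumed)
moonlight = full_sun / 50_000    # the cited 1/50,000 ratio
compensation_point = 10          # rough flux where photosynthesis balances respiration (assumed)

print(moonlight)                        # 0.04 umol m^-2 s^-1
print(moonlight < compensation_point)   # True: hundreds of times too dim
```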
<urn:uuid:ddd41a97-ac76-4865-839f-e2ba9799de7a>
2.6875
82
Knowledge Article
Science & Tech.
58.531667
Reduced Row Echelon Form (RREF)

There is a standard form of a row-equivalent matrix such that, if we perform a sequence of elementary row operations to reach this standard form, we can read off the solution of the linear system. The standard form is called the Reduced Row Echelon Form of a matrix, or matrix RREF in short. An m by n matrix is said to be in reduced row echelon form when it satisfies the following conditions:
1. All rows consisting entirely of zeros are at the bottom of the matrix.
2. The first nonzero entry (the leading entry) of each nonzero row is 1.
3. The leading 1 of each nonzero row is in a column to the right of the leading 1 of the row above it.
4. Each column that contains a leading 1 has zeros in all its other entries.
When only the first three conditions are satisfied, the matrix is called in Row Echelon Form (REF).

You will find that the educational program below is awesome. The interactive program gives many examples to compute the Reduced Row Echelon Form of a matrix input using the three elementary row operations. The computation will show you step by step through both REF and RREF.

How to use? Simply click the Random Example button to create a new random input matrix, then click the "Matrix RREF" button to get the whole sequence of elementary row operations from the input matrix up to the RREF. The results can be in either rational or decimal format.
Yes, this program is a free educational program!! Please don't forget to tell your friends and teacher about this awesome program!

The preferable reference for this tutorial is: Teknomo, Kardi (2011) Linear Algebra tutorial. http://people.revoledu.com/kardi/tutorial/LinearAlgebra/
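The procedure the program animates — repeatedly applying the three elementary row operations (swap two rows, scale a row, add a multiple of one row to another) until the RREF conditions hold — can be sketched in a few lines. This Python sketch is mine, not part of the original tutorial:

```python
from fractions import Fraction

def rref(matrix):
    """Return the reduced row echelon form of `matrix` using only the
    three elementary row operations: swap, scale, and row addition."""
    # Work on exact rationals so results match the tutorial's rational format.
    m = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pivot = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pivot is None:
            continue                                   # no pivot in this column
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]            # swap rows
        m[pivot_row] = [x / m[pivot_row][col] for x in m[pivot_row]]  # scale leading entry to 1
        for r in range(rows):                          # zero out the rest of the column
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return m

print(rref([[1, 2, -1], [2, 3, 1]]))   # [[1, 0, 5], [0, 1, -3]]
```

Using Fraction keeps the arithmetic exact, which is why the result above can be read off directly as the solution x = 5, y = -3.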
<urn:uuid:b692badd-c14a-43bf-bba0-74215118cbf7>
3.3125
518
Tutorial
Science & Tech.
30.749643
by Brian Dunning. No personality in the history of science has been pushed further into the realm of mythology than the Serbian-American electrical engineer Nikola Tesla. He is, without a doubt, one of the true giants in the history of electromagnetic theory. […] Tesla’s unparalleled combination of genius and aberrance have turned him into one of the seminal cult figures of the day. As such, at least as much fiction as fact have swirled around popular accounts of his life, and devotees of conspiracy theories and alternative science hypotheses have hijacked his name more than that of any other figure. In order to put it in the proper perspective, we have to first clear up a popular misconception. Tesla did not invent alternating current, which is what he’s best remembered for. AC had been around for a quarter century before he was born, which was in 1856 in what’s now Croatia. While Tesla was a young man working as a telephone engineer, other men around Europe were already developing AC transformers and setting up experimental power transmission grids to send alternating current over long distances. Tesla’s greatest early development was in his mind: a rotary magnetic field, which would make possible an electric induction motor that could run directly from AC, unlike all existing electric motors, which were DC. […] Tesla built a working prototype, but only two years after another inventor, Galileo Ferraris, had also independently conceived the rotary magnetic field and built his own working prototype. […] Let’s run through a list of some of the seemingly magical feats attributed to Tesla, beginning with: Did Tesla invent X-rays? Tesla did in fact accidentally create the first X-ray photographs in 1895, although inadvertently, when taking a picture of his friend Mark Twain with an early form of fluorescent tube light called a Geissler tube that, unbeknownst to Tesla, also emitted X-radiation. 
Before he could investigate further, his lab burned down and he lost all that work. At nearly the same time, Wilhelm Röntgen announced his discovery of the X-ray. Later Tesla experimented with more powerful tubes to create stronger X-rays. Did Tesla invent radio? Generally, Tesla did beat Guglielmo Marconi to the demonstration of workable wireless communication and Tesla eventually won all the patent disputes (after his death), though Marconi is the one who shared a Nobel Prize for it. However, both men had been building upon theory and experimentation by dozens of other researchers going back nearly a full century. Patents for various types of wireless communication had begun to be filed by other inventors thirty years before either man. […] Did Tesla really sit in the middle of a room filled with lightning bolts? Tesla spent two years in Colorado Springs where the El Paso Electric Company had agreed to give him free power. There he built the world’s largest Tesla coil, the device most often associated with his name. A Tesla coil is a simple type of transformer, taking a low-voltage input and stepping it up to a very high voltage, even over several million volts. […] At full power, enough electrons are sent up that pole that they are forced to burst out into the atmosphere through the torus, creating the familiar lightning-like streamers that characterize Tesla coil demonstrations. Tesla posed for a famous publicity photograph, that you’ve seen many times, of himself sitting in a chair inside his lab taking notes while the air all around him is filled with such streamers from his giant coil. This picture was, unfortunately, a double exposure. Did Tesla cause a field of light bulbs 26 miles away to illuminate wirelessly? He may or may not have. According to biographer John O’Neill, he did, but not quite as magically as is popularly depicted, and no supporting evidence has ever surfaced. 
Tesla discovered that the function served by the long inner coil could also be served by a different type of conductor, including the Earth itself. He took a Tesla coil and stuck its inner secondary coil into the ground. He input electricity to the primary coil, and this setup caused his current to be sent into the Earth. That current could be received by an identical setup, some 26 miles away, by receiving current from the primary coil. Wired to that receiver coil, he had an array of some 200 conventional incandescent light bulbs set out in a field. So although the light bulbs themselves were conventionally wired to a normal power source, that power was transmitted wirelessly. Whether this grand display ever happened or not (nobody has ever been able to duplicate it, despite many attempts), Tesla did record some of the calculations, and photographs do exist of very small scale experiments conducted locally at his lab. [..] Did Tesla create ball lightning? Ball lightning — the very existence of which is dubious at best — beautifully illustrates the type of mythology that has been built up around Tesla. Many sources say he routinely created ball lightning in Colorado Springs, and there are even carefully edited quotes of Tesla’s purporting to describe it. In fact, Tesla is not known to have ever mentioned ball lightning in any of his writing or speaking, and no record from his time is known to exist stating that he created, demonstrated, or knew about anything that could reasonably be called ball lightning […] Did Tesla plan to transmit power world-wide through the sky? It was his ultimate plan, but the farthest he ever got was the partial construction of his famous tower at Wardenclyffe which was intended for wireless communication across the Atlantic. 
His worldwide wireless power system was theoretical only, employing the Schumann-Tesla resonance to charge the Earth’s ionosphere such that a simple handheld coil could receive electrical power for free anywhere, and everywhere, in the world. Tesla’s idea was innovative, but innovative idea it remained, as debts mounted and the tower was dismantled before it ever got to be used. Physicists now consider Tesla’s concept unworkable, and no attempts to test it have ever worked. All sorts of conspiracy theories exist, for example that the HAARP research facility in Alaska is secretly a test of Tesla’s worldwide power grid, or some sort of superweapon based on it. The profound differences between these systems become clear upon doing even the most basic of research. Did Tesla invent a Death Ray? Investment in Tesla’s projects stopped with the advent of the Great Depression in the 1930s. During the final decade of his life, Tesla was essentially penniless and living in a New York hotel, consumed by what we think today was probably obsessive compulsive disorder. It was during this period — and not earlier during his productive laboratory years — that he openly spoke of having built and tested a Death Ray. None of Tesla’s lab assistants ever corroborated this, and no papers, prototypes, or evidence have ever surfaced. He gave vague descriptions with only inadequate hints of what type of technology such a weapon might use. Whether this was mere showmanship to attract new investment, was a legitimate but unknown concept, or was only the ramblings of a deteriorating mind, will probably never be known. Did the government seize all his notes upon his death? Yes, they did. Tesla died in January of 1943, during some of the darkest hours of World War II. […] The year before, nearly all Japanese Americans were imprisoned in an effort to prevent spying. 
So it wasn’t that big of a stretch for the government, having heard his claims of a Death Ray, to employ a statute enacted during World War I that enabled an Alien Property Custodian to seize all assets of any enemy during wartime — even though Tesla was an American citizen. They entered his New York hotel room and seized all his documents, which was all that remained of his life’s work by that time. It wasn’t very much, as Tesla’s habit throughout his life was to keep plans in his head. […] Appreciate the man, not the myth. Hardly anything written about Nikola Tesla fails to exaggerate his inventions and deify the man. Factually wrong descriptions of his accomplishments are found all over the place. His name is broadly smeared by association with virtually every crank conspiracy theory on the planet. […] Taking the trouble to learn about Tesla, about his unique personal history and about the reality of what his true contributions were, will always put you on firmer ground than accepting the untrue exaggerated or conspiratorial claims. Whenever you hear a good scientist’s name co-opted and exploited by the promoters of crankery, you should always be skeptical. 
<urn:uuid:3ea12622-47c4-4fdd-9e98-2daa7d1f8490>
3.359375
2,391
Listicle
Science & Tech.
31.795435
The high mountain forests of western North America need fire. Fire returns nutrients to the soil and replaces old stands and ground debris with young forest. Intense fires are a characteristic of the conifer forests, though they occur infrequently—once every 100 to 300 years. The year 1988 brought one of those infrequent, severe fires to Yellowstone National Park. Drought and high temperatures combined to create extreme fire conditions. Fifty wildfires ignited, seven of which grew into major wildfires. By the end of the year, 793,000 acres had burned. This false-color image, taken by the Landsat 5 satellite in 1989, shows the burn scar left on the landscape (orange and red) by the 1988 inferno. It takes many decades for a conifer forest to recover to pre-fire conditions, and through the use of Landsat, researchers have been able to chronicle the recovery over the past two decades. See the year-to-year images in Earth Observatory’s World of Change article: Burn Recovery in Yellowstone. Western conifers burn when temperatures are high and plants and soil are dry. Such conditions will come together more frequently as the climate changes over the next century, and fires are already becoming more frequent. A 2011 study combined several climate models to estimate how fire could change in the Yellowstone ecosystem. Yellowstone is near a tipping point, the researchers assert, as warmer, dryer conditions will likely allow large fires to burn as frequently as every 30 years. When fires occur infrequently, the forest has time to recover. More frequent fires, however, give the conifers little time to grow back. If this occurs, Yellowstone could lose its dense conifer forests and replace them with low montane woodland and grassland by 2050.
<urn:uuid:bc8e29c3-06e6-4677-ad61-6d63077192ea>
4.5
357
Knowledge Article
Science & Tech.
42.042284
Part of the beauty of Haskell is that it allows you to simply write recursive functions. But part of the problem with recursive functions is that they tend to have absolutely horrible big-O run times. The usual solution to this problem is to use what's known as memoization, which is memorization without the 'r', since programmers have to have special names for everything. Memoization is usually implemented as an associative array (or a plain array in the common case where the function takes a single non-negative integer as an argument); the function attempts to look up the return value for its arguments in an associative array. If it finds it, it can return without doing expensive computation; if it doesn't, then it performs the computation, stores the result in its array, and then returns.

In Python, a memoized Fibonacci function might be written as follows:

fib_cache = {}
def fib(n):
    if n < 2:
        return n
    if n not in fib_cache:
        fib_cache[n] = fib(n-1) + fib(n-2)
    return fib_cache[n]

The speed savings gained by this are enormous; on my test machine, fib(35) takes 15 seconds to compute without memoization, whereas fib(1000) computes almost instantly with memoization. In terms of big-O running times, I believe that the memoized version takes O(n) time, whereas the unmemoized version takes exponential time, O(phi^n), which is interesting since the Fibonacci numbers themselves are Theta(phi^n). In any case, the memoized version is clearly superior.

But how do you do this in a language such as Haskell? You can't carry state between the various incarnations of the function, since that could potentially lead to the function's values not solely depending on its arguments, violating referential transparency. You can't carry the state around in a monad because then different calls to the function would each have separate caches, so you'd have to pull some kind of trick where the function returns itself and its value, then pass the function around, and it would just be a huge mess.
So instead what you do is you use Data.MemoCombinators, which is a package that lets you turn functions into other, memoized functions. So how do you use it? It's not too hard, especially if you're memoizing functions that only use builtin types. An example, straight from the Data.MemoCombinators page:

fib = Memo.integral fib'
  where
    fib' 0 = 0
    fib' 1 = 1
    fib' x = fib (x-1) + fib (x-2)

There are two things to note here: first, the memoized version, fib, is generated from the non-memoized one by calling Memo.integral on it. This is how you create memoized versions of single-variable functions: you apply the appropriate combinator. Second, fib' calls fib inside it. This is very important: if fib' called fib', then you couldn't save time within the fib function, only outside of it. With fib' calling fib, on the other hand, the first time you call fib 1000, not only will it return before the heat death of the universe, but you'll also get fib 999, fib 998, etc. cached.

But what if your function to be memoized isn't one of the standard types? That's why there's Memo.wrap. You just have to define two mappings: one from your type to some combination of MemoCombinator types, and one that goes from that combination back. An example will make it clear: first you build up a memoString type which can memoize Strings; since a String is just a list of Chars, you can just apply Memo.list to Memo.char. Then you define toFoo and fromFoo, which send you from the abstracted Foo type to a tuple of a String and an Int. Finally, you use Memo.wrap to 'wrap' the pair of a memoized String and a memoized Int (constructed using Memo.pair, naturally) up in an abstract memoFoo memoizer.

The other thing you can do with MemoCombinators is memoize functions of multiple variables. Take this sample of code from a project I'm working on:

(*) = Memo.memo2 memoNimber memoNimber (*!)
  where
    x *! (Nimber 1) = x
    (Nimber 1) *! x = x
    a *! b = mex $ liftM2 combine [0 .. pred a] [0 .. pred b]
    mex xs = fromJust $ find (`notElem` xs) [0..]
    combine a' b' = a * b' + a' * b + a' * b'

The actual definition of *! isn't important; I'm only including it for completeness. Nor are the definitions of toNimber and fromNimber. What is important is Memo.memo2: you use it to generate a memoized function of multiple arguments. You just pass it memoizers for each of its arguments (since * takes two Nimbers, I pass it memoNimber twice) and the unmemoized version, and it gives you a memoized version.

As for how Data.MemoCombinators works, I can't really explain that. I know it has to do with the fact that expressions in function definitions are cached, but beyond that my knowledge fails. Maybe if I ever learn it I'll return to this and explain it.

Edit: After I wrote this I realized that Data.MemoTrie exists; while it has cleaner syntax for memoizing functions (the memoizer doesn't need to know the types of the arguments), it has a disadvantage in that it's not immediately obvious how to memoize the types it doesn't give you. But if you're just memoizing functions of Ints or something, go ahead and use MemoTrie.
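Based on the description of the memoString / Foo example above, the code would look roughly like the following sketch. The Foo type and its field choices are assumptions reconstructed from the prose, and the memoize package's Data.MemoCombinators API (Memo.list, Memo.char, Memo.pair, Memo.integral, Memo.wrap) is assumed:

```haskell
import qualified Data.MemoCombinators as Memo

-- A hypothetical user-defined type carrying a String and an Int.
data Foo = Foo String Int

-- A memoizer for Strings: a String is just a list of Chars.
memoString :: Memo.Memo String
memoString = Memo.list Memo.char

-- Mappings between Foo and a tuple of already-memoizable types.
toFoo :: (String, Int) -> Foo
toFoo (s, n) = Foo s n

fromFoo :: Foo -> (String, Int)
fromFoo (Foo s n) = (s, n)

-- Wrap the memoized (String, Int) pair up into a memoizer for Foo.
memoFoo :: Memo.Memo Foo
memoFoo = Memo.wrap toFoo fromFoo (Memo.pair memoString Memo.integral)
```

With memoFoo in hand, any function taking a Foo can be memoized the same way fib was: memoized = memoFoo slowFunction.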
<urn:uuid:702ed4a0-2917-45ed-9cea-1faf6787209c>
3.0625
1,306
Personal Blog
Software Dev.
61.493426
Two other chemical families that are important in petroleum refining are composed of unsaturated molecules. In unsaturated molecules, not all the valence electrons on a carbon atom are bonded to separate carbon or hydrogen atoms; instead, two or three electrons may be taken up by one neighbouring carbon atom, thus forming a "double" or "triple" carbon-carbon bond. Like... Alkanes are described as saturated hydrocarbons, while alkenes, alkynes, and aromatic hydrocarbons are said to be unsaturated.
<urn:uuid:c1c6f950-e24b-43cb-b7cb-37384cb58b04>
3.625
167
Knowledge Article
Science & Tech.
30.089604
Chrooting has been around for a long time now. Chrooting makes a program believe that some subdirectory of the file system is actually the root of the hierarchy. For example, if I wanted to create a chroot in /chroot/httpd, a program executed from within the chroot would believe that "/chroot/httpd" was actually "/". Therein lies the beauty, as the program can't reach any files outside "/chroot/httpd". Security of the server as a whole is increased due to the fact that the system binaries are off limits. In addition, chroots usually only have the bare minimum files inside, so exploits have a harder time breaking in.
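A minimal jail layout can be sketched like this. The paths are illustrative only (a throwaway directory under /tmp is used so the setup itself needs no privileges; actually entering the jail with chroot does require root):

```shell
# Sketch: lay out a minimal chroot jail with only the bare-minimum files.
JAIL=/tmp/chroot-demo
mkdir -p "$JAIL/bin" "$JAIL/lib" "$JAIL/etc"

# Copy in only what the jailed program needs, e.g. a shell:
cp /bin/sh "$JAIL/bin/" 2>/dev/null || true
# ldd on the binary lists the shared libraries to copy into $JAIL/lib:
#   ldd /bin/sh

# Entering the jail requires root privileges:
#   sudo chroot "$JAIL" /bin/sh
# Inside, "/" now resolves to $JAIL, so nothing outside it is reachable.

ls "$JAIL"
```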
<urn:uuid:659395bb-ecea-4133-889b-9b5918be1058>
2.765625
152
Documentation
Software Dev.
58.531667
Someone once said: "Time and space are modes by which we think and not a condition in which we live." I did not get exactly what it meant. Can you help me? It is impossible to say absolutely that two things happen at the same time; it depends on the motion of the observer. For example, suppose you were standing in your room and saw two distant flares go off at the same time, one in the southwest, and one in the northwest. A pilot flying from south to north in a supersonic aircraft would see the two flares, too, but because he is moving toward the flare in the northwest, the light from it would reach him a fraction of a second before the light from the flare in the southwest. This effect has strange consequences for speeds close to the speed of light. Particles that are known to have a lifetime of two microseconds when they are at rest appear to have lifetimes of 100 microseconds when they are moving at speeds very close to the speed of light. The same thing would happen to us if we could move close to the speed of light. We would think we were living a normal life, but a person on Earth would think we were living much longer than normal. The same holds for the length of an object. Its length depends on the speed at which it is moving relative to the measuring device. Fortunately we can work it all out with the formulas given by Einstein's theory. So, we don't live in just space or time, but in space-time. Here is a quote from Hermann Minkowski, one of Einstein's professors who once called Albert "a lazy dog", but later became famous for working on Einstein's theory of relativity: "Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality." Strong gravitational fields, like those thought to exist around black holes, can also warp space and time and cause even weirder effects, but that is another story.
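The 2-microsecond to 100-microsecond stretch quoted above follows directly from Einstein's time-dilation formula t = t0 / sqrt(1 - v^2/c^2). A small sketch (the speed 0.9998c is an illustrative choice that gives a dilation factor of about 50):

```python
from math import sqrt

def dilated_lifetime(rest_lifetime_us, speed_fraction_of_c):
    """Observed lifetime of a moving particle, per special relativity."""
    gamma = 1.0 / sqrt(1.0 - speed_fraction_of_c ** 2)  # Lorentz factor
    return rest_lifetime_us * gamma

# A particle with a 2-microsecond rest lifetime, moving at 99.98% of c:
print(round(dilated_lifetime(2.0, 0.9998)))  # → 100 (microseconds)
```

At everyday speeds the factor is essentially 1, which is why we never notice the effect; it only becomes dramatic as the speed fraction approaches 1.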
<urn:uuid:9b1986de-84e7-4664-920c-bdb6cee78732>
3.046875
422
Q&A Forum
Science & Tech.
58.953586
To test your code you will often be given testing code, and you may sometimes develop your own tests. When code tests individual methods in your class, these tests are called unit tests, so you use the standard JUnit unit-testing library with the appropriate Java class files to test your classes. To run a class as a JUnit test, first use the Run As option in the Run menu as shown on the left below, then select the JUnit option as shown on the right below. Most of you will have that as the only option; I have two "run as" options on my machine. You may often be given several testing methods in the JUnit test code provided as part of an assignment. Each testing method is preceded by the annotation @Test, as you can see in the testing code provided as part of both the DNA assignment and the Boggle assignment. If the JUnit tests pass, you'll get all green as shown on the left below. That screenshot was taken using the testing class provided as part of the DNA assignment. Otherwise you'll get red -- on the right below -- and an indication of the first test to fail. Fix that, then go on to more tests. The red was obtained from an implementation of the DNA assignment LinkStrand that had nothing but code stubs generated by Eclipse (with a constructor added).
<urn:uuid:0344f938-3613-4c2d-a588-71d5eaccd915>
2.796875
289
Tutorial
Software Dev.
57.910773
Map of the thickness of the Mt. Simon sandstone in the Illinois Basin (Image: Midwest Geological Sequestration Consortium) Map of Illinois showing Decatur near to the center (Image: Shutterstock) A scheme to inject 1 million tonnes of carbon dioxide under Decatur, Illinois seeks to raise public awareness of the potential environmental benefits of carbon sequestration. (Photo: Shutterstock) A bold undertaking to store one million metric tonnes (1.1 million short tons) of carbon dioxide in a sandstone reservoir 1.3 miles (2.1 km) below Decatur, Illinois, is well under way. The project began last November, and has so far injected more than 75,000 tons of carbon dioxide, almost one tenth of the target. The University of Illinois, which is leading the Illinois Basin - Decatur Project (IBDP), hopes that the scheme will demonstrate the safety and effectiveness of carbon sequestration, as well as raise public awareness of the process's potential environmental benefits.
<urn:uuid:84eec22c-c84c-4dff-8371-ceccb8cbf3e3>
3.1875
206
Truncated
Science & Tech.
32.359988
Predicting Invasions of Nonindigenous Plants and Plant Pests
in their floras (Vitousek et al. 1996). Deliberately introduced species can play a role in the maintenance and management of ecosystem processes. Examples of such species are natural enemies of pests for biological control; aesthetically pleasing, fast-growing, pollution-resistant horticultural plants; fish communities in reservoirs; and grasses that can reclaim strip-mined land in arid regions. The danger arises from nonindigenous species that either play no constructive role or play unexpected roles in their new ranges. Invaders that affect ecosystem processes—such as productivity, nutrient cycling, or disturbance regimes—have been viewed as the most difficult to quantify and verify (Vitousek and Walker 1989, Mack and D’Antonio 1998). In a sense, changing ecosystem processes “changes the rules of the game” in a way that influences many, if not all, of the component species. Plant invasions can also alter nutrient-cycling patterns, as illustrated by the invasion of the nitrogen-fixing tree Myrica faya on volcanic surfaces in Hawaii (Vitousek and Walker 1989). The invasion of American rangelands by Bromus tectorum (cheatgrass) has increased the frequency and intensity of fires, thereby transforming steppe once dominated by the shrub Artemisia tridentata (big sagebrush) to annual grasslands (Whisenant 1990). Similarly, the invasion of nonindigenous annual grasses into Californian chaparral has resulted in more-frequent and more-intense fires, which in turn have altered species composition (Zedler et al. 1983).
Plant invasions can also alter hydrology, as illustrated by Melaleuca (Melaleuca quinquenervia), which increases soil elevations and thereby has influenced the hydrology of Florida wetlands (Schmitz et al. 1997), and by the invasion of Pinus spp. into the South African fynbos, which has radically reduced the water yield of catchments (Le Maitre et al. 1995). A recent review (Parker et al. 1999) indicates that most studies of the impacts of invaders on ecosystem processes have concentrated on the effects of the plants—through uptake of light, nutrients, or water—on other plant species. Native animals are also affected by plant invaders (Braithwaite et al. 1989), through loss of habitat and loss of food resources; these interactions have been little studied and might well be underestimated. An invader would have substantial social or economic effects if it altered “ecosystem services” (cf. Ehrlich and Mooney 1983), such as maintaining the gaseous composition of the atmosphere, controlling regional climates, generating and maintaining soils, controlling floods, disposing of wastes, recycling nutrients, and controlling pests (Ehrlich and Wilson 1991). A potentially global change is under way through the conversion of much of the forested Amazon drainage to grasslands. Huge swaths of tropical forest continue to be cleared, burned, and sown with nonindigenous grasses. These grasses, such as Melinis minutiflora and Brachiaria spp., which were introduced primarily from Africa, are forming a variety of new communities: some appear to require continual
<urn:uuid:005446ec-c794-43b6-8d41-2d1df8f7e674>
3.25
722
Truncated
Science & Tech.
24.998977
ON 9 June 1994, a remarkable earthquake struck beneath the rainforests of Bolivia. Measuring 8.3, it was the largest anywhere on Earth for almost two decades. But nobody was killed, because it took place an astonishing 640 kilometres beneath the ground. Though there have been other recorded earthquakes as deep as this, none were anything like as big as the Bolivian earthquake. It was also exceptionally well recorded. By rare good fortune, two temporary networks of seismic instruments were taking readings in Bolivia and Brazil as part of a research programme by seismologists from the University of Arizona and the Carnegie Institute of Washington. Signals from the earthquake were also picked up around the world by a recently installed network of modern, broadband seismometers. Just a few years ago, most of these stations had older, analogue seismographs that would have been swamped by an earthquake this big. Seismologists have now ...
<urn:uuid:79886d1c-48cf-4329-a78d-1b5bdcf9c666>
3.84375
210
Truncated
Science & Tech.
42.058553
posted June 19, 2007
Oxidants from Pulverized Minerals --- Laboratory measurements of hydrogen peroxide produced from crushed basaltic minerals immersed in water have important implications for Martian and lunar dust.
Written by Linda M. V. Martel
Joel Hurowitz (previously at State University of New York at Stony Brook and now at the Jet Propulsion Laboratory), Nick Tosca, Scott McLennan, and Martin Schoonen (SUNY at Stony Brook) studied the production of hydrogen peroxide (H2O2) from freshly pulverized minerals in solution. Their experiments focused on olivine, augite, and labradorite: silicate minerals of basaltic planetary surfaces, such as the Moon and Mars, that are exposed to the intense crushing and grinding of impact cratering processes. The H2O2 produced in the experiments was enough to adequately explain the oxidizing nature of Martian regolith first determined by the Viking Landers, and the results suggest, for the first time, that mechanically activated mineral surfaces may be an important part of the overall explanation for the Viking Lander biology experiment results. Hurowitz and coauthors further showed that when the pulverized minerals are heat-treated to high temperature under vacuum (to cause dehydroxylation) there is almost a 20 times increase in H2O2 production, a result which may be highly relevant to lunar dust. These careful studies demonstrate the importance of and concern about reactive dusts on planetary surfaces from two standpoints: the health of astronauts on surface maneuvers who may inadvertently breathe it, and the viability of possible Martian organic species to survive in such a corrosive, antiseptic surface environment. From the time of the 1976 Viking Lander Gas Exchange (GEX) and Labeled Release (LR) experiments on the surface of Mars, researchers have known that Martian regolith is highly oxidizing. What the oxidant is and how it is formed has been debated ever since.
Hydrogen peroxide (H2O2) was an early candidate and scientists using powerful telescopes finally reported in 2004 the detection of H2O2 in the Martian atmosphere. Yet, researchers discovered that the observed abundance of H2O2 in the Martian atmosphere was at least a factor of 1000 to 10,000 lower than the amount of oxidizer in the Martian regolith estimated from the Viking experiments at Chryse and Utopia Planitia. |Trenches in the regolith at Chryse Planitia made by the Viking Lander 1 trenching arm can be seen in the lower right of this image, which was taken by the onboard camera. In the distance are low dunes composed of fine-grained material. Part of the Lander can be seen in the lower left, as well as the extended meteorology boom that was used for atmospheric experiments.| There have been many ideas proposed to explain the excess oxidant on the Martian surface, including biogenic processes or chemical changes above the Martian surface when dust devils and storms generate large-scale electrostatic fields. Hurowitz and coauthors addressed the issue by considering the effects that impact cratering has on planetary surfaces and how the resulting mechanically pulverized mineral dust would react with water to produce reactive oxygen species, such as H2O2. Though the Martian surface is dry today, there is abundant evidence of a warmer and wetter past [see, for example, PSRD articles: Magma and Water on Mars, Gullies and Canyons, Rocks and Experiments: The Mystery of Water on Mars, and Liquid Water on Mars: The Story from Meteorites] and a chance for H2O2 production. Whenever water flowed or clouds drizzled, water could come in contact with reactive surface grains. On the Moon it never rains, but water would come in contact with dust grains when astronauts carried dust into their habitats and when they breathed the activated dust. 
The idea of experimenting with reactive dusts follows naturally from years of medical research of human exposure to fine-grained carcinogenic quartz dusts and associated lung diseases. So, Hurowitz and coauthors used their cosmochemistry expertise to extend the study of H2O2 production from quartz dust to freshly ground silicate minerals that are known to exist in the basaltic crusts on Mars and the Moon. Hydrogen peroxide is a powerful oxidizer and disinfectant and has been referred to previously (e.g. Atreya and others) as a possible answer to why no organics have ever been detected by spacecraft on the surface on Mars. So the task laid out by Hurowitz and colleagues was to measure the quantity of H2O2 produced from a suite of basaltic silicate minerals common on Mars and the Moon, including olivine, augite, and labradorite. These minerals are pulverized into planetary regolith and dust during impact cratering events. Hurowitz and coauthors simply opted for a rock mill in the laboratory. They used an electron microprobe to determine the chemical compositions of their samples and then crushed the mineral samples into powders with grain sizes ranging from about 0.5 to 350 micrometers. These lab powders were slightly coarser-grained with a lower surface area compared to Martian regolith, but similar in grain size and surface area to lunar regolith. Each separate mineral powder was mixed with deionized water, then filtered and analyzed for H2O2 concentration. This formation mechanism for H2O2 works because the process of grinding causes chemical bonds to break and the surfaces of the freshly pulverized minerals become sites of highly reactive radical species. These radical species are stable while dry, but produce H2O2 when immersed in water. The researchers found that the production of H2O2 was not so dependent on grain size or surface area or amount of sample used in the experiments as on the actual structure of the mineral phase itself. 
Silica tetrahedra exist as separate, isolated structures or chains or three-dimensional structures. The isolated silica tetrahedra (e.g. olivine) share no corners, are known to be easily weathered (chemically altered), and produced the highest amounts of H2O2 during the experiments (see plot below). They computed how much H2O2 was produced by normalizing the total number of nanomoles of H2O2 in solution by the total surface area (m2) of the mineral powders. For example, olivine powder produced 21 to 25 nanomoles H2O2/m2, augite produced 6.6 to 9.1 nanomoles H2O2/m2, and labradorite produced about 1 nanomole H2O2/m2. Interesting too, Hurowitz and colleagues measured <1 namomole H2O2/m2 produced from immersed quartz powders, which shows clearly that the basaltic silicate minerals are capable of forming higher concentrations of H2O2 in solution than is quartz under the same conditions of mechanical pulverization. |Experimental results are shown in the top graph. Hydrogen peroxide production (left axis) is plotted as a function of the number of shared SiO4 corners in the silicate structure. Olivine, which shares no corners, reacts most readily. The right axis shows the results of the Viking GEX experiment, which measured the amount of oxygen produced. If we assume that one mole of H2O2 produces one mole of O2, then this process explains the Viking results. The two vertical axes are expressed in per square meter because they are normalized to surface area of the mineral powders.| The lower portion of this figure shows what the different silica tetrahedra look like. Dark blue is silicon, red is oxygen, and light blue is aluminum. Hurowitz and coauthors were also interested in how much H2O2 would form in solution from dehydroxylated mineral samples. Dehydroxylated minerals have surface-bound water (H2O) and hydroxyl (OH) stripped away by heat treatment under vacuum (see diagram below). 
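The normalization described above (total nanomoles of H2O2 in solution divided by total powder surface area) is a simple calculation; the run values below are invented for illustration, not the study's raw data:

```python
def specific_yield(total_nmol_h2o2, surface_area_m2):
    """Surface-area-normalized H2O2 production, in nmol/m^2."""
    return total_nmol_h2o2 / surface_area_m2

# Illustrative run: 50 nmol of H2O2 from a powder totaling 2.2 m^2 of surface
y = specific_yield(50.0, 2.2)
print(round(y, 1))  # → 22.7, within the 21-25 nmol/m^2 range reported for olivine
```

Normalizing by surface area is what lets powders of different grain sizes (and hence different total areas) be compared on a single axis, as in the plot described above.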
Two labradorite samples were dehydroxylated before being immersed in water and they yielded 24 ± 4 and 33 ± 3 nanomoles H2O2/m2, quantities significantly greater than those produced by non-heated samples mentioned above. When the experiments were repeated on the dehydroxylated samples 30 minutes later the H2O2 values were slightly lower, which suggests to the research team that there might be another, shorter-lived oxidant produced when heat-treated mineral samples are immersed in water. They don't know what that other oxidant would be. The results, nonetheless, are significant for the Moon where plagioclase-rich regolith may exist in a partially dehydroxylated state that researchers attribute to impact heating in vacuum and high daytime surface temperatures. |This is a drawing of the heat treatment apparatus Hurowitz and colleagues used on the ground labradorite powders.| Assuming that 1 mole of H2O2 results in 1 mole of O2, Hurowitz and colleagues' experiments show that their mineral powders released enough oxidant to adequately explain the Viking Lander Gas Exchange (GEX) results without having to invoke any other explanation for Martian regolith reactivity. They offer the simple explanation that oxidant was formed as a result of reactions between water and silicate mineral surfaces that had been crushed and activated by impact pulverization. Yet they concede that this mechanically-induced reactivity may really be only one contribution to the overall reactivity of planetary regolith. Cosmochemical analyses show that the physical, optical, chemical, and mineralogical properties of Martian and lunar regoliths have clearly been modified since formation. For instance, on Mars, water, wind, and chemical weathering have changed material properties and have created new secondary alteration minerals (see PSRD article: Pretty Green Mineral--Pretty Dry Mars?) 
On the Moon, impact gardening and space weathering have caused changes in surface materials (see PSRD article: New Mineral Proves an Old Idea about Space Weathering.) How any of these subsequent modification processes and products have influenced the reactivity of the regolith materials and how deep the contamination goes are still to be determined. High concentrations of reactive oxygen species, such as H2O2, in planetary regoliths and dusts are of great concern and there is strong interest in characterizing them so that mitigation procedures can be worked out if fine-grained dust poses a health hazard to the respiratory systems of future astronauts working on the Martian or lunar surface. Furthermore for Mars, the radiation and oxidative environment have important ramifications on "life on Mars" surface and subsurface habitability issues. Hurowitz and his coworkers' work was funded in part by NASA's Cosmochemistry program, which funds basic research in planetary materials. The applicability to life on Mars and to future missions to the Moon, Mars, and beyond shows how strongly basic research is connected to fundamental scientific questions and to mission planning.
<urn:uuid:d8de570b-6c25-4338-8ed8-4cdaa4deed45>
3.5625
2,317
Academic Writing
Science & Tech.
22.757544
Brahmagupta's formula finds the area of any quadrilateral given the lengths of the sides and some of their angles. In its most common form, it yields the area of quadrilaterals that can be inscribed in a circle.
In its basic and easiest-to-remember form, Brahmagupta's formula gives the area of a cyclic quadrilateral whose sides have lengths a, b, c, d as
K = √((s − a)(s − b)(s − c)(s − d))
where s, the semiperimeter, is determined by
s = (a + b + c + d)/2.
This formula generalizes Heron's formula for the area of a triangle. The area of a cyclic quadrilateral is the maximum possible area for any quadrilateral with the given side lengths.
Proof of Brahmagupta's formula
Let the cyclic quadrilateral be ABCD with AB = a, BC = b, CD = c, DA = d, and draw the diagonal BD. Then
Area of ABCD = Area of △ADB + Area of △BDC = (1/2)ad·sin A + (1/2)bc·sin C.
But since ABCD is a cyclic quadrilateral, C = 180° − A, hence sin C = sin A. Therefore
Area = (1/2)(ad + bc)·sin A.
Applying the law of cosines for △ADB and △BDC and equating the expressions for side BD², we have
a² + d² − 2ad·cos A = b² + c² − 2bc·cos C.
Substituting cos C = −cos A (since angles A and C are supplementary) and rearranging, we have
2(ad + bc)·cos A = a² + d² − b² − c².
Substituting this in the equation for the area gives
16·(Area)² = 4(ad + bc)²·sin²A = 4(ad + bc)² − (a² + d² − b² − c²)²,
which is of the form x² − y² and hence can be written in the form (x + y)(x − y) as
16·(Area)² = (2(ad + bc) + a² + d² − b² − c²)(2(ad + bc) − a² − d² + b² + c²)
= ((a + d)² − (b − c)²)((b + c)² − (a − d)²)
= (a + d + b − c)(a + d − b + c)(b + c + a − d)(b + c − a + d)
= 16(s − a)(s − b)(s − c)(s − d).
Taking the square root, we get
Area = √((s − a)(s − b)(s − c)(s − d)).
Extension to non-cyclic quadrilaterals
In the case of non-cyclic quadrilaterals, Brahmagupta's formula can be extended by considering the measures of two opposite angles of the quadrilateral:
K = √((s − a)(s − b)(s − c)(s − d) − abcd·cos²θ)
where θ is half the sum of two opposite angles. (The pair is irrelevant: if the other two angles are taken, half their sum is the supplement of θ. Since cos(180° − θ) = −cosθ, we have cos²(180° − θ) = cos²θ.) This more general formula is sometimes known as Bretschneider's formula, but according to MathWorld it is apparently due to Coolidge in this form, Bretschneider's expression having been
K = (1/4)√(4p²q² − (b² + d² − a² − c²)²)
where p and q are the lengths of the diagonals of the quadrilateral. It is a property of cyclic quadrilaterals (and ultimately of inscribed angles) that opposite angles of a quadrilateral sum to 180°. Consequently, in the case of an inscribed quadrilateral, θ = 90°, whence the term abcd·cos²θ vanishes, giving the basic form of Brahmagupta's formula.
Heron's formula for the area of a triangle is the special case obtained by taking d = 0. The relationship between the general and extended form of Brahmagupta's formula is similar to how the law of cosines extends the Pythagorean theorem.
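Both the basic formula and its Heron special case are easy to check numerically; a quick sketch:

```python
from math import sqrt

def brahmagupta(a, b, c, d):
    """Area of a cyclic quadrilateral with side lengths a, b, c, d."""
    s = (a + b + c + d) / 2  # semiperimeter
    return sqrt((s - a) * (s - b) * (s - c) * (s - d))

# A square of side 2 is cyclic; its area is 4
print(brahmagupta(2, 2, 2, 2))  # → 4.0

# Setting d = 0 recovers Heron's formula: a 3-4-5 right triangle has area 6
print(brahmagupta(3, 4, 5, 0))  # → 6.0
```

Note that the formula applies only to side lengths that can actually close up into a cyclic quadrilateral; otherwise the product under the square root can go negative.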
<urn:uuid:2082e138-886a-46f6-8bce-3eca50864f4e>
3.703125
552
Knowledge Article
Science & Tech.
27.888803
Extensive alternative splicing (AS) of precursor mRNAs (pre-mRNAs) in multicellular eukaryotes increases the protein-coding capacity of a genome and allows novel ways to regulate gene expression. In flowering plants, up to 48% of intron-containing genes exhibit AS. However, the full extent of AS in plants is not yet known, as only a few high-throughput RNA-Seq studies have been performed. As the cost of obtaining RNA-Seq reads continues to fall, it is anticipated that huge amounts of plant sequence data will accumulate and help in obtaining a more complete picture of AS in plants. Although it is not an onerous task to obtain hundreds of millions of reads using high-throughput sequencing technologies, computational tools to accurately predict and visualize AS are still being developed and refined. This review discusses the tools to predict and visualize transcriptome-wide AS in plants using short reads and highlights their limitations. Comparative studies of AS events between plants and animals have revealed that there are major differences in the most prevalent types of AS events, suggesting that plants and animals differ in the way they recognize exons and introns. Extensive studies have been performed in animals to identify cis-elements involved in regulating AS, especially in exon skipping. However, few such studies have been carried out in plants. Here, the current state of research on splicing regulatory elements (SREs) is reviewed and emerging experimental and computational tools to identify cis-elements involved in regulation of AS in plants are discussed. The availability of curated alternative splice forms in plants makes it possible to use computational tools to predict SREs involved in AS regulation, which can then be verified experimentally. Such studies will permit identification of plant-specific features involved in AS regulation and contribute to deciphering the splicing code in plants.
Tools for predicting isoforms, their expression, and alternative splicing from RNA-Seq data.
|Tool||Task||Input|
|Trans-ABySS (Robertson et al., 2010)||IP, IE||De novo|
|Trinity (Grabherr et al., 2011)||IP, IE||De novo|
|Rnnotator (Martin et al., 2010)||IP||De novo|
|Scripture (Guttman et al., 2010)||IP||G|
|IsoLasso (Li et al., 2011)||IP, IE||G|
|NSMAP (Xia et al., 2011)||IP, IE||G|
|Cufflinks (Trapnell et al., 2010)||IP, IE||G, A|
|TAU (Filichkin et al., 2010)||IP||G, A|
|SpliceGrapher (Rogers et al., 2012)||SG||G, A|
|IsoEM (Nicolae et al., 2010)||IE||G, A|
|IsoformEX (Kim et al., 2011)||IE||G, A|
|SpliceTrap (Wu et al., 2011)||IE||G, A|
|NEUMA (Lee et al., 2011)||IE||G, A|
|Solas (Richard et al., 2010)||IE||G, A|
|rSeq (Jiang and Wong, 2009)||IE||G, A|
|RSEM (Li et al., 2010; Li and Dewey, 2011)||IE||De novo|
The tools vary in the specific task they address; we distinguish between several tasks: isoform prediction (IP), isoform expression (IE) and splice graph prediction (SG). The tools also vary in the input data they require: de novo (no input required except for the RNA-Seq data), a reference genome (G) or annotated isoforms (A).
- Reddy AS, Rogers MF, Richardson DN, Hamilton M, Ben-Hur A. (2012) Deciphering the plant splicing code: experimental and computational approaches for predicting alternative splicing and splicing regulatory elements. Front Plant Sci 3, 18. [article]
<urn:uuid:0fe84bfc-3c0e-4be0-9a3c-565f6322ca05>
3.015625
944
Academic Writing
Science & Tech.
48.210909
Tornado observed by the VORTEX-99 team on May 3, 1999, in central Oklahoma. Note the tube-like condensation funnel, attached to the rotating cloud base, surrounded by a translucent dust cloud. Courtesy of NOAA
Before 1971, there was no way for scientists to rank a tornado's strength. How big the tornado looked had no bearing on how strong it actually was. In 1971, Professor Fujita came up with a system to rank tornadoes according to how much damage they cause. This was called the Fujita Scale. As of February 1, 2007, a new scale for rating the strength of tornadoes is being used. It is called the Enhanced Fujita Scale. The Enhanced Fujita Scale, or EF Scale, has six categories from zero to five, with EF5 being the highest degree of damage. The scale was used for the first time when three separate tornadoes struck central Florida early on February 2, 2007. These tornadoes destroyed many houses and businesses and killed at least 21 people. And these tornadoes were only rated EF3! Scientists have to figure out how strong a tornado was after it hits. Because the scale is based on the damage caused, they can't predict how strong a tornado will be before it happens.
<urn:uuid:710aaac1-2683-4e6b-837f-ed1d73f8f4d2>
4.3125
714
Content Listing
Science & Tech.
62.406889
Endeavour To Begin ISS Assembly
News story originally written on November 27, 1998
NASA is going to launch Space Shuttle Endeavour on December 3rd, 1998. The shuttle will carry the first connector for the International Space Station. The connector is called Unity. The Space Shuttle crew will connect the Unity module to the control module. The control module was launched on November 20th from Russia. There will be six astronauts.
<urn:uuid:e1789e42-2e71-447e-9529-a4b2520b8e92>
2.734375
456
Content Listing
Science & Tech.
60.152202
Comprehensive Description
Biology
Found between the seabed and midwater on the lower continental shelf, over sand. Juveniles found in oceanic surface waters (Ref. 2683); adults normally live close to the bottom (normally in 50-350 m depth (Ref. 47377)). Gregarious. Juveniles feed mainly on pelagic invertebrates, mainly copepods, while adults feed on bottom invertebrates (Ref. 6732). Seems to be sympatric with Macroramphosus gracilis (Lowe, 1839) all around the world (Ref. 89357).
<urn:uuid:66613f0a-cf63-45aa-95f4-435ed145e45d>
3.03125
129
Knowledge Article
Science & Tech.
43.615
This Mental and Oral activity revolves around number bonds (or complements) to 100. Children work in pairs, using a mini-whiteboard. Ask them to draw a three-by-three grid and to quickly fill it with different multiples of 5. When they have completed this, the teacher starts to call out random multiples of 5; children can circle a number on their grid if it is the other half of a pair totalling exactly 100 (the complement to 100).
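The game's rule can be sketched in a few lines of code. This is our own illustration, not part of the original activity; the function name `complement_to_100` is an assumption made for clarity.

```python
def complement_to_100(n):
    """Return the number that pairs with n to total exactly 100."""
    return 100 - n

# The multiples of 5 a child might write in the grid (5 through 95).
multiples_of_five = list(range(5, 100, 5))

# A called-out number can be circled if it is the complement of a
# grid entry; for multiples of 5 the complement is always another
# multiple of 5, so every pair sums to 100.
pairs = {n: complement_to_100(n) for n in multiples_of_five}
```

For example, if the teacher calls out 35, a child holding 65 can circle it, since 35 + 65 = 100.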
<urn:uuid:c12491b1-cdf1-42f7-81cd-dbaf948237e8>
3.625
99
Truncated
Science & Tech.
66.349451
Longsnout Seahorse Hippocampus reidi. Species ID: S.HR. Description: The seahorse’s elongated body is covered in bony armour and is characterized by a curving neck and horse-like head, as well as a curled prehensile tail. Body colour is variable, and can be yellow, white, brown, black or even two-toned. The body is peppered with small dark spots, distinguishing it from other seahorses. Males have a smooth pouch on their belly, while females do not. Juveniles resemble adults. Maximum Size: 18 cm (7 in) with the tail outstretched. Longevity: Unknown, but likely to be at least 4-5 years based on the longevity of relatives. Status: Insufficient information is currently available to assess the conservation status of this species according to IUCN standards; however, all seahorses are considered threatened and their trade is therefore controlled by CITES. Long-Snout Seahorse & People: This seahorse is very popular in the aquarium trade, and collection of wild specimens still occurs. They are also dried for sale as tourist souvenirs or for export to Asia, where they are used in traditional medicine. Geographical Range: The range of this species is still under review, but it is believed to occur throughout the Caribbean and Gulf of Mexico, north to Florida and south to Brazil. Coral Reef Zone: These seahorses are found in the shore, back reef, fore reef and drop-off zones. Favourite Habitat: Seahorses prefer shallow areas where they can camouflage themselves among seagrasses, algae, gorgonians, or sponges. Depth Range: 0–55 m (0–180 ft). A Day in the Life: Dawn: Mated pairs perform greeting dances, and feeding activity begins. Day: Seahorses are most active during the day, when they feed and mate. Dusk: Some feeding activity occurs, but declines as the sun sets. Night: These seahorses rest at night while clinging to plants, sponges or gorgonians. Who Eats Who: The long-snout seahorse is a carnivore that eats tiny zooplankton, mysid shrimp and small crustaceans.
They have few predators due to their armour and excellent camouflage, but are still eaten by rays, turtles, and especially crabs when young and vulnerable. Scuba Diver & Snorkeler Best Practices: Remove only recent garbage from the sea. Garbage that has been in the sea for a long time might have been adopted as a new home by some marine organisms; encrusting organisms may also call it home. The hardest part about observing seahorses is finding one in the first place – they are masters of camouflage and so are very difficult to spot! Once located, however, seahorses are easy to approach and observe because they are slow swimmers and seldom flee from threats. Long-snout seahorses feed passively on tiny planktonic creatures during the day, with some feeding during dawn and dusk. Their preferred feeding strategy is to remain camouflaged and ambush their prey while they hold on to a plant or coral with their flexible tails. Seahorses have eyes that can look in different directions at the same time, helping them locate their tiny prey drifting in the ocean’s currents. Seahorses have no teeth, so they swallow their food whole. They flick their head forward as they suck up prey through their tube-like mouth. A distinct clicking sound made by the jaws may also be heard as they feed. Although seahorses feed mainly by ambushing their prey, they sometimes leave their hiding spot to search for and chase prey – a behaviour common in areas where there is little vegetation.
Observe, record & share: O S.HR-101 – Ambushing prey: Long-snout seahorses sway back and forth, moving each eye independently, as they search for suitable food in the plankton O S.HR-102 – Sucking up prey: Seahorses raise and lower their head as they suck up prey O S.HR-103 – Feeding clicks: The movement of a long-snout seahorse’s jaws produces a distinctive clicking sound audible to divers and snorkelers O S.HR-104 – Hunting prey: Long-snout seahorses have been seen actively chasing planktonic prey Attack & Defence Behaviour Long-snout seahorses rely on their specially shaped bony armour and excellent camouflage abilities as their primary defence against predators. A seahorse’s ability to change colour, and the algae that often grows on its body, make it very difficult for predators to find. If threatened, a seahorse’s most common reaction is to lower its head and turn away from the threat. Although seahorses are not territorial, males will aggressively defend their mates against bachelors. A defending male will wrap his tail around his opponent’s tail and attempt to overpower him. If neither backs down, they may come to blows, consisting of an aggressive head-butt! The weaker seahorse usually swims away, especially after being struck, but if the attacker does not release him he will often darken and attempt to lie flat against the sand to signal submission, which ends the battle immediately. 
Observe, record & share: O S.HR-201 – Camouflage: Seahorses cling to a surface and often change colour to blend in O S.HR-202 – Defensive posture: A threatened long-snout seahorse may lower its head and turn its back on a potential threat O S.HR-203 – Tail-wrestling: During an encounter, seahorses grasp each other’s tails O S.HR-204 – Head-butting: A fighting male may snap his head forward, striking an opponent with his snout O S.HR-205 – Submissive pose: A seahorse defeated by a stronger opponent will darken and lie flat against the sand to indicate submission Seahorses have one of the most fascinating reproductive strategies on the planet. Amazingly, it is the male of the species, rather than the female, that becomes pregnant. Seahorses are monogamous and stay with their partner for many seasons, perhaps even for life. Pairs use their tails to cling to each other, and sometimes swim together in circles in a behaviour known as carouseling that is believed to reinforce the pair bond and accelerate mating. When the couple is ready to breed, the male courts the female for several days by changing colour, circling her, and flexing his body and pouch to show her that he is a strong mate with an empty egg pouch. In the final minutes of courtship, the two seahorses rise together, with the female pressing her belly against that of the male to deposit her eggs. The male fertilizes these eggs immediately and nurtures the developing embryos for roughly 14 days. When they are ready to hatch, the male goes into labour and jackknifes his body during violent contractions that push as many as 1500 young into the world, only a few of which survive to become adults. The seahorse breeding season lasts about 8 months – generally from February to October. The seahorse pairs breed repeatedly during this time; just as soon as the male has given birth he will begin courtship again.
Observe, record & share: O S.HR-301 – Tail-holding: Seahorse pairs hold each other’s tails O S.HR-302 – Carouseling: Seahorse pairs circle around each other, often in the early morning O S.HR-303 – Male colour change: Males impress females with a series of colour changes O S.HR-304 – Pouch pumping: Males rapidly bend and unbend their body, flexing the brood pouch, to impress females O S.HR-305 – Egg transfer: The female presses up against the male’s pouch and transfers her eggs into it O S.HR-306 – Labour and birth: Male seahorses push out their young in contractions Courtship and mating: Seahorse reproduction is a fascinating and unusual behaviour. There are several distinct phases that divers and snorkelers can observe. In stage one, the male changes colour, and displays his brood pouch by bending and pumping water in and out of the pouch opening. In stage two, both partners dance together, intertwining their tails and quivering. In stage three, the female points her snout towards the surface, while the male continues his pouch pumping display. Both partners rise together through the water column and the female presses her belly against that of the male to deposit her eggs into his brood pouch. Did You Know? • The skin of many seahorses contains certain carbohydrates, or sugars, that encourage the growth of algae on their bodies, thus improving their camouflage abilities. • Nearly 25 million seahorses are traded worldwide each year. Because of its large size and bright colours, the long-snout seahorse is one of the most heavily traded seahorse species in the world. The huge volume of trade in seahorses has conservationists worried that they may soon become endangered. What to do? Share your observations today!: Discover your species of interest, observe its behaviour, and share your pictures and videos with friends and coral reef enthusiasts around the world!
Upload media to the web, tagged with species common name (ex.: trumpetfish) and species ID code (ex.: A.AM) or species behaviour code (ex.: A.AM-101) - Family Syngnathidae - Length 18 cm (7 in) - Weight Not recorded - Depth 0–55 m (0–180 ft) - Habitat Shore zone, back reef, fore reef and drop-off zones - Distribution Found throughout the Caribbean and Gulf of Mexico, north to Florida and south to Brazil
<urn:uuid:21bfbb30-99eb-47fb-b2fe-7bb6cb6f2141>
2.765625
2,086
Knowledge Article
Science & Tech.
48.188593
HXRBS was designed to examine the role of energetic electrons in solar flares by measuring the variations in intensity and energy of the hard X-ray fluxes. Scintillation events in its actively collimated CsI(Na) detector were read out every 128 ms in fifteen energy channels between ~25 and ~500 keV. A circulating memory was able to accumulate relatively brief periods of data during the more intense flares with time resolution down to 1 ms. The full width at half maximum of the field of view was approximately 40 degrees. The Complete Hard X-Ray Burst Spectrometer Event List, 1980-1989 (NASA Technical Memorandum 4332). Note: The event list itself can be found in the year links below, not in the pdf link above. Responsible NASA official: Joseph B. Gurman, Facility Scientist, Solar Data Analysis Center, +1 301 286-4767, NASA Goddard Space Flight Center, Solar Physics Branch / Code 682, Greenbelt, MD 20771. Last Modified: 2011 June 15
<urn:uuid:323956a7-3847-4403-bed7-8a9239d25281>
2.90625
216
Knowledge Article
Science & Tech.
42.497806
By Ravi Chellam on The Hindu Folio By the time you finish reading this issue of Folio, perhaps one more plant or animal species somewhere in the world would have disappeared. Gone forever, never to come back. Extinction is forever. What is the significance of this pithy little phrase, oft used in conservation circles? Why should it concern us as humans? Anyway, is extinction not part of nature? Extinction by definition means that no live individual of a particular species exists anywhere in the world, either in its natural habitat (in situ), or in captivity (ex situ). Dinosaurs today exist only in reconstructed models and on film screens. But conservationists do not easily accept the fact that a species has gone extinct. By the strictest definitions for officially recognising that a species has gone extinct, it takes numerous intensive surveys spread over many years, before one can come to this conclusion. In some cases species believed to be extinct will "reappear" after decades . . . remnant populations that no one had earlier chanced upon. On the other hand, new species can be described and added to our knowledge much more easily and in much shorter periods of time. It is natural for species to go extinct. Life evolved on earth more than 600 million years ago. Many species have evolved and many others gone extinct over these long millennia, a fact revealed to us through fossils. This fossil record enables us to reconstruct the manner in which species have evolved, and to fix time scales over which certain forms of life were dominant on earth. These have also enabled us to detect that there is a cycle of mass extinctions that takes place periodically in the evolutionary history of the earth. Mass extinctions are defined as episodes when an exceptional global decline in biodiversity takes place, one which affects a broad range of life forms over a short period of time. 
For example, there could be forest dwelling insects, land dwelling dinosaurs and ocean-bottom dwelling molluscs, all disappearing at the same time. This time scale could be over a few thousand to hundred thousand or million years, which will seem very long in the normal human perception, but is a very short period in the earth's evolutionary history. Five mass extinctions in the earth's history have been identified: during the Ordovician Era (450 million years ago), Late Devonian (350 million years ago), Late Permian (275 million years ago), Late Triassic (190 million years ago) and Late Cretaceous (65 million years ago). Various explanations have been given for these extinctions, the ones with greatest credibility being the effects of glaciation and the impact of an extra-terrestrial object collision with the earth. Both of these would have had widespread and drastic impacts on the prevailing climate. The sea would have retreated from many areas, the sun would have been obscured for many weeks if not months, by massive clouds of dust, and in general the flow of energy would have been drastically disrupted resulting in the extinction of many forms of life. Yet the evolution of new species exists side by side with such extinction. It is important to note that a much reduced number of life forms would have survived these difficult times of mass extinctions, adapting to the gross environmental changes and over a period of time evolving into many more forms or species. This process of continuous extinction and evolution characterises the history of life on earth. A good example is the information we have on birds. Currently, about 9,000 species of birds survive worldwide. The fossil history indicates that over the last 150 million years, some 1,50,000 species of birds have evolved and become extinct. Yet, it appears that we are today in the midst of the greatest ever diversity of species to have existed at any one given moment. But not for long. 
We are again today living in an age of mass extinction. So what? Should we be concerned, given that biodiversity has sprung back from five previous mass extinctions? The present episode of mass extinction has important differences from previous ones. These extinctions are largely caused by the impacts of one species - human beings. They are taking place over extremely short periods of time, maybe just a few decades or centuries. Human actions have resulted in the widespread loss of natural habitats, fragmentation of the remaining habitats, poisoning of many areas, displacement of uniquely adapted species by exotics, and in general the gross disruption of the numerous intricate natural processes which govern the evolution of species. The result is that human-induced extinction rates not only far outstrip natural extinction rates but also disrupt normal evolutionary processes. Contemporary species extinction rates are estimated to be 1,000 to 10,000 times higher than the normal background extinction rates expected in the absence of human influences. The result is that in a short period of time there has been a drastic and irreversible decline in the biodiversity of the earth. This is not an alarmist's reaction but a realistic assessment based on data collected over the past few decades from all over the world. The most quoted example of how dramatic human-caused extinctions can be, is the case of passenger pigeons in the U.S.. In the 19th Century, there are estimated to have been an astounding 2,000 million individuals of this species. When some of the huge flocks flew across the skies, they used to obscure the sun for many hours. Due to hunting and habitat destruction, the population was reduced to 2,50,000 by 1896, and by 1914 the species became extinct with the death of the last bird in captivity. If this is the rapidity and scale of human destruction, we can well imagine what awaits the hundreds of endangered species all over the world. 
The Asiatic lion population is estimated at only around 320, confined to a single protected area, Gir National Park in Gujarat. India's tiger population is estimated to be about 5,000 and that of the one-horned rhino about 1,500. Despite heroic conservation efforts, unless some very drastic changes are made immediately by the human race as a whole, there is no escaping the fact that most practitioners in the field of conservation will only go down in history as chroniclers of extinctions! Extinctions are probably happening on a daily basis in India, especially amongst some of the smaller and lesser known organisms like insects and fungi. The better known examples of recorded extinctions in India are the pink-headed duck, mountain quail and the cheetah. In fact, extinction of mammal or bird species is more likely to be recognised than, for example, plant or amphibian species. Estimates of global species richness range from a minimum of 10 million to 30 and maybe even 50 million. Much of this richness is found in tropical countries like India. Only a fraction of the estimated number of species has been described, about 1.2 million. In India, about 1,36,000 species have been listed (see Table), but there are probably at least 3 to 4 times that many that are not yet recorded. With the rampant destruction of habitats all over the country, especially of the species-rich tropical forests and coral reefs, we are losing numerous species, many of which might still be undescribed and unknown to us. Some scientists estimate that at current rates of habitat destruction, we may lose up to one-third of the total wild species in the country within the next few decades . . . that is, an astounding 45,000 known species, and probably many more unrecorded ones. Extinctions are not restricted to wild species alone. Numerous varieties of crops and breeds of livestock have become extinct in India due to the over-reliance on a handful of high-yielding and hybrid varieties.
The genetic erosion this represents is extremely serious and threatens the long-term viability of our agriculture and animal husbandry systems. Of the many initiatives taken to conserve what remains of our biodiversity, the ones that merit mention are the continuation of traditional conservation practices amongst many village communities, the creation of legally protected areas by State governments and the ban on hunting of, and trade in, several species of wildlife. While in themselves commendable, they have been woefully inadequate in halting the decline of biodiversity. A much greater national effort is needed, especially to resolve the basic conflicts between the development aspirations of an industrialising country, and the need to conserve the natural habitats and biodiversity that co-exist with us. A new national process promises to point towards such a resolution, and help us take a small step towards securing the country's biodiversity. This is the National Biodiversity Strategy and Action Plan (NBSAP), being formulated by the Ministry of Environment and Forests with execution by hundreds of NGOs, official agencies, community groups, and others (see Introductory piece). As part of this, specialist Working Groups on Wild Plants, Wild Animals, Micro-organisms, Natural Terrestrial Ecosystems, Natural Aquatic Ecosystems, and Domesticated Biodiversity, are collating existing information on the status of biodiversity, the major threats to its continuation, and the gaps in coverage of conservation initiatives. From this will emerge a picture of what habitats and species need to focused on for urgent conservation intervention, and what concrete steps would be needed to achieve this. It is important for us to immediately realise that there are no technological solutions for the human-induced crisis of extinction. If we do not reform our ways, the extinction of life itself on earth may well become a reality . . . 
and when millions of species go, can we be far behind? So what if there is mass extinction? Smug as we are in our technological cocoons and monetary illusions, we may think that mass extinction of plants and animals is of little consequence. We couldn't be farther from the truth. Note the following: - Oxygen is primarily produced by marine algae, themselves dependent on biologically diverse, healthy seas; - 80 per cent of the world's population depends substantially on plant and animal-based medicines; - In many communities, over 40 per cent of food comes from the wild; - Plants from the tropics are worth between $5 billion and $47 billion, annually, to the global pharmaceutical industry (one Indian plant alone, sarpagandha (Rauwolfia serpentina), is the base for $260 million worth of trade in hypertension and schizophrenia drugs); - The forests of the tropics, in particular the Amazon, help regulate the earth's climate and hydrological patterns, a benefit whose dimensions are impossible to calculate; - Seed genetic diversity provides the global agricultural economy with billions of dollars worth of value; one wild rice species from central India provided resistance against grassy stunt virus, saving rice grown over millions of hectares in south and south-east Asia, and one wheat variety from Turkey has provided disease resistance valued at over $50 million per year; - Genetic uniformity destroyed the Irish potato crop in 1846, resulting in one million people dying and 1.5 million migrating out; in 1984, similar homogeneity led to bacterial disease amongst citrus in Florida, forcing the destruction of 18 million trees. More important than all the above is the great ethical tragedy of mass extinction: whatever gave us, just one out of 50 million species, the right to snatch life away from any other species? Surely it is the ultimate act of ingratitude to destroy the very natural conditions that gave rise to us?
But even if we are not moved by moral arguments, it should not take a genius to realise that tampering with the earth's fragile web of life is to invite trouble onto ourselves . . . yet our species, considering itself to be the most intelligent, continues to do precisely that!
<urn:uuid:461087b9-597c-43b8-abd4-76bb77147183>
3.71875
2,391
Personal Blog
Science & Tech.
36.418424
Newton's Third Law in the Framework of Special Relativity Newton's third law states that any action is countered by a reaction of equal magnitude but opposite direction. The total force in a system not affected by external forces is thus zero. However, according to the principles of relativity, a signal cannot propagate at speeds exceeding the speed of light. Hence the action cannot be generated at the same time as the reaction, because the information about the action has to reach the affected object, and the affected object still needs additional time to react back on the source; hence the total force cannot be null at a given instant. The following analysis provides a better understanding of how natural laws behave within the framework of Special Relativity, and of how this understanding may be used for practical purposes.
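The delay argument in the abstract can be made concrete with a toy calculation: a change at object A cannot influence object B sooner than the light-travel time between them, and A cannot feel B's response sooner than twice that. This sketch is our own illustration of the timing argument, not the paper's analysis; the function name and the 3 km example distance are assumptions.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def reaction_delay(distance_m):
    """Return (t1, t2): t1 is the earliest time object B can learn of a
    change at object A a distance_m away; t2 is the earliest time A can
    feel B's reaction (one round trip of the signal)."""
    one_way = distance_m / C
    return one_way, 2 * one_way

# Example: two interacting objects 3 km apart.
b_learns, a_feels = reaction_delay(3000.0)
```

During the interval before `a_feels`, action and reaction cannot cancel, which is the sense in which the total force is non-null at a given instant.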
<urn:uuid:2a114258-d956-4492-bed5-9171da813050>
3.671875
156
Academic Writing
Science & Tech.
27.537469
Tim the Plumber wrote: The idea that you can predict the climate based on its temperature behaviour between 1970 and 1998 is silly. Just as the absence of warming between 1998 and 2011 cannot utterly disprove AGW, the rise between 1970 and 1998 cannot 100% prove the theory that CO2 is a significant greenhouse gas at the levels we have today. Nobody is trying to predict temperatures based on historical temperatures over the last 40 or so years. The predictions are based on our understanding of earth's climate over hundreds of millions of years and particularly the last 4 million years of recurring ice ages. The climate, while complicated, has to obey some very simple basic physical rules; that is, the energy coming in has, over time, to equal the energy going out. Change that simple relationship in some way and the temperature will change until such time as the equation is back in balance. It is certain that greenhouse gases reduce the amount of energy that leaves the earth. Northern Europe is having a wet and cool summer; it's just America which is having a long, hot and dry one. No, my original statement is correct. According to NOAA (http://www.ncdc.noaa.gov/sotc/global/2012/6): The Northern Hemisphere land and ocean average surface temperature for June 2012 was the all-time warmest June on record, at 1.30°C (2.34°F) above average. The Northern Hemisphere average land temperature, where the majority of Earth's land is located, was record warmest for June. This makes three months in a row — April, May, and June — in which record-high monthly land temperature records were set. Most areas experienced much higher-than-average monthly temperatures, including most of North America and Eurasia, and northern Africa. Only northern and western Europe, and the northwestern United States were notably cooler than average.
Tim the Plumber wrote: When thinking about such climatic events it is vital to have a sense of proportion, and not to see a tiny change over 3 decades as a reason to think that there will be a drastic "exponential" continuation of this. The temperature changes over the last 3 decades simply confirm our basic understanding of the climate. It is akin to having a graph of the speed of your car traveling along a highway. When the speed is 55 mph your passenger is happy; when the graph plots up to 57 mph the passenger panics because the car is about to accelerate until the machine disintegrates at the sound barrier. When the graph shows a slowing to 53 mph, the panic is of the car suddenly stopping and the traffic behind slamming into the back of it. No, it is more like being in a car where the cruise control is stuck and the speed just keeps increasing. Climate varies quite a lot. Because we live fairly short lives we do not remember the droughts of the Dust Bowl. We do not remember the Medieval Warm Period. We do not remember the frost fairs on the frozen Thames. This is why we maintain weather data, which show that the current conditions are both worse and different. We should take these dire warnings with a big pinch of salt. Dire warnings should be assessed on their merits and action taken if necessary, but never ignored. The sea level rose by 18 cm last century; how many cities flooded because of this? This century looks like it could be twice as bad, maybe. So as long as we split the sea level rises into 18 cm chunks it will be no problem? I am reminded of camels transporting straw.
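The "energy in must equal energy out" claim made in the thread is the basis of the standard zero-dimensional energy-balance model. The sketch below is our own illustration, not anything from the discussion: the constants are textbook values, and the reduced-emissivity figure is purely illustrative of how trapping outgoing energy raises the equilibrium temperature.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0             # solar constant, W m^-2
ALBEDO = 0.3            # fraction of incoming sunlight reflected

def equilibrium_temperature(emissivity=1.0):
    """Temperature at which emitted infrared power balances absorbed
    solar power, averaged over the whole sphere (factor of 4)."""
    absorbed = S0 * (1.0 - ALBEDO) / 4.0
    return (absorbed / (emissivity * SIGMA)) ** 0.25

T_no_greenhouse = equilibrium_temperature(1.0)       # about 255 K
T_with_greenhouse = equilibrium_temperature(0.612)   # lower effective emissivity -> warmer
```

Reducing the effective emissivity (more greenhouse gases trapping outgoing radiation) forces the equilibrium temperature upward until the balance is restored, which is exactly the point being made in the post.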
<urn:uuid:19f0ac3e-e73a-454a-9cba-d004cc56a732>
2.703125
746
Comment Section
Science & Tech.
54.915009
THE nightmare scenario goes like this. Farmers blanket their fields with crops engineered to carry genes designed to defeat pests and disease. The crops pass these genes on to wild relatives, turning them into supercompetitive weeds that rampage through the countryside, wiping out rare and vulnerable plant species. The public is baying for blood. Conservationists are firing on all fronts. Faced with crippling lawsuits, biotechnology companies file for bankruptcy. Colin Merritt sighs. He's heard it all a thousand times before and he isn't impressed. "Superweed is such an emotive word," he says. "It very often hasn't been thought through." Of course, as a scientist who works for Monsanto, the US biotech giant that stands to make billions from transgenic crops, he would say that. It's his job, just as surely as it's the job of the anti-biotech brigade to do the opposite, picketing transgenic crop fields dressed in full-body protective ...
<urn:uuid:f187529e-4ab3-4fa6-a69b-74749b4c962b>
2.703125
226
Truncated
Science & Tech.
56.996393
My co-authors and I have just published an article in Nature Geoscience (advance online publication here; associated press release here) which seeks to explain certain enigmatic features of tree-ring reconstructions of Northern Hemisphere (NH) temperatures of the past millennium. Most notable is the virtual absence of cooling in the tree-ring reconstructions during what ice core and other evidence suggest is the most explosive volcanic eruption of the past millennium–the AD 1258 eruption. Other evidence suggests wide-spread global climate impacts of this eruption [see e.g. the review by Emile-Geay et al (2008)]. We argue that this–and other missing episodes of volcanic cooling, are likely an artifact of biological growth effects, which lead to a substantial underestimation of the largest volcanic cooling events in trees growing near treeline. We speculate that this underestimation may also have led to overly low estimates of climate sensitivity in some past studies attempting to constrain climate model sensitivity parameters with proxy-reconstructed temperature changes. Tree rings are used as proxies for climate because trees create unique rings each year that often reflect the weather conditions that influenced the growing season that year. For reconstructing past temperatures, dendroclimatologists typically seek trees growing at the boreal or alpine treeline, since temperature is most likely to be the limiting climate variable in that environment. But this choice may also prove problematic under certain conditions. Because the trees at these locations are so close to the threshold for growth, if the temperature drops just a couple of degrees during the growing season, there will be little or no growth and therefore a loss of sensitivity to any further cooling. In extreme cases, there may be no growth ring at all. 
And if no ring was formed in a given year, that creates a further complication, introducing an error in the chronology established by counting rings back in time. We compared simulated temperatures of the past millennium, derived by driving theoretical climate models with estimated natural (volcanic+solar) and anthropogenic forcings. We employed two different climate model simulations: (1) the simulation of the NCAR CSM 1.4 coupled atmosphere-ocean General Circulation Model (GCM) analyzed by Ammann et al (2007) and (2) simulations of a simple Energy Balance Model (EBM). While the GCM provides a more comprehensive and arguably realistic description of the climate system, the computational simplicity of the EBM lends itself to extensive sensitivity tests. As the target for our comparison, we used a state-of-the-art tree-ring based Northern Hemisphere (NH) mean temperature reconstruction of D’Arrigo et al (2006). The reconstruction was based on a composite of tree ring annual ring width series from boreal and alpine treeline sites across the northern hemisphere, and made use of a very conservative (“RCS”) tree-ring standardization procedure designed to preserve as much low-frequency climatic information as possible. Interestingly, the long-term variations indicated by the model simulations compared remarkably well with those documented by the tree-ring reconstruction, showing no obvious sign of the potential biases in the estimated low-frequency temperature variations that have been the focus of much previous work (see e.g. this previous RealClimate review). Instead, the one glaring inconsistency was in the high-frequency variations, specifically, the cooling response to the largest few tropical eruptions, AD 1258/1259, 1452/1453 and the 1809+1815 double pulse of eruptions, which is sharply reduced in the reconstruction relative to the model predictions.
Indeed, this was found to be true for any of several different published volcanic forcing series for the past millennium, regardless of the precise geometric scaling used to estimate radiative forcing from volcanic optical depth, and regardless of the precise climate sensitivity assumed. Following the AD 1258 eruption, the climate model simulations predict a drop of 2C, but the tree ring-based reconstruction shows only about a 0.5C cooling. Equally vexing, the cooling in the reconstruction occurs several years late relative to what is predicted by the model. The other large eruptions showed similar discrepancies. An analysis using synthetic proxy data with spatial sampling density and proxy signal-to-noise ratios equivalent to those of the D’Arrigo et al (2006) tree-ring network suggests that these discrepancies cannot be explained in terms of either the spatial sampling/extent or the intrinsic “noisiness” of the network of proxy records. However, using a tree growth model that accounts for the temperature growth thresholding effects discussed above, combined with the complicating effects of chronological errors due to potential missing growth rings, explains the observed features remarkably well. Shown in the above figure (Figure 2d from the article) is the D’Arrigo et al tree-ring based NH reconstruction (blue) along with the climate model (NCAR CSM 1.4) simulated NH mean temperatures (red) and the “simulated tree-ring” NH temperature series based on driving the biological growth model with the climate model simulated temperatures (green). The two insets focus on the response to the AD 1258 and AD 1809+1815 volcanic eruption sequences. The attenuation of the response is produced primarily by the loss of sensitivity to further cooling for eruptions that place growing season temperatures close to the lower threshold for growth.
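The thresholding mechanism can be illustrated with a toy numerical sketch. The threshold, slope and temperatures below are invented for illustration only; this is not the biological growth model actually used in the paper.

```python
# Toy version of the growth-threshold effect: near treeline, ring width
# responds to growing-season temperature only above a lower growth
# threshold, so large volcanic coolings are clipped. All numbers invented.

def ring_width(temp_c, threshold_c=9.0, slope=1.0):
    """Ring-width index: linear in growing-season temperature above a lower
    growth threshold, zero (i.e. a missing ring) at or below it."""
    return max(0.0, slope * (temp_c - threshold_c))

def inferred_temp(width, threshold_c=9.0, slope=1.0):
    """Temperature a naive linear calibration would infer from a ring width.
    Any year at or below the threshold maps back to the threshold itself."""
    return threshold_c + width / slope

baseline = 10.0      # typical growing-season temperature, deg C (invented)
true_cooling = 2.0   # model-predicted post-eruption cooling (invented)

width = ring_width(baseline - true_cooling)
apparent_cooling = baseline - inferred_temp(width)

# The full 2 C cooling pushes the tree below its growth threshold, so the
# ring width saturates at zero and the calibration can only "see" the 1 C
# of cooling between the baseline and the threshold.
print(true_cooling, apparent_cooling)   # 2.0 1.0
```

Any cooling beyond the threshold is invisible to the ring-width record, which is exactly the attenuation described above.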
The smearing and delay of the cooling, however, arise from another effect: when growing season lengths approach zero, we assume that no growth ring will be detectable for that year. That means that an age model error of 1 year will be introduced in the chronology counting back in time. As multiple large eruptions are encountered further back in time, these age model errors accumulate. This factor would lead to a precise chronological error, rather than smearing of the chronology, if all treeline sites experienced the same cooling. However, stochastic weather variations will lead to differing amounts of cooling for synoptically distinct regions. That means that in any given year, some regions might fall below the “no ring” threshold, while other regions do not. As a result, different chronological errors accumulate in synoptically-distinct regions of the Northern Hemisphere. In forming a hemispheric composite, these errors thus lead to a smearing out of the signal back in time as slightly different age model errors accumulate in the different regions contributing to the composite. Including this effect, our model accounts not only for the level of attenuation of the signal, but the delayed and smeared out cooling as well. This is particularly striking in comparing the behavior following both the AD 1258 and AD 1809 eruptions (compare the green and blue curves in the insets of the figure). Our model, for example, predicts the magnitude of the reduction of cooling following the eruptions and the delay in the apparent cooling evident in the tree-ring record (i.e. in AD 1262 rather than AD 1258). We have also included a minor additional effect in these simulations. While volcanic aerosols cause surface cooling due to decreased shortwave radiation at the surface, they also lead to increased indirect, scattered light at the surface. Plant growth benefits from indirect sunlight, and past studies show that, for example,
a Pinatubo-sized eruption (roughly -2W/m^2 radiative forcing) can result in a 30% increase in carbon assimilation by plants. This effect turns out to be relatively small because it is proportional in nature, and thus results in a very small absolute increase when growth is suppressed in the first place by limited growing seasons. However, not including this effect results in a somewhat poorer reproduction (purple dashed curves in the two insets of the figure) of the observed behavior. As noted earlier, our main conclusions are insensitive to the precise details of the forcing estimates used, the volcanic scaling assumptions made, and the precise assumed climate sensitivity. They were also insensitive to the details of the biological tree growth model over a reasonable range of model assumptions. The conclusion that tree-ring temperature reconstructions might suffer from age model errors due to missing rings is bound to be controversial. A few points are worth making here. First of all, our conclusion is quite specific to temperature-sensitive trees at treeline, and it does not imply more general problems in the larger discipline of dendrochronology. Secondly, the conclusion is at this stage simply a hypothesis, a hypothesis that can account for these key enigmatic features in the actual tree-ring hemispheric temperature reconstruction: the attenuation, and the increasing (back in time) delay and temporal smearing of the cooling response to past volcanic forcing. Were an equally successful and more parsimonious hypothesis to be provided for these observations, I would be the first to concede and defer to this alternative explanation. One argument against the specific conclusion of missing growth rings is that trees are carefully cross-dated when forming regional chronologies, and this precludes the possibility of chronological errors.
That, however, assumes that there are at least some trees within a particular region that will not suffer a missing ring during the years where our model predicts it. Yet our prediction is that all trees within a region of synoptic or lesser scale where growing season temperatures lie below the growth threshold will experience a missing ring. Thus, cross-dating within that region, regardless of how careful, cannot resolve the lost chronological information. It is my hope that dendroclimatologists will reassess raw chronologies more carefully and critically assess the extent to which the predicted features might indeed be present in the underlying tree-ring data. Again, this paper presents a hypothesis for explaining some enigmatic features of existing tree-ring temperature reconstructions. It is hardly the last word on the matter. Finally, it is worth discussing the potential wider implications of these findings. Climate scientists use the past response of the climate to natural factors like volcanoes to better understand how sensitive Earth’s climate might be to the human impact of increasing greenhouse gas concentrations, e.g. to estimate the equilibrium sensitivity of the climate to CO2 doubling, i.e. the warming expected for an increase in radiative forcing equivalent to a doubling of CO2 concentrations. Hegerl et al (2006), for example, used pre-industrial comparisons of EBM simulations and proxy temperature reconstructions based entirely or partially on tree-ring data to estimate the equilibrium 2xCO2 climate sensitivity, arguing for a 5%-95% range of 1.5-6.2C, substantially lower than found in several previous studies. The primary radiative forcing during the pre-industrial period, however, is that provided by volcanic forcing. Our findings therefore suggest that such studies, because of the underestimate of the response to volcanic forcing in the underlying data, may well have underestimated the true climate sensitivity.
It will be interesting to see if accounting for the potential biases identified in this study leads to an upward revision in the estimated sensitivity range. Our study, in this regard, once again only puts forward a hypothesis. It will be up to other researchers, in further work, to assess the validity and potential implications of this hypothesis.
Archimedes and the “buoyant force” September 6, 2010 | In: Science facts While stepping into a public bath in the third century B.C., the Greek scientist Archimedes noticed how submerging his foot caused water to run over the side. He realized that the volume of water displaced was equal to the volume of the object submerged, his foot. He is said to have been so excited about this insight that he ran naked into the street, shouting about his discovery. The displacement Archimedes noticed occurred because when a solid object like his foot is submerged in a liquid or gas it pushes aside, or “displaces,” some of the liquid or gas molecules to make space for itself. We know today that this displacement gives rise to a force called the “buoyant force”. The buoyant force pushes up on any submerged object. The strength of the force is equal to the weight of the displaced fluid. That is the reason you feel “lighter” when walking through water in a swimming pool. It is also the reason that hot air rises. If a balloon is filled with hot air, helium, or any gas lighter than air, then the buoyant force, equal to the weight of an equal volume of air, may be powerful enough to make the balloon rise. If the balloon is big enough, the force may even be powerful enough to carry people with it. This principle enabled the first humans to fly, in the Montgolfier balloons, in eighteenth century France.
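The principle above reduces to a single formula: the buoyant force equals the weight of the displaced fluid, F = ρ_fluid × V × g. The sketch below applies it to the hot-air balloon example; the densities and balloon volume are typical reference values assumed for illustration.

```python
# Archimedes' principle: buoyant force = weight of the displaced fluid.
RHO_AIR_COLD = 1.225   # kg/m^3, ambient air at ~15 C (standard sea-level value)
RHO_AIR_HOT = 0.946    # kg/m^3, air heated to ~100 C (approximate)
G = 9.81               # m/s^2, gravitational acceleration

def buoyant_force(rho_fluid, volume_m3):
    """Weight of the displaced fluid, in newtons."""
    return rho_fluid * volume_m3 * G

def net_lift(volume_m3):
    """Balloon lift: buoyant force minus the weight of the hot air inside."""
    return (RHO_AIR_COLD - RHO_AIR_HOT) * volume_m3 * G

balloon_volume = 2800.0  # m^3, a typical hot-air balloon envelope (assumed)
lift = net_lift(balloon_volume)
print(f"Net lift: {lift:.0f} N, enough to carry roughly {lift / G:.0f} kg")
```

With these numbers the envelope supports several hundred kilograms of basket, burner, and passengers, which is why a Montgolfier-style balloon can carry people at all.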
Common Names in English: White-clawed Crayfish, Atlantic Stream Crayfish, River Crayfish, White-footed Crayfish. Typically found in lakes at a mean elevation of 129 meters (424 feet) above sea level. This is a freshwater species which can be found under submerged cobbles and amongst fallen leaves in permanent water bodies such as canals, streams and quarries (Holdich 2003). Recently it has been found that A. pallipes can tolerate muddy habitats if tree roots or other woody habitats are available (Holdich et al.). Banks and overhanging vegetation have been highlighted as important features in determining crayfish abundance (Naura and Robinson 1998). It may also be found in large numbers in waters dominated by Chara sp. Waters containing this species tend to be in the pH range 7-9, with calcium levels above 5 mg l-1. This species occurs in areas with relatively hard, mineral-rich waters on calcareous and rapidly weathering rocks. A study from Western France (Trouilhé et al. 2008) found that the site harbouring the largest A. pallipes population had dissolved oxygen as low as 4.93 mg/L, while water temperature rose above 20°C for several consecutive days during summer. Nitrate concentrations were always found to be above 30 mg/L. Principal component analysis showed that an increase of organic matter was a discriminant factor for the presence or absence of this species (Trouilhé et al. 2008). It can live for more than 10 years, and usually reaches sexual maturity after three to four years. It will carry 20-160 eggs, but usually fewer than 100 (Holdich 2003).
Declines in this keystone species are said to negatively impact both ecosystem structure and function within freshwater environments through loss of: a) provisioning services (food production from fisheries, recreational fishing); b) regulatory and support services (trophic cascades, water purification, nutrient cycling, primary productivity); c) cultural value (recreational fishing, education, heritage). Crayfish are also an important food source to a range of species including otters, salmonids, and birds such as kingfishers (Kettunen and ten Brink 2006).
List of Habitats:
- 5 Wetlands (inland)
- 5.1 Wetlands (inland) - Permanent Rivers/Streams/Creeks (includes waterfalls)
- 5.5 Wetlands (inland) - Permanent Freshwater Lakes (over 8 ha)
- 5.7 Wetlands (inland) - Permanent Freshwater Marshes/Pools (under 8 ha)
- 15 Artificial/Aquatic & Marine
- 15.1 Artificial/Aquatic - Water Storage Areas (over 8 ha)
- 15.9 Artificial/Aquatic - Canals and Drainage Channels, Ditches
Taxonomic hierarchy:
- Kingdom: Animalia - C. Linnaeus, 1758 - animals
- Subkingdom: Bilateria - (Hatschek, 1888) Cavalier-Smith, 1983
- Branch: Protostomia - Grobben, 1908
- Infrakingdom: Ecdysozoa - A.M.A. Aguinaldo et al., 1997 ex T. Cavalier-Smith, 1998
- Superphylum: Panarthropoda - Cuvier
- Phylum: Arthropoda - Latreille, 1829 - Arthropods
- Subphylum: Mandibulata - Snodgrass, 1938
- Infraphylum: Crustaceomorpha - (Chernyshev, 1960)
- Superclass: Crustacea - Pennant, 1777 - Crustaceans
- Epiclass: Eucrustacea
- Class: Crustacea - Latreille, 1802 - Crustaceans
- Subclass: Eumalacostraca - Grobben, 1892
- Superorder: Eucarida - Calman, 1904
- Order: Decapoda - Latreille, 1802 - Decapods
- Suborder: Pleocyemata - Burkenroad, 1963
Synonyms: Astacus pallipes • Atlantoastacus orientalis • Atlantoastacus orientalis carinthiacus • Atlantoastacus pallipes rhodanicus • Austropotamobius (Atlanoastacus) pallipes lusitanicus • Austropotamobius (Atlantoastacus) berndhauseri
Some consider Austropotamobius pallipes a species complex comprised of two genetically distinct species: A. pallipes and an Italian species for which the name is being discussed. The Italian species is thought to be comprised of a number of subspecies, though this depends on the author. Both the Italian form and A. pallipes can be found in Spain, France, Italy and Switzerland. It is also suggested that there are two subspecies of A. pallipes: A. pallipes pallipes, which exists in France, the British Isles, Spain, Switzerland, and Germany, and A. p. subsp. nov., which is known from Liguria in Italy and the Alpes Maritimes region of France. There still exists some debate as to whether the Italian form should be raised to species level, though recent genetic work (Grandjean et al. 2000a, Fratini et al. 2005, Bertocchi et al. 2008) would support a separate species, Austropotamobius italicus, with 4 subspecies. ZipcodeZoo has pages for 2 species and subspecies in the genus Austropotamobius.
- 1994 IUCN red list of threatened animals. Gland, Switzerland: IUCN, 1993, p. 157.
- A reconnaissance of crayfish populations in western Montana. Helena, Mont.: Montana Dept. of Fish, Wildlife, and Parks, 1989, p. 14.
- California fish and game. San Francisco: State of California, Resources Agency, Dept. of Fish and Game.
- Checklists for the CORINE Biotopes Programme and its application in the PHARE countries of Central and East Europe: including comparisons with relevant conventions and agreements on the conservation of European species and habitats. EC.
- Fishery bulletin. Washington, D.C.: U.S. Dept. of Commerce, National Oceanic and Atmospheric Administration, National Marine Fisheries Service.
- Insects and other invertebrates as candidates for the Bern Convention. IUCN Conservation Monitoring Centre.
- Invertebrates in need of special protection in Europe. Council of Europe.
- Proceedings of the Biological Society of Washington. Washington: Biological Society of Washington.
- Spixiana. München: Zoologische Staatssammlung München, 1977-.
- The Biological bulletin. Woods Hole, Mass.: Marine Biological Laboratory.
- The Great Basin naturalist, 54 (1994). Provo, Utah: M.L. Bean Life Science Museum, Brigham Young University, 1939-1999, p. 169.
- The IUCN Invertebrate Red Data Book. IUCN.
- Brands, S.J. (comp.) 1989-present. The Taxonomicon. Universal Taxonomic Services, Zwaag, The Netherlands. Accessed January 9, 2012.
- Füreder, L., Gherardi, F., Holdich, D., Reynolds, J., Sibley, P. & Souty-Grosset, C. 2010. Austropotamobius pallipes. In: IUCN 2011. IUCN Red List of Threatened Species. Version 2011.2. <www.iucnredlist.org>. Downloaded on 30 January 2012.
- IUCN 2012. IUCN Red List of Threatened Species. Version 2011.2. Downloaded on January 28, 2012.
Accessed through GBIF Data Portal December 02, 2007: UK National Biodiversity Network, Biological Records Centre - Crayfish - data for Britain and Ireland to 2003.
Identifiers:
- Biodiversity Heritage Library NamebankID: 5434520
- Global Biodiversity Information Facility Taxonkey: 2468095
- IUCN ID: 193656
- Zipcode Zoo Species Identifier: 486
Notes:
- Standard Deviation = 149.890 based on 7,657 observations. Altitude information for each observation from British Oceanographic Data Centre. [back]
- Füreder, L., Gherardi, F., Holdich, D., Reynolds, J., Sibley, P. & Souty-Grosset, C. 2010. Austropotamobius pallipes. In: IUCN 2011. IUCN Red List of Threatened Species. Version 2011.2. <www.iucnredlist.org>. Downloaded on 30 January 2012. [back]
(Submitted December 22, 2010) Mature elliptical galaxies have been spotted 1 billion years after the big bang. Considering that the stars within these types of galaxies should be at least 10 billion years old, and assuming that the same laws of physics apply to the early universe as to the present universe, why do these galaxies exist at all in the early universe? Indeed, apparently fully assembled elliptical galaxies have been observed as early as 1 billion years after the Big Bang. The stars within these galaxies, however, are not as old as the stars in nearby elliptical galaxies. The best-estimate stellar ages are somewhat uncertain at the moment, but they are consistent with having formed after the Big Bang, so there is no problem here. This is quite interesting, since it has told us that galaxy assembly proceeds very rapidly after the big bang. Bret & Antara for Ask an Astrophysicist
Description: The beautiful Red-eyed Tree Frog measures up to three inches in length. It is bright green above with bluish thighs and sides. It has white underparts and orange toes. The conspicuous eyes are bright red with vertical black pupils. The frog’s bright colors are used to startle predators, giving the frog a few seconds to escape before the predator adjusts to the color. These vivid colors produce a “ghost image” in the eyes of the predator that remains for several seconds even after the frog has escaped. Red-eyed Tree Frogs are not poisonous. Lifespan is thought to be about five years in the wild. Habitat/Range: Red-eyed Tree Frogs are found in tropical parts of the Western Hemisphere from Mexico to northern South America. They prefer to remain in the forest canopy and usually spend the day sleeping, stuck to the underside of a leaf where they hide their bright colors. They rarely descend to the ground. Diet: Like many frogs, the Red-eyed Tree Frog is a voracious consumer of insects such as crickets, grasshoppers, flies, and moths. These frogs will occasionally eat smaller frogs. Breeding: Males compete vigorously for females, often engaging in “wrestling matches” that end when one frog falls off of the branch. The winning male then mates with the female. Together, the pair go through amplexus, a process in which the male and female hang upside down from a leaf. As the female releases her eggs onto the leaf one at a time, the male, which is on top of the female, fertilizes them. The eggs hatch in a few days and the tadpoles drop into the water below, where they live until becoming froglets. Status: Populations are thought to be declining because of continuous habitat destruction. Nevertheless, this species remains fairly common and is an international symbol of rainforest preservation.
Artist's impression of GOCE (Gravity field and steady-state Ocean Circulation Explorer) satellite. Eyes in the sky 18 May 2012 Sending satellites into orbit isn't a hobby for the cheapskate. Each launch costs millions, and if anything goes wrong on the way up you can kiss that cash goodbye. But for every dollar spent, satellites provide invaluable information we could never gather from the ground. Bristling with sophisticated sensors, they monitor vast swathes of Earth and its atmosphere, repeating their measurements every orbit. This gives scientists long-term and almost real-time information about changes to our planet, which they once had to estimate from a handful of terrestrial instrument stations. Here are just a few examples of satellite applications from the last few years:
- The same Global Positioning System (GPS) signals used in your sat nav can measure movements of the Earth's crust as slow as a millimetre a year. This reveals tiny deformations and patterns of strain which can indicate where disaster may strike next - information that could save thousands of lives around the world.
- The GOCE satellite - Gravity field and steady-state Ocean Circulation Explorer - measures variations in Earth's gravity which, until now, we've only known about in general terms. GOCE has already enabled scientists to map gravity more precisely than was ever possible before, and will carry on collecting data until 2013.
- The Soil Moisture and Ocean Salinity satellite, SMOS, is transforming our understanding of how seawater's saltiness varies in different parts of the oceans, and of the varying degrees of moisture in soils all over the world.
- GPS has a range of other environmental science applications too - everything from tracking migrating animals to monitoring long-term changes in the topography of Antarctic ice.
- Infra-red imaging has revealed pyramids and settlements that have been buried for millennia under Egyptian sands.
Because the buildings are denser than the overlying sediments, they absorb heat differently from their surroundings and can be detected from space even if there's no sign of them on the ground.
- Scientists are using satellite-based LIDAR (light detection and ranging) to investigate everything from weather processes to the intricacies of rainforest canopies. Just as RADAR uses radio waves, LIDAR senses its target's distance or speed from how long it takes for a laser pulse to be reflected back to its receiver.
- Cryosat-2 uses pulses of microwave energy to obtain detailed measurements of Arctic and Antarctic ice - vital to our understanding of the relationship between climate change and the poles. The satellite is also shedding new light on patterns of ocean circulation and sea-level rise.
- When the sun's energy reflects off the Earth, some of it is absorbed by atmospheric gases like carbon dioxide and methane. Different gases absorb energy at different wavelengths, so researchers can monitor their levels using satellites to detect the different wavelengths of radiation that make it back out to space. This is giving us a more sophisticated understanding of the carbon cycle and how human activity is affecting it.
For more information, visit the National Centre for Earth Observation at www.nceo.ac.uk Interesting? Spread the word using the 'share' menu on the top right.
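The time-of-flight principle behind LIDAR (and radar altimeters) is simple enough to sketch in a few lines: distance is the speed of light times the round-trip time, divided by two. The 5-microsecond echo below is an invented example value.

```python
# Time-of-flight ranging, as used by LIDAR and radar altimeters:
# distance = speed_of_light * round_trip_time / 2
# (divided by 2 because the pulse travels out to the target and back).
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_echo(round_trip_s):
    """Distance to the reflecting surface, in metres."""
    return C * round_trip_s / 2.0

# Example: a pulse whose echo arrives 5 microseconds after transmission
# was reflected from a surface roughly 750 m away.
print(f"{range_from_echo(5e-6):.1f} m")
```

Real instruments refine this with atmospheric corrections and precise orbit knowledge, but the core geometry is just this halved round trip.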
This paper was written as part of the 2010 Alaska Oceans Sciences Bowl high school competition. The conclusions in this report are solely those of the student authors. Effects of Polar Ice Melt on Ocean Chemistry and Kodiak Island's Economy and Energy Technology Weapons of Bass Destruction Global and regional climate change will lead to major changes that will affect Kodiak. One of the changes will be in the chemistry of the ocean water, specifically its density, temperature, and the thermohaline cycle. Another change will be to the economy of Kodiak, particularly tourism and the fishing industry. In reaction to these changes, Kodiak has become a leader in energy technology and developed new ways to power homes and businesses, including wind turbines and hydropower. Interviews were conducted with experts in the fields of ocean chemistry, Kodiak wildlife, and energy technology, and for each interview a set of questions was developed. These changes make up the overall change that will occur in Kodiak due to global and regional climate change. Within the last 30 years, scientists have called attention to global climate change and the melting and destruction of the polar ice caps. With the melting of ice caps come changes in the chemistry of the ocean, economic impacts, and advances in energy technology. Different regions of the world will be affected in their own way. The island of Kodiak is among the areas that will be affected by polar ice melt and changes in ocean water chemistry. Changes in water chemistry, particularly ocean acidification and ocean surface temperature, are going to influence the economy by limiting Kodiak fisheries resources. The declining fish populations affect bear populations, which in turn affect hunting and tourism opportunities. Technology is moving forward with renewable resources to power the island of Kodiak by the use of hydroelectric power and wind turbines.
The Kodiak 2010 Tsunami Bowl team collected information by interviewing local experts on the subjects of ocean chemistry, wildlife and tourism, and energy technology, and by conducting online research. This paper describes how polar ice melt affects Kodiak and discusses some possible repercussions of global climate change. Interviews were conducted with experts in the fields of ocean chemistry, Kodiak wildlife, and energy technology, and for each interview a set of questions was developed. Research assistant professors Clara Deal and Jessie Ellen Cherry from the University of Alaska Fairbanks were interviewed on the topic of ocean chemistry. Supervisory wildlife biologist William Pyle from Kodiak National Wildlife Refuge was interviewed about Kodiak's salmon and bear populations. Chief executive officer Darron Scott from Kodiak Electric Association was interviewed about Kodiak's renewable energy technology. Their answers were included in the results and discussion of this paper.
II.1 Ocean Chemistry
- How will the melting of the polar ice caps affect the "ocean conveyor belt" process?
- If the conveyor belt process were to stop, would that directly affect Kodiak? Could this affect the salmon or crab fishery in Kodiak?
- Would oxygen levels in and around Kodiak be affected by the melting of polar ice caps? If so, how would that affect life in the ocean?
- Would the salinity and density be affected, and if so, what effect would they have on the ocean and the life within it?
- What is your opinion on the polar ice cap melting rate, and when will the ice caps be gone? Or is there no significant ice cap melt, and is this just a cycle the earth is in?
- Will the climate change in Kodiak with polar ice melt and global climate change? If so, how?
- Will there be regional changes that affect only Kodiak and not mainland Alaska, such as far-north places like Nome or Barrow? If so, how?
II.2 Wildlife and Economy
- Do you think brown bears in Kodiak will be affected by the melting of the Arctic ice cap?
- How will that affect the economy in Kodiak?
- The melting of the Arctic ice cap impacts the climate of Kodiak; we think it might have an effect on the salmon runs. How do you think changes in salmon runs will impact Kodiak's brown bear populations?
- Do you know what the revenue into Kodiak through bear viewing tourism and hunting is? And how might that change?
II.3 Energy Technology
- Kodiak currently has three wind turbines as one of our renewable energy sources; is it possible Kodiak will be adding any more wind turbines?
- Terror Lake is another one of our renewable energy resources; how does it work? Will it be affected by the melting of the polar ice caps/climate change?
- Are there any other renewable resources Kodiak can add to decrease the use of fossil fuels or diesel?
- Do you know of any other coastal communities that are switching over to renewable resources?
- I heard that before the wind turbine project started, our generators would supply us with 20% of our energy using fossil fuels. How much energy do the generators supply now?
- I found a PowerPoint that you did online and it said that the project goals for the wind turbines were to lower fuel costs, lower emissions and reduce power cost. How exactly is that going to happen?
III. Results and Discussion
Polar ice cap melt is happening in the Arctic Ocean, and this melting causes changes in the chemistry of the water. The major changes in the water focused on in this paper are temperature changes, salinity and density, and effects on the thermohaline circulation, also known as the "ocean conveyor belt."
III.1.1 Ocean Temperature Change
The melting of polar ice caps with ocean temperature rise is a cause for concern. The ocean surface water temperature influences density and salinity, and the overall sea level depends on the temperature of the ocean water.
As ocean water warms, it expands, which causes a rise in sea level, not because of the melting polar ice caps, but because warmer water is less dense than colder water. The ocean water will spread out and increase in volume, causing the rise in sea level. A one degree change will cause the oceans to rise a small but noticeable amount. III.1.2 Salinity and Density Salinity is the measure of the quantity of dissolved salts in the ocean water, and density is the mass per unit volume of a substance. The salinity of the waters in and around the Arctic Ocean will be affected by the melting and depletion of the polar ice caps. The average ocean salinity is 35 parts per thousand; at 0 degrees Celsius, the density of such water is 1.028 g/cm3. With melting ice caps, cold freshwater is being added to the ocean, which can cause the cold water to sink because it is denser than the warmer surrounding ocean water. Research assistant professor Jessie Cherry, from the International Arctic Research Center UAF, said "It [freshwater] could form a dense layer that sinks to the bottom, even though it is fresher than the warmer water it displaces. That forces warmer water to move up and around." This small change of adding cold freshwater into the ocean causes a change in the salinity and density of ocean water, which can result in stronger stratification. This leads to limited nutrient supply for larger plankton, because exchange with deeper water is limited. In this situation smaller plankton have a competitive advantage over larger plankton. III.1.3 Thermohaline Circulation The thermohaline circulation is caused by differences in density and temperature of the water, causing an underwater "conveyor belt system" to move water from the Atlantic Ocean to the Pacific Ocean and back again. The thermohaline system is responsible for moving the colder, oxygen-rich waters down, carrying nutrients to the surface, and distributing heat energy.
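The temperature and salinity effects on density described in this section can be sketched with a simplified linear equation of state. The expansion and contraction coefficients below are representative textbook values assumed for illustration; they are not from our interviews, and real oceanographic work uses a full nonlinear equation of state.

```python
# Simplified linear equation of state for seawater (illustration only):
#   rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0))
RHO0, T0, S0 = 1028.0, 0.0, 35.0  # kg/m^3, deg C, parts per thousand
ALPHA = 2e-4   # thermal expansion coefficient, 1/degC (assumed typical value)
BETA = 8e-4    # haline contraction coefficient, 1/ppt (assumed typical value)

def density(temp_c, salinity_ppt):
    """Seawater density in kg/m^3 under the linear approximation above."""
    return RHO0 * (1.0 - ALPHA * (temp_c - T0) + BETA * (salinity_ppt - S0))

print(density(0.0, 35.0))  # reference water: 1028.0 kg/m^3
print(density(0.0, 30.0))  # fresher, meltwater-influenced water: lighter
print(density(1.0, 35.0))  # 1 C warmer: slightly lighter (thermal expansion)
```

The sketch shows both points made above: freshening from meltwater and warming each lower density, which is why changes in either can alter buoyancy, stratification, and ultimately the circulation.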
The system is driven by the sinking of cold water in the Greenland Sea; this cold water drives the rest of the system into circulation. Regarding recent climate change and the melting of ice caps, Jessie Cherry of the International Arctic Research Center UAF said "changes in freshwater content in the ocean caused by melting sea ice and glaciers change the buoyancy of the ocean, which in turn impacts the circulation." If the cold waters of the north were to warm due to polar ice cap melt and overall climate change, the buoyancy of the water would change along with its salinity as fresh water is added, disrupting the circulation of ocean water. Since the mid-1970s, the ocean circulation system that brings cool water from ocean depths to the surface has been slowing down. There would also be a reduction in the amount of nutrient upwelling. With the loss of nutrient-rich upwelling, marine life will be forced to find feeding grounds elsewhere. For example, in the summer of 2009, the baleen whales that have been seen around Kodiak for years were not sighted in their usual numbers. According to Kodiak marine mammal expert Kate Wynne, with a loss of plankton and nutrient-rich waters from upwelling, the baleen whales have been forced to look for new feeding grounds. Changes in the chemistry of ocean water will directly affect Kodiak, and as the ocean changes, the social fabric of Kodiak must also change to adapt to a new environment.

III.2 Changes in Kodiak's Economy
Kodiak Island is well known for its commercial fishing, hunting and wildlife-viewing tourism. William Pyle, Supervisory Wildlife Biologist of the Kodiak National Wildlife Refuge, said that Kodiak is vulnerable to climate change. The northern areas of the globe may be the first to experience the effects of the Arctic ice melt, and Kodiak is likely to be affected soon after.
We are only now starting to realize the outcomes that global warming will have for our economy, wildlife and natural resources, because the effects are only now beginning to appear. According to Pyle, there is no direct relation between climate change and salmon survival and distribution, but climate change does affect the abundance of salmon, creating a chain reaction from the fishing industry to the bear population, which is also a big part of Kodiak's economy through hunting and wildlife-viewing tourism. The waters around Kodiak Island are naturally higher in acidity than other Alaskan waters because of the upwelling of deep, nutrient- and CO2-enriched water from the ocean conveyor belt. Because of the increasing CO2 buildup near Kodiak, more and more of the plankton that the salmon feed on may be disappearing. The calcium carbonate in the "shells" of the phytoplankton deteriorates, or becomes less likely to form, because of the CO2, sharply decreasing the phytoplankton population. Phytoplankton provides food for zooplankton, which is the preferred food source for salmon along with other small fish. Since salmon and other commercial fish prefer to eat this specific type of plankton, their food source is decreasing. Areas have already been closed to commercial fishing as a result of declining salmon returns. The value of commercial fish will change as they become less abundant. Currently, there is no significant fishing in the Arctic area, but fishery managers expect it to become a major target for commercial fishermen. "As Arctic sea ice recedes due to climate change, there is increasing interest in commercial fishing in Arctic waters," Commerce Secretary Gary Locke said in a statement on August 20th, 2009. "We are in a position to plan for sustainable fishing that does not damage the overall health of this fragile ecosystem.
This plan takes a precautionary approach to any development of commercial fishing in an area where there has been none in the past" (New York Times). Since the Arctic ice is shrinking, there has been debate about expanding commercial fishing to the Arctic, which could also hurt the economy of Kodiak and surrounding towns. The majority of jobs in Kodiak are fish-related, either fishing or cannery work. If the commercial fishing industry moves north, there will be fewer commercial fishermen in Kodiak and surrounding communities, and many people will lose their jobs, causing economic hardship. While the salmon fishery plays a big part in Kodiak's economy and social life, the brown bears of Kodiak do as well. They bring in tourism, as well as hunting. The bears feed on the salmon that inhabit Kodiak's rivers and on other natural resources such as salmonberries. Bear populations will likely survive without the salmon, but the bears will become smaller and their population will decrease noticeably when the salmon population declines. Fewer and smaller bears will create less revenue for hunting and wildlife-viewing businesses.

III.3 Technology to Slow Down Climate Change
While there is no sure way of completely stopping global climate change, there are small steps toward slowing it. There need to be more studies and research on what climate change can do to our communities. The people of Kodiak are aware of and concerned about the potential effects of climate change. Kodiak's community has made an effort to become more informed on this matter and has started to take action. Even though it is not possible to completely stop CO2 emissions, Kodiak has begun to cut back on non-renewable energy sources and focus on what is renewable: hydroelectric energy and wind power. Climate change is a contributor to the melting of the ice caps in the polar regions. The burning of wood, the use of fossil fuels and many other human activities increase CO2 in the atmosphere.
Carbon dioxide and other greenhouse gases cause the surface of the earth to heat up. As the earth's surface gets warmer, it causes the ice in the polar regions to melt. Our community has taken some steps in trying to slow down the changes in the climate, and technology is a way to help in the process. A few technological advancements in Kodiak that help to reduce CO2 emissions are the wind turbines and the Terror Lake hydroelectric power plant. Currently in our community, 11% of our energy comes from diesel; the remaining 89% comes from renewable resources.

The wind turbines are located close to town, on Pillar Mountain. The Kodiak Electric Association (KEA) started using the wind turbines as a source of energy in August 2009. At this time, there are three 1.5 megawatt wind turbines in operation, each producing enough energy to power 330 homes. According to KEA chief executive officer Darron Scott, Kodiak is permitted to install six turbines. In a presentation to the Alaska Power Association, he said that he expects the turbines to save 800,000 gallons of diesel each year. The wind turbines lower our fuel and power costs, and they cut the need for diesel-powered generation by almost half.

Terror Lake is a high-altitude lake approximately 25 miles southwest of the city of Kodiak. A tunnel inlet at the bottom of the lake runs through the mountain for about five miles and transports high-pressure water to the hydroelectric plant. The plant uses the water to spin turbines and make electricity. It is Kodiak's primary source of power, supplying 80% of Kodiak's energy.

Diesel generators are used to convert mechanical energy into electrical energy. This form of energy is an exceptionally reliable source, especially when power demand increases during peak consumption, for example during fishing season, when the local canneries are at their busiest.
The disadvantage of this form of energy is that it burns large amounts of diesel, releasing CO2 into the atmosphere. Given the limited supply of fossil fuels, we need to find a new and more efficient way of providing energy, one that is practical and clean and won't harm the earth more than it already has been harmed. Before the wind turbine project started, the generators supplied 20% of Kodiak's energy using fossil fuels. Around the state of Alaska, the use of alternative energy sources is increasing, including wind energy, hydroelectric energy, geothermal energy, tidal power, and solar power. At this time, Kodiak has the highest percentage of renewable energy sources of any Alaskan coastal community.

Acknowledgments:
- Clara (Jodwalis) Deal, Research Assistant Professor, International Arctic Research Center, University of Alaska Fairbanks
- Jessie Ellen Cherry, Ph.D., Research Assistant Professor, International Arctic Research Center and Institute of Northern Engineering, University of Alaska Fairbanks
- Marine Mammal Specialist, University of Alaska Fairbanks Fishery Industrial Technology Center
- Supervisory Wildlife Biologist, Kodiak National Wildlife Refuge
- Chief Executive Officer, Kodiak Electric Association
Exclusive: No ice at the North Pole

Polar scientists reveal dramatic new evidence of climate change
By Steve Connor, Science Editor
Friday, 27 June 2008

It seems unthinkable, but for the first time in human history, ice is on course to disappear entirely from the North Pole this year. The disappearance of the Arctic sea ice, making it possible to reach the Pole sailing in a boat through open water, would be one of the most dramatic – and worrying – examples of the impact of global warming on the planet. Scientists say the ice at 90 degrees north may well have melted away by the summer. "From the viewpoint of science, the North Pole is just another point on the globe, but symbolically it is hugely important. There is supposed to be ice at the North Pole, not open water," said Mark Serreze of the US National Snow and Ice Data Centre in Colorado.

From a June 20 National Geographic article:

Arctic warming has become so dramatic that the North Pole may melt this summer, report scientists studying the effects of climate change in the field. "We're actually projecting this year that the North Pole may be free of ice for the first time [in history]," David Barber, of the University of Manitoba, told National Geographic News aboard the C.C.G.S. Amundsen, a Canadian research icebreaker.

But there's a big problem with this "alarming" story: it's not even rare for the North Pole to be ice-free. See the details here, and note that the New York Times ran (and then retracted) a similar story back in August 2000.

From a November 2000 Patrick Michaels article:

By August 29, the level of outrage the Times had incurred provoked a half-hearted retraction of sorts, on page D-3, where the paper admitted it misstated the true condition of polar ice, noting that about 10 percent of the Arctic Ocean is open in the summer and that those open areas do in fact sometimes extend to the Pole.
McCarthy, the Times reported, "would not argue with critics who said that open water at the pole was not unprecedented." How about the truth? Open water is common. That's apparent from even a cursory look at the U.N.'s own temperature data or from a study of climate history. Climatologists are pretty sure that polar regions were around 2°C warmer than they are today during the period from 4,000 to 7,000 years ago. That's three millennia in which summer sea ice was likely more scattered than it is today. The only ecological catastrophe ecologists might be able to say resulted from this deplorable condition was the rise of human civilization.
<urn:uuid:a149c4bf-8743-432a-9132-ef04097b54bf>
3.359375
559
Personal Blog
Science & Tech.
51.209718
First, a brief description of what Bloom filters are. They’re a data structure that relies on hash functions for representing a set. The actual structure is a bunch of bits and a few hash functions. To add an element to a Bloom filter, compute the hashes for each hash function, and set each bit high. To test if an element has been added to the Bloom filter, compute the hashes and return true if all the bits are already high. A quick overview of what this means: - You don’t store any elements in the Bloom filter—just the elements’ hashes. Very compact. - Lookup and adding to the set are extremely fast. - Union and intersection of Bloom filter sets are just bitwise operators. - False positives are possible. (Since we don’t store the actual elements, we might see all the hashed bits as high when it was actually one or more previously added elements that made them high.) - There’s no way to remove an element from the set. (We don’t know if another element has hashed to the bits we want to set low.) Without further ado, here’s the app. (It’s a little rough around the edges, and, yes, it’s a Java applet—Processing.js has a few bugs and is slow as balls for this type of application, apparently.) This is a Bloom filter with 625 bits and 10 hash functions. (More implementation details below.) Have fun—ask questions or leave feedback in the comments! - Give the applet focus by clicking on the dots. - Start typing into it and you’ll see the bits your string is hashing to in real time. - Hit enter to add the string and you’ll see the high bits turn red. Repeat! For the even more curious, here’s the Processing source. Update: Added more explicit directions.
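For anyone who would rather read code than run the applet, here is a minimal Python sketch of the structure described above. The 625-bit / 10-hash parameters match the applet, but the hashing scheme (salting a single SHA-256 hash) is just one illustrative choice, not what the Processing applet actually does:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a bit array plus k hash functions."""

    def __init__(self, num_bits=625, num_hashes=10):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item):
        # Derive k (roughly independent) bit positions by salting one hash.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        # Adding just sets each hashed bit high; the element itself isn't stored.
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item):
        # True if all hashed bits are high. A True result may be a false
        # positive; a False result is always correct.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("hello")
print("hello" in bf)    # True: no false negatives for added items
print("goodbye" in bf)  # False, barring a (vanishingly unlikely) false positive
```

Note there is no `remove` method, for exactly the reason given above: clearing a bit could erase evidence of some other element that hashed to it.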
Water Temperature Data for Tantalus Creek Upstream from the Reservoir, Yellowstone National Park This site collects water from all parts of the Norris Geyser Basin except The Gap and Hundred Springs Plain. The temperature is measured immediately prior to where the Creek enters the lake-like water body known as The Reservoir. The site is off-limits to the public and is surrounded by unstable thermal ground. Click the site name to access the temperature graphs. These temperatures reflect both meteorological conditions (solar radiation and air temperature) and the amount of flow from nearby thermal features. Rapid temperature increases can reflect geyser eruptions or other changes in water discharge. Rapid temperature drops may be caused by precipitation. Comparison of the different channel temperatures (as well as the air-temperature control station) can reveal specific parts of the basin affected by increased hydrothermal discharge.
What surrounds a hotbed of star formation? In the case of the Orion Nebula -- dust. The entire Orion field, located about 1600 light years away, is inundated with intricate and picturesque filaments of dust. Opaque to visible light, dust is created in the outer atmospheres of massive cool stars and expelled by a strong outer wind of particles. The Trapezium and other forming star clusters are embedded in the nebula. The intricate filaments of dust surrounding M43 appear brown in the above image, while the central glowing gas is highlighted in red. Over the next few million years much of Orion's dust will be slowly destroyed by the very stars now being formed, or dispersed into the Galaxy.
Are the nearest galaxies distributed randomly? A plot of over one million of the brightest "extended sources" detected by the Two Micron All Sky Survey (2MASS) shows that they are not. The vast majority of these infrared extended sources are galaxies. Visible above is an incredible tapestry of structure that provides limits on how the universe formed and evolved. Many galaxies are gravitationally bound together to form clusters, which themselves are loosely bound into superclusters, which in turn are sometimes seen to align over even larger scale structures. In contrast, very bright stars inside our own Milky Way Galaxy cause the vertical blue sash. Credit: 2MASS, T. H. Jarrett, J. Carpenter, & R. Hurt
Bar-headed geese in high-flying wind tunnel test

Video footage of bar-headed geese in high-altitude wind tunnel experiments has been released by researchers. The flights were captured in super slow-motion by the University of British Columbia. During "test flights", birds wear masks they are trained to wear as goslings, which provide them with oxygen levels that simulate high altitude. The masks also collect the gas that the birds breathe out, measuring how much precious oxygen they use in flight. BBC Nature spoke to lead researcher Dr Jessica Meir at the Society for Experimental Biology's annual meeting in Salzburg. Dr Meir explained that a great deal of research into the "remarkable geese" has revealed how the birds are specially adapted to fly at extremely high altitude. Their blood, for example, can carry far more oxygen to their muscles than that of other birds. But while most studies have focused on the birds while they are at rest, Dr Meir wanted to create a "picture of oxygen delivery while the bird is flying". Fortunately for her, the university's engineering department has a wind tunnel wide enough for a goose, with a wingspan of more than 1.5m, to fly in. Tracking studies have recorded the birds at heights of 6,000m (just under 20,000 feet), something they need to achieve in order to complete their migration through the Himalayas.

Other high altitude wildlife:
- Yunnan snub-nosed monkeys - see the bizarre-looking primate that lives at the highest altitudes of any primate other than humans.
- Jumping spider - watch the Himalayan jumping spider hunting at 6,700m above sea level.
- Ancient bristlecones - see the world's oldest trees that have survived 5,000 years of harsh, high altitude conditions.
But, in order to find out just how high the birds could fly, Dr Meir and her colleagues recreated the oxygen and nitrogen levels that the birds would receive at 6,000m and at 9,000m above sea level. This is approximately 10% oxygen and 7% oxygen respectively. At 7% oxygen, the birds are experiencing the conditions required to fly over the summit of Mount Everest. Dr Meir told BBC Nature that she was interested in the physiology that allowed animals to cope in extreme environments and "do amazing things". "I want to uncover the mechanism that allows these incredible physical feats to be accomplished," she said. "We already know they fly at up to 6,000m, where the oxygen levels are half what they are at sea level. "And they're not only able to function, they're able to fly, which is an incredibly expensive way of moving around; it takes 10-20 times more oxygen than when they're resting." Dr Meir also hopes that understanding how bar-headed geese cope and perform at such low oxygen levels will help inform research into human respiratory problems.
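The 10% and 7% figures line up with a simple back-of-the-envelope check. A short sketch using the isothermal barometric formula (an approximation; the 8.4 km scale height is an assumed round number, not a value from the article) gives the effective oxygen available at altitude relative to the roughly 21% oxygen at sea level:

```python
import math

def o2_fraction_at_altitude(height_m, scale_height_m=8400.0, sea_level_o2=0.21):
    """Effective oxygen fraction at altitude, scaled by the pressure ratio
    from the isothermal barometric formula P(h) = P0 * exp(-h / H)."""
    pressure_ratio = math.exp(-height_m / scale_height_m)
    return sea_level_o2 * pressure_ratio

for h in (6000, 9000):
    print(f"{h} m: ~{o2_fraction_at_altitude(h) * 100:.1f}% effective O2")
```

This yields roughly 10% at 6,000 m and 7% at 9,000 m, consistent with the gas mixtures the researchers fed to the geese.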
Builder and Bridge patterns serve different purposes. The Builder pattern is used when the construction of an object is more complex than just saying new ....(). Imagine that a house needs to be constructed based on various designs, choices and amenities selected by the customer. In this case, you will typically use the Builder pattern to "CONSTRUCT" the house PER CUSTOMER based on the various choices she selects. The creation of the House object, the end product, is complex because it depends on external factors. The Bridge pattern, on the other hand, is used to vary the abstraction and implementation independently. Suppose there is a car demo for, say, a sports car and an economy car. You would like to switch seamlessly between the two so that client code is not affected. You can go through examples from the Web.
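The examples above are Java-flavored, but the two intents are easy to sketch in Python. This is an illustrative rendering of the house and car-demo analogies, not code from any particular library:

```python
# Builder: step-by-step construction of a complex object (the "house").
class House:
    def __init__(self, rooms, garage, pool):
        self.rooms, self.garage, self.pool = rooms, garage, pool

class HouseBuilder:
    def __init__(self):
        self._rooms, self._garage, self._pool = 0, False, False

    def rooms(self, n):
        self._rooms = n
        return self

    def garage(self):
        self._garage = True
        return self

    def pool(self):
        self._pool = True
        return self

    def build(self):
        # Only here is the end product assembled, per customer choices.
        return House(self._rooms, self._garage, self._pool)

# Bridge: the abstraction (CarDemo) varies independently of the
# implementation (Car subclasses).
class Car:
    def top_speed(self):
        raise NotImplementedError

class SportsCar(Car):
    def top_speed(self):
        return 300

class EconomyCar(Car):
    def top_speed(self):
        return 160

class CarDemo:
    def __init__(self, car):
        self.car = car  # the "bridge" to the implementation

    def run(self):
        return f"Demo at up to {self.car.top_speed()} km/h"

custom = HouseBuilder().rooms(4).garage().build()
demo = CarDemo(SportsCar())
print(demo.run())
demo.car = EconomyCar()  # swap implementations; client code is untouched
print(demo.run())
```

The key contrast: Builder hides a complicated construction sequence behind a fluent interface, while Bridge lets you swap the implementation under a stable abstraction at runtime.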
The Twin Paradox The “twin paradox” is not a paradox in the sense of a logical contradiction that falsifies relativity but rather a very curious puzzle. Traditionally, the twin paradox is concerned with the strange result that if one of two twin brothers leaves the other and embarks on a high-speed journey to a remote point and back again, the twins will no longer be the same age. Let’s call these hypothetical twin brothers A and B. For this discussion, we’ll stipulate that A stays home while B travels away from his brother at a speed of 60 percent of the speed of light (0.6c, where c is the speed of light, nearly 300 million meters per second). B travels for fifteen years by A’s reckoning then quickly decelerates to a stop, turns around, and quickly accelerates back to 0.6c in the direction toward his brother, A. After another fifteen years (again, by A’s reckoning), B arrives home, decelerates, and rejoins his brother, who has aged thirty years since he last saw B. The “paradox” is that, even though A’s velocity relative to B is the same as B’s velocity relative to A, B will have experienced only twenty-four years of travel and find himself six years younger than his twin brother, A. While this is indeed puzzling, it is not a logical flaw in relativity. The twins do not have similar experiences during B’s long journey, and that resolves the “paradox.” (While the fiction of very short deceleration/acceleration periods is useful to keep this discussion from getting into general relativity theory, it should be noted that such accelerations would almost certainly reduce twin B to a thin red puddle. It would take weeks to make the velocity changes at tolerable accelerations, say 5 to 10 g. See my accompanying sidebar “On Problems with Near-light-speed Travel” for more on this type of difficulty.) The journey of B, as viewed by twin A, is depicted in figure 1. The workings of the “Twin Paradox” can be explained with the aid of space-time diagrams. 
A space-time diagram for the stay-at-home twin, A, appears in the left half of figure 2. The grid marks show years on the vertical axis and distance in light-years on the horizontal axis. The thick lines represent A's and B's positions over time, while the thin lines with arrows represent the paths of light beams sent between the twins. During the fifteen years (in A's frame of reference) of outbound travel by twin B, B gets out to a distance of nine light-years (0.6c x 15 years) from twin A. However, signals or light rays sent from B's turnaround point won't even reach A for another nine years, or until twenty-four years (15 + 9) after B's departure. That is, A will see his brother B recede for twenty-four years, and then approach for just six years, arriving thirty years after his initial departure. This is in marked contrast to B's observations: B will see his stay-home brother recede for twelve years. After B turns around, he will see A approaching for twelve years and will return a total of twenty-four years after his departure. However, the same interval is thirty years by A's calendar. The difference is that, during the short but intense accelerations experienced by B, B's velocity relative to the universe (and to A) is changing. Twin B effectively "loses synch" with the rest of the universe, including his twin brother, A. Twin B is not in an inertial reference frame over the entire trip, and his bouts with intense accelerations will certainly remind him of that fact. Of course, A won't be aware of B's velocity changes until many years later. The space-time diagrams for B's journey appear on the right of figure 2. These can't be represented as a single diagram, because they are views of two different inertial frames (B outbound versus B inbound). The twin that undergoes acceleration will be the one who returns home younger than his stay-at-home brother.
The loss of synchronization due to acceleration is the key and the reason it's not a logical "paradox."

Figure 2: Twin paradox space-time diagrams for stay-at-home twin A (left) and traveler B (right).

This point is crucial: the time discrepancies between the twins are absolutely real. Here is a quick example, presented with the "radar method": since any radar beams sent from A meet the target (B) at only one point in space-time, those beams must spend equal times outbound and inbound with respect to the sender. Figure 2 shows that a radar beam emitted by twin A at his time of two years will be reflected from B at some unknown time, and received again by A when his (A's) calendar reads eight years. Likewise, a beam emitted by twin A at four years will be reflected from B and received by A when his calendar reads sixteen years. Twin A can calculate the time and distance (in A's frame of reference) of reflections from B, knowing only his own sending and receiving times and that the signals propagate at the speed of light. Since A's two-year pulse returns at eight years, the reflection occurred (by A's calendar) at the midpoint of the send/receive times, (2+8)/2 = 5 years. Since A's four-year pulse returns at sixteen years, the reflection occurred at (4+16)/2 = 10 years by A's calendar. Therefore, A measures the interval between these reflections (at five years and ten years) as being five years long. Because the twins are separating rapidly, there will be a delay in B's receipt of A's transmissions. In particular, while A's transmissions were sent two years apart by his clock, they were received by B over an interval longer than two years, say, K*2 years, where K is a factor greater than 1.
However, the same must hold true for B’s “transmissions” back to A: whatever period separates the reflections from B’s craft, A’s measurement of receiving times will be longer—in fact, precisely K times longer—since B is moving away from A exactly as fast as A recedes from B (“relativity”). So, A’s original pulses were sent two years apart; these were received by B at K*2 years apart and received again by A at K*K*2 years apart, or eight years. Clearly, K must equal 2, and B’s interval between receipt of A’s two signals must be 2*2=4 years, while A’s measurement of the time for the pulses to return from B is K*4=8 years, as required. This is how “Time Dilation” comes to be measured by twin A: the five-year interval that A experiences in his own frame of reference takes only four years in B’s frame of reference.
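The numbers in this discussion are easy to verify. A short Python sketch computing the Lorentz factor and the Doppler (K) factor for a speed of 0.6c reproduces the 30-year/24-year split and the K = 2 used in the radar-method argument:

```python
import math

beta = 0.6                                # B's speed as a fraction of c
gamma = 1 / math.sqrt(1 - beta**2)        # Lorentz factor, here 1.25
k = math.sqrt((1 + beta) / (1 - beta))    # relativistic Doppler factor, here 2

a_round_trip = 30.0                       # years elapsed on A's calendar
b_round_trip = a_round_trip / gamma       # proper time experienced by B

# A watches B recede for the 15-year outbound leg plus the 9 years the
# turnaround signal takes to travel back (15 + 0.6 * 15 = 24 years).
recede_seen_by_a = 15 + beta * 15

print(f"gamma = {gamma:.2f}, K = {k:.2f}")
print(f"B ages {b_round_trip:.0f} years while A ages {a_round_trip:.0f}")
print(f"A watches B recede for {recede_seen_by_a:.0f} years")
```

This matches the text: gamma = 1.25, so B experiences 30 / 1.25 = 24 years, six fewer than A, and K = 2 is exactly the stretch factor found from A's two-year pulses returning eight years apart.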
Warp Speed 10: NASA Physicist Says Warp Drive is Feasible
September 18, 2012

Adjustments have radically reduced the theoretical amount of energy necessary to warp space-time.

"There is hope." Those were the words of Harold "Sonny" White at the 100 Year Starship Symposium, an event where science fiction fans and theoretical physicists alike met to trade suggestions and ideas about future starship designs. Mr. White was talking about his novel warp drive that bears eerie similarities to the fictional drive of Star Trek fame.

I. From Fiction to Feasible

The idea for the real-life version was first hatched by Mexican physicist Miguel Alcubierre in 1994. Alcubierre's spaceship was a two-part design consisting of a football-shaped spacecraft and an outer ring of exotic matter, responsible for warping space. Inside the ring was a bubble of normal, safe space-time encapsulating the ship, but outside it the ring contracted space-time ahead of the ship while elongating it behind the ship. The resulting distortion of the fabric of our universe would allow the spaceship to travel at a mind-blowing 10 times the speed of light without violating the fundamental laws of space and time.

The warp spaceship is a two-part design. [Image Source: Harold White]

So what's the problem? The amount of energy needed to warp space was calculated to be equivalent to the mass of the planet Jupiter, the most massive planet in our solar system. Thus for almost a decade the idea was written off as an interesting theoretical observation, more fit for fiction than fact. Then along came Mr. White with an interesting idea -- what if you turned the relatively flat ring into a donut? The results were astonishing: using the new rounded ring design, the mass-energy needed was reduced by orders of magnitude, to around that of the Voyager 1 probe NASA launched in 1977 -- a small spacecraft.
And by oscillating the intensity of the warps over time, the energy could be reduced even further. Comments Mr. White, "The findings I presented today change it from impractical to plausible and worth further investigation. The additional energy reduction realized by oscillating the bubble intensity is an interesting conjecture that we will enjoy looking at in the lab. If we're ever going to become a true spacefaring civilization, we're going to have to think outside the box a little bit; we're going to have to be a little bit audacious."

II. Moving Towards the Stars

Following the new revelations, Mr. White's next order of business is to set up a tabletop experiment at the Johnson Space Center using a measurement instrument they invented, dubbed the White-Juday Warp Field Interferometer. The laser instrument is designed to detect small warps in space. Mr. White says of this "humble" experiment, "We're trying to see if we can generate a very tiny instance of this in a tabletop experiment, to try to perturb space-time by one part in 10 million."

The warp drive could allow man to reach distant exoplanets. [Image Source: NASA/UCSD]

Richard Obousy, president of Icarus Interstellar, a non-profit group of scientists and engineers devoted to pursuing interstellar spaceflight, is thrilled by the progress, commenting, "Everything within space is restricted by the speed of light. But the really cool thing is space-time, the fabric of space, is not limited by the speed of light."

At this point the warp engine is still in the very nascent stages of development. And yet one cannot help but imagine the words of fictional Star Trek character Zefram Cochrane, Mr. White's fictional analogue: On this site, a powerful engine will be built - an engine that will someday help us to travel a hundred times faster than we can today. Imagine it: thousands of inhabited planets at our fingertips.
And we'll be able to explore those strange new worlds, and seek out new life, and new civilizations. This engine will let us go boldly, where no man has gone before.

And at that the mind wanders upon the idea of this device floating through the cold stretches of space, a doubly round manmade instrument in a universe dominated by curvature, creating oscillations of space which are in turn oscillated in intensity with a sinusoidal, rhythmic beat that could one day carry mankind across the stars.

RE: Seems like fantasy (9/20/2012 10:46:12 AM)
I hate to say this, but you sound almost logical. Isn't there some place other than DT for logical people to act logical and...you know...make sense? ;) FTL is a bit like God: until someone disproves it, we'll all continue tracing it. Unlike religion, that would actually be good science, so I disagree with referring to it as a fantasy just yet - this one is definitely subject to science.
Biodiversity is the diversity within a species, between different species, and the total diversity present in an ecosystem. Extinction of species lessens the biodiversity of the ecosystem in addition to having an impact beyond the local environment.

How do species become extinct?
It is generally accepted by scientists that biodiversity is being lost (Handbook, 21). While there is some debate over how much of the present extinction is caused by mankind (anthropogenic sources), many biologists strongly contend that man is driving extinction well beyond its natural rate. There are many ways humans impact their environment, including land development; overexploitation or hunting; species translocation and the introduction of foreign, invasive species; and pollution along with climate change. Of these, the destruction or degradation of habitats that occurs with the development of land is one of the key causes, if not the primary cause, of biodiversity loss (Lande, 2). A lot of land, particularly tropical rainforest, has been developed for agricultural or other use. This clearing of land destroys the habitat of many species. Especially at risk are the biologically rich coral reefs, grasslands, rainforests, and old-growth forests. These sources account for over 50% of all known species (Hanley, 295).

Why value biodiversity and its loss?
There are many ways in which biodiversity is valued, which also provide reasons for preserving biological richness. (As this section deals with valuing environmental goods, it's appropriate to know about Valuation).
- Aesthetic Value
- Direct Value
- Ecosystem Value

What can we do about it?
It is an admirable but ultimately quixotic goal to try to save all of the species on the planet from extinction. The costs of such an undertaking are uncertain, but certainly exorbitant. The benefits calculated using the above methods would also be uncertain, but most likely not enough to offset the high costs of such a grand campaign.
Unfortunately, mankind's impact on its environment has been such that a certain degree of species loss seems inevitable. But mankind can also limit its impact and set up a kind of triage system for threatened or endangered species by prioritizing them based on their worth. Though valuing one form of life over another is repugnant to many, it is what is necessary to effectively and efficiently protect biodiversity and the ecosystem it supports. Whenever regulation of human behavior is concerned, there is often a call for government intervention. The area of species protection is no different, with many laws enacted in the past 35 years concerning the preservation of biodiversity. Though the track records of the policies below have been less than stellar, they comprise a base from which to work.
- Local Policies
- National Policies
- International Policies

All of these forms of policies run into some of the problems associated with externalities and public goods that commonly surround such natural resources. The biodiversity in an unprotected landscape can be conceived of as a Common-Property Good, which is an impure form of a public good. This means that the resource is exhaustible - forests can be cut to the point of depletion, and species are gone when extinct. Yet people can't be excluded from consuming the good - by clear-cutting the forests or trapping and selling exotic species. The issue of externalities arises when the local slash-and-burn farmer doesn't take into account the external cost he is placing on others through his practices: the costs on those who have existence values for the flora and fauna, or the cost to those infirm people who could be cured by a drug discovered in the ecosystem being cut down. All the farmer is considering is how he is going to raise enough crops to make a living and feed his family.
Correspondingly, rich first-world countries could impose most of the costs of a biodiversity preservation program upon the poor countries that host the habitat. So the more advanced countries - those where people are likely to have an existence value for rainforests and the like - get to free-ride off of the efforts of the poorer nation to provide the public good of global biodiversity. Or the tables could be turned, and the costs of preserving biodiversity could be shifted over to the richer nations. Then the less advanced nations would be free-riding off of the efforts of the First World. What's clear is that hardly anyone wants to pay for a good like biodiversity that is considered common property and is provided freely by nature. A policymaker or economist must be mindful of such obstacles, or they can grossly misjudge the situation and what the remedy for it should be. They should know about these serious considerations, because market failures like the underprovision of a public good like biodiversity serve as justification for the government to intervene with some sort of policy.

Tropical forests such as the Amazon or the Congo are typically known for their wildly diverse biological resources. That's why rainforest preservation is considered one and the same with the preservation of biodiversity. Other reasons concerning the value of preserving the rainforest include its role in moderating the global climate. In this case study all the values of rainforests are explored, along with the policy solutions addressing the issue of tropical deforestation.

Handbook of Market Creation for Biodiversity: Issues in Implementation. 2004. OECD.
Hanley, Nick, Jason F. Shogren, and Ben White. Introduction to Environmental Economics. 2001. Oxford University Press, NYC.
Lande, Russell. 1999. "Extinction Risks from Anthropogenic, Ecological, and Genetic Factors."
In Genetics and the Extinction of Species: DNA and the Conservation of Biodiversity, edited by Laura Landweber and Andrew P. Dobson. Princeton, NJ: Princeton University Press.
<urn:uuid:d4c16220-fdb2-4340-b2a2-2683cf1f272d>
4.15625
1,172
Knowledge Article
Science & Tech.
36.840607
1. Fluid Dynamics
This section discusses the analysis of fluid in motion - fluid dynamics. The motion of fluids can be predicted in the same way as the motion of solids is predicted, using the fundamental laws of physics together with the physical properties of the fluid. It is not difficult to envisage a very complex fluid flow. Spray behind a car, waves on beaches, hurricanes and tornadoes, or any other atmospheric phenomenon are all examples of highly complex fluid flows which can be analysed with varying degrees of success (in some cases hardly at all!). There are many common situations which are easily analysed.

2. Uniform Flow, Steady Flow
It is possible - and useful - to classify the type of flow being examined into a small number of groups. If we look at a fluid flowing under normal circumstances - a river for example - the conditions at one point will vary from those at another point (e.g. different velocities): we have non-uniform flow. If the conditions at one point vary as time passes, then we have unsteady flow. The following terms describe the states which are used to classify fluid flow: Combining the above we can classify any flow into one of four types. If you consider the flow in each of the above classes you may imagine that one class is more complex than another. And this is the case - steady uniform flow is by far the simplest of the four. You will then be pleased to hear that this course is restricted to only this class of flow. We will not be encountering any non-uniform or unsteady effects in any of the examples (except for one or two quasi-time-dependent problems which can be treated as steady).

3. Compressible or Incompressible
All fluids are compressible - even water - their density will change as pressure changes.
Under steady conditions, and provided that the changes in pressure are small, it is usually possible to simplify analysis of the flow by assuming it is incompressible and has constant density. As you will appreciate, liquids are quite difficult to compress, so under most steady conditions they are treated as incompressible. In some unsteady conditions very high pressure differences can occur and it is necessary to take these into account - even for liquids. Gases, on the other hand, are very easily compressed, so it is essential in most cases to treat these as compressible, taking changes in pressure into account.

4. Three-dimensional flow
Although in general all fluids flow three-dimensionally, with pressures, velocities and other flow properties varying in all directions, in many cases the greatest changes only occur in two directions or even only in one. In these cases changes in the other direction can be effectively ignored, making analysis much simpler. Flow is one-dimensional if the flow parameters (such as velocity, pressure, depth etc.) at a given instant in time only vary in the direction of flow and not across the cross-section. The flow may be unsteady; in this case the parameters vary in time but still not across the cross-section. An example of one-dimensional flow is the flow in a pipe. Note that since flow must be zero at the pipe wall - yet non-zero in the centre - there is a difference of parameters across the cross-section. Should this be treated as two-dimensional flow? Possibly - but it is only necessary if very high accuracy is required. A correction factor is then usually applied. Flow is two-dimensional if it can be assumed that the flow parameters vary in the direction of flow and in one direction at right angles to this direction. Streamlines in two-dimensional flow are curved lines on a plane and are the same on all parallel planes. An example is flow over a weir, for which typical streamlines can be seen in the figure below.
Over the majority of the length of the weir the flow is the same - only at the two ends does it change slightly. Here correction factors may be applied. In this course we will only be considering steady, incompressible one- and two-dimensional flow.

5. Streamlines and streamtubes
In analysing fluid flow it is useful to visualise the flow pattern. This can be done by drawing lines that are everywhere tangent to the local fluid velocity. These lines are known as streamlines. Here is a simple example of the streamlines around a cross-section of an aircraft-wing-shaped body: When fluid is flowing past a solid boundary, e.g. the surface of an aerofoil or the wall of a pipe, fluid obviously does not flow into or out of the surface. So very close to a boundary wall the flow direction must be parallel to the boundary. At all points the direction of the streamline is the direction of the fluid velocity: this is how they are defined. Close to the wall the velocity is parallel to the wall, so the streamline is also parallel to the wall. It is also important to recognise that the position of streamlines can change with time - this is the case in unsteady flow. In steady flow, the position of streamlines does not change.

Some things to know about streamlines
A useful technique in fluid flow analysis is to consider only a part of the total fluid in isolation from the rest. This can be done by imagining a tubular surface formed by streamlines along which the fluid flows. This tubular surface is known as a streamtube. And in a two-dimensional flow we have a streamtube which is flat (in the plane of the paper).
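As a quick numerical illustration of the tangency definition of a streamline (not part of the original notes), a marker can be stepped repeatedly along the local velocity direction. The model field u = x, v = -y is an assumed example (plane stagnation flow); its exact streamlines are the hyperbolae xy = constant, so the product xy should stay nearly constant along the traced path.

```python
# Trace a streamline of the steady 2-D field (u, v) = (x, -y) by
# taking small Euler steps along the local velocity direction.
def trace_streamline(x, y, dt=1e-3, steps=5000):
    path = [(x, y)]
    for _ in range(steps):
        u, v = x, -y                      # velocity components of the model flow
        x, y = x + u * dt, y + v * dt     # step along the velocity vector
        path.append((x, y))
    return path

path = trace_streamline(1.0, 1.0)
x_end, y_end = path[-1]
# The exact streamline through (1, 1) satisfies x*y = 1; the small Euler
# error makes the numerical product drift slightly below 1.
print(round(x_end * y_end, 3))  # → 0.995
```

Shrinking `dt` brings the product closer to the exact value 1.0, which is a handy check that the tracer really is following a streamline.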
<urn:uuid:3e21e89c-3f40-4f7e-8246-b443478962e5>
3.734375
1,154
Tutorial
Science & Tech.
47.745637
The next time you see something flapping in the breeze on an overhead power line, squint a little harder. It may not be a plastic bag or the remnants of a party balloon, but a tiny spy plane stealing power from the line to recharge its batteries. The idea comes from the US Air Force Research Lab (AFRL) in Dayton, Ohio, US, which wants to operate extended surveillance missions using remote-controlled planes with a wingspan of about a metre, but has been struggling to find a way to refuel to extend the planes' limited flight duration. So the AFRL is developing an electric motor-powered micro air vehicle (MAV) that can "harvest" energy when needed by attaching itself to a power line. It could even temporarily change its shape to look more like an innocuous piece of trash hanging from the cable. AFRL's initial aim is to work out how to make a MAV flying at 74 kilometres per hour latch onto a power line without destroying itself or the line. In addition, so as not to arouse suspicion, AFRL says the spy plane will need to collapse its wings and hang limply on the cable like a piece of wind-blown detritus. Much of the "morphing" technology needed to perform this has already been developed by DARPA, the Pentagon's research division. Technologies developed in that program include carbon composite "sliding skins", which allow fuselages to change shape, and telescopic wings that allow lift to be increased in seconds by enlarging the wing's surface area. Challenges abound, though. Zac Richardson, a power-line engineer with National Grid in the UK, warns that if the MAV contacts an 11-kilovolt local power line, it could short-circuit two conductors, causing an automatic disconnection of the very power the plane seeks. And on a 400-kilovolt inter-city power line, it risks discharging sparks. "It will hang there fizzing and banging and giving its position away anyway," says Richardson.
"Even kites falling across power lines cause breakdowns," adds Ian Fells, an expert in electricity transmission based in Newcastle, UK. "It's an utterly bizarre idea to try to land a plane on one." Regardless of the challenges faced, AFRL plans test flights in 2008.

Mav Powered From Hv Lines
Tue Dec 18 11:45:39 GMT 2007 by Richard Deasington
Seems like a reasonable idea. Landing on 11kV lines would be a problem for two reasons: line spacing is too small for a 1 metre wingspan, and the (relatively) low voltage means you won't get much corona discharge via the MAV to the air. At 400kV I don't see a problem. The lines are separated by several metres, with a good margin of safety, and the corona discharge is substantial. Aircraft warning lights have been powered this way for many years - drive to Charles de Gaulle airport in Paris and you will see dozens of large neon tubes powered parasitically from the HV lines. Landing will be tricky - but a hook arrangement at the back should do the trick, and a point on the nose will provide enough discharge to the air to give the power needed.

Mav Powered From Hv Lines
Fri Dec 21 10:15:42 GMT 2007 by Nick Gilbert
You don't need to use corona discharge. The article states that the plane will land directly on the power cable.
Therefore it can pick up all the power it needs from inductance - no spiky nose needed.

Mav Powered From Hv Lines
Fri Dec 21 19:02:29 GMT 2007 by Anonymous
What's reasonable about this? The fact that this bit of gee-whizzery might work in principle, or that it's yet another example of how eager one wealthy nation is to stick its nose into the affairs of either its own citizens or those of other nations, illegally, under the pretense of democracy and the rights of people it pretends to defend? The "ideas" generated by this paranoia have long since passed into hilariously ludicrous territory. Just because a technology MIGHT work is not sufficient justification to implement it.

Is this science or is it something completely different?
Tue Dec 18 13:26:48 GMT 2007 by John W. Blount
This technology is not new in the United States. In the thirties of the last century, Nikola Tesla came up with the same idea and applied it to automobiles. The car did not need to attach itself to the line but used the lines as 'Tesla coils'. They pulled the ambient charge from the lines out of the air to do the work (using vacuum tubes). The vehicle had virtually unlimited mileage on roads with electric power transmission lines. Power companies shut his idea down. Where would you put the meter to charge the users? Another fellow whose name I forget (I believe it was Campbell) came up with the same technology in the forties or fifties. He too was shut down by the power industry. Spy planes have the advantage of purposely being stealthy in their 'stealing' of power. No need for a meter. Maybe the national power companies could accept money from another government tax - this one to cover fueling electric autos on the road.

Wed Dec 19 09:22:57 GMT 2007 by Paul Marks
Good point about Tesla - but the AFRL team tried induction and couldn't get enough power from flying their prototypes in close proximity to the power line alone. So they have gone for sucking the power straight out of the line.
Paul Marks, New Scientist

Wed Dec 19 15:13:39 GMT 2007 by John W. Blount
Maybe it needs vacuum tubes.

Sat Dec 29 09:28:53 GMT 2007 by Dan Sigler
Just my two cents: why not use a near-field inductive charger coupled to a small supercapacitor? I don't know how much space they have in the fuselage or what the power requirements are, but the ability to take a charge in a few seconds ought to merit some consideration. If the problem was in rapidly drawing power from the field produced by the line, then that should be addressable through better materials like high-temperature superconductors or through more coil windings. And if they use HTS wire, it's incredibly thin to begin with, so its shape lends itself to a large number of windings. Cooling the wire would be an issue, but small compressors have been developed using MEMS for HTS cooling applications. Given the individual component sizes you could probably integrate the whole thing into a package of say 100mm x 40mm x 40mm (or smaller if you designed the entire package as a single component). Small Form Factor Ultracapacitor: (long URL - click here) MEMS based microcompressor: HTS Inductive loops (motor application, but tech is similar): Put together the right components and they shouldn't even need to perch on the wire; a couple of swoops along the line should be enough to recharge the system. Of course these aren't cheap components, but this is a military development project so the component cost shouldn't be too much of a problem. Just another thought: it might be possible to design the MEMS components themselves to be reactive to the line field, in which case you could eliminate the need to power the microcompressor. As you swoop down to the line, the field would first energize the microcompressor, which would then cool the HTS wire coil so that it could draw off the line field. If you transfer the power from the supercapacitor to the vehicle's battery then you only need to maintain the coil temperature while charging.
Well, I wish them good luck with this. Although I don't necessarily agree with our trend in the US towards being a surveillance society, it appears that it is going to happen. In which case it will not be just the military that needs these devices. Arizona sheriffs are already using UAVs to spot drug traffickers and meth labs in the desert, and one can imagine that as the cost comes down these devices could find use with other first responders and emergency services like fire and ambulance, both for fast response to site and for situational awareness in disaster and other response situations. Even power companies could put these planes to use cruising along the transmission lines in more remote and inaccessible regions to monitor lines for physical wear and as first to site when responding to line failures.

Wed Dec 19 14:38:02 GMT 2007 by Mike P
Re: where to put the meters. How about in the car? If the car was drawing power, surely this could be measured and charged. Doesn't seem all that hard to me. Maybe I'm missing something.

Wed Dec 19 15:06:11 GMT 2007 by John W. Blount
An auto is a moving target pulling along the length of the line. Today we might be able to measure it and have readings broadcast automatically to the power company, but in Tesla's time it was not possible. The power would be gone with the vehicle.

Tue Dec 18 13:28:26 GMT 2007 by Jeshua
I wonder how they plan to meter the usage - or do they just plan on stealing the energy?

Tue Dec 18 14:42:39 GMT 2007 by Chris
Didn't you notice, they're American, so of course they would be 'liberating' this electricity from its oppressors and not stealing it at all. Then they'll tell us how much happier it is to be flying free through the air whilst helping to bring down the wrath of Bush on its own generating capacity.

Wed Dec 19 03:43:14 GMT 2007 by Fred
Amazing. The deranged Bush-haters can work a Bush bash into a comment about a totally non-political news story.
Yawn. Take a Xanax; he'll be gone in just a bit more than a year. Then Shrillery or Obamasama can ruin - uh, run the country.

Wed Dec 19 08:55:04 GMT 2007 by Miles
I suppose the mention of America, spying, and energy made him/her think of Bush. Can't think why.

Tue Jan 01 14:42:36 GMT 2008 by Rabbit
Nothing deranged about Bush bashing; the bastard deserves more than a bit of bashing. The truly deranged are that dwindling minority of morons who support Bush and the other war criminals in the US regime.
<urn:uuid:01cd8515-4678-4ced-b153-e169e6264b65>
2.859375
2,347
Comment Section
Science & Tech.
57.183292
Trinitrotoluene is one of the explosive materials most commonly used by military forces. One of the big reasons for this is that it's very resistant to shocks and friction, which makes it difficult to detonate. Because of this, TNT was originally ignored as an explosive material and actually used as a yellow dye! Eventually the Germans began using it in their artillery shells. Here, the delayed detonation actually worked to their advantage: the German shells would explode inside British ships. The British ones, on the other hand, exploded on contact, wasting most of the explosive energy on the outside of the ship.
<urn:uuid:b50a7655-1f9f-4845-9244-af3acd35e4a6>
2.953125
126
Knowledge Article
Science & Tech.
37.484466
Magnesium (Mg) and its alloys are attractive for use in automotive and aerospace applications because of their low density and good mechanical properties. However, difficulty in forming magnesium and the limited number of available commercial alloys limit their use. Powder metallurgy may be a suitable solution for forming near-net-shape parts. However, sintering pure magnesium presents difficulties due to the surface film that forms on the magnesium powder particles. The present work investigates the composition of the surface film that forms on pure magnesium powders exposed to atmospheric conditions and on pure magnesium powders after compaction under uniaxial pressing at 500 MPa and sintering under argon at 600 °C for 40 minutes. Initially, focused ion beam microscopy was used to determine the thickness of the surface layer of the magnesium powder, which was found to be ∼10 nm. X-ray photoelectron analysis of the green magnesium sample prior to sintering confirmed the presence of MgO, MgCO3·3H2O, and Mg(OH)2 in the surface layer of the powder, with a core of pure magnesium. The outer portion of the surface layer was found to contain MgCO3·3H2O and Mg(OH)2, while the inner portion of the layer is primarily MgO. After sintering, the MgCO3·3H2O was found to be almost completely absent, and the amount of Mg(OH)2 was also decreased significantly. This is postulated to occur by decomposition of the compounds to MgO and gases during the high temperature of sintering. An increase in the MgO content after sintering supports this theory. Paul J. Burke, Zeynel Bayindir, and Georges J. Kipouros, "X-ray Photoelectron Spectroscopy (XPS) Investigation of the Surface Film on Magnesium Powders," Appl. Spectrosc. 66, 510-518 (2012)
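The disappearance of MgCO3·3H2O and Mg(OH)2 on sintering is consistent with straightforward thermal decomposition to MgO. The balanced reactions below use standard stoichiometry; they are not stated explicitly in the abstract:

```latex
\begin{align*}
\mathrm{MgCO_3\!\cdot\!3H_2O} &\xrightarrow{\;\Delta\;} \mathrm{MgO} + \mathrm{CO_2\!\uparrow} + 3\,\mathrm{H_2O\!\uparrow} \\
\mathrm{Mg(OH)_2} &\xrightarrow{\;\Delta\;} \mathrm{MgO} + \mathrm{H_2O\!\uparrow}
\end{align*}
```

Both decompositions complete well below the 600 °C sintering temperature, and every mole of decomposed carbonate or hydroxide yields a mole of MgO, matching the observed increase in MgO content after sintering.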
<urn:uuid:83c0b31c-5249-472d-9761-17edba8745cd>
2.96875
419
Academic Writing
Science & Tech.
41.349598
by Rick Groleau

Whether it's in the form of burning tobacco or a raging forest fire, combustion is merely some material rapidly combining with oxygen. Well, maybe that's stating it too simply. Combustion turns out to be a complex interaction between molecules—even the burning of a simple five-atom molecule can involve more than 100 individual chemical reactions. And if you take a look at the burning of organic matter such as tobacco and wood, which contain long molecules of intricately arranged atoms, the interactions are substantially more involved. This feature lets you explore the basics of combustion, including how a fire ignites, how a molecule's atoms rearrange themselves during combustion, and what a flame is made of.

This feature originally appeared on NOVA's "Search for a Safe Cigarette" Web site, www.pbs.org/nova/cigarette/.

PBS Online | NOVA Online | WGBH © | Updated June 2002
<urn:uuid:3a88dfec-e21e-4719-9548-0b375c391613>
3.6875
293
Truncated
Science & Tech.
27.704185
int close(fd)
int fd;

close() deletes a descriptor from the per-process object reference table. If fd is the last reference to the underlying object, then the object will be deactivated. For example, on the last close of a file the current seek pointer associated with the file is lost. On the last close of a socket (see socket(2)), associated naming information and queued data are discarded. On the last close of a file holding an advisory lock applied by flock(2), the lock is released. (Record locks applied to the file by lockf(3), however, are released on any call to close() regardless of whether fd is the last reference to the underlying object.) close() does not unmap any mapped pages of the object referred to by fd (see mmap(2) and munmap(2)). A close of all of a process's descriptors is automatic on exit(), but since there is a limit on the number of active descriptors per process, close() is necessary for programs that deal with many descriptors. When a process forks (see fork(2V)), all descriptors for the new child process reference the same objects as they did in the parent before the fork. If a new process is then to be run using execve(2V), the process would normally inherit these descriptors. Most of the descriptors can be rearranged with dup(2V) or deleted with close() before the execve() is attempted, but if some of these descriptors will still be needed if the execve() fails, it is necessary to arrange for them to be closed if the execve() succeeds. The fcntl(2V) operation F_SETFD can be used to arrange that a descriptor will be closed after a successful execve(), or to restore the default behavior, which is to not close the descriptor. If a STREAMS (see intro(2)) file is closed, and the calling process had previously registered to receive a SIGPOLL signal (see sigvec(2)) for events associated with that file (see I_SETSIG in streamio(4)), the calling process will be unregistered for events associated with the file.
The last close() for a stream causes that stream to be dismantled. If the descriptor is not marked for no-delay mode and there have been no signals posted for the stream, close() waits up to 15 seconds, for each module and driver, for any output to drain before dismantling the stream. If the descriptor is marked for no-delay mode or if there are any pending signals, close() does not wait for output to drain, and dismantles the stream immediately. Created by unroff & hp-tools. © by Hans-Peter Bischof. All Rights Reserved (1997). Last modified 21/April/97
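The descriptor semantics described in this manual page can be demonstrated from a high-level language. The sketch below uses Python's os and fcntl modules, which are thin wrappers over the same system calls; the temporary-file setup is illustrative and not part of the manual page:

```python
import os
import tempfile
import fcntl

fd, path = tempfile.mkstemp()
os.write(fd, b"hello world")

# dup() yields a second descriptor referencing the same underlying object,
# so both descriptors share one seek pointer.
fd2 = os.dup(fd)
os.lseek(fd, 0, os.SEEK_SET)
assert os.lseek(fd2, 0, os.SEEK_CUR) == 0   # moved via fd, observed via fd2

# close() deletes one reference; the object stays active until the
# last descriptor referring to it is closed.
os.close(fd)
assert os.read(fd2, 5) == b"hello"

# The F_SETFD arrangement: mark fd2 close-on-exec so it is closed
# automatically after a successful execve().
flags = fcntl.fcntl(fd2, fcntl.F_GETFD)
fcntl.fcntl(fd2, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)
assert fcntl.fcntl(fd2, fcntl.F_GETFD) & fcntl.FD_CLOEXEC

os.close(fd2)       # last reference: the open-file object is deactivated
os.unlink(path)
print("ok")
```

The assertions mirror the manual's points directly: a dup()-ed descriptor shares the seek pointer, the object survives until its last close, and F_SETFD/FD_CLOEXEC handles the execve() case without manual bookkeeping.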
<urn:uuid:c1fa8544-5cf4-46bf-93c5-9ba379bf8dfd>
3.484375
578
Documentation
Software Dev.
60.012568
Perl's fork() emulation is considered to be an experimental feature. Use in production applications is not recommended.

Perl provides a fork() keyword that corresponds to the Unix system call of the same name. On most Unix-like platforms where the fork() system call is available, Perl's fork() simply calls it. On some platforms such as Windows where the fork() system call is not available, Perl can be built to emulate fork() at the interpreter level. While the emulation is designed to be as compatible as possible with the real fork() at the level of the Perl program, there are certain important differences that stem from the fact that all the pseudo child "processes" created this way live in the same real process as far as the operating system is concerned.

The fork() emulation is implemented at the level of the Perl interpreter. What this means in general is that running fork() will actually clone the running interpreter and all its state, and run the cloned interpreter in a separate thread, beginning execution in the new thread just after the point where the fork() was called in the parent. We will refer to the thread that implements this child "process" as the pseudo-process.

To the Perl program that called fork(), all this is designed to be transparent. The parent returns from the fork() with a pseudo-process ID that can be subsequently used in any process manipulation functions; the child returns from the fork() with a value of 0 to signify that it is the child pseudo-process.

Behavior of other Perl features in forked pseudo-processes

Most Perl features behave in a natural way within pseudo-processes.

$$ or $PROCESS_ID
This special variable is correctly set to the pseudo-process ID. It can be used to identify pseudo-processes within a particular session. Note that this value is subject to recycling if any pseudo-processes are launched after others have been wait()-ed on.

%ENV
Each pseudo-process maintains its own virtual environment.
Modifications to %ENV affect the virtual environment, and are only visible within that pseudo-process, and in any processes (or pseudo-processes) launched from it.

chdir() and all other builtins that accept filenames
Each pseudo-process maintains its own virtual idea of the current directory. Modifications to the current directory using chdir() are only visible within that pseudo-process, and in any processes (or pseudo-processes) launched from it. All file and directory accesses from the pseudo-process will correctly map the virtual working directory to the real working directory appropriately.

wait() and waitpid()
wait() and waitpid() can be passed a pseudo-process ID returned by fork(). These calls will properly wait for the termination of the pseudo-process and return its status.

kill()
kill() can be used to terminate a pseudo-process by passing it the ID returned by fork(). This should not be used except under dire circumstances, because the operating system may not guarantee integrity of the process resources when a running thread is terminated. Note that using kill() on a pseudo-process may typically cause memory leaks, because the thread that implements the pseudo-process does not get a chance to clean up its resources.

exec()
Calling exec() within a pseudo-process actually spawns the requested executable in a separate process and waits for it to complete before exiting with the same exit status as that process. This means that the process ID reported within the running executable will be different from what the earlier Perl fork() might have returned. Similarly, any process manipulation functions applied to the ID returned by fork() will affect the waiting pseudo-process that called exec(), not the real process it is waiting for after the exec().

exit()
exit() always exits just the executing pseudo-process, after automatically wait()-ing for any outstanding child pseudo-processes.
Note that this means that the process as a whole will not exit unless all running pseudo-processes have exited.

Open handles to files, directories and network sockets: All open handles are dup()-ed in pseudo-processes, so that closing any handles in one process does not affect the others. See below for some limitations.

Resource limits: In the eyes of the operating system, pseudo-processes created via the fork() emulation are simply threads in the same process. This means that any process-level limits imposed by the operating system apply to all pseudo-processes taken together. This includes any limits imposed by the operating system on the number of open file, directory and socket handles, limits on disk space usage, limits on memory size, limits on CPU utilization, etc.

Killing the parent process: If the parent process is killed (either using Perl's kill() builtin, or using some external means) all the pseudo-processes are killed as well, and the whole process exits.

Lifetime of the parent process and pseudo-processes: During the normal course of events, the parent process and every pseudo-process started by it will wait for their respective pseudo-children to complete before they exit. This means that the parent and every pseudo-child created by it that is also a pseudo-parent will only exit after their pseudo-children have exited. A way to mark pseudo-processes as running detached from their parent (so that the parent would not have to wait() for them if it doesn't want to) will be provided in future.

CAVEATS AND LIMITATIONS

BEGIN blocks: The fork() emulation will not work entirely correctly when called from within a BEGIN block; the forked copy will run the contents of the BEGIN block, but will not continue parsing the source stream after the BEGIN block. This limitation arises from fundamental technical difficulties in cloning and restarting the stacks used by the Perl parser in the middle of a parse.

Open filehandles: Any filehandles open at the time of the fork() will be dup()-ed. Thus, the files can be closed independently in the parent and child, but beware that the dup()-ed handles will still share the same seek pointer. Changing the seek position in the parent will change it in the child and vice versa.
One can avoid this by opening files that need distinct seek pointers separately in the child.

Forking pipe open() not yet implemented: Forking pipe open() constructs will be supported in future.

Global state maintained by XSUBs: External subroutines (XSUBs) that maintain their own global state may not work correctly. Such XSUBs will either need to maintain locks to protect simultaneous access to global data from different pseudo-processes, or maintain all their state on the Perl symbol table, which is copied naturally when fork() is called. A callback mechanism that provides extensions an opportunity to clone their state will be provided in the near future.

Interpreter embedded in larger application: The fork() emulation may not behave as expected when it is executed in an application which embeds a Perl interpreter and calls Perl APIs that can evaluate bits of Perl code. This stems from the fact that the emulation only has knowledge about the Perl interpreter's own data structures and knows nothing about the containing application's state. For example, any state carried on the application's own call stack is out of reach.

Thread-safety of extensions: Perl's regular expression engine currently does not play very nicely with the fork() emulation. There are known race conditions arising from the regular expression engine modifying state carried in the opcode tree at run time (the fork() emulation relies on the opcode tree being immutable). This typically happens when the regex contains paren groups or variables interpolated within it that force a run-time recompilation of the regex. Due to this major bug, the fork() emulation is not recommended for use in production applications at this time.

Having pseudo-process IDs be negative integers breaks down for the integer -1, because the wait() and waitpid() functions treat this number as being special.
The tacit assumption in the current implementation is that the system never allocates a thread ID of 1 for user threads. A better representation for pseudo-process IDs will be implemented in future.

Support for concurrent interpreters and the fork() emulation was implemented by ActiveState, with funding from Microsoft Corporation.
More than 80% of all stars are members of multiple star systems containing two or more stars. Exactly how these systems are formed is not well understood. Some are thought to form when a collapsing cloud of gas breaks apart into two or more clouds which then become stars, or when one star captures another as a result of a grazing collision, or by a close encounter with two or more other stars. The most common multiple star systems are those with two stars. These so-called binary stars have played an important role in many areas of astronomy, especially X-ray astronomy. In many binary systems the stars orbit their common center of mass under the influence of their mutual gravitational force, but they evolve independently. These are called wide binaries, and are analogous to friends that are far apart and stay in touch with an occasional telephone call or e-mail on holidays. The hot upper atmospheres, or coronas, of these stars can produce X-rays, but not nearly so spectacularly as the X-ray binaries discussed below and elsewhere. Wide binaries are nevertheless important because they provide the best means for measuring the masses of stars by observing the size and period of the orbit and then applying the theory of gravity. In some binary systems, called close binaries, the stars are so close together that they can transfer matter to each other and change the way the stars look and evolve. They are like very close friends or family members who strongly affect each other's lives. Consider, for example, the evolution of a binary system with two massive stars, A and B, in which A is the most massive. Because of its greater mass, A will become a red giant star first. As it expands in size, star A will dump a large fraction of its mass onto star B, changing the appearance of both stars. Star A soon uses up its remaining nuclear fuel, explodes as a supernova, and leaves behind a neutron star or black hole. 
Later, when star B becomes a red giant, material flowing onto the neutron star or black hole will produce a strong X-ray source that is called an X-ray binary. The X-ray power of an X-ray binary is millions of times that of the X-rays from normal stellar coronas. The fate of star B varies depending on the details of its orbit and the masses of the two stars: (1) it could spiral into A to form a large black hole; (2) B could explode as a supernova and disrupt the binary system; or (3) the supernova could produce a neutron star or black hole, leading to (3a) binary neutron stars, which have been observed; (3b) a neutron star/black hole binary, which may be observed with Chandra or some other sensitive X-ray telescope; or (3c) binary black holes, which astronomers hope to observe with one of the gravitational wave detectors planned for the future. If the masses of stars A and B are comparable to that of the Sun, the end products are white dwarfs instead of neutron stars and black holes. The dumping of matter from star B onto star A, by then a white dwarf, can still result in a strong X-ray source and celestial fireworks, such as a nova, or, in rare cases when too much mass is transferred to the white dwarf, a supernova.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. 2007 February 13 Explanation: The explosion is over but the consequences continue. About eleven thousand years ago a star in the constellation of Vela could be seen to explode, creating a strange point of light briefly visible to humans living near the beginning of recorded history. The outer layers of the star crashed into the interstellar medium, driving a shock wave that is still visible today. A roughly spherical, expanding shock wave is visible in X-rays. The above image captures much of that filamentary and gigantic shock in visible light, spanning almost 100 light years and appearing twenty times the diameter of the full moon. As gas flies away from the detonated star, it decays and reacts with the interstellar medium, producing light in many different colors and energy bands. Remaining at the center of the Vela Supernova Remnant is a pulsar, a star as dense as nuclear matter that completely rotates more than ten times in a single second. Authors & editors: Jerry Bonnell (USRA) NASA Official: Phillip Newman Specific rights apply. A service of: ASD at NASA / GSFC & Michigan Tech. U.
Neil C. Swart and Andrew J. Weaver, Nature Climate Change, 2, 2012

About the commentary

Developing the Alberta oil-sands will lead to carbon emissions that in turn result in global warming. Our paper calculates the amount of warming the oil-sands could potentially cause. We also consider how oil-sands carbon emissions relate to the effort to keep global mean temperatures from exceeding 2°C above pre-industrial levels, as agreed to in the Copenhagen Accord. Finally, we compare the potential for warming of the Alberta oil-sands with the potential for warming of other global fossil-fuel resources.

Potential Oil Sands Carbon Footprints

The green circle illustrates the current maximum cumulative per capita carbon emissions compatible with keeping global mean warming below 2°C. The red circle shows the per capita carbon footprint that would result from the current populations of the USA and Canada utilizing the Alberta oil sands proven reserves. The blue circle shows the per capita carbon footprint that would be achieved by the current Chinese population by fully utilizing the proven oil sands reserve. The green circle shows the limit for all emissions, but the red and blue circles show only oil-sands related emissions, and do not include emissions from other sources such as coal burning.

Read oil-sands emissions and 2°C warming
Read the commentary here.
NEW: PDF of calculations for well-to-wheel warming estimates NOT included in the Nature Climate Change commentary

- There are 1.8 trillion barrels of oil-in-place (OIP) in Alberta's oil sands; 170 billion of those are the 'economically viable proven reserve'.
- Burning the OIP would lead to a climate warming of 0.36°C (0.24-0.50°C, 5th-95th percentile).
- Burning the proven reserve would lead to a warming of 0.03°C (0.02-0.05°C).
- For global temperatures to remain below 2°C above pre-industrial levels, cumulative (over time) per capita carbon emissions must be less than 85 tonnes of carbon, based on today's global population.
- By utilizing the oil-sands proven reserves, Canadians and Americans would achieve a per-capita carbon footprint of 64 tonnes (read more about oil-sands emissions and 2°C warming).
- The global fossil-fuel resource base is enormous, and could easily yield over 2°C of warming, if exploited to meet growing global energy demands (particularly coal and unconventional gas).
- To keep warming below 2°C will require a rapid transition to non-emitting renewable energy sources, while avoiding commitments to infrastructure that supports fossil-fuel dependence.

Additional images are included below. Contact Neil Swart for queries.

Oil-sands warming above background

The black curve shows the global mean temperature increase simulated due to observed historical human carbon emissions (from 1800-2000), and the projected future emissions under the IPCC SRESA2 'business as usual' scenario (2001-2100). The solid red curve shows the warming that would occur due to emissions from utilizing the Alberta oil-sands proven reserve over the period 2012-2062, in addition to the SRESA2 emissions. The dashed red curve shows the warming that would occur if the entire Alberta oil-sands oil-in-place were burnt over the period 2012-2062, in addition to the SRESA2 emissions. All curves show the global mean temperature simulated by the UVic ESCM.

Central estimate of the potential for warming of the different fossil-fuel resources in Table 1. The red line indicates the limit of 2.0°C warming from pre-industrial times agreed to under the Copenhagen Accord. Note that here we only consider the effects of anthropogenic carbon dioxide. The potential for warming associated with proven Alberta oil-sand reserves is indicated as a barely visible sub-component (pink) of unconventional oil (global).
The potential warming of the total Alberta oil-sands oil-in-place (OIP) is shown in black. Estimates of the Total Resource Base (global), and global resources come from Rogner et al. *The carbon–climate response method is not valid for emissions above about 20×10^17 g C, so these figures are not valid climate change estimates, but are included for comparison. See our commentary appearing soon in Nature Climate Change (publications).

News stories on this paper

This work (this site and all contents not otherwise attributed) is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 2.5 Canada License. Permissions beyond the scope of this license may be available by contacting Neil Swart.
Strings are represented in DTrace as an array of characters terminated by a null byte (that is, a byte whose value is zero, usually written as '\0'). The visible part of the string is of variable length, depending on the location of the null byte, but DTrace stores each string in a fixed-size array so that each probe traces a consistent amount of data. Strings may not exceed the length of this predefined string limit, but the limit can be modified in your D program or on the dtrace command line by tuning the strsize option. Refer to Chapter 16, Options and Tunables for more information on tunable DTrace options. The default string limit is 256 bytes.

The D language provides an explicit string type rather than using the type char * to refer to strings. The string type is equivalent to a char * in that it is the address of a sequence of characters, but the D compiler and D functions like trace() provide enhanced capabilities when applied to expressions of type string. For example, the string type removes the ambiguity of the type char * when you need to trace the actual bytes of a string. In the D statement trace(s), if s is of type char *, DTrace will trace the value of the pointer s (that is, it will trace an integer address value). In the D statement trace(*s), by definition of the * operator, the D compiler will dereference the pointer s and trace the single character at that location. These behaviors are essential to permitting you to manipulate character pointers that by design refer to either single characters, or to arrays of byte-sized integers that are not strings and do not end with a null byte. In the D statement trace(s), if s is of type string, the string type indicates to the D compiler that you want DTrace to trace a null-terminated string of characters whose address is stored in the variable s. You can also perform lexical comparison of expressions of type string, as described in String Comparison.
Flies, gnats, maggots, midges, mosquitoes, keds, bots, etc. are all common names for members of the order Diptera . This diversity of names documents the importance of the group to man and reflects the range of organisms in the order. The order is one of the four largest groups of living organisms. There are more known flies than vertebrates. These insects are a major component of virtually all non-marine ecosystems. Only the cold arctic and antarctic ice caps are without flies. The economic importance of the group is immense. One need only consider the ability of flies to transmit diseases. Mosquitoes and black flies are responsible for more human suffering and death than any other group of organisms except for the transmitted pathogens and man! Flies also destroy our food, especially grains and fruits. On the positive side of the ledger, outside their obviously essential roles in maintaining our ecosystem, flies are of little direct benefit to man. Some are important as experimental animals (Drosophila ) and biological control agents of weeds and other insects. Others are crucial in helping to solve crimes or in pollinating plants. Without Diptera there would be, for example, no chocolate! Some 150,000 different kinds of flies (Order Diptera , Class Insecta , Phylum Arthropoda ) are now known and estimates are that there may be more than 1,000,000 species living today. These species are classified into 188 families and some 10,000 genera. Of these, some 3,125 species are known only from fossils, the oldest of which, a limoniid crane fly, is some 225 MILLION years old (Upper Triassic (Carnian)).
What is it?

Chaos is an advanced field of mathematics that involves the study of dynamical systems, that is, systems in motion. Chaos Theory consists of the mathematical proofs and theories which attempt to describe processes in motion. One example of a chaotic dynamical system is the motion of the stars, planets, and galaxies, which mathematicians and scientists have been trying to understand for centuries.

Why study it?

The study of chaotic dynamical systems has many applications for the 'real world.' Think of any mathematical system that changes over time, such as the weather, the stock market, or the genetic distribution of a population. Whatever dynamical system you think of, Chaos Theory can be used to understand it. For instance, say you worked for a company that makes paint. Your task is to find a way to mix two colors of paint, blue and red, in a huge tank so that the resulting purple paint will be evenly mixed. (It must be evenly mixed, so that each can of purple paint has the same shade of purple.) If you used Chaos Theory to create a paint mixer that would stir the paint in true chaotic motion, you would always get an evenly mixed tank of paint!

How can I learn more?

This section of Interactive Mathematics Online is specifically designed to help you learn more about chaos. Just follow the links to view more pages, each of which explains and demonstrates an aspect or example of chaotic dynamical systems. The basic outline of your "Tour of Chaos" is as follows:
- The Basic Concepts of Chaos
- Examples of Chaos
- Java Applets Involving Chaos
If there are five points on a circle, how many line segments can be drawn joining them without the segments overlapping the regions?

The number of line segments you can draw, each segment joining two of the 5 points on a circle, no two segments intersecting except at the 5 points, is 7. The five line segments that join the 5 points in a convex pentagon can certainly be drawn, since they don't intersect any other line segments (except at the 5 points). Having drawn those 5 line segments, you can draw 2 non-crossing diagonals of the pentagon (for example, the two diagonals from a single vertex), but no more than 2. If that's not the question you want answered, please consider editing your question to clarify.
A coin is tossed 4 times. Let X be the number of heads in the first two tosses and Z be the number of heads in the last three tosses. Describe the joint distribution of (X,Z) by means of a table. I know how to find the joint distribution of two independent random variables but I'm not sure how you'd go about with this one as the variables are dependent. Do you have to know how they are dependent first? If so, how would I calculate that?
Elephant fishes (English), Mormyrids, mormyre (French) With more than 200 species in 20 genera, the Family Mormyridae is a modern radiation within the Osteoglossomorpha, an ancient lineage of teleost fishes in which most other living groups are species-poor. Mormyrids are found only in Africa, in freshwater habitats over most of the continent with the exception of the Sahara, northernmost Mahgreb and southernmost Cape provinces. Mormyrids reach their highest diversity in the river systems of Central and West Africa and are often the numerically most abundant kind of fish in riverine habitats. Mormyrid fishes have long served humans as an important food source along Africa's inland waterways. Ancient Egyptians accurately depicted mormyrids on the walls of their tombs and even worshipped Mormyrus in the temple of Oxyrhynchus. However, scientists only discovered mormyrids' most unusual characteristic in the latter half of the 20th Century: an active electric sense by means of which they orient to their environment and communicate (see "Electric Organ Discharge" below). Mormyrid fishes have since become a model system for research into vertebrate sensory biology, behavior, and communication. They are also popular in the tropical fish hobby where they are known as "elephant-nose fish" and "baby whales." Adult mormyrids range from about 4 centimeters to 1.5 meters in length and vary considerably in morphology. Most species of genera Petrocephalus, Pollimyrus and Stomatorhinus are short, laterally compressed, deep-bodied fishes with blunt, rounded snouts and small, often inferior to subinferior mouths. Others, such as species of Mormyrops and Isichthys, are elongate and more cylindrical, with terminal mouths. Species of Campylomormyrus and some Mormyrops and Mormyrus have long tubular snouts used for extracting invertebrates from sediment and root masses (Marrero & Winemiller, 1993). 
Others in genera Marcusenius, Gnathonemus, and Genyomyrus possess a variously developed fleshy protuberance on the chin that functions in electrolocation of prey organisms. Mouths are non-protrusible. Small cycloid scales cover all but the head. The head (including the eyes), the dorsum and belly are covered by a thin layer of skin that is perforated with small pores that lead to electroreceptors. All mormyrids retain a full complement of paired and unpaired fins. The dorsal and anal fins lack spiny rays and are variable in length among the different genera. In many genera these fins are positioned far back on the body and are more or less symmetrically opposed about the midline. The caudal fin in mormyrids is deeply forked and has a distinctive rounded V-shape with symmetrical, scaled and fleshy dorsal and ventral lobes; it emerges from a narrow, cylindrical peduncle within which lies the electric organ. In addition to specializations for electroreception, which include an enlarged cerebellum, electroreceptors on the body surface, and an electric organ in the caudal peduncle, mormyrids have other specializations for acute audition: a gas-filled tympanic bladder coupled to the sacculus in each ear (Fletcher & Crawford, 2001). Males of the genus Pollimyrus communicate not only electrically, but also acoustically, with elaborate courtship songs generated by muscles that vibrate the swim bladder (Crawford, 1997). Little is known about the role of acoustic communication in other genera. Most mormyrids are nocturnal invertebrate-feeders. However, some species of the genus Mormyrops are piscivores. At night, Mormyrops anguilloides from Lake Malawi engage in a form of semi-cooperative "pack hunting" of sleeping cichlids (Arnegard & Carlson, 2005). The sister-group to the Mormyridae is the monotypic family Gymnarchidae.
Gymnarchus niloticus has a nilo-sudanic distribution and is also electrogenic, but its EOD resembles a continuous wave, unlike the pulsatile EODs produced by mormyrids. Together, the Mormyridae and the Gymnarchidae make up the Superfamily Mormyroidea. Gymnarchus and mormyrids share numerous anatomical characteristics, both related and unrelated to active electrolocation, and they are also the only vertebrates known to possess aflagellate sperm (Morrow, 2004).

Based on osteological characters, Taverne (1972) divided the Mormyridae into two subfamilies, the Petrocephalinae, containing only the genus Petrocephalus, and the Mormyrinae, containing the remaining genera. Molecular phylogenetic studies (Lavoué, 1999; Sullivan, 2000; Lavoué et al., 2003) have supported this division.

Data on fish populations in African freshwaters are scarce; no mormyrid species are known to be threatened with extinction and none are CITES-listed. This is not to say that particular species are not under threat of local extinction in areas impacted by human activity, including over-fishing and development.

Artificial dichotomous key to 19 genera of Mormyridae (Heteromormyrus pauciradiatus not included)

1.a. Nostrils close to one another and to the eye; mouth inferior, below the horizontal level of the eye; body short and rather deep; two simple (unsegmented) rays, visible on radiographs, at the origin of the dorsal fin. Petrocephalus (subfamily Petrocephalinae)
1.b. Nostrils separated from each other and from the eye; mouth terminal or inferior, in advance of the level of the eye; body deep or elongate; usually one simple ray at the origin of the dorsal fin. 2 (subfamily Mormyrinae)
2.a. Teeth in both jaws very small, slender and conical, irregularly arranged in several rows forming a villiform band (additionally snout narrow and tubular, mouth terminal, chin with a tapering barbel-like appendage nearly as long as snout and pointing forwards). Genyomyrus
2.b. Teeth not as above. 3
3.a. Teeth extending along the entire edge of both jaws in a single series, 10-36 in each jaw (additionally mouth terminal, well in advance of the level of the eye; body elongate, the depth more than 5.2 times into SL). Mormyrops
3.b. Teeth restricted to middle of each jaw, 3-10 in each jaw. 4
4.a. Dorsal fin more than two times the length of anal, originating directly above or in advance of pelvics. Mormyrus
4.b. Dorsal fin 0.10-1.75 times the length of the anal, its origin behind pelvics. 5
5.a. Dorsal fin very short, less than 0.20 times the length of the anal, and set far back on body (additionally anal fin long, with 58-68 rays). Hyperopisus
5.b. Dorsal fin 0.35-1.75 times the length of the anal fin. 6
6.a. Dorsal fin 1.2-1.75 times the length of the anal fin; dorsal fin origin anterior to anal fin origin. 7
6.b. Dorsal fin 0.35-1.1 times length of the anal fin; dorsal fin origin above or posterior to anal fin origin. 9
7.a. Pelvic fins closer to the anal than to the pectorals; body very elongate, at least 8-11 times as long as deep. Isichthys
7.b. Pelvic fins mid-way between anal and pectoral fins or closer to pectorals; body short to moderately elongate. 8
8.a. Symphysial mandibular teeth incisor-like and projecting beyond lower lip; body moderately elongate (depth < 24% SL), upper back gently convex. Myomyrus
8.b. Median pair of mandibular teeth unmodified; body moderately deep (depth > 27% SL) and upper back gently to greatly convex. Cyphomyrus
9.a. Posterior nostril close to the border of the mouth. Stomatorhinus
9.b. Neither nostril close to the border of the mouth. 10
10.a. Snout very elongated and tubular, its length greater than the postorbital length of the head; snout turned downward. Campylomormyrus
10.b. Snout non-tubular, its length less than the post-orbital length of the head. 11
11.a. Prominent cylindrical barbel-like appendage under the chin, extending forward from below lower jaw. Gnathonemus
11.b. Appendage under chin reduced to fleshy swelling or absent altogether. 12
12.a. Submental appendage present, extending slightly beyond the end of the upper jaw. Marcusenius
12.b. Fleshy chin appendage not extending beyond end of upper jaw or absent altogether. 13
13.a. Dorsal and anal fins approximately equal in length and originating at the same vertical level, dorsal with 31-34 rays, anal with 31-35 rays. Hippopotamyrus (note: H. ansorgii and related forms will not key out here, but with Paramormyrops below)
13.b. Dorsal fin shorter than anal fin and with fewer than 30 rays. 14
14.a. Body moderately elongate, depth 18-22% SL. 15
14.b. Body moderately deep, more than 23% SL. 17
15.a. Anal and dorsal fins terminate at about the same level; distal tips of last anal and dorsal rays not offset. 16
15.b. Anal fin extends beyond the end of dorsal; distal tips of last anal and dorsal fin rays offset. Brienomyrus
16.a. Diffuse dark bar between origin of dorsal and anal fins absent. Paramormyrops
17.a. Globular swelling under chin absent; mouth terminal. 18
17.b. Globular swelling under chin present; mouth subterminal. 19
19.a. Posterior nostril closer to anterior nostril than to eye. Ivindomyrus
19.b. Posterior nostril closer to eye than to anterior nostril. Pollimyrus

Electric Organ Discharge

Unlike the electric eel Electrophorus and the electric catfish Malapterurus, mormyrids cannot produce strong electric discharges for defense or to immobilize prey. Instead, by means of a specialized organ near the tail these fishes generate a relatively weak electric field around their body that they monitor using cells embedded in their skin called electroreceptors. Using active electroreception they are able to calculate the size, position and other characteristics of nearby objects in the water and can be active at night when vision is of little use.
Electroreception requires a lot of brain power and these fishes have one of the largest brain mass to body mass ratios among vertebrates, roughly equal to that of Homo sapiens. In mormyrids it is the cerebellum that has become massively hypertrophied. Electric organ discharges, or EODs, are also used for communication by mormyrids. Mormyrid EODs are pulses between one-tenth of a millisecond and 20 milliseconds in duration. While the time interval between the pulses is variable, the pulse waveform characteristics are fixed and species-specific. EOD waveforms can differ radically among co-occurring mormyrid species and reproductive males will often develop distinctive waveforms that function in courtship of conspecific females. In this way, EODs serve a function analogous to visual or acoustic signals in many other groups of organisms. Impressive examples of EOD variation among co-occurring species can be found within the genera Paramormyrops of Lower Guinea and Campylomormyrus of the Congo River. The hypothesis that EODs may in fact accelerate speciation in these "riverine species flocks", and within mormyrids generally, is another active research area. EODs are relatively easy to record from living mormyrids and, because of their species-specificity and stereotypy, are often useful aids in recognizing species boundaries and working out the taxonomy of this group.

Adult length of mormyrid species ranges from about 4 to about 150 centimeters.

Ecology and Distribution

Mormyrids have a broader distribution than their Nilo-Sudanic sister group, Gymnarchus niloticus, including most of the African continent with the exception of the Sahara, northernmost Mahgreb and southernmost Cape provinces (Roberts, 1975), and are most diverse in the river systems of Central and West Africa. Mormyrids occupy an ecological niche largely similar to that of the other large group of freshwater weakly electric fishes, the ostariophysan South American gymnotiforms (Lowe-McConnell 1987).
Fishes of both groups, with some exceptions, are nocturnal benthic invertebrate-feeders and have adapted to a number of different types of freshwater habitats. Interestingly, the widely separate phylogenetic positions of these two groups among non-electroreceptive teleost clades indicate independent evolution of their electrosensory systems (see Bass 1986c, Kramer 1990). Mormyrids are much more abundant and diverse in river and stream habitats than in lakes (in marked contrast to the African cichlids). Some form large schools near the bottom of pools; others are adapted for life in and near rapids (Roberts & Stewart 1976), smaller streams, marginal habitat, or swamps (Lowe-McConnell 1987). Rainy-season spawning migrations from river mouths to upriver breeding habitats have been reported for some taxa (Daget 1957, Blake 1977). Little information exists regarding the reproductive behavior in mormyroids, although male Gymnarchus niloticus and Pollimyrus isidori are known to construct and guard elaborate floating nests in which larvae remain for some time after hatching (see Hopkins 1986).

Evolution and Systematics

Recent literature on mormyrid systematics includes Taverne's taxonomic revision of the family based on osteology (Taverne, 1969; 1971a; 1971b; 1972), Bigorne's (1990a) review of the mormyrids of West Africa and revision of Brienomyrus, Pollimyrus, Isichthys and Mormyrops of that region (Bigorne, 1987; 1989; 1990b), Boden et al.'s (1997) revision of the Marcusenius of Central Africa with eight circumpeduncular scales, Jégu and Lévêque's (1984) study of Marcusenius of West Africa, and Harder's (2000) published CD-ROM with descriptions and photos of all existing type specimens of Mormyridae. Despite this recent work, the monophyly of several genera remains poorly supported.
Most of the foregoing work is not explicitly phylogenetic, and only recent molecular studies have provided a well-supported tree for the major mormyrid lineages (Alves-Gomes & Hopkins, 1997; Lavoué et al. 2000; Sullivan et al. 2000; Lavoué et al. 2003). Points of agreement between the morphological work of Taverne and the molecular studies are 1) the monophyly of Mormyridae, 2) the sister-group relationship between Mormyridae and Gymnarchus niloticus, and 3) the basal division of the family into two subfamilies, Mormyrinae and Petrocephalinae, with the latter containing only Petrocephalus. Another recent development is the use of electric organ discharge (EOD) recordings for species discovery and diagnosis within certain genera (Sullivan et al., 2002; Lavoué et al., 2004). Certain aspects of the EOD appear to be phylogenetically conserved, while others are more variable (Alves-Gomes, 1999; Sullivan & Hopkins, 2001; Sullivan et al., 2000).
<urn:uuid:80a030ec-dc53-4155-8bac-3032b6806286>
3.671875
3,625
Knowledge Article
Science & Tech.
40.345021
In one case I had to determine whether an individual had consumed cocaine before death. I wondered whether I could use the maggots, beetles, and pupae that had inhabited the body and test them for drugs. The answer turned out to be yes. Drug use has skyrocketed in the last ten years, as have drug-related homicides, so more and more frequently we are looking at the effect of drugs on bugs.

What happens to insects when they take cocaine or heroin?

Cocaine, heroin, ecstasy, angel dust [PCP], and amphetamines all affect a bug's life. For example, cocaine increases the growth rate, so the maggots grow more quickly. A maggot feeding on a long-term heroin addict will actually have a slower growth rate compared with a maggot that is found in a new user, or someone who died of an overdose. All these factors are important if you are using the life cycles to calculate the time of death.

What do you use for your experiments? How do you study decomposition?

Generally we use pigs. A 50-pound pig most closely represents human decomposition; it's the next best thing to a human corpse. We have a secure area in part of the university and in various military zones where we can let the animals decompose. I once put a 50-pound dead pig wrapped in blankets in my backyard to decompose to mimic a body in a homicide case; we were trying to determine how long it takes insects to penetrate the wrapping. We also put the animals in a range of different environments (rain forests, arid volcanic craters, tidal pools) to see how conditions change the rate of decomposition. We have hung pigs in trees to see how decay differs from when the animal rests on the ground. We've buried them and burnt them to varying degrees to see the effects. Rates of decay can vary wildly. In Hawaii's rain forest a body can be stripped to the bone in as little as 18 days. In an arid environment there can still be flesh on the bones after a year.

How do you determine time of death? 
We look at the succession of bugs that have infested the body. In Hawaii, within 10 minutes of a death, female blowflies will begin to investigate the body openings (eyes, ears, nose, mouth, anus and genitals) and begin to lay eggs. This essentially starts the biological clock. A dead body is a bit like a barren volcanic island. As plants begin to take root they make the island much more inviting; the same goes for the body. As the maggots hatch and feed they attract predators and parasites. Then come flies, beetles, and parasitic wasps, all within a few days. By day five you have houseflies, then more beetles and so on: an endless parade of insects. Even at the skeletal stage there is still a lot going on. The fluids that have seeped out of the body have changed the character of the soil; that's when the study shifts to the soil. There you look at the fungus and algae. Even one and a half years after death there will be detectable stains and a telltale selection of arthropods. Depending on which bugs are there, and when, and the stage of development, you can estimate the period of death.

How many species of bugs will infest a corpse?

In Hawaii, more than 300 species will visit the body from the time of death through the skeletal stage. On the mainland more than 600 insects can pass through a body. Some people are just a lot more popular dead than alive.

What else can you learn about the deceased person?

You can tell if a person was moved after death. If you find evidence of city bugs in a person found in the country, you know the body was moved. You can also determine cause of death. If the insects depart from the normal routes of colonization, like entering through the chest, you might be able to point out an injury that would have been missed. I had one case where the blood inside a parasitic insect (actually a crab louse) matched the blood of a rape suspect. We have also used maggots to detect abuse. 
Looking at how badly a sore was infested revealed how long an elderly person was ignored in a nursing home.

How did you become a consultant for a TV show?

The writers read my book A Fly for the Prosecution: How Insect Evidence Helps Solve Crimes and picked out cases for shows in the first and second seasons. Now I generally talk with the writers on a biweekly basis. The main character is a forensic entomologist. If you had told me 20 years ago that there would be a crime show with a forensic entomologist as the main character, I think I would still be laughing.

You are also curator for the traveling museum exhibit CSI: Crime Scene Insects?

Yes, I was asked to curate to make sure the exhibit is scientifically accurate and palatable. It's a lot of fun. Forensics is a great way to trap kids into appreciating science.

On TV: Biography of a Corpse airs Monday, April 26, at 9 p.m. ET/PT in the United States and is available only on the National Geographic Channel.
<urn:uuid:6dd16b67-e267-44d8-b113-3ab3ddb9ac96>
3.125
1,152
Audio Transcript
Science & Tech.
56.15124
- How many different journeys could you make if you were going to visit four stations in this network? How about if there were five stations? Can you predict the number of journeys for seven stations?
- Your challenge is to find the longest way through the network following this rule. You can start and finish anywhere, and with any shape, as long as you follow the correct order.
- A dog is looking for a good place to bury his bone. Can you work out where he started and ended in each case? What possible routes could he have taken?
- Arrange the digits 1, 1, 2, 2, 3 and 3 so that between the two 1's there is one digit, between the two 2's there are two digits, and between the two 3's there are three digits.
- We're excited about this new program for drawing beautiful mathematical designs. Can you work out how we made our first few pictures and, even better, share your most elegant solutions with us?
- A little mouse called Delia lives in a hole in the bottom of a tree... How many days will it be before Delia has to take the same...
- What happens when you add three numbers together? Will your answer be odd or even? How do you know?
- This Sudoku puzzle can be solved with the help of small clue-numbers on the border lines between pairs of neighbouring squares of the grid.
- There are 44 people coming to a dinner party. There are 15 square tables that seat 4 people. Find a way to seat the 44 people using all 15 tables, with no empty places.
- Make a pair of cubes that can be moved to show all the days of the month from the 1st to the 31st.
- Alice and Brian are snails who live on a wall and can only travel along the cracks. Alice wants to go to see Brian. How far is the shortest route along the cracks? Is there more than one way to go?
- A Sudoku that uses transformations as supporting clues.
- What do the digits in the number fifteen add up to? How many other numbers have digits with the same total but no zeros?
- Use the interactivity to listen to the bells ringing a pattern. Now it's your turn! Play one of the bells yourself. How do you know when it is your turn to ring?
- Lolla bought a balloon at the circus. She gave the clown six coins to pay for it. What could Lolla have paid for the balloon?
- A Sudoku with clues as ratios.
- Given the products of diagonally opposite cells - can you complete...
- Ram divided 15 pennies among four small bags. He could then pay any sum of money from 1p to 15p without opening any bag. How many pennies did Ram put in each bag?
- You need to find the values of the stars before you can apply normal Sudoku rules.
- A Latin square of order n is an array of n symbols in which each symbol occurs exactly once in each row and exactly once in each column.
- Look carefully at the numbers. What do you notice? Can you make another square using the numbers 1 to 16, that displays the same...
- Explore how this program produces the sequences it does. What are you controlling when you change the values of the variables?
- Can you put the numbers 1 to 8 into the circles so that the four calculations are correct?
- Use your logical-thinking skills to deduce how much Dan's crisps and ice-cream cost altogether.
- A Sudoku with clues as ratios or fractions.
- There is a clock-face where the numbers have become all mixed up. Can you find out where all the numbers have got to from these ten statements?
- Can you put plus signs in so this is true? 1 2 3 4 5 6 7 8 9 = 99. How many ways can you do it?
- Roll two red dice and a green dice. Add the two numbers on the red dice and take away the number on the green. What are all the different possibilities that could come up?
- Using the statements, can you work out how many of each type of rabbit there are in these pens?
- This problem is based on a code using two different prime numbers less than 10. You'll need to multiply them together and shift the alphabet forwards by the result. Can you decipher the code?
- Ten cards are put into five envelopes so that there are two cards in each envelope. The sum of the numbers inside it is written on each envelope. What numbers could be inside the envelopes?
- This second Sudoku article discusses "Corresponding Sudokus", which are pairs of Sudokus with terms that can be matched using a substitution rule.
- This problem is based on the story of the Pied Piper of Hamelin. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
- Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information.
- Starting with four different triangles, imagine you have an unlimited number of each type. How many different tetrahedra can you make? Convince us you have found them all.
- Add the sum of the squares of four numbers between 10 and 20 to the sum of the squares of three numbers less than 6 to make the square of another, larger, number.
- Can you make square numbers by adding two prime numbers together?
- Ben has five coins in his pocket. How much money might he have?
- Choose four different digits from 1-9 and put one in each box so that the resulting four two-digit numbers add to a total of 100.
- An investigation involving adding and subtracting sets of consecutive numbers. Lots to find out, lots to explore.
- Find the product of the numbers on the routes from A to B. Which route has the smallest product? Which the largest?
- This magic square has operations written in it, to make it into a maze. Start wherever you like, go through every cell and go out a total of 15!
- Can you fill in this table square? The numbers 2-12 were used to generate it with just one number used twice.
- How could you put eight beanbags in the hoops so that there are four in the blue hoop, five in the red and six in the yellow? Can you find all the ways of doing this?
- Suppose we allow ourselves to use three numbers less than 10 and multiply them together. How many different products can you find? How do you know you've got them all?
- In the planet system of Octa the planets are arranged in the shape of an octahedron. How many different routes could be taken to get from Planet A to Planet Zargon?
- There are 4 jugs which hold 9 litres, 7 litres, 4 litres and 2 litres. Find a way to pour 9 litres of drink from one jug to another until you are left with exactly 3 litres in three of the jugs.
- Pentagram Pylons - can you elegantly recreate them? Or, the European flag in LOGO - what poses the greater problem?
- Here are four cubes joined together. How many other arrangements of four cubes can you find? Can you draw them on dotty paper?
- 60 pieces and a challenge. What can you make and how many of the pieces can you use creating skeleton polyhedra?
<urn:uuid:a053de61-7f2b-4e62-a291-797549eaefd1>
2.921875
1,586
Content Listing
Science & Tech.
71.804803
Aug 14, 2000, 1:40 PM Post #2 of 3

The @new array is left unchanged at 5, 2, 4, 8. How? I don't have my books handy, but I think it goes something like this. First the call is made to Sort::cure, where $a and $b are package global variables. These should be qualified with the package name of the caller, since the sort routine is not in the same package as the caller. The $a and $b that Sort::cure actually reads are $Sort::a and $Sort::b, which sort never sets when called from another package, so the subroutine returns '0' for each call. Then sort compares the values returned from Sort::cure, and since they are all '0', the order remains unchanged. This populates the @new array with the unchanged list of values.
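A minimal sketch of the situation (the package and sub names follow the question; the fix shown assumes the caller is in package main):

```perl
use strict;
use warnings;

package Sort;

# Broken: inside package Sort, $a and $b mean $Sort::a and $Sort::b,
# but sort sets the *caller's* globals ($main::a / $main::b), so these
# are undef here and the comparison always yields 0.
sub cure { no warnings 'uninitialized'; $a <=> $b }

# One fix: qualify the variables with the caller's package explicitly.
sub cure_fixed { $main::a <=> $main::b }

package main;

my @unsorted = (5, 2, 4, 8);
my @new   = sort Sort::cure @unsorted;        # order left unchanged: 5 2 4 8
my @fixed = sort Sort::cure_fixed @unsorted;  # 2 4 5 8
print "@new\n@fixed\n";
```

(Another remedy documented for sort is giving the sub a ($$) prototype, `sub cure ($$) { $_[0] <=> $_[1] }`, so the elements arrive in @_ and the package of $a/$b never matters. Note that "unchanged" order relies on the sort being stable, which Perl's mergesort is.)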
<urn:uuid:7e96e770-de5f-4206-84b9-0318d29a5354>
2.78125
168
Comment Section
Software Dev.
73.165519
Carl Hewitt’s Same-Fringe Problem
August 3, 2010

Long ago, Carl Hewitt created the same-fringe problem as a demonstration of the simplest problem that requires concurrency to implement efficiently: Given two binary trees, determine if they have the same leaves in the same order, regardless of their internal structure. A solution that simply flattens both trees into lists and compares them element-by-element is unacceptable, as it requires space to store the intermediate lists and time to compute them even if a difference arises early in the computation. Your task is to write a function that tests if two trees have the same fringe. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
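As an illustration of the general idea (not Hewitt's actor-based formulation, and not the suggested solution linked above), here is a lazy-traversal sketch in Python, where generators stand in for coroutines and the comparison stops at the first mismatch:

```python
from itertools import zip_longest

def fringe(tree):
    """Yield the leaves of a nested tuple/list left to right, lazily."""
    if isinstance(tree, (tuple, list)):
        for child in tree:
            yield from fringe(child)
    else:
        yield tree

def same_fringe(t1, t2):
    """True iff both trees have the same leaves in the same order.

    zip_longest with a unique sentinel catches fringes of different
    lengths; all() short-circuits, so an early difference stops both
    traversals without flattening either tree.
    """
    sentinel = object()
    return all(a == b for a, b in
               zip_longest(fringe(t1), fringe(t2), fillvalue=sentinel))

print(same_fringe(((1, 2), 3), (1, (2, 3))))  # True: same leaves, different shape
print(same_fringe((1, 2, 3), (1, 2)))         # False: fringes differ in length
```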
<urn:uuid:577a44e8-018c-482b-8ca9-1ccaadbd0394>
3.046875
167
Personal Blog
Software Dev.
27.139239
- With moral and monetary support, including a UC Proof of Concept Grant, two UC grads have formed a company to create 'printable' batteries that are efficient, environmentally friendly and could be made as small as a postage stamp.
- They delivered a message to legislators: graduate student research is central not only to the future of UC, but to that of the state and the nation as well.
- UC Berkeley is partnering with two other universities, a philanthropic foundation and industry to conduct earthquake research that could lead to a warning system. People could then have time to dive for cover, and transportation and utility systems could shut down operations.
- Honey bees get most of the buzz, but some native bees are better at spreading pollen. They may hold the solution to world pollination problems that affect important crops.
- Someday, solar power will provide all the energy homes and buildings need for electricity, heat and cooling. Scientists at UC Solar, a multicampus research institute based at UC Merced, are helping to make that day come true.
- Scientists in Mexico and at UC study the breeding habits of a remarkable seabird for possible clues about behavioral evolution and how animals may develop immunities. The project is one of many research collaborations supported by UC MEXUS.
- UC Merced engineering students do real world research, in California and India.
- In a remarkable outdoor laboratory in the Sierra, UC Merced and UC Berkeley researchers use sensors to gather a mother lode of data to greatly improve ecological measurement and hydrologic forecasting.
- Reducing black carbon could immediately slow global warming and save millions of lives, says a UC San Diego scientist. And simply providing cleaner-burning stoves in rural villages can help do the trick.
- UC Berkeley discoveries about these agile and sticky reptiles have sparked product ideas ranging from rescue robots to sports gear. And they have captured the research imagination of undergraduate and graduate students.
- Thirdhand smoke is a new frontier, and UC's Tobacco-Related Disease Research Program has assembled a consortium of investigators to study the health risks caused by the remnants of cigarette smoke.
- Researchers at UC Natural Reserve System locations use sensors to map land, track animals and collect environmental data.
- Decades ago, students helped spur organic farming and laid the ground for sustainable agriculture research and education programs at UC Davis and UC Santa Cruz that are models today for other universities.
- Buildings use up to two-thirds of the electricity in the U.S. A UC startup company is developing innovative technologies to curb skyrocketing electrical consumption, energy costs and greenhouse emissions.
- An instrument to quickly detect traumatic brain injury, a vaccine to save unborn calves from a deadly bacterium and a technology to clean up grimy water are among research projects getting a boost from a new UC program. New grants will help move critical research out of the lab and into the market.
- For those wanting to know about risky places to drive, or even walk, UC Berkeley researchers have designed a tool for sorting through and mapping all of the serious traffic collisions in the state.
- A global network, Engineers for a Sustainable World, is now headquartered at UC Merced. Students work on innovative projects from supplying solar-generated electricity to villages in India to replacing polluting diesel fuel on international cargo ships.
- A lab researcher's campaign to reuse, recycle and reduce draws national acclaim.
- A unique collaboration between a law professor and chemist to use forensic science to investigate potential weapons of mass destruction and a novel way to monitor carbon emissions in urban areas are among new projects funded by the UC Laboratory Fees Research Program.
- UC Davis researchers are applying the same science used to sniff out illegal drugs to smelling and picking out the freshest melons.
- As the nation's power system ages and grows insufficient, UC researchers are building a smarter, greener electric grid for the future.
- Five UC graduate students and postdoctoral researchers were among innovators named 'Rising Stars of Science: The Forbes 30 under 30.'
- A large amount of native bee research in California occurs in the wild landscapes protected by the UC Natural Reserve System.
- California is a leader in protecting marine life and areas, and UC scientists play an important role in studying, advising and shaping policy that must balance the environmental needs of the ocean with those of millions of users.
- UC professors built and worked in towers as part of the largest single atmospheric research effort in the state. The data they've collected will guide policymakers dealing with air pollution.
- A multicampus center connects researchers and people in the community to address poverty, employment, health, the environment and other California issues.
- Making research labs more sustainable can help UC campuses to cut energy use and reduce greenhouse gas emissions.
<urn:uuid:759206a5-267d-43ab-a03d-7be6f5bf369a>
2.8125
950
Content Listing
Science & Tech.
29.39336
Section 2: Mysteries of Light

Figure 3: This furnace for melting glass is nearly an ideal blackbody radiation source. Source: © OHM Equipment, LLC.

The nature of light was a profound mystery from the earliest stirrings of science until the 1860s and 1870s, when James Clerk Maxwell developed and published his electromagnetic theory. By joining the two seemingly disparate phenomena, electricity and magnetism, into the single concept of an electromagnetic field, Maxwell's theory showed that waves in the field travel at the speed of light and are, in fact, light itself. Today, most physicists regard Maxwell's theory as among the most important and beautiful theories in all of physics. Maxwell's theory is elegant because it can be expressed by a short set of equations. It is powerful because it leads to powerful predictions—for instance, the existence of radio waves and, for that matter, the entire electromagnetic spectrum from radio waves to x-rays. Furthermore, the theory explained how light can be created and absorbed, and provided a key to essentially every question in optics.

Given the beauty, elegance, and success of Maxwell's theory of light, it is ironic that the quantum age, in which many of the most cherished concepts of physics had to be recast, was actually triggered by a problem involving light.

Figure 4: The electromagnetic spectrum from radio waves to gamma rays.

The spectrum of light from a blackbody—for instance the oven in Figure 3 or the filament of an electric light bulb—contains a broad spread of wavelengths. The spectrum varies rapidly with the temperature of the body. As the filament is heated, the faint red glow of a warm metal becomes brighter, and the peak of the spectrum broadens and shifts to a shorter wavelength, from orange to yellow and then to blue. The spectra of radiation from blackbodies at different temperatures have identical shapes and differ only in the scales of the axes. 
Figure 5: Spectrum of the cosmic microwave background radiation. Source: © NASA, COBE.

Figure 5 shows the blackbody spectrum from a particularly interesting source: the universe. This is the spectrum of thermal radiation from space—the cosmic microwave background—taken by the Cosmic Background Explorer (COBE) satellite experiment. The radiation from space turns out to be the spectrum of a blackbody at a temperature of 2.725 Kelvin. The peak of the spectrum occurs at a wavelength of about one millimeter, in the microwave regime. This radiation can be thought of as an echo of the primordial Big Bang.

Enter the quantum

In the final years of the 19th century, physicists attempted to understand the spectrum of blackbody radiation, but theory kept giving absurd results. German physicist Max Planck finally succeeded in calculating the spectrum in December 1900. However, he had to make what he could regard only as a preposterous hypothesis. According to Maxwell's theory, radiation from a blackbody is emitted and absorbed by charged particles moving in the walls of the body, for instance by electrons in a metal. Planck modeled the electrons as charged particles held by fictitious springs. A particle moving under a spring force behaves like a harmonic oscillator. Planck found he could calculate the observed spectrum if he hypothesized that the energy of each harmonic oscillator could change only by discrete steps. If the frequency of the oscillator is ν (ν is the Greek letter "nu" and is often used to stand for frequency), then the energy had to be 0, 1hν, 2hν, 3hν, ..., nhν, ..., where n could be any integer and h is a constant that soon became known as Planck's constant. Planck named the step hν a quantum of energy. The blackbody spectrum Planck obtained by invoking his quantum hypothesis agreed beautifully with the experiment. But the quantum hypothesis seemed so absurd to Planck that he hesitated to talk about it. 
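As a quick numerical cross-check of the figure (the constant below is Wien's displacement constant, which is not given in the text), the peak wavelength of a 2.725 K blackbody follows from Wien's law λ_max = b/T:

```python
# Wien's displacement law: lambda_max = b / T for a blackbody at temperature T.
WIEN_B = 2.898e-3  # m*K, Wien displacement constant
T_CMB = 2.725      # K, cosmic microwave background temperature measured by COBE

lam_max = WIEN_B / T_CMB
print(f"peak wavelength = {lam_max * 1e3:.2f} mm")  # about 1 mm, in the microwave regime
```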
Figure 6: Max Planck solved the blackbody problem by introducing quanta of energy. Source: © The Clendening History of Medicine Library, University of Kansas Medical Center.

The physical dimension—the unit—of Planck's constant h is interesting. It is either [energy] / [frequency] or [angular momentum]. Both of these dimensions have important physical interpretations. The constant's value in S.I. units, 6.6 x 10^-34 joule-seconds, suggests the enormous distance between the quantum world and everyday events. Planck's constant is ubiquitous in quantum physics. The combination h/2π appears so often that it has been given a special symbol, ħ, called "h-bar." This symbol appears in the upper-right-hand corner of these pages.

For five years, the quantum hypothesis had little impact. But in 1905, in what came to be called his miracle year, Swiss physicist Albert Einstein published a theory that proposed a quantum hypothesis from a totally different point of view. Einstein pointed out that, although Maxwell's theory was wonderfully successful in explaining the known phenomena of light, these phenomena involved light waves interacting with large bodies. Nobody knew how light behaved on the microscopic scale—with individual electrons or atoms, for instance. Then, by a subtle analysis based on the analogy of certain properties of blackbody radiation with the behavior of a gas of particles, he concluded that electromagnetic energy itself must be quantized in units of hν. Thus, the light energy in a radiation field obeyed the same quantum law that Planck proposed for his fictitious mechanical oscillators; but Einstein's quantum hypothesis did not involve hypothetical oscillators.

An experimental test of the quantum hypothesis

Whereas Planck's theory led to no experimental predictions, Einstein's theory did. When light hits a metal, electrons can be ejected, a phenomenon called the photoelectric effect. 
According to Einstein's hypothesis, the energy absorbed by each electron had to come in bundles of light quanta. The minimum energy an electron could extract from the light beam is one quantum, hν. A certain amount of energy, W, is needed to remove electrons from a metal; otherwise they would simply flow out. So, Einstein predicted that the maximum kinetic energy of a photoelectron, E, had to be given by the equation E = hν − W. The prediction is certainly counterintuitive, for Einstein predicted that E would depend only on the frequency of light, not on the light's intensity. The American physicist Robert A. Millikan set out to prove experimentally that Einstein must be wrong. By a series of painstaking experiments, however, Millikan convinced himself that Einstein must be right. The quantum of light energy is called a photon. A photon possesses energy hν, and it carries momentum hν/c, where c is the speed of light. Photons are particle-like because they carry discrete energy and momentum. They are relativistic because they always travel at the speed of light and consequently can possess momentum even though they are massless. Although the quantum hypothesis solved the problem of blackbody radiation, Einstein's concept of a light quantum—a particle-like bundle of energy—ran counter to common sense because it raised a profoundly troubling question: Does light consist of waves or particles? As we will show, answering this question required a revolution in physics. The issue was so profound that we should devote the next section to reviewing just what we mean by a wave and what we mean by a particle.
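Einstein's relation E = hν − W is easy to evaluate numerically. In the sketch below, the 2.3 eV work function (roughly that of sodium) and the two wavelengths are illustrative choices, not values from the text:

```python
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def max_kinetic_energy_ev(wavelength_m, work_function_ev):
    """E = h*nu - W in eV; a negative result means no photoelectrons at all."""
    nu = C / wavelength_m            # frequency of the light
    return H * nu / EV - work_function_ev

# Intensity never enters: only the frequency (wavelength) of the light matters.
print(max_kinetic_energy_ev(400e-9, 2.3))  # violet light: positive, electrons ejected
print(max_kinetic_energy_ev(700e-9, 2.3))  # red light: negative, below threshold
```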
<urn:uuid:2a4bf4bf-ce34-4e88-9c13-03056f134028>
3.90625
1,482
Knowledge Article
Science & Tech.
43.49241
This chapter shows you how to create cross platform tools. If for some reason you have to stop and come back later, remember to use the su - clfs command, and it will set up the build environment that you left. Before issuing the build instructions for a package, the package should be unpacked as user clfs, and a cd into the created directory should be performed. The build instructions assume that the bash shell is in use. Several of the packages are patched before compilation, but only when the patch is needed to circumvent a problem. A patch is often needed in both this and the next chapters, but sometimes in only one or the other. Therefore, do not be concerned if instructions for a downloaded patch seem to be missing. Warning messages about offset or fuzz may also be encountered when applying a patch. Do not worry about these warnings, as the patch was still successfully applied. During the compilation of most packages, there will be several warnings that scroll by on the screen. These are normal and can safely be ignored. These warnings are as they appear—warnings about deprecated, but not invalid, use of the C or C++ syntax. C standards change fairly often, and some packages still use the older standard. This is not a problem, but does prompt the warning. After installing each package, both in this and the next chapters, delete its source and build directories, unless specifically instructed otherwise. Deleting the sources prevents mis-configuration when the same package is reinstalled later.
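The harmless "offset" warning mentioned above can be reproduced in a self-contained demonstration (the file names here are made up for the demo; they are not CLFS files):

```shell
# Create a small file and a patch whose context matches it at line 1.
cat > demo.txt <<'EOF'
line one
line two
line three
EOF
cat > demo.patch <<'EOF'
--- demo.txt
+++ demo.txt
@@ -1,3 +1,3 @@
 line one
-line two
+line 2
 line three
EOF
# Shift the content down so the hunk no longer applies at its stated line.
sed -i '1i extra line' demo.txt
# patch still succeeds, printing something like:
#   Hunk #1 succeeded at 2 (offset 1 line).
patch -p0 < demo.patch
grep 'line 2' demo.txt && echo "patch applied despite the offset warning"
```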
<urn:uuid:953ee5dd-8f6e-46ac-a24c-af6d55a7aae6>
2.875
307
Documentation
Software Dev.
51.763474
This report is a product of the Committee on Restoration of the Greater Everglades Ecosystem (CROGEE), which provides consensus advice to the South Florida Ecosystem Restoration Task Force. The Task Force was established in 1993 and was codified in the 1996 Water Resources Development Act (WRDA); its responsibilities include the development of a comprehensive plan for restoring, preserving and protecting the South Florida ecosystem, and the coordination of related research. The CROGEE works under the auspices of the Water Science and Technology Board and the Board on Environmental Studies and Toxicology of the National Research Council. The CROGEE's mandate includes providing the Task Force not only with scientific overview and technical assessment of the restoration activities and plans, but also providing focused advice on technical topics of importance to the restoration efforts. One such topic was to examine "the linkage between the upstream components of the greater Everglades and adjacent coastal ecosystems." This report addresses this issue by breaking it down into three major questions: - What is the present state of knowledge of Florida Bay ("the Bay") on scientific issues that relate to the success of the overall CERP? - What are the potential long-term effects of Everglades restoration as currently designed on the nature and condition of the Bay? - What are the critical science questions that should be answered early in the restoration process to design a system that benefits not only the terrestrial and freshwater aquatic Everglades but the Bay as well? This study was inspired in part by the 2001 Florida Bay and Adjacent Marine Systems Science Conference held on April 23-26, 2001 in Key Largo, Florida. An overlapping meeting of the CROGEE was held at the same location on April 26-28, 2001. The conference was organized by the Program Management Committee (PMC) of the Florida Bay and Adjacent Marine Systems Science Program. 
The PMC organized the conference around five questions suggested by the Florida Bay Science Oversight Panel. These questions related to circulation, salinity patterns, and outflows of the Bay; nutrients and the nutrient budget; onset, persistence and fate of planktonic algal blooms; temporal and spatial changes in seagrasses and the hardbottom community; and recruitment, growth and survivorship of higher trophic level species. Some of these issues are discussed in the present report. However, as noted earlier, this report focuses on the subset of questions that relate to linkages between the Bay and the upstream portion of the Everglades system that arose at the 2001 Florida Bay Conference.
Flying Low: The Deep Flight II sub uses stubby wings that propel it down like an airplane goes up. Nick Kaloterakis By liberal estimates, we’ve explored about 5 percent of the seas, and nearly all of that in the first 1,000 feet. That’s the familiar blue part, penetrated by sunlight, home to the colorful reefs and just about every fish you’ve ever seen. Beyond that is the deep—a pitch-black region that stretches down to roughly 35,800 feet, the bottom of the Marianas Trench. Nearly all the major oceanographic finds made in that region—hydrothermal vents and the rare life-forms that thrive in the extreme temperatures there, sponges that can treat tumors, thousands of new species, the Titanic —have occurred above 15,000 feet, the lower limit of the world’s handful of manned submersibles for most of the past 50 years. Now engineers want to unlock the rest of the sea with a new fleet of manned submersibles. And they don’t have to go to the very bottom to do it. In fact, only about 2 percent of the seafloor lies below 20,000 feet, in deep, muddy trenches. If we extend our current reach just 5,000 feet—another mile—it will open about 98 percent of the world’s oceans to scientific eyes.
A television camera at ground level is filming the lift-off of a rocket that is rising vertically according to the position function s=50t^2, where "s" is in feet and "t" is in seconds. The camera is 2,000 feet from the base of the launch pad. Find the rate of change in the angle of elevation of the camera 10 seconds after lift-off. I have the diagram. It is a right triangle, with the base = 2,000 ft and the height = "s". I took Calculus a couple of semesters ago, and didn't get it then, and am struggling again. I know I have to come up with an equation, but not the s=50t^2 one... Any help would be appreciated. Thank you.
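One standard way to set this up (a sketch of the related-rates approach, using the given s = 50t^2 and the 2,000 ft base):

```latex
\tan\theta = \frac{s}{2000} = \frac{50t^2}{2000} = \frac{t^2}{40}
% Differentiating both sides with respect to t:
\sec^2\theta \,\frac{d\theta}{dt} = \frac{t}{20}
\quad\Longrightarrow\quad
\frac{d\theta}{dt} = \frac{t}{20}\cos^2\theta
% At t = 10: s = 5000 ft, so tan(theta) = 2.5 and
% sec^2(theta) = 1 + tan^2(theta) = 7.25, giving
\frac{d\theta}{dt} = \frac{10/20}{7.25} = \frac{2}{29} \approx 0.069\ \text{rad/s}
```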
More In This Article - Photo Album “For almost two decades my colleagues and I have been studying one of the most remarkable systems of communication that nature has evolved. This is the ‘language’ of the bees: the dancing movements by which forager bees direct their hivemates, with great precision, to a source of food. In our earliest work we had to look for the means by which the insects communicate and, once we had found it, to learn to read the language. Then we discovered that different varieties of the honeybee use the same basic patterns in slightly different ways; that they speak different dialects, as it were. This led us to examine the dances of other species in the hope of discovering the evolution of this marvelously complex behavior. —Karl von Frisch” Von Frisch shared the 1973 Nobel Prize in Physiology or Medicine. “A sensation was recently caused in Paris by the daring proposal of Prof. Etchegoyen, a distinguished scientist, who declares that France ought to lose no time in converting the vast desert of Sahara into an inland sea. He claims that, since ‘about a quarter of the whole desert area lies below sea level, the construction of a canal some fifty miles long through the higher land of the north African coast would immediately create a Sahara Sea equal in size to about half the extent of the Mediterranean.’ Millions of human beings could then support themselves in comfort, who now lead a miserable existence on the verge of starvation. Moreover, a great new colony could be added to the possessions of France.” “Sicily's sulphur production comprises an area about equal to that of the State of Connecticut. A population of 350,000 ignorant, ill-nourished peasants, called carusi, labor in the mines [see photograph]. Exceedingly crude and simple methods prevail, and have prevailed since the days of the Romans, in the mining of Sicilian sulphur. 
The Sicilian industry, debilitated by ages of market speculation, usury and local vendette, late in the last century, staggered under the shock of news of the opening up of an immense deposit of sulphur on the gulf coastal plain of Louisiana. By Herman Frasch's invention of a process for liquefying sulphur in the ground, at a depth of 1,000 feet, and pumping it to the surface in fluid form, sulphur is produced at an average cost of $3.68 per ton, as against $12 per ton, the cost of mining sulphur in Sicily.” Civil War Shipbuilding “A number of our engineering establishments are engaged at present in constructing ironclad steamers of various kinds. Contracts have been made by the Navy Department with Capt. Ericsson for building several on the general plan of the Monitor. Five are being constructed at Greenpoint, Brooklyn, where a force of nine hundred men are employed upon them. All will be furnished with revolving turrets of greater thickness than that of the Monitor, and most of them are to be armed with 15-inch guns.” Some images of the technology of warfare, taken from our archives of 150 years ago, can be viewed at www.ScientificAmerican.com/aug2012/civil-war “Many of the natives of Cochin China [southern Vietnam] obtain their livelihood by tiger catching, the skin of this animal being valuable. They use a novel mode of ensnaring those savage beasts. The snare consists of large leaves, sometimes pieces of paper, covered on one side with a substance of the same nature as bird-lime, and containing a poison, the smallest portion of which, getting into the animal's eyes, causes instant blindness. They are laid about thickly, with the bird-lime side upward, in the track of a tiger, and as surely as the animal puts his paw upon one of the treacherous leaves, he becomes a victim; for, finding it stuck to his foot, he shakes it, and while scratching and rubbing himself to get free, some of the bird-lime poison gets into his eyes and blinds him.
He growls and roars in agony, and this is the signal for his captors to come up and dispatch him.” This article was originally published with the title 50, 100 & 150 Years Ago.
Web services promote an environment for systems that is loosely coupled and interoperable. Many of the concepts for Web services come from a conceptual architecture called service-oriented architecture (SOA). SOA configures entities (services, registries, contracts, and proxies) to maximize loose coupling and reuse. This chapter describes these entities and their configuration in an abstract way. Although you will probably use Web services to implement your service-oriented architecture, this chapter explains SOA without much mention of a particular implementation technology. This is done so that in subsequent chapters, you can see the areas in which Web services achieve some aspects of a true SOA and other areas in which Web services fall short. Before we analyze the details of SOA, it is important to first explore the concept of software architecture, which consists of the software’s coarse-grained structures. Software architecture describes the system’s components and the way they interact at a high level. These components are not necessarily entity beans or distributed objects. They are abstract modules of software deployed as a unit onto a server with other components. The interactions between components are called connectors. The configuration of components and connectors describes the way a system is structured and behaves, as shown in Figure 1. Rather than creating a formal definition for software architecture in this chapter, we will adopt this classic definition: “The software architecture of a program or computing system is the structure or structures of the system, which comprise software components, the externally visible properties of those components, and the relationships among them.” Service-oriented architecture is a special kind of software architecture that has several unique characteristics. 
It is important for service designers and developers to understand the concepts of SOA, so that they can make the most effective use of Web services in their environment. SOA is a relatively new term, but the term “service” as it relates to a software service has been around since at least the early 1990s, when it was used in Tuxedo to describe “services” and “service processes”. Sun defined SOA more rigorously in the late 1990s to describe Jini, a lightweight environment for dynamically discovering and using services on a network. The technology is used mostly in reference to allowing “network plug and play” for devices. It allows devices such as printers to dynamically connect to and download drivers from the network and register their services as being available. Figure 2 shows that other technologies can be used to implement service-oriented architecture. Web services are simply one set of technologies that can be used to implement it successfully. The most important aspect of service-oriented architecture is that it separates the service’s implementation from its interface. In other words, it separates the “what” from the “how.” Service consumers view a service simply as an endpoint that supports a particular request format or contract. Service consumers are not concerned with how the service goes about executing their requests; they expect only that it will. Consumers also expect that their interaction with the service will follow a contract, an agreed-upon interaction between two parties. The way the service executes tasks given to it by service consumers is irrelevant. The service might fulfill the request by executing a servlet, a mainframe application, or a Visual Basic application. The only requirement is that the service send the response back to the consumer in the agreed-upon format.
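To make the interface/implementation split concrete, here is a minimal sketch in Python. The names (QuoteService, InMemoryQuoteService, report) are illustrative only and not drawn from any particular SOA framework; in a real deployment the service would sit behind a network endpoint rather than a local class.

```python
from abc import ABC, abstractmethod

# The service contract: consumers depend only on this interface ("what"),
# never on a concrete implementation ("how").
class QuoteService(ABC):
    @abstractmethod
    def get_quote(self, symbol: str) -> float:
        ...

# One possible implementation; it could equally be a bridge to a mainframe
# application or a remote Web service call -- the consumer cannot tell.
class InMemoryQuoteService(QuoteService):
    def __init__(self, prices: dict):
        self._prices = prices

    def get_quote(self, symbol: str) -> float:
        return self._prices[symbol]

def report(service: QuoteService, symbol: str) -> str:
    # The consumer is written against the contract only.
    return f"{symbol}: {service.get_quote(symbol):.2f}"

print(report(InMemoryQuoteService({"ACME": 12.5}), "ACME"))  # ACME: 12.50
```

Swapping in a different QuoteService implementation requires no change to report(), which is the loose coupling the chapter describes.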
Narrator: This is Science Today. Plant breeders are always looking for ways to create tougher, more disease-resistant crops. A breakthrough came recently when biologists cloned a family of genes that give resistance to several types of plant disease. Brian Staskawicz of the University of California, Berkeley says the discovery opens up the possibility of someday engineering disease resistant crops. Staskawicz: Can you take a gene from a tobacco plant and put it into a soybean plant and have it confer resistance? These are things that we can test right now. We don't know the answer, and we're currently interested in pursuing that. Narrator: Ever since genetic engineering began, scientists have debated its safety. Some talk about the disaster that would occur if, for example, a weed accidentally picked up a disease resistance gene. But Staskawicz points out that a number of genetically engineered plants are already in development. Staskawicz: A lot of these things are being field-tested right now, so by the time that our stuff gets there, there will be a lot more information available. So it's something that we're aware of, and we'll proceed with caution on it. Narrator: For Science Today, I'm Steve Tokar.
A large-scale survey of brown hare Lepus europaeus and Iberian hare L. granatensis populations at the limit of their ranges Christian Gortazar, Javier Millán, Pelayo Acevedo, Marco A. Escudero, Javier Marco & Daniel Fernández de Luco Gortazar, C., Millán, J., Acevedo, P., Escudero, M.A., Marco, J. & Fernández de Luco, D. 2007: A large-scale survey of brown hare Lepus europaeus and Iberian hare L. granatensis populations at the limit of their ranges. - Wildl. Biol. 13: 244-251. The historical ranges of the European brown hare Lepus europaeus and the Iberian hare L. granatensis meet in Aragón in northeastern Spain. We studied the relative abundances and the population trends of the two species in 60 localities (13 for the brown hare, 38 for the Iberian hare, and nine from the transition zone where both species are present) by spotlighting in winter during 1992-2002. We carried out a total of 1,407 counts covering 41,511 km. Both the Iberian (132.2 ± 33.2 hares/100 km; range: 52-192) and the brown hare (106.7 ± 26.8; range: 53-136) were more abundant in their respective zones than both species combined in the transition zone (90.9 ± 50.5, range: 37-157). The highest Iberian hare abundances were recorded in the northern Iberian Mountains, an area with well-preserved cereal-dominated ecosystems and a less extreme climate than in other parts of the study area. The Iberian hare had significant inter-annual differences both locally and generally, which was mainly due to a peak in 1998, and this species showed a general positive trend during the study period, suggesting that Iberian hare numbers are increasing. Contrary to the marked declines reported from other European regions, the brown hare abundance indices obtained in the Spanish Pyrenees during our study period remained stable. 
Key words: hare, Lepus europaeus, Lepus granatensis, population trend, relative abundance, Spain, spotlighting Christian Gortazar, Javier Millán* & Pelayo Acevedo, Instituto de Investigación en Recursos Cinegéticos (IREC, CSIC-UCLM-JCCM), P.O. Box 535, E-13080 Ciudad Real, Spain - e-mail addresses: firstname.lastname@example.org (Christian Gortazar); email@example.com (Javier Millán); firstname.lastname@example.org (Pelayo Acevedo) Marco A. Escudero & Javier Marco, Ebronatura S.L., Camino de Cabezón s.n., E-50730 El Burgo de Ebro (Zaragoza), Spain - e-mail addresses: email@example.com (Marco A. Escudero); firstname.lastname@example.org (Javier Marco) Daniel Fernández de Luco, SEDIFAS, Universidad de Zaragoza, Miguel Servet 177, E-50013 Zaragoza, Spain - e-mail: email@example.com *Present address: Department of Conservation Biology, Estación Biológica de Doñana, E-41013 Sevilla, Spain Corresponding author: Javier Millán Received 10 February 2003, accepted 31 May 2006 Associate Editor: Heikki Henttonen
Calcifying sea critters may pay the price for increasing levels of carbon dioxide in the atmosphere Victoria Fabry and Brad Seibel study what’s come to be known as “the other CO2 problem.” Most of us are familiar with the first problem: The copious discharge of carbon dioxide, the primary greenhouse gas, into the atmosphere is forcing the Earth’s temperature to rise, causing a wide range of disruptions and changes to the world’s climate. The oceans play an integral role in mitigating some of that CO2 by absorbing about a third of it as what scientists call a “carbon sink.” But that benefit comes at a cost to marine critters and ecosystems, as the carbon dioxide begins to change the seawater chemistry of the oceans. A leading expert in ocean acidification from California State University San Marcos, Fabry is the principal investigator for a team of scientists in Antarctica studying how Southern Ocean pteropods, small gastropod mollusks (sea snails and slugs), may respond to higher acidic levels of seawater predicted for the next century. These animals may be particularly vulnerable to seawater chemistry change because, as the oceans become more acidified and the pH level decreases, their ability to calcify and form shells and skeletons may be severely affected. “Ocean acidification is going to impact many organisms that calcify,” Fabry said from her office at the Albert P. Crary Engineering and Science Center in McMurdo Station. “It’s going to happen in our lifetimes. It’s not far away.” The pH level, measured in units, is a calculation of the balance of a liquid’s acidity and alkalinity. The lower a liquid’s pH number, the higher its acidity. The pH level for the world’s oceans was stable for tens of thousands of years, but has dropped one-tenth of a unit since the Industrial Revolution in the 1800s. 
That represents a significant decrease, Fabry said, and current models predict the pH level may drop by as much as four-tenths of a unit by 2100 relative to the pre-industrial value. That could mean big trouble for calcifying organisms, particularly in the higher latitudes of the Arctic and Antarctic. The reason: Most pteropods and other calcifiers, like corals, use the calcium carbonate minerals of calcite or aragonite to construct their shell coverings or skeletons. Normally, surface seawater is not corrosive to calcite and aragonite because the carbonate ion is at supersaturating concentrations. However, as ocean pH falls, so does the concentration of the carbonate ion. Higher latitude waters are naturally less saturated, so the change in chemistry would affect these areas first. By 2040, under some CO2 emissions scenarios, surface waters of some regions may become undersaturated of aragonite, making those calcium carbonate structures constructed of aragonite vulnerable to dissolution. At the end of the century, projections say most of the Southern Ocean and some regions of the subarctic Pacific will become undersaturated with respect to aragonite if CO2 emissions continue in a business-as-usual scenario. Data on the Arctic Ocean are pending. “The high latitudes are the first areas that will have large expanses of surface waters that will be undersaturated with respect to aragonite. It’s not looking good,” Fabry said. “With increasing oceanic uptake of atmospheric CO2, we see CO2 increasing in the water and pH declining at time series stations at Bermuda, Hawaii and the Canary Islands. … And in high latitudes such as the Southern Ocean, what we’re going to have in the coming decades is surface seawater that is corrosive to aragonite. ” In a previous experiment involving a sub-Arctic pteropod, Fabry grew the species at a lower carbonate ion saturation. Within 48 hours, the growing edge of the shell began to dissolve. 
“The shells start to get pitted, the upper layer peels off, and that exposes more calcium carbonate rods and crystals to dissolution, and they just dissolve,” she said. The team is conducting similar experiments here over two field seasons, though weather for this second year stymied collection efforts for the first couple of weeks. The scientists are after two types of pteropods: one with a shell in its adult stage (euthecosomatous pteropods), and a second, carnivorous species (gymnosomatous pteropods) that feeds exclusively on the first. Seibel, a co-principal investigator on the project, is interested in discovering how ocean acidification will affect other aspects of pteropod physiology, such as oxygen consumption or ammonia excretion. “CO2 causes acidification in body fluids the same way it does in seawater, although not necessarily to the same extent,” said Seibel, with the University of Rhode Island. “Acidification of the body fluids can lead to changes in metabolism that could lead to reductions in growth and reproduction. “In other oceans, we’ve seen detrimental effects of CO2 on squid metabolism,” he added. Very preliminary results show little effect on the pteropod physiology to high levels of CO2 — 1,000 parts per million, about triple the concentration in the oceans today. However, Seibel emphasized that the experiments are short duration. The studies on squid, whose blood has a protein that binds to oxygen to transport it around the body, showed pronounced responses to acidified water. “That protein is very sensitive to pH, so we are able to see changes in oxygen consumption with these levels of CO2 in squids,” he said. The team’s method of specimen collection is pretty low-tech. Members put on chest waders and walk into the water from shore. They then use a long broom handle with a beaker at one end to gather the pteropods. “We just dip them up, because they’re very, very fragile,” Fabry said. 
The shelled pteropods don’t live long in captivity because they feed by means of a mucous web that is suspended above their bodies, sort of like a free-floating, omnivorous spider surfing its web through the water as it feeds. In the lab, the scientists measure how much the pteropods calcify under varying levels of ocean acidification based on predictions from the Intergovernmental Panel on Climate Change. For Seibel’s purposes, the researchers track rates of oxygen consumption and ammonia excretion, as well as measure the acidification in animal tissues. Fabry said it is too early to say how well the organisms may be able to adapt as worldwide surface ocean pH drops. She noted that oceans could absorb CO2 and eventually neutralize it but that process takes thousands of years. Ocean acidification, beginning in the 1800s, is occurring over the span of a few centuries. “The rate of release of CO2 to the atmosphere is critical. We could put more CO2 in the ocean, if we did it slowly,” she explained. “The biggest unknown is how fast humans will put CO2 in the atmosphere.” NSF-funded research in this story: Victoria Fabry, California State University San Marcos; and Brad Seibel, University of Rhode Island.
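A note on the numbers quoted above: because pH is logarithmic, apparently small unit drops mask large chemistry changes. Assuming the usual relation [H+] proportional to 10^(-pH), a rough back-of-envelope check:

```python
def hplus_increase(delta_ph: float) -> float:
    """Fractional increase in hydrogen-ion concentration for a pH drop of delta_ph units."""
    return 10 ** delta_ph - 1

# The ~0.1-unit drop since the Industrial Revolution:
print(f"{hplus_increase(0.1):.0%}")  # about a 26% rise in [H+]
# The ~0.4-unit drop projected for 2100:
print(f"{hplus_increase(0.4):.0%}")  # about a 151% rise in [H+]
```

So the "four-tenths of a unit" scenario is not four times worse than today's change; it is roughly a 150 percent increase in acidity relative to pre-industrial seawater.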
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.

2003 October 1

Explanation: Jon Burnett, a teenager from South Wales, UK, was photographing some friends skateboarding last week when the sky did something very strange. By diverting his camera, he was able to document this rare sky event and capture one of the more spectacular sky images yet recorded. Roughly four minutes later, he took another picture of the dispersing trail. What is it? Experts disagree. The first guess was a sofa-sized rock that exploded as a daytime fireball, but perhaps a better hypothesis is an unusual airplane contrail reflecting the setting Sun. Bright fireballs occur over someplace on Earth nearly every day. A separate bolide, likely even more dramatic, struck India only a few days ago.
Engineers find fun way to experiment with old technology Ping Pong’s fun to play, but it’s not a great game to watch. It’s all a bit, I don’t know, repetitive. But that’s what makes this video so great—it actually makes ping pong interesting. Here’s what went down. Mark French, a mechanical engineering professor at Purdue University, along with graduate students Craig Zehrun and Jim Stratton, got together and built a contraption called the “de Laval tube”. Basically, it’s a modern-day version of the de Laval nozzle, an hourglass-shaped tube used to accelerate a hot, pressurized gas to supersonic speeds. They put a ping pong ball in the tube in front of this gas and found that they could shoot the ball at speeds over 900 miles per hour, or Mach 1.2. It’s enough to—as you can see in the picture above—put a hole through a ping pong paddle. Ain’t science grand? Video of the gun in action below (about five minutes of explanation precede the shot – go to 5:50 to see it fire):
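As a sanity check on the quoted speed (assuming roughly 767 mph for the sea-level speed of sound, a standard textbook value; the article doesn't state the test conditions):

```python
SPEED_OF_SOUND_MPH = 767  # approximate, sea level at ~20 C (assumed)

ball_speed_mph = 900
mach = ball_speed_mph / SPEED_OF_SOUND_MPH
print(f"Mach {mach:.1f}")  # Mach 1.2
```

So 900 mph works out to about Mach 1.17, which rounds to the Mach 1.2 figure the researchers report.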
This Identification Section was authored by: Dr. S. Jerrine Nichols U.S. Geological Survey Great Lakes Research Center 1451 Green Road Ann Arbor, MI 48105 Zebra mussels are not the only freshwater bivalve molluscs that are found in North America, nor are they the only introduced species. Because of this, it is important to be able to distinguish between zebra mussels and members of the other two bivalve superfamilies that occur in North American freshwater habitats. Zebra mussels, Dreissena polymorpha, belong to the superfamily Dreissenacea. The quagga mussel, Dreissena bugensis, is the other introduced dreissenid bivalve in North America. D. bugensis and the native brackish water bivalve, the false dark mussel - Mytilopsis leucophaeata, are the two bivalves that are most likely to be confused with D. polymorpha. If one examines the ventral shell margin and ventral shell edge of the mussels, differences are visible. Zebra mussels have a concave or flattened bottom and an acutely angled shell margin. Both features provide additional stability for the attached mussel. If one places representatives from all three species on a flat dish, the zebra mussel will be the only one able to stay upright. Both quagga mussels and Mytilopsis have a convex ventral edge and a rounded ventral or bottom margin. The other North American bivalve superfamilies are the Unionacea and Corbiculacea (Burky 1983). A wide variety of Unionacea can be found in North America. The superfamily Corbiculacea contains two major families: the Corbiculidae and the Sphaeriidae. The Corbiculidae were introduced to North America in the early 1900s from Asia. The Sphaeriidae are tiny clams that live in freshwater lakes, streams, rivers, ponds, and ephemeral habitats. Unionid and sphaeroid species are native to North America and are easy to distinguish from the dreissenids or the Corbicula.
The Corbiculidae were introduced to North America in the early 1900s and have also presented biofouling problems, but to a lesser extent than zebra mussels. Information is provided to aid in the identification and understanding of the three species of closely related bivalves (dreissenid) and members of the two remaining bivalve superfamilies - Unionacea and Corbiculacea. The ability to detect and monitor present and future infestations of the zebra mussel is dependent upon the ability to separate the immature and adult stages from other similar native and introduced mussel species. While adult zebra mussels are fairly easy to separate from other mussel species, identification of the immatures is quite difficult. This CD includes two computer-based identification systems that will greatly reduce the learning curve associated with the identification of both the zebra and quagga mussel. For more information on identification of the zebra mussel, the quagga mussel, or the use of these systems, please click on the appropriate green/underlined topic heading listed below. Please note that users must access the computer-based identification system through the Information Manager. Genetic Studies of the Zebra Mussel and the Discovery of the Quagga Mussel in North America Using the Interactive Identification Systems
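The ventral-margin rule described above is essentially a two-step field key, which can be sketched as a small helper function (the feature names are invented for illustration and are not part of the CD's interactive identification system; adults only, as the text notes that immatures are much harder):

```python
def classify_dreissenid(ventral_margin: str, stays_upright: bool) -> str:
    """Rough field heuristic from adult shell shape alone.

    ventral_margin: "flat_or_concave" or "convex_rounded"
    stays_upright:  whether the mussel rests stably on a flat dish
    """
    if ventral_margin == "flat_or_concave" and stays_upright:
        return "zebra mussel (Dreissena polymorpha)"
    if ventral_margin == "convex_rounded":
        # Quagga mussels and Mytilopsis leucophaeata share this shape;
        # separating them requires further characters.
        return "quagga mussel or false dark mussel"
    return "indeterminate"

print(classify_dreissenid("flat_or_concave", True))
```

Shell shape alone cannot separate the quagga mussel from the false dark mussel, which is exactly why the CD's computer-based systems use additional characters.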
Scientists have documented for the first time that animals can and do consume Archaea – a type of single-celled microorganism thought to be among the most abundant life forms on Earth. Archaea that consume the greenhouse gas methane were in turn eaten by worms living at deep-sea cold seeps off Costa Rica and the West Coast of the United States. - Scientists document first consumption of abundant life form, Archaea Mon, 12 Mar 2012, 15:33:24 EDT - Old life capable of revealing new tricks after all Wed, 6 Jul 2011, 15:36:19 EDT - Planet's nitrogen cycle overturned by 'tiny ammonia eater of the seas'Wed, 30 Sep 2009, 13:36:43 EDT - Rampant helper syndromeThu, 2 Jul 2009, 11:08:35 EDT - Researchers map minority microbes in the colonTue, 2 Aug 2011, 16:35:38 EDT
If a polynomial is divided by (x+2), the remainder is -19. When the same polynomial is divided by (x-1), the remainder is 2. Determine the remainder when the polynomial is divided by (x-1)(x+2).

We know that $P(x) = (x+2)\,q_1(x) - 19$ and $P(x) = (x-1)\,q_2(x) + 2$, where $q_1(x)$ and $q_2(x)$ are two polynomials. Thus, $P(-2) = -19$ and $P(1) = 2$. The remainder, when dividing by $(x-1)(x+2)$, can be of degree 1 at most since $(x-1)(x+2)$ is a polynomial of degree 2. This is, $P(x) = (x-1)(x+2)\,q(x) + ax + b$, where $q(x)$ is a polynomial of degree $\deg P - 2$. We see that $P(1) = a + b$ and $P(-2) = -2a + b$, but we know that $P(1) = 2$ and $P(-2) = -19$, so we've to solve a linear system of equations in the unknowns a and b. It's easy to solve this system and the solution is $a = 7$, $b = -5$. Therefore, the remainder when $P(x)$ is divided by $(x-1)(x+2)$ is $7x - 5$.
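The linear system implied by the remainder theorem (P(1) = 2 and P(-2) = -19) can be checked numerically; a short sketch, using one hypothetical polynomial that satisfies both conditions:

```python
# From P(1) = 2 and P(-2) = -19, the remainder r(x) = a*x + b satisfies:
#    a + b  = 2
#  -2a + b  = -19
a = (2 - (-19)) / 3    # subtracting the equations leaves 3a = 21
b = 2 - a
print(a, b)            # 7.0 -5.0

# Spot-check with an arbitrary polynomial meeting both conditions,
# e.g. P(x) = (x - 1)(x + 2) * x + 7x - 5 (a hypothetical choice):
P = lambda x: (x - 1) * (x + 2) * x + 7 * x - 5
assert P(1) == 2 and P(-2) == -19
```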
Name: andrew g cantrell Date: 1993 - 1999 Is the moon ever truly full except during a total solar eclipse? Is there a reason that the Orion nebula appears blue in my telescope? 1) nope, the moon can't be full during a solar eclipse! (why?) [perhaps you were thinking of lunar eclipses?] 2) the Orion Nebula is one of the brightest of the emission nebulae or H II regions (the "H II" means ionized hydrogen). Emission nebulae are usually reddish because they fluoresce by converting stellar UV radiation into the hydrogen Balmer lines, especially H-alpha which is in the red region of the visible spectrum. Why it appears blue could be due to many reasons having to do with your telescope, atmospheric conditions, Orion's elevation above the horizon, or it might actually be blue because of the presence of oxygen in the nebula. [for those who don't know, the Orion Nebula is the middle "star" of the three making up "The Hunter's" sword hanging from his belt... AHA! there is a small cluster of 5 blue stars inside the nebula. reference: Whitney's Star Finder by C. A. Whitney, Alfred A. Knopf, 1977; a great little book for anyone interested in viewing the night sky; includes info on eclipses, sunrises and sunsets, comets, meteors, aurorae, rainbows, haloes, sundogs, and how to photograph all these. Click here to return to the Astronomy Archives Update: June 2012
Follow humbly wherever and to whatever abyss Nature leads, or you shall learn nothing. -T.H. Huxley We’ve spent a little bit of time talking about dark energy, including what we think of it, how we first discovered it, and how we knew that there wasn’t just something out there blocking the light. It seems to be the latest abyss that Nature is leading us into, so we needed to look beyond the type Ia supernova data and see what else the Universe was telling us. So what do we do? First off, we can try to measure how much matter is in the Universe independent of anything else. How do we do this? We use the most accurate method available, of course. This means taking giant surveys of galaxies and clusters of galaxies, combined with a knowledge of gravity. Then you take this actual clustering data and you compare it with simulations of Universes with different matter compositions. You take a Universe with 10% matter, then you take another one with 20%, 30%, 40%, etc., and see which one matches the Universe you actually have in front of you. From clustering data, we can tell that the Universe has somewhere between 25 and 30% of its energy in the form of normal matter. Independent of any supernova data, we learn that most of the energy in the Universe is not normal matter. So what’s the rest of it? We need the cosmic microwave background to tell us that. These tiny little fluctuations tell us a tremendous amount about what’s in our Universe. Moreover, they tell us whether space in the Universe is curved positively like a sphere, flat like a sheet of paper, or curved negatively like a saddle. These three different curvature cases would lead to the hot and cold spots looking different from one another, and the differences are striking. BOOMERANG was able to tell these cases apart. Only the middle case — a flat Universe — holds up to the data. In fact, the limits are that if the Universe is curved, the amount of curvature is less than 2% of the total energy density.
So we have not only supernovae, but clusters of galaxies and the cosmic microwave background too, all pointing towards the same Universe. One where it’s spatially flat, full of about 25-30% matter, and where the remaining 70-75% is some mysterious form of energy. Seriously, all these different data sets point towards the same conclusion: The Universe is mostly full of dark energy, which would need to exist even without the supernova data! It’s a very unusual thing for all of these different sources of data to come in all at once, like they have over the past decade, and all support the same conclusion. But this is what we’ve got, and it’s supported from every angle. So take Huxley’s advice, and follow Nature into the abyss of dark energy, or — the horror — you shall learn nothing.
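The bookkeeping behind that conclusion is simple enough to verify: in a spatially flat Universe the density parameters sum to 1, so whatever isn't matter (or curvature) must be something else. A quick sketch (the function and variable names are mine):

```python
def dark_energy_fraction(omega_matter: float, omega_curvature: float = 0.0) -> float:
    """Remaining energy fraction in a Universe whose density parameters sum to 1."""
    return 1.0 - omega_matter - omega_curvature

# Clustering surveys put matter at 25-30% of the critical density:
for om in (0.25, 0.30):
    print(f"Omega_matter = {om:.2f} -> Omega_DE = {dark_energy_fraction(om):.2f}")
```

With curvature constrained below 2%, the leftover 70-75% is exactly the dark energy budget the supernovae pointed to independently.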