text: string (lengths 174 to 655k)
id: string (length 47)
score: float64 (2.52 to 5.25)
tokens: int64 (39 to 148k)
format: string (24 classes)
topic: string (2 classes)
fr_ease: float64 (-483.68 to 157)
__index__: int64 (0 to 1.48M)
The National Center for Atmospheric Research-Community Climate System Model (NCAR-CCSM) is used in a coupled atmosphere-ocean-sea-ice simulation of the Last Glacial Maximum (LGM, around 21,000 years ago) climate. In the tropics, the simulation shows a moderate cooling of 3 °C over land and 2 °C in the ocean in zonal average. This cooling is about 1 °C cooler than the CLIMAP sea surface temperatures (SSTs) but consistent with recent estimates of both land and sea surface temperature changes. Subtropical waters are cooled by 2-2.5 °C, also in agreement with recent estimates. The simulated oceanic thermohaline circulation at the LGM is not only shallower but also weaker than the modern circulation, with a migration of the deep-water formation site in the North Atlantic, as suggested by the paleoceanographic evidence. The simulated northward flow of Antarctic Bottom Water (AABW) is enhanced. These deep circulation changes are attributable to the increased surface density flux in the Southern Ocean caused by sea-ice expansion at the LGM. Both the Gulf Stream and the Kuroshio are intensified due to the overall increase of wind stress over the subtropical oceans. The intensified zonal wind stress and the southward shift of its maximum in the Southern Ocean effectively enhance the transport of the Antarctic Circumpolar Current (ACC) by more than 50%. Simulated SSTs are lowered by up to 8 °C in the midlatitudes. Simulated conditions in the North Atlantic are warmer, with less sea ice, than indicated by CLIMAP, again in agreement with more recent estimates. The increased meridional SST gradient at the LGM results in an enhanced Hadley Circulation and increased midlatitude storm track precipitation. The increased baroclinic storm activity also intensifies the meridional atmospheric heat transport. A sensitivity experiment shows that about half of the simulated tropical cooling at the LGM originates from reduced atmospheric concentrations of greenhouse gases.
<urn:uuid:a6a6195b-abc3-4fca-87c9-daeaf460fe58>
2.90625
412
Knowledge Article
Science & Tech.
28.831824
500
July 17, 1998. July 15, 1998: A unique levitation furnace that flew on the Space Shuttle in 1997 is being eyed for upgrades to fly on future Shuttle and International Space Station missions. "TEMPUS on MSL-1 proved it was operationally reliable," said Dr. Ivan Egry, the project scientist at the German Space Agency (DLR). "I am really surprised at how much scientific data we are still squeezing out of it." Egry spoke Tuesday morning to the third Biennial Microgravity Materials Science Conference sponsored by NASA. TEMPUS - built by the DLR and used jointly by DLR and NASA - is the German acronym for containerless electromagnetic processing in weightlessness. That, simply put, is what TEMPUS does. An electromagnetic coil inside the TEMPUS facility positions metal samples with about 1/1,000th the force needed on the ground to work against gravity and keep the samples from touching the container walls. A second coil pumps in radio wave energy - a bit like a microwave oven - to melt the sample. This approach is vital in a number of research areas because touching the container walls will instantly cool the sample, and levitation on the ground often involves forces great enough to disturb the sample. Scientists don't want either to happen when they are trying to make precise measurements of fundamental properties that can help them refine manufacturing processes on Earth. TEMPUS flew on the Microgravity Sciences Laboratory-1 (MSL-1) mission in 1997, and on the second International Microgravity Laboratory (IML-2) in 1994. Data are still being analyzed, but Egry gave a preview Tuesday, including benchmark data that will let scientists correct the surface tension measurements for one type of metal, and make the first-ever reliable viscosity measurements. "Many things were surprising," Egry said when asked about the data from TEMPUS. Among them were the first experimental measurements of the electrical conductivity of cobalt-palladium in both its liquid and solid states. TEMPUS demonstrated its value by making repeat measurements that matched very closely with one another. Consistency is crucial when one is trying to establish basic physical properties. For example, one line of experiments involved cooling metals, such as zirconium, far below their normal freezing point and then recording the point where they froze, how much heat they gave off, and other details. The zirconium sample was put through 120 melt/freeze cycles. "It's really amazing to see how one undercooling cycle follows the other," Egry said as he showed a graph showing precise repeatability in the data. All told, the MSL-1 mission hosted 22 experiments comprising 197 hours of test runs and 437 melting cycles. Spurred by this success, DLR is looking at adapting TEMPUS to fly on Spacelab, and to incorporate better sample handling and video capabilities, and a broader temperature range. DLR also is looking at an Advanced TEMPUS that would allow scientists to replace samples in orbit - so the furnace would not have to be brought back - and add other improvements to enhance the science. Editor's Note: The original news release, with images and related links, can be found at: http://science.msfc.nasa.gov/newhome/headlines/msad15jul98_2.htm The above story is reprinted from materials provided by NASA/Marshall Space Flight Center--Space Sciences Laboratory.
<urn:uuid:178c8b29-6f46-4fb6-84f2-35ff509c8136>
3.296875
759
Truncated
Science & Tech.
40.882591
501
Dec. 17, 2009 In a pilot project that could help better manage the planet's strained natural resources, space-age technologies are helping a Washington state community monitor its water availability. NASA satellites and sensors are providing the information needed to make more accurate river flow predictions on a daily basis. "World leaders are struggling to protect natural resources for future generations," said Jeff Ward, a senior research scientist at the Department of Energy's Pacific Northwest National Laboratory, which is managed by Battelle. "These tools help us sustainably use natural resources while balancing environmental, cultural and economic concerns." Ward manages a project on behalf of Battelle that is helping to better predict the flow of the Dungeness River, near Sequim, Wash., with data collected by NASA instruments. The project started by creating a new model that predicts river flows in the river's surrounding valley. It then expanded to help other communities in Kansas, Maine, Oregon and Washington state better manage their water and land resources with similar technologies. The project -- called the North Olympic Peninsula Solutions Network -- is led by the North Olympic Peninsula Resource Conservation & Development Council and supported by PNNL and others. Lucien Cox of NASA will present the project's results Dec. 16 at the 2009 fall meeting of the American Geophysical Union in San Francisco. The project will help regional natural resource managers assess the abundance -- or lack thereof -- of water in the Dungeness River. The river model was developed to show how NASA technologies like satellites, sensors and computational models could be used to improve short-term stream flow predictions. The river model relies on snowpack and temperature data collected from satellites, as well as real-time snowpack and water data collected by various agencies. The new Dungeness River model's calculations can tell what kind of flow to expect -- from a trickle to a deluge -- on a daily and monthly basis. Before, resource managers primarily relied on either water levels physically measured at gauges or historical data to predict total expected water volume over two to six months. Neither method provided flow predictions as frequently as the new model. Having more precise river flow predictions is especially important along the Dungeness River, where the towering Olympic Mountains create a drying rain shadow effect and steep slopes prevent above-ground water reservoirs. Sequim receives just 15 inches of rain annually. Water is so treasured that the agricultural city is home to a 114-year-old festival that celebrates a historic irrigation system. "Improving the accuracy of stream flow predictions is important to a diverse group of water users, including irrigation-dependent farmers, planners making urban growth decisions and those concerned about salmon survival or water quality," said Clea Rome, North Olympic Peninsula RC&D coordinator. "Stream flow prediction tools can help us avoid a crisis by alerting us before droughts are in full effect, giving us enough notice to adjust water use." But the practical use of NASA technologies isn't limited just to Sequim or river water. The North Olympic Peninsula Solutions Network is helping four other resource, conservation and development councils tackle their unique problems. Another resource -- soil -- has the Solomon Valley RC&D in north central Kansas concerned about agricultural tilling and erosion.
Striking a balance between agriculture and forestry is critical for the Threshold to Maine RC&D in southwest Maine. The Wy'East RC&D is looking to better manage water supply and demand in north central Oregon. And in Okanogan, Wash., the possibility of water shortages worries the North Central Washington RC&D. "Space technologies can help us get the best science to the ground, to the decision makers here in the Okanogan Basin," said Samantha Bartling, North Central Washington RC&D coordinator. "We expect it'll help us more precisely predict water availability for a long time to come." The four councils are working with North Olympic Peninsula Solutions Network leaders to determine how NASA technologies can best address their different challenges. The project is funded by a $1.6 million grant from NASA. More information can be found at the North Olympic Peninsula Solutions Network website, http://pcnasa.ctc.edu/. Other project partners include: the Department of Agriculture's Natural Resources Conservation Services; NRCS National Water and Climate Center; National Association of RC&D Councils; Idaho National Laboratory; Olympic National Park; Clallam County; The Dungeness River Management Team; The Elwha-Morse Management Team; Peninsula College and Pacific Northwest Regional Collaboratory.
<urn:uuid:ce90c6fc-1c91-43ab-b0a3-e081dd2dc656>
3.5
961
News Article
Science & Tech.
29.381148
502
Mar. 27, 2011 Researchers from the Instituto de Astrofísica de Canarias (IAC) have discovered the existence of a black hole 5.4 times greater in mass than that of our Sun, located in the X-ray binary system XTE J1859+226. The observations, carried out with the Gran Telescopio Canarias (GTC), yielded the first spectroscopic data from this binary system to be published and were decisive for the discovery. X-ray binaries are stellar systems composed of a compact object (which may be a neutron star or a black hole) and a 'normal' star. The compact object sucks matter out of the star and adds it slowly to its own mass, through a spiral disc formed around it. This process of absorption is known as accretion. Only 20 binary systems, out of an estimated population of around 5,000 within our Galaxy, are known to contain a black hole. XTE J1859+226 is, in particular, a transient X-ray binary located in the Vulpecula constellation. It was discovered by the RXTE satellite during an eruption registered in 1999. "Transient X-ray binaries are characterised by spending most of their life in a state of calmness, but occasionally entering eruption stages, during which the rate of accretion of matter toward the black hole increases sharply," explains Jesús Corral-Santana, an astrophysicist from the IAC who led the work published in the Monthly Notices of the Royal Astronomical Society (MNRAS). Neutron stars as well as black holes are the remains left by a massive star after its death. Most of the known neutron stars have a mass around 1.4 times that of the Sun, though in some cases, values up to over twice the mass of the Sun have been measured. Astronomers believe that when greater than three times the solar mass, neutron stars are not stable, and end up collapsing and forming a black hole. For Corral-Santana, "measuring the mass of compact objects is essential to determine what kind of object it may be. If it's greater than three times the solar mass, it can only be a black hole. We found that XTE J1859+226 has a black hole with a mass more than 5.4 times that of the Sun. It's the definitive confirmation of the existence of a black hole in this object." "With this result we add a new piece to the study of the mass distribution of black holes. The shape of this distribution has very important implications for our knowledge about the death of massive stars, the formation of black holes, and the evolution of X-ray binary systems," the IAC astrophysicist adds. Twelve years of observation: measuring the visible and the invisible The team of astrophysicists at the IAC had not lost track of the stellar object since it entered an eruption stage in 1999, when they started to set up observation campaigns to follow its evolution. The researchers have combined the photometric measurements from the Isaac Newton Telescope (INT) and the William Herschel Telescope (WHT) in 2000, and those from the Nordic Optical Telescope (NOT) in 2008, with the spectroscopy carried out with the GTC in 2010, the first ever published for this particular object. "Due to the low brightness of the system under observation, we needed 10-metre telescopes to be able to obtain spectra. In this sense, having been able to make our observations with the GTC has been decisive," Corral-Santana emphasises. The measurements at the GTC were carried out with the OSIRIS instrument, which may be used as a camera or as a spectrograph in the visible range.
The spectrograph decomposes the light emitted by a star into its different frequencies and allows the detection of lines corresponding to the different chemical elements present in its atmosphere. These lines provide information about the physical properties of the star and its movement. The photometric measurements allowed the orbital period of the binary (6.6 hours) to be determined, while the spectroscopic data also provided information about the speed of the star's orbital movement around the black hole. The combination of both of these parameters proved to be vital for calculating the mass of the black hole. The Gran Telescopio Canarias (GTC), located at the Roque de los Muchachos Observatory (in La Palma, Canary Islands), is the largest optical-infrared telescope in the world, with a 10.4 metre diameter mirror. Reference: J. M. Corral-Santana, J. Casares, T. Shahbaz, C. Zurita, I. G. Martínez-Pais, P. Rodríguez-Gil. Evidence for a black hole in the X-ray transient XTE J1859+226. Monthly Notices of the Royal Astronomical Society: Letters, 2011; DOI: 10.1111/j.1745-3933.2011.01022.x
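The standard way to turn an orbital period and a radial-velocity amplitude into a lower limit on a compact object's mass is the binary mass function, f(M) = P K^3 / (2 pi G). The sketch below shows the arithmetic in Python; only the 6.6-hour period comes from the article, and the radial-velocity semi-amplitude used here is an illustrative placeholder, not a value taken from the paper.

import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg

def mass_function(period_s, k_ms):
    """Binary mass function f(M) = P K^3 / (2 pi G), a strict lower
    limit on the mass of the unseen compact object."""
    return period_s * k_ms**3 / (2 * math.pi * G)

period = 6.6 * 3600    # orbital period from the article, in seconds
k_donor = 540e3        # hypothetical radial-velocity semi-amplitude, m/s

f_m = mass_function(period, k_donor) / M_SUN
print(f"mass function ~ {f_m:.1f} solar masses")

Because the true mass of the compact object is always at least as large as the mass function, any value above roughly three solar masses already rules out a neutron star, which is the logic the article describes.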
<urn:uuid:95b03ccb-d91f-4a44-ad62-4dd16103bb5c>
3.359375
1,069
News (Org.)
Science & Tech.
54.529718
503
The Programming Language LISP: A Language for Symbolic Computation through the Processing of Lists
There are primarily two computer languages used in artificial intelligence work, LISP and PROLOG. LISP, which is short for List Processing, was created by John McCarthy of Stanford University. It looks klutzy but it is based upon the lambda calculus and works quite well for computation associated with artificial intelligence. PROLOG has an elegant formulation but it does not have the range of application that LISP has. When the Japanese formulated the Fifth Generation project, they chose PROLOG over LISP as its programming language. This was perhaps one of the factors that contributed to the failure of the Fifth Generation project.
<urn:uuid:bfb69420-bb97-4807-ab29-878736b74ff1>
3.21875
154
Knowledge Article
Software Dev.
27.653437
504
A Field Guide to Supernova Spectra
Both types exhibit a wide variety of subclasses. Type Ia is of no interest because these stars don't emit neutrinos. Types Ib and Ic are thought to undergo core collapse like Type II supernovae and, therefore, should emit neutrinos. As Maurice Gavin explains in "The Revival of Amateur Spectroscopy", low-resolution spectra of objects as faint as magnitude 13 or thereabouts are accessible to modest amateur equipment. (A few superposed 20-minute exposures with a 12-inch telescope or so should produce an adequate image.) But what will supernova spectra look like, especially shortly after the outburst begins, as captured by small telescopes and low-resolution spectrographs? Here's your field guide. To prepare it, we started with high-resolution, calibrated spectra supplied by Alexei Filippenko (University of California, Berkeley). Then, to simulate Gavin's CCD results, we degraded the spectra to a resolution of 50 angstroms per pixel. Finally, and with dramatic results, we changed the intensity along each spectrum to reflect variations in the unfiltered sensitivity of popular CCD chips, the KAF-0400 from Kodak and the ICX055BL from Sony. Thus, what you see here is what you will get! (Astrophotographers using panchromatic emulsions will record spectra that look much like the originals.)
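As a rough illustration of the procedure described above (rebinning a calibrated spectrum to about 50 angstroms per pixel and weighting it by a CCD's spectral response), here is a minimal Python sketch. The wavelength grid, stand-in spectrum, and response curve are placeholders for illustration only, not the actual Filippenko data or the published KAF-0400/ICX055BL sensitivity curves.

import numpy as np

def degrade_spectrum(wavelength, flux, pixel_width=50.0):
    """Rebin a high-resolution spectrum onto a coarse grid
    (pixel_width in angstroms), averaging the flux in each bin."""
    bins = np.arange(wavelength.min(), wavelength.max() + pixel_width, pixel_width)
    idx = np.digitize(wavelength, bins)
    binned = np.array([flux[idx == i].mean() for i in range(1, len(bins))])
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, binned

# Placeholder high-resolution spectrum (wavelength in angstroms, arbitrary flux).
wl = np.linspace(3500.0, 9000.0, 5000)
fl = 1.0 + 0.3 * np.sin(wl / 150.0)   # stand-in for a real supernova spectrum

# Placeholder unfiltered CCD response curve, peaking near 6500 angstroms.
response = np.exp(-0.5 * ((wl - 6500.0) / 1500.0) ** 2)

wl_lo, fl_lo = degrade_spectrum(wl, fl * response)
print(wl_lo[:3], fl_lo[:3])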
<urn:uuid:0144adaf-17ca-4b41-b201-32a6d62c4484>
3.171875
294
Tutorial
Science & Tech.
31.275975
505
Christmas Lights Powered By Poop: Research At UC Denver Proves Viability Of Waste As Energy Source
A small lighted Christmas tree in a UC Denver laboratory proves the practicality of a novel renewable energy source, and points to its enormous potential. Jason Ren, an assistant professor of civil engineering, calls it "bug power," referring to the millions of bacteria that help generate electricity from wastewater. The process creates two desirable byproducts. "Those bacteria are able to consume the waste and produce electricity as well as clean water," Ren said. Bacteria in the microbial fuel cells essentially eat the waste and give off electrons in the process. Those electrons are then captured by a graphite brush. Also, Ren recently discovered that salt water could be turned to fresh water as a third, simultaneous function. "Electricity on one side, treating wastewater on the other side, while desalinating sea water in the middle," Ren said, pointing to a small three-chambered reactor. "I think it's pretty promising," said Jae-Do Park, an assistant professor of electrical engineering. He is working to make the electricity functional. "To harvest the energy from the fuel cell in the most efficient way, and at the same time to form that power from the fuel cell into a usable shape," Park said. The glowing LED lights on the laboratory Christmas tree are proof that it's possible to turn poop into power using bacteria. The microbial fuel cell research is gaining attention from high places. The Environmental Protection Agency and the U.S. Navy have both provided grants to help advance the technology and its applications.
<urn:uuid:232a8cec-256a-4b73-9a10-0b38bd15f4a9>
3.09375
327
News Article
Science & Tech.
39.570778
506
I teach computer classes for a living to corporate clients of all levels. After 2 years of teaching, I have learned a lot about communication between people of various levels of computer experience. This tutorial assumes that you have no prior programming experience, but that you have created your own HTML pages. If you find this tutorial helpful, please let me know (it's my only reward). Also, links are graciously accepted. Actually, the two languages (Java and JavaScript) have almost nothing in common except for the name. Although Java is technically an interpreted programming language, it is coded in a similar fashion to C++, with separate header and class files, compiled together prior to execution. It is powerful enough to write major applications and insert them in a web page as a special object called an "applet." Java has been generating a lot of excitement because of its unique ability to run the same program on IBM, Mac, and Unix computers. Java is not considered an easy-to-use language for non-programmers.
What is Object Oriented Programming?
OOP is a programming technique (note: not a language structure - you don't even need an object-oriented language to program in an object-oriented fashion) designed to simplify complicated programming concepts. In essence, object-oriented programming revolves around the idea of user- and system-defined chunks of data, and controlled means of accessing and modifying those chunks. Object-oriented programming consists of Objects, Methods and Properties. An object is basically a black box which stores some information. It may have a way for you to read that information and a way for you to write to, or change, that information. It may also have other less obvious ways of interacting with the information. Some of the information in the object may actually be directly accessible; other information may require you to use a method to access it - perhaps because the way the information is stored internally is of no use to you, or because only certain things can be written into that information space and the object needs to check that you're not going outside those limits. The directly accessible bits of information in the object are its properties. The difference between data accessed via properties and data accessed via methods is that with properties, you see exactly what you're doing to the object; with methods, unless you created the object yourself, you just see the effects of what you're doing.
Objects and Properties
Your web page document is an object. Any table, form, button, image, or link on your page is also an object. Each object has certain properties (information about the object). For example, the background color of your document is written document.bgcolor. You would change the color of your page to red by writing the line: document.bgcolor="red" The contents (or value) of a textbox named "password" in a form named "entryform" is document.entryform.password.value. Most objects have a certain collection of things that they can do. Different objects can do different things, just as a door can open and close, while a light can turn on and off. A new document is opened with the method document.open(). You can write "Hello World" into a document by typing document.write("Hello World"). open() and write() are both methods of the object: document.
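The object/property/method distinction the tutorial describes is not specific to JavaScript. As a language-neutral illustration, here is the same idea sketched in Python with a toy Document class; this class is a hypothetical stand-in for the tutorial's browser document object, not part of any real API. Properties are directly readable state, while methods are controlled ways of acting on it.

class Document:
    """A toy stand-in for the tutorial's 'document' object."""

    def __init__(self):
        self.bgcolor = "white"   # a property: directly readable and writable
        self._contents = []      # internal state, reached only through methods

    def open(self):
        """A method: clears the document, like document.open()."""
        self._contents = []

    def write(self, text):
        """A method: appends text, like document.write(...)."""
        self._contents.append(text)


doc = Document()
doc.bgcolor = "red"        # changing a property
doc.open()                 # calling methods
doc.write("Hello World")
print(doc.bgcolor, doc._contents)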
<urn:uuid:7772f169-1fe0-4821-9f17-fc1a29f7ccbe>
3.390625
717
Personal Blog
Software Dev.
45.295065
507
Where are we now? Climate "Today"
Before we move on to projections of the future state of our planet's climate, let's take a few looks at the current state of Earth's climate. These graphs show how carbon emissions, atmospheric concentrations of carbon dioxide, and global average temperatures have changed in recent times. This image shows sea surface temperatures (SST) averaged over a whole year (in this case, 2001). Notice how temperatures range from freezing (0° C or 32° F) near the poles to around 30° C (about 86° F) in the tropics. Credits: Image courtesy of Plumbago via Wikipedia, using data from the World Ocean Atlas 2001. Here is Earth's surface air temperature in recent times. This image shows average temperatures for the period from 1961 to 1990. Credits: Image courtesy of Robert A. Rohde and the Global Warming Art project. Average Global Temperature 1940-2005 | All values are in comparison to the 1940-1980 average (green shading). Map at left shows 1995-2005 averages (the orange shaded region on the graph above). Blue points and lines on the graph are annual values; the red line is the 5-year smoothed average. This map (above) shows recent changes in Earth's surface air temperatures. The colors indicate the temperatures in the decade around 2000 as compared to average values from about 40 years earlier. Specifically, the colors compare average temperatures during the years 1995 through 2004 versus the averages from 1940 through 1980. The global average temperature increased about 0.42° C during this time. Credits: Map image courtesy of Robert A. Rohde and the Global Warming Art project. Graph is original artwork by Windows to the Universe staff (Randy Russell) using data from NOAA. (An interactive viewer on the original page offered maps of contemporary global surface air temperature and sea surface temperature, changes in temperature by 2000, and four climate model projections for possible future climate in 2025 and 2095.)
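The comparison behind the map described above is simply a difference of period averages. As a small illustration (using made-up yearly values, not the NOAA series behind the actual graph), an anomaly like the quoted 0.42° C figure is computed like this:

# Hypothetical global mean temperatures (deg C) keyed by year; synthetic data.
temps = {year: 14.0 + 0.01 * (year - 1940) for year in range(1940, 2005)}

def period_mean(data, start, end):
    """Average of the yearly values from start through end, inclusive."""
    years = [y for y in data if start <= y <= end]
    return sum(data[y] for y in years) / len(years)

baseline = period_mean(temps, 1940, 1980)   # reference period
recent = period_mean(temps, 1995, 2004)     # decade around 2000
print(f"anomaly: {recent - baseline:.2f} deg C")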
<urn:uuid:6b4aa9b9-1d46-4bb5-b28c-db5733d68d6b>
3.796875
807
Content Listing
Science & Tech.
48.069262
508
Dark energy is a very sparse, uniform negative pressure that permeates the entire observable universe. It accounts for 70% of the mass/energy in the universe and is responsible for its accelerating rate of expansion. Dark energy is unlike the energy we are familiar with because it is not concentrated locally, as is the case with stars and galaxies, manifestations of conventional matter and energy. There are many other important differences between conventional energy and dark energy, which physicists continue to investigate. The exact form or mechanism of operation of dark energy is unknown. In this respect, it is similar to its cousin, dark matter, which can only be observed by the influence it has on normal matter and energy. There are two major theories for the form of dark energy, although one is more prominent than the other. The first theory, quintessence, describes the dark energy as a fluctuating field that changes its intensity based on location. The second theory, that of a cosmological constant, describes dark energy as constant and uniform. It is this second theory that is believed by most physicists and forms the basis of the Lambda-CDM model, the prevailing model of the structure of the cosmos. The negative pressure of the cosmological constant is thought to originate from vacuum fluctuations at extremely small scales in all space. So-called virtual particles are continuously created and destroyed in this vacuum, creating a quantum foam that itself has energy. The existence of dark energy has implications for the ultimate fate of the universe. If dark energy is an intrinsic property of space, as it looks to be, then it will continue to exist indefinitely. If dark energy is the cause of the universe's accelerating expansion, then it will also be the cause of reducing the average density of any parcel of space in the long run. As the universe grows more and more sparse, it will also grow more cold and hostile to life. Therefore, dark energy can justifiably be blamed for bringing on the "Heat Death" of the universe.
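One way to see why a cosmological constant eventually dominates, as the article describes, is to compare how energy densities scale with the expansion of space: matter thins out as the cube of the scale factor, while a cosmological constant does not dilute at all. The sketch below uses round present-day density fractions of roughly 0.3 matter and 0.7 dark energy, consistent with the article's "70%" figure; these are illustrative round numbers, not precise measured values.

# Present-day density fractions (approximate round numbers).
OMEGA_MATTER = 0.3
OMEGA_LAMBDA = 0.7

def density_fractions(a):
    """Relative density shares when the universe has expanded by factor a.
    Matter dilutes as a**-3; a cosmological constant stays fixed."""
    matter = OMEGA_MATTER * a ** -3
    lam = OMEGA_LAMBDA
    total = matter + lam
    return matter / total, lam / total

for a in (0.5, 1.0, 2.0, 4.0):
    m, l = density_fractions(a)
    print(f"scale factor {a}: matter {m:.2f}, dark energy {l:.2f}")

As the scale factor grows, the dark-energy share approaches one, which is the ever sparser, colder future the article sketches.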
<urn:uuid:1fd0b34e-c59a-4cd0-aff5-813ea66ee1be>
3.296875
454
Personal Blog
Science & Tech.
38.134331
509
Galaxy Cluster Takes It to the Extreme Marshall Space Flight Center, Huntsville, Ala. Chandra X-ray Center, Cambridge, Mass. News release: 07-065 Evidence for an awesome upheaval in a massive galaxy cluster was discovered in an image made by NASA's Chandra X-ray Observatory. The origin of a bright arc of ferociously hot gas extending over two million light years requires one of the most energetic events ever detected. The cluster of galaxies is filled with tenuous gas at 170 million degrees Celsius that is bound by the mass equivalent of a quadrillion, or 1,000 trillion, suns. The temperature and mass make this cluster a giant among giants. "The huge feature detected in the cluster, combined with the high temperature, points to an exceptionally dramatic event in the nearby Universe," said Ralph Kraft of the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Mass., and leader of a team of astronomers involved in this research. "While we're not sure what caused it, we've narrowed it down to a couple of exciting possibilities." The favored explanation for the bright X-ray arc is that two massive galaxy clusters are undergoing a collision at about 4 million miles per hour. Shock waves generated by the violent encounter of the clusters' hot gas clouds could produce a sharp change in pressure along the boundary where the collision is occurring, giving rise to the observed arc-shaped structure, which resembles a titanic weather front. "Although this would be an extreme collision, one of the most powerful ever seen, we think this may be what is going on," said team member Martin Hardcastle, of the University of Hertfordshire, United Kingdom. A problem with the collision theory is that only one peak in the X-ray emission is seen, whereas two are expected. Longer observations with the Chandra and XMM-Newton X-ray observatories should help determine how serious this problem is for the collision hypothesis. Another possible explanation is that the disturbance was caused by an outburst generated by the infall of matter into a supermassive black hole located in a central galaxy. The black hole inhales much of the matter but expels some of it outward in a pair of high-speed jets, heating and pushing aside the surrounding gas. Such events are known to occur in this cluster. The galaxy 3C438 in the central region of the cluster is known to be a powerful source of explosive activity, which is presumably due to a central supermassive black hole. But the energy in these outbursts is not nearly large enough to explain the Chandra data. "If this event was an outburst from a supermassive black hole, then it's by far the most powerful one ever seen," said team member Bill Forman, also of CfA. The phenomenal amount of energy involved implies a very large amount of mass would have been swallowed by the black hole, about 30 billion times the Sun's mass over a period of 200 million years. The authors consider this rate of black hole growth implausible. "These values have never been seen before and, truthfully, are hard to believe," said Kraft. These results were presented at the American Astronomical Society meeting in Honolulu, HI, and will appear in an upcoming issue of The Astrophysical Journal. NASA's Marshall Space Flight Center, Huntsville, Ala., manages the Chandra program for the agency's Science Mission Directorate. The Smithsonian Astrophysical Observatory controls science and flight operations from the Chandra X-ray Center in Cambridge, Mass. Additional information and images are available at:
<urn:uuid:f7e616a3-eaf8-4fa7-aed6-f3e3866c05f4>
3.46875
744
News (Org.)
Science & Tech.
38.870096
510
by Gregory McNamee Talk about your worm's-eye view of the world. From time to time, I am pleased in this column to announce the discovery of some hitherto unknown species, or the rediscovery of one thought to have disappeared. An international team of scientists has done this one better, announcing the discovery of an entirely new phylum comprising an ocean-dwelling flatworm called Xenoturbella and its kin, collectively the acoelomorphs. Interestingly, these creatures seem to be backward-evolving: their ancestors had gill slits and guts, but the current acoelomorphic configuration lacks them. As researcher Maximilian Telford of University College London puts it, "We've got these very simple worms nested right in the middle of the complex animals. How did they end up so simple? They must have lost a lot of complexity." * * * If in the course of evolution you decided to lose your ears, you would have good reason. The world is a noisy place, thanks to ever-busy humans, and it's getting noisier. In response, many species of animals are getting noisier themselves in an effort to be heard, a process, notes Rose Eveleth in Scientific American, called the Lombard effect. Right whales and house finches, for their parts, are calling at different frequencies to get around shipping and urban noise. As Eveleth writes of animals in her provocative piece, "Many of them are doing the vocal equivalent of wandering around asking, 'Can you hear me now?' And increasingly, the answer is no." * * * Gibbons make a fair amount of noise themselves—and perhaps that stands to reason, given that, next to the great apes, they're our closest living relatives. That noise is more complex than you might think. Indeed, report researchers from the German Primate Center in Göttingen, the crested gibbons of Southeast Asia have distinctive regional accents. These accents suggest both familial typings, as well as the ancient migration of the species from a location to the north of their current range to points farther south. * * * A new phylum is discovered, but a current species declines. That, sadly, is the way of this noisy world. Scottish scientists, reports the BBC's Highlands and Islands service, are documenting the decline of the common scoter, a kind of duck, in the islands to the north of the country. The scientists are now studying the effects of climate change, which has implications in predation and in food supply. Says one, "We believe climate change may be a factor because warmer winters and springs could lead to aquatic insects such as mayflies and caddis flies hatching earlier in the season and not being available to the scoter ducklings when they hatch out themselves. And warmer winters may, over time, lead to more predators surviving and that could make an impact." * * * Homer Simpson, his son, Bart, their kin, and the good citizens of Springfield are odd ducks one and all. They're cartoons, after all, so they're supposed to be goofy. It's worth noting, though, that the Homeric lineup lives in the shadow of a nuclear reactor, the river is full of three-headed fish, and the night sky glows unnaturally, all reasons to think that something other than mere cartoonery might be at play.
It’s also worth observing, then, that researchers at the University of South Carolina’s Chernobyl Research Initiative have concluded that the offspring of 48 species of birds born in the vicinity of that vast Ukrainian accident site have smaller brain size (by 5 percent, on average) than birds born elsewhere, and that this correlates with both reduced cognitive ability and heightened mortality. This mutation seems to be occurring at relatively low doses of radiation, further correlating with the widespread difficulties of children born in the northern Ukraine since the 1986 disaster, who, the researchers’ report maintains, “have higher rates of neural tube defects and related neurological disorders than other children in uncontaminated regions of the Ukraine and Europe.” Champions of nuclear power, take note.
<urn:uuid:bfb3a64f-f9a8-41a6-a406-20c8e02d836b>
3.171875
883
Personal Blog
Science & Tech.
40.439699
511
Chronometric Techniques–Part II Most of the chronometric dating methods in use today are radiometric. That is to say, they are based on knowledge of the rate at which certain radioactive isotopes within dating samples decay or the rate of other cumulative changes in atoms resulting from radioactivity. Isotopes are specific forms of elements. The various isotopes of the same element differ in terms of atomic mass but have the same atomic number. In other words, they differ in the number of neutrons in their nuclei but have the same number of protons. The spontaneous decay of radioactive elements occurs at different rates, depending on the specific isotope. These rates are stated in terms of half-lives. One half-life is the amount of time required for ½ of the original atoms in a sample to decay. Over the second half-life, ½ of the atoms remaining decay, which leaves ¼ of the original quantity, and so on. In other words, the change in numbers of atoms follows a geometric scale as illustrated by the graph below. (Graph: the red curve shows the geometric progression of atomic decay.) The decay of atomic nuclei provides us with a reliable clock that is unaffected by normal forces in nature. The rate will not be changed by intense heat, cold, pressure, or moisture. The most commonly used radiometric dating method is radiocarbon dating. It is also called carbon-14 and C-14 dating. This technique is used to date the remains of organic materials. Dating samples are usually charcoal, wood, bone, or shell, but any tissue that was ever alive can be dated. Radiocarbon dating is based on the fact that cosmic radiation from space constantly bombards our planet. As cosmic rays pass through the atmosphere, they occasionally collide with gas atoms resulting in the release of neutrons. When the nucleus of a nitrogen (14N) atom in the atmosphere captures one of these neutrons, the atom subsequently changes into carbon-14 (14C) after the release of a proton. The carbon-14 quickly bonds chemically with atmospheric oxygen to form carbon dioxide gas. Carbon-14 is a rare, unstable form of carbon. Only one in a trillion carbon atoms in the atmosphere is carbon-14. The majority are carbon-12 (98.99%) and carbon-13 (1.1%). From a chemical standpoint, all of these isotopes of carbon behave exactly the same. Carbon dioxide in the atmosphere drifts down to the earth's surface where much of it is taken in by green growing plants, and the carbon is used to build new cells by photosynthesis. Animals eat plants or other animals that have eaten them. Through this process, a small amount of carbon-14 spreads through all living things and is incorporated into their proteins and other organic molecules. (Diagram: the formation of carbon-14 in the atmosphere and its entrance into the food chain.) As long as an organism is alive, it takes in carbon-14 and the other carbon isotopes in the same ratio as exists in the atmosphere. Following death, however, no new carbon is consumed. Progressively through time, the carbon-14 atoms decay and once again become nitrogen-14. As a result, there is a changing ratio of carbon-14 to the more atomically stable carbon-12 and carbon-13 in the dead tissue. That rate of change is determined by the half-life of carbon-14, which is 5730 ± 40 years. Because of this relatively rapid half-life, there is only about 3% of the original carbon-14 in a sample remaining after 30,000 years. Beyond 40-50,000 years, there usually is not enough left to measure with conventional laboratory methods.
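The half-life arithmetic described above translates directly into an age estimate: the measured fraction of carbon-14 remaining fixes the number of elapsed half-lives. A minimal Python sketch of that calculation follows; it is illustrative only, since real laboratories also apply the calibration curves and corrections discussed below.

import math

HALF_LIFE_C14 = 5730.0   # years, as given in the text (5730 +/- 40)

def radiocarbon_age(fraction_remaining):
    """Age in years from the fraction of the original carbon-14 left.
    fraction = (1/2) ** (age / half-life), solved here for age."""
    return HALF_LIFE_C14 * math.log(fraction_remaining) / math.log(0.5)

print(radiocarbon_age(0.5))    # one half-life: 5,730 years
print(radiocarbon_age(0.25))   # two half-lives: 11,460 years
print(radiocarbon_age(0.03))   # about 3% left: roughly 29,000 years, close to the text's figure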
Radioactive decay rate for carbon-14 (N = the original number of atoms):

Half-lives   Years past   C-14 atoms   C-12 atoms
0            0            1 N          1 N
1            5,730        1/2 N        1 N
2            11,460       1/4 N        1 N
3            17,190       1/8 N        1 N
4            22,920       1/16 N       1 N
5            28,650       1/32 N       1 N
6            34,380       1/64 N       1 N
7            40,110       1/128 N      1 N

The conventional radiocarbon dating method involves burning a sample in a closed tube containing oxygen. The carbon-containing gas that is produced is then cooled to a liquid state and placed in a lead-shielded box with a sensitive Geiger counter. This instrument registers the radioactivity of the carbon-14 atoms. Specifically, it detects the relatively weak beta particles released when carbon-14 nuclei decay. The age of a sample is determined by the number of decays recorded over a set period of time. Older samples have less carbon-14 remaining and, consequently, less frequent decays. Knowing the half-life of carbon-14 allows the calculation of a sample's age. (Photo: a radiocarbon sample being prepared for dating with the AMS technique.) A relatively new variation of the radiocarbon dating method utilizes an accelerator mass spectrometer, which is a device usually used by physicists to measure the abundance of very rare radioactive isotopes. When used for dating, this AMS method involves actually counting individual carbon-14 atoms. This allows the dating of much older and smaller samples but at a far higher cost. Although organic materials as old as 100,000 years can potentially be dated with AMS, dates older than 60,000 years are still rare. (Graph: radiocarbon and tree-ring date comparisons made by Hans Suess provide needed data to make radiocarbon dates more reliable.) Paleoanthropologists and archaeologists must always be aware of possible radiocarbon sample contamination that could result in inaccurate dates. Such contamination can occur if a sample is exposed to carbon compounds in exhaust gases produced by factories and motor vehicles burning fossil fuels such as coal or gasoline. The result is radiocarbon dates that are too old. This has been called the Autobahn effect, named after the German high speed roadway system. Archaeologists in that country first noted this source of contamination when samples found near the Autobahn were dated. The effect of global burning of fossil fuels on radiocarbon dates was verified and calibrated by Hans Suess of the University of California, San Diego, when he radiocarbon dated bristlecone pine tree growth rings that were of known chronometric ages. Subsequently, it is also called the Suess effect. Other kinds of sample contamination can cause carbon-14 dates to be too young. This can occur if the sample is impregnated with tobacco smoke or oils from a careless researcher's hands. This is now well known and is easily avoided during excavation. Still another potential source of error in radiocarbon dating that is adjusted for stems from the assumption that cosmic radiation enters our planet's atmosphere at a constant rate. In fact, the rate changes slightly through time, resulting in varying amounts of carbon-14 being created. This has become known as the de Vries effect because of its discovery by the Dutch physicist Hessel de Vries. All of these potential sources of error in radiocarbon dating are now well understood and compensating corrections are made so that the dates are reliable. There are a number of other radiometric dating systems in use today that can provide dates for much older sites than those datable by radiocarbon dating. Potassium-argon (K-Ar) dating is one of them.
It is based on the fact that potassium-40 (40K) decays into the gas argon-40 (40Ar) and calcium-40 (40Ca) at a known rate. The half-life of potassium-40 is approximately 1.25 billion years. Measurement of the amount of argon-40 in a sample is the basis for age determination. Dating samples for this technique are geological strata of volcanic origin. While potassium is a very common element in the earth's crust, potassium-40 is a relatively rare isotope of it. However, potassium-40 is usually found in significant amounts in volcanic rock and ash. In addition, any argon that existed prior to the last time the rock was molten will have been driven off by the intense heat. As a result, all of the argon-40 in a volcanic rock sample is assumed to date from that time. When a fossil is sandwiched between two such volcanic deposits, their potassium-argon dates provide a minimum and maximum age. In the example below, the bone must date to sometime between 1.75 and 1.5 million years ago. Using the potassium-argon method to date volcanic ash strata above and below a bone sample in order to determine a minimum and a maximum age Potassium-argon dates usually have comparatively large statistical plus or minus factors. They can be on the order of plus or minus 1/4 million years for a 2 million year old date. This is still acceptable because these dates help us narrow down the time range for a fossil. The use of additional dating methods at the same site allow us to refine it even more. NOTE: the plus or minus number following radiometric dates is not an error factor. Rather, it is a probability statement. For instance, a date of 100,000 ± 5,000 years ago means that there is a high probability the date is in the range of 95,000 and 105,000 years ago and most likely is around 100,000. Radiometric dates, like all measurements in science, are close statistical approximations rather than absolutes. This will always be true due to the finite limits of measuring equipment. This does not mean that radiometric dates or any other scientific measurements are unreliable. Potassium-argon dating has become a valuable tool for human fossil hunters, especially those working in East Africa. Theoretically it can be used for samples that date from the beginning of the earth (4.54 billion years) down to 100,000 years ago or even more recently. Paleoanthropologists use it mostly to date sites in the 1 to 5 million year old range. This is the critical time period during which humans evolved from their ape ancestors. A relatively new technique related to potassium-argon dating compares the ratios of argon-40 to argon-39 in volcanic rock. This provides more accurate dates for volcanic deposits and allows the use of smaller samples. Fission Track Dating Another radiometric method that is used for samples from early human sites is fission track dating. This is based on the fact that a number of crystalline or glass-like minerals, such as obsidian, mica, and zircon crystals, contain trace amounts of uranium-238 (238U), which is an unstable isotope. When atoms of uranium-238 decay, there is a release of energy-charged alpha particles which burn narrow fission tracks, or damage trails, through the glassy material. These can be seen and counted with an optical microscope. Fission tracks in obsidian as they would appear with an optical microscope The number of fission tracks is directly proportional to the amount of time since the glassy material cooled from a molten state. 
Since the half-life of uranium-238 is known to be approximately 4.5 billion years, the chronometric age of a sample can be calculated. This dating method can be used with samples that are as young as a few decades to as old as the earth and beyond. However, paleoanthropologists rarely use it to date sites more than several million years old. With the exception of early historic human-made glass artifacts, the fission track method is usually only employed to date geological strata. Artifacts made out of obsidian and mica are not fission track dated because it would only tell us when the rocks cooled from a molten state, not when they were made into artifacts by our early human ancestors. Thermoluminescence (TL) dating is a radiometric method based on the fact that trace amounts of radioactive atoms, such as uranium and thorium, in some kinds of rock, soil, and clay produce constant low amounts of background ionizing radiation. The atoms of crystalline solids, such as pottery and rock, can be altered by this radiation. Specifically, the electrons of quartz, feldspar, diamond, or calcite crystals can become displaced from their normal positions in atoms and trapped in imperfections in the crystal lattice of the rock or clay molecules. These energy-charged electrons progressively accumulate over time. When a sample is heated to high temperatures in a laboratory, the trapped electrons are released and return to their normal positions in their atoms. This causes them to give off their stored energy in the form of light impulses (photons). This light is referred to as thermoluminescence (literally "heat light"). A similar effect can be brought about by stimulating the sample with infrared light. The intensity of thermoluminescence is directly related to the amount of accumulated changes produced by background radiation, which, in turn, varies with the age of the sample and the amount of trace radioactive elements it contains. (Diagram: thermoluminescence release resulting from rapidly heating a crushed clay sample.) What is actually determined is the amount of elapsed time since the sample had previously been exposed to high temperatures. In the case of a pottery vessel, usually it is the time since it was fired in a kiln. For the clay or rock lining of a hearth or oven, it is the time since the last intense fire burned there. For burned flint, it is the time since it had been heated in a fire to improve its flaking qualities for stone tool making. The effective time range for TL dating is from a few decades back to about 300,000 years, but it is most often used to date things from the last 100,000 years. Theoretically, this technique could date samples as old as the solar system if we could find them. However, the accuracy of TL dating is generally lower than most other radiometric techniques.
Electron Spin Resonance Dating
Another relatively new radiometric dating method related to thermoluminescence is electron spin resonance (ESR). It is also based on the fact that background radiation causes electrons to dislodge from their normal positions in atoms and become trapped in the crystalline lattice of the material. When odd numbers of electrons are separated, there is a measurable change in the magnetic field (or spin) of the atoms. Since the magnetic field progressively changes with time in a predictable way as a result of this process, it provides another atomic clock, or calendar, that can be used for dating purposes.
Unlike thermoluminescence dating, however, the sample is not destroyed with the ESR method. This allows samples to be dated more than once. ESR is used mostly to date calcium carbonate in limestone, coral, fossil teeth, mollusks, and egg shells. It also can date quartz and flint. Paleoanthropologists have used ESR mostly to date samples from the last 300,000 years. However, it potentially could be used for much older samples.
Comparison of the Time Ranges for Dating Methods
Whenever possible, paleoanthropologists collect as many dating samples from an ancient human occupation site as possible and employ a variety of chronometric dating methods. In this way, the confidence level of the dating is significantly increased. The methods that are used depend on the presumed age of the site from which they were excavated. For instance, if a site is believed to be over 100,000 years old, dendrochronology and radiocarbon dating could not be used. However, potassium-argon, fission track, amino acid racemization, thermoluminescence, electron spin resonance, and paleomagnetic dating methods would be considered. (Chart: effective time range of the major chronometric dating methods.) In addition to the likely time range, paleoanthropologists must select dating techniques based on the kinds of datable materials available. Dendrochronology can only date tree-rings. Any organic substances can be used for radiocarbon and amino acid racemization dating. Calcium rich parts of animals such as coral, bones, teeth, mollusks, and egg shells can be dated with the electron spin resonance technique. In addition, ESR can date some non-organic minerals including limestone, quartz, and flint. Burned clay and volcanic deposits are materials used for paleomagnetic dating. Glassy minerals, such as mica, obsidian, and zircon crystals are datable with the fission track method. Pottery and other similar materials containing crystalline solids are usually dated with the thermoluminescence technique. The potassium-argon and argon-argon methods are used to date volcanic rock and ash deposits. Other chronometric dating methods not described here include uranium/thorium dating, oxidizable carbon ratio (OCR) dating, optically stimulated luminescence (OSL) dating, varve analysis, and obsidian hydration dating. Copyright © 1998-2012 by Dennis O'Neil. All rights
<urn:uuid:0e63bd67-645e-4c01-8ec7-e353f79e75fb>
3.71875
3,552
Knowledge Article
Science & Tech.
42.607484
512
[Updated] The "object" that was found in Bermuda waters yesterday [Aug 12] is a scientific glider used to collect marine data which was recently deployed by scientists from the Bermuda Institute of Ocean Sciences [BIOS]. The Harbour Radio Duty Officer said, "Bermuda Radio can confirm that the suspected missile spotted on the crown of Challenger Bank is in fact a scientific glider used to collect marine data. The unit was recently deployed by scientists from BIOS in conjunction with Woods Hole Oceanographic Institution. "The instrument stopped transmitting data around 48 hours ago and was considered lost. BIOS and Woods Hole are keen to recover the unit. Any sighting of the glider should be reported to Bermuda Radio so that retrieval can be arranged," concluded the Duty Officer. The glider had caused great interest last night, with many people trying their hand at guessing what it might be. Some people did guess correctly that it was a scientific object used by BIOS, while many others thought it may have been some form of missile/drone/torpedo/bomb. Update 9.15am: Larry George from the Woods Hole Oceanographic Institution in Massachusetts [website] confirmed that the object is a spray glider/autonomous underwater vehicle which is used to collect information about the ocean and is controlled remotely. The glider was deployed in Bermuda waters on August 10th, and they lost all contact with it on August 11th, and assumed it was lost until they received a phone call late last night. The glider was deployed to record data such as fine ocean currents, with Dr Jong Jin Park and Dr Breck Owens from Woods Hole the scientists in charge of the research. Asked how many of these gliders are in our waters, Mr George confirmed that this is the only one. He explained the glider was marked; however, it was upside down in the water, hence the marking was not visible. He also said the fact the glider was upside down indicated that something had gone wrong with it. As of this writing the glider is still in the waters, but a team from BIOS is heading out this morning to try and recover it.
<urn:uuid:7c5c6d0a-53be-43ba-91ea-44c403d35868>
2.90625
536
News Article
Science & Tech.
42.61995
513
Shooting sulfur particles into the stratosphere to reflect the sun? Dumping iron into the ocean to boost the absorption of carbon dioxide? Could these far-fetched and dangerous-sounding schemes help avert potentially catastrophic effects of climate change, or would they exacerbate conditions on our ever warming planet? These strategies, which involve the deliberate and large-scale intervention in our climate system to moderate global warming, are known as geoengineering. Fantastical as they seem, billionaires Bill Gates, Sir Richard Branson, and others are investing millions of dollars into the geoengineering research of a few leading climate scientists like Ken Caldeira at Stanford. At first, Caldeira thought geoengineering sounded crazy too, but his research showed that it would basically work. If global warming exceeds 2° C, it would be "a prescription for disaster," said NASA scientist James Hansen. To prevent this from happening, we need to cap atmospheric carbon dioxide levels at 350 parts per million; but in March 2012, we reached almost 394.5 ppm and global greenhouse gas emissions continue to rise. Even if we were able to immediately cut greenhouse gas emissions to zero, however, global warming would continue for the foreseeable future because carbon dioxide remains in the atmosphere for several hundred years. Moreover, the international community has failed to reach an agreement that tackles the fundamental problem of controlling carbon emissions and prospects for doing so don't look good. As a result, geoengineering is beginning to sound less like science fiction to some, and more like a possible Plan B. Geoengineering strategies fall into two main categories:
- Solar radiation management, which seeks to reduce the amount of sunlight that reaches Earth by deflecting it or increasing Earth's reflectivity (albedo).
- Carbon dioxide removal, which tries to take carbon dioxide out of the atmosphere.
Solar radiation management includes efforts like white roofs that deflect sunlight, brightening clouds by shooting seawater into them to increase their albedo (salt provides the nuclei that seed the clouds), and controversial strategies based on the cooling effect that can follow major volcanic eruptions. In 1991, Mt. Pinatubo in the Philippines erupted, sending 22 million tons of sulfur dioxide into the stratosphere. The sulfur particles scattered around the globe, deflected sunlight, and cooled Earth by 0.4 to 0.5° C. Solar radiation management would recreate this effect by using balloons, aircraft or cannons to shoot tiny reflective particles like sulfates into the stratosphere to temporarily block sunlight. The 1992 Panel on Policy Implications of Greenhouse Warming calculated that this strategy would cost just pennies per ton of carbon dioxide mitigated. It would also be fast-acting, capable of quickly reducing the impacts of heat stress on crops, resulting in increased productivity since carbon dioxide levels, which boost growth, would remain high. Other solar radiation management ideas include the use of engineered nanoparticles, which could be constructed to ascend high into the atmosphere and keep their shiny side to the sun, and sunshades in space made of mirrors. Solar radiation management would do nothing to address the root cause of global warming—carbon dioxide emissions—or ocean acidification caused by the sea's absorption of excess carbon dioxide.
And while stratospheric aerosols could theoretically produce cooling on a local or global level, they might also create regional problems by affecting rain and snowfall patterns and causing drought. According to Caldeira, a year or two after Mt. Pinatubo, when aerosols dropped from the stratosphere, both the Amazon River and the Ganges had very low flows and droughts occurred. A 2010 study by ETC (Erosion, Technology and Concentration), an international group that opposes geoengineering, states that solar radiation management climate models show a risk of increased drought over Africa, Asia and the Amazon jungle. Putting sulfate particles into the stratosphere could also damage the ozone layer, lead to acid rain and increased ocean acidification, and interfere with solar cells, astronomy and satellites. In addition, solar radiation management techniques carry the risk of a rapid rise in temperature if the program were started then stopped, which would be more dangerous to life on Earth than a gradual temperature rise. Carbon dioxide removal strategies reduce greenhouse gases in the atmosphere, or attempt to manipulate natural processes to remove greenhouse gases indirectly. While they tackle the fundamental problem of carbon emissions, and address ocean acidification, they would require many years to fully take effect. Carbon dioxide removal techniques include tree planting, creating biochar (charcoal) and burying it to increase carbon sequestration, carbon capture and storage, adding carbonate to the ocean to increase carbon dioxide uptake, and capturing carbon from the air. Klaus Lackner, Director of the Earth Institute’s Lenfest Center for Sustainable Energy, is developing an “artificial tree” that removes carbon dioxide from the air. Ocean fertilization is perhaps the most controversial carbon dioxide removal strategy of all. Through photosynthesis, phytoplankton in the ocean absorbs half the carbon dioxide taken up annually by all of Earth’s plants. Ocean fertilization involves depositing nutrients (iron, nitrogen or phosphorus) into areas of the ocean lacking one of these key nutrients to stimulate the growth of phytoplankton and increase the absorption of carbon dioxide, which is then carried to the ocean floor when the phytoplankton die. Critics say ocean fertilization could alter food webs; deplete oxygen at deeper ocean levels; produce eutrophication, dead zones and toxic algal blooms; increase ocean acidification in the deep sea; and impact coral reefs. While the cost of ocean fertilization would be relatively low, Britain’s Royal Society says that none of the various carbon dioxide removal methods assessed have proven to be effective at an affordable cost with acceptable side effects. Most geoengineering research today is being done with climate models and mapping; few field tests have been conducted. The Fund for Innovative Climate and Energy Research, run by David Keith of Harvard and Ken Caldeira and funded by Bill Gates’ personal funds, has given out $4.6 million for research on climate modeling, technical feasibility, governance, potential and risks, but it does not support field-testing methods like solar radiation management and ocean fertilization that would actually interfere with the climate system. ETC argues that geoengineering cannot be tested because in order to truly assess its effect on the climate, it would need to be deployed on a massive scale, which would likely also have massive repercussions. 
Germany, India, Canada, Russia and Britain are studying geoengineering, and more countries will soon be capable of it as well. In 2009, a German-Indian government-sponsored experiment (LOHAFEX) dumped 6.6 tons of iron into 300 square kilometers of the South Atlantic. There was a burst of algae growth, but within two weeks, the algae was eaten by small crustaceans, so less carbon dioxide was absorbed than anticipated. In October 2011, a British project called SPICE (Stratospheric Particle Injection for Climate Engineering) was scheduled to test a delivery system using a tethered balloon and hose to deliver water one kilometer into the sky. It was put on hold due to opposition from environmental groups. Geoengineering opponents cite many risks. Strategies could be ineffective or incomplete. The technology could fall prey to mechanical failure, human error, natural disasters or terrorism, and lead to devastating and/or irreversible disruption of the climate system. Many want to ban geoengineering research for fear it would reduce the imperative to cut greenhouse gas emissions. Scott Barrett, Lenfest-Earth Institute professor of natural resource economics at Columbia University, takes issue with this point. “People worry that if we use geoengineering, we wouldn’t reduce our greenhouse gas emissions. But we’re not reducing them anyway…And given that we have failed to address climate change, I think we’re better off having the possibility of geoengineering…However it does raise the question of do we have the wisdom and institutions to use it wisely?” Governance is perhaps the thorniest aspect of geoengineering. Because geoengineering is a relatively cheap way to address climate change, it is unilateral—rich countries and billionaires could finance it on their own—yet the consequences would be global. Who then should get to control geoengineering, and under what governance? Some strategies would benefit certain countries and harm others, so who would have the right to decide whether, when and how to use it? Geoengineering would likely create winners and losers—should losers be compensated? Could conflicts lead to geoengineering wars? While there are various international treaties, aspects of which could limit some geoengineering experiments, there is no overarching regulatory framework that governs the broad use of geoengineering technology. In October 2010, the U.N. Convention on Biological Diversity adopted a moratorium on geoengineering activities that could threaten biodiversity (the United States has not ratified the convention). ETC is pushing for a comprehensive test ban on geoengineering at Rio+20, the U.N. Conference on Sustainable Development in June. The 2009 geoengineering report by Britain’s Royal Society calls for an international body to review mechanisms that could regulate geoengineering, and for scientific organizations to develop guidelines for research and evaluate benefits and environmental effects. “The central problem for the governance of geoengineering,” the report says, “is that while potential problems can be identified with all geoengineering technologies, these can only be resolved through research, development and demonstration… Ideally, appropriate safeguards would be put in place during the early stages of the development of any new technology.” Barrett, an expert in international agreements, believes a geoengineering agreement should focus on what countries should do and what they can agree upon. 
He contends that an agreement should simply require a country intent on engaging in geoengineering, from field research to larger experiments, to let the world know. This would enable other countries to react or discuss the situation beforehand, make deals or participate, and avoid conflicts. It would also encourage collaborative research and development. Countries would be willing to sign on because they would know that other nations would also have to declare their intentions. If an agreement were too restrictive, or included a ban or veto power, Barrett says, countries that wanted to proceed with geoengineering would simply walk away from the table. Despite the risks and uncertainties of geoengineering, many scientists believe we must study the options to ensure that damaging actions are not taken in haste in the future. The Royal Society recommends that further research and development of geoengineering be undertaken, but that policies also continue to focus on reducing carbon emissions and adaptation. It stresses the importance of placing all concerns about geoengineering in the larger context of climate change impacts that would otherwise be likely to occur anyway and comparing the relative risks and potential benefits. “Imagine some point in the future when things are starting to go very wrong. And turning down the sun would have a good chance of limiting damage. Would you really not want to know if the technology worked and what its side effects were?” Barrett asked. “Even if we were to ban geoengineering today, if things get bad in the future, they’d do it anyway…would you want the future to be ignorant?”
<urn:uuid:9a032602-f263-4c94-a03f-70763337421c>
3.5625
2,322
Personal Blog
Science & Tech.
26.078848
514
Forest Ecosystems: Current Research Regional Fire/Climate Relationships in the Pacific Northwest and Beyond Fire exerts a strong influence on the structure and function of many terrestrial ecosystems. In forested ecosystems, the factors controlling the frequency, intensity, and size of fires are complex and operate at different spatial and temporal scales. Since climate strongly influences most of these factors (such as vegetation structure and fuel moisture), understanding the past and present relationships between climate and fire is essential to developing strategies for managing fire-prone ecosystems in an era of rapid climate change. The influence of climate change and climate variability on fire regimes and large fire events in the Pacific Northwest (PNW) and beyond is the focus of this project. There is mounting evidence that a detectable relationship exists between extreme fire years in the West and Pacific Ocean circulation anomalies. The El Niño/Southern Oscillation (ENSO) influences fire in the Southwest (SW) and the Pacific Decadal Oscillation (PDO) appears to be related to fire in the PNW and Northern Rockies (NR). However, there are reasons to expect that processes driving fire in PNW, SW, and NR are not constant in their relative influence on fire through time or across space and that their differentiation is not stationary through time or across space. - How regionally specific is the relationship between large fire events and precipitation/atmospheric anomalies associated with ENSO and PDO during the modern record? - What do tree-ring and other paleo-records tell us about the temporal variability of the patterns of fire/climate relationships? - How is climate change likely to influence climate/fire relationships given the demonstrated influences of climate variability? Figure 1 A simple model of climate–fire-vegetation linkages. This project emphasizes the mechanisms and variability indicated by (1). For publications on climate impacts on PNW forest ecosystems, please see CIG Publications. Gedalof, Z. 2002. Links between Pacific basin climatic variability and natural systems of the Pacific Northwest. PhD dissertation, School of Forestry, University of Washington, Seattle. Littell, J.S. 2002. Determinants of fire regime variability in lower elevation forests of the northern greater Yellowstone ecosystem. M.S. Thesis, Big Sky Institute/Department of Land Resources and Environmental Sciences, Montana State University, Bozeman. Mote, P.W., W.S. Keeton, and J.F. Franklin. 1999. Decadal variations in forest fire activity in the Pacific Northwest. In Proceedings of the 11th Conference on Applied Climatology, pp. 155-156, Boston, Massachusetts: American Meteorological Society.
<urn:uuid:e4092633-013e-4995-97f5-6212c2dac106>
2.8125
549
Academic Writing
Science & Tech.
27.702762
515
Current models of global climate change predict warmer temperatures will increase the rate that bacteria and other microbes decompose soil organic matter, a scenario that pumps even more heat-trapping carbon into the atmosphere. But a new study led by a University of Georgia researcher shows that while the rate of decomposition increases for a brief period in response to warmer temperatures, elevated levels of decomposition don’t persist. “There is about two and a half times more carbon in the soil than there is in the atmosphere, and the concern right now is that a lot of that carbon is going to end up in the atmosphere,” said lead author Mark Bradford, assistant professor in the UGA Odum School of Ecology. “What our finding suggests is that a positive feedback between warming and a loss of soil carbon to the atmosphere is likely to occur but will be less than currently predicted.” Bradford, whose results appear in the early online edition of the journal Ecology Letters, said the finding helps resolve a long-standing debate about how unseen soil microbes respond to and influence global climate change. Other scientists have noted that the respiration of soil microbes returns to normal after a number of years under heated conditions, but offered competing explanations. Some argued that the microbes consumed so much of the available food under heated conditions that future levels of decomposition were reduced because of food scarcity. Others argued that soil microbes adapted to the changed environment and reduced their respiration accordingly. Bradford and his team, which included researchers from the University of New Hampshire, the Marine Biological Laboratory at Woods Hole, Duke University and Colorado State University, found evidence to support both hypotheses and revealed a third, previously unaccounted for explanation: The abundance of soil microbes decreased under warm conditions. “It is often said that in a handful of dirt, there are somewhere around 10,000 species and millions of individual bacteria and fungi,” said study co-author Matthew Wallenstein, a research scientist at Colorado State University. “Our findings add to the understanding of how complex these systems are and the role they play in feedbacks associated with climate change.” The researchers studied soil microbes at Harvard Forest in Massachusetts, the site of a soil warming experiment that began in 1991. Scientists took soil samples from two plots, one in which buried cables heat the soil to five degrees Celsius above the ambient soil temperature (a condition that is expected to occur around 2100) and a control condition in which cables are buried but not producing heat. In the first set of experiments, the scientists compared microbial respiration in the two groups and found lower rates of decomposition in the heated plots. This finding supported the idea that respiration decreases after a few years of warming, but didn’t explain whether the cause was substrate depletion in the warmer soils or adaptation by the microbes. In the next set of experiments, they added the simple sugar sucrose to both sets of soils to alleviate any food limitation for the microbes. They found that microbes from both conditions increased their respiration, but that the increase was greater in the unheated control soils than in the heated soils. 
“That finding told us that substrate depletion played a role,” Bradford said, “but it also told us that there were other factors involved.” The researchers then measured microbial biomass and found that there were fewer microbes in the heated soils. To test whether thermal adaptation occurred, they measured respiration while keeping temperature constant. They found that respiration rates were indeed lower in the heated versus the control soils, even when adjusting for microbial biomass. Wallenstein pointed out that the study is among the first to demonstrate that microbes, like many plants and animals, can adapt relatively quickly to changes in climate. “This research presents a new challenge to scientists trying to predict effects of climate change on forest ecosystems because it shows that these soil microbial communities are very dynamic,” Wallenstein said. “We cannot simply extrapolate from the short-term responses of soil microbes to climate change, since they may adapt over the longer-term.” Bradford notes that there is still much to be learned about how soil microbes respond to global warming. His team is currently working to understand whether the reduced microbial respiration in heated soils is caused by the adaptation of individual microbes, by shifts in species composition or a combination of the two factors. He warns against minimizing the role of soil microbes in global warming, even though his findings suggest that current models overstate their contribution. “Although our results suggest that the impact of soil microbes on global warming will be less than is currently predicted,” Bradford said, “even a small change in atmospheric carbon is going to alter the way our world works and how our ecosystems function.” The research was funded by the U.S. Department of Energy.
<urn:uuid:dbd336d1-5dba-4cad-aae7-758f35131d59>
3.84375
983
News Article
Science & Tech.
28.006859
516
Weather is what is happening in the atmosphere now, at any place on Earth’s surface. It includes the temperature and whether it is wet and windy, or dry and calm. The Sun provides the energy that drives Earth’s weather. The Sun heats the air in various parts of Earth’s atmosphere by different amounts. Masses of warm and cold air then move from place to place, creating winds. Winds bring sunny, wet, or stormy conditions. People find out the type of weather to expect in a FORECAST. A weather forecast is a prediction of weather conditions over a particular area, either for a few days (called a short-range forecast), or for several weeks (called a long-range forecast). The people who study the weather and make weather forecasts are called meteorologists. Weather forecasts help people to plan—what to wear, when to travel, or which products to stock in supermarkets. Forecasts are especially important for farmers, builders, sailors, and anyone else who works outdoors. Sometimes an accurate forecast may mean the difference between life and death. Meteorologists receive information about air temperature, wind speeds, clouds, and rainfall from over 50,000 weather stations worldwide—on land and on ships and buoys at sea. The data is fed into huge computers that produce charts and forecasts. These are used, with satellite images, to predict the weather.
<urn:uuid:d9fd49f9-92d2-499a-b0f4-fd69dafafd9f>
3.859375
287
Knowledge Article
Science & Tech.
49.664895
517
Jython is distributed as a self-extracting .class file created by LiftOff. To install Jython, open the command line in the directory in which you have placed the jython-21.class file and then type one of the following three lines, depending on your system (see the command sketch at the end of this page). Be sure not to put ".class" at the end of the file name. It can be necessary to set the CLASSPATH to include the current directory.

Which command to use depends on your operating system and Java version. If you have more than one Java installed, you may have to supply an explicit path to the java command. When installing the JDK 1.2 from javasoft, the default is to install both the JDK and a plugin JRE. The plugin JRE is added to your PATH, so running the java jython-21 command will make Jython use the JRE. Specify the full path if you want to use the JDK instead, i.e. c:\Programs\JDK1.2\bin\java -cp . jython-21.

If you do not have a GUI, then add -o dir_to_install_to to the command above. Jython will install to the specified directory without bringing up the graphical installer. E.g. to install all modules to a Jython-2.1 subdirectory in the current directory, add -o Jython-2.1 to the command (again, see the sketch at the end of this page).

After completing installation, you should be able to run Jython by typing jython at the command line.

What Can Go Wrong

You should check out this section if your Jython installation doesn't quite work right. It contains tips for solving the most common problems.

Can't Access Standard Python Modules

Not all of the modules from CPython are available in Jython. Some modules require a C-language dynamic link library that does not exist in Java. Other modules are missing from Jython simply because nobody has needed them before and no one has tested the CPython module with Jython. If you discover that you are missing a module, try copying the .py file from a CPython distribution to a directory on your Jython sys.path. If that works, you are set. If it doesn't work, try asking on the jython-users mailing list. Any other problems with the installation should be reported to jython-dev.

If the installer itself fails, as a workaround you can extract the jython-21.class file manually. The class file is basically a .zip file, and most unzip programs can manage to extract the contents of the class into a directory. After doing that, you must …

Platform Specific Notes

If all else fails, you might find that your problem is unique to your platform, and has a solution mentioned on the Platform Specific Information page.
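The command sketch below is reassembled from the commands quoted elsewhere on this page; the Jython-2.1 directory name and the JDK path are taken from the examples in the text, any extra module-selection arguments are not shown, and the final jython command assumes the launcher script created by the installer is on your PATH:

    java -cp . jython-21                          (graphical install; note no ".class" on the file name)
    java -cp . jython-21 -o Jython-2.1            (console install into a Jython-2.1 subdirectory, no GUI)
    c:\Programs\JDK1.2\bin\java -cp . jython-21   (run the installer with a specific JDK instead of the plugin JRE)
    jython                                        (start the interpreter after installation)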
<urn:uuid:a4d35bee-b13e-4d67-8af4-ae472971bd5d>
2.78125
568
Customer Support
Software Dev.
64.067721
518
Thu October 20, 2011 'Living Fossils' Just A Branch On Cycad Family Tree Although dinosaurs died out 65 million years ago, there are still thought to be a few species left over from those days. Plants called cycads are among these rare "living fossils" — they have remained pretty much unchanged for more than 300 million years, but a study in Science magazine suggests that glamorous title may not be deserved. There's no time machine in Washington, D.C., but Harvard botanist Sarah Mathews leads me to what's arguably the next best thing — a room made of glass in the U.S. Botanic Garden, just downhill from the U.S. Capitol. The sign says "The Garden Primeval — The First Land Plants." Right away we see something that looks like a fern growing out of the top of a palm trunk. But it's not a fern or a palm. In fact, it's more closely related to a pine tree. Cycads produce seeds but not flowers. They evolved along with dinosaurs, which presumably munched them for lunch. So they've earned the title living fossil. But "that assumption began to break down as we began sequencing DNA," Mathews says. She and her colleagues — notably Nathalie Nagalingum from the Royal Botanic Gardens in Sydney — have used that DNA to reconstruct the "family tree" of cycads. They find that the "trunk" of the family tree may reach back 300 million years, but the "branches," today's 300 species, actually burst onto the scene about 12 million years ago. "And then it looks like around the world on multiple continents, cycads became more species-rich," Mathews says. What caused that sudden burst of new species? "That's the really fun puzzle of course," she says. It's probably not a coincidence that other plants also put forth a burst of new species around that time, including cacti, ice-plants and agave. Mathews suspects climate change played a role. "There was drying out and cooling going on, globally," she says. This research is part of a broader effort to understand how all plants — most notably flowering plants — evolved. That story is gradually taking shape as scientists study more and more of the DNA from plants. Of course, you might argue this research has some broader philosophical repercussions as well. By finding that these species of cycads are just 12 million years old — and so were not survivors from the days of the dinosaurs — has Mathew's team demoted these species from their lofty status as living fossils? She says not. "I think that we've actually found some interesting patterns for people who didn't think much about cycads before," she says. What about people who think a lot about cycads? Bart Schutzman edits the Cycad Society's journal (global circulation: 500 copies). He's attracted to these plants because he feels a primal bond with this ancient species. And he says the news does not rock his world. Today's cycads still predate human species, and by a lot. "What's the difference between old, older and very old, and very, very old? I mean they're all still very old," he says with a chuckle. As for the moniker, living fossil? "It won't stop people from glamorizing the cycads as the living fossils because their lineage extends so far back," Schutzman says. So here's a little good news from Washington: A walk through the "Garden Primeval" greenhouse still offers a reasonable glimpse of foliage from the days of the dinosaurs, though the species themselves don't have quite the same bragging rights.
<urn:uuid:9fef6959-0818-4cac-94d3-4ad60e69186a>
3.4375
782
News Article
Science & Tech.
65.919013
519
Faces, Vertices, and Edges of Cylinders, Cones, and Spheres Date: 12/28/2003 at 17:21:33 From: Cara Subject: Characteristics of polyhedra I need to know how many faces, vertices, and edges do cylinders, cones, and spheres have? Logically I would say that a sphere has 1 face, 0 vertices and 0 edges. Problem: a face is flat, sphere is not flat. Secondly this does not satisfy Euler's formula v - e + f = 2. I would say a cone has 2 faces, 1 edge, and 1 vertex. Problem: while this does satisfy Euler, it does not satisfy the definitions. Date: 12/28/2003 at 20:41:45 From: Doctor Peterson Subject: Re: Characteristics of polyhedra Hi, Cara. To start, take a look at this page: Cone, Cylinder Edges? http://mathforum.org/library/drmath/sets/select/dm_cone_edge.html Properly speaking, Euler's formula does not apply to a surface, but to a network on a surface, which must meet certain criteria. The "natural" faces and edges for these surfaces, or those determined by applying the definitions used for polyhedra, do not meet these criteria. Just taking the natural parts of a cone, as you say, it has one presumed vertex, the apex; one edge, the circle at the base; and two faces, one flat and one curved. (I say "presumed" because the apex is not really a vertex in the usual sense of a place where two or more edges meet, but it is a point that stands out.) This gives 1 - 1 + 2 = 2 So it does fit the formula; but there is no reason it should, really, because it doesn't fit the requirements for the theorem, namely that the graph should be equivalent to a polyhedron. Each face must be simply connected (able to shrink to a disk, with no "holes" in it), and likewise each edge must be like a segment (not a circle). One of our "natural" faces has a "vertex" in the middle of it, so it is not simply connected; and the "edge" has no ends, so it doesn't fit either. These errors just happen to cancel one another out. As another example, take a cylinder, which in its natural state has no vertices, two "edges", and three "faces": 0 - 2 + 3 = 1 It doesn't work, and the theorem doesn't claim it should. In each case you can "fix" the graph by adding one segment from top to bottom. In the cone, this gives one extra vertex (on the base), and one extra edge, so the formula still holds. In the cylinder, it gives two new vertices and one extra edge, and the formula becomes correct. What do you have to do to "fix" the sphere? Here is a deeper discussion of these ideas: Euler's Formula Applied to a Torus http://mathforum.org/library/drmath/view/51815.html If you have any further questions, feel free to write back. - Doctor Peterson, The Math Forum http://mathforum.org/dr.math/ Search the Dr. Math Library: Ask Dr. MathTM © 1994-2013 The Math Forum
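A compact tally of the counts discussed in the reply above; the rows marked "with added segment" assume the single top-to-bottom segment Dr. Peterson describes, and are a summary rather than part of the original exchange:

\[
\begin{array}{lll}
\text{Cone (natural parts):} & V - E + F = 1 - 1 + 2 = 2 & \text{(holds, but only by accident)}\\
\text{Cone, with added segment:} & 2 - 2 + 2 = 2 & \\
\text{Cylinder (natural parts):} & 0 - 2 + 3 = 1 & \text{(fails)}\\
\text{Cylinder, with added segment:} & 2 - 3 + 3 = 2 &
\end{array}
\]

Both "fixed" graphs satisfy the hypotheses of Euler's theorem, which is why the count comes out to 2 in each case.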
<urn:uuid:b7a43f40-aa4a-4a8e-bff9-be77db0b79c6>
3.46875
715
Comment Section
Science & Tech.
66.453939
520
for National Geographic News If a worker ant dares to reproduce in the presence of the queen, her sisters will smell her attempt and attack, according to a new study. Typically, only queens produce offspring in an ant colony, and males die after mating. The sons and the daughter queens fly away, with hopes of reproducing elsewhere, while the worker daughters stay on to build the colony and care for the next generation. These worker ants are biologically capable of a type of parthenogenesis, the process that allows a female to produce offspring without a mate. When they try, however, they produce chemicals called pheromones that their sisters detect with antennas. "It's basically smell, but not the smell we know," said study co-author Jürgen Liebig of Arizona State University. If the colony lacks a queen, workers are permitted to have their own babies, Liebig explained. But when a queen is present, only she is allowed to produce the pheromone that signals fertility status. If a worker tries to "cheat," her sisters will physically restrain the disobedient ant from successfully reproducing. (Related: "Ants Practice Nepotism, Study Finds" [February 26, 2003].) The research was published online January 8 in the journal Current Biology. Scent of a Woman Ant Previous studies showed a correlation between ant's reproductive policing behavior and these pheromones, Liebig said, so there was strong reason to believe the chemicals were tipping them off. "The problem was that nobody could ever show it," he said. Liebig's team studied the ant species Aphaenogaster cockerelli because it uses a simple version of the compound that the scientists could easily obtain. SOURCES AND RELATED WEB SITES
<urn:uuid:2011961b-4bfb-4d0e-8634-1a01b40cf290>
3.4375
374
News Article
Science & Tech.
38.015316
521
During the 1980s the number of babies born annually was around 12. The total twice fell sharply in the 1990s until just a single calf appeared in 2000. Since then, the average has risen to more than 20 calves a year. Yet this remains 30 percent below the whales' potential rate of reproduction. Why? If scientists are to guide the species' salvation, they need more data and more answers. Fast. One August morning in 2006, when the sea was a sheet of dimpled satin shot through with silver threads, I joined Scott Kraus, the New England Aquarium's vice president of research, and Rosalind Rolland, a veterinarian and senior scientist with the aquarium, on an unlikely quest in the Bay of Fundy. When leviathans rose in the distance through the sea's shimmering skin, Kraus steered the boat downwind of where they had briefly surfaced, handed me a data sheet to log our movements, and zigzagged into the faint breeze. Rolland moved onto the bow. Beside her was Fargo, the world's premier whale-poop-sniffing dog. Fargo began to pace from starboard to port, nostrils flaring. Rolland focused on the rottweiler's tail. If it began to move, it would mean he had picked up a scent—and he could do that a nautical mile away. Twitch … Twitch … Wag, wag. "Starboard," Rolland called to Kraus. "A little more. Nope, too far. Turn to port. OK, he's back on it." A quarter of an hour ran by like the bay's currents. All I saw were clumps of seaweed. Suddenly, the dog sat and turned to fix Rolland with a look. We stopped, and out of the vast ocean horizon came a single chunk of digested whale chow, bobbing along mostly submerged, ready to sink from view or dissolve altogether within minutes. Kraus grabbed the dip net and scooped up the fragrant blob. You'd have thought he was landing a fabulous fish. "At first, people are incredulous. Then come the inevitable jokes. But this," said the man who has led North Atlantic right whale research for three decades, "is actually some of the best science we've done." With today's technology, DNA from sloughed-off intestinal cells in a dung sample can identify the individual that produced it. Residues of hormones tell Rolland about the whale's general condition, its reproductive state—mature? pregnant? lactating?—levels of stress, and presence of parasites.
<urn:uuid:f20dd62f-b6cd-4a43-b899-d8bd8fdb0627>
3.1875
541
Nonfiction Writing
Science & Tech.
65.860114
522
Please use this identifier to cite or link to this item: http://hdl.handle.net/1959.13/916979 - The 'humped' soil production function: eroding Arnhem Land, Australia Heimsath, Arjun M.; Hancock, Greg R. - The University of Newcastle. Faculty of Science & Information Technology, School of Environmental and Life Sciences - We report erosion rates and processes, determined from in situ-produced beryllium-10 (¹⁰Be) and aluminum-26 (²⁶Al), across a soil-mantled landscape of Arnhem Land, northern Australia. Soil production rates peak under a soil thickness of about 35 cm and we observe no soil thicknesses between exposed bedrock and this thickness. These results thus quantify a well-defined ‘humped’ soil-production function, in contrast to functions reported for other landscapes. We compare this function to a previously reported exponential decline of soil production rates with increasing soil thickness across the passive margin exposed in the Bega Valley, south-eastern Australia, and found remarkable similarities in rates. The critical difference in this work was that the Arnhem Land landscapes were either bedrock or mantled with soils greater than about 35 cm deep, with peak soil production rates of about 20 m/Ma under 35–40 cm of soil, thus supporting previous theory and modeling results for a humped soil production function. We also show how coupling point-specific with catchment-averaged erosion rate measurements lead to a better understanding of landscape denudation. Specifically, we report a nested sampling scheme where we quantify average erosion rates from the first-order, upland catchments to the main, sixth-order channel of Tin Camp Creek. The low (~5 m/Ma) rates from the main channel sediments reflect contributions from the slowly eroding stony highlands, while the channels draining our study area reflect local soil production rates (~10 m/Ma off the rocky ridge; ~20 m/Ma from the soil mantled regions). Quantifying such rates and processes help determine spatial variations of soil thickness as well as helping to predict the sustainability of the Earth's soil resource under different erosional regimes. - Earth Surface Processes and Landforms Vol. 34, Issue 12, p. 1674-1684 - Publisher Link - John Wiley & Sons - Resource Type - journal article
<urn:uuid:aaec1b94-9ba9-4b55-bc0f-d21d25032da1>
2.96875
496
Academic Writing
Science & Tech.
41.331953
523
Draw a square. A second square of the same size slides around the first always maintaining contact and keeping the same orientation. How far does the dot travel? Points A, B and C are the centres of three circles, each one of which touches the other two. Prove that the perimeter of the triangle ABC is equal to the diameter of the largest circle. An AP rectangle is one whose area is numerically equal to its perimeter. If you are given the length of a side can you always find an AP rectangle with one side the given length? Tom and Jerry started with identical rectangular sheets of paper. Each of them cut his sheet into two. Tom obtained two rectangles, each with a perimeter of $40$cm while Jerry obtained two rectangles, each with a perimeter of $50$cm. What was the perimeter of Tom's original sheet of paper? If you liked this problem, here is an NRICH task which challenges you to use similar mathematical ideas. This problem is taken from the UKMT Mathematical Challenges.
<urn:uuid:cb13c13b-4a27-4c7f-9370-9daf73c47fb4>
3.515625
217
Tutorial
Science & Tech.
59.858656
524
In the southeast U.S., deep-sea corals create oases of special habitat along the coast and are extremely vulnerable to certain kinds of fishing such as bottom trawling and dredging. Both corals and fisheries are managed by the South Atlantic Fishery Management Council. In 2004, the Council responded to the convincing data provided by scientists and identified areas of coral that would be closed to bottom trawling and any other activity that disturbs the seafloor. The boundaries were updated in spring of 2006 to reflect recent research. However, the Council has yet to formalize these designations and the threat to corals remains. Learn more about the Corals of the Southeast U.S.
<urn:uuid:18b62725-68c1-4f06-aae6-70dca0316db4>
3.359375
143
Knowledge Article
Science & Tech.
51.094651
525
Sequences of numbers can have limits. For example, the sequence 1, 1/2, 1/3, 1/4, ... has the limit 0 and the sequence 0, 1/2, 2/3, 3/4, 4/5, ... has the limit 1. But not all number sequences behave so nicely. For example, the sequence 1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 4/5, ... keeps jumping up and down, rather than getting closer and closer to one particular number. We can, however, discern some sort of limiting behaviour as we move along the sequence: the numbers never become larger than 1 or smaller than 0. And what's more, moving far enough along the sequence, you can find numbers that get as close as you like to both 1 and 0. So both 0 and 1 have some right to be considered limits of the sequence — and indeed they are: 1 is the limit superior and 0 is the limit inferior, so-called for obvious reasons.

But can you define these limits superior and inferior for a general sequence, for example the one shown in the picture? Here’s how to do it for the limit superior. First look at the whole sequence and find its least upper bound: that’s the smallest number that is at least as big as every number in the sequence. Then chop off the first number in the sequence, and again find the least upper bound for the new sequence. This might be smaller than the previous least upper bound (if that bound was equal to the number you have just chopped off), but not bigger. Then chop off the first two numbers and again find the least upper bound. Keep going, chopping off the first three, four, five, etc. numbers, to get a sequence of least upper bounds (indicated by the red curve in the picture). In this sequence every number is either equal to or smaller than the number before.

The limit superior is defined to be the limit of these least upper bounds. It always exists: since the sequence of least upper bounds never increases, it will either approach minus infinity or some other finite limit. The limit superior could also be equal to plus infinity, if there are numbers in the sequence that get arbitrarily large. The limit inferior is defined in a similar way, except that you look at the sequence of greatest lower bounds and then take the limit of that.

You can read more about the limits inferior and superior in the Plus article The Abel Prize 2012.
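In symbols, the construction described above reads as follows (a standard formulation, stated here as a summary):

\[
\limsup_{n\to\infty} a_n \;=\; \lim_{n\to\infty}\Big(\sup_{k\ge n} a_k\Big),
\qquad
\liminf_{n\to\infty} a_n \;=\; \lim_{n\to\infty}\Big(\inf_{k\ge n} a_k\Big).
\]

For the jumping sequence 1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 4/5, ... every tail has supremum 1 and infimum 0, so the limit superior is 1 and the limit inferior is 0, matching the informal argument above.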
<urn:uuid:50a0f930-4294-406e-939b-83ec24b275fd>
3.703125
516
News (Org.)
Science & Tech.
72.23
526
21 September 2011
Researchers used satellite tracking to monitor the movements of the whales to discover that in 2010 they travelled between the Atlantic and Pacific seas via the famously ice-bound passage. Bones found on beaches in the region suggest that the last time the whales occupied this area was around 10,000 years ago. While Bowheads are adept at moving through ice-bound Arctic seas, it was previously thought that the sea ice in the Northwest Passage was too impenetrable even for these Arctic specialists. However, the new observations show Bowheads travelling through the passage in both directions, suggesting that the rapidly diminishing Arctic sea ice has allowed them to pass from one ocean to another. The findings have huge implications for the ecology of marine life in the region, with the authors stating that their findings “are perhaps an early sign that other marine organisms have begun exchanges between the Pacific and the Atlantic Oceans across the Arctic. Some of these exchanges may be harder to detect than bowhead whales, but the ecological impacts could be more significant should the ice-free Arctic become a dispersion corridor between the two oceans.” Read the full paper for free on the Biology Letters webpage.
<urn:uuid:51f13114-c865-4664-8bbe-81fef4087051>
3.890625
462
News (Org.)
Science & Tech.
36.551891
527
Focus Areas for STEREO

Plasmas and their embedded magnetic fields affect the formation, evolution and destiny of planets and planetary systems. The heliosphere shields the solar system from galactic cosmic radiation. Our habitable planet is shielded by its magnetic field, protecting it from solar and cosmic particle radiation and from erosion of the atmosphere by the solar wind. Planets without a shielding magnetic field, such as Mars and Venus, are exposed to those processes and evolve differently. And on Earth, the magnetic field changes strength and configuration during its occasional polarity reversals, altering the shielding of the planet from external radiation sources.

Understand the causes and subsequent evolution of solar activity that affects Earth's space climate and environment. The climate and space environment of Earth are significantly determined by the impact of plasma, particle, and radiative outputs from the Sun. Therefore, it is essential to understand the Sun, determine how predictable solar activity truly is, and develop the capability to forecast solar activity and the evolution of disturbances as they propagate to Earth.
<urn:uuid:e6f7aa6f-0d69-4896-a04a-4a3d8e7fcfd3>
3.671875
207
About (Org.)
Science & Tech.
16.398732
528
There is an unfortunate side effect when using gdb to debug multi-threaded programs. If one thread stops for a breakpoint, or for some other reason, and another thread is blocked in a system call, then the system call may return prematurely. This is a consequence of the interaction between multiple threads and the signals that gdb uses to implement breakpoints and other events that stop execution.

To handle this problem, your program should check the return value of each system call and react appropriately. This is good programming style anyway. For example, do not write code like this:

    sleep (10);

The call to sleep will return early if a different thread stops at a breakpoint or for some other reason. Instead, write this (a self-contained version of this loop is sketched below):

    int unslept = 10;
    while (unslept > 0)
      unslept = sleep (unslept);

A system call is allowed to return early, so the system is still conforming to its specification. But gdb does cause your multi-threaded program to behave differently than it would without gdb.

Also, gdb uses internal breakpoints in the thread library to monitor certain events such as thread creation and thread destruction. When such an event happens, a system call in another thread may return prematurely, even though your program does not appear to stop.
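The loop above can be wrapped into a complete program. The following is a minimal sketch of that pattern, not something provided by gdb or the thread library (robust_sleep is an illustrative name); it relies only on the documented behaviour of sleep, which returns the number of seconds left unslept when it is interrupted:

    #include <stdio.h>
    #include <unistd.h>

    /* Sleep for the full requested time even if sleep() is cut short,
       for example because gdb stopped another thread at a breakpoint. */
    static void robust_sleep (unsigned int seconds)
    {
      unsigned int unslept = seconds;
      while (unslept > 0)
        unslept = sleep (unslept);  /* returns the seconds still remaining */
    }

    int main (void)
    {
      robust_sleep (10);
      puts ("slept the full 10 seconds");
      return 0;
    }

The same check-and-retry discipline applies to any blocking call that can return early, such as a read that fails with errno set to EINTR.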
<urn:uuid:da0d0195-b882-4274-8e45-9b9d7f446458>
2.765625
268
Documentation
Software Dev.
50.954935
529
Oxygen makes Venus glow at night
- Released 11/04/2007 2:35 pm
- Copyright ESA/VIRTIS/INAF-IASF/Obs. de Paris-LESIA

This grey-scale image was taken on 3 June 2006 by the Visible and Infrared Thermal Imaging Spectrometer (VIRTIS) onboard ESA’s Venus Express, at a distance of 68 000 kilometres from the planet’s surface. The image shows the oxygen airglow in the night-side of Venus, appearing as the bright features similar to ‘clouds’ visible at the bottom of the image, and also visible as the white ring surrounding the planet’s disk (limb). The oxygen airglow is fully detectable only at specific infrared wavelengths. This image was obtained at 1.27 micrometres.

The fluorescence of the airglow is produced when oxygen atoms, ‘migrating’ from the day-side to the night-side of the atmosphere of Venus under the push of the so-called sub-solar and anti-solar atmospheric circulation, recombine into molecular oxygen (or ‘O2’) emitting light.

The view was obtained from the south, with the south pole at the top of the image. The lower horizon is at about 20 degrees South latitude, while the image centre is at 60 degrees East longitude (coinciding with midnight local time).
<urn:uuid:3791b13b-dba7-4225-98e9-3acc4aa5badf>
3.046875
332
Truncated
Science & Tech.
46.73173
530
As the popularized side of the debate has led us to expect, the authors found that the coldest year (1863) and the coldest decade (1810s) are early in the record, well before the ballyhooed warming of the 20th century. Problematic from a climate change standpoint is the fact that the two distinct cold periods that made the 1810s the coldest decade followed an 1809 “unidentified” volcanic eruption and the eruption of Tambora in 1815 – unusual geologic events that defined the climate. However, of greater importance is the fact that the researchers found the warmest year on record to be 1941, while the 1930s and 1940s are the warmest decades on record. This represents very bad news for climate change alarmists, since the warmest period was NOT the last quarter of the 20th century. In fact, the last two decades of the 20th century (1981-1990 and 1991-2000) were colder across the study area than any of the previous six decades, dating back to the 1900s and 1910s. When examining the instrumental records of the stations it is apparent that no net warming has occurred since the warm period of the 1930s and 1940s.Ouch. The note concludes - In a region of the world where climate models indicate that the greatest impacts of CO2-induced global warming will be most rapid and most evident, this recent extension of instrumental surface air temperature records produces a climate history that seems to suggest otherwise. If global climate models are correct, the increase in CO2 concentration since 1930 should be evidenced rather dramatically in air temperature across a high-latitude region of the Northern Hemisphere such as Greenland. The evidence provided by the instrumental record of air temperature along the western and southern coasts of Greenland produces doubt in the degree to which increased CO2 concentrations impact high latitude climate as represented by the climate models upon which climate change alarmists are hanging their hats.What's fascinating to this layman is how new observations are still being made which seem to challenge what is evidently not at all a settled body of theory. And that theory - and a dodgy discount rate - are a basis for major action?
<urn:uuid:54298b9a-1cd7-4039-ba9b-013180e3e21a>
3.734375
447
Personal Blog
Science & Tech.
34.807563
531
Climate Witness: Pak Azhar, Indonesia

I have been living in Balikukup since 1999. Balikukup is a small island of 18 ha consisting mainly of sandbanks. However, the island’s size is not fixed as it depends on the tides. During low tide, a large sandbank is exposed, extending 1 km towards the sea.

The weather is a significant factor in the work of a sea cucumber fisherman

I started collecting sea cucumbers in 2001. There are two ways to catch sea cucumbers; some fishermen just search on the beaches around the island during low tide at night, while others dive underwater, down to depths of 10 m. Sea cucumber fishermen are highly dependent on the weather to do their job. Fishermen cannot bring in good harvests during rainy or stormy weather, as sea cucumbers hide underneath the sand during that time. Therefore, it is important for a sea cucumber fisherman to predict what the weather will be like before going to work.

Usually I observe the weather at dusk or in the early evening to predict whether it is going to rain or be stormy at night. But nowadays, it is getting harder to predict the weather accurately. For example, early yesterday evening I predicted that there would be no rain at night, but around midnight and early morning heavy rain came down. In the old days, we fishermen could predict the weather. But not anymore. The elders on our island also mentioned the same thing. Since 2002, Atang, one of the fisherman elders whom we regard as the best expert in predicting the weather in Balikukup, has said that the weather is getting unpredictable. Before, Atang could produce a very good prediction, even for the course of a full year.

‘Bulan janda’ or Widow month

One example of the unpredictable weather is the disappearance of the phenomenon of ‘bulan janda’, or ‘widow month’. It is called widow month because when the fishermen went to sea during the event, they rarely came home safely. Thus, their wives became widows. Widow month is an annual event when the wind blows very strongly for 44 days from the south. This wind stops for a short period of time (half an hour), and then goes back to blowing very hard. During that time it is impossible for fishermen to go out to sea. Fishermen who had saved enough money and food did not need to go out to sea during ‘widow month’ because the conditions were too dangerous. However, other fishermen had no option but to go to sea during the event.

The phenomenon of ‘widow month’ does not exist anymore. The last time it happened was in 1991, according to the fishermen. After 1991, during the supposed ‘widow month’, there could be calm periods for up to 2 weeks. None of the fishermen understands why the ‘widow month’ phenomenon has slowly disappeared.

No clue when money will come

The unpredictable weather is a disadvantage for us fishermen because we no longer know when we can go fishing. It is difficult for us to predict when we will make money. Before, we could estimate when the right time was to earn an income and put some money aside, as we could predict when we could go fishing. Now, whenever we have good weather, we just go fishing. We can no longer make financial plans.

Credit: WWF-Indonesia / Primayunta

Scientific review
Reviewed by: Dr Heru Santoso, Project Coordinator of the TroFCCA (Tropical Forests and Climate Change Adaptation) project, Indonesia

The witnesses described three natural phenomena that they considered climate related: increased land erosion, higher tides and unpredictable weather.
Non-climatic factors could contribute to these phenomena; for example, an increase in land erosion could be due to land mismanagement, and a higher tide could be a consequence of regional subsidence. However, in all three locations the people observed an increase in wave energy and increasingly unpredictable weather that could affect the sustainability of their villages and their livelihoods.

There is very little scientific literature reporting whether the observed phenomena in this specific region are related to climate change. This region is open to the Sulawesi and Sulu seas, which are flow paths of the oceanic current from the western Pacific Ocean to the Indian Ocean. Higher tides in the Berau area could be related to the rise in sea surface level in the western Pacific during La Niña events. This phenomenon has recently become more noticeable than in the past, probably because global warming has accentuated the extent of this climate mechanism (Mimura et al. 2007). For the same reason, unpredictable and abrupt changes of weather have become more noticeable. Abrupt changes are usually associated with high wind speeds, which can only occur if there is a significant pressure difference between two areas. Strong heating, particularly over heat-sensitive land under warmer conditions, could generate this pressure difference quickly. Land sensitivity to heat is higher where the forest cover has been lost or heavily degraded. The ‘widow month’, a regular phenomenon of strong southerly wind that has been disappearing, is normally associated with the monsoonal trade wind in which the easterly wind from eastern Indonesia turns northward towards Asia. Global warming or higher regional temperatures could alter the distribution of regional or subregional energy concentration and could also alter the scale and extent of the circulation. Therefore, global warming could have contributed to the increasing recurrence of the natural phenomena reported by the witnesses.

However, it would be proper to verify whether global warming has accentuated climate mechanisms in this subregion by comparison with other climate variables. For example, during La Niña events warm waters from the east flow to the west, usually bringing more rain. The high tides in the Berau region that could be explained by this mechanism could therefore be checked against rainfall data at the time of those events, preferably using a long period of observation data.

All articles are subject to scientific review by a member of the Climate Witness Science Advisory Panel.
<urn:uuid:17978afc-38d0-4b2f-ac01-3569ee170f80>
2.796875
1,253
Knowledge Article
Science & Tech.
39.749783
532
Space Station has to wait for its scientific Destiny

The International Space Station will now have to wait for delivery of its first science facility - the US laboratory 'Destiny' - after the launch of space shuttle Atlantis was cancelled this week. The US$1.4 billion Destiny is a laboratory module enabling experiments in the near-zero gravity of space. The module will end up with 24 payload racks supporting facilities for research in biotechnology, fluid physics, combustion and life sciences.

In microgravity - also called weightlessness - fluids no longer convect or flow because one part is lighter or heavier than the other. These conditions allow materials scientists to investigate the fundamental properties that control how materials form and behave.

The module is 8.5 metres long and 4.3 metres in diameter and consists of three cylindrical sections and two endcones with hatches that will be mated to other station components. It has an exterior covered by a debris shield blanket made of a material similar to that used in bulletproof vests on Earth.

The current space station crew have passed their 73rd day in space and will live onboard for about 120 days before being replaced by another team of one Russian commander and two Americans. The ISS, which is a joint project of the US, Russia, Europe, Japan and Canada, orbits the Earth every 90 minutes at an altitude of 370 kilometres. It is scheduled for completion in 2006 and will have as much pressurised space as a 747 jumbo jet.
<urn:uuid:59f7d0b4-70c0-424b-8f5d-26503ea1f3a9>
3.109375
301
News Article
Science & Tech.
42.979525
533
Plan for an unmanned mission to Earth's core First, split the ground open with cataclysmic force, then fill it with the world's entire supply of molten iron carrying a small communication probe - and the resulting 3,000 kilometre journey to Earth's core should take about a week, according to a U.S. planetary physicist. "We would learn a lot more about the nature of Earth and how it works - the generation of the magnetic field, the origin of some kinds of volcanoes, the heat sources inside Earth, the stuff Earth is made of - in short, all the basic questions," he told ABC Science Online. In his paper, Stevenson argues that "planetary missions have enhanced our understanding of the Solar System and how planets work, but no comparable exploratory effort has been directed towards the Earth's interior". "Space probes have so far reached a distance of about 6,000 million kilometres, but subterranean probes (drill holes) have descended only some 10 kilometres into the Earth," he writes in his article. The main barrier to travelling to the core is the dense matter of the Earth's mantle. The energy required to penetrate the mantle by melting is about a thousand million times the energy needed for space travel, per unit distance travelled. Stevenson's scheme relies on principles observed in 'magma fracturing' - where molten rock migrates through the Earth's interior. He proposes pouring 100 million tonnes of molten iron alloy into a crack of about 300 metres deep in the Earth's surface. This massive volume of iron, containing a small communication probe, would work its way down to the Earth's core, along the crack, which would open up by the force of gravity and close up behind itself. The crack would open downwards at 5 metres per second, giving a mission timescale of "around a week". Such 'Earth dives' have not been tried before on any scale, nor is the technology yet available. "No, we can't do it now," said Stevenson. "But the basic scientific principles are understood. The same answer applied to the atomic bomb in 1940." The initial crack would require a force equivalent to several mega tonnes of TNT, an earthquake of magnitude 7 on the Richter scale, or a nuclear device "with a capability within the range of those currently stockpiled". The amount of iron needed could be as much as the amount produced world-wide in a week. Heat would be maintained through the release of gravitational energy and the partial melting of silicate rock walls. "But of course, the mantle is hot anyway," said Stevenson, "so once you get below the first 100 kilometres, there are alloys that would never freeze in equilibrium with the mantle." He said the probe would penetrate the outer core but the solid inner core of the Earth would probably stop it from going any further. The grapefruit-sized probe embedded in the molten iron would contain instruments to measure temperature, conductivity, and chemical composition. It would rely on encoded sound waves to beam data to the surface, as the Earth's interior does not transmit electromagnetic radiation. One of the existing Laser Interferometer Gravitational-wave Observatories (LIGO), used to detect tiny amounts of gravitational radiation from space, could be reconfigured to read the acoustic frequencies from the probe burrowing beneath. "My paper is an idea, not a blueprint!" Stevenson told ABC Science Online. "But the physical process involved - with melt moving through the outermost 100 kilometres of earth - is something the Earth does every day." 
"This proposal is modest compared with the space program, and may seem unrealistic only because little effort has been devoted to it," he concludes in Nature. "The time has come for action." Click here to listen to a follow-up of this story broadcast on The Science Show, ABC Radio National.
<urn:uuid:e3d8cbe1-af62-4fab-911a-d7705b5c0ea2>
3.796875
785
Truncated
Science & Tech.
42.832035
534
Lichen love space Scientists have found the most complex organism to date that can survive direct exposure to space: lichen. The European Space Agency (ESA), which sponsored the research, says the findings bolster the possibility that life was transferred between planets. Researchers from Spain flew samples of lichen, which are made of algal cells in a mat of fungus, on the outside of a Russian capsule that spent two weeks in orbit. The organisms survived the high levels of ultraviolet radiation, as well as the vacuum and extreme temperatures of space. Dr Rosa de la Torre, from Spain's National Institute for Aerospace Technology in Madrid, says post-flight analysis shows the lichens not only survived, but still had the ability to photosynthesise upon their return. Images taken by electron microscopes showed no cell damage. "[The experiment shows] for the first time that complex organisms integrated by the association of algae and fungi are able to resist the conditions of space without showing apparent damage," says Professor Leopoldo Sancho, with Complutense University of Madrid. Sealed in a capsule Two species of lichen, Rhizocarpon geographicum and Xanthoria elegans, were sealed in a capsule and launched on a Russian Soyuz rocket on 31 May 2005. Upon reaching orbit, the lid of the container holding the lichen was opened, exposing the samples to the space environment for 14.5 days. The lid was then closed to protect the samples while the capsule returned to Earth. "The lichens are probably some of the most resilient organisms that you can find," says astrobiologist Professor Charles Cockell, with the UK's Open University, who is familiar with the Madrid team's work. Lichens have a mineral coating that apparently shields the organisms from the ultraviolet radiation of space, says Dr Rene Demets, who oversaw the project for the ESA. On Earth, lichens are typically found on the surfaces of rocks and survive extreme conditions, such as high on mountaintops. Previous studies have shown that simple organisms such as bacteria can survive in space and possibly even on the surface of Mars. Other organisms, such as plant seeds, have not fared as well. "They could resist the absolute emptiness and the extreme temperatures, but not the radiation," Sancho says. Follow-up ground and flight studies are planned for September 2007 to determine how long lichens might survive in space, and if they could survive re-entry forces if, for example, they are transported on a meteorite.
<urn:uuid:4d05952d-e1e3-4e24-b34a-454e5e481960>
4.21875
526
News Article
Science & Tech.
34.948922
535
Get ready for Comet PANSTARRS — 2013's first naked-eye comet Comet PANSTARRS promises to be the brightest comet in six years when it peaks in March. February 26, 2013 Luis Argerich from Buenos Aires, Argentina, captured Comet PANSTARRS in the sky above Mercedes, Argentina, on February 11, 2013. The comet shone at magnitude 4.5 to the left of an Iridium flare. I’m here today to talk about what promises to be the brightest comet during the first half of 2013 and likely one of the brightest comets of the 21st century — so far. Comet PANSTARRS (C/2011 L4) will peak in March and remain bright well into April. If predictions hold, it should be an easy naked-eye object and will look great through binoculars for several weeks. Astronomers discovered this comet June 6, 2011. As the fourth new comet detected during the first half of June that year, it received the designation “C/2011 L4.” And because researchers first spotted the object on images taken through the 1.8-meter Panoramic Survey Telescope and Rapid Response System on Haleakala in Hawaii, it received the instrument’s acronym, PANSTARRS, as a secondary name. Astronomers credit this scope with more than two dozen comet discoveries, so the “C/2011 L4” designation is more precise even though it’s much easier to say “PANSTARRS.” The comet is making its first trip through the inner solar system. Its journey began eons ago when a star or interstellar cloud passed within a light-year or two of the Sun. This close encounter jostled the so-called Oort Cloud, a vast reservoir of icy objects that lies up to a light-year from the Sun and probably holds a trillion comets. PANSTARRS has been heading toward the Sun ever since. For complete coverage of Comet PANSTARRS, visit www.astronomy.com/panstarrs. Southern Hemisphere observers had the best comet views during February. But by early March, PANSTARRS veers sharply northward and gradually becomes visible in the evening sky for Northern Hemisphere observers. The earliest views should come around March 6 or 7, when it appears a degree above the western horizon 30 minutes after sunset. Each following day, the comet climbs a degree or two higher, which dramatically improves its visibility. It comes closest to the Sun (a position called “perihelion”) the evening of March 9, when it lies just 28 million miles (45 million kilometers) from our star. It then appears 7° high in the west 30 minutes after sunset. If predictions hold true — never a sure thing when it comes to comets making their first trip through the inner solar system — the comet will be a superb object through binoculars and probably an impressive naked-eye sight. Astronomers expect it to reach magnitude 0 or 1 at perihelion, although no one would be too surprised if it ends up one or two magnitudes brighter or dimmer. From perihelion to the end of March, the comet moves almost due north through Pisces and Andromeda while its brightness drops by about a magnitude every five days. In the admittedly unlikely event that the tail of PANSTARRS stretches 10° or more March 13, it will pass behind a two-day-old crescent Moon. The comet should glow around 4th magnitude in early April, which would make the extended object visible only through binoculars or a telescope. It passes 2° west of the Andromeda Galaxy (M31) on the 3rd, then crosses into Cassiopeia on the 9th. During the third week of April, the comet fades to 6th magnitude and is visible all night for those at mid-northern latitudes, where it appears highest before dawn. 
If Comet PANSTARRS lives up to expectations, it should show two tails emanating from a round glow. Although PANSTARRS likely won't get as bright as 1997's Comet Hale-Bopp (pictured in the accompanying photograph by Tony Hallas) did, it lets us see the major components of a comet. The circular head, known as the "coma," masks the comet's nucleus. The nucleus is a ball of ice and dust that typically measures a mile or two across. As sunlight hits the nucleus, the ices boil off, and the process liberates dust particles. This cloud of gas and dust forms the coma, which can span a million miles or more. Sunlight removes electrons from the ejected gas molecules, causing them to glow with a bluish color. The solar wind carries this gas away from the comet, creating a straight bluish gas tail. The ejected dust gets pushed away from the Sun more gently, so it forms a curving tail. The dust particles simply reflect sunlight, so the dust tail has a white to pale-yellow color. Although Comet McNaught didn't show much of a gas tail when it achieved fame in 2007, it more than made up for it with a 30°-long curving dust tail. Will PANSTARRS rival Hale-Bopp or McNaught? The best way to find out is to plan a few observing sessions for this March and April. Even if PANSTARRS falls short of greatness, goodness is a fine attribute when it comes to comets. And remember that 2013 isn't over yet. November and December should provide exceptional views of Comet ISON (C/2012 S1), which could be 100 times brighter than PANSTARRS. I'll be back later this year with more details on viewing Comet ISON.
<urn:uuid:e69a0af5-424d-41c4-ae2e-f8bcfbc644b5>
3.109375
1,443
Nonfiction Writing
Science & Tech.
61.908393
536
What about the invaded cell? hboswell at netdoor.com Wed Jan 10 15:08:49 EST 1996 I'm not a virologist, biologist, whatever - I'm a computer scientist who's married to a HS biology teacher, and I asked her a question she couldn't answer, so I thought I'd throw it out here. My understanding is that a virus attacks by entering a cell and inserting DNA/RNA into the cell's chromosome structure, causing the cell to produce copies of the virus. (Sorry if I've oversimplified). If so, what happens to the original cell? Is it still the original cell type? Has it itself become a viral cell? Or with its now-altered DNA, is it something else entirely? Or, do I have this all wrong? Thanks for any info, Harry Boswell hboswell at netdoor.com Home Page: http://www2.netdoor.com/~hboswell
<urn:uuid:4b8c91c4-ede2-4902-9c04-4eb609fc977c>
2.609375
220
Comment Section
Science & Tech.
70.657907
537
The amino-acid sequence (or primary structure) of a protein predisposes it towards its native conformation or conformations. It will fold spontaneously during or after synthesis. While these macromolecules may be regarded as "folding themselves", the mechanism depends equally on the characteristics of the cytosol, including the nature of the primary solvent (water or lipid), macromolecular crowding, the concentration of salts, the temperature, and molecular chaperones. Most folded proteins have a hydrophobic core in which side chain packing stabilizes the folded state, and charged or polar side chains on the solvent-exposed surface where they interact with surrounding water molecules. It is generally accepted that minimizing the number of hydrophobic side-chains exposed to water is the principal driving force behind the folding process, although a recent theory has been proposed which reassesses the contributions made by hydrogen bonding. The strengths of hydrogen bonds in a protein vary, i.e. they are dependent on their microenvironment; thus H-bonds enveloped in a hydrophobic core contribute more to the stability of the native state than H-bonds exposed to the aqueous environment. The process of folding in vivo often begins co-translationally, so that the N-terminus of the protein begins to fold while the C-terminal portion of the protein is still being synthesized by the ribosome. Specialized proteins called chaperones assist in the folding of other proteins. A well-studied example is the bacterial GroEL system, which assists in the folding of globular proteins. In eukaryotic organisms chaperones are known as heat shock proteins. Although most globular proteins are able to assume their native state unassisted, chaperone-assisted folding is often necessary in the crowded intracellular environment to prevent aggregation; chaperones are also used to prevent misfolding and aggregation which may occur as a consequence of exposure to heat or other changes in the cellular environment.
<urn:uuid:12126776-a149-4e0f-8ff9-22952df2e1dd>
3.6875
409
Knowledge Article
Science & Tech.
14.321866
538
So last time, Tetra was being enlightened by MC-kun about definitions. This actually arises from MC-kun using prime numbers as a motivating example. Primes are mega important in mathematics and even more important today. The entire branch of mathematics called number theory is all about studying the properties of prime numbers. They’re so useful that we’ve done stuff like extend the notion of prime elements to algebraic structures called rings or apply analytic techniques to learn more about them, but we’ll stick with elementary number theory for now. Now, for hundreds of years, we’d been studying number theory only because it’s cool and mathematicians love prime numbers. Last time, I mentioned some examples of math preceding useful applications. Well, number theory is a really good example of that, because in the 70s, we found a use for it, which is its main use today, in cryptography. There have been some new techniques using some algebra as well, but for the most part, modern cryptography relies on the hardness of factoring large numbers into their prime factors. Neat! Okay, so we’re back to the original question that MC-kun tries to get Tetra to answer, which is, what is a prime number? Definition. An integer $p$ is prime if and only if $p\geq 2$ and the only positive divisors of $p$ are 1 and itself. MC-kun explains that the motivation for excluding 1 from the definition of a prime number is because we want to be able to say that we can write every number as a unique product of prime numbers. This is very useful, because now we know we can break down every number like this and we can tell them apart because they’re guaranteed to have a unique representation. This is called unique prime factorization. Theorem. Let $a \geq 2$ be an integer. Then we can write $a = p_1p_2\cdots p_k$ for some primes $p_1,\dots,p_k$. This representation is unique up to changing the order of terms. We can show this by induction on $a$. We’ve got $a=2$ so that’s pretty obvious. So let’s say that every integer $2\leq k\lt a$ can be decomposed like this and suppose we can’t decompose $a$ into prime numbers, assuming $a$ itself isn’t already a prime since it would just be its own prime decomposition. Then we can factor $a=cd$ for some integers $c$ and $d$ with $1\lt c,d\lt a$. But both $c$ and $d$ are less than $a$, which means they can be written as a product of primes, so we just split them up into their primes and multiply them all together to get $a$. Tada. As a sort of side note, I mentioned before that primes are so useful that we wanted to be able to extend the idea of prime elements into rings. Well, it turns out for certain rings, it isn’t necessarily true that numbers will always have a unique representation when decomposed into primes. This is something that comes up in algebraic number theory, which is named so because it involves algebraic structures and techniques. This was invented while we were trying to figure out if Fermat’s Last Theorem was actually true (which needed this and other fun mathematical inventions from the last century, which implies that Fermat was full of shit when he said he had a proof). So at the end of the chapter, after Tetra gets her chair kicked over by the megane math girl, we’re treated to a note that acts as a sort of coda to the chapter that mentions that there are infinitely many primes. How do we know this? Suppose that there are only finitely many primes. Then we can just list all of the prime numbers, like on Wikipedia or something. So we’ve got our list of primes $p_1,p_2,\dots,p_k$. 
So let’s make a number like $N=1+p_1\cdots p_k$. Well, that number is just a regular old number, so we can break it down into its prime factors. We already know all the primes, so it has to be divisible by one of them, let’s say $p_i$. Now we want to consider the greatest common divisor of the two numbers, which is just the largest number that divides both of them. We’ll denote this by $\gcd(a,b)$. So since $p_i$ is a factor of $N$, we’ve got $\gcd(N,p_i)=p_i$. But we can also write $N=qp_i+1$, where $q$ is the product of all the other primes in the list, so that gives us $p_i=\gcd(N,p_i)=\gcd(p_i,1)=1$ by a lemma that says that for $a=qb+r$, we have $\gcd(a,b)=\gcd(b,r)$. This means that we have $p_i=1$, which is a contradiction, since 1 isn’t a prime number, and so I guess there are actually infinitely many primes. So the nice thing is that we won’t run out of prime numbers anytime soon, which is very useful because as we get more and more computing power, we’ll have to increase the size of the keys we use in our cryptosystems. Luckily, because factoring is so hard, we don’t need to increase that size very much before we’re safe for a while. Or at least until we develop practical quantum computers.
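(A concrete instance of that argument, added here just for illustration and not something from the book: pretend the complete list of primes were $2, 3, 5$. Then $N = 1 + 2\cdot 3\cdot 5 = 31$, and 31 leaves a remainder of 1 when divided by 2, 3 or 5, so none of the “known” primes divides it; in fact 31 is itself a prime missing from the list. Any finite list fails the same way, which is exactly what the gcd computation above says. Unique factorization works similarly in practice: $60 = 2^2\cdot 3\cdot 5$, and however you start splitting it up, say $6\cdot 10 = (2\cdot 3)(2\cdot 5)$ or $4\cdot 15 = 2^2(3\cdot 5)$, you end up with the same collection of primes.)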
<urn:uuid:64fa679e-b305-4951-86ba-269d9887820f>
3.203125
1,216
Personal Blog
Science & Tech.
62.536472
539
There are times when a generic (in the sense of general as opposed to template-based programming) type is needed: variables that are truly variable, accommodating values of many other more specific types rather than C++'s normal strict and static types. We can distinguish three basic kinds of generic type: Converting types that can hold one of a number of possible value types, e.g. int and string, and freely convert between them, for instance interpreting 5 as "5" or vice-versa. Such types are common in scripting and other interpreted languages. boost::lexical_cast supports such conversion functionality. Discriminated types that contain values of different types but do not attempt conversion between them, i.e. 5 is held strictly as an int and is not implicitly convertible either to "5" or to 5.0. Their indifference to interpretation but awareness of type effectively makes them safe, generic containers of single values, with no scope for surprises from ambiguous conversions. Indiscriminate types that can refer to anything but are oblivious to the actual underlying type, entrusting all forms of access and interpretation to the programmer. This niche is dominated by void *, which offers plenty of scope for surprising, undefined behavior. The boost::any class (based on the class of the same name described in "Valued Conversions" by Kevlin Henney, C++ Report 12(7), July/August 2000) is a variant value type based on the second category. It supports copying of any value type and safe checked extraction of that value strictly against its type. A similar design, offering more appropriate operators, can be used for a generalized function adaptor, any_function, a generalized iterator adaptor, any_iterator, and other object types that need uniform runtime treatment but support only compile-time template parameter conformance. Last revised: March 15, 2003 at 23:12:35 GMT
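A minimal illustrative sketch (not part of the original Boost page) of the checked, non-converting behaviour described above; the variable names and printed messages are incidental choices of this example:

#include <boost/any.hpp>
#include <iostream>
#include <string>

int main()
{
    boost::any value = 5;                     // holds an int, strictly as an int
    int n = boost::any_cast<int>(value);      // checked extraction against the exact type
    std::cout << n << '\n';                   // prints 5

    value = std::string("five");              // the same object may later hold a string instead

    try {
        // No conversion is attempted: asking for the wrong type throws.
        double d = boost::any_cast<double>(value);
        std::cout << d << '\n';
    } catch (const boost::bad_any_cast&) {
        std::cout << "wrong type requested" << '\n';
    }
    return 0;
}

Contrast this with boost::lexical_cast, which belongs to the first (converting) category and would happily turn 5 into "5".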
<urn:uuid:34ea8624-ad51-4e8f-8439-519cf0cc5d77>
3.1875
397
Documentation
Software Dev.
32.710169
540
Extremes in weather more likely - scientists Wet areas have become wetter and dry areas drier during the past 50 years due to global warming, a study of the saltiness of the world's oceans by a team including CSIRO researchers has shown. The intensification of rainfall and evaporation patterns, which is occurring at twice the rate predicted by climate change models, could increase the incidence and severity of extreme weather events in future. The team's leader, Paul Durack, said the finding was important because reductions in the availability of fresh water posed more of a risk to human societies and natural ecosystems than a rise in temperature alone. "Changes to the global water cycle and the corresponding redistribution of rainfall will affect food availability, stability, access and utilisation," said Dr Durack, a former CSIRO researcher now at the Lawrence Livermore National Laboratory in California. The fact that hotter air can hold more water underpinned predictions that recent warming of the globe's surface and lower atmosphere could have already strengthened the natural evaporation and precipitation cycle – increasing rainfall where it was higher than average and decreasing it where it was lower. Initial attempts to study this "rich get richer" effect, however, were hindered by a shortage of good rainfall records on land and a lack of long-term satellite measurements. So Dr Durack and his Australian colleagues studied the oceans. "The ocean matters to climate," said Richard Matear, a CSIRO researcher and member of the team. "It stores 97 per cent of the world's water and receives 80 per cent of all the surface rainfall." The team analysed about 1.7 million records of surface sea salinity collected worldwide between 1950 and 2000. Their results are published in the journal Science. They found regions near the equator and the poles, where greater rainfall keeps surface waters less salty than average, had become even fresher during the past half century. Saltier areas, such as in the centre of oceans where evaporation dominated, had become even saltier. Brian Soden, a meteorologist at the University of Miami in the US, said the study had important implications for extreme weather. Warmer water moving faster from the surface into the atmosphere could fuel violent storms, and floods and droughts could become more intense. Susan Wijffels, a CSIRO researcher and team member, said a network of 3,500 Argo floats throughout the world's oceans would be vital for continued observation of salinity changes.
<urn:uuid:63129401-a931-4a1a-ae80-52f42a886521>
3.59375
507
News Article
Science & Tech.
30.383509
541
A couple of years ago Cary Huang and his brother created this interesting "interactive" visualization of the scale of the universe. He recently updated and improved it with his Scale of the Universe 2 visual. Learn about the scale of things by zooming in and zooming out. It's certainly an improvement over the earlier version and worth looking at. I especially like the interesting assortment of universe objects that the creators selected (and the fact that you can click on them to learn more). Clearly science teachers can use this when discussing space and molecules. I think that math teachers can take advantage of the scientific notation, too. Depending on your internet connection it can take a second or two to load. Be patient. If you like this one, you might like this video I shared a few months back too. Let me know if you've come across any similar visuals.
<urn:uuid:a658776f-9ebb-4f56-b770-1dc65839a17f>
2.578125
177
Personal Blog
Science & Tech.
54.188636
542
Dino Eggs…And What's Inside by Sara F. Schacter What could be rarer than discovering the egg of a real dinosaur? How about finding the baby dinosaur still inside? In a huge dinosaur nesting ground in Argentina, scientists recently found the fossil remains of six unhatched baby dinosaurs. About a foot long and snuggled up inside eggs the size of grapefruit, these dinosaur embryos have helped solve the mystery of which dinosaurs laid the miles and miles of eggs buried in the dirt and rock. The tiny embryos were titanosaurs—a type of sauropod, the long-necked, plant-eating dinosaurs that were among the largest land animals ever. Scientists were amazed that their delicate skulls and fragile skin had survived long enough to become fossilized. Some embryos still had tiny, sharp teeth in their mouths. By studying the embryos' skulls, scientists are learning just how dramatically the structure of the titanosaurs' faces changed as they grew. The embryos' nostrils are at the tips of their snouts, but by the time titanosaurs were full grown, their skulls changed so that their nostrils were almost between their eyes. In yet another amazing discovery, scientists in England have found fossilized vomit! Coughed up 160 million years ago by a large marine reptile called an ichthyosaur, the vomit contains the undigested shells of squidlike shellfish—no doubt the ichthyosaur's favorite snack. “We believe that this is the first time the existence of fossil vomit on a grand scale has been proven,” said one excited scientist. - embryo: An animal in the earliest stage of development. - fossil: Something that remains of a living thing from long ago. - What kinds of things did scientists learn about the way titanosaurs reproduce? [anno: The scientists learned that titanosaurs laid a lot of eggs over a wide area. They had a nesting ground.] - Where was the vomit found? [anno: It was found in England.] - What kind of animal made the vomit? [anno: an ichthyosaur] - How has the habitat of the ichthyosaur changed, from the time it lived until today? How do you know this change has happened? [anno: When the ichthyosaur lived, its habitat was an ocean. The ichthyosaur was a marine reptile, so the area that is now England must have been under water.]
<urn:uuid:5fedcac0-271c-4f51-a936-a65585b0428f>
3.71875
518
Truncated
Science & Tech.
51.383438
543
Early Applications of Electricity Making Electricity Work: Putting Theory into Practice When people realized what electricity and magnetism were, they took the first steps towards putting them to work. The very first machines hardly seem useful compared to the stuff we use today, but 200 years ago, when the industrial revolution was getting under way in Europe, they were major breakthroughs. In the 19th century inventors began looking for ways to use electromagnetism to run machines, which was being done at that time by steam engines, water wheels, horses, or even people. One of the first to think about using electricity for practical purposes was the American Joseph Henry. In 1829 he used a large battery to build a powerful electromagnet. It was not just a scientific instrument—it could do heavy work, such as lifting hundreds of pounds of metal. With his demonstration, Henry really began to transform electricity into something that people could use every day. Those interested in using electricity also found new ways to produce electric current. Inventors tried to improve the basic idea of electromagnetic induction and used magnets to create a flow of current in wires. One of the first to invent such a machine was Frenchman Hippolyte Pixii in 1832. Pixii's machine generated what would today be called an alternating current. It flowed first in one direction and then in the opposite direction. Belgian Floris Nollet improved Pixii's electromagnetic generator around 1850, and his design was capable of producing about 50 volts. The Nollet generator was the first to be produced in large numbers by a manufacturing firm. They were used in electroplating, the first industrial operation to employ electricity. The Electrical Age was truly under way. Along with the generator came much more powerful ways to put electricity to work. A key technology was the electric motor. By the 1800s inventors had already harnessed the power of steam to run locomotives and factory machines. Many thought that electricity could be tapped to do the same kind of work, especially after Michael Faraday demonstrated a tiny electric motor. In 1834 Thomas Davenport designed a motor that was strong enough to run a small printing press. He patented the motor in 1837. But progress was slow—it wasn't until almost 50 years later that electric motors were used commercially. Davenport also used his motor (which was powered by batteries) to move a small railroad car around a track. Unfortunately, commercial railroad cars were large, and so many batteries were needed that an electric railroad was not practical. But inventors used batteries and motors to power small automobiles beginning in the 1880s. In fact, in 1900 electric automobiles outsold gasoline-powered cars. Today, of course, most cars use gas, but electric cars continue to be developed. Since they do not produce exhaust gas and are easier on the environment than gasoline, they continue to attract interest. Electricity was also put to work at an early date in the field of medicine. Just three years after the invention of the Leyden jar in 1745, doctors in Geneva began to treat patients with electric shocks. A Swiss physician reported that victims of paralysis could sometimes be cured by repeated shocks to their muscles. 
When Luigi Galvani announced the discovery of "animal electricity," doctors were encouraged to continue their experiments. Doctors such as Guillaume-Benjamin Duchenne, the "father of electrotherapy," believed that shocking people with electricity might even cure their ailments. Unfortunately, this type of medicine did not prove effective and became much less common by the early 20th century. But there were many other uses of electricity in medicine that succeeded. The first detection of the electric currents emanating from the brain was made in 1875, and the x-ray machine was introduced in 1895. However, the most successful practical early use of electricity in the 19th century was the telegraph. This new form of communication ushered in the era of electrical communication and brought electricity to the forefront of the public's attention.
<urn:uuid:db8b29ef-2105-4568-aab4-d82eaf665d9e>
3.625
948
Knowledge Article
Science & Tech.
38.009746
544
For immediate release, April 1, 2010 For information: Alice Tibbetts, 612-625-3889 The chocolate headline appeared in a newspaper in 2005 and was based on a study involving only 14 people. Results from more current studies are in the news again, just in time for the Easter candy season. How do we determine if such health claims are credible? How do we interpret the statistics behind headlines? When are statistics manipulated to further an agenda? Nancy Reid, a professor of statistics at the University of Toronto, will speak to these questions at a public lecture, Thursday, April 22, 2010 in 175 Willey Hall, 225 19th Avenue South at the University of Minnesota. She is the final speaker in this season's free lecture series sponsored by the Institute for Mathematics and its Applications (IMA). When the media presents findings as definitive, the public is misinformed, she said. "Statistics are not black and white. In reality, there is a lot of nuance, and in the most complex problems, there is ambiguity. One number won't tell you anything important about climate change or cancer. Instead, we have to ask: Where did the number come from? How can we find more data to better inform us? What could have gone wrong? Data is just the beginning of the conversation." Reid will discuss the statistics behind current news stories, including: chocolate's impact on health, whether girls are really less capable in math than boys, the Netflix Grand Prize for movie recommendations, and the use of new on-line visuals to explain large data sets, such as how stimulus money is being spent. For updates on future public lectures: http://www.ima.umn.edu/public-lecture. The IMA brings together the best minds in math and the sciences to solve pressing problems facing our society, our industries, and our planet. It receives major funding from the National Science Foundation and the University of Minnesota.
<urn:uuid:e7deccef-13d1-4554-9b22-3af78c3c272f>
3.015625
398
News (Org.)
Science & Tech.
50.62878
545
Outside a small town in Gifu Prefecture is a little-known scientific research establishment engaged in a project to “create a sun on the Earth.” If successful, this venture will profoundly affect the lives of most people in the world. The National Institute for Fusion Science (NIFS) is a collection of buildings on the tree-covered hillsides surrounding the town of Toki. It houses the Large Helical Device (LHD), which cost ¥50 billion to make and is the only one of its kind in the world. The machine is designed to replicate fusion, the nuclear reaction that powers the sun. Once this is achieved, it will herald the end of humanity’s dependence on fossil fuels and begin an era of cheap and limitless energy. The current Democratic Party of Japan administration is cutting back on big, expensive projects, and large-scale scientific organizations such as the LHD are under budgetary pressure. But it is worthwhile work according to Denis Humbert, an International Atomic Energy Agency (IAEA) scientist from France, who recently spent three months researching at the NIFS. “The budget for this project is actually very small — $18 billion over the next 20 years. Fusion research has implications of great interest to many other fields such as research into new materials and the nanosciences. And most important, if it works, it will bring a solution to the planet’s energy problem.” Hiroshi Yamada, executive director of research at the NIFS, gave The Japan Times a tour of the facility in July. We put on hard hats, climbed ladders and crossed metal gangways in a huge, cavernous building that measures 40 meters high, 75 meters long and 45 meters wide. The space houses the LHD — an enormous sprouting of pipes and coils all wrapped around a giant metal tube. Your reporter was invited to put on protective gear, crawl into a small space and stand upright to peer through a head-size hole, right into the silvery innards of the beast. If I were to stand in the same spot a few months down the road, I thought, this thing would vaporize me in an instant. Weighing 1,500 tons and measuring 13.5 meters long and 9.1 meters wide, the LHD is shaped somewhat like a vast twisted snake swallowing its own tail. It is the world’s largest superconductor and the only one of its type in the world. It costs the government ¥5 billion a year to run. The technology started with secret military research in the Soviet Union and, separately, the United States and Great Britain, just after World War II. Until today, the most dramatic demonstrations of nuclear energy the world has seen have been purely destructive: the fission (atomic) bombs used against the cities of Hiroshima and Nagasaki in August 1945, and the later testing of hydrogen bombs, in which a fission explosion triggers fusion. Nuclear fission splits nuclei to create energy and nuclear fusion joins them to do the same thing. The first fusion device conceived for peaceful purposes was the Tokamak, proposed by Igor Tamm and Andrei Sakharov in the Soviet Union in 1951. Fusion research for peaceful use was opened to international collaboration in 1958, after a United Nations conference in Geneva on the peaceful uses of atomic energy. Since then, Japan, the European Union and the United States have made great efforts to modify and improve the machine. The Tokamak is still widely regarded as the most promising fusion device, but there are other similar devices in the world, including one in Naka, Ibaraki Prefecture, and at the Culham Centre for Fusion Energy near Oxford, England. 
The Tokamak has reached temperatures of 500 million degrees Celsius in experiments, more than 30 times hotter than the sun. Nuclear power for peaceful use has developed rapidly and there are now 400 nuclear fission power plants around the world. By contrast, the aim of constructing fusion reactors to generate electricity is still in the research and development phase. “Replicating the fusion of hydrogen into helium that powers the sun, in earthly conditions, means generating temperatures beyond 100 million degrees Celsius,” explains Yamada. “This creates plasma, the fourth state of matter after solids, liquids and gases.” All stars, our sun included, are made of plasma. Flashes of lightning are natural plasma and so too are the spectacular Northern Lights. Artificial plasma, at much lower pressure, is present inside neon lights and plasma television screens. “The extreme temperatures inside the LHD mean the plasma must not be allowed to touch the walls of the device. If it did, (the walls) would melt.” Herein lies the main difficulty with the LHD. Researchers must create materials strong enough to withstand fusion at temperatures many times hotter than the sun. Plasma at extremely high temperatures creates wild, unstable reactions and would irreparably damage any machine made to contain it that uses existing materials. Yamada demonstrates this process by heating a circular fluorescent tube inside a microwave oven in a NIFS display area. When he takes it out, it casts a purplish glow and is warm to the touch. He says, “The glass walls of the tube cool the plasma. When a similar reaction occurs inside the sun, its vast gravitational pull keeps the plasma from shooting in all directions. “Once new materials have been invented, the way will be open to constructing fusion reactors able to generate electricity, using easily obtained resources that will never run out. The raw materials needed for creating plasma in fusion reactions are lithium and deuterium, which can be extracted from seawater.” One widespread modern use of lithium is in mobile phones. The amount commonly used in each phone is about 0.3 grams. Together with the deuterium taken from 3 liters of seawater, a fusion reaction equivalent to 22,000 kilowatt-hours of electricity could be created. This amount of electricity would supply a typical family in a developed country for a couple of years. Or to put it another way, one liter of seawater contains enough deuterium to provide the energy content, when fused with tritium, of more than 500 liters of petroleum. Fusion power plants of the future, producing a million kilowatts, would need about a tenth of a ton of deuterium and 10 tons of lithium a year as fuel. Seawater covers over 70 percent of our planet and rates of extraction for hundreds of fusion reactors around the globe would never exhaust supplies. Plasma inside the LHD is prevented from touching the walls by a magnetic field created inside the sinuous innards of the machine. It is done by means of a twisting, orange-hued metal alloy, wound 450 times and coiling round the outer walls of the giant tube. The coil is exposed to an electromagnetic force reaching 1,000 tons per meter. Beforehand the coil and supporting structure are cooled to minus 270 C. When cooled the structure typically shrinks 2 mm. The machine is built to tolerate a shrinkage of 2 cm. Hydrogen gas is heated and injected into the machine. After reaching 10,000 C, the hydrogen molecules disintegrate into atoms. 
Then the parts of the atoms, the positive nucleus and the tiny negative electrons spinning around it, are unbound and create plasma. Yamada explains how the process works: “Atoms that have lost electrons become ions and are 2,000 times heavier than electrons. The ions are trapped and rotated along the magnetic field and the electrons are sent in an opposite motion. This is the means by which plasma many times the temperature of the sun is kept from destroying the LHD. The sun’s temperature is only 15 million degrees. Its vastness — it is 100 times the size of the Earth — allows fusion to occur at a much less fierce heat than inside the LHD.” When being readied for experiments, the LHD is cooled for a month. Usually from October to February each year it makes plasma four days a week. Last year, however, the machine was switched on only between Oct. 11 and the end of December, due to budget cuts. When the experiment ends and the LHD is switched off, it takes another month to warm up again. Although at the moment the Toki LHD is the only one of its type in the world, another device will be built in Germany in 2015. After that, the next big development in fusion science will be the ITER project (originally the International Thermonuclear Experimental Reactor), under which a Tokamak 10 times bigger than Toki’s LHD will be built in Cadarache, France, in 2019. It is expected to be operational around 2027, when plasma will be ignited for the first time. Forty-five percent of the cost will be funded by the European Union, while Japan, China, India, Russia, the United States and South Korea will each contribute around 9 percent. A demonstration reactor is expected to start producing electrical power from fusion energy in the 2030s. Then the next phase will be the construction of a new generation of fusion reactors. They are expected to start generating electric power, in place of current technologies, around the middle of this century. Yamada defends the fusion process as a lot safer than conventional nuclear power. “Radioactive materials used in fusion do not have to be moved off-site. Waste also does not have to be stored for thousands of years, as is the case with spent uranium at conventional nuclear power stations. Fusion waste could be reused after cooling off for 100 years.” As regards local politics, the NIFS is seen by Toki’s municipal government as a valuable asset to the area. A couple of local politicians oppose it, however, fearing “industrial accidents.” “But the LHD is for studying plasma at high temperatures,” says Yamada. “Not creating fusion. So the dangers of radioactive waste are not the same in Toki as they would be at the site of a real fusion reactor.” Since not all of the scientists analyzing the LHD experiments can be physically present in the control room, the results are studied by linking computer systems at eight universities around Japan. NIFS also attracts participating scientists from all over the world. The Deputy Director General of NIFS, professor Osamu Kaneko, believes the educational function of the institute is very important. “Since it will take 20 or more years to make fusion reactors a reality, it is necessary to educate young people as successors to our research. NIFS has a physical sciences department at the Graduate University of Advanced Studies in Kanagawa Prefecture. Thirty students from Japan and abroad study for their PhDs in Toki, at the forefront of nuclear fusion research,” says Kaneko. This big science project is, in a sense, reaching for Utopia. 
It heralds the end of dependence on fossil fuels such as coal, petroleum and natural gas, along with all their attendant ills: environmental degradation, global warming and the unstable geopolitics of oil. The many unsolved problems associated with atomic fission power would also end. Toki’s LHD is a project looking for results in the long term — extremely long term — explains Akio Komori, director general of the NIFS. “Our era is the longest known period between ice ages,” Komori says. “The occurrence of another ice age, despite the current fear of global warming, is an overwhelming likelihood. In that distant future, when the world is again covered in ice, fusion plants, creating ‘suns’ all over the globe, would allow life on Earth to flourish for another 5 billion years, until the sun in the sky finally burns out.”
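A rough sanity check on the figures quoted earlier (my own arithmetic, not a number from the article or from NIFS): if a typical household in a developed country uses on the order of 10,000 kilowatt-hours of electricity a year (the true figure varies widely by country), then the 22,000 kilowatt-hours said to come from 0.3 grams of lithium plus the deuterium in 3 liters of seawater corresponds to 22,000 / 10,000 = 2.2 years of supply, consistent with the "couple of years" claim above.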
<urn:uuid:b2ef4a0a-02b2-4877-8e3a-e6c0d48ee689>
3.09375
2,447
News Article
Science & Tech.
46.914612
546
GEL is a dynamically scoped language. We will explain what this means below. That is, normal variables and functions are dynamically scoped. The exception is parameter variables, which are always global. Like most programming languages, GEL has different types of variables. Normally when a variable is defined in a function, it is visible from that function and from all functions that are called (all higher contexts). For example, suppose a function f defines a variable a and then calls function g. Then function g can reference a. But once f returns, a goes out of scope. For example, the following code will print out 5. The function g cannot be called on the top level (outside f, as a will not be defined). function f() = (a:=5; g()); function g() = print(a); f(); If you define a variable inside a function it will override any variables defined in calling functions. For example, we modify the above code and write: function f() = (a:=5; g()); function g() = print(a); a:=10; f(); Setting a to 5 inside f does not change the value of a at the top (global) level, so if you now check the value of a it will still be 10. Function arguments are exactly like variables defined inside the function, except that they are initialized with the value that was passed to the function. Other than this point, they are treated just like all other variables defined inside the function. Functions are treated exactly like variables. Hence you can locally redefine functions. Normally (on the top level) you cannot redefine protected variables and functions. But locally you can do this. Consider the following session: genius> function f(x) = sin(x)^2 = (`(x)=(sin(x)^2)) genius> function g(x) = ((function sin(x)=x^10);f(x)) = (`(x)=((sin:=(`(x)=(x^10)));f(x))) genius> g(10) = 1e20 Functions and variables defined at the top level are considered global. They are visible from anywhere. As we said, the following function f will not change the value of a to 5. a=6; function f() = (a:=5); f(); To set a to the value 3 from inside a function you must call the set function, which always sets the toplevel global. There is no way to set a local variable in some function from a subroutine. If this is required, you must use passing by reference. So to recap in a more technical language: Genius operates with different numbered contexts. The top level is the context 0 (zero). Whenever a function is entered, the context is raised, and when the function returns the context is lowered. A function or a variable is always visible from all higher numbered contexts. When a variable was defined in a lower numbered context, then setting this variable has the effect of creating a new local variable in the current context number and this variable will now be visible from all higher numbered contexts. There are also true local variables, which are not seen from anywhere but the current context. Also, when returning functions by value, the returned function may reference variables not visible from a higher context and this may be a problem. See the sections True Local Variables and Returning Functions.
<urn:uuid:0a75caa1-d419-405f-a5a1-a844a1b452be>
3.015625
741
Documentation
Software Dev.
60.073473
547
Joined: 16 Mar 2004 | Posted: Wed Dec 13, 2006 11:39 am | Post subject: New Method Creates Nanowire Detectors Exactly Where Needed There seems to be little doubt among cancer researchers that new detection systems using nanowires and microfluidics hold the promise of providing a quantum leap in the detection of cancer-related molecules and genes. However, researchers also know that there are significant technical barriers that must be overcome to realize that promise, including the current difficulty in creating microfluidic devices built around nanowire detectors. Now, a team of investigators at the Nanosystems Biology Cancer Center, one of eight NCI-funded Centers of Cancer Nanotechnology Excellence, has developed a method for creating conducting polymer nanowires in place within microfluidic circuits. The team, led by Hsian-Rong Tseng, Ph.D., of the University of California, Los Angeles, and James Heath, Ph.D., of the California Institute of Technology, reported their work in the journal Chemical Communications. The researchers create the nanowires using standard microelectrodes built into the microfluidic device specifically for the purpose of carrying out electrochemical reactions within the channels of the device. This allows them to use the microfluidic channels to introduce the precursor molecules, or monomers, needed to create the conducting polymer nanowires and trigger an electrochemical reaction at the exact place where the nanowires are needed to function as biomolecule detectors. This reaction causes the monomers to link to one another, forming the conducting polymer nanowires. This process can create two different types of polymer nanowires, one made of polyaniline, the other of polypyrrole. The chemical reactions are completed within 40 minutes. Once formed, the nanowires can function immediately as detectors, with the electrodes used to form the nanowires now functioning as the circuitry that connects the nanowires to electrical signal recorders. The investigators demonstrate that these detectors are highly sensitive to changes in pH and to changing ammonia concentrations, though they note that these nanowires should be able to be used to detect a wide range of biomolecules. This work, which was supported in part by the National Cancer Institute, is detailed in a paper titled, “Electrochemical fabrication of conducting polymer nanowires in an integrated microfluidic system.” This story was first posted on 26th September 2006.
<urn:uuid:05aa96f1-4573-4ed6-a24c-f08440ed4788>
2.8125
525
Comment Section
Science & Tech.
14.336814
548
Joined: 16 Mar 2004 |Posted: Tue Aug 11, 2009 11:55 am Post subject: New Line of Lasers for Biomedicine |Dundee Leads EU Project to Develop Next Generation of Lasers Laser technology has revolutionised the world of medicine in ways never before thought of. More and more often the scalpel is giving way to a new generation of lasers. Now the FAST-DOT project, backed by the EU with €10.1 million in financing, is underway to develop a new line of lasers for biomedical applications. Led by a team located at the University of Dundee, 18 European partners from 12 countries will pool their knowledge and resources to develop the next generation of lasers which will be used for biomedical applications. Their combined efforts mean that they are able to conduct nearly 100 person years of work in a fraction of the time. According to Professor Edik Rafailov of the University of Dundee, 'This project will revolutionise the use of lasers in the biomedical field, providing both practitioners and researchers with pocket sized ultra high performance lasers at a substantially lower cost, which will make their widespread use affordable.' The new lasers that will be developed will not only be much smaller but also more energy efficient than current lasers in use. Current lasers are not portable and are heavy on energy consumption. The new lasers will be designed for use in microscopy and nanosurgery, where high precision cutting, imaging and treatment therapies will be made possible. According to Neil Stewart, FAST-DOT project manager, 'The objectives of the project are to use a technology called quantum dot materials, probably gallium arsenide, and exploit their lasing characteristics for use in biomedical applications, such as laser tweezing for microsurgery.' The new lasers will mean that surgeons and life scientists will have access to much higher performance and lower cost lasers than are currently available and will open up exciting new application areas for lasers in biomedicine. There is also hope that new lasers under development will also decrease in size. Currently, lasers are roughly the size of a shoebox. FAST-DOT hopes to bring down the size to that of a matchbox while bringing the cost down to a tenth of what they currently are. Dr Stewart also claimed that the new lasers would be applicable in the field of micro-surgery. 'With these lasers we ought to be able to take that down to about a very few microns. And because of the differences in the way the energy is controlled, it enables us to deliver very controlled amounts of energy so we are also going to be investigating things like tissue welding,' he said. Application of lasers in nanosurgery procedure: Ablation of a single mitochondrion inside a living fibroblast cell, before (left) and after (right) laser ablation. Laser systems for use in medicine were initially seen as a surgical tool which is minimally invasive, and were used for the ablation, cutting, or coagulation of tissue. As a result, their earliest application was witnessed in the field of general surgery and laparoscopic surgery. By the 1990s lasers were gaining popularity in the field of ophthalmology for sight correction. Now however lasers are being used in a diagnostic sense thanks to their non-invasive capabilities as well as being utilized for the detection and monitoring of certain diseases.
<urn:uuid:f3058f00-68e1-4ae6-858c-194efbd80988>
2.59375
698
Comment Section
Science & Tech.
31.355379
549
Biologists have known for decades that cells use tiny molecular motors to move chromosomes, mitochondria, and many other organelles within the cell, but no one has been able to understand what "steers" these engines to their destinations. Now, researchers at the University of Rochester have shed new light on how cells accomplish this feat, and the results may eventually lead to new approaches to fighting pathogens and neurological diseases. Michael Welte, associate professor of biology, shows in a paper published in today's issue of Cell that the mechanisms that control the molecular motors are quite different from what biologists have previously believed. Before these findings, scientists assumed that the number of motors attached to an organelle determined how far and fast the organelle could travel, but Welte and colleagues have discovered that it is not the number of motors, but yet-to-be-discovered molecules that are likely the master regulators. "The fact that motor number has nothing to do with regulating transport is extremely surprising, and somewhat unsettling to people working in vitro," says Welte. "It says we're really missing something when we study these motors only in the test tube instead of in a living cell." Intracellular transport is crucial to a cell's health, says Welte. For instance, during cell division, one copy of each of the cell's chromosomes migrates to one side of the cell while the other copy moves to the other side. If this movement is disturbed, it could cause an imbalance of chromosomes in the daughter cells, which might die or become cancerous. Similarly, neurons, some of which are as much as three feet in length, manufacture proteins and organelles at one end and then must move that precious cargo all the way to the far end where they'll be used. This is an enormous task, says Welte, and defects in this transport are thought to cause a number of neurological diseases. Given the difficulty of investigating these tiny motors acting within the cell, biologists have performed basic experiments on them outside of the cell in a carefully controlled environment. This led them to believe that the speed and distance an organelle could be transported depended on how many motors were pulling it, says Welte. Thus, the scientists reasoned, perhaps the cell simply attaches the right number of motors to an organelle to send it the right distance. Although this "multi-motor" hypothesis is very simple and elegant, says Welte, whether it actually holds true within living cells had never been tested. Welte's graduate student, Susan Tran, decided to perform that test. She created fruit-fly eggs lacking a type of molecular motor called kinesin and found that certain organelles stopped moving—strong evidence that kinesin is responsible for their transport. Tran then made another type of mutant eggs, this time ones that produced only about half the number of kinesin motors of a regular egg. In both types of eggs, organelles were transported with the same speed and the same distance. Welte needed to know if this equality was because the normal egg was simply utilizing only half the available kinesin motors, or if some master regulator was controlling the organelle's progress, regardless of the number of motors moving it. To do this, Welte turned to Steven Gross, associate professor of developmental and cell biology at the University of California. Gross' group uses an apparatus called "optical tweezers" that employs laser light to measure the tiny forces the motors generate. 
The team found that organelles in regular cells are pulled with twice the force of Tran's mutant, low-kinesin cells. "That clinched it for us," says Welte. "Yes, there are multiple motors moving organelles around, but exactly how many doesn't matter. There is something else in the cell that's controlling all the motors. That opens up a big area for research—find what's driving these motors and maybe we can control them all by controlling one thing." Welte and his team are now looking at where in the cell this signal comes from and how it influences the motors. Although Welte's team studied fruit fly eggs, the motors moving the organelles are present in all animals and employed for many tasks, including transport in human neurons. Welte also points out that viruses, including HIV, make use of the same kind of motors to move about the cell, first to get from the site of penetration to the nucleus, where they multiply, and then to get progeny viruses back to the cell surface. If Welte and others can figure out how cells normally control these motors, it may be possible to prevent HIV from taking control of the motors and thus to keep it, and other intracellular pathogens, at the edge of the cell where they can do little harm. This research was funded by the National Institutes of Health, and includes researchers from the University of Rochester, the University of California, Irvine, and the University of Texas at Austin.
<urn:uuid:7509ef7b-8994-402e-a924-65bea8d3e7eb>
3.59375
1,018
News Article
Science & Tech.
34.636944
550
Posted: December 7, 2009 Super cool atom thermometer (Nanowerk News) As physicists strive to cool atoms down to ever more frigid temperatures, they face the daunting task of developing new, reliable ways of measuring these extreme lows. Now a team of physicists has devised a thermometer that can potentially measure temperatures as low as tens of trillionths of a degree above absolute zero. Their experiment is reported in the current issue of Physical Review Letters and highlighted with a Viewpoint in the December 7 issue of Physics. Physicists have developed a new thermometry method suitable for measuring temperatures of ultracold atoms. (Illustration: Alan Stonebraker) Physicists can currently cool atoms to a few billionths of a degree, but even this is too hot for certain applications. For example, Richard Feynman dreamed of using ultracold atoms to simulate the complex quantum mechanical behavior of electrons in certain materials. This would require the atoms to be lowered to temperatures at least a hundred times colder than what has ever been achieved. Unfortunately, thermometers that can measure temperatures of a few billionths of a degree rely on physics that doesn't apply at these extremely low temperatures. Now a team at the MIT-Harvard Center for Ultra-Cold Atoms has developed a thermometer that can work in this unprecedentedly cold regime. The trick is to place the system in a magnetic field, and then measure the atoms' average magnetization. By determining a handful of easily-measured properties, the physicists extracted the temperature of the system from the magnetization. While they demonstrated the method on atoms cooled to one billionth of a degree, they also showed that it should work for atoms hundreds of times cooler, meaning the thermometer will be an invaluable tool for physicists pushing the cold frontier. Source: American Physical Society
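As a rough illustration of reading a temperature off a magnetization, here is a sketch for an idealized gas of non-interacting spin-1/2 atoms; the actual experiment is considerably more subtle, and the applied field and the use of the Bohr magneton below are assumptions made purely for the example.

```python
import math

# Idealized two-level (spin-1/2) model: average magnetization per atom
# m = tanh(mu*B / (k_B*T)).  Inverting this turns a magnetization measurement
# into a temperature estimate.  The real method is more involved; the field
# value and the Bohr magneton stand-in are illustrative assumptions.

k_B = 1.380649e-23        # Boltzmann constant, J/K
mu = 9.274e-24            # Bohr magneton, J/T, standing in for the atomic moment

def temperature_from_magnetization(m, B):
    """Invert m = tanh(mu*B/(k_B*T)) for T; valid for 0 < m < 1."""
    return mu * B / (k_B * math.atanh(m))

B = 1e-7                  # tesla, a hypothetical weak applied field
for m in (0.5, 0.9, 0.99):
    T = temperature_from_magnetization(m, B)
    print(f"measured m = {m:4.2f}  ->  inferred T ~ {T:.2e} K")
```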
<urn:uuid:39aad08e-2ac1-46ad-a696-742392fc2775>
3.90625
406
News Article
Science & Tech.
19.743686
551
MANY geologists rather dismiss man-made climate change. On the timescales they work in, they figure nature will absorb anything we throw at it. Not David Archer. The Long Thaw shows how, by digging up and burning our planet's carbon, we are determining climate for millennia hence. It also shows how we may soon unleash changes to the carbon cycle that will cancel the next ice age, and maybe the one after that, not to mention melting enough ice to flood land less than 20 metres above sea level. A beautifully written primer on why climate change matters hugely for our future - on all timescales.
<urn:uuid:afce46c1-c3eb-4974-8bef-ada8cf58d2b5>
2.921875
153
Truncated
Science & Tech.
50.520179
552
The pulling power of chaos Our Science Essay ponders the riddle of the wandering stars. Starting with Poincaré, complex maths is transforming how we plot routes through the solar system. What is the most efficient way to get a space probe to its target? When Apollo 11 went to the moon in 1969 it followed a conventional Hohmann transfer orbit. Imagine an egg-shaped outline, with the earth at the bottom. As the spacecraft comes up the left-hand side, it burns fuel to accelerate, and swings into orbit around the moon. This was the quickest route – aside from the impractical one of flying straight out by burning fuel the whole time – and, in a manned mission, speed was of the essence. However, we now know that when efficient use of fuel is the main objective, and time is unimportant, less direct routes can be much better. When Nasa sent the Cassini probe to Saturn, it first went inwards in the solar system, undergoing two close encounters with Venus. Then it swung back past the earth and on to Jupiter before making a sharp turn to meet Saturn. Trajectories such as this exploit the slingshot effect, in which the spacecraft steals energy from a planet. The tiny spacecraft speeds up considerably, pulled towards the planet by gravity; the massive planet slows down very slightly, but not enough to notice. Yet there is another, subtler effect of orbital dynamics which is also being used to get spacecraft to their targets using as little fuel as possible: chaos. The technique was first used in 1991. A Japanese space probe, Hiten, had been surveying the moon. Having completed its mission and returned to orbit the earth, it had pretty much run out of fuel. Edward Belbruno, an orbital analyst at Nasa's Jet Propulsion Laboratory in Los Angeles, came up with an idea that sounded impossible. He wanted to extend its useful life and enhance its scientific value by sending it back to the moon. Then it would visit the moon's Trojan points – the points in space 60 degrees ahead of and behind the moon in its orbit where gravity and centrifugal forces cancel each other out. There it could search for cosmic dust that might have become trapped. It sounded crazy, but Belbruno knew a way to do it. Mathematicians and physicists had realised that the motion of bodies under gravity can be chaotic – highly irregular, despite obeying entirely deterministic laws. Chaotic orbits are sensitive to very small disturbances. Normally, this feature is seen as an obstacle to prediction, but Belbruno realised that it could be used to advantage. Very small changes in position or speed, which use very little fuel, can cause large changes to the trajectory. That makes it easy to redirect the spacecraft in a fuel-efficient, though possibly slow, manner. One place where chaotic orbits can arise is somewhere called the "L1 Lagrange point" between the earth and the moon, where the net gravitational force is zero (essentially, objects are "suspended" between the two bodies because of the forces generated by each). Belbruno designed a new orbit that took Hiten close to the L1 point, where a short, carefully calculated burst of its rockets would loop it out to where he wanted it to go. He faxed his proposals, unsolicited, to the Japanese team; they loved the idea. When the probe arrived at L1, it found there was no more dust than you'd expect; after a few years orbiting the moon, Hiten was crashed into its surface in 1993. Still, it had ushered in a new era of space travel. A similar trick was used for Nasa's Genesis mission to bring back samples of the solar wind.
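As a point of comparison for the fuel-efficiency argument above, here is a hedged sketch of the delta-v budget for a classical Hohmann transfer between two circular orbits, computed from the vis-viva equation; the orbit radii are illustrative round numbers, not the Apollo mission profile.

```python
import math

# Delta-v for a Hohmann transfer between two circular orbits around the Earth,
# using the vis-viva equation.  Values are rough, illustrative numbers.

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2

def hohmann_delta_v(r1, r2, mu=MU_EARTH):
    """Return (dv1, dv2): burns at departure and arrival, in m/s."""
    a_transfer = (r1 + r2) / 2.0
    v1_circ = math.sqrt(mu / r1)
    v2_circ = math.sqrt(mu / r2)
    v_peri = math.sqrt(mu * (2.0 / r1 - 1.0 / a_transfer))  # speed at r1 on the ellipse
    v_apo  = math.sqrt(mu * (2.0 / r2 - 1.0 / a_transfer))  # speed at r2 on the ellipse
    return v_peri - v1_circ, v2_circ - v_apo

r_leo  = 6.371e6 + 300e3     # ~300 km low Earth orbit
r_moon = 3.844e8             # roughly the Moon's orbital radius
dv1, dv2 = hohmann_delta_v(r_leo, r_moon)
print(f"burn 1: {dv1:.0f} m/s, burn 2: {dv2:.0f} m/s, total: {dv1 + dv2:.0f} m/s")
```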
The first Oscar Our fascination with the planets goes back to prehistoric times, when human eyes watched the star-spangled splendour of the night sky and human minds were awed by the cosmic spectacle. Countless stars moved across the sky, pinpricks of light on a gigantic, rotating velvet-black bowl. A few of those pinpricks of light, however, did not obey the rules. They went walkabout. The Greeks called them planetes – wanderers; we call them planets. Their paths are complicated and sometimes loop back on themselves. It is not surprising that the ancients attributed their movements to the caprices of supernatural beings. Ptolemy, a Roman who lived in Egypt around AD120, began the lengthy process of taming the solar system, proposing that we live in an earth-centred universe in which everything revolves around humanity in complex combinations of circles supported by giant crystal spheres. Around 1300, the Persian Islamic philosopher Najm al-Katibi proposed a heliocentric (sun-centred) theory, but changed his mind. The big breakthrough came in 1543 when Nicolaus Copernicus published On the Revolutions of the Celestial Spheres. He was clearly influenced by al-Katibi, but he went further, setting out an explicitly heliocentric system. Among its implications was the novel thought that human beings were not at the centre of things. To the Christian Church, this suggestion was contrary to doctrine, and explicit heliocentrism was heresy. The riddle of the wandering stars was finally answered in 1609 by Johannes Kepler, an assistant to the astronomer Tycho Brahe. When his employer died unexpectedly, Kepler took over as court mathematician to Emperor Rudolph II. His main role was casting imperial horoscopes, but he also had time to analyse the orbit of Mars. For years, he tried without success to fit the planet’s orbit to an egg-shaped curve, sharper at one end than the other. In 1605 he decided to try an ellipse, equally rounded at both ends. He discovered that this shape fitted the observations, and declared: “Ah, what a foolish bird I have been!” In 1609, Kepler published A New Astronomy, stating two basic laws of planetary motion. First law: all planets move in ellipses with the sun as a focus. Second law: a planet moves along its orbit in such a manner that it sweeps out equal areas in equal times. In 1619 he returned to planetary orbits in The Harmony of the World. The book contained many curious ideas – for example, that planets emit musical sounds as they roll round the sun. But it also contained his third law: the squares of the time taken for planets to orbit are proportional to the cubes of their distances from the sun. This work led to one of the greatest scientific discoveries of all time. In his Mathematical Principles of Natural Philosophy of 1687, Isaac Newton proved that Kepler’s three laws are equivalent to a single universal law of gravitation. Two bodies attract each other with a force that is proportional to their mass and inversely proportional to the square of the distance between them. Newton’s law of gravity had a huge advantage over Kepler’s ellipses: it applies to any system of bodies, however many there might be. The price to be paid is the way the law prescribes the orbits – not as geometric shapes, but as solutions of a mathematical equation. The problem is to solve it. Newton achieved that for two bodies – a planet plus the sun – and the answer is what Kepler had already discovered: the bodies move around their common centre of gravity in elliptical orbits. 
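Newton's form of Kepler's third law is easy to check numerically. The sketch below assumes circular orbits and rounded solar-system values, so the outputs are approximate.

```python
import math

# Numerical check of Kepler's third law, T^2 proportional to a^3, using
# Newton's form T = 2*pi*sqrt(a^3 / (G*M)).  Semi-major axes are approximate.

G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30             # mass of the Sun, kg
AU = 1.496e11                # astronomical unit, m
YEAR = 3.156e7               # seconds in a year

def period_years(a_au):
    a = a_au * AU
    return 2 * math.pi * math.sqrt(a**3 / (G * M_SUN)) / YEAR

for name, a_au in [("Mercury", 0.39), ("Earth", 1.0), ("Mars", 1.52), ("Jupiter", 5.2)]:
    T = period_years(a_au)
    print(f"{name:8s} a = {a_au:4.2f} AU  T = {T:6.2f} yr  T^2/a^3 = {T**2 / a_au**3:.3f}")
```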
But some questions involve more than two bodies. If you want to predict the motion of the moon with high precision, you have to include both the sun and the earth in your equations. So, fresh from Newton's success with the motion of two bodies under gravity, mathematicians and physicists moved on to three bodies. Their initial optimism rapidly dissipated; the three-body problem turned out to be very different from the two-body problem. In fact, it defied solution. Only in the late 19th century did its true complexity become apparent, however, when Henri Poincaré tried to win a scientific prize. The 60th birthday of Oscar II, king of Norway and Sweden, happened in 1889. The Swedish mathematician Gösta Mittag-Leffler persuaded the king to mark the occasion by announcing a prize for calculating the motion of any number of bodies under gravity and finding out whether the solar system is stable. Poincaré decided to start with the simplest case: two bodies (say the sun and a planet) moving in perfect circles, with the third body being a dust particle of negligible mass. Even that version proved too ambitious and he failed to solve it, but he made so much progress that he was awarded the prize anyway. In particular, Poincaré proved that sometimes the orbit of the dust particle became extraordinarily messy. He deduced this from some highly original ideas that made it possible to infer features of the solutions without actually solving the equations, saying: "One is struck by the complexity of this figure that I am not even attempting to draw." We now recognise Poincaré's discovery as a sign that the dynamics of such a system are chaotic. The equations are not random, but their solutions can be very irregular, sharing features with properly random processes. This idea is colloquially known as chaos theory, and it all goes back to Poincaré and his Oscar award. Well, that's the story that historians of mathematics used to tell. Around 1990, however, June Barrow-Green found a copy of Poincaré's prize-winning memoir in the depths of the Mittag-Leffler Institute in Sweden. She realised that when he submitted his work he had overlooked the chaotic solutions. He spotted the error before the memoir was published, and paid to have the original version destroyed and a corrected version printed. His initial oversight lay undiscovered for a century. Building on Poincaré's discovery, we now know that the three-body problem does not have simple solutions. Even so, vast progress has been made on the many-body problem in special cases; for example, when all of the bodies have the same mass. This is seldom a realistic assumption in celestial mechanics, but it is sensible for some models of elementary particles, such as electrons. In 1993, Cristopher Moore at the Santa Fe Institute found a solution to the three-body problem in which the bodies play follow-my-leader along the same orbit. Even more surprising is the shape of the orbit – a figure of eight. Stranger than imagination In 2000, the Spanish mathematician Carles Simó used a computer to show that this configuration is stable: it persists after small disturbances. Indeed, it remains stable even when the three masses are slightly different, so, somewhere in the universe, there might be three stars of almost identical mass, chasing each other along a figure-of-eight path. The same year, Douglas Heggie of Edinburgh University estimated that the number of such triple stars lies somewhere between one per galaxy and one per universe.
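The figure-of-eight orbit can be reproduced with a few lines of numerical integration. The initial conditions below are the commonly quoted values for the equal-mass choreography, reproduced here from memory, so treat them as approximate rather than authoritative.

```python
import numpy as np

# Planar three-body "figure eight" choreography: three equal masses chasing
# each other around one closed curve.  Units with G = 1 and m = 1; the initial
# conditions are the commonly quoted Chenciner-Montgomery values (approximate).

G = 1.0
m = np.ones(3)

pos = np.array([[ 0.97000436, -0.24308753],
                [-0.97000436,  0.24308753],
                [ 0.0,         0.0       ]])
vel = np.array([[ 0.46620368,  0.43236573],
                [ 0.46620368,  0.43236573],
                [-0.93240737, -0.86473146]])

def accelerations(pos):
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * m[j] * d / np.linalg.norm(d) ** 3
    return acc

# Velocity-Verlet integration over roughly one period (~6.33 time units).
dt, steps = 1e-3, 6326
acc = accelerations(pos)
for _ in range(steps):
    pos += vel * dt + 0.5 * acc * dt ** 2
    new_acc = accelerations(pos)
    vel += 0.5 * (acc + new_acc) * dt
    acc = new_acc

print("positions after ~one period (should be close to the start):")
print(np.round(pos, 3))
```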
The figure-of-eight orbit is a planetary dance in which the bodies return to the same positions but swap their identities, each occupying the location that the body in front of it has vacated. This kind of orbit is called a choreography. Using a computer, Simó has found a huge number of choreographies, which can involve a large number of bodies. The solar system is, was, and will be, far stranger than we imagine. Consider the comet Oterma. A century ago, Oterma’s orbit was well outside that of Jupiter. After a close encounter with the giant planet, its orbit shifted inside that of Jupiter. After another close encounter, it switched back to outside again. We can confidently predict that Oterma will continue to switch orbits in this way every few decades, not because it breaks Newton’s law, but because it obeys it. Oterma’s gyrations are a far cry from Kepler’s tidy ellipses. The explanation is straight out of science fiction. In Pandora’s Star, Peter Hamilton portrays a future where people travel to planets encircling distant stars by train, running the railway lines through a wormhole, a short cut through space-time. In his Lensman series, Edward Elmer “Doc” Smith came up with the hyperspatial tube, which malevolent aliens used to invade human worlds from the fourth dimension. Although we don’t have wormholes or aliens from the fourth dimension, the planets and moons of the solar system are tied together by a network of multidimensional mathematical tubes that provide energy-efficient routes from one world to another. If we could visualise the ever-changing gravitational landscape that controls the planets, we would see these tubes, swirling along with the planets as they orbit the sun. Oterma’s orbit lies inside two tubes, which meet near Jupiter at a Lagrange point. One tube lies inside Jupiter’s orbit, the other outside. At the Lagrange point the comet can switch tubes, or not, depending on chaotic effects of Jovian and solar gravity; once inside a tube, however, Oterma is stuck there until the tube returns to the junction. Like a train that has to stay on the rails, but can change its route to another set of rails if someone switches the points, Oterma has some freedom to change its itinerary, but not a lot. As such, the way to plan an efficient mission profile is to work out which tubes are relevant to your choice of destination. Then you route your spacecraft along the inside of the first inbound tube, and when it gets to the associated Lagrange point you fire a quick burst on the motors to redirect it along the most suitable outbound tube. That tube naturally flows into the corresponding inbound tube of the next switching point . . . and so it goes on. Plans for future tubular space missions are already being drawn up. In 2000, Wang Sang Koon, Martin Lo, Jerrold Marsden and Shane Ross used the tube technique to plot what they described as a “Petit Grand Tour” – an energy-efficient route – around the moons of Jupiter, ending in orbit around Europa. In 2005, Michael Dellnitz, Oliver Junge, Marcus Post and Bianca Thiere used tubes to plan an energy-efficient mission from the earth to Venus. Their route would use one-third of the fuel required by the European Space Agency’s Venus Express mission, which has observed Venus since 2006. Past, present, future The influence of tubes may go further. Dellnitz has discovered evidence of a natural system of tubes connecting Jupiter to each of the inner planets. 
This remarkable structure, known as the Interplanetary Superhighway, hints that Jupiter, long known to be the dominant planet of the solar system, also plays the role of a celestial Grand Central Station. In the past, its tubes may well have organised the formation of the entire solar system, determining the spacings of the inner planets. So, is the solar system stable? The answer is a definite "maybe". Two research groups, run by Jack Wisdom of the Massachusetts Institute of Technology and Jacques Laskar of the Observatoire de Paris, have pioneered highly accurate computational methods to understand the probable future of the solar system. Wisdom's group has found that Pluto behaves chaotically over timescales of several hundred million years. In 1999, Norman Murray of the Canadian Institute for Theoretical Astrophysics and Matthew Holman of the Smithsonian Astrophysical Observatory discovered that the orbit of Uranus can also change chaotically, so that it occasionally gets close to Saturn, with the possibility that Uranus would then be ejected from the solar system. However, it will probably take about one quintillion years for this to happen. (The sun will blow up into a red giant much sooner, about five billion years from now. The earth will move outwards and might just escape being engulfed, even though tidal interactions will probably pull it into the sun. In any case, our planet's oceans will boil away long before that. And, anyway, the typical lifetime of a species is no more than five million years.) It's not just the future that is chaotic; the same methods can be used to investigate the solar system's past. In 1993, Renu Malhotra of the University of Arizona realised that the early solar system must have been far more dynamic than had been assumed. As the planets were condensing from the primal gas cloud surrounding the sun, there came a time when Jupiter, Saturn, Uranus and Neptune were nearly complete. Among them circulated huge numbers of rocky and icy "planetesimals", small bodies about ten kilometres across. Many of these were ejected into the wider solar system, reducing the energy of the four giant planets. Neptune migrated outwards. So did Uranus and Saturn. Jupiter, the big loser in the energy stakes, moved inwards. So, our solar system's apparently stable plan arose through an intricate dance of the giants, in which they threw the smallest bodies at each other in a riot of chaos. Is the solar system stable? Probably not, but don't worry: we won't be around to find out. Ian Stewart is emeritus professor of mathematics at the University of Warwick. His latest book is "17 Equations That Changed the World" (Profile, £15.99)
<urn:uuid:e341a786-6a4a-4849-be8e-f303cb373b58>
3.71875
3,735
Nonfiction Writing
Science & Tech.
44.753689
553
The Nobel Prize in Physics 2001 Eric A. Cornell, Wolfgang Ketterle, Carl E. Wieman Bose-Einstein Condensation in a Dilute Gas; The First 70 Years and Some Recent Experiments Eric A. Cornell held his Nobel Lecture December 8, 2001, at Aula Magna, Stockholm University. He was presented by Professor Mats Jonson, Chairman of the Nobel Committee for Physics. Summary: The fundamental ideas behind creating a Bose-Einstein condensate (BEC) in a gas are outlined. Starting with Heisenberg's uncertainty principle, the formation of a BEC is explained as occurring when the interatomic spacing becomes comparable to the thermal de Broglie wavelength. The conditions for creating a BEC in a gas are described, and the necessary ingredients are listed in an "Ultra Cold Alkali Tool Kit". Copyright © Nobel Web AB 2001 Credits: Kamera Communications (webcasting) Read the Nobel Lecture Pdf 447 kB Copyright © The Nobel Foundation 2001 From Les Prix Nobel. The Nobel Prizes 2001, Editor Tore Frängsmyr, [Nobel Foundation], Stockholm, 2002 MLA style: "Eric A. Cornell - Nobel Lecture: Bose-Einstein Condensation in a Dilute Gas; The First 70 Years and Some Recent Experiments". Nobelprize.org. 22 May 2013 http://www.nobelprize.org/nobel_prizes/physics/laureates/2001/cornell-lecture.html
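The criterion mentioned in the summary, interatomic spacing comparable to the thermal de Broglie wavelength, can be put into numbers with a short sketch; the density and temperatures below are illustrative values for a dilute rubidium gas, not figures from the lecture.

```python
import math

# Bose-Einstein condensation sets in when the thermal de Broglie wavelength,
# lambda = h / sqrt(2*pi*m*k_B*T), becomes comparable to the interatomic
# spacing, i.e. when the phase-space density n * lambda^3 reaches ~2.612.
# The density and temperatures below are illustrative, not experimental values.

h = 6.62607015e-34                     # Planck constant, J s
k_B = 1.380649e-23                     # Boltzmann constant, J/K
m_rb87 = 86.909 * 1.66053906660e-27    # mass of 87Rb, kg

def de_broglie_wavelength(T, m=m_rb87):
    return h / math.sqrt(2 * math.pi * m * k_B * T)

n = 1e20                               # atoms per m^3, a typical dilute-gas density
for T in (1e-5, 1e-6, 1e-7):
    lam = de_broglie_wavelength(T)
    psd = n * lam ** 3
    status = "condensed" if psd >= 2.612 else "thermal"
    print(f"T = {T:.0e} K: lambda = {lam*1e9:6.1f} nm, n*lambda^3 = {psd:8.3f} ({status})")
```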
<urn:uuid:e599b6f4-d04f-4617-b151-3ff1bce3b832>
2.671875
337
Knowledge Article
Science & Tech.
45.594626
554
Functional diversity in marine ecosystems Functional diversity refers to the variety of biological processes, functions or characteristics of a particular ecosystem, in this case marine biodiversity. Functional diversity reflects the biological complexity of an ecosystem. Some scientists argue that examining functional diversity may in fact be the most meaningful way of assessing biodiversity while avoiding the difficult and usually impossible task of cataloging all species in marine ecosystems. By focusing on processes, it may be easier to determine how an ecosystem can most effectively be protected. Protecting biological functions will protect many of the species that perform them. However, the exact function of most of the species is still largely unknown. There are several ways in which ecological classifications group organisms according to common functions: classification according to their habitat, to their position in the food web or to their functional feeding mechanism. Classification by ‘habitat’ Aquatic organisms can be divided into four major groups: pelagic, benthic, neuston and fringing, according to the water body which they inhabit. For the Coastal Wiki there are eleven sub-categories. Pelagic organisms are those that live in ocean water and are not associated with the bottom. They thus inhabit the water column and can be divided into plankton and nekton. Plankton are organisms that are suspended in the water (they float or are weakly self-propelled) and drift with it as it moves. Plankton is largely passive and includes algae, bacteria and a variety of animals. Plankton is usually subdivided into phytoplankton (photosynthetic organisms like algae) and zooplankton (animals), which refers to their ecological function. Plankton can also be subdivided into holoplankton and meroplankton. Holoplankton are permanent members, represented by many taxa in the sea. Meroplankton are temporary members, spending only a part of their life cycle in the plankton. They include larvae of anemones, barnacles, crabs and even fish, which later in life will join the nekton or the benthos. Meroplankton are very much a feature of the sea, particularly coastal waters, as the often sedentary adult forms of coastal species use their planktonic stage for dispersal. Nekton are organisms swimming actively in the water; they include a variety of animals, mostly fish. Benthos comprises organisms on the bed of the water body. Animals attached to or living on the bottom are referred to as epifauna, while those which burrow into soft sediments or live in spaces between sediment particles are described as infauna. Attached multicellular plants and algae are referred to as macrophytes, while single-celled or filamentous algae are called periphyton or microphytobenthos. Epiphytic algae are those which grow on macrophytes. Benthic consumers can be divided by size into macrofauna (>500 μm), meiofauna (10-500 μm) and micro-organisms (<10 μm). Neuston are those organisms associated with the water surface, where they are supported by surface tension. Most neuston require a very still water surface and are therefore very restricted in the sea. Fringing communities are floral communities that occur where the water is shallow enough for plentiful light to reach the bottom, allowing the growth of attached photosynthesisers, which may be entirely submerged or emergent into the air. Marine communities are composed mostly of algal seaweeds. Wetlands are composed of this type of vegetation.
There are a lot of other habitat classifications, for example the EUNIS Habitat types classification. This is a comprehensive pan-European system to facilitate the harmonized description and collection of data across Europe through the use of criteria for habitat identification; it covers all types of habitats from natural to artificial, from terrestrial to freshwater and marine. An example of the EUNIS habitat classification is the following list of marine habitats at level 1: - Littoral rock and other hard substrata - Littoral sediment - Infralittoral rock and other hard substrata - Circalittoral rock and other hard substrata - Sublittoral sediment - Deep-sea bed - Pelagic water column - Ice-associated marine habitats Classification by position in the food web It is very difficult to make generalizations about the trophic relationships in coastal marine systems, because the ecological habitats are so diverse. A simplified description of a food web: the phytoplankton are the primary producers and are eaten by the zooplankton (the smallest floating animals). The zooplankton are eaten by small fish (sardines, herring), and small fish are eaten by larger fish. At the top of the marine food web are the large predators (tuna, seals, sea-birds and some species of whales). Phytoplankton, small zooplankton and large zooplankton, larger animals and top predators all interact in a marine food web. Each species eats and is eaten by several other species at different trophic levels. The interactions in a food web are far more complex than the interactions in a food chain. Furthermore, the branching structure of food webs leads to fewer top predators compared with the numbers of top predators in a food chain. In the microbial loop, bacteria consume Dissolved Organic Material (DOM) that cannot be directly ingested by larger organisms. DOM includes liquid wastes of zooplankton and cytoplasm that leaks out of phytoplankton cells. Bacteria are eaten by microflagellates. Ciliates, which eat microflagellates, are in turn eaten by zooplankton. Micro-flagellates and ciliates help to recycle organic matter back into the marine food web. Bacteria also help to facilitate phytoplankton growth by releasing nutrients when they absorb DOM. Viruses are the smallest and most abundant organisms in the sea; viral activity produces DOM, thus helping to drive energy cycles for ocean life. The main difference in the microbial loop between estuarine and coastal waters is that coastal waters tend to have lower population densities of bacteria and of the organisms that prey on them. Classification by functional feeding mechanism There is a classification with several groups for marine and coastal systems: - grazer-scrapers feed upon attached algae - scavengers eat coarse particulate organic matter (detritus retained by a 1 mm sieve) - collectors eat fine particulate organic matter (detritus passing through a 1 mm sieve but retained by a 0.45 mm sieve) - suspension or filter feeders remove particles from the water column - deposit feeders pick particles from the ocean bed - predators consume other living animals - parasites derive their food from a living organism of another species (host); they usually live in or on the body of the host. In practice these distinctions are very imprecise but useful, as long as it is understood that they should not be too rigidly applied. The shelf bottom is occupied by diverse groups of benthic organisms that vary with changes in the bathymetry and sediment cover of the sea bed.
For example, gravel and coarse sand bottoms are mostly populated by filter feeders, and fine sand bottoms are predominantly inhabited by deposit feeders. Muddy substrates are almost exclusively inhabited by deposit and detritus feeders. COASTAL PELAGIC COMMUNITIES In pelagic communities the classification based on feeding mechanism is less successful because consumers are opportunistic and will eat anything that is the correct size for their mouthparts to deal with. Nekton are almost exclusively predators. The smaller species like the zooplankton are both predators and grazers. Plankton are therefore also classified by size, although there is again an overlap as many species will change to a larger size class as they grow older. Loss of functional diversity of fish due to intense fishing causing ecosystem-wide effects in Mediterranean sublittoral rocky reefs - Thorne-Miller Boyce (1999) The living ocean: understanding and protecting marine biodiversity. United States of America 213p - Dobson M. and Frid C. (1998). Ecology of aquatic systems. Addison Wesley Longman Limited: Edinburgh (England). p222 - http://eunis.eea.europa.eu/habitats.jsp - http://oceanworld.tamu.edu - From "'Fishing down marine food webs' as an integrative concept" by Daniel Pauly (University of British Columbia, Canada), Proceedings of the EXPO'98 Conference on Ocean Food Webs and Economic Productivity, online at the Community Research and Development Information Page - http://www.bigelow.org - http://www.waterencyclopedia.com/images
<urn:uuid:d7ff4c7c-91bb-4779-b851-05000edfea80>
3.90625
1,911
Knowledge Article
Science & Tech.
25.52511
555
Geoscience experts have developed a system of smart buoys that can predict the formation of self-reinforcing underwater waves, or solitons, 10 hours before they threaten the safety of oil rigs and divers. In 2008, Martin Goff and his colleagues at FUGROS, a geoscience consulting agency, successfully tested the system for three months in the Andaman Sea. Now, Global Ocean Associates have acknowledged the device as "the first deployed system with real-time warning capability." Scientists discover ancient rocks on the sea-floor that give them a window into the Earth's mantle By Gregory Mone Posted 04.14.2008 at 8:28 am No, you can't hike or spelunk or even tunnel down to the center of the Earth, even if movies like The Core or this summer's 3D adventure flick, Journey to the Center of the Earth, suggest otherwise. To find out about our planet's insides, scientists rely on very different tricks. And, apparently, a little luck. Five amazing, clean technologies that will set us free, in this month's energy-focused issue. Also: how to build a better bomb detector, the robotic toys that are raising your children, a human catapult, the world's smallest arcade, and much more.
<urn:uuid:67de25b3-2aa5-4eb7-8108-2b29a04d3ecc>
3.21875
266
Content Listing
Science & Tech.
52.998413
556
South Dakota Wind Energy Potential South Dakota ranks in the top five states for wind energy potential. In a recent study, South Dakota was estimated to have the potential to produce more than 3 million gigawatt-hours of energy on an annual basis. If this entire wind energy potential of South Dakota were harnessed, it would be nearly enough to provide power for the entire United States. While it is not feasible to harness every breeze, wind farms in the state have reported higher than industry standard capacity factors, which means the quality of South Dakota wind is high. This wind resource map shows how the wind is classified throughout South Dakota. NREL has also developed a wind resource map for South Dakota's tribal lands.
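As a back-of-the-envelope illustration of what a capacity factor means for annual output, here is a small sketch; the farm size and capacity factors are hypothetical examples, not data for any actual South Dakota project.

```python
# Annual energy = nameplate capacity x hours in a year x capacity factor.
# The 100 MW farm and the capacity factors below are hypothetical examples.

HOURS_PER_YEAR = 8760

def annual_energy_gwh(nameplate_mw, capacity_factor):
    return nameplate_mw * HOURS_PER_YEAR * capacity_factor / 1000.0  # MWh -> GWh

for cf in (0.30, 0.40, 0.45):   # a typical range; higher-quality wind pushes cf up
    print(f"100 MW farm at capacity factor {cf:.0%}: "
          f"{annual_energy_gwh(100, cf):.0f} GWh per year")
```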
<urn:uuid:7b1637d8-b5c9-496e-91bc-4a513be3b20b>
2.671875
190
Knowledge Article
Science & Tech.
44.912337
557
Using an ultra-bright electron source, scientists at the University of Toronto have recorded atomic motions in real time, offering a glimpse into the very essence of chemistry and biology at the atomic level. Their recording is a direct observation of a transition state in which atoms undergo chemical transformation into new structures with new properties. Using a new tool called a quantum simulator—based on a small-scale quantum computer—... A massive telescope buried in the Antarctic ice has detected 28 extremely high-energy... A fried breakfast food popular in Spain provided the inspiration for the development of doughnut-shaped droplets that may provide scientists with a new approach for studying fundamental issues in physics, mathematics, and materials. The doughnut-shaped droplets, a shape known as toroidal, are formed from two dissimilar liquids using a simple rotating stage and an injection needle. The massive ball of iron sitting at the center of Earth is not quite as "rock-solid" as has been thought, say two Stanford University mineral physicists. By conducting experiments that simulate the immense pressures deep in the planet's interior, the researchers determined that iron in Earth's inner core is only about 40% as strong as previous studies estimated. Graphene has dazzled scientists ever since its discovery more than a decade ago. But one long-sought goal has proved elusive: how to engineer into graphene a property called a band gap, which would be necessary to use the material to make transistors and other electronic devices. New findings by Massachusetts Institute of Technology researchers are a major step toward making graphene with this coveted property. With the hand of nature trained on a beaker of chemical fluid, the most delicate flower structures have been formed in a Harvard University laboratory—and not at the scale of inches, but microns. These minuscule sculptures, curved and delicate, don't resemble the cubic or jagged forms normally associated with crystals, though that's what they are. Rather, fields of flowers seem to bloom from the surface of a submerged glass slide. A new joint innovation by the National Physical Laboratory and the University of Cambridge could pave the way for redefining the ampere in terms of fundamental constants of physics. The world's first graphene single-electron pump provides the speed of electron flow needed to create a new standard for electrical current based on electron charge. Richard Feynman emphasized how the diffraction of individual particles at a grating, often described as the "most beautiful experiment in physics," is an unambiguous demonstration of wave-particle duality and contrary to classical physics. A research team recently used carefully made fluorescent molecules and nanometric detection accuracy to provide clear and tangible evidence of the quantum behavior of large molecules in real time. Bubble baths and soapy dishwater and the refreshing head on a beer: These are foams, beautiful yet ephemeral as the bubbles pop one by one. Now, a team of researchers has described mathematically the successive stages in the complex evolution and disappearance of foamy bubbles, a feat that could help in modeling industrial processes in which liquids mix or in the formation of solid foams such as those used to cushion bicycle helmets. An international team of physicists has found the first direct evidence of pear-shaped nuclei in exotic atoms.
The findings could advance the search for a new fundamental force in nature that could explain why the Big Bang created more matter than antimatter—a pivotal imbalance in the history of everything. From powerful computers to super-sensitive medical and environmental detectors that are faster, smaller, and use less energy—yes, we want them, but how do we get them? In research that is helping to lay the groundwork for the electronics of the future, University of Delaware scientists have confirmed the presence of a magnetic field generated by electrons which scientists had theorized existed, but that had never been proven until now. Physicists working with optical tweezers have conducted work to provide an all-in-one guide to help calculate the effect the use of these tools has on the energy levels of atoms under study. This effect can change the frequency at which atoms emit or absorb light and microwave radiation and skew results; the new findings should help physicists foresee effects on future experiments. Physicists in Switzerland have demonstrated one of the quintessential effects of quantum optics—known as the Hong-Ou-Mandel effect—with microwaves, which have a frequency that is 100,000 times lower than that of visible light. The experiment takes quantum optics into a new frequency regime and could eventually lead to new technological applications. The allure of personalized medicine has made new, more efficient ways of sequencing genes a top research priority. One promising technique involves reading DNA bases using changes in electrical current as they are threaded through a nanoscopic hole. Now, a team led by University of Pennsylvania physicists has used solid-state nanopores to differentiate single-stranded DNA molecules containing sequences of a single repeating base. An international research team led by astronomers from the Max Planck Institute for Radio Astronomy used a collection of large radio and optical telescopes to investigate in detail a pulsar that weighs twice as much as the sun. This neutron star, the most massive known to date, has provided new insights into the emission of gravitational radiation and serves as an interstellar laboratory for general relativity in extreme conditions. Using uniquely sensitive experimental techniques, scientists have found that laws of quantum physics—believed to apply primarily at sub-atomic levels—can actually have an impact at the molecular level. The study shows that movement of the ring-like molecule pyrrole over a metal surface runs counter to the classical physics that governs our everyday world. In a process comparable to squeezing an elephant through a pinhole, researchers at Missouri University of Science and Technology have designed a way to engineer atoms capable of funneling light through ultrasmall channels. Their research is the latest in a series of recent findings related to how light and matter interact at the atomic scale. Cancer cells that can break out of a tumor and invade other organs are more aggressive and nimble than nonmalignant cells, according to a new multi-institutional nationwide study. These cells exert greater force on their environment and can more easily maneuver through small spaces. One simple phenomenon explains why practical, self-sustaining fusion reactions have proved difficult to achieve: Turbulence in the superhot, electrically charged gas, called plasma, that circulates inside a fusion reactor can cause the plasma to lose much of its heat.
This prevents the plasma from reaching the temperatures needed to overcome the electrical repulsion between atomic nuclei. Until now. Lawrence Berkeley National Laboratory’s sound-restoration experts have done it again. They’ve helped to digitally recover a 128-year-old recording of Alexander Graham Bell’s voice, enabling people to hear the famed inventor speak for the first time. The recording ends with Bell saying “in witness whereof, hear my voice, Alexander Graham Bell.” Researchers at the University of California, Santa Barbara, in collaboration with colleagues at the École Polytechnique in France, have conclusively identified Auger recombination as the mechanism that causes light-emitting diodes (LEDs) to be less efficient at high drive currents. A Harvard University-led team of researchers has created a new type of nanoscale device that converts an optical signal into waves that travel along a metal surface. Significantly, the device can recognize specific kinds of polarized light and accordingly send the signal in one direction or another. The planet-hunting Kepler telescope has discovered two planets that seem like ideal places for some sort of life to flourish. According to scientists working with the NASA telescope, they are just the right size and in just the right place near their star. The discoveries, published online Thursday, mark a milestone in the search for planets where life could exist. Throughout decades of research on solar cells, one formula has been considered an absolute limit to the efficiency of such devices in converting sunlight into electricity: Called the Shockley-Queisser efficiency limit, it posits that the ultimate conversion efficiency can never exceed 34% for a single optimized semiconductor junction. Now, researchers have shown that there is a way to blow past that limit. Scientists in Australia have recently demonstrated that ultra-short bunches of electrons generated from laser-cooled atoms can be both very cold and ultra-fast. The low temperature permits sharp images, and the electron pulse duration has a similar effect to shutter speed, potentially allowing researchers to observe critical but quick dynamic processes, such as the picosecond duration of protein folding. A University of Missouri engineer has built a system that is able to launch a ring of plasma as far as two feet. Plasma is commonly created in the laboratory using powerful electromagnets, but previous efforts to hold the super-hot material together in open air have been unsuccessful. The new device does this by changing how the magnetic field around the plasma is arranged. Physicists operating an experiment located half a mile underground in Minnesota reported this weekend that they have found possible hints of dark-matter particles. The Cryogenic Dark Matter Search experiment has detected three events with the characteristics expected of dark matter particles.
<urn:uuid:38bd495e-a715-4cfc-97e2-fee204e62652>
3.328125
1,873
Content Listing
Science & Tech.
22.222546
558
The OR operator is a kind of conditional operator, represented by the | symbol. It returns either a true or false value based on the state of its operands; that is, operations using conditional operators are performed between two boolean expressions. The OR operator (|) is similar to the conditional-OR operator (||) and returns true if one or both of its operands are true. If you are facing any programming issue, such as compilation errors, or are not able to find the code you are looking for, ask your questions and our development team will try to answer them.
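The article describes the Java-style | and || operators. As a clearly analogous illustration in Python, the sketch below contrasts the non-short-circuiting | with the short-circuiting or keyword; the helper function is purely illustrative.

```python
# Python analogue of the Java distinction: the boolean/bitwise operator |
# always evaluates both operands, while the keyword `or` short-circuits and
# skips the right operand when the left is already True.

def right_operand():
    print("  right operand evaluated")
    return False

print("using | :")
result = True | right_operand()     # right side runs even though the answer is already known
print("  result =", result)

print("using or :")
result = True or right_operand()    # right side is skipped (short-circuit)
print("  result =", result)
```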
<urn:uuid:e17bb927-24fb-4bbb-b449-9ba563adebc9>
2.734375
130
Customer Support
Software Dev.
44.05534
559
Giant squids, once believed to be mythical creatures, are squid of the Architeuthidae family, represented by as many as eight species of the genus Architeuthis. They are deep-ocean dwelling squid that can grow to a tremendous size: recent estimates put the maximum size at 10 m (33 ft) for males and 13 m (43 ft) for females from caudal fin to the tip of the two long tentacles (second only to the Colossal Squid at an estimated 14 m, one of the largest living organisms). For more information about the topic Giant squid, read the full article at Wikipedia.org.
<urn:uuid:d8040f71-3afa-434b-8a3e-1971af13bb0c>
2.84375
161
Knowledge Article
Science & Tech.
31.655344
560
Feb. 17, 2009 The genome of a marine bacterium living 2,500 meters below the ocean's surface is providing clues to how life adapts in extreme thermal and chemical gradients, according to an article published Feb. 6 in the journal PLoS Genetics. The research focused on the bacterium Nautilia profundicola, a microbe that survives near deep-sea hydrothermal vents. Microorganisms that thrive at these geysers on the sea floor must adapt to fluctuations in temperature and oxygen levels, ranging from the hot, sulfide- and heavy metal-laden plume at the vents' outlets to cold seawater in the surrounding region. The study combined genome analysis with physiological and ecological observations to investigate the importance of one gene in N. profundicola. That gene, called rgy, allows the bacterium to manufacture a protein called reverse gyrase when it encounters extremely hot fluids from the Earth's interior. Previous studies found the gene only in microorganisms growing in temperatures greater than 80°C, but N. profundicola thrives best at much lower temperatures. "The gene's presence in N. profundicola suggests that it might play a role in the bacterium's ability to survive rapid and frequent temperature fluctuations in its environment," said Assistant Professor of Marine Biosciences Barbara Campbell, the study's lead scientist. Additional University of Delaware contributors were Professor of Marine Biosciences Stephen Craig Cary, Assistant Professor of Marine Biosciences Thomas Hanson, and Julie Smith, marine biosciences doctoral student. Also collaborating on the project were researchers from the Davis and Riverside campuses of the University of California; the University of Louisville; the University of Waikato in Hamilton, New Zealand; and the J. Craig Venter Institute in Rockville, Md. The researchers also uncovered further adaptations to the vent environment, including genes necessary for growth and sensing environmental conditions, and a new route for nitrate assimilation related to how other bacteria use ammonia as an energy source. Photosynthesis cannot occur in the hydrothermal vents' dark environment, where hot, toxic fluids oozing from below the seafloor combine with cold seawater at very high pressures. These results help to explain how microbes survive near the vents, where conditions are thought to resemble those found on early Earth. Nautilia profundicola contains all the genes necessary for life in conditions widely believed to mimic those in our planet's early biosphere and could aid in understanding of how life evolved. "It will be an important model system," Campbell said, "for understanding early microbial life on Earth." Other social bookmarking and sharing tools: Note: Materials may be edited for content and length. For further information, please contact the source cited above. Note: If no author is given, the source is cited instead.
<urn:uuid:facd01c8-ab37-4b28-9686-62d8769d3a80>
3.890625
583
News Article
Science & Tech.
24.138384
561
Jan. 30, 2011 In a new study, scientists at the University of Maryland and the Institut Pasteur show that bacteria evolve new abilities, such as antibiotic resistance, predominantly by acquiring genes from other bacteria. The researchers' new insights into the evolution of bacteria partly contradict the widely accepted theory that new biological functions in bacteria and other microbes arise primarily through the process of gene duplication within the same organism. Their just-released study will be published in the open-access journal PLoS Genetics on January 27. Microbes live and thrive in incredibly diverse and harsh conditions, from boiling or freezing water to the human immune system. This remarkable adaptability results from their ability to quickly modify their repertoire of protein functions by gaining, losing and modifying their genes. Microbes were known to modify genes to expand their repertoire of protein families in two ways: via duplication processes followed by slow functional specialization, in the same way as large multicellular organisms like us, and by acquiring different genes directly from other microbes. The latter process, known as horizontal gene transfer, is notoriously conspicuous in the spread of antibiotic resistance, turning some bacteria into drug-resistant 'superbugs' such as MRSA (methicillin-resistant Staphylococcus aureus), a serious public health concern. The researchers examined a large database of microbial genomes, including some of the most virulent human pathogens, to discover whether duplication or horizontal gene transfer was the most common expansion method. Their study shows that gene family expansion can indeed follow both routes, but unlike in large multicellular organisms, it predominantly takes place by horizontal transfer. First author Todd Treangen, a postdoctoral researcher in the University of Maryland Center for Bioinformatics and Computational Biology, and co-author Eduardo P. C. Rocha of the Institut Pasteur conclude that because microbes invented the majority of life's biochemical diversity -- from respiration to photosynthesis -- "the study of the evolution of biology systems should explicitly account for the predominant role of horizontal gene transfer in the diversification of protein families." - Todd J. Treangen, Eduardo P. C. Rocha. Horizontal Transfer, Not Duplication, Drives the Expansion of Protein Families in Prokaryotes. PLoS Genetics, 2011; 7 (1): e1001284 DOI: 10.1371/journal.pgen.1001284
<urn:uuid:684ca815-f362-4191-a451-4b20ecbf1b10>
3.234375
508
News (Org.)
Science & Tech.
20.453841
562
Visual perception begins with our retinas locating the edges of objects in the world. Downstream neural mechanisms analyze those borders and use that information to fill in the insides of objects, constructing our perception of surfaces. What happens when those borders—the fundamental fabric of our visual reality—are tweaked? Our internal representation of objects fails, and our brain's ability to accurately represent reality no longer functions. Seemingly small mistakes lead to very distorted perceptions of an illusory world. This article was originally published with the title "Your Twisted Little Mind."
<urn:uuid:b1fd5421-9017-4d3b-828b-26983e45deee>
2.65625
111
Truncated
Science & Tech.
28.270242
563
Risky Business: Gambling on Climate Sensitivity Posted on 21 September 2010 by gpwayne There are some things about our climate we are pretty certain about. Unfortunately, climate sensitivity isn’t one of them. Climate sensitivity is the estimate of how much the earth's climate will warm if carbon dioxide equivalents are doubled. This is very important because if it is low, as some sceptics argue, then the planet isn’t going to warm up very much. If sensitivity is high, then we could be in for a very bad time indeed. There are two ways of working out what climate sensitivity is (a third way – waiting a century – isn’t an option, but we’ll come to that in a moment). The first method is by modelling: Climate models have predicted the least temperature rise would be on average 1.65°C (2.97°F) , but upper estimates vary a lot, averaging 5.2°C (9.36°F). Current best estimates are for a rise of around 3°C (5.4°F), with a likely maximum of 4.5°C (8.1°F). The second method calculates climate sensitivity directly from physical evidence: These calculations use data from sources like ice cores, paleoclimate records, ocean heat uptake and solar cycles, to work out how much additional heat the doubling of greenhouse gases will produce. The lowest estimate of warming is close to the models - 1.8°C (3.24°F ) on average - but the upper estimate is a little more consistent, at an average of around 3.5°C (6.3°F). It’s all a matter of degree To the lay person, the arguments are obscure and complicated by other factors, like the time the climate takes to respond. But climate sensitivity is not just an abstract exchange of statistics relevant only to scientists. It also tells us about the likely changes to the climate that today's children will inherit. Consider a rise in sea levels, for example. Predictions range from centimetres to many metres, and the actual increase will be governed by climate sensitivity. The 2007 IPCC report proposed a range of sea level rises based on different increases in temperature, but we now know they underestimated sea level rise, perhaps by a factor of three, in part because of a lack of data about the behaviour of Greenland and Antarctic ice-sheets. Current estimates of sea level rise alone, as a result of a two degree rise in temperature, are very worrying. More worrying is that the current projections do not account for recently accelerated melting of polar regions. There are also many other possible effects of a 2°C rise (3.6°F) that would be very disruptive. All the models and evidence confirm a minimum warming close to 2°C for a doubling of atmospheric CO2 with a most likely value of 3°C and the potential to warm 4.5°C or even more. Even such a small rise would signal many damaging and highly disruptive changes to the environment. In this light, the arguments against mitigation because of climate sensitivity are a form of gambling. A minority claim the climate is less sensitive than we think, the implication being we don’t need to do anything much about it. Others suggest that because we can't tell for sure, we should wait and see. In truth, nobody knows for sure quite how much the temperature will rise, but rise it will. Inaction or complacency heightens risk, gambling with the entire ecology of the planet, and the welfare of everyone on it. This post is the Basic version (written by Graham Wayne) of the skeptic argument "Climate sensitivity is low". 
For the stout of heart, be sure to also check out the Advanced Version by Dana which is currently getting rave reviews on Climate Progress.
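To make the definition concrete, here is a minimal sketch using the common simplification that equilibrium warming scales with the logarithm of the CO2-equivalent concentration; it ignores lags and other forcings, so it illustrates what "sensitivity" means rather than making a projection.

```python
import math

# Under the usual simplification, equilibrium warming ~ S * log2(C / C0),
# where S is the climate sensitivity per doubling of CO2-equivalents.
# The 280 ppm baseline is the standard pre-industrial round number.

C0 = 280.0   # pre-industrial CO2-equivalent concentration, ppm

def equilibrium_warming(c_ppm, sensitivity_per_doubling):
    return sensitivity_per_doubling * math.log2(c_ppm / C0)

for S in (1.65, 3.0, 4.5):        # the low, central and high values quoted above, deg C
    for c in (450.0, 560.0):      # 560 ppm is one doubling of 280 ppm
        dT = equilibrium_warming(c, S)
        print(f"S = {S:4.2f} C/doubling, CO2 = {c:5.0f} ppm -> ~{dT:4.2f} C warming")
```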
<urn:uuid:2464ec74-3208-4133-9bec-21d308e5cbbb>
2.859375
793
Personal Blog
Science & Tech.
52.635958
564
That's the name of the Slashdot story, U.S. In Danger of Losing Earth-Observing Satellite Capability. Their summary: "As reported in Wired, a recent National Research Council report indicates a growing concern for NASA, the NOAA, and USGS. While there are currently 22 Earth-observing satellites in orbit, this number is expected to drop to as low as six by the year 2020. The U.S. relies on this network of satellites for weather forecasting, climate change data, and important geologic and oceanographic information. As with most things space and NASA these days, the root cause is funding cuts. The program to maintain this network was funded at $2 billion as recently as 2002, but has since been scaled back to a current funding level of $1.3 billion, with only two replacement satellites having definite launch dates."
<urn:uuid:3f2165da-5bbf-4bb9-a470-502424fad46b>
2.703125
176
News Article
Science & Tech.
54.931104
565
Could corals survive more acidic oceans? April 2nd, 2012 - 6:19 pm ICT by IANS Sydney, April 2 (IANS) Corals may yet be able to survive the acidification of the world's oceans, escaping the effects of climatic devastation. Researchers have identified a powerful internal mechanism that could enable some corals and their symbiotic algae to counter the adverse impact of a more acidic ocean. As humans release ever-larger amounts of carbon dioxide (CO2) into the air, besides warming the planet, the gas is also turning the world's oceans more acidic, faster than rates seen during past extinctions, the journal Nature Climate Change reports. Scientists from Australia's ARC Centre of Excellence for Coral Reef Studies (CoECRS) and France's Laboratoire des Sciences du Climat et de l'Environnement have shown that some marine organisms that form calcium carbonate skeletons have an in-built mechanism to cope with ocean acidification - which others appear to lack. "The good news is that most corals appear to have this internal ability to buffer rising acidity of seawater and still form good, solid skeletons," says Malcolm McCulloch, professor at CoECRS. "Marine organisms that form calcium carbonate skeletons generally produce it in one of two forms, known as aragonite and calcite," adds McCulloch, according to a CoECRS statement. "Our research broadly suggests that those with skeletons made of aragonite have the coping mechanism - while those that follow the calcite pathway generally do less well under more acidic conditions," said McCulloch.
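The phrase "more acidic" can be made quantitative with the definition of pH; the sketch below uses widely quoted round numbers for surface-ocean pH purely as an example.

```python
# pH = -log10([H+]), so a drop in pH means a rise in hydrogen-ion concentration.
# The pre-industrial ~8.2 and modern ~8.1 values are widely quoted round numbers,
# used here only as an example.

def hplus_from_ph(ph):
    return 10.0 ** (-ph)

ph_old, ph_new = 8.2, 8.1
increase = hplus_from_ph(ph_new) / hplus_from_ph(ph_old) - 1.0
print(f"pH {ph_old} -> {ph_new}: hydrogen-ion concentration up by ~{increase:.0%}")

# A full unit of pH change would mean a tenfold change in [H+].
print(f"pH 8.2 -> 7.2 would be a factor of {hplus_from_ph(7.2) / hplus_from_ph(8.2):.0f} increase")
```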
<urn:uuid:c1589688-7e5d-4cec-ac1a-264f3310d5c4>
3.390625
666
News Article
Science & Tech.
-5.793423
566
This is a picture of the Colorado River near Hoover Dam. Rivers are very important to Earth because they are major forces that shape the landscape. Also, they provide transportation and water for drinking, washing and farming. Rivers can flow on land or underground in deserts and seas. Rivers may come from mountain springs, melting glaciers or lakes. A river's contribution to the water cycle is that it collects water from the ground and returns it to the ocean. The water we drink is about 3 billion years old because it has been recycled over and over since the first rainfall. A delta is where a river meets the sea. Usually the river flows more slowly at the delta than at its start, and because it slows down it deposits sediment there. Sediment can be anything from mud and sand to rock fragments. A special environment is created when the fresh water from the river mixes with the salty ocean water. This environment is called an estuary. The longest river is the Nile River in Africa, and the Amazon River in South America carries the most water. The muddiest river is the Yellow River in China.
<urn:uuid:dfb97c03-d867-4ab1-a61f-575be13348ba>
3.8125
581
Content Listing
Science & Tech.
55.601949
567
Changing Planet: Black Carbon Black carbon contributes to global warming in two ways. When in the atmosphere, it absorbs sunlight and generates heat, warming the air. When deposited on snow and ice, it changes the albedo of the surface, absorbing sunlight and generating heat. This further accelerates warming, since the heat melts snow and ice, revealing a lower-albedo surface which continues to absorb sunlight - a vicious cycle of warming. Watch the NBC Learn video, Changing Planet: Black Carbon. Lesson plan: Changing Planet: Black Carbon - A Dusty Situation
<urn:uuid:7e0f4306-a276-497d-a041-0d920b423022>
3.796875
523
Tutorial
Science & Tech.
62.789721
568
A ``shelf'' is a persistent, dictionary-like object. The difference with ``dbm'' databases is that the values (not the keys!) in a shelf can be essentially arbitrary Python objects -- anything that the pickle module can handle. This includes most class instances, recursive data types, and objects containing lots of shared sub-objects. The keys are ordinary strings. To summarize the interface (key is a string, data is an arbitrary object):
d = shelve.open(filename) # open, with (g)dbm filename -- no suffix
d[key] = data             # store data at key (overwrites old data if using an existing key)
data = d[key]             # retrieve data at key (raises KeyError if no such key)
del d[key]                # delete data stored at key (raises KeyError if no such key)
flag = d.has_key(key)     # true if the key exists
list = d.keys()           # a list of all existing keys (slow!)
d.close()                 # close it
Restrictions:
- The choice of which database package will be used (e.g. dbm or gdbm) depends on which interface is available. Therefore it is not safe to open the database directly using dbm. The database is also (unfortunately) subject to the limitations of dbm, if it is used -- this means that (the pickled representation of) the objects stored in the database should be fairly small, and in rare cases key collisions may cause the database to refuse updates.
- Dependent on the implementation, closing a persistent dictionary may or may not be necessary to flush changes to disk.
- The shelve module does not support concurrent read/write access to shelved objects. (Multiple simultaneous read accesses are safe.) When a program has a shelf open for writing, no other program should have it open for reading or writing. Unix file locking can be used to solve this, but this differs across Unix versions and requires knowledge about the database implementation used.
See also:
- Module anydbm: Generic interface to dbm-style databases.
- Module dbhash: db database interface.
- Module dbm: Standard Unix database interface.
- Module dumbdbm: Portable implementation of the dbm interface.
- Module gdbm: GNU database interface, based on the dbm interface.
- Module pickle: Object serialization used by shelve.
- Module cPickle: High-performance version of pickle.
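The interface summary above reflects the older Python 2 API (has_key(), explicit close()). As a rough, self-contained sketch of the same ideas against the current Python 3 shelve module, where the file name, keys and stored objects are invented for illustration rather than taken from the documentation:

import shelve

# Store a couple of picklable objects under string keys.
# shelve.open() creates the backing file(s); the exact suffix depends on the dbm backend in use.
with shelve.open("inventory") as db:
    db["widget"] = {"count": 12, "tags": ["blue", "metal"]}
    db["gadget"] = {"count": 3}

# Reopen later and read the data back.
with shelve.open("inventory") as db:
    print(db["widget"]["count"])   # 12
    print("gadget" in db)          # membership test replaces has_key()
    for key in db:                 # keys can be iterated directly
        print(key, db[key])
    del db["gadget"]               # raises KeyError if the key is absent

Note that mutating a stored object in place (for example db["widget"]["count"] += 1) is not written back unless the shelf is opened with writeback=True or the object is reassigned to its key; the with-block closes the shelf and flushes changes to disk.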
<urn:uuid:853e10d8-9ce8-43db-8c80-215743d4f260>
2.84375
558
Documentation
Software Dev.
44.784278
569
Brookhaven National Laboratory has what is currently one of the highest energy particle accelerators on the planet. The Relativistic Heavy Ion Collider (RHIC) hosts collisions between the nuclei of gold atoms that are moving at roughly 99 percent of the speed of light, creating a quark soup similar to the one that existed immediately after the big bang. But the scientists running the experiments started noticing something funny about the data: instead of expanding evenly outward, the collision debris were ellipsoidal (think a 3-D ellipse). What was even stranger was that this sort of behavior had already been described, for a gas of lithium atoms at the opposite end of the temperature spectrum, at a fraction of a microkelvin. As these groups were talking about a collaboration, things got stranger still when string theorists started citing this work, since the behavior had already been predicted through their work—a fact that the physicists weren't aware of until a science reporter called to ask what they thought about it. The tale of this unlikely collaboration unfolded at the American Association for the Advancement of Science meeting, where the introductory remarks described just how far apart these systems are. In terms of temperature, the RHIC and chilled lithium differ by 19 orders of magnitude (that's a factor of 10^19). When it comes to density, the difference is an astonishing 25 orders of magnitude. Meanwhile, the bit of string theory that describes the normal, four-dimensional (3-D + time) behavior of these systems can be predicted by modeling a four-dimensional sphere wrapped around a five-dimensional black hole. Quantum viscosity runs hot and cold The cold atomic cloud is probably easiest to understand, although John Thomas of Duke, who does the work, claimed that, when dragged to wine tastings with his wife's friends, "I wait until everyone's sufficiently drunk before explaining what we do." His short description is that he makes bowls of light; in principle, the first steps in his system involve the sort of laser cooling that our Chris Lee has described in the past. This can only get things down to a bit under a kelvin above absolute zero, but Thomas then loosens the laser trap, and a few atoms evaporate off, taking most of the remaining heat with them. The end result is an atomic cloud at one-tenth of a microkelvin. The 6Li atoms that he uses have up and down spins that form an analog of the Cooper pairs of electrons that cause high-temperature superconductivity, so his system allows theorists to test some of their ideas in an accessible experimental system. But it also has interesting properties when in a magnetic field. At a specific magnetic field strength, the interactions between the paired atoms start to go asymptotic and, when at a very precise point, the interactions vanish and quantum effects dominate. When the laser trap is released again, the atoms expand elliptically, displaying essentially the smallest amount of quantum viscosity possible. Because the system is experimentally possible, they were able (on the advice of string theorists—more on that below) to measure both the viscosity and entropy, and found that they were related directly to one divided by four π. Out at the other end of the temperature spectrum, the collisions in the RHIC were producing what Brookhaven's Barbara Jacack termed "quark soup." In normal matter, quarks interact by exchanging gluons with a limited number of partners.
But, at the densities that exist immediately after these collisions, quarks can exchange multiple gluons with multiple partners, leading to longer-range interactions that are more similar to those in a liquid. Two aspects of the behavior seen by RHIC's detectors, however, were a bit surprising. The first is the ellipsoidal expansion that marks the behavior of perfect quantum liquids that we mentioned above. The second is that, although radiation can pass across the small cluster of quark soup, the actual quarks, it appeared, could not. Jacack likened the fact that even the heavy charm quark didn't make it across the collision to a set of bowling pins stopping an incoming ball. Like Thomas, talking to string theorists allowed Jacack and her team to look for some specific properties—in this case, shock waves of a particular type—of the quark soup. So far, it's looking like they're there. RHIC is about to undergo a retrofit that should make it easier to study this, and the stimulus package may have some money for the DOE that could accelerate the work. The theory needs a five-dimensional black hole, but reality may not Clifford Johnson of USC then spoke about how a specific application of string theory helped tie everything together. As he described it, Quantum Chromodynamics (QCD) works very well at describing the interactions of a limited number of particles, and its successes in the early 1970s caused researchers to abandon an earlier version of string theory. But QCD doesn't work that well at the densities seen in the RHIC, where ensembles of particles have emergent behavior—as Johnson noted, a single water molecule isn't wet; that's a property that emerges from a population of water molecules. And this, along with a few other vexing problems, has allowed string theory back in the game. "String theory," Johnson said, "having failed to explain something, got resurrected a few years on and was used to explain everything," or at least provide a quantum description of gravity. He got interested in the problem of describing quantum black holes, which are far smaller than the macroscopic ones we've observed in space. Based on their emission of quantum radiation, they have to have an internal structure, one that our lack of a quantum gravity is preventing us from probing. (During the questions, it became clear that Johnson is one of the few people hoping that the LHC does spawn a small black hole.) It turns out, using the math of string theory, it's easy to examine a five-dimensional black hole simply by wrapping a four-dimensional sheet around it. When you do that, however, a lot of three-dimensional QCD behavior pops out of the equations—"the bugs of string theory become features," as Johnson put it. In the extra dimensions, gravitons get pulled towards, and then bounce off, the black hole, undergoing interference as they do. That interference apparently describes the behavior seen in both of these real-world systems. Johnson was emphatic that this doesn't mean that the experiments that have used these string theory models are a test of the theory; rather, it means that the predictions of string theory are being used to guide experiments, which is a measure of its utility. As for whether there's really an extradimensional black hole tucked away in these conditions, Johnson described himself as "agnostic." 
It may be possible, he said, to find a way to describe this behavior without resorting to anything beyond our familiar dimensions, but, at the moment, string theory's models are simple and functional, so there's no reason not to use them. In the meantime, everyone seems excited about the prospect of further collaboration. As Jacack said when showing a slide with a certain image of a kitten playing with yarn, "you know your field has hit the big time when you make it into lolcats."
<urn:uuid:4b3439ec-7a03-4190-bead-e890fe4fe4c9>
2.796875
1,503
News Article
Science & Tech.
37.970399
570
Biology by design – how synthetic biology could revolutionise everything from medicines to energy 13 July 2012 In a series of articles we will be highlighting the work of some of the leading synthetic biology researchers in the UK. Here we profile Professor Dek Woolfson of the University of Bristol, Professor Jamie Davies of the University of Edinburgh and Professor Richard Cogdell of the University of Glasgow. Flat pack proteins – Professor Dek Woolfson, University of Bristol Professor Dek Woolfson is hoping to use synthetic biology to create new structures out of proteins with uses ranging from wound repair to water purification. - Proteins play many important roles in nature. - Proteins can assemble into complicated structures like tiny pumps and motors. - Scientists are hoping to combine proteins in new ways using synthetic biology to create useful new tools for uses as diverse as water filtration and medicine. Proteins are like nature's robots, working tirelessly in the cells of every plant, animal and microbe to do virtually all of the important functions that make life tick. Each individual protein can twist and fold into an incredibly complex 3D shape, with holes, cracks and protuberances giving it its function. Groups of proteins then combine with one another and other types of molecules to create bigger and more complicated structures still. Understanding how proteins assemble and combine is at the heart of Professor Dek Woolfson's research at the University of Bristol. This work is important for our understanding of biology because by figuring out how to make parts of these molecular machines from scratch scientists can get a much better understanding of how they work in nature. It could also have a range of possible applications. Professor Woolfson and his team are working on a toolkit of newly designed proteins that could be used as building blocks to produce biological machines. This is a key pillar of a synthetic biology approach. Scientists like Professor Woolfson hope to create catalogues of modular parts so that biological structures can be built from flat pack rather than being crafted from scratch each time. One such structure that Professor Woolfson's team are working on is a synthetic version of the extracellular matrix, the scaffold that surrounds our cells. A synthetic extracellular matrix could be used in regenerative medicine to help generate tissues like skin, nerves or bone in the test tube that could then be transplanted into patients. Professor Woolfson is currently working with clinical scientists exploring applications for the technology in wound repair. Another project in their lab is attempting to use rational protein design to produce new technologies for water purification and desalination. The team have discovered a new cylindrical protein structure which they call CC-Hex which they think could be engineered into biological membranes to filter water. These devices would be particularly valuable for producing small-scale products that could be used easily by people who do not have access to clean water in the developing world. This research is being developed in collaboration with the University of Oxford and with an Australian water consortium that brings together a team of engineers, biochemists, chemists, materials scientists and microbiologists. Prof Woolfson explains: "When we discovered CC-Hex we thought we might use it to make enzymes.
It was a visiting colleague from Australia who recognised the similarity of the structure to aquaporins (a natural protein that rescues water in kidneys, the brain and even the roots of plants). He suggested that we explore that direction too and it is now the basis of our latest BBSRC grant. We are far from achieving a working prototype but are collaborating with Australian scientists with this goal in mind." Designer tissues – Professor Jamie Davies, University of Edinburgh Stem cells offer incredible medical promise because they can turn into virtually any tissue in our bodies; but what about tissues that do not exist in our bodies or even in nature? - During development, a simple group of cells multiplies and rearranges to form complicated tissues and organs and eventually a whole plant or animal. - Currently, scientists are working to coax stem cells to produce human tissues in the lab to repair damaged organs. - Using synthetic biology, scientists could put new programming into cells so that they develop into never-before-seen types of tissues with a range of medical uses. Professor Jamie Davies of the University of Edinburgh is working to use synthetic biology to control cell and tissue shape, research which he calls 'synthetic morphology'. His work could lead to a future where cells can be programmed to self-assemble into new structures and tissues which have never existed before in nature. This science is in its infancy and there are a number of technical hurdles still to be overcome. However it promises to give us a far greater understanding of how organisms develop, which might give scientists insights that could help prevent developmental abnormalities like conjoined twinning. As well as increasing our understanding of development, this work could allow the production of useful new tissues that would not be possible with stem cells. You could imagine, for example, that tissues grown in this way could provide an interface to allow a person to control movement in an artificial hand or even to see through an artificial eye. These developments are still some way off. However in the nearer term Professor Davies hopes to be able to improve medical technologies like dialysis machines by developing tissues that can live happily inside medical machinery. Dialysis machines are very good at replicating the mechanical functions of a kidney but they cannot perform the biochemical functions that are important in properly filtering blood. By designing tissues that could grow along the tubes of a dialysis machine, researchers could produce a more effective artificial kidney. Professor Davies explains: "The development of even really complex tissues can be broken down into a series of simple events like the multiplication, clumping together or movement of cells. There are about ten of these simple behaviours and we think that by programming cell circuitry to carry them out in different orders we can coax cells into new types of tissues in ways that we can predict." The immediate value of this work to scientists is that it will give them a much deeper understanding of the process of development. How relatively unorganised populations of cells assemble precisely into something as complex as a person is one of the big outstanding questions in biology. By developing synthetic systems that cause cells to organise and assemble themselves, the researchers can begin to understand how it happens in nature.
One of the immediate challenges that Professor Davies and his team faced when starting this work was that they wanted to work with animal cells. Most synthetic biology to date has been in simple organisms that are easy to work with like bacteria or yeast. Mammalian cells are much bigger and more complex than those of bacteria, which makes them considerably harder to work with. However this work is not just limited to human or animal cells. It should be possible to programme bacteria, yeast or plant cells to form new multicellular structures which could have an enormous range of uses in medicine and industry. The artificial 'leaf' – Professor Richard Cogdell, University of Glasgow Professor Richard Cogdell is hoping to use synthetic biology to create an artificial "leaf" capable of converting the sun's energy into sustainable liquid fuels. - Plants use photosynthesis to capture the energy of the sun to create fuel to power the plant's growth. - We use this fuel ourselves in the form of wood, coal, oil and gas. - By using the tools of synthetic biology, scientists hope to create an artificial system that can do photosynthesis. - This could capture the sun's energy like a solar panel but would produce liquid fuel rather than electricity. We have always relied on plants to provide us with energy. For millennia, burning wood was humanity's main, sometimes only, source of power. Later, more energy-dense fuels – coal, oil and gas – drove the development of modern society. By burning these fuels we are tapping into the stored energy of the sun. In the case of wood this might have been captured months or years before. When we burn fossil fuels we are releasing energy that fell as sunlight on the world of the dinosaurs hundreds of millions of years ago. Only plants, algae and some bacteria have the amazing ability to capture and store the sun's rays as sugars using photosynthesis. While amazing, photosynthesis is actually quite an inefficient process. A plant is not a machine for producing fuel, rather a machine for producing plants, and as such scientists think that they might be able to tweak photosynthesis to produce fuel more efficiently. The researchers, based at the University of Glasgow, hope to deliver the next stage in our long relationship with photosynthesis by taking it out of the leaf and into the lab. Professor Cogdell, who is leading the research project, explains: "More energy hits the surface of the Earth in the form of sunlight in the space of one hour than the entire human race uses in a whole year. This abundant energy is given away for free but making use of it is tricky. We can use solar panels to make electricity but it's intermittent and difficult to store. You can't fly an aeroplane or send a ship round the world using batteries, you need a fuel. What we are trying to do is to take the energy from the sun and trap it so that it can be used when it is needed most." The researchers hope to use a chemical reaction similar to photosynthesis but in an artificial system. Plants take solar energy, concentrate it and use it to split apart water into hydrogen and oxygen. The oxygen is released and the energy from the hydrogen used to lock carbon into a fuel. The latest research aims to use synthetic biology to replicate the process outside of the cell. Professor Cogdell added: "We are working to devise a chemical system that could replicate photosynthesis artificially on a grand scale.
This artificial leaf would use solar collectors and produce a fuel, as opposed to electricity." Professor Cogdell hopes that his team's artificial system could also improve on natural photosynthesis to make better use of the sun's energy. By stripping back photosynthesis to a level of basic reactions, much higher levels of energy conversion could be possible. Ultimately, success in this research could allow the development of a sustainable carbon neutral economy arresting the increasing carbon dioxide levels in the atmosphere from fossil fuel burning. In fact, if successful, this research could allow for carbon to be harvested from the atmosphere and returned to the ground, reversing the accumulation of carbon caused by burning fossil fuels. The research is funded through a joint EU funding scheme "EuroSolarFuels" which aims to produce fuels from light. The BBSRC funds the UK part of this research. What is synthetic biology? Synthetic biology is the science of designing, engineering and building useful new biological systems which have not existed before in nature. Using our ever-increasing understanding of genetics and cell biology synthetic biologists are able to design complicated biological parts, systems and devices to act as sensors, tissues or to produce useful chemicals. These technologies could deliver advances in a wide range of fields including medicine, biofuels and renewable materials. A synthetic biology approach offers incredible promise but also poses many ethical, legal and even existential questions for the scientific community, policymakers and for all of us to think about. Some of these questions were explored in a public dialogue carried out by BBSRC and the Engineering and Physical Sciences Research Council in 2010.
<urn:uuid:fb894008-2892-435b-b812-c68e37dea3f7>
3.234375
2,343
Content Listing
Science & Tech.
31.421244
571
An Efficient Solar Harvest Solar power could be harvested more efficiently and transported over longer distances using tiny molecular circuits based on quantum mechanics, according to research inspired by new insights into natural photosynthesis. Incorporating the latest research into how plants, algae and some bacteria use quantum mechanics to optimize energy production via photosynthesis, UCL scientists have set out how to design molecular circuitry that is 10 times smaller than the thinnest electrical wire in computer processors. Published in Nature Chemistry, the report discusses how tiny molecular energy grids could capture, direct, regulate and amplify raw solar energy. Solar fuel production is all about energy from light being absorbed by an assembly of molecules; this electronic excitation is subsequently transferred to a suitable acceptor. For example, in photosynthesis, antenna complexes capture sunlight and direct the energy to reaction centers that then carry out the associated chemistry. In photosynthesis chlorophyll captures sunlight and directs the energy to special proteins that help make oxygen and sugars. This is no different in principle than a solar cell. In natural systems energy from sunlight is captured by colored molecules called dyes or pigments, but it is only stored for a billionth of a second. This leaves little time to route the energy from pigments to the molecular machinery that produces fuel or electricity. The key to transferring and storing energy very quickly is to harness the collective quantum properties of antennae, which are made up of just a few tens of pigments. Recent studies have identified quantum coherence and entanglement between the excited states of different pigments in the light-harvesting stage of photosynthesis. Although this stage of photosynthesis is highly efficient, it remains unclear exactly how or if these quantum effects are relevant. Dr Alexandra Olaya-Castro, co-author of the paper from UCL’s department of Physics and Astronomy said: “On a bright sunny day, more than 100 million billion red and blue colored photons strike a leaf each second.” “Under these conditions plants need to be able to both use the energy that is required for growth but also to get rid of excess energy that can be harmful. Transferring energy quickly and in a regulated manner are the two key features of natural light harvesting systems. “By assuring that all relevant energy scales involved in the process of energy transfer are more or less similar, natural antennae manage to combine quantum and classical phenomena to guarantee efficient and regulated capture, distribution and storage of the sun’s energy.” Summary of lessons from nature about concentrating and distributing solar power with nanoscopic antennae: The basic components of the antenna are efficient light absorbing molecules. Take advantage of the collective properties of light-absorbing molecules by grouping them close together. This will make them exploit quantum mechanical principles so that the antenna can: i) absorb different colors ii) create energy gradients to favour unidirectional transfer and iii) possibly exploit quantum coherence for energy distribution. Make sure that the relevant energy scales involved in the energy transfer process are more or less resonant. This will guarantee that both classical and quantum transfer mechanisms are combined to create regulated and efficient distribution of energy. Article by Andy Soos, appearing courtesy Environmental News Network. 
<urn:uuid:5650b21a-52be-4f3e-bbd4-894e5d1fd662>
4.28125
683
News (Org.)
Science & Tech.
13.087292
572
The study of motion is often called kinematics. We will begin our study with one dimensional kinematics. We will later expand to 2 and 3 dimensional kinematics after we have studied vectors. We can give the position of an object in relation to a reference point. There are a number of variables we can use for position, such as x, d, or s. The official metric unit for position is the meter (abbreviated m). The meter was first defined in terms of the circumference of the Earth on a meridian passing through Paris. It is now defined in terms of the speed of light. When working with other scales, it might be convenient to use other metric units such as the nanometer (nm), the centimeter (cm), and the kilometer (km). We will often use exponential notation. Exponential notation is convenient for expressing very large and small numbers. For instance, 12,300 would be expressed as 1.23 x 10,000 or 1.23 x 10^4. So 3.14 km = 3140 m = 3.14 x 10^3 m. For small numbers, 0.000345 = 3.45 x 10^-4. A micrometer, 1 μm = 10^-6 m. The width of a human hair on average is 10 μm. This would be 10 x 10^-6 m. The wavelength of a helium-neon laser is 633 nm = 633 x 10^-9 m = 6.33 x 10^-7 m. The common metric units are given in powers of 3. The kilometer is 1000 m. Although 100 centimeters = 1 meter, the centimeter is not actually a common unit. 1 Millimeter = 1 mm = 10^-3 m; 1 Micrometer = 1 μm = 10^-6 m; 1 Nanometer = 1 nm = 10^-9 m; 1 Picometer = 1 pm = 10^-12 m; 1 Femtometer = 1 fm = 10^-15 m, also known as a Fermi. Except for the kilometer, we often do not use the larger metric prefixes for distance. But they are used for frequencies and other units in physics. 1 Kilometer = 1 km = 1000 m = 10^3 m; 1 Megameter = 1 Mm = 10^6 m; 1 Gigameter = 1 Gm = 10^9 m; 1 Terameter = 1 Tm = 10^12 m. Common British Imperial units for measuring distance include the inch, the foot, the yard, and the mile. An easy way to remember the conversion from meters to miles comes from track and field. The loop in a track is ¼ mile long. It is also known as the 400 m race, so 1 mile is approximately 1600 m. Engineers in America commonly use Imperial units. Very small measurements for the purposes of manufacturing are given in 1/1000ths of an inch. When dealing with astronomical distances there are other units we might use such as the light-year, the parsec, or the Astronomical Unit. The light-year is the distance light will travel in one year. An object which is one parsec away has one arc-second of parallax from Earth. An astronomical unit is the average distance from the Earth to the Sun. Distance vs Displacement In physics we often study the change in position of an object. If we are only examining the change in position from the start of our observation to the end, we are talking about displacement. We ignore how we get from point A to point B. We are only concerned with how the crow flies. If we are concerned with our path, we are working with distance (see figure A). For example, let us suppose I were to walk around the perimeter of a square classroom (see figure B). The classroom is 10 meters on a side. At the end of my trip I return to my original starting position. The distance traveled would be 40 m. The displacement would be zero meters because displacement only depends on the starting and ending positions. The other important distinction between distance and displacement is that distances do not have a direction. If you were wearing a pedometer it would record distance. The odometer on a car records distance. Displacement has a direction and a magnitude.
Magnitude is a fancy physics term for size or amount. For instance, suppose I walked 10 m North, 10 m East, 10 m South, and then 5 m West (see figure C). My distance traveled would be 35 m. There is a magnitude but no net direction. Since we can describe distance with just a magnitude (but no direction) we call it a scalar. But my displacement would be 5 m due East. As displacement has both a magnitude and a direction, we call it a vector. We measure time in seconds. We will use the variable t for time. The elapsed time for a certain action would be Δt. The Greek letter delta, Δ, is used to represent a change in a quantity. If we are talking about a recurring event (such as the orbit of the Earth around the sun) we talk about the period of time T, with a capital T. For longer periods of time we will often use the conventional minutes, hours, days, or years. For shorter periods of time we will often use exponential notation or may use milliseconds, microseconds, picoseconds, or femtoseconds. For instance, chemical reactions may often take place on the picosecond timescale. Just as a strobe light at a school dance lets you see your movements in stop action, scientists use pulsed lasers with picosecond and femtosecond pulses to examine dynamics at the molecular level. Speed and Velocity Building on changes in position and changes in time, we can examine the rate at which these changes in position take place. How fast are we moving? You probably use the terms speed and velocity interchangeably in your everyday vernacular, but in physics they have distinct meanings. Speed is a scalar and has no direction. Speed can be defined as speed = distance/elapsed time. Velocity is a vector. We could consider velocity to be speed in a given direction. To calculate the average velocity over a period of time, we use displacement and elapsed time: v̄ = Δx/Δt, where v is velocity, x is position, t is time. The Greek letter delta, Δ, means a change in a quantity, such as the change in position or the change in time. The bar over the velocity v means the term is averaged. For instance, Δx = xf – xo, or the change in position equals the difference of the final position and the original position. Our first set of problems will involve the above kinematic equation. Problem Solving Method When solving physics problems, it is useful to follow a simple problem solving strategy. Although at first it may be easy to solve some problems in your head, by following this strategy you will develop good problem solving habits. Just as you must develop good habits by brushing your teeth every day, you should attempt to follow the following methodology for solving physics problems. The first step is Step 0 because it does not always apply. Step 0: Draw a picture of the problem if appropriate. Step 1: Write down the given information. Step 2: Write down the unknown quantity you are trying to find. Step 3: Write down the physics equations or relationships that will connect your given information to the unknown variables. Step 4: Perform the algebraic calculations necessary to isolate the unknown variable. Step 5: Plug the given information into the new equation. Cancel appropriate units and do the arithmetic. Example 1: A robot travels across a countertop a distance of 88.0 cm in 30.0 seconds. What is the speed of the robot? In this case, we do not need to do any algebra: speed = distance/elapsed time = 88.0 cm / 30.0 s ≈ 2.93 cm/s. Significant figures: At this point we should note how many significant figures our answer has.
Your final answer cannot have more information than your original data. We were presented with a distance and a time with only 3 significant figures, therefore our final answer cannot have more precision than this. Now let us look at a problem which does require some algebra. Example 2: The SR-71 Blackbird could fly at a speed of Mach 3, or 1,020 m/s. How much time would it take the SR-71 to take off from Los Angeles and fly to New York City via a path which is a distance of 5500 miles? You should note that you need to convert miles to meters, remembering that 1 mile = 1600 m. First we need to algebraically isolate the variable t in v = d/t. We multiply both sides by t, and t cancels on the right-hand side of the equation. Then dividing both sides by v gives us t = d/v. Now, plugging in for distance and speed gives us t = (5500 miles x 1600 m/mile) / (1020 m/s) = (8.8 x 10^6 m) / (1020 m/s) ≈ 8.6 x 10^3 s. Note the units and the number of significant digits. Because one piece of our original data (distance) only had two significant digits, we have to round off our final answer to 2 significant digits. Also, look at the cancellation of units. The meters in the units cancel. Our units have the reciprocal of a reciprocal, thus the final units are in seconds, which you might have guessed since we are working with time. For ease of perspective we converted these units into minutes: about 140 minutes. Average velocity vs instantaneous velocity Another important distinction is finding an average value of the velocity versus the velocity at a given instant in time. To find an average velocity we only measure the change in position and the total elapsed time. However, finding the velocity at a given instant can be tricky. The elapsed time for an instant has no finite length. Similarly, a physical position in space has no finite size. To calculate this using equations we would have to reduce the elapsed time to a near infinitesimally small amount of time. Mathematically, this is the basis for calculus, which was developed separately by both Newton and Leibniz. In standard calculus notation we would say the instantaneous velocity can be expressed as v = dx/dt, the limit of Δx/Δt as Δt approaches zero. In our next lesson we will learn how to determine the instantaneous velocity using graphical techniques.
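The ideas above are easy to check numerically. The short Python sketch below is illustrative only and is not part of the original lesson: the walking legs reproduce the distance-versus-displacement example, the SR-71 numbers reproduce Example 2, and the position function used for the instantaneous-velocity estimate is an assumed example rather than data from the text.

import math

# Distance vs displacement for the walk: 10 m North, 10 m East, 10 m South, 5 m West.
# Each leg is an (east, north) step in meters.
legs = [(0.0, 10.0), (10.0, 0.0), (0.0, -10.0), (-5.0, 0.0)]
distance = sum(math.hypot(dx, dy) for dx, dy in legs)   # path length: 35 m (a scalar)
net_east = sum(dx for dx, dy in legs)
net_north = sum(dy for dx, dy in legs)
displacement = math.hypot(net_east, net_north)          # 5 m, pointing due East (vector magnitude)
print(distance, displacement)

# Example 2: time for the SR-71 to cover 5500 miles at 1020 m/s, using t = d / v.
d = 5500 * 1600.0        # miles converted to meters
v = 1020.0               # m/s
t = d / v
print(t, t / 60.0)       # about 8.6e3 s, roughly 140 minutes

# Average vs instantaneous velocity: shrink the time interval around t = 2 s
# for an assumed position function x(t) = 4.9 * t**2 (free fall from rest).
def x(t):
    return 4.9 * t**2

for dt in (1.0, 0.1, 0.001):
    print(dt, (x(2.0 + dt) - x(2.0)) / dt)   # approaches dx/dt = 19.6 m/s as dt shrinks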
<urn:uuid:627b76b2-d80d-4591-b9b8-01ec5ccc1148>
3.96875
2,093
Tutorial
Science & Tech.
59.364874
573
Studying Hamilton's harbour invaders A team of students in Sigal Balshine's Aquatic Behavioural Ecology Lab is working to better understand the invasive round goby fish found in Hamilton Harbour. The students, both undergraduate and graduate, catch gobies at several locations around the bay and in Cootes Paradise. They then record the area in which the fish were caught, vital statistics such as their sex and size and water quality information. The Aquatic Behavioural Ecology Lab researches the evolution of cooperation, parental care and breeding, sperm competition, species introductions and extinctions and the effects of contaminant exposure. Pauline Capelle, in her third year studying biology and psychology, blogs about the lab's work as part of the School of Graduate Studies' new blog series.
<urn:uuid:cc1c9846-5630-44ff-9fe0-51fe8ef53425>
2.703125
163
News Article
Science & Tech.
24.4424
574
For animals like us, eating seems pretty simple: You bite the food directly, or you use arms to shovel it in. But that's far from the only way to do it. Across the animal kingdom there are numerous creative ways to ingest food and drink--some gross, some conniving, and some wonderfully weird. These are a few of our favorites. Polychaeta are a class of worms that are commonly called bristle worms because of the many bristles that help them move around. But they also have a bizarre way of ingesting: an axial proboscis (pictured at right) that researchers liken to a retractable elephant trunk. It's actually inside the worm's body, and after the worm deploys it to catch food it flexes muscles that retract it. If you're a snoozing bird, that is. Madagascar has more than its fair share of odd animals, and that includes this moth with a fearsome proboscis it uses to snatch the tears of birds. There's no shortage of tear-stealers on mainland Africa, but those typically exploit animals too big to swat them or to flee. With a bird, you have to be more careful. So the moth strikes at night, using its barbed implement to peel back the bird's double eyelid. This moth isn't a tearjerker; it's a tear-drinking jerk. You may have heard about the extraordinary tongue of the chameleon, the longest compared to its body size for all vertebrates. But within mammals, that honor goes to Anoura fistulata, the tube-lipped nectar bat discovered in the cloud forests of Ecuador. While some of its relatives can extend their tongues an inch and a half, this bat's tongue can reach an astonishing 3.4 inches, or more than one and a half times its body length. This gives it access to the nectar inside bell-shaped flowers that no other bat can reach, and it's possible because the tongue is anchored deep inside the bat's rib cage, between its heart and sternum. That lends it this extra leverage. The vicious thorns of the acacia tree, insects flying around the eyes--these are no match for the tongue of the giraffe, one of the longest in the animal kingdom. Besides its prodigious span, the giraffe tongue is also marked by its distinct bluish-black color. Some zoologists think this may be a way to keep the tongue from getting sunburned, since it spends so much time outside the animal's mouth. Hawk moths aren't the most svelte or slender fliers. But when you can unfurl a 14-inch long proboscis, who cares? Like nectar bats, many species of hawk moths (sometimes called sphinx moths) can reach nectar inaccessible to other flying creatures. Instead of keeping their appendages tucked deep inside, though, the moths keep theirs curled up until they need them. Famously, Charles Darwin predicted that there must have been moths with exceedingly long proboscises in Madagascar after he saw the orchids from that island with deeply recessed nectar. Those moths weren't discovered until after the great naturalist's death, so he was posthumously proven correct. No, they're not elephants. And technically, they're not even shrews. But it's not hard to see how elephant shrews got their name. This insectivorous African animal uses that glorious and elongated nose to hunt down spiders, worms, and insects, and then suck them up like an anteater does. Biologists in the past believed these creatures were related to true shrews, hedgehogs, or maybe even primates and rabbits. But, it seems, they are rather their own distinct order dating back millions of years, and a new species turned up in Tanzania just two years ago. 
As is the case with elephant shrews, it's clear where this handsome fellow, the star-nosed mole, gets his moniker. That star nose is made of 22 separate tentacles covered in 160,000 sensors per square inch, according to the PBS Nature episode "The Beauty of Ugly," which featured the mole. When it burrows, those tentacles can touch 12 different objects every second, appearing to the human eye as no more than a pink blur of activity. With this ability, it takes less than a second for the star-nosed mole to devour its prey, often worms or insects. As we said about naked mole rats in our gallery of weird lab animals, you've got to be tough and talented if you're this ugly. Star-nosed moles certainly are. The scientific name for barnacles is Cirripedia, and the "cirri" means those weird feathery limbs you see here on goose barnacles. When the barnacle glues itself to its home, be that a rock or a ship's hull, it goes front end-first. These odd appendages then emerge from its back end to pull in plankton to eat. While pigeons might seem dirty, dumb, and fill you with the urge to poison them in the park, the ubiquitous urban birds are actually quite clever, as research examples have shown. Not only that, they use their beaks like straws to suck up water, while most other birds have to rely on getting a few drops in their mouths and then tilting their heads backward to let the water trickle down their throats. On second thought, perhaps I'll let the pigeons live.
<urn:uuid:384afd4e-372d-4fe4-af11-090cd4cf1b18>
3.109375
1,172
Listicle
Science & Tech.
57.317853
575
When we look at the code-behind page in a .NET web application, we find that at the top, after the namespaces, there is a class that is declared as partial in every web page. The question that arises is: what is this partial keyword, and why do we declare the class with the partial modifier rather than simply making it public or private? Here we answer these questions, starting with the definition of a partial class and the need for it. Normalisation is a data analysis technique used to design a database system. Normalisation allows the database designer to understand the current data structure within an organisation. The end result of normalisation is a set of entities. We remove unnecessary redundancy by normalising the database tables. An alias is an alternative name given by the user to a column or table. The alias name can be used to refer to a column or table without using its real name. As we proceed in this article we will see how we can use both column aliases and table aliases in SQL Server. Using the querystring is another method of passing information between pages in your ASP.NET application. The querystring is the portion of the URL after a question mark (?). The information is always retrieved as a string, which can then be converted to any type. Here we give the code to pass multiple values at a single time in the querystring. The Cast() function is used to change the data type of a column or expression. We can use the cast() function for various purposes: Cast(Original_Expression as Desired_DataType). The convert() function is used to convert an expression of one specific data type to another type. This function can also be used to present the value of a date/time variable in different formats. As we will discuss later in this post, we will see how we accomplish this task. Reference types are an important feature of the C# language. They enable us to write complex and powerful applications and to use the run-time framework effectively. A reference type variable in C# contains a reference to the data, not the value itself; the value is stored in a separate memory area. For example, the reference types we use in C# include classes, interfaces, arrays, delegates, and strings (structures and enumerations, by contrast, are value types).
<urn:uuid:65ffd00f-bfa8-4e85-ba0a-061109f0599f>
3.40625
491
Content Listing
Software Dev.
46.28012
576
The Pasterze Glacier in western Austria has been receding since 1856. A combination of higher summer temperatures and lower winter snowfall is causing the retreat. Glaciers in nearby Switzerland receded more rapidly in 2003 than in any other year since annual measurements began in 1880. Despite the record heat in Europe that summer, scientists from the Swiss Academy of Natural Sciences attributed the melting to long-term climate change. NASA scientists use satellite data to measure the advance and retreat of glaciers all around the world. This true-color image was acquired by Space Imaging’s Ikonos satellite on October 3, 2001. The full-resolution image has a resolution of 4 meters per pixel. For more information about monitoring Glaciers, read At the Edge: Monitoring Glaciers to Watch Global Change. Image by Robert Simmon, NASA’s Earth Observatory, based on data copyright Space Imaging
<urn:uuid:3bd74f56-0527-4a18-8035-023e5cfe289a>
4.40625
180
Knowledge Article
Science & Tech.
32.29
577
General Chemistry/Periodicity and Electron Configurations Blocks of the Periodic Table The Periodic Table does more than just list the elements. The word periodic means that in each row, or period, there is a pattern of characteristics in the elements. This is because the elements are listed in part by their electron configuration. The Alkali metals and Alkaline earth metals have one and two valence electrons (electrons in the outer shell) respectively. These elements lose electrons to form bonds easily, and are thus very reactive. These elements are the s-block of the periodic table. The p-block, on the right, contains common non-metals such as chlorine and helium. The noble gases, in the column on the right, almost never react, since they have eight valence electrons, which makes them very stable. The halogens, directly to the left of the noble gases, readily gain electrons and react with metals. The s and p blocks make up the main-group elements, also known as representative elements. The d-block, which is the largest, consists of transition metals such as copper, iron, and gold. The f-block, on the bottom, contains rarer metals including uranium. Elements in the same Group or Family have the same configuration of valence electrons, making them behave in chemically similar ways. Causes for Trends There are certain phenomena that cause the periodic trends to occur. You must understand them before learning the trends. Effective Nuclear Charge The effective nuclear charge is the amount of positive charge acting on an electron. It is the number of protons in the nucleus minus the number of electrons in between the nucleus and the electron in question. Basically, the nucleus attracts an electron, but other electrons in lower shells repel it (opposites attract, likes repel). Shielding Effect The shielding (or screening) effect is similar to effective nuclear charge. The core electrons repel the valence electrons to some degree. The more electron shells there are (a new shell for each row in the periodic table), the greater the shielding effect is. Essentially, the core electrons shield the valence electrons from the positive charge of the nucleus. Electron-Electron Repulsions When two electrons are in the same shell, they will repel each other slightly. This effect is mostly canceled out due to the strong attraction to the nucleus, but it does cause electrons in the same shell to spread out a little bit. Lower shells experience this effect more because they are smaller and allow the electrons to interact more. Coulomb's Law Coulomb's law is an equation that determines the amount of force with which two charged particles attract or repel each other. It is F = k·q1·q2/r^2, where q1 and q2 are the amounts of charge (+1e for protons, -1e for electrons), r is the distance between them, and k is a constant. You can see that doubling the distance would quarter the force. Also, a large number of protons would attract an electron with much more force than just a few protons would. Trends in the Periodic Table Most of the elements occur naturally on Earth. However, all elements beyond uranium (number 92) are called trans-uranium elements and never occur outside of a laboratory. Most of the elements occur as solids or gases at STP. STP is standard temperature and pressure, which is 0° C and 1 atmosphere of pressure. There are only two elements that occur as liquids at STP: mercury (Hg) and bromine (Br). Bismuth (Bi) is the last stable element on the chart. All elements after bismuth are radioactive and decay into more stable elements.
Some elements before bismuth are radioactive, however. Atomic Radius Leaving out the noble gases, atomic radii are larger on the left side of the periodic chart and are progressively smaller as you move to the right across the period. Conversely, as you move down the group, radii increase. Atomic radii decrease along a period due to greater effective nuclear charge. Atomic radii increase down a group due to the shielding effect of the additional core electrons, and the presence of another electron shell. Ionic Radius For nonmetals, ions are bigger than atoms, as the ions have extra electrons. For metals, it is the opposite. Extra electrons (negative ions, called anions) cause additional electron-electron repulsions, making them spread out farther. Fewer electrons (positive ions, called cations) cause fewer repulsions, allowing them to be closer. Ionization Energy Ionization energy is the energy required to strip an electron from the atom (when in the gas state). Ionization energy is also a periodic trend within the periodic table organization. Moving left to right within a period or upward within a group, the first ionization energy generally increases. As the atomic radius decreases, it becomes harder to remove an electron that is closer to a more positively charged nucleus. Ionization energy decreases going left across a period because there is a lower effective nuclear charge keeping the electrons attracted to the nucleus, so less energy is needed to pull one out. It decreases going down a group due to the shielding effect. Remember Coulomb's Law: as the distance between the nucleus and electrons increases, the force decreases at a quadratic rate. Ionization energy is considered a measure of the tendency of an atom or ion to surrender an electron, or the strength of the electron binding; the greater the ionization energy, the more difficult it is to remove an electron. The ionization energy may be an indicator of the reactivity of an element. Elements with a low ionization energy tend to be reducing agents and form cations, which in turn combine with anions to form salts. Electron Affinity Electron affinity is the opposite of ionization energy. It is the energy released when an electron is added to an atom. Electron affinity is highest in the upper right, lowest on the bottom left. However, electron affinity is actually negative for the noble gases. They already have a complete valence shell, so there is no room in their orbitals for another electron. Adding an electron would require creating a whole new shell, which takes energy instead of releasing it. Several other elements have extremely low electron affinities because they are already in a stable configuration, and adding an electron would decrease stability. Electron affinity occurs due to the same reasons as ionization energy. Electronegativity Electronegativity is how much an atom attracts electrons within a bond. It is measured on a scale with fluorine at 4.0 and francium at 0.7. Electronegativity decreases from upper right to lower left. Electronegativity decreases because of atomic radius, shielding effect, and effective nuclear charge in the same manner that ionization energy decreases. Metallic Character Metallic elements are shiny, usually gray or silver colored, and good conductors of heat and electricity. They are malleable (can be hammered into thin sheets), and ductile (can be stretched into wires). Some metals, like sodium, are soft and can be cut with a knife. Others, like iron, are very hard.
Non-metallic atoms are dull, usually colorful or colorless, and poor conductors. They are brittle when solid, and many are gases at STP. Metals give away their valence electrons when bonding, whereas non-metals take electrons. The metals are towards the left and center of the periodic table—in the s-block, d-block, and f-block . Poor metals and metalloids (somewhat metal, somewhat non-metal) are in the lower left of the p-block. Non-metals are on the right of the table. Metallic character increases from right to left and top to bottom. Non-metallic character is just the opposite. This is because of the other trends: ionization energy, electron affinity, and electronegativity.
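As a quick numerical check of the Coulomb's-law reasoning used throughout these trends, here is a minimal sketch; the charge and distance values are illustrative and not taken from the text.

# Minimal numeric check of Coulomb's law, F = k*q1*q2 / r**2.
# Values are illustrative only: a +1e and a -1e charge at atomic-scale separations.
k = 8.9875e9          # Coulomb constant, N*m^2/C^2
e = 1.602e-19         # elementary charge, C

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force between charges q1 and q2 at distance r."""
    return k * abs(q1 * q2) / r**2

r1 = 1.0e-10          # roughly one angstrom
f_near = coulomb_force(+e, -e, r1)
f_far = coulomb_force(+e, -e, 2 * r1)   # double the distance

print(f"force at r : {f_near:.3e} N")
print(f"force at 2r: {f_far:.3e} N")
print(f"ratio (should be 4): {f_near / f_far:.1f}")

Running it confirms the claim in the text: doubling the distance quarters the force, which is why valence electrons far from the nucleus are held much more loosely.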
<urn:uuid:7ab562e2-c61b-4988-9c51-24c5b3cb1d20>
4.4375
1,666
Knowledge Article
Science & Tech.
39.408244
578
Most Atlantic hurricanes start to take shape when thunderstorms along the west coast of Africa drift out over warm ocean waters that are at least 80 degrees Fahrenheit (27 degrees Celsius), where they encounter converging winds from around the equator. Warm Air, Warm Water Make Conditions Right for Hurricanes Hurricanes start when warm, moist air from the ocean surface begins to rise rapidly, where it encounters cooler air that causes the warm water vapor to condense and to form storm clouds and drops of rain. The condensation also releases latent heat, which warms the cool air above, causing it to rise and make way for more warm humid air from the ocean below. As this cycle continues, more warm moist air is drawn into the developing storm and more heat is transferred from the surface of the ocean to the atmosphere. This continuing heat exchange creates a wind pattern that spirals around a relatively calm center, or eye, like water swirling down a drain. Converging Winds Create Hurricanes Converging winds near the surface of the water collide, pushing more water vapor upward, increasing the circulation of warm air, and accelerating the speed of the wind. At the same time, strong winds blowing steadily at higher altitudes pull the rising warm air away from the storm's center and send it swirling into the hurricane's classic cyclone pattern. High-pressure air at high altitudes, usually above 30,000 feet (9,000 meters), also pulls heat away from the storm's center and cools the rising air. As high-pressure air is drawn into the low-pressure center of the storm, the speed of the wind continues to increase. As the storm builds from thunderstorm to hurricane, it passes through three distinct stages based on wind speed: - Tropical depression: wind speeds of 38 miles per hour (61.15 kilometers per hour) or less - Tropical storm: wind speeds of 39 mph to 73 mph (62.76 kph to 117.48 kph) - Hurricane: wind speeds of 74 mph (119.09 kph) or greater Scientists Debate Cause of Temperature Changes that Create Hurricanes While scientists agree on the mechanics of hurricane formation, and they agree that hurricanes are becoming more frequent and severe, that's where consensus ends. Some scientists believe that human activity already has contributed significantly to global warming, which is increasing air and water temperatures worldwide and making it easier for hurricanes to form and gain destructive force. Other scientists believe that the increase in severe hurricanes over the past decade is due to natural salinity and temperature changes deep in the Atlantic, part of a natural environmental cycle that shifts back and forth every 40-60 years. Frequency and Severity of Hurricanes Likely to Increase While the scientific community debates the root cause of the temperature changes that are contributing to the current increase in destructive hurricanes, three things are apparent: - Air and water temperatures are rising worldwide. - Human activities such as deforestation and greenhouse gas emissions from a wide range of industrial and agricultural processes are contributing to those temperature changes at a greater rate today than in the past. - Failure to take action now to lower atmospheric levels of greenhouse gases is likely to lead to more frequent and severe hurricanes in the future.
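To make the wind-speed thresholds above concrete, here is a small sketch; the function name and structure are my own illustration, using the mph thresholds quoted in the article.

# Classify a storm stage from sustained wind speed, using the article's thresholds
# (38 mph or less, 39-73 mph, 74 mph or greater). Illustrative only.
def storm_stage(wind_mph):
    if wind_mph <= 38:
        return "tropical depression"
    elif wind_mph <= 73:
        return "tropical storm"
    else:
        return "hurricane"

for speed in (30, 50, 85):
    print(speed, "mph ->", storm_stage(speed))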
<urn:uuid:2529c9ff-fac1-4c7a-81c9-51e424e73008>
4.21875
647
Knowledge Article
Science & Tech.
31.318844
579
[erlang-questions] Design methodology going from Object oriented to functional programming? Tue Oct 23 04:41:26 CEST 2007 Actually, at the level you are describing, there should be no difference between FP and OOP. While OOP definitely emphasize data and relations, it is not the only paradigm that does so - and given your background you should be experienced with the relational paradigm, which has even heavier emphasis on data and relations, but is not OOP. To leverage your knowledge on relations - you can pretend sql queries are functions, i.e. select is a function, update is a function, and insert is a function. Then instead of writing insert (object) - which looks a lot like the sql query insert into table ... That's it. In the Java style OOP the head of the statement is an object, and in FP the head of the statement is a function name. But in either case you need to model the same world. An extremely crude way of thinking about FP is that it decouples the data from the function (or vice versa, that OOP couples functions and data). That means some facilities that you've taken for granted in OOP, such as inheritance, polymorphism, etc., will no longer be available. But FP have a different approaches to address these problems, and that's where the rubber meet the road. Erlang's process model is basically the Actor model - the idea of everything is an actor feels similar to everything is an object in OOP. So as others have alluded to instead of thinking in objects you can think in processes. But Actor model is independent of FP or OOP, so you would still have to get used to the FP part in Erlang. You might want to check out http://www.math.chalmers.se/~rjmh/Papers/whyfp.html <http://www.math.chalmers.se/%7Erjmh/Papers/whyfp.html>. A higher level introduction is http://www.defmacro.org/ramblings/fp.html. On 10/22/07, Alexander Lamb < > wrote: > Hello list, > I am trying to understand what is the design process (intellectual, that > is) when building a program in Erlang. > Indeed, in the object oriented world, I would start by finding what my > classes might be and the relationship between them. Gradually I would add > functions (class or instance methods) to the classes in order to provide > solid foundations on top of which I can write an application. > For example, I could have PERSON, PROFILE, ROLE, FEATURE, etc... and > decide a PROFILE is a collection of FEATURES. A PERSON can have 0 or many > ROLES. A ROLE is a PROFILE on a given area (a department for example) for a > given time. I would then add functions such as "give all the active roles > for the user" or "what features give that profile" or "does the user have a > given feature for that department". > I admit it is more complexe than that, but you get the idea. > Obviously, this doesn't seem to be the way to go with Erlang. Intuitively, > I would start making a list of all the functions which will allow me to > interract with my application. In that case I could have "give me all users > with an active role on that department", etc... Then by implementing those > high level functions I would split them into pieces by calling smaller > simpler functions. The underlying data structure will "just follow" or > "appear" naturally. > Hence: object oriented design is "data structure and relationships first, > functions second" and functional design is "functions first, data structure > Am I being over simplistic here. Are there some guidelines as to how one > can approach a problem when creating a new program? 
Especially programs > which deal with persistent data, not protocole analysers or socket servers! > Alexander Lamb > Founding Associate > RODANOTECH Sàrl > 4 ch. de la Tour de Champel > 1206 Geneva > Tel: 022 347 77 37 > Fax: 022 347 77 38 > erlang-questions mailing list -------------- next part -------------- An HTML attachment was scrubbed... More information about the erlang-questions
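A rough illustration of the "functions first, data follows" idea discussed in this thread, sketched here in Python with plain dictionaries rather than Erlang terms; every name and data shape below is invented for the example and is not from the original posts.

# Model people, roles and profiles as plain data and express the domain as
# functions over that data. All names are illustrative assumptions.
from datetime import date

def active_roles(person, on):
    """All roles of a person whose validity period covers the given date."""
    return [r for r in person["roles"] if r["start"] <= on <= r["end"]]

def features_of(profile):
    """The set of features granted by a profile."""
    return set(profile["features"])

def has_feature(person, feature, department, profiles, on):
    """Does the person hold the feature in that department on that date?"""
    return any(
        feature in features_of(profiles[r["profile"]])
        for r in active_roles(person, on)
        if r["department"] == department
    )

profiles = {"admin": {"features": ["edit", "delete"]}}
alice = {"name": "Alice",
         "roles": [{"profile": "admin", "department": "radiology",
                    "start": date(2007, 1, 1), "end": date(2007, 12, 31)}]}

print(has_feature(alice, "delete", "radiology", profiles, on=date(2007, 10, 22)))

The high-level query functions come first; the dictionary layout "just follows" from what those functions need, which is the design order the original poster describes.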
<urn:uuid:b480edef-faa3-4d41-bb3d-093798823b41>
2.640625
985
Comment Section
Software Dev.
64.797442
580
What does it mean? and What is it for? It is used to map a canonical name for a servlet (not an actual Servlet class that you've written) to a JSP (which happens to be a servlet). On its own it isn't quite useful. You'll often need to map the servlet to a url-pattern, for example /test/*, using a <servlet-mapping> element. All requests arriving at /test/* will then be serviced by the JSP. Additionally, the servlet specification also states: the jsp-file element contains the full path to a JSP file within the web application beginning with a “/”. If a jsp-file is specified and the load-on-startup element is present, then the JSP should be precompiled and loaded. So, it can be used for pre-compiling servlets, in case your build process hasn't precompiled them. Do keep in mind that precompiling JSPs this way isn't exactly a best practice. Ideally, your build script ought to take care of such matters. Is it like code-behind architecture in ASP .NET? No, if you're looking for code-behind architecture, the closest resemblance to such is in the Managed Beans support offered by JSF.
<urn:uuid:d74cec90-49ba-4472-9fd3-5508360e9b05>
2.796875
271
Q&A Forum
Software Dev.
67.660625
581
National Ocean Sciences Accelerator Mass Spectrometer Facility, Department of Geology and Geophysics, Woods Hole Oceanographic Institution Data Center Description The Woods Hole Oceanographic Institution's National Ocean Sciences Accelerator Mass Spectrometry Facility (NOSAMS) was established in 1989 to process and analyze a large number of small volume seawater samples (>13,700) collected as part of the World Ocean Circulation Experiment (WOCE). An ongoing commitment to automation and high-precision 14C AMS measurements has allowed NOSAMS to successfully meet the goals of the WOCE program while providing over 17,000 AMS radiocarbon results (carbon dating ) to non-WOCE investigators from a wide variety of carbon-bearing materials. All of the formal WOCE program samples have been analyzed as of the spring of 2002, and all Pacific and Indian Ocean data have been released to the WOCE Hydrographic Office. The Atlantic Ocean dataset is expected to be released by the summer of 2002. AMS analyses are reported with a routine precision of below 4 per mil (thousand) on samples with a 14C content of more than 70 % of that of a modern sample. NOSAMS accepts samples from all qualified research laboratories and charges fees that vary according to the difficulty of analysis and the nature of the project. Samples from federally supported research programs receive the lowest rates. Overall, the objective of this facility is to support research in all studies of global change.
<urn:uuid:1563d880-0dbc-4430-a22b-750bd3cf773a>
2.65625
304
About (Org.)
Science & Tech.
18.04569
582
|Version 5 (modified by simonmar, 3 years ago)| The Garbage Collector GC algorithms supported: - Copying GC - Parallel GC? - Marking? (for compaction or sweeping) - Sweeping? (for mark-region GC) The GC is designed to be flexible, supporting lots of ways to tune its behaviour. Here's an overview of the techniques we use: - Generational GC, with a runtime-selectable number of generations (+RTS -G<n> -RTS, where n >= 1). Currently it is a traditional generational collector where each collection collects a particular generation and all younger generations. Generalizing this such that any subset of generations can be collected is a possible future extension. - The heap grows on demand. This is straightforwardly implemented by basing the whole storage manager on a block allocator. - Aging: objects can be aged within a generation, to avoid premature promotion. See Commentary/Rts/Storage/GC/Aging. - The heap collection policy is runtime-tunable. You select how large a generation gets before it is collected using the +RTS -F<n> -RTS option, where <n> is a factor of the generation's size the last time it was collected. The default value is 2; that is, a generation is allowed to double in size before being collected. GC data structures The main data structure is generation, which contains: - a pointer to a list of blocks - a pointer to a list of blocks containing large objects - a list of threads in this generation - the "remembered set", a list of blocks containing pointers to objects in this generation that point to objects in younger generations and various other administrative fields (see includes/rts/storage/GC.h for the details). Generations are kept in the array generations, indexed by the generation number. A nursery is a list of blocks into which the mutator allocates new (small) objects. For reasons of locality, we want to re-use the list of blocks for the nursery after each GC, so we keep the nursery blocks rather than freeing and re-allocating a new nursery after GC. The struct nursery contains only two fields: - the list of blocks in this nursery - the number of blocks in the above list In the threaded RTS, there is one nursery per Capability, as each Capability allocates independently into its own allocation area. Nurseries are therefore stored in an array nurseries, indexed by Capability number. The blocks of the nursery notionally belong to generation 0, although they are not kept on the list generations.blocks. The reason is that we want to keep the actual nursery blocks separate from any blocks containing live data in generation 0. Generation 0 may contain live data for two reasons: - objects that are live in the nursery are not promoted to generation 1 immediately; instead they are aged, first being copied to generation 0, and then being promoted to generation 1 in the next GC cycle if they are still alive. - If there is only one generation (generation 0), then live objects in generation 0 are retained in generation 0 after a GC.
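A toy sketch of the bookkeeping described above, written as Python dataclasses purely for illustration; the real definitions are C structs in includes/rts/storage/GC.h, and the field names here are paraphrases rather than the actual RTS names.

# Paraphrased sketch of the storage-manager data structures described above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Block:
    data: bytearray = field(default_factory=lambda: bytearray(4096))

@dataclass
class Generation:
    number: int
    blocks: List[Block] = field(default_factory=list)          # ordinary objects
    large_objects: List[Block] = field(default_factory=list)   # blocks holding large objects
    threads: List[int] = field(default_factory=list)           # threads in this generation
    remembered_set: List[Block] = field(default_factory=list)  # old-to-young pointers

@dataclass
class Nursery:
    blocks: List[Block] = field(default_factory=list)  # re-used after each GC
    n_blocks: int = 0

# One generation per +RTS -G<n>, one nursery per Capability in the threaded RTS.
generations = [Generation(number=i) for i in range(2)]
nurseries = [Nursery(blocks=[Block()], n_blocks=1) for _capability in range(4)]
print(len(generations), "generations,", len(nurseries), "nurseries")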
<urn:uuid:33a5f7fa-0ccf-4ab3-bb64-d5642ce42372>
2.96875
665
Documentation
Software Dev.
42.574535
583
Good Answer by Fishtoaster. The science is ancient, discovered by Archimedes. 1: Any object, wholly or partially immersed in a fluid, is buoyed up by a force equal to the weight of the fluid displaced by the object. In other words, if you put a ball, with volume 1 litre completely under water, there is an upwards force on the ball (buoyancy or flotation) equal to the weight of 1 litre of water. (i.e. 1 kilogram-force or 9.81 Newtons) 2: (Corollary) Any floating object displaces its own weight of fluid. If we place a floating object of mass 1 kilo, it will displace exactly 1 kilo of water, or 1 litre of water. If the volume of our object is less than 1 litre, it will float. So, with a Hydrometer, it is weighted such that the submerged volume at the 1.000 reading is exactly equal to the weight of the hydrometer. If we dissolve solids into the water (sugar) that volume of water is heavier, and less of it needs to be displaced in order for the hydrometer's weight to be matched, and the Hydrometer floats higher.
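A small numeric version of that argument, with made-up values chosen only for illustration:

# Archimedes' principle applied to a hydrometer: the instrument sinks until the
# displaced liquid weighs as much as the hydrometer itself, so the submerged
# volume is inversely proportional to the liquid's density.
hydrometer_mass_g = 50.0   # illustrative value

def submerged_volume_ml(liquid_density_g_per_ml):
    """Volume of liquid that must be displaced to support the hydrometer."""
    return hydrometer_mass_g / liquid_density_g_per_ml

v_water = submerged_volume_ml(1.000)   # plain water
v_syrup = submerged_volume_ml(1.040)   # water with dissolved sugar (denser)

print(f"submerged volume in water: {v_water:.1f} mL")
print(f"submerged volume in sugar solution: {v_syrup:.1f} mL (so it floats higher)")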
<urn:uuid:710105c9-fdc7-4e38-9ce1-cfb2eb9ec746>
3.65625
266
Q&A Forum
Science & Tech.
67.277035
584
Robots game activity Bring the stack of CRC index cards you developed in lab yesterday to lecture, one for each class you hope to design in your robots program. This activity will involve elaboration of these cards, giving greater specificity to responsibilities and describing each class's attributes. Your task in class is to embellish these cards as follows: - Each card should list of member functions and instance variables of the class. (Instance variables are the private variables.) Insofar as possible, you should give the type/class of all parameters and return values of the member functions, and the types of the instance variables. - One additional card lists non-member functions (if any) along with the purpose of each. Again, give the type/class of all parameters and return values. - Number all member functions and procedures in the order you intend to code and test them. You'll put the number (1) in front of all methods and procedures you can test on their own without writing any others. Put a (2) in front of procedures you can't write until you've written procedures from (1), and so forth. - Identify with a (*) all methods which appear challenging to write. These are ones which you hope you can break up later into smaller procedures once you've thought about the robots program more.
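As an illustration of how one of those elaborated cards might later turn into code, here is a sketch only; the course presumably prescribes its own language and class names, and everything below is invented for the example.

# A CRC card turned into a code skeleton. The numbers in comments mirror the
# ordering exercise: (1) = testable on its own, (2) = depends on (1),
# (*) = flagged as challenging. All names are invented.
class Robot:
    """Card: Robot. Responsibilities: know its position, move one step toward a target."""

    def __init__(self, row: int, col: int):
        # instance variables from the card, with their types
        self.row = row
        self.col = col

    # (1) testable on its own
    def position(self) -> tuple:
        return (self.row, self.col)

    # (2)(*) depends on position(); may need to be broken into smaller helpers later
    def step_toward(self, target_row: int, target_col: int) -> None:
        self.row += (target_row > self.row) - (target_row < self.row)
        self.col += (target_col > self.col) - (target_col < self.col)

r = Robot(0, 0)
r.step_toward(3, -2)
print(r.position())   # (1, -1)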
<urn:uuid:4fe2fa59-6eeb-4077-99a2-3e6678c046da>
3.4375
295
Tutorial
Software Dev.
49.502065
585
Fri February 1, 2013 Dung Beetles Use Cosmic GPS to Find Their Way Originally published on Fri February 1, 2013 12:03 pm IRA FLATOW, HOST: Now for a surprising find from the insect world. The dung beetle, that insect known for sculpting little balls of animal feces that they roll around and later feast on. Well, it turns out that these beetles have a built-in cosmic GPS that helps them navigate around. Dung beetles use light - listen to this - use light from the Milky Way to orient themselves at night. It's all in a paper published earlier this month in the journal Current Biology. My next guest is here to tell us more about how dung beetles see the starry - how do they see the starry night, a starry sky? Eric Warrant is a co-author of the dung beetle study in Current Biology, professor of functional zoology at Lund University in Lund, Sweden. Welcome to the program. ERIC WARRANT: Thank you very much. It's nice to be here. FLATOW: So dung beetles use the starry night, they use the Milky Way to navigate around? WARRANT: They do indeed. Yes. It's a surprising finding, but they do indeed. Yeah. We discovered it almost by chance, really, because we were studying their mechanisms of navigating with regards to the moon, which is slightly more visible and obvious stimulus during the night sky. But we discovered on most parts of the month when the moon came up extremely late after midnight, particularly that until midnight, we suddenly discovered that the beetles were still able to navigate even without the moon. So we were quite puzzled by this, a bit alarmed actually at first because we were worried that our previous work was wrong. But then after further contemplation, we sort of realized that, well, maybe they were using the stars. And it turned out to be the case. FLATOW: They can actually see the Milky Way at night time? WARRANT: They can, yes. They probably don't see that many individual stars because their eyes really aren't sensitive enough to discern many more than probably the 10 or so brightest stars. But they can actually see the very dim stripe of light, which is - which makes up the Milky Way, especially in the Southern Hemisphere. It's very, very obvious actually compared to the Northern Hemisphere. And it's this broad and rather dim stripe of light which they're able to detect and to orient with respect to. FLATOW: How do you craft an experiment to discover this? WARRANT: Yes. Well, the first clue we got was that we covered their heads with little tiny cardboard hats, which we cut out of black card and then taped onto the back of their bodies so that they - the view of the night sky was removed. And when we did that, they were no longer able to orient. And the way they normally orient is that they roll balls of dung in straight lines directly away from the dung pad. They have to do this because if they don't do this, they end up rolling back into the dung pad. And there's a lot of beetles there, all competing for this very valuable resource, and it's very likely that they get their dung ball stolen after quite a fight often. So they must get away from the dung pad in a straight line. That's the quickest and the most efficient way of leaving the dung pad, and so it's critically important for them that they do this. And so the stripe of light in the sky helps them to do this. They can actually orient with respect to it and orient in straight line away. FLATOW: So what happens when it's a cloudy night and they can't see the sky? They just don't go out that night? 
WARRANT: No, they do but they roll in circles. So it's actually a very dangerous night indeed for dung beetles. WARRANT: But thankfully, in South Africa where we were working, cloudy nights are not all that common. FLATOW: Wow. And so they must have developed this over the millennia, this ability to do this? WARRANT: Well, we're not absolutely sure when and how and for how long it's evolved. But certainly, I daresay, it been around for a while because, as I say, it's a very important behavior that they have. Everything that they live for really has to do with dung. So - and the dung ball is incredibly important to them because they have to find a mate and lay their eggs in this ball. So it's a very valuable thing, this ball. So rolling away from the dung pile in a straight line with the help of the moon and - if it's present - and with the Milky Way if the moon isn't present seems to be something that probably has evolved some time ago. And they're probably not the only animals that are able to see it either. FLATOW: Yeah. That was my next question. They must think - there must be other animals that do this too. WARRANT: It's very likely. We - this is the first animal that we know of that's able to orient with respect to the Milky Way. But it's very likely that there are others. There are many night-flying moths and grasshoppers and locusts, for instance, that migrate considerable distances at night. And it might be the case that they, too, can use the Milky Way under a dark night. FLATOW: Wow. Starry, starry night has a whole new meaning now. WARRANT: Indeed it does. FLATOW: All right. Eric Warrant, thank you very much for taking time to be with us. WARRANT: A pleasure. FLATOW: Good luck. WARRANT: Thank you very much for having me. Thank you. Bye bye. FLATOW: You're welcome. Eric Warrant is professor of zoology in the Department of Biology at Lund University in Lund, Sweden. Transcript provided by NPR, Copyright NPR.
<urn:uuid:c7c62123-02a8-4629-86ae-7b86eac58b17>
3.46875
1,273
Audio Transcript
Science & Tech.
70.659929
586
Climate & Weather Resources - General Resources, Auroras, Climatic Changes & Global Warming, Cyclones, Droughts, El Niño, Floods, Frost, Ice, Snow, Hurricanes, Meteorology, Natural Disasters, Rainbow, Space Weather, Storms, Temperature, Tides, Tornadoes, Typhoons, Wind Chill (UCF access only) - Provides files for Monthly Climatic Data for the World, Storm Data, Local Climatological Data, Climatological Data, Hourly Precipitation Data, and Heating & Cooling Degree Day Data. Select the Free access by certain agencies and individuals link to obtain the full reports. - To retrieve Local Climatological Data, you need to know the correct The Florida stations are: |DAB - Daytona Beach Regional Airport ||KYW - Key West International Airport ||TLH - Tallahassee Municipal Airport |FMY - Fort Myers ||MIA - Miami International Airport ||TPA - Tampa International Airport |GNV - Gainesville Municipal Airport ||MCO - Orlando International Airport ||VRB - Vero Beach Municipal Airport |JAX - Jacksonville International Airport ||PNS - Pensacola Regional Airport ||PBI - West Palm Beach International Airport - The UCF Library has paper or microfiche copies of some earlier issues of these publications in the US Documents Collection, including: - Climatological Data - Florida [US DOCS C55.214/8:] - Local Climatological Data - Orlando [US DOCS C55.217:] Climatic Data and Weather Observations Historic Data for Florida Stations Fires and Storms in Florida Water Resources of Florida - Orlando Subdistrict (USGS) - Weather America [REF QC983.W385 1996] provides key climatological data, with rankings, for over 4,000 places in the United States based on observations from 1965-1994. - Weather of U.S. Cities [REF QC983.W393 1996] provides a guide to the recent weather histories of 268 key cities and weather observation stations in the United States and its island territories. - Engineering Weather Data [REF TH7015.K54 2001] is intended to be a comprehensive, single weather data resource including all data commonly used for building systems design and energy analysis for 375 United States, Canadian and worldwide cities. Includes BIN data, degree days, ventilation energy consumption, humidification water consumption, heat recovery savings, economizer savings, and ASHRAE design conditions. "The time period of coverage ranges from the 1830s through the 1970s with most data from the period prior to 1960. Each series typically includes observations for a number of meteorological and other geophysical parameters." "covers research related to analysis and prediction of observed and modeled circulations of the atmosphere, including technique development, data assimilation, model validation, and relevant case studies. This includes papers on numerical techniques and data assimilation techniques that apply to the atmosphere and/or ocean environment." Looking for something else? Ask for assistance at the UCF Library's Reference Desk. Prepared by: Rich Gause, Government URL of this page: http://library.ucf.edu/govdocs/climate/ Last updated October 13, 2011 9:27:08 AM
<urn:uuid:98df9dd3-ecf4-4f7c-8a4f-a38732cb640c>
2.640625
724
Content Listing
Science & Tech.
18.307997
587
Is there no possible way that it has a final digit? He even proved a stronger result, Yes, Loiville 1882 (I believe).not algebraic? It was not proven geometrical. It was proven based on an integration that produces as a result.Geometrically, if we were drawing a circle, the ends must touch? (Even at the very small value level?) Here's my little conjecture (since I don't know how to prove it and excel says it doesn't work, but after 200 terms I don't know how accurate excel can be) This shows that the closer you get to infinity, the more decimal places will be in pi, but what if you reach infinity (which you obviously can't do)? You would get infinite decimal places, and it would equal pi... Does this mean that the ends of a circle do not touch at all? (Apart from at infinity) How can you calculate that in excel?? The best I can get for calculating pi is doing the sum of about 100000 inverse squares! If Pi has infinite decimal places then it must never be closed circle as there must be a space somewhere on the circle that is infinitely small and can not be filled. What I'm thinking is usually hard to understand so think of it like this... Pretend your drawing a circle with an incredibly fine pencil and your drawing of the circle is literally perfect. The diameter of the circle is 1cm so the circumference is Pi. You start at a point and draw 3cm. The circle is not yet complete as there is 0.14159..cm left so you draw 0.1cm but there's still 0.0415926...cm left to draw! You get closer and closer but since pi has infinite decimal places (that aren't all 0s) you will never reach the starting point of the circle! I hope thats easy enough to follow! It's clear what you mean, but mathematically there's no problem and practically, there's no difference with other numbers. Take a diameter of 1/pi, then you have to draw the circle with circumference 1. You have your fine pencil and you start drawing, already at 0.9, then 0.95, then 0.9997, then... Stopping at exactly 1, isn't fysically/practially "easier" than stopping at pi. The circle is closed, because pi is what it is, it's not 3.14 and not 3.141592653, but pi. My mistake was assuming pi would be drawn from a meaurable point of view when really "it is what it is". So in theory - no problem. Pi has its value and thats the value in the ratio from diameter to circumference. However, on paper it can be a problem as if you draw a perfect closed circle (ie. no errors in drawing it whatsoever) there will always be an infinitesimally small gap! Why still the gap? The paper doesn't know about real numbers, nor about our concept of 'meters'. The only problem which we practically encounter is our inability to draw so perfectly. For us, it's not harder/easier to draw a perfect 3-4-5 (5² = 3²+4²) right triangle, than to draw a 1,1,sqrt(2) (sqrt(2)² = 1²+2²) right triangle, although this last one has a side which has an irrational number as length! A number is said to be constructable when we can using Euclidean toys (striaghtedge and compass) construct . This is an algebra question but the important fact about the set of all constructable numbers is that . Meaning the it contains all rational numbers. Thus, what you need to show is that you cannot constuct and hence show it is irrational (note, not transcendental! this does not show this). The problem with this approach is that these concepts was purposely created to simplify construcability problems but we are going backwards meaning from this approach making more difficult.
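For reference, the "sum of about 100000 inverse squares" mentioned above converges to π²/6 (the Basel problem), so one can approximate π like this; a quick sketch, not tied to the poster's spreadsheet.

# Approximate pi from the Basel series: sum over n >= 1 of 1/n^2 = pi^2 / 6.
# Convergence is slow (the error shrinks roughly like 1/N), which is why
# ~100000 terms still give only a handful of correct digits.
import math

def pi_from_inverse_squares(n_terms):
    s = sum(1.0 / n**2 for n in range(1, n_terms + 1))
    return math.sqrt(6.0 * s)

for n in (100, 10_000, 100_000):
    approx = pi_from_inverse_squares(n)
    print(f"{n:>7} terms: {approx:.8f} (error {abs(math.pi - approx):.2e})")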
<urn:uuid:6ea69df8-29c5-4951-a058-658f63ac8872>
2.546875
853
Comment Section
Science & Tech.
76.240375
588
Department of Earth Sciences, University of Bristol Prof. D.M. Sherman Lecture 1: Chemical Fundamentals, Thermodynamics, Acid-Base and Solubility Equilibria One of the most important tools we have in environmental geochemistry is thermodynamics. This enables us to predict how chemical reactions will proceed. Using thermodynamics, we can calculate the solubilities of minerals and how reactions with minerals and gases will control the compositions of aqueous solutions.
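As a hedged illustration of the kind of calculation meant here, the standard route from a Gibbs free energy of reaction to an equilibrium (solubility) constant is ΔG = -RT ln K; the ΔG value below is a generic placeholder, not a number from the lecture notes.

# Solubility-product style calculation: from a standard Gibbs free energy of
# reaction, obtain the equilibrium constant K. Placeholder numbers only.
import math

R = 8.314             # J / (mol K)
T = 298.15            # K
delta_G_rxn = 47_000  # J/mol, hypothetical dissolution reaction

K = math.exp(-delta_G_rxn / (R * T))
print(f"equilibrium (solubility) constant K = {K:.3e}")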
<urn:uuid:57dcfa2d-9077-4c52-8c4f-e8d5c8b57294>
3.28125
99
Academic Writing
Science & Tech.
14.925321
589
Florida is an important place for the endangered and threatened sea turtles of the world. Sea turtles nest on our beaches, forage for food in our estuaries, and all too often wash up dead on our shoreline. Florida Fish and Wildlife Conservation Commission staff are dedicated to protecting sea turtles in Florida and learning as much as possible about the biology and life history of these species. The unusually long spell of cold weather in Florida in January 2010 has had a big impact on sea turtles. The FWC has been working with staff from county, state, and federal agencies as well as numerous volunteers on a mass rescue effort for sea turtles throughout the state. Five species of sea turtles are found swimming in Florida's waters and nesting on Florida's beaches. All sea turtles found in Florida are protected under state statutes. The Florida Fish and Wildlife Conservation Commission's Fish and Wildlife Research Institute coordinates nesting beach survey programs around the state. FWRI staff members coordinate the Florida Sea Turtle Stranding and Salvage Network (FLSTSSN), which is responsible for gathering data on dead or debilitated (i.e., stranded) sea turtles found in Florida. Debilitated turtles are rescued and transported to rehabilitation facilities. FWRI marine turtle program staff conduct research on the distribution, abundance, life histories, ecology, migrations, and threats to marine turtles in Florida and contiguous western Atlantic and Caribbean waters. Illegal harvesting, habitat encroachment, and pollution are only some of the things sea turtles must fight against to stay alive. Researchers at FWRI are studying these threats and finding ways to help the population survive.
<urn:uuid:493ce35f-f2ca-46e2-9d45-0983a5f4b531>
3.5
349
About (Org.)
Science & Tech.
33.58245
590
In addition to the above types of problems, considerable research is directed to basic questions such as, Do we understand how quasars form and evolve? Can we connect theories of galaxy and black hole formation with the observations of quasars at high redshift and the incidence of black holes in galaxies at low redshift? Here I mention briefly some recent theoretical work that demonstrates progress in our understanding of quasars and ties in with present and future observational work. Haiman, Madau, and Loeb (1998) point out that the scarcity of quasars at z > 3.5 in the Hubble Deep Field implies that the formation of quasars in halos with circular velocities less than 50 km/s is suppressed (on the assumption that black holes form with constant efficiency in cold dark matter halos). They note that the Next Generation Space Telescope should be able to detect the epoch of formation of the earliest quasars. Cavaliere and Vittorini (1998) note that the observed form for the evolution of the space density of quasars can be understood at early times when cosmology and the processes of structure formation provide material for accretion onto central black holes as galaxies assemble. Quasars then turn off at later times because interaction with companions cause the accretion to diminish. Haehnelt, Natarajan, and Rees (1998) show that the peak of quasar activity occurs at the same time as the first deep potential wells form. The Press-Schechter approach provides a way to estimate the space density of dark matter halos. But the space density of z = 3 quasars is less than 1% that of star-forming galaxies, which implies the quasar lifetime is much less than a Hubble time. For an assumed relation between quasar luminosity and timescale and the Eddington limit, it is possible to connect the observed quasar luminosity density with dark matter halos and the numbers of black holes in nearby galaxies. The apparently large number of local galaxies with black holes implies that accretion processes for quasars are inefficient in producing blue light.
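For readers who want the Eddington-limit step in that argument made explicit, here is a rough numerical sketch using standard textbook order-of-magnitude relations; the black-hole mass and radiative efficiency below are illustrative choices of mine, not values from the papers cited.

# Eddington luminosity and the corresponding e-folding (Salpeter) time for
# Eddington-limited black-hole growth. Standard order-of-magnitude relations.
L_edd_per_Msun = 1.26e38      # erg/s per solar mass

def eddington_luminosity(m_bh_msun):
    return L_edd_per_Msun * m_bh_msun

def salpeter_time_yr(efficiency=0.1):
    # t_Salpeter ~ 4.5e8 yr * efficiency / (1 - efficiency)
    return 4.5e8 * efficiency / (1.0 - efficiency)

m_bh = 1.0e8   # a 10^8 solar-mass black hole, typical of bright quasars (assumed)
print(f"L_Edd ~ {eddington_luminosity(m_bh):.2e} erg/s")
print(f"e-folding time ~ {salpeter_time_yr():.1e} yr, far shorter than a Hubble time")

The short e-folding time compared with the age of the universe is what lets a small quasar duty cycle reconcile the rarity of luminous quasars with the abundance of dormant black holes in nearby galaxies.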
<urn:uuid:3b1a9e85-b862-4186-8471-8747530a00ce>
3.046875
435
Academic Writing
Science & Tech.
34.975905
591
Statistical modeling could help us understand cosmic accelerationDecember 24th, 2010 in Physics / General Physics (PhysOrg.com) -- While it is generally accepted by scientists that the universe is expanding at an accelerated rate, there are questions about why this should be so. For years, scientists have been trying to determine the cause of this behavior. One of the theories is that dark energy could be the cause of cosmic acceleration. In order to test theories of dark energy, a group at Los Alamos National Laboratory in New Mexico and the University of California Santa Cruz came up with a technique designed to test different models of dark energy. We are trying to investigate what could be behind the accelerated expansion of the universe, Katrin Heitmann, one of the Los Alamos scientists tells PhysOrg.com. Our technique is based on data, and can be used to evaluate different models. Heitmann and her collaborators created their method based on Gaussian process modeling; the implementation was led by Tracy Holsclaw from UC Santa Cruz. Were using statistical methods rather than trying to come up with different models. Our process takes data from different sources and then uses it to look for certain deviations from what we assume in a cosmological constant. The groups work can be seen in Physical Review Letters: Nonparametric Dark Energy Reconstruction from Supernova Data. Many scientists think that dark energy is driving the accelerated expansion of the universe, Heitmann says. If this is the case, it is possible to characterize it via its equation of state w(z). The redshift evolution of the equation of state parameter w(z) would show some indication of a dynamical origin of dark energy. Heitmann points out that in such a case, there could be an infinite number of models. We cant test all those models, she says, so we have to do an inverse problem. We have data and we can characterize the underlying cause of the accelerated expansion. It assumes that w is a smoothly varying function, and a dynamical dark energy theory would fit that. We can use data and analyze it to see if we can find indications that dark energy really is behind accelerated expansion. The Los Alamos and University of California, Santa Cruz team first tested their statistical technique on simulated data in order see whether the method was reliable. After we saw that it was, Heitmann says, we tried it on currently available supernova data. So far, their analysis has not revealed that a dynamical dark energy is behind the accelerated expansion (the cosmological constant is a very special case of dark energy and is still in agreement with the data), but Heitmann doesnt think that means that the door is closed on dynamical dark energy theories as the cause of acceleration in the expanding universe. The data so far is limited, and better data is coming in every day, she says. Additionally, the group hopes to include other data in their statistical analyses. Our technique allows for the input of data from cosmic microwave background and baryon acoustic oscillations as well, and thats what we want to add in next. If this technique does identify a dynamical dark energy as the reason behind accelerated expansion of the universe, it could mean revisiting the basics of what we know about the workings of the universe. If we do find the time dependence that supports the idea of dark energy as this mechanism, then we can go back to the theory approach. 
We would have an idea of which models could better explain the universe's expansion history and ultimately develop a self-consistent theory with no ad hoc assumptions. More information: Tracy Holsclaw, Ujjaini Alam, Bruno Sansó, Herbert Lee, Katrin Heitmann, Salman Habib, and David Higdon, Nonparametric Dark Energy Reconstruction from Supernova Data, Physical Review Letters (2010). Available online: link.aps.org/doi/10.1103/PhysRevLett.105.241302 Copyright 2010 PhysOrg.com. All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com. "Statistical modeling could help us understand cosmic acceleration." December 24th, 2010. http://phys.org/news/2010-12-statistical-cosmic.html
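Returning to the reconstruction idea described above, here is a heavily simplified sketch of Gaussian-process smoothing of an equation-of-state history w(z); the kernel, hyperparameters and toy "data" are my own choices, and this is not the Holsclaw et al. implementation, which works from supernova distance measurements rather than direct w(z) points.

# Toy Gaussian-process regression of w(z) from noisy simulated points,
# in the spirit of a nonparametric reconstruction. Illustrative only.
import numpy as np

def rbf_kernel(x1, x2, amp=0.2, length=0.5):
    return amp**2 * np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / length**2)

rng = np.random.default_rng(0)
z_obs = np.linspace(0.0, 1.5, 20)
w_true = -1.0 * np.ones_like(z_obs)                   # cosmological constant
w_obs = w_true + rng.normal(0.0, 0.05, z_obs.size)    # noisy "measurements"

z_grid = np.linspace(0.0, 1.5, 100)
K = rbf_kernel(z_obs, z_obs) + 0.05**2 * np.eye(z_obs.size)
K_star = rbf_kernel(z_grid, z_obs)

# GP posterior mean around a prior mean of w = -1
w_mean = -1.0 + K_star @ np.linalg.solve(K, w_obs - (-1.0))
print("reconstructed w at z = 0, 0.75, 1.5:", np.round(w_mean[[0, 49, 99]], 3))

A reconstruction that stays consistent with w = -1 at all redshifts, as in this toy run, is the behaviour expected from a cosmological constant; a clear redshift dependence would instead hint at dynamical dark energy.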
<urn:uuid:511e54b3-75f5-412a-95d6-d7d80684d08a>
2.953125
883
News Article
Science & Tech.
40.319942
592
In the Karlsruhe physics course one defines the term "substance-like" quantity: Let me cite the definition from a paper by Falk, Herrmann and Schmid: "There is a class of physical quantities whose characteristics are especially easy to visualize: those extensive physical quantities to which a density can be assigned. These include electric charge, mass, amount of substance (number of particles), and others. Because of the fundamental role these quantities play throughout science and because such quantities can be distributed in and flow through space, we give them a designation of their own: substance-like." Are there examples of extensive quantities which are not substance-like? I think volume is one example, since it seems to make no sense to assign a density to it; are there others? Now the authors write that a quantity can only be conserved if it is substance-like. Let me cite this from another publication; F. Herrmann writes: "It is important to make clear that the question of conservation or non-conservation only makes sense with substance-like quantities. Only in the context of substance-like quantities does it make sense to ask the question of whether they are conserved or not. The question makes no sense in the case of non-substance-like quantities such as field strength or temperature." So my second question is: Why does a conserved quantity have to be substance-like? It would be great if someone could give me a detailed explanation (or a counterexample if they think the statement is wrong). Are there resources where the ideas cited above are introduced with some higher degree of detail and rigour?
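One way to make the connection precise (my own paraphrase, not a quotation from the cited papers): a substance-like quantity X comes with a density and a current density, so it obeys a local balance equation, and "conserved" simply means the production term vanishes:

\frac{\partial \rho_X}{\partial t} + \nabla \cdot \vec{j}_X = \sigma_X ,
\qquad \text{$X$ is conserved} \iff \sigma_X \equiv 0 .

A quantity such as temperature or field strength has no density or current density in this sense, so the balance equation, and with it the question of conservation, cannot even be written down for it.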
<urn:uuid:38e7da68-52df-433a-bb58-c531417c0521>
2.65625
343
Q&A Forum
Science & Tech.
32.35701
593
Major Section: DOCUMENTATION ACL2 documentation strings make special use of the tilde character (~). In particular, we describe here a ``markup language'' for which the tilde character plays a special role. The markup language is valuable if you want to write documentation that is to be displayed outside your ACL2 session. If you are not writing such documentation, and if also you do not use the character `~', then there is no need to read on. Three uses of the tilde character (~) in documentation strings are as follows. Below we explain the uses that constitute the ACL2 markup language. The other uses of the tilde character are of the following form. Indicates the end of a documentation section; see doc-string. Indicates the literal insertion of a tilde character (~). This directive in a documentation string is effective only during the processing of part 2, the details (see doc-string), and controls how much is shown on each round of moreprocessing when printing to the terminal. If the system is not doing moreprocessing, then it acts as though the ~] is not present. Otherwise, the system put out a newline and halts documentation printing on the present topic, which can be resumed if the user types moreat the terminal. ~key[arg]Before launching into an explanation of how this works in detail, let us consider some small examples. Here is a word that is code: ~c[function-name].Here is a phrase with an ``emphasized'' word, ``not'': Do ~em[not] do that.Here is the same phrase, but where ``not'' receives stronger emphasis (presumably boldface in a printed version): Do ~st[not] do that.Here is a passage that is set off as a display, in a fixed-width font: ~bv This passage has been set off as ``verbatim''. The present line starts just after a line break. Normally, printed text is formatted, but inside ~bv...~ev, line breaks are taken literally. ~evIn general, the idea is to provide a ``markup language'' that can be reasonably interpreted not only at the terminal (via doc), but also via translators into other languages. In fact, translators have been written into Texinfo and HTML. Let us turn to a more systematic consideration of how to mark text in documentation strings using expressions of the form ~key[arg], which we will call ``doc-string tilde directives.'' The idea is that key informs the documentation printer (which could be the terminal, a hardcopy printer, or some hypertext tool) about the ``style'' used to display arg. The intention is that each such printer should do the best it can. For example, we have seen above that ~em[arg] tells the printer to emphasize arg if possible, using an appropriate display to indicate emphasis (italics, or perhaps surrounding arg with some character _, or ...). For another example, the directive for bold ~b[arg], says that printed text for arg should be in bold if possible, but if there is no bold font available (such as at the terminal), then the argument should be printed in some other reasonable manner (for example, as ordinary text). The is case-insensitive; for example, you can use ~BV or ~Bv or ~bV in place of ~bv. Every form below may have any string as the argument (inside [..]), as long as it does not contain a newline (more on that below). However, when an argument does not make much sense to us, we show it below as the empty string, e.g., `` ~- Print the equivalent of a dash ~b[arg] Print the argument in bold font, if available ~bid[arg] ``Begin implementation dependent'' -- Ignores argument at terminal. 
~bf Begin formatted text (respecting spaces and line breaks), but in ordinary font (rather than, say, fixed-width font) if possible ~bq Begin quotation (indented text, if possible) ~bv Begin verbatim (print in fixed-width font, respecting spaces and line breaks) ~c[arg] Print arg as ``code'', such as in a fixed-width font ~ef End format; balances ~bf ~eid[arg] ``End implementation dependent'' -- Ignores argument at terminal. ~em[arg] Emphasize arg, perhaps using italics ~eq End quotation; balances ~bq ~ev End verbatim; balances ~bv ~i[arg] Print arg in italics font ~id[arg] ``Implementation dependent'' -- Ignores argument at terminal. ~il[arg] Print argument as is, but make it a link (for true hypertext environments) ~ilc[arg] Same as ~il[arg], except that arg should be printed as with ~c[arg] ~l[arg] Ordinary link; prints as ``See :DOC arg'' at the terminal (but also see ~pl below, which puts ``see'' in lower case) ~nl Print a newline ~par Paragraph mark, of no significance at the terminal (can be safely ignored; see also notes below) ~pl[arg] Parenthetical link (borrowing from Texinfo): same as ~l[arg], except that ``see'' is in lower case. This is typically used at other than the beginning of a sentence. ~sc[arg] Print arg in (small, if possible) capital letters ~st[arg] Strongly emphasize arg, perhaps using a bold font ~t[arg] Typewriter font; similar to ~c[arg], but leaves less doubt about the font that will be used. ~terminal[arg] Terminal only; arg is to be ignored except when reading documentation at the terminal, using :DOC. Style notes and further details It is not a good idea to put doc-string tilde directives inside ~bv ... ~ev. Do not nest doc-string tilde directives; that is, do not write The ~c[~il[append] function ...but note that the ``equivalent'' expression The ~ilc[append] function ...is fine. The following phrase is also acceptable: ~bfThis is ~em[formatted] text. ~efbecause the nesting is only conceptual, not literal. We recommend that for displayed text, should usually each be on lines by themselves. That way, printed text may be less encumbered with excessive blank lines. Here is an Here is some normal text. Now start a display: ~bv 2 + 2 = 4 ~ev And here is the end of that paragraph.The analogous consideration applies to Here is the start of the next paragraph. ~efas well as You may ``quote'' characters inside the arg part of ~key[arg], by preceding them with ~. This is, in fact, the only legal way to use a newline character or a right bracket (]) inside the argument to a doc-string tilde directive. Write your documentation strings without hyphens. Otherwise, you may find your text printed on paper (via TeX, for example) like this -- Here is a hyphe- nated word.even if what you had in mind was: Here is a hyphe- nated word.When you want to use a dash (as opposed to a hyphen), consider using ~-, which is intended to be interpreted as a ``dash.'' For example: This sentence ~- which is broken with dashes ~- is boring.would be written to the terminal (using doc) by replacing ~-with two hyphen characters, but would presumably be printed on paper with a dash. Be careful to balance the ``begin'' and ``end'' pairs, such as ~ev. Also, do not use two ``begin'' ~bv) without an intervening ``end'' directive. It is permissible (and perhaps this is not surprising) to use the doc-string part separator between such a begin-end pair. 
Because of a bug in texinfo (as of this writing), you may wish to avoid beginning a line with (any number of spaces followed by) the - character or The ``paragraph'' directive, ~par, is rarely if ever used. There is a low-level capability, not presently documented, that interprets two successive newlines as though they were This is useful for the HTML driver. For further details, see the authors of ACL2. Emacs code is available for manipulating documentation strings that contain doc-string tilde-directives (for example, for doing a reasonable job filling such documentation strings). See the authors if you are interested. We tend to use ~em[arg] for ``section headers,'' such as ``Style notes and further details'' above. We tend to use ~st[arg] for emphasis of words inside text. This division seems to work well for our Texinfo driver. Note that arg to be printed in upper-case at the terminal (using arg to be printed at the terminal as though arg were not marked for emphasis. Our Texinfo and HTML drivers both take advantage of capabilities for indicating which characters need to be ``escaped,'' and how. Unless you intend to write your own driver, you probably do not need to know more about this issue; otherwise, contact the ACL2 authors. We should probably mention, however, that Texinfo makes the following requirement: when using one of the special characters }, you must immediately follow this use with a period or comma. Also, the Emacs ``info'' documentation that we generate by using our Texinfo driver has the property that in node names, : has been replaced by (because of quirks in info); so for example, the ``proof-checker'' s, is documented under rather than under We have tried to keep this markup language fairly simple; in particular, there is no way to refer to a link by other than the actual name. So for example, when we want to make invisible link in ``code'' font, we write the following form, which : should be in that font and then both be in that font and be an invisible link.
<urn:uuid:8e49dca7-353b-497d-b5e0-bae637794215>
3.390625
2,246
Documentation
Software Dev.
52.52327
594
“Understanding which species are most vulnerable to human impacts is a prerequisite for designing effective conservation strategies. Surveys of terrestrial species have suggested that large-bodied species and top predators are the most at risk, and it is commonly assumed that such patterns also apply in the ocean. However, there has been no global test of this hypothesis in the sea. We analyzed two fisheries datasets (stock assessments and landings) to determine the life-history traits of species that have suffered dramatic population collapses. Contrary to expectations, our data suggest that up to twice as many fisheries for small, low trophic-level species have collapsed compared with those for large predators. These patterns contrast with those on land, suggesting fundamental differences in the ways that industrial fisheries and land conversion affect natural communities. Even temporary collapses of small, low trophic-level fishes can have ecosystem-wide impacts by reducing food supply to larger fish, seabirds, and marine mammals.” Access the full article here. Access more articles here. Source: Sea Web Marine Science Review – 7 September 2012
<urn:uuid:7ef17589-6334-4f7f-8707-b2e85202e9c1>
3.59375
235
Truncated
Science & Tech.
23.591912
595
Java vs. C Is Java easier or harder than C? Java Virtual Machine The key to Java's portability and security is the Java Virtual Machine. History of Java Java was designed by Sun Microsystems in the early 1990s to solve the problem of connecting many household machines together. This project failed because no one wanted to use it. Java is arguably the best overall programming language, but there are problems with it. Java is an excellent programming language. GUI - Swing vs. AWT The original graphical user interface (GUI) for Java was called the Abstract Windowing Toolkit (AWT).
<urn:uuid:fbe2a20f-b715-4f1b-9655-5b4acc5d4d06>
2.75
131
Content Listing
Software Dev.
49.995503
596
SOHO is part of the first Cornerstone project in ESA's science programme, in which the other part is the Cluster mission. Both are joint ESA/NASA projects in which ESA is the senior partner. SOHO and Cluster are also contributions to the International Solar-Terrestrial Physics Programme, to which ESA, NASA and the space agencies of Japan, Russia, Sweden and Denmark all contribute satellites monitoring the Sun and solar effects. Of the spacecraft's 12 sets of instruments, nine come from multinational teams led by European scientists, and three from US-led teams. More than 1500 scientists from around the world have been involved with the SOHO programme, analysing and interpreting SOHO data for their research projects. SOHO was built for ESA by industrial companies in 14 European countries, led by Matra Marconi (now called ASTRIUM). The service module, with solar panels, thrusters, attitude control systems, communications and housekeeping functions, was prepared in Toulouse, France. The payload module carrying the scientific instruments was assembled in Portsmouth, United Kingdom, and mated with the service module in Toulouse, France. NASA launched SOHO and is responsible for tracking, telemetry reception and commanding.
<urn:uuid:7e3e2279-e48f-4fbc-aec0-bb81b03f34c5>
2.84375
251
Knowledge Article
Science & Tech.
30.455643
597
Solar Images to be made by unique X-ray telescope |Tweet|Solar Images to be made by unique X-ray April 2, 1998: A unique cluster of telescopes that make X-rays take a U-turn has been selected for a fourth flight to capture "multicolored" images that will help us understand why the sun's outer atmosphere is so hot. Right: The Sun as seen in the glow of highly ionized iron. Such images are really taken in black and white. Scientists assign them false colors to help in studying different images. "One of the major objectives is to follow up on something we saw on the first flight 10 years ago," said Dr. Arthur B.C. Walker II of Stanford University, the principal investigator for the Chromospheric/Corona Spectroheliograph telescope. It will actually be a bundle of up to 19 telescopes, each taking pictures of the sun in a slightly different X-ray energy. The array is an upgrade of the Multi-Spectral Solar Telescope Array (MSSTA) which flew on October 23, 1987, May 13, 1991, and November 3, 1994. The 1987 flight - which also made the September 30, 1988 cover of Science magazine - returned pictures that showed where the sun's atmosphere was as hot as 1 million deg. K (about 1.8 million deg. F) also showed spectral lines that indicated temperatures of about 700,000 deg. K (1.26 million deg F). "We were mystified by this," Walker said. "We are now convinced that there is material at about 700,000 degrees K in the transition region and which contributes to coronal heating." NASA recently selected the Chromospheric/Corona Spectroheliograph under the solar physics research program. Richard Hoover of NASA's Marshall Space Flight Center and Troy W. Barbee of Lawrence Livermore National Laboratory are co-investigators with Walker. Their project is entitled Investigation of the Corona/Chromosphere Interface. This is the same region that will be studied by the Transition Region and Coronal Explorer (TRACE) scheduled for launch Thursday evening from California. The Chromospheric/Corona Spectroheliograph will complement TRACE by providing images of solar gases at temperatures as high as 5 million degrees K (9 million deg. F). While the sun is more than 99.9 percent hydrogen and helium, it carries significant quantities of carbon, iron, calcium, silicon, and other elements. Heavier elements have more protons (carbon is 6, iron is 26) in their nuclei than do lighter elements (hydrogen is 1, helium is 2). That means that as electrons are stripped from heavier atoms, the charge of the larger number of protons is devoted to the few remaining electrons. It takes ever more energy to strip off another electron. As a result, light from energetic atoms acts like a tracer that reveals where the sun is hot and at what temperatures. This is important to dissecting activities from the sun's corona - its outer atmosphere - through the transition region and to the chromosphere and photosphere - the visible "surface." The challenge is that the X-ray emissions are so energetic that they pass through materials rather than being reflected as visible light would be. The usual trick to making X-ray images is called grazing incidence reflection. Just as light will reflect off clear glass (or a rock will skip on a pond) if it strikes at a shallow angle, X-rays will reflect - and be focused - if they, too, strike at an even shallower angle. Several X-ray telescopes, such as, the Advanced X-ray Astrophysics Facility use this. The MSSTA works by a different effect. 
Its multi-layer mirrors comprise an ultrasmooth mirror coated by up to 100 layers of heavy elements like tungsten spaced by layers of lightweight elements like carbon. In effect, the layers work like a Bragg crystal, which will reflect X-rays. Everything is extremely smooth, on the order of 0.1 nm (a ten-billionth of a meter, or 1/250 millionth of an inch). These layers reflect a little bit of the X-rays at the surface of each layer pair. The choice of materials and the thickness of the layers determine precisely at which wavelength the X-rays reflected from successive layers reinforce one another. In this way, the scientists can fine-tune a telescope to observe in a narrow band of wavelengths (a spectral band) or even one wavelength. That makes it possible to measure the temperature of the solar atmosphere. To observe the sun in several wavelengths at once, several telescopes must be flown together. This unique approach makes it possible to use conventional optical layouts - like the Hubble Space Telescope's Ritchey-Chretien design - and get a much larger collecting area and brighter images than are possible with grazing incidence optics of the same size. The design was invented by Barbee (and separately by scientists at IBM) and pioneered by Barbee, Walker, and Hoover for use in telescopes. The MSSTA (right) carries up to 19 telescopes of various sizes, each with a filter designed to admit only radiation of a specific wavelength or wavelength band, each corresponding to a specific temperature in the sun's atmosphere. Even though each image is taken in black-and-white, each represents a different wavelength and a different temperature in the solar atmosphere. To help in studying them, scientists often give them false colors to distinguish one from the other. This is similar to a color print that is really made from four black-and-white negatives, each to print a different color. On its fourth flight, the array will include a telescope that can see Fe XVII; iron stripped of 16 of its 26 electrons. That takes temperatures up to 5 million deg. K. "It would be a better indicator of the distribution of high-temperature gases in the solar atmosphere," Walker said. This may also reveal small flares that may be one source of energy being pumped into the corona. For the C/CS flight, expected by early 2000 around the time of solar maximum, MSSTA will be upgraded and some new telescopes and detectors installed. As with its first two flights, the telescope will be boosted by a Terrier Black Brant IX launched from the White Sands Missile Range, N.M. The C/CS payload will be boosted to an altitude of 230 km (144 mi) and then fall and parachute back to Earth for recovery. During the coast above Earth's atmosphere, the telescope array will be pointed precisely at the sun for about 6 minutes. Each telescope will take 10 to 15 full-disk images. Ground-based observatories will take pictures at the same time in white light and H-alpha, and with telescopes equipped to map magnetic fields. Join our growing list of subscribers - sign up for our express news delivery and you will receive a mail message every time we post a new story!!!
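To make the multilayer tuning described above quantitative, here is a rough first-order Bragg-condition estimate; the layer-pair spacing is a made-up illustrative value, not a number from the MSSTA design.

# First-order Bragg condition for a multilayer mirror: m * lambda = 2 * d * sin(theta).
# At near-normal incidence the strongly reflected wavelength is roughly 2 * d.
import math

d_nm = 8.5            # thickness of one heavy/light layer pair, nm (assumed value)
theta_deg = 88.0      # near-normal incidence

lambda_nm = 2.0 * d_nm * math.sin(math.radians(theta_deg))
print(f"reflected wavelength ~ {lambda_nm:.1f} nm (extreme ultraviolet / soft X-ray)")

Changing the layer-pair thickness d shifts the reflected wavelength, which is how each telescope in the cluster is tuned to a different spectral line and hence a different temperature in the solar atmosphere.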
<urn:uuid:1b899863-c049-49a1-879d-ec0ca252ae61>
3.53125
1,437
News Article
Science & Tech.
52.210167
598
Popocatépetl from the ISS on January 23, 2001 It might be (and is likely) just normal behavior for Popocatépetl in Mexico, but the volcano produced six plumes over the last 24 hours, according to a report out of Mexico City (in spanish). Officials from El Centro Nacional de Prevención de Desastres (The National Center for the Prevention of Disasters – Cenapred) say that the plumes appear to be mostly water vapor and other volcanic gases, but remind people living near the volcano to be vigilant. Popocatépetl is only 70 km from Mexico City, so any major eruption from the volcano could affect life and air travel to the major metropolis. The last major eruptive period at Popocatépetl ran from 1996-2003, producing VEI 3 eruptions, but the volcano has been producing smaller eruptions since January 2005. The volcano produces a mixed bag of activity, with ash fall, lava flows, pyroclastic flows and lahar generation and might be one of the more hazardous volcanoes in the Americas.
<urn:uuid:6ca4d7c8-733a-480f-8f35-959760bd1585>
3.359375
235
Personal Blog
Science & Tech.
27.212727
599