Now, scientists at the Max Planck Institute of Biochemistry (MPIB) in Martinsried near Munich, Germany, have succeeded in taking another important step in this research area: for the first time, they were able to integrate three different synthetic amino acids into one protein in a single experiment (Angewandte Chemie, June 24, 2010). Proteins are the main actors in our body: they transport substances, convey messages and, in their role as molecular machines, carry out vital processes. These “helmsmen of the cell” are composed of amino acids, whose sequence is defined by the heritable information in every living being. The translation of this information during the production of proteins (protein synthesis) is determined by the genetic code. Twenty amino acids form the standard set from which proteins are built. In nature, however, several hundred amino acids can be found, and new amino acids can also be produced in the laboratory. Their properties differ from those of the 20 standard amino acids, so integrating them into proteins makes it possible to change specific structural and biological characteristics of proteins systematically. So far, only one type of synthetic amino acid could be inserted into a protein in a residue-specific manner during a single experiment; thus, only one property of a protein could be modified at a time. Nediljko Budisa, head of the Molecular Biotechnology research group at the MPIB, has now made important methodological progress in the area of genetic code engineering. The scientists were able to substitute three different natural amino acids with synthetic ones at the same time in a single experiment. The biochemist is pleased: “With this, the research area of genetic code engineering and code extension enters a new phase of development.” Budisa’s method could be of particular importance to industry, because in his view the production of artificial proteins by genetic code engineering provides a solid basis for the development of new technologies. “During integration, synthetic amino acids confer their characteristics on proteins. This opens the way to entirely new classes of products whose synthesis has not been possible so far with conventional protein engineering using only the 20 standard amino acids,” Budisa explains with regard to future prospects. “Thanks to our method, it will be possible in the future to tailor industrially relevant proteins with novel properties: for example, proteins containing medical components.” [UD]
Scientists have long known that there was a time when water existed in the lakes and streams on Mars. New data recently revealed that the timing is off by roughly a billion years; some of the bodies of water on the planet were found to have formed much later than initially believed. The search for the next habitable planet continues, and NASA will stop at nothing to find Earth 2.0. However, one NASA director claims the search will be over in the next decade. Are scientists at the space agency close to finding it? A new study revealed that the red spot on Charon most likely came from the escaping atmosphere of Pluto captured by the moon's gravity. NASA's Spitzer spotted a nebula that looks like the "Enterprise" spacecraft from "Star Trek" in time for the TV and movie franchise's 50th anniversary celebration. The discovery of Proxima b as a potentially life-sustaining exoplanet raised interest in other habitable worlds beyond the solar system. When it's time to leave, where will mankind go? American astronaut Jeff Williams, alongside Oleg Skripochka and Alexey Ovchinin of Russia's space agency Roscosmos, safely returned home after accomplishing a 170-day mission aboard the International Space Station. To signal the end of summer and the start of autumn this year, the Harvest Moon will rise this Sept. 16. The Harvest Moon is also dubbed the Pumpkin Moon because of its color. Here are fast facts you should know about this rare occurrence. The seven-year mission is the biggest and most ambitious mission since Apollo brought back the moon rocks. Carbon gave life to the planet, but the question is: where did this carbon come from? Scientists from Rice University have a possible answer: a planetary collision 4.4 billion years ago. A few weeks ago, a mysterious radio signal was picked up by SETI (Search for Extraterrestrial Intelligence), fueling speculation that aliens are trying to get in contact with Earth. However, it's just a signal...from Earth, the Special Astrophysical Observatory of the Russian Academy of Sciences said. In 1974, Stephen Hawking theorized that black holes may not be entirely dark and that there might be a way out of them. Today, a scientist thinks he might have just proven this theory. NASA will air the momentous spacewalk on Aug. 19, when astronauts install the new docking port outside the International Space Station (ISS). The first DNA sequencer or biomolecular sequencer was sent to space by SpaceX, and it will be tested to see whether it can work in a non-Earth environment and in microgravity. A cargo shipment sent to the International Space Station included a DNA sequencer, which could possibly detect alien life forms.
Pressure drop is defined as the difference in total pressure between two points of a fluid-carrying network. Pressure drop occurs when frictional forces, caused by the resistance to flow, act on a fluid as it flows through a tube. The main determinants of resistance to fluid flow are fluid velocity through the pipe and fluid viscosity. Pressure drop increases in proportion to the frictional shear forces within the piping network. A piping network with a high relative roughness rating, as well as many pipe fittings and joints, tube convergence, divergence, turns, and other physical features, will exhibit a larger pressure drop. High flow velocities and/or high fluid viscosities result in a larger pressure drop across a section of pipe, a valve, or an elbow; low velocities result in little or no pressure drop.
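As an illustration of how velocity, viscosity, and roughness combine, the pressure drop in a straight pipe is commonly estimated with the Darcy-Weisbach equation. Below is a minimal Python sketch under that assumption; the pipe dimensions and fluid properties are illustrative values, not from the text, and the Swamee-Jain correlation stands in for the friction factor in turbulent flow.

```python
import math

def pressure_drop_pa(length_m, diameter_m, roughness_m, velocity_m_s,
                     density_kg_m3, viscosity_pa_s):
    """Darcy-Weisbach pressure drop across a straight pipe section."""
    # Reynolds number: ratio of inertial to viscous forces
    re = density_kg_m3 * velocity_m_s * diameter_m / viscosity_pa_s
    if re < 2300:
        f = 64.0 / re  # laminar flow: friction factor from Hagen-Poiseuille
    else:
        # Swamee-Jain explicit approximation to the Colebrook equation
        f = 0.25 / math.log10(roughness_m / (3.7 * diameter_m)
                              + 5.74 / re**0.9) ** 2
    # dp = f * (L/D) * (rho * v^2 / 2): drop grows with the square of velocity
    return f * (length_m / diameter_m) * density_kg_m3 * velocity_m_s**2 / 2

# Illustrative case: 10 m of 50 mm steel pipe carrying water at 2 m/s
print(f"{pressure_drop_pa(10.0, 0.05, 4.5e-5, 2.0, 998.0, 1.0e-3):.0f} Pa")
```

Doubling the velocity in this sketch roughly quadruples the result, which is the quantitative form of the point above that high flow velocities produce a larger pressure drop.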
The project will be launched at the Society of Environmental Toxicology and Chemistry Asia/Pacific Conference. It will bring together scientists from CSIRO, the Chinese Academy of Sciences (CAS) and the Chinese Academy of Agricultural Sciences (CAAS), and is sponsored by Rio Tinto, the International Copper Association and the Nickel Producers Environmental Research Association. Co-Director of the CSIRO Centre for Environmental Contaminants Research, Professor Mike McLaughlin, says the project aims to develop robust scientific guidelines for safe levels of copper and nickel in Chinese soils. “South-East Asia is booming. Amid rapid industrialisation and expansion of urban populations, we need to ensure the environment is protected,” Professor McLaughlin says. “Use of metals is increasing. Consider the manufacturing and industrial expansion currently underway in Asia, where the pace of development has outstripped the advancement of relevant policies and regulatory guidelines. “We need sound local data that builds on recent scientific advances in the understanding of metal behaviour and toxicity in soils.” In the first instance, a series of field and laboratory experiments will be established for a range of soils and environments in China, to examine the behaviour and toxicity of copper and nickel in Chinese soils. This data will be combined with data already collected in European Union and Australian research programs, and CSIRO data from other South East Asian countries, to develop models that explain toxicity across a wide range of environments. “Data from previous projects conducted by CSIRO in Malaysia, Thailand and Vietnam has suggested that soils in the region have generally low background metal concentrations, but are very sensitive to metal additions as indicated by effects on plant growth and soil microbial functions,” Professor McLaughlin says. The cooperation of Australian and Chinese governments and the global metals industries reflects a shared desire to provide science-based metals guidelines in China. The collaboration also recognises the importance of joining local knowledge with global experience in such complex scientific undertakings. The ongoing scientific research is being endorsed by China's State Environment Protection Agency (SEPA) for guidance on revising metals standards in Chinese soils. Research in China is being led by Professors Yibing Ma (CAAS) and Yongguan Zhu (CAS) in collaboration with CSIRO. Clare Peddie | EurekAlert!
The world of fuel chemistry and production is undergoing exciting change. The range of possible biofuels includes butanol, cellulosic gasoline, cellulosic biodiesel, cellulosic “biocrude,” and many more. We will be able to remove a hydroxyl group here, add a hydrogen there, and create a longer or shorter carbon chain to optimize fuels. Researchers and innovators from disparate fields are coming together to work out a new approach to biofuels. This “innovation ecosystem” is replacing the traditional energy research organizations and companies, which have been unable to make sufficient progress. While some common chemical and biological pathways, such as the biological ones used to ferment sugar for ethanol, have long been used successfully in biofuel production, other pathways, such as those that enable the thermal and catalytic conversion of biomass, await technological innovation. The companies working to deliver the necessary breakthroughs range from small, privately funded startups to behemoths such as BP. Important work is under way. LS9 is using synthetic biology to move pathways from plants into bacterial cells, with the goal of making petroleum from the fermentation of cellulosic feedstocks. Amyris, a company that began working on the malaria drug artemisinin, is transforming itself into a biofuel company using the same technology platform. Gevo is now taking on BP and DuPont in the race to commercialize butanol (see “Cellulolytic Enzymes”). Range Fuels has developed an anaerobic gasification technique to convert biomass into ethanol. Elsewhere, a number of researchers have speculated that they could improve on Range’s syngas-to-ethanol catalytic-conversion process by replacing it with microbes (see “Ethanol from Garbage and Old Tires”). Coskata was born as a science experiment with a license to the technology from the University of Oklahoma and Oklahoma State University, a few million in seed funding, and a few great researchers. A wide variety of biofuel processes are being tried in two important areas: designing new microbes and enzymes with the latest technologies, such as synthetic biology, and using fresh catalysts and new approaches for gasification and catalysis. These and other advances in biofuels have happened in just the last few years. Imagine what new ideas the innovation ecosystem will bring to the development of biofuels in the next decade. Vinod Khosla is the founder of Khosla Ventures, a venture capital firm that has backed a number of biofuel companies, including LS9, Amyris, Gevo, Range Fuels, and Coskata.
Squeeze a piece of silicone and it quickly returns to its original shape, as squishy as ever. But scientists at Rice University have discovered that the liquid crystal phase of silicone becomes 90 percent stiffer when silicone is gently and repeatedly compressed. Their research could lead to new strategies for self-healing materials or biocompatible materials that mimic human tissues. A paper on the research appeared in Nature’s online journal Nature Communications. Silicone in its liquid crystal phase is somewhere between a solid and liquid state, which makes it very handy for many things. So Rice polymer scientist Rafael Verduzco was intrigued to see a material he thought he knew well perform in a way he didn’t expect. “I was really surprised to find out, when my student did these measurements, that it became stiffer,” he said. “In fact, I didn’t believe him at first.” The researchers had intended to quantify results seen a few years ago by former Rice graduate student Brent Carey, who subjected a nanotube-infused polymer to a process called repetitive dynamic compression. An astounding 3.5 million compressions (five per second) over a week toughened the material by 12 percent, just like muscles after a workout. What Verduzco and lead author/Rice graduate student Aditya Agrawal came across was a material that shows an even stronger effect. They had originally planned to study liquid crystal silicone/nanotube composites similar to what Carey tested, but decided to look at liquid crystal silicones without the nanotubes first. “It’s always better to start simple,” Verduzco said. Silicones are made of long, flexible chains that are entangled and knotted together like a bowl of spaghetti. In conventional silicones the chains are randomly oriented, but the group studied a special type of silicone known as a liquid crystal elastomer. In these materials, the chains organize themselves into rod-shaped coils. When the material was compressed statically, like squeezing a piece of Jell-O or stretching a rubber band, it snapped right back into its original shape. The entanglements and knots between chains prevent it from changing shape. But when dynamically compressed for 16 hours, the silicone held its new shape for weeks and, surprisingly, was much stiffer than the original material. “The molecules in a liquid crystal elastomer are like rods that want to point in a particular direction,” Verduzco said. “In the starting sample, the rods are randomly oriented, but when the material is deformed, they rotate and eventually end up pointing in the same direction. This is what gives rise to the stiffening. It’s surprising that by a relatively gentle but repetitive compression, you can work out all the entanglements and knots to end up with a sample where all the polymer rods are aligned.” Before testing, the researchers chemically attached liquid crystal molecules – similar to those used in LCD displays – to the silicones. While they couldn’t see the rods, X-ray diffraction images showed that the side groups – and thus the rods – had aligned under compression. “They’re always coupled. If the side group orients in one direction, the polymer chain wants to follow it. Or vice versa,” Verduzco said. The X-rays also showed that samples heated to 70 degrees Celsius slipped out of the liquid crystal phase and did not stiffen, Verduzco said. The stiffening effect is reversible, he said, as heating and cooling a stiffened sample will allow it to relax back into its original state within hours.
Verduzco plans to compress silicones in another phase, called smectic, in which the polymer rods align in layers. “People have been wanting to use these in displays, but they’re very hard to align. A repetitive compression may be a simple way to get around this challenge,” he said. Since silicones are biocompatible, they can also be used for tissue engineering. Soft tissues in the body like cartilage need to maintain strength under repeated compression and deformation, and liquid crystal elastomers exhibit similar durability, he said. The paper’s co-authors include Carey, a Rice alumnus and now a scientist at Owens Corning; graduate student Alin Chipara; Yousif Shamoo, a professor of biochemistry and cell biology; Pulickel Ajayan, the Benjamin M. and Mary Greenwood Anderson Professor in Engineering and a professor of mechanical engineering and materials science, chemistry and chemical and biomolecular engineering; and Walter Chapman, the William W. Akers Professor of Chemical and Biomolecular Engineering, all of Rice; and Prabir Patra, an assistant professor of mechanical engineering at the University of Bridgeport with a research appointment at Rice. Verduzco is an assistant professor of chemical and biomolecular engineering. The research was supported by an IBB Hamill Innovations Grant, the Robert A. Welch Foundation, the National Science Foundation and the National Institutes of Health, through the National Institute of Allergy and Infectious Diseases. Read the abstract at http://www.nature.com/ncomms/journal/v4/n4/full/ncomms2772.html.
Space will soon be within the grasp of everyday people, small countries, researchers or start-up companies thanks to a fleet of low-cost launch vehicles under development across Europe. Highly sophisticated computers are mining vast amounts of data from the web, digital maps and satellite imagery to pick out trends in areas like demographics, transport and the environment. The Large Hadron Collider (LHC), the world’s biggest particle smasher, stands a good chance of discovering the elusive particle or particles, known to scientists as dark matter, that make up five-sixths of the mass of the universe, researchers say. The sooner-than-expected discovery of gravitational waves, announced in February, has given a new impetus to scientists in the field, who are now working to make sense of what it means not only for their research but also for our understanding of Einstein’s theory of general relativity. At the extremes of mass, energy, gravity and space-time, black holes still present a mystery for scientists; the key to finding a way forward is reconciling gravity, described by Albert Einstein’s general relativity, with the behaviour of subatomic particles modelled using so-called quantum theory. Sending astronauts to Mars poses several large challenges, among them a long journey filled with life-threatening radiation from cosmic ray exposure and solar flares. Not to mention the fact that we haven’t yet worked out how to get them back again. A complex and painful disease has been historically overlooked, researchers say. Robin Garrity says that registration, identification and geofencing will increase security. Chemical switches on DNA could explain how the environment may influence the traits we pass on, according to Prof. Thomas Carell.
Tropical Storm Julio continues to weaken as it moves through cooler waters of the Central Pacific Ocean. NASA's Terra satellite passed over Julio and saw that the bulk of the clouds and precipitation were being pushed to the north of the center as the storm tracked far north of the Hawaiian Islands. NASA's Terra satellite passed over Julio on August 11 at 21:25 UTC (5:25 p.m. EDT), and the Moderate Resolution Imaging Spectroradiometer, or MODIS, instrument aboard took a visible picture of the storm. The MODIS image revealed a circular center, but most of the clouds and showers associated with the storm were pushed north of the center. Drier air, located over the southern quadrant of the storm, is sapping the development of thunderstorms. Julio tracked far enough away from the Hawaiian Islands that no watches or warnings were generated for the storm. At 5 a.m. HST local time (1500 UTC/11 a.m. EDT) on August 12, the center of Tropical Storm Julio was located near latitude 28.6 north, longitude 157.1 west, about 505 miles (815 km) north of Honolulu, Hawaii. Julio was moving toward the northwest near 6 mph (9 kph), and NOAA's Central Pacific Hurricane Center (CPHC) expects that motion to continue over the next day before the storm gradually turns north. Maximum sustained winds were near 65 mph (100 kph), and slow weakening is forecast over the next two days. The CPHC expects that cooler waters and increasing wind shear will weaken Julio into a depression by August 14. Text credit: Rob Gutro, NASA's Goddard Space Flight Center. Rob Gutro | EurekAlert!
The invasion of the land by plants ('terrestrialization') was one of the most significant evolutionary events in the history of life on Earth, and correlates in time with periods of major palaeoenvironmental perturbations. The development of a vegetation cover on the previously barren land surfaces impacted the global biogeochemical cycles and the geological processes of erosion and sediment transport. The terrestrialization of plants preceded the rise of major new groups of animals, such as insects and tetrapods, the latter numbering some 24,000 living species, including ourselves. Early land-plant evolution also correlates with the most spectacular decline of atmospheric CO2 concentration of Phanerozoic times and with the onset of a protracted period of glacial conditions on Earth. The Terrestrialization Process includes a selection of papers covering different aspects of terrestrialization, from palaeobotany to vertebrate palaeontology and geochemistry, promoting a multidisciplinary approach to understanding the co-evolution of life and its environments during Early to Mid-Palaeozoic times.
- The terrestrialization process: introduction to modelling complex interactions at the biosphere-geosphere interface
- Terrestrialization: the early emergence of the concept
- An organic geochemical perspective on terrestrialization
- The effects of terrestrialization on marine ecosystems: the fall of CO2
- Palaeogeographic and palaeoclimatic considerations based on Ordovician to Lochkovian vegetation
- The land plant cover in the Devonian: a reassessment of the evolution of the tree habit
- Early seed plant radiation: an ecological hypothesis
- First record of Rellimia Leclercq & Bonamo (Aneurophytales) from Gondwana, with comments on the earliest lignophytes
- The sedimentary environment of the Late Devonian East Greenland tetrapods
- Terrestrialization in the Late Devonian: a palaeoecological overview of the Red Hill site, Pennsylvania, USA
- The biostratigraphical distribution of earliest tetrapods (Late Devonian): a revised version with comments on biodiversification
- Palaeoecological and palaeoenvironmental influences revealed by long-bone palaeohistology: the example of the Permian branchiosaurid Apateon
- Osmotic tolerance and habitat of early stegocephalians: indirect evidence from parsimony, taphonomy, palaeobiogeography, physiology and morphology
What scientific principle is at work in every theme park ride the Imagineers create? It's energy! The Imagineers reveal the role energy plays in popular theme park attractions such as Epcot's Test Track and the Mad Tea Party. Students will learn that energy is the ability to do work and that energy is constantly being transferred from one thing to another. They will also learn the difference between potential and kinetic energy and identify examples and benefits of renewable energy.
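The transfer between potential and kinetic energy that the lesson describes can be made concrete with a conservation-of-energy calculation. The following is a minimal Python sketch; the 30 m drop height is an illustrative value, not taken from any attraction, and friction is ignored.

```python
import math

G = 9.81  # gravitational acceleration in m/s^2

def speed_at_bottom(drop_height_m: float) -> float:
    """Frictionless energy balance: m*g*h = (1/2)*m*v^2, so v = sqrt(2*g*h)."""
    return math.sqrt(2 * G * drop_height_m)

# Potential energy at the top of an illustrative 30 m drop converts to kinetic energy
print(f"{speed_at_bottom(30.0):.1f} m/s")  # about 24.3 m/s
```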
It is clear that the sedimentary rock was deposited and folded before the dyke was squeezed into place. By looking at other outcrops in the area, our geologist is able to draw a geological map which records how the rocks are related to each other in the field. From the mapped field relationships, it is a simple matter to work out a geological cross-section and the relative timing of the geologic events. His geological cross-section may look something like Figure 2.

It has been established through extensive experimentation that radioactive decay occurs at a constant rate. In this case, the initial condition is the amount of daughter isotope in the rock when it was formed.

What I want to do in this video is kind of introduce you to the idea of, one, how carbon-14 comes about, and how it gets into all living things. They can also be alpha particles, which is the same thing as a helium nucleus. And they're going to come in, and they're going to bump into things in our atmosphere, and they're actually going to form neutrons. And we'll show a neutron with a lowercase n, and a 1 for its mass number. And what's interesting about this is this is constantly being formed in our atmosphere, not in huge quantities, but in reasonable quantities. Because as soon as you die and you get buried under the ground, there's no way for the carbon-14 to become part of your tissue anymore, because you're not eating anything with new carbon-14. And then either later in this video or in future videos we'll talk about how it's actually used to date things, how we use it to actually figure out that that bone is 12,000 years old, or that person died 18,000 years ago, whatever it might be. So let me just draw the surface of the Earth like that. So then you have the Earth's atmosphere right over here. And 78%, the most abundant element in our atmosphere, is nitrogen. And we don't write anything, because it has no protons down here. And what's interesting here is once you die, you're not going to get any new carbon-14. You can't just say all the carbon-14's on the left are going to decay and all the carbon-14's on the right aren't going to decay in that 5,730 years.

Then the computed age based on the accumulation of daughter products will be incorrect (Stasson 1998). In order to use the valuable information provided by radiometric dating, a new method had to be created that would determine an accurate date and validate the assumptions of radiometric dating. Isotope dating satisfies this requirement, as daughter products do not decay back to the original parent element.
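The dating arithmetic the transcript gestures at follows from constant-rate (exponential) decay and carbon-14's half-life of 5,730 years. Here is a minimal Python sketch; the 23% remaining fraction is an illustrative value chosen to land near the 12,000-year-old bone mentioned above.

```python
import math

HALF_LIFE_C14_YEARS = 5730.0

def age_from_fraction(remaining_fraction: float) -> float:
    """Solve N/N0 = (1/2)**(t / t_half) for the elapsed time t."""
    return HALF_LIFE_C14_YEARS * math.log(remaining_fraction) / math.log(0.5)

# A sample retaining about 23% of its original carbon-14 is roughly 12,000 years old
print(f"{age_from_fraction(0.23):,.0f} years")
```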
Abstract: To improve the understanding and the modelling of soil water regimes in alpine areas it is essential to know not only the number and the properties of the soil layers, but also their spatial distribution. One common assumption used in many simulations in the literature is that the soil layers and the bedrock are parallel to the surface, which sometimes diverges significantly from reality. A field campaign was conducted in the Urseren Valley, which lies in the heart of the Swiss Central Alps. This region is very susceptible to infiltration-triggered shallow landslides, and for this reason a realistic simulation of the soil water regime is crucial to predict soil slip occurrences. The primary method used for the determination of subsurface topography is Ground Penetrating Radar at 100 MHz and 250 MHz frequency. A 2D processing and analysis was carried out on the data collected from the field. Additional trenches were dug at strategically important points to verify the soil stratigraphy obtained from the GPR analysis. Furthermore, soil samples were collected and tested in order to obtain properties of the soil layers composing the profiles. These were used in a model based on Cellular Automata for the simulation of unsaturated and saturated flow. Simulations were run using representative rainfall events as recorded at the neighboring station of Andermatt. The results clearly reveal the importance of detailed knowledge of the subsurface topography in such types of simulations. Citation: Anagnostopoulos, G. G., S. Carpentier, M. Konz, R. Fischer and P. Burlando (2011), The role of subsurface topography and its implications on the water regime in the Urseren Valley, Switzerland, European Geosciences Union General Assembly 2011, Vienna.
Study by the University of Kaiserslautern
Plants use certain colour pigments in order to convert light into energy by way of photosynthesis. They allow plants to gather light energy. This also works in a similar way for microbes, for instance cyanobacteria. The fact that a very large number of viruses are able to contribute towards pigment production has now been demonstrated by biologists from the University of Kaiserslautern with a colleague from Israel. The viruses introduce genetic material into the bacteria which then allows them to produce the pink-coloured pigments. The study has been published in the renowned scientific journal ‘Environmental Microbiology’. Cyanobacteria (also known as blue-green algae) and other oceanic bacteria are able to convert carbon dioxide and water into carbohydrates and oxygen with the help of sunlight, just like plants. “They use light-harvesting complexes in order to capture the energy from the light,” says microbiology Professor Nicole Frankenberg-Dinkel from the University of Kaiserslautern. “These consist of proteins and colour pigments.” The latter are also responsible for the characteristic colouration. In the case of plants, for example, this is the green pigment ‘chlorophyll’; in cyanobacteria it is the blue pigment ‘phycocyanobilin’ and the pink pigment ‘phycoerythrobilin’. “The synthesis of these pigments is already well understood,” the microbiologist adds. “So far researchers have only been able to demonstrate their presence in organisms which release oxygen through the process of photosynthesis.” In addition to this form of conventional photosynthesis performed by plants and cyanobacteria, there are also other variants that do not release any oxygen. The biologists at Kaiserslautern sought to investigate, together with their Israeli research colleague, bioinformatician Oded Béjà (from the Technion-Israel Institute of Technology), the extent to which pigment synthesis is prevalent in certain marine regions. The biosynthesis of the pink pigment ‘phycoerythrobilin’ was the focus of their work. “The genetic information for the synthesis of the pink pigment is widespread throughout all the world’s oceans,” says the professor. This is where the researchers made a notable discovery: this information is widespread in viruses. “The viruses carry genetic information which can be used to produce the pink-coloured pigments,” Frankenberg-Dinkel explains. The viruses introduce this genetic information into bacterial cells, which enables them to synthesise the pink pigment. “What is new is that we are able to use bioinformatic analyses to determine the type of viruses which carry this genetic information”, Frankenberg-Dinkel continues. “We were able to show that the viruses most likely affect those microbes for which we do not yet know what purpose the pigment serves.” For her study, Frankenberg-Dinkel and her team analysed datasets obtained from metagenome databases. “These contain all the genetic information of all the organisms we would usually extract during a field trip at sea, for example,” the researcher explains. “This technique allows us to gain a detailed insight into the ecosystem without having to investigate it on location.” The biologists from the University of Kaiserslautern work closely with their colleague from the Technion-Israel Institute of Technology in Haifa. This cooperation is funded by the German-Israeli Foundation for Scientific Research and Development.
The study was published in the renowned scientific journal ‘Environmental Microbiology’: Ledermann, B., Beja, O. & Frankenberg-Dinkel, N. (2016) New biosynthetic pathway for pink pigments from uncultured oceanic viruses.
Contact: Prof Dr Nicole Frankenberg-Dinkel, Department of Biology, Tel.: +49 631/205-2353
Katrin Müller | Technische Universität Kaiserslautern
To make calcium carbonate, shell-building marine animals such as corals and oysters combine calcium ions (Ca2+) with carbonate ions (CO32-) from surrounding seawater, releasing carbon dioxide and water in the process. Like calcium ions, hydrogen ions tend to bond with carbonate, but they have a greater attraction to carbonate than calcium does. When a hydrogen ion bonds with a carbonate ion, bicarbonate (HCO3-) is formed. Shell-building organisms can't extract the carbonate ion they need from bicarbonate, which prevents them from using that carbonate to grow new shell. In this way, the hydrogen essentially binds up the carbonate ions.
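For reference, the reactions described above can be written out explicitly. This is a minimal sketch of the standard seawater carbonate chemistry; the equations are textbook forms, not taken from the passage.

```latex
\begin{align*}
\mathrm{Ca^{2+} + 2\,HCO_3^{-}} &\longrightarrow \mathrm{CaCO_3 + CO_2 + H_2O}
  && \text{(shell building, releasing carbon dioxide and water)} \\
\mathrm{H^{+} + CO_3^{2-}} &\longrightarrow \mathrm{HCO_3^{-}}
  && \text{(hydrogen ions binding up carbonate as bicarbonate)}
\end{align*}
```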
Applies a function to an array's items, adding the results to a new array. A data-manipulation function. This method applies the supplied function to every item in the target array, one by one. The results are placed in a new array created by map() and returned by it. The new array will be the same size as the target array. Contrast map() with apply(), which changes the array's items in situ. The passed function must include a single parameter into which each item in the target array is passed. It is the function's job to make sure the array values passed are appropriate to the changes it will make, if any.
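The host language of this documentation isn't identified in the excerpt, so as a sketch of the described semantics, here is a hypothetical Python analogue: map_items() returns a new list of results of the same size, while apply_items() changes the items in situ. Both function names are invented for illustration.

```python
from typing import Callable, List, TypeVar

T = TypeVar("T")
R = TypeVar("R")

def map_items(items: List[T], fn: Callable[[T], R]) -> List[R]:
    """Like the documented map(): apply fn to every item, one by one,
    placing the results in a new array of the same size."""
    return [fn(item) for item in items]

def apply_items(items: List[T], fn: Callable[[T], T]) -> None:
    """Like the documented apply(): overwrite each item in situ."""
    for i, item in enumerate(items):
        items[i] = fn(item)

values = [1, 2, 3]
doubled = map_items(values, lambda x: x * 2)  # values is left untouched
apply_items(values, lambda x: x * 2)          # values is now [2, 4, 6]
print(doubled, values)
```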
Homing in on the Great Oxygenation Event
July 07, 2016 / Written by: Miki Huynh
Researchers looked for a particular sulfur isotope pattern called mass-independent fractionation of sulfur isotopes (S-MIF) to determine when oxygen first appeared in the Earth's atmosphere. Photo source: MIT.
Scientists at MIT have identified the date of the Great Oxygenation Event (GOE) on Earth, a period of climate change when oxygen became permanently abundant in our atmosphere and provided a step towards the development of complex life on our planet. The research, published in Science Advances, posits the rapid oxygenation of Earth at 2.33 billion years ago, plus or minus 7 million years—the most narrowed-down estimate to date. These numbers were found by analyzing shifts in the sulfur isotope pattern of pyrite in sediment cores from South Africa. MIT News released a story detailing the research and its implications. More information about Foundations of Complex Life, the NASA Astrobiology Institute team at MIT with members involved in the research, is available at http://www.complex-life.org/.
The Elementary Concept of Area and Volume
A rectangle (right rectangular prism) with side lengths 1, ℓ (respectively 1, 1, ℓ) units is called a normal rectangle with an area (volume) of ℓ square (cubic) units. The area (volume) of any polygon (polyhedron) P which can be decomposed into a finite number of parts so that these parts can be rearranged to form a normal rectangle (rectangular prism) R is equal to the area (volume) of R.
Keywords: Dihedral Angle, Interior Point, Rectangular Prism, Pythagorean Theorem, Equivalence
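As a worked instance of this decomposition definition (a standard example, not from the excerpt): any a x b rectangle is scissors-congruent to a normal 1 x ab rectangle, and a right triangle with legs a and b is half of an a x b rectangle, which recovers the familiar formulas.

```latex
\begin{align*}
\text{rectangle } a \times b \ \sim\ \text{normal rectangle } 1 \times ab
  \quad &\Rightarrow\quad \operatorname{area} = ab, \\
\text{right triangle with legs } a, b \ =\ \tfrac{1}{2}\,(a \times b \text{ rectangle})
  \quad &\Rightarrow\quad \operatorname{area} = \tfrac{ab}{2}.
\end{align*}
```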
An Introduction to Quantum Physics, by Anthony Philip French and Edwin F. Taylor (PDF).
One of the touted benefits of the futuristic US hydrogen economy is that the hydrogen supply—in the form of water—is virtually limitless. This assumption is taken for granted so much that no major study has fully considered just how much water a sustainable hydrogen economy would need. Michael Webber, Associate Director at the Center for International Energy and Environmental Policy at the University of Texas at Austin, has recently filled that gap by providing the first analysis of the total water requirements with recent data for a “transitional” hydrogen economy. While the hydrogen economy is expected to be in full swing around 2050 (according to a 2004 report by the National Research Council [NRC]), a transitional hydrogen economy would occur in about 30 years, in 2037. At that time, the NRC predicts an annual production of 60 billion kg of hydrogen. Webber’s analysis estimates that this amount of hydrogen would use about 19-69 trillion gallons of water annually as a feedstock for electrolytic production and as a coolant for thermoelectric power. That’s 52-189 billion gallons per day, a 27-97% increase from the 195 billion gallons per day (72 trillion gallons annually) used today by the thermoelectric power sector to generate about 90% of the electricity in the US. During the past several decades, water withdrawal has remained stable, suggesting that this increase in water intensity could have unprecedented consequences on the natural resource and public policy. “The greatest significance of this work is that, by shifting our fuels production onto the grid, we can have a very dramatic impact on water resources unless policy changes are implemented that require system-wide shifts to power plant cooling methods that are less water-intensive or to power sources that don’t require cooling,” Webber told PhysOrg.com. “This analysis is not meant to say that hydrogen should not be pursued, just that if hydrogen production is pursued through thermoelectrically-powered electrolysis, the impacts on water are potentially quite severe.” Webber’s estimate accounts for both the direct and indirect uses of water in a hydrogen economy. The direct use is water as a feedstock for hydrogen, where water undergoes a splitting process that separates hydrogen from oxygen. Production can be accomplished in several ways, such as steam methane reforming, nuclear thermochemical splitting, gasification of coal or biomass, and others. But one of the dominant production methods in the transitional stage, as predicted in a 2004 planning report from the Department of Energy (DOE), will likely be electrolysis. Based on the atomic properties of water, 1 kg of hydrogen gas requires about 2.4 gallons of water as feedstock. In one year, 60 billion kilograms of hydrogen would require 143 billion gallons of fresh, distilled water. This number is similar to the amount of water required for refining an equivalent amount of petroleum (about 1-2.5 gallons of water per gallon of gasoline). The biggest increase in water usage would come from indirect water requirements, specifically as a cooling fluid for the electricity needed to supply the energy that electrolysis requires. Since electrolysis is likely to use existing infrastructure, it would pull from the grid and therefore depend on thermoelectric processes. At 100% efficiency, electrolysis would require close to 40 kWh per kilogram of hydrogen—a number derived from the higher heating value of hydrogen, a physical property. 
However, today’s systems have an efficiency of about 60-70%, with the DOE’s future target at 75%. Depending on the fraction of hydrogen produced by electrolysis (Webber presents estimates for values from 35 to 85%), the amount of electricity required at an electrolysis efficiency of 75% would be between 1134 and 2754 billion kWh—and up to 3351 billion kWh at a lower electrolysis efficiency of 60%. For comparison, total US electricity generation in 2005 was 4063 billion kWh. In 2000, thermoelectric power generation required an average of 20.6 gallons of water per kWh, leading Webber to estimate that hydrogen production through electrolysis, at 75% efficiency, would require about 1100 gallons of cooling water per kilogram of hydrogen. That’s 66 trillion gallons per year just for cooling. By 2050, the NRC report predicts that hydrogen demand could exceed 100 billion kg—nearly twice the 60 billion kg that Webber’s estimates are based on. By then, researchers may find better ways of producing hydrogen, with assistance from the DOE’s large-scale investments, which will exceed $900 million in 2008. “That most of the water use is for cooling leaves hope that we can change the way power plants operate, which would significantly ease up the potential burden on water resources, or that we can find other means of power production at a large scale to satisfy the demands of electrolysis,” said Webber. If electrolysis becomes a widespread method of hydrogen production, Webber suggests that researchers may want to look for an electricity-generating method other than thermoelectric processes to power it. With this in mind, he points to hydrogen pathways powered by wind or solar sources, as well as water-free cooling methods such as air cooling. “Each of the energy choices we can make, in terms of fuels and technologies, has its own tradeoffs associated with it,” Webber said. “Hydrogen, just like ethanol, wind, solar, or other alternative choices, has many merits, but also has some important impacts to keep in mind, as this paper tries to suggest. I would encourage the continuation of research into hydrogen production as part of a comprehensive basket of approaches that are considered for managing the transition into the green energy era. But, because of some of the unexpected impacts—for example on water resources—it seems premature to determine that hydrogen is the answer we should pursue at the exclusion of other options.” More information can be found at the Webber Energy Group, an organization that seeks to bridge the divide between policymakers and engineers and scientists on issues related to energy and the environment. Citation: Webber, Michael E. “The water intensity of the transitional hydrogen economy.” Environmental Research Letters, 2 (2007) 034007 (7pp). Copyright 2007 PhysOrg.com. All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com.
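The back-of-envelope arithmetic behind these figures is easy to reproduce. The sketch below (class and variable names are illustrative, not from the paper) recomputes the feedstock and cooling-water estimates from the quantities quoted above; small differences from the article's numbers come from rounding the exact 2.38 gal/kg feedstock value to 2.4.

public class HydrogenWater {
    public static void main(String[] args) {
        double kgH2PerYear = 60e9;        // NRC transitional-economy production estimate
        double feedstockGalPerKg = 2.4;   // ~9 liters of water per kg of H2 as feedstock
        double kwhPerKgIdeal = 40.0;      // electrolysis at 100% efficiency (HHV of hydrogen)
        double efficiency = 0.75;         // DOE's target electrolysis efficiency
        double coolingGalPerKwh = 20.6;   // avg. thermoelectric withdrawal, year 2000

        double feedstockGal = kgH2PerYear * feedstockGalPerKg;
        double kwhPerKg = kwhPerKgIdeal / efficiency;          // ~53.3 kWh per kg
        double coolingGalPerKg = kwhPerKg * coolingGalPerKwh;  // ~1100 gal per kg
        double coolingGal = kgH2PerYear * coolingGalPerKg;

        System.out.printf("Feedstock water: %.0f billion gal/yr%n", feedstockGal / 1e9);
        System.out.printf("Cooling water: %.0f gal per kg H2, %.0f trillion gal/yr%n",
                coolingGalPerKg, coolingGal / 1e12);
    }
}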
<urn:uuid:34af696d-e7b3-429d-9f41-58f562ac8871>
3.8125
1,344
News Article
Science & Tech.
33.020303
95,642,078
Why can cells function so efficiently under anaerobic conditions? Please help. © BrainMass Inc. brainmass.com July 20, 2018, 10:03 am Cells that can function under anaerobic conditions have enzymes that enable them to "ferment." As you know, anaerobic means without oxygen. Oxygen is needed as the terminal electron acceptor in the electron transport chain at the inner mitochondrial membrane in order to allow the oxidative phosphorylation process to occur so that lots of ATP can be made from the reduced cofactors, NADH and FADH2. Without oxygen, oxidative phosphorylation stops and no more ATP ... The solution discusses why cells function so efficiently under anaerobic conditions.
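A rough sense of the trade-off can be put in numbers. The sketch below uses standard textbook yields that are not stated in the truncated excerpt above (roughly 2 net ATP per glucose from glycolysis plus fermentation, versus about 30 with full oxidative phosphorylation) to show how much faster a fermenting cell must consume glucose to keep up.

public class AtpYield {
    public static void main(String[] args) {
        // Textbook approximations, not taken from the excerpt above:
        double anaerobicAtp = 2.0;  // net ATP per glucose, glycolysis + fermentation
        double aerobicAtp = 30.0;   // approx. ATP per glucose with oxidative phosphorylation
        System.out.printf("Glucose flux needed to match aerobic ATP output: ~%.0fx%n",
                aerobicAtp / anaerobicAtp);
    }
}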
<urn:uuid:e71f1ccd-ef46-472c-ba05-3abc93f982dc>
3.296875
157
Q&A Forum
Science & Tech.
40.623902
95,642,091
Australia has a high rate of extinctions, and the rate of loss is continuing unabated. Some recent extinctions have occurred without relevant managers having sufficient foreknowledge that the species was close to disappearing. “The problem is that the Australian threatened species list, which is what most conservation managers and policy makers refer to, is failing to keep abreast of the actual rate of biodiversity loss,” said Professor John Woinarski from Charles Darwin University. The Threatened Species Recovery Hub’s Project 2.1 will work to ensure that policy makers and project managers have more reliable and up-to-date information about the species closest to extinction. “There should be no regrets or surprises. If all the relevant ministers, policy makers and project managers are aware of a high extinction risk, then they have an opportunity to avert it.” Led by Professors John Woinarski and Stephen Garnett, the project will develop mechanisms to complement the Australian threatened species list to help reduce the risk of further species loss. “It’s remarkable how rapidly some species have disappeared, and sometimes it’s hard to predict. The forest skink on Christmas Island is a classic case – it was only officially recognised as threatened five months before it was extinct,” said Professor Woinarski. Researchers will identify fauna species facing imminent risk of extinction and work with on-ground management agencies (state governments, NGOs, etc.) to identify and prioritise management responses. “We will also be looking to work closely with the Hub’s monitoring project. A better handle on a species’ current population trajectory provides more confidence about estimating the risks and likelihood of extinction.” “Initially we’ll develop modelling to predict extinction risk amongst birds and mammals, for which there is generally more information, and then proceed to other vertebrates and invertebrates in subsequent stages.” “The species we’re fighting for don’t have to be the most charismatic or well known - much of Australia’s biodiversity loss has occurred in less charismatic species.” “For all highly imperilled species, we’ll review the existing or proposed management actions and attempt to refine and improve them. At the end of this project we should have alerted all relevant ministers, policy makers and managers – there should be no excuses or surprises.” Project 2.1 links in with other work taking place through the Hub to provide the Minister for the Environment, the Threatened Species Commissioner and the Federal Department of the Environment with evidence to inform policy and on-ground threatened species management decisions. Featured image: An example of insufficient warning of extinction risk. The Christmas Island forest skink was not formally recognised as threatened until December 2013, far too late to prevent its extinction on 31 May 2014. Photo: Hal Cogger
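As a purely hypothetical illustration of the kind of trajectory-based risk estimate mentioned above (not the Hub's actual Project 2.1 model), the sketch below projects an exponentially declining population until it crosses a quasi-extinction threshold. Every number in it is invented.

public class DeclineProjection {
    public static void main(String[] args) {
        double population = 500.0;    // current population estimate (invented)
        double annualDecline = 0.15;  // 15% loss per year (invented)
        double threshold = 50.0;      // quasi-extinction threshold (invented)
        int years = 0;
        while (population >= threshold) {
            population *= (1.0 - annualDecline);
            years++;
        }
        System.out.printf("Threshold crossed after %d years%n", years); // prints 15
    }
}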
<urn:uuid:6906d459-9b3f-47a8-b2a4-7508fbe3b02a>
3.578125
869
News (Org.)
Science & Tech.
32.837445
95,642,111
The low pressure area called System 92S that tracked across northern Madagascar this week, bringing flooding rains, has moved into the Mozambique Channel, strengthened, and been renamed Irina. NASA satellites captured a visible image of Irina as it filled the northern half of the Mozambique Channel. NASA's Aqua satellite's MODIS instrument captured this visible image of Tropical Cyclone Irina over the Mozambique Channel on February 29, 2012 at 1100 UTC (6 a.m. EST). Credit: NASA Goddard MODIS Rapid Response Team System 92S strengthened into Cyclone Irina off Cape St Andre, Madagascar, after moving across the northern half of the country as a soaking low pressure area. Now in the warm waters of the Mozambique Channel (the body of water between the island nation of Madagascar and Mozambique on the African mainland), it is strengthening and moving to the west. NASA's Aqua satellite's MODIS instrument captured a visible image of Tropical Cyclone Irina over the Mozambique Channel on February 29, 2012 at 1100 UTC (6 a.m. EST). It showed the center of Irina in the northern Mozambique Channel, with its clouds extending from Mozambique in the west across the channel to Madagascar. The Atmospheric Infrared Sounder (AIRS) instrument provided another view of the storm: one in infrared light. Infrared light helps determine the temperatures of cloud tops and of the sea surface, two factors important in tropical cyclones. Warm sea surface temperatures in excess of 26.6 Celsius (80 Fahrenheit) help maintain a cyclone. The warmer the sea surface, the more energy (evaporation and moisture) gets fed into a tropical cyclone, helping it grow stronger. Sea surface temperatures in the Mozambique Channel are near 29 Celsius (84F), which is helping Cyclone Irina develop and strengthen. Cloud-top temperatures indicate strengthening in the opposite way: the colder the cloud tops, the higher and stronger the thunderstorms that make up the tropical cyclone (a cyclone/hurricane is made up of hundreds of thunderstorms). Infrared satellite imagery allows forecasters to see where some of the most powerful thunderstorms are in a tropical cyclone. AIRS infrared data showed that Irina's cloud-top temperatures have grown colder since yesterday, February 28, indicating a strengthening storm. North of Irina's center, cloud-top temperatures are now colder than -63 Fahrenheit (-52.7C), a threshold in AIRS data that indicates some of the strongest thunderstorms in a tropical cyclone. Forecasters at the Joint Typhoon Warning Center (JTWC) using infrared satellite data noted that "Deep convection remains confined along the northern half (of the storm)." Vertical wind shear has been weakening slowly, but is still between 10 and 15 knots (11.5 and 17.2 mph / 18.5 and 27.8 kph). On February 29, 2012 at 1500 UTC (10 a.m. EST), Irina was a tropical storm with maximum sustained winds near 35 knots (~40 mph/~65 kph). It was centered in the Mozambique Channel, about 305 nautical miles northwest of Antananarivo, Madagascar, near 16.2 South and 42.6 East. JTWC forecasters said today, February 29, that they expect the storm to be strongest between March 2 and March 3 as it moves through the center of the Mozambique Channel. Landfall is expected 72 hours from 1500 UTC on Feb. 29, which would put it around 1500 UTC (10 a.m. EST) on March 3, 2012, when Irina is forecast to make landfall north of Maputo, Mozambique. Text credit: Rob Gutro | EurekAlert!
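Two of the small calculations in the bulletin above can be checked directly: the forecast landfall time (72 hours after the 1500 UTC February 29 position) and the Celsius equivalent of the -63F cold-cloud-top threshold. A minimal sketch (class name illustrative):

import java.time.ZoneOffset;
import java.time.ZonedDateTime;

public class IrinaForecast {
    public static void main(String[] args) {
        // 1500 UTC on Feb. 29, 2012, plus the 72-hour forecast window
        ZonedDateTime advisory =
                ZonedDateTime.of(2012, 2, 29, 15, 0, 0, 0, ZoneOffset.UTC);
        ZonedDateTime landfall = advisory.plusHours(72);
        System.out.println("Forecast landfall: " + landfall); // 2012-03-03T15:00Z

        // Fahrenheit-to-Celsius conversion of the AIRS cold-cloud threshold
        double thresholdF = -63.0;
        double thresholdC = (thresholdF - 32.0) * 5.0 / 9.0;
        System.out.printf("Cloud-top threshold: %.1f C%n", thresholdC); // -52.8 C
    }
}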
<urn:uuid:0554ff6d-8423-4e35-8333-a33ac6b8a2b2>
2.9375
1,355
Content Listing
Science & Tech.
46.345309
95,642,115
As a mobile app developer you already know that Android is the most popular mobile platform, with the largest number of devices running on it. It's already common knowledge that Android is based upon Java. Naturally, Java being the core of Android, it is likely to be the most preferred language for aspiring Android developers. If you want to master the skill of Android development you have no option but to learn Java. Though lately other languages have emerged that can be used for coding Android mobile apps, Java is still considered the standard language for Android mobile app development. If you are already acquainted with the power and flexibility of the Android platform, you should know that these qualities can largely be attributed to Java. Just as Java made a huge contribution to the development of the modern web, it has also emerged as the most important programming language for Android mobile app development. Key aspects that make Java invincible for mobile app developers: Java as a programming language has been around for more than two decades and is still one of the most popular and widely used programming languages the world over. The popularity and phenomenal growth of Java as the preferred programming language for multiple platforms became possible thanks to its core features. Why is Java invincible for multi-platform development? At a time when cross-platform development takes the lead over platform-specific development, you need to write your source code in a language that offers flexible multi-platform deployment, and in that regard Java is supreme. Java is particularly strong for mobile app development because of its platform independence. While with most programming languages you must compile your code separately for each target machine, Java lets you reuse portable code across machines thanks to compiled files widely known as 'bytecode' in the world of Java programmers; see the sketch below. This bytecode, executed by the Java Virtual Machine (JVM) on each device, is what makes Java platform independent and allows the same code to be deployed across different device platforms. Finally, to conclude, we must mention the unmatched remuneration and career growth promised to proficient Java developers. Proficiency in Java remains a crucial consideration for most employers offering top-notch career opportunities to programmers and mobile app developers.
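A minimal example makes the "write once, run anywhere" point concrete: compiling the class below once with javac yields bytecode (a .class file) that any standard JVM can execute unchanged; Android toolchains start from the same Java source and bytecode before converting it for Android's own runtime.

// Compile once: javac Hello.java  -> Hello.class (portable bytecode)
// Run anywhere a JVM exists:      java Hello
public class Hello {
    public static void main(String[] args) {
        String platform = System.getProperty("os.name"); // resolved at run time
        System.out.println("Same bytecode, running on: " + platform);
    }
}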
<urn:uuid:2193a429-8507-463c-815d-851659b93c1a>
2.53125
433
Personal Blog
Software Dev.
18.617046
95,642,122
Scientists have compiled a new database of coastal flooding in the UK over the last 100 years, which they hope will provide crucial information to help prevent future flooding events. 'SurgeWatch' contains information about 96 large storms taken from tide gauge records, which record sea levels back to 1915. It shows the highest sea levels the storms produced and a description of the coastal flooding that occurred during each event. The database, which is described in the journal Scientific Data, has been produced by a team of researchers, led by the University of Southampton and including scientists from the National Oceanography Centre and the British Oceanographic Data Centre. Lead author Dr Ivan Haigh, Lecturer in Coastal Oceanography at the University of Southampton, says: "The winter of 2013/14 saw some of the UK's most extreme sea levels, waves and coastal flooding for several decades. During this period storms repeatedly subjected large areas of our coast to enormous stress and damage, reminding us of the real and ever-present risks and challenges facing coastal communities today." Professor Kevin Horsburgh, Head of Marine Physics and Ocean Climate at the National Oceanography Centre, says: "This new database allows us to improve our understanding of the statistics of extreme sea levels around the UK. Coastal flooding remains a threat to life and to economic and environmental assets. Even if there is no future change in European storminess, the slow rise in mean sea level will increase the number of times that defence thresholds are exceeded. This database is a useful tool for coastal engineers and planners who are concerned with changes to extreme sea levels." SurgeWatch is free and accessible to a range of users, including scientists, coastal engineers, managers and planners. The team aim to expand and update the database and are appealing for the help of the general public. Dr Matthew Wadey, a postdoctoral researcher in Ocean and Earth Science at the University of Southampton, adds: "Do you have any photographs of coastal flooding from recent or past events, which you are willing to share with us? We would like to compile and investigate these in order to improve our understanding of exactly which areas were flooded and to what water depth. Photos can be easily uploaded to our website." Prompted by people asking "Just how unusual was the 2013/14 season?" the researchers spent over 18 months compiling records of high sea level events and coastal flooding going back 100 years. Using meteorological data, they were able to identify the large storms that produced these high sea levels, investigate the weather conditions and track of each storm. They then spent many thousands of hours reading old reports, books, news articles, blogs and websites, to estimate the extent and scale of the coastal flooding. Elizabeth Bradshaw, data scientist at the British Oceanographic Data Centre, says: "Was the 2013/14 season unusual? Yes, very much so. Seven out of the 96 events in the 100-year database occurred during the 2013-14 storm surge season. Two of the events (5 and 6 December 2013 and 3 January 2014) are ranked in the top ten, in terms of height of sea levels. Both of these events also rank highly in terms of spatial footprints, i.e. they impacted very large stretches of the UK coast." 
Robert Nicholls, Professor of Coastal Engineering at the University of Southampton, adds: "The fact that the damage was so limited during the December 2013 and January 2014 storms, compared to the tragedy of January 1953, during which 307 people were killed along the UK's North Sea coast, is thanks to significant government investment in coastal defences, flood forecasting and sea level monitoring. It is therefore vital we continue to invest in defences, forecasting and monitoring and continue to update this new database." Glenn Harris | EurekAlert!
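A hedged sketch of the kind of query such a database supports (the record fields and values below are invented for illustration, not SurgeWatch's actual schema): counting a season's events and finding the highest recorded sea level.

import java.util.Comparator;
import java.util.List;

public class SurgeQuery {
    // Simplified, invented representation of one storm-surge event
    record Event(String date, String season, double seaLevelM) {}

    public static void main(String[] args) {
        List<Event> events = List.of(
                new Event("1953-01-31", "1952-53", 3.9),
                new Event("2013-12-05", "2013-14", 3.7),
                new Event("2014-01-03", "2013-14", 3.5));

        long in201314 = events.stream()
                .filter(e -> e.season().equals("2013-14")).count();
        Event highest = events.stream()
                .max(Comparator.comparingDouble(Event::seaLevelM)).orElseThrow();

        System.out.println("2013-14 events: " + in201314);
        System.out.println("Highest sea level: " + highest.date());
    }
}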
<urn:uuid:ef47ee7b-7e01-4c79-a4eb-56aae53a6829>
3.390625
1,330
Content Listing
Science & Tech.
42.028851
95,642,137
Caltech biochemists reveal how a ribosomal protein is protected by its chaperone For proteins, this would be the equivalent of the red-carpet treatment: each protein belonging to the complex machinery of ribosomes -- components of the cell that produce proteins -- has its own chaperone to guide it to the right place at the right time and protect it from harm. Structural rendering of a ribosomal protein (yellow and red) bound to its chaperone (blue). By capturing an atomic-resolution snapshot of the pair of proteins interacting with each other, Ferdinand Huber, a graduate student in the lab of André Hoelz, revealed that chaperones can protect their ribosomal proteins by tightly packaging them up. The red region illustrates where the dramatic shape alterations occur when the ribosomal protein is released from the chaperone during ribosome assembly. Credit: Huber and Hoelz/Caltech In a new Caltech study, researchers are learning more about how ribosome chaperones work, showing that one particular chaperone binds to its protein client in a very specific, tight manner, almost like a glove fitting a hand. The researchers used X-ray crystallography to solve the atomic structure of the ribosomal protein bound to its chaperone. "Making ribosomes is a bit like baking a cake. The individual ingredients come in protective packaging that specifically fits their size and shape until they are unwrapped and blended into a batter," says André Hoelz, professor of chemistry at Caltech, a Heritage Medical Research Institute (HMRI) Investigator, and Howard Hughes Medical Institute (HHMI) Faculty Scholar. "What we have done is figure out how the protective packaging fits one ribosomal protein, and how it comes unwrapped." Hoelz is the principal investigator behind the study, published February 2, 2017, in the journal Nature Communications. The finding has potential applications in the development of new cancer drugs designed specifically to disable ribosome assembly. In all cells, genetic information is stored as DNA and transcribed into mRNAs that code for proteins. Ribosomes translate the mRNAs, linking amino acids together into polypeptide chains that fold into proteins. More than a million ribosomes are produced per day in an animal cell. Building ribosomes is a formidable undertaking for the cell, involving about 80 proteins that make up the ribosome itself, strings of ribosomal RNA, and more than 200 additional proteins that guide and regulate the process. "Ribosome assembly is a dynamic process, where everything happens in a certain order. We are only now beginning to elucidate the many steps involved," says Hoelz. To make matters more complex, the proteins making up a ribosome are first synthesized outside the nucleus of a cell, in the cytoplasm, before being transported into the nucleus where the initial stages of ribosome assembly take place. Chaperone proteins help transport ribosomal proteins to the nucleus while also protecting them from being chopped up by a cell's protein-shredding machinery. The components that specifically aim this machinery at unprotected ribosomal proteins, recently identified by Raymond Deshaies, professor of biology at Caltech and an HHMI Investigator, ensure that equal numbers of the various ribosomal proteins are available for building the massive structure of a ribosome.
Previously, Hoelz and his team, in collaboration with the laboratory of Ed Hurt at the University of Heidelberg, discovered that a ribosomal protein called L4 is bound by a chaperone called "Assembly chaperone of RpL4," or Acl4. The chaperone ushers L4 through the nucleus, protecting it from harm, and delivers it to a developing ribosome at a precise time and location. In the new study, the team used X-ray crystallography, a process that involves exposing protein crystals to high-energy X-rays, to solve the structure of the bound pair. The technique was performed at Caltech's Molecular Observatory beamline at the Stanford Synchrotron Radiation Lightsource. "This was not an easy structure to solve," says Ferdinand Huber, a graduate student at Caltech in the Hoelz lab and first author of the new study. "Solving the structure was incredibly exciting because you could see with your eyes, for the very first time, how the chaperone embraces the ribosomal protein to protect it." Hoelz says that the structure was a surprise because it was not known previously that chaperones hold on to their ribosomal proteins so tightly. He says they want to study other chaperones in the future to see if they function in a similar fashion to tightly guard ribosomal proteins. The results may lead to the development of new drugs for cancer therapy by preventing cancer cells from supplying the large numbers of ribosomes required for tumor growth. The study, called "Molecular Basis for Protection of Ribosomal Protein L4 from Cellular Degradation," was funded by a PhD fellowship of the Boehringer Ingelheim Fonds, a Faculty Scholar Award of the Howard Hughes Medical Research Institute, a Heritage Medical Research Institute Principal Investigatorship, a Kimmel Scholar Award of the Sidney Kimmel Foundation for Cancer Research, a Teacher-Scholar Award of the Camille & Henry Dreyfus Foundation, and Caltech startup funds. Whitney Clavin | EurekAlert!
<urn:uuid:5c2910bd-6b58-4058-bbb6-6f550923e429>
3.546875
1,718
Content Listing
Science & Tech.
30.613858
95,642,138
In order to categorise tropical cyclones around the world, the Saffir-Simpson Hurricane Wind Scale is used, defining events by their wind speed and impacts. Although it was developed in the USA, the scale is applied to tropical cyclones around the world; it originated in 1971 with Herbert Saffir, a civil engineer, and Bob Simpson of the US National Hurricane Center. The Saffir-Simpson Hurricane Wind Scale consists of a five-point scale of hurricane intensity and starts at 74 mph. Tropical cyclones with wind speeds up to 38 mph are classified as tropical depressions, and those with wind speeds from 39-73 mph are classified as tropical storms.
Saffir-Simpson Hurricane Wind Scale
Category 1 - Wind: 74-95 mph. Damage: Minimal - no significant structural damage; can uproot trees and cause some flooding in coastal areas.
Category 2 - Wind: 96-110 mph. Damage: Moderate - no major destruction to buildings; can uproot trees and signs. Coastal flooding can occur. Secondary effects can include shortages of water and electricity.
Category 3 - Wind: 111-129 mph. Damage: Extensive - structural damage to small buildings and serious coastal flooding for those on low-lying land. Evacuation may be needed.
Category 4 - Wind: 130-156 mph. Damage: Extreme - all signs and trees blown down, with extensive damage to roofs. Flat land inland may become flooded. Evacuation probable.
Category 5 - Wind: greater than 156 mph. Damage: Catastrophic - buildings destroyed, with small buildings overturned. All trees and signs blown down. Evacuation of up to 10 miles inland.
A classifier built from these thresholds is sketched after this section. Other phenomena, which can be just as damaging as the wind, frequently accompany tropical cyclones. These can include:
- High seas - large waves of up to 15 metres high are caused by the strong winds and are hazardous to shipping
- Storm surge - a surge of water of up to several metres can cause extensive flooding and damage in coastal regions
- Heavy rain - the tropical cyclone can pick up two billion tons of moisture per day and release it as rain. This can lead to extensive flooding - often well inland from where the tropical cyclone hit the coast
- Tornadoes - tropical cyclones sometimes spawn many tornadoes as they hit land which can cause small areas of extreme wind damage
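The thresholds above translate directly into code; the sketch below (names illustrative) maps a sustained wind speed in mph to its class on the scale.

public class SaffirSimpson {
    static String classify(int mph) {
        if (mph < 39)   return "Tropical depression";       // up to 38 mph
        if (mph < 74)   return "Tropical storm";            // 39-73 mph
        if (mph <= 95)  return "Category 1 (minimal)";
        if (mph <= 110) return "Category 2 (moderate)";
        if (mph <= 129) return "Category 3 (extensive)";
        if (mph <= 156) return "Category 4 (extreme)";
        return "Category 5 (catastrophic)";                 // greater than 156 mph
    }

    public static void main(String[] args) {
        int[] samples = {38, 73, 74, 111, 157};
        for (int w : samples)
            System.out.printf("%d mph -> %s%n", w, classify(w));
    }
}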
<urn:uuid:6b0a1bf1-f8e4-497c-98f6-80563e8b67e7>
3.921875
470
Knowledge Article
Science & Tech.
42.422193
95,642,159
Although scientists have known for more than a century that small populations of closely related plants or animals are likely to suffer from low reproductive success, the exact mechanism by which this “inbreeding depression” occurs is still the subject of debate. The new study, in Conservation Biology, is the first to look at inbreeding depression as it relates to the expression of all of an organism’s genes – to see which are more or less active in inbred populations and what they do. By mating male and female fruit flies that were genetically identical to one another, researchers at the University of Illinois were able to determine how much the flies’ genetic likeness reduced their reproductive success. They repeated the experiment in six lines of fruit flies that were identical to one another except for the composition of one of their chromosomes; only the genes of chromosome three differed between the lines. The researchers also crossed the three highest inbred lines to one another, creating outbred lines that could be compared with the inbred ones. Using oligonucleotide microarrays, which can measure the activity of all of an organism’s genes at once, the researchers were able to see which genes were more or less active (up-regulated or down-regulated) in the inbred versus the outbred lines. The six inbred lines of fruit flies showed a lot of variation in the degree of inbreeding depression, from 24 to 79 percent when compared with non-inbred flies. The researchers also found that 567 genes in the high inbreeding depression lines were expressed at higher or lower levels than the same genes in the other inbred lines. Only 62 percent of these genes were located on chromosome three (the only chromosome that differed between the lines) indicating that variation in chromosome three had altered gene expression on the other chromosomes. “These results suggest that a significant amount of inbreeding depression is due to a few key genes that affect the expression of other genes,” said animal biology professor and department head Ken Paige, who led the study. Of particular note were identical changes in the expression of 46 genes in all three of the high inbreeding depression lines, Paige said, making them of interest for further study. Genes associated with inbreeding depression could be grouped into three broad categories of function: those involved in metabolism, stress, and defense. This is a surprising finding, Paige said, “because we think of inbreeding as a random process.” Many metabolic genes were up-regulated in the inbred flies, as were genes that fight pathogens such as bacteria or viruses. A third group of genes was down-regulated. They code for proteins that protect the body from reactive atoms and molecules that can damage cells. These changes in gene expression are shunting energy away from reproduction and undermining some basic cellular functions, Paige said. Inbreeding depression is thought to result from a deleterious pattern of inherited genes. In general, an organism with two parents has two versions of every gene – one maternal and one paternal. These different flavors of a gene are called alleles. If the maternal and paternal alleles differ, one of them usually dominates, conferring all of its qualities to the offspring. The other, silenced allele is called “recessive.” Some alleles are detrimental to health. Most of these are recessive, meaning that they do not cause problems unless the organism inherits two copies of them – one from each parent. 
When the alleles differ, one (the dominant allele) often masks the deleterious effects of the other. But the interaction of parental alleles in their offspring can be quite complex. Sometimes an allele causes a disease or disorder even if it is paired with a different allele. Sometimes several genes influence a single trait. And sometimes two different alleles can lead to a higher level of gene activity than occurs in either parent (this last phenomenon is called overdominance). When closely related individuals mate, their offspring are likely to end up with identical alleles for many traits. Many potentially harmful recessive alleles are no longer masked by dominant alleles, so more genetic disorders arise. Similarly, offspring that inherit two identical alleles for some traits will also lose any advantages once conferred by overdominance. Biologists have long wondered which of these mechanisms causes the reproductive failures seen in inbred populations. “It’s still being debated,” Paige said. The new study found that about 75 percent of the reproductive declines seen in the inbred flies could be attributed to the loss of dominant alleles and the subsequent “unmasking” of deleterious alleles. More surprisingly, the data also indicated that 25 percent of the declines were due to the loss of overdominance. “That means we have two mechanisms ongoing,” Paige said. “One does predominate, but the other may be important, too.” The fact that a relatively large number of genes are affected by inbreeding is bad news for conservationists hoping to save small populations of plants or animals from extinction, Paige said. It means that there is no easy fix to the problem of inbred populations. The best approach is to try to preserve and maintain genetic diversity in natural populations well before they begin their slide into an “extinction vortex,” he said. Co-authors on the study included natural resources and environmental sciences graduate student Julien Ayroles, animal biology professor Kimberly Hughes, animal biology doctoral student Kevin Rowe, animal biology technician Melissa Reedy, animal biology postdoctoral researcher Jenny Drnevich, animal biology professor Carla Cáceres, and animal sciences professor Sandra Rodriguez-Zas, who is also an affiliate of the Institute for Genomic Biology. Diana Yates | University of Illinois
<urn:uuid:96483a75-7ef2-4d5a-84d6-5812e36d5831>
3.59375
1,848
Content Listing
Science & Tech.
32.076075
95,642,192
Keys to marine invertebrates of the Woods Hole region : a manual for the identification of the more common marine invertebrates
Smith, Ralph I.
From the Preface: This handbook of keys is an attempt to fill an obvious need at Woods Hole, the need for a general reference on marine invertebrates for the use of students and investigators who want to know what is here and how to identify it.
Suggested Citation (Book): Smith, Ralph I., "Keys to marine invertebrates of the Woods Hole region : a manual for the identification of the more common marine invertebrates", Systematics-Ecology Program, Contribution no. 11, 1964, DOI:10.1575/1912/217, https://hdl.handle.net/1912/217
Related items (by title, author, creator and subject):
Development and planktonic larvae of common benthic invertebrates of the Woods Hole, Massachusetts region : summary of existing data and bibliographic sources. Scheltema, Rudolf S. (Woods Hole Oceanographic Institution, 1984-04) The early life histories of more than one-half of the most common benthic invertebrates from the region of Woods Hole, Massachusetts have not been described. In many instances it has not even been determined whether or ...
Initial settlement of marine invertebrate larvae : the role of passive sinking in a near-bottom turbulent flow environment. Hannan, Cheryl Ann (Massachusetts Institute of Technology and Woods Hole Oceanographic Institution, 1984-02) The hypothesis that planktonic larvae of benthic invertebrates sink through the water like passive particles in turbulent flows near the seabed was tested in the field using several groups of geometrically different ...
Mooney, T. Aran; Katija, Kakani; Shorter, K. Alex; Hurst, Thomas P.; Fontes, Jorge; Afonso, Pedro (BioMed Central, 2015-09-28) Soft-bodied marine invertebrates comprise a keystone component of ocean ecosystems; however, we know little of their behaviors and physiological responses within their natural habitat. Quantifying ocean conditions and ...
<urn:uuid:2e0680f0-53d5-467d-b256-0c2b7bc7703d>
2.5625
456
Content Listing
Science & Tech.
34.160791
95,642,201
Cubosomes are small biological 'capsules' that can deliver molecules of nutrients or drugs with high efficiency. They have a highly symmetrical interior made of tiny cubes of assembled fat molecules similar to the ones in cell membranes, which also means that cubosomes are safe to use in living organisms. Such features have triggered great interest in the pharmaceutical and food industries, which seek to exploit the structure of cubosomes for the controlled release of molecules, improving the delivery of nutrients and drugs. EPFL scientists, working with Nestlé, have now been able to study the 3D structure of cubosomes in detail for the first time. Published in Nature Communications, the breakthrough can help promote the use of cubosomes in medicine and food science. Molecules of a drug or a nutrient contained inside a cubosome can move through the numerous tiny channels that make up its interior. The pharmaceutical industry already uses a similar system for drug delivery: liposomes, which are also made of fats but are shaped as spheres. Their intricate internal channels give cubosomes a very high internal surface area, which offers great potential for the controlled delivery of nutrients and drugs. In short, the properties of cubosomes, like those of other lipid-based delivery vessels, depend on their particular structures. The problem is that cubosomes are self-assembled, occurring 'spontaneously' after putting together the right ingredients (generally fats and a detergent) under the right conditions. This means that scientists have limited control over their final structure, which makes it hard to optimize their design. In addition, it is very difficult to 'see' the interior of a cubosome and map out the various arrangements of its channels. Davide Demurtas and Cécile Hébert from EPFL's Interdisciplinary Centre for Electron Microscopy (CIME), working with Laurent Sagalowicz at the Nestlé Research Center in Lausanne, have now uncovered the interior 3D structure of cubosomes and successfully matched their real-life findings to computer simulations. The researchers used a microscopy technique called 'cryo-electron tomography' (CET). Their method involves embedding cubosomes in a type of 'glass' ice that does not form crystals, which would damage the cubosomes. The samples are kept at -170°C. The microscope then takes photographs while tilting the cubosome at different angles. The technique, which was carried out at CIME, can reconstruct the three-dimensional information to create images of the cubosomes in their native state and with unprecedented detail. "This method allows us to get information about everything, both the inside and outside of the cubosomes," says Cécile Hébert. "Because the CET microscope distinguishes the different densities between cubosome and ice, it is very sensitive and precise." The CET images clearly showed the internal cubic structure, as well as the internal 3D organization of the channels. The researchers also compared the images to the prevailing mathematical models used to simulate the interface between the interior and exterior. The real-life data successfully matched the theory. "With this approach we can now forge a new understanding of the structure of the cubosomes' interior," says Davide Demurtas. The success is expected to make the study and design of cubosomes with controlled macroscopic properties (e.g. controlled release) easier.
This work represents a collaboration of EPFL's Interdisciplinary Centre for Electron Microscopy (CIME) with the Nestlé Research Center Lausanne, EPFL's Institute of Cancer Research, and the Department of Health Science & Technology of ETH Zurich. Demurtas D, Guichard P, Martiel I, Mezzenga R, Hébert C, Sagalowicz L. Direct visualization of dispersed lipid bicontinuous cubic phases by cryo-electron tomography. Nature Communications 17 Nov. 2015. DOI: 10.1038/NCOMMS9915. Nik Papageorgiou | EurekAlert!
<urn:uuid:6803bfef-afca-48d4-b7bf-52f53e8bfc39>
3.5625
1,495
Content Listing
Science & Tech.
33.79117
95,642,220
Washington (AFP) – A major drought across the western United States has sapped underground water resources, posing a greater threat to the water supply than previously understood, scientists said Thursday. The study involves seven western states — Arizona, Colorado, Utah, Wyoming, California, New Mexico and Nevada — in an area known as the Colorado River Basin. Since 2000, the region has seen the driest 14-year period in a century, and researchers now say three quarters of the water loss has come from underground. The total amount of water lost is almost double the volume of the nation’s largest reservoir, Nevada’s Lake Mead, said the study in the journal Geophysical Research Letters. From 2004 to 2013, satellite data showed that the basin lost nearly 53 million acre-feet (65 cubic kilometers) of freshwater, it said. “This is a lot of water to lose. We thought that the picture could be pretty bad, but this was shocking,” said lead study author Stephanie Castle, a water resources specialist at the University of California, Irvine. “We don’t know exactly how much groundwater we have left, so we don’t know when we’re going to run out,” added Castle. NASA said the study is “the first to quantify the amount that groundwater contributes to the water needs of western states.” The data came from NASA’s Gravity Recovery and Climate Experiment (GRACE) satellite, a joint mission with the German Aerospace Center and the German Research Center for Geosciences. (Photo caption: This picture, taken from a helicopter, shows a drought-affected area near Los Altos Hills, California, on July 23, 2014 © AFP/File Jewel Samad) Experts say water levels and losses in rivers and lakes are well documented, but underground aquifers are not as well understood. The satellite was able to detect below-ground water by measuring the gravitational pull of the region as it changed over time due to rising or falling water reserves. The Colorado River Basin supplies water to some 40 million people in seven states and irrigates about four million acres (1.6 million hectares) of farmland. “The Colorado River Basin is the water lifeline of the western United States,” said senior author Jay Famiglietti, senior water cycle scientist at NASA’s Jet Propulsion Laboratory. He said the basin, like others worldwide, was relying on groundwater to make up for the limited surface-water supply. “We found a surprisingly high and long-term reliance on groundwater to bridge the gap between supply and demand,” he said. “Combined with declining snowpack and population growth, this will likely threaten the long-term ability of the basin to meet its water allocation commitments to the seven basin states and to Mexico,” Famiglietti said.
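The headline conversion is easy to verify. The sketch below uses the standard factor of about 1,233.48 cubic meters per acre-foot (a textbook value, not quoted in the article) to confirm that 53 million acre-feet is roughly 65 cubic kilometers.

public class BasinLoss {
    public static void main(String[] args) {
        double acreFeet = 53e6;            // reported loss, 2004-2013
        double m3PerAcreFoot = 1233.48;    // standard conversion factor (assumed)
        double km3 = acreFeet * m3PerAcreFoot / 1e9;  // 1 km^3 = 1e9 m^3
        System.out.printf("%.0f million acre-feet = %.1f km^3%n",
                acreFeet / 1e6, km3);      // prints ~65.4 km^3, matching the study
    }
}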
<urn:uuid:cd46d528-704d-4cc1-81e2-c5dbf3cb7459>
3.40625
595
News Article
Science & Tech.
37.339753
95,642,270
Carbon Dioxide ‘Sponge’ Could Ease Transition to Cleaner Energy News Aug 13, 2014 The material — a relative of the plastics used in food containers — could play a role in President Obama’s plan to cut CO2 emissions 30 percent by 2030, and could also be integrated into power plant smokestacks in the future. The report on the material is one of nearly 12,000 presentations at the 248th National Meeting & Exposition of the American Chemical Society (ACS), the world’s largest scientific society, taking place here through Thursday. “The key point is that this polymer is stable, it’s cheap, and it adsorbs CO2 extremely well. It’s geared toward function in a real-world environment,” says Andrew Cooper, Ph.D. “In a future landscape where fuel-cell technology is used, this adsorbent could work toward zero-emission technology.” CO2 adsorbents are most commonly used to remove the greenhouse gas pollutant from smokestacks at power plants where fossil fuels like coal or gas are burned. However, Cooper and his team intend the adsorbent, a microporous organic polymer, for a different application — one that could lead to reduced pollution. The new material would be a part of an emerging technology called an integrated gasification combined cycle (IGCC), which can convert fossil fuels into hydrogen gas. Hydrogen holds great promise for use in fuel-cell cars and electricity generation because it produces almost no pollution. IGCC is a bridging technology that is intended to jump-start the hydrogen economy, or the transition to hydrogen fuel, while still using the existing fossil-fuel infrastructure. But the IGCC process yields a mixture of hydrogen and CO2 gas, which must be separated. Cooper, who is at the University of Liverpool, says that the sponge works best under the high pressures intrinsic to the IGCC process. Just like a kitchen sponge swells when it takes on water, the adsorbent swells slightly when it soaks up CO2 in the tiny spaces between its molecules. When the pressure drops, he explains, the adsorbent deflates and releases the CO2, which they can then collect for storage or convert into useful carbon compounds. The material, which is a brown, sand-like powder, is made by linking together many small carbon-based molecules into a network. Cooper explains that the idea to use this structure was inspired by polystyrene, a plastic used in styrofoam and other packaging material. Polystyrene can adsorb small amounts of CO2 by the same swelling action. One advantage of using polymers is that they tend to be very stable. The material can even withstand being boiled in acid, proving it should tolerate the harsh conditions in power plants where CO2 adsorbents are needed. Other CO2 scrubbers — whether made from plastics or metals or in liquid form — do not always hold up so well, he says. Another advantage of the new adsorbent is its ability to adsorb CO2 without also taking on water vapor, which can clog up other materials and make them less effective. Its low cost also makes the sponge polymer attractive. “Compared to many other adsorbents, they’re cheap,” Cooper says, mostly because the carbon molecules used to make them are inexpensive. “And in principle, they’re highly reusable and have long lifetimes because they’re very robust.” Cooper also will describe ways to adapt his microporous polymer for use in smokestacks and other exhaust streams. 
He explains that it is relatively simple to embed the spongy polymers in the kinds of membranes already being evaluated to remove CO2 from power plant exhaust, for instance. Combining two types of scrubbers could make much better adsorbents by harnessing the strengths of each, he explains.
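To make the pressure-swing behavior described above concrete, here is a minimal sketch of how an adsorbent's working capacity is often estimated from an isotherm. The Langmuir form and every parameter value below are illustrative assumptions, not measured properties of the Liverpool polymer.

```python
# Illustrative pressure-swing adsorption estimate using a Langmuir isotherm:
# q(P) = q_max * b*P / (1 + b*P). All parameters are invented for illustration.
q_max = 10.0   # mmol CO2 adsorbed per gram at saturation (assumed)
b = 0.2        # Langmuir affinity constant, 1/bar (assumed)

def uptake(p_bar):
    """CO2 uptake (mmol/g) at pressure p_bar (bar)."""
    return q_max * b * p_bar / (1.0 + b * p_bar)

p_high, p_low = 20.0, 1.0  # IGCC-like adsorption pressure vs. release pressure
working_capacity = uptake(p_high) - uptake(p_low)
print(f"adsorbed at {p_high} bar: {uptake(p_high):.2f} mmol/g")
print(f"working capacity per swing: {working_capacity:.2f} mmol/g")
```

The point of the calculation is the shape of the curve: uptake rises steeply with pressure and is shed again when the pressure drops, which is exactly the swelling-and-deflating cycle Cooper describes.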
<urn:uuid:f23e0f03-3573-4fab-86ad-af4129b739da>
3.25
926
News Article
Science & Tech.
36.100991
95,642,274
Astronomers on Monday said they have spotted evidence of water vapor plumes rising from Jupiter’s moon Europa, a finding that might make it easier to learn whether life exists in the warm, salty ocean hidden beneath its icy surface. The apparent plumes detected by the Hubble Space Telescope shoot about 125 miles (200 km) above Europa’s surface before, presumably, raining material back down onto the moon’s surface, NASA said. Europa, considered one of the most promising candidates for life in the solar system beyond Earth, boasts a global ocean with twice as much water as in all of Earth’s seas, hidden under a layer of extremely cold and hard ice of unknown thickness. While drilling through the ice to test ocean water for signs of life would be a daunting task, sampling water from the plumes might be a simpler project. “If the plumes are real, it potentially gives us easier access to the ocean below … without needing to drill into miles of ice,” said lead researcher William Sparks of the Space Telescope Science Institute in Baltimore, Maryland. Europa is about 1,900 miles (3,100 km) in diameter, slightly smaller than Earth’s moon. Among Jupiter’s four largest moons, Europa is the second closest to the biggest planet in the solar system. The telescope observed the plumes three times in 2014, mostly around Europa’s southern polar region, scientists told a conference call with reporters. On Earth, life is found everywhere there is water, energy and nutrients, so scientists have a special interest in places elsewhere in the solar system, like Europa, with similar characteristics, said Paul Hertz, director of NASA’s astrophysics division. The findings, which will be published in The Astrophysical Journal, follow an initial Hubble sighting of a water vapor plume over Europa’s south pole in December 2012. Scientists got their first hint that bright, icy Europa, which is crisscrossed by dark bands and ridges, contains an underground ocean from NASA’s twin Voyager probes, which flew by Jupiter in 1979. The follow-on Galileo spacecraft, which circled around and through Jupiter’s system from 1995 to 2003, detected a magnetic field that likely was triggered by a salty, global ocean beneath Europa’s surface. Two more missions are in development to visit Europa. A NASA spacecraft, targeted for launch in the mid-2020s, would make more than 40 close flybys of the moon and possibly sample material in any plumes shooting out from its surface. Jupiter has 67 known moons, plus many smaller ones that have not yet been named.
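As a rough illustration of the scale involved (my estimate, not a figure from the article), the launch speed needed for vapor to reach the reported plume height can be approximated ballistically, assuming constant surface gravity and no drag:

```python
import math

# Back-of-envelope: speed needed to coast up to the ~200 km plume height,
# assuming constant Europan surface gravity and ballistic (drag-free) flight.
g_europa = 1.31   # m/s^2, Europa's surface gravity
height = 200e3    # m, reported plume height
v = math.sqrt(2 * g_europa * height)
print(f"launch speed ~ {v:.0f} m/s")   # ~720 m/s
```

Treating gravity as constant slightly overstates the requirement, since g weakens over a 200 km climb on a moon only 1,560 km in radius, but the order of magnitude stands.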
<urn:uuid:dee385a5-4cfc-40ae-8cb0-5b8fdbbae0fc>
3.78125
707
News Article
Science & Tech.
34.433643
95,642,277
Three views of a computer model of asteroid 1998 KY26

Discovery site: Kitt Peak Obs.
Discovery date: 28 May 1998 (discovery: first observed only)
MPC designation: 1998 KY26
Classification: NEO · Apollo
Orbital characteristics (Epoch 4 September 2017, JD 2458000.5):
- Uncertainty parameter: 3
- Observation arc: 11 days
- Orbital period: 1.37 yr (500 days)
- Mean motion: 0° 43m 12s / day
- Earth MOID: 0.0024 AU · 0.93 LD
- Absolute magnitude (H): 25.0 · 25.5

1998 KY26 is a nearly spherical sub-kilometer asteroid and fast rotator, classified as a near-Earth object of the Apollo group, approximately 30 meters in diameter. It was first observed on 2 June 1998, by the Spacewatch survey at Kitt Peak National Observatory over six days, while it passed 800,000 kilometers (half a million miles) from Earth (a little more than twice the Earth–Moon distance).

Orbit and classification

The asteroid orbits the Sun at a distance of 1.0–1.5 AU once every 16 months (500 days). Its orbit has an eccentricity of 0.20 and an inclination of 1° with respect to the ecliptic. It has an Earth minimum orbital intersection distance of 0.0024 AU (359,000 km), which translates into 0.93 lunar distances. It is one of the most easily accessible objects in the Solar System, and its orbit frequently brings it on a path very similar to the optimum Earth–Mars transfer orbit. This, coupled with the fact that it is water-rich, makes it an attractive target for further study and a potential source of water for future missions to Mars. 1998 KY26 is the smallest Solar System object ever studied in detail and, with a rotational period of 10.7 minutes, was the fastest-spinning object observed at the time of its discovery: most asteroids with established rotational rates have periods measured in hours. It was the first recognized minor object that spins so fast that it must be a monolithic object rather than a rubble pile, as many asteroids are thought to be. Since it was found to be a fast rotator, several other small asteroids have been found to also have short rotation periods, some even faster than 1998 KY26. Optical and radar observations indicate that 1998 KY26 is a water-rich object. These physical properties were measured by an international team of astronomers led by Dr. Steven J. Ostro of the Jet Propulsion Laboratory. The team used a radar telescope in California and optical telescopes in the Czech Republic, Hawaii, Arizona and California. Unusually for a body so small, its shape is fairly regular.
"Close Encounters: Observations of the Earth-crossing Asteroids 1998 KY26 and 1998 ML14". American Astronomical Society. 30: 1029. Bibcode:1998DPS....30.1006H. Retrieved 1 August 2017. - Pravec, P.; Sarounova, L. (June 1998). "1998 KY26". IAU Circ. (6941). Bibcode:1998IAUC.6941....2P. Retrieved 1 August 2017. - "1998 KY26". Retrieved 25 April 2009. - "Astronomy Picture of the Day: Asteroid 1998 KY26". Nasa. 2002-09-19. Retrieved 1 August 2017. - MPEC 1998-L02 - Scott Hudson's Homepage: The Earth-Crossing Asteroid 1998 KY26 - Steven Ostro's Homepage: 1998 KY26 - Lipanović, Željko. "1998 KY26 Images". Archived from the original on 2009-10-23. - Media Relations Office. Sun never sets, for long, on fast-spinning, water-rich asteroid (press release). Pasadena, California: Jet Propulsion Laboratory. July 22, 1999. - 1998 KY26 at the JPL Small-Body Database
<urn:uuid:30df3e4f-5963-4186-bc7b-77b3b5a6f254>
3.546875
1,177
Knowledge Article
Science & Tech.
74.568897
95,642,283
Molecular Scattering of Light in Gases

By using the photographic method of measurement described in Chapter III, Sec. 11, Cabannes first made measurements of the absolute intensity of scattered light in argon and measurements of the relative intensity in some other gases and vapors of organic substances. Later, Daure, in Cabannes’ laboratory, made absolute measurements in vapors of ethyl chloride, and Vaucouleurs made measurements in argon, air, and ethyl chloride.

Keywords: Sound Absorption, Mercury Vapor, Sound Absorption Coefficient, Atomic Scattering, Depolarization Factor
<urn:uuid:e47eabb0-e1ff-44d5-a878-374ef344d8f3>
3.140625
138
Truncated
Science & Tech.
5.350545
95,642,287
In the parallel universe of the microbiological world, there is a current superstar species of blue-green algae that, through its powers of photosynthesis and carbon dioxide fixation, or uptake, can produce (count 'em) ethanol, hydrogen, butanol, isobutanol and potentially biodiesel. Now that’s some five-tool player. In baseball, you call that player Willie Mays or Mike Trout. In microbiology, it goes by Synechocystis 6803, a versatile, specialized bacterium known as a cyanobacterium. It makes pikers out of plants when it comes to capturing and storing energy from photosynthesis, and it’s a natural in converting the greenhouse gas carbon dioxide (CO2) to useful chemicals that could help both tame global warming and sustain energy supply. In addition, genetically engineered Synechocystis 6803 has the potential to make commodity chemicals and pharmaceuticals. Granted, that’s mostly in laboratories, on the liter scale. Because of its versatility and potential, this microscopic organism has been one of the most studied of its kind since its discovery in 1968. But just as in baseball, where “can’t miss” five-tool prospects are signed yearly with great expectations and never achieve their promise, Synechocystis 6803 has yet to deliver. Fuzhong Zhang, PhD, assistant professor of energy, environmental & chemical engineering at Washington University in St. Louis, works with Synechocystis 6803 — as well as other microbes and systems — in the areas of synthetic biology, protein engineering and metabolic engineering, with a special focus on synthetic control systems that could help the organism realize its untapped potential. Zhang says the biotech world has to overcome several challenges to move the engineered microbes into the application stage. Zhang will be in the thick of them. “My goal is to engineer microbes and turn them into microfactories that produce useful chemicals,” Zhang says. “Synechocystis is particularly interesting because it can use CO2 as the only carbon source. Engineering this bacterium would turn the fixed CO2 into metabolites that can be further converted to fuels and other chemicals through designed biosynthetic pathways.” Traditional chemical production requires high pressure and temperatures and literally tons of chemical solvents, but the microbial approach is very eco-friendly: Once the engineered cyanobacteria start to grow, all they need are water, basic salts and the CO2. In an academic “scouting report” of Synechocystis, published in the August 2013 Marine Drugs, Zhang and colleagues summarize recent research and conclude that production speed has to be increased and new genetic tools must be developed to control the biochemistry inside Synechocystis so that chemical productivity improves enough to make this technology economically viable. Current industry specifications for potentially scalable chemical production are roughly 100 grams per liter of fuel or chemicals. Presently, laboratory production is generally less than 1 gram per liter, and the efficiency is very low. Zhang says the research community needs better tools to control gene expression. For example, promoters — little stretches of DNA before genes of interest that help control gene expression — with predictable strength are needed. They also need better cellular biosensors that can sense key metabolites and control the production of vital proteins that create the desired chemicals.
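The biosensor idea can be illustrated with a toy simulation in which a sensed intermediate throttles its own upstream production — the software analogue of the valve-and-meter picture described below. Everything here, from the Hill-type repression form to every number, is an invented illustration, not a published Synechocystis circuit:

```python
# Toy metabolite-feedback loop: an upstream pathway produces intermediate x,
# a drain converts x to product, and a sensor represses production as x
# rises past a setpoint. All values are invented for illustration.
def simulate(t_end=100.0, dt=0.01):
    x = 0.0                      # intermediate metabolite level (a.u.)
    v_max, k_drain = 1.0, 0.05   # max production rate; drain rate constant
    setpoint, gain = 5.0, 2.0    # sensor setpoint and response steepness
    for _ in range(int(t_end / dt)):
        # Hill-type repression: production falls off as x passes the setpoint
        production = v_max / (1.0 + (x / setpoint) ** gain)
        drain = k_drain * x
        x += (production - drain) * dt   # simple Euler integration step
    return x

print(f"intermediate settles near {simulate():.1f} a.u.")  # production = drain
```

The loop settles where production balances drain, so the intermediate neither accumulates to toxic levels nor starves the downstream pathway, which is the behavior a real metabolite biosensor is meant to enforce.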
And they need to engineer the organisms’ circadian rhythms (day/night) to someday produce organisms that work around the clock making a fuel or chemical. Natural Synechocystis 6803, for instance, performs a yeoman’s task of producing and storing energy molecules during the day through photosynthesis, but at night, it uses a different set of metabolisms to consume the stored energy. The natural circadian rhythm has to be rewired to make a biofuel 24 hours a day. Zhang’s research includes developing gene expression tools, new chemical biosynthetic pathways and circadian control tools for cyanobacteria. “I’m confident that in two or three years we will have more potent tools to engineer gene expression levels and timing, which will speed up the process more accurately and efficiently,” he says. Also, his group has been working to develop dynamical control systems in microbes that function like meters and valves in a traditional chemical production plant – the meters measure pressure and flow, and the valves control them. “It’s a biological version of the valve-and-meter model to control the flow of metabolites, making the production of fuels and chemicals more efficient,” he says. Yu Y, You L, Liu D, Hollinshead W, Tang Y, Zhang F. Development of Synechocystis sp. PCC 6803 as a Phototrophic Cell Factory. Marine Drugs 2013, 11, 2894-2916; doi:10.3390/md11082894. Funding for this research was provided by the National Science Foundation. The School of Engineering & Applied Science at Washington University in St. Louis focuses intellectual efforts through a new convergence paradigm and builds on strengths, particularly as applied to medicine and health, energy and environment, entrepreneurship and security. With 82 tenured/tenure-track and 40 additional full-time faculty, 1,300 undergraduate students, 700 graduate students and more than 23,000 alumni, we are working to leverage our partnerships with academic and industry partners — across disciplines and across the world — to contribute to solving the greatest global challenges of the 21st century.
<urn:uuid:5566e01e-3b0f-4e45-a143-aa31c4214674>
3.03125
1,799
Content Listing
Science & Tech.
35.137618
95,642,292
Scientists make New Year’s resolutions like the rest of us, but their lists are just a little more ambitious. In 2018, they plan to fly machines into the sun’s atmosphere, onto the surface of Mars and alongside an object 4 billion miles away in the Kuiper belt. They’ll deploy a gene-editing system inside the cells of living people to fight a cancer-causing virus. They’ll try to see the edge of a black hole, and to win seats in Congress. And that’s only some of what they have in store. Here are 11 science stories we can’t wait to follow in 2018. Searching for other life-friendly planets NASA’s Kepler mission taught us that solar systems like our own — where planets orbit a central star — are the norm, not the exception. In 2018, the space agency’s new planet-hunting mission, known as TESS, will begin a more targeted search for planetary systems around stars in our stellar neighborhood — that is, those within 300 light-years from Earth. The difference between the two missions is primarily one of approach. Kepler stared long and deep into a narrow patch of sky, looking for the brief dimming of distant stars that would indicate a planet had moved past, obscuring a bit of their light. TESS will use the same detection technique, but will focus on a much more shallow field that encompasses the entire sky. That includes all 10,000 stars we can see without a telescope. TESS, which stands for Transiting Exoplanet Survey Satellite, is scheduled to launch between March and June. It will begin its search for local exoplanets two months later. Flying into the sun’s atmosphere This summer, NASA will launch a mission to touch the sun. The Parker Solar Probe, scheduled to lift off July 31, will swing within 4 million miles of the sun’s surface, tasting charged particles from the atmosphere and making detailed measurements during 24 planned orbits. NASA has sent spacecraft out to the fringes of our solar system, but never so close to the blazing ball of gas that lies at its heart. The probe, whose first close approach to the sun should take place in November, could answer two major questions: Why is the sun's ghostly corona so much hotter than its surface? And what factors power the solar wind, the stream of charged particles that flow from the sun into space? The answers could help scientists better understand solar flares and the solar storms that can wreak havoc with Earth’s satellites, energy grids and other vital infrastructure. Scientists will be on the ballot After a year that has seen hundreds of scientists driven from the federal government and the work of those remaining greatly diminished, the mobilization of science-minded political candidates has continued to pick up steam. At the heart of that effort has been 314 Action, an organization dedicated to recruiting, training and endorsing candidates with backgrounds in science, technology, engineering and math. In 2017, 314 Action trained roughly 1,400 would-be candidates on the nuts and bolts of running for office. And it’s endorsed eight who will be on the ballot for federal office in 2018. Two are running in California House races — stem cell researcher Hans Keirstead is vying to unseat Rep. Dana Rohrabacher (R-Costa Mesa), and pediatrician Mai Kanh Tran is challenging incumbent Rep. Ed Royce (R-Fullerton). Another 314 Action-backed candidate, software developer and Democratic U.S. Rep. Jacky Rosen, will face off against Nevada Republican Sen. Dean Heller. The 2018 midterm elections are just a warm-up. 
With an eye on elections in 2020 and beyond, 314 Action will run two mass-training events for scientists this year, including one in the Bay Area. Looking deep into the heart of Mars NASA has studied Mars for decades. Now the space agency will examine it from within. When it arrives in November 2018, the InSight Mars lander will deploy a package of instruments to explore the Red Planet’s inner workings and composition. The three-legged lander will use a seismometer to track the vibrations from seismic activity and meteorite impacts. It will also send a thermal probe about 16 feet beneath the surface to study the heat flow of the planet’s interior. A third instrument will measure the planet’s “wobble” as it circles the sun by tracking the Doppler shift in radio signals between the lander and Earth. Together, these tools will allow scientists to understand the structure and composition of the planet’s crust, mantle and core. Mars’ interior experienced less churning than Earth’s, so the Red Planet’s contents could offer a window into the early history and evolution of our home. Mars InSight (short for Interior Exploration using Seismic Investigations, Geodesy and Heat Transport) was originally scheduled to launch in March 2016, but those plans were foiled by a leak in one of the instruments. If all goes well, the spacecraft will lift off from California’s Vandenberg Air Force Base in May. CRISPR goes inside humans Ever since the gene-editing toolkit known as CRISPR burst onto the scene in 2012, we’ve been hearing about its potential to treat diseases in humans. But so far, it’s mostly been used on cells in petri dishes and animals. Scientists in the U.S. and China are already using the CRISPR system to modify immune system cells so they’re better able to attack various kinds of cancer. The process involves removing cells from a patient’s bloodstream, altering them in the lab, multiplying them and then putting them back in the body. In 2018, however, we expect to see the first trial that will deploy the CRISPR gene editor inside a living person’s body. This groundbreaking study, set to begin in January, plans to use CRISPR to fight the human papillomavirus (HPV), which causes cervical, anal, throat and other cancers. The CRISPR machinery will be delivered to the patient via a topical gel. The hope is that once it gets into the cells, it will inactivate HPV’s viral genes without harming the DNA of healthy cells. Scientists are also making plans to use CRISPR to treat blood disorders like sickle cell disease and beta thalassemia, and a rare eye condition called Leber congenital amaurosis. A global effort to see a black hole Scientists have witnessed the effects of black holes in countless phenomena throughout the universe, but no one has ever seen one directly. In 2018, that might change. In the coming months, researchers working with the Event Horizon Telescope hope to produce the first-ever image of a real black hole silhouetted against a backdrop of hot, spiraling gas. The target is Sagittarius A*, a supermassive black hole that lies 26,000 light-years away at the center of our galaxy. To take an image of it, astronomers needed to create a telescope with such high resolution that it could locate an orange on the moon — in other words, a telescope the size of our entire planet. They were able to do this by linking eight telescopes from across the globe. In April 2017, they took measurements of Sagittarius A* and another black hole even farther away. 
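The "orange on the moon" comparison can be checked against the standard diffraction limit, θ ≈ 1.22 λ/D. This is a rough sketch with assumed values: a 1.3 mm observing wavelength, Earth's diameter as the effective aperture, and an 8 cm orange; none of these numbers appear in the article.

```python
import math

# Diffraction-limited resolution of an Earth-sized aperture at mm wavelengths,
# compared with the angular size of an orange at the Moon's distance.
wavelength = 1.3e-3                 # m, assumed mm-wave observing wavelength
D = 12.742e6                        # m, Earth's diameter as the aperture
theta = 1.22 * wavelength / D       # diffraction limit, radians
orange = 0.08 / 384.4e6             # 8 cm orange at the Moon's distance, radians
to_uas = 206265e6                   # radians -> microarcseconds
print(f"telescope resolution ~ {theta * to_uas:.0f} microarcsec")  # ~26
print(f"orange on the moon  ~ {orange * to_uas:.0f} microarcsec")  # ~43
```

The two angles come out within a factor of two of each other, which is why the planet-sized array is just good enough to resolve the shadow of Sagittarius A*.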
Now comes the hard part of filtering out the background noise and piecing together an image from all that data. It will take time and patience for the research team to determine the best way to interpret the data, but surely the result will be worth the wait. Faster drug approvals The head of the Food and Drug Administration wants to get medicines out of the lab and into the clinic more quickly. The coming year will begin to test whether this is possible — and if so, whether there are any unexpected costs. FDA Commissioner Scott Gottlieb says the agency will soon issue new guidelines for speeding the approval process. For instance, a drug company running a clinical trial would not have to show that patients who get an experimental treatment live longer, or that their disease progresses more slowly. Instead, the company might use “surrogate markers” to satisfy the FDA by showing that its treatment halts or reverses some process that is a hallmark of a disease, such as the accumulation of beta-amyloid protein in the brains of Alzheimer’s patients. Gottlieb is also expected to champion the FDA’s broader acceptance of “adaptive clinical trials” whose design, study populations and objectives can be altered along the way in response to early signals. Rather than conducting a series of trials to discover which patients benefit most from a particular drug, a single adaptive trial could answer that question if it were allowed to “flex” along the way. It will take time, and manpower, for these changes to work their way through the system. As the agency fast-tracks new therapies, doctors and drug-safety experts will be watching to see if they are any less safe or effective. In 2016, NASA launched its first mission to bring back precious samples from an asteroid. In 2018, the spacecraft will finally meet its target. OSIRIS-REx (short for Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer) will arrive at the asteroid Bennu in August. In October, it will start mapping the surface, eventually pinpointing a good spot to grab at least 60 grams of dust and rocky material. Just as the decades-old moon samples brought back by NASA’s Apollo missions are still being studied today, the asteroid samples returned from Bennu could offer planetary scientists an unprecedented trove of material that they could study for decades to come. Asteroids like Bennu are space fossils — the building blocks of planets left over from the solar system’s formation that contain crucial, unaltered information about its early history. Bennu is also rich in organic matter, and scientists believe such asteroids were a source of crucial life-friendly molecules for Earth. Finally, since Bennu is a near-Earth object, it could help scientists better predict the risk of an asteroid hitting our planet. As for when those samples are coming back to Earth, don’t hold your breath: OSIRIS-REx isn’t scheduled to return them until 2023. Policing the language of government scientists Can government agencies that fund, conduct and make decisions based on biomedical research carry out their missions while avoiding words that describe what they do? In December, the Washington Post reported an effort by Trump administration officials to expunge at least seven terms from certain budget documents making their way to Congress in 2018. 
In a meeting with policy analysts from the Centers for Disease Control and Prevention, officials were said to have discouraged the use of such phrases as “evidence-based” and “science-based,” as well as the words “diversity,” “fetus,” “vulnerable,” “entitlement” and “transgender.” Officials at the Department of Health and Human Services, which oversees the CDC, have downplayed the report. “I want to assure you there are no banned words at CDC,” Director Brenda Fitzgerald said in a tweet. “We will continue to talk about all our important public health programs.” When Trump’s 2019 budget documents emerge in February, we’ll see whether it’s possible for the CDC, an agency that tracks the health of all Americans, to avoid any reference to “vulnerable” populations. Or whether the National Institutes of Health can fight pathogens such as the Zika virus without using the term “fetus.” Or whether the U.S. Preventive Services Task Force can avoid making reference to “science-based” or “evidence-based” medicine. After all, science (and evidence) is pretty much what they do. The most distant flyby on record NASA’s New Horizons spacecraft made history when it gave humanity our first up-close look at Pluto in 2015. Now it is set to make history again when it whizzes past a mysterious object 1 billion miles beyond the dwarf planet. Mission leaders said the planned flyby will be the most distant in the history of space exploration. It is scheduled for New Year’s Day 2019 (technically, New Year’s Eve 2018 for those of us in the Pacific time zone). However, the journey really begins about six months earlier, when the spacecraft will awake from hibernation and begin gathering intel on its next target. The distant object, known as MU69, lies 4 billion miles from Earth. It was discovered in 2014, and scientists still don’t know much about it. It appears to be peanut-shaped, though it might be composed of two objects closely orbiting each other. It might even have a small moon. With a rendezvous that will take New Horizons within just 2,175 miles of the object’s surface, we’ll know a lot more soon. Testing our faith in the flu vaccine This year’s flu season could test Americans’ faith in the flu vaccine. In Australia, where the flu season has just ended, the vaccine was only about 10% effective against the most widely circulated strain of flu. That same strain, an influenza A known as H3N2, is now on the loose here, and health experts expect the flu vaccine will fall well short of the 34% effectiveness level seen here last year. The vaccine’s protection has been waning steadily in recent years. But don’t blame public health officials for making the wrong guess about which strains of flu virus will circulate — there’s growing evidence that producing the vaccine in eggs introduces mutations that suppress our immune response to the vaccine itself. A “universal” flu vaccine could be the solution. This type of vaccine would train the human immune system to recognize parts of the virus that are shared by all flu strains and don’t change from year to year. It would also be produced in bacterial slurries, not in eggs. Both of these attributes could make annual flu shots a thing of the past. But such vaccines are still years away, and they won’t help Americans weather this flu season, which typically peaks in February. Yearly flu-related deaths in the United States average about 36,000. But they range from a low of about 12,000 to a high — during the 2012-13 season — of roughly 56,000.
Last year, when a pretty robust 46.8% of Americans got vaccinated against the flu, 101 children died of flu-related causes. Officials have not yet tallied the final toll on adults.
<urn:uuid:76e3f688-afc3-47cc-a59d-845a289e506d>
3.5625
3,103
Listicle
Science & Tech.
51.476785
95,642,317
This chapter contains the general integral or local balance law in Eulerian and Lagrangian form. This general law is then used to derive mass conservation, the Cauchy stress tensor, the Piola–Kirchhoff tensor, the momentum equation, the angular-momentum balance with the symmetry of Cauchy’s stress tensor, the balance of energy, and the Clausius–Duhem entropy inequality. All the above laws are written in the presence of a moving singular surface across which the fields exhibit discontinuities. The balance laws considered in this chapter are fundamental to all developments of continuum mechanics, including in the presence of electromagnetic fields.

Keywords: Jump Condition, Entropy Inequality, Singular Surface, Heat Flux Vector, Entropy Flux
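For concreteness, the general balance law summarized above is often written in the following Eulerian form. This is a sketch in one standard notation, which may differ in detail from the chapter's conventions:

$$
\frac{d}{dt}\int_{V(t)} \rho\,\psi\,dv
\;=\; -\oint_{\partial V(t)} \boldsymbol{\phi}\cdot\mathbf{n}\,da
\;+\; \int_{V(t)} \rho\,s\,dv
\;+\; \int_{V(t)} p\,dv,
$$

where $\psi$ is the balanced quantity per unit mass, $\boldsymbol{\phi}$ its outward flux, $s$ the external supply, and $p$ the internal production ($p = 0$ for conservation laws, $p \ge 0$ for the entropy inequality). Across a singular surface moving with normal speed $u_n$, and in the absence of surface production, the associated jump condition reads

$$
[\![\,\rho\,\psi\,(v_n - u_n) + \boldsymbol{\phi}\cdot\mathbf{n}\,]\!] \;=\; 0,
$$

where $[\![\,\cdot\,]\!]$ denotes the jump across the surface. Choosing $\psi = 1$ and $\boldsymbol{\phi} = \mathbf{0}$ recovers mass conservation; choosing $\psi$ as the velocity, with the Cauchy stress supplying the (negative) flux, yields the momentum equation.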
<urn:uuid:0d2e5c41-469e-4ade-99ac-c5a83999fa63>
2.625
353
Academic Writing
Science & Tech.
45.879732
95,642,320
Dr. Wesley Sundquist, professor of biochemistry at the University of Utah, will present at the Experimental Biology 2003 meeting in San Diego on his work in elucidating how HIV is manufactured and assembled in the cell. The raison d’être of a virus such as HIV, if a non-living thing can be said to have one, is to turn a host cell into a factory that churns out virus copies and releases them to infect other cells. Dr. Sundquist’s research has focused on discovering the mechanisms underlying this manufacturing process. By identifying and characterizing the structures of specific cellular proteins that are crucial to assembling HIV, Dr. Sundquist is providing potential new targets for future anti-HIV drugs. For example, he and his colleagues were the first to show that a protein called TSG101 is required for HIV release. HIV needs TSG101 in order to escape from its host cell in a process termed budding. Dr. Sundquist’s team has also determined the structure of the part of TSG101 to which HIV binds. Finding ways to alter this structure or otherwise block its binding to HIV theoretically would prevent budding and slow or halt the infection.
<urn:uuid:39292b79-7e13-40f2-a68b-4dc475d40420>
2.640625
888
Content Listing
Science & Tech.
44.456039
95,642,341
The Cassini mission to Saturn is a joint undertaking of the National Aeronautics and Space Administration, the European Space Agency (ESA), the Agenzia Spaziale Italiana, and numerous other European academic and industrial participants. The Cassini mission will provide a close-up investigation of the Saturn system, including Saturn's atmosphere and magnetosphere, its rings, and several of its moons. Saturn's largest moon Titan is of particular interest. ESA is developing the Huygens probe that will descend through Titan's atmosphere, directly sampling the atmosphere and determining its composition. To accomplish its ambitious scientific objectives, the orbiter and the probe carry 18 scientific instruments to conduct a total of 27 scientific investigations. The Cassini Spacecraft is scheduled for launch on a Titan IV/Centaur in October of 1997. Cassini will reach the Saturn system in 2004. The tour of the Saturn system is scheduled for 4 years and includes 63 orbits of Saturn and more than 36 flybys of Titan. During the first Saturn orbit, the Huygens probe will separate from the Cassini orbiter and descend through the atmosphere of Titan. This paper summarizes the current status of the Cassini program.
<urn:uuid:872d81fd-ddb7-427d-a6f2-535a69138b05>
3.453125
240
Knowledge Article
Science & Tech.
30.207662
95,642,342
Scientists have created the world's first synthetic life form in a landmark experiment that paves the way for designer organisms that are built rather than evolved. The controversial feat, which has occupied 20 scientists for more than 10 years at an estimated cost of $40m, was described by one researcher as "a defining moment in biology". Craig Venter, the pioneering US geneticist behind the experiment, said the achievement heralds the dawn of a new era in which new life is made to benefit humanity, starting with bacteria that churn out biofuels, soak up carbon dioxide from the atmosphere and even manufacture vaccines. However critics, including some religious groups, condemned the work, with one organisation warning that artificial organisms could escape into the wild and cause environmental havoc or be turned into biological weapons. Others said Venter was playing God. The new organism is based on an existing bacterium that causes mastitis in goats, but at its core is an entirely synthetic genome that was constructed from chemicals in the laboratory. The single-celled organism has four "watermarks" written into its DNA to identify it as synthetic and help trace its descendants back to their creator, should they go astray. "We were ecstatic when the cells booted up with all the watermarks in place," Dr Venter told the Guardian. "It's a living species now, part of our planet's inventory of life." Dr Venter's team developed a new code based on the four letters of the genetic code, G, T, C and A, that allowed them to draw on the whole alphabet, numbers and punctuation marks to write the watermarks. Anyone who cracks the code is invited to email an address written into the DNA. The research is reported online today in the journal Science. "This is an important step both scientifically and philosophically," Dr Venter told the journal. "It has certainly changed my views of definitions of life and how life works." The team now plans to use the synthetic organism to work out the minimum number of genes needed for life to exist. From this, new microorganisms could be made by bolting on additional genes to produce useful chemicals, break down pollutants, or produce proteins for use in vaccines. Julian Savulescu, professor of practical ethics at Oxford University, said: "Venter is creaking open the most profound door in humanity's history, potentially peeking into its destiny. He is not merely copying life artificially ... or modifying it radically by genetic engineering. He is going towards the role of a god: creating artificial life that could never have existed naturally." This is "a defining moment in the history of biology and biotechnology", Mark Bedau, a philosopher at Reed College in Portland, Oregon, told Science. Dr Venter became a controversial figure in the 1990s when he pitted his former company, Celera Genomics, against the publicly funded effort to sequence the human genome, the Human Genome Project. Venter had already applied for patents on more than 300 genes, raising concerns that the company might claim intellectual rights to the building blocks of life.
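As an illustration of why four bases suffice for a full character set: three-base "codons" give 4³ = 64 symbols, enough for letters, digits and punctuation. The mapping below is invented for demonstration and is not the actual watermark code used by Venter's team:

```python
from itertools import product

# Hypothetical character-to-codon scheme: 4 bases taken 3 at a time give
# 64 triplets, comfortably covering a 62-character alphabet.
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 .,;:!?'\"()-+=@#$%&*/\\<>[]"
codons = ["".join(c) for c in product("ACGT", repeat=3)]  # 64 triplets
encode = dict(zip(alphabet, codons))
decode = {v: k for k, v in encode.items()}

msg = "HELLO WORLD"
dna = "".join(encode[ch] for ch in msg)          # 33-base "watermark"
print(dna)
print("".join(decode[dna[i:i + 3]] for i in range(0, len(dna), 3)))  # round trip
```

A real watermark scheme would also need conventions the sketch ignores, such as avoiding triplets that spell functional genetic signals, but the counting argument is the same.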
<urn:uuid:b556f082-2317-4bf0-a3ad-d9f64631634d>
3.375
659
News Article
Science & Tech.
41.755226
95,642,377
Note on Electron-Neutron Interaction

Assuming a possible short-range interaction between electrons and neutrons of the form \(V = K\,\delta(\mathbf{r}_N - \mathbf{r}_e)\), where \(\delta\) is the Dirac delta function and \(\mathbf{r}_N\) and \(\mathbf{r}_e\) are the vector positions of neutron and electron, respectively, it is shown that probably \(|K| < 30\,mc^2\,(e^2/mc^2)^3\) from consideration of the effect of the interaction on slow-neutron scattering cross sections. If \(K\) is positive and a little less than this upper limit, this interaction could be responsible for the observed isotope displacement of spectral energy levels.

Keywords: Total Cross Section, Fast Neutron, Slow Neutron, Neutron Interaction, Average Relative Velocity
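Numerically, the quoted bound is convenient to state in nuclear units. A quick evaluation using standard constants, reading \(e^2/mc^2\) as the classical electron radius (Gaussian units):

```python
# Evaluate the bound |K| < 30 * m*c^2 * (e^2 / (m*c^2))^3 in MeV * fm^3.
mc2_mev = 0.511          # electron rest energy, MeV
r_e_fm = 2.8179          # classical electron radius e^2/(m c^2), femtometers
K_bound = 30 * mc2_mev * r_e_fm ** 3
print(f"|K| < {K_bound:.0f} MeV fm^3")   # ~343 MeV fm^3
```

Because K multiplies a delta function, it carries units of energy times volume, which is why the bound comes out in MeV·fm³ rather than as a bare energy.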
<urn:uuid:e4510b19-fcc1-423e-8721-f69ab00cdf3c>
2.828125
233
Academic Writing
Science & Tech.
39.964286
95,642,384
If you’ve survived Shiga toxin and the after-effects of food poisoning, you may have been the innocent victim of a battle for survival between predator and prey. Bacteria that carry a virus (a bacteriophage) that packs the Shiga toxin gene (Stx) may depend on it for protection from bacterial predators like the ciliated protozoan Tetrahymena. This is small comfort if you’ve just eaten the contaminated spinach. Food poisoning victims -- as a result, for example, of consuming Shiga-packing E. coli in a contaminated bag of spinach -- have always had the cold comfort of being told that not all common bacteria make humans extremely sick, only the strains that have integrated the Shiga gene into their DNA. These bacteria can produce large amounts of the Shiga toxin and release it into the surrounding environment. Leaving sick humans aside for a moment, Gerald Koudelka, Todd Hennessey, and colleagues from the University at Buffalo in Amherst, New York, wondered what evolutionary advantage the bacteria would derive from carrying around such a prickly viral hitchhiker. They hypothesized that the Stx gene might give the bacterial host an equalizer against bacterial predators. “Humans may not be the major target of this toxin,” explains Koudelka. “Instead, they might be simply caught in the cross-fire in this ancient battle between prey and predators.” To test their hypothesis, the researchers grew Tetrahymena with an E. coli strain (EDL933) that carries the Stx gene. It worked, at least for EDL933, which grew successfully in co-cultures with Tetrahymena. In this hostile environment, it was the predator, Tetrahymena, that was killed by the bacteria’s Shiga toxin. An E. coli strain (W3110) lacking Stx did poorly with Tetrahymena as roommates. The Tetrahymena had them for lunch. The Shiga toxin kills by binding to a receptor on the surface of Tetrahymena. Adding protein subunits that block toxin binding to the protozoan predator prevented killing by Shiga toxin. Humans have the same surface receptor for Shiga toxin as do Tetrahymena, which gives biologists and produce packers a close interest in the deadly duel between Tetrahymena and Shiga-packing E. coli. The Koudelka and Hennessey labs are continuing to characterize the route of Shiga toxin entry into the cytoplasm of Tetrahymena, its mode of killing, and the ability of Tetrahymena to develop resistance to Shiga toxin. The protozoan might make a model cellular system for Shiga detoxification, which one day might relieve some of the stress around the salad bar, say Koudelka and Hennessey.
<urn:uuid:3f261162-302a-4cda-ae04-137d65e8489e>
3.53125
1,199
Content Listing
Science & Tech.
37.517904
95,642,389
Land-use change is a major threat to biodiversity and ecosystem functions. We have studied this topic in the highlands of southern Brazil, where grasslands have decreased strongly due to pine plantations and the expansion of arable land. We currently study the effects of grassland management and land-use change on biotic and abiotic components using 80 sites. Primary grasslands under both low and high management intensity differ from original grassland in vegetation composition. Secondary grasslands, both on former arable fields and on former plantations, develop divergent plant communities, and the soil conditions of secondary grassland after arable use are distinct. The decrease in species numbers in abandoned primary grasslands might be reversed by resuming management, while changes in secondary grasslands on former pine plantations require re-introduction of grassland plants. This is also true for intensively used primary grasslands and secondary ones on former arable fields. Overall, restoration of grasslands after land-use change is feasible, while conversion of land use should be reduced.

Invited by Jürgen Dengler, Plant Ecology
<urn:uuid:d41bbbb4-da96-4853-aa1b-e2f7b6456c85>
3.125
293
Knowledge Article
Science & Tech.
22.454693
95,642,395
As part of our Long-Term Marine Sediment Monitoring Program, we evaluate the health of six urban bays in Puget Sound by monitoring their sediments.

Studying critters in urban sediments

Urban bays are bays near population centers, making them subject to many pressures, including pollution. We monitor the benthic (bottom) habitat and invertebrate community in six Puget Sound urban bays. We compare results over space and time to track changing conditions in the health of Puget Sound. The most recently published reports for the six bays are available below.
<urn:uuid:7c6c7c38-bc23-48d3-b35a-fd28393a4859>
2.65625
124
Knowledge Article
Science & Tech.
16.14337
95,642,412
List of fusion experiments

Experiments directed toward developing fusion power are invariably done with dedicated machines which can be classified according to the principles they use to confine the plasma fuel and keep it hot. The major division is between magnetic confinement and inertial confinement. In magnetic confinement, the tendency of the hot plasma to expand is counteracted by the Lorentz force between currents in the plasma and magnetic fields produced by external coils. The particle densities tend to be in the range of 10¹⁸ to 10²² m⁻³, and the linear dimensions in the range of 0.1 to 10 m. The particle and energy confinement times may range from under a millisecond to over a second, but the configuration itself is often maintained through input of particles, energy, and current for times that are hundreds or thousands of times longer. Some concepts are capable of maintaining a plasma indefinitely.

In contrast, with inertial confinement, there is nothing to counteract the expansion of the plasma. The confinement time is simply the time it takes the plasma pressure to overcome the inertia of the particles, hence the name. The densities tend to be in the range of 10³¹ to 10³³ m⁻³ and the plasma radius in the range of 1 to 100 micrometers. These conditions are obtained by irradiating a millimeter-sized solid pellet with a nanosecond laser or ion pulse. The outer layer of the pellet is ablated, providing a reaction force that compresses the central 10% of the fuel by a factor of 10 or 20, to 10³ or 10⁴ times solid density. These microplasmas disperse in a time measured in nanoseconds. For a reactor, a repetition rate of several per second will be needed.

Within the field of magnetic confinement experiments, there is a basic division between toroidal and open magnetic field topologies. Generally speaking, it is easier to contain a plasma in the direction perpendicular to the field than parallel to it. Parallel confinement can be solved either by bending the field lines back on themselves into circles or, more commonly, toroidal surfaces, or by constricting the bundle of field lines at both ends, which causes some of the particles to be reflected by the mirror effect. The toroidal geometries can be further subdivided according to whether the machine itself has a toroidal geometry, i.e., a solid core through the center of the plasma. The alternative is to dispense with a solid core and rely on currents in the plasma to produce the toroidal field.

Mirror machines have advantages in a simpler geometry and a better potential for direct conversion of particle energy to electricity. They generally require higher magnetic fields than toroidal machines, but the biggest problem has turned out to be confinement. For good confinement there must be more particles moving perpendicular to the field than there are moving parallel to the field. Such a non-Maxwellian velocity distribution is, however, very difficult to maintain and energetically costly.
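The density and confinement-time figures quoted above trade off against each other in a Lawson-type product n·τ, which for deuterium–tritium fuel must reach roughly 10²⁰ s/m³ near the optimum temperature. A minimal sketch with assumed round numbers, representative of the two regimes rather than of any specific machine listed below:

```python
# Compare the two confinement strategies against a Lawson-type threshold.
# All numbers are assumed round values, not parameters of a real device.
lawson = 1.5e20   # s/m^3, rough D-T threshold near the optimum temperature

regimes = {
    "magnetic": {"n": 1e20, "tau": 1.0},    # dilute plasma, ~1 s confinement
    "inertial": {"n": 1e31, "tau": 1e-10},  # dense plasma, sub-ns burn time
}

for name, p in regimes.items():
    ntau = p["n"] * p["tau"]
    print(f"{name}: n*tau = {ntau:.1e} s/m^3 ({ntau / lawson:.1f}x threshold)")
```

The point is that both routes can reach a similar n·τ despite densities differing by more than ten orders of magnitude; they simply buy the product with opposite currencies.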
The mirrors' advantage of simple machine geometry is maintained in machines which produce compact toroids, but there are potential disadvantages for stability in not having a central conductor and there is generally less possibility to control (and thereby optimize) the magnetic geometry. Compact toroid concepts are generally less well developed than those of toroidal machines. While this does not necessarily mean that they cannot work better than mainstream concepts, the uncertainty involved is much greater. Somewhat in a class by itself is the Z-pinch, which has circular field lines. This was one of the first concepts tried, but it did not prove very successful. Furthermore, there was never a convincing concept for turning the pulsed machine requiring electrodes into a practical reactor. The dense plasma focus is a controversial and "non-mainstream" device that relies on currents in the plasma to produce a toroid. It is a pulsed device that depends on a plasma that is not in equilibrium and has the potential for direct conversion of particle energy to electricity. Experiments are ongoing to test relatively new theories to determine if the device has a future. Toroidal machines can be axially symmetric, like the tokamak and the reversed field pinch (RFP), or asymmetric, like the stellarator. The additional degree of freedom gained by giving up toroidal symmetry might ultimately be usable to produce better confinement, but the cost is complexity in the engineering, the theory, and the experimental diagnostics. Stellarators typically have a periodicity, e.g. a fivefold rotational symmetry. The RFP, despite some theoretical advantages such as a low magnetic field at the coils, has not proven very successful. - ADITYA (tokamak), Institute for Plasma Research, India - Alcator C-Mod, Massachusetts Institute of Technology, United States (1991-2016) - ASDEX Upgrade (Axialsymmetrisches Divertorexperiment), Max-Planck-Institut für Plasmaphysik, Garching, Germany - CFETR, China (planned) - COMPASS, Institute of Plasma Physics AS CR, Czech Republic, Prague - DIII-D, General Atomics, United States - EAST (Experimental Advanced Superconducting Tokamak), Hefei Institutes of Physical Science, China - GLAST, Islamabad, Pakistan - IGNITOR, Frascati, Italy - ISTTOK, Lisbon, Portugal - ITER, Cadarache, France (under construction) - JT-60, JAERI, Japan - JET (Joint European Torus), Culham, UK - KSTAR, National Fusion Research Institute, Republic of Korea - KDEMO, Republic of Korea (planned) - MAST (Mega-Ampere Spherical Tokamak), Culham, UK - Novillo, Instituto Nacional de Investigaciones Nucleares, Mexico (1983-2004) - NSTX (National Spherical Torus Experiment), Princeton Plasma Physics Laboratory, United States - QUEST (Spherical Tokamak) Kyushu University, Japan - Pegasus Toroidal Experiment, University of Wisconsin–Madison, United States - SST-1 (tokamak) (Steady State Superconducting Tokamak), Institute for Plasma Research, India (under construction) - START (Small Tight Aspect Ratio Tokamak), Culham, UK (1991–1998) - STOR-M Tokamak, Plasma Physics Laboratory (Saskatchewan), Canada - TCV (Tokamak à Configuration Variable), École Polytechnique Fédérale de Lausanne, Switzerland - TEXTOR (Tokamak Experiment for Technology Oriented Research), Forschungszentrum Jülich, Germany - TFR (Tokamak de Fontenay-aux-Roses), Commissariat à l'énergie atomique, Fontenay-aux-Roses, France - TFTR (Tokamak Fusion Test Reactor), Princeton Plasma Physics Laboratory, United States (1982–1997) - Tore Supra, Département de 
- WEST, Département de Recherches sur la Fusion Contrôlée, Cadarache, France
Stellarators
- H-1 Heliac, Research School of Physical Sciences and Engineering, Australian National University, Canberra, Australia
- HIDRA (Hybrid Illinois Device for Research and Applications), University of Illinois at Urbana–Champaign, Urbana, IL, United States
- HSX (Helically Symmetric Experiment), University of Wisconsin–Madison, United States
- LHD (Large Helical Device), National Institute for Fusion Science, Japan
- NCSX (National Compact Stellarator Experiment), Princeton Plasma Physics Laboratory, United States (phased out)
- SCR-1 (Stellarator of Costa Rica), Instituto Tecnológico de Costa Rica, Cartago, Costa Rica
- TJ-K, University of Stuttgart, Germany (from 1999 to 2005 located at the University of Kiel, Germany)
- TJ-II, National Fusion Laboratory, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (Ciemat), Spain
- Uragan-1, Uragan-2(M) and Uragan-3(M), National Science Center, Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine
- Wendelstein 7-AS, Max-Planck-Institut für Plasmaphysik, Garching, Germany (1988–2002)
- Wendelstein 7-X, Max-Planck-Institut für Plasmaphysik, Greifswald, Germany
Reversed field pinch (RFP)
- RFX (Reversed-Field eXperiment), Consorzio RFX, Padova, Italy
- MST (Madison Symmetric Torus), University of Wisconsin–Madison, United States
- T2R, Royal Institute of Technology, Stockholm, Sweden
- TPE-RX, AIST, Tsukuba, Japan
Magnetic mirrors
- Baseball I/Baseball II, Lawrence Livermore National Laboratory, Livermore, CA
- TMX, TMX-U, Lawrence Livermore National Laboratory, Livermore, CA
- MFTF, Lawrence Livermore National Laboratory, Livermore, CA
- Gas Dynamic Trap, Budker Institute of Nuclear Physics, Akademgorodok, Russia
Field-Reversed Configuration (FRC)
- C-2, Tri Alpha Energy
- C-2U, Tri Alpha Energy
- C-3 (under construction?), Tri Alpha Energy
- LSX, University of Washington
- IPA, University of Washington
- HF, University of Washington
- IPA-HF, University of Washington
Open field lines
- Trisops (two facing theta-pinch guns)
Inertial confinement: current or under-construction experimental facilities
Solid-state lasers
- National Ignition Facility (NIF) at LLNL in California, US
- Laser Mégajoule of the Commissariat à l'Énergie Atomique in Bordeaux, France (under construction)
- OMEGA EP laser at the Laboratory for Laser Energetics, Rochester, US
- Gekko XII at the Institute for Laser Engineering in Osaka, Japan
- ISKRA-4 and ISKRA-5 lasers at the Russian Federal Nuclear Center VNIIEF
- Pharos laser, 2-beam, 1 kJ/pulse (IR) Nd:glass laser at the Naval Research Laboratory
- Vulcan laser at the Central Laser Facility, Rutherford Appleton Laboratory, 2.6 kJ/pulse (IR) Nd:glass laser
- Trident laser at LANL; 3 beams total: 2 × 400 J beams, 100 ps – 1 µs; 1 beam ~100 J, 600 fs – 2 ns
- NIKE laser at the Naval Research Laboratory, krypton fluoride gas laser
- PALS, formerly the "Asterix IV", at the Academy of Sciences of the Czech Republic, 1 kJ max.
output iodine laser at 1.315 micrometre fundamental wavelength
Dismantled experimental facilities
- 4 pi laser, built during the mid-1960s at Lawrence Livermore National Laboratory
- Long path laser, built at LLNL in 1972
- The two-beam Janus laser, built at LLNL in 1975
- The two-beam Cyclops laser, built at LLNL in 1975
- The two-beam Argus laser, built at LLNL in 1976
- The 20-beam Shiva laser, built at LLNL in 1977
- The 24-beam OMEGA laser, completed in 1980 at the University of Rochester's Laboratory for Laser Energetics
- The 10-beam Nova laser at LLNL (first shot taken December 1984; final shot taken and laser dismantled in 1999)
- "Single Beam System", or simply "67" after the building number it was housed in, a 1 kJ carbon dioxide laser at Los Alamos National Laboratory
- Gemini laser, 2-beam, 2.5 kJ carbon dioxide laser at LANL
- Helios laser, 8-beam, ~10 kJ carbon dioxide laser at LANL
- Antares laser at LANL (40 kJ CO2 laser, the largest ever built; production of hot electrons in the target plasma, due to the long wavelength of the laser, resulted in poor laser/plasma energy coupling)
- Aurora laser, 96-beam, 1.3 kJ total, krypton fluoride (KrF) laser at LANL
- Sprite laser, a few joules/pulse, at the Central Laser Facility, Rutherford Appleton Laboratory
Z-pinch and pulsed-power machines
- Z Pulsed Power Facility
- ZEBRA device at the University of Nevada's Nevada Terawatt Facility
- Saturn accelerator at Sandia National Laboratories
- MAGPIE at Imperial College London
- COBRA at Cornell University
- "Lasers, Photonics, and Fusion Science: Science and Technology on a Mission". www.llnl.gov. - "CEA - Laser Mégajoule". www-lmj.cea.fr. - "RFNC-VNIIEF - Science - Laser physics". 2005-04-06. - "PALS, Laser". archive.is. 2001-06-27. - "University of Nevada, Reno. Nevada Terawatt Facility". archive.is. 2000-09-19. - "Sandia National Laboratories: National Security Programs". www.sandia.gov. - "PULSOTRON". pulsotron.org.
<urn:uuid:f91eba67-ce9e-416a-a44d-8e0376fbef37>
3.578125
3,757
Knowledge Article
Science & Tech.
41.777269
95,642,413
David M. Rubin, 1995. "Forecasting Techniques, Underlying Physics, and Applications", in Nonlinear Dynamics and Fractals: New Numerical Techniques for Sedimentary Data, edited by Gerard V. Middleton, Roy E. Plotnick, and David M. Rubin.
Science is based on the principle of repeatability: each time a system experiences similar conditions—both internal to the system and forces exerted externally on the system—we expect the system to exhibit a similar response. Forecasting exploits this principle by using the observed behavior of a system to predict behavior when similar conditions recur. Even if the equations describing a system are unknown, we can nevertheless use forecasting to learn about the system. For some purposes—such as weather forecasting, financial forecasting, or noise reduction—predicting the future is the primary goal of the forecasting. For the purpose of characterizing system dynamics, in contrast, predictions are made in an exploratory manner to learn what kinds of models perform best. For a preview of how the forecasting procedure works, we can consider the Lorenz system described in Chapter 2. Three approaches could be used to predict the future of this system. First, we could measure the initial conditions (nonlinearity of vertical temperature gradient, temperature difference between rising and falling fluid, and intensity of convection) and use the three coupled equations (Equations 2.13) to predict the values of the three variables for successive steps in time. A second approach could be employed if the governing equations were unknown, but sequential observations of the system were available. We could use the sequential observations to plot the 3-dimensional attractor (Figure 2.1), locate each predictee (a point whose three coordinates are given by the three variables that define the state of the system), identify nearby points on
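The excerpt breaks off mid-description, but the second approach it outlines (locating a predictee on a reconstructed attractor and using the observed evolution of nearby points) is easy to sketch numerically. The following is a minimal illustration rather than Rubin's actual procedure: it integrates the Lorenz equations with conventional textbook parameters, which is an assumption, since the chapter's Equations 2.13 are not reproduced here, and makes a one-step nearest-neighbour forecast.

```python
# A minimal nearest-neighbour ("analog") forecasting sketch on the Lorenz
# system. Parameters and integration scheme are conventional choices, not
# taken from the excerpt.
import numpy as np

def lorenz_series(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz equations with a crude Euler scheme."""
    xyz = np.empty((n, 3))
    xyz[0] = (1.0, 1.0, 1.0)
    for i in range(1, n):
        x, y, z = xyz[i - 1]
        xyz[i] = xyz[i - 1] + dt * np.array(
            [sigma * (y - x), x * (rho - z), x * y - beta * z]
        )
    return xyz

history = lorenz_series(5000)
predictee = history[-1]  # the current state whose future we want

# Find the k past states closest to the predictee (each has a known
# successor) and average where they went next. A real application would
# also exclude the predictee's immediate temporal neighbours.
k = 5
dists = np.linalg.norm(history[:-1] - predictee, axis=1)
nearest = np.argsort(dists)[:k]
forecast = history[nearest + 1].mean(axis=0)
print("one-step forecast:", forecast)
```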
<urn:uuid:05dd783f-0ccf-4e58-9328-3d9a8d422c7e>
3.6875
368
Academic Writing
Science & Tech.
26.619902
95,642,428
A new report from the Wildlife Conservation Society shows that no-take zones in Belize can not only help economically valuable species such as lobster, conch, and fish recover from overfishing, but may also help re-colonize nearby reef areas. The report—titled "Review of the Benefits of No-Take Zones"—is a systematic review of research literature from no-take areas around the world, written by Dr. Craig Dahlgren, a recognized expert in marine protected areas and fisheries management. It comes as signatory countries of the Convention on Biological Diversity are being required to protect at least 10 percent of their marine territory.
"Belize has been a leader in the region for establishing marine protected areas and has a world-renowned system of marine reserves, many of which form the Belize Barrier Reef Reserve System World Heritage Site," said Janet Gibson, Director of the Wildlife Conservation Society's Belize Program. "It's clear that no-take zones can help replenish the country's fisheries and biodiversity, along with the added benefits to tourism and even resilience to climate change."
WCS commissioned the report to describe the performance of no-take zones in Belize and in other countries to ultimately conserve highly diverse coral reef systems. In many coastal marine ecosystems around the world, overfishing and habitat degradation are prompting marine resource managers to find ecosystem-based solutions. The report also examines factors affecting the performance of no-take zones, such as design, size, location, and compliance with fishing regulations. According to past studies, the recovery of lobster, conch, and other exploited species within marine protected areas with no-take zones, or fully protected reserves, could take as little as 1–6 years. Full recovery of exploited species, however, could take decades.
"The report provides a valuable guide for Belize's marine managers and fishers," said Dr. Caleb McClennen, Executive Director of WCS's Marine Program. "We also hope this effort will generate and sustain stakeholder support for these important regulatory tools."
This report was made possible through the generous support of the Oak Foundation and The Summit Foundation.
John Delaney | EurekAlert!
<urn:uuid:e9d8daf3-f018-4d72-a6bc-dd8591151669>
3.34375
1,115
Content Listing
Science & Tech.
34.79794
95,642,429
A molecule or larger body is chiral if it cannot be superimposed on its mirror image (enantiomer). Electromagnetic fields may be chiral, too, with circularly polarized light (CPL) as the paradigmatic example. A recently introduced measure of the local degree of chiral dissymmetry in electromagnetic fields suggested the existence of optical modes more selective than circularly polarized plane waves in preferentially exciting single enantiomers in certain regions of space. By probing induced fluorescence intensity, we demonstrated experimentally an 11-fold enhancement over CPL in discrimination of the enantiomers of a biperylene derivative by precisely sculpted electromagnetic fields. This result, which agrees to within 15% with theoretical predictions, establishes that optical chirality is a fundamental and tunable property of light, with possible applications ranging from plasmonic sensors to absolute asymmetric synthesis.
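The "recently introduced measure of the local degree of chiral dissymmetry" referred to in the abstract is presumably the optical chirality density of Tang and Cohen; the abstract itself does not spell the formula out, so take the following as an assumption. In SI units it is commonly written

$$C \;=\; \frac{\varepsilon_0}{2}\,\mathbf{E}\cdot(\nabla\times\mathbf{E}) \;+\; \frac{1}{2\mu_0}\,\mathbf{B}\cdot(\nabla\times\mathbf{B}),$$

where E and B are the electric and magnetic fields. For a circularly polarized plane wave C takes its sign from the handedness of the light; suitably sculpted fields can locally exceed the per-photon dissymmetry of CPL in exciting one enantiomer over the other, which is the kind of enhancement the experiment reports.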
<urn:uuid:19f18d56-07f4-4f82-be8c-b376e5b8a197>
2.6875
200
Academic Writing
Science & Tech.
-12.283822
95,642,439
SEM images at different magnification of core–shell Pb@GaS nanoparticles produced by solar ablation. (Photo credit: Ben-Gurion University of the Negev)
Weizmann Institute and Ben-Gurion University scientists have produced an advance in nanomaterials that could lead in the future to new optoelectronic devices, sensors, highly efficient rechargeable batteries and novel superconducting properties. Prof. Reshef Tenne at the Rehovot institute and Prof. Jeffrey Gordon and Prof. Daniel Feuermann at BGU's Blaustein Institutes for Desert Research learned how to generate stunning new varieties of nanostructures with remarkable optical, electronic, catalytic and mechanical properties. Made up of metal cores within shells of inorganic compounds, they can be generated in the lab in procedures that are simultaneously safe, rapid, high-yield and amenable to scale-up.
The two university teams combined expertise in materials science and solar concentrator optics to do this, producing another "first" in the realization of singular nanomaterials: both closed-cage (fullerene-like) and nanotube particles with a metal (lead) core and outer shells of gallium sulfide (GaS). The article detailing their success was published in the journal Nano, which also chose it to appear on its cover page.
The use of immensely concentrated sunlight in the service of fundamentally new nanomaterials represents a new paradigm for solar energy, geared toward generating valuable new materials for human technology rather than producing heat, electricity or fuels. The unique, extensive ultra-high-temperature (approaching 3,000°C) reaction and annealing conditions created in the solar furnaces developed by Gordon and Feuermann for these experiments are conducive to these unusual formations.
FINDING THEIR WAY IN THE DARK
Imagine living in perpetual darkness in an alien world where you have to find food quickly by touch or starve for months at a time. That's the existence of blind creatures called Mexican cavefish (Astyanax mexicanus). A study by the University of Cincinnati found asymmetry in the cranial bones of this species, which lives in the limestone caverns of Mexico's Sierra del Abra Tanchipa rainforest but is also raised as a popular aquarium pet. The caverns contain deep cisterns cloaked in utter blackness. This is where the Ohio researchers traveled to find the little fish that has evolved to feast or endure famine entombed hundreds of meters below the ground.
"They have been able to invade this really extreme environment. They are exposed to darkness their entire life, yet they're able to survive and thrive," said Amanda Powers, a UC graduate student and lead author of a study on blind cavefish recently published in the journal PLOS One. "They've evolved changes to their metabolism and skull structure. They've enhanced their sensory systems. And they can survive in an environment where not many animals could," she added.
Mexican cavefish are bizarre. They are not merely blind but are born with eyes that regress until they completely disappear in adulthood. The bones of their once-round eye orbs collapse. In place of eyes, their empty sockets store fat deposits that are covered in the same silvery, nearly translucent scales as the rest of their pale, unpigmented bodies. Most fish are symmetrical; their left and right sides are virtually identical and streamlined to provide the most efficient locomotion in the water.
Cavefish start their lives with symmetrical features like other fish, but when they mature, their fragmented cranial bones harden in a visibly skewed direction, the study found. The researchers suggested that this adaptation helps the typically left-leaning cavefish navigate by using sensory organs called neuromasts to follow the contours of the cave as they swim in a perpetual counterclockwise pattern. This behavior was observed among captive cavefish, which keep moving around the edges of their tanks, while surface fish tend to stay motionless in the shadows of their tank or swim in haphazard ways.
"That was a real big piece of the puzzle for us," said biology Prof. Joshua Gross, a co-author of the study. "It's a mystery how they've been able to adapt. The amazing thing is that they're not just barely surviving – they thrive in total darkness."
Cavefish are especially valuable for evolutionary study, Gross said, because of their genetic relationship with readily abundant surface fish. Many antecedents of other cave-dwelling animals have become extinct due to natural selection or calamity
<urn:uuid:8c27be0e-4eec-4a08-8980-07914990f801>
3.5
986
Truncated
Science & Tech.
27.345448
95,642,455
Effects of juvenile coral-feeding butterflyfishes on host corals
Corals provide critical settlement habitat for a wide range of coral reef fishes, particularly corallivorous butterflyfishes, which not only settle directly into live corals but also use this coral as an exclusive food source. This study examines the consequences of chronic predation by juvenile coral-feeding butterflyfishes on their specific host corals. Juvenile butterflyfishes had high levels of site fidelity for host corals, with 88% (38/43) of small (<30 mm) juveniles of Chaetodon plebeius feeding exclusively from a single host colony. This highly concentrated predation had negative effects on the condition of these colonies, with tissue biomass declining with increasing predation intensity. Declines were consistent across both field observations and a controlled experiment. Coral tissue biomass declined by 26.7, 44.5 and 53.4% in low, medium and high predation intensity treatments, respectively. Similarly, a 41.7% difference in coral tissue biomass was observed between colonies that were naturally inhabited by juvenile butterflyfish compared to uninhabited control colonies. Total lipid content of host corals declined by 29–38% across all treatments including controls and was not related to predation intensity; rather, this decline coincided with the mass spawning of corals and the loss of lipid-rich eggs. Although the speed at which lost coral tissue is regenerated and the long-term consequences for growth and reproduction remain unknown, our findings indicate that predation by juvenile butterflyfishes represents a chronic stress to these coral colonies and will have negative energetic consequences for the corals used as settlement habitat.
Keywords: Corallivore · Chaetodontidae · Settlement · Chronic stress · Coral condition · Tissue biomass
Funding for this project was provided by the ARC CoE for Coral Reef Studies. The authors thank D. McCowan and K. Chong-seng and the staff at LIRS for logistical support and field assistance. This paper benefited from helpful comments provided by S. Wilson and two anonymous reviewers.
<urn:uuid:b98754ad-cb66-4ca9-81a9-fb954a24b1b0>
2.859375
609
Academic Writing
Science & Tech.
37.834539
95,642,468
The earth's magnetic field impacts climate: Danish study
Posted on 01/15/2009 9:01:24 AM PST by TaraP
The earth's climate has been significantly affected by the planet's magnetic field, according to a Danish study published Monday that could challenge the notion that human emissions are responsible for global warming.
"Our results show a strong correlation between the strength of the earth's magnetic field and the amount of precipitation in the tropics," one of the two Danish geophysicists behind the study, Mads Faurschou Knudsen of the geology department at Aarhus University in western Denmark, told the Videnskab journal. He and his colleague Peter Riisager, of the Geological Survey of Denmark and Greenland (GEUS), compared a reconstruction of the prehistoric magnetic field 5,000 years ago based on data drawn from stalagmites and stalactites found in China and Oman.
The results of the study, which has also been published in the US scientific journal Geology, lend support to a controversial theory published a decade ago by Danish astrophysicist Henrik Svensmark, who claimed the climate was highly influenced by galactic cosmic ray (GCR) particles penetrating the earth's atmosphere. Svensmark's theory, which pitted him against today's mainstream theorists who claim carbon dioxide (CO2) is responsible for global warming, involved a link between the earth's magnetic field and climate, since that field helps regulate the number of GCR particles that reach the earth's atmosphere.
"The only way we can explain the (geomagnetic-climate) connection is through the exact same physical mechanisms that were present in Henrik Svensmark's theory," Knudsen said. "If changes in the magnetic field, which occur independently of the earth's climate, can be linked to changes in precipitation, then it can only be explained through the magnetic field's blocking of the cosmic rays," he said.
The two scientists acknowledged that CO2 plays an important role in the changing climate, "but the climate is an incredibly complex system, and it is unlikely we have a full overview over which factors play a part and how important each is in a given circumstance," Riisager told Videnskab.
*Well by golly aren't they on top on things! a day late and a dollar short....
I just did a study at the University of My Desk, that these people are idiots and haven't a clue aside from theories on top of theories.
I did tell a liberal that CO2 was causing global warming, and that we exhale CO2, so he held his breath and slowly died, we can only hope the rest of them fall for it too!
We must immediately raise taxes for the government to buy more magnets and put all magnets under government control. An inspector will be by your house to make sure you have no more magnets on your refrigerator.
<urn:uuid:bcadb6af-4060-4d28-a2c7-2495afd47031>
3.21875
602
Comment Section
Science & Tech.
43.140948
95,642,473
XQuery - Wikipedia
Features. XQuery is a functional, side effect-free, expression-oriented programming language with a simple type system, summed up by Kilpeläinen: All XQuery expressions operate on sequences, and evaluate to sequences.
xquery cover page - W3C - World Wide Web Consortium
XQuery 3.1 is a versatile query and application development language, capable of processing the information content of diverse data sources including structured and semi-structured documents, relational databases and tree-based databases.
W3C XML Query (XQuery)
XQuery is a standardized language for combining documents, databases, Web pages and almost anything else. It is very widely implemented. It is powerful and easy to learn. XQuery is replacing proprietary middleware languages and Web Application development languages. XQuery is replacing complex Java ...
XQuery by Example
The heart and soul of XQuery relies on an intricate network of functions distributed throughout library modules, offering reusable functionality, rivaling most procedural languages.
XQuery: Search Across a Variety of XML Data: Priscilla ...
XQuery: Search Across a Variety of XML Data [Priscilla Walmsley] on Amazon.com. *FREE* shipping on qualifying offers. The W3C XQuery 3.1 standard provides a tool to search, extract, and manipulate content, whether it's in XML
Simple online XQuery tester
Simple online tool for testing XQuery expressions that supports both XQuery versions 1.0 and 2.0.
XQuery Expressions | Microsoft Docs
XQuery Expressions. 08/10/2016; 2 minutes to read; Contributors. In this article. THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
XQuery Tutorial - w3school.com.cn
The best way to explain XQuery: XQuery is to XML what SQL is to databases. XQuery was designed to query XML data. XQuery is also known as XML Query.
FunctX XQuery Functions: Hundreds of useful examples
Hundreds of reusable examples of XQuery functions from the FunctX XQuery Function Library
XQuery Basics | Microsoft Docs
Note. The feedback system for this content will be changing soon. Old comments will not be carried over. If content within a comment thread is important to you, please save a copy.
<urn:uuid:83061317-ff5b-49ea-9331-182b76580249>
2.671875
564
Content Listing
Software Dev.
43.646364
95,642,523
|Time limit||Memory limit||Submissions||Accepted||Solvers||Acceptance rate|
|1 s||512 MB||42||22||14||53.846%|
A young boy got really curious about binary strings. Such a string contains only 1s and 0s, hence the name binary. His particular interest was in those strings in which no two ones are side by side. Specifically, he wanted to know the number of strings of a certain length that consist only of ones and zeroes and contain no two consecutive ones. After solving this problem, the young boy got even more curious. Now he wants to know the number of binary strings which satisfy the following properties:
- the string contains no two consecutive ones,
- its length is between L and R inclusive, and
- its length is divisible by K.
Now can you help him find the number of strings that satisfy the above conditions? Since the number can be huge, you need to print it modulo 1 000 000 007.
The first line is an integer T (1 ≤ T ≤ 10 000), the number of tests. Each of the next T lines contains three integers L, R and K.
Print T lines; in each line print the case id and the result modulo 1 000 000 007. See the samples for more details.
2
1 10 3
1 10 5
Case 1: 115
Case 2: 157
For the first case some example strings are "101", "000", "010", "101001", "000010000", etc.
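Reading the conditions off the samples: strings avoiding "11" are counted by a Fibonacci-type recurrence, and summing those counts over the lengths in [L, R] divisible by K reproduces both sample answers (5 + 21 + 89 = 115 for K = 3; 13 + 144 = 157 for K = 5). A brute-force sketch along those lines follows; the judge's hidden limits on L and R are not stated above, so large inputs would need matrix exponentiation or fast doubling instead.

```python
# Counting binary strings with no two adjacent 1s: f(n) = f(n-1) + f(n-2),
# with f(1) = 2 and f(2) = 3. The answer sums f(n) over L <= n <= R with
# n divisible by K, modulo 1e9+7. Direct iteration is shown; the problem's
# actual constraints on L and R are unknown.
MOD = 1_000_000_007

def solve(L, R, K):
    f = [0] * (R + 1)
    f[0], f[1] = 1, 2  # f[0] = 1 (the empty string) makes f[2] = 3
    for n in range(2, R + 1):
        f[n] = (f[n - 1] + f[n - 2]) % MOD
    return sum(f[n] for n in range(L, R + 1) if n % K == 0) % MOD

print(solve(1, 10, 3))  # 115, matching "Case 1: 115"
print(solve(1, 10, 5))  # 157, matching "Case 2: 157"
```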
<urn:uuid:d1079dba-8ffc-42ee-a693-b47a9942a1e1>
3.515625
351
Tutorial
Science & Tech.
90.44535
95,642,530
Global wetlands surveyed from space
Wetlands across four continents will be the focus of study during ESA's Globwetland project. Wetlands fulfil a large number of very useful biological and hydrological functions but are increasingly under threat from human activities.
Dotted across varied regions of our planet are the waterlogged landscapes known as wetlands. Often inaccessible, these muddy areas are actually treasure houses of ecological diversity – their overall value measured in trillions of Euros. For much of the last century wetlands have been drained or otherwise degraded, but scientific understanding of their important roles in terms of biology and the water cycle has grown, spurring international efforts to preserve them. On 20 November ESA formally began a project to map wetlands from space, providing data on around 50 sites in 21 countries worldwide.
In 1971 an inter-governmental treaty established the Ramsar Convention on Wetlands, establishing a framework for the stewardship and preservation of wetlands. Today more than 1310 wetlands have been designated as Wetlands of International Importance, a total area of 111 million hectares. The Convention's 138 national signatories are obliged to report on the state of listed wetlands they are responsible for.
ESA's new €1 million Globwetland project is producing satellite-derived and geo-referenced products including inventory maps and digital elevation models of wetlands and the surrounding catchment areas. These products will aid local and national authorities in fulfilling their Ramsar obligations, and should also function as a helpful tool for wetland managers and scientific researchers.
"The Ramsar Convention on Wetlands stresses that targeted assessment and monitoring information is vital for ensuring effective management planning for wetlands, their hydrology and their catchments," explained Nick Davidson, Ramsar's Deputy Secretary General. "Yet for wetland managers and decision-makers in many countries access to sound information about wetlands and how they are changing is often a critical gap.
"By working with users at site and catchment scales the Globwetland project should contribute significantly to helping achieve effective management of these critically important ecosystems for biodiversity and human well-being."
With wetlands often made up of difficult and inaccessible terrain, satellites can help provide information on local topography, the types of wetland vegetation, land cover and use, and the dynamics of the local water cycle. In particular, radar imagery of the type provided by ESA's Envisat is able to differentiate between dry and waterlogged surfaces, and so can provide multitemporal data on how given wetlands change seasonally.
Globwetland products are being provided for a wide range of terrain types to users across four continents: North and South America, Africa, Asia and Europe, including European Russia.
In Spain the Globwetland end-user is the government's Ministry of the Environment. "We have previously used aerial photography to prepare wetland maps, but this is the first time we will use Earth Observation data," said José Ramón Picatoste Ruggeroni, Director General of Nature Conservation and Subdirector General of Biodiversity Conservation. "The areas we are most interested in are land cover and land cover analysis, topography dynamics and subsidence layers, water cycle and quality maps.
"In co-operation with the Spanish regional authorities involved in nature conservation and local wetland managers, we hope to investigate the possibility of achieving a common standard of regularly updated geoinformation to monitor ecological changes in the Spanish Ramsar sites." At the other side of the continent, wetlands comprise a third of the territory of the Russian Federation, the majority of it in the form of peatlands. Through much of the 20th century these areas were regarded as wasteland and drained for peat extraction - ending up as unproductive lands that do not contribute either economically or in terms of biodiversity, and also cause ecological problems such as dust storms and uncontrolled carbon dioxide emissions from smouldering peat fires. In Russia the Globwetland partner is the Ministry of Ecology and Land Use of Moscow region, and has a particular interest in using periodic satellite data to monitor peat fires and estimate how effective a new rewetting project is in preventing further outbreaks. While in South Africa, Globwetland partner the Department of Environmental Affairs and Tourism (DEAT) seeks to use satellite data to help fulfil its Ramsar obligations for its existing three-site wetlands inventory. The Department also plans to map a separate site, the Prince Edward Islands Special Nature Reserve, for the first time. South Africa hopes to propose the offshore Reserve for designation as a new Ramsar Wetland of International Importance, but its uncharted nature is currently an obstacle to achieving this. This Southern Ocean site is also being nominated next year as a UNESCO World Heritage Site. Why are wetlands so valuable? Studies of wetlands show they store and purify water for domestic use, recharge natural aquifers as they run low, retain nutrients in floodplains, help control flooding and shore erosion and regulate local climate. Most of all, wetlands support life in spectacular variety and numbers: freshwater wetlands alone are home to four in ten of all the world’s species, and one in eight of global animal species. An assessment of the monetary value of natural ecosystems published in Nature in 1997 arrived at a figure of 27.7 trillion Euros (33 trillion dollars), with wetland ecosystems making up €12.5 trillion ($14.9 trillion) – or 45% - of this total. Much of human civilisation has been based around river valleys and floodplains. However, global freshwater consumption rose sixfold during the 20th century, a rate more than double that of population growth. And world population is set to rise by 70 million people a year for the next two decades. Couple that trend with the threat of accelerating climate change, and biologically-productive and hydrologically-stabilising wetlands look like necessities we can ill do without. Diego Fernandez | ESA The most recent press releases about innovation >>> Die letzten 5 Focus-News des innovations-reports im Überblick: For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... 
<urn:uuid:c6b76be0-6b81-4b37-bc86-d5402717e665>
4.0625
1,767
Content Listing
Science & Tech.
23.712231
95,642,531
Salinity is the saltiness or amount of salt dissolved in a body of water (see also soil salinity). It is usually measured in g/L or g/kg (grams of salt per litre or per kilogram of solution; note that the latter is technically dimensionless). Salinity is an important factor in determining many aspects of the chemistry of natural waters and of biological processes within them, and is a thermodynamic state variable that, along with temperature and pressure, governs physical characteristics like the density and heat capacity of the water. A contour line of constant salinity is called an isohaline, or sometimes isohale.
Salinity in rivers, lakes, and the ocean is conceptually simple, but technically challenging to define and measure precisely. Conceptually the salinity is the quantity of dissolved salt content of the water. Salts are compounds like sodium chloride, magnesium sulfate, potassium nitrate, and sodium bicarbonate which dissolve into ions. The concentration of dissolved chloride ions is sometimes referred to as chlorinity. Operationally, dissolved matter is defined as that which can pass through a very fine filter (historically a filter with a pore size of 0.45 μm, but nowadays usually 0.2 μm). Salinity can be expressed in the form of a mass fraction, i.e. the mass of the dissolved material in a unit mass of solution.
Seawater typically has a mass salinity of around 35 g/kg, although lower values are typical near coasts where rivers enter the ocean. Rivers and lakes can have a wide range of salinities, from less than 0.01 g/kg to a few g/kg, although there are many places where higher salinities are found. The Dead Sea has a salinity of more than 200 g/kg. Whatever pore size is used in the definition, the resulting salinity value of a given sample of natural water will not vary by more than a few percent.
Physical oceanographers working in the abyssal ocean, however, are often concerned with precision and intercomparability of measurements by different researchers, at different times, to almost five significant digits. A bottled seawater product known as IAPSO Standard Seawater is used by oceanographers to standardize their measurements with enough precision to meet this requirement.
Measurement and definition difficulties arise because natural waters contain a complex mixture of many different elements from different sources (not all from dissolved salts) in different molecular forms. The chemical properties of some of these forms depend on temperature and pressure. Many of these forms are difficult to measure with high accuracy, and in any case complete chemical analysis is not practical when analyzing multiple samples. Different practical definitions of salinity result from different attempts to account for these problems, to different levels of precision, while still remaining reasonably easy to use.
For practical reasons salinity is usually related to the sum of masses of a subset of these dissolved chemical constituents (so-called solution salinity), rather than to the unknown mass of salts that gave rise to this composition (an exception is when artificial seawater is created). For many purposes this sum can be limited to a set of eight major ions in natural waters, although for seawater at highest precision an additional seven minor ions are also included. The major ions dominate the inorganic composition of most (but by no means all) natural waters. Exceptions include some pit lakes and waters from some hydrothermal springs.
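As a quick illustration of the "solution salinity" idea, salinity as the sum of the masses of a handful of major ions, the following sketch adds up rough, textbook-level concentrations for open-ocean seawater. The numbers are approximate illustrations, not the official reference-composition values.

```python
# Approximate major-ion composition of open-ocean seawater, in grams per
# kilogram of solution. Values are rounded textbook figures, included only
# to show that eight ions account for essentially all of the salinity.
major_ions_g_per_kg = {
    "Cl-": 19.35,
    "Na+": 10.78,
    "SO4^2-": 2.71,
    "Mg^2+": 1.28,
    "Ca^2+": 0.41,
    "K+": 0.40,
    "HCO3-": 0.11,
    "Br-": 0.07,
}

salinity = sum(major_ions_g_per_kg.values())
print(f"solution salinity ~ {salinity:.1f} g/kg")  # ~35.1 g/kg
```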
The concentrations of dissolved gases like oxygen and nitrogen are not usually included in descriptions of salinity. However, carbon dioxide gas, which when dissolved is partially converted into carbonates and bicarbonates, is often included. Silicon in the form of silicic acid, which usually appears as a neutral molecule in the pH range of most natural waters, may also be included for some purposes (e.g., when salinity/density relationships are being investigated).
The term 'salinity' is, for oceanographers, usually associated with one of a set of specific measurement techniques. As the dominant techniques evolve, so do different descriptions of salinity. Salinities were largely measured using titration-based techniques before the 1980s. Titration with silver nitrate could be used to determine the concentration of halide ions (mainly chlorine and bromine) to give a chlorinity. The chlorinity was then multiplied by a factor to account for all other constituents. The resulting 'Knudsen salinities' are expressed in units of parts per thousand (ppt or ‰).
The use of electrical conductivity measurements to estimate the ionic content of seawater led to the development of the scale called the practical salinity scale 1978 (PSS-78). Salinities measured using PSS-78 do not have units. The suffix psu or PSU (denoting practical salinity unit) is sometimes added to PSS-78 measurement values.
In 2010 a new standard for the properties of seawater called the thermodynamic equation of seawater 2010 (TEOS-10) was introduced, advocating absolute salinity as a replacement for practical salinity, and conservative temperature as a replacement for potential temperature. This standard includes a new scale called the reference composition salinity scale. Absolute salinities on this scale are expressed as a mass fraction, in grams per kilogram of solution. Salinities on this scale are determined by combining electrical conductivity measurements with other information that can account for regional changes in the composition of seawater. They can also be determined by making direct density measurements.
A sample of seawater from most locations with a chlorinity of 19.37 ppt will have a Knudsen salinity of 35.00 ppt, a PSS-78 practical salinity of about 35.0, and a TEOS-10 absolute salinity of about 35.2 g/kg. The electrical conductivity of this water at a temperature of 15 °C is 42.9 mS/cm.
Lakes and rivers
Limnologists and chemists often define salinity in terms of mass of salt per unit volume, expressed in units of mg per litre or g per litre. It is implied, although often not stated, that this value applies accurately only at some reference temperature. Values presented in this way are typically accurate to the order of 1%. Limnologists also use electrical conductivity, or "reference conductivity", as a proxy for salinity. This measurement may be corrected for temperature effects, and is usually expressed in units of μS/cm.
A river or lake water with a salinity of around 70 mg/L will typically have a specific conductivity at 25 °C of between 80 and 130 μS/cm. The actual ratio depends on the ions present. The actual conductivity usually changes by about 2% per degree Celsius, so the measured conductivity at 5 °C might only be in the range of 50–80 μS/cm. Direct density measurements are also used to estimate salinities, particularly in highly saline lakes. Sometimes density at a specific temperature is used as a proxy for salinity.
At other times an empirical salinity/density relationship developed for a particular body of water is used to estimate the salinity of samples from a measured density.
|Fresh water||Brackish water||Saline water||Brine|
|< 0.05%||0.05 – 3%||3 – 5%||> 5%|
|< 0.5 ‰||0.5 – 30 ‰||30 – 50 ‰||> 50 ‰|
Systems of classification of water bodies based upon salinity
Marine waters are those of the ocean, another term for which is euhaline seas. The salinity of euhaline seas is 30 to 35. Brackish seas or waters have salinity in the range of 0.5 to 29, and metahaline seas from 36 to 40. These waters are all regarded as thalassic because their salinity is derived from the ocean, and are defined as homoiohaline if salinity does not vary much over time (essentially constant). The table above, modified from Por (1972), follows the "Venice system" (1959).
In contrast to homoiohaline environments are certain poikilohaline environments (which may also be thalassic) in which the salinity variation is biologically significant. Poikilohaline water salinities may range anywhere from 0.5 to greater than 300. The important characteristic is that these waters tend to vary in salinity over some biologically meaningful range seasonally or on some other roughly comparable time scale. Put simply, these are bodies of water with quite variable salinity.
Highly saline water, from which salts crystallize (or are about to), is referred to as brine.
Salinity is an ecological factor of considerable importance, influencing the types of organisms that live in a body of water. It also influences the kinds of plants that will grow either in a water body, or on land fed by a water (or by a groundwater). A plant adapted to saline conditions is called a halophyte. Halophytes tolerant of residual sodium carbonate salinity are called glasswort, saltwort or barilla plants. Organisms (mostly bacteria) that can live in very salty conditions are classified as extremophiles, or halophiles specifically. An organism that can withstand a wide range of salinities is euryhaline.
Salt is expensive to remove from water, and salt content is an important factor in water use (such as potability). Increases in salinity have been observed in lakes and rivers in the United States, due to common road salt and other salt de-icers in runoff.
The degree of salinity in oceans is a driver of the world's ocean circulation, where density changes due to both salinity changes and temperature changes at the surface of the ocean produce changes in buoyancy, which cause the sinking and rising of water masses. Changes in the salinity of the oceans are thought to contribute to global changes in carbon dioxide, as carbon dioxide is less soluble in more saline waters. In addition, during glacial periods, the hydrography is such that a possible cause of reduced circulation is the production of stratified oceans, in which case it is difficult to subduct water through the thermohaline circulation.
- World Ocean Atlas 2009. nodc.noaa.gov
- Pawlowicz, R. (2013). "Key Physical Variables in the Ocean: Temperature, Salinity, and Density". Nature Education Knowledge 4 (4): 13.
- Eilers, J. M.; Sullivan, T. J.; Hurley, K. C. (1990). "The most dilute lake in the world?". Hydrobiologia 199: 1–6. doi:10.1007/BF00007827.
- Anati, D. A. (1999). "The salinity of hypersaline brines: concepts and misconceptions". Int. J. Salt Lake Res. 8: 55–70. doi:10.1007/bf02442137.
- IOC, SCOR, and IAPSO (2010). The international thermodynamic equation of seawater – 2010: Calculation and use of thermodynamic properties. Intergovernmental Oceanographic Commission, UNESCO. 196 pp.
- Wetzel, R. G. (2001). Limnology: Lake and River Ecosystems, 3rd ed. Academic Press. ISBN 978-0-12-744760-5.
- Pawlowicz, R.; Feistel, R. (2012). "Limnological applications of the Thermodynamic Equation of Seawater 2010 (TEOS-10)". Limnology and Oceanography: Methods 10 (11): 853–867. doi:10.4319/lom.2012.10.853.
- Unesco (1981). The Practical Salinity Scale 1978 and the International Equation of State of Seawater 1980. Tech. Pap. Mar. Sci., 36.
- Unesco (1981). Background papers and supporting data on the Practical Salinity Scale 1978. Tech. Pap. Mar. Sci., 37.
- Millero, F. J. (1993). "What is PSU?". Oceanography 6 (3): 67.
- Culkin, F.; Smith, N. D. (1980). "Determination of the Concentration of Potassium Chloride Solution Having the Same Electrical Conductivity, at 15 °C and Infinite Frequency, as Standard Seawater of Salinity 35.0000‰ (Chlorinity 19.37394‰)". IEEE J. Oceanic Eng. OE-5 (1): 22–23. doi:10.1109/JOE.1980.1145443.
- van Niekerk, Harold; Silberbauer, Michael; Maluleke, Mmaphefo (2014). "Geographical differences in the relationship between total dissolved solids and electrical conductivity in South African rivers". Water SA 40 (1): 133. doi:10.4314/wsa.v40i1.16.
- Por, F. D. (1972). "Hydrobiological notes on the high-salinity waters of the Sinai Peninsula". Marine Biology 14 (2): 111. doi:10.1007/BF00373210.
- Venice system (1959). The final resolution of the symposium on the classification of brackish waters. Archo Oceanogr. Limnol., 11 (suppl): 243–248.
- Dahl, E. (1956). "Ecological salinity boundaries in poikilohaline waters". Oikos 7 (1): 1–21. doi:10.2307/3564981. JSTOR 3564981.
- Kalcic, Maria; Turowski, Mark; Hall, Callie. "Stennis Space Center Salinity Drifter Project. A Collaborative Project with Hancock High School, Kiln, MS". NTRS. Retrieved 2011-06-16.
- "Hopes To Hold The Salt, And Instead Break Out Beet Juice And Beer To Keep Roads Clear"
- Mantyla, A. W. (1987). "Standard Seawater Comparisons updated". J. Phys. Ocean. 17: 543–548.
- MIT page of seawater properties, with Matlab, EES and Excel VBA library routines
- Equations and algorithms to calculate fundamental properties of sea water
- History of the salinity determination
- Practical Salinity Scale 1978
- Salinity calculator
- Lewis, E. L. (1982). "The practical salinity scale of 1978 and its antecedents". Marine Geodesy 5 (4): 350–357.
- Equations and algorithms to calculate salinity of inland waters
<urn:uuid:868fad89-07d9-4e66-9470-5d783b4e5662>
4.1875
3,107
Knowledge Article
Science & Tech.
44.10595
95,642,561
A forecasting tool reveals which cities will be affected as different portions of the ice sheet melt, say scientists. It looks at the Earth's spin and gravitational effects to predict how water will be "redistributed" globally.
"This provides, for each city, a picture of which glaciers, ice sheets, [and] ice caps are of specific importance," say the researchers. The tool has been developed by scientists at NASA's Jet Propulsion Laboratory in California. Senior scientist Dr Erik Ivins said: "As cities and countries attempt to build plans to mitigate flooding, they have to be thinking about 100 years in the future and they want to assess risk in the same way that insurance companies do."
It suggests that in London sea-level rise could be significantly affected by changes in the north-western part of the Greenland ice sheet, while for New York the area of concern is the ice sheet's entire northern and eastern portions.
Another of the scientists, Dr Eric Larour, said three key processes influenced "the sea-level fingerprint", the pattern of sea-level change around the world. The first is gravity. "These [ice sheets] are huge masses that exert an attraction on the ocean," said Dr Larour. "When the ice shrinks, that attraction diminishes - and the sea will move away from that mass."
As well as this "push-pull influence" of ice, the ground under a melting ice sheet expands vertically, having previously been compressed by the sheer weight of ice. The last factor involves the rotation of the planet itself. "You can think of the Earth as a spinning top," said Dr Larour. "As it spins it wobbles, and as masses on its surface change, that wobble also changes. That, in turn, redistributes water around the Earth."
By computing each of these factors into their calculations, the researchers were able to build their city-specific forecasting tool. "We can compute the exact sensitivity - for a specific town - of a sea level to every ice mass in the world," Dr Larour told BBC News. "This gives you an idea, for your own city, of which glaciers, ice sheets and ice caps are of specific importance."
Another of the team, Dr Surendra Adhikar, said: "People can be desperate to understand how these huge, complicated global processes impact on them. With this tool, they can see the impact on their own city."
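Functionally, the tool described above reduces to a set of per-city sensitivities: multiply each ice mass's melt by the city's sensitivity to that mass and sum. The sketch below shows only this bookkeeping; every name and number in it is hypothetical, invented for illustration rather than taken from the JPL tool.

```python
# Toy "sea-level fingerprint" bookkeeping. All sensitivities and melt
# figures below are hypothetical placeholders, not outputs of the JPL tool.
sensitivity_mm_per_gt = {
    # (city, ice mass) -> mm of local sea-level change per Gt of ice lost
    ("London", "NW Greenland"): 0.004,
    ("London", "E Greenland"): 0.001,
    ("New York", "NW Greenland"): 0.002,
    ("New York", "E Greenland"): 0.005,
}

melt_scenario_gt = {"NW Greenland": 150.0, "E Greenland": 90.0}

for city in ("London", "New York"):
    rise = sum(sensitivity_mm_per_gt[(city, source)] * melted
               for source, melted in melt_scenario_gt.items())
    print(f"{city}: {rise:.2f} mm of sea-level change in this scenario")
```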
<urn:uuid:968b9061-da7c-41ec-a3d3-bdcfaab19f84>
3.96875
555
News Article
Science & Tech.
53.531364
95,642,586
Gleason's theorem (named after Andrew M. Gleason) is a mathematical result which shows that the rule one uses to calculate probabilities in quantum physics follows logically from particular assumptions about how measurements are represented mathematically. More specifically, it proves that the Born rule for the probability of obtaining specific results for a given measurement follows naturally from the structure formed by the lattice of events in a real or complex Hilbert space. This result is of particular importance for the field of quantum logic. Furthermore, it was historically significant for the role it played in showing that broad classes of hidden-variable theories are inconsistent with quantum physics.
The theorem states:
- Theorem. Suppose H is a separable Hilbert space. A measure on H is a function f that assigns a nonnegative real number to each closed subspace of H in such a way that, if $\{A_i\}$ is a countable collection of mutually orthogonal subspaces of H, and the closed linear span of this collection is B, then $f(B) = \sum_i f(A_i)$. If the Hilbert space H has dimension at least three, then every measure f can be written in the form $f(A) = \operatorname{tr}(W P_A)$, where W is a positive semidefinite trace-class operator and $P_A$ is the orthogonal projection onto A.
The trace-class operator W can be interpreted as the density matrix of a quantum state. Effectively, the theorem says that any legitimate probability measure on the space of measurement outcomes is generated by some quantum state.
Consider a quantum system with a Hilbert space of dimension 3 or larger, and suppose that there exists some function that assigns a probability to each outcome of any possible measurement upon that system. The probability of any such outcome must be a real number between 0 and 1 inclusive, and in order to be consistent, for any individual measurement the probabilities of the different possible outcomes must add up to 1. Gleason's theorem shows that any such function—that is, any consistent assignment of probabilities to measurement outcomes—must be expressible in terms of a quantum-mechanical density operator and the Born rule. In other words, given that each quantum system is associated with a Hilbert space, and given that measurements are described by particular mathematical entities defined on that Hilbert space, both the structure of quantum state space and the rule for calculating probabilities from a quantum state then follow.
For simplicity, we can assume that the dimension of the Hilbert space is finite. A quantum-mechanical observable is a self-adjoint operator on that Hilbert space. Equivalently, we can say that a measurement is defined by an orthonormal basis, with each possible outcome of that measurement corresponding to one of the vectors comprising the basis. A density operator is a positive-semidefinite operator whose trace is equal to 1. In the language of von Weizsäcker, a density operator is a "catalogue of probabilities": for each measurement that can be defined, we can compute the probability distribution over the outcomes of that measurement from the density operator. We do so by applying the Born rule, which states that the probability of obtaining the outcome associated with a projection operator $\Pi$ is $\operatorname{tr}(W \Pi)$, where W is the density operator.
Let $f$ be a function from projection operators to the unit interval with the property that, if a set $\{\Pi_i\}$ of projection operators sums to the identity matrix—that is, if they correspond to an orthonormal basis—then $\sum_i f(\Pi_i) = 1$. Gleason's theorem is the statement that, in dimension three or higher, every such $f$ arises in this way from some density operator.
Another way of phrasing the theorem uses the terminology of quantum logic, which makes heavy use of lattice theory.
Another way of phrasing the theorem uses the terminology of quantum logic, which makes heavy use of lattice theory. Quantum logic treats quantum events (or measurement outcomes) as logical propositions, and studies the relationships and structures formed by these events, with specific emphasis on quantum measurement. In quantum logic, the logical propositions that describe events are organized into a lattice in which the distributive law, valid in classical logic, is weakened, to reflect the fact that in quantum physics, not all pairs of quantities can be measured simultaneously. The representation theorem in quantum logic shows that such a lattice is isomorphic to the lattice of subspaces of a vector space with a scalar product. It remains an open problem in quantum logic to constrain the field $K$ over which the vector space is defined. Solèr's theorem implies that, granting certain hypotheses, the field $K$ must be either the real numbers, the complex numbers, or the quaternions.

We let $A$ represent an observable with finitely many potential outcomes: the eigenvalues $a_1, \ldots, a_n$ of the Hermitian operator $A$. An "event", then, is a proposition $f_{a_i}$, which in natural language can be rendered "the outcome of measuring $A$ on the system is $a_i$". Let $H$ denote the Hilbert space associated with the physical system, and let $L$ denote the lattice of subspaces of $H$. The events generate a sublattice of $L$ which is a finite Boolean algebra, and if $n$ is the dimension of the Hilbert space, then each event is an atom of the lattice $L$.

A quantum probability function over $H$ is a real function $P$ on the atoms in $L$ that has the following properties:

- $0 \le P(a) \le 1$ for every atom $a$, and $\sum_i P(a_i) = 1$ whenever $\{a_i\}$ is a maximal set of mutually orthogonal atoms;
- $P(a_1 \vee a_2) = P(a_1) + P(a_2)$, if $a_1$ and $a_2$ are orthogonal atoms.

This means that for every lattice element $y$, the probability of obtaining $y$ as a measurement outcome is known, since $y$ may be expressed as the join of mutually orthogonal atoms under $y$, giving $P(y) = \sum_i P(a_i)$.

In this context, Gleason's theorem states:

- Given a quantum probability function $P$ over a space of dimension $\ge 3$, there is an Hermitian, non-negative operator $W$ on $H$, whose trace is unity, such that $P(a) = \langle W v_a, v_a \rangle$ for all atoms $a$, where $\langle \cdot, \cdot \rangle$ is the inner product, and $v_a$ is a unit vector along $a$. As one consequence: if some atom $b$ satisfies $P(b) = 1$, then $W$ is the projection onto the complex line spanned by $v_b$, and $P(a) = |\langle v_a, v_b \rangle|^2$ for all atoms $a$.

Gleason's theorem highlights a number of fundamental issues in quantum measurement theory. Fuchs argues that the theorem "is an extremely powerful result," because "it indicates the extent to which the Born probability rule and even the state-space structure of density operators are dependent upon the theory's other postulates." As a consequence, quantum theory is "a tighter package than one might have first thought."

The theorem is often taken to rule out the possibility of hidden variables in quantum mechanics. This is because the theorem implies that there can be no bivalent probability measures, i.e., probability measures having only the values 1 and 0. To see this, note that the mapping $v \mapsto \langle W v, v \rangle$ is continuous on the unit sphere of the Hilbert space for any density operator $W$. Since this unit sphere is connected, no continuous function on it can take only the values of 0 and 1. But, a hidden variable theory which is deterministic implies that the probability of a given outcome is always either 0 or 1: either the electron's spin is up, or it isn't (which accords with classical intuitions). Gleason's theorem therefore seems to hint that quantum theory represents a deep and fundamental departure from the classical way of looking at the world. (This has been argued to support a variety of philosophical perspectivism.)
Gleason's theorem motivated later work by John Stewart Bell, Ernst Specker and Simon Kochen that led to the result often called the Kochen–Specker theorem, which rules out a broad class of hidden-variable models. As noted above, Gleason's theorem shows that there is no bivalent probability measure over the rays of a Hilbert space (as long as the dimension of that space exceeds 2). The Kochen–Specker theorem refines this statement by constructing a specific finite subset of rays on which no bivalent probability measure can be defined.

A density operator that is a rank-1 projection is known as a pure quantum state, and all quantum states that are not pure are designated mixed. Assigning a pure state to a quantum system implies certainty about the outcome of some measurement on that system (i.e., $P(x) = 1$ for some outcome $x$). Any mixed state can be written as a convex combination of pure states, though not in a unique way. Because Gleason's theorem yields the set of all quantum states, pure and mixed, it can be taken as an argument that pure and mixed states should be treated on the same conceptual footing, rather than viewing pure states as more fundamental conceptions. To some researchers, such as Pitowsky, the result is convincing enough to conclude that quantum mechanics represents a new theory of probability. Alternatively, such approaches as relational quantum mechanics make use of Gleason's theorem as an essential step in deriving the quantum formalism from information-theoretic postulates.

Outline of Gleason's proof

Gleason's original proof proceeds in three stages. In Gleason's terminology, a frame function that is derived in the standard way—i.e., by the Born rule from a quantum state—is regular. Gleason derives a sequence of lemmas concerning when a frame function is necessarily regular, culminating in the final theorem. First, he establishes that every frame function on the Hilbert space is continuous. Then, he proves the theorem for the special case of $\mathbb{R}^3$. Finally, he shows that the general problem can be reduced to this special case.

Gleason originally proved the theorem assuming that the measurements applied to the system are of the von Neumann type, i.e., that each possible measurement corresponds to an orthonormal basis of the Hilbert space. Later, Busch, and independently Caves et al., proved an analogous result for a more general class of measurements, known as positive operator valued measures (POVMs). The proof of this result is simpler than Gleason's, and unlike the original theorem of Gleason, the generalized version using POVMs also applies to the case of a single qubit, for which the dimension of the Hilbert space equals 2. This has been interpreted as showing that the probabilities for outcomes of measurements upon a single qubit cannot be explained in terms of hidden variables, provided that the class of allowed measurements is sufficiently broad.

Gleason's theorem, in its original version, does not hold if the Hilbert space is defined over the rational numbers, i.e., if the components of vectors in the Hilbert space are restricted to be rational numbers, or complex numbers with rational parts. However, when the set of allowed measurements is the set of all POVMs, the theorem holds.
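For reference, the generalized setting can be stated compactly (a summary of the definitions above, in the article's notation): a POVM is a collection of positive semidefinite operators $\{E_i\}$ on $H$ that sum to the identity, and the Busch and Caves et al. results say that any consistent probability assignment over POVM elements again takes the Born form for some density operator $\rho$:

$$E_i \ge 0, \qquad \sum_i E_i = I, \qquad p(i) = \operatorname{tr}(\rho\, E_i).$$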
The original proof by Gleason was not constructive: one of the ideas on which it depends is the fact that every continuous function defined on a compact space attains its minimum. Because one cannot in all cases explicitly show where the minimum occurs, a proof that relies upon this principle will not be a constructive proof. However, the theorem can be reformulated in such a way that a constructive proof can be found.

Gleason's theorem can be extended to some cases where the observables of the theory form a von Neumann algebra. Specifically, an analogue of Gleason's result can be shown to hold if the algebra of observables has no direct summand that is representable as the algebra of two-by-two matrices over a commutative von Neumann algebra (i.e., no direct summand of type I2). In essence, the only barrier to proving the theorem is the fact that Gleason's original result does not hold when the Hilbert space is that of a qubit.

Notes

- Drieschner, Görnitz and von Weizsäcker (1988)
- Barnum et al. (2000); Pitowsky (2003), §1.3; Pitowsky (2006), §2.1; Kunjwal and Spekkens (2015)
- Piron (1972), §6; Drisch (1979); Horwitz et al. (1984); Razon et al. (1991); Cassinelli and Lahti (2017), §2
- Dvurecenskij (1992)
- Pitowsky (2006), §2
- Baez (2010); Cassinelli and Lahti (2017), §3
- Fuchs (2011), pp. 94–95
- Wilce (2017), §1.3
- Edwards (1979)
- Peres (1991); Mermin (1993)
- Wallace (2017)
- Wilce (2017), §1.4; Cassinelli and Lahti (2017), §2
- Hrushovski and Pitowsky (2004), §2
- Busch (2003); Caves et al. (2004); Fuchs (2011), p. 116
- Spekkens (2005)
- Caves et al. (2004), §3.D
- Richman and Bridges (1999); Hrushovski and Pitowsky (2004)
- Hamhalter (2003)

References

- Baez, John C. (2010-12-01). "Solèr's Theorem". The n-Category Café. Retrieved 2017-04-24.
- Barnum, H.; Caves, C. M.; Finkelstein, J.; Fuchs, C. A.; Schack, R. (2000-05-08). "Quantum probability from decision theory?". Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. 456 (1997): 1175–1182. Bibcode:2000RSPSA.456.1175B. doi:10.1098/rspa.2000.0557. ISSN 1364-5021.
- Busch, Paul (2003). "Quantum States and Generalized Observables: A Simple Proof of Gleason's Theorem". Physical Review Letters. 91: 120403. Bibcode:2003PhRvL..91l0403B. doi:10.1103/PhysRevLett.91.120403.
- Cassinelli, G.; Lahti, P. (2017-11-13). "Quantum mechanics: why complex Hilbert space?". Philosophical Transactions of the Royal Society A. 375 (2106): 20160393. Bibcode:2017RSPTA.37560393C. doi:10.1098/rsta.2016.0393. ISSN 1364-503X. PMID 28971945.
- Caves, Carlton M.; Fuchs, Christopher A.; Manne, Kiran K.; Renes, Joseph M. (2004). "Gleason-Type Derivations of the Quantum Probability Rule for Generalized Measurements". Foundations of Physics. 34: 193–209. Bibcode:2004FoPh...34..193C. doi:10.1023/B:FOOP.0000019581.00318.a5.
- Drieschner, M.; Görnitz, Th.; von Weizsäcker, C. F. (1988-03-01). "Reconstruction of abstract quantum theory". International Journal of Theoretical Physics. 27 (3): 289–306. Bibcode:1988IJTP...27..289D. doi:10.1007/bf00668895. ISSN 0020-7748.
- Drisch, Thomas (1979-04-01). "Generalization of Gleason's theorem". International Journal of Theoretical Physics. 18 (4): 239–243. Bibcode:1979IJTP...18..239D. doi:10.1007/bf00671760. ISSN 0020-7748.
- Dvurecenskij, Anatolij (1992). Gleason's Theorem and Its Applications. Mathematics and its Applications, Vol. 60. Dordrecht: Kluwer Acad. Publ. p. 348. ISBN 978-0-7923-1990-0.
- Edwards, David (1979). "The Mathematical Foundations of Quantum Mechanics". Synthese. 42: 1–70. doi:10.1007/BF00413704.
- Fuchs, Christopher A. (2011). Coming of Age with Quantum Information: Notes on a Paulian Idea.
Cambridge: Cambridge University Press. ISBN 978-0-521-19926-1.
- Gleason, Andrew M. (1957). "Measures on the closed subspaces of a Hilbert space". Indiana University Mathematics Journal. 6: 885–893. doi:10.1512/iumj.1957.6.56050. MR 0096113.
- Hamhalter, Jan (2003-10-31). Quantum Measure Theory. Springer Science & Business Media. ISBN 9781402017148. MR 2015280. Zbl 1038.81003.
- Horwitz, L. P.; Biedenharn, L. C. (1984). "Quaternion quantum mechanics: Second quantization and gauge fields". Annals of Physics. 157 (2): 432–488. Bibcode:1984AnPhy.157..432H. doi:10.1016/0003-4916(84)90068-x.
- Hrushovski, Ehud; Pitowsky, Itamar (2004-06-01). "Generalizations of Kochen and Specker's theorem and the effectiveness of Gleason's theorem". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 35 (2): 177–194. Bibcode:2004SHPMP..35..177H. doi:10.1016/j.shpsb.2003.10.002.
- Kunjwal, Ravi; Spekkens, Rob W. (2015-09-09). "From the Kochen–Specker theorem to noncontextuality inequalities without assuming determinism". Physical Review Letters. 115 (11): 110403. Bibcode:2015PhRvL.115k0403K. doi:10.1103/PhysRevLett.115.110403.
- Mermin, N. David (1993-07-01). "Hidden variables and the two theorems of John Bell". Reviews of Modern Physics. 65 (3): 803–815. Bibcode:1993RvMP...65..803M. doi:10.1103/RevModPhys.65.803.
- Peres, Asher (1991). "Two simple proofs of the Kochen–Specker theorem". Journal of Physics A: Mathematical and General. 24 (4): L175. Bibcode:1991JPhA...24L.175P. doi:10.1088/0305-4470/24/4/003. ISSN 0305-4470.
- Piron, C. (1972-10-01). "Survey of general quantum physics". Foundations of Physics. 2 (4): 287–314. Bibcode:1972FoPh....2..287P. doi:10.1007/bf00708413. ISSN 0015-9018.
- Pitowsky, Itamar (2003). "Betting on the outcomes of measurements: a Bayesian theory of quantum probability". Studies in History and Philosophy of Modern Physics. 34 (3): 395–414. Bibcode:2003SHPMP..34..395P. doi:10.1016/S1355-2198(03)00035-2.
- Pitowsky, Itamar (2006). "Quantum mechanics as a theory of probability". In Demopoulos, William; Pitowsky, Itamar. Physical Theory and its Interpretation: Essays in Honor of Jeffrey Bub. Springer. p. 213. Bibcode:2005quant.ph.10095P. ISBN 9781402048760.
- Razon, Aharon; Horwitz, L. P. (1991-08-01). "Projection operators and states in the tensor product of quaternion Hilbert modules". Acta Applicandae Mathematica. 24 (2): 179–194. doi:10.1007/bf00046891. ISSN 0167-8019.
- Richman, Fred; Bridges, Douglas (1999-03-10). "A Constructive Proof of Gleason's Theorem". Journal of Functional Analysis. 162 (2): 287–312. doi:10.1006/jfan.1998.3372.
- Spekkens, R. W. (2005-05-31). "Contextuality for preparations, transformations, and unsharp measurements". Physical Review A. 71 (5): 052108. Bibcode:2005PhRvA..71e2108S. doi:10.1103/PhysRevA.71.052108.
- Wallace, David (2017). "Inferential versus Dynamical Conceptions of Physics". In Lombardi, Olimpia; Fortin, Sebastian; Holik, Federico; López, Cristian. What is Quantum Information?. Cambridge: Cambridge University Press. pp. 179–206. ISBN 978-1-107-14211-4.
- Wilce, A. (2017). "Quantum Logic and Probability Theory". In The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.).
posted by Anonymous

A golf ball of mass 0.045 kg is hit off the tee at a speed of 48 m/s. The golf club was in contact with the ball for 5.4 × 10⁻³ s. (a) Find the magnitude of the impulse imparted to the golf ball. (b) Find the magnitude of the average force exerted on the ball by the golf club.
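For reference, the standard impulse–momentum relations give the answers directly (a sketch, assuming the ball starts from rest and taking the given values at face value):

$$J = \Delta p = m\,\Delta v = (0.045\ \mathrm{kg})(48\ \mathrm{m/s}) = 2.16\ \mathrm{kg\,m/s}$$

$$\bar{F} = \frac{J}{\Delta t} = \frac{2.16\ \mathrm{kg\,m/s}}{5.4 \times 10^{-3}\ \mathrm{s}} = 400\ \mathrm{N}$$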
The numerical solution of boundary value problems is indispensable in almost all fields of physics and the engineering sciences. Recent developments, e.g. the study of three-dimensional problems, lead to systems with ever larger numbers of equations. Although computers have become faster and vector computers are available, new numerical methods are required. A step in this direction was the development of fast Poisson solvers in the late sixties. At that time it seemed that the simpler the discrete elliptic problem, the faster the numerical methods that exist for it. The first multi-grid methods were also applied to Poisson's equation and show an efficiency similar to that of the direct solvers. But unlike other numerical methods, this efficiency is not lost when more involved problems are to be solved.

Keywords: Hilbert Space, Spectral Radius, Matrix Norm, Iteration Matrix, Hilbert Scale
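As a point of reference for the kind of iteration that multi-grid methods accelerate, here is a minimal sketch (not from the text; all names are hypothetical) of a weighted Jacobi sweep for the 1-D model problem $-u'' = f$ on a uniform grid with homogeneous Dirichlet boundary conditions. Multi-grid's key observation is that a few such sweeps damp oscillatory error quickly, while the remaining smooth error can be corrected cheaply on a coarser grid.

```java
// Weighted Jacobi smoothing for -u'' = f on (0,1), u(0) = u(1) = 0,
// discretized on n interior points with spacing h = 1/(n+1).
// A sketch for illustration only; a multigrid solver would alternate
// a few such sweeps with coarse-grid corrections.
public class JacobiSmoother {

    static double[] sweep(double[] u, double[] f, double h, double omega) {
        int n = u.length;
        double[] next = new double[n];
        for (int i = 0; i < n; i++) {
            double left  = (i > 0)     ? u[i - 1] : 0.0; // Dirichlet boundary
            double right = (i < n - 1) ? u[i + 1] : 0.0;
            // Plain Jacobi update from (-u[i-1] + 2u[i] - u[i+1]) / h^2 = f[i]:
            double jacobi = 0.5 * (left + right + h * h * f[i]);
            next[i] = (1.0 - omega) * u[i] + omega * jacobi; // weighted update
        }
        return next;
    }

    public static void main(String[] args) {
        int n = 63;
        double h = 1.0 / (n + 1);
        double[] u = new double[n];             // initial guess: zero
        double[] f = new double[n];
        for (int i = 0; i < n; i++) f[i] = 1.0; // constant right-hand side

        for (int k = 0; k < 100; k++)
            u = sweep(u, f, h, 2.0 / 3.0);      // omega = 2/3 is the classic choice

        System.out.printf("u at midpoint after 100 sweeps: %.5f%n", u[n / 2]);
    }
}
```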
A new study released this week found that the world's oceans rose significantly faster last century than in any of the previous 27 centuries. It also showed levels increasing even faster along the East Coast, including the shores of Massachusetts. Researchers say they expect sea level rises in our region to continue to outpace other parts of the world.

"There are reasons to think Boston and other areas of New England and along the East Coast will continue to see greater relative sea level rise than the global average," said Andrew Kemp, an assistant professor in Tufts University's Department of Earth and Ocean Sciences. "We know there are a lot of processes that are going to make it a lot worse here than other parts of the world," he added.

Kemp and another area researcher, Jeffrey Donnelly of the Woods Hole Oceanographic Institution, in recent years helped study and collect data about how sea levels have risen along the shores of Massachusetts and Connecticut. Their analysis, along with research by another local scientist, Harvard geophysics professor Jerry Mitrovica, was included in a study published Monday in the Proceedings of the National Academy of Sciences.

That new study, which made headlines, calculated that the global sea level rose by about 5.4 inches between 1900 and 2000. Scientists say the sea level rise is being caused by global warming melting the polar ice caps.

At three sites where data has been tracked in Massachusetts, the sea level rise was higher. In Barnstable, there was an 11.1-inch increase in sea levels in the 20th century, the study found. In Revere, the water rose 9.3 inches. At Wood Island, it increased by 8.8 inches. Three other sites in Connecticut all saw sea levels rise by more than 10 inches. The largest increase was 15.2 inches at a spot in New Jersey. Other parts of the Garden State as well as spots in North Carolina also saw increases larger than the global number. The findings are similar to previous research that has found sea levels to be rising faster along the East Coast than in other parts of the planet.

So why has the ocean swallowed more of the shorelines here? "The ocean isn't a bathtub. It doesn't rise equally everywhere," Kemp said.

One key factor is that we're sinking. "On the East Coast of the US, the biggest change [until about 1850] was that the land was going down rather than the sea going up," said Kemp. He explained that the massive sheet of ice that covered Canada during the Ice Age was so heavy that it caused the land below it to be pushed downward, which triggered a rise in land in other areas, including New England. As ice from the Ice Age melted, it reversed that process, and it's still not done. Kemp said land here is still sinking at a rate of about 1 millimeter annually and we can expect that to continue for at least the next 1,000 years. "The process goes on for thousands of years even after the ice melts away," he said.

Another factor responsible for sea levels rising faster here than in other parts of the world is a slowdown in the Gulf Stream in the Atlantic Ocean. The Gulf Stream creates a "hill" of water in the Atlantic Ocean that's roughly parallel to the shores of the East Coast. But the Gulf Stream has been slowing down, causing the hill to flatten out, leading to higher sea levels along the Eastern Seaboard, said Kemp.

A third factor is the melting of ice at the top of the world, including places like Greenland.
A large ice sheet has a strong gravitational pull and as it melts, it not only adds water to the ocean, but the ice sheet also loses mass. That loss of mass weakens the ice sheet's gravitational pull, causing water to flow away from it, experts say. "Boston has a lot more to fear than Greenland, which of course is counterintuitive," said Kemp.

The study released this week — which was largely in line with other recent studies on the topic — also projected that by 2100 global sea levels will rise by anywhere from another 11 inches to another 4 feet 4 inches. The increase will depend on how much greenhouse gas emissions, which scientists say cause global warming, can be reduced. Concern over rising sea levels has spurred cities around the world into action.

Senator Edward J. Markey, a Massachusetts Democrat, said more must be done to cut back harmful emissions and to prepare for the effects of global warming. "Because of climate change, Massachusetts is already experiencing sea-level rise and stronger storms that flood homes and businesses," he said in a statement. "Without action, Fenway Park could be Fenway Pond and the Back Bay would go back to being a bay," Markey added. "We need to put in place the laws and policies that dramatically cut carbon pollution and help communities respond to this growing threat."

In Massachusetts, state and local officials, utility companies, and others have taken steps to try to prevent and prepare for impacts from potential flooding, including efforts to shore up coastal areas and dams, review building codes, and measure what would happen to tunnels and other infrastructure under flood conditions.

"Hurricane Sandy was a wake-up call for leaders in Massachusetts, especially in Boston," said Ken Pruitt, executive director of the Environmental League of Massachusetts. "If it had been angled just a little differently, it would have slammed into Massachusetts instead of New Jersey and caused extensive damage." He pointed out that much of the state's population and infrastructure is situated near the ocean. "Coastal property is some of the most sought-after in the state," he said. "As sea levels rise, it puts a lot of real estate and people in harm's way."

Pruitt said his organization is one of several urging political leaders to bolster efforts to plan for the future. "It's no longer a question of if, it's a question of when, and any prudent government needs to start planning for the impacts now."
Authors: George Rajna

Now University of Rochester researchers have succeeded in creating particles with negative mass in an atomically thin semiconductor, by causing it to interact with confined light in an optical microcavity.

The device is a type of spectrometer—an optical instrument that takes light and breaks it down into components to reveal a catalogue of information about an object.

When we look at a painting, how do we know it's a genuine piece of art?

Researchers from the University of Illinois at Urbana-Champaign have demonstrated a new level of optical isolation necessary to advance on-chip optical signal processing. The technique involving light-sound interaction can be implemented in nearly any photonic foundry process and can significantly impact optical computing and communication systems.

City College of New York researchers have now demonstrated a new class of artificial media called photonic hypercrystals that can control light-matter interaction in unprecedented ways.

Experiments at the Institute of Physical Chemistry of the Polish Academy of Sciences in Warsaw prove that chemistry is also a suitable basis for storing information. The chemical bit, or 'chit,' is a simple arrangement of three droplets in contact with each other, in which oscillatory reactions occur.

Researchers at Sandia National Laboratories have developed new mathematical techniques to advance the study of molecules at the quantum level.

Correlation functions are often employed to quantify the relationships among interdependent variables or sets of data. A few years ago, two researchers proposed a property-testing problem involving Forrelation for studying the query complexity of quantum devices.

A team of researchers from Australia and the UK have developed a new theoretical framework to identify computations that occupy the 'quantum frontier'—the boundary at which problems become impossible for today's computers and can only be solved by a quantum computer.

Scientists at the University of Sussex have invented a groundbreaking new method that puts the construction of large-scale quantum computers within reach of current technology.

Physicists at the University of Bath have developed a technique to more reliably produce single photons that can be imprinted with quantum information.

Comments: 35 Pages. [v1] 2018-01-10 07:37:27
Folding of Eukaryotic Proteins Produced in Escherichia coli

The development of recombinant DNA technology has allowed production in prokaryotic hosts of proteins derived from eukaryotic organisms. The use of heterologous expression has had a large impact on the pharmaceutical industry, since it enables large-scale production of proteins which may have been difficult to isolate from a natural source. This approach also avoids the potential for contamination with disease agents associated with the isolation of a protein from human tissues. An example of this approach is the production in Escherichia coli of human growth hormone (1), used in the treatment of pituitary dwarfism.

Studies of protein structure-function are also facilitated if the target protein can be produced in a prokaryote. For these studies, heterologous expression enables rapid production of variant proteins, either by directed or random mutagenesis, in sufficient quantities for detailed characterization using biochemical and biophysical methods.

Keywords: Disulfide Bond, Inclusion Body, Guanidine Hydrochloride, Periplasmic Space, Eukaryotic Protein
Java Web Services
First Edition, March 2002
ISBN: 0-596-00269-6, 276 pages

Java Web Services shows you how to use SOAP to perform remote method calls and message passing; how to use WSDL to describe the interface to a web service or understand the interface of someone else's service; and how to use UDDI to advertise (publish) and look up services in each local or global registry. Java Web Services also discusses security issues, interoperability issues, integration with other Java enterprise technologies like EJB; the work being done on the JAXM and JAX-RPC packages, and integration with Microsoft's .NET.

Table of Contents

Who Should Read This Book?
Software and Versions
Comments and Questions
1. Welcome to Web Services
  1.1 What Are Web Services?
  1.2 Web Services Adoption Factors
  1.3 Web Services in a J2EE Environment
  1.4 What This Book Discusses
2. Inside the Composite Computing Model
  2.1 Service-Oriented Architecture
  2.2 The P2P Model
3. SOAP: The Cornerstone of Interoperability
  3.1 Simple
  3.2 Object
  3.3 Access
  3.4 Protocol
  3.5 Anatomy of a SOAP Message
  3.6 Sending and Receiving SOAP Messages
  3.7 The Apache SOAP Routing Service
  3.8 SOAP with Attachments
4. SOAP-RPC, SOAP-Faults, and Misunderstandings
  4.1 SOAP-RPC
  4.2 Error Handling with SOAP Faults
  4.3 SOAP Intermediaries and Actors
5. Web Services Description Language
  5.1 Introduction to WSDL
  5.2 Anatomy of a WSDL Document
  5.3 Best Practices, Makes Perfect
  5.4 Where Is All the Java?
6. UDDI: Universal Description, Discovery, and Integration
  6.1 UDDI Overview
  6.2 UDDI Specifications and Java-Based APIs
  6.3 Programming UDDI
  6.4 Using WSDL Definitions with UDDI
7. JAX-RPC and JAXM
  7.1 Java API for XML Messaging (JAXM)
  7.2 JAX-RPC
  7.3 SOAPElement API
  7.4 JAX-RPC Client Invocation Models
8. J2EE and Web Services
  8.1 The SOAP-J2EE Way
  8.2 The Java Web Service (JWS) Standard
9. Web Services Interoperability
  9.1 The Concept of Interoperability
  9.2 The Good, Bad, and Ugly of Interoperability
  9.3 Potential Interoperability Issues
  9.4 SOAPBuilders Interoperability
  9.5 Other Interoperability Resources
  9.6 Resources
10. Web Services Security
  10.1 Incorporating Security Within XML
  10.2 XML Digital Signatures
  10.3 XML Encryption
  10.4 SOAP Security Extensions
  10.5 Further Reading
A. Credits
Colophon

When XML was first introduced, it was hailed as the cornerstone of a new kind of technology that would permit interoperable businesses. XML provided a generic way to represent structured and typed data. Even though it has taken several years, XML standards have started to evolve and multiply. As part of this evolution, XML has been incorporated into every facet of application and enterprise development. XML is now a part of operating systems, networking protocols, programming languages, databases, application servers, web servers, and so on. XML is used everywhere.

Starting in 1998, XML was incorporated into a number of networking protocols with the intention of providing a standard way for two pieces of software to communicate with each other. The Simple Object Access Protocol (SOAP) and XML-RPC specifications blew the doors wide open on the distributed-computing environment by providing a platform-independent way for software to communicate. Even more astounding, nearly every major software company supported SOAP. The instant success of SOAP created the potential for interoperability at a level that has never been seen before. SOAP became the cornerstone protocol of the web services revolution that is going on today.

After SOAP, the Web Services Description Language (WSDL) and Universal Discovery, Description, Integration (UDDI) specifications were introduced with an equal amount of industry support. Other specifications were rapidly introduced, including ebXML, OASIS technical communities, and a variety of SOAP extensions. Some specifications were met with acclaim and others with disappointment. Either way, the industry has unified around SOAP, WSDL, and UDDI. These core technologies are required to achieve true software interoperability for the future.

It was only a matter of time before developers wanted to use web services technology. Even though web services are language and platform independent, developers still have to develop programs in programming languages. With Java and J2EE being the primary environment for enterprise development, it wasn't long before technology used to integrate web services with the J2EE platform appeared. Java programs need to be able to create, locate, and consume web services.

Many specifications and technologies have been introduced to bridge the gap between Java and web services. This book provides an introduction to both web services and the Java technologies that have been introduced to support web services.
It highlights major web services technologies and investigates the current happenings in the Java standardization community. As the web services revolution continues, it will be increasingly important for software developers to understand how web services work and when to use them. Reading this book may be one of the smartest career moves you will ever make.

Who Should Read This Book?

This book explains and demonstrates the fundamentals of web services and the Java technologies built around web services. It provides a straightforward, no-nonsense explanation of the underlying technology, Java classes and interfaces, programming models, and various implementations.

Although this book focuses on the fundamentals, it's no "For Dummies" book. Readers are expected to have an understanding of Java and XML. Web service APIs are easy to learn, but can be tedious. Before reading this book, you should be fluent in the Java language and have some practical experience developing business solutions. If you are unfamiliar with the Java language, we recommend that you pick up a copy of Learning Java by Patrick Niemeyer and Jonathan Knudsen (formerly Exploring Java) (O'Reilly). If you need a stronger background in distributed computing, we recommend Java Distributed Computing by Jim Farley (O'Reilly). If you need additional information on XML, we recommend Java and XML by Brett McLaughlin (O'Reilly) and XML in a Nutshell by Elliotte Harold and W. Scott Means (O'Reilly). Other O'Reilly books covering web services include Programming Web Services with SOAP by Doug Tidwell, James Snell, and Pavel Kulchenko and Programming Web Services with XML-RPC by Simon St. Laurent, Joe Johnston, and Edd Dumbill.

Here's how the book is structured:

This chapter defines web services; provides an overview of SOAP, WSDL, and UDDI; and discusses the different business uses for web services.

This chapter introduces the role of service-oriented architecture (SOA) and how application architecture can leverage programs developed using a SOA.

This chapter introduces the SOAP protocol and shows how it is layered on top of HTTP. It discusses the SOAP envelope, header, and body, and how SOAP with attachments works. This chapter introduces the Apache SOAP engine and the Apache SOAP client API that provides a Java interface for sending and receiving SOAP messages.

This chapter continues the SOAP discussion by describing how SOAP deals with method invocations, exception handling, and the mustUnderstand header attribute.

This chapter introduces WSDL and the steps involved in creating a web service description. It provides an overview of the different ways WSDL may be created within a Java program.

This chapter discusses the UDDI initiative and the makeup of a UDDI Business Registry. It introduces the inquiry and publishing API for UDDI and demonstrates
It also introduces the Java Community Process standardization efforts currently underway to get web services integrated tightly with J2EE. This chapter combines firsthand experience with collective research gathered from message boards, articles, and various interoperability web sites. It explores low-level issues regarding such things as datatype mapping and header processing, as well as higher-level framework issues such as interoperability with ebXML and MS Biztalk. To provide concrete examples of interoperability problems and solutions, this chapter discusses the SOAPBuilder's Interoperability Labs' effort. This chapter discusses how issues such as digital signatures, key management, and encryption present new challenges as a result of using XML and SOAP-based interoperable communications. Current specifications and implementations such as XML-Encryption, XML-Signatures, SOAP-Security, and XKMS are examined. Software and Versions This book covers many different technologies and uses a number of different examples provided by different vendors. It uses technology available from Apache, IBM, BEA, Sonic Software, Systinet, Phaos, and Sun. In the examples that come with this book, there is a comprehensive set of README documents that outline where the different pieces of software can be downloaded. The README documents also detail the installation and configuration instructions relevant to you. http://www.oreilly.com/catalog/javawebserv. The examples are organized by chapter. Given the speed at which this field is developing, one of the best strategies you can take is to look at vendors' examples. In the examples archive for this book, we've decided to include separate directions with a number of examples from Sonic and BEA's products. We will add other vendors as we get permission. If you are a vendor and would like to see your examples included in the archive, please contact us. Java Web Services Italic is used for: Filenames and pathnames Hostnames, domain names, URLs, and email addresses New terms where they are defined Constant width is used for: Code examples and fragments Class, variable, and method names, and Java keywords used within the text SQL commands, table names, and column names XML elements and tags Constant-width bold is used for emphasis in some code examples. The term JMS provider is used to refer to a vendor that implements the JMS API to provide connectivity to their enterprise messaging service. The term JMS client refers to Java components or applications that use the JMS API and a JMS provider to send and receive messages. JMS application refers to any combination of JMS clients that work together to provide a software solution. Comments and Questions Please address comments and questions concerning this book to the publisher: O'Reilly & Associates, Inc. 1005 Gravenstein Highway North Sebastopol, CA 95472 (800) 998-9938 (in the United States or Canada) (707) 829-0515 (international or local) (707) 829-0104 (fax) There is a web page for this book, which lists errata, examples, or any additional information. You can access this page at: To comment or ask technical questions about this book, send email to: For more information about books, conferences, Resource Centers, and the O'Reilly Network, see the O'Reilly web site at: Java Web Services While only two names are on the cover of this book, the credit for its development and delivery is shared by many individuals. Michael Loukides, our editor, was pivotal to the success of this book. 
Without his experience, craft, and guidance, this book would not have been possible.

Many expert technical reviewers helped ensure that the material was technically accurate and true to the spirit of the Java Message Service. Of special note are Anne Thomas Manes, Scott Hinkelman, J.P. Morganthal, Rajiv Mordani, and Perry Yin.

David Chappell would like to express sincere gratitude to Sonic Software colleagues Jaime Meritt, Colleen Evans, and Rick Kuzyk for their research, contributions, and feedback throughout the book-writing process—as well as other Sonic coworkers who provided valuable help along the way: Tim Bemis, Giovanni Boschi, Andrew Bramley, Ray Chun, Bill Cullen, David Grigglestone, Mitchell Horowitz, Sonali Kanaujia, Oriana Merlo, Andy Neumann, Mike Theroux, Bill Wood, and Perry Yin. A special thanks goes to George St. Maurice for organizing the download zip file.

Finally, the most sincere gratitude must be extended to our families. Tyler Jewell thanks his friend and lover, Hillary, for putting up with the aggressive writing timeline, dealing with his writing over the Christmas break, and not getting upset when he had to cancel their sunny vacation to finish the manuscript. David Chappell thanks his wife, Wendy, and their children Dave, Amy, and Chris, for putting up with him during this endeavor.

Chapter 1. Welcome to Web Services

The promise of web services is to enable a distributed environment in which any number of applications, or application components, can interoperate seamlessly among and between organizations in a platform-neutral, language-neutral fashion. This interoperation brings heterogeneity to the world of distributed computing once and for all.

This book defines the fundamentals of a web service. It explores the core technologies that enable web services to interoperate with one another. In addition, it describes the distributed computing model that the core web service technologies enable and how it fits into the bigger picture of integration and deployment within the J2EE platform. It also discusses interoperability between the J2EE platform and other platforms such as .NET.

1.1 What Are Web Services?

A web service is a piece of business logic, located somewhere on the Internet, that is accessible through standards-based Internet protocols such as HTTP or SMTP. Using a web service could be as simple as logging into a site or as complex as facilitating a multiorganization business negotiation.

Given this definition, several technologies used in recent years could have been classified as web service technology, but were not. These technologies include Win32 technologies, J2EE, CORBA, and CGI scripting. The major difference between these technologies and the new breed of technology that is labeled as web services is their standardization. This new breed of technology is based on standardized XML (as opposed to a proprietary binary standard) and supported globally by most major technology firms. XML provides a language-neutral way for representing data, and the global corporate support ensures that every major new software technology will have a web services strategy within the next couple of years. When combined, the software integration and interoperability possibilities for software programs leveraging the web services model are staggering.
A web service has special behavioral characteristics:

XML-based
By using XML as the data representation layer for all web services protocols and technologies that are created, these technologies can be interoperable at their core level. As a data transport, XML eliminates any networking, operating system, or platform binding that a protocol has.

Loosely coupled
A consumer of a web service is not tied to that web service directly; the web service interface can change over time without compromising the client's ability to interact with the service. A tightly coupled system implies that the client and server logic are closely tied to one another, implying that if one interface changes, the other must also be updated. Adopting a loosely coupled architecture tends to make software systems more manageable and allows simpler integration between different systems.

Coarse-grained
Object-oriented technologies such as Java expose their services through individual methods. An individual method is too fine an operation to provide any useful capability at a corporate level. Building a Java program from scratch requires the creation of several fine-grained methods that are then composed into a coarse-grained service that is consumed by either a client or another service. Businesses and the interfaces that they expose should be coarse-grained. Web services technology provides a natural way of defining coarse-grained services that access the right amount of business logic.

Ability to be synchronous or asynchronous
Synchronicity refers to the binding of the client to the execution of the service. In synchronous invocations, the client blocks and waits for the service to complete its operation before continuing. Asynchronous operations allow a client to invoke a service and then execute other functions. Asynchronous clients retrieve their result at a later point in time, while synchronous clients receive their result when the service has completed. Asynchronous capability is a key factor in enabling loosely coupled systems (the sketch after this list illustrates the difference).

Supports Remote Procedure Calls (RPCs)
Web services allow clients to invoke procedures, functions, and methods on remote objects using an XML-based protocol. Remote procedures expose input and output parameters that a web service must support. Component development through Enterprise JavaBeans (EJBs) and .NET Components has increasingly become a part of architectures and enterprise deployments over the past couple of years. Both technologies are distributed and accessible through a variety of RPC mechanisms. A web service supports RPC by providing services of its own, equivalent to those of a traditional component, or by translating incoming invocations into an invocation of an EJB or a .NET component.

Supports document exchange
One of the key advantages of XML is its generic way of representing not only data, but also complex documents. These documents can be simple, such as when representing a current address, or they can be complex, representing an entire book or RFQ. Web services support the transparent exchange of documents to facilitate business integration.
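To illustrate the synchronous/asynchronous distinction named in the list above, here is a minimal sketch in modern Java (not from the book, and anachronistic for its 2002 vintage, since java.util.concurrent arrived later; callService is a hypothetical stand-in for any remote web service invocation):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvocationStyles {

    // Hypothetical stand-in for a remote web service call.
    static String callService(String request) {
        return "response to " + request;
    }

    public static void main(String[] args) throws Exception {
        // Synchronous: the caller blocks until the service returns.
        String syncResult = callService("quote for IBM");
        System.out.println(syncResult);

        // Asynchronous: the caller keeps working and collects
        // the result at a later point in time.
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> pending = pool.submit(() -> callService("quote for SUNW"));

        doOtherWork();                     // proceeds while the call is in flight

        System.out.println(pending.get()); // rendezvous with the result
        pool.shutdown();
    }

    static void doOtherWork() {
        System.out.println("doing other work while waiting...");
    }
}
```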
1.1.1 The Major Web Services Technologies

Several technologies have been introduced under the web service rubric and many more will be introduced in coming years. In fact, the web service paradigm has grown so quickly that several competing technologies are attempting to provide the same capability. However, the web service vision of seamless worldwide business integration is not feasible unless the core technologies are supported by every major software company in the world.

Over the past two years, three primary technologies have emerged as worldwide standards that make up the core of today's web services technology. These technologies are:

Simple Object Access Protocol (SOAP)
SOAP provides a standard packaging structure for transporting XML documents over a variety of standard Internet technologies, including SMTP, HTTP, and FTP. It also defines encoding and binding standards for encoding non-XML RPC invocations in XML for transport. SOAP provides a simple structure for doing RPC and document exchange. By having a standard transport mechanism, heterogeneous clients and servers can suddenly become interoperable. .NET clients can invoke EJBs exposed through SOAP, and Java clients can invoke .NET Components exposed through SOAP.

Web Service Description Language (WSDL)
WSDL is an XML technology that describes the interface of a web service in a standardized way. WSDL standardizes how a web service represents the input and output parameters of an invocation externally, the function's structure, the nature of the invocation (in only, in/out, etc.), and the service's protocol binding. WSDL allows disparate clients to automatically understand how to interact with a web service.

Universal Description, Discovery, and Integration (UDDI)
UDDI provides a worldwide registry of web services for advertisement, discovery, and integration purposes. Business analysts and technologists use UDDI to discover available web services by searching for names, identifiers, categories, or the specifications implemented by the web service. UDDI provides a structure for representing businesses, business relationships, web services, specification metadata, and web service access points.

Individually, any one of these technologies is only evolutionary. Each provides a standard for the next step in the advancement of web services, their description, or their discovery. However, one of the big promises of web services is seamless, automatic business integration: a piece of software will discover, access, integrate, and invoke new services from unknown companies dynamically without the need for human intervention. Dynamic integration of this nature requires the combined involvement of SOAP, WSDL, and UDDI to provide a dynamic, standard infrastructure for enabling the dynamic business of tomorrow. Combined, these technologies are revolutionary because they are the first standard technologies to offer the promise of a dynamic business. In the past, technologies provided features equivalent to SOAP, WSDL, and UDDI in other languages, but they weren't supported by every major corporation and did not have a core language as flexible as XML.

Figure 1-1 provides a diagram that demonstrates the relationship between these three technologies.

Figure 1-1. Simple web service interaction

The relationship between these pieces (SOAP, WSDL, and UDDI) can be described as follows: an application acting in the role of a web services client needs to locate another application or a piece of business logic located somewhere on the network. The client queries a UDDI registry for the service either by name, category, identifier, or specification supported. Once located, the client obtains information about the location of a WSDL document from the UDDI registry. The WSDL document contains information about how to contact the web service and the format of request messages in XML schema. The client creates a SOAP message in accordance with the XML schema found in the WSDL and sends a request to the host (where the service is).
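At the wire level, the client's half of this exchange amounts to posting an XML envelope over HTTP. The sketch below is illustrative only, not an example from the book: the endpoint URL, namespace, and payload are hypothetical, and a real client would derive them from the service's WSDL. It builds a minimal SOAP 1.1 envelope by hand and sends it using nothing but JDK classes:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RawSoapClient {
    public static void main(String[] args) throws Exception {
        // A minimal, hand-built SOAP 1.1 envelope (hypothetical payload).
        String envelope =
            "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" +
            "<SOAP-ENV:Envelope xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
            "<SOAP-ENV:Body>" +
            "<m:getQuote xmlns:m=\"urn:example-quotes\">" +
            "<symbol>IBM</symbol>" +
            "</m:getQuote>" +
            "</SOAP-ENV:Body>" +
            "</SOAP-ENV:Envelope>";

        // Hypothetical endpoint; a real one would come from the service's WSDL.
        URL endpoint = new URL("http://example.com/soap/servlet/rpcrouter");
        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        conn.setRequestProperty("SOAPAction", "\"\""); // required by SOAP 1.1 over HTTP

        try (OutputStream out = conn.getOutputStream()) {
            out.write(envelope.getBytes("UTF-8"));
        }

        // The response is itself a SOAP envelope; here we simply echo it.
        try (InputStream in = conn.getInputStream()) {
            int b;
            while ((b = in.read()) != -1) System.out.write(b);
        }
        System.out.flush();
    }
}
```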
1.1.2 Service-Oriented Architecture in a Web Services Ecosystem

The web services model lends itself well to a highly distributed, service-oriented architecture (SOA). A web service may communicate with a handful of standalone processes and functions or participate in a complicated, orchestrated business process. A web service can be published, located, and invoked within the enterprise, or anywhere on the Web.

As illustrated in Figure 1-2, a service might be simple and discrete, such as an international currency conversion service. It may also be a whole suite of applications representing an entire business function, such as an auto insurance claims processor. At the mass-consumer market, web services may provide something like a restaurant finder application for a handheld device that knows who and where you are. It could also take the form of an application that participates in an exchange between a business entity and its suppliers.

Figure 1-2. Discrete components in a web services architecture

Whether a service is implemented as a fine-grained component performing a discrete operation or as an application suite exposing an entire business function, each can be considered a self-contained, self-describing, modular unit that participates in a larger ecosystem. As illustrated in Figure 1-3, a web service can access and encapsulate other web services to perform its function. For example, a portal such as www.boston.com may have a restaurant finder application that is exposed as a web service. The restaurant finder service may in turn access Mapquest as a web service in order to get directions. Eventually, these small ecosystems can all be combined into a larger, more complicated, orchestrated business macrocosm.

Figure 1-3. Web services within a larger ecosystem

A service-oriented architecture may be intended for use across the public Internet, or built strictly for private use within a single business or among a finite set of established business partners.

1.1.3 Practical Applications for Web Services

Because of the cross-platform interoperability promised by SOAP and web services, we can provide practical business solutions to problems that, until now, have only been a dream.

It's easy to see the use for simple, discrete web services such as a currency conversion service that converts dollars to Euros or a natural language translation service that converts English to French. Today, web sites such as www.xmethods.com are dedicated to hosting simple web services.

This scenario becomes more exciting when we see real companies using web services to automate and streamline their business processes. Let's use the concept of a Business-to-Consumer (B2C) portal. Web-based portals, such as those used by the travel industry, often combine the offerings of multiple companies' products and services and present them with a unified look and feel to the consumer accessing the portal. It's difficult to integrate the backend systems of each business to provide the advertised portal services reliably.

Web services technology is already being used in the integration between Dollar Rent A Car Systems, Inc. and Southwest Airlines Co. Dollar uses the Microsoft SOAP Toolkit to integrate its online booking system with Southwest Airlines Co.'s site. Dollar's booking
Dollar's booking Java Web Services system runs on a Sun Solaris server, and Southwest's site runs on a Compaq OpenVMS server. The net result (no pun intended) is that a person booking a flight on Southwest Airline's web site can reserve a car from Dollar without leaving the airline's site. The resulting savings for Dollar are a lower cost per transaction. If the booking is done online through Southwest and other airline sites, the cost per transaction is about $1.00. When booking through traditional travel agent networks, this cost can be up to $5.00 per transaction. The healthcare industry provides many more scenerios in which web services can be put to use effectively. A doctor carrying a handheld device can access your records, health history, and your preferred pharmacy using a web service. The doctor can also write you an electronic prescription and send it directly to your preferred pharmacy via another web service. If all pharmacies in the world standardized a communication protocol for accepting prescriptions, the doctor could write you a subscription for any pharmacy that you selected. The pharmacy would be able to fulfill the prescription immediately and have it prepared for you when you arrive or couriered to your residence. This model can be extended further. If the interfaces used between doctors and pharmacies are standardized using web services, a portal broker could act as an intermediary between doctors and pharmacies providing routing information for requests and better meet the needs of individual consumers. For example, a patient may register with an intermediary and specify that he wants to use generic drugs instead of expensive brand names. An intermediary can intercept the pharmaceutical web service request and transform the request into a similar one for the generic drug equivalent. The intermediary exposes web services to doctors and pharmacies (in both directions) and can handle issues such as security, privacy, and 1.2 Web Services Adoption Factors Web services are new technologies and require a paradigm shift. The adoption of web services is directly impacted by the adoption of the paradigm of web services development. A paradigm shift can happen quickly in a large wave, when suddenly the whole world is doing something differently, and no one notices how and when it happened until after the fact. An example of such a shift is the World Wide Web phenomenon that began around 1995. The combination of HTML, HTTP, and the CGI programming model is not the most efficient way to accomplish the services offered by these technologies, yet the CGI model gained widespread grassroots acceptance because it was simple and easy to adopt. The acceptance of CGI started the wave. To become a lasting paradigm shift, the model of web-based business needed broader acceptance among corporate IT and industry leaders. This acceptance was encouraged by continuing standards development within W3C and IETF and through continuing technology innovations such as ISAPI, NSAPI, Java Servlets, and application servers. Eventually, high-level architectures and infrastructures such as .NET and J2EE were created to hold everything together. Unlike the initial adoption of the Web, which was driven by grass-roots demand, the adoption of web services will be driven downward by corporations. It's still a paradigm shift, but it's likely to move more slowly. The adoption of the fax machine provides a good analogy. 
1.2 Web Services Adoption Factors

Web services are new technologies and require a paradigm shift. The adoption of web services is directly impacted by the adoption of the paradigm of web services development.

A paradigm shift can happen quickly in a large wave, when suddenly the whole world is doing something differently and no one notices how and when it happened until after the fact. An example of such a shift is the World Wide Web phenomenon that began around 1995. The combination of HTML, HTTP, and the CGI programming model is not the most efficient way to accomplish the services offered by these technologies, yet the CGI model gained widespread grassroots acceptance because it was simple and easy to adopt. The acceptance of CGI started the wave. To become a lasting paradigm shift, the model of web-based business needed broader acceptance among corporate IT and industry leaders. This acceptance was encouraged by continuing standards development within the W3C and IETF and through continuing technology innovations such as ISAPI, NSAPI, Java Servlets, and application servers. Eventually, high-level architectures and infrastructures such as .NET and J2EE were created to hold everything together.

Unlike the initial adoption of the Web, which was driven by grassroots demand, the adoption of web services will be driven downward by corporations. It's still a paradigm shift, but it's likely to move more slowly. The adoption of the fax machine provides a good analogy. Because fax machines were initially large, expensive devices, they were adopted first by large businesses as a way to communicate between their offices. As more companies bought fax machines, they became important for business-to-business communications. Today, fax machines are nearly ubiquitous—you can fax in your pizza order. We expect to see the same trend in web services. They will be used first for internal business communications before they become part of everyday life.

In all cases, though—the rapid adoption of the Web, the slower adoption of the fax machine, and the current adoption of web services—the same factor has enabled the paradigm shift: a standard communications mechanism. Whether the standard be the phone line and fax protocols, the TCP/IP stack and HTTP (together with the phone line and modem protocols), or the web service protocols, standards have been, and continue to be, the key factor in enabling the acceptance of new technologies.

1.2.1 Industry Drivers

Many tangible drivers make web services technology attractive, both from a business and a technical perspective. Classic Enterprise Application Integration (EAI) problems require applications to integrate and interoperate. Even within a particular business unit, there exist islands of IT infrastructure. For example, a Customer Relationship Management (CRM) system may have no knowledge of how to communicate with anything outside of its own application suite. It may need to communicate with a third-party Sales Order system so it can know about new customers as soon as they place their first order. Corporate acquisitions and mergers are also an issue: entire parallel business application infrastructures have to be synchronized or merged. Business partners such as suppliers and buyers need to collaborate across corporate boundaries.

These EAI and B2B problems exist in abundance and are increasing exponentially. Every newly deployed system becomes a legacy system, and any future integration with that system is an EAI or B2B problem. As the growth of integration problems and projects accelerates over the next couple of years, the standards-based approach that web services offer makes adopting web services technology an attractive option for companies that need to accomplish seamless system integration cost-effectively.

1.2.2 Lessons Learned from Recent History

Some industry analysts claim that the web services model is causing a paradigm shift that will change the way distributed computing is done forever. Others say that this model is just a fad that will go away soon. Currently, web services are still very much in the hype phase. Drawing parallels to other new technologies can teach us important lessons. Other distributed-computing models have had an opportunity to garner universal acceptance and adoption, yet they have not. While these models offer great technical advantages for solving real problems, none has achieved the massive widespread adoption that its proponents had hoped for. This is largely due to their proprietary nature and the inevitable vendor lock-in.

Though COM/DCOM had a widespread following, it could not permeate an enterprise because it was limited to Microsoft platforms. CORBA was controlled by the OMG, a neutral standards body. However, software availability was a problem; there were really only two robust vendor implementations: Iona and Visigenic. Forcing middleware infrastructure down the throats of other departments and business partners is not easy.
Both CORBA and DCOM required that a piece of the vendor-supplied middleware be installed at every node of the system. You can't always force a business partner to install a piece of your software at their site for them to be able to participate in business transactions with your systems. Even within the four walls of an organization, agreeing upon and rolling out an enterprise-wide middleware solution is a huge, concerted effort. CORBA implementations eventually achieved cross-vendor interoperability, but by then it was too late; the wave had already passed.

Crossing corporate boundaries in a secure, reliable fashion is key. If you go back only as far as 1996 to 1997, you would have seen every trade magazine talking about a world of distributed CORBA objects happily floating around on the Internet, discovering one another dynamically and communicating through firewalls. Standards were proposed for firewall communications, and IIOP was going to be adopted by all major firewall vendors as a recognizable protocol. It just never happened—partly due to the aforementioned adoption problems and partly due to the widespread adoption and general acceptance of HTTP as a standard transport.

1.2.3 Why Web Services, and Why Now?

What is so different about web services, and why are they poised for success, whereas other preceding technologies have failed to achieve widespread adoption? The answer lies in the challenge that every organization faces today: to create a homogeneous environment while still leveraging its core abilities and existing applications. IT needs a simple, platform-neutral way of communicating between applications.

For starters, XML is ideal for representing data. IT developers have had exposure to XML for a few years, and they understand what it's good for. Even though the average IT developer hasn't yet become a walking XML parser, by now most developers understand the concepts behind XML and how it can be used. Also, the base technologies of SOAP, WSDL, and UDDI are not themselves very exciting; they are just new dressings for the same old distributed-computing model. What draws people to them is the promise of what they enable. Finally, we have a platform-neutral communication protocol that provides interoperability and platform independence. A bidirectional conversation may occur between a Biztalk server and a set of hand-rolled Perl scripts. The Perl scripts may be simultaneously involved in a conversation with a set of applications held together by a J2EE-based application server or a message-oriented middleware (MOM) infrastructure. The minimum requirement is that each participant in the multiparty collaboration knows how to construct and deconstruct SOAP messages and how to send and receive HTTP transmissions.
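That minimum requirement really is minimal. As a rough sketch (ours, with a deliberately trivial, hypothetical payload and endpoint), a participant can meet it with nothing more than the JDK's standard networking classes:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class MinimalSoapParticipant {
        public static void main(String[] args) throws Exception {
            // A hand-built SOAP 1.1 envelope; the body element and its
            // namespace are invented for this example.
            String envelope =
                "<SOAP-ENV:Envelope xmlns:SOAP-ENV="
              + "\"http://schemas.xmlsoap.org/soap/envelope/\">"
              + "<SOAP-ENV:Body><m:ping xmlns:m=\"urn:demo\"/></SOAP-ENV:Body>"
              + "</SOAP-ENV:Envelope>";

            URL endpoint = new URL("http://example.com/soap"); // hypothetical
            HttpURLConnection http = (HttpURLConnection) endpoint.openConnection();
            http.setRequestMethod("POST");
            http.setDoOutput(true);
            http.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            http.setRequestProperty("SOAPAction", "\"\"");

            // Send the request...
            OutputStream out = http.getOutputStream();
            out.write(envelope.getBytes("UTF-8"));
            out.close();

            // ...and read the SOAP response (or fault) back as plain text.
            BufferedReader in = new BufferedReader(
                new InputStreamReader(http.getInputStream(), "UTF-8"));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close();
        }
    }

Real toolkits add marshalling, type mapping, and fault handling on top, but nothing about the wire protocol requires them.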
The heavy involvement of the Microsoft camp and the J2EE camp in web services is good for everyone. Its advantage is not about .NET versus J2EE or .NET versus SunONE; it's about the fact that you no longer have to let that debate or choice get in the way of achieving interoperability across the enterprise. The programming languages and associated infrastructure of each respective camp will continue to coexist and will remain "camps" for a long time to come.

1.2.3.1 Low barrier to entry means grassroots adoption

The widespread adoption of web services can be predicted by drawing parallels to the CGI phenomenon discussed earlier. Similar conditions exist today. The straightforward approach that SOAP takes—XML messages sent over HTTP—means that anyone can grab Apache SOAP and start exchanging data with the application owned by the guy down the hall. There isn't any overly complex, mysterious alchemy involving a strategic architecture group that takes two years to figure out. A corporate-wide infrastructure adoption shift doesn't need to occur for a company to start working with and benefiting from web services; companies can be selective about how and where they adopt these technologies to get the best return on their investment.

1.3 Web Services in a J2EE Environment

A common thread found throughout various web services specifications is the regular reference to web services "platforms" and "providers." A web services platform is an environment used to host one or more web services. It includes one or more SOAP servers, zero or more UDDI business registries, the security and transaction services used by the web services hosted on it, and other infrastructure provisions. A web services provider is generally considered a vendor-supplied piece of middleware infrastructure, such as an ORB, an application server, or a MOM. The provider may fully supply a platform, or it may deliver some base J2EE functionality plus some web service add-ons.

Web services are a new approach for exposing and advertising enterprise services that are hosted on a platform. These platform services still have a variety of enterprise requirements, such as security, transactions, pooling, clustering, and batch processing. Web services do not provide these infrastructure capabilities, but expose the services that do. J2EE and .NET still play an important role in the enterprise as platform definitions: they define the behavior of core capabilities that every software program needs internally. Web services, however, offer a standard way to expose the services deployed onto a platform.

An important question is, "What is being web service enabled?" If the answer is the business systems that run the enterprise, then the role of J2EE in the whole web services picture becomes abundantly clear. The core requirements of a web service enabled ecosystem are the same as they have always been—scalability, reliability, security, etc. Web services provide new ways of wrapping things at the edge of the enterprise, but if you poke your head through the web services hype, the requirements for holding together your core systems don't change that much. The implementation of the web services backbone should still be based on the J2EE architecture.

Web services and J2EE come together at multiple points. The use of each J2EE component depends on the application's requirements, just as it did prior to the advent of web services. If the nature of the web service is lightweight, quick-and-dirty processing, then use a web container and implement the web service directly as a JSP. If the solution requires a distributed component model, then use EJB. If the solution requires a highly distributed, highly reliable, loosely coupled environment, then use JMS. Naturally, any combination of these is allowed and encouraged, as illustrated in Figure 1-4.

Figure 1-4. SOA based on a J2EE backbone
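Whichever J2EE components sit behind it, the provider-side programming model can be remarkably thin. With Apache SOAP, for example, a class like the sketch below could be exposed as an RPC service by registering it in the SOAP server's deployment descriptor. The class, its method, and the fixed rate are our invention, and deployment details are omitted:

    public class ExchangeService {
        // Business logic only: no SOAP, XML, or HTTP appears here.
        // The SOAP runtime maps incoming calls onto this method and
        // marshals the float result back into the response envelope.
        public float getRate(String from, String to) {
            // A real implementation would consult a rate table or a
            // back-end system; a fixed value keeps the sketch self-contained.
            return 0.85f;
        }
    }

This is the division of labor the figure suggests: the exposed service class delegates to whatever EJB, JMS, or database resources the J2EE backbone provides.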
1.4 What This Book Discusses

This is a book on Java and web services. It is for developers who need to develop client-side or server-side programs that either use web services or are exposed as web services. Web services are built on XML and have specifications that focus on the XML nature of the technology. These specifications do not discuss how these technologies might be bound to a particular programming language such as Java. As a result, a plethora of industry technologies that facilitate Java/web service integration has been proposed.

This book introduces the basics of SOAP, WSDL, and UDDI, and then discusses some of the different Java technologies available for using each of these platforms within a Java program. The technologies we've chosen range from open source initiatives, such as the Apache project, to big-ticket commercial packages. One reason for touching on so many different packages is that the web services story is still developing; a number of important standards are still in flux, and vendors are providing their own solutions to these problems.

Of course, this book looks at the standards efforts designed to consolidate and standardize how Java programs interface with web services. Most notably, this book discusses Java/XML technologies, such as JAXR, JAX-RPC, and JAXM, and how they can be used in a web services environment. These standards are still works in progress; their status may be clarified by the time we write a second edition. In the meantime, we thought it was important (and even critical) to show you how things look. Just be aware that changes are certain between now and the time when these standards are finalized and actual products are released.

Additionally, for developers who are producing J2EE applications, this book discusses different technologies that are being proposed to web service-enable standard J2EE applications. This book discusses how a web service facade can integrate with a J2EE infrastructure. It also introduces some of the standards efforts proposed for solidifying this integration.

This book also discusses the points that developers need to understand to make their web services secure and interoperable with other web services. It provides an in-depth look at web service interoperability across multiple platforms, including the topic of .NET.
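To give a flavor of where those standards are headed, here is a sketch of a call through JAX-RPC's dynamic invocation interface. Since the specification was still in flux when this was written, treat the details loosely; the endpoint, namespace, and operation are hypothetical:

    import javax.xml.namespace.QName;
    import javax.xml.rpc.Call;
    import javax.xml.rpc.ParameterMode;
    import javax.xml.rpc.Service;
    import javax.xml.rpc.ServiceFactory;
    import javax.xml.rpc.encoding.XMLType;

    public class JaxRpcDiiClient {
        public static void main(String[] args) throws Exception {
            ServiceFactory factory = ServiceFactory.newInstance();
            Service service = factory.createService(new QName("ExchangeService"));

            // Build the call dynamically instead of from generated stubs.
            Call call = service.createCall();
            call.setTargetEndpointAddress("http://example.com/exchange"); // assumed
            call.setOperationName(new QName("urn:demo", "getRate"));      // assumed
            call.addParameter("from", XMLType.XSD_STRING, ParameterMode.IN);
            call.addParameter("to", XMLType.XSD_STRING, ParameterMode.IN);
            call.setReturnType(XMLType.XSD_FLOAT);

            Float rate = (Float) call.invoke(new Object[] { "USD", "EUR" });
            System.out.println("Rate: " + rate);
        }
    }

The shape is deliberately close to the Apache SOAP example shown earlier; what the standard adds is a vendor-neutral API for the same interaction.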
Chapter 2. Inside the Composite Computing Model

What is the "composite computing model," you ask? The most straightforward definition we've found is: an architecture that uses a distributed, discovery-based execution environment to expose and manage a collection of service-oriented software assets. A software asset is nothing more than a piece of business logic; it can be a component, a queue, or a single method that performs a useful function that you decide to expose to the outside world.

Like the client-server and n-tier computing models, the composite computing model represents the architectural principles for governing roles and responsibilities of its constituents. It was designed to solve a specialized group of business problems that have the following requirements:

- Dynamic discovery of the business logic's capabilities
- Separation between the description of the business logic's capabilities and its implementation
- The ability to quickly assemble impromptu computing communities with minimal coordinated planning efforts, installation procedures, or human intervention

The computing industry has been moving towards this model for some time now; much of the last decade has been devoted to defining and refining distributed-computing technologies that allow you to look up components on the fly, discover a component's interface at runtime, and build applications from components on an ad hoc basis, often using components in ways that weren't anticipated when they were developed. Listing the steps by which we arrived at the composite computing model is a tangent we won't follow, but remember that Java has played, and continues to play, a very important role in the development of distributed computing.

In short, the "composite computing model" is the direction in which computing has headed ever since networking became cheap and easy. Instead of trying to build larger applications on ever larger computers, we're trying to assemble smaller components that interact with one another across many computers, and possibly thousands of miles. Instead of building a large, monolithic, proprietary inventory system, for example, we're trying to build services that access inventory databases and can easily be combined as needed. Instead of forcing a customer to call customer service to find out if your plant can deliver 10,000 widgets by Wednesday (and if another plant can deliver 15,000 gadgets by Thursday), you can run an application that knows how to search for vendors that supply widgets and gadgets, figures out how to query each vendor's service interface, and says, "Yes, we can do a production run of 5,000 next week at a cost of $40,000." If you're not working on applications that do this now, you will be soon.

2.1 Service-Oriented Architecture

The composite computing model defines a vision for what computing should be. Service-oriented architecture (SOA) represents a way to achieve this vision using the set of technologies that make up the Web Services Technology Stack. This set of technologies currently consists of SOAP, WSDL, and UDDI, though other components may be added in the future.

Like other concepts associated with web services, the SOA seemed to appear almost out of nowhere in September 2000. The originator was IBM, and the introduction mechanism was an article by the IBM Web Services Architecture team on the developerWorks web site (http://www.ibm.com/developerWorks). Since then, this group has used it as a way to extol the virtues of web services to nontechnical users. The SOA is an instance of a composite computing model, and thus something that can be used to further our understanding of it.

Conceptually, the SOA model is composed of three roles performing three fundamental interactions. The components of the SOA are our good friends, web services. Each web service is made up of two parts:

- The service itself: the implementation for a web service, which can be as simple as a script file or as elaborate as a 30-year-old, industrial-strength COBOL application running on a mainframe. The key requirement is that it be on a network-accessible platform, provided by the web service provider.
- The service description: the interface for a web service. It is expressed in XML and is governed by one or more standards. This description includes the datatypes, operations, protocol bindings, and network location (i.e., the URL, etc.) for the web service's implementation. Additional documents provide categorization and other metadata to facilitate discovery.

2.1.1 Participant Roles

The SOA is based upon the interactions between three roles: a provider, a registry (or broker), and a requestor. These roles are illustrated in Figure 2-1. The interactions between these roles involve publishing information about a service, finding which services are available, and binding to those services.

Figure 2-1. The service-oriented architecture

In a typical scenario, a provider hosts the implementation for a service. Providers define service descriptions for services and publish them to a registry. A requestor then uses a registry to find service descriptions for services they are interested in using.
With the service description in hand, the requestor binds to (i.e., creates a service request for) the service. Let's take a closer look at the roles of the SOA.

2.1.1.1 Provider

In the SOA, a provider is considered the owner of a service. From a composite computing perspective, it is a software asset that others regard as a network-accessible service. In most cases, this software asset is exposed as a web service, which by definition:

- Has an XMLized description
- Has a concrete implementation that encapsulates its behavior

Almost any piece of logic can be exposed as a service in an SOA—from a single component to a full-blown, mainframe-based business process, such as loan processing. Likewise, how the service is exposed is up to the provider; you can access it through SOAP over HTTP, through a JMS message queue, or via other technologies (such as SMTP). The service may implement a request/response protocol, or it may just receive and deliver messages.

As is often the case in modern software development, some fundamental ambiguities exist in basic terms such as "provider." Does it mean the organization providing the service, the software itself, or the computer (or computers) on which the software runs? The meaning is almost always clear from the context.

2.1.1.2 Registry (broker)

A registry, or a broker, manages repositories of information on providers and their software assets. This information includes:

- Business data such as name, description, and contact information ("white pages" data)
- Data describing policies, business processes, and software bindings—in other words, information needed to make use of the service ("green pages" data)

A service broker usually offers intelligent search capabilities and business classification or taxonomy data (called "yellow pages" data). From a composite computing perspective, a broker represents a searchable registry of service descriptions, published by providers. During the development cycle for a web service, a programmer (or tool) can use the information in registries to create static bindings to services. At runtime, an application can tap into a registry (local or remote) to obtain service descriptions and create dynamic bindings.

Registries often sound abstract, but they solve a very concrete problem. They allow you (or, more properly, your software) to ask questions such as, "Who sells widgets?" Once you have an answer to that question, you can ask more questions, such as, "How do I interact with their service to find prices, place orders, etc.?" In short, a registry lets you look up a service and then find its programmatic interface.

2.1.1.3 Requestor

In the service-oriented architecture, a requestor is a business that discovers and invokes software assets provided by one or more providers. From a composite computing perspective, a requestor is an application that looks for and initiates an interaction with a provider. This role could be played by:

- A person using a web browser
- A computational entity without a user interface, such as another web service

Again, there's a lot of ambiguity: is a requestor a person, an organization, or a piece of software? If it's software, is it a browser of some sort, or is it another kind of software? Again, the answer depends on the context.

2.1.2 Participant Interactions

Having defined the roles that participants in web services can play, we'll look in more detail at how they interact. There are three fundamental types of interaction: publishing, service location, and binding.
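Before walking through each interaction in turn, here is what the middle one, service location, looks like from the requestor's side in Java. The sketch uses JAXR, one of the APIs discussed in this book, to ask a registry the "Who sells widgets?" question. The queryManagerURL property name is the standard JAXR connection property, but the registry URL and the organization name pattern are hypothetical:

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.Iterator;
    import java.util.Properties;
    import javax.xml.registry.BulkResponse;
    import javax.xml.registry.BusinessQueryManager;
    import javax.xml.registry.Connection;
    import javax.xml.registry.ConnectionFactory;
    import javax.xml.registry.RegistryService;
    import javax.xml.registry.infomodel.Organization;

    public class WidgetFinder {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Hypothetical registry; a real requestor would point this at a
            // public or private UDDI/ebXML registry's query URL.
            props.setProperty("javax.xml.registry.queryManagerURL",
                              "http://example.com/registry/query");

            ConnectionFactory factory = ConnectionFactory.newInstance();
            factory.setProperties(props);
            Connection connection = factory.createConnection();

            RegistryService registry = connection.getRegistryService();
            BusinessQueryManager bqm = registry.getBusinessQueryManager();

            // "Who sells widgets?" -- match organizations by name pattern.
            Collection namePatterns = new ArrayList();
            namePatterns.add("%Widget%");
            BulkResponse response =
                bqm.findOrganizations(null, namePatterns, null, null, null, null);

            for (Iterator it = response.getCollection().iterator(); it.hasNext();) {
                Organization org = (Organization) it.next();
                System.out.println(org.getName().getValue());
            }
            connection.close();
        }
    }

Publishing is the mirror image of this: the provider obtains the registry's life-cycle manager and saves an organization and its service descriptions instead of querying for them.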
2.1.2.1 Publishing

Providers publish information (or metadata) about services to a registry. These providers are usually standards organizations, software vendors, and developers. According to IBM's Web Services Conceptual Architecture document, several different mechanisms are used to publish service descriptions:

Direct publishing
The service requestor retrieves the service description directly from the service provider, using email, FTP, or a distribution CD. Here, the service provider delivers the service description and simultaneously makes the service available to a requestor. There is no registry as such; the requestor is responsible for locating services and retrieving their descriptions.

HTTP GET request
This mechanism is currently used at http://www.xmethods.com/, a public repository of web services that developers can use to test their wares. The service requestor retrieves the service description directly from the service provider by using an HTTP GET request. This model has a registry (the public web repository), though only in a limited sense.

Dynamic discovery
This mechanism uses local and public registries to store and retrieve service descriptions programmatically. In the web services world, the most frequently used registry is UDDI, though others exist (for example, the ebXML Registry). Contextually, the service provider is an application that uses a specialized set of APIs to publish the service description.

The direct publishing method is a historical artifact and of little interest to us. Publishing with a GET request is more interesting, particularly since http://www.xmethods.com/ has been on the forefront of web services development. However, we see this means of publishing as transitional—a temporary tool to get us from direct publishing to dynamic discovery. (We suspect the developers of XMethods would agree.) Dynamic discovery (see Figure 2-2) is the most interesting and versatile publishing model. UDDI and other protocols designed to support dynamic discovery are at the center of the web services architecture.

Figure 2-2. Publishing for dynamic discovery

2.1.2.2 Service location (finding)

Given that registries or brokers publish services, how do you locate services that you wish to use? Requestors find services using a registry or broker. Service location is closely associated with dynamic discovery. In this context, the requestor is an application that uses a specialized set of APIs to query a public or private registry for service descriptions. These queries are formatted in a well-defined, standard XML format and transmitted using an XML messaging format, such as SOAP or XML-RPC. The criteria used to find a service include the quality of service.
A View from Emerging Technology from the arXiv

First Quantum-Enhanced Images of a Living Cell

Biologists have used "squeezed light" to create the first images of a living cell that beat the diffraction limit.

Many of the most important but least understood processes of life occur inside cells at the sub-nanometer scale, beyond the realm of ordinary optical imaging. That makes it hard for biologists to see these processes at work or to understand the behaviour of the molecular machinery behind them.

Today, Michael Taylor at the University of Queensland in Australia and a few pals reveal a new way to create optical images of cells that dramatically increases their resolution beyond the conventional diffraction limit. Their trick relies on a peculiar quantum phenomenon called "squeezed light", which has allowed them to resolve spatial structures inside a living cell at a resolution of 10 nanometres; that's a 14 per cent finer resolution than is possible with conventional techniques.

Conventional optical imaging is limited by the process of diffraction, the way light spreads out when it passes an object. The amount of diffraction depends, in part, on natural uncertainties in the position of the photons. Physicists think of this uncertainty as quantum noise. In recent years, however, they've worked out how to minimise the amount of quantum noise by carefully manipulating the way photons are created. They call the resulting photons "squeezed light", and there has been no little excitement over their potential to beat the conventional diffraction limit in all kinds of applications.

One obvious use is in cellular imaging, where squeezed light offers biologists a clear advantage for exploring cellular processes. Various groups have used squeezed light to make pioneering measurements inside cells. But the process of imaging to reveal spatial variations in the structure of a cell has so far eluded them.

That all changes with the work of Taylor and his buddies. These guys have used squeezed light to monitor the movement of naturally occurring nanoparticles inside a cell as they are buffeted by the molecular machines that surround them. They say this is the first time that anybody has created quantum-enhanced images of a living cell. "We report the first demonstration of sub-diffraction limited quantum imaging in biology," they say.

This kind of imaging should lead to important new insights. The movement of these nanoparticles is a kind of Brownian motion, but biologists have known for some time that the nanoparticles don't diffuse in a standard way. Instead, their diffusion is limited by the various molecular processes around them. So mapping out the way this diffusion process differs in one part of the cell compared to another could reveal important insights into what's going on. This kind of map is exactly what Taylor and co have produced.

(One criticism of this work is whether it can be truly described as "imaging". An analogy would be spending a few minutes in a darkened room, wandering around with your arms outstretched and then saying you've imaged it. Mapping or sampling might be better terms.)

Squeezed light offers another benefit. Instead of increasing the resolution of images, physicists can use it to achieve the same resolution as ordinary light but at much lower intensity. That's important because light itself can damage the molecular machinery inside cells or change the way it works.
Taylor and co say they can match conventional image resolution but with a 42 per cent reduction in light intensity.

Whichever method biologists choose, squeezed light imaging is set to make a significant impact on the way they view and understand living cells at work. Expect to see a lot more about it.

Ref: arxiv.org/abs/1305.1353: Sub-diffraction Limited Quantum Imaging of a Living Cell
The world speed record for protein folding apparently goes to an unusually tiny specimen that traces its origins to Gila monster spit.

University of Florida researchers have discovered that the Tryptophan cage protein, derived from the saliva of the Gila monster lizard, zooms to its folded state in four millionths of a second - about four times faster than any protein previously measured. The finding adds to the emerging knowledge about how proteins fold, information that could lead to better drugs and cures for diseases tied to misshapen proteins, such as Alzheimer's, Parkinson's, and mad cow disease. So reports a team of University of Florida researchers in a paper published this week in the online edition of the Journal of the American Chemical Society.

Though significant mainly from a purely scientific standpoint, the finding eventually may be important to researchers' understanding of the underlying causes behind a host of maladies. Proteins acquire their three-dimensional, blob-like shapes when the amino acids they are composed of spontaneously fold into place. The process has become a hot topic in science in recent years because the shape of proteins is directly tied to their function in the cells of animals and people. Misshapen proteins, or proteins whose amino acids form an even slightly different configuration than normal proteins, have been connected to Alzheimer's disease and a range of other serious disorders.

Stephen Hagen | EurekAlert!
In its bulk liquid form, water is a disordered medium that flows very readily. When most substances are compressed into a solid, their density increases. But water is different; when it becomes ice, it becomes less dense. For this reason, many scientists reasoned that when water is compressed (as it is in a nanometer-sized channel), it should maintain its liquid properties and shouldn't exhibit properties that are akin to a solid. Several earlier studies came to that very conclusion - that water confined in a nano-space behaves just like water does in the macro world. Consequently, a number of scientists considered the case to be closed.

But when Georgia Tech experimental physicist Elisa Riedo and her team directly measured the force of pure water in a nanometer-sized channel, they found evidence suggesting that water was organized into layers. Riedo conducted these measurements by recording the force placed on the silicon tip of an atomic force microscope as it compressed water. The water was confined in a nanoscale thin film on top of a solid surface.

"Since water usually has a low viscosity, the force you would expect to feel as you compress it should be very small," said Riedo, assistant professor in Georgia Tech's School of Physics. "But when we did the experiment, we found that when the distance between the tip and the surface is about one nanometer, we feel a repulsive force by the water that is much stronger than what we would expect."

As the tip compresses the water even more, the repulsive force oscillates, indicating that the water molecules are forming layers. As the tip continues to increase its pressure on a layer, the layer collapses and the water flows out horizontally.

"In effect, the confined water film behaves effectively like a solid in the vertical direction by forming layers parallel to the confining tip and surface, while maintaining its liquidity in the horizontal direction where it can flow out - resembling some phases of liquid crystals," said Uzi Landman, director of the Center for Computational Materials Science, Regents' and Institute professor, and Callaway Chair of Physics at Georgia Tech. A theoretical physicist, Landman conducted the first-ever computer simulations of these forces for tip-confined water films and found good correspondence between his team's theoretical predictions and the experiments.

So why did Riedo and Landman's results differ from their peers'? According to Landman, most previous studies on confined water were limited by the technology of the time and could not directly measure the behavior in the last two nanometers. Instead, they had to measure other properties and infer the forces acting in films of one nanometer thickness or less. "If you want force, it is preferable to measure it," he said. "This is the first experiment to directly measure the force and it's the first simulation done of these forces. The fact that we have direct measurements married with theoretical results is rather conclusive."

Riedo and Landman conducted their experiments in several different environments. They found that the layering effect was more pronounced when water was placed on top of hydrophilic surfaces that allow water to wet the solid surface, such as glass. When the water was confined by hydrophobic surfaces where water tends to bead up, like graphite, the effect was still present, but less pronounced.
While Riedo's team was measuring the vertical force exerted on the tip by the confined water film, they also measured the film's viscosity by measuring the lateral force. They found that when water was placed on a hydrophilic surface, the viscosity began to increase dramatically as the thickness of the confined film reached the 1.5 nanometer range. As they continued to compress the water and measure the lateral forces, the viscosity increased by a factor of 1,000 to 10,000. On hydrophobic surfaces, they did not see such an increase in viscosity. The results of the molecular dynamics simulations support these findings, showing a dramatically decreased mobility for sub-nanometer thick water films under hydrophilic confinement.

"Water is a wonderful lubricant," said Riedo, "but it flows too easily for many applications. At the one nanometer scale, water is a viscous fluid and could be a much better lubricant."

Understanding the properties of water at this scale could also be important for biological and pharmaceutical research, especially in understanding processes that depend on hydrated ionic transport through nanoscale channels and pores. Riedo and Landman's next steps are to introduce impurities in the water to study how that affects its properties.

David Terraso | EurekAlert!
I need your assistance in putting together a response to the following earth science essay question: The gravitational pull of the moon, earth, and sun; the rotation, revolution, and tilt of the earth; the Coriolis effect; planetary and local winds; the ocean; and evaporation, humidity, and aerosols all affect our weather. How does each of these affect our weather?

IT IS THE GRAVITATIONAL EFFECT OF THE EARTH THAT CREATES THE COCOON OF GASES

In the darkest regions of deep space, the temperature is a chilly -450° Fahrenheit. Closer to our Sun, temperatures reach thousands of degrees Fahrenheit. What makes Earth's climate so moderate? Separating Earth from the extreme and inhospitable climate of space is a 500-mile-thick cocoon of gases called the atmosphere. All planets have an atmosphere, a layer of gases that surrounds them. The Sun's atmosphere is made up of hydrogen, while Earth's is made up primarily of nitrogen and oxygen. Carbon dioxide, ozone, and other gases are also present. These gases keep our planet warm and protect us from the direct effects of the Sun's radiation. Without this regulation, Earth could not sustain life.

THE PULL OF THE MOON, EARTH, AND SUN AND THE ROTATION AND TILT CAUSE THE COMPLEX LAYERS THAT GENERATE AND AFFECT WEATHER

The atmosphere is made up of several layers: the troposphere, stratosphere, mesosphere, ionosphere, and exosphere. Closest to Earth is the troposphere. Most of the clouds you see in the sky are found in the troposphere, and this is the layer of the atmosphere we associate with weather. Extending up to 10 miles above Earth's surface, the troposphere contains a variety of gases: water vapor, carbon dioxide, methane, nitrous oxide, and others. These gases help retain heat, a portion of which is then radiated back to warm the surface of Earth.

Above the troposphere is the stratosphere, which includes the ozone layer. The stratosphere extends from about 10 to 30 miles above the surface of Earth. Ozone molecules, which are concentrated in this layer, absorb ultraviolet radiation from the Sun and protect us from its harmful effects. Thirty to 50 miles above the surface is the mesosphere, the coldest part of the atmosphere. Above the mesosphere, in a layer called the ionosphere (also called the thermosphere), things start to heat up. Temperatures in the ionosphere, which extends about 50 to 180 miles from the surface of Earth, can reach up to several thousand degrees Fahrenheit. Beyond the ionosphere is the exosphere, which extends to roughly 500 miles above the surface of Earth. This is the outermost layer of the atmosphere, the transition zone into space.

THE AEROSOLS CAUSE THE GREENHOUSE EFFECT

The gases in the atmosphere that help retain heat are called greenhouse gases. These gases, primarily carbon dioxide (CO2), absorb heat instead of allowing it to escape into space. This "greenhouse effect" makes the planet a hospitable place. However, greenhouse gases can have negative effects, too. Human activity has increased the amount of CO2 in the atmosphere. Since the 1800s, industrialized societies have burned fossil fuels such as coal, oil, and natural gas; these processes all give off CO2. During the past 25 years, the amount of CO2 in the atmosphere has increased by about 8 percent. With more CO2 in the atmosphere, more heat is absorbed and retained, causing global temperatures to rise.
NATURAL OCCURRENCES LIKE EVAPORATION FROM THE SEA, THE HUMIDITY IN THE ATMOSPHERE, AND THE LOCAL WINDS CAN HAVE A STRONG EFFECT ON THE ATMOSPHERE

Some scientists project that by the next century, CO2 levels in the atmosphere could be twice what they are today, causing a global temperature increase of about 3 degrees. Three degrees may not seem like much, but even a few degrees can have serious consequences. Tropical diseases could increase, since mosquitoes and other disease-carrying insects thrive in a warmer climate. Sea levels could rise, and coastal cities such as New Orleans and Washington, D.C., could be battered by storm surges. Prosperous farmland could dry up, and agricultural regions could shift, wreaking havoc on the global economy.

COOLING AND WARMING CAUSE EVAPORATION AND HUMIDITY

It is possible that the recent warming trend is due more to natural cycles of cooling and warming than to human activity. Global climate change occurs on a scale of tens or hundreds of thousands of years, but scientists have only begun to study these effects in the last 150 years. Still, most scientists agree that just as climate affects our lives, we can affect the climate. Just how much we ...
Calculate the wavelength of a tone of frequency 11 kHz if the sound travels at a speed of 343 m/s. (A worked solution follows the list of similar examples below.)

Next similar examples:

- G forces: Calculate the deceleration of a car (as a multiple of the gravitational acceleration g = 9.81 m/s²) that occurs when a car in a frontal collision slows down uniformly from a speed of 111 km/h to 0 km/h over a 1.2 meter trajectory.
- Geometric progression 2: There is a geometric sequence with a1 = 5.7 and quotient q = -2.5. Calculate a17.
- A stone was pushed into an abyss; 2 seconds later we heard it hit the bottom. How deep is the abyss (neglecting air resistance)? (gravitational acceleration g = 9.81 m/s² and the speed of sound in air v = 343 m/s)
- At this point, the first skier leads by 20 km ahead of the second skier and travels at a constant speed of 19 km/h. The second skier rides at 24 km/h. How long will it take him to catch up with the first?
- George covers the 200-meter distance on the way to school in 165 seconds. What is his average walking speed in m/s and km/h?
- An aircraft flies at an altitude of 4100 m above the ground at a speed of 777 km/h. At what horizontal distance from point B should a body be released from the aircraft so that it falls onto point B? (g = 9.81 m/s²)
- A body was thrown vertically upward at a speed of v0 = 79 m/s. The body's height versus time is described by the equation ?. What is the maximum height the body reaches?
- Two cars: Car A1 goes at an average speed of 106 km/h and the second car, A2, goes at 103 km/h. How many seconds will it take car A1 to pass car A2? Assume that both cars are 5 meters long and the safety gap between cars is 1.5 meters.
- A subway train traveling between two stations accelerated gradually for 26 seconds and reached a speed of 72 km/h. It went at this rate for 56 seconds, then slowed to a stop over 16 seconds. What was the distance between the stations?
- Aircraft nose down: How long will an airliner take to fall from a height of 10,000 m at a speed of 1,000 km/h?
- A tourist walked 24 km in 4 hours. How many meters does he cover at the same speed in 12 minutes? (Hint: convert the units to minutes and meters.)
- A walker who makes 120 steps per minute covers the distance from point A to point B in 55 minutes. The length of his step is 75 cm. How long will a boy take to cover this distance if he makes 110 steps of 60 cm per minute?
- Simple interest 3: Find the simple interest if 11,928 USD is invested at 2% for 10 weeks.
- Theorem prove: We want to prove the statement: if the natural number n is divisible by six, then n is divisible by three. From what assumption do we start?
- Seats in the sports hall are organized so that each subsequent row has five more seats than the last. The first row has 10 seats. How many seats are there a) in the eighth row, b) in the eighteenth row?
- The equation ? has one root x1 = 8. Determine the coefficient b and the second root x2.
- The confectionery sold 5 kinds of ice cream. In how many ways can I buy 3 kinds if the order of the ice creams does not matter?
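A worked solution for the main problem (ours, not the site's): the wavelength is the propagation speed divided by the frequency,

\[
\lambda = \frac{v}{f} = \frac{343\ \text{m/s}}{11\,000\ \text{Hz}} \approx 0.0312\ \text{m} \approx 3.1\ \text{cm}.
\]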
Washington: A massive radio telescope in rural West Virginia has begun listening for signs of alien life on 86 possible Earth-like planets, US astronomers said.

The giant dish began this week pointing toward each of the 86 planets - culled from a list of 1,235 possible planets identified by NASA's Kepler space telescope - and will gather 24 hours of data on each one. "It's not absolutely certain that all of these stars have habitable planetary systems, but they're very good places to look for ET," said University of California, Berkeley, graduate student Andrew Siemion.

The mission is part of the SETI project, which stands for Search for Extra-Terrestrial Intelligence, launched in the mid-1980s. Last month the SETI Institute announced it was shuttering a major part of its efforts - a USD 50 million project with 42 telescope dishes known as the Allen Telescope Array (ATA) - due to a budget shortfall.

Astronomers hope the Green Bank Telescope, a previous incarnation of which was felled in a windstorm in 1988, will provide targeted information about potential life-supporting planets, even if on a smaller scale. "We've picked out the planets with nice temperatures - between zero and 100 degrees Celsius - because they are a lot more likely to harbor life," said physicist Dan Werthimer, a veteran SETI researcher.

The project will likely take a year and will be helped by a team of one million at-home astronomers, known as SETI@home users, who will help process the data on personal computers.
Another Corny Mule Yarn A mule and a donkey were going to market laden with corn. The mule said, "If you gave me one measure I should carry twice as much as you; but if I gave you one we should bear equal burdens." Tell me, learned geometrician, what were these burdens? Source: Euclid, 300 B.C.
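A worked solution (ours, not part of the original yarn): let m and d be the mule's and donkey's loads in measures. The two statements give

\[
m + 1 = 2(d - 1), \qquad m - 1 = d + 1.
\]

The second equation gives m = d + 2; substituting into the first yields d + 3 = 2d - 2, so d = 5 and m = 7. The mule carried 7 measures and the donkey 5.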
From snatching carbon dioxide (CO2) out of the air like trees do, to launching giant mirrors into space, scientists are researching a wide variety of technologies to artificially slow global warming. News, Blogs & Features - Jul 18th, 2018 - Missouri Farms Hold Big Potential as Carbon Storehouse - Jul 14th, 2018 - As Seas Rise, Americans Use Nature to Fight Worsening Erosion - Jul 11th, 2018 - Air Conditioning Costs Rise With Arizona’s Heat - Jul 11th, 2018 - Report: The High Cost of Hot - Jun 10th, 2018 - Antarctic Ocean Discovery Warns of Faster Global Warming
First Transistor That Mimics Brain Synapse

French researchers have created what they claim is the first transistor to mimic the connections in the human brain. It could lead to neurology-inspired computers, as well as provide a means for connecting artificial devices to existing biological tissue.

Schematic diagram illustrating how the NOMFET (bottom) mimics a synapse (top). In a synapse, voltage spikes (blue triangles) are converted to a chemical signal (orange arrow), which flows across a gap. Once across, the chemical stimulates the creation of new voltage spikes. (Courtesy: Dominique Vuillaume)

The team, which includes scientists from the CNRS (the French National Science Agency) and CEA (the French Atomic Energy Commission), began by adding gold nanoparticles to the interface between an insulating layer (gate dielectric) and an organic transistor made of pentacene. They fixed the nanoparticles, which were 5, 10 and 20 nm in diameter, into the source-drain channel of the device using surface chemistry techniques and finished the structure by covering it with a 35 nm thick film of pentacene. The resulting device is called a nanoparticle organic memory field-effect transistor, or "NOMFET".

A biological synapse transforms a voltage spike (action potential) arriving from a pre-synaptic neuron into a discharge of chemical neurotransmitters that are then detected by a post-synaptic neuron. These are subsequently transformed into new spikes, leading to a succession of pulses that either become larger or diminish in size. This fundamental property of synaptic behaviour is known as short-term plasticity, which is related to a neural network's ability to learn. It is this plasticity that Vuillaume and colleagues have succeeded in mimicking.

In the NOMFET, the pre-synaptic signal is simply the pulse voltage applied to the device, and the output signal is the drain current, explains Vuillaume. The holes - the charge carriers in the p-type organic semiconductor employed - are trapped in the nanoparticles and act like the neurotransmitters. A certain number of holes are trapped for each incoming spike voltage, and in the absence of pulses the holes escape in a matter of seconds. This time delay is carefully adjusted by the researchers by optimizing nanoparticle number and device geometry. "The output of the NOMFET is thus able to reproduce the decreasing or amplifying behaviour typical of a synapse depending on the frequency of spikes," said Vuillaume.

Science fiction fans recall that Isaac Asimov, in his short story Reason, wrote about a similar idea:

All that had been done in the mid 20th century on "calculating machines" had been upset by Robertson and his positronic brain paths. The miles of relays and photocells had given way to the spongy globe of platinum iridium about the size of the human brain.

A few years later, Philip K. Dick had fun with the idea of a Nexus-6 brain unit in his 1968 novel Do Androids Dream of Electric Sheep:

The Nexus-6 did have two trillion constituents plus a choice within a range of ten million possible combinations of cerebral activity. In .45 of a second an android equipped with such a brain could assume any one of fourteen basic reaction-postures. Well, no intelligence test could trap such an andy. But then, intelligence tests hadn't trapped an andy in years, not since the primordial, crude varieties of the 1970's.

From Physics World via Next Big Future.
Wind is the flow of air relative to the earth's surface. A wind is named according to the point of the compass from which it blows, e.g., a wind blowing from the north is a north wind. The direction of wind is usually indicated by a thin strip of wood, metal, or plastic (often in the shape of an arrow or a rooster) called a weather vane or weathercock (but more appropriately called a wind vane) that is free to rotate in a horizontal plane. When mounted on an elevated shaft or spire, the vane rotates under the influence of the wind such that its center of pressure rotates to leeward and the vane points into the wind.

Wind velocity is measured by means of an anemometer or radar. The oldest of these instruments is the cup anemometer, which has three or four small hollow metal hemispheres set so that they catch the wind and revolve about a vertical rod; an electrical device records the revolutions of the cups and thus the wind velocity. The pressure tube anemometer, used primarily in Commonwealth nations, is conceptually a Pitot tube mounted on a wind vane. As the wind blows across the tube, a pressure differential is created that can be mathematically related to wind speed. Doppler radar can be used to measure wind speed by shooting pulses of microwaves that are reflected off rain, dust, and other particles in the air, much like the radar guns used by police to determine the speed of an automobile. Although the U.S. National Weather Service has estimated that tornado winds have reached a velocity of 500 mph (800 kph), the highest wind speeds ever documented, 318 mph (516 kph), were measured using Doppler radar during a tornado in Oklahoma in 1999.

The first successful attempt to standardize the nomenclature of winds of different velocities was the Beaufort scale, devised (c.1805) by Admiral Sir Francis Beaufort of the British navy. An adaptation of Beaufort's scale is used by the U.S. National Weather Service; it employs a scale ranging from 0 for calm to 12 for hurricane, each velocity range being identified by its effects on such things as trees, signs, and houses.

Winds may also be classified according to their origin and movement, such as heliotropic winds, which include land and sea breezes, and cyclonic winds, which blow counterclockwise in low-pressure regions of the Northern Hemisphere and clockwise in the Southern Hemisphere. Over some zones around the earth, winds blow predominantly in one direction throughout the year and are usually associated with the rotation of the earth; over other areas the prevailing direction changes with the seasons; winds over most areas also vary from day to day, as with the local wind shifts that accompany storms or clearing skies.

Around the equator there is a belt of relatively low pressure known as the doldrums, where the heated air is expanding and rising; at about lat. 30°N and S there are belts of high pressure known as the horse latitudes, regions of descending air; farther poleward, near lat. 60°N and S, are belts of low pressure, where the polar front is located and cyclonic activity is at a maximum; finally there are the polar caps of high pressure. The prevailing wind systems of the earth blow from the several belts of high pressure toward adjacent low-pressure belts.
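The pressure-differential relation mentioned for the pressure tube anemometer is just Bernoulli's equation for dynamic pressure. A minimal sketch, assuming dry air at the sea-level standard density (the function name and the example reading are mine):

```python
import math

def pitot_wind_speed(delta_p_pa, air_density=1.225):
    """Wind speed in m/s from a pressure-tube (Pitot) differential.

    Dynamic pressure q = 0.5 * rho * v**2, so v = sqrt(2 * q / rho).
    air_density defaults to 1.225 kg/m^3 (sea-level standard atmosphere).
    """
    return math.sqrt(2.0 * delta_p_pa / air_density)

# A 100 Pa differential corresponds to roughly 12.8 m/s (about 29 mph).
print(pitot_wind_speed(100.0))
```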
Because of the earth's rotation (see Coriolis effect), the winds do not blow directly northward or southward to the area of lower pressure, but are deflected to the right in the Northern Hemisphere and to the left in the Southern Hemisphere. The wind systems comprise the trade winds; the prevailing westerlies, moving outward from the poleward sides of the horse-latitude belts toward the 60° latitude belts of low pressure (from the southwest in the Northern Hemisphere and from the northwest in the Southern Hemisphere); and the polar easterlies, blowing outward from the polar caps of high pressure and toward the 60° latitude belts of low pressure.

This zonal pattern of winds is displaced northward and southward seasonally because of the inclination of the earth on its axis and the consequent migration of the belts of temperature and pressure. In addition, the pattern is considerably modified by the distribution of land and water, especially in the temperate regions, where temperature differences between land and water are greatest. In winter, areas of high pressure tend to build up over cold continental land masses, while low-pressure development takes place over the adjacent, relatively warm oceans. Exactly the opposite conditions occur during summer, although to a lesser degree. These contrasting pressures over land and water areas are the cause of monsoon winds. Superimposed upon the general circulation of winds are many lesser disturbances, such as the extratropical cyclone (the common storm of the temperate latitudes), the tropical cyclone or hurricane, the tornado, and the derecho; each of these storms moves generally along a path that follows the direction of the prevailing winds. See also chinook; climate; roaring forties; sandstorm; sirocco; weather.

The diurnal, or daily, heating and cooling of land near a lake or ocean of fairly constant temperature causes air to blow toward the relatively warmer land during the day (sea breeze) and toward the relatively warmer water at night (land breeze). These breezes are shallow and seldom penetrate far inland or attain high velocity. Similar diurnal changes occur on mountain slopes, the air in the valley becoming heated and expanding so that it moves up the slope in the daytime, the cold air settling into the valley at night. Friction with the earth's surface, eddies caused by surface irregularities, and inequalities of heating with consequent convection currents tend to reduce wind velocity near the earth's surface and cause winds to blow in gusts.
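The deflection described above is usually quantified, to first order, by the Coriolis parameter. The formula below is standard meteorology rather than part of the original entry:

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate in rad/s

def coriolis_parameter(lat_deg):
    """f = 2 * Omega * sin(latitude): the horizontal deflection rate."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

# f vanishes at the equator and is largest at the poles, which is why
# winds in middle and high latitudes show the strongest deflection.
for lat in (0, 30, 60, 90):
    print(f"{lat:2d} deg: f = {coriolis_parameter(lat):.2e} 1/s")
```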
<urn:uuid:41f0575d-998b-47f1-a5d0-cfcbd23fca51>
4.15625
1,366
Knowledge Article
Science & Tech.
42.992536
95,642,766
Supercomputer simulations by two Sandia researchers have significantly altered the theoretical diagram universally used by scientists to understand the characteristics of water at extreme temperatures and pressures. The new computational model also expands the known range of water’s electrical conductivity.

The Sandia theoretical work showed that phase boundaries for “metallic water” — water with its electrons able to migrate like a metal’s — should be lowered from 7,000 to 4,000 kelvin and from 250 to 100 gigapascals. (A phase boundary describes conditions at which materials change state — think water changing to steam or ice, or in the present instance, water — in its pure state an electrical insulator — becoming a conductor.)

The lowered boundary is sure to revise astronomers’ calculations of the strength of the magnetic cores of gas-giant planets like Neptune. Because the planet’s temperatures and pressures lie partly in the revised sector, its electrically conducting water probably contributes to its magnetic field, formerly thought to be generated only by the planet’s core. The calculations agree with experimental measurements in research led by Peter Celliers of Lawrence Livermore National Laboratory.

Surprising results were not the intent of Sandia co-investigators Thomas Mattsson and Mike Desjarlais. “We were trying to understand conditions at [a powerful Sandia accelerator known as] Z,” says Mattsson, a theoretical physicist, “but the problems are so advanced that they hopscotched to another branch of science.”

As of July 2007, Z is undergoing an extensive renovation that will increase the machine’s pulse from 20 to 26 million amps — a 30 percent rise. The question for researchers: how will water behave when subjected to these more extreme conditions?

The power Z emits in X-rays when it fires is equivalent to many times the entire world’s generation of electricity — but only for a few nanoseconds. The machine creates high temperatures and pressures in water because of the 20-million-amp electrical pulses it sends through a row of water switches. First, the water acts as an insulator, restraining the incoming electric charge. Then, overcome by the buildup, the water transmits the pulse, shortening it from microseconds to approximately 100 nanoseconds. This compression in time is a key element of what makes the Z accelerator so powerful.

It is known that so much electricity passing through water vaporizes it, causing surrounding water pressures to rise as the shock wave from vaporization travels outward. But how large is the increase? How big a cavity does the ionized region form to transmit what amounts to a giant spark? And what are the best sizes for these channels, and for the switches themselves, to optimize the transmission of electrical pulses in future upgrades?

“The concern was that ZR [Z Refurbishment] or its successors might go beyond the ability of a water switch to function as designed and carry the required current,” says Keith Matzen, director of Sandia’s Pulsed Power Sciences Center. “More efficient, larger machines may run into a limit and their switches not meet design requirements. So the question is, how does a water switch really work from first principles?” One aspect of this knowledge is to model water to get a better understanding of its behavior under these extreme conditions, he says.
Mattsson and Desjarlais first found the standard water-phase diagram out of whack when they ran an advanced quantum molecular simulation program on Sandia’s Thunderbird supercomputer that included “warm” electrons instead of unrealistically cold ones, says Desjarlais. The molecular modeling code VASP (Vienna Ab-initio Simulation Package), based on density functional theory (DFT), was written in Austria. Desjarlais extended it to model electrical conductivity, and Mattsson developed a model for ionic conductivity based on calculations of hydrogen diffusion. An accurate description of water requires this combined treatment of electronic and ionic conductivity.

The adaptation of VASP to high-energy-density physics (HEDP) work at Sandia was motivated by earlier experimental measurements of the conductivity of exploding wires by Alan DeSilva at the University of Maryland. DeSilva found a considerable disparity between his data and theoretical models of materials in the region of phase space called warm dense matter. Desjarlais’ early VASP conductivity calculations immediately resolved the discrepancy. In recent years, a team of Sandia researchers has been extending one of Sandia’s own DFT codes (Socorro) to go beyond the capabilities of VASP for HEDP applications. “Mike [Desjarlais] was the first to pioneer this capability for warm dense matter six years ago,” says Sandia manager Tom Mehlhorn, “and Mattsson has come on to be a near-perfect complement as the work enters more complex areas.”

As it turns out, the newly discovered regime will not adversely affect Sandia’s water switches on ZR. But water switches not yet designed for future upgrades may require the more accurate understanding of the phases of water discovered by the Sandia researchers.

Because of Z’s success in provoking fusion neutrons from deuterium pellets, it is thought of as a possible (if dark-horse) contender in the race for high-yield controlled nuclear fusion, which would provide essentially unlimited power to humanity. Z is immediately useful for US defense purposes — data from its firing are used to validate physics models in computer simulations that are used to certify the safety and reliability of the US nuclear weapons stockpile.

The work on water phases was initially published July 7 in Physical Review Letters and most recently reported at the 12th International Workshop on the Physics of Non-Ideal Plasmas, held in Darmstadt, Germany.

Source: Sandia National Laboratories
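As a crude illustration of what the revised boundary means in practice, the toy check below treats the article's two revised onset values as independent thresholds. The real boundary is a curve in temperature-pressure space, so this is a deliberate oversimplification, and the example conditions are invented:

```python
def metallic_water_possible(temp_k, pressure_gpa, revised=True):
    """Screen a (T, P) point against the metallic-water onset values quoted
    in the article: 4,000 K / 100 GPa revised, down from 7,000 K / 250 GPa.
    Treating these as simple thresholds ignores the true boundary's shape."""
    t_min, p_min = (4000.0, 100.0) if revised else (7000.0, 250.0)
    return temp_k >= t_min and pressure_gpa >= p_min

# A hypothetical interior condition that only the revised boundary admits:
print(metallic_water_possible(5000.0, 150.0, revised=True))   # True
print(metallic_water_possible(5000.0, 150.0, revised=False))  # False
```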
<urn:uuid:7f8080d2-bbc0-426b-b40b-31e4e9d389e2>
3.921875
1,246
News Article
Science & Tech.
23.789856
95,642,781
Once polluted soil has been thermally cleaned, the result is a biologically dead product. However, the Dutch government emphasizes the so-called principle of multifunctionality of the soil. The most vulnerable function of the soil is the ecological function. To assess the possibilities of ecological recovery of cleaned soils, a field experiment was performed in a pasture on the grounds of the institute in Bilthoven. Because soil mesofauna plays an important role in the functioning of the soil ecosystem, the colonization and development of free-living nematodes and other soil mesofauna were studied in enclosures consisting of thermally cleaned soil containing a core of unpolluted grassland soil. The core was added to study possible migration of mesofauna from healthy soil to the cleaned soil. Some enclosures were fertilized to improve environmental conditions. The controls contained no core, but only unpolluted soil or cleaned soil.

Ecological Recovery of Decontaminated Soil, by Frederike I. Kappers and Mariette L. P. van Esbroek (Springer Netherlands).
<urn:uuid:b9363183-78f7-4673-bcb7-b08fc09d28d4>
3
257
Knowledge Article
Science & Tech.
17.034992
95,642,809
Thermogravimetric Analysis of Minerals

Thermogravimetric analysis (TGA), in which the mass of a sample is monitored as a function of temperature, is one of the oldest analytical techniques used in clay mineralogy. One of the first applications to the study of minerals was reported in 1903 by Nernst and Riesenfeld. Since then, TGA has been used to obtain a variety of information on minerals, particularly hydrous phases such as clays and zeolites. TGA is currently widely used in many other disciplines as well, including polymer chemistry, pharmaceuticals, and inorganic analysis (Wendlandt, 1986).

Despite the active interest in TGA in other areas of solid characterization, and despite the many studies of clay minerals that have been conducted using TGA, the method has not enjoyed significant popularity in recent years in the mineral sciences. The technique appears to be viewed as somewhat qualitative in a field that is demanding more and more quantitative results. Fortunately, however, as a result of the large amount of research in other fields employing TGA, the application of TGA has grown in recent years from a simple technique often used as a fingerprint or water-analysis method to one that can provide quantitative information on many types of thermal processes in minerals. Data available through TGA include, but are not limited to, the kinetics of dehydration and dehydroxylation reactions, the analysis of solids for non-water volatiles, the determination of equilibrium dehydration behavior of hydrous minerals, the quantitative analysis of multicomponent mixtures, and the separation of overlapping dehydration reactions.
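Since TGA data are just mass as a function of temperature, the derivative (DTG) is the usual first step toward separating overlapping mass-loss events, one of the applications listed above. A minimal sketch with synthetic data (the sigmoidal steps and their temperatures are invented):

```python
import numpy as np

def dtg(temperature, mass):
    """Derivative thermogravimetry: d(mass)/d(temperature)."""
    return np.gradient(np.asarray(mass), np.asarray(temperature))

# Synthetic TGA trace with two mass-loss steps near 150 and 550 degrees C.
T = np.linspace(25.0, 900.0, 500)
m = 10.0 - 1.5 / (1 + np.exp(-(T - 150.0) / 20.0)) \
         - 2.0 / (1 + np.exp(-(T - 550.0) / 30.0))
rate = dtg(T, m)
# Minima of the derivative mark individual dehydration/dehydroxylation events.
print(T[rate.argmin()])
```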
<urn:uuid:de805401-750f-4e6c-8bf8-d90451d9fd10>
3.4375
329
Knowledge Article
Science & Tech.
2.641071
95,642,819
(Oak Ridge, TN)—The International Union of Pure and Applied Chemistry (IUPAC) Inorganic Chemistry Division has published a Provisional Recommendation for the names and symbols of the recently discovered superheavy elements 113, 115, 117, and 118. The provisional names for 115, 117, and 118—originally proposed by the discovering team from the Joint Institute for Nuclear Research, Dubna, Russia; the Department of Energy's Oak Ridge National Laboratory, Oak Ridge, Tennessee, and Lawrence Livermore National Laboratory, Livermore, California; and Vanderbilt University, Nashville, Tennessee—will now undergo a statutory period of public review before the names and symbols can be finally approved by the IUPAC Council.

Tennessine (Ts) is proposed for element 117, recognizing the contribution of Tennessee research centers ORNL, Vanderbilt, and the University of Tennessee to superheavy element research, including the production and chemical separation of unique actinide target materials at ORNL's High Flux Isotope Reactor and Radiochemical Engineering Development Center. Actinide materials from ORNL have contributed to the discovery and/or confirmation of nine superheavy elements. "These experiments and discoveries essentially open new frontiers of chemistry," said ORNL's Science and Technology Partnerships director Jim Roberto. (Photo courtesy of ORNL.)

Prof. Yuri Oganessian from the Joint Institute for Nuclear Research, scientific leader of the team, noted the importance of international collaboration in discovering new elements and nuclei, completing the seventh row of the periodic table, and providing evidence for the long-sought "island of stability" for superheavy elements. Two members of the team, JINR and LLNL, were previously credited with the discovery of elements 114 (flerovium) and 116 (livermorium). These new elements were discovered using the "hot fusion" approach, developed and implemented by Oganessian at JINR. This approach involves heavy-ion reactions of an intense, high-energy calcium beam on rare actinide targets, including berkelium and californium, at the Dubna Gas-Filled Recoil Separator.

The concept of the "island of stability" was originally proposed in the 1960s. It predicts increased stability for superheavy nuclei at higher neutron and proton numbers. The new nuclei produced in this research exhibit substantially increased lifetimes consistent with approaching the island.

Moscovium (Mc) is provisionally recommended for element 115 in recognition of the Moscow region, honoring the ancient Russian land that is home to JINR. Moscow is the capital of the region, and Moscow and its people have been very supportive of JINR and superheavy element research.

The provisional name for element 118 is Oganesson (Og), in recognition of the pioneering contributions of Yuri Oganessian to superheavy element research. It was the vision and determination of Prof. Oganessian that created this opportunity for the significant expansion of the periodic table and our knowledge of superheavy nuclei.

IUPAC also has published a Provisional Recommendation for element 113. The discoverers at the RIKEN Nishina Center for Accelerator-Based Science in Japan proposed the name nihonium (Nh). Nihon is one of the two ways to say "Japan" in Japanese, and translates as "the Land of the Rising Sun."

UT-Battelle manages ORNL for DOE's Office of Science.
The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.
<urn:uuid:23f90024-e742-4fa7-b83e-08cd9834e662>
2.59375
771
News (Org.)
Science & Tech.
17.191616
95,642,825
- 1 Internal
- 2 Statement
- 3 Expression
- 4 Typing
- 5 Duck Typing
- 6 Polymorphism
- 7 Metaprogramming
- 8 Object-Oriented Programming
- 9 Functional Programming
- 10 Concurrent Programming
- 11 Formal Languages and Translators

A type determines the set of values and operations specific to values of that type, and the way instances of the type are stored in memory - the size of the values. Expressions of a certain type (variables, functions) take their values from the type's set of values.

Static Typing vs Dynamic Typing
- Wikipedia: Type system (https://en.wikipedia.org/wiki/Type_system)

In a statically typed system, variables and expressions always have a specific type, and that type cannot be changed; the type is known at compile time. Dynamically typed languages are convenient because there are no intermediate steps between writing the code and executing it. However, certain types of errors cannot be caught until the program executes. In statically typed languages, many of these errors are caught at the compilation phase. On the downside, static languages usually come with a great deal of ceremony around everything that happens in the program (heavy syntax, type annotations, complex type hierarchies).

Strong Typing vs Loose Typing
- Wikipedia: Strong and weak typing (https://en.wikipedia.org/wiki/Strong_and_weak_typing)

A strongly typed language is a language in which types limit the values that a variable can hold, or that an expression can produce, limit the operations supported by those values, and determine the meaning of operations. Type safety is the extent to which a programming language discourages or prevents type errors. Type enforcement can be static (exercised at compile time), dynamic (associating type information with values and detecting type errors at run time), or both. Java and C# are examples of type-safe languages.

Polymorphism is a feature of a programming language that allows writing code that behaves differently depending on the runtime state at a specific moment in time. The contract of the behavior is defined by an interface, while the implementation of the interface can vary. In Java, different classes may implement an interface, and instances of those classes can be used interchangeably as that interface. In Go, different concrete types implement an interface.

Metaprogramming is writing code that manipulates other code, or even itself.
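The examples in the text are Java and Go; in a dynamically typed language the same contract-versus-implementation idea reduces to duck typing (section 5 of the outline above). A minimal Python sketch (the class and function names are mine):

```python
class Dog:
    def speak(self):
        return "woof"

class Robot:
    def speak(self):
        return "beep"

def greet(thing):
    # Duck typing: no declared interface is required. Any object with a
    # speak() method works, and the behavior varies with the runtime type,
    # giving polymorphism without compile-time type checks.
    return thing.speak()

for obj in (Dog(), Robot()):
    print(greet(obj))  # prints "woof", then "beep"
```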
<urn:uuid:29da1551-8f57-49e5-a6f5-9c096682b50a>
3.65625
508
Documentation
Software Dev.
26.886667
95,642,826
Scientists from Durham University have deciphered the landforms and created a model of the British and Irish Ice Sheet (BIIS) which reveals for the first time how glaciers reversed their flows and retreated back into the upland regions from which they originated. These ice sheet flow patterns created a unique 'overprinting' of British glacial landforms 26,000 to 16,000 years ago, leaving distinctive egg-shaped features called 'drumlins' across our fields and valleys. Drumlin-strewn landscapes can be seen along the A66 road through the Eden Valley (near Appleby) and across the Solway and Lake District lowlands, the Northern Pennines, and through the Tyne Gap and the valleys of southern Scotland. The research, funded by the Natural Environment Research Council, is published in the journal Quaternary Science Reviews.

During the last glacial maximum, around 21,500 years ago, the BIIS built up on the high land of the Lake District, north Pennines and Scottish Southern Uplands; as more snow fell in these areas and local ice caps thickened, glaciers flowed into the surrounding lowlands as expected. The new reconstruction of the movement of the ice sheet, compiled by the Durham University research team, reveals an unusual twist once the glaciers filled the lowland areas. As the ice sheet evolved from the coalescence of the upland ice caps, it flowed out towards the Irish Sea, eventually becoming so thick over the Solway Lowlands that it reversed its flow back up the valleys, re-adjusting the landforms it had created during earlier stages of growth. The rolling terrain that walkers can see along many parts of the Pennine Way, and that drivers can see along the route of the M6 motorway, provides examples of this glacial landscape.

The research team, led by Dr David Evans from the Department of Geography at Durham University, plotted the progress of the ice sheet between 26,000 and 16,000 years ago. Using maps of superimposed drumlins, ancient temperature records, and computer modelling, the team profiled the size, extent and flow directions of the ice sheet, and reconstructed its movement through time.

Dr Evans said: "The stereotypical image of Ice-age Britain is of ice rolling in from the Arctic but this is not an accurate description of what happened. Britain was cold enough for ice to form in the uplands, growing and coalescing to produce an elongate, triangular-shaped dome over NW England and SW Scotland around 19,500 years ago.

"The ice sheet then moved downhill, as one would expect. Our findings show that the lowland ice became so thick that it began to move in unexpected ways - the ice moved back uphill from where it originally came. Recession and a series of complex ice flow directional switches took place over relatively short timescales."

Four major ice flows have been identified across northern Britain, and Dr Evans' team has produced case studies of drumlin and lineation mapping showing that these glacier flow directions switched significantly through time. The pressure of the ice flows became sufficient to deform sediments at the base of the ice sheet, resulting in the moulding of the sediment into streamlined landforms like drumlins. Many of the fields of northern England and southern Scotland have been cleared of their boulders during hundreds of years of agricultural improvement. This stony, unworkable material was called 'till', the term now used by glacial researchers to describe sediment laid down at the base of ice sheets and glaciers.
A close look at many of the distinctive stone walls in the region of the North Pennine chain often reveals the use in their construction of Scottish and Lake District 'erratics', stones which are quirks of glacial ice flows. Many of these erratic stones were transported hundreds of miles from their origin by the complex and often reversed movement of the glaciers.

Dr Evans says: "The Durham model shows that an ice sheet can reverse its flow in a hundred or so years and when this happens, it creates unique features in our landscape. Elongated drumlins and meltwater channels in northern England and southern Scotland provide evidence of this unique phenomenon."

"The ice sheet had no real steady state but rather was mobile and comprised constantly migrating dispersal centres and ice divides which triggered significant flow reversals. The occurrence of Lake District material in Pennine dry-stone walls is a clear indication that during the last glaciation of Britain, ice sheet flow directions were at times reversed."

Five stages of glaciation in northern Britain:
1. Build-up of snow and ice on higher ground.
2. Ice thickening results in ice flow down valleys that drain the uplands.
3. Valley ice from different upland sources fuses or coalesces.
4. Ice thickens in the lowlands and the ice sheet dispersal centres migrate, forcing ice flows to become independent of the underlying hills and valleys.
5. In some areas the ice flows reverse and in places (e.g. Vale of Eden) actually move back uphill.

Four flows of glaciation in northern Britain:
- Phase I flow was from a dominant Scottish dispersal centre, which transported Criffel granite erratics to the Eden Valley and forced Lake District ice eastwards over the Pennines at Stainmore. Prior to this phase, local ice caps over the Lake District and North Pennines forced ice to flow into the lowlands, the reverse of Phase I flow.
- Phase II involved easterly flow of Lake District and Scottish ice through the Tyne Gap and Stainmore Gap, with an ice divide located over the Solway Firth.
- Phase III was a dominant westerly flow from upland dispersal centres into the Solway lowlands and along the Solway Firth, due to draw-down of ice into the Irish Sea basin.
- Phase IV was characterised by unconstrained advance of Scottish ice across the Solway Firth. At this time, the ice sheet had started to uncouple again to produce localized ice retreat back onto the high land of the Lake District and North Pennines (the ice retreated from whence it came). This period saw: a) the development of a vast lake (Glacial Lake Carlisle) over the Solway Lowlands, dammed by the Scottish ice advance; b) the cutting of the Melmerby meltwater channels on the Pennine Escarpment by water draining along a glacier margin retreating up the Eden Valley; and c) the deposition of the Brampton kame belt, the largest accumulation of glacial sand and gravel in England.
<urn:uuid:0b52e2ec-7fc5-43f3-bd71-f2958c0f6a5a>
4.03125
1,976
Content Listing
Science & Tech.
41.739996
95,642,827
The superior thermal conductivity of single crystal diamond (20 W/cm/K at room temperature for type IIa diamond) makes diamond desirable for many applications requiring the dissipation of heat. Several experimental methods have been used to determine whether chemical vapor deposited (CVD) diamond has a comparable thermal conductivity. Values as high as 10 W/cm/K have been measured. In this paper we discuss the use of photothermal radiometry to measure the thermal diffusivity and conductivity of CVD diamond.

H. P. R. Frederikse, X. T. Ying, "Thermal Wave Measurements Of The Thermal Properties Of CVD Diamond", Proc. SPIE 1146, Diamond Optics II (15 January 1989); doi: 10.1117/12.962064
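Photothermal methods measure diffusivity directly; the conductivity then follows from k = alpha * rho * c_p. The sketch below uses nominal literature values for diamond's density and specific heat, which are assumptions on my part, not values from this paper:

```python
def conductivity_from_diffusivity(alpha, density, specific_heat):
    """Thermal conductivity k = alpha * rho * c_p (SI units)."""
    return alpha * density * specific_heat

# Nominal literature values for diamond: rho ~ 3515 kg/m^3, c_p ~ 509 J/(kg K).
# Type IIa diamond's 20 W/cm/K equals 2000 W/m/K, implying alpha ~ 1.1e-3 m^2/s.
print(conductivity_from_diffusivity(1.12e-3, 3515.0, 509.0))  # ~2000 W/m/K
```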
<urn:uuid:471e9d12-1033-4c58-a3ba-b1fcc1f8e3f0>
3.09375
182
Academic Writing
Science & Tech.
62.77661
95,642,840
A solid sphere has a translational speed of 10 meters per second, a mass m of 25 kg, and a radius r of 0.2 meters. The moment of inertia of the sphere about its center of mass is I = (2/5)mr². The sphere approaches a 25° incline of height 3 meters and rolls up the incline without slipping. Neglecting air resistance, calculate the horizontal distance from the point where the sphere leaves the incline to the point where the sphere strikes the level surface.
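A worked solution sketch, taking g = 9.8 m/s². For rolling without slipping, the kinetic energy is (1/2)mv² + (1/2)Iω² = (7/10)mv², so the mass and radius cancel out of the answer:

```python
import math

g, h, v0, theta = 9.8, 3.0, 10.0, math.radians(25.0)

# Energy conservation up the incline (rolling, I = 2/5 m r^2):
# (7/10) v_top^2 = (7/10) v0^2 - g h  =>  v_top^2 = v0^2 - (10/7) g h
v_top = math.sqrt(v0**2 - (10.0 / 7.0) * g * h)   # about 7.62 m/s

# Projectile launched at 25 degrees above horizontal from height h:
vx, vy = v_top * math.cos(theta), v_top * math.sin(theta)
t = (vy + math.sqrt(vy**2 + 2.0 * g * h)) / g     # time to reach the level surface
print(vx * t)                                     # horizontal distance, about 8.1 m
```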
<urn:uuid:90a80e54-1cc7-4b3f-b539-28258adca61a>
3.25
106
Tutorial
Science & Tech.
63.655714
95,642,850
Acoustic Control of Flow Instabilities

The lecture reviews the common origin of sound and instabilities in weak perturbations of a smooth flow. The degree to which sound and instabilities interact, or indeed the degree to which they are separable, depends on their semantic definition. At one level they are merely secondary unsteady effects dependent on the main flow, while at the other they display distinctive characteristics of propagation and exponential growth. Even if they exist as separable entities in a pure flow, they remain coupled by the inevitable inhomogeneities of realistic geometrical arrangements, so that in practice flow instability and sound are actually inseparable.

Sound, being essentially linear, admits the attenuation brought by superposition of its phase-inverted replica. Anti-sound cancels sound, a principle now being implemented in rapidly growing noise control technology. The same principle implies a change in stability whenever the coupled sound field is modified. Flows that are traditionally unstable can be stabilized by active systems. Whether all flows can be stabilized or not might depend on the fundamental constraint that it is impossible to forecast the future. More probably the limit is set by technical complexity and by the need to keep all perturbations within the linear regime where superposition is valid. The scope of the idea will only become clear as definite experiments are completed.

Experiments have progressed from the control of a small combustion system with a single instability mode to the acoustic suppression of violent unsteadiness at aero-engine reheat conditions. A wing section flexibly mounted in a wind tunnel has been brought out of flutter oscillation by switching on a wall-mounted loudspeaker. The fundamental Strouhal frequency in the Karman street wake of a circular cylinder has been suppressed by acoustic feedback, and both rotating stall and surge have been avoided at conditions which made those instabilities inevitable without active control. An ONR programme to examine the applicability of these methods to a real gas turbine engine has led to a power increase of some 10% and to active methods of restoring the stable condition following the onset of surge. The lecture will conclude by reviewing these developments and the issues that have arisen in developing the control methods. In particular it will raise the prospect of aerofoil sections controlled to operate with radically superior characteristics and speculate on that being an element of the performance improvement observed on the test engine.

Keywords: Axial Compressor; Flow Instability; Centrifugal Compressor; Wing Section; Acoustic Feedback

References:
2. Crocco, L. and Cheng, S. I. (1956). Theory of combustion instability in liquid propellant rocket motors. Interscience.
3. Day, I. J. (1991). Compressor performance during surge. Proc. Tenth Int. Symp. on Airbreathing Engines, Nottingham. Published by A.I.A.A.
4. Day, I. J. (1991a). Active suppression of rotating stall and surge in axial compressors. A.S.M.E. paper 91-GT-87.
5. Dines, P. J. (1984). Active control of flame noise. Ph.D. thesis, Cambridge University, U.K.
10. Ffowcs Williams, J. E. and Graham, W. R. (1990). An engine demonstration of active surge control. Proceedings of the Gas Turbine and Aeroengine Congress and Exposition, June 11–14, Brussels, Belgium.
11. Heckl, M. (1985). Active control of Rijke tube. Ph.D.
thesis, Cambridge University.
12. Huang, X. (1988). Active control of aerodynamic instabilities. Ph.D. thesis, Cambridge University, U.K.
13. Lueg, P. (1936). Process of silencing sound oscillations. U.S. Patent #2,043,416.
14. Ludwig, G. R. and Nenni, J. P. (1980). Tests on an improved rotating stall control system on a J85 turbojet engine. A.S.M.E. paper 80-GT-17.
15. Nelson, P. A. and Elliot, S. J. (1991). Active noise control: a tutorial review. Proceedings of the Acoustical Society of Japan International Conference on the Active Control of Sound and Vibration, Tokyo.
16. Olson, H. F. and May, E. G. (1953). Electronic sound absorber. J.A.S.A., 25, 1130–1136.
17. Paduano, J., Epstein, H., Longley, J. P., Valavani, L., Greitzer, E. M. and Guenette, G. R. (1991). Active control of a rotating stall in a low speed axial compressor. Proceedings of the A.S.M.E./I.G.T.I. Conference, Orlando.
18. Pinsley, J. E., Guenette, G. R., Epstein, A. H. and Greitzer, E. M. (1990). Active stabilization of centrifugal compressor surge. A.S.M.E. Gas Turbine Conference, Brussels.
19. Sunyach, M. and Ffowcs Williams, J. E. (1986). Contrôle actif des oscillations dans les cavités excitées par un écoulement. C. R. Acad. Sc. Paris, t. 303, Série 11, No. 12.
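The anti-sound principle from the abstract (cancellation by a phase-inverted replica, valid only in the linear regime) can be demonstrated numerically in a few lines; the tone frequency here is arbitrary:

```python
import numpy as np

t = np.linspace(0.0, 0.01, 1000)
sound = np.sin(2.0 * np.pi * 440.0 * t)           # a 440 Hz tone
anti = np.sin(2.0 * np.pi * 440.0 * t + np.pi)    # its phase-inverted replica

# Superposition cancels the field; the residual is floating-point noise.
print(np.max(np.abs(sound + anti)))
```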
<urn:uuid:6d1dba1b-d3eb-4cc2-a279-490324e2c4ca>
2.71875
1,210
Truncated
Science & Tech.
57.720397
95,642,861
Loss of biodiversity appears to affect ecosystems as much as climate change, pollution and other major forms of environmental stress, according to results of a new study by an international research team. The study is the first comprehensive effort to directly compare the effects of biological diversity loss to the anticipated effects of a host of other human-caused environmental changes. The results, published in this week's issue of the journal Nature, highlight the need for stronger local, national and international efforts to protect biodiversity and the benefits it provides, according to the researchers, who are based at nine institutions in the United States, Canada and Sweden.

"This analysis establishes that reduced biodiversity affects ecosystems at levels comparable to those of global warming and air pollution," said Henry Gholz, program director in the National Science Foundation's Division of Environmental Biology, which funded the research directly and through the National Center for Ecological Analysis and Synthesis.

"Some people have assumed that biodiversity effects are relatively minor compared to other environmental stressors," said biologist David Hooper of Western Washington University, the lead author of the paper. "Our results show that future loss of species has the potential to reduce plant production just as much as global warming and pollution."

Studies over the last two decades demonstrated that more biologically diverse ecosystems are more productive. As a result, there has been growing concern that the very high rates of modern extinctions--due to habitat loss, overharvesting and other human-caused environmental changes--could reduce nature's ability to provide goods and services such as food, clean water and a stable climate. Until now, it's been unclear how biodiversity losses stack up against other human-caused environmental changes that affect ecosystem health and productivity.

"Loss of biological diversity due to species extinctions is going to have major effects on our planet, and we need to prepare ourselves to deal with them," said ecologist Bradley Cardinale of the University of Michigan, one of the paper's co-authors. "These extinctions may well rank as one of the top five drivers of global change."

In the study, Hooper, Cardinale and colleagues combined data from a large number of published studies to compare how various global environmental stressors affect two processes important in ecosystems: plant growth and the decomposition of dead plants by bacteria and fungi. The study involved the construction of a database drawn from 192 peer-reviewed publications about experiments that manipulated species richness and examined their effect on ecosystem processes.

This global synthesis found that in areas where local species loss during this century falls within the lower range of projections (losses of 1 to 20 percent of plant species), negligible effects on ecosystem plant growth will result, and changes in species richness will rank low relative to the effects projected for other environmental changes. In ecosystems where species losses fall within intermediate projections of 21 to 40 percent of species, however, species loss is expected to reduce plant growth by 5 to 10 percent. The effect is comparable to the expected effects of climate warming and increased ultraviolet radiation due to stratospheric ozone loss.
At higher levels of extinction (41 to 60 percent of species), the effects of species loss ranked with those of many other major drivers of environmental change, such as ozone pollution, acid deposition on forests and nutrient pollution. "Within the range of expected species losses, we saw average declines in plant growth that were as large as changes in experiments simulating several other major environmental changes caused by humans," Hooper said. "Several of us working on this study were surprised by the comparative strength of those effects."

The strength of the observed biodiversity effects suggests that policymakers searching for solutions to other pressing environmental problems should be aware of potential adverse effects on biodiversity as well. Still to be determined is how diversity loss and other large-scale environmental changes will interact to alter ecosystems. "The biggest challenge looking forward is to predict the combined effects of these environmental challenges to natural ecosystems and to society," said J. Emmett Duffy of the Virginia Institute of Marine Science, a co-author of the paper.

Authors of the paper, in addition to Hooper, Cardinale and Duffy, are E. Carol Adair of the University of Vermont and the National Center for Ecological Analysis and Synthesis; Jarrett Byrnes of the National Center for Ecological Analysis and Synthesis; Bruce Hungate of Northern Arizona University; Kristen Matulich of the University of California, Irvine; Andrew Gonzales of McGill University; Lars Gamfeldt of the University of Gothenburg; and Mary O'Connor of the University of British Columbia and the National Center for Ecological Analysis and Synthesis.
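The reported loss-effect ranges can be restated as a simple lookup. The boundaries below are the article's; the category strings are my paraphrase:

```python
def projected_growth_effect(pct_species_lost):
    """Map a local plant-species loss (%) to the study's reported effect range."""
    if pct_species_lost <= 20:
        return "negligible effect on ecosystem plant growth"
    if pct_species_lost <= 40:
        return "5-10% reduction, comparable to warming or increased UV"
    if pct_species_lost <= 60:
        return "comparable to ozone pollution, acid deposition, nutrient pollution"
    return "beyond the ranges reported in the study"

print(projected_growth_effect(30))
```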
<urn:uuid:4bab26cd-0c63-422b-ab13-c4d1b5ddd543>
3.640625
1,676
Content Listing
Science & Tech.
24.653769
95,642,876
Cellular Automata Approaches for Simulating Rheology of Complex Geological Phenomena

Cellular Automata (CA) sometimes offer an alternative to differential equations for modelling and simulating certain kinds of complex systems, namely those that can be described in terms of local interactions of their constituent parts. Typical application fields of CA are microscopic physical phenomena, with elementary automata that have few states and a simple transition function. We extended the application range to two macroscopic geological phenomena, very similar from a fluid-dynamical viewpoint: lava flows and debris/mud flows, which can be viewed as dynamical systems based on local interactions. In this paper a unifying approach to both phenomena is presented, considering the main features of SCIARA and SCIDDICA, the two cellular models developed for simulating, respectively, lava flows and debris flows. Examples of practical applications to real events are shown: an eruption that occurred on Reunion Island (Indian Ocean) in 1986 and the 1984 Ontake volcano debris avalanche in Japan. General considerations are drawn from these particular applications in order to infer an empirical method that can be extended to other macroscopic cases.

Keywords: Debris Flow; Cellular Automaton; Lava Flow; Cellular Automaton Model

References:
1. Toffoli, T. Cellular Automata as an alternative to (rather than an approximation of) differential equations in modeling physics. Physica 10D 1984; 117–127.
3. Johnson, A.M. Physical Processes in Geology. Freeman, Cooper & Company, San Francisco CA, 1973.
5. Rongo, R., Spataro, W., Villeneuve, N. Lava flow simulation with SCIARA: the Reunion Island case. To appear in the Proceedings of the IAMG 98 Conference, Ischia Island, Italy, 5–9 October 1998.
6. Barca, D., Crisci, G.M., Di Gregorio, S. et al. Cellular Automata methods for modeling lava flow: simulation of the 1986–1987 Etnean eruption. In: Active Lavas. UCL Press, London, Kilburn, C. and Luongo, G. (Eds.), 1993, 12: 283–301.
8. Crisci, G.M., Di Francia, A., Di Gregorio, S., et al. SCIARA.2: A Cellular Automata Model for Lava Flow Simulation. In: Proceedings of the Third Annual Conference of the International Association for Mathematical Geology, IAMG, 22–27 September 1997. V.P. Glahn (Ed.), 1997, pp. 11–16.
9. Sassa, K. Motion of Landslides and Debris Flows. Report for Grant-in-Aid for Scientific Research by the Japanese Ministry of Education, Science and Culture (Project No. 61480062), 1998.
10. Di Gregorio, S., Nicoletta, F.P., Rongo, R. et al. A two-dimensional Cellular Automata Model for Landslide Simulation. In: Proceedings of the 6th Joint EPS-APS International Conference on Physics Computing "PC'94", Lugano, Switzerland, 22–26 August 1994. Eds. R. Gruber and M. Tommasini, 1994a, pp. 523–526.
11. Di Gregorio, S., Nicoletta, F.P., Rongo, R. et al. Landslide Simulation by Cellular Automata in a Parallel Environment. In: Proceedings of the 2nd International Workshop on Massive Parallelism: Hardware, Software and Application, October 3–7 1994, Capri, Italy. 1994b, pp. 392–407.
12. Di Gregorio, S., Nicoletta, F.P., Rongo, R. et al. SCIDDICA-3: A Cellular Automata Model for Landslide Simulation. In: Advances in Intelligent Systems. IOS Press, Amsterdam, F.C. Morabito (Ed.), 1997, pp. 324–330.
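In the spirit of the models described, though vastly simpler than SCIARA's or SCIDDICA's actual transition functions, here is a minimal CA flow sketch: each cell holds a fluid thickness, and each step sends a fraction of the surplus over each lower neighbor outward. All parameters are invented, and the grid boundaries wrap for brevity:

```python
import numpy as np

def ca_step(h, ground, flow_frac=0.2):
    """One CA step: fluid flows toward lower von Neumann neighbors."""
    total = ground + h
    new = h.copy()
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        # Height surplus over the neighbor in this direction (periodic edges).
        diff = total - np.roll(total, shift, axis=(0, 1))
        out = np.minimum(flow_frac * np.clip(diff, 0.0, None) / 4.0, h / 4.0)
        new -= out                                                 # fluid leaves this cell
        new += np.roll(out, (-shift[0], -shift[1]), axis=(0, 1))   # and arrives next door
    return new

ground = np.tile(np.linspace(10.0, 0.0, 50), (50, 1))  # an inclined plane
h = np.zeros((50, 50))
h[25, 2] = 5.0                                         # a "vent" releasing fluid
for _ in range(200):
    h = ca_step(h, ground)
print(h.sum())  # total mass is conserved while the blob spreads downslope
```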
<urn:uuid:95ac8bea-9c09-4f94-8019-15b17b666a56>
2.59375
845
Academic Writing
Science & Tech.
47.725799
95,642,886
Hydrology studies the distribution of water on Earth's surface, its interaction with other natural substances, and the role it plays in plant and animal life. The continuous exchange of water between the land and the atmosphere is called the hydrological cycle. Driven by a number of factors, first among them the heat radiated by the sun, water evaporates from the soil, from expanses of water, and from living organisms, then condenses and falls as rain or snow. Most of the water that reaches the Earth's surface as rain or, in general, as precipitation collects in rivers and streams and then flows directly to the seas; the remaining fraction penetrates the soil, where it helps keep the soil moist, is absorbed by plant roots, or seeps underground, feeding the water table and later returning to the surface through springs.

The total amount of water on Earth is estimated at 1.5 billion km³. Of this, 97.4% consists of salt water (oceans and seas) and the remaining 2.6% of fresh water on land which, for the most part, is "trapped" in glaciers and enclosed in groundwater; only a tiny fraction, about 0.015% (the water found in rivers, lakes, the atmosphere as water vapor, and living organisms), is available to humans.

Between these different "reservoirs" there is a constant circulation of water in its three forms: liquid, solid (ice) and vapor; much of the energy necessary for this process comes from the sun, which provides the heat required for evaporation. The water evaporated from the oceans is transported in part over land by atmospheric movements and arrives there in the form of rain or snow (precipitation). About one-third of this water returns to the oceans through surface runoff or underground percolation. The remainder reaches the atmosphere through evaporation or transpiration by plants (evapotranspiration).

The water cycle is the set of phenomena that keeps the water reserves present on Earth constant (see the toy model sketched below):
· The evaporation of water leads to the formation of clouds.
· The clouds are driven by the winds.
· Falling temperatures cause the suspended water and ice to condense and then fall as rain.
· Back on the ground, in the form of rain or snow, the water can evaporate from the soil directly or through the transpiration of plants, or it can run off or infiltrate underground.
· Through springs and rivers, the water flows down to the sea.
· Renewed evaporation restarts the cycle.

Factories, homes and cars that burn fossil fuels release sulfur trioxide and nitrogen oxides into the atmosphere. Driven by solar energy, these substances react with water to form sulphuric acid and nitric acid. The acidified water then returns to Earth in the form of rain. Plants and animals are severely damaged by acid rain. Earth, air and water are connected in the water cycle. Not only the authorities, but also individual citizens, must actively engage in reducing pollution.
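A toy two-box model makes the "constant reserves" point concrete: water moves between boxes, but the total never changes. The volumes and flux fractions below are invented round numbers, not measured values:

```python
# Toy water-cycle box model: net evaporation moves water from ocean to land
# (falling as precipitation), and runoff/percolation returns it to the sea.
def cycle_step(ocean, land, evap_frac=0.001, runoff_frac=0.01):
    to_land = evap_frac * ocean    # evaporated ocean water that falls over land
    to_sea = runoff_frac * land    # rivers, springs and groundwater discharge
    return ocean - to_land + to_sea, land + to_land - to_sea

ocean, land = 1_000_000.0, 50_000.0
for _ in range(2000):
    ocean, land = cycle_step(ocean, land)
print(ocean, land, ocean + land)  # boxes reach a steady state; the total is unchanged
```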
<urn:uuid:23b79ef8-0072-4a91-8cfd-2e6d987131f3>
4
634
Knowledge Article
Science & Tech.
50.4736
95,642,905
Edited by: Theodore H. Fleming and Paul A. Racey
549 pages, 17 colour plates, 47 halftones, 49 line drawings, 46 tables

The second largest order of mammals, Chiroptera comprises more than one thousand species of bats. Because of their mobility, bats are often the only native mammals on isolated oceanic islands, where more than half of all bat species live. These island bats represent an evolutionarily distinctive and ecologically significant part of the earth's biological diversity.

Island Bats is the first book to focus solely on the evolution, ecology, and conservation of bats living in the world's island ecosystems. Among other topics, the contributors to this volume examine how the earth's history has affected the evolution of island bats, investigate how bat populations are affected by volcanic eruptions and hurricanes, and explore the threat of extinction from human disturbance. Geographically diverse, the volume includes studies of the islands of the Caribbean, the Western Indian Ocean, Micronesia, Indonesia, the Philippines, and New Zealand. With its wealth of information from long-term studies, Island Bats provides timely and valuable information about how this fauna has evolved and how it can be conserved.

"Island Bats will be of great interest to ecologists, biogeographers, conservation biologists in general, and bat biologists in particular - especially those interested in the biology of island faunas. The new information presented in this book should stimulate the next generation of bat researchers to increase their efforts to protect and conserve these threatened faunas." - Thomas H. Kunz, editor of Bat Ecology
<urn:uuid:b6e133e8-83ec-40dd-9cfe-993a82641465>
3
419
Product Page
Science & Tech.
30.497724
95,642,930
So, what is the difference between “True South” and “Magnetic South,” anyway? Well, if you imagine the axis that the earth rotates around, the point where that axis pokes out of the earth in the middle of Antarctica is true south. But when you hold up a compass you aren’t really finding “true” north or south; you can only find “magnetic south,” which is the direction toward the south pole of the earth’s geomagnetic field. Believe it or not, this point actually moves a few miles each year because the molten metal in the earth sloshes around.

YOU: “Dave, I think I know south is the best way to orient my solar panels (or north if you happen to live in the southern hemisphere), but do I want to face them magnetic south or true south??”

DAVE: TRUE SOUTH. We’re not concerned with the magnetic poles, just where the sun is.

YOU: “Well that’s great and all Dave, but my compass only shows me magnetic south, how the hell am I supposed to find True South?”

DAVE: Settle down, it’s gonna be ok. There are a few ways, but the most accurate is to find the magnetic declination in your area. (Australia, Canada, US, World). For example, I can tell from these sites that in San Francisco my current magnetic declination is 14° 33′ E. Since my compass shows magnetic bearings and I want a true one, I subtract about 14 degrees from the dial reading: magnetic south reads 180 degrees, so TRUE SOUTH sits at a compass reading of about 166 degrees. Point your panels in that direction!

TIPS: Don’t have a compass? Here’s a simple old-school way. When the sun is at its highest point in the sky (“solar noon”), any shadow cast by a telephone pole or some other perfectly vertical object runs perfectly TRUE north-south. When taking a compass reading, never hold the compass near metal, as it will throw off your reading. Watch out for your belt buckle!

Last modified: May 13, 2016
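The declination arithmetic generalizes to any bearing. A small helper sketch (the function names are mine; the convention is declination positive for east, negative for west):

```python
def compass_reading_for(true_bearing_deg, declination_deg):
    """Compass dial reading that corresponds to a desired true bearing.

    declination_deg > 0 for east declination, < 0 for west.
    """
    return (true_bearing_deg - declination_deg) % 360

def true_bearing_of(compass_reading_deg, declination_deg):
    """True bearing corresponding to a compass dial reading."""
    return (compass_reading_deg + declination_deg) % 360

# San Francisco example from the post (declination about 14.55 degrees E):
print(compass_reading_for(180, 14.55))  # ~165.5: the dial reading for true south
```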
<urn:uuid:d797f0f2-a550-4637-9ffe-9eb1af057117>
3.09375
473
Personal Blog
Science & Tech.
72.852585
95,642,940
Caretta caretta (North East Atlantic subpopulation)
Scientific Name: Caretta caretta (North East Atlantic subpopulation). See Caretta caretta.
Red List Category & Criteria: Endangered B2ab(iii) ver 3.1
Assessor(s): Casale, P. & Marco, A.
Reviewer(s): Wallace, B.P. & Pilcher, N.J.
The North-East Atlantic Loggerhead subpopulation nests in the Cabo Verde Archipelago, with just a few nests recorded in Mauritania and Guinea. Its marine habitats extend throughout a large marine area off the coast of north-west Africa as far as coastal areas of Sierra Leone and the western part of the Mediterranean (Hawkes et al. 2006, Monzon-Arguello et al. 2010). This subpopulation has been identified as a genetic stock separate from other Loggerhead stocks (Monzon-Arguello et al. 2010), comprising several small, genetically diverse nesting groups (Stiebens et al. 2013) that are treated as a single subpopulation, or regional management unit (Wallace et al. 2010), in need of local management (Stiebens et al. 2013). The available data indicate that the Cabo Verde Archipelago is by far the main rookery for this subpopulation. Only a few, poorly quantified nests have been reported from Mauritania and Guinea, and genetic data are not available from these locations to assess whether they belong to the same subpopulation. We assumed, for the purposes of this assessment and given the relative proximity of these sites, that all these rookeries belong to the same subpopulation. The area of occupancy (AOO) (based on nesting habitat) is relatively small and the number of locations is small as well. The main rookery at Cabo Verde is subject to continuing anthropogenic pressure causing a continuing decline in habitat area, extent and quality. Under these circumstances the subpopulation qualifies for the category Endangered according to IUCN Red List criterion B2 subcriteria (a) and (b). This subpopulation meets the thresholds for the Endangered category under subcriteria B2ab(iii): specifically, a small AOO (<500 km²), few locations (<5) and a continuing decline in habitat area, extent and quality. The subpopulation also qualifies for the Vulnerable category under criterion D2, as it occurs in a small number of locations with a plausible threat that could drive it to Critically Endangered (CR) in a short time. Criterion A could not be applied due to the lack of time series datasets with ≥10 years of data representative of the subpopulation's nesting activity. Criterion C was applied, but the subpopulation exceeded the adult-number threshold for all threatened categories. No population viability analysis (criterion E) was available, so the North-East Atlantic Loggerhead subpopulation assessment was conducted by applying criteria A-D. Historic information suggests a strong reduction of the subpopulation (Marco et al. 2012); however, this information cannot easily be converted into indices of abundance comparable with current abundance. For the Loggerhead global and subpopulation assessments we only considered time series datasets (nest counts) of ≥10 yrs. Unfortunately, such datasets were not available for the North East Atlantic subpopulation. First, nesting activity was monitored for a relatively long period at only a few of the beaches in Cabo Verde, and these beaches cannot be considered representative of the whole nesting ground of Cabo Verde because they have been protected by specific conservation programmes (A. Marco, pers. comm.).
Second, it cannot be assumed that the monitoring method was uniform and that different years are comparable. For these reasons, criterion A could not be applied to this subpopulation. Since the subpopulation area includes a large marine area from north-west Africa to the western Mediterranean, the extent of occurrence (EOO) exceeds the threat category threshold (20,000 km²) for criterion B1. Regarding criterion B2, the area of occupancy (AOO) for sea turtles is identified with the nesting beach habitat, which represents the smallest habitat for a critical life stage. The total length of known Loggerhead nesting beaches in the Cabo Verde Archipelago is 212 km (A. Marco, pers. comm.). Since the appropriate scale for AOO is a 2x2 km grid, this linear measure converts to 424 km². However, in insular contexts the linear approximation may not be the best representation of a 2x2 km grid, so we also directly counted the number of 2x2 km cells containing all the nesting beaches of the Cabo Verde archipelago, resulting in 98 cells, equivalent to 392 km². Given the uncertain and anecdotal nature of the nesting activity in Mauritania and Guinea (Fretey 2001), we assume that the total AOO for the North East Atlantic is <500 km², which meets the threshold for the Endangered category. The number of locations of the subpopulation is probably one (Cabo Verde) and at most three, if Mauritania and Guinea are included. The maximum number of locations is therefore <5, which meets the threshold for the Endangered category. Finally, there is evidence of continuing decline in habitat area, extent, and quality: heavy sand extraction and tourism development have caused a dramatic reduction in the number of beaches suitable for sea turtle nesting and in the quality of the remaining beaches (Loureiro 2008). Based on AOO, number of locations and decline in habitat area, extent and quality, the subpopulation qualifies for the Endangered category under criterion B2ab(iii). To apply criterion C, the total number of adult females and males is needed. The nesting female population in Cabo Verde has recently been estimated at 8,900 (Marco et al. 2012). The adult sex ratio is unknown; however, it would have to be extremely skewed (females >89%) for the total to fall below the threshold of 10,000 adults for a threatened category. Therefore, it is likely that the number of adults is above 10,000 and the subpopulation does not meet criterion C. The number of mature individuals (see above, criterion C) exceeds the thresholds for threatened categories. However, the subpopulation is restricted to a few locations (<5) and is subject to threats that could drive it to CR in a short time. These threats are the reduction in area and quality of nesting habitats (see above, criterion B) and the killing for meat consumption of a high proportion (5-36%) of the females nesting in a year (Marco et al. 2012). These threats occur in the single country hosting the majority or totality of the subpopulation (Cabo Verde), and therefore have common causes and common management under the same national regulations. Under these circumstances the subpopulation qualifies for the Vulnerable category under criterion D2. Sources of Uncertainty: The most important source of uncertainty for the assessment of this subpopulation is the limited information about nesting activity along the coasts of north-west Africa.
However, in spite of the general progress in sea turtle knowledge in west Africa, data on Loggerheads remain limited, which is possibly an indication of a genuinely low level of nesting activity by this species on the mainland shores. More information about nesting levels and the genetic characteristics of west African rookeries, as well as an estimate of the adult sex ratio, may improve future assessments of the North East Atlantic subpopulation. For further reading on sources of uncertainty in marine turtle Red List assessments, see Seminoff and Shanker (2008).
Range Description: The Loggerhead Turtle has a worldwide distribution in subtropical to temperate regions of the Mediterranean Sea and Pacific, Indian, and Atlantic Oceans (Wallace et al. 2010) (Figure 1 in the Supplementary Material). The North-East Atlantic subpopulation breeds in the Cabo Verde archipelago (Marco et al. 2012) and along the north-west African coast (Fretey 2001), and its marine habitats extend throughout a large marine area off the coast of north-west Africa as far as coastal areas of Sierra Leone and the western part of the Mediterranean (Hawkes et al. 2006, Monzon-Arguello et al. 2010).
Native: Cape Verde; Guinea; Mauritania; Sierra Leone
FAO Marine Fishing Areas: Atlantic – eastern central
Population: Loggerheads are a single species globally comprising 10 biologically described regional management units (RMUs; Wallace et al. 2010), which describe biologically and geographically explicit population segments by integrating information from nesting sites, mitochondrial and nuclear DNA studies, movements and habitat use by all life stages. RMUs are functionally equivalent to IUCN subpopulations, thus providing the appropriate demographic unit for Red List assessments. The 10 Loggerhead RMUs (hereafter subpopulations) are: Northwest Atlantic Ocean, Northeast Atlantic Ocean, Southwest Atlantic Ocean, Mediterranean Sea, Northeast Indian Ocean, Northwest Indian Ocean, Southeast Indian Ocean, Southwest Indian Ocean, North Pacific Ocean, and South Pacific Ocean. Multiple genetic stocks have been defined according to geographically disparate nesting areas around the world and are included within RMU delineations (Wallace et al. 2010) (shapefiles can be viewed and downloaded at: http://seamap.env.duke.edu/swot).
The North-East Atlantic subpopulation is relatively abundant, with an estimated 10,000-20,000 nests per year and 8,900 adult females (Marco et al. 2012). The subpopulation includes genetically distinct nesting groups (Stiebens et al. 2013).
Current Population Trend: Unknown
Habitat and Ecology: The Loggerhead Turtle nests on insular and mainland sandy beaches throughout the temperate and subtropical regions. Like most sea turtles, Loggerhead Turtles are highly migratory and use a wide range of broadly separated localities and habitats during their lifetimes (Bolten and Witherington 2003). Upon leaving the nesting beach, hatchlings begin an oceanic phase, perhaps floating passively in major current systems (gyres) that serve as open-ocean developmental grounds (Bolten and Witherington 2003). After 4-19 years in the oceanic zone, Loggerheads recruit to neritic developmental areas rich in benthic or epipelagic prey, where they forage and grow until maturity at 10-39 years (Avens and Snover 2013). However, in some subpopulations, such as the North East Atlantic, some adults continue to forage in the oceanic zone (Eder et al.
2012, Hawkes et al. 2006). Upon attaining sexual maturity, Loggerhead Turtles undertake breeding migrations between foraging grounds and nesting areas at remigration intervals of one to several years, with a mean of 2.5-3 years for females (Schroeder et al. 2003), while males appear to have a shorter remigration interval (e.g., Hays et al. 2010, Wibbels et al. 1990). Migrations are carried out by both males and females and may traverse oceanic zones spanning hundreds to thousands of kilometres (Plotkin 2003). During non-breeding periods adults usually reside at coastal neritic feeding areas that sometimes coincide with juvenile developmental habitats (Bolten and Witherington 2003). However, in some cases, such as the North East Atlantic subpopulation, females reside in the oceanic zone (Eder et al. 2012).
The IUCN Red List Criteria define generation length as the average age of parents in a population, i.e., older than the age at maturity and younger than the oldest mature individual, and care should be taken to avoid underestimation (IUCN 2014). Although different subpopulations may have different generation lengths, because this information is limited we adopted the same value for all subpopulations, taking care to avoid underestimation as recommended by IUCN (2014). Loggerheads attain maturity at 10-39 years (Avens and Snover 2013), and we considered 30 years to be equal to or greater than the average age at maturity. Data on reproductive longevity in Loggerheads are limited, but are becoming available with increasing numbers of intensively monitored, long-term projects on protected beaches. Tagging studies have documented reproductive histories of up to 28 years in the North Western Atlantic Ocean (Mote Marine Laboratory, unpubl. data), up to 18 years in the South Western Indian Ocean (Nel et al. 2013), up to 32 years in the South Western Atlantic Ocean (Projeto Tamar unpubl. data), and up to 37 years in the South Western Pacific Ocean, where females nesting for 20-25 years are common (C. Limpus, pers. comm.). We considered 15 years to be equal to or greater than the average reproductive longevity. Therefore, we considered 45 years to be equal to or greater than the average generation length, thereby avoiding underestimation as recommended by IUCN (IUCN Standards and Petitions Subcommittee 2014).
Continuing decline in area, extent and/or quality of habitat: Yes
Generation Length (years): 45
Use and Trade: In the North East Atlantic, a high proportion of nesting females are killed for meat consumption.
Threats to Loggerheads vary in time and space, and in relative impact to populations; threat categories affecting marine turtles, including Loggerheads, were described by Wallace et al. (2011). The main threats to the North East Atlantic subpopulation are the reduction in area and quality of nesting habitats (Loureiro 2008) and the killing for meat consumption of a high proportion (5-36%) of the females nesting in a year (Dutra and Koenen 2014, Marco et al. 2012). Fishery bycatch is also emerging as an important issue (Melo and Melo 2013). Some non-anthropogenic threats, such as embryonic mortality from tidal flooding, predation and fungi, are reasons for concern (Abella Pérez 2010, Sarmiento-Ramírez et al. 2014). Loggerhead Turtles are afforded legislative protection under a number of treaties and laws (Wold 2002).
These include Annex II of the SPAW Protocol to the Cartagena Convention (a protocol concerning specially protected areas and wildlife); Appendix I of CITES (Convention on International Trade in Endangered Species of Wild Fauna and Flora); and Appendices I and II of the Convention on Migratory Species (CMS). A partial list of the international instruments that benefit Loggerhead Turtles includes the Inter-American Convention for the Protection and Conservation of Sea Turtles, the Memorandum of Understanding on the Conservation and Management of Marine Turtles and their Habitats of the Indian Ocean and South-East Asia (IOSEA), the Memorandum of Understanding on ASEAN Sea Turtle Conservation and Protection, the Memorandum of Agreement on the Turtle Islands Heritage Protected Area (TIHPA), and the Memorandum of Understanding Concerning Conservation Measures for Marine Turtles of the Atlantic Coast of Africa. As a result of these designations and agreements, many of the intentional impacts directed at sea turtles have been lessened: harvest of eggs and adults has been slowed at several nesting areas through nesting beach conservation efforts, and an increasing number of community-based initiatives are in place to slow the take of turtles in foraging areas. With regard to incidental take, the implementation of Turtle Excluder Devices has proved beneficial in some areas, primarily in the United States and South and Central America (National Research Council 1990). Guidelines are available to reduce sea turtle mortality in fishing operations in coastal and high seas fisheries (FAO 2009). However, despite these advances, human impacts continue throughout the world. The lack of effective monitoring in pelagic and near-shore fisheries operations still allows substantial direct and indirect mortality, and the uncontrolled development of coastal and marine habitats threatens to destroy the supporting ecosystems of long-lived Loggerhead Turtles. The main conservation actions for the North East Atlantic subpopulation have been conducted in the last decade and comprise awareness programmes, beach patrolling, beach protection and the promotion of regulations aimed at reducing turtle killing for meat consumption (Dutra and Koenen 2014; Marco et al. 2012).
Citation: Casale, P. & Marco, A. 2015. Caretta caretta (North East Atlantic subpopulation). The IUCN Red List of Threatened Species 2015: e.T83776383A83776554. Downloaded on 22 July 2018.
Feedback: If you see any errors or have any questions or suggestions on what is shown on this page, please provide us with feedback so that we can correct or extend the information provided.
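The AOO conversion used under criterion B2 above is easy to reproduce; the short sketch below simply restates the assessment's own arithmetic (212 km of beach, 98 occupied cells) in code form:

```python
# Linear approximation: each km of nesting beach occupies a 2 km-wide grid strip
beach_length_km = 212                 # known nesting beach length, Cabo Verde
aoo_linear_km2 = beach_length_km * 2  # -> 424 km^2

# Direct count: occupied 2x2 km grid cells (98 in the assessment), 4 km^2 each
occupied_cells = 98
aoo_cells_km2 = occupied_cells * 4    # -> 392 km^2

for aoo in (aoo_linear_km2, aoo_cells_km2):
    status = "meets EN B2 AOO threshold" if aoo < 500 else "above threshold"
    print(f"{aoo} km^2: {status}")
```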
<urn:uuid:45142db1-92a6-433a-876c-94257d3e1ec9>
2.96875
3,510
Knowledge Article
Science & Tech.
31.191678
95,642,954
Professor Isa Bar-On
Professor John Bergendahl
Professor Andrew Trapp
Professor Richard Sisson
Professor Brian Savilonis
Professor Yuxiang Liu
Renewable energy technologies are infrequently evaluated with regard to water use for electricity generation; however, traditional thermoelectric power generation uses approximately 50% of the water withdrawn in the US. To address problems of this water-energy nexus, we explore the replacement of existing electricity generation plants by renewable technologies, and the effect of this replacement on water use. Using a binary mixed integer linear programming model, we explore how the replacement of traditional thermoelectric generation with renewable solar and wind technologies can reduce future water demands for power generation. Three case study scenarios focusing on the replacement of the J.T. Deely station, a retiring coal thermoelectric generation plant in Texas, demonstrate a significant decrease in water requirements. In each case study, we replace the generation capacity of the retiring thermoelectric plant with three potential alternative technologies: solar photovoltaic (PV) panels, concentrated solar power (CSP), and horizontal axis wind turbines (HAWT). The first case study, performed with no limits on the land area available for new renewable energy installations, demonstrated the water savings potential of a range of different technology portfolios. Our second case study examined the replacement while constrained by finite available land area for new installations, demonstrating the trade-off between land-use efficiency and water-use efficiency. Results from our third case study, which explored the replacement of a gas-fired plant with a capacity equivalent to the J.T. Deely station, demonstrated that more water-efficient thermoelectric generation technologies yield lower percentage water savings; in two scenarios the proposed portfolios require more water than the replaced plant. Comparison of multiple aspects of our model results with those from existing models shows comparable values for land use per unit of electricity generation and proposed plant size. An evaluation of the estimated hourly generation of our model's proposed solution suggests the need for a trade-off between the intermittency of a technology and the required water use. When estimating the "costs" of alternative energy, our results suggest that the resulting water savings should be included in the expression.
Worcester Polytechnic Institute
All authors have granted to WPI a nonexclusive royalty-free license to distribute copies of the work. Copyright is held by the author or authors, with all rights reserved, unless otherwise noted. If you have any questions, please contact firstname.lastname@example.org.
Stults, E. S. (2015). Minimizing Water Requirements for Electricity Generation in Water Scarce Areas. Retrieved from https://digitalcommons.wpi.edu/etd-dissertations/265
electricity generation, renewable energy, water requirements
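The thesis does not publish its model code, but the portfolio choice it describes can be sketched as a small mixed integer program. The sketch below uses the open-source PuLP library with integer unit counts rather than the thesis's binary build decisions, and every coefficient (water, land, capacity, targets) is an illustrative placeholder, not a value from the thesis:

```python
# Hypothetical sketch of the capacity-replacement MIP described above (PuLP).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpInteger

techs = {  # water proxy (L/MWh), land (km^2/unit), capacity (MW/unit) -- all made up
    "PV":   {"water": 100, "land": 0.03, "cap": 5},
    "CSP":  {"water": 800, "land": 0.02, "cap": 5},
    "HAWT": {"water": 2,   "land": 0.10, "cap": 2},
}
target_capacity_mw = 871   # retiring plant's nameplate capacity (placeholder)
land_budget_km2 = 40       # finite-land case study (placeholder)

prob = LpProblem("replace_thermoelectric", LpMinimize)
n = {t: LpVariable(f"n_{t}", lowBound=0, cat=LpInteger) for t in techs}

# Objective: minimize the portfolio's water demand at full output
prob += lpSum(techs[t]["water"] * techs[t]["cap"] * n[t] for t in techs)
# Replace the retired capacity and respect the available land
prob += lpSum(techs[t]["cap"] * n[t] for t in techs) >= target_capacity_mw
prob += lpSum(techs[t]["land"] * n[t] for t in techs) <= land_budget_km2

prob.solve()
print({t: int(n[t].value()) for t in techs})
```

With these placeholder numbers the land constraint binds, forcing land-hungry but water-frugal wind to be mixed with PV, which is the land-versus-water trade-off the second case study describes.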
<urn:uuid:e46f3f07-6ada-4bee-9ae6-099cc893da1d>
2.71875
590
Academic Writing
Science & Tech.
13.151685
95,642,959
The Effects of Land Use and Management on the Global Carbon Cycle
Major uncertainties in the global carbon (C) balance and in projections of atmospheric CO2 include the magnitude of the net flux of C between the atmosphere and land and the mechanisms responsible for that flux. A number of approaches, both top-down and bottom-up, have been used to estimate the net terrestrial C flux, but they generally fail to distinguish possible mechanisms. In contrast, calculations of C fluxes based on land-use statistics yield both an estimate of flux and its attribution, that is, land-use change. A comparison of the flux calculated from land-use change with estimates of the changes in terrestrial C storage defines a residual terrestrial C sink flux of up to 3 PgC yr-1, usually attributed to the enhancement of growth through environmental changes (for example, CO2 fertilization, increased availability of N, climatic change). We explore whether management (generally not considered in analyses of land-use change), instead of environmental changes, might account for the residual sink flux. We are unable to answer the question definitively. Large uncertainties in estimates of terrestrial C fluxes from top-down analyses and land-use statistics prevent any firm conclusion for the tropics. Changes in land use alone might explain the entire terrestrial sink if changes in management practices, not considered in analyses of land-use change, have created a sink in the northern mid-latitudes.
Keywords: Forest Inventory, Global Carbon Cycle, Global Biogeochemical Cycle, Undisturbed Forest, Global Change Biology
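The "residual terrestrial sink" in the abstract is whatever is left of the global budget once the measured terms are subtracted. A schematic calculation with round, illustrative magnitudes (not numbers from the paper):

```python
# Schematic global carbon budget (PgC/yr); all values are illustrative only.
fossil_and_cement  = 6.3  # emissions from fossil fuel and cement
land_use_change    = 2.0  # net source calculated from land-use statistics
atmospheric_growth = 3.2  # observed increase in atmospheric CO2
ocean_uptake       = 2.1  # estimated ocean sink

# Whatever closes the budget is attributed to a residual terrestrial sink
residual_sink = fossil_and_cement + land_use_change - atmospheric_growth - ocean_uptake
print(f"residual terrestrial sink ~ {residual_sink:.1f} PgC/yr")  # ~3.0
```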
<urn:uuid:6235322d-901d-4c70-8648-36a3541dbadf>
3.140625
322
Truncated
Science & Tech.
21.982692
95,642,962
The world is abuzz with the discovery of an extrasolar, Earth-like planet around the star Gliese 581 that is relatively close to our Earth at 20 light years away in the constellation Libra. Bruce Fegley, Jr., Ph.D., professor of earth and planetary sciences in Arts & Sciences at Washington University in St. Louis, has worked on computer models that can provide hints to what comprises the atmosphere of such planets and better-known celestial bodies in our own solar system. New computer models, from both Earth-based spectroscopy and space mission data, are providing space scientists compelling evidence for a better understanding of planetary atmospheric chemistry. Recent findings suggest a trend of increasing water content in going from Jupiter (depleted in water), to Saturn (less enriched in water than other volatiles), to Uranus and Neptune, which have large water enrichments. "The farther out you go in the solar system, the more water you find," said Fegley. Fegley provided an overview of comparative planetary atmospheric chemistry at the 233rd American Chemical Society National Meeting, held March 25-29, 2007, in Chicago. Fegley and Katharina Lodders-Fegley, Ph.D., research associate professor of earth and planetary sciences, direct the university's Planetary Chemistry Laboratory. "The theory about the Gas Giant planets (Jupiter, Saturn, Uranus, and Neptune) is that they have primary atmospheres, which means that their atmospheres were captured directly from the solar nebula during accretion of the planets," Fegley said. He said that Jupiter has more hydrogen and helium and less carbon, nitrogen and oxygen than the other Gas Giant planets, making its composition closer to that of the hydrogen- and helium-rich sun. In the atmospheres of the Gas Giant planets, the elements hydrogen, carbon and oxygen are predominantly found as water, molecular hydrogen and methane. "Spectroscopic observations and interior models show that Saturn, Uranus and Neptune are enriched in heavier elements," he said. "Jupiter, based on observations from the Galileo Probe, is depleted in water. People have thought that Galileo might just have gone into a dry area. But Earth-based observations show that the carbon monoxide abundance in Jupiter's atmosphere is consistent with the observed abundances of methane, hydrogen and water vapor. This pretty much validates the Galileo Probe finding." The abundances of these four gases are related by the reaction CH4 + H2O = CO + 3H2. Thus, observations of the methane, hydrogen and CO abundances can be used to calculate the water vapor abundance. Likewise, Earth-based observations of methane, hydrogen and carbon monoxide in Saturn's atmosphere show that water is less enriched than methane. In contrast, observations of methane, hydrogen and carbon monoxide in the atmospheres of Uranus and Neptune show that water is greatly enriched in these two planets. Although generally classed with Jupiter and Saturn, Uranus and Neptune are water planets with relatively thin gaseous envelopes. "On the other hand, the terrestrial planets Venus, Earth and Mars have secondary atmospheres formed afterwards by outgassing — heating up the solid material that was accreted and then releasing the volatile compounds from it," Fegley said. "That then formed the earliest atmosphere."
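The reaction quoted above can be rearranged by mass action to back out the water abundance from the three observed gases; a toy sketch (the equilibrium constant and partial pressures are placeholders, not actual Jupiter data):

```python
# CH4 + H2O <=> CO + 3 H2, so Kp = (p_CO * p_H2**3) / (p_CH4 * p_H2O),
# and the water partial pressure follows from the three observed gases:
def p_h2o(p_co, p_h2, p_ch4, Kp):
    return p_co * p_h2**3 / (Kp * p_ch4)

# Placeholder values (bars) purely to show the rearrangement:
print(p_h2o(p_co=1e-9, p_h2=0.86, p_ch4=2e-3, Kp=40.0))
```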
He said that by plugging in models he's done on the outgassing of chondritic materials and using photochemical models of the effects of UV sunlight, he and his collaborator Laura Schaefer, a research assistant in the Washington University Department of Earth and Planetary Sciences, can speculate on the atmospheric composition of Earth-like planets in other solar systems. "With new theoretical models we are able to surmise the outgassing of materials that went into forming the planets, and even make predictions about the atmospheres of extrasolar terrestrial planets," he said. "Because the composition of the galaxy is relatively uniform, most stars are like the sun — hydrogen-rich with about the same abundances of rocky elements — we can predict what these planetary atmospheres would be like," Fegley said. "I think that the atmospheres of extrasolar Earth-like planets would be more like Mars or Venus than the Earth." Fegley said that photosynthesis accounts for the oxygen in Earth's atmosphere; without it, the Earth's atmosphere would consist of nitrogen, carbon dioxide and water vapor, with only small amounts of oxygen. Oxygen is 21 percent of Earth's atmosphere; in contrast, Mars has about one-tenth of one percent, made by UV sunlight destroying carbon dioxide. "I see Mars today as a great natural laboratory for photochemistry; Venus is the same for thermochemistry, and Earth for biochemistry," he said. "Mars has such a thin atmosphere compared to Earth or Venus. UV light can penetrate all the way down to the Martian surface before it's absorbed. That same light on Earth is mainly absorbed in the ozone layer in the lower Earth stratosphere. Venus is so dense that light is absorbed by a cloud layer about 45 kilometers or so above the Venusian surface."
Tony Fitzpatrick | EurekAlert!
<urn:uuid:975fd610-9576-473b-a0ed-d98ece65fedc>
3.390625
1,634
Content Listing
Science & Tech.
35.082914
95,642,969
Study reveals how differences in male and female brains emerge in C. elegans News May 16, 2016 Nematode worms may not be from Mars or Venus, but they do have sex-specific circuits in their brains that cause the males and females to act differently. According to new research published in Nature, scientists have determined how these sexually dimorphic (occurring in either males or females) connections arise in the worm nervous system. The research was funded by the U.S. National Institutes of Health's (NIH's) National Institute of Neurological Disorders and Stroke (NINDS). "For decades, there has been little focus on the impact of sex on many areas of biomedical research," said Coryse St. Hillaire-Clarke, PhD, program officer on this NINDS project. "This study helps us understand how sex can influence brain connectivity." In nematode worms (Caenorhabditis elegans), a small number of neurons are found exclusively in male or female brains. The remaining neurons are found in both sexes, although their connection patterns are different in male and female brains. Oliver Hobert, PhD, professor of biological sciences at Columbia University in New York City, and his colleagues looked at how these wiring patterns form. Dr. Hobert's team observed that in the worms' juvenile state, before they reach sexual maturity, their brain connections were in a hybrid, or mixed, state comprising both male and female arrangements. As they reached sexual maturity, however, their brains underwent a pruning process, which got rid of particular connections and led to either male or female patterns. "We found that differences in male and female brains develop from a ground state, which contains features of both sexes. From this developmental state, distinctly male or female features eventually emerge," said Dr. Hobert. Next, Dr. Hobert's team showed that sex-specific wiring in the brain results in dimorphic behavior. They discovered that PHB neurons, chemosensory brain cells that detect chemical cues in the environment such as food, predators or potential mates, work differently in males and females. In males, these neurons proved to be important in recognizing mating cues, while in females, the neurons helped them avoid specific taste cues. However, early in development, PHB neurons in males also responded to signals regulating taste, suggesting that even though those neurons are found in all nematodes, in adults, their functions differ as a result of sex-specific wiring in the brain. Dr. Hobert's team used genetically engineered nematodes to look more carefully at individual connections between brain cells. The researchers found that swapping the sex of individual neurons changed wiring patterns and influenced behavioral differences in males and females. Additional experiments helped to identify genes involved in regulating the pruning process during development. Dr. Hobert's group discovered that certain transcription factors, which are molecules that help control gene activity, are present in a dimorphic state and may help establish male or female connections in the brain. In future experiments, Dr. Hobert and his colleagues plan to examine how these molecules target specific connections for pruning. Note: Material may have been edited for length and content. For further information, please contact the cited source. Oren-Suissa M, Bayer EA, Hobert O. Sex-specific pruning of neuronal synapses in Caenorhabditis elegans. Nature, Published Online May 4 2016.
doi: 10.1038/nature17977
<urn:uuid:03b4bb2d-032a-4bf5-ab39-72e8b6c1e192>
3.171875
912
News Article
Science & Tech.
34.048717
95,642,972
Britain Just Ran Entirely on Coal-Free Energy for Nearly Six Days ...for the first time ever Over 50 percent of the United Kingdom’s electricity has come from low-carbon sources including UK nuclear, imported French nuclear, biomass, hydro, wind and solar for the first time ever. The new study from energy company Drax, which runs a biomass power station, found electricity from low-emission sources had peaked at 50.2 percent between July and September – a huge improvement on the 20 percent being produced by the same sources in 2010 – and saw Britain run coal-free for nearly six days in the last quarter. Nuclear energy provided 26 percent, the largest share of the UK’s low-carbon generation, for the period in question, with renewable energies providing a further 20 percent as they continue to grow in capacity. While nuclear energy has natural limits of its own that could become a problem in the more distant future, the progress being made is undeniably positive. Wind and solar installed capacity in Britain has increased six-fold in the past six years, while biomass has also seen a huge increase. The report said: “Britain’s electricity was completely coal-free for nearly six days over the last quarter. "Coal plants have been pushed off the system by competition from gas, nuclear and renewables. 5 May 2016 was a historic day, the first time since 1881 that Britain burnt no coal to produce its electricity. "Far from being a one-off, this has continued to become the norm over summer." The news follows the declaration that Sweden wants to become the first fossil-fuel-free country in the world, an energy revolution of sorts under way in Scotland, and the UK Government’s announced plans that would see Britain’s last coal-fired power stations close by 2025. Last year saw green energy account for more than half of all new electricity capacity for the first time, leading the International Energy Agency to call it a “turning point" for the planet. The announcement is welcome news particularly in light of Donald Trump’s victory across the pond, which will see America become the only country in the world to have a leader who has actively denied climate change. Trump has spoken out about abandoning the Paris Agreement, scrapping the EPA and effectively cancelling much of the crucial work done by President Obama to get the ball rolling towards a cleaner planet. The hope is that rather than following through on his previous statements, the President-Elect will at least make compromises when in office, and that the efforts of China and India won’t diminish as a consequence; any such slackening would be a catastrophe for the state of the planet. As more countries push on with low-emissions energy alternatives, the technology will only garner more attention, and as it continues to gather momentum, one would hope it will progress to the stage where it cannot be ignored.
<urn:uuid:58256885-e56a-4cde-b288-7b5732ef20ef>
2.859375
607
News Article
Science & Tech.
41.428647
95,642,987
Despite widespread concern about climate change, annual carbon dioxide emissions from burning fossil fuels and manufacturing cement have grown 38 percent since 1992, from 6.1 billion tons of carbon to 8.5 billion tons in 2007. At the same time, the source of emissions has shifted dramatically as energy use has been growing slowly in many developed countries but more quickly in some developing countries, most notably in rapidly developing Asian countries such as China and India. These are the findings of an analysis completed by the Department of Energy's Carbon Dioxide Information Analysis Center at Oak Ridge National Laboratory. "The United States was the largest emitter of CO2 in 1992, followed in order by China, Russia, Japan and India," said Gregg Marland of ORNL's Environmental Sciences Division. "The most recent estimates suggest that India passed Japan in 2002, China became the largest emitter in 2006, and India is poised to pass Russia to become the third largest emitter, probably this year." The latest estimates of annual emissions of carbon dioxide to the atmosphere indicate that emissions are continuing to grow rapidly and that the pattern of emissions has changed markedly since the drafting of the United Nations Framework Convention on Climate Change in 1992. It was then that the international community expressed concern about limiting emissions of greenhouse gases. In the Kyoto Protocol, 38 developed countries initially agreed to limit their emissions of greenhouse gases in an effort to minimize their potential impact on the Earth's climate system. At the time of drafting the United Nations Convention, those 38 countries were responsible for 62 percent of carbon dioxide emissions attributable to all countries. By the time the Kyoto Protocol was drafted in 1997 that fraction was down to 57 percent. The recent emissions estimates show that by the time the Kyoto Protocol came into force in 2005 those 38 countries were the source of less than half of the national total of emissions (an estimated 49.7 percent), and this value as of 2007 was 47 percent. More than half of global emissions are now from the so-called "developing countries." The Kyoto Protocol has been ratified by 181 countries, but not by the United States. Marland emphasizes that these emissions numbers are subject to some uncertainty: about 5 percent for the United States, but possibly as much as 20 percent for China. "These are our best estimates, but precise numbers cannot be known with certainty," Marland said. "Also, as countries with less certain data become more important to the overall CO2 picture, the estimates of the global total of emissions become less certain." While this national distribution of emissions is significant in the context of international agreements like the Kyoto Protocol, its practical significance is less clear in a world linked by international commerce, co-author Jay Gregg of the University of Maryland noted. A recent study has estimated, for example, that a third of CO2 emissions from China in 2005 were due to production of goods for export. Current estimates of national CO2 emissions show simply the amount of CO2 emitted from within a country and do not take into consideration the impact of international trade in goods and services or the energy used in international travel and transport. The new estimates of CO2 emissions are based on energy data through 2005 from the United Nations, cement data through 2005 from the U.S.
Geological Survey, energy data for 2006 and 2007 from BP, and extrapolations by Marland, Gregg and co-authors Tom Boden and Bob Andres of ORNL. Burning fossil fuels and manufacturing cement, along with deforestation, are the most important human-related sources of carbon dioxide emissions to the atmosphere, according to the researchers. The cement data take into account the breakdown of limestone to produce lime. Researchers also note that the new CO2 data include minor downward revisions of estimates for recent years, but the trends are not changed.
<urn:uuid:d38b48f7-ee85-41fe-8b46-0b705196e45a>
3.515625
763
News Article
Science & Tech.
33.12791
95,643,000
Hi, I am a bit confused with this question. I don't know what I'm doing wrong, but I keep getting the wrong answer. Question: Three capacitors (4.0, 6.0, and 10.0 uF) are connected in series across a 50.0 V battery. Find the voltage across the 4.0 uF capacitor. Answer in Volts.
© BrainMass Inc., brainmass.com, July 16, 2018
The posted solution illustrates how to solve this physics question; the equations needed are provided.
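The excerpt stops short of the worked answer; for reference, here is the standard series-capacitor calculation (every capacitor in series carries the same charge), sketched in Python:

```python
# Series capacitors share the same charge: Q = C_eq * V, and V_i = Q / C_i
capacitances_uF = [4.0, 6.0, 10.0]
v_supply = 50.0

c_eq = 1.0 / sum(1.0 / c for c in capacitances_uF)  # 1/C_eq = 1/4 + 1/6 + 1/10
q_uC = c_eq * v_supply                              # uF * V = uC
v_4uF = q_uC / capacitances_uF[0]

print(f"C_eq = {c_eq:.3f} uF, Q = {q_uC:.1f} uC, V across 4 uF = {v_4uF:.1f} V")
# -> C_eq ~ 1.935 uF, Q ~ 96.8 uC, V across the 4.0 uF capacitor ~ 24.2 V
```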
<urn:uuid:73b07155-2463-4743-9cd2-9af33a868af8>
2.703125
139
Q&A Forum
Science & Tech.
87.50551
95,643,001
"Orbital flight of CubeSats in extremely Low Earth Orbit, defined here as an altitude between 150 – 250 km, has the potential to enable a wide range of missions in support of atmospheric measurements, national security, and natural resource monitoring. In this work, a mission study is presented to demonstrate the feasibility of using commercially available sensor and electric thruster technology to extend the orbital lifetime of a 3U CubeSat flying at an altitude of 210 km. The CubeSat consists of a 3U configuration and assumes the use of commercially available sensors, GPS, and electric power systems. The thruster is a de-rated version of a commercially available electrospray thruster operating at 2 W, 0.175 mN thrust, and an Isp of 500 s. The mission consists of two phases. In Phase I the CubeSat is deployed from the International Space Station orbit (414 km) and uses the thruster to de-orbit to the target altitude of 210 km. Phase II then begins during which the propulsion system is used to extend the mission lifetime until propellant is fully expended. A control algorithm based on maintaining a target orbital energy is presented in which simulated GPS updates are corrupted with measurement noise to simulate state data which would be available to the spacecraft computer. An Extended Kalman Filter is used to generate estimates of the orbital dynamic state between the 1 Hz GPS updates, allowing thruster control commands at a frequency of 10 Hz. For Phase I, operating at full thrust, the spacecraft requires 25.21 days to descend from 414 to 210 km, corresponding to a ΔV = 96.25 m/s and a propellant consumption of 77.8 g. Phase II, the primary mission phase, lasts for 57.83 days, corresponding to a ΔV = 119.15 m/s during which the remaining 94.2 g of propellant are consumed. " Worcester Polytechnic Institute All authors have granted to WPI a nonexclusive royalty-free license to distribute copies of the work. Copyright is held by the author or authors, with all rights reserved, unless otherwise noted. If you have any questions, please contact email@example.com. Martinez, Nicolas, "Feasibility for Orbital Life Extension of a CubeSat Flying in the Lower Thermosphere" (2015). Masters Theses (All Theses, All Years). 917. GPS, thermosphere, LEO, CubeSat, electrospray, energy, orbital
<urn:uuid:cf984b78-2a60-4f16-9e44-b1968f95d9e8>
2.734375
511
Academic Writing
Science & Tech.
46.278975
95,643,012
Effect of Nanoparticles on Electrolytes and Electrode/Electrolyte Interface
The addition of nano-sized inorganic fillers such as SiO2 to solid and liquid electrolytes to enhance their electrochemical and physical properties has recently been the focus of a great deal of research. In this chapter, we review the work done in this area, where various types of nanoparticles, including ceramics and clay, were used as additives to electrolytes commonly used in lithium-ion battery research, such as polymer electrolytes (gel and solid form), ionic and organic liquid electrolytes, and plastic crystals.
Keywords: Ionic Liquid, Ionic Conductivity, Polymer Electrolyte, Solid Electrolyte, Liquid Electrolyte
<urn:uuid:b2393497-4181-4155-92b9-647241cf1ebd>
2.625
313
Academic Writing
Science & Tech.
4.738608
95,643,030
Found most commonly in these habitats: 4 times found in secondary lowland rainforest, 5 times found in primary lowland rainforest, 8 times found in rainforest, 7 times found in Bamboo forest, 2 times found in 2º wet forest, 3 times found in tropical wet forest, 5 times found in premontane rainforest, 4 times found in tropical moist forest, 3 times found in 2º lowland rainforest, 3 times found in tropical rainforest, ... Found most commonly in these microhabitats: 32 times ex sifted leaf litter, 21 times Malaise trap, 1 times combined specimens of all flight intercept pan traps, 1 times Claro, 1 times bajo de M/25, 1 times SCH, strays from roadside. camponotus striatus ps times road to San Luis below Stuckys'. 3, 1 times litter, 1 times ex log litter, 1 times Eciton bivouac debris. Collected most commonly using these methods: 23 times winkler, 24 times Malaise, 11 times maxiWinkler, 8 times search, 4 times Malaise trap, 2 times berlese, 2 times flight intercept trap, 1 times MiniWinkler. Elevations: collected from 30 - 2250 meters, 361 meters average AntWeb content is licensed under a Creative Commons Attribution License. We encourage use of AntWeb images. In print, each image must include attribution to its photographer and "from www.AntWeb.org" in the figure caption. For websites, images must be clearly identified as coming from www.AntWeb.org, with a backward link to the respective source page. See How to Cite AntWeb. Antweb is funded from private donations and from grants from the National Science Foundation, DEB-0344731, EF-0431330 and DEB-0842395.
<urn:uuid:339c9fb0-399f-4f37-997d-d3de05573611>
2.765625
416
Knowledge Article
Science & Tech.
57.93471
95,643,031
In June 2018, scientists observed a global dust storm on Mars that has become one of the largest in the entire history of observations. The storm began near Perseverance Valley and in less than a week covered an area of 18 million km². Scientists had hoped that the storm would not affect the operation of the rovers Opportunity and Curiosity, but it reached them both. To date, the storm has covered a quarter of the surface of Mars and shows no sign of subsiding. As a result, the record-holding rover Opportunity had to be suspended, and the other robotic inhabitants of Mars have been switched to special operating modes.

How NASA's Mars rovers were designed for dust

Dust storms on the Red Planet are considered a common phenomenon and are taken into account in the design of every rover. Opportunity, which was delivered to Mars in early 2004, was also well prepared, even though it is powered by solar panels. Martian dust settling on their surface significantly reduces electricity production, so scientists initially expected Opportunity to last only a few weeks. That was true at first: after the warranty period, the rover went into sleep mode. Soon, however, a strong wind unexpectedly cleared the surface of the solar panels, allowing Opportunity to continue its work on Mars. As a result, Opportunity has now operated for more than 55 times its design lifetime, with winds regularly cleaning its solar panels and letting it resume work again and again.

Now, though, scientists are more worried than usual. The current dust storm is one of the largest in all of Opportunity's time on Mars. The opacity of the Martian atmosphere is on average twice its usual value, so the rover's batteries can be discharged to a critical level. The rover needs energy not only to operate but also to maintain its internal temperature, which is kept up by a special heating system. If battery power runs out and the storm continues to rage, Opportunity may repeat the fate of its twin, Spirit, which was unable to resume work after its internal temperature fell to a critical level around -50 °C.

NASA's other rover, Curiosity, is on the opposite side of the Red Planet, but the dust storm has reached it as well. This rover worries scientists less, since it is powered not by solar panels but by a radioisotope thermoelectric generator. Dust is not a serious threat to it, so the rover remains in contact with Earth and recently even sent a selfie taken during the storm. Data from this and other spacecraft show that the opacity of the atmosphere, which has already reached 10.8 tau (a measure of how much sunlight is blocked by dust), continues to increase, so it is too early to speak of the storm ending. The storm does, however, give scientists an additional opportunity to study Martian weather: climatologists on the Curiosity science mission plan to study the behavior of Martian dust from the planet's surface during such global events. Scientists still do not know why some storms on Mars subside within days while others last for months.
All that remains is to wait for the results of their research and hope that contact with Opportunity, the most successful Mars rover, which has been on Mars for almost 15 years, will be restored.
<urn:uuid:912c1220-7334-4212-856f-ece14868b346>
3.484375
723
News Article
Science & Tech.
41.021453
95,643,046
RNA silencing occurs in a broad range of organisms. Although its ancestral function is probably related to the genome defense mechanism against repetitive selfish elements, it has been found that RNA silencing regulates different cellular processes such as gene expression and chromosomal segregation. In Neurospora crassa, an RNA silencing mechanism, called quelling, acts to repress the expression of transgenes and transposons, but until now no other cellular functions have been shown to be regulated by this mechanism.
<urn:uuid:c8d98097-1b30-47ad-92da-d2b91ea8b33a>
3.40625
119
Academic Writing
Science & Tech.
8.899032
95,643,078