text: stringlengths (174 to 655k)
id: stringlengths (47 to 47)
score: float64 (2.52 to 5.25)
tokens: int64 (39 to 148k)
format: stringclasses (24 values)
topic: stringclasses (2 values)
fr_ease: float64 (-483.68 to 157)
__index__: int64 (0 to 1.48M)
Energy Definition Science. Energy, in science, is normally defined as the capacity to do work. You may well recognize that energy is needed to do work, but what is work? Work can be described as the movement of a mass while a force is applied to it. In other... A few youngsters are simply more intense, energetic and persistent than average. Here are some things you should, and should not, do when raising energetic kids. Energy for kids can be very important for us. Crude oil, natural gas, and coal are called fossil fuels because they were formed over... Sustainable energy is a form of energy that meets today's demand for energy without putting sources at risk of becoming depleted, and it can be used over and over again. Sustainable energy should be widely encouraged because it does not cause any harm to the environment. And water energy is energy derived from the power of water, most usually its movement. Energy sources using water have been around for thousands of years in the form of water clocks and waterwheels. A more recent innovation has been hydroelectricity, or the power produced by the flow of... The term energy crisis is used quite loosely, so it pays to be clear about what is under discussion. Broadly speaking, the term poses three distinct questions. Will we run out of energy? We rely on coal, oil, and gas (the fossil fuels) for over 80% of our current energy needs. Light energy is a kind of kinetic energy with the ability to make forms of light visible to human eyes. Light consists of photons, which are minute packets of energy. When an object's atoms are heated up, photons are produced; this is how photons come about. The...
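The definition of work above lends itself to a small worked example. The sketch below is purely illustrative; the mass, height and the resulting speed are assumed example values, not figures from the text.

```python
# Purely illustrative numbers (assumed, not from the text): the work done
# lifting a small mass, and the speed the same energy would give it.
g = 9.81        # gravitational acceleration in m/s^2
mass = 2.0      # kg (assumed)
height = 1.5    # m (assumed)

work = mass * g * height                 # W = F * d, with F = m * g
print(f"Work done lifting the mass: {work:.1f} J")   # about 29.4 J

# If the same amount of energy went into motion, W = (1/2) * m * v**2
velocity = (2 * work / mass) ** 0.5
print(f"Speed the same energy would give the mass: {velocity:.2f} m/s")
```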
<urn:uuid:83582a58-ecac-4461-b682-ba6625e33504>
2.796875
371
Content Listing
Science & Tech.
48.444756
95,505,904
Chilean President Michelle Bachelet put hammer to stone on an Andean mountaintop on Wednesday evening to mark the start of construction for one of the world’s most advanced telescopes, an instrument that may help shed light on the possibility of life on distant planets. The Giant Magellan Telescope (GMT), scheduled to be completed by 2024, will have a resolution 10 times that of the Hubble spacecraft. Experts say it will be able to observe black holes in the distant cosmos and make out planets in other solar systems with unprecedented detail. Such technology, astronomers say, will help humans determine how the universe formed and if planets hundreds of light years away could support life. “With this science, there are no limits to the possibilities that are open,” said Bachelet, standing on the GMT’s site, a wind-buffeted, 8,250-foot (2,500-meter) mountaintop. “What it does is open the door to understanding,” she said. The GMT – a collaboration of institutions in the United States, Chile, South Korea, Brazil, and Australia – will rely on seven intricately curved lenses, each almost 28 feet (8.5 meters) wide. For the system to work, no one lens can have a blemish of more than 25 nanometers, which is some four thousand times smaller than the average width of a human hair. “Astronomy is like archaeology; what we see in the sky happened many years ago,” said Yuri Beletsky, a Belarussian astronomer for the GMT. “The biggest expectation is that we find something that we don’t expect,” he added on a bus driving up sinuous switchbacks to the planned observatory. Two other massive instruments – the European Extremely Large Telescope, also in Chile, and the Thirty Meter Telescope in Hawaii – are scheduled to be completed in the 2020s as well. But GMT President Patrick McCarthy says the telescope’s massive single lenses and wider observation field will allow for more precise measurements. Among the phenomena he hopes to observe is dark matter, mysterious invisible material that makes up most of the universe’s mass. Astronomers say Chile’s bone-dry Atacama Desert, host to the GMT and dozens of other high-powered telescopes, is uniquely suited to space observation as it has dry air, high mountains, and little light pollution. McCarthy also points out that another advantage for astronomers in Chile is that the airflow from the nearby Pacific Ocean is smoother than that over continental deserts, meaning scientists have to contend with less atmospheric interference. (Reporting by Gram Slattery; Editing by Anthony Esposito, Sandra Maler and Steve Orlofsky)
<urn:uuid:2196ab07-f4fb-4934-8db6-60ca1a8eda59>
3.4375
576
Truncated
Science & Tech.
33.92611
95,505,980
Electroholography enables the projection of three-dimensional (3-D) images using a spatial-light modulator. The extreme computational complexity and load involved in generating a hologram make real-time production of holograms difficult. Many methods have been proposed to overcome this challenge and realize real-time reconstruction of 3-D motion pictures. We review two real-time reconstruction techniques for aerial-projection holographic displays. The first reduces the computational load required for a hologram by using an image-type computer-generated hologram (CGH) because an image-type CGH is generated from a 3-D object that is located on or close to the hologram plane. The other technique parallelizes CGH calculation via a graphics processing unit by exploiting the independence of each pixel in the holographic plane.
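The per-pixel independence mentioned in the abstract is what makes CGH generation easy to parallelize. The sketch below is a minimal, illustrative point-source CGH in NumPy, not code from the reviewed work; the wavelength, pixel pitch, grid size and object points are all assumed values, and a GPU implementation would map the same per-pixel arithmetic onto one thread per hologram pixel.

```python
import numpy as np

# Illustrative sketch only: a simple point-source computer-generated hologram.
# Every hologram pixel is computed independently of every other pixel, which
# is the property the GPU parallelization described above exploits.

wavelength = 532e-9                  # metres (assumed green laser)
k = 2 * np.pi / wavelength           # wavenumber
pitch = 8e-6                         # hologram pixel pitch in metres (assumed)
N = 512                              # hologram is N x N pixels (assumed)

# Coordinates of every pixel in the hologram plane (z = 0)
xs = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(xs, xs)

# A tiny 3-D object: a few point sources at (x, y, z), with z measured
# from the hologram plane (assumed positions, in metres)
points = np.array([[0.0, 0.0, 0.05],
                   [1e-3, -5e-4, 0.06],
                   [-8e-4, 6e-4, 0.055]])

# Each pixel accumulates the complex field of a spherical wave from every
# object point. The loop over points is short; the per-pixel work is fully
# data parallel (here via NumPy broadcasting, on a GPU via one thread per pixel).
field = np.zeros((N, N), dtype=complex)
for px, py, pz in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += np.exp(1j * k * r) / r

# Interfere with an on-axis plane reference wave to obtain an amplitude hologram
reference = np.abs(field).mean()     # reference amplitude chosen for fringe contrast
hologram = np.abs(field + reference) ** 2
```

An image-type CGH, as described in the abstract, would instead place the object points on or very close to the hologram plane, so far fewer samples contribute to each pixel and the computational load drops accordingly.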
<urn:uuid:4fdc0202-a5e5-4457-81f9-a628eb1c6336>
2.5625
167
Academic Writing
Science & Tech.
14.81855
95,505,982
His theoretical research into light-induced processes in the hydroxyl radical (OH), the hydrogen molecule (H2) and nitrous oxide (N2O) has directly contributed to a better understanding of chemical processes taking place on Earth as well as in the universe. The interaction between light and matter is vitally important for a wide range of applications, such as the modelling of chemical processes in the Earth's atmosphere, research into combustion processes and the measurement and modelling of processes in astrophysics. Under the influence of light, molecules can vibrate, rotate, disintegrate or even be formed out of individual atoms. These processes take place according to the laws of quantum mechanics. Although the basic equations are known, solving them is a considerable technical problem. Thanks to Van der Loo's research, such solutions are a step closer. Van der Loo's doctoral research was funded by the NWO programme Jonge Chemici (Young Chemists). David Redeker | alfa
<urn:uuid:39587394-39c3-4543-83b0-52b3c7d5a630>
2.84375
777
Content Listing
Science & Tech.
35.184878
95,506,012
Keeping global temperature increases to the lower end of the Paris climate accord would make a dramatic difference to the severity of coral bleaching by mid-century, according to research to be presented to the UN's World Heritage Committee. The study, conducted by scientists including Australians, found only four of the 29 World Heritage reefs are projected to experience severe bleaching twice a decade from heat stress by the second half of this century if warming can be kept to 1.5 degrees, compared with pre-industrial levels. That projection, in an update report to the First Global Scientific Assessment of the World Heritage reefs, compares with an assessment last year that projected 25 of those 29 sites – including the Great Barrier Reef – would suffer severe bleaching twice a decade by 2040 if global greenhouse gas emissions were allowed to increase under business-as-usual conditions. "Coral reefs are one of the more sensitive ecosystems and so the impacts [of climate change] are unveiled earliest," said Scott Heron, lead author of the report, and an oceanographer with Coral Reef Watch, run by the US National Oceanic and Atmospheric Administration. "Action on greenhouse gas emissions ... is critical at this stage." Corals suffering sustained heat stress typically expel algae known as zooxanthellae, losing their main source of energy and colour. Unprecedented back-to-back coral bleaching in the summers of 2015-16 and 2016-17 triggered the deaths of about half the corals on the Great Barrier Reef alone. Of the World Heritage-listed natural coral reef properties, 15 were exposed to repeated severe heat stress between 1985 and 2013, while 25 of the 29 suffered bleaching in the subsequent three years, according to the assessment published last year. The Turnbull government's handling of the Great Barrier Reef's health has lately been controversial following a decision to direct $444 million of reef research funding in the budget for the year ending this month to the little-known, privately funded Reef Foundation. Keeping global warming to under 1.5 degrees, though, will be a difficult task given the roughly 1-degree increase so far and the time-lag effect of the carbon pollution already emitted. Dr Heron's report notes the low-emissions scenario modelled implies pollution peaks during the 2010-2020 decade. Even a low-carbon track, however, won't be enough to spare the Phoenix Islands Protected Area in Kiribati, which will be among the four sites likely to suffer severe coral bleaching twice a decade by 2038, the report said. The other three are protected sites off the Cocos Islands, the Galapagos Islands and Panama. Dr Heron, who is a researcher at James Cook University and supported by the Australian Marine Conservation Society, will release his report at this week's World Heritage Committee meeting in Bahrain. He is also planning to unveil a new Climate Vulnerability Index assessment tool to highlight the climate action needed to save all of the 1000-plus cultural and natural World Heritage-listed sites worldwide. "There's not really the capacity within the World Heritage process to address [climate change]," said Dr Heron. "These are the best of the best places to protect." Climate change impacts "have the potential to overwhelm any efforts that we put into local management", he said. Separately, Hilde Heine, President of the Marshall Islands, will on Thursday announce the first online political leaders summit to drum up support for lifting the emissions goals pledged at the 2015 Paris climate accord.
The summit is due to be held after the expected release in October of the Intergovernmental Panel on Climate Change's special report on the impacts of a temperature increase of 1.5 degrees.
<urn:uuid:6626c063-5b84-4eb0-bdb2-546ce4ccef2f>
3.828125
753
News Article
Science & Tech.
37.602435
95,506,016
Ability to study cubic ice in the lab could aid climate change models You won't find ice cubes like this in your freezer. Researchers created ice crystals with a near-perfect cubic arrangement of water molecules, in order to better understand how high-altitude ice clouds interact with sunlight and the atmosphere. In this X-ray diffraction image, the ice crystals have scattered X-rays to create concentric rings, which are a fingerprint of the molecular arrangement within the crystals. Image courtesy of The Ohio State University An international team of scientists has set a new record for creating ice crystals that have a near-perfect cubic arrangement of water molecules--a form of ice that may exist in the coldest high-altitude clouds but is extremely hard to make on Earth. The ability to make and study cubic ice in the laboratory could improve computer models of how clouds interact with sunlight and the atmosphere--two keys to understanding climate change, said Barbara Wyslouzil, project leader and professor of chemical and biomolecular engineering at The Ohio State University. It could also enhance our understanding of water - one of the most important molecules for life on our planet. Seen under a microscope, normal water ice--everything from frozen ponds, to snow, to the ice we make at home--is made of crystals with hexagonal symmetry, Wyslouzil explained. But with only a slight change in how the water molecules are arranged in ice, the crystals can take on a cubic form. So far, researchers have used the presence of cold cubic ice clouds high above the earth's surface to explain interesting halos observed around the sun, as well as the presence of triangular ice crystals in the atmosphere. Scientists have struggled for decades to make cubic ice in the laboratory, but because the cubic form is unstable, the closest anyone has come is to make hybrid crystals that are around 70 percent cubic, 30 percent hexagonal. In a paper published in the Journal of Physical Chemistry Letters, Wyslouzil, graduate research associate Andrew Amaya and their collaborators describe how they were able to create frozen water droplets that were nearly 80 percent cubic. "While 80 percent might not sound 'near perfect,' most researchers no longer believe that 100 percent pure cubic ice is attainable in the lab or in nature," she said. "So the question is, how cubic can we make it with current technology? Previous experiments and computer simulations observed ice that is about 75 percent cubic, but we've exceeded that." To make the highly cubic ice, the researchers drew nitrogen and water vapor through nozzles at supersonic speeds. When the gas expanded, it cooled and formed droplets a hundred thousand times smaller than the average raindrop. These droplets were highly supercooled, meaning that they were liquid well below the usual freezing temperature of 32 degrees Fahrenheit (0 degrees Celsius). In fact, the droplets remained liquid until about -55 degrees Fahrenheit (around -48 degrees Celsius) and then froze in about one millionth of a second. To measure the cubicity of the ice formed in the nozzle, researchers performed X-ray diffraction experiments at the Linac Coherent Light Source (LCLS) at the SLAC National Accelerator Laboratory in Menlo Park, CA. There, they hit the droplets with the high-intensity X-ray laser from LCLS and recorded the diffraction pattern on an X-ray camera. They saw concentric rings at wavelengths and intensities that indicated the crystals were around 80 percent cubic. 
The extremely low temperatures and rapid freezing were crucial to forming cubic ice, Wyslouzil said: "Since liquid water drops in high-altitude clouds are typically supercooled, there is a good chance for cubic ice to form there." Exactly why it was possible to make crystals with around 80 percent cubicity is currently unknown. But, then again, exactly how water freezes on the molecular level is also unknown. "When water freezes slowly, we can think of ice as being built from water molecules the way you build a brick wall, one brick on top of the other," said Claudiu Stan, a research associate at the Stanford PULSE Institute at SLAC and partner in the project. "But freezing in high-altitude clouds happens too fast for that to be the case--instead, freezing might be thought of as starting from a disordered pile of bricks that hastily rearranges itself to form a brick wall, possibly containing defects or having an unusual arrangement. This kind of crystal-making process is so fast and complex that we need sophisticated equipment just to begin to see what is happening. Our research is motivated by the idea that in the future we can develop experiments that will let us see crystals as they form." Additional co-authors on the paper were from Ohio State, SLAC, the National University of Singapore, Stockholm University, KTH Royal Institute of Technology, Brookhaven National Laboratory and the National Science Foundation BioXFEL Science and Technology Center. The research was funded by the National Science Foundation, the U.S. Department of Energy and SLAC. The use of LCLS was supported by the U.S. Department of Energy Office of Science.
<urn:uuid:89a643b9-905c-4769-9b64-22a1e10c6bfe>
4.03125
1,725
Content Listing
Science & Tech.
39.265909
95,506,028
Exocytosis is a process of cellular excretion in which substances contained in vesicles are discharged from the cell by fusion of the vesicular membrane with the outer cell membrane. Or, to put it in other words, exocytosis is a cellular process in which cells eject waste products or chemical transmitters (such as hormones) from the interior of the cell. Exocytosis is similar in function to endocytosis but works in the opposite direction. In multicellular organisms there are two types of exocytosis: 1) Ca2+-triggered non-constitutive and 2) non-Ca2+-triggered constitutive. Exocytosis in neuronal chemical synapses is Ca2+ triggered and serves interneuronal signalling. Constitutive exocytosis is performed by all cells and serves the release of components of the extracellular matrix, or simply the delivery of newly synthesized membrane proteins that are incorporated in the plasma membrane after the fusion of the transport vesicle. Exocytosis is the opposite of endocytosis. There are five steps to exocytosis: 1) in the first step, the vesicle containing the waste product is transported through the cytoplasm towards the part of the cell from which it will be eliminated; 2) as the vesicle approaches the cell membrane, it is secured and pulled towards the part of the cell from which it will be eliminated; 3) in the third step, the vesicle comes into contact with the cell membrane, where it begins to chemically and physically merge with the proteins in the cell membrane; 4) the fourth step involves the chemical preparations for the last step of exocytosis; 5) in the last step, the proteins forming the walls of the vesicle merge with the cell membrane and breach it, pushing the vesicle contents (waste products or chemical transmitters) out of the cell. This step is the primary mechanism for the increase in size of the cell's plasma membrane.
<urn:uuid:04855dc9-4357-45cc-b87e-d1e17fccb0be>
3.921875
416
Knowledge Article
Science & Tech.
19.476317
95,506,037
Title text: Handy exam trick: when you know the answer but not the correct derivation, derive blindly forward from the givens and backward from the answer, and join the chains once the equations start looking similar. Sometimes the graders don't notice the seam. In college courses with a very large number of students (picture the huge, tiered, amphitheater-style lecture halls shown in any movie or TV show about college), teaching assistants are employed to help the professors grade student work. In math and science courses, students are expected to solve the problems and show their work as supporting evidence. Due to the high volume of work to grade, whether it's being done by the professor or a TA, the grader will get lazy and look for correct answers and the existence of work without checking that the work is accurate. The math switches from √ being square root notation to it being division notation midway. That is an illegal operation. But the correct answer is reached anyway, because 27 is the correct answer to 3 * 9, 3√81, and 81 ÷ 3. - [A problem is given on an arithmetic test: "4) 3x9=?". In handwriting, the student's work follows. The student has accurately reformatted the question as 3 times the square root of 81, which visually resembles the long division problem of 3 divided into 81, and then solved the latter to get 27 — the correct answer to both.] add a comment! ⋅ add a topic (use sparingly)! ⋅ refresh comments! In the middle of a Physics I exam, I forgot one of the equations of motion. Using my basic working knowledge of Calculus and the relationship between acceleration, velocity, and position, I managed to derive an equation which I used to solve the problem. When I got my exam back, I was given only partial credit because I got the right answer using the wrong formula.Smperron (talk) - Ah, [insert your nation here]'s educational system at work. ImVeryAngryItsNotButter (talk) (please sign your comments with ~~~~) - I did something like this last year when we had to calculate how long it would take for something to fall ... meters on the moon, making a graph with y=v and x=t and making an integral by rewriting v to ...t^2, and then solving for the ... meters. My teacher gave me one point less for not using the correct formula but 2 bonus points for taking thinking about this solution. He is great. 13:37 UTC(I think) 8 octobre 2016 - Or perhaps re-estimate the value of making the cubed root of 81 look like 27 when the marker knows it is really 4.32674871092 and a bit. I used Google News BEFORE it was clickbait (talk) - Uh, no, xth root is written as superscript....? Or am I missing the joke. 184.108.40.206 19:54, 31 October 2015 (UTC) I had an old math teacher once who didn't spend too much effort in grading trickier problems, so I got away with something similar in deriving Lagrange's Trig Identity in a complex class. Maybe 8 steps from the LHS and 2 steps from the RHS were right, and the equals sign that joined them was a leap of faith. --Quicksilver (talk) 01:55, 20 August 2013 (UTC) This is particularly useful when Wolfram|Alpha tells you the answer, but you don't know how to get the answer, and you can't afford Pro. MonacleforSauron (talk) 23:58, 1 August 2016 (UTC) Some browsers seem not to support formulas without add-ons etc. I added the formula also in simple characters. 220.127.116.11 (talk) (please sign your comments with ~~~~)
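For readers who want to verify the arithmetic discussed in the explanation above, here is a quick, illustrative Python check (not part of the comic or the wiki) that all three readings of the student's notation give the same value, which is why the seam between square root and long division goes unnoticed.

```python
import math

# The three readings of the student's scribble all come out to 27,
# so a hurried grader sees a "correct" answer either way.
print(3 * 9)               # the intended product: 27
print(3 * math.sqrt(81))   # 3 times the square root of 81: 27.0
print(81 / 3)              # 81 divided by 3: 27.0
```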
<urn:uuid:effe4d2e-5d19-46af-baab-00fc0a746f13>
3.21875
815
Comment Section
Science & Tech.
72.666467
95,506,046
Researchers manage to reactivate head regeneration in a regeneration-deficient species of planarians The rabbit can’t do it, neither can a frog, but zebrafish and axolotls can and flatworms are true masters of the craft: Regeneration. Why some animals can re-grow lost body parts or organs while others cannot remains a big mystery. And even more intriguing to us regeneration-challenged humans is the question whether one might be able to activate regenerative abilities in species that don’t usually regenerate. Researchers at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden are now one step further in understanding the factors that regulate regeneration. They discovered a crucial molecular switch in the flatworm Dendrocoelum lacteum that decides whether a lost head can be regenerated or not. And what is even more spectacular: The scientists manipulated the genetic circuitry of the worm in such a way as to fully restore its regeneration potential. In his lab, Jochen Rink, research group leader at the MPI-CBG, usually studies the flatworm species Schmidtea mediterranea. It is known for its excellent regenerative abilities and thus a popular model species in regeneration research: “We can cut the worm to 200 pieces, and 200 new worms will regenerate from each and every piece”, Rink explains. Now, for a change, Rink and colleagues brought a different beast into the lab, the flatworm Dendrocoelum lacteum. Even though a close cousin of the regeneration master S. mediterranea, this species had been reported to be incapable of regenerating heads from its posterior body half. “What’s the salient difference between the two cousins”, the researcher asked? Together with researchers from the Center for Regenerative Therapies Dresden Rink’s team searched for an answer amongst the genes of the two species, focusing on the so-called Wnt-signaling pathway. Like a cable link between two computers, signalling pathways transmit information between cells. The Dresden researchers inhibited the signal transducer of the Wnt pathway with RNAi and thus made the cells of the worm believe that the signalling pathway had been switched to „off”. Consequently, Dendrocoelum lacteum were able to grow a fully functional head everywhere, even when cut at the very tail. Re-building a head complete with brain, eyes and all the wiring in between is evidently complicated business. However, as the study showed, regeneration defects are not necessarily irreversible. Jochen Rink is stunned: “We thought we would have to manipulate hundreds of different switches to repair a regeneration defect; now we learned that sometimes only a few nodes may do”. Will this knowledge soon be applicable to more complex organisms – like humans, for example? “We showed that by comparisons amongst related species we can obtain insights into why some animals regenerate while others don’t – that’s an important first step”. Original publication: S.-Y. Liu, C. Selck, B. Friedrich, R. Lutz, M. Vila-Farré, A. Dahl, H. Brandl, N. Lakshmanaperumal, I. Henry & J. C. Rink Reactivating head regrowth in a regeneration-deficient planarian species. Nature, 25 July 2013
<urn:uuid:dfdc0f1b-7e54-48ab-885d-a29aceb33204>
3.109375
709
News Article
Science & Tech.
40.149364
95,506,047
According to Blair Hedges, a professor of biology at Penn State University and the leader of the research team, half of the newly added skink species already may be extinct or close to extinction, and all of the others on the Caribbean islands are threatened with extinction. Twenty-four new species of lizards known as skinks have been discovered on Caribbean islands by a team led by Blair Hedges, of Penn State University, who has described the species scientifically. Half of these new species already may be extinct or close to extinction. The loss of many skink species can be attributed primarily to predation by the mongoose -- a predatory mammal that was introduced by farmers. Other types of human activity, especially the removal of forests, also are to blame, according to the researchers. This picture is of one of the new species, an Anguilla Bank skink. Credit: Karl Questel, courtesy of Penn State University The researchers found that the loss of many skink species can be attributed primarily to predation by the mongoose -- an invasive predatory mammal that was introduced by farmers to control rats in sugarcane fields during the late nineteenth century. The research team reports on the newly discovered skinks in a 245-page article to be published on 30 April 2012 in the journal Zootaxa. About 130 species of reptiles from all over the world are added to the global species count each year in dozens of scientific articles. However, not since the 1800s have more than 20 reptile species been added at one time. Primarily through examination of museum specimens, the team identified a total of 39 species of skinks from the Caribbean islands, including 6 species currently recognized, and another 9 named long ago but considered invalid until now. Hedges and his team also used DNA sequences, but most of the taxonomic information, such as counts and shapes of scales, came from examination of the animals themselves. "Now, one of the smallest groups of lizards in this region of the world has become one of the largest groups," Hedges said. "We were completely surprised to find what amounts to a new fauna, with co-occurring species and different ecological types." He added that some of the new species are 6 times larger in body size than other species in the new fauna. Hedges also explained that these New World skinks, which arrived in the Americas about 18 million years ago from Africa by floating on mats of vegetation, are unique among lizards in that they produce a human-like placenta, which is an organ that directly connects the growing offspring to the maternal tissues that provide nutrients. "While there are other lizards that give live birth, only a fraction of the lizards known as skinks make a placenta and gestate offspring for up to one year," Hedges said. He also speculated that the lengthy gestational period may have given predators a competitive edge over skinks, since pregnant females are slower and more vulnerable. "The mongoose is the predator we believe is responsible for many of the species' close-to-extinction status in the Caribbean," Hedges said. "Our data show that the mongoose, which was introduced from India in 1872 and spread around the islands over the next three decades, has nearly exterminated this entire reptile fauna, which had gone largely unnoticed by scientists and conservationists until now." 
According to Hedges, the "smoking gun" is a graph included in the scientific paper showing a sharp decline in skink populations that occurred soon after the introduction of the mongoose. Hedges explained that the mongoose originally was brought to the New World to control rats, which had become pests in the sugarcane fields in Cuba, Hispaniola, Puerto Rico, Jamaica, and the Lesser Antilles. While this strategy did help to control infestations of some pests, for example the Norway rat, it also had the unintended consequence of reducing almost all skink populations. "By 1900, less than 50 percent of those mongoose islands still had their skinks, and the loss has continued to this day," Hedges said. This newly discovered skink fauna will increase dramatically the number of reptiles categorized as "critically endangered" by the International Union for Conservation of Nature in their "Red List of Threatened Species," which is recognized as the most comprehensive database evaluating the endangerment status of various plant and animal species. "According to our research, all of the skink species found only on Caribbean islands are threatened," Hedges said. "That is, they should be classified in the Red List as either vulnerable, endangered, or critically endangered. Finding that all species in a fauna are threatened is unusual, because only 24 percent of the 3,336 reptile species listed in the Red List have been classified as threatened with extinction. Most of the 9,596 named reptile species have yet to be classified in the Red List." The other member of the research team, Caitlin Conn, now a researcher at the University of Georgia and formerly a biology major in Penn State's Eberly College of Science and a student in Penn State's Schreyer Honors College at the time of the research, added that researchers might be able to use the new data to plan conservation efforts, to study the geographic overlap of similar species, and to study in more detail the skinks' adaptation to different ecological habitats or niches. The research team also stressed that, while the mongoose introduction by humans now has been linked to these reptile declines and extinctions, other types of human activity, especially the removal of forests, are to blame for the loss of other species in the Caribbean. Funding for the research comes from the National Science Foundation.
<urn:uuid:8d6fb7c5-a387-4883-8396-545930f0362f>
3.734375
1,839
Content Listing
Science & Tech.
37.607471
95,506,054
The phenomenon known as zodiacal light appears as a pyramid-shaped glow that is best viewed around an hour after sunset and is visible to lucky stargazers in Tenerife for the next few weeks. Zodiacal light is caused by sunlight reflecting off interplanetary dust particles that orbit the sun within the inner solar system at a distance of about 600 million km from Earth. These grains are ancient, and are thought to be left over from the process that created Earth around 4.5 billion years ago. The glow coincides with snowfall on the El Teide volcano, which at 3,718 meters (12,198 ft) above sea level is Spain's highest mountain. "Snow-covered El Teide and zodiacal light, a physical phenomenon in which sunlight is reflected and scattered by the interplanetary dust lying in the plane that defines the Earth's orbit (the ecliptic). Credit: Daniel López / @IAC_Astrofisica pic.twitter.com/rxvXDcNYqM" – IAC Astrofísica (@IAC_Astrofisica), February 5, 2018. These two images were captured on February 3rd by Juan Carlos Casado and Daniel López and published by the Astrophysics Institute of the Canary Islands. The photos were taken from the Teide Observatory, the largest solar observatory in the world, which is located in the El Teide National Park, a UNESCO World Heritage Site and a recognised 'Starlight Tourist Destination' because of its exceptionally clear and protected dark skies. John E Beckman, an astrophysicist at the Instituto de Astrofísica de Canarias, explained exactly what it is that the photos capture. "The zodiacal light is sunlight reflected off very fine dust particles which lie in the plane of the ecliptic. This is the plane in which the planets orbit the sun, and is a remarkably very thin disc, all the planets orbit in the same plane," he told The Local. "In very clear air, away from contamination and light pollution it is possible to see it with the naked eye, and of course a photograph accumulates light and makes it much easier to pick out." And the best way to see it? "It is hard to see in moonlight, so observable only when the moon is either dark or well below the horizon. "It is strongest near to the sun, so best observed just after sunset or just before sunrise, although with any kind of cloud or dust in the air this will not be possible as the remaining sunlight scattered in our atmosphere will conceal it. "As the dust particles are very small they tend to spiral in to the sun quite quickly, and need to be continually replaced. This must be due to cometary material which is stripped from the comets as they go close to the sun." "We share some images of Teide National Park and the Teide Observatory after the latest snowfalls, taken by astrophotographer and @IAC_Astrofisica collaborator Daniel López. #OTTenerife pic.twitter.com/2rb6uRmjlQ" – IAC Astrofísica (@IAC_Astrofisica), February 5, 2018. He explained that Teide is an almost perfect observational site: "The Teide National Park is an almost ideal site to observe the Zodiacal Light, as it has very clean air, and in many directions the light pollution from towns is very small, which is one of the reasons why we have the observatories."
<urn:uuid:35e8e0e1-dbc1-4824-91db-7c522a8414dc>
3.640625
821
News Article
Science & Tech.
40.88112
95,506,056
A magnetic domain is a region within a magnetic material in which the magnetization is in a uniform direction. This means that the individual magnetic moments of the atoms are aligned with one another and they point in the same direction. When cooled below a temperature called the Curie temperature, the magnetization of a piece of ferromagnetic material spontaneously divides into many small regions called magnetic domains. The magnetization within each domain points in a uniform direction, but the magnetization of different domains may point in different directions. Magnetic domain structure is responsible for the magnetic behavior of ferromagnetic materials like iron, nickel, cobalt and their alloys, and ferrimagnetic materials like ferrite. This includes the formation of permanent magnets and the attraction of ferromagnetic materials to a magnetic field. The regions separating magnetic domains are called domain walls, where the magnetization rotates coherently from the direction in one domain to that in the next domain. The study of magnetic domains is called micromagnetics. Magnetic domains form in materials which have magnetic ordering; that is, their dipoles spontaneously align due to the exchange interaction. These are the ferromagnetic, ferrimagnetic and antiferromagnetic materials. Paramagnetic and diamagnetic materials, in which the dipoles align in response to an external field but do not spontaneously align, do not have magnetic domains. Development of domain theory Magnetic domain theory was developed by French physicist Pierre-Ernest Weiss who, in 1906, suggested the existence of magnetic domains in ferromagnets. He suggested that a large number of atomic magnetic moments (typically 10^12-10^18) were aligned parallel. The direction of alignment varies from domain to domain in a more or less random manner, although certain crystallographic axes may be preferred by the magnetic moments, called easy axes. Weiss still had to explain the reason for the spontaneous alignment of atomic moments within a ferromagnetic material, and he came up with the so-called Weiss mean field. He assumed that a given magnetic moment in a material experienced a very high effective magnetic field H_w due to the magnetization of its neighbors. In the original Weiss theory the mean field was proportional to the bulk magnetization M, so that H_w = α M, where α is the mean field constant. However this is not applicable to ferromagnets due to the variation of magnetization from domain to domain. In this case, the interaction field is H_w = α M_s, where M_s is the saturation magnetization at 0 K. Later, the quantum theory made it possible to understand the microscopic origin of the Weiss field. The exchange interaction between localized spins favored a parallel (in ferromagnets) or an anti-parallel (in anti-ferromagnets) state of neighboring magnetic moments. Why domains form The reason a piece of magnetic material such as iron spontaneously divides into separate domains, rather than existing in a state with magnetization in the same direction throughout the material, is to minimize its internal energy. A large region of ferromagnetic material with a constant magnetization throughout will create a large magnetic field extending into the space outside itself (diagram a, right).
This requires a lot of magnetostatic energy stored in the field. To reduce this energy, the sample can split into two domains, with the magnetization in opposite directions in each domain (diagram b right). The magnetic field lines pass in loops in opposite directions through each domain, reducing the field outside the material. To reduce the field energy further, each of these domains can split also, resulting in smaller parallel domains with magnetization in alternating directions, with smaller amounts of field outside the material. The domain structure of actual magnetic materials does not usually form by the process of large domains splitting into smaller ones as described here. When a sample is cooled below the Curie temperature, for example, the equilibrium domain configuration simply appears. But domains can split, and the description of domains splitting is often used to reveal the energy tradeoffs in domain formation. Size of domains As explained above, a domain which is too big is unstable, and will divide into smaller domains. But a small enough domain will be stable and will not split, and this determines the size of the domains created in a material. This size depends on the balance of several energies within the material. Each time a region of magnetization splits into two domains, it creates a domain wall between the domains, where magnetic dipoles (molecules) with magnetization pointing in different directions are adjacent. The exchange interaction which creates the magnetization is a force which tends to align nearby dipoles so they point in the same direction. Forcing adjacent dipoles to point in different directions requires energy. Therefore, a domain wall requires extra energy, called the domain wall energy, which is proportional to the area of the wall. Thus the net amount that the energy is reduced when a domain splits is equal to the difference between the magnetic field energy saved, and the additional energy required to create the domain wall. The field energy is proportional to the cube of the domain size, while the domain wall energy is proportional to the square of the domain size. So as the domains get smaller, the net energy saved by splitting decreases. The domains keep dividing into smaller domains until the energy cost of creating an additional domain wall is just equal to the field energy saved. Then the domains of this size are stable. In most materials the domains are microscopic in size, around 10−4 - 10−6 m. An additional way for the material to further reduce its magnetostatic energy is to form domains with magnetization at right angles to the other domains (diagram c, right), instead of just in opposing parallel directions. These domains, called flux closure domains, allow the field lines to turn 180° within the material, forming closed loops entirely within the material, reducing the magnetostatic energy to zero. However, forming these domains incurs two additional energy costs. First, the crystal lattice of most magnetic materials has magnetic anisotropy, which means it has an "easy" direction of magnetization, parallel to one of the crystal axes. Changing the magnetization of the material to any other direction takes additional energy, called the "magnetocrystalline anisotropy energy". The other energy cost to creating domains with magnetization at an angle to the "easy" direction is caused by the phenomenon called magnetostriction. 
When the magnetization of a piece of magnetic material is changed to a different direction, it causes a slight change in its shape. The change in magnetic field causes the magnetic dipole molecules to change shape slightly, making the crystal lattice longer in one dimension and shorter in other dimensions. However, since the magnetic domain is "squished in" with its boundaries held rigid by the surrounding material, it cannot actually change shape. So instead, changing the direction of the magnetization induces tiny mechanical stresses in the material, requiring more energy to create the domain. This is called "magnetoelastic anisotropy energy". To form these closure domains with "sideways" magnetization requires additional energy due to the aforementioned two factors. So flux closure domains will only form where the magnetostatic energy saved is greater than the sum of the "exchange energy" to create the domain wall, the magnetocrystalline anisotropy energy, and the magnetoelastic anisotropy energy. Therefore, most of the volume of the material is occupied by domains with magnetization either "up" or "down" along the "easy" direction, and the flux closure domains only form in small areas at the edges of the other domains where they are needed to provide a path for magnetic field lines to change direction (diagram c, above). The above describes magnetic domain structure in a perfect crystal lattice, such as would be found in a single crystal of iron. However most magnetic materials are polycrystalline, composed of microscopic crystalline grains. These grains are not the same as domains. Each grain is a little crystal, with the crystal lattices of separate grains oriented in random directions. In most materials, each grain is big enough to contain several domains. Each crystal has an "easy" axis of magnetization, and is divided into domains with the axis of magnetization parallel to this axis, in alternate directions. It can be seen that, although on a microscopic scale almost all the magnetic dipoles in a piece of ferromagnetic material are lined up parallel to their neighbors in domains, creating strong local magnetic fields, energy minimization results in a domain structure that minimizes the large-scale magnetic field. The domains point in different directions, confining the field lines to microscopic loops between neighboring domains within the material, so the combined fields cancel at a distance. Therefore, a bulk piece of ferromagnetic material in its lowest energy state has little or no external magnetic field. The material is said to be "unmagnetized". However, the domains can also exist in other configurations in which their magnetization mostly points in the same direction, creating an external magnetic field. Although these are not minimum energy configurations, due to a phenomenon where the domain walls become "pinned" to defects in the crystal lattice they can be local minimums of the energy, and therefore can be very stable. Applying an external magnetic field to the material can make the domain walls move, causing the domains aligned with the field to grow, and the opposing domains to shrink. When the external field is removed, the domain walls remain pinned in their new orientation and the aligned domains produce a magnetic field. This is what happens when a piece of ferromagnetic material is "magnetized" and becomes a permanent magnet. 
Heating a magnet, subjecting it to vibration by hammering it, or applying a rapidly oscillating magnetic field from a degaussing coil, tends to pull the domain walls free from their pinned states, and they will return to a lower energy configuration with less external magnetic field, thus "demagnetizing" the material. Landau-Lifshitz energy equation The contributions of the different internal energy factors described above is expressed by the free energy equation proposed by Lev Landau and Evgeny Lifshitz in 1935, which forms the basis of the modern theory of magnetic domains. The domain structure of a material is the one which minimizes the Gibbs free energy of the material. For a crystal of magnetic material, this is the Landau-Lifshitz free energy, E, which is the sum of these energy terms: - Eex is exchange energy: This is the energy due to the exchange interaction between magnetic dipole molecules in ferromagnetic, ferrimagnetic and antiferromagnetic materials. It is lowest when the dipoles are all pointed in the same direction, so it is responsible for magnetization of magnetic materials. When two domains with different directions of magnetization are next to each other, at the domain wall between them magnetic dipoles pointed in different directions lie next to each other, increasing this energy. This additional exchange energy is proportional to the total area of the domain walls. - ED is magnetostatic energy: This is a self-energy, due to the interaction of the magnetic field created by the magnetization in some part of the sample on other parts of the same sample. It is dependent on the volume occupied by the magnetic field extending outside the domain. This energy is reduced by minimizing the length of the loops of magnetic field lines outside the domain. For example, this tends to encourage the magnetization to be parallel to the surfaces of the sample, so the field lines won't pass outside the sample. Reducing this energy is the main reason for the creation of magnetic domains. - Eλ is magnetoelastic anisotropy energy: This energy is due to the effect of magnetostriction, a slight change in the dimensions of the crystal when magnetized. This causes elastic strains in the lattice, and the direction of magnetization that minimizes these strain energies will be favored. This energy tends to be minimized when the axis of magnetization of the domains in a crystal are all parallel. - Ek is magnetocrystalline anisotropy energy: Due to its magnetic anisotropy, the crystal lattice is "easy" to magnetize in one direction, and "hard" to magnetize in others. This energy is minimized when the magnetization is along the "easy" crystal axis, so the magnetization of most of the domains in a crystal grain tend to be in either direction along the "easy" axis. Since the crystal lattice in separate grains of the material is usually oriented in different random directions, this causes the dominant domain magnetization in different grains to be pointed in different directions. - EH is Zeeman energy: This is energy which is added to or subtracted from the magnetostatic energy, due to the interaction between the magnetic material and an externally applied magnetic field. It is proportional to the negative of the cosine of the angle between the field and magnetization vectors. Domains with their magnetic field oriented parallel to the applied field reduce this energy, while domains with their magnetic field oriented opposite to the applied field increase this energy. 
So applying a magnetic field to a ferromagnetic material generally causes the domain walls to move so as to increase the size of domains lying mostly parallel to the field, at the cost of decreasing the size of domains opposing the field. This is what happens when ferromagnetic materials are "magnetized". With a strong enough external field, the domains opposing the field are swallowed up and disappear; this is called saturation.

Some sources define a wall energy EW equal to the sum of the exchange energy and the magnetocrystalline anisotropy energy, which replaces Eex and Ek in the above equation.

A stable domain structure is a magnetization function M(x), considered as a continuous vector field, which minimizes the total energy E throughout the material. To find the minima, a variational method is used, resulting in a set of nonlinear differential equations, called Brown's equations after William Fuller Brown Jr. Although in principle these equations can be solved for the stable domain configurations M(x), in practice only the simplest examples can be solved. Analytic solutions do not exist, and numerical solutions calculated by the finite element method are computationally intractable because of the large difference in scale between the domain size and the wall size. Therefore, micromagnetics has evolved approximate methods which assume that the magnetization of dipoles in the bulk of the domain, away from the wall, all point in the same direction, and numerical solutions are only used near the domain wall, where the magnetization changes rapidly.

Domain imaging techniques

There are a number of microscopy methods that can be used to visualize the magnetization at the surface of a magnetic material, revealing the magnetic domains. Each method has a different application because not all domains are the same. In magnetic materials, domains can be circular, square, irregular, elongated, and striped, all of which have varied sizes and dimensions.

Magneto-optic Kerr effect (MOKE)

Large domains, in the range of 25-100 micrometers, can easily be seen by Kerr microscopy, which uses the magneto-optic Kerr effect: the rotation of the polarization of light reflected from a magnetized surface.

Lorentz microscopy is a transmission electron microscopy technique used to study magnetic domain structures at very high resolution. Off-axis electron holography is a related technique used to observe magnetic structures by detecting nanoscale magnetic fields.

Magnetic force microscopy (MFM)

Another technique for viewing sub-microscopic domain structures, down to a scale of a few nanometers, is magnetic force microscopy. MFM is a form of atomic force microscopy that uses a magnetically coated probe tip to scan the sample surface.

Bitter patterns are a technique for imaging magnetic domains that were first observed by Francis Bitter. The technique involves placing a small quantity of ferrofluid on the surface of a ferromagnetic material. The ferrofluid arranges itself along the magnetic domain walls, which have higher magnetic flux than the regions of the material located within domains. A modified Bitter technique has been incorporated into a widely used device, the Large Area Domain Viewer, which is particularly useful in the examination of grain-oriented silicon steels.

- P. Weiss (1906). "La variation du ferromagnétisme avec la température". Comptes Rendus 143, pp. 1136-1149; cited in Cullity & Graham 2008, p. 116.
- Cullity, B. D.; Graham, C. D. (2008). Introduction to Magnetic Materials, 2nd ed. New York: Wiley-IEEE. p. 116. ISBN 0-471-47741-9.
- Feynman, Richard P.; Leighton, Robert B.; Sands, Matthew (1963). The Feynman Lectures on Physics, Vol. II. US: California Inst. of Technology. pp. 37.5-37.6. ISBN 0-201-02117-X.
- Dan Wei (2012). Micromagnetics and Recording Materials. Springer Science & Business Media. ISBN 978-3-642-28577-6.
- Carey, R.; Isaac, E. D. (1966). Magnetic Domains and Techniques for Their Observation. London: The English University Press Ltd.
- A Dictionary of Physics (2009). Oxford University Press.
- Taylor, R. J. (1989). "A large area domain viewer". Proceedings of SMM9.
- Jiles, David (1998). Introduction to Magnetism and Magnetic Materials. London: Chapman & Hall. ISBN 0-412-79860-3.
- Magnetismus und Magnetooptik, a German-language text about magnetism and magneto-optics
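A minimal numerical sketch of the competition between exchange and anisotropy energy that sets the width and energy of a domain wall, using the standard continuum estimates. The formulas (wall width roughly pi*sqrt(A/K), areal wall energy roughly 4*sqrt(A*K)) and the cobalt-like parameter values are textbook approximations assumed for illustration; they are not taken from the article or its references.

```python
import math

def bloch_wall(A, K):
    """Standard continuum estimates for a 180-degree Bloch wall.

    A : exchange stiffness in J/m
    K : uniaxial anisotropy constant in J/m^3
    Returns (wall width in m, wall energy per unit area in J/m^2).
    """
    delta = math.pi * math.sqrt(A / K)   # wall width: exchange vs. anisotropy balance
    sigma = 4.0 * math.sqrt(A * K)       # areal wall energy
    return delta, sigma

# Illustrative, cobalt-like values (assumed for the example, not from the text).
A = 3.0e-11   # J/m
K = 4.5e5     # J/m^3
delta, sigma = bloch_wall(A, K)
print(f"wall width  ~ {delta * 1e9:.1f} nm")      # tens of nanometres
print(f"wall energy ~ {sigma * 1e3:.1f} mJ/m^2")
```

The large ratio between typical domain sizes (micrometres and up) and this wall width is exactly the scale separation that makes brute-force finite element solutions of Brown's equations intractable, as noted above.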
<urn:uuid:000b38ca-c6ba-43d3-bc0a-63d0bb5a93ed>
3.890625
3,735
Knowledge Article
Science & Tech.
29.597458
95,506,065
Identifying the targets that bacterial viruses, or phages, use to halt bacterial growth, and then screening those targets for small-molecule inhibitors that attack them, provides a unique platform for the discovery of novel antibiotics. Researchers from Montreal-based PhageTech, Inc. describe this novel method for discovering new classes of antibiotics in the February issue of Nature Biotechnology. The article is available on-line today at www.nature.com/nbt/.

"Over the course of evolution, the multitudes of phages that attack bacteria have developed unique proteins that bind to and inactivate (or redirect) critical cellular targets within their prey," said Jing Liu, Ph.D., corresponding author of the publication. "This binding shuts off key metabolic processes in the bacteria, diverting those organisms from their own growth and reproduction to the production of new phage progeny. We believe these phage-identified bacterial 'weak spots' will provide useful screening targets for discovering the sorts of truly novel antibiotics needed to combat growing antibiotic resistance."

The publication's authors used a high-throughput phage genomics strategy to identify 31 novel polypeptide families that inhibit Staphylococcus aureus growth when expressed in the bacteria. Several of these were found to attack targets essential for bacterial DNA replication or transcription. They then employed the interaction between a prototypic phage peptide, ORF104 of phage 77, and its bacterial target, DnaI, to screen for small molecule inhibitors. Using this strategy, the researchers found several novel compounds that inhibited both bacterial growth and DNA synthesis.
<urn:uuid:e4ddef41-f122-494f-9fe8-c143be79bf06>
2.90625
984
Content Listing
Science & Tech.
35.08492
95,506,070
Many snakes are born with venomous bites they can use for defense. But what can non-venomous snakes do to ward off predators? What if they could borrow a dose of poison by eating toxic toads, then recycling the toxins? That's exactly what happens in the relationship between an Asian snake and a species of toad, according to a team of researchers funded by the National Science Foundation (NSF) Division of Integrative Organismal Systems (IOS).

Herpetologists Deborah Hutchinson and Alan Savitzky of Old Dominion University in Norfolk, Va., and colleagues published results of research on the snake's dependence on certain toads in this week's online issue of the journal Proceedings of the National Academy of Sciences.

Hutchinson studied the Asian snake Rhabdophis tigrinus and its relationship to a species of toxic toad it eats. In the PNAS paper, she and co-authors describe dietary sequestration of toxins by the snakes. The process allows the snakes to store toxins from the toads in their neck glands. When under attack, the snakes re-release the poisons from these neck glands.

Many invertebrates sequester dietary toxins for use in defense, including milkweed insects and sea slugs. But vertebrate examples of toxin sequestration, especially from vertebrate prey, are rare. "A snake that's dependent on a diet of toads for chemical defense is highly unusual," said Hutchinson.

Hutchinson said the research had identified six compounds in the snakes that may hold promise in medical treatments for people suffering from hypertension and related blood pressure disorders.

The researchers made their case by testing Rhabdophis tigrinus on several Japanese islands, one with a large population of the toxic toads and another with none, and compared them with snakes from the Japanese island of Honshu, where toads are few. The presence of toxins in the snakes' neck glands depended upon their access to the toads. Snakes without the borrowed toxins were more likely to turn and flee from danger than to hold their ground and perform a toxin-releasing defensive maneuver.

"Sequestration of toxins in a specialized [neck gland] structure in a vertebrate is a remarkable finding," said William Zamer, IOS deputy director at NSF. "This finding offers new insights into the complex mechanisms underlying ecological relationships and will lead to important insights about fundamental biological questions."

Cheryl Dybas | EurekAlert!
<urn:uuid:16a8be02-38cb-42ad-8478-3514068855fa>
3.890625
1,167
Content Listing
Science & Tech.
39.192207
95,506,071
(PhysOrg.com) -- Currently, diamond is regarded as the hardest known material in the world. But by considering large compressive pressures under indenters, scientists have calculated that a material called wurtzite boron nitride (w-BN) has a greater indentation strength than diamond. The scientists also calculated that another material, lonsdaleite (also called hexagonal diamond, since it's made of carbon and is similar to diamond), is even stronger than w-BN and 58 percent stronger than diamond, setting a new record.

This analysis marks the first case where a material exceeds diamond in strength under the same loading conditions, explain the study's authors, who are from Shanghai Jiao Tong University and the University of Nevada, Las Vegas. The study is published in a recent issue of Physical Review Letters.

"The new finding from our results is that large normal compressive pressures under indenters can transform certain materials (such as w-BN and lonsdaleite) into new superhard structures that are harder than diamond," coauthor Changfeng Chen from the University of Nevada, Las Vegas, told PhysOrg.com. "This is a new mechanism that can be used to design new superhard materials."

The scientists explain that the superior strength of w-BN and lonsdaleite is due to the materials' structural reaction to compression. Normal compressive pressures under indenters cause the materials to undergo a structural phase transformation into stronger structures, conserving volume by flipping their atomic bonds. The scientists explain that w-BN and lonsdaleite have subtle differences in the directional arrangements of their bonds compared with diamond, which is responsible for their unique structural reaction.

Under large compressive pressures, w-BN increases its strength by 78 percent compared with its strength before bond-flipping. The scientists calculated that w-BN reaches an indentation strength of 114 GPa (billions of pascals), well beyond diamond's 97 GPa under the same indentation conditions. In the case of lonsdaleite, the same compression mechanism also caused bond-flipping, yielding an indentation strength of 152 GPa, which is 58 percent higher than the corresponding value of diamond.

"Lonsdaleite is even stronger than w-BN because lonsdaleite is made of carbon atoms and w-BN consists of boron and nitrogen atoms," Chen explained. "The carbon-carbon bonds in lonsdaleite are stronger than boron-nitrogen bonds in w-BN. This is also why diamond (with a cubic structure) is stronger than cubic boron nitride (c-BN)."

Until recently, normal compressive pressures under indenters have not been included in first-principles calculations of the ideal shear strengths of crystals, but the latest developments have enabled researchers to consider their effects, resulting in surprising discoveries like the one shown here. Still, experimenting with w-BN and lonsdaleite will be challenging, since both materials are difficult to synthesize in large quantities. However, another recent study has taken a promising approach to producing nanocomposites of w-BN and c-BN, which may also provide a way to synthesize nanocomposites containing lonsdaleite and diamond.

In addition, by showing the underlying atomistic mechanism that can strengthen some materials, this work may provide new approaches for designing superhard materials. As Chen explained, superhard materials that exhibit other superior properties are highly desirable for applications in many fields of science and technology.
"High hardness is only one important characteristic of superhard materials," Chen said. "Thermal stability is another key factor, since many superhard materials need to withstand extreme high-temperature environments as cutting and drilling tools and as wear, fatigue and corrosion resistant coatings in applications ranging from micro- and nano-electronics to space technology. For all carbon-based superhard materials, including diamond, their carbon atoms will react with oxygen atoms at high temperatures (at around 600 °C) and become unstable. So designing new, thermally more stable superhard materials is crucial for high-temperature applications. Moreover, since most common superhard materials, such as diamond and cubic-BN, are semiconductors, it is highly desirable to design superhard materials that are conductors or superconductors. In addition, superhard magnetic materials are key components in various recording devices."

More information: Pan, Zicheng; Sun, Hong; Zhang, Yi; Chen, Changfeng. "Harder than Diamond: Superior Indentation Strength of Wurtzite BN and Lonsdaleite." Physical Review Letters 102, 055503 (2009).
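The headline comparisons in the article are simple ratios of the quoted indentation strengths and are easy to sanity-check. The short script below recomputes them from the reported values (114 GPa for w-BN, 152 GPa for lonsdaleite, 97 GPa for diamond); small differences from the quoted percentages are expected because the published strengths are rounded.

```python
# Relative indentation strengths from the values quoted in the article.
diamond = 97.0       # GPa
w_bn = 114.0         # GPa
lonsdaleite = 152.0  # GPa

for name, strength in [("w-BN", w_bn), ("lonsdaleite", lonsdaleite)]:
    gain = 100.0 * (strength - diamond) / diamond
    print(f"{name}: {strength:.0f} GPa, about {gain:.0f}% above diamond")
# w-BN comes out ~18% above diamond, lonsdaleite ~57% (quoted as 58%,
# presumably from unrounded strengths).
```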
<urn:uuid:42d37b15-3dad-48d8-82ac-4be6517bbc69>
3.546875
1,049
News Article
Science & Tech.
30.243552
95,506,079
The world has agreed that tropical rainforests need protection, so the multi-billion dollar REDD+ program is on its way. The challenge is its practical implementation. DMCii's Prof. Jim Lynch explains why satellite surveying could do for the 21st century carbon trading economy what seismic surveys did for 20th century oil and gas.

July 16 2012 |

Carbon projects that reduce greenhouse gas emissions from deforestation and forest degradation (REDD) can save rainforests and slow climate change by keeping carbon locked in trees, but this mechanism and its sibling, REDD+, can only scale up if investors know how many trees there are, how much carbon is stored within them, and whether that carbon is staying put, year on year.

Current methodologies being advocated within the United Nations require the use of forest audits based on traditional forestry management methods of the developed world, but these are simply too expensive to work in the developing world. The Democratic Republic of Congo, for example, has more than 100 million hectares of inaccessible rainforest, and the country doesn't have the resources to survey this from the ground in a cost-effective manner, let alone quantify the results into a standardised format to be cross-checked against forest stocks elsewhere.

Lessons From The Energy Sector

Perhaps the emerging carbon-trading economy can take a lesson from the oil and gas economy that dominated the previous century. Sinking exploratory wells represented an impossibly costly way to explore. So a standard technological solution emerged to discover and quantify oil and gas stocks: seismic surveys. Low-frequency sound waves are propagated into promising geological sites and the time taken for them to reflect back is employed to build up a picture of oil and gas deposits. The technique began with wildcatters in the 1920s exploding dynamite sticks; today it involves carefully planned networks of seismic sources and geophones to build up a three-dimensional map of reservoir architecture, with raw acoustic results processed via sophisticated algorithm chains and skillfully interpreted. Modern seismic surveys are employed not only to find new wells but also to track fluid-front movement, tracing a reservoir's depletion over time.

What is the equivalent technological solution for forest monitoring? Earth observation. The only way REDD+ can be made to work on a global scale in a truly open, transparent way is satellite imaging. That's why I'm working with satellite imagery and services provider DMCii in the UK to develop standardised information products for forest monitoring, tailored to the requirements of REDD+ as well as comparable international forest monitoring schemes – such as the EU's Forest Law Enforcement, Governance and Trade (FLEGT) Action Plan, which aims to ensure that timber exported to Europe has been logged on a legal and sustainable basis.

What Satellites Can Deliver

Earth observation can contribute to two aspects of the forest monitoring problem. Firstly, as a means of ascertaining where all the trees are, and how many of them there are. Satellites' wide area view enables a highly accurate census to be made. Then comes the stage of processing the imagery to make biomass estimates and gain a measurement of the carbon stocks bound up in them – seeing the carbon for the trees – and then, just as importantly, to estimate carbon fluxes: how that carbon stock is changing over time.
The last half-century of Earth observation has demonstrated that it is not the individual remote sensing image but a sequence of images over time that gives the most added value. When it comes to tropical rainforests this is especially true: one can never have enough data about these vast expanses, which possess their own self-generating climate systems – repeated monitoring is important simply to catch gaps in clouds. The cloud-free views can then be mosaicked together to build up a complete picture.

We're not starting from a blank page. The rapid development of satellite-based precision agriculture offers a model to learn from. Satellites are increasingly – and lucratively – being employed to monitor crop growth to guide the most efficient application of fertiliser and pesticide for optimal yield. Many of the methods and algorithms originally developed for precision agriculture to measure photosynthetic activity can also be applied to forest monitoring. This is very exciting – with the right models and algorithms we can analyse carbon pools for different climates around the world; a simple example of such a vegetation measure is sketched below.

How We Do It

But won't monitoring from space be astronomically expensive? Not necessarily. DMCii has built a business in selling imagery from the Disaster Monitoring Constellation, a fleet of separately-owned and collectively-managed satellites performing medium-resolution Earth monitoring. The satellites, built by the UK's Surrey Satellite Technology Ltd, are affordable, and the data is too.

Of the users around the globe making use of DMCii data products, one of the most notable is Brazil, one of the few forested developing nations to establish an annual forest audit. Brazil is also one of the inspirations for REDD+, having cut its deforestation by half in the last decade and a half. Both achievements are based on the use of satellite data. Brazil's space agency, the National Institute for Space Research (known by its Portuguese initials as INPE), approached DMCii in 2005 to begin purchasing DMC imagery. Today DMCii's contribution is core to Brazil's forest monitoring programme, having moved beyond regular land cover mapping to active guidance of enforcement efforts. The distinctive patterns of deforestation have become familiar sights – 'fishbone' lines of clearing extending from a main road as the transportation route, or alternatively a river system. Early on, in my role with the OECD Sustainability Agency, we could challenge the authorities with this irrefutable evidence, and help guide fact-based policy. In addition to starting to brake deforestation rates, Brazil has also set up new national parks, indigenous reserves and sustainable logging zones. It's a template for what REDD+ could accomplish on a broader scale – and all down to satellites.

The precise nature of DMCii information products and services for REDD+ and related forest monitoring activities is still to be determined. We envisage two main classes of user: firstly, people mainly interested in acquiring auditing information in a quick, digestible format, such as financial investors; secondly, the national governments themselves, who will want to demonstrate their effective reduction in emissions clearly, with a high degree of quality and precision, so they can be rewarded for their efforts. We'd also supply the associated measurement and management systems – based on standard geographic information systems (GIS) – to employ the data.
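As an illustration of the precision-agriculture style vegetation measures mentioned above, the sketch below computes the normalised difference vegetation index (NDVI) from red and near-infrared reflectance bands. NDVI is one common proxy for photosynthetic activity and is used here purely as an example; the article does not specify which index or algorithms DMCii products would actually use, and the toy pixel values are invented.

```python
import numpy as np

def ndvi(red, nir, eps=1e-6):
    """Normalised Difference Vegetation Index from red and near-infrared bands.

    red, nir : arrays of surface reflectance (0..1) for the same pixels.
    Returns values in [-1, 1]; dense green vegetation typically scores > 0.6.
    """
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 scene: bare soil and water in the first row, forest-like pixels below.
red = np.array([[0.20, 0.10], [0.04, 0.05]])
nir = np.array([[0.25, 0.05], [0.45, 0.50]])
print(ndvi(red, nir))
```

Tracking how such an index changes between repeat acquisitions over the same forest area is one simple way a time series of images adds value beyond any single scene.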
Theoretically the information products would be data-neutral; in other words we could employ free imagery from government satellites such as the US Landsat for annual forest surveys, which is the minimum frequency that REDD+ certification is likely to require. But once you consider the billions of dollars set to be pledged for REDD+, the case is clear for acquiring additional paid-for data to step up the frequency and accuracy of satellite observations. For particular regions at risk, the frequency of acquisitions could reach daily revisits using the DMC satellites. Because they operate in a constellation, their acquisition opportunities are not constrained by the fixed orbit of a single satellite, which might take weeks or months to return to a given target.

Last year's addition to the DMC, the NigeriaSat-2 satellite, offers very high resolution 2.5 m imagery that is potentially capable of surveying individual REDD+ projects in detail (incidentally giving Nigeria a superior imaging capability to SSTL's homeland of Great Britain). The UK and SSTL have also announced plans for a new generation of NovaSAR small radar satellites, able to perform observations through clouds or at night, and sensitive to complementary environmental parameters such as leaf area index and soil moisture.

DMCii also draws on additional regional forest monitoring capabilities through third-party specialists. So we wouldn't always perform mapping or ground truthing ourselves, but are active in establishing regional partnerships to do just that. In May, a DMCii-led consortium won a place on the UK Department for International Development (DfID) Forest Governance, Markets and Climate (FGMC) Framework Agreement, meaning that we will be able to bid for projects to monitor forest governance and deforestation globally. The multidisciplinary consortium, known as inFORm, comprises commercial and academic partners with skills such as forest mapping, carbon accounting, timber tracking, policy development and broad-ranging Earth Observation technologies, allowing a holistic approach to tackling deforestation.

For me, one of the strengths of satellite products is that, for all the data they contain, they can typically be understood by everyone, not just forest specialists: regional politicians, national media, village headmen. If you present people with clear and accurate information about their national resources, they can then make informed decisions about its future exploitation. In the past, satellite images have helped motivate Brazilian politicians to take action and have helped guide public opinion too.

It is clear that the fruits of past development activities have not always been fairly shared. I was in Ghana in 2010 and went to villages well away from Accra and asked them what they wanted most. "Some electric lights so we're not in darkness every night" was the answer – and running past their village were thick power lines on massive pylons. REDD+ needs to be fair if it is going to be truly sustainable. Satellite imagery can help establish a level playing field of information, showing where the trees are actually placed in areas of contested land ownership, and serving as a basis for organised action on the ground, such as surveying and enforcement activities.
Through its staff and parent company SSTL, DMCii has a promising record in knowledge transfer; it has worked with customer nations such as Egypt, Turkey and Nigeria to not only deliver satellites but the beginnings of full national space programmes along with them. The same kind of knowledge sharing, to make REDD+ a bottom-up initiative that local populations have a true stake in, will be crucial. While satellites offer a technical solution to the challenges of implementing a carbon trading economy based on preserving the tropical rainforests – just as seismic surveys proved to be the technical foundation of the expanding oil and gas industry – the political and social factors inherent in making REDD+ a reality will decide whether it succeeds or fails. About the author Trained in industrial chemistry and soil microbiology, Professor Jim Lynch OBE has served as Chief Executive of the UK government’s executive agency, Forest Research, Co-ordinator for the OECD Programme on Biological Resource Management and currently sits on the Board of the European Forestry Institute and Africa’s Council for the Frontiers of Knowledge. As Distinguished Professor of Life Sciences (Emeritus) at the University of Surrey, Prof. Lynch has begun work on a new project to develop forest monitoring information products for DMCii, a subsidiary of University of Surrey spin-off company Surrey Satellite Technology Ltd (SSTL). Please see our Reprint Guidelines for details on republishing our articles.
<urn:uuid:3bd5a090-7e40-4ca9-8934-0ba5c7f69bc0>
3.625
2,284
Nonfiction Writing
Science & Tech.
23.679935
95,506,125
At a depth of 2890 km, the core-mantle boundary (CMB) separates turbulent flow of liquid metals in the outer core from slowly convecting, highly viscous mantle silicates. The CMB marks the most dramatic change in dynamic processes and material properties in our planet, and accurate images of the structure at or near the CMB, over large areas, are crucially important for our understanding of present-day geodynamical processes and the thermo-chemical structure and history of the mantle and mantle-core system. In addition to mapping the CMB, we need to know if other structures exist directly above or below it, what they look like, and what they mean in terms of physical and chemical material properties and geodynamical processes. Detection, imaging, characterization, and understanding of structure in this remote region have been, and are likely to remain, a frontier in cross-disciplinary geophysics research. I will discuss the statistical problems, challenges and methods in imaging the CMB.

University of Illinois at Urbana-Champaign
<urn:uuid:062c2e23-03c2-4900-a47d-ef113957a7ff>
2.96875
218
News (Org.)
Science & Tech.
17.127429
95,506,154
The Earth is further from the Sun than Venus, but how much further? Twice as far? Ten times? When we measure something we use a scale: we consider the size of one thing in terms of another. Make a fist with your hand (it's about the size of a large orange, isn't it?), and then locate a point about 10 m away from you. If your fist were the Sun, the Earth would be more than 10 metres away and less than 1 mm in diameter (the tip of a ball-point pen).

Venus orbits the Sun three times in roughly two Earth years. On rare occasions Venus can be seen (from Earth) passing across the face of the Sun; this is called the Transit of Venus. Usually Venus appears to pass either above or below the Sun because the plane of Venus' orbit is slightly tilted to the Earth's own orbit around the Sun. The Transit of Venus has only been observed and recorded six times since telescopes became available early in the 17th century. Perhaps you remember the Transit of Venus happening in 2004; it will happen again in 2012.

The Transit of Venus was a valuable observation because it provided the data with which the Earth to Sun distance could be calculated. This distance is called the Astronomical Unit and is used like a scale for the Solar System. But even without a Transit of Venus to provide data, astronomers could know the ratio between the Venus to Sun distance and the Earth to Sun distance. What could they observe and what calculation would they need to do?

Google will give you all sorts of interesting things connected with the Astronomical Unit and the Transit of Venus, but before you do that, imagine you are living in the 17th century and don't have the internet: how might this Venus to Sun : Earth to Sun ratio be known?
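For readers who want to check their reasoning, here is one classical approach, sketched numerically: when Venus appears at its greatest angular separation from the Sun in the sky (greatest elongation), the Sun-Venus-Earth angle is a right angle, so the ratio of the two orbital radii is simply the sine of the elongation angle. The sketch assumes a roughly circular orbit for Venus and uses an approximate elongation of 46 degrees; neither figure comes from the text above.

```python
import math

elongation_deg = 46.0  # approximate greatest elongation of Venus (assumed value)
ratio = math.sin(math.radians(elongation_deg))
print(f"Venus-Sun distance ~ {ratio:.2f} of the Earth-Sun distance")
# ~0.72, so Venus orbits at roughly 72% of the Earth-Sun distance,
# i.e. the Earth is about 1/0.72 ~ 1.4 times further from the Sun than Venus.
```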
<urn:uuid:103157c1-c5b6-4e99-af76-837807fba8e0>
4.125
369
Knowledge Article
Science & Tech.
61.042879
95,506,158
When the European Space Agency (ESA) sends the "3rd Large Mission" into space in 2034, its goal will be to detect gravitational waves. Scientists at the Laser Zentrum Hannover e. V. (LZH) have now begun to develop fiber amplifiers for the required lasers.

The task of the Single-Frequency Laser Group of the LZH almost sounds trivial: the fiber amplifiers developed by this group should be used to post-amplify a special laser with a low output power. However, the general framework of the eLISA project makes laser development a real challenge: the choice of optical components that can be used is highly limited.

Challenge: Simple and fit for use in space

"Since the availability of resources in space is very limited, the amplifier being planned must work very efficiently," says the head of the group, Dr. Peter Weßels, when addressing the task. "At the same time, the setup must be kept as simple as possible, so the laser can be qualified for use in space."

Detecting minuscule movements over enormous distances

Despite these limitations, the laser must provide high performance. The laser beam must travel over a distance of around one million kilometers between the mother satellite and both daughter satellites. Once it arrives, the beam is regenerated and sent back the same distance. The differences in the phase of the returning light can be used to infer distance changes on the subatomic scale: the signature of passing gravitational waves.

The scientists working with Dr. Peter Weßels want to develop a so-called "Engineering Qualification Model" within the next three years. Such a model is not yet completely ready for use in space, but its setup and design are quite similar to the final model. Apart from the LZH, the Fundação Faculdade de Ciências da Universidade de Lisboa, Portugal, and the Czech Space Research Centre s.r.o., Czech Republic, are working on the development of the laser system for the eLISA mission. The development project is headed by the Portuguese company LusoSpace Lda.

https://www.elisascience.org/ - eLISA website
https://www.elisascience.org/multimedia/image/elisa-spacecraft-two-laser-arms - illustration source

Lena Bennefeld | idw - Informationsdienst Wissenschaft
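To put "distance changes on the subatomic scale" into numbers, the sketch below multiplies an assumed gravitational-wave strain by the roughly one-million-kilometre arm length quoted above. The strain value of 1e-20 is a generic order-of-magnitude assumption chosen for illustration; it is not a figure from the article or from the eLISA design documents.

```python
arm_length_m = 1.0e9   # ~ one million kilometres, as quoted in the article
strain = 1.0e-20       # assumed gravitational-wave strain (illustrative only)

delta_L = strain * arm_length_m
print(f"arm-length change ~ {delta_L:.1e} m")   # ~1e-11 m
# For comparison, a hydrogen atom is roughly 1e-10 m across, so the laser link
# must resolve a fraction of an atomic diameter over a million kilometres,
# which is why the phase of the returning light is the measured quantity.
```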
<urn:uuid:ce12302a-a695-4ea7-90d1-5bd0354eab5e>
3.40625
1,149
Content Listing
Science & Tech.
40.574362
95,506,161
Migration routes used by Nearctic migrant birds can cover great distances; they also differ among species, within species, and between years and seasons. As a result, migration routes for an entire migratory avifauna can encompass broad geographic areas, making it impossible to protect continuous stretches of habitat sufficient to connect the wintering and breeding grounds for most species. Consequently, ways to enhance habitats converted for human use (i.e. for pasture, crop cultivation, human settlement) as stopover sites for migrants are especially important. Shelterbelts around pastures and fields, if planted with species targeted to support migrant (and resident) bird species that naturally occupy mature forest habitats and that are at least partially frugivorous, could be a powerful enhancement tool for such species, if the birds will enter the converted areas to feed. I tested this approach for Nearctic migrant birds during the spring migration through an area in Chiapas, Mexico. Mature forest tree species whose fruits are eaten by birds were surveyed. Based on life form, crop size and fruit characteristics, I selected three tree species for study: Cymbopetalum mayanum (Annonaceae), Bursera simaruba (Burseraceae) and Trophis racemosa (Moraceae). I compared the use of fruits of these species by migrants and residents in forest with their use of the fruits of isolated individuals of the same species in pasture and cropland. All three plant species were useful for enhancing converted habitats for forest-occupying spring migrants, although species differed in the degree to which they entered disturbed areas to feed on the fruits. These tree species could probably enhance habitats for migrants at sites throughout the natural geographic ranges of the plants; in other geographic areas for other target bird groups, other tree species might be more appropriate.
<urn:uuid:7fab81ed-2a7a-4850-9604-0dae7d5f91ae>
3.546875
410
Truncated
Science & Tech.
28.828019
95,506,176
Figure: A simulation of liquids with different viscosities. The liquid on the right has higher viscosity than the liquid on the left.

μ = G·t (dynamic viscosity expressed as a shear modulus G multiplied by a characteristic time t)

The viscosity of a fluid is the measure of its resistance to gradual deformation by shear stress or tensile stress. For liquids, it corresponds to the informal concept of "thickness": for example, honey has a higher viscosity than water. Viscosity is the property of a fluid which opposes the relative motion between two surfaces of the fluid that are moving at different velocities. In simple terms, viscosity means friction between the molecules of the fluid. When the fluid is forced through a tube, the particles which compose the fluid generally move more quickly near the tube's axis and more slowly near its walls; therefore some stress (such as a pressure difference between the two ends of the tube) is needed to overcome the friction between particle layers and keep the fluid moving. For a given velocity pattern, the stress required is proportional to the fluid's viscosity.

A fluid that has no resistance to shear stress is known as an ideal or inviscid fluid. Zero viscosity is observed only at very low temperatures in superfluids. Otherwise, all fluids have positive viscosity and are technically said to be viscous or viscid. A fluid with a relatively high viscosity, such as pitch, may appear to be a solid.

The dynamic viscosity of a fluid expresses its resistance to shearing flows, where adjacent layers move parallel to each other with different speeds. It can be defined through the idealized situation known as a Couette flow, where a layer of fluid is trapped between two horizontal plates, one fixed and one moving horizontally at constant speed u. The fluid has to be homogeneous in the layer and at different shear stresses. (The plates are assumed to be very large, so that one need not consider what happens near their edges.) If the speed of the top plate is low enough, the fluid particles will move parallel to it, and their speed will vary linearly from zero at the bottom to u at the top. Each layer of fluid will move faster than the one just below it, and friction between them will give rise to a force resisting their relative motion. In particular, the fluid will apply on the top plate a force in the direction opposite to its motion, and an equal but opposite one to the bottom plate. An external force is therefore required in order to keep the top plate moving at constant speed. The magnitude F of this force is found to be proportional to the speed u and the area A of each plate, and inversely proportional to their separation y:

F = μ A u / y

where the proportionality factor μ is the dynamic viscosity of the fluid. The ratio u/y is called the rate of shear deformation or shear velocity, and is the derivative of the fluid speed in the direction perpendicular to the plates. Isaac Newton expressed the viscous forces by the differential equation

τ = μ ∂u/∂y

where τ = F/A and ∂u/∂y is the local shear velocity. This formula assumes that the flow is moving along lines parallel to the x-axis. Furthermore, it assumes that the y-axis, perpendicular to the flow, points in the direction of maximum shear velocity. This equation can be used where the velocity does not vary linearly with y, such as in fluid flowing through a pipe. This equation is called the defining equation for shear viscosity. The viscosity is not a material constant, but a material property that depends on physical properties like temperature.
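A small numeric illustration of the Couette-flow relation F = μAu/y described above: for a plate of given area sliding over a thin fluid layer, the drag force scales linearly with speed and viscosity. The fluid and geometry values are assumed round numbers for illustration, not figures from the text.

```python
def couette_force(mu, area, speed, gap):
    """Drag force on the moving plate in planar Couette flow: F = mu * A * u / y."""
    return mu * area * speed / gap

# Assumed illustrative values (not from the text):
mu = 0.25      # Pa·s, roughly a light motor oil
area = 0.10    # m^2 plate area
speed = 0.50   # m/s plate speed
gap = 1.0e-3   # m fluid layer thickness

print(f"required force ~ {couette_force(mu, area, speed, gap):.1f} N")  # 12.5 N
```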
The functional relationship between viscosity and other physical properties is described by a mathematical viscosity model called a constitutive equation, which is usually more complex than the defining equation for viscosity. Many viscosity models exist, and based on their type of development and reasoning, some of them are selected and presented in the article Viscosity models for mixtures.

Use of the Greek letter mu (μ) for the dynamic viscosity is common among mechanical and chemical engineers, as well as physicists. However, the Greek letter eta (η) is also used by chemists, physicists, and the IUPAC. The kinematic viscosity ν = μ/ρ (the dynamic viscosity divided by the fluid density ρ) appears in the dimensionless Reynolds number Re = uL/ν = ρuL/μ, where L is a typical length scale in the system, and u is the velocity of the fluid with respect to the object (m/s).

When a compressible fluid is compressed or expanded evenly, without shear, it may still exhibit a form of internal friction that resists its flow. These forces are related to the rate of compression or expansion by a factor called the volume viscosity, bulk viscosity or second viscosity. The bulk viscosity is important only when the fluid is being rapidly compressed or expanded, such as in sound and shock waves. Bulk viscosity explains the loss of energy in those waves, as described by Stokes' law of sound attenuation.

In general, the stresses within a flow can be attributed partly to the deformation of the material from some rest state (elastic stress), and partly to the rate of change of the deformation over time (viscous stress). In a fluid, by definition, the elastic stress includes only the hydrostatic pressure. In very general terms, the fluid's viscosity is the relation between the strain rate and the viscous stress. In the Newtonian fluid model, the relationship is by definition a linear map, described by a viscosity tensor that, multiplied by the strain rate tensor (which is the gradient of the flow's velocity), gives the viscous stress tensor. The viscosity tensor has nine independent degrees of freedom in general. For isotropic Newtonian fluids, these can be reduced to two independent parameters. The most usual decomposition yields the dynamic (shear) viscosity μ and the bulk viscosity ζ.

Newton's law of viscosity is a constitutive equation (like Hooke's law, Fick's law, and Ohm's law): it is not a fundamental law of nature but an approximation that holds in some materials and fails in others. A fluid that behaves according to Newton's law, with a viscosity μ that is independent of the stress, is said to be Newtonian. Gases, water, and many common liquids can be considered Newtonian in ordinary conditions and contexts. There are many non-Newtonian fluids that significantly deviate from that law in some way or other; for example, shear-thinning liquids are very commonly, but misleadingly, described as thixotropic. Even for a Newtonian fluid, the viscosity usually depends on its composition and temperature. For gases and other compressible fluids, it depends on temperature and varies very slowly with pressure.

The viscous forces that arise during fluid flow must not be confused with the elastic forces that arise in a solid in response to shear, compression or extension stresses. While in the latter the stress is proportional to the amount of shear deformation, in a fluid it is proportional to the rate of deformation over time. (For this reason, Maxwell used the term fugitive elasticity for fluid viscosity.) However, many liquids (including water) will briefly react like elastic solids when subjected to sudden stress.
Conversely, many "solids" (even granite) will flow like liquids, albeit very slowly, even under arbitrarily small stress. Such materials are therefore best described as possessing both elasticity (reaction to deformation) and viscosity (reaction to rate of deformation); that is, being viscoelastic. Indeed, some authors have claimed that amorphous solids, such as glass and many polymers, are actually liquids with a very high viscosity (greater than 10^12 Pa·s). However, other authors dispute this hypothesis, claiming instead that there is some threshold for the stress, below which most solids will not flow at all, and that alleged instances of glass flow in window panes of old buildings are due to the crude manufacturing process of older eras rather than to the viscosity of glass.

Viscoelastic solids may exhibit both shear viscosity and bulk viscosity. The extensional viscosity is a linear combination of the shear and bulk viscosities that describes the reaction of a solid elastic material to elongation. It is widely used for characterizing polymers.

Viscosity is measured with various types of viscometers and rheometers. A rheometer is used for those fluids that cannot be defined by a single value of viscosity and therefore require more parameters to be set and measured than is the case for a viscometer. Close temperature control of the fluid is essential to acquire accurate measurements, particularly in materials like lubricants, whose viscosity can double with a change of only 5 °C. For some fluids, the viscosity is constant over a wide range of shear rates (Newtonian fluids). The fluids without a constant viscosity (non-Newtonian fluids) cannot be described by a single number. Non-Newtonian fluids exhibit a variety of different correlations between shear stress and shear rate.

One of the most common instruments for measuring kinematic viscosity is the glass capillary viscometer. In coating industries, viscosity may be measured with a cup in which the efflux time is measured. There are several sorts of cup - such as the Zahn cup and the Ford viscosity cup - with the usage of each type varying mainly according to the industry. The efflux time can also be converted to kinematic viscosities (centistokes, cSt) through the conversion equations. Also used in coatings, a Stormer viscometer uses load-based rotation in order to determine viscosity. The viscosity is reported in Krebs units (KU), which are unique to Stormer viscometers. Vibrating viscometers can also be used to measure viscosity. Resonant, or vibrational, viscometers work by creating shear waves within the liquid. In this method, the sensor is submerged in the fluid and is made to resonate at a specific frequency. As the surface of the sensor shears through the liquid, energy is lost due to its viscosity. This dissipated energy is then measured and converted into a viscosity reading. A higher viscosity causes a greater loss of energy. Apparent viscosity is a calculation derived from tests performed on drilling fluid used in oil or gas well development. These calculations and tests help engineers develop and maintain the properties of the drilling fluid to the specifications required.

Both the physical unit of dynamic viscosity in SI units, the poiseuille (Pl), and the cgs unit, the poise (P), are named after Jean Léonard Marie Poiseuille. The poiseuille, which is rarely used, is equivalent to the pascal second (Pa·s), or (N·s)/m², or kg/(m·s).
If a fluid is placed between two plates separated by a distance of one meter, and one plate is pushed sideways with a shear stress of one pascal, and it moves at x meters per second, then the fluid has a viscosity of 1/x pascal-seconds. For example, water at 20 °C has a viscosity of 1.002 mPa·s, while a typical motor oil could have a viscosity of about 250 mPa·s. The units used in practice are either Pa·s and its submultiples or the cgs poise referred to below, and its submultiples. The cgs physical unit for dynamic viscosity, the poise (P), is also named after Jean Poiseuille. It is more commonly expressed, particularly in ASTM standards, as centipoise (cP), since the latter is equal to the SI multiple millipascal seconds (mPa·s). For example, water at 20 °C has a viscosity of 1.002 mPa·s = 1.002 cP.

The SI unit of kinematic viscosity is m²/s. The cgs physical unit for kinematic viscosity is the stokes (St), named after Sir George Gabriel Stokes. It is sometimes expressed in terms of centistokes (cSt). In U.S. usage, stoke is sometimes used as the singular form. Water at 20 °C has a kinematic viscosity of about 10⁻⁶ m²/s, or 1 cSt. The kinematic viscosity is sometimes referred to as diffusivity of momentum, because it is analogous to diffusivity of heat and diffusivity of mass. It is therefore used in dimensionless numbers which compare the ratio of the diffusivities.

The reciprocal of viscosity is fluidity, usually symbolized by φ = 1/μ or F = 1/η, depending on the convention used, measured in reciprocal poise (P⁻¹, or cm·s·g⁻¹), sometimes called the rhe. Fluidity is seldom used in engineering practice. The concept of fluidity can be used to determine the viscosity of an ideal solution. For two components A and B, the fluidity when A and B are mixed is

φ ≈ χ_A φ_A + χ_B φ_B

which is only slightly simpler than the equivalent equation in terms of viscosity:

μ ≈ 1 / (χ_A/μ_A + χ_B/μ_B)

where χ_A and χ_B are the mole fractions of components A and B respectively, and μ_A and μ_B are the components' pure viscosities.

The reyn is a British unit of dynamic viscosity. Viscosity index is a measure of the change of viscosity with temperature. It is used in the automotive industry to characterise lubricating oil. At one time the petroleum industry relied on measuring kinematic viscosity by means of the Saybolt viscometer, and expressing kinematic viscosity in units of Saybolt universal seconds (SUS). Other abbreviations such as SSU (Saybolt seconds universal) or SUV (Saybolt universal viscosity) are sometimes used. Kinematic viscosity in centistokes can be converted from SUS according to the arithmetic and the reference table provided in ASTM D 2161.

The viscosity of a system is determined by how the molecules constituting the system interact. There are no simple but correct expressions for the viscosity of a fluid. The simplest exact expressions are the Green-Kubo relations for the linear shear viscosity or the Transient Time Correlation Function expressions derived by Evans and Morriss in 1985. Although these expressions are each exact, calculating the viscosity of a dense fluid using these relations currently requires the use of molecular dynamics computer simulations.

Viscosity in gases arises principally from the molecular diffusion that transports momentum between layers of flow. The kinetic theory of gases allows accurate prediction of the behavior of gaseous viscosity. Within the regime where the theory is applicable, viscosity is independent of pressure, and viscosity increases as temperature increases. James Clerk Maxwell published a famous paper in 1866 using the kinetic theory of gases to study gaseous viscosity.
To understand why the viscosity is independent of pressure, consider two adjacent boundary layers (A and B) moving with respect to each other. The internal friction (the viscosity) of the gas is determined by the probability that a particle of layer A enters layer B with a corresponding transfer of momentum. Maxwell's calculations show that the viscosity coefficient is proportional to the density, the mean free path, and the mean velocity of the atoms. On the other hand, the mean free path is inversely proportional to the density. So an increase in density due to an increase in pressure doesn't result in any change in viscosity.

In relation to diffusion, the kinematic viscosity provides a better understanding of the behavior of mass transport of a dilute species. Viscosity is related to shear stress and the rate of shear in a fluid, which illustrates its dependence on the mean free path, λ, of the diffusing particles. From fluid mechanics, for a Newtonian fluid, the shear stress, τ, on a unit area moving parallel to itself is found to be proportional to the rate of change of velocity with distance perpendicular to the unit area:

τ = μ ∂u_x/∂y

for a unit area parallel to the xz-plane, moving along the x axis. We will derive this formula and show how μ is related to λ. Interpreting the shear stress as the time rate of change of momentum, p, per unit area A (rate of momentum flux) of an arbitrary control surface relates τ to ⟨u_x⟩, the average velocity along the x-axis of the fluid molecules hitting the unit area (with respect to that area), and to the rate at which fluid mass hits the surface. Making the simplified assumption that the velocity of the molecules depends linearly on the distance they are coming from, the mean velocity depends linearly on the mean distance. Further manipulation shows that the viscosity is proportional to the product of the density, the mean molecular velocity and the mean free path; and because the mean free path itself typically depends (inversely) on the density, the density cancels, leaving a quantity that, for a given gas, depends on temperature alone. This temperature dependence is captured in Sutherland's formula:

μ = μ₀ · (T₀ + C)/(T + C) · (T/T₀)^(3/2)

where C is Sutherland's constant, T₀ is a reference temperature and μ₀ is the viscosity at T₀. The formula is valid over a limited temperature range, with an error due to pressure of less than 10% below 3.45 MPa. According to Sutherland's formula, if the absolute temperature is less than C, the relative change in viscosity for a small change in temperature is greater than the relative change in the absolute temperature, but it is smaller when T is above C. The kinematic viscosity, though, always increases faster than the temperature (that is, its relative change per relative change in temperature is greater than 1).

Sutherland's constant, reference values and λ values for some gases:
| Gas | C (K) | T0 (K) | μ0 (μPa·s) | λ (μPa·s·K^-1/2) |

The Chapman-Enskog equation may be used to estimate viscosity for a dilute gas. This equation is based on a semi-theoretical assumption by Chapman and Enskog. The equation requires three empirically determined parameters: the collision diameter (σ), the maximum energy of attraction divided by the Boltzmann constant (ε/κ) and the collision integral (Ω(T*)).

In liquids, the additional forces between molecules become important. This leads to an additional contribution to the shear stress, though the exact mechanics of this are still controversial. Thus, in liquids, viscosity is nearly independent of pressure (except at very high pressure), and it tends to fall as temperature increases. The dynamic viscosities of liquids are typically several orders of magnitude higher than the dynamic viscosities of gases.

The viscosity of a blend of two or more liquids can be estimated in three steps. The first step is to calculate the viscosity blending number (VBN) (also called the viscosity blending index) of each component of the blend:

VBN = 14.534 × ln[ln(ν + 0.8)] + 10.975    (1)

where ν is the kinematic viscosity in centistokes (cSt). It is important that the kinematic viscosity of each component of the blend be obtained at the same temperature.
The next step is to calculate the VBN of the blend, using this equation:

VBN_Blend = [x_A × VBN_A] + [x_B × VBN_B] + ... + [x_N × VBN_N]    (2)

where x_X is the mass fraction of each component of the blend. Once the viscosity blending number of a blend has been calculated using equation (2), the final step is to determine the kinematic viscosity of the blend by solving equation (1) for ν:

ν = e^(e^((VBN_Blend − 10.975)/14.534)) − 0.8    (3)

where VBN_Blend is the viscosity blending number of the blend. Alternatively, the more accurate Lederer-Roegiers equation can be used.

The viscosity of air depends mostly on the temperature. At 15 °C, the viscosity of air is 18.1 μPa·s (1.81 × 10⁻⁵ Pa·s). The kinematic viscosity at 15 °C is 1.48 × 10⁻⁵ m²/s, or 14.8 cSt. At 25 °C, the viscosity is 18.6 μPa·s and the kinematic viscosity 15.7 cSt.

The dynamic viscosity of liquid water at different temperatures up to the normal boiling point is listed below.
| Temperature (°C) | Viscosity (mPa·s) |

Some dynamic viscosities of Newtonian fluids are listed below:
| Gas | at 0 °C (273 K) | at 27 °C (300 K) |

| Fluid | Viscosity (Pa·s) | Viscosity (cP) |
| blood (37 °C) | - | 3-4 |
| peanut butter[a] | ≈ 250 | |

| Liquid | Viscosity (Pa·s) | Viscosity (cP) |
| glycerol (at 20 °C) | | |
| motor oil SAE 10 (20 °C) | | |
| motor oil SAE 40 (20 °C) | | |
| liquid nitrogen (-196 °C) | | |

| Solid | Viscosity (Pa·s) | Temperature (°C) |

The term slurry describes mixtures of a liquid and solid particles that retain some fluidity. The viscosity of a slurry can be described as relative to the viscosity of the liquid phase:

μ_s = μ_r · μ_l

where μ_s and μ_l are respectively the dynamic viscosity of the slurry and of the liquid (Pa·s), and μ_r is the relative viscosity (dimensionless). Depending on the size and concentration of the solid particles, several models exist that describe the relative viscosity as a function of the volume fraction φ of solid particles. In the case of extremely low concentrations of fine particles, Einstein's equation may be used:

μ_r = 1 + 2.5 φ

In the case of higher concentrations, a modified equation was proposed by Guth and Simha, which takes into account interaction between the solid particles:

μ_r = 1 + 2.5 φ + 14.1 φ²

Further modification of this equation was proposed by Thomas from the fitting of empirical data:

μ_r = 1 + 2.5 φ + 10.05 φ² + A e^(B φ)

where A = 0.00273 and B = 16.6. In the case of high shear stress (above 1 kPa), another empirical equation was proposed by Kitano et al. for polymer melts:

μ_r = (1 − φ/A)⁻²

where A = 0.68 for smooth spherical particles.

Einstein derived the first applicable theoretical formula for estimating the viscosity of composites or mixtures in 1906. The model was developed for a linear viscous fluid containing a suspension of rigid, spherical particles, and it is valid only for very low volume fractions. Brinkman modified Einstein's model for use with average particle volume fractions up to 4%. Batchelor extended Einstein's theoretical model by including the effect of Brownian motion. Wang et al. proposed a model to predict the viscosity of nanofluids. Masoumi et al. suggested a new viscosity correlation by considering the Brownian motion of nanoparticles in a nanofluid. Udawattha et al. modified the Masoumi et al. model; the resulting model is valid for suspensions containing micro-sized particles.

The viscosity of many liquids and amorphous materials can be described by an Arrhenius-type equation, μ = A e^(Q/RT), where Q is the activation energy, T is temperature, R is the molar gas constant and A is approximately a constant. The viscous flow in amorphous materials is characterized by a deviation from this Arrhenius-type behavior: Q changes from a high value QH at low temperatures (in the glassy state) to a low value QL at high temperatures (in the liquid state).
Depending on this change, amorphous materials are classified as either strong (when QH − QL < QL) or fragile (when QH − QL ≥ QL). The fragility of amorphous materials is numerically characterized by Doremus' fragility ratio RD = QH/QL: strong materials have RD < 2, whereas fragile materials have RD ≥ 2. The viscosity of amorphous materials is quite exactly described by a two-exponential equation, μ = A1·T·[1 + A2·exp(B/RT)]·[1 + C·exp(D/RT)], with constants A1, A2, B, C and D related to thermodynamic parameters of the joining bonds of the amorphous material. If the temperature is significantly lower than the glass transition temperature, T << Tg, the two-exponential equation simplifies to an Arrhenius-type equation. When the temperature is less than the glass transition temperature, T < Tg, the activation energy of viscosity is high because the amorphous materials are in the glassy state and most of their joining bonds are intact. If the temperature is much higher than the glass transition temperature, T >> Tg, the two-exponential equation also simplifies to an Arrhenius-type equation. When the temperature is higher than the glass transition temperature, T > Tg, the activation energy of viscosity is low because amorphous materials are melted and have most of their joining bonds broken, which facilitates flow. In the study of turbulence in fluids, a common practical strategy for calculation is to ignore the small-scale vortices (or eddies) in the motion and to calculate a large-scale motion with an eddy viscosity that characterizes the transport and dissipation of energy in the smaller-scale flow (see large eddy simulation). Values of eddy viscosity used in modeling ocean circulation span several orders of magnitude, depending upon the resolution of the numerical grid.
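The blending procedure above, equations (1) and (2), is straightforward to mechanize. Below is a minimal Python sketch of that Refutas-style calculation; the constants 14.534, 10.975 and 0.8 are the commonly quoted Refutas values, and the two example components are hypothetical, not taken from this text.

```python
import math

def vbn(nu_cst: float) -> float:
    """Viscosity blending number (equation 1) from kinematic viscosity in cSt."""
    return 14.534 * math.log(math.log(nu_cst + 0.8)) + 10.975

def blend_viscosity(components):
    """components: list of (mass_fraction, kinematic_viscosity_cSt) at one temperature."""
    # Equation (2): mass-fraction-weighted sum of the component VBNs.
    vbn_blend = sum(x * vbn(nu) for x, nu in components)
    # Invert equation (1) to recover the blend's kinematic viscosity in cSt.
    return math.exp(math.exp((vbn_blend - 10.975) / 14.534)) - 0.8

# Hypothetical two-component blend: 60% of a 32 cSt oil and 40% of a 150 cSt oil.
print(round(blend_viscosity([(0.60, 32.0), (0.40, 150.0)]), 1))  # ~56 cSt
```

Note how the logarithmic form of the blending number pulls the result well below a simple mass-weighted average of the two viscosities, which is the practical reason the VBN detour is used at all.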
<urn:uuid:5353e753-f95e-4d3d-a6df-4aef136e2b17>
4.4375
5,200
Knowledge Article
Science & Tech.
39.150327
95,506,204
Lettuce downy mildew caused by Bremia lactucae has long been a model for understanding biotrophic oomycete–plant interactions. Initial research involved physiological and cytological studies that have been reviewed earlier. This review provides an overview of the genetic and molecular analyses that have occurred in the past 25 years, as well as perspectives on future directions. The interaction between B. lactucae and lettuce (Lactuca sativa) is determined by an extensively characterized gene-for-gene relationship. Resistance genes have been cloned from L. sativa that encode proteins similar to resistance proteins isolated from other plant species. Avirulence genes have yet to be cloned from B. lactucae, although candidate sequences have been identified on the basis of motifs present in secreted avirulence proteins characterized from other oomycetes. Bremia lactucae has a minimum of 7 or 8 chromosome pairs ranging in size from 3 to at least 8 Mb, and a set of linear polymorphic molecules that range in size between 0.3 and 1.6 Mb and are inherited in a non-Mendelian manner. Several methods indicated the genome size of B. lactucae to be ca. 50 Mb, although this is probably an underestimate, comprising approximately equal fractions of highly repeated sequences, intermediate repeats, and low-copy sequences. The genome of B. lactucae still awaits sequencing. To date, several EST libraries have been sequenced to provide an incomplete view of the gene space. Bremia lactucae has yet to be transformed, but regulatory sequences from it form components of transformation vectors used for other oomycetes. Molecular technology has now advanced to the point where rapid progress is likely in determining the molecular basis of specificity, mating type, and fungicide insensitivity.
<urn:uuid:dceeaa72-c0d7-4a5e-9e8e-3b9ccb10b1f8>
2.984375
393
Academic Writing
Science & Tech.
22.042328
95,506,219
Magnets can be classified by their ‘hard’ or ‘soft’ magnetic properties. Hard magnets, sometimes called ‘permanent’ magnets, have fixed or ‘pinned’ domain walls which mean the material stays magnetised for a long time. Soft magnets have moveable domain walls that can be easily flipped. These materials exhibit impermanent magnetic properties. Professor Gabriel Aeppli, Director of the LCN and a senior member of the research team, explained the significance of the research: “Whether a magnet is hard or soft determines what you can use it for. Typically, you would use a permanent magnet to fix a note to the door of your refrigerator because you want it to stay there for a long time. On the other hand, you might use a soft magnet in a motor or transformer because it would be better at adapting to the rapid changes in alternating current and would dissipate much less energy than a hard magnet. “It is very rare to be able to continuously tune wall pinning in a magnet but we have now shown how it can be done in a model magnet at a low temperature. In the process, we demonstrate a new route to applications of magnets at higher temperatures and show how chemical disorder at the nanometre (one billionth of a meter) scale can have a huge effect on the properties of a macroscopic (centimetre scale) magnet.” Most physical and biological systems can be thought of as disordered. Semiconductors rely on randomly placed impurities for their electrical properties and uses, while the chemical and structural impurities in magnets determine the domain wall pinning and therefore how easily their polarity can be changed. “From a theoretical point of view, it’s been really interesting for us to see the properties of a large, disordered system being dominated to such an extent by a rare configuration of impurities,” says Professor Aeppli. “Unlike biological systems, in materials science we are used to seeing behaviour which is dominated by the average characteristics of the system. Here we can observe the massive influence of a miniscule number of chemical and structural defects.” Work at the London Centre for Nanotechnology was funded by the UK Engineering and Physical Sciences Research Council and a Wolfson-Royal Society Research Merit Award. Additional research was also carried out at the University of Chicago. David Weston | alfa Computer model predicts how fracturing metallic glass releases energy at the atomic level 20.07.2018 | American Institute of Physics What happens when we heat the atomic lattice of a magnet all of a sudden? 18.07.2018 | Forschungsverbund Berlin A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... 
For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 20.07.2018 | Power and Electrical Engineering 20.07.2018 | Information Technology 20.07.2018 | Materials Sciences
<urn:uuid:7f1811b2-82a4-47b9-befe-49af31bc35cd>
3.359375
1,071
Content Listing
Science & Tech.
36.519485
95,506,238
By looking at this unique “history book” of our Universe, at an epoch when the Sun and the Earth did not yet exist, scientists hope to solve the puzzle of how galaxies formed in the remote past. The team has undertaken the Herculean task of reconstituting the history of about one hundred remote galaxies that have been observed with both Hubble and GIRAFFE on the VLT. The first results are coming in and have already provided useful insights for three galaxies. In one galaxy, GIRAFFE revealed a region full of ionised gas, that is, hot gas composed of atoms that have been stripped of one or several electrons. This is normally due to the presence of very hot, young stars. However, even after staring at the region for more than 11 days, Hubble did not detect any stars! “Clearly this unusual galaxy has some hidden secrets,” says Mathieu Puech, lead author of one of the papers reporting this study. Comparisons with computer simulations suggest that the explanation lies in the collision of two very gas-rich spiral galaxies. The heat produced by the collision would ionise the gas, making it too hot for stars to form. Another galaxy that the astronomers studied showed the opposite effect. There they discovered a bluish central region enshrouded in a reddish disc, almost completely hidden by dust. “The models indicate that gas and stars could be spiralling inwards rapidly,” says Hammer. This might be the first example of a disc rebuilt after a major merger (ESO 01/05). Finally, in a third galaxy, the astronomers identified a very unusual, extremely blue, elongated structure — a bar — composed of young, massive stars, rarely observed in nearby galaxies. Comparisons with computer simulations showed the astronomers that the properties of this object are well reproduced by a collision between two galaxies of unequal mass. “The unique combination of Hubble and FLAMES/GIRAFFE at the VLT makes it possible to model distant galaxies in great detail, and reach a consensus on the crucial role of galaxy collisions for the formation of stars in a remote past,” says Puech. “It is because we can now see how the gas is moving that we can trace back the mass and the orbits of the ancestral galaxies relatively accurately. Hubble and the VLT are real ‘time machines’ for probing the Universe’s history”, adds Sébastien Peirani, lead author of another paper reporting on this study. The astronomers are now extending their analysis to the whole sample of galaxies observed. “The next step will then be to compare this with closer galaxies, and so, piece together a picture of the evolution of galaxies over the past six to eight billion years, that is, over half the age of the Universe,” concludes Hammer.More information The observations were obtained in the framework of the IMAGES ESO Large Programme.This is a joint ESO/ST-EcF release. The Hubble update is available on: Dr. Henri Boffin | EurekAlert! Further reports about: > 3D views of remote galaxies > Astrophysique > ESO > ESO’s Very Large Telescope > Galaxy > Hubble > Observatoire > Space > Telescope > Universe > VLT > VLT’s FLAMES/GIRAFFE spectrograph > computer simulation > distant galaxies > formation of stars > galaxy collisions > motions of gas in tiny objects > young stars Computer model predicts how fracturing metallic glass releases energy at the atomic level 20.07.2018 | American Institute of Physics What happens when we heat the atomic lattice of a magnet all of a sudden? 
18.07.2018 | Forschungsverbund Berlin A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 20.07.2018 | Power and Electrical Engineering 20.07.2018 | Information Technology 20.07.2018 | Materials Sciences
<urn:uuid:3cf6818f-e194-4fd4-8e9b-e9b44d4a1b0b>
3.84375
1,301
Content Listing
Science & Tech.
38.398815
95,506,253
On 15.08.2011 07:56 Jason Resch said the following: Can we accurately simulate physical laws or can't we? Before you answer, take a few minutes to watch this amazing video, which simulates the distribution of mass throughout the universe on the largest scales: http://www.youtube.com/watch?v=W35SYkfdGtw (Note each point of light represents a galaxy, not a star) The answer on your question depends on what you mean by accurately and what by physical laws. I am working with finite elements (more specifically with ANSYS Multiphysics) and I can tell for sure that if you speak of simulation of the universe, then the current simulation technology does not scale. Nowadays one could solve a linear system reaching dimension of 1 billion but this will not help you. I would say that either contemporary numerical methods are deadly wrong, or simulated equations are not the right ones. In this respect, you may want to look how simulation is done for example in Second Life. Well, today numerical simulation is a good business (computer-aided engineering is about a billion per year) and it continues to grow. Yet, if you look in detail, then there are some areas when it could be employed nicely and some where it better to forget about simulation. I understand that you speak "in principle". Yet, I am not sure if extrapolation too far away from the current knowledge makes sense, as eventually we are coming to "philosophical controversies". You received this message because you are subscribed to the Google Groups "Everything List" group. To post to this group, send email to email@example.com. To unsubscribe from this group, send email to For more options, visit this group at
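The scaling objection in the post can be made concrete with a back-of-envelope estimate. The sketch below is illustrative only: the "27 nonzeros per row" stencil and the storage model are my assumptions, not figures from the original discussion.

```python
# Rough memory estimate for a linear system of dimension n = 1e9,
# comparing dense storage with a sparse matrix of ~k nonzeros per row.
n = 1_000_000_000          # unknowns (the "1 billion" mentioned above)
bytes_per_value = 8        # double precision
k = 27                     # assumed nonzeros per row (e.g. a 3-D finite-element stencil)

dense_bytes = n * n * bytes_per_value          # full matrix
sparse_bytes = n * k * bytes_per_value * 2     # values + column indices (rough)
vector_bytes = n * bytes_per_value             # one solution or right-hand-side vector

for label, b in [("dense matrix", dense_bytes),
                 ("sparse matrix", sparse_bytes),
                 ("one vector", vector_bytes)]:
    print(f"{label:>13}: {b / 1e12:,.3f} TB")
```

Even the sparse representation of a billion-unknown system runs to hundreds of gigabytes, which is why "we can solve a billion-dimensional linear system" does not translate into "we can simulate the universe".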
<urn:uuid:c3c02a4c-c8c3-4192-a7d0-7918eed070f1>
2.59375
392
Comment Section
Science & Tech.
50.0265
95,506,254
[C++] GCC -fPIC option I'll try to explain what has already been said in a simpler way. Whenever a shared lib is loaded, the loader (the code in the OS which loads any program you run) changes some addresses in the code depending on where the object was loaded to. In the above example, the "111" in the non-PIC code is written by the loader the first time it is loaded. For non-shared objects, you may want it to be like that because the compiler can make some optimizations on that code. For a shared object, if another process wants to "link" to that code, it must map it at the same virtual addresses or the "111" will make no sense; but that virtual address range may already be in use in the second process. I have read about GCC's Options for Code Generation Conventions, but could not understand what "Generate position-independent code (PIC)" does. Please give an example to explain what it means. Every process has the same virtual address space (if randomization of virtual addresses is disabled by using a flag in the Linux OS; for more details see "Disable and re-enable address space layout randomization only for myself"). So if it's one exe with no shared linking (a hypothetical scenario), then we can always give the same virtual address to the same asm instruction without any harm. But when we want to link a shared object to the exe, then we are not sure of the start address assigned to the shared object, as it will depend upon the order in which the shared objects were linked. That being said, an asm instruction inside the .so will always have a different virtual address depending upon the process it's linking into. So one process can give the .so a start address of 0x45678910 in its own virtual space, and another process at the same time can give it a start address of 0x12131415; if they do not use relative addressing, the .so will not work at all. So they always have to use relative addressing, and hence the -fPIC option. A minor addition to the answers already posted: object files not compiled to be position independent are relocatable; they contain relocation table entries. These entries allow the loader (that bit of code that loads a program into memory) to rewrite the absolute addresses to adjust for the actual load address in the virtual address space. An operating system will try to share a single copy of a "shared object library" loaded into memory with all the programs that are linked to that same shared object library. Since the code address space (unlike sections of the data space) need not be contiguous, and because most programs that link to a specific library have a fairly fixed library dependency tree, this succeeds most of the time. In those rare cases where there is a discrepancy, yes, it may be necessary to have two or more copies of a shared object library in memory. Obviously, any attempt to randomize the load address of a library between programs and/or program instances (so as to reduce the possibility of creating an exploitable pattern) will make such cases common, not rare, so where a system has enabled this capability, one should make every attempt to compile all shared object libraries to be position independent. Since calls into these libraries from the body of the main program will also be made relocatable, this makes it much less likely that a shared library will have to be copied.
<urn:uuid:6b3eefc4-f721-4345-b83c-9e6b9e144d7e>
2.90625
710
Q&A Forum
Software Dev.
40.452542
95,506,261
of security model used by the Flash Player and AIR runtimes is based on the domain of origin for loaded SWF files, HTML, media, and other assets. Executable code in a file from a specific Internet domain, such as www.example.com, can always access all data from that domain. These assets are put in the same security grouping, known as a (For more information, see For example, ActionScript code in a SWF file can load SWF files, bitmaps, audio, text files, and any other asset from its own domain. Also, cross-scripting between two SWF files from the same domain is always permitted, as long as both files are written using ActionScript is the ability of code in one file to access the properties, methods, and objects defined by the code in another Cross-scripting is not supported between SWF files written using ActionScript 3.0 and those using previous versions of ActionScript; however, these files can communicate by using the LocalConnection class. Also, the ability of a SWF file to cross-script ActionScript 3.0 SWF files from other domains and to load data from other domains is prohibited by default; however, such access can be granted with a call to the method in the loaded SWF file. For more information, see The following basic security rules always apply by default: The Flash Player and AIR runtimes consider the following to be individual domains, and set up individual security sandboxes for Even if a named domain, such as http://example.com, maps to a specific IP address, such as http://18.104.22.168, the runtimes set up separate security sandboxes for each. There are two basic methods that a developer can use to grant a SWF file access to assets from sandboxes other than that of the In the Flash Player and AIR runtime security models, there is a distinction between loading content and extracting or accessing is defined as media, including visual media the runtimes can display, audio, video, or a SWF file or HTML that includes displayed media. is defined as something that is accessible only to code. Content and data are loaded in different Loading content—You can load content using classes such as the Loader, Sound, and NetStream classes; through MXML tags when using Flex; or through HTML tags in an AIR application. Extracting data—You can extract data from loaded media content by using Bitmap objects, the property, or the method is available in Flash Player 11.3 and higher; AIR 3.3 and higher. Accessing data—You can access data directly by loading it from an external file (such as an XML file) using classes such as the URLStream, URLLoader, FileReference, Socket, and XMLSocket classes. AIR provides additional classes for loading data, such as FileStream, The Flash Player security model defines different rules for loading content and accessing data. In general, there are fewer restrictions on loading content than on accessing data. In general, content (SWF files, bitmaps, mp3 files, and videos) can be loaded from anywhere, but if the content is from a domain other than that of the loading code or content, it will be partitioned in a separate security sandbox. There are a few barriers to loading content: By default, local SWF files (those loaded from a non-network address, such as a user’s hard drive) are classified in the local-with-filesystem sandbox. These files cannot load content from the network. For more Real-Time Messaging Protocol (RTMP) servers can limit access to content. 
For more information, see Content delivered using RTMP servers If the loaded media is an image, audio, or video, its data, such as pixel data and sound data, can be accessed by a SWF file outside its security sandbox only if the domain of that SWF file has been included in a URL policy file at the origin domain of the media. For details, see Accessing loaded media as data Other forms of loaded data include text or XML files, which are loaded with a URLLoader object. Again in this case, to access any data from another security sandbox, permission must be granted by means of a URL policy file at the origin domain. For details, see Using URLLoader and URLStream Policy files are never required in order for code executing in the AIR application sandbox to load remote content or data.
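The domain-of-origin grouping described above can be illustrated with a toy model. This is not Flash Player API code, only a sketch of the grouping logic under my own simplifying assumptions; the example URLs are hypothetical.

```python
from urllib.parse import urlparse

def security_sandbox(url: str) -> str:
    """Toy key for the 'domain of origin' grouping described above.
    Local (non-network) content gets its own sandbox; named domains are
    kept distinct even if they happen to resolve to the same IP address."""
    parts = urlparse(url)
    if parts.scheme in ("", "file"):
        return "local-with-filesystem"
    return f"{parts.scheme}://{parts.hostname}"

a = "http://www.example.com/app.swf"
b = "http://www.example.com/assets/logo.png"
c = "http://198.51.100.7/assets/logo.png"   # same server reached by IP: separate sandbox

print(security_sandbox(a) == security_sandbox(b))  # True: same domain of origin
print(security_sandbox(a) == security_sandbox(c))  # False: named domain vs. IP address
```

The point of the sketch is simply that grouping is done on the textual origin, not on where the bytes physically come from, which is why a named domain and its IP address land in different sandboxes.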
<urn:uuid:aed077fb-7bba-4ad0-8ac1-5c7f8a124e7c>
2.859375
989
Documentation
Software Dev.
43.401332
95,506,267
This page uses content from Wikipedia and is licensed under CC BY-SA. The ionosphere (//) is the ionized part of Earth's upper atmosphere, from about 60 km (37 mi) to 1,000 km (620 mi) altitude, a region that includes the thermosphere and parts of the mesosphere and exosphere. The ionosphere is ionized by solar radiation. It plays an important role in atmospheric electricity and forms the inner edge of the magnetosphere. It has practical importance because, among other functions, it influences radio propagation to distant places on the Earth. As early as 1839, the German mathematician and physicist Carl Friedrich Gauss postulated that an electrically conducting region of the atmosphere could account for observed variations of Earth's magnetic field. Sixty years later, Guglielmo Marconi received the first trans-Atlantic radio signal on December 12, 1901, in St. John's, Newfoundland (now in Canada) using a 152.4 m (500 ft) kite-supported antenna for reception. The transmitting station in Poldhu, Cornwall, used a spark-gap transmitter to produce a signal with a frequency of approximately 500 kHz and a power of 100 times more than any radio signal previously produced. The message received was three dits, the Morse code for the letter S. To reach Newfoundland the signal would have to bounce off the ionosphere twice. Dr. Jack Belrose has contested this, however, based on theoretical and experimental work. However, Marconi did achieve transatlantic wireless communications in Glace Bay, Nova Scotia, one year later. In 1902, Oliver Heaviside proposed the existence of the Kennelly–Heaviside layer of the ionosphere which bears his name. Heaviside's proposal included means by which radio signals are transmitted around the Earth's curvature. Heaviside's proposal, coupled with Planck's law of black body radiation, may have hampered the growth of radio astronomy for the detection of electromagnetic waves from celestial bodies until 1932 (and the development of high-frequency radio transceivers). Also in 1902, Arthur Edwin Kennelly discovered some of the ionosphere's radio-electrical properties. In 1912, the U.S. Congress imposed the Radio Act of 1912 on amateur radio operators, limiting their operations to frequencies above 1.5 MHz (wavelength 200 meters or smaller). The government thought those frequencies were useless. This led to the discovery of HF radio propagation via the ionosphere in 1923. We have in quite recent years seen the universal adoption of the term 'stratosphere'..and..the companion term 'troposphere'... The term 'ionosphere', for the region in which the main characteristic is large scale ionisation with considerable mean free paths, appears appropriate as an addition to this series. In the early 1930s, test transmissions of Radio Luxembourg inadvertently provided evidence of the first radio modification of the ionosphere; HAARP ran a series of experiments in 2017 using the eponymous Luxembourg Effect. Edward V. Appleton was awarded a Nobel Prize in 1947 for his confirmation in 1927 of the existence of the ionosphere. Lloyd Berkner first measured the height and density of the ionosphere. This permitted the first complete theory of short-wave radio propagation. Maurice V. Wilkes and J. A. Ratcliffe researched the topic of radio propagation of very long radio waves in the ionosphere. Vitaly Ginzburg has developed a theory of electromagnetic wave propagation in plasmas such as the ionosphere. In 1962, the Canadian satellite Alouette 1 was launched to study the ionosphere. 
Following its success were Alouette 2 in 1965 and the two ISIS satellites in 1969 and 1971, further AEROS-A and -B in 1972 and 1975, all for measuring the ionosphere. On July 26, 1963 the first operational geosynchronous satellite Syncom 2 was launched. The board radio beacons on this satellite (and its successors) enabled – for the first time – the measurement of total electron content (TEC) variation along a radio beam from geostationary orbit to an earth receiver. (The rotation of the plane of polarization directly measures TEC along the path.) Australian geophysicist Elizabeth Essex-Cohen from 1969 onwards was using this technique to monitor the atmosphere above Australia and Antarctica. The ionosphere is a shell of electrons and electrically charged atoms and molecules that surrounds the Earth, stretching from a height of about 50 km (31 mi) to more than 1,000 km (620 mi). It exists primarily due to ultraviolet radiation from the Sun. The lowest part of the Earth's atmosphere, the troposphere extends from the surface to about 10 km (6.2 mi). Above that is the stratosphere, followed by the mesosphere. In the stratosphere incoming solar radiation creates the ozone layer. At heights of above 80 km (50 mi), in the thermosphere, the atmosphere is so thin that free electrons can exist for short periods of time before they are captured by a nearby positive ion. The number of these free electrons is sufficient to affect radio propagation. This portion of the atmosphere is partially ionized and contains a plasma which is referred to as the ionosphere. Ultraviolet (UV), X-ray and shorter wavelengths of solar radiation are ionizing, since photons at these frequencies contain sufficient energy to dislodge an electron from a neutral gas atom or molecule upon absorption. In this process the light electron obtains a high velocity so that the temperature of the created electronic gas is much higher (of the order of thousand K) than the one of ions and neutrals. The reverse process to ionization is recombination, in which a free electron is "captured" by a positive ion. Recombination occurs spontaneously, and causes the emission of a photon carrying away the energy produced upon recombination. As gas density increases at lower altitudes, the recombination process prevails, since the gas molecules and ions are closer together. The balance between these two processes determines the quantity of ionization present. Ionization depends primarily on the Sun and its activity. The amount of ionization in the ionosphere varies greatly with the amount of radiation received from the Sun. Thus there is a diurnal (time of day) effect and a seasonal effect. The local winter hemisphere is tipped away from the Sun, thus there is less received solar radiation. The activity of the Sun is associated with the sunspot cycle, with more radiation occurring with more sunspots. Radiation received also varies with geographical location (polar, auroral zones, mid-latitudes, and equatorial regions). There are also mechanisms that disturb the ionosphere and decrease the ionization. There are disturbances such as solar flares and the associated release of charged particles into the solar wind which reaches the Earth and interacts with its geomagnetic field. At night the F layer is the only layer of significant ionization present, while the ionization in the E and D layers is extremely low. 
During the day, the D and E layers become much more heavily ionized, as does the F layer, which develops an additional, weaker region of ionisation known as the F1 layer. The F2 layer persists by day and night and is the main region responsible for the refraction and reflection of radio waves. The D layer is the innermost layer, 60 km (37 mi) to 90 km (56 mi) above the surface of the Earth. Ionization here is due to Lyman series-alpha hydrogen radiation at a wavelength of 121.6 nanometre (nm) ionizing nitric oxide (NO). In addition, high solar activity can generate hard X-rays (wavelength < 1 nm) that ionize N2 and O2. Recombination rates are high in the D layer, so there are many more neutral air molecules than ions. Medium frequency (MF) and lower high frequency (HF) radio waves are significantly attenuated within the D layer, as the passing radio waves cause electrons to move, which then collide with the neutral molecules, giving up their energy. Lower frequencies experience greater absorption because they move the electrons farther, leading to greater chance of collisions. This is the main reason for absorption of HF radio waves, particularly at 10 MHz and below, with progressively less absorption at higher frequencies. This effect peaks around noon and is reduced at night due to a decrease in the D layer's thickness; only a small part remains due to cosmic rays. A common example of the D layer in action is the disappearance of distant AM broadcast band stations in the daytime. During solar proton events, ionization can reach unusually high levels in the D-region over high and polar latitudes. Such very rare events are known as Polar Cap Absorption (or PCA) events, because the increased ionization significantly enhances the absorption of radio signals passing through the region. In fact, absorption levels can increase by many tens of dB during intense events, which is enough to absorb most (if not all) transpolar HF radio signal transmissions. Such events typically last less than 24 to 48 hours. The E layer is the middle layer, 90 km (56 mi) to 150 km (93 mi) above the surface of the Earth. Ionization is due to soft X-ray (1–10 nm) and far ultraviolet (UV) solar radiation ionization of molecular oxygen (O2). Normally, at oblique incidence, this layer can only reflect radio waves having frequencies lower than about 10 MHz and may contribute a bit to absorption on frequencies above. However, during intense sporadic E events, the Es layer can reflect frequencies up to 50 MHz and higher. The vertical structure of the E layer is primarily determined by the competing effects of ionization and recombination. At night the E layer weakens because the primary source of ionization is no longer present. After sunset an increase in the height of the E layer maximum increases the range to which radio waves can travel by reflection from the layer. This region is also known as the Kennelly–Heaviside layer or simply the Heaviside layer. Its existence was predicted in 1902 independently and almost simultaneously by the American electrical engineer Arthur Edwin Kennelly (1861–1939) and the British physicist Oliver Heaviside (1850–1925). However, it was not until 1924 that its existence was detected by Edward V. Appleton and Miles Barnett. The Es layer (sporadic E-layer) is characterized by small, thin clouds of intense ionization, which can support reflection of radio waves, rarely up to 225 MHz. Sporadic-E events may last for just a few minutes to several hours. 
Sporadic E propagation makes VHF-operating radio amateurs very excited, as propagation paths that are generally unreachable can open up. There are multiple causes of sporadic-E that are still being pursued by researchers. This propagation occurs most frequently during the summer months when high signal levels may be reached. The skip distances are generally around 1,640 km (1,020 mi). Distances for one hop propagation can be anywhere from 900 km (560 mi) to 2,500 km (1,600 mi). Double-hop reception over 3,500 km (2,200 mi) is possible. The F layer or region, also known as the Appleton–Barnett layer, extends from about 150 km (93 mi) to more than 500 km (310 mi) above the surface of Earth. It is the layer with the highest electron density, which implies signals penetrating this layer will escape into space. Electron production is dominated by extreme ultraviolet (UV, 10–100 nm) radiation ionizing atomic oxygen. The F layer consists of one layer (F2) at night, but during the day, a secondary peak (labelled F1) often forms in the electron density profile. Because the F2 layer remains by day and night, it is responsible for most skywave propagation of radio waves and long distances high frequency (HF, or shortwave) radio communications. Above the F layer, the number of oxygen ions decreases and lighter ions such as hydrogen and helium become dominant. This region above the F layer peak and below the plasmasphere is called the topside ionosphere. An ionospheric model is a mathematical description of the ionosphere as a function of location, altitude, day of year, phase of the sunspot cycle and geomagnetic activity. Geophysically, the state of the ionospheric plasma may be described by four parameters: electron density, electron and ion temperature and, since several species of ions are present, ionic composition. Radio propagation depends uniquely on electron density. Models are usually expressed as computer programs. The model may be based on basic physics of the interactions of the ions and electrons with the neutral atmosphere and sunlight, or it may be a statistical description based on a large number of observations or a combination of physics and observations. One of the most widely used models is the International Reference Ionosphere (IRI), which is based on data and specifies the four parameters just mentioned. The IRI is an international project sponsored by the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI). The major data sources are the worldwide network of ionosondes, the powerful incoherent scatter radars (Jicamarca, Arecibo, Millstone Hill, Malvern, St Santin), the ISIS and Alouette topside sounders, and in situ instruments on several satellites and rockets. IRI is updated yearly. IRI is more accurate in describing the variation of the electron density from bottom of the ionosphere to the altitude of maximum density than in describing the total electron content (TEC). Since 1999 this model is "International Standard" for the terrestrial ionosphere (standard TS16457). Ionograms allow deducing, via computation, the true shape of the different layers. Nonhomogeneous structure of the electron/ion-plasma produces rough echo traces, seen predominantly at night and at higher latitudes, and during disturbed conditions. At mid-latitudes, the F2 layer daytime ion production is higher in the summer, as expected, since the Sun shines more directly on the Earth. 
However, there are seasonal changes in the molecular-to-atomic ratio of the neutral atmosphere that cause the summer ion loss rate to be even higher. The result is that the increase in the summertime loss overwhelms the increase in summertime production, and total F2 ionization is actually lower in the local summer months. This effect is known as the winter anomaly. The anomaly is always present in the northern hemisphere, but is usually absent in the southern hemisphere during periods of low solar activity. Within approximately ± 20 degrees of the magnetic equator, is the equatorial anomaly. It is the occurrence of a trough in the ionization in the F2 layer at the equator and crests at about 17 degrees in magnetic latitude. The Earth's magnetic field lines are horizontal at the magnetic equator. Solar heating and tidal oscillations in the lower ionosphere move plasma up and across the magnetic field lines. This sets up a sheet of electric current in the E region which, with the horizontal magnetic field, forces ionization up into the F layer, concentrating at ± 20 degrees from the magnetic equator. This phenomenon is known as the equatorial fountain. The worldwide solar-driven wind results in the so-called Sq (solar quiet) current system in the E region of the Earth's ionosphere (ionospheric dynamo region) (100–130 km (62–81 mi) altitude). Resulting from this current is an electrostatic field directed west–east (dawn–dusk) in the equatorial day side of the ionosphere. At the magnetic dip equator, where the geomagnetic field is horizontal, this electric field results in an enhanced eastward current flow within ± 3 degrees of the magnetic equator, known as the equatorial electrojet. When the Sun is active, strong solar flares can occur that will hit the sunlit side of Earth with hard X-rays. The X-rays will penetrate to the D-region, releasing electrons that will rapidly increase absorption, causing a high frequency (3–30 MHz) radio blackout. During this time very low frequency (3–30 kHz) signals will be reflected by the D layer instead of the E layer, where the increased atmospheric density will usually increase the absorption of the wave and thus dampen it. As soon as the X-rays end, the sudden ionospheric disturbance (SID) or radio black-out ends as the electrons in the D-region recombine rapidly and signal strengths return to normal. Associated with solar flares is a release of high-energy protons. These particles can hit the Earth within 15 minutes to 2 hours of the solar flare. The protons spiral around and down the magnetic field lines of the Earth and penetrate into the atmosphere near the magnetic poles increasing the ionization of the D and E layers. PCA's typically last anywhere from about an hour to several days, with an average of around 24 to 36 hours. Coronal mass ejections can also release energetic protons that enhance D-region absorption in the polar regions. Lightning can cause ionospheric perturbations in the D-region in one of two ways. The first is through VLF (very low frequency) radio waves launched into the magnetosphere. These so-called "whistler" mode waves can interact with radiation belt particles and cause them to precipitate onto the ionosphere, adding ionization to the D-region. These disturbances are called "lightning-induced electron precipitation" (LEP) events. Additional ionization can also occur from direct heating/ionization as a result of huge motions of charge in lightning strikes. These events are called early/fast. In 1925, C. T. R. 
Wilson proposed a mechanism by which electrical discharge from lightning storms could propagate upwards from clouds to the ionosphere. Around the same time, Robert Watson-Watt, working at the Radio Research Station in Slough, UK, suggested that the ionospheric sporadic E layer (Es) appeared to be enhanced as a result of lightning but that more work was needed. In 2005, C. Davis and C. Johnson, working at the Rutherford Appleton Laboratory in Oxfordshire, UK, demonstrated that the Es layer was indeed enhanced as a result of lightning activity. Their subsequent research has focused on the mechanism by which this process can occur. Due to the ability of ionized atmospheric gases to refract high frequency (HF, or shortwave) radio waves, the ionosphere can reflect radio waves directed into the sky back toward the Earth. Radio waves directed at an angle into the sky can return to Earth beyond the horizon. This technique, called "skip" or "skywave" propagation, has been used since the 1920s to communicate at international or intercontinental distances. The returning radio waves can reflect off the Earth's surface into the sky again, allowing greater ranges to be achieved with multiple hops. This communication method is variable and unreliable, with reception over a given path depending on time of day or night, the seasons, weather, and the 11-year sunspot cycle. During the first half of the 20th century it was widely used for transoceanic telephone and telegraph service, and business and diplomatic communication. Due to its relative unreliability, shortwave radio communication has been mostly abandoned by the telecommunications industry, though it remains important for high-latitude communication where satellite-based radio communication is not possible. Some broadcasting stations and automated services still use shortwave radio frequencies, as do radio amateur hobbyists for private recreational contacts. When a radio wave reaches the ionosphere, the electric field in the wave forces the electrons in the ionosphere into oscillation at the same frequency as the radio wave. Some of the radio-frequency energy is given up to this resonant oscillation. The oscillating electrons will then either be lost to recombination or will re-radiate the original wave energy. Total refraction can occur when the collision frequency of the ionosphere is less than the radio frequency, and if the electron density in the ionosphere is great enough. A qualitative understanding of how an electromagnetic wave propagates through the ionosphere can be obtained by recalling geometric optics. Since the ionosphere is a plasma, it can be shown that the refractive index is less than unity. Hence, the electromagnetic "ray" is bent away from the normal rather than toward the normal as would be indicated when the refractive index is greater than unity. It can also be shown that the refractive index of a plasma, and hence the ionosphere, is frequency-dependent, see Dispersion (optics). The critical frequency is the limiting frequency at or below which a radio wave is reflected by an ionospheric layer at vertical incidence. If the transmitted frequency is higher than the plasma frequency of the ionosphere, then the electrons cannot respond fast enough, and they are not able to re-radiate the signal. It is calculated as shown below: where N = electron density per m3 and fcritical is in Hz. 
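The critical-frequency relation referred to just above lost its equation in extraction; it is commonly written as f_critical ≈ 9·√N, with N in electrons per cubic metre and the result in Hz. The sketch below uses that standard approximation, and the example electron density is illustrative rather than taken from the article.

```python
import math

def critical_frequency_hz(electron_density_per_m3: float) -> float:
    """Plasma (critical) frequency using the standard approximation f_c ≈ 9 * sqrt(N)."""
    return 9.0 * math.sqrt(electron_density_per_m3)

# Illustrative daytime F2-layer peak density of 1e12 electrons per cubic metre.
n_e = 1e12
print(f"{critical_frequency_hz(n_e) / 1e6:.1f} MHz")  # ≈ 9 MHz
```

A vertically incident wave below this frequency is reflected by the layer; above it, as the text notes, the electrons cannot respond fast enough and the signal escapes into space.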
The Maximum Usable Frequency (MUF) is defined as the upper frequency limit that can be used for transmission between two points at a specified time. The cutoff frequency is the frequency below which a radio wave fails to penetrate a layer of the ionosphere at the incidence angle required for transmission between two specified points by refraction from the layer. The open system electrodynamic tether, which uses the ionosphere, is being researched. The space tether uses plasma contactors and the ionosphere as parts of a circuit to extract energy from the Earth's magnetic field by electromagnetic induction. Scientists explore the structure of the ionosphere by a wide variety of methods. They include: A variety of experiments, such as HAARP (High Frequency Active Auroral Research Program), involve high power radio transmitters to modify the properties of the ionosphere. These investigations focus on studying the properties and behavior of ionospheric plasma, with particular emphasis on being able to understand and use it to enhance communications and surveillance systems for both civilian and military purposes. HAARP was started in 1993 as a proposed twenty-year experiment, and is currently active near Gakona, Alaska. The SuperDARN radar project researches the high- and mid-latitudes using coherent backscatter of radio waves in the 8 to 20 MHz range. Coherent backscatter is similar to Bragg scattering in crystals and involves the constructive interference of scattering from ionospheric density irregularities. The project involves more than 11 different countries and multiple radars in both hemispheres. Scientists are also examining the ionosphere by the changes to radio waves, from satellites and stars, passing through it. The Arecibo radio telescope located in Puerto Rico, was originally intended to study Earth's ionosphere. Ionograms show the virtual heights and critical frequencies of the ionospheric layers and which are measured by an ionosonde. An ionosonde sweeps a range of frequencies, usually from 0.1 to 30 MHz, transmitting at vertical incidence to the ionosphere. As the frequency increases, each wave is refracted less by the ionization in the layer, and so each penetrates further before it is reflected. Eventually, a frequency is reached that enables the wave to penetrate the layer without being reflected. For ordinary mode waves, this occurs when the transmitted frequency just exceeds the peak plasma, or critical, frequency of the layer. Tracings of the reflected high frequency radio pulses are known as ionograms. Reduction rules are given in: "URSI Handbook of Ionogram Interpretation and Reduction", edited by William Roy Piggott and Karl Rawer, Elsevier Amsterdam, 1961 (translations into Chinese, French, Japanese and Russian are available). Incoherent scatter radars operate above the critical frequencies. Therefore, the technique allows probing the ionosphere, unlike ionosondes, also above the electron density peaks. The thermal fluctuations of the electron density scattering the transmitted signals lack coherence, which gave the technique its name. Their power spectrum contains information not only on the density, but also on the ion and electron temperatures, ion masses and drift velocities. Radio occultation is a remote sensing technique where a GNSS signal tangentially scrapes the Earth, passing through the atmosphere, and is received by a Low Earth Orbit (LEO) satellite. As the signal passes through the atmosphere, it is refracted, curved and delayed. 
An LEO satellite samples the total electron content and bending angle of many such signal paths as it watches the GNSS satellite rise or set behind the Earth. Using an inverse Abel transform, a radial profile of refractivity at that tangent point on Earth can be reconstructed. In empirical models of the ionosphere such as NeQuick, the following indices are used as indirect indicators of the state of the ionosphere. F10.7 and R12 are two indices commonly used in ionospheric modelling. Both are valuable for their long historical records covering multiple solar cycles. F10.7 is a measurement of the intensity of solar radio emissions at a frequency of 2800 MHz made using a ground radio telescope. R12 is a 12-month average of daily sunspot numbers. Both indices have been shown to be correlated to each other. However, both indices are only indirect indicators of solar ultraviolet and X-ray emissions, which are primarily responsible for causing ionization in the Earth's upper atmosphere. We now have data from the GOES spacecraft that measures the background X-ray flux from the Sun, a parameter more closely related to the ionization levels in the ionosphere. There are a number of models used to understand the effects of the ionosphere on global navigation satellite systems. The Klobuchar model is currently used to compensate for ionospheric effects in GPS. This model was developed at the US Air Force Geophysical Research Laboratory circa 1974 by John (Jack) Klobuchar. The Galileo navigation system uses the NeQuick model. Objects in the Solar System that have appreciable atmospheres (i.e., all of the major planets and many of the larger natural satellites) generally produce ionospheres. Planets known to have ionospheres include Venus, Uranus, Mars and Jupiter.
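As a companion to the total electron content (TEC) measurements mentioned above, the first-order ionospheric group delay on a GNSS signal is usually approximated as 40.3·TEC/f² metres of extra range. That coefficient and the example TEC value are standard textbook numbers, not figures from this article.

```python
def ionospheric_range_error_m(tec_electrons_per_m2: float, freq_hz: float) -> float:
    """First-order ionospheric group delay expressed as extra range, in metres."""
    return 40.3 * tec_electrons_per_m2 / freq_hz**2

TECU = 1e16                    # 1 TEC unit = 1e16 electrons per square metre
gps_l1 = 1.57542e9             # GPS L1 carrier frequency, Hz

# A moderate daytime ionosphere of ~30 TECU adds a few metres of range error at L1.
print(f"{ionospheric_range_error_m(30 * TECU, gps_l1):.2f} m")
```

The inverse-square dependence on frequency is what lets dual-frequency receivers estimate and remove this delay, while single-frequency receivers fall back on broadcast corrections such as the Klobuchar model mentioned above.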
<urn:uuid:f8124265-01ec-4979-b93f-5e7db9704890>
4.0625
5,449
Knowledge Article
Science & Tech.
36.955798
95,506,272
Though climate scientists have long debated the reasons behind the variation in atmospheric levels of carbon dioxide that occur over lengthy periods in Earth's history, the Princeton team may have found a clue to where the answer can be found. In a new research paper, the team reveals that the waters in the Southern Ocean below 60 degrees south latitude, the region that hugs the continent of Antarctica, play a far more significant role than was previously thought in regulating atmospheric carbon, and -- in contrast to past theories -- the waters north of this region do comparably little to regulate it. "Cold water that wells up regularly from the depths of the Southern Ocean spreads out on the ocean's surface along both sides of this dividing line, and we have found that the water performs two very different functions depending on which side of the line it flows toward," said Irina Marinov, the study's lead author. "While the water north of the line generally spreads nutrients throughout the world's oceans, the second, southward-flowing stream soaks up carbon dioxide, a greenhouse gas, from the air. Such a sharply-defined difference in function has surprised us. It could mean that a change to one side of the cycle might not affect the other as much as we once suspected." The research team, which also includes Princeton's Jorge Sarmiento as well as the National Oceanic and Atmospheric Administration's Anand Gnanadesikan and Robbie Toggweiler, will publish their results in today's (June 22) issue of the scientific journal, Nature. Marinov, who led the study while working in Sarmiento's lab, is currently pursuing postdoctoral research at the Massachusetts Institute of Technology as a NOAA Fellow in Climate and Global Change. The Southern Ocean has long been of interest to scientists, who have found that it influences the rest of the planet in many ways. Two years ago, Sarmiento's research team discovered that the nutrients in the world's oceans were dependent on the Southern Ocean's circulation pattern, but had not realized how the pattern affected the atmospheric carbon cycle. Scientists have also been aware that cold Antarctic waters have the ability to absorb atmospheric carbon dioxide, which could make the region one of the planet's lines of defense against rising greenhouse gas levels. These and other effects the Southern Ocean has on the Earth are not themselves new to science, but distinctions between one effect and another have been difficult to draw. "The new paper shows that carbon dioxide and nutrient flow are separated quite dramatically," said Sarmiento, a professor of geosciences. "What we are trying to do is understand better the balance of forces that help our planet maintain a steady environmental state, so we can anticipate what might cause that state to change. This paper helps us clarify how those forces interact." Changing levels of atmospheric carbon dioxide have long concerned the scientific community, as this well-known greenhouse gas could be a major influence on global warming. Marinov said the discovery could shed light on how the Earth reacted far back in history, which might offer clues to how it will behave in the future. "In the last ice age, for example, the atmosphere experienced very low levels of carbon dioxide, and no one is completely sure why," she said. "However, we now understand the Southern Ocean plays a large role in regulating how much of the gas gets dissolved in water, and how much remains in the atmosphere." 
The current study, she said, indicates that to better understand the Southern Ocean's effect on atmospheric carbon, scientists should pay greater attention to the Antarctic than to the more northerly sub-Antarctic region. "In the Antarctic, the circulation pattern moves the surface water carrying carbon dioxide deep into the ocean's depths, where the sequestered carbon could potentially be trapped for a long time," Marinov said. "According to the models we used, the deep Antarctic is the critical region where we need to concentrate our research." The team also indicated that the findings had implications for future research into carbon sequestration, a strategy for coping with increased atmospheric carbon dioxide levels. Some scientists propose that sequestration could one day capture atmospheric carbon and store it in places such as the deep ocean, thus mitigating humanity's greenhouse gas emissions. "An interesting idea of recent years is that we can sequester a lot of carbon if we dump iron into the ocean to encourage the growth of certain microorganisms, which incorporate carbon as they grow," Marinov said. "These organisms would then fall to the ocean floor after they die, taking the carbon with them. The overall effect would be to lower concentration of carbon in the surface waters, allowing more atmospheric carbon dioxide to dissolve into the sea. Our research has implications for future iron fertilization experiments, the focus of which we conclude should shift to the Antarctic." Marinov said that the findings were based strongly on the team's computer models, which have limitations that they will now concentrate on eliminating. "While we are confident about the paper's conclusions, we are always looking for ways to clarify our understanding of the Southern Ocean," she said. "Our model, for example, does not take into account the fact that the circulation patterns are strongest in the winter, when the Antarctic is covered in darkness and the phytoplankton cannot grow very much. It is important that we understand the impact of this process on atmospheric carbon dioxide through future research." This research was sponsored in part by the U.S. Department of Energy and NOAA's Postdoctoral Program in Climate and Global Change, administered by the University Corporation for Atmospheric Research. The Princeton team worked closely with NOAA's Geophysical Fluid Dynamics Lab, which is affiliated with Princeton through the graduate program in atmospheric and oceanic sciences. Climate research at Princeton is strongly enriched by the relationship with researchers in the laboratory on the Forrestal Campus, who collaborate on research, supervise Princeton graduate students and teach University courses.Abstract Chad Boutin | EurekAlert! Scientists discover Earth's youngest banded iron formation in western China 12.07.2018 | University of Alberta Drones survey African wildlife 11.07.2018 | Schweizerischer Nationalfonds SNF For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. 
<urn:uuid:bce504d7-78bb-4a2b-8afc-93019fc83016>
3.71875
1,831
Content Listing
Science & Tech.
38.613548
95,506,288
Calculating small number of species in aberrant genera of insects and plants. Joachim Barrande's "Colonies", Élie de Beaumont's "lines of Elevation", Forbes's "Polarity" make Darwin despair, as these theories lead to conclusions opposite to Darwin's from the same classes of facts.
<urn:uuid:3e3f222e-3260-49fd-a298-b2054706bb8b>
2.609375
105
Truncated
Science & Tech.
39.971524
95,506,291
by Bhubaneswar Mishra
Publisher: Courant Institute of Mathematical Sciences, 1993
Number of pages: 425
Algorithmic Algebra studies some of the main algorithmic tools of computer algebra, covering such topics as Gröbner bases, characteristic sets, resultants and semialgebraic sets. The main purpose of the book is to acquaint advanced undergraduate and graduate students in computer science, engineering and mathematics with the algorithmic ideas in computer algebra so that they could do research in computational algebra or understand the algorithms underlying many popular symbolic computational systems.
by Jonathan M. Borwein - DocServer
The desire to understand Pi, the challenge, and originally the need, to calculate ever more accurate values of Pi, has challenged mathematicians for many centuries, and Pi has provided compelling examples of computational mathematics.
by R. L. Constable, et al. - Prentice Hall
The authors offer a tutorial on the new mathematical ideas which underlie their research. Many of the ideas in this book will be accessible to a well-trained undergraduate with a good background in mathematics and computer science.
by Richard D. Jenks, Robert S. Sutor - axiom-developer.org
Axiom is a free general purpose computer algebra system. The book gives a technical introduction to AXIOM, interacts with the system's tutorial, accesses algorithms developed by the symbolic computation community, and presents advanced techniques.
The purpose of this book is to show how the computer can draw technically perfect pictures of Julia and Mandelbrot sets. All the necessary theory is explained and some words are said about how to put the things into a computer program.
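As a taste of what that last description covers, the escape-time rule behind such pictures fits in a few lines. The Python sketch below is illustrative only (the viewing window, grid size and iteration cap are arbitrary choices, not anything prescribed by the book); it prints a coarse ASCII rendering of the Mandelbrot set by iterating z -> z^2 + c and marking the points whose orbits stay bounded.

```python
# Coarse ASCII Mandelbrot rendering via the escape-time rule; all parameters
# here (window, grid size, iteration cap) are arbitrary illustrative choices.
WIDTH, HEIGHT, MAX_ITER = 60, 24, 40

for row in range(HEIGHT):
    line = ""
    for col in range(WIDTH):
        # Map the character grid onto a region of the complex plane around the set.
        c = complex(-2.0 + 3.0 * col / WIDTH, -1.2 + 2.4 * row / HEIGHT)
        z, n = 0j, 0
        while abs(z) <= 2.0 and n < MAX_ITER:
            z = z * z + c   # the escape-time iteration z -> z^2 + c
            n += 1
        line += "#" if n == MAX_ITER else " "
    print(line)
```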
<urn:uuid:007afbfc-e3e4-4480-9a81-5b22248ce376>
2.9375
353
Content Listing
Science & Tech.
26.631511
95,506,297
Programming an n-Tier Architecture
In software engineering, multi-tier architecture (often referred to as n-tier architecture) or multilayered architecture is a client–server architecture in which presentation, application processing, and data management functions are physically separated.
N-tier application architecture provides a model by which developers can create flexible and reusable applications. By segregating an application into tiers, developers acquire the option of modifying or adding a specific layer, instead of reworking the entire application. A three-tier architecture is typically composed of a presentation tier, a domain logic tier, and a data storage tier.
At the top of the diagram, we can see various application types. This is the presentation tier, and it is the responsibility of these applications to perform only the functions needed to gather and display information. If they are required to perform calculations, or to store or retrieve data, then they must interact with another layer, the business logic layer.
The business logic is usually a separate assembly or web service which provides a common place for any logic to be performed. This includes calculations, data retrieval and data storage. If this tier needs to store or retrieve data then it, in turn, calls on the data access tier.
The data access tier is only concerned with storing and retrieving data from a data source. At the bottom of the diagram we can see various data sources - XML, SQL and JSON in this example - however, these can be any source of data. The data access tier is used to query these data sources and return the results back to the business logic tier.
Using this architecture, the presentation tier is not concerned with how to retrieve data, which means that upgrades or extensions to the presentation can be performed quickly. It also means that multiple applications (website, mobile applications, desktop applications, web services and so on) can use the same business logic and data access without duplication of code, resulting in consistent results across applications.
Another advantage of n-tier architecture is that separate teams can work on each tier once a common interface is agreed upon. This allows web developers to develop web applications, database application developers to develop data access, Windows Forms developers to develop desktop applications and so on.
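To make the separation concrete, here is a deliberately minimal Python sketch of the three tiers described above. The class names, the in-memory data and the pay calculation are invented purely for illustration; a real system would put each tier in its own assembly, service or process.

```python
# Minimal three-tier sketch; names and data are invented for illustration only.

class DataAccessTier:
    """Knows only how to store and retrieve data. An in-memory dict stands in
    for the XML/SQL/JSON sources mentioned above."""

    def __init__(self):
        self._rows = {1: {"name": "Ada", "hours": 30}}

    def get_employee(self, emp_id):
        return self._rows[emp_id]


class BusinessLogicTier:
    """Performs the calculations; talks only to the data access tier."""

    def __init__(self, data_access):
        self._data = data_access

    def weekly_pay(self, emp_id, hourly_rate):
        employee = self._data.get_employee(emp_id)
        return employee["hours"] * hourly_rate


class PresentationTier:
    """Gathers input and displays results; never touches the data source."""

    def __init__(self, logic):
        self._logic = logic

    def show_pay(self, emp_id, hourly_rate):
        print("Pay: {:.2f}".format(self._logic.weekly_pay(emp_id, hourly_rate)))


if __name__ == "__main__":
    ui = PresentationTier(BusinessLogicTier(DataAccessTier()))
    ui.show_pay(1, 12.5)   # -> Pay: 375.00
```

Swapping the dictionary inside the data access tier for an XML, SQL or JSON source would leave the other two tiers untouched, which is exactly the reuse argument made above.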
<urn:uuid:a73cc865-4eb9-4d23-b72b-52c7d2df2834>
3.046875
492
Personal Blog
Software Dev.
26.0245
95,506,323
A new study conducted at the University of Michigan reveals a previously unrecognized threat to monarch butterflies: mounting levels of atmospheric carbon dioxide reduce the medicinal properties of milkweed plants that protect the iconic insects from disease. Milkweed leaves contain bitter toxins that help monarchs ward off predators and parasites, and the plant is the sole food of monarch caterpillars. In a multi-year experiment at the U-M Biological Station, researchers grew four milkweed species with varying levels of those protective compounds, which are called cardenolides. Half the plants were grown under normal carbon dioxide levels, and half of them were bathed, from dawn to dusk, in nearly twice that amount. Then the plants were fed to hundreds of monarch caterpillars. The study showed that the most protective of the four milkweed species lost its medicinal properties when grown under elevated CO2, resulting in a steep decline in the monarch’s ability to tolerate a common parasite, as well as a lifespan reduction of one week. The study looked solely at how elevated carbon dioxide levels alter plant chemistry and how those changes, in turn, affect interactions between monarchs and their parasites. It did not examine the climate-altering effects of the heat-trapping gas emitted when fossil fuels are burned. “We discovered a previously unrecognized, indirect mechanism by which ongoing environmental change—in this case, rising levels of atmospheric CO2—can act on disease in monarch butterflies,” said Leslie Decker, first author of the study, which was published July 10, 2018 in the journal Ecology Letters. “Our results emphasize that global environmental change may influence parasite-host interactions through changes in the medicinal properties of plants,” said Decker, who conducted the research for her doctoral dissertation in the U-M Department of Ecology and Evolutionary Biology. She is now a postdoctoral researcher at Stanford University. U-M ecologist Mark Hunter, Decker’s dissertation adviser and co-author of the Ecology Letters paper, said findings of the monarch study have broad implications. Many animals, including humans, use chemicals in the environment to help them control parasites and diseases. Aspirin, digitalis, Taxol and many other drugs originally came from plants. “If elevated carbon dioxide reduces the concentration of medicines in plants that monarchs use, it could be changing the concentration of drugs for all animals that self-medicate, including humans,” said Hunter, who has studied monarchs at the U-M Biological Station, at the northern tip of Michigan’s Lower Peninsula, for more than a decade. “When we play Russian roulette with the concentration of atmospheric gases, we are playing Russian roulette with our ability to find new medicines in nature,” he said. Earlier work in Hunter’s lab had shown that some species of milkweed produce lower cardenolide levels when grown under elevated carbon dioxide. That finding caught the attention of Decker, who with Hunter designed a follow-up study to look at the potential impact of rising CO2 on the disease susceptibility of monarchs in the future. “We’ve been able to show that a medicinal milkweed species loses its protective abilities under elevated carbon dioxide,” Decker said. “Our results suggest that rising CO2 will reduce the tolerance of monarch butterflies to their common parasite and will increase parasite virulence.” In recent years, monarch populations have been declining rapidly. 
Most discussions of the monarch butterfly’s plight focus on habitat loss: logging of trees in the Mexican forest where monarchs spend the winter, as well as the loss of wild milkweed plants that sustain them during their annual migration across North America. “Habitat loss, problems during migration and climate change all contribute to monarch declines,” Hunter said. “Unfortunately, our results add to that list and suggest that parasite-infected monarchs will become steadily sicker if atmospheric concentrations of CO2 continue to rise.” Read full Michigan News press release
<urn:uuid:d10eea5a-57e0-4faf-9e73-1aa9d340d778>
3.796875
827
News (Org.)
Science & Tech.
22.333021
95,506,324
As you've probably heard, yesterday a team of scientists identified evidence of cosmic inflation right after the Big Bang, a finding which helps explain how the entire Universe originated. Amazing as that sounds, it's way more important than you even imagine. To truly grasp the significance, let's start with what exactly it is that the Harvard team found. Forget analogies about ripples in ponds or whatever other over-simplified guff you've read. Here's what actually happened.
Catching a Wave
Yesterday's results come from analysis of the Cosmic Microwave Background: the thermal radiation left over from when our Universe came into being billions of years ago, which is still present as electromagnetic waves zipping through space. They're essentially the oldest waves that can be observed in the Universe.
What the Harvard team announced is that they'd observed primordial B-mode polarization in the microwave background they have been looking at. In English, that means that thermal radiation from the birth of our Universe has been distorted, subtly twisted, by gravitational waves that previously had only existed on paper.
That they actually exist reaffirms one of the most foundational principles of modern physics. First predicted by Albert Einstein in 1916, they're tiny ripples — a million times smaller than an atom — that carry energy across the universe. They're an integral part of Einstein's General Theory of Relativity, and the fact we now — for all intents and purposes — have proof of their existence has some profound results.
Yes, we now have solid proof that the Big Bang actually happened. But perhaps more importantly, yesterday's discovery rubbishes the most popular competing theory. The cyclic model, championed by Neil Turok, director of the Perimeter Institute in Canada, predicted that the Universe expanded and contracted over very long cycles. Starting with a Big Bang and ending with a Big Crunch, the growth of the Universe, Turok reckoned, would be tempered by gravity pulling it back together, in an endless cycle of expansion and contraction.
But the existence of gravitational waves makes it impossible. On BBC Radio 4 this morning, Professor Stephen Hawking explained that the "cyclic universe theory predicts no gravitational waves from the early universe." In fact, Hawking had a bet with Turok that gravitational waves did exist — which he's now calling in.
So, a little validation of results aside, we're left with a prime candidate for explaining how our Universe began: the inflation model of the Big Bang, where everything grew, for the tiniest fraction of a second at least, at a rate much faster than the speed of light.
Beyond the Big Bang
With more confidence than ever, then, we know where we come from. We're also, though, now better placed than ever to understand the Universe that currently surrounds us.
The evidence presented by the Harvard researchers describes the gravitational waves as faint, polarised and distorted by gravitational lensing. That last part is particularly exciting, because gravitational lensing is the key to determining how dark matter manifests itself in our Universe.
Put simply, the gravitational force exerted by large objects is enough to bend light — including microwaves like the ones the Harvard scientists have been analysing — subtly. The good news is that if we know what's between a source of light and the point from which we're observing it, we can predict how much light should be bent. Any discrepancies can be attributed to the presence of dark matter.
The gravitational lensing of these newly observed waves means that, in theory, we should be able to trace the origins and distributions of dark matter through time, and finally explain how it tangibly affects our universe. That's a big and difficult task that looms ahead, but Doctor Joanne Dunkley from the University of Oxford's Department of Physics recently told me that, if research progressed according to plan, we could expect to see real progress "in the next five to ten years." If anything, this latest finding will speed up that progress considerably.
Remember that, before yesterday, there was no tangible data to explain what happened to the Universe until it was over a second old. Now, we can probe it for details of what was happening less than 10 trillion trillion trillionths of a second after everything kicked off. This is, obviously, all wonderful news, so it's no surprise that Andrei Dmitriyevich Linde — one of the main authors of the inflation theory — was so happy to hear the news yesterday.
What Came Before?
But, as ever, there's one catch. The main benefit of the now-debunked cyclic model was that it neatly sidestepped the fact that all the matter in the Universe, every atom around us, had to come from somewhere. As far as it was concerned, everything had been here forever.
The inflation model, however, defines a very clear starting point to our Universe, before which there was... well, nobody quite knows. Stephen Hawking's done his best to argue it doesn't matter — "Since events before the Big Bang have no observational consequences," he once explained, "one may as well cut them out of the theory, and say that time began at the Big Bang" — but the scientific community is yet to buy it en masse.
So while proof of gravitational waves settles many an argument, it brings perhaps the biggest of all to the forefront: what was here before the Big Bang? We may never know. But at least, today, we've got an unprecedented look at what happened after.
Image by South Pole Telescope
<urn:uuid:cb46ece4-0e91-4555-a91c-73a0d6de8d03>
3.046875
1,123
News Article
Science & Tech.
42.261127
95,506,330
- There is an average life span of a butterfly – it is usually about one month, although the smallest butterflies that you can usually spot feasting on the flowers in your front yard will usually only live about one week. Mourning Cloaks, some tropical Heliconians, and Monarchs are some of the only butterflies that have an average life span of about nine months.
- Now as many of you know, butterflies are cold-blooded creatures, so there is another factor to take into consideration when you are dealing with butterflies: the climate. For instance, if the butterfly egg has been laid just before the cold weather hits, the egg will stay in egg-form until the weather warms, and as soon as it does, the caterpillar will hatch and everything will start again. If the butterfly is an adult butterfly and the weather starts to turn colder and it did not migrate south, the butterfly will hibernate somewhere until the weather warms. What this means is that a butterfly could technically live for many months past the average life span; it all just depends on the climate and what stage of life the butterfly is in when winter comes.
- There is also a difference between how long a butterfly would live if it was not living in the wild and how long it will actually live. Butterflies in the wild are exposed to many predators like birds and other insects, so may not live as long as they are capable of.
<urn:uuid:5896e67a-db0c-4f16-a6e5-9d218d18b275>
3.453125
289
Personal Blog
Science & Tech.
42.849719
95,506,354
A fundamental new trend in atmospheric and ocean circulation patterns in the Pacific Northwest appears to have begun, scientists say, and apparently is expanding its scope beyond Oregon waters. This year for the first time, the effect of the low-oxygen zone is also being seen in coastal waters off Washington, researchers at OSU and the Olympic Coast National Marine Sanctuary indicate. There have been reports of dead crabs stretching from the central Oregon coast to the central Washington coast. Some dissolved oxygen levels at 180 feet have recently been measured as low as 0.55 milliliters per liter, and areas as shallow as 45 feet have been measured at 1 milliliter per liter. These oxygen levels are several times lower than normal, and any dissolved oxygen level below 1.4 milliliters per liter is hypoxic, capable of suffocating a wide range of fish, crabs, and other marine life. "There is a huge pool of low-oxygen water off the central Oregon coast with values as low as 0.46 milliliters per liter," said Francis Chan, marine ecologist in the OSU Department of Zoology and with the Partnership for Interdisciplinary Studies of Coastal Oceans (PISCO), a marine research consortium at OSU and other universities along the West Coast. "OSU researchers have documented this year's region of low-oxygen bottom waters from Florence to Cascade Head," Chan said. "The lack of consistent upwelling winds allowed a low-oxygen pool of deep water to build up. Now that the upwelling-favorable winds are blowing consistently, we're seeing that pool of water come close to shore and begin to suffocate marine life. If these winds continue to blow, we expect to see continued and possibly significant die-offs." As events such as this become more regular, researchers say, they appear less like an anomaly and more like a fundamental shift in marine conditions and ocean behavior. In particular, a change in intensity and timing of coastal winds seems to play a significant role in these events. "We're seeing wild swings from year to year in the timing and duration of winds favorable for upwelling," said Jack Barth, an oceanographer with PISCO and the OSU College of Oceanic and Atmospheric Sciences. "This change from normal seasonal patterns and the increased variability are both consistent with climate change scenarios." Barth and his colleagues are working on new circulation models that may allow scientists to predict when hypoxia and these "dead zones" will occur. No connection has been observed between these events and other major ocean cycles, such as El Niño or the Pacific Decadal Oscillation. The lack of wide-scale ocean monitoring makes determining the size and movement of the dead zone difficult, although some new instrumentation being used this year by OSU scientists is helping. Dissolved oxygen sensors have been deployed on the sea floor both close to shore and in 260 feet of water off Newport, some of which are sending data in near real-time. In addition, a new underwater unmanned vehicle equipped with sensors to measure temperature, salinity, chlorophyll and dissolved oxygen is routinely sampling across central Oregon waters. During normal years, cold water rich in nutrients but low in oxygen upwells from the deep ocean off Oregon, mixes with oxygen-rich water near the surface, causes some phytoplankton growth and provides the basis for a thriving fishery and healthy marine food chain. During dead zone periods, some of the normal processes – including wind and current conditions – can change. 
This allows huge masses of plant growth to die, decay and in the process consume even more of the available oxygen near the sea floor, causing hypoxic conditions for marine life. The first event in 2002 caused a massive die-off of fish and invertebrate marine species on the central Oregon coast. Less severe and somewhat different events occurred in 2003, 2004 and 2005. The 2006 "dead zone" has a wider north-south extent. Some crabbers on the central Washington coast reported all dead crabs in pots at depths of about 45-90 feet, north of the Moclips River. Large numbers of dead Dungeness crab have been reported on the beach as far north as Kalaloch. Numerous species of bottom fish have been found dead on the beach south of the Quinault River in Washington. In Oregon, the most vulnerable area in recent years has been the central third of the coast between about Newport and Florence, where conditions seem to be conducive to the development of low-oxygen waters. It's not always easy to measure the biological impact of the dead zones, because many dead animals may be washed out to the deep sea. But researchers say that this year's event may ultimately be as severe as the first one in 2002, although it reflects slightly different wind and ocean current conditions. Collaborating on this research are scientists from OSU, PISCO, the Oregon Department of Fish and Wildlife, National Oceanic and Atmospheric Administration, University of Washington and the Olympic Coast National Marine Sanctuary. Researchers say that it's difficult to tell what long-term ecological impacts these dead zone events may have on marine ecosystems. "Many marine species live in fairly specialized ecological niches and any time you change the fundamental physics, chemistry and nature of the system, it's a serious concern," Barth said. Jane Lubchenco, the Valley Professor of Marine Biology at OSU and principal investigator for PISCO, also said that the biological monitoring of species health and impacts in the nearshore Pacific Ocean is "grossly inadequate," making it difficult to evaluate the long-term impacts of low-oxygen and other events.
Jane Lubchenco | EurekAlert!
<urn:uuid:aaa5144d-9fd7-4827-95b2-9e121be13702>
3.03125
1,825
Content Listing
Science & Tech.
39.182444
95,506,357
Open Discovery is a suite of programs that use Open Source or freely available tools to dock a library of chemical compounds against a receptor protein. In a paper in the Journal of Chemical Education, we outline the usefulness of having an uncomplicated, free-to-use protocol to accomplish a task that has been the subject of academic and commercial interest for decades. We also highlight the gaps in open source tools around preparing protein-ligand complexes for molecular simulation, an area we expect to develop in the future.
- Open Babel (install from http://openbabel.org/ - version developed with: 2.3.1). Open Babel is used for file conversions. Instructions can be found at the following locations:
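As a rough illustration of the file-conversion role mentioned above, the sketch below uses Open Babel's Python bindings (pybel, as shipped with the 2.x releases) rather than the command-line tool. The SMILES string and file names are placeholders chosen for this example; this is not code taken from Open Discovery itself.

```python
import pybel  # Python bindings shipped with Open Babel 2.x

# Build a molecule from a SMILES string and emit it in another format.
mol = pybel.readstring("smi", "CCO")  # ethanol, used only as a stand-in ligand
mol.make3D()                          # rough 3D coordinates via a force field
print(mol.write("sdf"))

# Convert every molecule in a (hypothetical) multi-molecule file to MOL2.
for m in pybel.readfile("sdf", "ligands.sdf"):
    m.write("mol2", "%s.mol2" % m.title, overwrite=True)
```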
<urn:uuid:5e73c182-1d05-45e4-8844-688983a51c22>
3.265625
154
Product Page
Software Dev.
39.492316
95,506,367
Edited By: SJ Murch and PK Saxena
325 pages, Figs
In plants, the ability to regenerate identical individuals from single cells is the basis for modern agriculture. These scientific advancements have given us virus-free stocks, novel germplasms, clonal propagation systems, and the commercial introduction of difficult-to-propagate plant species. Without the capacity to regenerate plants, it would not have been possible for plant biotechnology and genetic engineering to have advanced this far. This book contains detailed reviews by the leading scientists who made these discoveries.
<urn:uuid:a585b2f4-9556-4ca7-a0a3-4af0adb86fc1>
2.890625
180
Product Page
Science & Tech.
34.687353
95,506,372
In this paper, we take the first steps of simplifying particles into a linear function that organizes particles based on their particle number, similar to how atoms are arranged by atomic number. This repeats the method that was used to organize atomic elements and create the Periodic Table of Elements in the 1800s. The solution to linearize particles into a predictable function is not as simple as atomic elements, but it does exist. We will introduce an equation that fits known particles into a linear function and enables the prediction of future particles based on missing energy levels. It also predicts an exact mass of the neutrino. To accomplish this, particles are first organized by particle numbers, similar to atomic numbers in the Periodic Table of Elements and then charted against their known Particle Data Group energy levels. The results show similarities between particles and atomic elements – in both total numbers in formation and also in numbers where both are known to be more stable.
Comments: 11 pages [v1] 2018-04-19 17:53:50
<urn:uuid:db91655d-6b43-467d-b1b0-a740cc1567f4>
3.40625
356
Academic Writing
Science & Tech.
36.775031
95,506,394
What purpose does a cape serve in a superhero’s wardrobe? Usually nothing except fashion. In Batman Begins however, we are shown that a cape can do more than make you look like a cheap magician. Bruce Wayne uses some fancy materials called memory cloth to turn his cape from a floppy fashion accessory into a stiff glider with the application of an electric charge. I’m sure the gritty nature of this movie required the writers to throw this in to justify giving the Dark Knight what would otherwise look like a frivolous ornament. But even worse than looking silly, a cape can be deadly! One of the greatest scenes from The Incredibles where Edna talks about the unfortunate costume designs of doomed supers, AKA “No capes!” Even Watchmen hits on this theme when it shows the violent demise of Dollar Bill after he gets stuck in a revolving door. So how could you balance the utility with the inconvenience of a long flowy piece of fabric? Fold it up when you don’t need it! Nature is full of examples of reversible folding. When leaves appear in the spring, they don’t just grow very quickly. They are fully formed in the bud, and then unfurl all at once when the timing is right. Mathematicians, physicists, and engineers have started noticing these intricate packaging patterns in biological materials and trying to apply them to their own work. The Japanese art of origami has had a resurgence not just among artists, but also scientists as a way to discover new folds and test designs. One cool application of origami-science has been realized in the deployment of solar panels for space vessels. And I think the greatest thing we’ve gotten out of studying folding is figuring out how to make it easy and reversible! Umbrellas partially demonstrate this idea since they have two stable states and always fold and unfold in the same predictable pattern, but they need quite a bit of force to transition. The people in this video demonstrate how easy it is to fold and unfold a design named the Miura-ori. The most important part is that we’re going between something map-sized and something that will fit in your pocket in a matter of seconds. Insects are masters of reversible folding with their wings. Beetles are one of the best examples because they need to transition between flying and walking so frequently. When they walk on the ground, they risk their wings getting caught or damaged, so they tuck them neatly under a protective shell until they need to fly again. And this is exactly the mechanism I propose for superheroes that actually use capes for flight or protection! Fold it up, and then deploy it on demand. This, by the way, is most of the premise of the short-lived NBC series The Cape. Heroes could possibly even keep their entire costume folded up for a speedy wardrobe change during a crisis. Peter Parker’s backpack could unfurl into Spidey-spandex, or Superman could carry around his own collapsible phone booth for privacy on the go. These are kind of silly examples, but think of all the other ways you could make life easier with simple repeatable folding patterns. Sky divers need professionals to pack up their parachutes correctly, but what if it was designed so you could fold it yourself? A pop-up tent that fits in a small bag would be great for long hikes, or getting emergency shelters to disaster-stricken areas quickly and efficiently. Strollers and all of those other baby furniture accessories could definitely be improved if they could be packed into the car that much faster. 
Problem-solving by paper-folding! Besides martial arts expertise and a strong sense of justice of course. Batman is all about advancing technology through his super-cool gadgetry, so it’s no surprise that he adapted a highly useful remote sensing technique from his namesake animal. Bats, along with several other species of mammals, birds, and odontocetes, use sound to navigate their surroundings and find prey. Bats produce a series of ultrasonic clicks, and then listen to the echoes to conceptualize their environment. Sound is reflected in different ways depending on the texture of the surface it bounces off of, and the echo qualities can also estimate the size of the target object. The small differences between what is heard in each ear allow the animals to pinpoint locations precisely and detect if something's moving, what direction it's moving in, and how fast. Sound like a familiar human invention? SONAR = SOund Navigation And Ranging The physical principles behind echolocation have been adapted into sonar technology used in submarines to detect other subs and whatever else is in the water. Considering that echolocation in animals was theorized more than a century before the invention of sonar, it's likely that there was a certain amount of bioinspiration involved. It was also used to sense objects in the air before radar was developed. Radar is a generally superior remote sensing system because radio waves move faster than sound waves, but sonar still remains in use underwater because the radar's emitted microwaves are rapidly absorbed by water. Passive sonar was the first system employed in underwater detection, and it worked by listening in on well-placed hydrophones. It’s called passive because the hydrophones are only receiving sounds made by other things without producing any sounds themselves. It was a subpar arrangement since it depended on a quiet ocean while you hoped that what you were looking for was noisy, which might not always be the case. World War II compelled developers to raise the level of the technology and gave us active sonar. Now subs were sending out their own *pings* (the ones you’ve surely heard in Das Boot or The Hunt for Red October) and using the echoes in a manner closer to bats. The technology is also used frequently today to map the ocean floor through multibeam swath bathymetry. Batman uses both active and passive forms of sonar in The Dark Knight when he turns every cell phone in Gotham into a microphone. The phones have become active high frequency sound generators (akin to the ultrasonic clicks of microbats), and they also passively detect sounds outside of that range. Thanks to their built in GPS, he knows exactly where every sound is coming from. Lucius Fox of Wayne Enterprises monitors the console displaying all of the sonar data to help Batman narrow in on the Joker’s location, which is accomplished by comparing a sample of the Joker's voice to all of the incoming noise. Batman takes his batty-ness a step further by projecting the sonar-created images onto the lenses built into his cowl. So even though it's pitch dark, he can use sound pictures to guide his way and find the bad guys. But even humans that aren’t billionaire crime fighters have taken advantage of this technology for personal use! Some blind people have, through direct biomimicry, learned how to use echolocation themselves. 
For example, this documentary from the UK series Extraordinary People features a teenager who had his eyes removed at the age of three to prevent the spread of retinal cancer. The video is long, but watching the first few minutes will give you the idea. Ben Underwood is not only capable of walking around without a cane or a guide dog, but he’s actually quite proficient at biking, rollerblading, and skateboarding! He achieves this by constantly clicking at his surroundings and listening to the way the clicks bounce back. He can’t reach the high frequencies that dolphins and bats use, but it gets the job done. Unfortunately, Ben died in 2009 as a result of his cancer. Daniel Kish is also featured in the video, and another person rendered blind by retinal cancer who has learned to use echolocation instead of a cane. He is president of World Access for the Blind and teaches children how to navigate an unfamiliar environment using sound. Cool connection to the comic book world: another blind echolocator, Juan Ruiz, appeared in the first episode of Stan Lee’s Superhumans where he demonstrated his super abilities to navigate and measure the length of a cave. It’s an excellent show for investigating the possibilities of superpowers in our mortal realm! Now, comic book fans, another superhero may be coming to mind: Daredevil was made blind by toxins that, through some comic-induced happenstance, enhanced the rest of his senses. He is often described as having a “radar-like sense,” but much of it can actually be attributed to passive sonar. His enhanced hearing allows him to identify and position objects in space. To get an idea of how powerful his ears were, he was purportedly able to hear the Hulk’s heartbeat from four blocks away. So much like these humans, he has compensated for his lack of sight by using other means to explore his environment. I don’t think any of them adapted their white canes to serve as a billy club though. Exploring the realm of biologically inspired design one superhero example at a time, with some other natural sciences mixed in.
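To put a number on the ranging idea above: whether it is a bat click or a submarine ping, all you need is the round-trip time of the echo and the speed of sound. The small Python sketch below uses rough textbook sound speeds and made-up echo times, purely to make that arithmetic concrete.

```python
# Echo-ranging sketch: distance is half of (sound speed x round-trip time).
# Speeds are rough textbook values; the echo times below are made up.
SPEED_OF_SOUND_AIR_M_S = 343.0    # dry air at about 20 C
SPEED_OF_SOUND_SEA_M_S = 1500.0   # typical seawater value

def range_from_echo(round_trip_s, speed_m_s):
    """Return the one-way distance to the target in metres."""
    return speed_m_s * round_trip_s / 2.0

print(range_from_echo(0.01, SPEED_OF_SOUND_AIR_M_S))  # ~1.7 m, a bat-scale echo
print(range_from_echo(2.0, SPEED_OF_SOUND_SEA_M_S))   # 1500 m, a sonar-scale ping
```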
<urn:uuid:3503ec71-0482-4e8f-ac6b-3d89dbfc5e70>
2.984375
1,885
Personal Blog
Science & Tech.
43.272892
95,506,410
Dubbed the ‘rainforests of Europe’ as they are so diverse in wildlife, peat bogs contain more than 20 per cent of the world’s carbon. However, western Europe has lost most of its natural peat bogs, largely due to peat extraction for horticulture. Over the last three years, Earthwatch scientists have conducted the first botanical survey of Yelyna, the largest raised peat bog in Europe and a Ramsar Wetland of International Importance, which stretches over 26,175 hectares. In 2002 a series of fires decimated 85 per cent of the bog, resulting in considerable economic loss for local people who rely on the harvesting of swamp cranberries. This research led to the discovery of 17 new locations of eight rare and endangered plants at Yelyna. In response, the Belarus Ministry of Natural Resources announced the creation of a national peat bog monitoring programme that will ensure plant hotspots at Yelyna are protected. Two bogs, Velikiiji Moh and Fomino (equating to 5,016 hectares), have also been designated as protected nature sanctuaries as a direct result of the research. Between them, these bogs will absorb and store approximately 1,354.32 tonnes of carbon dioxide per year – equivalent to the combined emissions of almost 300 London households.* “It is estimated that globally, peat bogs store twice as much carbon as forests,” explains Nat Spring, Head of Research at Earthwatch (Europe). “Even if most people don’t know that the bogs of Belarus exist, protecting them is of vital importance if we are to combat climate change.” The bogs of Belarus provide an important refuge for migratory birds as they travel between western Europe and northern Russia, including the most threatened species of bird in Europe, the aquatic warbler. The long-term goal of this research project is to inform the effective conservation of all of the raised bogs of Belarus. Since 2004, 90 Earthwatch volunteers have donated their time to help survey these peat lands. They include members of the public, corporate employees and Eastern European scientists and educators funded through Earthwatch’s capacity building programme. *According to figures from the Tyndall Centre for Climate Change Research, pristine bogs accumulate CO2 at a rate of 0.27 tonnes per hectare per year. Research published in November 2007 by the Energy Saving Trust (EST) combines car emissions with government figures on household CO2 levels. Households in the city of London emit 4.6 tonnes of carbon per year.
Zoe Gamble | alfa
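The absorption figure quoted above follows directly from the footnoted rate. The one-line check below simply multiplies the protected area by the Tyndall Centre accumulation rate, assuming that rate applies uniformly to both bogs.

```python
# Back-of-envelope check of the figure quoted above, assuming the footnoted
# Tyndall Centre rate applies uniformly across both protected bogs.
protected_area_ha = 5016       # Velikiiji Moh + Fomino, from the article
rate_t_co2_per_ha_yr = 0.27    # pristine-bog accumulation rate (footnote)
print(protected_area_ha * rate_t_co2_per_ha_yr)   # -> 1354.32 tonnes of CO2 per year
```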
<urn:uuid:d48077de-9fbf-4931-b73b-164169425f15>
3.78125
1,202
Content Listing
Science & Tech.
41.958897
95,506,414
Danish researchers who have studied ants at the Smithsonian Tropical Research Institute in Panama since 1992 discovered that in both ant and bee species in which queens have multiple mates, a male's seminal fluid favors the survival of its own sperm over the other males' sperm. However, once sperm has been stored, leafcutter ant queens neutralize male-male sperm competition with glandular secretions in their sperm-storage organ. "Two things appear to be going on here," explains Jacobus Boomsma, professor at the University of Copenhagen and Research Associate at STRI. "Right after mating there is competition between sperm from different males. Sperm is expendable. Later, sperm becomes very precious to the female who will continue to use it for many years to fertilize her own eggs, producing the millions of workers it takes to maintain her colony." With post-doctoral researchers Susanne den Boer in Copenhagen and Boris Baer at the University of Western Australia, professor Boomsma studied sperm competition in sister species of ants and bees that mate singly—each queen with just one male—or multiply—with several males. Their results, published this week in the prestigious journal, Science, show that the ability of a male's seminal fluid to harm the sperm of other males only occurs in species that mate multiply, and that their own seminal fluid does not protect sperm against these antagonistic effects. "Females belonging to many species—from vertebrates to insects-- have multiple male partners. Seminal products evolve rapidly, probably in response to the intense male-male competition that continues even after courtship and mating have taken place," said William Eberhard, Smithsonian staff scientist. "This study continues the STRI tradition of looking at post-copulatory selection in a very biodiverse range of organisms, following in the footsteps of people like Bob Silberglied, who asked why butterflies and moths have two kinds of sperm in the 1970's." Similar sperm competition systems appear to have evolved independently in ants and in bees. Researchers now aim to discover how genes that control sperm recognition in bees and ants may differ, thus continuing to elucidate the details of a process key to reproduction and evolution. A grant from the Danish National Research Foundation and an Australian Research Council Fellowship supported this work. Permits for ant collection and export were issued by Panama's Autoridad Nacional de Ambiente (ANAM). The Smithsonian Tropical Research Institute, headquartered in Panama City, Panama is a unit of the Smithsonian Institution. The institute furthers the understanding of tropical nature and its importance to human welfare, trains students to conduct research in the tropics and promotes conservation by increasing public awareness of beauty and importance of tropical ecosystems. www.stri.org Ref. Susanne P.A. den Boer, Boris Baer, Jacobus Boomsma. Seminal fluid mediates ejaculate competition in social insects. Science. 18 Mar. 2010.
Beth King | EurekAlert!
<urn:uuid:7bb7e143-ec80-412f-be43-bdcc588e4484>
3.734375
1,252
Content Listing
Science & Tech.
37.717399
95,506,415
Radar images show large swath of Texas oil patch is heaving and sinking at alarming rates Two giant sinkholes near Wink, Texas, may just be the tip of the iceberg, according to a new study that found alarming rates of new ground movement extending far beyond the infamous sinkholes. That's the finding of a geophysical team from Southern Methodist University, Dallas that previously reported the rapid rate at which the sinkholes are expanding and new ones forming. Now the team has discovered that various locations in large portions of four Texas counties are also sinking and uplifting. Radar satellite images show significant movement of the ground across a 4000-square-mile area—in one place as much as 40 inches over the past two-and-a-half years, say the geophysicists. "The ground movement we're seeing is not normal. The ground doesn't typically do this without some cause," said geophysicist Zhong Lu, a professor in the Roy M. Huffington Department of Earth Sciences at SMU and a global expert in satellite radar imagery analysis. "These hazards represent a danger to residents, roads, railroads, levees, dams, and oil and gas pipelines, as well as potential pollution of ground water," Lu said. "Proactive, continuous detailed monitoring from space is critical to secure the safety of people and property." The scientists made the discovery with analysis of medium-resolution (15 feet to 65 feet) radar imagery taken between November 2014 and April 2017. The images cover portions of four oil-patch counties where there's heavy production of hydrocarbons from the oil-rich West Texas Permian Basin. The imagery, coupled with oil-well production data from the Texas Railroad Commission, suggests the area's unstable ground is associated with decades of oil activity and its effect on rocks below the surface of the earth. The SMU researchers caution that ground movement may extend beyond what radar observed in the four-county area. The entire region is highly vulnerable to human activity due to its geology—water-soluble salt and limestone formations, and shale formations. "Our analysis looked at just this 4000-square-mile area," said study co-author and research scientist Jin-Woo Kim, a research scientist in the SMU Department of Earth Sciences. "We're fairly certain that when we look further, and we are, that we'll find there's ground movement even beyond that," Kim said. "This region of Texas has been punctured like a pin cushion with oil wells and injection wells since the 1940s and our findings associate that activity with ground movement." Lu, Shuler-Foscue Chair at SMU, and Kim reported their findings in the Nature publication Scientific Reports, in the article "Association between localized geohazards in West Texas and human activities, recognized by Sentinel-1A/B satellite radar imagery." The researchers analyzed satellite radar images that were made public by the European Space Agency, and supplemented that with oil activity data from the Texas Railroad Commission. The study is among the first of its kind to identify small-scale deformation signals over a vast region by drawing from big data sets spanning a number of years and then adding supplementary information. The research is supported by the NASA Earth Surface and Interior Program, and the Shuler-Foscue Endowment at SMU. 
Imagery captures changes that might otherwise go undetected The SMU geophysicists focused their analysis on small, localized, rapidly developing hazardous ground movements in portions of Winkler, Ward, Reeves and Pecos counties, an area nearly the size of Connecticut. The study area includes the towns of Pecos, Monahans, Fort Stockton, Imperial, Wink and Kermit. The images from the European Space Agency are the result of satellite radar interferometry from recently launched open-source orbiting satellites that make radar images freely available to the public. With interferometric synthetic aperture radar, or InSAR for short, the satellites allow scientists to detect changes that aren't visible to the naked eye and that might otherwise go undetected. The satellite technology can capture ground deformation with an accuracy of sub-inches or better, at a spatial resolution of a few yards or better over thousands of miles, say the researchers. Ground movement associated with oil activity The SMU researchers found a significant relationship between ground movement and oil activities that include pressurized fluid injection into the region's geologically unstable rock formations. Fluid injection includes waste saltwater injection into nearby wells, and carbon dioxide flooding of depleting reservoirs to stimulate oil recovery. Injected fluids increase the pore pressure in the rocks, and the release of the stress is followed by ground uplift. The researchers found that ground movement coincided with nearby sequences of wastewater injection rates and volume and CO2 injection in nearby wells. Also related to the ground's sinking and upheaval are dissolving salt formations due to freshwater leaking into abandoned underground oil facilities, as well as the extraction of oil. Sinking and uplift detected from Wink to Fort Stockton As might be expected, the most significant subsidence is about a half-mile east of the huge Wink No. 2 sinkhole, where there are two subsidence bowls, one of which has sunk more than 15.5 inches a year. The rapid sinking is most likely caused by water leaking through abandoned wells into the Salado formation and dissolving salt layers, threatening possible ground collapse. At two wastewater injection wells 9.3 miles west of Wink and Kermit, the radar detected upheaval of about 2.1 inches that coincided with increases in injection volume. The injection wells extend about 4,921 feet to 5,577 feet deep into a sandstone formation. In the vicinity of 11 CO2 injection wells nearly seven miles southwest of Monahans, the radar analysis detected surface uplift of more than 1 inch. The wells are about 2,460 feet to 2,657 feet deep. As with wastewater injection, CO2 injection increased pore pressure in the rocks, so when stress was relieved it was followed by uplift of about 1 inch at the surface. The researchers also looked at an area 4.3 miles southwest of Imperial, where significant subsidence from fresh water flowing through cracked well casings, corroded steel pipes and unplugged abandoned wells has been widely reported. Water there has leaked into the easily dissolved Salado formation, created voids, and caused the ground to sink and water to rise from the subsurface, including creating Boehmer Lake, which didn't exist before 2003. Radar analysis by the SMU team detected rapid subsidence ranging from three-fourths of an inch to nearly 4 inches around active wells, abandoned wells and orphaned wells. 
"Movements around the roads and oil facilities to the southwest of Imperial, Texas, should be thoroughly monitored to mitigate potential catastrophes," the researchers write in the study. About 5.5 miles south of Pecos, their radar analysis detected more than 1 inch of subsidence near new wells drilled via hydraulic fracturing and in production since early 2015. There have also been six small earthquakes recorded there in recent years, suggesting the deformation of the ground generated accumulated stress and caused existing faults to slip. "We have seen a surge of seismic activity around Pecos in the last five to six years. Before 2012, earthquakes had not been recorded there. At the same time, our results clearly indicate that ground deformation near Pecos is occurring," Kim said. "Although earthquakes and surface subsidence could be coincidence, we cannot exclude the possibility that these earthquakes were induced by hydrocarbon production activities." Scientists: Boost the network of seismic stations to better detect activity Kim stated the need for improved earthquake location and detection threshold through an expanded network of seismic stations, along with continuous surface monitoring with the demonstrated radar remote sensing methods. "This is necessary to learn the cause of recent increased seismic activity," Kim said. "Our efforts to continuously monitor West Texas with this advanced satellite technique can help sustain safe, ongoing oil production." Near real-time monitoring of ground deformation possible in a few years The satellite radar datasets allowed the SMU geophysicists to detect both two-dimension east-west deformation of the ground, as well as vertical deformation. Lu, a leading scientist in InSAR applications, is a member of the Science Team for the dedicated U.S. and Indian NASA-ISRO (called NISAR) InSAR mission, set for launch in 2021 to study hazards and global environmental change. InSAR accesses a series of images captured by a read-out radar instrument mounted on the orbiting satellite Sentinel-1A/B. The satellites orbit 435 miles above the Earth's surface. Sentinel-1A was launched in 2014 and Sentinel-1B in 2016 as part of the European Union's Copernicus program. The Sentinel-1A/B constellation bounces a radar signal off the earth, then records the signal as it bounces back, delivering measurements. The measurements allow geophysicists to determine the distance from the satellite to the ground, revealing how features on the Earth's surface change over time. "Near real-time monitoring of ground deformation at high spatial and temporal resolutions is possible in a few years, using multiple satellites such as Sentinel-1A/B, NISAR and others," said Lu. "This will revolutionize our capability to characterize human-induced and natural hazards, and reduce their damage to humanity, infrastructure and the energy industry." More information: Jin-Woo Kim et al, Association between localized geohazards in West Texas and human activities, recognized by Sentinel-1A/B satellite radar imagery, Scientific Reports (2018). DOI: 10.1038/s41598-018-23143-6 Provided by: Southern Methodist University
<urn:uuid:a12440d2-482d-4146-bba2-0f99453b0d8e>
2.578125
2,036
News Article
Science & Tech.
32.402703
95,506,427
Rachel C. Walton, Catherine R. McCrohan and Keith N. White Faculty of Life Sciences, University of Manchester, UK The presence of non-essential trace metals in freshwater systems poses a risk to the viability of populations. For this reason aquatic organisms have developed physiological and behavioural responses to such potentially toxic trace metals that enable them to cope with such exposure. Aluminium (Al) is the most abundant metal on earth and is highly toxic in its ionic form. In freshwaters at low pH the main site of action of Al in many aquatic animals is the gills, where it interacts with the surface (epithelial) cells causing an immune response and excess mucus production. This in turn interferes with gill function, preventing efficient oxygen exchange and leading to hypoxia.1 Despite this, Al is often considered to be relatively harmless, as at circum-neutral pH it is usually associated with hydroxides and silicates in freshwaters and soils—forms generally regarded as non-bioavailable. The toxic effects of Al at low pH usually occur in areas associated with anthropogenic events such as acid mine drainage and acid rain. Our interest in Al at neutral pH stemmed from evidence that in freshly neutralised water Al was harmful to aquatic animals. For example, the relatively inert precipitates of Al, which form as pH increases, still cause gill irritation and toxicity in fish.2 We collected further evidence which showed that, at neutral pH, Al is both accumulated and toxic to a range of freshwater invertebrates,3,4,5 including those that do not possess gills, such as the great pond snail Lymnaea stagnalis. This evidence suggested that, even at pH7, Al could be toxic and that toxicity was not caused exclusively by physical irritation at the gill surface by Al precipitation. We subsequently used a range of complementary methods to explore cellular, physiological and behavioural mechanisms underlying Al accumulation and toxicity, and its eventual fate. In this article we discuss how these different techniques have enhanced our understanding of intracellular interactions involving Al, using the pond snail as a model organism. Methods of exposure of snails to aluminium and their behavioural responses Snails are housed in temperature-controlled tanks containing fresh water at neutral pH, to which Al is added. Aluminium levels are usually towards the high end of those found in natural freshwaters (250–500 µg L–1), and exposure usually lasts between 20 and 30 days. Tanks are cleaned, water refreshed and further Al added every two to four days; the exact timing depends on the experimental protocol and other factors such as whether we are also adding specific ligands such as dissolved organic carbon (DOC) to the water. Behaviour is assessed during the exposure period; either by measuring response time to a feeding stimulus such as sucrose, or by assigning an objective score to quantify overall activity level (see Figure 1). For the latter, individual snails are observed over a two minute period and are given a score for general activity, as well as “bonus points” for specific behaviours. The maximum available score is 12 points. Exposure to Al causes a dip in the score at around five days, followed by recovery in the medium term even when exposure is continued (see Figure 1). This initial dip followed by recovery indicates that snails are able to cope with Al toxicity in the short term, suggesting an internal detoxificatory mechanism. 
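To make the scoring protocol above concrete, here is a minimal sketch of how per-snail activity scores (0–12 scale) might be aggregated across an exposure trial to show the dip-and-recovery pattern. The day numbers and score values are illustrative assumptions, not data from the study.

```python
# Hypothetical activity-score bookkeeping for an Al-exposure trial.
# Scores are per-snail observations on a 0-12 scale (general activity
# plus "bonus points" for specific behaviours), grouped by exposure day.
from statistics import mean

# Illustrative values only -- not measurements from the study.
scores_by_day = {
    0: [10, 11, 9, 10],    # pre-exposure baseline
    5: [5, 6, 4, 5],       # dip in activity around day 5
    10: [8, 7, 9, 8],      # partial recovery despite continued exposure
    20: [10, 9, 10, 9],    # near-baseline by day 20
}

baseline = mean(scores_by_day[0])
for day, scores in sorted(scores_by_day.items()):
    m = mean(scores)
    change = 100.0 * (m - baseline) / baseline
    print(f"day {day:>2}: mean score {m:4.1f} ({change:+.0f}% vs baseline)")
```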
The initial dip in behavioural activity also shows that Al is bioavailable at neutral pH, even though the aquatic chemistry of Al might suggest otherwise. Addition of other chemicals, such as mimics for DOC, demonstrate how these also significantly affect the toxicity of Al. Accumulation of aluminium in soft tissues For any toxin to exert an effect it must come into contact with tissues or cells, and must also be in an available form so that it can interact with either intra- or extra-cellular processes. Traditionally, measurement of trace metals in tissues has been carried out using atomic absorption spectroscopy (AAS) or flame emission spectroscopy (FES), depending on the metal in question. The main limitations of such techniques are the detection limit for metals and the single element nature of the analyses. This makes them impractical for multiple metal analyses or when sample volumes are small, for example, owing to the small mass of the tissues analysed. Inductively coupled plasma atomic emission spectroscopy and inductively coupled plasma mass spectrometry (ICP-AES and ICP-MS, respectively) are now both increasingly used in trace metal research. These techniques allow simultaneous quantification of multiple metals, and detection limits can be as low as ppt for most metals with ICP-MS; detection limits for non-metals such as silicon (Si) and phosphorus (P) are generally higher at around 1 ppb. ICP techniques have an additional benefit for researchers studying Al as Si, which readily interacts with Al, can also be quantified so allowing analysis of both a metal and its potential ligand simultaneously with no risk of extra errors being introduced through sample splitting and dilutions for separate analyses. As many mechanisms of toxicity result in tissue imbalances in essential metals such as Ca, Na or K, ICP techniques also enable such effects to be examined concurrently. For ICP analyses to be accurate and reproducible, careful treatment of the samples is vital. Both Al and Si are ubiquitous in the environment and so only acid-washed plasticware and ultrapure chemicals should be used. Tissues are usually acid digested and then often further oxidised using an equal volume of hydrogen peroxide prior to dilution of the samples with water to reduce the concentration of H+ before analysis. Samples in our studies are analysed using either a Jobin-Yvon JY24 ICP-AES, Perkin-Elmer Optima 5300 dual view ICP-AES or an Agilent 7500cx ICP-MS with matrix-matched multi-element standards. Our results have demonstrated that Al accumulates in the soft tissues of snails, especially in the digestive gland, which is a large detoxificatory organ equivalent to the liver of vertebrates. The digestive gland contributes approximately 10% of the total mass of the snail soft tissues, yet often has 50% or more of the total body burden of Al (see Figure 2). Interestingly, we also demonstrated co-accumulation of Si with Al in the digestive gland, at a ratio of 2.5:1, consistent with ratios found in aluminosilicates such as allophane. We hypothesised that Si, an element previously with no known role in higher animals, might be used as an intracellular detoxificatory ligand for Al.6 How is aluminium detoxified? The recovery of the snails’ activity during ongoing exposure to Al (see Figure 1), together with the accumulation of Al in the digestive gland, indicated to us that Al detoxification was occurring within the snails, as opposed to formation of non-toxic aluminosilicates in the water. 
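The Si:Al co-accumulation ratio quoted above is a molar comparison derived from ICP mass concentrations. The sketch below shows that conversion; the digestive-gland concentrations used are placeholders chosen only to illustrate how a ratio near 2.5:1 would arise, not values reported in the study.

```python
# Convert ICP-AES/MS mass concentrations (ug per g dry tissue) into molar
# amounts so that the Si/Al stoichiometry can be compared with that of
# aluminosilicates such as allophane.
AL_ATOMIC_MASS = 26.98   # g/mol
SI_ATOMIC_MASS = 28.09   # g/mol

def to_umol_per_g(mass_ug_per_g: float, atomic_mass: float) -> float:
    return mass_ug_per_g / atomic_mass

# Placeholder digestive-gland concentrations (ug/g), for illustration only.
al = to_umol_per_g(120.0, AL_ATOMIC_MASS)   # umol Al per g tissue
si = to_umol_per_g(312.0, SI_ATOMIC_MASS)   # umol Si per g tissue

print(f"Al = {al:.2f} umol/g, Si = {si:.2f} umol/g, ratio = {si / al:.2f}")
```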
In aquatic invertebrates the two main mechanisms by which the toxicity of metals may be reduced are: (i) via association with metal binding proteins such as metallothioneins—heat stable proteins with a high S content; and/or (ii) by the formation of inorganic granules in which the metals are rendered harmless through strong associations with ligands such as phosphate. These granules form dense inorganic entities enclosed within membrane bound vesicles that can reach 100 µm in diameter and are therefore easily visible using light microscopy. The metal is either then stored in the digestive gland, or excreted via the gut.7 Our investigations therefore focused on the digestive gland, using two complementary approaches—sub-cellular fractionation followed by metal quantification and analytical microscopy. Sub-cellular fractionation involves homogenising the tissue and then separating out different fractions of cellular components in which metals may be found. These fractions are then acid digested and analysed for metal content using ICP-AES/MS. Homogenates are first subjected to a relatively low initial centrifugation speed to separate out cellular debris and dense material from other cellular components. This debris is then heat treated and digested with NaOH to obtain a dense inorganic fraction which includes any granules generated through detoxification processes. The remaining sample is subjected to a series of ultracentrifugation steps with forces up to 100,000 × g, first, to separate out the cellular organelles, and then, after heat treatment, to isolate heat denaturable proteins. The remaining supernatant contains heat stable proteins and small molecules such as citrate. As metallothioneins are heat stable, the fractions of most interest in metal detoxification studies are the inorganic fraction and the heat stable protein fraction. Our results have shown that, whilst the inorganic debris fraction generally contributes less than 5% of the total cell mass, up to 50% of the Al in the digestive gland is found in this fraction, presumably as granules. The remainder is distributed through the other cellular fractions, with most associated with organelles such as the nucleus and mitochondria. These results imply that, if Al detoxification is occurring, it is more likely to be due to the formation of inorganic granules than interactions with metallothioneins. Levels of Si, as well as other metals such as Ca, are also elevated in the inorganic fraction. However, this method cannot identify whether or not the Al, Si and Ca are associated within the cell, or are instead present as discrete entities of similar density that are therefore simply found in the same debris fraction. In order to address this, we used analytical microscopy. Because of the dense nature of granules and other inorganic entities, our fixing and embedding methods for microscopy use hard resins and sections are taken using a diamond knife. Sections are viewed under light microscopy to confirm the presence of granules (identified by morphology) before further ultrathin (~60 nm) sections are cut and mounted on mesh grids. Post-staining with 2% uranyl acetate and 3% lead citrate is used to increase the contrast in order to help identify cellular organelles; further sections are left unstained to prevent any interference of the metal stains with the detection of Al. 
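The key observation above — a fraction that is only a few percent of cell mass carrying roughly half the Al — can be expressed as a simple mass-normalised enrichment factor. The sketch below illustrates that bookkeeping; all fraction names are taken from the text, but the masses and metal amounts are invented for illustration.

```python
# Distribution of digestive-gland Al across sub-cellular fractions.
# Values are illustrative placeholders, not data from the study.
fractions = {
    # name: (percent of total cell mass, ng Al recovered)
    "inorganic debris (granules)": (4.0, 500.0),
    "organelles (nuclei, mitochondria)": (40.0, 350.0),
    "heat-denaturable proteins": (30.0, 80.0),
    "heat-stable proteins / cytosol": (26.0, 70.0),
}

total_al = sum(al for _, al in fractions.values())
for name, (mass_pct, al_ng) in fractions.items():
    al_pct = 100.0 * al_ng / total_al
    enrichment = al_pct / mass_pct   # >1 means Al-enriched relative to mass
    print(f"{name:<36} {al_pct:5.1f}% of Al, enrichment x{enrichment:.1f}")
```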
We viewed stained sections on a Philips 400 transmission electron microscope (TEM), and unstained sections on an FEI CM200 field emission gun TEM fitted with a UTW ISIS energy-dispersive X-ray (EDX) analyser and a Gatan GIF200 electron energy loss spectrometer (EELS). Our initial observations demonstrated that, whilst granules with typical morphologies could be localised in stained sections, localisation of sub-cellular structures in unstained samples could be problematic due to lack of contrast. Owing to poor visualisation of sub-cellular structures, likely areas in which Al could be found in excretory granules were initially identified by their close proximity to microvilli, structures found around the edges of secretory cells. Once membrane bound vesicles were identified, fine probe EDX was used to identify the elements present. Scanning TEM (STEM)-EDX (with a spot size approximately 10 nm diameter) was used to obtain elemental distribution maps of Al, Si, Ca, P, S and V. Electron energy loss elemental maps were also acquired at the Al L- and Si L-edges (see Figures 3 and 4). In addition to Al and Si these methods confirmed the presence of elevated amounts of other metals in granules, notably Ca and Fe. In general, areas which were high in Ca were not high in Al and vice versa. EDX of areas within the microvilli, and areas of resin outside the tissue revealed negligible quantities of Al and Si. Aluminium was closely associated with Si, either as a fine network, such as those shown in Figure 3, or as discrete electron dense structures, assumed to be granules. There was no evidence of significant quantities of S (which would indicate the presence of metallothioneins) within the same granules. P was present at low levels in close proximity to the Al, but not to the same extent as Si, and it was also found elsewhere in the cell. These findings are of significance as they imply that Si does have an intracellular function as a detoxicant of Al and this represents the first reported biological role for Si in higher animals. Our studies present one example of how complementary methods can be utilised to detect metals at the cellular and sub cellular levels and thus extend our understanding of metal toxicity and mechanisms of detoxification. We would like to thank Professor Jonathan Powell and Dr Ravin Jugdaosingh at the MRC-HNR, Cambridge, and also Dr Andy Brown from the Faculty of Engineering, Leeds University for their continued interest in our research and help in obtaining data for this article. - C. Exley, J.S. Chappel and J.D. Birchall, “A mechanism for acute aluminium toxicity in fish”, J. Theor. Biol. 151, 417–428 (1991). - A.B.S. Poléo, E. Lydersen, B.O. Rosseland, F. Krogland, B. Salbu, R.D. Vogt and A. Kvellestad, “Increased mortality of fish due to changing Al-chemistry of mixing zones between limed streams and acid tributaries”, Water Air Soil Poll. 75, 339–351 (1994). - E. Kadar, J. Salanki, J.J. Powell, K.N. Whand C.R. McCrohan, “Effect of sub-leal concentrations of aluminium on the filtration activity of the freshwater mussel Anodonta cygnea L. at neutral pH”, Acta Biol. Hung. 53, 485–493 (2002). - E. Alexopoulos, C.R. McCrohan, J.J. Powell, K.N. White and R. Jugdaohsingh, “Bioavailability and toxicity of freshly neutralized aluminium to the freshwater crayfh Pacifastacus leniusculus”, Arch. Environ. Con. Tox. 45, 509–514 (2003). - A. Dobranskyte, R. Jugdaohsingh, C.R. McCrohan, E. Stuchlik, J.J. Powell and K.N. 
White, “Effect of humic acid on water chemistry, bioavailability and toxicity of aluminium in the freshwater snail, Lymnaea stagnalis, at neutral pH”, Environ. Pollut. 140, 340–347 (2006). - M. Desouky, R. Jugdaohsingh, C.R. McCrohan, K.N. White and J.J. Powell, “Aluminum-dependent regulation of intracellular silicon in the aquatic invertebrate Lymnaea stagnalis”, P. Natl. Acad. Sci. USA 99, 3394–3399 (2002). - P.S. Rainbow, “Trace metal concentrations in aquatic invertebrates: why and so what?”, Environ. Pollut. 120, 497–507 (2002).
<urn:uuid:f755093c-a001-4847-894e-27a445c8dbfe>
3.1875
3,202
Academic Writing
Science & Tech.
38.038693
95,506,442
The Electric Charges
In the preceding chapter the charge operators Q_G, Q_c, Q_M have been defined and their relations discussed in a non-rigorous way. We wish to rigorize these important matters as far as is possible without an explicit solution of the theory. We assume the existence of a rigorously defined QED as a theory of quantum fields A_µ, Ψ, Ψ̄ satisfying the p-Maxwell equations (6.14) and the Dirac equations (6.10). We need not know the exact definition of the singular products Ψ̄γ^µΨ and A̸Ψ, as long as the current j^µ thus defined is conserved. The theory must satisfy the Postulates 1–6 or, equivalently, the Properties W1–W5 described in Chap. 4.
Keywords: Infinitesimal Generator, Exact Definition, Preceding Chapter, Conserved Vector Current, Local Field Theory
<urn:uuid:f0fae71c-52a4-45ef-81d3-680dd48a0faf>
2.5625
209
Truncated
Science & Tech.
61.012696
95,506,448
Small, compact genomes of ultrasmall unicellular algae provide information on the basic and essential genes that support the lives of photosynthetic eukaryotes, including higher plants. Here we report the 16,520,305-base-pair sequence of the 20 chromosomes of the unicellular red alga Cyanidioschyzon merolae 10D as the first complete algal genome. We identified 5,331 genes in total, of which at least 86.3% were expressed. Unique characteristics of this genomic structure include: a lack of introns in all but 26 genes; only three copies of ribosomal DNA units that maintain the nucleolus; and two dynamin genes that are involved only in the division of mitochondria and plastids. The conserved mosaic origin of Calvin cycle enzymes in this red alga and in green plants supports the hypothesis of the existence of single primary plastid endosymbiosis. The lack of a myosin gene, in addition to the unexpressed actin gene, suggests a simpler system of cytokinesis. These results indicate that the C. merolae genome provides a model system with a simple gene composition for studying the origin, evolution and fundamental mechanisms of eukaryotic cells.
<urn:uuid:9ccbac8b-3987-4695-bf41-baf40a6692b6>
3.171875
276
Academic Writing
Science & Tech.
25.654444
95,506,462
MLA Citation: Bloomfield, Louis A. "Question 937". How Everything Works, 20 Jul 2018. 20 Jul 2018 <http://howeverythingworks.org/print1.php?QNum=937>.
What Bernoulli's equation really says is that air has three forms for its energy and that as long as that air flows smoothly and without significant friction through a system of stationary obstacles, the sum of those three energies can't change. The three energies are kinetic energy (the energy of motion), gravitational potential energy, and an energy associated with pressure that I call pressure potential energy. The obstacles must remain stationary so that they can't do work on the air and thus change its total energy. Since the sum of those three energies doesn't change as air flows through a stationary environment, its pressure typically falls whenever its speed rises and vice versa. If the air also changes altitude significantly, then gravitational potential energy must be included in these energy exchanges. So the reason why I can't answer your question about air in a pipe is that I don't know what the air's total energy was before it flowed through the pipe. While I can calculate the air's kinetic energy from its speed and we can neglect gravitational potential energy because the air isn't changing altitudes much in the pipe, I need to know what the air's total energy is in order to determine its pressure potential energy and thus its pressure.
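To make the energy bookkeeping concrete, here is a small sketch of Bernoulli's equation applied between two points in smooth, frictionless, level flow. As the answer stresses, a downstream pressure can only be solved for once the pressure (and hence the total energy per unit volume) is known somewhere upstream; the density and speeds below are illustrative assumptions.

```python
# Bernoulli along a streamline at constant altitude:
#   p + 0.5*rho*v**2 = constant   (gravitational term dropped: no height change)
# Knowing the pressure at point 1 fixes the total energy per unit volume,
# which then lets us solve for the pressure at point 2.
RHO_AIR = 1.2  # kg/m^3, typical near sea level

def downstream_pressure(p1_pa: float, v1: float, v2: float, rho: float = RHO_AIR) -> float:
    total_energy_per_volume = p1_pa + 0.5 * rho * v1**2
    return total_energy_per_volume - 0.5 * rho * v2**2

# Illustrative numbers: air enters a constriction at 5 m/s and 101325 Pa,
# then speeds up to 20 m/s as the pipe narrows.
p2 = downstream_pressure(p1_pa=101325.0, v1=5.0, v2=20.0)
print(f"pressure drops by {101325.0 - p2:.0f} Pa")   # ~225 Pa for these numbers
```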
<urn:uuid:967ee93e-6223-4101-bb13-75a5cd0cbd82>
3.640625
290
Q&A Forum
Science & Tech.
49.929011
95,506,471
The primary application of wind turbines is to generate energy using the wind. Hence, the aerodynamics is a very important aspect of wind turbines. Like most machines, there are many different types of wind turbines, all of them based on different energy extraction concepts. Though the details of the aerodynamics depend very much on the topology, some fundamental concepts apply to all turbines. Every topology has a maximum power for a given flow, and some topologies are better than others. The method used to extract power has a strong influence on this. In general, all turbines may be grouped as being either lift-based or drag-based; the former being more efficient. The difference between these groups is the aerodynamic force that is used to extract the energy.
The most common topology is the horizontal-axis wind turbine (HAWT). It is a lift-based wind turbine with very good performance. Accordingly, it is a popular choice for commercial applications and much research has been applied to this turbine. Despite being a popular lift-based alternative in the latter part of the 20th century, the Darrieus wind turbine is rarely used today. The Savonius wind turbine is the most common drag-type turbine. Despite its low efficiency, it remains in use because of its robustness and simplicity to build and maintain.
- 1 General aerodynamic considerations
- 2 Characteristic parameters
- 3 Drag- versus lift-based machines
- 4 Horizontal-axis wind turbine
- 5 Axial momentum and the Lanchester–Betz–Joukowsky limit
- 6 Angular momentum and wake rotation
- 7 Blade element and momentum theory
- 8 Aerodynamic modeling
- 9 See also
- 10 References
- 11 Sources
General aerodynamic considerations
The governing equation for power extraction is stated below:
$P = \vec{F} \cdot \vec{v}$ (1)
- Where: P is the power, $\vec{F}$ is the force vector, and $\vec{v}$ is the velocity of the moving wind turbine part.
The force $\vec{F}$ is generated by the wind's interaction with the blade. The magnitude and distribution of this force is the primary focus of wind-turbine aerodynamics. The most familiar type of aerodynamic force is drag. The direction of the drag force is parallel to the relative wind. Typically, the wind turbine parts are moving, altering the flow around the part. An example of relative wind is the wind one would feel cycling on a calm day.
To extract power, the turbine part must move in the direction of the net force. In the drag force case, the relative wind speed decreases subsequently, and so does the drag force. The relative wind aspect dramatically limits the maximum power that can be extracted by a drag-based wind turbine. Lift-based wind turbines typically have lifting surfaces moving perpendicular to the flow. Here, the relative wind does not decrease; rather, it increases with rotor speed. Thus, the maximum power limits of these machines are much higher than those of drag-based machines.
Characteristic parameters
Wind turbines come in a variety of sizes. Once in operation, a wind turbine experiences a wide range of conditions. This variability complicates the comparison of different types of turbines. To deal with this, nondimensionalization is applied to various qualities. Nondimensionalization allows one to make comparisons between different turbines, without having to consider the effect of things like size and wind conditions from the comparison.
One of the qualities of nondimensionalization is that though geometrically similar turbines will produce the same non-dimensional results, other factors (difference in scale, wind properties) cause them to produce very different dimensional properties.
The coefficient of power is the most important variable in wind-turbine aerodynamics. The Buckingham π theorem can be applied to show that the non-dimensional variable for power is given by the equation below. This equation is similar to efficiency, so values between 0 and less than 1 are typical. However, this is not exactly the same as efficiency and thus in practice, some turbines can exhibit greater than unity power coefficients. In these circumstances, one cannot conclude the first law of thermodynamics is violated because this is not an efficiency term by the strict definition of efficiency.
$C_P = \frac{P}{\frac{1}{2}\rho A V^3}$ (CP)
- Where: $C_P$ is the coefficient of power, $\rho$ is the air density, A is the area of the wind turbine, and V is the wind speed.
The thrust coefficient is another important dimensionless number in wind turbine aerodynamics.
Equation (1) shows two important dependents. The first is the speed (U) that the machine is going at. The speed at the tip of the blade is usually used for this purpose, and is written as the product of the blade radius and the rotational speed of the rotor ($U = \omega r$, where $\omega$ is the rotational velocity in radians/second). This variable is nondimensionalized by the wind speed, to obtain the speed ratio:
$\lambda = \frac{U}{V} = \frac{\omega r}{V}$ (SpeedRatio)
The force vector is not straightforward; as stated earlier there are two types of aerodynamic forces, lift and drag. Accordingly, there are two non-dimensional parameters. However both variables are non-dimensionalized in a similar way. The formula for lift is given below, the formula for drag is given after:
$L = \frac{1}{2}\rho W^2 A C_L$ (CL)
$D = \frac{1}{2}\rho W^2 A C_D$ (CD)
- Where: $C_L$ is the lift coefficient, $C_D$ is the drag coefficient, W is the relative wind as experienced by the wind turbine blade, and A is the area. Note that A may not be the same area used in the power non-dimensionalization.
The aerodynamic forces have a dependency on W; this speed is the relative speed and it is given by the equation below. Note that this is vector subtraction.
$\vec{W} = \vec{V} - \vec{U}$ (RelativeSpeed)
Drag- versus lift-based machines
All wind turbines extract energy from the wind through aerodynamic forces. There are two important aerodynamic forces: drag and lift. Drag applies a force on the body in the direction of the relative flow, while lift applies a force perpendicular to the relative flow. Many machine topologies could be classified by the primary force used to extract the energy. For example, a Savonius wind turbine is a drag-based machine, while a Darrieus wind turbine and conventional horizontal-axis wind turbines are lift-based machines.
Drag-based machines are conceptually simple, yet suffer from poor efficiency. Efficiency in this analysis is based on the power extracted vs. the plan-form area. Considering that the wind is free, but the blade materials are not, a plan-form-based definition of efficiency is more appropriate. The analysis is focused on comparing the maximum power extraction modes and nothing else. Accordingly, several idealizations are made to simplify the analysis; further considerations are required to apply this analysis to real turbines. For example, in this comparison the effects of axial momentum theory are ignored. Axial momentum theory demonstrates how the wind turbine imparts an influence on the wind which in turn decelerates the flow and limits the maximum power. For more details see Betz's law.
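A short numeric sketch of the non-dimensional quantities defined above. The operating point (rotor radius, wind speed, rotor speed, measured power) is a made-up example, not data from the article.

```python
import math

# Hypothetical operating point -- illustrative numbers only.
rho = 1.225          # air density, kg/m^3
R = 45.0             # rotor radius, m
V = 10.0             # free-stream wind speed, m/s
omega = 1.6          # rotor speed, rad/s
P = 2.1e6            # measured shaft power, W

A = math.pi * R**2                       # swept area
cp = P / (0.5 * rho * A * V**3)          # power coefficient C_P = P / (1/2 rho A V^3)
tsr = omega * R / V                      # tip speed ratio lambda = omega*R / V

print(f"C_P = {cp:.3f}, tip speed ratio = {tsr:.2f}")   # C_P below the Betz limit of 16/27
```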
Since this effect is the same for both lift and drag-based machines it can be ignored for comparison purposes. The topology of the machine can introduce additional losses, for example trailing vorticity in horizontal axis machines degrade the performance at the tip. Typically these losses are minor and can be ignored in this analysis (for example tip loss effects can be reduced with using high aspect-ratio blades). Maximum power of a drag-based wind turbine Equation (1) will be the starting point in this derivation. Equation (CD) is used to define the force, and equation (RelativeSpeed) is used for the relative speed. These substitutions give the following formula for power. It can be shown through calculus that equation (DragCP) achieves a maximum at . By inspection one can see that equation (DragPower) will achieve larger values for . In these circumstances, the scalar product in equation (1) makes the result negative. Thus, one can conclude that the maximum power is given by: Experimentally it has been determined that a large is 1.2, thus the maximum is approximately 0.1778. Maximum power of a lift-based wind turbine The derivation for the maximum power of a lift-based machine is similar, with some modifications. First we must recognize that drag is always present, and thus cannot be ignored. It will be shown that neglecting drag leads to a final solution of infinite power. This result is clearly invalid, hence we will proceed with drag. As before, equations (1), (CD) and (RelativeSpeed) will be used along with (CL) to define the power below expression. Solving the optimal speed ratio is complicated by the dependency on and the fact that the optimal speed ratio is a solution to a cubic polynomial. Numerical methods can then be applied to determine this solution and the corresponding solution for a range of results. Some sample solutions are given in the table below. Experiments have shown that it is not unreasonable to achieve a drag ratio () of about 0.01 at a lift coefficient of 0.6. This would give a of about 889. This is substantially better than the best drag-based machine, and explains why lift-based machines are superior. In the analysis given here, there is an inconsistency compared to typical wind turbine non-dimensionalization. As stated in the preceding section, the A (area) in the non-dimensionalization is not always the same as the A in the force equations (CL) and (CD). Typically for the A is the area swept by the rotor blade in its motion. For and A is the area of the turbine wing section. For drag based machines, these two areas are almost identical so there is little difference. To make the lift based results comparable to the drag results, the area of the wing section was used to non-dimensionalize power. The results here could be interpreted as power per unit of material. Given that the material represents the cost (wind is free), this is a better variable for comparison. If one were to apply conventional non-dimensionalization, more information on the motion of the blade would be required. However the discussion on Horizontal Axis Wind Turbines will show that the maximum there is 16/27. Thus, even by conventional non-dimensional analysis lift based machines are superior to drag based machines. There are several idealizations to the analysis. In any lift-based machine (aircraft included) with finite wings, there is a wake that affects the incoming flow and creates induced drag. This phenomenon exists in wind turbines and was neglected in this analysis. 
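The drag-machine optimum quoted above can be checked numerically. Under the idealisations of this section, the drag-based power coefficient per unit plan-form area is $C_P = C_D\,\lambda(1-\lambda)^2$ with $\lambda = U/V$; the sketch below recovers the optimum at $\lambda = 1/3$ and the maximum of $\tfrac{4}{27}C_D \approx 0.178$ for $C_D = 1.2$. This is a reconstruction of the standard result, not code from the source.

```python
# Drag-based machine: P = 0.5*rho*Cd*A*(V - U)**2 * U, so in non-dimensional
# form C_P(lmbda) = Cd * lmbda * (1 - lmbda)**2 with lmbda = U/V.
CD = 1.2  # the "large" drag coefficient quoted in the text

def cp_drag(lmbda: float, cd: float = CD) -> float:
    return cd * lmbda * (1.0 - lmbda) ** 2

# Brute-force search over the speed ratio (calculus gives lmbda = 1/3 exactly).
best_cp, best_lmbda = max((cp_drag(i / 10000.0), i / 10000.0) for i in range(10001))
print(f"max C_P = {best_cp:.4f} at lambda = {best_lmbda:.3f}")   # ~0.1778 at ~0.333
print(f"analytic 4/27 * Cd = {4.0 / 27.0 * CD:.4f}")
```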
Including induced drag requires information specific to the topology, In these cases it is expected that both the optimal speed-ratio and the optimal would be less. The analysis focused on the aerodynamic potential, but neglected structural aspects. In reality most optimal wind-turbine design becomes a compromise between optimal aerodynamic design and optimal structural design. Horizontal-axis wind turbine The aerodynamics of a horizontal-axis wind turbine (HAWT) are not straightforward. The air flow at the blades is not the same as the airflow further away from the turbine. The very nature of the way in which energy is extracted from the air also causes air to be deflected by the turbine. In addition, the aerodynamics of a wind turbine at the rotor surface exhibit phenomena rarely seen in other aerodynamic fields. Axial momentum and the Lanchester–Betz–Joukowsky limit Energy in fluid is contained in four different forms: gravitational potential energy, thermodynamic pressure, kinetic energy from the velocity and finally thermal energy. Gravitational and thermal energy have a negligible effect on the energy extraction process. From a macroscopic point of view, the air flow about the wind turbine is at atmospheric pressure. If pressure is constant then only kinetic energy is extracted. However up close near the rotor itself the air velocity is constant as it passes through the rotor plane. This is because of conservation of mass. The air that passes through the rotor cannot slow down because it needs to stay out of the way of the air behind it. So at the rotor the energy is extracted by a pressure drop. The air directly behind the wind turbine is at sub-atmospheric pressure; the air in front is under greater than atmospheric pressure. It is this high pressure in front of the wind turbine that deflects some of the upstream air around the turbine. Frederick W. Lanchester was the first to study this phenomenon in application to ship propellers, five years later Nikolai Yegorovich Zhukovsky and Albert Betz independently arrived at the same results. It is believed that each researcher was not aware of the others' work because of World War I and the Bolshevik Revolution. Thus formally, the proceeding limit should be referred to as the Lanchester–Betz–Joukowsky limit. In general Albert Betz is credited for this accomplishment because he published his work in a journal that had a wider circulation, while the other two published it in the publication associated with their respective institution, thus it is widely known as simply the Betz Limit. This is derived by looking at the axial momentum of the air passing through the wind turbine. As stated above some of the air is deflected away from the turbine. This causes the air passing through the rotor plane to have a smaller velocity than the free stream velocity. The ratio of this reduction to that of the air velocity far away from the wind turbine is called the axial induction factor. It is defined as below: - where a is the axial induction factor, U1 is the wind speed far away upstream from the rotor, and U2 is the wind speed at the rotor. The first step to deriving the Betz limit is applying conservation of angular momentum. As stated above the wind loses speed after the wind turbine compared to the speed far away from the turbine. This would violate the conservation of momentum if the wind turbine was not applying a thrust force on the flow. This thrust force manifests itself through the pressure drop across the rotor. 
The front operates at high pressure while the back operates at low pressure. The pressure difference from the front to back causes the thrust force. The momentum lost in the turbine is balanced by the thrust force. Another equation is needed to relate the pressure difference to the velocity of the flow near the turbine. Here the Bernoulli equation is used between the field flow and the flow near the wind turbine. There is one limitation to the Bernoulli equation: the equation cannot be applied to fluid passing through the wind turbine. Instead conservation of mass is used to relate the incoming air to the outlet air. Betz used these equations and managed to solve the velocities of the flow in the far wake and near the wind turbine in terms of the far field flow and the axial induction factor. The velocities are given below as: U4 is introduced here as the wind velocity in the far wake. This is important because the power extracted from the turbine is defined by the following equation. However the Betz limit is given in terms of the coefficient of power . The coefficient of power is similar to efficiency but not the same. The formula for the coefficient of power is given beneath the formula for power: Betz was able to develop an expression for in terms of the induction factors. This is done by the velocity relations being substituted into power and power is substituted into the coefficient of power definition. The relationship Betz developed is given below: The Betz limit is defined by the maximum value that can be given by the above formula. This is found by taking the derivative with respect to the axial induction factor, setting it to zero and solving for the axial induction factor. Betz was able to show that the optimum axial induction factor is one third. The optimum axial induction factor was then used to find the maximum coefficient of power. This maximum coefficient is the Betz limit. Betz was able to show that the maximum coefficient of power of a wind turbine is 16/27. Airflow operating at higher thrust will cause the axial induction factor to rise above the optimum value. Higher thrust cause more air to be deflected away from the turbine. When the axial induction factor falls below the optimum value the wind turbine is not extracting all the energy it can. This reduces pressure around the turbine and allows more air to pass through the turbine, but not enough to account for the lack of energy being extracted. The derivation of the Betz limit shows a simple analysis of wind turbine aerodynamics. In reality there is a lot more. A more rigorous analysis would include wake rotation, the effect of variable geometry. The effect of airfoils on the flow is a major component of wind turbine aerodynamics. Within airfoils alone, the wind turbine aerodynamicist has to consider the effect of surface roughness, dynamic stall tip losses, solidity, among other problems. Angular momentum and wake rotation The wind turbine described by Betz does not actually exist. It is merely an idealized wind turbine described as an actuator disk. It's a disk in space where fluid energy is simply extracted from the air. In the Betz turbine the energy extraction manifests itself through thrust. The equivalent turbine described by Betz would be a horizontal propeller type operating with infinite blades at infinite tip speed ratios and no losses. The tip speed ratio is ratio of the speed of the tip relative to the free stream flow. This turbine is not too far from actual wind turbines. 
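The Betz relations referenced in the preceding section (whose equations are missing from this copy) can be restated and checked numerically: $U_2 = U_1(1-a)$, $U_4 = U_1(1-2a)$ and $C_P = 4a(1-a)^2$, which is maximised at $a = 1/3$, giving $16/27 \approx 0.593$. The sketch below is a reconstruction of that standard derivation, not code from the source.

```python
# Betz actuator-disc relations in terms of the axial induction factor a:
#   U2 = U1*(1 - a)      velocity at the rotor plane
#   U4 = U1*(1 - 2a)     velocity in the far wake
#   C_P = 4*a*(1 - a)**2
def betz_cp(a: float) -> float:
    return 4.0 * a * (1.0 - a) ** 2

# Scan the physically meaningful range 0 <= a <= 0.5.
best_cp, best_a = max((betz_cp(i / 10000.0), i / 10000.0) for i in range(5001))
print(f"optimum a = {best_a:.3f}, C_P = {best_cp:.4f}")   # a = 1/3, C_P = 16/27
print(f"Betz limit 16/27 = {16.0 / 27.0:.4f}")
```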
Actual turbines are rotating blades. They typically operate at high tip speed ratios. At high tip speed ratios three blades are sufficient to interact with all the air passing through the rotor plane. Actual turbines still produce considerable thrust forces. One key difference between actual turbines and the actuator disk, is that the energy is extracted through torque. The wind imparts a torque on the wind turbine, thrust is a necessary by-product of torque. Newtonian physics dictates that for every action there is an equal and opposite reaction. If the wind imparts a torque on the blades then the blades must be imparting a torque on the wind. This torque would then cause the flow to rotate. Thus the flow in the wake has two components, axial and tangential. This tangential flow is referred to as wake rotation. Torque is necessary for energy extraction. However wake rotation is considered a loss. Accelerating the flow in the tangential direction increases the absolute velocity. This in turn increases the amount of kinetic energy in the near wake. This rotational energy is not dissipated in any form that would allow for a greater pressure drop (Energy extraction). Thus any rotational energy in the wake is energy that is lost and unavailable. This loss is minimized by allowing the rotor to rotate very quickly. To the observer it may seem like the rotor is not moving fast; however, it is common for the tips to be moving through the air at 6 times the speed of the free stream. Newtonian mechanics defines power as torque multiplied by the rotational speed. The same amount of power can be extracted by allowing the rotor to rotate faster and produce less torque. Less torque means that there is less wake rotation. Less wake rotation means there is more energy available to extract. Blade element and momentum theory The simplest model for horizontal axis wind turbine (HAWT) aerodynamics is blade element momentum (BEM) theory. The theory is based on the assumption that the flow at a given annulus does not affect the flow at adjacent annuli. This allows the rotor blade to be analyzed in sections, where the resulting forces are summed over all sections to get the overall forces of the rotor. The theory uses both axial and angular momentum balances to determine the flow and the resulting forces at the blade. The momentum equations for the far field flow dictate that the thrust and torque will induce a secondary flow in the approaching wind. This in turn affects the flow geometry at the blade. The blade itself is the source of these thrust and torque forces. The force response of the blades is governed by the geometry of the flow, or better known as the angle of attack. Refer to the Airfoil article for more information on how airfoils create lift and drag forces at various angles of attack. This interplay between the far field momentum balances and the local blade forces requires one to solve the momentum equations and the airfoil equations simultaneously. Typically computers and numerical methods are employed to solve these models. There is a lot of variation between different versions of BEM theory. First, one can consider the effect of wake rotation or not. Second, one can go further and consider the pressure drop induced in wake rotation. Third, the tangential induction factors can be solved with a momentum equation, an energy balance or orthogonal geometric constraint; the latter a result of Biot–Savart law in vortex methods. These all lead to different set of equations that need to be solved. 
The simplest and most widely used equations are those that consider wake rotation with the momentum equation but ignore the pressure drop from wake rotation. Those equations are given below. a is the axial component of the induced flow, a' is the tangential component of the induced flow. is the solidity of the rotor, is the local inflow angle. and are the coefficient of normal force and the coefficient of tangential force respectively. Both these coefficients are defined with the resulting lift and drag coefficients of the airfoil: Corrections to blade element momentum theory Blade element momentum (BEM) theory alone fails to represent accurately the true physics of real wind turbines. Two major shortcomings are the effects of a discrete number of blades and far field effects when the turbine is heavily loaded. Secondary shortcomings originate from having to deal with transient effects like dynamic stall, rotational effects like the Coriolis force and centrifugal pumping, and geometric effects that arise from coned and yawed rotors. The current state of the art in BEM uses corrections to deal with these major shortcomings. These corrections are discussed below. There is as yet no accepted treatment for the secondary shortcomings. These areas remain a highly active area of research in wind turbine aerodynamics. The effect of the discrete number of blades is dealt with by applying the Prandtl tip loss factor. The most common form of this factor is given below where B is the number of blades, R is the outer radius and r is the local radius. The definition of F is based on actuator disk models and not directly applicable to BEM. However the most common application multiplies induced velocity term by F in the momentum equations. As in the momentum equation there are many variations for applying F, some argue that the mass flow should be corrected in either the axial equation, or both axial and tangential equations. Others have suggested a second tip loss term to account for the reduced blade forces at the tip. Shown below are the above momentum equations with the most common application of F: The typical momentum theory applied in BEM is only effective for axial induction factors up to 0.4 (thrust coefficient of 0.96). Beyond this point the wake collapses and turbulent mixing occurs. This state is highly transient and largely unpredictable by theoretical means. Accordingly, several empirical relations have been developed. As the usual case there are several version, however a simple one that is commonly used is a linear curve fit given below, with . The turbulent wake function given excludes the tip loss function, however the tip loss is applied simply by multiplying the resulting axial induction by the tip loss function. The terms and represent different quantities. The first one is the thrust coefficient of the rotor, which is the one which should be corrected for high rotor loading (i.e., for high values of ), while the second one () is the tangential aerodynamic coefficient of an individual blade element, which is given by the aerodynamic lift and drag coefficients. This section does not cite any sources. (September 2016) (Learn how and when to remove this template message) BEM is widely used due to its simplicity and overall accuracy, but its originating assumptions limit its use when the rotor disk is yawed, or when other non-axisymmetric effects (like the rotor wake) influence the flow. 
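Returning to the blade-element-momentum relations described earlier (again with the equations lost in this copy), a standard fixed-point iteration per blade annulus looks like the sketch below. It uses the common textbook closure — wake rotation included, pressure drop from wake rotation ignored, Prandtl tip loss applied in the momentum equations — together with an assumed constant-$C_L$/$C_D$ airfoil. It is a schematic reconstruction rather than the article's own formulation, and it omits the high-load (turbulent-wake) empirical correction discussed in the text.

```python
import math

def bem_annulus(r, R, B, chord, V, omega, cl=1.0, cd=0.01, tol=1e-6, max_iter=200):
    """Solve induction factors (a, a') for one blade annulus.

    Standard BEM closure: C_n = Cl*cos(phi) + Cd*sin(phi),
    C_t = Cl*sin(phi) - Cd*cos(phi), with Prandtl tip-loss factor F.
    The constant cl/cd "airfoil" is a placeholder assumption.
    """
    sigma = B * chord / (2.0 * math.pi * r)   # local solidity
    a, a_prime = 0.0, 0.0
    for _ in range(max_iter):
        # Inflow angle from the axial and tangential induced velocities.
        phi = math.atan2(V * (1.0 - a), omega * r * (1.0 + a_prime))
        c_n = cl * math.cos(phi) + cd * math.sin(phi)
        c_t = cl * math.sin(phi) - cd * math.cos(phi)
        # Prandtl tip-loss factor.
        f = 0.5 * B * (R - r) / (r * math.sin(phi))
        F = (2.0 / math.pi) * math.acos(math.exp(-f))
        # Momentum balance for the annulus (no high-load correction here).
        a_new = 1.0 / (4.0 * F * math.sin(phi) ** 2 / (sigma * c_n) + 1.0)
        ap_new = 1.0 / (4.0 * F * math.sin(phi) * math.cos(phi) / (sigma * c_t) - 1.0)
        if abs(a_new - a) < tol and abs(ap_new - a_prime) < tol:
            return a_new, ap_new
        a, a_prime = a_new, ap_new
    return a, a_prime

# Illustrative annulus: all numbers are assumptions made for the sketch.
a, ap = bem_annulus(r=30.0, R=45.0, B=3, chord=2.0, V=10.0, omega=1.6)
print(f"a = {a:.3f}, a' = {ap:.4f}")
```

In a full rotor calculation this routine would be called for each annulus, after which the sectional forces are integrated over the span to obtain thrust and torque.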
Limited success at improving predictive accuracy has been made using computational fluid dynamics (CFD) solvers based on Reynolds-averaged Navier–Stokes (RANS) and other similar three-dimensional models such as free vortex methods. These are very computationally intensive simulations to perform for several reasons. First, the solver must accurately model the far-field flow conditions, which can extend several rotor diameters up- and down-stream and include atmospheric boundary layer turbulence, while at the same time resolving the small-scale boundary-layer flow conditions at the blades' surface (necessary to capture blade stall). In addition, many CFD solvers have difficulty meshing parts that move and deform, such as the rotor blades. Finally, there are many dynamic flow phenomena that are not easily modelled by RANS, such as dynamic stall and tower shadow. Due to the computational complexity, it is not currently practical to use these advanced methods for wind turbine design, though research continues in these and other areas related to helicopter and wind turbine aerodynamics. Free vortex models (FVM) and Lagrangian particle vortex methods (LPVM) are both active areas of research that seek to increase modelling accuracy by accounting for more of the three-dimensional and unsteady flow effects than either BEM or RANS. FVM is similar to lifting line theory in that it assumes that the wind turbine rotor is shedding either a continuous vortex filament from the blade tips (and often the root), or a continuous vortex sheet from the blades' trailing edges. LPVM can use a variety of methods to introduce vorticity into the wake. Biot–Savart summation is used to determine the induced flow field of these wake vorticies' circulations, allowing for better approximations of the local flow over the rotor blades. These methods have largely confirmed much of the applicability of BEM and shed insight into the structure of wind turbine wakes. FVM has limitations due to its origin in potential flow theory, such as not explicitly modelling model viscous behavior (without semi-empirical core models), though LPVM is a fully viscous method. LPVM is more computationally intensive than either FVM or RANS, and FVM still relies on blade element theory for the blade forces. - "Wind Turbine Blade Aerodynamics" (PDF). Gurit.com. Retrieved 21 June 2016. - Gijs A.M. van Kuik The Lanchester–Betz–Joukowsky Limit. Wind Energy (2007), Volume 10, pp. 289–291 - Leishman, J. Principles of Helicopter Aerodynamics, 2nd ed.. Cambridge University Press, 2006. p. 751. - Cottet, G-H. and Koumoutsakos, P. Vortex Methods. Cambridge University Press, 2000. - Leishman, J. Principles of Helicopter Aerodynamics, 2nd ed.. Cambridge University Press, 2006. p. 753. - Cottet, G-H. and Koumoutsakos, P. Vortex Methods. Cambridge University Press, 2000. p. 172. - Schaffarczyk, A.P. Introduction to Wind Turbine Aerodynamics, Springer, 2014 ISBN 978-3642364082 - Hansen, M.O.L. Aerodynamics of Wind Turbines, 3rd ed., Routledge, 2015 ISBN 978-1138775077 |Wikimedia Commons has media related to Wind turbine aerodynamics.|
<urn:uuid:4ba714dc-40de-46da-98ac-79e65a07c817>
3.53125
5,680
Knowledge Article
Science & Tech.
39.998155
95,506,475
By: Charles Q. Choi Published: 10/30/2012 12:07 PM EDT on SPACE.com The largest dark spot on the moon, known as the Ocean of Storms, may be a scar from a giant cosmic impact that created a magma sea more than a thousand miles wide and several hundred miles deep, researchers say. These findings could help explain why the moon's near and far sides are so very different from one another, investigators added. Scientists analyzed Oceanus Procellarum, or the Ocean of Storms, a dark spot on the near side of the moon more than 1,800 miles (3,000 kilometers) wide. The near side of the moon, the side that always faces Earth, is quite different from the far side, often erroneously called the moon's dark side (this side does in fact get sunlight — it simply never faces Earth). For example, widespread plains of volcanic rock called "maria" (Latin for seas) cover nearly a third of the near side, but only a few maria are seen on the far one. Researchers have posed a number of explanations for the vast disparity between the moon's near and far sides. Some have suggested that a tiny second moon may once have orbited Earth before catastrophically slamming into the other moon, spreading its remains mostly on the moon's far side. Others have proposed that Earth's pull on the moon caused distortions that were later frozen in place on the moon's near side. Similarly, Mars' northern and southern halves are also stark contrasts from one another, and researchers had suggested that a monstrous impact may have been the cause. Now scientists in Japan say that a giant collision may also explain the moon's two-faced nature, one that gave rise to the Ocean of Storms. The near side (left) and far side (right) of the moon, showing the outline of the three biggest impact basins. The researchers analyzed the composition of the moon's surface using data from the Japanese lunar orbiter Kaguya/Selene. These data revealed that a low-calcium variety of the mineral pyroxene is concentrated around Oceanus Procellarum and large impact craters such as the South Pole-Aitkenand Imbriumbasins. This type of pyroxene is linked with the melting and excavation of material from the lunar mantle, and suggests the Ocean of Storms is a leftover from a cataclysmic impact. This collision would have generated "a 3,000-kilometer (1,800-mile) wide magma sea several hundred kilometers in depth," lead study author Ryosuke Nakamura, a planetary scientist at the National Institute of Advanced Industrial Science and Technology in Tsukuba, Japan, told SPACE.com. The investigators say that collisions large enough to create Oceanus Procellarum and the moon's other giant impact basins would have completely stripped the original crust on the near side of the moon. The crust that later formed there from the molten rock left after these impacts would differ dramatically from that on the far side, explaining why these halves are so distinct. Some researchers had speculated that the Procellarum basin was the relic of a gigantic impact. However, this idea was hotly debated because there were no definite topographic signs it was an impact basin, "possibly because the formation date was too old, maybe more than 4 billion years," Nakamura said. "Our discovery provides the first compositional evidence of this idea, which could be confirmed by future lunar sample return missions, such as Moonrise," a proposed NASA mission that would send an unmanned probe to collect lunar dirt and return it to Earth. 
"The neighboring Earth likely experienced similar-sized impacts around the same period," Nakamura added. "It would have had a great effect on the onset of Earth's continental crust formation and the beginning of life." The scientists detailed their findings online Oct. 28 in the journal Nature Geoscience.
<urn:uuid:03efce06-b946-48be-9b42-8809300681c2>
3.796875
811
Truncated
Science & Tech.
41.27506
95,506,477
Authors: I. Kohlberg, G. Boezer, W. Greer and R. Good
Affiliation: Institute for Defense Analyses, United States
Pages: 68 - 72
Keywords: bacteria, impact theory, scavenging, surface chemistry, van der Waals
In this study, we develop the design parameters required to remove airborne biological particles by impact with scavenging aerosols. The conditions for capturing airborne biological particles for a head-on collision are determined as a function of each particle's mass, radius, incident velocity, and modulus of elasticity, the attractive or repulsive force between them, and the binding energy created during impact. Estimates of the radii of bacteria particles that can be captured are determined as a function of the material parameters, binding energy, and kinetic energy of the interacting particles. Surface chemistry approaches for creating desired scavenger materials are identified. Molecular simulations of the interaction between the scavenger and bacteria particles offer a cost-effective manner of computing such factors. Designing surface chemistry to strengthen the boundary interactions is crucial to maximizing scavenger capability. Weak van der Waals and pair-wise electrostatic interactions may not be sufficient to capture particles with an upper range of kinetic energy.
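The abstract describes a capture condition that balances the colliding particles' kinetic energy against the binding energy created on impact. The sketch below illustrates that kind of energy-balance criterion in its simplest form (reduced-mass kinetic energy in a head-on collision compared with an assumed binding energy); the restitution treatment, material properties and numbers here are illustrative assumptions and do not reproduce the paper's model.

```python
import math

def is_captured(m1_kg, m2_kg, v_rel_m_s, binding_energy_J, restitution=0.9):
    """Toy head-on capture criterion: the particles stick if the kinetic
    energy remaining after an inelastic bounce is below the binding energy.
    All parameters are illustrative; this is not the paper's model."""
    mu = m1_kg * m2_kg / (m1_kg + m2_kg)            # reduced mass
    ke_rel = 0.5 * mu * v_rel_m_s ** 2              # relative kinetic energy
    return ke_rel * restitution ** 2 <= binding_energy_J

# A 1-um-radius bacterium (density ~1000 kg/m^3) meeting a 10-um scavenger droplet;
# the binding energy is an assumed order-of-magnitude value.
rho = 1000.0
m_bact = rho * (4.0 / 3.0) * math.pi * (1e-6) ** 3
m_scav = rho * (4.0 / 3.0) * math.pi * (10e-6) ** 3
print(is_captured(m_bact, m_scav, v_rel_m_s=1.0, binding_energy_J=5e-15))
```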
<urn:uuid:0d060767-b868-474d-9521-efe12720974b>
2.625
277
Academic Writing
Science & Tech.
14.3194
95,506,478
On a balmy night in late October 2014, Rachel Lindbergh and dozens of others stood on the grass at the end of Arbuckle Neck Road in Virginia, staring across the bay. Their eyes were trained on a spot on Wallops Island less than two miles away, where a 14-story-tall Antares rocket stood ready to blast off into space, loaded with food, supplies, and science experiments, including one that Lindbergh had been working on for two years. The group ticked off the seconds together as the countdown from mission control came over the portable speakers. The engines ignited, shooting thick curls of smoke from the launchpad, and the Antares began its ascent, bound for the International Space Station. For a few seconds the rocket shone like a yellow jewel against the dark sky, and then it was gone, consumed in a ball of fire. The shock wave that followed the explosion knocked some of the spectators on Arbuckle Neck Road to the ground. “I didn’t really believe what was happening,” Lindbergh said. Now in her second year at the University of Chicago, Lindbergh is a member of a an exclusive group no one actually wants to be in: people who have seen their work destroyed in a failed rocket launch. Resupply missions to the International Space Station, like the one carrying Lindbergh’s experiment, are routine at this point. Cargo is launched every couple months, usually from Russia or the United States. The commercial spaceflight companies NASA uses to carry out these missions are, for the most part, good at it. But everyone knows that something could still go wrong, and sometimes it does. Orbital ATK, the company that owns the Antares rockets and Cygnus spacecrafts, eventually concluded that something—they couldn’t be sure exactly what—caused the main engine system to explode, and engineers were forced to hit the self-destruct button before the rocket fell to the ground. Lindbergh and her fellow student researchers left the water’s edge that day, got some ice cream, and got to work recreating their experiment, a study of “tin whiskers,” potentially dangerous hair-like filaments that can develop on soldering spots on metal circuitry, under spaceflight conditions. The original experiment had taken them about two years to build. This time, they hustled, made some extra tweaks, and got it on a Dragon capsule bound for the ISS atop a Falcon 9 rocket, developed by SpaceX. Lindbergh drove from her hometown of Charleston, South Carolina, to Cape Canaveral, Florida to watch the launch in June 2015, less than a year after the original attempt. The rocket took off—and then blew up two minutes later. No one was injured in either explosion, nor in a third incident in September 2016, when a Falcon 9 rocket exploded on the launchpad two days before its scheduled resupply mission. That’s key here, that there’s no loss of life in these kinds of flights. In the grand scheme of things, in the record of space exploration, which has claimed lives, the mourning period for cargo is quite short. Failures may cause delays in research, but experiments can often be rebuilt and supplies replenished. Still, there’s a certain anguish in witnessing your precious work or equipment vanish in an instant. “I don’t think my brain could process it right away,” said Mike Safyan, the director of launch and regulatory affairs at the San-Francisco based company Planet, which lost 26 imaging satellites in the 2014 explosion and eight more in the 2015 incident. Putting the experience into words proves difficult. 
Lindbergh considered her tin-whiskers experiment to be her “baby,” and “it’s definitely an incredibly, unbelievably traumatic experience to see your baby destroyed twice in a row.” Stacy Hamel, the flight operations manager at the Student Spaceflight Experiments Program, which lets students like Lindbergh send experiments to the ISS, said it feels like “someone who writes a novel and loses their entire work because of a computer failure.” Trevor Hammond, a spokesperson for Planet, likened the experience to “getting a phone call on Sunday that all your servers that your company [uses] are on fire.” Chris Mason, a geneticist at Weill Cornell Medicine in New York City who lost some equipment for a NASA study on astronauts in the 2015 explosion, said it feels like being punched in the gut. In fewer, and perhaps the most relatable, words: It sucks. Again, they all knew the worst could happen, and that awareness helped to cushion some of the pain. At Planet, leadership comes out before every launch to remind employees to brace for both success and failure. After all, private spaceflight companies like SpaceX and Orbital ATK have been in the game for fewer years than more established players, making their launches risky. Space is hard, the old saying goes, and the members of this unlucky group looked to this adage for some comfort. A few days after he lost about $5,000 worth of equipment in the 2015 incident, Mason received a letter in the mail from NASA. Some critics say “space is hard” is a cliche, an excuse NASA and news organizations roll out after a launch failure to deflect attention from the pace of development in commercial crew efforts. If routine resupply missions are hard, they argue, how can humans make it to Mars? There exists a strange dichotomy when it comes to the ease or difficulty of spaceflight, one that’s sometimes perpetuated by a single entity. Just a few days ago, SpaceX, less than six months out from its last explosion, announced it’s working with NASA scientists to identify potential landing sites on Mars. But Mason, along with the others I spoke to, believes the platitude is accurate, and not an excuse. “It gives people a sense of the necessity of failure,” he said. “That things have to explode to be corrected.” None of them blames the spaceflight companies, either. It may be because the losses, while disappointing, can be restored, at least in most cases. Mason’s equipment reached the ISS on another flight, and he’s currently poring over the data he got back to study how spaceflight might affect astronauts’ DNA. Planet Labs, which can produce 20 of its Dove satellites per week, recently deployed 88 satellites into orbit atop a rocket developed by India’s space agency. Lindbergh and her team rebuilt their experiment—again—and put it on another rocket last April, a Falcon 9 launching from Cape Canaveral. She watched as the engines ignited, the smoke billowed out, and the rocket rose. This time, it didn’t blow up. We want to hear what you think. Submit a letter to the editor or write to firstname.lastname@example.org.
<urn:uuid:f4d73685-c0e7-4552-8db5-09e399e74d1f>
2.953125
1,452
News Article
Science & Tech.
46.703718
95,506,479
Scientists used a big rotating pot to simulate the atmosphere of Saturn, and they may have figured out how the gas giant's massive polar storms take shape. With winds reaching staggering speeds of up to 1,100 mph (1,800 km/h) — in our solar system, only Neptune can be windier — and storms the size of Earth, Saturn's atmosphere has fascinated researchers ever since they got the first good looks at it via observations by NASA's twin Voyager spacecraft in the early 1980s. In a paper published Monday (Feb. 26) in the journal Nature Geoscience, a team of researchers used the rotating pot to better understand Saturn's atmosphere and overcome some of the limitations of more conventional methods, such as computer modeling.
<urn:uuid:650d8dec-6e0a-4f52-8602-af0ccd0ddb96>
3.171875
347
News Article
Science & Tech.
45.688444
95,506,507
© 1999 by Linda Moulton Howe May 30, 1999 Ann Arbor, Michigan A few weeks ago I reported on Dreamland that the U. S. government is very concerned about biological warfare and bioterrorists. What would happen if anthrax spores, smallpox or other dangerous infectious diseases were sprayed into the air above one or more American cities? One new possible answer seems surprisingly simple and miraculously effective: oil droplets so small they can adhere to the surface of deadly anthrax and smallpox pathogens and literally explode the germs. That's why these oil droplets are called "nano bombs." Nano means "one billionth." Smaller than viruses and bacteria. © 1998 - 2018 by Linda Moulton Howe. All Rights Reserved.
<urn:uuid:a0e5840e-a794-477d-821f-52085cc2c8b5>
2.53125
159
Personal Blog
Science & Tech.
50.75122
95,506,509
Radio astronomers at Bonn have reached a scientific milestone. One of the world's largest radio telescopes, the Effelsberg 100-m dish, surveyed the entire northern sky in the light of the neutral hydrogen (HI) 21-cm line. This effort, led by Jürgen Kerp (AIfA) and Benjamin Winkel (MPIfR), began in 2008 and has culminated today in the initial data release of the Effelsberg-Bonn HI Survey (EBHIS). The EBHIS data base is now freely accessible for all scientists around the world. In addition to the Milky Way data, the EBHIS project also includes unique information about HI in external galaxies out to a distance of about 750 million light years from Earth. Hydrogen is THE ELEMENT of the universe. Consisting of a single proton and an electron it is the simplest and most abundant element in space. One could almost consider the universe as a pure hydrogen universe, albeit with some minor "pollution" by heavier elements, among these carbon, the fundamental component of all organisms on Earth. The 21-cm line is a very faint but characteristic emission line of neutral atomic hydrogen (or HI). It is not only feasible to detect the weakest signals from distant galaxies with the 100-m Effelsberg antenna, but also to determine their motion relative to Earth with high precision. A special receiver was required in order to enable the EBHIS project. With seven receiving elements observing the sky independently from each other, it was possible to reduce the necessary observing time from decades to about five years only. Field Programmable Gate Array (FPGA) spectrometers were developed within the course of the EBHIS project, allowing real time processing and storage of about 100 million individual HI spectra with consistently good quality. The individual HI spectra were combined using high-performance computers into a unique map of the entire northern sky and provide unsurpassed richness in detail of the Milky Way Galaxy gas. Astronomy students at Bonn University had unique access to the pre-release EBHIS data. In 2013 the European Space Agency (ESA) signed a memorandum of understanding with the Bonn HI radio astronomers. ESA was granted exclusive access to EBHIS data for their Planck satellite mission and, in return, Bonn students were given unique access to Planck data for their thesis projects. Twelve Bachelor, nine Master, and five Doctoral thesis projects have been successfully completed since 2008. The Square Kilometer Array (SKA), the world's largest future radio astronomical facility, to be constructed in Australia and South Africa, will benefit directly from the EBHIS data. Owing to the construction of SKA as a radio interferometer, it is inherently insensitive to the faint and extended HI emission of the Milky Way and nearby external galaxies. Since the HI gas is measured very well by EBHIS, only combining SKA and EBHIS data will allow one to derive a comprehensive view of the interstellar HI gas. The Effelsberg-Bonn HI Survey will be a rich resource for science in the near and far future. Independent attempts to survey the entire northern sky with a 100-m class telescope are not scheduled. The EBHIS data will thus set the quality standard for the Milky Way Galaxy HI for the next decades. EBHIS is based on observations with the 100-m telescope of the Max-Planck-Institut für Radioastronomie (MPIfR) at Effelsberg. The project was supported by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) for six years. The Effelsberg–Bonn HI Survey: Milky Way gas. First data release, B. 
Winkel, J. Kerp, L. Flöer, P. M. W. Kalberla, N. Ben Bekhti, R. Keller, and D. Lenz, 2016, Astronomy & Astrophysics, A&A 585, A41. Dr. Benjamin Winkel, Max-Planck-Institut für Radioastronomie, Bonn. Fon: +49 2257 301-167 Priv.-Doz. Dr. Jürgen Kerp, Argelander-Institut für Astronomie, Universität Bonn. Fon: +49 228 73-3667 Dr. Norbert Junkes, Press and Public Outreach, Max-Planck-Institut für Radioastronomie. Fon: +49 228 525-399 http://www.mpifr-bonn.mpg.de/pressreleases/2015/9 (Press Release) http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/585/A41 (EBHIS Data Base at CDS) Norbert Junkes | Max-Planck-Institut für Radioastronomie
<urn:uuid:b0acfe5f-3b3f-4fb4-87bd-d6b48f76a164>
3.609375
1,688
Content Listing
Science & Tech.
46.135828
95,506,537
When a huge star many times the mass of the sun comes to the end of its life it collapses in on itself and forms a singularity – creating a black hole where gravity is so strong that not even light itself can escape. At least, that’s what we thought. A scientist has sensationally said that it is impossible for black holes to exist – and she even has mathematical proof to back up her claims. If true, her research could force physicists to scrap their theories of how the universe began. The research was conducted by Professor Laura Mersini-Houghton from the University of North Carolina at Chapel Hill in the College of Arts and Sciences. She claims that as a star dies, it releases a type of radiation known as Hawking radiation – predicted by Professor Stephen Hawking. However in this process, Professor Mersini-Houghton believes the star also sheds mass, so much so that it no longer has the density to become a black hole. Before the black hole can form, she said, the dying star swells and explodes. The singularity as predicted never forms, and neither does the event horizon – the boundary of the black hole where not even light can escape. ‘I’m still not over the shock,’ said Professor Mersini-Houghton. ‘We’ve been studying this problem for more than 50 years and this solution gives us a lot to think about.’ Experimental evidence may one day provide physical proof as to whether or not black holes exist in the universe. But for now, Mersini-Houghton says the mathematics are conclusive. What’s more, the research could apparently even call into question the veracity of the Big Bang theory. Most physicists think the universe originated from a singularity that began expanding with the Big Bang about 13.8 billion years ago. If it is impossible for singularities to exist, however, as partially predicted by Professor Mersini-Houghton, then that theory would also be brought into question.
<urn:uuid:0bd25b5e-62b4-4e26-aefb-3c58d64a632b>
4.03125
629
News Article
Science & Tech.
44.163831
95,506,549
Image Courtesy of: BRI's Science Communications piece "Center for Mercury Studies" (Portland, ME)—Biodiversity Research Institute (BRI) announces the publication of the scientific paper Evaluating the effectiveness of the Minamata Convention on Mercury: Principles and recommendations for next steps, published by the journal Science of the Total Environment (now available online). Opened for signature in October 2013, the Minamata Convention on Mercury is the first major environmental treaty since the Kyoto Protocol came into effect in 2005. The Convention addresses issues related to the use and release of mercury through trade, industrial uses, and atmospheric emissions, as well as the long-term storage and disposal of mercury and mercury compounds. “The Minamata Convention is a landmark global treaty designed to limit the use and spread of mercury in the environment,” says David C. Evers, Ph.D., BRI’s executive director and lead author on the paper. “Evaluating the effectiveness of the Convention, which is a requirement of the treaty, is a crucial component to ensure that it meets this objective.” Evaluating the effectiveness of the Minamata Convention on Mercury proposes an approach to measure the Convention’s effectiveness that includes a suite of short-, medium-, and long-term metrics related to five major mercury control Articles in the Convention:
• Supply sources and trade (Article 3 of the Convention)
• The use of mercury in products (Article 4)
• Manufacturing processes that use mercury (Article 5)
• Mercury use in artisanal and small-scale gold mining (ASGM; Article 7)
• Air mercury emissions from coal-fired power facilities and other sectors (Article 8)
In addition, this paper describes the use of select bioindicators (both wildlife and humans) to assess and monitor environmental mercury concentrations and associated changes resulting from controls in place. The use of existing biotic mercury data will define spatial gradients (e.g., biological mercury hotspots), baselines to develop relevant temporal trends, and how to assess risk to taxa and human communities of greatest concern. “The task of evaluating the effectiveness of the Convention requires more than a monitoring exercise – it requires a robust array of data,” says Niladri Basu, PhD, associate professor and Canada Research Chair in Environmental Health Sciences at McGill University, Canada. “It is critical that we embrace an evidence-based approach to this evaluation; we have the scientific tools to do so. It is also critical that we share this knowledge to forge the best path forward in reducing mercury pollution around the world.” The Minamata Convention on Mercury is predicted to come into force (when 50 countries ratify the Convention) in 2017. “The debate about how to measure the effectiveness of the Convention is beginning now,” says Susan Egan Keane, deputy director of the Health Program at the Natural Resources Defense Council. “Our goal in publishing this paper is to contribute sound science and analysis to that discussion.” Co-authors of Evaluating the effectiveness of the Minamata Convention on Mercury include Niladri Basu, Ph.D., associate professor and Canada Research Chair in Environmental Health Sciences at McGill University, Canada, Susan Egan Keane, deputy director of the Health Program at the Natural Resources Defense Council, and David Buck, Ph.D., director of BRI’s Tropical Program.
For information on the Minamata Convention and other resources, visit: www.nrdc.org/resources/minamata-convention-mercury-contents-guidance-and-resources Biodiversity Research Institute, headquartered in Portland, Maine, is a nonprofit ecological research group whose mission is to assess emerging threats to wildlife and ecosystems through collaborative research, and to use scientific findings to advance environmental awareness and inform decision makers. BRI supports ten research programs within three research centers including the Center for Mercury Studies, which was initiated in 2011. BRI is assisting in multiple ways with the ratification and implementation of the Minamata Convention on Mercury—a globally binding agreement facilitated by the United Nations Environment Programme (UNEP). BRI is a co-lead of UNEP’s Mercury Air Transport and Fate Research Partnership Area and a member of the Artisanal and Small-scale Gold Mining (ASGM) Partnership Area. BRI is also an Executing Agency of the United Nations Industrial Development Organization and serves as an international advisor for the United Nations Development Program to help coordinate and facilitate enabling activities to conduct Minamata Initial Assessments for many countries. Visit www.briloon.org/mercury for more information. The Natural Resources Defense Council is a U.S.-based environmental advocacy organization founded in 1970. With headquarters in New York, NY, and offices around the U.S. and in Beijing, China, NRDC combines the power of more than two million members and online activists with the expertise of some 500 scientists, lawyers, and policy advocates across the globe to ensure the rights of all people to the air, the water, and the wild. Visit www.nrdc.org for more information. The McGill School of Environment, one of 12 schools at McGill University, Montreal, Canada, focuses on research themes including: Health in a Changing Environment; Ecosystems, Biodiversity and Conservation; Citizens, Communities, Institutions and the Environment; Rethinking Social-Ecological Relationships. Visit www.mcgill.ca for more information.
<urn:uuid:ac65f8de-6fc0-4194-bbca-1fa9f970e749>
2.671875
1,153
News (Org.)
Science & Tech.
22.930638
95,506,577
A lopolith is a large igneous intrusion which is lenticular in shape with a depressed central region. In other words, it is a mass of igneous rock similar to a laccolith but concave downward rather than upward. Lopoliths are generally concordant with the intruded strata with dike or funnel-shaped feeder bodies below the body. The term was first defined and used by Frank Fitch Grout during the early 1900s in describing the Duluth gabbro complex in northern Minnesota and adjacent Ontario. Lopoliths typically consist of large layered intrusions that range in age from Archean to Eocene. Examples include the Duluth gabbro, the Sudbury Igneous Complex of Ontario, the Bushveld igneous complex of South Africa, the Skaergaard complex of Greenland and the Humboldt lopolith of Nevada. The Sudbury and Bushveld occurrences have been attributed to impact events and associated crustal melting.
<urn:uuid:1360defc-54c4-43c9-8287-3094069145b5>
3.71875
203
Knowledge Article
Science & Tech.
31.59
95,506,581
By: Bruno Voituriez 222 pages, col illus Amid contemporary scenarios of potential climatic catastrophes and global warming that might be imagined to bring a new ice age, the powerful image of the Gulf Stream rising from the Florida Straits and flowing to the north Atlantic inevitably provokes questions about its ecological significance and whether it might ever stop. Answering these questions demands the sober presentation, given here, of the remarkable scientific fact that even dramatic climatic change would not bring an end to the Gulf Stream. Combining complex scientific information with an emerging narrative, this volume paints an elaborate but accessible portrait of this extraordinary natural phenomenon, tracing its historical discovery and the paradigms of its exploration, outlining its causes and dynamics, and examining its profound importance for the marine ecosystems of the Atlantic Ocean.
<urn:uuid:66ddb9ef-833b-4b03-9b09-d513ee474605>
2.6875
255
Product Page
Science & Tech.
25.993328
95,506,587
Space Weather Update: 03/25/2017 By Spaceweather.com, 03/25/2017 POTENT CORONAL HOLE FACES EARTH: A canyon-shaped hole in the sun’s atmosphere is facing Earth, and it is spewing a stream of fast-moving solar wind toward our planet. NASA’s Solar Dynamics Observatory photographed the giant fissure on March 25th: This is a “coronal hole” (CH) — a vast region where the sun’s magnetic field opens up and allows solar wind to escape. A gaseous stream flowing from this coronal hole is expected to reach our planet during the late hours of March 27th and could spark moderately-strong G2-class geomagnetic storms around the poles on March 28th or 29th. We’ve seen this coronal hole before. In early March, it lashed Earth’s magnetic field with a fast-moving stream that sparked several consecutive days of intense auroras around the poles. The coronal hole is potent because it is spewing solar wind threaded with “negative polarity” magnetic fields. Such fields do a good job connecting to Earth’s magnetosphere and energizing geomagnetic storms. Arctic sky watchers should be alert for auroras early next week. Free: Aurora Alerts INFERIOR CONJUNCTION OF VENUS: Today Venus is passing almost directly between Earth and the sun. With the planets so aligned, the night side of Venus is facing Earth and only a narrow sliver of Venus’ sunlit hemisphere is visible. This has turned the second planet into an exquisitely slender crescent: Ofer Gabzo sends this picture from the Givatayim Observatory in Israel. “Venus was passing just 9° north of the sun, and had turned its lovely thin crescent exactly to the south,” says Gabzo. “Despite the angular proximity to the sun I did manage to get a glimpse of my favorite planet. The sun was obscured by the observatory’s dome to avoid risking the camera and the telescope’s optics.” Astronomers call this an “inferior conjunction of Venus.” It is the most beautiful time to observe Venus, but also the most perilous. Optics mis-pointed only slightly can catch the glare of the sun and focus its deadly rays onto vulnerable eyes. If you do try to observe Venus during inferior conjunction, take precautions like Gabzo did: Put the sun behind a tree or building and observe Venus from the safety of the shadows. Safer still is the Realtime Venus Photo Gallery. FLYING TO THE AURORA AUSTRALIS: Last night, a group of sky watchers in Dunedin, New Zealand, boarded a plane and took off. They weren’t heading to another airport. Instead, they flew south into the aurora australis. Taichi Nakamura took these pictures from a window seat in Economy Class: “I was grateful to be on board the first chartered flight to 66 degrees south last night from 45.9 degrees south Dunedin New Zealand,” says Nakamura. “Project name ‘Flight to the Light’ was a quest of aurora enthusiasts flying together to see the aurora more closely and was created by Dunedin’s Otago museum director Ian Griffin. During the flight, we were rewarded with magnificent views of the aurora australis completely surrounding us providing us a breathtaking observatory of the Southern Lights.” A must-see video shows the lights in motion. THE FLIGHT OF THE EASTERNAUTS: The cosmic ray monitoring program of Earth to Sky Calculus is not supported by government grants or big corporate sponsors. Instead we rely on you. That is, you and the Easternauts: On March 2nd, the student researchers flew a payload-full of Easter bunnies to the edge of space–and you can have one for $39.95. (Space helmet included!)
They make great Easter gifts for young scientists, and all proceeds support STEM education. Each bunny comes with a greeting card showing the Easternaut in flight and telling the story of its journey to the stratosphere and back again. More far-out gifts may be found in the Earth to Sky store. All Sky Fireball Network Every night, a network of NASA all-sky cameras scans the skies above the United States for meteoritic fireballs. Automated software maintained by NASA’s Meteoroid Environment Office calculates their orbits, velocity, penetration depth in Earth’s atmosphere and many other characteristics. Daily results are presented here on Spaceweather.com. On Mar. 25, 2017, the network reported 8 fireballs. In this diagram of the inner solar system, all of the fireball orbits intersect at a single point–Earth. The orbits are color-coded by velocity, from slow (red) to fast (blue). [Larger image] [movies] Near Earth Asteroids Potentially Hazardous Asteroids (PHAs) are space rocks larger than approximately 100m that can come closer to Earth than 0.05 AU. None of the known PHAs is on a collision course with our planet, although astronomers are finding new ones all the time. On March 25, 2017 there were 1782 potentially hazardous asteroids. Recent & Upcoming Earth-asteroid encounters: Notes: LD means “Lunar Distance.” 1 LD = 384,401 km, the distance between Earth and the Moon. 1 LD also equals 0.00256 AU. MAG is the visual magnitude of the asteroid on the date of closest approach. Cosmic Rays in the Atmosphere Readers, thank you for your patience while we continue to develop this new section of Spaceweather.com. We’ve been working to streamline our data reduction, allowing us to post results from balloon flights much more rapidly, and we have developed a new data product, shown here: This plot displays radiation measurements not only in the stratosphere, but also at aviation altitudes. Dose rates are expressed as multiples of sea level. For instance, we see that boarding a plane that flies at 25,000 feet exposes passengers to dose rates ~10x higher than sea level. At 40,000 feet, the multiplier is closer to 50x. These measurements are made by our usual cosmic ray payload as it passes through aviation altitudes en route to the stratosphere over California. What is this all about? Approximately once a week, Spaceweather.com and the students of Earth to Sky Calculus fly space weather balloons to the stratosphere over California. These balloons are equipped with radiation sensors that detect cosmic rays, a surprisingly “down to Earth” form of space weather. Cosmic rays can seed clouds, trigger lightning, and penetrate commercial airplanes. Furthermore, there are studies (#1, #2, #3, #4) linking cosmic rays with cardiac arrhythmias and sudden cardiac death in the general population. Our latest measurements show that cosmic rays are intensifying, with an increase of more than 12% since 2015: Why are cosmic rays intensifying? The main reason is the sun. Solar storm clouds such as coronal mass ejections (CMEs) sweep aside cosmic rays when they pass by Earth. During Solar Maximum, CMEs are abundant and cosmic rays are held at bay. Now, however, the solar cycle is swinging toward Solar Minimum, allowing cosmic rays to return. Another reason could be the weakening of Earth’s magnetic field, which helps protect us from deep-space radiation. The radiation sensors onboard our helium balloons detect X-rays and gamma-rays in the energy range 10 keV to 20 MeV.
These energies span the range of medical X-ray machines and airport security scanners. The data points in the graph above correspond to the peak of the Regener-Pfotzer maximum, which lies about 67,000 feet above central California. When cosmic rays crash into Earth’s atmosphere, they produce a spray of secondary particles that is most intense at the entrance to the stratosphere. Physicists Erich Regener and Georg Pfotzer discovered the maximum using balloons in the 1930s and it is what we are measuring today. Daily Sun: 25 Mar 17 Sunspot AR2643 poses no threat for strong solar flares. Credit: SDO/HMI Sunspot number: 12 What is the sunspot number? Updated 25 Mar 2017 Current Stretch: 0 days 2017 total: 27 days (32%) 2016 total: 32 days (9%) 2015 total: 0 days (0%) 2014 total: 1 day (<1%) 2013 total: 0 days (0%) 2012 total: 0 days (0%) 2011 total: 2 days (<1%) 2010 total: 51 days (14%) 2009 total: 260 days (71%) Updated 25 Mar 2017 Current Auroral Oval: Coronal Holes: 25 Mar 17 A fast-moving stream of solar wind flowing from the indicated coronal hole could reach Earth as early as March 27th (although the 28th is more likely). Credit: NASA/SDO. Noctilucent Clouds The southern season for noctilucent clouds began on Nov. 17, 2016. Come back to this spot every day to see the “daily daisy” from NASA’s AIM spacecraft, which is monitoring the dance of electric-blue around the Antarctic Circle. Updated at: 02-24-2017 17:55:02 Updated at: 2017 Mar 24 2200 UTC Updated at: 2017 Mar 24 2200 UTC
<urn:uuid:5d6cdb87-03f9-4170-a1b5-517c70b52a27>
3.4375
2,039
News (Org.)
Science & Tech.
56.039174
95,506,597
May lead to sharper picture of human genome as well Dana-Farber Cancer Institute scientists have used a powerful gene-mapping technique to produce the clearest picture yet of all the genes of an animal – the microscopic worm Caenorhabditis elegans (better known as C. elegans). Scientists believe the same technique may be used to bring the current, somewhat blurry picture of the human genome into sharper focus. The study, which will be posted on the Nature Genetics website (http://www.nature.com/ng/) April 7 in advance of its publication in the journal, describes an effort to locate and precisely identify all of the approximately 19,000 genes that have been predicted to exist in the genome of C. elegans. The success of the technique in the worm, whose catalogue of genes is relatively small and well-mapped, is a strong indication that it can be applied to the human genome as well, the authors say. Bill Schaller | EurekAlert!
<urn:uuid:02b83c84-1c97-4754-a04a-9f193c929132>
3.671875
786
Content Listing
Science & Tech.
38.837049
95,506,614
Maths C2: Seems extremely simple but I don't get it!!
- Thread Starter Last edited by TwentyTen; 12-02-2010 at 23:10.
- 12-02-2010 23:08
I know the principles, but I don't understand the proofs. The angle at the centre of a circle is twice the angle at the circumference. Let angle ACO = a and angle AOX = y. Angle CAO = angle ACO (isosceles triangle). Therefore angle CAO = a. I get that bit, this is the bit that's confusing.... Hence y = 2a (exterior angle of triangle AOC)
- 12-02-2010 23:13
(Original post by TwentyTen)
Because the angles are equal they can both be expressed as a, and because the angle at the centre is double, you make it 2a = y.
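To spell out the exterior-angle step the thread is stuck on, here is a short sketch using the same labels, and assuming X lies on CO produced beyond O (the usual construction; the diagram itself isn't shown). The exterior angle of a triangle equals the sum of the two opposite interior angles, so

$$y = \angle AOX = \angle CAO + \angle ACO = a + a = 2a.$$

Nothing is assumed about y beforehand: it follows directly from the two equal base angles of the isosceles triangle OAC, which is exactly why the angle at the centre comes out as twice the angle at the circumference.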
<urn:uuid:ce58eba3-e0d7-4e05-ab19-225c0939a1dd>
3.828125
201
Comment Section
Science & Tech.
73.773542
95,506,632
Manual Reference Pages - CHEMISTRY::FILE::VRML (3)
Chemistry::File::VRML - Generate VRML models for molecules
    use Chemistry::Mol;
    use Chemistry::File::VRML;
    use Chemistry::Bond::Find 'find_bonds';

    my $mol = Chemistry::Mol->read('test.pdb');
    find_bonds($mol, orders => 1);
    $mol->write('test.wrl',
        format => 'vrml',
        center => 1,
        style  => 'ballAndWire',
        color  => 'byAtom',
    );
This module generates a VRML (Virtual Reality Modeling Language) representation of a molecule, which can then be visualized with any VRML viewer. This is a PerlMol file I/O plugin, and registers the vrml format with Chemistry::Mol. Note however that this file plugin is write-only; there's no way of reading a VRML file back into a molecule. This module is a modification of PDB2VRML by Horst Vollhardt, adapted to the PerlMol framework. The following options may be passed to $mol->write. If true, shift the molecule's center of geometry into the origin of the coordinate system. Note: this only affects the output; it does not affect the coordinates of the atoms in the original Chemistry::Mol object. Sets the style for the VRML representation of the molecular structure. Default is Wireframe. Currently supported styles are: Wireframe, Stick, BallAndWire, and BallAndStick. Set the overall color of the molecular structure. If the color is set to byAtom, the color for the atoms and bonds is defined by the atom type. Default is byAtom. Currently supported colors are: yellow, blue, red, green, white, brown, Defines the radius in Angstrom for the cylinders in the Stick and BallAndStick style. Default is 0.15. Defines the factor which is multiplied with the VDW radius for the spheres in the BallAndWire and BallAndStick style. Default is 0.2. Turns on/off compression of the output. If turned on, all leading whitespace is removed. This produces a less readable but approx. 20% smaller output; the speed is increased by 10% as well. PDB2VRML originally by Horst Vollhardt, firstname.lastname@example.org, 1998. Modified and adapted as Chemistry::File::VRML by Ivan Tubert-Brohman. PDB2VRML Copyright (c) 1998 by Horst Vollhardt. All rights reserved. Chemistry::File::VRML modifications Copyright (c) 2005 by Ivan Tubert-Brohman. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. PDB2VRML found at PerlMol project at http://www.perlmol.org/
perl v5.20.3 | CHEMISTRY::FILE::VRML (3) | 2005-05-16
<urn:uuid:947e8e9e-4392-42d0-8a64-aa89b63e086b>
2.515625
675
Documentation
Software Dev.
60.831821
95,506,700
The Adams–Williamson equation, named after L. H. Adams and E. D. Williamson, is an equation used to determine density as a function of radius, more commonly used to determine the relation between the velocities of seismic waves and the density of the Earth's interior. Given the average density of rocks at the Earth's surface and profiles of the P-wave and S-wave speeds as a function of depth, it can predict how density increases with depth. It assumes that the compression is adiabatic and that the Earth is spherically symmetric, homogeneous, and in hydrostatic equilibrium. It can also be applied to spherical shells with that property. It is an important part of models of the Earth's interior such as the Preliminary reference Earth model (PREM).
Williamson and Adams first developed the theory in 1923. They concluded that "It is therefore impossible to explain the high density of the Earth on the basis of compression alone. The dense interior cannot consist of ordinary rocks compressed to a small volume; we must therefore fall back on the only reasonable alternative, namely, the presence of a heavier material, presumably some metal, which, to judge from its abundance in the Earth's crust, in meteorites and in the Sun, is probably iron."
The two types of seismic body waves are compressional waves (P-waves) and shear waves (S-waves). Both have speeds that are determined by the elastic properties of the medium they travel through, in particular the bulk modulus K, the shear modulus μ, and the density ρ. In terms of these parameters, the P-wave speed vp and the S-wave speed vs are
$$v_p = \sqrt{\frac{K + \tfrac{4}{3}\mu}{\rho}}, \qquad v_s = \sqrt{\frac{\mu}{\rho}}.$$
These two speeds can be combined in a seismic parameter
$$\Phi = v_p^2 - \tfrac{4}{3}v_s^2 = \frac{K}{\rho}.$$
The definition of the bulk modulus,
$$K = \rho\,\frac{dP}{d\rho},$$
is equivalent to
$$\frac{d\rho}{dP} = \frac{\rho}{K} = \frac{1}{\Phi}.$$
Suppose a region at a distance r from the Earth's center can be considered a fluid in hydrostatic equilibrium; it is acted on by gravitational attraction from the part of the Earth that is below it and pressure from the part above it. Also suppose that the compression is adiabatic (so thermal expansion does not contribute to density variations). The pressure P(r) varies with r as
$$\frac{dP}{dr} = -\rho(r)\,g(r),$$
where g(r) = Gm(r)/r² is the gravitational acceleration due to the mass m(r) enclosed within radius r. Combining this with the relation dρ/dP = 1/Φ gives the Adams–Williamson equation,
$$\frac{d\rho}{dr} = \frac{d\rho}{dP}\,\frac{dP}{dr} = -\frac{\rho(r)\,g(r)}{\Phi(r)}.$$
This equation can be integrated to obtain
$$\rho(r) = \rho_0 \exp\left(-\int_{r_0}^{r} \frac{g(r')}{\Phi(r')}\,dr'\right),$$
where r0 is the radius at the Earth's surface and ρ0 is the density at the surface. Given ρ0 and profiles of the P- and S-wave speeds, the radial dependence of the density can be determined by numerical integration.
- C. M. R. Fowler (2005). The Solid Earth: An Introduction to Global Geophysics. Cambridge University Press. pp. 333–. ISBN 978-0-521-89307-7.
- Eugene F. Milone; William J. F. Wilson (30 January 2014). Solar System Astrophysics: Planetary Atmospheres and the Outer Solar System. Springer Science & Business Media. pp. 494–. ISBN 978-1-4614-9090-6.
- Poirier, Jean-Paul (2000). Introduction to the Physics of the Earth's Interior. Cambridge Topics in Mineral Physics & Chemistry. Cambridge University Press. ISBN 0-521-66313-X.
- Dziewonski, A. M.; Anderson, D. L. (1981). "Preliminary reference Earth model". Physics of the Earth and Planetary Interiors. 25: 297–356. Bibcode:1981PEPI...25..297D. doi:10.1016/0031-9201(81)90046-7.
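As an illustration of the numerical integration mentioned above, here is a minimal sketch (not from the article): it Euler-steps the Adams–Williamson equation inward from the surface while also tracking the enclosed mass m(r). The surface values, the stopping radius and the toy profile for Φ(r) are assumptions chosen only to make the example self-contained; a real calculation would take Φ(r) from measured P- and S-wave speed profiles and apply the equation shell by shell.

import math

# Assumed constants for this sketch (not values from the article)
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
R_SURFACE = 6.371e6   # Earth's surface radius, m
R_STOP = 3.48e6       # stop near the core-mantle boundary, m (assumed)
RHO_SURFACE = 3.3e3   # assumed surface rock density, kg/m^3
M_SURFACE = 5.972e24  # mass enclosed at the surface = Earth's total mass, kg

def phi(r):
    """Toy seismic parameter Phi = vp^2 - (4/3) vs^2 in m^2/s^2 (assumed profile)."""
    return 4.0e7 + 6.0e7 * (1.0 - r / R_SURFACE)

def adams_williamson(n_steps=5000):
    """Euler-integrate d(rho)/dr = -rho * g / Phi inward from the surface."""
    dr = (R_SURFACE - R_STOP) / n_steps
    r, rho, m = R_SURFACE, RHO_SURFACE, M_SURFACE
    for _ in range(n_steps):
        g = G * m / r**2                    # local gravity from the enclosed mass
        drho_dr = -rho * g / phi(r)         # Adams-Williamson equation
        dm_dr = 4.0 * math.pi * r**2 * rho  # mass of a thin spherical shell
        # Step inward: r decreases, so the density grows and the enclosed mass shrinks.
        rho -= drho_dr * dr
        m -= dm_dr * dr
        r -= dr
    return r, rho, m

if __name__ == "__main__":
    r, rho, m = adams_williamson()
    print(f"at r = {r / 1e3:.0f} km: rho = {rho:.0f} kg/m^3, enclosed mass = {m:.3e} kg")

Even a crude run like this shows the behaviour behind the 1923 conclusion quoted above: compression makes the density climb smoothly with depth, but starting from ordinary surface rock it cannot account for the Earth's high mean density of about 5,500 kg/m^3, hence the argument for a dense, probably iron, interior.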
<urn:uuid:8ab11f6b-21a6-4a68-9b47-637d192fbb10>
4
739
Knowledge Article
Science & Tech.
63.220794
95,506,722
- Nano Express - Open Access Pool boiling of water-Al2O3 and water-Cu nanofluids on horizontal smooth tubes © Cieslinski and Kaczmarczyk; licensee Springer. 2011 Received: 31 October 2010 Accepted: 15 March 2011 Published: 15 March 2011 Experimental investigation of heat transfer during pool boiling of two nanofluids, i.e., water-Al2O3 and water-Cu, has been carried out. Nanoparticles were tested at the concentration of 0.01%, 0.1%, and 1% by weight. The horizontal smooth copper and stainless steel tubes having 10 mm OD and 0.6 mm wall thickness formed the test heater. The experiments have been performed to establish the influence of nanofluid concentration as well as tube surface material on heat transfer characteristics at atmospheric pressure. The results indicate that, independent of concentration, the nanoparticle material (Al2O3 and Cu) has almost no influence on the heat transfer coefficient during boiling of water-Al2O3 or water-Cu nanofluids on a smooth copper tube. It seems that heater material did not affect the boiling heat transfer in 0.1 wt.% water-Cu nanofluid; nevertheless, independent of concentration, a distinctly higher heat transfer coefficient was recorded for the stainless steel tube than for the copper tube for the same heat flux density. Recent advances in nanotechnology have allowed development of a new category of liquids termed nanofluids, a term first used by a group at Argonne National Laboratory, USA, to describe liquid suspensions containing nanoparticles with thermal conductivities orders of magnitude higher than the base liquids, and with sizes significantly smaller than 100 nm. The augmentation of thermal conductivity could provide a basis for an enormous innovation for heat transfer intensification, which is pertinent to a number of industrial sectors including transportation, power generation, micro-manufacturing, chemical and metallurgical industries, as well as the heating, cooling, ventilation, and air-conditioning industry. Literature findings regarding pool boiling of nanofluids can be summarized as follows. Li et al. studied boiling of water-CuO nanofluids of different concentrations (0.05% and 0.2% by weight) on a copper plate. They observed deterioration of heat transfer as compared to the base fluid and attributed this fact to the sedimentation of nanoparticles, which changes the cavity radius, contact angle, and superheated layer thickness. You et al. reported that independent of the concentration of the nanoparticles (0.001 to 0.05 g/l) nucleate boiling heat transfer coefficients for water-Al2O3 nanofluid while boiling on a plate appeared to be the same as for the base fluid. They also found that the size of bubbles increased with addition of nanoparticles to water. Das et al. conducted an investigation on the pool boiling of water-Al2O3 nanofluids on a horizontal tubular heater having a diameter of 20 mm with different surface roughness at atmospheric pressure. It was found that the boiling heat transfer of nanoparticle suspensions deteriorated compared to that of pure water. Compared with pure water, surface roughness of the heating surface could also greatly affect the nucleation superheat. The subsidence of nanoparticles was considered the main reason for the increase of the superheat. Vassallo et al. carried out an experiment of water-SiO2 nanofluids boiling on a horizontal NiCr wire at atmospheric pressure. No appreciable differences in the boiling heat transfer were found for the heat flux less than the CHF.
Bang and Chang conducted an experimental investigation on the pool boiling of water-Al2O3 nanofluids on a plain plate at atmospheric pressure. The concentration of nanoparticles was 0.5%, 1%, 2%, and 4% by volume. It was found that the boiling curves were shifted right - towards higher wall superheats. The deterioration became worse as nanoparticle concentration increased and was related to the change of the heating surface characteristics by the deposition of nanoparticles on the heating surface. Wen and Ding studied boiling of water-Al2O3 nanofluids on a stainless steel disc 150 mm in diameter at atmospheric pressure. Contrary to Bang and Chang's work, heat transfer enhancement was recorded. A possible explanation of this discrepancy is the lower concentration of nanoparticles used (0.32%). Shi et al. carried out experiments with boiling of water-Al2O3 nanofluid and Fe-water nanofluid on a horizontal copper plate 60 mm in diameter. The concentration of nanoparticles was 0.1%, 1%, and 2% by volume. Generally, the augmentation and deterioration of heat transfer was observed for water-Fe and water-Al2O3 nanofluids, respectively. Nguyen et al. investigated boiling of water-Al2O3 nanofluid on the chrome-plated, very smooth face of a copper block of 100 mm diameter. The concentration of nanoparticles was 0.5%, 1%, and 2% by volume. In general, it was observed that for a given wall superheat, the heat flux considerably decreased with the increase of the particle concentration. Furthermore, for sufficiently high wall superheat, the heat flux tended to become nearly constant. Coursey and Kim showed that even if the Al2O3 nanoparticle concentration was increased by over two orders of magnitude, no enhancement or degradation of heat transfer was observed during boiling of ethanol-based nanofluids on glass or gold surfaces. It was attributed to the highly wetting nature of ethanol. For ethanol-Al2O3 nanofluids and copper surfaces, the nucleate boiling was improved with increasing nanoparticle concentration. Liu and Liao examined nanofluids, i.e., mixtures of a base fluid (water and alcohol), nanoparticles (CuO and SiO2), and a surfactant (SDBS), as well as nanoparticle suspensions consisting of the base liquid and nanoparticles, during pool boiling on the face of a copper bar having 20 mm diameter. The boiling characteristics of the nanofluids and nanoparticle suspensions were poorer compared with those of the base fluids. Narayan et al. studied the influence of tube orientation on pool boiling heat transfer of water-Al2O3 nanofluids from a smooth tube of diameter 33 mm inclined at 0°, 45°, and 90°. They found that horizontal orientation gave maximum heat transfer and the boiling performance deteriorated with increase in nanoparticle concentration (0.25%, 1%, and 2% by weight). Lotfi and Shafii performed transient quenching experiments with a 10 mm diameter silver sphere immersed in water-Ag and water-TiO2 nanofluids. It was established that the quenching process was more rapid in pure water than in nanofluids and the cooling time was inversely proportional to the nanoparticle mass concentration (0.5%, 1%, 2%, and 4% - Al2O3 and 0.125%, 0.25%, 0.5%, and 1% - TiO2). Trisaksri and Wongwises tested R141b-TiO2 nanofluids while boiling on a horizontal copper cylinder of 28.5 mm diameter. They discovered that adding a small amount of nanoparticles did not affect the boiling heat transfer, but addition of TiO2 nanoparticles at 0.03% and 0.05% by volume deteriorated the boiling heat transfer.
Moreover, the boiling heat transfer coefficient decreased with increasing particle volume concentrations, especially at higher heat flux. Kathiravan et al. investigated boiling of water-Cu and water-Cu-SDS (9 wt.%) nanofluids on a 300 mm square stainless steel plate. They revealed that copper nanoparticles caused a decrease in boiling heat transfer coefficient for water as base liquid. The heat transfer coefficient decreased with increase of the concentration of nanoparticles (0.25%, 0.5%, and 1% by weight) for both water-Cu and water-Cu-SDS nanofluids. Suriyawong and Wongwises studied boiling of water-TiO2 nanofluids on horizontal circular plates made from copper and aluminium with different roughness (0.2 and 4 μm). The concentration of nanoparticles was very low: 0.00005%, 0.0001%, 0.0005%, 0.005%, and 0.01% by volume. For the copper plate with nanofluid concentrations of more than 0.0001%, the heat transfer coefficient was found to be less than that of the base fluid at both levels of surface roughness. On the other hand, for aluminium surfaces the heat transfer coefficient was found to be less than that of the base fluid at every level of nanofluid concentration and surface roughness. Ahmed and Hamed performed experiments with boiling of water-Al2O3 on the face of a copper block of 25.4 mm diameter. Nanofluids at 0.01%, 0.1%, and 0.5% by volume concentrations were prepared at a neutral pH of 6.5 and an acidic pH of 5. Ultrasonic vibration and electrostatic stabilization were used to prepare nanofluids. It was found that increasing the concentration either reduced or had no effect on the heat transfer coefficient. Enhancement of heat transfer coefficient was achieved only at low nanofluid concentration (0.01%) and the nanofluid at a pH of 6.5. Recently, Kwark et al. pointed out the transient characteristics of water-Al2O3 nanofluid boiling on a horizontal copper plate. The longer a heater is subjected to the nanofluid boiling process, the thicker the nanoparticle coating generated on its surface. The thickness of this nanoparticle coating can then dictate the boiling heat transfer coefficient. The currently available experimental data on boiling heat transfer of nanofluids are still limited. Additionally, conflicting results regarding the effect of nanoparticles on pool boiling heat transfer performance have been reported [19, 20]. As suggested in , further detailed investigations are necessary to understand the phenomena of boiling of nanofluids. In particular, experiments are lacking on the effects of nanoparticle material and heating surface material on boiling heat transfer from horizontal smooth tubes. As a consequence, the main aim of the present study was to obtain boiling characteristics, i.e., boiling curves and heat transfer coefficients for water-Al2O3 and water-Cu nanofluids of different concentrations for copper and stainless steel tubes. The heat flux density and the heat transfer coefficient were determined from
$$q = \frac{UI}{\pi D_o L}, \qquad \alpha = \frac{q}{t_o - t_{sat}}, \qquad t_o = t_i - \frac{UI}{2\pi\lambda L}\ln\frac{D_o}{D_i},$$
where U and I are the cartridge heater voltage drop and current, respectively, D_o/D_i is the outside to inside diameter ratio, L is the active length of the tube, λ is the thermal conductivity of the tube material (copper or stainless steel) and t_i was calculated as the arithmetic mean of 12 measured inside wall temperatures. The liquid level was maintained at ca. 15 mm above the centerline of the test tube at saturated state.
Preparation and characterization of the tested nanofluids
Dispersants were not used to stabilize the suspension. Ultrasonic vibration was used for 4-5 h in order to stabilize the dispersion of the nanoparticles.
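As a worked illustration of the data reduction written just above, here is a minimal sketch. It simply assumes that the standard relations quoted there (q = UI/πD_oL, the logarithmic wall-conduction correction, and α = q/(t_o - t_sat)) are the ones intended, and the numerical inputs are made-up illustrative values, not measurements from the experiment.

import math

def reduce_point(U, I, t_i, t_sat, D_o=0.010, D_i=0.0088, L=0.10, lam=380.0):
    """Reduce one steady-state reading to heat flux and heat transfer coefficient.

    U, I     : cartridge heater voltage drop [V] and current [A]
    t_i      : arithmetic mean of the measured inside wall temperatures [deg C]
    t_sat    : saturation temperature of the liquid [deg C]
    D_o, D_i : outside / inside tube diameters [m] (10 mm OD, 0.6 mm wall)
    L        : active heated length of the tube [m] (assumed value)
    lam      : thermal conductivity of the tube wall [W/(m K)] (copper assumed)
    """
    Q = U * I                                  # electrical power dissipated in the heater
    q = Q / (math.pi * D_o * L)                # heat flux density on the outer surface
    # Radial conduction through the tube wall gives the outside wall temperature
    t_o = t_i - Q * math.log(D_o / D_i) / (2.0 * math.pi * lam * L)
    alpha = q / (t_o - t_sat)                  # pool boiling heat transfer coefficient
    return q, t_o, alpha

# Illustrative numbers only (not data from the article)
q, t_o, alpha = reduce_point(U=50.0, I=4.0, t_i=105.2, t_sat=100.0)
print(f"q = {q / 1e3:.1f} kW/m^2, t_o = {t_o:.2f} degC, alpha = {alpha / 1e3:.2f} kW/(m^2 K)")

The same routine applied point by point along a run of increasing heater power yields the boiling curve (q versus wall superheat) and the α(q) characteristic that the study reports.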
Nanoparticles were tested at the concentration of 0.01%, 0.1%, and 1% by weight. In a typical experiment, before the test began, a vacuum pump was used to evacuate the accumulated air from the vessel. Nanofluid at a preset concentration was charged and then preheated to the saturation temperature by the auxiliary heater. Next, the cartridge heater was switched on. Measurement was first performed at the lowest power input. Data were collected by increasing the heat flux by small increments. Experiments were performed at atmospheric pressure. Each data point was taken at steady state, the condition of steady state being defined as a variation in the thermocouple outputs of less than 0.001 mV over 3 min. It generally took about 15 min to achieve steady conditions after the power level was changed. In order to ensure consistent surface state after each test, the boiling surface was prepared in the same manner, i.e., the stainless steel tube was finished with emery paper 400 and the copper tube was polished with an abrasive compound; next, the test tube was placed in an ultrasonic cleaner for 1 h. Finally, the boiling surface was cleaned by water jet. The absolute measurement errors of the electrical power ΔPmax, the outside tube diameter ΔDo, and the active tube length ΔL are 10 W, 0.02 mm, and 0.2 mm, respectively. The resulting maximum overall experimental limits of error for the heat flux density ranged from ± 1.3% at maximum heat flux density to ± 1.2% at minimum heat flux density. The absolute measurement error of the wall superheat, δT, estimated from the systematic error analysis, equals ± 0.2 K. The maximum error for the average heat transfer coefficient was estimated at ± 2.3%. Investigation of nucleate saturated pool boiling heat transfer on the outside of smooth horizontal tubes submerged in water-Al2O3 and water-Cu nanofluids has been carried out. The measurements were performed at atmospheric pressure and nanoparticle concentrations of 0.01%, 0.1%, and 1% by weight.
Comparison of present results with literature data
Effect of nanoparticle material
Effect of nanofluid concentration
Effect of tube material
Independent of the concentrations tested (0.01%, 0.1%, and 1% by weight), the nanoparticle material (Al2O3 and Cu) has almost no influence on heat transfer during boiling of water-Al2O3 or water-Cu nanofluids on a smooth copper tube. Contrary to the stainless steel tube experiments, the addition of copper as well as Al2O3 nanoparticles deteriorates pool boiling heat transfer on smooth copper tubes. The higher the nanoparticle concentration, the lower the heat transfer coefficient for the same wall superheat. Independent of concentration, a distinctly higher heat transfer coefficient was recorded for the stainless steel tube than for the copper tube for the same heat flux density. It seems that surface material does not affect the boiling heat transfer in 0.1% water-Cu nanofluid. A thin solid coating (detected by eye) was observed on copper tubes after tests with water-Al2O3 and water-Cu nanofluids. The higher the nanoparticle concentration, the thicker the coating recorded at the end of testing. This work was sponsored by the Ministry of Research and Higher Education, Grant No. N N512 374435.
- Choi S: Enhancing thermal conductivity of fluids with nanoparticles. Developments and Applications of Non-Newtonian Flows, ASME, FED-Vol. 231/MD-Vol. 66 1995, 99–105.
- Li CH, Wang BX, Peng XF: Experimental investigations on boiling of nano-particle suspensions.
Proc 5th International Conference Boiling Heat Transfer, Montego Bay, Jamaica 2003.Google Scholar - You M, Kim JH, Kim KH: Effect of nanoparticles on critical heat flux of water in pool boiling heat transfer. Applied Physics Letters 2003, 83: 3374–3376. 10.1063/1.1619206View ArticleGoogle Scholar - Das SK, Putra N, Roetzel W: Pool boiling characteristics of nano-fluids. Int J Heat and Mass Transfer 2003, 46: 851–862. 10.1016/S0017-9310(02)00348-4View ArticleGoogle Scholar - Vassallo P, Kumar R, Amico S: Pool boiling heat transfer experiments in silica-water nano-fluids. Int J Heat and Mass Transfer 2004, 47: 407–411. 10.1016/S0017-9310(03)00361-2View ArticleGoogle Scholar - Bang IC, Chang SH: Boiling heat transfer performance and phenomena of Al2O3- water nano-fluids from a plain surface in a pool. Int J Heat and Mass Transfer 2005, 48: 2407–2419. 10.1016/j.ijheatmasstransfer.2004.12.047View ArticleGoogle Scholar - Wen D, Ding Y: Experimental investigation into the boiling heat transfer of aqueous based γ-alumina nanofluids. J Nanoparticles Research 2005, 7: 265–274. 10.1007/s11051-005-3478-9View ArticleGoogle Scholar - Shi MH, Shuai MQ, Lai YE, Li YQ, Xuan M: Experimental study of pool boiling heat transfer for nanoparticle suspensions on a plate surface. 13th International Heat Transfer Conference, Sydney, 2006, paper BOI-06 (CD-ROM)Google Scholar - Nguyen CT, Galanis N, Roy G, Divoux S, Gilbert D: Pool boiling characteristics of water Al2O3nanofluid. 13th International Heat Transfer Conference, Sydney, 2006, NAN-02 (CD-ROM)Google Scholar - Coursey JS, Kim J: Nanofluid boiling: the effect of surface wettability. Int J Heat Fluid Flow 2008, 29: 1577–1585. 10.1016/j.ijheatfluidflow.2008.07.004View ArticleGoogle Scholar - Liu Z, Liao L: Sorption and agglutination phenomenon of nanofluids on a plain heating surface during pool boiling. Int J Heat and Mass Transfer 2005, 48: 2407–2419. 10.1016/j.ijheatmasstransfer.2004.12.047View ArticleGoogle Scholar - Narayan GP, Anoop KB, Sateesh G, Das SK: Effect of surface orientation on pool boiling heat transfer of nanoparticle suspensions. Int J Multiphase Flow 2008, 34: 145–160. 10.1016/j.ijmultiphaseflow.2007.08.004View ArticleGoogle Scholar - Lotfi H, Shafii MB: Boiling heat transfer on a high temperature silver sphere in nanofluid. Int J Thermal Sc 2009, 48: 2215–2220. 10.1016/j.ijthermalsci.2009.04.009View ArticleGoogle Scholar - Trisaksri V, Wongwises S: Nucleate pool boiling heat transfer of TiO2-R141b nanofluids. Int J Heat and Mass Transfer 2009, 52: 1582–1588. 10.1016/j.ijheatmasstransfer.2008.07.041View ArticleGoogle Scholar - Kathiravan R, Kumar R, Gupta A, Chandra R: Preparation and pool boiling characteristics of copper nanofluids over a flat plate heater. Int J Heat and Mass Transfer 2010, 53: 1673–1681. 10.1016/j.ijheatmasstransfer.2010.01.022View ArticleGoogle Scholar - Suriyawong A, Wongwises S: Nucleate pool boiling heat transfer characteristics of TiO2-water nanofluids at very low concentrations. ETFS 2010, 34: 992–999.Google Scholar - Ahmed O, Hamed MS: The effect of experimental techniques on the pool boiling of nanofluids. 7th International Conference On Multiphase Flow, ICMF 2010, Tampa, Fl USA 2010.Google Scholar - Kwark SM, Kumar R, Moreno G, Yoo J, You SM: Pool boiling characteristics of low concentration nanofluids. Int J Heat and Mass Transfer 2010, 53: 972–981. 
10.1016/j.ijheatmasstransfer.2009.11.018View ArticleGoogle Scholar - Taylor RA, Phelan PE: Pool boiling of nanofluids: Comprehensive review of existing data and limited new data. Int J Heat and Mass Transfer 2009, 52: 5339–5347. 10.1016/j.ijheatmasstransfer.2009.06.040View ArticleGoogle Scholar - Godson L, Raja B, Lal DM, Wongwises S: Enhancement of heat transfer using nanofluids - An overview. Renewable and Sustainable energy Reviews 2010, 14: 629–641. 10.1016/j.rser.2009.10.004View ArticleGoogle Scholar - Wang XQ, Mujumdar AS: Heat transfer characteristics of nanofluids: a review. Int J Thermal Sc 2007, 46: 1–19. 10.1016/j.ijthermalsci.2006.06.010View ArticleGoogle Scholar - Marto PJ, Anderson CL: Nucleate boiling characteristics of R-113 in small tube bundle. Transactions ASME J Heat Transfer 1992, 114: 425–433. 10.1115/1.2911291View ArticleGoogle Scholar - Chiou ChB, Lu DCh, Wang ChCh: Pool boiling of R-22, R124 and R-134a on a plain tube. Int J Heat Mass Transfer 1997, 40(7):1657–1666. 10.1016/S0017-9310(96)00239-6View ArticleGoogle Scholar - Keblinski P, Eastman JA, Cahill DG: Nanofluids for thermal transport. Materials Today 2005, 8(6):36–44. 10.1016/S1369-7021(05)70936-6View ArticleGoogle Scholar - Cooper MG: Heat flow in saturated nucleate pool boiling - a wide-ranging examination using reduced properties. Advances in Heat Transfer 1984, 16: 157–239. full_textView ArticleGoogle Scholar - Esawy M, Malayeri MR, Müller-Steinhagen H: Crystallization fouling of finned tubes during pool boiling: effect of fin density. In Proceedings of International Conference on Heat Exchanger Fouling and Cleaning VIII - 2009. June 14–19, 2009, Schladming, Austria Edited by: Müller-Steinhagen H, Malayeri MR, Watkinson AP. 2009.Google Scholar This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
<urn:uuid:787a8846-1045-4865-9e1d-80038a631b64>
2.625
4,694
Academic Writing
Science & Tech.
55.675199
95,506,724
Species Detail - Black-tailed Godwit (Limosa limosa) - Species information displayed is based on all datasets.
Terrestrial Map - 10km: Distribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50km: Distribution of the number of records recorded within each 50km grid square (WGS84).
Protected Species: Wildlife Acts || Threatened Species: Birds of Conservation Concern >> Birds of Conservation Concern - Amber List
1 January (recorded in 2003) 31 December (recorded in 2010)
National Biodiversity Data Centre, Ireland, Black-tailed Godwit (Limosa limosa), accessed 22 July 2018, <https://maps.biodiversityireland.ie/Species/10060>
<urn:uuid:45d95a54-3cb4-4693-b749-0aa480c92b3d>
2.859375
170
Structured Data
Science & Tech.
28.409979
95,506,737
Bruce collaborated with researchers from the Massachusetts Institute of Technology and the Ecole Polytechnique Federale in Switzerland to develop a process that improves the efficiency of generating electric power using molecular structures extracted from plants. The biosolar breakthrough has the potential to make "green" electricity dramatically cheaper and easier. "This system is a preferred method of sustainable energy because it is clean and it is potentially very efficient," said Bruce, who was named one of "Ten Revolutionaries that May Change the World" by Forbes magazine in 2007 for his early work, which first demonstrated biosolar electricity generation. "As opposed to conventional photovoltaic solar power systems, we are using renewable biological materials rather than toxic chemicals to generate energy. Likewise, our system will require less time, land, water and input of fossil fuels to produce energy than most biofuels." Their findings are in the current issue of Nature: Scientific Reports. To produce the energy, the scientists harnessed the power of a key component of photosynthesis known as photosystem-I (PSI) from blue-green algae. This complex was then bioengineered to specifically interact with a semiconductor so that, when illuminated, the process of photosynthesis produced electricity. Because of the engineered properties, the system self-assembles and is much easier to re-create than his earlier work. In fact, the approach is simple enough that it can be replicated in most labs—allowing others around the world to work toward further optimization. "Because the system is so cheap and simple, my hope is that this system will develop with additional improvements to lead to a green, sustainable energy source," said Bruce, noting that today's fossil fuels were once, millions of years ago, energy-rich plant matter whose growth also was supported by the sun via the process of photosynthesis. This green solar cell is a marriage of non-biological and biological materials. It consists of small tubes made of zinc oxide—this is the non-biological material. These tiny tubes are bioengineered to attract PSI particles and quickly become coated with them—that's the biological part. Done correctly, the two materials intimately intermingle at the metal oxide interface, which, when illuminated by sunlight, excites PSI to produce an electron that "jumps" into the zinc oxide semiconductor, producing an electric current. The mechanism is orders of magnitude more efficient than Bruce's earlier work at producing bio-electricity, thanks to the interfacing of PSI with the large surface provided by the nanostructured conductive zinc oxide; however, it still needs to improve manyfold to become useful. Still, the researchers are optimistic and expect rapid progress. Bruce's ability to extract the photosynthetic complexes from algae was key to the new biosolar process. His lab at UT isolated and bioengineered usable quantities of the PSI for the research. Andreas Mershin, the lead author of the paper and a research scientist at MIT, conceptualized and created the nanoscale wires and platform. He credits his design to observing the way needles on pine trees are placed to maximize exposure to sunlight. Mohammad Khaja Nazeeruddin in the lab of Michael Graetzel, a professor at the Ecole Polytechnique Federale in Lausanne, Switzerland, did the complex testing needed to determine that the new mechanism actually performed as expected. 
Graetzel is a pioneer in energy and electron transfer reactions and their application in solar energy conversion. Michael Vaughn, once an undergraduate in Bruce's lab and now a National Science Foundation (NSF) predoctoral fellow at Arizona State University, also collaborated on the paper. "This is a real scientific breakthrough that could become a significant part of our renewable energy strategy in the future," said Lee Riedinger, interim vice chancellor for research. "This success shows that the major energy challenges facing us require clever interdisciplinary solutions, which is what we are trying to achieve in our energy science and engineering PhD program at the Bredesen Center for Interdisciplinary Research and Graduate Education of which Dr. Bruce is one of the leading faculty." The Bredesen Center is a joint UT/Oak Ridge National Laboratory academic unit. Bruce is also a co-principal investigator and scientific thrust leader in TN: SCORE, the Tennessee Solar Conversion and Storage Using Outreach, Research and Education. The $20 million project is funded by the NSF and focuses on promoting research and education on solar energy problems across Tennessee. Additionally, he co-founded and is associate director of UT's Sustainable Energy Education. Bruce's work is funded by the Emerging Frontiers Program at the National Science Foundation.
Whitney Heins | Newswise Science News
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 13.07.2018 | Event News 13.07.2018 | Materials Sciences 13.07.2018 | Life Sciences
<urn:uuid:26ac36f6-847c-4d52-9f21-7ad112f4aef4>
3.890625
1,604
Content Listing
Science & Tech.
30.694829
95,506,770
Ocean acidification slows algae growth in the Southern Ocean Bremerhaven, 24 February 2015. In a recent study, scientists at the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI), demonstrate for the first time that ocean acidification could have negative impacts on diatoms in the Southern Ocean. In laboratory tests they were able to observe that under changing light conditions, diatoms grow more slowly in acidic water. In so doing, Dr Clara Hoppe and her team have overturned the widely held assumption that sinking pH values would stimulate the growth of these unicellular algae. Their findings will be published today in the journal New Phytologist. "Diatoms fulfil an important role in the Earth's climate system. They can absorb large quantities of carbon dioxide, which they bind before ultimately transporting part of it to the depths of the ocean. Once there, the greenhouse gas remains naturally sequestered for centuries," explains Dr Clara Hoppe, a biologist at the AWI and first author of the present study (learn more about the role of diatoms in this interview with Dr Clara Hoppe). Scientists have so far worked under the assumption that the progressive acidification of the ocean could promote growth in diatoms, primarily because the additional carbon dioxide in the water can have a fertilising effect. However, previous studies on the topic have overlooked an important aspect: the light environment. The previous experiments used stable unchanging light conditions. But constant light is hard to come by in nature, especially in the Southern Ocean, where storms mix the upper water layers. As Hoppe elaborates, "Several times a day, the wind and currents transport diatoms in the Southern Ocean from the uppermost water layer to the layers below, and then back to the surface - which means that, in the course of a day, the diatoms experience alternating phases with more and with less light." Under these conditions, the diatoms suffer most from insufficient light when they are in deeper water layers; this is why they grow more slowly in changing compared to constant light. So here they spend less time under optimal light conditions and have to constantly adjust from more light to less. But these conditions were not taken into account in experiments on ocean acidification so far. The new study shows: This shifting light intensity significantly affects the reaction to ocean acidification. "Our findings show for the first time that our old assumptions most likely fall short of the mark. We now know that when the light intensity constantly changes, the effect of the ocean acidification reverses. All of a sudden, lower pH values don't increase growth, like studies using constant light show; instead, they have just the opposite effect," says Dr Björn Rost from the AWI, co-author of the study. In experiments conducted at the Alfred Wegener Institute in Bremerhaven, the researchers investigated how the Antarctic diatom species Chaetoceros debilis grows in constant and in shifting light, respectively - and how the effects of the different light conditions change in todays as well as more acidic seawater. The new study effectively demonstrates that there are surprising interactions between changing light conditions and ocean acidification. As a result, in a future scenario characterised by more acidic water under changing light intensities, diatoms' biomass production could be drastically reduced. 
The results also reveal that under ocean acidification the diatoms are especially sensitive when subjected to phases of higher light levels. As Hoppe relates, "At a certain intensity, the light actually begins to shut down and even destroy part of the photosynthesis chain, a phenomenon referred to as high-light stress. In these phases, the algae cells have to invest a great deal of energy to undo the damage done by the light. This point, at which enough light becomes too much light, is more quickly reached in acidic water." For their experiments, Hoppe's team examined the diatom species Chaetoceros debilis. "Though it's always difficult to generalise for all species on the basis of just one, Chaetoceros is one of the most important groups of diatoms and is often dominant in algal communities. Further, previous studies have shown that its responses to ocean acidification are fairly typical for other diatom species," Hoppe explains. Over the next few years, Clara Hoppe, Björn Rost and their colleagues will continue to explore how different algae species react to changes in their habitat, which species will benefit and which will suffer. The AWI researchers will next turn their attention to investigating plankton communities in the Arctic Ocean. More information: New Phytologist. DOI: 10.1111/nph.13334 Provided by: Alfred Wegener Institute
<urn:uuid:dd6f080c-fb15-495c-8ae3-2cb44ca808d0>
3.390625
973
News Article
Science & Tech.
33.150279
95,506,781
Even when both poles of the planet undergo ozone losses during the winter, the Arctic’s ozone depletion tends to be milder and shorter-lived than the Antarctic’s. This is because the three key ingredients needed for ozone-destroying chemical reactions —chlorine from man-made chlorofluorocarbons (CFCs), frigid temperatures and sunlight— are not usually present in the Arctic at the same time: the northernmost latitudes are generally not cold enough when the sun reappears in the sky in early spring. Still, in 2011, ozone concentrations in the Arctic atmosphere were about 20 percent lower than its late winter average. Maps of ozone concentrations over the Arctic come from the Ozone Monitoring Instrument (OMI) on NASA’s Aura satellite. The left image shows March 19, 2010, and the right shows the same date in 2011. March 2010 had relatively high ozone, while March 2011 has low levels. Credit: NASA/Goddard The new study shows that, while chlorine in the Arctic stratosphere was the ultimate culprit of the severe ozone loss of winter of 2011, unusually cold and persistent temperatures also spurred ozone destruction. Furthermore, uncommon atmospheric conditions blocked wind-driven transport of ozone from the tropics, halting the seasonal ozone resupply until April. “You can safely say that 2011 was very atypical: In over 30 years of satellite records, we hadn’t seen any time where it was this cold for this long,” said Susan E. Strahan, an atmospheric scientist at NASA Goddard Space Flight Center in Greenbelt, Md., and main author of the new paper, which was recently published in the Journal of Geophysical Research-Atmospheres. “Arctic ozone levels were possibly the lowest ever recorded, but they were still significantly higher than the Antarctic’s,” Strahan said. “ There was about half as much ozone loss as in the Antarctic and the ozone levels remained well above 220 Dobson units, which is the threshold for calling the ozone loss a ‘hole’ in the Antarctic – so the Arctic ozone loss of 2011 didn’t constitute an ozone hole.” The majority of ozone depletion in the Arctic happens inside the so-called polar vortex: a region of fast-blowing circular winds that intensify in the fall and isolate the air mass within the vortex, keeping it very cold. Most years, atmospheric waves knock the vortex to lower latitudes in later winter, where it breaks up. In comparison, the Antarctic vortex is very stable and lasts until the middle of spring. But in 2011, an unusually quiescent atmosphere allowed the Arctic vortex to remain strong for four months, maintaining frigid temperatures even after the sun reappeared in March and promoting the chemical processes that deplete ozone. The vortex also played another role in the record ozone low. “Most ozone found in the Arctic is produced in the tropics and is transported to the Arctic,” Strahan said. “But if you have a strong vortex, it’s like locking the door -- the ozone can’t get in.” To determine whether the mix of man-made chemicals and extreme cold or the unusually stagnant atmospheric conditions was primarily responsible for the low ozone levels observed, Strahan and her collaborators used an atmospheric chemistry and transport model (CTM) called the Global Modeling Initiative (GMI) CTM. The team ran two simulations: one that included the chemical reactions that occur on polar stratospheric clouds, the tiny ice particles that only form inside the vortex when it’s very cold, and one without. 
They then compared their results to real ozone observations from NASA's Aura satellite. The results from the first simulation reproduced the real ozone levels very closely, but the second simulation showed that, even if chlorine pollution hadn't been present, ozone levels would still have been low due to lack of transport from the tropics. Strahan's team calculated that the combination of chlorine pollution and extreme cold temperatures were responsible for two thirds of the ozone loss, while the remaining third was due to the atypical atmospheric conditions that blocked ozone resupply. Once the vortex broke down and transport from the tropics resumed, the ozone concentrations rose quickly and reached normal levels in April 2011. Strahan, who now wants to use the GMI model to study the behavior of the ozone layer at both poles during the past three decades, doesn't think it's likely there will be frequent large ozone losses in the Arctic in the future. "It was meteorologically a very unusual year, and similar conditions might not happen again for 30 years," Strahan said. "Also, chlorine levels are going down in the atmosphere because we've stopped producing a lot of CFCs as a result of the Montreal Protocol. If 30 years from now we had the same meteorological conditions again, there would actually be less chlorine in the atmosphere, so the ozone depletion probably wouldn't be as severe."
Maria-Jose Vinas | EurekAlert!
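The attribution quoted above can be pictured as simple differencing of the two model runs against a reference column. The sketch below is illustrative only: the variable names and the example numbers are hypothetical, not values from the study; it merely shows the arithmetic of splitting an observed ozone deficit into a chemistry-driven part and a transport-driven part.

```python
def attribute_ozone_loss(o3_observed, o3_full_chem, o3_no_psc_chem, o3_climatology):
    """Split an ozone deficit into chemical and transport contributions.

    o3_full_chem   : model run with polar stratospheric cloud (PSC) chemistry
    o3_no_psc_chem : identical run with PSC chemistry switched off
    o3_climatology : typical springtime column used as the reference
    All values in Dobson units (DU).
    """
    total_deficit = o3_climatology - o3_observed
    # Chemistry contribution: extra loss that appears only when PSC chemistry is on.
    chemical_loss = o3_no_psc_chem - o3_full_chem
    # Transport contribution: deficit that remains even without PSC chemistry.
    transport_deficit = o3_climatology - o3_no_psc_chem
    return {
        "chemical_fraction": chemical_loss / total_deficit,
        "transport_fraction": transport_deficit / total_deficit,
    }

# Hypothetical illustration (numbers are NOT from the study); chosen so that the
# split comes out roughly two-thirds chemistry, one-third transport:
print(attribute_ozone_loss(o3_observed=300.0, o3_full_chem=300.0,
                           o3_no_psc_chem=380.0, o3_climatology=420.0))
```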
<urn:uuid:45e68530-5b67-4e57-9b46-01c2858420cf>
4.28125
1,690
Content Listing
Science & Tech.
39.520748
95,506,787
Electrochemistry of Porous Materials
Porous materials are of interest for a range of applications: use as supports for catalysts, energy conversion and storage, and chromatography are amongst the most topical areas of interest currently. This book focuses on electrochemical aspects of porous materials. The book is well laid out, with 12 chapters written in a concise and focussed manner. Chapter 1 provides an introduction to electrochemical techniques, with sufficient information for researchers who are not familiar with electrochemistry while not delving into too much detail. Researchers interested in such detail can refer to the list of references provided. One minor editorial comment: capacity should be replaced by capacitance (p. 16) and non-electron conducting materials by insulators (p. 27). Chapter 2 consists of a theoretical description of the electrochemical processes of porous materials. Chapter 3 describes the modelling of mechanisms of electrocatalysis. The next chapters describe a range of materials used to prepare porous electrodes. Chapter 4 describes zeolites and aluminosilicates. The author uses Maya Blue as an interesting example of how electrochemical methods can be utilised to characterise and provide detailed information on microporous materials. Chapter 5 focuses on metal–organic frameworks and Chapter 6 on the electrochemical properties of porous oxides and layered hydrogels, including electrocatalytic studies. This is followed by a description of porous carbon and carbon nanotubes, which are of interest for use as materials for hydrogen storage and in lithium batteries (Chapter 7). The electrochemistry of porous polymers and hybrid materials, including composite materials, is described in Chapter 8. Subsequent chapters focus on applications: electrochemical sensing (Chapter 9); supercapacitors, batteries and fuel cells (Chapter 10); and magnetoelectrochemical and photoelectrochemical applications (Chapter 11), culminating in a final brief chapter on electrosynthesis and environmental remediation. The book, as expected from its title, focuses on electrochemical studies of porous materials; it does not describe any chromatographic applications which may be of interest to readers of Chromatographia (this is not a criticism, this reviewer is not aware of any such applications). A description of possible uses of porous materials in terms of commercial applications (or the potential for such applications) is not included in the book and would have been a useful addition. Overall the book is a good reference volume for any researcher interested in the electrochemistry of porous materials; it provides a survey of virtually all of the current areas of interest in this field. The reference list is extensive and up to date and is of interest to anybody interested in obtaining detailed information on a particular topic. It would make a good textbook for a specialist graduate level course.
<urn:uuid:62096299-cff8-43c0-a67f-db3f436b3291>
2.625
560
User Review
Science & Tech.
12.738546
95,506,800
How? They made a computer simulation of the universe. And it looks sort of like us. A long-proposed thought experiment, put forward by both philosophers and popular culture, points out that any civilisation of sufficient size and intelligence would eventually create a simulation universe if such a thing were possible. And since there would therefore be many more simulations (within simulations, within simulations) than real universes, it is therefore more likely than not that our world is artificial. Now a team of researchers at the University of Bonn in Germany led by Silas Beane say they have evidence this may be true. In a paper named 'Constraints on the Universe as a Numerical Simulation', they point out that current simulations of the universe – which do exist, but which are extremely weak and small – naturally put limits on physical laws. Technology Review explains that "the problem with all simulations is that the laws of physics, which appear continuous, have to be superimposed onto a discrete three dimensional lattice which advances in steps of time." What that basically means is that by just being a simulation, the computer would put limits on, for instance, the energy that particles can have within the program. These limits would be experienced by those living within the sim – and as it turns out, something which looks just like these limits do in fact exist. For instance, something known as the Greisen-Zatsepin-Kuzmin, or GZK cut off, is an apparent boundary of the energy that cosmic ray particles can have. This is caused by interaction with cosmic background radiation. But Beane and co's paper argues that the pattern of this rule mirrors what you might expect from a computer simulation. Naturally, at this point the science becomes pretty tricky to wade through – and we would advise you read the paper itself to try and get the full detail of the idea. But the basic impression is an intriguing one. Like a prisoner in a pitch-black cell, we may never be able to see the 'walls' of our prison — but through physics we may be able to reach out and touch them.
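As a rough illustration of the "limits" argument (this sketch is not taken from the paper itself): a simulation on a cubic spatial lattice of spacing a can only represent momenta up to the edge of the first Brillouin zone, so particle energies would be capped at roughly

$$E_{\max} \sim \frac{\pi \hbar c}{a},$$

which is why an observed high-energy cutoff, such as the GZK limit for cosmic rays, can be read in this picture as a bound on how fine any hypothetical underlying lattice would have to be.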
<urn:uuid:7e96bfe3-9834-41e9-99ee-8cba4861e68f>
3.484375
617
Personal Blog
Science & Tech.
46.665266
95,506,804
(1) Explain why a beaker filled with water at 4 degrees Celsius overflows if the temperature is decreased or increased.
(2) Mercury boils at a temperature of 357 degrees Celsius. How then can mercury thermometers be used to measure temperatures up to 500 degrees Celsius?
(3) Two thermometers are constructed in the same way except that one has a spherical bulb and the other has a cylindrical bulb. Which one will respond more quickly to temperature changes?
(1) This is due to the anomalous expansion of water. The maximum density of water occurs at 4 degrees Celsius. So the water expands whether it is heated above 4 deg C or cooled ...
The temperature change functions are solved. The anomalous expansion of water is determined. The solution provides explanations to the asked questions.
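A compact way to state the first answer (a sketch, not part of the original solution): since the density of water has its maximum near 4 °C, the volume of a fixed mass of water has a minimum there, so a temperature change in either direction increases the volume and the full beaker overflows.

$$V(T) = \frac{m}{\rho(T)}, \qquad \rho(T)\ \text{maximal at}\ T \approx 4\,^{\circ}\mathrm{C} \;\Rightarrow\; V(T)\ \text{minimal at}\ T \approx 4\,^{\circ}\mathrm{C}.$$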
<urn:uuid:1dfd6ef6-bd54-4ac6-b9d6-cce0c5f9a0a5>
4.0625
190
Q&A Forum
Science & Tech.
55.6125
95,506,810
|Standard Model of particle physics| |Quantum field theory| In the Standard Model of particle physics, the Higgs mechanism is essential to explain the generation mechanism of the property "mass" for gauge bosons. Without the Higgs mechanism, all bosons (one of the two class of particles, the other being fermions) would be considered massless, but measurements show that the W+, W−, and Z bosons actually have relatively large masses of around 80 GeV/c2. The Higgs field resolves this conundrum. The simplest description of the mechanism adds a quantum field (the Higgs field) that permeates all space, to the Standard Model. Below some extremely high temperature, the field causes spontaneous symmetry breaking during interactions. The breaking of symmetry triggers the Higgs mechanism, causing the bosons it interacts with to have mass. In the Standard Model, the phrase "Higgs mechanism" refers specifically to the generation of masses for the W±, and Z weak gauge bosons through electroweak symmetry breaking. The Large Hadron Collider at CERN announced results consistent with the Higgs particle on March 14, 2013, making it extremely likely that the field, or one like it, exists, and explaining how the Higgs mechanism takes place in nature. The mechanism was proposed in 1962 by Philip Warren Anderson, following work in the late 1950s on symmetry breaking in superconductivity and a 1960 paper by Yoichiro Nambu that discussed its application within particle physics. A theory able to finally explain mass generation without "breaking" gauge theory was published almost simultaneously by three independent groups in 1964: by Robert Brout and François Englert; by Peter Higgs; and by Gerald Guralnik, C. R. Hagen, and Tom Kibble. The Higgs mechanism is therefore also called the Brout–Englert–Higgs mechanism or Englert–Brout–Higgs–Guralnik–Hagen–Kibble mechanism, Anderson–Higgs mechanism, Anderson–Higgs-Kibble mechanism, Higgs–Kibble mechanism by Abdus Salam and ABEGHHK'tH mechanism [for Anderson, Brout, Englert, Guralnik, Hagen, Higgs, Kibble and 't Hooft] by Peter Higgs. On October 8, 2013, following the discovery at CERN's Large Hadron Collider of a new particle that appeared to be the long-sought Higgs boson predicted by the theory, it was announced that Peter Higgs and François Englert had been awarded the 2013 Nobel Prize in Physics (Englert's co-author Robert Brout had died in 2011 and the Nobel Prize is not usually awarded posthumously). - 1 Standard model - 2 History of research - 3 Examples - 4 See also - 5 References - 6 Further reading - 7 External links In the standard model, at temperatures high enough that electroweak symmetry is unbroken, all elementary particles are massless. At a critical temperature, the Higgs field becomes tachyonic; the symmetry is spontaneously broken by condensation, and the W and Z bosons acquire masses. (This is also known as electroweak symmetry breaking; EWSB.) Structure of the Higgs field In the standard model, the Higgs field is an SU(2) doublet (i.e. the standard representation with two complex components called isospin), which is a scalar under Lorentz transformations. Its (weak hypercharge) U(1) charge is 1. Under U(1) rotations, it is multiplied by a phase, which thus mixes the real and imaginary parts of the complex spinor into each other—combining to the standard two-component complex representation of the group U(2). 
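The explicit form of the doublet is not written out above; in standard notation (a conventional reconstruction, using the common normalisation in which the Higgs doublet carries weak hypercharge Y = 1) it reads

$$\Phi = \begin{pmatrix} \phi^{+} \\ \phi^{0} \end{pmatrix}, \qquad \langle \Phi \rangle_{0} = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ v \end{pmatrix},$$

with v ≈ 246 GeV setting the electroweak scale. The three would-be Goldstone components of this doublet are the degrees of freedom absorbed by the W± and Z bosons discussed in the following paragraphs.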
The Higgs field, through the interactions specified (summarized, represented, or even simulated) by its potential, induces spontaneous breaking of three out of the four generators ("directions") of the gauge group U(2). This is often written as SU(2) × U(1), (which is strictly speaking only the same on the level of infinitesimal symmetries) because the diagonal phase factor also acts on other fields in particular quarks. Three out of its four components would ordinarily amount to Goldstone bosons, if they were not coupled to gauge fields. However, after symmetry breaking, these three of the four degrees of freedom in the Higgs field mix with the three W and Z bosons ( ), and are only observable as components of these weak bosons, which are now massive; while the one remaining degree of freedom becomes the Higgs boson—a new scalar particle. The photon as the part that remains massless The gauge group of the electroweak part of the standard model is SU(2) × U(1). The group SU(2) is the group of all 2-by-2 unitary matrices with unit determinant; all the orthonormal changes of coordinates in a complex two dimensional vector space. Rotating the coordinates so that the second basis vector points in the direction of the Higgs boson makes the vacuum expectation value of H the spinor (0, v). The generators for rotations about the x, y, and z axes are by half the Pauli matrices σx, σy, and σz, so that a rotation of angle θ about the z-axis takes the vacuum to While the Tx and Ty generators mix up the top and bottom components of the spinor, the Tz rotations only multiply each by opposite phases. This phase can be undone by a U(1) rotation of angle 1/θ. Consequently, under both an SU(2) Tz-rotation and a U(1) rotation by an amount 1/θ, the vacuum is invariant. This combination of generators defines the unbroken part of the gauge group, where Q is the electric charge, Tz is the generator of rotations around the z-axis in the SU(2) and Y is the hypercharge generator of the U(1). This combination of generators (a z rotation in the SU(2) and a simultaneous U(1) rotation by half the angle) preserves the vacuum, and defines the unbroken gauge group in the standard model, namely the electric charge group. The part of the gauge field in this direction stays massless, and amounts to the physical photon. Consequences for fermions In spite of the introduction of spontaneous symmetry breaking, the mass terms preclude chiral gauge invariance. For these fields, the mass terms should always be replaced by a gauge-invariant "Higgs" mechanism. One possibility is some kind of Yukawa coupling (see below) between the fermion field ψ and the Higgs field Φ, with unknown couplings Gψ, which after symmetry breaking (more precisely: after expansion of the Lagrange density around a suitable ground state) again results in the original mass terms, which are now, however (i.e., by introduction of the Higgs field) written in a gauge-invariant way. The Lagrange density for the Yukawa interaction of a fermion field ψ and the Higgs field Φ is where again the gauge field A only enters Dμ (i.e., it is only indirectly visible). The quantities γμ are the Dirac matrices, and Gψ is the already-mentioned Yukawa coupling parameter. Now the mass-generation follows the same principle as above, namely from the existence of a finite expectation value . Again, this is crucial for the existence of the property mass. History of research Spontaneous symmetry breaking offered a framework to introduce bosons into relativistic quantum field theories. 
However, according to Goldstone's theorem, these bosons should be massless. The only observed particles which could be approximately interpreted as Goldstone bosons were the pions, which Yoichiro Nambu related to chiral symmetry breaking. A similar problem arises with Yang–Mills theory (also known as non-Abelian gauge theory), which predicts massless spin-1 gauge bosons. Massless weakly-interacting gauge bosons lead to long-range forces, which are only observed for electromagnetism and the corresponding massless photon. Gauge theories of the weak force needed a way to describe massive gauge bosons in order to be consistent. The idea was proposed in 1961 by Julian Schwinger and who was referenced in Philip Warren Anderson's paper on an application of the idea in non-relativistic field theory, who discussed its consequences for particle physics but did not work out an explicit relativistic model. The relativistic model was developed in 1964 by three independent groups – Robert Brout and François Englert; Peter Higgs; and Gerald Guralnik, Carl Richard Hagen, and Tom Kibble. Slightly later, in 1965, but independently from the other publications the mechanism was also proposed by Alexander Migdal and Alexander Polyakov, at that time Soviet undergraduate students. However, the paper was delayed by the Editorial Office of JETP, and was published only in 1966. The mechanism is closely analogous to phenomena previously discovered by Yoichiro Nambu involving the "vacuum structure" of quantum fields in superconductivity. A similar but distinct effect (involving an affine realization of what is now recognized as the Higgs field), known as the Stueckelberg mechanism, had previously been studied by Ernst Stueckelberg. These physicists discovered that when a gauge theory is combined with an additional field that spontaneously breaks the symmetry group, the gauge bosons can consistently acquire a nonzero mass. In spite of the large values involved (see below) this permits a gauge theory description of the weak force, which was independently developed by Steven Weinberg and Abdus Salam in 1967. Higgs's original article presenting the model was rejected by Physics Letters. When revising the article before resubmitting it to Physical Review Letters, he added a sentence at the end, mentioning that it implies the existence of one or more new, massive scalar bosons, which do not form complete representations of the symmetry group; these are the Higgs bosons. The three papers by Brout and Englert; Higgs; and Guralnik, Hagen, and Kibble were each recognized as "milestone letters" by Physical Review Letters in 2008. While each of these seminal papers took similar approaches, the contributions and differences among the 1964 PRL symmetry breaking papers are noteworthy. All six physicists were jointly awarded the 2010 J. J. Sakurai Prize for Theoretical Particle Physics for this work. Benjamin W. Lee is often credited with first naming the "Higgs-like" mechanism, although there is debate around when this first occurred. One of the first times the Higgs name appeared in print was in 1972 when Gerardus 't Hooft and Martinus J. G. Veltman referred to it as the "Higgs–Kibble mechanism" in their Nobel winning paper. The Higgs mechanism occurs whenever a charged field has a vacuum expectation value. In the nonrelativistic context, this is the Landau model of a charged Bose–Einstein condensate, also known as a superconductor. 
In the relativistic condensate, the condensate is a scalar field, and is relativistically invariant. The Higgs mechanism is a type of superconductivity which occurs in the vacuum. It occurs when all of space is filled with a sea of particles which are charged, or, in field language, when a charged field has a nonzero vacuum expectation value. Interaction with the quantum fluid filling the space prevents certain forces from propagating over long distances (as it does in a superconducting medium; e.g., in the Ginzburg–Landau theory). A superconductor expels all magnetic fields from its interior, a phenomenon known as the Meissner effect. This was mysterious for a long time, because it implies that electromagnetic forces somehow become short-range inside the superconductor. Contrast this with the behavior of an ordinary metal. In a metal, the conductivity shields electric fields by rearranging charges on the surface until the total field cancels in the interior. But magnetic fields can penetrate to any distance, and if a magnetic monopole (an isolated magnetic pole) is surrounded by a metal the field can escape without collimating into a string. In a superconductor, however, electric charges move with no dissipation, and this allows for permanent surface currents, not just surface charges. When magnetic fields are introduced at the boundary of a superconductor, they produce surface currents which exactly neutralize them. The Meissner effect is due to currents in a thin surface layer, whose thickness, the London penetration depth, can be calculated from a simple model (the Ginzburg–Landau theory). This simple model treats superconductivity as a charged Bose–Einstein condensate. Suppose that a superconductor contains bosons with charge q. The wavefunction of the bosons can be described by introducing a quantum field, ψ, which obeys the Schrödinger equation as a field equation (in units where the reduced Planck constant, ħ, is set to 1): The operator ψ(x) annihilates a boson at the point x, while its adjoint ψ† creates a new boson at the same point. The wavefunction of the Bose–Einstein condensate is then the expectation value ψ of ψ(x), which is a classical function that obeys the same equation. The interpretation of the expectation value is that it is the phase that one should give to a newly created boson so that it will coherently superpose with all the other bosons already in the condensate. When there is a charged condensate, the electromagnetic interactions are screened. To see this, consider the effect of a gauge transformation on the field. A gauge transformation rotates the phase of the condensate by an amount which changes from point to point, and shifts the vector potential by a gradient: When there is no condensate, this transformation only changes the definition of the phase of ψ at every point. But when there is a condensate, the phase of the condensate defines a preferred choice of phase. The condensate wave function can be written as where ρ is real amplitude, which determines the local density of the condensate. If the condensate were neutral, the flow would be along the gradients of θ, the direction in which the phase of the Schrödinger field changes. If the phase θ changes slowly, the flow is slow and has very little energy. But now θ can be made equal to zero just by making a gauge transformation to rotate the phase of the field. 
The energy of slow changes of phase can be calculated from the Schrödinger kinetic energy, and taking the density of the condensate ρ to be constant, Fixing the choice of gauge so that the condensate has the same phase everywhere, the electromagnetic field energy has an extra term, When this term is present, electromagnetic interactions become short-ranged. Every field mode, no matter how long the wavelength, oscillates with a nonzero frequency. The lowest frequency can be read off from the energy of a long wavelength A mode, This is a harmonic oscillator with frequency The quantity |ψ|2 (= ρ2) is the density of the condensate of superconducting particles. In an actual superconductor, the charged particles are electrons, which are fermions not bosons. So in order to have superconductivity, the electrons need to somehow bind into Cooper pairs. The charge of the condensate q is therefore twice the electron charge −e. The pairing in a normal superconductor is due to lattice vibrations, and is in fact very weak; this means that the pairs are very loosely bound. The description of a Bose–Einstein condensate of loosely bound pairs is actually more difficult than the description of a condensate of elementary particles, and was only worked out in 1957 by Bardeen, Cooper and Schrieffer in the famous BCS theory. Abelian Higgs mechanism Gauge invariance means that certain transformations of the gauge field do not change the energy at all. If an arbitrary gradient is added to A, the energy of the field is exactly the same. This makes it difficult to add a mass term, because a mass term tends to push the field toward the value zero. But the zero value of the vector potential is not a gauge invariant idea. What is zero in one gauge is nonzero in another. So in order to give mass to a gauge theory, the gauge invariance must be broken by a condensate. The condensate will then define a preferred phase, and the phase of the condensate will define the zero value of the field in a gauge-invariant way. The gauge-invariant definition is that a gauge field is zero when the phase change along any path from parallel transport is equal to the phase difference in the condensate wavefunction. The condensate value is described by a quantum field with an expectation value, just as in the Ginzburg-Landau model. In order for the phase of the vacuum to define a gauge, the field must have a phase (also referred to as 'to be charged'). In order for a scalar field Φ to have a phase, it must be complex, or (equivalently) it should contain two fields with a symmetry which rotates them into each other. The vector potential changes the phase of the quanta produced by the field when they move from point to point. In terms of fields, it defines how much to rotate the real and imaginary parts of the fields into each other when comparing field values at nearby points. The only renormalizable model where a complex scalar field Φ acquires a nonzero value is the Mexican-hat model, where the field energy has a minimum away from zero. The action for this model is which results in the Hamiltonian The first term is the kinetic energy of the field. The second term is the extra potential energy when the field varies from point to point. The third term is the potential energy when the field has any given magnitude. This potential energy, V(z, Φ) = λ(|z|2 − Φ2)2, has a graph which looks like a Mexican hat, which gives the model its name. 
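The action and Hamiltonian referred to just above are missing from the text; a conventional reconstruction, consistent with the term-by-term description and the potential V(z, Φ) = λ(|z|² − Φ²)² quoted above (natural units), is

$$S(z) = \int \mathrm{d}^{4}x \left[ \partial^{\mu}\bar{z}\,\partial_{\mu} z - \lambda \left( |z|^{2} - \Phi^{2} \right)^{2} \right], \qquad H = \int \mathrm{d}^{3}x \left[ |\dot{z}|^{2} + |\nabla z|^{2} + \lambda \left( |z|^{2} - \Phi^{2} \right)^{2} \right].$$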
In particular, the minimum energy value is not at z = 0, but on the circle of points where the magnitude of z is Φ. When the field Φ(x) is not coupled to electromagnetism, the Mexican-hat potential has flat directions. Starting in any one of the circle of vacua and changing the phase of the field from point to point costs very little energy. Mathematically, if with a constant prefactor, then the action for the field θ(x), i.e., the "phase" of the Higgs field Φ(x), has only derivative terms. This is not a surprise. Adding a constant to θ(x) is a symmetry of the original theory, so different values of θ(x) cannot have different energies. This is an example of Goldstone's theorem: spontaneously broken continuous symmetries normally produce massless excitations. The Abelian Higgs model is the Mexican-hat model coupled to electromagnetism: The classical vacuum is again at the minimum of the potential, where the magnitude of the complex field φ is equal to Φ. But now the phase of the field is arbitrary, because gauge transformations change it. This means that the field θ(x) can be set to zero by a gauge transformation, and does not represent any actual degrees of freedom at all. Furthermore, choosing a gauge where the phase of the vacuum is fixed, the potential energy for fluctuations of the vector field is nonzero. So in the Abelian Higgs model, the gauge field acquires a mass. To calculate the magnitude of the mass, consider a constant value of the vector potential A in the x-direction in the gauge where the condensate has constant phase. This is the same as a sinusoidally varying condensate in the gauge where the vector potential is zero. In the gauge where A is zero, the potential energy density in the condensate is the scalar gradient energy: This energy is the same as a mass term 1/m2A2 where m = qΦ. Non-Abelian Higgs mechanism The Non-Abelian Higgs model has the following action where now the non-Abelian field A is contained in the covariant derivative D and in the tensor components and (the relation between A and those components is well-known from the Yang–Mills theory). It is exactly analogous to the Abelian Higgs model. Now the field φ is in a representation of the gauge group, and the gauge covariant derivative is defined by the rate of change of the field minus the rate of change from parallel transport using the gauge field A as a connection. Again, the expectation value of Φ defines a preferred gauge where the vacuum is constant, and fixing this gauge, fluctuations in the gauge field A come with a nonzero energy cost. Depending on the representation of the scalar field, not every gauge field acquires a mass. A simple example is in the renormalizable version of an early electroweak model due to Julian Schwinger. In this model, the gauge group is SO(3) (or SU(2) − there are no spinor representations in the model), and the gauge invariance is broken down to U(1) or SO(2) at long distances. To make a consistent renormalizable version using the Higgs mechanism, introduce a scalar field φa which transforms as a vector (a triplet) of SO(3). If this field has a vacuum expectation value, it points in some direction in field space. Without loss of generality, one can choose the z-axis in field space to be the direction that φ is pointing, and then the vacuum expectation value of φ is (0, 0, A), where A is a constant with dimensions of mass (). Rotations around the z-axis form a U(1) subgroup of SO(3) which preserves the vacuum expectation value of φ, and this is the unbroken gauge group. 
Rotations around the x and y-axis do not preserve the vacuum, and the components of the SO(3) gauge field which generate these rotations become massive vector mesons. There are two massive W mesons in the Schwinger model, with a mass set by the mass scale A, and one massless U(1) gauge boson, similar to the photon. The Schwinger model predicts magnetic monopoles at the electroweak unification scale, and does not predict the Z meson. It doesn't break electroweak symmetry properly as in nature. But historically, a model similar to this (but not using the Higgs mechanism) was the first in which the weak force and the electromagnetic force were unified. Affine Higgs mechanism Ernst Stueckelberg discovered a version of the Higgs mechanism by analyzing the theory of quantum electrodynamics with a massive photon. Effectively, Stueckelberg's model is a limit of the regular Mexican hat Abelian Higgs model, where the vacuum expectation value H goes to infinity and the charge of the Higgs field goes to zero in such a way that their product stays fixed. The mass of the Higgs boson is proportional to H, so the Higgs boson becomes infinitely massive and decouples, so is not present in the discussion. The vector meson mass, however, equals to the product eH, and stays finite. The interpretation is that when a U(1) gauge field does not require quantized charges, it is possible to keep only the angular part of the Higgs oscillations, and discard the radial part. The angular part of the Higgs field θ has the following gauge transformation law: The gauge covariant derivative for the angle (which is actually gauge invariant) is: In order to keep θ fluctuations finite and nonzero in this limit, θ should be rescaled by H, so that its kinetic term in the action stays normalized. The action for the theta field is read off from the Mexican hat action by substituting . since eH is the gauge boson mass. By making a gauge transformation to set θ = 0, the gauge freedom in the action is eliminated, and the action becomes that of a massive vector field: To have arbitrarily small charges requires that the U(1) is not the circle of unit complex numbers under multiplication, but the real numbers R under addition, which is only different in the global topology. Such a U(1) group is non-compact. The field θ transforms as an affine representation of the gauge group. Among the allowed gauge groups, only non-compact U(1) admits affine representations, and the U(1) of electromagnetism is experimentally known to be compact, since charge quantization holds to extremely high accuracy. The Higgs condensate in this model has infinitesimal charge, so interactions with the Higgs boson do not violate charge conservation. The theory of quantum electrodynamics with a massive photon is still a renormalizable theory, one in which electric charge is still conserved, but magnetic monopoles are not allowed. For non-Abelian gauge theory, there is no affine limit, and the Higgs oscillations cannot be too much more massive than the vectors. - Electroweak interaction - Electromagnetic mass - Higgs bundle - Mass generation - Quantum triviality - Yang–Mills–Higgs equations - G. Bernardi, M. Carena, and T. Junk: "Higgs bosons: theory and searches", Reviews of Particle Data Group: Hypothetical particles and Concepts, 2007, http://pdg.lbl.gov/2008/reviews/higgs_s055.pdf - P. W. Anderson (1962). "Plasmons, Gauge Invariance, and Mass". Physical Review. 130 (1): 439–442. Bibcode:1963PhRv..130..439A. doi:10.1103/PhysRev.130.439. - F. Englert; R. 
Brout (1964). "Broken Symmetry and the Mass of Gauge Vector Mesons". Physical Review Letters. 13 (9): 321–323. Bibcode:1964PhRvL..13..321E. doi:10.1103/PhysRevLett.13.321. - Peter W. Higgs (1964). "Broken Symmetries and the Masses of Gauge Bosons". Physical Review Letters. 13 (16): 508–509. Bibcode:1964PhRvL..13..508H. doi:10.1103/PhysRevLett.13.508. - G. S. Guralnik; C. R. Hagen; T. W. B. Kibble (1964). "Global Conservation Laws and Massless Particles". Physical Review Letters. 13 (20): 585–587. Bibcode:1964PhRvL..13..585G. doi:10.1103/PhysRevLett.13.585. - Gerald S. Guralnik (2009). "The History of the Guralnik, Hagen and Kibble development of the Theory of Spontaneous Symmetry Breaking and Gauge Particles". International Journal of Modern Physics. A24 (14): 2601–2627. arXiv: . Bibcode:2009IJMPA..24.2601G. doi:10.1142/S0217751X09045431. - History of Englert–Brout–Higgs–Guralnik–Hagen–Kibble Mechanism. Scholarpedia. - "Englert–Brout–Higgs–Guralnik–Hagen–Kibble Mechanism". Scholarpedia. Retrieved 2012-06-16. - Liu, G. Z.; Cheng, G. (2002). "Extension of the Anderson-Higgs mechanism". Physical Review B. 65 (13): 132513. arXiv: . Bibcode:2002PhRvB..65m2513L. doi:10.1103/PhysRevB.65.132513. - Matsumoto, H.; Papastamatiou, N. J.; Umezawa, H.; Vitiello, G. (1975). "Dynamical rearrangement in the Anderson-Higgs-Kibble mechanism". Nuclear Physics B. 97: 61. Bibcode:1975NuPhB..97...61M. doi:10.1016/0550-3213(75)90215-1. - Close, Frank (2011). The Infinity Puzzle: Quantum Field Theory and the Hunt for an Orderly Universe. Oxford: Oxford University Press. ISBN 978-0-19-959350-7. - "Press release from Royal Swedish Academy of Sciences" (PDF). 8 October 2013. Retrieved 8 October 2013. - "Guralnik, G S; Hagen, C R and Kibble, T W B (1967). Broken Symmetries and the Goldstone Theorem. Advances in Physics, vol. 2" (PDF). - Schwinger, Julian (1961). "Gauge Invariance and Mass". Phys. Rev. 125 (1): 397–8. Bibcode:1962PhRv..125..397S. doi:10.1103/PhysRev.125.397. - A.M. Polyakov, A View From The Island, 1992 - Farhi, E., & Jackiw, R. W. (1982). Dynamical Gauge Symmetry Breaking: A Collection Of Reprints. Singapore: World Scientific Pub. Co. - Frank Close. "The Infinity Puzzle." 2011, p.158 - Norman Dombey, "Higgs Boson: Credit Where It's Due". The Guardian, July 6, 2012 - Cern Courier, Mar 1, 2006 - Sean Carrol, "The Particle At The End Of The Universe: The Hunt For The Higgs And The Discovery Of A New World", 2012, p.228 - A. A. Migdal and A. M. Polyakov, "Spontaneous Breakdown of Strong Interaction Symmetry and Absence of Massless Particles", JETP 51, 135, July 1966 (English translation: Soviet Physics JETP, 24, 1, January 1967) - Nambu, Y (1960). "Quasiparticles and Gauge Invariance in the Theory of Superconductivity". Physical Review. 117 (3): 648–663. Bibcode:1960PhRv..117..648N. doi:10.1103/PhysRev.117.648. - Higgs, Peter (2007). "Prehistory of the Higgs boson". Comptes Rendus Physique. 8 (9): 970–972. Bibcode:2007CRPhy...8..970H. doi:10.1016/j.crhy.2006.12.006. - "Physical Review Letters – 50th Anniversary Milestone Papers". Prl.aps.org. Retrieved 2012-06-16. - "American Physical Society – J. J. Sakurai Prize Winners". Aps.org. Retrieved 2012-06-16. - Department of Physics and Astronomy. "Rochester's Hagen Sakurai Prize Announcement". Pas.rochester.edu. Archived from the original on 2008-04-16. Retrieved 2012-06-16. - FermiFred (2010-02-15). "C.R. Hagen discusses naming of Higgs Boson in 2010 Sakurai Prize Talk". Youtube.com. Retrieved 2012-06-16. - Sample, Ian (2009-05-29). 
"Anything but the God particle by Ian Sample". Guardian. Retrieved 2012-06-16. - G. 't Hooft; M. Veltman (1972). "Regularization and Renormalization of Gauge Fields". Nuclear Physics B. 44 (1): 189–219. Bibcode:1972NuPhB..44..189T. doi:10.1016/0550-3213(72)90279-9. - "Regularization and Renormalization of Gauge Fields by t'Hooft and Veltman (PDF)" (PDF). Archived from the original (PDF) on 2012-07-07. Retrieved 2012-06-16. - Goldstone, J. (1961). "Field theories with " Superconductor " solutions". Il Nuovo Cimento. 19: 154–164. Bibcode:1961NCim...19..154G. doi:10.1007/BF02812722. - Stueckelberg, E. C. G. (1938), "Die Wechselwirkungskräfte in der Elektrodynamik und in der Feldtheorie der Kräfte", Helv. Phys. Acta. 11: 225 - Schumm, Bruce A. (2004) Deep Down Things. Johns Hopkins Univ. Press. Chpt. 9. - Englert-Brout-Higgs-Guralnik-Hagen-Kibble mechanism Tom W B Kibble Scholarpedia, 4(1):6441. doi:10.4249/scholarpedia.6441 - For a pedagogic introduction to electroweak symmetry breaking with step by step derivations, not found in texts, of many key relations, see http://www.quantumfieldtheory.info/Electroweak_Sym_breaking.pdf - Guralnik, G.S.; Hagen, C.R.; Kibble, T.W.B. (1964). "Global Conservation Laws and Massless Particles". Physical Review Letters. 13 (20): 585–87. Bibcode:1964PhRvL..13..585G. doi:10.1103/PhysRevLett.13.585. - Mark D. Roberts (1999) "A Generalized Higgs Model" - 2010 Sakurai Prize - All Events - YouTube - From BCS to the LHC - CERN Courier Jan 21, 2008, Steven Weinberg, University of Texas at Austin. - on YouTube 06-11-2009 - Gerry Guralnik speaks at Brown University about the 1964 PRL papers - Guralnik, Gerald (2013). "Heretical Ideas that Provided the Cornerstone for the Standard Model of Particle Physics". SPG MITTEILUNGEN March 2013, No. 39, (p. 14) - Steven Weinberg Praises Teams for Higgs Boson Theory - Physical Review Letters – 50th Anniversary Milestone Papers - Imperial College London on PRL 50th Anniversary Milestone Papers - Englert–Brout–Higgs–Guralnik–Hagen–Kibble Mechanism on Scholarpedia - History of Englert–Brout–Higgs–Guralnik–Hagen–Kibble Mechanism on Scholarpedia - The Hunt for the Higgs at Tevatron - on YouTube. A lecture with UCSD physicist Kim Griest (43 minutes)
<urn:uuid:8f36e814-0bb9-43bf-84a0-84ade671afdc>
3.484375
7,574
Knowledge Article
Science & Tech.
58.709093
95,506,812
The launch of NASA's planet-hunting satellite TESS has been postponed until Wednesday. The satellite had been scheduled to launch from Cape Canaveral, Florida, on Monday evening. SpaceX, which is providing the launch vehicle and launchpad for TESS, tweeted Monday afternoon that it is "standing down today to conduct additional GNC [guidance navigation control] analysis, and teams are now working towards a targeted launch of @NASA_TESS on Wednesday, April 18." NASA followed up with a tweet that "launch teams are standing down today to conduct additional Guidance Navigation and Control analysis. The @NASA_TESS spacecraft is in excellent health and remains ready for launch on the new targeted date of Wednesday, April 18. Updates: blogs.nasa.gov/tess." The Transiting Exoplanet Survey Satellite is NASA's next mission in the search for exoplanets, or those that are outside our solar system, and TESS will be on the lookout for planets that could support life. Once it launches, TESS will use its fuel to reach orbit around the Earth, with a gravity assist from the moon. That will enable it to have a long-term mission beyond its two-year objective. "The Moon and the satellite are in a sort of dance," Joel Villasenor, instrument scientist for TESS at the Massachusetts Institute of Technology, said in a statement. "The Moon pulls the satellite on one side, and by the time TESS completes one orbit, the Moon is on the other side tugging in the opposite direction. The overall effect is the Moon's pull is evened out, and it's a very stable configuration over many years. Nobody's done this before, and I suspect other programs will try to use this orbit later on." Sixty days after TESS establishes an orbit around Earth, after instrument tests, the two-year mission will officially begin. What will TESS do? TESS will pick up the search for exoplanets as the Kepler Space Telescope runs out of fuel. Kepler, which has discovered more than 4,500 potential planets and confirmed exoplanets, launched in 2009. After mechanical failure in 2013, it entered a new phase of campaigns to survey other areas of the sky for exoplanets, called the K2 mission. This enabled researchers to discover even more exoplanets, understand the evolution of stars and gain insight about supernovae and black holes. Soon, Kepler's mission will end, and it will be abandoned in space, orbiting the sun and never getting any closer to Earth than the moon. TESS will survey an area 400 times larger than what Kepler observed. This includes 200,000 of the brightest nearby stars. Over the course of two years, the four wide-field cameras on board will stare at different sectors of the sky for days at a time. TESS will begin by looking at the Southern Hemisphere sky for the first year and move to the Northern Hemisphere in the second year. It can accomplish this lofty goal by dividing the sky into 13 sections and looking at each one for 27 days before moving on to the next. The satellite itself is not much bigger than a refrigerator. The cameras sit on top, beneath a cone that will protect them from radiation. TESS will look for exoplanets using the transit method, observing slight dips in the brightness of stars as planets pass in front of them. Bright stars allow for easier followup study through ground- and space-based telescopes. "TESS is helping us explore our place in the universe," said Paul Hertz, Astrophysics Division director at NASA Headquarters. "Until 20 years ago, we didn't know of any planets beyond our own solar system. 
We've expanded our understanding of our place in the universe, and TESS will help us keep expanding." The cameras can detect light across a broad range of wavelengths, up to infrared. This means TESS will be able to look at many nearby small, cool red dwarf stars and see whether there are exoplanets around them. Red dwarf stars have been found to host exoplanets within the habitable zone, and many astronomers believe they could be the best candidate for hosting Earth-size exoplanets with conditions suitable for life. What makes TESS different NASA expects TESS to allow for the cataloging of more than 1,500 exoplanets, but it has the potential to find thousands. Of these, officials anticipate, 300 will be Earth-size exoplanets or double-Earth-size Super Earths. Those could be the best candidates for supporting life outside our solar system. Like Earth, they are small, rocky and usually within the habitable zone of their stars, meaning liquid water can exist on their surface. "One of the biggest questions in exoplanet exploration is: If an astronomer finds a planet in a star's habitable zone, will it be interesting from a biologist's point of view?" said George Ricker, TESS principal investigator at the Massachusetts Institute of Technology's Kavli Institute for Astrophysics and Space Research in Cambridge. "We expect TESS will discover a number of planets whose atmospheric compositions, which hold potential clues to the presence of life, could be precisely measured by future observers." These exoplanets will be studied so that NASA can determine which are the best targets for missions like the James Webb Space Telescope. That telescope, whose launch was just pushed back to 2020, would be able to characterize the details and atmospheres of exoplanets in ways scientists have not been able to do. "We learned from Kepler that there are more planets than stars in our sky, and now TESS will open our eyes to the variety of planets around some of the closest stars," Hertz said. "TESS will cast a wider net than ever before for enigmatic worlds whose properties can be probed by NASA's upcoming James Webb Space Telescope and other missions." NASA believes that TESS will build on Kepler's momentum and open the study of exoplanets in unprecedented ways. "TESS is opening a door for a whole new kind of study," said Stephen Rinehart, TESS project scientist at NASA's Goddard Space Flight Center. "We're going to be able study individual planets and start talking about the differences between planets. The targets TESS finds are going to be fantastic subjects for research for decades to come. It's the beginning of a new era of exoplanet research. I don't think we know everything TESS is going to accomplish. To me, the most exciting part of any mission is the unexpected result, the one that nobody saw coming." The search for life More than a decade ago, Massachusetts Institute of Technology scientists first proposed the idea of a mission like TESS. They have been instrumental in bringing the mission from idea to reality and will continue to be involved once the mission launches. A science team devoted to TESS at MIT aims to measure the masses of at least 50 small exoplanets that have a radius of less than four times that of Earth -- an ideal dimension that could suggest habitability. "Mass is a defining planetary characteristic," said Sara Seager, TESS deputy director of science at MIT. 
"If you just know that a planet is twice the size of Earth, it could be a lot of things: a rocky world with a thin atmosphere, or what we call a 'mini-Neptune' -- a rocky world with a giant gas envelope, where it would be a huge greenhouse blanket, and there would be no life on the surface. So mass and size together give us an average planet density, which tells us a huge amount about what the planet is." TESS Objects of Interest, an MIT-led effort, will look for objects in TESS' data that could be exoplanets and catalog them. "TESS is kind of like a scout," said Natalia Guerrero, deputy manager of TESS Objects of Interest. "We're on this scenic tour of the whole sky, and in some ways we have no idea what we will see. It's like we're making a treasure map: Here are all these cool things. Now, go after them." TESS data will also be publicly available so that anyone can download them and search for exoplanets. A data pipeline has been established so that TESS can fulfill its mission. It will collect about 27 gigabytes per day -- that's about 6,500 song files -- and send data back every two weeks. NASA's Pleiades, an incredibly powerful supercomputer, will be able to keep up and process the 10 billion pixels over three to five days. The more people that look through the data, the better, the scientists believe. This could be how planets that support life are found. "There's no science that will tell us life is out there right now, except that small rocky planets appear to be incredibly common," Seager said. "They appear to be everywhere we look. So it's got to be there somewhere." TESS is NASA's latest planet-hunting satellite It is expected to find thousands of exoplanets by surveying 85% of the sky - Launch of NASA's planet-hunting satellite TESS postponed - Meet TESS, the satellite that will find thousands of planets - NASA launches mission to Mars - Eight planets found orbiting distant star, NASA says - This little satellite will investigate a curious star and its planet - SpaceX launches demo satellites for its high-speed internet project - Amateur astronomer discovers NASA satellite that's been lost for 12 years - NASA aids Kilauea disaster response efforts using satellite imagery, research aircraft - NASA 'goes for gold' Thursday with successful mission launch - SpaceX to launch demo satellites for its high-speed internet project
<urn:uuid:34008759-a25d-42eb-bd4c-74919bed914f>
2.765625
2,012
News Article
Science & Tech.
47.68432
95,506,822
Earthquakes—several in the range of 5.0 and higher—are still rocking the coast of Japan. But the aftershocks are dropping off, both in strength and frequency. The US Geological Survey has a nice explanation of aftershocks ... and foreshocks. Key takeaway—earthquakes really shouldn't be thought of as a single event. Instead, "an" earthquake is really a swarm of earthquakes. Aftershocks usually occur geographically near the main shock. The stress on the main shock's fault changes drastically during the main shock and that fault produces most of the aftershocks. Sometimes the change in stress caused by the main shock is great enough to trigger aftershocks on other, nearby faults, and for a very large main shock sometimes even farther away. As a rule of thumb, we call earthquakes aftershocks if they are at a distance from the main shock's fault no greater than the length of that fault. The automatic system keeps track of where aftershocks have occurred, and when enough aftershocks have been recorded to pinpoint the more and less active locations, the system adjusts the probabilities on the map to reflect those local variations. I spent the last few days fighting off a mouse infestation in our RV. So far I’ve trapped and tossed six of the furry little bastards out on their asses. As I began the search for where they were getting into our rig, yesterday, I got to wondering how much space they can actually squeeze […] The little pink-edged ferns above are Azolla filiculoides, and they’re smaller than a fingernail. Scientists just made it the first fern to get its genome sequenced because of its potential for fertilizing and even cooling the planet. Fifty million years ago, it was so abundant as ocean blooms that it helped cool the earth’s atmosphere. […] When a deaf Czech girl had her genes tested, researchers were surprised to find two sets of her father’s genome spliced, leaving almost none of her mother’s genome. Only about 25 girls and zero boys have ever been found with this trait. Summer’s here, which brings not only warmer weather but also the unsettling realization that the year is more than halfway over. So, for those who weren’t as productive as they would have liked during the first half of 2018, we’ve rounded up 5 skill course bundles you can start learning today to help you finish […] It’s good to be proactive, but when it comes to preparing for an emergency situation, one of the most important items you can pack is a flashlight. After all, whatever else you include in your kit won’t be of much use if you can’t see what you’re doing. The Viper 1000-Lumen Tactical Flashlights not only […] Chances are you took a handful of language classes in high school, and aside from a smattering of conjugations and vocabulary words, the only things you likely remember are the dry, rehearsed sentences that did little to make you speak like a true native. If you’re still hoping to learn a new language but want […]
<urn:uuid:bef983eb-3709-4b6e-b599-fe312e923f62>
2.640625
658
Content Listing
Science & Tech.
55.986628
95,506,874
+44 1803 865913 Edited By: Jorge M Vivanco and Tiffany Weir The mystique of the rain forest has captured the imaginations of generations of young people, explorers, authors, and biologists. It is a delicate ecosystem whose myriad sounds and smells, whose vibrancy of life, is balanced by constant cycles of death and decay. It is a place of fierce competition where unusual partnerships are forged and creative survival strategies are the norm. In this book, you will meet the scientific pioneers who first attempted to quantify and understand the vast diversity of these tropical forests, as well as their successors, who utilize modern tools and technologies to dissect the chemical nature of rain forest interactions. This book provides a general background on biodiversity and the study of chemical ecology before moving into specific chemical examples of insect defenses and microbial communication. It finishes with first-hand accounts of the trials and tribulations of a canopy biology pioneer and a rain forest research novice, while assessing the state of modern tropical research, its importance to humanity, and the ecological, political, and ethical issues that need to be tackled in order to move the field forward. Biodiversity.- Chemical Ecology: Definition and Famous Examples.- Chemical Defenses of Insects: A Rich Resource for Chemical Biology in the Tropics.- Defensive Behaviors in Leaf Beetles: From the Unusual to the Weird.- Microbes: A New Frontier in Tropical Chemical Biology.- Out on a Limb - True Confessions of a Bug Detective.- So, You Want to do Research in the Rainforest? There are currently no reviews for this book. Be the first to review this book! Your orders support book donation projects Vastly superior to the Amazon offering. Recommended unreservedly. Search and browse over 110,000 wildlife and science products Multi-currency. Secure worldwide shipping Wildlife, science and conservation since 1985
<urn:uuid:0be6641a-4887-417a-aafe-3f20a6116bb2>
2.578125
383
Product Page
Science & Tech.
31.530721
95,506,878
The JWST NIRCam coronagraphic target acquisition (TA) positions the bright "host" on the center of the coronagraphic mask. The goal of coronagraphic target acquisition (TA) with NIRCam is to accurately align an astronomical point source—the "host"—on a coronagraphic mask (occulter). The purpose of PSF subtraction is to achieve limiting contrast between a bright host and the faintest detectable "companion," which is the main source of scientific interest; details about this for NIRCam are available in this article: NIRCam-Specific Treatment of Limiting Contrast. The companion may be an extended source, such as a circumstellar disk, or a point source, such as an exoplanet. The PSF reference image may be a composite of multiple images obtained after pointing changes—either a roll or an offset. To achieve PSF subtraction, the reference image is scaled and subtracted from science images.
Coronagraphic target acquisition
In order for the coronagraph to achieve maximum suppression of unwanted host light, a small angle maneuver (SAM) must place the target accurately on the center of the coronagraphic mask (occulter). A "target" may be any of three types:
- (1) a "host," meaning a bright, point source that may harbor a "companion" feature of primary scientific interest, such as an extrasolar planet, circumstellar disk, or quasar feeding zone;
- (2) a "PSF reference," meaning a generic, bright, point source, observed to document the PSF, particularly in the wings and outside the inner working angle (IWA); or
- (3) a "reacquisition" of a previously observed target that must now be reacquired to reduce pointing errors, which may have been introduced, for example, by a roll maneuver. Some activities do not call for a subsequent reacquisition, such as rotating the filter wheel to change the filter.
The first phase of coronagraphic target acquisition (TA) involves an initial slew of the telescope to place the target on a 4" × 4" subarray in the ~10″ vicinity of the selected coronagraphic mask. If the target is brighter than K ≈ 7, the subarray is located behind a square of neutral density (ND, nominally ND = 3). If fainter than K ≈ 7, the target is positioned behind a nearby, clear (ND = 0) region of the coronagraphic optical mount (COM). The first phase of TA is complete when the detector obtains an image of the target on an appropriate region of the COM (ND = 0 or 3) near the specified coronagraphic mask.
In the second TA phase, an on-board centroiding algorithm estimates the target's precise position on the detector and computes the distance and direction of the small angle maneuver (SAM) needed to move it to the center of the specified coronagraphic mask. This phase includes rejecting cosmic rays, subtracting the background, and flat fielding.
The third TA phase is executed by a SAM that shifts the target to the center of the coronagraphic mask. Simultaneously, the image of a possible companion target—if one is indeed present nearby but outside the IWA—may now be shifted onto an unobscured region of the detector for imaging. To eliminate the possibility of a latent image, the "opaque" or "dark" position on the pupil wheel is placed into the beam before executing the SAM in the third TA phase.
The three TA phases are executed autonomously, with no interaction with the ground. TA images are always taken in either the F210M or F335M filter, for short- or long-wavelength (SW, LW) coronagraphy, respectively.
The detector readout patterns for TA are restricted to provide three evenly spaced up-the-ramp images. The three images are used to reject cosmic rays, which could otherwise introduce error to the calculation of the location of the target. TA images are taken using 64² or 128² subarrays completely behind a clear region of the COM or an "ND = 3" square, as appropriate. Once the target is relocated behind the coronagraphic mask, science observations can begin, starting with the selection of the requested science filter. The NIRCam TA procedure estimates the centroid of the brightest target in the target acquisition subarray. If coronagraphy is desired for a nearby target of interest that is less bright, users can insert a SAM from the TA source to the target of interest, which will place the target of interest behind the selected coronagraphic mask. To summarize, the ten detailed events of any NIRCam coronagraphic TA are:
- Place the target behind the clear region or "ND = 3" square associated with the selected coronagraphic mask (occulter).
- Rotate the filter wheel to the TA filter (F210M or F335M).
- Rotate the pupil wheel to the appropriate Lyot stop for the type of occulter (round or bar-shaped).
- Take the TA images using the default, three-sample detector readout pattern.
- Compute the centroid position of the target. This process includes rejecting cosmic rays, background subtraction, and flat fielding.
- Rotate the pupil wheel to the opaque ("dark") position.
- Rotate the filter wheel to the desired science filter.
- Execute a small angle maneuver (SAM) to place the target on the selected coronagraphic mask. For the spot occulters, the SAM will place the target behind the center of the mask. For the bar occulters, the SAM will place the target at the position along the bar that is appropriate for the selected science filter.
- Rotate the pupil wheel to the appropriate Lyot mask (round or bar).
- Take science data.
The centroiding algorithm takes about five minutes. Additionally, the TA exposures themselves can take >15 minutes in the following cases, where K is the Vega magnitude in K band:
F210M (short wavelength TA): 7.1 < K < 8.2 (bright); 14.2 < K < 15.3 (faint); 15.3 < K (too faint)
F335M (long wavelength TA): 5.5 < K < 6.6 (bright); 13.1 < K < 14.3 (faint); 14.3 < K (too faint)
JWST User Documentation Home
NIRCam Coronagraphic Imaging
NIRCam Coronagraphic Occulting Masks and Lyot Stops
NIRCam Filters for Coronagraphy
JWST High Contrast Imaging Overview
JWST High Contrast Imaging Optics
JWST High Contrast Imaging Inner Working Angle
Contrast Considerations for JWST High-Contrast Imaging
NIRCam-specific treatment of limiting contrast
JWST Coronagraphic Observation Planning
JWST Coronagraphic Sequences
JWST High Contrast Imaging in ETC
JWST High Contrast Imaging in APT
Beichman, C. A., et al. 2010, PASP, 122:162 Imaging Young Giant Planets from Ground and Space
Perrin, M., Stansberry, J., Beck, T., Hines, D., and Soummer, R., 2013, JWST-STScI-003472 Sample Target Acquisition Scenarios for JWST
Stark, C., Van Gorkom, K., & Pueyo, L., 2016, JWST-STScI-004707, SM-12 How to Implement a JWST Coronagraphic Observation Sequence in APT
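The second TA phase described above (cosmic-ray rejection, background subtraction, flat fielding, centroiding) can be illustrated with a small toy routine. This is not the actual on-board algorithm — the function, its parameters, and the synthetic 64×64 subarray are assumptions made for demonstration — but it shows how three up-the-ramp reads let a single cosmic-ray hit be rejected before a flux-weighted centroid is computed.

```python
import numpy as np

def simple_ta_centroid(ramp_images, flat=None):
    """Toy version of the phase-2 steps described above: cosmic-ray rejection
    from three up-the-ramp images, background subtraction, optional flat
    fielding, then a flux-weighted centroid.  `ramp_images` has shape
    (3, ny, nx); the return value is (y, x) in pixels."""
    imgs = np.asarray(ramp_images, dtype=float)
    clean = np.median(imgs, axis=0)           # a pixel hit by a cosmic ray in one read is rejected
    if flat is not None:
        clean = clean / flat                   # flat fielding
    clean = clean - np.median(clean)           # crude background subtraction
    clean = np.clip(clean, 0, None)
    total = clean.sum()
    yy, xx = np.indices(clean.shape)
    return (yy * clean).sum() / total, (xx * clean).sum() / total

# Synthetic 64x64 TA subarray with a Gaussian "host" and one cosmic-ray hit
yy, xx = np.indices((64, 64))
star = 1000.0 * np.exp(-((xx - 40.2) ** 2 + (yy - 25.7) ** 2) / (2 * 1.8 ** 2))
reads = np.stack([star + 5.0 + np.random.normal(0, 1, star.shape) for _ in range(3)])
reads[1, 10, 12] += 5000.0                     # cosmic ray in the second read only
print(simple_ta_centroid(reads))               # ≈ (25.7, 40.2)
```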
<urn:uuid:f3317347-b16a-43d8-963a-0be26b86a107>
2.75
1,646
Documentation
Science & Tech.
47.30223
95,506,889
Biological membranes are like a guarded border. They separate the cell from the environment and at the same time control the import and export of molecules. The nuclear membrane can be crossed via many tiny pores. Scientists at the Biozentrum and the Swiss Nanoscience Institute at the University of Basel, together with an international team of researchers, have discovered that proteins found within the nuclear pore function similar to a velcro. In “Nature Nanotechnology”, they report how these proteins can be used for controlled and selective transport of particles. There is much traffic in our cells. Many proteins, for example, need to travel from their production site in the cytoplasm to the nucleus, where they are used to read genetic information. Pores in the nuclear membrane enable their transport into and out of the cell nucleus. The Argovia Professor Roderick Lim, from the Biozentrum and the Swiss Nanoscience Institute at the University of Basel, studies the biophysical basics of this transport. In order to better understand this process, he has created an artificial model of the nuclear pore complex, together with scientists from Lausanne and Cambridge, which has led to the discovery that its proteins function like a nanoscale “velcro” which can be used to transport tiniest particles. “Dirty velcro” inside the nuclear pore Nuclear pores are protein complexes within the nuclear membrane that enables molecular exchange between the cytoplasm and nucleus. The driving force is diffusion. Nuclear pores are lined with “velcro” like proteins. Only molecules specially marked with import proteins can bind to these proteins and thus pass the pore. But for all non-binding molecules the nuclear pore acts as a barrier. The researchers postulated that transport depends on the strength of binding to the “velcro” like proteins. The binding should be just strong enough that molecules to be transported can bind but at the same time not too tight so that they can still diffuse through the pore. In an artificial system recreating the nuclear pore, the researchers tested their hypothesis. They coated particles with import proteins and studied their behavior on the molecular “velcro”. Interestingly, the researchers found parallels in behavior to the velcro strip as we know it. On “clean velcro”, the particles stick immediately. However, when the “velcro” is filled or “dirtied” with import proteins, it is less adhesive and the particles begin to slide over its surface just by diffusion. “Understanding how the transport process functions in the nuclear pore complex was decisive for our discovery,” says Lim. “With the nanoscale ‘velcro’ we should be able to define the path to be taken as well as speed up the transport of selected particles without requiring external energy.” Potential lab-on-a-chip technology applications Lim's investigations of biomolecular transport processes form the basis for the discovery of this remarkable phenomenon that particles can be transported selectively with a molecular “velcro”. “This principle could find very practical applications, for instance as nanoscale conveyor belts, escalators or tracks,” explains Lim. This could also potentially be applied to further miniaturize lab-on-chip technology, tiny labs on chips, where this newly discovered method of transportation would make today's complex pump and valve systems obsolete. Kai D. Schleicher, Simon L. Dettmer, Larisa E. Kapinos, Stefan Pagliara, Ulrich F. Keyser, Sylvia Jeney and Roderick Y.H. 
Lim Selective Transport Control on Molecular Velcro made from Intrinsically Disordered Proteins Nature Nanotechnology; published online 15 June 2014 | doi: 10.1038/nnano.2014.103 Prof. Dr. Roderick Lim, University of Basel, Biozentrum, and Swiss Nanoscience Institute, phone: +41 61 267 20 83, email: email@example.com http://dx.doi.org/10.1038/nnano.2014.103 - Abstract Katrin Bühler | Universität Basel
<urn:uuid:faa48234-72fc-41a3-8c3c-0c874b06e52d>
3.34375
1,534
Content Listing
Science & Tech.
40.769298
95,506,893
Dolphins have been a source of curiosity and have appeared in our stories and myths for thousands of years. We know they are intelligent animals, but just how intelligent are they, and how is dolphin intelligence expressed? Adam Walker, and open ocean endurance swimmer set out on an eight-hour swim to cross the Cook Strait off the coast of New Zealand. After several hours in the cold water, exhausted, he suddenly found himself surrounded by a group of dusky dolphins. Little did he know he was also being closely followed by a great white shark. The dolphins appeared to be protecting him from the predator, which left an indelible impression on Adam. What is the link between our two species? Why do we seem to be so interested and curious about each other? How far might this fascination between humans and dolphins bring us? Will we one day be able to communicate with one another? Scientists around the world are asking themselves the same questions. Over the decades the focus on dolphin research has changed from asking “how intelligent are dolphins?” to “how are dolphins intelligent?” The film brings us to the research sites of some of the most internationally renowned dolphin specialists, and alongside experts studying dolphins in the wild. Do dolphins think the way we do or are their brains wired in a very different way from ours? Kelley Jaakkola, Director of Research at the Dolphin Research Center in Marathon Key, FL., Is testing dolphins understanding of “numerosity” – the ability to compare separate collections of geometric shapes and indicate which contains the lesser number. Not an easier task at high speed, and dolphins seem to be faster at it than we are! Fabienne Delfour in Paris is a cognitive ethologist. She explains that dolphins, unlike most mammals, are self-aware. One of the tests indicating self-awareness in an animal is the mirror test, and Fabienne sets up a one-way mirror to observe the dolphins’ behavior. The dolphins’ reactions prove to us that they do not mistake the reflection in the mirror for that of another dolphin. They truly know they are seeing themselves. In humans this kind of self-recognition happens around the age of 2 or 3. Stan Kucjaz of the Marine Mammal behavior and Cognition Lab, and Holly Eskelinen at Dolphins Plus have been studying the dolphins signature whistles: individual vocalizations that we can compare to human names. Dolphins emit signature whistles to call their young or to introduce themselves to other dolphins in the wild. Every signature whistle is different, and scientists are still working to understand what other information they may contain. Stan and Holly are also working to test the communication skills of dolphins. They insert a fish in tube and wait for the dolphins to figure out how to work together to open it. Once they have learned to do so, one of the two dolphins is paired with a dolphin who has never performed the task. It takes two to open the tube, and the first dolphin rapidly communicates to the newcomer how to open it. Just how are the dolphins communicating such detailed information to one another? There is much that is still unknown about dolphin communication but we know it is highly complex. Off the Australian coast, Janet Mann, from Georgetown University, is studying the unusual tool use of the dolphins in Shark Bay. These “Spongers” pick up a sponge and use it to protect their beaks as they forage through the rough sediment of the ocean floor hunting for fish. 
The knowledge is confined to these dolphins in Shark Bay, and passed from mother to daughter. Vic Peddemors, Professor of Marine Biology, and Gail Addison, a naturalist, take us on the Sardine Run near Kwazulu Natal, at the southern tip of South Africa. Thousands of dolphins from hundreds of kilometers away gather to hunt together in a dramatic, all-you-can-eat buffet. The dolphins use their communication skills with one another to drive the hunt.
<urn:uuid:c422726e-d586-49c7-ae3a-7173a138a2c9>
3
859
Truncated
Science & Tech.
47.780724
95,506,900
- Poster presentation - Open Access Design and assessment of binary DNA for nanopore sequencing © Akan et al; licensee BioMed Central Ltd. 2010 Published: 11 October 2010 DNA sequencing using an array of nanometer-sized pores (nanopores) offer an exciting option for third-generation sequencing, which will allow faster and cheaper sequencing with minimal sample pre-processing. When a voltage is applied through a nanopore in a conducting fluid, a slight electric current is observed, the strength of which depends on the structure of the nanopore. When a DNA molecule passed through a nanopore, with an applied voltage, the current detected through the nanopore will differ for each base due to their differential effect on the structure of nanopore. However, current nanopore platforms cannot fully differentiate between single nucleotides due to their fast passage through the nanopore. To overcome this, Meller et al. proposed a strategy where the sequence detection is fluorescence based and involves passage of predesigned short oligonucleotides, so-called binary DNAs, encoding nucleotides . In the current model, there are two binary DNA sequences and their binary combination encodes the DNA bases. Two molecular beacons complementary to the binary DNAs, each carrying a different fluorophore are also required for the fluorescence detection of DNA sequences. The genomic DNA is sheared and every 24 bases of sequence is converted to a DNA molecule consisting of corresponding binary DNAs encoding the matching bases. The binary-encoded DNA is then hybridised with molecular beacons. Once the binary-encoded DNA passes through the nanopore, the beacons are separated from the original strand and a short burst of light is emitted which is then used to determine the base information to sequence the DNA. We have designed sequences that serve as binary DNAs and corresponding molecular beacons. They have balanced GC content, minimal secondary structure, no cross-hybridisation and no occurrence in the human genome. A software pipeline either generates random DNA sequences or parses sequences from Archaea genomes. It then filters sequences by their GC content, repeat content, complexity, self hybridisation and occurrence in the human genome. Sequences that meet the desired criteria are then paired and tested further (e.g. cross-hybridisation). One set of a successful pair of binary DNA was assessed using the Biacore instrument, which allows interaction analysis in a label-free fashion. Our results confirmed that there is no cross hybridisation between the binary DNA sequences and they have similar hybridisation kinetics. We are currently testing the performance of molecular beacons and different hybridisation conditions (e.g. salt concentration, temperature). Our aim is to sequence an actual binary converted DNA using an array of nanopores. This article is published under license to BioMed Central Ltd.
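A minimal sketch of the kind of filtering pipeline described above might look like the following Python fragment. The thresholds (GC window, homopolymer length, k-mer size) are arbitrary assumptions, and the cross-hybridisation and human-genome occurrence checks mentioned in the abstract are omitted; it only illustrates the GC-content, repeat/complexity, and self-hybridisation filters.

```python
import random

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    return seq.translate(COMP)[::-1]

def gc_fraction(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def max_homopolymer(seq):
    best = run = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

def self_hybridizes(seq, k=6):
    """Flag sequences containing a k-mer whose reverse complement also occurs
    in the sequence (a crude proxy for hairpin / self-hybridisation risk)."""
    kmers = {seq[i:i + k] for i in range(len(seq) - k + 1)}
    return any(revcomp(m) in kmers for m in kmers)

def acceptable(seq):
    return (0.45 <= gc_fraction(seq) <= 0.55
            and max_homopolymer(seq) <= 3
            and not self_hybridizes(seq))

random.seed(1)
candidates = ["".join(random.choice("ACGT") for _ in range(24)) for _ in range(10000)]
passing = [s for s in candidates if acceptable(s)]
print(len(passing), passing[:2])
```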
<urn:uuid:034ae121-a366-47f6-8285-76d50da6b54d>
2.953125
576
Truncated
Science & Tech.
21.931021
95,506,912
Carbon in the Atmosphere Hit 800,000 Year High in 2016, Report Shows “Future generations will inherit a much more inhospitable planet.” The concentration of carbon dioxide in the atmosphere rose to its highest level in over 800,000 years in 2016, according to the United Nations’ World Meteorological Organization (WMO) in its annual Greenhouse Gas Bulletin which was released today. Atmospheric concentrations of CO2 reached 403.3 parts per million (ppm), compared to 400.0 ppm in 2015, marking the most rapid annual increase in CO2 levels in 30 years, and lifting concentrations 145% higher than pre-industrial levels. Such a rise threatens to push global temperature increases compared to pre-industrial levels well beyond the 2 degrees Celsius (3.6 degrees Fahrenheit) mark enshrined in the Paris climate agreement. “Without rapid cuts in CO2 and other greenhouse gas emissions, we will be heading for dangerous temperature increases by the end of this century, well above the target set by the Paris climate change agreement,” said WMO Secretary-General Petteri Taalas in a press release. “Future generations will inherit a much more inhospitable planet.” The WMO measures atmospheric greenhouse gas levels through research stations in 51 countries, according to the BBC. The change in 2016 CO2 concentrations was driven partly by the year’s powerful El Niño, according to the WMO report. Read More: “Super El Niño Finally Comes to an End” El Niños are semi-annual climatic events that happen when warm waters in the Pacific Ocean amplify weather patterns, leading to extreme precipitation and droughts in many countries. The report said that droughts caused by the 2016 El Niño reduced the ability of trees, vegetation, and oceans to absorb carbon, leading to the surge. This increase comes as global carbon emissions from burning fossil fuels have shown little growth in the past few years, suggesting that greenhouse gases that had been absorbed by the biosphere in the past are being released into the atmosphere through deforestation, desertification, and other events, according to the WMO. Historically, around a quarter of greenhouse gas emissions have been absorbed by the ocean and another quarter have been absorbed by the biosphere, the report said. The last time the atmosphere regularly had this much carbon in it was 2-3 million years ago, when sea levels were 30-60 feet higher, and temperatures were 2 to 3 degrees Celsius higher, according to the WMO. The report also noted that concentrations of methane and nitrous oxide — the two other primary drivers of climate change — rose in 2016. Some of this increase was driven by the natural release of emissions that happens as ice melts. For instance, as Arctic ice melts, methane gas that had been trapped is escaping. Scientists fear that this could cause a feedback loop of escalating atmospheric concentrations — as the world warms from rising emissions, more ice will melt, releasing more emissions, causing more ice to melt, and so on. "The numbers don't lie,” said Erik Solheim, head of UN Environment, in a press release. “We are still emitting far too much and this needs to be reversed. The last few years have seen enormous uptake of renewable energy, but we must now redouble our efforts to ensure these new low-carbon technologies are able to thrive. We have many of the solutions already to address this challenge. What we need now is global political will and a new sense of urgency." 
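The key numbers above are easy to verify with a little arithmetic. Assuming the commonly used pre-industrial baseline of roughly 278 ppm (an assumption on our part; the WMO states the 145% figure as a fraction of the pre-industrial level), the 2015–2016 jump and the relative concentration work out as follows.

```python
co2_2016, co2_2015 = 403.3, 400.0   # ppm, figures quoted from the WMO bulletin above
pre_industrial = 278.0               # ppm, commonly used ~1750 baseline (assumption)

print(f"annual increase: {co2_2016 - co2_2015:.1f} ppm")
print(f"relative to pre-industrial: {co2_2016 / pre_industrial:.0%}")   # ≈ 145%
```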
<urn:uuid:2d841fcc-43cd-4ba2-b1b2-bb86abebeed5>
3.46875
872
News Article
Science & Tech.
40.716419
95,506,947
|MadSci Network: Physics| You pose interesting questions. What you need to keep in mind is that when Dirac posed the idea of a filled sea of negative/energy electrons in 1927, the concept of antimatter did not exist (the positron was discovered in 1933 by Carl Anderson, who was unaware of Dirac's prediction). Dirac's "sea of electrons" was a way to save his equation from the unexpected negative energy solutions that it allowed. While it is possible to explain the production of particle-antiparticle pairs using Dirac's picture (not just electrons, but any other electrically charged particle!), there are other, more useful models which have been developed over the years that have much farther-reaching predictive powers, such as Feynman calculus. As you can gather from this link, here we can see that the positron can be described as an electron travelling backwards in time! In any case, I don't think that there is anything particularly profound with the Dirac picture, although many of your fundamental questions about gravitation and the vacuum energy are major points of interest and debate among cosmologists. While I am not a cosmologist myself, Ned Wright's Cosmology Tutorial is an extremely informative website, and may be of interest to you. I will try now to respond to your specific questions: 1. The total amount of gravitation in the universe is not actually changed by pair production. Photons, too, can cause gravity. If you had a box with mirrors on the inside, it would weigh more if there were a lot of photons trapped inside than if it were dark. 2. You do create mass with pair production. In Dirac's picture, you have a hole where there used to be negatively charged negative energy. This lack of negatively charged negative energy is seen as a positively charged particle with the mass of the electron. But this is just saying that a photon is transformed into an electron-positron pair. 3. Pair production can only occur in the presence of another particle or particles. This is due to conservation of momentum. A frame of reference exists where the center-of-mass of the final electron-positron pair is stationary. But if there was only the initial photon taking part in the interaction, there is no frame of reference where the photon could have been stationary too (light speed is constant in all reference frames). Therefore at least one other particle must be included in the system to balance out the photon momentum. I hope this is helpful to you. Please feel free to email me at firstname.lastname@example.org if you need more information. Try the links in the MadSci Library for more information on Physics.
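Point 3 — that a lone photon cannot turn into an electron–positron pair — follows from comparing invariant masses, and a few lines of Python make the bookkeeping explicit. This is an illustrative sketch (the 10 MeV photon is an arbitrary example), not part of the original answer.

```python
# Four-vectors as (E, px, py, pz) in MeV, with c = 1.  Illustrates point 3 above.
m_e = 0.511  # electron mass, MeV

def inv_mass2(p):
    E, px, py, pz = p
    return E**2 - (px**2 + py**2 + pz**2)

photon = (10.0, 0.0, 0.0, 10.0)          # any single photon: invariant mass squared is 0
print(inv_mass2(photon))                  # -> 0.0

# The lightest possible e+e- pair: both particles at rest in their centre of mass
pair = (2 * m_e, 0.0, 0.0, 0.0)
print(inv_mass2(pair))                    # -> (2 m_e)^2 ≈ 1.044 MeV^2

# Invariant mass is conserved, so photon -> e+ e- with nothing else present is
# impossible: 0 can never equal (2 m_e)^2 > 0.  A nucleus or other particle must
# absorb the recoil momentum for pair production to occur.
```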
<urn:uuid:5b809291-6fcb-4b98-9a36-818190a17987>
3.3125
555
Q&A Forum
Science & Tech.
43.368158
95,506,970
A team of ancient DNA and palaeontology researchers from the University of Adelaide, University of Otago and the NZ Department of Conservation have published their analyses of plant seeds, leaf fragments and DNA from the dried faeces (coprolites) to start building the first detailed picture of an ecosystem dominated by giant extinct species. Former PhD student Jamie Wood, from the University of Otago, discovered more than 1500 coprolites in remote areas across southern New Zealand, primarily from species of the extinct giant moa, which ranged up to 250 kilograms and three metres in height. Some of the faeces recovered were up to 15 centimetres in length. ”Surprisingly for such large birds, over half the plants we detected in the faeces were under 30 centimetres in height,” says Dr Wood. “This suggests that some moa grazed on tiny herbs, in contrast to the current view of them as mainly shrub and tree browsers. We also found many plant species that are currently threatened or rare, suggesting that the extinction of the moa has impacted their ability to reproduce or disperse.” “New Zealand offers a unique chance to reconstruct how a ‘megafaunal ecosystem’ functioned,” says Professor Alan Cooper, Director of the Australian Centre for Ancient DNA, which performed the DNA typing. “You can’t do this elsewhere in the world because the giant species became extinct too long ago, so you don’t get such a diverse record of species and habitats. Critically, the interactions between animals and plants we see in the poo provides key information about the origins and background to our current environment, and predicting how it will respond to future climate change and extinctions.” “When animals shelter in caves and rock shelters, they leave faeces which can survive for thousands of years if dried out,” Professor Cooper says. “Given the arid conditions, Australia should probably have similar deposits from the extinct giant marsupials. A key question for us is ‘where has all the Australian poo gone?’” Other University of Adelaide members of the research team include Dr Jeremy Austin and Dr Trevor Worthy from the Australian Centre for Ancient DNA, part of the University’s newly-established Environment Institute. The team’s findings have recently been published in Quaternary Science Reviews, an international geological research journal.Professor Alan Cooper Professor Alan Cooper | Newswise Science News Scientists uncover the role of a protein in production & survival of myelin-forming cells 19.07.2018 | Advanced Science Research Center, GC/CUNY NYSCF researchers develop novel bioengineering technique for personalized bone grafts 18.07.2018 | New York Stem Cell Foundation A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. 
<urn:uuid:040bbf96-6bd0-4cc9-9a22-ad4abfc4d30d>
3.625
1,097
Content Listing
Science & Tech.
37.697072
95,506,980
This incredible experiment kit explores the science behind cutting-edge magnetic levitation transportation. The train is modeled after real-life maglev services such as the train in Shanghai that can reach speeds of 268 mph. The Magnetic Levitation Express kit will allow children to build and experiment with a scale-model of such trains, and learn how electromagnets, propulsion magnets, and levitation magnets are used to help the train hover and accelerate forward. Precisely placed magnetic strips are built into the side of the guide-way track to allow the train to maintain a stable hover and run smoothly at high speeds. After building the train, curious minds can learn the basic principles of how magnets repel and attract each other, watching these processes in action. This is an ideal do-it-yourself science fair, after-school, or summer workshop project with the bonus gift of learning the basics of electromagnetic theory. Experts believe that the retention of a child's learning multiplies if they can coordinate reading with actual hands-on experience. With 155 assembly parts, it's a wonderful experiment for the young train or magnet enthusiast, and constitutes a gateway to further scientific inquiry. Fast, pollution-free, and immaculately quiet, the Magnetic Levitation Express demonstrates the exciting implications of emergent technology.
<urn:uuid:e6afae23-601f-4310-9826-309812f99423>
3.421875
357
Product Page
Science & Tech.
37.374945
95,506,982
To cite this page, please use the following: · For print: . Accessed · For web: Found most commonly in these habitats: 9 times found in rainforest, 11 times found in tropical dry forest, 2 times found in littoral rainforest, 1 times found in montane forest, 1 times found in dry forest, 1 times found in gallery forest, 1 times found in mixed tropical forest near river. Found most commonly in these microhabitats: 7 times sifted litter (leaf mold, rotten wood), 8 times ex log, 6 times ex rotten log, 1 times under rotten log. Collected most commonly using these methods: 2 times MW 75 sample transect, 5,10m, 4 times MW 50 sample transect, 5m, 1 times at light, 1 times MW 25 sample transect, 5m, 1 times pitfall trap, PF 50 tube sample transect, 5m, 1 times Malaise, 1 times malaise trap. Elevations: collected from 20 - 1100 meters, 346 meters average AntWeb content is licensed under a Creative Commons Attribution License. We encourage use of AntWeb images. In print, each image must include attribution to its photographer and "from www.AntWeb.org" in the figure caption. For websites, images must be clearly identified as coming from www.AntWeb.org, with a backward link to the respective source page. See How to Cite AntWeb. Antweb is funded from private donations and from grants from the National Science Foundation, DEB-0344731, EF-0431330 and DEB-0842395. c:0
<urn:uuid:233873f7-fe7c-4d77-bc07-32ed11ed9131>
2.828125
347
Knowledge Article
Science & Tech.
62.7175
95,507,001
- Spirals in nature get their shape because of an uneven growth rate. Think about the spiraled horns on a bighorn sheep, or the spiral in a seashell – they grow more quickly on the outside edge than on the inside, causing a spiral to form. - Every sunflower's seeds are arranged in the same spiral pattern with the same exact number of spirals. Each has 34 spirals that go in one direction, while 55 go in the opposite direction. This arrangement allows the sunflower to pack the most seed heads in the least amount of space without smashing any one seed against its neighbors (these counts are explored in the short code sketch after this list). - Asymmetry in your anatomy can actually be to your advantage. Take Michael Phelps – his wingspan is wider than he is tall, and the ratio of his leg length to his torso is much smaller than average. This allows him to propel through the water faster than competitors with longer legs. - Humans come in all shapes and sizes, but the underlying patterns make our bodies more alike than different. That's because human anatomy follows predictable proportions, specifically The Golden Ratio, or phi (ɸ), which is the most common proportion found in nature. For example, your forearm is about 1.6 times longer than your hand. - Making beautiful music can start with a simple reflection. Composers often use symmetry and repetition when writing music. They'll take a single string of notes and reflect them backwards or upside down and then repeat them throughout a piece of music to create a pleasing melody.
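The spiral counts quoted above (34 one way, 55 the other) are consecutive Fibonacci numbers, and the ratio of successive Fibonacci numbers approaches the golden ratio phi mentioned later in the list. A minimal Python sketch of that convergence (purely illustrative, not part of the original list):

```python
# Sketch: consecutive Fibonacci numbers (e.g. the 34 and 55 spiral counts above)
# have ratios that converge to the golden ratio phi = (1 + sqrt(5)) / 2 ~ 1.618.
from math import sqrt

phi = (1 + sqrt(5)) / 2

a, b = 1, 1
for _ in range(12):
    a, b = b, a + b
    print(f"{b}/{a} = {b / a:.6f}   (phi = {phi:.6f})")
```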
<urn:uuid:3e76b32d-8107-4313-853f-c54a5771bb40>
3.9375
314
Listicle
Science & Tech.
52.229412
95,507,002
A survey is given on the scarce information on the visual organs (eyes or ocelli) of Tardigrada. Many Eutardigrada and some Arthrotardigrada, namely the Echiniscidae, possess inverse pigment-cup ocelli, which are located in the outer lobe of the brain, and probably are of cerebral origin. Occurrence of such organs in tardigrades, suggested as being eyeless, has never been checked. Depending on the species, response to light (photokinesis) is negative, positive or indifferent, and may change during the ontogeny. The tardigrade eyes of the two eutardigrades examined up to now comprise a single pigment cup cell, one or two microvillous (rhabdomeric) sensory cells and ciliary sensory cell(s). In the eyes of the eutardigrade Milnesium tardigradum the cilia are differentiated in an outer branching segment and an inner (dendritic) segment. Because of the scarcity of information on the tardigrade eyes, their homology with the visual organs of other bilaterians is currently difficult to establish and further comparative studies are needed. Thus, the significance of these eyes for the evolution of arthropod visual systems is unclear yet. © 2007 Elsevier Ltd. All rights reserved. Mendeley saves you time finding and organizing research Choose a citation style from the tabs below
<urn:uuid:114bb89a-5cfd-48c3-ad88-52f8afb66cda>
3.3125
298
Academic Writing
Science & Tech.
19.674364
95,507,004
Perkins-Kirkpatrick, Sarah E.; Fischer, Erich M.; Gibson, Peter B. - Journal Article Rights / license: Creative Commons Attribution 3.0 Unported Understanding what drives changes in heatwaves is imperative for all systems impacted by extreme heat. We examine short- (13 yr) and long-term (56 yr) heatwave frequency trends in a 21-member ensemble of a global climate model (Community Earth System Model; CESM), where each member is driven by identical anthropogenic forcings. To estimate changes dominantly due to internal climate variability, trends were calculated in the corresponding pre-industrial control run. We find that short-term trends in heatwave frequency are not robust indicators of long-term change. Additionally, we find that a lack of a long-term trend is possible, although improbable, under historical anthropogenic forcing over many regions. All long-term trends become unprecedented against internal variability when commencing in 2015 or later, and corresponding short-term trends by 2030, while the length of trend required to represent regional long-term changes is dependent on a given realization. Lastly, within ten years of a short-term decline, 95% of regional heatwave frequency trends have reverted to increases. This suggests that observed short-term changes of decreasing heatwave frequency could recover to increasing trends within the next decade. The results of this study are specific to CESM and the 'business as usual' scenario, and may differ under other representations of internal variability, or be less striking when a scenario with lower anthropogenic forcing is employed. Journal / series: Environmental Research Letters Publisher: Institute of Physics Subject: Heatwaves; Internal variability; Trends; Observations; Model projections; Global; Regional Organisational unit: 03777 - Knutti, Reto
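As a rough illustration of the kind of comparison described in this abstract (short- versus long-term linear trends across ensemble members, judged against the spread of trends in a control run), the following Python sketch uses synthetic data only; the drift, noise level, and threshold are placeholder assumptions, not CESM output or the paper's actual method.

```python
# Sketch: compare 13-yr and 56-yr linear trends in heatwave frequency across an
# ensemble, against the spread of trends in a control run (synthetic data only).
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1960, 2016)                  # 56 years
n_members = 21

# Placeholder "heatwave days per year": a forced upward drift plus internal noise.
forced = 0.05 * (years - years[0])
ensemble = forced + rng.normal(0, 1.5, size=(n_members, years.size))
control  = rng.normal(0, 1.5, size=(1000, years.size))   # no forced trend

def trend(series, yrs):
    """Least-squares slope: heatwave-frequency change per year."""
    return np.polyfit(yrs, series, 1)[0]

long_trends  = [trend(m, years) for m in ensemble]
short_trends = [trend(m[-13:], years[-13:]) for m in ensemble]
control_long = [trend(m, years) for m in control]

# A long-term trend is "unprecedented" here if it exceeds the control-run spread.
threshold = np.percentile(control_long, 97.5)
print(f"members with unprecedented 56-yr trends: {np.sum(np.array(long_trends) > threshold)}/{n_members}")
print(f"13-yr trends range from {min(short_trends):.3f} to {max(short_trends):.3f},"
      f" illustrating why short records are noisy")
```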
<urn:uuid:6f8c7315-09df-4f31-92db-07e93ec9e3f9>
2.515625
384
Academic Writing
Science & Tech.
18.799121
95,507,005
Chances that the 2008 ice extent will fall below last year's record minimum is about 8 percent, researchers forecast after having run a number of different models predicting the fate of the Arctic sea ice this summer. But there is still reason for concern; the scientists are almost certain the ice extent will fall below the minimum of 2005, which was the second lowest year on record. With a probability of 80% the minimum ice extent in 2008 will be in the range between 4.16 and 4.70 million km2. “After the strong decrease of the Arctic ice during the last summer, climate scientists all around the world are constantly asked: how will the ice develop in the next years?” described Prof. Dr. Rüdiger Gerdes from the Alfred Wegener Institute his motivation. “To answer this question, we did not want to guess, but to rely on sound calculations.” Scenarios of the long-term development of sea ice clearly indicate a de-creased ice cover - exact prognoses for the following summer, however, are not yet possible. This is mainly due to the fact that the short-term development of sea ice depends strongly on the actual atmospheric conditions, namely the weather and in particular wind, cloud cover and air temperatures. Because the exact atmospheric conditions which determine the weather patterns in the Arctic Ocean during the coming months are not predictable, Rüdiger Gerdes and his team have entered atmospheric data of the last twenty years into an ocean sea ice model. “Through this, we are still not able to make a definitive statement on sea ice cover in September. This 'trick' enables us to compute the bandwidth of possible ice covers, and to quantify the probability of extreme events”, said Gerdes. Apart from the variability of atmospheric quantities during the melting season, ice thickness at the beginning of the season determines the new ice minimum. Accordingly, computations of ice thickness enter the models of the researchers. Start conditions from June 27th 2008 were used for their current prognosis.Different from long-term prognoses, the researchers' forecasts can quickly be checked by reality. Eleven research cruises will be carried through as part of the DAMOCLES programme (Developing Arctic Modelling and Observing Capabilities for Long-term Environment Studies) during the summer and autumn 2008, see detailed list below. On some of the cruises it is possible for journalists or media to join. Please see contact details in the end of this press release.Cruise:Oceania cruise Contact: Ilker Fer: Ilker.Fer@bjerknes.uib.noCruise:TBA Contact: Hanne Sagen: email@example.com The European Union Programme DAMOCLES, which is part of the International Polar Year, is concerned with the potential for a significantly reduced sea ice cover, and the impacts this might have on the environment and human activities, both regionally and globally. During The International Polar Year 50,000 researchers from more than 60 countries joins in an effort to learn more about our polar regions. This autumn, several expeditions under the DAMOCLES project will collect and reveal data about the ice extent in the Arctic, among many other activities. Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany 25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF Dry landscapes can increase disease transmission 20.06.2018 | Forschungsverbund Berlin e.V. For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. 
<urn:uuid:1e9002b9-b26d-4776-ad6d-e6c8bd7b0350>
2.765625
1,330
Content Listing
Science & Tech.
43.102057
95,507,018
Many surfaces within the body are lined with tightly interconnected sheets of epithelial cells, with individual cells tethered to one another via complexes known as adherens junctions (AJs). These sheets undergo considerable reorganization during embryonic development and wound healing; accordingly, AJs are not merely 'cellular staples', but appear to provide an important mechanism for monitoring adjacent cells. "I imagine that cells confirm whether their neighbors are alive and have the same adhesion molecules by 'pulling' adjacent cells through AJs," explains Shigenobu Yonemura, of the RIKEN Center for Developmental Biology in Kobe. "Dead cells cannot pull back, and thus would not be recognized as members of the epithelial cell sheet." Yonemura's team has uncovered evidence that AJs counter tensions generated through intercellular interactions via their associations with cytoskeletal actin filaments, spotlighting a potentially important association between AJ component α-catenin and the actin-binding protein vinculin1. By further exploring the relationship between these two proteins, his team has now achieved a breakthrough in understanding AJ-mediated force detection2. The researchers identified a vinculin-binding region in the middle of α-catenin, but also identified a second segment of the protein that actively inhibits this interaction. At one end, α-catenin also contains an actin-binding region, and Yonemura and colleagues found that this association appears to be essential for relieving this self-inhibition, suggesting that the α-catenin–vinculin interaction is force-dependent. Subsequent experiments enabled the team to construct a model in which α-catenin is normally collapsed like an accordion, with the inhibitory domain masking the vinculin binding site. However, increased tension extends the protein and exposes this site, enabling further interactions with the cytoskeleton that effectively counter the force pulling against a given AJ. The result is essentially a 'tug of war' between cells, with the integrity of the epithelium hanging in the balance. If accurate, this model offers a simple explanation for how epithelial cells can react rapidly to rearrangements in their local environment. "The central part of the mechanism involves the protein structure of α-catenin—no enzymatic reaction is required," says Yonemura. "Because of this, sensing and response take place at the same time and place." His team is now designing experiments to confirm this α-catenin rearrangement in response to applied force, but Yonemura believes they may have potentially uncovered a broadly relevant model for cellular communication. "Because the mechanism is so simple, I think that it could be fundamental and used among a wide variety of cells," he says. The corresponding author for this highlight is based at the Electron Microscope Laboratory, RIKEN Center for Developmental Biology. Journal information: 1. Miyake, Y., Inoue, N., Nishimura, K., Kinoshita, N., Hosoya, H. & Yonemura, S. Actomyosin tension is required for correct recruitment of adherens junction components and zonula occludens formation. Experimental Cell Research 312, 1637–1650 (2006) 2. Yonemura, S., Wada, Y., Watanabe, T., Nagafuchi, A. & Shibata, M. α-Catenin as a tension transducer that induces adherens junction development.
Nature Cell Biology 12, 533–542 (2010)
<urn:uuid:47ccd995-43ba-4349-88d1-61dfd2cf7544>
2.84375
1,429
Content Listing
Science & Tech.
32.726922
95,507,019
Squid skin could be the solution to camouflage material Cephalopods — which include octopuses, squid, and cuttlefish — are masters of disguise. They can camouflage to precisely match their surroundings in a matter of seconds, and no scientist has quite been able to replicate the spectacle. But new research by Leila Deravi, assistant professor of chemistry and chemical biology at Northeastern, brings us a step closer. The chromatophore organs, which appear as hundreds of multi-colored freckles on the surface of a cephalopod's body, contribute to fast changes in skin color. In a paper published last week in Advanced Optical Materials, Deravi's group describes its work in isolating the pigment granules within these organs to better understand their role in color change. The researchers discovered these granules have remarkable optical qualities and used them to make thin films and fibers that could be incorporated into textiles, flexible displays, and future color-changing devices. Deravi's lab collaborated with the U.S. Army Natick Soldier Research, Development, and Engineering Center for the study. Chromatophores come in shades of red, yellow, brown, and orange. They are similar to the freckles on human skin that appear over time. But in cephalopods, these freckles open and close within a fraction of a second to give rise to a continuously reconfiguring skin color. Underneath the chromatophores is a layer of iridophores that act as a mirror. Together, these organs reflect all colors of visible light. By removing individual pigment particles from the squid, Deravi was able to explore the breadth of their capabilities as static materials. One particle is only 500 nanometers in size, which is 150 times smaller than the diameter of a human hair. Deravi's team layered and reorganized the particles and found they could produce an expansive color pallet. "We're showing these pigments are a powerful tool that can produce ultra-thin films that are really rich in colors," Deravi said. Her team also discovered the pigments can scatter both visible and infrared light. This enhances brightness and light absorption and affects how a final color is perceived. And when Deravi engineered a system that included a mirror — mimicking the layout of organs that squids have naturally–she was able to further enhance the perceived color through scattering light through and off the granules. This process could potentially be replicated on functional materials like solar cells to increase the absorption of sunlight, Deravi said. "From a scientific and technical engineering perspective, understanding how light scattering affects color is very important, and this is an exciting new development in the field of optics in biology," said Richard Osgood, a collaborator from the U.S. Army Natick Soldier Research, Development, and Engineering Center. "This is an unusual harnessing of optics and physics knowledge in scattering to understand biological systems." The researchers made spools of fibers from the squids' pigment particles and are now exploring uses for the material. The fibers are so visually interesting that it's not difficult to imagine weaving them into fabric for clothing or other art forms. But perhaps the most exciting possible application is wearable, flexible screens and textiles that are capable of adaptive coloration. Osgood said the research could allow the Army to create new capabilities for soldiers. 
"For more than a decade, scientists and engineers have been trying to replicate this process and build these devices that can color match, color change, and camouflage just like the cephalopods, but many of them come nowhere near the speed or dynamic range of color that the animals can display," Deravi said. "Cephalopods have evolved to incorporate these specific pigment granules for a reason, and we're starting to piece together what that reason is."
<urn:uuid:332e1849-99df-4b0c-a033-cbcafa881c18>
3.640625
785
News Article
Science & Tech.
31.638923
95,507,034
Update on the Larsen-C iceberg breakaway with new animation, satellite image and Nature Climate Change article The largest remaining ice shelf on the Antarctic Peninsula lost 10% of its area when an iceberg four times the size of London broke free earlier this month. Sentinel-1 data shows network of cracks grow on the Larsen-C Ice-Shelf, before the colossal iceberg broke free. Credit: A.E. Hogg, CPOM, University of Leeds Since the 12 July 2017 breakaway Dr Anna Hogg, from the University of Leeds and Dr Hilmar Gudmundsson, from the British Antarctic Survey (BAS), have continued to track the iceberg - known as A68 - using the European Space Agency (ESA) and European Commission's Copernicus Sentinel-1 satellite. Their observations show that since the calving event, the berg has started to drift away from the Larsen-C, with open ocean clearly visible in the ~ 5 kilometre gap between the berg and the ice-shelf. A cluster of over 11 'smaller' icebergs have also now formed, the largest of which is over 13 km long. These 'bergy bits' have broken off both the giant iceberg and the remaining ice-shelf. Dr Hogg, an ESA Research Fellow in the Centre for Polar Observation and Modelling (CPOM) at Leeds said: "The satellite images reveal a lot of continuing action on Larsen-C Ice Shelf. We can see that the remaining cracks continue to grow towards a feature called Bawden Ice Rise, which provides important structural support for the remaining ice shelf. "If an ice shelf loses contact with the ice rise, either through sustained thinning or a large iceberg calving event, it can prompt a significant acceleration in ice speed, and possibly further destabilisation. It looks like the Larsen-C story might not be over yet." Reporting this week in the journal Nature Climate Change Dr Hogg and Dr Gudmundsson, examine the events leading up to this dramatic natural phenomenon and discuss how calving of huge icebergs affects the stability of Antarctic ice shelves. Their article asserts that a calving event is not necessarily due to changes in environmental conditions and may simply reflect the natural growth and decay cycle of an ice shelf. Dr Gudmundsson said: "Although floating ice shelves have only a modest impact on of sea-level rise, ice from Antarctica's interior can discharge into the ocean when they collapse. Consequently we will see increase in the ice-sheet contribution to global sea-level rise. "With this large calving event, and the availability of satellite technology, we have a fantastic opportunity to watch this natural experiment unfolding before our eyes. We can expect to learn a lot about how ice shelves break up and how the loss of a section of an ice shelf affects the flow of the remaining parts." Ice-shelf retreat on the Antarctic Peninsula, has been observed throughout the satellite era - about 50 years. Large sections of the Larsen Ice Shelf A and B, and the Wilkins1 ice-shelf collapsed in a matter of days in 1995, 2002, and 2008, respectively. Geological evidence suggests that ice-shelf decay of this magnitude is not unprecedented, however, prior to 2002 the Larsen-B ice shelf remained intact for the last 11,000 years. While Antarctic ice shelves are in direct contact with both the atmosphere and the surrounding oceans, and thus subject to changes in environmental conditions, they also go through repeated internally-driven cycles of growth and collapse. New animation, GIF and PNG available for download at: https:/ Credits and captions: Credit: A.E. Hogg, CPOM, University of Leeds. 
Caption: The story of the giant iceberg calving event on the Larsen-C Ice-Shelf. Credit: A.E. Hogg, CPOM, University of Leeds. Caption: Sentinel-1 data shows network of cracks grow on the Larsen-C Ice-Shelf, before the colossal iceberg broke free. Credit: A. Fleming, British Antarctic Survey. Caption: View of the A68 iceberg from a European Copernicus Sentinel-1 satellite image acquired on 30.7.2017. Dr Anna Hogg and Dr Hilmar Gudmundsson are available for interviews and additional information. To get in touch with Dr Hogg please contact: Anna Martinez, press officer, University of Leeds tel:+44 (0)113 343 4196; email: firstname.lastname@example.org To get in touch with Dr Hilmar Gudmundsson please contact: Sarah Vincent, Senior Communications Manager, British Antarctic Survey: tel +44 (0)1223 221445; mobile +44 (0)7850 541910; email email@example.com Broadcast-quality footage of Larsen Ice Shelf and additional images are available from British Antarctic Survey Press Office Impacts of the Larsen-C ice-shelf calving event by Anna E Hogg and G Hilmar Gudmundsson will be published on Nature Climate Change on 2 August 2017 (PDF available on request) University of Leeds The University of Leeds is one of the largest higher education institutions in the UK, with more than 33,000 students from 147 different countries, and a member of the Russell Group of research-intensive universities. We are a top 10 university for research and impact power in the UK, according to the 2014 Research Excellence Framework and we are The Times and The Sunday Times University of the Year 2017. Additionally, the University has been awarded a gold rating by the Government's Teaching Excellence Framework recognising its 'consistently outstanding' teaching and learning provision. http://www. British Antarctic Survey (BAS), an institute of the Natural Environment Research Council (NERC), delivers and enables world-leading interdisciplinary research in the Polar Regions. Its skilled science and support staff based in Cambridge, Antarctica and the Arctic, work together to deliver research that uses the Polar Regions to advance our understanding of Earth as a sustainable planet. Through its extensive logistic capability and know-how BAS facilitates access for the British and international science community to the UK polar research operation. Numerous national and international collaborations, combined with an excellent infrastructure help sustain a world leading position for the UK in Antarctic affairs. For more information visit http://www. Anna Martinez | EurekAlert! New research calculates capacity of North American forests to sequester carbon 16.07.2018 | University of California - Santa Cruz Scientists discover Earth's youngest banded iron formation in western China 12.07.2018 | University of Alberta For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. 
<urn:uuid:8433843b-b208-496b-a884-8b5971617c87>
3.390625
1,967
Content Listing
Science & Tech.
41.603451
95,507,049
FOR the last seven months, Element 106, an artificial element first created in 1974 by the Lawrence Berkeley Laboratory in Berkeley, Calif., has been provisionally named seaborgium in honor of Dr. Glenn T. Seaborg, the laboratory's director at large, a Nobel laureate and the codiscoverer of plutonium. But just as seaborgium is finding its way into textbooks and scientific papers, an international committee has created an uproar by dropping Dr. Seaborg's name from the official roster of chemical elements. In a decision announced yesterday in Chemical and Engineering News, a committee of the International Union of Pure and Applied Chemistry, or I.U.P.A.C., recommended changes in the provisional names assigned to Elements 101 through 109. The changes, which must be ratified next August by the full council of the British-based organization, drew angry responses from several leading American chemists and nuclear physicists. The list of elements that included Dr. Seaborg's name was adopted last March by the nomenclature committee of the American Chemical Society; it was this list that the international panel changed. "It's outrageous," said Albert Ghiorso of the Lawrence Berkeley Laboratory, who led the team that discovered Element 106. "It's unheard of for a discoverer to be denied the right to name an element." Dr. Seaborg said, "It's very disappointing." The executive secretary of the international group, Dr. Maurice Williams, said Dr. Seaborg's name was dropped because the organization's nomenclature commission, meeting in Balatonfured, Hungary, on Aug. 31, decided that no element should be named for a living person.Continue reading the main story If, as seems likely, this recommendation stands, the "transfermium elements" -- elements higher on the periodic table of elements than the man-made element fermium -- will have the following names: Element 101: mendelevium; unchanged. Named for the geneticist Gregor Mendel. Element 102: nobelium; unchanged. Named for Alfred Nobel, founder of the Nobel Prizes. Element 103: lawrencium; unchanged. Named for Ernest O. Lawrence, inventor of the cyclotron. Element 104: dubnium; changed from rutherfordium (or kurchatovium). Named for the Joint Institute for Nuclear Research at Dubna, Russia. Element 105: joliotium; changed from hahnium. Named for Frederic Joliot-Curie, a pioneer in nuclear physics. Element 106: rutherfordium; changed from seaborgium. Named for Ernest Rutherford, a pioneer in atomic physics. Element 107: bohrium; changed from nielsbohrium. Named for Niels Bohr, a founder of quantum mechanics. Element 108: hahnium; changed from hessium. Named for Otto Hahn, a pioneer in the discovery of nuclear fission. Element 109: meitnerium; unchanged. Named for Lise Meitner, a pioneer in nuclear fission. All these elements were created in laboratories in the United States, Germany and Russia by bombarding heavy atoms with heavy atomic nuclei, merging their protons and neutrons into new, larger nuclei. All of them are highly radioactive and decay swiftly into lighter atoms. An article in Science Times yesterday about the renaming of chemical elements misidentified the scientist for whom mendelevium is named. It is the Russian chemist Dmitri I. Mendeleyev, not the geneticist Gregor Mendel.
<urn:uuid:a857d18a-19af-4a8e-955f-1ddcc913faf2>
3.046875
745
News Article
Science & Tech.
40.801228
95,507,054
Superposition, Entanglement, and Raising Schrödinger’s Cat Presenter: David Wineland Published: July 2014 Age: 18-22 and upwards Views: 859 views Tags: quantum; superposition; schrodinger; cat In 1935, Erwin Schrödinger, one of the inventors of quantum mechanics, illustrated his discomfort with the theory by pointing out that its extension to the macroscopic world could lead to bizarre situations such as a cat being simultaneously alive and dead, a so-called superposition state. Today, we can create analogous situations on a small scale, such as putting an atom in a ‘bowl’ and placing it on the left and right sides of the bowl simultaneously. Superpositions can be used as clocks. For example, the wave function that describes the superposition of two different energy levels in an atom oscillates at a frequency given by the energy level difference divided by Planck’s constant. The duration required to count a prescribed number of these oscillations can be used to define a unit of time such as the second. Today, atomic clocks run at rates that are uncertain at a level of only 1 part in 10^17. Superpositions might also be useful for computation. For example, two energy levels in an atom, labeled “0” and “1,” can be used to store information like the bits in our laptops. However, as in the atom/bowl experiment, we can arrange the quantum bit to be in a superposition, thereby storing both states of the bit simultaneously. This property leads to a memory and processing capacity that increases exponentially with the number of bits. This and a related property called ‘entanglement’ would enable a quantum computer to efficiently solve certain classes of problems that are intractable on conventional computers. So far, scientists have constructed quantum computers composed of only a few bits, but with advances in technology, a useful processor may someday become a reality. A macroscopic quantum processor would realize a close analog to Schrödinger’s cat. These topics will be briefly discussed in the context of trapped atomic ions.
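The clock idea sketched in this abstract comes down to one relation: a superposition of two levels separated by energy ΔE oscillates at frequency f = ΔE/h, where h is Planck's constant. The short Python sketch below illustrates this with the caesium-133 transition that defines the SI second; the numbers are standard reference values, not taken from the talk itself.

```python
# Sketch: a two-level superposition oscillates at f = delta_E / h (Planck's constant).
# Counting a fixed number of those oscillations defines a time interval; the SI second
# is defined as 9,192,631,770 periods of the caesium-133 hyperfine transition.
h = 6.62607015e-34          # Planck constant, J s (exact by SI definition)

f_cs = 9_192_631_770        # Hz, caesium-133 hyperfine clock transition
delta_E = h * f_cs          # energy splitting that drives the oscillation
print(f"Cs-133 splitting: {delta_E:.3e} J  ->  f = {delta_E / h:.0f} Hz")

# A fractional frequency uncertainty of 1e-17 (the level quoted in the abstract)
# corresponds to drifting by about one second over a few billion years.
seconds_per_year = 365.25 * 24 * 3600
print(f"1 s drift after ~{1 / 1e-17 / seconds_per_year:.1e} years")
```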
<urn:uuid:f681c93a-80c3-4dec-8a13-ae0fcfaf6117>
3.21875
448
Truncated
Science & Tech.
31.277649
95,507,061
The past year has been a momentous time for the world’s forests, with both good and bad news. Fasten your seat belts, because 2016 promises to be another roller-coaster ride. Here I highlight five factors that could have a big impact on forests this year. For further discussion, see this insightful analysis by environmental journalist Rhett Butler. 1. Collapsing commodity prices The ripple effects from China’s slowing economy could be huge for forests. China has been an aggressive driver of mineral, fossil fuel and timber exploitation, especially in developing nations across the Asia-Pacific, Latin America, Africa and Siberia. It has pushed hard for road and infrastructure expansion into many remote wilderness regions — projects that have often opened a Pandora’s box of environmental problems for forests and wildlife. With prices for many natural resources falling, forests could get some respite in 2016. Conservationists need to use this breathing space to create new protected areas and promote land-use planning in environmentally critical regions. Africa, in the midst of a mining and road-building frenzy, is a particularly high priority. 2. The El Niño drought The fire-breathing “Godzilla” drought ain’t dead yet — far from it. The unusual Pacific Ocean conditions feeding this monster are still strong. This could lead to serious droughts and fires in South and Central America and the Asia-Pacific region. Indonesia, in particular, has been reeling from the drought, with massive forest and peat fires that have had much of Southeast Asia gasping for air. On a daily basis, Indonesia’s fires belched out as much carbon as the entire US economy. 3. Brazil’s imploding economy If China’s economy is cooling off, then Brazil’s once-promising economy is entering an Ice Age — a remarkable downturn for a nation so rich in land and natural resources. It’s hard to predict how this could affect rainforests like the vast Brazilian Amazon and the critically imperiled Brazilian Atlantic Forest, a global biodiversity hotspot that has been massively reduced and fragmented. On the one hand, Brazil’s currency, the real, has fallen dramatically in value. That means that its export commodities such as timber, soy, beef, oil and minerals will be more competitive internationally — potentially promoting more forest exploitation. On the other hand, domestic and international investors tend to be cautious in a slowing economy. New infrastructure and land-exploiting projects, such as a slate of planned mega-dams in the Amazon and elsewhere, may well slow down. It’s hard to call this one. The imploding economy may well lead to the political demise of Brazilian President Dilma Rousseff, who generally has been pro-environment. For instance, Rousseff did everything she could to staunch recent efforts to weaken Brazil’s Forest Code — a legal framework that’s been crucial for protecting the nation’s forests. Over the last decade, annual deforestation rates in the Brazilian Amazon have fallen by more than 75%, but rural and industrial lobbies have incessantly attacked the government land-use controls that have helped make this reduction possible. 4. Zero-deforestation agreements A remarkable development in the last two years is that scores of corporations producing or using oil palm, wood pulp, soy, beef and other commodities have declared their intent to halt or sharply curtail forest destruction. Pressures from eco-conscious consumers and environmental NGOs have been a key driver of this trend. 
Overall, this has been a hugely positive step. However, Indonesia and Malaysia — which collectively produce around 85% of all the world’s palm oil — appear determined to stop or erode zero-deforestation agreements for corporations working there. Bottom line: they want to continue clearing large expanses of native forest for oil palm and industrial wood-pulp plantations, and the zero-deforestation agreements are getting in the way of this. Indonesia alone plans to fell another 14 million hectares of native forest by 2020. This is truly a critical issue to watch. If corporations start to backslide on their zero-deforestation agreements, then conservationists are likely to let them know — loudly and emphatically — that they’re doing the wrong thing. 5. The Paris Climate Accord I attended the recent Climate Convention in Paris, where there were two key developments relating to forests. Firstly, a formal agreement for advancing REDD — which stands for “reducing emissions from deforestation and forest degradation” — was finally approved. In theory, this means that more international funding should start flowing for forest conservation — to slow deforestation, encourage forest regeneration and promote more-sustainable logging — all in the interest of reducing carbon emissions and thereby limiting global warming. There’s no question that this is good news — though it’s time to stop talking and start acting. In particular, wealthier nations such as the US, Japan and Australia must amp up their funding for REDD initiatives, especially in the tropics. Secondly, the world’s nations agreed in principle to limit global warming to 2℃ — and to strive for an increase of just 1.5℃. It’s wonderful that nations have made this broad commitment, but actually achieving it is going to be a tremendous challenge. There’s no time for complacency. The Paris Agreement will only be effective if it’s followed by concerted actions by nations to reduce their carbon emissions and conserve forests. Seriously, forests are important Conserving and regenerating forests really is one of the smartest things we can do for our planet’s health. For one thing, protecting large expanses of forest makes these biodiversity-rich ecosystems much more resilient to future climate change. Forest tracts that span large gradients in rainfall, elevation and other environmental factors give species the opportunity to migrate or find local refuges during heat waves, fires, storms and other extreme weather conditions. Protecting and regenerating forests could also have a huge impact on the global climate. Forests cool the Earth’s surface while emitting trillions of tonnes of water vapor that generates much of the planet’s rainfall. But most of all, forests can rapidly absorb and store a great deal of carbon. It has recently been estimated that a concerted effort to halt tropical deforestation and regenerate forests on degraded tropical lands could get us halfway to our global goal to reduce carbon emissions over the next 50 years. As we follow the dramatic events unfolding for the world’s forests in 2016, we should bear in mind just how vital these imperiled ecosystems are for all of us. 
Bill Laurance is a distinguished research professor and Australian laureate at James Cook University - How farms can help improve the lives of disadvantaged young people - Genetically testing human embryos: what you need to know about the debate - How Vladimir Putin outfoxed Donald Trump at Helsinki before their meeting even began - Youth mobility scheme after Brexit won't fill gaps left by end to free movement This article originally published at The Conversation here
<urn:uuid:639022eb-314a-41ec-8190-6d990e8081ab>
3
1,473
Personal Blog
Science & Tech.
34.339936
95,507,066
httpv://www.youtube.com/watch?v=dyZoQNInlI0 The world has been obsessed with Greece's credit ratings and earth-shattering quakes. But is Iceland's volcanic eruption the real black swan? So far, Iceland's Eyjafjallajokull (ay-yah-FYAH-lah-yer-kuhl) volcano has pulled the reins on Europe's economy by grounding air travel. What if the volcano continues to erupt? Some scientists fear a larger eruption at the nearby Katla volcano which sits on the massive Myrdalsjokull icecap. Undoubtedly, the smoke from these massive eruptions can quickly become a concern for more than the economy. We go about our daily lives making the ultimate assumption that tomorrow will be much like today, but the future is absolutely unknown. There is the possibility that these types of volcanic eruptions can not only cancel flights, but also cause major freezes and hazardous air pollution. The USA Today reports, "When Katla went off in the 1700s, the USA suffered a very cold winter," says Gary Hufford, a scientist with the Alaska Region of the National Weather Service. "The Mississippi River froze just north of New Orleans, and the East Coast, especially New England, had an extremely cold winter. Depending on a new eruption, Katla could cause some serious weather changes." No one knows whether this situation will continue to escalate. And the mainstream media surely won't help cause a global panic. But if the Mississippi River freezes in Louisiana, I think the economy will do the same.
<urn:uuid:5b1d02c9-4dd1-4763-ae41-0e8cc059955f>
2.90625
347
Personal Blog
Science & Tech.
46.863356
95,507,068
Vertical pressure variation Vertical pressure variation is the variation in pressure as a function of elevation. Depending on the fluid in question and the context being referred to, it may also vary significantly in dimensions perpendicular to elevation as well, and these variations have relevance in the context of pressure gradient force and its effects. However, the vertical variation is especially significant, as it results from the pull of gravity on the fluid; namely, for the same given fluid, a decrease in elevation within it corresponds to a taller column of fluid weighing down on that point. A relatively simple version of the vertical fluid pressure variation is simply that the pressure difference between two elevations is the product of elevation change, gravity, and density. The equation is as follows: ΔP = ρ g Δh, where - P is pressure, - ρ is density, - g is acceleration of gravity, and - h is height. The delta symbol indicates a change in a given variable. Since g is negative, an increase in height will correspond to a decrease in pressure, which fits with the previously mentioned reasoning about the weight of a column of fluid. When density and gravity are approximately constant (that is, for relatively small changes in height), simply multiplying height difference, gravity, and density will yield a good approximation of pressure difference. Where different fluids are layered on top of one another, the total pressure difference would be obtained by adding the two pressure differences; the first being from point 1 to the boundary, the second being from the boundary to point 2; which would just involve substituting the ρ and Δh values for each fluid and taking the sum of the results. If the density of the fluid varies with height, mathematical integration would be required. Whether or not density and gravity can be reasonably approximated as constant depends on the level of accuracy needed, but also on the length scale of height difference, as gravity and density also decrease with higher elevation. For density in particular, the fluid in question is also relevant; seawater, for example, is considered an incompressible fluid; its density can vary with height, but much less significantly than that of air. Thus water's density can be more reasonably approximated as constant than that of air, and given the same height difference, the pressure differences in water are approximately equal at any height. The barometric formula depends only on the height of the fluid chamber, and not on its width or length. Given a large enough height, any pressure may be attained. This feature of hydrostatics has been called the hydrostatic paradox. As expressed by W. H. Besant, - Any quantity of liquid, however small, may be made to support any weight, however large. The Dutch scientist Simon Stevin was the first to explain the paradox mathematically. In 1916 Richard Glazebrook mentioned the hydrostatic paradox as he described an arrangement he attributed to Pascal: a heavy weight W rests on a board with area A resting on a fluid bladder connected to a vertical tube with cross-sectional area α. Pouring water of weight w down the tube will eventually raise the heavy weight. Balance of forces leads to the equation W = (A/α) w. Glazebrook says, "By making the area of the board considerable and that of the tube small, a large weight W can be supported by a small weight w of water. This fact is sometimes described as the hydrostatic paradox."
In the context of Earth's atmosphere If one is to analyze the vertical pressure variation of the Atmosphere of Earth, the length scale is very significant (troposphere alone being several kilometres tall; thermosphere being several hundred kilometres) and the involved fluid (air) is compressible. Gravity can still be reasonably approximated as constant, because length scales on the order of kilometres are still small in comparison to Earth's radius, which is on average about 6371 km, and gravity is a function of distance from Earth's core. Density, on the other hand, varies more significantly with height. It follows from the ideal gas law that ρ = mP/(kT), where - m is average mass per air molecule, - P is pressure at a given point, - k is the Boltzmann constant, - T is the temperature in kelvins. Put more simply, air density depends on air pressure. Given that air pressure also depends on air density, it would be easy to get the impression that this was a circular definition, but it is simply interdependency of different variables. This then yields a more accurate formula, of the form Ph = P0 exp(−mgh/(kT)), where - Ph is the pressure at height h, - P0 is the pressure at reference point 0 (typically referring to sea level), - m is the mass per air molecule, - g is gravity, - h is height difference from reference point 0, - k is the Boltzmann constant, - T is the temperature in kelvins. Therefore, instead of pressure being a linear function of height as one might expect from the more simple formula given in the "basic formula" section, it is more accurately represented as an exponential function of height. Note that even that is a simplification, as temperature also varies with height. However, the temperature variation within the lower layers (troposphere, stratosphere) is only in the dozens of degrees, as opposed to the difference between either and absolute zero, which is in the hundreds, so it is a reasonably small difference. For smaller height differences, including those from top to bottom of even the tallest of buildings (like the CN Tower) or for mountains of comparable size, the temperature variation will easily be within the single digits. (See also lapse rate.) An alternative derivation, shown by the Portland State Aerospace Society, is used to give height as a function of pressure instead. This may seem counter-intuitive, as pressure results from height rather than vice versa, but such a formula can be useful in finding height based on pressure difference when one knows the latter and not the former. Different formulas are presented for different kinds of approximations; for comparison with the previous formula, the first referenced from the article will be the one applying the same constant-temperature approximation; in which case: z = (RT/g) ln(P0/P), where (with values used in the article) - z is the elevation, - R is the specific gas constant = 287.053 J/(kg K), - T is the absolute temperature in kelvins = 288.15 K at sea level, - g is the acceleration due to gravity = 9.80665 m/s2, - P is the pressure at a given point at elevation z, and - P0 is pressure at the reference point = 101,325 Pa at sea level. A more general formula derived in the same article accounts for a linear change in temperature as a function of height (lapse rate), and reduces to the above when the temperature is constant: z = (T0/L) [(P/P0)^(−LR/g) − 1], where - L is the atmospheric lapse rate (change in temperature divided by distance) = −6.5×10−3 K/m, and - T0 is the temperature at the same reference point for which P = P0, and the other quantities are the same as those above. This is the recommended formula to use.
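To make the formulas above concrete, here is a small Python sketch that evaluates the constant-temperature barometric formula and the two altitude-from-pressure expressions using the standard-atmosphere values quoted in the text; the function names and example altitudes are our own illustration, not part of the source article.

```python
# Sketch: pressure vs. altitude using the formulas quoted above.
# Constants are the standard-atmosphere values from the text.
import math

P0 = 101325.0      # sea-level pressure, Pa
T0 = 288.15        # sea-level temperature, K
G  = 9.80665       # acceleration due to gravity, m/s^2
R  = 287.053       # specific gas constant for dry air, J/(kg K)
L  = -6.5e-3       # temperature lapse rate, K/m

def pressure_isothermal(z, T=T0):
    """Constant-temperature barometric formula: P = P0 * exp(-g z / (R T))."""
    return P0 * math.exp(-G * z / (R * T))

def altitude_isothermal(P, T=T0):
    """Invert the constant-T formula: z = (R T / g) * ln(P0 / P)."""
    return (R * T / G) * math.log(P0 / P)

def altitude_with_lapse_rate(P):
    """General form with linear lapse rate: z = (T0/L) * ((P/P0)**(-L R / g) - 1)."""
    return (T0 / L) * ((P / P0) ** (-L * R / G) - 1.0)

if __name__ == "__main__":
    for z in (0.0, 553.0, 3000.0, 8848.0):   # ground, CN Tower, a mountain, Everest
        P = pressure_isothermal(z)
        print(f"z = {z:7.0f} m  ->  P ~ {P / 1000:7.2f} kPa  "
              f"(height back from P: isothermal {altitude_isothermal(P):7.0f} m, "
              f"lapse-rate {altitude_with_lapse_rate(P):7.0f} m)")
```

Running it shows the two altitude inversions agreeing near the ground and drifting apart with height, which is exactly the limitation of the constant-temperature assumption that the article points out.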
- "The Barometric Formula". - Besant, W. H. (1900). Elementary Hydrostatics. Internet Archive. George Bell & Sons. p. 11. - Roux, Sophie (25 Sep 2012). The Mechanization of Natural Philosophy. Springer Science & Business Media. p. 160. ISBN 9400743459. Stevin provides an original mathematical demonstration of the so-called hydrostatic paradox - Glazebrook, Richard (1916). Hydrostatics: An elementary textbook, theoretical and practical. Internet Archive. Cambridge University Press. p. 42. - Greenslade, Jr., Thomas B. "Hydrostatic paradox". Kenyon College. - on YouTube - "Radius of the Earth". - "Newton's Law Of Gravity". - "A Quick Derivation relating altitude to air pressure" (PDF). - Merlino, Robert L. (2003). "Statics – Fluids at rest". Retrieved 2014-11-20.
<urn:uuid:be7a65b3-d5be-4c26-9679-3c7de7239628>
4.0625
1,700
Knowledge Article
Science & Tech.
42.627404
95,507,079
20 July 2018 Better reduction of aqueous carbon dioxide Published online 11 February 2015 Researchers design an improved electrocatalyst to selectively convert CO2 to CO. A team of researchers have found an improved catalyst to make the conversion of carbon dioxide to carbon monoxide — one of the most sought-after reactions in modern chemistry — more efficient. Plants can perform photosynthesis, the remarkable ability to convert carbon dioxide from the atmosphere to organic materials using energy from sunlight. For years, chemists have attempted to mimic such a process. Instead of light, electricity can be used to reduce CO2 to CO, which in turn can be further reduced to organic hydrocarbons, but this needs an electrocatalyst, which facilitates the transfer of electrons from an electrode to a reactant. Publishing in Angewandte Chemie1, a team of researchers from the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia show that electrocatalysts made of an alloy of copper and indium are good candidates. The team prepared a copper-indium (Cu-In) electrode by immersing thermally oxidized metallic copper sheets in a solution containing indium. They then applied a voltage to simultaneously reduce the oxidized copper again — which improved properties of the catalyst — and electrochemically deposit metallic indium on to the electrode. When they used an aqueous solution saturated with dissolved carbon dioxide, they found that the Cu-In electrode had a carbon monoxide selectivity of 95%, compared to 30% using only oxidized copper, and showed remarkable stability over long reaction periods. Cu-In electrodes are also far less expensive than noble-metal-based electrodes. - Rasul, S. et al. A highly selective copper–indium bimetallic electrocatalyst for the electrochemical reduction of aqueous CO2 to CO. Angew. Chem. Int. Edit. 54, 2146–2150 (2015).
<urn:uuid:5caaacee-99d2-475b-b17d-09e1bbfec77d>
3.515625
404
Truncated
Science & Tech.
24.782912
95,507,086
Is the coefficient of oxygen 7 when the equation C2H6 + O2 → CO2 + H2O is balanced? The solution is attached below in two formats. One is in Word XP format, while the other is in Adobe PDF format. Therefore, you can choose the format that is most suitable to you. The short answer to your question is yes, the coefficient of the oxygen is 7 in the balanced equation. If you want to see the mathematical procedure I use to balance any equation, you can look at the process in the solution files attached. To balance an equation we assign an arbitrary coefficient to each compound that participates ... This solution is provided in 276 words and shows step-by-step how to balance the equation and find the oxygen coefficient.
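For reference, the balanced equation is 2 C2H6 + 7 O2 → 4 CO2 + 6 H2O, so the smallest whole-number coefficient on O2 is indeed 7. The short Python sketch below (an illustration added here, not part of the BrainMass solution) simply counts atoms on each side to confirm the balance.

```python
# Sketch: verify the balanced combustion of ethane by counting atoms.
# 2 C2H6 + 7 O2 -> 4 CO2 + 6 H2O  (coefficients are the answer being checked)
from collections import Counter

def atoms(formula_counts, coefficient):
    """Multiply a per-molecule atom count by its stoichiometric coefficient."""
    return Counter({el: n * coefficient for el, n in formula_counts.items()})

C2H6 = {"C": 2, "H": 6}
O2   = {"O": 2}
CO2  = {"C": 1, "O": 2}
H2O  = {"H": 2, "O": 1}

left  = atoms(C2H6, 2) + atoms(O2, 7)
right = atoms(CO2, 4) + atoms(H2O, 6)

print(left, right, left == right)   # both sides: 4 C, 12 H, 14 O -> True
```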
<urn:uuid:cc71f748-2cbc-48fc-941d-35dfa05df851>
2.90625
182
Q&A Forum
Science & Tech.
64.221802
95,507,096
Remediation/Alleviation of Metal(s) Contaminated Media A major role of geochemistry in environmental projects is to assess clean-up possibilities for ecosystems that contain pollutant concentrations of potentially toxic metals that could access a foodweb. This role extends to remediation of environments where, in addition to heavy metals, extreme conditions occur that threaten ecosystem life, such as low pH (acidic) waters or waters with limited BOD capacity. The targeted cleanup media include solids, liquids and gases from contaminated soils, groundwater and surface waters, sediment (fluvial, lacustrine, estuarine, marine), waste disposal sites and sewage sludge (industrial, agricultural, mining and municipal), and chimney emissions (e.g., smelting and electricity generating facilities). Keywords: Heavy Metal; Sewage Sludge; Toxic Metal; Acid Mine Drainage; Water Hyacinth
<urn:uuid:8bc2221c-a307-411f-8a7c-5a3532f7c7bb>
2.96875
189
Truncated
Science & Tech.
0.119167
95,507,099
Scientists studying the genes and proteins of human cells infected with a common cold virus have developed a new gene identification technique that could increase the genetic information we hold on animals by around 70 to 80 per cent.
The findings, published in Nature Methods, could revolutionise our understanding of animal genetics and disease, and improve our knowledge of dangerous viruses such as SARS that jump the species barrier from animals to humans.
Modern advances in genome sequencing — the process of determining the genetic information and variation controlling everything from our eye colour to our vulnerability to certain diseases — have enabled scientists to uncover the genetic codes of a wide range of animals, plants and insects.
Until now, correctly identifying the genes and proteins hidden inside the genetic material of a newly sequenced species has been a monumental undertaking, requiring the careful observation and cataloguing of vast amounts of data about the thousands of individual genes that make up any given animal, plant or insect.
Dr David Matthews, the study's lead author and a Senior Lecturer in Virology at the University of Bristol's School of Cellular and Molecular Medicine, said: "Gene identification is mainly led by computer programmes which search the genome for regions that look like genes already identified in other animals or humans. However, this type of analysis is not always effective."
The Bristol team has now discovered a more effective way of detecting the genetic information present in animals, plants and insects, using cutting-edge analysis tools to directly observe the genes and all the proteins they make.
To prove their technique worked, the researchers conducted an experiment to see how good their process was at gene discovery. Human cells were infected with a well-understood common cold bug to mimic a newly discovered virus. These infected cells were then analysed using the technique as if they were cells from a newly sequenced organism infected with a newly discovered virus. The resulting list of "discovered" genes and proteins, when compared to the genetic information already known about humans and the cold virus, proved extremely successful and demonstrated the power of this method.
A similar analysis of hamster cells provided directly observed evidence for the existence of thousands of genes and proteins in hamsters in a single, relatively inexpensive experiment. Direct evidence for the existence of almost all of these genes and proteins in hamsters is not available in the 'official' lists of hamster genes and proteins.
Dr Matthews added: "These findings open up the potential to take powerful analysis tools currently used to study human diseases and apply them to study any animal, insect or even plants – something previously either very challenging or simply not possible. This technique will also make it easier and much more efficient for scientists to study anything from farm animals and their diseases to insect pests that damage crops.
"In recent years, a number of dangerous new viruses have been transmitted from animals to humans, including influenza, SARS, Ebola, Hendra and Nipah viruses. Earlier this year three people became seriously ill and two of them died when they contracted a new SARS-like virus in the Middle East which is thought to have come directly from bats.
"Why bats harbour these viruses with limited ill effect is a mystery, as the genetic make-up of these creatures is poorly understood.
"We are starting to apply our technique to laboratory grown bat cells to analyse the genetic and protein content of bats to gain more insight into their genetics and to understand how they are able to apparently co-exist with these viruses which all too often prove fatal in humans."
Caroline Clancy | EurekAlert!
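The article describes the approach only at a high level. Purely as an illustration of the general idea behind matching observed proteins to an unannotated genome (not the Bristol team's actual pipeline), one common building block is to translate a genome region in all six reading frames and check whether a peptide seen by mass spectrometry falls inside any of them. All sequences and the helper names (revcomp, translate, six_frame_translations, peptide_supported) below are invented for this sketch.

```python
# Illustrative sketch of proteogenomic gene discovery: translate a genome
# region in all six reading frames and check whether an observed peptide
# (e.g. from mass spectrometry) occurs in any of them.
# The sequences here are toy examples, not real data.

CODON_TABLE = {
    "ATG": "M", "TGG": "W", "TTT": "F", "TTC": "F", "GGA": "G", "GGC": "G",
    "AAA": "K", "GAA": "E", "GAT": "D", "TAA": "*", "TAG": "*", "TGA": "*",
    # ... a full standard codon table would be used in practice
}

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def translate(seq):
    """Translate DNA codon by codon; codons missing from the table become 'X'."""
    return "".join(CODON_TABLE.get(seq[i:i + 3], "X")
                   for i in range(0, len(seq) - 2, 3))

def six_frame_translations(genome):
    """Yield the protein translation of all six reading frames."""
    for strand in (genome, revcomp(genome)):
        for offset in range(3):
            yield translate(strand[offset:])

def peptide_supported(genome, peptide):
    """True if the observed peptide occurs in any reading frame."""
    return any(peptide in frame for frame in six_frame_translations(genome))

# Toy usage: a peptide 'MKE' observed by mass spec supports a coding region.
genome = "ATGAAAGAATGA"
print(peptide_supported(genome, "MKE"))  # True
```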
<urn:uuid:ccb2f7be-8062-45bd-b6d8-5c86b959ef1c>
3.859375
1,284
Content Listing
Science & Tech.
31.461849
95,507,117