London is completely blanketed by the black plume of smoke from Europe's worst peacetime fire in this Envisat image, taken within five hours of the blaze beginning. This image was acquired at 10:45 GMT on Sunday morning by the Medium Resolution Imaging Spectrometer (MERIS), one of ten instruments aboard Envisat, Europe's largest satellite for environmental monitoring. This Reduced Resolution mode image has a spatial resolution of 1200 metres, and shows the cloud spread across a span of around 140 km. The pall of smoke comes from a fire at Buncefield oil depot on the outskirts of Hemel Hempstead. Buncefield is the fifth largest fuel storage depot in the UK, distributing millions of tonnes of petrol and other oil products per year, including aviation fuel to nearby Luton and Heathrow Airports. Mariangela D'Acunto | alfa
ACSL Key Findings

ABRUPT CHANGES IN THE EARTH'S CLIMATE SYSTEM: ABRUPT CHANGE IN SEA LEVEL

- Since the mid-19th century, small glaciers (sometimes called "glaciers and ice caps") have been losing mass at an average rate equivalent to 0.3 to 0.4 millimeters per year of sea level rise.
- The best estimate of the current (2007) mass balance of small glaciers is about -400 gigatons per year (Gt a-1), or nearly 1.1 millimeters sea level equivalent per year.
- The mass balance loss of the Greenland Ice Sheet during the period with good observations increased from 100 Gt a-1 in the mid-1990s to more than 200 Gt a-1 for the most recent observations in 2006. Much of the loss is from increased summer melting as temperatures rise, but an increasing proportion is from enhanced ice discharge down accelerating glaciers.
- The mass balance for Antarctica was a net loss of about 80 Gt a-1 in the mid-1990s, increasing to almost 130 Gt a-1 in the mid-2000s. There is little surface melting in Antarctica, and the substantial ice losses from West Antarctica and the Antarctic Peninsula are very likely caused by increasing ice discharge as glacier velocities increase.
- During the last interglacial period (~120 thousand years ago), with carbon dioxide levels similar to preindustrial values and Arctic summer temperatures up to 4 °C warmer than today, sea level was 4-6 meters above present. The temperature increase during the Eemian was the result of changes in Earth's orbit. During the last two deglaciations, sea level rise averaged 10-20 millimeters per year, with large "meltwater fluxes" exceeding 50 millimeters per year of sea level rise lasting several centuries.
- The potentially sensitive regions for rapid changes in ice volume are those with ice masses grounded below sea level, such as the West Antarctic Ice Sheet, with 5 to 6 meters sea level equivalent, or large glaciers in Greenland like the Jakobshavn Isbrae (also known as Jakobshavn Glacier and, in Greenlandic, Sermeq Kujalleq), with an over-deepened channel reaching far inland. Total breakup of the Jakobshavn Isbrae ice tongue, as well as of other tidewater glaciers and ice cap outlets, was preceded by very rapid thinning.
- Several ice shelves in Antarctica are thinning, and their area declined by more than 13,500 square kilometers in the last 3 decades of the 20th century, punctuated by the collapse of the Larsen A and Larsen B ice shelves, soon followed by several-fold increases in the velocities of their tributary glaciers.
- The interaction of warm waters with the periphery of the large ice sheets represents a strong potential cause of abrupt change in the big ice sheets, and future changes in ocean circulation and ocean temperatures will very likely produce changes in ice-shelf basal melting, but the magnitude of these changes cannot currently be modeled or predicted. Moreover, calving, which can originate in fractures far back from the ice front, and ice-shelf breakup are very poorly understood.
- Existing models suggest that climate warming would result in increased melting from coastal regions in Greenland and an overall increase in snowfall. However, they are incapable of realistically simulating the outlet glaciers that discharge ice into the ocean and cannot predict the substantial acceleration of some outlet glaciers that we are already observing.
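The mass-balance figures above can be checked with one line of arithmetic: spreading a gigaton of meltwater (1 Gt of water is about 10^9 cubic meters) over the global ocean surface gives its sea level equivalent. A minimal sketch in Python; the ocean area here is a rounded textbook value, not a figure from the findings above:

```python
# Convert ice mass balance (Gt per year) to sea level equivalent (mm per year).
OCEAN_AREA_M2 = 3.61e14   # approximate global ocean surface area (assumed round value)
M3_PER_GT = 1e9           # 1 Gt of water = 1e12 kg = 1e9 m^3 at ~1000 kg/m^3

def gt_per_year_to_mm_slr(gt: float) -> float:
    """Sea level rise in mm/yr implied by a mass loss of `gt` gigatons per year."""
    meters_per_year = gt * M3_PER_GT / OCEAN_AREA_M2
    return meters_per_year * 1000.0

# The small-glacier estimate from the findings: ~400 Gt/yr
print(round(gt_per_year_to_mm_slr(400), 2))  # ~1.11 mm/yr, matching "nearly 1.1 mm"
```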
A technological fix, technical fix, technological shortcut or solutionism refers to the attempt to use engineering or technology to solve a problem (often one created by earlier technological interventions). Some references define a technological fix as an "attempt to repair the harm of a technology by modification of the system"; this might involve modification of the machine, modification of the procedures for operating and maintaining it, or both. Technological fixes are inevitable in modern technology: it has been observed that many technologies, although invented and developed to solve certain perceived problems, often create other problems in the process, known as externalities. Technological fix is the idea that all problems can find solutions in better and new technologies. It is now used as a dismissive phrase to describe cheap, quick fixes that use inappropriate technologies; these fixes often create more problems than they solve, or merely give people the sense that the problem has been solved. In the contemporary context, the term is sometimes used to refer to the idea of using data and intelligent algorithms to supplement and improve human decision making, in the hope of ameliorating the bigger problem. One critic, Evgeny Morozov, defines this as "Recasting all complex social situations either as neat problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized--if only the right algorithms are in place." While some criticize this approach as detrimental to efforts to truly solve these problems, others find merit in such technological improvements as complements to existing activist and policy efforts. An example of the criticism is how policy makers may be tempted to think that installing smart energy monitors will help people conserve energy, thus mitigating global warming, rather than focusing on the arduous process of passing laws to tax carbon. Another example is thinking of obesity as a lifestyle choice of eating high-caloric foods and not exercising enough, rather than viewing obesity as a social and class problem in which individuals are predisposed to eat certain kinds of food (due to the lack of affordable health-supporting food in urban food deserts), to lack optimally evidence-based health behaviors, and to lack proper health care to mitigate behavioral outcomes. The technological fix for climate change is an example of the use of technology to restore the environment, through strategies such as geo-engineering and renewable energy. Geo-engineering is referred to as "the artificial modification of Earth's climate systems through two primary ideologies, Solar Radiation Management (SRM) and Carbon Dioxide Removal (CDR)". Different schemes, projects and technologies have been designed to tackle the effects of climate change, usually by removing CO2 from the air, as with Klaus Lackner's prototype CO2 air-capture device, or by limiting the amount of sunlight that reaches the Earth's surface, for instance with space mirrors. However, "critics by contrast claim that geoengineering isn't realistic – and may be a distraction from reducing emissions." It has been argued that geo-engineering is an adaptation to global warming rather than a solution: it allows TNCs (transnational corporations), individuals and governments to avoid facing the fact that global warming is a crisis that needs to be dealt with head-on by reducing emissions and implementing green technologies, rather than by developing ways to control the environment while greenhouse gases continue to be released into the atmosphere.

Renewable energy is another example of a technological fix, as technology is used in attempts to reduce and mitigate the effects of global warming. Renewable energy refers to technologies designed to be eco-friendly and efficient for the well-being of the Earth. They are generally regarded as infinite energy sources, which means they will never run out, unlike finite fossil fuels such as oil and coal. They additionally release no greenhouse gases such as carbon dioxide, which harms the planet by trapping heat in the atmosphere. Examples of renewable energy include wind turbines, solar panels and kinetic energy from waves. These energies are regarded as a technological fix because they have been designed and innovated to overcome energy insecurity, as well as to help protect the Earth from the harmful emissions released by non-renewable energy sources, and thus to combat global warming. It is also known that such technologies will in turn require their own technological fixes. For example, some types of solar energy have local impacts on ambient temperature, which can be a hazard to birdlife.

It has been made explicit within society that the world's population is rapidly increasing, with "UNICEF estimating that an average of 353,000 babies are born each day around the world." It is therefore expected that food production will struggle to keep pace with demand. Ester Boserup highlighted in 1965 that when the human population increases and food production decreases, an innovation will take place. This can be demonstrated in the technological development of hydroponics and genetically modified crops.

Hydroponics is an example of a technological fix. It demonstrates the ability of humans to recognise a problem within society, such as the lack of food for an increasing population, and to attempt to fix it with the development of an innovative technology. Hydroponics is a method of food production that increases productivity in an "artificial environment." The soil is replaced by a mineral solution left around the plant roots. Removing the soil allows a greater crop yield, as there is less chance of soil-borne diseases, and makes it possible to monitor plant growth and mineral concentrations. This innovative technology to yield more food reflects the ability of humans to develop their way out of a problem, portraying a technological fix.

Genetically modified organisms

Genetically modified organisms (GMOs) reflect the use of technology to innovate our way out of a problem, such as the lack of food for the growing population, demonstrating a technological fix. GM crops can create many advantages, such as higher food yields, added vitamins and increased farm profits. Depending on the modifications, they may also introduce the problem of increasing resistance to pesticides and herbicides, which may inevitably precipitate the need for further fixes in the future. Golden rice is one example of a technological fix.
It demonstrates the ability of humans to develop and innovate themselves out of problems such as vitamin A deficiency, which the World Health Organization has reported affects about 250 million preschool children, in countries such as Taiwan and the Philippines. Through the technological development of GM crops, scientists were able to develop golden rice with genetically higher levels of beta-carotene (a precursor of vitamin A) that can be grown in these countries. This enables healthier and more fulfilling lifestyles and consequently helps to reduce deaths caused by the deficiency.

Externalities refer to the unforeseen or unintended consequences of technology. It is evident that everything new and innovative can potentially have negative effects, especially in a new area of development. Although technologies are invented and developed to solve certain perceived problems, they often create other problems in the process.

DDT was initially used by the military in World War II to control a range of diseases, from malaria to bubonic plague, and to kill body lice. Owing to its efficiency, DDT was soon adopted as a farm pesticide to help maximise crop yields and so cope with the rising population's food demands after WWII. The pesticide proved extremely effective at killing bugs and animals on crops, and was often referred to as the "wonder-chemical." However, despite being banned for over forty years, we are still facing the externalities of this technology. DDT was found to accumulate within the fatty cells of both humans and animals, with major health impacts, highlighting that technological fixes have their negatives as well as their positives. Reported effects on human health include:
- Breast and other cancers
- Male infertility
- Miscarriages and low birth weight
- Developmental delay
- Nervous system and liver damage
Effects on wildlife include:
- DDT is toxic to birds when eaten, and decreases their reproductive rate by causing eggshell thinning and embryo deaths.
- DDT is highly toxic to aquatic animals, affecting various systems including the heart and brain.
- DDT is moderately toxic to amphibians like frogs, toads, and salamanders; immature amphibians are more sensitive to its effects than adults.

Global warming can be a natural phenomenon that occurs in long (geologic) cycles. However, it has been found that the release of greenhouse gases by industry and traffic is heating the earth to unnatural temperatures. This is causing externalities for the environment, such as melting icecaps, shifting biomes and the extinction of many aquatic species through ocean acidification and changing temperatures.

Automobiles with internal combustion engines have revolutionised civilisation and technology. However, whilst the technology was new and innovative, helping to connect places through transport, it was not recognised at the time that burning fossil fuels such as coal and oil inside the engines would release pollutants. This is an explicit example of an externality caused by a technological fix, as the problems created by the technology were not recognised when it was developed.

Different types of technological fixes

High-tech megaprojects are large-scale and require huge sums of investment and revenue to be created. Examples of these high technologies are dams, nuclear power plants, and airports.
They usually cause externalities for other factors such as the environment, are highly expensive, and are top-down governmental plans.

Three Gorges Dam

The Three Gorges Dam is an example of a high-tech technological fix. This multi-purpose navigation, hydropower and flood control scheme was designed to fix the issue of flooding whilst providing efficient, clean, renewable hydro-electric power in China. The Three Gorges Dam is the world's largest power station in terms of installed capacity (22,500 MW). The dam is the largest operating hydroelectric facility in terms of annual energy generation, generating 83.7 TWh in 2013 and 98.8 TWh in 2014, while the annual energy generation of the Itaipú Dam in Brazil and Paraguay was 98.6 TWh in 2013 and 87.8 TWh in 2014. It was estimated to have cost over £25 billion. There have been many externalities from this technology, such as the extinction of the Chinese river dolphin, an increase in pollution, as the river can no longer 'flush' itself, and the displacement of over 4 million local people in the area.
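Those generation figures imply a capacity factor of roughly one half, which is a useful sanity check when comparing dams. A back-of-the-envelope sketch using only the numbers quoted above (the Itaipú capacity of 14,000 MW is an assumed figure, not one given in this text):

```python
# Capacity factor = actual annual generation / (installed capacity * hours in a year).
HOURS_PER_YEAR = 8760

def capacity_factor(twh_generated: float, capacity_mw: float) -> float:
    max_twh = capacity_mw * HOURS_PER_YEAR / 1e6  # MWh -> TWh
    return twh_generated / max_twh

print(round(capacity_factor(98.8, 22_500), 2))  # Three Gorges, 2014: ~0.50
print(round(capacity_factor(98.6, 14_000), 2))  # Itaipu, 2013: ~0.80 (capacity assumed)
```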
Intermediate technology

Intermediate technologies are usually small-scale and cheap, and are most often seen in developing countries. The capital needed to build and create these technologies is usually low, yet labour input is high. Local expertise can be used to maintain them, making them very quick and effective to build and repair. Examples of intermediate technology include water wells, rain barrels and pumpkin tanks.

Appropriate technology

Appropriate technology suits the level of income, skills and needs of the people, and therefore encompasses both high and low technologies. An example can be seen in developing countries that implement technologies suited to their expertise, such as rain barrels and hand pumps. These technologies are low-cost and can be maintained with local skills, making them affordable and efficient. However, to implement rain barrels in a developed country would not be appropriate, as it would not suit the technological advancement apparent there. Appropriate technological fixes therefore take into consideration a country's level of development before being implemented.

Michael and Joyce Huesemann caution against the hubris of large-scale techno-fixes. In the book Techno-Fix: Why Technology Won't Save Us Or the Environment, they show why negative unintended consequences of science and technology are inherently unavoidable and unpredictable, why counter-technologies or techno-fixes are no lasting solutions, and why modern technology, in the current context, does not promote sustainability but instead collapse. Naomi Klein is a prominent opponent of the view that technological fixes alone will solve our problems. She explains her concerns in her book This Changes Everything: Capitalism vs. the Climate, stating that technical fixes for climate change such as geoengineering bring significant risks, as "we simply don't know enough about the Earth system to be able to re-engineer it safely". According to her, the proposed technique of dimming the rays of the sun with sulphate-spraying helium balloons, in order to mimic the cooling effect on the atmosphere of large volcanic eruptions, is highly dangerous, and such schemes will surely be attempted if abrupt climate change gets seriously under way. Various experts and environmental groups have also voiced concerns over approaches that look to techno-fixes as solutions, warning that they would be "misguided, unjust, profoundly arrogant and endlessly dangerous", and that the prospect of a technological 'fix' for global warming, however impractical, lessens political pressure for a real solution.

- Cook, Stephen P. The Worldview Literacy Book. Parthenon Books, 2009. Excerpt at http://www.projectworldview.org/wvtheme46.htm
- The sacred and the limits of the technological fix. AR Drengson, Zygon, 1984, Wiley Online Library.
- The Technological Fix Critique of Agricultural Biotechnology, D Scott, http://wiki.umt.edu/odc/images/d/db/TechFixISU6-25.pdf
- E. Morozov, To Save Everything, Click Here (2013), p. 5
- Alexis C. Madrigal. "Toward a Complex, Realistic, and Moral Tech Criticism". The Atlantic.
- Dorfman, Lori and Lawrence Wallack (2007). "Moving Nutrition Upstream: The Case for Reframing Obesity". Journal of Nutrition Education and Behavior, Vol. 39, Issue 2, S45-S50.
- E. Morozov, To Save Everything, Click Here (2013)
- "Geoengineering Affects You, Your Environment, and Your Loved Ones". Geoengineering Watch. Retrieved 2015-10-22.
- "What is geoengineering?". The Guardian. Retrieved 2015-10-24.
- "High-Tech Solar Projects Fail to Deliver". Once built, U.S. government biologists found the plant's superheated mirrors were killing birds. In April, biologists working for the state estimated that 3,500 birds died at Ivanpah in the span of a year, many of them burned alive while flying through a part of the solar installment where air temperatures can reach 1,000 degrees Fahrenheit.
- "How Many Babies Are Born Each Day?". The World Counts. Retrieved 2015-10-27.
- "Big Picture". Big Picture. Retrieved 2015-11-02.
- "BBC - GCSE Bitesize: Hydroponics". Retrieved 2015-10-27.
- "Just 4 Growers: Global Garden Community". www.just4growers.com. Retrieved 2015-10-27.
- "Q and A About Genetically Modified Crops - Pocket K | ISAAA.org". isaaa.org. Retrieved 2015-10-31.
- Neuman, William; Pollack, Andrew (4 May 2010). "U.S. Farmers Cope With Roundup-Resistant Weeds". The New York Times. p. B1. Retrieved 22 May 2016.
- "Vitamin A Deficiency". www.goldenrice.org. Retrieved 2015-10-31.
- "NPIC - National Pesticide Information Centre" (PDF).
- "The DDT Story | Pesticide Action Network". www.panna.org. Retrieved 2015-10-29.
- World Health Organization. DDT and its derivatives. Environmental aspects. Environmental Health Criteria. Geneva, Switzerland, 1989; Vol. 83.
- Toxicology Profile for (Update); U.S. Department of Health & Human Services, Agency for Toxic Substances and Disease Registry, 1994.
- "Effects of Global Warming". LiveScience.com. Retrieved 2015-11-04.
- "internal-combustion engine: Introduction". www.infoplease.com. Retrieved 2015-11-03.
- "Generation". Itaipu Binacional. Retrieved 2 January 2015.
- "Three Gorges breaks world record for hydropower generation". Xinhua. 1 January 2014. Retrieved 2 January 2015.
- "Drought curbs Itaipu hydro output". Business News Americas. 5 January 2015. Retrieved 5 January 2015.
- "GoConqr - Types of technological fix". GoConqr. Retrieved 2015-11-02.
- Welfens, Paul J. J.; Ryan, Cillian (2011-02-14). Financial Market Integration and Growth: Structural Change and Economic Dynamics in the European Union. Springer Science & Business Media. ISBN 9783642162749.
- "Appropriate Technology text". lsa.colorado.edu. Retrieved 2015-11-03.
- "Moonshots for the Earth: are there technological fixes for climate change?". New Statesman. Retrieved 11 February 2017. - "Techno-Fix". New Society Publishers. Retrieved 11 February 2017. - Gray, John (22 September 2014). "This Changes Everything: Capitalism vs the Climate review – Naomi Klein's powerful and urgent polemic". The Guardian. Retrieved 11 February 2017. - Scipes, Kim. "A Review of Naomi Klein, This Changes Everything" (PDF). Retrieved 11 February 2017. - "Geoengineering Has No Place Among Serious Climate Solutions, Declare Experts". BillMoyers.com. 16 February 2015. Retrieved 11 February 2017. - "Scientists: We Cannot Geoengineer Our Way Out of the Climate Crisis". The Nation. Retrieved 11 February 2017.
Nature and society are full of so-called real-world complex systems, such as protein interactions. Theoretical models, called complex networks, describe them: they consist of nodes, representing any basic element of the network, and links, describing interactions or reactions between two nodes. In the case of protein-interaction studies, reconstruction of complex networks is key, as the available data is often inaccurate and our knowledge of the exact nature of these interactions is limited. For network reconstruction, link prediction—estimating the likelihood that a link exists between two nodes—matters. Now, Chinese scientists have looked at the influence of network structure to shed some light on the robustness of the latest methods used to predict the behaviour of such complex networks. Jin-Xuan Yang and Xiao-Dong Zhang from Shanghai Jiao Tong University in China have just published their work in EPJ B, providing a good reference for choosing a suitable link-prediction algorithm for a given network structure. In this paper, the authors use two parameters of networks—the common neighbours index and the so-called Gini coefficient index—to reveal the relation between the structure of a network and the accuracy of methods used to predict future links. Their study partly involves a statistical analysis, which reveals correlations between characteristics of the network, like the common neighbours index and Gini coefficient index, and other indices that describe the network structure, such as its clustering coefficient or its degree of heterogeneity. The authors test their theory experimentally on a variety of real-world networks and find that the proposed algorithm yields better prediction accuracy and robustness to the network structure than existing methods. This also leads the authors to devise a new method to predict missing links.

Jin-Xuan Yang et al, Revealing how network structure affects the accuracy of link prediction, The European Physical Journal B (2017). DOI: 10.1140/epjb/e2017-70599-4
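To make the two ingredients concrete, here is a minimal Python sketch (using networkx; the toy graph and the scoring loop are illustrative, not the authors' algorithm): it scores unconnected node pairs by their number of common neighbours and computes a Gini coefficient of the degree sequence as a simple measure of structural heterogeneity.

```python
import networkx as nx
import numpy as np

def common_neighbours_scores(G):
    """Score every unconnected node pair by its number of common neighbours."""
    return {(u, v): len(list(nx.common_neighbors(G, u, v)))
            for u, v in nx.non_edges(G)}

def degree_gini(G):
    """Gini coefficient of the degree sequence (0 = homogeneous, toward 1 = heterogeneous)."""
    d = np.sort(np.array([deg for _, deg in G.degree()], dtype=float))
    n = d.size
    return (2 * np.arange(1, n + 1) - n - 1).dot(d) / (n * d.sum())

G = nx.karate_club_graph()  # a standard small test network
scores = common_neighbours_scores(G)
best = max(scores, key=scores.get)
print("most likely missing link:", best, "with", scores[best], "common neighbours")
print("degree Gini coefficient: %.3f" % degree_gini(G))
```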
The ability to forecast how ENSO will respond to global warming thus matters greatly to society. Providing accurate predictions, though, is challenging because ENSO varies naturally over decades and centuries. Instrumental records are too short to determine whether any changes seen recently are simply natural or attributable to man-made greenhouse gases. Reconstructions of ENSO behavior are usually missing adequate records for the tropics where ENSO develops. Help is now underway in the form of a tree-ring record reflecting ENSO activity over the past seven centuries. Tree-rings have been shown to be very good proxies for temperature and rainfall measurements. An international team of scientists spearheaded by Jinbao Li and Shang-Ping Xie, while working at the International Pacific Research Center, University of Hawaii at Manoa, has compiled 2,222 tree-ring chronologies of the past seven centuries from both the tropics and mid-latitudes in both hemispheres. Their work is published in the June 30, 2013 online issue of Nature Climate Change. The inclusion of tropical tree-ring records enabled the team to generate an archive of ENSO activity of unprecedented accuracy, as attested by the close correspondence with records from equatorial Pacific corals and with an independent Northern Hemisphere temperature reconstruction that captures well-known teleconnection climate patterns. These proxy records all indicate that ENSO was unusually active in the late 20th century compared to the past seven centuries, implying that this climate phenomenon is responding to ongoing global warming. "In the year after a large tropical volcanic eruption, our record shows that the east-central tropical Pacific is unusually cool, followed by unusual warming one year later. Like greenhouse gases, volcanic aerosols perturb the Earth's radiation balance. This supports the idea that the unusually high ENSO activity in the late 20th century is a footprint of global warming" explains lead author Jinbao Li. "Many climate models do not reflect the strong ENSO response to global warming that we found," says co-author Shang-Ping Xie, meteorology professor at the International Pacific Research Center, University of Hawaii at Manoa and Roger Revelle Professor at Scripps Institution of Oceanography, University of California at San Diego. "This suggests that many models underestimate the sensitivity to radiative perturbations in greenhouse gases. Our results now provide a guide to improve the accuracy of climate models and their projections of future ENSO activity. If this trend of increasing ENSO activity continues, we expect to see more weather extremes such as floods and droughts." Citation: Li, J., S.-P. Xie, E. R. Cook, M. Morales, D. Christie, N. Johnson, F. Chen, R. D'Arrigo, A. Fowler, X. Gou, and K. Fang (2013): El Niño modulations over the past seven centuries. Nature Climate Change. http://dx.doi.org/10.1038/nclimate1936 This research was funded by the National Science Foundation, the National Basic Research Program of China (2012CB955600), the National Oceanic and Atmospheric Administration, the Japan Agency for Marine-Earth Science and Technology, FONDECYT (No.1120965), CONICYT/FONDAP/15110009, CONICET and IAI (CRN2047). Author Contact: Jinbao Li, currently at: email@example.com, +852 3917-7101, The University of Hong Kong. Shang-Ping Xie, currently at: firstname.lastname@example.org, (858) 822-0053, Scripps Institution of Oceanography. International Pacific Research Center Media Contact: Gisela E. 
Speidel, email@example.com, (808) 956-9252. The International Pacific Research Center (IPRC) of the School of Ocean and Earth Science and Technology (SOEST) at the University of Hawaii at Manoa, is a climate research center founded to gain greater understanding of the climate system and the nature and causes of climate variation in the Asia-Pacific region and how global climate changes may affect the region. Established under the "U.S.-Japan Common Agenda for Cooperation in Global Perspective" in October 1997, the IPRC is a collaborative effort between agencies in Japan and the United States. Talia S Ogliore | EurekAlert!
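A note on what "ENSO activity" means operationally: reconstructions like this one typically quantify it as the running variance (or standard deviation) of an annual ENSO index. A minimal sketch, with a synthetic index standing in for the tree-ring reconstruction; the window length and data are assumptions for illustration only:

```python
import numpy as np

def running_activity(index: np.ndarray, window: int = 30) -> np.ndarray:
    """Running standard deviation of an annual ENSO index over `window` years."""
    out = np.full(index.size, np.nan)
    for i in range(window, index.size + 1):
        out[i - 1] = index[i - window:i].std()
    return out

rng = np.random.default_rng(0)
years = np.arange(1300, 2001)
# Synthetic stand-in: noise whose amplitude grows late in the record.
amplitude = 1.0 + 0.5 * (years > 1950)
enso_index = rng.normal(0.0, 1.0, years.size) * amplitude

activity = running_activity(enso_index)
print("mean activity before 1900: %.2f" % np.nanmean(activity[years < 1900]))
print("mean activity after 1950:  %.2f" % np.nanmean(activity[years >= 1950]))
```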
When rain falls from a cloud but doesn't reach the ground, it can create wispy tails known as virga.

What are virga? Virga, from the Latin for 'rod' or 'branch', appear as light wisps attached to the base of a cloud. They are often at their most striking when lit by a red sunset, with a light wind extending the tail into an angled curve.

How do virga form? Simply put, virga are trails of precipitation that fall from the underside of a cloud but evaporate or sublime before they can reach the earth's surface. This happens when falling rain or ice passes through an area of dry or warm air.

What weather is associated with virga? As you might expect, virga are associated with precipitation that does not reach the ground, often in isolated locations and sometimes on the finest of days. On some occasions, however, virga can lead to the development of microbursts, which pose a dangerous threat to aircraft. These microbursts come about as rainfall transitions back into water vapour, removing heat from the air and causing an accelerating sink of colder air, which can produce severe turbulence.

The jellyfish of the skies: virga are often referred to as 'jellyfish clouds' based on their puffy-top appearance with streaky stingers hanging below. Apart from jellyfish, they are often spotted resembling various other objects in the sky.

What clouds are associated with virga? As a supplementary cloud feature, virga occur most frequently with Cirrocumulus, Altocumulus, Altostratus, Nimbostratus, Stratocumulus, Cumulus and Cumulonimbus.
Online Dictionary: translate a word or phrase from Indonesian to English or vice versa, and also from English to English online. Search results for the word or phrase: Magnetism (0.01167 seconds). Found 2 items similar to Magnetism.

English → English (WordNet)
n 1: attraction for iron; associated with electric currents as well as magnets; characterized by fields of force [syn: magnetic attraction, magnetic force]
2: the branch of science that studies magnetism [syn: magnetics]

English → English (gcide)
Magnetism \Mag"net*ism\, n. [Cf. F. magnétisme.]
1. The property, quality, or state of being magnetic; the manifestation of the force in nature which is seen in a magnet. At one time it was believed to be separate from the electrical force, but it is now known to be intimately associated with electricity, as part of the phenomenon of electromagnetism. [1913 Webster +PJC]
2. The science which treats of magnetic phenomena.
3. Power of attraction; power to excite the feelings and to gain the affections. "By the magnetism of interest our affections are irresistibly attracted." --Glanvill.

Animal magnetism: same as hypnotism; at one time believed to be due to a force more or less analogous to magnetism which, it was alleged, is produced in animal tissues and passes from one body to another with or without actual contact. The existence of such a force, and its potentiality for the cure of disease, were asserted by Mesmer in 1775. His theories and methods were afterwards called mesmerism, a name which has been popularly applied to theories and claims not put forward by Mesmer himself. See Mesmerism, Biology, Od, Hypnotism.

Terrestrial magnetism: the magnetic force exerted by the earth, and recognized by its effect upon magnetized needles and bars.
The Atlantic hurricane season runs from June 1st to November 30th, and the Eastern Pacific hurricane season runs from May 15th to November 30th. The term "hurricane" has its origin in the religions of past civilizations. The Mayan storm god was named Hunraken. The Taino people of the Caribbean considered the god Huracan evil. Hurricanes may not be evil, but they are among nature's most powerful storms. Their potential for loss of life and destruction of property is tremendous. Weather & atmosphere education resources: The term weather describes the state of the atmosphere at a given point in time and geographic location. Weather forecasts provide an estimate of the conditions we expect to experience in the near future and are based on statistical models of similar conditions from previous weather events. Temperature, the amount and form of airborne moisture, cloudiness, and the strength of wind are all different components of our weather. Severe weather events such as tornadoes, tropical storms, hurricanes, floods, lightning strikes and extremes of heat or cold can be costly and deadly. Knowing how to recognize threatening weather conditions, where to get reliable information, and how to respond to this information can help save lives. In addition to weather, NOAA also monitors and forecasts other atmospheric processes that affect our planet, such as ozone levels, changing climate conditions, and variables outside Earth's atmosphere such as solar winds. By influencing global temperatures and precipitation, ENSO significantly impacts Earth's ecosystems and human societies. El Niño and La Niña are opposite extremes of ENSO, which refers to cyclical environmental conditions that occur across the Equatorial Pacific Ocean. These changes are due to natural interactions between the ocean and atmosphere: sea surface temperature, rainfall, air pressure, and atmospheric and ocean circulation all influence each other. When storms in outer space occur near Earth or in Earth's upper atmosphere, we call it space weather. Rather than the more commonly known weather within our atmosphere (rain, snow, heat, wind, etc.), space weather comes in the form of radio blackouts, solar radiation storms, and geomagnetic storms caused by disturbances from the Sun. Imagine our weather if Earth were completely motionless, had a flat dry landscape and an untilted axis. This of course is not the case; if it were, the weather would be very different. The local weather that impacts our daily lives results from large global patterns in the atmosphere caused by the interactions of solar radiation, Earth's large ocean, diverse landscapes, and motion in space.
What is Volcanism?

Volcanism is a geological term used to describe the complete range of volcanic eruptions, volcanic landforms, and volcanic materials. Volcanism is driven by the internal heat of a planet and provides evidence of the way in which heat is released from that planet. The type and abundance of volcanoes on the surface of a planet can provide evidence about the level of geologic activity of the planet.

"What is Volcanism." Space Sciences. Encyclopedia.com. http://www.encyclopedia.com/science/news-wires-white-papers-and-books/what-volcanism
Service-orientation in Communication and Applications

Service-oriented architecture (SOA) is an architectural paradigm applied to the development of distributed systems. With the help of design principles like loose coupling and service contracts, it is possible to create a pool of application-agnostic, reusable services. Via orchestration and choreography of such services, complex business processes can be modeled as workflows. The application of SOA principles can improve the agility not only of applications but also of network infrastructures, and offers novel possibilities for the development of future network protocols. All necessary information (dates, files, etc.) can be found in OLAT.
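As a minimal illustration of these principles (service names and methods are invented for the example), the sketch below defines service contracts as abstract interfaces, two loosely coupled implementations that can be swapped independently, and an orchestrator that composes them into a workflow:

```python
from abc import ABC, abstractmethod

# Service contracts: consumers depend only on these interfaces, not on implementations.
class PaymentService(ABC):
    @abstractmethod
    def charge(self, customer_id: str, amount: float) -> str: ...

class ShippingService(ABC):
    @abstractmethod
    def dispatch(self, customer_id: str, item: str) -> str: ...

# Concrete, independently replaceable implementations (stubs for illustration).
class SimplePaymentService(PaymentService):
    def charge(self, customer_id, amount):
        return f"charged {customer_id} {amount:.2f}"

class SimpleShippingService(ShippingService):
    def dispatch(self, customer_id, item):
        return f"dispatched {item} to {customer_id}"

# Orchestration: a business process modeled as a workflow over the contracts.
def order_workflow(payment: PaymentService, shipping: ShippingService,
                   customer_id: str, item: str, price: float) -> list:
    return [payment.charge(customer_id, price),
            shipping.dispatch(customer_id, item)]

print(order_workflow(SimplePaymentService(), SimpleShippingService(),
                     "c42", "book", 19.99))
```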
Edited By: William F Ruddiman
535 pages, Figs, tabs, maps
Covers most of the dramatic transformations of the Earth's surface in recent geologic history, including the collision of the continents, changes in the position of the jet stream and westerly winds, formation of permanent ice sheets over Antarctica and Greenland, and the development of sea-ice cover in the Arctic Ocean.
In a paper published in Genome Research on Nov. 4, scientists at the Genome Institute of Singapore (GIS) report that what was previously believed to be "junk" DNA is one of the important ingredients distinguishing humans from other species. More than 50 percent of human DNA has been referred to as "junk" because it consists of copies of nearly identical sequences. A major source of these repeats is internal viruses that have inserted themselves throughout the genome at various times during mammalian evolution. Using the latest sequencing technologies, GIS researchers showed that many transcription factors, the master proteins that control the expression of other genes, bind specific repeat elements. The researchers showed that from 18 to 33% of the binding sites of five key transcription factors with important roles in cancer and stem cell biology are embedded in distinctive repeat families. Over evolutionary time, these repeats were dispersed within different species, creating new regulatory sites throughout these genomes. Thus, the set of genes controlled by these transcription factors is likely to significantly differ from species to species and may be a major driver for evolution. This research also shows that these repeats are anything but "junk DNA," since they provide a great source of evolutionary variability and might hold the key to some of the important physical differences that distinguish humans from all other species. The GIS study also highlighted the functional importance of portions of the genome that are rich in repetitive sequences. "Because a lot of the biomedical research use model organisms such as mice and primates, it is important to have a detailed understanding of the differences between these model organisms and humans in order to explain our findings," said Guillaume Bourque, Ph.D., GIS Senior Group Leader and lead author of the Genome Research paper. "Our research findings imply that these surveys must also include repeats, as they are likely to be the source of important differences between model organisms and humans," added Dr. Bourque. "The better our understanding of the particularities of the human genome, the better our understanding will be of diseases and their treatments." "The findings by Dr. Bourque and his colleagues at the GIS are very exciting and represent what may be one of the major discoveries in the biology of evolution and gene regulation of the decade," said Raymond White, Ph.D., Rudi Schmid Distinguished Professor at the Department of Neurology at the University of California, San Francisco, and chair of the GIS Scientific Advisory Board. "We have suspected for some time that one of the major ways species differ from one another – for instance, why rats differ from monkeys – is in the regulation of the expression of their genes: where are the genes expressed in the body, when during development, and how much do they respond to environmental stimuli," he added. "What the researchers have demonstrated is that DNA segments carrying binding sites for regulatory proteins can, at times, be explosively distributed to new sites around the genome, possibly altering the activities of genes near where they locate. The means of distribution seem to be a class of genetic components called 'transposable elements' that are able to jump from one site to another at certain times in the history of the organism. The families of these transposable elements vary from species to species, as do the distributed DNA segments which bind the regulatory proteins." 
Dr. White also added, "This hypothesis for formation of new species through episodic distributions of families of gene regulatory DNA sequences is a powerful one that will now guide a wealth of experiments to determine the functional relationships of these regulatory DNA sequences to the genes that are near their landing sites. I anticipate that as our knowledge of these events grows, we will begin to understand much more how and why the rat differs so dramatically from the monkey, even though they share essentially the same complement of genes and proteins." Cathy Yarbrough | EurekAlert!
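The computation behind the "18 to 33% of binding sites embedded in repeats" figure quoted above is a genomic interval overlap. A toy sketch of that calculation follows; the coordinates and counts are invented for illustration, whereas real analyses use full genome-wide binding-site and repeat annotations:

```python
# Fraction of transcription-factor binding sites overlapping annotated repeats.
# Intervals are (chromosome, start, end) half-open tuples; toy data only.
binding_sites = [("chr1", 100, 120), ("chr1", 500, 520), ("chr2", 40, 60)]
repeats = [("chr1", 90, 300), ("chr2", 1000, 2000)]

def overlaps(site, repeat):
    """True if the two intervals share a chromosome and at least one base."""
    return site[0] == repeat[0] and site[1] < repeat[2] and repeat[1] < site[2]

embedded = sum(any(overlaps(s, r) for r in repeats) for s in binding_sites)
print(f"{embedded}/{len(binding_sites)} sites in repeats "
      f"({100 * embedded / len(binding_sites):.0f}%)")  # -> 1/3 sites (33%)
```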
Once you have an interest in astronomy, you will want to start reading about this fascinating subject. Books on all sorts of topics are available; the ones we list are generally concerned with the practical side of astronomy rather than theoretical physics. We present the best selection of astronomy books and sets from PHILIP'S and Cambridge University Press for the fascinating hobby of astronomy! Each book, map or guide has been written or compiled by an expert in the field, such as Patrick Moore, Britain's best-known astronomer, and all are accurate, practical and easy to follow. The range caters for amateur astronomers from beginners through to advanced, as well as general readers. We also have astronomy software and DVD tutorials readily available to help you with your observing and astroimaging tasks.
How many birds are killed by solar farms?

During a meeting at Palm Springs City Hall in June, public commenters tore into the Palen solar farm, which would be built on 4,200 acres of public land just south of Joshua Tree National Park. One of their main complaints: the vast field of solar panels — which would generate enough electricity to power 230,000 average California homes, according to its developer — would kill birds. "We do know that (solar) photovoltaic projects kill birds, especially ones in these Colorado River flyways. There are a lot of numbers now available from Desert Sunlight, Blythe, McCoy," said Kevin Emmerich, co-founder of the environmental group Basin and Range Watch, referring to large solar farms already operating in Riverside County. "I believe the California numbers — there are up to 183 species of birds killed on these solar projects, and over 3,500 individuals. And those are just what's reported." It's a common refrain from desert environmentalists: Solar farms might help limit the carbon emissions responsible for climate change, but they're harming delicate ecosystems and species, from desert tortoises and bighorn sheep to certain migratory birds. Many conservationists have argued that state and federal officials should prioritize rooftop solar panels over large-scale power plants. But at least when it comes to birds, those broad criticisms belie a more nuanced reality. Yes, birds have died at solar farms — most famously at the Ivanpah project in San Bernardino County, where birds have been incinerated as they fly through the "solar flux" reflected by fields of mirrors toward boilers atop three massive towers. But Ivanpah's tower-and-mirror setup is the exception, not the rule. Only two big tower projects have been built in the United States, and it's unclear whether there will be more, in part due to the technology's high costs. Most solar farms use photovoltaic panels like the ones installed on many rooftops, which convert sunlight directly to electricity. And the impact of those facilities on birds is still mostly unknown, according to academic experts, industry officials and leading bird conservationists. Yes, birds have died at solar photovoltaic projects, some of them from crashing into panels or other infrastructure. But there's hardly any science examining how many bird deaths are caused by solar farms, and what can be done to stop them. "Unfortunately, there's not a whole lot that can be said with any certainty right now," said Lee Walston, an environmental scientist at Argonne National Laboratory, a federally funded lab operated by the University of Chicago. Walston was the lead author of a study last year reviewing what little information there is about bird deaths at solar farms. The study noted that avian fatality data was available for just seven large-scale solar plants in the United States, four of which had conducted systematic monitoring for bird carcasses. Only three of the seven projects used solar photovoltaic panels, also known as PV panels. One PV project that has reported mortality data is Desert Sunlight, a 550-megawatt power plant in eastern Riverside County that was the largest solar project in the world when it opened. A contractor for the project says it documented 173 bird deaths during construction, from August 2011 through December 2014, although it didn't conduct systematic monitoring. Thomas Dietsch, a migratory bird biologist at the U.S. Fish and Wildlife Service, said in a presentation earlier this year that 3,545 bird deaths had been reported at seven Southern California solar farms, including Ivanpah, from 2012 through April 2016. Those deaths spanned at least 183 species, including three species listed as endangered or threatened under the federal Endangered Species Act: Ridgway's rail, willow flycatcher and yellow-billed cuckoo. But it's hard to know how accurate or complete those figures are. Most of the monitoring has been done by consultants hired by project owners, and there are no consistent guidelines for how to scour solar plants for dead birds. It's difficult to estimate the overall number of deaths based on the number of birds actually found, in part because scavengers sometimes make off with bird carcasses. Those factors could mean the reported numbers are lowball estimates. At the same time, it's unclear how many of the bird deaths have actually been caused by solar panels and other electrical infrastructure. Bird experts say we don't know much about "background mortality" rates in the desert — the number of birds that might die at project locations even in the absence of solar panels. "There's valuable data (on bird deaths at solar farms). The problem is, most of this is gray literature. It's not peer-reviewed," said Thomas Smith, director of the Center for Tropical Research at UCLA. "You get into a real problem using not-peer-reviewed material in making decisions."
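Where systematic monitoring does happen, raw carcass counts are typically scaled up by correction factors for searcher detection and scavenger removal. A toy version of that arithmetic in Python; the correction factors here are invented for illustration and are not values from any of the studies mentioned:

```python
# Scale a raw carcass count up to an estimated total, correcting for carcasses
# that searchers miss and carcasses removed by scavengers before a search.
def estimated_mortality(carcasses_found: int,
                        searcher_efficiency: float,
                        carcass_persistence: float) -> float:
    """Simple ratio estimator: found / (P(detected) * P(still present))."""
    return carcasses_found / (searcher_efficiency * carcass_persistence)

# Invented example values: searchers find 60% of carcasses present,
# and 30% of carcasses persist until a search.
print(round(estimated_mortality(1068, 0.60, 0.30)))  # ~5933, the same order as Ivanpah's reported 6,185
```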
At least a billion birds travel along the route each year, according to the Audubon Society. Some critics of big solar point to the desert's location along the Pacific Flyway as a cause for concern, but the reality is more complicated. The term "Pacific Flyway" doesn't refer to one specific route, experts say, but rather a broad swath of geographic real estate that encompasses many narrower pathways used by various bird species. The impact of desert solar farms on migratory birds needs to be understood at the level of specific species and specific project sites, Smith said. If you want to know how a proposed solar farm might harm migratory birds, he said, you need to know which bird populations might pass near the project site during their migrations, and what kinds of threats those populations already face. Funded by grants from the California Energy Commission and the developer First Solar, among others, Smith is using cutting-edge genomic techniques to map out migration routes, stopovers and breeding and wintering locations for 10 bird species. Eventually, he hopes to cover 100 species. That information could help developers and government officials determine the best places to build solar farms in the desert while minimizing bird deaths. "It's very cool," said Garry George, renewable energy director for Audubon California. "This work can be applied to lots of things. It's just that it's kicked off by these questions around the solar industry." Government agencies are trying to boost their understanding of solar-bird impacts, too. State and federal officials formed the Multiagency Avian-Solar Collaborative Working Group earlier this year, and they're now finalizing a science plan to identify knowledge gaps, with a focus on California, Arizona and Nevada. The group's draft plan, released this month, lists several research needs, including a better understanding of what draws birds to solar farms, clearer monitoring guidelines and a more robust strategy for deciding when avian impacts should be considered significant. Walston said similar questions have been raised about the wind energy industry over time, leading to more consistent monitoring and efforts to avoid building turbines in sensitive locations. "In many ways, solar is where wind was 10 or 15 years ago," Walston said. "Over time, we're going to see some monitoring guidelines." That process will take years. In the meantime, bird deaths have already become a sticking point as federal officials finalize a series of regulations that will help determine the future of renewable energy development in the California desert. The Desert Renewable Energy Conservation Plan would encourage solar and wind development on nearly 400,000 acres of public land across seven counties, while setting aside 5.3 million acres for conservation. Energy companies that want to build in the development zones would still need to comply with hundreds of new conservation rules, some of which are designed to protect birds. Several of those rules have drawn the ire of solar developers, who say the requirements would constrain or possibly eliminate development in the desert. Environmental groups believe the industry is overreacting, or else posturing for a better deal. Some solar proponents say the industry is being unfairly targeted, pointing to the number of bird deaths from other causes as evidence. Scientists have estimated that between 365 million and 988 million birds die from crashing into buildings and windows in the United States each year.
Cars are estimated to kill between 89 million and 340 million birds in the United States annually, with fossil fuel power plants responsible for about 14.5 million. A 2013 study found that cats kill between 1.4 billion and 3.7 billion birds in the continental United States each year. In another study led by Walston and published earlier this year, Argonne researchers estimated that large solar farms are responsible for somewhere between 37,800 and 138,600 bird deaths in the United States each year. Walston cautioned it's an extremely rough estimate, based on the incomplete and not-necessarily-reliable data that exists for three Southern California solar farms. "We really have to stress that it's preliminary, and it's only meaningful in one region, the Southwest," he said. So far, more bird deaths have been reported at Ivanpah than at any other solar farm. A contractor for the project owner estimated in June that 6,185 birds had died at the power-tower plant during its second year of operation, based on 1,068 bird carcasses and 30 injured birds actually being found at the site. The contractor attributed 572 of those confirmed deaths and injuries to known causes, including collisions and "singeing" from flying through the intense sunlight reflected by the mirrors. Causes couldn't be determined for the other 526 deaths and injuries, according to the contractor. Whatever the true impact of solar farms on avian life, many environmentalists say climate change poses a greater threat to birds than industrial solar projects do. In a 2014 report, Audubon Society scientists found that 314 North American bird species could lose at least half of their current ranges by 2080 due to global warming. The organization described 126 of those species as “climate endangered,” meaning they're likely to lose at least half of their ranges by 2050 if global warming continues unabated. Audubon's chief scientist, Gary Langham, has called global warming "the greatest threat our birds face today." The biggest thing people can do to help birds deal with climate change, he told The Desert Sun earlier this year, is to limit the carbon emissions that are causing temperatures to rise. That will involve replacing coal, oil and natural gas with cleaner sources of energy, like solar and wind. Still, big solar farms shouldn't be given "free passes" to kill birds, said George, from Audubon California. "I don't think that there will be no utility-scale projects in the future. They’re part of the plan," he said. But "if we're going to have them, we're going to put them in the right places, and understand the impacts, and mitigate effectively for them." Sammy Roth writes about energy and the environment for The Desert Sun. He can be reached at firstname.lastname@example.org, (760) 778-4622 and @Sammy_Roth.
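To make the shape of that extrapolation concrete, here is a minimal sketch of the arithmetic: derive a per-megawatt mortality rate from a handful of monitored farms, then scale it by installed capacity. Every number below is an illustrative placeholder rather than the Argonne study's data, and the sketch deliberately omits the corrections for scavenger removal and imperfect searches discussed above.

#include <iostream>

int main() {
    // Hypothetical monitoring results (NOT the Argonne data): each entry
    // is {megawatts of capacity monitored, adjusted bird deaths per year}.
    struct Farm { double mw; double deathsPerYear; };
    const Farm farms[] = { {550.0, 520.0}, {250.0, 180.0}, {300.0, 410.0} };

    double totalMw = 0.0, totalDeaths = 0.0;
    for (const Farm& f : farms) {
        totalMw += f.mw;
        totalDeaths += f.deathsPerYear;
    }
    const double ratePerMw = totalDeaths / totalMw; // deaths per MW per year

    // Assumed national utility-scale PV capacity to scale against (placeholder).
    const double nationalMw = 25000.0;
    std::cout << "Per-MW rate: " << ratePerMw << " deaths/MW/yr\n"
              << "Scaled estimate: " << ratePerMw * nationalMw << " deaths/yr\n";
    return 0;
}

The output is only as defensible as the per-megawatt rates going in, which is why Walston stresses that the published range is preliminary and specific to one region.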
<urn:uuid:60b3fe3f-1247-4004-ab51-2fa55c7ad58f>
2.78125
2,717
News Article
Science & Tech.
42.122229
95,525,820
A team of researchers from UCLA and the University of Wisconsin-Madison has spent the last decade developing a process to analyze the rocks to find out whether they contain the oldest fossils in the world. New analysis of ancient fossils could upend decades of thinking on when the last common ancestor of humans and chimps lived. Even more surprising, humans may have originated in Europe rather than in Africa. This claim is based on a reexamination of purported stone tools and broken mastodon bones that were uncovered in the early 1990s. However, not all researchers who have seen the research agree with the findings. Scientists studying ancient rock formations in Quebec have identified microscopic structures in the rock that they believe are the fossilized remains of microorganisms that lived on Earth between 3.7 and 4.28 billion years ago. New research suggests that the last mammoth population in North America wasn’t hunted to death and didn’t die out from a lack of food.
<urn:uuid:629ec58f-f71d-41bd-a1e4-8b8a95ead838>
3.734375
235
Truncated
Science & Tech.
45.630516
95,525,821
February 2 2011 Astronomy Newsletter Here's the latest article from the Astronomy site at BellaOnline.com. Bode and Bode's Law Johann Elert Bode, the author of the greatest star atlas of the Golden Age of star atlases, is better known today for Bode's Law. Strangely, Bode's Law is neither a law nor original to Bode. So what was it? How did it inspire the Celestial Police? How did Neptune ruin it all? As if two tragedies commemorated in January weren't enough, the first of February is the anniversary of the loss of the space shuttle Columbia. A NASA site is a tribute to all of them. http://www.nasa.gov/externalflash/dor10/ Seven craters on the Moon have been named for those who died on Challenger. They are located within a large impact basin called Apollo. http://www.universetoday.com/wp-content/uploads/2011/01/challenger-580x580.jpg Friday, February 4th is the anniversary of Clyde Tombaugh's birth. Born in 1906, he was the American astronomer who discovered Pluto in 1930. Tombaugh methodically took pictures of the sky and then, from the same position, took more pictures several days afterwards, so that he had at least two pictures of every region. Using a “blink comparator,” he compared a pair of images. The device “blinked” rapidly between the two images so that anything that had moved would stand out. It was still a pretty difficult job. Here are the two plates containing Pluto. http://upload.wikimedia.org/wikipedia/en/thumb/c/c6/Pluto_discovery_plates.png/800px-Pluto_discovery_plates.png European Astrofest – sponsored by Astronomy Now magazine – is on in London Friday and Saturday February 4-5. Here's all the information: http://www.astronomynow.com/astrofest/index.html If you look closely at the picture, you can see me in the front row (next to the two empty seats). That's all for now. Wishing you clear skies. Please visit astronomy.bellaonline.com for even more great content about Astronomy. To participate in online discussions, this site has a community forum all about Astronomy located here - I hope to hear from you sometime soon, either in the forum or in response to this email message. I welcome your feedback! Do pass this message along to family and friends who might also be interested. Remember it's free and without obligation. Mona Evans, Astronomy Editor Unsubscribe from the Astronomy Newsletter Online Newsletter Archive for Astronomy Site Master List of BellaOnline Newsletters
<urn:uuid:c78a9a8b-b904-4971-8a6e-71a00f61abbd>
3.1875
594
News (Org.)
Science & Tech.
53.630586
95,525,850
Unicode and Multibyte Character Set (MBCS) Support Some languages, for example, Japanese and Chinese, have large character sets. To support programming for these markets, the Microsoft Foundation Class Library (MFC) enables two different approaches to handling large character sets: Unicode, using wchar_t-based wide characters and strings encoded as UTF-16; and Multibyte Character Sets (MBCS), using char-based single- or double-byte characters and strings encoded in a locale-specific character set. Microsoft has recommended the MFC Unicode libraries for all new development, and the MBCS libraries were deprecated in Visual Studio 2013 and Visual Studio 2015. This is no longer the case; the MBCS deprecation warnings have been removed in Visual Studio 2017. MFC Support for Unicode Strings The entire MFC class library is conditionally enabled for Unicode characters and strings stored in wide characters as UTF-16. In particular, class CString is Unicode-enabled. Library, debugger, and DLL files are used to support Unicode in MFC (version represents the version number of the file; for example, '140' means version 14.0). CString is based on the TCHAR data type. If the symbol _UNICODE is defined for a build of your program, TCHAR is defined as type wchar_t, a 16-bit character encoding type. Otherwise, TCHAR is defined as char, the normal 8-bit character encoding. Therefore, under Unicode, a CString is composed of 16-bit characters. Without Unicode, it is composed of characters of type char. To complete Unicode programming of your application, you must also: Use the _T macro to conditionally code literal strings to be portable to Unicode. When you pass strings, pay attention to whether function arguments require a length in characters or a length in bytes. The difference is important if you are using Unicode strings. Use portable versions of the C run-time string-handling functions. Use the following data types for characters and character pointers: Use TCHAR where you would use char. Use LPTSTR where you would use char*. Use LPCTSTR where you would use const char*. CString provides the operator LPCTSTR to convert between CString and LPCTSTR. CString also supplies Unicode-aware constructors, assignment operators, and comparison operators. MFC Support for MBCS Strings The class library is also enabled for multibyte character sets, but only for double-byte character sets (DBCS). In a multibyte character set, a character can be one or two bytes wide. If it is two bytes wide, its first byte is a special "lead byte" that is chosen from a particular range, depending on which code page is in use. Taken together, the lead and "trail bytes" specify a unique character encoding. If the symbol _MBCS is defined for a build of your program, type TCHAR, on which CString is based, maps to char. It is up to you to determine which bytes in a CString are lead bytes and which are trail bytes. The C run-time library supplies functions to help you determine this. Under DBCS, a given string can contain all single-byte ANSI characters, all double-byte characters, or a combination of the two. These possibilities require special care in parsing strings. Note that string serialization in MFC can read both Unicode and MBCS strings regardless of which version of the application you are running; your data files are portable between Unicode and MBCS versions of your program. CString member functions use special "generic text" versions of the C run-time functions they call, or they use Unicode-aware functions.
Therefore, for example, if a CString function would typically call strcmp, it calls the corresponding generic-text function _tcscmp instead. Depending on how the symbols _MBCS and _UNICODE are defined, _tcscmp maps as follows:

|Symbols defined|_tcscmp maps to|
|Neither symbol defined|strcmp|
|_MBCS defined|_mbscmp|
|_UNICODE defined|wcscmp|

The symbols _MBCS and _UNICODE are mutually exclusive. CString methods are implemented by using generic data type mappings. To enable both MBCS and Unicode, MFC uses TCHAR for char or wchar_t, LPTSTR for char* or wchar_t*, and LPCTSTR for const char* or const wchar_t*. These ensure the correct mappings for either MBCS or Unicode.
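To ground those mappings, here is a minimal sketch of build-neutral string handling. It uses only the _T macro, TCHAR, and the _tcscmp generic-text function described in this article; whether _UNICODE or _MBCS is defined is left to the project settings, and the program itself is a hypothetical example rather than code from the documentation.

#include <tchar.h>    // TCHAR, _T, and the generic-text mappings
#include <iostream>

int main() {
    // _T compiles the literal as char-based or wchar_t-based text,
    // depending on whether _UNICODE is defined for the build.
    const TCHAR* expected = _T("MFC");
    const TCHAR* actual   = _T("MFC");

    // _tcscmp resolves to strcmp, _mbscmp, or wcscmp per the table above,
    // so the comparison is written once for all three build types.
    if (_tcscmp(expected, actual) == 0) {
        std::cout << "Strings match under this build's character set.\n";
    }

    // sizeof(TCHAR) shows the character width the build selected:
    // 1 byte for ANSI/MBCS builds, 2 bytes for Unicode (UTF-16) builds.
    std::cout << "sizeof(TCHAR) = " << sizeof(TCHAR) << "\n";
    return 0;
}

The sizeof(TCHAR) line also illustrates the article's caution about lengths: a buffer's size in bytes and its length in characters differ in a Unicode build.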
<urn:uuid:92a87421-27aa-46de-8247-4966600bb20f>
3.109375
962
Documentation
Software Dev.
45.310923
95,525,855
Nitrogen is a dynamically typed, interpreted programming language written in Go. Nitrogen draws inspiration from Go, C, and several other languages. It's meant to be a simple, easy to use language for making quick scripts and utilities. Building the Interpreter - Clone the repo: git clone https://github.com/nitrogen-lang/nitrogen - Run make: cd nitrogen && make - Run the interpreter Documentation for the standard library and language is available in the docs directory. Running the Interpreter Nitrogen can run in interactive mode much like other interpreted languages. Run Nitrogen with the -i flag to start the REPL. Run Nitrogen like so: nitrogen filename.ni. The file extension for Nitrogen source files is .ni; the extension for compiled scripts is .nib. Nitrogen can run as an SCGI server using multiple workers and the embedded interpreter for performance. See the SCGI docs for the flag that starts the server and for more details. Command Line Flags nitrogen [options] SCRIPT -i: Run an interactive REPL prompt. -ast: Print a representation of the abstract syntax tree and then exit. (Internal debugging) -version: Print version information. -debug: Print debug information during execution. (Very verbose) -cpuprofile profile.out: Make a CPU profile. (Internal debugging) -memprofile profile.out: Make a memory profile. (Internal debugging) -o file.nib: Output a compiled script to file then exit. -M /module/path: Directory to search for imported modules. This flag can be used multiple times. -al module.so: Autoload a module from the search path. This flag can be used multiple times. Autoloaded modules are loaded before any script is executed. -info file.nib: Print information about a compiled Nitrogen file. Issues and pull requests are welcome. Once I write a contributors guide, please read it ;) Until then, always have an issue open for anything you want to work on so we can discuss. Especially for major design issues. All code should be run through go fmt. Any request where the files haven't been through gofmt will be denied until they're fixed. For anything written in Nitrogen, use 4-space indents, keep lines relatively short, and use camelCase for function names and PascalCase for class names. All contributions must be licensed under the 3-Clause BSD license or a more permissive license such as MIT, or CC0. Any other license will be rejected. Both the language specification and this reference interpreter are released under the 3-Clause BSD License which can be found in the LICENSE file.
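Pulling the documented commands and flags together, a typical session might look like the following sketch. The script name and module directory are placeholders, and the nitrogen binary is assumed to be on the PATH after make; only the flags themselves come from the list above.

# Build the interpreter from source
git clone https://github.com/nitrogen-lang/nitrogen
cd nitrogen && make

# Start an interactive REPL
nitrogen -i

# Run a script, adding an extra module search directory (placeholder path)
nitrogen -M ./modules script.ni

# Compile a script to a .nib file, then inspect the compiled file
nitrogen -o script.nib script.ni
nitrogen -info script.nib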
<urn:uuid:f151fd48-d722-4546-8db7-706aa9ba1559>
2.703125
578
Documentation
Software Dev.
42.772933
95,525,856
Gravitation
Unknown - 2009
Although gravity is the weakest of the fundamental forces, it is nevertheless the most universal, and the easiest to demonstrate! This program demystifies the properties and behavior of gravity with the help of real-world illustrations and animated graphics. Topics include the four fundamental forces or interactions; Newton's Law of Universal Gravitation; the physics involved in microgravity environments; the role played by gravity in the trajectories of space vehicles and satellites, including geostationary satellites; gravitational field strength on other planets; and the inverse square nature of the law of gravitation. Publisher: New York, N.Y. : Films Media Group, c2005 Characteristics: 1 streaming video file (30 min.) : sd., col., digital file. + instructional materials (online)
<urn:uuid:82968991-7fe3-4f61-8a7d-850371e9adf1>
3.3125
164
Product Page
Science & Tech.
25.282088
95,525,872
Scientists had previously established that certain types of aphids live in colonies where they are used as a food source by a neighbouring colony of ants. The ants have been known to bite the wings off the aphids in order to stop them from getting away and depriving the ants of one of their staple foods: the sugar-rich sticky honeydew which is excreted by aphids when they eat plants. Chemicals produced in the glands of ants can also sabotage the growth of aphid wings. The new study shows, for the first time, that ants’ chemical footprints – which are already known to be used by ants to mark out their territory - also play a key role in manipulating the aphid colony, and keeping it nearby. The research, which was carried out by a team from Imperial College London, Royal Holloway University of London, and the University of Reading, used a digital camera and specially modified software to measure the walking speed of aphids when they were placed on filter paper that had previously been walked over by ants. The data showed that the aphids’ movement was much slower when they were on paper that had been walked on by ants, than on plain paper. Furthermore, when placed on a dead leaf, where the aphid’s instinct is to walk off in search of healthy leaves for food, the scientists found that the presence of ants significantly slowed the aphids’ dispersal from the leaf. Lead author Tom Oliver from Imperial’s Department of Life Sciences explains how ants could use this manipulation in a real-life scenario: “We believe that ants could use the tranquillising chemicals in their footprints to maintain a populous ‘farm’ of aphids close to their colony, to provide honeydew on tap. Ants have even been known to occasionally eat some of the aphids themselves, so subduing them in this way is obviously a great way to keep renewable honeydew and prey easily available.” However, Tom points out that the relationship between the ants and the aphids might not be that straightforward: “There are some definite advantages for aphids being ‘farmed’ like this by ants for their honeydew. Ants have been documented attacking and fighting off ladybirds and other predators that have tried to eat their aphids. It’s possible that the aphids are using this chemical footprint as a way of staying within the protection of the ants.” Professor Vincent Jansen of Royal Holloway’s School of Biological Sciences, concludes: “Although both parties benefit from the interaction, this research shows that all is not well in the world of aphids and ants. The aphids are manipulated to their disadvantage: for aphids the ants are a dangerous liaison.” Danielle Reeves | EurekAlert!
<urn:uuid:d399c524-4c15-4af1-9b68-670a77b2fb59>
3.84375
1,226
Content Listing
Science & Tech.
43.633677
95,525,876
Poop Helps Track Hybrid Monkey Mating “Jimmy” is a hybrid male monkey in the study group in Gombe National Park. (Credit: Maneno Mpongo / Gombe Hybrid Monkey Project) A researcher from Florida Atlantic University is the first to document that two genetically distinct species of guenon monkeys inhabiting Gombe National Park in Tanzania, Africa, have been successfully mating and producing hybrid offspring for hundreds, maybe even thousands, of years. Her secret weapon? Poop. Prior studies and conventional wisdom have suggested that the physical characteristics of guenon monkeys, with a variety of dazzling colors and very distinct facial features like bushy beards and huge nose spots, serve to keep them from interbreeding. The idea is that their mate choices and the signals they use to select a mate are species specific and that they share common traits linked to their species. So if their faces don’t match, they shouldn’t be mating, right? Wrong, according to evidence from a novel study published in the International Journal of Primatology. For the study, Kate Detwiler, Ph.D., author and an assistant professor in the Department of Anthropology in FAU’s Dorothy F. Schmidt College of Arts and Letters, who first studied these monkeys in Gombe National Park in 1994, examined the extent and pattern of genetic transfer or gene flow from “red-tailed” monkeys (Cercopithecus ascanius) to “blue” monkeys (Cercopithecus mitis) due to hybridization. These two species are the only forest guenons that colonized the narrow riverine forests along Lake Tanganyika that characterize Gombe National Park. They co-exist in the same forests as Jane Goodall’s chimpanzees and baboons. Detwiler identifies hybrid monkeys by their combined markings from both parental species. She estimates that about 15 percent of this population is made up of hybrids, which is very unusual. Using mitochondrial DNA, extracted non-invasively from the feces of 144 red-tailed monkeys, blue monkeys, and hybrids, Detwiler is the first to show the movement of genetic material from one guenon species to another in an active hybrid zone. After examining the fecal samples, she found that all of the monkeys – hybrids, red-tails, and blues – have red-tailed mitochondrial DNA, all traced back to female red-tailed monkeys. For this lineage of monkeys, it is the first time that science shows that not only is the DNA there, but so are the hybrids. Detwiler used mitochondrial DNA because it is more abundant than nuclear DNA in fecal samples, and only comes from the mother – indicating the maternal species in the hybridizing pair. “There’s a lot of promiscuity taking place in Gombe National Park. Red-tails are mating with blues, blues are mating with red-tails, blues are mating with blues, red-tails are mating with red-tails, and hybrids are mating with everyone,” said Detwiler. “But we’re just not seeing any negative consequences from these two very different species repeatedly mating and producing offspring on an ongoing basis. If the differences in their facial features are so important and signal that they shouldn’t be mating, then why is this happening and why do I keep finding hybrid infants?” A key finding from the study shows that the blue monkeys in Gombe National Park emerged out of the hybrid population, tracing their origins back to hybridization events between resident red-tails and blues most likely from outside the park.
For her control groups, Detwiler collected and examined feces from blue monkeys from a park to the north and a park to the south where hybrids do not exist. These monkeys only had blue monkey mitochondrial DNA. Detwiler speculates that red-tailed monkeys got to Gombe National Park first and thrived in the environment. Male blue monkeys outside the park had to find new homes after they were kicked out of their groups, which happens when they reach sexual maturity. Sex-driven, they ventured out into the landscape to find appropriate mates – female blue monkeys. Instead, they found the red-tailed females. Apparently, some female red-tailed monkeys were attracted to novel males with different faces and welcomed the sexual advances from these male blue monkeys. “I keep coming back to the idea that if they are only supposed to mate with their own kind, then why did these red-tailed monkeys mate with the blue monkeys, especially if they had males of their own species around,” said Detwiler. “The female red-tailed monkeys present as willing partners and they are not coerced or forced into copulation with blue monkeys.” Today, Gombe is an isolated forest habitat. Because they are very social and have had to share close quarters for decades or even centuries, Detwiler believes that they have socially learned that if you grow up in a hybrid group it is okay to mate with everyone. “The Gombe hybrid population is extremely valuable because it can be used as a model system to better understand what hybridization looks like and how genetic material moves between species,” said Detwiler. “We have this amazing laboratory in nature to help us answer many questions about hybridization and how species boundaries are maintained. This research is very timely because hybridization often occurs in response to environmental changes, as we are seeing with climate change and modified landscapes — it is nature’s way to respond.” This article has been republished from materials provided by Florida Atlantic University. Note: material may have been edited for length and content. For further information, please contact the cited source.
<urn:uuid:1f2b1cae-262f-4bdc-b0bc-3aa0e15afb97>
3.234375
1,368
News Article
Science & Tech.
32.55801
95,525,894
Implications of Anisotropic Wing Structure on Hovering Aerodynamics: Hawkmoths As discussed earlier, there are few studies of the aerodynamics of flapping wings associated with anisotropic wing structure; in addition, the 3D wing shape and the timing of deformation during flapping flight have not yet been studied well. Recently, Nakata and Liu conducted a computational analysis of hawkmoth hovering flight based on a realistic wing-body model, which takes into account the anisotropy of a hawkmoth wing. They investigated how the wing deformation and modified kinematics due to the inherent structural flexibility affect unsteady fluid physics and aerodynamic performance. In their study, a reasonably representative wing-body morphological model was built as shown in Figure 4.54. Note that although hawkmoths are four-winged, for simplicity, they modeled the fore- and hindwings as a single pair of wings because of the highly synchronized motion observed in flapping flight. Hawkmoths' wing structure is mainly supported by wing veins and membrane. The wing veins are clustered and thickened around the wing base and leading edge, as illustrated in Figure 4.54a, and are tapered toward the wingtip and trailing edge. A thin flexible membrane is placed between the veins; the directional arrangement of the wing veins and the difference of bending stiffness between the veins and membrane result in a high anisotropy of the flexural stiffness of hawkmoth wings. [Figure 4.54. (a) A hawkmoth Agrius convolvuli with a generalized wing venation including fore- and hindwings. (b) A computational model for CFD and CSD analysis. EI: flexural stiffness. From Nakata and Liu.] Wing and body kinematic models were constructed based on the experimental data of the hovering hawkmoth, Manduca, and on kinematic parameters described in Section 3.7. Note that the body was assumed to be stationary because the body motion in hovering flight is negligibly small. In-Flight Deformation of a Hawkmoth's Wing Figure 4.55 shows the comparison of the instantaneous and time-averaged results associated with flexible and rigid wings. Figure 4.56 displays the time-varying wing shape and deformation in terms of the spanwise bending, the twist, and the camber. As shown in Figure 4.55a, the flexible wing shows an advanced phase in the feathering angle, but a delayed phase in the positional angle at the wingtip, with respect to the prescribed motion of the rigid wing at the wingtip. Also, from Figure 4.55b, the translational and rotational velocities in the cross-section of 0.8R increase remarkably, in particular before stroke reversal. Those results occur because of large spanwise bending and twisting when the wing translates, and because of small peaks before the subsequent stroke reversal, as illustrated in Figure 4.56. Furthermore, although the spanwise bending and the twist angle vary smoothly, there exists a rapid increase in the distal area. The flexible wings show pronounced deformation in spanwise bending and twist immediately after stroke reversal. The maximum nose-down twist in the distal area (forewing) is approximately 12° when predicted by computation and 15°-20° when measured experimentally. One can see some positive camber, less than 2 percent, which is relatively small compared with the measured cambers of large insects such as locusts. Overall, such deformation leads to significant changes in wingtip kinematics.
<urn:uuid:fed55958-0c4b-4964-853a-6bd502303f3f>
2.859375
742
Academic Writing
Science & Tech.
37.485654
95,525,899
What is creating these dark streaks on Mars? No one is sure. Candidates include dust avalanches, evaporating dry ice sleds, and liquid water flows. What is clear is that the streaks occur through light surface dust and expose a deeper dark layer. Similar streaks have been photographed on Mars for years and are one of the few surface features that change their appearance seasonally. Particularly interesting here is that larger streaks split into smaller streaks further down the slope. The featured image was taken by the HiRISE camera on board the Mars-orbiting Mars Reconnaissance Orbiter (MRO) several months ago. Currently, a global dust storm is encompassing much of Mars. What’s that spot next to the Moon? Venus. Two days ago, the crescent Moon slowly drifted past Venus, appearing within just two degrees at its closest. This conjunction, though, was just one of several photographic adventures for our Moon this month (moon-th), because, for one, a partial solar eclipse occurred just a few days before, on July 12. Currently, the Moon appears to be brightening, as seen from the Earth, as the fraction of its face illuminated by the Sun continues to increase. In a few days, the Moon will appear more than half full, and therefore be in its gibbous phase. Next week the face of the Moon that always faces the Earth will become, as viewed from the Earth, completely illuminated by the Sun. Even this full phase will bring an adventure, though, as a total eclipse of this Thunder Moon will occur on July 27. Don’t worry about our Luna getting tired, though, because she’ll be new again next month (moon-th) — August 11 to be exact — just as she causes another partial eclipse of the Sun. Pictured, Venus and the Moon were captured from Cannon Beach above a rock formation off the Oregon (USA) coast known as the Needles. About an hour after this image was taken, the spin of the Earth caused both Venus and the Moon to set. There is much more to the familiar Ring Nebula (M57), however, than can be seen through a small telescope. The easily visible central ring is about one light-year across, but this remarkably deep exposure – a collaborative effort combining data from three different large telescopes – explores the looping filaments of glowing gas extending much farther from the nebula‘s central star. This remarkable composite image includes narrowband hydrogen imaging, visible light emission, and infrared light emission. Of course, in this well-studied example of a planetary nebula, the glowing material does not come from planets. Instead, the gaseous shroud represents outer layers expelled from a dying, sun-like star. The Ring Nebula is about 2,000 light-years away toward the musical constellation Lyra. The smallest of the three partial solar eclipses during 2018 was just yesterday, Friday, July 13. It was mostly visible over the open ocean between Australia and Antarctica. Still, this video frame of a tiny nibble on the Sun was captured through a hydrogen-alpha filter from Port Elliott, South Australia, during the maximum eclipse visible from that location. There, the New Moon covered about 0.16 percent of the solar disk. The greatest eclipse, about one-third of the Sun’s diameter blocked by the New Moon, could be seen from East Antarctica near Peterson Bank, where the local emperor penguin colony likely had the best view. During this prolific eclipse season, the coming Full Moon will bring a total lunar eclipse on July 27, followed by yet another partial solar eclipse at the next New Moon on August 11.
Only 11 million light-years away, Centaurus A is the closest active galaxy to planet Earth. Spanning over 60,000 light-years, the peculiar elliptical galaxy also known as NGC 5128, is featured in this sharp telescopic view. Centaurus A is apparently the result of a collision of two otherwise normal galaxies resulting in a fantastic jumble of star clusters and imposing dark dust lanes. Near the galaxy’s center, leftover cosmic debris is steadily being consumed by a central black hole with a billion times the mass of the Sun. As in other active galaxies, that process likely generates the radio, X-ray, and gamma-ray energy radiated by Centaurus A. It’s northern noctilucent cloud season — perhaps a time to celebrate! Composed of small ice crystals forming only during specific conditions in the upper atmosphere, noctilucent clouds may become visible at sunset during late summer when illuminated by sunlight from below. Noctilucent clouds are the highest clouds known and are now established to be polar mesospheric clouds observed from the ground. Although observed with NASA’s AIM satellite since 2007, much about noctilucent clouds remains unknown, making them a topic of active research. The featured time-lapse video shows expansive and rippled noctilucent clouds wafting over Paris, France, during a post-sunset fireworks celebration on Bastille Day in 2009 July. This year, several locations are already reporting especially vivid displays of noctilucent clouds. What’s that light at the end of the road? Mars. This is a good month to point out Mars to your friends and family because our neighboring planet will not only be its brightest in 15 years, it will be visible for much of night. During this month, Mars will be about 180 degrees around from the Sun, and near the closest it ever gets to planet Earth. In terms of orbits, Mars is also nearing the closest point to the Sun in its elliptical orbit, just as Earth moves nearly between it and the Sun — an alignment known as perihelic opposition. In terms of viewing, orange Mars will rise in the east just as the Sun sets in the west, on the opposite side of the sky. Mars will climb in the sky during the night, reach its highest near midnight, and then set in the west just as the Sun begins to rise in the east. The red planet was captured setting beyond a stretch of road in Arches National Park in mid-May near Moab, Utah, USA.
<urn:uuid:cbf7ec4e-abc4-4141-97ce-3f7888a09f09>
3.625
1,274
Content Listing
Science & Tech.
49.816727
95,525,909
Satellite pictures reveal Earth's ice sheets are in 'runaway melt mode' New satellite pictures have revealed huge ice sheets in Greenland and Antarctica are shrinking faster than scientists predicted and in some areas are in runaway melt mode. British scientists have calculated the changes in the height of the vulnerable but massive ice sheets and found the losses especially severe at their edges. An Antarctic glacier meets the ocean, where water continually eats away at the ice In some parts of Antarctica, ice sheets have been losing 30 feet a year in thickness since 2003, according to the paper published in the journal Nature today. Some of those areas are about a mile thick so still have plenty of ice to burn through. But the drop in thickness is speeding up. In parts of Antarctica, the yearly rate of thinning from 2003 to 2007 was 50 per cent higher than it was from 1995 to 2003. The new measurements are based on 50 million laser readings from a NASA satellite. The research found that 81 of the 111 Greenland glaciers surveyed are thinning at an accelerating self-feeding pace. The more the ice melts, the more water surrounds and eats away at the remaining ice. A graphic of Antarctica shows the thinning areas in red. The breakout box shows the Pine Island Glacier with multiple lines of dense measurements made across the rapidly changing West Antarctic glacier 'To some extent it's a runaway effect. The question is how far will it run?' asked the study's lead author Hamish Pritchard of the British Antarctic Survey. 'It's more widespread than we previously thought.' The study does not answer the crucial question of how much this worsening melt will add to projections of sea level rise from man-made global warming. Some scientists have previously estimated that steady melting of the two ice sheets will add about 3 feet, maybe more, to sea levels by the end of the century. The ice sheets are so big, however, that it should take hundreds of years for them to disappear. Worsening data keeps proving 'that we're underestimating' how sensitive the ice sheets are to changes, Mr Pritchard warned.
<urn:uuid:c2db0473-5dd9-45ff-a198-13ecd15501d8>
3.359375
600
Truncated
Science & Tech.
42.46322
95,525,916
Evolution of Tropospheric Ions The purpose of this work is to clarify the agglomeration phenomena around small positive and negative ions of tropospheric air. Evolution of tropospheric ions is not well-known; polluting vapors act upon this evolution, according to chemical reactions which are not well understood. The apparatus used enables us to measure simultaneously the mobility and the mass of ions created in a mixture of atmospheric air and various polluting vapors, at pressures up to 40 torr. The experimental results have shown the importance of protonation in positive ion formation. The evolution rate constants of negative ions are smaller than those of positive ions. Finally, a mathematical model has allowed a qualitative approach to the sequence of positive ion-molecule reactions in the lower troposphere. Keywords: Organic Vapor, Torr Pressure, Source Chamber, Agglomeration Phenomenon, Townsend Discharge
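As a rough illustration of what a qualitative model of sequential ion-molecule reactions can look like, the sketch below integrates a minimal chain A -> B -> C with a simple Euler step. This is a generic kinetics sketch under assumed rate constants, not the model from this paper; the paper's finding that negative ions evolve more slowly would correspond to smaller rate constants in such a chain.

#include <cstdio>

// Generic sketch: a sequential ion-molecule reaction chain A -> B -> C.
// Rate constants and time step are illustrative placeholders.
int main() {
    double a = 1.0, b = 0.0, c = 0.0; // relative ion populations
    const double k1 = 5.0, k2 = 1.0;  // assumed evolution rate constants (1/s)
    const double dt = 1e-3;           // integration time step (s)

    for (int step = 0; step <= 2000; ++step) {
        if (step % 500 == 0)
            std::printf("t=%.2fs  A=%.3f  B=%.3f  C=%.3f\n",
                        step * dt, a, b, c);
        // Euler update: A decays to B at rate k1; B decays to C at rate k2.
        const double da = -k1 * a;
        const double db = k1 * a - k2 * b;
        const double dc = k2 * b;
        a += da * dt;
        b += db * dt;
        c += dc * dt;
    }
    return 0;
}

Fitting the measured appearance times of successive ion species to such a chain is one way a model can give a qualitative picture of the reaction sequence without resolving every intermediate step.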
<urn:uuid:4be80dd4-e6ef-47ff-bbe0-79bc72bdf8d5>
2.796875
560
Academic Writing
Science & Tech.
53.459406
95,525,919
Among other discoveries, the 2008 Messenger spacecraft mission has revealed new information on the chemicals that make up Mercury’s atmosphere. The atmospheric pressure on Mercury is extremely low, about a thousandth of a trillionth of Earth's at sea level. Data shows that Mercury has carbon dioxide, nitrogen and other familiar gases, although in very small total amounts. Carbon Dioxide and Carbon Monoxide According to the Messenger findings, carbon dioxide gas makes up over 95 percent of Mercury’s atmosphere. Although on Earth, carbon dioxide is strongly associated with life, it is very unlikely that Mercury’s blistering maximum daytime temperature of 427 degrees Celsius (800 degrees Fahrenheit) and near-vacuum conditions support any known living organisms; instead, the CO2 there is most probably due to volcanic and other activities on the planet’s surface. Carbon monoxide is also present at 0.07 percent. Surprisingly, Mercury’s atmosphere contains tiny amounts of water vapor -- 0.03 percent. Although Mercury cannot have oceans, water ice has been detected in the cold polar regions where shadows create permanent frigid zones hidden from sunlight. The water vapor may be the result of hydrogen and oxygen combining in Mercury’s atmosphere. Nitrogen and Oxygen Nitrogen and oxygen are two gases that make up the majority of Earth’s atmosphere, and they appear in Mercury’s as well. The abundance of nitrogen is 2.7 percent of Mercury’s air, and oxygen accounts for 0.13 percent. On Earth, plants are responsible for the production of oxygen. The source of Mercury’s small amount is a subject of speculation; it may come from meteorites bearing water, which then splits into hydrogen and oxygen in the powerful sunlight. Other sources may include the breakdown of minerals on Mercury’s surface. Argon is an inert gas, rarely reacting with other chemicals or even itself. It amounts to 1.6 percent of Mercury’s atmosphere. Along with other gases, Mercury’s argon probably seeps out from deep inside the planet and is released by volcanoes and meteorite impacts; minerals are unlikely sources as argon doesn’t react chemically to form any known mineral. Mercury has other chemicals in its atmosphere, although the exact concentrations are very small and difficult to measure. Hydrogen and helium are known to exist, likely arriving with the solar wind and being temporarily caught in Mercury’s weak gravity. The Messenger spacecraft has detected traces of krypton, a chemical cousin to argon, as well as methane gas. Other chemicals found include the alkali metals sodium and potassium, as well as calcium.
<urn:uuid:3552ef7b-4283-421a-abd8-9a24256c90c4>
4.03125
545
Knowledge Article
Science & Tech.
37.963654
95,525,921
Fremont Earthquake Exhibit Field trips for 6th grade through college. Book field trips by calling (510) 790-6284 or by email; groups of up to 32 people; trips available fall and spring only. (NGSS correlation below.) WHAT YOU WILL SEE The field trip starts at the Fremont Earthquake Exhibit (back part of the Fremont Community Center) on Paseo Padre and Mission View. Students will see a crack through the foundation and look at about 1.5 inches of offset. We discuss the Hayward Fault and other information appropriate to the audience. The trip continues outside to walk about 1 mile to see the various "clues" in which the earthquake fault is outlined. Included are up to 3 inches of offset. Participants will never look at cracks in the street the same way. En echelon patterns and offsets can be seen throughout Central Park in Fremont. We will visit the site of the former City Hall that was taken down because it was built in the wrong location. We will discuss how the County Main Library was built to take care of a possible shaker. The tour was designed by Dr. Joyce Blueford, a geologist, who incorporates history of the east bay and discusses how the faults are responsible for our present landscape. The Hayward Fault is a major fault of concern in the East Bay. It has been considered the most dangerous area for a possible major seismic event by the U.S. Geological Survey. There is a one in three chance of a major earthquake of 6.8 or greater on the Hayward Fault within the next 30 years. The last major quake in this area was on October 21, 1868, with a magnitude of 7.0, which ripped an almost continuous shear of about 6 feet from Milpitas to Oakland. The City of Fremont was incorporated in 1956. Unknowingly, the city built its first building, the Community Center that now houses the exhibit, across the fault. Within 10 years they noticed that the floor was cracking. They first thought that it was only due to poor construction, but then they realized there was an offset to a set of cracks. After 10 years they had to close this area to the public, closing down a Children’s Theater. Over the years it has grown into 1-2 inches of offset. The Math Science Nucleus in collaboration with the City of Fremont and the U.S. Geological Survey have created a “Faulted Floor” exhibit to be part of a general earthquake trail tour throughout Central Park. There are currently two tours. One, through the Fremont Recreation Department, includes a walk through Central Park; the other is a more scientific and longer tour from Tule Ponds at Tyson Lagoon to the exhibit, showing natural earthquake features (sag ponds, fault scarps) and urban features (offset curbs, moving of asphalt, compression ridges) in Central Park (contact Math Science Nucleus msn@msnucleus to arrange dates). Both include the "Faulted Floor." The City of Fremont walled off 600 square feet, including walls, to help dramatize the science of earthquakes. The facility will be used for field trips for K-college by the Math Science Nucleus. The City of Fremont will also use the area when there are events in the park so people can look at the exhibit and learn the science behind earthquakes. Staff from the Math Science Nucleus will conduct the tours. The Math Science Nucleus worked with the U.S. Geological Survey to create a large map of the area. Visitors will be able to locate their house in relationship to the fault while stepping over the “crack.” A project of The City of Fremont and U.S.
Geological Survey. Any questions please email us at firstname.lastname@example.org Next Generation Science Standards MATH SCIENCE NUCLEUS has served educators and the public since 1982 by offering quality science and math lessons that help our children learn critical thinking skills. We manage the Children's Natural History Museum and Tule Ponds at Tyson Lagoon Wetland Center. Math Science Nucleus received partial funding from PGE for the Faulted Floor Exhibit. U.S. GEOLOGICAL SURVEY is a government agency that is the Federal source for science about the Earth, its natural and living resources, natural hazards, and the environment. They provide the posters and map for the Faulted Floor exhibit. Plate Tectonics and Large-Scale System Interactions The locations of mountain ranges, deep ocean trenches, ocean floor structures, earthquakes, and volcanoes occur in patterns. Most earthquakes and volcanoes occur in bands that are often along the boundaries between continents and oceans. Major mountain chains form inside continents or near their edges. Maps can help locate the different land and water features areas of Earth. (4-ESS2-2) A variety of hazards result from natural processes (e.g., earthquakes, tsunamis, volcanic eruptions). Humans cannot eliminate the hazards but can take steps to reduce their impacts. Plate Tectonics and Large-Scale System Interactions Maps of ancient land and water patterns, based on investigations of rocks and fossils, make clear how Earth’s plates have moved great distances, collided, and spread apart. (MS-ESS2-3)
<urn:uuid:6d3b7e7e-6142-408b-8abf-05675dc42c06>
2.9375
1,183
Product Page
Science & Tech.
57.464379
95,525,947
The planet Mars has always been exceptionally interesting through much of human history. Its coppery-red color makes it quite distinct from virtually all of the other objects in the sky; only a few bright stars are as visibly red. It is the only planet whose surface is at all visible through terrestrial telescopes. The changing colors and enigmatic markings on its surface fueled speculation about life on the red planet. When two Viking spacecraft landed on the surface of Mars in 1976, it became the only body beyond the earth-moon system to be examined at such close range. Its similarities to and differences from the earth make it an excellent test bench for many ideas in the earth sciences. Keywords: Carbonaceous Chondrite, Martian Surface, Planetary Scientist, Space Colonization, Viking Lander
<urn:uuid:b86b40da-cff8-495f-bced-cc18ad8641e0>
3.3125
486
Truncated
Science & Tech.
69.552791
95,525,964
Dr Wareing and his colleagues used the COBRA supercomputer to simulate in three dimensions the movement of a dying star through surrounding interstellar gas. At the end of their life, Sun-sized stars lose their grip on their outer layers and as much as half of their mass drifts off into space. The computer simulation modelled the collision between material given off by the star and the interstellar gas. It showed that a shockwave forms ahead of the dying star and giant eddies and whirlpools develop in the tail of material behind the star, similar to those seen in the wake of boats on open water. The group have now backed up these predictions with observations of the planetary nebula Sharpless 2-188 taken as part of the IPHAS (Isaac Newton Telescope Photometric H alpha Survey of the Northern Galactic Plane). The central star of Sharpless 2-188 is 850 light years away and it is travelling at 125 kilometres per second across the sky. Observations show a strong brightening in the direction in which the star is moving and faint material stretching away in the opposite direction. Dr Wareing believes that the bright structures in the arc observed ahead of Sharpless 2-188 are the bowshock instabilities revealed in his simulations, which will form whirlpools as they spiral past the star downstream to the tail. "These vortices can improve the mixing of the stellar material back into interstellar space, benefiting the next cycle of star formation. The turbulent whirlpools have an inherent spin, or angular momentum, which is an essential ingredient for the formation of the next generation of stars." said Dr Wareing who developed the computer model during his PhD and is now using it to understand the fate of our Sun. Dying stars eject both gas and dust into space. The dust will coalesce into planets around later generations of stars. The gas contains carbon, necessary for life and produced inside stars. How the carbon, other gas and dust are ejected from the dying star is not well understood. The whirlpools in space can play an important role in mixing these essential ingredients into the interstellar gas from which further stars and planets will form. IPHAS is a major survey of the Northern Galactic Plane being carried out with the 2.5-metre Isaac Newton Telescope (INT) in La Palma. The IPHAS survey began taking data with the INT Wide Field Camera in 2003 with the goal of imaging the entire northern galactic plane in the latitude range -5°
Anita Heward | alfa
"These vortices can improve the mixing of the stellar material back into interstellar space, benefiting the next cycle of star formation. The turbulent whirlpools have an inherent spin, or angular momentum, which is an essential ingredient for the formation of the next generation of stars," said Dr Wareing, who developed the computer model during his PhD and is now using it to understand the fate of our Sun. Dying stars eject both gas and dust into space. The dust will coalesce into planets around later generations of stars. The gas contains carbon, necessary for life and produced inside stars. How the carbon, other gas and dust are ejected from the dying star is not well understood. The whirlpools in space can play an important role in mixing these essential ingredients into the interstellar gas from which further stars and planets will form. IPHAS is a major survey of the Northern Galactic Plane being carried out with the 2.5-metre Isaac Newton Telescope (INT) in La Palma. The IPHAS survey began taking data with the INT Wide Field Camera in 2003 with the goal of imaging the entire northern galactic plane in the latitude range -5° Anita Heward | alfa
<urn:uuid:f2d7b313-56d4-4046-a412-65e4228d8a6c>
3.703125
1,155
Content Listing
Science & Tech.
45.683367
95,525,985
hit the Drygalski ice tongue in mid April, knocking off a chunk of the glacier. The berg then got stuck in Vincennes Bay for a month, but is now moving off. Scientists will continue to monitor the iceberg, but it is unlikely to disrupt the movement of penguins and ships. The June 21 issue of The Antarctic Sun (warning: pdf) provides more details: Almost the entire mouth of the sound had been blocked before B15a took off, Brunt said. Two other giant bergs, B15k and C16, are still blocking about 60 percent of the entrance to the sound. But that's a big improvement. "B15a is out of the way and that's a good thing," said Marianne Okal, another graduate student with the group. "I'd be surprised if there's still 85 miles of sea ice out there next December." There is, however, a new iceberg resident inside McMurdo Sound. The interloper is about 16 km long and 2 km wide, Weidner said. It got within 60 km of McMurdo Station, but backed up so that it's about 90 km away now. The berg shouldn't affect any penguin colonies, he said. Nor should it interfere with the ships moving in and out of the station in January.
<urn:uuid:d489d72c-5e8c-4ad7-8220-6fbbf9e0c83c>
2.59375
295
Comment Section
Science & Tech.
69.14041
95,526,003
Bats are masters of flight in the night sky, capable of steep nosedives and sharp turns that put our best aircraft to shame. Although the role of echolocation in bats' impressive midair maneuvering has been extensively studied, the contribution of touch has been largely overlooked. A study published April 30 in Cell Reports shows, for the first time, that a unique array of sensory receptors in the wing provides feedback to a bat during flight. The findings also suggest that neurons in the bat brain respond to incoming airflow and touch signals, triggering rapid adjustments in wing position to optimize flight control. "This study provides evidence that the sense of touch plays a key role in the evolution of powered flight in mammals," says co-senior study author Ellen Lumpkin, a Columbia University associate professor of dermatology and physiology and cellular biophysics. "This research also lays the groundwork for understanding what sensory information bats use to perform such remarkable feats when flying through the air and catching insects. Humans cannot currently build aircraft that match the agility of bats, so a better grasp of these processes could inspire new aircraft design and new sensors for monitoring airflow." Bats must rapidly integrate different types of sensory information to catch insects and avoid obstacles while flying. The contribution of hearing and vision to bat flight is well established, but the role of touch has received little attention since the discovery of echolocation. Recently, co-senior study author Cynthia Moss and co-author Susanne Sterbing-D'Angelo of The Johns Hopkins University discovered that microscopic wing hairs, stimulated by airflow, are critical for flight behaviors such as turning and controlling speed. But until now, it was not known how bats use tactile feedback from their wings to control flight behaviors. In the new study, the Lumpkin and Moss labs analyzed, for the first time, the distribution of different sensory receptors in the wing and the organization of the wing skin's connections to the nervous system. Compared to other mammalian limbs, the bat wing has a unique distribution of hair follicles and touch-sensitive receptors, and the spatial pattern of these receptors suggests that different parts of the wing are equipped to send different types of sensory information to the brain. "While sensory cells located between the 'fingers' could respond to skin stretch and changes in wind direction, another set of receptors associated with hairs could be specialized for detecting turbulent airflow during flight," says Sterbing-D'Angelo, who also holds an appointment at the University of Maryland. Moreover, bat wings have a distinct sensory circuitry in comparison to other mammalian forelimbs. Sensory neurons on the wing send projections to a broader and lower section of the spinal cord, including much of the thoracic region. In other mammals, this region of the spinal cord usually receives signals from the trunk rather than the forelimbs. This unusual circuitry reflects the motley roots of the bat wing, which arises from the fusion of the forelimb, trunk, and hindlimb during embryonic development. "This is important because it gives us insight into how evolutionary processes incorporate new body parts into the nervous system," says first author Kara Marshall of Columbia University. "Future studies are needed to determine whether these organizational principles of the sensory circuitry of the wing are conserved among flying mammals."
The researchers also found that neurons in the brain responded when the wing was either stimulated by air puffs or touched with a thin filament, suggesting that airflow and tactile stimulation activate common neural pathways. "Our next steps will be following the sensory circuits in the wings all the way from the skin to the brain. In this study, we have identified individual components of these circuits, but next we would like to see how they are connected in the central nervous system," Moss says. "An even bigger goal will be to understand how the bat integrates sensory information from the many receptors in the wing to create smooth, nimble flight." The paper is titled "Somatosensory Substrates of Flight Control in Bats." The authors are Ellen A. Lumpkin, Kara L. Marshall, Mohit Chadha, Laura A. deSouza (CUMC); Susanne J. Sterbing-D'Angelo, Cynthia F. Moss (Johns Hopkins University). The study was funded by grants from the National Institutes of Health (R01NS073119), the Air Force Office of Scientific Research (FA95501210109), and other sources listed in the paper. The other authors declare no financial or other conflicts of interest. Lucky Tran | EurekAlert!
<urn:uuid:25bd1eef-5de3-4004-8c8d-e8236fadc2a1>
4.25
1,576
Content Listing
Science & Tech.
39.410638
95,526,007
Do we live in the Matrix? Researchers say they have found a way to find out - Any simulation of the universe must have limits, and finding these would prove we live in an artificial reality, physicists claim. If the Matrix left you with the niggling fear that we might indeed be living in a computer generated universe staged by a malevolent artificial intelligence using the human race as an energy farm, help is at hand. A team of physicists have come up with a test which they say could prove whether or not the universe as we know it is a virtual reality simulation - a kind of theoretical red pill, as it were. Silas Beane of the University of Bonn, Germany, and his colleagues contend that a simulation of the universe, no matter how complex, would still have constraints which would reveal it. All we have to do to identify what these constraints would be is to build our own simulation of the universe, which is close to what many researchers are trying to do on an incredibly minuscule scale. Computer simulations have been run to recreate quantum chromodynamics - the theory that describes the nuclear force that binds quarks and gluons into protons and neutrons, which then bind to form atomic nuclei. It is believed that simulating physics on this fundamental level is equivalent, more or less, to simulating the workings of the universe itself. Even operating on this vanishingly small scale, the maths is pretty difficult so, despite using the world's most powerful supercomputers, physicists as yet have only managed to simulate regions of space on the femto-scale. To put that in context, a femtometre is 10^-15 metres - that's a quadrillionth of a metre or 0.000000000001mm. However, the main problem with all such simulations is that the laws of physics have to be superimposed onto a discrete three-dimensional lattice which advances in time. And that's where the test comes in.

IS REALITY MERELY AN ILLUSION? The question of whether we are actually aware of the real world is one which has been continually asked by philosophers. One of the earliest articulations of the conundrum occurs in Plato's Republic, where the Allegory of the Cave attempts to describe the illusory existence led by most unthinking people. Plato, regarded by many as the father of Western philosophy, suggested that the only way to come to a realisation of the real world was an in-depth study of maths and geometry, which would give students an inkling of the real nature of the world. French philosopher Rene Descartes, whose works are often used as a general introduction to metaphysics, raises the problem again as a thought experiment to lead readers to a position of radical doubt. By postulating a malicious demon who can keep us trapped in an illusory world, Descartes asks readers to cast aside all the evidence of their sensory experiences in a search for one certain premise. He famously comes up with the argument 'cogito ergo sum', or rather 'I think therefore I am', which he uses as an indubitable bedrock from which to reconstruct a certain picture of reality. Subsequent critics of his work, however, say that just because there are thoughts, there is no guarantee there is really a thinker.
Professor Beane and his colleagues say this lattice spacing imposes a limit on the energy that particles can have, because nothing can exist that is smaller than the lattice itself. This means that if the universe as we know it is actually a computer simulation, there ought to be a cut off in the spectrum of high energy particles. And it just happens that there is exactly this kind of cut off in the energy of cosmic rays, a limit known as the Greisen–Zatsepin–Kuzmin (GZK) cut off. As the Physics arXiv blog explains, this cut off is well-studied and happens because high energy particles interacting with the cosmic microwave background lose energy as they travel across long distances. The researchers calculate that the lattice spacing forces additional features on the spectrum, most strikingly that the cosmic rays would prefer to travel along the axes of the lattice. This means they wouldn't be observed equally in all directions. That would be the acid test that the researchers are searching for - an indication that all is not as it seems with the universe. Excitingly, it's also a measurement we could make now with our current level of technology. That said, the finding is not without its caveats. One problem Professor Beane identifies is that the simulated universe could be constructed in an entirely different way to how they have envisaged it. Moreover, the effect is only measurable if the lattice cutoff is the same as the GZK cutoff; any smaller than that and the observations will draw a blank. Professor Beane and his colleagues' findings are reported in a paper posted on the arXiv preprint server, which is hosted by Cornell University.
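As a back-of-envelope check of the scales involved (the identification of a lattice cutoff with the GZK energy is the researchers' speculative premise, and this arithmetic is illustrative rather than taken from their paper), a cutoff at roughly 5×10^19 eV would imply a lattice spacing of order a ~ πħc/E:

```python
import math

# Lattice spacing implied by treating the GZK cut-off as a lattice cutoff.
hbar_c_eV_m = 1.97327e-7   # hbar*c in eV*m
E_GZK_eV = 5e19            # approximate GZK cut-off energy

a = math.pi * hbar_c_eV_m / E_GZK_eV
print(f"implied lattice spacing ~ {a:.1e} m")   # ~1e-26 m, far below femto-scale
```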
<urn:uuid:27698110-dbe8-4b07-8934-749db824c208>
2.921875
1,237
Truncated
Science & Tech.
32.474424
95,526,021
Imagine having superhuman hearing. You're at a noisy cocktail party and yet your ears can detect normally inaudible sounds made by your friends' muscles as they lean in to dish the latest gossip. But, unlike normal hearing, each of these sounds causes your ears to react in the same way. There is no difference between the quietest and loudest movements. To your superhuman ears, they all sound loud, like honking horns. According to a study funded by the National Institutes of Health, that may be how a shark's electrosensing organ reacts when it detects teensy, tiny electrical fields emanating from nearby prey. "Sharks have this incredible ability to pick up nanoscopic currents while swimming through a blizzard of electric noise. Our results suggest that a shark's electrosensing organ is tuned to react to any of these changes in a sudden, all-or-none manner, as if to say, 'attack now,'" said David Julius, PhD, professor and chair of physiology at the University of California, San Francisco and senior author of the study published in Nature. His team studies the cells and molecules behind pain and other sensations. For instance, their results have helped scientists understand why chili peppers feel hot and menthol cool. Led by post-docs Nicholas W. Bellono, PhD and Duncan B. Leitch, PhD, Dr. Julius' team showed that the shark's responses may be very different from the way the same organ reacts in skates, the flat, winged evolutionary cousins of sharks and sting rays, and this may help explain why sharks appear to use electric fields strictly to locate prey while skates use them to find food, friends, and mates. They also showed how genes that encode proteins called ion channels may control the shark's unique "sixth sense." "Ion channels essentially make the nervous system tick. They play a major role in controlling how information flows through a nervous system. Mutations in ion channels can be devastating and have been linked to a variety of disorders, including cystic fibrosis and some forms of epilepsy, migraines, paralysis, blindness, and deafness," said Nina Schor, MD, PhD, deputy director at NIH's National Institute of Neurological Disorders and Stroke. "Studies like this highlight the role a single ion channel can play in any nervous system, shark, skate, or human." In both sea creatures, networks of organs, called ampullae of Lorenzini, constantly survey the electric fields they swim through. Electricity enters the organs through pores that surround the animals' mouths and form intricate patterns on the bottom of their snouts. Once inside, it is carried via a special gel through a grapevine of canals, ending in bunches of spherical cells that can sense the fields, called electroreceptors. Finally, the cells relay this information onto the nervous system by releasing packets of chemical messengers, called neurotransmitters, into communication points, or synapses, made with neighboring neurons. For decades scientists have known that minute changes in electric fields stimulate a graded range of wavy currents in skate cells, much like the way our ears react to sounds. Larger fields stimulated bigger currents while smaller fields induced smaller responses. And, last year, Drs. Bellono and Leitch showed how genes for proteins called ion channels controlled the responses. But few had looked at how shark cells react. In this study, the team compared currents recorded from little skate electroreceptor cells with those from the chain catshark.
They found that although both cells were sensitive to the same narrow range of voltage zaps, the responses were very different. Shark currents were much bigger than skate currents and they were the same size and waviness for each zap. In contrast, the skate cells responded with currents that varied in both size and waviness with each zap. Further experiments suggested that these contrasting responses may be due to different ion channel genes, which encode proteins that form tunnels in a cell's membrane, or skin. When activated, the tunnels open and create electrical currents by allowing ions, or charged molecules, to flow in and out of the cell. Bellono and Leitch showed that while both shark and skate electroreceptors may have used the same type of voltage-sensitive, calcium-conducting ion channels to sense the zaps, they appeared to use very different types of potassium-conducting ion channels to shape the responses. Their results suggested that shark cells used a special voltage-activated channel that supported large repetitive responses while the skate cells used a calcium-activated channel that tended to dampen the initial currents. In addition, they suggested that the voltages at which the cells electrically rested may also have contributed to the responses. The shark's voltage was slightly lower than the skate's and in a range that could have primed the calcium ion channels to respond with stronger currents. These differences also affected how the electroreceptors relayed information to the rest of the nervous system. The results suggested that shark electroreceptors basically released the same number of neurotransmitter packets, regardless of the size of the voltage zaps. In contrast, bigger zaps caused skate cells to send more messages and smaller zaps fewer. "In almost every way, the shark electrosensory system looks like the skate's and so we expected the shark cells to respond in a graded manner," said Dr. Bellono. "We were very surprised when we found that the shark system reacts completely differently to stimuli." Ultimately, these differences affected how sharks and skates reacted to electric fields that mimicked those produced by prey. To test this, the researchers exposed sharks and skates swimming alone in tanks to a wide range of low voltage electric field frequencies and then measured their breathing rates. As anticipated, the skates had a variety of reactions. Some frequencies caused their breathing rates to rise above rest while others produced minimal changes. The results may help explain why a previous study found that skates may use their electrosensory perceptions to detect both prey and mates. And the sharks? They basically had one simple reaction. Almost every field raised their breathing rates to a level seen when they smelled food, suggesting their system is tuned for one thing: catching prey. So why did a pain and chili pepper researcher decide to study sharks? "In short, it's cool!" said Dr. Julius. "We're on a mission to understand how the nervous system controls pain and other sensations. Sharks and skates have a unique sensory system that detects electrical fields. Although humans do not share this experience, you can learn a lot from studying unique, or extreme, systems in nature. It's also a captivating way to learn about how evolution shapes the senses."
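The graded-versus-all-or-none contrast described above can be caricatured in a few lines. This sketch is purely illustrative: the threshold, scale factor and field values are invented, not measured quantities from the study.

```python
import numpy as np

def skate_response(stimulus_uV):
    """Graded: response amplitude rises smoothly with stimulus strength."""
    return np.tanh(stimulus_uV / 50.0)

def shark_response(stimulus_uV, threshold_uV=5.0):
    """All-or-none: any supra-threshold field evokes a full-sized response."""
    return np.where(stimulus_uV > threshold_uV, 1.0, 0.0)

fields = np.array([1.0, 10.0, 40.0, 80.0])   # hypothetical field strengths
print(skate_response(fields))   # [0.02 0.2  0.66 0.92] -- graded
print(shark_response(fields))   # [0.   1.   1.   1.  ] -- "attack now"
```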
<urn:uuid:da22c7b3-2e83-471f-81e8-8bf02443edfb>
3.234375
1,381
Truncated
Science & Tech.
43.848009
95,526,022
Everything from the ability to concentrate, perceive and learn to debilitating illnesses such as amyotrophic lateral sclerosis, muscular dystrophy, post-traumatic stress syndrome and schizophrenia is influenced by the number of receptors on nerve cells. The more receptors each cell has at its communication points, or synapses, the better messages are carried through the brain. A UIC research team led by David Featherstone, assistant professor of biological sciences, has discovered that receptor numbers are controlled by the brain's level of glutamate. But it is not the same glutamate that most neuroscientists think about -- the neurotransmitter that moves in message packets across the synapse. Instead, it is what Featherstone calls ambient extracellular glutamate, which just floats around the nervous system and has generally been ignored because no one knew where it came from or what it was doing. For years, scientists failed to identify glutamate as a key neurotransmitter precisely because there was so much of it. "It made no sense," said Featherstone. "People figured you couldn't use glutamate to send messages because there was too much glutamate background noise in the brain. It turns out that this background noise plays an important part in regulating information transfer." Featherstone and his lab team found that glial cells are the source of the excess ambient glutamate. Along with neurons, these poorly understood "support" cells fill the brain. The team discovered proteins in fruit fly glial cells that regulate the amount of ambient glutamate in the brain. Called xCT transporter proteins, they pump glutamate out of glial cells. "When we mutate the protein, we get less ambient extracellular glutamate, more glutamate receptors, and so a stronger transfer of messages at synapses," Featherstone said. The gene mutation also made the flies bisexual, leading him to name the gene "genderblind." "The mutants are completely bisexual, but fertile. It's the first gene that really specifically affects homosexual behavior without affecting heterosexual behavior," he said. "Trying to understand fly bisexuality sounds silly, but these behavioral changes are important evidence that ambient extracellular glutamate and xCT transport proteins play important, unsuspected roles in brain function," Featherstone said. "We think we'll be able to learn a lot about perception and development from figuring out exactly what's happening in these flies. It's amazing how many biomedical breakthroughs have come from crazy directions." Paul Francuch | EurekAlert!
<urn:uuid:7d20623c-b6d3-4fbe-9380-667371a040e6>
3.390625
1,080
Content Listing
Science & Tech.
35.024529
95,526,031
For Immediate Release, July 26, 2016 Contact: Tierra Curry, (928) 522-3681, email@example.com New Federal Policy on Endangered Species Decision Process Will Push Less-studied Species to Extinction WASHINGTON— In a move that will condemn uncharismatic, little-studied species to greater risk of extinction, the U.S. Fish and Wildlife Service today finalized a new methodology for prioritizing decisions on whether species petitioned by citizens and conservation groups warrant protection under the Endangered Species Act. The Service claims the policy, which places species into one of five categories or “bins,” is intended to provide clarity and transparency as the agency evaluates nearly 500 plants and animals backlogged for protection decisions. But in practice the policy will leave species vulnerable to extinction when limited information is available about them, or when conservation efforts or new science is underway but not completed. “This policy will create a purgatory where decisions are postponed but threats aren’t addressed,” said Tierra Curry, a senior scientist at the Center. “Conservation agreements can take years to decades to develop, and the need for more study is a well-worn excuse that can always be used to delay protection decisions.” Pending status reviews will now be placed into one of five categories, based on available data, threats, conservation efforts planned or underway, and any new or developing science: 1) High Priority Critically Imperiled Species; 2) Strong Data Available; 3) New Science Underway; 4) Conservation Efforts Underway; and 5) Limited Data Currently Available. Giving lowest priority to species where limited information is available will bias decisions toward certain groups. For example, 99 percent of mammals have been evaluated for extinction risk, but less than 1 percent of insects and less than 4 percent of plants have been evaluated. Unpopular species, like mollusks and crayfish, will unfairly wait longer for protection because not as much is known about them, even though they are highly threatened. “This is a sad day in the battle to protect all the species that make up the web of life we all depend on,” said Curry. “Under this industry-friendly policy, species will simply be sacrificed to extinction while the Service plans conservation efforts or waits for better research.” Of further concern, the prioritization categories have considerable overlap, which will foster confusion and controversy. For example, a high-priority critically imperiled species could experience prolonged delay in listing if it is being studied or conservation plans are in development. “We’re in the midst of an extinction crisis, and what we really need as a nation is to prioritize funding so that the Fish and Wildlife Service has the staff to be able to actually protect all the species that are at risk of extinction before we lose even more of our natural heritage,” said Curry. The Center for Biological Diversity is a national, nonprofit conservation organization with more than 1 million members and online activists dedicated to the protection of endangered species and wild places.
<urn:uuid:ebcb5985-9fe2-41e4-810e-dabbe8f417b7>
2.71875
640
News (Org.)
Science & Tech.
22.451763
95,526,043
The planet Jupiter has spectacular rings of auroras around each pole, but until now scientists have not been able to explain how they form. All auroras are caused by energetic charged particles crashing into the top of the atmosphere and making it glow. In the Earth's auroras, these particles come from the Sun in a flow of charged particles known as the solar wind. But this can't account for Jupiter's auroras, because the solar wind does not reach the region where the brightest auroras are found. Space physicists from the University of Leicester have now proposed a new theory of how Jupiter's auroras are formed. An enormous disk of plasma gas rotates around Jupiter, flowing outwards from the moon Io. They believe that a large-scale electric current system (a stream of charged particles) flows between the planet's upper atmosphere and this disk of gas. They have also calculated that in order for such large currents to flow between the atmosphere and the disk, electrons must be strongly accelerated between these regions, causing the bright ring of auroras around each pole when they hit the top of the atmosphere and make it glow. Professor Stan Cowley of the University of Leicester said: "The force associated with this electric current causes the plasma gas to spin at the same rate as the planet as it flows outwards. Our calculations suggest that the total current in this giant circuit is 100 million amps. The power transferred from the atmosphere to the plasma disk is about a thousand million megawatts or about 20,000 times the peak electricity demand in the UK!"
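As a quick consistency check of the figures quoted above (an order-of-magnitude exercise, not from the paper; the UK peak-demand value is an assumed round number):

```python
# Back-of-envelope check of the quoted Jovian current-circuit figures.
P = 1e15          # W -- "a thousand million megawatts"
I = 1e8           # A -- "100 million amps"
uk_peak = 5e10    # W -- assumed ~50 GW UK peak electricity demand

print(f"implied potential drop: {P / I:.0e} V")            # ~1e7 V across the circuit
print(f"multiple of UK peak demand: {P / uk_peak:.0f}x")   # ~20000x, matching the quote
```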
Julia Maddock | alphagalileo
<urn:uuid:4eb7b747-cd1c-419e-83ce-6a8e6fc4f69d>
3.84375
904
Content Listing
Science & Tech.
42.276556
95,526,046
- Research highlight
- Open Access

Tracing the evolution of tissue identity with microRNAs

© BioMed Central Ltd 2010
Published: 30 March 2010

Comparison of microRNA expression identified tissues present in the last common ancestor of bilaterians and places the evolution of microRNAs in the context of tissue evolution.

Animal evolution has fascinated biologists for centuries and, despite tremendous progress in our understanding of the evolutionary process, it still keeps many of its mysteries secret. Initially, morphological and developmental studies were performed to reconstruct the road that animal evolution has followed. With the coming of age of molecular biology, comparative single- and multiple-gene analyses contributed to the further unraveling of evolutionary relationships within the animal kingdom. Although these studies resulted in the separation of the main phyla and taxa, the occurrence of convergent evolution, secondary loss of characters, poor knowledge of several animal groups at key positions and the presence of slow- and fast-evolving genomes complicated the reconstruction of the exact evolutionary paths. Over the past decade, it has become clear that the appearance of more complex organisms during animal evolution was driven by an increase in the complexity of gene regulatory mechanisms at both a transcriptional and a post-transcriptional level. Intriguingly, mechanisms of post-transcriptional gene regulation by non-coding RNAs were already present early on in the evolution of the Metazoa. In particular, microRNAs (miRNAs) have been suggested to have a major role in evolutionary changes of body structure, as the number of miRNA genes correlates strikingly with the morphological complexity of organisms [4–6]. miRNAs are small 21 to 23 nucleotide non-coding RNAs that regulate gene expression by binding to specific target mRNAs, leading to their translational inhibition and/or degradation. Given that miRNAs control gene expression in a wide range of biological processes, including developmental timing, cell proliferation and differentiation, it is plausible that alterations in the spatio-temporal expression of miRNAs during evolution could result in significant changes in physiology and morphology between different taxa. Novel miRNAs continuously evolve in animal genomes. Once integrated into a gene regulatory network, miRNAs are strongly conserved and not susceptible to significant secondary loss. As such, miRNA studies partially overcome the limitations faced by morphological, developmental and protein comparison approaches, such as parallel evolution, convergence and missing data. These appealing characteristics rapidly attracted the attention of evolutionary biologists, and miRNAs became a promising tool for reconstructing animal evolution.

The coming of age of miRNAs in evolutionary studies

Initially, the authors performed deep sequencing of the small RNA repertoire to identify the conserved bilaterian miRNAs and found, in accordance with recent studies [3–6], 34 miRNA families common to protostomes and deuterostomes. Subsequently, they investigated in detail the spatio-temporal localization profile of these conserved miRNAs in Platynereis using whole mount in situ hybridization and found that expression patterns of these miRNAs are highly specific for certain tissues and cell types and are strongly conserved throughout bilaterian evolution.
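The logic of such repertoire comparisons is essentially set intersection under parsimony. A toy sketch follows, with a hypothetical handful of miRNA families standing in for the real sequencing data and the 34 families reported by the study:

```python
# Hypothetical miRNA family sets for two protostomes and one deuterostome
# (illustrative stand-ins, not the study's actual repertoires).
platynereis = {"let-7", "miR-1", "miR-7", "miR-9", "miR-100", "miR-124", "miR-125"}
drosophila  = {"let-7", "miR-1", "miR-7", "miR-9", "miR-100", "miR-124", "miR-8"}
human       = {"let-7", "miR-1", "miR-7", "miR-9", "miR-100", "miR-124", "miR-125"}

# Parsimony: families shared by protostomes and deuterostomes are inferred
# to have been present in their last common ancestor.
ancestral = platynereis & drosophila & human
print(sorted(ancestral))
```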
This comparison allowed Christodoulou and colleagues to reconstruct the minimal set of cell types and tissues that existed in the last common ancestor of nephrozoans (Figure 1). This ancestor is predicted to have had neurosecretory cells along its mouth (characterized by the expression of miR-100, miR-125 and let-7) and motile ciliated cells (miR-29+ miR-34+ miR-92+). In addition, the nephrozoan ancestor would have had a miR-1+ miR-22+ miR-133+ body musculature, a miR-12+ miR-216+ miR-283+ gut and miR-9+ miR-9*+ cells related to sensory information processing. Finally, the nephrozoan ancestor is predicted to have had a miR-124+ central nervous system, which would be connected with a miR-8+ miR-183+ miR-263+ peripheral sensory tissue, and to be already equipped with neurosecretory cells in a primitive brain (miR-7+ miR-137+ miR-153+).

Implications and new directions

Innovation at the post-transcriptional gene regulatory level through expansion of the miRNA repertoire has previously been suggested as one of the driving forces behind the evolution of animal complexity [3–7]. It is not clear, however, how exactly novel miRNAs evolve and what roles they have in the establishment of tissue identity. According to the model of transcriptional control of new miRNA genes suggested by Chen and Rajewsky, newly emerging miRNAs initially should be expressed at low levels and in specific tissues in order to minimize deleterious off-targeting effects and to allow natural selection to eliminate these slightly deleterious targets over time. Subsequently, miRNA expression levels can be increased and tissue-specificity relaxed. Now, with the discovery of Christodoulou et al. that ancient miRNAs were expressed in specific cell types of the protostome-deuterostome ancestor and in many cases assumed broader expression patterns later in evolution, this model of miRNA emergence gains additional solid experimental support. As shown by Christodoulou et al., comparison of the miRNA repertoire between different taxa can significantly contribute to the hypothetical reconstruction of the ancestral body plan: by examining in detail in which tissues and cell types conserved miRNAs evolved, the authors were able to create a hypothetical picture of an ancestor at a key phylogenetic position for which we have no fossils. Although the appearance of the last common ancestor of deuterostomes and protostomes still remains elusive, the authors elucidated the differentiated cell repertoire of this ancestor and, by doing so, unequivocally established miRNAs as a powerful new tool for reconstructing ancient animal body plans at important evolutionary nodes. Further investigation of miRNA repertoires and expression patterns in additional taxa might give fundamental clues about unknown nodes within the animal tree and resolve some phylogenetic uncertainties. For example, one of the frequently disputed questions is the phylogenetic position of the Acoelomorpha (which includes the flatworm-like acoels and nemertodermatids). Acoels were originally grouped within the phylum Platyhelminthes but have recently been placed at a key position at the base of the Bilateria on the basis of new molecular data (Figure 1). Earlier studies revealed that the highly conserved miRNA let-7, which is present in all other bilaterians, is absent in acoels, indicating that acoels might have branched off earlier from the last common ancestor of protostomes and deuterostomes.
In addition, although acoels are believed to primitively lack a real brain, having instead a simple 'commissural' brain characterized by transverse fiber accumulation in the head without a classical ganglionic cell mass, Christodoulou et al. suggest that nervous system centralization was already present before the split between protostomes and deuterostomes. Therefore, a detailed analysis of the acoel miRNA repertoire and the corresponding expression patterns might help to further reveal how evolution at the base of the Bilateria took place and whether or not the urbilaterian - the last common ancestor of acoels and nephrozoans - had complex tissues. Conservation of sequence and expression patterns suggests that the core functions of ancient miRNAs also remained conserved through evolution. What are these core functions? From data from other animal models, Christodoulou et al. speculate that some miRNAs, such as miR-100 and let-7, could have roles in developmental timing. However, only a few miRNA genes are known to work as developmental switches, and, perhaps surprisingly, the majority of miRNAs are in fact not essential for the initial establishment of tissue identity but seem to be important for the maintenance of cells in differentiated states. It is likely, then, that miRNAs facilitate the evolution of complexity by stabilizing existing and newly emerging regulatory circuits and transcriptional programs. Elucidating the principal components of miRNA-containing networks that were present at the dawn of animal evolution and tracing the acquisition of new miRNA circuitry through evolution is the next great evo-devo challenge in the miRNA field.

We thank Bernhard Egger and Turan Demircan for fruitful discussions.

- Levine M, Tjian R: Transcription regulation and animal diversity. Nature. 2003, 424: 147-151. 10.1038/nature01763.
- Chen K, Rajewsky N: The evolution of gene regulation by transcription factors and microRNAs. Nat Rev Genet. 2007, 8: 93-103. 10.1038/nrg1990.
- Grimson A, Srivastava M, Fahey B, Woodcroft BJ, Chiang HR, King N, Degnan BM, Rokhsar DS, Bartel DP: Early origins and evolution of microRNAs and Piwi-interacting RNAs in animals. Nature. 2008, 455: 1193-1197. 10.1038/nature07415.
- Sempere LF, Cole CN, McPeek MA, Peterson KJ: The phylogenetic distribution of metazoan microRNAs: insights into evolutionary complexity and constraint. J Exp Zool B Mol Dev Evol. 2006, 306: 575-588. 10.1002/jez.b.21118.
- Prochnik SE, Rokhsar DS, Aboobaker AA: Evidence for a microRNA expansion in the bilaterian ancestor. Dev Genes Evol. 2007, 217: 73-77. 10.1007/s00427-006-0116-1.
- Heimberg AM, Sempere LF, Moy VN, Donoghue PC, Peterson KJ: MicroRNAs and the advent of vertebrate morphological complexity. Proc Natl Acad Sci USA. 2008, 105: 2946-2950. 10.1073/pnas.0712259105.
- Berezikov E, Thuemmler F, van Laake LW, Kondova I, Bontrop R, Cuppen E, Plasterk RH: Diversity of microRNAs in human and chimpanzee brain. Nat Genet. 2006, 38: 1375-1377. 10.1038/ng1914.
- Christodoulou F, Raible F, Tomer R, Simakov O, Trachana K, Klaus S, Snyman H, Hannon GJ, Bork P, Arendt D: Ancient animal microRNAs and the evolution of tissue identity. Nature. 2010, 463: 1084-1088. 10.1038/nature08744.
- Egger B, Steinke D, Tarui H, De Mulder K, Arendt D, Borgonie G, Funayama N, Gschwentner R, Hartenstein V, Hobmayer B, Hooge M, Hrouda M, Ishida S, Kobayashi C, Kuales G, Nishimura O, Pfister D, Rieger R, Salvenmoser W, Smith J, Technau U, Tyler S, Agata K, Salzburger W, Ladurner P: To be or not to be a flatworm: the acoel controversy. PLoS ONE. 2009, 4: e5502. 10.1371/journal.pone.0005502.
- Raikova OI, Reuter M, Kotikova EA, Gustafsson MKS: A commissural brain! The pattern of 5-HT immunoreactivity in Acoela (Plathelminthes). Zoomorphology. 1998, 118: 69-77. 10.1007/s004350050058.
<urn:uuid:cd5b7208-7365-46a4-94ee-b818b8e9d936>
2.90625
2,593
Academic Writing
Science & Tech.
28.569245
95,526,058
Under anaerobic conditions - the absence of oxygen - pyruvic acid can be routed by the organism into one of three pathways: lactic acid fermentation, alcohol fermentation, or cellular (anaerobic) respiration. Humans cannot ferment alcohol in their own bodies; we lack the genetic information to do so. These biochemical pathways, with their myriad reactions catalyzed by reaction-specific enzymes all under genetic control, are extremely complex. We will only skim the surface at this time and in this course. Alcohol fermentation is the formation of alcohol from sugar. Yeast, when under anaerobic conditions, convert glucose to pyruvic acid via the glycolysis pathways, then go one step farther, converting pyruvic acid into ethanol, a C-2 compound. Many organisms will also ferment pyruvic acid into other chemicals, such as lactic acid. Humans carry out lactic acid fermentation in muscles where oxygen becomes depleted, resulting in localized anaerobic conditions. This lactic acid causes the muscle stiffness couch-potatoes feel after beginning exercise programs. The stiffness goes away after a few days since the cessation of strenuous activity allows aerobic conditions to return to the muscle, and the lactic acid can be converted back into pyruvic acid and metabolized for ATP via the normal aerobic respiration pathways.
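A compact way to summarize the two fermentation routes described above is by their net equations (standard textbook stoichiometry, added here for reference rather than taken from the passage):

```latex
% Glycolysis feeds both routes:
\mathrm{C_6H_{12}O_6} + 2\,\mathrm{NAD^+} + 2\,\mathrm{ADP} + 2\,\mathrm{P_i}
  \longrightarrow 2\,\mathrm{CH_3COCOOH} + 2\,\mathrm{NADH} + 2\,\mathrm{H^+} + 2\,\mathrm{ATP}

% Alcohol fermentation (yeast), net from glucose:
\mathrm{C_6H_{12}O_6} \longrightarrow 2\,\mathrm{C_2H_5OH} + 2\,\mathrm{CO_2}

% Lactic acid fermentation (muscle), net from glucose:
\mathrm{C_6H_{12}O_6} \longrightarrow 2\,\mathrm{CH_3CH(OH)COOH}
```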
<urn:uuid:83d99f5d-5c8c-4552-baff-33ea21fafc47>
3.453125
256
Knowledge Article
Science & Tech.
19.252247
95,526,068
At the surface, Antarctica is a motionless and frozen landscape. Yet hundreds of miles down, the Earth is moving at a rapid rate, new research has shown. The study, led by Newcastle University, UK, and published this week in Earth and Planetary Science Letters, explains for the first time why the upward motion of the Earth's crust in the Northern Antarctic Peninsula is currently taking place so quickly. Previous studies have shown the Earth is 'rebounding' due to the overlying ice sheet shrinking in response to climate change. This movement of the land was understood to be due to an instantaneous, elastic response followed by a very slow uplift over thousands of years. But GPS data collected by the international research team, involving experts from Newcastle University, UK; Durham University; DTU, Denmark; the University of Tasmania, Australia; Hamilton College, New York; the University of Colorado and the University of Toulouse, France, has revealed that the land in this region is actually rising at a phenomenal rate of 15 mm a year - much greater than can be accounted for by the present-day elastic response alone. And they have shown for the first time how the mantle below the Earth's crust in the Antarctic Peninsula is flowing much faster than expected, probably due to subtle changes in temperature or chemical composition. This means it can flow more easily and so responds much more quickly to the lightening load hundreds of miles above it, changing the shape of the land. Lead researcher, PhD student Grace Nield, based in the School of Civil Engineering and Geosciences at Newcastle University, explains: "You would expect this rebound to happen over thousands of years and instead we have been able to measure it in just over a decade. You can almost see it happening, which is just incredible. "Because the mantle is 'runnier' below the Northern Antarctic Peninsula it responds much more quickly to what's happening on the surface. So as the glaciers thin and the load in that localised area reduces, the mantle pushes up the crust. "At the moment we have only studied the vertical deformation so the next step is to look at horizontal motion caused by the ice unloading to get more of a 3-D picture of how the Earth is deforming, and to use other geophysical data to understand the mechanism of the flow." Since 1995 several ice shelves in the Northern Antarctic Peninsula have collapsed and triggered ice-mass unloading, causing the solid Earth to 'bounce back'. "Think of it a bit like a stretched piece of elastic," says Nield, whose project is funded by the Natural Environment Research Council (NERC). "The ice is pressing down on the Earth and as this weight reduces the crust bounces back. But what we found when we compared the ice loss to the uplift was that they didn't tally - something else had to be happening to push the solid Earth up at such a phenomenal rate." Collating data from seven GPS stations situated across the Northern Peninsula, the team found the rebound was so fast that the upper mantle viscosity - or resistance to flow - had to be at least ten times lower than previously thought for the region and much lower than for the rest of Antarctica. Professor Peter Clarke, Professor of Geophysical Geodesy at Newcastle University and one of the authors of the paper, adds: "Seeing this sort of deformation of the earth at such a rate is unprecedented in Antarctica.
What is particularly interesting here is that we can actually see the impact that glacier thinning is having on the rocks 250 miles down." Louella Houldcroft | EurekAlert!
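A minimal sketch of why a 'runnier' mantle makes the rebound visible on a decadal timescale, assuming a simple exponential relaxation model in which the time constant scales with viscosity (all numbers are invented for illustration; the paper's modelling is far more sophisticated):

```python
import math

# Toy viscous rebound: remaining uplift decays with time constant tau,
# and tau scales with mantle viscosity (tau ~ eta).
def uplift_rate_mm_per_yr(t_yr, total_uplift_m=1.0, tau_yr=200.0):
    return 1000.0 * (total_uplift_m / tau_yr) * math.exp(-t_yr / tau_yr)

for tau in (200.0, 2000.0):   # runny mantle vs a ten-times-stiffer mantle
    print(f"tau = {tau:5.0f} yr -> {uplift_rate_mm_per_yr(10.0, tau_yr=tau):.2f} mm/yr")
```

With the stiffer mantle the decadal uplift rate drops by roughly a factor of ten, which is why a measurably fast rebound points to an unusually low upper-mantle viscosity.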
<urn:uuid:8cb681c3-285c-49eb-842a-0d001106dd40>
3.96875
1,363
Content Listing
Science & Tech.
42.06989
95,526,088
The integral quantum Hall effect (IQHE) and the fractional quantum Hall effect (FQHE) are two of the most remarkable physical phenomena to be discovered in solid state physics in recent years. In many respects, the two phenomena share very similar underlying physical characteristics and concepts, for instance, the two-dimensionality of the system, the quantization of the Hall resistance in units of h/e² with a simultaneous vanishing of the longitudinal resistance, and the interplay between disorder and the magnetic field giving rise to the existence of extended states. In other respects, they encompass entirely different physical principles and ideas. In particular, the IQHE is believed to be a manifestation of the transport properties of a noninteracting, charged-particle system in the presence of a strong perpendicular magnetic field, whereas the FQHE results from a repulsive interaction between particles, giving rise to a novel form of many-body ground state. Moreover, the mobile (non-localized) elementary excitations must carry fractional charge and lie at a finite energy above the ground state. These fractionally charged excitations are thought to obey unusual statistics which are neither Fermi-Dirac nor Bose-Einstein.
Keywords: Filling Factor, Landau Level, Extended State, Quantum Hall Effect, Experimental Aspect
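As a point of reference (standard textbook material, not part of the excerpt above), the plateau condition can be written compactly in terms of the Landau-level filling factor ν:

```latex
% Hall and longitudinal resistance on a plateau at filling factor \nu:
\[
  R_{xy} = \frac{h}{\nu e^{2}}, \qquad R_{xx} \rightarrow 0,
\]
% with the filling factor set by the electron density n and the field B:
\[
  \nu = \frac{n h}{e B}
  \quad\text{($\nu$ integer for the IQHE; $\nu = 1/3,\, 2/5,\,\dots$ for the FQHE).}
\]
```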
<urn:uuid:152511dc-0ed6-4908-9fa7-2814255ec218>
2.625
260
Truncated
Science & Tech.
10.508382
95,526,116
Steam distillation may be used to separate a mixture of p-nitrophenol and o-nitrophenol. The ortho isomer distills at 93 degrees Celsius; the para isomer does not. Explain. © BrainMass Inc., brainmass.com
The reason why the ortho distills first (meaning it's less stable in terms of ...
This solution provides the explanation required to answer this organic chemistry question, which pertains to the mixture separation technique of steam distillation. It is discussed in under 100 words.
<urn:uuid:f7652e63-0f65-48d1-be0b-d901318066cc>
2.9375
125
Truncated
Science & Tech.
59.323286
95,526,146
Temporal range: 59–0 Ma (Paleogene to present)
A sailfish is a fish of the genus Istiophorus of billfish, living in warmer areas of all the seas of the Earth. They are predominantly blue to gray in colour and have a characteristic erectile dorsal fin known as a sail, which often stretches the entire length of the back. Another notable characteristic is the elongated bill, resembling that of the swordfish and other marlins. They are, therefore, described as billfish in sport-fishing circles. Two sailfish species have been recognized. No differences have been found in mtDNA, morphometrics or meristics between the two supposed species, and most authorities now recognize only a single species, Istiophorus platypterus, found in warmer oceans around the world. FishBase continues to recognize two species: the Atlantic sailfish (Istiophorus albicans) and the Indo-Pacific sailfish (Istiophorus platypterus). Sailfish grow quickly, reaching 1.2–1.5 m (3.9–4.9 ft) in length in a single year, and feed on the surface or at middle depths on smaller pelagic forage fish and squid. Sailfish were previously estimated to reach maximum swimming speeds of 35 m/s (130 km/h; 78 mph), but research published in 2015 and 2016 indicates sailfish do not exceed speeds of 10–15 m/s. During predator–prey interactions, sailfish reached burst speeds of 7 m/s (25 km/h; 16 mph) and did not surpass 10 m/s (36 km/h; 22 mph). Generally, sailfish do not grow to more than 3 m (9.8 ft) in length and rarely weigh over 90 kg (200 lb). Sailfish have been reported to use their bills to hit schooling fish, by tapping (short-range movement) or slashing (horizontal large-range movement) at them. The sail is normally kept folded down when swimming and raised only when the sailfish attack their prey. The raised sail has been shown to reduce sideways oscillations of the head, which is likely to make the bill less detectable by prey fish. This strategy allows sailfish to put their bills close to fish schools, or even into them, without being noticed by the prey before hitting them. Sailfish usually attack one at a time, and the small teeth on their bills inflict injuries on their prey fish in terms of scale and tissue removal. Typically, about two prey fish are injured during a sailfish attack, but only 24% of attacks result in capture. As a result, injured fish increase in number over time in a fish school under attack. Given that injured fish are easier to catch, sailfish benefit from the attacks of their conspecifics, but only up to a particular group size. A mathematical model showed that sailfish in groups of up to 70 individuals should gain benefits in this way. The underlying mechanism was termed proto-cooperation because it does not require any spatial coordination of attacks and could be a precursor to more complex forms of group hunting. The bill movement of sailfish during attacks on fish is usually either to the left or to the right side. Identification of individual sailfish based on the shape of their dorsal fins revealed individual preferences for hitting to the right or left side. The strength of this side preference was positively correlated with capture success. These side preferences are believed to be a form of behavioural specialization that improves performance. However, a possibility exists that sailfish with strong side preferences could become predictable to their prey, because fish could learn after repeated interactions in which direction the predator will hit.
Given that individuals with right- and left-sided preferences are about equally frequent in sailfish populations, living in groups possibly offers a way out of this predictability. The larger the sailfish group, the greater the likelihood that individuals with right- and left-sided preferences are about equally frequent (a point illustrated numerically below). Therefore, prey fish should find it hard to predict in which direction the next attack will take place. Taken together, these results suggest a potential novel benefit of group hunting which allows individual predators to specialize in their hunting strategy without becoming predictable to their prey. The injuries that sailfish inflict on their prey appear to reduce their swimming speeds, with injured fish being more frequently found at the back (compared with the front) of the school than uninjured ones. When a sardine school is approached by a sailfish, the sardines usually turn away and flee in the opposite direction. As a result, the sailfish usually attacks sardine schools from behind, putting at risk those fish that are at the rear of the school because of their reduced swimming speeds.
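The balance argument above can be made concrete with a little arithmetic. Below is a minimal sketch (an editorial illustration, not from the cited studies) that assumes each sailfish is independently right-biased with probability 0.5 and asks how likely a group is to contain a roughly even mix; the group sizes and the 40–60% balance window are illustrative choices.

```python
from math import comb

def prob_roughly_balanced(n, lo=0.4, hi=0.6, p=0.5):
    """Probability that the fraction of right-biased hunters in a group
    of n falls within [lo, hi], assuming each individual is independently
    right-biased with probability p (binomial model)."""
    return sum(
        comb(n, k) * p**k * (1 - p)**(n - k)
        for k in range(n + 1)
        if lo <= k / n <= hi
    )

# Larger groups are more likely to contain a near-even mix of left- and
# right-biased attackers, making attack direction harder to predict.
for n in (2, 10, 30, 70):
    print(n, round(prob_roughly_balanced(n), 3))
```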
<urn:uuid:68386533-ba77-425e-a846-a6b269c5e43a>
4.25
1,780
Knowledge Article
Science & Tech.
62.601632
95,526,147
Original Publication Date: 1995-Jul-01 Included in the Prior Art Database: 2005-Mar-30 Barrett, KL: AUTHOR [+1]
Disclosed is an object-oriented framework for representing a processor language, in assembly language and machine language forms, as well as providing a way to convert between the two forms.
In order to translate between assembly language instructions and machine language instructions, it is necessary to have knowledge of the instructions, the instructions' operands, the operand positions, and the properties of the operands. It is useful to store this knowledge in a machine-accessible form in a compact way, to automate the process of translating between the two forms. The knowledge is complex enough to warrant the development of such a representation, rather than a straightforward programmatic translation, but the details of the translation at the lower levels are simple enough to permit such a representation.
The representation scheme presented is based on object-oriented design principles. The knowledge about both the assembly and machine language forms of an instruction is stored as property lists in the object representing that instruction. In the parlance of knowledge-based systems, every instruction is represented by an object and the two language forms of the instruction are represented as slots of the object. The machine language form is stored as the value of the OPCODE slot and the assembly language form is stored in the ARG_LIST and TC_FORM slots. The value of the OPCODE slot is a number which (if represented in the binary number system) is the machine language instruction. The OPCODE slot has a facet called BLUE_PRT which defines the various fields of the machine language instruction. For example, STX has the following BLUE_PRT facet value on the OPCODE slot: ((OPCD 0 5) (EO 21 30) (RS 6 10) (RA 11 15) (RB 16 20)) which is interpreted as: field OPCD (the 6-bit code of the instruction) starts at bit 0 and ends at bit 5; field EO (extended opcode) starts at bit 21 and ends at bit 30; field RC (condition bit) is bit 31; field RS (source register number) is specified in bits 6
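To make the blueprint idea concrete, here is a minimal Python sketch (not part of the original 1995 disclosure): an instruction object stores its blueprint as (field_name, start_bit, end_bit) triples, and encoding/decoding walks those triples. Bit numbering follows the excerpt's convention (bit 0 is the most significant bit) over an assumed 32-bit word; the field values used below are illustrative only.

```python
class Instruction:
    """An instruction with a machine-language blueprint: a list of
    (field_name, start_bit, end_bit) triples, with bits numbered from
    the most significant bit (bit 0) of a 32-bit word."""

    WORD_BITS = 32

    def __init__(self, mnemonic, blueprint):
        self.mnemonic = mnemonic
        self.blueprint = blueprint

    def encode(self, fields):
        """Pack a dict of field values into a 32-bit machine word."""
        word = 0
        for name, start, end in self.blueprint:
            width = end - start + 1
            value = fields[name] & ((1 << width) - 1)  # mask to field width
            shift = self.WORD_BITS - 1 - end           # bit 0 = MSB
            word |= value << shift
        return word

    def decode(self, word):
        """Unpack a 32-bit machine word into a dict of field values."""
        fields = {}
        for name, start, end in self.blueprint:
            width = end - start + 1
            shift = self.WORD_BITS - 1 - end
            fields[name] = (word >> shift) & ((1 << width) - 1)
        return fields

# Blueprint taken from the excerpt; the field values are made up.
stx = Instruction("STX", [("OPCD", 0, 5), ("EO", 21, 30),
                          ("RS", 6, 10), ("RA", 11, 15), ("RB", 16, 20)])
word = stx.encode({"OPCD": 31, "EO": 151, "RS": 3, "RA": 4, "RB": 5})
assert stx.decode(word)["RS"] == 3  # fields round-trip through the word
```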
<urn:uuid:31243960-f7bc-443f-ab55-7ee508a2a686>
2.71875
531
Knowledge Article
Software Dev.
38.878642
95,526,175
On April 6, the Intergovernmental Panel on Climate Change (IPCC) will release a report entitled Climate Change 2007: Impacts, Adaptation, and Vulnerability that focuses on how climate change is affecting the planet. One finding is an accelerated rate of species extinctions, with estimates of up to 1 million species at risk in coming decades. However, new research shows that protected areas can be an effective tool for preventing such extinctions. The study by a team of international scientists, published March 30 in the journal Frontiers in Ecology and the Environment, concludes that protected areas are necessary for preventing the loss of species due to climate change – provided that shifts in species' ranges are factored into early analysis of whether to expand current protected areas or create new ones. It is the first research on the relevance of protected areas – a mainstay of conservation efforts – in adapting to climate change. "Extinctions due to climate change are not inevitable – this research shows that new protected areas can greatly reduce the risk faced by species that help sustain us," said Lee Hannah, a Conservation International (CI) climate scientist and the study's lead author. "Areas set aside for nature are an important tool to combat climate change extinctions, and one that is well-tested and can be deployed immediately." The study by scientists from the United States, South Africa, the United Kingdom, Spain and Mexico found that existing protected areas cover the ranges of many species as the climate changes, but additional area is required to cover all species. Creating new protected areas based on climate change would cover the ranges of most species. As the climate changes, species adapt by moving beyond their traditional ranges, potentially traveling out of current protected areas such as national parks. The study found that existing protected areas remain effective in the early stages of climate change, while adding new protected areas or expanding current ones would maintain species protection in future decades and centuries. It also shows that anticipating the need for new protected areas and creating them in the short term will be less expensive than waiting until the impacts of climate change become more significant. The new study measures the continued effectiveness of protected areas as the global climate changes, unlike previous studies that modeled species range shifts without considering new protected areas, or that focused on existing impacts of climate change without considering the long-term future. "Existing conservation plans have assumed that species distributions change relatively slowly, unless they are directly affected by human activities," said Miguel Araújo, a co-author of the study. "However, our study shows that these strategies must anticipate the impacts of climate change if extinctions are to be reduced." The study's authors also warned that protected areas would fail in the long run unless climate change is stopped. "Stopping climate change and dealing with the impacts that are now inevitable must go hand-in-hand," Hannah said. "No conservation strategy can cope with the levels of change that will be experienced if we continue at the current pace of climate change." Three regions used as models for the study were Mexico (birds and mammals), and Western Europe and the Cape Floristic Region of Africa (plants). Species distribution models were used for a total of 1,695 species in the three regions.
Because the three highly varied regions represent many of the world's ecosystems, it is likely that new protected areas must be created in most parts of the world. Tom Cohen | EurekAlert!
<urn:uuid:e768b926-7b7d-48fc-98db-1418212d580f>
4.3125
1,377
Content Listing
Science & Tech.
35.645326
95,526,191
Nearly eight years after astronomers lost track of the estimated 120 m-long space rock, asteroid 2010 WC9 is ready to come back in spectacular fashion. The giant asteroid, around the size of the Statue of Liberty, will track a peculiar path and fly between Earth and its natural satellite. At its closest, the asteroid will pass within 0.53 lunar distances of Earth – around 203,000 km. The approach is expected to happen around 1am BST, in the early morning hours of Tuesday, May 15. Asteroid 2010 WC9 was first observed on November 30, 2010, by the US-based Catalina Sky Survey in Arizona. But the asteroid dashed out of sight and into the darkness of space by December, leaving scientists uncertain where it might be headed next. Astronomers were unable to track WC9's path until now. Daniel Bamberger, at London's Northolt Branch Observatories, said: "We imaged this object twice. First on May 9, when it was still known by its temporary designation ZJ99C60. "Then again on May 10, after it was identified as asteroid 2010 WC9, which had been a lost asteroid for eight years. "It is still a faint object of 18th magnitude, but it is brightening very rapidly. "2010 WC9 will be brighter than 11th magnitude at closest approach, making it visible in a small telescope." Calculations by NASA's Jet Propulsion Laboratory suggest this will be the closest asteroid WC9 will come to Earth in the next 300 years. Unfortunately, the asteroid will not be bright enough to be visible to the naked eye, but a small telescope could aid you in the endeavour. However, if you are keen on seeing the asteroid zoom by, then you are in luck, because the event will be streamed online by multiple sources. You can check out the robotic telescope service Slooh, which will live-stream the asteroid on Facebook Live and on its website tonight. Northolt Branch Observatories will also host a Facebook Live event later this evening. Guy Wells at Northolt Branch Observatories said: "We are planning to broadcast this asteroid live to our Facebook page on the night of May 14, likely around midnight, if the weather forecast remains positive. "The broadcast will be less than 25 minutes in duration, as the asteroid will cross our field of view within that period of time. "The asteroid will be moving quite rapidly – 30 arcseconds per minute. Our display will update every five seconds. "We are of course collecting astrometric data while this is happening, but the motion of the asteroid will be apparent every five seconds."
<urn:uuid:63604430-18be-446e-81af-8c276a3231f4>
2.875
569
News Article
Science & Tech.
53.535959
95,526,239
Derive from first principles the Poiseuille equation for the pressure drop generated by the steady flow of a Newtonian fluid through a straight tube of circular cross-section. If the flow is laminar, what is the form of the velocity profile within the tube? Show that the mean velocity is half the peak in such circumstances. © BrainMass Inc., brainmass.com
Please see the attached file. Consider steady flow of a Newtonian fluid through a straight tube of circular cross-section with a radius R. Consider an element of fluid with a radius r and axial length L. Let the pressures at the two ends be p1 and p2 as shown in the figure. The net force pushing the fluid is F_P = p1·πr² − p2·πr² = πr²(p1 − p2). Note that the force inducing the motion of the fluid is the difference, or gradient, in pressure and does not depend upon the absolute magnitude of the pressure itself. In other words, even if the pressure in the tube is very large, there will be no motion of the fluid if there is no difference in pressure between the two ends, and the motion will be in the direction of the positive ...
The solution provides step-by-step explanations and derivations for the Poiseuille equation in a 3-page Word document.
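The excerpt breaks off mid-derivation; for orientation, the standard remaining steps are sketched below (a textbook summary, not the content of the attached BrainMass document):

```latex
% Force balance on a cylindrical fluid element of radius r and length L,
%   \pi r^{2}\,\Delta p = 2\pi r L\,\tau, with \tau = -\mu\,du/dr for a
% Newtonian fluid, gives the parabolic (laminar) velocity profile:
\[
  u(r) = \frac{\Delta p}{4\mu L}\left(R^{2} - r^{2}\right),
  \qquad u_{\max} = u(0) = \frac{\Delta p\, R^{2}}{4\mu L}.
\]
% Averaging over the cross-section shows the mean is half the peak:
\[
  \bar{u} = \frac{1}{\pi R^{2}} \int_{0}^{R} u(r)\, 2\pi r \, dr
          = \frac{\Delta p\, R^{2}}{8\mu L} = \tfrac{1}{2}\, u_{\max},
\]
% and the volumetric flow rate is the Poiseuille equation:
\[
  Q = \pi R^{2}\,\bar{u} = \frac{\pi R^{4}\,\Delta p}{8\mu L}.
\]
```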
<urn:uuid:bbb71002-39ab-4cbd-8a3d-27bcf45c360b>
3.515625
295
Tutorial
Science & Tech.
65.095303
95,526,244
SEARCH – among more than 8 million books: Showing: Who Cares about Particle Physics? - Making Sense of the Higgs Boson, Large Hadron Collider, and CERN
Product details
- Hardback: 272 pages
- Publisher: Oxford University Press, Incorporated (September 2016)
- ISBN: 9780198783244
CERN, the European Laboratory for particle physics, regularly makes the news. What kind of research happens at this international laboratory, and how does it impact people's daily lives? Why is the discovery of the Higgs boson so important? Particle physics describes all matter found on Earth, in stars and all galaxies, but it also tries to go beyond what is known to describe dark matter, a form of matter five times more prevalent than the known, regular matter. How do we know this mysterious dark matter exists, and is there a chance it will be discovered soon? About sixty countries contributed to the construction of the gigantic Large Hadron Collider (LHC) at CERN and its immense detectors. Dive in to discover how international teams of researchers work together to push scientific knowledge forward. Here is a book written for every person who wishes to learn a little more about particle physics, without requiring prior scientific knowledge. It starts from the basics to build a solid understanding of current research in particle physics. A good dose of curiosity is all one will need to discover a whole world that spans from the infinitesimally small to the infinitely large, and where imminent discoveries could mark the dawn of a huge revolution in the current conception of the material world.
1: What is matter made of?
2: What about the Higgs boson?
3: Accelerators and detectors, the essential tools
4: The discovery of the Higgs boson
5: The dark side of the Universe
6: Going beyond the Standard Model: calling SUSY to the rescue
7: What does fundamental research put on our plate?
8: CERN experiments: a unique management and cooperation model
9: Diversity in science
10: What could the next big discoveries be?
All prices shown include VAT.
<urn:uuid:69c70130-9b93-497a-b422-463197b1fb27>
3.171875
453
Product Page
Science & Tech.
40.662565
95,526,260
Original Research Article

Cable Bacteria and the Bioelectrochemical Snorkel: The Natural and Engineered Facets Playing a Role in Hydrocarbons Degradation in Marine Sediments

- Water Research Institute, IRSA-CNR, Rome, Italy

The composition and metabolic traits of the microbial communities acting in an innovative bioelectrochemical system were investigated here. The system, known as the Oil Spill Snorkel, was recently developed to stimulate the oxidative biodegradation of petroleum hydrocarbons in anoxic marine sediments. Next Generation Sequencing was used to describe the microbiome of the bulk sediment and of the biofilm growing attached to the surface of the electrode. The analysis revealed that sulfur cycling primarily drives the microbial metabolic activities occurring in the bioelectrochemical system. In the anoxic zone of the contaminated marine sediment, petroleum hydrocarbon degradation occurred under sulfate-reducing conditions and was led by different families of Desulfobacterales (46% of total OTUs). Remarkably, the occurrence of filamentous Desulfobulbaceae, known to be capable of carrying electrons derived from sulfide oxidation to oxygen serving as a spatially distant electron acceptor, was demonstrated. Differently from the sediment, which was mostly colonized by Deltaproteobacteria, the biofilm at the anode hosted, to a large extent, members of Alphaproteobacteria (59%), mostly affiliated to the Rhodospirillaceae family (33%) and including several known sulfur- and sulfide-oxidizing genera. Overall, we showed the occurrence in the system of a variety of electroactive microorganisms able to sustain contaminant biodegradation alone or by means of an external conductive support, through the establishment of a bioelectrochemical connection between two spatially separated redox zones and the preservation of an efficient sulfur cycle.

Petroleum hydrocarbons are important sources of energy for daily life and industrial activities. During their production and/or transportation, tanker accidents may occur, representing a global environmental issue (Prince, 1993). Oil spills are among the major causes of marine pollution and represent a risk for human health and ecosystem functioning (Kvenvolden and Cooper, 2003; Van Hamme et al., 2003; Das and Chandran, 2011; Thapa et al., 2012; Sammarco et al., 2013). In order to reduce the severe toxicity of these compounds, remediation strategies are urgently required. Technologies based on contaminant degradation processes operated by autochthonous microorganisms deserve increasing attention (Leahy and Colwell, 1990; Das and Chandran, 2011). Several studies have exploited novel biological processes and investigated the ability of marine bacteria to mineralize these pollutants under sustainable conditions and at lower costs compared to physical-chemical treatments (Swannell et al., 1996; Van Hamme et al., 2003; Roling et al., 2004; Nikolopoulou et al., 2013). The success of hydrocarbon biodegradation depends on the environmental conditions favoring the action of specialized microorganisms. In particular, besides adequate sources of nutrients (i.e., nitrogen, phosphorus, sulfur and iron), oxygen availability is fundamental for fast hydrocarbon biodegradation (even if anaerobic degradation may also occur at slower rates) (Ron and Rosenberg, 2014). To ensure the continuous availability of electron acceptors, remediation strategies based on the addition and/or delivery of oxygen have been proposed (Zhang et al., 2010; Lu et al., 2014).
However, due to its low solubility and fast reaction with reduced inorganic species (e.g., sulfide, ferrous ion), some of these strategies are often poorly effective and relatively expensive. Promising alternatives based on bioelectrochemical systems were recently proposed for the clean-up of contaminated marine environments, offering the opportunity to drive efficient and sustainable bioremediation processes that employ electrodes as electron acceptors to stimulate the oxidation of petroleum-derived pollutants (Holmes et al., 2004; Zhang et al., 2010; Morris and Jin, 2012; Rakoczy et al., 2013; Cruz Viggi et al., 2015; Daghio et al., 2016). In some of these studies, a primary involvement of the sulfur cycle in hydrocarbon degradation in bioelectrochemical systems was hypothesized (Cruz Viggi et al., 2015; Daghio et al., 2016). However, the role and identity of the microorganisms responsible for such processes, as well as the mechanisms involved, were not investigated in depth and, in turn, not fully understood. The occurrence of Desulfobulbaceae members on the anode surface and in the bulk of a bioelectrochemical system able to sustain toluene degradation was recently reported, even though their involvement was not directly proven (Daghio et al., 2016). The occurrence in natural environments of sulfate-reducing bacteria belonging to Desulfobulbaceae, able to oxidize sulfide by acting as electron cables, was recently shown (Pfeffer et al., 2012). These microorganisms are capable of transporting electrons, derived from sulfide oxidation, to oxygen as the final electron acceptor, using centimeter-long filaments as electrical cables. In nature there is a great diversity of electroactive bacteria able to transfer electrons far beyond the cell surface to an electrode, or vice versa (e.g., Geobacter sulfurreducens, Acidithiobacillus ferrooxidans, Shewanella oneidensis) (Rabaey and Rozendal, 2010; Liu et al., 2011; Rosenbaum et al., 2011; Levar et al., 2012; Bücking et al., 2013; Babauta et al., 2014; Dolch et al., 2014). Despite the metabolic potential of such microorganisms, only a few studies have dealt with electroactive bacteria in contaminated marine sediments where bioelectrochemical hydrocarbon degradation occurs (Holmes et al., 2004; Rowe et al., 2015). In the present study, we have explored the structure and the associated metabolic traits of the microbial communities thriving in an innovative bioelectrochemical system (i.e., the Oil Spill Snorkel) recently developed for the anoxic biodegradation of petroleum hydrocarbons in marine sediments (Cruz Viggi et al., 2015). In the system, a graphite rod (i.e., the snorkel), half-buried within the anoxic contaminated sediment, accepts electrons deriving from the biological oxidation of contaminants and other reduced species in the marine sediment. In this configuration, electrons flow along the conductive graphite rod from the section buried in the anoxic sediment (i.e., the anode) to the upper oxic section (i.e., the cathode), where oxygen is reduced to form water in the presence of a catalyst (i.e., activated carbon). Overall, the system allows the formation of a (bio)electrochemical connection between the anoxic sediment and the overlying oxic water, thereby increasing the rate of oxidative reactions occurring in the sediment and positively affecting the extent of hydrocarbon degradation.
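For orientation, the two ends of the snorkel can be summarized by standard half-reactions. This sketch is an editorial illustration of the configuration described above, not taken from the paper; as discussed later, the sulfide oxidation may also stop at elemental sulfur rather than proceeding fully to sulfate.

```latex
% Anodic (buried) end: biological/abiotic oxidation of sulfide,
% here written fully back to sulfate:
\[
  \mathrm{HS^- + 4\,H_2O \;\rightarrow\; SO_4^{2-} + 9\,H^+ + 8\,e^-}
\]
% Cathodic (oxic) end: oxygen reduction on the activated-carbon catalyst:
\[
  \mathrm{O_2 + 4\,H^+ + 4\,e^- \;\rightarrow\; 2\,H_2O}
\]
```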
Although a higher degradation of total petroleum hydrocarbons was observed in the bioelectrochemical system compared to a control system, the interplay between the multiple biological reactions occurring in the Oil-Spill Snorkel, including hydrocarbon oxidation and the sulfur cycle, remained unresolved, as did the main mechanisms driving the impact, direct or indirect, of the electrode on the oxidation reactions.

Materials and Methods

The Oil-Spill Snorkel Experiments

The "Oil-Spill Snorkel" experimental setup consisted of sacrificial microcosms containing crude oil-supplemented sandy sediment from Messina Harbor (Italy). The sediment was artificially contaminated in the laboratory with Intermediate Fuel Oil (IFO 180) to a final concentration of approximately 20 g/kg. Microcosms were prepared in 120-mL serum bottles. Each bottle was filled (starting from the bottom) with 50 grams of oil-supplemented sediment, 40 g of clean sand, 10 g of Norit® granular activated carbon (serving as oxygen reduction catalyst), and 40 mL of oxygenated seawater from the site. Graphite rods (1 or 3, depending on the treatment) were inserted vertically through the layers of the different materials to create the electrochemical connection between the anoxic sediment and the oxygenated overlying water. Five different treatments were set up, namely: treatment "S3", which contained 3 graphite rods; treatment "S", which contained 1 graphite rod; treatment "C" (biotic control), which contained no graphite rods; treatment "B3" (autoclaved control), which contained 3 graphite rods and was autoclaved (120°C for 1 h) on 3 successive days; and treatment "B" (autoclaved control), which contained 1 graphite rod and was also autoclaved (120°C for 1 h) on 3 successive days. Once prepared, all the microcosms were statically incubated in the dark in a temperature-controlled room at 20 ± 1°C. Weekly, the headspace of the bottles was analyzed for oxygen consumption and carbon dioxide evolution by gas chromatography (GC) with a thermal conductivity detector (TCD). At fixed times, one bottle from each treatment was sacrificed: the sediment was analyzed (upon liquid–solid extraction) by GC with a flame ionization detector (FID) for quantification of total petroleum hydrocarbons (TPH); the liquid phase was analyzed by ion chromatography (IC) for quantification of seawater anions. Sediment samples and biofilms growing on the electrode surface were collected at the end of treatment "S3" (t = 417 d) for CARD-FISH analysis. 1 g of marine sediment was fixed in formaldehyde and processed as previously reported (Cruz Viggi et al., 2015). In parallel, microorganisms on the electrode surface were scraped with a sterile spatula and dissolved in PBS buffer with formaldehyde (2% v/v). Microorganisms detached both from the marine sediment particles and from the electrode surface were filtered through 0.2 μm polycarbonate filters (Ø 47 mm, Millipore) by gentle vacuum (<0.2 bar) and stored at −20°C until use. Each sample was used for Catalyzed Reporter Deposition-Fluorescence In situ Hybridization (CARD-FISH) following the procedure published elsewhere (Matturro et al., 2016a). Oligonucleotide probes targeting Deltaproteobacteria (DELTA495abc; Loy et al., 2002) and Desulfobulbaceae (DSB706; Schauer et al., 2014) were employed following the hybridization conditions reported elsewhere (Matturro et al., 2016a). The analysis was performed by epifluorescence microscopy (Olympus, BX51).
Images were captured with an Olympus F-View CCD camera and handled with Cell^F software (Olympus, Germany). DNA extraction for NGS analysis was performed on samples collected at the end of treatment "S3" (t = 417 days). In detail, 0.25 g of dry marine sediment was collected with a sterile spatula and processed for DNA extraction with the Power Soil DNA extraction kit (MoBio, Italy) following the manufacturer's instructions. Simultaneously, the biofilm growing on the electrode surface was gently scraped with a sterile spatula and dissolved in 15 mL sterile Milli-Q water (Millipore, Italy). The pellet was collected after 15 min of centrifugation at 15,000 g and processed for DNA extraction with the Power Soil DNA extraction kit (MoBio, Italy) following the manufacturer's instructions. Purified DNA from each sample was eluted in 100 μL sterile Milli-Q water and 10 ng of extracted DNA was used for the following NGS analysis.

Next Generation Sequencing (NGS)

16S rRNA amplicon library preparation (V1–3) was performed as detailed in Matturro et al. (2016b). The procedure for bacterial 16S rRNA amplicon sequencing targeting the V1–3 variable regions is based on Caporaso et al. (2012), using primers adapted from the Human Gut Consortium (Ward et al., 2012). 10 ng of extracted DNA was used as template in the PCR reaction (25 μL) containing dNTPs (400 nM of each), MgSO4 (1.5 mM), Platinum® Taq DNA polymerase HF (2 mU), 1X Platinum® High Fidelity buffer (Thermo Fisher Scientific, USA) and barcoded library adaptors (400 nM) containing the V1–3 primers (27F: 5′-AGAGTTTGATCCTGGCTCAG-3′; 534R: 5′-ATTACCGCGGCTGCTGG-3′). All PCR reactions were run in duplicate and pooled afterward. The amplicon libraries were purified using the Agencourt® AMPure XP bead protocol (Beckmann Coulter, USA). Library concentration was measured with the Quant-iT™ HS DNA Assay (Thermo Fisher Scientific, USA) and quality was validated with a Tapestation 2200, using D1K ScreenTapes (Agilent, USA). The purified sequencing libraries were pooled in equimolar concentrations and diluted to 4 nM. The samples were paired-end sequenced (2 × 301 bp) on a MiSeq (Illumina) using a MiSeq Reagent kit v3, 600 cycles (Illumina), following the standard guidelines for preparing and loading samples on the MiSeq. A 10% PhiX control library was spiked in to overcome the low-complexity issue often observed with amplicon samples. Forward and reverse reads were trimmed for quality using Trimmomatic v. 0.32 (Bolger et al., 2014) with the settings SLIDINGWINDOW:5:3 and MINLEN:275 and merged using FLASH v. 1.2.7 (Magoč and Salzberg, 2011) with the settings -m 25 -M 200. The merged reads were dereplicated, formatted for use in the UPARSE workflow (Edgar, 2013) and clustered using the usearch v. 7.0.1090 -cluster_otus command with default settings. OTU abundances were estimated using the usearch v. 7.0.1090 -usearch_global command with -id 0.97. Taxonomy was assigned using the RDP classifier (Wang et al., 2007) as implemented in the parallel_assign_taxonomy_rdp.py script in QIIME (Caporaso et al., 2010), using the MiDAS database v.1.20 (McIlroy et al., 2015). The results were analyzed in R (R Core Team, 2015) through the RStudio IDE using the ampvis package v.1.9.1 (Albertsen et al., 2015). Evenness (E), Shannon (H) and taxonomic distinctness (TD) indices were used to describe the biodiversity of the marine sediment and of the biofilm on the electrode surface, using Past version 3.10.
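For readers unfamiliar with the indices named above, here is a minimal sketch of Shannon diversity H and a Pielou-style evenness E computed from OTU read counts. This is an editorial illustration, not the authors' Past v3.10 workflow, and the example counts are made up.

```python
import math

def shannon_and_evenness(counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over OTU relative
    abundances, and evenness E = H / ln(S) for S observed OTUs."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    h = -sum(p * math.log(p) for p in props)
    e = h / math.log(len(props)) if len(props) > 1 else 0.0
    return h, e

# Made-up OTU read counts: an even, sediment-like community versus a
# biofilm-like community dominated by one OTU (lower H and E expected).
sediment = [120, 95, 80, 60, 40, 30, 20, 10, 5, 5]
biofilm = [400, 50, 20, 10, 5]
for name, counts in [("sediment", sediment), ("biofilm", biofilm)]:
    h, e = shannon_and_evenness(counts)
    print(f"{name}: H = {h:.2f}, E = {e:.2f}")
```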
Effect of the Snorkel on Key Biogeochemical Processes in Oil-Contaminated Sediments

The crude oil-supplemented microcosms containing 3 graphite rods (i.e., treatment "S3") displayed a 1.7-fold higher cumulative oxygen uptake and a 1.4-fold higher cumulative CO2 evolution compared to the snorkel-free biotic controls (Cruz Viggi et al., 2015). In agreement with that, the initial rate of petroleum hydrocarbon biodegradation was also substantially enhanced. Indeed, while after 200 days of incubation a negligible degradation of hydrocarbons was noticed in snorkel-free control microcosms, a substantial reduction of 12 and 21% was observed in microcosms containing 1 and 3 snorkels, respectively. Following a more prolonged incubation (day 417), an extensive degradation of TPH occurred in all treatments, including the autoclaved controls, with removals exceeding 80% in most treatments. Sulfate reduction fuelled by TPH and/or by the organic matter contained in the sediment was observed in all biotic treatments, although it proceeded at a substantially higher rate in the snorkel-free controls (Cruz Viggi et al., 2015). A possible explanation is the preferential use of the "snorkel" over sulfate as respiratory electron acceptor for the oxidation of organic substrates in the sediment. Another possible explanation is that the "snorkels" facilitated the (biotic or abiotic) back-oxidation (to sulfate) of the sulfide generated in the sediment by the activity of sulfate-reducing microorganisms, hence resulting in an apparently lower sulfate reduction. The predominance of members of Proteobacteria both in the bulk sediment and on the electrode surface of the Oil-Spill Snorkel microcosms (treatment "S3") was shown by CARD-FISH analysis. In particular, members of Deltaproteobacteria and Chloroflexi were predominant in the initial contaminated marine sediment and increased up to 9.8- and 8.6-fold, respectively, by the end of the treatment (Cruz Viggi et al., 2015). On the contrary, the electrode surface was mostly colonized (~95% of total bacteria) by cells belonging to Alphaproteobacteria, Gammaproteobacteria and, to a lesser extent, Deltaproteobacteria, evidencing the existence of a distinct microbial niche on the electrode surface (Cruz Viggi et al., 2015). Interestingly, the microscopic analysis revealed, mainly in the bulk sediment, the presence of filamentous bacteria belonging to Deltaproteobacteria (Figure 1). The filaments were composed of bacilli with clear indentations at the septa and a total length ranging from 10 to 100 μm. However, the actual length of the filaments in the original sediment, before the sample pretreatment required for the CARD-FISH assay (i.e., vortexing to detach cells from sediment particles and successive centrifugation to separate cells from the sediment particles), was probably greater. As shown in Figure 1, the filamentous bacteria and some single rod-shaped cells positively hybridized with the DSB706 probe specific for the Desulfobulbaceae family.

Figure 1. Filamentous Desulfobulbaceae evidenced by microscopic analysis at the end of the treatment in the marine sediment after DAPI staining and CARD-FISH analysis with oligonucleotide probes targeting Deltaproteobacteria (Delta495abc probes) and Desulfobulbaceae (DSB706 probe).

Microbiome of the Contaminated Marine Sediment

NGS analysis of the bulk sediment produced a total of 256 OTUs. Proteobacteria dominated the microbiome, representing 61% of total OTUs.
In particular, Deltaproteobacteria members were the most abundant within the entire microbiome (46%) and were mostly affiliated to Desulfobacteraceae (19.6%), Desulfobulbaceae (13.5%) and Desulfarculaceae (10%) (Figure 2, Table 1). Further, Chloroflexi represented 10% of total OTUs and were mostly affiliated to the Anaerolineaceae family, while Alphaproteobacteria members (11% of total OTUs) were related to the Rhodospirillaceae family, including the Magnetovibrio, Pelagibius, Thalassospira, and Defluviicoccus genera, and to the Rhodobacteraceae family (Figure 2, Table 1). Additionally, members of the Deferribacteres phylum were abundant, representing 6% of total OTUs, of which SAR406 clade (Marine Group A) members were the most representative (Figure 2, Table 1).

Figure 2. Microbiome of the bulk sediment. Data are reported as percentages of total OTUs produced by NGS analysis.

Table 1. Phylogenetic affiliation of the most representative OTUs detected by NGS in the marine sediment.

Microbiome of the Biofilm Growing at the Surface of the Electrode

NGS analysis, conducted on the biofilm sample taken from the surface of the electrode buried within the sediment, provided 240 OTUs. Proteobacteria represented 85% of total OTUs and, differently from the marine sediment, mainly comprised Alphaproteobacteria (59% of total OTUs) (Figure 3). These were mostly affiliated to the Rhodospirillaceae family (33%), including the Magnetovibrio (11%), Thalassospira (5%), Pelagibius (3%), Nisaea (3%), and Defluviicoccus (3.5%) genera. As shown in Table 2, 6% of total OTUs within the Rhodospirillaceae family were unidentified. Further, Gammaproteobacteria members (18%), affiliated to Sedimenticola (9.6%) and Xanthomonadales (4%), and Deltaproteobacteria (6%), mostly represented by Desulfobulbaceae, were also found on the electrode surface. Moreover, 8% of total OTUs were affiliated to Planctomycetes, whose most representative members belonged to the Phycisphaeraceae family (SM1A02 genus) (Figure 3, Table 2).

Figure 3. Microbiome of the biofilm taken from the electrode surface. Data are reported as percentages of total OTUs produced by NGS analysis.

Table 2. Phylogenetic affiliation of the most representative OTUs detected by NGS in the biofilm taken from the electrode surface.

The analysis of bacterial diversity was performed on the data generated by NGS. Overall, all indices (TD, H, E) indicated a higher biodiversity in the sediment compared to the biofilm attached to the electrode surface. Values of TD, which captures phylogenetic diversity and is more closely linked to functional diversity (Clarke and Warwick, 1999), were low, indicating the occurrence of distinct microbial niches in the sediment and on the electrode surface (Table 3). Similarly, the S and H indices were low in both matrices analyzed.

Table 3. Biodiversity indices of the microbial community inhabiting the marine sediment and the biofilm growing at the electrode surface.

Microbiome of the Marine Sediment

At the end of the treatment, the contaminated marine sediment was mostly composed of members of Desulfobulbaceae (14% of total OTUs), Desulfobacteraceae (12% of total OTUs), and Desulfarculaceae (10% of total OTUs). The majority of the representatives of these OTUs remain uncultured; they comprise strictly anaerobic sulfate-reducing bacteria able to reduce sulfate, sulfite and thiosulfate to sulfide, consistent with the occurrence of sulfate reduction, possibly fuelled by TPH, in the Oil-Spill Snorkel treatments.
Indeed, these microorganisms have already been found in marine habitats, isolated from oil reservoirs and/or marine environments, and reported as sulfate-reducing hydrocarbon degraders (i.e., Desulfotignum species) (Harms et al., 1996; Ommedal and Torsvik, 2007; Higashioka et al., 2011; Abu Laban et al., 2015; Almstrand et al., 2016; Daghio et al., 2016). Notably, recent studies reported the occurrence of Desulfobulbaceae members in sulfide-rich and current-producing sediments (Nielsen et al., 2010; Pfeffer et al., 2012; Daghio et al., 2016). These microorganisms form an electron-transporting filamentous structure composed of long cables containing thousands of cells that share an outer membrane serving as electrical insulation from the external medium. This structure allows the establishment of an electron-conducting system through the sediment, able to directly connect sulfide oxidation in the suboxic zone with oxygen reduction in the oxic zone (Nielsen et al., 2010; Roden et al., 2010; Pfeffer et al., 2012; Kato, 2016). As shown in Figure 1, a massive occurrence of similar filamentous bacteria belonging to Desulfobulbaceae was found in the marine sediment. Although Desulfobulbaceae are commonly known as sulfate-reducing bacteria living in the ocean floor, where the deeper cells do not have access to oxygen, some studies have also reported that the deeper cells may initiate hydrogen sulfide oxidation to elemental sulfur with oxygen serving as a spatially distant electron acceptor. These strategies allow sulfate-reducing bacteria, while inhabiting anoxic environments, to compete with other aerobic sulfide-oxidizing bacteria (Fuseler et al., 1996; Finster, 2008). Moreover, some Desulfobulbaceae members (i.e., Desulfobulbus, Desulfofustis, Desulfocapsa species) were recently distinguished for their ability to couple growth to the disproportionation of elemental sulfur to sulfate and sulfide (Pagani et al., 2011; Abu Laban et al., 2015). Within Proteobacteria, some Gammaproteobacteria were also found in the contaminated marine sediment, such as Sedimenticola spp. (<1% of total OTUs), most of them known as sulfur-oxidizing bacteria in marine environments, capable of coupling the oxidation of elemental sulfur and sulfide to autotrophic growth and of producing sulfur inclusions as metabolic intermediates (Flood et al., 2015). Petroleum hydrocarbon biodegradation was also likely sustained by other anaerobic hydrocarbon degraders affiliated to the Anaerolineaceae family (Chloroflexi phylum), comprising obligate anaerobes whose presence has already been documented in many hydrocarbon environments, including marine sediments, where biodegradation of oil-related compounds occurred (Sherry et al., 2013; Liang et al., 2015). In line with our observations, Anaerolineaceae have also been reported as fundamental community members in the metabolism of low-molecular-weight alkanes under sulfate-reducing conditions (Savage et al., 2010). As with the yet-uncultured Desulfobulbaceae OTUs retrieved in the marine sediment, a detailed taxonomic affiliation of the Anaerolineaceae OTUs was not reached by NGS analysis, and further investigations will surely be necessary to better define the taxonomy and the physiology of these microorganisms. Further, a remarkable presence of Deferribacteres members was observed in the sediment.
They mostly belonged to the SAR406 clade (Marine Group A), recently named "Marinimicrobia" and known to be ubiquitously distributed in oxygen minimum zones of marine environments (Stevens and Ulloa, 2008; Schattenhofer et al., 2009). Members of the phylum Deferribacteres have been shown to be able to anaerobically respire different organic substrates using Fe(III), Mn(IV), S(0), Co(III), or nitrate as electron acceptors. Interestingly, previous studies reported the occurrence of Deferribacteres members in oil-contaminated submarine anoxic zones, where they have nitrogen-fixing ability and are also able to utilize a variety of substrates, from complex organic compounds to small molecules like hydrogen and acetate, as electron donors (Greene et al., 1997; Wang et al., 2011; Liang et al., 2015; Yilmaz et al., 2015). Even though the metabolism of these microorganisms is poorly understood, the occurrence of Deferribacteres members deserves attention, as they might have a role in anaerobic petroleum biodegradation in contaminated marine sediments, and future efforts should be addressed to elucidating the role of these microorganisms in such polluted environments.

The electrode surface was remarkably colonized by Alphaproteobacteria, mostly affiliated to the Rhodospirillaceae family, whose members belong to unidentified genera (12% of total OTUs) or to the Magnetovibrio genus (11% of total OTUs). Rhodospirillaceae are purple non-sulfur bacteria able to photoassimilate simple organic compounds anaerobically. Some genera grow photoheterotrophically under anoxic conditions in the light and chemoheterotrophically in the dark, while others grow heterotrophically under aerobic and microaerophilic conditions. Interestingly, recent studies have reported the isolation of some Rhodospirillaceae species from contaminated marine environments exhibiting hydrocarbonoclastic potential under anaerobic conditions in bioelectrochemical remediation systems (Venkidusamy and Megharaj, 2016). This may indicate a role in anaerobic petroleum hydrocarbon biodegradation for the Rhodospirillaceae species living tightly attached to the electrode surface. Notably, Rhodospirillaceae members are also known as magnetotactic bacteria (MTB), microorganisms present at the oxic-anoxic transition zone where opposing gradients of oxygen and reduced sulfur and/or iron exist (Geelhoed et al., 2010). They are able to biomineralize a unique organelle, called the magnetosome, in which magnetic iron mineral crystals are formed and which underlies polar magnetotaxis (Lefèvre and Wu, 2013; Barber-Zucker and Zarivach, 2016; Lefèvre, 2016). This capability probably originated as a result of the toxicity of free iron in the cells (Lefèvre and Wu, 2013). These metabolic features of Rhodospirillaceae are in line with previous observations regarding the formation of a reddish Fe(III) biofilm on the electrode surface of the Oil Spill Snorkel system (Cruz Viggi et al., 2015). Probably the colonization by Rhodospirillaceae carrying magnetotactic abilities is linked to the availability of the magnetosome precursors (i.e., Fe(III)) on the electrode surface. Further, previous studies reported members of Rhodospirillaceae being able to oxidize reduced sulfur species (e.g., sulfide or thiosulfate) to sulfate using oxygen as terminal electron acceptor, under microaerophilic conditions (Geelhoed et al., 2010).
Possibly, in the Oil Spill Snorkel microcosms, in which anaerobic conditions prevailed, these microorganisms thrived using the electrode (in place of oxygen) as terminal electron acceptor for the oxidation of sulfide to sulfate. This could explain the apparently lower sulfate-reducing activity observed in the Snorkel treatments compared to the Snorkel-free controls (Cruz Viggi et al., 2015). In detail, NGS data showed that a large portion of the Rhodospirillaceae members found at the electrode surface was mainly affiliated to the Magnetovibrio genus. Representatives of this genus are MTB and were isolated from sulfide-rich sediments. They are able to grow chemoheterotrophically with organic acids and some amino acids as carbon and electron sources, or chemoautotrophically on thiosulfate and sulfide with oxygen as terminal electron acceptor (microaerophilic growth) and on thiosulfate using nitrous oxide (N2O) as terminal electron acceptor (anaerobic growth) (Bazylinski et al., 1988, 2013; Lefèvre and Wu, 2013). Moreover, Rhodospirillaceae members other than Magnetovibrio, such as chemoheterotrophic, facultatively anaerobic genera (i.e., Nisaea) and strictly aerobic or microaerophilic genera (i.e., Thalassospira), were also found on the electrode surface. In particular, some Thalassospira strains have been isolated and sequenced as electrogenic petroleum-degrading bacteria (Kiseleva et al., 2015). Moreover, marine sediment-derived strains were reported to exhibit electrotrophic behavior, accepting electrons from insoluble sulfur, but the capacity of these strains to transfer electrons to an anode has not yet been proven (Rowe et al., 2015). Overall, the presence of these microorganisms might suggest the occurrence of an anaerobic/aerobic gradient along the graphite electrode buried in the sediment, which allows the creation of a bioelectrochemical connection between the anoxic sediment and the overlying oxic water, driving the oxidation of petroleum hydrocarbons (to carbon dioxide and water) or of reduced sulfur species such as elemental sulfur, sulfide, or thiosulfate (to sulfate). Interestingly, sequences belonging to the Defluviicoccus genus, commonly detected in anaerobic-aerobic wastewater treatment plants (Lanham et al., 2008; Burow et al., 2009), were obtained from the electrode surface. Some recent studies reported the occurrence of members of this genus in marine environments like cold-water coral reefs, hotspots of carbon mineralization (Van Oevelen et al., 2009; Rovelli et al., 2015; Van Bleijswijk et al., 2015). Besides the massive presence of Alphaproteobacteria, sulfur-oxidizing Sedimenticola members of Gammaproteobacteria (Flood et al., 2015) were also found at the electrode surface. Moreover, a large portion of unidentified OTUs was retrieved, suggesting the need for further efforts to shed light on the identity of novel microorganisms involved in the bioelectrochemical degradation of petroleum hydrocarbons in marine environments.

Grasping the System as a Whole: Biological Network between the Marine Sediment and the Electrode

A tentative model of the biological network found in the contaminated marine sediment and at the electrode surface is schematically shown in Figure 4.
Our findings suggest the existence in the Oil Spill Snorkel system of two parallel electrical cables, the artificial (graphite electrode) and the natural (Desulfobulbaceae filaments) electron conduits, which stimulate hydrocarbon biodegradation through the establishment of an efficient sulfur cycle mediated by multiple interconnecting metabolic pathways.

Figure 4. The system as a whole: the metabolic network between the marine sediment and the graphite electrode.

Petroleum hydrocarbon biodegradation occurs in the contaminated marine sediment primarily via sulfate reduction, sulfate being the main oxidizing agent in the marine reaction environment. This process is efficiently sustained by the oxidation of sulfide to inorganic sulfur, mediated simultaneously by the graphite electrode and by the cable bacteria, both capable of carrying electrons from the hydrogen sulfide generated by sulfate reduction in the anoxic sediment to oxygen as a spatially distant electron acceptor. The further oxidation of sulfur, driven by the electrode and mediated by several bacteria (e.g., members of Desulfobulbaceae, Sedimenticola, and Rhodospirillaceae), as well as, likely, the disproportionation of sulfur to sulfate, may regenerate sulfate in the sediment, allowing the sulfur cycle to start over again. This finding is in line with the apparently lower sulfate reduction observed in the sediment containing the electrodes compared to the control, previously tentatively linked to a back-oxidation of sulfide to sulfate (Cruz Viggi et al., 2015). Whereas microbes affiliated to Deltaproteobacteria drive most of the biological processes in the sediment, analogous reactions at or near the electrode are controlled mainly by Alphaproteobacteria (mostly members of Rhodospirillaceae). The latter family contains microbes with high metabolic versatility, including magnetotactic bacteria affiliated to the Magnetovibrio genus, which are often reported to occur in the aerobic-anoxic transition zone in water or sediment where opposing gradients of oxygen and reduced sulfur and/or iron exist. Overall, the picture defined by the NGS analysis showed the occurrence in the system of a variety of electroactive microorganisms able to sustain contaminant biodegradation alone or by means of an external conductive support, through the establishment of a bioelectrochemical connection between two spatially separated redox zones and the preservation of an efficient sulfur cycle. This potential might be greater than described here, owing to the unexplored identity and physiology of many OTUs generated by the NGS analysis.

All authors contributed equally to this work. BM performed the biomolecular experiments, analyzed data and wrote the paper. CCV and FA constructed the Oil Spill Snorkel system. SR conceived and coordinated the study and wrote the paper. All authors reviewed the results and approved the final version of the manuscript.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The work has been carried out in the framework of the EU KillSPill Project (grant agreement no. 312139).

Abu Laban, N., Tan, B., Dao, A., and Foght, J. (2015). Draft genome sequence of uncultivated toluene-degrading Desulfobulbaceae bacterium Tol-SR, obtained by stable isotope probing using [13C6]toluene. Genome Announc. 3, e01423–e01414.
doi: 10.1128/genomeA.01423-14
Albertsen, M., Karst, S. M., Ziegler, A. S., Kirkegaard, R. H., and Nielsen, P. H. (2015). Back to basics – the influence of DNA extraction and primer choice on phylogenetic analysis of activated sludge communities. PLoS ONE 10:e0132783. doi: 10.1371/journal.pone.0132783
Almstrand, R., Pinto, A. J., Figueroa, L. A., and Sharp, J. O. (2016). Draft genome sequence of a novel Desulfobacteraceae member from a sulfate-reducing bioreactor metagenome. Genome Announc. 4:e01540-15. doi: 10.1128/genomeA.01540-15
Babauta, J. T., Atci, E., Ha, P. T., Lindemann, S. R., Ewing, T., Call, D. R., et al. (2014). Localized electron transfer rates and microelectrode-based enrichment of microbial communities within a phototrophic microbial mat. Front. Microbiol. 5:11. doi: 10.3389/fmicb.2014.00011
Bazylinski, D. A., Williams, T. J., Lefèvre, C. T., Berg, R. J., Zhang, C. L., Bowser, S. S., et al. (2013). Magnetococcus marinus gen. nov., sp. nov., a marine, magnetotactic bacterium that represents a novel lineage (Magnetococcaceae fam. nov., Magnetococcales ord. nov.) at the base of the Alphaproteobacteria. Int. J. Syst. Evol. Microbiol. 63(Pt 3), 801–808. doi: 10.1099/ijs.0.038927-0
Bücking, C., Schicklberger, M., and Gescher, J. (2013). "The biochemistry of dissimilatory ferric iron and manganese reduction in Shewanella oneidensis," in Microbial Metal Respiration, eds A. Kappler and J. Gescher (Berlin; Heidelberg: Springer-Verlag), 49–82.
Burow, L. C., Mabbett, A. N., and Blackall, L. L. (2009). Anaerobic central metabolic pathways active during polyhydroxyalkanoate production in uncultured cluster 1 Defluviicoccus enriched in activated sludge communities. FEMS Microbiol. Lett. 298, 79–84. doi: 10.1111/j.1574-6968.2009.01695.x
Caporaso, J. G., Kuczynski, J., Stombaugh, J., Bittinger, K., Bushman, F. D., Costello, E. K., et al. (2010). QIIME allows analysis of high-throughput community sequencing data. Nat. Methods 7, 335–336. doi: 10.1038/nmeth.f.303
Caporaso, J. G., Lauber, C. L., Walters, W. A., Berg-Lyons, D., Huntley, J., Fierer, N., et al. (2012). Ultra-high-throughput microbial community analysis on the Illumina HiSeq and MiSeq platforms. ISME J. 6, 1621–1624. doi: 10.1038/ismej.2012.8
Cruz Viggi, C., Presta, E., Bellagamba, M., Kaciulis, S., Balijepalli, S. K., Zanaroli, G., et al. (2015). The "Oil-Spill Snorkel": an innovative bioelectrochemical approach to accelerate hydrocarbons biodegradation in marine sediments. Front. Microbiol. 6:881. doi: 10.3389/fmicb.2015.00881
Daghio, M., Vaiopoulou, E., Patil, S. A., Suárez-Suárez, A., Head, I. M., Franzetti, A., et al. (2016). Anodes stimulate anaerobic toluene degradation via sulfur cycling in marine sediments. Appl. Environ. Microbiol. 82, 297–307. doi: 10.1128/AEM.02250-15
Dolch, K., Danzer, J., Kabbeck, T., Bierer, B., Erben, J., Förster, A. H., et al. (2014). Characterization of microbial current production as a function of microbe-electrode-interaction. Bioresour. Technol. 157, 284–292. doi: 10.1016/j.biortech.2014.01.112
Flood, B. E., Jones, D. S., and Bailey, J. V. (2015). Complete genome sequence of Sedimenticola thiotaurini strain SIP-G1, a polyphosphate- and polyhydroxyalkanoate-accumulating sulfur-oxidizing gammaproteobacterium isolated from salt marsh sediments. Genome Announc. 3:e00671-15. doi: 10.1128/genomeA.00671-15
Fuseler, K., Krekeler, D., Sydow, U., and Cypionka, H. (1996). A common pathway of sulfide oxidation by sulfate-reducing bacteria. FEMS Microbiol. Lett. 144, 129–134.
doi: 10.1111/j.1574-6968.1996.tb08518.x
Geelhoed, J. S., Kleerebezem, R., Sorokin, D. Y., Stams, A. J., and van Loosdrecht, M. C. (2010). Reduced inorganic sulfur oxidation supports autotrophic and mixotrophic growth of Magnetospirillum strain J10 and Magnetospirillum gryphiswaldense. Environ. Microbiol. 12, 1031–1040. doi: 10.1111/j.1462-2920.2009.02148.x
Greene, A. C., Patel, B. K., and Sheehy, A. J. (1997). Deferribacter thermophilus gen. nov., sp. nov., a novel thermophilic manganese- and iron-reducing bacterium isolated from a petroleum reservoir. Int. J. Syst. Bacteriol. 47, 505–509. doi: 10.1099/00207713-47-2-505
Harms, G., Zengler, K., Rabus, R., Aeckersberg, F., Minz, D., Rosselló-Mora, R., et al. (1999). Anaerobic oxidation of o-xylene, m-xylene, and homologous alkylbenzenes by new types of sulfate-reducing bacteria. Appl. Environ. Microbiol. 65, 999–1004.
Higashioka, Y., Kojima, H., and Fukui, M. (2011). Temperature-dependent differences in community structure of bacteria involved in degradation of petroleum hydrocarbons under sulfate-reducing conditions. J. Appl. Microbiol. 110, 314–322. doi: 10.1111/j.1365-2672.2010.04886.x
Holmes, D. E., Nicoll, J. S., Bond, D. R., and Lovley, D. R. (2004). Potential role of a novel psychrotolerant member of the family Geobacteraceae, Geopsychrobacter electrodiphilus gen. nov., sp. nov., in electricity production by a marine sediment fuel cell. Appl. Environ. Microbiol. 70, 6023–6030. doi: 10.1128/AEM.70.10.6023-6030.2004
Kiseleva, L., Garushyants, S. K., Briliute, J., Simpson, D. J., Cohen, M. F., and Goryanin, I. (2015). Genome sequence of the electrogenic petroleum-degrading Thalassospira sp. strain HJ. Genome Announc. 3:e00483-15. doi: 10.1128/genomeA.00483-15
Lanham, A. B., Reis, M. A., and Lemos, P. C. (2008). Kinetic and metabolic aspects of Defluviicoccus vanus-related organisms as competitors in EBPR systems. Water Sci. Technol. 58, 1693–1697. doi: 10.2166/wst.2008.552
Levar, C., Rollefson, J., and Bond, D. (2012). "Energetic and molecular constraints on the mechanism of environmental Fe(III) reduction by Geobacter," in Microbial Metal Respiration, eds J. Gescher and A. Kappler (Berlin: Springer), 29–48.
Liang, B., Wang, L. Y., Mbadinga, S. M., Liu, J. F., Yang, S. Z., Gu, J. D., et al. (2015). Anaerolineaceae and Methanosaeta turned to be the dominant microorganisms in alkanes-dependent methanogenic culture after long-term of incubation. AMB Express 5:37. doi: 10.1186/s13568-015-0117-4
Liu, W., Lin, J., Pang, X., Cui, S., Mi, S., and Lin, J. (2011). Overexpression of rusticyanin in Acidithiobacillus ferrooxidans ATCC19859 increased Fe(II) oxidation activity. Curr. Microbiol. 62, 320–324. doi: 10.1007/s00284-010-9708-0
Loy, A., Lehner, A., Lee, N., Adamczyk, J., Meier, H., Ernst, J., et al. (2002). Oligonucleotide microarray for 16S rRNA gene-based detection of all recognized lineages of sulfate-reducing prokaryotes in the environment. Appl. Environ. Microbiol. 68, 5064–5081. doi: 10.1128/AEM.68.10.5064-5081.2002
Lu, L., Yazdi, H., Jin, S., Zuo, Y., Fallgren, P. H., and Ren, Z. J. (2014). Enhanced bioremediation of hydrocarbon-contaminated soil using pilot-scale bioelectrochemical systems. J. Hazard. Mater. 274, 8–15. doi: 10.1016/j.jhazmat.2014.03.060
Matturro, B., Ubaldi, C., Grenni, P., Barra Caracciolo, A., and Rossetti, S. (2016a). Polychlorinated biphenyl (PCB) anaerobic degradation in marine sediments: microcosm study and role of autochthonous microbial communities. Environ. Sci. Pollut. Res. 23, 12613–12623.
doi: 10.1007/s11356-015-4960-2
Matturro, B., Ubaldi, C., and Rossetti, S. (2016b). Microbiome dynamics of a polychlorobiphenyl (PCB) historically contaminated marine sediment under conditions promoting reductive dechlorination. Front. Microbiol. 7:1502. doi: 10.3389/fmicb.2016.01502
McIlroy, S. J., Saunders, A. M., Albertsen, M., Nierychlo, M., McIlroy, B., Hansen, A. A., et al. (2015). MiDAS: the field guide to the microbes of activated sludge. Database 2015:bav062. doi: 10.1093/database/bav062
Nielsen, L. P., Risgaard-Petersen, N., Fossing, H., Christensen, P. B., and Sayama, M. (2010). Electric currents couple spatially separated biogeochemical processes in marine sediment. Nature 463, 1071–1074. doi: 10.1038/nature08790
Nikolopoulou, M., Pasadakis, N., Norf, H., and Kalogerakis, N. (2013). Enhanced ex situ bioremediation of crude oil contaminated beach sand by supplementation with nutrients and rhamnolipids. Mar. Pollut. Bull. 77, 37–44. doi: 10.1016/j.marpolbul.2013.10.038
Ommedal, H., and Torsvik, T. (2007). Desulfotignum toluenicum sp. nov., a novel toluene-degrading, sulphate-reducing bacterium isolated from an oil-reservoir model column. Int. J. Syst. Evol. Microbiol. 57(Pt 12), 2865–2869. doi: 10.1099/ijs.0.65067-0
Pagani, I., Lapidus, A., Nolan, M., Lucas, S., Hammon, N., Deshpande, S., et al. (2011). Complete genome sequence of Desulfobulbus propionicus type strain (1pr3T). Stand. Genomic Sci. 4, 100–110. doi: 10.4056/sigs.1613929
Pfeffer, C., Larsen, S., Song, J., Dong, M., Besenbacher, F., Meyer, R. L., et al. (2012). Filamentous bacteria transport electrons over centimetre distances. Nature 491, 218–221. doi: 10.1038/nature11586
Rakoczy, J., Feisthauer, S., Wasmund, K., Bombach, P., Neu, T. M., Vogt, C., et al. (2013). Benzene and sulfide removal from groundwater treated in a microbial fuel cell. Biotechnol. Bioeng. 110, 3104–3113. doi: 10.1002/bit.24979
Roden, E. E., Kappler, A., Bauer, I., Jiang, J., Paul, A., Stoesser, R., et al. (2010). Extracellular electron transfer through microbial reduction of solid-phase humic substances. Nat. Geosci. 3, 417–421. doi: 10.1038/ngeo870
Röling, W. F. M., Milner, M. G., Jones, D. M., Fratepietro, F., Swannell, R. P., Daniel, F., et al. (2004). Bacterial community dynamics and hydrocarbon degradation during a field-scale evaluation of bioremediation on a mudflat beach contaminated with buried oil. Appl. Environ. Microbiol. 70, 2603–2613. doi: 10.1128/AEM.70.5.2603-2613.2004
Rosenbaum, M., Aulenta, F., Villano, M., and Angenent, L. T. (2011). Cathodes as electron donors for microbial metabolism: which extracellular electron transfer mechanisms are involved? Bioresour. Technol. 102, 324–333. doi: 10.1016/j.biortech.2010.07.008
Rovelli, L., Attard, K. M., Bryant, L. D., Floegel, S., Stahl, H., Roberts, J. M., et al. (2015). Benthic O2 uptake of two cold-water coral communities estimated with the non-invasive eddy correlation technique. Mar. Ecol. Prog. Ser. 525, 97–104. doi: 10.3354/meps11211
Rowe, A. R., Chellamuthu, P., Lam, B., Okamoto, A., and Nealson, K. H. (2015). Marine sediment microbes capable of electrode oxidation as a surrogate for lithotrophic insoluble substrate metabolism. Front. Microbiol. 5:784. doi: 10.3389/fmicb.2014.00784
Sammarco, P. W., Kolian, S. R., Warby, R. A., Bouldin, J. L., Subra, W. A., and Porter, S. A. (2013). Distribution and concentrations of petroleum hydrocarbons associated with the BP/Deepwater Horizon Oil Spill, Gulf of Mexico. Mar. Pollut. Bull. 73, 129–143. doi: 10.1016/j.marpolbul.2013.05.029
Savage, K. N., Krumholz, L. R., Gieg, L. M., Parisi, V. A., Suflita, J. M., Allen, J., et al. (2010). Biodegradation of low-molecular-weight alkanes under mesophilic, sulfate-reducing conditions: metabolic intermediates and community patterns. FEMS Microbiol. Ecol. 72, 485–495. doi: 10.1111/j.1574-6941.2010.00866.x
Schattenhofer, M., Fuchs, B. M., Amann, R., Zubkov, M. V., Tarran, G. A., and Pernthaler, J. (2009). Latitudinal distribution of prokaryotic picoplankton populations in the Atlantic Ocean. Environ. Microbiol. 11, 2078–2093. doi: 10.1111/j.1462-2920.2009.01929.x
Schauer, R., Risgaard-Petersen, N., Kjeldsen, K. U., Tataru Bjerg, J. J., Jorgensen, B. B., Schramm, A., et al. (2014). Succession of cable bacteria and electric currents in marine sediment. ISME J. 8, 1314–1322. doi: 10.1038/ismej.2013.239
Sherry, A., Gray, N. D., Ditchfield, A. K., Aitken, C. M., Jones, D. M., Röling, W. F. M., et al. (2013). Anaerobic biodegradation of crude oil under sulphate-reducing conditions leads to only modest enrichment of recognized sulphate-reducing taxa. Int. Biodeter. Biodegr. 81, 105–113. doi: 10.1016/j.ibiod.2012.04.009
Thapa, B., Ajay Kumar, K. C., and Ghimire, A. (2012). A review on bioremediation of petroleum hydrocarbon contaminants in soil. Kathmandu Univ. J. Sci. Eng. Technol. 8, 164–170. doi: 10.3126/kuset.v8i1.6056
Van Bleijswijk, J. D. L., Whalen, C., Duineveld, G. C. A., Lavaleye, M. S. S., Witte, H. J., and Mienis, F. (2015). Microbial assemblages on a cold-water coral mound at the SE Rockall Bank (NE Atlantic): interactions with hydrography and topography. Biogeosciences 12, 4483–4496. doi: 10.5194/bg-12-4483-2015
Van Oevelen, D., Duineveld, G., Lavaleye, M., Mienis, F., Soetaert, K., and Heip, C. H. R. (2009). The cold-water coral community as hotspot of carbon cycling on continental margins: a food-web analysis from Rockall Bank (northeast Atlantic). Limnol. Oceanogr. 54, 1829–1844. doi: 10.4319/lo.2009.54.6.1829
Venkidusamy, K., and Megharaj, M. (2016). A novel electrophototrophic bacterium, Rhodopseudomonas palustris strain RP2, exhibits hydrocarbonoclastic potential in anaerobic environments. Front. Microbiol. 7:1071. doi: 10.3389/fmicb.2016.01071
Wang, L. Y., Gao, C. X., Mbadinga, S. M., Zhou, L., Liu, J. F., Gu, J. D., et al. (2011). Characterization of an alkane-degrading methanogenic enrichment culture from production water of an oil reservoir after 274 days of incubation. Int. Biodeter. Biodegr. 65, 444–450. doi: 10.1016/j.ibiod.2010.12.010
Wang, Q., Garrity, G. M., Tiedje, J. M., and Cole, J. R. (2007). Naive Bayesian classifier for rapid assignment of rRNA sequences into the new bacterial taxonomy. Appl. Environ. Microbiol. 73, 5261–5267. doi: 10.1128/AEM.00062-07
Ward, D. V., Gevers, D., Giannoukos, G., Earl, A. M., Methé, B. A., Sodergren, E., et al. (2012). Evaluation of 16S rDNA-based community profiling for human microbiome research. PLoS ONE 7:e39315. doi: 10.1371/journal.pone.0039315
Zhang, T., Gannon, S. M., Nevin, K. P., Franks, A. E., and Lovley, D. R. (2010). Stimulating the anaerobic degradation of aromatic hydrocarbons in contaminated sediments by providing an electrode as the electron acceptor. Environ. Microbiol. 12, 1011–1020.
doi: 10.1111/j.1462-2920.2009.02145.x
Keywords: bioremediation, petroleum hydrocarbons biodegradation, next generation sequencing, oil spill snorkel, cable bacteria, sulfur cycle, marine sediment
Citation: Matturro B, Cruz Viggi C, Aulenta F and Rossetti S (2017) Cable Bacteria and the Bioelectrochemical Snorkel: The Natural and Engineered Facets Playing a Role in Hydrocarbons Degradation in Marine Sediments. Front. Microbiol. 8:952. doi: 10.3389/fmicb.2017.00952
Received: 02 February 2017; Accepted: 12 May 2017; Published: 29 May 2017.
Edited by: Sabine Kleinsteuber, Helmholtz-Zentrum für Umweltforschung (UFZ), Germany
Reviewed by: Nils Risgaard-Petersen, Aarhus University, Denmark; Pier-Luc Tremblay, Wuhan University of Technology, China
Copyright © 2017 Matturro, Cruz Viggi, Aulenta and Rossetti. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Simona Rossetti, firstname.lastname@example.org
Scientists at the University of Basel and the Center for Free-Electron Laser Science in Hamburg have, for the first time, succeeded in sorting out single forms of molecules with electric fields and having them react specifically. 3-aminophenol conformers in a molecular beam are spatially separated in an electric field and react with calcium ions that have been localized in space by laser cooling. Analysis of the reaction rates showed a relation between the spatial structure of the sorted molecules and their chemical reactivity. The results have been published in the renowned magazine «Science».
The reactivity of a chemical compound, that is, the rate at which a substance undergoes a chemical reaction, is strongly influenced by the shape of its molecules. Complex molecules often exhibit different shapes, so-called conformers, in which parts of the molecules vary in their spatial arrangement. However, conformers often interconvert under ambient conditions, so that a detailed study of their individual reactivities has been difficult so far. Scientists around Prof. Stefan Willitsch from the Department of Chemistry at the University of Basel and Prof. Jochen Küpper from the Center for Free-Electron Laser Science in Hamburg (CFEL, DESY) have developed a new experimental setup that makes it possible to study the reactivity of single isolated conformers. The scientists produced a beam of molecules from which they picked specific conformers with a «molecular sorting machine» in order to inject them selectively into a chemical reaction.
The scientists made use of the fact that a change in the shape of a molecule usually also modifies its dipole moment. The dipole moment describes how a molecule responds to an external electric field. Inside this sorting machine, a non-uniform electric field deflects individual conformers to varying extents so that they are spatially separated. In a first experiment, the scientists separated two conformers of 3-aminophenol, a well-known compound that is widely used in industry. The two conformers differ only in the position of a single hydrogen atom.
The separated conformers were then directed into a reaction chamber where they reacted with electrically charged calcium atoms, so-called ions, in a trap. The ions were cooled with laser light to almost absolute zero (minus 273 degrees Celsius). In this way the ions were localized in space and formed an ideal target for reactions with the spatially separated conformers. The scientists were thus able to show that one of the conformers reacted twice as fast with the calcium ions as the other, a phenomenon that could be explained by the different electrical properties of the conformers. The new method allows insight into fundamental reaction mechanisms and the relations between molecular conformation and chemical reactivity, with potentially far-reaching applications in chemical catalysis and the synthesis of new molecules.
Original Citation: Science (2013) | doi: 10.1126/science.1242271
Further Information: Prof. Jochen Küpper, Center for Free-Electron Laser Science, Deutsches Elektronen-Synchrotron (DESY), Notkestrasse 85, 22607 Hamburg, Tel.
+49 40 8998-6330, E-Mail: firstname.lastname@example.org
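For a feel of the deflection principle described above, here is a toy estimate (all numbers are assumptions chosen for scale, not the parameters of the Basel/Hamburg experiment): a polar molecule flying through a field gradient feels a force proportional to its effective dipole moment, so conformers with different dipole moments drift apart.

```python
D = 3.336e-30           # 1 debye in coulomb-metres
m = 109.13 * 1.661e-27  # mass of 3-aminophenol (~109 u) in kg

# Hypothetical machine parameters, for scale only:
grad_E = 1e9   # electric-field gradient inside the deflector, V/m^2
L = 0.15       # deflector length, m
v = 1600.0     # forward velocity of the molecular beam, m/s

def deflection(mu_eff_debye):
    """Transverse displacement of a dipole flying through a field gradient,
    treating the effective dipole moment as constant (linear Stark regime)."""
    F = mu_eff_debye * D * grad_E   # force = mu_eff * dE/dx
    t = L / v                       # time spent inside the deflector
    return 0.5 * (F / m) * t ** 2   # kinematics: d = a * t^2 / 2

# Two assumed effective dipole moments, standing in for the two conformers:
for name, mu in [("low-dipole conformer", 0.8), ("high-dipole conformer", 2.3)]:
    print(f"{name}: mu_eff = {mu} D -> deflection ~ {deflection(mu)*1e3:.2f} mm")
```

The separation scales linearly with the dipole difference, which is why even a single hydrogen atom shifting position (and with it the dipole moment) is enough to pull the two conformers apart.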
In August, the geologist Matt Jackson left California with his wife and 4-year-old daughter for the fjords of northwest Iceland, where they camped as he roamed the outcrops and scree slopes by day in search of little olive-green stones called olivine. A sunny young professor at the University of California, Santa Barbara, with a uniform of pearl-snap shirts and well-utilized cargo shorts, Jackson knew all the best hunting grounds, having first explored the Icelandic fjords two years ago. Following sketchy field notes handed down by earlier geologists, he covered 10 or 15 miles a day, past countless sheep and the occasional farmer. “Their whole lives they’ve lived in these beautiful fjords,” he said. “They look up to these black, layered rocks, and I tell them that each one of those is a different volcanic eruption with a lava flow. It blows their minds!” He laughed. “It blows my mind even more that they never realized it!” The olivine erupted to Earth’s surface in those very lava flows between 10 and 17 million years ago. Jackson, like many geologists, believes that the source of the eruptions was the Iceland plume, a hypothetical upwelling of solid rock that may rise, like the globules in a lava lamp, from deep inside Earth. The plume, if it exists, would now underlie the active volcanoes of central Iceland. In the past, it would have surfaced here at the fjords, back in the days when here was there—before the puzzle-piece of Earth’s crust upon which Iceland lies scraped to the northwest. Other modern findings about olivine from the region suggest that it might derive from an ancient reservoir of minerals at the base of the Iceland plume that, over billions of years, never mixed with the rest of Earth’s interior. Jackson hoped the samples he collected would carry a chemical message from the reservoir and prove that it formed during the planet’s infancy—a period that until recently was inaccessible to science. After returning to California, he sent his samples to Richard Walker to ferret out that message. Walker, a geochemist at the University of Maryland, is processing the olivine to determine the concentration of the chemical isotope tungsten-182 in the rock relative to the more common isotope, tungsten-184. If Jackson is right, his samples will join a growing collection of rocks from around the world whose abnormal tungsten isotope ratios have completely surprised scientists. These tungsten anomalies reflect processes that could only have occurred within the first 50 million years of the solar system’s history, a formative period long assumed to have been wiped from the geochemical record by cataclysmic collisions that melted Earth and blended its contents. The discoveries are sending geologists like Jackson into the field in search of more clues to Earth’s formation—and how the planet works today. Modern Earth, like early Earth, remains poorly understood, with unanswered questions ranging from how volcanoes work and whether plumes really exist to where oceans and continents came from, and what the nature and origin might be of the enormous structures, colloquially known as “blobs,” that seismologists detect deep down near Earth’s core. All aspects of the planet’s form and function are interconnected. They’re also entangled with the rest of the solar system. Any attempt, for instance, to explain why tectonic plates cover Earth’s surface like a jigsaw puzzle must account for the fact that no other planet in the solar system has plates. 
To understand Earth, scientists must figure out how, in the context of the solar system, it became uniquely earthlike. And that means probing the mystery of the first tens of millions of years. “You can think about this as an initial-conditions problem,” said Michael Manga, a geophysicist at the University of California, Berkeley, who studies geysers and volcanoes. “The Earth we see today evolved from something. And there’s lots of uncertainty about what that initial something was.” * * * On one of an unbroken string of 75-degree days in Santa Barbara the week before Jackson left for Iceland, he led a group of earth scientists on a two-mile beach hike to see some tar dikes—places where the sticky black material has oozed out of the cliff face at the back of the beach, forming flabby, voluptuous folds of faux rock that you can dent with a finger. The scientists pressed on the tar’s wrinkles and slammed rocks against it, speculating about its subterranean origin and the ballpark range of its viscosity. When this reporter picked up a small tar boulder to feel how light it was, two or three people nodded approvingly. A mix of geophysicists, geologists, mineralogists, geochemists and seismologists, the group was in Santa Barbara for the annual Cooperative Institute for Dynamic Earth Research (CIDER) workshop at the Kavli Institute for Theoretical Physics. Each summer, a rotating cast of representatives from these fields meet for several weeks at CIDER to share their latest results and cross-pollinate ideas—a necessity when the goal is understanding a system as complex as Earth. Earth’s complexity, how special it is, and, above all, the black box of its initial conditions have meant that, even as cosmologists map the universe and astronomers scan the galaxy for Earth 2.0, progress in understanding our home planet has been surprisingly slow. As we trudged from one tar dike to another, Jackson pointed out the exposed sedimentary rock layers in the cliff face—some of them horizontal, others buckled and sloped. Amazingly, he said, it took until the 1960s for scientists to even agree that sloped sediment layers are buckled, rather than having piled up on an angle. Only then was consensus reached on a mechanism to explain the buckling and the ruggedness of Earth’s surface in general: the theory of plate tectonics. Projecting her voice over the wind and waves, Carolina Lithgow-Bertelloni, a geophysicist from University College London who studies tectonic plates, credited the German meteorologist Alfred Wegener for first floating the notion of continental drift in 1912 to explain why Earth’s landmasses resemble the dispersed pieces of a puzzle. “But he didn’t have a mechanism—well, he did, but it was crazy,” she said. A few years later, she continued, the British geologist Sir Arthur Holmes convincingly argued that Earth’s solid-rock mantle flows fluidly on geological timescales, driven by heat radiating from Earth’s core; he speculated that this mantle flow in turn drives surface motion. More clues came during World War II. Seafloor magnetism, mapped for the purpose of hiding submarines, suggested that new crust forms at the mid-ocean ridge—the underwater mountain range that lines the world ocean like a seam—and spreads in both directions to the shores of the continents. There, at “subduction zones,” the oceanic plates slide stiffly beneath the continental plates, triggering earthquakes and carrying water downward, where it melts pockets of the mantle. 
This melting produces magma that rises to the surface in little-understood fits and starts, causing volcanic eruptions. (Volcanoes also exist far from any plate boundaries, such as in Hawaii and Iceland. Scientists currently explain this by invoking the existence of plumes, which researchers like Walker and Jackson are starting to verify and map using isotope studies.) The physical description of the plates finally came together in the late 1960s, Lithgow-Bertelloni said, when the British geophysicist Dan McKenzie and the American Jason Morgan separately proposed a quantitative framework for modeling plate tectonics on a sphere. Other than their existence, almost everything about the plates remains in contention. For instance, what drives their lateral motion? Where do subducted plates end up—perhaps these are the blobs?—and how do they affect Earth’s interior dynamics? Why did Earth’s crust shatter into plates in the first place when no other planetary surface in the solar system did? Also completely mysterious is the two-tier architecture of oceanic and continental plates, and how oceans and continents came to ride on them—all possible prerequisites for intelligent life. Knowing more about how Earth became earthlike could help us understand how common earthlike planets are in the universe and thus how likely life is to arise. The continents probably formed, Lithgow-Bertelloni said, as part of the early process by which gravity organized Earth’s contents into concentric layers: Iron and other metals sank to the center, forming the core, while rocky silicates stayed in the mantle. Meanwhile, low-density materials buoyed upward, forming a crust on the surface of the mantle like soup scum. Perhaps this scum accumulated in some places to form continents, while elsewhere oceans materialized. Figuring out precisely what happened and the sequence of all of these steps is “more difficult,” Lithgow-Bertelloni said, because they predate the rock record and are “part of the melting process that happens early on in Earth’s history—very early on.” Until recently, scientists knew of no geochemical traces from so long ago, and they thought they might never crack open the black box from which Earth’s most glorious features emerged. But the subtle anomalies in tungsten and other isotope concentrations are now providing the first glimpses of the planet’s formation and differentiation. These chemical tracers promise to yield a combination timeline-and-map of early Earth, revealing where its features came from, why, and when. * * * Humankind’s understanding of early Earth took its first giant leap when Apollo astronauts brought back rocks from the moon: our tectonic-less companion whose origin was, at the time, a complete mystery. The rocks “looked gray, very much like terrestrial rocks,” said Fouad Tera, who analyzed lunar samples at the California Institute of Technology between 1969 and 1976. But because they were from the moon, he said, they created “a feeling of euphoria” in their handlers. Some interesting features did eventually show up: “We found glass spherules—colorful, beautiful—under the microscope, green and yellow and orange and everything,” recalled Tera, now 85. The spherules probably came from fountains that gushed from volcanic vents when the moon was young. 
But for the most part, he said, “the moon is not really made out of a pleasing thing—just regular things.” In hindsight, this is not surprising: Chemical analysis at Caltech and other labs indicated that the moon formed from Earth material, which appears to have gotten knocked into orbit when the 60 to 100 million-year-old proto-Earth collided with another protoplanet in the crowded inner solar system. This “giant impact” hypothesis of the moon’s formation, though still hotly debated in its particulars, established a key step on the timeline of the Earth, moon and sun that has helped other steps fall into place. Chemical analysis of meteorites is helping scientists outline even earlier stages of our solar system’s timeline, including the moment it all began. First, 4.57 billion years ago, a nearby star went supernova, spewing matter and a shock wave into space. The matter included radioactive elements that immediately began decaying, starting the clocks that isotope chemists now measure with great precision. As the shock wave swept through our cosmic neighborhood, it corralled the local cloud of gas and dust like a broom; the increase in density caused the cloud to gravitationally collapse, forming a brand-new star—our sun—surrounded by a placenta of hot debris. Over the next tens of millions of years, the rubble field surrounding the sun clumped into bigger and bigger space rocks, then accreted into planet parts called “planetesimals,” which merged into protoplanets, which became Mercury, Venus, Earth and Mars—the four rocky planets of the inner solar system today. Farther out, in colder climes, gas and ice accreted into the giant planets. As the infant Earth navigated the crowded inner solar system, it would have experienced frequent, white-hot collisions, which were long assumed to have melted the entire planet into a global “magma ocean.” During these melts, gravity differentiated Earth’s liquefied contents into layers—core, mantle and crust. It’s thought that each of the global melts would have destroyed existing rocks, blending their contents and removing any signs of geochemical differences left over from Earth’s initial building blocks. The last of the Earth-melting “giant impacts” appears to have been the one that formed the moon; while subtracting the moon’s mass, the impactor was also the last major addition to Earth’s mass. Perhaps, then, this point on the timeline—at least 60 million years after the birth of the solar system and, counting backward from the present, at most 4.51 billion years ago—was when the geochemical record of the planet’s past was allowed to begin. “It’s at least a compelling idea to think that this giant impact that disrupted a lot of the Earth is the starting time for geochronology,” said Rick Carlson, a geochemist at the Carnegie Institution of Washington. In those first 60 million years, “the Earth may have been here, but we don’t have any record of it because it was just erased.” Another discovery from the moon rocks came in 1974. Tera, along with his colleague Dimitri Papanastassiou and their boss, Gerry Wasserburg, a towering figure in isotope cosmochemistry who died in June, combined many isotope analyses of rocks from different Apollo missions on a single plot, revealing a straight line called an “isochron” that corresponds to time. “When we plotted our data along with everybody else’s, there was a distinct trend that shows you that around 3.9 billion years ago, something massive imprinted on all the rocks on the moon,” Tera said. 
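The isochron idea lends itself to a small numerical illustration. The sketch below uses synthetic data and the rubidium-strontium system as a stand-in (the lunar work combined several isotope systems); the key relation is that the slope of the fitted line encodes an age via slope = e^(λt) − 1.

```python
import numpy as np

# Toy isochron: synthetic measurements of 87Sr/86Sr vs 87Rb/86Sr for several
# minerals that formed together and then aged (numbers invented to illustrate).
lam = 1.42e-11                 # decay constant of 87Rb, per year
t_true = 3.9e9                 # assumed age, years
slope_true = np.exp(lam * t_true) - 1

rb_sr = np.array([0.1, 0.5, 1.0, 2.0, 3.5])  # parent/reference ratios
initial = 0.699                               # shared initial 87Sr/86Sr
sr_sr = initial + slope_true * rb_sr          # daughter grows with parent

# Fit the isochron and recover the age from its slope: t = ln(1+slope)/lambda
slope_fit, intercept = np.polyfit(rb_sr, sr_sr, 1)
age = np.log(1 + slope_fit) / lam
print(f"fitted slope = {slope_fit:.4f}, recovered age = {age/1e9:.2f} Gyr")
```

Every co-genetic mineral falls on the same line regardless of how much parent it started with, which is what makes the straight-line signature so diagnostic of a single shared event.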
Wasserburg dubbed the event the “lunar cataclysm.” Now more often called the “late heavy bombardment,” it was a torrent of asteroids and comets that seems to have battered the moon 3.9 billion years ago, a full 600 million years after its formation, melting and chemically resetting the rocks on its surface. The late heavy bombardment surely would have rained down even more heavily on Earth, considering the planet’s greater size and gravitational pull. Having discovered such a momentous event in solar system history, Wasserburg left his younger, more reserved colleagues behind and “celebrated in Pasadena in some bar,” Tera said. As of 1974, no rocks had been found on Earth from the time of the late heavy bombardment. In fact, Earth’s oldest rocks appeared to top out at 3.8 billion years. “That number jumps out at you,” said Bill Bottke, a planetary scientist at the Southwest Research Institute in Boulder, Colorado. It suggests, Bottke said, that the late heavy bombardment might have melted whatever planetary crust existed 3.9 billion years ago, once again destroying the existing geologic record, after which the new crust took 100 million years to harden. In 2005, a group of researchers working in Nice, France, conceived of a mechanism to explain the late heavy bombardment—and several other mysteries about the solar system, including the curious configurations of Jupiter, Saturn, Uranus and Neptune, and the sparseness of the asteroid and Kuiper belts. Their “Nice model” posits that the gas and ice giants suddenly destabilized in their orbits sometime after formation, causing them to migrate. Simulations by Bottke and others indicate that the planets’ migrations would have sent asteroids and comets scattering, initiating something very much like the late heavy bombardment. Comets that were slung inward from the Kuiper belt during this shake-up might even have delivered water to Earth’s surface, explaining the presence of its oceans. With this convergence of ideas, the late heavy bombardment became widely accepted as a major step on the timeline of the early solar system. But it was bad news for earth scientists, suggesting that Earth’s geochemical record began not at the beginning, 4.57 billion years ago, or even at the moon’s beginning, 4.51 billion years ago, but 3.8 billion years ago, and that most or all clues about earlier times were forever lost. * * * More recently, the late heavy bombardment theory and many other long-standing assumptions about the early history of Earth and the solar system have come into question, and Earth’s dark age has started to come into the light. According to Carlson, “the evidence for this 3.9 [billion-years-ago] event is getting less clear with time.” For instance, when meteorites are analyzed for signs of shock, “they show a lot of impact events at 4.2, 4.4 billion,” he said. “This 3.9 billion event doesn’t show up really strong in the meteorite record.” He and other skeptics of the late heavy bombardment argue that the Apollo samples might have been biased. All the missions landed on the near side of the moon, many in close proximity to the Imbrium basin (the moon’s biggest shadow, as seen from Earth), which formed from a collision 3.9 billion years ago. Perhaps all the Apollo rocks were affected by that one event, which might have dispersed the melt from the impact over a broad swath of the lunar surface. This would suggest a cataclysm that never occurred. Furthermore, the oldest known crust on Earth is no longer 3.8 billion years old. 
Rocks have been found in two parts of Canada dating to 4 billion and an alleged 4.28 billion years ago, refuting the idea that the late heavy bombardment fully melted Earth’s mantle and crust 3.9 billion years ago. At least some earlier crust survived. In 2008, Carlson and collaborators reported the evidence of 4.28 billion-year-old rocks in the Nuvvuagittuq greenstone belt in Canada. When Tim Elliott, a geochemist at the University of Bristol, read about the Nuvvuagittuq findings, he was intrigued to see that Carlson had used a dating method also used in earlier work by French researchers that relied on a short-lived radioactive isotope system called samarium-neodymium. Elliott decided to look for traces of an even shorter-lived system—hafnium-tungsten—in ancient rocks, which would point back to even earlier times in Earth’s history. The dating method works as follows: Hafnium-182, the “parent” isotope, has a 50 percent chance of decaying into tungsten-182, its “daughter,” every 9 million years (this is the parent’s “half-life”). The halving quickly reduces the parent to almost nothing; by 50 million years after the supernova that sparked the sun, virtually all the hafnium-182 would have become tungsten-182. That’s why the tungsten isotope ratio in rocks like Matt Jackson’s olivine samples can be so revealing: Any variation in the concentration of the daughter isotope, tungsten-182, measured relative to tungsten-184 must reflect processes that affected the parent, hafnium-182, when it was around—processes that occurred during the first 50 million years of solar system history. Elliott knew that this kind of geochemical information was previously believed to have been destroyed by early Earth melts and billions of years of subsequent mantle convection. But what if it wasn’t? Elliott contacted Stephen Moorbath, then an emeritus professor of geology at the University of Oxford and “one of the grandfather figures in finding the oldest rocks,” Elliott said. Moorbath “was keen, so I took the train up.” Moorbath led Elliott down to the basement of Oxford’s earth science building, where, as in many such buildings, a large collection of rocks shares the space with the boiler and stacks of chairs. Moorbath dug out specimens from the Isua complex in Greenland, an ancient bit of crust that he had pegged, in the 1970s, at 3.8 billion years old. Elliott and his student Matthias Willbold powdered and processed the Isua samples and used painstaking chemical methods to extract the tungsten. They then measured the tungsten isotope ratio using state-of-the-art mass spectrometers. In a 2011 Nature paper, Elliott, Willbold and Moorbath, who died in October, reported that the 3.8 billion-year-old Isua rocks contained 15 parts per million more tungsten-182 than the world average—the first ever detection of a “positive” tungsten anomaly on the face of the Earth. The paper scooped Richard Walker of Maryland and his colleagues, who months later reported a positive tungsten anomaly in 2.8 billion-year-old komatiites from Kostomuksha, Russia. Although the Isua and Kostomuksha rocks formed on Earth’s surface long after the extinction of hafnium-182, they apparently derive from materials with much older chemical signatures. Walker and colleagues argue that the Kostomuksha rocks must have drawn from hafnium-rich “primordial reservoirs” in the interior that failed to homogenize during Earth’s early mantle melts. 
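The arithmetic behind that 50-million-year window is worth making explicit; a minimal sketch, using the roughly 9-million-year half-life quoted above:

```python
# How fast does hafnium-182 disappear? With a half-life of ~9 Myr, the
# fraction of the original parent left after time t is (1/2)**(t / 9 Myr).
half_life = 9.0  # Myr, as quoted in the article

for t in (9, 18, 45, 50, 60):
    remaining = 0.5 ** (t / half_life)
    print(f"after {t:>2} Myr: {remaining:6.2%} of the original Hf-182 left")
# After ~50 Myr only ~2% survives, and after 60 Myr about 1%: any excess of
# tungsten-182 measured today must record chemistry from that narrow window.
```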
The preservation of these reservoirs, which must trace to the first 50 million years and must somehow have survived even the moon-forming impact, "indicates that the mantle may have never been well mixed," Walker and his co-authors wrote. That raises the possibility of finding many more remnants of Earth's early history. The researchers say they will be able to use tungsten anomalies and other isotope signatures in surface material as tracers of the ancient interior, extrapolating downward and backward into the past to map proto-Earth and reveal how its features took shape. "You've got the precision to look and actually see the sequence of events occurring during planetary formation and differentiation," Carlson said. "You've got the ability to interrogate the first tens of millions of years of Earth's history, unambiguously." Anomalies have continued to show up in rocks of various ages and provenances. In May, Hanika Rizo of the University of Quebec in Montreal, along with Walker, Jackson and collaborators, reported in Science the first positive tungsten anomaly in modern rocks—62 million-year-old samples from Baffin Bay, Greenland. Rizo hypothesizes that these rocks were brought up by a plume that draws from one of the "blobs" deep down near Earth's core. If the blobs are indeed rich in tungsten-182, then they are not tectonic-plate graveyards as many geophysicists suspect, but instead date to the planet's infancy. Rizo speculates that they are chunks of the planetesimals that collided to form Earth, and that the chunks somehow stayed intact in the process. "If you have many collisions," she said, "then you have the potential to create this patchy mantle." Early Earth's interior, in that case, looked nothing like the primordial magma ocean pictured in textbooks. More evidence for the patchiness of the interior has surfaced. At the American Geophysical Union meeting earlier this month, Walker's group reported a negative tungsten anomaly—that is, a deficit of tungsten-182 relative to tungsten-184—in basalts from Hawaii and Samoa. This and other isotope concentrations in the rocks suggest the hypothetical plumes that produced them might draw from a primordial pocket of metals, including tungsten-184. Perhaps these metals failed to get sucked into the core during planet differentiation. Meanwhile, Elliott explains the positive tungsten anomalies in ancient crust rocks like his 3.8 billion-year-old Isua samples by hypothesizing that these rocks might have hardened on the surface before the final half-percent of Earth's mass—delivered to the planet in a long tail of minor impacts—mixed into them. These late impacts, known as the "late veneer," would have added metals like gold, platinum and tungsten (mostly tungsten-184) to Earth's mantle, reducing the relative concentration of tungsten-182. Rocks that got to the surface early might therefore have ended up with positive tungsten anomalies. Other evidence complicates this hypothesis, however—namely, the concentrations of gold and platinum in the Isua rocks match world averages, suggesting at least some late veneer material did mix into them. So far, there's no coherent framework that accounts for all the data. But this is the "discovery phase," Carlson said, rather than a time for grand conclusions. As geochemists gradually map the plumes and primordial reservoirs throughout Earth from core to crust, hypotheses will be tested and a narrative about Earth's formation will gradually crystallize.
Elliott is working to test his late-veneer hypothesis. Temporarily trading his mass spectrometer for a sledgehammer, he collected a series of crust rocks in Australia that range from 3 billion to 3.75 billion years old. By tracking the tungsten isotope ratio through the ages, he hopes to pinpoint the time when the mantle that produced the crust became fully mixed with late-veneer material. "These things never work out that simply," Elliott said. "But you always start out with the simplest idea and see how it goes." This article appears courtesy of Quanta Magazine.
- Five new species of orchids were discovered in the mountains of Bukidnon
- The orchids were found in an area where many Communist guerrillas operate
- Conservationists say the presence of the rebels helps deter entry of illegal poachers
Scientists have discovered five new species of wild orchids in the remote mountains of Mindanao, where Communist guerrillas have staged their decades-long insurgency against the Philippine government. According to conservationists, it is the presence of these guerrillas that has helped protect these rare species, as they block the entry of poachers into the area. They also admit, however, that this makes cataloging these plant species difficult. "The insurgency problem helps prevent poachers or would-be orchid-hunters from entering the forests. These areas are very isolated. The terrain is treacherous, accessible only by foot and occasionally, a motorcycle or horse," explained plant and wildlife conservationist Miguel David de Leon to AFP, in a story earlier published by GMA News. The orchids were found mostly in the mountain ranges of Bukidnon, where the guerrilla insurgents live among the poor in the farming and mountain communities of the province. The recent discoveries include a yellow orchid with brown spots, which the team named Epicrianthes aquinoi in honor of outgoing President Benigno Aquino III, whose family has long used the color yellow in its political career. Two species of Dendrobium (a pure white variety and a red-lipped white variety), a dark red Epicrianthes species, and a green slipper orchid with red stripes were also among the discoveries made by de Leon and his team. Australian taxonomist Jim Cootes, who joined Filipinos de Leon and research associate Mark Arcebal Naive in their journey through the mountains of Mindanao, said that the recent discoveries show the rich biodiversity of the Philippines. "We need to preserve what is left because the variation within the different species is so high that it is almost priceless. The mountains throughout the archipelago need to be preserved," he said. Deforestation, as well as the illegal wildlife trade, continues to be a major threat to the forests of the Philippines where many of these rare species thrive.
Stress and Strain
In the first three chapters, we have given a fairly simple description of a solid from the atomic point of view, where we explicitly brought out the connection between the individual atoms, their interactions, and the resulting properties of the solid as a whole. In this chapter, we begin a discussion of the formal theory for the continuum solid; we have already introduced the important concepts of stress and strain, and the elastic moduli that connect them. Here we describe these concepts, and the mathematical theory used to handle them, in more detail.
Keywords: Stress Tensor; Body Force; Strain Tensor; Rotation Tensor; Stress Invariant
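As a pointer to the kind of relation the chapter develops, the connection between stress, strain, and the elastic moduli can be written compactly; the following is a standard statement of generalized Hooke's law, not a quotation from the chapter:

```latex
% Generalized Hooke's law: stress depends linearly on strain through the
% fourth-rank stiffness tensor C.
\sigma_{ij} = \sum_{k,l} C_{ijkl}\,\varepsilon_{kl}
% For an isotropic solid, the components of C collapse to the two Lame
% moduli, lambda and mu:
\qquad
\sigma_{ij} = \lambda\,\delta_{ij}\sum_{k}\varepsilon_{kk} + 2\mu\,\varepsilon_{ij}
```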
The Genesis of the ACE+ Anti-Rotating Satellites Concept
The lack of data over the oceans and other remote regions contributes greatly to the uncertainties in the initial state of global weather-prediction models, which, in turn, limits their forecast capabilities. It appeared that the agreement of the GPS/MET data with NWP was noticeably better over data-dense (U.S., Europe) than over data-sparse (Pacific Ocean) regions. These studies suggest that GPS radio occultation data are likely to have a significant positive impact on global climate analyses and global weather prediction. As a result, several proposed systems take advantage of a constellation to provide a better distribution of GPS measurements all over the world. The main improvement of the WATS and ACE+ missions lies in the introduction of a LEO-LEO link, in addition to classical LEO-GNSS measurements, which will provide a spaceborne capacity to discriminate water vapour from temperature profiles (classical methods use radiosondes). The ALCATEL mission analysis group, in association with the system architecture department, conducted several studies to propose constellation concepts coping with all scientific mission requirements, instrument technological limitations, and intrinsic mission-analysis constraints. This paper therefore reviews the ALCATEL constellation studies and the associated trade-offs, including the evolution from the initial constellation concepts (based on classical Walker constellation types, on which many systems such as GPS are based) up to the innovative approach of anti-rotating satellites (satellites on polar orbits with an opposite direction of rotation) that was finally selected in the WATS concept and re-used in the ACE+ mission. The work presented in this paper has been done in the frame of self-funded R&D activities and in the frame of an ESA contract. Responsibility for the contents resides with the authors.
Keywords: Numerical Weather Prediction; Numerical Weather Prediction Model; Polar Orbit; Forecast Capability; Occultation Event
- Abbondanza S, WATS Final Mission Analysis, ALCATEL internal document
- Anthes RA, Rocken C, Kuo YH (March 2000) Applications of COSMIC to meteorology and climate. Terrestrial, Atmospheric and Oceanic Sciences
- Eyre JR, English SJ, Butterworth P, Renshaw RJ, Ridley JK and Ringer MA (2000) Recent progress in the use of satellite data in NWP. UK Met Office, NWP Technical Report No. 296
- Mimoun D, WATS Mission Architecture Outline, ALCATEL internal document
- Larson WJ, Wertz JR (1993) Space Mission Analysis and Design, Kluwer Academic Publishers, second edition
- Rocken C, Kuo YH, Schreiner W, Hunt D, Sokolovskiy S, COSMIC System Description. University Corporation for Atmospheric Research - COSMIC Project Office
- Walker JG (1971) Some Circular Orbit Patterns Providing Continuous Whole Earth Coverage. Journal of the British Interplanetary Society, Vol. 24: 369–384
- WATS: report for assessment, ESA SP-1257(3)
- Johsen KP, Rockel B, Comparison of GPS Water Vapour Estimates with a NWP Model and with Radiosonde Ascents
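As an illustrative aside (not taken from the paper), the Walker-type geometry used above as the classical starting point is easy to sketch. In the usual i:t/p/f notation, t satellites sit in p evenly spaced planes at inclination i, with an inter-plane phasing of f·360/t degrees; the toy Python sketch below assumes a GPS-like 55°:24/6/2 pattern:

```python
def walker_delta(t, p, f, inclination_deg):
    """Generate (RAAN, mean anomaly, inclination) triples for a Walker delta
    pattern i:t/p/f: t satellites in p evenly spaced orbital planes, with an
    inter-plane phase offset of f * 360/t degrees between adjacent planes."""
    s = t // p  # satellites per plane
    sats = []
    for plane in range(p):
        raan = plane * 360.0 / p  # ascending nodes spread evenly over 360 deg
        for k in range(s):
            anomaly = (k * 360.0 / s + plane * f * 360.0 / t) % 360.0
            sats.append((raan, anomaly, inclination_deg))
    return sats

# A GPS-like pattern: 24 satellites in 6 planes at 55 deg with phasing 2.
for raan, ma, inc in walker_delta(24, 6, 2, 55.0)[:4]:
    print(f"RAAN {raan:6.1f} deg, mean anomaly {ma:6.1f} deg, i {inc:.0f} deg")
```

The anti-rotating concept departs from this template by placing satellites on near-polar orbits circling in opposite directions, presumably because the rapidly changing relative geometry of counter-rotating pairs multiplies the LEO-LEO occultation events that a co-rotating Walker pattern would yield only slowly.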
4-D spectral maps with new detail extracted via a multidimensional coherent spectroscopic method called 'GAMERS,' elucidating subtle effects governing the chemical, physical and optical properties of systems
Researchers at Northwestern University have created a new method to extract the static and dynamic structure of complex chemical systems. In this context, "structure" doesn't just mean the 3-D arrangement of atoms that make up a molecule, but rather time-dependent quantum-mechanical degrees of freedom that dictate the optical, chemical and physical properties of the system. Consider how we view the world: three dimensions in space and one dimension in time, i.e., space-time. Remove any one of these dimensions and the view becomes incomplete and far more confused. For the same reason, this new method uses four spectral dimensions to reveal hidden features of molecular structure. In this week's The Journal of Chemical Physics, from AIP Publishing, Elad Harel, the Irving M. Klotz Research Assistant Professor in the Department of Chemistry at Northwestern University, reports a novel 4-D coherent spectroscopic method that directly correlates within and between electronic and vibrational degrees of freedom of complex molecular systems. Harel's work involves a theoretical description of a recent experimental method developed in his lab, called GRadient-Assisted Multi-dimensional Electronic Raman Spectroscopy, or "GAMERS." It's a multidimensional coherent spectroscopic method in which the dimensions are the electronic and vibrational degrees of freedom of the system. "Using multiple pulses of light, GAMERS probes how these different degrees of freedom are correlated to one another, creating a sort of spectral map that is unique to each molecule," Harel said. "[I]t demonstrates that subtle effects dictating the chemical, physical, and optical properties of a system, which are normally hidden in lower-order or lower-dimensionality methods, may be extracted by the GAMERS method." Unlike other methods, this enables a uniquely detailed look at the molecules' energy structure in a way that may offer predictive value. "The shape of the potential surface, which is important for determining the kinetics and thermodynamics of a chemical reaction, may be directly measured," Harel said. "The level of molecular detail afforded by using more pulses of light to interrogate the system was surprising." One potential application of GAMERS could be to pinpoint the physical mechanism of energy transfer during the earliest stages of photosynthesis, a question that remains controversial among researchers, according to Harel. Right now, the main application of this work "is to enable insights into the physical mechanisms behind a host of quantum phenomena in a wide variety of chemical systems," Harel said. "These include singlet fission processes, charge carrier generation and transport in hybrid perovskites, and energy transfer in pigment-protein complexes. Understanding these processes has important implications for developing next-generation solar cells." The GAMERS method is still in an early phase of development, according to Harel, but the team has high hopes for its future application. "We believe technical advances could make such analysis far more widespread within the chemical physics community," said Harel. The article, "Four-dimensional coherent electronic Raman spectroscopy," is authored by Elad Harel.
The article will appear in The Journal of Chemical Physics April 18, 2017 (DOI: 10.1063/1.4979485). After that date, it can be accessed at http://aip.
ABOUT THE JOURNAL: The Journal of Chemical Physics publishes concise and definitive reports of significant research in the methods and applications of chemical physics. See http://jcp.
Julia Majors | EurekAlert!
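To make the idea of a multidimensional spectral map concrete, here is a generic toy model of coherent two-dimensional spectroscopy (two dimensions rather than four, and not the GAMERS pulse sequence itself; all numbers are invented): a signal recorded as a function of two inter-pulse delays Fourier-transforms into a frequency-frequency map whose peaks show which frequencies are correlated.

```python
import numpy as np

# Toy 2D coherent spectrum: the system oscillates at w1 during the first
# inter-pulse delay t1 and at w3 during the detection delay t3.
w1, w3, tau = 2.0, 3.0, 5.0          # frequencies (rad/fs) and damping (fs)
t1 = np.arange(0, 64, 0.25)          # delay axes, fs
t3 = np.arange(0, 64, 0.25)
T1, T3 = np.meshgrid(t1, t3, indexing="ij")

signal = np.exp(1j * w1 * T1) * np.exp(1j * w3 * T3) * np.exp(-(T1 + T3) / tau)

# A 2D Fourier transform turns the delay-delay signal into a frequency-
# frequency map; the single peak at (w1, w3) is the correlation feature.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(signal)))
f1 = np.fft.fftshift(np.fft.fftfreq(t1.size, d=0.25)) * 2 * np.pi
f3 = np.fft.fftshift(np.fft.fftfreq(t3.size, d=0.25)) * 2 * np.pi
i, j = np.unravel_index(spectrum.argmax(), spectrum.shape)
print(f"peak at (w1, w3) ~ ({f1[i]:.2f}, {f3[j]:.2f}) rad/fs")  # ~ (2.0, 3.0)
```

In a real multidimensional experiment, cross-peaks off the diagonal of such maps are what reveal couplings between different degrees of freedom; GAMERS extends the same logic to four spectral axes.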
Lightcurve-based 3-D model of Hebe

- Discovered by: Karl Ludwig Hencke
- Discovery date: July 1, 1847
- MPC designation: (6) Hebe
- Epoch: November 26, 2005 (JD 2453700.5)
- Aphelion: 2.914 AU (435.996 Gm)
- Perihelion: 1.937 AU (289.705 Gm)
- Semi-major axis: 2.426 AU (362.851 Gm)
- Orbital period: 3.78 a (1379.756 d)
- Proper mean motion: 95.303184 deg/yr
- Precession of perihelion: 31.568209 arcsec/yr
- Precession of the ascending node: −41.829042 arcsec/yr
- Dimensions: 205 km × 185 km × 170 km
- Surface area: 109,000 km²
- Volume: 3,380,000 km³
- Temperature: ~170 K (max: ~269 K / −4°C)
- Apparent magnitude: 7.5 to 11.50
- Angular diameter: 0.26″ to 0.065″

6 Hebe (pronounced "HEE-bee") is a large main-belt asteroid, containing around half a percent of the mass of the belt. However, due to its apparently high bulk density (greater than that of the Moon or even Mars), Hebe does not rank among the top twenty asteroids by volume. This high bulk density suggests an extremely solid body that has not been impacted by collisions, which is not typical of asteroids of its size – they tend to be loosely bound rubble piles.

In brightness, Hebe is the fifth-brightest object in the asteroid belt after Vesta, Ceres, Iris, and Pallas. It has a mean opposition magnitude of +8.3, about equal to the mean brightness of Titan, and can reach +7.5 at an opposition near perihelion.

Hebe was discovered on 1 July 1847 by Karl Ludwig Hencke, the sixth asteroid discovered. It was the second and final asteroid discovery by Hencke, after 5 Astraea. The name Hebe, goddess of youth, was proposed by Carl Friedrich Gauss.

Major meteorite source

Hebe is the probable parent body of the H chondrite meteorites and the IIE iron meteorites. This would imply that it is the source of about 40% of all meteorites striking Earth. Evidence for this connection includes the following:
- The spectrum of Hebe matches a mix of 60% H chondrite and 40% IIE iron meteorite material.
- The IIE type are unusual among the iron meteorites, and probably formed from impact melt rather than being fragments of the core of a differentiated asteroid.
- The IIE irons and H chondrites likely come from the same parent body, due to similar trace mineral and oxygen isotope ratios.
- Asteroids with spectra similar to the ordinary chondrite meteorites (accounting for 85% of all falls, including the H chondrites) are extremely rare.
- 6 Hebe is extremely well placed to send impact debris to Earth-crossing orbits. Ejecta with even relatively small velocities (~280 m/s) can enter the chaotic regions of the 3:1 Kirkwood gap at 2.50 AU and the nearby secular resonance that defines the high-inclination edge of the asteroid belt (about 16° at this distance).
- Of the asteroids in this "well-placed" orbit, Hebe is the largest.
- An analysis of likely contributors to Earth's meteorite flux places 6 Hebe at the top of the list, due to its position and relatively large size.

Lightcurve analysis suggests that Hebe has a rather angular shape, which may be due to several large impact craters. Hebe rotates in a prograde direction, with the north pole pointing towards ecliptic coordinates (β, λ) = (45°, 339°), with a 10° uncertainty. This gives an axial tilt of 42°. It has a bright surface and, if its identification as the parent body of the H chondrites is correct, a surface composition of silicate chondritic rocks mixed with pieces of iron–nickel.
A likely scenario for the formation of the surface metal is as follows:
- Large impacts caused local melting of the iron-rich H chondrite surface. The metals, being heavier, would have settled to the bottom of the magma lake, forming a metallic layer buried by a relatively shallow layer of silicates.
- Later sizeable impacts broke up and mixed these layers.
- Small, frequent impacts tend to preferentially pulverize the weaker rocky debris, leading to an increased concentration of the larger metal fragments at the surface, such that they eventually come to comprise ~40% of the immediate surface.

During a stellar occultation by Hebe in 1977, a small moon around Hebe was reported by Paul D. Maley. It was nicknamed "Jebe" (a play on "heebie-jeebies"). This was the first modern-day suggestion that asteroids have satellites; it was 17 years later that the first asteroid moon was formally discovered (Dactyl, the satellite of 243 Ida). However, the discovery of Hebe's moon has never been confirmed.
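As a quick consistency check on the orbital elements listed above, Kepler's third law in solar units ties the orbital period to the semi-major axis, and the quoted values agree. A minimal sketch, using only numbers from the infobox:

```python
# Kepler's third law in solar units: T^2 = a^3, with T in years and a in AU.
# The semi-major axis from the infobox reproduces the quoted period.
a = 2.426                                  # semi-major axis, AU
T = a ** 1.5                               # orbital period, years
print(f"{T:.2f} yr = {T * 365.25:.0f} d")  # -> 3.78 yr = 1380 d
```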
In many examples in the preceding chapters we have seen that shells can be very thin-walled and that they are very often subjected to compressive stresses over extensive areas. The question arises whether the elastic equilibrium of such shells is stable. To answer this question, one of the standard methods of the theory of elastic stability must be applied: the method of adjacent equilibrium or the energy method. We shall explain here the basic ideas of both methods in the terminology of shells and then consider an Euler column to demonstrate their use.

Keywords: Cylindrical Shell, Critical Load, Axial Compression, Shear Load, Stress Resultant
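For reference, the classical Euler column result that both methods reproduce (a standard textbook formula, not quoted in this excerpt) is the critical load of a pin-ended column of bending stiffness EI and length l, at which an adjacent, slightly bent equilibrium configuration first becomes possible:

```latex
% Euler buckling load for a pin-ended column: below P_cr the straight
% configuration is the unique equilibrium; at P_cr the adjacent bent shape
% w(x) = C sin(pi x / l) is also in equilibrium.
P_{\mathrm{cr}} = \frac{\pi^{2} E I}{l^{2}}, \qquad w(x) = C \sin\frac{\pi x}{l}
```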
Their findings, which appear in the latest edition of Proceedings of the National Academy of Sciences, have a range of implications -- from the production of pharmaceuticals and new electronic materials to unraveling the pathways for kidney stone formation.

The researchers focused on L-cystine crystals, the chief component of a particularly nefarious kind of kidney stone. The authors hoped to improve their understanding of how these crystals form and grow in order to design therapeutic agents that inhibit stone formation. While the interest in L-cystine crystals is limited to the biomedical arena, understanding the details of crystal growth, especially the role of defects -- or imperfections in crystals -- is critical to the advancement of emerging technologies that aim to use organic crystalline materials.

Scientists in the Molecular Design Institute in the NYU Department of Chemistry have been examining defects in crystals called screw dislocations -- features on the surface of a crystal that resemble a spiraled ham. Dislocations were first proposed by William Keith Burton, Nicolás Cabrera, and Sir Frederick Charles Frank in the late 1940s as essential for crystal growth. The so-called BCF theory posited that crystals with one screw dislocation would form hillocks resembling a spiral staircase, while those with two screw dislocations would merge and form a structure similar to a Mayan pyramid -- a series of stacked "island" surfaces that are closed off from each other.

Using atomic force microscopy, the Molecular Design Institute team examined both kinds of screw dislocations in L-cystine crystals at nanoscale resolution. Their results showed exactly the opposite of what BCF theory predicted -- crystals with one screw dislocation seemed to form stacked hexagonal "islands," while those with two proximal screw dislocations produced a six-sided spiral staircase. A re-examination of these micrographs by Molecular Design Institute scientist Alexander Shtukenberg, in combination with computer simulations, served to refine the actual crystal growth sequence and found that, in fact, BCF theory still held. In other words, while the crystals' physical appearance seemed at odds with the long-standing theory, they actually did grow in a manner predicted decades ago.

"These findings are remarkable in that they didn't, at first glance, make any sense," said NYU Chemistry Professor Michael Ward, one of the authors of the publication. "They appeared to contradict 60 years of thinking about crystal growth, but in fact revealed that crystal growth is at once elegant and complex, with hidden features that must be extracted if it is to be understood. More importantly, this example serves as a warning that first impressions are not always correct."

The research was supported by the National Science Foundation (CHE-0845526, DMR-1105000, and DMR-1206337) and by the NSF Materials Research Science and Engineering Center (MRSEC) Program (DMR-0820341). NYU's center is one of 27 MRSECs in the country. These NSF-backed centers support interdisciplinary and multidisciplinary materials research to address fundamental problems in science and engineering. For more, go to http://mrsec.as.nyu.edu and http://www.mrsec.org.

James Devitt | EurekAlert!
The structure of the simplest permutite

The simplest permutite is isostructural with the mineral sodalite, but there are hydroxyl ions at the (0, 0, 0) and (1/2, 1/2, 1/2) positions instead of chlorine ions, and the oxygen atoms are somewhat displaced. The cell constant of permutite increases from 8.93 to 9.03 Å on heating to 850°. From the packing of the molecules in the hexagonal structure of ice, it follows that the voids of permutite contain 10 molecules of water, of which two are in the form of hydroxyl while the remaining eight are in the form of zeolite water.

Keywords: Oxygen, Hydroxyl, Zeolite, Hexagonal, Chlorine
Molecular hydrogen is discussed as a promising renewable energy source and an attractive alternative to fossil fuels. Many microorganisms have been exploiting the beneficial properties of hydrogen for more than two billion years. They accommodate dedicated enzymes that either split or evolve molecular hydrogen according to the specific metabolic requirements of the cell. These hydrogen-converting biocatalysts are called hydrogenases and occur in nature in different varieties.

Most hydrogenases become inactivated or even destroyed in the presence of molecular oxygen. This intrinsic property represents a serious problem for biotechnological application. However, some hydrogenases maintain their catalytic activity in the presence of oxygen.

An interdisciplinary team of scientists headed by the UniCat researchers Oliver Lenz and Bärbel Friedrich from Humboldt-Universität zu Berlin and Patrick Scheerer and Christian Spahn from Charité - Universitätsmedizin Berlin has now succeeded in solving the first X-ray crystal structure of a hydrogenase that produces hydrogen even at atmospheric oxygen concentration. The X-ray crystal structure allows detailed insights into the three-dimensional architecture of the enzyme and its metal cofactors, which participate in catalysis. The results have been published in Nature online (http://dx.doi.org/10.1038/nature10505).

Interestingly, the hydrogenase contains a novel iron-sulfur center which acts as an electronic switch in the course of the detoxification of detrimental oxygen. With this discovery, the scientists could substantiate the hypothesis that this particular group of hydrogenases is able to convert both hydrogen and oxygen catalytically. During catalysis, oxygen is reduced to harmless water.

The new results are particularly relevant for fundamental research. Moreover, the biotechnological application of hydrogenases, e.g. solar-driven hydrogen production by photosynthetic microorganisms and enzyme-driven biological fuel cells, may also profit from the new findings. Furthermore, it is anticipated that the novel iron-sulfur center will inspire chemists to design model compounds with improved catalytic properties.

Published in: Fritsch, J., P. Scheerer, S. Frielingsdorf, S. Kroschinsky, B. Friedrich, O. Lenz & C. M. Spahn. The crystal structure of an oxygen-tolerant hydrogenase uncovers a novel iron-sulphur centre. Nature, doi: 10.1038/nature10505 (2011)

For further information, please contact: Dr. Oliver Lenz, Institut für Biologie / Mikrobiologie der Humboldt-Universität zu Berlin, Germany, Phone: +49 (0) 30/2093 8173; Dr. Martin Penno, UniCat Cluster of Excellence, Public Relations Officer

Stefanie Terp | idw
Tuesday, March 9, 2010 A recent story from Robert Boyd, at the McClatchy news service, reports that scientists now believe that "nearly half the living material on our planet is hidden in or beneath the ocean or in rocks, soil, tree roots, mines, oil wells, lakes and aquifers on the continents." Wow. Apparently, at the December 2009 gathering of the American Geophysical Union in San Francisco, Katrina Edwards -- a microbiologist at the University of Southern California, in Los Angeles -- claimed that "The organisms that live in this environment may collectively have a mass equivalent to that of all of Earth's surface dwellers and may provide keys to solving major environmental, agricultural and industrial problems." And, marine geologists are preparing to drill in six locations under the world's oceans to install "observatories" that will be linked to on-land research stations. Fantastic!
Tornado Fails to Dislodge College Bible

At a Glance
- A swarm of tornadoes tore through eight southern states in late January 2017.
- Only one other January outbreak since 1950 spawned more tornadoes.
- Hattiesburg, Mississippi, and Albany, Georgia, were particularly hard hit.

The Jan. 21-23, 2017, tornado outbreak was one of the largest outbreaks on record not only for January but for any winter month, featuring one of the longer tornado tracks on record, according to data from the National Weather Service.

Storm surveys done jointly by the National Weather Service offices in Peachtree City, Georgia, and Tallahassee, Florida, found an EF3 tornado that ravaged parts of Albany, Georgia, on January 22 was on the ground for an hour and 12 minutes, tearing an almost 71-mile path through parts of five Georgia counties. This continuous, long tornado path is quite rare in the historical record since 1950, according to Alex Lamers, a meteorologist and program coordination officer with NOAA, formerly a staff meteorologist at NWS-Tallahassee. Lamers also noted that only 26 tornadoes in the U.S. from 1995 through 2015 had a maximum width larger than the Albany tornado (1.25 miles wide).

The NWS damage survey noted severe tree damage along the entire path, with "90 to 100 percent of the trees in the path uprooted or snapped." A portion of a Procter and Gamble plant in Dougherty County collapsed, and a concrete block church was demolished, suggesting estimated wind speeds reached 150 mph.

One of Winter's Most Prolific Outbreaks

Over a roughly 48-hour period from the morning of Jan. 21 through Jan. 23, 80 tornadoes were confirmed by National Weather Service damage surveys in eight southern states from Texas and Arkansas to Florida and South Carolina. According to data from The Weather Channel severe weather expert Dr. Greg Forbes, this outbreak spawned the second largest number of tornadoes for any January outbreak, topped only by the January 21-22, 1999, outbreak. This was also the third largest tornado outbreak on record for any winter month (December, January or February) in reliable records dating to 1950; the infamous Super Tuesday 2008 outbreak also exceeded the count from the Jan. 2017 outbreak.

- Jan. 21-22, 1999: 129 tornadoes
- Feb. 5-6, 2008 ("Super Tuesday"): 86 tornadoes
- Jan. 21-23, 2017: 80 tornadoes (confirmed)
- Feb. 23-24, 2016: 75 tornadoes
- Jan. 29-30, 2013: 56 tornadoes

The January 2017 U.S. tornado count reached 134 by the end of the month, the second highest total for any January on record since 1950, and only the second time that count has topped 100 in January, according to Forbes. January 1999 had an incredible 212 tornadoes. To put this in perspective, January 2017 saw more tornadoes than an average July (the 20-year average from 1996-2015 is 112).

More Outbreak Details

At least 22 deaths are being blamed on severe weather across the Deep South and Gulf Coast. Sadly, the tornado death toll now tops the toll from all of 2016, and this was the deadliest January for tornadoes in the U.S. since 1969. A deadly EF3 tornado touched down in Hattiesburg, Mississippi, early on Jan. 21, causing considerable damage and killing four people.

In Georgia, 41 tornadoes were confirmed from January 21-22, far and away the largest two-day outbreak in the state on record dating to the mid-20th century. The previous two-day record was 25 tornadoes on September 15-16, 2004, during Hurricane Ivan.
According to Keith Stellman, meteorologist-in-charge of the NWS office in Peachtree City, Georgia, the previous record number of Georgia tornadoes in any entire month of January from 1952-2013 was 15, in 1972. The 27 tornadoes that occurred on January 21 broke the old record for most tornadoes in the state of Georgia on a single day, as well.

Twelve deaths are being blamed on a tornado (or tornadoes) that struck Thomas, Brooks, Berrien and Cook counties in Georgia during the early morning on Jan. 22.

At least a 15-mile non-continuous path was found near Tennille in Washington County from EF1 tornadoes, according to a NWS-Peachtree City survey. Three separate tornadoes were found, with the longest path being 9 miles.

Seventeen tornadoes have been confirmed in Alabama. Nine tornadoes were confirmed in Louisiana during this outbreak. In South Carolina, another EF2 tornado carved a 13-mile path near Barnwell and Denmark and rolled a mobile home several times, injuring one woman stuck inside. Finally, NWS meteorologists confirmed a pair of EF1 tornadoes touched down in south Florida before dawn on Jan. 23, producing tree and some structural damage in both Hialeah and Miami Springs.

A rare "high risk" severe weather outlook was issued on the morning of Jan. 22 by NOAA's Storm Prediction Center (SPC). This was the first high-risk severe weather outlook issued by the SPC since June 3, 2014, which gives an idea of how unusual they are, though the number of severe reports and tornadoes on Jan. 22 was less than expected.

Strangely enough, the rather intense low pressure over the South associated with this storm system took on an eye-like feature and led to non-thunderstorm wind damage in Houston and San Antonio on Jan. 22. At least 30,000 customers were without power in the Houston metro area from these damaging winds at one time. This impressive system also set the January low-pressure record for North Little Rock, Arkansas, at 29.33 inches of mercury, besting the previous record from Jan. 18, 1982.
by Carl Edward Rasmussen, 2018-07-06

Release of greenhouse gases, such as carbon dioxide (CO2), into the atmosphere is causing global warming. The underlying physical mechanism has been well understood at least since the 1970s. More recently, attempts have been made to limit the release of greenhouse gases both nationally and globally, most prominently the Kyoto Protocol in force from 2005 and the Paris Agreement in force from 2016. In this note I assess the progress made so far in limiting the atmospheric carbon dioxide concentration.

An excellent source of recent historical monthly measurements of the atmospheric concentration of carbon dioxide at Mauna Loa in Hawaii is available at this site, the actual data being available here. Thank you to the National Oceanic and Atmospheric Administration (NOAA) for providing this. The data set contains monthly average measurements from 1958 until today (figure 1).

Figure 1: The monthly average carbon dioxide concentration in parts per million [ppm], plotted as a function of calendar year in red dots; the inset shows a closeup of the most recent years. This graph is sometimes known as the Keeling Curve.

The data shows an increasing concentration of carbon dioxide from about 315 ppm in 1958 to about 408 ppm in 2018. For context, the atmospheric carbon dioxide concentration remained constant at about 280 ppm for the 10,000 years leading up to the industrial revolution. The Mauna Loa carbon dioxide concentration is not identical to the entire atmospheric average, but it's pretty close (there is a small dependency on latitude).

There is a fairly pronounced seasonal fluctuation, which is caused by plants taking up more carbon dioxide in the summer than in winter. Here, we're not primarily interested in the seasonal variation, but we will still have to understand what it looks like in order to remove its effects from the data (figure 2).

Figure 2: The seasonal variation as a function of calendar month and year; the coloured equi-concentration curves are labeled with the magnitude of the contribution in ppm. There is a wrap-around boundary condition between December and January.

The seasonal component shows a fairly rapid decrease in the four and a half months between the middle of May and the end of September or early October (when photosynthesis is most active in the northern hemisphere), followed by a slower rise over seven and a half months to the middle of May. The seasonal component is itself slowly changing, becoming more pronounced with time and moving earlier in the year. The peak-to-peak magnitude has grown from 5.8 ppm in 1958 to 7 ppm in 2017. The average of the seasonal component over a year is always zero.

In figure 1, a de-seasonalised blue region has been superimposed on the data. The (small) width of the blue area indicates the 95% confidence interval for the underlying de-seasonalised value. Since we are only basing our inferences about carbon dioxide concentration on finitely many, slightly noisy measurements, we can't be absolutely sure exactly what the concentration is at a certain time. But we can be 95% confident that the actual value is within the blue band.

What is the growth of carbon dioxide in the graph? The growth rate is the instantaneous slope of the de-seasonalised curve (also called the derivative) and is measured in parts per million per year [ppm/y].
However, the raw instantaneous growth rate isn't that useful to plot, for two reasons: firstly, the instantaneous growth rate is not itself that interesting, because it doesn't reveal much about how the carbon dioxide concentration is evolving over finite intervals of time; and secondly, the instantaneous growth rate is both very variable and associated with a large amount of statistical uncertainty, as our data only contains monthly measurements. To overcome this issue, we either 1) first compute the instantaneous growth rate and then average the result locally in time, or 2) first average the concentration values locally in time and then compute the growth rates. In fact these two views are mathematically equivalent, so you can use whichever interpretation you prefer. The resulting growth rates are shown in figure 3.

Figure 3: The growth rate of the de-seasonalised carbon dioxide concentration for three different time averages: annual in blue, three-yearly in green and decadal in red.

The annual averaged growth rate tends to wobble about a fair bit. This is probably caused by random fluctuations in wind patterns leading to local effects of mixing of air volumes across altitude and latitude, and to other short-term variations in carbon dioxide fluxes. These effects are quite short term, typically lasting just a year or two. The annual averaged growth rate is also associated with a fair bit of uncertainty, indicated by the width of the blue region, which corresponds to the 95% confidence region. Note that the uncertainty about the decadal averaged growth rate is much smaller than for the annual averaged growth rate (because it's based on more measurements). Note also that the band is wider towards the boundaries of the data, reflecting that we're less certain about the rate there.

In 1960-1970 the growth rate was slightly less than 1 ppm/y, but the growth rate has been steadily increasing, reaching 2.34±0.26 ppm/y (mean ± 2 std dev) at the beginning of 2018. This means that currently the concentration of carbon dioxide is growing by about 2.3 ppm per year. The National Oceanic and Atmospheric Administration (NOAA) also provides a graph of carbon dioxide growth, which looks similar to the plots presented here. The difference is that the plots presented here emphasise the continuous nature of the process (rather than discrete yearly, or decade-long, average growth events), and they quantify uncertainty.

So, what does the data tell us? It shows that all is not well in the state of the atmosphere! In order to prevent further warming, the carbon dioxide levels must not grow any further. On the growth curve, this corresponds to the curve having to settle down to 0 ppm/y. There is absolutely no hint in the data that this is happening. On the contrary, the rate of growth is itself growing, having now reached about 2.3 ppm/y, the highest growth rate ever seen in modern times. This is not just a "business as usual" scenario; it is worse than that. We're actually moving backward, becoming more and more unsustainable with every year. This shows unequivocally that the efforts undertaken so far to limit greenhouse gases such as carbon dioxide are woefully inadequate.

Unfortunately, the carbon dioxide data has been subject to some misleading interpretations, due to poor statistical reasoning.
For example, in the Nature Communications paper [Keenan et al., 2016] entitled "Recent pause in the growth rate of atmospheric CO2 due to enhanced terrestrial carbon uptake", the authors identify "a pause in the growth rate of atmospheric CO2" lasting from 2002 to (at least) 2014. They also identify a "point of structural change" in the growth rate in 2002. In fact it is highly unlikely that any such pause or point of structural change actually exists. In the growth rate figure above, the decadal average growth rate was 1.93±0.01 ppm/y at the start of 2002 and 2.37±0.08 ppm/y at the end of 2014 (mean ± 2 std dev), so the growth rate has indeed grown. In fact, the growth rate of the growth rate (also called the acceleration) within the start-of-2002 to end-of-2014 interval is 0.034±0.006 ppm/y², whose lower bound is close to the entire 60-year average of 0.027 ppm/y². Publishing such flippant ideas should be avoided, as they create confusion where there should be clarity. It would be prudent of the authors to retract their paper (or explain why it is a valuable contribution when it doesn't agree with observations).

It is indisputable that, in the long term, the carbon dioxide growth must be brought down to zero, otherwise the earth will just keep getting warmer. However, it is unrealistic to assume that carbon dioxide growth can be halted instantaneously. It is more realistic to assume that the reduction in the growth will take place gradually. The level at which the concentration will stop growing depends on the schedule of the growth reduction. Let's assume that the global growth will be reduced at a fixed relative rate, at some percentage per year, indefinitely. Below is a figure showing what the equilibrium carbon dioxide concentration will be, depending on the rate of growth reduction.

The graph shows that the faster the growth rate is reduced, the lower the final concentration. To limit the final level to twice the preindustrial level will require roughly an annual 1.5% reduction in the growth rate if we start now; if we first wait 25 years before taking action, close to a 3.5% reduction per year will be required. A doubling of the carbon dioxide concentration is estimated to cause a warming of between 1.5°C and 4.5°C; see the IPCC 5th assessment report, and see also the concept of climate sensitivity. Reduction rates of less than about 0.3% will lead to very large concentrations. (But note that currently the growth rate is growing by about 1.3% per year, not reducing at all.)
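The geometric-decay model behind that figure is easy to reproduce: if the growth rate g(t) = g0(1−r)^t is cut by a fixed fraction r per year, the total future rise integrates to g0/(−ln(1−r)). A minimal sketch, using the note's own current values (408 ppm and 2.3 ppm/y) as starting point:

```python
# Sketch of the equilibrium-concentration model: with growth rate
# g(t) = g0 * (1 - r)**t, the total future increase is
#   integral_0^inf g(t) dt = g0 / (-ln(1 - r)),
# so the concentration levels off at c0 + g0 / (-ln(1 - r)).
import numpy as np

c0 = 408.0     # current concentration, ppm (from the note)
g0 = 2.3       # current growth rate, ppm/y (from the note)

def equilibrium(c0, g0, r):
    """Final CO2 concentration for a fixed relative growth reduction r per year."""
    return c0 + g0 / (-np.log(1.0 - r))

for r in (0.005, 0.015, 0.035):
    print(f"{100 * r:.1f}%/y reduction -> levels off near {equilibrium(c0, g0, r):.0f} ppm")

# A 1.5%/y reduction starting now gives ~560 ppm, i.e. twice the
# preindustrial 280 ppm, matching the figure described in the text.
```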
Activity coefficients are used to calculate the actual concentration of non-ideal solutions. When two liquids are mixed together in different mole fractions, the mole fractions of their vapors will also be different. The partial pressures of each component are measured in both the liquid and vapor phases. With a knowledge of both mole fractions and partial pressures, we can determine the activity of each component from Raoult's law and Henry's law. The end goal of this problem is to derive a relation between the activity coefficient, the partial pressures and the mole fraction from Raoult's law and from Henry's law individually.

Most chemistry freshmen (and yes, freshwomen too!) encounter aqueous equilibrium, which involves the calculation of equilibrium constants and the all-familiar pH. Generally, these involve the use of molarity concentrations, but the unit is said to be dimensionless. We all know that if concentrations are used in terms of molarity, the units cannot be disregarded. There is no magic to chemical problems! Units ...

The method of finding activity coefficients from both Raoult's law and Henry's law is given.
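In standard notation (mine, not necessarily that of the original solution), with p_i the measured partial pressure of component i, x_i its liquid-phase mole fraction, p_i* its pure-component vapor pressure, and K_H,i its Henry's-law constant, the two requested relations can be written:

```latex
% Activity coefficient referenced to Raoult's law (gamma -> 1 as x_i -> 1):
\gamma_i^{\mathrm{R}} = \frac{p_i}{x_i \, p_i^{*}}
% Activity coefficient referenced to Henry's law (gamma -> 1 as x_i -> 0):
\gamma_i^{\mathrm{H}} = \frac{p_i}{x_i \, K_{H,i}}
```

The two conventions differ only in the reference state: the pure liquid for Raoult's law, and infinite dilution for Henry's law.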
In many discussions of rocket technology, a skeptic will often make some comment about how things would be so much better if we had Warp Drive. But the reality is that we don't really need Warp Drive for things to be interesting. We just need Sufficiently Advanced Propulsion Technology™ (name derived from Clarke's Third Law). While all sorts of arbitrary definitions for Sufficiently Advanced Propulsion Technology™ can likely be suggested, I would suggest the following high-level requirements as a minimum for a candidate technology:
- Can operate safely in a biosphere.
- Has an Isp high enough to enable an SSTO vehicle with a Mass Ratio comparable to existing jumbo jets.
- Has an engine thrust/weight sufficient to enable a high enough vehicle thrust/weight for minimized gravity losses, while still keeping total engine mass comparable to the engine mass fractions of existing jumbo jets.

My logic for this is simply that if you had a propulsion system like this, the airframe and systems requirements for the vehicle would end up being not that much higher than those of an airliner, and most likely the service life of the system would be measured in the low tens of thousands of flights. While I think that it's possible to design fully-reusable TSTO and maybe even SSTO rockets using existing rocket propulsion technology, in order to hit the mass fractions necessary you either get really miniscule payload fractions, or you end up with very, very challenging mass ratios, or most likely both. But with SAPT™ you get very high payload fractions with very unchallenging mass ratios. Now realistically, you would want to spread the pain a little bit more between the propulsion and the vehicle structures/systems, but I think this is a good first pass that's easy to analyze–i.e. this is what a propulsion system would need to do in order to enable easy, highly-reusable SSTO vehicles without having to push hard on any of the other technical areas.

So what would this mean as far as performance specifications? Let's start with stats from a representative jumbo jet. I'll use the 777 Freighter as an example of an existing, long-range jumbo jet:
- Payload fraction: 0.29
- Propellant mass fraction: 0.43
- Engine mass fraction: 0.05

Let's start first with a "dense" propellant SAPT™ system (say something like subcooled ammonia or water), since for that you could assume that the propellant storage and handling dry mass is similar to the 777's propellant handling and storage dry mass. If you assume a required 9,500 m/s of delta-V to orbit (a reasonable SWAG including gravity losses and drag losses with a high-Isp propulsion system) and a 0.43 pmf (which works out to a mass ratio of 1.75), the required Isp is approximately 1,750 s. If you assume a vehicle launch T/W of 1.4 (to keep gravity losses modest in spite of the very slow change in vehicle mass with respect to time), that comes out to an engine T/W of about 29. By comparison, most solid-core NTRs have Isps with subcooled ammonia of only about 450-500 s, and T/W ratios in the 10-30 range (with the better numbers being for more theoretical designs and the worse numbers for ones that actually were closer to operations back when the US did NTRs).

For a LH2-propellant SAPT™ system, you'd need to do something to factor in the much lower density of LH2 compared to Jet A, liquid ammonia, or water.
If you want to keep the payload fraction the same, and assume that the propellant-density-scaling parts of the dry mass (tanks and tank-volume-driven structures) make up 10% of the non-engine dry mass fraction, that gives you ~2.3%. With LH2 being ~10x less dense than Jet A, and assuming you keep the payload fraction, engine fraction, and the non-propellant-density-scaling dry fraction constant (i.e. you take all the extra tank mass out of the propellant fraction), that drops the propellant fraction to about 22%. Using a slightly higher required delta-V of 10 km/s (to factor in higher drag losses due to lower system density, and higher gravity losses due to the higher-Isp engines not accelerating the vehicle as fast due to slower mass change), you get a required engine Isp of about 4,100 s.

Are there any even semi-sane propulsion technologies that come anywhere close to these numbers? The only ones I can think of that meet all three criteria might be Bussard's QED ARC engines or variants that run off of something like Winterberg's micro-chemical fusion bomblets... both of which involve not-exactly-proven-out fusion technology. Pretty much you're stuck assuming some sort of advanced fusion or antimatter system. Solid-core nuclear thermal or laser or microwave thermal are all too low of Isp. Gas-core nuclear thermal could be high enough Isp, maybe, for an open-cycle design, but finding a way to do that without flagrantly violating requirement #1 would be tough. Some of the pulsed nuclear propulsion ideas get up into the right Isp and T/W ratio range, but all involve EMP levels that would fry anything in LEO. There may be something else that none of us are thinking of, but nothing that looks even remotely near-term.

So in conclusion, this was a fun and somewhat silly exercise, but it does show what it would take for a propulsion system to be advanced enough to make the rest of designing a high-flight-rate SSTO-class vehicle "easy". We aren't talking about warp drive per se, but we are talking about technologies that are sufficiently advanced compared to the state of the art to still seem somewhat magic. Until we have something like SAPT™, it looks like we probably ought to focus on technologies that allow us better T/W ratios for our engines, much better mass ratios for our vehicles (due to better materials), keep living with much crappier payload mass fractions, and live with more Rube Goldberg launch methods (like TSTOs, boostback, airlaunch, tether-assisted launch, etc). It isn't the end of the world, and I think there's a ton of room left to be squeezed out of existing, boring, chemical propulsion. But yeah, the inner sci-fi nerd in me hopes that some genius Wunderkinder out there are working on propulsion technologies that are indistinguishable from magic.
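Both Isp figures above follow from the Tsiolkovsky rocket equation, dv = g0 · Isp · ln(MR). A quick sketch reproducing them from the post's assumed delta-Vs and mass fractions:

```python
# Sketch: required Isp from the rocket equation, dv = g0 * Isp * ln(MR),
# using the mass fractions assumed in the post.
import math

g0 = 9.80665  # standard gravity, m/s^2

def required_isp(dv, pmf):
    """Isp (seconds) needed for delta-v dv (m/s) at propellant mass fraction pmf."""
    mass_ratio = 1.0 / (1.0 - pmf)          # MR = initial mass / final mass
    return dv / (g0 * math.log(mass_ratio))

# Dense-propellant case: 9,500 m/s at a 777-like pmf of 0.43 (MR ~ 1.75)
print(round(required_isp(9500, 0.43)))      # -> ~1,723 s, close to the ~1,750 s quoted

# LH2 case: 10,000 m/s at the reduced pmf of 0.22
print(round(required_isp(10000, 0.22)))     # -> ~4,104 s, matching the ~4,100 s figure
```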
SEATTLE — The mass of warm water known as "the blob" that heated up the North Pacific Ocean has dissipated, but scientists are still seeing the lingering effects of those unusually warm sea-surface temperatures on Pacific Northwest salmon and steelhead.

Federal research surveys this summer caught among the lowest numbers of juvenile coho and Chinook salmon in 20 years, suggesting that many fish did not survive their first months at sea. Scientists warn that salmon fisheries may face hard times in the next few years. Fisheries managers also worry about below-average runs of steelhead returning to the Columbia River now. Returns of adult steelhead that went to sea as juveniles a year ago so far rank among the lowest in 50 years.

Scientists believe poor ocean conditions are likely to blame: cold-water salmon and steelhead are confronting an ocean ecosystem that has been shaken up in recent years.

"The blob's fairly well dissipated and gone. But all these indirect effects that it facilitated are still there," said Brian Burke, a research fisheries biologist with the Northwest Fisheries Science Center.

Marine creatures found farther south and in warmer waters have turned up in abundance along the coasts of Washington and Oregon, some for the first time. "That's going to have a really big impact on the dynamics in the ecosystem," Burke said. "They're all these new players that are normally not part of the system."

Researchers with NOAA Fisheries and the Oregon State University Cooperative Institute for Marine Resources Studies have been surveying off the Pacific Northwest for 20 years to study juvenile salmon survival. In June, they caught record numbers of warm-water fish such as Pacific pompano and jack mackerel, a potential salmon predator. But the catch of juvenile coho and Chinook salmon during the June survey — which has been tied to adult returns — was among the three lowest in 20 years.

Burke and other scientists warned in a memo to National Oceanic and Atmospheric Administration fisheries administrators last month that poor ocean conditions may mean poor salmon returns to the Columbia River system over the next few years. "There was hardly any salmon out there," Burke said. "Something is eating them and we don't know what and we don't know precisely where," he added.

Seabirds such as common murres could be the culprits. Researchers caught fewer forage fish, such as herring, anchovy and smelt. When forage fish are low, avian predators may be forced to eat more juvenile salmon. Seabirds near the mouth of the Columbia River may have feasted on more juvenile salmon as they entered the ocean.

The North Pacific Ocean had been unusually warm since the fall of 2013 with "the blob," but sea-surface temperatures have recently cooled to average or slightly warmer than average conditions. Changes in the marine ecosystem are likely to be seen for a while. The research surveys also pulled up weird new creatures that had not been netted before.
Researchers have caught tens of thousands of tube-shaped, jelly-like pyrosomes, which are generally found in tropical waters. Their impact on the marine food web isn't yet clear.

Fisheries managers are also seeing lower runs of steelhead to the Columbia River system this year. Joe DuPont, a regional fisheries manager with Idaho Fish and Game, blames poor feeding conditions when juvenile steelhead went out to the Pacific Ocean last year. Warm waters brought less-nutritious copepods, the tiny crustaceans at the base of the food chain, while the lipid-rich northern copepods that young steelhead eat were less abundant.

It's the second consecutive year of low steelhead runs, said Tucker Jones, ocean salmon and Columbia River program manager with the Oregon Department of Fish and Wildlife. "There's a lot of circumstantial evidence to point to an unhappy river experience and meeting ocean conditions that were far from hospitable," Jones said. "The 'blob' especially changed the zooplankton food web structure that was out there," he added.

Fisheries managers have put some fishing restrictions in place due to the low number of steelhead forecast to return this season. While the mechanisms for steelhead and salmon may be different, "large-scale changes to the ocean are driving all of it," said Burke.
Rare Proteins Collapse Earlier Than Thought

Some organisms are able to survive in hot springs, while others can only live at mild temperatures because their proteins aren't able to withstand such extreme heat. ETH researchers investigated these differences and showed that often only a few key proteins determine the life and heat-induced death of a cell.

Proteins are produced in cells as thread-like molecules, which then fold into a protein-specific structure; some are spherical and others tubular. These structures disintegrate during denaturation: the proteins become thread-like again and as a result lose their function.

Previous research based on computational analysis has assumed that a large part of the proteins of a cell denature when the narrow temperature range in which the proteins function optimally is exceeded. For the intestinal bacterium E. coli, the optimal temperature is about 37°C; above 46°C the bacteria die because the protein structures collapse. This basic assumption is now being overturned by a team of researchers led by Paola Picotti, Assistant Professor of Biochemistry at ETH Zurich. In a study that has just been published in the journal Science, the researchers show that only a small fraction of key proteins denature at the same time when a critical temperature threshold is reached.

In their study they examined and compared the entirety of all proteins (the proteome) of four organisms at different temperatures. The researchers exposed the intestinal bacterium E. coli, human cells, yeast cells and the heat-resistant bacterium T. thermophilus to gradually increasing temperatures up to 76°C. After each temperature increment, the scientists measured the proteins present in the cells and determined their structural features. In total, the researchers analysed 8,000 proteins.

"Thanks to this research, we can now show that only a few proteins collapse at the temperature at which the bacterium dies," says Picotti. "We could not confirm the prediction that the majority of proteins of an organism denature at the same time." About 80 of the proteins examined collapsed as soon as the temperature exceeded the species-specific optimum by a few degrees. Although they constitute only a small fraction of the proteins of a cell, their collapse proves fatal, since some of these proteins have vital functions or are key components in a large protein network. "As soon as these key components fail, the cell cannot continue to function," says Picotti.

That the key components of a biological system are sensitive to heat would at first glance appear to be an evolutionary glitch. However, these proteins are often unstable as a result of their flexibility, which enables them to carry out varying tasks in the cell, says the biochemist. "Flexibility and stability can be mutually exclusive. The cell has to make a compromise."

The researchers also show that the proteins that are most stable and least prone to aberrant or pathological folding are also the most common in cells. From the perspective of the cell, this makes the most sense. Were it reversed, and the most common proteins were to misfold the fastest, the cell would have to invest a lot of energy in their reconstruction or disposal. For this reason, cells ensure that common proteins are more stable than rare ones.

Why are T. thermophilus bacteria unaffected even by temperatures of over 70°C?
According to the researchers, these cells preferentially stabilise the more heat-sensitive, functionally crucial proteins, for example through adapted protein sequences. Picotti's findings could be used to help genetically modify organisms to withstand higher temperatures. Today, certain chemicals, such as ethanol, are produced biotechnologically with the help of bacteria. But these bacteria often work only in a narrow temperature window, which constrains the yield. If production could proceed at higher temperatures, the yield could be optimised without damage to the bacteria.

The researchers also found evidence that certain denatured proteins tend to clump again at even higher temperatures and form aggregates. In human cells, Picotti and her colleagues found that the protein DNMT1 first denatures with increasing heat and later aggregates with others of its kind. These and other proteins with similar properties are associated with neurological disorders, such as Alzheimer's or Parkinson's.

This study is the first to investigate the thermal stability of proteins from several organisms on a large scale directly in the complex cellular matrix. Proteins were neither isolated from the cellular fluid nor purified to conduct the measurements. For their study, the researchers broke the cells open and then measured the stability of all proteins directly in the cellular fluid at different temperatures.

Leuenberger, P., Ganscha, S., Kahraman, A., Cappelletti, V., Boersema, P. J., von Mering, C., … Picotti, P. (2017). Cell-wide analysis of protein thermal unfolding reveals determinants of thermostability. Science, 355(6327), eaai7825. doi:10.1126/science.aai7825
Droughts can travel hundreds to thousands of kilometers from where they started, like a slow-moving hurricane. A new study sheds light on how these droughts evolve in space and time, bringing vital new insight for water managers.

A small subset of the most intense droughts move across continents in predictable patterns, according to a new study published in the journal Geophysical Research Letters by researchers in Austria and the United States. The study could help improve projections of future drought, allowing for more effective planning.

While most droughts tend to stay put near where they started, approximately 10% travel between 1,400 and 3,100 kilometers (depending on the continent), the study found. These traveling droughts also tend to be the largest and most severe ones, with the highest potential for damage to the agriculture, energy, water, and humanitarian aid sectors.

"Most people think of a drought as a local or regional problem, but some intense droughts actually migrate, like a slow-motion hurricane, on a timescale of months to years instead of days to weeks," says Julio Herrera-Estrada, a graduate student in civil and environmental engineering at Princeton, who led the study.

The researchers analyzed drought data from 1979 to 2009, identifying 1,420 droughts worldwide. They found hotspots on each continent where a number of droughts had followed similar tracks. For example, in the southwestern United States, droughts tend to move from south to north. In Australia, the researchers found two drought hotspots and common directions of movement: one from the east coast in a northwest direction, the other from the central plains in a northeast direction.

What causes some droughts to travel remains unclear, but the data suggest that feedback between precipitation and evaporation in the atmosphere and on land may play a role. "This study also suggests that there might be specific tipping points in how large and how intense a drought is, beyond which it will carry on growing and intensifying," said Justin Sheffield, a professor of hydrology and remote sensing at the University of Southampton. Sheffield was Herrera-Estrada's advisor while serving as research scholar at Princeton.

While the initial onset of a drought remains difficult to predict, the new model could allow researchers to better predict how droughts will evolve and persist. "This study used an innovative approach to study how droughts evolve in space and time simultaneously, to have a more comprehensive understanding of their behaviors and characteristics, which has not been possible with previous approaches," says Yusuke Satoh, a researcher at the International Institute for Applied Systems Analysis (IIASA), who also worked on the study.

The study also raises the importance of regional cooperation and of sharing information across borders, whether state or national. One example is the North American Drought Monitor, which brings together measurements and other information from Mexico, the US, and Canada, creating a comprehensive real-time monitoring system.

The researchers said the next step for the work is to examine why and how droughts travel by studying the feedback between evaporation and precipitation in greater detail. Herrera-Estrada also said he would like to analyze how drought behavior might be affected by climate change.

Herrera-Estrada, J. E., Satoh, Y., & Sheffield, J. (2017). Spatio-Temporal Dynamics of Global Drought. Geophysical Research Letters: 1-25. DOI: 10.1002/2016GL071768.
http://pure.iiasa.ac.at/14387/ Katherine Leitzell | idw - Informationsdienst Wissenschaft
<urn:uuid:78ab7772-14e5-4d46-b56d-1ff53bec7ae0>
4
1,403
Content Listing
Science & Tech.
40.498876
95,526,599
Nuclear power plants provide about 17 percent of the world's electricity. Some countries depend more on nuclear power for electricity than others. In France, for instance, about 75 percent of the electricity is generated from nuclear power, according to the International Atomic Energy Agency. In the United States, nuclear power supplies about 15 percent of the electricity overall, but some states get more power from nuclear plants than others. There are more than 400 nuclear power plants around the world, with more than 100 in the United States. The dome-shaped containment building at the Shearon Harris Nuclear Power Plant near Raleigh, NC. Have you ever wondered how a nuclear power plant works or how safe nuclear power is? In this edition of HowStuffWorks, we will examine how a nuclear reactor and a power plant work. We'll explain nuclear fission and give you a view inside a nuclear reactor. Uranium is a fairly common element on Earth, incorporated into the planet during the planet's formation. Uranium was originally formed in stars. Old stars exploded, and the dust from these shattered stars aggregated together to form our planet. Uranium-238 (U-238) has an extremely long half-life (4.5 billion years), and therefore is still present in fairly large quantities. U-238 makes up 99 percent of the uranium on the planet. U-235 makes up about 0.7 percent of naturally occurring uranium, while U-234 is even more rare and is formed by the decay of U-238. (Uranium-238 goes through many stages of alpha and beta decay to form a stable isotope of lead, and U-234 is one link in that chain.) Uranium-235 has an interesting property that makes it useful for both nuclear power production and for nuclear bomb production. U-235 decays naturally, just as U-238 does, by alpha radiation. U-235 also undergoes spontaneous fission a small percentage of the time. However, U-235 is one of the few materials that can undergo induced fission. If a free neutron runs into a U-235 nucleus, the nucleus will absorb the neutron without hesitation, become unstable and split immediately. See How Nuclear Radiation Works for complete details. Picture a uranium-235 nucleus with a neutron approaching from the top. As soon as the nucleus captures the neutron, it splits into two lighter atoms and throws off two or three new neutrons (the number of ejected neutrons depends on how the U-235 atom happens to split). The two new atoms then emit gamma radiation as they settle into their new states. There are three things about this induced fission process that make it especially interesting:
- The probability of a U-235 atom capturing a neutron as it passes by is fairly high. In a reactor working properly (known as the critical state), one neutron ejected from each fission causes another fission to occur.
- The process of capturing the neutron and splitting happens very quickly, on the order of picoseconds (1 × 10⁻¹² seconds).
- An incredible amount of energy is released, in the form of heat and gamma radiation, when a single atom splits. The two atoms that result from the fission later release beta radiation and gamma radiation of their own as well.
In order for these properties of U-235 to work, a sample of uranium must be enriched so that it contains 2 percent to 3 percent or more of uranium-235. Three-percent enrichment is sufficient for use in a civilian nuclear reactor used for power generation. Weapons-grade uranium is composed of 90 percent or more U-235.
The energy released by a single fission comes from the fact that the fission products and the neutrons, together, weigh less than the original U-235 atom. The difference in weight is converted directly to energy at a rate governed by the equation E = mc². Something on the order of 200 MeV (million electron volts) is released by the fission of one U-235 atom (if you would like to convert that into something useful, consider that 1 eV is equal to 1.602 × 10⁻¹² ergs, 1 × 10⁷ ergs is equal to 1 joule, 1 joule equals 1 watt-second, and 1 BTU equals 1,055 joules). That may not seem like much, but there are a lot of uranium atoms in a pound of uranium. So many, in fact, that a pound of highly enriched uranium as used to power a nuclear submarine or nuclear aircraft carrier is equal to something on the order of a million gallons of gasoline. When you consider that a pound of uranium is smaller than a baseball, and a million gallons of gasoline would fill a cube 50 feet per side (50 feet is as tall as a five-story building), you can get an idea of the amount of energy available in just a little bit of U-235.
Inside a Nuclear Power Plant
To build a nuclear reactor, what you need is some mildly enriched uranium. Typically, the uranium is formed into pellets with approximately the same diameter as a dime and a length of an inch or so. The pellets are arranged into long rods, and the rods are collected together into bundles. The bundles are then typically submerged in water inside a pressure vessel. The water acts as a coolant. In order for the reactor to work, the bundle, submerged in water, must be slightly supercritical. That means that, left to its own devices, the uranium would eventually overheat and melt. To prevent this, control rods made of a material that absorbs neutrons are inserted into the bundle using a mechanism that can raise or lower them. Raising and lowering the control rods allows operators to control the rate of the nuclear reaction. When an operator wants the uranium core to produce more heat, the rods are raised out of the uranium bundle. To create less heat, the rods are lowered into the bundle. The rods can also be lowered completely into the bundle to shut the reactor down in the case of an accident or to change the fuel. The uranium bundle acts as an extremely high-energy source of heat. It heats the water and turns it to steam. The steam drives a steam turbine, which spins a generator to produce power. In some reactors, the steam from the reactor goes through a secondary, intermediate heat exchanger to convert another loop of water to steam, which drives the turbine. The advantage of this design is that the radioactive water/steam never contacts the turbine. Also, in some reactors, the coolant fluid in contact with the reactor core is a gas (carbon dioxide) or a liquid metal (sodium, potassium); these types of reactors allow the core to be operated at higher temperatures. Once you get past the reactor itself, there is very little difference between a nuclear power plant and a coal-fired or oil-fired power plant except for the source of the heat used to create steam. Electricity for homes and businesses comes from this generator at the Shearon Harris plant. It produces 870 megawatts. Pipes carry steam to power the generator at the power plant. The reactor's pressure vessel is typically housed inside a concrete liner that acts as a radiation shield. That liner is housed within a much larger steel containment vessel.
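The unit chain in the parenthetical above can be carried through numerically. The sketch below is my own arithmetic, not the article's: it uses the round figure of 200 MeV per fission, assumes complete fission of every atom in a pound of pure U-235, and takes gasoline at roughly 132 MJ per US gallon. Under those assumptions it lands within an order of magnitude of the "million gallons" comparison; the article's larger figure presumably folds in different assumptions about fuel composition and burnup.

```python
# Back-of-the-envelope check of the energy figures above.
EV_TO_J = 1.602e-19          # joules per electron volt
MEV_PER_FISSION = 200e6      # eV released per U-235 fission (approximate)
AVOGADRO = 6.022e23          # atoms per mole
MOLAR_MASS_U235 = 235.0      # grams per mole
POUND_G = 453.6              # grams per pound
GASOLINE_J_PER_GAL = 1.32e8  # joules per US gallon (approximate)

atoms = POUND_G / MOLAR_MASS_U235 * AVOGADRO
energy_j = atoms * MEV_PER_FISSION * EV_TO_J

print(f"energy per fission:      {MEV_PER_FISSION * EV_TO_J:.3e} J")
print(f"atoms in 1 lb of U-235:  {atoms:.3e}")
print(f"energy in 1 lb of U-235: {energy_j:.3e} J")
print(f"gasoline equivalent:     {energy_j / GASOLINE_J_PER_GAL:,.0f} gallons")
```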
This vessel contains the reactor core as well as the hardware (cranes, etc.) that allows workers at the plant to refuel and maintain the reactor. The steel containment vessel is intended to prevent leakage of any radioactive gases or fluids from the plant. Finally, the containment vessel is protected by an outer concrete building that is strong enough to survive such things as crashing jet airliners. These secondary containment structures are necessary to prevent the escape of radiation/radioactive steam in the event of an accident like the one at Three Mile Island. The absence of secondary containment structures in Soviet-designed nuclear power plants allowed radioactive material to escape in the accident at Chernobyl. Steam rises from the cooling tower at the Harris plant. Workers in the control room at the nuclear power plant can keep an eye on the nuclear reactor and take action if something goes wrong. Uranium-235 is not the only possible fuel for a power plant. Another fissionable material is plutonium-239. Plutonium-239 can be created easily by bombarding U-238 with neutrons -- something that happens all the time in a nuclear reactor.
Subcriticality, Criticality and Supercriticality
When a U-235 atom splits, it gives off two or three neutrons (depending on the way the atom splits). If there are no other U-235 atoms in the area, then those free neutrons fly off into space as neutron rays. If the U-235 atom is part of a mass of uranium -- so there are other U-235 atoms nearby -- then one of three things happens:
- If, on average, exactly one of the free neutrons from each fission hits another U-235 nucleus and causes it to split, then the mass of uranium is said to be critical. The mass will exist at a stable temperature. A nuclear reactor must be maintained in a critical state.
- If, on average, less than one of the free neutrons hits another U-235 atom, then the mass is subcritical. Eventually, induced fission will end in the mass.
- If, on average, more than one of the free neutrons hits another U-235 atom, then the mass is supercritical. It will heat up.
For a nuclear bomb, the bomb's designer wants the mass of uranium to be very supercritical so that all of the U-235 atoms in the mass split in a microsecond. In a nuclear reactor, the reactor core needs to be slightly supercritical so that plant operators can raise and lower the temperature of the reactor. The control rods give the operators a way to absorb free neutrons so the reactor can be maintained at a critical level. The amount of uranium-235 in the mass (the level of enrichment) and the shape of the mass control the criticality of the sample. You can imagine that if the shape of the mass is a very thin sheet, most of the free neutrons will fly off into space rather than hitting other U-235 atoms. A sphere is the optimal shape. The amount of uranium-235 that you must collect together in a bare sphere to get a critical reaction is about 115 pounds (52 kg); surrounding the sphere with a neutron reflector reduces that amount considerably. This amount is referred to as the critical mass. For plutonium-239, the bare-sphere critical mass is about 22 pounds (10 kg).
What Can Go Wrong
Well-constructed nuclear power plants have an important advantage when it comes to electrical power generation -- they are extremely clean. Compared with a coal-fired power plant, nuclear power plants are a dream come true from an environmental standpoint. A coal-fired power plant actually releases more radioactivity into the atmosphere than a properly functioning nuclear power plant.
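The subcritical/critical/supercritical taxonomy above maps directly onto the effective neutron multiplication factor, usually written k: the average number of new fissions caused by the neutrons from one fission. A toy generation-by-generation model (illustrative only; real criticality calculations solve neutron transport, not this sketch) makes the three regimes visible:

```python
def neutron_population(k, generations, n0=1000):
    """Toy chain-reaction model: each generation, every neutron causes
    on average k new fissions. k < 1: subcritical, dies out;
    k = 1: critical, steady; k > 1: supercritical, grows."""
    n = float(n0)
    history = [n]
    for _ in range(generations):
        n *= k
        history.append(n)
    return history

for k in (0.95, 1.00, 1.05):
    pop = neutron_population(k, generations=100)
    print(f"k = {k:.2f}: start {pop[0]:.0f}, after 100 generations {pop[-1]:.1f}")
```

With k = 0.95 the population collapses to almost nothing; with k = 1.05 it grows more than a hundredfold within 100 generations, which is why the control rods must hold an operating reactor at k of essentially 1.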
Coal-fired plants also release tons of carbon, sulfur and other elements into the atmosphere (see this page for details). Unfortunately, there are significant problems with nuclear power plants:
- Mining and purifying uranium has not, historically, been a very clean process.
- Improperly functioning nuclear power plants can create big problems. The Chernobyl disaster is a good example. Chernobyl was poorly designed and improperly operated, but it dramatically shows the worst-case scenario. Chernobyl scattered tons of radioactive dust into the atmosphere.
- Spent fuel from nuclear power plants is toxic for thousands of years, and, as yet, there is no safe, permanent storage facility for it.
- Transporting nuclear fuel to and from plants poses some risk, although to date, the safety record in the United States has been good.
These problems have largely derailed the creation of new nuclear power plants in the United States. Society seems to have decided that the risks outweigh the rewards.
<urn:uuid:4aabe067-496d-47fd-b041-5536b5185195>
4.125
2,477
Knowledge Article
Science & Tech.
50.394161
95,526,603
Unit of time
A unit of time or time unit is any particular time interval, used as a standard way of measuring or expressing duration. The base unit of time in the International System of Units (SI), and by extension most of the Western world, is the second, defined as about 9 billion oscillations of the caesium atom. The exact modern definition, from the National Institute of Standards and Technology, is:
- The duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom.
Historically, units of time were defined by the movements of astronomical objects.
- Sun based: the year was the time for the earth to revolve around the sun. Year-based units include the olympiad (four years), the lustrum (five years), the indiction (15 years), the decade, the century, and the millennium.
- Moon based: the month was based on the moon's orbital period around the earth.
- Earth based: the time it took for the earth to rotate on its own axis, as observed on a sundial. Units originally derived from this base include the week at seven days, and the fortnight at 14 days. Subdivisions of the day include the hour (1/24th of a day), which was further subdivided into minutes and finally seconds. The second became the international standard unit (SI unit) for science.
- Celestial sphere based: as in sidereal time, where the apparent movement of the stars and constellations across the sky is used to calculate the length of a year.
These units do not have a consistent relationship with each other and require intercalation. For example, the year cannot be divided into 12 28-day months, since 12 times 28 is 336, well short of 365. The lunar month (as defined by the moon's rotational period) is not 28 days but about 27.3 days. The year, defined in the Gregorian calendar as 365.24 days, has to be adjusted with leap days and leap seconds. Consequently, these units are now all defined as multiples of seconds.
The natural units for timekeeping used by most historical societies are the day, the solar year and the lunation. Such calendars include the Sumerian, Egyptian, Chinese, Babylonian, ancient Athenian, Hindu, Islamic, Icelandic, Mayan, and French Republican calendars.
Scientific time units
- The jiffy is the amount of time light takes to travel one fermi (about the size of a nucleon) in a vacuum.
- Planck time is the time light takes to travel one Planck length. Theoretically, this is the smallest time measurement that will ever be possible. Smaller time units have no use in physics as we understand it today.
- The TU (for Time Unit) is a unit of time defined as 1024 µs for use in engineering.
- The svedberg is a time unit used for sedimentation rates (usually of proteins). It is defined as 10⁻¹³ seconds (100 fs).
- The galactic year, based on the rotation of the galaxy, is usually measured in millions of years.
- The geological time scale relates stratigraphy to time. The deep time of Earth's past is divided into units according to events which took place in each period. For example, the boundary between the Cretaceous period and the Paleogene period is defined by the Cretaceous–Paleogene extinction event. The largest unit is the supereon, composed of eons. Eons are divided into eras, which are in turn divided into periods, epochs and ages.
The geological time scale is not a true mathematical unit, as ages, epochs, periods, eras and eons do not all have the same length; instead, their length is determined by the geological and historical events that define them individually.
Note: The light-year is not a unit of time, but a unit of length of about 9.5 trillion kilometres (9,460,730,472,580.8 km).
|Unit||Length, Duration and Size||Notes|
|Planck time unit||5.39 × 10⁻⁴⁴ s||The amount of time light takes to travel one Planck length. Theoretically, this is the smallest time measurement that will ever be possible. Smaller time units have no use in physics as we understand it today.|
|jiffy (physics)||3 × 10⁻²⁴ s||The amount of time light takes to travel one fermi (about the size of a nucleon) in a vacuum.|
|zeptosecond||10⁻²¹ s||Time measurement scale of the NIST strontium atomic clock. The smallest fragment of time currently measurable is 850 zeptoseconds.|
|femtosecond||10⁻¹⁵ s||Pulse time on the fastest lasers.|
|svedberg||10⁻¹³ s||Time unit used for sedimentation rates (usually of proteins).|
|nanosecond||10⁻⁹ s||Time for molecules to fluoresce.|
|shake||10⁻⁸ s||10 nanoseconds; also a casual term for a short period of time.|
|microsecond||10⁻⁶ s||Symbol is µs.|
|millisecond||0.001 s||Shortest time unit used on stopwatches.|
|jiffy (electronics)||1/60 s to 1/50 s||Used to measure the time between alternating power cycles. Also a casual term for a short period of time.|
|second||1 s||SI base unit.|
|moment||1/40th of an hour (90 seconds)||Medieval unit of time used by astronomers to compute astronomical movements.|
|ke||14 minutes and 24 seconds||Usually calculated as 15 minutes, similar to "quarter" as in "a quarter past six" (6:15).|
|kilosecond||1,000 seconds||16 minutes and 40 seconds.|
|day||24 hours||Longest unit used on stopwatches and countdowns.|
|week||7 days||Also called "sennight".|
|megasecond||1,000,000 seconds||About 11.6 days.|
|fortnight||2 weeks||14 days.|
|lunar month||27 days 4 hours 48 minutes – 29 days 12 hours||Various definitions of lunar month exist.|
|month||28–31 days||Occasionally calculated as 30 days.|
|quarter and season||3 months||
|semester||an 18-week division of the academic year||Literally "six months", also used in this sense.|
|year||12 months or 365 days||
|common year||365 days||52 weeks and 1 day.|
|tropical year||365 days and 5:48:45.216 hours||Average.|
|Gregorian year||365 days and 5:49:12 hours||Average.|
|sidereal year||365 days and 6:09:09.7635456 hours||
|leap year||366 days||52 weeks and 2 days.|
|olympiad||4-year cycle||48 months, 1,461 days, 35,064 hours, 2,103,840 minutes, 126,230,400 seconds.|
|indiction||15-year cycle||
|gigasecond||1,000,000,000 seconds||About 31.7 years.|
|millennium||1,000 years||Also called "kiloannum".|
|terasecond||1 trillion seconds||About 31,700 years.|
|megaannum||1,000,000 (10⁶) years||Also called "megayear". About 1,000 millennia (plural of millennium), or 1 million years.|
|petasecond||10¹⁵ seconds||About 31,700,000 years.|
|galactic year||Approximately 230 million years||The amount of time it takes the Solar System to orbit the center of the Milky Way Galaxy one time.|
|cosmological decade||varies||10 times the length of the previous cosmological decade, with CÐ 1 beginning either 10 seconds or 10 years after the Big Bang, depending on the definition.|
|aeon||1,000,000,000 years or an indefinite period of time||Also spelled "eon".|
|Day of Brahman (aka Day of God)||4,320,000,000 years or 4.32 aeons||Like the galactic year, which measures the time it takes all the solar systems of the Milky Way Galaxy to orbit its central nexus one time, this measurement of time is the presumed length of time it takes all the galaxies in the Universe to orbit their presumed central nexus (aka "Ground Zero of the Big Bang") one time. In this context, the "7 Days of Creation" mentioned in the book of Genesis are seen in a much different light, since Earth is estimated to be ~4.3 billion years old, or 1 Day of God according to the Vedic system of time.|
|exasecond||10¹⁸ seconds||About 31,700,000,000 years.|
|zettasecond||10²¹ seconds||About 31.7 trillion years.|
|yottasecond||10²⁴ seconds||About 31.7 × 10¹⁵ years.|
All of the important units of time can be interrelated. The key units are the second, defined in terms of an atomic process; the day, an integral multiple of seconds; and the year, usually 365.25 days. Most of the other units used are multiples or divisions of these three. (An accompanying graphic in the original article shows the three heavenly bodies whose orbital parameters relate to the units of time.)
- "Definitions of the SI base units". The NIST Reference on Constants, Units, and Uncertainty. National Institute of Standards and Technology. Retrieved 4 March 2016.
- "StarChild Question of the Month for February 2000". NASA. http://starchild.gsfc.nasa.gov/docs/StarChild/questions/question18.html
- "It only takes a zeptosecond: Scientists measure smallest fragment of time". RT International. Retrieved 20 April 2017.
- Milham, Willis I. (1945). Time and Timekeepers. New York: Macmillan. p. 190. ISBN 0-7808-0008-7.
- "Semester". Merriam-Webster Dictionary. http://www.merriam-webster.com/dictionary/semester. Retrieved 3 December 2014.
- McCarthy, Dennis D.; Seidelmann, P. Kenneth (2009). Time: From Earth Rotation to Atomic Physics. Wiley-VCH. p. 18. ISBN 3-527-40780-4. Extract of page 18.
- Jones, Floyd Nolen (2005). The Chronology of the Old Testament (15th ed.). New Leaf Publishing Group. p. 287. ISBN 0-89051-416-X. Extract of page 287.
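The closing paragraph's point, that every entry in the table reduces to the second, can be made concrete. This small sketch takes its values from the table above; the year uses the 365.25-day convention just stated:

```python
# Expressing a few of the table's units as multiples of the SI second.
UNITS_IN_SECONDS = {
    "Planck time": 5.39e-44,
    "svedberg": 1e-13,
    "shake": 1e-8,
    "second": 1.0,
    "ke": 14 * 60 + 24,               # 14 minutes 24 seconds
    "day": 24 * 3600,
    "fortnight": 14 * 24 * 3600,
    "year (365.25 d)": 365.25 * 24 * 3600,
    "olympiad": 1461 * 24 * 3600,     # 1,461 days, per the table
    "gigasecond": 1e9,
}

year = UNITS_IN_SECONDS["year (365.25 d)"]
for name, seconds in UNITS_IN_SECONDS.items():
    print(f"{name:>16}: {seconds:.4g} s  (~{seconds / year:.3g} yr)")
```

Running it reproduces table entries such as the gigasecond's "about 31.7 years".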
<urn:uuid:4f9374f4-d325-48b0-8a7a-d050bc18cfcc>
3.609375
2,380
Knowledge Article
Science & Tech.
66.769223
95,526,607
Bright Halo Crater (Impact)
Living reference work entry
Halo around fresh impact craters displaying brighter optical albedo, higher radar backscatter, and distinctive thermal properties relative to their surroundings. Optical and radar-bright halos surround fresh craters, often occurring with bright crater rays (Pieters et al. 1985) (Fig. 1). In some cases, a dark collar is also seen adjacent to the crater (see Dark Halo Crater (impact, optical)). Most optically bright fresh craters also have a radar-bright ring (Fig. 2); however, the reverse is not true (Thomson et al. 2013).
Keywords: Impact Crater, Dark Halo, Radar Backscatter, Secondary Crater, Ejecta Blanket
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.
- Bell SW, Thomson BJ, Dyar MD, Neish CD, Cahill JTS, Bussey DBJ (2012) Dating small fresh lunar craters with Mini-RF radar observations of ejecta blankets. J Geophys Res 117:E00H30. doi:10.1029/2011JE004007
- Buczkowski DL, Barnouin-Jha OS, Seelos FP, Seelos KD, Malaret E, Hash C et al (2008) Bright-haloed craters in Chryse Planitia. LPS 39:1032
- Elger TG (1895) The moon – a full description and map of its principal physical features. George Philip & Son, London
- Thomson BJ, Bussey DBJ, Cahill JTS, El-Baz F, Neish CD, Raney RK, Trang D (2013) Global distribution of radar-bright halos on the moon detected by LRO Mini-RF. 44th LPSC #2107
- Wood CA (2003) The modern moon. A personal view. Sky Publishing, Cambridge, MA
© Springer Science+Business Media New York 2014
<urn:uuid:fc7101ae-d67b-4891-9632-ae1b795a55f9>
2.640625
448
Structured Data
Science & Tech.
61.405483
95,526,627
The vast majority of life on Earth is heavily dependent on the availability of molecular oxygen (O2). Modern life thrives in its presence, and is pushed to its limits when it goes away. Our group is very much interested in Earth's oxygenation history. Luckily for us, many fundamental questions on this topic remain unanswered. When did O2 first appear at Earth's surface? Once it was available, how stable was it in the atmosphere? What about in the oceans? AnbarLab staff and students have many projects dedicated to addressing these questions.
Our Window to the Early Earth: Ancient Marine Sedimentary Rocks
Certain marine sedimentary rocks capture and preserve key chemical details of Earth's ancient oceans and atmosphere. For example, the abundance of redox-sensitive trace metals can provide invaluable information about changes in atmospheric and oceanic oxygenation that occurred when the rock was originally deposited. Our group uses drill cores from all over the world to assess changes in these elemental abundances at various points in geologic time.
Direct Clues: Isotopes
Some processes in nature prefer to use versions of an element with more or fewer neutrons, referred to in chemistry as "isotopes". If this happens, evidence is sometimes left behind in the ancient rock record. If these processes are linked to the availability of O2, investigating these clues becomes a powerful tool in reconstructing Earth's oxygen history. To date, our group has developed and refined the use of Fe, Mo, and U isotopes as paleoredox proxies for perturbations in ancient Earth O2.
Indirect Clues: Experiments
Experiments under simulated early-Earth conditions provide unrivaled information about how changes in the environment are reflected in the ancient rock record. For example, experiments can be conducted under O2-deficient conditions to test the viability of alternate oxidation pathways, or to estimate the rate of elemental delivery to the ocean in a dominantly anoxic atmosphere.
Our Analytical Tool of Choice: Mass Spectrometry
Bulk elemental abundances of samples can be directly measured using a quadrupole inductively-coupled plasma mass spectrometer (ICPMS). For isotope ratio measurements, we instead turn to the more powerful Neptune multi-collector ICPMS (MC-ICPMS). For details about our instrumentation, sample preparation, and analyses, please visit the website of the W. M. Keck Foundation Laboratory for Environmental Biogeochemistry (LINK).
Learn more about this research:
- Web site of the NSF "Dynamics of Earth Surface Oxygenation" project based at ASU (PI: Anbar)
- Web site of the Australian project of the Agouron Institute Drilling Program
- "A whiff of oxygen before the Great Oxidation Event?" Anbar et al., Science (2007)
- "Metal stable isotopes in paleooceanography" Anbar and Rouxel, Ann. Rev. Earth Planet. Sci. (2007)
- "Reconstructing paleoredox conditions through a multitracer approach: the key to the past is the present" Severmann and Anbar, Elements (2009)
- "Rapid expansion of oceanic anoxia immediately before the end-Permian mass extinction" Brennecka et al., Proc. Natl. Acad. Sci. (2011)
- Forbes interview about O2 and the NSF FESD project.
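Isotope-ratio results of the kind produced on an MC-ICPMS are conventionally reported in delta notation: the parts-per-thousand deviation of a sample's ratio from that of a reference standard. The sketch below illustrates only the arithmetic; the ratios are invented, and real measurements also involve a certified reference standard (for molybdenum, commonly NIST SRM 3134) and corrections for instrumental mass bias.

```python
def delta_permil(r_sample, r_standard):
    """Delta notation: per-mil deviation of a sample isotope ratio
    from a reference standard's ratio."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Hypothetical 98Mo/95Mo ratios (illustrative numbers only).
r_std = 1.5000
for name, r in [("oxic sediment", 1.5024), ("euxinic sediment", 1.5003)]:
    print(f"{name}: d98Mo = {delta_permil(r, r_std):+.2f} per mil")
```

Shifts of a few per mil or less are typical, which is why the multi-collector instrument's precision matters.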
<urn:uuid:58d5de9e-45de-4f63-9204-c6f5eb74d0ae>
3.78125
720
About (Org.)
Science & Tech.
27.150732
95,526,651
Nasa's robotic moon explorer, Ladee, is no more. Flight controllers confirmed Friday that the orbiting spacecraft crashed into the back side of the moon as planned, just three days after surviving a full lunar eclipse, something it was never designed to do. Researchers believe Ladee likely vaporized when it hit because of its extreme orbiting speed of 3,600 mph, possibly smacking into a mountain or side of a crater. No debris would have been left behind. "It's bound to make a dent," project scientist Rick Elphic predicted Thursday. By Thursday evening, the spacecraft had been skimming the lunar surface at an incredibly low altitude of 300 ft. Its orbit had been lowered on purpose last week to ensure a crash by Monday following an extraordinarily successful science mission. Ladee, short for Lunar Atmosphere and Dust Environment Explorer, was launched in September from Virginia. From the outset, Nasa planned to crash the spacecraft into the back side of the moon, far from the Apollo artifacts left behind during the moonwalking days of 1969 to 1972. It completed its primary 100-day science mission last month and was on overtime. The extension had Ladee flying during Tuesday morning's lunar eclipse; its instruments were not designed to endure such prolonged darkness and cold. But the small spacecraft survived – it's about the size of a vending machine – with just a couple of pressure sensors acting up. The mood in the control center at Nasa's Ames Research Center in Mountain View, California, was upbeat late Thursday afternoon, according to project manager Butler Hine. "Having flown through the eclipse and survived, the team is actually feeling very good," Hine told the Associated Press in a phone interview. But the uncertainty of the timing of Ladee's demise had the flight controllers "on edge", he said. As it turns out, Ladee succumbed within several hours of Hine's comments. Nasa announced its end early Friday morning. It will be at least a day or two before Nasa knows precisely where the spacecraft ended up; the data cutoff indicates it smashed into the far side of the moon, although just barely. Ladee did not have enough fuel to remain in lunar orbit much beyond the end of its mission. It joined dozens if not scores of science satellites and Apollo program spacecraft parts that have slammed into the moon's surface, on purpose, over the decades, officials said. Until Ladee, the most recent man-made impacts were the LCross crater-observing satellite that went down in 2009 and the twin Grail spacecraft in 2012. During its $280 million mission, Ladee identified various components of the thin lunar atmosphere – neon, magnesium and titanium, among others – and studied the dusty veil surrounding the moon, created by all the surface particles kicked up by impacting micrometeorites. "Ladee's science cup really overfloweth," Elphic said earlier this month. "Ladee, by going to the moon, has actually allowed us to visit other worlds with similar tenuous atmospheres and dusty environments."
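A rough energy estimate backs up the "likely vaporized" claim. The sketch below is my own arithmetic, not Nasa's: it converts the quoted 3,600 mph orbital speed into kinetic energy per kilogram of spacecraft and compares it with the energy content of TNT.

```python
MPH_TO_MS = 0.44704        # metres per second per mile per hour
TNT_J_PER_KG = 4.184e6     # joules per kilogram of TNT (standard value)

v = 3600 * MPH_TO_MS       # impact speed from the article, in m/s
ke_per_kg = 0.5 * v**2     # specific kinetic energy, J/kg

print(f"impact speed:   {v:.0f} m/s")
print(f"kinetic energy: {ke_per_kg:.2e} J per kg of spacecraft")
print(f"TNT equivalent: {ke_per_kg / TNT_J_PER_KG:.2f} kg TNT per kg")
```

At roughly 1.3 MJ/kg, every kilogram of the spacecraft hit with about a third of its own weight in TNT, before counting any residual propellant.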
<urn:uuid:20e48f6b-9d0f-4264-b7c0-32868106dfd2>
3.1875
630
News Article
Science & Tech.
43.464839
95,526,659
Predicted fire behavior and societal benefits in three eastern Sierra Nevada vegetation types
Author(s): C.A. Dicus; K. Delfino; D.R. Weise
Source: Fire Ecology 5(1): 67-78
Publication Series: Scientific Journal (JRNL)
PDF: View PDF (1018.49 KB)
Description: We investigated potential fire behavior and various societal benefits (air pollution removal, carbon sequestration, and carbon storage) provided by woodlands of pinyon pine (Pinus monophylla) and juniper (Juniperus californica), shrublands of Great Basin sagebrush (Artemisia tridentata) and rabbitbrush (Ericameria nauseosa), and recently burned annual grasslands near a wildland-urban interface (WUI) community in the high desert of the eastern Sierra Nevada Mountains. Fire behavior simulations showed that shrublands had the greatest flame lengths under low wind conditions, and that pinyon-juniper woodlands had the greatest flame lengths when winds exceeded 25 km hr⁻¹ and fire transitioned to the crowns. Air pollution removal capacity (PM₁₀, O₃, NO₂, etc.) was significantly greater in pinyon-juniper stands, followed by shrublands and grasslands. Carbon storage (trees and burned tree snags only) did not significantly differ between pinyon-juniper and burned stands (~14,000 kg ha⁻¹), but will change as burned snags decompose. Annual C sequestration rates in pinyon-juniper stands averaged 630 kg ha⁻¹ yr⁻¹. A landscape-level assessment showed that total compliance with residential defensible space regulations would result in minimal impact to air pollution removal capacity and carbon sequestration due to a currently low population density. Our methodology provides a practical mechanism to assess how potential management options might simultaneously impact both fire behavior and various environmental services provided by WUI vegetation.
- You may send email to email@example.com to request a hard copy of this publication.
- (Please specify exactly which publication you are requesting and your mailing address.)
- We recommend that you also print this page and attach it to the printout of the article, to retain the full citation information.
- This article was written and prepared by U.S. Government employees on official time, and is therefore in the public domain.
Citation: Dicus, C.A.; Delfino, K.; Weise, D.R. 2009. Predicted fire behavior and societal benefits in three eastern Sierra Nevada vegetation types. Fire Ecology 5(1): 67-78
Keywords: air pollution removal, Artemisia tridentata, carbon sequestration, fire behavior, FlamMap, NEXUS, Pinus monophylla, UFORE, wildland-urban interface
- Bird habitat relationships along a Great Basin elevational gradient
- Big sagebrush in pinyon-juniper woodlands: Using forest inventory and analysis data as a management tool for quantifying and monitoring mule deer habitat
- Pinyon/juniper woodlands [Chapter 4]
XML: View XML
<urn:uuid:9650a7e2-404d-4418-b7a8-308eece3924b>
2.671875
657
Academic Writing
Science & Tech.
30.191103
95,526,673
Evaluating relationships among tree growth rate, shade tolerance, and browse tolerance following disturbance in an eastern deciduous forest
Author(s): Lisa M. Krueger; Chris J. Peterson; Alejandro Royo; Walter P. Carson
Source: Canadian Journal of Forest Research. 39: 2460-2469.
Publication Series: Scientific Journal (JRNL)
Station: Northern Research Station
View PDF (973.56 KB)
Interspecific differences in shade tolerance among woody species are considered a primary driving force underlying forest succession. However, variation in shade tolerance may be only one of many interspecific differences that cause species turnover. For example, tree species may differ in their sensitivity to herbivory. Nonetheless, existing conceptual models of forest dynamics rarely explicitly consider the impact of herbivores. We examined whether browsing by white-tailed deer (Odocoileus virginianus Zimmermann) alters the relationship between light availability and plant performance. We monitored growth and survival for seedlings of six woody species over 2 years within six windthrow gaps and the nearby intact forest in the presence and absence of deer. Browsing decreased seedling growth for all species except beech (Fagus grandifolia Ehrh.). More importantly, browsing altered growth rankings among species. Increased light availability enhanced growth for three species when excluded from deer, but browsing obscured these relationships. Browsing also reduced survival for three species; however, survival rankings did not significantly differ between herbivory treatments. Our results indicated that browsing and light availability operated simultaneously to influence plant growth within these forests. Thus, existing models of forest dynamics may make inaccurate predictions of the timing and composition of species reaching the canopy, unless they can account for how plant performance varies as a result of a variety of environmental factors, including herbivory.
- Check the Northern Research Station web site to request a printed copy of this publication.
- Our on-line publications are scanned and captured using Adobe Acrobat.
- During the capture process some typographical errors may occur.
- Please contact Sharon Hobrla, firstname.lastname@example.org if you notice any errors which make this publication unusable.
Citation: Krueger, Lisa M.; Peterson, Chris J.; Royo, Alejandro; Carson, Walter P. 2009. Evaluating relationships among tree growth rate, shade tolerance, and browse tolerance following disturbance in an eastern deciduous forest. Canadian Journal of Forest Research 39:2460-2469.
- Managing for diversity: harvest gap size drives complex light, vegetation, and deer herbivory impacts on tree seedlings
- Spatiotemporal variation in deer browse and tolerance in a woodland herb
- Ten-year response of competing vegetation after oak shelterwood treatments in West Virginia
XML: View XML
<urn:uuid:c5268f26-f364-4872-b314-0d47ca623e8d>
2.90625
582
Truncated
Science & Tech.
21.398959
95,526,674
Climate change scientists have made a startling U-turn on the theory surrounding the effects of man-made climate change. They now say that the burning of fossil fuels and cutting down of trees causes the earth to cool down, rather than warm up as previously believed. A new NASA study suggests that areas on Earth that have seen heavy industrialisation have actually cooled down, which completely flies in the face of traditional climate change science, which dictates that the warming up of the Earth is man-made. Environmentalists have long argued that the burning of fossil fuels in power stations and for other uses is responsible for global warming and for predicted temperature increases, because of the high levels of carbon dioxide produced, which drive the global greenhouse effect. While the findings did not dispute the effects of carbon dioxide on global warming, they found aerosols, also given off by burning fossil fuels, actually cool the local environment, at least temporarily. The research was carried out to see if current climate change models for calculating future temperatures were taking into account all factors and were accurate. A NASA spokesman said: "To quantify climate change, researchers need to know the Transient Climate Response (TCR) and Equilibrium Climate Sensitivity (ECS) of Earth. Both values are projected global mean surface temperature changes in response to doubled atmospheric carbon dioxide concentrations but on different timescales. TCR is characteristic of short-term predictions, up to a century out, while ECS looks centuries further into the future, when the entire climate system has reached equilibrium and temperatures have stabilised." The spokesman said it was "well known" that aerosols, such as those emitted in volcanic eruptions and by power stations, act to cool Earth, at least temporarily, by reflecting solar radiation away from the planet. He added: "In a similar fashion, land use changes such as deforestation in northern latitudes result in bare land that increases reflected sunlight." Kate Marvel, a climatologist at GISS and the paper's lead author, said the results showed the "complexity" of estimating future global temperatures. She said: "Take sulfate aerosols, which are created from burning fossil fuels and contribute to atmospheric cooling. They are more or less confined to the northern hemisphere, where most of us live and emit pollution. There's more land in the northern hemisphere, and land reacts quicker than the ocean does to these atmospheric changes. Because earlier studies do not account for what amounts to a net cooling effect for parts of the northern hemisphere, predictions for TCR and ECS have been lower than they should be." The study found existing models for climate change had been too simplistic and did not account for these factors. The spokesman said: "There have been many attempts to determine TCR and ECS values based on the history of temperature changes over the last 150 years and the measurements of important climate drivers, such as carbon dioxide. As part of that calculation, researchers have relied on simplifying assumptions when accounting for the temperature impacts of climate drivers other than carbon dioxide, such as tiny particles in the atmosphere known as aerosols, for example."
Climate scientist Gavin Schmidt, the director of NASA's Goddard Institute for Space Studies (GISS) in New York and a co-author on the study, published in the journal Nature Climate Change, said: "The assumptions made to account for these drivers are too simplistic and result in incorrect estimates of TCR and ECS. The problem with that approach is that it falls way short of capturing the individual regional impacts of each of those variables," he said, adding that only within the last ten years has there been enough available data on aerosols to abandon the simple assumption and instead attempt detailed calculations. But, rather than being good news, NASA has concluded that the failure to take these factors into account means existing climate change models have underestimated what the future impact on global temperatures will be. NASA researchers at GISS accomplished a first-ever feat by calculating the temperature impact of each of these variables (greenhouse gases, natural and manmade aerosols, ozone concentrations, and land use changes) based on historical observations from 1850 to 2005, using a massive ensemble of computer simulations. The spokesman said: "Analysis of the results showed that these climate drivers do not necessarily behave like carbon dioxide, which is uniformly spread throughout the globe and produces a consistent temperature response; rather, each climate driver has a particular set of conditions that affects the temperature response of Earth. Because earlier studies do not account for what amounts to a net cooling effect for parts of the northern hemisphere, predictions for TCR and ECS have been lower than they should be. This means that Earth's climate sensitivity to carbon dioxide (atmospheric carbon dioxide's capacity to affect temperature change) has been underestimated, according to the study." The Intergovernmental Panel on Climate Change, which draws its TCR estimate from earlier research, places the estimated future rise at 1.8°F (1.0°C). But the new NASA study dovetails with a GISS study published last year that puts the TCR value at 3.0°F (1.7°C). Mr Schmidt said: "If you've got a systematic underestimate of what the greenhouse gas-driven change would be, then you're systematically underestimating what's going to happen in the future when greenhouse gases are by far the dominant climate driver."
<urn:uuid:b1ceae6f-a0b1-4750-8753-e06678dd7e70>
3.9375
1,232
News Article
Science & Tech.
28.409958
95,526,681
Optics is the branch of physics that deals with light and vision. It primarily concerns the generation, propagation and detection of electromagnetic radiation having wavelengths greater than x-rays and shorter than microwaves. Most optical phenomena can be accounted for using the classical electromagnetic description of light. However, these descriptions are often too difficult to apply in practice, so practical optics is usually done using simplified models. Visual perception is the ability to interpret information and surroundings from visible light reaching the eye. The perception which results is known as eyesight or vision. The physiological components involved in vision are referred to collectively as the visual system. Visual systems are a focus of research in psychology, cognitive science, neuroscience and molecular biology. The visual system in humans allows people to assimilate information from the environment. The act of seeing starts when the lens of the eye focuses an image of its surroundings onto a light-sensitive membrane at the back of the eye called the retina. The retina converts patterns of light into neuronal signals. These signals are processed by different parts of the brain in a hierarchical fashion, from the retina to the lateral geniculate nucleus, and on to the primary and secondary visual cortex of the brain. Signals can also travel directly from the retina to the superior colliculus.
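The simplest of the simplified models mentioned above is ray (geometric) optics, in which refraction at an interface follows Snell's law, n1·sin(θ1) = n2·sin(θ2). A minimal sketch (illustrative only; the refractive indices are typical textbook values, not tied to any source in this article):

```python
from math import asin, degrees, radians, sin

def refraction_angle_deg(theta_incident_deg, n1, n2):
    """Snell's law: n1*sin(t1) = n2*sin(t2). Returns the refracted
    angle in degrees, or None beyond the critical angle (total
    internal reflection, possible only when n1 > n2)."""
    s = n1 * sin(radians(theta_incident_deg)) / n2
    if abs(s) > 1.0:
        return None
    return degrees(asin(s))

# Light passing from air (n ~ 1.00) into crown glass (n ~ 1.52):
for theta in (0, 30, 60):
    print(f"{theta} deg in air -> {refraction_angle_deg(theta, 1.00, 1.52):.1f} deg in glass")
```

The ray bends toward the normal in the denser medium, which is the geometric picture behind how the eye's lens focuses an image onto the retina.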
<urn:uuid:315f4ed1-32f5-4bff-b6bf-69d2dbf9db70>
3.78125
284
Knowledge Article
Science & Tech.
36.091022
95,526,688
26 February 2016 Glow-in-the-dark bacterial lights could illuminate shop windows Bacteria may light up the future. Glowee, a start-up company based in Paris, France, is developing bioluminescent lights to illuminate shop fronts and street signs. After a successful demo in December, Glowee has launched its first product – a bacteria-powered light that glows for three days. The company is now working on lights that will glow for a month or more. “Our goal is to change the way we produce and use light,” says Glowee founder Sandra Rey. “We want to offer a global solution that will reduce the 19 per cent of electricity consumption used to produce light.” The lights are made by filling small transparent cases with a gel that contains bioluminescent bacteria. Glowee uses a bacterium called Aliivibrio fischeri, which gives marine animals such as the Hawaiian bobtail squid the ability to glow with a blue-green light. The gel provides nutrients that keep the bacteria alive. At first, the lights only worked for a few seconds. But by tweaking the consistency of the gel so it delivers nutrients more efficiently, the team has been able to extend their lifespan to three days. Bioluminescent lights are not new. But Glowee is one of the first companies to develop a commercial product, which is initially being marketed to shops. In France, retailers are not allowed to light their shop windows between 1 am and 7 am to limit light pollution and energy consumption. The softly glowing bacterial lights – about as bright as night lights – provide a way to get around the ban. Glowee wants to use them for other purposes too, including decorative lighting, building exteriors and street signs – as well as providing lighting in places with no power cables, such as parks. ERDF, a largely state-owned utility company that manages 95 per cent of France’s electricity network, is among the backers of Glowee’s recent crowdfunding campaign. “Glowee is not meant to replace electric light; it offers different possibilities,” says Rey. The business case But how feasible is the idea in the long run? Edith Widder at the Ocean Research & Conservation Association in Fort Pierce, Florida, thinks that the costs of producing and maintaining large numbers of bioluminescent bacteria in suitable environmental conditions are too high for most commercial lighting needs. To get the bacteria to continue working for more than a few days requires adding extra nutrients and removing waste products, she says. “If you do the math, it doesn’t make sense, especially when you factor in how incredibly efficient LED lighting has become.” But Glowee is undeterred. Having adjusted the make-up of its gel, it is now genetically engineering the bacteria. Rey says her team is developing a molecular switch that will activate the bioluminescence only at night. This will let the bacteria save energy during the day and make the nutrients last longer. The team also plans to make the bacteria glow brighter and survive temperature fluctuations of up to 20 °C. Rey says the company will launch a commercial product in 2017 that lasts a month. Solutions exist in nature, says Rey. 
“Now that we have the tools to copy them, we can build far more sustainable processes and products.”
<urn:uuid:28b70e7a-d460-421e-981f-7a63972e1b44>
2.78125
845
Truncated
Science & Tech.
47.160662
95,526,691
Scientists develop novel system that recovers energy normally lost in industrial processes
Each year, energy that equates to billions of barrels of oil is wasted as heat lost from machines and industrial processes. Recovering this energy could reduce energy costs. Scientists from Australia and Malaysia have developed a novel system designed to maximize such recovery.
Heat can be converted to electricity by devices called thermoelectric power generators (TEGs), which are made of thermoelectric materials that generate electricity when heat passes through them. Previous studies have attempted to use TEGs to recover energy from the heat generated by, for example, car engines, woodstoves and refrigerators. However, TEGs can only convert a small fraction of the heat supplied to them, and the rest is emitted as heat from their "cold" side. No previous studies have attempted to recover energy from the waste heat that has already passed through TEGs. Researchers from Malaysia's Universiti Teknologi MARA and RMIT University in Australia set out to develop a system that can do this.
The researchers designed a novel system in which a TEG was sandwiched between two heat pipes, which are devices that can efficiently transfer heat. One pipe delivered heat to the TEG and the other collected heat emitted from its other side. The team built a small-scale version of their system to test in the lab before larger-scale versions are made for real-world applications. In this test system, the energy source was not heat wasted by machinery but an electrical heater. Using a controlled heat source in this way ensured that the researchers knew how much energy entered the system. The supplied heat was transferred to eight TEGs via heat pipes. The researchers measured the amount of electricity produced by the TEGs and the amount of energy recovered from their cold side.
When two kilowatts of power were supplied to the system, the researchers recovered approximately 1.35 kilowatts of heat: over 67% of the energy supplied. In addition, the TEGs generated 10.39 watts of electricity during the heat recovery process. Both heat pipes and TEGs are passive devices that require no energy input besides the waste energy, and the findings demonstrate that these simple devices can be used to generate electricity and make energy recovery more efficient. The work could provide the basis for future development of larger-scale energy recovery systems.
Provided by: Universiti Teknologi MARA (UiTM)
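The headline percentages follow directly from the reported measurements. A quick check (figures from the article; treating the recovered heat and the generated electricity as separately usable streams is my own simplification, since heat and electricity are not interchangeable grades of energy):

```python
# Recomputing the headline figures from the reported measurements.
supplied_w = 2000.0        # electrical heater input, W
recovered_heat_w = 1350.0  # heat collected on the TEGs' cold side, W
electrical_out_w = 10.39   # electricity generated by the eight TEGs, W

print(f"heat recovery:         {recovered_heat_w / supplied_w:.1%}")
print(f"electrical conversion: {electrical_out_w / supplied_w:.2%}")
print(f"total energy captured: {(recovered_heat_w + electrical_out_w) / supplied_w:.1%}")
```

The electrical yield is only about half a percent of the input, which is why the heat-pipe recovery of the remaining cold-side heat, not the TEG output itself, carries the efficiency case.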
<urn:uuid:1ac10147-0729-4792-a4cf-73dd4bd1dd04>
3.75
505
News Article
Science & Tech.
30.930878
95,526,705
Tuesday, May 10, 2016
Quantum Swing: a pendulum that moves forward and backwards at the same time
Fig. 1: Experimental data: (a) Two-dimensional (2D) scan of the sum of the electric fields E(τ,t) of the three driving THz pulses A, B, and C as a function of the coherence time τ and the real time t. The contour plot is colored red for positive electric fields and blue for negative fields. (b) 2D scan of the electric field E_NL(τ,t) nonlinearly emitted by the two-phonon coherence in InSb. The orange dashed line indicates the center of pulse A. (c) Electric field transient E_NL(0,t) for the coherence time τ = 0. Credit: Image courtesy of Forschungsverbund Berlin e.V. (FVB)
Two-quantum oscillations of atoms in a semiconductor crystal are excited by ultrashort terahertz pulses. The terahertz waves radiated from the moving atoms are analyzed by a novel time-resolving method and demonstrate the non-classical character of large-amplitude atomic motions. The classical pendulum of a clock swings forth and back with a well-defined elongation and velocity at any instant in time. During this motion, the total energy is constant and depends on the initial elongation, which can be chosen arbitrarily. Oscillators in the quantum world of atoms and molecules behave quite differently: their energy has discrete values corresponding to different quantum states. The location of the atom in a single quantum state of the oscillator is described by a time-independent wavefunction, meaning that there are no oscillations. Oscillations in the quantum world require a superposition of different quantum states, a so-called coherence or wavepacket. The superposition of two quantum states, a one-phonon coherence, results in an atomic motion close to the classical pendulum. Much more interesting are two-phonon coherences, a genuinely non-classical excitation for which the atom is at two different positions simultaneously. Its velocity is nonclassical, meaning that the atom moves at the same time both to the right and to the left, as shown in the movie. Such motions exist for very short times only, as the well-defined superposition of quantum states decays by so-called decoherence within a few picoseconds (1 picosecond = 10⁻¹² s). Two-phonon coherences are highly relevant in the new research area of quantum phononics, where tailored atomic motions such as squeezed and/or entangled phonons are investigated. In a recent issue of Physical Review Letters, researchers from the Max Born Institute in Berlin apply a novel method of two-dimensional terahertz (2D-THz) spectroscopy for generating and analyzing non-classical two-phonon coherences with huge spatial amplitudes. In their experiments, a sequence of three phase-locked THz pulses interacts with a 70-μm-thick crystal of the semiconductor InSb, and the electric field radiated by the moving atoms serves as a probe for mapping the phonons in real time. Two-dimensional scans in which the time delay between the three THz pulses is varied display strong two-phonon signals and reveal their temporal signature (Fig. 1).
A detailed theoretical analysis shows that multiple nonlinear interactions of all three THz pulses with the InSb crystal generate strong two-phonon excitations. This novel experimental scheme makes it possible for the first time to kick off and detect large-amplitude two-quantum coherences of lattice vibrations in a crystal. All experimental observations are in excellent agreement with theoretical calculations. This new type of 2D THz spectroscopy paves the way towards generating, analyzing, and manipulating other low-energy excitations in solids, such as magnons and transitions between ground and excited states of excitons and impurities, with multiple-pulse sequences.
Are there any plants which do not photosynthesize at all? –Swaagahs Saikih, Singapore Most plants are autotrophic, meaning they produce their own food from basic inorganic substances around them – water, atmospheric carbon dioxide, and mineral nutrients from the soil. Animals, fungi, and most other organisms on Earth are heterotrophic – we consume organic carbon, unable to process the raw material ourselves. Plants form the basis for worldwide food webs, turning inorganic carbon into sugars that are consumed by almost everything else. Typically, plants "fix" carbon from the air as CO2, mix it with water, and convert it to glucose through photosynthesis. Powered by energy from sunlight, this reaction takes place in specialized structures called chloroplasts — organelles contained in the cells of plants. Each chloroplast acts like a small solar factory, bringing in raw materials and putting out usable carbon and waste products. Although the raw materials may seem easy to find, building and maintaining these chloroplasts, and growing specialized structures to capture sunlight (leaves), can be an expensive process. Because photosynthesis can be costly, plants around the world have developed an alternative to collecting and converting their own carbon – they steal it from other organisms. This strategy is known as parasitism, and it has independently evolved at least 12 times during plants' reign on the Earth. Parasitic plants have developed their own specialized structures — penetrating projections in their roots called haustoria that tap into hosts and steal carbon, water, and micronutrients. This process has allowed them to supplement or forgo photosynthesis, and occasionally to lose the leaves no longer needed to house their carbon processing facilities. The degree to which parasitic plants can fix their own carbon varies. Some have the ability to survive without a host and are called "hemiparasites." These plants retain their photosynthetic abilities and supplement their own carbon production with that from a host when available. Those that have delved so fully into parasitism that they have become wholly dependent on their hosts for survival are known as "holoparasites." However, not all holoparasites are entirely non-photosynthetic. In the San Francisco Bay Area, dodder (genus Cuscuta) is a local representative of the parasitic plant lifestyle. This unusual plant appears as yellow coils wound around its host like a sprayed can of silly string. Its leaves are reduced to diminutive scales on the stem, and its root system shrivels and disconnects once the plant is attached to a host. It is an obligate holoparasite, unable to survive and reproduce on its own. Yet some species of dodder have retained the ability to photosynthesize, as evidenced by sparsely distributed chloroplasts in their stem cells. Maintaining the power to acquire its own carbon may give this plant greater flexibility, allowing it to survive when times are tough. As research tools have changed, so has our understanding of the ecology of parasitic plants. In years past, we used powerful microscopes to look for chloroplasts. Now we can use genetic analysis tools to understand whether plants have the ability to produce photosynthetic structures. To find truly non-photosynthetic plants, we can look for clues deep within the chloroplast DNA. As plants evolve further into holoparasitism, selective pressure on the chloroplasts decreases.
This allows mutations to build up, and as a result, parasitic plant chloroplasts have rapidly evolving genes that are very different from those of their photosynthetic relatives. New genetic information from holoparasitic plants around the world shows a common trait – a dramatically reduced code for photosynthesis. Thismia tentaculata, a recently discovered parasitic plant from the lowland forests of coastal Vietnam and Hong Kong, is a prime example of a shortened genetic code. Named after the tentacle-like petals that burst from its tiny flower, the T. tentaculata chloroplast genome has been reduced by an order of magnitude relative to its non-parasitic relative in the genus Tacca. This reduction includes a complete loss of the photosynthetic genes, meaning this plant has committed to acquiring all of its carbon from its host. An even more dramatic example is Rafflesia lagascae, a holoparasite native to the Philippine Islands. Plants in the genus Rafflesia are well known for both their parasitic nature and their knack for producing the largest flowers in the world – up to a meter in diameter. In a 2014 study, scientists were unable to locate either chloroplasts or the genetic codes to create them in R. lagascae. If these results are replicated, they would indicate Rafflesia may be the first known plants to have completely lost their ability both to photosynthesize and to produce any of the microscopic structures related to photosynthesis. So yes, we now know there are plants that are completely non-photosynthetic, and many more are on a similar evolutionary path. With between 4,000 and 4,500 parasitic plant species in the world, we have many remaining places to look. Trent Pearce is an interpretive naturalist with the California Center for Natural History and the East Bay Regional Park District. Currently residing in Berkeley, California, he enjoys documenting the Bay Area's flora, fauna, and fungi on www.iNaturalist.org, and regularly explores the trails of the East Bay, camera in hand. Ask the Naturalist is a reader-funded bimonthly column with the California Center for Natural History that answers your questions about the natural world of the San Francisco Bay Area. Have a question for the naturalist? Fill out our question form or email us at atn at baynature.org!
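As an aside to the genomic approach described above, here is a minimal sketch of how one might scan an annotated chloroplast genome for core photosynthesis genes using Biopython. The filename and the gene-prefix convention are assumptions for illustration, not part of the column or the cited studies:

```python
# Hypothetical sketch: scan a chloroplast genome (GenBank format) for genes
# from the usual photosynthesis families: photosystems (psa/psb), cytochrome
# b6f (pet), ATP synthase (atp) and Rubisco (rbc). "chloroplast.gb" is a
# placeholder file name.
from Bio import SeqIO

PHOTOSYNTHESIS_PREFIXES = ("psa", "psb", "pet", "atp", "rbc")

record = SeqIO.read("chloroplast.gb", "genbank")
genes = {
    q.lower()
    for feat in record.features if feat.type == "gene"
    for q in feat.qualifiers.get("gene", [])
}

photo_genes = sorted(g for g in genes if g.startswith(PHOTOSYNTHESIS_PREFIXES))
print(f"{len(genes)} annotated genes, {len(photo_genes)} photosynthesis-related")
print(photo_genes or "No photosynthesis genes found -- a holoparasite candidate?")
```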
Found most commonly in these habitats: 21 times found in cloud forest, 10 times found in hardwood forest, 8 times found in montane wet forest, 7 times found in tropical rainforest, 7 times found in lowland rainforest, 4 times found in oak forest litter, 3 times found in mesophil forest, 3 times found in oak-pine forest litter, 3 times found in mixed oak forest litter, 1 times found in ridgetop montane forest, ...
Found most commonly in these microhabitats: 81 times ex sifted leaf litter, 19 times Malaise trap, 2 times Sura, 2 times SCH, 2 times Claro, 1 times litter, 1 times suelo, 1 times stray forager, 1 times STR, 1 times sifted leaf litter, 1 times ex sifted litter, ...
Collected most commonly using these methods: 44 times MaxiWinkler, 28 times Malaise, 21 times Berlese/Winkler, 12 times Winkler, 6 times Malaise trap, 5 times flight intercept trap, 5 times Berlese, 3 times MiniWinkler, 1 times Night MiniWinkler, 1 times bait, 1 times search, ...
Elevations: collected from 20 - 2350 meters, 1265 meters average
AntWeb content is licensed under a Creative Commons Attribution License. We encourage use of AntWeb images. In print, each image must include attribution to its photographer and "from www.AntWeb.org" in the figure caption. For websites, images must be clearly identified as coming from www.AntWeb.org, with a backward link to the respective source page. See How to Cite AntWeb. AntWeb is funded from private donations and from grants from the National Science Foundation, DEB-0344731, EF-0431330 and DEB-0842395.
Found most commonly in these habitats: 4 times found in Rainforest, 1 times found in 2ndgrowth veg., 1 times found in 2ndgrowth veg./houses, 1 times found in dry Dipterocarp Forest, 1 times found in Rainforest edge, 1 times found in tropical rainforest, 0 times found in primary forest.
Found most commonly in these microhabitats: 2 times ground nest, 1 times foragers, 1 times under rotten log, 1 times random ground foragers, 1 times nocturnal collecting.
Collected most commonly using these methods: 3 times search, 2 times Malaise trap, 1 times 100 sweeps, 1 times beating, 1 times Malaise traps, 1 times sweeping 'hill, 0 times flight intercept trap, 0 times sweeping.
Elevations: collected from 50 - 2650 meters, 397 meters average
AntWeb content is licensed under a Creative Commons Attribution License. We encourage use of AntWeb images. In print, each image must include attribution to its photographer and "from www.AntWeb.org" in the figure caption. For websites, images must be clearly identified as coming from www.AntWeb.org, with a backward link to the respective source page. See How to Cite AntWeb. AntWeb is funded from private donations and from grants from the National Science Foundation, DEB-0344731, EF-0431330 and DEB-0842395.
The ice cools the can and the air around it. As the air cools, the water vapor it contains condenses, changing from a gas into a liquid. This is one step of cloud formation. Dew eventually forms as the water vapor condenses on the side of the can. In the atmosphere, water vapor condenses onto tiny particles of dust, dirt, pollen and pollutants that we call condensation nuclei. Once this happens, the tiny water droplets collide with each other, eventually becoming big enough for us to see in the form that we know as clouds. When enough water vapor (moisture) is present on a cool night, some of the water vapor will condense onto grass, cars and other surfaces. The water vapor changes from a gas to a liquid, forming dew droplets. When salt is added to our experiment, it lowers the temperature of the ice below freezing. Instead of dew forming on the can, frost forms. On a freezing night when enough water vapor is present, it can change into the solid state and form frost. Frost can cause significant damage to crops.
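The temperature at which dew begins to form can be estimated with the Magnus approximation. The sketch below is an illustrative addition, not part of the experiment; the constants a and b are one commonly used choice, and other references quote slightly different values:

```python
# Magnus approximation for the dew point. Surfaces cooler than this
# temperature collect dew; below 0 deg C, frost forms instead.
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point in deg C from air temperature and RH (%)."""
    a, b = 17.62, 243.12                 # Magnus constants (one common choice)
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Example: a humid evening at 12 deg C and 85% relative humidity.
print(f"Dew point: {dew_point_c(12.0, 85.0):.1f} deg C")   # about 9.5 deg C
```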
Introduction to Domain Driven Design DDD is an architectural approach to software design. The core idea is that the design of an application should be based on a domain model, not on technology or anything else. This makes the design clear both to developers and to domain experts. Domain Driven Design is built on five kinds of objects:
- Entities: These are the objects that represent the core concepts of an application. They express the business and are understood by customers. More importantly, every Entity has an identity that is unique within the system.
- Value Objects: These objects have no identity and describe characteristics.
- Services: Services contain logic that doesn't belong to a particular Entity.
- Factories: This is simply a well-known pattern that is responsible for creating Entities.
- Repositories: Repositories are used to separate Entities from their representation in a database. They contain the code responsible for interacting with a database.
DDD Entities vs. EF EntityObjects When someone thinks about building a new application using DDD and EF, the first idea that comes to mind is to use EF EntityObjects as DDD Entities. EntityObjects are great and do have identities, as required for Entities. However, they cannot be separated from the Entity Framework. According to the DDD approach, domain objects should represent only business logic; they shouldn't be responsible for storing or displaying themselves or anything else. This concept is also known as Persistence Ignorance (PI), meaning that the domain objects are ignorant of how data is saved to or retrieved from a data source. Unfortunately, EF is not a PI ORM, and there is no way to make EntityObjects independent of the framework. Their code is auto-generated based on a database schema and a mapping file. In theory, it is possible to create "true" entities and use the Entity Framework only as a data persistence layer. However, that is quite impractical, because in that case it would be necessary to re-implement all the persistence patterns EF already provides (Unit of Work, Identity Map, and so on). Thus, in DDD terms, the Entity Framework acts as the Repositories and its EntityObjects act as the Entities. It is, of course, a workable solution, but many prefer to have domain objects that know nothing about the underlying data source. Persistence Ignorance is something people have argued about for years. Some believe it's essential; some believe it's nice but not always necessary. These are the constraints that a persistence-ignorant framework must not impose on the developer's domain objects:
- Require inheriting from a specified base class other than Object
- Require the use of a factory method to instantiate them
- Require the use of specific data types for properties, including collections
- Require implementing specific interfaces
- Require the use of framework-specific constructors
- Require the use of specific fields or properties
- Prohibit specific application constructs
- Require writing RDBMS code, such as queries or stored procedure calls, in a provider-specific dialect
While it's nice to have the option to swap databases, not everyone needs it. A more practical goal of PI is the ability to mock database objects more easily, which simplifies unit testing.
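To make the building blocks concrete, here is a small, persistence-ignorant sketch. The article's context is .NET and Entity Framework, but the patterns are language-agnostic, so this illustration uses Python; all class and method names are invented for the example:

```python
# Illustrative DDD building blocks: an Entity with identity, a Value Object,
# and a Repository contract with an in-memory fake that simplifies unit tests.
from dataclasses import dataclass, field
from typing import Dict, Optional
import uuid

@dataclass(frozen=True)
class Money:                     # Value Object: no identity, compared by value
    amount: float
    currency: str

@dataclass
class Order:                     # Entity: has a unique identity
    id: str = field(default_factory=lambda: str(uuid.uuid4()))
    total: Money = Money(0.0, "USD")

    def add(self, price: Money) -> None:   # pure business logic, no DB code
        assert price.currency == self.total.currency
        self.total = Money(self.total.amount + price.amount, price.currency)

class OrderRepository:           # Repository contract: isolates persistence
    def get(self, order_id: str) -> Optional[Order]: raise NotImplementedError
    def save(self, order: Order) -> None: raise NotImplementedError

class InMemoryOrderRepository(OrderRepository):   # fake for unit testing
    def __init__(self) -> None:
        self._store: Dict[str, Order] = {}
    def get(self, order_id): return self._store.get(order_id)
    def save(self, order): self._store[order.id] = order

repo = InMemoryOrderRepository()
order = Order()
order.add(Money(9.99, "USD"))
repo.save(order)
assert repo.get(order.id).total == Money(9.99, "USD")
```

Note that the Order entity satisfies every PI constraint in the list above: no framework base class, no special interfaces or constructors, and no RDBMS code. A production repository backed by a real database would implement the same contract, so tests can swap in the in-memory fake.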
POCO stands for Plain Old CLR Object and IPOCO for Interface POCO. POCOs are objects that don't contain any complicated, framework-specific code. For example, EntityObjects in the Entity Framework aren't POCOs, because they contain a great deal of ORM-specific code. IPOCO isn't an actual interface; it's a pattern that requires implementing interfaces, rather than deriving from special base classes, in order to use an ORM:
- IEntityWithChangeTracker for change tracking
- IEntityWithKey for exposing the entity identity key (optional, but not implementing this interface causes a significant reduction in performance)
- IEntityWithRelationships, required for entities with associations
There are always other options. You can achieve a greater level of abstraction for domain objects by using IPOCO, by waiting for EF v2, or by using another ORM (NHibernate is said to be fully PI).
- Deependra is a senior developer working with Microsoft technologies, currently with Opteamix India. In his free time, he writes blogs and makes technical YouTube videos. He has a good understanding of service-oriented architecture and of designing microservices using domain-driven design.
Determination of the Actual Amount of Insolation Absorbed by a Photovoltaic Panel (125W)
Science Journal of Energy Engineering, Volume 6, Issue 1, March 2018, Pages: 27-30. Received: Mar. 2, 2018; Accepted: Mar. 20, 2018; Published: Apr. 14, 2018.
Rex Kemkom Chima Amadi, Department of Mechanical Engineering, Rivers State University, Port Harcourt, Nigeria. Anthony Kpegele Leol, Department of Mechanical Engineering, Rivers State University, Port Harcourt, Nigeria.
The ability of a photovoltaic cell to absorb radiation was determined, taking into account the fact that photovoltaic panels can harness the solar energy incident on them in the form of radiation. The study used the hourly open-circuit voltage produced by a 125W photovoltaic module made of silicon. The solar panel was exposed to the open atmosphere from sunrise to sunset, precisely from 5.00am to 7.00pm, with no obstruction from high-rise buildings, tall trees or towers. The photovoltaic panel was brought to ambient temperature, and readings were taken under clear-sky weather conditions within ±10 minutes of each hour mark. The voltage output was recorded with a digital multimeter, and a power model was used to determine the actual amount of radiation available. Plots of voltage versus hour and power versus hour showed the same pattern as the results of Adnot, Haurwitz and Alboteanu et al. (a rough sketch of the Haurwitz clear-sky model follows the references below). The hourly variation of voltage rose with the hour to a maximum of 20.65 volts at midday and then decreased to 0 volts at twilight (7.00pm) on the 11th of November, 2017. The same rise and fall were also displayed by the hourly variation of absorbed power. Thus, photovoltaic panel output varies with the sun hours just as solar radiation varies with the sun hours, when results are obtained under the stated conditions. Hence, the results stand validated.
Keywords: Photovoltaic Plate, Solar Radiation, Sun Hour, Clear Sky Weather, Voltage Output and Power.
To cite this article: Rex Kemkom Chima Amadi, Anthony Kpegele Leol, Determination of the Actual Amount of Insolation Absorbed by a Photovoltaic Panel (125W), Science Journal of Energy Engineering. Vol. 6, No. 1, 2018, pp. 27-30. Copyright © 2018. Authors retain the copyright of this article. This article is an open-access article distributed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
References:
C. G. Popovici, S. V. Hudisteanu, T. D. Mateescu, & N. C. Cheruchees (2015). Efficiency improvement of photovoltaic panels by using air-cooled heat sinks.
I. L. Alboteanu, C. A. Bulucea, & S. Degaralu (2015). Estimating solar irradiation absorbed by photovoltaic panels with low concentration located in Craiova, Romania. Sustainability, 7 (2071-1050), 2644-2661. doi:10.3390/su7032644. www.mdi.com/journal/sustainability.
D. O. Akpootu & Y. A. Sanusi (2015). A new temperature based model for estimating global solar radiation in Port Harcourt, south-south Nigeria. IJES, 4(1), 63-73. Retrieved from www.thewes.com/papers/v4-1/version-1/kd411063073.pdf.
Z. Guo (2017). Daily variation law of solar radiation flux density incident on the horizontal surface. Academy Science P. R. China. Journal of Earth Science and Climatic Change, 8(4), 12. DOI:10.4.72/2157-76.7, 10004.2.
O. S. Ohunakin, M. S. Adaramola, O. M. Oyewola & R. O. Fagbenle (2013). Correlations for estimating solar radiation using sunshine hours and temperature measurement in Osogbo, Osun State, Nigeria. Front. Energy, 7(2), 214-222. DOI:10.1007/s11708-013-0241-2.
H. R. H. Liang, J. M. Zhang, J. A. Liu, Z. A. Sun, & X. H. Cheng (2012). Estimation of hourly solar radiation at the surface under cloudless conditions on the Tibetan Plateau using a simple radiation model. Adv. Atmos. Sciences, 29(4), 675-689. DOI: 10.1007/s00376-012-1157-1.
S. S. Chandel & R. K. Aggarwal (2011). Estimation of hourly solar radiation on horizontal and inclined surfaces in Western Himalayas. Smart Grid and Renewable Energy Journal, 2011(2), 45-55. doi:10.4236/sgre.2011.21006.
V. Ambas & E. Baltas (2014). Spectral analysis of hourly solar radiation. Environmental Processes, 1(3), 21-263. www.link.springer.com/article/10.1007/s40710-014-0023-9ss.
R. K. C. Amadi, "The Regenerator as a Photovoltaic Recharger," unpublished.
T. Khatib & W. Elmenreich (2015). A model of hourly solar radiation data generation from daily solar radiation data using a generalized regression artificial neural network. International Journal of Photoenergy, 2015(4), 1-13. http://downloads.hindawi.com/journals/ijp/2015/968024.pd
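As flagged in the abstract, here is a rough sketch of the Haurwitz clear-sky model that the measured voltage curve is compared against. This is not the authors' code: the coefficients (1098, 0.059) are the commonly quoted ones, and the site latitude (about 4.8°N for Port Harcourt) and the solar declination for 11 November (about -17.5°) are assumptions:

```python
# Hourly clear-sky global horizontal irradiance via the Haurwitz model.
# The output rises to a midday maximum and falls to zero after sunset,
# matching the rise-and-fall pattern described in the paper.
import math

LAT = math.radians(4.8)        # Port Harcourt latitude (assumed)
DECL = math.radians(-17.5)     # solar declination, 11 November (assumed)

def clear_sky_ghi(hour: float) -> float:
    """Haurwitz GHI (W/m^2) at local solar time `hour` (0-24)."""
    h = math.radians(15.0 * (hour - 12.0))            # hour angle
    cos_z = (math.sin(LAT) * math.sin(DECL)
             + math.cos(LAT) * math.cos(DECL) * math.cos(h))
    if cos_z <= 0.0:
        return 0.0                                     # sun below the horizon
    return 1098.0 * cos_z * math.exp(-0.059 / cos_z)

for hour in range(5, 20):
    print(f"{hour:02d}:00  {clear_sky_ghi(hour):7.1f} W/m^2")
```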
Seismic tomography indicates that flow is commonly deflected in the mid-mantle. However, without a candidate mineral phase change, causative mechanisms remain controversial. Deflection of flow has been linked to radial changes in viscosity and/or composition, but a lack of global observations precludes comprehensive tests by seismically detectable features. Here we perform a systematic global-scale interrogation of mid-mantle seismic reflectors with lateral size 500–2000 km and depths 800–1300 km. Reflectors are detected globally with variable depth, lateral extent and seismic polarity and identify three distinct seismic domains in the mid-mantle. Near-absence of reflectors in seismically fast regions may relate to dominantly subvertical heterogeneous slab material or small impedance contrasts. Seismically slow thermochemical piles beneath the Pacific generate numerous reflections. Large reflectors at multiple depths within neutral regions possibly signify a compositional or textural transition, potentially linked to long-term slab stagnation. This variety of reflector properties indicates widespread compositional heterogeneity at mid-mantle depths. The Earth's mantle undergoes significant mineralogical and physical changes as temperature and pressure increase with depth. Characterising these changes in the upper 400–800 km has advanced our understanding of heat and material fluxes through the mantle. In particular, variations in the depth of seismic discontinuities, which reflect and convert seismic waves1, 2, have been used to map solid-to-solid mineralogical phase changes and thus regional variations in mantle temperature and/or composition. A classic example is the pressure–temperature sensitivity of the depth of major discontinuities that bound the Mantle Transition Zone (MTZ) at 410 and 660 km. These boundaries demarcate transitions of olivine to wadsleyite and ringwoodite to bridgmanite+ferropericlase3. In contrast, there are no known phase changes in mantle minerals that readily explain regional discontinuities (here termed seismic 'reflectors') at mid-mantle depths (from 800 to 1300 km)4,5,6,7,8,9,10,11,12. Thus the origin and geodynamic implications of these mid-mantle reflectors remain elusive. Recent work posits that the mid-mantle may represent a significant transition in Earth's rheology and/or composition13,14,15. Tomographic studies have found that only a few recently subducted slabs sink unimpeded through the MTZ16, 17; many slabs flatten and appear to stagnate at either ~660 km or ~1000 km depth18. Upwelling mantle plumes also commonly show deflection at similar mid-mantle depths19, 20. However, observations of Tethys and Farallon lithosphere in the lower mantle21 reveal that flow crosses these depths, at least regionally. While deflections of mantle flow near 660 km depth can be related to the effects of a major phase transition22, 23, those in the mid-mantle have instead been ascribed to a range of alternative mechanisms. These include the presence of a viscosity jump14, 24, radial change(s) in mantle composition13 and mineral phase changes for particular material compositions, such as transitions within subducted slabs25, 26 and/or impedance contrasts arising from the different composition of the subducted material itself27, 28. Testing various processes for the origin of the reflectors (compositional vs. rheological) requires detailed evaluation of seismic reflections on a global scale.
Previous work shows that mid-mantle reflectors display immense variation in seismic properties and depths, and no global mantle discontinuity has been detected beneath the 660. Abundant regional mid-mantle reflectors and scatterers occur from 700 to nearly 2000 km depth in the mantle4,5,6,7,8,9,10,11, 13, 29. Reflectors are observed beneath areas of active subduction including Indonesia and the Marianas6,7,8,9,10,11,12. Numerous small-scale (~10s of kilometres) features are detected around the Pacific Ocean, which are interpreted as subducted oceanic material27,28,29,30,31,32,33. Studies also find evidence for reflectors in regions of upwelling, such as the Hawaiian and Icelandic hotspots19, 34, 35. Further isolated observations are situated well away from subduction zones and hotspots12, 36,37,38. There are also several locations where mid-mantle reflectors have not been found despite detailed examination, such as the Tonga subduction zone2, 39, and vast regions remain to be mapped at mid-mantle depths13. This is in part due to a lack of studies of the mid-mantle on a global scale. Indeed, a comprehensive worldwide investigation is required to further our understanding of the mid-mantle. Here we perform a systematic global-scale seismic interrogation of mid-mantle reflectors. We search for reflectors in the 800 to 1300 km depth range, using precursors to the seismic phase SS. This shear wave travels along two limbs through the mantle and reflects once from the Earth's surface at its midpoint; SS precursors are generated by underside reflections off any reflectors beneath the surface bounce point. The arrival time of a precursor relative to SS is thus sensitive to the depth of the reflector. A large global dataset of SS-precursor arrivals is partitioned into regional bins and stacked into vespagrams, employing common mid-point stacking (see Methods section for more information; a schematic sketch of the stacking step follows this paragraph). We demonstrate using synthetic modelling that our dataset is sensitive to near-horizontal reflectors with length scales on the order of 500 to 1500 km and show that the reflectors are too small to be resolved by global tomography techniques. We measure the location, geographic size, depth, and impedance contrast of the reflectors in the mid-mantle, finding large variability. We investigate a range of different geographical bin sizes to constrain the variation in these properties across multiple length scales and perform more detailed analysis in regions of higher data sampling density. Reflector properties are evaluated in the context of average seismic velocity from global tomography models40. Such an evaluation puts our observations into the framework of global mantle flow patterns41, performed to improve our understanding of variations in mantle temperature and composition (e.g., refs. 16, 17, 20). Mapping these reflectors is key to constraining the heterogeneity that may exist at mid-mantle depths, with implications for the history of mantle mixing.
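The core stacking operation can be sketched as follows. This is an illustrative reconstruction, not the study's code: the trace counts, arrival times, and noise levels are invented placeholders, and edge wrap-around from np.roll is ignored for brevity:

```python
# Schematic common-midpoint slant stack: SS-aligned traces from one bounce-point
# bin are shifted for a range of trial slownesses and averaged, so that weak but
# coherent precursors emerge above incoherent noise.
import numpy as np

def vespagram(traces, distances_deg, dt, slownesses, ref_dist_deg):
    """Slant-stack SS-aligned traces: rows = slowness, cols = time sample."""
    n_tr, n_samp = traces.shape
    vesp = np.zeros((len(slownesses), n_samp))
    for i, p in enumerate(slownesses):               # s/deg relative to SS
        for tr, dist in zip(traces, distances_deg):
            shift = int(round(p * (dist - ref_dist_deg) / dt))
            vesp[i] += np.roll(tr, -shift)           # align for slowness p
        vesp[i] /= n_tr                              # linear stack
    return vesp

# Toy demo: 20 noisy traces with SS and a weak precursor 160 s before SS.
dt, n = 0.5, 1200
rng = np.random.default_rng(0)
dists = rng.uniform(110, 150, 20)
traces = 0.05 * rng.standard_normal((20, n))
for tr in traces:
    tr[int(500.0 / dt)] += 1.0                       # SS arrival
    tr[int(340.0 / dt)] += 0.02                      # weak SdS precursor
v = vespagram(traces, dists, dt, np.linspace(-2, 2, 41), dists.mean())
print("max stack amplitude at (slowness, time) index:",
      np.unravel_index(v.argmax(), v.shape))
```

Here both toy arrivals stack coherently at zero relative slowness; in real data the distinct time-slowness signature along the theoretical precursor curve is what identifies an SdS arrival.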
A systematic search reveals widespread regional reflectors in the mid-mantle (Figs. 1, 2a–c and 3a, Supplementary Figs. 1–8). The wide geographic variation in the depth and lateral extent of these reflectors indicates that a coherent global discontinuity at any individual mid-mantle depth can be excluded; correspondingly, a global stack shows no features here (Supplementary Fig. 1). This is consistent with global seismic velocity models42, 43. Reflectors occur across the entire depth range explored, corroborating previous regional studies2, 4,5,6,7,8, 11. Reflections from 875 km depth are most abundant (Fig. 4a), with less pronounced peaks in the range of 1000–1300 km depth. The geographically most extensive reflectors are located beneath the Pacific Ocean and (offshore) eastern South America. The scale lengths of reflectors vary laterally over 500–2000 km, and some bin locations have multiple reflectors at two or more depths (Fig. 2c). Reflectors of small regional extent (<500 km) are located beneath the North Pacific, western South America, and Eastern Europe, in agreement with prior studies (e.g., refs. 9,10,11,12, 32). Most precursors have the same polarity as the amplitude of SS, implying either a velocity or density increase with depth (i.e., a positive shear impedance increase with depth), but a subset (26%) of the observed reflectors display opposite polarity (Figs. 3b and 4b). Polarities of the reflectors are consistent within geographic regions but do not vary systematically with depth (Supplementary Fig. 8). The corresponding shear-wave velocity contrasts (assuming no density contrast) range from ±0.7 to ±3.2% (Fig. 4b); contrasts below 0.7% are too weak to be detected by our stacking methodology. Likewise, density contrasts (assuming zero contrasts in intrinsic shear modulus) are calculated as approximately ±1.4 to ±7.3%. Actual shear impedance contrasts will be intermediate combinations across the two properties and also depend upon the geometry of the reflector within the bin; the observed seismic properties represent an average across the length scale of the bin. Several areas are characterised by the absence of reflections in our stacks across the full depth range explored, termed here 'non-detections' (Fig. 2d). Indeed, 24% of the bins in Fig. 3a do not display reflectors (Table 1). Notable coherent geographical regions without reflectors exist beneath the North Pacific (Aleutian Trench), central Europe, and the Brazilian coast (Peru-Chile Trench). The number of non-detections varies regionally. Some areas display a higher proportion of bins with coherent reflectors (e.g., beneath the Pacific Ocean), whereas other locations have a higher percentage of bins with non-detections (e.g., beneath Europe and the Brazilian coast) (Figs. 5 and 6). The presence and quantity of non-detections also vary across length scales, evidenced by variation between bin sizes (Supplementary Figs. 9, 10). Larger bins typically display a higher proportion of non-detections (Supplementary Table 1). A lack of reflectors may result from multiple factors, not solely limited to the absence of sub-horizontal mid-mantle heterogeneity. For example, small impedance contrasts that do not generate energy above the noise level, gradual radial transitions (>65 km) including gradual thermal gradients that do not produce reflectors at SS frequencies, or complex three-dimensional (3-D) structure that does not stack coherently within the bin would not generate reflectors44,45,46. Owing to the mid-point stacking technique, any reflectors that are not oriented sub-horizontally, such as dipping structures, will also not stack coherently. We examine these averaging effects across bin sizes, by comparing the small and large bins to confirm variation in reflector coherency across lateral length scales (Supplementary Figs. 4, 5). The averaging effect is exemplified beneath the mid-Pacific Ocean, where the depths of reflectors vary significantly for the 5° bins (Supplementary Fig. 4a). Conversely, the larger 15° bins predominantly display fewer reflectors (Supplementary Fig. 4d), a consequence of averaging over small length-scale variations.
We also find bins with non-observations that are situated directly adjacent to bins with robust detections, despite using overlapping bin geometries. For example, bins with no reflectors are located within regions of significant variability beneath the South Pacific. This observation suggests highly complex structure that is not fully resolved by SS precursors and could be constrained by alternative, higher-resolution techniques (e.g., refs. 27, 33, 35).
Observability of reflectors
As mentioned above, any S-wave reflections retrieved by our method require sub-horizontal reflectors of a particular impedance contrast and lateral extent for a given bin size. We observe precursor/SS amplitude ratios in the range of ±0.03, and the smallest SdS/SS amplitude ratios that we detect are 0.0065. This marks the approximate limit of detectability of the precursors; precursor signals that are smaller than this amplitude will not be visible above the noise level. Below, using synthetic modelling, we quantify the sensitivity of the SS precursors to the sizes and strengths of reflectors for multiple bin sizes. This allows us to establish a framework for the interpretation of our observations across different length scales and place constraints on the limitations of the method. We use the 2.5-D spectral-element code AxiSEM47 to generate synthetic seismograms and stacks and obtain estimates of the observability of reflectors as a function of their strength and size relative to the bin. We determine candidate seismic impedance contrasts to which our observations correspond and explore the influence of the lateral size of the reflector as a function of bin size. We present these as contour plots (Fig. 7), which reveal the detection limits as a function of bin size (yellow-to-red colours in Fig. 7). We test the same size and strength of reflectors for bins of radii 25°, 15° and 10°, in order to also explore the dependence of observability of a given reflector on absolute bin size. The modelling reveals that, generally, the SS precursors are sensitive to horizontal structures coherent on length scales similar to the bin sizes (500–1500 km), with detectable reflectors resulting from sharp and large transitions in shear impedance (<5 km gradients, shear impedance <5%). As expected, reflectors that comprise a larger proportion of the bin area are detectable for much lower velocity contrasts than smaller reflectors. Larger reflectors will generate coherent signals in stacks, whereas smaller reflectors will be somewhat suppressed by bounce points from portions of the cap with no signal, reducing observability of the SS precursors. The influence of reflector size with respect to the bin is clearest in Fig. 7a, where the relative size of the reflector proportional to the bin increases from 10 to 50%. Putting this into the context of our observations, the smallest observed SdS/SS amplitude ratio of 0.0065 corresponds to a minimum impedance contrast of about 0.8%. Thus the synthetic calculations show that a reflector at this limit of observability will be observed in a bin for which it comprises at least 50% of its size. In other words, the weakest reflector we detect has to be on the order of 500 km in size. As bin size decreases, the observability of reflectors is skewed significantly towards detecting smaller reflectors. For example, for bin sizes of radius 10°, almost all theoretical reflectors may be observed in high-quality stacks.
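The relationship between the amplitude ratios and the contrasts quoted above can be sketched with the standard normal-incidence SH reflection coefficient. This simplified sketch is not the study's calculation, which additionally corrects for geometric spreading and attenuation (hence the smaller 0.8% value quoted above):

```python
# At normal incidence, R = (Z2 - Z1) / (Z2 + Z1) with shear impedance
# Z = rho * beta. Inverting gives the fractional impedance jump dZ/Z.
def impedance_contrast(amp_ratio: float) -> float:
    """Fractional impedance jump dZ/Z implied by a reflection coefficient."""
    r = amp_ratio
    return 2.0 * r / (1.0 - r)

r_min = 0.0065                         # smallest detectable SdS/SS ratio
dz = impedance_contrast(r_min)
print(f"uncorrected impedance contrast: {100*dz:.2f}%")

# Since beta = sqrt(mu/rho), we have Z = rho*beta = sqrt(mu*rho), so:
#   fixed density:       dZ/Z = dbeta/beta  (velocity contrast = impedance contrast)
#   fixed shear modulus: dZ/Z = drho/(2*rho) (density contrast is twice as large)
print(f"equivalent velocity contrast: {100*dz:.2f}%")
print(f"equivalent density contrast:  {200*dz:.2f}%")
```

The factor of two in the last line is why the quoted density contrasts (±1.4 to ±7.3%) are roughly double the quoted velocity contrasts (±0.7 to ±3.2%).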
This skew is corroborated by our observations for various bin sizes, whereby the proportion of observations generally increases with decreasing bin size (Table 1). Thus the reflectors that are only resolvable in the smaller bin sizes must vary on short length scales and hence are suppressed within larger bins. This confirms that the method is primarily suited to detecting features on the length scales of the bins. The modelling thus allows us to estimate the geographical size as well as lateral variation in topography of the reflectors in our subsequent analysis, based on any consistent variation across bin sizes (or lack thereof). Our calculations highlight the trade-off between reflector size and impedance contrasts for a bin. The measurements represent an average value across the size of the bin, and it is not possible to distinguish between large but weak reflectors versus small but strong reflectors within an individual bin. Consequently, in terms of amplitude ratios, we consider only the polarity rather than absolute measurements. However, the lateral variability of amplitude ratios may be used to infer lateral variation in strength as well as the presence of a reflector (e.g., in the case of a laterally intermittent discontinuity). In future, more computationally intensive modelling work, as well as more data with different length scales of resolution, is required to investigate the complex features that exist in the mid-mantle. Our synthetic tests indicate that we should expect averaging across any structures present in the bin. Very likely, such structures include multiple reflectors at different depths in one bin, reflectors with laterally varying or potentially anisotropic impedance contrast, as well as tilted reflectors.
Relationship to 3-D tomography
We explore the relationship of reflectors to radial seismic velocity gradients, and the influence of 3-D velocity structure in the mantle, to investigate various potential structural and geodynamical origins for the reflectors. Investigations are performed for two recent shear-wave mantle tomography models, S20RTS48 and SEMUCB-WM149. For each model, we calculated the average 3-D radial velocity gradient for a bin within ±25 km of the estimated depth of the reflector. The SS precursor data are sensitive to velocity gradients that occur across this radial length scale or less. We identified no robust correlation between reflectors and velocity gradients (Fig. 8), indicating that the mantle structures that cause the reflections are not resolved by tomography. Notably, all calculated shear-wave velocity gradients are positive, yet a significant proportion of the reflectors have negative impedance contrasts. For both of these reasons, the reflectors must therefore result from structures with shorter length scales than those in the tomographic models. Lateral velocity anomalies as resolved by mantle tomography may also affect the localisation of reflectors by SS precursors. Our initial dataset was not corrected for 3-D velocity structure, and we test the influence of 3-D heterogeneity by performing corrections for individual ray paths, calculating delay times of S1000S with respect to SS. We find that there are limited travel time differences between the vespagrams with uncorrected data and vespagrams corrected for each model (Supplementary Figs. 11, 12).
For all of the 10° bins, S20RTS alters the times by an average of 0.6 ± 3.3 s, whereas the average change for SEMUCB-WM1 is 2.3 ± 2.8 s (using the standard deviation of all corrected measurements as the error). This corresponds to depth corrections and errors of approximately 3 ± 17 km and 12 ± 14 km. This is likely due to averaging effects over the range of distances and azimuths within each bin (see Supplementary Fig. 13 for the distribution across all data). The major effect of the corrections is on the waveform shape of the precursors, rather than significant differences in their arrival time. These discrepancies may result from defocussing of reflectors at other depths, as well as the main SS arrival, and influence the travel time of the maximum amplitudes. The difference is clear in the shape of the waveforms in the cross-sections and particularly noticeable for the SEMUCB-WM1 corrections (Supplementary Fig. 12c). As a consequence, we do not use 3-D corrections for our data analysis, as the average correction falls below the extent of our 25 km depth bins.
Regional domain analysis of reflectors
We interpret our observations in the context of mid-mantle tomography models (Fig. 3). Data from a recent study integrate cluster analysis of five mantle tomography models to independently generate 'vote maps' of seismically fast, slow, and neutral (i.e., with velocities close to the global average) domains40. Each tomography model is allocated a 'vote' as to whether the mid-mantle structure is grouped into one of these three clusters (or domains) to generate a global combined map. We define each bin according to the average votes, which accounts for bins that may incorporate multiple domain types. Fig. 6 shows cross-sections through these vote maps; shades of blue and red indicate regions for which the majority of tomography models agree that mantle rocks are fast and slow, respectively; no shading corresponds to 'neutral'. We analyse our data in the context of the domain in which reflectors are located, since these roughly correspond to the degree-2 structure of whole-mantle convection also predicted by global geodynamic models41. Fast regions are commonly related to downwelling cold material (subducted slabs), whereas slow regions correspond to hot upwellings (plumes). Neutral domains are not correlated to either upwelling or downwelling flow and may be characterised in some regions by the impedance of radial flow, such as stagnation of slabs or plumes at various depths in the MTZ17, 20, 50. By considering our observations in the context of the average seismic velocity properties, we obtain an insight into the relationship of horizontal reflectors to mantle flow and deflection processes and associated thermochemical heterogeneities. We find statistically significant differences between seismic domains for bin sizes up to 10° (p < 0.1; i.e., the probability of different domains having the same seismic properties is <10%) (see Methods section for full details). We characterise each bin according to the average seismic domain votes and use a z-test to perform systematic statistical comparisons of the proportion of reflector observations versus non-detections, and of polarity, between domain types (Table 1). For the combined bin approach and 5° bins, the proportions of bins containing reflectors differ significantly between seismic domains (p < 0.05).
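The two-proportion z-test used for these comparisons can be sketched in a few lines. The counts in the example below are invented to illustrate the calculation; only the quoted percentages appear in the study:

```python
# One-tailed two-proportion z-test comparing detection rates between two
# seismic domains (counts are invented for illustration).
from math import sqrt
from statistics import NormalDist

def one_tailed_z(hits1, n1, hits2, n2):
    """z statistic and one-tailed p-value for p1 > p2."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 1.0 - NormalDist().cdf(z)

# e.g., slow domains: 85% of 120 bins with reflectors; fast: ~71% of 90 bins
z, p = one_tailed_z(102, 120, 64, 90)
print(f"z = {z:.2f}, one-tailed p = {p:.4f}")   # p < 0.05 -> significant
```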
As bin size increases, both the differences between domains and the significance decrease and are ultimately no longer statistically significant at 15° bin sizes. This statistical analysis highlights the averaging effects for larger bins, including the fact that larger bins are more likely to encompass multiple seismic domains. Geographical bins from slow domains (upwellings) predominantly show reflectors (85%), which vary on short length scales (500 km) across the full depth range, and roughly follow the tomographically defined domain boundaries in vertical cross-sections (Fig. 6a, c). The small length scales of lateral variations are highlighted by the decrease in the number of reflectors observed as bin size increases. This reveals that the reflectors vary on length scales corresponding to the size of the 5° bins (up to 1000 km) and thus are not resolvable in the larger bins. Of the reflectors detected, relatively more possess negative polarity (31%) compared to other domains, suggesting that local mantle heterogeneity produces such seismic structures51. This supports the inference that these reflectors correspond to a significant compositional and/or structural difference between slow regions and other seismic domains (see Table 1). In contrast, fast domains (downwellings) are characterised by relatively more non-detections than slow regions (i.e., only 71% of the reliable bins contain reflectors). Spatially coherent reflectors are rarely found within the bulk of the fast domain, and there is no consistent relationship to length scale of observation. The majority of reflectors are located near the edges of the domains (Figs. 3a and 6a, b). Comparisons between bins of differing sizes reveal no trend in quantity of detections with increasing bin size, indicating sporadic, isolated reflectors, with varied length scales across our range of resolution. In comparison to the slow domains, a greater proportion of the observed reflectors have positive polarities than negative (76%). Compared to fast and slow regions, neutral domains contain an intermediate proportion of reflectors within bins (77%), with the majority exhibiting positive polarity (74%). An assessment of the proportion of observations for different bin sizes in the neutral domains reveals that lateral scale lengths of the reflectors are geographically consistent across larger length scales than in other domains. The defining characteristic of reflectors in neutral domains, compared to those in fast and slow regions, is that they are often laterally coherent across bins, forming very large and continuous features with consistent depths. Neutral regions further display a majority of shallow detections around 900 km depth; 50% of the reflectors are within ±100 km of this depth. Unlike in fast and slow domains, these reflectors tend to be situated away from domain edges and can extend across the entire domain. The observed mid-mantle reflectors do not exhibit any geographic relationship to surface features. Instead, they correlate to mid-mantle structure as independently imaged by seismic tomography. There is a good agreement between tomography models in terms of the locations and extent of mid-mantle tomographic domains40, which reflect large-scale mantle flow patterns41. For example, broad mid-mantle upwelling is likely manifested above the large low shear-velocity provinces (LLSVPs) of Africa and the South-central Pacific.
Downwelling should be focussed along the high velocity belts found across Asia and the Americas, where Tethys, Pacific and Farallon lithosphere sinks through the mid-mantle16. The South Pacific is our best-resolved example of a slow region, with dense horizontal reflectors that vary on short lateral length scales. Reflectors are absent in only a very few slow-domain bins (primarily beneath the Pacific Ocean and likely as a result of variation on length scales too small to resolve) and occur near the edges or tops of slow domains (Fig. 6) or LLSVPs. One hypothesis for mid-mantle reflections is that they result from a compositional change across the top edges of the LLSVPs, which are interpreted as thermochemical piles that host primordial material and/or basalt-enriched subducted material52,53,54. Some thermal contribution may arise if the gradients are extremely strong (occurring over vertical distances of less than approximately 65 km). The abundance of reflectors with near-equal occurrences of positive and negative impedance contrasts may attest to heterogeneity within the LLSVPs55. The top of the low-velocity anomaly would produce a negative impedance contrast, although such a feature may be gradational. Streaks of basalt/harzburgite would produce alternating bands of elevated and lowered seismic velocity and density contrasts, similar to the observed varied impedance contrasts and polarities within the data. This interpretation implies that the numerous reflectors within the seismically slow region map the shallow roof of a compositionally distinct Pacific LLSVP40, 51, 55 (Fig. 6). Although not recovered here due to sparse data coverage in the region (Fig. 5a, Supplementary Fig. 9), we would predict similar reflectors near the roof of the African LLSVP. The comparatively high quantity of non-detections in 'fast regions' is partially due to sparse data coverage in regions of expected mid-mantle downwellings (Fig. 5a, Supplementary Fig. 9), although the best-example fast regions in Europe have extensive data sampling (Fig. 3). Heterogeneity in fast regions (i.e., downgoing slabs of cold, seismically faster basalt and harzburgite) is expected to be dominantly sub-vertically oriented, as well as small scale, and thus difficult for the SS precursors to resolve compared to shorter wavelength methods4,5,6,7,8,9,10,11. Reflectors smaller than ~500 km are difficult to resolve (see Fig. 7). Alternatively, no reflectors would be detected if the impedance contrasts are small (less than approximately 0.7% averaged across the entire bin). The scattered mid-mantle reflectors in these regions are consistent with small-scale heterogeneity on the order of a few hundred kilometres, as may be expected from the vast range in composition of subducted material. Deeper sub-horizontal reflections as observed from within the fast domains may arise from coherently stacked piles of basalt56 (Fig. 9). Alternatively, they may be generated by phase transitions within the basalt, continental crust, or sediment layers of the subducted slabs26, 57. Such an explanation requires specific geometries of these (thin) layers to sustain large (>500 km) coherent reflectors, which would generate a reflector with or without an accompanying phase transition. We predict similar results in other fast regions (e.g., Central America), to be obtained with methods of higher spatial resolution than SS (e.g., refs. 11, 33, 35).
The observations near the edges of the fast regions are likely generated by the expected large impedance contrast between compositionally distinct domains. In our best-example 'neutral region' in the Northeast Pacific, there are two dominant geographically large reflectors at 850 and 1050 km, with scattered deeper detections (Fig. 6). Possible mechanisms for the deeper reflections are regional changes in rock texture or composition with depth13, such as a transition from pyrolite to bridgmanite-enriched mantle50, 58. Our reflections could alternatively correspond to a regional jump in viscosity, which has been proposed to occur at mid-mantle depths14. Shallower reflections may arise from the top and/or bottom of a thermally equilibrated (i.e., fossil) slab that stagnates atop the (textural or compositional) viscosity jump14, 50. Long-term stagnation can occur as slab sinking is impeded at a viscosity (or density) contrast, allowing progressive slab warming that removes the negative buoyancy of the slab. Once oriented horizontally, a slab then becomes detectable by the SS precursor data. The mapped reflectors are laterally coherent (in depth and polarity) over thousands of kilometres, and thus witness large-scale mantle structure, not just small-to-mid-scale heterogeneity. In eastern South America, another well-resolved 'neutral region', we observe three reflectors with alternating polarity (positive at 850, negative at 1000 and positive at 1100 km depth; Fig. 6a, c). These may have similar origins to those in the North Pacific, but a different configuration (e.g., stacked fossil slab on top of the compositional/textural change and/or complex geometrical configuration). Local accumulations of subducted oceanic or continental material may generate further regional reflectors as a consequence of heterogeneities and phase changes25, 26, 54, 57. Our global-scale observations provide the first detection of widespread reflectors associated with heterogeneity in the lower mantle. Significant variation in reflector geometry, depth, and polarity indicates that the underlying mechanisms arise from distinct origins in tomographically diverse domains. As reflections are most likely to occur across large-scale compositional boundaries, this study is a step towards mapping geochemical reservoirs that host subduction-related59 and/or primordial materials60 in the convecting mantle. Our study also provides new evidence for a potentially long-lived reservoir associated with large-scale heterogeneity in the neutral mid-mantle regions50. Future work is required to better characterise large-scale compositional heterogeneity in the lower mantle and place our observations in the context of modern mineral-physics experimentation as well as geodynamic modelling. The configuration of any observed reflectors ultimately informs about the geometry of mantle reservoirs, as well as the regionally diverse style and history of mantle flow and mixing.
Data and processing
The seismic phase SS corresponds to mantle shear waves that reflect once at the Earth's surface (Fig. 1a). Underside reflections of seismic energy from deeper mantle reflectors generate precursors to SS. Interrogating SS precursors benefits from near-global coverage of mantle shear-wave structure (Fig. 1b). We have compiled a high-quality dataset of 45,634 hand-picked SS arrivals (Fig. 1c). The data are stacked into vespagrams using common mid-points for regional bins of sizes dependent on data density (various examples are shown in Fig.
2, and Supplementary Figs. 1–3) to reveal the small-amplitude precursors not visible in individual seismograms. We benefit from the recent expansion of available seismic data, meaning that this is the largest hand-picked dataset of SS precursors to date. Although our dataset spans nearly 30 years, approximately half of our data is from the past 7 years, as a result of the recent increase in seismic data coverage. Even so, data density is still poor in many areas. Fig. 1b shows the geographical coverage of the SS bounce points. Note that this does not correspond to sensitivity, however; we also require ample azimuthal and epicentral distance variation across a region to obtain slowness resolution. Supplementary Fig. 13 contains the entire dataset as a function of epicentral distance and azimuth; both show good coverage globally, but the variation in each is clear from the plots. Correspondingly, some regions suffer a lack of ray paths across the full distance and azimuthal ranges, explaining why we do not retain bins in some regions with apparently sufficient data coverage. Our proportional data coverage for each domain agrees well with the global distribution of domains, and as expected, absolute data coverage increases with bin size (Supplementary Table 2; also see Supplementary Fig. 9). We downloaded data from IRIS for every suitable event from January 1988 to April 2016. Our event criteria involve magnitudes from 6.0 to 7.0 and focal depths shallower than 30 km. We obtain data from stations in the event-receiver epicentral distance range from 100° to 180°. The data are first deconvolved from the receiver response to displacement, rotated to the transverse component and then filtered from 15 to 75 s for picking of individual data. We initially perform automated quality checking by removing any seismogram with a root mean square amplitude in the precursor window >0.3 of the SS signal amplitude. The data are then hand-picked by event to ensure consistency of the SS waveforms and of the portion of the SS phase that was picked. Final quality checking is performed at this stage to remove any seismograms with large-amplitude arrivals in the precursor window or inconsistent SS waveforms within an event. Following picking, the data are aligned on the SS peak. For stacking, we use a relatively short-period filter of 10–50 s, maintaining the original SS pick times (realigned to the position of the maxima of the SS phases). The data are then normalised according to SS amplitude. We partition the data into overlapping spherical caps based on their bounce points, to generate regional maps. The geometry is such that the centre point of a bin corresponds to the edge of each adjacent bin. Generally, even reflections from the major 410 and 660 km discontinuities are too small to be detected in all but the highest-quality individual seismograms (Fig. 1b). Therefore, the binned data are stacked into vespagrams (Supplementary Fig. 1), which suppress incoherent noise and reveal small but coherent seismic phases. Red and blue signals in vespagrams in Fig. 2 and Supplementary Figs. 1–3 correspond to arrivals of seismic energy. A global stack reveals the major discontinuities at 410 and 660 km depth but no global features in the mid-mantle, consistent with 1-D seismic velocity models42, 43. SS precursors are identified within vespagrams using theoretical arrival time and slowness with respect to SS. The cross-sections beneath the vespagrams (Fig.
1) are taken through the predicted arrival time and slowness of SS precursors with respect to SS (dashed line), calculated for PREM42 with the TauP toolkit61, which computes theoretical ray paths of seismic phases. Signals away from this line are not SS precursor energy. Using bootstrap resampling with 300 random resamples per stack, we estimate the 95% confidence levels (two standard deviations) of our data by calculating the standard deviation of the bootstrapped stacks. Any red-filled peaks in the cross-sections in Fig. 2 and Supplementary Figs. 1–3 have a 95% confidence level above zero and are hence significant. After stacking, we perform quality checks for the vespagram of each bin. We discard any bins for which the 410 and 660 km discontinuities cannot be identified with certainty. We then remove stacks with significant noise in the precursor window or with poor slowness resolution. Significant noise is defined as non-precursor energy (i.e., away from the predicted arrival time and slowness) with energy comparable to that on the predicted precursor arrival time and slowness line. In this case, we cannot establish whether the arrivals are actually deflected precursors or scattered noise energy. Poor slowness resolution, where energy extends across multiple slownesses, means that it is not possible to determine the true incoming slowness and hence whether the signals are SS precursors or not. Following this quality control, we rank our remaining data by quality of the SS precursor observations, using the non-precursor noise and slowness resolution. Examples of high-quality 'A' vespagrams are shown in Fig. 2 and Supplementary Fig. 2. We define any significant peak in the precursor window along the theoretical arrival time and slowness line as an observation of a mid-mantle feature (Fig. 2a–c). We analyse vespagrams with rather large slowness ranges of −2 to +2 s/deg to confirm that the detection is indeed an SS precursor and not energy leaking from an arrival with a different slowness. Care is also taken to avoid picking potential side lobes of the 660 km precursor; we do not interpret any signals with an estimated depth of <800 km, corresponding to 280 s before SS. The absence of significant arrivals in this window constitutes a negative detection (Fig. 2d; crosses in Fig. 3). Examples of intermediate-quality 'B' and lower-quality 'C' vespagrams are included in Supplementary Fig. 3; an observation and a non-observation are shown for both quality rankings. 'B' quality data are characterised by an increase in energy away from the predicted arrival time and slowness of the precursors, resulting in a slightly noisier vespagram but no interference with the arrivals of interest. 'C' quality data are noisier throughout the vespagram, with non-significant energy arriving along the theoretical prediction, and less consistency in the arrivals of S410S and S660S. The importance of our statistical analysis is highlighted here, allowing us to discard energy that arrives along the expected theoretical time and slowness curve but is not significant.
Measurements and observations
The arrival times and amplitudes of the precursors relative to SS are used to calculate the depths and impedance contrasts of mid-mantle reflectors. We use the cross-sections taken through the vespagrams at the theoretical arrival time and slowness of the precursors relative to SS to make measurements of the arrival times and amplitudes of the precursors with respect to the SS phase.
Measurements and observations

The arrival times and amplitudes of the precursors relative to SS are used to calculate the depths and impedance contrasts of mid-mantle reflectors. We use the cross-sections taken through the vespagrams at the theoretical arrival time and slowness of the precursors relative to SS to measure the arrival times and amplitudes of the precursors with respect to the SS phase. The SS waveforms are cross-correlated with both positive and negative arrivals in the precursor window; this identifies waveforms that have a similar shape to SS and are therefore likely to be SS precursors. Here we make use of the bootstrapped vespagrams to ensure that only robust SS precursors are measured. We then measure differential travel-time residuals of the precursors with respect to PREM42, using the Seismic Analysis Code (SAC)62. The arrival times of the SS precursors are taken at the time of the maximum amplitude of the phase. From the relative arrival times, we estimate the depths of the discontinuities using TauP, by introducing theoretical reflectors at all depths. The amplitude ratios of the precursors relative to SS are also measured: we use the maximum precursor signal amplitude within ±5 s of the cross-correlated arrival time, which also corresponds to the picked arrival time. The precursor/SS amplitude ratios are then corrected for the path difference between SS and the precursors, incorporating the differing influence of geometrical spreading as well as upper-mantle attenuation; this yields reflection coefficients. We ultimately obtain the estimated impedance contrasts by calculating reflection coefficients for theoretical mid-mantle discontinuities. Following conversion of the precursor–SS travel-time residuals to depth, the depths are partitioned into vertical bins of 25 km, in order to suppress the influence of 3-D velocity variations in the vicinity of the reflectors and to mitigate errors from measurement uncertainties. Corrections for 3-D velocity structure (see main text) reveal that the standard deviation of the depth errors is approximately 14 or 17 km, depending on the model; we thus estimate that partitioning our depth observations into 25 km radial bins should yield robust results. Partitioning the reflectors by depth is also useful for our later interpretation of the origins of the reflectors, since the depth of a specific reflector arising from a phase change may vary laterally owing to external factors such as temperature. Performing travel-time corrections for each SS bin might improve constraints, yet it would also introduce unanticipated errors arising from any inaccuracies in the velocity model employed: altering the travel times influences the focusing of precursors within the vespagrams, in turn affecting their observability and their measured arrival times and amplitudes. Furthermore, the two models yield different corrections for the data.

Correlation to velocity domains and statistical calculations

We characterise the reflectors according to their seismic tomography domain using a clustering analysis of five different tomography models40. This process classifies regions into clusters based on similar seismic properties; regions are defined as seismically fast, slow or neutral/average. The clustering was performed for five tomography models, which then generated vote maps. For plotting, we define fast or slow regions as those with three or more votes. Across each bin, we calculate the proportion of votes for each seismic domain. For bins with reflectors, we average the votes at the depth of the observed reflector across the bin. For bins with no reflectors, we average the cluster votes over the entire depth range of 800–1300 km at intervals of 50 km. Each bin is assigned the average votes for each seismic domain, totalling five per bin.
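A compact sketch of this vote assignment might look as follows; the vote-map array layout and the 50 km depth sampling are illustrative assumptions rather than the study's actual data structures.

```python
import numpy as np

DEPTHS = np.arange(800, 1301, 50)   # assumed 800-1300 km sampling

def bin_votes(vote_map, lat_idx, lon_idx, reflector_depth=None):
    """Average cluster votes (0-5) for one domain over a bin.

    vote_map: (n_depths, n_lat, n_lon) array of votes
    lat_idx, lon_idx: index arrays selecting the bin footprint."""
    if reflector_depth is not None:
        # Bin contains a reflector: sample votes at its depth.
        k = int(np.abs(DEPTHS - reflector_depth).argmin())
        return float(vote_map[k][lat_idx, lon_idx].mean())
    # No reflector: average over the whole 800-1300 km range.
    return float(vote_map[:, lat_idx, lon_idx].mean())
```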
Note that all of the bins with reflectors necessarily display no reflectors at the majority of depths explored, so our observations and statistics are heavily weighted towards, and characterised by, the detections rather than the non-observations. We calculate the statistical significance between pairs of seismic domains, comparing the number of observations and their polarities. We employ a one-tailed z-test to obtain the probability that the observations and polarities from any two types of seismic domain are significantly different from one another. Table 1 shows the observational results for each comparison, with significance indicated, and Supplementary Table 1 contains the calculated p-values.
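One natural reading of this test is a two-proportion z-test on the detection counts; a hypothetical SciPy sketch is given below (the exact statistic used in the study may differ):

```python
import numpy as np
from scipy.stats import norm

def one_tailed_z(obs1, n1, obs2, n2):
    """One-tailed p-value that domain 1's detection rate exceeds
    domain 2's by chance. obs*: bins with reflectors; n*: total bins."""
    p1, p2 = obs1 / n1, obs2 / n2
    pooled = (obs1 + obs2) / (n1 + n2)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return norm.sf(z)        # upper-tail probability

# e.g. one_tailed_z(30, 80, 12, 75) < 0.05 would indicate a
# significant difference between two domains (numbers invented)
```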
Travel time uncertainties and 3-D velocity corrections

The errors in our calculated depths are estimated from the bootstrap resampling. For each observation, we generate 300 random resampled datasets and restack within ±15 s of the original reflector arrival time. The arrival time of the maximum peak in this window is used as an estimate of the reflector arrival time in each bootstrapped stack. The standard error on the mean of these 300 picks provides an estimate of the error on the arrival time of the reflector in the original vespagrams, i.e., how the arrival time would vary if the data coverage differed. The mean error on all travel-time measurements is 4.5 s, corresponding to an average depth error of 22 km. The errors on the arrival-time picks are derived from the sampling rate of the cross-sections through the vespagrams (0.1 s), combined to give a total picking error of 0.14 s. Further uncertainties are calculated based on 3-D velocity corrections from two shear-wave velocity models: S20RTS48 and SEMUCB-WM149. These are not incorporated as errors, since they are not measurement uncertainties. To estimate the influence of 3-D velocity structure, we calculate 3-D tomography corrections for the models and compute the average 3-D residual for each bin. For each seismogram, we use ray tracing through the models to obtain the delay time of S1000S with respect to SS. We also correct the individual data for the two shear-wave velocity models, using the theoretical delay times of S1000S with respect to SS; the data are then re-stacked into vespagrams, and we recreate the four high-quality observations in Fig. 2 (Supplementary Figs. 11, 12). We select 1000 km depth as the reference since it is near the mid-point of our depth range and is our depth of interest. The data are sensitive to near-horizontal reflectors and to lateral variations on length scales that depend on the size of the bin; the minimum lateral size therefore ranges from approximately 500 km for the 5° bins up to 1500 km for the 15° bins. The lateral resolution tends to decrease as data density decreases, since bin sizes must then increase in order to obtain enough data for successful stacking. Here we band-pass filter our data at periods of 15–50 s. Correspondingly, the SS Fresnel zone is fairly large, on the order of 1000 km, and is further complicated by its mini-max shape44. This can introduce errors into the depth calculations, which are partly mitigated by the averaging inherent in binning and stacking the data. In order to investigate the lateral extent of the discontinuities, we explore different cap sizes of 5°, 7.5°, 10° and 15°. There are significant discrepancies between our results for the different sizes of spherical caps, attesting to the variable length scales of heterogeneity. In the 5° cap results, several areas show detections of mid-mantle discontinuities that vary on shorter length scales than those of the larger bins. This indicates that lateral variation in the depth (and impedance contrast) of mid-mantle reflectors is averaged out in larger bins. Other regions display observations in small bins and non-observations in large bins, indicating that some reflectors either do not extend far enough across the larger bins to produce coherent detections, or vary too much in depth across the length scale of the larger bin to stack coherently. This is corroborated by reflector properties tending towards an average as bin size increases, with differences between domains no longer significant for the largest bins. Analogously, the absence of a global mid-mantle discontinuity is not inconsistent with the widespread presence of regional reflectors. Larger caps generally display larger signal-to-noise ratios, as a result of more data in the stack and of averaging over lateral heterogeneities; however, this lateral smearing of heterogeneity becomes an issue for detecting smaller reflectors, as described above. Conversely, smaller cap sizes are too noisy in many regions and suffer from poor data coverage in some areas. To resolve this issue, we iteratively complete the map in Fig. 3 by systematically populating empty areas with increasingly large cap sizes (see the sketch below). Although this approach generates a greater number of bins in regions with the highest data density, we prefer it because it yields higher-resolution imaging where possible and greater global coverage than any single bin size can provide. Maps with globally constant cap sizes are shown in Supplementary Figs. 4 (depth of discontinuity) and 6 (precursor/SS amplitude ratio); the corresponding histograms for the separate bin sizes are given in Supplementary Figs. 6 and 7. The data coverage is highly variable depending on bin size (Supplementary Fig. 14). Supplementary Fig. 8 displays the data coverage for the map of combined bins (Fig. 3), which shows the maximum geographical sensitivity of the dataset; the corresponding data coverage for each bin size is shown in Supplementary Fig. 11. The significant variation in data coverage between bins of different sizes is a consequence of the stacking process eliminating bins with insufficient data. We also note that the combined-bin coverage (Fig. 5a) appears poorer than the 15° bin coverage (Supplementary Fig. 9d). This is a direct result of our iterative method of completing the map: we do not incorporate larger bins that overlap already-incorporated smaller bins by more than the bin radius, since this would double count some reflectors.
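The iterative completion referred to above admits a simple greedy sketch: caps are accepted from smallest to largest, and a larger cap is skipped when its centre falls within one bin radius of an already-accepted cap. The overlap rule and the bookkeeping below are illustrative simplifications, not the study's code.

```python
import numpy as np

def gc_deg(p, q):
    """Great-circle distance in degrees between (lat, lon) points."""
    la1, lo1, la2, lo2 = map(np.radians, (p[0], p[1], q[0], q[1]))
    c = (np.sin(la1) * np.sin(la2) +
         np.cos(la1) * np.cos(la2) * np.cos(lo1 - lo2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def combine_caps(caps_by_radius):
    """caps_by_radius: {radius_deg: [(lat, lon, result), ...]} for
    radii such as 5, 7.5, 10 and 15. Smaller caps take priority."""
    accepted = []                                  # (lat, lon, radius, result)
    for radius in sorted(caps_by_radius):          # smallest first
        for lat, lon, result in caps_by_radius[radius]:
            # Skip caps overlapping an accepted cap by more than
            # its radius, to avoid double-counting reflectors.
            if all(gc_deg((lat, lon), (alat, alon)) > arad
                   for alat, alon, arad, _ in accepted):
                accepted.append((lat, lon, radius, result))
    return accepted
```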
The sensitivity of SS data to horizontal discontinuities is estimated using the wavelength λ of our filtered data. Discontinuities that occur over radial length scales of >λ/4 cannot be detected, as they do not generate reflections. For our data, with periods of 15–50 s, this corresponds to approximately 65 km; to calculate this length scale, we use the central value of the frequency range (a period of 23 s) with a mantle shear-wave velocity of 6 km/s. This indicates that any reflectors we detect must arise from primarily compositional differences. Thermal gradients generally occur over vertical length scales on the order of hundreds of kilometres and are thus too gradual to be observed using SS precursors. However, their presence influences the depths of mantle discontinuities, causing shallowing or deepening of transitions. Extremely large thermal gradients may help to generate reflectors; for example, the tops of LLSVPs may have some thermal contribution (particularly for the negative-polarity reflectors), although such a gradient must occur across a vertical distance of <65 km.

Observability of reflectors related to strength and size

We test the observability of SS precursors by exploring the amplitudes of reflections generated by horizontal reflectors of varying impedance contrast and of varying lateral size relative to the bin. For this we use the 2.5-D spectral-element code AxiSEM47, which generates full-wavefield synthetics incorporating attenuation and other real-Earth effects. AxiSEM is selected because it allows the incorporation of 2-D structure, which permits us to synthesise discrete horizontal reflectors corresponding to our observations. Here we model the reflectors as regional velocity perturbations to a background model, introducing a discontinuity at 1000 km without a hardwired velocity jump. Using PREM42 as the background model, reflectors of varying lateral size and shear-wave velocity contrast are placed at 1000 km depth. Within the event–station geometry, they are centred on the SS bounce point for the reference stacking epicentral distance of 125° (i.e., 62.5° away from the event). We explore the influence of both the size and the strength of the reflectors. Lateral size is varied from 5° to 25° in increments of 5°, corresponding to horizontal sizes of 500–2500 km at 1000 km depth in the mantle. Note that these are absolute lateral sizes of the reflectors, in contrast to the bins, which are described in terms of radius; since AxiSEM produces 2-D structures, a lateral reflector of size 25° would comprise 50% of a bin with radius 25°. The shear-wave velocity contrast is introduced as a positive perturbation with respect to PREM, and we test values from 1% to 5% in increments of 1%. The contrast is a discontinuous step, and the velocity structure reverts linearly back to PREM over a depth of 200 km (so as not to introduce further complications from additional reflected phases). PREM attenuation is also included in the synthetic calculations. Synthetic stations are placed every 1° from 100° to 150° event–receiver epicentral distance. Since the event location remains fixed, the theoretical cap size for the full epicentral distance range is 25° radius (Fig. 7a), which is much larger than any bin that we employ. The different reflector sizes that we introduce correspond to between 10% and 50% of this bin size, indicating the resolvability of reflectors with length scales smaller than the bins. The large epicentral distance range produces high slowness resolution. For completeness, we also stack over the smaller epicentral distance ranges that correspond to our actual bin sizes, testing bins of 15° and 10° radius (Fig. 7b, c). Note that the slowness resolution decreases with decreasing bin size, because only one theoretical event is employed in the modelling; as a consequence, we do not model bin sizes of 5° or 7.5°. The synthetic data are processed using the same methods as the real data, including aligning on the major SS peak and stacking into vespagrams. Cross-sections are taken through the synthetic vespagrams to allow for picking.
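As a sketch of the perturbation geometry only (AxiSEM takes its models through its own mesh and input files, not through code like this), the 1-D shear-velocity profile through a reflector can be written as a step at 1000 km that relaxes linearly back to the background over 200 km:

```python
import numpy as np

def perturbed_vs(depth_km, vs_background, dv_frac=0.02,
                 top_km=1000.0, taper_km=200.0):
    """Background shear velocity plus a dv_frac step at top_km,
    decaying linearly to zero over taper_km below it.

    depth_km, vs_background: arrays sampling the 1-D model
    (e.g. PREM); dv_frac between 0.01 and 0.05 as in the tests."""
    dv = np.zeros_like(vs_background)
    inside = (depth_km >= top_km) & (depth_km <= top_km + taper_km)
    # Full perturbation at the discontinuity, zero at its base.
    dv[inside] = dv_frac * (1.0 - (depth_km[inside] - top_km) / taper_km)
    return vs_background * (1.0 + dv)
```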
The theoretical arrival times of the S1000S reflections are calculated for PREM42 using TauP61, which permits measurement of their theoretical amplitudes even when the signal cannot be identified visually in the cross-section. Using SAC62, we finally measure the amplitude ratio of S1000S to SS (Fig. 7). Waveform data were obtained from the IRIS Data Management Center (NSF grant EAR-1063471). The processed data and measurements are available from the corresponding author upon request. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. L.W. is the recipient of a Discovery Early Career Research Award (project number DE170100329) funded by the Australian Government. N.C.S. and L.W. were supported by NSF grant EAR-1361325. The research was inspired by the Cooperative Institute for Dynamic Earth Research (CIDER) program; CIDER-II is funded as a 'Synthesis Center' by the Frontiers of Earth System Dynamics (FESD) program of the NSF under grant number EAR-1135452. We thank William McDonough and James Ni for helpful discussions.
Researchers with the India Meteorological Department (IMD) now say the effect of global warming has become more pronounced in the country over the last 10 years. Besides an average rise of 0.4–0.5 degrees Celsius in temperature, the country has seen a gradual rise in minimum temperatures and a reversal of the cooling trends of past decades. B. Mukhopadhyay, ADGM (R), IMD, said that studies have revealed that the effects of global warming have become more pronounced in the country. "Research showed that in the last 30 years, there has been a reversal of cooling trends across the country," he said. As the name suggests, the 'cooling trend' refers to the temperature returning to its normal mean over a given period of time. The trend is more pronounced in the northwestern parts of the country, in the states of Rajasthan and Punjab, and in central areas like Madhya Pradesh. Another trend that caught the attention of the researchers was the gradual rise in minimum temperature over the country. Mukhopadhyay said this trend, seen worldwide, is considered an indication of global warming. "This rise in minimum temperature can be due to greenhouse gases or due to trapped heat. We have observed this trend across both urban and rural areas," he said. A daytime rise in temperature is often compensated by heat being radiated from the earth's surface at night. However, as Mukhopadhyay pointed out, greenhouse gases trap this escaping heat, which leads to a rise in minimum temperature. This phenomenon was observed in developed countries as far back as the 1960s–70s, while in India it has become more and more prominent since the 1990s. The last 10 years have seen a steady rise in mean minimum temperature, he said, and 2010 was the warmest year in the last 110 years. While the mean temperature was 0.93 degrees higher than normal, the minimum temperature recorded an anomaly of 0.84. The annual climate survey of the IMD shows that eight of the 10 warmest years occurred in 2000–2010, making it the warmest decade on record. The IMD has around 200 observation centres across the country, of which 120 are used for long-term observations. The minimum temperature in the last decade has been on the rise, a departure from earlier trends. In previous decades, a year's rise in minimum temperature was followed by a fall the following year. For example, the mean minimum temperature in 1994 was higher than the normal mean by 0.25, while in 1995 it was higher by 0.34. The yearly minimum of 1996 fell compared to 1995, with the year recording a minimum temperature higher than normal by 0.26. This pattern held until 1999, after which the minimum temperature has risen continuously: if the minimum temperature was higher in 2000 by 0.14, it was higher by 0.3, 0.48 and 0.55 in 2001, 2002 and 2003 respectively. The gradual warming of the Arabian Sea is another phenomenon that signals the increased effect of global warming in the country, he said. After the monsoons, there used to be a dip in the temperature of the Arabian Sea, but over the last 10 years or so this has failed to happen.
The Indian monsoon, by far the most important weather phenomenon for the country, has also not been spared from change. Mukhopadhyay said that although there has been no decrease in the quantity of rain, there has been a change in the pattern of the monsoon. "Rain events have gone down but there has been an increase of heavy rain events. The increase has been by 5-10 per cent," he said. "Heavy rain" events refer to short spells of intense rain. Such events lead to floods or landslides, and weather researchers have pointed out the detrimental effect of such events on the country's agricultural cycle as well.
You may have heard it said that if our planet were shrunk down to the size of a billiard ball, it would be smoother than the ball. In other words, the claim is that the Earth is smoother than a billiard ball. Is that true? Back in 2008, in the article titled "Ten things you don't know about the Earth" on the "Bad Astronomy" blog at discovermagazine.com, Phil Plait wrote about this, saying "…according to the World Pool-Billiard Association, a pool ball is 2.25 inches in diameter and has a tolerance of +/- 0.005 inches." After making some calculations, he concluded that "… the urban legend is correct. If you shrank the Earth down to the size of a billiard ball, it would be smoother." Even the famous American astrophysicist, author and science communicator Neil deGrasse Tyson once tweeted about it: SMOOTH EARTH: If shrunk to a few inches across, Earth would feel as smooth as a billiard-hall cue ball. — Neil deGrasse Tyson (@neiltyson) April 22, 2016 In fact, the Earth is much smoother than one might think. Yes, there are big mountains like the Himalayas, and deep trenches under the oceans like the Mariana Trench. The highest point on Earth is the top of Mount Everest, at 8.85 km; the deepest point is the Mariana Trench, at about 11 km deep. But even those are very small compared to the Earth's diameter, which is about 12,735 kilometers on average. According to the World Pool-Billiard Association, "All balls must be composed of cast phenolic resin plastic and measure 2 1/4 (+.005) inches [5.715 cm (+ .127 mm)] in diameter". So, if we could shrink the Earth to the size of a billiard ball, the height of Mount Everest would be only 0.04 millimeters, and the depth of the Mariana Trench only about 0.045 millimeters. These figures are within 0.127 mm (0.005 inches), with no pits or bumps exceeding that, so the Earth is smoother than a billiard ball, right? First of all, the World Pool-Billiard Association specification does not say "there must not be pits or bumps of more than .005 inches". The rule concerns only diameter: the diameter must be within 2 1/4 (+.005) inches. Smoothness is a very different thing. Let's assume we produced a billiard ball and covered its surface with medium sandpaper (grit particle size of 0.005 in; for more about grit sizes, see the grit size table in the Wikipedia entry on sandpaper). By the definition of smoothness used by Phil Plait of Discover Magazine and Neil deGrasse Tyson, that billiard ball would also be "smooth" – which is obviously ridiculous. The billiard-ball-sized Earth's surface texture would be roughly equivalent to that of 320-grit sandpaper. Still not quite smooth, right? So it's obvious that the official 0.005 inch (0.127 mm) tolerance is for shape (for roundness), not for smoothness.

Human fingers are very sensitive

According to a 2013 study titled "Feeling Small: Exploring the Tactile Perception Limits", published in Nature, a human finger can feel wrinkles as small as 10 nm (nanometers), or 0.00001 millimeters, demonstrating that human tactile discrimination extends to the nanoscale. So, if the Earth were shrunk down to the size of a billiard ball, you would definitely feel Mount Everest, which would be 0.04 millimeters high.
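The scaling arithmetic behind these figures is easy to verify. Here is a minimal sketch in plain Python, using the numbers quoted above (the variable names are ours):

```python
BALL_D_MM = 57.15      # 2.25-inch billiard ball diameter, in mm
EARTH_D_KM = 12_735    # average Earth diameter, in km

scale = BALL_D_MM / (EARTH_D_KM * 1e6)   # ball mm per Earth mm

everest_km = 8.85      # highest point
trench_km = 11.0       # deepest point

print(everest_km * 1e6 * scale)   # ~0.040 mm bump
print(trench_km * 1e6 * scale)    # ~0.049 mm pit (~0.045 mm in the text)
```

As round as a billiard ball

Speaking of roundness, is Earth as round as a billiard ball?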
Earth's equatorial diameter is 7,926 miles (12,756 km), but from pole to pole the diameter is 7,898 miles (12,714 km) – a difference of only 28 miles (42 km). If we take the bigger diameter and shrink it down, the difference becomes 0.0049 inches (about 0.125 mm). If we take the smaller diameter, the difference is very slightly bigger, but almost the same. So yes, the Earth is as round as a billiard ball – but it is almost at the limit.
- Is the Earth as smooth as a billiard ball? Answer: No.
- Is the Earth as round as a billiard ball? Answer: Yes.
You can also watch Vsauce's great video titled How Much of the Earth Can You See at Once?, which also covers this subject.
- Ten things you don't know about the Earth, "Bad Astronomy" blog by Phil Plait on discovermagazine.com
- World Pool-Billiard Association Equipment Specifications on wpapool.com
- Is Earth as smooth as a billiard ball? on skeptics.stackexchange.com
- Is the Earth Like a Billiard Ball Or Not? on Possibly Wrong blog
In particle physics, the weak interaction (the weak force or weak nuclear force) is the mechanism of interaction between sub-atomic particles that causes radioactive decay and thus plays an essential role in nuclear fission. The theory of the weak interaction is sometimes called quantum flavordynamics (QFD), in analogy with the terms quantum chromodynamics (QCD) for the strong interaction and quantum electrodynamics (QED) for the electromagnetic force. However, the term QFD is rarely used, because the weak force is best understood in terms of electroweak theory (EWT). The weak interaction takes place only at very small, sub-atomic distances, less than the diameter of a proton. It is one of the four known fundamental interactions of nature, alongside the strong interaction, electromagnetism and gravitation. The Standard Model of particle physics provides a uniform framework for understanding the electromagnetic, weak and strong interactions. An interaction occurs when two particles, typically but not necessarily half-integer-spin fermions, exchange integer-spin, force-carrying bosons. The fermions involved in such exchanges can be either elementary (e.g. electrons or quarks) or composite (e.g. protons or neutrons), although at the deepest level all weak interactions are ultimately between elementary particles. In the case of the weak interaction, fermions can exchange three distinct types of force carriers, known as the W+, W− and Z bosons. The mass of each of these bosons is far greater than the mass of a proton or neutron, which is consistent with the short range of the weak force. The force is in fact termed weak because its field strength over a given distance is typically several orders of magnitude less than that of the strong nuclear force or the electromagnetic force. Quarks, which make up composite particles like neutrons and protons, come in six "flavors" – up, down, strange, charm, top and bottom – which give those composite particles their properties. The weak interaction is unique in that it allows quarks to swap their flavor for another; the swapping of those properties is mediated by the force-carrier bosons. For example, during beta-minus decay, a down quark within a neutron is changed into an up quark, converting the neutron into a proton and resulting in the emission of an electron and an electron antineutrino. Other important examples of phenomena involving the weak interaction include beta decay and the fusion of hydrogen into deuterium that powers the Sun's thermonuclear process. Most fermions decay by the weak interaction over time. Such decay makes radiocarbon dating possible, as carbon-14 decays through the weak interaction to nitrogen-14. It can also create radioluminescence, commonly used in tritium illumination and in the related field of betavoltaics. In 1933, Enrico Fermi proposed the first theory of the weak interaction, known as Fermi's interaction. He suggested that beta decay could be explained by a four-fermion interaction, involving a contact force with no range. However, the weak interaction is better described as a non-contact force field having a finite, albeit very short, range. In 1968, Sheldon Glashow, Abdus Salam and Steven Weinberg unified the electromagnetic force and the weak interaction by showing them to be two aspects of a single force, now termed the electroweak force.
The weak interaction is unique in a number of respects:
- It is the only interaction capable of changing the flavor of quarks (i.e., of changing one type of quark into another).
- It is the only interaction that violates parity (P) symmetry. It is also the only one that violates charge–parity (CP) symmetry.
- It is mediated (propagated) by force-carrier particles that have significant masses, an unusual feature which is explained in the Standard Model by the Higgs mechanism.
Due to their large mass (approximately 90 GeV/c²), these carrier particles, termed the W and Z bosons, are short-lived, with a lifetime of under 10⁻²⁴ seconds. The weak interaction has a coupling constant (an indicator of interaction strength) of between 10⁻⁷ and 10⁻⁶, compared with the strong interaction's coupling constant of 1 and the electromagnetic coupling constant of about 10⁻²; consequently, the weak interaction is weak in terms of strength. The weak interaction has a very short range (around 10⁻¹⁷ to 10⁻¹⁶ m). At distances around 10⁻¹⁸ meters, the weak interaction has a strength of similar magnitude to the electromagnetic force, but this starts to decrease exponentially with increasing distance. At distances of around 3×10⁻¹⁷ m, scaled up by just one and a half decimal orders of magnitude, the weak interaction is 10,000 times weaker than the electromagnetic force. The weak interaction affects all the fermions of the Standard Model, as well as the Higgs boson; neutrinos interact only through gravity and the weak interaction, and neutrinos were the original reason for the name weak force. The weak interaction does not produce bound states, nor does it involve binding energy – something that gravity does on an astronomical scale, that the electromagnetic force does at the atomic level, and that the strong nuclear force does inside nuclei. Its most noticeable effect is due to its first unique feature: flavor changing. A neutron, for example, is heavier than a proton (its sister nucleon), but it cannot decay into a proton without changing the flavor (type) of one of its two down quarks to up. Neither the strong interaction nor electromagnetism permits flavor changing, so this proceeds by weak decay; without weak decay, quark properties such as strangeness and charm (associated with the quarks of the same name) would also be conserved across all interactions. All mesons are unstable because of weak decay. In the process known as beta decay, a down quark in the neutron can change into an up quark by emitting a virtual W− boson, which is then converted into an electron and an electron antineutrino. Another example is electron capture, a common variant of radioactive decay, wherein a proton and an electron within an atom interact and are changed into a neutron (an up quark is changed into a down quark), with an electron neutrino emitted. Due to the large masses of the W bosons, particle transformations or decays (e.g., flavor change) that depend on the weak interaction typically occur much more slowly than transformations or decays that depend only on the strong or electromagnetic forces. For example, a neutral pion decays electromagnetically, and so has a lifetime of only about 10⁻¹⁶ seconds. In contrast, a charged pion can only decay through the weak interaction, and so lives about 10⁻⁸ seconds, or a hundred million times longer than a neutral pion. A particularly extreme example is the weak-force decay of a free neutron, which takes about 15 minutes.
Weak isospin and weak hypercharge

| Generation 1 | Generation 2 | Generation 3 |
| Electron neutrino | Muon neutrino | Tau neutrino |
| Up quark | Charm quark | Top quark |
| Down quark | Strange quark | Bottom quark |

All of the above left-handed (regular) particles have corresponding right-handed antiparticles with equal and opposite weak isospin. All right-handed (regular) particles and left-handed antiparticles have a weak isospin of 0.

All particles have a property called weak isospin (symbol T3), which serves as a quantum number and governs how that particle behaves in the weak interaction. Weak isospin plays the same role in the weak interaction as electric charge does in electromagnetism and color charge in the strong interaction. All left-handed fermions have a weak isospin value of either +1⁄2 or −1⁄2. For example, the up quark has a T3 of +1⁄2 and the down quark −1⁄2. A quark never decays through the weak interaction into a quark of the same T3: quarks with a T3 of +1⁄2 only decay into quarks with a T3 of −1⁄2 and vice versa. In any given interaction, weak isospin is conserved: the sum of the weak isospin numbers of the particles entering the interaction equals the sum of the weak isospin numbers of the particles exiting that interaction. For example, a (left-handed) π+, with a weak isospin of +1, normally decays into a νμ (+1⁄2) and a μ+ (as a right-handed antiparticle, +1⁄2). Following the development of the electroweak theory, another property, weak hypercharge, was introduced. It depends on a particle's electrical charge and weak isospin, and is defined by YW = 2(Q − T3), where YW is the weak hypercharge of a given type of particle, Q is its electrical charge (in elementary charge units) and T3 is its weak isospin. Whereas some particles have a weak isospin of zero, all spin-1⁄2 particles have non-zero weak hypercharge. Weak hypercharge is the generator of the U(1) component of the electroweak gauge group.
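As a quick worked check of the hypercharge relation just given, the following short Python snippet (the particle list is merely illustrative) computes YW for a few left-handed fermions:

```python
from fractions import Fraction as F

# Left-handed fermions: name -> (electric charge Q, weak isospin T3)
PARTICLES = {
    "up quark":          (F(2, 3),  F(1, 2)),
    "down quark":        (F(-1, 3), F(-1, 2)),
    "electron":          (F(-1),    F(-1, 2)),
    "electron neutrino": (F(0),     F(1, 2)),
}

for name, (q, t3) in PARTICLES.items():
    y_w = 2 * (q - t3)                    # Y_W = 2(Q - T3)
    print(f"{name:18s} Y_W = {y_w}")

# Both quarks give 1/3 and both leptons give -1: members of the
# same weak-isospin doublet share a hypercharge, as they must.
```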
There are two types of weak interaction (called vertices). The first type is called the "charged-current interaction" because it is mediated by particles that carry an electric charge (the W+ and W− bosons), and it is responsible for the beta decay phenomenon. The second type is called the "neutral-current interaction" because it is mediated by a neutral particle, the Z boson. In one type of charged-current interaction, a charged lepton (such as an electron or a muon, having a charge of −1) can absorb a W+ boson (a particle with a charge of +1) and be thereby converted into the corresponding neutrino (with a charge of 0), where the type ("flavor") of neutrino (electron, muon or tau) is the same as the type of lepton in the interaction, for example: μ− + W+ → νμ. Similarly, a down-type quark (d, with a charge of −1⁄3) can be converted into an up-type quark (u, with a charge of +2⁄3) by emitting a W− boson or by absorbing a W+ boson. More precisely, the down-type quark becomes a quantum superposition of up-type quarks: that is to say, it has a possibility of becoming any one of the three up-type quarks, with the probabilities given in the CKM matrix tables. Conversely, an up-type quark can emit a W+ boson, or absorb a W− boson, and thereby be converted into a down-type quark, for example: u → d + W+. The W boson is unstable and so decays rapidly, with a very short lifetime; for example, a W− can decay into an electron and an electron antineutrino. Decays of the W boson to other products can happen, with varying probabilities. In the so-called beta decay of a neutron (see picture, above), a down quark within the neutron emits a virtual W− boson and is thereby converted into an up quark, converting the neutron into a proton. Because of the energy involved in the process (i.e., the mass difference between the down quark and the up quark), the W− boson can only be converted into an electron and an electron antineutrino. At the quark level, the process can be represented as: d → u + e− + ν̄e. Like the W boson, the Z boson also decays rapidly, for example into an electron–positron pair. The Standard Model of particle physics describes the electromagnetic interaction and the weak interaction as two different aspects of a single electroweak interaction. This theory was developed around 1968 by Sheldon Glashow, Abdus Salam and Steven Weinberg, and they were awarded the 1979 Nobel Prize in Physics for their work. The Higgs mechanism provides an explanation for the presence of three massive gauge bosons (W+, W− and Z, the three carriers of the weak interaction) and the massless photon (γ, the carrier of the electromagnetic interaction). According to the electroweak theory, at very high energies the universe has four components of the Higgs field, whose interactions are carried by four massless gauge bosons – each similar to the photon – forming a complex scalar Higgs field doublet. However, at low energies this gauge symmetry is spontaneously broken down to the U(1) symmetry of electromagnetism, since one of the Higgs fields acquires a vacuum expectation value. This symmetry-breaking would be expected to produce three massless bosons, but instead they are absorbed by three of the gauge fields and give them mass through the Higgs mechanism; these three absorbed bosons yield the massive bosons of the weak interaction. The fourth gauge boson is the photon of electromagnetism and remains massless. This theory has made a number of predictions, including a prediction of the masses of the Z and W bosons before their discovery. On 4 July 2012, the CMS and ATLAS experimental teams at the Large Hadron Collider independently announced that they had confirmed the formal discovery of a previously unknown boson of mass between 125 and 127 GeV/c², whose behaviour so far was "consistent with" a Higgs boson, while adding a cautious note that further data and analysis were needed before positively identifying the new boson as a Higgs boson of some type. By 14 March 2013, the Higgs boson had been tentatively confirmed to exist.

Violation of symmetry

The laws of nature were long thought to remain the same under mirror reflection. The results of an experiment viewed via a mirror were expected to be identical to the results of a mirror-reflected copy of the experimental apparatus. This so-called law of parity conservation was known to be respected by classical gravitation, electromagnetism and the strong interaction; it was assumed to be a universal law. However, in the mid-1950s Chen-Ning Yang and Tsung-Dao Lee suggested that the weak interaction might violate this law. Chien-Shiung Wu and collaborators discovered in 1957 that the weak interaction violates parity, earning Yang and Lee the 1957 Nobel Prize in Physics. Although the weak interaction was once described by Fermi's theory, the discovery of parity violation and of renormalization theory suggested that a new approach was needed. In 1957, Robert Marshak and George Sudarshan and, somewhat later, Richard Feynman and Murray Gell-Mann proposed a V−A (vector minus axial vector, or left-handed) Lagrangian for weak interactions.
In this theory, the weak interaction acts only on left-handed particles (and right-handed antiparticles). Since the mirror reflection of a left-handed particle is right-handed, this explains the maximal violation of parity. The V−A theory was developed before the discovery of the Z boson, so it did not include the right-handed fields that enter in the neutral-current interaction. However, this theory allowed a compound symmetry, CP, to be conserved. CP combines parity P (switching left to right) with charge conjugation C (switching particles with antiparticles). Physicists were again surprised when, in 1964, James Cronin and Val Fitch provided clear evidence in kaon decays that CP symmetry could be broken too, winning them the 1980 Nobel Prize in Physics. In 1973, Makoto Kobayashi and Toshihide Maskawa showed that CP violation in the weak interaction required more than two generations of particles, effectively predicting the existence of a then-unknown third generation. This discovery earned them half of the 2008 Nobel Prize in Physics. Unlike parity violation, CP violation occurs in only a small number of instances, but it is widely held to be part of the answer to the difference between the amounts of matter and antimatter in the universe; it thus forms one of Andrei Sakharov's three conditions for baryogenesis.
- Griffiths, David (2009). Introduction to Elementary Particles. pp. 59–60. ISBN 978-3-527-40601-2.
- "The Nobel Prize in Physics 1979: Press Release". NobelPrize.org. Nobel Media. Retrieved 22 March 2011.
- Fermi, Enrico (1934). "Versuch einer Theorie der β-Strahlen. I". Zeitschrift für Physik A. 88 (3–4): 161–177. Bibcode:1934ZPhy...88..161F. doi:10.1007/BF01351864.
- Wilson, Fred L. (December 1968). "Fermi's Theory of Beta Decay". American Journal of Physics. 36 (12): 1150–1160. Bibcode:1968AmJPh..36.1150W. doi:10.1119/1.1974382.
- "Steven Weinberg, Weak Interactions, and Electromagnetic Interactions".
- "1979 Nobel Prize in Physics". Nobel Prize. Archived from the original on 7 July 2014.
- Cottingham & Greenwood (1986, 2001), p. 8
- W.-M. Yao et al. (Particle Data Group) (2006). "Review of Particle Physics: Quarks" (PDF). Journal of Physics G. 33: 1–1232. arXiv: . Bibcode:2006JPhG...33....1Y. doi:10.1088/0954-3899/33/1/001.
- Peter Watkins (1986). Story of the W and Z. Cambridge: Cambridge University Press. p. 70. ISBN 978-0-521-31875-4.
- "Coupling Constants for the Fundamental Forces". HyperPhysics. Georgia State University. Retrieved 2 March 2011.
- J. Christman (2001). "The Weak Interaction" (PDF). Physnet. Michigan State University. Archived from the original (PDF) on 20 July 2011.
- "Electroweak". The Particle Adventure. Particle Data Group. Retrieved 3 March 2011.
- Walter Greiner; Berndt Müller (2009). Gauge Theory of Weak Interactions. Springer. p. 2. ISBN 978-3-540-87842-1.
- Cottingham & Greenwood (1986, 2001), p. 29
- Cottingham & Greenwood (1986, 2001), p. 28
- Cottingham & Greenwood (1986, 2001), p. 30
- Baez, John C.; Huerta, John (2009). "The Algebra of Grand Unified Theories". Bull. Am. Math. Soc. 0904: 483–552. arXiv: . Bibcode:2009arXiv0904.1556B. doi:10.1090/s0273-0979-10-01294-2. Retrieved 15 October 2013.
- K. Nakamura et al. (Particle Data Group) (2010). "Gauge and Higgs Bosons" (PDF). Journal of Physics G. 37. Bibcode:2010JPhG...37g5021N. doi:10.1088/0954-3899/37/7a/075021.
- K. Nakamura et al. (Particle Data Group) (2010). "n" (PDF). Journal of Physics G. 37: 7. Bibcode:2010JPhG...37g5021N. doi:10.1088/0954-3899/37/7a/075021.
- "The Nobel Prize in Physics 1979".
NobelPrize.org. Nobel Media. Retrieved 26 February 2011.
- C. Amsler et al. (Particle Data Group) (2008). "Review of Particle Physics – Higgs Bosons: Theory and Searches" (PDF). Physics Letters B. 667: 1–6. Bibcode:2008PhLB..667....1A. doi:10.1016/j.physletb.2008.07.018.
- "New results indicate that new particle is a Higgs boson | CERN". Home.web.cern.ch. Retrieved 20 September 2013.
- Charles W. Carey (2006). "Lee, Tsung-Dao". American Scientists. Facts on File Inc. p. 225. ISBN 9781438108070.
- "The Nobel Prize in Physics 1957". NobelPrize.org. Nobel Media. Retrieved 26 February 2011.
- "The Nobel Prize in Physics 1980". NobelPrize.org. Nobel Media. Retrieved 26 February 2011.
- M. Kobayashi; T. Maskawa (1973). "CP-Violation in the Renormalizable Theory of Weak Interaction" (PDF). Progress of Theoretical Physics. 49 (2): 652–657. Bibcode:1973PThPh..49..652K. doi:10.1143/PTP.49.652. hdl:2433/66179.
- "The Nobel Prize in Physics 1980". NobelPrize.org. Nobel Media. Retrieved 17 March 2011.
- Paul Langacker (2001). "CP Violation and Cosmology". In Cecilia Jarlskog. CP Violation. London, River Edge: World Scientific Publishing Co. p. 552. ISBN 9789971505615.
- R. Oerter (2006). The Theory of Almost Everything: The Standard Model, the Unsung Triumph of Modern Physics. Plume. ISBN 978-0-13-236678-6.
- B.A. Schumm (2004). Deep Down Things: The Breathtaking Beauty of Particle Physics. Johns Hopkins University Press. ISBN 0-8018-7971-X.
- Walter Greiner; B. Müller (2000). Gauge Theory of Weak Interactions. Springer. ISBN 3-540-67672-4.
- G.D. Coughlan; J.E. Dodd; B.M. Gripaios (2006). The Ideas of Particle Physics: An Introduction for Scientists (3rd ed.). Cambridge University Press. ISBN 978-0-521-67775-2.
- W. N. Cottingham; D. A. Greenwood (2001). An Introduction to Nuclear Physics (2nd ed.). Cambridge University Press. p. 30. ISBN 978-0-521-65733-4.
- D.J. Griffiths (1987). Introduction to Elementary Particles. John Wiley & Sons. ISBN 0-471-60386-4.
- G.L. Kane (1987). Modern Elementary Particle Physics. Perseus Books. ISBN 0-201-11749-5.
- D.H. Perkins (2000). Introduction to High Energy Physics. Cambridge University Press. ISBN 0-521-62196-8.
Dr Jean-Hervé Lignot (Louis Pasteur University) and Dr Robert K. Pope (Indiana University South Bend) will talk about the implications this has for the way these snakes digest food on Saturday 31st March at the Society for Experimental Biology's Annual Meeting in Glasgow. "Juvenile pythons normally eat every week, while adults can have a meal every month and can even stop feeding for several months under certain circumstances," explains Dr Lignot. "They are therefore physiologically fine-tuned to cope with prolonged fasting, re-feeding on large meals, and intense digestion and nutrient absorption." Researchers monitored changes in the python gut after feeding. They observed drastic morphological changes, which coincided with a rapid increase in body temperature. Cell replication and death were triggered soon after feeding, as new cells were produced and worn-out cells eliminated. In this way the stomach and intestine re-modelled themselves in anticipation of the next fasting and feeding periods. An exciting development was the discovery of a new cell type in the small intestine of pythons which is responsible for the degradation of bone. Small particles observed in the intestine and colon of pythons within hours of feeding were found to have originated from the prey's skeleton. These particles are degraded in specialised cells, shaped like golf tees, and the components released into the bloodstream. This process is thought to allow pythons to optimise absorption of calcium (a component of bone) from their meals. Gillian Dugan | EurekAlert!
NOAA's GOES-13 satellite sits in a fixed orbit and monitors the weather in the eastern half of the continental United States and the Atlantic Ocean. NASA's GOES Project at NASA's Goddard Space Flight Center in Greenbelt, Maryland, uses the data from GOES-13 to create imagery. NASA's GOES Project created an image of Tropical Depression 2 from June 17 at 1:10 p.m. EDT. Tropical Depression 2 formed in the western Caribbean Sea during the early afternoon hours (Eastern Daylight Time) on June 17. NOAA's GOES-13 satellite captured an image of the storm as it consolidated enough to become a tropical depression while approaching the coast of Belize. Credit: NASA GOES Project/NASA's Goddard Space Flight Center. Looking closely at the imagery, strong thunderstorms are firing up around the center of circulation, just offshore from Belize. The clouds associated with the depression stretch much farther, from far western Cuba to the eastern Yucatan Peninsula and over Belize and Honduras. The National Hurricane Center designated the low-pressure area as Tropical Depression 2 at 11 a.m. EDT. At that time it had maximum sustained winds near 35 mph (55 kph) and was moving to the west-northwest at 13 mph (20 kph). Tropical Depression 2 is located near 16.2 north and 87.6 west, about 60 miles (95 km) east of Monkey River Town, Belize. The center of the depression will move inland over southern Belize this afternoon, where no change in strength is expected as it moves over land. The depression could emerge into the Bay of Campeche on Tuesday, according to the National Hurricane Center (NHC). The NHC noted that an increase in strength is possible on Tuesday if the center emerges into the Bay of Campeche; if that happens, Tropical Depression 2 could become Tropical Storm Barry. Patrick Lynch | EurekAlert!
Covariation in reproductive variables across 26 families of teleost fish is examined to investigate an evolutionary trade-off between the size and number of offspring. Clutch size is positively correlated with fish length, but there is no significant correlation between egg volume and fish length. There is a significant negative correlation between clutch size and egg volume after removing the effects of body size, suggesting an evolutionary trade-off. This pattern is found within both freshwater and marine fish. The product of clutch size and egg volume is not correlated with either clutch size or egg volume after removing the effects of body size.
Image Credit: Clodagh C. O'Shea, Salk Institute, La Jolla, CA. How can six and a half feet of DNA be folded into the tiny nucleus of a cell? Researchers funded by the National Institutes of Health have developed a new imaging method that visualizes a very different DNA structure, featuring small folds of DNA in close proximity. The study reveals that the DNA–protein structure known as chromatin is a much more diverse and flexible chain than previously thought. This provides exciting new insights into how chromatin directs a nimbler interaction between different genes to regulate gene expression, and it provides a mechanism for chemical modifications of DNA to be maintained as cells divide. The results are featured in the July 28 issue of Science. For decades, experiments suggested a hierarchical folding model in which DNA segments spooled around 11-nanometer protein particles, which assembled into rigid fibers that folded into larger and larger loops to form chromosomes. However, that model was based on structures of chromatin in vitro, after harsh chemical extraction of cellular components. Now, researchers at the Salk Institute, La Jolla, California, funded by the NIH Common Fund, have developed an electron microscopy technique called ChromEMT that enables the 3D structure and packing of DNA to be visualized inside the nuclei of intact cells. Contrary to the longstanding textbook models, DNA forms flexible chromatin chains with fluctuating diameters between 5 and 24 nanometers that collapse and pack together in a wide range of configurations and concentrations. The newly observed and diverse array of structures provides for a more flexible human genome that can bend at varying lengths and rapidly collapse into chromosomes at cell division. It explains how variations in DNA sequences and interactions could result in different structures that exquisitely fine-tune the activity and expression of genes. "This is groundbreaking work that will change the genetics and biochemistry textbooks," remarks Roderic I. Pettigrew, PhD, MD, director of the National Institute of Biomedical Imaging and Bioengineering (NIBIB), which administered the grant. "It's an outstanding example of how constantly improving imaging techniques continue to show the true structure of everything from neuronal connections in the brain to the correct visualization of gene expression in the cell. It reveals how these complex biological structures are able to perform the myriad intricate and elaborate functions of the human body." "We identified a fluorescent small molecule that binds specifically to DNA and can be visualized using advanced new 3D imaging methods with the electron microscope," explained Clodagh O'Shea, PhD, the leader of the Salk group, associate professor, and Howard Hughes Medical Institute Faculty Scholar. "The system enables individual DNA particles, chains, and chromosomes to be visualized in 3D in a live, single cell. Thus, we are able to see the fine structure and interactions of DNA and chromatin in the nucleus of intact, live cells." Dr. O'Shea's team included collaborators from the University of California, San Diego, and the National Center for Microscopy and Imaging Research, San Diego. The researchers believe their discovery dovetails with their research on how tumor viruses and cancer mutations change a cell's DNA structure and organization to cause uncontrolled cell growth.
It could enable the design of new drugs that manipulate the structure and organization of DNA to make a tumor cell ‘remember’ how to be normal again, or impart new functions that improve the human condition. “To see the human genome in all of its 3D glory is the dream of every biologist. Now, we are working to design probes that will allow us to also see the proteins that bind to the DNA to turn genes on and off. We will then be able to view an actual gene in action,” concluded O’Shea.

The research was supported by grants from the NIH Common Fund (U01EB021247), the National Cancer Institute, and the National Institute of General Medical Sciences. Additional funding was provided by the Howard Hughes Medical Institute and the W.M. Keck Foundation.
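For a sense of the packing problem the article opens with, here is a back-of-the-envelope estimate (our own round numbers, not figures from the study): roughly two metres, about six and a half feet, of DNA must fit inside a nucleus only a few micrometres across, so

\[ \frac{L_{\mathrm{DNA}}}{d_{\mathrm{nucleus}}} \approx \frac{2\ \mathrm{m}}{6\times 10^{-6}\ \mathrm{m}} \approx 3\times 10^{5}. \]

The molecule is some five orders of magnitude longer than its container, which is why the flexibility and packing density of chromatin chains matter so much.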
Fire and Succession in the Conifer Forests of Northern North America

Ecosystem theories must encompass the conclusions now emerging from studies of fire and vegetation in fire-dependent northern conifer forests. Such forests comprise more than half the present forest area of North America, including most of the forests that have never been altered through logging or land clearing. Furthermore, vast areas are still influenced by natural lightning-fire regimes, and it is possible to study directly the role of fire in controlling vegetation mosaics and ecosystem function.

Keywords: Boreal Forest, Vegetation Change, Fire Regime, Vegetative Reproduction, Crown Fire
Delphi Programming for Beginners - Lesson 3: Basic Components

The VCL (Visual Component Library) is a stable part of Delphi. There are two types of components: visual (the ones users can see, such as TButton) and non-visual (the ones users cannot see, such as TOpenDialog). Let's go through the basic ones. The accompanying picture shows several basic components.

Common Properties

Because every component derives from a common base, most of them share some common properties:
- Enabled - enables and disables the component
- Text and Caption - the main text or the caption of the component (depending on its type)
- TabOrder and TabStop - determine whether the component can be reached with the TAB key (TabStop) and its position in the tab order (TabOrder)
- ReadOnly - determines whether the data can be edited (unlike with Enabled, the text can still be selected, for example with the mouse)
- Font and ParentFont - the font of the component; if ParentFont is set, it takes priority over Font
- Align - alignment of the component; there are several options, for example alNone (no alignment)
- Cursor - the mouse cursor shown when the pointer is over the component
- BevelInner, BevelOuter, BevelWidth, BevelKind - a frame (line) around the component
- Left, Top, Width, Height - the position and size of the component
- Visible - whether the component is visible
- Color - the (background) color of the component
- Font.Color - the color of the component's font

TButton

The user can left-click a button, or select it with the TAB key and confirm with the Spacebar. Both ways fire the OnClick event, where the programmer can react. Cancel and Default bind a button to either ESC (Cancel) or Enter (Default), which is important in dialogs. ModalResult - after the button is pressed, this value is returned to the calling dialog.

TEdit

Used for entering a single line of text, for example a password. If you set PasswordChar, the component is used for password input.

TLabel

This component is generally used for showing text. The text is set in the Caption property. A very important property is FocusControl: it designates the component that will receive focus when the label is clicked. If an & sign appears in the caption, the following letter is used as an accelerator (if Windows allows it); that is, pressing ALT + that key gives the designated component focus. AutoSize dynamically determines and sets the width based on the length of the text.

TCheckBox

Allows users to choose between two options, YES or NO (property Checked), or among three options, which is set via the State property.

TRadioButton

Allows users to choose one option out of several. Warning: TRadioButton automatically ensures that only one button is selected, and this applies to all TRadioButtons of one owner (for example, on a form or on a panel). If we put two panels on a form and several TRadioButtons on each, only the ones within the same panel influence each other. Ideally, use the TGroupBox component, which has a title and a frame; in the picture below you can also see the difference between TRadioButton and TCheckBox (more options can be chosen within a group of check boxes).

TMemo

Similar to TEdit, but it works with multiple lines. The lines are stored in the Lines property, which also allows saving and loading them. ScrollBars sets the visibility of scrollbars. WordWrap sets whether text continues on the next line when it exceeds the current one. HideSelection hides the selection when the component is not active (does not have focus).

TListBox

Looks similar to TMemo, but a whole line is always selected (the index of the selected line is in ItemIndex, available only at run time). The items are stored in the Items property.
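To make ItemIndex concrete, here is a minimal sketch of a handler that reads the current selection (the component names lst1 and btnShow, and the handler itself, are our illustration, not part of the lesson's sample project):

procedure TForm1.btnShowClick(Sender: TObject);
begin
  // ItemIndex is -1 while nothing is selected
  if lst1.ItemIndex <> -1 then
    ShowMessage('Selected: ' + lst1.Items[lst1.ItemIndex])
  else
    ShowMessage('Nothing is selected');
end;

ShowMessage lives in the Dialogs unit, which Delphi includes in a new form unit by default.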
TComboBox

This component allows users to pick an option from a drop-down list, or to type in a text (if that is enabled). The Style property determines whether the user can type text (csDropDown) or not (csDropDownList). The items are stored in the Items property. The chosen item is given by the ItemIndex property, where -1 means nothing is selected and 0 means the first item. If the style is csDropDown, the typed text is available in the Text property.

Example

Using these few components, we can now actually create something useful. Let's create this form:

Into the body of the OnClick event insert this code:

procedure TForm1.btnAddClick(Sender: TObject);
begin
  if rbListBox.Checked then
    lst1.Items.Add(edt1.Text)
  else
    mem1.Lines.Add(edt1.Text);
  edt1.Text := ''; // clear
end;

After pressing the button, the text is added to the selected component; once that is done, the text in the TEdit is erased. Now we just add the code for the CheckBox, which is nothing difficult:

procedure TForm1.chkReadOnlyMemoClick(Sender: TObject);
begin
  mem1.ReadOnly := chkReadOnlyMemo.Checked;
end;

Homework: replace the RadioButtons with a ComboBox with two items ("insert into listbox", "insert into memo"), without the option to type a text. Also, make only the chosen component visible, and have the chosen one always appear in the same place (that means you will have to change the value of the Left property). Finally, add a CheckBox with the caption "Bold" which, if checked, makes the text in the Memo bold (via the Font property). One possible shape of the solution is sketched at the end of this lesson. A sample project for the homework can be downloaded from here.

About the Author: Ing. Radek Červinka
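As promised, here is one possible shape of the homework solution - a sketch only, with hypothetical component names (cmbTarget, chkBold) rather than those of the downloadable sample project:

procedure TForm1.cmbTargetChange(Sender: TObject);
begin
  // show only the chosen component; both occupy the same place
  lst1.Visible := cmbTarget.ItemIndex = 0;
  mem1.Visible := cmbTarget.ItemIndex = 1;
  mem1.Left := lst1.Left;
end;

procedure TForm1.btnAddClick(Sender: TObject);
begin
  if cmbTarget.ItemIndex = 0 then
    lst1.Items.Add(edt1.Text)
  else
    mem1.Lines.Add(edt1.Text);
  edt1.Text := ''; // clear
end;

procedure TForm1.chkBoldClick(Sender: TObject);
begin
  // Font.Style is a set; add or remove the fsBold element
  if chkBold.Checked then
    mem1.Font.Style := mem1.Font.Style + [fsBold]
  else
    mem1.Font.Style := mem1.Font.Style - [fsBold];
end;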
The sound of silence on dying coral reefs disorientates fish larvae

You can hear former bustling coral reefs dying due to the impact of human activity, according to new research from the Universities of Essex and Exeter.

Coral reefs are amongst the noisiest environments on our planet, and healthy reefs can be heard using underwater microphones from kilometres away. However, scientists have found that coral reefs impacted by human activity, such as overfishing, are much quieter than protected reefs, which can have a big impact on the fish and invertebrates that rely on the reefs for survival.

Led by Dr Julius Piercy, from the University of Essex, the study involved taking acoustic recordings of coral reefs with different levels of protection around islands in the Philippines. The research found that the noise produced by the few remaining resident fish and crustaceans on unprotected reefs was only one third of the sound produced at bustling, healthy reef communities.

This is particularly important to the larval stages of reef fish and invertebrates, which spend the first few days of their life away from reefs and use sound as an orientation cue to find their way back. With less sound being produced at impacted reefs, the distance over which larvae can detect habitat is ten times smaller, affecting the replenishment of future generations needed to build up and maintain healthy population levels.

“In an environment where underwater noise plays such an important role in the population dynamics of coral reefs, it is alarming to find such a large effect of human impact on the natural acoustic environment,” explained Dr Piercy. “This puts reef sound in the spotlight for the people who manage coral reef ecosystems on two counts. Firstly, they might need to consider reef sound as an integral part of the design of marine protected area networks, to ensure that there is sufficient recruitment of larvae within and between reserves and neighbouring reefs. Secondly, this study shows sound can be useful in monitoring the health of coral reefs.”

With growing evidence demonstrating the direct impacts of man-made noise on aquatic life, these findings highlight additional indirect human impacts - such as overfishing and landscape development - on natural underwater sounds.

MCS Senior Biodiversity Policy Officer Dr Jean-Luc Solandt agrees with the findings: “Previous research has shown the importance of noise in attracting young fish to coral reefs, where they are likely to be protected and survive. I’ve not long returned from some very noisy reefs in Musandam in Oman, carrying out our ReefCheck surveys. This picture (below, right) shows the coral Pocillopora; it hosts millions of noisy ‘snapping shrimp’ that collectively create a noise like thousands and thousands of crisp packets being scrunched together!”

Dr Steve Simpson, from Biosciences at the University of Exeter, added: “Taking sound recordings is a cheap, fast and objective way to get a broad idea of whether a reef is in a good condition or not.
While it cannot replace detailed visual surveys conducted by snorkelers or divers, it gives a good account of the cryptic and nocturnal species missed in visual censuses, and quickly provides a general picture of the state of coral reefs without requiring time-consuming surveys and extensive training.”

The researchers also found that reef sounds can be detected further away than predicted, increasing previous estimates of the likely detection zone for recruiting larvae and increasing the potential importance of reef sound in attracting new fish and crustaceans to coral reefs. The study highlights the need to further characterise reef soundscapes and identify the acoustic cues that larvae tune into when seeking a suitable home.

Dr Simpson said: “We still know very little about what sounds these animals are listening to, and it is likely to be very different between species. Combined with recent findings that fish dislike the smell of impacted reefs (another homing cue used by the larvae), there is a real need to understand how human impacts can indirectly affect the success of future generations of reef organisms.”
When the crisis at the Fukushima power plant first began six years ago, there were legitimate fears that the radioactive particles spewing from the fuel rods could blanket the Earth. Since then, the experts and the mainstream media have downplayed that possibility: either they don't think it's possible, or they don't think it would be significant. But recently, scientists in Norway have revealed that the radiation emitted from Fukushima really did have a global reach.

It has been over half a decade since Japan's Fukushima-Daiichi nuclear plant suffered a catastrophic meltdown due to the tsunami that struck the island nation, but scientists are only now confirming its far-reaching effects. After conducting the first worldwide survey to measure the ultimate radiation exposure caused by the reactor meltdown, researchers at the Norwegian Institute for Air Research finally have a figure for exactly how much extra radiation humanity was exposed to.

According to the group's data, over 80 percent of the radiation released by the meltdown ended up in either the ocean or the ice at the north and south poles. Of the remaining radiation, each human on the planet received roughly 0.1 millisievert, which equates to about "one extra X-ray each," according to the team.

Fortunately, that is not a lot of radiation - significantly less than the average amount of background radiation that most people receive in a year. The fact that we all received the equivalent of an X-ray is not alarming in itself. What is alarming, however, is that Fukushima did this to every man, woman and child on Earth.
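To put the 0.1 millisievert figure in perspective (our own comparison: the 2.4 mSv per year world-average natural background is the commonly cited UNSCEAR estimate, not a number from the Norwegian study):

\[ \frac{0.1\ \mathrm{mSv}}{2.4\ \mathrm{mSv/yr}} \approx 0.04, \]

so the worldwide Fukushima dose amounts to roughly 4 percent of a single year's average natural background exposure.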
A two-dimensional gas is a collection of objects constrained to move in a planar or other two-dimensional space in a gaseous state. The objects can be ideal gas elements, such as rigid disks undergoing elastic collisions; elementary particles; or any objects in physics that obey laws of motion. The concept of a two-dimensional gas is used either because:
- (a) the issue being studied actually takes place in two dimensions (as with certain surface molecular phenomena); or
- (b) the two-dimensional form of the problem is more tractable than the analogous, mathematically more complex, three-dimensional problem.

While physicists have studied simple two-body interactions on a plane for centuries, the attention given to the two-dimensional gas (having many bodies in motion) is a 20th-century pursuit. Applications have led to better understanding of superconductivity, gas thermodynamics, certain solid-state problems, and several questions in quantum mechanics.

Research at Princeton University in the early 1960s posed the question of whether the Maxwell–Boltzmann statistics and other thermodynamic laws could be derived from Newtonian laws applied to multi-body systems, rather than through the conventional methods of statistical mechanics. While this question appears intractable from a three-dimensional closed-form solution, the problem behaves differently in two-dimensional space. In particular, an ideal two-dimensional gas was examined from the standpoint of relaxation time to the equilibrium velocity distribution, given several arbitrary initial conditions of the ideal gas. Relaxation times were shown to be very fast: on the order of the mean free time.

In 1996 a computational approach was taken to the classical-mechanics non-equilibrium problem of heat flow within a two-dimensional gas. This simulation work showed that for N > 1500, good agreement with continuous systems is obtained.
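For orientation, the equilibrium state such relaxation studies aim at is the two-dimensional Maxwell–Boltzmann speed distribution (standard kinetic theory, stated here as background rather than taken from the cited work): for particles of mass m at temperature T,

\[ f(v) = \frac{m}{k_B T}\, v\, \exp\!\left(-\frac{m v^2}{2 k_B T}\right), \qquad \int_0^\infty f(v)\,dv = 1, \]

the planar analogue of the familiar three-dimensional form, with a single factor of v coming from the two-dimensional density of states.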
While the principle of the cyclotron to create a two-dimensional array of electrons has existed since 1934, the tool was originally not used to analyze interactions among the electrons (e.g., two-dimensional gas dynamics). An early research investigation explored a two-dimensional electron gas with respect to cyclotron resonance behavior and the de Haas–van Alphen effect. The investigator was able to demonstrate that, for a two-dimensional gas, the de Haas–van Alphen oscillation period is independent of the short-range electron interactions.

Later applications to Bose gas

Experimental research with a molecular gas

In general, 2D molecular gases are experimentally observed on weakly interacting surfaces such as metals, graphene, etc., at non-cryogenic temperatures and low surface coverage. As a direct observation of individual molecules is not possible due to fast diffusion of molecules on a surface, experiments are either indirect (observing the interaction of a 2D gas with its surroundings, e.g., condensation of a 2D gas) or integral (measuring integral properties of 2D gases, e.g., by diffraction methods).

An example of the indirect observation of a 2D gas is the study of Stranick et al., who used a scanning tunnelling microscope in ultrahigh vacuum (UHV) to image the interaction of a two-dimensional benzene gas layer in contact with a planar solid interface at 77 kelvins. The experimenters were able to observe mobile benzene molecules on the surface of Cu(111), to which a planar monomolecular film of solid benzene adhered. Thus the scientists could witness the equilibrium of the gas in contact with its solid state.

Integral methods that are able to characterize a 2D gas usually fall into the category of diffraction (see, for example, the study of Kroger et al.). The exception is the work of Matvija et al., who used a scanning tunneling microscope to directly visualize the local time-averaged density of molecules on a surface. This method is of special importance as it provides an opportunity to probe local properties of 2D gases; for instance, it enables direct visualization of the pair correlation function of a 2D molecular gas in real space.

If the surface coverage of adsorbates is increased, a 2D liquid is formed, followed by a 2D solid. It was shown that the transition from a 2D gas to a 2D solid state can be controlled by a scanning tunneling microscope, which can affect the local density of molecules via an electric field.

Implications for future research

A multiplicity of theoretical physics research directions exist for study via a two-dimensional gas. Examples are:
- Complex quantum mechanics phenomena, whose solutions may be more appropriate in a two-dimensional environment;
- Studies of phase transitions (e.g., melting phenomena at a planar surface);
- Thin-film phenomena such as chemical vapor deposition;
- Surface excitations of a solid.

References
- Feld, M.; et al. (2011). "Observation of a pairing pseudogap in a two-dimensional gas". Nature 480 (7375): 75–78. Bibcode:2011Natur.480...75F. doi:10.1038/nature10627.
- Hogan, C.M. (1964). Non-equilibrium statistical mechanics of a two-dimensional gas. Dissertation, Princeton University, Department of Physics, May 4, 1964.
- Risso, D.; Cordero, P. (1996). "Two-Dimensional Gas of Disks: Thermal Conductivity". Journal of Statistical Physics 82: 1453–1466.
- Kohn, Walter (1961). "Cyclotron Resonance and de Haas–van Alphen Oscillations of an Interacting Electron Gas". Physical Review 123: 1242–1244. Bibcode:1961PhRv..123.1242K. doi:10.1103/physrev.123.1242.
- Bagnato, Vanderlei; Kleppner, Daniel (1991). "Bose–Einstein condensation in low-dimensional traps". American Physical Society, 8 April 1991.
- Stranick, S. J.; Kamna, M. M.; Weiss, P. S. (1994). "Atomic Scale Dynamics of a Two-Dimensional Gas-Solid Interface". Pennsylvania State University, Dept. of Chemistry, 3 June 1994.
- Kroger, I. (2009). "Tuning intermolecular interaction in long-range-ordered submonolayer organic films". Nature Physics 5: 153–158. Bibcode:2009NatPh...5..153S. doi:10.1038/nphys1176.
- Matvija, Peter; Rozbořil, Filip; Sobotík, Pavel; Ošťádal, Ivan; Kocán, Pavel (2017). "Pair correlation function of a 2D molecular gas directly visualized by scanning tunneling microscopy". The Journal of Physical Chemistry Letters 8: 4268–4272. doi:10.1021/acs.jpclett.7b01965.
- Waldmann, Thomas; Klein, Jens; Hoster, Harry E.; Behm, R. Jürgen (2012). "Stabilization of Large Adsorbates by Rotational Entropy: A Time-Resolved Variable-Temperature STM Study". ChemPhysChem.
- Matvija, Peter; Rozbořil, Filip; Sobotík, Pavel; Ošťádal, Ivan; Pieczyrak, Barbara; Jurczyszyn, Leszek; Kocán, Pavel (2017). "Electric-field-controlled phase transition in a 2D molecular layer". Scientific Reports 7: 7357. Bibcode:2017NatSR...7.7357M. doi:10.1038/s41598-017-07277-7.
When the Intergovernmental Panel on Climate Change recently requested a figure for its annual report, to show global temperature trends over the last 10,000 years, the University of Wisconsin-Madison's Zhengyu Liu knew that was going to be a problem.

"We have been building models and there are now robust contradictions," says Liu, a professor in the UW-Madison Center for Climatic Research. "Data from observation says global cooling. The physical model says it has to be warming."

Writing in the journal Proceedings of the National Academy of Sciences today, Liu and colleagues from Rutgers University, the National Center for Atmospheric Research, the Alfred Wegener Institute for Polar and Marine Research, the University of Hawaii, the University of Reading, the Chinese Academy of Sciences, and the University at Albany describe a consistent global warming trend over the course of the Holocene, our current geological epoch, counter to a study published last year that described a period of global cooling before human influence.

The scientists call this problem the Holocene temperature conundrum. It has important implications for understanding climate change and evaluating climate models, as well as for the benchmarks used to create climate models for the future. It does not, the authors emphasize, change the evidence of human impact on global climate beginning in the 20th century.

"The question is, 'Who is right?'" says Liu. "Or, maybe none of us is completely right. It could be partly a data problem, since some of the data in last year's study contradicts itself. It could partly be a model problem because of some missing physical mechanisms."

Over the last 10,000 years, Liu says, we know atmospheric carbon dioxide rose by 20 parts per million before the 20th century, and the massive ice sheet of the Last Glacial Maximum has been retreating. These physical changes suggest that, globally, the annual mean global temperature should have continued to warm, even as regions of the world experienced cooling, such as during the Little Ice Age in Europe between the 16th and 19th centuries.

The three models Liu and colleagues generated took two years to complete. They ran simulations of climate influences that spanned from the intensity of sunlight on Earth to global greenhouse gases, ice sheet cover and meltwater changes. Each shows global warming over the last 10,000 years.

Yet the bio- and geo-thermometers used last year in a study in the journal Science suggest a period of global cooling beginning about 7,000 years ago and continuing until humans began to leave a mark, the so-called "hockey stick" on the current climate model graph, which reflects a profound global warming trend.

In that study, the authors looked at data collected by other scientists from ice core samples, phytoplankton sediments and more at 73 sites around the world. The data they gathered sometimes conflicted, particularly in the Northern Hemisphere. Because interpretation of these proxies is complicated, Liu and colleagues believe they may not adequately address the bigger picture. For instance, biological samples taken from a core deposited in the summer may differ from samples taken at the exact same site from winter sediment. It's a limitation the authors of last year's study recognize.

"In the Northern Atlantic, there is cooling and warming data the (climate change) community hasn't been able to figure out," says Liu.
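To make concrete why the physical models insist on warming, one can apply the standard logarithmic approximation for CO2 radiative forcing (a textbook formula, Myhre et al. 1998, used here with illustrative round numbers consistent with the article's 20 ppm statement; it is not a calculation from the study itself):

\[ \Delta F = 5.35 \ln\frac{C}{C_0}\ \mathrm{W\,m^{-2}} \approx 5.35 \ln\frac{280\ \mathrm{ppm}}{260\ \mathrm{ppm}} \approx 0.4\ \mathrm{W\,m^{-2}}. \]

A sustained positive forcing of this size, combined with a retreating ice sheet that lowers planetary albedo, pushes simulated global mean temperature upward over the Holocene, which is the heart of the conundrum when proxy compilations show cooling.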
With their current knowledge, Liu and colleagues don't believe any physical forces over the last 10,000 years could have been strong enough to overwhelm the warming indicated by the increase in global greenhouse gases and the melting ice sheet, nor do the physical models in the study show that it's possible.

"The fundamental laws of physics say that as the temperature goes up, it has to get warmer," Liu says.

Caveats in the latest study include a lack of influence from volcanic activity in the models, which could lead to cooling - though the authors point out there is no evidence to suggest significant volcanic activity during the Holocene - and no dust or vegetation contributions, which could also cause cooling.

Liu says climate scientists plan to meet this fall to discuss the conundrum. "Both communities have to look back critically and see what is missing," he says. "I think it is a puzzle."

The study was supported by grants from the (U.S.) National Science Foundation, the Chinese National Science Foundation, the U.S. Department of Energy, and the Chinese Ministry of Science and Technology.

Zhengyu Liu | EurekAlert!
Volcanic Winter? Climatic Effects of the Largest Volcanic Eruptions

Calculations suggest that the largest volcanic eruptions could have significant effects on global climate. We estimate the amount of sulfur volatiles that could have been released in very large eruptions by scaling up from smaller historical eruptions. The greatest well-known Late Quaternary explosive eruption, Toba (Indonesia, 75,000 years B.P.), erupted at least 1000 km³ of magma and may have released enough sulfur volatiles to have formed 9 × 10¹⁴ to 5 × 10¹⁵ g of H₂SO₄ stratospheric aerosols. Basaltic fissure eruptions release even greater amounts of sulfur volatiles, which can be lofted into the stratosphere in convective plumes rising above fire fountains. The Roza flow eruption (about 700 km³ of magma) of the Miocene Columbia River Basalt Group could have produced up to 6 × 10¹⁵ g of aerosols. Distributed worldwide, these aerosol mass loadings would lead to effects ranging from a noticeable dimming of the sun to conditions similar to those described in some models of nuclear winter. Unless self-limiting mechanisms of stratospheric aerosol formation and removal are important, very large eruptions may lead to widespread darkness, cold weather, and acid precipitation. Even the minimum estimated effects of these great eruptions would represent significant perturbations of the global atmosphere.

Keywords: Optical Depth, Aerosol Optical Depth, Fissure Eruption, Stratospheric Aerosol, Volcanic Aerosol
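As an illustration of the scaling approach described in the abstract (our own symbolic sketch; the authors' actual estimates presumably also account for the sulfur content of each magma type):

\[ M_{\mathrm{aerosol}}^{\mathrm{Toba}} \sim M_{\mathrm{aerosol}}^{\mathrm{ref}} \times \frac{V_{\mathrm{Toba}}}{V_{\mathrm{ref}}}, \]

where the reference values come from smaller, well-observed historical eruptions. With a Toba volume of at least 1000 km³, the quoted range of 9 × 10¹⁴ to 5 × 10¹⁵ g then reflects the spread in per-volume sulfur yields among the candidate reference eruptions.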