With the costs of genome sequencing rapidly decreasing, and with infrastructure now in place for almost anyone with access to a computer to cheaply store, access, and analyze sequence information, emphasis is increasingly being placed on ways to apply genome data to real-world problems, including reducing dependency on fossil fuel. For the efficient production of bioenergy, this may be accomplished through the development of improved feedstocks. A recently published study examined the impact of very cheap sequence data (approximately US$1 per genome) on the improvement of switchgrass, a perennial grass well suited to biomass production. Results were published in the current issue of The Plant Genome. Characterizing the genetic component of natural variation is, or soon will be, cheap enough to incorporate through marker-assisted selection into almost all breeding programs. With the availability of cheap sequencing capacity, neither complete sequence assembly nor gene annotation is required to apply these techniques. In a species such as switchgrass there exists a great deal of phenotypic variation derived from latitudinal adaptation across its natural range and local adaptation to soil, temperature, and moisture conditions. The species is still largely undomesticated, so large gains might be realized through fixation of beneficial alleles in breeding populations. There are likely to be a few genes with large effects that will dramatically impact yields once incorporated into breeding programs. This occurred during the domestication of all our grain crops, but it may now take just a fraction of the time. The development of a dollar genome sequence could provide information highways that cut across several disciplines and drive the development of next-generation biomass feedstocks, bioproducts, and processes for replacing fossil fuels.
New feedstocks could produce sustainable high yields with minimal inputs in regions where competition with food is minimized, as well as provide ancillary environmental benefits associated with carbon sequestration and environmental remediation. Another result of inexpensive sequencing would be an increased use of comparative genomics. A comprehensive survey of genetic diversity would help guide conservation efforts to preserve germplasm diversity and allow reconstruction of past speciation events at a more detailed level. With access to multiple related genomes, similarities between closely related species would allow inference of missing data. For example, if a draft switchgrass genome assembly is incomplete as judged by comparison to an inbred genome or a more closely related grass, it will be possible to infer unresolved regions, including the composition of retrotransposon families and other abundant repetitive elements. Comparative approaches would be applied to better understand the molecular basis for differences between species that result in higher or lower yields in different environments. The full article is available at no charge for 30 days following the date of this summary. View the abstract at http://plantgenome.scijournals.org/content/2/1/5.full. The Crop Science Society of America (CSSA), founded in 1955, is an international scientific society of more than 6,000 members with its headquarters in Madison, WI. Members advance the discipline of crop science by acquiring and disseminating information about crop breeding and genetics; crop physiology; crop ecology, management, and quality; seed physiology, production, and technology; turfgrass science; forage and grazinglands; genomics, molecular genetics, and biotechnology; and biomedical and enhanced plants.
CSSA fosters the transfer of knowledge through an array of programs and services, including publications, meetings, career services, and science policy initiatives. Sara Uttech | Newswise Science News
May 03, 2017 10:24 AM EDT A recent study conducted by an international team of astronomers claims that the gamma-ray glow coming from the center of the Milky Way might be produced by pulsars. Previously, the source of the glow was attributed to dark matter. According to Astronomy, pulsars are the fast-spinning cores of collapsed ancient stars that were once up to 30 times more massive than the sun. The researchers used data from the Large Area Telescope on NASA's Fermi Gamma-ray Space Telescope to examine the central part of the Milky Way, where the glow is concentrated. Evidence that pulsars cause the light show in the galaxy puts the dark-matter explanation into further doubt; until now, no direct evidence of dark matter particles has been recorded. For the record, dark matter is thought to account for about 85 percent of all matter in the universe. Mattia Di Mauro, the lead researcher, said in previous press releases that mankind does not need to invoke invisible dark matter to understand the gamma-ray emissions in the galaxy. Di Mauro is from the Kavli Institute for Particle Astrophysics and Cosmology. Instead, the expert added, the team has identified the pulsar populations that may explain the observed emission. To better illustrate, dark matter is one of the biggest mysteries in modern physics. While scientists believe it exists, bends light from distant galaxies, and affects how galaxies rotate, they do not know its exact composition. Nevertheless, per Science Daily, the majority of experts say it is made up of "yet-to-be-discovered" particles that almost never interact with regular matter other than through gravity. One way to detect this strange material is when its particles decay or collide and annihilate each other; such collisions are believed to produce gamma rays. Gamma rays can also come from supernova remnants.
Lastly, the fact that scientists can still see gamma rays from the identified pulsar population today suggests that many of these pulsars are in binary systems with companion stars. © 2017 University Herald, All rights reserved. Do not reproduce without permission.
Edited By: R. B. Singh. 340 pages, tables, figures. This book focuses on bio-geographic dimensions, biodiversity conservation, and the sustainable use of biodiversity's components in socio-economic development. A holistic perspective of biodiversity conservation includes bio-geo monitoring and indicators, climate change, tourism, and invasive and alien species. Special attention is given to mountain, coastal, and marine biodiversity; eco-development in protected areas; local knowledge; technology transfer; and education and public awareness. The book comprises 26 chapters of conceptual and empirical case studies from developed and developing countries, and combines science and policy perspectives on biogeography and biodiversity. It is useful for students, researchers, and teachers in geography, environmental studies, biosciences, ecology, and policy science.
Global warming - Part 30

Fact: Earth's average temperature has risen by over 0.5°C in the last 100 years – the years since 1980 have been the hottest on record.

Global warming is caused by increased fossil fuel use: Since the industrial revolution, people have needed more energy for work and in the home – this has come from burning more fossil fuels, particularly coal and oil. This burning releases more carbon dioxide and methane into the atmosphere – these gases cause what's known as the "Greenhouse Effect".

Earth is like a giant greenhouse: Energy from the sun passes through the atmosphere as light and warms the Earth. When it reflects off the Earth's surface as heat, it is trapped by the atmosphere and can't get back into space – this is how a greenhouse keeps the heat inside, and it means that the Earth gets hotter.

Greenhouse gases need to be reduced: Britain and Europe want to reduce gas emissions – they are big users of fossil fuels. India and other LEDCs don't want to, because their rate of development would slow down. Oil states in the Gulf don't want to, because their revenues from oil sales would go down. The USA doesn't want to, because it doesn't want a fall in living standards.

Global warming causes sea levels to rise: Ice sheets and glaciers are beginning to melt. Sea levels have risen by 0.25 m in the last 100 years and will probably rise another 0.5 m in the next 100 years. Low-lying areas of the world are under threat of flooding – e.g. parts of Southeast England, the Nile and Ganges deltas, and most major world cities.

The world's climates are also changing: Droughts, floods and storms could become more severe, widespread and more common. The Northern Hemisphere wheat belt could become more arid and less productive. The Tundra could become warmer and support crop growth. The Sahara could spread north into Southern Europe. The North Atlantic Drift could be altered, and Britain could become much colder.
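The greenhouse mechanism described above can be illustrated with a simple zero-dimensional energy-balance estimate. The sketch below is illustrative only: the constants are standard textbook values, not figures from the essay, and the effective emissivity is a tuning parameter standing in for heat-trapping by greenhouse gases.

```python
# Zero-dimensional energy-balance sketch of the greenhouse effect.
# Constants are standard textbook values, not taken from the essay above.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant at Earth, W m^-2
ALBEDO = 0.3       # fraction of incoming sunlight reflected back to space

def equilibrium_temp(solar_constant, albedo, emissivity=1.0):
    """Temperature at which emitted heat balances absorbed sunlight."""
    absorbed = solar_constant * (1.0 - albedo) / 4.0  # averaged over the sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25  # kelvin

# With no greenhouse trapping, Earth would sit near 255 K (about -18 C);
# lowering the effective emissivity (heat retained by CO2 and methane)
# raises the equilibrium toward the observed ~288 K surface average.
print(equilibrium_temp(S0, ALBEDO))                   # no greenhouse effect
print(equilibrium_temp(S0, ALBEDO, emissivity=0.61))  # with greenhouse effect
```

The ~33 K gap between the two results is the essay's "giant greenhouse" in quantitative form: sunlight gets in as light, but the atmosphere impedes the heat on its way back out.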
Working aboard research vessels in the Atlantic, scientists mapped the distribution of nutrients including phosphorous and nitrogen and investigated how organisms such as phytoplankton are sustained in areas with low nutrient levels. They found that plants are able to grow in these regions because they are able to take advantage of iron minerals in Saharan dust storms. This allows them to use organic or ‘recycled’ material from dead or decaying plants when nutrients such as phosphorous – an essential component of DNA – in the ocean are low. Professor George Wolff, from the University’s Department of Earth and Ocean Sciences, explains: “We found that cyanobacteria – a type of ancient phytoplankton – are significant to the understanding of how ocean deserts can support plant growth. Cyanobacteria need nitrogen, phosphorous and iron in order to grow. They get nitrogen from the atmosphere, but phosphorous is a highly reactive chemical that is scarce in sea water and is not found in the Earth’s atmosphere. Iron is present only in tiny amounts in sea water, even though it is one of the most abundant elements on earth. “Our findings suggest that Saharan dust storms are largely responsible for the significant difference between the numbers of cyanobacteria in the North and South Atlantic. The dust fertilises the North Atlantic and allows phytoplankton to use organic phosphorous, but it doesn’t reach the southern regions and so without enough iron, phytoplankton are unable to use the organic material and don’t grow as successfully.” Professor Ric Williams, co-author of the research, added: “The Atlantic is often referred to as an ‘ocean desert’ because many nutrients, which are essential in plant life cycles, are either scarce or are only accessible in the darker depths of the ocean. Plants, however, need some sunlight in order to absorb these important nutrients and so can’t always access them from the ocean depths. They therefore need to find the nutrients from elsewhere. 
Now that we are able to show how cyanobacteria make use of organic material we can understand more clearly how life is sustained in the ocean and why it isn’t an ‘ocean desert.’ “These findings are important because plant life cycles are essential in maintaining the balance of gases in our atmosphere. In looking at how plants survive in this area, we have shown how the Atlantic is able to draw down carbon dioxide from the atmosphere through the growth of photosynthesising plants.”
For the first time, astronomers are able to predict when major flares--enormous explosions that shoot hot gases into space--will erupt on stars outside our solar system, according to research to be published in an upcoming issue of the Astrophysical Journal. The research is based on data from the longest-running continuous radio survey of flares produced by two types of binary systems, each containing a pair of stars under the influence of each other's gravity. Stars in both binary systems, located about 95 light years from our solar system, are like a younger version of our Sun. "Studying the flares on these stars can help us understand more about how life evolved on Earth because they indicate the kind of environment that was bombarding our planet during an earlier age," says Mercedes Richards, professor of astronomy and astrophysics at Penn State University and the leader of the survey team. During their 5-year-long observations, the researchers used the Green Bank Interferometer in West Virginia to continuously monitor radio waves produced by flares on pairs of stars as they circle each other like partners in a dance, regularly eclipsing each other when viewed from Earth. They studied two systems of such stars, one known as "The Demon Star," or "Beta Persei," which is the brightest and closest eclipsing binary pair in the sky. It contains a hot, blue star along with a cool, orange-colored star that is like our Sun but a bit more active. The other system, known as "V711 Tauri" to indicate its location in the constellation Taurus, also contains relatively cool stars like our Sun, one orange-colored and the other slightly hotter and yellow-colored. Barbara K. Kennedy | EurekAlert!
Found 10 talks with keyword astrochemistry

The very metal-poor (VMP; [Fe/H] < –2.0) and extremely metal-poor (EMP; [Fe/H] < –3.0) stars provide a direct view of Galactic chemical and dynamical evolution; detailed spectroscopic studies of these objects are the best way to identify and distinguish between various scenarios for the enrichment of early star-forming gas clouds soon after the Big Bang. It has been recognized that a large fraction of VMP (15-20%) and EMP stars (30-40%) possess significant over-abundances of carbon relative to iron, [C/Fe] > +0.7. This fraction rises to at least 80% for stars with [Fe/H] < –4.0. Recent studies show that the majority of CEMP stars with [Fe/H] < –3.0 belong to the CEMP-no sub-class, characterized by the lack of strong enhancements in the neutron-capture elements (e.g., [Ba/Fe] < 0.0). The brightest EMP star in the sky, BD+44:493, with [Fe/H] = –3.8 and V = 9.1, is a CEMP-no star. It shares a common elemental-abundance signature with the recently discovered CEMP-no star having [Fe/H] < –7.8. The distinctive CEMP-no pattern has also been identified in high-z damped Lyman-alpha systems, and is common among stars in the ultra-faint dwarf spheroidal galaxies, such as SEGUE-1. These observations suggest that CEMP-no stars exhibit the nucleosynthesis products of the VERY first generation of stars. We discuss the multiple lines of evidence that support this hypothesis, and describe current efforts to identify the nature of the massive stellar progenitors that produced these signatures.

All the elements from carbon to uranium present in the Solar System were produced by hundreds to thousands of stars belonging to different stellar generations that evolved and died during the presolar evolution of the Galaxy. Using the abundances of radioactive nuclei inferred from meteoritic analysis we can date the last of these stellar additions.
We have found that the last contribution of elements such as carbon and slow neutron-capture elements to the Solar System from an asymptotic giant branch star occurred 15-30 Myr before the formation of the Sun. This provides us with an upper limit on the time when the precursor material of the Solar System became isolated from the bulk of the galactic material. Interestingly, it compares well to the lifetime of high-mass molecular clouds, suggesting that the Sun was born in a very large family of stars.

The origins of neutron(n)-capture elements (atomic number Z > 30) have historically been discerned from the interpretation of stellar spectra. However, in the last decade nebular spectroscopy has been demonstrated to be a potentially powerful new tool to study the nucleosynthesis of n-capture elements. In this talk, I will discuss exciting new advances made in this field with near-infrared and optical observations of planetary nebulae, and atomic data investigations that enable the analysis of spectroscopic data.

The classical idea that globular clusters are the prototypes of simple stellar populations has been revolutionized in the last few years. Multiple sequences of stars have been detected in the colour-magnitude diagram of a number of clusters, mostly thanks to high-precision HST photometry, and the correlation with the chemical properties of different generations of stars has been demonstrated. In this talk, we will first present a summary of the observational picture, and we will then introduce the SUMO project (a SUrvey of Multiple pOpulations). This is a long-term project, led here at the IAC and aimed at detecting and characterizing multiple populations in a large sample of globular clusters. We will review the scope, the observing and reduction strategy, and the first results. So far, data for more than 30 clusters have been secured, using the wide-field imagers available at the 2.2m ESO/MPI and INT telescopes, thus covering both hemispheres.
We will present a new photometric index which turned out to be very effective in detecting multiple RGBs in nearly all the clusters analyzed so far. The connection with the chemical content of the different populations will also be discussed.

The so-called "dark ages" of the universe began about 400,000 years after the Big Bang, as matter cooled down and space became filled with neutral hydrogen for hundreds of millions of years. How the Universe was heated and reionized during the first billion years after the Big Bang is a question of topical interest in cosmology. I will show that current theoretical models on the formation and collapse of primordial stars suggest that a large fraction of massive stars should have imploded, forming high-mass black hole X-ray binaries. Then, I will review the recent observations of compact stellar remnants in the near and distant universe that support this theoretical expectation, showing that the thermal (UV and soft X-rays) and non-thermal (hard X-rays, winds and jets) emission from a large population of stellar black holes in high-mass binaries heated the intergalactic medium over large volumes of space, complementing the reionization by their stellar progenitors. Feedback from accreting stellar black holes at that epoch would have prevented the formation of the large quantities of low-mass dwarf galaxies that are predicted by the cold dark matter model of the universe. A large population of black hole binaries may be important for future observations of gravitational waves as well as for the existing and future atomic hydrogen radio surveys of HI in the early universe.

Understanding the composition and the nature of any asteroid approaching the Earth, and consequently potentially hazardous, is a matter of general interest, both scientific and practical. The potentially hazardous asteroid 1999 RQ36 is especially accessible to spacecraft and is the primary target of NASA's OSIRIS-REx sample return mission.
Spectra of this asteroid point to the most primitive meteorites (CIs and CMs) as the most likely analogs. Asteroid (3200) Phaethon is also particularly interesting. Together with 2005 UD and 2001 YB5, it is one of only three near-Earth asteroids with associated meteor showers, which mostly come from comets. There is evidence of the presence of hydrated minerals on its surface, usually associated with organic material. Both asteroids are classified as "B". B-type asteroids are found mostly in the middle and outer main belt and are believed to be primitive and volatile-rich. We combine dynamical and spectral information to identify the most likely main-belt origin of these two objects.

We present the new stellar population synthesis models based on the empirical stellar spectral library MILES, which can be regarded nowadays as standard in the field of stellar population studies. The synthetic SEDs cover the whole optical range at resolution 2.3 Å (FWHM). The unprecedented stellar parameter coverage of MILES allowed us to extend our model predictions from intermediate- to very-old age regimes, and the metallicity coverage from super-solar to [M/H] = -2.3. Observed spectra can be studied by means of full spectrum fitting or line-strengths. For the latter we propose a new Line Index System (LIS) to avoid the intrinsic uncertainties associated with the popular Lick/IDS system and provide more appropriate, uniform, spectral resolution. We present a web page with a suite of on-line tools to facilitate the handling and transformation of the spectra. Online examples with practical applications to work with stellar spectra for a variety of instrumental setups will be shown. Furthermore, we will also show examples of how to compute spectra and colors with varying instrumental setup, redshift and velocity dispersion for a suite of Star Formation Histories.

Due to their orbits, near-Earth asteroids (NEAs) have been considered the most evident parent bodies of meteorites.
Dynamical models show that NEAs come primarily from the inner and central parts of the Main Belt (MB), and they reach their orbits by means of gravitational resonances (mainly ν6 and 3:1). This part of the MB is dominated by spectral types S and Q, also the most common spectral types among the NEA population (~60%), which correspond to objects composed of silicates. Their reflectance spectra show very characteristic absorption bands that can be used to infer their mineralogical composition by applying different methods of analysis. Those absorption bands are also present in the spectra of the most abundant class of meteorites (~80%), the ordinary chondrites (OC). In order to better understand the connection between MB asteroids, NEAs and OCs, we undertook a spectroscopic survey of asteroids between 2002 and 2007, using the telescopes and instrument facilities of "El Roque de los Muchachos" Observatory, in the Canary Islands. The survey contains visible and near-infrared spectra (0.5 - 2.5 µm) of a total of 105 asteroids. We have applied a method of mineralogical analysis based on spectral parameters to our sample of NEAs, and also to a sample of 91 MBs and 103 OCs obtained from different databases. We have found some significant compositional differences between NEAs, MBs and OCs. The most remarkable one is that NEAs compositionally differ from the whole set of OCs, and show a more olivine-rich composition, similar to what is found for LL chondrites (only 8% of the falls). This result suggests that S-type NEAs are not the immediate precursors of ordinary chondrites, as was believed. We consider the size of the objects as the key factor to explain this difference. NEAs are km-sized objects, while meteorites are meter- to cm-sized objects.
Combining the information obtained from the dynamical models and the drift in semimajor axis of the smaller objects due to their thermal inertia (Yarkovsky effect), we set out a possible scenario for the formation and the transport routes of NEAs and meteorites that could explain this compositional difference in a plausible way. Abstract: There is a multitude of photochemical processes occurring in a planet's atmosphere. Some of these processes occur with an excess of energy and lead to products in the form of excited atoms, molecules and ions. In specific cases, these gases radiate at wavelengths that range from the UV to the NIR. Solar light is the ultimate cause of these airglow emissions, but traditionally one distinguishes between the day airglow (dayglow) and the night airglow (nightglow). The contribution of the Sun to the excitation of the emitting gas is more immediate in the dayglow than in the nightglow. The airglow makes it possible to remotely investigate the chemical kinetics, energy balance and dynamics of a planetary atmosphere. In the talk, I will go over some of the airglow emissions that are known to exist in the atmospheres of the Earth, Mars and Venus. The examples illustrate some of my recent work, and include theoretical modelling and the interpretation of observational data. There is a long record of contributions to the nightglow from observations carried out at ground-based telescopes. I will briefly comment on some of these. Abstract: The composition of the outer solar system is of particular interest because it holds the key to understanding the chemical evolution of the Solar System. Observations at the edge of the Solar System are difficult because of distance and size limitations.
The Spitzer Space Telescope has provided a wealth of data for Kuiper Belt Objects (KBOs), the small inhabitants of this remote part of the Solar System beyond the orbit of Neptune, as well as for Centaurs, objects similar to the KBOs but with orbits that come closer to the Sun. Are these observations sufficient to tell us what the composition of these objects is? We briefly introduce spectral modeling, its strengths and limitations. Making use of synthetic surface reflectance spectra, we assess the feasibility of determining the composition of Kuiper Belt Objects and Centaurs from Spitzer-IRAC data alone.
Bloomfield, Louis A. "Question 1561: Will shaking a container of gas warm it up?" How Everything Works, 16 Jul 2018. <http://howeverythingworks.org/print1.php?QNum=1561>. A simple way to see why that's the case is to picture the gas as composed of many little bouncing balls inside the container. Those balls are perfectly elastic, so they rebound from a stationary wall without changing their speeds at all. But the walls of the container aren't stationary; they move back and forth as you shake the container. Because of the moving walls, the balls change their speeds as they rebound. A ball that bounces off a wall that is moving toward it gains speed during its bounce, like a pitched ball rebounding from a swung bat. On the other hand, a ball that bounces off a wall that is moving away from it loses speed during its bounce, like a pitched ball rebounding from a bat during a bunt. If both types of bounces were equally common in every way, then, on average, the balls (or actually the gas molecules) would neither gain nor lose speed as the result of bounces off the walls, and the gas temperature would remain unchanged. But the bounces aren't equally common. It's more likely that a moving ball will hit a wall that is moving toward it than one that is moving away from it. It's a geometry problem; you get wet faster when you run toward a sprinkler than when you run away from it. So, on average, the balls (or gas molecules) gain speed as the result of bounces off the walls, and the gas temperature increases. How large this effect is depends on the relative speeds of the gas molecules and the walls. The effect becomes enormous when the walls move as fast as or faster than the gas molecules, but is quite subtle when the gas molecules move faster than the walls.
Since air molecules typically move at about 500 meters per second (more than 1000 mph) at room temperature, you'll have to shake the container pretty violently to see a substantial heating of the gas.
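The collision bias described above can be checked with a toy Monte Carlo model. The sketch below tracks a single molecular speed over many wall bounces; the wall-speed scale and the sampling scheme (wall velocities drawn with probability proportional to the relative approach speed, which is exactly the "you hit approaching walls more often" geometry argument) are illustrative assumptions, not part of the original article:

```python
import random

random.seed(42)

U = 50.0    # wall speed scale in m/s -- hypothetical, fairly violent shaking
v = 500.0   # initial molecular speed in m/s, typical for air at room temperature
v0 = v

for bounce in range(10_000):
    # Sample the wall velocity u seen at the next collision (u > 0 = receding).
    # Approaching walls are encountered more often, in proportion to the
    # relative approach speed (v - u), so rejection-sample with that weight.
    while True:
        u = random.uniform(-U, U)
        if random.random() < (v - u) / (v + U):
            break
    # Elastic bounce off a moving wall: reflect the velocity in the wall frame.
    v = abs(2.0 * u - v)

print(f"speed after 10,000 bounces: {v:.0f} m/s (started at {v0:.0f} m/s)")
```

On average the gains from approaching walls outweigh the losses from receding ones, so the speed drifts upward; shrinking U toward a few m/s makes the drift negligible, matching the remark that only violent shaking produces substantial heating.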
Search-result excerpts (13 matches):
- Extreme heat conditions in South Asia are making the headlines for the second year in a row. The HI-AWARE project is currently studying this extreme heat and ways to cope with it in three major cities in South Asia: Delhi in India, Faisalabad in ...
- Much of the water originates around the highest mountains on earth, a region often called "the third pole" because of its immense concentration of snow and ice, the largest outside the Arctic and Antarctic. Relying on a complex interplay of ...
- Frequently Asked Questions: The phenomenon of temperature rise in urban centers has gained attention in recent decades. Known as the urban heat island (UHI) effect, it was first conceptualized by Luke Howard in the early 1800s. Since then, several attempts have been made to ...
- Water resources assessment and monitoring: Winter in the Hindu Kush Himalayas usually lasts from December to February. During this time many low-land areas and valleys experience short durations of morning fog ...
Soil and Litter Community: Temporary Dwellers
The most obvious invertebrates on the floor of the desert are usually ants and beetles, and although these insects contribute significantly to the taxonomic array and ecological importance of that habitat's temporary fauna, many other kinds of invertebrates add to its composition as well. All of the temporary dwellers are transients on the surface, where they consist either of dispersal stages of otherwise relatively immobile species (e.g., solitary bees, asilid flies, adult and triungulin instars of meloid beetles), or of mobile members of nonmetamorphosing species (e.g., snails, solifugids, crickets). Less restricted to soil than nematodes and microarthropods, these usually nonsocial animals encounter new conditions when and if they leave the confines of the soil. And, once on its surface, they often exploit resources quite different from those utilized underground. Keywords: Temporary Dweller; Chihuahuan Desert; Negev Desert; Litter Community; Tiger Beetle
Planting trees across the United States and Europe to absorb some of the carbon dioxide emitted by the burning of fossil fuels may warm the climate enough to outweigh the positive effects of sequestering that CO2. New climate modeling research from LLNL and the Carnegie Institution shows that northern temperate forests (top) may contribute to global warming, while tropical forests (bottom) can help keep global temperatures cool. Panel a: Direct warming associated with global forest cover. (These are results from a forest-covered world minus the results for bare ground.) Forests produce over 10°C (18°F) of warming in parts of the northern hemisphere, due primarily to increased absorption of solar radiation. Forests produce several degrees of cooling in tropical areas, primarily due to increased evapotranspiration (evaporation). Panel b: Direct warming associated with forest cover between 20°N and 50°N. (These are results from actual vegetation with added forests in the mid-latitudes minus the results for bare ground.) Mid-latitude forests can produce warming locally of up to 6°C (10°F). Panel c: Increase in fractional absorption of solar radiation at the ground for forests relative to bare ground. In theory, growing a forest may sound like a good idea to fight global warming, but in temperate regions, such as the United States, those trees also would soak up sunlight, causing the earth's surface to warm regionally by up to 8 degrees Fahrenheit. Forests affect climate in three different ways: they absorb the greenhouse gas carbon dioxide, which helps keep the planet cool; they evaporate water to the atmosphere, which also helps keep the planet cool; and they are dark and absorb a lot of sunlight, warming the Earth.
Anne Stark | EurekAlert!
Ancient humans left Africa to escape drying climate Humans migrated out of Africa as the climate shifted from wet to very dry about 60,000 years ago, according to research led by a University of Arizona geoscientist. Genetic research indicates people migrated from Africa into Eurasia between 70,000 and 55,000 years ago. Previous researchers suggested the climate must have been wetter than it is now for people to migrate to Eurasia by crossing the Horn of Africa and the Middle East. "There's always been a question about whether climate change had any influence on when our species left Africa," said Jessica Tierney, UA associate professor of geosciences. "Our data suggest that when most of our species left Africa, it was dry and not wet in northeast Africa." Tierney and her colleagues found that around 70,000 years ago, climate in the Horn of Africa shifted from a wet phase called "Green Sahara" to even drier than the region is now. The region also became colder. The researchers traced the Horn of Africa's climate 200,000 years into the past by analyzing a core of ocean sediment taken in the western end of the Gulf of Aden. Tierney said before this research there was no record of the climate of northeast Africa back to the time of human migration out of Africa. "Our data say the migration comes after a big environmental change. Perhaps people left because the environment was deteriorating," she said. "There was a big shift to dry and that could have been a motivating force for migration." "It's interesting to think about how our ancestors interacted with climate," she said. The team's paper, "A climatic context for the out-of-Africa migration," is published online in Geology this week. Tierney's co-authors are Peter deMenocal of the Lamont-Doherty Earth Observatory in Palisades, New York, and Paul Zander of the UA. The National Science Foundation and the David and Lucile Packard Foundation funded the research. 
Tierney and her colleagues had successfully revealed the Horn of Africa's climate back to 40,000 years ago by studying cores of marine sediment. The team hoped to use the same means to reconstruct the region's climate back to the time 55,000 to 70,000 years ago when our ancestors left Africa. The first challenge was finding a core from that region with sediments that old. The researchers enlisted the help of the curators of the Lamont-Doherty Core Repository, which has sediment cores from every major ocean and sea. The curators found a core collected off the Horn of Africa in 1965 from the R/V Robert D. Conrad that might be suitable. Co-author deMenocal studied and dated the layers of the 1965 core and found it had sediments going back as far as 200,000 years. At the UA, Tierney and Paul Zander teased out temperature and rainfall records from the organic matter preserved in the sediment layers. The scientists took samples from the core about every four inches (10 cm), a spacing that represents roughly 1,600 years. To construct a long-term temperature record for the Horn of Africa, the researchers analyzed the sediment layers for chemicals called alkenones made by a particular kind of marine algae. The algae change the composition of the alkenones depending on the water temperature. The ratio of the different alkenones indicates the sea surface temperature when the algae were alive and also reflects regional temperatures, Tierney said. To figure out the region's ancient rainfall patterns from the sediment core, the researchers analyzed the ancient leaf wax that had blown into the ocean from terrestrial plants. Because plants alter the chemical composition of the wax on their leaves depending on how dry or wet the climate is, the leaf wax from the sediment core's layers provides a record of past fluctuations in rainfall. The analyses showed that the time people migrated out of Africa coincided with a big shift to a much drier and colder climate, Tierney said.
The team's findings are corroborated by research from other investigators who reconstructed past regional climate by using data gathered from a cave formation in Israel and a sediment core from the eastern Mediterranean. Those findings suggest that it was dry everywhere in northeast Africa, she said. "Our main point is kind of simple," Tierney said. "We think it was dry when people left Africa and went on to other parts of the world, and that the transition from a Green Sahara to dry was a motivating force for people to leave." More information: Jessica E. Tierney et al, A climatic context for the out-of-Africa migration, Geology (2017). DOI: 10.1130/G39457.1 Provided by: University of Arizona
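The alkenone-to-temperature step described above can be made concrete. A common approach is the UK'37 unsaturation index with a core-top calibration; the article does not say which calibration the team used, so the Müller et al. (1998) global calibration below is an illustrative assumption:

```python
def uk37(c37_2: float, c37_3: float) -> float:
    """Alkenone unsaturation index from the relative abundances of the
    di-unsaturated (C37:2) and tri-unsaturated (C37:3) alkenones."""
    return c37_2 / (c37_2 + c37_3)

def sst_celsius(uk: float) -> float:
    """Invert the global core-top calibration UK'37 = 0.033*T + 0.044
    (Muller et al., 1998) to estimate sea-surface temperature in deg C."""
    return (uk - 0.044) / 0.033

# Warmer water -> the algae produce relatively more C37:2 -> higher index.
warm = sst_celsius(uk37(80.0, 20.0))   # index 0.80
cool = sst_celsius(uk37(50.0, 50.0))   # index 0.50
print(f"warm layer: {warm:.1f} C, cool layer: {cool:.1f} C")
```

Applied down-core, a step like this turns each sediment layer's alkenone ratio into a point on the temperature record the article describes.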
"What's important about this event isn't so much the 'what' but the 'where,'" said Neil Gehrels, lead scientist for Swift at NASA's Goddard Space Flight Center in Greenbelt, Md. "GRB 090429B exploded at the cosmic frontier, among some of the earliest stars to form in our universe." Because light moves at finite speed, looking farther into the universe means looking back in time. GRB 090429B gives astronomers a glimpse of the cosmos as it appeared some 520 million years after the universe began. Now, after two years of painstaking analysis, astronomers studying the afterglow of the explosion say they're confident that the blast was the farthest explosion yet identified -- and at a distance of 13.14 billion light-years, a contender for the most distant object now known. Swift's discoveries continue to push the cosmic frontier deeper back in time. A gamma-ray burst detected on Sept. 4, 2005, was shown to be 12.77 billion light-years away. Until the new study dethroned it, GRB 090423, which was detected just six days before the current record-holder, reigned with a distance of about 13.04 billion light-years. Gamma-ray bursts are the universe's most luminous explosions, emitting more energy in a few seconds than our sun will during its energy-producing lifetime. Most occur when massive stars run out of nuclear fuel. When such a star runs out of fuel, its core collapses and likely forms a black hole surrounded by a dense hot disk of gas. Somehow, the black hole diverts part of the infalling matter into a pair of high-energy particle jets that tear through the collapsing star. The jets move so fast -- upwards of 99.9 percent the speed of light -- that collisions within them produce gamma rays. When the jets breach the star's surface, a gamma-ray burst is born. The jet continues on, later striking gas beyond the star to produce afterglows. "Catching these afterglows before they fade out is the key to determining distances for the bursts," Gehrels said. 
"Swift is designed to detect the bursts, rapidly locate them, and communicate the position to astronomers around the world." Once word gets out, the race is on to record as much information from the fading afterglow as possible. In certain colors, the brightness of a distant object shows a characteristic drop caused by intervening gas clouds. The farther away the object is, the longer the wavelength where this sudden fade-out begins. Exploiting this effect gives astronomers a quick estimate of the blast's "redshift" -- a color shift toward the less energetic red end of the electromagnetic spectrum that indicates distance. The Gemini-North Telescope in Hawaii captured optical and infrared images of GRB 090429B's quickly fading afterglow within about three hours of Swift's detection. "Gemini was the right telescope, in the right place, at the right time," said lead researcher Antonino Cucchiara at the University of California, Berkeley. "The data from Gemini was instrumental in allowing us to reach the conclusion that the object is likely the most distant GRB ever seen." The team combined the Gemini images with wider-field images from the United Kingdom Infrared Telescope, which is also located on Mauna Kea in Hawaii, to narrow estimates of the object's redshift. Announcing the finding at the American Astronomical Society meeting in Boston on Wednesday, May 25, the team reported a redshift of 9.4 for GRB 090429B. Other researchers have made claims for galaxies at comparable or even larger redshifts, with uncertain distance estimates, and the burst joins them as a candidate for the most distant object known. Studies by NASA's Hubble Space Telescope and the Very Large Telescope in Chile were unable to locate any other object at the burst location once its afterglow had faded away, which means that the burst's host galaxy is so distant that it couldn't be seen with the best existing telescopes. 
"Because of this, and the information provided by the Swift satellite, our confidence is extremely high that this event happened very, very early in the history of our universe," Cucchiara said. Swift, launched in November 2004, is managed by Goddard. It was built and is being operated in collaboration with Penn State University, University Park, Pa., the Los Alamos National Laboratory in New Mexico, and General Dynamics of Gilbert, Ariz., in the U.S. International collaborators include the University of Leicester and Mullard Space Sciences Laboratory in the United Kingdom, Brera Observatory and the Italian Space Agency in Italy, and additional partners in Germany and Japan. Francis Reddy | EurekAlert! What happens when we heat the atomic lattice of a magnet all of a sudden? 17.07.2018 | Forschungsverbund Berlin Subaru Telescope helps pinpoint origin of ultra-high energy neutrino 16.07.2018 | National Institutes of Natural Sciences For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. 
Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 17.07.2018 | Information Technology 17.07.2018 | Materials Sciences 17.07.2018 | Power and Electrical Engineering
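The redshift-to-light-travel-time conversion quoted for GRB 090429B can be reproduced with a short cosmological integral. The sketch below assumes an illustrative flat Lambda-CDM cosmology (H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_Lambda = 0.7; the parameters the authors actually adopted are not stated in the article), integrating dz' / ((1+z') E(z')) up to z = 9.4:

```python
import math

def E(z: float, om: float = 0.3, ol: float = 0.7) -> float:
    """Dimensionless Hubble rate E(z) for a flat Lambda-CDM universe."""
    return math.sqrt(om * (1.0 + z) ** 3 + ol)

def lookback_time_gyr(z: float, h0: float = 70.0, n: int = 100_000) -> float:
    """Lookback time in Gyr: (1/H0) * integral_0^z dz' / ((1+z') E(z')),
    evaluated with a simple trapezoidal rule."""
    hubble_time = 977.8 / h0           # 1/H0 in Gyr for H0 in km/s/Mpc
    dz = z / n
    total = 0.0
    for i in range(n + 1):
        zi = i * dz
        weight = 0.5 if i in (0, n) else 1.0
        total += weight / ((1.0 + zi) * E(zi))
    return hubble_time * total * dz

t = lookback_time_gyr(9.4)
print(f"lookback time at z = 9.4: about {t:.1f} billion years")
```

With these assumed parameters the light left the burst roughly 13 billion years ago, i.e. within the first several hundred million years after the Big Bang; the small difference from the article's 13.14 billion light-years reflects the choice of cosmological parameters.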
Inducing thermal gradients in fluid systems with initial, well-defined density gradients results in the formation of distinct layered patterns, such as those observed in the ocean due to double-diffusive convection. In contrast, layered composite fluids are sometimes observed in confined systems of rather chaotic initial states, for example, lattes formed by pouring espresso into a glass of warm milk. Here, we report controlled experiments injecting a fluid into a miscible phase and show that, above a critical injection velocity, layering emerges over a time scale of minutes. We identify critical conditions to produce the layering, and relate the results quantitatively to double-diffusive convection. Based on this understanding, we show how to employ this single-step process to produce layered structures in soft materials, where the local elastic properties vary step-wise along the length of the material. Pattern-forming systems are among the most intriguing and spectacular phenomena throughout science and technology [1-4]. In nature, patterns in fluid media, such as the waves on the surface of deep water [5, 6], oscillations in flames [7], large-scale von Kármán vortex streets in clouds [8], and the symmetric yet complex shape of snowflakes [9], constitute some of the earliest self-organized systems, which have attracted human curiosity and initiated scientific exploration. A considerable class of spatial patterns in fluids are structured by thermal effects, which trigger hydrodynamic instabilities [10-12]. For example, well-known instabilities triggered by thermal effects, such as Rayleigh-Bénard convection [13-15], are often found in systems with well-defined initial conditions. In a fluid system, when thermal gradients are introduced in the presence of an initial well-defined density gradient, distinct layered patterns are observed, similar to those sometimes found in the ocean due to double-diffusive convection [16-20].
Surprisingly, we observe distinct horizontal layers formed after haphazardly pouring espresso into a glass of warm milk. Pouring forces a lower-density liquid (espresso) into a higher-density ambient (milk). The downward liquid inertia caused by pouring is opposed by buoyancy. The dynamics is similar to the fountain effect [21, 22], which characterizes a wide range of flows driven by injecting a fluid into a second miscible phase of different density. Here we perform controlled model experiments, injecting warm dyed water from the top into a cylindrical tank filled with warm salt solution. The mixture cools down at room temperature and multiple horizontal layers emerge over several minutes. We use light intensity in the digital images of the fluid in the tank, after the injection, to quantify the distribution of the mixture density. We show that the formation of horizontal layers is a result of double-diffusive convection, where the salinity and temperature gradients are applied vertically and horizontally, respectively. The presence of the circulating flows within the layers is confirmed via particle image velocimetry (PIV) experiments and numerical simulations. Furthermore, we report that the formation of the horizontal layers is controlled by the injection velocity, i.e., layers emerge only when the injection velocity is higher than a critical value. Finally, we propose a single-step procedure for fabricating multi-layer soft materials based on our understanding of the model system. A glass of latte is made by pouring a cup of espresso into a glass of warm milk. Since the two liquids are miscible, the result of pouring is an espresso-milk mixture at the top of the glass, while the bottom may only contain milk, if no additional stirring is applied (Fig. 1a). In fact, although the initial state of the mixture is complex and chaotic (Fig. 1b), there are conditions where the mixture cools at room temperature and exhibits an organized layered pattern (Fig.
1c, see Supplementary Movie 1). These stable layers, whose structures may be maintained for at least tens of minutes (Fig. 1d), or even several hours, contain different concentrations of espresso and hence exhibit distinct visible boundaries. In order to investigate the mechanisms leading to the layering of the mixture, we performed controlled experiments in a model system comprised of a low-density jet of dyed water (density at , injection volume ml) entering a tank filled with relatively higher-density brine (9.1 wt% sodium chloride solution, at , 340.9 ml). The jet enters from the top (Fig. 1e) and the solution is then left to cool at room temperature. When the dyed water is injected into the higher-density salt solution, a downward jet is generated. However, the penetration of this liquid jet into the salt solution is opposed by the buoyant force pushing the lower-density liquid jet back to the top of the tank (Fig. 2a, c). As a result, a mixture is formed, in which the dyed water is mixed with the salt solution and is separated from the original salt solution at the bottom of the tank. In addition to buoyancy, the mixing is mainly governed by inertia, with the Reynolds number defined as Re = Ud/ν (U is the injection velocity, d is the diameter of the needle and ν is the kinematic viscosity of the fluid), while diffusion does not play a significant role during the injection. The Schmidt number Sc = ν/D (D is the mass diffusivity of the salt) is large, indicating that the momentum diffusion is far faster than the salt diffusion during both the injection and layering (if any) processes. The Schmidt number for the milk and espresso system is of a similar order, and so that system is similar to the model salt and water system, in which momentum diffusion dominates. At relatively low injection velocities (Fig. 2a, b), the mixture of dyed water and salt solution (initially at ) remains unchanged as it cools down at room temperature.
However, above a critical injection velocity, multiple layers similar to those observed in the glass of latte (Fig. 1a–d) are formed in the mixture several minutes after the injection (Fig. 2d). Once formed, the layers are not influenced by external mechanical disturbances, and can recover even after gentle stirring. As the cooling continues, these layers may merge and form thicker structures, which can last for days before being entirely eliminated by diffusion (Supplementary Fig. 1). The layers can be observed in the milk and espresso or in the salt and water mixture, only when the initial temperature is different from the room temperature. We quantify the concentration of the salt in the mixture using the concentration of the blue dye as an indicator (Supplementary Fig. 2). Therefore, the local intensity of the blue dye in the digital images (Fig. 2a–d) can be correlated with the local density of the mixture. The dashed lines in Fig. 2e represent the initial density profiles for two different injection speeds, while the solid lines refer to the density profiles 30 min later. For both experiments the dashed lines exhibit continuous monotonically increasing density profiles, when moving from the top to the bottom of the mixture. While the density profiles before and after the injection at a low velocity (black) remain almost identical, the density profile 30 min after the injection at a higher velocity (blue) exhibits clear steps indicating the formation of horizontal layers. After the high-velocity injection, the density in a single layer is constant, implying that the liquid within a layer is uniformly mixed. Moreover, the discontinuities in the density profile are clearer near the top of the mixture, where the gradient of the density is smaller than that of the bottom layers. 
We postulate that this layered state is reached due to double-diffusive convection, which is well known to cause layer formation in open-water systems such as oceans or lakes18,19,20,23,24. In our experiment, the double-diffusive convection results from the combination of heat transfer to the surroundings from the warm liquid and a density gradient generated in the mixture by the initial pouring or injection. Directional heat transfer in a mixture with an initial density gradient has previously been observed in other systems to lead to the formation of well-defined layers in fluid mixtures due to double-diffusive convection18,19,20,23,24. In particular, when a given temperature difference is created between two vertical walls bounding a fluid with an initial vertical density gradient, the fluid near the cold wall is cooled, and thus becomes denser and sinks. The sinking of the liquid due to the heat transfer is, however, suppressed as the cooler liquid close to the wall reaches a zone of similar density in the mixture. The downward-moving liquid therefore starts flowing inwards, away from the cold wall, as it can no longer proceed in the vertical direction. A similar motion, but in the opposite sense, is created close to the warmer wall. Consequently, closed streamlines are formed in the fluid circulating between the cold and the warm sources23. Within the circulation cells the fluid mixes uniformly, and thus the density is fairly constant, while each circulation cell acquires a different density. To verify the postulate of double-diffusive convection as the cause of layering in our confined injection-driven system, we performed experimental (Fig. 2f, g) and numerical analyses (Fig. 2h, i) of the time-dependent flows (see details in Methods, Supplementary Figs 3, 4 and Supplementary Discussion). Both approaches document the formation of recirculating patterns in the form of axisymmetric rings between the wall and the center of the tank.
The boundaries separating these circulation cells overlap with the limits of the layers in the mixture.

Critical injection velocity

The circulation cells are the result of the competition between the horizontal thermal gradient and the vertical density gradient generated by the fluid injection, i.e., the thermal gradient triggers the fluid to rise or sink close to the boundaries, while the vertical density gradient opposes this motion and subsequently stabilizes the flow. In our model experiments, layers are observed only above a critical injection velocity (Fig. 2a–d and Supplementary Fig. 5). At higher injection velocities, the depth of penetration of the liquid jet increases, and similarly the thickness of the mixture increases, indicating that the dyed water mixes with a larger volume of salt solution. For a fixed volume of injected water, we found that the volume of the resulting mixture increases linearly with the injection velocity U for the range of parameters studied here. Consequently, the mixing level and the density gradient in the mixture are controlled by the injection velocity. We define an average density gradient Δρ/h in the mixture, where Δρ is the magnitude of the density difference between the bottom and top of the mixture and h is the thickness of the mixture. This average gradient indicates the average resistance imposed by the salinity gradient against the formation of a convection cell due to the thermal gradient, and is a function of U. Our experimental measurements show that Δρ/h decreases with increasing U due to enhanced mixing (Fig. 3a). We find that above the critical injection velocity the resistance from salinity can no longer compete with the thermal gradient, so circulation cells appear and multiple layers emerge.
The competition between the thermal cooling and the salinity gradient in this double-diffusive convection, where the horizontal temperature gradient is orthogonal to the vertical salinity gradient, is characterized by the Rayleigh number Ra = (gβΔT/νκ)(βΔT/φ)³, where g is the gravitational acceleration, β is the coefficient of fluid volume expansion, ΔT is the temperature difference in the fluid, ν is the kinematic viscosity of the fluid, κ is the thermal diffusivity of the fluid and φ is the normalized salinity gradient in the mixture18. In a system with an initial linear salinity gradient and an imposed horizontal temperature gradient, a critical Rayleigh number for the initiation of an instability and the formation of layers has been determined experimentally18. Note that in a conventional double-diffusive convection problem, a linear density gradient is imposed at the initial stage and the temperature gradient is created between two vertical bounding walls held at different constant temperatures18,24. The initial and boundary conditions leading to the layers in a glass of latte and in our model experiments are, however, different from those in the traditional problem: (1) the density gradient is caused by the injection and is not constant in the mixture (Fig. 2e), and (2) the temperature gradient is not constant during the experiments, as the core of the liquid cools down continuously. Therefore, to characterize our system and to calculate Ra, we chose to use the minimum density gradient over the typical thickness of the layers (around 5 mm) at the corresponding injection velocity, together with the maximum temperature gradient. We found that the critical Rayleigh number in our experiments closely matches the value reported in the literature for an idealized configuration (Fig. 3b). Consistent with our observations, this criterion indicates that layering is obtained only when Ra is above its critical value.
Consideration of the dynamics in a proper dimensionless framework requires an analysis involving at least both the Froude and Reynolds numbers, which is a topic of on-going research. In addition, when layering occurs, the expected length scale (thickness) of a layer is approximately η = βΔT/φ, set by the coefficient of volume expansion β, the temperature difference ΔT and the normalized salinity gradient φ.

Application to soft materials

The double-diffusive convection and the formation of the layers are controlled simply by the thermal and salinity gradients in the fluid systems discussed above, which implies no conceptual restriction on applying this principle to more complex fluid systems, such as thermally established soft gels. There are various approaches for generating layered soft materials, but most are currently multi-step processes, in which solid layers are usually formed sequentially25. Nevertheless, based on the understanding outlined above, we can make multiple layers in soft materials (UltraPure Agarose, Invitrogen) in a single step, by injecting a hot gel solution into a denser solvent and cooling the mixture at room temperature (Fig. 4, see the gel recipe in Supplementary Methods). While cooling, horizontal layers are first formed in the agarose solution, which subsequently solidifies into a layered gel below the gelation temperature. The Young's moduli in the final layered state formed from the same agarose solution vary systematically with the vertical position (50 kPa at the bottom compared to 230 kPa at the top), which implies that the concentration of agarose in distinct layers is different. To further demonstrate the presence of the horizontal layers in the gel, we performed experiments with the same recipe but measured the light intensities in the digital images of the gel rather than the elastic properties (see Supplementary Fig. 6 and Supplementary Discussion).
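The layer-thickness estimate can be sketched with a quick calculation; the property values below are assumptions chosen to be representative of warm water with a modest salinity gradient, not measured inputs from the experiments:

```python
# Sketch of the natural length scale eta = beta*dT/phi that is expected to set
# the layer thickness; all numerical inputs are illustrative assumptions.
def layer_thickness(beta, dT, phi):
    """eta = beta*dT/phi, with beta in 1/K, dT in K and phi in 1/m."""
    return beta * dT / phi

beta = 3.0e-4  # thermal expansion coefficient of water, 1/K (approximate)
dT = 10.0      # horizontal temperature difference, K (assumed)
phi = 0.6      # normalized salinity gradient, 1/m (assumed)

eta = layer_thickness(beta, dT, phi)
print(f"expected layer thickness ~ {eta * 1e3:.1f} mm")
```

For these assumed inputs the estimate is a few millimetres, the same order as the roughly 5 mm layers reported in the experiments.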
Further, the difference in concentration between the gel layers leads to a difference in porosity, so the concentration and the diffusion rate of additives will differ between layers. This single-step, single-chemistry method can facilitate the fabrication of multiple-layered structures in food science25, tissue engineering26,27, and other applications in materials science.

Model experiment setup

In our model experiments with salt solutions, blue dye (methylene blue hydrate, Sigma-Aldrich, 0.01 wt%) is added to the water jet to visualize the mixing of the two liquids. Dyed water is injected downwards using a syringe pump (Harvard Apparatus PHD 2000) through nozzles of circular cross-section into a polystyrene cylindrical tank (Fig. 1e). Working liquids are brought to the final temperature in a water bath. We controlled the flow rate of the injected water, and consequently the inlet velocity, using the syringe pump. After the injection, the tank is left to cool at room temperature. The appearance and evolution of the layers are visualized by placing the tank between an LED panel and a camera, while the flow velocities in the mixture are obtained by following tracer particles in the PIV experiments.

Density profile in the mixture

We performed a calibration procedure to correlate the local intensity of the blue dye in the digital images to the local concentration (mass ratio) of injected dyed water containing 0.01 wt% methylene blue hydrate (see Supplementary Fig. 2). We thus calibrated the local intensity of the blue dye to obtain the local mass ratio x of injected dyed water in the mixture, and then calculated the local density of the mixture as ρ = xρ_w + (1 − x)ρ_s, where ρ_w is the density of water and ρ_s is the density of the salt water initially in the tank.
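The density-reconstruction step described above amounts to a simple mixture rule. A minimal sketch, assuming a linear rule in the mass ratio (the exact calibration in the paper may differ) and approximate densities for water and 9.1 wt% brine:

```python
# Sketch of mapping a calibrated local mass ratio x of injected dyed water to
# a local mixture density; the linear rule and density values are assumptions.
def mixture_density(x, rho_water=998.0, rho_salt=1063.0):
    """rho = x*rho_w + (1 - x)*rho_s for mass ratio x in [0, 1] (kg/m^3)."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("mass ratio must lie in [0, 1]")
    return x * rho_water + (1.0 - x) * rho_salt

# A vertical profile of calibrated mass ratios (top of tank first) then maps
# directly to a density profile such as those plotted in Fig. 2e:
profile = [mixture_density(x) for x in (0.8, 0.5, 0.2, 0.0)]
print(profile)
```

Pure injected water recovers rho_water and pure brine recovers rho_salt, so the rule interpolates monotonically between the two end-member densities.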
Flow visualization and PIV

The water injected from the top of the tank is labelled with blue dye; the concentration of the dye therefore indicates the amount of water, and consequently the salt concentration, in the mixture as the blue jet mixes with the salt solution in the tank. The depth of the mixture and the layers formed at higher flow rates are determined by placing the tank in front of a large LED panel and capturing color images of the mixtures over a long period of time (up to 3 days, see Supplementary Fig. 1). To quantitatively visualize the structure of the flow in the mixture, the liquid in the tank is seeded with tracer particles (PSP-20, diameter 20 μm, Dantec Dynamics). The plane of symmetry of the cylindrical tank is illuminated with a green light sheet (thickness of ≈1 mm) created by placing a laser line lens (PL0160, Thorlabs) in front of a green laser (BioRay 520, Coherent). Images are captured using a DSLR camera and a macro lens at the rate of 30 frames per second. The standard deviation of the light intensity for each pixel in the sequence of recorded grey-scale images is calculated to determine the path lines of the particles in the mixture (Fig. 2f). Moreover, an ensemble cross-correlation scheme is applied to the sequence of grey-scale images to measure the local velocities in the PIV analyses28. Square interrogation windows corresponding to grid cells of 1 × 1 mm² with an overlap of 50% are used to obtain the velocity vectors, such as those presented in Fig. 2g.

We consider double-diffusive convection of an incompressible flow of a Newtonian fluid inside a cylindrical container (after the injection). The density of the fluid varies with the temperature and the salinity following the Boussinesq approximation ρ = ρ_0[1 + β_S(S − S_0) − β_T(T − T_0)], where ρ_0, S_0 and T_0 denote, respectively, the density, salinity and temperature of the reference state, and β_S (respectively β_T) indicates the solutal (respectively thermal) expansion coefficient.
For the small density variations in our experiments, this approximation is well justified. Other parameters of the problem include the thermal diffusivity κ, the kinematic viscosity ν of water, the solutal diffusivity D_S and the gravitational acceleration g. We choose the radius of the cylinder R to scale the length, with the corresponding diffusive scales for the velocity and pressure. We introduce the non-dimensional temperature θ = (T − T_min)/(T_max − T_min), where T_max and T_min represent the maximum and minimum temperature, and likewise the non-dimensional salinity s. The non-dimensional equations have the form

∇·u = 0,
∂u/∂t + u·∇u = −∇p + Pr ∇²u + Pr Ra_T (θ − N s) e_z,
∂θ/∂t + u·∇θ = ∇²θ,
∂s/∂t + u·∇s = (1/Le) ∇²s,

where u, p and t are the non-dimensional velocity, pressure and time, respectively; Pr = ν/κ is the Prandtl number, Ra_T = gβ_TΔT R³/(νκ) the thermal Rayleigh number, Le = κ/D_S the Lewis number and N = β_SΔS/(β_TΔT) indicates the buoyancy ratio. We solve the governing equations Eqs. 4–7 in cylindrical coordinates by employing the commercial finite element solver COMSOL. The assumption of azimuthal independence is verified a posteriori by comparing the numerical and experimental results. We use quadrilateral elements to discretize the computational domain (validated against a refined mesh of 30,000 quadrilateral elements), and the near-wall mesh is carefully refined in order to resolve the thermal boundary layers. Quadratic elements are adopted for the velocity, temperature and salinity, and linear elements for the pressure. It is worth noting that all options for numerical diffusion in COMSOL's CFD module were deactivated. We now describe the boundary conditions (BCs). They are illustrated in the sketch of the computational domain, which consists of four boundaries: the axis (left), walls (right and bottom), and the free-slip surface (top) (see Supplementary Fig. 3). On the two walls we impose the no-slip BC u = 0, on the axis the symmetry condition, and on the top surface we apply zero normal velocity u·n = 0 and zero tangential stress, where n is the outward-pointing normal vector. Zero flux is imposed for the salinity on all boundaries.
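The dimensionless groups named above can be evaluated directly from fluid properties. A short sketch using their standard definitions (the exact scalings adopted in the paper may differ), with illustrative values for water:

```python
# Sketch of the dimensionless groups of the double-diffusive problem; all
# property values below are illustrative assumptions for warm salty water.
def prandtl(nu, kappa):
    """Pr = nu/kappa: momentum vs. thermal diffusion."""
    return nu / kappa

def lewis(kappa, D):
    """Le = kappa/D: thermal vs. solutal diffusion."""
    return kappa / D

def buoyancy_ratio(beta_S, dS, beta_T, dT):
    """N = beta_S*dS / (beta_T*dT): solutal vs. thermal buoyancy."""
    return (beta_S * dS) / (beta_T * dT)

nu, kappa, D = 1.0e-6, 1.4e-7, 1.5e-9  # m^2/s (approximate)

print(f"Pr = {prandtl(nu, kappa):.1f}")  # O(10): viscous effects dominate heat
print(f"Le = {lewis(kappa, D):.0f}")     # O(100): heat diffuses faster than salt
```

Pr of order ten and Le of order a hundred are the regime in which double-diffusive layering is expected: temperature perturbations relax much faster than salinity perturbations.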
The same condition applies for the temperature, except on the right wall, which is modeled with a conductive BC transferring the heat inside the container towards the ambient air due to the temperature difference. The conductive BC reads ∂θ/∂n = −Nu(θ − θ_air), where θ_air is the non-dimensional air temperature; also, Nu = hR/k denotes the Nusselt number, where h and k correspond to the heat transfer coefficient and the heat conductivity. Finally, as the initial condition, we choose the initial density profile measured in the experiments after mixing by the injection. Our implementation has been validated against ref. 29 and our 2D planar version against ref. 30. The reader is also referred to ref. 31 for other flow cases in which the results of COMSOL simulations show excellent agreement with asymptotic analysis.

Calculating the Rayleigh number

We calculated the Rayleigh number for the stability of thermal convection in a salinity gradient due to lateral heating18. In our calculation, the gravitational acceleration is g = 9.8 m s⁻², while the kinematic viscosity ν, the thermal diffusivity κ and the coefficient of fluid volume expansion β of the fluid are evaluated at the experimental temperature. The temperature difference ΔT in the fluid is the maximum temperature difference measured (by attaching thermocouples) between the center and the wall of the container during cooling at room temperature. Also, φ is the minimum of the local salinity gradient in the mixture and varies with the injection velocity U. We calculated the local slopes of the density gradient curves (Fig. 2b) over a height of 5 mm, which represents the minimum thickness of the layers that we observed in our experiments. We divided this local slope by the density of water to obtain the local salinity gradient in the mixture. The minimum value of the salinity gradient in the mixture is used to calculate the Rayleigh number (Fig. 3b). The datasets generated and analyzed during the current study are available at http://github.com/xuenan1203/Laboratory-Layered-Latte.
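The Rayleigh-number evaluation can be sketched as follows. This sketch assumes the lateral-heating form Ra = gβΔT·η³/(νκ) with η = βΔT/φ (equivalently g(βΔT)⁴/(νκφ³)); the input values are illustrative assumptions, not the measured experimental parameters:

```python
# Sketch of the lateral-heating Rayleigh number used as the layering criterion;
# formula form and all numerical inputs are assumptions of this illustration.
def rayleigh(g, beta, dT, nu, kappa, phi):
    """Ra = g*beta*dT*eta^3/(nu*kappa), with eta = beta*dT/phi."""
    eta = beta * dT / phi
    return g * beta * dT * eta**3 / (nu * kappa)

g = 9.8        # m/s^2
beta = 3.0e-4  # volume expansion coefficient, 1/K (approximate)
dT = 10.0      # max temperature difference, K (assumed)
nu = 1.0e-6    # kinematic viscosity, m^2/s (approximate)
kappa = 1.4e-7 # thermal diffusivity, m^2/s (approximate)
phi = 0.5      # minimum normalized salinity gradient, 1/m (assumed)

Ra = rayleigh(g, beta, dT, nu, kappa, phi)
print(f"Ra = {Ra:.0f}")  # of order 10^4 for these illustrative inputs
```

Weaker salinity gradients (smaller phi) sharply increase Ra through the eta³ factor, which is consistent with layering appearing only above a critical injection velocity, where mixing has reduced the stabilizing gradient.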
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. S.K. thanks the Swiss National Science Foundation (P2ELP2-158896) for funding. L.Z. thanks the Swedish Research Council (2015-06334) for a VR International Postdoc Grant. J.K.N. and H.A.S. thank the National Science Foundation (CMMI-1661672) for partial support. We thank Jie Feng, Y. Estella Yu, Suin Shim and Antonio Perazzo for valuable discussions, Ching-Yao Lai for suggestions on gel preparation and Bob Fankhauser for providing an interesting picture of layered patterns formed in coffee that inspired this work.
Authors: George Rajna

Machine learning, data mining, and artificial intelligence are revolutionizing the study and understanding of mental illness. Google recently launched PAIR, an acronym of People + AI Research, in an attempt to increase the utility of AI and improve human to AI interaction. Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors.
The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be.
With El Niño being a major player in the demise of the Texas drought, the question is whether the same phenomenon will help funnel heavy rain into drought-stricken California. El Niño occurs when ocean water temperatures climb above average across the central and eastern Pacific, centered around the equator. According to AccuWeather Meteorologist Ben Noll, "The warmer sea surface water strengthens the storm track over the Pacific Ocean and across the southern United States, especially during the winter, spring and autumn months of the year." The storm track during the summer is generally weak and disrupted by high pressure off the Pacific coast of North America. How stormy the pattern becomes moving forward into the winter is generally associated with the strength of El Niño, or how warm the tropical Pacific waters become. This spring, the pattern has contributed to rounds of heavy rain in Texas, to the point of not only breaking the drought but also causing destructive and deadly flooding. Earlier in May, the pattern also helped to funnel some moisture into California, but with far less impact when compared to Texas and the southern Plains. Some episodes of rain can occur in California over the summer, but these would not have a major and long-lasting impact on the drought. A few such episodes can occur during the first 10 days or so of June. If El Niño is going to have a significant impact on California, it will likely be during the winter. El Niño began during the late winter and early spring of 2015 but was rather weak. On average, an El Niño lasts nine to 12 months. According to AccuWeather Meteorologist Mark Paquette, "There is uncertainty about how long El Niño will last and when it will peak." There is some indication that the current El Niño pattern will strengthen and peak sometime in the autumn of 2015. "If this is the case, then California has a good chance at being pretty wet for the upcoming winter," Paquette said.
"Conversely, if El Niño peaks at moderate level or weakens by early fall, it becomes more dicey in terms of storms and rainfall for California." The El Niño of the winter of 1997-98 was one of the strongest on record and delivered storm after storm to California. The storms unloaded 20-30 inches of rain in California with yards of snow in the Sierra Nevada. Another variable is where the storm track takes aim. Sometimes the parade of storms focuses along the northern part of the Pacific coast. Drought conditions have been building in Washington and Oregon in recent months. A trend which is expected to worsen through the summer and could lead to a rough wildfire season. Because of the uncertainty of the strength of El Niño, as well as the rainy season storm track months from now, people should not count on a wet winter to wipe out drought in California or building drought in Oregon and Washington. Keep checking back with AccuWeather.com for updates on the drought status and any episodes of beneficial rain. Comments that don't add to the conversation may be automatically or manually removed by Facebook or AccuWeather. Profanity, personal attacks, and spam will not be tolerated. An 11-million ton iceberg hovers over the town of Innaarsuit in Greenland. The massive iceberg floats dangerously close to shore, threatening the small town. Two people suffered shark bites while swimming in the water off Fire Island in Suffolk County, New York, according to NBC New York. Newly formed Tropical Storm Ampil is set to strengthen as it tracks toward Japan’s Ryukyu Islands into the weekend. A rainstorm moving up from the south will coincide with a shift in the jet stream and mark the beginning of an extended period of wet, humid conditions in the northeastern US that may last into August. Eventualmente, la aspirante a ingeniero ambiental espera trabajar tanto con gobiernos como con corporaciones para eliminar microplásticos de los océanos de manera segura y eficiente. 
Circling your prey with Recursion in Elixir

Looping is a very common construct in programming. It is usually how we handle sets of data: we almost always want to apply some transformation to sets filtered by certain constraints. Say you have a budget app and it has a list of items that need to be bought; let's go about finding a total. Here is our simple example, which uses multiple clauses. Multiple clauses happen when several functions have the same name; Elixir uses pattern matching to determine which clause to run, depending on the arguments provided when the function is called. So what is happening in the example? We have a totaller function within a Total module. This function takes in a list and a default sum value, then calculates the current total by adding the head of the list to the sum. Note that the last thing this function does is call itself; Elixir knows that this call is in tail position and uses Tail Call Optimization for efficiency. Recursion is incredibly common in Elixir; however, actions such as traversing lists and sets are provided for in the Enum module, so we will rarely write hand-rolled recursion for such operations. How do you use recursion? Back to code 😝 State can be maintained in Elixir through recursion.
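The embedded code example referenced above did not survive extraction. A minimal sketch matching the prose description, assuming the names used there (a totaller function in a Total module) and an accumulator defaulting to 0:

```elixir
defmodule Total do
  # Header clause declares the default value for the accumulator.
  def totaller(list, sum \\ 0)

  # Base clause: an empty list adds nothing, so return the accumulated sum.
  def totaller([], sum), do: sum

  # Recursive clause: add the head to the running sum and recurse on the tail.
  # The self-call is the last expression, so it is tail-call optimized.
  def totaller([head | tail], sum), do: totaller(tail, sum + head)
end

Total.totaller([5, 10, 25])
# => 40
```

Pattern matching on `[]` versus `[head | tail]` is what selects the clause on each call, and because the recursive call is the final expression, the stack does not grow with the length of the list.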
Getting started with ASP.NET This tutorial guides you step by step through creating ASP.NET Web pages, a free training document of 62 pages by Erik Reitan. Table of contents - Introduction and Overview - Tutorial Support and Comments - Creating the Project - ASP.NET Web Forms Background - Creating, reviewing and running the new project - Creating the database structure - Initializing and seeding the database - Customizing the UI using styles, graphics and a master page - Adding pages and navigation - Displaying menu details and product data - Additional Resources - Creating the Data Models - Adding Image Files - Adding a Data Control to Display Navigation Data - Running the Application and Creating the Database - Display Data Items and Details - Adding Code to Display Products - Running the Application - File Size: - 2,703.77 Kb - Submitted On: Take advantage of this course called Getting started with ASP.NET to improve your Web development skills and better understand ASP.NET. This course is adapted to your level, as are all the other ASP PDF courses, to better enrich your knowledge. All you need to do is download the training document, open it and start learning ASP.NET for free. Introduction to ASP With this tutorial you will learn how to create dynamic web pages with ASP, a brief introduction in PDF of 8 pages designed for beginners. PHP5 web programming This PDF tutorial shows how to program a dynamic web site using PHP5, a free training lesson of 24 pages designed for beginners. Symfony2 and HTTP Fundamentals Symfony lets you develop faster and build more robust and efficient web sites and applications; it's important for every web developer. This tutorial explains the fundamental concepts and the basics of Symfony. Build Your Own ASP.NET Website This PDF tutorial is aimed at beginner, intermediate, and advanced Web designers looking to build their first web application with ASP.NET.
ASP.NET and Web programming This tutorial shows you the basics of ASP.NET programming, a free training document for download designed for intermediate-level users.
While it ranks far behind carbon dioxide in total emissions, methane is the second most common greenhouse gas emitted in the U.S. from human activities, accounting for 9 percent. Its lifetime in the atmosphere is much shorter than that of carbon dioxide — about 10 years — but methane is better at trapping and holding onto radiation than the other gas. Pound for pound, methane’s effect on climate change is 20 times that of carbon dioxide over 100 years, according to the Environmental Protection Agency. Knowing how much of the stuff exists in the atmosphere, then, is crucial for lawmakers and scientists alike, who collaborate on national and state greenhouse-gas reduction plans. Pinning down a national estimate, however, is proving to be tricky. Earlier this month, a pair of senators asked EPA to reconsider its estimates of methane emissions from natural-gas operations, and even rethink how it measures atmospheric methane, at a Senate Environment and Public Works Committee hearing. David Vitter, R-La., and Jim Inhofe, R-Okla., cited a September report funded by the Environmental Defense Fund and several gas operators that said the gas industry emits 10 percent less methane than what EPA’s inventory indicates. And now, research published Monday in Proceedings of the National Academy of Sciences suggests that the government database may underestimate the true values of U.S. methane gas emissions by 50 percent. Researchers traced atmospheric methane measurements across North America in 2007 and 2008 back to known emissions-producing sites, such as landfills, livestock ranches, and oil and gas facilities. Emissions from oil and gas drilling in Texas, Oklahoma, and Kansas, researchers found, were nearly triple that of most inventories, and almost five times higher than the Emissions Database for Global Atmospheric Research, the most commonly used global emissions inventory.
EPA’s latest report from its Greenhouse Gas Reporting Program showed that methane-gas emissions have slightly decreased in recent years in some industries such as fossil fuels and petroleum and natural gas. The agency is not oblivious to the discrepancies that exist between its own measurements and those of civil scientists. “EPA has not yet had the opportunity to fully review the PNAS study on methane emissions,” the agency said in a statement to National Journal. “However we are encouraged that more methane emissions measurement data are now available to the public. Research studies like these will add to our knowledge base of [greenhouse gas] emissions and will help us refine our estimates going forward.” If EPA’s own measurement data is not immune to change, it’s unlikely that state-level and other nations’ greenhouse-gas emissions inventories are either, especially as the technology that measures the potent gas and where it originates continues to develop.
Q What are bacteria? — Israel Edwards, Madison, Wis. A Brian Pfleger, associate professor in the department of chemical and biological engineering at the University of Wisconsin-Madison: Bacteria are amazing single-celled, simple organisms. They’re found everywhere on the planet in all sorts of environments from your gut to the soil to the Arctic and Antarctic. In every type of environment you can imagine, people have isolated bacteria. They do amazing things with applications in chemistry, human health and even biotechnology. The field of biotechnology is interested in using living organisms like bacteria to benefit human society. Classically, bacteria have been used in the production of medicines like antibiotics. Increasingly they’re being used for making chemicals that people need in their everyday lives, like fuel. In the past, the emphasis was on converting petroleum or fossil fuel resources to useful chemicals, but, today, researchers are looking at more sustainable solutions. With the help of bacteria, scientists can convert biomass, like corn crops and other renewable resources, into fuel or some of those same useful chemicals. There are lots of different kinds of bacteria, but the one often used in engineering applications is Escherichia coli, known as E. coli for short. E. coli bacteria are often discussed in the context of food scares or other types of health issues, but they’re useful organisms for research. It’s a well-known organism with wonderful traits for scientific purposes. Researchers already know a lot about E. coli’s properties, and it can grow very quickly on a wide range of cheap materials. Scientists can manipulate the bacteria genetically to give them new properties. By changing the blueprints in the organism, bacteria can be modified to be more useful in applications like sustainable biofuel generation.
electrophoresis (also called cataphoresis; redirected from ionophoresis): see colloid, a mixture in which one substance is divided into minute particles (called colloidal particles) and dispersed throughout a second substance; such a mixture is also called a colloidal system, colloidal solution, or colloidal dispersion.

Electrophoresis is the migration of colloidal particles or ionized macromolecules under the influence of an external electric field. It was discovered by F. F. Reuss in 1807 and is regarded as the most important electrokinetic phenomenon. An approximate relation between the velocity v of the moving particles and the electric field strength E is given by Smoluchowski's equation:

v = DζE / (4πη)

where η is the viscosity of the medium, D is the dielectric constant, and ζ is the electrokinetic (zeta) potential. Electrophoresis is used in electrochemistry to study the electric double layer and ion adsorption on a surface; it also has medical applications. In industry it is used to isolate natural rubber from latex, purify water, and separate kaolin from sand. It is used in biochemistry to analyze, separate, and purify biopolymers (chiefly proteins), bacterial cells, viruses, amino acids, and vitamins. The practical application of electrophoresis began after the Swedish scientist A. Tiselius designed a special apparatus for the moving-boundary electrophoresis of proteins in solution (1937). Electrophoretic methods involving the use of inert carriers, such as paper and gels, have gained the widest application. They have been given the general designation of zone electrophoresis because fractions of the separate substances form separate immiscible zones in the carrier. Electrophoresis is frequently combined with other methods of separating organic compounds, for example, with chromatography.
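In modern SI units the same Helmholtz-Smoluchowski relation is usually written v = εζE/η, with ε the absolute permittivity of the medium. A minimal numeric sketch, using illustrative values for a particle in water (not figures from this entry):

```python
# Helmholtz-Smoluchowski electrophoretic velocity, SI form: v = eps * zeta * E / eta.
# All numbers below are illustrative assumptions, not data from the entry.

EPS0 = 8.854e-12      # vacuum permittivity, F/m
EPS_R_WATER = 78.5    # relative permittivity of water at room temperature

def smoluchowski_velocity(zeta_v, field_v_per_m, eta_pa_s, eps_r=EPS_R_WATER):
    """Electrophoretic drift velocity (m/s) of a colloidal particle."""
    return eps_r * EPS0 * zeta_v * field_v_per_m / eta_pa_s

# zeta = 30 mV, E = 1 kV/m, viscosity of water ~1 mPa*s
v = smoluchowski_velocity(zeta_v=0.030, field_v_per_m=1000.0, eta_pa_s=1.0e-3)
print(f"{v * 1e6:.1f} um/s")  # a few tens of micrometres per second
```

The micrometres-per-second scale is why zone electrophoresis runs take minutes to hours over centimetre-sized gels.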
A technique has been developed for concentrating the electrophoretic zones of biopolymers in gels, which increases the resolving power of the method (disk electrophoresis). The combining of the antigen-antibody reaction with electrophoresis was the basis for the creation of immunoelectrophoresis. Electrophoretic analysis of biological fluids, such as blood serum (used primarily to study proteins), is widely used in the diagnosis of many diseases. N. N. Chernov
Rosetta was a European Space Agency space probe that rendezvoused with the comet 67P/Churyumov-Gerasimenko. It was launched in 2004 and reached the comet in 2014. A lander, Philae, touched down in 2014 but failed to position its solar panels effectively and achieved only a couple of days of science activities. The Rosetta science mission continued until 2016, when the probe was deliberately hard-landed on the comet. Its instruments were:
- ALICE - ultraviolet imaging spectrograph.
- OSIRIS - optical, spectroscopic, and infrared remote imaging system.
- VIRTIS - visible and infrared thermal imaging spectrometer.
- MIRO - microwave instrument for the Rosetta orbiter.
- CONSERT - comet nucleus sounding experiment by radiowave transmission.
- RSI - radio science investigation.
- ROSINA - Rosetta orbiter spectrometer for ion and neutral analysis (mass spectrometer).
- MIDAS - micro-imaging dust analysis system.
- COSIMA - cometary secondary ion mass analyzer.
- GIADA - grain impact analyzer and dust accumulator.
- RPC - Rosetta Plasma Consortium (studied solar wind interaction).
- APXS - alpha particle X-ray spectrometer.
- CIVA - comet nucleus infrared and visible analyzer.
- COSAC - cometary sampling and composition.
- MUPUS - multi-purpose sensors for surface and sub-surface science.
- Ptolemy - instrument to measure stable isotope ratios of volatiles on the nucleus.
- ROLIS - Rosetta lander imaging system.
- ROMAP - Rosetta lander magnetometer and plasma monitor.
- SD2 - sampling, drilling and distribution.
- SESAME - surface electric sounding and acoustic monitoring experiments.
Forensic Technology Used to Genetically Document Infanticide in Brown Bears

Scientists used a technology designed for the purposes of human forensics to provide the first genetically documented case of infanticide in brown bears, following the killing of a female and her two cubs in Trentino, in the Italian Alps, where a small re-introduced population has been genetically monitored for 20 years. The study, conducted and authored by Francesca Davoli of the Italian Institute for Environmental Protection and Research (ISPRA), Bologna, and her team, is published in the open access journal Nature Conservation. To secure their own reproduction, males of some social mammalian species, such as lions and bears, exhibit infanticidal behaviour: they kill the offspring of their competitors so that they can mate with the females, which become fertile again soon after they lose their cubs. However, sometimes females are also killed while trying to protect their young, resulting in a survival threat to small populations and endangered species. "In isolated populations with a small number of reproductive adults, sexually selected infanticide can negatively impact the long-term conservation of the species, especially in the case where the female is killed while protecting her cubs," point out the researchers. "Taking this into account, the genetic identification of the perpetrators could give concrete indications for the management of small populations, for example, placing radio-collars on infanticidal males to track them," they add. "Nevertheless, genetic studies for identifying infanticidal males have received little attention." Thanks to a database containing the genotypes of all bears known to inhabit the study site and an open-source software package used to analyse human forensic genetic profiles, the scientists were able to solve the case much like in a television crime series.
Upon finding the three corpses, the researchers were certain that the animals had not been killed by a human. In the beginning, the suspects were all male brown bears reported from the area in 2015. Hoping to isolate the DNA of the perpetrator, the researchers collected three samples of hairs and swabbed the female's wounds in search of saliva. Dealing with a relatively small population, the scientists expected that the animals would share genotypes to some extent, meaning they needed plenty of samples. However, while the DNA retrieved from the saliva swabs did point to an adult male, at first glance it seemed to belong to the cubs' father. Later, the scientists worked out that the attacker must have injured the cubs and the mother alternately, thus spreading blood containing the genetic material inherited from the father. Previous knowledge also excluded the father, since there are no known cases of male bears killing their own offspring; in fact, they seem to distinguish their own young, most likely by recognising the mother. To successfully determine the attacker, the scientists had to use the very small amount of genetic material from the saliva swabs they managed to collect and conduct a highly sophisticated analysis, in order to obtain four genetic profiles largely overlapping with each other. Then, they compared them against each of the males reported from the area that year. Eventually, they narrowed down the options to an individual listed as M7. "The monitoring of litters is a fundamental tool for the management of bear populations: it has allowed the authors to genetically confirm the existence of cases of infanticide and in the future may facilitate the retrieval of information necessary to assess the impact of SSI (sexually selected infanticide) on demographic trends," conclude the researchers. This article has been republished from materials provided by Pensoft Publishers. Note: material may have been edited for length and content.
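The profile-comparison step the researchers describe can be sketched in miniature: each bear's genotype is an unordered pair of alleles at each marker locus, and a query profile is matched against every candidate. The locus names and allele calls below are illustrative, not the study's actual data.

```python
# Sketch of multilocus genotype matching (hypothetical loci and allele calls):
# each profile maps locus name -> unordered pair of alleles.

def matching_loci(query, candidate):
    """Count loci typed in both profiles where the allele pairs agree."""
    shared = set(query) & set(candidate)
    return sum(1 for locus in shared
               if frozenset(query[locus]) == frozenset(candidate[locus]))

saliva = {"LocusA": (12, 14), "LocusB": (9, 9), "LocusC": (7, 11)}
males = {
    "M7": {"LocusA": (14, 12), "LocusB": (9, 9), "LocusC": (11, 7)},
    "M3": {"LocusA": (12, 12), "LocusB": (9, 10), "LocusC": (7, 11)},
}
best = max(males, key=lambda m: matching_loci(saliva, males[m]))
print(best)  # M7: matches at all three loci
```

Real forensic software additionally weights each match by allele frequencies in the population, which matters precisely because related bears in a small population share many alleles.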
For further information, please contact the cited source. Francesca Davoli, Mario Cozzo, Fabio Angeli, Claudio Groff, Ettore Randi. Infanticide in brown bear: a case-study in the Italian Alps - Genetic identification of perpetrator and implications in small populations. Nature Conservation, 2018; 25: 55. DOI: 10.3897/natureconservation.25.23776.
Subodh Varma | TNN | Updated: Aug 29, 2012, 18:33 IST

Rare 'Blue Moon' to be visible on Friday

NEW DELHI: The full moon that rises on Friday night, August 31, 2012, will be a Blue Moon. That's what it has been dubbed in modern folklore of the West. But will it actually be blue? Very unlikely, and if it is, not for the reason it is called a Blue Moon! In recent times, the second full moon of any month has been called a Blue Moon. About twenty-nine and a half days pass between two full moons, while our months have 30 or 31 days, so it is possible to have two full moons in any one month except February. By this definition, the full moon of 31 August will be a Blue Moon because there was a full moon on 2 August. Scientists have calculated that a Blue Moon occurs every two and a half years on average; hence the phrase "once in a blue moon", meaning a rare or uncommon event. This tradition of calling the second full moon of a month a Blue Moon was thought to have started about four hundred years ago. But recent research by historians has shown that it may have arisen as recently as 1946, when an amateur astronomer, James Hugh Pruett of the US state of Oregon, wrote about it in Sky & Telescope magazine and misinterpreted an earlier tradition prevalent among US farmers. Those farmers had noted that there were three full moons in every season except one, which had four; the third full moon of that four-full-moon season was the one they called a Blue Moon. Pruett made a mistake in reading about this in an almanac and created the modern definition. But that doesn't mean that there can never be a literally blue moon. According to NASA, if the air is full of polluting particles of a certain size, the moon will appear blue. This can happen because of forest fires and volcanic eruptions. Such a blue moon can appear irrespective of the date or month - and it doesn't even have to be a full moon. In 1883, a massive volcano called Krakatoa erupted in Indonesia.
Scientists have estimated that its power was equivalent to a 100-megaton nuclear bomb. It spewed so much ash into the atmosphere that people across the world saw lavender suns, bright red sunsets - and, yes, blue moons. Ash particles of about 1 micron in size scattered the red component of light and allowed blue or green light through - hence the effect. In recent times, people saw blue moons in 1980 after Mt. St. Helens erupted, in 1983 after the eruption of the El Chichon volcano in Mexico, and in 1991 after Mount Pinatubo blew up. So, this Friday, enjoy the full moon - and imagine it is blue. Perhaps in memory of Neil Armstrong.
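The "every two and a half years" figure can be sanity-checked with nothing more than the mean synodic month of 29.530588 days, stepped from an arbitrary starting date. This ignores orbital perturbations and time zones, so it is a rough statistical sketch, not an ephemeris:

```python
# Count calendar months containing a second full moon over a century,
# stepping the mean synodic cycle (29.530588 days) from an arbitrary epoch.
from datetime import date, timedelta

SYNODIC = 29.530588  # mean days between successive full moons

def blue_moon_months(start, years):
    """Return (year, month) pairs that contain a second full moon."""
    t, per_month, blue = 0.0, {}, []
    horizon = years * 365.25
    while t < horizon:
        d = start + timedelta(days=t)
        key = (d.year, d.month)
        per_month[key] = per_month.get(key, 0) + 1
        if per_month[key] == 2:
            blue.append(key)
        t += SYNODIC
    return blue

blues = blue_moon_months(date(2000, 1, 1), 100)
print(len(blues))  # roughly 40 per century, i.e. about one every two and a half years
```

The count comes out near 40 regardless of the epoch chosen, which is why the modern definition makes blue moons a reliably "rare-ish" event.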
Scientists at The University of Manchester hope a major breakthrough could lead to more effective methods for detoxifying dangerous pollutants like PCBs and dioxins. The result is the culmination of 15 years of research and has been published in Nature. It details how certain organisms manage to lower the toxicity of pollutants. The team at the Manchester Institute of Biotechnology were investigating how some natural organisms manage to lower the toxicity and shorten the life span of several notorious pollutants. Professor David Leys explains the research: "We already know that some of the most toxic pollutants contain halogen atoms and that most biological systems simply don't know how to deal with these molecules. However, there are some organisms that can remove these halogen atoms using vitamin B12. Our research has identified that they use vitamin B12 in a very different way to how we currently understand it." He continues: "Detailing how this novel process of detoxification works means that we are now in a position to look at replicating it. We hope that ultimately new ways of combating some of the world's biggest toxins can now be developed more quickly and efficiently." It has taken Professor Leys 15 years of research to reach this breakthrough, made possible by a dedicated European Research Council (ERC) grant. The main difficulty has been in growing enough of the natural organisms to be able to study how they detoxify the pollutants. The team at the MIB were finally able to obtain key proteins through genetic modification of other, faster-growing organisms. They then used X-ray crystallography to study in 3D how halogen removal is achieved. The main drive behind this research has been to look at ways of combatting the dozens of very harmful molecules that have been released into the environment, many of them expelled directly as industrial pollutants or formed by burning household waste.
As the concentration of these molecules has increased over time, their presence poses more of a threat to the environment and humanity. Some measures have already been taken to limit the production of pollutants; for example, PCBs were banned in the United States in the 1970s and worldwide in 2001. Professor Leys says: "As well as combatting the toxicity and longevity of pollutants we're also confident that our findings can help to develop a better method for screening environmental or food samples." Morwenna Grills | EurekAlert!
AMIE obtained these images on 29 December 2004, flying over the western edge of the Moon around 70° West longitude. SMART-1 was on an orbit ranging between 1000 kilometres from the Moon (perilune, over the South Pole) and about 5000 kilometres (apolune, over the North Pole). From these distances, a series of images could be obtained with some overlap between them, allowing a mosaic to be built during a good part of the orbit. Note that the images are mirror-inverted and that the spacecraft's attitude direction changes slightly along the orbit. The beginning of the series shows close-up views of the old highland areas at the edge of the Orientale basin. It is then possible to see the edge of Oceanus Procellarum, a mare area in Sinus Roris (from 30° to 50° North), and the northern highlands with some conspicuous craters.

Elongated ejecta debris from the giant Orientale impact basin - image 25
Like a target-ring bull's-eye, Mare Orientale (the 'eastern sea') is one of the most striking large-scale lunar features. This impact basin is located on the extreme western edge of the visible side of the Moon, and it is difficult to see from Earth. Basin ejecta begins just outside the Montes Cordillera and extends up to 500 kilometres beyond the base of the mountains. This ejecta feature has a rough texture and contains linear patterns that point back at the centre of Orientale.

Edge of Oceanus Procellarum - images 52-56
Oceanus Procellarum, Latin for 'Ocean of Storms', is the largest of the lunar maria and is situated on the western edge of the visible side of the Moon. In this image, it is the flat area on the right-hand side. Oceanus Procellarum extends over 2500 kilometres along its north-south axis and covers an area of about four million square kilometres. Its name derives from the old superstition that its appearance during the second quarter brought bad weather.
Like all lunar maria, this area was formed by ancient floods from volcanic eruptions that covered the region in a thick, nearly flat layer of solidified magma. Unlike the other lunar maria, however, Procellarum is not contained within a single well-defined impact basin. Minor bays and seas such as Mare Nubium and Mare Humorum (to the south) lie around its edges. To the northeast, Oceanus Procellarum is separated from Mare Imbrium by the Carpathian Mountains. Oceanus Procellarum was the landing site of the lunar probes Surveyor 1, Surveyor 3, Luna 9 and Luna 13, as well as of Apollo 12.

Pythagoras crater - images 115-119
The Pythagoras crater is centred at 63.5° North latitude and 62.8° West longitude. Its diameter is about 130 kilometres and it is about 5 kilometres deep. The rim of this crater is very well preserved. It presents a wide terrace system and a slight rampart around its external part, and it is possible to observe that Pythagoras has a hexagonal form. The crater's floor is flattened, but with an irregular and hilly surface. There is evidence of landslides around the periphery, while in the centre it is possible to see a sharp, mountainous rise with a double peak that rises one and a half kilometres above the floor of the crater. The crater was named after Pythagoras (582-507 BC), the Greek mathematician and philosopher who believed that everything was related to mathematics and thought that everything could be predicted and measured in rhythmic patterns or cycles.

Carpenter crater - image 125
The Carpenter impact crater is located in the northern part of the Moon, at 69.4° North latitude and 50.9° West longitude, and is visible in the upper left part of the image. Carpenter is 2.6 kilometres deep, and its diameter is 59 kilometres. In geological terms, Carpenter is a young lunar crater, much younger than the surrounding crater formations - as one can see from its features, which have not been significantly eroded by later impacts.
Its inner wall displays an appearance of slumping (especially along the eastern face) and presents some terraces, and a small crater lies along the south-southeastern inner wall. The interior floor within the sloping inner walls is quite flat, but it shows irregular features such as small bumps and hills. Near the middle it is possible to see an unusual double-peak formation, with a smaller peak offset to the west and a larger ridge offset to the east. The crater is named after James Carpenter (1840-1899), British astronomer at the Royal Observatory in Greenwich. In 1871, together with the engineer James Nasmyth, he produced a book about the Moon titled 'The Moon: Considered as a Planet, a World, and a Satellite'. The book contained images of plaster models of the lunar surface taken from different angles - more realistic than the images that could be achieved by telescope photography at that time.

Poncelet crater - image 127
Poncelet is an eroded formation 69 kilometres wide - the remains of a lunar crater located near the northern limb of the Moon at 75.8° North latitude and 54.1° West longitude. It is visible at the bottom of the image, slightly off-centre to the left. Its interior has been flooded with lava, and it presents many tiny craters. The outer rim is a low, circular ridge which breaks towards the south and northeast (in this image, south is up).

Mouchez crater - image 133
Mouchez is an 81-kilometre-wide formation - the remnant of a lunar crater located near the northern limb of the Moon. It is centred at 78.3° North latitude and 26.6° West longitude (visible at the top of the image, slightly off-centre to the left), to the north of Philolaus crater. As visible in the image, almost the entire eastern rim of this crater is missing, and the remaining arc is heavily eroded. Also visible at the bottom right of the image is the Gioja crater (in this image, south is up).
Gioja crater - image 135
Gioja is a lunar crater located in the vicinity of the north pole of the Moon, at 83.3° North latitude and 2° East longitude (centre of the image, slightly off-centre towards the bottom). Because it lies close to the northern limb, it is difficult to observe from Earth (in this image, south is up). Gioja is 2.9 kilometres deep and 41 kilometres in diameter. It is attached to the southern rim of the larger, low-walled Byrd crater. The highest point of Gioja's rim is situated at the northwest, where it has been reinforced by the rim of the Byrd crater and by other old formations no longer visible. The interior floor is nearly flat, with a ridge going from the middle to the north-northeast rim. The inner floor is also marked by tiny craters, such as the pair visible near the west-northwestern inner wall. The Gioja crater is named after the Italian marine pilot and inventor Flavio Gioja (14th century), who is credited with perfecting the sailor's compass. Monica Talevi
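One quick way to compare the craters described above is their depth-to-diameter ratio, a common first-order indicator of crater degradation. A minimal sketch using only the figures quoted in the text:

```python
# Depth-to-diameter ratios for the craters described above
# (diameter km, depth km, as quoted in the text).
craters = {
    "Pythagoras": (130.0, 5.0),
    "Carpenter":  (59.0, 2.6),
    "Gioja":      (41.0, 2.9),
}

for name, (diameter, depth) in craters.items():
    print(f"{name}: d/D = {depth / diameter:.3f}")
```

Gioja's noticeably higher ratio is consistent with its smaller size; larger craters like Pythagoras are proportionally shallower because their floors rebound and slump during formation.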
In math, numbers are where everything starts - both recognising them and counting with them properly. Numbers fall into different groups or types, and each type can have unique properties: prime numbers, irrational numbers, quotients and so on. Below are some pages that give a basic introduction to key number properties, along with the categories certain numbers can fall under.

Math Numbers Pages
- Whole Numbers, Place Value: Each digit in a whole number has a specific place value, such as tens, hundreds and thousands.
- Numbers as Words: How to write out a whole number as a word.
- Even and Odd Numbers: Numbers can be even or odd; this section explains the difference, and how odd and even numbers in a sum influence the answer.
- Types of Numbers: The main number types are explained, such as natural, rational and irrational, among others.
- Irrational and Rational Numbers: The definition of both a rational and an irrational number.
- Prime Numbers: Prime numbers are unique and important numbers in math.
- Scientific Notation: Scientific notation is a useful and neat way to write very large or very small numbers.

Units of Measurement
- Time Math: An introduction to telling the time of day from analog and digital clocks, along with addition and subtraction involving time.
- Calendar Math: How the current calendar is set up to measure years, why we have leap years, and how to write the date numerically.
- Measuring Length: Details of metric and imperial units for measuring length.
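To give one concrete example from the list above: scientific notation writes a number as a × 10^n with 1 ≤ |a| < 10. A tiny sketch using Python's built-in `e` format specifier:

```python
# Scientific notation: write a number as a x 10^n with 1 <= |a| < 10.
# Python's "e" presentation type does the conversion for us.

def to_scientific(x):
    """Format a number in scientific notation with three decimal places."""
    return f"{x:.3e}"

print(to_scientific(299792458))  # 2.998e+08  (speed of light in m/s)
print(to_scientific(0.00052))    # 5.200e-04
```

The exponent (+08, -04) is the power of ten, so 2.998e+08 reads as 2.998 × 10^8.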
"E. coli has more than four thousand genes, and the functions of one-fourth of these remain unknown," says Dr. Deborah Siegele, a biology professor at Texas A&M University whose laboratory specializes in carrying out research using the bacterium. Harmless E. coli strains are normally found in the intestines of many animals, including humans, but some strains can cause diseases. Siegele and her co-workers at the University of California San Francisco, Nara Institute of Science Technology and Purdue University have devised a novel method that allows rapid and large-scale studies of the E. coli genes. The researchers believe their new method, described in the current online issue of Nature Methods, will allow them to gain a better understanding of the E. coli gene functions. The principle behind this new method is genetic interaction. Interaction between genes produces observable effects, and this allows researchers to identify the gene functions. The research team has called their new method GIANT-Coli, short for genetic interaction analysis technology for E. coli. The team believes that its method has great potential to quicken the progress of discovering new gene functions. The use of GIANT-Coli has already allowed researchers to identify some previously unknown genetic interactions in E. coli. To study genetic interaction, researchers need to use what they call double-mutant strains. GIANT-Coli allows large-scale generation of these double-mutant strains (high-throughput generation). And this is the first time that a high-throughput generation method for double mutants of E. coli has been developed. Why is it so important to know the E. coli better? "Much of what we know about other bacteria, including the more dangerous ones like Vibrio cholerae, comes from our knowledge of E. coli," says Siegele. "The E. coli is a model organism." 
Siegele says that GIANT-Coli can be developed to study genetic interactions in other bacteria, and because some proteins are conserved from bacteria to humans, perhaps some of the results can even be extrapolated to gene function in humans. Moreover, Siegele points out that the method has obvious applications in medicine, because understanding gene functions in harmful bacteria will help in developing better treatment approaches. Dr. Deborah Siegele | EurekAlert!
Phase Four LLC, a startup based in El Segundo, California, announced plans March 7 to conduct the first on-orbit demonstration of its plasma propulsion technology in late 2017.

If stringent demands are posed for the foundation of an environmentally sustainable space era, enabling a comprehensive tool set of sustainable space solutions appears as an important choice. However, sustainability is a broad term within this context. Achievement of environmental sustainability

Facing congressional pressure to begin work on an American replacement for the Russian-built main rocket engine used today to launch most U.S. national security payloads, the U.S. Air Force quietly unveiled the initial steps in a procurement strategy that has been complicated by a key industry player’s own plans.

The U.S. Air Force Research Laboratory (AFRL) has answered with a qualified “yes” the question of whether a British company’s revolutionary air-breathing rocket engine, designed for a horizontal-takeoff vehicle climbing to orbit with a single stage, holds promise.

NASA’s proposed, and oft-reviled, Asteroid Redirect Mission (ARM) may be worth doing if it helps pave the way for an electric-powered interstellar rocket engine, Rep. John Culberson (R-Texas) said.

The principal beneficiaries of the government program have been France’s two main satellite prime contractors, Airbus Defence and Space, and Thales Alenia Space.

On the same day two Russian-made RD-180 rocket engines arrived in Alabama from Moscow, the U.S. Air Force issued a request for information on the possibility of weaning itself from those very engines.

Aerojet Rocketdyne will demonstrate the use of additive manufacturing techniques to produce selected, full-scale rocket engine components.
Geoengineering, an emerging technology aimed at counteracting the effects of human-caused climate change, also has the potential to counteract political polarization over global warming, according to a new study. Published Feb. 9 in the journal Annals of the American Academy of Political and Social Science, the study found that participants -- members of large, nationally representative samples in both the United States and England -- displayed more open-mindedness toward evidence of climate change, and more agreement on the significance of such evidence, after learning of geoengineering. "The result casts doubt on the claim that the advent of geoengineering could lull the public into complacency," said Dan Kahan, professor of law and psychology at Yale Law School and a member of the research team that conducted the study. "We found exactly the opposite: Members of the public who learned about geoengineering were more concerned and less polarized about global warming than those who were told of the need to reduce greenhouse gas emissions as a way to reduce climate change," he said. As defined by the U.S. National Academy of Sciences (NAS), "geoengineering" refers to deliberate, large-scale manipulations of Earth's environment in order to offset some of the harmful consequences of human-caused climate change. Potential examples include solar reflectors that would cool global temperatures by reflecting more sunlight away from the Earth and so-called "carbon scrubbers," which would remove CO2 from the atmosphere. Both the NAS and the Royal Society, the preeminent association of expert scientists in the United Kingdom, have issued reports calling for stepped-up research on geoengineering, which also was identified as a necessary measure for counteracting the impact of global warming in the latest assessment report of the United Nations' Intergovernmental Panel on Climate Change. 
In the study, researchers divided the 3,000 participants into groups, providing some with information on geoengineering and others with information on proposals to limit greenhouse gas emissions. They instructed the participants to read and evaluate actual study findings offering evidence that human activity, including the burning of fossil fuels, was raising the Earth's temperature and creating serious environmental risks, including coastal flooding and drought. "The participants who learned about geoengineering were less polarized about the validity of the evidence than were the ones who got information on carbon-emission limits," said Kahan. "In fact, the participants who read about carbon-emission limits were even more polarized than subjects in a control group, who read the information on the evidence of global warming without first learning about any potential policy responses," he said. This result was consistent with previous research on a dynamic known as "cultural cognition," which describes the tendency of individuals to react dismissively to evidence of environmental risks when that evidence threatens their values or group identities. "The information on geoengineering," said Kahan, "helped to offset bias by revealing to those study participants with a pro-technology outlook that acknowledging evidence of global warming does not necessarily imply the 'end of free markets' or the 'death of capitalism,' a theme that some climate-change policy advocates emphasize." Kahan added that the significance of the research extended beyond the issue of whether the advent of geoengineering would stifle or promote public engagement with climate science. "What's important is that people assess information about science based not only on its content but on its cultural meaning or significance," explained Kahan.
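The kind of comparison the study describes can be pictured with a toy calculation. Everything below is invented for illustration: the group labels, the 0-10 agreement ratings, and the simple gap-between-group-means measure are assumptions, not the study's actual survey instrument or statistics.

```python
from statistics import mean

def polarization(group_a, group_b):
    """Gap between two outlook groups' mean agreement with the evidence.
    A toy measure only; the published study used its own instrument."""
    return abs(mean(group_a) - mean(group_b))

# Invented 0-10 ratings of agreement with the climate-change evidence,
# split by cultural outlook, under two framings of the same evidence
control_gap = polarization([8, 7, 9], [3, 4, 2])  # no policy framing
geoeng_gap  = polarization([8, 7, 8], [6, 5, 7])  # geoengineering framing

print(control_gap, round(geoeng_gap, 2))  # 5 1.67 -> the gap shrinks
```

A smaller gap between the two outlook groups is what the study reports as reduced polarization under the geoengineering framing.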
"The study supports the conclusion that science communicators need to broadcast engaging signals along both the 'content' and 'meaning' channels if they want their message to get through." The study was conducted by a team of researchers associated with the Cultural Cognition Project at Yale Law School and the Center for Applied Social Research at the University of Oklahoma. Citation: Annals of the American Academy of Political and Social Science DOI: 10.1177/0002716214559002 Debra Kroszner | EurekAlert!
A car’s climate impact depends on three main factors: the efficiency of the car, the distance driven, and in the case of electric and plug-in hybrid cars, how the electricity it uses is generated.
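Those three factors simply multiply together. The sketch below shows the arithmetic for an electric car; all of the figures (annual distance, energy use per kilometre, grid carbon intensity) are invented placeholders, not values from the article:

```python
def annual_co2_kg(km_per_year, kwh_per_km, kg_co2_per_kwh):
    """Tailpipe-free but not carbon-free: an electric car's emissions are
    distance driven x energy used per km x grid carbon intensity."""
    return km_per_year * kwh_per_km * kg_co2_per_kwh

# Placeholder figures: 15,000 km/year at 0.18 kWh/km on a 0.4 kg CO2/kWh grid
print(annual_co2_kg(15000, 0.18, 0.4))  # 1080.0 kg of CO2 per year
```

Halving any one factor halves the total, which is why the same car can have a very different footprint on a coal-heavy grid than on a hydro-heavy one.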
Sustainability of Aquaculture

Thousands of years ago, humans learned how to farm instead of hunting and gathering. Instead of going after the food, we brought the food to us. As we have made a greater impact on the earth, we now need to bring other types of food to us as well. Fisheries have been depleted in areas of the world where they used to be very abundant. During the 1970s, aquaculture came into the public eye as a way to combat this. But that was not the first use of aquaculture: Southeast Asians have practiced a form of it for over 2,000 years (Santis, 1984). The process actually occurred naturally and was only aided by farmers. Rice paddies provide the perfect habitat for fish and other organisms, and these fish in turn fertilized the crops to produce higher yields. Once this was understood, farmers added more fish larvae to also gain a better fish crop. Seafood used to be viewed as a luxury, but it no longer is today. The consumption of seafood has gone up: from 1988 to 1995, U.S. per capita consumption of salmon increased threefold, from 0.44 pounds to 1.41 pounds (McGinn, 1998). But commercial fishing areas have been depleted. In the July 2003 edition of Time magazine, scientists estimated that up to 90% of the predator fish have been depleted from some areas of the ocean. Aquaculture is seen as a way to combat this. A simple explanation of aquaculture is the controlled growth and production of fish in enclosed or controlled areas. Aquaculture, as defined by the Food and Agriculture Organization, is "the farming of aquatic organisms including fish, mollusks, crustaceans and aquatic plants" (ABS, 2003). It really is farming fish. Farming implies some sort of intervention in the growing process to enhance production, such as regular stocking, feeding, protection from predators, and so on.
The two characteristics that distinguish aquaculture from capture fisheries production are intervention in the rearing process and ownership of the stock being cultivated (ABS, 2003). Aquaculture is seen as a way to make up for the falling world fish production and be a more efficient producer of protein as well, but these goals come at costs. Many people may know or understand that growing monoculture agriculture is not the best thing for the land and causes certain environmental problems. These can include erosion, land degradation, and chemical runoff. In the same way, aquaculture can have harmful environmental effects, especially if grown as a monoculture. Most aquaculture takes place in enclosed areas such as inland ponds or open water pens. The biggest problem with this is waste. A 2 acre salmon farm in the United States can produce as much waste as a town of 10,000 people (Kane, 1993). This causes water problems in and around the area of the farm. Besides fish being poisoned by their own waste, fecal and urinary products, uneaten fish food, and chemicals and antibiotics used to control diseases are also wastes that may result. These can cause significant organic pollution and increased turbidity of the water and the sea floor sediments in the vicinity of the cages. This results in the temporary disappearance of animals and plants that live on or in the seabed (ABS, 2003). Fish waste and other nitrogen effluence also causes a rapid growth of algae. This results in the algae eventually consuming and depleting the water oxygen (Baringa, 1990). Fish, like humans, become much more susceptible to the spread of disease when packed into a small living environment. To prevent this, chemicals are then added to the water by fish farmers. This may help neutralize some diseases and effects of fish waste, but chemicals leaking to surrounding areas can have harmful effects on the surrounding environment and ecosystems. Another problem of aquaculture is escaping sea life. 
Large numbers of sea life can escape container areas, especially in open water pens. Pens must be put in areas with some tide or current to flush the pens with new water, but these areas are more vulnerable to storms and other problems that could cause the pens to break, and when a pen breaks, it can release thousands of invasive sea life. For example, in July of 1996, in Puget Sound, 100,000 salmon escaped when a pen lost a mooring (The Waters, 1997). In 1993, data from the Norwegian Directorate for Nature Management showed that about 20% of fish caught in the wild have escaped from farms (Kane, 1993). But invasive species aren’t the only problem with escaping wildlife. Many of the animals grown in aquaculture are genetically modified. Even though aquaculture has stated high and noble goals for its purpose, the primary goal, of course, is money. Fish farmers want to make as much money as possible as quickly as possible. To do this, ideal species must have high market value and grow cheaply and rapidly. In order to accomplish this, farmers change the genetic makeup of grown sea animals. Like modifications such as hybrid corn, farmers have engineered sea life to grow, taste, and look ideal for sale and consumption (Kane, 1993). One example of this is salmon grown through aquaculture. Salmon in captivity, for some reason, produce gray meat. Fish farmers have developed supplements in salmon feed that when added, turn the gray flesh of the captive fish into the eye-pleasing pink of wild salmon (Kane, 1993). Another example is the African native fish called tilapia. Tilapia is the most commonly grown fish and has been genetically altered to grow up to 60% faster than normal tilapia (McGinn, 1998). Going back to escaping sea life, this genetically altered sea life can have harmful effects on wild species. 
It has been predicted that genetically “homogenized” and interbred salmon that escape from farms and breed with wild salmon could potentially cause genetic suicide for wild salmon down the road (The Waters, 1997). Altering the genetic makeup of sea life can produce greater yield, but this seems to come at costs of harmful and often unknown consequences, and the health risks to both humans and the natural environment have been heavily debated. Besides aquaculture causing environmental degradation through its processes, the environment is also destroyed to make areas usable for aquaculture. Many fisheries depend on estuary habitats including salt marsh, tidal freshwater marsh, seagrass, mangroves etc. (MAP, 2004). These areas are prime natural areas for starting aquaculture productions, especially for shrimp aquaculture, which greatly threatens many of these natural coastal areas (MAP, 2004). For all its harmful effects, aquaculture has accomplished some of its lofty goals. As seafood consumption has increased over the recent decades, aquaculture has taken up much of the slack. Today, 40% of the seafood that people eat is grown in captivity (The Waters, 1997). Farmed seafood has increased enough to flood the world market and decrease the price of seafood, which actually makes commercial fishing less profitable (McGinn, 1998). Worldwide there are now 2 kilograms of fish produced for every 5 kilograms of beef (McGinn, 1998). The oceanographer Carl Safina previously said, “Aquaculture will do for coastal systems what agriculture did for the prairies of North America. It replaces natural populations of animals and replaces natural habitats (The Waters, 1997).” On its present course, aquaculture is in danger of doing just that, but it doesn’t have to result in this end. Aquaculture could be made sustainable. What is sustainability? It is the ability to preserve our present economic and environmental conditions for future generations. 
In 1987, the World Commission on Environment and Development developed a definition that simply reads: "Sustainable development meets the needs of the present without compromising the ability of future generations to meet their own needs." Though this vague definition can be interpreted in very different ways, the point is that aquaculture needs to find a way to assess and consider every aspect and consequence of its actions. There needs to be a way to work with the environment and make the smallest possible impact on ecosystems, while still preserving our economic stability. However, one of the biggest problems with reaching this goal is that in the past, technological research was never aimed at it: "Most previous research has concentrated on making more money and growing cash crops like lobster and salmon instead of looking to feed the masses" (Santis, 1984). Once again, to find a sustainable manner of aquaculture, we look to Asia. There, beginning centuries ago, farmers have grown seafood in polyculture systems that more closely mimic the natural environment. Western culture needs to learn that several kinds of sea life can be raised together, much like a natural system, with mussels, seaweeds, bottom-feeding fish, and top-feeding fish together (Kane, 1993). The Asians also figured out how to use wastes in a constructive manner: human or animal waste that ran off the land fed organisms that fish could feed on, fish waste was then used to fertilize rice fields, and the ponds were finally cleaned with tea leaves instead of chemicals (McGinn, 1998). So aquaculture can be combined with agriculture in a more sustainable method of production, yielding many types of crops while recycling wastes as nutrients (Kane, 1993). Aquaculture has many problems that need to be solved in order to make it more beneficial than harmful.
Research and technology need to be devoted to monitoring aquaculture by setting up programs such as environmental impact assessments, standards, emissions trading, zoning and restrictions for water and resource use, and buffer zones. The sustainability of aquaculture needs to be seen as something we must devote time and money to. But any advances toward sustainability will come only when the public and scholars understand the importance of this process's impact. As aquaculture grows into a more and more prominent form of production in the future, how our seafood is produced needs to become more of an issue to address.

References

ABS (Australian Bureau of Statistics). "Forestry and Fishing: Aquaculture and the Environment." Year Book Australia, 2003. http://www.abs.gov.au/Ausstats/abs@.nsf/
Baringa, Marcia. "Fish, Money, and Science in Puget Sound." Science 247, February 9, 1990.
Kane, Hal. "Growing Fish in Fields." World Watch, September-October 1993.
MAP (Mangrove Action Project). http://www.earthisland.org/map/index.html. 2004.
McGinn, Anne Platt. "Blue Revolution." World Watch, March-April 1998.
Santis, Marie de. "To Hunt or Farm." Oceans 17, 1984.
"The Waters." Audubon, March-April 1997.
Scientists find earliest evidence of humans altering the environment

Study reveals that introduction of agriculture 3,500 years ago profoundly changed ecosystems

In a paper published in Nature Scientific Reports, researchers from the University of Wollongong’s (UOW) School of Earth and Environmental Sciences have provided the earliest unequivocal evidence in the geological record of a profound human effect on the environment. The paper shows how soils have responded to natural climate variation over the past 12,000 years, and that human activity around 3,500 years ago disturbed this natural equilibrium. The paper’s lead author, UOW PhD student Mr Leo Rothacker, said soils are key components of ecosystems and vital to human societies, making it important to understand how they evolve through time. “Soils are one of the most important components of the Earth’s ecosystems. In order to sustain future soil resources, we must understand how soils respond to changes in climate and human land-use,” he said. “Previous studies have linked the downfall of civilisations to soil diminishment via accelerated erosion, which resulted in poor agricultural yields and might have caused wide-spread starvation. “Our study provides the first evidence that this is exactly what happened in ancient Greece/Macedonia 3,200 years ago. Our data indicates that the human impact via agricultural practices was so dramatic that soils were completely stripped from the landscape.
“This could have contributed to the establishment of the Greek ‘Dark Ages’, a time where population declined rapidly, agriculture suffered, the metallurgy of bronze and the ability to write were forgotten.” The researchers studied sediments deposited in Lake Dojran in Macedonia and Greece to see how natural climate change and human activity had affected soils in the region over the past 12,300 years. The sedimentary record revealed an unprecedented erosion event associated with the development of agriculture in the region between 3,500 and 3,100 years ago, indicating a transition from a natural to an anthropogenic landscape. The drilling platform at Lake Dojran. Scientists analysed cores taken from sediment at the lake that dated back to 12,300 years ago. Picture: Alexander Francke “Lakes are excellent archives to unravel the environmental variability in the geological past,” co-author Dr Alexander Francke said. “In particular relatively small and shallow lakes (such as Dojran) are highly sensitive to environmental variability. “Lake Dojran further benefits from the fact its catchment area is small, which provides a more direct connection between hillslope erosion and sediment deposition in the basin, and the local paleoclimatic conditions have already been extensively studied. This allows us to directly link erosion to climate and thus to unravel natural and human causes of accelerated hillslope erosion. “Our evidences for dramatic erosion 3,200 years ago coincides with the first occurrence of cultivate plant taxa in the pollen record. This supports that humans removed trees at that time, to replace them by cultivated plants. 
“Climate variability cannot be invoked to explain these changes since no significant change in climatic conditions is known for this time interval, and we show that before 3,200 years ago, the response of soil erosion and development to climate change is very different to that what is observed at 3,200 years.” Co-author and team leader Associate Professor Anthony Dosseto said the study provides evidence that the Anthropocene, a proposed geological period dating from the commencement of significant human impact on the Earth's geology and ecosystems, began much earlier than its most commonly given starting point at the beginning of the Industrial Revolution. “Several propositions have been made for the onset of the Anthropocene,” Professor Dosseto said. “Some have proposed that it started as early as the emergence of agriculture during the Neolithic Revolution (ca 12,000 years ago), however, supporting observations are scarce. “Our study shows clear evidences that as early as 3,200 years ago humans modified their landscape so deeply that it is recorded in the lake sediment archives. This supports that the Anthropocene – and thus a deep human impact on the environment – started as early as a few thousand years ago.” The research team is undertaking a similar study at nearby Lake Ohrid, with early results showing similar patterns to those seen at Lake Dojran. Professor Dosseto said the team would also study other sites around the world. “We are applying the novel tools presented in our study to various locations around the world, including Australia and New Zealand. This will provide unprecedented insights on how soil resources respond to climate change, and how early humans have impacted their environment,” Professor Dosseto said. “Impact of climate change and human activity on soil landscapes over the past 12,300 years” by Leo Rothacker, Anthony Dosseto, Alexander Francke, Allan R. Chivas, Nathalie Vigier, Anna M. 
Kotarba-Morley and Davide Menozzi is published in Nature Scientific Reports on 10 January 2018. The research was funded by ARC Discovery Project DP140100354.
As Tropical Depression 11W was strengthening into Tropical Storm Son-tinh near the northern Philippines, the Global Precipitation Measurement mission or GPM core satellite analyzed its rainfall.

NASA-NOAA's Suomi NPP satellite passed over the Northwestern Pacific Ocean and captured a visible image of recently formed Tropical Depression 11W.

A new study demonstrates that a correlation also exists between cumulative carbon emissions and future sea level rise over time -- and the news isn't good.

On Sunday, July 15, the National Hurricane Center (NHC) noted that Sub-Tropical Storm Beryl was devoid of precipitation around its center of circulation and infrared imagery from NASA's Aqua satellite confirmed it. By July 16, Beryl had again become a remnant low pressure area.

This year's monsoon has been assessed as average but India's Meteorological Department statistics show that daily mean rainfall for the country has recently been above normal. At least 15 people were killed by floods and landslides in India on Wednesday July 11, 2018. So far this year, close to 200 deaths may have resulted from India's heavy monsoon rainfall.

The remnants of former Tropical Storm Beryl are being battered by upper level winds, and that's fragmenting them even more. NASA's Aqua satellite passed over the northwestern Atlantic Ocean and found some of those scattered thunderstorms were strong.

Researchers have calculated the capacity of North American forests to sequester carbon in a detailed analysis that for the first time integrates natural processes and climate changes that are likely to alter growth over the next 60 years.

Former Tropical Storm Beryl doesn't seem to want to dissipate into hurricane history. Visible data from NASA's Terra satellite captured the remnants of Beryl lingering north of the Bahamas.

As Tropical Storm Chris was strengthening into a short-lived hurricane, the Global Precipitation Measurement mission or GPM core satellite investigated the storm's rainfall and cloud heights. By July 12, Chris weakened to a tropical storm and was passing by Nova Scotia, Canada.

Sea-level rise will endanger valuable salt marshes across the United Kingdom by 2100 if greenhouse gas emissions continue unabated, according to an international study co-authored by a Rutgers University-New Brunswick professor. Moreover, salt marshes in southern and eastern England face a high risk of loss by 2040, according to the study, to be published in Nature Communications.
Find the amount of water R that flows from the tank during the first 42 minutes.
VY Canis Majoris, one of the most luminous infrared objects in the sky, is an old star about 5,000 light years away. It's a half million times more luminous than the sun, but glows mostly in the infrared because it's a cool star. It truly is "supergiant" -- 25 times as massive as the sun and so huge that it would fill the orbit of Jupiter. But the star is losing mass so fast that in a million years -- an astronomical eyeblink -- it will be gone. The star already has blown away a large part of its atmosphere, creating its surrounding envelope that contains about twice as much oxygen as carbon. Ziurys and her colleagues are not yet halfway through their survey of VY Canis Majoris, but they've already published in the journal Nature (June 28 issue) about their observations of a score of chemical compounds. These include some molecules that astronomers have never detected around stars and that are needed for life. Among the molecules Ziurys and her team reported in Nature are table salt (NaCl); a compound called phosphorus nitride (PN), which contains two of the five most necessary ingredients for life; molecules of HNC, which is a variant form of the organic molecule hydrogen cyanide; and an ion molecule form of carbon monoxide that comes with a proton attached (HCO+). Astronomers have found very little phosphorus or ion molecule chemistry in outflows from cool stars until now. "We think these molecules eventually flow from the star into the interstellar medium, which is the diffuse gas between stars. The diffuse gas eventually collapses into denser molecular clouds, and from these solar systems eventually form," Ziurys said. Comets and meteorites dump about 40,000 tons of interstellar dust on Earth each year. We wouldn't be carbon-based life forms otherwise, Ziurys noted, because early Earth lost all of its original carbon in the form of a methane atmosphere.
"The origin of organic material on Earth -- the chemical compounds that make up you and me -- probably came from interstellar space. So one can say that life's origins really begin in chemistry around objects like VY Canis Majoris." Astronomers previously studied VY Canis Majoris with optical and infrared telescopes. "But that's kind of like diving in with a butcher knife to look at what's there, when what you need is an oyster fork," Ziurys said. The Arizona Radio Observatory's 10-meter Submillimeter Telescope (SMT) on Mount Graham, Ariz., excels as a sensitive stellar "oyster fork." Chemical molecules each possess their own unique radio frequencies. The astronomers identify the unique radio signatures of chemical compounds in laboratory work, enabling them to identify the molecules in space. The ARO team recently began testing a new receiver in collaboration with the National Radio Astronomy Observatory. The receiver was developed as a prototype for the Atacama Large Millimeter Array, a telescope under construction in Chile. The state-of-the-art receiver has given the SMT 10 times more sensitivity at millimeter wavelengths than any other radio telescope. The SMT can now detect emission weaker than a typical light bulb from distant space at very precise frequencies. The UA team has discovered that the molecules aren't just flowing out as a gas sphere around VY Canis Majoris, but also are blasting out as jets through the spherical envelope. "The signals we receive show not only which molecules are seen, but how the molecules are moving toward and away from us," said Stefanie Milam, a recent doctoral graduate on the ARO team. The molecules flowing out from VY Canis Majoris trace complex winds in three outflows: the general, spherical outflow from the star, a jet of material blasting out towards Earth, and another jet shooting out a 45 degree angle away from Earth. 
Astronomers have seen bipolar outflows from stars before, but not two, unconnected, asymmetric and apparently random outflows, Ziurys said. She believes the two random jets are evidence for what astronomers earlier proposed as "supergranules" that form in very massive stars, a phenomenon that has been seen in Betelgeuse. Supergranules are huge cells of gas that form inside the star, then float to the surface and are ejected out of the star, where they cool in space and form molecules, creating jet outflows with certain molecular compositions. Back in the 1960s, no one believed molecules could survive the harsh environment of space. Ultraviolet radiation supposedly reduced matter to atoms and atomic ions. Now scientists conclude that at least half of the gas in space between the stars within the 33-light-year inner galaxy is molecular, Ziurys said. "Our results are more evidence that we live in a really molecular universe, as opposed to an atomic one," Ziurys said. The Arizona Radio Observatory (ARO) owns and operates two radio telescopes in southern Arizona: the former NRAO 12 Meter (KP12m) Telescope located 50 miles southwest of Tucson on Kitt Peak and the Submillimeter Telescope (SMT) located on Mount Graham near Safford, Ariz. The telescopes are operated around-the-clock for about nine to 10 months per year for a combined 10,000 hours per observing season. About 1,500 hours are dedicated to sub-mm wavelengths at the SMT. The ARO offices are centrally located in the Steward Observatory building on the UA campus in Tucson. Lori Stiles | University of Arizona
Dinosaur Found Alive: Two Species Recorded in Papua New Guinea
Aug-12-2010 03:25 | Terrence Aym, Salem-News.com
Jurassic Park in New Guinea? Not quite, but close... (PAPUA, New Guinea) - The Ropen or 'demon flyer' is a monstrous creature that's terrified the natives of Papua New Guinea for thousands of years. Another smaller creature, the Duah, is possibly related to the Ropen; this creature haunts some of the far flung outlying islands. Now sensational eyewitness reports—collected by determined exploration teams seeking strong evidence of the creatures—have led serious researchers to the conclusion that two distinct animals exist. The descriptions of both monsters match those of fabled pterosaurs—ferocious flying dinosaurs thought to be extinct for 65 million years.
The hunt for the glowing dinosaurs
Jim Blume and David Woetzel are two daring researchers who have explored the dangerous regions—including the treacherous outlying islands—where the prehistoric monsters are known to hunt their prey. Not only have these two compiled eyewitness accounts of the creatures from frightened natives, and physical evidence of gigantic nesting sites in some of the mountainous cliff areas; both men have personally witnessed the soaring creatures—and Woetzel even shot some video footage of one. The creatures first came to the attention of missionaries, who described these nocturnal fliers as large creatures with bat-like wings connected to an elongated beak. They described razor sharp teeth, muscular tearing claws and a very long whip-like tail with a split or flange on the end. There are reports from both investigators and natives of these creatures glowing in the dark. Researcher David Woetzel called the phenomenon 'Ropen light'; he spent time studying them and even recorded images with his video camera.
It's hypothesized the bio-luminescent glow assists the creatures' efforts to hunt and catch fish—their primary diet—in the deep darkness of the tropical night.
The evidence for two types of living pterosaurs
Although the Ropen and Duah have strikingly similar physical characteristics, the giant Ropen inhabits Papua New Guinea while the smaller Duah stays relatively close to the outlying islands. Other than actual modern-day sightings of the two, a surviving 16th Century maritime chart lends credence to the hypothesis of two distinctly different creatures. Despite the general consensus amongst orthodox zoologists that the creatures don't exist, those who have actually traveled deep into Papua New Guinea's primitive rain forests and tiny offshore islands are convinced the creatures are living there now—especially since they have seen them firsthand. (Editor's note: We have included one video with this article, but there are more. Go to helium.com, the link is listed below, and check out Terrence's more expanded report with more video.) Terrence Aym, Writer/Contributor, is based in Chicago, and is well known nationally for his stirring reports on the top ranked site, helium.com. Born in Minnesota, Terrence Aym grew up in the Chicagoland suburbs. Having traveled to 40 of the 50 states and lived in 7 of them, Aym is no stranger to travel. He's also spent time in Canada, Mexico, the Caribbean, Europe, Asia and Western Africa. An executive for many years with Wall Street broker-dealer firms, Aym has also had a life-long interest in science, technology, the arts, philosophy and history. If it's still possible to be a 'Renaissance man' in the 21st Century, Aym is working hard to be one. Aym has several book projects in the works. Media sites that have recently featured Aym, and/or discussed his articles, include ABC News, TIME Magazine, Business Insider, Crunchgear.com, Discover, Dvice, Benzinga and more recently, his work has been showing up in South Africa and Russia.
Jumping gene flash: horizontal transfer is a major evolution driver
Study of 759 species finds derided mechanism in fact exerts substantial influence. Stephen Fleischfresser reports.
It isn't supposed to happen according to one of the central tenets of biology, but it does. Now, for the first time, researchers have tried to ascertain how often genes jump from one species to another and just how much of an impact this has had on evolutionary history. The concept known as the Weismann Barrier in biology posits that genetic information only passes from sex cells (such as sperm and ova) to body cells. This means that genetic information only passes vertically from parent to offspring and genetic novelty is mostly created by recombination and mutation: the environment can't create inheritable changes and genetic information can't be introduced from another individual, let alone another species. Those who held otherwise have been largely overlooked or dismissed, from the nineteenth-century French proto-evolutionist Jean-Baptiste Lamarck to the contemporary Australian Neo-Lamarckian molecular immunologist Edward Steele. Now, however, scientists are finding that genes are jumping around all over the place, in a phenomenon known as Horizontal Gene Transfer (HGT). New research published in the journal Genome Biology has focussed on genes called retrotransposons, also known as transposable elements (TEs) or, more colloquially, "jumping genes". TEs are genes that can change position on the chromosome, and were first uncovered by the Nobel prize-winning cytogeneticist Barbara McClintock. TEs, however, can jump a lot further and do so far more regularly than anyone imagined. In the largest study of its kind, lead researcher David Adelson, Director of the University of Adelaide's Bioinformatics Hub, and a team of University of Adelaide scientists have sifted through the genomes of 759 species of plants, animals and fungi, tracking two jumping genes, known as L1 and BovB.
What they found is startling. The genes have jumped from species to species, even phylum to phylum, regularly throughout evolutionary history. "Jumping genes … copy and paste themselves around genomes, and in genomes of other species," says Adelson. "How they do this is not yet known although insects like ticks or mosquitoes or possibly viruses may be involved – it's still a big puzzle." One of the genes tracked, L1, is a TE long thought only to pass vertically from parent to offspring, but was found in abundance across animals and plants, in 74% of species studied. Ubiquitous in so-called therian mammals – those which give birth to live young – L1 almost certainly entered the lineage in a horizontal gene transfer event not long after the group's divergence from monotremes. (L1 is utterly absent from the egg-laying monotremes, the platypus and the echidna.) The effect of the introduction of TEs into mammals was striking. "We think the entry of L1s into the mammalian genome was a key driver of the rapid evolution of mammals over the past 100 million years," says Adelson. The specific genes that jump are not so important, Adelson continues; rather, "it's the fact that they introduce themselves into other genomes and cause disruption of genes and how they are regulated." Despite being the largest study of HGT to date, Adelson believes they have "only begun to scratch the surface of horizontal gene transfer. There are many more species to investigate and other types of jumping genes."
I am presenting here a Pascal program for matrix multiplication and the corresponding PDP-11 program from the book "Computer Organization" by Carl Hamacher. I have some doubts which I would request the list to look into. I have appended these questions at the end.

for i := 0 to n-1 do
  for j := 0 to n-1 do
    for k := 0 to n-1 do
      C(i,j) := C(i,j) + A(i,k) * B(k,j)

LOOPI: CLR R2
LOOPJ: MOV R2,R1
LOOPK: MOV R4,R3

Here, R0, R2 and R4 hold the values i, j and k respectively. N is the memory location holding the value n. Array subscripts run from (0,0) to (n-1,n-1). The elements of each array are stored in consecutive word locations beginning with (0,0) and continuing in column order. All the elements are 16-bit integers. My question is: in the book, the formula for calculating the byte address of element (i,j) of an array, relative to the address of the first element, is given as 2(n x j + i). Could anyone provide an explanation? Secondly, how is it that the addressable space of a PDP-11 is 2-to-the-power-of-15 words or 2-to-the-power-of-16 bytes? Since each word consists of 16 bits, it should have been 2-to-the-power-of-16 words. thanks and regards
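A small sketch of both points (mine, written in Python rather than Pascal purely for illustration; it is not from the original post or the book): in column-order storage, element (i,j) is preceded by j complete columns of n elements each plus i elements of its own column, and since each 16-bit word occupies two bytes the byte offset is 2(n x j + i). The second question confuses words with bytes: the PDP-11 forms 16-bit byte addresses, so the address space is 2^16 bytes, which is 2^15 two-byte words.

```python
# Column-order ("column-major") layout of an n x n array of 16-bit words:
# element (i, j) sits after j full columns (n elements each) plus i elements
# of its own column, so its byte offset from element (0,0) is 2 * (n*j + i).

def byte_offset(i, j, n, bytes_per_word=2):
    return bytes_per_word * (n * j + i)

# Rebuild the layout the long way and check it matches the formula.
n = 4
layout = {}
offset = 0
for j in range(n):            # columns first: (0,0), (1,0), ..., (n-1,0), (0,1), ...
    for i in range(n):
        layout[(i, j)] = offset
        offset += 2           # each 16-bit word occupies 2 bytes
assert all(layout[(i, j)] == byte_offset(i, j, n)
           for i in range(n) for j in range(n))

# Address-space question: addresses are 16-bit *byte* addresses,
# so 2**16 bytes of addressable space = 2**15 two-byte words.
assert 2**16 == 2 * 2**15
```

In other words, the exponent 16 counts byte addresses, not word addresses; halving the byte count gives the word count.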
The Binomial and Related Distributions
The binomial distribution was introduced in §3.5, where we also mentioned the hypergeometric distribution, the Poisson distribution and the multinomial distribution. All these distributions are related to the binomial distribution. In the present chapter, we will discuss the binomial distribution in §9.2, the hypergeometric distribution in §9.3, the Poisson distribution in §9.4, and the multinomial distribution in §9.5.
Keywords: Poisson distribution, probability function, binomial distribution, normal approximation, related distributions
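One standard way the Poisson distribution relates to the binomial, sketched below (my own illustration, not drawn from the chapter): for large n and small p, the Binomial(n, p) probability function is closely approximated by a Poisson distribution with mean lambda = n*p.

```python
import math

# Binomial(n, p) probability of exactly k successes in n trials.
def binomial_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Poisson(lam) probability of exactly k events.
def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

# Large n, small p: the two probability functions nearly agree.
n, p = 1000, 0.002            # lambda = n*p = 2
lam = n * p
for k in range(8):
    assert abs(binomial_pmf(k, n, p) - poisson_pmf(k, lam)) < 1e-3
```

The same comparison with a larger p (say 0.3) fails, which is why the approximation is stated only for rare events.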
The first plasma discharge from China's experimental advanced superconducting research center -- the so-called "artificial sun" -- is set to occur next month. The discharge, expected about Aug. 15, will be conducted at Science Island in Hefei, in east China's Anhui Province, the People's Daily reported Monday. Scientists told the newspaper a successful test will mean the world's first nuclear fusion device of its kind will be ready to go into actual operation. The plasma discharge will draw international attention since some scientists are concerned with risks involved in such a process. But Chinese researchers involved in the project say any radiation will cease once the test is completed. The experiment will take place in a structure made of reinforced concrete, with five-foot-thick walls and a three-foot-thick roof. Copyright 2006 by United Press International
Approximately every 11 years the magnetic field on the sun reverses completely: the north magnetic pole switches to south, and vice versa. It's as if a bar magnet slowly lost its magnetic field and regained it in the opposite direction, so the positive side becomes the negative side. But, of course, the sun is not a simple bar magnet, and the causes of the switch -- not to mention the complex tracery of moving magnetic fields throughout the eleven-year cycle -- are not easy to map out. Mapping such fields, however, is a crucial part of understanding how and, in turn, when the sun will undergo its next flip. This flip coincides with the greatest solar activity seen on the sun in any given cycle, known as "solar maximum." While the cycle unfolds with seeming regularity every 11 years, in two upcoming papers scientists highlight just how asymmetrical this process actually is. Currently the polarity at the north of the sun appears to have decreased close to zero -- that is, it seems to be well into its polar flip from magnetic north to south -- but the polarity at the south is only just beginning to decrease. "Right now, there's an imbalance between the north and the south poles," says Jonathan Cirtain, a space scientist at NASA's Marshall Space Flight Center in Huntsville, Ala., who is also NASA's project scientist for a Japanese solar mission called Hinode. "The north is already in transition, well ahead of the south pole, and we don't understand why." One of the two papers relies on Hinode data that shows direct observations of this polar switch. The other paper makes use of a new technique, observing microwave radiation from the sun's polar atmosphere to infer the magnetic activity on the surface. The asymmetry described in the papers belies models of the sun that assume that the sun's north and south polarities switch at the same time.
In addition, both papers agree that the switch is imminent at the north pole, well in advance of general predictions that solar maximum for this cycle will occur in 2013. Lastly, the direct Hinode results also suggest a need to re-examine certain other solar models as well. Measuring the magnetic activity near the poles isn't easy because all of our solar telescopes view the sun approximately at its equator, offering only an oblique view of the poles when they require a top-down view for accurate magnetic measurements. Hinode can observe this activity annually with its high-resolution Solar Optical Telescope, which can map magnetic fields when observing them from near the equator. The microwave radiation technique described in the second paper makes use of the discovery in 2003 that as the sun moves toward solar maximum, giant eruptions on the sun, called prominence eruptions -- which during solar minimum are concentrated at lower solar latitudes -- begin to travel toward higher latitudes near the poles. In addition, the polar brightness in the microwave wavelengths declines to very low values. "These prominence eruptions are associated with increased solar activity such as coronal mass ejections or CMEs, so CMEs originating from higher latitudes also point to an oncoming solar maximum," says Nat Gopalswamy. Gopalswamy is a solar scientist at NASA's Goddard Space Flight Center in Greenbelt, Md., who is the first author on the microwave observations paper, which was accepted by The Astrophysical Journal on April 11, 2012. "When we start to see prominence eruptions above 60 degrees latitude on the sun, then we know that we are reaching solar maximum." To look at the prominence eruptions toward the poles, Gopalswamy and his team used observations from Japan's Nobeyama Solar Radio Observatory telescopes and the joint ESA/NASA mission the Solar and Heliospheric Observatory (SOHO).
They watched the sun in the microwave wavelengths, which are used to observe the area of the sun's atmosphere just above the surface, known as the chromosphere. Gopalswamy created precise techniques to use such microwave radiation to measure the intensity of magnetic activity on the sun's surface at the poles. By mapping the brightness of the microwave radiation throughout the chromosphere, the scientists showed that the intensity at the north pole has already dropped to the threshold that was reached in the last solar maximum cycle, suggesting the onset of solar max there. This is backed by the fact that prominence eruptions are also occurring at high latitudes in the north. Eruption activity in the south half of the sun, however, is only just beginning to increase; the first CME occurred there in early March 2012. The Hinode data also show this discrepancy between the north and the south. The Hinode results are reported by a Japanese team, led by Daikou Shiota, a solar scientist at the RIKEN Institute of Physics and Chemical Research, and were recently submitted to The Astrophysical Journal for publication. Shiota and his team used Hinode to observe the magnetic map of the poles every month since September of 2008. Early maps showed large, strong concentrations of magnetic fields that are almost all magnetically negative in polarity. Recent maps, however, show a different picture. Not only are the patches of magnetism smaller and weaker, but now there is a great deal of positive polarity visible as well. What once pointed to a strongly negative north pole is now a weakly magnetized, mixed pole that will become neutral -- which occurs at solar maximum -- within the month, according to the team's predictions. "This is the first direct observation of this field reversal," says Cirtain. "And it is extremely important to understanding how the sun's magnetism generates the solar cycle."
Ted Tarbell is the principal investigator for Hinode's Solar Optical Telescope at Lockheed Martin in Palo Alto, Calif., and he points out that the direct measurements showed the progress of the pole reversal, and highlights the earlier portion of the cycle in 2008. Typical models of the magnetic flip suggest that as active regions rotate around the equator, their higher, trailing edge -- which is almost always the opposite polarity from the pole in their hemisphere -- drifts upward, eventually dominating the status quo and turning positive to negative or negative to positive. The Hinode data show that this transition at the north began before such drifting had a chance to occur. "This is one of the most interesting things in this Hinode paper to me," says Tarbell. "How did the polar reversal start so early, even though the onset of the solar cycle, that is, increased activity at lower latitudes, hadn't begun yet?" Tarbell thinks these observations mean that this model, too, may need to be re-examined. Such adjustments to models are of course expected whenever new and better data are collected. Indeed, David Hathaway, a solar scientist at NASA's Marshall and a co-author on the microwave observations paper with Gopalswamy, points out that the idea that asymmetries exist in the sun is not completely new. Other work has recently emphasized symptoms of this asymmetry, measuring, for example, more sunspots in the northern hemisphere than in the south at the moment. "But most of the well-developed models don't incorporate the asymmetry in them," Hathaway says. "More complicated models that incorporate asymmetries do exist, but they have other ways in which they fail to match observations." Continued study of these differences, using the best observatories as well as new techniques for analysis, will help expand and improve our understanding of the sun, its 11-year cycle, and the great eruptions that occur on its surface.
Scientists will also keep their eyes on the current cycle, numbered Solar Cycle 24, because a polar switch at the north that comes sooner than expected also implies this may be a fairly small cycle in terms of the number of sunspots and amount of solar activity.
The team of researchers at Saarland University, led by Professor of Condensed Matter Physics Karin Jacobs, initially had something quite different in mind. Originally, the team set out to research and describe the characteristics of hydrophobins - a group of naturally occurring proteins. 'We noticed that the hydrophobins form colonies when they are placed in water. They immediately arrange themselves into tightly packed structures at the interface between water and glass or between water and air,' explains Karin Jacobs. 'There must therefore be an attractive force acting between the individual hydrophobin molecules, otherwise they would not organize themselves into colonies.' But Professor Jacobs, research scientist Dr Hendrik Hähl and their team did not know how strong this force was.
[Image caption: Hydrophobins are a family of naturally occurring proteins with a hydrophilic part (blue) and a hydrophobic part (red). Like lipids, they form molecular bilayers and vesicles, which are small spherical structures with an outer bilayer boundary. In an aqueous environment (light blue), all of the water-repellent parts of the protein are located in the inside of the bilayer. In fatty or oily environments (yellow) the situation is reversed. As a result the interior of a vesicle can represent a protected space for transporting molecules that would otherwise be insoluble in the external (aqueous or oil-based) environment. Credit: AG Jacobs]
This is where the neighbouring research group led by Professor Ralf Seemann got involved. One of Seemann's research teams, which is headed by Dr Jean-Baptiste Fleury, studies processes that occur at the interfaces between two liquids. The research team set up a minute experimental arrangement with four tiny intersecting flow channels, like a crossroads, and allowed a stream of oil to flow continuously from one side of the crossing to the other. From the other two side channels they injected 'fingers' of water which protruded into the crossing zone.
As the hydrophobins tended to gather at the interface of the carrier medium, they were in this case arranged at the water-oil interface at the front of the fingers. The physicists then 'pushed' the two fingers closer and closer together in order to see when the attractive force took effect. 'At some point the two aqueous fingers suddenly coalesced to form a single stable interface consisting of two layers,' says Ralf Seemann. 'The weird thing is that it also functions the other way around, that is, when we use oil fingers to interrupt a continuous flow of water,' he explains. This finding is quite new, as up until now other molecules have only exhibited this sort of behaviour in one scenario or the other. Normally proteins will orient themselves so that either their hydrophilic ('water loving') sides are in contact with the aqueous medium, or their hydrophobic ('water fearing') side is in contact with an oily medium. That a type of molecule can form stable bilayers in both environments is something wholly new. Encouraged by these findings, the researchers decided to undertake a third phase of experiments to find out whether the stable bilayer could be reconfigured to form a small membrane-bound transport sac -- a vesicle. They attempted to inflate the stable membrane bilayer in a manner similar to creating a soap bubble, but using water rather than air. The experiment worked. The cell-like sphere with the outer bilayer of natural proteins was stable. 'That's something no one else has achieved,' says Jean-Baptiste Fleury, who carried out the successful experiments. Up until now it had only been possible to create monolayer membranes or vesicles from specially synthesized macromolecules. Vesicles made from a bilayer of naturally occurring proteins that can also be tailored for use in an aqueous or an oil-based environment are something quite new.
In subsequent work, the research scientists have also demonstrated that ion channels can be incorporated into these vesicles, allowing charged particles (ions) to be transported through the bilayer of hydrophobins in a manner identical to the way ions pass through the lipid bilayers of natural cells. As a result, the physicists now have a basis for further research work, such as examining the means of achieving more precisely targeted drug delivery. In one potential scenario, the vesicles could be used to transport water-soluble molecules through an aqueous milieu or fat-soluble molecules through an oily environment. Dr Hendrik Hähl describes the method as follows: 'Essentially we are throwing a vesicle "cape" over the drug molecule. And because the "cape" is composed of naturally occurring molecules, vesicles such as these have the potential to be used in the human body.' The results of this research work were a surprise. Originally, the goal was simply to measure the energy associated with the agglomeration of the hydrophobin molecules when they form colonies. But the discovery that hydrophobin bilayers could be formed in both orientations opened the door to experiments designed to see whether vesicles could be formed. That one thing would lead to another in this way offers an excellent example of the benefits of this type of basic, curiosity-driven research. 'The "discovery" of these vesicles is archetypal of this kind of fundamental research. Or to put it another way, if someone had said to us at the beginning: "Create these structures from a natural bilayer," we very probably wouldn't have succeeded,' says Professor Karin Jacobs in summary. The article 'Pure Protein Bilayers and Vesicles from Native Fungal Hydrophobins' was published on October 14th 2016 in the journal Advanced Materials: http://onlinelibrary.
Contacts: Prof. Dr. Karin Jacobs, Dr. Hendrik Hähl, Prof. Dr. Ralf Seemann, Dr. Jean-Baptiste Fleury
Karin Jacobs | EurekAlert!
Are you new to mock objects? Are you trying to learn how to use them? Are you looking for some "hello world" examples for mock objects? Mock objects can be a really good tool sometimes, but not always. This is a guide to help you learn how and when you can use them.

In simple words, what is a mock object?

A mock object is an object that you use in place of another one.

When do you want to use a mock object?

Normally you want to use a mock object when the code you are testing delegates some functionality to another object, but you don't want to exercise the real functionality in your current test, so you replace that object with another one that is easier to control. Let's call this object a dependency.

So, you use a mock when the code you are testing has a dependency, but you don't want to use the real dependency; you just want to check that you are interacting with that dependency in the right way. This interaction normally comes in three flavors: you can use the dependency to query for something, to change something, or to change something and expect something back. So at the very least you should know how to test these three interactions.

How can you use a mock object when you want to query for something?

When you are using the dependency to "query" for something, you don't need the "mock API" at all. You can create a regular object with the expected interface, and test for the expected output from the object that you are testing.

describe "Books catalog" do
  class FakeDB
    def initialize(books:)
      @books = books
    end

    def fetch_books
      @books
    end
  end

  it "has the stored books" do
    db = FakeDB.new(books: ["Principito"])
    catalog = BooksCatalog.new(db)

    expect(catalog.books).to eq ["Principito"]
  end
end

How can you use a mock object when you want to change something?

Sometimes you want to make a change through your dependency, or do something with side effects, like inserting a new record in a database, sending an email, or making a payment.
Instead of testing that the change or side effect was produced, you check that you are calling the right function/method with the right arguments.

describe "Books catalog" do
  class FakeDB
    def self.insert(book)
    end
  end

  def db
    FakeDB
  end

  it "stores new added books" do
    catalog = BooksCatalog.new(db)

    # This is how you can use the mock API of rspec
    expect(db).to receive(:insert).with("Harry Potter")

    catalog.add_book("Harry Potter")
  end
end

How can you use a mock object when you want to change something and expect something back?

Sometimes you will expect something back after calling your dependency to change something. For example, it is common to create a record and expect the created record, or its id, in return. In this case you can tell rspec what you expect from the call to your dependency.

describe "Books catalog" do
  class FakeDB
    def self.insert(book)
    end
  end

  def db
    FakeDB
  end

  it "returns the id of the created record" do
    catalog = BooksCatalog.new(db)

    # This is how you can use rspec to define a response.
    allow(db).to receive(:insert).and_return(id: "book-1234", name: "Harry Potter")

    book_id = catalog.add_book("Harry Potter")

    expect(book_id).to eq "book-1234"
  end
end

These are some basic examples, and in some cases you are going to need something more complex, but you can really do a lot with just this knowledge =)
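If it helps to see the "change something" interaction without any mocking framework, the same check can be hand-rolled: the fake simply records every call it receives so the test can inspect the interaction afterwards. RecordingDB and this toy BooksCatalog are invented for illustration; they are not part of rspec or of the examples above.

```ruby
# A framework-free sketch of the "check the interaction" idea:
# the fake records every call instead of doing real work.
class RecordingDB
  attr_reader :calls

  def initialize
    @calls = []
  end

  def insert(book)
    # Remember what was called and with which argument.
    @calls << [:insert, book]
  end
end

# A toy subject under test that delegates to its dependency.
class BooksCatalog
  def initialize(db)
    @db = db
  end

  def add_book(name)
    @db.insert(name)
  end
end

db = RecordingDB.new
BooksCatalog.new(db).add_book("Harry Potter")

# The assertion: the catalog interacted with its dependency correctly.
raise "wrong interaction" unless db.calls == [[:insert, "Harry Potter"]]
puts db.calls.inspect  # prints [[:insert, "Harry Potter"]]
```

The trade-off is the same as with rspec's mock API: you are testing the collaboration, not the side effect itself, so this style fits commands (inserts, emails, payments) rather than queries.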
The Whirlpool Galaxy is one of the many close galaxy neighbors of the Milky Way. It was discovered by Charles Messier in 1773. This spiral galaxy has a diameter of around 60,000 light years. It is located about 23 million light years away in the northern constellation Canes Venatici. The official name of this galaxy is M51a, and it is also very commonly known by the name NGC 5194. The spiral shape of this galaxy was first recognized by William Parsons in 1845. It also has a dwarf galaxy neighbor named M51b. These two galaxies are believed to be slowly merging together. Quick Facts: - The Whirlpool Galaxy has a supermassive black hole at its center, encircled by rings of dust. It is similar to the one found in the Milky Way. - The core of this galaxy is highly active; galaxies with such active cores are known to astronomers as Seyfert galaxies. - Seyfert galaxies are one of the two largest groups of active galaxies, along with the quasars. - This galaxy can be easily observed using binoculars under dark sky conditions. - As of February 2016, three different supernovas had been discovered in the Whirlpool Galaxy. - There is a bridge made of gas and dust that ties this galaxy and its dwarf neighbor M51b together. - It is also the brightest galaxy in the entire M51 group, which includes several other galaxies. - The total mass of this galaxy is around 160 billion solar masses. Cite This Page You may cut-and-paste the below MLA and APA citation examples: MLA Style Citation Declan, Tobin. " Amazing Facts for Kids about Whirlpool Galaxy ." Easy Science for Kids, Jul 2018. Web. 21 Jul 2018. < http://easyscienceforkids.com/whirlpool-galaxy/ >. APA Style Citation Tobin, Declan. (2018). Amazing Facts for Kids about Whirlpool Galaxy. Easy Science for Kids. Retrieved from http://easyscienceforkids.com/whirlpool-galaxy/
A Connection object represents a connection with a database. When we connect to a database by using a connection method, we create a Connection object, which represents the connection to the database. An application may have one or more connections with a single database, or connections with many different databases. We can use the Connection object for the following things: 1). It creates the Statement, PreparedStatement and CallableStatement objects for executing SQL statements. 2). It helps us to commit or roll back a JDBC transaction. 3). If you want to know about the database or data source to which you are connected, the Connection object gathers information about that database or data source by the use of DatabaseMetaData. 4). It helps us to close the connection to the data source. The Connection.isClosed() method returns true only if Connection.close() has been called; close() is the method used to close the connection. First we need to establish the connection with the database. This is done by using the method DriverManager.getConnection(). This method takes a string containing a URL. The DriverManager class attempts to locate a driver that can connect to the database represented by the string URL. Whenever the getConnection() method is called, the DriverManager class checks the list of all registered Driver classes for one that can connect to the database specified in the URL.

String url = "jdbc:odbc:makeConnection";
Connection con = DriverManager.getConnection(url, "userID", "password");
This article presents concepts and code samples on how to append rows to a data frame when working with the R programming language. Please feel free to comment/suggest if I missed mentioning one or more important points.

Following are the key points described later in this article:

- How to append one or more rows to an empty data frame
- How to append one or more rows to a non-empty data frame

For illustration purposes, we shall use a student data frame having the following information:

  First.Name Age
1     Calvin  10
2      Chris  25
3        Raj  19

How to Append One or More Rows to an Empty Data Frame

The following code shows how to create an empty data frame and append rows to it.

# Create an empty data frame, teachers, with columns name and age
# Note stringsAsFactors = FALSE
teachers <- data.frame("name" = character(), "age" = integer(), stringsAsFactors = FALSE)

# An alternate way is to specify 0 as the size of each column vector
teachers <- data.frame("name" = character(0), "age" = integer(0), stringsAsFactors = FALSE)

# Append rows
teachers[nrow(teachers) + 1, ] <- c("ted", 50)
teachers[nrow(teachers) + 1, ] <- c("james", 55)

# Print teachers
teachers

The following will get printed:

   name age
1   ted  50
2 james  55

How to Append One or More Rows to a Non-Empty Data Frame

Approach 1: Let's say you have the student data frame above, consisting of two columns, namely First.Name and Age, and you need to add additional rows. The following code uses rbind to append a one-row data frame to the existing data frame.

student <- rbind(student, data.frame("First.Name" = "James", "Age" = 55))

# View the student data frame
student

Following is the new data frame:

  First.Name Age
1     Calvin  10
2      Chris  25
3        Raj  19
4      James  55

Approach 2: Following is another approach. It is assumed that the student data frame was created using stringsAsFactors = FALSE. Note that this is key for the following to work.
# Assign a vector to the new row, accessed using the index nrow(student) + 1
student[nrow(student) + 1, ] <- c("John", 55)

# Print student
student

The following gets printed:

  First.Name Age
1     Calvin  10
2      Chris  25
3        Raj  19
4      James  55
5       John  55
Study uses hurricane forecasting tool to show fishes' affinity for ocean fronts and eddies

Researchers at the University of Miami (UM) Rosenstiel School of Marine and Atmospheric Science developed a new method to estimate fish movements using ocean heat content images, a dataset commonly used in hurricane intensity forecasting.

This image shows front and eddy utilization in the Gulf of Mexico by pelagic fishes revealed by ocean heat content: (a) a yellowfin tuna (Thunnus albacares); and (b) an Atlantic sailfish. OHC maps are based on calculating thermal energy from the depths of the 20°C isotherm. Credit: Jiangang Luo, UM Rosenstiel School of Marine and Atmospheric Science

With Atlantic tarpon as the messenger, this is the first study to quantitatively show that large migratory fishes, such as yellowfin and bluefin tunas, blue and white marlin, and sailfish, have affinities for ocean fronts and eddies. "Ocean heat content data revealed detailed movements of fishes that were not readily apparent using surface temperature data," said Jerald S. Ault, UM Rosenstiel School professor of marine biology and ecology. "This offers a powerful new approach to study how fish interact with dynamic water features relatively common in the ocean." Ocean heat content (OHC) relative to the 26°C isotherm, a measure of heat stored in the upper surface layers of the ocean, has been used for more than four decades by scientists to help predict hurricane intensity. Over the past two decades, OHC has been monitored daily using satellite fields and in-situ data that provide basin-scale variability for both weather and climate studies. In addition to providing the OHC for forecasting, these previous studies showed that OHC images reveal dynamic ocean features, such as fronts and eddies, better than standard techniques (e.g., sea surface temperature), especially during the summer months.
The researchers compared data on fish movements obtained from pop-up satellite tags affixed to the highly migratory fish alongside maps of the heat stored in the upper ocean. "Using an advanced optimization algorithm and OHC maps, we developed a method to greatly improve geolocation accuracy and refine fish movement tracks derived from satellite tags," said Jiangang Luo, lead author and UM scientist at the Tarpon and Bonefish Research Center. The analysis revealed that fish commonly swim along the boundaries of water features in the ocean, such as fronts, like the Florida and Loop Current and their complex eddy fields. "Using the OHC approach in a new way offers an unprecedented view of how these animals move with currents and eddies in the ocean," said Nick Shay, UM Rosenstiel School professor of ocean sciences. "Our study provides a more detailed picture of the ocean ecosystem as an entity." In one 109-day analysis, the researchers documented a yellowfin tuna moving along a weak front off the Mississippi River before reaching an eddy centered in the Gulf of Mexico. In a separate analysis, a yellowfin tuna swam around the periphery of the same eddy many times over a 20-day period, rarely passing over it. Eddies are swirling masses of water that have been shed from strong ocean current fronts, and pump nutrient-rich water to the surface. Fronts are a type of current created at a boundary between two distinct water masses with differing physical properties, such as different temperatures or salinities. In the Gulf of Mexico, warm eddies are often shed from the Loop Current in the summer months, causing a rapid intensification of hurricanes (e.g., Katrina) as they pass over it. "Our new method shows that hurricanes and highly migratory fish share at least one common oceanographic interest - warm swirling ocean eddies," said Ault. The study, titled "Ocean Heat Content Reveals Secrets of Fish Migration," was published in the Oct. 20 issue of the journal PLOS ONE.
The study's authors include: Jiangang Luo, Jerald S. Ault, Lynn "Nick" Shay of the UM Rosenstiel School; John P. Hoolihan from the Cooperative Institute for Marine and Atmospheric Science at the University of Miami; Eric D. Prince and Craig A. Brown from the NOAA Southeast Fisheries Science Center; and Jay R. Rooker from Texas A&M University. The work was supported by grants from the Bonefish and Tarpon Trust, the Robertson Foundation, National Science Foundation, McDaniel Charitable Foundation, The Billfish Foundation, Adopt-A-Billfish Program and the National Oceanic and Atmospheric Administration (NOAA). The study can be accessed on line here: A video of a migration can be seen at: About the University of Miami's Rosenstiel School The University of Miami is one of the largest private research institutions in the southeastern United States. The University's mission is to provide quality education, attract and retain outstanding students, support the faculty and their research, and build an endowment for University initiatives. Founded in the 1940's, the Rosenstiel School of Marine & Atmospheric Science has grown into one of the world's premier marine and atmospheric research institutions. Offering dynamic interdisciplinary academics, the Rosenstiel School is dedicated to helping communities to better understand the planet, participating in the establishment of environmental policies, and aiding in the improvement of society and quality of life. For more information, visit: http://www. Diana Udel | EurekAlert!
A record two-hour observation of Jupiter using a superior technique to remove atmospheric blur has produced the sharpest whole-planet picture ever taken from the ground. The series of 265 snapshots obtained with the Multi-Conjugate Adaptive Optics Demonstrator (MAD) prototype instrument mounted on ESO's Very Large Telescope (VLT) reveal changes in Jupiter's smog-like haze, probably in response to a planet-wide upheaval more than a year ago. Being able to correct wide field images for atmospheric distortions has been the dream of scientists and engineers for decades. The new images of Jupiter prove the value of the advanced technology used by MAD, which uses two or more guide stars instead of one as references to remove the blur caused by atmospheric turbulence over a field of view thirty times larger than existing techniques. "This type of adaptive optics has a big advantage for looking at large objects, such as planets, star clusters or nebulae," says lead researcher Franck Marchis, from UC Berkeley and the SETI Institute in Mountain View, California, USA. "While regular adaptive optics provides excellent correction in a small field of view, MAD provides good correction over a larger area of sky. And in fact, were it not for MAD, we would not have been able to perform these amazing observations." MAD allowed the researchers to observe Jupiter for almost two hours on 16 and 17 August 2008, a record duration, according to the observing team. Conventional adaptive optics systems using a single Jupiter moon as reference cannot monitor Jupiter for so long because the moon moves too far from the planet. The Hubble Space Telescope cannot observe Jupiter continuously for more than about 50 minutes, because its view is regularly blocked by the Earth during Hubble's 96-minute orbit.
Using MAD, ESO astronomer Paola Amico, MAD project manager Enrico Marchetti and Sébastien Tordo from the MAD team tracked two of Jupiter's largest moons, Europa and Io – one on each side of the planet – to provide a good correction across the full disc of the planet. "It was the most challenging observation we performed with MAD, because we had to track with high accuracy two moons moving at different speeds, while simultaneously chasing Jupiter," says Marchetti. With this unique series of images, the team found a major alteration in the brightness of the equatorial haze, which lies in a 16 000-kilometre wide belt over Jupiter's equator. More sunlight reflecting off upper atmospheric haze means that the amount of haze has increased, or that it has moved up to higher altitudes. "The brightest portion had shifted south by more than 6000 kilometres," explains team member Mike Wong. This conclusion came after comparison with images taken in 2005 by Wong and colleague Imke de Pater using the Hubble Space Telescope. The Hubble images, taken at infrared wavelengths very close to those used for the VLT study, show more haze in the northern half of the bright Equatorial Zone, while the 2008 VLT images show a clear shift to the south. "The change we see in the haze could be related to big changes in cloud patterns associated with last year's planet-wide upheaval, but we need to look at more data to narrow down precisely when the changes occurred," declares Wong. Telescopes on the ground suffer from a blurring effect introduced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights the poets but frustrates the astronomers, since it smears out the fine details of the images. However, with Adaptive Optics (AO) techniques, this major drawback can be overcome so that the telescope produces images that are as sharp as theoretically possible, i.e., approaching conditions in space.
Adaptive Optics systems work by means of a computer-controlled deformable mirror that counteracts the image distortion introduced by atmospheric turbulence. It is based on real-time optical corrections computed from image data obtained by a 'wavefront sensor' (a special camera) at very high speed, many hundreds of times each second. Present AO systems can only correct the effect of atmospheric turbulence in a very small region of the sky — typically 15 arcseconds or less — the correction degrading very quickly when moving away from the central axis. Engineers have therefore developed new techniques to overcome this limitation, one of which is multi-conjugate adaptive optics. See ESO 19/07 for more details on the Multi-Conjugate Adaptive Optics Demonstrator (MAD) prototype instrument. The haze, which could be the nitrogen compound hydrazine — used on Earth as a rocket propellant — or possibly frozen crystals of ammonia, water or ammonium hydrosulphide from deeper in the gaseous planet, is very prominent in infrared images. Because visible light can penetrate to deeper levels than light at the infrared wavelengths detected by MAD (around 2 microns), optical telescopes see light reflected from deeper, thicker clouds lying beneath the haze. The haze behaves somewhat like particles in the tops of thunderheads on Earth (known as cumulonimbus anvils) or in the ash plumes from large volcanic eruptions, which rise into the upper atmosphere and spread around the world. On Jupiter, ammonia injected into the upper atmosphere also interacts with sunlight to form hydrazine, which condenses into a mist of fine ice particles. The hydrazine chemistry in Jupiter’s atmosphere is similar to that occurring in the Earth’s atmosphere after a volcanic eruption, when sulphur dioxide is converted by solar ultraviolet light into sulphuric acid. 
Henri Boffin | alfa
World Meteorological Day 2009 Each year, on 23 March, the World Meteorological Organization, its 188 Members and the worldwide meteorological community celebrate World Meteorological Day around a chosen theme. This Day commemorates the entry into force, on that date in 1950, of the WMO Convention creating the Organization. Subsequently, in 1951, WMO was designated a specialized agency of the United Nations System. This year, the theme is "Weather, climate and the air we breathe".
The magnetic field along the Galactic plane

Released: 15/12/2014 12:00 pm
Copyright: ESA/Planck Collaboration. Acknowledgment: M.-A. Miville-Deschênes, CNRS – Institut d'Astrophysique Spatiale, Université Paris-XI, Orsay, France

While the pastel tones and fine texture of this image may bring to mind brush strokes on an artist's canvas, they are in fact a visualisation of data from ESA's Planck satellite. The image portrays the interaction between interstellar dust in the Milky Way and the structure of our Galaxy's magnetic field. Between 2009 and 2013, Planck scanned the sky to detect the most ancient light in the history of the Universe – the cosmic microwave background. It also detected significant foreground emission from diffuse material in our Galaxy which, although a nuisance for cosmological studies, is extremely important for studying the birth of stars and other phenomena in the Milky Way. Among the foreground sources at the wavelengths probed by Planck is cosmic dust, a minor but crucial component of the interstellar medium that pervades the Galaxy. The interstellar medium is mainly gas, the raw material from which stars form. Interstellar clouds of gas and dust are also threaded by the Galaxy's magnetic field, and dust grains tend to align their longest axis at right angles to the direction of the field. As a result, the light emitted by dust grains is partly 'polarised' – it vibrates in a preferred direction – and, as such, could be caught by the polarisation-sensitive detectors on Planck. Scientists in the Planck collaboration are using the polarised emission of interstellar dust to reconstruct the Galaxy's magnetic field and study its role in the build-up of structure in the Milky Way, leading to star formation. In this image, the colour scale represents the total intensity of dust emission, revealing the structure of interstellar clouds in the Milky Way.
The texture is based on measurements of the direction of the polarised light emitted by the dust, which in turn indicates the orientation of the magnetic field. This image shows the intricate link between the magnetic field and the structure of the interstellar medium along the plane of the Milky Way. In particular, the arrangement of the magnetic field is more ordered along the Galactic plane, where it follows the spiral structure of the Milky Way. Small clouds are seen just above and below the plane, where the magnetic field structure becomes less regular. From these and other similar observations, Planck scientists found that filamentary interstellar clouds are preferentially aligned with the direction of the ambient magnetic field, highlighting the strong role played by magnetism in galaxy evolution. The emission from dust is computed from a combination of Planck observations at 353, 545 and 857 GHz, whereas the direction of the magnetic field is based on Planck polarisation data at 353 GHz.
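The step from measured dust polarization to plane-of-sky field orientation amounts to a 90 degree rotation of the polarization angle derived from the Stokes parameters Q and U. A minimal sketch of that geometry (this is an illustration, not the Planck pipeline; note that the sign convention for the angle differs between the IAU and the HEALPix/Planck definitions):

```python
import math

def polarization_angle(Q, U):
    """Polarization angle psi (radians) from Stokes parameters Q and U."""
    return 0.5 * math.atan2(U, Q)

def field_orientation(Q, U):
    """Dust grains align their long axes perpendicular to the magnetic field,
    so the inferred plane-of-sky field orientation is the dust polarization
    angle rotated by 90 degrees (orientations are defined modulo 180 deg)."""
    return (polarization_angle(Q, U) + math.pi / 2) % math.pi

# Pure-Q polarization (psi = 0) implies a field orientation of 90 degrees.
print(math.degrees(field_orientation(1.0, 0.0)))  # 90.0
```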
Behaviors are a means to extend traditional Ruby classes/objects with ontological information.

The “Ontology” Behavior

The ontology behavior uses a specified attribute of a class (the default is name) as an ontology term. A query is made to find all the relationships of the ontology term contained within a triple store. These are accessed via the ontology method on an instance of the class. This example is based on some terms in the Radlex 2.0 ontology.

```ruby
require 'active_sesame'

class OntologyTerm
  attr_accessor :owl_term
  ActiveSesame::Behaviors::Ontology.mimic(self)
end

ot = OntologyTerm.new
ot.owl_term = "http://www.owl-ontologies.com/Ontology1241733063#RID3436"

ot.ontology                      # Builds an ActiveSesame::Ontology::Term for the value of owl_term
ot.ontology.relationships        # All relationships of the term in the triple store
ot.ontology.Preferred_Name.term  # Looks up a relationship and returns the Ontology::Term for its value
ot.ontology.Is_A
ot.ontology.Is_A.Preferred_Name
```
Ablation is removal of material from the surface of an object by vaporization, chipping, or other erosive processes. Examples of ablative materials are described below, and include spacecraft material for ascent and atmospheric reentry, ice and snow in glaciology, biological tissues in medicine, and passive fire protection materials. Biological ablation is the removal of a biological structure or functionality. Genetic ablation is another term for gene silencing, in which gene expression is abolished through the alteration or deletion of genetic sequence information. In cell ablation, individual cells in a population or culture are destroyed or removed. Both can be used as experimental tools, as in loss-of-function experiments. In glaciology and meteorology, ablation—the opposite of accumulation—refers to all processes that remove snow, ice, or water from a glacier or snowfield. Ablation refers to the melting of snow or ice that runs off the glacier, evaporation, sublimation, calving, or erosive removal of snow by wind. Air temperature is typically the dominant control of ablation, with precipitation exercising secondary control. In a temperate climate during the ablation season, ablation rates typically average around 2 mm/h. Where solar radiation is the dominant cause of snow ablation (e.g., if air temperatures are low under clear skies), characteristic ablation textures such as suncups and penitentes may develop on the snow surface. Ablation can refer either to the processes removing ice and snow or to the quantity of ice and snow removed. Debris-covered glaciers have also been shown to greatly affect the ablation process: a thin debris layer on top of a glacier intensifies the ablation of the ice beneath it. The debris-covered parts of a glacier that are experiencing ablation are divided into three categories: ice cliffs, ponds, and debris.
These three sections allow scientists to measure the heat absorbed by the debris-covered area; the calculations depend on the area and the net absorbed heat of the entire debris-covered zone. Such calculations are done for various glaciers to understand and analyze future patterns of melting. Moraine (glacial debris) is moved by natural processes that allow for down-slope movement of materials on the glacier body. If the slope of a glacier is too high, the debris will continue to move along the glacier to a further location. The sizes and locations of glaciers vary around the world, so depending on the climate and physical geography the varieties of debris can differ. The size and magnitude of the debris depend on the area of the glacier and can vary from dust-size fragments to blocks as large as a house. There have been many experiments done to demonstrate the effect of debris on the surface of glaciers. Yoshiyuki Fujii, a professor at the National Institute of Polar Research, designed an experiment showing that the ablation rate was accelerated under a thin debris layer and retarded under a thick one, compared with that of a natural snow surface. This research is significant for assessing the long-term availability of water resources and glacier response to climate change; natural resource availability is a major driver behind research on the ablation process and the study of glaciers generally. Laser ablation is greatly affected by the nature of the material and its ability to absorb energy; the wavelength of the ablation laser should therefore have a minimum absorption depth. While these lasers can average a low power, they offer a high peak power P = E/τ (pulse energy E over pulse duration τ), and hence a high peak intensity I = P/A and fluence Φ = E/A, where A is the focal spot area. Surface ablation of the cornea for several types of eye refractive surgery is now common, using an excimer laser system (LASIK and LASEK).
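The peak power, peak intensity, and fluence of a pulsed ablation laser follow directly from the pulse energy, pulse duration, and focal spot area. A sketch with made-up, typical-order numbers (not the parameters of any particular system):

```python
def pulse_metrics(energy_j, duration_s, spot_area_cm2):
    """Peak power, peak intensity, and fluence of a single laser pulse."""
    peak_power_w = energy_j / duration_s            # P = E / tau
    intensity_w_cm2 = peak_power_w / spot_area_cm2  # I = P / A
    fluence_j_cm2 = energy_j / spot_area_cm2        # Phi = E / A
    return peak_power_w, intensity_w_cm2, fluence_j_cm2

# Illustrative: a 1 mJ, 10 ns pulse focused onto 1e-4 cm^2 (a 100 um x 100 um spot)
p, i, f = pulse_metrics(1e-3, 10e-9, 1e-4)
print(p)  # 1e5 W of peak power from a modest-energy pulse
print(f)  # 10 J/cm^2 of fluence
```

This is why low-average-power pulsed lasers can still ablate: the energy is compressed into nanoseconds and a tiny spot.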
Since the cornea does not grow back, the laser is used to remodel the cornea’s refractive properties to correct refraction errors, such as astigmatism, myopia, and hyperopia. Laser ablation is also used to remove part of the uterine wall in women with menstruation and adenomyosis problems, in a process called endometrial ablation. Recently, researchers have demonstrated a successful technique for ablating subsurface tumors with minimal thermal damage to surrounding healthy tissue, by using a focused laser beam from an ultra-short pulse diode laser source. Marine surface coatings: Antifouling paints and other related coatings are routinely used to prevent the buildup of microorganisms and other animals, such as barnacles, on the bottom hull surfaces of recreational, commercial and military sea vessels. Ablative paints are often utilized for this purpose to prevent the dilution or deactivation of the antifouling agent. Over time, the paint will slowly decompose in the water, exposing fresh antifouling compounds on the surface. Engineering the antifouling agents and the ablation rate can produce long-lived protection from the deleterious effects of biofouling. In medicine, ablation is the removal of a part of biological tissue, usually by surgery. Surface ablation of the skin (dermabrasion, also called resurfacing because it induces regeneration) can be carried out by chemicals (chemoablation), by lasers (laser ablation), by freezing (cryoablation), or by electricity (fulguration). Its purpose is to remove skin spots, aged skin, and wrinkles, thus rejuvenating it. Surface ablation is also employed in otolaryngology for several kinds of surgery, such as for snoring. Ablation therapy using radio frequency waves on the heart is used to cure a variety of cardiac arrhythmias such as supraventricular tachycardia, Wolff–Parkinson–White syndrome (WPW), ventricular tachycardia, and more recently as management of atrial fibrillation.
The term is often used in the context of laser ablation, a process in which a laser dissolves a material's molecular bonds. For a laser to ablate tissues, the power density or fluence must be high; otherwise thermocoagulation occurs, in which the tissue is merely heated and coagulates rather than being vaporized. Rotoablation is a type of arterial cleansing that consists of inserting a tiny, diamond-tipped, drill-like device into the affected artery to remove fatty deposits or plaque. The procedure is used in the treatment of coronary heart disease to restore blood flow. Radiofrequency ablation (RFA) is a method of removing aberrant tissue from within the body via minimally invasive procedures. Microwave ablation (MWA) is similar to RFA but operates at higher frequencies of electromagnetic radiation. Bone marrow ablation is a process whereby the human bone marrow cells are eliminated in preparation for a bone marrow transplant. This is performed using high-intensity chemotherapy and total body irradiation. As such, it has nothing to do with the vaporization techniques described in the rest of this article. Recently, some researchers have reported successful results with genetic ablation. In particular, genetic ablation is potentially a much more efficient method of removing unwanted cells, such as tumor cells, because large numbers of animals lacking specific cells could be generated. Genetically ablated lines can be maintained for a prolonged period of time and shared within the research community. Researchers at Columbia University report reconstituted caspases combined from C. elegans and humans, which maintain a high degree of target specificity. The genetic ablation techniques described could prove useful in battling cancer. Passive fire protection: Firestopping and fireproofing products can be ablative in nature. This can mean endothermic materials, or merely materials that are sacrificial and become "spent" over time while exposed to fire, such as silicone firestop products.
Given sufficient time under fire or heat conditions, these products char away, crumble, and disappear. The idea is to put enough of this material in the way of the fire that a level of fire-resistance rating can be maintained, as demonstrated in a fire test. Ablative materials usually have a large concentration of organic matter that is reduced by fire to ashes. In the case of silicone, organic rubber surrounds very finely divided silica dust (up to 380 m² of combined surface area of all the dust particles per gram of this dust). When the organic rubber is exposed to fire, it burns to ash and leaves behind the silica dust with which the product started. In spacecraft design, ablation is used to both cool and protect mechanical parts and/or payloads that would otherwise be damaged by extremely high temperatures. Two principal applications are heat shields for spacecraft entering a planetary atmosphere from space and cooling of rocket engine nozzles. Examples include the Apollo Command Module that protected astronauts from the heat of atmospheric reentry and the Kestrel second stage rocket engine designed for exclusive use in an environment of space vacuum since no heat convection is possible. In a basic sense, ablative material is designed to slowly burn away in a controlled manner, so that heat can be carried away from the spacecraft by the gases generated by the ablative process while the remaining solid material insulates the craft from superheated gases. There is an entire branch of spaceflight research involving the search for new fireproofing materials to achieve the best ablative performance; this function is critical to protect the spacecraft occupants and payload from otherwise excessive heat loading. The same technology is used in some passive fire protection applications, in some cases by the same vendors, who offer different versions of these fireproofing products, some for aerospace and some for structural fire protection. 
- Cell Ablation definition, Change Bioscience.
- Paterson, W. S. B. (1999). The Physics of Glaciers. Tarrytown, N.Y.: Pergamon.
- Glossary of Meteorology.
- Betterton, M. D. (2001). "Theory of structure formation in snowfields motivated by penitentes, suncups, and dirt cones". Physical Review E 63.5: 056129.
- Sakai, Akiko, et al. (2000). "Role of supraglacial ponds in the ablation process of a debris-covered glacier in the Nepal Himalayas". IAHS Publication: 119-132.
- Paul, Frank, Christian Huggel, and Andreas Kääb (2004). "Combining satellite multispectral image data and a digital elevation model for mapping debris-covered glaciers". Remote Sensing of Environment 89.4: 510-518.
- Fujii, Yoshiyuki (1977). "Field experiment on glacier ablation under a layer of debris cover". Journal of the Japanese Society of Snow and Ice 39 (Special): 20-21.
- Kayastha, Rijan Bhakta, et al. (2000). "Practical prediction of ice melting beneath various thickness of debris cover on Khumbu Glacier, Nepal, using a positive degree-day factor". IAHS Publication 7182.
- Sajjadi, Amir Yousef, Kunal Mitra, and Michael Grace (2011). "Ablation of subsurface tumors using an ultra-short pulse laser". Optics and Lasers in Engineering 49.3: 451-456. ISSN 0143-8166.
- Chelur, Dattananda S.; Chalfie, Martin (February 2007). "Targeted cell killing by reconstituted caspases". Proceedings of the National Academy of Sciences 104 (7): 2283-8. Bibcode:2007PNAS..104.2283C. doi:10.1073/pnas.0610877104. PMID 17283333. Retrieved 2007-03-08.
- Parker, John and C. Michael Hogan (August 1965). "Techniques for Wind Tunnel assessment of Ablative Materials". NASA Ames Research Center, Technical Publication.
Understanding the processes inside the nucleus of a cell, which houses DNA and is the site for transcribing genes, could lead to greater comprehension of genetics and the factors that regulate expression. Scientists have used proteins or dyes to track activity in the nucleus, but those can be large and tend to be sensitive to light, making them hard to use with simple microscopy techniques. Researchers have been exploring a class of nanoparticles called quantum dots, tiny specks of semiconductor material only a few molecules big that can be used to monitor microscopic processes and cellular conditions. Quantum dots offer the advantages of small size, bright fluorescence for easy tracking, and excellent stability in light. “Lots of people rely on quantum dots to monitor biological processes and gain information about the cellular environment. But getting quantum dots into a cell for advanced applications is a problem,” said Min-Feng Yu, a professor of mechanical science and engineering. Getting any type of molecule into the nucleus is even trickier, because it’s surrounded by an additional membrane that prevents most molecules in the cell from entering. Yu worked with fellow mechanical science and engineering professor Ning Wang and postdoctoral researcher Kyungsuk Yum to develop a nanoneedle that also served as an electrode that could deliver quantum dots directly into the nucleus of a cell – specifically to a pinpointed location within the nucleus. The researchers can then learn a lot about the physical conditions inside the nucleus by monitoring the quantum dots with a standard fluorescent microscope. “This technique allows us to physically access the internal environment inside a cell,” Yu said. “It’s almost like a surgical tool that allows us to ‘operate’ inside the cell.” The group coated a single nanotube, only 50 nanometers wide, with a very thin layer of gold, creating a nanoscale electrode probe. They then loaded the needle with quantum dots.
A small electrical charge releases the quantum dots from the needle. This provides a level of control not achievable by other molecular delivery methods, which involve gradual diffusion throughout the cell and into the nucleus. “Now we can use electrical potential to control the release of the molecules attached on the probe,” Yu said. “We can insert the nanoneedle in a specific location and wait for a specific point in a biologic process, and then release the quantum dots. Previous techniques cannot do that.” Because the needle is so small, it can pierce a cell with minimal disruption, while other injection techniques can be very damaging to a cell. Researchers can also use this technique to accurately deliver the quantum dots to a very specific target to study activity in certain regions of the nucleus, or potentially other cellular organelles. “Location is very important in cellular functions,” Wang said. “Using the nanoneedle approach you can get to a very specific location within the nucleus. That’s a key advantage of this method.” The new technique opens up new avenues for study. The team hopes to continue to refine the nanoneedle, both as an electrode and as a molecular delivery system. They hope to explore using the needle to deliver other types of molecules as well – DNA fragments, proteins, enzymes and others – that could be used to study a myriad of cellular processes. “It’s an all-in-one tool,” Wang said. “There are three main types of processes in the cell: chemical, electrical, and mechanical. This has all three: It’s a mechanical probe, an electrode, and a chemical delivery system.” The team’s findings will appear in the Oct. 4 edition of the journal Small. The National Institutes of Health and the National Science Foundation supported this work.
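The contrast with diffusion-based delivery can be made roughly quantitative with a back-of-envelope estimate. The diffusion coefficient and cell size below are order-of-magnitude assumptions for illustration, not measurements from this study:

```python
def diffusion_time_s(distance_um, D_um2_per_s, dims=3):
    """Characteristic time for Brownian motion to cover a distance L:
    mean-squared displacement <r^2> = 2 * d * D * t, so t ~ L^2 / (2 d D)."""
    return distance_um ** 2 / (2 * dims * D_um2_per_s)

# Illustrative numbers: a nanoparticle in crowded cytoplasm might diffuse
# with D of order 1 um^2/s; wandering across a ~10 um cell then takes seconds,
# with no control over when or where it arrives.
print(diffusion_time_s(10.0, 1.0))
```

A nanoneedle that places and then electrically releases its cargo sidesteps both the delay and, more importantly, the lack of spatial and temporal control.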
Liz Ahlberg | University of Illinois
This discovery is of significant interest to the international scientific community. The results are published in this week’s edition of the American journal Nature Genetics. The authors describe the discovery of a novel class of mutations that disrupt the function of a gene and thereby cause a specific phenotype. The mutation created an “illegitimate” microRNA (miRNA) recognition site in a gene that lacks one in its normal form. In this study, the gene concerned is myostatin, which is expressed in skeletal muscle; the function of the derived protein is to inhibit muscular growth. The mutation discovered in sheep exposed a recognition site for two miRNAs that are highly expressed in muscle. In “mutant” animals, these miRNAs consequently target the myostatin gene and block its translation. The resulting absence of myostatin provokes muscular hypertrophy in Texel sheep.

A mechanism observed in other species as well

However, Michel Georges’ team investigated further. Pursuing the study using bioinformatic approaches, the team identified polymorphisms (common mutations) in humans and mice that are likely to act in the same way as they do in the Texel breed. It appears, therefore, that this new kind of mutation, discovered while studying sheep, could contribute significantly to the phenotypic variation observed in many species – among them humans – including the hereditary predisposition to various diseases. Researchers at ULg have thus produced a database available online that compiles all these mutations (the Patrocles database: http://www.patrocles.org). It will assist researchers around the world in discovering similar phenomena for other phenotypes, including hereditary diseases.
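The "illegitimate recognition site" mechanism can be illustrated with a toy seed-match scan: a canonical miRNA target site in a 3'UTR is the reverse complement of the miRNA seed (nucleotides 2-8). The miRNA below is the canonical miR-1 family sequence, one of the muscle miRNAs implicated in the Texel study, but the UTR fragment is invented for illustration and is not the actual myostatin variant:

```python
def revcomp(seq):
    """Reverse complement of an RNA sequence."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def seed_sites(mirna, utr):
    """0-based positions in the UTR matching the reverse complement
    of the miRNA seed region (a 7mer, nucleotides 2-8)."""
    target = revcomp(mirna[1:8])
    return [i for i in range(len(utr) - len(target) + 1)
            if utr[i:i + len(target)] == target]

mir1 = "UGGAAUGUAAAGAAGUAUGUAU"   # mature miR-1 sequence
utr = "AAGCACAUUCCAAGG"            # hypothetical UTR carrying a seed match
print(seed_sites(mir1, utr))       # [4]
```

A single point mutation in a UTR can create exactly such a match where none existed, putting the gene under miRNA repression.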
Bioluminescence can also be used as a tool by researchers to learn more about the ocean and its mysteries. Edie Widder, a scientist who specializes in bioluminescence, was with a group attempting to film the giant squid for the first time. She suspected that the giant squid would be lured to a bioluminescent light attached to a fake squid—not because it wanted to eat the small fake squid, but because its flashing light "burglar alarm" could mean that there was larger prey in the vicinity. Her theory proved right. A live giant squid was captured for the first time on film in 2012!
Washington: Scientists have identified the path that led to the development of new super-crops, which use an advanced form of photosynthesis. A new study has traced back the evolutionary paths of all the plants that use advanced photosynthesis, including maize, sugar cane and millet, to find out how they evolved the same ability independently, despite not being directly related to one another. Using a mathematical analysis, the authors uncovered a number of tiny changes in the plants' physiology that, when combined, allow them to grow more quickly, use a third as much water as other plants, and capture around thirteen times more carbon dioxide from the atmosphere. Together, these individual evolutionary advances make up a 'recipe' that could be used to improve key agricultural crops that only use the less efficient form. Mathematician Dr Iain Johnston from Imperial College London and plant biologist Dr Ben Williams from the University of Cambridge came together to test whether a new mathematical model of evolution could be used to unpick the evolutionary pathways that led to the advanced photosynthesis. "Encouragingly for the efforts to design super-efficient crops, we found that several different pathways lead to the more efficient photosynthesis, so there are plenty of different recipes biologists could follow to achieve this," Johnston said. The study is published in the journal eLife.
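The kind of path analysis described can be caricatured as enumerating the orders in which a set of traits may be acquired, subject to which intermediate trait combinations are viable. The trait names and viability constraints below are hypothetical placeholders, not those inferred in the study, which fitted a probabilistic model to phylogenetic data:

```python
from itertools import permutations

def acquisition_paths(traits, viable):
    """Enumerate orderings of trait gains, keeping only paths where every
    prefix (intermediate combination of traits) is a viable state."""
    paths = []
    for order in permutations(traits):
        prefixes = [frozenset(order[:k]) for k in range(1, len(order) + 1)]
        if all(p in viable for p in prefixes):
            paths.append(order)
    return paths

# Toy model with three photosynthesis-related traits; assume (hypothetically)
# that 'decarboxylation' is only viable once 'anatomy' is already in place.
traits = ("anatomy", "pump", "decarboxylation")
viable = {frozenset(s) for s in [
    ("anatomy",), ("pump",),
    ("anatomy", "pump"), ("anatomy", "decarboxylation"),
    ("anatomy", "pump", "decarboxylation"),
]}
for path in acquisition_paths(traits, viable):
    print(" -> ".join(path))
```

Even in this toy version, several distinct orderings reach the final state, echoing the study's finding that multiple "recipes" lead to the efficient form.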
Scientists have created an algorithm that analyzes RF signals in order to monitor you in your sleep. It uses a device similar to a Wi-Fi router to determine everything from how shallow your breathing is to whether you’re having a dream or not. Imagine if your Wi-Fi router knows when you are dreaming, and can monitor whether you are having enough deep sleep, which is necessary for memory consolidation. Our vision is developing health sensors that will disappear into the background and capture physiological signals and important health metrics, without asking the user to change her behavior in any way. The sensor works by sending out low-level RF signals that bounce off of people. The subtle movements of the human body are detected by an RF device similar to a router and analyzed by the algorithm. This AI gathers the same diagnostic measurements and information obtained by attaching electrode sensors directly to a subject, without wires. In fact, this line of research previously yielded a similar device that could be used to detect human emotions wirelessly. The AI-powered device has the potential to revolutionize sleep disorder research; instead of forcing patients into unfamiliar environments, the laptop-sized device could be discreetly deployed in a person’s bedroom. It doesn’t take a scientist to figure out that sleep research would be easier to conduct if doctors could simply monitor a patient’s normal sleep routine remotely while the patient rested in their normal sleeping environment, all without sticky electrodes attached to wires or noisy machines. In the US alone, an estimated 50 million people — nearly 16 percent of the population — suffer from a sleep disorder. That’s a lot of people working, driving, parenting, and studying without a good night’s rest. A breakthrough in the ability to monitor and analyze the way people sleep, in real-time, could benefit everyone.
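The core signal-processing step behind this kind of monitoring, recovering a slow periodic component such as breathing from a reflected-signal time series, can be sketched with a plain discrete Fourier transform. This is an illustration of the principle, not the MIT team's algorithm:

```python
import math

def dominant_frequency_hz(samples, sample_rate_hz):
    """Frequency of the largest DFT magnitude, excluding DC.
    A naive O(n^2) DFT keeps the sketch dependency-free."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate_hz / n

# Simulated chest-motion reflection: breathing at 0.25 Hz, sampled at 10 Hz.
fs, breath_hz = 10.0, 0.25
signal = [math.sin(2 * math.pi * breath_hz * t / fs) for t in range(600)]
print(dominant_frequency_hz(signal, fs) * 60, "breaths per minute")  # 15.0
```

A real system must additionally separate breathing from heartbeat, gross body movement, and multipath reflections, which is where the machine learning comes in.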
Some day soon your Wi-Fi router will let your doctor know if you aren’t sleeping well. Better yet – perhaps it’ll sense you’re having a bad dream and play some Jack Johnson to soothe you back to a deep restful sleep.
A View from Emerging Technology from the arXiv

The Puzzle Over Saturn's Orbit (cont'd)

If modified theories of gravity are correct, we ought to see the effects in the orbit of Saturn. But nobody is quite sure whether we do or not. Many astronomers think our universe is filled with mysterious dark stuff that exerts a gravitational pull on big things like galaxies. In fact, most galaxies spin so fast that they would fly apart unless there were a substantial amount of this dark gloop holding them together. But if dark matter does fill our galaxy, we ought to see it in our Solar System. There’s no shortage of dark matter detectors looking for the stuff. Most have drawn a blank, and those that do claim to have seen it have been ridiculed. There is an alternative hypothesis, however. This is the idea that Newton’s equations of motion work in a different way at the very low accelerations that stars experience as they orbit a galaxy. The equations that describe this so-called Modified Newtonian Dynamics, or MOND, are non-linear and so lead to other predictions. “An important consequence of the non-linearity is that the gravitational dynamics of a system is influenced by the external gravitational environment in which the system is embedded,” say Luc Blanchet at the Université Pierre et Marie Curie and Jérôme Novak at the Université Denis Diderot, both in Paris. This is called the external field effect, and it ought to have a measurable influence on the Solar System, particularly on the precession of the perihelion of the planets. Today, Blanchet and Novak calculate the size of this effect and compare it to the best data we have of planetary motion. It turns out that the planets most affected are the distant gas giants: Saturn, Uranus and Neptune. And the best monitored of these is Saturn, since astronomers have been able to follow the motion of the Cassini spacecraft as it orbits the ringed giant.
Blanchet and Novak say that the accuracy of these measurements can be used to rule out some formulations of MOND. "We find that the precession effect is rather large for outer gaseous planets, and in the case of Saturn is comparable to, and in some cases marginally excluded by published residuals of precession permitted by the best planetary ephemerides." But the story doesn't end there. One of the best sets of data about Saturn's motion has been compiled by the Russian astronomer Elena Pitjeva, who heads the Laboratory of Ephemeris Astronomy at the Institute of Applied Astronomy in St Petersburg. In 2005, she published a comprehensive set of data on Saturn's motion. It's this that Blanchet and Novak used to compare their calculations against. But back in 2008, rumours began to circulate that Pitjeva had found something strange in more recent data. These were outlined in a paper by Lorenzo Iorio at the National Institute of Nuclear Physics in Italy and covered by the Physics arXiv Blog at the time. The bottom line was that Pitjeva had reportedly discovered that the precession of Saturn's perihelion, as predicted by general relativity, needed to be corrected to fit the most recent data from Cassini. Pitjeva doesn't appear to have published these data, even now almost three years later. And Iorio hasn't updated his paper either. But it raises an intriguing question. Could the data from Cassini be telling us something interesting about MOND? Perhaps Blanchet and Novak could politely enquire about the status of Pitjeva's result and compare it with their calculations. Just to settle the matter for curious souls. Ref: arxiv.org/abs/1105.5815: Testing MOND in the Solar System
Prenatal gene therapy, which uses viruses to deliver normal copies of genes, has been used to prevent acute neuronopathic Gaucher's disease.
Gene editing alters the genome, typically to better understand gene and protein function. It also paves the way for gene therapy.
Scientists have discovered that CRISPR/Cas9 gene editing can cause greater genetic damage in cells than was previously thought.
The Nuffield Council on Bioethics has concluded that using gene editing tools on human embryos, sperm, or eggs for heritable gene editing could be 'morally permissible' in some cases.
Scientists are taking advantage of the "self-homing" abilities of cancer cells and are creating armies of cancer-killing cells using CRISPR gene editing.
Researchers have, for the first time, used gene editing tools in adult monkeys to disable a gene throughout much of the liver.
Researchers have, for the first time, used a gene editing technique to successfully cure a genetic condition in a mouse model.
New research could allow us greater control over what happens to genetically modified organisms once they're in the wild.
Researchers have, for the first time, succeeded in converting human skin cells into pluripotent stem cells by activating the cells' own genes, using the gene editing technology CRISPRa.
CRISPR gene drives have been tested in laboratory mice for the first time, offering a way in which multiple genes in mice can be altered to model complex multigenic human diseases. Could this step eventually lead to the eradication of pest species, or is the technology still too controversial?
Researchers in the UK have invented a switch that allows them to turn protein expression off and on at will, potentially offering control over gene editing tools.
Why are consumers so reluctant to embrace genetically modified foods? A new study suggests agricultural biotech companies are failing to show consumers a personal benefit to buying GM foods.
This information addresses common questions about the emissions from eruptive fissures on Kīlauea Volcano's Lower East Rift Zone (LERZ), like those in Leilani Estates. At this time, the eruption and response are ongoing; this information was last updated on 28 June 2018. The LERZ eruption is different from the 2014 Pāhoa lava flows. The vent for the 2014 lava flows was located close to the Puʻu ʻŌʻō vent, many miles upslope from the current LERZ eruption. Gas from the lava escaped as the lava flowed downhill. The active lava flows had already lost much of their gas by the time they entered the Pāhoa community. In the current situation, the eruptive vents are located within or near residential areas. The lava being erupted contains very high amounts of gas, so the gas concentrations near the lava can be much higher than during the Pāhoa event. These concentrations may be similar to the amount that was released from the Halema'uma'u lava lake (prior to May 2018) in the restricted areas at the Kīlauea summit. The air pollutants of most concern during the current volcanic activity are: The VMAP plume colors are coded to the HDOH SO2 health advisory levels (PDF). UH Manoa's Vog Measurement and Prediction project (VMAP) provides forecast movies showing plume location and SO2 concentration for a 3-day period.
A protein characterized by researchers at Baylor College of Medicine plays an important role in communication between neurons. This protein is overactive (up-regulated) in children with Down syndrome. Identifying this protein, Dap160, and its function is an important step in understanding how neurons communicate with one another, said Dr. Hugo Bellen, BCM professor of molecular and human genetics, a Howard Hughes Medical Institute investigator, and director of the program in developmental biology. The report appears in the July 22, 2004, issue of the journal Neuron. Dap160 was found as part of a new screen developed in Bellen's laboratory. The screen revealed many genes involved in neuronal function and development, said Bellen. Dap160 stands for Dynamin-associated protein of 160 kD (kilodaltons). Dynamin is a protein that is crucial to the final portion of the synaptic process. Ross Tomlin | EurekAlert!
doi:10.1038/nindia.2018.30 Published online 14 March 2018
Researchers have synthesised a new kind of catalyst that can efficiently help produce oxygen by splitting water [1]. Splitting water also generates hydrogen, making the catalyst potentially useful for yielding clean fuel. An oxygen-generating reaction is a key process that keeps metal-air batteries, fuel cells and solar cells functioning. However, this reaction is slow. Metal-oxide-based catalysts used to accelerate this reaction are expensive and generate an oxide layer that reduces the conductivity of batteries and fuel cells. To find an efficient catalyst, scientists from the CSIR-Central Electrochemical Research Institute in Karaikudi, Tamil Nadu, led by Subbiah Alwarappan, prepared the catalyst using nickel-nanoparticle-loaded modified graphene nanoribbons. They then explored the catalyst's potential to catalyse an oxygen-generating reaction. Increasing the amount of nickel in the catalyst significantly increased its catalytic activity at a low voltage. This shows that the nickel present in the graphene nanoribbons played a vital role in catalysing the oxygen-generating reaction. The catalyst retained its catalytic efficiency over a period of 10 hours at a steady current density. Such stability can be attributed to a closely packed structure in which nickel nanoparticles are encapsulated in the matrix of graphene nanoribbons. It can also be employed as an enzyme-free catalyst in various biosensors, says Alwarappan. 1. Joy, J. et al. Nickel-incorporated, nitrogen-doped graphene nanoribbons as efficient electrocatalysts for oxygen evolution reaction. J. Electrochem. Soc. 165 (2018) doi: 10.1149/2.0601803jes
Numbers can also be compared. Normal comparisons work for numbers exactly as you'd expect.

print( 1 < 2 )       # Less than: True
print( 2.0 >= 1 )    # Greater than or equal: mixed-type 1 converted to 1.0: True
print( 2.0 == 2.0 )  # Equal value: True
print( 2.0 != 2.0 )  # Not equal value: False

Python allows us to chain multiple comparisons together to perform range tests. Chained comparisons are a shorthand for larger Boolean expressions. For example, the expression A < B < C tests whether B is between A and C. It is equivalent to the Boolean test A < B and B < C. For example, assume the following assignments:

X = 2
Y = 4
Z = 6

print( X < Y < Z )        # Chained comparison: range test: True
print( X < Y and Y < Z )  # Equivalent Boolean form: True
print( X < Y > Z )        # 2 < 4 and 4 > 6: False
print( X < Y and Y > Z )  # False
print( 1 < 2 < 3.0 < 4 )  # True
print( 1 > 2 > 3.0 > 4 )  # False
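A detail the equivalence above glosses over (this example is an addition, not part of the original snippet): in A < B < C, Python evaluates B only once, and it stops comparing as soon as one link is false. The spelled-out and form evaluates B twice:

```python
count = 0

def middle():
    """Return 2, counting how many times the expression is evaluated."""
    global count
    count += 1
    return 2

count = 0
chained = 1 < middle() < 3                 # middle() is evaluated once
chained_count = count

count = 0
expanded = 1 < middle() and middle() < 3   # middle() is evaluated twice
expanded_count = count

# Chains also short-circuit: 3 < 2 is False, so the division that would
# raise ZeroDivisionError is never evaluated.
zero = 0
no_error = 3 < middle() < (1 // zero)
```

Both chained and expanded are True, but chained_count is 1 while expanded_count is 2, which matters when the middle operand is an expensive call or has side effects.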
What price for cleaner air in the Middle East?
Published online 21 August 2015
Conflict, economic recession and favourable policies in a few countries have surprisingly reduced air pollution in the Middle East since 2010.
The Middle East is undergoing "dramatic" atmospheric changes, with plummeting levels of the pollutant nitrogen dioxide over the past five years. The bad news is that cleaner air has resulted from an industrial slowdown caused by political upheaval and armed conflict, especially in countries like Syria and Egypt. The findings are part of a new study led by Jos Lelieveld of the Max Planck Institute for Chemistry in Mainz, Germany, published today in Science Advances. Lelieveld and his colleagues, including scientists from King Saud University (KSU) and King Abdullah University of Science and Technology (KAUST), have been using satellite platforms to observe the emissions of nitrogen oxides in the Middle East for 10 years. Their results are the first to reveal a correlation between the political climate and the atmospheric one in the Middle East. The study is also the first to employ both stationary and orbiting satellites to produce a 10-year dataset in high resolution. "In the Middle East, large changes of NO2 have occurred," Lelieveld told a press conference on Thursday, describing the atmospheric changes as "unique worldwide." "These [findings] disagree with scenarios used in prediction of air pollution and climate change for the future," he added. In the past, projections were almost always linked to levels of CO2 in the air, which Lelieveld says is not a valid predictor of climate trends in the Middle East. He tracked NO2, a byproduct of fossil fuel use and road traffic exhaust: a highly reactive gas that contributes to reactions producing ground-level ozone (smog), and one that poses the greatest hazards to health.
The Middle East has no air-quality networks on the ground, so long-term space observations of NO2 were used to study air pollution emissions in the region. Lelieveld found that from 2005 to 2010, the Middle East recorded the world's fastest-rising air pollution emissions. A similar trend was plotted in East Asia, and has been linked to economic growth there. "However, [the Middle East] is the only region where this pollution trajectory was interrupted around 2010 and followed by a strong decline." The team attributes part of the decrease to good practices, mainly implementation of environmental control measures. But curtailed industrial activity caused by economic pressure and conflict is the primary reason for the changes in the Middle East's environmental trends. "In Syria and Egypt, armed uprisings and political crises have led to economic and social pressures. In Syria and Iraq, the [outpouring] of refugees has reduced pollution." No environmental policies were implemented in these countries after 2010, yet there was a decline in NO2 emissions of 20-50%. "It is amazing how strong the changes have been in the Middle East," says Lelieveld. He explains that unlike other crisis areas, the trends in this region changed sharply rather than gradually. He cited how the activities of the insurgent group Islamic State (IS) are rapidly changing trends. When people flee en masse from one part of Iraq to another, the atmospheric trends change. Pollution decreases in the area they leave and increases wherever they find refuge. This correlates with information from the UN on the movement of these refugees. Apart from the countries in which good practices are responsible for cleaner air, specifically the Gulf states, this information is purely diagnostic.
An unfortunate side effect
For war-affected countries, Lelieveld concedes that air pollution information is not going to greatly alleviate their problems. But this information and new diagnostics may at some point be analysed for each country and used by policymakers in the region, he adds. "I don't want to create the illusion that air pollution measurements [taken] from space will help people in areas like Iraq and Syria. It's a simple diagnostic of what is going on, which is possibly helpful because in many cases when policies are implemented and emissions are estimated, you can improve emissions [based on these numbers]." The researcher says he will continue his observations of the region, hoping that in the future he will have access to more advanced ecological satellite imaging instruments that will provide a wider array of information, in high resolution and across different timeframes. He says he's collaborating with scientists at KAUST, Saudi ARAMCO, and in Egypt and Lebanon to that end. "What's happening in the Middle East is unusual," says Lelieveld. "I am hoping that the world is being reminded again that we need the international community to get active and try to help resolve some of these problems."
Lelieveld, J. et al. Abrupt recent trend changes in atmospheric nitrogen dioxide over the Middle East. Sci. Adv. http://dx.doi.org/10.1126/sciadv.1500498 (2015)
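The "interrupted trajectory" the study describes is, computationally, a trend with a breakpoint: a rising line up to about 2010 and a falling one after it. The sketch below illustrates that kind of diagnostic on invented yearly values; it is not Lelieveld's method or data.

```python
def fit_line(xs, ys):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def sse(xs, ys):
    """Sum of squared residuals of the best-fit line through (xs, ys)."""
    slope, intercept = fit_line(xs, ys)
    return sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))

def best_breakpoint(years, values, min_pts=3):
    """Return the split year minimizing the combined squared error of
    separate line fits before and after the split."""
    best = None
    for i in range(min_pts, len(years) - min_pts + 1):
        err = sse(years[:i], values[:i]) + sse(years[i:], values[i:])
        if best is None or err < best[0]:
            best = (err, years[i])
    return best[1]

# Invented NO2-like series: rising through 2010, then declining.
years = list(range(2005, 2015))
no2 = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 1.7, 1.4, 1.1, 0.8]
```

On this toy series the best split lands at the 2010 turnaround, mirroring the qualitative finding in the satellite record.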
Please note that we have stopped the regular imports of Gene Expression Omnibus (GEO) data into ArrayExpress. This may not be the latest version of this experiment.
E-GEOD-7853 - Effects of hypothermia on gene expression in zebrafish gills
Submitted on 19 May 2007, released on 20 June 2010, last updated on 2 May 2014
Ectothermic vertebrates differ from mammals, which are sensitive to hypothermia and must maintain their core temperature to survive. Why and how ectothermic animals can survive, grow and reproduce at low temperatures has long been a scientifically challenging and important question for biologists. We used a microarray to profile the gill transcriptome in zebrafish (Danio rerio) after exposure to low temperature. Adult zebrafish were acclimated to a low temperature of 12°C for 1 d (1-d) and 30 d (30-d), and the gill transcriptome was compared to that of controls by oligonucleotide microarray hybridization. Results showed that 11 and 22 transcripts were upregulated by low-temperature treatment for 1-d and 30-d, respectively, while 56 and 70 transcripts were downregulated. The gill transcriptome profiles revealed that ionoregulation-related genes were highly upregulated in cold-acclimated zebrafish. This observation encouraged us to investigate the role of ionoregulatory genes in zebrafish gills during cold acclimation. Cold acclimation caused upregulation of genes that are essential for ionocyte specification, differentiation, ionoregulation, and acid/base balance, and also increased the numbers of cells expressing these genes. mRNA expression of the epithelial Ca2+ channel (ECaC), one of these genes, was increased in parallel with the level of Ca2+ influx, revealing a functional compensation after long-term acclimation to cold. Phospho-histone H3 and TUNEL staining showed that the cell turnover rate was retarded in cold-acclimated gills.
These results suggest that gills may sustain their functions by yielding mature ionocytes from preexisting undifferentiated progenitors in low-temperature environments. The AB strain of zebrafish (D. rerio) was originally obtained from the University of Oregon, and the fish were kept in the zebrafish stock center at Academia Sinica, Taipei, Taiwan. Fish were reared in local tap water at 28°C under a photoperiod regime of 14-h light/10-h dark. Adult zebrafish were acclimated to 12°C, with the temperature reduced gradually at 4°C/h in order to prevent temperature shock and reduce mortality. After 30 d of acclimation, surviving fish (over an 80% survival rate) appeared to be feeding and behaving normally compared with control fish. The experimental protocols were approved by the Academia Sinica Institutional Animal Care and Utilization Committee (approval no. RFiZOOHP2006083). After the low-temperature treatment, gill tissues dissected from 6 individuals were pooled as one sample and then homogenized in 5 ml Trizol reagent (Invitrogen, Carlsbad, CA). Thirty-six individuals (18 for the low-temperature treatment and 18 for the control) were sacrificed for each microarray hybridization experiment, and another 60 individuals were used for quantitative reverse-transcription polymerase chain reaction. After chloroform extraction, RNA precipitation and ethanol washing, the RNA samples were purified and treated with DNase I to remove genomic DNA using the RNeasy Mini Kit (Qiagen, Huntsville, Alabama). The quantity and quality of the total RNA were assessed by spectrophotometry and agarose gel electrophoresis, respectively. The commercial zebrafish 14K oligonucleotide set (MWG Biotech AG, Ebersbach, Germany) was obtained and printed on UltraGAPS coated slides (Corning, New York, NY) with the OmniGrid 100 microarrayer (Genomic Solutions, Ann Arbor, MI) according to the manufacturer's instructions.
The 14,067 oligonucleotides represent 9666 genes (7009 singlet genes and 2657 redundant genes); the redundancy of this chip is 31%. A detailed description of the oligonucleotides can be obtained on the Ocimum Biosolutions website (http://www.ocimumbio.com/web/default.asp). cDNA probes were synthesized by reverse transcription of 20 μg total RNA using a SuperScript indirect cDNA labeling system (Invitrogen) and were labeled with Cy5 (cold-treatment groups) and Cy3 (control groups) (Amersham Bioscience, Buckinghamshire, UK), respectively. The zebrafish 14K OciChip array was pretreated with 1% bovine serum albumin (BSA) (fraction V), 4x saline-sodium citrate (SSC), and 1% sodium dodecylsulfate (SDS) at 42 °C for 45 min, and then hybridized overnight in a cocktail containing 5x Denhardt's solution, 6x SSC, 0.5% SDS, 50% formamide, 50 mM sodium phosphate, and 2 µg/µl yeast tRNA. Slides were washed with 2x SSC and 0.1% SDS (5 min), 1x SSC and 0.1% SDS (5 min), 0.5x SSC (5 min), and twice with 0.1x SSC (2 min each). Scanning was performed with a Genepix scanner (Molecular Devices, Sunnyvale, CA). The acquired images were analyzed using ScanArray Express 3.0 (PerkinElmer, Waltham, MA) and GeneSpring (Agilent Technologies, Foster City, CA) software. The measurements of spots were filtered by flags, and lowess normalization was performed after subtraction of the median background. To identify differentially expressed genes, we used a Cy5/Cy3 intensity ratio > 2 or < 0.5 as the cutoff. Each experiment contained 3 biological replicates (including 1 dye swap) with different samples, and the differentially expressed genes were selected from those with at least 2 of 3 significant signals (ratio > 2 or < 0.5).
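The selection rule in the last sentence (a twofold change in the same direction in at least 2 of 3 replicates) is simple enough to sketch in code. The gene names and ratio values below are invented for illustration; they are not taken from the study.

```python
def classify(ratios, up=2.0, down=0.5, min_hits=2):
    """Classify one gene from its Cy5/Cy3 ratios across replicates.

    Returns 'up' or 'down' if at least `min_hits` replicates cross the
    corresponding cutoff, otherwise 'unchanged'.
    """
    n_up = sum(r > up for r in ratios)
    n_down = sum(r < down for r in ratios)
    if n_up >= min_hits:
        return "up"
    if n_down >= min_hits:
        return "down"
    return "unchanged"

# Hypothetical ratios for three biological replicates per gene.
example = {
    "geneA": [2.4, 3.1, 1.8],  # above 2 in 2 of 3 replicates
    "geneB": [0.3, 0.4, 0.9],  # below 0.5 in 2 of 3 replicates
    "geneC": [1.1, 0.8, 2.5],  # no consistent change
}
calls = {gene: classify(ratios) for gene, ratios in example.items()}
```

With these made-up numbers, geneA is called up, geneB down, and geneC unchanged.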
Experiment type: transcription profiling by array
Submitter: Ming Yi Chou <firstname.lastname@example.org>
Contributors: CD Hsiao, IW Chen, MY Chou, PP Hwang, SC Chen, ST Liu
Citation: Chou MY, Hsiao CD, Chen SC, Chen IW, Liu ST, Hwang PP. Effects of hypothermia on gene expression in zebrafish gills: upregulation in differentiation and function of ionocytes as compensatory responses.
Like a balloon bobbing along in the air while tied to a child's hand, a tracer has been found in the sun's atmosphere to help track the flow of material coursing underneath the sun's surface. New research that uses data from NASA's Solar Dynamics Observatory, or SDO, to track bright points in the solar atmosphere and magnetic signatures on the sun's surface offers a way to probe the star's depths faster than ever before. Brightpoints in the sun's atmosphere, left, correspond to magnetic parcels on the sun's surface, seen in the processed data on the right. Green spots show smaller parcels, red and yellow much bigger ones. Images based on data from NASA's SDO captured at 8 p.m. EDT on May 15, 2010. Image Credit: NASA/SDO The technique opens the door for near real-time mapping of the sun's roiling interior – movement that affects a wide range of events on the sun from its 22-year sunspot cycle to its frequent bursts of X-ray light called solar flares. "There are all sorts of things lurking below the surface," said Scott McIntosh, first author of a paper on these results in the April 1, 2014, issue of The Astrophysical Journal Letters. "And we've found a marker for this deep rooted activity. This is kind of a gateway to the interior, and we don't need months of data to get there." One of the most common ways to probe the sun's interior is through a technique called helioseismology in which scientists track the time it takes for waves – not unlike seismic waves on Earth -- to travel from one side of the sun to the other. From helioseismology solar scientists have some sense of what's happening inside the sun, which they believe to be made up of granules and super-granules of moving solar material. The material is constantly overturning like boiling water in a pot, but on a much grander scale: A granule is approximately the distance from Los Angeles to New York City; a super-granule is about twice the diameter of Earth. 
Instead of tracking seismic waves, the new research probes the solar interior using the Helioseismic and Magnetic Imager on NASA's Solar Dynamics Observatory, or SDO, which can map the dynamic magnetic fields that thread through and around the sun. Since 2010, McIntosh has tracked the size of different magnetically balanced areas on the sun, that is, areas with an equal amount of magnetic field pointing in toward the sun as pointing out. Think of it like looking down at a city from above with a technology that observed people, but not walls, and recording areas that have an equal number of men and women. Even without seeing the buildings, you'd naturally get a sense for the size of rooms, houses, buildings, and whole city blocks, the structures in which people naturally group. The team found that the magnetic parcels they mapped corresponded to the size of granules and supergranules, but they also spotted areas much larger than those previously noted, about the diameter of Jupiter. It's as if, when searching for those pairs of men and women, one suddenly realized that the city itself and its sprawling suburbs were another scale worth paying attention to. The scientists believe these areas correlate to even larger cells of flowing material inside the sun. The researchers also looked at these regions in SDO imagery of the sun's atmosphere, the corona, using the Atmospheric Imaging Assembly instrument. They noticed that ubiquitous spots of extreme ultraviolet and X-ray light, known as brightpoints, prefer to hover around the vertices of these large areas, dubbed g-nodes. "Imagine a bunch of helium balloons with weights on them," said Robert Leamon, co-author on the paper at Montana State University in Bozeman and NASA Headquarters in Washington. "The weights get carried along by the motions at the bottom. We can track the motion of the helium balloons floating up high and that tells us what's happening down below."
By opening up a way to peer inside the sun quickly, these techniques could provide a straightforward way to map the sun's interior and perhaps even improve our ability to forecast changes in magnetic fields that can lead to solar eruptions. SDO is the first mission in NASA's Living with a Star program to explore aspects of the connected sun-Earth system that directly affect life and society. For more information about SDO and its mission, visit:
Susan Hendrix | EurekAlert!
RNA Molecules' Lives Are 10 Times Shorter Than Previously Thought
RNA molecules live an average of two minutes before they are eliminated by an exosome. (Image: University of Basel, Biozentrum)
A research group at the Biozentrum of the University of Basel has developed a new method to measure the half-life of RNA molecules. It was shown that standard methods yield distorted results and that RNA molecules live on average only two minutes, ten times shorter than previously assumed. The results are now published in the scientific journal "Science Advances". RNA molecules are single copies of the DNA of a cell. They transfer the genetic information of the DNA and serve as templates for the production of proteins that control all processes in the cell. These small information carriers are regulated over their lifetime, or rather their half-life. After their production, RNA molecules serve as templates for protein production for a limited time before they are degraded. So far, there have been two scientific methods used to measure the half-life of RNAs. As Prof. Attila Becskei's research team at the Biozentrum, University of Basel, discovered, these conventional methods can be rather inaccurate and sometimes give inconsistent results. Becskei's team has now found a new way to show that RNA molecules do not live on average for 20 minutes, but only two. "This was a challenging task for us because nobody knew in advance which method would deliver the right results," says Becskei.
The "gene control method" shows: RNAs live briefly
The half-life of an RNA is relevant for scientific studies of the cell cycle. The whole process of cell division depends on the right amount of proteins being available at the right time. If the concentrations in certain phases of the cell cycle do not match, errors occur. The gene control method used by Becskei is already known, but has not hitherto been used to measure the half-life of RNA molecules.
The reason is that it requires complex genetic techniques and is slow, since only one RNA can be examined at a time. A single gene is regulated on the DNA so that the production of its RNA can be switched on and off. If RNA production is stopped, it is possible to measure how long the RNAs already produced persist in the cell. Thus, the lifetime of this RNA molecule can be determined. "Though the method only yields the result for one RNA at a time, the result is reliable," says Becskei. The experiments were repeated for approximately 50 different genes and showed that 80 percent of all RNAs have a short lifetime and live less than two minutes. Only about 20 percent live longer, about five to ten minutes. "These results are astonishing considering that it has previously been assumed that RNAs persist in the cell for an average of 20 minutes," says Becskei.

Conventional methods have their drawbacks

So far, there have been essentially two main methods that scientists have used to measure the half-life of RNA molecules. In transcriptional inhibition, a substance is administered to the cell which stops the production of RNAs by all genes. "If, however, the production of all RNAs is stopped, other processes in the cell are also altered and no longer function properly. This distorts the results," says Becskei. In vivo labeling also has its downside: here, the RNAs are first labeled, and it is observed how long they persist in the cell. However, labeling with modified molecules can interfere with the function of the cell and lead to incorrect results. Thus, all methods used so far have a drawback, since the measurement itself affects the processes to be measured. "It is hard to believe that scientists around the world knowingly work with methods that produce distorted results," says Becskei. "It seems that the philosopher and science theorist Paul Feyerabend was right: science is often quite anarchist."
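The gene-control approach reduces the measurement to a simple decay problem: once transcription of the gene is switched off, transcript abundance falls exponentially, and the half-life follows from the fitted decay rate. A minimal sketch in Python, using hypothetical abundance measurements rather than data from the study:

```python
import math

def half_life(times, counts):
    """Estimate RNA half-life from abundance measured after transcription
    of the gene is switched off. Fits log(count) = log(c0) - k*t by
    least squares and returns t_1/2 = ln(2) / k."""
    logs = [math.log(c) for c in counts]
    n = len(times)
    mean_t = sum(times) / n
    mean_l = sum(logs) / n
    k = -sum((t - mean_t) * (l - mean_l) for t, l in zip(times, logs)) / \
        sum((t - mean_t) ** 2 for t in times)
    return math.log(2) / k

# Hypothetical data: abundance halving roughly every two minutes
times = [0, 1, 2, 3, 4]              # minutes after shut-off
counts = [1000, 707, 500, 354, 250]  # transcript copies (illustrative)
print(round(half_life(times, counts), 2))  # ~2.0 minutes
```

With an exact two-fold drop every two minutes, the fitted half-life comes out at about two minutes, matching the lifetimes reported here for the majority of RNAs.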
The highest correlation was found between Becskei's method and a variant of the "in vivo labelling" method. In most cases, both methods classified the same RNAs as stable or unstable, even if the average half-lives differed. The team would now like to investigate in which areas this labelling variant provides the right results and can be reliably deployed.
|Debugging with GDB|
The gdb command file lets you choose which program to debug.
(gdbslet) file prog
gdb then attempts to read the symbol table of prog. gdb locates the file by searching the directories listed in the command search path. If the file was compiled with debug information (option ‘-g’), source files will be searched as well. gdb locates the source files by searching the directories listed in the directory search path (see Your Program's Environment). If it fails to find a file, it displays a message such as:
prog: No such file or directory.
When this happens, add the appropriate directories to the search paths with the gdb command dir, and execute the target command again.
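As an illustration (the program name, directory, and exact output are hypothetical, not taken from a real session), recovering from a missing file might look like this:

```
(gdbslet) file prog
prog: No such file or directory.
(gdbslet) dir /home/user/src
Source directories searched: /home/user/src:$cdir:$cwd
(gdbslet) file prog
```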
Comedy maestro Bill Bailey has a song about zebras, in which he casts their black and white stripes as a message of racial harmony. (“In a world of confusion/ We all need a sign/ If only we could live side by side/ Like the stripes down a zebra’s spine.”) Which, honestly, makes about as much sense as the most commonly cited hypothesis about zebra stripes—that they're a form of camouflage. “The zebra is conspicuously striped,” wrote Darwin, on one of his less insightful days. The idea that its black-and-white coat might help it blend in rather than, say, stand out seems preposterous, but there are two ways in which this could work. First, the black stripes could match dark tree trunks while the white ones match shafts of light between the trunks. Alternatively, the stripes break up the zebra's outline, making it harder to identify as a juicy piece of horse-shaped steak. Both ideas have been around for a while, but neither has been tested well. The problem is that we've always looked at zebras through the wrong eyes—ours. Human eyes are exceptionally good at resolving detail in daylight, so “we have a very odd appreciation of the coat of a zebra,” says Tim Caro from the University of California, Davis. By contrast, their main adversaries—lions and hyenas—have eyes with poorer resolution, but greater sensitivity at dawn, dusk, and darkness. So Caro, together with Amanda Melin from the University of Calgary worked out what zebras look like to these predators. The team measured stripe widths from different body parts on all three zebra species, and used published data to estimate the acuity of lion and hyena eyes. They then calculated how good those predators are at resolving zebra stripes at different distances and light levels. They found that in daylight, humans with 20/20 vision can resolve zebra flank stripes from around 180 meters away. By contrast, lions can only do so at 80 meters, and hyenas at 48 meters. 
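The distance comparison follows from simple angular geometry: an eye with an acuity of A cycles per degree can resolve a stripe pattern only while one stripe cycle still subtends at least 1/A degrees. A minimal sketch in Python; the stripe period and the acuity values are illustrative assumptions chosen to reproduce the article's numbers, not measured data:

```python
import math

def resolve_distance(stripe_period_m, acuity_cpd):
    """Farthest distance (m) at which stripes of a given period can be
    resolved by an eye with the given acuity (cycles per degree):
    one stripe cycle must subtend at least 1/acuity degrees."""
    min_angle_rad = math.radians(1.0 / acuity_cpd)
    return stripe_period_m / min_angle_rad

# Assumed values, chosen only to illustrate the scaling in the article
period = 0.05  # metres per black+white stripe pair (hypothetical)
for species, acuity in [("human", 63), ("lion", 28), ("hyena", 17)]:
    print(f"{species}: ~{resolve_distance(period, acuity):.0f} m")
```

Because the distance scales linearly with acuity, a predator with a fraction of human acuity loses the stripes at a correspondingly small fraction of the human distance, which is the core of Caro and Melin's argument.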
Those values get much worse for Grevy's zebra (the species with the thinnest stripes), for leg stripes (which are also thinner), and at darker times of day. At dawn and dusk, lions and hyenas can only resolve zebra stripes at 46 meters and 26 meters respectively. “At most distances, the zebras are going to look to a lion like a gray waterbuck,” says Caro. “Those stripes are going to fuse together and be indistinguishable.” That rules out both the blends-among-trees idea and the breaks-up-outline one—neither can possibly be true if the predators can't see the stripes. “If the stripes are doing something exciting, they’ll be doing it close up, by which point the predators have probably realized the zebra is there, because they can smell or hear it,” says Caro. Zebras, being very noisy browsers, are hardly stealthy. “It’s the first proper test of a very longstanding and prominent idea,” says Martin Stevens from the University of Exeter, who studies camouflage. Its only flaw is that the team didn't specifically measure how closely a zebra matches its background environment, in either color or brightness. Still, “I very much doubt zebra stripes do work in concealment,” adds Stevens. So, if not camouflage, then what? Caro, who has been studying zebras for a decade and has written a forthcoming book about their stripes, thinks he knows the answer. “I’ve come to the conclusion that really, it just has to be biting flies,” he says. In Africa, horses are plagued by horseflies and tsetse flies. Collectively, these insects can drink up to half a liter of blood a day, and they spread deadly diseases like sleeping sickness, equine influenza, and African horse sickness. And for some reason, these blood-sucking insects don't like to land on black-and-white stripes. Gabor Horvath from Eotvos University showed this through several experiments, including one where his team stuck several painted horse models in a fly-infested field. Caro found more evidence to support this idea.
First, he mapped the geographic ranges of all seven species of wild horse—the three striped zebras, the African wild ass with thin stripes on its legs, and the uniformly colored Asiatic wild ass, Przewalski's horse, and kiang. Then, he compared these ranges to other factors, including lion and hyena distributions, habitat, temperature, herd size, and the presence of biting flies. “Every time we ran the analysis, nothing showed up except biting flies, particularly the tabanids—horseflies and deerflies,” says Caro. “You don’t find striping where there’s lots of trees or hyenas, but you do find it where there are lots of biting flies.” Why do stripes deter flies? Perhaps they create some kind of optical illusion. “We don't know whether the flies don't see a black-and-white striped animal, or misinterpret it as something else,” says Caro. “But the bottom line is that they don’t land.” Okay, but flies are surely a nuisance to all kinds of savannah animals. Why have zebras gone to such evolutionary lengths to avoid fly attacks while other savannah animals have not? Put it another way: If the fly hypothesis is right, why aren't impalas or wildebeest striped? Caro says that zebras have much shorter coats than other hoofed mammals that they live with, making them especially vulnerable to the probing snouts of flies. He also suspects that they and other horses are uniquely susceptible to the diseases carried by the flies. Domestic horses are very difficult to keep in parts of Central Africa for precisely this reason. Case closed? Hold your horses. Brenda Larison from the University of California, Los Angeles says that Caro's team don't have any actual data for the numbers of biting flies; they just estimated those numbers based on temperature and humidity. Also, the experiments from Horvath and others “haven't been done using realistic targets or under well-controlled conditions,” she says. 
It's unclear how stripes would affect the attractiveness of live animals, which also give off heat and odors. In her own study, Larison studied the variations in the plains zebra's stripes—in terms of number, thickness, and definition—and found they correlate most strongly with temperature. She reasons that black bands heat up faster than white ones, and so alternating rows would create circulating currents of air that cool the animals down. “This is the one hypothesis that has been completely ignored by researchers,” she says. Caro is dubious. “There isn't good evidence in the physics literature that you get these convection currents from black and white stripes,” he says. "And those currents wouldn't cool the animal in windy conditions, or when it's moving.” Larison counters that plains zebras “stand still during the hottest part of the day” and when they walk, they do so slowly. “Zebras spend almost two hours more per day in the sun grazing than solid-colored ruminants,” she notes. “What allows them to do so?” Both of them could be right. There's no reason to think that only one factor drove the evolution of zebra stripes. Temperature and biting flies might both have played a part; the stripes might also dazzle predators at close range when moving, or provide some other weird benefit. The only way to tell is to do more work—on convection currents, on fly eyes, on fly diseases, and more. As Dr. Seuss said, “There’s no end to the things you might know, depending how far beyond Zebra you go!” We want to hear what you think. Submit a letter to the editor or write to firstname.lastname@example.org.
Andricus rhyzomae (Hartig, 1843) on Quercus, agamous generation galls (from Houard, 1908a) Quercus robur, Elspeet, landgoed Staverden © Hans Jonkman: young, still fleshy gall Quercus robur, De Schipborg, Aa en Hunze © Arnold Grosscurt: two tiny galls
Galls, mostly several together, on the bark, mainly near the base of a trunk. Fresh galls are semi-globular, 2-5 mm high, and have a fleshy red outer layer. After this has been eroded away, a low, woody, truncate cone remains, with a radial striation on its base and a large exit opening at its tip. Quercus petraea, pubescens, robur.
For over half a century it has been assumed that A. rhyzomae is the agamous generation of Andricus testaceipes, but experimental proof is still awaited. Should the synonymy be proven, testaceipes, as the oldest name, would be the valid one. Similarly, it is broadly assumed that Andricus testaceipes var. nodifex is the sexual generation of rhyzomae; here too experimental confirmation is awaited.
Andricus rhizomae (Schenck, 1863). Buhr (1965a), Cerasa (2015a), Dauphin & Aniotsbehere (1997a), Eady & Quinlan (1963a), Hellrigl (2009a), Hellrigl & Bodur (2015a), Houard (1908a), Kwast (2014a), Meika (2006a), Melika, Csóka & Pujade-Villar (2000a), Redfern & Shirley (2011a), Roskam (2009a), Tomasi (2014a), Williams (2010a).
Environmental Epigenetics Affects Disease, Evolution News Aug 05, 2015 The researchers' assertion is a dramatic shift in how we might think of disease and evolution’s underlying biology and “changes how we think about where things come from,” said Michael Skinner, founding director of the Center for Reproductive Biology in WSU’s School of Biological Sciences. “The ability of environmental factors to promote epigenetic inheritance that subsequently promotes genetic mutations is a significant advance in our understanding of how the environment impacts disease and evolution,” they write. Skinner is a pioneer in the field of epigenetics, which looks at changes in how genetic information is passed between generations even when the DNA itself remains unchanged. Earlier work by Skinner found epigenetic effects from a host of environmental toxicants, connecting plastics, pesticides, fungicides, dioxin and hydrocarbons to diseases and abnormalities as many as three generations later. His recent study exposed gestating female rats to the fungicide vinclozolin. Sperm in the first generation of male offspring showed epimutations, or alterations in the methyl groups that stick to DNA and affect its activation. The third generation, or great-grand offspring, had increased genetic mutations, which the researchers saw as increased DNA structure changes known as copy-number variations. Multiple generations of control animals had no such variations. This, said Skinner, suggests that the environment has a more important role in mutations, disease and evolution than previously appreciated, and appears to be one of the main drivers of intergenerational changes, not simply a passive component. In short, Skinner and his colleagues say, the environment and epigenetics can drive genetics. “There’s not a type of genetic mutation known that’s not potentially influenced by environmental epigenetic effects,” Skinner said.
Atmospheric Circulation and the Mixing Zone. (by Diego Fdez-Sevilla, PhD.) Available also at ResearchGate. Independent Research · DOI: 10.13140/RG.2.2.34019.04645
Back in July 2015, wildfires in Canada made the news due to their proportions. According to the Canadian Interagency Forest Fire Centre, more than 4,500 individual fires were observed in the first half of July, and a total of 2.7 million acres went up in smoke. Many points of view could be applied to enumerate the implications of their devastating impact on communities and on the biodiversity of the areas where they occur. But furthermore, the smoke released can be used to study visually the behaviour of gases and aerosols in our atmosphere through the streamlines generated. Through my research I have kept a close watch on the behaviour of the Polar Jet Stream as the key feature pointing to the mechanisms and forces driving atmospheric circulation in the Northern Hemisphere. I used the visible trail drawn by the vast amounts of smoke released into the atmosphere by the wildfires to observe the behaviour of the gases and particles and their interaction with atmospheric currents. “As air flows through the atmosphere, atmospheric waves are often generated, describing the dynamic behaviour of a fluid. Accordingly, those patterns are quite similar to ripples in a pond or eddies in a river. Furthermore, whether the flow is laminar (like straight hair) or turbulent (curly hair) can also give us indirect information about the thermal conditions (stability, etc.) and the homogeneity in composition (water vapour…) of the air. And changes in those visual patterns usually help to locate “factors” generating weather events (e.g. weather fronts, cyclonic circulation and adiabatic processes).
Those can be when two or more different (thermally, or in water content, dry/humid) masses of air get in contact, or even when orographic features interfere with tropospheric circulation, inducing changes in those air masses due to e.g. changes in pressure and thermal gradients with altitude. Some of those waves are visible thanks to the condensation of water, but most often those waves are invisible. However, similarly to experiments in a wind tunnel aimed at observing aerodynamic behaviours (the image on top was taken from my PhD, 2007), here the smoke from fires allows us to see patterns in atmospheric circulation which we might not see otherwise…” “… On the 10th of July, the same day that the previous satellite images caught the smoke from Canadian wildfires in atmospheric circulation, I was looking at the state of the overheated atmosphere in Southern Europe. Among other resources I was looking at the real-time images being broadcast by the International Space Station (there is a link and video streaming on the right side of this blog). During its flight over the North Atlantic (from west to east) the ISS had its rear-facing camera operational. The next image shows the state of the atmosphere over the northwest Atlantic, where we can see the grey trail of smoke spreading in a wide whirlpool in stratospheric circulation. (link Here)” In May 2016, wildfires in Canada made the news, again… On May 5, 2016 at 0956 UTC (5:56 a.m. EDT), the Visible Infrared Imaging Radiometer Suite (VIIRS) on Suomi-NPP acquired a night-time image of the Fort McMurray wildfire by using its “day-night band” to sense the fire in the visible portion of the spectrum. In the image, the brightest parts of the fire appear white while smoke appears light gray. (NASA’s blog) On the 10th of May 2016, NASA’s Aqua satellite captured this image of the clouds over Canada. Entwined within the clouds is the smoke billowing up from the wildfires that are currently burning across a large expanse of the country.
The smoke has become entrained within the clouds, causing it to twist within the circular motion of the clouds and wind. This image was taken by the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on the Aqua satellite on May 9, 2016. By the 11th of May 2016, the huge forest fire that prompted the evacuation of Fort McMurray had burned more than 884 square miles of land and destroyed some 2,400 homes and other buildings. An image from space showing the smoke plume from the forest fire that engulfed a Canadian city and forced thousands to evacuate shows just how staggering the impact of the blaze has been. Jeff Williams, a NASA astronaut on the International Space Station (ISS), shared the image on his Twitter feed with the caption: “Canadian smoke plume.” On the same day, 11th May 2016, I was looking at the images being broadcast by the ISS. Atmospheric Circulation and the Mixing Zone. (from previous publication) “I believe that the previous images offer a vivid illustration of how easily plumes of air generated over land, containing a diverse composition of gases and aerosols (in this case COx and ashes from the combustion of wood), get introduced high into atmospheric circulation. At present, the relevance of such an assessment is a matter of open discussion. Some points of view defend that the atmosphere can take almost everything that activities over land might generate without the patterns followed by global atmospheric circulation suffering any major alteration over time. Personally, based on my limited knowledge, I do not see that as an accurate point of view.” The instability in the configuration of the Polar Jet Stream is showing itself to be more permanent and stronger than any variations in the ENSO or any other atmospheric indices applied to date.
While the ENSO, NAO, PDO, etc. keep oscillating over time, the Polar Jet Stream has been wobbling year after year for a period which has not yet been characterised, even though I believe the available data already shows its impact on Northern Hemisphere ecosystems. The results obtained from my research strongly suggest that the energy being captured by GHGs in the atmosphere would allow equatorial and mid-latitude circulation to expand over the Polar circulation in an intrusive way. Such intrusions would increase the energy pool of the Polar masses of air, weakening the thermal contrast which generates the Polar Jet Stream. But at the same time, an increase in the amount of energy being captured by GHGs would induce different types of developments. I have discussed most of them in previous publications, but some of those implicated in recent events could be the following:
- An increase in temperature would allow an expansion of the volume occupied by masses of air between polar jet streams, extending the influence of Hadley circulation and wearing down the thermal contrast between Polar and mid-latitudinal circulation.
- An increase of temperature due to GHGs would also allow the atmosphere to carry more water vapour. Water can absorb more energy than other greenhouse gases, with the particularity of having a shorter residence time in the atmosphere. This implies that water has the property of absorbing, carrying and releasing energy between locations, and it therefore becomes part of a mechanism transferring energy across the whole globe.
- An increase in the energy pool available in the atmosphere will induce changes in the interaction between phenomena whose origin, persistence and stability rely on contrast with their surroundings. Thus, persistent features will become more frequent, since releasing their energy into the surroundings, and thereby dissipating, will become more difficult.
But also, the moment conditions allow the atmosphere to discharge its cargo, the amount of energy ready to be liberated could easily exceed any expectations. Already we can see some examples of such potential in blocking patterns and persistent cyclones moving through latitudes and longitudes. In the most recent weeks, we have been able to observe how three low pressures positioned themselves in front of the Iberian Peninsula and remained stationary for days. Continuity from previous research: The turbulent motion of the Polar Jet Stream has increased its mixing velocity with the masses of air interacting with it. This situation has mostly diluted the plume of smoke generated in Canada over the previous days, and yet, on the 21st of May 2016, smoke from western wildfires could easily be seen across the Atlantic Ocean. From a previous publication: The incorporation of gases and aerosols into atmospheric circulation at equatorial and mid-latitudes (wildfires/industry) enhances the thermal conductivity of the atmosphere. Such an effect increases the capacity of the atmosphere to carry energy, mostly identified in the form of temperature (but not only). An elevation of temperature in the atmosphere increases the capacity of the atmosphere to absorb and transport a strong natural GHG: water in its gaseous form. The process of evaporating water captures energy within water molecules. This energy gets incorporated into the energetic pool of the atmosphere as thermal energy (latent heat), with the mass of water molecules also increasing the gravitational energy available. Differences in the thermal energy carried by masses of air generate winds and, ultimately, it is such contrast that provides the fuel for the kinetic energy generating jet currents like the Polar Jet Stream.
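The latent-heat point can be made concrete: evaporating water absorbs roughly 2.26 MJ per kilogram, which is released again wherever the vapour condenses. A minimal sketch in Python; the rainfall figures are hypothetical round numbers chosen for illustration, not measurements:

```python
LATENT_HEAT_VAPORIZATION = 2.26e6  # J per kg of water, approximate

def latent_energy_joules(water_kg):
    """Energy absorbed on evaporation and released on condensation."""
    return water_kg * LATENT_HEAT_VAPORIZATION

# Hypothetical example: 10 mm of rain falling over 1 km^2 is 1e7 kg of water
rain_mass_kg = 0.010 * 1e6 * 1000  # depth (m) * area (m^2) * density (kg/m^3)
print(f"{latent_energy_joules(rain_mass_kg):.2e} J")
```

Even this modest, hypothetical rainfall corresponds to around 2 x 10^13 J of latent heat moved from wherever the water evaporated to wherever it condensed, which is the transfer mechanism the paragraph above describes.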
My approach, data, observations and analyses indicate that the weakening of the North Polar Jet Stream is not due to processes of early snow precipitation, as proposed by the theory of Arctic Amplification (nor to albedo or Arctic SST). Furthermore, my analyses point in the opposite direction: rather than the Arctic being the driver delivering “influences on mid-latitude weather and extreme events”, the Arctic is absorbing the impact of mid-latitude weather, showing anomalies in temperature (latitude and altitude, SSW) and liquid precipitation through the winter, due to the wearing effect exerted by warmer mid-latitudes pushing towards the Poles. (update 28 March 2018. Instead of “Arctic forcing on mid-latitudes”, my analyses suggest “convective forcing” (D. Fdez-Sevilla Feb 2018) as the driver behind Arctic dynamics: convective forcing originated by energy contained at mid-latitudes, carried by water vapour as latent heat, and introduced as “mid-latitude forcing” on Arctic circulation. This will be replicated over Antarctica as time moves forward and the process increases in strength. end update 28 March 2018) Considering atmospheric and terrestrial interactions composed of biotic and non-biotic components and processes, and based on the developments pointed out through my research, those synergistic interactions have the potential to develop patterns in environmental evolution which will be sustained only temporarily, in a period of transition. Since the Arctic has the lowest energy pool of the entire Northern Hemisphere, and any process of amplification requires an investment of energy beyond the energy being received, the conclusions delivered by my line of research dismiss any type of amplification in the Arctic circulation.
Moreover, following the arguments applied in previous discussions of atmospheric dynamics and ENSO circulation, the amount of energy being absorbed at the Arctic would have an indirect effect on the conditions at the most energised part of the atmosphere, the Equator. There, at the Equator, is where we might see in the near future energy amplifying processes yet considered too mild to be relevant. Some other current atmospheric events seem to support previous assessments. The location of the highs over the Pacific and the Atlantic is the same as in previous years, and two low pressures have been hovering over the last month over the Atlantic and in front of the Iberian Peninsula, with enough energy to make them resilient in time and active for days without dissipating their energy or being displaced to the East. By the 26th of May 2016, the number of low pressures hovering over the Atlantic had increased, with a new low system located in the same position in front of the Iberian Peninsula as the previous ones. And if we look at the start of the seasonal transition out of winter 2016, we can count the first low pressure moving through latitudes in the middle North Atlantic in January, with the storm called Alex. Furthermore, contrasts of temperature continue to be associated with the behaviour of the weak Polar Jet Stream, in agreement with previous assessments which point to seasonal transitions driven by the frequency of isolated masses of air crossing in latitude between mid-latitude and polar circulation.
Understanding the ways in which mid-latitudinal circulation affects the Polar circulation will help us to understand other meteorological phenomena evolving through the whole hemisphere, including the development of T-storms and tornadoes over North America (image showing conditions on 23rd May 2016), the development of cold bursts out of season, heat waves like those occurring in India in May 2015 and the present May 2016, or the heat waves moving northwards across Europe in 2015 (see following image, charts by Giulio Betti). I believe it will also help to understand the bipolarity seen between Arctic and Antarctic developments. Within the limitations (and freedom) of performing my line of research independently of any institution and single-handed, the level of confidence that I have in my assessments is described by the following chart: Timing the pace at which our environment evolves is crucial in order to adopt measures to adapt to and mitigate possible synergistic interactions between human activities and threats arising from environmental changes. The dynamics shown in the integration of the smoke originated by wildfires into atmospheric circulation indicate how fast the mixing zone absorbs everything being released into the atmosphere. We can only expect that variations in the composition of the atmosphere will also trigger variations in its behaviour and in the pace at which processes develop; in other words, variations in speed and maybe even directionality. And all of that in conjunction with other factors such as albedo, SST and the interaction of biological processes. If you want to know more about my assessments of those interactions, follow the links through the present text and explore the rest of the publications and categories described in the top menu of this blog. The aim of publishing my work openly is to expose it to open review.
So any constructive feedback is welcome. After a period of at least a month from the publishing date on this blog and at LinkedIn, if no comments are found discussing the value of the piece published, I then publish it at ResearchGate, generating a DOI for later reference. In order to protect my intellectual rights against plagiarism and misappropriation, more in-depth assessment, and more statistical and numerical analyses that I have performed to support my arguments, can be discussed by direct contact at my email: d.fdezsevilla(at)gmail.com The performance of my work as an independent researcher is limited by my access to resources and economic stability. So far, what I have published in this blog is what I have been able to offer within those limitations. The workload behind the production of the analyses offered in this blog represents a combination of resources, the most expensive being time. Time to keep updated with current developments; time contrasting the veracity of information being released and the discussion generated around it; time performing research, whether statistical, conceptual or observational, on general and specific issues; time exploring possible connections between disciplines; time validating the continuity over time of previous assessments and conclusions published here; time looking for data suitable to perform analyses; time to perform a multidisciplinary set of analyses and to manage the data sets obtained, whether numerical data, imagery or literature; time to generate visual aids in the form of images, animations or schemes expressing conceptual arguments relevant to the analysis and conclusions discussed; time to write down ideas, thoughts and discussions. But the type of time which is the most costly, valuable and difficult to obtain without support is “quality time”.
Time that allows focusing attention on specific topics without the distraction of being constantly reminded that “time” is actually counting down the days left until you run out of money. If you value the work that I present here, you can offer your support in two ways, not exclusive of each other. You can share your thoughts publicly and openly so people can be aware of my work. That would increase the chances of my activity reaching people willing to offer economic support and sponsorship. But you can also get involved in my research and become a sponsor, supporting my work with donations. Because in order to keep going and to improve my work, somebody will have to invest in me. Why not you? Since October 2013 I have been studying the behaviour of the Polar Jet Stream and the associated weather events, as well as the implications for atmospheric dynamics and environmental synergies. Many of the atmospheric configurations and weather and climate events we see these days are very similar to the progression followed since 2013. Please take a look at posts addressing those events in previous publications in this blog, or look at the categories in the top menu. Also at ResearchGate. Feedback is always welcome, either in this blog or at my email (d.fdezsevilla(at)gmail.com). All my work is part of my Intellectual Portfolio, registered under a Creative Commons Attribution-NonCommercial 4.0 International License and the WordPress.com license, and it is being implemented in my profile at ResearchGate. I will fight for its recognition in case of misuse.
- New theory proposal to assess possible changes in Atmospheric Circulation (by Diego Fdez-Sevilla) Posted on October 21, 2014. http://wp.me/p403AM-k3
- Why there is no need for the Polar Vortex to break in order to have a wobbling Jet Stream and polar weather? (by Diego Fdez-Sevilla) Posted on November 14, 2014. http://wp.me/p403AM-mt
- State of the Polar Vortex. Broken?
From 29 Nov 2014 to 5th Jan 2015 (by Diego Fdez-Sevilla). Posted on November 29, 2014. http://wp.me/p403AM-o7 - Gathering data to make visible the invisible (by Diego Fdez-Sevilla) Posted on December 22, 2014. http://wp.me/p403AM-pN - Probability in the atmospheric circulation dictating the Weather (by Diego Fdez-Sevilla) Posted on January 15, 2015. http://wp.me/p403AM-rm - Meteorological Outlook Feb 2015 (by Diego Fdez-Sevilla) Posted on February 7, 2015. http://wp.me/p403AM-sU - Revisiting the theory of “Facing a decrease in the differential gradients of energy in atmospheric circulation” by Diego Fdez-Sevilla. Posted on February 10, 2015. http://wp.me/p403AM-to - Drops of Weather. (by Diego Fdez-Sevilla)March 7, 2015 - Steering climate´s course (by Diego Fdez-Sevilla)March 27, 2015 - Climate. Looking at the forest for the trees (by Diego Fdez-Sevilla)April 9, 2015 - Matching Features Between Land Surface and Atmospheric Circulation (by Diego Fdez-Sevilla)April 23, 2015 - Domesticating Nature. (by Diego Fdez-Sevilla)May 7, 2015 - A roller-coaster of temperatures in South Europe. Spain (by Diego Fdez-Sevilla) May 14, 2015 - Talking about climate (by Diego Fdez-Sevilla)May 12, 2015 - News from an Ecosystem (by Diego Fdez-Sevilla)May 20, 2015 - In climate it is becoming Less probable to not have a High probability. (by Diego Fdez-Sevilla)May 29, 2015 - Drinking from the source (by Diego Fdez-Sevilla)June 5, 2015 - Communication takes more than just publishing thoughts. (by Diego Fdez-Sevilla)June 9, 2015 - Extreme climatic events, implications for projections of species distributions and ecosystem structure (by Diego Fdez-Sevilla)June 18, 2015 - The scope of Environmental Science and scientific thought. From Thought-driven to Data-driven, from Critical Thinking to Data Management. (by Diego Fdez-Sevilla)June 26, 2015 - Atmospheric Circulation and Climate Drift. Are we there yet? (by Diego Fdez-Sevilla)July 2, 2015 - Lateral thinking. 
From Micro to Macro (by Diego Fdez-Sevilla)July 4, 2015 - Something for the curious minds. Climate and Streamlines (by Diego Fdez-Sevilla)July 17, 2015 - Solar Activity and Human Activity, Settling Their Environmental Liability. (by Diego Fdez-Sevilla)July 24, 2015 - Atmospheric composition and thermal conductivity? (by Diego Fdez-Sevilla)August 6, 2015 - Latitudinal barriers and typhoons (by Diego Fdez-Sevilla)August 13, 2015 - The Earth is Ticking (by Diego Fdez-Sevilla)August 20, 2015 - What if, the relevant bit lies hidden on identifying the pattern behind similarities instead of trying to match anomalies? (by Diego Fdez-Sevilla)September 3, 2015 - A Climate “Between Waters” (by Diego Fdez-Sevilla).September 8, 2015 - Sensing Atmospheric Dynamics (by Diego Fdez-Sevilla)September 22, 2015 - InFormAtion. The “Act” of “Giving Form” to “Knowledge” (by Diego fdez-Sevilla)September 30, 2015 - Arctic Intake of Water Vapour (by Diego Fdez-Sevilla)October 7, 2015 - SST Anomalies and Heat Waves. Are They Not All Just Heat Displacements? (by Diego Fdez-Sevilla)October 16, 2015 - Discussing Climatic Teleconnections. Follow Up On My Previous Research (by Diego Fdez-Sevilla)October 21, 2015 - Follow-up on Arctic circulation 30 Oct 2015 ( by Diego Fdez-Sevilla) October 30, 2015 - There is Ice or Frost In Antarctica? (by Diego Fdez-Sevilla) November 5, 2015 - Starts Raining Drops of Winter at Mid-Latitudes. The new Autumn? (by Diego Fdez-Sevilla) November 10, 2015 - Press release. Ask NASA (by Diego Fdez-Sevilla) November 12, 2015 - Following the Behaviour of the Jet Stream (by Diego Fdez-Sevilla) November 19, 2015 - What Is Wrong With The Concept “Bio”? (by Diego Fdez-Sevilla) November 26, 2015 - Energy. Looking For Sources of Something We Waste. (by Diego Fdez-Sevilla) December 3, 2015 - SOILS. The Skeleton Holding The Muscle On Our Ecosystems (by Diego Fdez-Sevilla) December 9, 2015 - Could It Be El Niño The New “Wolf” Coming? 
(by Diego Fdez-Sevilla) December 11, 2015 - Climate and weather December 2015. Another Polar Vortex another Heat Wave? (by Diego Fdez-Sevilla) December 18, 2015 - New insides on old concepts (by Diego Fdez-Sevilla) December 23, 2015 - Atmospheric Dynamics And Shapes (by Diego Fdez-Sevilla) January 13, 2016 - European weather. Old News, Same News? by Diego Fdez-Sevilla January 15, 2016 - Observational events on atmospheric dynamics. A follow-up assessment over the theory proposed over Energetic gradients by Diego Fdez-Sevilla. January 29, 2016 - North American Weather. Old News, Same News? (by Diego Fdez-Sevilla) January 20, 2016 - Observed Atmospheric Dynamics. A follow-up assessment over the theory proposed on Energetic gradients by Diego Fdez-Sevilla. January 29, 2016 - (updated 11-18 Feb2016) Polar Vortex, Old News, Same News? (by Diego Fdez-Sevilla) February 4, 2016 - Forecasting Past Events. Snow Coming to Spain (by Diego Fdez-Sevilla) February 12, 2016 - Do You Believe in the Value of Your Work? (by Diego Fdez-Sevilla) February 23, 2016 - Forecasts For Ecosystems (by Diego Fdez-Sevilla) February 25, 2016 - Seasonality Spring 2016. Continuous follow-up on my previous research assessing atmospheric dynamics. (by Diego Fdez-Sevilla) March 3, 2016 - Tangled in Words. Atmospheric Dynamics, Stefan Boltzmann Calculations and Energy Balance (by Diego Fdez-Sevilla) March 10, 2016 - Pacific atmospheric dynamics with and without a positive ENSO (by Diego Fdez-Sevilla) March 22, 2016 - Plant growth, CO2, Soil and Nutrients. (by Diego Fdez-Sevilla) March 31, 2016 - Atmospheric Dynamics, GHG’s, Thermal Conductivity and Polar Jet Stream (by Diego Fdez-Sevilla) April 6, 2016 - Feedback. Have Your Say. (by Diego Fdez-Sevilla) April 14, 2016 - Plant an Idea and Then a Tree… But Which Ones? 
(by Diego Fdez-Sevilla) April 22, 2016 - (updated 28/April/2016) Severe weather warning 27 April 2016 USA (by Diego Fdez-Sevilla) April 28, 2016 - Research Update May 2016 (by Diego Fdez-sevilla) May 6, 2016 - Scientifically Challenged (by Diego Fdez-Sevilla) May 12, 2016 - Another roller-coaster of temperatures in South Europe. Spain (by Diego Fdez-Sevilla) May 13, 2015 May 13, 2016 - Our Environment. One Vision and Many Thoughts. (by Diego Fdez-Sevilla) May 20, 2016 - Atmospheric Circulation and the Mixing Zone. (by Diego Fdez-Sevilla) May 26, 2016
Scientists at the American Museum of Natural History, Cold Spring Harbor Laboratory, The New York Botanical Garden, and New York University have created the largest genome-based tree of life for seed plants to date. [Figure: a phylogenomic reconstruction of the evolutionary diversification of seed plants. Credit: E.K. Lee et al.] Their findings, published today in the journal PLoS Genetics, plot the evolutionary relationships of 150 different species of plants based on advanced genome-wide analysis of gene structure and function. This new approach, called "functional phylogenomics," allows scientists to reconstruct the pattern of events that led to the vast number of plant species and could help identify genes used to improve seed quality for agriculture. "Ever since Darwin first described the 'abominable mystery' behind the rapid explosion of flowering plants in the fossil record, evolutionary biologists have been trying to understand the genetic and genomic basis of the astounding diversity of plant species," said Rob DeSalle, a corresponding author on the paper and a curator in the Museum's Division of Invertebrate Zoology who conducts research at the Sackler Institute for Comparative Genomics. "Having the architecture of this plant tree of life allows us to start to decipher some of the interesting aspects of evolutionary innovations that have occurred in this group." The research, performed by members of the New York Plant Genomics Consortium, was funded by the National Science Foundation (NSF) Plant Genome Program to identify the genes that caused the evolution of seeds, a trait of important economic interest. The group selected 150 representative species from all of the major seed plant groups to include in the study. The species span from the flowering variety—peanuts and dandelions, for example—to non-flowering cone plants like spruce and pine. 
The sequences of the plants' genomes—all of the biological information needed to build and maintain an organism, encoded in DNA—were either culled from pre-existing databases or generated, in the field and at The New York Botanical Garden in the Bronx, from live specimens. With new algorithms developed at the Museum and NYU and the processing power of supercomputers at Cold Spring Harbor Laboratory and overseas, the sequences—nearly 23,000 sets of genes (specific sections of DNA that code for certain proteins)—were grouped, ordered, and organized in a tree according to their evolutionary relationships. Algorithms that determine similarities of biological processes were used to identify the genes underlying species diversity. "Previously, phylogenetic trees were constructed from standard sets of genes and were used to identify the relationships of species," said Gloria Coruzzi, a professor in New York University's Center for Genomics and Systems Biology and the principal investigator of the NSF grant. "In our novel approach, we create the phylogeny based on all the genes in a genome, and then use the phylogeny to identify which genes provide positive support for the divergence of species." The results support major hypotheses about evolutionary relationships in seed plants. The most interesting finding is that gnetophytes, a group that consists mostly of shrubs and woody vines, are the most primitive living non-flowering seed plants—present since the late Mesozoic era, the "age of dinosaurs." They are situated at the base of the evolutionary tree of seed plants. "This study resolves the long-standing problem of producing an unequivocal evolutionary tree of the seed plants," said Dennis Stevenson, vice president for laboratory research at The New York Botanical Garden. "We can then use this information to determine when and where important adaptations occur and how they relate to plant diversification. 
We also can examine the evolution of such features as drought tolerance, disease resistance, or crop yields that sustain human life through improved agriculture." In addition, the researchers were able to make predictions about genes that caused the evolution of important plant characteristics. One such evolutionary signal is RNA interference, a process that cells use to turn down or silence the activity of specific genes. Based on their new phylogenomic maps, the researchers believe that RNA interference played a large role in the separation of monocots—plants that have a single seed leaf, including orchids, rice, and sugar cane—from other flowering plants. Even more surprising, RNA interference also played a major role in the emergence of flowering plants themselves. "Genes required for the production of small RNA in seeds were at the very top of the list of genes responsible for the evolution of flowering plants from cone plants," said Rob Martienssen, a professor at Cold Spring Harbor Laboratory. "In collaboration with colleagues from LANGEBIO [Laboratorio Nacional de Genomica para la Biodiversidad] in Mexico last year, we found that these same genes control maternal reproduction, providing remarkable insight into the evolution of reproductive strategy in flowering plants." The data and software resources generated by the researchers are publicly available and will allow other comparative genomic researchers to exploit plant diversity to identify genes associated with a trait of interest or agronomic value. These studies could have implications for improving the quality of seeds and, in turn, agricultural products ranging from food to clothing. In addition, the phylogenomic approach used in this study could be applied to other groups of organisms to further explore how species originated, expanded, and diversified. 
"The collaboration among the institutions involved here is a great example of how modern science works," said Sergios-Orestis Kolokotronis, a term assistant professor at Columbia University's Barnard College and a research associate at the Museum's Sackler Institute. "Each of the four institutions involved has its own strengths and these strengths were nicely interwoven to produce a novel vision of plant evolution." Other authors include Ernest Lee, American Museum of Natural History; Angelica Cibrian-Jaramillo, American Museum of Natural History, The New York Botanical Garden, and New York University – currently at the Laboratorio Nacional de Genomica para la Biodiversidad, Mexico; Manpreet Katari, New York University; Alexandros Stamatakis, Technical University Munich – currently at Heidelberg Institute for Theoretical Studies; Michael Ott, Technical University Munich; Joanna Chiu, University of California, Davis; Damon Little, The New York Botanical Garden; and W. Richard McCombie, Cold Spring Harbor Laboratory. Kendra Snyder | EurekAlert!
The TRMM satellite had an excellent view of tropical storm Bud on May 22, 2012 at 2243 UTC (6:43 p.m. EDT/2:43 p.m. PDT). TRMM's Microwave Imager (TMI) and Precipitation Radar (PR) data shows that Bud contained bands of very heavy rainfall near the center of circulation. TRMM revealed that some of these intense storms were dropping rainfall at a rate greater than 50 mm/hr (~2 inches). [Image: The TRMM satellite passed over Bud on May 22, 2012, at 6:43 p.m. EDT. This 3-D image from TRMM shows that some of the strong convective towers near Bud's center were taller than 15 km (~9.3 miles). Credit: NASA/TRMM, Hal Pierce] A 3-D image from TRMM's PR shows that some of the strong convective towers near Bud's center were taller than 15 km (~9.3 miles). TRMM PR found reflectivity values of over 58.05 dBZ, indicating that very heavy rainfall was occurring. On May 23, at 1500 UTC (8 a.m. PDT) Tropical Storm Bud's maximum sustained winds were near 65 mph (100 kph). It was located near latitude 13.4 North and longitude 107.6 West, about 445 miles (715 km) south-southwest of Manzanillo, Mexico. Bud is headed northwest near 9 mph (15 kph) and is expected to slow down, according to the National Hurricane Center (NHC). The NHC also forecasts that Bud will slow and turn to the north-northeast by Friday, May 25. NHC stated that Bud could become a hurricane later today (May 23) or tonight. Rob Gutro | EurekAlert!
People have them, cats have them and whales have some, too. Neurons, those interlinked nerve cells that carry sensations including pain, stretch from our spinal cords to the tips of our toes, paws or fins. According to a new study published in the journal Cell, scientists from the Harvard Medical School, the University of Montreal and the Dana-Farber Cancer Institute have found a new way by which nerve cells relay information that tell them to grow from millimeters to meters in length. In other words, the researchers found a new signaling pathway that charters the course for cell progression to allow their growth. The team made an intriguing connection between nerve cells and a receptor called DCC (Deleted in Colorectal Carcinoma). The discovery means cells perform functions in unimagined ways – challenging previous views on how cells respond to their environment – that could prove beneficial in cell growth following nerve damage or detrimental in diseases such as cancer. "We found an alternate way that helps nerve cells respond quickly and locally," says co-author Philippe P. Roux, a professor of pathology and cell biology and a researcher at the University of Montreal Institute for Research in Immunology and Cancer (IRIC). "This is just the beginning, since our findings suggest that more cellular receptors may function in the same way." Dr. Roux, who is also Canada Research Chair in Signal Transduction and Proteomics, says the study could potentially open new treatment avenues: "We can envisage manipulating this alternate mechanism to make cells respond locally to their environment. Our findings mean that scientists must consider a new way that cells organize themselves to perform essential functions." Partners in research: This study was supported by the National Institutes of Health, Canadian Cancer Society Research Institute, Howard Hughes Medical Institute, Canadian Institutes of Health Research and Human Frontier Science Program Organization. 
About the study: The article, "Transmembrane Receptor DCC Associates with Protein Synthesis Machinery and Regulates Translation," published in the journal Cell, was authored by Joseph Tcherkezian, Perry A. Brittis and John G. Flanagan of the Harvard Medical School; Franziska Thomas of the Dana-Farber Cancer Institute; Philippe P. Roux of the University of Montreal. Note to editors: The Université de Montréal name can be adapted to University of Montreal (never Montreal University). On the Web: Cell: www.cell.com Sylvain-Jacques Desjardins | EurekAlert!
Professor Mike Ashfold, email: email@example.com

A mass spectrometer consists of an ion source to ionise neutral species, an analyser which separates the ions according to their mass-to-charge (m/z) ratios, and a detector which, effectively, counts the number of mass-filtered ions. There are many ways in which this can be achieved. Most early mass spectrometers employed a magnetic field to deflect ions by an amount proportional to their mass. Others use two DC electric fields to provide two stages of acceleration and then separate according to the ion time-of-flight (TOF). However, most common - at least in plasma processing and most residual gas analysis applications - is the quadrupole mass spectrometer (QMS). Mass selectivity in a QMS is achieved using an AC electric field. Consider a three-dimensional electric field described by the potential

F(x,y,z) = F0 (x^2 - y^2) / (2 r0^2)    (1)

where r0 is the distance from the central (z) axis to each electrode. This potential is invariant along the z axis and, for a given value of F0, the equipotentials in the xy plane are four rectangular hyperbolae with asymptotes at 45° to the Cartesian axes, as shown in fig.1. Four symmetrically arranged and precisely parallel cylindrical rod electrodes, mounted and biased as shown in fig.2, provide a very good approximation to a potential of this form. Addition of an ionisation source and a detector, one at each end of the z axis of the quadrupole field, constitutes the basis of a QMS. Consider the motion of an ion (with mass m and charge z) in an electric field of the form defined by eq.(1). The problem is separable, and leads to three independent differential equations:

m (d^2x/dt^2) + (z/r0^2) F0 x = 0    (2)
m (d^2y/dt^2) - (z/r0^2) F0 y = 0    (3)
m (d^2z/dt^2) = 0    (4)

The last of these is straightforward: there is no acceleration along the z axis and so the axial velocity is constant. The motion in the x and y directions depends on the time dependence of F0. Quadrupole mass spectrometers operate with a superposition of DC and AC voltages, i.e. F0 = U - V cos(2πνt)
(5)

Opposite pairs of electrodes are connected electrically, and ν is usually a radio frequency. Numerical integration of eqs.(2)-(4), with F0 as in eq.(5), yields complicated trajectories, the details of which depend on the chosen values of r0, ν, U and V. Fig.3 shows representative trajectories for ions with m/z 27, 28 and 29, calculated for ν = 2 MHz and r0 = 2.75 mm. As these demonstrate, an ion with the appropriate m/z ratio (28 in this case) will oscillate symmetrically and continuously about the central axis, pass cleanly through the entire length of the quadrupole, and emerge to be detected (ion a+ in fig.2). However, all ions with lighter (or heavier) m/z ratios will suffer unstable motion which forces their trajectories to deviate from the central axis in the x (or y) direction, where they strike the appropriate electrode and are neutralised (ions b+ in fig.2). Mass selection is thus achieved by varying V and U (whilst maintaining a constant V/U ratio), or by varying the RF frequency ν. As the ions emerge from the region bounded by the four rods they are accelerated by a high negative voltage towards the detector (e.g. a channel electron multiplier). A mass spectrum is simply a plot of this detector output as a function of m/z.

A couple of practical details merit further consideration. Ionisation of the neutral species of interest is normally achieved by electron impact. The electrons are produced by thermionic emission from a hot filament (usually tungsten or thoriated iridium) and accelerated, typically to 70 eV, an energy which corresponds to a maximum in the ionisation efficiency curve for most atoms and molecules. However, the quantitative interpretation of mass spectra obtained in this way is complicated by the fact that this energy is more than sufficient not just to ionise the species of interest but also to cause dissociative ionisation.
Consider, for example, mass spectrometric analysis of the gases involved in Si etching using an RF discharge in a CF4/8% O2 gas mixture. F+ ions (m/z 19) can arise both from electron impact ionisation of F atoms in the discharge, F + e → F+ + 2e, and from dissociative ionisation of the (more plentiful) CF4 process gas, CF4 + e → CF3 + F+ + 2e. How then can we assess the F atom content in the plasma? There are two routes to resolving this problem. The first involves looking up, or measuring, the cracking pattern of CF4 and subtracting the F+ contribution due to CF4 from the observed F+ peak intensity. Table 1 lists a number of mass spectral cracking patterns. There may, however, be other cracking reactions that produce F+. The alternative involves operation of the ion source at lower electron energies, above the ionisation potential (IP) of the F atom (17.4 eV) but below the appearance potential (AP) of F+ from the dissociative ionisation of CF4 (25 eV). Such a strategy leads to reduced signal, since the F atom has a much smaller cross-section for electron impact ionisation at, say, 20 eV than at 70 eV, but it at least eliminates one interfering source of F+ ions.

Table 1: Mass spectral cracking patterns (columns: Molecule | Mol. wt. | Cracking Pattern).

Cracking patterns and electron energy discrimination also enable distinction between different species with the same molecular weight. As Table 1 shows, an ion with m/z 28 could be CO+, N2+ or Si+. The presence of CO, and its relative abundance, can be established by looking also at the m/z 12 peak, which is unique to CO. In the same way, N2+ would be responsible for any significant signal at m/z 14, whilst any signal at m/z 30 would indicate Si (30Si has a natural abundance of 3%). A mixture of CO+, N2+ and Si+ can thus be analysed in principle, and the contribution each makes to the peak with m/z 28 can be deduced. CF4 is unusual in that it shows no parent ion peak, i.e. there is no m/z 88 peak due to formation of CF4+.
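The first route, correcting the observed m/z 19 intensity for the CF4 cracking contribution, is simple bookkeeping. In the sketch below the cracking ratio and intensities are invented, purely illustrative numbers; in practice the ratio would be taken from Table 1 or, better, measured on the same instrument with pure CF4:

```python
def f_atom_signal(i_f_plus, i_cf3_plus, f_to_cf3_ratio):
    """Estimate the F-atom contribution to the m/z 19 peak.

    i_f_plus       -- observed intensity at m/z 19 (F+)
    i_cf3_plus     -- observed intensity at m/z 69 (CF3+, from CF4 only)
    f_to_cf3_ratio -- F+/CF3+ ratio in the cracking pattern of pure CF4
    """
    # All of the m/z 69 signal comes from CF4, so the CF4-derived F+
    # signal is the CF3+ intensity scaled by the pure-gas cracking ratio.
    return i_f_plus - i_cf3_plus * f_to_cf3_ratio

# Invented example intensities (arbitrary counts):
print(f_atom_signal(1200.0, 8000.0, 0.125))   # prints 200.0
```

The same subtraction logic extends to the m/z 28 problem: the m/z 12, 14 and 30 marker peaks fix the CO, N2 and Si contributions in turn.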
This reflects the very short lifetime of the parent ion. As Table 1 shows, ionisation of CF4 results in CF3+, CF2+ and F+ fragment ions. 70 eV is more than sufficient to remove two electrons from all the species shown in Table 1, so it should come as no surprise to learn that doubly charged ions are often observed: CF3²⁺ (m/z 34.5) is one example. Cracking patterns are a function of electron energy. Even at 'standard' electron energies (e.g. 70 eV) the cracking pattern for any chosen molecule will show some variation with the design of mass spectrometer and, for any given spectrometer, is likely to show some change with time. Thus it is good practice to periodically measure cracking patterns in one's own spectrometer using the appropriate pure gas. Electron energy discrimination as outlined above also has some limitations, since the energy of the thermally produced electrons is not monochromatic; rather, it has a Boltzmann distribution. Thus a plot of the ion yield at any given m/z versus (nominal) electron energy does not show a sharp intercept on the energy axis, but instead tails off asymptotically. Empirical routes to establishing IPs and APs are available, but their accuracy is generally only good to ~0.3 eV. Table 2 lists some illustrative values. [Table 2 columns: Ion | m/z | IP / eV for neutral species; Ion | m/z | Source Gas | IP / eV | AP / eV for fragment ions. Data not reproduced here.] Table 2: IPs and APs of selected species likely to arise in CF4/8% O2 etching of Si materials. Data from H.M. Rosenstock et al., J. Phys. Chem. Ref. Data 6, Suppl. 1 (1977). Given the available energy resolution and the data in Table 2 it is clear that it should be easy to distinguish Si (IP = 8.95 eV) from CO or N2 (IPs = 14.0 and 14.1 eV, respectively) by appropriate choice of electron energy. CO and N2, however, cannot be distinguished in this way. Maintaining an electron energy less than (say) 17 eV should preclude any m/z 28 signal due to CO+ from CO2 (AP = 19.5 eV) or F2CO (AP = 23.0 eV), or due to Si+ from SiF4 (AP = 31 eV).
These appearance potentials are much higher than the IPs because the impacting electron must break one or more bonds as well as ionise the molecule. Cracking is just one of several factors which need to be considered if one wishes to use mass spectrometry to make quantitative estimates of the various gas phase species present in an etching chamber. Another is the fact that, even at 70 eV, different species have different ionisation cross-sections, i.e. different probabilities of being ionised and thus detected. Table 3 lists ionisation cross-sections for a number of species relevant in plasma etching. Table 3: Ionisation cross-sections for selected gas phase species important in plasma etching processes (from D. Rapp and P. Englander-Golden, J. Chem. Phys. 43, 1464 (1965) and A.J. Hydes, Ph.D. Thesis, University of Bristol (1984)). Inherent mass discrimination in a QMS leads to another correction. A quadrupole mass filter has a lower overall transmission for higher mass ions, largely because of fringe fields in the vicinity of the entrance aperture. The ion extraction efficiency scales as ~m½, where m is the ion mass. Clearly, for quantitative species concentration estimates it would be sensible to try to determine relative sensitivities using premixed gas samples containing known partial pressures of as many of the species of interest as possible. There is another experimental difficulty specific to the present application, namely the use of a QMS for on-line monitoring of the composition of gas phase species involved in the etching process. The etcher is typically running at pressures of the order of 0.1-1 Torr, whilst the QMS system is designed to run at pressures <10⁻⁶ Torr. Various routes have been devised to overcome this large pressure differential; three are described below. Exhaust-line sampling is the simplest scheme to implement. The QMS is attached to the exhaust line, between the reactor and the pump(s).
The necessary pressure drop is introduced via a length of capillary or a throttle valve, and the QMS itself is pumped continually. The disadvantage of positioning the QMS this far downstream is that it can only possibly provide information on the fully 'relaxed' gas phase chemistry; all ions and radical species that were perhaps crucial in the etching process itself will long since have been destroyed by gas phase or wall reactions. It is much preferable to withdraw samples directly from the plasma itself. This cuts down the transit time sufficiently that it is sometimes possible to detect some of the more stable radical species, depending upon how frequently they collide with the walls en route to the detector (F atoms largely recombine by wall collisions) or what scavengers are present (CF2 radicals are relatively stable but react readily with oxygen). Clearly, therefore, it is sensible to minimise the residence time in the sampling tube, and to minimise surface recombination by, for example, coating any metal tubing with teflon. Additional factors meriting consideration are the extent to which the introduction of the probe perturbs the plasma (a sheath potential will form around the probe), and whether the sampling efficiency of the probe has any mass dependence. Again, calibration experiments using premixed gas samples of known composition are usually necessary. Though generally the most difficult to implement, molecular-beam sampling is the method of choice for quantitative in-situ measurements of the gas phase species involved in an etching process. Plasma species are allowed to effuse through a small hole into a differentially pumped QMS system. Ions in the plasma (which, though normally present only at very low concentrations, often play a disproportionate role in the etching process) can also be sampled with such a system simply by switching off the electron impact source.
Yet again, though, one must worry about mass-dependent sampling efficiencies etc., and extensive calibration experiments are advisable. The Bristol Diamond Group use such a system to sample the gas phase chemistry involved in a related field, namely the growth of thin film diamond using hot filament and/or microwave plasma enhanced chemical vapour deposition (CVD). Fig. 4 shows one illustrative application of the way in which time-resolved (capillary probe) mass spectrometry can allow precise observation of an etch end-point. The sample in question was a 2 µm overlayer of Ti (containing 15% W to improve adhesion) on Si, undergoing reactive ion etching in a CF2Cl2 plasma. Peaks corresponding to m/z 63 (SiCl+), 120 (CF2Cl2+) and 133 (SiCl3+) have been monitored and, in each case, the end-point is signalled by a large increase in the concentration of these species. Since, in each case, the pressure rises on passing through the end-point, we conclude that either reactants hitherto involved in TiW etching increase in concentration (as in the middle trace) or products of Si etching increase (top and bottom curves). Fig. 4: Partial pressure-time plots for three species suitable for end-point detection during the reactive ion etching (CF2Cl2 plasma) of a TiW/Si interface (from A.P. Day et al., Semiconductor International 12, 110 (1989)).
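Bringing together the corrections discussed in this tutorial (cracking-pattern subtraction for the composite m/z 28 peak, followed by cross-section and transmission corrections), a minimal quantification sketch is given below. The cracking fractions, cross-sections and the m⁻½ detection-efficiency form are illustrative placeholders, not the instrument-specific values of Tables 1-3.

```python
# Decompose the m/z 28 peak into CO, N2 and Si contributions using the
# fragment peaks unique to each species, then convert peak heights into
# relative number densities.  All numbers are illustrative placeholders,
# NOT the instrument-specific values of Tables 1-3.

# fragment intensity per unit m/z-28 intensity of the same species
FRAG = {
    "CO": ("m12", 0.05),   # C+ fragment from CO (placeholder fraction)
    "N2": ("m14", 0.07),   # N+ fragment from N2 (placeholder fraction)
    "Si": ("m30", 0.031),  # ~30Si/28Si abundance ratio (placeholder)
}

def decompose_m28(peaks):
    """peaks: dict of measured intensities at m/z 12, 14, 28 and 30."""
    contrib = {sp: peaks[key] / f for sp, (key, f) in FRAG.items()}
    residual = peaks["m28"] - sum(contrib.values())
    return contrib, residual

# relative ionisation cross-sections at 70 eV (placeholders)
SIGMA = {"CO": 2.4, "N2": 2.5, "Si": 4.0}

def relative_density(species, intensity, mass, m_ref=28.0):
    """Correct a peak intensity for cross-section and an ASSUMED m**-0.5
    net detection-efficiency scaling; the true scaling is
    instrument-specific and should be calibrated with premixed gases."""
    transmission = (mass / m_ref) ** -0.5
    return intensity / (SIGMA[species] * transmission)

peaks = {"m12": 5.0, "m14": 3.5, "m28": 170.0, "m30": 0.62}
contrib, residual = decompose_m28(peaks)
print(contrib)    # CO: ~100, N2: ~50, Si: ~20
print(residual)   # ~0: the three species account for the whole m/z 28 peak
```

In practice the residual would rarely vanish; a significant residual flags either an unconsidered species at m/z 28 or inaccurate cracking fractions.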
Mergers of Charged Black Holes: Gravitational-wave Events, Short Gamma-Ray Bursts, and Fast Radio Bursts The discoveries of GW150914, GW151226, and LVT151012 suggest that double black hole (BH-BH) mergers are common in the universe. If at least one of the two merging black holes (BHs) carries a certain amount of charge, possibly retained by a rotating magnetosphere, the inspiral of a BH-BH system would drive a global magnetic dipole normal to the orbital plane. The rapidly evolving magnetic moment during the merging process would drive a Poynting flux with an increasing wind power. The magnetospheric activities during the final phase of the merger would make a fast radio burst (FRB) if the BH charge can be as large as a fraction q ∼ 10⁻⁹-10⁻⁸ of the critical charge Qc of the BH. At large radii, dissipation of the Poynting-flux energy in the outflow would power a short-duration high-energy transient, which would appear as a detectable short-duration gamma-ray burst (GRB) if the charge can be as large as q ∼ 10⁻⁵-10⁻⁴. The putative short GRB coincident with GW150914 recorded by Fermi GBM may be interpreted with this model. Future joint GW/GRB/FRB searches would lead to a measurement of, or place a constraint on, the charges carried by isolated BHs. © 2016. The American Astronomical Society. All rights reserved. Keywords: radiation mechanisms: non-thermal; stars: black holes. Citation: Mergers of Charged Black Holes: Gravitational-wave Events, Short Gamma-Ray Bursts, and Fast Radio Bursts. Astrophysical Journal Letters, 827(2).
A research team at the Broad Institute of Harvard and MIT and Beth Israel Deaconess Medical Center has uncovered a vast new class of previously unrecognized mammalian genes that do not encode proteins, but instead function as long RNA molecules. Their findings, presented in the February 1st advance online issue of the journal Nature, demonstrate that this novel class of “large intervening non-coding RNAs” or “lincRNAs” plays critical roles in both health and disease, including cancer, immune signaling and stem cell biology. “We’ve known that the human genome still has many tricks up its sleeve,” said Eric Lander, founding director of the Broad Institute and co-senior author of the Nature paper. “But, it is astounding to realize that there is a huge class of RNA-based genes that we have almost entirely missed until now.” Standard “textbook” genes encode RNAs that are translated into proteins, and mammalian genomes harbor about 20,000 such protein-coding genes. Some genes, however, encode functional RNAs that are never translated into proteins. These include a handful of classical examples known for decades and some recently discovered classes of tiny RNAs, such as microRNAs. By contrast, the newly discovered lincRNAs are thousands of bases long. Because only about ten examples of functional lincRNAs were known previously, they seemed more like genomic oddities than critical components. The new Nature study shows that there are actually thousands of such genes and that they have been conserved across mammalian evolution. “The challenge in finding these lincRNAs is that they have been hiding in plain sight,” said John Rinn, a Harvard Medical School assistant professor at Beth Israel Deaconess Medical Center and an associate member of the Broad Institute of Harvard and MIT. 
“The human and mouse genomes are already known to produce many large RNA molecules, but the vast majority show no evolutionary conservation across species, suggesting that they may simply be ‘genomic noise’ without any biological function.” To uncover this large collection of new genes, the Broad scientific team looked not at the RNA molecules themselves but at telltale signs in the DNA called chromatin modifications or epigenomic marks. They searched for genomic regions that have the same chromatin patterns as protein-coding genes, but do not encode proteins. By surveying the genomes of four different types of mouse cells (including embryonic stem cells and cells from various tissue types), they found an astounding 1,586 such loci that had not been previously described. The researchers also found that the vast majority of these genomic regions are transcribed into lincRNAs, and that these are conserved across mammals. “The epigenomic marks revealed where these genes were hiding,” said Mitch Guttman, a MIT graduate student working at the Broad Institute. “Analysis of their sequence then revealed that the genes are highly conserved in mammalian genomes, which strongly suggested that these genes play critical biological functions.” By correlating the expression patterns of lincRNAs in various cell types with the expression patterns of known critical protein-coding genes in those same cells, the scientists observed that lincRNAs likely play critical roles in helping to regulate a variety of different cellular processes, including cell proliferation, immune surveillance, maintenance of embryonic stem cell pluripotency, neuronal and muscle development, and gametogenesis. Further experimental evidence from several of the identified lincRNAs verified these observations. 
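The correlation analysis described above amounts to "guilt by association": a lincRNA inherits a putative function from the known genes whose expression profile it tracks across cell types. A toy sketch of that logic follows; the expression vectors and pathway labels are invented for illustration and are not data from the study.

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two profiles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# made-up expression levels across four cell types
# (embryonic stem cells, neural, muscle, immune)
markers = {
    "pluripotency marker profile":  [9.0, 1.0, 0.5, 0.8],
    "neuronal development profile": [0.5, 8.0, 1.0, 0.7],
    "immune signaling profile":     [0.4, 0.9, 0.6, 7.5],
}
lincRNA_profile = [8.5, 1.2, 0.4, 0.9]   # hypothetical lincRNA

best = max(markers, key=lambda m: pearson(lincRNA_profile, markers[m]))
print(best)   # this lincRNA tracks the pluripotency marker profile
```

The real study worked with thousands of loci and genome-wide expression compendia, but the underlying association step is this simple comparison, scaled up.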
Because of the stringent experimental conditions imposed by the researchers in identifying the 1,600 lincRNAs in the Nature study, it is likely that there are many more lincRNA genes hiding in plain sight in the genome, as well as other RNA-encoding genes that are as important to genome function as their better-recognized protein-coding counterparts. Guttman et al. 2009 “Chromatin signature reveals over a thousand highly conserved, large non-coding RNAs in mammals.” Nature DOI 10.1038/nature07672 About the Broad Institute of Harvard and MIT The Broad Institute of Harvard and MIT was founded in 2003 to bring the power of genomics to biomedicine. It pursues this mission by empowering creative scientists to construct new and robust tools for genomic medicine, to make them accessible to the global scientific community, and to apply them to the understanding and treatment of disease. The Institute is a research collaboration that involves faculty, professional staff and students from throughout the MIT and Harvard academic and medical communities. It is jointly governed by the two universities. Organized around Scientific Programs and Scientific Platforms, the unique structure of the Broad Institute enables scientists to collaborate on transformative projects across many scientific and medical disciplines. About Beth Israel Deaconess Medical Center Beth Israel Deaconess Medical Center is a patient care, research and teaching affiliate of Harvard Medical School and ranks third in National Institutes of Health funding among independent hospitals nationwide. BIDMC is clinically affiliated with the Joslin Diabetes Center and is a research partner of the Dana-Farber/Harvard Cancer Center. BIDMC is the official hospital of the Boston Red Sox. 
How City Life Has Changed the Bumblebee Genome [Image via Pixabay] Bumblebees living in the city have genes that differ from those of their relatives in the countryside. Although the genetic differences are not major, they nevertheless may influence how well the insects adapt to their habitat. For example, urban bumblebees are probably better able to react to environmental challenges that come with city life, such as higher temperatures. These differences in their genetic makeup are an indication that urban life does impact the evolutionary trajectory of a species, write researchers at Martin Luther University Halle-Wittenberg (MLU) and the German Centre for Integrative Biodiversity Research (iDiv) Halle-Leipzig-Jena in the current issue of the renowned journal "Proceedings of the Royal Society B". For bumblebees, life in the city is a two-edged sword. "On the one hand, food is abundant for the insects thanks to the numerous urban gardens and balconies. But on the other hand, bumblebees have more parasites and there is a considerably higher degree of habitat fragmentation in cities," says Dr Panagiotis Theodorou from the Institute of Biology at MLU and iDiv. He headed the new study alongside bee researcher Professor Robert Paxton, also from Halle. Cities around the world are expanding, which threatens the natural habitat of many species. "In order for individuals to survive, they have to adjust to the new living conditions. This process should also be detectable in a species' genetic material," says Paxton, explaining the idea behind the new study. To test this, the researchers collected bumblebees from nine large German cities and paired neighboring rural regions, and analyzed the bees' genetic material with the aid of next-generation sequencing. "Overall differences in the genetic material of urban and rural bees are subtle," says Paxton.
However, a more detailed analysis of the data revealed that urban bumblebees are consistently distinguishable genetically from their country counterparts. "For example, we detected differences in genes associated with metabolism and environmental stress, such as heat or energetic stress," adds biologist Panagiotis Theodorou. The researchers were unable to identify these changes in the rural bumblebees. The biologists from Halle are unable to say for sure which of the many different living conditions in cities cause these changes in the genetic material. However, their study is further proof of how plants and animals adapt to an environment shaped by humans and how this is reflected in their genetic material. This article has been republished from materials provided by the Martin Luther University of Halle-Wittenberg. Note: material may have been edited for length and content. For further information, please contact the cited source. Reference: Theodorou, P., Radzevičiūtė, R., Kahnt, B., Soro, A., Grosse, I., & Paxton, R. J. (2018). Genome-wide single nucleotide polymorphism scan suggests adaptation to urbanization in an important pollinator, the red-tailed bumblebee (Bombus lapidarius L.). Proc. R. Soc. B, 285(1877), 20172806. https://doi.org/10.1098/rspb.2017.2806
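At its core, a genome-wide SNP scan of this kind compares allele frequencies between the urban and rural samples, locus by locus, and flags the most differentiated loci as candidates for local adaptation. A minimal per-SNP F_ST sketch follows; the allele frequencies and locus names are invented for illustration and this is not the study's data or its exact estimator.

```python
def fst(p_urban, p_rural):
    """Wright's F_ST for one biallelic SNP from two subpopulation
    allele frequencies (equal subpopulation sizes assumed)."""
    p_bar = (p_urban + p_rural) / 2.0
    h_total = 2.0 * p_bar * (1.0 - p_bar)   # expected heterozygosity, pooled
    h_sub = (2.0 * p_urban * (1.0 - p_urban)
             + 2.0 * p_rural * (1.0 - p_rural)) / 2.0
    return 0.0 if h_total == 0 else (h_total - h_sub) / h_total

# made-up allele frequencies at three SNPs (urban vs. rural samples)
snps = {
    "metabolism-gene SNP":  (0.60, 0.30),   # divergent -> candidate locus
    "heat-stress-gene SNP": (0.75, 0.50),
    "neutral SNP":          (0.42, 0.40),   # little differentiation
}
for name, (pu, pr) in snps.items():
    print(name, round(fst(pu, pr), 3))
```

Scanned over hundreds of thousands of SNPs, the outliers of this statistic are what a study like the one above follows up on; published analyses also correct for sample size and population structure.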
2) Is sin x = 2 sin x cos x an identity?
3) Rewrite 2 sin 30° cos 30° as a single trig function.
4) If sin x = 5/13 and x is in quadrant I, then:
5) Convert √x - y + 3 = 0 to the normal form of a line.
6) Convert x cos 210° + y sin 210° - 5 = 0 to the rectangular form of a line.
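Question 2 can be settled numerically: an identity must hold for every x, so a single counterexample suffices. A quick check, together with the genuine double-angle identity it is presumably being contrasted with, and question 3:

```python
import math

# Question 2: sin x = 2 sin x cos x fails at, e.g., x = 1 rad,
# so it is NOT an identity (it holds only where sin x = 0 or cos x = 1/2).
x = 1.0
print(math.isclose(math.sin(x), 2 * math.sin(x) * math.cos(x)))  # False

# The genuine double-angle identity, sin 2x = 2 sin x cos x,
# holds at every sampled point:
print(all(math.isclose(math.sin(2 * t), 2 * math.sin(t) * math.cos(t))
          for t in [0.1 * k for k in range(63)]))                # True

# Question 3: 2 sin 30° cos 30° = sin 60°
print(math.isclose(2 * math.sin(math.radians(30)) * math.cos(math.radians(30)),
                   math.sin(math.radians(60))))                  # True
```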
The First GPS IERS and Geodynamics Experiment — 1991 Over 120 GPS receivers were deployed from January 22 to February 13, 1991 through a collaboration of about 70 agencies; this campaign was the first IERS (International Earth Rotation Service) experiment involving the GPS. Although earlier global tracking experiments have been undertaken (GOTEX, CASA UNO), the global distribution and concomitant geometric strength of the tracking sites in the GIG'91 campaign were unprecedented. Consequently, GIG'91 should yield significant advances in our understanding of the inherent accuracy of the GPS as a space geodetic system, as well as its strengths and limitations. The overall goal of the GIG'91 experiment was to obtain a high quality data set to be used to assess the utility of the GPS in the mix of space techniques currently used by the IERS for Earth Orientation monitoring and terrestrial reference frame control. The primary objectives of the campaign were: 1) to obtain an extensive and complete set of GPS observations from a global network with world-wide distribution, and 2) to encourage broad participation by many groups internationally in the campaign and in the subsequent data analysis and assessment activities. The second objective resulted in a wide diversity of equipment used in the campaign. The 120-plus receivers in GIG'91 included 12 different receiver models and an even larger number of different antenna systems. Nevertheless, this problem of mixing systems should be mitigated significantly by the commonality in many of the systems used, the application of known offsets for certain combinations of mixed systems, and the standardization in data-taking and logging procedures that were mostly followed in GIG'91. Keywords: Ambiguity Resolution; Earth Rotation; Terrestrial Reference Frame; Tracking Network; Earth Rotation Parameter.
A light-harvesting complex is an assembly of subunit proteins that may be part of a larger supercomplex of a photosystem, the functional unit in photosynthesis. It is used by plants and photosynthetic bacteria to collect more of the incoming light than would be captured by the photosynthetic reaction center alone. Light-harvesting complexes are found in wide variety among the different photosynthetic species. The complexes consist of proteins and photosynthetic pigments and surround a photosynthetic reaction center to focus energy, attained from photons absorbed by the pigment, toward the reaction center using Förster resonance energy transfer.
The function of a light-harvesting complex
Absorption of a photon by a molecule leads to electronic excitation when the energy of the captured photon matches that of an electronic transition. The fate of such excitation can be a return to the ground state or another electronic state of the same molecule. When the excited molecule has a nearby neighbour molecule, the excitation energy may also be transferred, through electromagnetic interactions, from one molecule to another. This process is called resonance energy transfer, and its rate depends strongly on the distance between the energy donor and energy acceptor molecules. Light-harvesting complexes have their pigments specifically positioned to optimize these rates.
Light-harvesting complexes in bacteria
Light-harvesting complexes in plants
Chlorophylls and carotenoids are important in light-harvesting complexes present in plants. Chlorophyll b is almost identical to chlorophyll a except that it has a formyl group in place of a methyl group. This small difference makes chlorophyll b absorb light with wavelengths between 400 and 500 nm more efficiently. Carotenoids are long linear organic molecules that have alternating single and double bonds along their length. Such molecules are called polyenes. Two examples of carotenoids are lycopene and β-carotene.
These molecules also absorb light most efficiently in the 400-500 nm range. Due to their absorption region, carotenoids appear red and yellow and provide most of the red and yellow colours present in fruits and flowers. The carotenoid molecules also serve a safeguarding function: they suppress damaging photochemical reactions, in particular those involving oxygen, which exposure to sunlight can cause. Plants that lack carotenoid molecules quickly die upon exposure to oxygen and light. Little light reaches algae that reside at a depth of one meter or more in seawater, as light is absorbed by seawater. A phycobilisome is a light-harvesting protein complex present in cyanobacteria, glaucocystophyta, and red algae and is structured like a real antenna. The pigments, such as phycocyanobilin and phycoerythrobilin, are the chromophores that bind through a covalent thioether bond to their apoproteins at cysteine residues. The apoprotein with its chromophore is called phycocyanin, phycoerythrin, and allophycocyanin, respectively. They often occur as hexamers of α and β subunits (α3β3)2. They enhance the amount and spectral window of light absorption and fill the "green gap" that occurs in higher plants. The geometrical arrangement of a phycobilisome is very elegant and results in 95% efficiency of energy transfer. There is a central core of allophycocyanin, which sits above a photosynthetic reaction center. Phycocyanin and phycoerythrin subunits radiate out from this center like thin tubes. This increases the surface area of the absorbing section and helps focus and concentrate light energy down into the reaction center to a chlorophyll. Energy absorbed by pigments in the phycoerythrin subunits at the periphery of these antennas arrives at the reaction center in less than 100 ps.
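The strong distance dependence mentioned above is Förster's sixth-power law: the transfer rate falls off as (R0/r)⁶, where R0 is the Förster radius of the donor-acceptor pair. A small sketch follows; the R0 and donor-lifetime values are illustrative, not the parameters of any particular pigment pair.

```python
def forster_rate(r_nm, r0_nm=5.0, tau_donor_ns=1.0):
    """Forster transfer rate k_T = (1/tau_D) * (R0/r)**6, in 1/ns.
    R0 (Forster radius) and tau_D here are illustrative values only."""
    return (1.0 / tau_donor_ns) * (r0_nm / r_nm) ** 6

def transfer_efficiency(r_nm, r0_nm=5.0):
    """E = R0^6 / (R0^6 + r^6): fraction of excitations transferred
    rather than lost by the donor's other decay channels."""
    return r0_nm ** 6 / (r0_nm ** 6 + r_nm ** 6)

for r in (2.5, 5.0, 10.0):
    print(r, transfer_efficiency(r))
# halving the distance from R0 pushes E from 50% to ~98%,
# while doubling it drops E to ~1.5%: the sixth-power law at work
```

This is why the precise pigment positioning noted above matters: nanometre-scale changes in donor-acceptor separation switch transfer from nearly complete to negligible.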
Millions of tiny pieces of plastic are escaping wastewater treatment plant filters and winding up in rivers, where they could potentially contaminate drinking water supplies and enter the food system, according to new research being presented here. Microplastics - small pieces of plastic less than 5 millimeters (0.20 inches) wide - are an emerging environmental concern in ocean waters, where they can harm ocean animals. Although wastewater treatment plants are catching 90 percent or more of the incoming microplastics in wastewater, the amount of microplastics being released daily with treated wastewater into rivers is significant, ranging from 15,000 to 4.5 million microplastic particles per day per treatment plant. These microplastics can be a source of pathogenic bacteria. [Image: plastic found in wastewater influent (raw sewage entering a wastewater treatment plant) near Bartlett, Illinois. Credit: Timothy Hoellein] Although the majority of ocean debris - including plastics - is transported to oceans from rivers, much less is known about how microplastics are entering rivers and affecting river ecosystems, according to Timothy Hoellein, an assistant professor at Loyola University Chicago. Rivers are sources of drinking water for many communities and also a habitat for wildlife, Hoellein said. Fish and invertebrates eat the tiny pieces of plastic in rivers, which then make their way up the food chain - possibly ending up on our dinner plates, he said. Like microplastics in the ocean, plastics found in rivers carry potentially harmful bacteria and other pollutants on their surfaces. "Rivers have less water in them (than oceans), and we rely on that water much more intensely," Hoellein said. Hoellein previously found that water downstream from a wastewater treatment plant had a higher concentration of microplastics than water upstream from the plant.
Now, new research by Hoellein and his colleagues studying 10 urban rivers in Illinois supports this initial finding. Although initial estimates suggest that wastewater treatment plants are catching 90 percent or more of the incoming microplastics, the amount of microplastics being released daily with treated wastewater into rivers is significant, ranging from 15,000 to 4.5 million microplastic particles per day per treatment plant, according to the new research. Wastewater treatment plants were a source of microplastics in 80 percent of the rivers studied, regardless of the size of the river or the size and type of wastewater treatment plant. The new research also found that in each river, the tiny plastic particles that escaped the wastewater treatment plants were home to bacterial communities that were more likely to be potentially harmful than the bacteria found in the rivers. "[Wastewater treatment plants] do a great job of doing what they are designed to do - which is treat waste for major pathogens and remove excess chemicals like carbon and nitrogen from the water that is released back into the river," Hoellein said. "But they weren't designed to filter out these tiny particles." Hoellein will present new findings on microplastics in rivers Thursday, February 25 at the 2016 Ocean Sciences Meeting, co-sponsored by the Association for the Sciences of Limnology and Oceanography, The Oceanography Society and the American Geophysical Union. The new research found that not only do microplastics stay in ecosystems for a long time, but they often travel a long way from their point of origin. The researchers found microplastics as far as 2 kilometers (1.24 miles) downstream from the treatment plants, which supports the idea that rivers can transport plastic and pathogens over long distances, Hoellein said. As the microplastics travel downstream, they are being introduced and incorporated into many ecosystems, he added.
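The retention and release figures above are linked by simple bookkeeping: the daily effluent load is the influent load times the fraction not retained. A back-of-envelope sketch (the influent load is invented for illustration; only the ~90 percent retention figure comes from the text):

```python
def effluent_load(influent_particles_per_day, retention=0.90):
    """Microplastic particles escaping per day, given fractional
    retention in the treatment plant (the text quotes >= 90%)."""
    return influent_particles_per_day * (1.0 - retention)

# hypothetical plant receiving 1.5 million particles per day
print(effluent_load(1.5e6))                   # ~150,000 particles/day
print(effluent_load(1.5e6, retention=0.99))   # tenfold fewer at 99% retention
```

The second line illustrates why even modest improvements in retention matter: each extra percentage point of capture removes a fixed, and at these volumes large, absolute number of particles from the river.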
Hoellein said scientists are working to figure out how much plastic stays in the rivers and how much ends up in the oceans. Studying microplastics in rivers could help scientists better understand the entire lifecycle of these tiny pieces of plastic - from land to the ocean, Hoellein said. "The study of microplastics shouldn't be separated by an artificial disciplinary boundary," he said. "These aquatic ecosystems are all connected." Notes for Journalists The researchers will give an oral presentation about their work on Thursday, 25 February 2016 at the Ocean Sciences Meeting. The meeting is taking place from 21 - 26 February at the Ernest N. Morial Convention Center in New Orleans. Visit the Ocean Sciences Media Center for information for members of the news media. Below is an abstract of the presentation. The abstract is a part of HI41A: The Emerging Science of Marine Debris: From Assessment to Knowledge that Informs Solutions I being held Thursday, 25 February from 8:00 a.m. to 10:00 a.m. Central Time in room RO1. Title: Consider a source: Microplastic in rivers is abundant, mobile, and selects for unique bacterial assemblages Session #: HI41A Abstract #: HI41A-02 Date: 25 February 2016 Time: 8:15 a.m. Authors: Timothy Hoellein, John Kelly, Amanda McCormick: Loyola University Chicago, Chicago, Illinois, U.S.A. Abstract: Microplastic particles (<5 mm) in oceans are an emerging ecological concern. While rivers are considered a major source of microplastic to oceans, little is known about microplastic abundance, transport, and biological interactions in rivers. Our initial research in an urban river showed microplastic collected downstream of a wastewater treatment plant (WWTP) was more abundant than upstream, more abundant than many marine sites, and had higher occurrences of bacterial taxa associated with plastic decomposition and gastrointestinal pathogens than natural habitats (e.g., seston and water column). 
Based on these data, we conducted follow-up projects to measure 1) the role of WWTPs on microplastic abundance in 10 rivers, 2) microplastic concentrations in WWTP influent, sludge, and effluent, and 3) deposition rates of microplastic downstream of a WWTP point source. In each project, we characterized bacterial community composition on microplastic and natural habitats using next-generation Illumina sequencing. Although maximum concentrations varied among 10 sites, microplastic concentration was significantly higher downstream of WWTPs than upstream. WWTPs retained a significant component of microplastic in two activated sludge plants (>90%). Microplastic deposition length in an urban river was >2 km, and concentrations were orders of magnitude higher in the sediment than water column. Finally, bacterial communities were distinct on microplastic in water column and sediment habitats, yet communities became more similar with increasing distance from WWTP effluent sites. These data support the role of rivers as sources of microplastic to downstream ecosystems, but also illustrate that rivers are active sites of microplastic retention and bacterial colonization. Results will inform policies and engineering advances for mitigating microplastic inputs and redistribution. We advocate for research on plastic in the environment which synthesizes data from freshwater and marine disciplines. This approach is needed to facilitate quantitative analyses of the physical and biological factors driving the 'life cycle' of plastic at a global scale. 
Contact information for the researchers: Timothy Hoellein, Ph.D.: email@example.com, +1 (724) 272 9799 Ocean Sciences Press Office Contacts: +1 (914) 552-5759 +1 (504) 427-6069 Lauren Lipuma | American Geophysical Union
The majority of oil from oceanic oil spills (e.g., the recent accident in the Gulf of Mexico) converges on coastal ecosystems such as mangroves. Microorganisms are directly involved in biogeochemical cycles as key drivers of the degradation of many carbon sources, including petroleum hydrocarbons. When properly understood and managed, microorganisms provide a wide range of ecosystem services, such as bioremediation, and are a promising alternative for the recovery of impacted environments. Previous studies have emphasized developing and selecting strategies for bioremediation of mangroves, mostly in vitro, with few field applications described in the literature. Many factors can affect the success of bioremediation of oil in mangroves, including the presence and activity of oil-degrading microorganisms in the sediment, the availability and concentration of oil and nutrients, salinity, temperature and oil toxicity. More studies are needed to provide efficient bioremediation strategies that are applicable to large areas of mangroves impacted with oil. A major challenge to mangrove bioremediation is defining pollution levels and measuring the recovery of a mangrove. Typically, chemical parameters of pollution levels, such as polycyclic aromatic hydrocarbons (PAHs), are used but are extremely variable in field measurements. Therefore, meaningful mangrove monitoring strategies must be developed. This review will present the state of the art of bioremediation in oil-contaminated mangroves, new data on the use of different mangrove microcosms with and without tide simulation, the main factors that influence the success of bioremediation in mangroves, and new prospects for the use of molecular tools to monitor the bioremediation process. We believe that in some environments, such as mangroves, bioremediation may be the most appropriate approach for cleanup. 
Because of the peculiarities and heterogeneity of these environments, which hinder the use of other physical and chemical analyses, we suggest that measures of plant recovery be considered alongside reductions in polycyclic aromatic hydrocarbons (PAHs). This discussion is crucial because these key marine environments are threatened with worldwide disappearance. We highlight the need for, and suggest, new ways to conserve, protect and restore these environments.
A central goal of sensing in biology is the capability to study a wide variety of single-molecule behaviors. Optical single-molecule detection enables the study of molecular motion and interaction at the single-molecule level, and it has become an indispensable tool in many applications, such as diagnostics,1 DNA sequencing,2,3 and molecular biology.4 Single-molecule analysis yields invaluable information on both individual molecular properties and their microenvironment, information that is normally hidden in ensemble measurements. Among the various single-molecule technologies, optical detection stands out for its advantages, such as greater sensitivity, electrical passiveness, and robustness. Optical detection of single molecules can also be used over a wide range of concentrations. For simplicity, this review mainly focuses on single-molecule detection in the following three regimes: (1) on a dry surface, (2) in solutions at ultralow concentrations, and (3) in solutions at native physiological concentrations. Optical detection of single molecules on a dry surface can be enhanced using nanostructures that enhance the fluorescence and modify the radiation direction. Detection in solutions at ultralow concentrations requires the development of ultrasensitive technology to distinguish the few molecules in the solution. A sample solution is at an ultralow concentration typically because either the sample itself is rare and hard to prepare in high concentrations or it is purposely diluted to facilitate single-molecule detection. 
The optical detection of single molecules in solutions at native physiological concentrations is very challenging but highly desired because meaningful information for many biologically relevant processes can be obtained only under native physiological conditions.5,6 With the rapid development of nanotechnology, both the excitation light and the solution of the molecules can be properly engineered to enhance single-molecule detection in all three regimes. For example, plasmonic nanostructures, which consist of carefully designed metallic nanostructures, have been used to enhance single-molecule detection at both ultralow and high concentrations.7–12 Another easily overlooked but critically important factor for single-molecule detection is the time required for a successful detection. Due to the small size of the active detection area, it can take an impractically long time for the molecules to diffuse into the detection area, especially at ultralow concentrations. Fortunately, several methods have been proposed to solve this problem, as will be discussed in this article. The remainder of this review article is outlined as follows: Sec. 2 will describe the techniques for the advanced detection of single molecules on a surface and at low concentrations; Sec. 3 will introduce methods that have been applied to overcome the diffusion limit or the detection-time issue; Sec. 4 will review the recent developments for the optical detection of single molecules at native physiological concentrations; and finally, Sec. 5 will conclude with a discussion of the perspectives for future developments in this promising field.

Enhanced Single-Molecule Detection on a Surface and at Low Concentrations

In the past few decades, engineering of near-field radiation with plasmonic nanostructures has proven to be a promising pathway for enhancing the weak signals from molecules with low quantum yields. 
Plasmonics is a research field that investigates the interaction of electromagnetic waves with metallic nanostructures.13,14 In recent years, the study of plasmonics has been extended to many nonmetallic materials, such as doped semiconductor nanoparticles15,16 and graphene.17,18 The surface plasmon resonance (SPR) on these nanostructures, a condition in which the electromagnetic wave resonantly couples with the free-electron motion of the nanostructures, can focus light far beyond the diffraction limit.19–21 Hundreds of such nanostructures can be fabricated on a substrate, with each nanostructure serving as an independent nanodetector for enhanced single-molecule detection. This enables label-free, parallel detection of single molecules with significantly improved throughput. It also allows the enhancement of the fluorescence of molecules near these plasmonic nanostructures. The fluorescence enhancement (FE) factor is commonly defined as the ratio of the fluorescence intensity of a molecule near the nanostructure to that of the same molecule in free space.22,23 A wide variety of plasmonic nanostructures, also called optical nanoantennas because of their similarity to conventional radio frequency (RF) antennas, have been developed for single-molecule detection.24–26 The simplest and easiest-to-realize optical nanoantenna is a spherical nanoparticle, which has been used to reduce the observation volume by four orders of magnitude beyond the diffraction limit.27 Similar to an RF dipole antenna, an optical dipole nanoantenna (ODN), which consists of a metallic nanorod (NR), has a response to the excitation light that strongly depends on its aspect ratio and can be carefully designed to optimize the excitation and emission rates for single-molecule detection. Figure 1(a) shows such an ODN for the detection of light-harvesting complex 2 (LH2). The ODN consists of a gold nanorod (AuNR) on a glass coverslip and can be prepared in arrays for parallel detection, as shown in the inset of Fig. 1(a). 
The study of individual LH2 at ambient conditions without the help of optical nanoantennas is limited by LH2's low fluorescence efficiency and limited photostability. With the help of the ODN, a more than 500-fold FE of the light from LH2 has been demonstrated at the single-molecule level.24 Figure 1(b) shows the fluorescence image of the LH2 on an ODN array. Interestingly, the authors have claimed the first demonstration of photon antibunching from a single LH complex under ambient conditions, which shows its potential application as a quantum emitter. The field enhancement, defined as the ratio of the magnitude of the electric field near the optical nanoantenna to that of the incident light, is limited for a single ODN. To increase the field enhancement, two such nanoantennas can be placed close together to form a small gap. The near-field light coupling between the adjacent nanoantennas can give rise to a large field enhancement in the gap. Figure 1(c) shows such a coupled optical nanoantenna in a bowtie shape, a so-called bowtie nanoantenna. In addition to the large field enhancement in the gap of the bowtie nanoantenna, as shown in the simulation in Fig. 1(d), another advantage lies in its broadband response. Kinkhabwala et al.25 demonstrated large single-molecule FE with the bowtie nanoantenna. The bowtie nanoantennas are fabricated on a quartz coverslip with electron-beam lithography, and the low-quantum-efficiency fluorescent dye TPQDI is doped in PMMA and spin coated on the bowtie nanoantennas as shown in Fig. 1(c). The fluorescence of low-quantum-efficiency emitters has been shown to have a much higher potential for enhancement in the presence of optical nanoantennas than that of high-quantum-efficiency emitters.28,29 Figure 1(e) shows the experimental measurement of the FE of single molecules on 16 bowtie nanoantennas. Figure 1(f) shows the fluorescence time trace of a single molecule at one bowtie nanoantenna. 
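The interplay between field enhancement and quantum yield can be sketched with a common first-order model (an illustration with assumed numbers, not a formula from the review): the excitation rate scales with |E|², and the emission side enters through the ratio of quantum yields with and without the antenna.

```python
def fluorescence_enhancement(field_enh, qy_antenna, qy_free):
    """First-order FE model: excitation enhancement (field enhancement
    squared, from the |E|^2 scaling of the excitation rate) multiplied by
    the quantum-yield ratio. Saturation effects are ignored."""
    return field_enh ** 2 * (qy_antenna / qy_free)

# A dim emitter (1% free-space quantum yield, raised to 50% in the gap)
# gains far more than an already-bright emitter (90%), consistent with
# the observation that low-quantum-efficiency dyes benefit most.
print(fluorescence_enhancement(field_enh=10.0, qy_antenna=0.5, qy_free=0.01))
print(fluorescence_enhancement(field_enh=10.0, qy_antenna=0.9, qy_free=0.9))
```

In this model, the same tenfold field enhancement yields a 5000-fold FE for the dim emitter but only a 100-fold FE for the bright one.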
An FE factor as high as 1340 has been reported for these nanoantennas. The FE of a molecule in the gap of an optical nanoantenna strongly depends on the gap size. To further enhance single-molecule detection, optical nanoantennas with gap sizes in the sub-10 nm range are desired. However, it is not an easy task to fabricate such small gaps with top-down nanofabrication methods, such as electron-beam lithography or focused ion-beam milling. To solve this problem, a simple yet clever design has been proposed, as shown in Fig. 2(a). In this system, a silver nanocube sits on top of a dielectric layer that is sandwiched between the nanocube and a gold film, forming a so-called nanoscale patch antenna (NPA).30,33–36 The gap of this NPA can be precisely controlled in the range of 5 to 15 nm during the spin-coating process used to fabricate the dielectric layer.30 Single fluorescent emitters are embedded in the dielectric layer. Figure 2(b) shows the fundamental plasmonic mode of the NPA together with the large field enhancement in the gap. A spontaneous emission rate enhancement exceeding 1000 has been obtained in measurements of the fluorescence lifetimes of different ensembles of emitters. The NPA also facilitates directional emission of the single molecules and therefore improves the collection efficiency of the system.30 In addition to the top-down nanofabrication methods commonly used to fabricate optical nanoantennas, metallic nanoparticles, such as gold and silver nanoparticles, can be synthesized with bottom-up methods and assembled into well-organized nanostructures with extremely small gaps for enhanced single-molecule detection. 
For example, the self-assembly of metallic nanoparticles using the DNA origami technique has been extensively used to build plasmonic nanostructures for the detection of single molecules.37–41 The nanogaps in these nanostructures also allow for surface-enhanced Raman scattering (SERS) detection with single-molecule sensitivity.42 Enhancement factors spanning several orders of magnitude have been reported in the literature.42–46 However, the reproducibility of the SERS sites is still poor, which results in unquantifiable SERS signals. To overcome these limitations, Lim et al.31 demonstrated a new form of nanostructure built with DNA-modified gold nanoparticles, known as gold nanobridged nanogap particles (Au-NNPs), as shown in Fig. 2(e). Uniform 1 nm gaps can be formed between the gold core and the gold shell, and Raman dyes can be precisely loaded into the gap. Therefore, highly stable and quantitative SERS signals can be generated and amplified by roughly two orders of magnitude on this Au-NNP. Figures 2(f) and 2(g) show the distribution of the SERS enhancement factor of the Raman spectrum of Au-NNPs at 1190 and , respectively. The aforementioned methods can generate significantly enhanced Raman signals for single-molecule detection; however, the SERS site is typically fixed on the substrate, making it hard to create an SERS map, which contains valuable positional information. Tip-enhanced Raman scattering (TERS) has been developed not only for detection but also for chemical recognition of individual molecules at any desired location on a surface.47,48 Figure 2(h) schematically shows a typical TERS configuration, which consists of a gap formed between a scanning tip and a metallic surface. A submonolayer of molecules is immobilized on the metallic surface, and the scanning tip can freely move over the surface to create an SERS map of the immobilized molecules. 
This special configuration offers stable control of the gaps at extremely small tip–sample distances. It also provides in situ tuning of the plasmon resonance of the nanocavity, which is defined by the tip and substrate.49 Recently, Jiang et al.32 have shown the ability to distinguish adjacent molecules ( and ZnTPP) that are within the van der Waals contact distance in real space by using this method. Figure 2(i) shows that both ZnTPP and molecules are coadsorbed at the step edge, with a total of six molecules arranged alternately due to the van der Waals interactions. In addition to the aforementioned techniques, Förster resonance energy transfer (FRET) is another important method for single-molecule measurement. FRET depends on the nonradiative energy transfer from a donor fluorophore to an acceptor fluorophore, which is distance dependent. Therefore, it allows for the measurement of molecular distances at nanometer scales. FRET can also be enhanced using plasmonic nanostructures. For example, Zhang et al.50 have investigated the effect of a metallic silver nanoparticle on FRET between a nearby donor–acceptor pair and showed enhanced FRET efficiency due to the metal nanoparticles. Moreover, the resonance energy transfer can occur directly between plasmonic nanostructures and dye molecules.51 For example, Zheng's group has reported plasmon-induced resonance energy transfer from single AuNRs to merocyanine dye molecules.52 This shows that AuNRs can function as donors with a larger absorption cross section than the dye molecules, which leads to an enhanced acceptor excitation.

Breaking the Diffusion Limit of Single Molecules by Increasing the Molecular Concentration

Engineering of the near-field excitation light with plasmonic nanostructures allows for the control of single-molecule emission at the nanoscale. 
However, this will not guarantee the successful detection of single molecules in practice, especially when the molecules are in a highly diluted solution with concentrations in the femtomolar (fM) or attomolar (aM) range. This is mainly because it takes an unrealistically long time for the molecules to diffuse to the small detection area; in other words, detection is diffusion-limited.53–57 To break this diffusion limit, several methods have been demonstrated that allow for the significant reduction of the single-molecule diffusion time by increasing the concentration of the molecular solutions in the detection area. De Angelis et al.56 have demonstrated that a few molecules can be localized and detected at an attomolar concentration by combining superhydrophobic artificial surfaces and plasmonic nanostructures. The basic idea is schematically shown in Fig. 3(a) as follows: (1) put a drop of a solution that contains the molecules on a hydrophobic surface; (2) let the droplet evaporate to increase the concentration of the molecules in the drop; (3) as the concentration increases, the detection time decreases until the molecule eventually sticks on the detection surface; and (4) the detection surface is constructed using SERS-active structures so that the single molecule can be detected with SERS. In this work, the hydrophobic surface is made of periodic silicon micropillar arrays, as shown in the SEM image of Fig. 3(b). The hydrophobic surface allows the droplet to slide on the surface and, at the same time, avoids it being pinned at its initial contact point.58–61 Figure 3(c) shows measurements of the droplet's contact angle at four different times during evaporation; the angle remains almost constant. The recognition and localization of a single lambda DNA molecule has been successfully demonstrated using this system. 
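The severity of this diffusion limit can be illustrated with the classical Smoluchowski rate for diffusion-limited capture by an absorbing sphere; the numbers below are assumptions chosen for illustration, not values from the cited work.

```python
import math

N_A = 6.022e23  # Avogadro's number, mol^-1

def arrival_rate(D_m2_s, conc_molar, sensor_radius_m):
    """Smoluchowski diffusion-limited arrival rate (molecules/s) at a
    perfectly absorbing sphere: rate = 4 * pi * D * a * n, with number
    density n converted from molar concentration."""
    molecules_per_m3 = conc_molar * 1e3 * N_A  # mol/L -> molecules/m^3
    return 4 * math.pi * D_m2_s * sensor_radius_m * molecules_per_m3

# Assumed values: a small biomolecule (D ~ 1e-10 m^2/s), a 50-nm
# detection site, and a 1-attomolar solution.
rate = arrival_rate(D_m2_s=1e-10, conc_molar=1e-18, sensor_radius_m=50e-9)
wait_days = 1 / rate / 86_400
print(f"mean wait for a single arrival: ~{wait_days:.0f} days")
```

With these assumptions the mean waiting time for even one molecule to reach the sensor is on the order of hundreds of days, which is why concentrating the analyte near the detection area is essential.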
Recently, Wong's group62 developed a slippery substrate that allows for the free movement of the droplet on a substrate to achieve similar functions. To break the diffusion limit, the solution can also be forced to flow through nanochannels to increase the molecular concentration in the detection area.63,64 Figure 4(a) schematically shows the working mechanism of such a system.63 An array of nanoholes in a gold film is integrated with a microfluidic system to serve as both a plasmonic sensor and flow-through nanochannels. A voltage is applied across the solution to force the solution through the nanochannels and concentrate the analyte on the plasmonic sensor. Figure 4(b) shows the evolution of the fluorescence signal collected from the nanoholes by using this method. The increase in the concentration of the analytes on the sensor is clearly demonstrated. In addition to these techniques, other methods have been developed to increase the binding probability of the analytes on the sensor and to increase the detection speed. For example, the surfaces of plasmonic nanoparticles serving as sensors can be antibody-modified to target the molecules, which is another effective way to increase the binding rate and reduce the detection time.65–70

Enhanced Single-Molecule Detection at Physiological Concentrations Using Optical Aperture Nanoantennas

Detection at native physiological concentrations seems easy but is in fact very challenging. Many biologically relevant processes occur at high concentrations, for example, micromolar (μM) concentrations. Therefore, single-molecule detection at these high concentrations is highly desired.5,6 Common sense suggests that the higher the concentration of the molecules, the higher the signal and the easier the detection. However, the problem is that it is difficult to distinguish the signal from a single molecule at such high concentrations. 
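A quick cube-root-of-volume estimate (an illustration, not a calculation from the review) shows why high concentrations are problematic for diffraction-limited optics:

```python
N_A = 6.022e23  # Avogadro's number, mol^-1

def mean_spacing_nm(conc_molar):
    """Average distance between neighboring molecules in solution,
    estimated as the cube root of the volume available per molecule."""
    molecules_per_m3 = conc_molar * 1e3 * N_A  # mol/L -> molecules/m^3
    return (1 / molecules_per_m3) ** (1 / 3) * 1e9  # m -> nm

# At 1 uM the neighbors sit roughly 120 nm apart, well below a typical
# ~250-nm diffraction-limited focal spot, so a conventional focus always
# contains several molecules at once.
print(f"{mean_spacing_nm(1e-6):.0f} nm")  # ~118 nm
```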
The higher the concentration, the smaller the average distance between adjacent molecules. Therefore, the distance between two molecules will be much smaller than the diffraction limit of light, which means it is almost impossible to excite only one molecule at a time with diffraction-limited light, unless the molecule itself is photoactivatable and can be separated temporally, like the molecules used in super-resolution imaging.71,72 The ODN introduced in Sec. 2 can be used to focus light at the nanoscale and excite only one molecule at a time at a high concentration.37,38,73,74 However, the ODNs are typically illuminated with diffraction-limited light; therefore, the molecules are excited not only by the near-field light of the ODNs but also by the diffraction-limited light. The fluorescence signal from the molecules excited by the diffraction-limited light may contribute background noise to the detection. Optical aperture nanoantennas (OANs), which consist of nanoapertures formed in a metallic film, have been developed to overcome this limitation.8,9,75,76 The OANs can effectively block the incident light and confine the excitation light in the nanoaperture. Therefore, an OAN can excite only one molecule at a time with near-zero background noise. OANs with these properties are ideal for single-molecule detection at high concentrations. Zhao's group has provided a review on this topic before, and we refer the reader to Ref. 8 for more details. In this section, we provide a brief and updated overview of this topic. A commonly used OAN is a circular nanohole formed in a metallic film such as a gold film. Djaker et al.77 have investigated the SERS of 4-aminothiophenol (pMA) with such an OAN. The pMA is self-assembled inside the OAN. An average SERS enhancement factor of 250 is obtained when the aperture size is 100 nm because of the effects of SPR on the aperture. OANs with other designs have been demonstrated, such as the one shown in Fig. 
5(a).78 The OAN consists of two touching nanoholes to form a double-hole shaped OAN, which can give a large field enhancement at the apex between the holes when the polarization of the exciting light is parallel to the apex as shown in Figs. 5(b) and 5(c). Gordon's group has systematically investigated the optical properties of this double-hole shaped OAN.80–82 Figure 5(d) shows the fluorescence correlation spectroscopy analysis and the corresponding temporal correlation of the intensity traces, which illustrates a reduced number of molecules within the nanoscale detection volume when the excitation is set parallel to the apex region. The confocal measurement for the reference solution demonstrates a reasonably high average fluorescence intensity as shown by the green curve in Fig. 5(b).78 The same group later showed that this double-hole shaped OAN can also be used for trapping and sensing single proteins.79 Figure 5(e) shows the real-time optical transmission signal from the OAN when different agents are trapped and sensed by the OAN. The signal at steps (a), (b), and (c) in Fig. 5(e) shows the optical signal without a trap, with a 20-nm biotin-coated polystyrene (PS) particle trapped, and binding of streptavidin with the trapped biotin-coated PS particle, respectively. The OANs formed on a gold film as shown in Fig. 5 can block the light outside the aperture; however, some light can still reach the top surface due to plasmonic coupling through the aperture.83 The "leaked" light to the top surface will excite extra molecules that may contribute a background noise to single-molecule detection at high concentrations. An ideal OAN for single-molecule detection at high concentrations should focus the light only at the bottom of the aperture with minimum light on the top surface. OANs that are formed on hybrid metallic film, such as Al-Au83 and Cr-Au84 films, have been used to achieve this goal. 
The top metallic film allows for the quenching of the top field on the Au film, therefore confining the light more efficiently at the bottom of the aperture. Limited by current nanofabrication technologies, the nanoaperture in an OAN is typically larger than the nanogap in an ODN. Therefore, the ODN gives rise to a larger field enhancement than the OAN. Wenger's group has designed a so-called antenna-in-box,85,86 which is formed by fabricating an ODN inside an OAN. This design combines the advantages of the OAN's low background noise with the ODN's high field enhancement. The nanogap in the ODN of the antenna-in-box can be fabricated with a much smaller size than the nanoaperture in an OAN. Recently, Flauraud et al.87 have developed a nanofabrication technique that allows for the fabrication of large flat arrays of such antenna-in-boxes featuring 10 nm gaps, as schematically shown in Fig. 6(a). Figure 6(b) shows the FE of crystal violet molecules from the nanoantenna with different gaps. Such an antenna-in-box can be used to strongly enhance single-molecule fluorescence and to reduce the detection volume to 20 zeptoliters. The antenna-in-box can also be used to enhance the FRET efficiency, as schematically shown in Fig. 6(c). Torres et al.88 have demonstrated that the strong inhomogeneous and localized electric field in the antenna-in-box opens new energy transfer routes; therefore, it overcomes the limitations from the mutual dipole orientation in conventional FRET and ultimately enhances the FRET efficiency. It also enables energy transfer between near-perpendicular orientations of the donor and acceptor dipoles that would otherwise be forbidden in conventional FRET. Figure 6(d) compares the FRET efficiency histograms for three configurations: a confocal setup, a 200-nm-diameter aperture, and an antenna-in-box with a 20-nm gap antenna. 
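For reference, the conventional (unenhanced) FRET quantities discussed here can be sketched as follows; the Förster radius used below is an assumed, typical value for a Cy3–Cy5 pair, not a number from the review.

```python
import math

def fret_efficiency(r_nm, r0_nm):
    """Conventional FRET efficiency for a donor-acceptor pair separated
    by r, with Forster radius R0: E = 1 / (1 + (r/R0)^6)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

def kappa_squared(theta_T, theta_D, theta_A):
    """Dipole orientation factor
    kappa^2 = (cos(theta_T) - 3*cos(theta_D)*cos(theta_A))^2,
    where theta_T is the angle between the two dipoles and theta_D,
    theta_A are their angles to the separation vector."""
    return (math.cos(theta_T) - 3 * math.cos(theta_D) * math.cos(theta_A)) ** 2

# Efficiency falls off steeply around R0 (assumed R0 = 6 nm):
for r in (3.0, 6.0, 9.0):
    print(f"r = {r} nm -> E = {fret_efficiency(r, 6.0):.2f}")

# Collinear, aligned dipoles give the maximum kappa^2 = 4;
# perpendicular dipoles in parallel planes give kappa^2 = 0.
print(kappa_squared(0.0, 0.0, 0.0))
print(kappa_squared(math.pi / 2, math.pi / 2, math.pi / 2))
```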
FRET efficiency at different configurations of the donor and acceptor, which is characterized by the orientation parameter κ², is also investigated. κ² has a maximum value of four when both dipoles are aligned, whereas it is zero for dipoles in parallel planes with perpendicular orientation. The FRET efficiency increased for the antenna-in-box with the smallest orientation parameter for the Cy3-12-Cy5 sample. We have witnessed the rapid development of this promising field in recent years. Engineering of the excitation light and the solution of molecules allows for the enhancement of single-molecule detection at both ultralow and physiological concentrations. In addition, the radiation of the molecules can be controlled by the plasmonic nanostructures. Based on these recent developments, we provide insight into some new possibilities for future developments in this promising field.

Broadband and Near-Infrared Operation

Most plasmonic nanostructures are designed with a narrow resonance matched to either the excitation or the emission spectrum of a molecule, but not both. It is highly desirable to optimize the SPR of an optical nanoantenna for both the excitation and the emission spectrum of a molecule to maximize the enhancement of its fluorescence. Recently, Liu and Lei89 proposed a double-resonant AuNR that can be used to match both the excitation and emission wavelengths of an emitter. In addition, optical detection of single molecules in the near-infrared window would be beneficial for many bioapplications because of the deep light penetration and reduced background absorption in this spectrum.90,91 There is increasing interest and demand in the simultaneous detection of multiple molecular targets for many biomedical applications. Optical detection based on fluorescence and Raman signals can both be used for this purpose. 
While the fluorescence method requires multiple labeling of the target molecules for multiplex detection,92,93 Raman has intrinsic advantages in terms of multiplicity and sensitivity. For example, multiplex detection of viral antigens,94 bacterial meningitis,95 and cancer markers96 using SERS has been demonstrated. In addition, combining fluorescence, Raman, and plasmonic nanostructures on the same platform will allow for multiple and parallel detection of molecular targets as well as reduced detection times.97,98 The engineering of the optical field with far-field optics typically involves many bulk optical elements, which occupy a large physical space. In contrast, the engineering of the near-field light with plasmonic nanostructures can reduce the system size significantly and possibly allow the whole system to be integrated on a single chip. However, bulk optics together with free-space lasers are still used to illuminate the plasmonic nanostructures in most single-molecule detection systems, which limits further miniaturization. Light sources with small form factors are urgently needed for these systems. Integrating silicon photonic devices with plasmonic nanostructures is a possible way to overcome this limitation. Silicon photonic devices have undergone rapid development in recent years.99–102 Their fabrication shares commercial complementary metal-oxide-semiconductor (CMOS) chip fabrication facilities and therefore allows for low-cost mass production. In addition, sample-solution handling with microfluidic technology and imaging with smartphones103 will further facilitate the miniaturization of single-molecule detection systems. The plasmonic nanostructures for single-molecule detection are typically made of metallic materials. Upon light illumination, heat is generated around these nanostructures due to the intrinsic ohmic loss in the materials.
The heat generation disturbs the fluid flow in the system and may cause unpredictable effects in single-molecule detection, which is itself an interesting topic to study. The heat generation must therefore be minimized in these systems. Caldarola et al.104 have proposed a nonplasmonic platform based on dielectric nanostructures to reduce the heat generation, which may be beneficial for new developments in this field. The natural evaporation of the sample solution has been applied to break the diffusion limit in single-molecule detection.56,62 However, this process still takes several minutes. New technologies are required to further reduce the detection time. This might be achieved either by reducing the droplet size with microfluidic devices or by speeding up the droplet evaporation process. The authors acknowledge the financial support from the Hanley Sustainability Institute (HSI), the Summer Research Fellowship, and the Graduate Student Summer Fellowship at the University of Dayton. The authors declare no conflict of interest. Chenglong Zhao received his PhD from Peking University, Beijing, China, in 2011. He then carried out postdoctoral research at Pennsylvania State University and the National Institute of Standards and Technology. He is now an assistant professor at the University of Dayton with a joint appointment in the Department of Physics and the Department of Electro-Optics and Photonics. He leads the Nano-Photonic and Nano-Manipulation Lab, which is dedicated to the development of cutting-edge nanotechnologies for applications in additive nanomanufacturing, single-molecule detection, and ultrasensitive biosensing.
Sharks May Have Evolved from Acanthodians
Joshua A. Krisch | Mar 14, 2017
Analysis of an ancient shark fossil provides the strongest evidence to date that modern sharks derive from a class of 400-million-year-old bony fish.

Vision Helped Ancestral Fish Adapt to Life on Land
Diana Kwon | Mar 8, 2017
“Buena vista” hypothesis suggests that changes in the sizes of eyes, rather than a shift from fins to limbs, led fish to transition to land more than 300 million years ago.

Understanding the Roots of Human Musicality
Catherine Offord | Mar 1, 2017
Researchers are using multiple methods to study the origins of humans’ capacity to process and produce music, and there’s no shortage of debate about the results.

Ancient Marine Reptile Birthed Live Young
Bob Grant | Feb 15, 2017
Researchers have described a pregnant marine reptile, and the fossilized remains suggest that the massive animal did not lay eggs, as previously suspected.

Study: Horses Did Not Develop New Traits During Periods of Rapid Speciation
Diana Kwon | Feb 13, 2017
Speciation and development of new traits may not always go hand-in-hand.

Science Teaching Standards up for Revision in Texas
Kerry Grens | Feb 9, 2017
Despite a committee of educators recommending the removal of language challenging evolution in science curricula, state education board members vote to reintroduce controversial concepts.

How Plants Evolved to Eat Meat
Diana Kwon | Feb 7, 2017
Pitcher plants across different continents acquired their tastes for meat in similar ways.

Earliest Deuterostome Fossils Described
Kerry Grens | Jan 31, 2017
These millimeter-size sea creatures lived 540 million years ago.

Another Explanation for Africa’s Enigmatic Fairy Circles
Diana Kwon | Jan 20, 2017
Using simulations, scientists report that a mixture of termites and plant competition may be responsible for the strange patterns of earth surrounded by plants in the Namib desert.

Baboons Can Make Sounds Found in Human Speech
Diana Kwon | Jan 13, 2017
The findings suggest language may have started to evolve millions of years earlier than once thought.
What is OOP?
William | Sunday 25th March 2018, 16:00
webmaster@cfcc0ed1 | Sunday 25th March 2018, 16:56
In Object Oriented Programming, you write programs that manipulate objects (combinations of data and program code) that are created from classes.
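For readers following the tutorial threads, a minimal illustration of the point above (a hypothetical example, given in Python for brevity rather than C++): a class bundles data and the program code that manipulates it, and objects are instances created from the class.

```python
class Account:
    """A class combines data (attributes) with program code (methods)."""

    def __init__(self, owner, balance=0):
        self.owner = owner        # data
        self.balance = balance    # data

    def deposit(self, amount):    # program code that manipulates the data
        self.balance += amount
        return self.balance

# Objects are created from the class; each holds its own data.
acct = Account("William")
acct.deposit(50)
print(acct.balance)  # -> 50
```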
New research reveals secrets of former subglacial lakes in North America
Researchers at the University of Sheffield have provided a unique glimpse into one of the least understood environments on Earth by revealing for the first time former subglacial lakes and their drainage routes beneath the North American ice sheets. By investigating an unusually flat area and an associated channel in Alberta, Canada, both now dry, the researchers discovered the former existence of a lake trapped beneath an ice sheet during the last glaciation. As this relict lake is no longer covered by many kilometres of ice, they were able to reconstruct from the landforms and sediments what the lake would have looked like and how it drained. Their observations, published in the journal Nature Communications today (Monday 13 June 2016), suggest the lake existed as a shallow lens of water which repeatedly drained through channels cut into the bed. The team's results provide constraints for the modelling of similar subglacial lake drainages beneath the Antarctic and Greenland ice sheets. These lakes are a crucial component of the subglacial hydrological system, able to store and rapidly drain large volumes of meltwater, but we do not know enough about the drainage process to fully understand their influence on ice flow. Dr Stephen Livingstone, from the University's Department of Geography and lead author of the paper, said: "We've seen these flat spots connected to relict channels in Canada, and are inferring these as former subglacial lakes and their drainage imprint. "As ice no longer covers these relict lakes, our discovery has allowed us to reconstruct how the subglacial lakes would have looked and how they drained from the landforms and sediments. Our results provide key constraints for the investigation of modern subglacial lakes beneath the Antarctic and Greenland ice sheets." More information: Stephen J.
Livingstone et al., "Discovery of relict subglacial lakes and their geometry and mechanism of drainage," Nature Communications (2016). DOI: 10.1038/NCOMMS11767. Provided by the University of Sheffield.
Water analysis by ICP-AES
Water is a valuable commodity even where it is abundant, and its quality in respect of dissolved or suspended material is of great importance in many areas. The need for care with water destined for human consumption is obvious, in an age where the contamination of water supplies by human activity is all too common. The composition of water used in food production (e.g. in irrigation and pisciculture) is likewise very important. The effect of industrial pollution on the aqueous environment needs regular monitoring, and this in turn implies a knowledge of the composition of the water, and the mode of entry and ultimate fate of any contaminants. In addition, many industrial processes require the use of water of guaranteed quality, and in most countries, industries are subject to legislative control on the quality of water they discharge into national water systems and sewers. This widespread interest in the composition of water requires a commensurate investment in facilities for its analysis, and the ICP is playing an increasing part in this effort.
Keywords: water analysis; lanthanum nitrate; ultrasonic nebulization; sodium diethyldithiocarbamate; instrumental detection limit
A View from Emerging Technology from the arXiv The Curious Link Between Parked Cars and Perched Birds Could an uncanny resemblance between the statistics of parked cars and perched birds help us understand the relationship between mathematics and physics? At first sight, parallel parking may not seem to have much to do with the way that birds perch on electricity wires, but Petr Šeba at the Czech Technical University, in Prague, begs to differ. He has measured the gaps between parked cars and says that the statistical patterns in the data bear an uncanny likeness to those in the distances between perched birds. (Note that distances are by no means random.) That’s kind of interesting, but perhaps not for the reason that Šeba gives. His argument is that the mechanisms that birds and humans use to judge distances are essentially the same. He says that this is because both species evolved from a common ancestor that also perceived space in the same way. Perhaps. A more interesting explanation is that Šeba has stumbled across a deep connection between the statistics of seemingly unrelated phenomena. It has been known for some time that the statistics associated with the gaps between parked cars can be described by a branch of mathematics known as random matrix theory. This theory also describes the distribution of peaks in the way that neutrons scatter off heavy nuclei, the zeros in the Riemann zeta function, and the statistics of the bus system in the city of Cuernavaca, in Mexico (as Šeba knows all too well: he went to Mexico to study the buses). In fact, random matrix theory seems to be remarkable in its effectiveness at describing not just the physical world but the mathematical world too. In 2006, Percy Deift from the Courant Institute of Mathematical Sciences, in New York, even went so far as to say that random matrix theory may play the same role in mathematics as thermodynamics does in the physical world. 
In other words, random matrix theory is a manifestation of some fundamental universal property of mathematics. That is a profound idea that leads to many fascinating questions. For example, if seemingly unrelated phenomena can be linked by the mathematics of random matrix theory, are they also linked in some physical way? And if so, what then links the real world governed by physical laws and the nonphysical world of mathematical patterns? Perhaps the way that birds and humans line up offers us a clue. Ref: arxiv.org/abs/0907.1914: Parking and the visual perception of space
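The random-matrix claim is easy to probe numerically: eigenvalue spacings of a random real-symmetric (GOE) matrix show "level repulsion", a scarcity of very small gaps, which is the statistical signature reported for parked cars and perched birds. A rough sketch, with the matrix size and the simple mean-spacing normalization chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def goe_spacings(n=200, trials=30):
    """Normalized nearest-neighbour eigenvalue spacings of random
    real-symmetric (GOE) matrices."""
    out = []
    for _ in range(trials):
        a = rng.normal(size=(n, n))
        h = (a + a.T) / 2.0                       # symmetrize -> GOE-like
        ev = np.linalg.eigvalsh(h)                # sorted eigenvalues
        ev = ev[n // 4: 3 * n // 4]               # central part, ~flat density
        s = np.diff(ev)
        out.append(s / s.mean())                  # crude unfolding
    return np.concatenate(out)

s = goe_spacings()
# Level repulsion: tiny gaps are rare. For independent (Poisson) points,
# the fraction of spacings below 0.1 would be roughly 0.1; here it is
# an order of magnitude smaller.
print(np.mean(s < 0.1))
```

This is the sense in which gap statistics of cars, birds, nuclear resonances, and Riemann zeros can "look alike": their spacing histograms all follow the Wigner-type repulsion rather than Poisson statistics.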
Ecology Archives - State of the Planet
A new smartphone app offers hope of stemming the spread of diseases like Lyme, and providing some peace of mind in the process.
Scientists have long determined what extinct animals ate by analyzing carbon isotopes locked inside their fossil teeth. But a new study shows that in many cases, they may be plugging the wrong numbers into their equations. The findings may change some views of how mammals, including us, evolved.
Artificial intelligence is helping us manage the impacts of climate change and protect the environment in many ways.
On June 2nd, residents in and around New York City can join scientists in exploring our estuary and assessing the diversity of our local waterways.
Alexandria Ang takes home the prestigious recognition for her research on a climate change-driven algae whose greenish blooms plague the Arabian Sea.
A new study has uncovered when and why the native vegetation that today dominates much of Australia first expanded across the continent.
Two new papers find that the line that divides the moist East and arid West is edging eastward due to climate change—and the implications for farming and other pursuits could be huge.
Alexandria Ang, a former intern at Lamont-Doherty Earth Observatory, will present her scientific discoveries for a chance to win some major prizes.
In mid-April, Agustina Besada will embark on a journey to #unplastify the world.
Stormwater runoff can cause a lot of problems in aquatic ecosystems. Here’s how you can help mitigate those effects.
A new study demonstrates that a correlation also exists between cumulative carbon emissions and future sea level rise over time -- and the news isn't good. Indigenous Peoples have ownership, use and management rights over at least a quarter of the world's land surface according to a new study published this week in the journal Nature Sustainability. A new understanding of the microbes and viruses in the thawing permafrost in Sweden may help scientists better predict the pace of climate change. Preventing reservoir evaporation during droughts with floating balls may not help conserve water overall, due to the water needed to make the balls. University of Melbourne researchers have helped create the first tool to calculate the 'nitrogen footprint' of an organisation. The tool will provide a guide to sustainability and pollution reduction for daily activities such as food consumption, travel and energy use. Policies to entice consumers away from fossil-fuel powered vehicles and normalize low carbon, alternative-fuel alternatives, such as electric vehicles, are vital if the world is to significantly reduce transport sector carbon emissions, according to new research. As the world seeks to curb human-induced climate change, will protecting the carbon of tropical forests also ensure the survival of their species? A study published today in the leading journal Nature Climate Change suggests the answer to this question is far from straightforward. Forests with the greatest carbon content do not necessarily house the most species, meaning carbon-focused conservation can miss large swathes of tropical forest biodiversity. A switch from subsidizing fossil fuel to pricing CO2-emissions would not only help to meet global climate targets but also create additional domestic public revenues. These revenues could finance expenses towards sustainable development, improving health-care, education and infrastructure for energy, transportation or clean water. 
India could cover more than 90 percent of its needs to finance progress towards these sustainability goals. This could also be an attractive option for countries like Nigeria, Burundi and Senegal. A University of Queensland-led international study could lead to more accurate predictions of the rate of global warming from greenhouse gas emissions produced by thawing permafrost in the next 100 years. The study of the microorganisms involved in permafrost carbon degradation links changing microbial communities and biogeochemistry to the rise of greenhouse gas emissions. Thousands of miles of buried fiber optic cable in densely populated coastal regions of the United States may soon be inundated by rising seas, according to a new study by researchers at the University of Wisconsin-Madison and the University of Oregon.
Microsoft® Visual Basic® Scripting Edition | Language Reference

StrComp Function
Returns a value indicating the result of a string comparison.

Syntax
StrComp(string1, string2[, compare])

The StrComp function syntax has these arguments:
string1: Required. Any valid string expression.
string2: Required. Any valid string expression.
compare: Optional. Numeric value indicating the kind of comparison to use when evaluating strings. If omitted, a binary comparison is performed. See Settings for values.

Settings
The compare argument can have the following values:
vbBinaryCompare (0): Perform a binary comparison.
vbTextCompare (1): Perform a textual comparison.
vbDatabaseCompare (2): Perform a comparison based upon information contained in the database where the comparison is to be performed.

Return Values
StrComp returns -1 if string1 is less than string2, 0 if string1 is equal to string2, 1 if string1 is greater than string2, and Null if string1 or string2 is Null.
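The comparison modes and the -1/0/1 return convention can be illustrated with a small Python analog (a hypothetical helper that mirrors StrComp's behavior for the binary and textual modes; the Null and database-compare cases are omitted):

```python
# Python analog of VBScript's StrComp (illustrative, not part of VBScript).
VB_BINARY_COMPARE = 0   # vbBinaryCompare: case-sensitive, by character code
VB_TEXT_COMPARE = 1     # vbTextCompare: case-insensitive textual comparison

def str_comp(s1, s2, compare=VB_BINARY_COMPARE):
    """Return -1, 0, or 1, following StrComp's convention."""
    if compare == VB_TEXT_COMPARE:
        s1, s2 = s1.lower(), s2.lower()
    return (s1 > s2) - (s1 < s2)

print(str_comp("ABC", "abc"))                    # -> -1 (binary: 'A' < 'a')
print(str_comp("ABC", "abc", VB_TEXT_COMPARE))   # -> 0  (textual: equal)
```

The binary mode compares character codes directly (so uppercase letters sort before lowercase), while the textual mode folds case first, which is the practical difference between the two settings.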
Researchers at the University of Waterloo in Canada have directly entangled three photons in the most technologically useful state for the first time, thanks in part to superfast, super-efficient single-photon detectors developed by the National Institute of Standards and Technology (NIST). Entanglement is a special feature of the quantum world in which certain properties of individual particles become linked such that knowledge of the quantum state of any one particle dictates that of the others. Entanglement plays a critical role in quantum information systems. Prior to this work it was impossible to entangle more than two photons without also destroying their fragile quantum states. Entangled photon triplets could be useful in quantum computing and quantum communications—technologies with potentially vast power based on storing and manipulating information in quantum states—as well as achieving elusive goals in physics dating back to Einstein. The team went on to use the entangled triplets to perform a key test of quantum mechanics. The Waterloo/NIST experiment, described in Nature Photonics,* generated three photons with entangled polarization—vertical or horizontal orientation—at a rate of 660 triplets per hour. (The same research group previously entangled the timing and energy of three photons, a state that is more difficult to use in quantum information systems.) "The NIST detectors enabled us to take data almost 100 times faster," says NIST physicist Krister Shalm, who was a postdoctoral researcher at Waterloo. "The detectors enabled us to do things we just couldn't do before. They allowed us to speed everything up so the experiment could be much more stable, which greatly improved the quality of our results." The experiments started with a blue photon that was polarized both vertically and horizontally—such a superposition of two states is another unique feature of the quantum world. 
The photon was sent through a special crystal that converted it to two entangled red daughter photons, each with half the original energy. Researchers engineered the system to ensure that this pair had identical polarization. Then one daughter photon was sent through another crystal to generate two near-infrared granddaughter photons entangled with the second daughter photon. The result was three entangled photons with the same polarization, either horizontal or vertical—which could represent 0 and 1 in a quantum computer or quantum communications system. As an added benefit, the granddaughter photons had a wavelength commonly used in telecommunications, so they can be transmitted through fiber, an advantage for practical applications. Triplets are rare. In this process, called cascaded down-conversion, the first stage works only about 1 in a billion times, and the second is not much better: 1 in a million. To measure experimental polarization results against 27 possible states of a set of three photons, researchers performed forensic reconstructions by taking snapshot measurements of the quantum states of thousands of triplets. The NIST detectors were up to these tasks, able to detect and measure individual photons at telecom wavelengths more than 90 percent of the time. The superconducting nanowire single-photon detectors incorporated key recent improvements made at NIST, chiefly the use of tungsten silicide, which among other benefits greatly boosted efficiency.** To demonstrate the quality and value of the triplets, researchers tested local realism—finding evidence that, as quantum theory predicts, entangled particles do not have specific values before being measured.*** Researchers also measured one of each of a succession of triplets to show they could herald or announce the presence of the remaining entangled pairs. 
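The quoted stage probabilities give a feel for the pump rates such an experiment demands. A back-of-envelope estimate (my own illustrative arithmetic, ignoring losses and detector efficiency, so a lower bound on the required pump rate):

```python
# Rarity of cascaded down-conversion triplets (order-of-magnitude estimate).
P_FIRST = 1e-9    # first-stage pair conversion: "1 in a billion"
P_SECOND = 1e-6   # second-stage conversion: "1 in a million"

p_triplet = P_FIRST * P_SECOND          # per pump photon: 1e-15

triplets_per_hour = 660                  # rate reported in the article
rate_per_s = triplets_per_hour / 3600    # ~0.18 triplets per second

# Pump photons per second needed at this combined probability.
pump_rate = rate_per_s / p_triplet
print(f"p_triplet = {p_triplet:.0e}, pump rate ~ {pump_rate:.1e} photons/s")
```

At a combined probability of 1e-15 per pump photon, reproducing 660 triplets per hour requires on the order of 10^14 pump photons per second, which is why high-efficiency detectors matter so much for keeping the measurement time manageable.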
An on-demand system like this would be useful in quantum repeaters, which could extend the range of quantum communications systems, or in sharing secret data encryption keys. With improvements in conversion efficiency through use of novel materials or other means, it may be possible to add more stages to the down-conversion process to generate four or more entangled photons. The work was supported in part by the Ontario Ministry of Research and Innovation Early Researcher Award, Quantum Works, the Natural Sciences and Engineering Research Council of Canada, Ontario Centres of Excellence, Industry Canada, the Canadian Institute for Advanced Research, Canada Research Chairs and the Canadian Foundation for Innovation. * D.R. Hamel, L.K. Shalm, H.H. Hubel, A.J. Miller, F. Marsili, V.B. Verma, R.P. Mirin, S.W. Nam, K.J. Resch and T. Jennewein. Direct generation of three-photon polarization entanglement. Nature Photonics. Published online Sept. 14. ** For more about the detectors, see the 2011 NIST Tech Beat article, "Key Ingredient: Change in Material Boosts Prospects of Ultrafast Single-photon Detector," and updates "High Efficiency in the Fastest Single-Photon Detector System" (Feb. 2013) and "Closing the Last Bell-test Loophole for Photons" (June, 2013). *** Specifically, the researchers calculated Mermin and Svetlichny inequalities, two tests of local realism. One result was the strongest-ever measured violation of the three-particle Svetlichny inequality, according to the paper. Laura Ost | EurekAlert!
Preparing for Hotter Oceans
Could a small coral breeding experiment in Hawaii transform the world’s oceans, as climate change continues to ravage the underwater world? I meet Dr Ruth Gates on a cloudy February morning in Kaneohe Bay, a sheltered cove on the southeast coast of Oahu. It is a Sunday, when Gates normally teaches karate, but today she is taking me to Moku o Loe (Coconut Island), a small island that is visible from shore but accessible only by speedboat. Gates, who is in her mid-50s, greets me warmly and with the energetic and somewhat brisk manner of a person in perpetual motion. “I’m a doer,” she tells me later, though this seems readily apparent as much from her gusto as from her bio. Gates is the director of the Hawaii Institute of Marine Biology, principal investigator at the Gates Coral Lab, and president of the International Society for Reef Studies, as well as the author of more than 100 scientific papers and a frequent public speaker. But to think of her as a coral expert focused on her work only would be a mistake. Gates is a “big picture” person, who talks about everything from new technology to organisational culture and structures. The boat ride takes us only a couple of minutes and we soon pull up to a small dock surrounded by mangroves. Coconut Island originally belonged to an eccentric billionaire who, in the 1930s, fashioned it into a private retreat. More recently, it has served as a research facility for the Hawaii Institute of Marine Biology. The island’s location is ideal as the reefs beneath the sheltered turquoise waters are easily accessible to Gates and her team of researchers who have, for the past four years, been attempting to breed a strain of climate-change resistant corals.
In 2013, Gates was the recipient of Paul G Allen’s Ocean Challenge prize, a US$4-million endowment to pursue, along with marine biologist Madeleine van Oppen in Australia, the idea that selectively breeding a tougher variety of corals – “human-assisted evolution”, as it is sometimes called – could produce a climate-adapted coral species and help to bring the world’s ailing reefs back to life. The fundamental concern for Gates and her team is understanding why some corals survive bleaching events – when an environmental trigger such as rising ocean temperatures or increased acidity levels causes corals to turn white and stop growing – while others, sometimes just inches away, completely perish. “My whole career has been framed by this question of what makes one coral survive in conditions that kill another,” Gates says. Gates’ career has also corresponded with a distressing period for the world’s reefs. Gates, who is English, began her studies at Newcastle University in England and then moved to Jamaica in the mid-1980s to continue her study of corals, focusing on the relationship between corals and their symbionts – the millions of tiny plants that live on corals. But the 1980s turned out to be devastating for corals in the Caribbean. Due to overfishing, pollution and development, nearly half of the Caribbean’s coral cover disappeared. Gates continued her research at UCLA and then at the University of Hawaii just as climate change was beginning to threaten coral populations. The first major bleaching event came in 1998, when unusually warm waters killed around 15 per cent of corals worldwide. In the 20 years since, this trend has continued with bleaching events recorded in 2002 and again in 2016 as normal cyclical weather conditions are mapped onto increased base temperatures.
“The last El Nino created temperatures on reefs we’ve never seen before,” says Gates referring to the weather pattern that peaked in 2016 and temporarily warmed much of the surface of the planet, causing the hottest year in a historical record dating to 1880. The typical approach to reef conservation has been to protect reefs from human activity, Gates says. “Usually we put a boundary around it and try to limit human behaviour within the boundary and say ‘if we just leave it alone everything will be fine’.” But look at what happened to the Great Barrier Reef and it is clear that this approach is no longer sufficient, she says. Australia’s Great Barrier Reef was the best-managed reef system in the world. The government’s Reef 2050 Plan had placed restrictions on port development, dredging and agricultural runoff. But the disturbance in temperatures in 2016 and again in 2017 was so significant that it washed out the effects of the managed areas. Gates uses the analogy of building a sea wall that withstands the everyday storm, but is easily overwhelmed by the 100-year storm. “We didn’t expect to see this level of destruction to the Great Barrier Reef for another 30 years,” says Terry P. Hughes, director of a government-funded centre for coral reef studies at James Cook University in Australia, who recently published his findings in the journal Nature. “In the north, I saw hundreds of reefs — literally two-thirds of the reefs — were dying, and are now dead.” Scientists estimate that 90 per cent of the world’s reefs will be gone by 2050, a conservative estimate by Gates’ measure. “The reality is that we don’t have much time”, she says. Within the next five to 10 years, she hopes to introduce corals into the wild that have been bred for resilience. Corals only spawn in the summer and only at nighttime, which makes breeding them tricky, but the lab has been successful over the last two years. 
In an odd twist of fate, two bleaching events occurred in Kaneohe Bay in 2014 and 2015, just as the research project was starting. This allowed Gates and her team to map how the corals responded to natural stress. “We were able to scurry out and label corals that either turned white or didn’t,” Gates says. The team has used this “living library” as a base, ultimately working only with the corals that were unaffected. Gates calls these the “best performers”. The best performers are resilient, the lab has discovered, due to three main factors. The first is base genetics – that corals with hardy parents will be even hardier, Gates says. Also, since corals are consortium organisms (they are animals that have an intimate, obligate relationship with tiny plants that live on their tissues and even inside their cells), the partnership – who the corals partner with – influences their health. The third factor is epigenetics, the modification of gene expression. “If a coral has survived stress, and it sees that same stress in the future, it doesn’t respond as strongly,” Gates explains. While this is the first time corals have been bred for specific qualities, it isn’t a new practice – selective breeding is used in everything from farming to our domestic pets. Still, when the project was first announced, the lab received an onslaught of criticism for intervening in wild marine systems. Some accused the lab of ‘playing god’, while others called the project the ‘Monsanto’ of reefs. Another concern was the risk of reducing the diversity of the species, or of ‘super coral’ becoming the next Cane Toad. (Initially introduced to help control Australia’s beetle population, the Cane Toad has become one of the world’s most invasive species.) Before heading to Coconut Island, I make a point of mentioning the coral-breeding project to people I meet. Reactions range from unease to indignation. 
“You just don’t mess with Mother Nature,” my Uber driver says on the road from Honolulu. I point out that it could be seen simply as a human solution to a human problem – we are the ones who pumped all of the CO2 into the atmosphere in the first place – but this reasoning doesn’t appear convincing. Gates says the concerns she fields are largely emotional and not factual; that even if you weigh the risks of, say, genetic narrowing against the risks of doing nothing, “it’s a no-brainer”. And it’s not just the corals themselves that stand to benefit if the project succeeds. Corals provide physical and ecological support for a third of all marine life. This makes them what ecologists term “keystone species”, as their health is vital for the well-being of countless other species, including humans. A quarter of fisheries are intimately linked to coral reefs where fish flourish, breed and feed. Some 500 million people worldwide therefore rely on reefs for food, income, protection or a combination of all three. “This is the thing I think many scientists don’t understand,” says Gates. “Here in Hawaii our connection to the reef is tangential… but on a small Pacific island where 70 per cent of the protein is coming from the reef and the land that you live on is directly protected by the integrity of the reef, that’s a whole different discussion. I feel it behooves us to step back from our ivory towers in all ways really and say we have an obligation to do things that stabilise reefs for places that depend on them intimately.” It is useful to consider what a world without corals might look like. According to Gates, there would be tens of millions of displaced people competing for resources, a place where there is much more aggression and competition. But some scientists have also questioned how the lab could possibly scale such an endeavour. The Great Barrier Reef is just a fraction of the world’s overall reef cover and spans an area almost the size of Germany. 
Corals put out in Hawaii, an isolated archipelago, could take thousands of years to spread. But the lab in Hawaii is only part of a much bigger puzzle, Gates says. And scaling the project will involve developing a capacity that is relevant to different places – Hawaii for Hawaii, Australia for Australia, etc. “It’s not about making one super coral and moving it around the world.” Scaling the project will also involve input from more than just scientists, Gates says. She describes her approach to problem solving as “reverse engineering”, envisioning the end goal, and then working backwards. “It’s a business-level approach, like considering who are the players in the different parts of the pipeline who would be involved in bringing a product to market”. So far this has involved everything from reaching out to oyster farmers, who have been a great resource in refining the lab’s selective breeding practices, to partnerships with satellite imaging companies like Planet Labs, which help to track local changes to reefs. The lab is also partnering with the Carnegie Observatory and will soon be utilising a powerful camera built into an airplane that can show how many plants are alive inside a coral from the air. “Partnering with this amazing technology allows us to ask the question: can you identify high stress resistant corals from the air? That changes how we are able to scale it,” Gates says. Thinking outside the box is not always encouraged in academia, particularly in the sciences, but Gates is adamant that the only way forward is to work across disciplines and share best practices openly. She is a strong supporter of the open science movement, in which scientific results, data, and notes are all made available. The peer review process can take two years or more, she says, and “we don’t have that time”. 
Gates’ ability to clearly communicate her goals to nonscientists has also proven useful in courting wealthy donors who want to understand in real terms the potential impact of the science. Midway through my tour of the island facilities - which include boating and diving facilities, wet tables and tanks with flow-through seawater for holding and culturing corals and collecting larvae and gametes, and a molecular and microscopy lab - Gates shows me a state-of-the-art evolutionary genetics facility. The facility has a custom-designed confocal microscope – the only one in the world – which uses live imaging to help scientists watch corals live in simulated future ocean conditions. “You can warm the stage, acidify the compartment and we can then watch the animal at a microscopic level,” Gates explains. To design the microscope, the lab consulted with hospitals on the latest approaches to brain imaging – corals are similar to the human brain in density and the microscope uses technology modeled on MRI machines. The images produced show wavering tentacles with spots of blue and green marking coral fluorophores. Gates plans to use the images in classrooms so children can experience the beautiful and complicated world of stinging cells, polyps, and the symbiotic relationship between corals and their plant partners. (Using the images Gates has also launched another side project called Coral Interactive, an educational platform aimed at kids who play on their cell phones that features an episodic, interactive format designed to pique their interest in corals.) The million-dollar confocal microscope was donated to the lab by Pam Omidyar, wife of Pierre Omidyar, founder of eBay. “Pam loves science,” says Gates. Philanthropists like Paul Allen and the Omidyars understand that the project would never be funded through traditional channels, she says. And they understand that it’s going to take money for the project to succeed but it will save an enormous amount more. 
It is difficult to attach a monetary value to the goods and services that rely on reefs, as some nations are entirely composed of reefs, but estimates run as high as US$375 billion a year. “We have a scale of problem that is arguably about an ecosystem that is in the billion to trillion dollar range, and yet, we’re treating it with hundreds of thousands to millions,” says Gates. “There’s a huge discrepancy there.” Private funding of assisted evolution raises questions. Will donors expect a return on their investment? Are we headed to a scenario where the world’s wealthy own vulnerable species or entire ecosystems? Gates believes the only way to scale the project is commercially. “There has to be a revenue stream,” she says. “This will make it of value to people. I say let’s think about how we actually take this on board as a valued asset that we have to protect.” With her attention focused on everything from the cellular health of corals to securing necessary funding (and teaching karate), I wonder if Gates ever finds time to sleep. Her answer reflects her view that time is limited. “I think we have to be doing everything now. People say ‘you’re insane’. It is insane but we only have this very short window and I don’t want to look back in 10 years and say I could have done so much more.” This article appeared in the June 2018 issue of The Peak / SCMP
Rotation around a fixed axis or about a fixed axis of revolution or motion with respect to a fixed axis of rotation is a special case of rotational motion. The fixed axis hypothesis excludes the possibility of an axis changing its orientation, and cannot describe such phenomena as wobbling or precession. According to Euler's rotation theorem, simultaneous rotation along a number of stationary axes at the same time is impossible. If two rotations are forced at the same time, a new axis of rotation will appear. This article assumes that the rotation is also stable, such that no torque is required to keep it going. The kinematics and dynamics of rotation around a fixed axis of a rigid body are mathematically much simpler than those for free rotation of a rigid body; they are entirely analogous to those of linear motion along a single fixed direction, which is not true for free rotation of a rigid body. The expressions for the kinetic energy of the object, and for the forces on the parts of the object, are also simpler for rotation around a fixed axis, than for general rotational motion. For these reasons, rotation around a fixed axis is typically taught in introductory physics courses after students have mastered linear motion; the full generality of rotational motion is not usually taught in introductory physics classes. A rigid body is an object of finite extent in which all the distances between the component particles are constant. No truly rigid body exists; external forces can deform any solid. For our purposes, then, a rigid body is a solid which requires large forces to deform it appreciably. A change in the position of a particle in three-dimensional space can be completely specified by three coordinates. A change in the position of a rigid body is more complicated to describe. It can be regarded as a combination of two distinct types of motion: translational motion and rotational motion. 
Purely translational motion occurs when every particle of the body has the same instantaneous velocity as every other particle; then the path traced out by any particle is exactly parallel to the path traced out by every other particle in the body. Under translational motion, the change in the position of a rigid body is specified completely by three coordinates such as x, y, and z giving the displacement of any point, such as the center of mass, fixed to the rigid body. Purely rotational motion occurs if every particle in the body moves in a circle about a single line. This line is called the axis of rotation. Then the radius vectors from the axis to all particles undergo the same angular displacement in the same time. The axis of rotation need not go through the body. In general, any rotation can be specified completely by the three angular displacements with respect to the rectangular-coordinate axes x, y, and z. Any change in the position of the rigid body is thus completely described by three translational and three rotational coordinates. Any displacement of a rigid body may be arrived at by first subjecting the body to a displacement followed by a rotation, or conversely, to a rotation followed by a displacement. We already know that for any collection of particles—whether at rest with respect to one another, as in a rigid body, or in relative motion, like the exploding fragments of a shell—the acceleration of the center of mass is given by Fnet = Macm, where M is the total mass of the system and acm is the acceleration of the center of mass. There remains the matter of describing the rotation of the body about the center of mass and relating it to the external forces acting on the body. The kinematics and dynamics of rotational motion around a single axis resemble the kinematics and dynamics of translational motion; rotational motion around a single axis even has a work-energy theorem analogous to that of particle dynamics. A particle moves in a circle of radius r. 
Having moved an arc length s, its angular position is θ relative to its original position, where θ = s/r. An angular displacement is a change in angular position: Δθ = θ2 − θ1, where Δθ is the angular displacement, θ1 is the initial angular position and θ2 is the final angular position. Angular velocity is the change in angular displacement per unit time. The symbol for angular velocity is ω and the units are typically rad s−1. Angular speed is the magnitude of angular velocity. The instantaneous angular velocity is given by ω = dθ/dt. Using the formula for angular position θ = s/r and letting v = ds/dt, we have also ω = v/r, where v is the translational speed of the particle. Angular velocity and frequency are related by ω = 2πf. A changing angular velocity indicates the presence of an angular acceleration in the rigid body, typically measured in rad s−2. The average angular acceleration over a time interval Δt is given by ᾱ = Δω/Δt. The instantaneous angular acceleration α(t) is given by α(t) = dω/dt. Thus, the angular acceleration is the rate of change of the angular velocity, just as acceleration is the rate of change of velocity. The translational acceleration of a point on the object rotating is given by a = rα, where r is the radius or distance from the axis of rotation. This is also the tangential component of acceleration: it is tangential to the direction of motion of the point. If this component is 0, the motion is uniform circular motion, and the velocity changes in direction only. The radial acceleration (perpendicular to direction of motion) is given by ar = v²/r = ω²r. It is directed towards the center of the rotational motion, and is often called the centripetal acceleration. The angular acceleration is caused by the torque, which can have a positive or negative value in accordance with the convention of positive and negative angular frequency. The ratio of torque to angular acceleration (how difficult it is to start, stop, or otherwise change rotation) is given by the moment of inertia: I = τ/α. 
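These kinematic relations (v = rω, a = rα, ar = ω²r, ω = 2πf) are easy to sanity-check numerically. The article itself contains no code, so the following is an illustrative Python sketch with made-up values:

```python
import math

# Illustrative values for a point at radius r on a rotating object.
r = 0.5        # m, distance from the rotation axis
omega = 4.0    # rad/s, angular velocity
alpha = 2.0    # rad/s^2, angular acceleration

v = r * omega              # translational (tangential) speed, v = r*omega
a_t = r * alpha            # tangential acceleration, a_t = r*alpha
a_r = omega ** 2 * r       # centripetal acceleration, a_r = omega^2*r = v^2/r
f = omega / (2 * math.pi)  # rotation frequency, from omega = 2*pi*f

assert abs(a_r - v ** 2 / r) < 1e-12  # the two centripetal forms agree
print(v, a_t, a_r, f)
```

Changing r while holding ω fixed scales v, a_t and a_r linearly, quadratically in ω only for the centripetal term.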
When the angular acceleration is constant, the five quantities angular displacement θ, initial angular velocity ω1, final angular velocity ω2, angular acceleration α, and time t can be related by four equations of kinematics: ω2 = ω1 + αt; θ = ω1t + 1⁄2αt²; θ = 1⁄2(ω1 + ω2)t; and ω2² = ω1² + 2αθ. The moment of inertia of an object, symbolized by I, is a measure of the object's resistance to changes to its rotation. The moment of inertia is measured in kilogram metre² (kg m²). It depends on the object's mass: increasing the mass of an object increases the moment of inertia. It also depends on the distribution of the mass: distributing the mass further from the centre of rotation increases the moment of inertia by a greater degree. For a single particle of mass m a distance r from the axis of rotation, the moment of inertia is given by I = mr². Torque is the twisting effect of a force F applied to a rotating object which is at position r from its axis of rotation. Mathematically, τ = r × F, where × denotes the cross product. A net torque acting upon an object will produce an angular acceleration of the object according to τ = Iα, just as F = ma in linear dynamics. The work done by a torque acting on an object equals the magnitude of the torque times the angle through which the torque is applied: W = τθ. The power of a torque is equal to the work done by the torque per unit time, hence: P = τω. The angular momentum L is a measure of the difficulty of bringing a rotating object to rest. It is related to angular velocity by L = Iω, just as p = mv in linear dynamics. The equivalent of linear momentum in rotational motion is angular momentum. The greater the angular momentum of the spinning object such as a top, the greater its tendency to continue to spin. The angular momentum of a rotating body is proportional to its mass and to how rapidly it is turning. In addition the angular momentum depends on how the mass is distributed relative to the axis of rotation: the further away the mass is located from the axis of rotation, the greater the angular momentum. 
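The four constant-acceleration equations are mutually consistent, and τ = Iα plays the role of F = ma. A short illustrative Python sketch (all numbers are made up for the example):

```python
# Constant angular acceleration: check the four kinematic equations
# against each other, then apply tau = I*alpha (illustrative values).
omega1 = 1.0   # rad/s, initial angular velocity
alpha = 0.5    # rad/s^2, constant angular acceleration
t = 4.0        # s

omega2 = omega1 + alpha * t                 # omega2 = omega1 + alpha*t
theta = omega1 * t + 0.5 * alpha * t ** 2   # theta = omega1*t + (1/2)*alpha*t^2

# The remaining two equations must agree with the values above.
assert abs(theta - 0.5 * (omega1 + omega2) * t) < 1e-12
assert abs(omega2 ** 2 - (omega1 ** 2 + 2 * alpha * theta)) < 1e-12

# Torque required to sustain this alpha for a point mass m at radius r.
m, r = 2.0, 0.3
I = m * r ** 2    # moment of inertia of a single particle, I = m*r^2
tau = I * alpha   # rotational analogue of F = ma
print(omega2, theta, tau)
```

Any two of the four equations determine the motion; the other two then follow, which is what the assertions verify.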
A flat disk such as a record turntable has less angular momentum than a hollow cylinder of the same mass and velocity of rotation. Like linear momentum, angular momentum is a vector quantity, and its conservation implies that the direction of the spin axis tends to remain unchanged. For this reason the spinning top remains upright whereas a stationary one falls over immediately. The angular momentum equation can be used to relate the moment of the resultant force on a body about an axis (sometimes called torque), and the rate of rotation about that axis. Torque and angular momentum are related according to τ = dL/dt, just as F = dp/dt in linear dynamics. In the absence of an external torque, the angular momentum of a body remains constant. The conservation of angular momentum is notably demonstrated in figure skating: when pulling the arms closer to the body during a spin, the moment of inertia is decreased, and so the angular velocity is increased. The kinetic energy Krot due to the rotation of the body is given by Krot = 1⁄2Iω², just as Ktrans = 1⁄2mv² in linear dynamics. Kinetic energy is the energy of motion. The amount of translational kinetic energy is found from two variables: the mass of the object (m) and the speed of the object (v), as shown in the equation above. Kinetic energy must always be either zero or a positive value. While velocity can have either a positive or negative value, velocity squared will always be positive. The above development is a special case of general rotational motion. In the general case, angular displacement, angular velocity, angular acceleration and torque are considered to be vectors. An angular displacement is considered to be a vector, pointing along the axis, of magnitude equal to that of Δθ. A right-hand rule is used to find which way it points along the axis; if the fingers of the right hand are curled to point in the way that the object has rotated, then the thumb of the right hand points in the direction of the vector. 
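The figure-skating example can be made quantitative: with L = Iω held fixed, reducing the moment of inertia raises the spin rate by the same factor. A hedged Python sketch with illustrative numbers (not taken from the text):

```python
# Conservation of angular momentum for a spinning skater pulling the
# arms in: L = I*omega stays fixed while I drops (illustrative values).
I1, omega1 = 4.0, 2.0   # kg m^2, rad/s with arms extended
I2 = 1.6                # kg m^2 with arms pulled in

L = I1 * omega1         # angular momentum, conserved with no external torque
omega2 = L / I2         # spin rate rises by the factor I1/I2

K1 = 0.5 * I1 * omega1 ** 2   # rotational kinetic energy before
K2 = 0.5 * I2 * omega2 ** 2   # after; the increase is supplied by the
                              # work done pulling the arms inward
assert K2 > K1
print(omega2, K1, K2)
```

Note that while L is conserved, the rotational kinetic energy is not: it grows by the factor I1/I2, paid for by muscular work.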
The angular velocity vector also points along the axis of rotation in the same way as the angular displacements it causes. If a disk spins counterclockwise as seen from above, its angular velocity vector points upwards. Similarly, the angular acceleration vector points along the axis of rotation in the same direction that the angular velocity would point if the angular acceleration were maintained for a long time. The torque vector points along the axis around which the torque tends to cause rotation. To maintain rotation around a fixed axis, the total torque vector has to be along the axis, so that it only changes the magnitude and not the direction of the angular velocity vector. In the case of a hinge, only the component of the torque vector along the axis has an effect on the rotation; other forces and torques are compensated by the structure. The simplest case of rotation around a fixed axis is that of constant angular speed. Then the total torque is zero. For the example of the Earth rotating around its axis, there is very little friction. For a fan, the motor applies a torque to compensate for friction. Similar to the fan, equipment found in the mass-production manufacturing industry demonstrates rotation around a fixed axis effectively. For example, a multi-spindle lathe is used to rotate the material on its axis to effectively increase production of cutting, deformation and turning. The angle of rotation is a linear function of time, which modulo 360° is a periodic function. Internal tensile stress provides the centripetal force that keeps a spinning object together. A rigid body model neglects the accompanying strain. If the body is not rigid this strain will cause it to change shape. This is expressed as the object changing shape due to the "centrifugal force". Celestial bodies rotating about each other often have elliptic orbits. 
The special case of circular orbits is an example of a rotation around a fixed axis: this axis is the line through the center of mass perpendicular to the plane of motion. The centripetal force is provided by gravity, see also two-body problem. This usually also applies for a spinning celestial body, so it need not be solid to keep together, unless the angular speed is too high in relation to its density. (It will, however, tend to become oblate.) For example, a spinning celestial body of water must take at least 3 hours and 18 minutes to rotate, regardless of size, or the water will separate. If the density of the fluid is higher, the time can be less. See orbital period.
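The 3-hour-18-minute figure can be recovered by equating the centripetal acceleration at the equator, ω²R, with the surface self-gravity of a uniform sphere, (4/3)πGρR; the radius R cancels, leaving a minimum period T = √(3π/(Gρ)) that depends only on density. A quick Python check, assuming a uniform density of 1000 kg/m³ for water:

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2, gravitational constant
rho = 1000.0    # kg/m^3, assumed uniform density of water

# Equating self-gravity to centripetal acceleration at the equator,
# (4/3)*pi*G*rho*R = omega^2 * R, the radius R cancels and the
# minimum rotation period is T = 2*pi/omega = sqrt(3*pi/(G*rho)).
T = math.sqrt(3 * math.pi / (G * rho))

hours = int(T // 3600)
minutes = round(T % 3600 / 60)
print(hours, "h", minutes, "min")  # about 3 h 18 min, independent of size
```

Doubling the density shortens the minimum period by a factor of √2, which is why denser bodies can spin faster without flying apart.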
Tachyons and superluminal wave groups
In the approximation that every inertial observer experiences a homogeneous, uniform flow of time and sees a space that is Euclidean, the arena of physics is Minkowskian and one speed is the same in all inertial frames. If a given inertial observer finds an infinitesimal source or particle traveling faster than this fundamental speed near a given event, the source must appear in some inertial frame spread over neighboring positions at a given time as a spacelike structure. If this structure persists over a period of proper time, it can be interpreted as a wave group. If it is conserved, it can be interpreted as a line or tube of force.
Keywords: Arena, Proper Time, Inertial Frame, Uniform Flow, Wave Group
Metallic glasses are an exciting research target, but the difficulties associated with predicting how much energy these materials release when they fracture are slowing down development of metallic glass-based products. Recently, researchers developed a way of simulating, down to the atomic level, how metallic glasses behave as they fracture. This modeling technique could improve computer-aided materials design and help researchers determine the properties of metallic glasses. They report their findings in the Journal of Applied Physics. Almost every golfer knows the feeling. Minutes after a picture-perfect drive down the fairway, a cascade of inexplicable missed putts leads to a disappointing triple bogey. A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. Researchers have shown that it is possible to train artificial neural networks directly on an optical chip. In the framework of the actively developing practice of "precision farming", Lobachevsky University researchers are working to develop and introduce methods for spatially heterogeneous treatment of plants that minimize costs and improve the ecological quality of the crops, due to the less intensive use of chemical compounds. Using a mouse model to simulate binge drinking, researchers at Columbia University showed that heavy alcohol use during adolescence damages neurons in the part of the brain involved in working memory. Researchers of the University of Amsterdam and the Max Planck Institute for Evolutionary Anthropology describe for the first time the scavenging behaviour of mangabey monkeys, guinea fowls, and squirrels on energy-rich nut remnants cracked by chimpanzees and red river hogs. The team used data collected by camera traps in the rain forest of Tai National Park in Ivory Coast. 
The results reveal new unknown interactions between different species and increase our understanding of the complex community of animals foraging around tropical nut trees. Women who eat a high amount of fruits and vegetables each day may have a lower risk of breast cancer, especially of aggressive tumors, than those who eat fewer fruits and vegetables, according to a new study led by researchers from Harvard T.H. Chan School of Public Health. Health control measures alone could be ineffective in the long term fight against the deadly Rift Valley fever which affects both humans and animals, a new study in the journal PNAS reports. Scientists at the University of Birmingham are one step closer to developing an eye drop that could revolutionise treatment for age-related macular degeneration (AMD). In a landmark study published this week in the BMJ, Finnish researchers show that one of the most common surgical procedures in the Western world is probably unnecessary. Keyhole surgeries of the shoulder are useless for patients with 'shoulder impingement', the most common diagnosis in patients with shoulder pain. The Galaxy Europe project has set up an infrastructure offering online tutorials for researchers in the life sciences. Demand for animal protein and increasing wealth fuelled a tripling in the domestic production of livestock in China between 1980 and 2010, and the rise, despite some improvements in efficiencies at the farm level, had significant impacts on environmental sustainability, nationally and globally. The country's scientists are now aiming to redress the balance. For the first time, an open-source computing tool can, simply and intuitively, calculate the CO2 emissions in each phase of a building project, in order to obtain a global picture of its carbon footprint from its conception and to help decide every variable in the construction process. 
Parkinson's disease, formerly also referred to as shaking palsy, is one of the most frequent disorders affecting movement and the nervous system. Medical researchers at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) have come across a possible cause of the disease - in the patients' immune system. A University of Córdoba research group designs a new way to manufacture molds allowing small and medium-sized businesses to improve their creativity. Using an infrared sensor, biophysicists at Ruhr-Universität Bochum (RUB) have succeeded in analysing quickly and easily which active agents affect the structure of proteins and how long that effect lasts. Thus, Prof Dr Klaus Gerwert and Dr Jörn Güldenhaupt performed time-resolved measurements of the changes to the structure of protein scaffolds, which were triggered by the active agents. Their methods might help develop drugs with few side effects in a quick and targeted manner. The formation age of the big mantle wedge beneath eastern Asia and the lithospheric thinning mechanism of the North China craton are two key scientific issues. Based on new findings on the deep carbon cycle, a recent study suggests that the big mantle wedge beneath eastern Asia was formed 125 Ma, and that interaction between the CO2-rich silicate melt produced in the big mantle wedge and the lithospheric mantle resulted in lithospheric thinning of the North China craton. Could robots soon help rescue crews save the survivors of a natural disaster? Such a mission would require that the robots be able to determine, on their own, which tasks to perform and in what order to perform them. Researchers at ULB's IRIDIA laboratory have shown, for the first time, that this ability can emerge from a group of robots. A collaborative research team based in Japan has designed new proteins that can self-assemble into the complex structures underlying biological organisms, laying the groundwork for leading-edge applications in biotechnology. 
The researchers created and developed proteins with a specific function, and their method suggests that certain protein functions can be created on demand. It is expected to contribute to the development of nanobiomaterials, which could be used as a drug delivery system or an artificial vaccine.
Summary: This research note shows microscopic images/videos of three insects observed with white and IR (850nm/940nm) LED light. The images show that viewing insect internal features is possible.
Introduction: This note demonstrates that DIY microscope kits such as https://www.kickstarter.com/projects/publiclab/the-community-microscope-kit can be modified with a Raspberry Pi NoIR camera and IR LEDs to obtain infrared images. One reason to adapt microscopes for IR imaging is that many insect parts are transparent when viewed with infrared light. Insect infrared observations have been done since 2005 using a variety of IR photomicroscopy techniques ( http://www.microscopy-uk.org.uk/mag/indexmag.html?http://www.microscopy-uk.org.uk/mag/artoct05/dwd50ir.html ). Adapting microscopes with a Raspberry Pi NoIR camera/IR LEDs extends the range of these techniques by permitting high definition video, real time image processing and multispectral imaging.
Microscope modifications:
- The general approach is to combine a Raspberry Pi NoIR camera with a microscope objective. One possible method is to use a camera holder such as: https://publiclab.org/notes/partsandcrafts/11-26-2017/building-a-raspberry-pi-microscope
- Lighting modifications: Push button switches were used to toggle between different wavelengths. Two LEDs were used for IR viewing.
- Two 850nm LEDs connected to a push button switch
- Two 940nm LEDs connected to a push button switch
- One white LED connected to a push button switch
- Video processing: Since IR images have a washed-out effect, video was adjusted for high contrast (picamera contrast setting of 80). 
This adjustment makes a noticeable difference in being able to detect internal features. Video was captured and displayed (on YouTube) in h264 format.
- Bug collection: Sticky paper on top of a light (sticky side up) placed outdoors at night. Typically it takes less than 20 minutes to collect bug samples. Paper (with bug) is then cut for mounting on the microscope stage.
Insect Infrared Microscope Examples:
- Bug 1a video (center) shows circulation/pumping action/850nm view https://youtu.be/4D7N47cHwHA
- Bug 1b video (head) shows head internal features/850nm view
- Bug 2 video (center) shows circulation/pumping action/850nm and 940nm views https://youtu.be/keh14V4n7Vg
- Bug 3 video (head) shows pumping action/850nm and 940nm views https://youtu.be/WOMTCmb2dVw
Comments:
- This is so cool!!!!
- Wow, this is amazing! I was wondering what this might look like... I'm excited to try to replicate this one!
- This is interesting, did you try to re-focus for the NIR LEDs? It's very likely that the lens wasn't designed for these wavelengths and re-focusing would let you capture images as sharp with those LEDs as well. Also, IDK about those insects specifically, but using a <400 nm LED could show some fluorescence.
- Great idea @amirberagain. @maggpi if you have a chance to record video of a live insect's face that would be so cool... 🐜 